## Abstract

In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (**f**) and generated another three models (MMf, MDsf, and MDef). Each of these six models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using the GB and GK kernels (four model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models with the random intercepts of the lines and the GK method (MDsf and MDef) were computationally efficient and gave high prediction accuracy in the two maize data sets. For the more complex G×E wheat data sets, the model-method combinations with G×E and the random intercepts of the lines under the GK method (MDsf and MDef) offered important savings in computing time compared with the G×E interaction multi-environment models with unstructured variance-covariances, but at the cost of lower genomic prediction accuracy.

- Genomic-enabled prediction accuracy
- genotype × environment interaction
- main genetic effects
- deviations from main genetic effects
- random intercepts
- Genomic Selection
- shared data resource
- GenPred

Genomic selection (GS) predicts breeding values of complex traits based on dense marker information (Meuwissen *et al.* 2001) and has shown good prediction accuracy achieved by random cross-validation partitions of plant breeding data (de los Campos *et al.* 2009, 2013; Crossa *et al.* 2010, 2011, 2013; Pérez-Rodríguez *et al.* 2012). As molecular markers become cheaper and more abundant, GS-assisted breeding has become commonly used in plant and animal improvement. When performing genomic prediction of breeding values of unobserved individuals, the relationship between individuals in the training and testing sets is computed through the genomic relationship matrix, and the prediction model is referred to as the Genomic Best Linear Unbiased Predictor (GBLUP) (VanRaden 2007, 2008).
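As a minimal numeric sketch of how a genomic relationship matrix is built from markers (Python/NumPy here rather than the R tools used in the study; the 0/1/2 dosage coding and the VanRaden-style **XX**′/*p* scaling are assumptions for illustration):

```python
import numpy as np

def gblup_kernel(M):
    """Linear (GBLUP) kernel G = XX'/p from an n x p marker matrix M (0/1/2)."""
    X = (M - M.mean(axis=0)) / M.std(axis=0)  # center and scale each marker column
    p = X.shape[1]
    return (X @ X.T) / p                      # n x n genomic relationship matrix

# toy example: 4 individuals, 6 polymorphic markers
M = np.array([
    [0, 1, 2, 1, 0, 2],
    [1, 2, 0, 1, 1, 0],
    [2, 0, 1, 0, 2, 1],
    [1, 1, 1, 2, 0, 1],
], dtype=float)
G = gblup_kernel(M)
```

With this standardization the trace of G equals the number of individuals, a quick sanity check on the scaling.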

Multi-environment trials are routinely conducted in plant breeding to estimate and take advantage of genotype × environment interaction (G×E) for selecting stable and high-performing lines across and within environments. Therefore, implementation of GS strategies in plant breeding should be useful for estimating the parameters of the model and predicting G×E, as is commonly done in conventional plant breeding. Modern statistical analyses of multi-environment trials assess G×E by using pedigree information with linear mixed models (Piepho 1997, 1998; Smith *et al.* 2005; Crossa *et al.* 2006; Burgueño *et al.* 2007); however, these models do not incorporate marker information.

A Bayesian GBLUP regression model for assessing genomic-enabled prediction combining G×E introduces the main effects of environments and lines and the interaction effects of markers and environmental co-variables via random variance-covariance structures (Jarquín *et al.* 2014). The Bayesian regression model of López-Cruz *et al.* (2015) is similar to that of Jarquín *et al.* (2014) with one difference: genomic values are partitioned into components that are stable across environments (main genomic effects) and others that are environment-specific (genomic G×E) (Crossa *et al.* 2016). Both models assume positive sample correlations among environments and can be fitted using the BGLR package (de los Campos and Pérez-Rodríguez 2016). The advantage of the model of López-Cruz *et al.* (2015) over that of Jarquín *et al.* (2014) is that it can be implemented using both shrinkage methods and variable selection methods; it is efficient when applied to sets of environments with positive correlations because the genetic covariance between any pair of environments is the variance of the main effect, which makes the covariance between pairs of environments positive (López-Cruz *et al.* 2015).

Cuevas *et al.* (2016) used the Bayesian model of López-Cruz *et al.* (2015) to compare methods that apply GS models with G×E using a linear kernel (GBLUP) (GB) and a non-linear Gaussian kernel (GK) for single-environment and multi-environment breeding data sets. The authors found that the GK models had higher prediction accuracy than the GB models and explained that the GK models captured major and complex marker effects in addition to their interaction effects. Sousa *et al.* (2017) compared the prediction accuracy of the multi-environment single variance G×E deviation model (MDs) of Jarquín *et al.* (2014) with GK (MDs-GK) and the prediction accuracy of the multi-environment environment-specific variance G×E deviation model (MDe) of López-Cruz *et al.* (2015) with the GK method (MDe-GK). They then compared the models including the GK method with the prediction accuracy of their counterpart models using the GB method (MDs-GB and MDe-GB). In addition, they compared the accuracy of the four previous models with the accuracy of the multi-environment main genotypic effect model (MM) of Jarquín *et al.* (2014) using the GB and GK methods (MM-GB and MM-GK). Results showed that for grain yield, the increase in prediction accuracy of the GK over the GB methods was notable, ranging from 9 to 49% in one data set and from 34 to 70% in the other.

In general, the previous linear mixed multi-environment models treat the environments as fixed or random effects and the lines as random effects, incorporating into the model the random slope of the genetic effect of the lines, distributed as a normal random variable with zero mean and a variance-covariance structure constructed from markers or pedigree; the genetic effect (intercept) of the lines can also be considered as having a normal distribution with zero mean and constant variance (Mota *et al.* 2016). The random intercept of the lines is often not included in the model when no exchange of information occurs, assuming the intercepts are independent (Pérez-Rodríguez *et al.* 2015). However, recent studies have incorporated random intercepts (Mota *et al.* 2016; Cuevas *et al.* 2017; Sukumaran *et al.* 2017; Jarquín *et al.* 2017) in order to achieve higher genomic prediction accuracy in cases where lines were observed in some environments but not in others (random cross-validation 2, CV2, of Burgueño *et al.* 2012); this is because the posterior distribution of the intercept generates a variance-covariance structure that allows exchanging information between the lines of the training and testing sets. When newly developed lines have never been observed (untested; random cross-validation CV1 of Burgueño *et al.* 2012), models with or without the random intercept do not improve the prediction accuracy compared with the single-environment model. One limitation of these multi-environment genomic G×E models for achieving relatively high genomic-enabled prediction is that correlations among environments should be positive. Also, none of the applications of the models of Jarquín *et al.* (2014), Sukumaran *et al.* (2017), and Jarquín *et al.* (2017) compared genomic-enabled prediction accuracy with the GB kernel *vs.* the GK kernel.

The previous Bayesian regression models of Jarquín *et al.* (2014) and López-Cruz *et al.* (2015) use the Hadamard product for modeling G×E and show that the exchange of information between environments is achieved by means of the variance-covariance matrix of the main effects. Thus, the variance component of the main effects measures the stability across environments and the variance component of the specific effects measures the deviations from the main effects due to specific combinations of lines in environments (G×E). This approach has the advantage that it can be used when the number of lines in each environment is the same, but also when there is an unbalanced number of lines in environments, as shown by Sousa *et al.* (2017).
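The Hadamard-product construction above can be sketched numerically. In this illustrative Python/NumPy fragment (the incidence matrices `Zg` and `Ze` and their names are assumptions for illustration, not the paper's notation), the G×E covariance of the phenotypic records is the cell-by-cell product of a line-level genomic covariance expanded to records and a same-environment indicator, which works even when lines appear in only some environments:

```python
import numpy as np

def gxe_hadamard_cov(K, line_ids, env_ids):
    """Covariance of the G x E deviation term as a Hadamard product."""
    n = len(line_ids)
    envs = sorted(set(env_ids))
    Zg = np.zeros((n, K.shape[0]))   # maps each record to its line
    Ze = np.zeros((n, len(envs)))    # maps each record to its environment
    for r, (l, e) in enumerate(zip(line_ids, env_ids)):
        Zg[r, l] = 1.0
        Ze[r, envs.index(e)] = 1.0
    # Hadamard product: genetic covariance kept only for same-environment pairs
    return (Zg @ K @ Zg.T) * (Ze @ Ze.T)

# 3 lines, 2 environments; line 2 is missing in environment "B" (unbalanced)
K = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
cov = gxe_hadamard_cov(K, line_ids=[0, 1, 2, 0, 1], env_ids=["A", "A", "A", "B", "B"])
```

Records of the same line in different environments get zero interaction covariance, which is exactly how the deviation term stays environment-specific.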

On the other hand, GBLUP methodology (together with pedigree) can incorporate and model G×E effects by means of the Kronecker product of the variance-covariance matrices of the genetic relationship between environments and the genomic or pedigree relationship between the lines (Burgueño *et al.* 2012; Oakey *et al.* 2016), where the structure of the models allows estimating negative genetic correlations between environments. Based on this, Cuevas *et al.* (2017) recently compared a Bayesian regression model for the genetic effects described by the Kronecker product of unstructured variance-covariance matrices of genetic correlations between environments and genomic kernels under the GB and GK methods. An extension includes an extra genetic residual component with random intercepts. Results of the analyses of five data sets indicated that including the random intercepts is still beneficial for increasing genomic prediction accuracy when lines have been tested in some environments. However, one drawback of the Bayesian regression models of Cuevas *et al.* (2017) is the computing time required for the Markov Chain Monte Carlo (MCMC) method to achieve convergence of the posterior and predictive distributions.

Recently, Granato *et al.* (2017) proposed an R package called Bayesian Genomic G×E (BGGE) to obtain a rapid fit of Bayesian mixed linear models with homogeneous error variances for the models of Jarquín *et al.* (2014) and López-Cruz *et al.* (2015), and also for the models used by Sousa *et al.* (2017) (MM, MDs, and MDe). The approach of Granato *et al.* (2017) exploits the structure of the block diagonal matrices, with additional parameterizations that shorten the iteration time without losing precision.

Based on the above, the main objective of this study was to compute and compare the prediction accuracy of 16 model-method combinations for four different data sets (two maize and two wheat multi-environment trials) with an unbalanced number of lines in environments and different complexity of the G×E interaction. The 16 model-methods comprise the multi-environment main genotypic effect model (MM), the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), each with the GB and GK kernel methods and with and without random intercepts (12 model-methods), plus four Bayesian regression models for the genetic effects described by the Kronecker product of unstructured variance-covariance (MUC) matrices of genetic correlations between environments and genomic kernels under the GB and GK methods, including their extensions with an extra genetic residual component with random intercepts. We discuss the advantages and disadvantages of the different model-methods for sets of environments with different G×E characteristics and different degrees of imbalance among lines.

## Materials and Methods

This study uses four multi-environment plant breeding data sets with different characteristics. Two maize data sets used by Sousa *et al.* (2017) (HEL and USP) had different numbers of maize hybrids in each environment and positive correlations between environments, whereas the two wheat data sets used by Cuevas *et al.* (2017) (WHE1 and WHE5) had environments with negative or zero correlations but with the same number of wheat lines in each location.

We used the same models of Sousa *et al.* (2017) (MM, MDs, and MDe) with linear (GB) and non-linear (GK) kernels (MM-GB, MM-GK, MDs-GB, MDs-GK, MDe-GB, MDe-GK), plus the addition of one random intercept component (**f**) that captures the variation of the genetic residuals (MMf-GB, MMf-GK, MDsf-GB, MDsf-GK, MDef-GB, MDef-GK). These 12 model-methods were fitted with the BGGE package (Granato *et al.* 2017).

In this study, models 2 and 3 of Cuevas *et al.* (2017) are renamed as the Multi-environment Unstructured Covariance model (MUC) and the Multi-environment Unstructured Covariance model with random intercept vector **f** (MUCf), respectively, each fitted with the GB and GK kernel methods. Therefore, four additional model-methods are included: MUC-GB, MUC-GK, MUCf-GB, and MUCf-GK. These models were fitted with the MTM package (de los Campos and Grüneberg 2016) and their prediction accuracy was compared with the other 12 model-method combinations.

In the first step, the phenotypic data were fitted according to the experimental design employed for each experiment, and the Best Linear Unbiased Estimates (BLUE) of the lines or hybrids for each location or environment were computed. In the second step, the various genomic models were fitted to perform random cross-validation and compute the prediction accuracy of the 16 model-method combinations.

### Experimental data

#### Maize data set HEL:

This maize data set comprises 452 maize hybrids evaluated in 2015 at five sites in Brazil: Nova Mutum (NM) and Sorriso (SO) in the state of Mato Grosso; Patos de Minas (PM) and Ipiaçú (IP) in the state of Minas Gerais; and Sertanópolis (SE) in the state of Paraná. The experimental design was a randomized block with two replicates per genotype and environment. Different numbers of hybrids were planted in each environment. The HEL parent lines were genotyped with an Affymetrix Axiom Maize Genotyping Array of 616K SNPs, with standard quality controls removing markers with a Call Rate < 0.95.

#### Maize data set USP:

This data set comprises 740 maize hybrids evaluated at Piracicaba and Anhumas, each with two levels of nitrogen (N) fertilization, Ideal N (IN) and Low N (LN), for a total of four artificial environments (P-IN, P-LN, A-IN, and A-LN). The hybrids were evaluated using an augmented block design including two replicated commercial hybrids as checks. There was an imbalance because not all hybrids were evaluated in all locations. Similar to the maize data set HEL, the USP parent lines were genotyped with an Affymetrix Axiom Maize Genotyping Array of 616K SNPs, with standard quality controls removing markers with a Call Rate < 0.95.

#### Wheat data set WHE1:

A historical set of 599 wheat lines from CIMMYT’s Global Wheat Program was evaluated in four mega-environments (Crossa *et al.* 2010; Cuevas *et al.* 2016) and genotyped using 1447 Diversity Array Technology (DArT) markers generated by Triticarte Pty. Ltd. (Canberra, Australia; http://www.triticarte.com.au). Markers with a minor allele frequency lower than 0.05 were not included.

#### Wheat data set WHE5:

This data set is described by López-Cruz *et al.* (2015) and includes 807 wheat lines evaluated in five environments using an alpha-lattice design with three replicates in each environment at CIMMYT’s wheat breeding station at Cd. Obregon, Mexico. The environments were three irrigation regimes (0i = zero irrigation, 2i = two irrigations, and 5i = five irrigations), two planting systems (B = bed planting and F = flat planting) and two different planting dates (N = normal and L = late).

Genotypic data consisted of genotyping-by-sequencing (GBS) data, and markers with a minor allele frequency (MAF) lower than 0.05 were removed. After editing the missing markers, a total of 14,217 GBS markers were available for analyzing this data set.
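The minor allele frequency (MAF) editing step described for both wheat marker sets can be sketched as follows (an illustrative Python/NumPy fragment, assuming biallelic dosages coded 0/1/2; the actual editing pipeline of the study is not specified at this level):

```python
import numpy as np

def maf_filter(M, threshold=0.05):
    """Drop marker columns whose minor allele frequency is below threshold."""
    freq = M.mean(axis=0) / 2.0          # frequency of the allele counted by the dosage
    maf = np.minimum(freq, 1.0 - freq)   # minor allele frequency per marker
    return M[:, maf >= threshold]

M = np.array([[0, 2],
              [0, 1],
              [0, 2],
              [0, 1]], dtype=float)
filtered = maf_filter(M)   # the first column is monomorphic (MAF = 0) and is removed
```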

#### Availability of the phenotypic and genotypic experimental data:

Sousa *et al.* (2017) describe the two maize data sets and Cuevas *et al.* (2017) give details of the two wheat data sets. The two maize data sets, HEL and USP, can be downloaded from the link http://hdl.handle.net/11529/10887, whereas the two wheat data sets can be found at the link http://hdl.handle.net/11529/10710, from where DATASET1.Wheat_GY.Rdata (Wheat data set WHE1) and DATASET5.Wheat_GY.Rdata (Wheat data set WHE5) were obtained.

### Statistical models

The components of the eight basic models are summarized in Table 1 and their full descriptions are given below and in Appendix 1. They include an overall mean and the fixed effects of the environments (other effects can be incorporated), modeled with an incidence matrix and one vector of fixed effects for each environment. For the first group of six models (MM, MMf, MDs, MDsf, MDe, and MDef), it is assumed that their genetic random components **g** have a normal distribution with mean zero and a variance-covariance structure comprising a known matrix generated from markers (and computed using the GB or GK methods) multiplied by an unknown scale parameter (variance component). Four models in this group (MDs, MDsf, MDe, and MDef) model the G×E with a variance-covariance structure constructed by the Hadamard product of the corresponding matrices, incorporating (or not) the random intercepts (**f**).

A second group of models (MUC and MUCf) considers that their random components have a normal distribution with zero mean and a variance-covariance structure modeled by the Kronecker product of a matrix with unknown covariances among environments and a known matrix (computed using the GB or GK methods), incorporating (or not) the random intercepts (**f**).

#### The multi-environment main genotypic effect model (MM):

Model MM (1) (Appendix 1) is equivalent to the across-environment model of Jarquín *et al.* (2014). When the linear kernel **XX**′/*p* is used in the covariance of the random genetic effects of model MM (de los Campos *et al.* 2013; VanRaden 2007, 2008), the model is the GBLUP across environments (MM-GB), where **X** is the standardized matrix of molecular markers for the individuals, of order *n* × *p*, and *p* is the number of markers.

However, markers can have a more complex function than the linear GBLUP kernel. For example, the Gaussian kernel (GK) function (Cuevas *et al.* 2016) is computed as exp(−*h* *d*²_{ii′}), where *d*_{ii′} is the Euclidean distance between the *i*^{th} and *i*′^{th} individuals given by the markers, and *h* is the bandwidth parameter that controls the rate of decay of the kernel values (de los Campos *et al.* 2009; Pérez-Rodríguez *et al.* 2012; Pérez-Elizalde *et al.* 2015; Cuevas *et al.* 2016). In this work, GK is computed as exp(−*h* *d*²_{ii′}/median(*d*²)), where *h* = 1 and the median of the squared distances is used as a scaling factor (Crossa *et al.* 2010). When this Gaussian kernel is used in the covariance of the random genetic effects of the MM model (1), the model is the Gaussian kernel across environments (MM-GK) (Sousa *et al.* 2017).
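A Gaussian kernel with median scaling of this kind can be sketched as follows (an illustrative Python/NumPy fragment; the bandwidth handling is the simple median-scaled form described in the text, not a re-implementation of the study's exact code):

```python
import numpy as np

def gaussian_kernel(X, h=1.0):
    """GK(i, i') = exp(-h * d^2_{ii'} / q), q = median of off-diagonal d^2."""
    # pairwise squared Euclidean distances between individuals (rows of X)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    q = np.median(sq[np.triu_indices_from(sq, k=1)])  # median of off-diagonal d^2
    return np.exp(-h * sq / q)

X = np.array([[0.0, 1.0, 2.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 1.0]])
GK = gaussian_kernel(X)
```

Scaling by the median makes the kernel insensitive to the absolute scale of the marker matrix, so a single bandwidth *h* = 1 behaves comparably across data sets.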

The genetic variation between lines that is not explained by **g** in (1) (Appendix 1) can be captured by the random vector **f**, which is considered a random intercept for each line; thus, when these random effects are added, model MM becomes model MMf, where the random intercepts **f** have a normal distribution with zero mean and a variance-covariance structure given by the identity matrix of size *n* multiplied by a variance component that indicates the influence of **f**; an incidence matrix connects the genotypes to the phenotypes. As in MM, the kernel matrix of the random effect **g** of model MMf can be fitted with GBLUP (MMf-GB) or with the Gaussian kernel (MMf-GK).

#### The multi-environment single variance genotype × environment interaction deviation model (MDs):

Model (2) (Appendix 1) (MDs) adds to model (1) the random interaction effect of the environments with the genetic information of the lines. When the random component **f** is added to model (2), the MDs model becomes MDsf. Each environment kernel matrix **K** (Appendix 1) of models MDs and MDsf can be fitted with a linear kernel (MDs-GB, MDsf-GB) or a Gaussian kernel (MDs-GK, MDsf-GK).

#### Multi-environment environment-specific variance genotype × environment deviation model (MDe):

The environment-specific variance genotype × environment deviation model (MDe) (López-Cruz *et al.* 2015) differs from MDs in how the interaction component is considered; **g** is the main genetic effect across environments and **g**_E is the specific genetic effect in each environment. When the random component **f** is added to (3) (Appendix 1), the MDe model becomes MDef, where the matrices **K** for **g** and **K**_E for **g**_E of models MDe and MDef can be fitted with a linear kernel (MDe-GB, MDef-GB) or with a Gaussian kernel (MDe-GK, MDef-GK).

#### Multi-environment with unstructured variance-covariance (MUC):

This model considers that there is a genetic correlation between environments, modeled with a matrix of order *m* × *m* (where *m* denotes the number of environments) (Cuevas *et al.* 2017). In the MUC model, the response comprises vectors **y**_j, each with the observations belonging to the *j*^{th} environment and each of the same size; the random vector **g** is the vector of genetic values, and **e** is the vector of random errors, both assumed normally distributed, with the variance-covariance structure of **g** built with the Kronecker product.

The variance-covariance matrix of **g** is the Kronecker product of one unstructured matrix with information between environments, which needs to be estimated, and another known matrix with information between the lines based on markers. In the unstructured matrix, the *j*^{th} diagonal element is the genetic variance within the *j*^{th} environment, and the off-diagonal elements are the genetic covariances between environments *j* and *j*′. For a large number of environments, a factor-analytic model usually performs better than the unstructured model (Burgueño *et al.* 2012; Oakey *et al.* 2016). Furthermore, the error variance-covariance matrix is diagonal, with one error variance per environment.
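The Kronecker construction can be sketched numerically (an illustrative Python/NumPy fragment; the small matrices `U` and `K` and their values are hypothetical, chosen only to show that, unlike the Hadamard-based models, the unstructured matrix can carry a negative between-environment covariance):

```python
import numpy as np

# U: unstructured m x m between-environment covariance (estimated from data);
# it can hold negative covariances between environments.
U = np.array([[1.0, -0.3],
              [-0.3, 0.8]])
# K: known n x n relationship between the lines (GB or GK kernel)
K = np.array([[1.0, 0.5],
              [0.5, 1.0]])
# Covariance of the stacked genetic values g (environment-major order)
cov_g = np.kron(U, K)
```

Block (j, j′) of `cov_g` equals U[j, j′] × K, so the same line kernel is re-weighted by each between-environment covariance.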

#### Multi-environment with unstructured variance-covariance and random intercepts (MUCf):

The MUC model can be extended by adding extra variability to account for genetic variance among individuals across environments, that is, by adding the random vector **f** (Cuevas *et al.* 2017). In this extension of the previous random linear model, the random vector **f** is independent of **g** and normally distributed, with an unstructured variance-covariance matrix that captures genetic variance-covariance effects between the individuals across environments that were not captured by **g**. In this matrix, the *j*^{th} diagonal element is the genetic environmental variance within the *j*^{th} environment, and the off-diagonal elements are the genetic covariances between environments *j* and *j*′. Similar to the previous cases, models MUC and MUCf can be fitted using the GB or GK kernels to generate the four model-methods MUC-GB, MUC-GK, MUCf-GB, and MUCf-GK.

#### Model implementation and random cross-validation for assessing prediction accuracy in the four data sets:

For the two maize data sets, models MM-GB, MM-GK, MDs-GB, MDs-GK, MDe-GB, and MDe-GK were fitted with the new software BGGE (Granato *et al.* 2017). Models MMf-GB, MMf-GK, MDsf-GB, MDsf-GK, MDef-GB, and MDef-GK were also fitted with BGGE, with the same random partitions used by Sousa *et al.* (2017) to make results comparable for random cross-validation 1 (CV1) and random cross-validation 2 (CV2) (Burgueño *et al.* 2012). Models MUC and MUCf of Cuevas *et al.* (2017) were fitted using the software MTM (de los Campos and Grüneberg 2016) with the GB and GK kernel methods and with the same random partitions used for the 12 model-method combinations previously defined for CV1 and CV2. A fivefold random cross-validation was used, assigning 80% of the observations to the training set and 20% to the testing (validation) set. However, most of the results and discussion focus on cross-validation CV2. The two wheat data sets were fitted with the 12 model-method combinations (MM-GB, MM-GK, MDs-GB, MDs-GK, MDe-GB, MDe-GK, MMf-GB, MMf-GK, MDsf-GB, MDsf-GK, MDef-GB, MDef-GK) using the BGGE software of Granato *et al.* (2017).

Two random cross-validations (CV1 and CV2) were generated; CV1 attempts to mimic a situation where a set of lines were never evaluated in a set of environments, whereas CV2 mimics a sparse testing scheme where some lines were evaluated in some environments but not in others. Results based on CV2 are shown in the main text, tables and figures. Results of random cross-validation CV1 are given in Tables S1-S4 of Appendix 2. To implement the proposed 12 model-method combinations, 50 random partitions were performed with 80% of the lines used for training and the remaining 20% of the lines used for testing. The metric for measuring the performance of prediction accuracy was the Pearson correlation calculated between the observed and predicted values of the testing sets.
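The partitioning and accuracy metric described above can be sketched as follows (an illustrative Python/NumPy fragment; the function names and the record-level holdout are assumptions for illustration, and the study's actual CV1/CV2 assignment logic at the line level is more detailed than this):

```python
import numpy as np

def random_partitions(n_records, n_part=50, test_frac=0.2, seed=123):
    """Yield (train, test) boolean masks; each partition holds out ~20% of records."""
    rng = np.random.default_rng(seed)
    for _ in range(n_part):
        test_idx = rng.choice(n_records, size=int(round(test_frac * n_records)),
                              replace=False)
        test = np.zeros(n_records, dtype=bool)
        test[test_idx] = True
        yield ~test, test

def accuracy(y_obs, y_pred):
    """Pearson correlation between observed and predicted testing values."""
    return np.corrcoef(y_obs, y_pred)[0, 1]

parts = list(random_partitions(100, n_part=50))
```

Fixing the random seed lets the same 50 partitions be reused across all model-method combinations, which is what makes their accuracies directly comparable.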

## Results

The results are given in four sections, one for each data set. In each section, we provide the results of the variance component estimates and the prediction accuracy for each of the 12 model-method combinations.

### Maize data set HEL

This maize data set has a total of 452 maize hybrids, with a different number of hybrids in each of the five sites (247, 330, 452, 367, and 330). The sample phenotypic correlations among locations are positive with intermediate-to-low values; location SE has low correlations with all the other locations, whereas locations NM, IP, and PM show relatively high correlations with the other locations (Table A1, Appendix 3).

Models without the random component **f** always show a lower residual variance component with the GK method than with the GB method; for example, the residual variance was 0.278 for MDs-GK and 0.591 for MDs-GB (Table 2). However, when the models include **f**, these differences become smaller; for example, the residual variance was 0.277 for MDsf-GK and 0.368 for MDsf-GB, indicating that for method GB, the random component **f** explains the variation of the observations better, whereas for GK, including **f** does not have much influence on the residual. This is also reflected in the small variance component of **f** (0.013) for MDsf-GK as compared with 0.243 for MDsf-GB.

The genetic variance component is always much higher for MM-GK, MDs-GK, and MDe-GK than for the corresponding models with the GB method. For models MMf, MDsf, and MDef, the sum of the genetic variance and the variance of **f** is higher than the genetic variance component of models MM, MDs, and MDe. For example, for model MMf-GB this sum is 0.429, whereas for model MM-GB the genetic variance is 0.356; the MDsf-GB sum is 0.415 *vs.* 0.370 for MDs-GB, and the MDef-GB sum is 0.430 *vs.* 0.370 for MDe-GB. The variance explained by the G×E term of MDs is higher for GK than for GB and slightly higher for models with the random component **f** than for models without it. The environment-specific variance components show increases in MDe-GK compared to MDe-GB, and in MDef-GK compared to MDe-GK (Table 2).

Models including the random component **f** with GK did not improve the prediction accuracy of the locations as compared with models without **f** with GK (Table 3 and Figure 1); however, with GB, models including **f** had consistently higher prediction accuracies than models without **f**. In all cases, MM showed lower prediction accuracy than the models with G×E (MDs and MDe). Similarly, model MMf had lower prediction accuracies than the models that incorporate G×E (MDsf and MDef). These differences are smaller for locations that had higher sample phenotypic correlations with other locations than for locations with low phenotypic correlations. For example, location NM had prediction accuracies of 0.569, 0.589, and 0.588 for models MM-GB, MDs-GB, and MDe-GB, respectively, whereas location SE, with low sample phenotypic correlations with the other locations, had prediction accuracies of 0.372, 0.544, and 0.548 for models MM-GB, MDs-GB, and MDe-GB, respectively.

All models with kernel GK had higher prediction accuracies (with and without the random component **f**) than models with kernel GB (Table 3 and Figure 1). However, these differences are smaller for models that include the random component **f** (Table 3). For example, for location SO, the prediction accuracies for models MDsf-GK and MDsf-GB were 0.673 and 0.639, respectively, whereas for MDs-GK and MDs-GB, the mean prediction accuracies were 0.666 and 0.466, respectively. Comparing models with kernel GB, with and without **f**, the predictions are always higher when the model includes **f**; for example, for location IP, the mean prediction accuracies were 0.778 and 0.683 for MDsf-GB and MDs-GB, respectively (Table 3). Note that the variance component of the random effect **f** was 0.243 for model MDsf-GB (Table 2). Furthermore, model 3 of Cuevas *et al.* (2017) with the unstructured variance-covariance and the random intercept **f** (MUCf), and model 2 without **f** (MUC), did not show any clear superiority, in terms of mean prediction accuracy, over models MDs, MDe, MDsf, and MDef with GK and GB (Table 3 and Figure 1).

Random cross-validation CV1 decreased the prediction accuracy as compared with the results achieved for CV2 (Table S1, Appendix 2); the trends and patterns of the prediction accuracy of the locations between models and methods are similar to those found for CV2, including those found for models MUC and MUCf.

In summary, results from maize data set HEL indicated that models with the random component **f** and GK including G×E (MDsf-GK and MDef-GK) show similar mean prediction accuracy as models excluding the random component **f**. However, this did not occur with the GB models, where including the random component **f** increased the prediction accuracy for all five locations. Prediction accuracy using GK was always higher than using GB, with or without the random component **f**. Also, the differences between the models with and without **f**, and between GK and GB, were smaller for locations that had higher sample phenotypic correlations with other locations. Finally, the differences in prediction accuracy were negligible between the proposed models including G×E (with GK and GB, with and without the random effect **f**) and models MUCf and MUC for all locations.

### Maize data set USP

This maize data set comprises 739 maize hybrids with different numbers of lines in each of the four sites (731, 732, 731, and 737). Locations P-IN and A-IN had relatively high correlations with the other locations, whereas A-LN had low ones (Table A1, Appendix 3). The residual variance components for GK are smaller than those for GB for models MM, MDs, and MDe; for instance, MM-GK had a residual variance of 0.589, while MM-GB had 0.854. Similarly, the residual variance components for MDs and MDe with GK are lower than for MDs and MDe with GB. The variance components of the random intercept (**f**) for the GK methods are not negligible (unlike in data set HEL) and are always lower than for the corresponding GB methods (Table 4).

The estimated genetic variance components for GB in models MM, MDs, and MDe were 0.214, 0.209, and 0.206, respectively (Table 4); adding the random intercept increased the genetic environmental stability of models MMf-GB, MDsf-GB, and MDef-GB to 0.511, 0.513, and 0.514, respectively. The specific variance components for each environment of models MDe-GB and MDef-GB were negligible. The G×E variance component of models MDs and MDsf for GB and GK was also negligible.

In general, models with **f** under GB had similar prediction accuracy as models with **f** under GK, whereas without **f**, the increase in prediction accuracy of GK over GB is clear. For example, for P-LN, models MDsf-GK and MDsf-GB had prediction accuracies of 0.545 and 0.546, respectively, whereas for MDs-GK and MDs-GB, the prediction accuracies were 0.524 and 0.325 (Table 5 and Figure 2). Models with **f** under GB showed a marked improvement in prediction accuracy compared to GB models without **f**; for example, for location P-IN, the mean prediction accuracies of MDsf-GB and MDs-GB were 0.591 and 0.368, respectively (due to the influence of the variance of **f**, 0.349, for model MDsf-GB; see Table 4). All GK models with the random intercept **f** and high values of its variance component gave higher prediction accuracies than GK models without **f**. There are no clear differences between model MUCf and the proposed models with the random component **f** under GK and GB in any of the locations. Similar results were found for model MUC when compared to the models without **f**. For this data set, results from CV1 (Table S2, Appendix 2) were all similar to each other and lower than those obtained for CV2.

In summary, results from maize data set USP indicate that models with the random component *l* (MDs-GK and MDe-GK with *l*) show higher mean prediction accuracy than models without *l* that use the linear kernel GB. The G×E variance component of models MDs with and without *l*, for both GK and GB, was negligible, indicating less complex G×E than that found for maize data set HEL. The differences in mean prediction accuracy between the proposed models with or without the random effect *l* and models MUC-f and MUC are small for models with GK, and models MUC-f and MUC are not clearly superior to the proposed models with GB.

### Wheat data set WHE1

For this data set, environment E1 had negative correlations with the other environments (E2-E4), whereas environments E2-E4 had high correlations among themselves (Table A1, Appendix 3). Models with GK fitted the WHE1 data better than models with kernel GB (lower residual variances for GK models than for GB models). Also, models with the random component *l* had lower residual variance components than models without *l*. As opposed to the previous two maize data sets, where the magnitude of the variance components determines the prediction ability, the presence of environments with negative correlations with other environments makes interpreting the variance components in relation to their predictive ability less straightforward (Table 6). For example, models MM with and without *l*, for both GK and GB, had estimates of the random error variance that were much higher (∼0.8) than those of the other models; thus, the prediction accuracy of these models is expected to be low, at least for the environments with negative correlations.

The genetic variance component varied for models MM-GB, MDs-GB, and MDe-GB (0.192, 0.219, and 0.414, respectively), as well as for the GK models (0.599, 0.752, and 1.404, respectively). The contribution of the random intercept *l*, measured by its variance component, was small for MDs-GK and MDs-GB (0.101 and 0.107) (Table 6) and negligible for the other models with *l*. On the other hand, the G×E interaction variance components for GK and GB are important (ranging from 0.349 and 0.42 for MDs-GB to 1.349 and 1.637 for MDs-GK, with and without *l*) and much higher than in the two maize data sets. Models MDe-GK and MDe-GB showed high specific variance components for E1 (3.356 and 1.058, respectively) and for E4 (1.147 and 0.3); these environments cause most of the interaction in this data set (they are the environments with the lowest sample correlations with the other environments) and contributed the least to genetic environmental stability.

Models with G×E (MDs and MDe) had higher mean prediction accuracies than the MM models, with lower mean prediction accuracy in E1 and E4 than in E2 and E3 (Table 7 and Figure 3). The exceptions are models MM-GB and MM-GK, which had higher prediction accuracy than models MDs-GB with and without *l* in E3. Models MDe-GK with and without *l* had higher prediction accuracy than the other MM, MDs and MDe model-method combinations, with and without *l*, in all environments, except MDe-GK with *l* in E1. However, in all cases and environments, models MUC-f and MUC had better prediction accuracies than all 12 genomic model-method combinations (Figure 3). Lower prediction accuracies were found for CV1 (Table S3, Appendix 2) than for CV2; however, the decrease in prediction accuracy under CV1 was smaller than for the two maize data sets.

In summary, G×E for this data set is more complex than for the two previous maize data sets. This is expressed by the higher values of the G×E interaction variance component (given by models MDs with and without *l*) compared to those computed for the maize data sets, as well as by the higher values of the variance components specific to environments E1 and E4 compared to those computed for the other environments in this data set, as well as in the maize data sets. Among the 12 model-method combinations, the models with the highest prediction accuracy across environments were MDe with and without *l*. However, models MUC-f and MUC had the highest prediction accuracy for each environment and for both methods, GK and GB.

### Wheat data set WHE5

This data set has sample phenotypic correlations between environments that are close to zero or negative (Table A1, Appendix 3). Only one high phenotypic correlation was observed, between environments 5iBN and 5iFN (0.546). Table 8 shows the high residual variance components of models MM-GK and MM-GB with and without *l*, whereas for the models incorporating G×E (MDs and MDe with GK and GB, with and without *l*), the residual variance components were much smaller.

The variance components of the genetic main effects with GB were low (0.064 and 0.061 for MDs-GB and MDe-GB, respectively), indicating little exchange of information between environments. The most influential variance components were those related to the G×E interaction. For example, for model MDs-GB the interaction variance component is 0.618, and 0.636 for MDs-GB without *l*, whereas it increases to 1.482 for MDs-GK and to 1.49 for MDs-GK without *l* (Table 8); this result indicates the importance of the G×E interaction. The influence of the random component *l* in this data set is negligible. The variance components related to specific environments are similar for the five environments and for the MDe models with and without the random component *l*.

Among the 12 model-method combinations, the best predictive models were MDe-GK with and without *l* in all environments (Table 9, Figure 4). However, models MDs-GK with and without *l* also had relatively high prediction accuracies, very similar to those of the MDe-GK models. Similar results were found for models with the linear kernel GB (Table 9). Models with the random intercept *l* showed no increase in prediction accuracy (values of the variance of *l* close to zero) as compared to models without *l*.

The comparison of the prediction accuracy of these 12 model-method combinations with the mean prediction accuracy of models MUC-f and MUC (Figure 4) indicated the higher mean prediction accuracy of MUC-f and MUC over that of the proposed models with (or without) the random effect *l*. For this data set, the prediction accuracies of CV1 were similar to those found under CV2 (Table S4, Appendix 2).

In summary, the complex G×E interaction in this data set is expressed by the large G×E interaction variance component. Models with the random component *l* did not increase the prediction accuracy of the corresponding models without *l* (reflected in values of the variance of *l* close to zero). Of the 12 model-method combinations, models MDe-GK with and without *l* gave the highest prediction accuracies. However, the best predictive models overall and for each environment were MUC-f and MUC.

## Discussion

### Effect of the random component *l*

From a statistical perspective, mixed models can better explain the variation among lines in environments (G×E) by considering two factors: environments and lines. The environmental effects are considered fixed, whereas the effects of the lines are considered random in model MM. The random intercept *l* is the common random effect of each line, in addition to the random genetic effect derived from the markers. If we make the transformation of López-Cruz *et al.* (2015), the resulting covariance matrix comprises submatrices (or blocks), where the submatrices off the block diagonal generate the exchange of information between environments with positive correlations. As discussed by López-Cruz *et al.* (2015), this exchange of information is not effective when there are negative correlations between sites (or environments), because the blocks are built from variance components that can take only positive values. Similarly, when *l* is added, the exchange of information occurs in the submatrices off the block diagonal between environments with positive correlations, provided the variance of *l* is not zero. On the other hand, in models MDs and MDe, the component *l* has influence only when there is exchange of information across environments and the G×E is simple; otherwise, as in the WHE5 data set, the contribution of *l* is negligible when the G×E is complex.

The random effects *l* are independent and identically distributed (iid); thus, they offer no possibility of exchanging information from tested lines to untested lines, and no estimates of these effects exist for a line with no evaluation data (CV1). Then, when trying to predict values of untested lines, the only available information between lines comes from the *g* part of the model. In a number of cases, substantial variation for the *l* effects was found, suggesting that the additive part of the model (*g*) is not capturing the total genetic value very well. In these cases, since the GK method usually did as well as the GB with *l* model, there is a major advantage to the GK method in that it can better predict untested genotypes, since the marker information is used in a way that captures more of the genetic variation. On the other hand, if the breeder is concerned about gain from selection following intermating and generating a new population, the breeder should select based only on the additive breeding values, realizing that the breeding values are not the complete genotypic value (commercial value), such that the response to selection after intermating will be less than expected based on the total genetic variance.

### Effects of including G×E interaction

In general, results show that when GBLUP is used for prediction under random cross-validation CV2, models MDs-GB and MDe-GB with the random intercept *l* had higher prediction accuracy than models MDs-GB and MDe-GB without *l*. This improvement depends on the variance of *l* and on the magnitude of the correlations between environments. For the maize data sets (HEL and USP), with positive sample correlations between environments, models MDs-GB and MDe-GB with *l* had higher prediction accuracy than the same models without *l*, whereas in wheat data set WHE1, models MDs and MDe with *l* had better prediction accuracy than models MDs and MDe without *l* only in environments with positive correlations. Finally, for the environments in wheat data set WHE5, where the variance of *l* is negligible, the accuracy of models MDs-GB and MDe-GB with *l* did not improve much over that of models MDs-GB and MDe-GB without *l*.

### Effects of including the Gaussian kernel

In general, models MDs and MDe with the Gaussian kernel (GK) had higher prediction accuracy than models with GB, although these differences were smaller for models MDs and MDe with the random intercept *l*. When GK models were better than GB models, the variance of *l* was negligible for the GK models, and the prediction accuracy of MDs and MDe with *l* was only slightly superior to that of the models without *l* (as in maize data set HEL). On the contrary, when using GK did not give better prediction accuracy than using GB, as in the case of maize data set USP, the contribution of *l* was important, and the prediction accuracy of MDs and MDe with *l* was superior to that of their counterparts without *l*. These results indicate that models with random intercepts are useful when used with the linear kernel (GB) but not when used with the Gaussian kernel (GK). This is because the GK method without *l* explains most of the genetic variance (additive and epistatic effects) between lines, leaving negligible genetic residuals to be picked up by *l*.
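To make the GB/GK contrast concrete, the two kernels can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the bandwidth scaling by the median squared distance is an assumed (common) choice, not necessarily the exact tuning used for these data sets.

```python
import numpy as np

def gb_kernel(M):
    """Linear (GBLUP) kernel: G = MM'/p, with M a centered and
    standardized lines-by-markers matrix with p columns."""
    p = M.shape[1]
    return M @ M.T / p

def gk_kernel(M, h=1.0):
    """Gaussian kernel: K[i,j] = exp(-h * d_ij^2 / median(d^2)),
    where d_ij is the Euclidean distance between marker profiles."""
    sq = np.sum(M**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (M @ M.T)
    d2 = np.maximum(d2, 0.0)  # guard against tiny negative round-off
    med = np.median(d2[np.triu_indices_from(d2, k=1)])
    return np.exp(-h * d2 / med)
```

Unlike the linear kernel, the Gaussian kernel is a nonlinear function of the marker distances, which is what allows it to pick up non-additive (epistatic-like) similarity between lines.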

### The effect of the sample covariance among environments

The behavior of the covariance between observations of the *i*th line in the *j*th and *j*′th environments explains some of the results obtained in the four data sets. The covariance between these observations is the same under models MM, MDs and MDe; it is determined by the genetic variance component. It would be expected that the estimate of this component would be proportional to the sample covariance of the observations. This only occurred when the sample covariances were positive, because a variance component can take only positive values; when the sample covariances between some environments are negative, this distorts the estimation of the genetic variance component and therefore affects the prediction accuracy for the unobserved phenotypes of the lines in the testing set.

On the other hand, when the covariance between observations of the same line in two environments under models MM, MDs and MDe with *l* is determined by the sum of the genetic variance component and the variance of the random intercept *l*, the higher this sum, the higher the estimated covariance (association) of the lines across environments and, therefore, the higher the prediction accuracy compared with that achieved by models MM, MDs and MDe without the random effect *l*. Again, the presence of negative sample covariances distorts the behavior of the estimated genetic variance components, and this negatively affects the prediction accuracy of these models.

### Models with G×E with the Kronecker product *vs.* models with G×E with the Hadamard product

Less restrictive G×E genomic-enabled prediction models, which allow any covariance value between environments, had better prediction accuracy than models with more restrictive assumptions on the association between lines across environments, which affect the estimation of the genetic variance components. The less restrictive models use variance-covariance matrices represented by the Kronecker product of the variances and covariances of the environmental and genetic values (with the linear or non-linear kernels constructed with the markers) (Burgueño *et al.* 2012; Cuevas *et al.* 2017). When a random intercept is added to these models based on the Kronecker product (Cuevas *et al.* 2017), the genomic-enabled prediction accuracy increased for random cross-validation CV2 and for environments with negative sample covariance. These advantages of the G×E genomic-enabled prediction models that use the Kronecker product to define variance-covariance environmental matrices with negative or zero environmental relationships, over the Hadamard product used by models MDs and MDe, shrink when the sample covariances between environments are all positive. The disadvantages of models with Kronecker products are that defining and measuring environmental stability is not clear, and that they demand more computing resources than G×E genomic-enabled prediction models using the Hadamard product.
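The structural difference between the two products can be seen in a toy example. The sketch below (our own illustration, assuming two environments with the same lines and environment-major ordering of the records) shows that the Hadamard construction yields zero between-environment blocks, whereas the Kronecker construction propagates any environmental covariance, including negative values, into the off-diagonal blocks.

```python
import numpy as np

q, m = 4, 2                                    # toy: 4 lines, 2 environments
rng = np.random.default_rng(1)
M = rng.standard_normal((q, 8))
K = M @ M.T / 8                                # genomic kernel for the q lines

# Hadamard (MDs-type): (Zg K Zg') o (Ze Ze') -- block diagonal, no borrowing
ZKZ = np.kron(np.ones((m, m)), K)              # Zg K Zg' for stacked records
ZEZ = np.kron(np.eye(m), np.ones((q, q)))      # Ze Ze'
had = ZKZ * ZEZ                                # elementwise (Hadamard) product

# Kronecker (MUC-type): unstructured environmental covariance E kron K
E = np.array([[1.0, -0.5], [-0.5, 1.0]])       # negative covariance allowed
kro = np.kron(E, K)
```

The off-diagonal blocks of `had` are identically zero (no borrowing of information between environments), while those of `kro` equal $E_{12}\mathbf{K}$, which is how the Kronecker models accommodate negatively correlated environments.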

### Required computing time for fitting the models

We performed all the analyses on an Ubuntu Linux server with 256 GB of RAM and 32 CPU cores. To compare computing time, we measured the mean computing time in seconds for fitting one random partition under random cross-validation CV1 for the maize data set HEL, with the same 50 partitions and the same number of iterations for each model. For the models with G×E without *l* or *f*, the mean computing time for one random partition was 290, 319, and 3110 sec for models MDs, MDe, and MUC, respectively. For the models with G×E with the random intercept *l* or *f*, the mean computing time for one random partition was 489, 541, and 4938 sec for models MDs, MDe, and MUC-f, respectively. The differences in computing time between models MDs and MDe are small, but for model MUC-f, the mean computing time needed to fit the model increased about 10-fold for one random partition.

### Advantages and disadvantages of the proposed models

In general, G×E genomic-enabled prediction models MDs and MDe had similar prediction accuracy and, in both cases, environmental stability and G×E can be assessed and measured. Furthermore, for models MDs and MDe, when the sample correlation among environments is positive, their prediction accuracy is similar to or slightly higher than the accuracy achieved with the more flexible Kronecker product models (Burgueño *et al.* 2012; Cuevas *et al.* 2017) for the variance-covariance matrices. The advantage of models MDs and MDe with the Hadamard product for the variance-covariance is that they perform high-dimensional matrix operations very fast and, therefore, save time when fitting these models. The BGGE software developed by Granato *et al.* (2017) is indeed an example of this efficiency for fitting models MDs and MDe by means of the Hadamard product.

When the main objective is prediction accuracy, we recommend checking the sample covariances (or correlations) between environments before using the MDs and MDe G×E genomic-enabled prediction models. Models MDs and MDe, with or without *l*, are recommended when the sample correlations are positive and not close to zero. We also recommend fitting models MDs and MDe with *l* to the training set and estimating the variance component of the random intercept *l*; if it is negligible, only models MDs and MDe without *l* should be used. When the number of lines in each environment is not the same, models MDs and MDe, with and without *l*, can be efficiently fitted with the BGGE software, whereas models MUC-f and MUC of Cuevas *et al.* (2017) with an unbalanced number of lines in each environment require intensive computational resources.

## Conclusions

Results indicate that when the sample phenotypic correlations between environments were intermediate to moderate (HEL, USP), models with G×E and the random intercept *l* (MDs and MDe with *l*) and the Gaussian kernel (GK) had the advantages of other models without their disadvantages. These models allow: (i) finding regions of the chromosomes with environmental stability (Jarquín *et al.* 2014; López-Cruz *et al.* 2015); (ii) fast computing time for fitting (Granato *et al.* 2017); and (iii) increasing the prediction accuracy under CV2 to the level of the Gaussian kernels of Cuevas *et al.* (2016) and Sousa *et al.* (2017) or of other more flexible models such as those used by Burgueño *et al.* (2012) and Cuevas *et al.* (2017). For low or negative sample phenotypic correlations, as in data sets WHE1 and WHE5, model MUC-f with GK of Cuevas *et al.* (2017) should be used.

Including the random intercept *l* for each line made it possible to capture some extra genetic variability. Models MDs and MDe assessed the complexity of the genomic G×E present in the two maize data sets (with all environments positively correlated) by means of the Hadamard product between markers and environments, as in the models of Jarquín *et al.* (2014) (MM and MDs) and López-Cruz *et al.* (2015) (MDe). For the two maize data sets with positive sample correlations among environments, the Hadamard models MM, MDs and MDe with *l* had prediction accuracies similar to those of models MUC-f and MUC, which use a Kronecker product for assessing G×E. The advantage of models MM, MDs, and MDe over models MUC-f and MUC is shorter computing time when the number of lines in different environments is very unbalanced, as in the case of the two maize data sets.

For the two wheat data sets, the number of lines in each environment is the same. However, given that the sample correlation among environments is not positive for all pair-wise environment combinations, using models MM, MDs and MDe, with or without *l*, is less favorable than using models MUC-f and MUC with a Kronecker product for modeling G×E. The reduced prediction accuracy of the Hadamard product models *vs.* the Kronecker product models indicates the flexibility of models MUC-f and MUC for assessing complex G×E multi-environment data sets. Regardless of (i) whether *l* is included or not, and (ii) the type of data set at hand (with more or less complex G×E) and the balanced or unbalanced data structure, the prediction accuracy of the Gaussian kernel was better than the prediction accuracy of the linear kernel GBLUP for all four data sets.

## APPENDIX 1

### The multi-environment main genotypic effect model (MM)

The multi-environment model (MM) considers the fixed effects of environments ($\boldsymbol{\beta}_E$), as well as the random genetic effects across environments ($\mathbf{g}$):

$$\mathbf{y} = \mu\mathbf{1} + \mathbf{X}_E\boldsymbol{\beta}_E + \mathbf{Z}_g\mathbf{g} + \boldsymbol{\varepsilon} \qquad (1)$$

where $\mathbf{y} = (\mathbf{y}_1^{\prime},\ldots,\mathbf{y}_m^{\prime})^{\prime}$ stacks the vectors $\mathbf{y}_j$ of observations of the *j*th environment, each of size $n_j$, such that $y_{ij}$ represents the observation of the *i*th line in the *j*th environment. The scalar $\mu$ is a general mean, and $\mathbf{y}$ is of size $n = \sum_j n_j$. The fixed effects of the environments for the data used in this study are modeled with the incidence matrix of the environments $\mathbf{X}_E$ of order $n \times m$, where the parameters to be estimated are the intercepts for each environment, collected in the vector $\boldsymbol{\beta}_E$ of order $m \times 1$. Incorporating other fixed effects into the model is straightforward.

The random vector of genetic effects follows a multivariate normal distribution with mean zero and covariance matrix $\sigma_g^2\mathbf{K}$, that is, $\mathbf{g} \sim N(\mathbf{0}, \sigma_g^2\mathbf{K})$, where the vector $\mathbf{g}$ of order $q \times 1$ represents the genetic random effects across all environments for each line, and the kernel matrix $\mathbf{K}$ is a symmetric positive semidefinite matrix of order $q \times q$ constructed with the molecular markers. If the number of lines is the same in each environment, then $q = n_j$; otherwise, when there are different numbers of lines in each environment, $q$ represents the number of unique lines included in the model in at least one environment. The incidence matrix $\mathbf{Z}_g$ connects genotypes with phenotypes for each environment, with order $n \times q$. The variance component $\sigma_g^2$ is the genetic variance of the lines across all environments and represents the sensitivity or environmental stability. Finally, the random errors are assumed to be homoscedastic and independent, $\boldsymbol{\varepsilon} \sim N(\mathbf{0}, \sigma_\varepsilon^2\mathbf{I})$, where $\sigma_\varepsilon^2$ is the error variance.
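A minimal sketch of the covariance implied by model (1), and of the resulting BLUP of the genetic effects, assuming known variance components and phenotypes already corrected for the fixed effects (function names are ours, not from the paper or from BGGE):

```python
import numpy as np

def mm_covariance(K, Zg, s2_g, s2_e):
    """V = s2_g * Zg K Zg' + s2_e * I: phenotypic covariance under model (1)."""
    n = Zg.shape[0]
    return s2_g * (Zg @ K @ Zg.T) + s2_e * np.eye(n)

def blup_g(y_centered, K, Zg, s2_g, s2_e):
    """BLUP of the main genetic effects: g_hat = s2_g * K Zg' V^{-1} y."""
    V = mm_covariance(K, Zg, s2_g, s2_e)
    return s2_g * (K @ Zg.T @ np.linalg.solve(V, y_centered))
```

With balanced data, stacking one identity block of $\mathbf{Z}_g$ per environment reproduces the borrowing of information across environments through the shared $\mathbf{g}$.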

### The multi-environment single variance genotype × environment interaction deviation model (MDs)

This model adds to the MM model the random interaction effects of the environments with the genetic information of the lines ($\mathbf{gE}$) (Sousa *et al.* 2017; Jarquín *et al.* 2014):

$$\mathbf{y} = \mu\mathbf{1} + \mathbf{X}_E\boldsymbol{\beta}_E + \mathbf{Z}_g\mathbf{g} + \mathbf{gE} + \boldsymbol{\varepsilon} \qquad (2)$$

The vector of random G×E interaction effects, $\mathbf{gE}$, has a multivariate normal distribution, $\mathbf{gE} \sim N\!\left(\mathbf{0}, \sigma_{gE}^2\left[(\mathbf{Z}_g\mathbf{K}\mathbf{Z}_g^{\prime}) \circ (\mathbf{Z}_E\mathbf{Z}_E^{\prime})\right]\right)$, where $\circ$ is the Hadamard product operator and $\sigma_{gE}^2$ is the variance component of the G×E interaction. The matrix $(\mathbf{Z}_g\mathbf{K}\mathbf{Z}_g^{\prime}) \circ (\mathbf{Z}_E\mathbf{Z}_E^{\prime})$ is block diagonal, with one block for each environment; therefore, there is no exchange (borrowing) of information between environments.
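For records that are unbalanced across environments, the Hadamard interaction kernel of model (2) can be assembled directly from the two incidence matrices. A small sketch with hypothetical helper names (`incidence` and `mds_interaction_kernel` are ours, not BGGE functions):

```python
import numpy as np

def incidence(labels, levels):
    """0/1 matrix connecting each observation (row) to its level (column)."""
    Z = np.zeros((len(labels), len(levels)))
    for i, lab in enumerate(labels):
        Z[i, levels.index(lab)] = 1.0
    return Z

def mds_interaction_kernel(K, line_of_obs, env_of_obs, line_ids, env_ids):
    """(Zg K Zg') o (Ze Ze'): the entry is zero whenever two observations
    sit in different environments, so no information is borrowed across
    environments through the interaction term."""
    Zg = incidence(line_of_obs, line_ids)
    Ze = incidence(env_of_obs, env_ids)
    return (Zg @ K @ Zg.T) * (Ze @ Ze.T)
```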

### Multi-environment environment-specific variance genotype × environment deviation model (MDe)

The multi-environment, environment-specific variance genotype × environment deviation model (MDe) (López-Cruz *et al.* 2015) differs from MDs in how the random interaction component is modeled:

$$\mathbf{y} = \mu\mathbf{1} + \mathbf{X}_E\boldsymbol{\beta}_E + \mathbf{Z}_g\mathbf{g} + \mathbf{gE} + \boldsymbol{\varepsilon} \qquad (3)$$

where $\mathbf{g}$ is the main genetic effect across all environments and $\mathbf{gE}$ represents the specific genetic effects in each environment, such that $\mathbf{gE} \sim N(\mathbf{0}, \mathbf{K}_E)$, where $\mathbf{K}_E = \mathrm{blockdiag}\!\left(\sigma_{gE_1}^2\mathbf{K}_1, \ldots, \sigma_{gE_m}^2\mathbf{K}_m\right)$ is a block-diagonal matrix generated with the individuals included in each environment, with a variance component $\sigma_{gE_j}^2$ specific to each environment (Sousa *et al.* 2017).
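The MDe interaction covariance can likewise be sketched as a block-diagonal matrix in which each environment's kernel block is scaled by its own variance component (an illustrative sketch under our own naming; assumes a consistent line ordering within each environment block):

```python
import numpy as np

def mde_interaction_cov(K_blocks, s2_env):
    """blockdiag(s2_1 * K_1, ..., s2_m * K_m): environment-specific variance
    components, each scaling its environment's genomic kernel block."""
    scaled = [s2 * Kj for s2, Kj in zip(s2_env, K_blocks)]
    n = sum(b.shape[0] for b in scaled)
    out = np.zeros((n, n))
    r = 0
    for b in scaled:
        out[r:r + b.shape[0], r:r + b.shape[0]] = b
        r += b.shape[0]
    return out
```

Compared with MDs, the only change is that each diagonal block gets its own variance, which is what lets MDe attribute most of the interaction to specific environments (e.g., E1 and E4 in WHE1).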

## APPENDIX 2

Proposed models with random effects *l* and *f* | ||||||||
---|---|---|---|---|---|---|---|---|

Location^{*} | MM-GK | MM-GB | MDs-GK | MDs-GB | MDe-GK | MDe-GB | MUC-f-GK | MUC-f-GB |

IP | 0.571 (0.1) | 0.439 (0.12) | 0.745 (0.05) | 0.644 (0.1) | 0.749 (0.06) | 0.634 (0.09) | 0.756 (0.06) | 0.659 (0.06) |

NM | 0.503 (0.08) | 0.354 (0.09) | 0.532 (0.08) | 0.385 (0.11) | 0.525 (0.09) | 0.365 (0.11) | 0.537 (0.08) | 0.381 (0.11) |

PM | 0.661 (0.06) | 0.574 (0.07) | 0.753 (0.05) | 0.682 (0.07) | 0.753 (0.04) | 0.685 (0.05) | 0.751 (0.04) | 0.685 (0.05) |

SE | 0.347 (0.09) | 0.202 (0.11) | 0.505 (0.08) | 0.370 (0.1) | 0.513 (0.08) | 0.366 (0.09) | 0.489 (0.09) | 0.349 (0.09) |

SO | 0.442 (0.1) | 0.287 (0.09) | 0.552 (0.08) | 0.402 (0.1) | 0.552 (0.08) | 0.395 (0.12) | 0.551 (0.08) | 0.39 (0.09) |

Proposed models without random effects *l* and *f* | ||||||||

Location^{*} | MM-GK | MM-GB | MDs-GK | MDs-GB | MDe-GK | MDe-GB | MUC-GK | MUC-GB |

IP | 0.575 (0.09) | 0.426 (0.11) | 0.752 (0.06) | 0.607 (0.08) | 0.755 (0.05) | 0.618 (0.09) | 0.758 (0.05) | 0.641 (0.08) |

NM | 0.506 (0.09) | 0.361 (0.07) | 0.54 (0.09) | 0.394 (0.08) | 0.538 (0.09) | 0.394 (0.08) | 0.545 (0.06) | 0.391 (0.1) |

PM | 0.662 (0.06) | 0.533 (0.07) | 0.758 (0.05) | 0.662 (0.07) | 0.754 (0.05) | 0.671 (0.04) | 0.754 (0.04) | 0.669 (0.05) |

SE | 0.346 (0.1) | 0.219 (0.1) | 0.527 (0.06) | 0.321 (0.1) | 0.524 (0.08) | 0.339 (0.09) | 0.505 (0.07) | 0.319 (0.11) |

SO | 0.455 (0.1) | 0.293 (0.11) | 0.576 (0.07) | 0.376 (0.1) | 0.555 (0.09) | 0.383 (0.11) | 0.56 (0.07) | 0.377 (0.1) |

↵* Locations are: IP: Ipiaçú-MG, NM: Nova Mutum-MT, PM: Pato de Minas-MG, SE: Sertanópolis-PR, and SO: Sorriso-MT.

Proposed models with random effects *l* and *f* | ||||||||
---|---|---|---|---|---|---|---|---|

Environment^{*} | MM-GK | MM-GB | MDs-GK | MDs-GB | MDe-GK | MDe-GB | MUC-f-GK | MUC-f-GB |

P-LN | 0.28 (0.06) | 0.272 (0.07) | 0.307 (0.06) | 0.293 (0.07) | 0.303 (0.06) | 0.286 (0.07) | 0.294 (0.07) | 0.286 (0.06) |

P-IN | 0.304 (0.06) | 0.298 (0.08) | 0.335 (0.08) | 0.329 (0.06) | 0.335 (0.07) | 0.332 (0.08) | 0.331 (0.08) | 0.327 (0.06) |

A-LN | 0.287 (0.07) | 0.283 (0.05) | 0.305 (0.08) | 0.31 (0.06) | 0.303 (0.06) | 0.309 (0.06) | 0.321 (0.08) | 0.309 (0.07) |

A-IN | 0.389 (0.07) | 0.386 (0.08) | 0.42 (0.07) | 0.413 (0.07) | 0.425 (0.07) | 0.422 (0.06) | 0.418 (0.05) | 0.417 (0.07) |

Proposed models without random effects *l* and *f* | ||||||||

Environment^{*} | MM-GK | MM-GB | MDs-GK | MDs-GB | MDe-GK | MDe-GB | MUC-GK | MUC-GB |

P-LN | 0.286 (0.07) | 0.278 (0.05) | 0.305 (0.05) | 0.289 (0.07) | 0.313 (0.08) | 0.295 (0.07) | 0.311 (0.06) | 0.30 (0.06) |

P-IN | 0.285 (0.08) | 0.313 (0.06) | 0.324 (0.06) | 0.332 (0.07) | 0.324 (0.07) | 0.33 (0.05) | 0.318 (0.05) | 0.341 (0.06) |

A-LN | 0.262 (0.07) | 0.292 (0.07) | 0.278 (0.06) | 0.313 (0.06) | 0.285 (0.07) | 0.308 (0.06) | 0.300 (0.06) | 0.318 (0.07) |

A-IN | 0.365 (0.06) | 0.391 (0.07) | 0.395 (0.06) | 0.415 (0.07) | 0.403 (0.07) | 0.417 (0.06) | 0.406 (0.05) | 0.424 (0.07) |

↵* Environments are: Anhumas ideal N (A-IN), Anhumas low N (A-LN), Piracicaba ideal N (P-IN) and Piracicaba low N (P-LN)

Proposed models with random effects *l* and *f* | ||||||||
---|---|---|---|---|---|---|---|---|

Environment | MM-GK | MM-GB | MDs-GK | MDs-GB | MDe-GK | MDe-GB | MUC-f-GK | MUC-f-GB |

E1 | 0.048 (0.06) | 0.054 (0.07) | 0.545 (0.05) | 0.512 (0.05) | 0.558 (0.05) | 0.510 (0.05) | 0.560 (0.05) | 0.515 (0.06) |

E2 | 0.397 (0.06) | 0.405 (0.06) | 0.49 (0.06) | 0.476 (0.06) | 0.48 (0.05) | 0.474 (0.05) | 0.472 (0.05) | 0.478 (0.05) |

E3 | 0.368 (0.07) | 0.373 (0.06) | 0.405 (0.05) | 0.366 (0.06) | 0.416 (0.06) | 0.399 (0.05) | 0.413 (0.06) | 0.386 (0.06) |

E4 | 0.341 (0.06) | 0.329 (0.05) | 0.472 (0.04) | 0.439 (0.05) | 0.467 (0.05) | 0.441 (0.06) | 0.464 (0.06) | 0.450 (0.04) |

Proposed models without random effects *l* and *f* | ||||||||

Environment | MM-GK | MM-GB | MDs-GK | MDs-GB | MDe-GK | MDe-GB | MUC-GK | MUC-GB |

E1 | 0.066 (0.06) | 0.049 (0.06) | 0.544 (0.05) | 0.472 (0.06) | 0.539 (0.04) | 0.495 (0.05) | 0.571 (0.04) | 0.513 (0.04) |

E2 | 0.416 (0.06) | 0.414 (0.06) | 0.476 (0.05) | 0.475 (0.06) | 0.472 (0.05) | 0.464 (0.05) | 0.465 (0.05) | 0.454 (0.05) |

E3 | 0.377 (0.05) | 0.384 (0.05) | 0.397 (0.05) | 0.388 (0.06) | 0.423 (0.05) | 0.392 (0.05) | 0.405 (0.05) | 0.381 (0.05) |

E4 | 0.339 (0.05) | 0.339 (0.05) | 0.469 (0.04) | 0.437 (0.04) | 0.46 (0.05) | 0.416 (0.05) | 0.456 (0.05) | 0.418 (0.05) |

Proposed models with random effects *l* and *f* | ||||||||
---|---|---|---|---|---|---|---|---|

Environment^{*} | MM-GK | MM-GB | MDs-GK | MDs-GB | MDe-GK | MDe-GB | MUC-f-GK | MUC-f-GB |

0iFN | 0.348 (0.05) | 0.301 (0.05) | 0.601 (0.04) | 0.553 (0.03) | 0.611 (0.04) | 0.555 (0.04) | 0.614 (0.03) | 0.554 (0.03) |

2iBN | 0.217 (0.05) | 0.201 (0.05) | 0.474 (0.04) | 0.431 (0.04) | 0.47 (0.05) | 0.439 (0.04) | 0.475 (0.04) | 0.448 (0.04) |

5iBH | 0.321 (0.05) | 0.35 (0.05) | 0.67 (0.03) | 0.635 (0.03) | 0.668 (0.03) | 0.634 (0.03) | 0.679 (0.03) | 0.633 (0.03) |

5iBN | 0.163 (0.06) | 0.136 (0.06) | 0.399 (0.04) | 0.353 (0.05) | 0.395 (0.05) | 0.345 (0.06) | 0.401 (0.04) | 0.358 (0.05) |

5iFN | 0.084 (0.06) | 0.082 (0.06) | 0.334 (0.04) | 0.309 (0.04) | 0.328 (0.05) | 0.306 (0.05) | 0.336 (0.05) | 0.315 (0.04) |

Proposed models without random effects *l* and *f* | ||||||||

Environment^{*} | MM-GK | MM-GB | MDs-GK | MDs-GB | MDe-GK | MDe-GB | MUC-GK | MUC-GB |

0iFN | 0.341 (0.05) | 0.288 (0.05) | 0.61 (0.04) | 0.562 (0.03) | 0.612 (0.03) | 0.557 (0.03) | 0.625 (0.04) | 0.558 (0.04) |

2iBN | 0.205 (0.05) | 0.216 (0.05) | 0.478 (0.05) | 0.439 (0.05) | 0.473 (0.05) | 0.436 (0.04) | 0.476 (0.05) | 0.429 (0.06) |

5iBH | 0.323 (0.04) | 0.333 (0.05) | 0.67 (0.02) | 0.624 (0.03) | 0.662 (0.03) | 0.627 (0.03) | 0.680 (0.03) | 0.638 (0.03) |

5iBN | 0.171 (0.05) | 0.163 (0.05) | 0.397 (0.05) | 0.357 (0.04) | 0.405 (0.04) | 0.356 (0.04) | 0.407 (0.04) | 0.354 (0.05) |

5iFN | 0.107 (0.05) | 0.114 (0.06) | 0.33 (0.05) | 0.311 (0.04) | 0.329 (0.05) | 0.307 (0.05) | 0.337 (0.04) | 0.303 (0.04) |

↵* Environments are described by a sequence of codes: 0i, 2i and 5i denote the number of irrigations; B/F denotes whether the planting system was ‘bed’ (B) or ‘flat’ (F); N/H denotes whether the planting date was normal (N) or late (H, simulating heat).

## APPENDIX 3

HEL (452 maize lines) (Sousa et al. 2017) | ||||||||
---|---|---|---|---|---|---|---|---|

Location^{*} | Ipiaçú (IP) (247) | Nova Mutum (NM) (330) | Pato de Minas (PM) (452) | Sertanópolis (SE) (367) | Sorriso (SO) (330) | |||

Nova Mutum (NM) | 0.46 | — | — | — | — | |||

Pato de Minas (PM) | 0.51 | 0.44 | — | — | — | |||

Sertanópolis (SE) | 0.29 | 0.36 | 0.30 | — | — | |||

Sorriso (SO) | 0.43 | 0.48 | 0.39 | 0.38 | — | |||

USP (739 maize lines) (Sousa et al. 2017) | ||||||||

Environment | Piracicaba-LN (P-LN) (731) | Piracicaba-IN (P-IN) (732) | Anhumas-LN (A-LN) (731) | Anhumas-IN (A-IN) (737) | ||||

Piracicaba-IN (P-IN) | 0.54 | — | — | — | ||||

Anhumas-LN (A-LN) | 0.31 | 0.35 | — | — | ||||

Anhumas-IN (A-IN) | 0.43 | 0.47 | 0.47 | — | ||||

WHE1 (599 wheat lines) | ||||||||

Location^{*} | E1 | E2 | E3 | E4 | ||||

E2 | −0.19 | — | — | — | ||||

E3 | −0.19 | 0.661 | — | — | ||||

E4 | −0.12 | 0.411 | 0.388 | — | ||||

WHE5 (807 wheat lines) | ||||||||

Location^{*} | 0iFN | 2iBN | 5iBH | 5iBN | 5iFN | |||

2iBN | 0.166 | — | — | — | — | |||

5iBH | 0.30 | −0.033 | — | — | — | |||

5iBN | −0.10 | 0.122 | −0.091 | — | — | |||

5iFN | −0.01 | 0.035 | 0.023 | 0.546 | — |

↵* Locations in HEL data set are: IP: Ipiaçú-MG, NM: Nova Mutum-MT, PM: Pato de Minas-MG, SE: Sertanópolis-PR, and SO: Sorriso-MT. Locations in USP data set are: IN = ideal Nitrogen; LN = low nitrogen. In WHE5 data set, environments are described by a sequence of codes: 0i, 2i and 5i denote the number of irrigations; B/F denotes whether the planting system was ‘bed’ (B) or ‘flat’ (F); N/H denotes whether planting date was normal (N) or late (H, simulating heat).

## Footnotes

Communicating Editor: J. Holland

- Received November 19, 2017.
- Accepted February 21, 2018.

- Copyright © 2018 Cuevas
*et al.*

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.