A list of common confusions and mistakes when applying numerical methods in community ecology, based on correcting students' final reports in the class Numerical Methods in Community Ecology.
Wrong: you first use DCA to decide whether to use linear or unimodal ordination methods, and if you opt for linear methods, you Hellinger-transform the data and call it transformation-based ordination.
This confuses two alternative approaches to the ordination of community data - read further.
There are three alternative ways to analyse community data: (a) the raw-data-based approach, in which linear or unimodal ordination methods are applied directly to the (possibly log- or sqrt-transformed) species data; (b) the transformation-based approach, in which the data are first transformed (e.g. by the Hellinger transformation) and then analysed by linear methods; and (c) the distance-based approach, in which a distance matrix is calculated first and used in distance-based methods (e.g. PCoA, NMDS, db-RDA).
The Hellinger transformation therefore has two types of use: (a) to calculate a transformation-based ordination, or (b) to standardize species composition data.
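A minimal sketch of both uses (assuming a hypothetical community matrix ''spe'' with samples as rows and species as columns):

library (vegan)
spe.hell <- decostand (spe, method = 'hellinger')  # (b) Hellinger-standardized species composition data
tb.pca <- rda (spe.hell)                           # (a) transformation-based ordination (PCA on Hellinger-transformed data = tb-PCA)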
Wrong: you analyse species composition (e.g. using ordination or cluster analysis), but you talk about analysis of species richness.
Analysing patterns of species richness (or diversity) and of species composition are two different things, although it can sometimes be interesting to analyse both in parallel. If we analyse species richness, we simply ask which community is more and which is less diverse, and which factors best interpret this pattern. In contrast, if we analyse the pattern in species composition, we ask how species composition changes (in space or time) and which factors best explain these changes. Changes in species composition need not be accompanied by changes in species richness (two communities may have the same number of species, i.e. the same richness, yet completely different species composition).
There are many general patterns of species richness, e.g. along latitude (diversity decreases from the Equator toward the poles), along productivity (often reported as a hump-back pattern with the highest diversity at intermediate levels of productivity) or along pH (for certain organisms, e.g. plants or mollusks, pH is an important factor influencing richness: basic habitats are more species-rich than acidic ones, although this holds only in some geographic regions, such as Europe and North America). Often the same factors also influence changes in species composition, but the mechanisms behind them are likely different.
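A minimal sketch contrasting the two analyses (assuming a hypothetical community matrix ''spe'' and an environmental table ''env'' containing a variable pH; both objects and the variable name are placeholders):

library (vegan)
richness <- specnumber (spe)                      # species richness of each sample
summary (lm (richness ~ pH, data = env))          # pattern of species richness along pH
spe.hell <- decostand (spe, method = 'hellinger')
anova (rda (spe.hell ~ pH, data = env))           # pattern in species composition along pH (tb-RDA)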
Wrong: you calculate a constrained ordination (e.g. RDA, CCA) and then use the function ''envfit'' to project the explanatory variables onto the ordination diagram (theoretically this can be done, but it needs to be done in a special way, see below), or even to test the significance of these variables.
Simple suggestion: The function ''envfit'' is designed to calculate the regression of “supplementary” (not “explanatory”) variables on the ordination axes of an unconstrained ordination, and to test the significance of this regression by a permutation test. The function is not suitable for projecting “explanatory” variables onto the ordination diagram of a constrained ordination (although theoretically it can be done, see below), and definitely not for testing the significance of individual variables in this way (see the example below, showing that in that case all variables will usually be highly significant, because the constrained ordination axes already contain their information).
Longer explanation follows below, with examples.
First, to clarify: one of the uses of the function ''envfit'' (package vegan) is to calculate the regression of supplementary variables on the ordination axes of an unconstrained ordination, in order to help with the interpretation of these axes. Say we have community data and three external environmental variables; we calculate an unconstrained ordination from the community data, and then we aim to interpret the ecological meaning of the unconstrained ordination axes with the available environmental variables by calculating their regression on the axes. These variables can then be projected onto the ordination diagram as supplementary (not explanatory!) variables, and the regression of each variable can be tested independently by a permutation test (this is what ''envfit'' returns). But the variables which are significant here are not necessarily those which are important for species composition; they are those which best fit the variation extracted by the unconstrained ordination into the main (usually the first and second) ordination axes. Often these variables are important, but not necessarily.
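A minimal sketch of this intended use (hypothetical objects ''spe'' and ''env'', and placeholder variable names var1-var3; the variables are supplementary and do not enter the ordination itself):

library (vegan)
spe.hell <- decostand (spe, method = 'hellinger')
pca <- rda (spe.hell)                                 # unconstrained ordination (tb-PCA)
ef <- envfit (pca ~ var1 + var2 + var3, data = env)   # regression of supplementary variables on PCA axes
ef                                                    # permutation test of each variable's fit
ordiplot (pca)
plot (ef)                                             # project the variables as supplementary arrows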
Now, if you calculate a constrained ordination, the environmental variables enter the analysis, and the resulting (constrained) axes already contain information about them. If you then use ''envfit'' to calculate the regression of these same variables on the constrained axes, you are likely to get highly significant results: you are regressing two variables (the environmental variable vs the sample scores on the ordination axes) which contain the same information (since the axes were calculated using these environmental variables).
Theoretically, ''envfit'' can be used to project arrows or centroids of explanatory variables onto the diagram of a constrained ordination, but the fitting must be done on the linear combination scores of the ordination axes, not on the sample scores; otherwise the arrows will point in a different direction than they should (see the example below).
Examples:
It can be shown that the arrows for environmental variables projected by ''envfit'' differ from the true direction of these variables in the ordination diagram, unless they are calculated properly (on the linear combinations of scores). In the diagram below (based on the river valley data), the three variables (pH, SOILDPT and ELEVATION) have been used as explanatory variables in tb-RDA. The left diagram shows the true direction of the explanatory variables. The right diagram shows three options of using the ''envfit'' function. Blue arrows show the true direction (''envfit'' calculated with the argument display = 'lc', i.e. on the linear combinations of scores). Red arrows indicate the direction if the variables are added using ''envfit'' fitted to the sample scores (by default only on the first and second ordination axes), and green arrows show the same with ''envfit'' relating them to three instead of two axes.
vltava.spe <- read.delim ('https://raw.githubusercontent.com/zdealveindy/anadat-r/master/data/vltava-spe.txt', row.names = 1)
vltava.env <- read.delim ('https://raw.githubusercontent.com/zdealveindy/anadat-r/master/data/vltava-env.txt')
library (vegan)
spe.hell <- decostand (log1p (vltava.spe), method = 'hellinger')
tbRDA <- rda (spe.hell ~ pH + SOILDPT + ELEVATION, data = vltava.env[,1:18])

par (mfrow = c(1,2))
ordiplot (tbRDA)  # left diagram: true direction of the explanatory variables

ef <- envfit (tbRDA ~ pH + SOILDPT + ELEVATION, data = vltava.env, display = 'lc')    # fitted to linear combination scores
ef12 <- envfit (tbRDA ~ pH + SOILDPT + ELEVATION, data = vltava.env)                  # fitted to sample scores (1st & 2nd axis)
ef123 <- envfit (tbRDA ~ pH + SOILDPT + ELEVATION, data = vltava.env, choices = 1:3)  # fitted to sample scores (1st to 3rd axis)

ordiplot (tbRDA, type = 'n')  # right diagram
plot (ef, col = 'blue')
plot (ef12, col = 'red')
plot (ef123, col = 'green')
legend ('bottomright', lwd = 1, col = c('blue', 'red', 'green'), legend = c('envfit on linear combination of scores', 'envfit on sample scores (1st & 2nd axis)', 'envfit on sample scores (1st, 2nd and 3rd axis)'), bty = 'n')
Now about testing the significance (to show why it should not be done using the function ''envfit''). In the example below, I again used the river valley dataset. I generated one random variable, i.e. I assigned a random value to each site, and used it as an “environmental variable” (obviously without any ecological meaning and without any relationship to the species composition). I calculated a tb-PCA and used ''envfit'' to fit the random variable to its ordination axes (result not significant, as expected); then I calculated a tb-RDA with this random variable as explanatory and tested the significance of this RDA (not significant, also as expected). Then I used ''envfit'' to fit the random variable to the axes of this tb-RDA (the result is highly significant, which could suggest that the random variable is important, although it is not). This shows that using ''envfit'' to fit environmental variables to an RDA which is based on the same environmental variables is highly misleading.
spe.hell <- decostand (log1p (vltava.spe), 'hell')
set.seed (123)
random.var <- runif (97)  # generate single random variable
tbPCA <- rda (spe.hell)
envfit (tbPCA ~ random.var)  # not significant

# ***VECTORS
#
#                 PC1     PC2     r2 Pr(>r)
# random.var  0.41445 0.91007 0.0072  0.716
# Permutation: free
# Number of permutations: 999

tbRDA.rand <- rda (spe.hell ~ random.var)
anova (tbRDA.rand)  # the RDA is not significant (since the explanatory variable is random and has no meaning)

# Permutation test for rda under reduced model
# Permutation: free
# Number of permutations: 999
#
# Model: rda(formula = spe.hell ~ random.var)
#          Df Variance      F Pr(>F)
# Model     1  0.00724 0.9861  0.472
# Residual 95  0.69752

envfit (tbRDA.rand ~ random.var)  # the fit of the random variable to the first two ordination axes is significant, since the first axis was calculated from it

# ***VECTORS
#
#                RDA1      PC1     r2 Pr(>r)
# random.var  0.92615 -0.37716 0.4525  0.001 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# Permutation: free
# Number of permutations: 999
Problem: if the environmental variables contain missing values and such a matrix is used as explanatory in a constrained ordination, samples containing even a single missing value will be removed from the analysis. If you have many environmental variables and each has some missing values (in the worst case each variable missing for different samples), such data full of holes may mean that the final analysis is based on a rather reduced number of samples.
Solution: the function ''na.omit'' applied to the environmental matrix will remove all samples with one or more missing values. The same samples need to be removed also from the species data. Make sure you know on how many samples your analysis is based! Also, avoid replacing missing values by zero; this can make sense for species data (a missing species has zero abundance), but usually does not make sense for environmental variables (if soil depth was not measured and has a missing value, replacing it by zero would mean that the vegetation grows on bare rock, which is not true; the same holds e.g. for soil chemical concentrations - not measured does not mean not present).
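A minimal sketch of this solution (hypothetical objects ''spe'' and ''env''):

env.complete <- na.omit (env)                      # remove samples with at least one missing value
spe.complete <- spe[rownames (env.complete), ]     # remove the same samples from the species data
nrow (env); nrow (env.complete)                    # check on how many samples the analysis will be based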
Mistake: you calculate a constrained ordination (RDA, CCA, tb-RDA) with all explanatory variables, then use forward selection to select only the significant ones, and then you create an ordination diagram based on the analysis with all explanatory variables, but in this diagram you display only those explanatory variables which were significant in the forward selection.
The correct way is to display an ordination diagram based on the constrained ordination which uses only the variables selected by forward selection as explanatory. Indeed, the ordination diagram based on all environmental variables and the one based on only a small subset of selected variables will look different, and a different number of ordination axes will be constrained (since the number of constrained axes equals the number of environmental variables, if they are quantitative). You should, however, still conduct the constrained ordination with all environmental variables before the forward selection (the so-called “global test”); only if it is significant should you proceed to the forward selection itself. There is, however, usually no need to draw the ordination diagram with all explanatory variables (unless there is a clear interpretational need to do so).
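A minimal sketch of this sequence, reusing the river valley data and variables from the ''envfit'' example above (''ordistep'' is just one of the forward-selection functions available in vegan):

rda.all <- rda (spe.hell ~ pH + SOILDPT + ELEVATION, data = vltava.env)        # global model with all variables
anova (rda.all)                                                                # global test; continue only if significant
rda.0 <- rda (spe.hell ~ 1, data = vltava.env)                                 # null model without explanatory variables
rda.sel <- ordistep (rda.0, scope = formula (rda.all), direction = 'forward')  # forward selection
ordiplot (rda.sel)                                                             # diagram based only on the selected variables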
Problem: the statement in the Methods that “the significance of the constrained ordination was tested by ANOVA”.
Why: the variance explained by a constrained ordination is tested by a permutation test, in which the observed explained variation is compared with the variation explained by randomised explanatory variables. This kind of test is generally called a Monte Carlo permutation test (thus, the Monte Carlo test is not specific to constrained ordination; the same family of tests can be used to test e.g. the correlation of two variables). If the observed explained variation is higher than some proportion (usually 95%) of the variation explained by the randomised variables, it is considered high enough to be significant (at P < 0.05). In fact it is a bit more complicated: the test statistic is not the explained variation (R2) itself, but the so-called pseudo-F value, which combines the explained variation with the numbers of degrees of freedom (reflecting the number of explanatory variables as well as covariables). The test thus resembles ANOVA only in the sense that ANOVA (analysis of variance) is also based on an F-value. In vegan, the function conducting the Monte Carlo permutation test is called ''anova'', for mostly historical reasons; the description of the function (see ?anova.cca) states that it is an “ANOVA-like permutation test for Constrained Correspondence Analysis (cca), Redundancy Analysis (rda) or distance-based Redundancy Analysis (dbRDA)”. The term ANOVA, however, is traditionally understood as an analysis of variance with a single dependent variable, so it is not related to constrained ordination.
Conclusion: you can state that the constrained ordination was tested by a permutation test, a Monte Carlo permutation test, or an ANOVA-like permutation test (the last is perhaps the least common). But do not simply say that the significance of the constrained ordination was tested by ANOVA; this would be misleading.
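For illustration, with the tb-RDA fitted in the example above, the test is called via ''anova'', but what actually runs is a Monte Carlo permutation test (see ?anova.cca):

anova (tbRDA, permutations = 999)                  # permutation test of the whole model
anova (tbRDA, by = 'terms', permutations = 999)    # sequential permutation tests of individual terms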
Ward's algorithm is based on measuring the distance of a sample to the centroid of a group of samples in orthogonal Euclidean space. This means that distances which are not Euclidean should not be used directly with this algorithm unless modified (“being Euclidean” here means that the distance can be represented in orthogonal Euclidean space, which also implies that it obeys the triangle inequality; e.g. the Manhattan distance is not Euclidean, but the square root of the Manhattan distance is). Strictly speaking, Ward's clustering algorithm should be used only with Euclidean distances, and since the Bray-Curtis distance is not Euclidean, it should be square-rooted before being used with Ward's algorithm.
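A minimal sketch using the river valley data loaded above (assuming the ''ward.D2'' implementation of Ward's algorithm in ''hclust''):

dis.bray <- vegdist (log1p (vltava.spe), method = 'bray')  # Bray-Curtis distance (not Euclidean)
clust <- hclust (sqrt (dis.bray), method = 'ward.D2')      # square-rooted distance used with Ward's algorithm
plot (clust)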
Problem: you want to know how much variation can maximally be explained by a single explanatory variable in a constrained ordination (RDA, tb-RDA, CCA), and to find out, you check the variation represented by the first unconstrained ordination axis of this constrained ordination (with the environmental variable used as explanatory).
Explanation: if you want to know how much variation can maximally be explained by a single explanatory variable, follow this logic: an axis of an unconstrained ordination (e.g. PCA1) represents the direction of the strongest compositional gradient and can (theoretically) be considered a perfect explanatory variable for the dataset from which it was calculated. You can use it as explanatory in the constrained variant of the same ordination method (e.g. RDA) to see how much variation it explains; you get the same value if you simply check the variation represented by this axis in the original unconstrained ordination (the eigenvalue of the axis divided by the total inertia/variance; Fig. 1, the horizontal axis in the right panel). This, however, does not apply to the first unconstrained ordination axis of the constrained ordination (e.g. PC1 in RDA; Fig. 1, the vertical axis in the left panel), in which some environmental variable (or variables) was used as explanatory. Such an axis represents the maximum variance a single explanatory variable could explain in the dataset after the variation of the (real) explanatory variables has been removed.
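A minimal sketch of this logic, using the Hellinger-transformed river valley data from above: the variation represented by PC1 of the unconstrained tb-PCA equals (up to rounding) the variation PC1 explains when used as the explanatory variable in tb-RDA:

pca <- rda (spe.hell)
eigenvals (pca)[1] / sum (eigenvals (pca))               # proportion of variance represented by PC1
pc1 <- scores (pca, display = 'sites', choices = 1)[,1]  # sample scores on PC1
RsquareAdj (rda (spe.hell ~ pc1))$r.squared              # the same proportion, now as variation explained by tb-RDA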