# ANOVA Table with SPSS

## Outlier Analysis

## Verification of the Previous Hypotheses

### Normality

Normality is checked with the Kolmogorov-Smirnov-Lilliefors contrast (n > 50), the Shapiro-Wilk contrast (n < 50), and the skewness (near 0 suggests normality) and kurtosis (near 3, for raw kurtosis) statistics. Violating the normality assumption does not significantly affect the Fisher-Snedecor F statistic provided the sample sizes are large, because ANOVA is a test comparing means and the Central Limit Theorem can be applied.
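Outside SPSS, these checks can be sketched with SciPy. The following is a minimal illustration on simulated data; the group, sample size, and seed are assumptions made only for the example:

```python
# Sketch: normality checks for one treatment group (n < 50, so Shapiro-Wilk),
# plus skewness and raw kurtosis. The data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group = rng.normal(loc=10, scale=2, size=40)  # one illustrative group, n = 40

w_stat, p_value = stats.shapiro(group)
skewness = stats.skew(group)
kurt = stats.kurtosis(group, fisher=False)  # raw kurtosis; ~3 under normality

print(f"Shapiro-Wilk p = {p_value:.3f}")  # p > 0.05 -> cannot reject normality
print(f"skewness = {skewness:.2f}, kurtosis = {kurt:.2f}")
```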

### Homoscedasticity Verification

Checked via graphical analysis of residuals, Bartlett's test, Hartley's test, and Levene's test of variance homogeneity. ANOVA is robust to violations of the homoscedasticity hypothesis if the sample sizes of the groups or treatments are identical or, at least, very similar.
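Hartley's test reduces to a simple ratio of variances, so it is easy to sketch by hand; the groups and their standard deviations below are illustrative assumptions:

```python
# Sketch: Hartley's F-max statistic (largest group variance over smallest),
# a quick screen for variance homogeneity on simulated equal-n groups.
import numpy as np

rng = np.random.default_rng(6)
groups = [rng.normal(0, s, 20) for s in (2.0, 2.2, 2.5)]  # similar spreads

variances = [np.var(g, ddof=1) for g in groups]
f_max = max(variances) / min(variances)
print(f"F-max = {f_max:.2f}")  # values near 1 suggest homogeneous variances
```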

### Verification of Independence and Randomness of the Samples: Graphical Analysis of Residuals

The ANOVA test is not robust to violations of the hypothesis of independence and randomness of the samples.

## Homoscedasticity Test (Variance Homogeneity Test)

#### H0: σi² = σj² (all variances are equal)
#### H1: σi² ≠ σj² for some i ≠ j (not all variances are equal)

- COCHRAN: sensitive to departures from normality; requires equal sample sizes.
- BARTLETT: sensitive to departures from normality; works with equal or unequal sample sizes.
- LEVENE: less sensitive to departures from normality than Bartlett's test; works with equal or unequal sample sizes.

The homoscedasticity hypothesis cannot be rejected if the p-value associated with the Levene statistic is greater than 0.05; in that case, homogeneity of variances of the dependent (response) variable is corroborated across the groups that make up the independent (explanatory) variable under study.
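The Levene statistic SPSS reports can be sketched with SciPy; the three groups below are simulated assumptions for the example, and `center="mean"` matches the classical (mean-centered) Levene test:

```python
# Sketch: Levene's test of variance homogeneity across three simulated groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
g1 = rng.normal(10, 2, 30)
g2 = rng.normal(12, 2, 30)
g3 = rng.normal(11, 2, 30)

# center="mean" is the classical Levene test; center="median" would be
# the Brown-Forsythe variant.
stat, p = stats.levene(g1, g2, g3, center="mean")
print(f"Levene statistic = {stat:.3f}, p = {p:.3f}")
# p > 0.05 -> homogeneity of variances cannot be rejected
```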

If the Levene test shows that the variances are not homogeneous (that is, the contrast is statistically significant), the F statistic is recalculated by selecting the Brown-Forsythe or Welch option box, which applies an adjusted version of the statistic:

## One-Way ANOVA Test
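The Welch correction mentioned above can be sketched from its standard textbook formula; this is a hedged illustration on simulated data, not SPSS's implementation, and the groups, sizes, and unequal standard deviations are assumptions for the example:

```python
# Sketch: Welch's ANOVA (unequal variances), implemented from the textbook
# formula, for when Levene's test rejects homogeneity of variances.
import numpy as np
from scipy import stats

def welch_anova(*groups):
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                  # weights w_j = n_j / s_j^2
    grand = np.sum(w * m) / np.sum(w)          # weighted grand mean
    num = np.sum(w * (m - grand) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    f_stat = num / den
    df2 = (k ** 2 - 1) / (3 * tmp)             # adjusted denominator df
    p = stats.f.sf(f_stat, k - 1, df2)
    return f_stat, p

rng = np.random.default_rng(5)
g1 = rng.normal(10, 1, 25)   # deliberately unequal variances across groups
g2 = rng.normal(12, 3, 35)
g3 = rng.normal(14, 5, 45)

f_w, p_w = welch_anova(g1, g2, g3)
print(f"Welch F = {f_w:.2f}, p = {p_w:.4f}")
```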

#### H0 (Null): All means are equal
#### Ha (Alternative): Not all means are equal

In general, the independent variable or factor (the one that forms the 3 or more groups) influences the continuous dependent variable if the between-group variability (between the group means) is greater than the within-group (error) variability.

If the Sig. (p-value) of the test is statistically significant (less than 0.05), the null hypothesis that the 3 or more groups behave the same way with respect to the population mean is rejected; the means of at least 2 groups therefore differ.
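The one-way F test can be sketched with SciPy; the three groups below are simulated with deliberately different means, as an assumption for the example:

```python
# Sketch: one-way ANOVA F test on three simulated groups, mirroring the
# SPSS Oneway ANOVA output described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
g1 = rng.normal(10, 2, 30)
g2 = rng.normal(12, 2, 30)
g3 = rng.normal(14, 2, 30)

f_stat, p = stats.f_oneway(g1, g2, g3)
print(f"F = {f_stat:.2f}, p = {p:.4f}")
# p < 0.05 -> reject H0: at least two group means differ
```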

## Types of POST-HOC Tests

From the Latin 'after this'. In the SPSS output panel, the interpretation is roughly as follows: for the crossing of any 2 factor treatments whose p-value is less than 0.05, the difference is considered statistically significant. If the value of the mean difference is positive, the treatment on the left of the comparison has the higher mean, which can also be corroborated with a descriptive analysis of the means via the EXPLORE submenu command. These tests are more robust than running Student's t tests to compare means two by two.

Multiple-comparison tests are usually based on controlling the probability of at least one Type I error across a set of comparisons. They can be regarded as an improved version of Student's t for pairwise comparisons of population means. The most common post-hoc tests are:

### Bonferroni

This post-hoc multiple-comparison correction is used when many statistical tests are performed at the same time. The problem with running many simultaneous tests is that the probability of at least one falsely significant result (Type I error) grows with each additional test; Bonferroni compensates by dividing the significance level by the number of comparisons. Because this can over-correct (raising the probability of Type II error), it is usually considered the most conservative test, and it is widely used in Biostatistics and Psychometrics.
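The correction itself is simple to sketch: divide α by the number of pairwise comparisons. The three groups below are simulated assumptions for the example:

```python
# Sketch: Bonferroni correction applied to pairwise t tests between three
# simulated groups; alpha is divided by the number of comparisons.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(3)
groups = {"A": rng.normal(10, 2, 30),
          "B": rng.normal(10, 2, 30),
          "C": rng.normal(13, 2, 30)}

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted threshold: 0.05 / 3
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p:.4f}, significant at corrected alpha: {p < alpha}")
```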

### Scheffé Method

This test is used for post-hoc comparisons in general (complex contrasts, rather than only pairwise comparisons). This method is usually considered with unequal sample sizes.

### Dunnett

Compares each group mean against a control-group mean; a variant (Dunnett's T3) exists for heteroscedasticity problems.

### Tukey

It is based on the honestly significant difference (HSD), a value representing the minimum distance between group means required for significance; in this way each treatment mean of the factor is compared with every other treatment mean.
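Tukey's HSD comparisons can be sketched with SciPy (`scipy.stats.tukey_hsd`, available in SciPy 1.8+); the three simulated groups are assumptions for the example:

```python
# Sketch: Tukey's HSD pairwise comparisons on three simulated groups,
# mirroring the SPSS post-hoc table of mean differences and p-values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
g1 = rng.normal(10, 2, 30)
g2 = rng.normal(12, 2, 30)
g3 = rng.normal(14, 2, 30)

res = stats.tukey_hsd(g1, g2, g3)
print(res)  # pairwise mean differences with adjusted p-values
```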

estamatica@gmail.com