The probability of committing a Type I error is also known as the ______ level of the test.

  • Confidence
  • Power
  • Significance
  • Size
The probability of committing a Type I error (rejecting a true null hypothesis) is known as the significance level (often denoted by alpha) of the test. A common significance level is 0.05, indicating a 5% risk of committing a Type I error if the null hypothesis is true.
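
As a rough illustration (a minimal Python sketch with made-up data), the significance level is simply the threshold the test's p-value is compared against:

```python
# Minimal sketch: compare a test's p-value to a chosen significance level (alpha).
# The sample data are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.2, scale=1.0, size=30)   # hypothetical measurements

alpha = 0.05                                        # accepted Type I error risk
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis")
```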

The process of testing the effect of varying one predictor at different levels of another predictor is known as ________ effect analysis.

  • Additive
  • Independent
  • Interaction
  • Subtractive
This is known as interaction effect analysis. Interaction effect analysis involves testing how the effect of one predictor on the response variable changes at different levels of another predictor. It helps in understanding how different variables interact with each other to affect the dependent variable.
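
A minimal sketch of how an interaction term might be fit, using the statsmodels formula API; the variables x1, x2, and the data are hypothetical:

```python
# Minimal sketch of an interaction term in a linear model (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
# Response built so that the effect of x1 depends on the level of x2.
noise = rng.normal(scale=0.5, size=n)
df["y"] = 1.0 + 2.0 * df["x1"] + 0.5 * df["x2"] + 1.5 * df["x1"] * df["x2"] + noise

# "x1 * x2" expands to x1 + x2 + x1:x2; the x1:x2 coefficient is the interaction.
model = smf.ols("y ~ x1 * x2", data=df).fit()
print(model.params)   # the x1:x2 estimate should be close to 1.5
```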

How does the Spearman rank correlation deal with categorical variables?

  • It assigns a numerical value to each category
  • It can't handle categorical variables
  • It groups categorical variables together
  • It transforms categorical variables into ranks
The Spearman rank correlation converts values to ranks before correlating them, so it can handle ordinal categorical variables (categories with a natural order, such as low/medium/high) as well as continuous data. Nominal categories with no inherent order cannot be meaningfully ranked.
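
A small illustrative sketch (the rating categories and spending values are hypothetical): an ordered categorical variable is encoded by its natural order and then correlated with scipy.stats.spearmanr:

```python
# Minimal sketch: Spearman works on ranks, so an ordered categorical variable
# can be encoded by its natural order before correlating. Data are made up.
from scipy import stats

rating_order = {"low": 1, "medium": 2, "high": 3}
ratings = ["low", "medium", "medium", "high", "high", "low", "high"]
rating_codes = [rating_order[r] for r in ratings]

spend = [12.0, 18.5, 17.0, 30.2, 28.9, 10.5, 35.0]  # continuous variable

rho, p_value = stats.spearmanr(rating_codes, spend)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```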

How does independence between events affect the calculation of their joint probability?

  • It makes the joint probability equal to the difference of the probabilities of each event
  • It makes the joint probability equal to the product of the probabilities of each event
  • It makes the joint probability equal to the ratio of the probabilities of each event
  • It makes the joint probability equal to the sum of the probabilities of each event
If events are independent, their joint probability equals the product of their individual probabilities. That is, P(A ∩ B) = P(A) * P(B) for independent events A and B.
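
A tiny worked example, using two fair dice as the (assumed) independent events:

```python
# Tiny worked example: A = "first die shows a six", B = "second die shows a six".
# The two rolls are independent, so the joint probability is the product.
p_a = 1 / 6
p_b = 1 / 6
p_a_and_b = p_a * p_b     # P(A and B) = P(A) * P(B) for independent events
print(p_a_and_b)          # 1/36, about 0.028
```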

What type of statistical test is the Kruskal-Wallis Test?

  • Chi-square test
  • Non-parametric
  • Parametric
  • T-test
The Kruskal-Wallis Test is a non-parametric statistical test. It is a rank-based alternative to one-way ANOVA, used to compare three or more independent groups without assuming that the data are normally distributed.
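
A minimal sketch with made-up group data, using scipy.stats.kruskal:

```python
# Minimal sketch of a Kruskal-Wallis test across three groups (made-up data).
from scipy import stats

group_a = [2.9, 3.0, 2.5, 2.6, 3.2]
group_b = [3.8, 2.7, 4.0, 2.4]
group_c = [2.8, 3.4, 3.7, 2.2, 2.0]

h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```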

The degrees of freedom for a Chi-square test for a contingency table with r rows and c columns is (r-1)*(c-1), otherwise known as ________ degrees of freedom.

  • dependent
  • independent
  • joint
  • multicollinearity
The degrees of freedom for a Chi-square test for a contingency table with r rows and c columns is calculated as (r-1)*(c-1). These are known as independent degrees of freedom because, once the row and column totals are fixed, only (r-1)*(c-1) of the cell counts can vary independently.
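
As a quick check, scipy.stats.chi2_contingency reports the degrees of freedom for a contingency table; the counts below are made up:

```python
# Minimal sketch: degrees of freedom for a 2x3 contingency table (made-up counts).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[10, 20, 30],
                  [15, 25, 20]])    # r = 2 rows, c = 3 columns

chi2, p, dof, expected = chi2_contingency(table)
print(dof)                          # (2 - 1) * (3 - 1) = 2
```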

Can Pearson's Correlation Coefficient be used with non-linear relationships?

  • No, never
  • Yes, always
  • Yes, but it may not provide meaningful results
  • Yes, but only if the relationship is monotonic
While you can technically compute a Pearson correlation coefficient for non-linear relationships, it may not provide meaningful results. The Pearson correlation measures the degree of a linear relationship between variables, and does not fully capture the dynamics of a non-linear relationship. In such cases, Spearman's rank correlation or other non-parametric correlations may be more appropriate.
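
An illustrative sketch with synthetic data: y is a strictly increasing (monotonic) but highly non-linear function of x, so Spearman's rank correlation is 1 while Pearson's linear correlation is noticeably lower:

```python
# Minimal sketch: Pearson understates a monotonic but non-linear relationship,
# while the rank-based Spearman coefficient captures it. Data are synthetic.
import numpy as np
from scipy import stats

x = np.linspace(0, 10, 200)
y = np.exp(x)                         # non-linear but perfectly monotonic in x

pearson_r, _ = stats.pearsonr(x, y)
spearman_rho, _ = stats.spearmanr(x, y)
print(f"Pearson r = {pearson_r:.2f}")        # well below 1
print(f"Spearman rho = {spearman_rho:.2f}")  # 1.00
```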

What is the purpose of a Z-test?

  • To assess the relationship between categorical variables
  • To calculate the correlation between two variables
  • To compare sample and population means when the population standard deviation is known
A Z-test is used to compare the mean of a sample to the mean of a population when the population standard deviation is known. It's not used to calculate correlations or assess relationships between categorical variables.
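
A minimal sketch of a one-sample z-test; the sample mean, sample size, and (assumed known) population standard deviation are made up:

```python
# Minimal sketch of a one-sample z-test with a known population standard deviation.
import math
from scipy.stats import norm

sample_mean = 52.3
pop_mean = 50.0        # mean under the null hypothesis
pop_sd = 8.0           # known population standard deviation
n = 40

z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
p_value = 2 * (1 - norm.cdf(abs(z)))    # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.3f}")
```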

If the occurrence of A does not affect the occurrence of B, we say A and B are ________.

  • Dependent
  • Independent
  • Joint
  • Mutually exclusive
If the occurrence of A does not affect the occurrence of B, we say A and B are independent. This is a key concept in probability theory where the occurrence of one event does not change the probability of another.

What are the ways to check the assumptions of an ANOVA test?

  • By calculating the F-statistic
  • By calculating the mean and variance of each group
  • By checking normality of residuals, homogeneity of variance, and independence of observations
  • By conducting post-hoc tests
The assumptions of an ANOVA test can be checked as follows:

  1. Normality of residuals: use a normal probability plot or a statistical test such as the Shapiro-Wilk test.
  2. Homogeneity of variance: use Levene's test or Bartlett's test.
  3. Independence of observations: this usually pertains to the study design (random sampling, random assignment).
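
A minimal sketch of these checks with scipy; the group data are simulated:

```python
# Minimal sketch of ANOVA assumption checks (simulated groups).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = [rng.normal(loc=mu, scale=1.0, size=30) for mu in (5.0, 5.5, 6.0)]

# 1. Normality of residuals: subtract each group mean, then Shapiro-Wilk.
residuals = np.concatenate([g - g.mean() for g in groups])
print("Shapiro-Wilk:", stats.shapiro(residuals))

# 2. Homogeneity of variance: Levene's test across the groups.
print("Levene:", stats.levene(*groups))

# 3. Independence of observations is a design issue (random sampling,
#    random assignment) and is not verified by a single statistical test.
```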