How does standard deviation differ from the mean absolute deviation?
- Mean absolute deviation is always greater
- Standard deviation is always greater
- Standard deviation squares the deviations while mean absolute deviation takes absolute values
- They are the same
The standard deviation and the mean absolute deviation both measure dispersion in a dataset. The key difference lies in how they treat deviations from the mean: the standard deviation squares the deviations before averaging them (and then takes the square root of that average), while the mean absolute deviation averages the absolute values of the deviations. Because squaring gives proportionally more weight to large deviations, the standard deviation is more sensitive to extreme values than the mean absolute deviation.
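As a minimal sketch (assuming NumPy is available and using a made-up sample with one outlier), the two measures can be computed side by side:

```python
import numpy as np

data = np.array([2.0, 3.0, 4.0, 5.0, 30.0])  # hypothetical sample; 30 is an outlier
mean = data.mean()

# Standard deviation: average the squared deviations, then take the square root
std_dev = np.sqrt(np.mean((data - mean) ** 2))

# Mean absolute deviation: average the absolute deviations
mad = np.mean(np.abs(data - mean))

print(std_dev, mad)  # ~10.65 vs ~8.48: squaring makes the outlier count for more
```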
Quantitative data represents quantities and can be measured on a ________ scale.
- Categorical
- Nominal
- Numerical
- Ordinal
Quantitative data represents quantities and can be measured on a numerical scale. It includes both discrete data (e.g., the number of students in a class) and continuous data (e.g., the weight of a person).
What is the purpose of Pearson's Correlation Coefficient?
- To compute the standard deviation of a dataset
- To determine the linear relationship between two variables
- To find the mean of a set of values
- To transform qualitative data into quantitative data
Pearson's correlation coefficient (denoted r) measures the strength and direction of the linear association between two continuous variables, i.e., the degree to which pairs of observations lie on a straight line. Its value lies between -1 and 1, where 1 indicates a perfect positive linear relationship, -1 a perfect negative linear relationship, and 0 no linear relationship at all.
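A short illustration (assuming SciPy is installed; the hours-studied and exam-score values are invented for the example) of computing r:

```python
from scipy import stats

hours_studied = [1, 2, 3, 4, 5, 6]       # hypothetical data
exam_score = [52, 55, 61, 64, 70, 74]    # hypothetical data

r, p_value = stats.pearsonr(hours_studied, exam_score)
print(round(r, 3))  # close to +1, indicating a strong positive linear relationship
```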
If the population standard deviation is unknown, we use the sample standard deviation to estimate the ________ of the mean.
- Confidence interval
- Range
- Standard error
- Variability
If the population standard deviation is unknown, the sample standard deviation is used to estimate the standard error of the mean. The standard error measures how much the sample mean is expected to vary from the true population mean; it is computed as the sample standard deviation divided by the square root of the sample size.
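As a quick sketch (assuming NumPy; the sample values are invented):

```python
import numpy as np

sample = np.array([12.1, 11.8, 12.5, 12.0, 11.6, 12.3])  # hypothetical measurements

s = sample.std(ddof=1)                      # sample standard deviation (n - 1 denominator)
standard_error = s / np.sqrt(len(sample))   # estimate of the standard error of the mean
print(standard_error)
```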
In ANOVA, if the F statistic is significantly high, it suggests that the null ________ should be rejected.
- Distribution
- Hypothesis
- Model
- Theory
If the F statistic in an ANOVA is significantly large, it suggests that the null hypothesis should be rejected. The null hypothesis in ANOVA is typically that all group means are equal; the F statistic compares the variation between group means to the variation within groups, so a large value indicates that the group means differ by more than within-group noise would explain.
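A minimal example (assuming SciPy; the three groups are invented) of obtaining the F statistic and p-value for a one-way ANOVA:

```python
from scipy import stats

group_a = [23, 25, 28, 30, 27]   # hypothetical samples
group_b = [31, 33, 35, 32, 34]
group_c = [22, 21, 24, 23, 25]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
# A large F (and correspondingly small p-value) is evidence against the null
# hypothesis that all group means are equal
print(f_stat, p_value)
```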
What is a uniform distribution?
- A bell-shaped distribution
- A distribution with different probabilities for different outcomes
- A distribution with the same probability for all outcomes
- A skewed distribution
A uniform distribution, also called a rectangular distribution, is a type of probability distribution in which all outcomes are equally likely. Each interval of equal length on the distribution's support has the same probability.
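A small sketch (assuming NumPy) that draws from a continuous uniform distribution and checks that equal-length intervals receive roughly equal probability:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Continuous uniform on [0, 10): all outcomes in the range are equally likely
samples = rng.uniform(low=0.0, high=10.0, size=100_000)

# Any interval of length 2 should capture close to 2/10 = 0.2 of the samples
print(np.mean((samples >= 3.0) & (samples < 5.0)))
```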
The null hypothesis for the Kruskal-Wallis Test states that all ________ have the same distribution.
- factors
- groups
- pairs
- variables
The null hypothesis for the Kruskal-Wallis test states that all groups have the same distribution. It is a non-parametric, rank-based test of whether independent samples originate from the same distribution.
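An illustrative sketch (assuming SciPy; the group values are invented) of running the test:

```python
from scipy import stats

group_1 = [6.1, 5.9, 6.4, 6.0]   # hypothetical samples
group_2 = [7.2, 7.5, 6.9, 7.1]
group_3 = [5.5, 5.8, 5.6, 6.0]

h_stat, p_value = stats.kruskal(group_1, group_2, group_3)
# A small p-value suggests rejecting the null hypothesis that all groups
# share the same distribution
print(h_stat, p_value)
```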
How does the correlation coefficient change when you switch the X and Y variables?
- It changes sign
- It decreases
- It increases
- It remains the same
The correlation coefficient remains the same when you switch the X and Y variables. Correlation measures the strength and direction of the relationship between two variables, not the dependence of one on the other, and its formula is symmetric in the two variables.
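This symmetry is easy to verify numerically (assuming NumPy; the x and y values are invented):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

r_xy = np.corrcoef(x, y)[0, 1]
r_yx = np.corrcoef(y, x)[0, 1]
print(np.isclose(r_xy, r_yx))  # True: swapping the variables does not change r
```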
What is meant by the term "multicollinearity" in multiple linear regression?
- The dependent variables are correlated with each other
- The error terms are correlated with each other
- The independent variables are correlated with each other
- The residuals are correlated with each other
In multiple linear regression, multicollinearity refers to a situation in which two or more independent variables are highly linearly related. This causes problems because it inflates the variance of the estimated regression coefficients, making individual coefficients unstable and hard to interpret.
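One common diagnostic is the variance inflation factor (VIF). A rough sketch (assuming NumPy and statsmodels are available; the predictors are simulated so that x2 nearly duplicates x1):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(seed=0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # nearly a copy of x1 -> strong multicollinearity
x3 = rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2, x3]))
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
print(vifs)  # the VIFs for x1 and x2 come out very large; x3 stays near 1
```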
How do we define expectation of a random variable?
- It is the most likely outcome of the variable
- It is the range of the variable
- It is the variance of the variable
- It is the weighted average of all possible values the variable can take, with weights being the respective probabilities
The expected value, or expectation, of a random variable is the weighted average of all possible values the variable can take, with the weights being the respective probabilities of those values.
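As a small worked example (plain Python, using a fair six-sided die as the random variable):

```python
# Expected value of a fair six-sided die: each face has probability 1/6
values = [1, 2, 3, 4, 5, 6]
probabilities = [1 / 6] * 6

# Weighted average of the possible values, weights being their probabilities
expectation = sum(v * p for v, p in zip(values, probabilities))
print(expectation)  # 3.5
```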