How does factor analysis help in understanding the structure of a dataset?
- By identifying underlying factors
- By normalizing the data
- By reducing noise in the data
- By transforming the data
Factor analysis helps in understanding the structure of a dataset by identifying the underlying factors that give rise to the pattern of correlations within the set of observed variables. These factors can explain the latent structure in the data.
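A minimal sketch of this idea, using simulated data in which six observed variables are driven by two hidden factors (the data, loadings, and noise level are all illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 6 observed variables generated by 2 latent factors:
# X = factors @ loadings + noise (hypothetical structure for illustration).
n = 500
factors = rng.normal(size=(n, 2))
loadings = np.array([[0.9, 0.8, 0.7, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.9, 0.8, 0.7]])
X = factors @ loadings + 0.3 * rng.normal(size=(n, 6))

# Fit factor analysis; the estimated loadings (one row per factor) should
# recover the block structure that produced the observed correlations.
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
print(fa.components_.round(2))
```

The recovered loadings show which observed variables move together, which is exactly the "latent structure" the explanation refers to.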
What is the name of the rule that states the probability of the sum of all possible outcomes of an experiment is 1?
- Bayes' Theorem
- Law of Large Numbers
- Law of Total Probability
- Rule of Complementary Events
As used here, the Law of Total Probability rests on the fact that the probabilities of all possible outcomes of an experiment sum to 1. Formally, for mutually exclusive, exhaustive events B1, ..., Bn partitioning the sample space, it states that P(A) = P(A|B1)P(B1) + ... + P(A|Bn)P(Bn). This rule is fundamental to probability theory and provides a way to calculate the probability of complex events by breaking them down into simpler, mutually exclusive events.
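A small numerical sketch of the decomposition, with hypothetical numbers (two boxes of red and non-red balls, one box chosen at random):

```python
# Partition events B1, B2 (which box is chosen) are mutually exclusive and
# exhaustive, so their probabilities sum to 1, and
# P(red) = P(red|B1)P(B1) + P(red|B2)P(B2).
p_b = [0.5, 0.5]             # P(B1), P(B2) -- hypothetical values
p_red_given_b = [0.3, 0.8]   # P(red|B1), P(red|B2) -- hypothetical values

assert abs(sum(p_b) - 1.0) < 1e-12  # partition probabilities sum to 1
p_red = sum(pr * pb for pr, pb in zip(p_red_given_b, p_b))
print(p_red)  # 0.55
```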
What are the two types of factor analysis used in data science?
- Confirmatory and explanatory
- Exploratory and confirmatory
- Inferential and descriptive
- Predictive and explanatory
Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) are the two types of factor analysis commonly used in data science. EFA is used when the structure of the underlying factors is not known, while CFA is used when the researcher has specific hypotheses about the factor structure.
Non-parametric tests are also known as ________ tests because they make fewer assumptions about the data.
- assumption-free
- distribution-free
- free-assumption
- free-distribution
Non-parametric tests are also known as distribution-free tests because they make fewer assumptions about the data; in particular, they do not require the data to follow a specific distribution (such as the normal distribution).
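For example, the Mann-Whitney U test compares two samples without assuming normality. A sketch with simulated, heavily skewed data (the distributions and sample sizes are illustrative assumptions):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Two samples from a skewed, non-normal distribution; a t-test's normality
# assumption is violated, but Mann-Whitney U does not require it.
a = rng.exponential(scale=1.0, size=200)
b = rng.exponential(scale=1.5, size=200)

stat, p = mannwhitneyu(a, b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```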
How can you test the assumption of independence in a Chi-square test for goodness of fit?
- By calculating the standard deviation of the observations
- By conducting a separate Chi-square test of independence
- By conducting a t-test
- By examining the correlation between observations
To test the assumption of independence in a Chi-square test for goodness of fit, you can conduct a separate Chi-square test of independence. This test compares the observed frequencies in each category with what we would expect if the variables were independent.
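A sketch of a chi-square test of independence on a hypothetical 2x2 contingency table (the counts are invented for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: observed counts for two categorical
# variables. chi2_contingency compares these observed frequencies with the
# frequencies expected under independence.
observed = np.array([[30, 10],
                     [20, 40]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
print(expected)  # counts expected if the variables were independent
```

A small p-value here indicates the observed counts deviate from what independence would predict.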
How does skewness affect the relationship between the mean, median, and mode of a distribution?
- Changes the relationship
- Increases the standard deviation
- No effect
- Reduces the kurtosis
Skewness affects the relationship between the mean, median, and mode. In a positively skewed distribution, the mean is usually greater than the median, which is greater than the mode. In a negatively skewed distribution, the mode is usually greater than the median, which is greater than the mean.
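The mean-above-median pattern for positive skew is easy to verify by simulation (the choice of an exponential sample is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)

# A positively skewed sample: the exponential distribution's long right
# tail pulls the mean above the median.
x = rng.exponential(scale=1.0, size=100_000)
print(f"mean = {x.mean():.3f}, median = {np.median(x):.3f}")
# mean is near 1.0, while the median is near ln(2) ~ 0.693
```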
Under what conditions does the Central Limit Theorem hold true?
- When the data is skewed
- When the population is normal
- When the sample size is sufficiently large
- When the standard deviation is zero
The Central Limit Theorem holds true when the sample size is sufficiently large (usually n > 30), regardless of the shape of the population distribution. This theorem states that if you have a population with mean μ and standard deviation σ and take sufficiently large random samples from the population with replacement, then the distribution of the sample means will be approximately normally distributed.
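The theorem can be illustrated by drawing many samples from a strongly skewed population and looking at the distribution of their means (population choice and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Population: exponential with mean 1 and standard deviation 1 (heavily
# skewed, far from normal). Draw 10,000 samples of size n = 50 and take
# the mean of each.
n = 50
sample_means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)

# Per the CLT, the sample means cluster around the population mean (1.0)
# with standard deviation close to sigma / sqrt(n) = 1 / sqrt(50) ~ 0.141,
# and their histogram looks approximately normal.
print(sample_means.mean(), sample_means.std())
```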
How does effect size impact hypothesis testing?
- Effect size has no impact on hypothesis testing
- Larger effect sizes always lead to rejection of the null hypothesis
- Larger effect sizes always lead to smaller p-values
- Larger effect sizes increase the statistical power of the test
Effect size measures the magnitude of the difference or the strength of the relationship in the population. A larger effect size means a larger difference or stronger relationship, which in turn increases the statistical power of the test. Power is the probability that the test correctly rejects the null hypothesis when the alternative is true.
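The power-versus-effect-size relationship can be checked by simulation. A sketch assuming a two-sample t-test with normal data; the sample size, effect sizes, and trial count are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)

def simulated_power(effect_size, n=30, trials=2000, alpha=0.05):
    """Estimate power: the fraction of simulated two-sample t-tests that
    reject H0 when the true standardized mean difference is effect_size."""
    rejections = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect_size, 1.0, n)
        if ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / trials

power_small = simulated_power(0.2)  # small effect (Cohen's d = 0.2)
power_large = simulated_power(0.8)  # large effect (Cohen's d = 0.8)
print(power_small, power_large)
```

Holding n and alpha fixed, the larger effect size yields a substantially higher rejection rate, which is exactly the increase in power the explanation describes.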
How does a binomial distribution differ from a normal distribution?
- Binomial distribution is continuous, while normal is discrete
- Both are continuous distributions
- Both are discrete distributions
- Normal distribution is continuous, while binomial is discrete
A binomial distribution is discrete: it takes only integer values from 0 to n, and it represents the number of successes in a fixed number n of independent Bernoulli trials with a given success probability. A normal distribution is continuous, and it is often used as an approximation to the binomial distribution when the number of trials is large.
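The discrete-versus-continuous distinction, and the large-n approximation, can be seen side by side (the parameters n = 100, p = 0.5 are illustrative assumptions):

```python
import numpy as np
from scipy.stats import binom, norm

n, p = 100, 0.5

# Binomial: discrete, so P(X = k) is defined only at integers 0..n.
p_exact = binom.pmf(50, n, p)

# Normal approximation: continuous, with matching mean np and standard
# deviation sqrt(np(1-p)); a continuity correction handles the
# discrete-to-continuous step.
mu, sigma = n * p, np.sqrt(n * p * (1 - p))
p_approx = norm.cdf(50.5, mu, sigma) - norm.cdf(49.5, mu, sigma)

print(p_exact, p_approx)  # the two values agree closely for large n
```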
What is the underlying assumption of linearity in a multiple linear regression model?
- All independent variables must have a linear relationship with the dependent variable
- All residuals must be equal
- All variables must be continuous
- All variables must be normally distributed
The assumption of linearity in a multiple linear regression model assumes that the relationship between each independent variable and the dependent variable is linear. This implies that the change in the dependent variable due to a one-unit change in the independent variable is constant, regardless of the value of the independent variable.
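A sketch of the constant-effect interpretation, using data simulated from a truly linear model (the coefficients 2 and -3 and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Generate data from a linear model: y = 2*x1 - 3*x2 + noise, so the
# linearity assumption holds by construction.
n = 1000
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.1 * rng.normal(size=n)

# Ordinary least squares via np.linalg.lstsq (intercept column prepended).
# With linearity satisfied, the fitted coefficients recover the constant
# per-unit effects of each independent variable.
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(beta.round(2))  # approximately [0.00, 2.00, -3.00]
```

If the true relationship were curved, these fitted slopes would no longer describe the effect of a one-unit change at every value of the predictor, which is why the assumption matters.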