hypothesis
- Non-parametric: Differences in Two Groups (Nov 15)

  A parametric test requires us to assume, or hypothesize, parameters of a population. Often, a small sample or a highly skewed distribution does not resemble a normal distribution, and in such a case it is inappropriate to assume normality in our data. Even though parametric tests are robust against non-normal data to a certain degree, they still require a large sample. With a larger \(n\) and homogeneous intergroup variance, a parametric test may have sufficient power to correctly reject the \(H_0\). However, if we cannot satisfy the required assumptions, we need to drop our hypothesized claim about the population parameters. In other words, we employ a non-parametric test to measure the observed differences (see the first sketch after this list).
- Non-parametric: Differences in Multiple Groups (Nov 24)

  Even though parametric tests on multiple groups retain favourable statistical power with non-normal data, they require a large enough sample to correctly reject the \(H_0\). Moreover, parametric tests only apply to numeric data, as they compare the means of the assigned groups. With ordinal data or a small sample, non-parametric tests may provide a better inference (see the second sketch after this list).
- Correlation of Numeric Variables (Dec 3)

  So far, we have worked through hypothesis tests for conditions where the dependent variable is numeric and the independent variables are categoric. We have yet to see the solution when both the dependent and independent variables are numeric. From descriptive statistics, we understand that we can observe the spread of our data by measuring the variance, i.e. the dispersion of each data element relative to the mean value. Similarly, we can understand the joint dispersion of two numeric variables by computing the covariance (see the third sketch after this list).
- Linear Model (Dec 7)

  Upon seeing a linear trend between two variables, we may guess a potential value of interest given a specific data point. When doing so, we are making an inference about a predicted outcome based on observed events. Given a linear trend between multiple variables, such an inference is the basic foundation of a linear model, i.e. a mathematical construct for examining the dependent variable using known independent variables (see the final sketch after this list).
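
For the two-group entry, the notes do not name a specific procedure; one common non-parametric choice is the Mann-Whitney U test. The sketch below is illustrative only, using simulated, skewed samples whose sizes and distributions are assumptions made purely for demonstration.

```python
# Minimal sketch: non-parametric comparison of two independent groups
# with the Mann-Whitney U test. All data are simulated for illustration
# and are not taken from the original notes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Two small, right-skewed samples where assuming normality is doubtful
group_a = rng.exponential(scale=1.0, size=15)
group_b = rng.exponential(scale=1.8, size=15)

# The test compares the two distributions without assuming normality
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```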
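
For the multiple-group entry, a common non-parametric counterpart to one-way ANOVA (again, not named in the notes) is the Kruskal-Wallis H test. The three simulated groups below stand in for ordinal-like scores and are assumptions for illustration.

```python
# Minimal sketch: non-parametric comparison across three groups with the
# Kruskal-Wallis H test, using simulated count-like scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)

g1 = rng.poisson(lam=2.0, size=12)
g2 = rng.poisson(lam=3.0, size=12)
g3 = rng.poisson(lam=4.5, size=12)

# Tests whether at least one group tends to yield larger values
h_stat, p_value = stats.kruskal(g1, g2, g3)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```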
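
For the correlation entry, the sketch below shows how the covariance of two numeric variables relates to the Pearson correlation coefficient; the variables `x` and `y` are simulated solely to illustrate the computation.

```python
# Minimal sketch: covariance of two numeric variables and its link to the
# Pearson correlation coefficient, using simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)

x = rng.normal(loc=50, scale=10, size=100)
y = 0.8 * x + rng.normal(loc=0, scale=5, size=100)  # linearly related to x

# Covariance: joint dispersion of x and y around their means
cov_xy = np.cov(x, y)[0, 1]

# Scaling by both standard deviations yields the correlation coefficient
r_manual = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))
r_scipy, p_value = stats.pearsonr(x, y)
print(f"cov = {cov_xy:.2f}, r (manual) = {r_manual:.3f}, r (scipy) = {r_scipy:.3f}")
```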
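
For the linear-model entry, the final sketch fits a simple ordinary least-squares line to one simulated independent variable and predicts the dependent variable at a new point; the coefficients and the new point are assumptions for illustration, not values from the notes.

```python
# Minimal sketch: a simple linear model (ordinary least squares) fitted to
# one independent variable, then used to predict a new outcome.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)

x = rng.uniform(low=0, high=10, size=80)             # independent variable
y = 2.0 + 1.5 * x + rng.normal(scale=2.0, size=80)   # dependent variable

fit = stats.linregress(x, y)
print(f"intercept = {fit.intercept:.2f}, slope = {fit.slope:.2f}, R^2 = {fit.rvalue**2:.3f}")

# Inference on a predicted outcome for a new observation
x_new = 7.5
y_pred = fit.intercept + fit.slope * x_new
print(f"predicted y at x = {x_new}: {y_pred:.2f}")
```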