
Research in a Nutshell

Researchers ought to conduct thorough and objective observations. In doing so, we may start by formulating a question and a hypothesis before making predictions, performing experiments, and analyzing results. A hypothesis differs from a theory in that we still need evidence to establish the facts revolving around it. Research, or investigation, is a means to gather evidence and support the hypothesis. Thus, a hypothesis needs to be framed around the cases and variables introduced in a study.

Cases and Variables

When performing an investigation, we first need to state which cases and variables to investigate. Cases are observable subjects, which may be anything in a research setting, e.g. a patient in medical research, an atom in physics, a substance in chemical engineering, and so on. There is no strict boundary in defining a subject in research, but all subjects share one similarity: they are observable. It is important to notice that the characteristics of observed cases are our variables of interest, quantifiable as discrete or continuous measures. We shall proceed with a few examples to distinguish cases from variables.

Ten schizophrenic patients participated in a study on brain activity. They underwent computer-based tests on psychotic symptoms and an fMRI examination to observe functional connectivity.

In this example, we have schizophrenic patients as our observable subjects, i.e. our cases. We then measured reported psychotic symptoms and functional connectivity as our variables of interest.
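
To make this concrete, here is a minimal sketch of how such data could be laid out in R, with each row representing a case and each column a variable; the column names and simulated values are hypothetical, for illustration only.

set.seed(1)
# hypothetical layout: each row is a case, each column a variable of interest
patients <- data.frame(
  id           = paste0("P", 1:10),   # ten schizophrenic patients
  symptoms     = rbinom(10, 20, 0.4), # computer-based psychotic symptom score
  connectivity = runif(10, 0.2, 0.8)  # fMRI functional connectivity index
)
head(patients)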

A group of botanists observed different species of molds under conditioned environments. They set up different humidity levels and temperatures to observe cellular integrity.

The second example has different species of molds as its respective cases, with humidity levels, temperature, and cellular integrity as the included variables. What makes both examples different? The second example implies an association between the former two variables and the latter one. It implicitly indicates the possibility of cellular integrity being influenced by temperature and humidity levels. We observed no such indication in the first example. This brings us to distinguish two different types of variables, i.e. independent and dependent.

Independent variables are those unaffected by other variables. In experimental setups, independent variables are directly under our control. Design-wise, an independent variable is a presumed cause in our hypothesis. I wish to highlight the word presumed, as we are yet to establish the evidence, and we shall later learn that proving causality is not a trivial matter. As opposed to independent ones, dependent variables are affected by other variables. When conducting an experiment, we have dependent variables under our indirect control, so a dependent variable is a presumed effect.

Grouping

Using a categorical independent variable, we can separate each observation into its respective group. By doing so, we can differentiate within and between subjects. A within-subject perspective helps us examine the same subject over time. We usually use such a design when testing the subject multiple times, using a temporal indicator \(t_N, N \in \{1, 2, \dots, n\}\). We can also apply a within-subject perspective when investigating the same subject using different measures. On the other hand, between subject is the perspective we use when comparing different subjects, usually seen in a test on multiple groups. For example, suppose we have brain recording data from a group of individuals, where we recorded \(\beta\) and \(\gamma\) waves. We shall see both waves as different measures and delineate such a difference by comparing their records on the same subject (within-subject differences). It makes much more sense than comparing the \(\beta\) and \(\gamma\) waves from two completely different people. However, we can compare the recorded \(\beta\) wave among different participants, enabling us to discern between-subject differences, as sketched below.
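
As a minimal sketch of this brain-recording example (the values are simulated assumptions), we can contrast both perspectives in R:

set.seed(1)
# hypothetical recordings: beta and gamma wave power from five participants
brain <- data.frame(
  subject = paste0("S", 1:5),
  beta    = rnorm(5, mean=20, sd=2),
  gamma   = rnorm(5, mean=40, sd=2)
)
# within subject: different measures compared on the same person
brain$within_diff <- brain$gamma - brain$beta
# between subject: the same measure compared among different people
summary(brain$beta)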

So far, it sounds interesting and all, but why should we do that? Please be advised that doing so enables inferential statistics. We can describe an association between variables in observed cases and further demonstrate statistical significance in supporting our hypotheses. If you find some of the words confusing, no need to worry, as we will carefully discuss such matters in later sections. However, we need to be wary of the drawbacks in grouping our cases, as we have to identify potential confounding variables, reduce bias, and minimize random error.

Confounding Variable

A confounding variable is any variable, be it numeric or categorical, which may impede us from correctly interpreting the presumed causality. Confounding variables may establish a causal association with either independent or dependent variables. As it complicates the proposed causality, it reduces internal validity! Internally valid research presents clear evidence, hence confounding variables are the ones to obfuscate such findings.
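
To see how a confounder obfuscates findings, consider this minimal simulation (all variables are hypothetical): a confounder drives both the presumed cause and the presumed effect, producing an apparent association that vanishes once we adjust for it.

set.seed(1)
n <- 1000
z <- rnorm(n)      # confounding variable
x <- z + rnorm(n)  # presumed cause, driven by z
y <- z + rnorm(n)  # presumed effect, driven by z but not by x
coef(summary(lm(y ~ x)))      # x appears strongly associated with y
coef(summary(lm(y ~ x + z)))  # adjusting for z removes the apparent effect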

Cognitive Bias in Research

The presence of a cognitive bias introduces partiality into a conducted investigation. Among a plethora of biases and fallacies, we shall take a look at the most common ones occurring during research, including:

  1. Selection bias
  2. Information bias
  3. Detection bias
  4. Performance bias
  5. Attrition bias
  6. Reporting bias

Selection bias occurs when the proportion of selected subjects does not represent the population, either as under- or over-coverage. Having inappropriate proportions may obscure the actual differences in the population. Selection bias may happen out of a participant’s volition or an investigator’s negligence.

On the other hand, information bias happens when the acquired information does not represent the truth. On the participants’ side, information bias usually manifests as recall or response bias, with recall bias being the inability to recall correct information and response bias arising from a vaguely interpreted question. Investigators are also prone to bias, either as interviewer or confirmation bias. Interviewer bias occurs upon using close-ended questions, where the interviewer may only look for answers they want to observe. Confirmation bias is a tendency to only confirm responses conforming to their assumptions.

Meanwhile, detection bias takes place when investigators only detect the outcome in a group with certain characteristics. Performance bias is an inclination to give different treatments to favour a certain outcome. Attrition bias usually happens in a longitudinal study, where subjects opt to drop out or do not comply with follow-up schedules. As a rule of thumb, having at most a 20% lost-to-follow-up or drop-out rate is still acceptable for most research. Reporting bias is a tendency to only report significant findings and neglect insignificant ones.

After carefully identifying potential biases, we may handle them through several means (see the sketch after this list):

  • Random sampling
  • Random assignment
  • Blinding:
    • Participant
    • Investigator
    • Mediating parties
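
As a minimal sketch in base R (subject labels and group names are made up for illustration), random sampling and random assignment boil down to sample():

set.seed(1)
population <- paste0("S", 1:100)
# random sampling: draw ten subjects from the population
subjects <- sample(population, 10)
# random assignment: allocate the sampled subjects into two arms of equal size
assignment <- sample(rep(c("treatment", "control"), each=5))
data.frame(subjects, assignment)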

Systematic and Random Errors

Both confounding variables and biases contribute to systematic errors, where we have a hunch on how they will affect the data. They often have a mathematical solution since they happen systematically, e.g. an offset or scale-factor error. However, a random error is a type of error we cannot completely handle. It is unpredictable and unreplicable, and it presents no mathematical solution. As the error happens randomly, we often call it an unsystematic error or system noise. To adequately handle random errors, we may focus our utmost efforts on increasing the sample size or taking the average measurement value.
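
A quick simulation illustrates why averaging over a larger sample tames random error; the true value and noise level below are arbitrary assumptions.

set.seed(1)
true_value <- 10
measure <- function(n) rnorm(n, mean=true_value, sd=2)  # measurements with random error
# the average of larger samples drifts closer to the true value
sapply(c(5, 50, 500, 5000), function(n) mean(measure(n)))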

Formulating a Hypothesis

We have hypotheses as assumptions about the population parameters, stated as the null \(H_0\) and alternative \(H_a\) hypotheses. In the null hypothesis, we regard observed results as happening purely by chance. Contrarily, the alternative hypothesis assumes observed results occurred due to some non-random cause. As we only have sampled data, we never accept the alternative hypothesis. Through a formal procedure, we can either reject or fail to reject \(H_0\).

In our last lecture, we flipped a coin ten times, which happened to produce the following sequence:

library(magrittr)  # provides the tee pipe %T>%
set.seed(1)
coin <- sample(c("H", "T"), 10, replace=TRUE, prob=rep(1/2, 2)) %T>% print()
|  [1] "T" "T" "H" "H" "T" "H" "H" "H" "H" "T"

Now, we will formulate hypotheses on our coin flip and determine whether or not to reject the \(H_0\).

\(H_0:\) The probability of obtaining heads is \(P = 0.5\)
\(H_a:\) The probability of obtaining heads is \(P \neq 0.5\)

Formal Test

Seeing the proportion of heads being 0.6, can we reject the \(H_0\)? Well, not quite ;) Rejecting a hypothesis requires formal proof using statistical tests. For now, we are quite comfortable conducting a goodness-of-fit test. We previously agreed that our coin flip follows a Bernoulli trial and a binomial distribution. So, we will test our hypotheses using a binomial test.

binom.test(x={sum(coin=="H")}, n=length(coin), p=0.5)
| 
|   Exact binomial test
| 
| data:  {     sum(coin == "H") } and length(coin)
| number of successes = 6, number of trials = 10, p-value = 0.8
| alternative hypothesis: true probability of success is not equal to 0.5
| 95 percent confidence interval:
|  0.262 0.878
| sample estimates:
| probability of success 
|                    0.6

Seeing the p-value > 0.05, we can formally state our failure to reject \(H_0\). The following figure presents other examples of null hypotheses:

As previously demonstrated, a formal test looks for the correspondence between our data and the \(H_0\). When the sampled data are consistent with \(H_0\), we are bound to fail in rejecting the \(H_0\). Otherwise, we can confidently reject \(H_0\). Please note that there are three types of formal tests in general:

  • Two-tailed: \(H_a: \theta \neq \theta_0\)
  • Right-tailed: \(H_a: \theta > \theta_0\)
  • Left-tailed: \(H_a: \theta < \theta_0\)

Here, \(\theta\) denotes the population parameter and \(\theta_0\) its hypothesized value under \(H_0\).

We can reject our \(H_0\) based on either one of two available criteria. The first procedure has us check whether the test statistic falls beyond the critical values bounding the region of acceptance, an approach often used in older statistical methods. For a more sophisticated approach, we may choose to use the p-value and confidence interval. From this lecture onwards, we shall regard statistical significance as a p-value lower than a certain threshold.
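
We can demonstrate both criteria on our ten coin flips; this is a minimal sketch, and the discreteness of the binomial makes the critical bounds approximate.

# critical-value criterion: region of acceptance for 10 fair flips at alpha = .05;
# we reject H0 when the number of heads falls outside these bounds
qbinom(c(.025, .975), size=10, prob=0.5)
# p-value criterion: reject H0 when the p-value falls below alpha
binom.test(x=6, n=10, p=0.5)$p.value < 0.05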

Interpreting Hypotheses

When testing the hypothesis, we start from the initial assumption of \(H_0\) being true. We need collected evidence to show otherwise before we can reject the \(H_0\). We can either reject or fail to reject the \(H_0\) based on the p-value computed by the statistical tests we employ. However, statistical errors may arise when testing the hypothesis. Immediate recognition of both errors will help us correctly interpret our results:

  • Type I: False Reject Rate \(\alpha \to\) false positive
  • Type II: False Accept Rate \(\beta \to\) false negative

It may sound confusing at first, but we can imagine it more intuitively:

  • Type I error happens when we state \(H_0\) as wrong, while it is actually true
  • Type II error happens when we state \(H_0\) as true, while it is actually wrong

The following figure may provide a better explanation of the concept :)

Then, what about \(\alpha\) and \(\beta\)? The false reject rate \(\alpha\) is the probability of rejecting a correct \(H_0\). We use \(\alpha\) as our level of significance, viz. a cut-off to determine a statistically significant p-value. We can use any value as our \(\alpha\), be it .1, .05, .01, and so on. However, by convention we assign .05 as our common cut-off. In interpreting a p-value < .05, we can say it out loud as:

“We have a 5% chance of falsely rejecting the null hypothesis, thus we can confidently regard our findings as significant.”

As we shall see, the higher our \(\alpha\) is, the higher our probability of falsely rejecting \(H_0\). Thus, we assign a low cut-off, \(\alpha \leqslant 0.1\), and aim for the lowest possible p-value when interpreting our results.
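
We can verify the meaning of \(\alpha\) by simulation: testing a truly fair coin over and over, every rejection is a type I error. This is a minimal sketch; the exact binomial test is conservative here owing to the discreteness of the distribution.

set.seed(1)
# flip a fair coin 10 times and test against P = 0.5; H0 is true by construction
p_values <- replicate(10000, binom.test(rbinom(1, 10, 0.5), 10, p=0.5)$p.value)
mean(p_values < 0.05)  # empirical false rejection rate, at most alpha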

Study Designs

Each field of science contributes to developing appropriate study designs to support its investigations, resulting in a plethora of available designs to use! This post will only focus on several of the most common ones in biomedical research. Please be advised that we purposely exclude case reports and case series from this post. Although both are important designs widely used in biomedical research, we cannot completely implement statistical measures in such designs. Instead, we will briefly introduce systematic review and meta-analysis, to which the excluded designs may still contribute. The following sections only serve as a short outline on what we can or cannot do with a particular study design.

Cross-Sectional

  • A snapshot of outcome and associated characteristics
  • No intervention, only applicable to infer existing differences
  • Relatively inexpensive and quick to conduct
  • Static and does not reveal temporal context
  • Unable to establish causality
  • Different time frames may lead to different results

Cohort

  • A longitudinal study to observe an effect over time
  • Could either be a prospective or retrospective study
  • Starting from a potential cause
  • Involving at least two groups of interest
  • Cannot completely control the confounding variable
  • No randomization, which may reduce internal validity
  • Takes a long period to complete

Case-Control

  • A longitudinal study to see an effect over time
  • Retrospective in nature
  • Starting from an outcome of interest
  • Involving at least two groups
  • Usually employed to investigate rare conditions
  • Assessing multiple risk factors
  • Difficult to find a suitable control group
  • Inadequate to establish a diagnostic study
  • Should carefully address confounding variables

Randomized Controlled Trial

  • Uses randomization to control bias \(\to\) higher internal validity
  • Akin to an experimental study, can effectively employ blinding methods
  • Statistically efficient
  • Has a clearly identified population
  • Expensive to conduct
  • Risk of having a volunteer bias
  • Loss to follow-up may introduce attrition bias

Systematic Review

  • Consolidates findings from multiple studies
  • Critical appraisal removes redundancies and addresses inconsistencies
  • Delineates where knowledge is lacking to guide future research
  • Variation among published articles is a challenge to overcome
  • Has a potential bias since it may include unreviewed articles
  • Meta-analysis enables statistical approaches on reviewed articles by analyzing previous analyses, as sketched below
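
As a minimal sketch of how a meta-analysis pools previous analyses, the following base R code computes a fixed-effect, inverse-variance weighted estimate; the effect sizes and variances are hypothetical numbers for illustration.

# hypothetical effect sizes (yi) and their variances (vi) from five studies
yi <- c(0.30, 0.12, 0.25, 0.40, 0.18)
vi <- c(0.04, 0.02, 0.05, 0.03, 0.02)
wi <- 1 / vi                       # inverse-variance weights
pooled  <- sum(wi * yi) / sum(wi)  # fixed-effect pooled estimate
se_pool <- sqrt(1 / sum(wi))       # standard error of the pooled estimate
c(estimate=pooled, lower=pooled - 1.96*se_pool, upper=pooled + 1.96*se_pool)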