--- title: "Introduction to the appraise Package" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Introduction to the appraise Package} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) library(appraise) ``` ## Introduction **appraise** is an R package for bias-aware evidence synthesis in systematic reviews. It quantifies uncertainty in effect estimates by explicitly modeling multiple sources of bias and combining study-specific posterior distributions using a posterior mixture model. Unlike traditional meta-analysis, appraise does not assume a single pooled likelihood. Instead, uncertainty due to bias, random error, and study relevance is propagated directly into posterior inference. ## Study-level bias specification and prior simulation Biases are explicitly modeled using user-specified prior distributions. The same data structure used internally by the Shiny application can be constructed programmatically. ```{r bias-priors} bias_spec <- build_bias_specification( num_biases = 2, b_types = "Confounding", s_types = "Selection Bias", ab_params = list(Confounding = c(2, 5)), skn_params = list(`Selection Bias` = c(0, 0.2, 5)) ) if (requireNamespace("sn", quietly = TRUE)) { xi_samples <- simulate_bias_priors(bias_spec, n_draws = 2000) } else { xi_samples <- NULL message("Package 'sn' not available; skipping skew-normal bias simulation.") } ``` The resulting samples represent uncertainty due to bias alone and form the building blocks of posterior inference. ## Study-level posterior inference Given an observed estimate and standard error, **appraise** fits a Bayesian model that combines sampling uncertainty with bias uncertainty. To ensure the vignette runs on CRAN without requiring CmdStan, we illustrate posterior inference using simulated posterior draws ```{r posteriors} set.seed(123) # Mock posterior draws representing a study-level posterior theta_draws <- rnorm(2000, mean = -0.5, sd = 0.15) mid_draws <- theta_draws # midpoint samples used downstream ``` ## Probability of exceeding a clinically or policy meaningful threshold Users must specify a threshold $\tau$ representing a clinically or policy-relevant effect size. The posterior probability $$ P(\theta > \tau) $$ is computed directly from posterior draws. ```{r} posterior_probability(mid_draws) ``` ## Evidence synthesis via posterior mixture models When multiple studies are available, **appraise** combines study-specific posteriors using a weighted mixture model. $$ p(\theta \mid \text{evidence}) = \sum_{k=1}^K w_k \, p_k(\theta \mid \text{data}_k) $$ where $w_k$ reflects the relevance or credibility of study $k$. ```{r} theta_list <- list( theta_draws, rnorm(2000, -0.4, 0.2) ) weights <- c(0.6, 0.4) mix <- posterior_mixture(theta_list, weights) mix$summary ``` ## Relationship to the Shiny application The AppRaise Shiny application provides a graphical interface to the same functions described in this vignette. All statistical computations are performed using exported package functions; the app adds interactivity, visualization, and workflow support. ## References Kabali C (2025). AppRaise: Software for quantifying evidence uncertainty in systematic reviews using a posterior mixture model. *Journal of Evaluation in Clinical Practice*, 31, 1-12. .