---
title: "DEBIT"
output:
  rmarkdown::html_vignette:
    fig_width: 7
    fig_height: 5
vignette: >
  %\VignetteIndexEntry{DEBIT}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
bibliography: references.bib
---
```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```
```{r setup, message=FALSE}
library(scrutiny)
```
The descriptive binary test, or DEBIT, checks whether the reported mean, sample standard deviation (SD), and sample size of binary data are mutually consistent [@heathers_debit_2019]. Like GRIM, it asks whether a given set of summary statistics could jointly describe the same underlying data.
This vignette covers scrutiny's implementation of DEBIT:
1. The basic, single-case `debit()` function.
2. A specialized mapping function, `debit_map()`.
3. The `audit()` method for summarizing `debit_map()`'s results.
4. The visualization function `debit_plot()`.
5. Testing the numeric neighborhood of reported values with `debit_map_seq()`.
6. Handling unknown group sizes with `debit_map_total_n()`.
## DEBIT basics
Consider these summary data for a binary distribution: a mean of 0.35, an SD of 0.18, and a sample size of 20. To test their consistency, run this:
```{r}
debit(x = "0.35", sd = "0.18", n = 20)
```
As in `grim()`, the mean needs to be a string. (The same is true for the SD.) That is because strings preserve trailing zeros, which can be crucial for DEBIT. Numeric values don't, and even converting them to strings won't help. A workaround for larger numbers of such values, `restore_zeros()`, is discussed in the *Data wrangling* vignette.
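As a quick, hypothetical illustration of both the problem and the workaround (the padding width below is just an example):
```{r}
# Converting a number to a string silently drops the trailing zero...
as.character(0.20)

# ...but `restore_zeros()` pads it back to a given number of decimal places:
restore_zeros(0.2, width = 2)
```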
Note that the SD is always assumed to be the sample SD, not the population SD. In some rare cases, this distinction might explain apparent inconsistencies.
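To see the difference between the two kinds of SD, here is a small, made-up binary distribution:
```{r}
# Hypothetical binary data: 3 ones and 7 zeros, so n = 10
y <- c(rep(1, 3), rep(0, 7))

# Sample SD (denominator n - 1), which DEBIT assumes:
sd(y)

# Population SD (denominator n) is slightly smaller:
sd(y) * sqrt((length(y) - 1) / length(y))
```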
`debit()` has some further arguments, but all of them can be used from within `debit_map()`. Since `debit_map()` is the more useful function in practice, the other arguments will be discussed in that context.
## Testing multiple cases
### Working with `debit_map()`
If you want to test more than a handful of cases, the recommended way is to enter them into a data frame and to run `debit_map()` on that data frame. When working with published summary data, such as those in Heathers and Brown's (2019) Table 1, a useful way to enter them is to copy the values from the PDF and paste them into `tibble::tribble()`, which is available via scrutiny. For demonstration purposes, the chunk below instead simulates a few sets of summary values:
```{r}
flying_pigs <- tibble::tibble(
  x  = runif(5, 0.2, 1) %>% round(2) %>% restore_zeros(),
  sd = runif(5, 0, 0.3) %>% round(2) %>% restore_zeros(),
  n  = 1000
)

flying_pigs
```
Now, simply run `debit_map()` on that data frame:
```{r}
flying_pigs %>%
  debit_map()
```
The `x`, `sd`, and `n` columns are the same as in the input. The main result, `consistency`, is the DEBIT consistency of the former three columns.
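If you only care about the flagged cases, one way to isolate them is to filter on `consistency`; a quick sketch using dplyr, which scrutiny builds upon:
```{r}
# Keep only the DEBIT-inconsistent rows, if there are any:
flying_pigs %>%
  debit_map() %>%
  dplyr::filter(!consistency)
```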
The scrutiny package also includes a small DEBIT example dataset, `pigs3`, which the rest of this vignette will use:
```{r}
pigs3  # data saved within the package

pigs3 %>%
  debit_map()
```
DEBIT only makes sense with binary means and SDs. Both `debit()` and `debit_map()` check if the inputs are such data, and fail if they are not:
```{r, error=TRUE}
pigs5  # no binary means / SDs!

pigs5 %>%
  debit_map()
```
Compared to `grim_map()`, `debit_map()` is more straightforward: there is no percentage conversion and no accounting for multiple scale items. The same is true when comparing the basic `grim()` and `debit()` functions. However, both implementations draw on scrutiny's arsenal of rounding procedures, controlled via their `rounding` argument.
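For example, here is the single case from above again, this time assuming that the reported values were rounded up rather than in either direction:
```{r}
# Hypothetical scenario: values rounded up (from 5),
# instead of the default "up_or_down":
debit(x = "0.35", sd = "0.18", n = 20, rounding = "up")
```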
## Summarizing results with `audit()`
Following up on a call to `debit_map()`, the generic function `audit()` summarizes test results:
```{r}
pigs3 %>%
  debit_map() %>%
  audit()
```
These columns are ---
1. `incons_cases`: number of inconsistent value sets.
2. `all_cases`: total number of value sets.
3. `incons_rate`: proportion of DEBIT-inconsistent value sets.
4. `mean_x`: average of binary distribution means.
5. `mean_sd`: average of binary distribution standard deviations.
6. `distinct_n`: number of different sample sizes.
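For instance, to extract just the inconsistency rate from this summary (a quick sketch using dplyr):
```{r}
# Pull out the proportion of DEBIT-inconsistent value sets:
pigs3 %>%
  debit_map() %>%
  audit() %>%
  dplyr::pull(incons_rate)
```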
## Visualizing results with `debit_plot()`
There is a specialized visualization function for DEBIT results, `debit_plot()`. Only run it on `debit_map()`'s output. It will fail otherwise.
```{r}
# Set the plot theme for the rest of the session:
ggplot2::theme_minimal(base_size = 12) %>%
  ggplot2::theme_set()

pigs3 %>%
  debit_map() %>%
  debit_plot()
```
DEBIT-consistent value pairs are blue, inconsistent ones red. The black arc is the DEBIT line: given the sample size, a pair of mean and SD values is DEBIT-consistent if and only if it crosses this line. More precisely, the inner box must cross the line; the outer box merely points to the inner one in case it is hard to see, and has no meaning of its own.
Apart from their colors, the inner boxes always look exactly as they do here: their sizes and shapes are completely determined by the mean and SD values. Since the inner boxes cannot be enlarged at will, the outer boxes help to spot them at a glance.
However, if outer boxes are not desired, they can be turned off like this:
```{r}
pigs3 %>%
  debit_map() %>%
  debit_plot(show_outer_boxes = FALSE)
```
Color settings and other ggplot2-typical options are available via arguments, as are settings handed down to `ggrepel::geom_text_repel()`, which creates the labels. For more, see `debit_plot()`'s documentation.
## Testing numeric sequences with `debit_map_seq()`
DEBIT analysts might be interested in the numeric neighborhood of reported values. Suppose you found DEBIT inconsistencies, as in our example `pigs3` data. You might wonder whether they are due to small reporting or computing errors.
Use `debit_map_seq()` to apply DEBIT to the values surrounding the reported means, SDs, and sample sizes:
```{r}
out_seq1 <- debit_map_seq(pigs3)
out_seq1
```
### Summaries with `audit_seq()`
As this output is a little unwieldy, run `audit_seq()` on the results:
```{r}
audit_seq(out_seq1)
```
Here is what the output columns mean:
- `x`, `sd`, and `n` are the original inputs, reconstructed and tested for `consistency` here.
- The `hits_*` columns display the number of DEBIT-consistent value combinations found within the specified `dispersion` range, either in total or when varying just one of the three parameters (see the sketch after this list).
- `diff_x` reports the absolute difference between `x` and the next consistent dispersed value (in dispersion steps, not as an actual numeric difference). `diff_x_up` and `diff_x_down` report the difference to the next higher or lower consistent value, respectively.
- The `diff_sd*` and `diff_n*` columns work analogously for `sd` and `n`.
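As announced above, here is a quick sketch for singling out the most problematic cases, i.e., those without a single consistent combination anywhere in their neighborhood:
```{r}
# Cases with no consistent value combination in the tested range:
audit_seq(out_seq1) %>%
  dplyr::filter(hits_total == 0)
```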
The default for `dispersion` is `1:5`, for five steps up and down. When the `dispersion` sequence gets longer, the number of hits tends to increase:
```{r}
out_seq2 <- debit_map_seq(pigs3, dispersion = 1:7, include_consistent = TRUE)
audit_seq(out_seq2)
```
### Visualizing DEBIT-checked sequences
Although it is possible in principle to visualize results of `debit_map_seq()` using `debit_plot()`, it's not recommended because the results don't currently look great. This issue might be fixed in a future version of `debit_plot()`.
## Handling unknown group sizes with `debit_map_total_n()`
### Problems from underreporting
Unfortunately, some studies that report group averages don't report the corresponding group sizes --- only a total sample size. This makes any direct use of DEBIT impossible because only `x` and `sd` values are known, not `n` values. All that is feasible here in terms of DEBIT is to take a number around half the total sample size, go up and down from it, and check which *hypothetical* group sizes are consistent with the reported group means and SDs. `debit_map_total_n()` semi-automates this process, motivated by a recent GRIM analysis [@bauer_expression_2021].
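To get a sense of the underlying idea, scrutiny's `disperse_total()` splits a total sample size into pairs of hypothetical group sizes, going outward from an even split (shown here with its default dispersion):
```{r}
# Hypothetical group sizes for a total sample size of 70:
disperse_total(70)
```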
Here is an example:
```{r}
out_total_n <- tibble::tribble(
  ~x1,    ~x2,    ~sd1,   ~sd2,   ~n,
  "0.30", "0.28", "0.17", "0.10", 70,
  "0.41", "0.39", "0.09", "0.15", 65
)

out_total_n <- debit_map_total_n(out_total_n)

out_total_n

audit_total_n(out_total_n)
```
See the GRIM vignette, section *Handling unknown group sizes with `grim_map_total_n()`*, for a more comprehensive case study. It uses `grim_map_total_n()`, the GRIM analogue of `debit_map_total_n()`.
# References