| Type: | Package |
| Title: | Content Validity Indices for Instrument Development |
| Version: | 0.1.0 |
| Description: | Computes content validity indices commonly used in instrument development and questionnaire validation, including the Item-level Content Validity Index (I-CVI), Scale-level Content Validity Index (S-CVI), modified kappa adjusted for chance agreement, Aiken's V, and Lawshe's Content Validity Ratio (CVR). Methods follow Lynn (1986) <doi:10.1097/00006199-198611000-00017>, Polit and Beck (2006) <doi:10.1002/nur.20147>, Aiken (1985) <doi:10.1177/0013164485451012>, and Lawshe (1975) <doi:10.1111/j.1744-6570.1975.tb01393.x>. |
| License: | MIT + file LICENSE |
| Encoding: | UTF-8 |
| Suggests: | knitr, rmarkdown, testthat (≥ 3.0.0) |
| VignetteBuilder: | knitr |
| Config/testthat/edition: | 3 |
| URL: | https://github.com/Rafhq1403/contentValidity |
| BugReports: | https://github.com/Rafhq1403/contentValidity/issues |
| Config/roxygen2/version: | 8.0.0 |
| Depends: | R (≥ 3.5) |
| LazyData: | true |
| NeedsCompilation: | no |
| Packaged: | 2026-05-05 20:54:51 UTC; rashedalqahtani |
| Author: | Rashed Alqahtani [aut, cre] |
| Maintainer: | Rashed Alqahtani <rashed.alqahtani@gmail.com> |
| Repository: | CRAN |
| Date/Publication: | 2026-05-11 18:40:20 UTC |
contentValidity: Content Validity Indices for Instrument Development
Description
The contentValidity package provides functions for computing content
validity indices used in questionnaire and instrument development,
including the Item-level and Scale-level Content Validity Indices,
modified kappa, Aiken's V, and Lawshe's CVR with corrected critical
values.
Main functions
- icvi(): Item-level Content Validity Index
- scvi_ave(): Scale-level CVI, average method
- scvi_ua(): Scale-level CVI, universal agreement method
- mod_kappa(): Modified kappa adjusted for chance agreement
- aiken_v(): Aiken's V coefficient
- cvr(): Lawshe's Content Validity Ratio
- cvr_critical(): Critical CVR values (Wilson, Pan & Schumsky 2012)
Author(s)
Maintainer: Rashed Alqahtani <rashed.alqahtani@gmail.com>
Authors:
Rashed Alqahtani <rashed.alqahtani@gmail.com>
See Also
Useful links:
Report bugs at https://github.com/Rafhq1403/contentValidity/issues
Aiken's V coefficient of content validity
Description
Computes Aiken's V (Aiken, 1985), an index of content validity that uses the full rating scale rather than dichotomizing responses as in I-CVI. Aiken's V ranges from 0 to 1, where 1 indicates all experts gave the maximum rating and 0 indicates all gave the minimum.
Usage
aiken_v(ratings, lo = 1, hi = 4, na.rm = FALSE)
Arguments
ratings |
A numeric matrix or data frame of expert ratings (rows = experts, columns = items). A numeric vector is also accepted, treated as a single item. |
lo |
Numeric. Minimum possible rating on the scale. Default 1. |
hi |
Numeric. Maximum possible rating on the scale. Default 4. |
na.rm |
Logical. If TRUE, ratings that are NA are removed before computation. Defaults to FALSE. |
Details
Aiken's V is calculated as:
V = (mean(X) - lo) / (hi - lo)
where mean(X) is the mean expert rating across raters, and lo and
hi are the minimum and maximum possible scale values, respectively.
A common cutoff is V >= 0.70 for adequate content validity, though stricter thresholds are sometimes applied depending on panel size and research context. Unlike I-CVI, Aiken's V uses the full rating scale, so a rating of 4 contributes more than a rating of 3 (rather than both being counted equally as "relevant").
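As a quick check of the formula above, Aiken's V for a single item can be computed by hand in base R (a sketch with hypothetical ratings, not the package's internal code):

```r
# Five hypothetical experts rate one item on a 1-4 scale.
x <- c(4, 4, 3, 4, 4)
lo <- 1
hi <- 4
# V = (mean rating - scale minimum) / (scale range)
v <- (mean(x) - lo) / (hi - lo)
v  # (3.8 - 1) / 3 = 0.933...
```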
Value
A named numeric vector of V values, one per item. If ratings
is a vector, returns a single numeric value.
References
Aiken, L. R. (1985). Three coefficients for analyzing the reliability and validity of ratings. Educational and Psychological Measurement, 45(1), 131-142. doi:10.1177/0013164485451012
Examples
ratings <- matrix(
c(4, 4, 3, 4, 4,
3, 4, 4, 4, 3,
2, 3, 3, 4, 3,
1, 2, 3, 2, 3),
nrow = 5,
dimnames = list(NULL, paste0("item", 1:4))
)
aiken_v(ratings)
# 5-point scale
aiken_v(c(5, 4, 5, 5, 4), lo = 1, hi = 5)
APA-style content validity table
Description
Generates a publication-ready content validity table following APA
conventions, suitable for inclusion in journal manuscripts, theses, and
technical reports. Returns a clean data frame by default, with optional
rendering to markdown, HTML, or LaTeX via knitr::kable().
Usage
apa_table(x, ...)
## S3 method for class 'content_validity'
apa_table(
x,
format = c("data.frame", "markdown", "html", "latex", "pipe"),
digits = 2,
interpretation = TRUE,
caption = NULL,
...
)
Arguments
x |
An object to format. Currently supports objects of class content_validity. |
... |
Further arguments passed to methods. |
format |
Output format. One of "data.frame" (the default), "markdown", "html", "latex", or "pipe". |
digits |
Integer. Number of decimal places for numeric values. Default 2 (APA convention for proportions and correlations). |
interpretation |
Logical. Whether to include an Interpretation column based on modified-kappa cutoffs (Cicchetti & Sparrow, 1981). Default TRUE. |
caption |
Optional character string. The caption to use when format is not "data.frame". |
Details
Item-level interpretation labels follow the modified-kappa cutoffs of Cicchetti and Sparrow (1981), as adopted by Polit, Beck, and Owen (2007):
Excellent: kappa* > 0.74
Good: kappa* 0.60 to 0.74
Fair: kappa* 0.40 to 0.59
Poor: kappa* < 0.40
Scale-level indices are reported in the caption rather than the table body, matching the typical layout used in nursing, education, and health-sciences journals.
Value
A data frame (when format = "data.frame") or a character
string suitable for inclusion in an R Markdown document (other formats).
References
Cicchetti, D. V., & Sparrow, S. A. (1981). Developing criteria for establishing interrater reliability of specific items: Applications to assessment of adaptive behavior. American Journal of Mental Deficiency, 86(2), 127-137.
Polit, D. F., Beck, C. T., & Owen, S. V. (2007). Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Research in Nursing & Health, 30(4), 459-467. doi:10.1002/nur.20199
Examples
data(cvi_example)
result <- content_validity(cvi_example)
# Default: a clean data frame
apa_table(result)
# Markdown for R Markdown documents
if (requireNamespace("knitr", quietly = TRUE)) {
apa_table(result, format = "markdown")
}
Comprehensive content validity analysis
Description
Runs the standard relevance-scale content validity indices on a single ratings matrix and returns a tidy summary. Computes Item-level CVI, modified kappa, and Aiken's V at the item level, and S-CVI/Ave, S-CVI/UA, and the mean modified kappa at the scale level.
Usage
content_validity(
ratings,
relevant_threshold = 3,
lo = 1,
hi = 4,
na.rm = FALSE
)
Arguments
ratings |
A numeric matrix or data frame of expert ratings (rows = experts, columns = items) on a relevance scale. |
relevant_threshold |
Integer. Minimum rating considered "relevant". Passed to icvi() and mod_kappa(). Defaults to 3. |
lo, hi |
Numeric. Minimum and maximum possible rating values on the scale; passed to aiken_v(). |
na.rm |
Logical. Passed to all underlying functions. Defaults to FALSE. |
Details
Lawshe's CVR is not included in this wrapper because it uses a different
rating convention (essential / useful but not essential / not necessary).
For CVR analyses, use cvr() and cvr_critical() directly.
Value
An object of class "content_validity": a list containing
- items: a data frame with one row per item and columns item, icvi, mod_kappa, and aiken_v.
- scale: a named numeric vector with scvi_ave, scvi_ua, and mean_kappa.
- n_experts: integer, number of experts (rows).
- n_items: integer, number of items (columns).
See Also
icvi(), scvi_ave(), scvi_ua(), mod_kappa(),
aiken_v(), cvr()
Examples
ratings <- matrix(
c(4, 4, 3, 4, 4,
3, 4, 4, 4, 3,
2, 3, 3, 4, 3,
1, 2, 3, 2, 3),
nrow = 5,
dimnames = list(NULL, paste0("item", 1:4))
)
result <- content_validity(ratings)
result
result$items
result$scale
Example expert ratings for content validity analysis
Description
A simulated dataset illustrating typical expert ratings during the content validation of a 10-item depression screening instrument. Six expert clinicians rate each item's relevance on a 4-point scale.
Usage
cvi_example
Format
A 6 by 10 numeric matrix with rows representing expert raters
(expert1 through expert6) and columns representing candidate items
(item1 through item10). Values are on a 4-point relevance scale:
1: not relevant
2: somewhat relevant (item needs major revision)
3: quite relevant (item needs minor revision)
4: highly relevant
Details
The pattern of ratings is realistic: some items achieve universal agreement, most show strong but imperfect agreement, and a couple of items would be flagged for revision based on standard CVI cutoffs (e.g., items 5 and 9 in this example).
Source
Simulated for demonstration; not based on real expert ratings.
Examples
data(cvi_example)
icvi(cvi_example)
content_validity(cvi_example)
Lawshe's Content Validity Ratio (CVR)
Description
Computes Lawshe's (1975) Content Validity Ratio for one or more items rated by an expert panel. Each expert classifies an item as "essential", "useful but not essential", or "not necessary"; CVR captures the proportion of experts endorsing "essential" relative to chance.
Usage
cvr(ratings, essential = 1, na.rm = FALSE)
Arguments
ratings |
A numeric matrix or data frame of expert ratings (rows = experts, columns = items). A numeric vector is also accepted, treated as a single item. |
essential |
Numeric vector. Rating value(s) that indicate an expert classified the item as "essential". Defaults to 1. |
na.rm |
Logical. If TRUE, ratings that are NA are removed before computation. Defaults to FALSE. |
Details
The formula is:
CVR = (n_e - N/2) / (N/2)
where n_e is the number of experts rating the item as essential
and N is the total number of experts.
Use cvr_critical() to obtain the minimum CVR considered statistically
significant for a given panel size, following the corrected critical
values of Wilson, Pan, and Schumsky (2012).
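The formula can be verified by hand with hypothetical counts (a sketch, not the package's internal code):

```r
# 10 hypothetical experts, 8 of whom rate the item "essential".
n_e <- 8
N <- 10
cvr_val <- (n_e - N / 2) / (N / 2)
cvr_val  # (8 - 5) / 5 = 0.6
```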
Value
A named numeric vector of CVR values per item, ranging from -1
to +1. If ratings is a vector, returns a single numeric value.
References
Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 563-575. doi:10.1111/j.1744-6570.1975.tb01393.x
Wilson, F. R., Pan, W., & Schumsky, D. A. (2012). Recalculation of the critical values for Lawshe's content validity ratio. Measurement and Evaluation in Counseling and Development, 45(3), 197-210. doi:10.1177/0748175612440286
Examples
# 10 experts rating 3 items on Lawshe's 3-point scale
# (1 = essential, 2 = useful, 3 = not necessary)
ratings <- matrix(
c(1, 1, 1, 1, 1, 1, 1, 1, 2, 2, # 8 of 10 essential
1, 1, 1, 2, 2, 2, 2, 3, 3, 3, # 3 of 10 essential
1, 1, 1, 1, 1, 1, 1, 1, 1, 1), # 10 of 10 essential
nrow = 10,
dimnames = list(NULL, paste0("item", 1:3))
)
cvr(ratings)
# Compare to the critical value for N = 10
cvr_critical(10)
Critical CVR value for a given panel size
Description
Returns the minimum Content Validity Ratio considered statistically significant for a panel of N experts at the specified alpha level. The calculation uses the exact binomial distribution under the null hypothesis that each expert independently rates "essential" with probability 0.5, following the corrected approach of Wilson, Pan, and Schumsky (2012).
Usage
cvr_critical(n_experts, alpha = 0.05)
Arguments
n_experts |
Positive integer. Number of experts on the panel. |
alpha |
Numeric. One-tailed significance level. Defaults to 0.05. |
Details
The critical value is determined as the smallest k such that
P(X >= k) <= alpha when X ~ Binomial(N, 0.5), then
transformed to the CVR scale via CVR_crit = (k - N/2) / (N/2).
Wilson, Pan, and Schumsky (2012) demonstrated that Lawshe's (1975) original critical-value table contained errors, especially for small panels. The exact binomial computation used here is their recommended replacement.
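The exact-binomial procedure described above can be sketched in a few lines of base R (an illustration under the stated null hypothesis, not the package's internal code):

```r
# Smallest k with P(X >= k) <= alpha for X ~ Binomial(N, 0.5),
# mapped to the CVR scale. Returns NA if no k reaches significance.
cvr_crit_sketch <- function(N, alpha = 0.05) {
  # P(X >= k) = P(X > k - 1), evaluated for k = 0, ..., N
  p_upper <- pbinom((0:N) - 1, size = N, prob = 0.5, lower.tail = FALSE)
  k <- which(p_upper <= alpha)[1] - 1
  if (is.na(k)) return(NA_real_)
  (k - N / 2) / (N / 2)
}
cvr_crit_sketch(10)  # 0.8 (9 of 10 experts required)
```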
Value
Numeric. The critical CVR value. CVR values at or above this
threshold are statistically significant. Returns NA_real_ if no CVR
value can reach significance at the specified alpha (which can happen
for very small panels with stringent alpha).
References
Wilson, F. R., Pan, W., & Schumsky, D. A. (2012). Recalculation of the critical values for Lawshe's content validity ratio. Measurement and Evaluation in Counseling and Development, 45(3), 197-210. doi:10.1177/0748175612440286
Examples
cvr_critical(10) # 0.80; need 9 of 10 experts to call it essential
cvr_critical(20) # 0.50
cvr_critical(40) # 0.30
cvr_critical(10, alpha = 0.01)
Item-level Content Validity Index (I-CVI)
Description
Computes the Item-level Content Validity Index (I-CVI) for one or more items rated by a panel of experts on a relevance scale. Following Lynn (1986) and Polit & Beck (2006), I-CVI is calculated as the proportion of experts who rate an item as 3 (relevant) or 4 (highly relevant) on a 4-point relevance scale.
Usage
icvi(ratings, relevant_threshold = 3, na.rm = FALSE)
Arguments
ratings |
A numeric matrix or data frame of expert ratings, where rows represent experts and columns represent items. Values are typically on a 1-4 relevance scale. A numeric vector is also accepted, treated as a single item. |
relevant_threshold |
Integer. The minimum rating considered "relevant". Defaults to 3 (i.e., ratings of 3 or 4 count as relevant on a 4-point scale). |
na.rm |
Logical. If TRUE, ratings that are NA are removed before computation. Defaults to FALSE. |
Details
Common interpretation guidelines (Polit & Beck, 2006):
I-CVI >= 0.78: excellent content validity (with 6 or more experts).
I-CVI 0.70-0.78: acceptable, item may need revision.
I-CVI < 0.70: item should be revised or eliminated.
With fewer than six experts, Lynn (1986) recommends a stricter cutoff of I-CVI = 1.00 for unanimous agreement.
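For a single item, the I-CVI reduces to the proportion of ratings at or above the relevance threshold, which can be checked by hand (a sketch with hypothetical ratings, not the package's internal code):

```r
# Five hypothetical experts; ratings of 3 or 4 count as "relevant".
x <- c(4, 4, 3, 2, 4)
icvi_val <- mean(x >= 3)
icvi_val  # 4 of 5 experts -> 0.8
```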
Value
A named numeric vector of I-CVI values, one per item. If ratings
is a vector, returns a single numeric value.
References
Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35(6), 382-385. doi:10.1097/00006199-198611000-00017
Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what's being reported? Critique and recommendations. Research in Nursing & Health, 29(5), 489-497. doi:10.1002/nur.20147
Examples
# Five experts rating four items on a 1-4 relevance scale
ratings <- matrix(
c(4, 4, 3, 4, 4, # Item 1
3, 4, 4, 4, 3, # Item 2
2, 3, 3, 4, 3, # Item 3
1, 2, 3, 2, 3), # Item 4
nrow = 5,
dimnames = list(NULL, paste0("item", 1:4))
)
icvi(ratings)
# Single item supplied as a vector
icvi(c(4, 4, 3, 3, 4))
# Stricter threshold (only highest rating counts as relevant)
icvi(ratings, relevant_threshold = 4)
Modified kappa - I-CVI adjusted for chance agreement
Description
Computes modified kappa for each item, as proposed by Polit, Beck, and Owen (2007). Modified kappa adjusts the Item-level Content Validity Index (I-CVI) for chance agreement under the assumption that each expert independently rates an item as relevant with probability 0.5.
Usage
mod_kappa(ratings, relevant_threshold = 3, na.rm = FALSE)
Arguments
ratings |
A numeric matrix or data frame of expert ratings (rows = experts, columns = items). A numeric vector is also accepted, treated as a single item. |
relevant_threshold |
Integer. Minimum rating considered "relevant". Defaults to 3. |
na.rm |
Logical. If TRUE, ratings that are NA are removed before computation. Defaults to FALSE. |
Details
The formula is:
kappa* = (I-CVI - P_c) / (1 - P_c)
where the chance agreement probability is
P_c = choose(N, A) * 0.5^N
with N = number of experts and A = number of experts rating the item as relevant.
Common interpretation cutoffs (Cicchetti and Sparrow, 1981; adopted by Polit et al., 2007):
kappa* < 0.40: poor
kappa* 0.40-0.59: fair
kappa* 0.60-0.74: good
kappa* > 0.74: excellent
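Working the formulas above through a small hypothetical panel (a sketch, not the package's internal code):

```r
# 5 experts, 4 of whom rate the item relevant, so I-CVI = 0.8.
N <- 5
A <- 4
icvi_val <- A / N
p_c <- choose(N, A) * 0.5^N        # chance agreement: 5/32 = 0.15625
kappa_star <- (icvi_val - p_c) / (1 - p_c)
kappa_star  # about 0.763, "excellent" by the cutoffs above
```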
Value
A named numeric vector of modified-kappa values, one per item. If ratings
is a vector, returns a single numeric value.
References
Cicchetti, D. V., & Sparrow, S. A. (1981). Developing criteria for establishing interrater reliability of specific items: Applications to assessment of adaptive behavior. American Journal of Mental Deficiency, 86(2), 127-137.
Polit, D. F., Beck, C. T., & Owen, S. V. (2007). Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Research in Nursing & Health, 30(4), 459-467. doi:10.1002/nur.20199
Examples
ratings <- matrix(
c(4, 4, 3, 4, 4,
3, 4, 4, 4, 3,
2, 3, 3, 4, 3,
1, 2, 3, 2, 3),
nrow = 5,
dimnames = list(NULL, paste0("item", 1:4))
)
mod_kappa(ratings)
Print method for content_validity objects
Description
Print method for content_validity objects
Usage
## S3 method for class 'content_validity'
print(x, digits = 4, ...)
Arguments
x |
A content_validity object, as returned by content_validity(). |
digits |
Integer. Number of digits to round numeric output to. |
... |
Currently ignored. |
Value
Invisibly returns x.
Scale-level Content Validity Index, Average method (S-CVI/Ave)
Description
Computes the Scale-level Content Validity Index using the averaging method, defined as the mean of the Item-level Content Validity Indices (I-CVI) across all items in the instrument.
Usage
scvi_ave(ratings, relevant_threshold = 3, na.rm = FALSE)
Arguments
ratings |
A numeric matrix or data frame of expert ratings (rows = experts, columns = items) on a relevance scale. |
relevant_threshold |
Integer. Minimum rating considered "relevant". Defaults to 3. |
na.rm |
Logical. Passed through to the underlying icvi() computation. |
Details
S-CVI/Ave >= 0.90 is generally considered excellent content validity at the scale level (Polit & Beck, 2006). Note that S-CVI is undefined for a single item; supply a matrix or data frame with two or more item columns.
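The averaging method is literally the mean of the per-item indices, which is easy to check with hypothetical I-CVI values (a sketch, not the package's internal code):

```r
# Hypothetical per-item I-CVIs for a 4-item scale
icvis <- c(1.0, 1.0, 0.8, 0.6)
scvi <- mean(icvis)
scvi  # 0.85
```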
Value
A single numeric value: the average I-CVI across items.
References
Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what's being reported? Critique and recommendations. Research in Nursing & Health, 29(5), 489-497. doi:10.1002/nur.20147
Examples
ratings <- matrix(
c(4, 4, 3, 4, 4,
3, 4, 4, 4, 3,
2, 3, 3, 4, 3,
1, 2, 3, 2, 3),
nrow = 5
)
scvi_ave(ratings)
Scale-level Content Validity Index, Universal Agreement method (S-CVI/UA)
Description
Computes the Scale-level Content Validity Index using the universal agreement method, defined as the proportion of items where all experts rate the item as relevant.
Usage
scvi_ua(ratings, relevant_threshold = 3, na.rm = FALSE)
Arguments
ratings |
A numeric matrix or data frame of expert ratings (rows = experts, columns = items) on a relevance scale. |
relevant_threshold |
Integer. Minimum rating considered "relevant". Defaults to 3. |
na.rm |
Logical. If TRUE, ratings that are NA are removed before computation. Defaults to FALSE. |
Details
S-CVI/UA is a stricter criterion than S-CVI/Ave and tends to produce lower values, especially with larger expert panels. Polit and Beck (2006) recommend reporting both indices together. With small panels of 3-5 experts, S-CVI/UA >= 0.80 is often considered acceptable.
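The universal agreement method counts only items whose I-CVI equals 1, which explains why it is the stricter of the two (a sketch with hypothetical I-CVI values, not the package's internal code):

```r
# Hypothetical per-item I-CVIs for a 4-item scale
icvis <- c(1.0, 1.0, 0.8, 0.6)
scvi_ua_val <- mean(icvis == 1)
scvi_ua_val  # 2 of 4 items have universal agreement -> 0.5
```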
Value
A single numeric value: the proportion of items with universal agreement.
References
Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what's being reported? Critique and recommendations. Research in Nursing & Health, 29(5), 489-497. doi:10.1002/nur.20147
Examples
ratings <- matrix(
c(4, 4, 3, 4, 4,
3, 4, 4, 4, 3,
2, 3, 3, 4, 3,
1, 2, 3, 2, 3),
nrow = 5
)
scvi_ua(ratings)