--- title: |- Bradley-Terry Models in R abstract: | This is a short overview of the R add-on package **BradleyTerry2**, which facilitates the specification and fitting of Bradley-Terry logit, probit or cauchit models to pair-comparison data. Included are the standard 'unstructured' Bradley-Terry model, structured versions in which the parameters are related through a linear predictor to explanatory variables, and the possibility of an order or 'home advantage' effect or other 'contest-specific' effects. Model fitting is either by maximum likelihood, by penalized quasi-likelihood (for models which involve a random effect), or by bias-reduced maximum likelihood in which the first-order asymptotic bias of parameter estimates is eliminated. Also provided are a simple and efficient approach to handling missing covariate data, and suitably-defined residuals for diagnostic checking of the linear predictor. date: |- For **BradleyTerry2** version `r packageDescription("BradleyTerry2")[["Version"]]`, `r Sys.Date()` vignette: |- %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} %\VignetteIndexEntry{Bradley-Terry Models in R} %\VignetteDepends{} output: function(){ if (requireNamespace('bookdown', quietly = TRUE)) { function(...){ bookdown::html_document2(..., base_format = rmarkdown::html_vignette, number_sections = TRUE, math_method = "mathjax") } } else function(...){ rmarkdown::html_vignette(..., number_sections = TRUE, math_method = "mathjax") }}() link-citations: yes bibliography: BradleyTerry.bib --- ``` {r include=FALSE} library <- function(...) suppressPackageStartupMessages(base::library(...)) library(knitr) opts_chunk$set( tidy=FALSE ) ``` ``` {r set_options, echo = FALSE} options(prompt = "R> ", continue = "+ ", width = 70, useFancyQuotes = FALSE, digits = 7) ``` ## Contents { .unnumbered} \@ref(sec:intro) [Introduction] \@ref(sec:BTmodel) [Standard Bradley-Terry model]     \@ref(sec:citations) [Example: Analysis of journal citations]     \@ref(sec:bias-reduced) [Bias-reduced estimates] \@ref(sec:covariates) [Abilities predicted by explanatory variables]     \@ref(sec:player-specific) ['Player-specific' predictor variables]     \@ref(sec:missing) [Missing values]     \@ref(sec:order) [Order effect]     \@ref(sec:CEMS) [More general (contest-specific) predictors] \@ref(sec:ability) [Ability scores] \@ref(sec:residuals) [Residuals] \@ref(sec:model) [Model search] \@ref(sec:data) [Setting up the data]     \@ref(sec:contest) [Contest-specific data]     \@ref(sec:non-contest) [Non contest-specific data]     \@ref(sec:wide) [Converting data from a 'wide' format]     \@ref(sec:BradleyTerry) [Converting data from the format required by the earlier **BradleyTerry** package] \@ref(sec:functions) [A list of the functions provided in **BradleyTerry2**] \@ref(sec:finalremarks) [Some final remarks]     \@ref(sec:ties) [A note on the treatment of ties]     \@ref(sec:random-effects) [A note on 'contest-specific' random effects] [Acknowledgments] [References] ## Introduction {#sec:intro} The Bradley-Terry model [@brad:terr:52] assumes that in a 'contest' between any two 'players', say player $i$ and player $j$ $(i, j \in \{1,\ldots,K\})$, the odds that $i$ beats $j$ are $\alpha_i/\alpha_j$, where $\alpha_i$ and $\alpha_j$ are positive-valued parameters which might be thought of as representing 'ability'. A general introduction can be found in @brad:84 or @agre:02. 
Applications are many, ranging from experimental psychology to the analysis of sports tournaments to genetics [for example, the allelic transmission/disequilibrium test of @sham:curt:95 is based on a Bradley-Terry model in which the 'players' are alleles]. In typical psychometric applications the 'contests' are comparisons, made by different human subjects, between pairs of items.

The model can alternatively be expressed in the logit-linear form
$$\mathop{\rm logit}[\mathop{\rm pr}(i\ \mathrm{beats}\ j)]=\lambda_i-\lambda_j, \label{eq:unstructured} (\#eq:unstructured)$$
where $\lambda_i=\log\alpha_i$ for all $i$. Thus, assuming independence of all contests, the parameters $\{\lambda_i\}$ can be estimated by maximum likelihood using standard software for generalized linear models, with a suitably specified model matrix.

The primary purpose of the **BradleyTerry2** package [@turn:12], implemented in the R statistical computing environment [@ihak:gent:96;@r], is to facilitate the specification and fitting of such models and some extensions. The **BradleyTerry2** package supersedes the earlier **BradleyTerry** package [@firt:05], providing a more flexible user interface to allow a wider range of models to be fitted. In particular, **BradleyTerry2** allows the inclusion of simple random effects so that the ability parameters can be related to available explanatory variables through a linear predictor of the form
$$\lambda_i=\sum_{r=1}^p\beta_rx_{ir} + U_i. (\#eq:autonumber2) $$
The inclusion of the prediction error $U_i$ allows for variability between players with equal covariate values and induces correlation between comparisons with a common player. **BradleyTerry2** also allows for general contest-specific effects to be included in the model and allows the logit link to be replaced, if required, by a different symmetric link function (probit or cauchit).

The remainder of the paper is organised as follows. Section \@ref(sec:BTmodel) demonstrates how to use the **BradleyTerry2** package to fit a standard (i.e., unstructured) Bradley-Terry model, with a separate ability parameter estimated for each player, including the use of bias-reduced estimation for such models. Section \@ref(sec:covariates) considers variations of the standard model, including the use of player-specific variables to model ability and allowing for contest-specific effects such as an order effect or judge effects. Sections \@ref(sec:ability) and \@ref(sec:residuals) explain how to obtain important information about a fitted model, in particular the estimates of ability and their standard errors, and player-level residuals, whilst Section \@ref(sec:model) notes the functions available to aid model search. Section \@ref(sec:data) explains in more detail how to set up data for use with the **BradleyTerry2** package, Section \@ref(sec:functions) lists the functions provided by the package, and finally Section \@ref(sec:finalremarks) comments on two directions for further development of the software.

## Standard Bradley-Terry model {#sec:BTmodel}

### Example: Analysis of journal citations {#sec:citations}

The following data come from page 448 of @agre:02, extracted from the larger table of @stig:94.
The data are counts of citations among four prominent journals of statistics and are included in the **BradleyTerry2** package as the data set `citations`:

``` {r LoadBradleyTerry2}
library("BradleyTerry2")
```

``` {r CitationData}
data("citations", package = "BradleyTerry2")
```

``` {r CitationData2}
citations
```

Thus, for example, *Biometrika* was cited 498 times by papers in *Journal of the American Statistical Association* (JASA) during the period under study. In order to fit a Bradley-Terry model to these data using `BTm` from the **BradleyTerry2** package, the data must first be converted to binomial frequencies. That is, the data need to be organised into pairs (`player1`, `player2`) and corresponding frequencies of wins and losses for `player1` against `player2`. The **BradleyTerry2** package provides the utility function `countsToBinomial` to convert a contingency table of wins to the format just described:

``` {r countsToBinomial}
citations.sf <- countsToBinomial(citations)
names(citations.sf)[1:2] <- c("journal1", "journal2")
citations.sf
```

Note that the self-citation counts are ignored -- these provide no information on the ability parameters, since the abilities are relative rather than absolute quantities. The binomial response can then be modelled by the difference in player abilities as follows:

``` {r citeModel}
citeModel <- BTm(cbind(win1, win2), journal1, journal2, ~ journal,
                 id = "journal", data = citations.sf)
citeModel
```

The coefficients here are maximum likelihood estimates of $\lambda_2, \lambda_3, \lambda_4$, with $\lambda_1$ (the log-ability for *Biometrika*) set to zero as an identifying convention. The one-sided model formula

``` r
~ journal
```

specifies the model for player ability, in this case the 'citeability' of the journal. The `id` argument specifies that `"journal"` is the name to be used for the factor that identifies the player -- the values of which are given here by `journal1` and `journal2` for the first and second players respectively. Therefore in this case a separate citeability parameter is estimated for each journal. If a different 'reference' journal is required, this can be achieved using the optional `refcat` argument: for example, making use of `update` to avoid re-specifying the whole model,

``` {r citeModelupdate}
update(citeModel, refcat = "JASA")
```

-- the same model in a different parameterization.

The use of the standard Bradley-Terry model for this application might perhaps seem rather questionable -- for example, citations within a published paper can hardly be considered independent, and the model discards potentially important information on self-citation. @stig:94 provides arguments to defend the model's use despite such concerns.

### Bias-reduced estimates {#sec:bias-reduced}

Estimation of the standard Bradley-Terry model in `BTm` is by default performed by maximum likelihood, using an internal call to the `glm` function. An alternative is to fit by bias-reduced maximum likelihood [@firt:93]: this requires additionally the **brglm** package [@kosm:07], and is specified by the optional argument `br = TRUE`. The resultant effect, namely removal of first-order asymptotic bias in the estimated coefficients, is often quite small. One notable feature of bias-reduced fits is that all estimated coefficients and standard errors are necessarily finite, even in situations of 'complete separation' where maximum likelihood estimates take infinite values [@hein:sche:02].
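As a small artificial illustration of complete separation (a hypothetical toy example, not one of the package's data sets), suppose three players' results happen to be perfectly ordered, so that no finite maximum likelihood estimates exist. The sketch below assumes the **brglm** package is installed for the bias-reduced fit; output is not shown.

``` r
## Toy data: A beats B, A beats C and B beats C, so the results are
## perfectly ordered and the ML estimates of ability are unbounded.
sep <- data.frame(p1 = factor(c("A", "A", "B"), levels = c("A", "B", "C")),
                  p2 = factor(c("B", "C", "C"), levels = c("A", "B", "C")),
                  win1 = 1, win2 = 0)
BTm(cbind(win1, win2), p1, p2, data = sep)            # glm reports huge estimates and SEs
BTm(cbind(win1, win2), p1, p2, data = sep, br = TRUE) # bias-reduced fit: finite estimates
```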
For the citation data, the parameter estimates are only very slightly changed in the bias-reduced fit:

``` {r citeModelupdate2}
update(citeModel, br = TRUE)
```

Here the bias of maximum likelihood is small because the binomial counts are fairly large. In more sparse arrangements of contests -- that is, where there is less or no replication of the contests -- the effect of bias reduction would typically be more substantial than the insignificant one seen here.

## Abilities predicted by explanatory variables {#sec:covariates}

### 'Player-specific' predictor variables {#sec:player-specific}

In some application contexts there may be 'player-specific' explanatory variables available, and it is then natural to consider model simplification of the form
$$\lambda_i=\sum_{r=1}^p\beta_rx_{ir} + U_i, (\#eq:autonumber3) $$
in which the ability of each player $i$ is related to explanatory variables $x_{i1},\ldots,x_{ip}$ through a linear predictor with coefficients $\beta_1,\ldots,\beta_p$; the $\{U_i\}$ are independent errors. Dependence of the player abilities on explanatory variables can be specified via the `formula` argument, using the standard *S*-language model formulae. The difference in the abilities of player $i$ and player $j$ is modelled by
$$\sum_{r=1}^p\beta_rx_{ir} - \sum_{r=1}^p\beta_rx_{jr} + U_i - U_j, \label{eq:structured} (\#eq:structured)$$
where $U_i \sim N(0, \sigma^2)$ for all $i$. The Bradley-Terry model is then a generalized linear mixed model, which the `BTm` function currently fits by using the penalized quasi-likelihood algorithm of @bres:93.

As an illustration, consider the following simple model for the `flatlizards` data, which predicts the fighting ability of Augrabies flat lizards by body size (snout to vent length):

``` {r lizModel}
options(show.signif.stars = FALSE)
data("flatlizards", package = "BradleyTerry2")
lizModel <- BTm(1, winner, loser, ~ SVL[..] + (1|..),
                data = flatlizards)
```

Here the winner of each fight is compared to the loser, so the outcome is always 1. The special name '`..`' appears in the formula as the default identifier for players, in the absence of a user-specified `id` argument. The values of this factor are given by `winner` for the winning lizard and `loser` for the losing lizard in each contest. These factors are provided in the data frame `contests` that is the first element of the list object `flatlizards`. The second element of `flatlizards` is another data frame, `predictors`, containing measurements on the observed lizards, including `SVL`, which is the snout to vent length. Thus `SVL[..]` represents the snout to vent length indexed by lizard (`winner` or `loser` as appropriate). Finally, a random intercept for each lizard is included using the bar notation familiar to users of the **lme4** package [@bate:11]. (Note that a random intercept is the only random effect structure currently implemented in **BradleyTerry2**.)

The fitted model is summarized below:

``` {r summarize_lizModel}
summary(lizModel)
```

The coefficient of snout to vent length is weakly significant; however, the standard deviation of the random effect is quite large, suggesting that this simple model has fairly poor explanatory power. A more appropriate model is considered in the next section.

### Missing values {#sec:missing}

The contest data may include all possible pairs of players and hence rows of missing data corresponding to players paired with themselves. Such rows contribute no information to the Bradley-Terry model and are simply discarded by `BTm`.
Where there are missing values in player-specific *predictor* (or *explanatory*) variables which appear in the formula, it will typically be very wasteful to discard all contests involving players for which some values are missing. Instead, such cases are accommodated by the inclusion of one or more parameters in the model. If, for example, player $1$ has one or more of its predictor values $x_{11},\ldots,x_{1p}$ missing, then the combination of Equations \@ref(eq:unstructured) and \@ref(eq:structured) above yields
$$\mathop{\rm logit}[\mathop{\rm pr}(1\ \mathrm{beats}\ j)]=\lambda_1 - \left(\sum_{r=1}^p\beta_rx_{jr} + U_j\right), (\#eq:autonumber5) $$
for all other players $j$. This results in the inclusion of a 'direct' ability parameter for each player having missing predictor values, in addition to the common coefficients $\beta_1,\ldots,\beta_p$ -- an approach which will be appropriate when the missingness mechanism is unrelated to contest success. The same device can be used also to accommodate any user-specified departures from a structured Bradley-Terry model, whereby some players have their abilities determined by the linear predictor but others do not.

In the original analysis of the `flatlizards` data [@whit:06], the final model included the first and third principal components of the spectral reflectance from the throat (representing brightness and UV intensity respectively) as well as head length and the snout to vent length seen in our earlier model. The spectroscopy data was missing for two lizards, so the ability of these lizards was estimated directly. The following fits this model, with the addition of a random intercept as before:

``` {r lizModel2}
lizModel2 <- BTm(1, winner, loser,
                 ~ throat.PC1[..] + throat.PC3[..] +
                     head.length[..] + SVL[..] + (1|..),
                 data = flatlizards)
summary(lizModel2)
```

Note that `BTm` detects that lizards 96 and 99 have missing values in the specified predictors and automatically includes separate ability parameters for these lizards. This model was found to be the single best model based on the principal components of reflectance and the other predictors available, and indeed the standard deviation of the random intercept is much reduced, but still highly significant. Allowing for this significant variation between lizards with the same predictor values produces more realistic (i.e., larger) standard errors for the parameters when compared to the original analysis of @whit:06. Although this affects the significance of the morphological variables, it does not affect the significance of the principal components, so in this case it does not affect the main conclusions of the study.

### Order effect {#sec:order}

In certain types of application some or all contests have an associated 'bias', related to the order in which items are presented to a judge or to the location in which a contest takes place, for example. A natural extension of the Bradley-Terry model (Equation \@ref(eq:unstructured)) is then
$$\mathop{\rm logit}[\mathop{\rm pr}(i\ \mathrm{beats}\ j)]=\lambda_i-\lambda_j + \delta z, (\#eq:autonumber6) $$
where $z=1$ if $i$ has the supposed advantage and $z=-1$ if $j$ has it. (If the 'advantage' is in fact a disadvantage, $\delta$ will be negative.) The scores $\lambda_i$ then relate to ability in the absence of any such advantage.
As an example, consider the baseball data given in @agre:02, page 438:

``` {r baseball}
data("baseball", package = "BradleyTerry2")
head(baseball)
```

The data set records the home wins and losses for each baseball team against each of the 6 other teams. The `head` function is used to show the first 6 records, which are the Milwaukee home games. We see, for example, that Milwaukee played 7 home games against Detroit and won 4 of them.

The 'standard' Bradley-Terry model without a home-advantage parameter will be fitted if no formula is specified in the call to `BTm`:

``` {r baseballModel}
baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team,
                      data = baseball, id = "team")
summary(baseballModel1)
```

The reference team is Baltimore, estimated to be the weakest of these seven, with Milwaukee and Detroit the strongest. In the above, the ability of each team is modelled simply as `~ team`, where the values of the factor `team` are given by `home.team` for the first team and `away.team` for the second team in each game.

To estimate the home-advantage effect, an additional variable is required to indicate whether the team is at home or not. Therefore data frames containing both the team factor and this new indicator variable are required in place of the factors `home.team` and `away.team` in the call to `BTm`. This is achieved here by over-writing the `home.team` and `away.team` factors in the `baseball` data frame:

``` {r baseballDataUpdate}
baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1)
baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0)
```

The `at.home` variable is needed for both the home team and the away team, so that it can be differenced as appropriate in the linear predictor. With the data organised in this way, the ability formula can now be updated to include the `at.home` variable as follows:

``` {r baseballModelupdate}
baseballModel2 <- update(baseballModel1, formula = ~ team + at.home)
summary(baseballModel2)
```

This reproduces the results given on page 438 of @agre:02: the home team has an estimated odds-multiplier of $\exp(0.3023) = 1.35$ in its favour.

### More general (contest-specific) predictors {#sec:CEMS}

The 'home advantage' effect is a simple example of a contest-specific predictor. Such predictors are necessarily interactions between aspects of the contest and (aspects of) the two 'players' involved. For more elaborate examples of such effects, see `?chameleons` and `?CEMS`. The former includes an 'experience' effect, which changes through time, on the fighting ability of male chameleons. The latter illustrates a common situation in psychometric applications of the Bradley-Terry model, where *subjects* express preference for one of two *objects* (the 'players'), and it is the influence of subject attributes on the results that is of primary interest.

As an illustration of the way in which such effects are specified, consider the following model specification taken from the examples in `?CEMS`, where data on students' preferences in relation to six European management schools are analysed.

``` {r CEMSmodel}
data("CEMS", package = "BradleyTerry2")
table8.model <- BTm(outcome = cbind(win1.adj, win2.adj),
                    player1 = school1, player2 = school2,
                    formula = ~ .. + WOR[student] * LAT[..]
                        + DEG[student] * St.Gallen[..]
                        + STUD[student] * Paris[..]
                        + STUD[student] * St.Gallen[..]
                        + ENG[student] * St.Gallen[..]
                        + FRA[student] * London[..]
                        + FRA[student] * Paris[..]
                        + SPA[student] * Barcelona[..]
                        + ITA[student] * London[..]
                        + ITA[student] * Milano[..]
                        + SEX[student] * Milano[..],
                    refcat = "Stockholm", data = CEMS)
```

This model reproduces results from Table 8 of @ditt:01 apart from minor differences due to the different treatment of ties. Here the outcome is the binomial frequency of preference for `school1` over `school2`, with ties counted as half a 'win' and half a 'loss'. The formula specifies the model for school 'ability' or worth. In this formula, the default label '`..`' represents the school (with values given by `school1` or `school2` as appropriate) and `student` is a factor specifying the student that made the comparison. The remaining variables in the formula use [R]{.sans-serif}'s standard indexing mechanism to include student-specific variables, e.g., `WOR`: whether or not the student was in full-time employment, and school-specific variables, e.g., `LAT`: whether the school was in a 'Latin' city. Thus there are three types of variables: contest-specific (`school1`, `school2`, `student`), subject-specific (`WOR`, `DEG`, ...) and object-specific (`LAT`, `St.Gallen`, ...). These three types of variables are provided in three data frames, contained in the list object `CEMS`.

## Ability scores {#sec:ability}

The function `BTabilities` extracts estimates and standard errors for the log-ability scores $\lambda_1, \ldots, \lambda_K$. These will either be 'direct' estimates, in the case of the standard Bradley-Terry model or for players with one or more missing predictor values, or 'model-based' estimates of the form $\hat\lambda_i=\sum_{r=1}^p\hat\beta_rx_{ir}$ for players whose ability is predicted by explanatory variables.

As a simple illustration, team ability estimates in the home-advantage model for the `baseball` data are obtained by:

``` {r BTabilities}
BTabilities(baseballModel2)
```

This gives, for each team, the estimated ability when the team enjoys no home advantage. Similarly, estimates of the fighting ability of each lizard in the `flatlizards` data under the model based on the principal components of the spectral reflectance from the throat are obtained as follows:

``` {r BTabilities2}
head(BTabilities(lizModel2), 4)
```

The ability estimates in an unstructured Bradley-Terry model are particularly well suited to presentation using the device of *quasi-variances* [@firt:04]. The **qvcalc** package [@firt:10, version 0.8-5 or later] contains a function of the same name which does the necessary work:

``` r
> library("qvcalc")
> baseball.qv <- qvcalc(BTabilities(baseballModel2))
> plot(baseball.qv,
+      levelNames = c("Bal", "Bos", "Cle", "Det", "Mil", "NY", "Tor"))
```

```{r figqvplot, echo=FALSE, fig.cap="Estimated relative abilities of baseball teams.", fig.alt="The ability for Baltimore is fixed at zero, with an interval ranging from -0.5 to 0.5. Boston has a relative ability near 1.2; Cleveland around 0.7. The remaining teams have relative abilities around 1.3 to 1.6. The intervals are based on quasi standard errors and all have length of approximately 1. Therefore, aside from Cleveland, all teams are clearly significantly stronger than Baltimore as the intervals do not overlap.", fig.show='hold', fig.align="center", out.width="67.0%"}
knitr::include_graphics(c("baseball-qvplot.png"))
```

The 'comparison intervals' as shown in Figure \@ref(fig:figqvplot) are based on 'quasi standard errors', and can be interpreted as if they refer to *independent* estimates of ability for the teams.
This has the advantage that comparison between any pair of teams is readily made (i.e., not only comparisons with the 'reference' team). For details of the theory and method of calculation see @firt:04.

## Residuals {#sec:residuals}

There are two main types of residuals available for a Bradley-Terry model object. First, there are residuals obtained by the standard methods for models of class `"glm"`. These all deliver one residual for each contest or type of contest. For example, Pearson residuals for the model `lizModel2` can be obtained simply by

``` {r residuals}
res.pearson <- round(residuals(lizModel2), 3)
head(cbind(flatlizards$contests, res.pearson), 4)
```

More useful for diagnostics on the linear predictor $\sum\beta_rx_{ir}$ are 'player'-level residuals, obtained by using the function `residuals` with argument `type = "grouped"`. These residuals can then be plotted against other player-specific variables.

``` {r BTresiduals}
res <- residuals(lizModel2, type = "grouped")
# with(flatlizards$predictors, plot(throat.PC2, res))
# with(flatlizards$predictors, plot(head.width, res))
```

These residuals estimate the error in the linear predictor; they are obtained by suitable aggregation of the so-called 'working' residuals from the model fit. The `weights` attribute indicates the relative information in these residuals -- weight is roughly inversely proportional to variance -- which may be useful for plotting and/or interpretation; for example, a large residual may be of no real concern if based on very little information. Weighted least-squares regression of these residuals on any variable already in the model is null. For example:

``` {r residualWLS}
lm(res ~ throat.PC1, weights = attr(res, "weights"),
   data = flatlizards$predictors)
lm(res ~ head.length, weights = attr(res, "weights"),
   data = flatlizards$predictors)
```

As an illustration of evident *non-null* residual structure, consider the unrealistically simple model `lizModel` that was fitted in Section \@ref(sec:covariates) above. That model lacks the clearly significant predictor variable `throat.PC3`, and the plot shown in Figure \@ref(fig:figresiduals) demonstrates this fact graphically:

``` r
lizModel.residuals <- residuals(lizModel, type = "grouped")
plot(flatlizards$predictors$throat.PC3, lizModel.residuals)
```

```{r figresiduals, echo=FALSE, fig.cap="Lizard residuals for the simple model lizModel, plotted against throat.PC3.", fig.alt="The residuals are quite spread out over the range -2 to 2, but the distribution is clearly not uniform over the range of the predictor variable, throat.PC3. Residuals between -2 and -1 correspond to throat.PC3 values between -6 and 4; residuals between -1 and 1 correspond to throat.PC3 values of -4 to 4, and residuals from 1 to 2 correspond to throat.PC3 values between -3 and 6. Thus there is an overall positive correlation between the residuals and throat.PC3.", fig.show='hold', fig.align="center", out.width="69.0%"}
knitr::include_graphics(c("residuals.png"))
```

The residuals in the plot exhibit a strong, positive regression slope in relation to the omitted predictor variable `throat.PC3`.

## Model search {#sec:model}

In addition to `update()` as illustrated in preceding sections, methods for the generic functions `add1()`, `drop1()` and `anova()` are provided. These can be used to investigate the effect of adding or removing a variable, whether that variable is contest-specific, such as an order effect, or player-specific; and to compare the fit of nested models.
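As a minimal sketch (output not shown), assuming the baseball fits from Section \@ref(sec:order) are still available, the contribution of the home-advantage term can be examined and the two nested fits compared:

``` r
drop1(baseballModel2, test = "Chisq")  # effect of dropping each term from the home-advantage model
anova(baseballModel1, baseballModel2)  # compare the standard and home-advantage fits
```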
## Setting up the data {#sec:data} ### Contest-specific data {#sec:contest} The `outcome` argument of `BTm` represents a binomial response and can be supplied in any of the formats allowed by the `glm` function. That is, either a two-column matrix with the columns giving the number of wins and losses (for `player1` vs. `player2`), a factor where the first level denotes a loss and all other levels denote a win, or a binary variable where 0 denotes a loss and 1 denotes a win. Each row represents either a single contest or a set of contests between the same two players. The `player1` and `player2` arguments are either factors specifying the two players in each contest, or data frames containing such factors, along with any contest-specific variables that are also player-specific, such as the `at.home` variable seen in Section \@ref(sec:order). If given in data frames, the factors identifying the players should be named as specified by the `id` argument and should have identical levels, since they represent a particular sample of the full set of players. Thus for the model `baseballModel2`, which was specified by the following call: ``` {r baseballModel2_call} baseballModel2$call ``` the data are provided in the `baseball` data frame, which has the following structure: ``` {r str_baseball} str(baseball, vec.len = 2) ``` In this case `home.team` and `away.team` are both data frames, with the factor `team` specifying the team and the variable `at.home` specifying whether or not the team was at home. So the first comparison ``` {r first_comparison} baseball$home.team[1,] baseball$away.team[1,] ``` is Milwaukee playing at home against Detroit. The outcome is given by ``` {r first_outcome} baseball[1, c("home.wins", "away.wins")] ``` Contest-specific variables that are *not* player-specific -- for example, whether it rained or not during a contest -- should only be used in interactions with variables that *are* player-specific, otherwise the effect on ability would be the same for both players and would cancel out. Such variables can conveniently be provided in a single data frame along with the `outcome`, `player1` and `player2` data. An offset in the model can be specified by using the `offset` argument to `BTm`. This facility is provided for completeness: the authors have not yet encountered an application where it is needed. To use only certain rows of the contest data in the analysis, the `subset` argument may be used in the call to `BTm`. This should either be a logical vector of the same length as the binomial response, or a numeric vector containing the indices of rows to be used. ### Non contest-specific data {#sec:non-contest} Some variables do not vary by contest directly, but rather vary by a factor that is contest-specific, such as the player ID or the judge making the paired comparison. For such variables, it is more economical to store the data by the levels of the contest-specific factor and use indexing to obtain the values for each contest. The `CEMS` example in Section \@ref(sec:CEMS) provides an illustration of such variables. In this example student-specific variables are indexed by `student` and school-specific variables are indexed by `..`, i.e., the first or second school in the comparison as appropriate. There are then two extra sets of variables in addition to the usual contest-specific data as described in the last section. 
A good way to provide these data to `BTm` is as a list of data frames, one for each set of variables, e.g., ``` {r str_CEMS} str(CEMS, vec.len = 2) ``` The names of the data frames are only used by `BTm` if they match the names specified in the `player1` and `player2` arguments, in which case it is assumed that these are data frames providing the data for the first and second player respectively. The rows of data frames in the list should either correspond to the contests or the levels of the factor used for indexing. Player-specific offsets should be included in the formula by using the `offset` function. ### Converting data from a 'wide' format {#sec:wide} The `BTm` function requires data in a 'long' format, with one row per contest, provided either directly as in Section \@ref(sec:contest) or via indexing as in Section \@ref(sec:non-contest). In studies where the same set of paired comparisons are made by several judges, as in a questionnaire for example, the data may be stored in a 'wide' format, with one row per judge. As an example, consider the `cemspc` data from the **prefmod** package [@hatz:12], which provides data from the CEMS study in a wide format. Each row corresponds to one student; the first 15 columns give the outcome of all pairwise comparisons between the 6 schools in the study and the last two columns correspond to two of the student-specific variables: `ENG` (indicating the student's knowledge of English) and `SEX` (indicating the student's gender). The following steps convert these data into a form suitable for analysis with `BTm`. First a new data frame is created from the student-specific variables and these variables are converted to factors: ``` {r student-specific_data} library("prefmod") student <- cemspc[c("ENG", "SEX")] student$ENG <- factor(student$ENG, levels = 1:2, labels = c("good", "poor")) student$SEX <- factor(student$SEX, levels = 1:2, labels = c("female", "male")) ``` This data frame is put into a list, which will eventually hold all the necessary data. Then a `student` factor is created for indexing the student data to produce contest-level data. This is put in a new data frame that will hold the contest-specific data. ``` {r student_factor} cems <- list(student = student) student <- gl(303, 1, 303 * 15) #303 students, 15 comparisons contest <- data.frame(student = student) ``` Next the outcome data is converted to a binomial response, adjusted for ties. The result is added to the `contest` data frame. ``` {r binomial_response} win <- cemspc[, 1:15] == 0 lose <- cemspc[, 1:15] == 2 draw <- cemspc[, 1:15] == 1 contest$win.adj <- c(win + draw/2) contest$lose.adj <- c(lose + draw/2) ``` Then two factors are created identifying the first and second school in each comparison. The comparisons are in the order 1 vs. 2, 1 vs. 3, 2 vs. 3, 1 vs. 4, ..., so the factors can be created as follows: ``` {r school_factors} lab <- c("London", "Paris", "Milano", "St. Gallen", "Barcelona", "Stockholm") contest$school1 <- factor(sequence(1:5), levels = 1:6, labels = lab) contest$school2 <- factor(rep(2:6, 1:5), levels = 1:6, labels = lab) ``` Note that both factors have exactly the same levels, even though only five of the six players are represented in each case. In other words, the numeric factor levels refer to the same players in each case, so that the player is unambiguously identified. This ensures that player-specific parameters and player-specific covariates are correctly specified. 
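As a quick sanity check (output not shown), one can verify that the two school factors carry an identical set of levels before proceeding:

``` r
## both factors should list all six schools, in the same order
stopifnot(identical(levels(contest$school1), levels(contest$school2)))
levels(contest$school1)
```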
Finally the `contest` data frame is added to the main list:

``` {r cems_data}
cems$contest <- contest
```

This creates a single data object that can be passed to the `data` argument of `BTm`. Of course, such a list could be created on-the-fly as in `data = list(contest, student)`, which may be more convenient in practice.

### Converting data from the format required by the earlier **BradleyTerry** package {#sec:BradleyTerry}

The **BradleyTerry** package described in @firt:05 required contest/comparison results to be in a data frame with columns named `winner`, `loser` and `Freq`. The following example shows how `xtabs` and `countsToBinomial` can be used to convert such data for use with the `BTm` function in **BradleyTerry2**:

``` r
library("BradleyTerry")  ## the /old/ BradleyTerry package
## load data frame with columns "winner", "loser", "Freq"
data("citations", package = "BradleyTerry")
## convert to 2-way table of counts
citations <- xtabs(Freq ~ winner + loser, citations)
## convert to a data frame of binomial observations
citations.sf <- countsToBinomial(citations)
```

The `citations.sf` data frame can then be used with `BTm` as shown in Section \@ref(sec:citations).

## A list of the functions provided in **BradleyTerry2** {#sec:functions}

The standard R help files provide the definitive reference. Here we simply list the main user-level functions and their arguments, as a convenient overview:

``` {r functions, echo = FALSE}
## cf. prompt
options(width = 55)
for (fn in getNamespaceExports("BradleyTerry2")) {
    name <- as.name(fn)
    args <- formals(fn)
    n <- length(args)
    arg.names <- arg.n <- names(args)
    arg.n[arg.n == "..."] <- "\\dots"
    is.missing.arg <- function(arg) typeof(arg) == "symbol" &&
        deparse(arg) == ""
    Call <- paste(name, "(", sep = "")
    for (i in seq_len(n)) {
        Call <- paste(Call, arg.names[i],
                      if (!is.missing.arg(args[[i]]))
                          paste(" = ",
                                paste(deparse(args[[i]]), collapse = "\n"),
                                sep = ""),
                      sep = "")
        if (i != n) Call <- paste(Call, ", ", sep = "")
    }
    Call <- paste(Call, ")", sep = "")
    cat(deparse(parse(text = Call)[[1]], width.cutoff = 50), fill = TRUE)
}
options(width = 60)
```

## Some final remarks {#sec:finalremarks}

### A note on the treatment of ties {#sec:ties}

The present version of **BradleyTerry2** provides no sophisticated facilities for handling tied contests/comparisons; the well-known models of @rao:kupp:67 and @davi:70 are not implemented here. At present the `BTm` function requires a binary or binomial response variable; a third ('tied') category of response is not allowed. In several of the data examples (e.g., `?CEMS`, `?springall`, `?sound.fields`), ties are handled by the crude but simple device of adding half of a 'win' to the tally for each player involved; in each of the examples where this has been done it is found that the result is very similar, after a simple re-scaling, to the more sophisticated analyses that have appeared in the literature. Note that this device when used with `BTm` typically gives rise to warnings produced by the back-end `glm` function, about non-integer 'binomial' counts; such warnings are of no consequence and can be safely ignored. It is likely that a future version of **BradleyTerry2** will have a more general method for handling ties.

### A note on 'contest-specific' random effects {#sec:random-effects}

The current version of **BradleyTerry2** provides facilities for fitting models with random effects in 'player-specific' predictor functions, as illustrated in Section \@ref(sec:covariates).
For more general, 'contest-specific' random-effect structures, such as random 'judge' effects in psychological studies [e.g., @bock:01], **BradleyTerry2** provides (through `BTm`) the necessary user interface but as yet no back-end calculation. It is hoped that this important generalization can be made successfully in a future version of **BradleyTerry2**. ## Acknowledgments {#sec:acknowledgments .unnumbered} This work was supported by the UK Engineering and Physical Sciences Research Council. ## References {#sec:references .unnumbered}