---
title: "Introduction to AHPWR package"
author: "Luciane Ferreira Alcoforado and Orlando Celso Longo"
subtitle: "Academia da Força Aérea and Universidade Federal Fluminense"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to AHP package}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
knitr::opts_chunk$set(warning = FALSE, message = FALSE)
library(AHPWR)
library(kableExtra)
```
This vignette introduces the AHPWR package. The theory underlying the method follows Saaty and Vargas (2012), and the examples are inspired by that reference.
## AHP
According to Saaty and Vargas (2012), the Analytic Hierarchy Process (AHP) is a basic approach to decision making.
It is designed to cope with both the rational and the intuitive to select **the best** from
a **number of alternatives** evaluated with respect to **several criteria**. In this process,
the decision maker carries out simple **pairwise comparison judgments** which are
then used to develop overall priorities for **ranking the alternatives**. The AHP both allows for inconsistency in the judgments and provides a means to improve consistency.
The simplest form used to structure a decision problem is a hierarchy consisting
of three levels: **the goal of the decision** at the top level, followed by a second level
consisting of the **criteria** by which the **alternatives**, located in the third level, will
be evaluated.
## A three-level hierarchy
In the AHPWR package we can create the three-level hierarchy of the problem, as in the following example:
```{r}
#generic, c= 4 criteria and a = 3 alternatives
flow_chart(names=NULL, c=4, a=3)
```
You can change the graphics according to ggplot2 options:
```{r}
#generic, c= 4 criteria and a = 3 alternatives
p=flow_chart(names=NULL, c=4, a=3)
p+ggplot2::labs(title = "A three-level hierarchy", x="", y="")
```
```{r}
#generic, c= 4 criteria and a = 3 alternatives
p=flow_chart(names=NULL, c=4, a=3)
p+ggplot2::labs(title = "A three-level hierarchy", x="", y="")+ggplot2::theme_void()
```
```{r}
#generic, c= 4 criteria and a = 3 alternatives
goal = "Satisfaction with House"
criterios = c("Size", "Age", "Yard", "Neighborhood" )
alternatives = c("house A", "house B", "house C")
names = c(goal, criterios, alternatives)
p=flow_chart(names, c=4, a=3)
p+ggplot2::labs(title = "A three-level hierarchy", x="", y="")+ggplot2::theme_void()
```
## The comparative judgment
The next step is comparative judgment. The elements on the second level are arranged into a matrix and the family buying the house makes judgments about the relative importance of the elements with respect to the overall goal, Satisfaction with House.
The questions to ask when comparing two criteria are of the following kind: of the two elements being compared, which is considered more important by the family, and how much more important is it with respect to the family's satisfaction with the house, which is the overall goal?
Paired comparison judgments in the AHP are applied to pairs of homogeneous
elements. The fundamental scale of values to represent the intensities of judgments
is shown in Table 1. This scale has been validated for effectiveness, not only in
many applications by a number of people, but also through theoretical justification
of what scale one must use in the comparison of homogeneous elements.
```{r echo=FALSE}
`Intensity of importance` = 1:9
Definition = c("Equal importance",
"Weak",
"Moderate importance",
"Moderate plus",
"Strong importance",
"Strong plus",
"Very strong or demonstrated importance",
"Very, very strong",
"Extreme importance")
tab = data.frame(`Intensity of importance`, Definition, check.names = FALSE)
knitr::kable(tab, caption = "Table 1: The fundamental Scale")
```
Here we will use the holistic judgment criterion proposed by Godoi (2014). It provides weights for each criterion using the Saaty scale: assuming there are $n$ criteria, establish weights for the criteria according to their importance, with $w_1$ being the weight of criterion 1, $w_2$ the weight of criterion 2, and so on.
Before assigning holistic weights, the judge should order the items from most important to least important and then establish weights that decrease according to that order. The weights must be distinct, unless two consecutive items have the same importance; only in that case may they share the same weight. For example, if $w_1 < w_2 < ... < w_n$, then item $A_1$ is less important than $A_2$, which is less important than $A_3$, and so on; equivalently, $A_n$ is more important than $A_{n-1}$, which is more important than $A_{n-2}$, and so on.
The hierarchy matrix is constructed by setting $a_{ij} = w_i - w_j + 1$ if $w_i > w_j$ (i.e., criterion $i$ is more important than criterion $j$), $a_{ij} = 1/(w_j - w_i + 1)$ if $w_i < w_j$, and $a_{ij} = 1$ if $w_i = w_j$.
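The rule above can be sketched in a few lines of base R. The function `holistic_matrix` below is a hypothetical illustration only; the package's own `matrix_ahp()`, used throughout this vignette, is the supported implementation:

```{r}
# A base-R sketch of Godoi's rule (hypothetical helper, not a package function):
holistic_matrix <- function(w) {
  n <- length(w)
  A <- matrix(1, n, n)                    # a_ij = 1 when w_i = w_j
  for (i in 1:n) for (j in 1:n) {
    if (w[i] > w[j]) A[i, j] <- w[i] - w[j] + 1
    if (w[i] < w[j]) A[i, j] <- 1 / (w[j] - w[i] + 1)
  }
  A
}
holistic_matrix(c(2, 5, 2, 3))  # the criteria weights used in Example 1 below
```

Note that the result is a positive reciprocal matrix: $a_{ji} = 1/a_{ij}$ and the diagonal is 1.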
### Example 1:
The problem is to determine the best choice between two alternatives A1 = construction of a bridge connecting two points; A2 = construction of a tunnel connecting two points, based on the following criteria:
C1-life cycle, C2-maintenance cost, C3-environmental impacts, C4-construction cost.
Holistic Judgment:
M1 - Criteria judgment matrix
Weights attributed by the evaluators to each criterion: $w_1 = 2$; $w_2 = 5$; $w_3 = 2$; $w_4 = 3$. Therefore, the order of importance according to the judge is criterion 2, followed by criterion 4, followed by criteria 1 and 3, which share the same importance.
```{r}
x = c("life cycle", "maintenance cost", "environmental impacts", "construction cost") #criteria
y = c(2,5,2,3) #weights
m1 = matrix_ahp(x,y)
m1
```
The table below gathers all the data about the matrix: the first row shows the weights assigned by the evaluators, the following rows up to the penultimate one show the pairwise comparison matrix between criteria (or alternatives), and the last row shows the priority vector and the consistency ratio CR.
```{r}
names(y) = x
table=tabela_holistica(pesos=y)
table
knitr::kable(table)
```
We can customize our table, highlighting the main information in gray:
```{r}
require(magrittr)
require(kableExtra)
knitr::kable(as.data.frame(table), align = 'c', digits = 2) %>%
row_spec(1, italic = TRUE, background = 'gray') %>%
row_spec(2:5, color = 'black', background = 'yellow') %>%
row_spec(6, underline = TRUE, color = 'black', background = 'gray', bold = TRUE) %>%
column_spec(6, background = 'gray')
```
M2 - Judgment matrix of alternatives in relation to criterion C1 - life cycle
Weights assigned by the evaluators to each alternative: $w_1 = 1$; $w_2 = 3$
```{r}
x = c("bridge", "tunnel") #alternatives, criterion C1 - life cycle
y = c(1,3) #weights
m2 = matrix_ahp(x,y)
m2
```
```{r}
names(y) = x
table=tabela_holistica(pesos=y)
table
```
M3 - Judgment matrix of alternatives in relation to criterion C2 - maintenance cost
Weights assigned by the evaluators to each alternative: $w_1 = 1$; $w_2 = 4$
```{r}
x = c("bridge", "tunnel") #alternatives, criterion C2 - maintenance cost
y = c(1,4) #weights
m3 = matrix_ahp(x,y)
m3
```
```{r}
names(y) = x
table=tabela_holistica(pesos=y)
table
```
M4 - Judgment matrix of alternatives in relation to criterion C3 - environmental impacts
Weights assigned by the evaluators to each alternative: $w_1 = 1$; $w_2 = 2$
```{r}
x = c("bridge", "tunnel") #alternatives, criterion C3 - environmental impacts
y = c(1,2) #weights
m4 = matrix_ahp(x,y)
m4
```
```{r}
names(y) = x
table=tabela_holistica(pesos=y)
table
```
M5 - Judgment matrix of alternatives in relation to criterion C4 - construction cost
Weights assigned by the evaluators to each alternative: $w_1 = 5$; $w_2 = 3$
```{r}
x = c("bridge", "tunnel") #alternatives, criterion C4 - construction cost
y = c(5,3) #weights
m5 = matrix_ahp(x,y)
m5
```
```{r}
names(y) = x
table=tabela_holistica(pesos=y)
table
```
## Consistency index and ratio
If $a_{ij}$ represents the importance of alternative $i$ over alternative $j$, and $a_{jk}$ represents the importance of alternative $j$ over alternative $k$, then $a_{ik}$, the importance of alternative $i$ over alternative $k$, must equal $a_{ij}a_{jk}$; that is, $a_{ij}a_{jk} = a_{ik}$ must hold for the judgments to be consistent.
The consistency index of a comparison matrix is given by $CI = (\lambda_{max} - n)/(n - 1)$. The consistency ratio (CR) is obtained by comparing the CI with the appropriate value from a set of average random consistency indices (see Table 1.2 of Saaty and Vargas, 2012), each derived from a sample of randomly generated reciprocal matrices using the scale 1/9, 1/8, ..., 1, ..., 8, 9. If the CR is not less than 0.10, study the problem and revise the judgments.
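These formulas can be checked by hand with base R's `eigen()`, independently of the package's `CI()` and `CR()` functions. The 3x3 matrix below is a made-up example, and the random-index values are Saaty's published averages:

```{r}
# Hand computation of CI and CR for a made-up 3x3 reciprocal matrix:
A <- matrix(c(1,   3,   5,
              1/3, 1,   3,
              1/5, 1/3, 1), nrow = 3, byrow = TRUE)
n <- nrow(A)
lambda_max <- max(Re(eigen(A)$values))   # principal eigenvalue
CI_val <- (lambda_max - n) / (n - 1)     # consistency index
RI <- c(0, 0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49)  # Saaty's random indices, n = 1..10
CR_val <- CI_val / RI[n]                 # consistency ratio
CR_val < 0.10                            # TRUE: judgments are acceptably consistent
```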
```{r}
#consistency index
CI(m1)
CI(m2)
CI(m3)
CI(m4)
CI(m5)
```
```{r}
#consistency ratio
CR(m1)
CR(m2)
CR(m3)
CR(m4)
CR(m5)
```
All the consistency ratios are less than 0.1; therefore, all judgment matrices are considered consistent.
## Priority vectors
```{r}
lista = list(m1, m2, m3, m4, m5)
calcula_prioridades(lista)
```
Each vector shows the weights of the criteria or alternatives relative to the corresponding judgment matrix.
For example, the first vector corresponds to the m1 matrix, so it provides the relative weight of each criterion: 0.12 for criterion 1, 0.54 for criterion 2, 0.12 for criterion 3 and 0.22 for criterion 4. The second vector corresponds to the m2 matrix, so it provides the weight of each alternative under criterion 1: 0.25 for alternative 1 and 0.75 for alternative 2, and so on.
## Problem with only one level of criteria
Consider a problem with $m$ alternatives, $A_1, A_2, ..., A_m$, and $n$ criteria, $C_1, C_2, ..., C_n$.
The first matrix produces $P(C_i)$ = priority of the $i$th criterion, for $i = 1, 2, ..., n$.
The second through $(n+1)$th matrices produce $P(A_j \mid C_i)$ = priority of the $j$th alternative conditional on the $i$th criterion, for $j = 1, 2, ..., m$ and $i = 1, 2, ..., n$. In this case
$P(A_j) = \sum_{i=1}^{n}P(A_j|C_i)P(C_i)$, $j=1, 2, ...,m$
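In matrix form, this synthesis is just a matrix-vector product. A minimal base-R sketch with made-up priority vectors (two criteria, two alternatives):

```{r}
# The synthesis formula P(A_j) = sum_i P(A_j | C_i) P(C_i), with made-up priorities:
p_crit <- c(0.6, 0.4)                        # P(C_i), sums to 1
p_alt  <- rbind(c(0.25, 0.75),               # P(A_j | C_1)
                c(0.80, 0.20))               # P(A_j | C_2)
p_global <- as.vector(t(p_alt) %*% p_crit)   # P(A_j), the global priorities
p_global
sum(p_global)                                # equals 1
```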
The function **ahp_geral()** will provide a table containing the marginal weights of each criterion, the conditional weights of each alternative given a certain criterion, the global weights of each alternative and a consistency ratio CR.
| Criteria | Weights | A1 | A2 | ... | Am | CR |
|-----------------|---------|-----------|-----------|-----|-----------|---|
| Alternatives -> | 1 | $P(A_1)$ | $P(A_2)$ | ... | $P(A_m)$ | $CR(M_1)$ |
| $C_1$ | $P(C_1)$ | $P(A_1 \mid C_1)P(C_1)$ | $P(A_2 \mid C_1)P(C_1)$ | ... | $P(A_m \mid C_1)P(C_1)$ | $CR(M_2)$ |
| $C_2$ | $P(C_2)$ | $P(A_1 \mid C_2)P(C_2)$ | $P(A_2 \mid C_2)P(C_2)$ | ... | $P(A_m \mid C_2)P(C_2)$ | $CR(M_3)$ |
| ... | ... | ... | ... | ... | ... | |
| $C_n$ | $P(C_n)$ | $P(A_1 \mid C_n)P(C_n)$ | $P(A_2 \mid C_n)P(C_n)$ | ... | $P(A_m \mid C_n)P(C_n)$ | $CR(M_{n+1})$ |
Observe that
$\sum_{j=1}^{m}P(A_j) =1$, $\sum_{i=1}^{n}P(C_i) =1$, $P(A_j) = \sum_{i=1}^{n}P(A_j|C_i)P(C_i)$, $j=1, 2, ...,m$
The alternative with the highest priority value may be the decision maker's final choice.
## Hierarchic Synthesis and Rank
Hierarchic synthesis is obtained by a process of weighting and adding down the hierarchy leading to a multilinear form.
### Example 2: Problem with 4 criteria and 2 alternatives
```{r}
lista
ahp_geral(lista)
```
### Example 3: Problem with 5 criteria and 3 alternatives
```{r}
x=paste0(letters[3],1:5) #criteria names C1, C2, ..., C5
y=c(5,2,7,3,2) #judgments
m1=matrix_ahp(x,y)
x=paste0(letters[1],1:3) #alternatives names A1, A2, A3
y=c(4.4,5.2,3)
m2=matrix_ahp(x,y)
y=c(2,4,3)
m3=matrix_ahp(x,y)
y=c(4.9,5,3.3)
m4=matrix_ahp(x,y)
y=c(4.4,4.2,4.3)
m5=matrix_ahp(x,y)
y=c(5.4,5.2,5.7)
m6=matrix_ahp(x,y)
base=list(m1, m2, m3, m4, m5, m6)
base
calcula_prioridades(base) #returns only the priority vectors
lapply(base,tabela_holistica) #returns a table with the comparison matrix, the priority vector and the CR
ahp_geral(base)
```
## Table
```{r}
table1 = ahp_geral(base)
transforma_tabela(table1)
```
```{r}
formata_tabela(table1)
formata_tabela(table1, cores = "GRAY")
formata_tabela(table1, cores = "WHITE")
```
```{r}
ranque(table1)
```
## Criteria and sub-criteria
When the problem has one level of criteria and a second level of sub-criteria, it is necessary to map the hierarchical structure as follows:
Let $n$ be the number of criteria in a problem with $m$ alternatives, and $n_{i}$ the number of sub-criteria of the $i$th criterion; then define the mapping vector $map = c(n_1, n_2, ..., n_n)$.
This mapping must match the list of paired comparison matrices $M_1, M_2, ..., M_h$, as follows:
- $M_1$ must be an $n \times n$ matrix comparing the criteria
- If $n_1=0$, $M_2$ must be an $m \times m$ matrix comparing the alternatives; otherwise there must be an $n_1 \times n_1$ matrix comparing the sub-criteria in the light of criterion 1, followed by a sequence of $n_1$ $m \times m$ matrices comparing the alternatives in the light of each sub-criterion of that criterion; in this case there will be $n_1+1$ matrices: $M_2, M_3, ..., M_{n_1+2}$
- In general, for $i=1,...,n$, if $n_i=0$ the next matrix must be an $m \times m$ matrix comparing the alternatives; otherwise there must be an $n_i \times n_i$ matrix comparing the sub-criteria in the light of criterion $i$, followed by a sequence of $n_i$ $m \times m$ matrices comparing the alternatives in the light of each sub-criterion of that criterion; in this case there will be $n_i+1$ matrices for criterion $i$
For example, suppose a problem with $n=5$ criteria, $m=2$ alternatives, and $n_1=0, n_2=2, n_3=4, n_4=0, n_5=0$ sub-criteria for the corresponding criteria. Then:
- M1 will be a 5x5 matrix comparing the five criteria
- M2 will be a 2x2 matrix comparing the two alternatives in the light of criterion 1, because n1=0
- M3 will be a 2x2 matrix comparing the two sub-criteria of criterion 2, because n2=2
- M4 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 1 of criterion 2
- M5 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 2 of criterion 2
- M6 will be a 4x4 matrix comparing the four sub-criteria of criterion 3, because n3=4
- M7 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 1 of criterion 3
- M8 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 2 of criterion 3
- M9 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 3 of criterion 3
- M10 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 4 of criterion 3
- M11 will be a 2x2 matrix comparing the two alternatives in the light of criterion 4, because n4=0
- M12 will be a 2x2 matrix comparing the two alternatives in the light of criterion 5, because n5=0
**It is extremely important that the list of matrices be in this order because the method takes this matched mapping into account.**
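The total number of matrices $h$ implied by a mapping vector can be computed directly. `n_matrices` below is a hypothetical helper, not a package function:

```{r}
# Number of matrices implied by a mapping vector: each criterion with n_i = 0
# contributes one alternatives matrix; each criterion with n_i > 0 contributes
# one subcriteria matrix plus n_i alternatives matrices; plus M1 for the criteria.
n_matrices <- function(map) 1 + sum(ifelse(map == 0, 1, map + 1))
n_matrices(c(0, 2, 4, 0, 0))  # the example above: 12 matrices M1, ..., M12
```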
### Example 4: two criteria with two subcriteria
```{r}
#two criteria, each with two subcriteria
map = c(2,2)
#x with names and y with holistic judgment
x=paste0(letters[3],1:2) #2 criteria
y=c(5,7)
m1=matrix_ahp(x,y) # matrix compare two criteria
x=paste0("SC1",1:2)
y=c(4,6)
m2=matrix_ahp(x,y) # matrix compare two subcriteria of criteria 1
x=paste0(letters[1],1:3)
y=c(2,4,5)
m3=matrix_ahp(x,y) #alternatives for subcriteria 1 - criteria 1
y=c(4.9,5, 2)
m4=matrix_ahp(x,y) #alternatives for subcriteria 2 - criteria 1
y=c(4.4,8) #weights for the two subcriteria of criteria 2 (one weight per subcriterion)
x=paste0("SC2",1:2)
m5=matrix_ahp(x,y) #matrix compare two subcriteria of criteria 2
y=c(5.4,5.2, 1)
x=paste0(letters[1],1:3)
m6=matrix_ahp(x,y) #alternatives for subcriteria 1 - criteria 2
y=c(9,5.2, 3)
m7=matrix_ahp(x,y) #alternatives for subcriteria 2 - criteria 2
base=list(m1, m2, m3, m4, m5, m6, m7)
base
```
## Problem with two levels of criteria
Consider a problem with $m$ alternatives, $A_1, A_2, ..., A_m$, and $n$ criteria, $C_1, C_2, ..., C_n$, with $n_i$ sub-criteria corresponding to the $i$th criterion.
The first matrix produces $P(C_i)$ = priority of the $i$th criterion, and the sub-criteria matrices produce $P(SC_{ik} \mid C_i)$ = priority of the $k$th subcriterion of the $i$th criterion, for $i = 1, 2, ..., n$ and $k = 1, ..., n_{i}$.
The next matrices produce comparisons of alternatives or sub-criteria in the light of each criterion, followed by comparisons of alternatives in the light of each sub-criterion of the parent criterion, according to the established mapping $map = c(n_1, n_2, ..., n_n)$. We will consider two situations:
+ For each criterion i with $n_{i}$ > 0 we will have $n_i$ subcriteria $SC_{i1}, SC_{i2},...SC_{in_i}$:
$C_i = SC_{i1}\cup SC_{i2}\cup...\cup SC_{in_i}$
$P(SC_{ik}) = P(SC_{ik}|C_i)P(C_i)$, $k=1, 2, ...,n_{i}, i=1,2,...n$
$P(C_i) = \sum_{k=1}^{n_{i}}P(SC_{ik})$, $i=1, 2, ...,n$
$P(A_j \mid C_i) = \sum_{k=1}^{n_i}P(A_j \mid SC_{ik})P(SC_{ik} \mid C_i)$
$P(A_j) = \sum_{i=1}^{n}P(A_j|C_i)P(C_i)$, $j=1, 2, ...,m$
+ Otherwise (for each criterion $i$ with $n_{i} = 0$) we have the same expression as for one level of criteria:
$P(A_j) = \sum_{i=1}^{n}P(A_j|C_i)P(C_i)$, $j=1, 2, ...,m$
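The first aggregation step is again a weighted sum. A base-R sketch with made-up priorities for one criterion with two sub-criteria and three alternatives:

```{r}
# Two-level aggregation: P(A_j | C_i) = sum_k P(A_j | SC_ik) P(SC_ik | C_i)
p_sub     <- c(0.4, 0.6)               # P(SC_ik | C_i), sums to 1
p_alt_sub <- rbind(c(0.2, 0.3, 0.5),   # P(A_j | SC_i1)
                   c(0.5, 0.4, 0.1))   # P(A_j | SC_i2)
p_alt_crit <- as.vector(t(p_alt_sub) %*% p_sub)  # P(A_j | C_i)
p_alt_crit
sum(p_alt_crit)                        # equals 1
```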
The function **ahp_s()** will provide a table containing the marginal weights of each criterion/subcriterion, the conditional weights of each alternative given a certain criterion/subcriterion, and the global weights of each alternative.
| Criteria | Weights | A1 | A2 | ... | Am | CR |
|-----------------|----------|--------------|--------------|-----|--------------|---|
| Alternatives -> | 1 | $P(A_1)$ | $P(A_2)$ | ... | $P(A_m)$ | $CR(M_1)$ |
| $SC_{11}$ | $P(SC_{11} \mid C_1)$ | $P(A_1 \mid SC_{11})P(SC_{11} \mid C_1)$ | $P(A_2 \mid SC_{11})P(SC_{11} \mid C_1)$ | ... | $P(A_m \mid SC_{11})P(SC_{11} \mid C_1)$ | $CR(M_3)$ |
| $SC_{12}$ | $P(SC_{12} \mid C_1)$ | $P(A_1 \mid SC_{12})P(SC_{12} \mid C_1)$ | $P(A_2 \mid SC_{12})P(SC_{12} \mid C_1)$ | ... | $P(A_m \mid SC_{12})P(SC_{12} \mid C_1)$ | $CR(M_4)$ |
| ... | | | | | | |
| $SC_{1n_1}$ | $P(SC_{1n_1} \mid C_1)$ | $P(A_1 \mid SC_{1n_1})P(SC_{1n_1} \mid C_1)$ | $P(A_2 \mid SC_{1n_1})P(SC_{1n_1} \mid C_1)$ | ... | $P(A_m \mid SC_{1n_1})P(SC_{1n_1} \mid C_1)$ | $CR(M_{2+n_1})$ |
| $C_1$ | $P(C_1)$ | $P(A_1 \mid C_1)P(C_1)$ | $P(A_2 \mid C_1)P(C_1)$ | ... | $P(A_m \mid C_1)P(C_1)$ | $CR(M_{2})$ |
| ... | ... | ... | ... | ... | ... | |
| $C_n$ | $P(C_n)$ | $P(A_1 \mid C_n)P(C_n)$ | $P(A_2 \mid C_n)P(C_n)$ | ... | $P(A_m \mid C_n)P(C_n)$ | $CR(M_{1+n+n_1+...+n_n})$ |
Observe that
$\sum_{j=1}^{m}P(A_j) =1$, $\sum_{i=1}^{n}P(C_i) =1$, $\sum_{k=1}^{n_i}P(SC_{ik} \mid C_i) =1$, $P(C_i) = \sum_{k=1}^{n_{i}}P(SC_{ik})$ for $i=1, 2, ...,n$, and $P(A_j) = \sum_{i=1}^{n}P(A_j \mid C_i)P(C_i)$ for $j=1, 2, ...,m$.
The alternative with the highest priority value may be the decision maker's final choice.
## Hierarchic Synthesis and Rank
Hierarchic synthesis is obtained by a process of weighting and adding down the hierarchy leading to a multilinear form.
### Example 5: Problem with 2 criteria, two subcriteria and 3 alternatives
```{r}
#Priority vector and CR
#
calcula_prioridades(base) #returns only the priority vectors
lapply(base,tabela_holistica) #returns a table with the comparison matrix, the priority vector and the CR
ahp_s(base,map)
tb = ahp_s(base,map)
transforma_tabela(tb)
formata_tabela(tb)
```
## Comparing ahp_geral and ahp_s with one level
The **ahp_geral()** function constructs the same summary table as **ahp_s()** for problems with no subcriteria. Both produce the criteria and alternative weights, so the two functions return the same values when the problem has only one level of criteria. We recommend using **ahp_s()** when the problem has subcriteria.
### Example 6
Consider a problem with 6 criteria and 4 alternatives:
```{r}
p1=c(2,4,5,1,6,3) #holistic weights comparing the 6 criteria
p2=c(5, 4, 6, 7) #holistic weights comparing the 4 alternatives under criterion 1
p3=c(2, 8, 2, 7) #holistic weights comparing the 4 alternatives under criterion 2
p4=c(5, 1, 4, 1) #holistic weights comparing the 4 alternatives under criterion 3
p5=c(3.4, 4, 2, 3) #holistic weights comparing the 4 alternatives under criterion 4
p6=c(6, 4, 2, 2.5) #holistic weights comparing the 4 alternatives under criterion 5
p7=c(5, 3, 6, 1.8) #holistic weights comparing the 4 alternatives under criterion 6
x1=paste0("C",1:6)
x= paste0("A",1:4)
m1 = matrix_ahp(x1,p1)
m2 = matrix_ahp(x,p2)
m3 = matrix_ahp(x,p3)
m4 = matrix_ahp(x,p4)
m5 = matrix_ahp(x,p5)
m6 = matrix_ahp(x,p6)
m7 = matrix_ahp(x,p7)
base=list(m1,m2, m3, m4, m5, m6, m7)
formata_tabela(ahp_geral(base))
formata_tabela(ahp_s(base, map=c(0,0,0,0,0,0)))
```
## References
Alcoforado, L.F. (2021) Utilizando a Linguagem R: conceitos, manipulação, visualização, Modelagem e Elaboração de Relatório, Alta Books, Rio de Janeiro.
Godoi, W.C. (2014). Método de construção das matrizes de julgamento paritário no AHP – método de julgamento holístico. Revista Gestão Industrial, ISSN 1808-0448, v. 10, n. 3, p. 474-493. DOI: 10.3895/gi.v10i3.1970
Longo, O.C., Alcoforado, L.F., Levy, A. (2022). Utilização do pacote AHP na tomada de decisão. In: IX Xornada de Usuarios de R en Galicia.
Oliveira, L.S. (2020). AHP. GitHub repository. URL: https://github.com/Lyncoln/AHP. Accessed on 20/09/2022.
Oliveira, L.S., Alcoforado, L.F., Ross, S.D., Simão, A.S. (2019). Implementando a AHP com R. Anais do SER, ISSN 2526-7299, v.4, n.2. URL: https://periodicos.uff.br/anaisdoser/article/view/29331
Saaty, T.L., Vargas, L.G. (2012), Models, Methods, Concepts and Applications of the Analytic Hierarchy Process, Second Edition, Springer, New York.
Triantaphyllou, E., Shu, B., Nieto Sanchez, S., Ray, T. (1998). Multi-Criteria Decision Making: An Operations Research Approach. Encyclopedia of Electrical and Electronics Engineering (J.G. Webster, Ed.), John Wiley & Sons, New York, NY, Vol. 15, pp. 175-186.
| /scratch/gouwar.j/cran-all/cranData/AHPWR/inst/doc/Intro_to_AHP.Rmd |
---
title: "Introduction to AHPWR package"
author: "Luciane Ferreira Alcoforado and Orlando Celso Longo"
subtitle: "Academia da Força Aérea and Universidade Federal Fluminense"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to AHP package}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
knitr::opts_chunk$set(warning = FALSE, message = FALSE)
library(AHPWR)
library(kableExtra)
```
This is the introduction to our package AHPWR. All theory about this method is concerned about Saaty and Vargas (2012). The examples are inspired by this reference.
## AHP
According to Saaty and Vargas (2012), The Analytic Hierarchy Process (AHP) is a basic approach to decision making.
It is designed to cope with both the rational and the intuitive to select **the best** from
a **number of alternatives** evaluated with respect to **several criteria**. In this process,
the decision maker carries out simple **pairwise comparison judgments** which are
then used to develop overall priorities for **ranking the alternatives**. The AHP both allows for inconsistency in the judgments and provides a means to improve consistency.
The simplest form used to structure a decision problem is a hierarchy consisting
of three levels: **the goal of the decision** at the top level, followed by a second level
consisting of the **criteria** by which the **alternatives**, located in the third level, will
be evaluated.
## A tree level hierarchy
In the AHPWR package we can create the tree level hierarchy of the problem, as in the following example:
```{r}
#generic, c= 4 criteria and a = 3 alternatives
flow_chart(names=NULL, c=4, a=3)
```
You can change the graphics according to ggplot2 options:
```{r}
#generic, c= 4 criteria and a = 3 alternatives
p=flow_chart(names=NULL, c=4, a=3)
p+ggplot2::labs(title = "A tree level hierarchy", x="", y="")
```
```{r}
#generic, c= 4 criteria and a = 3 alternatives
p=flow_chart(names=NULL, c=4, a=3)
p+ggplot2::labs(title = "A tree level hierarchy", x="", y="")+ggplot2::theme_void()
```
```{r}
#generic, c= 4 criteria and a = 3 alternatives
goal = "Satisfation with House"
criterios = c("Size", "Age", "Yard", "Neighborhood" )
alternatives = c("house A", "house B", "house C")
names = c(goal, criterios, alternatives)
p=flow_chart(names, c=4, a=3)
p+ggplot2::labs(title = "A tree level hierarchy", x="", y="")+ggplot2::theme_void()
```
## The comparative judment
The next step is comparative judgment. The elements on the second level are arranged into a matrix and the family buying the house makes judgments about the relative importance of the elements with respect to the overall goal, Satisfaction with House.
The questions to ask when comparing two criteria are of the following kind: of
the two alternatives being compared, which is considered more important by the
family and how much more important is it with respect to family satisfaction with
the house, which is the overall goal?
Paired comparison judgments in the AHP are applied to pairs of homogeneous
elements. The fundamental scale of values to represent the intensities of judgments
is shown in Table 1. This scale has been validated for effectiveness, not only in
many applications by a number of people, but also through theoretical justification
of what scale one must use in the comparison of homogeneous elements.
```{r echo=FALSE}
`Intensity of importance` = 1:9
Definicion = c("Equal Importance",
" Weak",
"Moderate importance",
"Moderate plus",
"Strong importance",
"Strong plus",
"Very strong or demonstrated importance",
"Very, very strong",
"Extreme importance")
tab = data.frame(`Intensity of importance`, Definicion)
knitr::kable(tab, caption = "Table 1: The fundamental Scale")
```
Here we will use the holistic judgment criterion proposed by Godoy (2014). It provides weights for each criterion using the Saaty scale: assuming that there are $n$ criteria, establish different weights for each of the criteria according to their importance, with $w1$ being the weight of criterion 1; $w2$ the weight of criterion 2 and so on.
The judge, before assigning a holistic weight, should order the items from the most important to the least important and then establish the weights that should be different for each item and descending according to the established order, unless two consecutive items have the same importance, only in this case can they have the same weight. For example, if $w1 < w2 < ... < wn$ so order the items are A1 is less important than A2 which is less important than A3 and so on
; or An is more important than An-1 which is more important than An-2 and so on.
The hierarchy matrix will be constructed by making $a_{ij} = wi – wj +1$ if $wi > wj$ (ie, criterion $i$ has greater importance than criterion $j$); $aij = 1/(wj – wi +1)$ if $wi < wj$.
### Example 1:
The problem is to determine the best choice between two alternatives A1 = construction of a bridge connecting two points; A2 = construction of a tunnel connecting two points, based on the following criteria:
C1-life cycle, C2-maintenance cost, C3-environmental impacts, C4-construction cost.
Holistic Judgment:
M1 - Criteria judgment matrix
Weights attributed by the evaluators to each criterion: w1=2; w2 = 5; w3 = 2; w4 = 3, therefore, the order of importance of the criteria according to the judge is criterion 2 followed by criterion 4 followed by both criterion 1 and 3 with the same importance.
```{r}
x = c("life cycle", "maintenance cost", "environmental impacts", "construction cost") #criteria
y = c(2,5,2,3) #weights
m1 = matrix_ahp(x,y)
m1
```
The table with all date about the matrix, the first line informs the weights assigned by the evaluators, the following lines up to the penultimate one show the comparison matrix between criteria or alternatives and the last line informs the priority vector and the CR consistency index.
```{r}
names(y) = x
table=tabela_holistica(pesos=y)
table
knitr::kable(table)
```
We can customize our table, highlighting the main information in gray:
```{r}
require(magrittr)
require(kableExtra)
knitr::kable(as.data.frame(table), align = 'c', digits = 2) %>%
row_spec(1, italic = TRUE, background = 'gray') %>%
row_spec(2:5, color = 'black', background = 'yellow') %>%
row_spec(6, underline = TRUE, color = 'black',background = 'gray',bold = TRUE,) %>%
column_spec(6, background = 'gray')
```
M2 -Judgment matrix of alternatives in relation to criterion C1 - life cycle
Weights assigned by evaluators for each Alternative: w1= 1, w2 = 3
```{r}
x = c("bridge", "tunnel") #criteria life cycle
y = c(1,3) #weights
m2 = matrix_ahp(x,y)
m2
```
```{r}
names(y) = x
table=tabela_holistica(pesos=y)
table
```
M3 -Judgment matrix of alternatives in relation to criterion C2 - maintenance cost
Weights assigned by evaluators for each Alternative: w1= 1, w2 = 4
```{r}
x = c("bridge", "tunnel") #criteria maintenance cost
y = c(1,4) #weights
m3 = matrix_ahp(x,y)
m3
```
```{r}
names(y) = x
table=tabela_holistica(pesos=y)
table
```
M4 -Judgment matrix of alternatives in relation to criterion C3 - environmental impacts
Weights assigned by evaluators for each Alternative: w1= 1, w2 = 2
```{r}
x = c("bridge", "tunnel") #criteria environmental impacts
y = c(1,2) #weights
m4 = matrix_ahp(x,y)
m4
```
```{r}
names(y) = x
table=tabela_holistica(pesos=y)
table
```
M5 -Judgment matrix of alternatives in relation to criterion C4 - construction cost
Weights assigned by evaluators for each Alternative: w1= 5, w2 = 3
```{r}
x = c("bridge", "tunnel") #criteria construction cost
y = c(5,3) #weights
m5 = matrix_ahp(x,y)
m5
```
```{r}
names(y) = x
table=tabela_holistica(pesos=y)
table
```
## Consistency index and ratio
If $a_{ij}$ represents the importance of alternative i over alternative j and $a_{jk}$
represents the importance of alternative j over alternative k and $a_{ik}$, the importance
of alternative i over alternative k, must equal $a_{ij}a_{jk}$ or $a_{ij}a_{jk} = a_{ik}$ for the judgments
to be consistent.
The consistency index of a matrix of comparisons is
given by $ic = (\lambda_{max} - n)/(n - 1)$. The consistency ratio (RC) is obtained by
comparing the C.I. with the appropriate one of the following set of numbers (See Table 1.2) each of which is an average random consistency index derived from a
sample of randomly generated reciprocal matrices using the scale 1/9, 1/8,…, 1,…, 8, 9. If it is not less than 0.10, study the problem and revise the judgments.
```{r}
#consistency index
CI(m1)
CI(m2)
CI(m3)
CI(m4)
CI(m5)
```
```{r}
#consistency ratio
CR(m1)
CR(m2)
CR(m3)
CR(m4)
CR(m5)
```
All the consistency ratio are less than 0.1, therefore all judgment matrices are considered consistent.
## Priority vectors
```{r}
lista = list(m1, m2, m3, m4, m5)
calcula_prioridades(lista)
```
Each vector shows the weight of the criterio or alternative relative to the corresponding judgment matrix.
For example, the first vector matches the m1 matrix, so it provides the relative weights of each criteria: 0.12 for criteria 1; 0.54 for criteria 2, 0.12 for criteria 3 and 0.22 for criteria 4. The second vector corresponds to the m2 matrix, so it provides the weights of each alternative when considering criterion 1: 0.25 for alternative 1 and 0.75 for alternative 2, and so on.
## Problem has only one level of criteria
Let be a problem with m alternatives, $A_1, A_2, ..., A_m$ and n criteria $C_1, C_2, ..., C_n$.
The first matrix produces $P(C_i)$ = priority of the ith criterion for i = 1, 2, ..., n
The second until n+1 matrix produces $P(A_j|C_i)$ = priority of the j-th alternative conditional on the i-th criterion in case the problem has only one level of criteria, j=1, 2, ..., m and i=1, 2, ...,n. In this case
$P(A_j) = \sum_{i=1}^{n}P(A_j|C_i)P(C_i)$, $j=1, 2, ...,m$
The function **ahp_geral()** will provide a table containing the marginal weights of each criterion, the conditional weights of each alternative given a certain criterion, the global weights of each alternative and a consistency ratio CR.
| Criteria | Weights | A1 | A2 | ... | Am | CR |
|-----------------|---------|-----------|-----------|-----|-----------|-----------|
| Alternatives -> | 1 | $P(A_1)$ | $P(A_2)$ | ... | $P(A_m)$ | $CR(M_1)$ |
| $C_1$ | $P(C_1)$ | $P(A_1 \mid C_1)P(C_1)$ | $P(A_2 \mid C_1)P(C_1)$ | ... | $P(A_m \mid C_1)P(C_1)$ | $CR(M_2)$ |
| $C_2$ | $P(C_2)$ | $P(A_1 \mid C_2)P(C_2)$ | $P(A_2 \mid C_2)P(C_2)$ | ... | $P(A_m \mid C_2)P(C_2)$ | $CR(M_3)$ |
| ... | ... | ... | ... | ... | ... | ... |
| $C_n$ | $P(C_n)$ | $P(A_1 \mid C_n)P(C_n)$ | $P(A_2 \mid C_n)P(C_n)$ | ... | $P(A_m \mid C_n)P(C_n)$ | $CR(M_{n+1})$ |
Observe that
$\sum_{j=1}^{m}P(A_j) =1$, $\sum_{i=1}^{n}P(C_i) =1$, $P(A_j) = \sum_{i=1}^{n}P(A_j|C_i)P(C_i)$, $j=1, 2, ...,m$
The alternative with the highest priority value may be the decision maker's final choice.
## Hierarchic Synthesis and Rank
Hierarchic synthesis is obtained by a process of weighting and adding down the hierarchy leading to a multilinear form.
### Example 2: Problem with 4 criteria and 2 alternatives
```{r}
lista
ahp_geral(lista)
```
### Example 3: Problem with 5 criteria and 3 alternatives
```{r}
x=paste0(letters[3],1:5) #criteria names C1, C2, ..., C5
y=c(5,2,7,3,2) #judgments
m1=matrix_ahp(x,y)
x=paste0(letters[1],1:3) #alternatives names A1, A2, A3
y=c(4.4,5.2,3)
m2=matrix_ahp(x,y)
y=c(2,4,3)
m3=matrix_ahp(x,y)
y=c(4.9,5,3.3)
m4=matrix_ahp(x,y)
y=c(4.4,4.2,4.3)
m5=matrix_ahp(x,y)
y=c(5.4,5.2,5.7)
m6=matrix_ahp(x,y)
base=list(m1, m2, m3, m4, m5, m6)
base
calcula_prioridades(base) # provides only the priority vectors
lapply(base, tabela_holistica) # provides a table with the comparison matrix, the priority vector and the CR
ahp_geral(base)
```
## Table
```{r}
table1 = ahp_geral(base)
transforma_tabela(table1)
```
```{r}
formata_tabela(table1)
formata_tabela(table1, cores = "GRAY")
formata_tabela(table1, cores = "WHITE")
```
```{r}
ranque(table1)
```
## Criteria and sub-criteria
When the problem has one level of criteria and a second level of subcriteria, it will be necessary to map the hierarchical structure as follows:
Let $n$ be the number of criteria in a problem with $m$ alternatives, and $n_{i}$ the number of sub-criteria of the i-th criterion; then define the mapping vector $map = c(n_1, n_2, ..., n_n)$.
This mapping must match the list of paired matrices $M_1, M_2, ..., M_h$, as follows:
- M1 must be an $nxn$ matrix comparing criteria
- If $n_1=0$, M2 must be an $mxm$ matrix comparing the alternatives; otherwise there must be an $n_1xn_1$ matrix comparing the sub-criteria in the light of criterion 1, followed by a sequence of $n_1$ $mxm$ matrices comparing the alternatives in the light of each sub-criterion of that criterion; in this case there are $n_1+1$ matrices: $M_2, M_3, ..., M_{n_1+2}$
- In general, for $i=1,...,n$: if $n_i=0$ the next matrix must be an $mxm$ matrix comparing the alternatives; otherwise there must be an $n_ixn_i$ matrix comparing the sub-criteria in the light of criterion $i$, followed by a sequence of $n_i$ $mxm$ matrices comparing the alternatives in the light of each of its sub-criteria; in this case criterion $i$ contributes $n_i+1$ matrices: $M_{i,1}, M_{i,2}, ..., M_{i,n_i+1}$
For example, suppose a problem with n=5 criteria, m=2 alternatives, and $n_1=0, n_2=2, n_3=4, n_4=0, n_5=0$ sub-criteria for the corresponding criteria. Then:

- M1 will be a 5x5 matrix comparing the five criteria
- M2 will be a 2x2 matrix comparing the two alternatives in the light of criterion 1, because $n_1=0$
- M3 will be a 2x2 matrix comparing the two sub-criteria of criterion 2, because $n_2=2$
- M4 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 1 of criterion 2
- M5 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 2 of criterion 2
- M6 will be a 4x4 matrix comparing the four sub-criteria of criterion 3, because $n_3=4$
- M7 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 1 of criterion 3
- M8 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 2 of criterion 3
- M9 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 3 of criterion 3
- M10 will be a 2x2 matrix comparing the two alternatives in the light of sub-criterion 4 of criterion 3
- M11 will be a 2x2 matrix comparing the two alternatives in the light of criterion 4, because $n_4=0$
- M12 will be a 2x2 matrix comparing the two alternatives in the light of criterion 5, because $n_5=0$
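The correspondence between the mapping vector and the required sequence of matrix orders can be sketched with a small helper (plain Python; the function name is invented for illustration and is not part of the package):

```python
def expected_orders(map_vec, m):
    """Return the expected order of each pairwise matrix, in list position order.

    map_vec[i] is the number of sub-criteria of criterion i+1; m is the
    number of alternatives. The first matrix always compares the criteria.
    """
    n = len(map_vec)
    orders = [n]                      # M1: criteria vs criteria
    for n_i in map_vec:
        if n_i == 0:
            orders.append(m)          # alternatives under criterion i
        else:
            orders.append(n_i)        # sub-criteria of criterion i
            orders.extend([m] * n_i)  # alternatives under each sub-criterion
    return orders

# The example above: n = 5 criteria, m = 2 alternatives, map = (0, 2, 4, 0, 0)
print(expected_orders([0, 2, 4, 0, 0], 2))
# [5, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2]  -> orders of M1 ... M12
```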
**It is extremely important that the list of matrices be in this order because the method takes this matched mapping into account.**
### Example 4: two criteria with two subcriteria
```{r}
#two criteria, each with two subcriteria
map = c(2,2)
#x with names and y with holistic judgment
x=paste0(letters[3],1:2) #2 criteria
y=c(5,7)
m1=matrix_ahp(x,y) # matrix compare two criteria
x=paste0("SC1",1:2)
y=c(4,6)
m2=matrix_ahp(x,y) # matrix compare two subcriteria of criteria 1
x=paste0(letters[1],1:3)
y=c(2,4,5)
m3=matrix_ahp(x,y) #alternatives for subcriteria 1 - criteria 1
y=c(4.9,5, 2)
m4=matrix_ahp(x,y) #alternatives for subcriteria 2 - criteria 1
y=c(4.4,8, 6)
x=paste0("SC2",1:2)
m5=matrix_ahp(x,y) #matrix compare two subcriteria of criteria 2
y=c(5.4,5.2, 1)
x=paste0(letters[1],1:3)
m6=matrix_ahp(x,y) #alternatives for subcriteria 1 - criteria 2
y=c(9,5.2, 3)
m7=matrix_ahp(x,y) #alternatives for subcriteria 2 - criteria 2
base=list(m1, m2, m3, m4, m5, m6, m7)
base
```
## Problem has two levels of criteria
Let there be a problem with m alternatives, $A_1, A_2, ..., A_m$, and n criteria, $C_1, C_2, ..., C_n$, with $n_i$ sub-criteria corresponding to the i-th criterion.
The first matrix produces $P(C_i)$ = the priority of the i-th criterion, for $i = 1, 2, ..., n$; the sub-criteria matrices produce $P(SC_{ik}|C_i)$ = the priority of the k-th subcriterion of the i-th criterion, for $k = 1, ..., n_{i}$.
The next matrices produce comparisons of alternatives under each criterion, or comparisons of sub-criteria under a criterion followed by comparisons of the alternatives under each of those sub-criteria, according to the established mapping $map = c(n_1, n_2, ..., n_n)$. We will consider two situations:
+ For each criterion i with $n_{i}$ > 0 we will have $n_i$ subcriteria $SC_{i1}, SC_{i2},...SC_{in_i}$:
$C_i = SC_{i1}\cup SC_{i2}\cup...\cup SC_{in_i}$
$P(SC_{ik}) = P(SC_{ik}|C_i)P(C_i)$, $k=1, 2, ...,n_{i}, i=1,2,...n$
$P(C_i) = \sum_{k=1}^{n_{i}}P(SC_{ik})$, $i=1, 2, ...,n$
$P(A_j|C_i) = \sum_{k=1}^{n_i}P(A_j|SC_{ik})P(SC_{ik}|C_i)$
$P(A_j) = \sum_{i=1}^{n}P(A_j|C_i)P(C_i)$, $j=1, 2, ...,m$
+ Otherwise (for each criterion i with $n_{i} = 0$) we have the same expression as in the single-level case:
$P(A_j) = \sum_{i=1}^{n}P(A_j|C_i)P(C_i)$, $j=1, 2, ...,m$
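The two-level weighting can be illustrated with a small numeric sketch (plain Python; all weights are hypothetical, not output from the package):

```python
# Hypothetical example: 2 criteria, criterion 1 has 2 sub-criteria, 2 alternatives.
P_C = [0.7, 0.3]             # P(C_1), P(C_2)
P_SC_given_C1 = [0.4, 0.6]   # P(SC_11 | C_1), P(SC_12 | C_1)

P_A_given_SC11 = [0.5, 0.5]
P_A_given_SC12 = [0.9, 0.1]
P_A_given_C2   = [0.2, 0.8]  # criterion 2 has no sub-criteria

# P(A_j | C_1) = sum_k P(A_j | SC_1k) P(SC_1k | C_1)
P_A_given_C1 = [P_A_given_SC11[j] * P_SC_given_C1[0] +
                P_A_given_SC12[j] * P_SC_given_C1[1] for j in range(2)]

# P(A_j) = sum_i P(A_j | C_i) P(C_i)
P_A = [P_A_given_C1[j] * P_C[0] + P_A_given_C2[j] * P_C[1] for j in range(2)]
print([round(p, 3) for p in P_A])
```

The global priorities again sum to 1, and the alternative with the highest value would be preferred.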
The function **ahp_s()** will provide a table containing the marginal weights of each criterion/subcriterion, the conditional weights of each alternative given a certain criterion/subcriterion, and the global weights of each alternative.
| Criteria | Weights | A1 | A2 | ... | Am | CR |
|-----------------|----------|--------------|--------------|-----|--------------|-----------|
| Alternatives -> | 1 | $P(A_1)$ | $P(A_2)$ | ... | $P(A_m)$ | $CR(M_1)$ |
| $SC_{11}$ | $P(SC_{11} \mid C_1)$ | $P(A_1 \mid SC_{11})P(SC_{11} \mid C_1)$ | $P(A_2 \mid SC_{11})P(SC_{11} \mid C_1)$ | ... | $P(A_m \mid SC_{11})P(SC_{11} \mid C_1)$ | $CR(M_3)$ |
| $SC_{12}$ | $P(SC_{12} \mid C_1)$ | $P(A_1 \mid SC_{12})P(SC_{12} \mid C_1)$ | $P(A_2 \mid SC_{12})P(SC_{12} \mid C_1)$ | ... | $P(A_m \mid SC_{12})P(SC_{12} \mid C_1)$ | $CR(M_4)$ |
| ... | ... | ... | ... | ... | ... | ... |
| $SC_{1n_1}$ | $P(SC_{1n_1} \mid C_1)$ | $P(A_1 \mid SC_{1n_1})P(SC_{1n_1} \mid C_1)$ | $P(A_2 \mid SC_{1n_1})P(SC_{1n_1} \mid C_1)$ | ... | $P(A_m \mid SC_{1n_1})P(SC_{1n_1} \mid C_1)$ | $CR(M_{2+n_1})$ |
| $C_1$ | $P(C_1)$ | $P(A_1 \mid C_1)P(C_1)$ | $P(A_2 \mid C_1)P(C_1)$ | ... | $P(A_m \mid C_1)P(C_1)$ | $CR(M_{2})$ |
| ... | ... | ... | ... | ... | ... | ... |
| $C_n$ | $P(C_n)$ | $P(A_1 \mid C_n)P(C_n)$ | $P(A_2 \mid C_n)P(C_n)$ | ... | $P(A_m \mid C_n)P(C_n)$ | $CR(M_{1+n+n_1+...+n_n})$ |
Observe that
$\sum_{j=1}^{m}P(A_j) =1$, $\sum_{i=1}^{n}P(C_i) =1$, $\sum_{k=1}^{n_i}P(SC_{ik}|C_i) =1$, $P(C_i) = \sum_{k=1}^{n_{i}}P(SC_{ik})$ for $i=1, 2, ...,n$, and $P(A_j) = \sum_{i=1}^{n}P(A_j|C_i)P(C_i)$, $j=1, 2, ...,m$
The alternative with the highest priority value may be the decision maker's final choice.
## Hierarchic Synthesis and Rank
Hierarchic synthesis is obtained by a process of weighting and adding down the hierarchy leading to a multilinear form.
### Example 5: Problem with 2 criteria, two subcriteria and 3 alternatives
```{r}
#Priority vector and CR
#
calcula_prioridades(base) # provides only the priority vectors
lapply(base, tabela_holistica) # provides a table with the comparison matrix, the priority vector and the CR
ahp_s(base,map)
tb = ahp_s(base,map)
transforma_tabela(tb)
formata_tabela(tb)
```
## Comparing ahp_geral and ahp_s with one level
For problems with no subcriteria, the **ahp_geral()** function constructs the same summary table as **ahp_s()**. Both produce the criteria and alternative weights, so the two functions return the same values when the problem has a single level of criteria. We recommend using **ahp_s()** when the problem has subcriteria.
### Example 6
Consider the problem with 6 criteria and 4 alternatives
```{r}
p1=c(2,4,5,1,6,3) #holistic weights to compare 6 criteria
p2=c(5, 4, 6, 7) #holistic weights to compare 4 alternatives for criterion 1
p3=c(2, 8, 2, 7) #holistic weights to compare 4 alternatives for criterion 2
p4=c(5, 1, 4, 1) #holistic weights to compare 4 alternatives for criterion 3
p5=c(3.4, 4, 2, 3) #holistic weights to compare 4 alternatives for criterion 4
p6=c(6, 4, 2, 2.5) #holistic weights to compare 4 alternatives for criterion 5
p7=c(5, 3, 6, 1.8) #holistic weights to compare 4 alternatives for criterion 6
x1=paste0("C",1:6)
x= paste0("A",1:4)
m1 = matrix_ahp(x1,p1)
m2 = matrix_ahp(x,p2)
m3 = matrix_ahp(x,p3)
m4 = matrix_ahp(x,p4)
m5 = matrix_ahp(x,p5)
m6 = matrix_ahp(x,p6)
m7 = matrix_ahp(x,p7)
base=list(m1,m2, m3, m4, m5, m6, m7)
formata_tabela(ahp_geral(base))
formata_tabela(ahp_s(base, map=c(0,0,0,0,0,0)))
```
## References
Alcoforado, L.F. (2021) Utilizando a Linguagem R: conceitos, manipulação, visualização, Modelagem e Elaboração de Relatório, Alta Books, Rio de Janeiro.
Godoi, W.C. (2014). Método de construção das matrizes de julgamento paritário no AHP – método de julgamento holístico. Revista Gestão Industrial, ISSN 1808-0448 / v. 10, n. 03: p.474- 493, D.O.I: 10.3895/gi.v10i3.1970
Longo, O.C., Alcoforado, L.F., Levy, A. (2022). Utilização do pacote AHP na tomada de decisão. In IX Xornada de Usuarios de R en Galicia.
Oliveira, L.S. (2020). AHP, GitHub. URL: https://github.com/Lyncoln/AHP. Accessed 20/09/2022.
Oliveira, L.S., Alcoforado, L.F., Ross, S.D., Simão, A.S. (2019). Implementando a AHP com R. Anais do SER, ISSN 2526-7299, v.4, n.2. URL: https://periodicos.uff.br/anaisdoser/article/view/29331
Saaty, T.L., Vargas, L.G. (2012), Models, Methods, Concepts and Applications of the Analytic Hierarchy Process, Second Edition, Springer, New York.
Triantaphyllou, E., Shu, B., Nieto Sanchez, S., Ray, T. (1998). Multi-Criteria Decision Making: An Operations Research Approach. Encyclopedia of Electrical and Electronics Engineering, (J.G. Webster, Ed.), John Wiley & Sons, New York, NY, Vol. 15, pp. 175-186.
```{r echo=FALSE}
#to check the package
#devtools::check(args = c("--as-cran"), check_dir = dirname(getwd()))
```
AHPhybrid <- function(title, Alternatives, Qualitative_criteria, Quantitative_criteria,
Quantitative_crit_min_max, n_alt, n_crit, n_crit_Qual, n_crit_Quant,
Criteria_Comparison, Alternatives_comparison_qualit_crit, Alternatives_quantitative_crit) {
print(title)
  if((n_alt < 2) || (n_crit < 2) ){
    print("For implementation a minimum of 2 alternatives and 2 criteria is necessary")
}else{
###Criteria Evaluation
sum_col <- c()
for (j in 1:n_crit) {
sum_col_value = 0
for(i in 1:n_crit){
sum_col_value = sum_col_value + Criteria_Comparison[i,j]
}
sum_col <- append(sum_col, sum_col_value)
}
#Normalizing
value_normalized <- c()
for (j in 1:n_crit) {
for(i in 1:n_crit){
value = Criteria_Comparison[j,i]/sum_col[i]
value_normalized <- append(value_normalized, value)
}
}
criteria_normalized <- matrix(value_normalized, ncol = n_crit, nrow = n_crit, byrow = TRUE)
#Priority Obtaining
priority_crit <- c()
for (j in 1:n_crit) {
sum_row_value = 0
for(i in 1:n_crit){
sum_row_value = sum_row_value + criteria_normalized[j,i]
}
priority_value <- round((sum_row_value/n_crit),3)
priority_crit <- append(priority_crit, priority_value)
}
criteria <- c()
criteria <- append(Qualitative_criteria, Quantitative_criteria)
priority_crit_table <- data.frame(criteria, priority_crit)
#Consistency Evaluation
weight_cons_value <- c()
for (j in 1:n_crit) {
for(i in 1:n_crit){
value = Criteria_Comparison[j,i]*priority_crit[i]
weight_cons_value <- append(weight_cons_value, value)
}
}
weight_cons_value <- matrix(weight_cons_value, ncol = n_crit, nrow = n_crit, byrow = TRUE)
sum_weight <- c()
for (j in 1:n_crit) {
sum_row_value = 0
for(i in 1:n_crit){
sum_row_value = sum_row_value + weight_cons_value[j,i]
}
sum_weight <- append(sum_weight, sum_row_value)
}
sum_lambda = 0
for (j in 1:n_crit) {
sum_lambda = sum_lambda + (sum_weight[j]/priority_crit[j])
}
max_lambda = sum_lambda/n_crit
max_lambda
cons_index = (max_lambda - n_crit)/(n_crit - 1)
    random_index <- c(0, 0, 0.58, 0.9, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49, 1.51, 1.48, 1.56, 1.57, 1.59) # Saaty's Random Index (RI) by matrix order
    ri = random_index[n_crit]
consistency_criteria <- round((cons_index / ri), 3)
consistency_criteria
print("")
print("")
print("===== Criteria Priorities:")
print("")
print(priority_crit_table)
print("")
print(paste("The consistency ratio is:", consistency_criteria))
print("")
if (consistency_criteria <= 0.1) {
print("The assignments are consistent.")
} else {
print("The assignments are not consistent.")
}
print("")
print("")
print("")
print("")
n_crit_Qual
### Alternatives Evaluation
Alternatives_priorities <- c()
## in Qualitative Criteria
if (length(Qualitative_criteria)>=1){
for (k in 1:n_crit_Qual) {
sum_col_alt <- c()
for (j in 1:n_alt) {
sum_col_value = 0
for(i in 1:n_alt){
sum_col_value = sum_col_value + Alternatives_comparison_qualit_crit[[k]][i,j]
}
sum_col_alt <- append(sum_col_alt, sum_col_value)
}
#Normalizing
value_normalized_alt <- c()
for (j in 1:n_alt) {
for(i in 1:n_alt){
value = Alternatives_comparison_qualit_crit[[k]][j,i]/sum_col_alt[i]
value_normalized_alt <- append(value_normalized_alt, value)
}
}
alt_normalized <- matrix(value_normalized_alt, ncol = n_alt, nrow = n_alt, byrow = TRUE)
#Priority Obtaining
priority_alt_crit <- c()
for (j in 1:n_alt) {
sum_row_value = 0
for(i in 1:n_alt){
sum_row_value = sum_row_value + alt_normalized[j,i]
}
priority_value <- sum_row_value/n_alt
priority_alt_crit <- append(priority_alt_crit, priority_value)
}
priority_alt_table <- data.frame(Alternatives, priority_alt_crit)
#Consistency Evaluation
if (n_alt <= 2) {
consistency_alt_crit <- 0
}else{
weight_cons_value_alt <- c()
for (j in 1:n_alt) {
for(i in 1:n_alt){
value = Alternatives_comparison_qualit_crit[[k]][j,i]*priority_alt_crit[i]
weight_cons_value_alt <- append(weight_cons_value_alt, value)
}
}
weight_cons_value_alt <- matrix(weight_cons_value_alt, ncol = n_alt, nrow = n_alt, byrow = TRUE)
sum_weight <- c()
for (j in 1:n_alt) {
sum_row_value = 0
for(i in 1:n_alt){
sum_row_value = sum_row_value + weight_cons_value_alt[j,i]
}
sum_weight <- append(sum_weight, sum_row_value)
}
sum_lambda = 0
for (j in 1:n_alt) {
sum_lambda = sum_lambda + (sum_weight[j]/priority_alt_crit[j])
}
max_lambda = sum_lambda/n_alt
cons_index_alt_crit = (max_lambda - n_alt)/(n_alt - 1)
          random_index <- c(0, 0, 0.58, 0.9, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49, 1.51, 1.48, 1.56, 1.57, 1.59) # Saaty's Random Index (RI) by matrix order
          ri = random_index[n_alt]
consistency_alt_crit <- (cons_index_alt_crit / ri)
}
#Printing evaluation in qualitative criteria
print("")
print("")
print(paste("=== Alternatives Priorities in Criterion", Qualitative_criteria[k],":"))
print("")
print(priority_alt_table)
print("")
print(paste("The consistency ratio is:", round(consistency_alt_crit,3)))
if (consistency_alt_crit <= 0.1) {
print("The assignments are consistent.")
} else {
print("The assignments are not consistent.")
}
print("")
#Saving all priorities
Alternatives_priorities <- append(Alternatives_priorities, priority_alt_crit )
}
}
# in Quantitative Criteria
if (length(Quantitative_criteria)>=1){
for (k in 1:n_crit_Quant) {
sum_col = 0
for (i in 1:n_alt) {
if(Quantitative_crit_min_max[k] == "min"){
sum_col = sum_col + (1/Alternatives_quantitative_crit[i,k])
}else{
sum_col = sum_col + Alternatives_quantitative_crit[i,k]
}
}
priorities_simple <- c()
for (i in 1:n_alt) {
if(Quantitative_crit_min_max[k] == "min"){
alt_crit_quant_norm = (1/Alternatives_quantitative_crit[i,k])/sum_col
}else{
alt_crit_quant_norm = Alternatives_quantitative_crit[i,k]/sum_col
}
priorities_simple <- append(priorities_simple, alt_crit_quant_norm )
Alternatives_priorities <- append(Alternatives_priorities, alt_crit_quant_norm )
}
priority_alt_table <- data.frame(Alternatives, priorities_simple)
print("")
print("")
print(paste("=== Alternatives Priorities in Criterion", Quantitative_criteria[k],":"))
print("")
print(priority_alt_table)
print("")
}
}
Alternatives_priorities <- matrix(Alternatives_priorities, ncol = n_crit, nrow = n_alt)
rownames(Alternatives_priorities) <- Alternatives
colnames(Alternatives_priorities) <- criteria
print("")
print("")
print("")
print("")
print("=====Alternatives priorities for each criterion:")
print(Alternatives_priorities)
###Aggregation Process
values_aggreg <- c()
for (j in 1:n_alt) {
for (i in 1:n_crit) {
value = round((priority_crit[i] * Alternatives_priorities[j,i]),3)
values_aggreg <- append(values_aggreg, value)
}
}
print("")
print("")
print("")
print("")
print("===== Global Index :")
values_aggreg <- matrix(values_aggreg, ncol = n_crit, nrow = n_alt, byrow = TRUE)
rownames(values_aggreg) <- Alternatives
colnames(values_aggreg) <- criteria
values_aggreg
global_preference <- c()
for (j in 1:n_alt) {
sum_preference = 0
for (i in 1:n_crit) {
sum_preference = sum_preference + values_aggreg[j,i]
}
global_preference <- append(global_preference, sum_preference)
}
print("Final Results")
Final_Result <- data.frame(Alternatives, global_preference)
ordering <- sort(global_preference, decreasing = TRUE)
for(i in 1:n_alt){
print(paste(Alternatives[match(ordering[i],global_preference)],'=',ordering[i]))
}
}
}
eps <- 0.0001
rin <- c(0,0,0.52,0.89,1.11,1.25,1.35,1.40,1.45,1.49,1.52,1.54,1.56,1.58,1.59)
rth <- c(0,0,0.05,0.09,rep(0.1,11))
fs <- c(1/(9:1),2:9)
# added 14.8.2023
fs2 <- sort(unique(as.vector(outer(fs, fs, "/"))))
lim <- 500
lc2 <- lim*(lim-1)/2
ord <- rep(3:12,each=2)
Q1 <- c(0.25,1.442,0.243,1.414,0.219,1.319,0.201,1.319,0.186,1.313,0.17,1.3,
0.158,1.29,0.146,1.284,0.135,1.283,0.127,1.281)
Q2 <- c(0.376,1.747,0.325,1.348,0.281,1.341,0.247,1.314,0.222,1.304,0.2,1.294,
0.182,1.297,0.169,1.303,0.155,1.296,0.144,1.292)
Q3 <- c(0.499,1.26,0.402,1.357,0.342,1.345,0.297,1.338,0.261,1.33,0.233,1.321,
0.211,1.324,0.194,1.311,0.176,1.306,0.163,1.307)
Q4 <- c(0.746,1.4,0.691,1.402,0.627,1.381,0.563,1.353,0.482,1.339,0.435,1.34,
0.424,1.331,0.331,1.326,0.293,1.322,0.278,1.319)
type <- rep(c('0D','1C'),10)
ec <- data.frame(cbind(ord,Q1,Q2,Q3,Q4,type))
notPCM <- function(PCM) {
if (!setequal(diag(PCM),rep(1,nrow(PCM)))) return(TRUE)
for (i in 1:(nrow(PCM)-1))
for (j in (i+1):ncol(PCM))
if (PCM[i,j]!=1/PCM[j,i]) return(TRUE)
return(FALSE)
}
bestM <- function(pcm, granularityLow=TRUE) {
if (granularityLow==TRUE) {
opt <- fs
} else {
opt <- fs2
}
#tSc <- c(1/(9:1),2:9)
p <- pcm
o <- nrow(pcm)
bestMatrix <- diag(o)
ep <- abs(Re(eigen(p)$vectors[,1]))
for (r in 1:(o-1))
for (c in (r+1):o) {
b <- opt[which.min(abs(ep[r]/ep[c]-opt))[1]]
bestMatrix[r, c] <- b
bestMatrix[c, r] <- 1/b
}
return(bestMatrix)
}
randomPert <- function(val, granularityLow) {
if (granularityLow==TRUE) {
opt <- fs
} else {
opt <- fs2
}
r <- which(rank(abs(val-opt))<=5)
randomChoice <- opt[sample(r,1)]
return(randomChoice)
}
perturb <- function(PCM, granularityLow=TRUE) {
pertPCM <- diag(rep(1,nrow(PCM)))
for (i in 1:(nrow(PCM)-1))
for (j in (i+1):nrow(PCM)) {
r <- randomPert(PCM[i,j], granularityLow)
pertPCM[i,j] <- r
pertPCM[j,i] <- 1/r
}
return(pertPCM)
}
mDev <- function(pcm, ppcm) {
o <- nrow(pcm)
gm <- 1
for (r in 1:(o-1))
for (c in (r+1):o) {
rat <- ppcm[r,c] / pcm[r,c]
rat <- ifelse(rat < 1, 1/rat, rat)
gm <- gm * rat
}
mgm <- gm^(2/(o*(o-1)))
return(inconGM=mgm)
}
#' @title Create a Pairwise Comparison Matrix of order n for Analytic Hierarchy
#' Process from a vector of length n(n-1)/2 comparison ratios
#' @description Create a Pairwise Comparison Matrix of order n from a vector of
#' length n(n-1)/2 independent upper triangular elements
#'
#' @param vec The preference vector of length as the order of the 'PCM'
#'
#' @returns A Pairwise Comparison Matrix corresponding to the upper triangular
#' elements
#' @importFrom stats runif
#' @examples
#' PCM <- createPCM(c(1,2,0.5,3,0.5,2));
#' PCM <- createPCM(c(1,.5,2,1/3,4,2,.25,1/3,.5,1,.2,6,2,3,1/3));
#' @export
createPCM <- function(vec) {
n <- (1+sqrt(1+8*length(vec)))/2
if (n!=as.integer(n)) {
return(1)
} else {
pcm <- diag(n)
vecPtr <- 0
for (r in 1:(n-1))
for (c in (r+1):n) {
vecPtr <- vecPtr+1
pcm[r,c] <- vec[vecPtr]
pcm[c,r] <- 1/vec[vecPtr]
}
}
return(pcm)
}
#' @title Simulated Logical Pairwise Comparison Matrix for the
#' Analytic Hierarchy Process
#'
#' @description Creates a logical pairwise comparison matrix for the Analytic
#' Hierarchy Process such as would be created by a rational decision maker
#' based on a relative vector of preferences for the alternatives involved.
#' Choices of the pairwise comparison ratios are from the Fundamental Scale
#' and simulate a reasonable degree of error. The algorithm is modified from
#' a paper by Bose, A [2022], \doi{https://doi.org/10.1002/mcda.1784}
#'
#' @param ord The desired order of the Pairwise Comparison Matrix
#' @param prefVec The preference vector of length as the order of the
#' input matrix
#' @param granularityLow The Scale for pairwise comparisons; default (TRUE)
#'   is the Fundamental Scale; otherwise a more fine-grained scale is used, derived
#' from pairwise ratios of the elements of the Fundamental Scale.
#' @returns A Logical Pairwise Comparison Matrix
#' @importFrom stats runif
#' @examples
#' lPCM <- createLogicalPCM(3,c(1,2,3));
#' lPCM <- createLogicalPCM(5,c(0.25,0.4,0.1,0.05,0.2));
#' @export
createLogicalPCM <- function(ord, prefVec=rep(NA,ord), granularityLow=TRUE) {
if (is.na(ord)) stop("The first parameter is mandatory")
if (!is.numeric(ord) || ord %% 1 != 0)
stop("The first parameter has to be an integer")
if (!all(is.na(prefVec)) && !is.numeric(prefVec))
stop("The second parameter has to be a numeric vector")
if (!all(is.na(prefVec)) && length(prefVec)!=ord)
stop("The length of the second parameter has to be the same as the first")
if (granularityLow==TRUE) {
opt <- fs
} else {
opt <- fs2
}
# opt <- ifelse(granularityLow,fs,fs2)
if (is.na(prefVec[1]))
prefVec <- runif(ord)
mperfect <- outer(prefVec, prefVec, "/")
m <- bestM(mperfect, granularityLow)
# now creating a logical PCM
for (r in 1:(ord-1)) {
for (c in (r+1):ord) {
m1 <- which.min(abs(opt-m[r,c]))
m2 <- which.min(abs(opt[-m1]-m[r,c]))
m3 <- which.min(abs(opt[-c(m1,m2)]-m[r,c]))
# random choice from the nearest 3
allChoices <- choices <- c(m1, m2, m3)
if (m[r,c] >= 1) {
choices <- allChoices[opt[allChoices] >= 1]
} else if (m[r,c] < 1) {
choices <- allChoices[opt[allChoices] <= 1]
}
m[r,c] <- sample(opt[choices],1)
m[c,r] <- 1/m[r,c]
}
}
return(logicalPCM=m)
}
#' @title Saaty CR Consistency
#'
#' @description Computes and returns the Consistency Ratio for an input
#' PCM and its boolean status of consistency based on Consistency Ratio
#'
#' @param typePCM boolean flag indicating if the first argument is a PCM or a
#' vector of upper triangular elements
#' @param PCM A pairwise comparison matrix
#'
#' @returns A list of 3 elements, a boolean for the 'CR' consistency of the
#' input 'PCM', the 'CR' consistency value and the principal eigenvector
#' @importFrom stats runif
#' @examples
#' CR.pcm1 <- CR(matrix(
#' c(1,1,7,1,1, 1,1,5,1,1/3, 1/7,1/5,1,1/7,1/8, 1,1,7,1,1,
#' 1,3,8,1,1), nrow=5, byrow=TRUE))
#' CR.pcm1
#' CR.pcm1a <- CR(c(1,7,1,1, 5,1,1/3, 1/7,1/8, 1), typePCM=FALSE)
#' CR.pcm1a
#' CR.pcm2 <- CR(matrix(
#' c(1,1/4,1/4,7,1/5, 4,1,1,9,1/4, 4,1,1,8,1/4,
#' 1/7,1/9,1/8,1,1/9, 5,4,4,9,1), nrow=5, byrow=TRUE))
#' CR.pcm2
#' CR.pcm2a <- CR(c(1/4,1/4,7,1/5, 1,9,1/4, 8,1/4, 1/9),typePCM=FALSE)
#' CR.pcm2a
#' @export
CR <- function(PCM,typePCM=TRUE) {
if (!typePCM) {
if (!is.vector(PCM)) stop("Input is not a vector of pairwise ratios")
if (length(PCM)<3 | length(PCM)>66)
stop("Input vector is not of appropriate length for a
PCM of order 3 to 12")
PCM <- createPCM(PCM)
if (!is.matrix(PCM)) stop("Input vector does not have required values for
all upper triangular elements")
} else {
if (!is.matrix(PCM)) stop("Input is not a matrix")
if (nrow(PCM)!=ncol(PCM)) stop("Input is not a square matrix")
if (nrow(PCM)==2 | nrow(PCM)>12) stop("Input matrix should be
of order 3 upto 12")
if (notPCM(PCM)) stop("Input is not a positive reciprocal matrix")
}
CR <- ((Re(eigen(PCM)$values[1])-nrow(PCM))/(nrow(PCM)-1))/rin[nrow(PCM)]
CR <- ifelse(abs(CR)<eps,0,CR)
CRcons <- ifelse(CR<rth[nrow(PCM)],TRUE,FALSE)
ev <- Re(eigen(PCM)$vectors[,1])
return(list(CRconsistent=CRcons, CR=CR, eVec=ev))
}
#' @title Improve the CR consistency of a PCM
#'
#' @description For an input pairwise comparison matrix, PCM that is
#' inconsistent, this function returns a consistent PCM if possible,
#' with the relative preference for its alternatives as close as
#' possible to the original preferences, as in the principal right eigenvector.
#' @param PCM A pairwise comparison matrix
#' @param typePCM boolean flag indicating if the first argument is a PCM or a
#' vector of upper triangular elements
#' @returns A list of 4 elements, suggested PCM, a boolean for the CR
#' consistency of the input PCM, the CR consistency value, a boolean for the
#' CR consistency of the suggested PCM, the CR consistency value of the
#' suggested PCM
#' @importFrom stats runif
#' @examples
#' CR.suggest2 <- improveCR(matrix(
#' c(1,1/4,1/4,7,1/5, 4,1,1,9,1/4, 4,1,1,8,1/4,
#' 1/7,1/9,1/8,1,1/9, 5,4,4,9,1), nrow=5, byrow=TRUE))
#' CR.suggest2
#' CR.suggest2a <- improveCR(c(1/4,1/4,7,1/5, 1,9,1/4, 8,1/4, 1/9),
#' typePCM=FALSE)
#' CR.suggest2a
#' CR.suggest3 <- improveCR(matrix(
#' c(1,7,1,9,8, 1/7,1,1/6,7,9, 1,6,1,9,9, 1/9,1/7,1/9,1,5,
#' 1/8,1/9,1/9,1/5,1), nrow=5, byrow=TRUE))
#' CR.suggest3
#' @export
improveCR <- function(PCM,typePCM=TRUE) {
if (!typePCM) {
if (!is.vector(PCM)) stop("Input is not a vector of pairwise ratios")
if (length(PCM)<3 | length(PCM)>66) stop("Input vector is not of
appropriate length for a PCM of
order 3 to 12")
PCM <- createPCM(PCM)
if (!is.matrix(PCM)) stop("Input vector does not have required values for
all upper triangular elements")
} else {
if (!is.matrix(PCM)) stop("Input is not a matrix")
if (nrow(PCM)!=ncol(PCM)) stop("Input is not a square matrix")
if (nrow(PCM)==2 | nrow(PCM)>12) stop("Input matrix should be of order
3 upto 12")
if (notPCM(PCM)) stop("Input is not a positive reciprocal matrix")
}
CR <- ((Re(eigen(PCM)$values[1])-nrow(PCM))/(nrow(PCM)-1))/rin[nrow(PCM)]
CR <- ifelse(abs(CR)<eps,0,CR)
CRcons <- ifelse(CR<rth[nrow(PCM)],TRUE,FALSE)
#if (CRcons) stop("Input PCM is already CR consistent")
sPCM <- bestM(PCM)
sCR <- ((Re(eigen(sPCM)$values[1])-nrow(sPCM))/(nrow(sPCM)-1))/rin[nrow(sPCM)]
sCR <- ifelse(abs(sCR)<eps,0,sCR)
#if(sCR > rin[nrow(sPCM)])
# stop("Input PCM though not CR consistent cannot be improved")
sCRcons <- ifelse(sCR<rth[nrow(sPCM)],TRUE,FALSE)
return(list(suggestedPCM=sPCM, CR.originalConsistency=CRcons,
CR.original=CR, suggestedCRconsistent=sCRcons, suggestedCR=sCR))
}
#' @title Compute Sensitivity
#'
#' @description This function returns a sensitivity measure for an input
#' pairwise comparison matrix, PCM. Sensitivity is measured by Monte Carlo
#' simulation of 500 PCMs which are perturbations of the input PCM. The
#' perturbation algorithm makes a random choice from one of the 5 closest
#' items in the Fundamental Scale \{1/9, 1/8, ..... 1/2, 1, 2, ..... 8, 9\}
#' for each element in the PCM, ensuring that the pairwise reciprocity is
#' maintained. The sensitivity measure is the average Spearman's rank
#' correlation of the vector of ranks of the principal eigenvectors of
#' (i) the input PCM and (ii) the perturbed PCM. The average of the 500 such
#' rank correlations is reported as the measure of sensitivity.
#' @param PCM A pairwise comparison matrix
#' @param typePCM boolean flag indicating if the first argument is a PCM or a
#' vector of upper triangular elements
#' @param granularityLow The Scale for pairwise comparisons; default (TRUE)
#'   is the Fundamental Scale; otherwise a more fine-grained scale is used, derived
#' from pairwise ratios of the elements of the Fundamental Scale.
#' @returns The average Spearman's rank correlation between the principal
#' eigenvectors of the input and the perturbed 'PCMs'
#' @importFrom stats runif
#' @examples
#' sensitivity1 <- sensitivity(matrix(
#'    c(1,1/4,1/4,7,1/5, 4,1,1,9,1/4, 4,1,1,8,1/4,
#'      1/7,1/9,1/8,1,1/9, 5,4,4,9,1), nrow=5, byrow=TRUE))
#' sensitivity1
#' sensitivity2 <- sensitivity(matrix(
#' c(1,7,1,9,8, 1/7,1,1/6,7,9, 1,6,1,9,9, 1/9,1/7,1/9,1,5,
#' 1/8,1/9,1/9,1/5,1), nrow=5, byrow=TRUE))
#' sensitivity2
#' @export
sensitivity <- function(PCM,typePCM=TRUE,granularityLow=TRUE) {
if (!typePCM) {
if (!is.vector(PCM)) stop("Input is not a vector of pairwise ratios")
if (length(PCM)<3 | length(PCM)>66) stop("Input vector is not of
appropriate length for a PCM of
order 3 to 12")
PCM <- createPCM(PCM)
if (!is.matrix(PCM)) stop("Input vector does not have required values for
all upper triangular elements")
} else {
if (!is.matrix(PCM)) stop("Input is not a matrix")
if (nrow(PCM)!=ncol(PCM)) stop("Input is not a square matrix")
if (nrow(PCM)==2 | nrow(PCM)>12) stop("Input matrix should be of order
3 upto 12")
if (notPCM(PCM)) stop("Input is not a positive reciprocal matrix")
}
ev0 <- abs(Re(eigen(PCM)$vectors[,1]))
d0 <- rank(-ev0)
cs <- 0
for (i in 1:lim) {
c <- perturb(PCM, granularityLow)
ev <- abs(Re(eigen(c)$vectors[,1]))
d <- rank(-ev)
cs <- cs + stats::cor(d0, d, method="spearman")
}
meanCor <- cs / lim
return(meanCor)
}
#' @title Evaluate Revised Consistency
#'
#' @description This function returns the revised consistency classification
#' for a PCM, evaluated by comparison with the threshold of consistency for
#' intentional PCMs in the same preference heterogeneity quartile. The measure
#' for inconsistency is the geometric mean of ratios in comparison with the
#' corresponding benchmark PCM.
#'
#' @param PCM A pairwise comparison matrix
#' @param typePCM boolean flag indicating if the first argument is a PCM or a
#' vector of upper triangular elements
#' @returns A list of four elements,
#' revCons = the revised consistency classification,
#' inconGM = the Geometric Mean measure of inconsistency with the best 'PCM',
#' dQrtl = the preference heterogeneity quartile for the normalized
#' eigenvector, and diff = the preference heterogeneity measure
#' @importFrom stats runif
#' @examples
#' revCon1 <- revisedConsistency(matrix(
#' c(1,1/4,1/4,7,1/5, 4,1,1,9,1/4, 4,1,1,8,1/4,
#' 1/7,1/9,1/8,1,1/9, 5,4,4,9,1), nrow=5, byrow=TRUE))
#' revCon1
#' revCon2 <- revisedConsistency(c(7,1,9,8, 1/6,7,9, 9,9, 5), typePCM=FALSE)
#' revCon2
#' @export
revisedConsistency <- function(PCM,typePCM=TRUE) {
if (!typePCM) {
if (!is.vector(PCM)) stop("Input is not a vector of pairwise ratios")
if (length(PCM)<3 | length(PCM)>66)
stop("Input vector is not of appropriate length for a PCM of
order 3 to 12")
PCM <- createPCM(PCM)
if (!is.matrix(PCM))
stop("Input vector does not have required values for all
upper triangular elements")
} else {
if (!is.matrix(PCM)) stop("Input is not a matrix")
if (nrow(PCM)!=ncol(PCM)) stop("Input is not a square matrix")
if (nrow(PCM)==2 | nrow(PCM)>12)
stop("Input matrix should be of order 3 upto 12")
if (notPCM(PCM)) stop("Input is not a positive reciprocal matrix")
}
evector <- abs(Re(eigen(PCM)$vectors[,1]))
evector <- evector/sum(evector)
diff <- max(evector)[1] - min(evector)[1]
d <- as.numeric(unname(ec[ec$ord==nrow(PCM) & ec$type=='0D',2:5]))
inc <- ec[ec$ord==nrow(PCM) & ec$type=='1C',2:5]
  # This branch handles the case where min(which(d > diff)) would have an
  # empty argument, i.e. diff exceeds every quartile threshold in d
if (max(d)[1]<diff) {
dQrtl <- "Q4"
inconGM <- mDev(PCM, bestM(PCM))
inconsThreshold <- as.numeric(inc[1])
revCons <- inconGM <= inconsThreshold
} else {
column <- min(which(d>diff))
    # Guard against min(which(d > diff)) returning Inf (empty index set)
column <- ifelse(is.infinite(column),1,column)
inconsThreshold <- as.numeric(inc[column])
inconGM <- mDev(PCM, bestM(PCM))
dQrtl <- paste0("Q",column)
revCons <- inconGM <= inconsThreshold
}
return(list(revCons=revCons,inconGM=inconGM,dQrtl=dQrtl,diff=diff))
}
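A quick numerical sketch of the preference heterogeneity measure used above (diff = max minus min of the normalized principal eigenvector), written in plain Python with power iteration. This is illustrative only, not part of the package; the PCM values are made up.

```python
# Sketch: the heterogeneity measure 'diff' from revisedConsistency,
# computed via power iteration on a small pairwise comparison matrix.
def principal_eigenvector(m, iters=200):
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]       # keep the vector normalized to sum 1
    return w

pcm = [[1, 2, 4],
       [1/2, 1, 2],
       [1/4, 1/2, 1]]               # a perfectly consistent 3x3 PCM
w = principal_eigenvector(pcm)
diff = max(w) - min(w)              # the 'diff' returned by revisedConsistency
print(round(diff, 4))               # → 0.4286  (i.e. 4/7 - 1/7)
```

For a consistent PCM the eigenvector is proportional to the row geometric means, so the result can be checked by hand.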
| /scratch/gouwar.j/cran-all/cranData/AHPtools/R/AHPtools.R |
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----setup--------------------------------------------------------------------
library(AHPtools)
| /scratch/gouwar.j/cran-all/cranData/AHPtools/inst/doc/AHPtools.R |
---
title: "AHPtools"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{AHPtools}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(AHPtools)
```
| /scratch/gouwar.j/cran-all/cranData/AHPtools/inst/doc/AHPtools.Rmd |
---
title: "AHPtools"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{AHPtools}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(AHPtools)
```
| /scratch/gouwar.j/cran-all/cranData/AHPtools/vignettes/AHPtools.Rmd |
#--------------------------------------------------------------------------------------------------------------------------
#' New Generalized Log-logistic (NGLL) hazard function.
#--------------------------------------------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param zeta : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the NGLL hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hNGLL(t=t, kappa=0.5, alpha=0.35, eta=0.7, zeta=1.4, log=FALSE)
#'
hNGLL<- function(t,kappa,alpha,eta,zeta, log=FALSE){
pdf0 <- ((alpha*kappa)*((t*kappa)^(alpha-1)))/(1+zeta*((t*eta)^alpha))^(((kappa^alpha)/(zeta*(eta^alpha)))+1)
cdf0 <- (1-((1+zeta*((t*eta)^alpha))^(-((kappa^alpha)/(zeta*(eta^alpha))))))
cdf0 <- ifelse(cdf0==1,0.9999999,cdf0)
log.h <- log(pdf0) - log(1-cdf0)
ifelse(log, return(log.h), return(exp(log.h)))
}
#--------------------------------------------------------------------------------------------------------------------------
#' New Generalized Log-logistic (NGLL) cumulative hazard function.
#--------------------------------------------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param zeta : shape parameter
#' @param t : positive argument
#' @return the value of the NGLL cumulative hazard function
#' @references Hassan Muse, A. A new generalized log-logistic distribution with increasing, decreasing, unimodal and bathtub-shaped hazard rates: properties and applications, in Proceedings of the Symmetry 2021 - The 3rd International Conference on Symmetry, 8–13 August 2021, MDPI: Basel, Switzerland, doi:10.3390/Symmetry2021-10765.
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHNGLL(t=t, kappa=0.5, alpha=0.35, eta=0.7, zeta=1.4)
#'
CHNGLL <- function(t,kappa,alpha,eta,zeta){
cdf0 <- (1-((1+zeta*((t*eta)^alpha))^(-((kappa^alpha)/(zeta*(eta^alpha))))))
return(-log(1-cdf0))
}
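As a sanity check on the NGLL formulas above, the hazard should equal the derivative of the cumulative hazard, h(t) = dH(t)/dt. The sketch below transcribes the two R bodies to Python by hand (it is not part of the package) and compares the hazard against a central finite difference of H; parameter values follow the roxygen example.

```python
import math

# NGLL cumulative hazard, H(t) = c * log(1 + zeta*(eta*t)^alpha),
# with c = kappa^alpha / (zeta * eta^alpha), transcribed from CHNGLL.
def H_ngll(t, kappa, alpha, eta, zeta):
    c = (kappa ** alpha) / (zeta * eta ** alpha)
    return c * math.log(1 + zeta * (t * eta) ** alpha)

# NGLL hazard, pdf/sf, transcribed from hNGLL.
def h_ngll(t, kappa, alpha, eta, zeta):
    c = (kappa ** alpha) / (zeta * eta ** alpha)
    pdf = alpha * kappa * (t * kappa) ** (alpha - 1) \
        / (1 + zeta * (t * eta) ** alpha) ** (c + 1)
    sf = (1 + zeta * (t * eta) ** alpha) ** (-c)
    return pdf / sf

t, pars = 0.8, (0.5, 0.35, 0.7, 1.4)
eps = 1e-6
num = (H_ngll(t + eps, *pars) - H_ngll(t - eps, *pars)) / (2 * eps)
print(abs(num - h_ngll(t, *pars)) < 1e-5)   # hazard matches dH/dt
```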
#----------------------------------------------------------------------------------------
#' Kumaraswamy Weibull (KW) Hazard Function.
#----------------------------------------------------------------------------------------
#' @param alpha : scale parameter
#' @param kappa : shape parameter
#' @param eta : shape parameter
#' @param zeta : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the KW hazard function
#' @references Cordeiro, G. M., Ortega, E. M., & Nadarajah, S. (2010). The Kumaraswamy Weibull distribution with application to failure data. Journal of the Franklin Institute, 347(8), 1399-1429.
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hKW(t=t, alpha=0.35, kappa=0.5, eta=1.20, zeta=1.5, log=FALSE)
#'
hKW <- function(t,alpha,kappa,eta,zeta,log=FALSE){
log.h <- (log(kappa)+log(zeta)+log(eta)+log(alpha)+((zeta-1)*log(1-exp(-alpha*t^kappa)))+((kappa-1)*log(t)+log(exp(-alpha*t^kappa))))-(log(1-(1-exp(-alpha*t^kappa))^zeta))
ifelse(log, return(log.h), return(exp(log.h)))
}
#----------------------------------------------------------------------------------------
#' Kumaraswamy Weibull (KW) Cumulative Hazard Function.
#----------------------------------------------------------------------------------------
#' @param alpha : scale parameter
#' @param kappa : shape parameter
#' @param eta : shape parameter
#' @param zeta : shape parameter
#' @param t : positive argument
#' @return the value of the KW cumulative hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHKW(t=t, alpha=0.35, kappa=0.5, eta=1.20, zeta=1.5)
#'
CHKW<- function(t,alpha,kappa,eta,zeta){
sf <- (1-(1-exp(-alpha*t^kappa))^zeta)^eta
return(-log(sf))
}
#--------------------------------------------------------------------------------------------------------------------------
#' Generalized Log-logistic (GLL) hazard function.
#--------------------------------------------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the GLL hazard function
#' @references Muse, A. H., Mwalili, S., Ngesa, O., Alshanbari, H. M., Khosa, S. K., & Hussam, E. (2022). Bayesian and frequentist approach for the generalized log-logistic accelerated failure time model with applications to larynx-cancer patients. Alexandria Engineering Journal, 61(10), 7953-7978.
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hGLL(t=t, kappa=0.5, alpha=0.35, eta=0.7, log=FALSE)
#'
hGLL<- function(t, kappa,alpha,eta, log = FALSE){
val<-log(kappa)+log(alpha)+(alpha-1)*log(kappa*t)-log(1+(eta*t)^alpha)
if(log) return(val) else return(exp(val))
}
#--------------------------------------------------------------------------------------------------------------------------
#' Generalized Log-logistic (GLL) cumulative hazard function.
#--------------------------------------------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @return the value of the GLL cumulative hazard function
#' @references Muse, A. H., Mwalili, S., Ngesa, O., Almalki, S. J., & Abd-Elmougod, G. A. (2021). Bayesian and classical inference for the generalized log-logistic distribution with applications to survival data. Computational intelligence and neuroscience, 2021.
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHGLL(t=t, kappa=0.5, alpha=0.35, eta=0.9)
#'
CHGLL <- function(t, kappa,alpha, eta){
val <- ((kappa^alpha)/(eta^alpha))*log(1+((eta*t)^alpha))
return(val)
}
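Two known limiting properties of the GLL family can be checked numerically from CHGLL alone: with eta equal to kappa the cumulative hazard reduces to the standard log-logistic H(t) = log(1 + (kappa*t)^alpha), and as eta tends to 0 it approaches the Weibull cumulative hazard (kappa*t)^alpha. The sketch below (illustrative values, not part of the package) verifies both:

```python
import math

# GLL cumulative hazard, transcribed from CHGLL above.
def H_gll(t, kappa, alpha, eta):
    return (kappa ** alpha / eta ** alpha) * math.log(1 + (eta * t) ** alpha)

t, kappa, alpha = 0.8, 0.5, 2.0

# eta == kappa: standard log-logistic cumulative hazard.
ll = H_gll(t, kappa, alpha, eta=kappa)
print(abs(ll - math.log(1 + (kappa * t) ** alpha)) < 1e-12)

# eta -> 0: Weibull limit, H(t) -> (kappa*t)^alpha.
wb = H_gll(t, kappa, alpha, eta=1e-5)
print(abs(wb - (kappa * t) ** alpha) < 1e-6)
```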
#----------------------------------------------------------------------------------------
#' Modified Kumaraswamy Weibull (MKW) Hazard Function.
#----------------------------------------------------------------------------------------
#' @param alpha : inverse scale parameter
#' @param kappa : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the MKW hazard function
#' @references Khosa, S. K. (2019). Parametric Proportional Hazard Models with Applications in Survival analysis (Doctoral dissertation, University of Saskatchewan).
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hMKW(t=t, alpha=0.35, kappa=0.7, eta=1.4, log=FALSE)
#'
hMKW <- function(t,alpha,kappa,eta,log=FALSE){
log.h <- (log(kappa)+log(eta)+log(alpha)+((eta-1)*log(1-exp(-t^kappa)))+((kappa-1)*log(t)+log(exp(-t^kappa))))-(log(1-(1-exp(-t^kappa))^eta))
ifelse(log, return(log.h), return(exp(log.h)))
}
#----------------------------------------------------------------------------------------
#' Modified Kumaraswamy Weibull (MKW) Cumulative Hazard Function.
#----------------------------------------------------------------------------------------
#' @param alpha : Inverse scale parameter
#' @param kappa : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @return the value of the MKW cumulative hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHMKW(t=t,alpha=0.35, kappa=0.7, eta=1.4)
#'
CHMKW<- function(t,alpha,kappa,eta){
sf <- (1-(1-exp(-t^kappa))^eta)^alpha
return(-log(sf))
}
#----------------------------------------------------------------------------------------
#' Exponentiated Weibull (EW) Probability Density Function.
#----------------------------------------------------------------------------------------
#' @param lambda : scale parameter
#' @param kappa : shape parameter
#' @param alpha : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the EW probability density function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' dexpweibull(t=t, lambda=0.6,kappa=0.5, alpha=0.45, log=FALSE)
#'
dexpweibull<- function(t,lambda,kappa,alpha,log=FALSE){
log.pdf <- log(alpha) + (alpha-1)*pweibull(t,scale=lambda,shape=kappa,log.p=TRUE) +
dweibull(t,scale=lambda,shape=kappa,log=TRUE)
ifelse(log, return(log.pdf), return(exp(log.pdf)))
}
#----------------------------------------------------------------------------------------
#' Exponentiated Weibull (EW) Cumulative Distribution Function.
#----------------------------------------------------------------------------------------
#' @param lambda : scale parameter
#' @param kappa : shape parameter
#' @param alpha : shape parameter
#' @param t : positive argument
#' @param log.p :log scale (TRUE or FALSE)
#' @return the value of the EW cumulative distribution function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' pexpweibull(t=t, lambda=0.65,kappa=0.45, alpha=0.25, log.p=FALSE)
#'
pexpweibull<- function(t,lambda,kappa,alpha,log.p=FALSE){
log.cdf <- alpha*pweibull(t,scale=lambda,shape=kappa,log.p=TRUE)
ifelse(log.p, return(log.cdf), return(exp(log.cdf)))
}
#----------------------------------------------------------------------------------------
#' Exponentiated Weibull (EW) Hazard Function.
#----------------------------------------------------------------------------------------
#' @param lambda : scale parameter
#' @param kappa : shape parameter
#' @param alpha : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the EW hazard function
#' @references Khan, S. A. (2018). Exponentiated Weibull regression for time-to-event data. Lifetime data analysis, 24(2), 328-354.
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hEW(t=t, lambda=0.9, kappa=0.5, alpha=0.75, log=FALSE)
#'
hEW <- function(t,lambda,kappa,alpha,log=FALSE){
log.pdf <- log(alpha) + (alpha-1)*pweibull(t,scale=lambda,shape=kappa,log.p=TRUE) +
dweibull(t,scale=lambda,shape=kappa,log=TRUE)
cdf <- exp(alpha*pweibull(t,scale=lambda,shape=kappa,log.p=TRUE) )
log.h <- log.pdf - log(1-cdf)
ifelse(log, return(log.h), return(exp(log.h)))
}
#----------------------------------------------------------------------------------------
#' Exponentiated Weibull (EW) Cumulative Hazard Function.
#----------------------------------------------------------------------------------------
#' @param lambda : scale parameter
#' @param kappa : shape parameter
#' @param alpha : shape parameter
#' @param t : positive argument
#' @return the value of the EW cumulative hazard function
#' @references Rubio, F. J., Remontet, L., Jewell, N. P., & Belot, A. (2019). On a general structure for hazard-based regression models: an application to population-based cancer research. Statistical methods in medical research, 28(8), 2404-2417.
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHEW(t=t, lambda=0.9, kappa=0.5, alpha=0.75)
#'
CHEW<- function(t,lambda,kappa,alpha){
cdf <- exp(alpha*pweibull(t,scale=lambda,shape=kappa,log.p=TRUE) )
return(-log(1-cdf))
}
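The four EW functions above are mutually consistent: h(t) = f(t)/(1-F(t)) and H(t) = -log(1-F(t)), where F is the Weibull CDF raised to the power alpha. The sketch below rebuilds the pieces in plain Python (a hand transcription, not the package code) and checks that the hazard matches a finite difference of the cumulative hazard; parameter values are illustrative.

```python
import math

def F_w(t, lam, kap):        # Weibull cdf
    return 1 - math.exp(-(t / lam) ** kap)

def f_w(t, lam, kap):        # Weibull pdf
    return (kap / lam) * (t / lam) ** (kap - 1) * math.exp(-(t / lam) ** kap)

def f_ew(t, lam, kap, alpha):           # EW pdf, as in dexpweibull
    return alpha * F_w(t, lam, kap) ** (alpha - 1) * f_w(t, lam, kap)

def H_ew(t, lam, kap, alpha):           # EW cumulative hazard, as in CHEW
    return -math.log(1 - F_w(t, lam, kap) ** alpha)

t, pars = 0.6, (0.9, 0.5, 0.75)
h = f_ew(t, *pars) / (1 - F_w(t, *pars[:2]) ** pars[2])   # h = f/(1-F), as in hEW
eps = 1e-6
num = (H_ew(t + eps, *pars) - H_ew(t - eps, *pars)) / (2 * eps)
print(abs(num - h) < 1e-5)
```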
#--------------------------------------------------------------------------------------------------------------------------
#' Modified Log-logistic (MLL) hazard function.
#--------------------------------------------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the MLL hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hMLL(t=t, kappa=0.75, alpha=0.5, eta=0.9,log=FALSE)
#'
hMLL<- function(t,kappa,alpha,eta,log=FALSE){
log.h <- log(kappa*(kappa*t)^(alpha-1)*exp(eta*t)*(alpha+eta*t)/(1+((kappa*t)^alpha)*exp(eta*t)))
ifelse(log, return(log.h), return(exp(log.h)))
}
#--------------------------------------------------------------------------------------------------------------------------
#' Modified Log-logistic (MLL) cumulative hazard function.
#--------------------------------------------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @return the value of the MLL cumulative hazard function
#' @references Kayid, M. (2022). Applications of Bladder Cancer Data Using a Modified Log-Logistic Model. Applied Bionics and Biomechanics, 2022.
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHMLL(t=t, kappa=0.75, alpha=0.5, eta=0.9)
#'
CHMLL<- function(t,kappa,alpha,eta){
sf <- 1/(1+((kappa*t)^alpha)*exp(eta*t))
return(-log(sf))
}
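For the MLL pair above, differentiating H(t) = log(1 + (kappa*t)^alpha * exp(eta*t)) gives exactly the closed-form hazard in hMLL. A small numerical sketch (hand transcription, illustrative parameters) confirms this:

```python
import math

# MLL cumulative hazard, from CHMLL: H(t) = log(1 + (kappa*t)^alpha * e^(eta*t)).
def H_mll(t, kappa, alpha, eta):
    return math.log(1 + (kappa * t) ** alpha * math.exp(eta * t))

# MLL hazard, from hMLL.
def h_mll(t, kappa, alpha, eta):
    g = (kappa * t) ** alpha * math.exp(eta * t)
    return kappa * (kappa * t) ** (alpha - 1) * math.exp(eta * t) \
        * (alpha + eta * t) / (1 + g)

t, pars = 0.7, (0.75, 0.5, 0.9)
eps = 1e-6
num = (H_mll(t + eps, *pars) - H_mll(t - eps, *pars)) / (2 * eps)
print(abs(num - h_mll(t, *pars)) < 1e-5)
```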
#--------------------------------------------------------------------------------------------------------------------------
#' Power Generalised Weibull (PGW) hazard function.
#--------------------------------------------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the PGW hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hPGW(t=t, kappa=0.5, alpha=1.5, eta=0.6,log=FALSE)
#'
hPGW <- function(t, kappa,alpha, eta, log = FALSE){
val <- log(alpha) - log(eta) - alpha*log(kappa) + (alpha-1)*log(t) +
(1/eta - 1)*log( 1 + (t/kappa)^alpha )
if(log) return(val) else return(exp(val))
}
#--------------------------------------------------------------------------------------------------------------------------
#' Power Generalised Weibull (PGW) cumulative hazard function.
#--------------------------------------------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @return the value of the PGW cumulative hazard function
#' @references Alvares, D., & Rubio, F. J. (2021). A tractable Bayesian joint model for longitudinal and survival data. Statistics in Medicine, 40(19), 4213-4229.
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHPGW(t=t, kappa=0.5, alpha=1.5, eta=0.6)
#'
CHPGW <- function(t, kappa, alpha, eta){
val <- -1 + ( 1 + (t/kappa)^alpha)^(1/eta)
return(val)
}
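Likewise for the PGW pair: d/dt of H(t) = (1 + (t/kappa)^alpha)^(1/eta) - 1 equals the hazard coded in hPGW. The check below is a hand transcription with illustrative parameter values, not package code:

```python
import math

# PGW cumulative hazard, from CHPGW.
def H_pgw(t, kappa, alpha, eta):
    return (1 + (t / kappa) ** alpha) ** (1 / eta) - 1

# PGW hazard, from hPGW: (alpha/(eta*kappa^alpha)) t^(alpha-1) (1+(t/kappa)^alpha)^(1/eta - 1).
def h_pgw(t, kappa, alpha, eta):
    return (alpha / (eta * kappa ** alpha)) * t ** (alpha - 1) * \
        (1 + (t / kappa) ** alpha) ** (1 / eta - 1)

t, pars = 0.4, (0.5, 1.5, 0.6)
eps = 1e-6
num = (H_pgw(t + eps, *pars) - H_pgw(t - eps, *pars)) / (2 * eps)
print(abs(num - h_pgw(t, *pars)) < 1e-5)
```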
#----------------------------------------------------------------------------------------
#' Generalised Gamma (GG) Probability Density Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the GG probability density function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' dggamma(t=t, kappa=0.5, alpha=0.35, eta=0.9,log=FALSE)
#'
dggamma <- function(t, kappa, alpha, eta, log = FALSE){
val <- log(eta) - alpha*log(kappa) - lgamma(alpha/eta) + (alpha - 1)*log(t) -
(t/kappa)^eta
if(log) return(val) else return(exp(val))
}
#----------------------------------------------------------------------------------------
#' Generalised Gamma (GG) Cumulative Distribution Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @param log.p :log scale (TRUE or FALSE)
#' @return the value of the GG cumulative distribution function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' pggamma(t=t, kappa=0.5, alpha=0.35, eta=0.9,log.p=FALSE)
#'
pggamma <- function(t, kappa, alpha, eta, log.p = FALSE){
val <- pgamma( t^eta, shape = alpha/eta, scale = kappa^eta, log.p = TRUE)
if(log.p) return(val) else return(exp(val))
}
#----------------------------------------------------------------------------------------
#' Generalised Gamma (GG) Survival Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @param log.p :log scale (TRUE or FALSE)
#' @return the value of the GG survival function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' sggamma(t=t, kappa=0.5, alpha=0.35, eta=0.9,log.p=FALSE)
#'
sggamma <- function(t, kappa, alpha, eta, log.p = FALSE){
val <- pgamma( t^eta, shape = alpha/eta, scale = kappa^eta, log.p = TRUE, lower.tail = FALSE)
if(log.p) return(val) else return(exp(val))
}
#----------------------------------------------------------------------------------------
#' Generalised Gamma (GG) Hazard Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the GG hazard function
#' @references Agarwal, S. K., & Kalla, S. L. (1996). A generalized gamma distribution and its application in reliability. Communications in Statistics-Theory and Methods, 25(1), 201-210.
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hGG(t=t, kappa=0.5, alpha=0.35, eta=0.9,log=FALSE)
#'
hGG <- function(t, kappa, alpha, eta, log = FALSE){
val <- dggamma(t, kappa, alpha, eta, log = TRUE) - sggamma(t, kappa, alpha, eta, log.p = TRUE)
if(log) return(val) else return(exp(val))
}
#----------------------------------------------------------------------------------------
#' Generalised Gamma (GG) Cumulative Hazard Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param eta : shape parameter
#' @param t : positive argument
#' @return the value of the GG cumulative hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHGG(t=t, kappa=0.5, alpha=0.35, eta=0.9)
#'
CHGG <- function(t, kappa, alpha, eta){
val <- -pgamma( t^eta, shape = alpha/eta, scale = kappa^eta, log.p = TRUE, lower.tail = FALSE)
return(val)
}
#----------------------------------------------------------------------------------------
#' Log-logistic (LL) Hazard Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the LL hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hLL(t=t, kappa=0.5, alpha=0.35,log=FALSE)
#'
hLL<- function(t,kappa,alpha, log = FALSE){
pdf0 <- dllogis(t,shape=alpha,scale=kappa)
cdf0 <- pllogis(t,shape=alpha,scale=kappa)
val<-log(pdf0)-log(1-cdf0)
if(log) return(val) else return(exp(val))
}
#----------------------------------------------------------------------------------------
#' Log-logistic (LL) Cumulative Hazard Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param t : positive argument
#' @return the value of the LL cumulative hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHLL(t=t, kappa=0.5, alpha=0.35)
#'
CHLL<- function(t,kappa,alpha){
cdf <- pllogis(t,shape=alpha,scale=kappa)
val<--log(1-cdf)
return(val)
}
#----------------------------------------------------------------------------------------
#' Weibull (W) Hazard Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the W hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hW(t=t, kappa=0.75, alpha=0.5,log=FALSE)
#'
hW<- function(t,kappa,alpha, log = FALSE){
val<- log(alpha*kappa*t^(alpha-1))
if(log) return(val) else return(exp(val))
}
#----------------------------------------------------------------------------------------
#' Weibull (W) Cumulative Hazard Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param t : positive argument
#' @return the value of the W cumulative hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHW(t=t, kappa=0.75, alpha=0.5)
#'
CHW<- function(t,kappa,alpha){
val <- kappa*t^alpha
return(val)
}
#----------------------------------------------------------------------------------------
#' Lognormal (LN) Hazard Function.
#----------------------------------------------------------------------------------------
#' @param kappa : meanlog parameter
#' @param alpha : sdlog parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the LN hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hLN(t=t, kappa=0.5, alpha=0.75,log=FALSE)
#'
hLN <- function(t,kappa,alpha, log = FALSE){
pdf0 <- dlnorm(t,meanlog=kappa,sdlog=alpha)
cdf0 <- plnorm(t,meanlog=kappa,sdlog=alpha)
val<-log(pdf0)-log(1-cdf0)
if(log) return(val) else return(exp(val))
}
#----------------------------------------------------------------------------------------
#' Lognormal (LN) Cumulative Hazard Function.
#----------------------------------------------------------------------------------------
#' @param kappa : meanlog parameter
#' @param alpha : sdlog parameter
#' @param t : positive argument
#' @return the value of the LN cumulative hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHLN(t=t, kappa=0.75, alpha=0.95)
#'
CHLN<- function(t,kappa,alpha){
cdf <- plnorm(t,meanlog=kappa,sdlog=alpha)
val<--log(1-cdf)
return(val)
}
#----------------------------------------------------------------------------------------
#' Burr-XII (BXII) Hazard Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the BXII hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hBXII(t=t, kappa=0.85, alpha=0.45,log=FALSE)
#'
hBXII<- function(t,kappa,alpha, log = FALSE){
h0<-(alpha*kappa*t^(kappa-1))/(1+t^kappa)
val <- log(h0)
if(log) return(val) else return(exp(val))
}
#----------------------------------------------------------------------------------------
#' Burr-XII (BXII) Cumulative Hazard Function.
#----------------------------------------------------------------------------------------
#' @param kappa : scale parameter
#' @param alpha : shape parameter
#' @param t : positive argument
#' @return the value of the BXII cumulative hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHBXII(t=t, kappa=0.5, alpha=0.35)
#'
CHBXII<- function(t,kappa,alpha){
cdf0 <- (1-((1+t^kappa))^(-alpha))
H0<--log(1-cdf0)
return(H0)
}
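The Burr-XII pair follows the same pattern: CHBXII simplifies to H(t) = alpha * log(1 + t^kappa), whose derivative is the hazard in hBXII. A brief numerical sketch (hand transcription, illustrative parameters):

```python
import math

# Burr-XII cumulative hazard: H(t) = alpha * log(1 + t^kappa), equivalent
# to -log(1 - cdf) with cdf = 1 - (1 + t^kappa)^(-alpha) as in CHBXII.
def H_bxii(t, kappa, alpha):
    return alpha * math.log(1 + t ** kappa)

# Burr-XII hazard, from hBXII.
def h_bxii(t, kappa, alpha):
    return alpha * kappa * t ** (kappa - 1) / (1 + t ** kappa)

t, kappa, alpha = 0.5, 0.85, 0.45
eps = 1e-6
num = (H_bxii(t + eps, kappa, alpha) - H_bxii(t - eps, kappa, alpha)) / (2 * eps)
print(abs(num - h_bxii(t, kappa, alpha)) < 1e-5)
```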
#----------------------------------------------------------------------------------------
#' Gamma (G) Hazard Function.
#----------------------------------------------------------------------------------------
#' @param shape : shape parameter
#' @param scale : scale parameter
#' @param t : positive argument
#' @param log :log scale (TRUE or FALSE)
#' @return the value of the G hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' hG(t=t, shape=0.5, scale=0.85,log=FALSE)
#'
hG <- function(t, shape, scale, log = FALSE){
  lpdf0 <- dgamma(t, shape = shape, scale = scale, log = TRUE)
  ls0 <- pgamma(t, shape = shape, scale = scale, lower.tail = FALSE, log.p = TRUE)
val <- lpdf0 - ls0
if(log) return(val) else return(exp(val))
}
#----------------------------------------------------------------------------------------
#' Gamma (G) Cumulative Hazard Function.
#----------------------------------------------------------------------------------------
#' @param shape : shape parameter
#' @param scale : scale parameter
#' @param t : positive argument
#' @return the value of the G cumulative hazard function
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' t=runif(10,min=0,max=1)
#' CHG(t=t, shape=0.85, scale=0.5)
#'
CHG <- function(t, shape, scale){
H0 <- -pgamma(t, shape = shape, scale = scale, lower.tail = FALSE, log.p = TRUE)
return(H0)
}
###############################################################################################
###############################################################################################
###############################################################################################
#' Overall Survival AH model.
###############################################################################################
###############################################################################################
###############################################################################################
########################################################################################################
#' @description Maximum likelihood estimation, log-likelihood, and information criteria for the flexible parametric accelerated hazards (AH) model.
#' Baseline hazards: NGLL, GLL, KW, EW, MLL, PGW, GG, MKW, log-logistic, Weibull, log-normal, Burr-XII, and gamma
########################################################################################################
#' @param init : initial points for optimisation
#' @param z : design matrix for covariates (p x n), p >= 1
#' @param delta : vital status indicator (0 = alive, 1 = dead)
#' @param time : survival times
#' @param basehaz : baseline hazard structure, one of
#' ("NGLLAH", "GLLAH", "EWAH", "KWAH", "MLLAH", "PGWAH", "GGAH",
#' "MKWAH", "LLAH", "WAH", "GAH", "LNAH", "BXIIAH")
#' @param method :"nlminb" or a method from "optim"
#' @param n : the number of observations in the data set
#' @param maxit : the maximum number of iterations; defaults to 1000
#' @param log :log scale (TRUE or FALSE)
#' @details The function AHMLE returns MLE estimates and information criteria.
#' @format By default the function calculates the following values:
#' \itemize{
#' \item AIC: Akaike Information Criterion;
#' \item CAIC: Consistent Akaike Information Criterion;
#' \item BIC: Bayesian Information Criterion;
#' \item BCAIC: Bozdogan’s Consistent Akaike Information Criterion;
#' \item HQIC: Hannan-Quinn information criterion;
#' \item par: maximum likelihood estimates;
#' \item Value: value of the likelihood function;
#' \item Convergence: 0 indicates successful completion and 1 indicates that the iteration limit maxit was reached.
#' }
#' @return a list containing the output of the optimisation (OPT) and the information criteria (AIC, BIC, CAIC, BCAIC, and HQIC).
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' #Example #1
#' data(ipass)
#' time<-ipass$time
#' delta<-ipass$status
#' z<-ipass$arm
#' AHMLE(init = c(1.0,1.0,1.0,0.5),time = time,delta = delta,n=nrow(z),
#' basehaz = "GLLAH",z = z,method = "Nelder-Mead",
#' maxit = 1000)
#'
#' #Example #2
#' data(bmt)
#' time<-bmt$Time
#' delta<-bmt$Status
#' z<-bmt$TRT
#' AHMLE(init = c(1.0,1.0,1.0,0.5),time = time,delta = delta,n=nrow(z),
#' basehaz = "GLLAH",z = z,method = "Nelder-Mead",
#' maxit = 1000)
#'
#'#Example #3
#'data("e1684")
#'time<-e1684$FAILTIME
#'delta<-e1684$FAILCENS
#'TRT<-e1684$TRT
#'AGE<-e1684$AGE
#'z<-as.matrix(cbind(scale(TRT), scale(AGE) ))
#'AHMLE(init = c(1.0,1.0,1.0,0.5,0.75),time = time,delta = delta,n=nrow(z),
#'basehaz = "GLLAH",z = z,method = "Nelder-Mead",maxit = 1000)
#'
#'#Example #4
#'data("LeukSurv")
#'time<-LeukSurv$time
#'delta<-LeukSurv$cens
#'age<-LeukSurv$age
#'wbc<-LeukSurv$wbc
#'tpi<-LeukSurv$tpi
#'z<-as.matrix(cbind(scale(age), scale(tpi),scale(wbc) ))
#'AHMLE(init = c(1.0,1.0,1.0,1.0,0.5,0.65,0.85),time = time,delta = delta,n=nrow(z),
#'basehaz = "NGLLAH",z = z,method = "Nelder-Mead",maxit = 1000)
#'
AHMLE<- function(init, time, delta,n, basehaz, z, method = "Nelder-Mead", maxit = 1000,log=FALSE){
# Required variables
time <- as.vector(time)
delta <- as.vector(as.logical(delta))
z <- as.matrix(z)
n<-nrow(z)
time.obs <- time[delta]
if(!is.null(z)) z.obs <- z[delta,]
p0 <- dim(z)[2]
# NGLL - AH Model
if(basehaz == "NGLLAH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); de0<-exp(par[4]);beta <- par[5:(4+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hNGLL(time.obs*exp.z.beta.obs,ae0,be0,ce0,de0, log = TRUE)
val <- - sum(lhaz0) + sum(CHNGLL(time*exp.z.beta,ae0,be0,ce0,de0)/exp.z.beta)
return(sum(val))
}
}
# KW- AH Model
if(basehaz == "KWAH"){
log.lik <- function(par){
ae0 <- par[1]; be0 <- par[2]; ce0 <- par[3]; de0<-par[4];beta <- par[5:(4+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hKW(time.obs*exp.z.beta.obs,ae0,be0,ce0,de0, log = TRUE)
val <- - sum(lhaz0) + sum(CHKW(time*exp.z.beta,ae0,be0,ce0,de0)/exp.z.beta)
return(sum(val))
}
}
# GLL - AH Model
if(basehaz == "GLLAH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hGLL(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = TRUE)
val <- - sum(lhaz0) + sum(CHGLL(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# EW - AH Model
if(basehaz == "EWAH"){
log.lik <- function(par){
ae0 <- par[1]; be0 <- par[2]; ce0 <- par[3]; beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hEW(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = TRUE)
val <- - sum(lhaz0) + sum(CHEW(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# MLL - AH Model
if(basehaz == "MLLAH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hMLL(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = TRUE)
val <- - sum(lhaz0) + sum(CHMLL(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# PGW - AH Model
if(basehaz == "PGWAH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hPGW(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = TRUE)
val <- - sum(lhaz0) + sum(CHPGW(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# GG - AH Model
if(basehaz == "GGAH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hGG(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = TRUE)
val <- - sum(lhaz0) + sum(CHGG(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# MKW - AH Model
if(basehaz == "MKWAH"){
log.lik <- function(par){
ae0 <- par[1]; be0 <- par[2]; ce0<-par[3]; beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hMKW(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = TRUE)
val <- - sum(lhaz0) + sum(CHMKW(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# Loglogistic - AH Model
if(basehaz == "LLAH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); beta <- par[3:(2+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hLL(time.obs*exp.z.beta.obs,ae0,be0, log = TRUE)
val <- - sum(lhaz0) + sum(CHLL(time*exp.z.beta,ae0,be0)/exp.z.beta)
return(sum(val))
}
}
# Weibull - AH Model
if(basehaz == "WAH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); beta <- par[3:(2+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hW(time.obs*exp.z.beta.obs,ae0,be0, log = TRUE)
val <- - sum(lhaz0) + sum(CHW(time*exp.z.beta,ae0,be0)/exp.z.beta)
return(sum(val))
}
}
# Gamma - AH Model
if(basehaz == "GAH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); beta <- par[3:(2+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hG(time.obs*exp.z.beta.obs,ae0,be0, log = TRUE)
val <- - sum(lhaz0) + sum(CHG(time*exp.z.beta,ae0,be0)/exp.z.beta)
return(sum(val))
}
}
# Lognormal - AH Model
if(basehaz == "LNAH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); beta <- par[3:(2+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hLN(time.obs*exp.z.beta.obs,ae0,be0, log = TRUE)
val <- - sum(lhaz0) + sum(CHLN(time*exp.z.beta,ae0,be0)/exp.z.beta)
return(sum(val))
}
}
# BURXII - AH Model
if(basehaz == "BXIIAH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); beta <- par[3:(2+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- hBXII(time.obs*exp.z.beta.obs,ae0,be0, log = TRUE)
val <- - sum(lhaz0) + sum(CHBXII(time*exp.z.beta,ae0,be0)/exp.z.beta)
return(sum(val))
}
}
if(method != "nlminb") OPT <- optim(init,log.lik,control=list(maxit=maxit), method = method)
if(method == "nlminb") OPT <- nlminb(init,log.lik,control=list(iter.max=maxit))
p=length(OPT$par)
  # nlminb() returns the minimised objective as $objective, optim() as $value
  l=if(method == "nlminb") OPT$objective else OPT$value
AIC=2*l + 2*p
  # small-sample correction uses the parameter count: AICc = AIC + 2p(p+1)/(n-p-1)
  CAIC=AIC+(2*p*(p+1)/(n-p-1))
HQIC= 2*l+2*log(log(n))*p
BCAIC=2*l+(p*(log(n)+1))
BIC=(2*l)+(p*(log(n)))
  result = (list("AIC" = AIC, "CAIC" = CAIC,
"BIC" = BIC, "HQIC" = HQIC, "BCAIC" = BCAIC))
OUT <- list(OPT = OPT, result=result)
return(OUT)
}
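# Illustrative sketch (wrapped in `if (FALSE)` so nothing is executed at
# package load time): how the pieces of the AHMLE() return value fit together.
# The input objects (`time`, `delta`, `z`) are assumed to come from one of the
# bundled data sets, as in the roxygen examples above.
if (FALSE) {
  fit <- AHMLE(init = c(1.0, 1.0, 1.0, 0.5), time = time, delta = delta,
               n = length(time), basehaz = "GLLAH", z = z,
               method = "Nelder-Mead", maxit = 1000)
  fit$OPT$par          # maximum likelihood estimates (baseline parameters, then betas)
  fit$OPT$convergence  # 0 indicates successful completion (optim)
  fit$result$AIC       # one of the information criteria: AIC, CAIC, BIC, HQIC, BCAIC
}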
###############################################################################################
###############################################################################################
###############################################################################################
#' Relative Survival AH model.
###############################################################################################
###############################################################################################
###############################################################################################
########################################################################################################
#' @description Maximum likelihood estimation, log-likelihood, and information criteria for the flexible parametric accelerated excess hazards (AEH) model.
#' Baseline hazards: NGLL, GLL, KW, EW, MLL, PGW, GG, MKW, Log-logistic, Weibull, Log-normal, Burr-XII, and Gamma
########################################################################################################
#' @param init : initial points for optimisation
#' @param z : design matrix for covariates (p x n), p >= 1
#' @param delta : vital indicator (0 - alive, 1 - dead)
#' @param time : survival times
#' @param basehaz : {baseline hazard structure; one of
#'                  (NGLLAEH, GLLAEH, EWAEH, KWAEH, MLLAEH,
#'                   PGWAEH, GGAEH, MKWAEH, LLAEH, WAEH, GAEH,
#'                   LNAEH, BXIIAEH)}
#' @param hp.obs : population hazards (for uncensored individuals)
#' @param n : The number of the observations of the data set
#' @param method :"nlminb" or a method from "optim"
#' @param log :log scale (TRUE or FALSE)
#' @param maxit :The maximum number of iterations. Defaults to 1000
#' @format By default the function calculates the following values:
#' \itemize{
#' \item AIC: Akaike Information Criterion;
#' \item CAIC: Consistent Akaike's Information Criterion;
#' \item BIC: Bayesian Information Criterion;
#' \item BCAIC: Bozdogan’s Consistent Akaike Information Criterion;
#' \item HQIC: Hannan-Quinn information criterion;
#' \item par: maximum likelihood estimates;
#' \item Value: value of the likelihood function;
#' \item Convergence: 0 indicates successful completion and 1 indicates that the iteration limit maxit had been reached.
#' }
#' @return a list containing the output of the optimisation (OPT) and the information criteria (AIC, BIC, CAIC, BCAIC, and HQIC).
#' @export
#'
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#'
#' @examples
#' data(bmt)
#' time<-bmt$Time
#' delta<-bmt$Status
#' z<-bmt$TRT
#' AEHMLE(init = c(1.0,0.5,1.0,0.5),time = time,delta = delta,n=nrow(z),
#' basehaz = "GLLAEH",z = z,hp.obs=0.6,method = "Nelder-Mead",
#' maxit = 1000)
#'
AEHMLE <- function(init, time, delta, n,basehaz, z, hp.obs, method = "Nelder-Mead", maxit = 1000, log=FALSE){
# Required variables
time <- as.vector(time)
delta <- as.vector(as.logical(delta))
z <- as.matrix(z)
n<-nrow(z)
time.obs <- time[delta]
if(!is.null(z)) z.obs <- z[delta,]
hp.obs <- as.vector(hp.obs)
p0 <- dim(z)[2]
  # NGLL - AEH Model
  if(basehaz == "NGLLAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); de0<-exp(par[4]);beta <- par[5:(4+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hNGLL(time.obs*exp.z.beta.obs,ae0,be0,ce0,de0, log = FALSE))
val <- - sum(lhaz0) + sum(CHNGLL(time*exp.z.beta,ae0,be0,ce0,de0)/exp.z.beta)
return(sum(val))
}
}
  # KW - AEH Model
  if(basehaz == "KWAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); de0<-exp(par[4]);beta <- par[5:(4+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hKW(time.obs*exp.z.beta.obs,ae0,be0,ce0,de0, log = FALSE))
val <- - sum(lhaz0) + sum(CHKW(time*exp.z.beta,ae0,be0,ce0,de0)/exp.z.beta)
return(sum(val))
}
}
# GLL - AEH Model
if(basehaz == "GLLAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hGLL(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = FALSE))
val <- - sum(lhaz0) + sum(CHGLL(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# EW - AEH Model
if(basehaz == "EWAEH"){
log.lik <- function(par){
ae0 <- par[1]; be0 <- par[2]; ce0 <- par[3]; beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hEW(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = FALSE))
val <- - sum(lhaz0) + sum(CHEW(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# MLL - AEH Model
if(basehaz == "MLLAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hMLL(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = FALSE))
val <- - sum(lhaz0) + sum(CHMLL(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# PGW - AEH Model
if(basehaz == "PGWAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hPGW(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = FALSE))
val <- - sum(lhaz0) + sum(CHPGW(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# GG - AEH Model
if(basehaz == "GGAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); ce0 <- exp(par[3]); beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hGG(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = FALSE))
val <- - sum(lhaz0) + sum(CHGG(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# MKW- AEH Model
if(basehaz == "MKWAEH"){
log.lik <- function(par){
ae0 <- par[1]; be0 <- par[2]; ce0<-par[3];beta <- par[4:(3+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hMKW(time.obs*exp.z.beta.obs,ae0,be0,ce0, log = FALSE))
val <- - sum(lhaz0) + sum(CHMKW(time*exp.z.beta,ae0,be0,ce0)/exp.z.beta)
return(sum(val))
}
}
# Loglogistic - AEH Model
if(basehaz == "LLAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); beta <- par[3:(2+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hLL(time.obs*exp.z.beta.obs,ae0,be0, log = FALSE))
val <- - sum(lhaz0) + sum(CHLL(time*exp.z.beta,ae0,be0)/exp.z.beta)
return(sum(val))
}
}
# Weibull - AEH Model
if(basehaz == "WAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); beta <- par[3:(2+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hW(time.obs*exp.z.beta.obs,ae0,be0, log = FALSE))
val <- - sum(lhaz0) + sum(CHW(time*exp.z.beta,ae0,be0)/exp.z.beta)
return(sum(val))
}
}
# Gamma - AEH Model
if(basehaz == "GAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); beta <- par[3:(2+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hG(time.obs*exp.z.beta.obs,ae0,be0, log = FALSE))
val <- - sum(lhaz0) + sum(CHG(time*exp.z.beta,ae0,be0)/exp.z.beta)
return(sum(val))
}
}
# Lognormal - AEH Model
if(basehaz == "LNAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); beta <- par[3:(2+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+hLN(time.obs*exp.z.beta.obs,ae0,be0, log = FALSE))
val <- - sum(lhaz0) + sum(CHLN(time*exp.z.beta,ae0,be0)/exp.z.beta)
return(sum(val))
}
}
# BURXII - AEH Model
if(basehaz == "BXIIAEH"){
log.lik <- function(par){
ae0 <- exp(par[1]); be0 <- exp(par[2]); beta <- par[3:(2+p0)];
z.beta <- as.vector(z%*%beta)
exp.z.beta <- exp(z.beta)
exp.z.beta.obs <- exp(z.beta[delta])
lhaz0 <- log(hp.obs+ hBXII(time.obs*exp.z.beta.obs,ae0,be0, log = FALSE))
val <- - sum(lhaz0) + sum(CHBXII(time*exp.z.beta,ae0,be0)/exp.z.beta)
return(sum(val))
}
}
if(method != "nlminb") OPT <- optim(init,log.lik,control=list(maxit=maxit), method = method)
if(method == "nlminb") OPT <- nlminb(init,log.lik,control=list(iter.max=maxit))
p=length(OPT$par)
  # nlminb() returns the minimised objective as $objective, optim() as $value
  l=if(method == "nlminb") OPT$objective else OPT$value
AIC=2*l + 2*p
  # small-sample correction uses the parameter count: AICc = AIC + 2p(p+1)/(n-p-1)
  CAIC=AIC+(2*p*(p+1)/(n-p-1))
HQIC= 2*l+2*log(log(n))*p
BCAIC=2*l+(p*(log(n)+1))
BIC=(2*l)+(p*(log(n)))
  result = (list("AIC" = AIC, "CAIC" = CAIC,
"BIC" = BIC, "HQIC" = HQIC, "BCAIC" = BCAIC))
OUT <- list(OPT = OPT, result=result)
return(OUT)
}
| /scratch/gouwar.j/cran-all/cranData/AHSurv/R/AHReg1.R |
#' @import stats rootSolve stats4 flexsurv
#' @importFrom stats pweibull dweibull plnorm dlnorm plogis dlogis alias
#' @importFrom stats nlminb glm.control optim qnorm pnorm optimHess optimize optimise
#' @importFrom stats4 mle
#' @importFrom rootSolve uniroot.all hessian
#' @importFrom flexsurv hgamma hlnorm hllogis dgompertz pgompertz
NULL
| /scratch/gouwar.j/cran-all/cranData/AHSurv/R/AHReg2.R |
#' The Leukemia Survival Data
#'
#' @name LeukSurv
#' @docType data
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#' @keywords datasets
#' @description A dataset on the survival of acute myeloid leukemia in 1,043 patients, first analyzed by Henderson et al. (2002). It is of interest to investigate possible spatial variation in survival after accounting for known subject-specific prognostic factors, which include age, sex, white blood cell count (wbc) at diagnosis, and the Townsend score (tpi), for which higher values indicate less affluent areas. Both exact residential locations of all patients and their administrative districts (24 districts that make up the whole region) are available.
#' @format A data frame with 1043 rows and 9 variables:
#'\itemize{
#' \item time: survival time in days
#' \item cens: right censoring status 0=censored, 1=dead
#' \item xcoord: coordinates in x-axis of residence
#' \item ycoord: coordinates in y-axis of residence
#' \item age: age in years
#' \item sex: male=1, female=0
#' \item wbc: white blood cell count at diagnosis, truncated at 500
#' \item tpi: the Townsend score, for which higher values indicate less affluent areas
#' \item district: administrative district of residence
#'}
#'
#'@references Henderson, R., Shimakura, S., and Gorst, D. (2002), Modeling spatial variation in leukemia survival data, \emph{Journal of the American Statistical Association}, 97(460), 965-972.
#'
NULL
| /scratch/gouwar.j/cran-all/cranData/AHSurv/R/LeukSurv.R |
#' Bone Marrow Transplant (bmt) data set
#'
#' @name bmt
#' @docType data
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#' @keywords datasets
#' @description Bone marrow transplant study, which is widely used in hazard-based regression modelling
#' @format There were 46 patients in the allogeneic treatment group and 44 patients in the autologous treatment group
#' \itemize{
#' \item Time: time to event
#' \item Status: censor indicator, 0 for censored and 1 for uncensored
#' \item TRT: 1 for autologous treatment group; 0 for allogeneic treatment group
#' }
#' @references Robertson, V. M., Dickson, L. G., Romond, E. H., & Ash, R. C. (1987). Positive antiglobulin tests due to intravenous immunoglobulin in patients who received bone marrow transplant. Transfusion, 27(1), 28-31.
NULL
| /scratch/gouwar.j/cran-all/cranData/AHSurv/R/bmt.R |
#' Melanoma data set
#'
#'@name e1684
#'@docType data
#'@author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#' @keywords datasets
#' @description Eastern Cooperative Oncology Group (ECOG) data used for fitting hazard-based regression models
#' @format A data frame with 284 observations on the following 5 variables.
#' \itemize{
#' \item TRT: 0=control group, 1=IFN treatment group
#' \item FAILTIME: observed relapse-free time
#' \item FAILCENS: relapse-free censor indicator
#' \item AGE:continuous variable, which is centered to the mean
#' \item SEX: 0 for male, 1 for female
#' }
#' @references Kirkwood, J. M., Manola, J., Ibrahim, J., Sondak, V., Ernstoff, M. S., & Rao, U. (2004). A pooled analysis of eastern cooperative oncology group and intergroup trials of adjuvant high-dose interferon for melanoma. Clinical Cancer Research, 10(5), 1670-1677.
NULL
| /scratch/gouwar.j/cran-all/cranData/AHSurv/R/e1684.R |
#' IRESSA Pan-Asia Study (IPASS) data set
#'
#' @name ipass
#' @docType data
#' @author Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Mutua Kilai, \email{[email protected]}
#' @keywords datasets
#' @description Argyropoulos and Unruh (2015) published reconstructed IPASS clinical trial data. Despite being reconstructed, this data set retains all of the features shown in the references, as well as full access to the observations from this clinical trial. The database spans the months of March 2006 to April 2008. The study's main goal is to compare gefitinib to carboplatin/paclitaxel doublet chemotherapy as first-line treatment in terms of progression-free survival (in months) in selected non-small-cell lung cancer (NSCLC) patients.
#' @format A data frame with 1217 rows and 3 variables:
#' \itemize{
#' \item time: progression free survival (in months)
#' \item status: failure indicator (1 - failure; 0 - otherwise)
#' \item arm: (1 - gefitinib; 0 - carboplatin/paclitaxel doublet chemotherapy)
#' }
#' @references Argyropoulos, C. and Unruh, M. L. (2015). Analysis of time to event outcomes in randomized controlled trials by generalized additive models. PLOS One 10, 1-33.
#'
NULL
| /scratch/gouwar.j/cran-all/cranData/AHSurv/R/ipass.R |
#' Calculate AICc for a permutational multivariate analysis of variance (PERMANOVA)
#'
#' @description This function calculates the corrected Akaike Information Criterion (AICc) for a permutational multivariate analysis of variance (PERMANOVA) model. The AICc is a modified version of the Akaike Information Criterion (AIC) that is more appropriate for small sample sizes and high-dimensional models.
#'
#' @param adonis2_model An object of class adonis2 from the vegan package
#'
#' @return A data frame with the AICc, the number of parameters (k) and the number of observations (N).
#'
#' @examples
#'
#' library(vegan)
#' data(dune)
#' data(dune.env)
#'
#' # Run PERMANOVA using adonis2
#'
#' Model <- adonis2(dune ~ Management*A1, data = dune.env)
#'
#' # Calculate AICc
#' AICc_permanova2(Model)
#'
#' @details
#' The AICc calculation for a PERMANOVA model is:
#'
#' \deqn{AICc = AIC + \frac{2k(k+1)}{n-k-1}}{AICc = AIC + (2k(k+1))/(n-k-1)}
#'
#' where AIC is the Akaike Information Criterion, k is the number of parameters in the model (excluding the intercept), and n is the number of observations.
#'
#' @export
#'
#' @import vegan
#'
#' @references
#' Zuur, A. F., Ieno, E. N., Walker, N. J., Saveliev, A. A., & Smith, G. M. (2009). Mixed effects models and extensions in ecology with R. Springer Science & Business Media.
#'
#' @seealso \code{\link{adonis2}}
#'
#' @keywords models
AICc_permanova2 <- function(adonis2_model) {
# Ok, now extract appropriate terms from the adonis model Calculating AICc
# using residual sum of squares (RSS or SSE) since I don't think that adonis
# returns something I can use as a likelihood function... maximum likelihood
# and MSE estimates are the same when distribution is gaussian See e.g.
# https://www.jessicayung.com/mse-as-maximum-likelihood/;
# https://towardsdatascience.com/probability-concepts-explained-maximum-likelihood-estimation-c7b4342fdbb1
# So using RSS or MSE estimates is fine as long as the residuals are
# Gaussian https://robjhyndman.com/hyndsight/aic/ If models have different
# conditional likelihoods then AIC is not valid. However, comparing models
# with different error distributions is ok (above link).
RSS <- adonis2_model$SumOfSqs[ length(adonis2_model$SumOfSqs) - 1 ]
MSE <- RSS / adonis2_model$Df[ length(adonis2_model$Df) - 1 ]
nn <- adonis2_model$Df[ length(adonis2_model$Df) ] + 1
k <- nn - adonis2_model$Df[ length(adonis2_model$Df) - 1 ]
# AIC : 2*k + n*ln(RSS/n)
# AICc: AIC + [2k(k+1)]/(n-k-1)
# based on https://en.wikipedia.org/wiki/Akaike_information_criterion;
# https://www.statisticshowto.datasciencecentral.com/akaikes-information-criterion/ ;
# https://www.researchgate.net/post/What_is_the_AIC_formula;
# http://avesbiodiv.mncn.csic.es/estadistica/ejemploaic.pdf;
# https://medium.com/better-programming/data-science-modeling-how-to-use-linear-regression-with-python-fdf6ca5481be
AIC <- 2*k + nn*log(RSS/nn)
AICc <- AIC + (2*k*(k + 1))/(nn - k - 1)
output <- data.frame(AICc = AICc, k = k, N = nn)
return(output)
}
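# Illustrative sketch (guarded by `if (FALSE)` so nothing runs at package load
# time): using AICc_permanova2() to compare two candidate PERMANOVA models fit
# to the same community matrix; the model with the lower AICc is preferred.
if (FALSE) {
  library(vegan)
  data(dune)
  data(dune.env)
  m1 <- adonis2(dune ~ Management, data = dune.env)
  m2 <- adonis2(dune ~ Management + A1, data = dune.env)
  # stack the two one-row summaries for a side-by-side AICc comparison
  rbind(AICc_permanova2(m1), AICc_permanova2(m2))
}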
| /scratch/gouwar.j/cran-all/cranData/AICcPermanova/R/AICc_permanova2.R |
#' Akaike-Adjusted R Squared Calculation with Model Averaging
#'
#' Calculates the adjusted R squared for each predictor using the Akaike
#' Information Criterion (AIC) and model averaging. AIC is used to compare the
#' performance of candidate models and select the best one. Then, the R squared
#' is adjusted based on the weight of evidence in favor of each model. The final
#' result is a long-format table of variable names and corresponding adjusted
#' R squared values.
#'
#' @param DF A data.frame containing the variables to calculate the adjusted
#' R squared for. The data.frame should include the columns:
#' "form", "AICc", "max_vif", "k", "DeltaAICc", "AICWeight", and "N".
#' @return A data.frame with columns "Variable" and "Full_Akaike_Adjusted_RSq".
#' Each row represents a predictor, and its corresponding adjusted R
#' squared value based on the Akaike-adjusted model averaging process.
#' @details The adjusted R squared is calculated as:
#' \deqn{Adjusted R^2 = 1 - (RSS / (N - k - 1)) * ((N - 1) / (N - k - 1))}
#' where RSS is the residual sum of squares, N is the sample size, and
#' k is the number of predictors. The R squared is adjusted based on the
#' weight of evidence in favor of each model, which is calculated as:
#' \deqn{w_i = exp(-0.5 * DeltaAICc_i) / sum(exp(-0.5 * DeltaAICc))}
#' where w_i is the weight of evidence in favor of the ith model, and
#' DeltaAICc_i is the difference in AICc between the ith model and the
#' best model. Model averaging uses the weights to combine the
#' performance of different models in the final calculation of the
#' adjusted R squared.
#'
#' @importFrom dplyr mutate_at vars matches everything select summarise_if
#' @importFrom tidyr pivot_longer
#'
#' @examples
#' library(data.table)
#' df <- data.table(form = c(1,2,3),
#' AICc = c(10,20,30),
#' max_vif = c(3,4,5),
#' k = c(1,2,3),
#' DeltaAICc = c(2,5,8),
#' AICWeight = c(0.2,0.5,0.3),
#' N = c(100,100,100),
#' A1 = c(0.3, 0.5, NA),
#' A2 = c(0.7, NA, 0.2),
#' A3 = c(0.2, 0.3, 0.6))
#' akaike_adjusted_rsq(df)
#'
#' @export
akaike_adjusted_rsq <- function(DF) {
AICc <- DeltaAICc <- max_vif <- AICWeight <- Model <- k <- N <- NULL
Result <- DF |>
dplyr::mutate_at(dplyr::vars(-dplyr::matches("form|max_vif|AICc|k|DeltaAICc|N|AICWeight")), ~ifelse(is.na(.x), 0, .x))|>
dplyr::mutate_at(dplyr::vars(-dplyr::matches("form|max_vif|AICc|k|DeltaAICc|N|AICWeight")), ~.x * AICWeight)|>
dplyr::summarise_if(is.numeric, sum)|>
dplyr::select(-AICc, -DeltaAICc, -AICWeight, -matches("Model"), -max_vif, -k, -N)|>
tidyr::pivot_longer(dplyr::everything(), names_to = "Variable", values_to = "Full_Akaike_Adjusted_RSq")
return(Result)
}
| /scratch/gouwar.j/cran-all/cranData/AICcPermanova/R/akaike_adjusted_rsq.R |
#' @title Filters out equations with high multicollinearity
#'
#' @description This function takes a data frame with several models and
#' calculates the maximum Variance Inflation Factor (VIF) for each
#' model, and either filters out the ones with high collinearity or
#' flags them accordingly
#'
#' @param all_forms A data frame generated by \code{\link{make_models}}
#' @param env_data A dataset with the variables described in all_forms
#' @param ncores An integer specifying the number of cores to use for parallel processing
#' @param filter logical, if TRUE it filters out the models with a maximum VIF of 5 or higher; if FALSE it generates a new column called collinearity, which flags each model as having "high" or "low" collinearity
#' @param verbose logical, defaults TRUE, sends messages about processing times
#' @return A data.frame with the models, filtering out the ones with high collinearity or flagging them.
#' @importFrom parallel makeCluster stopCluster
#' @importFrom doParallel registerDoParallel
#' @importFrom dplyr bind_rows filter mutate
#' @importFrom stringr str_replace_all
#' @importFrom stats rnorm lm as.formula complete.cases
#'
#' @examples
#'\donttest{
#' library(vegan)
#' data(dune)
#' data(dune.env)
#' AllModels <- make_models(vars = c("A1", "Moisture", "Manure"))
#'
#' filter_vif(all_forms = AllModels,
#' env_data = dune.env)
#'}
#' @export
filter_vif <- function(all_forms,
env_data,
ncores = 2,
filter = TRUE,
verbose = TRUE){
max_vif <- x <- NULL
meta_data <- all_forms
meta_data$max_vif <- NA
if(!filter){
meta_data$collinearity <- NA
}
# Check for missing values
missing_rows <- !complete.cases(env_data)
if (any(missing_rows)) {
if(verbose){
# Print message about missing rows and columns
message(sprintf("Removing %d rows with missing values\n", sum(missing_rows)))
message("Columns with missing values: ")
      message(paste(names(env_data)[colSums(is.na(env_data)) > 0], collapse = ", "))
}
}
# Filter out missing rows
new_env_data <- env_data[complete.cases(env_data), ]
cl <- parallel::makeCluster(ncores)
doParallel::registerDoParallel(cl)
Fs <- foreach(x = 1:nrow(meta_data), .packages = c("dplyr", "AICcPermanova", "stringr"), .combine = bind_rows) %dopar% {
Response = new_env_data
Response$y <- rnorm(n = nrow(Response))
gc()
Temp <- meta_data[x,]
Temp$max_vif <- tryCatch(
expr = VIF(lm(as.formula(stringr::str_replace_all(Temp$form[1], "Distance ", "y")), data = Response)),
error = function(e) NA
)
Temp
}
parallel::stopCluster(cl)
if(filter){
Fs <- Fs |>
dplyr::filter(max_vif < 5)
}
if(!filter){
Fs <- Fs |>
dplyr::mutate(collinearity = ifelse(max_vif < 5, "low", "high"))
}
return(Fs)
}
| /scratch/gouwar.j/cran-all/cranData/AICcPermanova/R/filter_vif.R |
#' @title Fit PERMANOVA models and arrange by AICc
#'
#' @description This function fits PERMANOVA models for all combinations of variables in a given dataset, and arranges the models by Akaike Information Criterion (AICc) score. The function also calculates the maximum variance inflation factor (max_vif) for each model.
#'
#' @param all_forms A data frame generated by \code{\link{make_models}}
#' @param veg_data A dataset with vegetation presence absense or abundance data
#' @param env_data A dataset with the variables described in all_froms
#' @param ncores An integer specifying the number of cores to use for parallel processing
#' @param log logical if true, a log file will be generated
#' @param verbose logical, defaults TRUE, sends messages about processing times
#' @param logfile the text file that will be generated as a log
#' @param multiple after how many fitted models a progress line is written to the log file
#' @param method method for distance from \code{\link{vegdist}}
#' @param strata a block variable similar to the use in \code{\link{adonis2}}
#'
#' @return A data.frame with fitted models arranged by AICc, including the formula used, the number of
#' explanatory variables, R2, adjusted R2, and the AICc and max VIF.
#'
#'
#' @importFrom parallel makeCluster stopCluster
#' @importFrom doParallel registerDoParallel
#' @importFrom foreach foreach %dopar%
#' @importFrom vegan vegdist adonis2
#' @importFrom dplyr bind_rows bind_cols select filter arrange
#' @importFrom broom tidy
#' @importFrom tidyr pivot_longer
#' @importFrom stringr str_replace_all
#' @importFrom stats rnorm lm as.formula complete.cases
#' @examples
#'
#' \donttest{
#' library(vegan)
#' data(dune)
#' data(dune.env)
#'
#' AllModels <- make_models(vars = c("A1", "Moisture", "Manure"))
#'
#' fit_models(all_forms = AllModels,
#' veg_data = dune,
#' env_data = dune.env)
#' }
#'
#' @references
#' Anderson, M. J. (2001). A new method for non-parametric multivariate analysis of variance. Austral Ecology, 26(1), 32-46.
#' https://doi.org/10.1111/j.1442-9993.2001.01070.pp.x
#'
#' @export
fit_models <- function(all_forms,
veg_data,
env_data,
method = "bray",
ncores = 2,
log = TRUE,
logfile = "log.txt",
multiple = 100,
strata = NULL,
verbose = FALSE){
AICc <- R2 <- term <- x <- NULL
if(log){
if(file.exists(logfile)){
file.remove(logfile)
}
}
meta_data <- all_forms
if(!("max_vif" %in% colnames(meta_data))){
meta_data$max_vif <- NA
}
vegetation_data = veg_data
# Check for missing values
missing_rows <- !complete.cases(env_data)
if (any(missing_rows)) {
if(verbose){
# Print message about missing rows and columns
      message(sprintf("Removing %d rows with missing values", sum(missing_rows)))
      message("Columns with missing values: ",
              paste(names(env_data)[colSums(is.na(env_data)) > 0], collapse = ", "))
}
}
# Filter out missing rows
new_env_data <- env_data[complete.cases(env_data), ]
vegetation_data <- vegetation_data[complete.cases(env_data), ]
cl <- parallel::makeCluster(ncores)
doParallel::registerDoParallel(cl)
Distance <- vegan::vegdist(vegetation_data, method = method)
Fs <- foreach(x = 1:nrow(meta_data), .packages = c("vegan", "dplyr", "AICcPermanova", "tidyr", "broom"), .combine = bind_rows, .export = c("Distance")) %dopar% {
Response = new_env_data
Response$y <- rnorm(n = nrow(Response))
gc()
Temp <- meta_data[x,]
if(is.null(strata)){
Model <- try(vegan::adonis2(as.formula(Temp$form[1]), data = Response, by = "margin"))
}
if(!is.null(strata)){
# Convert strata variable to factor
strata_factor <- factor(Response[[strata]])
Model <- try(with(Response, vegan::adonis2(as.formula(Temp$form[1]), data = Response, by = "margin", strata = strata_factor)), silent = TRUE)
}
Temp <- tryCatch(
expr = cbind(Temp, AICcPermanova::AICc_permanova2(Model)),
error = function(e) NA
)
if(is.na(Temp$max_vif)){
Temp$max_vif <- tryCatch(
expr = VIF(lm(as.formula(stringr::str_replace_all(Temp$form[1], "Distance ", "y")), data = Response)),
error = function(e) NA
)
}
Rs <- tryCatch(
{
tidy_model <- broom::tidy(Model)
if (inherits(tidy_model, "try-error")) {
stop("Error occurred in broom::tidy(Model)")
}
tidy_model |>
dplyr::filter(!(term %in% c("Residual", "Total"))) |>
dplyr::select(term, R2) |>
tidyr::pivot_wider(names_from = term, values_from = R2)
},
error = function(e) {
message("Error: ", conditionMessage(e))
NULL
}
)
if(log){
if((x %% multiple) == 0){
sink(logfile, append = TRUE)
cat(paste("finished", x, "number of models", Sys.time(), "of", nrow(meta_data)))
cat("\n")
sink()
}
}
Temp <- bind_cols(Temp, Rs)
Temp
}
parallel::stopCluster(cl)
Fs <- Fs |>
dplyr::arrange(AICc)
return(Fs)
}
#' Get Maximum Variance Inflation Factor (VIF) from a Model
#'
#' This function calculates the maximum Variance Inflation Factor (VIF) for a given model.
#' The VIF is a measure of collinearity among predictor variables within a regression model.
#' It quantifies how much the variance of an estimated regression coefficient is increased due to collinearity.
#' A VIF of 1 indicates no collinearity, while values above 1 indicate increasing levels of collinearity.
#' A VIF of 5 or greater is often considered high, indicating a strong presence of collinearity.
#'
#' @param model A regression model, such as those created by lm, glm, or other similar functions.
#'
#' @return The maximum VIF value.
#'
#' @references
#' - Belsley, D. A., Kuh, E., & Welsch, R. E. (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. John Wiley & Sons.
#' - Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2004). Applied Linear Statistical Models. McGraw-Hill/Irwin.
#' - O'Brien, R. M. (2007). A caution regarding rules of thumb for variance inflation factors. Quality & Quantity, 41(5), 673-690.
#'
#' @importFrom car vif
#'
#' @keywords collinearity
#'
#' @export
VIF <- function(model) {
tryCatch({
vif <- car::vif(model)
max(vif)
}, error = function(e) {
if (grepl("aliased coefficients", e$message)) {
20000
} else if (grepl("model contains fewer than 2 terms", e$message)) {
0
} else {
stop(e)
}
})
}
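As a quick illustration of the diagnostic documented above, the sketch below fits a linear model with two nearly collinear simulated predictors and passes it to `VIF()`. This is illustrative only (simulated data, hypothetical variable names) and assumes the `car` package is installed:

```r
# Illustrative only -- simulated data, not part of the package.
set.seed(1)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.1)   # nearly collinear with x1
y  <- rnorm(100)
fit <- lm(y ~ x1 + x2)
VIF(fit)   # well above the rule-of-thumb cutoff of 5
```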
## File: AICcPermanova/R/fit_models.R
#' @title Create models with different combinations of variables
#' @description Generates all possible linear models for a given set of
#' predictor variables using the distance matrix as a response variable.
#' The function allows for the user to specify the maximum number of
#' variables in a model, which can be useful in cases where there are
#' many predictors. The output is a data frame containing all the
#' possible models, which can be passed to the fit_models function for
#' fitting using a PERMANOVA approach.
#' @param vars A character vector of variables to use for modeling
#' @param ncores An integer specifying the number of cores to use for parallel processing
#' @param k maximum number of variables in a model, default is NULL
#' @param verbose logical, defaults TRUE, sends messages about processing times
#' @return A data frame containing all the possible linear permanova
#' models
#'
#' @importFrom parallel makeCluster stopCluster
#' @importFrom doParallel registerDoParallel
#' @importFrom utils combn
#' @importFrom data.table data.table
#' @importFrom data.table rbindlist
#' @importFrom data.table :=
#' @importFrom future plan cluster
#' @importFrom furrr future_map_dfr
#' @export
#'
#' @examples
#' \donttest{
#' make_models(vars = c("A", "B", "C", "D"),
#' ncores = 2, verbose = FALSE)
#'
#' # using k as a way to limit number of variables
#' make_models(vars = c("A", "B", "C", "D"),
#' ncores = 2, k = 2, verbose = FALSE)
#'}
#' @references
#' Anderson, M. J. (2001). A new method for non-parametric multivariate analysis of variance. Austral Ecology, 26(1), 32-46.
make_models <- function(vars, ncores = 2, k = NULL, verbose = TRUE) {
max_vif <- NULL
# create data table of variables to use for modeling
vars <- unlist(strsplit(vars, "\\s*,\\s*"))
dt <- data.table::data.table(vars)
# set response and dataset variables
dataset <- "Distance"
forms <- list()
if(is.null(k)){
MaxVars <- length(vars)
}
if(!is.null(k)){
MaxVars <- k
}
# loop over different numbers of variables to include in models
for(i in 1:MaxVars) {
test <- combn(vars, i, simplify = FALSE)
cl <- parallel::makeCluster(ncores)
future::plan(future::cluster, workers = cl)
# loop over all combinations of variables and create a list of formulas
formulas <- furrr::future_map_dfr(test, function(x) {
form <- paste(dataset, "~", paste(x, collapse = " + "))
data.frame(form = form, stringsAsFactors = FALSE)
})
parallel::stopCluster(cl)
if(verbose){
message(paste(i, "of", MaxVars, "ready", Sys.time()))
}
forms[[i]] <- formulas
}
# combine all formulas into a single data table and add the null model
all_forms <- data.table::rbindlist(forms, use.names = TRUE, fill = TRUE)
all_forms <- unique(all_forms, by = "form", fromLast = TRUE)
null_mod <- data.table::data.table(form = paste(dataset, "~ 1", collapse = ""))
all_forms <- data.table::rbindlist(list(all_forms, null_mod), use.names = TRUE, fill = TRUE)
return(as.data.frame(all_forms))
}
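The enumeration that `make_models()` parallelizes can be sketched serially with `combn()`; the variable names here are hypothetical:

```r
# Serial sketch of the formula enumeration (illustrative only).
vars <- c("A", "B")
forms <- unlist(lapply(seq_along(vars), function(i)
  sapply(combn(vars, i, simplify = FALSE),
         function(x) paste("Distance ~", paste(x, collapse = " + ")))))
forms
# "Distance ~ A"     "Distance ~ B"     "Distance ~ A + B"
```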
## File: AICcPermanova/R/make_models.R
#' Select models based on AICc and VIF.
#'
#' This function selects models from a data frame based on their AICc and VIF values. Models with a finite AICc and a maximum VIF less than or equal to 5 are considered. The difference in AICc (Delta AICc) is calculated for each model with respect to the model with the minimum AICc, and models with a Delta AICc less than or equal to the specified delta_aicc value are retained.
#'
#' @param df a data frame containing the models to select from.
#' @param delta_aicc a numeric value specifying the maximum difference in AICc values allowed.
#' @return a data frame containing the selected models and the AIC weights.
#' @examples
#' df <- data.frame(AICc = c(10, 12, 15, 20), max_vif = c(2, 4, 5, 6))
#' select_models(df)
#' select_models(df, delta_aicc = 5)
#' @importFrom data.table setDT .SD
#' @export
select_models <- function(df, delta_aicc = 2){
AICc <- DeltaAICc <- max_vif <- AICWeight <- NULL
Result <- data.table::setDT(df)[AICc > -Inf & max_vif <= 5,
DeltaAICc := AICc - min(AICc)][DeltaAICc <= delta_aicc][, AICWeight := exp( -0.5*DeltaAICc)/sum(exp( -0.5*DeltaAICc))] |>
as.data.frame()
# remove columns with only NAs
Result <- Result[, colSums(is.na(Result)) != nrow(Result), drop = FALSE]
return(Result)
}
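The Delta AICc and Akaike-weight arithmetic that `select_models()` applies can be checked by hand with hypothetical AICc values:

```r
# Illustrative only -- hypothetical AICc values.
AICc  <- c(10, 12, 15)
delta <- AICc - min(AICc)                      # 0 2 5
w     <- exp(-0.5 * delta) / sum(exp(-0.5 * delta))
round(w, 2)                                    # approximately 0.69 0.25 0.06
```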
## File: AICcPermanova/R/select_models.R
##compute AIC, AICc, QAIC, QAICc
##generic
AICc <- function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
UseMethod("AICc", mod)
}
AICc.default <- function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
stop("\nFunction not yet defined for this object class\n")
}
##aov objects
AICc.aov <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##betareg objects
AICc.betareg <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##clm objects
AICc.clm <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(fitted(mod))} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##clmm objects
AICc.clmm <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(fitted(mod))} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##coxme objects
AICc.coxme <- function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$linear.predictor)} else {n <- nobs}
LL <- extractLL(mod)[1]
K <- attr(extractLL(mod), "df") #extract correct number of parameters included in model
if(second.ord==TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K==TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##coxph objects
AICc.coxph <- function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(residuals(mod))} else {n <- nobs}
LL <- extractLL(mod)[1]
K <- attr(extractLL(mod), "df") #extract correct number of parameters included in model
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##fitdist (from fitdistrplus)
AICc.fitdist <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- mod$n} else {n <- nobs}
LL <- logLik(mod)
K <- length(mod$estimate)
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##fitdistr (from MASS)
AICc.fitdistr <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- mod$n} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##glm and lm objects
AICc.glm <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, c.hat = 1, ...){
if(is.null(nobs)) {
n <- length(mod$fitted)
} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(c.hat == 1) {
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
}
if(c.hat > 1 && c.hat <= 4) {
K <- K+1
if(second.ord==TRUE) {
AICc <- (-2*LL/c.hat)+2*K*(n/(n-K-1))
##adjust parameter count to include estimation of dispersion parameter
} else{
AICc <- (-2*LL/c.hat)+2*K}
}
if(c.hat > 4) stop("High overdispersion and model fit is questionable\n")
if(c.hat < 1) stop("You should set \'c.hat\' to 1 if < 1, but values << 1 might also indicate lack of fit\n")
##check if negative binomial and add 1 to K for estimation of theta if glm( ) was used
if(!is.na(charmatch(x="Negative Binomial", table=family(mod)$family))) {
    if(!identical(class(mod)[1], "negbin")) { #if not fit with glm.nb( ), add 1 for the estimated theta; glm.convert( ) misreports the df in logLik
K <- K+1
if(second.ord == TRUE) {
AICc <- -2*LL+2*K*(n/(n-K-1))
} else {
AICc <- -2*LL+2*K
}
}
if(c.hat != 1) stop("You should not use the c.hat argument with the negative binomial")
}
##add 1 for theta parameter in negative binomial fit with glm( )
##check if gamma and add 1 to K for estimation of shape parameter if glm( ) was used
if(identical(family(mod)$family, "Gamma") && c.hat > 1) stop("You should not use the c.hat argument with the gamma")
##an extra condition must be added to avoid adding a parameter for theta with negative binomial when glm.nb( ) is fit which estimates the correct number of parameters
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##glmmTMB objects
AICc.glmmTMB <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, c.hat = 1, ...){
if(is.null(nobs)) {
n <- nrow(mod$frame)
names(n) <- NULL
} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
if(c.hat == 1) {
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
}
if(c.hat > 1 && c.hat <= 4) {
K <- K+1
if(second.ord==TRUE) {
AICc <- (-2*LL/c.hat)+2*K*(n/(n-K-1))
##adjust parameter count to include estimation of dispersion parameter
} else{
AICc <- (-2*LL/c.hat)+2*K}
}
if(c.hat > 4) stop("High overdispersion and model fit is questionable\n")
if(c.hat < 1) stop("You should set \'c.hat\' to 1 if < 1, but values << 1 might also indicate lack of fit\n")
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##gls objects
AICc.gls <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n<-length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##gnls objects
AICc.gnls <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(fitted(mod))} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##hurdle objects
AICc.hurdle <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL + 2*K*(n/(n-K-1))} else{AICc <- -2*LL + 2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##lavaan
AICc.lavaan <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- mod@Data@nobs[[1]]} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##lm objects
AICc.lm <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##lme objects
AICc.lme <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- nrow(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##lmerModLmerTest objects
AICc.lmerModLmerTest <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(is.null(nobs)) {
n <- mod@devcomp$dims["n"]
names(n) <- NULL
} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##lmekin objects
AICc.lmekin <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$residuals)} else {n <- nobs}
LL <- extractLL(mod)[1]
K <- attr(extractLL(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
return(AICc)
}
##maxlike objects
AICc.maxlikeFit <- function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, c.hat = 1, ...) {
LL <- extractLL(mod)[1]
K <- attr(extractLL(mod), "df")
if(is.null(nobs)) {
n <- nrow(mod$points.retained)
} else {n <- nobs}
if(second.ord == TRUE) {AICc <- -2 * LL + 2 * K * (n/(n - K - 1))} else {AICc <- -2*LL + 2*K}
if(c.hat != 1) stop("\nThis function does not support overdispersion in \'maxlikeFit\' models\n")
if(identical(return.K, TRUE)) {
return(K)
} else {return(AICc)}
}
##mer object
AICc.mer <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(is.null(nobs)) {
n <- mod@dims["n"]
} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##merMod objects
AICc.merMod <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(is.null(nobs)) {
n <- mod@devcomp$dims["n"]
names(n) <- NULL
} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##mult objects
AICc.multinom <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, c.hat = 1, ...){
if(identical(nobs, NULL)) {n<-length(mod$fitted)/length(mod$lev)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(c.hat == 1) {
if(second.ord==TRUE) {
AICc <- -2*LL+2*K*(n/(n-K-1))
} else{
AICc <- -2*LL+2*K
}
}
if(c.hat > 1 && c.hat <= 4) {
K <- K+1
if(second.ord == TRUE) {
AICc <- (-2*LL/c.hat)+2*K*(n/(n-K-1)) #adjust parameter count to include estimation of dispersion parameter
} else{
AICc <- (-2*LL/c.hat)+2*K
}
}
if(c.hat > 4) stop("High overdispersion and model fit is questionable")
if(return.K==TRUE) AICc[1]<-K #attributes the first element of AICc to K
AICc
}
##glm.nb
AICc.negbin <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##nlme objects
AICc.nlme <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- nrow(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##nls objects
AICc.nls <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {
n <- length(fitted(mod))
##add warning if returning a constant with length(fitted(mod))
if(n == 1) {
warning(paste("\nlength(fitted(mod)) returned the following scalar: ", n,
"\nif sample size is larger than this value, supply sample size using 'nobs' argument\n"))
}
if(n > 1 && n < 5) {
warning(paste("\nlength(fitted(mod)) returned: ", n,
"\nif sample size is larger than this value, supply sample size using the 'nobs' argument\n"))
}
} else {
n <- nobs
}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##polr objects
AICc.polr <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n<-length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {
AICc <- -2*LL+2*K*(n/(n-K-1))
} else{
AICc <- -2*LL+2*K
}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##rlm objects
##only valid for M-estimation (Huber M-estimator)
##modified from Tharmaratnam and Claeskens 2013 (equation 8)
##AICc.rlm <- function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...)
##{
## if(second.ord == TRUE) stop("\nOnly 'second.ord = FALSE' is supported for 'rlm' models\n")
## ##extract design matrix
## X <- model.matrix(mod)
## ##extract scale
## scale.m <- mod$s
## ##extract threshold value
## cval <- mod$k2
## ##extract residuals
## res <- residuals(mod)
## res.scaled <- res/scale.m
## n <- length(res)
## ##partial derivatives based on Huber's loss function
## dPsi <- ifelse(abs(res.scaled) <= cval, 2, 0)
## Psi <- (ifelse(abs(res.scaled) <= cval, 2*res.scaled, 2*cval*sign(res.scaled)))^2
## J <- (t(X) %*% diag(as.vector(dPsi)) %*% X * (1/(scale.m^2)))/n
## inv.J <- solve(J)
## ##variance
## K.var <- (t(X) %*% diag(as.vector(Psi)) %*% X * (1/(scale.m^2)))/n
## AIC <- 2*n*(log(scale.m)) + 2 * sum(diag(inv.J %*%(K.var)))
## if(return.K) {AIC <- 2 * sum(diag(inv.J %*%(K.var)))}
## return(AIC)
##}
##the estimator below extracts the estimates obtained from M- or MM-estimator
##and plugs them in the normal likelihood function
AICc.rlm <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##survreg objects
AICc.survreg <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- nrow(mod$y)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##unmarkedFit objects
##create function to extract AICc from 'unmarkedFit'
AICc.unmarkedFit <- function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, c.hat = 1, ...) {
LL <- extractLL(mod)[1]
K <- attr(extractLL(mod), "df")
if(is.null(nobs)) {
n <- dim(mod@data@y)[1]
} else {n <- nobs}
if(c.hat == 1) {
if(second.ord == TRUE) {AICc <- -2 * LL + 2 * K * (n/(n - K - 1))} else {AICc <- -2*LL + 2*K}
}
if(c.hat > 1 && c.hat <= 4) {
##adjust parameter count to include estimation of dispersion parameter
K <- K + 1
if(second.ord == TRUE) {
AICc <- (-2 * LL/c.hat) + 2 * K * (n/(n - K - 1))
} else {
AICc <- (-2 * LL/c.hat) + 2*K}
}
if(c.hat > 4) stop("\nHigh overdispersion and model fit is questionable\n")
if(c.hat < 1) stop("\nYou should set \'c.hat\' to 1 if < 1, but values << 1 might also indicate lack of fit\n")
if(identical(return.K, TRUE)) {
return(K)
} else {return(AICc)}
}
##vglm objects
AICc.vglm <- function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, c.hat = 1, ...){
if(is.null(nobs)) {
n <- nrow([email protected])
} else {n <- nobs}
LL <- extractLL(mod)[1]
##extract number of estimated parameters
K <- attr(extractLL(mod), "df")
if(c.hat !=1) {
fam.name <- mod@family@vfamily
if(fam.name != "poissonff" && fam.name != "binomialff") stop("\nOverdispersion correction only appropriate for Poisson or binomial models\n")
}
if(c.hat == 1) {
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
}
if(c.hat > 1 && c.hat <= 4) {
K <- K + 1
if(second.ord==TRUE) {
AICc <- (-2*LL/c.hat) + 2*K*(n/(n-K-1))
##adjust parameter count to include estimation of dispersion parameter
} else{
AICc <- (-2*LL/c.hat) + 2*K}
}
if(c.hat > 4) stop("High overdispersion and model fit is questionable\n")
if(c.hat < 1) stop("You should set \'c.hat\' to 1 if < 1, but values << 1 might also indicate lack of fit\n")
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
##zeroinfl objects
AICc.zeroinfl <-
function(mod, return.K = FALSE, second.ord = TRUE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(second.ord == TRUE) {AICc <- -2*LL + 2*K*(n/(n-K-1))} else{AICc <- -2*LL + 2*K}
if(return.K == TRUE) AICc[1] <- K #attributes the first element of AICc to K
AICc
}
## File: AICcmodavg/R/AICc.R
##defunct functions - package AICcmodavg M. J. Mazerolle (updated 17 November 2016)
## File: AICcmodavg/R/AICcmodavg-defunct.R
##custom functions for user-supplied model input
##Custom AICc computation where user inputs logL, K, and nobs manually
##convenient when output imported from other software
AICcCustom <- function(logL, K, return.K = FALSE, second.ord = TRUE,
nobs = NULL, c.hat = 1) {
if(is.null(nobs) && identical(second.ord, TRUE)) {
stop("\nYou must supply a value for 'nobs' for the second-order AIC\n")
} else {n <- nobs}
LL <- logL
if(c.hat == 1) {
if(second.ord == TRUE) {AICc <- -2*LL+2*K*(n/(n-K-1))} else{AICc <- -2*LL+2*K}
}
if(c.hat > 1 && c.hat <= 4) {
K <- K+1
if(second.ord==TRUE) {
AICc <- (-2*LL/c.hat)+2*K*(n/(n-K-1))
##adjust parameter count to include estimation of dispersion parameter
} else{
AICc <- (-2*LL/c.hat)+2*K}
# cat("\nc-hat estimate was added to parameter count\n")
}
if(c.hat > 4) stop("High overdispersion and model fit is questionable\n")
if(c.hat < 1) stop("You should set \'c.hat\' to 1 if < 1, but values << 1 might also indicate lack of fit\n")
if(return.K == TRUE) AICc <- K
AICc
}
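A worked numeric check of the second-order correction implemented above, with hypothetical inputs:

```r
# AICc = -2*logL + 2*K * n/(n - K - 1)   (illustrative values)
logL <- -100; K <- 3; n <- 50
aic  <- -2 * logL + 2 * K                        # 206
aicc <- -2 * logL + 2 * K * (n / (n - K - 1))
round(aicc, 2)                                   # 206.52
# equals AICcCustom(logL = -100, K = 3, nobs = 50)
```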
##Custom model selection where user inputs logL, K, and nobs manually
##convenient when output imported from other software
aictabCustom <-
function(logL, K, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1){
##check if modnames are not supplied
if(is.null(modnames)) {
modnames <- paste("Mod", 1:length(logL), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
}
##check that nobs is the same for all models
if(length(unique(nobs)) > 1) stop("\nSample size must be identical for all models\n")
##check that logL, K, estimate, se are vectors of same length
nlogL <- length(logL)
nK <- length(K)
if(!all(nlogL == c(nlogL, nK))) stop("\nArguments 'logL' and 'K' must be of equal length\n")
##create model selection table
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- AICcCustom(logL = logL, K = K, return.K = TRUE,
second.ord = second.ord, nobs = nobs,
c.hat = c.hat) #extract number of parameters
Results$AICc <- AICcCustom(logL = logL, K = K, second.ord = second.ord,
nobs = nobs, c.hat = c.hat) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(K)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- logL
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- logL
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results)<-c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- logL
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results)<-c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- logL
Results$Quasi.LL <- LL/c.hat
Results$c_hat<-c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
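A hypothetical call with log-likelihoods and parameter counts imported from other software; both models are assumed fit to the same n = 40 observations:

```r
# Illustrative only -- made-up logL and K values.
tab <- aictabCustom(logL = c(-120.5, -118.9), K = c(3, 5),
                    modnames = c("additive", "interaction"), nobs = 40)
tab   # 'additive' ranks first here: its AICc (247.67) beats 249.56
```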
###################################################
###################################################
##BIC-related functions
##BIC
useBICCustom <-
function(logL, K, return.K = FALSE, nobs = NULL, c.hat = 1){
if(is.null(nobs)) {
stop("\nYou must supply a value for 'nobs' for the BIC\n")
} else {n <- nobs}
LL <- logL
if(c.hat == 1) {
BIC <- -2*LL + K * log(n)
}
if(c.hat > 1 && c.hat <= 4) {
K <- K+1
BIC <- -2*LL/c.hat + K * log(n)
}
if(c.hat > 4) stop("High overdispersion and model fit is questionable\n")
if(c.hat < 1) stop("You should set \'c.hat\' to 1 if < 1, but values << 1 might also indicate lack of fit\n")
if(return.K == TRUE) BIC <- K #attributes the first element of BIC to K
BIC
}
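The BIC computation above can likewise be checked by hand with hypothetical inputs:

```r
# BIC = -2*logL + K*log(n)   (illustrative values)
logL <- -100; K <- 3; n <- 50
bic <- -2 * logL + K * log(n)
round(bic, 2)   # 211.74
# equals useBICCustom(logL = -100, K = 3, nobs = 50)
```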
##model selection with BIC
bictabCustom <-
function(logL, K, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1){
##check if modnames are not supplied
if(is.null(modnames)) {
modnames <- paste("Mod", 1:length(logL), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
}
##check that nobs is the same for all models
if(length(unique(nobs)) > 1) stop("\nSample size must be identical for all models\n")
##check that logL, K, estimate, se are vectors of same length
nlogL <- length(logL)
nK <- length(K)
if(!all(nlogL == c(nlogL, nK))) stop("\nArguments 'logL' and 'K' must be of equal length\n")
##create model selection table
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- useBICCustom(logL = logL, K = K, return.K = TRUE,
nobs = nobs, c.hat = c.hat) #extract number of parameters
Results$BIC <- useBICCustom(logL = logL, K = K, nobs = nobs,
c.hat = c.hat) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute Akaike weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(K)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- logL
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- logL
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
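## Example (not run): BIC-based model selection with the same
## hypothetical inputs; 'nobs' is required to compute the BIC
if(FALSE) {
  bictabCustom(logL = c(-161.99, -153.12, -150.56),
               K = c(2, 3, 4),
               modnames = c("null", "additive", "interaction"),
               nobs = 50)
}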
##Custom model averaging where user inputs estimates and SE's manually
##convenient when model type or SE's not available from predict methods
modavgCustom <-
function(logL, K, modnames = NULL, estimate, se, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95, c.hat = 1,
useBIC = FALSE){
##check if modnames are not supplied
if(is.null(modnames)) {
modnames <- paste("Mod", 1:length(logL), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
}
##check that logL, K, estimate, se are vectors of same length
nlogL <- length(logL)
nK <- length(K)
nestimate <- length(estimate)
nse <- length(se)
if(!all(nlogL == c(nlogL, nK, nestimate, nse))) stop("\nArguments 'logL', 'K', 'estimate', and 'se' must be of equal length\n")
##compute table
if(!useBIC) {
new_table <- aictabCustom(modnames = modnames, logL = logL, K = K,
second.ord = second.ord, nobs = nobs, sort = FALSE,
c.hat = c.hat) #recompute AIC table and associated measures
}
if(useBIC) {
new_table <- bictabCustom(modnames = modnames, logL = logL, K = K,
                               nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute BIC table and associated measures
}
new_table$Estimate <- estimate
new_table$SE <- se
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <-new_table$SE*sqrt(c.hat)
}
if(!useBIC) {
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_estimate <- sum(new_table$AICcWt*new_table$Estimate)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_estimate <- sum(new_table$QAICcWt*new_table$Estimate)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_estimate <- sum(new_table$AICWt*new_table$Estimate)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_estimate <- sum(new_table$QAICWt*new_table$Estimate)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2)))
}
}
}
if(useBIC){
##BIC
if(c.hat == 1) {
Modavg_estimate <- sum(new_table$BICWt*new_table$Estimate)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$BICWt*sqrt(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$BICWt*(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2)))
}
}
##QBIC
if(c.hat > 1) {
Modavg_estimate <- sum(new_table$QBICWt*new_table$Estimate)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QBICWt*sqrt(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QBICWt*(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2)))
}
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_estimate - zcrit*Uncond_SE
Upper_CL <- Modavg_estimate + zcrit*Uncond_SE
out.modavg <- list("Mod.avg.table" = new_table, "Mod.avg.est" = Modavg_estimate,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level,
"Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgCustom", "list")
return(out.modavg)
}
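## Example (not run): model-averaging a parameter from manually-supplied
## estimates and SE's; all numeric values below are hypothetical
if(FALSE) {
  modavgCustom(logL = c(-161.99, -153.12, -150.56),
               K = c(2, 3, 4),
               modnames = c("null", "additive", "interaction"),
               estimate = c(0.82, 0.89, 0.97),
               se = c(0.11, 0.10, 0.094),
               nobs = 50)
}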
##print method
print.modavgCustom <- function(x, digits = 2, ...) {
ic <- colnames(x$Mod.avg.table)[3]
cat("\nMultimodel inference on manually-supplied parameter based on", ic, "\n")
cat("\n", ic, "table used to obtain model-averaged estimate:\n")
oldtab <- x$Mod.avg.table
if (any(names(oldtab)=="c_hat")) {cat("\t(c-hat estimate = ", oldtab$c_hat[1], ")\n")}
cat("\n")
if (any(names(oldtab)=="c_hat")) {
nice.tab <- cbind(oldtab[,2], oldtab[,3], oldtab[,4], oldtab[,6],
oldtab[,9], oldtab[,10])
} else {nice.tab <- cbind(oldtab[,2], oldtab[,3], oldtab[,4], oldtab[,6],
oldtab[,8], oldtab[,9])
}
colnames(nice.tab) <- c(colnames(oldtab)[c(2, 3, 4, 6)], "Estimate", "SE")
rownames(nice.tab) <- oldtab[, 1]
print(round(nice.tab, digits = digits))
cat("\nModel-averaged estimate:", eval(round(x$Mod.avg.est, digits = digits)), "\n")
cat("Unconditional SE:", eval(round(x$Uncond.SE, digits = digits)), "\n")
cat("",x$Conf.level*100, "% Unconditional confidence interval:", round(x$Lower.CL, digits = digits),
",", round(x$Upper.CL, digits = digits), "\n\n")
}
##function for generic information criteria
ictab <- function(ic, K, modnames = NULL, sort = TRUE, ic.name = NULL){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
modnames <- paste("Mod", 1:length(ic), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
}
##check that logL, K, estimate, se are vectors of same length
nic <- length(ic)
nK <- length(K)
if(!all(nic == c(nic, nK))) stop("\nArguments 'ic' and 'K' must be of equal length\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- K
Results$IC <- ic
Results$Delta_IC <- Results$IC - min(Results$IC) #compute delta IC
Results$ModelLik <- exp(-0.5*Results$Delta_IC) #compute model likelihood required to compute Akaike weights
Results$ICWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$IC)) != length(K)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if name of IC is specified
if(!is.null(ic.name)) {
##replace IC with ic.name
names(Results) <- gsub(pattern = "IC", replacement = ic.name,
x = names(Results))
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on delta IC
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("ictab", "data.frame")
return(Results)
}
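## Example (not run): model selection from any information criterion
## supplied directly (e.g., WAIC values); numbers are hypothetical
if(FALSE) {
  ictab(ic = c(341.2, 339.8, 345.6),
        K = c(3, 4, 2),
        modnames = c("Mod1", "Mod2", "Mod3"),
        ic.name = "WAIC")
}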
print.ictab <- function(x, digits = 2, ...) {
cat("\nModel selection based on ", colnames(x)[3], ":\n", sep = "")
cat("\n")
#check if Cum.Wt should be printed
if(any(names(x) == "Cum.Wt")) {
nice.tab <- cbind(x[, c(2:4, 6:7)])
colnames(nice.tab) <- colnames(x)[c(2:4, 6:7)]
rownames(nice.tab) <- x[, 1]
} else {
nice.tab <- cbind(x[, c(2:4, 6)])
colnames(nice.tab) <- colnames(x)[c(2:4, 6)]
rownames(nice.tab) <- x[, 1]
}
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n")
}
##model averaging for generic IC where user inputs estimates and SE's manually
modavgIC <- function(ic, K, modnames = NULL, estimate, se,
uncond.se = "revised", conf.level = 0.95, ic.name = NULL){
##check if modnames are not supplied
if(is.null(modnames)) {
modnames <- paste("Mod", 1:length(ic), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
}
##check that logL, K, estimate, se are vectors of same length
nic <- length(ic)
nK <- length(K)
nestimate <- length(estimate)
nse <- length(se)
if(!all(nic == c(nic, nK, nestimate, nse))) stop("\nArguments 'ic', 'K', 'estimate', and 'se' must be of equal length\n")
##compute table
new_table <- ictab(ic = ic, K = K, modnames = modnames,
sort = FALSE, ic.name = ic.name)
new_table$Estimate <- estimate
new_table$SE <- se
##compute model-averaged estimates, unconditional SE, and 95% CL
Modavg_estimate <- sum(new_table[, 6] * new_table$Estimate)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table[, 6] * sqrt(new_table$SE^2 + (new_table$Estimate- Modavg_estimate)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table[, 6] * (new_table$SE^2 + (new_table$Estimate - Modavg_estimate)^2)))
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_estimate - zcrit*Uncond_SE
Upper_CL <- Modavg_estimate + zcrit*Uncond_SE
out.modavg <- list("Mod.avg.table" = new_table, "Mod.avg.est" = Modavg_estimate,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level,
"Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgIC", "list")
return(out.modavg)
}
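## Example (not run): model-averaging an estimate based on a generic
## information criterion; all values below are hypothetical
if(FALSE) {
  modavgIC(ic = c(341.2, 339.8, 345.6),
           K = c(3, 4, 2),
           estimate = c(0.54, 0.61, 0.43),
           se = c(0.12, 0.14, 0.25),
           ic.name = "WAIC")
}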
##print method
print.modavgIC <- function(x, digits = 2, ...) {
ic <- colnames(x$Mod.avg.table)[3]
cat("\nMultimodel inference on manually-supplied parameter based on", ic, "\n")
cat("\n", ic, "table used to obtain model-averaged estimate:\n")
oldtab <- x$Mod.avg.table
cat("\n")
nice.tab <- cbind(oldtab[, c(2:4, 6:8)])
colnames(nice.tab) <- colnames(oldtab)[c(2:4, 6:8)]
rownames(nice.tab) <- oldtab[, 1]
print(round(nice.tab, digits = digits))
cat("\nModel-averaged estimate:", eval(round(x$Mod.avg.est, digits = digits)), "\n")
cat("Unconditional SE:", eval(round(x$Uncond.SE, digits = digits)), "\n")
cat("",x$Conf.level*100, "% Unconditional confidence interval:", round(x$Lower.CL, digits = digits),
",", round(x$Upper.CL, digits = digits), "\n\n")
}
###################################################
##end of file: Custom.functions.R
###################################################
##generic
DIC <- function(mod, return.pD = FALSE, ...) {
UseMethod("DIC", mod)
}
##default
DIC.default <- function(mod, return.pD = FALSE, ...) {
stop("\nFunction not yet defined for this object class\n")
}
##bugs
DIC.bugs <- function(mod, return.pD = FALSE, ...){
## DIC = posterior mean of the deviance + pD, where pD is the effective number of parameters
if(return.pD == FALSE){
DIC <- mod$DIC
} else {DIC <- mod$pD}
return(DIC)
}
##jags
DIC.rjags <- function(mod, return.pD = FALSE, ...){
## DIC = posterior mean of the deviance + pD, where pD is the effective number of parameters
if(return.pD == FALSE){
DIC <- mod$BUGSoutput$DIC
} else {DIC <- mod$BUGSoutput$pD}
return(DIC)
}
##jagsUI
DIC.jagsUI <- function(mod, return.pD = FALSE, ...){
## DIC = posterior mean of the deviance + pD, where pD is the effective number of parameters
if(return.pD == FALSE){
DIC <- mod$DIC
} else {DIC <- mod$pD}
return(DIC)
}
###################################################
##end of file: DIC.R
###################################################
##Goodness-of-fit test based on the chi-square
##chi-square
Nmix.chisq <- function(mod, ...) {
UseMethod("Nmix.chisq", mod)
}
Nmix.chisq.default <- function(mod, ...) {
stop("\nFunction not yet defined for this object class\n")
}
##PCount
Nmix.chisq.unmarkedFitPCount <- function(mod, ...) {
##extract original data from model object
obs <- mod@data@y
##extract fitted values
fits <- fitted(mod)
##check if sites were removed from analysis
sr <- mod@sitesRemoved
if(length(sr) > 0) {
obs <- obs[-sr, ]
fits <- fits[-sr, ]
}
##add NA's where fitted values are NA
#obs[is.na(fits)] <- NA
##compute chi-square
chi.sq <- sum((obs - fits)^2/fits, na.rm = TRUE) #added argument na.rm = TRUE when NA's occur
result <- list(chi.square = chi.sq, model.type = class(mod)[1])
class(result) <- "Nmix.chisq"
return(result)
}
##PCO
Nmix.chisq.unmarkedFitPCO <- function(mod, ...) {
##extract original data from model object
obs <- mod@data@y
##extract fitted values
fits <- fitted(mod)
##check if sites were removed from analysis
sr <- mod@sitesRemoved
if(length(sr) > 0) {
obs <- obs[-sr, ]
fits <- fits[-sr, ]
}
##add NA's where fitted values are NA
#obs[is.na(fits)] <- NA
##compute chi-square
chi.sq <- sum((obs - fits)^2/fits, na.rm = TRUE) #added argument na.rm = TRUE when NA's occur
result <- list(chi.square = chi.sq, model.type = class(mod)[1])
class(result) <- "Nmix.chisq"
return(result)
}
##unmarkedFitMPois
Nmix.chisq.unmarkedFitMPois <- function(mod, ...) {
##extract original data from model object
obs <- mod@data@y
##extract fitted values
fits <- fitted(mod)
##check if sites were removed from analysis
sr <- mod@sitesRemoved
if(length(sr) > 0) {
obs <- obs[-sr, ]
fits <- fits[-sr, ]
}
##add NA's where fitted values are NA
#obs[is.na(fits)] <- NA
##compute chi-square
chi.sq <- sum((obs - fits)^2/fits, na.rm = TRUE) #added argument na.rm = TRUE when NA's occur
result <- list(chi.square = chi.sq, model.type = class(mod)[1])
class(result) <- "Nmix.chisq"
return(result)
}
##unmarkedFitDS
Nmix.chisq.unmarkedFitDS <- function(mod, ...) {
##extract original data from model object
obs <- mod@data@y
##extract fitted values
fits <- fitted(mod)
##check if sites were removed from analysis
sr <- mod@sitesRemoved
if(length(sr) > 0) {
obs <- obs[-sr, ]
fits <- fits[-sr, ]
}
##add NA's where fitted values are NA
#obs[is.na(fits)] <- NA
##compute chi-square
chi.sq <- sum((obs - fits)^2/fits, na.rm = TRUE) #added argument na.rm = TRUE when NA's occur
result <- list(chi.square = chi.sq, model.type = class(mod)[1])
class(result) <- "Nmix.chisq"
return(result)
}
##unmarkedFitGDS
Nmix.chisq.unmarkedFitGDS <- function(mod, ...) {
##extract original data from model object
obs <- mod@data@y
##extract fitted values
fits <- fitted(mod)
##check if sites were removed from analysis
sr <- mod@sitesRemoved
if(length(sr) > 0) {
obs <- obs[-sr, ]
fits <- fits[-sr, ]
}
##add NA's where fitted values are NA
#obs[is.na(fits)] <- NA
##compute chi-square
chi.sq <- sum((obs - fits)^2/fits, na.rm = TRUE) #added argument na.rm = TRUE when NA's occur
result <- list(chi.square = chi.sq, model.type = class(mod)[1])
class(result) <- "Nmix.chisq"
return(result)
}
##unmarkedFitGPC
Nmix.chisq.unmarkedFitGPC <- function(mod, ...) {
##extract original data from model object
obs <- mod@data@y
##extract fitted values
fits <- fitted(mod)
##check if sites were removed from analysis
sr <- mod@sitesRemoved
if(length(sr) > 0) {
obs <- obs[-sr, ]
fits <- fits[-sr, ]
}
##add NA's where fitted values are NA
#obs[is.na(fits)] <- NA
##compute chi-square
chi.sq <- sum((obs - fits)^2/fits, na.rm = TRUE) #added argument na.rm = TRUE when NA's occur
result <- list(chi.square = chi.sq, model.type = class(mod)[1])
class(result) <- "Nmix.chisq"
return(result)
}
##unmarkedFitGMM
Nmix.chisq.unmarkedFitGMM <- function(mod, ...) {
##extract original data from model object
obs <- mod@data@y
##extract fitted values
fits <- fitted(mod)
##check if sites were removed from analysis
sr <- mod@sitesRemoved
if(length(sr) > 0) {
obs <- obs[-sr, ]
fits <- fits[-sr, ]
}
##add NA's where fitted values are NA
#obs[is.na(fits)] <- NA
##compute chi-square
chi.sq <- sum((obs - fits)^2/fits, na.rm = TRUE) #added argument na.rm = TRUE when NA's occur
result <- list(chi.square = chi.sq, model.type = class(mod)[1])
class(result) <- "Nmix.chisq"
return(result)
}
##multmixOpen
Nmix.chisq.unmarkedFitMMO <- function(mod, ...) {
##extract original data from model object
obs <- mod@data@y
##extract fitted values
fits <- fitted(mod)
##check if sites were removed from analysis
sr <- mod@sitesRemoved
if(length(sr) > 0) {
obs <- obs[-sr, ]
fits <- fits[-sr, ]
}
##add NA's where fitted values are NA
#obs[is.na(fits)] <- NA
##compute chi-square
chi.sq <- sum((obs - fits)^2/fits, na.rm = TRUE) #added argument na.rm = TRUE when NA's occur
result <- list(chi.square = chi.sq, model.type = class(mod)[1])
class(result) <- "Nmix.chisq"
return(result)
}
##distsampOpen
Nmix.chisq.unmarkedFitDSO <- function(mod, ...) {
##extract original data from model object
obs <- mod@data@y
##extract fitted values
fits <- fitted(mod)
##check if sites were removed from analysis
sr <- mod@sitesRemoved
if(length(sr) > 0) {
obs <- obs[-sr, ]
fits <- fits[-sr, ]
}
##add NA's where fitted values are NA
#obs[is.na(fits)] <- NA
##compute chi-square
chi.sq <- sum((obs - fits)^2/fits, na.rm = TRUE) #added argument na.rm = TRUE when NA's occur
result <- list(chi.square = chi.sq, model.type = class(mod)[1])
class(result) <- "Nmix.chisq"
return(result)
}
##generic
Nmix.gof.test <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1,...){
UseMethod("Nmix.gof.test", mod)
}
Nmix.gof.test.default <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1,...){
stop("\nFunction not yet defined for this object class\n")
}
##PCount
Nmix.gof.test.unmarkedFitPCount <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){
##more bootstrap samples are recommended (e.g., 1000, 5000, or 10 000)
##extract model type
model.type <- Nmix.chisq(mod)$model.type
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel, ncores = ncores)
}
##determine significance
  p.value <- sum(out@t.star >= out@t0)/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display = paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
    hist(out@t.star,
         main = as.expression(substitute("Bootstrapped "*chi^2*" fit statistic ("*nsim*" samples)",
                                         list(nsim = nsim))),
         xlim = range(c(out@t.star, out@t0)),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
  c.hat.est <- out@t0/mean(out@t.star)
  ##assemble result
  gof.out <- list(model.type = model.type, chi.square = out@t0, t.star = out@t.star,
p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "Nmix.chisq"
return(gof.out)
}
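## Example (not run): GOF test for a single-season N-mixture model;
## assumes a fitted 'unmarkedFitPCount' object 'fm' from unmarked::pcount(),
## with many more bootstrap samples than the default of 5
if(FALSE) {
  gof <- Nmix.gof.test(fm, nsim = 1000)
  gof$c.hat.est #estimate of overdispersion
}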
##PCO
Nmix.gof.test.unmarkedFitPCO <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){
##more bootstrap samples are recommended (e.g., 1000, 5000, or 10 000)
##extract model type
model.type <- Nmix.chisq(mod)$model.type
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel,
ncores = ncores)
}
##determine significance
  p.value <- sum(out@t.star >= out@t0)/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display = paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
    hist(out@t.star,
         main = as.expression(substitute("Bootstrapped "*chi^2*" fit statistic ("*nsim*" samples)",
                                         list(nsim = nsim))),
         xlim = range(c(out@t.star, out@t0)),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
  c.hat.est <- out@t0/mean(out@t.star)
  ##assemble result
  gof.out <- list(model.type = model.type, chi.square = out@t0, t.star = out@t.star,
p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "Nmix.chisq"
return(gof.out)
}
##DS
Nmix.gof.test.unmarkedFitDS <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){
##more bootstrap samples are recommended (e.g., 1000, 5000, or 10 000)
##extract model type
model.type <- Nmix.chisq(mod)$model.type
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel,
ncores = ncores)
}
##determine significance
  p.value <- sum(out@t.star >= out@t0)/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display = paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
    hist(out@t.star,
         main = as.expression(substitute("Bootstrapped "*chi^2*" fit statistic ("*nsim*" samples)",
                                         list(nsim = nsim))),
         xlim = range(c(out@t.star, out@t0)),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
  c.hat.est <- out@t0/mean(out@t.star)
  ##assemble result
  gof.out <- list(model.type = model.type, chi.square = out@t0, t.star = out@t.star,
p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "Nmix.chisq"
return(gof.out)
}
##GDS
Nmix.gof.test.unmarkedFitGDS <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){
##more bootstrap samples are recommended (e.g., 1000, 5000, or 10 000)
##extract model type
model.type <- Nmix.chisq(mod)$model.type
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel,
ncores = ncores)
}
##determine significance
  p.value <- sum(out@t.star >= out@t0)/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display = paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
    hist(out@t.star,
         main = as.expression(substitute("Bootstrapped "*chi^2*" fit statistic ("*nsim*" samples)",
                                         list(nsim = nsim))),
         xlim = range(c(out@t.star, out@t0)),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
  c.hat.est <- out@t0/mean(out@t.star)
  ##assemble result
  gof.out <- list(model.type = model.type, chi.square = out@t0, t.star = out@t.star,
p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "Nmix.chisq"
return(gof.out)
}
##GMM
Nmix.gof.test.unmarkedFitGMM <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){
##more bootstrap samples are recommended (e.g., 1000, 5000, or 10 000)
##extract model type
model.type <- Nmix.chisq(mod)$model.type
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel,
ncores = ncores)
}
##determine significance
  p.value <- sum(out@t.star >= out@t0)/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display = paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
    hist(out@t.star,
         main = as.expression(substitute("Bootstrapped "*chi^2*" fit statistic ("*nsim*" samples)",
                                         list(nsim = nsim))),
         xlim = range(c(out@t.star, out@t0)),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
  c.hat.est <- out@t0/mean(out@t.star)
  ##assemble result
  gof.out <- list(model.type = model.type, chi.square = out@t0, t.star = out@t.star,
p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "Nmix.chisq"
return(gof.out)
}
##GPC
Nmix.gof.test.unmarkedFitGPC <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1,...){
##more bootstrap samples are recommended (e.g., 1000, 5000, or 10 000)
##extract model type
model.type <- Nmix.chisq(mod)$model.type
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel,
ncores = ncores)
}
##determine significance
  p.value <- sum(out@t.star >= out@t0)/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display = paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
    hist(out@t.star,
         main = as.expression(substitute("Bootstrapped "*chi^2*" fit statistic ("*nsim*" samples)",
                                         list(nsim = nsim))),
         xlim = range(c(out@t.star, out@t0)),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
  c.hat.est <- out@t0/mean(out@t.star)
  ##assemble result
  gof.out <- list(model.type = model.type, chi.square = out@t0, t.star = out@t.star,
p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "Nmix.chisq"
return(gof.out)
}
##MPois
Nmix.gof.test.unmarkedFitMPois <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){
##more bootstrap samples are recommended (e.g., 1000, 5000, or 10 000)
##extract model type
model.type <- Nmix.chisq(mod)$model.type
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel,
ncores = ncores)
}
##determine significance
  p.value <- sum(out@t.star >= out@t0)/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display = paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
    hist(out@t.star,
         main = as.expression(substitute("Bootstrapped "*chi^2*" fit statistic ("*nsim*" samples)",
                                         list(nsim = nsim))),
         xlim = range(c(out@t.star, out@t0)),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
  c.hat.est <- out@t0/mean(out@t.star)
  ##assemble result
  gof.out <- list(model.type = model.type, chi.square = out@t0, t.star = out@t.star,
p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "Nmix.chisq"
return(gof.out)
}
##multmixOpen
Nmix.gof.test.unmarkedFitMMO <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){
##more bootstrap samples are recommended (e.g., 1000, 5000, or 10 000)
##extract model type
model.type <- Nmix.chisq(mod)$model.type
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel,
ncores = ncores)
}
##determine significance
  p.value <- sum(out@t.star >= out@t0)/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display = paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
        hist(out@t.star,
             main = as.expression(substitute("Bootstrapped "*chi^2*" fit statistic ("*nsim*" samples)",
                                             list(nsim = nsim))),
             xlim = range(c(out@t.star, out@t0)),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
    c.hat.est <- out@t0/mean(out@t.star)
    ##assemble result
    gof.out <- list(model.type = model.type, chi.square = out@t0, t.star = out@t.star,
p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "Nmix.chisq"
return(gof.out)
}
##distsampOpen
Nmix.gof.test.unmarkedFitDSO <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){
    ##more bootstrap samples are recommended (e.g., 1000, 5000, or 10000)
##extract model type
model.type <- Nmix.chisq(mod)$model.type
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) Nmix.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel,
ncores = ncores)
}
##determine significance
    p.value <- sum(out@t.star >= out@t0)/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display = paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
        hist(out@t.star,
             main = as.expression(substitute("Bootstrapped "*chi^2*" fit statistic ("*nsim*" samples)",
                                             list(nsim = nsim))),
             xlim = range(c(out@t.star, out@t0)),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
    c.hat.est <- out@t0/mean(out@t.star)
    ##assemble result
    gof.out <- list(model.type = model.type, chi.square = out@t0, t.star = out@t.star,
p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "Nmix.chisq"
return(gof.out)
}
##print method
print.Nmix.chisq <- function(x, digits.vals = 2, digits.chisq = 4, ...) {
cat("\nChi-square goodness-of-fit for N-mixture model of \'", x$model.type, "\' class\n", sep = "")
cat("\nObserved chi-square statistic =", round(x$chi.square, digits = digits.chisq), "\n")
if(length(x) > 2){
cat("Number of bootstrap samples =", x$nsim)
cat("\nP-value =", x$p.value)
cat("\n\nQuantiles of bootstrapped statistics:\n")
print(quantile(x$t.star), digits = digits.vals)
cat("\nEstimate of c-hat =", round(x$c.hat.est, digits = digits.vals), "\n")
}
cat("\n")
}
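##Example usage (a sketch, not from the package: 'fm' is a hypothetical fitted
##'unmarkedFitPCO' object from the unmarked package; larger nsim values are
##needed for a reliable P-value):
##  gof <- Nmix.gof.test(fm, nsim = 1000, plot.hist = TRUE)
##  print(gof)
##  gof$c.hat.est  #observed chi-square / mean of bootstrapped chi-squares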
## end of file: /scratch/gouwar.j/cran-all/cranData/AICcmodavg/R/Nmix.gof.test.R
##create generic aictab
aictab <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...) {
##format list according to model class
cand.set <- formatCands(cand.set)
UseMethod("aictab", cand.set)
}
##default to indicate when object class not supported
aictab.default <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...) {
stop("\nFunction not yet defined for this object class\n")
}
##aov
aictab.AICaov.lm <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##betareg
aictab.AICbetareg <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##sclm.clm
aictab.AICsclm.clm <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##clm
aictab.AICclm <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##clmm
aictab.AICclmm <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##coxme
aictab.AICcoxme <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)$fixed[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
##arrange in table
Results <- data.frame(Modnames=modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add partial log-likelihood column
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) extractLL(i)[1]))
##rename correctly to AIC
    if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##coxph and clogit
aictab.AICcoxph <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
##arrange in table
Results <- data.frame(Modnames=modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add partial log-likelihood column
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##fitdist (from fitdistrplus)
aictab.AICfitdist <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
    ##no check on response variable: fitdist objects do not contain a model formula
#check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
#if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##fitdistr (from MASS)
aictab.AICfitdistr <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
    ##no check on response variable: fitdistr objects do not contain a model formula
#check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
#if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##glm
aictab.AICglm.lm <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE, second.ord = second.ord,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE, second.ord = second.ord,
nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X=cand.set, FUN = function(i) logLik(i)[1]))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
    ##rename correctly to AIC
    if(second.ord == FALSE && c.hat == 1) {
        colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
        Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
    }
    ##rename correctly to QAIC and add column for c-hat
    if(second.ord == FALSE && c.hat > 1) {
        colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
        LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
        Results$Quasi.LL <- LL/c.hat
        Results$c_hat <- c.hat
    }
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
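##Example usage (a sketch: 'm1' and 'm2' are hypothetical glm fits of the same
##response on the same data; supplying c.hat > 1 switches the table to QAIC(c)
##and quasi-likelihoods):
##  cands <- list(Null = m1, Global = m2)
##  aictab(cand.set = cands, second.ord = TRUE, c.hat = 1)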
##glmmTMB
aictab.AICglmmTMB <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs,
c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs,
c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X=cand.set, FUN = function(i) logLik(i)[1]))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
    ##rename correctly to AIC
    if(second.ord == FALSE && c.hat == 1) {
        colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
        Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
    }
    ##rename correctly to QAIC and add column for c-hat
    if(second.ord == FALSE && c.hat > 1) {
        colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
        LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
        Results$Quasi.LL <- LL/c.hat
        Results$c_hat <- c.hat
    }
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##gls
aictab.AICgls <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
##check if models were fit with same method (REML or ML)
check_ML <- unlist(lapply(cand.set, FUN = function(i) i$method))
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with method='ML':", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(identical(check.method, c("ML", "REML"))) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
#rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
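##Example usage (a sketch: hypothetical gls fits on data frame 'dat'; models
##differing in their fixed effects should be fit with method = "ML", as the
##check in the function above warns):
##  g1 <- nlme::gls(y ~ x1, data = dat, method = "ML")
##  g2 <- nlme::gls(y ~ x1 + x2, data = dat, method = "ML")
##  aictab(cand.set = list(g1 = g1, g2 = g2))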
##gnls
aictab.AICgnls.gls <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames=modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##hurdle
aictab.AIChurdle <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#if(c.hat != 1) stop("\nThis function does not support overdispersion in \'zeroinfl\' models\n")
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
    if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##lavaan
aictab.AIClavaan <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether observed variables are the same for all models
  check.obs <- unlist(lapply(X = cand.set, FUN = function(b) b@Data@ov.names[[1]]))
##frequency of each observed variable
freq.obs <- table(check.obs)
if(length(unique(freq.obs)) > 1) stop("\nModels with different sets of observed variables are not directly comparable\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##lm
aictab.AIClm <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##lme
aictab.AIClme <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
#check if models were fit with same method (REML or ML)
check_ML <- unlist(lapply(cand.set, FUN = function(i) i$method))
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with method='ML':", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
  if(length(check.method) > 1) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc-min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
#rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
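##Illustrative sketch (assumption: the nlme package is installed; the
##model is arbitrary). The lme method above reads each fit's $method
##element into check_ML and warns unless it is "ML":

```r
library(nlme)  #ships with R as a recommended package
## fit with method = "ML" so fixed effects are comparable across models
fm <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont,
          method = "ML")
fm$method      #the element inspected by check_ML; "ML" for this fit
logLik(fm)[1]  #the value stored in the LL column
```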
##lmekin
aictab.AIClmekin <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
#check if models were fit with same method (REML or ML)
check_ML <- unlist(lapply(cand.set, FUN = function(i) i$method))
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with method='ML':", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
  if(length(check.method) > 1) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc-min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) extractLL(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)[1]))
}
#rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##maxlike
aictab.AICmaxlikeFit.list <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
if(c.hat != 1) stop("\nThis function does not support overdispersion in \'maxlikeFit\' models\n")
##add check to see whether response variable is the same for all models
#check.resp <- lapply(X = cand.set, FUN = function(b) nrow(b$points.retained))
#if(length(unique(check.resp)) > 1) stop("\nYou must use the same data set for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##mer - lme4 version < 1
aictab.AICmer <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if (is.null(modnames)) {
if (is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
#check if models were fit with same method (REML or ML)
check_bin <- unlist(lapply(cand.set, FUN = function(i) i@dims["REML"]))
check_ML <- ifelse(check_bin == 1, "REML", "ML")
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with ML estimation:", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(length(check.method) > 1) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
#rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##lmerMod
aictab.AIClmerMod <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for subclass of object
sub.class <- lapply(X = cand.set, FUN = class)
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
##check if models were fit with same method (REML or ML)
check_REML <- unlist(lapply(cand.set, FUN = function(i) isREML(i)))
check_ML <- ifelse(check_REML, "REML", "ML")
if (any(check_REML)) {
warning("\nModel selection for fixed effects is only appropriate with ML estimation:", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(length(check.method) > 1) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
#rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##lmerModLmerTest
aictab.AIClmerModLmerTest <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for subclass of object
sub.class <- lapply(X = cand.set, FUN = class)
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
##check if models were fit with same method (REML or ML)
check_REML <- unlist(lapply(cand.set, FUN = function(i) isREML(i)))
check_ML <- ifelse(check_REML, "REML", "ML")
if (any(check_REML)) {
warning("\nModel selection for fixed effects is only appropriate with ML estimation:", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(length(check.method) > 1) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
#rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##glmerMod
aictab.AICglmerMod <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##glm.nb
aictab.AICnegbin.glm.lm <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE, second.ord = second.ord,
nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE, second.ord = second.ord,
nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE) {
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
##rename correctly to AIC
if(second.ord == FALSE) {
    colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
    Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##nlme
aictab.AICnlme.lme <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
#check if models were fit with same method (REML or ML)
check_ML <- unlist(lapply(cand.set, FUN = function(i) i$method))
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with method='ML':", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
  if(length(check.method) > 1) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc-min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
#rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##nlmerMod
aictab.AICnlmerMod <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) unlist(strsplit(x = as.character(formula(b)), split = "~"))[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
#rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##multinom
aictab.AICmultinom.nnet <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc #
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if LL computed
if(second.ord==TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
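##Worked sketch (comments only, with hypothetical AICc values): the
##weight computations above are plain arithmetic on the vector of AICc
##values. For aicc <- c(102.1, 104.3, 110.0):
##  delta <- aicc - min(aicc)        #0.0, 2.2, 7.9
##  modlik <- exp(-0.5 * delta)      #relative likelihood of each model
##  wt <- modlik / sum(modlik)       #Akaike weights, approx. 0.74, 0.25, 0.01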
##lme
aictab.AIClme <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
##check if models were fit with same method (REML or ML)
check_ML <- unlist(lapply(cand.set, FUN = function(i) i$method))
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with method=ML:", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
##note: unique() may return the methods in either order, so test the length
if(length(check.method) > 1) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
#rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
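##Usage sketch (hypothetical call, data from nlme): because the check above
##rejects mixed ML/REML candidate sets, models comparing fixed effects
##should all be refit with method = "ML" before building the table, e.g.:
##  m1 <- nlme::lme(distance ~ age, random = ~ 1 | Subject,
##                  data = nlme::Orthodont, method = "ML")
##  m2 <- nlme::lme(distance ~ age + Sex, random = ~ 1 | Subject,
##                  data = nlme::Orthodont, method = "ML")
##  aictab(cand.set = list(age = m1, age.sex = m2))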
##nls
aictab.AICnls <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##polr
aictab.AICpolr <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##rlm
aictab.AICrlm.lm <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##survreg
aictab.AICsurvreg <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##occu
aictab.AICunmarkedFitOccu <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
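##Usage sketch (hypothetical objects y.mat and site.df, requires unmarked):
##single-season occupancy models are ranked the same way, with c.hat > 1
##switching the table to the quasi-likelihood (QAIC/QAICc) columns:
##  umf <- unmarked::unmarkedFrameOccu(y = y.mat, siteCovs = site.df)
##  fm1 <- unmarked::occu(~ 1 ~ 1, data = umf)
##  fm2 <- unmarked::occu(~ 1 ~ habitat, data = umf)  #habitat from site.df
##  aictab(cand.set = list(null = fm1, habitat = fm2), c.hat = 1)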
##colext
aictab.AICunmarkedFitColExt <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##occuRN
aictab.AICunmarkedFitOccuRN <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not appropriate with Royle-Nichols heterogeneity models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##pcount
aictab.AICunmarkedFitPCount <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##same function as that for objects created by pcount()
aictab.AICunmarkedFitPCO <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##distsamp
aictab.AICunmarkedFitDS <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##gdistsamp
aictab.AICunmarkedFitGDS <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not appropriate for distance sampling models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##occuFP
aictab.AICunmarkedFitOccuFP <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
##if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for false-positive occupancy models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##occuMulti
aictab.AICunmarkedFitOccuMulti <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for multispecies occupancy models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##multinomPois
aictab.AICunmarkedFitMPois <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for multinomial Poisson models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##gmultmix
aictab.AICunmarkedFitGMM <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for generalized multinomial mixture models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##gpcount
aictab.AICunmarkedFitGPC <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for generalized binomial mixture models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##vglm
aictab.AICvglm <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
##changed to AICcmodavg:::AICc.vglm to avoid conflicts with VGAM
Results <- data.frame(Modnames=modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(X = cand.set, FUN = AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##occuMS
aictab.AICunmarkedFitOccuMS <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##occuTTD
aictab.AICunmarkedFitOccuTTD <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##multmixOpen
aictab.AICunmarkedFitMMO <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##distsampOpen
aictab.AICunmarkedFitDSO <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE,
second.ord = second.ord, nobs = nobs, c.hat = c.hat)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if AICc and c.hat = 1
if(second.ord == TRUE && c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAICc and add column for c-hat
if(second.ord == TRUE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAICc", "Delta_QAICc", "ModelLik", "QAICcWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
##rename correctly to AIC
if(second.ord == FALSE && c.hat == 1) {
colnames(Results) <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QAIC and add column for c-hat
if(second.ord == FALSE && c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QAIC", "Delta_QAIC", "ModelLik", "QAICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
##zeroinfl
aictab.AICzeroinfl <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta AICc
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#if(c.hat != 1) stop("\nThis function does not support overdispersion in \'zeroinfl\' models\n")
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, AICc, return.K = TRUE, second.ord = second.ord, nobs = nobs)) #extract number of parameters
Results$AICc <- unlist(lapply(cand.set, AICc, return.K = FALSE, second.ord = second.ord, nobs = nobs)) #extract AICc
Results$Delta_AICc <- Results$AICc - min(Results$AICc) #compute delta AICc
Results$ModelLik <- exp(-0.5*Results$Delta_AICc) #compute model likelihood required to compute Akaike weights
Results$AICcWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
##check if some models are redundant
if(length(unique(Results$AICc)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)))
##rename correctly to AIC
if(second.ord == FALSE) {
colnames(Results)[1:6] <- c("Modnames", "K", "AIC", "Delta_AIC", "ModelLik", "AICWt")
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("aictab", "data.frame")
return(Results)
}
print.aictab <-
function(x, digits = 2, LL = TRUE, ...) {
cat("\nModel selection based on ", colnames(x)[3], ":\n", sep = "")
if (any(names(x) == "c_hat")) {cat("(c-hat estimate = ", x$c_hat[1], ")\n", sep = "")}
cat("\n")
#check if Cum.Wt should be printed
if(any(names(x) == "Cum.Wt")) {
nice.tab <- cbind(x[, 2], x[, 3], x[, 4], x[, 6], x[, "Cum.Wt"], x[, 7])
colnames(nice.tab) <- c(colnames(x)[c(2, 3, 4, 6)], "Cum.Wt", colnames(x)[7])
rownames(nice.tab) <- x[, 1]
} else {
nice.tab <- cbind(x[, 2], x[, 3], x[, 4], x[, 6], x[, 7])
colnames(nice.tab) <- c(colnames(x)[c(2, 3, 4, 6, 7)])
rownames(nice.tab) <- x[, 1]
}
#if LL==FALSE
if(identical(LL, FALSE)) {
names.cols <- colnames(nice.tab)
sel.LL <- which(attr(regexpr(pattern = "LL", text = names.cols), "match.length") > 1)
nice.tab <- nice.tab[, -sel.LL]
}
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n")
}
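##Illustrative usage sketch (not run; 'dat' and its variables are hypothetical,
##not part of this file): candidate models are supplied as a (preferably named)
##list and ranked by AICc; with c.hat > 1 the table reports QAIC(c) instead.
## m1 <- glm(count ~ 1, family = poisson, data = dat)
## m2 <- glm(count ~ treatment, family = poisson, data = dat)
## aictab(cand.set = list(null = m1, treat = m2))
## aictab(cand.set = list(null = m1, treat = m2), c.hat = 2.3) #QAICc table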
## End of source file: AICcmodavg/R/aictab.R
##approximate F-test in presence of overdispersion
##generic
anovaOD <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...){
UseMethod("anovaOD", mod.simple)
}
anovaOD.default <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...){
stop("\nFunction not yet defined for this object class\n")
}
##glm
anovaOD.glm <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily1 <- family(mod.simple)$family
modFamily2 <- family(mod.complex)$family
if(!identical(modFamily1, modFamily2)) stop("\nComparisons only appropriate for models using the same distribution\n")
if(!identical(modFamily1, "poisson") && !identical(modFamily1, "binomial")) {
if(c.hat > 1) stop("\nDistribution not appropriate for overdispersion correction\n\n")
}
##for binomial, check that number of trials > 1
if(identical(modFamily1, "binomial")) {
if(!any(mod.simple$prior.weights > 1)) stop("\nOverdispersion correction only appropriate for success/trials syntax\n\n")
}
##response variable
y1 <- mod.simple$y
y2 <- mod.complex$y
##number of observations
if(is.null(nobs)) {
nobs <- length(y1)
}
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##extract log-likelihood
LL1 <- logLik(mod.simple)
LL2 <- logLik(mod.complex)
LL.simple <- LL1[1]
LL.complex <- LL2[1]
##extract number of estimated parameters
K.simple <- attr(LL1, "df")
K.complex <- attr(LL2, "df")
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
simpleForm <- formula(mod.simple)
form.simple <- paste(simpleForm[2], "~", simpleForm[3])
complexForm <- formula(mod.complex)
form.complex <- paste(complexForm[2], "~", complexForm[3])
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
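##Illustrative usage sketch (not run; 'dat', 'success', 'trials', and 'cover'
##are hypothetical): compares a simple model nested within a complex one, using
##a likelihood-ratio chi-square when c.hat = 1 and a quasi-likelihood F-test
##when c.hat > 1.
## simple <- glm(cbind(success, trials - success) ~ 1, family = binomial, data = dat)
## complex <- glm(cbind(success, trials - success) ~ cover, family = binomial, data = dat)
## anovaOD(mod.simple = simple, mod.complex = complex, c.hat = 2.1)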
##vglm
anovaOD.vglm <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check family of vglm to avoid problems
fam.type1 <- mod.simple@family@vfamily[1]
fam.type2 <- mod.complex@family@vfamily[1]
if(!identical(fam.type1, fam.type2)) stop("\nComparisons only appropriate for models using the same distribution\n")
if(!(fam.type1 == "poissonff" || fam.type1 == "binomialff" || fam.type1 == "multinomial")) stop("\nDistribution not supported by function\n")
if(fam.type1 == "binomialff") {
if(!any(mod.simple@prior.weights > 1)) stop("\nOverdispersion correction only appropriate for success/trials syntax\n\n")
}
##response variable
y1 <- mod.simple@y
y2 <- mod.complex@y
##number of observations
if(is.null(nobs)) {
nobs <- length(y1)
}
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##extract log-likelihood
LL1 <- extractLL(mod.simple)
LL2 <- extractLL(mod.complex)
LL.simple <- LL1[1]
LL.complex <- LL2[1]
##extract number of estimated parameters
K.simple <- attr(LL1, "df")
K.complex <- attr(LL2, "df")
##extract model formula
simpleForm <- formula(mod.simple)
form.simple <- paste(simpleForm[2], "~", simpleForm[3])
complexForm <- formula(mod.complex)
form.complex <- paste(complexForm[2], "~", complexForm[3])
##residual df of complex model
df.complex <- nobs - K.complex
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##multinom
anovaOD.multinom <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##number of observations
nobs1 <- nrow(mod.simple$fitted.values)
nobs2 <- nrow(mod.complex$fitted.values)
##check that sample size is the same
if(!identical(nobs1, nobs2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nobs1
}
##extract log-likelihood
LL1 <- logLik(mod.simple)
LL2 <- logLik(mod.complex)
LL.simple <- LL1[1]
LL.complex <- LL2[1]
##extract number of estimated parameters
K.simple <- attr(LL1, "df")
K.complex <- attr(LL2, "df")
##extract model formula
simpleForm <- formula(mod.simple)
form.simple <- paste(simpleForm[2], "~", simpleForm[3])
complexForm <- formula(mod.complex)
form.complex <- paste(complexForm[2], "~", complexForm[3])
##residual df of complex model
df.complex <- nobs - K.complex
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
###############################
##residual DF for mixed models is difficult
##glmerMod
anovaOD.glmerMod <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily1 <- family(mod.simple)$family
modFamily2 <- family(mod.complex)$family
if(!identical(modFamily1, modFamily2)) stop("\nComparisons only appropriate for models using the same distribution\n")
if(!identical(modFamily1, "poisson") && !identical(modFamily1, "binomial")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n\n")
}
    ##extract response
y1 <- mod.simple@resp$y
y2 <- mod.complex@resp$y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- length(y1)
}
##for binomial, check that number of trials > 1
if(identical(modFamily1, "binomial")) {
if(!any(mod.simple@resp$weights > 1)) stop("\nOverdispersion correction only appropriate for success/trials syntax\n\n")
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
    K.simple <- attr(LL.simple, "df")
    K.complex <- attr(LL.complex, "df")
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
simpleForm <- formula(mod.simple)
form.simple <- paste(simpleForm[2], "~", simpleForm[3])
complexForm <- formula(mod.complex)
form.complex <- paste(complexForm[2], "~", complexForm[3])
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##glmmTMB
anovaOD.glmmTMB <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
    ##check for distributions
modFamily1 <- family(mod.simple)$family
modFamily2 <- family(mod.complex)$family
if(!identical(modFamily1, modFamily2)) stop("\nComparisons only appropriate for models using the same distribution\n")
if(!identical(modFamily1, "poisson") && !identical(modFamily1, "binomial")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n\n")
}
##extract response
    y1Name <- mod.simple$modelInfo$respCol
    y2Name <- mod.complex$modelInfo$respCol
    y1 <- mod.simple$frame[, y1Name, drop = FALSE]
    y2 <- mod.complex$frame[, y2Name, drop = FALSE]
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##if binomial, check if n > 1 for each case
if(modFamily1 == "binomial") {
resp <- mod.simple$frame[, mod.simple$modelInfo$respCol]
if(!is.matrix(resp)) {
if(!any(names(mod.simple$frame) == "(weights)")) {
stop("\nOverdispersion correction only appropriate for success/trials syntax\n\n")
}
}
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
    K.simple <- attr(LL.simple, "df")
    K.complex <- attr(LL.complex, "df")
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
simpleForm <- formula(mod.simple)
form.simple <- paste(simpleForm[2], "~", simpleForm[3])
complexForm <- formula(mod.complex)
form.complex <- paste(complexForm[2], "~", complexForm[3])
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##occu
anovaOD.unmarkedFitOccu <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simplePsi <- formulaShort(mod.simple, unmarked.type = "state")
form.simplePsi2 <- paste("psi(", form.simplePsi, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simplePsi2, form.simpleDet2, sep = "")
##complex model
form.complexPsi <- formulaShort(mod.complex, unmarked.type = "state")
form.complexPsi2 <- paste("psi(", form.complexPsi, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexPsi2, form.complexDet2, sep = "")
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##colext
anovaOD.unmarkedFitColExt <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simplePsi <- formulaShort(mod.simple, unmarked.type = "psi")
form.simplePsi2 <- paste("psi(", form.simplePsi, ")", sep = "")
form.simpleGam <- formulaShort(mod.simple, unmarked.type = "col")
form.simpleGam2 <- paste("gam(", form.simpleGam, ")", sep = "")
form.simpleEps <- formulaShort(mod.simple, unmarked.type = "ext")
form.simpleEps2 <- paste("eps(", form.simpleEps, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simplePsi2, form.simpleGam2,
form.simpleEps2, form.simpleDet2, sep = "")
##complex model
form.complexPsi <- formulaShort(mod.complex, unmarked.type = "psi")
form.complexPsi2 <- paste("psi(", form.complexPsi, ")", sep = "")
form.complexGam <- formulaShort(mod.complex, unmarked.type = "col")
form.complexGam2 <- paste("gam(", form.complexGam, ")", sep = "")
form.complexEps <- formulaShort(mod.complex, unmarked.type = "ext")
form.complexEps2 <- paste("eps(", form.complexEps, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexPsi2, form.complexGam2,
form.complexEps2, form.complexDet2, sep = "")
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##occuRN
anovaOD.unmarkedFitOccuRN <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simpleLam <- formulaShort(mod.simple, unmarked.type = "state")
form.simpleLam2 <- paste("lam(", form.simpleLam, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simpleLam2, form.simpleDet2, sep = "")
##complex model
form.complexLam <- formulaShort(mod.complex, unmarked.type = "state")
form.complexLam2 <- paste("lam(", form.complexLam, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexLam2, form.complexDet2, sep = "")
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##occuFP
anovaOD.unmarkedFitOccuFP <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
##determine if certain detections (b) occur
form.simplePsi <- formulaShort(mod.simple, unmarked.type = "state")
form.simplePsi2 <- paste("psi(", form.simplePsi, ")", sep = "")
form.simpleFp <- formulaShort(mod.simple, unmarked.type = "fp")
form.simpleFp2 <- paste("fp(", form.simpleFp, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
if(exists("b", mod.simple@estimates@estimates)){
form.simpleB <- formulaShort(mod.simple, unmarked.type = "b")
form.simpleB2 <- paste("b(", form.simpleB, ")", sep = "")
form.simple <- paste(form.simplePsi2, form.simpleFp2,
form.simpleB2, form.simpleDet2, sep = "")
} else {
form.simple <- paste(form.simplePsi2, form.simpleFp2,
form.simpleDet2, sep = "")
}
##complex model
##determine if certain detections (b) occur
form.complexPsi <- formulaShort(mod.complex, unmarked.type = "state")
form.complexPsi2 <- paste("psi(", form.complexPsi, ")", sep = "")
form.complexFp <- formulaShort(mod.complex, unmarked.type = "fp")
form.complexFp2 <- paste("fp(", form.complexFp, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
if(exists("b", mod.complex@estimates@estimates)){
form.complexB <- formulaShort(mod.complex, unmarked.type = "b")
form.complexB2 <- paste("b(", form.complexB, ")", sep = "")
form.complex <- paste(form.complexPsi2, form.complexFp2,
form.complexB2, form.complexDet2, sep = "")
} else {
form.complex <- paste(form.complexPsi2, form.complexFp2,
form.complexDet2, sep = "")
}
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##pcount
anovaOD.unmarkedFitPCount <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily1 <- mod.simple@mixture
modFamily2 <- mod.complex@mixture
if(!identical(modFamily1, modFamily2)) stop("\nComparisons only appropriate for models using the same mixture distribution\n")
if(!identical(modFamily1, "P") && !identical(modFamily1, "ZIP")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n\n")
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simpleLam <- formulaShort(mod.simple, unmarked.type = "state")
form.simpleLam2 <- paste("lam(", form.simpleLam, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simpleLam2, form.simpleDet2, sep = "")
##complex model
form.complexLam <- formulaShort(mod.complex, unmarked.type = "state")
form.complexLam2 <- paste("lam(", form.complexLam, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexLam2, form.complexDet2, sep = "")
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##pcountOpen
anovaOD.unmarkedFitPCO <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily1 <- mod.simple@mixture
modFamily2 <- mod.complex@mixture
if(!identical(modFamily1, modFamily2)) stop("\nComparisons only appropriate for models using the same mixture distribution\n")
if(!identical(modFamily1, "P") && !identical(modFamily1, "ZIP")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n")
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simpleLam <- formulaShort(mod.simple, unmarked.type = "lambda")
form.simpleLam2 <- paste("lam(", form.simpleLam, ")", sep = "")
form.simpleGam <- formulaShort(mod.simple, unmarked.type = "gamma")
form.simpleGam2 <- paste("gam(", form.simpleGam, ")", sep = "")
form.simpleOmega <- formulaShort(mod.simple, unmarked.type = "omega")
form.simpleOmega2 <- paste("omega(", form.simpleOmega, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
if(exists("iota", mod.simple@estimates@estimates)){
form.simpleIota <- formulaShort(mod.simple, unmarked.type = "iota")
form.simpleIota2 <- paste("iota(", form.simpleIota, ")", sep = "")
form.simple <- paste(form.simpleLam2, form.simpleGam2,
form.simpleOmega2, form.simpleIota2,
form.simpleDet2, sep = "")
} else {
form.simple <- paste(form.simpleLam2, form.simpleGam2,
form.simpleOmega2,
form.simpleDet2, sep = "")
}
##complex model
form.complexLam <- formulaShort(mod.complex, unmarked.type = "lambda")
form.complexLam2 <- paste("lam(", form.complexLam, ")", sep = "")
form.complexGam <- formulaShort(mod.complex, unmarked.type = "gamma")
form.complexGam2 <- paste("gam(", form.complexGam, ")", sep = "")
form.complexOmega <- formulaShort(mod.complex, unmarked.type = "omega")
form.complexOmega2 <- paste("omega(", form.complexOmega, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
if(exists("iota", mod.complex@estimates@estimates)){
form.complexIota <- formulaShort(mod.complex, unmarked.type = "iota")
form.complexIota2 <- paste("iota(", form.complexIota, ")", sep = "")
form.complex <- paste(form.complexLam2, form.complexGam2,
form.complexOmega2, form.complexIota2,
form.complexDet2, sep = "")
} else {
form.complex <- paste(form.complexLam2, form.complexGam2,
form.complexOmega2,
form.complexDet2, sep = "")
}
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##distsamp
anovaOD.unmarkedFitDS <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simpleLam <- formulaShort(mod.simple, unmarked.type = "state")
form.simpleLam2 <- paste("lam(", form.simpleLam, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
key.simple <- mod.simple@keyfun
form.simple <- paste(form.simpleLam2, form.simpleDet2, " (", key.simple, ")", sep = "")
##complex model
form.complexLam <- formulaShort(mod.complex, unmarked.type = "state")
form.complexLam2 <- paste("lam(", form.complexLam, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
key.complex <- mod.complex@keyfun
form.complex <- paste(form.complexLam2, form.complexDet2, " (", key.complex, ")", sep = "")
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##gdistsamp
anovaOD.unmarkedFitGDS <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily1 <- mod.simple@mixture
modFamily2 <- mod.complex@mixture
if(!identical(modFamily1, modFamily2)) stop("\nComparisons only appropriate for models using the same mixture distribution\n")
if(!identical(modFamily1, "P") && !identical(modFamily1, "ZIP")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n")
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simpleLam <- formulaShort(mod.simple, unmarked.type = "lambda")
form.simpleLam2 <- paste("lam(", form.simpleLam, ")", sep = "")
form.simplePhi <- formulaShort(mod.simple, unmarked.type = "phi")
form.simplePhi2 <- paste("phi(", form.simplePhi, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
key.simple <- mod.simple@keyfun
form.simple <- paste(form.simpleLam2, form.simplePhi2,
form.simpleDet2, " (", key.simple, ")",
sep = "")
##complex model
form.complexLam <- formulaShort(mod.complex, unmarked.type = "lambda")
form.complexLam2 <- paste("lam(", form.complexLam, ")", sep = "")
form.complexPhi <- formulaShort(mod.complex, unmarked.type = "phi")
form.complexPhi2 <- paste("phi(", form.complexPhi, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
key.complex <- mod.complex@keyfun
form.complex <- paste(form.complexLam2, form.complexPhi2,
form.complexDet2, " (", key.complex, ")",
sep = "")
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##multinomPois
anovaOD.unmarkedFitMPois <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simpleLam <- formulaShort(mod.simple, unmarked.type = "state")
form.simpleLam2 <- paste("lam(", form.simpleLam, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simpleLam2, form.simpleDet2, sep = "")
##complex model
form.complexLam <- formulaShort(mod.complex, unmarked.type = "state")
form.complexLam2 <- paste("lam(", form.complexLam, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexLam2, form.complexDet2, sep = "")
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
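## Hedged usage sketch for the multinomPois method above (kept entirely in
## comments so nothing executes at package load; 'mpoisData' and the
## covariate 'cover' are hypothetical placeholders, not objects shipped
## with this package):
## fm.null <- unmarked::multinomPois(~ 1 ~ 1, data = mpoisData)
## fm.cov <- unmarked::multinomPois(~ 1 ~ cover, data = mpoisData)
## anovaOD(fm.null, fm.cov, c.hat = 1) #chi-square likelihood-ratio test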
##gmultmix
anovaOD.unmarkedFitGMM <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily1 <- mod.simple@mixture
modFamily2 <- mod.complex@mixture
if(!identical(modFamily1, modFamily2)) stop("\nComparisons only appropriate for models using the same mixture distribution\n")
if(!identical(modFamily1, "P") && !identical(modFamily1, "ZIP")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n")
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simpleLam <- formulaShort(mod.simple, unmarked.type = "lambda")
form.simpleLam2 <- paste("lam(", form.simpleLam, ")", sep = "")
form.simplePhi <- formulaShort(mod.simple, unmarked.type = "phi")
form.simplePhi2 <- paste("phi(", form.simplePhi, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simpleLam2, form.simplePhi2,
form.simpleDet2, sep = "")
##complex model
form.complexLam <- formulaShort(mod.complex, unmarked.type = "lambda")
form.complexLam2 <- paste("lam(", form.complexLam, ")", sep = "")
form.complexPhi <- formulaShort(mod.complex, unmarked.type = "phi")
form.complexPhi2 <- paste("phi(", form.complexPhi, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexLam2, form.complexPhi2,
form.complexDet2, sep = "")
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##gpcount
anovaOD.unmarkedFitGPC <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily1 <- mod.simple@mixture
modFamily2 <- mod.complex@mixture
if(!identical(modFamily1, modFamily2)) stop("\nComparisons only appropriate for models using the same mixture distribution\n")
if(!identical(modFamily1, "P") && !identical(modFamily1, "ZIP")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n")
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simpleLam <- formulaShort(mod.simple, unmarked.type = "lambda")
form.simpleLam2 <- paste("lam(", form.simpleLam, ")", sep = "")
form.simplePhi <- formulaShort(mod.simple, unmarked.type = "phi")
form.simplePhi2 <- paste("phi(", form.simplePhi, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simpleLam2, form.simplePhi2,
form.simpleDet2, sep = "")
##complex model
form.complexLam <- formulaShort(mod.complex, unmarked.type = "lambda")
form.complexLam2 <- paste("lam(", form.complexLam, ")", sep = "")
form.complexPhi <- formulaShort(mod.complex, unmarked.type = "phi")
form.complexPhi2 <- paste("phi(", form.complexPhi, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexLam2, form.complexPhi2,
form.complexDet2, sep = "")
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##occuMulti
anovaOD.unmarkedFitOccuMulti <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##number of species
nspecies <- length(mod.simple@data@ylist)
genericNames <- paste("sp", 1:nspecies, sep = "")
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
##extract labels of fDesign
simple.fDesign <- mod.simple@data@fDesign
simple.colNames <- substr(x = colnames(simple.fDesign),
start = 1, stop = 2)
##extract state formulas
simpleState <- mod.simple@stateformulas
##replace ~ 1 by "."
simplerState <- gsub(pattern = "~1", replacement = ".", x = simpleState)
form.simplePsi <- paste(simple.colNames, "(", simplerState, ")", sep = "")
form.simplePsi2 <- paste(form.simplePsi, collapse = "")
form.simplePsi3 <- paste("psi[", form.simplePsi2, "]", collapse = "")
##extract detection formulas
simpleDet <- mod.simple@detformulas
##replace ~ 1 by "."
simplerDet <- gsub(pattern = "~1", replacement = ".", x = simpleDet)
form.simpleDet <- paste(genericNames, "(", simplerDet, ")", sep = "")
form.simpleDet2 <- paste(form.simpleDet, collapse = "")
form.simpleDet3 <- paste("p[", form.simpleDet2, "]", collapse = "")
form.simple <- paste(form.simplePsi3, form.simpleDet3, sep = "")
##complex model
##extract labels of fDesign
complex.fDesign <- mod.complex@data@fDesign
complex.colNames <- substr(x = colnames(complex.fDesign),
start = 1, stop = 2)
##extract state formulas
complexState <- mod.complex@stateformulas
##replace ~ 1 by "."
complexrState <- gsub(pattern = "~1", replacement = ".", x = complexState)
form.complexPsi <- paste(complex.colNames, "(", complexrState, ")", sep = "")
form.complexPsi2 <- paste(form.complexPsi, collapse = "")
form.complexPsi3 <- paste("psi[", form.complexPsi2, "]", collapse = "")
##extract detection formulas
complexDet <- mod.complex@detformulas
##replace ~ 1 by "."
complexrDet <- gsub(pattern = "~1", replacement = ".", x = complexDet)
form.complexDet <- paste(genericNames, "(", complexrDet, ")", sep = "")
form.complexDet2 <- paste(form.complexDet, collapse = "")
form.complexDet3 <- paste("p[", form.complexDet2, "]", collapse = "")
form.complex <- paste(form.complexPsi3, form.complexDet3, sep = "")
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
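## Hedged usage sketch for the occuMulti method above (comments only so
## nothing runs at load; 'multiData' and 'habitat' are hypothetical
## placeholders):
## fm1 <- unmarked::occuMulti(detformulas = c("~1", "~1"),
##                            stateformulas = c("~1", "~1", "~1"),
##                            data = multiData)
## fm2 <- unmarked::occuMulti(detformulas = c("~1", "~1"),
##                            stateformulas = c("~habitat", "~habitat", "~1"),
##                            data = multiData)
## anovaOD(fm1, fm2)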
##occuMS
anovaOD.unmarkedFitOccuMS <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check for each model single season vs dynamic
nseason.simple <- mod.simple@data@numPrimary
nseason.complex <- mod.complex@data@numPrimary
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
if(nseason.simple == 1) {
form.simplePsi <- formulaShort(mod.simple, unmarked.type = "state")
form.simplePsi2 <- paste("psi(", form.simplePsi, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simplePsi2, form.simpleDet2, sep = "")
} else {
form.simplePsi <- formulaShort(mod.simple, unmarked.type = "state")
form.simplePsi2 <- paste("psi(", form.simplePsi, ")", sep = "")
form.simplePhi <- formulaShort(mod.simple, unmarked.type = "transition")
form.simplePhi2 <- paste("phi(", form.simplePhi, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simplePsi2, form.simplePhi2, form.simpleDet2, sep = "")
}
##complex model
if(nseason.complex == 1) {
form.complexPsi <- formulaShort(mod.complex, unmarked.type = "state")
form.complexPsi2 <- paste("psi(", form.complexPsi, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexPsi2, form.complexDet2, sep = "")
} else {
form.complexPsi <- formulaShort(mod.complex, unmarked.type = "state")
form.complexPsi2 <- paste("psi(", form.complexPsi, ")", sep = "")
form.complexPhi <- formulaShort(mod.complex, unmarked.type = "transition")
form.complexPhi2 <- paste("phi(", form.complexPhi, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexPsi2, form.complexPhi2, form.complexDet2, sep = "")
}
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##occuTTD
anovaOD.unmarkedFitOccuTTD <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check for each model single season vs dynamic
nseason.simple <- mod.simple@data@numPrimary
nseason.complex <- mod.complex@data@numPrimary
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
if(nseason.simple == 1) {
form.simplePsi <- formulaShort(mod.simple, unmarked.type = "psi")
form.simplePsi2 <- paste("psi(", form.simplePsi, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simplePsi2, form.simpleDet2, sep = "")
} else {
form.simplePsi <- formulaShort(mod.simple, unmarked.type = "psi")
form.simplePsi2 <- paste("psi(", form.simplePsi, ")", sep = "")
form.simpleGam <- formulaShort(mod.simple, unmarked.type = "col")
form.simpleGam2 <- paste("gam(", form.simpleGam, ")", sep = "")
form.simpleEps <- formulaShort(mod.simple, unmarked.type = "ext")
form.simpleEps2 <- paste("eps(", form.simpleEps, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
form.simple <- paste(form.simplePsi2, form.simpleGam2,
form.simpleEps2, form.simpleDet2, sep = "")
}
##complex model
if(nseason.complex == 1) {
form.complexPsi <- formulaShort(mod.complex, unmarked.type = "psi")
form.complexPsi2 <- paste("psi(", form.complexPsi, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexPsi2, form.complexDet2, sep = "")
} else {
form.complexPsi <- formulaShort(mod.complex, unmarked.type = "psi")
form.complexPsi2 <- paste("psi(", form.complexPsi, ")", sep = "")
form.complexGam <- formulaShort(mod.complex, unmarked.type = "col")
form.complexGam2 <- paste("gam(", form.complexGam, ")", sep = "")
form.complexEps <- formulaShort(mod.complex, unmarked.type = "ext")
form.complexEps2 <- paste("eps(", form.complexEps, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
form.complex <- paste(form.complexPsi2, form.complexGam2,
form.complexEps2, form.complexDet2, sep = "")
}
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##multmixOpen
anovaOD.unmarkedFitMMO <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily1 <- mod.simple@mixture
modFamily2 <- mod.complex@mixture
if(!identical(modFamily1, modFamily2)) stop("\nComparisons only appropriate for models using the same mixture distribution\n")
if(!identical(modFamily1, "P") && !identical(modFamily1, "ZIP")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n")
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simpleLam <- formulaShort(mod.simple, unmarked.type = "lambda")
form.simpleLam2 <- paste("lam(", form.simpleLam, ")", sep = "")
form.simpleGam <- formulaShort(mod.simple, unmarked.type = "gamma")
form.simpleGam2 <- paste("gam(", form.simpleGam, ")", sep = "")
form.simpleOmega <- formulaShort(mod.simple, unmarked.type = "omega")
form.simpleOmega2 <- paste("omega(", form.simpleOmega, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
if(exists("iota", mod.simple@estimates@estimates)){
form.simpleIota <- formulaShort(mod.simple, unmarked.type = "iota")
form.simpleIota2 <- paste("iota(", form.simpleIota, ")", sep = "")
form.simple <- paste(form.simpleLam2, form.simpleGam2,
form.simpleOmega2, form.simpleIota2,
form.simpleDet2, sep = "")
} else {
form.simple <- paste(form.simpleLam2, form.simpleGam2,
form.simpleOmega2,
form.simpleDet2, sep = "")
}
##complex model
form.complexLam <- formulaShort(mod.complex, unmarked.type = "lambda")
form.complexLam2 <- paste("lam(", form.complexLam, ")", sep = "")
form.complexGam <- formulaShort(mod.complex, unmarked.type = "gamma")
form.complexGam2 <- paste("gam(", form.complexGam, ")", sep = "")
form.complexOmega <- formulaShort(mod.complex, unmarked.type = "omega")
form.complexOmega2 <- paste("omega(", form.complexOmega, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
if(exists("iota", mod.complex@estimates@estimates)){
form.complexIota <- formulaShort(mod.complex, unmarked.type = "iota")
form.complexIota2 <- paste("iota(", form.complexIota, ")", sep = "")
form.complex <- paste(form.complexLam2, form.complexGam2,
form.complexOmega2, form.complexIota2,
form.complexDet2, sep = "")
} else {
form.complex <- paste(form.complexLam2, form.complexGam2,
form.complexOmega2,
form.complexDet2, sep = "")
}
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##distsampOpen
anovaOD.unmarkedFitDSO <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...) {
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily1 <- mod.simple@mixture
modFamily2 <- mod.complex@mixture
if(!identical(modFamily1, modFamily2)) stop("\nComparisons only appropriate for models using the same mixture distribution\n")
if(!identical(modFamily1, "P") && !identical(modFamily1, "ZIP")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n")
}
##extract response
y1 <- mod.simple@data@y
y2 <- mod.complex@data@y
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##extract LL
LL.simple <- logLik(mod.simple)
LL.complex <- logLik(mod.complex)
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
##simple model
form.simpleLam <- formulaShort(mod.simple, unmarked.type = "lambda")
form.simpleLam2 <- paste("lam(", form.simpleLam, ")", sep = "")
form.simpleGam <- formulaShort(mod.simple, unmarked.type = "gamma")
form.simpleGam2 <- paste("gam(", form.simpleGam, ")", sep = "")
form.simpleOmega <- formulaShort(mod.simple, unmarked.type = "omega")
form.simpleOmega2 <- paste("omega(", form.simpleOmega, ")", sep = "")
form.simpleDet <- formulaShort(mod.simple, unmarked.type = "det")
form.simpleDet2 <- paste("p(", form.simpleDet, ")", sep = "")
if(exists("iota", mod.simple@estimates@estimates)){
form.simpleIota <- formulaShort(mod.simple, unmarked.type = "iota")
form.simpleIota2 <- paste("iota(", form.simpleIota, ")", sep = "")
form.simple <- paste(form.simpleLam2, form.simpleGam2,
form.simpleOmega2, form.simpleIota2,
form.simpleDet2, sep = "")
} else {
form.simple <- paste(form.simpleLam2, form.simpleGam2,
form.simpleOmega2,
form.simpleDet2, sep = "")
}
##complex model
form.complexLam <- formulaShort(mod.complex, unmarked.type = "lambda")
form.complexLam2 <- paste("lam(", form.complexLam, ")", sep = "")
form.complexGam <- formulaShort(mod.complex, unmarked.type = "gamma")
form.complexGam2 <- paste("gam(", form.complexGam, ")", sep = "")
form.complexOmega <- formulaShort(mod.complex, unmarked.type = "omega")
form.complexOmega2 <- paste("omega(", form.complexOmega, ")", sep = "")
form.complexDet <- formulaShort(mod.complex, unmarked.type = "det")
form.complexDet2 <- paste("p(", form.complexDet, ")", sep = "")
if(exists("iota", mod.complex@estimates@estimates)){
form.complexIota <- formulaShort(mod.complex, unmarked.type = "iota")
form.complexIota2 <- paste("iota(", form.complexIota, ")", sep = "")
form.complex <- paste(form.complexLam2, form.complexGam2,
form.complexOmega2, form.complexIota2,
form.complexDet2, sep = "")
} else {
form.complex <- paste(form.complexLam2, form.complexGam2,
form.complexOmega2,
form.complexDet2, sep = "")
}
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##maxlike
anovaOD.maxlikeFit <- function(mod.simple, mod.complex, c.hat = 1, nobs = NULL, ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##response variable
y1 <- mod.simple$points.retained
y2 <- mod.complex$points.retained
##number of observations
if(is.null(nobs)) {
nobs <- nrow(y1)
}
##check that data are the same
if(!identical(y1, y2)) stop("\nData set should be identical to compare models\n")
##extract log-likelihood
LL1 <- logLik(mod.simple)
LL2 <- logLik(mod.complex)
LL.simple <- LL1[1]
LL.complex <- LL2[1]
##extract number of estimated parameters
K.simple <- length(coef(mod.simple))
K.complex <- length(coef(mod.complex))
##residual df of complex model
df.complex <- nobs - K.complex
##extract model formula
simpleForm <- formula(mod.simple)
form.simple <- paste("y ~", simpleForm[2])
complexForm <- formula(mod.complex)
form.complex <- paste("y ~", complexForm[2])
##- 2 * (logLik(simple) - logLik(complex))
LR <- -2 * (LL.simple - LL.complex)
##difference in number of estimated parameters
K.diff <- K.complex - K.simple
##use chi-square if no overdispersion
if(c.hat == 1) {
Chistat <- LR
df <- K.diff
pval <- 1 - pchisq(Chistat, df = df)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Chistat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "Chistat", "pval")
} else {
##compute F statistic
Fstat <- LR/((K.diff)*c.hat)
##compute df
df.num <- K.diff
df.denom <- df.complex
##P value
pval <- 1 - pf(Fstat, df1 = df.num, df2 = df.denom)
devMat <- matrix(data = c(K.simple, K.complex,
LL.simple, LL.complex,
NA, K.diff,
NA, LR,
NA, Fstat,
NA, pval),
nrow = 2, ncol = 6)
colnames(devMat) <- c("K", "logLik",
"Kdiff", "-2LL", "F", "pval")
}
##assemble in list
outList <- list(form.simple = form.simple,
form.complex = form.complex,
c.hat = c.hat,
devMat = devMat)
class(outList) <- c("anovaOD", "list")
return(outList)
}
##the residual df of the most complex model is required to compute the F statistic
##this value is difficult to obtain for models of unmarked classes and GLMMs,
##so the user may specify an effective sample size through 'nobs' (as for AICc)
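## A hedged sketch of the overdispersion-corrected comparison with a
## user-specified effective sample size (comments only; 'fm.null' and
## 'fm.cov' are hypothetical fitted models):
## anovaOD(fm.null, fm.cov, c.hat = 1.8, nobs = 50) #F test with user-set nobs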
print.anovaOD <- function(x, digits = 4, ...) {
##extract information
simple.text <- x$form.simple
complex.text <- x$form.complex
##if text too long, truncate name
if(nchar(simple.text) > 70) {
simple.text <- paste(substr(x = simple.text,
start = 1,
stop = 70),
" ...", sep = "")
}
##if text too long, truncate name
if(nchar(complex.text) > 70) {
complex.text <- paste(substr(x = complex.text,
start = 1,
stop = 70),
" ...", sep = "")
}
c.hat <- x$c.hat
outMat <- x$devMat
rownames(outMat) <- c("1", "2")
##check value of P and set to < 0.0001
#pval <- ifelse(outMat[, "pval"] < 0.0001, "< 0.0001", outMat[, "pval"])
##replace NA's with spaces
#outMat[is.na(outMat)] <- " "
if(c.hat == 1) {
##add names
colnames(outMat) <- c("K", "logLik",
"Kdiff", "-2logLik", "Chisq", "Pr(>Chisq)")
cat("\nAnalysis of deviance table\n",
sep = "")
} else {
##add names
colnames(outMat) <- c("K", "logLik",
"Kdiff", "-2logLik", "F", "Pr(>F)")
cat("\nAnalysis of deviance table corrected for overdispersion\n",
sep = "")
}
cat("\nSimple model (1): ", simple.text,
"\nComplex model (2): ", complex.text, "\n\n")
printCoefmat(outMat, digits = digits, signif.stars = FALSE, cs.ind = 1,
na.print = "")
if(c.hat > 1) {
cat("\n(c-hat = ", c.hat, ")", "\n", sep = "")
}
cat("\n")
}
## End of file: AICcmodavg/R/anovaOD.R
##create generic bictab
bictab <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...) {
##format list according to model class
cand.set <- formatCands(cand.set)
UseMethod("bictab", cand.set)
}
##default to indicate when object class not supported
bictab.default <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...) {
stop("\nFunction not yet defined for this object class\n")
}
##aov
bictab.AICaov.lm <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
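## Usage sketch for the aov method (not run; uses base R's iris data):
if (FALSE) {
    fits <- list(species  = aov(Sepal.Length ~ Species, data = iris),
                 additive = aov(Sepal.Length ~ Species + Sepal.Width, data = iris))
    bictab(cand.set = fits)  # named list, so no automatic-modnames warning
}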
##betareg
bictab.AICbetareg <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##clm
bictab.AICsclm.clm <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##clm
bictab.AICclm <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##clmm
bictab.AICclmm <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##coxme
bictab.AICcoxme <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)$fixed[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
##arrange in table
Results <- data.frame(Modnames=modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add partial log-likelihood column
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) extractLL(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##coxph and clogit
bictab.AICcoxph <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
##arrange in table
Results <- data.frame(Modnames=modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add partial log-likelihood column
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##fitdist (from fitdistrplus)
bictab.AICfitdist <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
#check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
#if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##fitdistr (from MASS)
bictab.AICfitdistr <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
#check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
#if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##glm
bictab.AICglm.lm <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add log-likelihood column when c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X=cand.set, FUN = function(i) logLik(i)[1]))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
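## Usage sketch for the glm method (not run): with c.hat > 1 the columns are
## renamed to QBIC/QBICWt; the c.hat value below is illustrative, not
## estimated from the data.
if (FALSE) {
    g0 <- glm(breaks ~ 1, family = poisson, data = warpbreaks)
    g1 <- glm(breaks ~ wool + tension, family = poisson, data = warpbreaks)
    bictab(cand.set = list(null = g0, full = g1), c.hat = 2.5)
}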
##glmmTMB
bictab.AICglmmTMB <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs,
c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs,
c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add log-likelihood column when c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X=cand.set, FUN = function(i) logLik(i)[1]))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##gls
bictab.AICgls <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
##check if models were fit with same method (REML or ML)
check_ML <- unlist(lapply(cand.set, FUN = function(i) i$method))
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with method='ML':", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(identical(check.method, c("ML", "REML"))) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##gnls
bictab.AICgnls.gls <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames=modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##hurdle
bictab.AIChurdle <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#if(c.hat != 1) stop("\nThis function does not support overdispersion in \'zeroinfl\' models\n")
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##lavaan
bictab.AIClavaan <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether observed variables are the same for all models
check.obs <- unlist(lapply(X = cand.set, FUN = function(b) b@Data@ov.names[[1]]))
##frequency of each observed variable
freq.obs <- table(check.obs)
if(length(unique(freq.obs)) > 1) stop("\nModels with different sets of observed variables are not directly comparable\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
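## Usage sketch for the lavaan method (not run): both models below use the
## same six observed indicators, so they pass the observed-variable check.
if (FALSE) {
    library(lavaan)
    m1 <- 'visual  =~ x1 + x2 + x3
           textual =~ x4 + x5 + x6'
    m2 <- 'g =~ x1 + x2 + x3 + x4 + x5 + x6'
    fit1 <- cfa(m1, data = HolzingerSwineford1939)
    fit2 <- cfa(m2, data = HolzingerSwineford1939)
    bictab(cand.set = list(two.factor = fit1, one.factor = fit2))
}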
##lm
bictab.AIClm <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC #
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##lme
bictab.AIClme <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
#check if models were fit with same method (REML or ML)
check_ML <- unlist(lapply(cand.set, FUN = function(i) i$method))
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with method='ML':", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(identical(check.method, c("ML", "REML"))) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC-min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
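## Usage sketch for the lme method (not run): fitting with method = "ML"
## avoids the REML warning when comparing fixed effects; the Orthodont data
## ship with nlme.
if (FALSE) {
    library(nlme)
    f1 <- lme(distance ~ age, random = ~ 1 | Subject,
              data = Orthodont, method = "ML")
    f2 <- lme(distance ~ age + Sex, random = ~ 1 | Subject,
              data = Orthodont, method = "ML")
    bictab(cand.set = list(age = f1, age.sex = f2))
}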
##lmekin
bictab.AIClmekin <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
#check if models were fit with same method (REML or ML)
check_ML <- unlist(lapply(cand.set, FUN = function(i) i$method))
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with method='ML':", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(identical(check.method, c("ML", "REML"))) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) extractLL(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)[1]))
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##maxlike
bictab.AICmaxlikeFit.list <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
if(c.hat != 1) stop("\nThis function does not support overdispersion in \'maxlikeFit\' models\n")
##add check to see whether response variable is the same for all models
#check.resp <- lapply(X = cand.set, FUN = function(b) nrow(b$points.retained))
#if(length(unique(check.resp)) > 1) stop("\nYou must use the same data set for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##mer - lme4 version < 1
bictab.AICmer <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if (is.null(modnames)) {
if (is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
#check if models were fit with same method (REML or ML)
check_bin <- unlist(lapply(cand.set, FUN = function(i) i@dims["REML"]))
check_ML <- ifelse(check_bin == 1, "REML", "ML")
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with ML estimation:", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(length(check.method) > 1) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN=function(i) logLik(i)[1]))
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##lmerMod
bictab.AIClmerMod <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for subclass of object
sub.class <- lapply(X = cand.set, FUN = class)
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
##check if models were fit with same method (REML or ML)
check_REML <- unlist(lapply(cand.set, FUN = function(i) isREML(i)))
check_ML <- ifelse(check_REML, "REML", "ML")
if (any(check_REML)) {
warning("\nModel selection for fixed effects is only appropriate with ML estimation:", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(length(check.method) > 1) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##lmerModLmerTest
bictab.AIClmerModLmerTest <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for subclass of object
sub.class <- lapply(X = cand.set, FUN = class)
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
##check if models were fit with same method (REML or ML)
check_REML <- unlist(lapply(cand.set, FUN = function(i) isREML(i)))
check_ML <- ifelse(check_REML, "REML", "ML")
if (any(check_REML)) {
warning("\nModel selection for fixed effects is only appropriate with ML estimation:", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(length(check.method) > 1) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##glmerMod
bictab.AICglmerMod <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##negbin
bictab.AICnegbin.glm.lm <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##nlme
bictab.AICnlme.lme <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
#check if models were fit with same method (REML or ML)
check_ML <- unlist(lapply(cand.set, FUN = function(i) i$method))
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with method='ML':", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(identical(check.method, c("ML", "REML"))) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##nlmerMod
bictab.AICnlmerMod <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) unlist(strsplit(x = as.character(formula(b)), split = "~"))[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
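Unlike the other methods, the `nlmerMod` method extracts the response by splitting the deparsed formula on `~` instead of indexing `formula(b)[2]`. The same string manipulation in isolation (the formula here is a made-up example):

```r
## as.character() on a two-sided formula yields c("~", "<lhs>", "<rhs>");
## splitting each piece on "~" and taking element 2 recovers the response
f <- y ~ SSlogis(x, Asym, xmid, scal)
parts <- unlist(strsplit(x = as.character(f), split = "~"))
parts[2]
```

Taking element 2 works because `strsplit("~", "~")` returns a single empty string (trailing empties are dropped), so the response always lands in position 2 of the unlisted result.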
##multinom
bictab.AICmultinom.nnet <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add LL when no overdispersion correction is applied (c.hat = 1)
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
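The `c.hat > 1` branch above renames the columns to QBIC and reports a quasi-log-likelihood. That renaming and scaling can be mimicked with invented numbers (`qbic` and `LL` are hypothetical stand-ins for `useBIC()` and `logLik()` output; only the `LL / c.hat` scaling is taken from the code above):

```r
## Invented stand-ins for useBIC(..., c.hat = c.hat) and logLik() output
qbic  <- c(Mod1 = 98.4, Mod2 = 101.0)
LL    <- c(Mod1 = -44.1, Mod2 = -45.6)
c.hat <- 1.8   # overdispersion estimate supplied by the user

res <- data.frame(Modnames = names(qbic), QBIC = qbic,
                  Delta_QBIC = qbic - min(qbic))
res$ModelLik <- exp(-0.5 * res$Delta_QBIC)
res$QBICWt   <- res$ModelLik / sum(res$ModelLik)
res$Quasi.LL <- LL / c.hat   # same scaling as the c.hat > 1 branch
res$c_hat    <- c.hat
```

The quasi columns signal to the reader of the table that an overdispersion correction was applied; the weights are computed from delta QBIC exactly as the ordinary BIC weights are.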
##lme
bictab.AIClme <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
##check if models were fit with same method (REML or ML)
check_ML <- unlist(lapply(cand.set, FUN = function(i) i$method))
if (any(check_ML != "ML")) {
warning("\nModel selection for fixed effects is only appropriate with method='ML':", "\n",
"REML (default) should only be used to select random effects for a constant set of fixed effects\n")
}
check.method <- unique(check_ML)
if(identical(check.method, c("ML", "REML"))) {
stop("\nYou should not have models fit with REML and ML in the same candidate model set\n")
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
#check if ML or REML used and add column accordingly
if(identical(check.method, "ML")) {
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
}
if(identical(check.method, "REML")) {
Results$Res.LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##nls
bictab.AICnls <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames=modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##polr
bictab.AICpolr <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames=modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##rlm
bictab.AICrlm.lm <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##survreg
bictab.AICsurvreg <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- NULL
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##extract LL
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)[1]))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##occu
bictab.AICunmarkedFitOccu <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##add LL when no overdispersion correction is applied (c.hat = 1)
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
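## Illustrative usage sketch (hypothetical objects, not run): 'fm1' and 'fm2'
## would be single-season occupancy models fitted with unmarked::occu() on the
## same data; dispatch reaches the method above through the bictab() generic.
##   cand <- list(psi.dot = fm1, psi.cov = fm2)
##   bictab(cand.set = cand, c.hat = 1)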
##colext
bictab.AICunmarkedFitColExt <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
                               nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC-min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##occuRN
bictab.AICunmarkedFitOccuRN <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not appropriate with Royle-Nichols heterogeneity models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC-min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##pcount
bictab.AICunmarkedFitPCount <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
                               nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##same function as that for objects created by pcount()
bictab.AICunmarkedFitPCO <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##distsamp
bictab.AICunmarkedFitDS <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
                               nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##gdistsamp
bictab.AICunmarkedFitGDS <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not appropriate for distance sampling models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##occuFP
bictab.AICunmarkedFitOccuFP <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
                               nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##occuMulti
bictab.AICunmarkedFitOccuMulti <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
  #if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for multispecies occupancy models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
                               nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##multinomPois
bictab.AICunmarkedFitMPois <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for multinomial Poisson models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
                               nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##gmultmix
bictab.AICunmarkedFitGMM <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for generalized multinomial mixture models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##gpcount
bictab.AICunmarkedFitGPC <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
#if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for generalized binomial mixture models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##occuMS
bictab.AICunmarkedFitOccuMS <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
  #if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for multistate occupancy models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
                               nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##occuTTD
bictab.AICunmarkedFitOccuTTD <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
  #if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for time-to-detection occupancy models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
                               nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##multmixOpen
bictab.AICunmarkedFitMMO <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
  #if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for open-population multinomial mixture models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
                               nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##distsampOpen
bictab.AICunmarkedFitDSO <- function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check for use of c-hat
  #if(c.hat > 1) stop("\nThe correction for overdispersion is not yet implemented for open-population distance sampling models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE,
                               nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) extractLL(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##vglm
bictab.AICvglm <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, c.hat = 1, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
  Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = TRUE,
nobs = nobs, c.hat = c.hat)) #extract number of parameters
Results$BIC <- unlist(lapply(X = cand.set, FUN = useBIC, return.K = FALSE,
nobs = nobs, c.hat = c.hat)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
##check if BIC and c.hat = 1
if(c.hat == 1) {
Results$LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)))
}
##rename correctly to QBIC and add column for c-hat
if(c.hat > 1) {
colnames(Results) <- c("Modnames", "K", "QBIC", "Delta_QBIC", "ModelLik", "QBICWt")
LL <- unlist(lapply(X = cand.set, FUN = function(i) logLik(i)))
Results$Quasi.LL <- LL/c.hat
Results$c_hat <- c.hat
}
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
##zeroinfl
bictab.AICzeroinfl <-
function(cand.set, modnames = NULL, nobs = NULL, sort = TRUE, ...){ #specify whether table should be sorted or not by delta BIC
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#if(c.hat != 1) stop("\nThis function does not support overdispersion in \'zeroinfl\' models\n")
##add check to see whether response variable is the same for all models
check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
if(length(unique(check.resp)) > 1) stop("\nYou must use the same response variable for all models\n")
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$K <- unlist(lapply(cand.set, useBIC, return.K = TRUE, nobs = nobs)) #extract number of parameters
Results$BIC <- unlist(lapply(cand.set, useBIC, return.K = FALSE, nobs = nobs)) #extract BIC
Results$Delta_BIC <- Results$BIC - min(Results$BIC) #compute delta BIC
Results$ModelLik <- exp(-0.5*Results$Delta_BIC) #compute model likelihood required to compute BIC weights
Results$BICWt <- Results$ModelLik/sum(Results$ModelLik) #compute BIC weights
##check if some models are redundant
if(length(unique(Results$BIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
Results$LL <- unlist(lapply(X= cand.set, FUN = function(i) logLik(i)))
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on BIC weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of BIC weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("bictab", "data.frame")
return(Results)
}
print.bictab <-
function(x, digits = 2, LL = TRUE, ...) {
cat("\nModel selection based on ", colnames(x)[3], ":\n", sep = "")
if (any(names(x) == "c_hat")) {cat("(c-hat estimate = ", x$c_hat[1], ")\n", sep = "")}
cat("\n")
#check if Cum.Wt should be printed
if(any(names(x) == "Cum.Wt")) {
nice.tab <- cbind(x[, 2], x[, 3], x[, 4], x[, 6], x[, "Cum.Wt"], x[, 7])
colnames(nice.tab) <- c(colnames(x)[c(2, 3, 4, 6)], "Cum.Wt", colnames(x)[7])
rownames(nice.tab) <- x[, 1]
} else {
nice.tab <- cbind(x[, 2], x[, 3], x[, 4], x[, 6], x[, 7])
colnames(nice.tab) <- c(colnames(x)[c(2, 3, 4, 6, 7)])
rownames(nice.tab) <- x[, 1]
}
#if LL==FALSE
if(identical(LL, FALSE)) {
names.cols <- colnames(nice.tab)
sel.LL <- which(attr(regexpr(pattern = "LL", text = names.cols), "match.length") > 1)
nice.tab <- nice.tab[, -sel.LL]
}
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n")
}
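##Usage sketch for the table methods above (hypothetical candidate set;
##assumes the AICcmodavg package is installed so that the `bictab` generic
##dispatches to these methods; `mtcars` ships with base R):

```r
## BIC-based model selection for a set of lm fits
library(AICcmodavg)

cand <- list(
  null  = lm(mpg ~ 1, data = mtcars),
  wt    = lm(mpg ~ wt, data = mtcars),
  wt.hp = lm(mpg ~ wt + hp, data = mtcars)
)

## list names supply the model names; sort = TRUE ranks models by delta BIC
bictab(cand.set = cand)
```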
## File: AICcmodavg/R/bictab.R
##generic boot.wt
boot.wt <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...){
cand.set <- formatCands(cand.set)
UseMethod("boot.wt", cand.set)
}
##default to indicate when object class not supported
boot.wt.default <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...) {
stop("\nFunction not yet defined for this object class\n")
}
##aov
boot.wt.AICaov.lm <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##betareg
boot.wt.AICbetareg <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##clm
boot.wt.AICsclm.clm <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##glm
boot.wt.AICglm.lm <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, c.hat = 1, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE, c.hat = c.hat)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##hurdle models
boot.wt.AIChurdle <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##lm
boot.wt.AIClm <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##multinom
boot.wt.AICmultinom.nnet <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, c.hat = 1, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE, c.hat = c.hat)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##polr
boot.wt.AICpolr <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##rlm
boot.wt.AICrlm.lm <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##survreg
boot.wt.AICsurvreg <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##vglm
boot.wt.AICvglm <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, c.hat = 1, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]@call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE, c.hat = c.hat)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
##zeroinfl
boot.wt.AICzeroinfl <- function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL,
sort = TRUE, nsim = 100, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##extract data from first model
data <- eval(cand.set[[1]]$call$data, environment(formula(cand.set[[1]])))
##create vector to store top model
top <- character(nsim)
for(i in 1:nsim) {
##resample data
new.data <- data[sample(x = rownames(data), replace = TRUE), ]
##store results
results <- lapply(X = cand.set, FUN = function(j) update(j, data = new.data))
##compute AIC table
out <- aictab(cand.set = results, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = TRUE)
##determine highest-ranked model
top[i] <- as.character(out$Modnames[1])
}
##compute selection frequencies for each model
rel.freqs <- table(top)/nsim
##check whether all models appear in table
if(length(cand.set) != length(rel.freqs)) {
##assign 0 freqs for models never appearing at first rank
all.freqs <- rep(0, times = length(modnames))
names(all.freqs) <- modnames
##iterate over observed models
for (k in 1:length(rel.freqs)){
all.freqs[names(rel.freqs)[k]] <- rel.freqs[k]
}
rel.freqs <- all.freqs
}
##original model selection
orig.aic <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##sort table according to model names
orig.aic <- orig.aic[order(orig.aic$Modnames), ]
##sort relative frequencies according to model names
rel.freqs <- rel.freqs[order(names(rel.freqs))]
##rename column for relative frequencies
names(orig.aic)[7] <- "PiWt"
##add column for pi_i
orig.aic$PiWt<- rel.freqs
##reorder if required
if (sort) {
orig.aic <- orig.aic[order(orig.aic[, 3]), ]
#orig.aic$Cum.PiWt <- cumsum(orig.aic[, 7])
}
class(orig.aic) <- c("boot.wt", "data.frame")
return(orig.aic)
}
print.boot.wt <-
function(x, digits = 2, ...) {
cat("\nModel selection based on ", colnames(x)[3], ":\n", sep = "")
if (any(names(x) == "c_hat")) {cat("(c-hat estimate = ", x$c_hat[1], ")\n", sep = "")}
cat("\n")
##assemble columns to display (boot.wt tables have no Cum.Wt column)
nice.tab <- cbind(x[, 2], x[, 3], x[, 4], x[, 6], x[, 7])
colnames(nice.tab) <- c(colnames(x)[c(2, 3, 4, 6, 7)])
rownames(nice.tab) <- x[, 1]
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n")
}
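##Usage sketch for the bootstrapped model-selection methods above
##(hypothetical candidate set; assumes the AICcmodavg package is installed so
##the `boot.wt` generic dispatches to these methods; nsim is kept small here
##purely for illustration):

```r
## bootstrapped selection frequencies (pi_i) for binomial glm fits
library(AICcmodavg)

cand <- list(
  null = glm(am ~ 1, family = binomial, data = mtcars),
  wt   = glm(am ~ wt, family = binomial, data = mtcars)
)

set.seed(1)  # resampling is stochastic, so fix the seed for reproducibility
boot.wt(cand.set = cand, nsim = 50)
```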
## File: AICcmodavg/R/boot.wt.R
##create generic c_hat
c_hat <- function(mod, method = "pearson", ...) {
##format list according to model class
UseMethod("c_hat", mod)
}
##default to indicate when object class not supported
c_hat.default <- function(mod, method = "pearson", ...) {
stop("\nFunction not yet defined for this object class\n")
}
##function to compute c-hat from Poisson or binomial GLM with success/total syntax
c_hat.glm <- function(mod, method = "pearson", ...){
##determine family of model
fam <- family(mod)$family
##if binomial, check if n > 1 for each case
if(fam == "binomial") {
##extract number of trials
n.trials <- mod$prior.weights
if(identical(unique(n.trials), 1)) {
stop("\nWith a binomial distribution, the number of successes must be summarized for valid computation of c-hat\n")
}
}
##Poisson or binomial
if(!any(fam == c("poisson", "binomial"))) {
stop("\nEstimation of c-hat only valid for Poisson or binomial GLM's\n")
}
##Pearson chi-square
chisq <- sum(residuals(mod, type = "pearson")^2)
##return estimate based on Pearson chi-square
if(method == "pearson") {
c_hat.est <- chisq/mod$df.residual
attr(c_hat.est, "method") <- "pearson estimator"
}
##return estimate based on deviance estimator
if(method == "deviance") {
##estimate deviance
mod.deviance <- sum(residuals(mod, type = "deviance")^2)
c_hat.est <- mod.deviance/mod$df.residual
attr(c_hat.est, "method") <- "deviance estimator"
}
##extract raw residuals
raw.res <- residuals(mod, type = "response")
##extract fitted values
fit.vals <- fitted(mod)
##estimate s.bar for Poisson
if(fam == "poisson") {
si <- 1/fit.vals * raw.res
s.bar <- mean(si)
}
##estimate s.bar for binomial
if(fam == "binomial") {
si <- (1 - 2 * fit.vals)/((n.trials * fit.vals) * (1 - fit.vals))
s.bar <- mean(si)
}
##return estimate based on Farrington estimator
if(method == "farrington") {
c_hat.est <- (chisq - sum(si))/mod$df.residual
attr(c_hat.est, "method") <- "farrington estimator"
}
##return estimate based on Fletcher estimator
if(method == "fletcher") {
c_hat.est <- (chisq/mod$df.residual)/(1 + s.bar)
attr(c_hat.est, "method") <- "fletcher estimator"
}
class(c_hat.est) <- "c_hat"
return(c_hat.est)
}
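##Usage sketch for the GLM method above (hypothetical data, made up purely
##for illustration; assumes the AICcmodavg package is installed so the
##`c_hat` generic dispatches to this method). Note that the binomial case
##requires successes summarized over trials, as the check above enforces:

```r
## estimating overdispersion (c-hat) from a binomial GLM with
## successes/trials summarized per group
library(AICcmodavg)

d <- data.frame(success = c(12, 8, 5, 15),
                trials  = c(20, 20, 20, 20),
                x       = c(1, 2, 3, 4))
m <- glm(cbind(success, trials - success) ~ x, family = binomial, data = d)

c_hat(m, method = "pearson")  # Pearson chi-square / residual df
```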
##function to compute c-hat from a Poisson or binomial glmmTMB model (binomial requires success/total syntax)
c_hat.glmmTMB <- function(mod, method = "pearson", ...){
##determine family of model
fam <- family(mod)$family
##extract response
##if binomial, check if n > 1 for each case
if(fam == "binomial") {
resp <- mod$frame[, mod$modelInfo$respCol]
if(!is.matrix(resp)) {
if(!any(names(mod$frame) == "(weights)")) {
stop("\nWith a binomial distribution, the number of successes must be summarized for valid computation of c-hat\n")
}
}
}
##Poisson or binomial
if(!any(fam == c("poisson", "binomial"))) {
stop("\nEstimation of c-hat only valid for Poisson or binomial distributions\n")
}
##number of parameters estimated
n.parms <- attr(logLik(mod), "df")
##total number of observations
n.obs <- nrow(model.frame(mod))
##residual df
res.df <- n.obs - n.parms
##Pearson chi-square
chisq <- sum(residuals(mod, type = "pearson")^2)
##return estimate based on Pearson chi-square
if(method == "pearson") {
c_hat.est <- chisq/res.df
attr(c_hat.est, "method") <- "pearson estimator"
} else {stop("\nOnly Pearson estimator is currently supported for this model class\n")}
class(c_hat.est) <- "c_hat"
return(c_hat.est)
}
##function to compute c-hat from a Poisson or binomial vglm model (binomial requires success/total syntax)
c_hat.vglm <- function(mod, method = "pearson", ...){
##determine family of model
fam <- mod@family@vfamily
if(length(fam) > 1) fam <- fam[1]
##if binomial, check if n > 1 for each case
if(fam == "binomialff") {
##extract number of trials
n.trials <- [email protected]
if(identical(nrow([email protected]), 0)) {
stop("\nWith a binomial distribution, the number of successes must be summarized for valid computation of c-hat\n")
}
}
##Poisson or binomial
if(!any(fam == c("poissonff", "binomialff"))) {
stop("\nEstimation of c-hat only valid for Poisson or binomial GLMs\n")
}
##Pearson chi-square
chisq <- sum(residuals(mod, type = "pearson")^2)
##return estimate based on Pearson chi-square
if(method == "pearson") {
c_hat.est <- chisq/mod@df.residual
attr(c_hat.est, "method") <- "pearson estimator"
}
##return estimate based on deviance estimator
if(method == "deviance") {
##estimate deviance
mod.deviance <- sum(residuals(mod, type = "deviance")^2)
c_hat.est <- mod.deviance/mod@df.residual
attr(c_hat.est, "method") <- "deviance estimator"
}
##extract raw residuals
raw.res <- residuals(mod, type = "response")
##extract fitted values
fit.vals <- fitted(mod)
##estimate s.bar for Poisson
if(fam == "poissonff") {
si <- 1/fit.vals * raw.res
s.bar <- mean(si)
}
##estimate s.bar for binomial
if(fam == "binomialff") {
si <- (1 - 2 * fit.vals)/((n.trials * fit.vals) * (1 - fit.vals))
s.bar <- mean(si)
}
##return estimate based on Farrington estimator
if(method == "farrington") {
c_hat.est <- (chisq - sum(si))/mod@df.residual
attr(c_hat.est, "method") <- "farrington estimator"
}
##return estimate based on Fletcher estimator
if(method == "fletcher") {
c_hat.est <- (chisq/mod@df.residual)/(1 + s.bar)
attr(c_hat.est, "method") <- "fletcher estimator"
}
class(c_hat.est) <- "c_hat"
return(c_hat.est)
}
##method for GLMM from lme4
c_hat.merMod <- function(mod, method = "pearson", ...) {
##determine family of model
fam <- family(mod)$family
##if binomial, check if n > 1 for each case
if(fam == "binomial") {
if(identical(unique(mod@resp$weights), 1)) {
stop("\nWith a binomial distribution, the number of successes must be summarized for valid computation of c-hat\n")
}
}
##Poisson or binomial
if(!any(fam == c("poisson", "binomial"))) {
stop("\nEstimation of c-hat only valid for Poisson or binomial GLMMs\n")
}
##number of parameters estimated
n.parms <- attr(logLik(mod), "df")
##total number of observations
n.obs <- nrow(model.frame(mod))
##residual df
res.df <- n.obs - n.parms
if(method == "pearson") {
chisq <- sum(residuals(mod, type = "pearson")^2)
c_hat.est <- chisq/res.df
attr(c_hat.est, "method") <- "pearson estimator"
} else {stop("\nOnly Pearson estimator is currently supported for GLMMs\n")}
class(c_hat.est) <- "c_hat"
return(c_hat.est)
}
##print method
print.c_hat <- function(x, digits = 2, ...) {
cat("'c-hat' ", paste(round(x, digits = digits), collapse = ", "),
" (method: ", attr(x, "method"), ")\n", sep = "")
}
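##Illustrative usage sketch (kept entirely as comments so nothing runs at
##package load time); 'dat' and its columns 'y' and 'x' are hypothetical:
## m1 <- glm(y ~ x, family = poisson, data = dat)
## c_hat(m1, method = "pearson")    ##Pearson chi-square / residual df
## c_hat(m1, method = "deviance")   ##deviance-based estimator
## c_hat(m1, method = "fletcher")   ##Fletcher estimator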
| /scratch/gouwar.j/cran-all/cranData/AICcmodavg/R/c_hat.R |
##generic
checkConv <- function(mod, ...) {
UseMethod("checkConv", mod)
}
##default
checkConv.default <- function(mod, ...) {
stop("\nFunction not yet defined for this object class\n")
}
##betareg
checkConv.betareg <- function(mod, ...) {
if(mod$optim$convergence == 0) {
conv <- TRUE
} else {conv <- FALSE}
msg <- mod$optim$message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##clm
checkConv.clm <- function(mod, ...) {
if(mod$convergence$code == 0) {
conv <- TRUE
} else {conv <- FALSE}
msg <- mod$convergence$alg.message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##clmm
checkConv.clmm <- function(mod, ...) {
if(mod$optRes$convergence == 0) {
conv <- TRUE
} else {conv <- FALSE}
msg <- mod$optRes$message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##glm
checkConv.glm <- function(mod, ...) {
if(mod$converged) {
conv <- TRUE
} else {conv <- FALSE}
msg <- NULL ##object does not include a message from IWLS algorithm
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##glmmTMB
checkConv.glmmTMB <- function(mod, ...) {
if(mod$fit$convergence == 0) {
conv <- TRUE
} else {conv <- FALSE}
msg <- mod$fit$message ##convergence message returned by the optimizer
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##hurdle
checkConv.hurdle <- function(mod, ...) {
if(mod$converged) {
conv <- TRUE
} else {conv <- FALSE}
msg <- NULL ##object does not store a convergence message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##lavaan
checkConv.lavaan <- function(mod, ...) {
if(mod@Fit@converged) {
conv <- TRUE
} else {conv <- FALSE}
msg <- NULL ##object does not store a convergence message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##maxlikefit
checkConv.maxlikeFit <- function(mod, ...) {
if(mod$optim$convergence == 0) {
conv <- TRUE
} else {conv <- FALSE}
msg <- mod$optim$message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##merMod
checkConv.merMod <- function(mod, ...) {
if(mod@optinfo$conv$opt == 0) {
conv <- TRUE
} else {conv <- FALSE}
msg <- NULL ##object does not store a convergence message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##lmerModLmerTest
checkConv.lmerModLmerTest <- function(mod, ...) {
if(mod@optinfo$conv$opt == 0) {
conv <- TRUE
} else {conv <- FALSE}
msg <- NULL ##object does not store a convergence message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##multinom
checkConv.multinom <- function(mod, ...) {
if(mod$convergence == 0) {
conv <- TRUE
} else {conv <- FALSE}
msg <- NULL ##object does not store a convergence message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##nls
checkConv.nls <- function(mod, ...) {
if(mod$convInfo$isConv) {
conv <- TRUE
} else {conv <- FALSE}
msg <- mod$convInfo$stopMessage
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##polr
checkConv.polr <- function(mod, ...) {
if(mod$convergence == 0) {
conv <- TRUE
} else {conv <- FALSE}
msg <- NULL ##object does not store a convergence message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##unmarked
checkConv.unmarkedFit <- function(mod, ...) {
if(mod@opt$convergence == 0) {
conv <- TRUE
} else {conv <- FALSE}
msg <- mod@opt$message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
##zeroinfl
checkConv.zeroinfl <- function(mod, ...) {
if(mod$converged) {
conv <- TRUE
} else {conv <- FALSE}
msg <- NULL ##object does not store a convergence message
out <- list(converged = conv, message = msg)
class(out) <- "checkConv"
return(out)
}
print.checkConv <- function(x, ...) {
cat("\nConverged: ", x$converged, "\n")
if(!is.null(x$message)) {
cat("(", x$message, ")", "\n", sep = "")
}
}
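##Illustrative usage sketch (kept entirely as comments so nothing runs at
##package load time); 'dat' with columns 'y' and 'x' is hypothetical:
## m1 <- glm(y ~ x, family = binomial, data = dat)
## checkConv(m1)   ##list with elements 'converged' (logical) and 'message'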
| /scratch/gouwar.j/cran-all/cranData/AICcmodavg/R/checkConv.R |
##check SE's of parameters in model
##and identify SE's above a given threshold
##or with NaN
##mainly used with unmarkedFit objects, but also useful with classic GLMs
checkParms <- function(mod, se.max = 25, simplify = TRUE, ...) {
UseMethod("checkParms", mod)
}
checkParms.default <- function(mod, se.max = 25, simplify = TRUE, ...) {
stop("\nFunction not yet defined for this object class\n")
}
##unmarkedFit objects
checkParms.unmarkedFit <- function(mod, se.max = 25, simplify = TRUE, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##extract parameter group (e.g., psi, p) from each coefficient label
parm.names <- sapply(var.names, FUN = function(i) unlist(strsplit(i, split = "\\("))[1], simplify = TRUE)
##unique parms
parm.id <- unique(parm.names)
##format to matrix
##models with several groups of parameters
matSE <- data.frame(SEs = SEs, variable = var.names, parameter = parm.names)
##request simplified output
if(identical(simplify, TRUE)) {
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
parmSE <- as.character(matSE[which(matSE$variable == nameSE), "parameter"])
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
parmSE <- as.character(matSE[which(matSE$SEs == maxSE), "parameter"])
}
##add to rowname
rownames(out.result) <- parmSE
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
}
##requesting long output
if(identical(simplify, FALSE)) {
##create list to hold results for each parm type
out.result <- data.frame(variable = rep(NA, length(parm.id)),
max.se = rep(NA, length(parm.id)),
n.high.se = rep(NA, length(parm.id)))
rownames(out.result) <- parm.id
##for each parameter, identify maximum value of SE
for(j in parm.id) {
mat.parm <- matSE[matSE$parameter %in% j, ]
maxSE <- max(mat.parm$SEs)
out.result[j, "max.se"] <- maxSE
nameSE <- as.character(mat.parm[which(mat.parm$SEs == maxSE), "variable"])
##check if NaN are present
if(is.nan(maxSE)) {
nan.var <- as.character(mat.parm[is.nan(mat.parm$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
}
out.result[j, "variable"] <- nameSE
##determine number of SE's > SE.limit
out.result[j, "n.high.se"] <- length(which(mat.parm$SEs > se.max))
}
}
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
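##Illustrative usage sketch (kept entirely as comments so nothing runs at
##package load time); 'umf' is a hypothetical unmarkedFrameOccu:
## fm <- unmarked::occu(~ 1 ~ x, data = umf)
## checkParms(fm, se.max = 25, simplify = FALSE)  ##one row per parameter group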
##betareg objects
checkParms.betareg <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##clm objects
checkParms.clm <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##clmm objects
checkParms.clmm <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##coxme objects
checkParms.coxme <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- extractSE(mod)
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##coxph objects
checkParms.coxph <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##glm objects
checkParms.glm <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##glmmTMB objects
checkParms.glmmTMB <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)$cond))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##gls objects
checkParms.gls <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##gnls objects
checkParms.gnls <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##hurdle objects
checkParms.hurdle <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##classic linear regression (lm)
checkParms.lm <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##lme objects
checkParms.lme <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##lmekin objects
checkParms.lmekin <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- extractSE(mod)
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##maxlike objects
checkParms.maxlikeFit <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##mer objects - old versions of lme4
#checkParms.mer <- function(mod, se.max = 25, ...) {
#
# ##extract SE
# SEs <- sqrt(diag(vcov(mod)))
#
# ##extract names
# var.names <- names(SEs)
#
# ##format as matrix
# matSE <- data.frame(SEs = SEs, variable = var.names)
#
# ##create matrix to hold results for parm with highest SE
# out.result <- data.frame(variable = rep(NA, 1),
# max.se = rep(NA, 1),
# n.high.se = rep(NA, 1))
#
# ##identify maximum value of SE in model
# maxSE <- max(SEs)
#
# ##check if length = 0 (when NaN are present)
# if(is.nan(maxSE)) {
# nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
# if(length(nan.var) == 1) {
# nameSE <- nan.var
# } else {nameSE <- nan.var[1]}
# } else {
# nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
# }
#
# rownames(out.result) <- "beta"
#
# out.result[, "variable"] <- nameSE
# out.result[, "max.se"] <- maxSE
# out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
#
# out <- list(model.class = class(mod), se.max = se.max, result = out.result)
# class(out) <- "checkParms"
# return(out)
#}
##merMod objects
checkParms.merMod <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- extractSE(mod)
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##lmerModLmerTest objects
checkParms.lmerModLmerTest <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- extractSE(mod)
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##multinom objects
checkParms.multinom <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##nlme objects
checkParms.nlme <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##nls objects
checkParms.nls <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##polr objects
checkParms.polr <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##rlm objects
checkParms.rlm <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##check for NaN SEs (matching the maximum would return length 0)
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##survreg objects
checkParms.survreg <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##handle NaN in SEs: max( ) returns NaN and matching on it would yield a zero-length result
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##vglm objects
checkParms.vglm <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##handle NaN in SEs: max( ) returns NaN and matching on it would yield a zero-length result
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##zeroinfl objects
checkParms.zeroinfl <- function(mod, se.max = 25, ...) {
##extract SE
SEs <- sqrt(diag(vcov(mod)))
##extract names
var.names <- names(SEs)
##format as matrix
matSE <- data.frame(SEs = SEs, variable = var.names)
##create matrix to hold results for parm with highest SE
out.result <- data.frame(variable = rep(NA, 1),
max.se = rep(NA, 1),
n.high.se = rep(NA, 1))
##identify maximum value of SE in model
maxSE <- max(SEs)
##handle NaN in SEs: max( ) returns NaN and matching on it would yield a zero-length result
if(is.nan(maxSE)) {
nan.var <- as.character(matSE[is.nan(matSE$SEs), "variable"])
if(length(nan.var) == 1) {
nameSE <- nan.var
} else {nameSE <- nan.var[1]}
} else {
nameSE <- as.character(matSE[which(matSE$SEs == maxSE), "variable"])
}
rownames(out.result) <- "beta"
out.result[, "variable"] <- nameSE
out.result[, "max.se"] <- maxSE
out.result[, "n.high.se"] <- length(which(matSE$SEs > se.max))
out <- list(model.class = class(mod), se.max = se.max, result = out.result)
class(out) <- "checkParms"
return(out)
}
##print method
print.checkParms <- function(x, digits = 2, ...) {
##result data frame
out.frame <- x$result
##round numeric variables
out.frame$max.se <- round(out.frame$max.se, digits = digits)
print(out.frame)
}
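##----------------------------------------------------------------------
##Illustrative usage sketch (not part of the package source): 'fm' is a
##hypothetical fitted rlm model used only to show the calling convention
##of the checkParms methods defined above.
##fm <- MASS::rlm(stack.loss ~ ., data = stackloss)
##checkParms(fm, se.max = 25)  #reports the parameter with the largest SE
##----------------------------------------------------------------------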
confset <-
function(cand.set, modnames = NULL, second.ord = TRUE, nobs = NULL, method = "raw", level = 0.95, delta = 6, c.hat = 1) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
aic.table <- aictab(cand.set = cand.set, modnames = modnames, sort = TRUE, c.hat = c.hat,
second.ord = second.ord, nobs = nobs)
##add check to see whether response variable is the same for all models - these two lines not compatible with unmarked objects
#check.resp <- lapply(X = cand.set, FUN = function(b) formula(b)[2])
#if(length(unique(check.resp)) > 1) stop("You must use the same response variable for all models\n")
nmods <- nrow(aic.table)
#method based on simply summing the Akaike weights until a given value is reached
if(method == "raw") {
iter <- 1
sum.wt <- 0
while(level >= sum.wt) {
sum.wt <- aic.table[iter, 6] + sum.wt
if(sum.wt < level) iter <- iter + 1
}
confset.tab <- list("method" = method, "level" = level, "table" = aic.table[1:iter,])
}
#method based on ranking models ordinally according to their delta AIC values
if(method == "ordinal") {
substantial <- aic.table[which(aic.table[,4] <= 2),]
some <- aic.table[which(aic.table[,4] > 2 & aic.table[,4] <= 7),]
little <- aic.table[which(aic.table[,4] > 7 & aic.table[,4] <= 10), ]
none <- aic.table[which(aic.table[,4] > 10), ]
confset.tab <- list("method" = method, "substantial" = substantial, "some" = some, "little" = little, "none" = none)
}
#method based on relative likelihoods for some cutoff value
if(method == "ratio") {
cutoff <- exp(-1*delta/2)
#if a given cutoff value is requested, one can compute the corresponding delta AIC value as:
#-2*log(cutoff); -2*log(0.125) = 4.16
ratios <- matrix(NA, nrow = nmods, ncol = 1)
for (i in 1:nmods) {
ratios[i] <- aic.table[i, 5]/aic.table[1, 5]
}
confset.tab <- list("method" = method, "cutoff" = cutoff, "delta" = delta,
"table" = aic.table[which(ratios > cutoff), ])
}
class(confset.tab) <- c("confset", "list")
return(confset.tab)
}
print.confset <-
function(x, digits = 2, ...) {
if(x$method == "raw") {
cat("\nConfidence set for the best model\n\n")
cat("Method:\t", "raw sum of model probabilities\n\n")
perc <- x$level*100
perc.char <- paste(perc, "%", sep = "")
cat(perc.char, "confidence set:\n")
nice.tab <- cbind(x$table[, 2], x$table[, 3], x$table[, 4], x$table[, 6])
colnames(nice.tab) <- colnames(x$table)[c(2, 3, 4, 6)]
rownames(nice.tab) <- x$table[, 1]
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n")
sum.wt <- round(sum(nice.tab[, 4]), digits = digits)
cat("Model probabilities sum to", sum.wt, "\n\n")
}
if(x$method == "ordinal") {
cat("\nConfidence set for the best model\n\n")
cat("Method:\t", "ordinal ranking based on delta AIC\n\n")
cat("Models with substantial weight:\n")
nice.tab <- cbind(x$substantial[, 2], x$substantial[, 3], x$substantial[, 4], x$substantial[, 6])
colnames(nice.tab) <- colnames(x$substantial)[c(2, 3, 4, 6)]
rownames(nice.tab) <- x$substantial[,1]
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n\n")
cat("Models with some weight:\n")
nice.tab <- cbind(x$some[, 2], x$some[, 3], x$some[, 4], x$some[, 6])
colnames(nice.tab) <- colnames(x$some)[c(2, 3, 4, 6)]
rownames(nice.tab) <- x$some[, 1]
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n\n")
cat("Models with little weight:\n")
nice.tab <- cbind(x$little[, 2], x$little[, 3], x$little[, 4], x$little[, 6])
colnames(nice.tab) <- colnames(x$little)[c(2, 3, 4, 6)]
rownames(nice.tab) <- x$little[, 1]
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n\n")
cat("Models with no weight:\n")
nice.tab <- cbind(x$none[, 2], x$none[, 3], x$none[, 4], x$none[, 6])
colnames(nice.tab) <- colnames(x$none)[c(2, 3, 4, 6)]
rownames(nice.tab) <- x$none[, 1]
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n\n")
}
if(x$method == "ratio") {
cat("\nConfidence set for the best model\n\n")
cat("Method:\t", "ranking based on relative model likelihood\n\n")
round.cut <- round(x$cutoff, digits = digits)
cat("Cutoff value:\t", round.cut, "(corresponds to delta AIC of", paste(x$delta, ")", sep = ""), "\n\n")
cat("Confidence set for best model:\n")
nice.tab <- cbind(x$table[, 2], x$table[, 3], x$table[, 4], x$table[, 6])
colnames(nice.tab) <- colnames(x$table)[c(2, 3, 4, 6)]
rownames(nice.tab) <- x$table[, 1]
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n\n")
}
}
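##----------------------------------------------------------------------
##Illustrative usage sketch (not part of the package source):
##'Cand.models' is a hypothetical named list of fitted models supplied
##to the confset function defined above.
##confset(cand.set = Cand.models, method = "raw", level = 0.95)
##confset(cand.set = Cand.models, method = "ratio", delta = 6)
##----------------------------------------------------------------------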
##summarize detection histories and count data
countDist <- function(object, plot.freq = TRUE, plot.distance = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...){
UseMethod("countDist", object)
}
countDist.default <- function(object, plot.freq = TRUE, plot.distance = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...){
stop("\nFunction not yet defined for this object class\n")
}
##for unmarkedFrameDS
countDist.unmarkedFrameDS <- function(object, plot.freq = TRUE, plot.distance = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- 1
##visits per season
n.visits.season <- 1
##distance classes
dist.classes <- [email protected]
##number of distance classes
n.dist.classes <- length(dist.classes) - 1
##units
unitsIn <- object@unitsIn
##create string of names
dist.names <- rep(NA, n.dist.classes)
for(i in 1:n.dist.classes){
dist.names[i] <- paste(dist.classes[i], "-", dist.classes[i+1], sep = "")
}
##collapse yMat into a single vector
yVec <- as.vector(yMat)
##determine size of plot window
##when both types are requested
if(plot.freq && plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 2
oldpar <- par(mfrow = c(nRows, nCols))
}
##when single type is requested
if(plot.freq && !plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
##when single type is requested
if(!plot.freq && plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
##summarize counts
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = "Distribution of raw counts",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##summarize counts per distance
dist.sums.full <- colSums(yMat, na.rm = TRUE)
names(dist.sums.full) <- dist.names
dist.table.seasons <- list(dist.sums.full)
names(dist.table.seasons) <- "season1"
if(plot.distance) {
##create histogram
barplot(dist.sums.full, ylab = "Frequency",
xlab = paste("Distance class (", unitsIn, ")", sep = ""),
main = "Distribution of distance data",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- list(count.table.full)
##for each season, determine frequencies
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
ySeason <- yMat
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with naive occupancy based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
##reset to original values
if(plot.freq || plot.distance) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"dist.sums.full" = dist.sums.full,
"dist.table.seasons" = dist.table.seasons,
"dist.names" = dist.names,
"n.dist.classes" = n.dist.classes,
"out.freqs" = out.freqs,
"out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = FALSE)
class(out.count) <- "countDist"
return(out.count)
}
##for unmarkedFitDS
countDist.unmarkedFitDS <- function(object, plot.freq = TRUE, plot.distance = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- 1
##visits per season
n.visits.season <- 1
##distance classes
dist.classes <- object@[email protected]
##number of distance classes
n.dist.classes <- length(dist.classes) - 1
##units
unitsIn <- object@data@unitsIn
##create string of names
dist.names <- rep(NA, n.dist.classes)
for(i in 1:n.dist.classes){
dist.names[i] <- paste(dist.classes[i], "-", dist.classes[i+1], sep = "")
}
##collapse yMat into a single vector
yVec <- as.vector(yMat)
##determine size of plot window
##when both types are requested
if(plot.freq && plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 2
oldpar <- par(mfrow = c(nRows, nCols))
}
##when single type is requested
if(plot.freq && !plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
##when single type is requested
if(!plot.freq && plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
##summarize counts
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = "Distribution of raw counts",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##summarize counts per distance
dist.sums.full <- colSums(yMat, na.rm = TRUE)
names(dist.sums.full) <- dist.names
dist.table.seasons <- list(dist.sums.full)
names(dist.table.seasons) <- "season1"
if(plot.distance) {
##create histogram
barplot(dist.sums.full, ylab = "Frequency",
xlab = paste("Distance class (", unitsIn, ")", sep = ""),
main = "Distribution of distance data",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- list(count.table.full)
##for each season, determine frequencies
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
ySeason <- yMat
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with naive occupancy based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
##reset to original values
if(plot.freq || plot.distance) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"dist.sums.full" = dist.sums.full,
"dist.table.seasons" = dist.table.seasons,
"dist.names" = dist.names,
"n.dist.classes" = n.dist.classes,
"out.freqs" = out.freqs,
"out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = FALSE)
class(out.count) <- "countDist"
return(out.count)
}
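##----------------------------------------------------------------------
##Illustrative usage sketch (not part of the package source): 'dsData'
##is a hypothetical unmarkedFrameDS object passed to the countDist
##methods defined above.
##countDist(dsData, plot.freq = TRUE, plot.distance = TRUE)
##----------------------------------------------------------------------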
##for unmarkedFrameGDS
countDist.unmarkedFrameGDS <- function(object, plot.freq = TRUE, plot.distance = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- 1
##for GDS - several visits in single season
nvisits <- object@numPrimary
##visits per season
n.visits.season <- 1
##distance classes
dist.classes <- [email protected]
##number of distance classes
n.dist.classes <- length(dist.classes) - 1
##units
unitsIn <- object@unitsIn
##create string of names
dist.names <- rep(NA, n.dist.classes)
for(i in 1:n.dist.classes){
dist.names[i] <- paste(dist.classes[i], "-", dist.classes[i+1], sep = "")
}
##collapse yMat into a single vector
yVec <- as.vector(yMat)
##determine size of plot window
##when both types are requested
if(plot.freq && plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 2
oldpar <- par(mfrow = c(nRows, nCols))
}
##when single type is requested
if(plot.freq && !plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
##when single type is requested
if(!plot.freq && plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
##summarize counts
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = "Distribution of raw counts",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##summarize counts per distance
dist.sums.full <- rep(NA, n.dist.classes)
##create matrix to hold indices of visits x dist.classes
mat.dist <- matrix(1:(nvisits*n.dist.classes),
nrow = n.dist.classes,
ncol = nvisits)
for(j in 1:n.dist.classes) {
dist.sums.full[j] <- sum(colSums(yMat[, mat.dist[j, ]], na.rm = TRUE), na.rm = TRUE)
}
names(dist.sums.full) <- dist.names
dist.table.seasons <- list(dist.sums.full)
names(dist.table.seasons) <- "season1"
if(plot.distance) {
##create histogram
barplot(dist.sums.full, ylab = "Frequency",
xlab = paste("Distance class (", unitsIn, ")", sep = ""),
main = "Distribution of distance data",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- list(count.table.full)
##for each season, determine frequencies
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
ySeason <- yMat
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with naive occupancy based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
##reset to original values
if(plot.freq || plot.distance) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"dist.sums.full" = dist.sums.full,
"dist.table.seasons" = dist.table.seasons,
"dist.names" = dist.names,
"n.dist.classes" = n.dist.classes,
"out.freqs" = out.freqs,
"out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = FALSE)
class(out.count) <- "countDist"
return(out.count)
}
##for unmarkedFitGDS
countDist.unmarkedFitGDS <- function(object, plot.freq = TRUE, plot.distance = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- 1
##for GDS - several visits in single season
nvisits <- object@data@numPrimary
##visits per season
n.visits.season <- 1
##distance classes
dist.classes <- object@[email protected]
##number of distance classes
n.dist.classes <- length(dist.classes) - 1
##units
unitsIn <- object@data@unitsIn
##create string of names
dist.names <- rep(NA, n.dist.classes)
for(i in 1:n.dist.classes){
dist.names[i] <- paste(dist.classes[i], "-", dist.classes[i+1], sep = "")
}
##collapse yMat into a single vector
yVec <- as.vector(yMat)
##determine size of plot window
##when both types are requested
if(plot.freq && plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 2
oldpar <- par(mfrow = c(nRows, nCols))
}
##when single type is requested
if(plot.freq && !plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
##when single type is requested
if(!plot.freq && plot.distance) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
##summarize counts
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = "Distribution of raw counts",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##summarize counts per distance
dist.sums.full <- rep(NA, n.dist.classes)
##create matrix to hold indices of visits x dist.classes
mat.dist <- matrix(1:(nvisits*n.dist.classes),
nrow = n.dist.classes,
ncol = nvisits)
for(j in 1:n.dist.classes) {
dist.sums.full[j] <- sum(colSums(yMat[, mat.dist[j, ]], na.rm = TRUE), na.rm = TRUE)
}
names(dist.sums.full) <- dist.names
dist.table.seasons <- list(dist.sums.full)
names(dist.table.seasons) <- "season1"
if(plot.distance) {
##create histogram
barplot(dist.sums.full, ylab = "Frequency",
xlab = paste("Distance class (", unitsIn, ")", sep = ""),
main = "Distribution of distance data",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- list(count.table.full)
##for each season, determine frequencies
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
ySeason <- yMat
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with naive occupancy based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
##reset to original values
if(plot.freq || plot.distance) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"dist.sums.full" = dist.sums.full,
"dist.table.seasons" = dist.table.seasons,
"dist.names" = dist.names,
"n.dist.classes" = n.dist.classes,
"out.freqs" = out.freqs,
"out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = FALSE)
class(out.count) <- "countDist"
return(out.count)
}
##for unmarkedFrameDSO
countDist.unmarkedFrameDSO <- function(object, plot.freq = TRUE, plot.distance = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1,
plot.seasons = FALSE, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- object@numPrimary
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
n.columns <- ncol(yMat)
seasonNames <- paste("season", 1:n.seasons, sep = "")
##visits per season
n.visits.season <- 1
##distance breaks
dist.breaks <- [email protected]
##number of distance classes
n.dist.classes <- length(dist.breaks) - 1
##units
unitsIn <- object@unitsIn
##create string of names
dist.names <- rep(NA, n.dist.classes)
for(i in 1:n.dist.classes){
dist.names[i] <- paste(dist.breaks[i], "-", dist.breaks[i+1], sep = "")
}
##determine size of plot window
##when two types are requested
if(plot.freq && plot.distance && !plot.seasons) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 2
oldpar <- par(mfrow = c(nRows, nCols))
}
if(plot.freq && !plot.distance && !plot.seasons) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
if(!plot.freq && plot.distance && !plot.seasons) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
##if only season-specific plots are requested
if(plot.seasons) {
if(!plot.freq && !plot.distance) {
##determine arrangement of plots in matrix
if(n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
}
if(plot.freq && !plot.distance || !plot.freq && plot.distance) {
##determine arrangement of plots in matrix
if(n.seasons >= 11) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
}
if(plot.freq && plot.distance) {
##determine arrangement of plots in matrix
if(n.seasons >= 10) {
n.seasons.adj <- 10
warning("\nOnly first 10 seasons are plotted\n")
}
}
if(n.seasons.adj <= 12) {
##if n.seasons < 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
##distances across years
yVec <- as.vector(yMat)
##summarize counts
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (", n.seasons, " seasons combined)", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##columns for each season
col.seasons <- seq(1, n.columns, by = n.dist.classes)
yMat.seasons <- vector(mode = "list", length = n.seasons)
names(yMat.seasons) <- seasonNames
minusOne <- n.dist.classes - 1
##iterate over each season
for(i in 1:n.seasons) {
##extract yMat for each year
yMat1 <- yMat[, col.seasons[i]:(col.seasons[i]+minusOne)]
yMat.seasons[[i]] <- yMat1
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yMat.seasons, FUN = function(i) all(is.na(i)))
##collapse list into matrix
fullData <- do.call("rbind", yMat.seasons)
##summarize counts per distance
dist.sums.full <- colSums(fullData, na.rm = TRUE)
names(dist.sums.full) <- dist.names
if(plot.distance) {
##create histogram
barplot(dist.sums.full, ylab = "Frequency",
xlab = paste("Distance class (", unitsIn, ")", sep = ""),
main = paste("Distribution of distance data (", n.seasons, " seasons combined)", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
dist.table.seasons <- vector("list", n.seasons)
names(dist.table.seasons) <- seasonNames
for(i in 1:n.seasons) {
yMat2 <- yMat.seasons[[i]]
##summarize counts per distance
dist.sums.season <- colSums(yMat2, na.rm = TRUE)
names(dist.sums.season) <- dist.names
##replace with NA if not sampled
if(y.seasonsNA[i]) {
dist.sums.season[1:length(dist.names)] <- NA
}
dist.table.seasons[[i]] <- dist.sums.season
##check for missing season
if(y.seasonsNA[i]) {next} #skip to next season if current season not sampled
if(plot.seasons) {
##create histogram
barplot(dist.sums.season, ylab = "Frequency",
xlab = paste("Distance class (", unitsIn, ")", sep = ""),
main = paste("Distribution of distance data (season ", i, ")", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- lapply(yMat.seasons, table)
##for each season, determine frequencies
out.seasons <- vector(mode = "list", length = n.seasons)
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
for(i in 1:n.seasons) {
ySeason <- yMat.seasons[[i]]
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
##reset to original values
if(plot.freq || plot.distance || plot.seasons) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"dist.sums.full" = dist.sums.full,
"dist.table.seasons" = dist.table.seasons,
"dist.names" = dist.names,
"n.dist.classes" = n.dist.classes,
"out.freqs" = out.freqs,
"out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = y.seasonsNA)
class(out.count) <- "countDist"
return(out.count)
}
##for unmarkedFitDSO
countDist.unmarkedFitDSO <- function(object, plot.freq = TRUE, plot.distance = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1,
plot.seasons = FALSE, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- object@data@numPrimary
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
n.columns <- ncol(yMat)
seasonNames <- paste("season", 1:n.seasons, sep = "")
##visits per season
n.visits.season <- 1
##distance breaks
dist.breaks <- object@[email protected]
##number of distance classes
n.dist.classes <- length(dist.breaks) - 1
##units
unitsIn <- object@data@unitsIn
##create string of names
dist.names <- rep(NA, n.dist.classes)
for(i in 1:n.dist.classes){
dist.names[i] <- paste(dist.breaks[i], "-", dist.breaks[i+1], sep = "")
}
##determine size of plot window
##when two types are requested
if(plot.freq && plot.distance && !plot.seasons) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 2
oldpar <- par(mfrow = c(nRows, nCols))
}
if(plot.freq && !plot.distance && !plot.seasons) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
if(!plot.freq && plot.distance && !plot.seasons) {
##reset graphics parameters and save in object
nRows <- 1
nCols <- 1
oldpar <- par(mfrow = c(nRows, nCols))
}
##if only season-specific plots are requested
if(plot.seasons) {
if(!plot.freq && !plot.distance) {
##determine arrangement of plots in matrix
if(n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
}
    if((plot.freq && !plot.distance) || (!plot.freq && plot.distance)) {
##determine arrangement of plots in matrix
if(n.seasons >= 11) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
}
if(plot.freq && plot.distance) {
##determine arrangement of plots in matrix
if(n.seasons >= 10) {
n.seasons.adj <- 10
warning("\nOnly first 10 seasons are plotted\n")
}
}
if(n.seasons.adj <= 12) {
##if n.seasons < 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
  ##counts across all seasons combined
yVec <- as.vector(yMat)
##summarize counts
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (", n.seasons, " seasons combined)", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##columns for each season
col.seasons <- seq(1, n.columns, by = n.dist.classes)
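  ##e.g., with 4 distance classes and 3 seasons (one visit per season), yMat
  ##has 12 columns and col.seasons = c(1, 5, 9), i.e., the first column of
  ##each season's block of distance classes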
yMat.seasons <- vector(mode = "list", length = n.seasons)
names(yMat.seasons) <- seasonNames
minusOne <- n.dist.classes - 1
##iterate over each season
for(i in 1:n.seasons) {
##extract yMat for each season
yMat1 <- yMat[, col.seasons[i]:(col.seasons[i]+minusOne)]
yMat.seasons[[i]] <- yMat1
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yMat.seasons, FUN = function(i) all(is.na(i)))
##collapse list into matrix
fullData <- do.call("rbind", yMat.seasons)
##summarize counts per distance
dist.sums.full <- colSums(fullData, na.rm = TRUE)
names(dist.sums.full) <- dist.names
if(plot.distance) {
##create histogram
barplot(dist.sums.full, ylab = "Frequency",
xlab = paste("Distance class (", unitsIn, ")", sep = ""),
main = paste("Distribution of distance data (", n.seasons, " seasons combined)", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
dist.table.seasons <- vector("list", n.seasons)
names(dist.table.seasons) <- seasonNames
for(i in 1:n.seasons) {
yMat2 <- yMat.seasons[[i]]
##summarize counts per distance
dist.sums.season <- colSums(yMat2, na.rm = TRUE)
names(dist.sums.season) <- dist.names
##replace with NA if not sampled
if(y.seasonsNA[i]) {
dist.sums.season[1:length(dist.names)] <- NA
}
dist.table.seasons[[i]] <- dist.sums.season
##check for missing season
if(y.seasonsNA[i]) {next} #skip to next season if current season not sampled
if(plot.seasons) {
##create histogram
barplot(dist.sums.season, ylab = "Frequency",
xlab = paste("Distance class (", unitsIn, ")", sep = ""),
main = paste("Distribution of distance data (season ", i, ")", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- lapply(yMat.seasons, table)
##for each season, determine frequencies
out.seasons <- vector(mode = "list", length = n.seasons)
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
for(i in 1:n.seasons) {
ySeason <- yMat.seasons[[i]]
    ##flag sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
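  ##note on the duplicated( ) counts above: each index vector returned by
  ##which( ) holds unique site numbers, so duplicates in the concatenated
  ##vector correspond to sites present in both groups, e.g.,
  ##sum(duplicated(c(c(2, 5, 7), c(1, 2, 5)))) returns 2 (sites 2 and 5)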
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
##reset to original values
  if(plot.freq || plot.distance || plot.seasons) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"dist.sums.full" = dist.sums.full,
"dist.table.seasons" = dist.table.seasons,
"dist.names" = dist.names,
"n.dist.classes" = n.dist.classes,
"out.freqs" = out.freqs,
"out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = y.seasonsNA)
class(out.count) <- "countDist"
return(out.count)
}
##print method
print.countDist <- function(x, digits = 2, ...) {
if(x$n.seasons == 1) {
cat("\nSummary of counts:\n")
count.mat <- matrix(x$count.table.full, nrow = 1)
colnames(count.mat) <- names(x$count.table.full)
rownames(count.mat) <- "Frequency"
print(count.mat)
cat("\nSummary of distance data:\n")
out.mat <- matrix(x$dist.sums.full, nrow = 1)
colnames(out.mat) <- names(x$dist.sums.full)
rownames(out.mat) <- "Frequency"
print(out.mat)
cat("\nProportion of sites with at least one detection:\n", round(x$out.props[, "naive.occ"], digits), "\n\n")
cat("Frequencies of sites with detections:\n")
##add matrix of frequencies
print(x$out.freqs)
}
if(x$n.seasons > 1) {
cat("\nSummary of counts across", x$n.seasons, "seasons:\n")
count.mat <- matrix(x$count.table.full, nrow = 1)
colnames(count.mat) <- names(x$count.table.full)
rownames(count.mat) <- "Frequency"
print(count.mat)
##if some seasons have not been sampled
if(any(x$missing.seasons)) {
if(sum(x$missing.seasons) == 1) {
cat("\nNote: season", which(x$missing.seasons), "was not sampled\n")
} else {
      cat("\nNote: seasons",
          paste(which(x$missing.seasons), collapse = ", "),
          "were not sampled\n")
}
}
cat("\nSummary of counts for each distance class across", x$n.seasons, "seasons:\n")
dist.mat <- matrix(x$dist.sums.full, nrow = 1)
colnames(dist.mat) <- names(x$dist.sums.full)
rownames(dist.mat) <- "Frequency"
print(dist.mat)
cat("\n")
cat("\nSeason-specific counts for each distance class: \n")
cat("\n")
for(i in 1:x$n.seasons) {
if(!x$missing.seasons[i]) {
cat("Season", i, "\n")
} else {
cat("Season", i, "(no sites sampled)", "\n")
}
temp.tab <- x$dist.table.seasons[[i]]
out.mat <- matrix(temp.tab, nrow = 1)
colnames(out.mat) <- names(temp.tab)
rownames(out.mat) <- "Frequency"
print(out.mat)
cat("--------\n\n")
}
##cat("\n")
cat("Frequencies of sites with detections, extinctions, and colonizations:\n")
##add matrix of frequencies
print(x$out.freqs)
}
}
## (end of countDist.R; the functions below are from countHist.R)
##summarize detection histories and count data
countHist <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...){
UseMethod("countHist", object)
}
countHist.default <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...){
stop("\nFunction not yet defined for this object class\n")
}
##for unmarkedFramePCount
countHist.unmarkedFramePCount <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec <- as.vector(yMat)
##summarize detection histories
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = "Distribution of raw counts",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- list(count.table.full)
names(count.table.seasons) <- seasonNames
##summarize count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
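  ##e.g., a site with counts c(0, 2, 1) across 3 visits is collapsed to the
  ##string "0|2|1"; table( ) then tallies the frequency of each unique history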
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
  ##flag sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = FALSE)
class(out.count) <- "countHist"
return(out.count)
}
##for unmarkedFitPCount
countHist.unmarkedFitPCount <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec <- as.vector(yMat)
##summarize detection histories
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = "Distribution of raw counts",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- list(count.table.full)
names(count.table.seasons) <- seasonNames
##count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
  ##flag sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = FALSE)
class(out.count) <- "countHist"
return(out.count)
}
##for unmarkedFrameGPC
countHist.unmarkedFrameGPC <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec <- as.vector(yMat)
##summarize detection histories
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = "Distribution of raw counts",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- list(count.table.full)
names(count.table.seasons) <- seasonNames
##summarize count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
  ##flag sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = FALSE)
class(out.count) <- "countHist"
return(out.count)
}
##for unmarkedFitGPC
countHist.unmarkedFitGPC <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec <- as.vector(yMat)
##summarize detection histories
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = "Distribution of raw counts",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- list(count.table.full)
names(count.table.seasons) <- seasonNames
##count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
  ##flag sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = FALSE)
class(out.count) <- "countHist"
return(out.count)
}
##for unmarkedFrameMPois
countHist.unmarkedFrameMPois <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec <- as.vector(yMat)
##summarize detection histories
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = "Distribution of raw counts",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- list(count.table.full)
names(count.table.seasons) <- seasonNames
##summarize count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
  ##flag sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = FALSE)
class(out.count) <- "countHist"
return(out.count)
}
##for unmarkedFitMPois
countHist.unmarkedFitMPois <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec <- as.vector(yMat)
##summarize detection histories
if(plot.freq) {
##create histogram
barplot(table(yVec), ylab = "Frequency", xlab = "Counts of individuals",
main = "Distribution of raw counts",
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##raw counts
count.table.full <- table(yVec, exclude = NULL, deparse.level = 0)
count.table.seasons <- list(count.table.full)
names(count.table.seasons) <- seasonNames
##summarize count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
  ##flag sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = FALSE)
class(out.count) <- "countHist"
return(out.count)
}
##for unmarkedFramePCO
countHist.unmarkedFramePCO <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1,
plot.seasons = FALSE, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- object@numPrimary
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec.full <- as.vector(yMat)
##raw counts
count.table.full <- table(yVec.full, exclude = NULL, deparse.level = 0)
##summarize count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
yVectors <- vector(mode = "list", length = n.seasons)
out.seasons <- vector(mode = "list", length = n.seasons)
count.table.seasons <- vector(mode = "list", length = n.seasons)
names(count.table.seasons) <- seasonNames
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
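  ##e.g., with 9 visits over 3 seasons (3 visits per season),
  ##vis.seq = c(1, 4, 7), the first visit of each season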
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize count histories
if(is.null(ncol(ySeason))){
ySeason <- as.matrix(ySeason)
}
yVec.season <- as.vector(ySeason)
yVectors[[i]] <- yVec.season
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
count.table.seasons[[i]] <- table(yVec.season, exclude = NULL)
    ##flag sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yVectors, FUN = function(i) all(is.na(i)))
##if only season-specific plots are requested
if(!plot.freq && plot.seasons) {
##determine arrangement of plots in matrix
if(n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
if(n.seasons.adj <= 12) {
##if n.seasons < 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
##if both plots for seasons and combined are requested
##summarize detection histories
if(plot.freq) {
if(!plot.seasons) {
nRows <- 1
nCols <- 1
}
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 11) {
if(n.seasons.adj >= 9) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 6) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 4) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 2) {
nRows <- 3
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
##histogram for data combined across seasons
barplot(table(yVec.full), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (", n.seasons, " seasons combined)", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##iterate over each season
if(plot.seasons) {
for(k in 1:n.seasons.adj) {
##check for missing season
if(y.seasonsNA[k]) {next} #skip to next season if current season not sampled
##histogram for data combined across seasons
barplot(table(yVectors[[k]]), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (season ", k, ")", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
##reset to original values
  if(plot.freq || plot.seasons) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = y.seasonsNA)
class(out.count) <- "countHist"
return(out.count)
}
##for unmarkedFitPCO
countHist.unmarkedFitPCO <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1,
plot.seasons = FALSE, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- object@data@numPrimary
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec.full <- as.vector(yMat)
##raw counts
count.table.full <- table(yVec.full, exclude = NULL, deparse.level = 0)
##summarize count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
yVectors <- vector(mode = "list", length = n.seasons)
out.seasons <- vector(mode = "list", length = n.seasons)
count.table.seasons <- vector(mode = "list", length = n.seasons)
names(count.table.seasons) <- seasonNames
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize count histories
if(is.null(ncol(ySeason))){
ySeason <- as.matrix(ySeason)
}
yVec.season <- as.vector(ySeason)
yVectors[[i]] <- yVec.season
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
count.table.seasons[[i]] <- table(yVec.season, exclude = NULL)
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yVectors, FUN = function(i) all(is.na(i)))
##if only season-specific plots are requested
if(!plot.freq && plot.seasons) {
##determine arrangement of plots in matrix
if(n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
if(n.seasons.adj <= 12) {
      ##if n.seasons.adj <= 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
  ##if plot of counts combined across seasons is requested
if(plot.freq) {
if(!plot.seasons) {
nRows <- 1
nCols <- 1
}
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 11) {
if(n.seasons.adj >= 9) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 6) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 4) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 2
nCols <- 2
} else {
          if(n.seasons.adj == 2) {
            nRows <- 3
            nCols <- 1
          } else {
            ##single season plotted along with combined data
            nRows <- 2
            nCols <- 1
          }
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
##histogram for data combined across seasons
barplot(table(yVec.full), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (", n.seasons, " seasons combined)", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##iterate over each season
if(plot.seasons) {
for(k in 1:n.seasons.adj) {
##check for missing season
if(y.seasonsNA[k]) {next} #skip to next season if current season not sampled
      ##histogram of counts for each season
barplot(table(yVectors[[k]]), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (season ", k, ")", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
##reset to original values
  if(plot.freq || plot.seasons) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = y.seasonsNA)
class(out.count) <- "countHist"
return(out.count)
}
##for unmarkedFrameGMM
countHist.unmarkedFrameGMM <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1,
plot.seasons = FALSE, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- object@numPrimary
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec.full <- as.vector(yMat)
##raw counts
count.table.full <- table(yVec.full, exclude = NULL, deparse.level = 0)
##summarize count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
yVectors <- vector(mode = "list", length = n.seasons)
out.seasons <- vector(mode = "list", length = n.seasons)
count.table.seasons <- vector(mode = "list", length = n.seasons)
names(count.table.seasons) <- seasonNames
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize count histories
if(is.null(ncol(ySeason))){
ySeason <- as.matrix(ySeason)
}
yVec.season <- as.vector(ySeason)
yVectors[[i]] <- yVec.season
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
count.table.seasons[[i]] <- table(yVec.season, exclude = NULL)
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yVectors, FUN = function(i) all(is.na(i)))
##if only season-specific plots are requested
if(!plot.freq && plot.seasons) {
##determine arrangement of plots in matrix
if(n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
if(n.seasons.adj <= 12) {
      ##if n.seasons.adj <= 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
  ##if plot of counts combined across seasons is requested
if(plot.freq) {
if(!plot.seasons) {
nRows <- 1
nCols <- 1
}
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 11) {
if(n.seasons.adj >= 9) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 6) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 4) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 2
nCols <- 2
} else {
          if(n.seasons.adj == 2) {
            nRows <- 3
            nCols <- 1
          } else {
            ##single season plotted along with combined data
            nRows <- 2
            nCols <- 1
          }
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
##histogram for data combined across seasons
barplot(table(yVec.full), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (", n.seasons, " seasons combined)", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##iterate over each season
if(plot.seasons) {
for(k in 1:n.seasons.adj) {
##check for missing season
if(y.seasonsNA[k]) {next} #skip to next season if current season not sampled
      ##histogram of counts for each season
barplot(table(yVectors[[k]]), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (season ", k, ")", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
##reset to original values
  if(plot.freq || plot.seasons) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = y.seasonsNA)
class(out.count) <- "countHist"
return(out.count)
}
##for unmarkedFitGMM
countHist.unmarkedFitGMM <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1,
plot.seasons = FALSE, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- object@data@numPrimary
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec.full <- as.vector(yMat)
##raw counts
count.table.full <- table(yVec.full, exclude = NULL, deparse.level = 0)
##summarize count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
yVectors <- vector(mode = "list", length = n.seasons)
out.seasons <- vector(mode = "list", length = n.seasons)
count.table.seasons <- vector(mode = "list", length = n.seasons)
names(count.table.seasons) <- seasonNames
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize count histories
if(is.null(ncol(ySeason))){
ySeason <- as.matrix(ySeason)
}
yVec.season <- as.vector(ySeason)
yVectors[[i]] <- yVec.season
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
count.table.seasons[[i]] <- table(yVec.season, exclude = NULL)
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yVectors, FUN = function(i) all(is.na(i)))
##if only season-specific plots are requested
if(!plot.freq && plot.seasons) {
##determine arrangement of plots in matrix
if(n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
if(n.seasons.adj <= 12) {
      ##if n.seasons.adj <= 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
  ##if plot of counts combined across seasons is requested
if(plot.freq) {
if(!plot.seasons) {
nRows <- 1
nCols <- 1
}
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 11) {
if(n.seasons.adj >= 9) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 6) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 4) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 2
nCols <- 2
} else {
          if(n.seasons.adj == 2) {
            nRows <- 3
            nCols <- 1
          } else {
            ##single season plotted along with combined data
            nRows <- 2
            nCols <- 1
          }
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
##histogram for data combined across seasons
barplot(table(yVec.full), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (", n.seasons, " seasons combined)", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##iterate over each season
if(plot.seasons) {
for(k in 1:n.seasons.adj) {
##check for missing season
if(y.seasonsNA[k]) {next} #skip to next season if current season not sampled
      ##histogram of counts for each season
barplot(table(yVectors[[k]]), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (season ", k, ")", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
##reset to original values
  if(plot.freq || plot.seasons) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = y.seasonsNA)
class(out.count) <- "countHist"
return(out.count)
}
##multmixOpen
countHist.unmarkedFrameMMO <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1,
plot.seasons = FALSE, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- object@numPrimary
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec.full <- as.vector(yMat)
##raw counts
count.table.full <- table(yVec.full, exclude = NULL, deparse.level = 0)
##summarize count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
yVectors <- vector(mode = "list", length = n.seasons)
out.seasons <- vector(mode = "list", length = n.seasons)
count.table.seasons <- vector(mode = "list", length = n.seasons)
names(count.table.seasons) <- seasonNames
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize count histories
if(is.null(ncol(ySeason))){
ySeason <- as.matrix(ySeason)
}
yVec.season <- as.vector(ySeason)
yVectors[[i]] <- yVec.season
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
count.table.seasons[[i]] <- table(yVec.season, exclude = NULL)
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yVectors, FUN = function(i) all(is.na(i)))
##if only season-specific plots are requested
if(!plot.freq && plot.seasons) {
##determine arrangement of plots in matrix
if(n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
if(n.seasons.adj <= 12) {
      ##if n.seasons.adj <= 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
  ##if plot of counts combined across seasons is requested
if(plot.freq) {
if(!plot.seasons) {
nRows <- 1
nCols <- 1
}
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 11) {
if(n.seasons.adj >= 9) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 6) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 4) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 2
nCols <- 2
} else {
          if(n.seasons.adj == 2) {
            nRows <- 3
            nCols <- 1
          } else {
            ##single season plotted along with combined data
            nRows <- 2
            nCols <- 1
          }
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
##histogram for data combined across seasons
barplot(table(yVec.full), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (", n.seasons, " seasons combined)", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##iterate over each season
if(plot.seasons) {
for(k in 1:n.seasons.adj) {
##check for missing season
if(y.seasonsNA[k]) {next} #skip to next season if current season not sampled
      ##histogram of counts for each season
barplot(table(yVectors[[k]]), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (season ", k, ")", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
##reset to original values
  if(plot.freq || plot.seasons) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = y.seasonsNA)
class(out.count) <- "countHist"
return(out.count)
}
##for unmarkedFitMMO
countHist.unmarkedFitMMO <- function(object, plot.freq = TRUE,
cex.axis = 1, cex.lab = 1, cex.main = 1,
plot.seasons = FALSE, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- object@data@numPrimary
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##collapse yMat into a single vector
yVec.full <- as.vector(yMat)
##raw counts
count.table.full <- table(yVec.full, exclude = NULL, deparse.level = 0)
##summarize count histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
yVectors <- vector(mode = "list", length = n.seasons)
out.seasons <- vector(mode = "list", length = n.seasons)
count.table.seasons <- vector(mode = "list", length = n.seasons)
names(count.table.seasons) <- seasonNames
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize count histories
if(is.null(ncol(ySeason))){
ySeason <- as.matrix(ySeason)
}
yVec.season <- as.vector(ySeason)
yVectors[[i]] <- yVec.season
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = "|"))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
count.table.seasons[[i]] <- table(yVec.season, exclude = NULL)
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yVectors, FUN = function(i) all(is.na(i)))
##if only season-specific plots are requested
if(!plot.freq && plot.seasons) {
##determine arrangement of plots in matrix
if(n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
if(n.seasons.adj <= 12) {
      ##if n.seasons.adj <= 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
  ##if plot of counts combined across seasons is requested
if(plot.freq) {
if(!plot.seasons) {
nRows <- 1
nCols <- 1
}
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 11) {
if(n.seasons.adj >= 9) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 6) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 4) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 2
nCols <- 2
} else {
          if(n.seasons.adj == 2) {
            nRows <- 3
            nCols <- 1
          } else {
            ##single season plotted along with combined data
            nRows <- 2
            nCols <- 1
          }
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
##histogram for data combined across seasons
barplot(table(yVec.full), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (", n.seasons, " seasons combined)", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##iterate over each season
if(plot.seasons) {
for(k in 1:n.seasons.adj) {
##check for missing season
if(y.seasonsNA[k]) {next} #skip to next season if current season not sampled
      ##histogram of counts for each season
barplot(table(yVectors[[k]]), ylab = "Frequency", xlab = "Counts of individuals",
main = paste("Distribution of raw counts (season ", k, ")", sep = ""),
cex.axis = cex.axis, cex.names = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
##reset to original values
if(plot.freq || plot.seasons) {
on.exit(par(oldpar))
}
out.count <- list("count.table.full" = count.table.full,
"count.table.seasons" = count.table.seasons,
"hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = y.seasonsNA)
class(out.count) <- "countHist"
return(out.count)
}
##print method
print.countHist <- function(x, digits = 2, ...) {
if(x$n.seasons == 1) {
##convert NA to . for nicer printing
hist.names <- names(x$hist.table.full)
names(x$hist.table.full) <- gsub(pattern = "NA",
replacement = ".",
x = hist.names)
cat("\nSummary of counts:\n")
count.mat <- matrix(x$count.table.full, nrow = 1)
colnames(count.mat) <- names(x$count.table.full)
rownames(count.mat) <- "Frequency"
print(count.mat)
cat("\nSummary of count histories:\n")
##account for number of visits, number of unique histories, number of separators
num.chars <- nchar(paste(names(x$hist.table.full), collapse = ""))
if(num.chars >= 80) {
cat("\nNote: Count histories exceed 80 characters and are not displayed\n")
} else {
out.mat <- matrix(x$hist.table.full, nrow = 1)
colnames(out.mat) <- names(x$hist.table.full)
rownames(out.mat) <- "Frequency"
print(out.mat)
}
cat("\nProportion of sites with at least one detection:\n", round(x$out.props[, "naive.occ"], digits), "\n\n")
cat("Frequencies of sites with detections:\n")
##add matrix of frequencies
print(x$out.freqs)
}
if(x$n.seasons > 1) {
cat("\nSummary of counts (", x$n.seasons, " seasons combined): \n", sep = "")
count.mat <- matrix(x$count.table.full, nrow = 1)
colnames(count.mat) <- names(x$count.table.full)
rownames(count.mat) <- "Frequency"
print(count.mat)
cat("\nSummary of count histories:\n")
if(x$n.visits.season == 1) {
visits <- 1
} else {
visits <- x$n.visits.season - 1
}
num.chars <- nchar(paste(names(x$hist.table.full), collapse = ""))
if(num.chars >= 80) {
cat("\nNote: Count histories exceed 80 characters and are not displayed\n")
} else {
out.mat <- matrix(x$hist.table.full, nrow = 1)
colnames(out.mat) <- names(x$hist.table.full)
rownames(out.mat) <- "Frequency"
print(out.mat)
}
##if some seasons have not been sampled
if(any(x$missing.seasons)) {
if(sum(x$missing.seasons) == 1) {
cat("\nNote: season", which(x$missing.seasons), "was not sampled\n")
} else {
cat("\nNote: seasons",
paste(which(x$missing.seasons), collapse = ", "),
"were not sampled\n")
}
}
cat("\nSeason-specific counts: \n")
cat("\n")
for(i in 1:x$n.seasons) {
if(!x$missing.seasons[i]) {
cat("Season", i, "\n")
} else {
cat("Season", i, "(no sites sampled)", "\n")
}
temp.tab <- x$count.table.seasons[[i]]
out.mat <- matrix(temp.tab, nrow = 1)
colnames(out.mat) <- names(temp.tab)
rownames(out.mat) <- "Frequency"
print(out.mat)
cat("--------\n\n")
}
cat("Frequencies of sites with detections, extinctions, and colonizations:\n")
##add matrix of frequencies
print(x$out.freqs)
}
}
## ---- end of file: AICcmodavg/R/countHist.R ----
##covariance diagnostic for N-mixture model
##code slightly modified from Dennis et al. 2015: Biometrics 71: 237-246
##values <= 0 suggest lambda is infinite (data too sparse) and is likely
##to introduce problems during model fitting
##generic
covDiag <- function(object, ...){
UseMethod("covDiag", object)
}
covDiag.default <- function(object, ...){
stop("\nFunction not yet defined for this object class\n")
}
##for unmarkedFramePcount
covDiag.unmarkedFramePCount <- function(object, ...){
yMat <- object@y
p1 <- ct <- 0
nvisits <- ncol(yMat)
for(i in 1:(nvisits - 1)){
for(j in (i+1):nvisits){
p1 <- p1 + yMat[,i]*yMat[,j]
ct <- ct+1
}
}
cov.diag <- mean(p1, na.rm = TRUE)/ct - mean(yMat, na.rm = TRUE)^2
if(cov.diag <= 0) {
msg <- "Warning: lambda is infinite, data too sparse"
} else {
msg <- NULL
}
out <- list("cov.diag" = cov.diag,
"message" = msg)
class(out) <- "covDiag"
return(out)
}
##pcount
covDiag.unmarkedFitPCount <- function(object, ...){
yMat <- object@data@y
p1 <- ct <- 0
nvisits <- ncol(yMat)
for(i in 1:(nvisits - 1)){
for(j in (i+1):nvisits){
p1 <- p1 + yMat[,i]*yMat[,j]
ct <- ct+1
}
}
cov.diag <- mean(p1, na.rm = TRUE)/ct - mean(yMat, na.rm = TRUE)^2
if(cov.diag <= 0) {
msg <- "Warning: lambda is infinite, data too sparse"
} else {
msg <- NULL
}
out <- list("cov.diag" = cov.diag,
"message" = msg)
class(out) <- "covDiag"
return(out)
}
print.covDiag <- function(x, digits = 4, ...) {
cat("\nCovariance diagnostic: ", round(x$cov.diag, digits), "\n")
if(!is.null(x$message)) {
cat(x$message, "\n")
}
}
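## A minimal sketch of the diagnostic itself, computed on an assumed toy
## count matrix (sites x visits) rather than an unmarked object: the
## statistic is the mean pairwise product of counts across visits minus
## the squared mean count, and values <= 0 flag data too sparse to
## estimate lambda.

```r
## toy count matrix: 4 sites x 3 visits (assumed data, for illustration only)
yMat <- matrix(c(0, 1, 2,
                 1, 0, 0,
                 3, 2, 4,
                 0, 0, 1), nrow = 4, byrow = TRUE)
p1 <- ct <- 0
nvisits <- ncol(yMat)
for(i in 1:(nvisits - 1)) {
  for(j in (i + 1):nvisits) {
    ##accumulate products of counts for each pair of visits
    p1 <- p1 + yMat[, i] * yMat[, j]
    ct <- ct + 1
  }
}
cov.diag <- mean(p1, na.rm = TRUE)/ct - mean(yMat, na.rm = TRUE)^2
```

## Here cov.diag is positive, so no sparseness warning would be issued.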
## ---- end of file: AICcmodavg/R/covDiag.R ----
##summarize detection histories and count data
detHist <- function(object, ...){
UseMethod("detHist", object)
}
detHist.default <- function(object, ...){
stop("\nFunction not yet defined for this object class\n")
}
##for unmarkedFrameOccu (same as data format for occuRN)
detHist.unmarkedFrameOccu <- function(object, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##summarize detection histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = 1, "missing.seasons" = FALSE)
class(out.det) <- "detHist"
return(out.det)
}
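## The summarization pattern above (paste each site's row of 0/1 records
## into a string, then table the strings) can be sketched on an assumed
## toy detection matrix:

```r
## toy detection matrix: 3 sites x 3 visits (assumed data)
yMat <- matrix(c(0, 1, 0,
                 1, 1, 0,
                 0, 0, 0), nrow = 3, byrow = TRUE)
##one detection history string per site, e.g. "010"
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table <- table(hist.full, deparse.level = 0)
##naive occupancy: proportion of sites with at least one detection
naive.occ <- mean(rowSums(yMat, na.rm = TRUE) > 0)
```

## Each of the three histories ("000", "010", "110") occurs once, and
## naive.occ is 2/3.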
##for occu
detHist.unmarkedFitOccu <- function(object, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##summarize detection histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = 1, "missing.seasons" = FALSE)
class(out.det) <- "detHist"
return(out.det)
}
##for unmarkedFrameOccuFP
detHist.unmarkedFrameOccuFP <- function(object, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##summarize detection histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = 1, "missing.seasons" = FALSE)
class(out.det) <- "detHist"
return(out.det)
}
##for occuFP
detHist.unmarkedFitOccuFP <- function(object, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##summarize detection histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = 1, "missing.seasons" = FALSE)
class(out.det) <- "detHist"
return(out.det)
}
##for occuRN
detHist.unmarkedFitOccuRN <- function(object, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- 1
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##summarize detection histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
hist.table.seasons[[1]] <- hist.table.full
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = yMat, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(yMat, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(yMat)) == ncol(yMat)
##number of sites sampled
out.freqs[1, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[1, 2] <- sum(det.sum)
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = 1, "missing.seasons" = FALSE)
class(out.det) <- "detHist"
return(out.det)
}
##for unmarkedMultFrame
detHist.unmarkedMultFrame <- function(object, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- object@numPrimary
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##summarize detection histories
##starting and ending columns
colStarts <- seq(from = 1, to = nvisits, by = n.visits.season)
colEnds <- colStarts + (n.visits.season - 1)
yrows <- list( )
##add check for seasons not sampled
y.seasons <- list( )
##subsequent seasons
for(i in 1:n.seasons) {
yrows[[i]] <- apply(yMat[, colStarts[i]:colEnds[i]], MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
y.seasons[[i]] <- yMat[, colStarts[i]:colEnds[i]]
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(y.seasons, FUN = function(i) all(is.na(i)))
##organize and paste rows
hist.full <- do.call(what = "paste", args = c(yrows, sep = "-"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
out.seasons <- vector(mode = "list", length = n.seasons)
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize detection histories
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
##detections
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = 1, "missing.seasons" = y.seasonsNA)
class(out.det) <- "detHist"
return(out.det)
}
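## The colonization/extinction counts above rely on
## sum(duplicated(c(x, y))) to get the size of the intersection of two
## sets of site indices; a sketch with assumed site indices:

```r
## sites without detections in season 1 and sites with detections in
## season 2 (assumed indices, for illustration only)
none1 <- c(1, 4, 5)
some2 <- c(2, 4, 5)
##sites 4 and 5 switched from undetected to detected: 2 colonizations
n.colonized <- sum(duplicated(c(some2, none1)))
```

## This works because each index vector has no internal duplicates, so
## the only duplicates in the concatenation are the shared sites.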
##for colext
detHist.unmarkedFitColExt <- function(object, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- object@data@numPrimary
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##summarize detection histories
##starting and ending columns
colStarts <- seq(from = 1, to = nvisits, by = n.visits.season)
colEnds <- colStarts + (n.visits.season - 1)
yrows <- list( )
##add check for seasons not sampled
y.seasons <- list( )
##subsequent seasons
for(i in 1:n.seasons) {
yrows[[i]] <- apply(yMat[, colStarts[i]:colEnds[i]], MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
y.seasons[[i]] <- yMat[, colStarts[i]:colEnds[i]]
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(y.seasons, FUN = function(i) all(is.na(i)))
##organize and paste rows
hist.full <- do.call(what = "paste", args = c(yrows, sep = "-"))
hist.table.full <- table(hist.full, deparse.level = 0)
##for each season, determine frequencies
out.seasons <- vector(mode = "list", length = n.seasons)
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize detection histories
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
##detections
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = 1, "missing.seasons" = y.seasonsNA)
class(out.det) <- "detHist"
return(out.det)
}
##for unmarkedFrameOccuMulti
detHist.unmarkedFrameOccuMulti <- function(object, ...) {
##extract species detection data
speciesList <- object@ylist
nspecies <- length(speciesList)
speciesNames <- names(object@ylist)
if(is.null(speciesNames)) {
speciesNames <- paste("species", 1:nspecies, sep = "")
}
n.seasons <- 1
nsites <- nrow(speciesList[[1]])
nvisits <- ncol(speciesList[[1]])
##visits per season
n.visits.season <- nvisits/n.seasons
##generic name to include in detection history
genericNames <- letters[1:nspecies]
##combine detection histories of each species
histList <- vector(mode = "list", length = nspecies)
for(sp in 1:nspecies) {
detVector <- as.vector(speciesList[[sp]])
histList[[sp]] <- ifelse(detVector == 1, genericNames[sp], detVector)
}
comboDet <- do.call("paste", c(histList, sep = ""))
##number of co-occurrences in any given survey across sites
coOcc <- table(comboDet)
comboMat <- matrix(comboDet, nrow = nsites, ncol = nvisits)
##detection histories
comboHist <- apply(comboMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "-"))
hist.table.full <- table(comboHist)
##for each season, determine frequencies
out.freqs <- matrix(data = NA, ncol = 2, nrow = nspecies)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- speciesNames
##create a matrix with proportion of sites with detections
##based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
hist.table.species <- vector(mode = "list", length = nspecies)
names(hist.table.species) <- speciesNames
for(i in 1:nspecies) {
yMat <- speciesList[[i]]
##summarize detection histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.species[[i]] <- table(hist.full, deparse.level = 0)
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = speciesList[[i]], MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(speciesList[[i]], na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(speciesList[[i]])) == ncol(yMat)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[i, 2] <- sum(det.sum)
##proportion of sites with detections
out.props[i, 1] <- out.freqs[i, 2]/out.freqs[i, 1]
}
##add frequencies of co-occurrences
hist.table.species$coOcc <- coOcc
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.species" = hist.table.species,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = nspecies, "missing.seasons" = FALSE)
class(out.det) <- "detHist"
return(out.det)
}
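## The letter recoding above turns each species' 0/1 record into a
## species-specific symbol, so pasted survey codes reveal co-occurrence
## (e.g., "ab" marks a survey where both species were detected); a sketch
## with two assumed toy detection matrices:

```r
## two species, 3 sites x 2 visits (assumed data)
spA <- matrix(c(1, 0,
                0, 0,
                1, 1), nrow = 3, byrow = TRUE)
spB <- matrix(c(1, 1,
                0, 1,
                0, 0), nrow = 3, byrow = TRUE)
##recode detections (1) to "a"/"b"; non-detections stay "0"
detA <- ifelse(as.vector(spA) == 1, "a", as.vector(spA))
detB <- ifelse(as.vector(spB) == 1, "b", as.vector(spB))
combo <- paste(detA, detB, sep = "")
##frequencies of survey-level codes across all site-visits
combo.tab <- table(combo)
```

## "ab" counts the surveys where both species co-occurred; "a0" and "0b"
## count single-species detections.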
##for occuMulti
detHist.unmarkedFitOccuMulti <- function(object, ...) {
##extract species detection data
speciesList <- object@data@ylist
nspecies <- length(speciesList)
speciesNames <- names(object@data@ylist)
if(is.null(speciesNames)) {
speciesNames <- paste("species", 1:nspecies, sep = "")
}
n.seasons <- 1
nsites <- nrow(speciesList[[1]])
nvisits <- ncol(speciesList[[1]])
##visits per season
n.visits.season <- nvisits/n.seasons
##generic name to include in detection history
genericNames <- letters[1:nspecies]
##combine detection histories of each species
histList <- vector(mode = "list", length = nspecies)
for(sp in 1:nspecies) {
detVector <- as.vector(speciesList[[sp]])
histList[[sp]] <- ifelse(detVector == 1, genericNames[sp], detVector)
}
comboDet <- do.call("paste", c(histList, sep = ""))
##number of co-occurrences in any given survey across sites
coOcc <- table(comboDet)
comboMat <- matrix(comboDet, nrow = nsites, ncol = nvisits)
##detection histories
comboHist <- apply(comboMat, MARGIN = 1, FUN = function(i) paste(i, collapse = "-"))
hist.table.full <- table(comboHist)
##for each season, determine frequencies
out.freqs <- matrix(data = NA, ncol = 2, nrow = nspecies)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- speciesNames
##create a matrix with proportion of sites with detections
##based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
hist.table.species <- vector(mode = "list", length = nspecies)
names(hist.table.species) <- speciesNames
for(i in 1:nspecies) {
yMat <- speciesList[[i]]
##summarize detection histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.species[[i]] <- table(hist.full, deparse.level = 0)
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = speciesList[[i]], MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(speciesList[[i]], na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(speciesList[[i]])) == ncol(yMat)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
##number of sites with at least 1 detection
out.freqs[i, 2] <- sum(det.sum)
##proportion of sites with detections
out.props[i, 1] <- out.freqs[i, 2]/out.freqs[i, 1]
}
##add frequencies of co-occurrences
hist.table.species$coOcc <- coOcc
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.species" = hist.table.species,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = nspecies, "missing.seasons" = FALSE)
class(out.det) <- "detHist"
return(out.det)
}
##for occuMS
detHist.unmarkedFrameOccuMS <- function(object, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- object@numPrimary
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##default: no missing seasons (updated below when n.seasons > 1)
y.seasonsNA <- FALSE
##for each season, determine frequencies
out.seasons <- vector(mode = "list", length = n.seasons)
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
if(n.seasons == 1) {
##summarize detection histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.full <- table(hist.full, deparse.level = 0)
out.freqs <- matrix(data = NA, ncol = 2, nrow = 1)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize detection histories
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
##determine proportion of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
out.props <- matrix(NA, nrow = 1, ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
}
if(n.seasons > 1) {
##summarize detection histories
##starting and ending columns
colStarts <- seq(from = 1, to = nvisits, by = n.visits.season)
colEnds <- colStarts + (n.visits.season - 1)
yrows <- list( )
yMat.seasons <- vector(mode = "list", length = n.seasons)
##subsequent seasons
for(i in 1:n.seasons) {
yrows[[i]] <- apply(yMat[, colStarts[i]:colEnds[i]], MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
yMat.seasons[[i]] <- yMat[, colStarts[i]:colEnds[i]]
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yMat.seasons, FUN = function(i) all(is.na(i)))
##organize and paste rows
hist.full <- do.call(what = "paste", args = c(yrows, sep = "-"))
hist.table.full <- table(hist.full, deparse.level = 0)
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize detection histories
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
##determine number of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
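##Note on the duplicated( ) calls above: 'none' and 'some' hold unique site
##indices within a season, so duplicated(c(a, b)) flags exactly the indices
##shared by both vectors. A small worked example (hypothetical indices):
##  some2 <- c(1, 3, 5); none1 <- c(2, 3, 5)
##  sum(duplicated(c(some2, none1)))  #returns 2: sites 3 and 5 colonized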
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
}
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = 1,
"missing.seasons" = y.seasonsNA)
class(out.det) <- "detHist"
return(out.det)
}
##for occuMS
detHist.unmarkedFitOccuMS <- function(object, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- object@data@numPrimary
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
seasonNames <- paste("season", 1:n.seasons, sep = "")
##no missing season when single season
y.seasonsNA <- FALSE
##for each season, determine frequencies
out.seasons <- vector(mode = "list", length = n.seasons)
hist.table.seasons <- vector(mode = "list", length = n.seasons)
names(hist.table.seasons) <- seasonNames
if(n.seasons == 1) {
##summarize detection histories
hist.full <- apply(X = yMat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.full <- table(hist.full, deparse.level = 0)
out.freqs <- matrix(data = NA, ncol = 2, nrow = 1)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize detection histories
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
##determine number of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
out.props <- matrix(NA, nrow = 1, ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
}
if(n.seasons > 1) {
##summarize detection histories
##starting and ending columns
colStarts <- seq(from = 1, to = nvisits, by = n.visits.season)
colEnds <- colStarts + (n.visits.season - 1)
yrows <- list( )
yMat.seasons <- vector(mode = "list", length = n.seasons)
##subsequent seasons
for(i in 1:n.seasons) {
yrows[[i]] <- apply(yMat[, colStarts[i]:colEnds[i]], MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
yMat.seasons[[i]] <- yMat[, colStarts[i]:colEnds[i]]
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yMat.seasons, FUN = function(i) all(is.na(i)))
##organize and paste rows
hist.full <- do.call(what = "paste", args = c(yrows, sep = "'"))
hist.table.full <- table(hist.full, deparse.level = 0)
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
vis.seq <- seq(from = 1, to = nvisits, by = n.visits.season)
for(i in 1:n.seasons) {
col.start <- vis.seq[i]
col.end <- col.start + (n.visits.season - 1)
ySeason <- yMat[, col.start:col.end]
##summarize detection histories
det.hist <- apply(X = ySeason, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
hist.table.seasons[[i]] <- table(det.hist, deparse.level = 0)
##determine number of sites with at least 1 detection
det.sum <- apply(X = ySeason, MARGIN = 1, FUN = function(i) ifelse(sum(i, na.rm = TRUE) > 0, 1, 0))
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(ySeason, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(ySeason)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
}
out.det <- list("hist.table.full" = hist.table.full,
"hist.table.seasons" = hist.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"n.species" = 1,
"missing.seasons" = y.seasonsNA)
class(out.det) <- "detHist"
return(out.det)
}
##print method
print.detHist <- function(x, digits = 2, ...) {
##convert NA to . for nicer printing
hist.names <- names(x$hist.table.full)
names(x$hist.table.full) <- gsub(pattern = "NA",
replacement = ".",
x = hist.names)
if(identical(x$n.seasons, 1)) {
if(x$n.species > 1) {
nspecies <- x$n.species
speciesNames <- rownames(x$out.freqs)
##convert NA to . for nicer printing
for(d in 1:nspecies) {
hist.names <- names(x$hist.table.species[[d]])
names(x$hist.table.species[[d]]) <- gsub(pattern = "NA",
replacement = ".",
x = hist.names)
}
##species code in detection histories
speciesCode <- character( )
for(j in 1:nspecies) {
speciesCode[j] <- paste(speciesNames[j], " (", letters[j], ")", sep = "")
}
cat("\nSummary of detection histories: \n")
num.chars <- nchar(paste(names(x$hist.table.full), collapse = ""))
if(num.chars >= 80) {
cat("\nNote: Detection histories exceed 80 characters and are not displayed\n")
} else {
cat("(")
cat(speciesCode, sep = ", ")
cat(")\n")
out.mat <- matrix(x$hist.table.full, nrow = 1)
colnames(out.mat) <- names(x$hist.table.full)
rownames(out.mat) <- "Frequency"
print(out.mat)
}
cat("\nSpecies-specific detection histories: \n")
cat("\n")
for(i in 1:nspecies) {
cat(speciesNames[i], "\n")
temp.tab <- x$hist.table.species[[i]]
out.mat <- matrix(temp.tab, nrow = 1)
colnames(out.mat) <- names(temp.tab)
rownames(out.mat) <- "Frequency"
print(out.mat)
cat("--------\n\n")
}
cat("Frequency of co-occurrence among sites: \n")
cat("(")
cat(speciesCode, sep = ", ")
cat(")\n")
occ.tab <- x$hist.table.species$coOcc
occ.mat <- matrix(occ.tab, nrow = 1)
colnames(occ.mat) <- names(occ.tab)
rownames(occ.mat) <- "Frequency"
print(occ.mat)
cat("\nProportion of sites with at least one detection:\n")
print(x$out.props[, "naive.occ"], digits)
cat("\n")
cat("Frequencies of sites with detections:\n")
##add matrix of frequencies
print(x$out.freqs)
} else {
cat("\nSummary of detection histories: \n")
out.mat <- matrix(x$hist.table.full, nrow = 1)
colnames(out.mat) <- names(x$hist.table.full)
rownames(out.mat) <- "Frequency"
print(out.mat)
cat("\nProportion of sites with at least one detection:\n", round(x$out.props[, "naive.occ"], digits), "\n\n")
cat("Frequencies of sites with detections:\n")
##add matrix of frequencies
print(x$out.freqs)
}
}
if(x$n.seasons > 1) {
##convert NA to . for nicer printing
for(d in 1:x$n.seasons) {
hist.names <- names(x$hist.table.seasons[[d]])
names(x$hist.table.seasons[[d]]) <- gsub(pattern = "NA",
replacement = ".",
x = hist.names)
}
cat("\nSummary of detection histories (", x$n.seasons, " seasons combined): \n", sep ="")
##determine number of characters
num.chars <- nchar(paste(names(x$hist.table.full), collapse = ""))
if(num.chars >= 80) {
cat("\nNote: Detection histories exceed 80 characters and are not displayed\n")
} else {
out.mat <- matrix(x$hist.table.full, nrow = 1)
colnames(out.mat) <- names(x$hist.table.full)
rownames(out.mat) <- "Frequency"
print(out.mat)
}
##if some seasons have not been sampled
if(any(x$missing.seasons)) {
if(sum(x$missing.seasons) == 1) {
cat("\nNote: season", which(x$missing.seasons), "was not sampled\n")
} else {
cat("\nNote: seasons",
paste(which(x$missing.seasons), collapse = ", "),
"were not sampled\n")
}
}
cat("\nSeason-specific detection histories: \n")
cat("\n")
for(i in 1:x$n.seasons) {
if(!x$missing.seasons[i]) {
cat("Season", i, "\n")
} else {
cat("Season", i, "(no sites sampled)", "\n")
}
temp.tab <- x$hist.table.seasons[[i]]
out.mat <- matrix(temp.tab, nrow = 1)
colnames(out.mat) <- names(temp.tab)
rownames(out.mat) <- "Frequency"
print(out.mat)
cat("--------\n\n")
}
##cat("\n")
cat("Frequencies of sites with detections, extinctions, and colonizations:\n")
##add matrix of frequencies
print(x$out.freqs)
}
}
## end of file: AICcmodavg/R/detHist.R
##summarize time to detection data
detTime <- function(object, plot.time = TRUE, plot.seasons = FALSE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...){
UseMethod("detTime", object)
}
detTime.default <- function(object, plot.time = TRUE, plot.seasons = FALSE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...){
stop("\nFunction not yet defined for this object class\n")
}
##for ummarkedFrameOccuTTD
detTime.unmarkedFrameOccuTTD <- function(object, plot.time = TRUE, plot.seasons = FALSE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@y
nsites <- nrow(yMat)
n.seasons <- object@numPrimary
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
seasonNames <- paste("season", 1:n.seasons, sep = "")
surveyLength <- object@surveyLength
if(plot.seasons && n.seasons == 1) {
warning("\nCannot plot data across seasons with only 1 season of data: reset to plot.seasons = FALSE\n")
plot.seasons <- FALSE
}
if(plot.time && !plot.seasons) {
nRows <- 1
nCols <- 1
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
if(!plot.time && plot.seasons) {
##determine arrangement of plots in matrix
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 12) {
##if n.seasons < 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
}
##if both plots for seasons and combined are requested
if(plot.time && plot.seasons) {
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 11) {
if(n.seasons.adj >= 9) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 6) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 4) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 2) {
nRows <- 3
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
##combine all seasons
##Censoring distance
censoredDist.full <- surveyLength
uniqueDist.full <- unique(as.vector(censoredDist.full))
##determine data that were censored
uncensoredIndex.full <- yMat < censoredDist.full
uncensoredData.full <- yMat[uncensoredIndex.full]
ncensored.full <- nsites - sum(uncensoredIndex.full)
if(plot.time) {
##check that maximum times are the same for all sites
if(n.seasons == 1) {
if(length(uniqueDist.full) == 1) {
main.title <- paste("Distribution of time to detection (survey length: ", uniqueDist.full, " min.)",
sep = "")
} else {
main.title <- paste("Distribution of time to detection (survey length: ", min(uniqueDist.full), "-",
max(uniqueDist.full), " min.)",
sep = "")
}
}
if(n.seasons > 1) {
if(length(uniqueDist.full) == 1) {
main.title <- paste("Distribution of time to detection (", n.seasons, " seasons, survey length: ",
uniqueDist.full, " min.)",
sep = "")
} else {
main.title <- paste("Distribution of time to detection (", n.seasons, " seasons, survey length: ",
min(uniqueDist.full), "-",
max(uniqueDist.full), " min.)",
sep = "")
}
}
hist(uncensoredData.full, xlim = c(0, max(uniqueDist.full)),
xlab = "Time to detection (min.)",
main = main.title, cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##quantiles for entire data
time.table.full <- quantile(uncensoredData.full, na.rm = TRUE)
##store data for each season
##columns for each season
col.seasons <- seq(1, nvisits, by = n.visits.season)
##list to store raw data
yMat.seasons <- vector(mode = "list", length = n.seasons)
names(yMat.seasons) <- seasonNames
minusOne <- n.visits.season - 1
##list to store quantiles excluding censored times
time.table.seasons <- vector("list", n.seasons)
names(time.table.seasons) <- seasonNames
##list of uncensored observations
uncensoredData.seasons <- vector("list", n.seasons)
names(uncensoredData.seasons) <- seasonNames
##vector of censored observations
censored.seasons <- vector("numeric", n.seasons)
names(censored.seasons) <- seasonNames
##list of unique values of maximum effort
uniqueDist.seasons <- vector("list", n.seasons)
names(uniqueDist.seasons) <- seasonNames
##list of maximum effort
censoredDist.seasons <- vector("list", n.seasons)
names(censoredDist.seasons) <- seasonNames
##iterate over each season
for(i in 1:n.seasons) {
##extract yMat for each season
yMat1 <- yMat[, col.seasons[i]:(col.seasons[i]+minusOne), drop = FALSE]
yMat.seasons[[i]] <- yMat1
##Censoring values
censoredDist <- surveyLength[, col.seasons[i]:(col.seasons[i]+minusOne), drop = FALSE]
censoredDist.seasons[[i]] <- censoredDist
uniqueDist.seasons[[i]] <- unique(as.vector(censoredDist))
##determine data that were censored
uncensoredIndex <- yMat1 < censoredDist
uncensoredData <- yMat1[uncensoredIndex]
uncensoredData.seasons[[i]] <- uncensoredData
ncensored <- nsites - sum(uncensoredIndex)
##summarize times per season
time.quantiles <- quantile(uncensoredData, na.rm = TRUE)
time.table.seasons[[i]] <- time.quantiles
censored.seasons[i] <- ncensored
}
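##Note on censoring: in time-to-detection data, a site-visit with no
##detection records a time equal to the survey length, so yMat1 < censoredDist
##is TRUE only for genuine detections. A small worked example (hypothetical
##values, assuming 10-min surveys):
##  yMat1 <- matrix(c(2.5, 10, 10, 7.1), nrow = 2)
##  censoredDist <- matrix(10, nrow = 2, ncol = 2)
##  yMat1 < censoredDist  #TRUE only for the 2.5- and 7.1-min observations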
##check if any seasons were not sampled
y.seasonsNA <- sapply(yMat.seasons, FUN = function(i) all(is.na(i)))
if(plot.seasons) {
for(i in 1:n.seasons) {
if(length(uniqueDist.seasons[[i]]) == 1) {
main.title <- paste("Distribution of time to detection (season ", i,
", survey length: ", uniqueDist.seasons[[i]],
" min.)", sep = "")
} else {
main.title <- paste("Distribution of time to detection (season ", i,
", survey length: ", min(uniqueDist.seasons[[i]]), "-",
max(uniqueDist.seasons[[i]]), " min.)",
sep = "")
}
##create histogram
##check for missing season
if(y.seasonsNA[i]) {next} #skip to next season if current season not sampled
hist(uncensoredData.seasons[[i]], xlim = c(0, max(uniqueDist.seasons[[i]])),
xlab = "Time to detection (min.)",
main = main.title, cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
}
if(n.seasons == 1) {
##for each season, determine frequencies
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
##sequence of visits
for(i in 1:n.seasons) {
ySeason <- yMat.seasons[[i]]
censored <- censoredDist.seasons[[i]]
uncensoredObs <- matrix(NA, ncol = n.visits.season,
nrow = nsites)
for(j in 1:ncol(ySeason)){
##observations
uncensoredObs[, j] <- ySeason[, j] < censored[, j]
}
##determine sites with at least 1 uncensored detection
det.sum <- as.integer(rowSums(uncensoredObs, na.rm = TRUE) > 0)
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(uncensoredObs, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(uncensoredObs)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
}
if(n.seasons > 1) {
out.seasons <- vector("list", n.seasons)
names(out.seasons) <- seasonNames
##for each season, determine frequencies
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
for(i in 1:n.seasons) {
ySeason <- yMat.seasons[[i]]
censored <- censoredDist.seasons[[i]]
uncensoredObs <- matrix(NA, ncol = n.visits.season,
nrow = nsites)
for(j in 1:ncol(ySeason)){
##observations
uncensoredObs[, j] <- ySeason[, j] < censored[, j]
}
##determine sites with at least 1 uncensored detection
det.sum <- as.integer(rowSums(uncensoredObs, na.rm = TRUE) > 0)
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(uncensoredObs, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(uncensoredObs)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
}
##reset to original values
if(plot.time || plot.seasons) {
on.exit(par(oldpar))
}
out.det <- list("time.table.full" = time.table.full,
"time.table.seasons" = time.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = y.seasonsNA)
class(out.det) <- "detTime"
return(out.det)
}
##for occuTTD
detTime.unmarkedFitOccuTTD <- function(object, plot.time = TRUE, plot.seasons = FALSE,
cex.axis = 1, cex.lab = 1, cex.main = 1, ...) {
##extract data
yMat <- object@data@y
nsites <- nrow(yMat)
n.seasons <- object@data@numPrimary
nvisits <- ncol(yMat)
##visits per season
n.visits.season <- nvisits/n.seasons
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
seasonNames <- paste("season", 1:n.seasons, sep = "")
surveyLength <- object@data@surveyLength
if(plot.seasons && n.seasons == 1) {
warning("\nCannot plot data across seasons with only 1 season of data: reset to plot.seasons = FALSE\n")
plot.seasons <- FALSE
}
if(plot.time && !plot.seasons) {
nRows <- 1
nCols <- 1
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
if(!plot.time && plot.seasons) {
##determine arrangement of plots in matrix
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 12) {
##if n.seasons < 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
}
##if both plots for seasons and combined are requested
if(plot.time && plot.seasons) {
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 11) {
if(n.seasons.adj >= 9) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 6) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 4) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 2) {
nRows <- 3
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
##combine all seasons
##Censoring distance
censoredDist.full <- surveyLength
uniqueDist.full <- unique(as.vector(censoredDist.full))
##determine data that were censored
uncensoredIndex.full <- yMat < censoredDist.full
uncensoredData.full <- yMat[uncensoredIndex.full]
ncensored.full <- nsites - sum(uncensoredIndex.full)
if(plot.time) {
##check that maximum times are the same for all sites
if(n.seasons == 1) {
if(length(uniqueDist.full) == 1) {
main.title <- paste("Distribution of time to detection (survey length: ", uniqueDist.full, " min.)",
sep = "")
} else {
main.title <- paste("Distribution of time to detection (survey length: ", min(uniqueDist.full), "-",
max(uniqueDist.full), " min.)",
sep = "")
}
}
if(n.seasons > 1) {
if(length(uniqueDist.full) == 1) {
main.title <- paste("Distribution of time to detection (", n.seasons, " seasons, survey length: ",
uniqueDist.full, " min.)",
sep = "")
} else {
main.title <- paste("Distribution of time to detection (", n.seasons, " seasons, survey length: ",
min(uniqueDist.full), "-",
max(uniqueDist.full), " min.)",
sep = "")
}
}
hist(uncensoredData.full, xlim = c(0, max(uniqueDist.full)),
xlab = "Time to detection (min.)",
main = main.title,
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
##quantiles for entire data
time.table.full <- quantile(uncensoredData.full, na.rm = TRUE)
##store data for each season
##columns for each season
col.seasons <- seq(1, nvisits, by = n.visits.season)
##list to store raw data
yMat.seasons <- vector(mode = "list", length = n.seasons)
names(yMat.seasons) <- seasonNames
minusOne <- n.visits.season - 1
##list to store quantiles excluding censored times
time.table.seasons <- vector("list", n.seasons)
names(time.table.seasons) <- seasonNames
##list of uncensored observations
uncensoredData.seasons <- vector("list", n.seasons)
names(uncensoredData.seasons) <- seasonNames
##vector of censored observations
censored.seasons <- vector("numeric", n.seasons)
names(censored.seasons) <- seasonNames
##list of unique values of maximum effort
uniqueDist.seasons <- vector("list", n.seasons)
names(uniqueDist.seasons) <- seasonNames
##list of maximum effort
censoredDist.seasons <- vector("list", n.seasons)
names(censoredDist.seasons) <- seasonNames
##iterate over each season
for(i in 1:n.seasons) {
##extract yMat for each season
yMat1 <- yMat[, col.seasons[i]:(col.seasons[i]+minusOne), drop = FALSE]
yMat.seasons[[i]] <- yMat1
##Censoring values
censoredDist <- surveyLength[, col.seasons[i]:(col.seasons[i]+minusOne), drop = FALSE]
censoredDist.seasons[[i]] <- censoredDist
uniqueDist.seasons[[i]] <- unique(as.vector(censoredDist))
##determine data that were censored
uncensoredIndex <- yMat1 < censoredDist
uncensoredData <- yMat1[uncensoredIndex]
uncensoredData.seasons[[i]] <- uncensoredData
ncensored <- nsites - sum(uncensoredIndex)
##summarize times per season
time.quantiles <- quantile(uncensoredData, na.rm = TRUE)
time.table.seasons[[i]] <- time.quantiles
censored.seasons[i] <- ncensored
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(yMat.seasons, FUN = function(i) all(is.na(i)))
if(plot.seasons) {
for(i in 1:n.seasons) {
if(length(uniqueDist.seasons[[i]]) == 1) {
main.title <- paste("Distribution of time to detection (season ", i,
", survey length: ", uniqueDist.seasons[[i]],
" min.)", sep = "")
} else {
main.title <- paste("Distribution of time to detection (season ", i,
", survey length: ", min(uniqueDist.seasons[[i]]), "-",
max(uniqueDist.seasons[[i]]), " min.)",
sep = "")
}
##create histogram
##check for missing season
if(y.seasonsNA[i]) {next} #skip to next season if current season not sampled
hist(uncensoredData.seasons[[i]], xlim = c(0, max(uniqueDist.seasons[[i]])),
xlab = "Time to detection (min.)",
main = main.title,
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
}
}
if(n.seasons == 1) {
##for each season, determine frequencies
out.freqs <- matrix(data = NA, ncol = 2, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected")
rownames(out.freqs) <- "Season-1"
##sequence of visits
for(i in 1:n.seasons) {
ySeason <- yMat.seasons[[i]]
censored <- censoredDist.seasons[[i]]
uncensoredObs <- matrix(NA, ncol = n.visits.season,
nrow = nsites)
for(j in 1:ncol(ySeason)){
##observations
uncensoredObs[, j] <- ySeason[, j] < censored[, j]
}
##determine sites with at least 1 uncensored detection
det.sum <- as.integer(rowSums(uncensoredObs, na.rm = TRUE) > 0)
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(uncensoredObs, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(uncensoredObs)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 1)
colnames(out.props) <- "naive.occ"
rownames(out.props) <- rownames(out.freqs)
out.props[, 1] <- out.freqs[, 2]/out.freqs[, 1]
}
if(n.seasons > 1) {
out.seasons <- vector("list", n.seasons)
names(out.seasons) <- seasonNames
##for each season, determine frequencies
out.freqs <- matrix(data = NA, ncol = 6, nrow = n.seasons)
colnames(out.freqs) <- c("sampled", "detected", "colonized",
"extinct", "static", "common")
rownames(out.freqs) <- paste("Season-", 1:n.seasons, sep = "")
##sequence of visits
for(i in 1:n.seasons) {
ySeason <- yMat.seasons[[i]]
censored <- censoredDist.seasons[[i]]
uncensoredObs <- matrix(NA, ncol = n.visits.season,
nrow = nsites)
for(j in 1:ncol(ySeason)){
##observations
uncensoredObs[, j] <- ySeason[, j] < censored[, j]
}
##determine sites with at least 1 uncensored detection
det.sum <- as.integer(rowSums(uncensoredObs, na.rm = TRUE) > 0)
##check sites with observed detections and deal with NA's
sum.rows <- rowSums(uncensoredObs, na.rm = TRUE)
is.na(sum.rows) <- rowSums(is.na(uncensoredObs)) == ncol(ySeason)
##number of sites sampled
out.freqs[i, 1] <- sum(!is.na(sum.rows))
out.freqs[i, 2] <- sum(det.sum)
##sites without detections
none <- which(sum.rows == 0)
##sites with at least one detection
some <- which(sum.rows != 0)
out.seasons[[i]] <- list("none" = none, "some" = some)
}
##populate out.freqs with freqs of extinctions and colonizations
for(j in 2:n.seasons) {
none1 <- out.seasons[[j-1]]$none
some1 <- out.seasons[[j-1]]$some
none2 <- out.seasons[[j]]$none
some2 <- out.seasons[[j]]$some
##add check for seasons without sampling or previous season without sampling
if(y.seasonsNA[j] || y.seasonsNA[j-1]) {
if(y.seasonsNA[j]) {
out.freqs[j, 2:6] <- NA
}
if(y.seasonsNA[j-1]) {
out.freqs[j, 3:6] <- NA
}
} else {
##colonizations
out.freqs[j, 3] <- sum(duplicated(c(some2, none1)))
##extinctions
out.freqs[j, 4] <- sum(duplicated(c(some1, none2)))
##no change
out.freqs[j, 5] <- sum(duplicated(c(some1, some2))) + sum(duplicated(c(none1, none2)))
##sites both sampled in t and t-1
year1 <- c(none1, some1)
year2 <- c(none2, some2)
out.freqs[j, 6] <- sum(duplicated(c(year1, year2)))
}
}
##create a matrix with proportion of sites with colonizations
##and extinctions based on raw data
out.props <- matrix(NA, nrow = nrow(out.freqs), ncol = 4)
colnames(out.props) <- c("naive.occ", "naive.colonization", "naive.extinction", "naive.static")
rownames(out.props) <- rownames(out.freqs)
for(k in 1:n.seasons) {
##proportion of sites with detections
out.props[k, 1] <- out.freqs[k, 2]/out.freqs[k, 1]
##add check for seasons without sampling
if(y.seasonsNA[k]) {
out.props[k, 2:4] <- NA
} else {
##proportion colonized
out.props[k, 2] <- out.freqs[k, 3]/out.freqs[k, 6]
##proportion extinct
out.props[k, 3] <- out.freqs[k, 4]/out.freqs[k, 6]
##proportion static
out.props[k, 4] <- out.freqs[k, 5]/out.freqs[k, 6]
}
}
}
##reset to original values
if(plot.time || plot.seasons) {
on.exit(par(oldpar))
}
out.det <- list("time.table.full" = time.table.full,
"time.table.seasons" = time.table.seasons,
"out.freqs" = out.freqs, "out.props" = out.props,
"n.seasons" = n.seasons,
"n.visits.season" = n.visits.season,
"missing.seasons" = y.seasonsNA)
class(out.det) <- "detTime"
return(out.det)
}
##print method
print.detTime <- function(x, digits = 2, ...) {
if(identical(x$n.seasons, 1)) {
cat("\nSummary of time to detection:\n")
time.mat <- matrix(x$time.table.full, nrow = 1)
colnames(time.mat) <- names(x$time.table.full)
rownames(time.mat) <- "Time"
print(round(time.mat, digits = digits))
cat("\nProportion of sites with at least one detection:\n", round(x$out.props[, "naive.occ"], digits), "\n\n")
cat("Frequencies of sites with detections:\n")
##add matrix of frequencies
print(x$out.freqs)
} else {
cat("\nSummary of time to detection (", x$n.seasons, " seasons combined): \n", sep ="")
time.mat <- matrix(x$time.table.full, nrow = 1)
colnames(time.mat) <- names(x$time.table.full)
rownames(time.mat) <- "Time"
print(round(time.mat, digits = digits))
##if some seasons have not been sampled
if(any(x$missing.seasons)) {
if(sum(x$missing.seasons) == 1) {
cat("\nNote: season", which(x$missing.seasons), "was not sampled\n")
} else {
cat("\nNote: seasons",
paste(which(x$missing.seasons), collapse = ", "),
"were not sampled\n")
}
}
cat("\nSeason-specific time to detection: \n")
cat("\n")
for(i in 1:x$n.seasons) {
if(!x$missing.seasons[i]) {
cat("Season", i, "\n")
} else {
cat("Season", i, "(no sites sampled)", "\n")
}
temp.tab <- x$time.table.seasons[[i]]
out.mat <- matrix(temp.tab, nrow = 1)
colnames(out.mat) <- names(temp.tab)
rownames(out.mat) <- "Time"
print(round(out.mat, digits = digits))
cat("--------\n\n")
}
cat("Frequencies of sites with detections, extinctions, and colonizations:\n")
##add matrix of frequencies
print(x$out.freqs)
}
}
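## Usage sketch (hypothetical object 'fm.ttd', e.g., a time-to-detection model
## fit with unmarked::occuTTD(); 'plot.time' and 'plot.seasons' are arguments
## of detTime() referenced above):
## det.summary <- detTime(fm.ttd, plot.time = FALSE)
## print(det.summary, digits = 3)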
## end of R/detTime.R
##generic
dictab <- function(cand.set, modnames = NULL, sort = TRUE, ...) {
##format list according to model class
cand.set <- formatCands(cand.set)
UseMethod("dictab", cand.set)
}
##default
dictab.default <- function(cand.set, modnames = NULL, sort = TRUE, ...) {
stop("\nFunction not yet defined for this object class\n")
}
##bugs
dictab.AICbugs <- function(cand.set, modnames = NULL, sort = TRUE, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$pD <- unlist(lapply(cand.set, DIC, return.pD = TRUE)) #extract number of parameters
Results$DIC <- unlist(lapply(cand.set, DIC, return.pD = FALSE)) #extract DIC #
Results$Delta_DIC <- Results$DIC - min(Results$DIC) #compute delta DIC
Results$ModelLik <- exp(-0.5*Results$Delta_DIC) #compute model likelihood required to compute Akaike weights
Results$DICWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
Results$Deviance <- unlist(lapply(X = cand.set, FUN = function(i) i$mean$deviance))
##check if some models are redundant
if(length(unique(Results$DIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("dictab", "data.frame")
return(Results)
}
##rjags
dictab.AICrjags <- function(cand.set, modnames = NULL, sort = TRUE, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$pD <- unlist(lapply(cand.set, DIC, return.pD = TRUE)) #extract number of parameters
Results$DIC <- unlist(lapply(cand.set, DIC, return.pD = FALSE)) #extract DIC #
Results$Delta_DIC <- Results$DIC - min(Results$DIC) #compute delta DIC
Results$ModelLik <- exp(-0.5*Results$Delta_DIC) #compute model likelihood required to compute Akaike weights
Results$DICWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
Results$Deviance <- unlist(lapply(X = cand.set, FUN = function(i) i$mean$deviance))
##check if some models are redundant
if(length(unique(Results$DIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("dictab", "data.frame")
return(Results)
}
##jagsUI
dictab.AICjagsUI <- function(cand.set, modnames = NULL, sort = TRUE, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
Results <- data.frame(Modnames = modnames) #assign model names to first column
Results$pD <- unlist(lapply(cand.set, DIC, return.pD = TRUE)) #extract number of parameters
Results$DIC <- unlist(lapply(cand.set, DIC, return.pD = FALSE)) #extract DIC #
Results$Delta_DIC <- Results$DIC - min(Results$DIC) #compute delta DIC
Results$ModelLik <- exp(-0.5*Results$Delta_DIC) #compute model likelihood required to compute Akaike weights
Results$DICWt <- Results$ModelLik/sum(Results$ModelLik) #compute Akaike weights
Results$Deviance <- unlist(lapply(X = cand.set, FUN = function(i) i$mean$deviance))
##check if some models are redundant
if(length(unique(Results$DIC)) != length(cand.set)) warning("\nCheck model structure carefully as some models may be redundant\n")
if(sort) {
Results <- Results[order(Results[, 4]),] #if sort=TRUE, models are ranked based on Akaike weights
Results$Cum.Wt <- cumsum(Results[, 6]) #display cumulative sum of Akaike weights
} else {Results$Cum.Wt <- NULL}
class(Results) <- c("dictab", "data.frame")
return(Results)
}
print.dictab <-
function(x, digits = 2, deviance = TRUE, ...) {
cat("\nModel selection based on", colnames(x)[3], ":\n")
cat("\n")
#check if Cum.Wt should be printed
if(any(names(x) == "Cum.Wt")) {
nice.tab <- cbind(x[, 2], x[, 3], x[, 4], x[, 6], x[, "Cum.Wt"], x[, 7])
colnames(nice.tab) <- c(colnames(x)[c(2, 3, 4, 6)], "Cum.Wt", colnames(x)[7])
rownames(nice.tab) <- x[, 1]
} else {
nice.tab <- cbind(x[, 2], x[, 3], x[, 4], x[, 6], x[, 7])
colnames(nice.tab) <- c(colnames(x)[c(2, 3, 4, 6, 7)])
rownames(nice.tab) <- x[, 1]
}
##if deviance==FALSE
if(identical(deviance, FALSE)) {
names.cols <- colnames(nice.tab)
sel.dev <- which(attr(regexpr(pattern = "Deviance", text = names.cols), "match.length") > 1)
nice.tab <- nice.tab[, -sel.dev]
}
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\n")
}
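## Usage sketch (hypothetical models 'm1' and 'm2' fit with R2jags or jagsUI;
## dictab() ranks candidate models by DIC and computes delta DIC, model
## likelihoods, and DIC weights as coded above):
## cand.mods <- list("null" = m1, "habitat" = m2)
## dic.table <- dictab(cand.set = cand.mods)
## print(dic.table, digits = 3, deviance = FALSE)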
## end of R/dictab.R
evidence <-
function(aic.table, model.high = "top", model.low = "second.ranked") {
##if multComp object, extract relevant table
if(identical(class(aic.table)[1], "multComp")) {
if(!is.data.frame(aic.table)) {
aic.table <- aic.table$model.table
}
##coerce to aictab
class(aic.table) <- c("aictab", "data.frame")
}
if(identical(class(aic.table)[1], "boot.wt")) {
##coerce to aictab
class(aic.table) <- c("aictab", "data.frame")
}
##if bictab result
if(identical(class(aic.table)[1], "bictab")) {
##coerce to aictab
class(aic.table) <- c("aictab", "data.frame")
}
##if dictab result
if(identical(class(aic.table)[1], "dictab")) {
##coerce to aictab
class(aic.table) <- c("aictab", "data.frame")
}
##if ictab result
if(identical(class(aic.table)[1], "ictab")) {
##coerce to aictab
class(aic.table) <- c("aictab", "data.frame")
}
if(!identical(class(aic.table)[1], "aictab")) {stop("\nThe input object must be of class 'aictab'\n")}
##sort model table in case it is not
sort.tab <- aic.table[order(aic.table[, 4]), ]
##top model
if(identical(model.high, "top")) {
##determine which is the highest ranking model
top.name <- sort.tab[1, 1]
top.wt <- sort.tab[1, 6]
} else {
top.name <- model.high
top.wt <- sort.tab[which(sort.tab$Modnames == paste(model.high)), 6]
}
##model compared
if(identical(model.low, "second.ranked")) {
sec.name <- sort.tab[2, 1]
sec.wt <- sort.tab[2, 6]
} else {
sec.name <- model.low
sec.wt <- sort.tab[which(sort.tab$Modnames == paste(model.low)), 6]
}
##compute evidence ratio
ev.ratio <- top.wt/sec.wt
ev.ratio.list <- list("Model.high" = paste(top.name), "Model.low" = paste(sec.name), "Ev.ratio" = ev.ratio)
class(ev.ratio.list) <- c("evidence", "list")
return(ev.ratio.list)
}
print.evidence <- function(x, digits = 2, ...) {
cat("\nEvidence ratio between models '", x$Model.high,"' and '", x$Model.low, "':\n", sep = "")
cat(round(x$Ev.ratio, digits = digits), "\n\n")
}
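## Usage sketch: the evidence ratio is the Akaike (or DIC) weight of the
## higher-ranked model divided by that of the lower-ranked model
## ('aic.tab' is a hypothetical object returned by aictab()):
## evidence(aic.tab)                      #top model vs second-ranked model
## evidence(aic.tab, model.high = "habitat", model.low = "null")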
## end of R/evidence.R
##extract condition number
##values of the condition number near 0 or negative indicate a problem,
##possibly due to fitting a model with too many parameters for the given data set
extractCN <- function(mod, method = "svd", ...){
UseMethod("extractCN", mod)
}
extractCN.default <- function(mod, method = "svd", ...) {
stop("\nFunction not yet defined for this object class\n")
}
##unmarkedFit objects
extractCN.unmarkedFit <- function(mod, method = "svd", ...) {
##extract Hessian matrix
hess <- mod@opt$hessian
##SVD
if(identical(method, "svd")) {
s <- svd(hess, nu = 0, nv = 0)$d
CN <- max(s)/min(s[s > 0])
}
##eigen
if(identical(method, "eigen")) {
eigenvals <- eigen(hess)$values
CN <- max(eigenvals)/min(eigenvals)
}
##compute log10
logKappa <- log10(CN)
##arrange results
out <- list("CN" = CN, "log10" = logKappa, "method" = method)
class(out) <- "extractCN"
return(out)
}
##print method
print.extractCN <- function(x, digits = 2, ...) {
nice.vector <- c("Condition number" = x$CN,
"log10" = x$log10)
print(round(nice.vector, digits = digits))
}
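## Usage sketch ('fm' is a hypothetical unmarkedFit object, e.g., from
## unmarked::occu()); with method = "svd" the condition number is the ratio of
## the largest to smallest positive singular value of the Hessian, and with
## method = "eigen" the ratio of the extreme eigenvalues:
## extractCN(fm)
## extractCN(fm, method = "eigen")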
## end of R/extractCN.R
##extract log-likelihood of model
##generic
extractLL <- function(mod, ...) {
UseMethod("extractLL", mod)
}
##generic
extractLL.default <- function(mod, ...) {
stop("\nFunction not yet defined for this object class\n")
}
##methods
##coxme objects
extractLL.coxme <- function(mod, type = "Integrated", ...) {
##fixed effects
fixed.K <- length(fixef(mod))
##random effects
random.K <- length(ranef(mod))
df <- fixed.K + random.K
LL <- mod$loglik[type]
attr(LL, "df") <- df
return(LL)
}
##coxph objects
extractLL.coxph <- function(mod, ...) {
coefs <- coef(mod)
if(is.null(coefs)) {
ncoefs <- 0
LL <- mod$loglik[1] #when null model, only 1 log-likelihood value
} else {
ncoefs <- length(coefs)
LL <- mod$loglik[2] #second value is the logLik at the solution
}
attr(LL, "df") <- ncoefs
return(LL)
}
##lmekin objects
extractLL.lmekin <- function(mod, ...) {
LL <- mod$loglik
#K = fixed + random + residual variance
fixed.K <- length(fixef(mod))
random.K <- length(ranef(mod))
df <- fixed.K + random.K + 1
attr(LL, "df") <- df
return(LL)
}
##maxlikeFit objects
extractLL.maxlikeFit <- function(mod, ...) {
LL <- logLik(mod)
df <- length(coef(mod))
attr(LL, "df") <- df
return(LL)
}
##unmarkedFit objects
extractLL.unmarkedFit <- function(mod, ...) {
LL <- -1*mod@negLogLike
df <- length(mod@opt$par)
attr(LL, "df") <- df
return(LL)
}
##vglm objects
extractLL.vglm <- function(mod, ...) {
LL <- logLik(mod)
df <- length(coef(mod))
if(identical(mod@family@vfamily, "gaussianff")) {df <- df + 1}
attr(LL, "df") <- df
return(LL)
}
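## Usage sketch ('fm' is a hypothetical fitted model of one of the classes
## above); the returned log-likelihood carries a "df" attribute with the
## number of estimated parameters:
## ll <- extractLL(fm)
## attr(ll, "df")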
## end of R/extractLL.R
##create SE extractor function that includes estimate labels
##generic
extractSE <- function(mod, ...) {
UseMethod("extractSE", mod)
}
extractSE.default <- function(mod, ...) {
stop("\nFunction not yet defined for this object class\n")
}
##methods
##coxme objects
extractSE.coxme <- function(mod, ...){
##extract vcov matrix
vcov.mat <- as.matrix(vcov(mod))
se <- sqrt(diag(vcov.mat))
fixed.labels <- names(fixef(mod))
names(se) <- fixed.labels
return(se)
}
##lmekin objects
extractSE.lmekin <- function(mod, ...){
##extract vcov matrix
vcov.mat <- as.matrix(vcov(mod))
se <- sqrt(diag(vcov.mat))
fixed.labels <- names(fixef(mod))
names(se) <- fixed.labels
return(se)
}
##mer objects
extractSE.mer <- function(mod, ...){
##extract vcov matrix
vcov.mat <- as.matrix(vcov(mod))
se <- sqrt(diag(vcov.mat))
fixed.labels <- names(fixef(mod))
names(se) <- fixed.labels
return(se)
}
##merMod objects
extractSE.merMod <- function(mod, ...){
##extract vcov matrix
vcov.mat <- as.matrix(vcov(mod))
se <- sqrt(diag(vcov.mat))
fixed.labels <- names(fixef(mod))
names(se) <- fixed.labels
return(se)
}
##lmerModLmerTest objects
extractSE.lmerModLmerTest <- function(mod, ...){
##extract vcov matrix
vcov.mat <- as.matrix(vcov(mod))
se <- sqrt(diag(vcov.mat))
fixed.labels <- names(fixef(mod))
names(se) <- fixed.labels
return(se)
}
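## Usage sketch ('fm' is a hypothetical mixed model, e.g., from lme4::lmer());
## extractSE() returns the SE's of the fixed effects, named after the
## coefficients:
## extractSE(fm)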
## end of R/extractSE.R
##extracting data characteristics for subsequent prediction typically with modavgPred or modavgEffect
##generic
extractX <- function(cand.set, ...){
cand.set <- formatCands(cand.set)
UseMethod("extractX", cand.set)
}
##default
extractX.default <- function(cand.set, ...){
stop("\nFunction not yet defined for this object class\n")
}
##aov
extractX.AICaov.lm <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract response
resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##extract data from model objects
dsets <- lapply(cand.set, FUN = function(i) i$model)
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##remove response from data frame
dframe <- dframe[, names(dframe) != resp]
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##glm
extractX.AICglm.lm <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract response
resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##extract data from model objects
dsets <- lapply(cand.set, FUN = function(i) i$model)
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##remove response from data frame
dframe <- dframe[, names(dframe) != resp]
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##glmmTMB
extractX.AICglmmTMB <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract response
resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##check for | in variance terms
pipe.id <- which(regexpr("\\|", unique.predictors) != -1)
##remove variance terms from string of predictors
if(length(pipe.id) > 0) {unique.predictors <- unique.predictors[-pipe.id]}
##extract data from model objects
dsets <- lapply(cand.set, FUN = function(i) (i$frame))
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##remove response from data frame
dframe <- dframe[, names(dframe) != resp]
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##check for I( ) custom variables in formula
I.id <- which(regexpr("I\\(", final.predictors) != -1)
##if I( ) used
if(length(I.id) > 0) {
dframe <- dframe[, final.predictors[-I.id], drop = FALSE]
} else {
dframe <- dframe[, final.predictors, drop = FALSE]
}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##gls
extractX.AICgls <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract response
resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##extract data from model objects
dsets <- lapply(cand.set, FUN = function(i) getData(i))
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##remove response from data frame
dframe <- dframe[, names(dframe) != resp]
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##check for I( ) custom variables in formula
I.id <- which(regexpr("I\\(", final.predictors) != -1)
##if I( ) used
if(length(I.id) > 0) {
dframe <- dframe[, final.predictors[-I.id], drop = FALSE]
} else {
dframe <- dframe[, final.predictors, drop = FALSE]
}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##lm
extractX.AIClm <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract response
resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##extract data from model objects
dsets <- lapply(cand.set, FUN = function(i) i$model)
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##remove response from data frame
dframe <- dframe[, names(dframe) != resp]
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##lme
extractX.AIClme <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract response
resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##extract data from model objects
dsets <- lapply(cand.set, FUN = function(i) getData(i))
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##remove response from data frame
dframe <- dframe[, names(dframe) != resp]
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##check for I( ) custom variables in formula
I.id <- which(regexpr("I\\(", final.predictors) != -1)
##if I( ) used
if(length(I.id) > 0) {
dframe <- dframe[, final.predictors[-I.id], drop = FALSE]
} else {
dframe <- dframe[, final.predictors, drop = FALSE]
}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##glmerMod
extractX.AICglmerMod <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract response
resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##check for | in variance terms
pipe.id <- which(regexpr("\\|", unique.predictors) != -1)
##remove variance terms from string of predictors
if(length(pipe.id) > 0) {unique.predictors <- unique.predictors[-pipe.id]}
##extract data from model objects
dsets <- lapply(cand.set, FUN = function(i) (i@frame))
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##remove response from data frame
dframe <- dframe[, names(dframe) != resp]
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##check for I( ) custom variables in formula
I.id <- which(regexpr("I\\(", final.predictors) != -1)
##if I( ) used
if(length(I.id) > 0) {
dframe <- dframe[, final.predictors[-I.id], drop = FALSE]
} else {
dframe <- dframe[, final.predictors, drop = FALSE]
}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##lmerMod
extractX.AIClmerMod <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract response
resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##check for | in variance terms
pipe.id <- which(regexpr("\\|", unique.predictors) != -1)
##remove variance terms from string of predictors
if(length(pipe.id) > 0) {unique.predictors <- unique.predictors[-pipe.id]}
##extract data from model objects
dsets <- lapply(cand.set, FUN = function(i) i@frame)
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##remove response from data frame
dframe <- dframe[, names(dframe) != resp]
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##check for I( ) custom variables in formula
I.id <- which(regexpr("I\\(", final.predictors) != -1)
##if I( ) used
if(length(I.id) > 0) {
dframe <- dframe[, final.predictors[-I.id], drop = FALSE]
} else {
dframe <- dframe[, final.predictors, drop = FALSE]
}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##lmerModLmerTest
extractX.AIClmerModLmerTest <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract response
resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##check for | in variance terms
pipe.id <- which(regexpr("\\|", unique.predictors) != -1)
##remove variance terms from string of predictors
if(length(pipe.id) > 0) {unique.predictors <- unique.predictors[-pipe.id]}
##extract data from model objects
dsets <- lapply(cand.set, FUN = function(i) i@frame)
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##remove response from data frame
dframe <- dframe[, names(dframe) != resp]
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
    ##no interaction terms to process
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##check for I( ) custom variables in formula
I.id <- which(regexpr("I\\(", final.predictors) != -1)
##if I( ) used
if(length(I.id) > 0) {
dframe <- dframe[, final.predictors[-I.id], drop = FALSE]
} else {
dframe <- dframe[, final.predictors, drop = FALSE]
}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##rlm
extractX.AICrlm.lm <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract response
resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##extract data from model objects
dsets <- lapply(cand.set, FUN = function(i) i$model)
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##remove response from data frame
dframe <- dframe[, names(dframe) != resp]
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##survreg
extractX.AICsurvreg <- function(cand.set, ...) {
##extract predictors from list
form.list <- as.character(lapply(cand.set, FUN = function(x) formula(x)[[3]]))
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
  ##extract data from model objects - identical for each model because the data call is evaluated
dsets <- lapply(cand.set, FUN = function(i) eval(i$call$data))
##remove model names from list
names(dsets) <- NULL
##combine data sets
combo <- do.call(what = "cbind", dsets)
dframe <- combo[, unique(names(combo))]
##extract response
##resp <- unique(as.character(sapply(cand.set, FUN = function(x) formula(x)[[2]])))
##check if different response used
##if(length(resp) > 1) stop("\nThe response variable should be identical in all models\n")
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
    ##no interaction terms to process
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##check for I( ) custom variables in formula
I.id <- which(regexpr("I\\(", final.predictors) != -1)
##if I( ) used
if(length(I.id) > 0) {
dframe <- dframe[, final.predictors[-I.id], drop = FALSE]
} else {
dframe <- dframe[, final.predictors, drop = FALSE]
}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = dframe)
class(result) <- "extractX"
return(result)
}
##unmarkedFitOccu
extractX.AICunmarkedFitOccu <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##psi
if(identical(parm.type, "psi")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formula[[3]]))
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formula[[2]]))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
    ##no interaction terms to process
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
##store data sets
  data.out <- list()
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitColExt
extractX.AICunmarkedFitColExt <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##psi
if(identical(parm.type, "psi")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@psiformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##gamma
if(identical(parm.type, "gamma")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@gamformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##epsilon
if(identical(parm.type, "epsilon")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@epsformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@detformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##extract yearlySiteCovs
yearlyVars <- yearlySiteCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
    ##no interaction terms to process
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
if(!is.null(yearlyVars)) {
yearlyID <- yearlyVars[, intersect(final.predictors, names(yearlyVars)), drop = FALSE]
if(nrow(yearlyID) > 0) {
yearlyID.info <- capture.output(str(yearlyID))[-1]
} else {
yearlyID.info <- NULL
}
} else {yearlyID.info <- NULL}
##store data sets
  data.out <- list()
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
if(is.null(yearlyVars)) {
data.out$yearlySiteCovs <- NULL
} else {data.out$yearlySiteCovs <- yearlyID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitOccuRN
extractX.AICunmarkedFitOccuRN <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##psi
if(identical(parm.type, "psi")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formula[[3]]))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formula[[2]]))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
    ##no interaction terms to process
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
##store data sets
  data.out <- list()
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitPCO
extractX.AICunmarkedFitPCO <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##lambda
if(identical(parm.type, "lambda")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$lambdaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##gamma
if(identical(parm.type, "gamma")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$gammaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##omega
if(identical(parm.type, "omega")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$omegaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##iota
if(identical(parm.type, "iota")) {
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'iota\' does not appear in all models\n")
}
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$iotaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$pformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##extract yearlySiteCovs
yearlyVars <- yearlySiteCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
    ##no interaction terms to process
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
if(!is.null(yearlyVars)) {
yearlyID <- yearlyVars[, intersect(final.predictors, names(yearlyVars)), drop = FALSE]
if(nrow(yearlyID) > 0) {
yearlyID.info <- capture.output(str(yearlyID))[-1]
} else {
yearlyID.info <- NULL
}
} else {yearlyID.info <- NULL}
##store data sets
  data.out <- list()
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
if(is.null(yearlyVars)) {
data.out$yearlySiteCovs <- NULL
} else {data.out$yearlySiteCovs <- yearlyID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitPCount
extractX.AICunmarkedFitPCount <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##lambda
if(identical(parm.type, "lambda")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formula[[3]]))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formula[[2]]))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
    ##no interaction terms to process
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
##store data sets
  data.out <- list()
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitDS
extractX.AICunmarkedFitDS <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##lambda
if(identical(parm.type, "lambda")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formula[[3]]))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formula[[2]]))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
    ##no interaction terms to process
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
##store data sets
  data.out <- list()
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitGDS
extractX.AICunmarkedFitGDS <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##lambda
if(identical(parm.type, "lambda")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$lambdaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##phi
if(identical(parm.type, "phi")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$phiformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$pformula[[2]]))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##extract yearlySiteCovs
yearlyVars <- yearlySiteCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
if(!is.null(yearlyVars)) {
yearlyID <- yearlyVars[, intersect(final.predictors, names(yearlyVars)), drop = FALSE]
if(nrow(yearlyID) > 0) {
yearlyID.info <- capture.output(str(yearlyID))[-1]
} else {
yearlyID.info <- NULL
}
} else {yearlyID.info <- NULL}
##store data sets
data.out <- list( )
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
if(is.null(yearlyVars)) {
data.out$yearlySiteCovs <- NULL
} else {data.out$yearlySiteCovs <- yearlyID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitOccuFP
extractX.AICunmarkedFitOccuFP <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##psi
if(identical(parm.type, "psi")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@stateformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##fp
if(identical(parm.type, "falsepos") || identical(parm.type, "fp")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@FPformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##certain
if(identical(parm.type, "certain")) {
##check that parameter appears in all models
##the certainty parameter is stored under the name 'b' in the estimates slot
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == "b")))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'b\' does not appear in all models\n")
}
form.list <- as.character(lapply(cand.set, FUN = function(x) x@Bformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@detformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
##store data sets
data.out <- list( )
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitMPois
extractX.AICunmarkedFitMPois <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##lambda
if(identical(parm.type, "lambda")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formula[[3]]))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formula[[2]]))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
##store data sets
data.out <- list( )
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitGMM
extractX.AICunmarkedFitGMM <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##lambda
if(identical(parm.type, "lambda")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$lambdaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##phi
if(identical(parm.type, "phi")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$phiformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$pformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##extract yearlySiteCovs
yearlyVars <- yearlySiteCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
if(!is.null(yearlyVars)) {
yearlyID <- yearlyVars[, intersect(final.predictors, names(yearlyVars)), drop = FALSE]
if(nrow(yearlyID) > 0) {
yearlyID.info <- capture.output(str(yearlyID))[-1]
} else {
yearlyID.info <- NULL
}
} else {yearlyID.info <- NULL}
##store data sets
data.out <- list( )
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
if(is.null(yearlyVars)) {
data.out$yearlySiteCovs <- NULL
} else {data.out$yearlySiteCovs <- yearlyID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitGPC
extractX.AICunmarkedFitGPC <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##lambda
if(identical(parm.type, "lambda")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$lambdaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##phi
if(identical(parm.type, "phi")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$phiformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$pformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##extract yearlySiteCovs
yearlyVars <- yearlySiteCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
if(!is.null(yearlyVars)) {
yearlyID <- yearlyVars[, intersect(final.predictors, names(yearlyVars)), drop = FALSE]
if(nrow(yearlyID) > 0) {
yearlyID.info <- capture.output(str(yearlyID))[-1]
} else {
yearlyID.info <- NULL
}
} else {yearlyID.info <- NULL}
##store data sets
data.out <- list( )
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
if(is.null(yearlyVars)) {
data.out$yearlySiteCovs <- NULL
} else {data.out$yearlySiteCovs <- yearlyID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitOccuMulti
extractX.AICunmarkedFitOccuMulti <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##psi
if(identical(parm.type, "psi")) {
form.list <- lapply(cand.set, FUN = function(x) names(x@estimates@estimates$state@estimates))
}
##detect
if(identical(parm.type, "detect")) {
form.list <- lapply(cand.set, FUN = function(x) names(x@estimates@estimates$det@estimates))
}
##exclude empty strings and intercept
formStrings <- unlist(form.list)
notInclude <- grep(pattern = "(Intercept)", x = formStrings, fixed = TRUE)
##guard against an empty index: x[-integer(0)] would drop all elements
if(length(notInclude) > 0) {
formNoInt <- formStrings[-notInclude]
} else {
formNoInt <- formStrings
}
##extract only variable names
formJustVars <- unlist(strsplit(formNoInt, split = "\\]"))
formMat <- matrix(data = formJustVars, ncol = 2, byrow = TRUE)
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", formMat[, 2])
unique.predictors <- unique(form.clean)
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
##store data sets
data.out <- list( )
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitOccuMS
extractX.AICunmarkedFitOccuMS <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##psi
if(identical(parm.type, "psi")) {
form.list <- lapply(cand.set, FUN = function(x) x@psiformulas)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- lapply(cand.set, FUN = function(x) x@detformulas)
}
##transition
if(identical(parm.type, "phi")) {
##check that parameter appears in all models
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
##use all( ) to avoid a length > 1 condition if models differ in numPrimary
if(all(nseasons == 1)) {
stop("\nParameter \'phi\' does not appear in single-season models\n")
}
form.list <- lapply(cand.set, FUN = function(x) x@phiformulas)
}
##exclude empty strings and intercept
formStrings <- gsub(pattern = "~", replacement = "",
x = unlist(form.list))
formNoInt <- formStrings[formStrings != "1"]
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", formNoInt)
unique.predictors <- unique(form.clean)
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
##store data sets
data.out <- list( )
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitOccuTTD
extractX.AICunmarkedFitOccuTTD <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##psi
if(identical(parm.type, "psi")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@psiformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##gamma
if(identical(parm.type, "gamma")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
##use all( ) to avoid a length > 1 condition if models differ in numPrimary
if(all(nseasons == 1)) {
stop("\nParameter \'gamma\' does not appear in single-season models\n")
}
form.list <- as.character(lapply(cand.set, FUN = function(x) x@gamformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##epsilon
if(identical(parm.type, "epsilon")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
##use all( ) to avoid a length > 1 condition if models differ in numPrimary
if(all(nseasons == 1)) {
stop("\nParameter \'epsilon\' does not appear in single-season models\n")
}
form.list <- as.character(lapply(cand.set, FUN = function(x) x@epsformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@detformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##extract yearlySiteCovs
yearlyVars <- yearlySiteCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
if(!is.null(yearlyVars)) {
yearlyID <- yearlyVars[, intersect(final.predictors, names(yearlyVars)), drop = FALSE]
if(nrow(yearlyID) > 0) {
yearlyID.info <- capture.output(str(yearlyID))[-1]
} else {
yearlyID.info <- NULL
}
} else {yearlyID.info <- NULL}
##store data sets
data.out <- list( )
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
if(is.null(yearlyVars)) {
data.out$yearlySiteCovs <- NULL
} else {data.out$yearlySiteCovs <- yearlyID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
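##usage sketch for these methods (kept as comments; assumes models fitted
##with 'unmarked' -- the objects 'fm1', 'fm2', and 'umf' are hypothetical):
## fm1 <- unmarked::occu(~ 1 ~ elev, data = umf)
## fm2 <- unmarked::occu(~ 1 ~ elev + forest, data = umf)
## extractX(cand.set = list("elev" = fm1, "elev.forest" = fm2),
##          parm.type = "psi")
##returns an 'extractX' list with the unique predictors on the requested
##parameter and the covariate data frames shared across the candidate models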
##unmarkedFitMMO
extractX.AICunmarkedFitMMO <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##lambda
if(identical(parm.type, "lambda")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$lambdaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##gamma
if(identical(parm.type, "gamma")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$gammaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##omega
if(identical(parm.type, "omega")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$omegaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##iota
if(identical(parm.type, "iota")) {
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'iota\' does not appear in all models\n")
}
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$iotaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$pformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##extract yearlySiteCovs
yearlyVars <- yearlySiteCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
if(!is.null(yearlyVars)) {
yearlyID <- yearlyVars[, intersect(final.predictors, names(yearlyVars)), drop = FALSE]
if(nrow(yearlyID) > 0) {
yearlyID.info <- capture.output(str(yearlyID))[-1]
} else {
yearlyID.info <- NULL
}
} else {yearlyID.info <- NULL}
##store data sets
  data.out <- list()
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
if(is.null(yearlyVars)) {
data.out$yearlySiteCovs <- NULL
} else {data.out$yearlySiteCovs <- yearlyID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##unmarkedFitDSO
extractX.AICunmarkedFitDSO <- function(cand.set, parm.type = NULL, ...) {
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?extractX for details\n")}
##extract predictors from list
##lambda
if(identical(parm.type, "lambda")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$lambdaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##gamma
if(identical(parm.type, "gamma")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$gammaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##omega
if(identical(parm.type, "omega")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$omegaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##iota
if(identical(parm.type, "iota")) {
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'iota\' does not appear in all models\n")
}
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$iotaformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##detect
if(identical(parm.type, "detect")) {
form.list <- as.character(lapply(cand.set, FUN = function(x) x@formlist$pformula))
##remove ~
form.list <- gsub("~", replacement = "", x = form.list)
}
##extract based on "+"
form.noplus <- unlist(sapply(form.list, FUN = function(i) strsplit(i, split = "\\+")))
##remove extra white space
form.clean <- gsub("(^ +)|( +$)", "", form.noplus)
unique.clean <- unique(form.clean)
##exclude empty strings and intercept
unique.predictors <- unique.clean[nchar(unique.clean) != 0 & unique.clean != "1"]
##extract data from model objects - identical for all models
dsets <- lapply(cand.set, FUN = function(i) unmarked::getData(i))
##check that same data are used
unique.dsets <- unique(dsets)
if(length(unique.dsets) != 1) stop("\nData sets differ across models:\n check data carefully\n")
unFrame <- unique.dsets[[1]]
##extract siteCovs
siteVars <- siteCovs(unFrame)
##extract obsCovs
obsVars <- obsCovs(unFrame)
##extract yearlySiteCovs
yearlyVars <- yearlySiteCovs(unFrame)
##check for interactions specified with *
inter.star <- any(regexpr("\\*", unique.predictors) != -1)
##check for interaction terms
inter.id <- any(regexpr("\\:", unique.predictors) != -1)
##inter.star and inter.id
if(inter.star && inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.nostar.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
##separate terms in interaction
terms.nointer <- unlist(sapply(terms.nostar.clean, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##inter.star
if(inter.star && !inter.id) {
##separate terms in interaction
terms.nostar <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\*")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nostar)
}
##inter.id
if(!inter.star && inter.id) {
##separate terms in interaction
terms.nointer <- unlist(sapply(unique.predictors, FUN = function(i) strsplit(i, split = "\\:")))
##remove extra white space
terms.clean <- gsub("(^ +)|( +$)", "", terms.nointer)
}
##none
if(!inter.star && !inter.id) {
##remove extra white space
terms.clean <- unique.predictors
}
##combine in single character vector
final.predictors <- unique(terms.clean)
##find where predictors occur
if(!is.null(obsVars)) {
obsID <- obsVars[, intersect(final.predictors, names(obsVars)), drop = FALSE]
if(nrow(obsID) > 0) {
obsID.info <- capture.output(str(obsID))[-1]
} else {
obsID.info <- NULL
}
} else {obsID.info <- NULL}
if(!is.null(siteVars)) {
siteID <- siteVars[, intersect(final.predictors, names(siteVars)), drop = FALSE]
if(nrow(siteID) > 0) {
siteID.info <- capture.output(str(siteID))[-1]
} else {
siteID.info <- NULL
}
} else {siteID.info <- NULL}
if(!is.null(yearlyVars)) {
yearlyID <- yearlyVars[, intersect(final.predictors, names(yearlyVars)), drop = FALSE]
if(nrow(yearlyID) > 0) {
yearlyID.info <- capture.output(str(yearlyID))[-1]
} else {
yearlyID.info <- NULL
}
} else {yearlyID.info <- NULL}
##store data sets
  data.out <- list()
if(is.null(obsVars)) {
data.out$obsCovs <- NULL
} else {data.out$obsCovs <- obsID}
if(is.null(siteVars)) {
data.out$siteCovs <- NULL
} else {data.out$siteCovs <- siteID}
if(is.null(yearlyVars)) {
data.out$yearlySiteCovs <- NULL
} else {data.out$yearlySiteCovs <- yearlyID}
##assemble results
result <- list("predictors" = unique.predictors,
"data" = data.out)
class(result) <- "extractX"
return(result)
}
##print method
print.extractX <- function(x, ...) {
if(length(x$predictors) > 0) {
cat("\nPredictors appearing in candidate models:\n")
cat(x$predictors, sep = " ")
cat("\n")
##if unmarkedFit model
if(!is.data.frame(x$data)) {
##determine number of elements
nitems <- length(x$data)
for(i in 1:nitems) {
##check if data frame contains data
if(ncol(x$data[[i]]) > 0) {
cat("\nStructure of predictors in ", names(x$data)[i], ":\n", sep = "")
cat(capture.output(str(x$data[[i]]))[-1], sep = "\n")
}
}
cat("\n")
} else {
cat("\nStructure of predictors:", "\n")
cat(capture.output(str(x$data))[-1], sep = "\n")
cat("\n")
}
} else {
##if only intercept is present
cat("\nNo predictors appear in candidate models\n")
cat("\n")
}
}
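## Illustrative usage sketch (editor's addition, not package code): the
## models and variable names below are hypothetical and assume the
## 'unmarked' package for fitting N-mixture models to simulated data.
## Not run:
##   library(unmarked)
##   set.seed(1)
##   site.covs <- data.frame(elev = rnorm(20), forest = rnorm(20))
##   y <- matrix(rpois(20 * 3, lambda = 2), nrow = 20)
##   umf <- unmarkedFramePCount(y = y, siteCovs = site.covs)
##   fm1 <- pcount(~ 1 ~ elev, data = umf, K = 50)
##   fm2 <- pcount(~ 1 ~ elev + forest, data = umf, K = 50)
##   ##list predictors appearing on lambda across the candidate set
##   extractX(cand.set = list(Elev = fm1, ElevForest = fm2),
##            parm.type = "lambda")
## End(Not run)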
## ----- end of file: AICcmodavg/R/extractX.R -----
##utility function to determine the distribution family and link function of objects of 'mer', 'lmerMod', or 'glmerMod' classes
fam.link.mer <- function(mod) {
if(identical(paste(class(mod)), "mer")) {
call.mod <- mod@call
##determine type of call: lmer( ) vs glmer( )
fun.call <- call.mod[1]
##supported link
supp.link <- "yes"
##for lmer
if(identical(as.character(fun.call), "lmer")) {
fam.type <- "gaussian"
link.type <- "identity"
}
##for glmer
if(identical(as.character(fun.call), "glmer")) {
fam.call <- call.mod$family
##determine family of glmm and set to canonical link
if(!is.na(charmatch(x = "binomial", table = fam.call))) {
fam.type <- "binomial"
link.type <- "logit"
} else {
if(!is.na(charmatch(x = "poisson", table = fam.call))) {
fam.type <- "poisson"
link.type <- "log"
} else {
if(!is.na(charmatch(x = "Negative Binomial", table = fam.call))) {
fam.type <- "Negative.Binomial"
link.type <- "log"
} else {
if(!is.na(charmatch(x = "gaussian", table = fam.call))) {
fam.type <- "gaussian"
link.type <- "identity"
} else {
if(!is.na(charmatch(x = "Gamma", table = fam.call))) {
fam.type <- "Gamma"
link.type <- "log"
} else {fam.type <- "other"}
}
}
}
##check for family type other than binomial, Poisson, normal, negative binomial, or Gamma
if(identical(fam.type, "other")) stop("\nThis distribution family is not yet supported\n")
##determine if canonical link was used
if(length(fam.call) > 1){
link.type <- as.character(fam.call$link)
}
##check for links supported by this function
if(identical(fam.type, "binomial")) {
if(!identical(link.type, "logit")) supp.link <- "no"
}
if(identical(fam.type, "poisson")) {
if(!identical(link.type, "log") && !identical(link.type, "identity")) supp.link <- "no"
}
if(identical(fam.type, "Negative.Binomial")) {
if(!identical(link.type, "log") && !identical(link.type, "identity")) supp.link <- "no"
}
if(identical(fam.type, "gaussian")) {
if(!identical(link.type, "log") && !identical(link.type, "identity")) supp.link <- "no"
}
if(identical(fam.type, "Gamma")) {
if(!identical(link.type, "log")) supp.link <- "no"
}
##if(identical(supp.link, "no")) stop("\nOnly canonical link is supported with current version of function\n")
##if(identical(link.type, "other")) stop("\nThis function is not yet defined for the specified link function\n")
}
}
out.link <- list("family" = fam.type, "link" = link.type, "supp.link" = supp.link)
}
if(identical(paste(class(mod)), "lmerMod") || identical(paste(class(mod)), "glmerMod")) {
call.mod <- mod@call
##determine type of call: lmer( ) vs glmer( )
fun.call <- call.mod[1]
##supported link
supp.link <- "yes"
if(identical(as.character(fun.call), "lmer")) {
fam.type <- "gaussian"
link.type <- "identity"
}
if(identical(as.character(fun.call), "glmer")) {
fam.call <- mod@resp$family
fam.type <- fam.call$family
link.type <- fam.call$link
##check for links supported by this function
if(identical(fam.type, "binomial")) {
if(!identical(link.type, "logit")) supp.link <- "no"
}
if(identical(fam.type, "poisson")) {
if(!identical(link.type, "log") && !identical(link.type, "identity")) supp.link <- "no"
}
if(!is.na(charmatch(x = "Negative Binomial", table = fam.type))) {
##modify fam.type
fam.type <- "Negative.Binomial"
if(!identical(link.type, "log") && !identical(link.type, "identity")) supp.link <- "no"
}
if(identical(fam.type, "gaussian")) {
if(!identical(link.type, "log") && !identical(link.type, "identity")) supp.link <- "no"
}
if(identical(fam.type, "Gamma")) {
if(!identical(link.type, "log")) supp.link <- "no"
}
}
out.link <- list("family" = fam.type, "link" = link.type, "supp.link" = supp.link)
}
return(out.link)
}
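## Illustrative usage sketch (editor's addition, not package code):
## assumes a Poisson GLMM fitted with 'lme4'; data are simulated and
## all names below are hypothetical.
## Not run:
##   library(lme4)
##   set.seed(1)
##   d <- data.frame(y = rpois(40, 2), x = rnorm(40), g = gl(8, 5))
##   fm <- glmer(y ~ x + (1 | g), data = d, family = poisson)
##   fam.link.mer(fm)
##   ##should list family "poisson", link "log", and supp.link "yes"
## End(Not run)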
## ----- end of file: AICcmodavg/R/fam.link.mer.r -----
##generic
importance <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
cand.set <- formatCands(cand.set)
UseMethod("importance", cand.set)
}
##default
importance.default <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
stop("\nFunction not yet defined for this object class\n")
}
##aov
importance.AICaov.lm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
  ##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
  ##add a check to determine if the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##betareg
importance.AICbetareg <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
  ##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
##determine if parameter is on mean or phi
if(regexpr(pattern = "\\(phi\\)_", parm) == "-1") {
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients$mean))
} else {
##replace parm
parm <- gsub(pattern = "\\(phi\\)_", "", parm)
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients$precision))
}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
  ##add a check to determine if the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##sclm
importance.AICsclm.clm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
  ##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
  ##add a check to determine if the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##clm
importance.AICclm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
  ##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
  ##add a check to determine if the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##clmm
importance.AICclmm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
  ##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
  ##add a check to determine if the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##clogit
importance.AICclogit.coxph <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
  ##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
  ##add a check to determine if the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##coxme
importance.AICcoxme <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
  ##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) labels(coef(i)))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
  ##add a check to determine if the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##coxph
importance.AICcoxph <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
  ##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
  ##add a check to determine if the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##glm
importance.AICglm.lm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
  ##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##glmer
importance.AICglmerMod <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN = function(i) labels(fixef(i)))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##glmmTMB
importance.AICglmmTMB <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN = function(i) labels(fixef(i)$cond))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##gls
importance.AICgls <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) labels(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##hurdle
importance.AIChurdle <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) labels(coef(i)))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##lm
importance.AIClm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##lme
importance.AIClme <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) labels(summary(i)$coefficients$fixed))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##lmekin
importance.AIClmekin <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN = function(i) labels(fixef(i)))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##maxlike
importance.AICmaxlikeFit.list <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN = function(i) labels(fixef(i)))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##mer
importance.AICmer <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) labels(fixef(i)))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##lmerModLmerTest
importance.AIClmerModLmerTest <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) labels(fixef(i)))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##multinom
importance.AICmultinom.nnet <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) colnames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##glm.nb
importance.AICnegbin.glm.lm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##nlmer
importance.AICnlmerMod <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN = function(i) labels(fixef(i)))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##polr
importance.AICpolr <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##rlm
importance.AICrlm.lm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine whether the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##survreg
importance.AICsurvreg <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check whether the list is named when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and embedded white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) names(summary(i)$coefficients))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##colext
importance.AICunmarkedFitColExt <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##psi - initial occupancy
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$psi)))
##create label for parm
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##gamma - colonization
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$col)))
##create label for parm
parm.unmarked <- "col"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##epsilon - extinction
if(identical(parm.type, "epsilon")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$ext)))
##create label for parm
parm.unmarked <- "ext"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##occu
importance.AICunmarkedFitOccu <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##occuFP
importance.AICunmarkedFitOccuFP <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##false positives - fp
if(identical(parm.type, "fp")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$fp)))
parm.unmarked <- "fp"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##occuRN
importance.AICunmarkedFitOccuRN <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##pcount
importance.AICunmarkedFitPCount <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
##create label for parm
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##pcountOpen
importance.AICunmarkedFitPCO <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##gamma - recruitment
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$gamma)))
##determine if same H0 on gamma (gamConst, gamAR, gamTrend)
strip.gam <- sapply(mod_formula, FUN = function(i) unlist(strsplit(i, "\\("))[[1]])
unique.gam <- unique(strip.gam)
if(length(unique.gam) > 1) stop("\nDifferent formulations of gamma parameter occur among models:\n",
                                "beta estimates cannot be model-averaged\n")
##create label for parm
parm.unmarked <- unique.gam
parm <- paste(unique.gam, "(", parm, ")", sep="")
}
##omega - apparent survival
if(identical(parm.type, "omega")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$omega)))
##create label for parm
parm.unmarked <- "omega"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##distsamp
importance.AICunmarkedFitDS <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(length(keyid) > 1) stop("\nDifferent key functions used across models:\n",
"cannot compute model-averaged estimate\n")
if(identical(keyid, "uniform")) stop("\nDetection parameter not found in models\n")
##set key prefix used in coef( )
if(identical(keyid, "halfnorm")) {
parm.key <- "sigma"
}
if(identical(keyid, "hazard")) {
parm.key <- "shape"
}
if(identical(keyid, "exp")) {
parm.key <- "rate"
}
##intercept label differs for this model type
if(identical(parm, "Int")) {parm <- "(Intercept)"}
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm.key, "(", parm, "))", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", parm.key, "(", reversed.parm, "))", sep="")}
##stop("\nImportance values for detection covariates not yet supported for unmarkedFitDS class\n")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##gdistsamp
importance.AICunmarkedFitGDS <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
stop("\nImportance values for detection covariates not yet supported for unmarkedFitGDS class\n")
}
##availability
if(identical(parm.type, "phi")) {
stop("\nImportance values for availability covariates not yet supported for unmarkedFitGDS class\n")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##multinomPois
importance.AICunmarkedFitMPois <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "lambda"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##gmultmix
importance.AICunmarkedFitGMM <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lambda"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##availability
if(identical(parm.type, "phi")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$phi)))
parm.unmarked <- "phi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##check that the same number of models include and exclude the parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##gpcount
importance.AICunmarkedFitGPC <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##if modnames are not supplied, use names from cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lambda"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##availability
if(identical(parm.type, "phi")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$phi)))
parm.unmarked <- "phi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##add a check to determine if the same number of models include and exlude parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##occuMulti
importance.AICunmarkedFitOccuMulti <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading and trailing white space and within parm
##parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(x) labels(coef(x@estimates@estimates$state)))
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(x) labels(coef(x@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##add a check to determine if the same number of models include and exlude parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##occuMS
importance.AICunmarkedFitOccuMS <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading and trailing white space and within parm
##parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(x) labels(coef(x@estimates@estimates$state)))
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(x) labels(coef(x@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##transition
if(identical(parm.type, "phi")) {
##check that parameter appears in all models
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'phi\' does not appear in single-season models\n")
}
mod_formula <- lapply(cand.set, FUN = function(x) labels(coef(x@estimates@estimates$transition)))
parm.unmarked <- "phi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##add a check to determine if the same number of models include and exlude parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##occuTTD
importance.AICunmarkedFitOccuTTD <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##psi - initial occupancy
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$psi)))
##create label for parm
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##gamma - colonization
if(identical(parm.type, "gamma")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'gamma\' does not appear in single-season models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$col)))
##create label for parm
parm.unmarked <- "col"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##epsilon - extinction
if(identical(parm.type, "epsilon")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'epsilon\' does not appear in single-season models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$ext)))
##create label for parm
parm.unmarked <- "ext"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect - lambda parameter is a rate of a species not detected in t to be detected at next time step
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##add a check to determine if the same number of models include and exlude parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##multmixOpen
importance.AICunmarkedFitMMO <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##gamma - recruitment
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$gamma)))
##determine if same H0 on gamma (gamConst, gamAR, gamTrend)
strip.gam <- sapply(mod_formula, FUN = function(i) unlist(strsplit(i, "\\("))[[1]])
unique.gam <- unique(strip.gam)
if(length(unique.gam) > 1) stop("\nDifferent formulations of gamma parameter occur among models:\n
beta estimates cannot be model-averaged\n")
##create label for parm
parm.unmarked <- unique.gam
parm <- paste(unique.gam, "(", parm, ")", sep="")
}
##omega - apparent survival
if(identical(parm.type, "omega")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$omega)))
##create label for parm
parm.unmarked <- "omega"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
not.include <- lapply(cand.set, FUN = function(i) i@formlist$iotaformula)
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'iota\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged estimate across all models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$iota)))
##create label for parm
parm.unmarked <- "iota"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula<-lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##add a check to determine if the same number of models include and exlude parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##distsampOpen
importance.AICunmarkedFitDSO <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1,
parm.type = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?importance for details\n")}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##gamma - recruitment
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$gamma)))
##determine if same H0 on gamma (gamConst, gamAR, gamTrend)
strip.gam <- sapply(mod_formula, FUN = function(i) unlist(strsplit(i, "\\("))[[1]])
unique.gam <- unique(strip.gam)
if(length(unique.gam) > 1) stop("\nDifferent formulations of gamma parameter occur among models:\n
beta estimates cannot be model-averaged\n")
##create label for parm
parm.unmarked <- unique.gam
parm <- paste(unique.gam, "(", parm, ")", sep="")
}
##omega - apparent survival
if(identical(parm.type, "omega")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$omega)))
##create label for parm
parm.unmarked <- "omega"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
not.include <- lapply(cand.set, FUN = function(i) i@formlist$iotaformula)
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'iota\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged estimate across all models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$iota)))
##create label for parm
parm.unmarked <- "iota"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##detect
if(identical(parm.type, "detect")) {
mod_formula<-lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "sigma"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##add a check to determine if the same number of models include and exlude parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##vglm
importance.AICvglm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, c.hat = 1, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) labels(coefficients(i)))
##check whether parm is involved in interaction or if label changes for some models - e.g., ZIP models
##if : not already included
if(regexpr(":", parm, fixed = TRUE) == -1) {
##if : not included
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) warning("\nLabel of parameter of interest seems to change across models:\n",
"check model syntax for possible problems\n")
} else {
##if : already included
##remove : from parm
simple.parm <- unlist(strsplit(parm, split = ":"))[1]
##search for simple.parm and parm in model formulae
no.colon <- sum(ifelse(attr(regexpr(simple.parm, mod_formula, fixed = TRUE), "match.length") != "-1", 1, 0))
with.colon <- sum(ifelse(attr(regexpr(parm, mod_formula, fixed = TRUE), "match.length") != "-1", 0, 1))
##check if both are > 0
if(no.colon > 0 && with.colon > 0) warning("\nLabel of parameter of interest seems to change across models:\n",
"check model syntax for possible problems\n")
}
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##add a check to determine if the same number of models include and exlude parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##zeroinfl
importance.AICzeroinfl <- function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
##extract labels
mod_formula <- lapply(cand.set, FUN=function(i) labels(coef(i)))
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=length(cand.set), ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:length(cand.set)) {
idents <- NULL
form <- mod_formula[[i]]
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(paste(parm), form[j])
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
}
}
include[i] <- ifelse(any(idents==1), 1, 0)
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new_table <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs,
sort = FALSE)
##add a check to determine if the same number of models include and exlude parameter
if (length(which(include == 1)) != length(which(include != 1)) ) {
stop("\nImportance values are only meaningful when the number of models with and without parameter are equal\n")
}
w.plus <- sum(new_table[which(include == 1), 6]) #select models including a given parameter
w.minus <- 1 - w.plus
imp <- list("parm" = parm, "w.plus" = w.plus, "w.minus" = w.minus)
class(imp) <- c("importance", "list")
return(imp)
}
##function for nicer printing of importance values
print.importance <- function(x, digits = 2, ...) {
cat("\nImportance values of '", x$parm, "':\n\n", sep = "")
cat("w+ (models including parameter):", round(x$w.plus, digits = digits), "\n")
cat("w- (models excluding parameter):", round(x$w.minus, digits = digits), "\n")
cat("\n")
}
| /scratch/gouwar.j/cran-all/cranData/AICcmodavg/R/importance.R |
##MacKenzie and Bailey goodness of fit test for single season occupancy models
##generic function to compute chi-square
mb.chisq <- function(mod, print.table = TRUE, ...){
UseMethod("mb.chisq", mod)
}
mb.chisq.default <- function(mod, print.table = TRUE, ...){
stop("\nFunction not yet defined for this object class\n")
}
##for single-season occupancy models of class unmarkedFitOccu
mb.chisq.unmarkedFitOccu <- function(mod, print.table = TRUE, ...) {
##step 1:
##extract detection histories
y.raw <- mod@data@y
##if some rows are all NA and sites are discarded, adjust sample size accordingly
N.raw <- nrow(y.raw)
#if(all NA) {N - number of rows with all NA}
##identify sites without data
na.raw <- apply(X = y.raw, MARGIN = 1, FUN = function(i) all(is.na(i)))
##remove sites without data
y.data <- y.raw[!na.raw, ]
N <- N.raw - sum(na.raw)
#N is required for computations in the end
Ts <- ncol(y.data)
det.hist <- apply(X = y.data, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
##compute predicted values of occupancy
preds.psi <- predict(mod, type = "state")$Predicted
##compute predicted values of p
preds.p <- matrix(data = predict(mod, type = "det")$Predicted,
ncol = Ts, byrow = TRUE)
##assemble in data.frame
out.hist <- data.frame(det.hist, preds.psi, stringsAsFactors = TRUE)
##identify unique histories
un.hist <- unique(det.hist)
n.un.hist <- length(un.hist)
##identify if missing values occur
na.vals <- length(grep(pattern = "NA", x = un.hist)) > 0
if(na.vals) {
##identify each history with NA
id.na <- grep(pattern = "NA", x = un.hist)
id.det.hist.na <- grep(pattern = "NA", x = det.hist)
##cohorts with NA
cohort.na <- sort(un.hist[id.na])
n.cohort.na <- length(cohort.na)
##determine cohorts that will be grouped together (same missing value)
unique.na <- gsub(pattern = "NA", replacement = "N", x = cohort.na)
##determine which visit has missing value
na.visits <- sapply(strsplit(x = unique.na, split = ""), FUN = function(i) paste(ifelse(i == "N", 1, 0), collapse = ""))
##add cohort labels for histories
names(cohort.na) <- na.visits
##number of histories in each cohort
n.hist.missing.cohorts <- table(na.visits)
##number of missing cohorts
n.missing.cohorts <- length(n.hist.missing.cohorts)
out.hist.na <- out.hist[id.det.hist.na, ]
out.hist.na$det.hist <- droplevels(out.hist.na$det.hist)
##groupings in out.hist.na
just.na <- sapply(X = out.hist.na$det.hist, FUN = function(i) gsub(pattern = "1", replacement = "0", x = i))
out.hist.na$coh <- sapply(X = just.na, FUN = function(i) gsub(pattern = "NA", replacement = "1", x = i))
##number of sites in each missing cohort
freqs.missing.cohorts <- table(out.hist.na$coh)
##number of sites with each history
na.freqs <- table(det.hist[id.det.hist.na])
preds.p.na <- preds.p[id.det.hist.na, ]
##cohorts without NA
cohort.not.na <- sort(un.hist[-id.na])
out.hist.not.na <- out.hist[-id.det.hist.na, , drop = FALSE]
out.hist.not.na$det.hist <- droplevels(out.hist.not.na$det.hist)
n.cohort.not.na <- length(cohort.not.na)
n.sites.not.na <- length(det.hist) - length(id.det.hist.na)
preds.p.not.na <- preds.p[-id.det.hist.na, ]
} else {
cohort.not.na <- sort(un.hist)
out.hist.not.na <- out.hist
preds.p.not.na <- preds.p
n.cohort.not.na <- length(cohort.not.na)
n.sites.not.na <- length(det.hist)
}
##for each missing data cohort, determine number of sites for each
##iterate over each site for each unique history
if(n.cohort.not.na > 0) { ##expected frequencies for non-missing data
exp.freqs <- rep(NA, n.cohort.not.na)
names(exp.freqs) <- cohort.not.na ########################################SORT ENCOUNTER HISTORIES CHECK THAT ORDER IS IDENTICAL TO OBSERVED FREQS
##iterate over detection histories
for (i in 1:n.cohort.not.na) {
eq.solved <- rep(NA, n.sites.not.na)
select.hist <- cohort.not.na[i]
##strip all values
strip.hist <- unlist(strsplit(select.hist, split = ""))
##translate each visit in probability statement
hist.mat <- matrix(NA, nrow = n.sites.not.na, ncol = Ts)
##iterate over sites
for(j in 1:n.sites.not.na) {
##in extreme cases where only a single cohort occurs without missing values
if(n.sites.not.na == 1) {
hist.mat[j, ] <- ifelse(strip.hist == "1", preds.p.not.na,
ifelse(strip.hist == "0", 1 - preds.p.not.na,
0))
} else {
hist.mat[j, ] <- ifelse(strip.hist == "1", preds.p.not.na[j, ],
ifelse(strip.hist == "0", 1 - preds.p.not.na[j, ],
0))
}
##combine into equation
combo.p <- paste(hist.mat[j, ], collapse = "*")
##for history without detection
if(sum(as.numeric(strip.hist)) == 0) {
combo.first <- paste(c(out.hist.not.na[j, "preds.psi"], combo.p), collapse = "*")
combo.psi.p <- paste((1 - out.hist.not.na[j, "preds.psi"]), "+", combo.first)
} else {
combo.psi.p <- paste(c(out.hist.not.na[j, "preds.psi"], combo.p), collapse = "*")
}
eq.solved[j] <- eval(parse(text = as.expression(combo.psi.p)))
}
exp.freqs[i] <- sum(eq.solved, na.rm = TRUE)
}
##for each detection history, compute observed frequencies
freqs <- table(out.hist.not.na$det.hist)
out.freqs <- matrix(NA, nrow = n.cohort.not.na, ncol = 4)
colnames(out.freqs) <- c("Cohort", "Observed", "Expected", "Chi-square")
rownames(out.freqs) <- names(freqs)
##cohort
out.freqs[, 1] <- 0
##observed
out.freqs[, 2] <- freqs
##expected
out.freqs[, 3] <- exp.freqs
##chi-square
out.freqs[, 4] <- ((out.freqs[, "Observed"] - out.freqs[, "Expected"])^2)/out.freqs[, "Expected"]
}
##if missing values
if(na.vals) {
##create list to store the chisquare for each cohort
missing.cohorts <- list( )
##check if preds.p.na has only 1 row and change to matrix
if(!is.matrix(preds.p.na)) {preds.p.na <- matrix(data = preds.p.na, nrow = 1)}
for(m in 1:n.missing.cohorts) {
##select cohort
select.cohort <- out.hist.na[which(out.hist.na$coh == names(freqs.missing.cohorts)[m]), ]
select.preds.p.na <- preds.p.na[which(out.hist.na$coh == names(freqs.missing.cohorts)[m]), ]
##replace NA's with 1 to remove from likelihood
if(!is.matrix(select.preds.p.na)) {select.preds.p.na <- matrix(data = select.preds.p.na, nrow = 1)}
select.preds.p.na[, gregexpr(pattern = "N", text = gsub(pattern = "NA", replacement = "N", x = select.cohort$det.hist[1]))[[1]]] <- 1
n.total.sites <- nrow(select.cohort)
freqs.na <- table(droplevels(select.cohort$det.hist))
cohort.na.un <- sort(unique(select.cohort$det.hist))
n.hist.na <- length(freqs.na)
exp.na <- rep(NA, n.hist.na)
names(exp.na) <- cohort.na.un
for(i in 1:n.hist.na) {
##number of sites in given history
n.sites.hist <- freqs.na[i] ##this should be number of sites for each history
eq.solved <- rep(NA, n.total.sites)
##replace NA's with N
select.hist <- gsub(pattern = "NA", replacement = "N", x = cohort.na.un[i])
##strip all values
strip.hist <- unlist(strsplit(select.hist, split = ""))
##translate each visit in probability statement
hist.mat <- matrix(NA, nrow = n.total.sites, ncol = Ts)
##iterate over sites
for(j in 1:n.total.sites) {
##################
##modified
if(Ts == 1) {
hist.mat[j, ] <- ifelse(strip.hist == "1", select.preds.p.na[j],
ifelse(strip.hist == "0", 1 - select.preds.p.na[j], 1))
} else {
hist.mat[j, ] <- ifelse(strip.hist == "1", select.preds.p.na[j, ],
ifelse(strip.hist == "0", 1 - select.preds.p.na[j, ], 1))
}
##modified
##################
##replace NA by 1 (missing visit is removed from likelihood)
###################################################
###for missing value, remove occasion
###################################################
##combine into equation
combo.p <- paste(hist.mat[j, ], collapse = "*")
##for history without detection
if(sum(as.numeric(gsub(pattern = "N", replacement = "0", x = strip.hist))) == 0) {
combo.first <- paste(c(select.cohort[j, "preds.psi"], combo.p), collapse = "*")
combo.psi.p <- paste((1 - select.cohort[j, "preds.psi"]), "+", combo.first)
} else {
combo.psi.p <- paste(c(select.cohort[j, "preds.psi"], combo.p), collapse = "*")
}
eq.solved[j] <- eval(parse(text = as.expression(combo.psi.p)))
}
exp.na[i] <- sum(eq.solved, na.rm = TRUE)
}
##compute chisq for missing data cohorts
##for each detection history, compute observed frequencies
out.freqs.na <- matrix(NA, nrow = n.hist.na, ncol = 4)
colnames(out.freqs.na) <- c("Cohort", "Observed", "Expected", "Chi-square")
rownames(out.freqs.na) <- cohort.na.un
##cohort
out.freqs.na[, 1] <- m
##observed
out.freqs.na[, 2] <- freqs.na
##expected
out.freqs.na[, 3] <- exp.na
##chi-square
out.freqs.na[, 4] <- ((out.freqs.na[, "Observed"] - out.freqs.na[, "Expected"])^2)/out.freqs.na[, "Expected"]
missing.cohorts[[m]] <- list(out.freqs.na = out.freqs.na)
}
}
##test statistic is chi-square for all possible detection histories
##for observed detection histories, chisq = sum((obs - exp)^2/exp) = Y
##to avoid computing all possible detection histories, it is possible to obtain it by subtraction:
##for unobserved detection histories, chisq = sum((obs - exp)^2/exp) = sum((0 - exp)^2/exp) = sum(exp) = X
##X = N.sites - sum(exp values from observed detection histories)
##Thus, test statistic = Y + X = Y + N - sum(exp values from observed detection histories)
##compute partial chi-square for observed detection histories (Y)
##chisq.obs.det <- sum(((out.freqs[, "Observed"] - out.freqs[, "Expected"])^2)/out.freqs[, "Expected"])
##compute partial chi-square for unobserved detection histories (X)
if(na.vals) {
chisq.missing <- do.call("rbind", lapply(missing.cohorts, FUN = function(i) i$out.freqs.na))
if(n.cohort.not.na > 0) {
chisq.unobs.det <- N - sum(out.freqs[, "Expected"]) - sum(chisq.missing[, "Expected"])
chisq.table <- rbind(out.freqs, chisq.missing)
} else {
chisq.unobs.det <- N - sum(chisq.missing[, "Expected"])
chisq.table <- chisq.missing
}
} else {
chisq.unobs.det <- N - sum(out.freqs[, "Expected"])
chisq.na <- 0
chisq.table <- out.freqs
}
##test statistic (Y + X = Y + N - sum(exp values from observed detection histories))
chisq <- sum(chisq.table[, "Chi-square"]) + chisq.unobs.det
if(print.table) {
out <- list(chisq.table = chisq.table, chi.square = chisq,
model.type = "single-season")
} else {
out <- list(chi.square = chisq, model.type = "single-season")
}
class(out) <- "mb.chisq"
return(out)
}
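##----------------------------------------------------------------------
##Illustration of the subtraction identity used above (commented sketch,
##not package code; all numbers are hypothetical). Because obs = 0 for
##every unobserved history, each contributes (0 - exp)^2/exp = exp, and
##expected frequencies over all possible histories sum to N:
##  N.sites <- 50
##  exp.observed <- c(20.4, 15.2, 8.1)  ##expected freqs of observed histories
##  N.sites - sum(exp.observed)         ##chi-square part for all unobserved histories
##----------------------------------------------------------------------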
##dynamic occupancy models of class unmarkedFitColExt
mb.chisq.unmarkedFitColExt <- function(mod, print.table = TRUE, ...) {
##information on data set
##extract number of seasons
orig.data <- mod@data
detections <- orig.data@y
n.seasons <- orig.data@numPrimary
n.sites <- nrow(detections)
##total visits
total.visits <- ncol(detections)
##number of visits per season
n.visits <- total.visits/n.seasons
##determine if certain sites have no data for certain visits
##split encounter history for each season
starts <- seq(1, total.visits, by = n.visits)
##create list to hold seasons
y.seasons <- list( )
for(k in 1:n.seasons) {
first.col <- starts[k]
y.seasons[[k]] <- detections[, first.col:(first.col+n.visits-1)]
}
##check if any seasons were not sampled
y.seasonsNA <- sapply(y.seasons, FUN = function(i) all(is.na(i)))
#################################
##from model
##predict init psi
psi.init.pred <- predict(mod, type = "psi")$Predicted
##predict gamma
gam.pred <- matrix(data = predict(mod, type = "col")$Predicted,
ncol = n.seasons, byrow = TRUE)[, 1:(n.seasons - 1)]
##predict epsilon
eps.pred <- matrix(data = predict(mod, type = "ext")$Predicted,
ncol = n.seasons, byrow = TRUE)[, 1:(n.seasons - 1)]
##predicted p
p.pred <- matrix(data = predict(mod, type = "det")$Predicted,
ncol = total.visits,
nrow = n.sites, byrow = TRUE)
##divide p's for each season
p.seasons <- list( )
for(k in 1:n.seasons) {
first.col <- starts[k]
p.seasons[[k]] <- p.pred[, first.col:(first.col+n.visits-1)]
}
##compute predicted values of psi for each season
psi.seasons <- list( )
##add first year in list
psi.seasons[[1]] <- psi.init.pred
##compute psi recursively for each year given psi(t - 1), epsilon, and gamma
if(n.seasons == 2) {
psi.seasons[[2]] <- psi.seasons[[1]] * (1 - eps.pred) + (1 - psi.seasons[[1]]) * gam.pred
} else {
for(m in 2:(n.seasons)){
psi.seasons[[m]] <- psi.seasons[[m-1]] * (1 - eps.pred[, m-1]) + (1 - psi.seasons[[m-1]]) * gam.pred[, m-1]
}
}
################################################
###the following is adapted from mb.chisq
##create list to hold table for each year
out <- vector(mode = "list", length = n.seasons)
##add label for seasons
season.labels <- paste("Season", 1:n.seasons, sep = "")
names(out) <- season.labels
#names(all.chisq) <- season.labels
##iterate over seasons
for (season in 1:n.seasons) {
##step 1:
##extract detection histories
y.raw <- y.seasons[[season]]
##if some rows are all NA and sites are discarded, adjust sample size accordingly
N.raw <- nrow(y.raw)
##if all visits at a site are NA, N is reduced by the number of such rows
##identify sites without data
na.raw <- apply(X = y.raw, MARGIN = 1, FUN = function(i) all(is.na(i)))
##remove sites without data
#y.data <- y.raw[!na.raw, ] #in mb.chisq( ) for single season data, sites without data are removed
##with multiseason model, missing data in some years create an imbalance - better to maintain number of sites constant across years
##this creates a new cohort with only missing values
y.data <- y.raw
##number of observed detection histories (excludes cases with all NA's)
N <- N.raw - sum(na.raw)
##check if any columns are empty
nodata.raw <- apply(X = y.data, MARGIN = 2, FUN = function(i) all(is.na(i)))
with.data <- which(nodata.raw == FALSE)
##select only columns with data
if(any(nodata.raw)) {
y.data <- y.data[, with.data]
}
##T is required for computations in the end
##if only a single visit, ncol( ) returns NULL
###########################
##modified
if(is.vector(y.data)){
Ts <- 1
} else {
Ts <- ncol(y.data)
}
##if no data, skip to next season
if(Ts == 0) {next}
##with a single visit, apply( ) returns an error, so paste directly
if(Ts == 1) {
det.hist <- paste(y.data, sep = "")
} else {
det.hist <- apply(X = y.data, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
}
##modified
###########################
##compute predicted values of occupancy for season i
preds.psi <- psi.seasons[[season]] ##MODIFIED FROM mb.chisq - change iteration number
##extract matrix of p's for season i
if(any(nodata.raw)) {
preds.p <- p.seasons[[season]][, with.data]
} else {
preds.p <- p.seasons[[season]]
}
##assemble in data.frame
out.hist <- data.frame(det.hist, preds.psi, stringsAsFactors = TRUE)
##identify unique histories
un.hist <- unique(det.hist)
n.un.hist <- length(un.hist)
##identify if missing values occur
na.vals <- length(grep(pattern = "NA", x = un.hist)) > 0
if(na.vals) {
##identify each history with NA
id.na <- grep(pattern = "NA", x = un.hist)
id.det.hist.na <- grep(pattern = "NA", x = det.hist)
##cohorts with NA
cohort.na <- sort(un.hist[id.na])
n.cohort.na <- length(cohort.na)
##determine cohorts that will be grouped together (same missing value)
unique.na <- gsub(pattern = "NA", replacement = "N", x = cohort.na)
##determine which visit has missing value
na.visits <- sapply(strsplit(x = unique.na, split = ""), FUN = function(i) paste(ifelse(i == "N", 1, 0), collapse = ""))
##add cohort labels for histories
names(cohort.na) <- na.visits
##number of histories in each cohort
n.hist.missing.cohorts <- table(na.visits)
##number of missing cohorts
n.missing.cohorts <- length(n.hist.missing.cohorts)
out.hist.na <- out.hist[id.det.hist.na, ]
out.hist.na$det.hist <- droplevels(out.hist.na$det.hist)
##groupings in out.hist.na
just.na <- sapply(X = out.hist.na$det.hist, FUN = function(i) gsub(pattern = "1", replacement = "0", x = i))
out.hist.na$coh <- sapply(X = just.na, FUN = function(i) gsub(pattern = "NA", replacement = "1", x = i))
##number of sites in each missing cohort
freqs.missing.cohorts <- table(out.hist.na$coh)
##number of sites with each history
na.freqs <- table(det.hist[id.det.hist.na])
#####################
##modified
if(Ts == 1) {
preds.p.na <- preds.p[id.det.hist.na]
} else {
preds.p.na <- preds.p[id.det.hist.na, ]
}
##modified
#####################
##cohorts without NA
cohort.not.na <- sort(un.hist[-id.na])
out.hist.not.na <- out.hist[-id.det.hist.na, , drop = FALSE]
out.hist.not.na$det.hist <- droplevels(out.hist.not.na$det.hist)
n.cohort.not.na <- length(cohort.not.na)
n.sites.not.na <- length(det.hist) - length(id.det.hist.na)
#####################
##modified
if(Ts == 1) {
preds.p.not.na <- preds.p[-id.det.hist.na]
} else {
preds.p.not.na <- preds.p[-id.det.hist.na, ]
}
##modified
#####################
} else {
cohort.not.na <- sort(un.hist)
out.hist.not.na <- out.hist
preds.p.not.na <- preds.p
n.cohort.not.na <- length(cohort.not.na)
n.sites.not.na <- length(det.hist)
}
##for each missing data cohort, determine number of sites for each
##iterate over each site for each unique history
if(n.cohort.not.na > 0) { ##expected frequencies for non-missing data
exp.freqs <- rep(NA, n.cohort.not.na)
names(exp.freqs) <- cohort.not.na ########################################SORT ENCOUNTER HISTORIES CHECK THAT ORDER IS IDENTICAL TO OBSERVED FREQS
##iterate over detection histories
for (i in 1:n.cohort.not.na) {
eq.solved <- rep(NA, n.sites.not.na)
select.hist <- cohort.not.na[i]
##strip all values
strip.hist <- unlist(strsplit(select.hist, split = ""))
##translate each visit in probability statement
hist.mat <- matrix(NA, nrow = n.sites.not.na, ncol = Ts)
##iterate over sites
for(j in 1:n.sites.not.na) {
##in extreme cases where only a single cohort occurs without missing values
############
##modified
if(n.sites.not.na == 1 || Ts == 1) {
##modified
#############
hist.mat[j, ] <- ifelse(strip.hist == "1", preds.p.not.na,
ifelse(strip.hist == "0", 1 - preds.p.not.na,
0))
} else {
hist.mat[j, ] <- ifelse(strip.hist == "1", preds.p.not.na[j, ],
ifelse(strip.hist == "0", 1 - preds.p.not.na[j, ],
0))
}
##combine into equation
combo.p <- paste(hist.mat[j, ], collapse = "*")
##for history without detection
if(sum(as.numeric(strip.hist)) == 0) {
combo.first <- paste(c(out.hist.not.na[j, "preds.psi"], combo.p), collapse = "*")
combo.psi.p <- paste((1 - out.hist.not.na[j, "preds.psi"]), "+", combo.first)
} else {
combo.psi.p <- paste(c(out.hist.not.na[j, "preds.psi"], combo.p), collapse = "*")
}
eq.solved[j] <- eval(parse(text = as.expression(combo.psi.p)))
}
exp.freqs[i] <- sum(eq.solved, na.rm = TRUE)
}
##for each detection history, compute observed frequencies
freqs <- table(out.hist.not.na$det.hist)
out.freqs <- matrix(NA, nrow = n.cohort.not.na, ncol = 4)
colnames(out.freqs) <- c("Cohort", "Observed", "Expected", "Chi-square")
rownames(out.freqs) <- names(freqs)
##cohort
out.freqs[, 1] <- 0
##observed
out.freqs[, 2] <- freqs
##expected
out.freqs[, 3] <- exp.freqs
##chi-square
out.freqs[, 4] <- ((out.freqs[, "Observed"] - out.freqs[, "Expected"])^2)/out.freqs[, "Expected"]
}
##if missing values
if(na.vals) {
##create list to store the chisquare for each cohort
missing.cohorts <- list( )
##check if preds.p.na has only 1 row and change to matrix
if(!is.matrix(preds.p.na)) {preds.p.na <- matrix(data = preds.p.na, nrow = 1)}
for(m in 1:n.missing.cohorts) {
##select cohort
select.cohort <- out.hist.na[which(out.hist.na$coh == names(freqs.missing.cohorts)[m]), ]
#######################
##modified
if(Ts == 1) {
select.preds.p.na <- preds.p.na[which(out.hist.na$coh == names(freqs.missing.cohorts)[m])]
} else {
select.preds.p.na <- preds.p.na[which(out.hist.na$coh == names(freqs.missing.cohorts)[m]), ]
}
##modified
#######################
##replace NA's with 1 to remove from likelihood
if(!is.matrix(select.preds.p.na)) {select.preds.p.na <- matrix(data = select.preds.p.na, nrow = 1)}
select.preds.p.na[, gregexpr(pattern = "N", text = gsub(pattern = "NA", replacement = "N", x = select.cohort$det.hist[1]))[[1]]] <- 1
n.total.sites <- nrow(select.cohort)
freqs.na <- table(droplevels(select.cohort$det.hist))
cohort.na.un <- sort(unique(select.cohort$det.hist))
n.hist.na <- length(freqs.na)
exp.na <- rep(NA, n.hist.na)
names(exp.na) <- cohort.na.un
for(i in 1:n.hist.na) {
##number of sites in given history
n.sites.hist <- freqs.na[i] ##this should be number of sites for each history
eq.solved <- rep(NA, n.total.sites)
##replace NA's with N
select.hist <- gsub(pattern = "NA", replacement = "N", x = cohort.na.un[i])
##strip all values
strip.hist <- unlist(strsplit(select.hist, split = ""))
##translate each visit in probability statement
hist.mat <- matrix(NA, nrow = n.total.sites, ncol = Ts)
##iterate over sites
for(j in 1:n.total.sites) {
hist.mat[j, ] <- ifelse(strip.hist == "1", select.preds.p.na[j, ],
ifelse(strip.hist == "0", 1 - select.preds.p.na[j, ], 1))
##replace NA by 1 (missing visit is removed from likelihood)
###################################################
###for missing value, remove occasion
###################################################
##combine into equation
combo.p <- paste(hist.mat[j, ], collapse = "*")
##for history without detection
if(sum(as.numeric(gsub(pattern = "N", replacement = "0", x = strip.hist))) == 0) {
combo.first <- paste(c(select.cohort[j, "preds.psi"], combo.p), collapse = "*")
combo.psi.p <- paste((1 - select.cohort[j, "preds.psi"]), "+", combo.first)
} else {
combo.psi.p <- paste(c(select.cohort[j, "preds.psi"], combo.p), collapse = "*")
}
eq.solved[j] <- eval(parse(text = as.expression(combo.psi.p)))
}
exp.na[i] <- sum(eq.solved, na.rm = TRUE)
}
##compute chisq for missing data cohorts
##for each detection history, compute observed frequencies
out.freqs.na <- matrix(NA, nrow = n.hist.na, ncol = 4)
colnames(out.freqs.na) <- c("Cohort", "Observed", "Expected", "Chi-square")
rownames(out.freqs.na) <- cohort.na.un
##cohort
out.freqs.na[, 1] <- m
##observed
out.freqs.na[, 2] <- freqs.na
##expected
out.freqs.na[, 3] <- exp.na
##chi-square
out.freqs.na[, 4] <- ((out.freqs.na[, "Observed"] - out.freqs.na[, "Expected"])^2)/out.freqs.na[, "Expected"]
missing.cohorts[[m]] <- list(out.freqs.na = out.freqs.na)
}
}
##test statistic is chi-square for all possible detection histories
##for observed detection histories, chisq = sum((obs - exp)^2/exp) = Y
##to avoid computing all possible detection histories, it is possible to obtain it by subtraction:
##for unobserved detection histories, chisq = sum((obs - exp)^2/exp) = sum((0 - exp)^2/exp) = sum(exp) = X
##X = N.sites - sum(exp values from observed detection histories)
##Thus, test statistic = Y + X = Y + N - sum(exp values from observed detection histories)
##compute partial chi-square for observed detection histories (Y)
##chisq.obs.det <- sum(((out.freqs[, "Observed"] - out.freqs[, "Expected"])^2)/out.freqs[, "Expected"])
##compute partial chi-square for unobserved detection histories (X)
if(na.vals) {
chisq.missing <- do.call("rbind", lapply(missing.cohorts, FUN = function(i) i$out.freqs.na))
##check for sites never sampled in table
sites.never <- rownames(chisq.missing)
never.sampled <- grep(pattern = paste(rep(NA, Ts), collapse = ""), x = sites.never)
if(length(never.sampled) > 0) {
##remove row for site never sampled
chisq.missing <- chisq.missing[-never.sampled, , drop = FALSE]
}
if(n.cohort.not.na > 0) {
chisq.unobs.det <- N - sum(out.freqs[, "Expected"]) - sum(chisq.missing[, "Expected"])
chisq.table <- rbind(out.freqs, chisq.missing)
} else {
chisq.unobs.det <- N - sum(chisq.missing[, "Expected"])
chisq.table <- chisq.missing
}
} else {
chisq.unobs.det <- N - sum(out.freqs[, "Expected"])
chisq.na <- 0
chisq.table <- out.freqs
}
##test statistic (Y + X = Y + N - sum(exp values from observed detection histories))
chisq <- sum(chisq.table[, "Chi-square"]) + chisq.unobs.det
if(print.table) {
out[[season]] <- list(chisq.table = chisq.table, chi.square = chisq) #change to iteration number
} else {
out[[season]] <- list(chi.square = chisq)
}
}
##add a final named element combining the chi-square
all.chisq <- unlist(lapply(out, FUN = function(i) i$chi.square))
##add label for seasons
sampled.seasons <- which(!y.seasonsNA)
new.season.labels <- paste("Season", sampled.seasons, sep = " ")
names(all.chisq) <- new.season.labels
##print table or only test statistic
if(print.table) {
out.nice <- list(tables = out, all.chisq = all.chisq,
n.seasons = n.seasons, model.type = "dynamic",
missing.seasons = y.seasonsNA)
} else {
out.nice <- list(all.chisq = all.chisq, n.seasons = n.seasons,
model.type = "dynamic",
missing.seasons = y.seasonsNA)
}
class(out.nice) <- "mb.chisq"
return(out.nice)
}
##Royle-Nichols count model of class unmarkedFitOccuRN - modified by Dan Linden
mb.chisq.unmarkedFitOccuRN <- function (mod, print.table = TRUE, maxK = NULL, ...){
##add a check to inform user that maxK is now extracted from model object
if(is.null(maxK)) {
maxK <- mod@K
}
y.raw <- mod@data@y
N.raw <- nrow(y.raw)
na.raw <- apply(X = y.raw, MARGIN = 1, FUN = function(i) all(is.na(i)))
y.data <- y.raw[!na.raw, ]
N <- N.raw - sum(na.raw)
###################
###modified by Dan Linden
T <- ncol(y.data)
K <- 0:maxK
################
det.hist <- apply(X = y.data, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
preds.lam <- predict(mod, type = "state")$Predicted
preds.p <- matrix(data = predict(mod, type = "det")$Predicted,
ncol = T, byrow = TRUE)
out.hist <- data.frame(det.hist, preds.lam, stringsAsFactors = TRUE)
un.hist <- unique(det.hist)
n.un.hist <- length(un.hist)
na.vals <- length(grep(pattern = "NA", x = un.hist)) > 0
if (na.vals) {
id.na <- grep(pattern = "NA", x = un.hist)
id.det.hist.na <- grep(pattern = "NA", x = det.hist)
cohort.na <- sort(un.hist[id.na])
n.cohort.na <- length(cohort.na)
unique.na <- gsub(pattern = "NA", replacement = "N",
x = cohort.na)
na.visits <- sapply(strsplit(x = unique.na, split = ""),
FUN = function(i) paste(ifelse(i == "N", 1, 0), collapse = ""))
names(cohort.na) <- na.visits
n.hist.missing.cohorts <- table(na.visits)
n.missing.cohorts <- length(n.hist.missing.cohorts)
out.hist.na <- out.hist[id.det.hist.na, ]
out.hist.na$det.hist <- droplevels(out.hist.na$det.hist)
just.na <- sapply(X = out.hist.na$det.hist,
FUN = function(i) gsub(pattern = "1", replacement = "0", x = i))
out.hist.na$coh <- sapply(X = just.na,
FUN = function(i) gsub(pattern = "NA", replacement = "1", x = i))
freqs.missing.cohorts <- table(out.hist.na$coh)
na.freqs <- table(det.hist[id.det.hist.na])
preds.p.na <- preds.p[id.det.hist.na, ]
cohort.not.na <- sort(un.hist[-id.na])
out.hist.not.na <- out.hist[-id.det.hist.na, , drop = FALSE]
out.hist.not.na$det.hist <- droplevels(out.hist.not.na$det.hist)
n.cohort.not.na <- length(cohort.not.na)
n.sites.not.na <- length(det.hist) - length(id.det.hist.na)
preds.p.not.na <- preds.p[-id.det.hist.na, ]
} else {
cohort.not.na <- sort(un.hist)
out.hist.not.na <- out.hist
preds.p.not.na <- preds.p
n.cohort.not.na <- length(cohort.not.na)
n.sites.not.na <- length(det.hist)
}
if (n.cohort.not.na > 0) {
exp.freqs <- rep(NA, n.cohort.not.na)
names(exp.freqs) <- cohort.not.na
for (i in 1:n.cohort.not.na) {
eq.solved <- rep(NA, n.sites.not.na)
select.hist <- cohort.not.na[i]
strip.hist <- unlist(strsplit(select.hist, split = ""))
#######################
###modified by Dan Linden
hist.mat <- new.hist.mat <- new.hist.mat1 <- new.hist.mat0 <- matrix(NA, nrow = n.sites.not.na, ncol = T)
#######################
for (j in 1:n.sites.not.na) {
if (n.sites.not.na == 1) {
#######################
###modified by Dan Linden
hist.mat[j,] <- preds.p.not.na
} else {
hist.mat[j,] <- preds.p.not.na[j,]}
##Pr(y.ij=1|K)
p.k.mat <- sapply(hist.mat[j,], function(r) {1 - (1 - r)^K})
##new.hist.mat1[j,] <- dpois(K,out.hist.not.na[j, "preds.lam"]) %*% p.k.mat
##new.hist.mat0[j,] <- dpois(K,out.hist.not.na[j, "preds.lam"]) %*% (1 - p.k.mat)
##new.hist.mat[j,] <- ifelse(strip.hist == "1",
## new.hist.mat1[j,], ifelse(strip.hist == "0",
## new.hist.mat0[j,], 0))
##combo.lam.p <- paste(new.hist.mat[j, ], collapse = "*")
##eq.solved[j] <- eval(parse(text = as.expression(combo.lam.p)))
###start modifications by Ken Kellner
obs <- as.integer(strip.hist)
pk <- dpois(K, out.hist.not.na[j,"preds.lam"])
cp <- t(p.k.mat) * obs + (1 - t(p.k.mat)) * (1 - obs)
prod_cp <- apply(cp, 2, prod, na.rm = TRUE)
eq.solved[j] <- sum(pk * prod_cp)
###end modifications by Ken Kellner
}
exp.freqs[i] <- sum(eq.solved, na.rm = TRUE)
}
#######################
##for each detection history, compute observed frequencies
freqs <- table(out.hist.not.na$det.hist)
out.freqs <- matrix(NA, nrow = n.cohort.not.na, ncol = 4)
colnames(out.freqs) <- c("Cohort", "Observed", "Expected",
"Chi-square")
rownames(out.freqs) <- names(freqs)
##cohort
out.freqs[, 1] <- 0
##observed
out.freqs[, 2] <- freqs
##expected
out.freqs[, 3] <- exp.freqs
##chi-square
out.freqs[, 4] <- ((out.freqs[, "Observed"] - out.freqs[,
"Expected"])^2)/out.freqs[, "Expected"]
}
##if missing values
if (na.vals) {
missing.cohorts <- list()
if (!is.matrix(preds.p.na)) {
preds.p.na <- matrix(data = preds.p.na, nrow = 1)
}
for (m in 1:n.missing.cohorts) {
select.cohort <- out.hist.na[which(out.hist.na$coh ==
names(freqs.missing.cohorts)[m]), ]
select.preds.p.na <- preds.p.na[which(out.hist.na$coh ==
names(freqs.missing.cohorts)[m]), ]
if (!is.matrix(select.preds.p.na)) {
select.preds.p.na <- matrix(data = select.preds.p.na,
nrow = 1)
}
select.preds.p.na[, gregexpr(pattern = "N",
text = gsub(pattern = "NA",
replacement = "N", x = select.cohort$det.hist[1]))[[1]]] <- 1
n.total.sites <- nrow(select.cohort)
freqs.na <- table(droplevels(select.cohort$det.hist))
cohort.na.un <- sort(unique(select.cohort$det.hist))
n.hist.na <- length(freqs.na)
exp.na <- rep(NA, n.hist.na)
names(exp.na) <- cohort.na.un
for (i in 1:n.hist.na) {
n.sites.hist <- freqs.na[i]
eq.solved <- rep(NA, n.total.sites)
select.hist <- gsub(pattern = "NA", replacement = "N",
x = cohort.na.un[i])
strip.hist <- unlist(strsplit(select.hist, split = ""))
#######################
###modified by Dan Linden
hist.mat <- new.hist.mat <- new.hist.mat1 <-new.hist.mat0 <- matrix(NA, nrow = n.total.sites, ncol = T)
for (j in 1:n.total.sites) {
hist.mat[j, ] <- select.preds.p.na[j, ]
##Pr(y.ij=1|K)
p.k.mat <- sapply(hist.mat[j,],function(r){1 - (1 - r)^K})
##new.hist.mat1[j,] <- dpois(K,select.cohort[j, "preds.lam"]) %*% p.k.mat
##new.hist.mat0[j,] <- dpois(K,select.cohort[j, "preds.lam"]) %*% (1-p.k.mat)
##new.hist.mat[j,] <- ifelse(strip.hist == "1",
## new.hist.mat1[j,], ifelse(strip.hist == "0",
## new.hist.mat0[j,], 1))
##combo.lam.p <- paste(new.hist.mat[j, ], collapse = "*")
##eq.solved[j] <- eval(parse(text = as.expression(combo.lam.p)))
###start modifications by Ken Kellner
obs <- suppressWarnings(as.integer(strip.hist))
pk <- dpois(K, select.cohort[j,"preds.lam"])
cp <- t(p.k.mat) * obs + (1 - t(p.k.mat)) * (1 - obs)
prod_cp <- apply(cp, 2, prod, na.rm = TRUE)
eq.solved[j] <- sum(pk * prod_cp)
###end modifications by Ken Kellner
}
exp.na[i] <- sum(eq.solved, na.rm = TRUE)
}
#######################
out.freqs.na <- matrix(NA, nrow = n.hist.na, ncol = 4)
colnames(out.freqs.na) <- c("Cohort", "Observed",
"Expected", "Chi-square")
rownames(out.freqs.na) <- cohort.na.un
out.freqs.na[, 1] <- m
out.freqs.na[, 2] <- freqs.na
out.freqs.na[, 3] <- exp.na
out.freqs.na[, 4] <- ((out.freqs.na[, "Observed"] -
out.freqs.na[, "Expected"])^2)/out.freqs.na[,
"Expected"]
missing.cohorts[[m]] <- list(out.freqs.na = out.freqs.na)
}
}
if (na.vals) {
chisq.missing <- do.call("rbind", lapply(missing.cohorts,
FUN = function(i) i$out.freqs.na))
if (n.cohort.not.na > 0) {
chisq.unobs.det <- N - sum(out.freqs[, "Expected"]) -
sum(chisq.missing[, "Expected"])
chisq.table <- rbind(out.freqs, chisq.missing)
} else {
chisq.unobs.det <- N - sum(chisq.missing[, "Expected"])
chisq.table <- chisq.missing
}
} else {
chisq.unobs.det <- N - sum(out.freqs[, "Expected"])
chisq.na <- 0
chisq.table <- out.freqs
}
chisq <- sum(chisq.table[, "Chi-square"]) + chisq.unobs.det
if(print.table) {
out <- list(chisq.table = chisq.table, chi.square = chisq,
model.type = "royle-nichols")
} else {
out <- list(chi.square = chisq, model.type = "royle-nichols")
}
class(out) <- "mb.chisq"
return(out)
}
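##----------------------------------------------------------------------
##Illustration of the Royle-Nichols cell probability computed above
##(commented sketch, not package code; values are hypothetical).
##The probability of a detection history marginalizes over abundance K,
##with per-visit detection probability 1 - (1 - r)^K:
##  K <- 0:25; r <- 0.3; lam <- 1.5; y <- c(1, 0, 1)
##  p.k <- 1 - (1 - r)^K
##  cond.prob <- sapply(K + 1, function(k)
##      prod(ifelse(y == 1, p.k[k], 1 - p.k[k])))
##  sum(dpois(K, lam) * cond.prob)  ##marginal Pr(observed history)
##----------------------------------------------------------------------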
##simulating data from model to compute P-value of test statistic
##create generic mb.gof.test
mb.gof.test <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){
UseMethod("mb.gof.test", mod)
}
mb.gof.test.default <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){
stop("\nFunction not yet defined for this object class\n")
}
##for single-season occupancy models of class unmarkedFitOccu
mb.gof.test.unmarkedFitOccu <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE, ncores,
cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, ...){#more bootstrap samples are recommended (e.g., 1000, 5000, or 10 000)
##extract table from fitted model
mod.table <- mb.chisq(mod)
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) mb.chisq(i)$chi.square,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) mb.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel,
ncores = ncores)
}
##determine significance
p.value <- sum(out@t.star >= out@t0)/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display <- paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
hist(out@t.star,
main = paste("Bootstrapped MacKenzie and Bailey fit statistic (", nsim, " samples)", sep = ""),
xlim = range(c(out@t.star, out@t0)), xlab = paste("Simulated statistic ", "(observed = ",
round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), " ", .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
c.hat.est <- out@t0/mean(out@t.star)
##assemble result
gof.out <- list(model.type = mod.table$model.type, chisq.table = mod.table$chisq.table, chi.square = mod.table$chi.square,
t.star = out@t.star, p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "mb.chisq"
return(gof.out)
}
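##----------------------------------------------------------------------
##Example usage (commented sketch; 'fm' and 'umf' are hypothetical
##objects and require the unmarked package):
##  fm <- occu(~ 1 ~ 1, data = umf)  ##'umf' is an unmarkedFrameOccu
##  mb.gof.test(fm, nsim = 1000)     ##dispatches to this method
##----------------------------------------------------------------------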
##dynamic occupancy models of class unmarkedFitColExt
mb.gof.test.unmarkedFitColExt <- function(mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE,
ncores, cex.axis = 1, cex.lab = 1, cex.main = 1,
lwd = 1, plot.seasons = FALSE, ...){#more bootstrap samples are recommended (e.g., 1000, 5000, or 10 000)
##extract table from fitted model
mod.table <- mb.chisq(mod)
n.seasons <- mod.table$n.seasons
n.seasons.adj <- n.seasons #total number of plots fixed to 11 or 12, depending on plots requested
missing.seasons <- mod.table$missing.seasons
##number of seasons with data
n.season.data <- sum(!missing.seasons)
##if NULL, don't print test statistic at each iteration
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) mb.chisq(i)$all.chisq,
nsim = nsim, parallel = parallel, ncores = ncores)
} else {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) mb.chisq(i)$all.chisq, #extract chi-square for each year
nsim = nsim, report = report, parallel = parallel,
ncores = ncores)
}
##list to hold results
p.vals <- list( )
if(plot.hist && !plot.seasons) {
nRows <- 1
nCols <- 1
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
##if only season-specific plots are requested
if(!plot.hist && plot.seasons) {
##determine arrangement of plots in matrix
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 12
warning("\nOnly first 12 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 12) {
##if n.seasons < 12
##if 12, 11, 10 <- 4 x 3
##if 9, 8, 7 <- 3 x 3
##if 6, 5 <- 3 x 2
##if 4 <- 2 x 2
##if 3 <- 3 x 1
##if 2 <- 2 x 1
if(n.seasons.adj >= 10) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 7) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 5) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 4) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 3
nCols <- 1
} else {
nRows <- 2
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
##if both plots for seasons and summary are requested
if(plot.hist && plot.seasons){
##determine arrangement of plots in matrix
if(plot.seasons && n.seasons >= 12) {
n.seasons.adj <- 11
warning("\nOnly first 11 seasons are plotted\n")
}
if(plot.seasons && n.seasons.adj <= 11) {
if(n.seasons.adj >= 9) {
nRows <- 4
nCols <- 3
} else {
if(n.seasons.adj >= 6) {
nRows <- 3
nCols <- 3
} else {
if(n.seasons.adj >= 4) {
nRows <- 3
nCols <- 2
} else {
if(n.seasons.adj == 3) {
nRows <- 2
nCols <- 2
} else {
if(n.seasons.adj == 2) {
nRows <- 3
nCols <- 1
}
}
}
}
}
}
##reset graphics parameters and save in object
oldpar <- par(mfrow = c(nRows, nCols))
}
##determine significance for each season
if(!any(missing.seasons)) {
for(k in 1:n.seasons) {
p.value <- sum(out@t.star[, k] >= out@t0[k])/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display <- paste("=", round(p.value, digits = 4))
}
p.vals[[k]] <- list("p.value" = p.value, "p.display" = p.display)
}
##create plot for first 12 plots
if(plot.seasons) {
##add a check to handle error with plotting window
tryHist <- try(expr = {
for(k in 1:n.seasons.adj) {
hist(out@t.star[, k],
main = paste("Bootstrapped MacKenzie and Bailey fit statistic (", nsim, " samples) - season ", k, sep = ""),
xlim = range(c(out@t.star[, k], out@t0[k])),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0[k], digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), " ", .(p.vals[[k]]$p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0[k], lty = "dashed", col = "red", lwd = lwd)
}
}, silent = TRUE)
if(is(tryHist, "try-error")) {
warning("\nFigure margins are too wide for the current plotting window: adjust graphical parameters.\n")
}
}
} else {
for(k in 1:n.season.data) {
p.value <- sum(out@t.star[, k] >= out@t0[k])/nsim
if(p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display <- paste("=", round(p.value, digits = 4))
}
p.vals[[k]] <- list("p.value" = p.value, "p.display" = p.display)
}
##create plot for first 12 plots
if(plot.seasons) {
##add a check to handle error with plotting window
tryHist <- try(expr = {
for(k in 1:n.season.data) {
hist(out@t.star[, k],
main = paste("Bootstrapped MacKenzie and Bailey fit statistic (", nsim, " samples) - season ", k, sep = ""),
xlim = range(c(out@t.star[, k], out@t0[k])),
xlab = paste("Simulated statistic ", "(observed = ", round(out@t0[k], digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), " ", .(p.vals[[k]]$p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0[k], lty = "dashed", col = "red", lwd = lwd)
}
}, silent = TRUE)
if(is(tryHist, "try-error")) {
warning("\nFigure margins are too wide for the current plotting window: adjust graphical parameters.\n")
}
}
}
##estimate c-hat
obs.chisq <- sum(mod.table$all.chisq)
  boot.chisq <- sum(colMeans(out@t.star))
c.hat.est <- obs.chisq/boot.chisq
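  ## A hypothetical numeric illustration of the c-hat computation above:
  ## with an observed total chi-square of 52.3 and a mean bootstrapped
  ## chi-square of 40.1, c.hat.est = 52.3/40.1 ~ 1.30, suggesting mild
  ## overdispersion (values near 1 indicate an adequate fit)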
all.p.vals <- lapply(p.vals, FUN = function(i) i$p.value)
##lapply(mod.table, FUN = function(i) i$chisq.table)
##compute P-value for obs.chisq
  sum.chisq <- rowSums(out@t.star)
p.global <- sum(sum.chisq >= obs.chisq)/nsim
if(p.global == 0) {
    p.global.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.global.display <- paste("=", round(p.global, digits = 4))
}
##optionally show sum of chi-squares
##create plot
if(plot.hist) {
hist(sum.chisq, main = paste("Bootstrapped sum of chi-square statistic (", nsim, " samples)", sep = ""),
xlim = range(c(sum.chisq, obs.chisq)),
xlab = paste("Simulated statistic ", "(observed = ", round(obs.chisq, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), " ", .(p.global.display))), line = 0.5,
cex.main = cex.main)
abline(v = obs.chisq, lty = "dashed", col = "red", lwd = lwd)
}
##reset to original values
  if(plot.hist || plot.seasons) {
on.exit(par(oldpar))
}
##check if missing seasons
if(identical(mod.table$model.type, "dynamic")) {
missing.seasons <- mod.table$missing.seasons
} else {
missing.seasons <- NULL
}
##assemble result
gof.out <- list(model.type = mod.table$model.type, chisq.table = mod.table,
chi.square = obs.chisq, t.star = sum.chisq,
p.value = all.p.vals, p.global = p.global, c.hat.est = c.hat.est,
nsim = nsim, n.seasons = n.seasons, missing.seasons = missing.seasons)
class(gof.out) <- "mb.chisq"
return(gof.out)
}
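## Usage sketch (hypothetical objects): for a dynamic occupancy model
## fit with unmarked::colext, the test could be run as
## gof <- mb.gof.test(fm.colext, nsim = 1000, plot.seasons = TRUE)
## gof$c.hat.est  #estimate of overdispersion (c-hat)
## gof$p.global   #global bootstrap P-value across seasons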
##Royle-Nichols count models of class unmarkedFitOccuRN
mb.gof.test.unmarkedFitOccuRN <- function (mod, nsim = 5, plot.hist = TRUE,
report = NULL, parallel = TRUE,
ncores, cex.axis = 1, cex.lab = 1,
cex.main = 1, lwd = 1, maxK = NULL, ...){
##extract table from fitted model
mod.table <- mb.chisq(mod, ...)
if(is.null(report)) {
##compute GOF P-value
out <- parboot(mod, statistic = function(i) mb.chisq(i)$chi.square,
nsim = nsim, parallel = parallel)
} else {
out <- parboot(mod, statistic = function(i) mb.chisq(i)$chi.square,
nsim = nsim, report = report, parallel = parallel)
}
##determine significance
p.value <- sum([email protected] >= out@t0)/nsim
if (p.value == 0) {
p.display <- paste("<", round(1/nsim, digits = 4))
} else {
p.display <- paste("=", round(p.value, digits = 4))
}
##create plot
if(plot.hist) {
hist([email protected], main = paste("Bootstrapped MacKenzie and Bailey fit statistic (", nsim, " samples)", sep = ""),
xlim = range(c([email protected], out@t0)), xlab = paste("Simulated statistic ", "(observed = ",
round(out@t0, digits = 2), ")", sep = ""),
cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main)
title(main = bquote(paste(italic(P), " ", .(p.display))), line = 0.5,
cex.main = cex.main)
abline(v = out@t0, lty = "dashed", col = "red", lwd = lwd)
}
##estimate c-hat
  c.hat.est <- out@t0/mean(out@t.star)
gof.out <- list(model.type = mod.table$model.type, chisq.table = mod.table$chisq.table,
                  chi.square = mod.table$chi.square, t.star = out@t.star,
p.value = p.value, c.hat.est = c.hat.est, nsim = nsim)
class(gof.out) <- "mb.chisq"
return(gof.out)
}
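## Usage sketch (hypothetical fit): for a Royle-Nichols model from
## unmarked::occuRN,
## gof.rn <- mb.gof.test(fm.rn, nsim = 1000)
## gof.rn$c.hat.est  #chi-square based estimate of c-hat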
##print function
print.mb.chisq <- function(x, digits.vals = 2, digits.chisq = 4, ...) {
##single-season occupancy models
if(identical(x$model.type, "single-season")) {
cat("\nMacKenzie and Bailey goodness-of-fit for single-season occupancy model\n")
if(any(names(x) == "chisq.table")) {
cat("\nPearson chi-square table:\n\n")
##replace NA with "." for nicer printing
nice.rows <- gsub(pattern = "NA", replacement = ".", rownames(x$chisq.table))
rownames(x$chisq.table) <- nice.rows
print(round(x$chisq.table, digits = digits.vals))
}
cat("\nChi-square statistic =", round(x$chi.square, digits = digits.chisq), "\n")
if(any(names(x) == "c.hat.est")) {
cat("Number of bootstrap samples =", x$nsim)
cat("\nP-value =", x$p.value)
cat("\n\nQuantiles of bootstrapped statistics:\n")
print(quantile(x$t.star), digits = digits.vals)
cat("\nEstimate of c-hat =", round(x$c.hat.est, digits = digits.vals), "\n")
}
cat("\n")
}
##single-season Royle-Nichols occupancy models
if(identical(x$model.type, "royle-nichols")) {
cat("\nMacKenzie and Bailey goodness-of-fit for Royle-Nichols occupancy model\n")
if(any(names(x) == "chisq.table")) {
cat("\nPearson chi-square table:\n\n")
##replace NA with "." for nicer printing
nice.rows <- gsub(pattern = "NA", replacement = ".", rownames(x$chisq.table))
rownames(x$chisq.table) <- nice.rows
print(round(x$chisq.table, digits = digits.vals))
}
cat("\nChi-square statistic =", round(x$chi.square, digits = digits.chisq), "\n")
if(any(names(x) == "c.hat.est")) {
cat("Number of bootstrap samples =", x$nsim)
cat("\nP-value =", x$p.value)
cat("\n\nQuantiles of bootstrapped statistics:\n")
print(quantile(x$t.star), digits = digits.vals)
cat("\nEstimate of c-hat =", round(x$c.hat.est, digits = digits.vals), "\n")
}
cat("\n")
}
##dynamic occupancy models
if(identical(x$model.type, "dynamic")) {
cat("\nGoodness-of-fit for dynamic occupancy model\n")
cat("\nNumber of seasons: ", x$n.seasons, "\n")
##cat("\nPearson chi-square table:\n\n")
##print(round(x$chisq.table, digits = digits.vals))
##x$chisq.table
cat("\nChi-square statistic:\n")
if(any(names(x) == "all.chisq")) {
print(round(x$all.chisq, digits = digits.chisq))
if(any(x$missing.seasons)) {
if(sum(x$missing.seasons) == 1) {
cat("\nNote: season", which(x$missing.seasons), "was not sampled\n")
} else {
        cat("\nNote: seasons",
            paste(which(x$missing.seasons), collapse = ", "),
            "were not sampled\n")
}
}
cat("\nTotal chi-square =", round(sum(x$all.chisq),
digits = digits.chisq), "\n")
} else {
print(round(x$chisq.table$all.chisq, digits = digits.chisq))
if(any(x$missing.seasons)) {
if(sum(x$missing.seasons) == 1) {
cat("\nNote: season", which(x$missing.seasons), "was not sampled\n")
} else {
        cat("\nNote: seasons",
            paste(which(x$missing.seasons), collapse = ", "),
            "were not sampled\n")
}
}
cat("\nTotal chi-square =", round(sum(x$chi.square),
digits = digits.chisq), "\n")
cat("Number of bootstrap samples =", x$nsim)
cat("\nP-value =", x$p.global)
cat("\n\nQuantiles of bootstrapped statistics:\n")
print(quantile(x$t.star), digits = digits.vals)
cat("\nEstimate of c-hat =", round(x$c.hat.est,
digits = digits.vals), "\n")
}
}
cat("\n")
}
## ---- end of file: /scratch/gouwar.j/cran-all/cranData/AICcmodavg/R/mb.gof.test.R ----
##generic
modavg <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
cand.set <- formatCands(cand.set)
UseMethod("modavg", cand.set)
}
##default
modavg.default <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
stop("\nFunction not yet defined for this object class\n")
}
##aov
modavg.AICaov.lm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients)) #extract model formula for each model in cand.set
nmods <- length(cand.set)
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("Multiple instances of parameter of interest in given model is presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) { #this line causes problems if intercept is removed from model
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")} #this line causes problems if intercept is removed from model
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude models following models from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
##
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
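## Usage sketch (hypothetical data and models): model-averaging the
## effect of a predictor across a set of aov fits,
## cands <- list(m1 = aov(y ~ x1, data = dat),
##               m2 = aov(y ~ x1 + x2, data = dat))
## modavg(cand.set = cands, parm = "x1")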
##betareg
modavg.AICbetareg <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm - problematic for parameters on "(phi)_temp:batch4" vs "batch4:(phi)_temp"
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract labels
##determine if parameter is on mean or phi
if(regexpr(pattern = "\\(phi\\)_", parm) == "-1") {
parm.phi <- NULL
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients$mean))
} else {
##replace parm
parm.phi <- gsub(pattern = "\\(phi\\)_", "", parm)
if(regexpr(pattern = ":", parm) != "-1") {
      warning("\nthis function does not yet support interaction terms on phi:\n",
              "use 'modavgCustom' instead\n")
}
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients$precision))
}
nmods <- length(cand.set)
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
##if parameters on mean
if(is.null(parm.phi)) {
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
}
##if parameters on phi
if(!is.null(parm.phi)) {
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
#if(is.null(reversed.parm)) {
##do not consider reversed parm here because of "(phi)_" prefix in coefficients
for (j in 1:length(form)) {
idents[j] <- identical(parm.phi, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.phi, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
# } else {
# for (j in 1:length(form)) {
# idents[j] <- identical(parm.phi, form[j]) | identical(reversed.parm, form[j])
# idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
# fixed=TRUE), "match.length")=="-1" , 0, 1)
# }
# }
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("Multiple instances of parameter of interest in given model is presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) { #this line causes problems if intercept is removed from model
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")} #this line causes problems if intercept is removed from model
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude models following models from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
##
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##clm
modavg.AICsclm.clm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow = nmods, ncol = 1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow = nmods, ncol = 1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of parameter of interest in given model is presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN = formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[[i]])[3]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude models following models from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
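##----------------------------------------------------------------------
##Worked sketch (not run; the numbers are hypothetical): the
##model-averaged estimate and the two unconditional SE estimators
##computed above. Given Akaike weights w, beta estimates b, and SEs se
##for three models:
##  w  <- c(0.5, 0.3, 0.2)
##  b  <- c(1.2, 0.9, 1.5)
##  se <- c(0.4, 0.5, 0.6)
##  mavg <- sum(w * b)                            #model-averaged beta
##  ##equation 4.9 of Burnham and Anderson 2002 (uncond.se = "old")
##  se.old <- sum(w * sqrt(se^2 + (b - mavg)^2))
##  ##equation 6.12 of Burnham and Anderson 2002 (uncond.se = "revised")
##  se.rev <- sqrt(sum(w * (se^2 + (b - mavg)^2)))
##----------------------------------------------------------------------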
##clm
modavg.AICclm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
nmods <- length(cand.set)
##set up matrix to indicate presence of parm in each model
include <- matrix(NA, nrow = nmods, ncol = 1)
##add a check for multiple instances of the same variable in a given model (e.g., interactions)
include.check <- matrix(NA, nrow = nmods, ncol = 1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: stop if duplicates occur in a model
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including the variable of interest,
##assuming that the variable is not involved in an interaction or higher-order polynomial (x^2, x^3, etc.),
##and warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each model formula, extracted with formula( ), for variables to exclude
not.include <- lapply(cand.set, FUN = formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[[i]])[3]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version; please use an alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##stop if the parameter was found in none of the candidate models
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
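##----------------------------------------------------------------------
##Usage sketch (not run; the model objects and variable names are
##hypothetical): averaging a beta estimate across clm fits while
##excluding models in which the parameter appears in an interaction.
##Note that "exclude" must be supplied as a list:
##  modavg(cand.set = list(fit1, fit2, fit3), parm = "temp",
##         modnames = c("null", "additive", "interaction"),
##         exclude = list("temp:depth"))
##----------------------------------------------------------------------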
##clmm
modavg.AICclmm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
nmods <- length(cand.set)
##set up matrix to indicate presence of parm in each model
include <- matrix(NA, nrow = nmods, ncol = 1)
##add a check for multiple instances of the same variable in a given model (e.g., interactions)
include.check <- matrix(NA, nrow = nmods, ncol = 1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: stop if duplicates occur in a model
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including the variable of interest,
##assuming that the variable is not involved in an interaction or higher-order polynomial (x^2, x^3, etc.),
##and warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each model formula, extracted with formula( ), for variables to exclude
not.include <- lapply(cand.set, FUN = formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[[i]])[3]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version; please use an alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##stop if the parameter was found in none of the candidate models
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
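##----------------------------------------------------------------------
##Note (sketch; behavior inferred from its use above, not confirmed
##here): reverse.parm() presumably returns the interaction term with
##its components in reversed order, so that a request for "A:B" also
##matches models labelling the same term "B:A", e.g.:
##  reverse.parm("A:B")   #would yield "B:A"
##----------------------------------------------------------------------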
##coxme
modavg.AICcoxme <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) names(fixef(i)))
nmods <- length(cand.set)
##set up matrix to indicate presence of parm in each model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of the same variable in a given model (e.g., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: stop if duplicates occur in a model
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including the variable of interest,
##assuming that the variable is not involved in an interaction or higher-order polynomial (x^2, x^3, etc.),
##and warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check the fixed-effects formula of each model for variables to exclude
not.include <- lapply(cand.set, FUN = function(i) formula(i)$fixed)
##set up a new list with model formula
forms <- list( )
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##stop if the parameter was found in none of the candidate models
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) extractSE(i)[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord==TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##coxph and clogit
modavg.AICcoxph <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
nmods <- length(cand.set)
##set up matrix to indicate presence of parm in each model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of the same variable in a given model (e.g., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: stop if duplicates occur in a model
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including the variable of interest,
##assuming that the variable is not involved in an interaction or higher-order polynomial (x^2, x^3, etc.),
##and warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each model formula, extracted with formula( ), for variables to exclude
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list( )
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##stop if the parameter was found in none of the candidate models
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord==TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##glm
modavg.AICglm.lm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, c.hat = 1, gamdisp = NULL, ...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN = function(i) i$family$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
##check family of glm to avoid problems when requesting predictions with argument 'dispersion'
fam.type <- unlist(lapply(cand.set, FUN=function(i) family(i)$family))
fam.unique <- unique(fam.type)
if(identical(fam.unique, "gaussian")) {disp <- NULL} else {disp <- 1}
##poisson and binomial defaults to 1 (no separate parameter for variance)
##for negative binomial - reset to NULL
if(any(regexpr("Negative Binomial", fam.type) != -1)) {
disp <- NULL
##check for mixture of negative binomial and other
##number of models with negative binomial
negbin.num <- sum(regexpr("Negative Binomial", fam.type) != -1)
if(negbin.num < length(fam.type)) {
stop("\nFunction does not support mixture of negative binomial with other distributions in model set\n")
}
}
##gamma is treated separately
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients)) #extract model formula for each model in cand.set
nmods <- length(cand.set)
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
      warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
              "not due to interaction or polynomial terms - these models will not be\n",
              "excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) { #this line causes problems if intercept is removed from model
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")} #this line causes problems if intercept is removed from model
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
    ##exclude models containing any of the variables to exclude from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
##
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE,
c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i, dispersion = disp)))[paste(parm)]))
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
gam1<-unlist(lapply(new.cand.set, FUN=function(i) family(i)$family[1]=="Gamma")) #check for gamma regression models
##correct SE's for estimates of gamma regressions
if(any(gam1)==TRUE) {
##check for specification of gamdisp argument
if(is.null(gamdisp)) stop("\nYou must specify a gamma dispersion parameter with gamma generalized linear models\n")
new_table$SE <- unlist(lapply(new.cand.set,
FUN=function(i) sqrt(diag(vcov(i, dispersion=gamdisp)))[paste(parm)]))
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
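    ##Reference sketch of the four branches above (notation assumed, not from the source):
    ##with w_i the (Q)AIC(c) weights, beta_i the per-model estimates and SE_i their SEs,
    ##  model-averaged estimate:              beta.bar = sum_i(w_i * beta_i)
    ##  "old" unconditional SE (eq. 4.9):     SE.u = sum_i(w_i * sqrt(SE_i^2 + (beta_i - beta.bar)^2))
    ##  "revised" unconditional SE (eq. 6.12): SE.u = sqrt(sum_i(w_i * (SE_i^2 + (beta_i - beta.bar)^2)))
    ##the confidence limits below are then beta.bar -/+ z * SE.u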
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
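
##Hedged usage sketch for the glm method (not run; data set, variable, and model
##names are hypothetical, assuming the modavg generic dispatches here for glm fits):
## m1 <- glm(y ~ x1 + x2, family = poisson, data = dat)
## m2 <- glm(y ~ x1, family = poisson, data = dat)
## modavg(cand.set = list(Full = m1, Reduced = m2), parm = "x1")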
##glmmTMB
modavg.AICglmmTMB <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, c.hat = 1, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##check for nlmer models
#if(any(unlist(lapply(X = cand.set, FUN = function(i) as.character(i@call)[1])) == "nlmer")) warning("\nNon-linear models are part of the candidate model set: model-averaged estimates may not be meaningful\n")
###################
##determine families of model
fam.list <- unlist(lapply(X = cand.set, FUN = function(i) family(i)$family))
check.fam <- unique(fam.list)
if(length(check.fam) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different families of distributions\n")
##determine link functions
link.list <- unlist(lapply(X = cand.set, FUN = function(i) family(i)$link))
check.link <- unique(link.list)
if(length(check.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
###################
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(fixef(i)$cond))
nmods <- length(cand.set)
  ##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
###exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
      warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
              "not due to interaction or polynomial terms - these models will not be\n",
              "excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN = formula) #random effect portion is returned within parentheses
##because matching uses identical( ) to check fixed effects against formula( ),
##should not be problematic for variables included in random effects
##set up a new list with model formula
    forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
######################################
##remove leading and trailing spaces as well as spaces within string
##forms <- lapply(forms.space, FUN = function(b) gsub('[[:space:]]+', "", b))
######################################
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
    ##exclude models containing any of the variables to exclude from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE,
c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) fixef(i)$cond[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)$cond))[paste(parm)]))
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
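
##Hedged usage sketch for the glmmTMB method (not run; data set, variable, and model
##names are hypothetical, assuming the glmmTMB package is installed; only the
##conditional-model fixed effects are averaged, matching the fixef(i)$cond extraction above):
## f1 <- glmmTMB::glmmTMB(y ~ x1 + (1 | site), family = poisson, data = dat)
## f2 <- glmmTMB::glmmTMB(y ~ x1 + x2 + (1 | site), family = poisson, data = dat)
## modavg(cand.set = list(Simple = f1, Additive = f2), parm = "x1")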
##gls
modavg.AICgls <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(summary(i)$coefficients))
nmods <- length(cand.set)
  ##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
      warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
              "not due to interaction or polynomial terms - these models will not be\n",
              "excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
    ##exclude models containing any of the variables to exclude from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name, second.ord=second.ord,
nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord==TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
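
##Hedged usage sketch for the gls method (not run; data set, variable, and model
##names are hypothetical, assuming the nlme package is installed; models are fit
##with ML since information-criterion comparisons across fixed effects require it):
## g1 <- nlme::gls(y ~ x1, correlation = nlme::corAR1(), data = dat, method = "ML")
## g2 <- nlme::gls(y ~ x1 + x2, correlation = nlme::corAR1(), data = dat, method = "ML")
## modavg(cand.set = list(AR1.x1 = g1, AR1.x1x2 = g2), parm = "x1")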
##hurdle
modavg.AIChurdle <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x=check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(coefficients(i))) #extract model formula for each model in cand.set
nmods <- length(cand.set)
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
      warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
              "not due to interaction or polynomial terms - these models will not be\n",
              "excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) { #this line causes problems if intercept is removed from model
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")} #this line causes problems if intercept is removed from model
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude the models identified above from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
##
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coefficients(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
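##NOTE: the two unconditional SE estimators computed above are, with w_i the
##Akaike weights, beta_i the model-specific estimates, and se_i their SEs:
##  "old"     (Burnham and Anderson 2002, eq. 4.9):  SE = sum_i w_i * sqrt(se_i^2 + (beta_i - beta.bar)^2)
##  "revised" (Burnham and Anderson 2002, eq. 6.12): SE = sqrt(sum_i w_i * (se_i^2 + (beta_i - beta.bar)^2))
##where beta.bar = sum_i w_i * beta_i is the model-averaged estimate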
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##lm
modavg.AIClm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##if modnames are not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients)) #extract model formula for each model in cand.set
nmods <- length(cand.set)
##set up matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed = TRUE), "match.length") == -1 &
attr(regexpr(reversed.parm, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("Multiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##stop if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for the presence of variables to exclude, extracted with formula()
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) { #this line causes problems if intercept is removed from model
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")} #this line causes problems if intercept is removed from model
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude the models identified above from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
##
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
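##Illustrative usage sketch for the lm method (kept as a comment so sourcing this
##file has no side effects); data set and model names below are examples only:
## m1 <- lm(Sepal.Length ~ Sepal.Width, data = iris)
## m2 <- lm(Sepal.Length ~ Sepal.Width + Petal.Length, data = iris)
## modavg(cand.set = list(m1, m2), parm = "Sepal.Width",
##        modnames = c("width", "width.petal"))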
##lme
modavg.AIClme <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##if modnames are not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(summary(i)$coefficients$fixed))
nmods <- length(cand.set)
##set up matrix to indicate presence of parms in the model
include <- matrix(NA, nrow = nmods, ncol = 1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow = nmods, ncol = 1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed = TRUE), "match.length") == -1 &
attr(regexpr(reversed.parm, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##stop if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for the presence of variables to exclude, extracted with formula()
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude the models identified above from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
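##Illustrative usage sketch for the lme method (kept as a comment); fits assume
##package nlme and its Orthodont data set, with method = "ML" so that models with
##different fixed effects are comparable via AIC:
## library(nlme)
## m1 <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont, method = "ML")
## m2 <- lme(distance ~ age + Sex, random = ~ 1 | Subject, data = Orthodont, method = "ML")
## modavg(cand.set = list(m1, m2), parm = "age", modnames = c("age", "age.sex"))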
##lmekin
modavg.AIClmekin <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##if modnames are not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(fixef(i)))
nmods <- length(cand.set)
##set up matrix to indicate presence of parms in the model
include <- matrix(NA, nrow = nmods, ncol = 1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow = nmods, ncol = 1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed = TRUE), "match.length") == -1 &
attr(regexpr(reversed.parm, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##stop if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for the presence of variables to exclude, extracted with formula()
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude the models identified above from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
##compute table
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) extractSE(i)[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##maxlike
modavg.AICmaxlikeFit.list <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, c.hat = 1, ...){
##if modnames are not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
if(c.hat != 1) stop("\nThis function does not support overdispersion in \'maxlikeFit\' models\n")
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) names(coef(i)))
nmods <- length(cand.set)
##set up matrix to indicate presence of parms in the model
include <- matrix(NA, nrow = nmods, ncol = 1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow = nmods, ncol = 1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed = TRUE), "match.length") == -1 &
attr(regexpr(reversed.parm, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##stop if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for the presence of variables to exclude, extracted with formula()
not.include <- lapply(cand.set, FUN = formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[[i]])[3]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude the models identified above from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name, second.ord=second.ord,
nobs=nobs, sort=FALSE, c.hat = 1) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
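The arithmetic in the weighting block above (model-averaged estimate, the "old" eq. 4.9 and "revised" eq. 6.12 unconditional SEs, and the z-based confidence limits) is language-agnostic. A minimal Python sketch of the same formulas, with an invented helper name `model_average` and made-up weights; this is an illustration of the equations, not part of the package:

```python
from statistics import NormalDist

def model_average(weights, betas, ses, uncond="revised", conf=0.95):
    """Model-averaged estimate with unconditional SE and z-based CI.

    uncond="old"     -> eq. 4.9 of Burnham & Anderson (2002)
    uncond="revised" -> eq. 6.12 (Burnham & Anderson 2002; Anderson 2008)
    """
    beta_avg = sum(w * b for w, b in zip(weights, betas))
    if uncond == "old":
        # weighted sum of per-model SEs inflated by the squared deviation
        se = sum(w * (s**2 + (b - beta_avg)**2) ** 0.5
                 for w, b, s in zip(weights, betas, ses))
    else:
        # square root taken once, outside the weighted sum
        se = sum(w * (s**2 + (b - beta_avg)**2)
                 for w, b, s in zip(weights, betas, ses)) ** 0.5
    zcrit = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # ~1.96 for conf=0.95
    return beta_avg, se, beta_avg - zcrit * se, beta_avg + zcrit * se
```

By Jensen's inequality the "old" SE can never exceed the "revised" one, which is one reason the revised form is the default above.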
##mer
modavg.AICmer <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##if modnames not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##check for nlmer models
if(any(unlist(lapply(X = cand.set, FUN = function(i) as.character(i@call)[1])) == "nlmer")) warning("\nNon-linear models are part of the candidate model set: model-averaged estimates may not be meaningful\n")
###################
##determine families of model
fam.list <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$family))
check.fam <- unique(fam.list)
if(length(check.fam) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different families of distributions\n")
##determine link functions
link.list <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$link))
check.link <- unique(link.list)
if(length(check.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
###################
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(fixef(i)))
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
###exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN=formula) #random effect portion is returned within parentheses
##because matching uses identical( ) to check fixed effects against formula( ),
##should not be problematic for variables included in random effects
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
######################################
##remove leading and trailing spaces as well as spaces within string
##forms <- lapply(forms.space, FUN = function(b) gsub('[[:space:]]+', "", b))
######################################
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude the flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) extractSE(i)[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord==TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
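The `reversed.parm` machinery in the MODIFICATIONS blocks above exists because R labels an interaction term in whichever order the formula supplied it, so `A:B` and `B:A` name the same coefficient. A hedged Python sketch of that order-insensitive matching; the helper names are illustrative, and the package's actual `reverse.parm` handles further cases:

```python
def reverse_term(parm):
    """For a two-way interaction 'A:B', return 'B:A'; otherwise None."""
    parts = parm.split(":")
    if len(parts) == 2:
        return ":".join(reversed(parts))
    return None

def matches_term(parm, label):
    """True if a model-term label names the same coefficient as parm,
    allowing either ordering of a two-way interaction."""
    rev = reverse_term(parm)
    return label == parm or (rev is not None and label == rev)
```

This mirrors the `identical(parm, form[j]) | identical(reversed.parm, form[j])` test in the loops above.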
##lmerMod
modavg.AIClmerMod <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##if modnames not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##nlmer check not applicable here: nlmer fits do not inherit from this class
#if(any(unlist(lapply(X = cand.set, FUN = function(i) as.character(i@call)[1])) == "nlmer")) warning("\nNon-linear models are part of the candidate model set: model-averaged estimates may not be meaningful\n")
###################
###################
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(fixef(i)))
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
###exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN = formula) #random effect portion is returned within parentheses
##because matching uses identical( ) to check fixed effects against formula( ),
##should not be problematic for variables included in random effects
##set up a new list with model formula
forms <- list( )
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
######################################
##remove leading and trailing spaces as well as spaces within string
##forms <- lapply(forms.space, FUN = function(b) gsub('[[:space:]]+', "", b))
######################################
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude the flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) extractSE(i)[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord==TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
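The `exclude` handling above splits each formula's right-hand side into terms, trims surrounding whitespace, and drops any model whose term list contains an excluded variable exactly. A simplified Python sketch of that filtering, with invented formulas and helper names (the R code splits on " + " specifically; splitting on "+" and stripping is an equivalent simplification for well-formed formulas):

```python
def terms_of(rhs):
    """Split a formula right-hand side into trimmed term labels."""
    return [t.strip() for t in rhs.split("+")]

def keep_models(formulas, exclude):
    """Indices of models whose term lists contain no excluded term.
    Matching is by exact term identity, as with identical() above."""
    return [i for i, rhs in enumerate(formulas)
            if not any(term in exclude for term in terms_of(rhs))]
```

As in the R code, excluding `age:sex` drops only models that literally contain that interaction term, not every model containing `age` or `sex`.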
##lmerModLmerTest
modavg.AIClmerModLmerTest <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##if modnames not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##nlmer check not applicable here: nlmer fits do not inherit from this class
#if(any(unlist(lapply(X = cand.set, FUN = function(i) as.character(i@call)[1])) == "nlmer")) warning("\nNon-linear models are part of the candidate model set: model-averaged estimates may not be meaningful\n")
###################
###################
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(fixef(i)))
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
###exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN = formula) #random effect portion is returned within parentheses
##because matching uses identical( ) to check fixed effects against formula( ),
##should not be problematic for variables included in random effects
##set up a new list with model formula
forms <- list( )
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
######################################
##remove leading and trailing spaces as well as spaces within string
##forms <- lapply(forms.space, FUN = function(b) gsub('[[:space:]]+', "", b))
######################################
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude the flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) extractSE(i)[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord==TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
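The duplicate check above separates exact term identity (`idents`) from substring hits (`idents.check`): a parameter such as `age` also occurs inside labels like `age:sex`, and a model where it appears in more than one label is flagged as "duplicates" so the user is warned or stopped. A hedged Python sketch of that distinction (the helper name is illustrative, not part of the package):

```python
def presence_flags(parm, labels):
    """Return (included, flag): is parm an exact term label, and does the
    string occur inside more than one label (e.g. via an interaction)?"""
    exact = any(label == parm for label in labels)
    hits = sum(parm in label for label in labels)
    return exact, "duplicates" if hits > 1 else "OK"
```

Exact identity decides whether a model contributes to the average; the substring count only decides whether to raise the duplicates warning.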
##glmerMod
modavg.AICglmerMod <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##if modnames not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##nlmer check not applicable here: nlmer fits do not inherit from this class
#if(any(unlist(lapply(X = cand.set, FUN = function(i) as.character(i@call)[1])) == "nlmer")) warning("\nNon-linear models are part of the candidate model set: model-averaged estimates may not be meaningful\n")
###################
##determine families of model
fam.list <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$family))
check.fam <- unique(fam.list)
if(length(check.fam) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different families of distributions\n")
##determine link functions
link.list <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$link))
check.link <- unique(link.list)
if(length(check.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
###################
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(fixef(i)))
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
###exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN = formula) #random effect portion is returned within parentheses
##because matching uses identical( ) to check fixed effects against formula( ),
##should not be problematic for variables included in random effects
##set up a new list with model formula
forms <- list( )
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
######################################
##remove leading and trailing spaces as well as spaces within string
##forms <- lapply(forms.space, FUN = function(b) gsub('[[:space:]]+', "", b))
######################################
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) extractSE(i)[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord==TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta<-sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE<-sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE<-sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
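## --- Illustrative sketch (not part of the package): toy comparison of the two
## --- unconditional SE estimators used above ("old" vs "revised"); the weights,
## --- estimates, and SEs are made-up numbers for two hypothetical candidate models.
## w <- c(0.7, 0.3)                          #Akaike weights
## beta <- c(1.2, 0.9); se <- c(0.30, 0.45)  #per-model estimates and SEs
## mavg <- sum(w * beta)                     #model-averaged estimate
## ##"old": equation 4.9 of Burnham and Anderson (2002)
## se.old <- sum(w * sqrt(se^2 + (beta - mavg)^2))
## ##"revised": equation 6.12 of Burnham and Anderson (2002); Anderson (2008, p. 111)
## se.rev <- sqrt(sum(w * (se^2 + (beta - mavg)^2)))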
##multinom
modavg.AICmultinom.nnet <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, c.hat = 1, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula<-lapply(cand.set, FUN=function(i) colnames(summary(i)$coefficients))
nmods <- length(cand.set)
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( ) - in multinom( ) must be extracted from call
not.include <- lapply(cand.set, FUN=function(i) formula(i$call))
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set<-cand.set[which(include==1)] #select models including a given parameter
new.mod.name<-modnames[which(include==1)] #update model names
##
##determine number of levels - 1
mod.levels <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients)) #extract level of response variable
check.levels <- unlist(unique(mod.levels))
##recompute AIC table and associated measures
new_table<-aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE, c.hat=c.hat)
##create object to store model-averaged estimate and SE's of k - 1 level of response
out.est <- matrix(data=NA, nrow=length(check.levels), ncol=4)
colnames(out.est) <- c("Mod.avg.est", "Uncond.SE", "Lower.CL", "Upper.CL")
rownames(out.est) <- check.levels
##iterate over levels of response variable
for (g in 1:length(check.levels)) {
##extract beta estimate for parm
new_table$Beta_est <- unlist(lapply(new.cand.set,
FUN=function(i) coef(i)[check.levels[g], paste(parm)]))
##extract SE of estimate for parm
new_table$SE <- unlist(lapply(new.cand.set,
FUN=function(i) sqrt(diag(vcov(i)))[paste(check.levels[g], ":",
parm, sep="")]))
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {new_table$SE<-new_table$SE*sqrt(c.hat)}
##compute model-averaged estimates, unconditional SE, and 95% CL
##AICc
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
##if c-hat is estimated compute values accordingly and adjust table names
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
##if c-hat is estimated compute values accordingly and adjust table names
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
out.est[g, 1] <- Modavg_beta
out.est[g, 2] <- Uncond_SE
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
out.est[,3] <- out.est[,1] - zcrit*out.est[,2]
out.est[,4] <- out.est[,1] + zcrit*out.est[,2]
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = out.est[,1],
"Uncond.SE" = out.est[,2], "Conf.level" = conf.level, "Lower.CL"= out.est[,3],
"Upper.CL" = out.est[,4])
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##glm.nb
modavg.AICnegbin.glm.lm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN = function(i) i$family$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients)) #extract model formula for each model in cand.set
nmods <- length(cand.set)
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) { #this line causes problems if intercept is removed from model
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")} #this line causes problems if intercept is removed from model
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
##
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
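## --- Illustrative sketch (not part of the package): the Wald-type confidence
## --- limits computed above from the model-averaged estimate and unconditional SE;
## --- the estimate and SE below are made-up numbers.
## conf.level <- 0.95
## zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)  #about 1.96
## mavg <- 0.8; uncond.se <- 0.25
## c(lower = mavg - zcrit*uncond.se, upper = mavg + zcrit*uncond.se)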
##polr
modavg.AICpolr <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula<-lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
nmods <- length(cand.set)
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( ) - in polr( ) must be extracted from call
not.include <- lapply(cand.set, FUN=function(i) formula(i$call))
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##exclude flagged models from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set<-cand.set[which(include==1)] #select models including a given parameter
new.mod.name<-modnames[which(include==1)] #update model names
new_table<-aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
##add logical test to distinguish between intercepts and other coefs
if(attr(regexpr(pattern = "\\|", text = parm), "match.length")==-1) {
new_table$Beta_est<-unlist(lapply(new.cand.set, FUN=function(i) coef(i)[paste(parm)]))
} else {new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) (i)$zeta[paste(parm)])) }
##extract beta estimate for parm
new_table$SE<-unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL based on AICc
if(second.ord == TRUE) {
Modavg_beta<-sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE<-sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE<-sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
##compute model-averaged estimates, unconditional SE, and 95% CL based on AIC
if(second.ord == FALSE) {
Modavg_beta<-sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE<-sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE<-sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL<-Modavg_beta-zcrit*Uncond_SE
Upper_CL<-Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##rlm
modavg.AICrlm.lm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow = nmods, ncol = 1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow = nmods, ncol = 1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
##check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN = formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##remove models containing the excluded variables from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
new_table <- aictab(cand.set=new.cand.set, modnames=new.mod.name,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##survreg
modavg.AICsurvreg <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that distribution is the same for all models
check.dist <- sapply(X = cand.set, FUN = function(i) i$dist)
unique.dist <- unique(x = check.dist)
if(length(unique.dist) > 1) stop("\nFunction does not support model-averaging estimates from different distributions\n")
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) names(summary(i)$coefficients)) #extract model formula for each model in cand.set
nmods <- length(cand.set)
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of mod_formula[[i]]
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) { #this line causes problems if intercept is removed from model
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")} #this line causes problems if intercept is removed from model
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##remove models containing the excluded variables from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
##
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
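## Example (hypothetical, illustrative only) of model-averaging a survreg( )
## coefficient with the method above; 'lung' ships with the survival package,
## and the call assumes modavg( ) dispatches on the list of fitted models:
## library(survival)
## cand <- list(M1 = survreg(Surv(time, status) ~ age, data = lung),
##              M2 = survreg(Surv(time, status) ~ age + sex, data = lung))
## modavg(cand.set = cand, parm = "age")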
##vglm
modavg.AICvglm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, c.hat = 1, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i@family@blurb[3]))
unique.link <- unique(x=check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
##check family of vglm to avoid problems when requesting predictions with argument 'dispersion'
fam.type <- unlist(lapply(cand.set, FUN=function(i) i@family@vfamily))
fam.unique <- unique(fam.type)
if(identical(fam.unique, "gaussianff")) {disp <- NULL} else {disp <- 1}
if(identical(fam.unique, "gammaff")) stop("\nGamma distribution is not supported yet\n")
##poisson and binomial defaults to 1 (no separate parameter for variance)
##for negative binomial - reset to NULL
if(identical(fam.unique, "negbinomial")) {disp <- NULL}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(coefficients(i))) #extract model formula for each model in cand.set
nmods <- length(cand.set)
##check whether parm is involved in interaction or if label changes for some models - e.g., ZIP models
##if : not already included
if(regexpr(":", parm, fixed = TRUE) == -1){
##if : not included
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) warning("\nLabel of parameter of interest seems to change across models:\n",
"check model syntax for possible problems\n")
} else {
##if : already included
##remove : from parm
simple.parm <- unlist(strsplit(parm, split = ":"))[1]
##search for simple.parm and parm in model formulae:
##count models containing the main-effect term
no.colon <- sum(ifelse(attr(regexpr(simple.parm, mod_formula, fixed = TRUE), "match.length") != "-1", 1, 0))
##count models lacking the full interaction label
with.colon <- sum(ifelse(attr(regexpr(parm, mod_formula, fixed = TRUE), "match.length") != "-1", 0, 1))
##check if both are > 0
if(no.colon > 0 && with.colon > 0) warning("\nLabel of parameter of interest seems to change across models:\n",
"check model syntax for possible problems\n")
}
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of mod_formula[[i]]
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN = formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) { #this line causes problems if intercept is removed from model
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")} #this line causes problems if intercept is removed from model
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##remove models containing the excluded variables from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
##
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coefficients(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i, dispersion = disp)))[paste(parm)]))
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
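## Example (hypothetical, illustrative only) of model-averaging a slope from
## VGAM::vglm( ) fits sharing the same link function; the data frame 'dat' and
## the variable names are assumptions, not part of the package:
## library(VGAM)
## cand <- list(M1 = vglm(cbind(succ, fail) ~ x1, family = binomialff, data = dat),
##              M2 = vglm(cbind(succ, fail) ~ x1 + x2, family = binomialff, data = dat))
## modavg(cand.set = cand, parm = "x1", c.hat = 1)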
##zeroinfl
modavg.AICzeroinfl <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
exclude = NULL, warn = TRUE, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x=check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##reverse parm
reversed.parm <- reverse.parm(parm)
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(coefficients(i))) #extract model formula for each model in cand.set
nmods <- length(cand.set)
##setup matrix to indicate presence of parm in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of mod_formula[[i]]
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
##exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
if(any(include.check == "duplicates")) {
stop("\nSome models possibly include more than one instance of the parameter of interest.\n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
##exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
##assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
##warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of model-averaged estimate\n")
}
}
##warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list\n")}
}
##if exclude is list
if(is.list(exclude)) {
##determine number of elements in exclude
nexcl <- length(exclude)
##check each formula for presence of exclude variable extracted with formula( )
not.include <- lapply(cand.set, FUN=formula)
##set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- strsplit(as.character(not.include[i]), split="~")[[1]][-1]
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) { #this line causes problems if intercept is removed from model
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")} #this line causes problems if intercept is removed from model
}
##additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
##search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
##iterate over each element in exclude list
for (var in 1:nexcl) {
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
##iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], gsub("(^ +)|( +$)", "", form.excl[j]))
}
mod.exclude[i,var] <- ifelse(any(idents==1), 1, 0)
}
}
##determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
##remove models containing the excluded variables from model averaging
include[which(to.exclude>=1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include)==0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include==1)] #select models including a given parameter
new.mod.name <- modnames[which(include==1)] #update model names
##
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN=function(i) coefficients(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(new.cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower_CL <- Modavg_beta-zcrit*Uncond_SE
Upper_CL <- Modavg_beta+zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
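## Example (hypothetical, illustrative only) of model-averaging a count-model
## coefficient across pscl::zeroinfl( ) fits; note that coef( ) labels the two
## components with "count_" and "zero_" prefixes, so parm must carry the
## prefix.  The data frame 'dat' and variable names are assumptions:
## library(pscl)
## cand <- list(M1 = zeroinfl(y ~ x1 | 1, data = dat),
##              M2 = zeroinfl(y ~ x1 + x2 | 1, data = dat))
## modavg(cand.set = cand, parm = "count_x1")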
####added functionality for reversing parameters
##added additional argument parm.type = "psi", "gamma", "epsilon", "lambda", "omega", "detect"
##model type: parameters labeled in unmarked - parameters labeled in AICcmodavg.unmarked
##single season: state, det - USE psi, detect
##multiseason model: psi, col, ext, det - USE psi, gamma, epsilon, detect
##RN heterogeneity model: state, det - USE lambda, detect
##N-mixture: state, det - USE lambda, detect
##Open N-mixture: lambda, gamma, omega, iota, det - USE lambda, gamma, omega, iota, detect
##distsamp: state, det - USE lambda, detect
##gdistsamp: state, det, phi - USE lambda, detect, phi
##false-positive occupancy: state, det, fp - USE psi, detect, fp
##gpcount: lambda, phi, det - USE lambda, phi, detect
##gmultmix: lambda, phi, det - USE lambda, phi, detect
##multinomPois: state, det - USE lambda, detect
##occuMulti: state, det - USE lambda, detect
##occuMS: state, det - USE psi, detect
##occuTTD: psi, det, col, ext - USE psi, detect, gamma, epsilon
##pcount.spHDS: state, det - USE lambda, detect
##multmixOpen: lambda, gamma, omega, iota, det - USE lambda, gamma, epsilon, iota, detect
##distsampOpen: lambda, gamma, omega, iota, det - USE lambda, gamma, epsilon, iota, detect
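##The mapping above can be illustrated with a hypothetical call (sketch only,
##not run; 'fm1' and 'fm2' are assumed occu( ) fits sharing a covariate 'Elev'):
##  Cands <- list(fm1 = fm1, fm2 = fm2)
##  ##model-average the effect of Elev on detection probability:
##  modavg(cand.set = Cands, parm = "Elev", parm.type = "detect")
##  ##model-average the effect of Elev on occupancy:
##  modavg(cand.set = Cands, parm = "Elev", parm.type = "psi")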
modavg.AICunmarkedFitOccu <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##if modnames are not supplied, use the names of cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen, assign "Int" - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##single-season occupancy model
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm <- paste("psi", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("psi", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[3]])
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[2]])
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##set up matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of the same variable in a given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip stores parm without its label prefix (e.g., psi( ), p( ), lam( ))
###to enable searching with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1 &
                          attr(regexpr(reversed.parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
        "not due to interaction or polynomial terms - these models will not be\n",
        "excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude these models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
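##A minimal numeric sketch of the two unconditional SE estimators used in the
##function above (toy weights and estimates, purely illustrative; not run):
##  w    <- c(0.6, 0.3, 0.1)     ##Akaike weights
##  beta <- c(1.2, 0.8, 1.5)     ##per-model beta estimates
##  se   <- c(0.30, 0.25, 0.40)  ##per-model SEs
##  mavg <- sum(w*beta)          ##model-averaged estimate
##  se.old     <- sum(w*sqrt(se^2 + (beta - mavg)^2))    ##eq. 4.9, Burnham and Anderson 2002
##  se.revised <- sqrt(sum(w*(se^2 + (beta - mavg)^2)))  ##eq. 6.12, Burnham and Anderson 2002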
##colext
modavg.AICunmarkedFitColExt <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##if modnames are not supplied, use the names of cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen, assign "Int" - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##multiseason occupancy model
##psi - initial occupancy
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$psi)))
##create label for parm
parm <- paste("psi", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("psi", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@psiformula)
parm.type1 <- "psi"
}
##gamma - colonization
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$col)))
##create label for parm
parm <- paste("col", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("col", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@gamformula)
parm.type1 <- "col"
}
##epsilon - extinction
if(identical(parm.type, "epsilon")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$ext)))
##create label for parm
parm <- paste("ext", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("ext", "(", reversed.parm, ")", sep="")}
##for epsilon
not.include <- lapply(cand.set, FUN = function(i) i@epsformula)
parm.type1 <- "ext"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@detformula)
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##set up matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of the same variable in a given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip stores parm without its label prefix (e.g., psi( ), col( ), ext( ), p( ))
###to enable searching with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1 &
                          attr(regexpr(reversed.parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
        "not due to interaction or polynomial terms - these models will not be\n",
        "excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude these models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
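##Sketch of the colext-specific parm.type choices handled above (hypothetical
##objects; not run; 'Cands' is an assumed named list of colext( ) fits sharing
##a covariate 'Year'):
##  modavg(cand.set = Cands, parm = "Year", parm.type = "gamma")    ##colonization, col( ) label
##  modavg(cand.set = Cands, parm = "Year", parm.type = "epsilon")  ##extinction, ext( ) label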
##occuRN
modavg.AICunmarkedFitOccuRN <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##if modnames are not supplied, use the names of cand.set or generate them
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen, assign "Int" - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##Royle-Nichols heterogeneity model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm <- paste("lam", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lam", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[3]])
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[2]])
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##set up matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of the same variable in a given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip stores parm without its label prefix (e.g., lam( ), p( ))
###to enable searching with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1 &
                          attr(regexpr(reversed.parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
        "not due to interaction or polynomial terms - these models will not be\n",
        "excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude these models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##pcount
modavg.AICunmarkedFitPCount <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##if modnames are not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##single season N-mixture model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
##create label for parm
parm <- paste("lam", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lam", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[3]])
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[2]])
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model-averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###use parm.strip (parm without its lam( ) wrapper, created above)
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed = TRUE), "match.length") == -1 &
                          attr(regexpr(reversed.parm.strip, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude these models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##stop if the parameter does not appear in any candidate model
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
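##A minimal usage sketch for the pcount method above, kept as comments so the
##package source stays side-effect free. The model objects fm1 and fm2 and the
##'elev' covariate are hypothetical; they assume two single-season N-mixture
##models fitted with unmarked::pcount( ). Dispatch goes through the modavg( )
##generic, which routes to modavg.AICunmarkedFitPCount( ):
## cands <- list("lam(elev)p(.)" = fm1, "lam(.)p(.)" = fm2)
## modavg(cand.set = cands, parm = "elev", parm.type = "lambda")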
##pcountOpen
modavg.AICunmarkedFitPCO <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##if modnames are not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##open version of N-mixture model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm <- paste("lam", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lam", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$lambdaformula)
parm.type1 <- "lambda"
}
##gamma - recruitment
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$gamma)))
##determine if same H0 on gamma (gamConst, gamAR, gamTrend)
strip.gam <- sapply(mod_formula, FUN = function(i) unlist(strsplit(i, "\\("))[[1]])
unique.gam <- unique(strip.gam)
if(length(unique.gam) > 1) stop("\nDifferent formulations of gamma parameter occur among models:\n",
                                "beta estimates cannot be model-averaged\n")
##create label for parm
parm <- paste(unique.gam, "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste(unique.gam, "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$gammaformula)
parm.type1 <- "gamma"
}
##omega - apparent survival
if(identical(parm.type, "omega")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$omega)))
##create label for parm
parm <- paste("omega", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("omega", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$omegaformula)
parm.type1 <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
##create label for parm
parm <- paste("iota", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("iota", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$iotaformula)
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'iota\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged estimate across all models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$iota)))
parm.type1 <- "iota"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$pformula)
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model-averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###use parm.strip (parm without its lam( )/p( ) wrapper, created above)
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed = TRUE), "match.length") == -1 &
                          attr(regexpr(reversed.parm.strip, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude these models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##stop if the parameter does not appear in any candidate model
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
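##Usage sketch for the pcountOpen method above (comments only). The fitted
##models fm.a and fm.b and the 'year' covariate are hypothetical names for
##open-population N-mixture models fitted with unmarked::pcountOpen( ); note
##that parm.type selects among "lambda", "gamma", "omega", "iota", and "detect":
## modavg(cand.set = list(A = fm.a, B = fm.b), parm = "year",
##        parm.type = "gamma")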
##distsamp
modavg.AICunmarkedFitDS <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##if modnames are not supplied, check whether cand.set is a named list
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##Distance sampling model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm <- paste("lam", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lam", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[3]])
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(length(keyid) > 1) stop("\nDifferent key functions used across models:\n",
"cannot compute model-averaged estimate\n")
if(identical(keyid, "uniform")) stop("\nDetection parameter not found in models\n")
##set key prefix used in coef( )
if(identical(keyid, "halfnorm")) {
parm.key <- "sigma"
}
if(identical(keyid, "hazard")) {
parm.key <- "shape"
}
if(identical(keyid, "exp")) {
parm.key <- "rate"
}
##label for intercept - label different with this model type
if(identical(parm, "Int")) {parm <- "(Intercept)"}
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm.key, "(", parm, "))", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", parm.key, "(", reversed.parm, "))", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[2]])
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model-averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###use parm.strip (parm without its lam( )/p( ) wrapper, created above)
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed = TRUE), "match.length") == -1 &
                          attr(regexpr(reversed.parm.strip, form[j], fixed = TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude these models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and confidence limits
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
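A minimal numeric sketch of the weighted averaging and "revised" unconditional-SE computation (Burnham and Anderson 2002, eq. 6.12) performed in each branch above; the weights, estimates, and SEs below are assumed values for illustration, not output from any fitted model:

```r
wt   <- c(0.6, 0.3, 0.1)      # Akaike weights (sum to 1); assumed values
beta <- c(1.20, 1.50, 0.90)   # per-model estimates of the same parameter
se   <- c(0.30, 0.40, 0.50)   # per-model standard errors
modavg_beta <- sum(wt * beta)                                   # model-averaged estimate
uncond_se   <- sqrt(sum(wt * (se^2 + (beta - modavg_beta)^2)))  # eq. 6.12
zcrit <- qnorm(p = 0.025, lower.tail = FALSE)                   # 95% confidence limits
c(lower = modavg_beta - zcrit * uncond_se,
  upper = modavg_beta + zcrit * uncond_se)
```

The variance term folds the spread of the per-model estimates around the average into the unconditional SE, which is why it exceeds any weighted average of the conditional SEs alone.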
##gdistsamp
modavg.AICunmarkedFitGDS <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##if modnames is not supplied, use the names of the candidate list or generate generic names
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##Distance sampling model with availability
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm <- paste("lambda", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lambda", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$lambdaformula)
parm.type1 <- "lambda"
}
##detect
if(identical(parm.type, "detect")) {
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(length(keyid) > 1) stop("\nDifferent key functions used across models:\n",
"cannot compute model-averaged estimate\n")
if(identical(keyid, "uniform")) stop("\nDetection parameter not found in models\n")
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[2]])
parm.type1 <- "det"
}
##availability
if(identical(parm.type, "phi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$phi)))
parm <- paste("phi", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("phi", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$phiformula)
parm.type1 <- "phi"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip and reversed.parm.strip (defined above) hold the bare parameter
###name without the label wrapper, to enable searching with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1 & attr(regexpr(reversed.parm.strip, form[j],
fixed=TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude these models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and confidence limits
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
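The duplicate check used in the loops above can be sketched in isolation: `regexpr(..., fixed = TRUE)` flags every coefficient label containing the stripped parameter name, so an interaction or polynomial term alongside the main effect yields more than one match in the same model. The labels below are made-up examples:

```r
labs <- c("lambda(Int)", "lambda(elev)", "lambda(elev:forest)")  # assumed labels
parm.strip <- "elev"
hits <- sapply(labs, function(x)
  attr(regexpr(parm.strip, x, fixed = TRUE), "match.length") != -1)
sum(hits)  # 2 matches in one model -> flagged as "duplicates"
```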
##occuFP
modavg.AICunmarkedFitOccuFP <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##single-season false-positive occupancy model
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm <- paste("psi", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("psi", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@stateformula)
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@detformula)
parm.type1 <- "det"
}
##false positives - fp
if(identical(parm.type, "falsepos") || identical(parm.type, "fp")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$fp)))
parm <- paste("fp", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("fp", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@FPformula)
parm.type1 <- "fp"
}
##certainty of detections - b
if(identical(parm.type, "certain")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$b)))
parm.unmarked <- "b"
parm <- paste("b", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("b", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@Bformula)
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.unmarked)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'b\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged estimate across all models\n")
}
parm.type1 <- "b"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip and reversed.parm.strip (defined above) hold the bare parameter
###name without the label wrapper, to enable searching with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1 & attr(regexpr(reversed.parm.strip, form[j],
fixed=TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude these models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and confidence limits
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
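The overdispersion adjustment applied above can also be sketched on its own: when `c.hat > 1`, each model's SE is inflated by `sqrt(c.hat)` before the averaging step. Values below are assumed:

```r
se    <- c(0.30, 0.40)       # per-model SEs on the link scale; assumed values
c.hat <- 2.5                 # estimated overdispersion
adj_se <- se * sqrt(c.hat)   # SEs used in the QAIC(c) branches
adj_se
```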
##multinomPois
modavg.AICunmarkedFitMPois <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##multinomPois model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
##create label for parm
parm <- paste("lambda", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lambda", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[3]])
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formula[[2]])
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip retains the bare parameter name (without the lambda( )/p( ) wrapper)
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm.strip, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude models containing any of the excluded variables from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated, adjust the SEs by multiplying by sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
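## The four criterion-specific blocks above all perform the same arithmetic:
## a weighted mean of the beta estimates, an unconditional SE, and a Wald-type
## confidence interval. A minimal standalone sketch follows; all weights,
## estimates, and SEs here are hypothetical values, not taken from any fitted model.

```r
wt   <- c(0.50, 0.30, 0.20)   #Akaike weights of the candidate models
beta <- c(1.20, 0.90, 1.50)   #beta estimate of the parameter in each model
se   <- c(0.30, 0.40, 0.35)   #SE of each estimate

modavg_beta <- sum(wt * beta)   #model-averaged estimate
##revised unconditional SE (equation 6.12 of Burnham and Anderson 2002)
uncond_se <- sqrt(sum(wt * (se^2 + (beta - modavg_beta)^2)))
##95% confidence limits
zcrit <- qnorm(p = 0.025, lower.tail = FALSE)
c(lower = modavg_beta - zcrit * uncond_se,
  upper = modavg_beta + zcrit * uncond_se)
```

## With uncond.se = "old", the sqrt instead wraps each term inside the sum,
## as in equation 4.9 of Burnham and Anderson 2002.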
##gmultmix
modavg.AICunmarkedFitGMM <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##gmultmix model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm <- paste("lambda", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lambda", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$lambdaformula)
parm.type1 <- "lambda"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$pformula)
parm.type1 <- "det"
}
##availability
if(identical(parm.type, "phi")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$phi)))
parm <- paste("phi", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("phi", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$phiformula)
parm.type1 <- "phi"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip retains the bare parameter name (without the lambda( )/p( )/phi( ) wrapper)
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm.strip, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude models containing any of the excluded variables from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated, adjust the SEs by multiplying by sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##gpcount
modavg.AICunmarkedFitGPC <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##gpcount
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm <- paste("lambda", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lambda", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$lambdaformula)
parm.type1 <- "lambda"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$pformula)
parm.type1 <- "det"
}
##availability
if(identical(parm.type, "phi")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$phi)))
parm <- paste("phi", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("phi", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$phiformula)
parm.type1 <- "phi"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip retains the bare parameter name (without the lambda( )/p( )/phi( ) wrapper)
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm.strip, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude models containing any of the excluded variables from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated, adjust the SEs by multiplying by sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##occuMulti
modavg.AICunmarkedFitOccuMulti <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##white space is deliberately not stripped from parm here: occuMulti parameter labels may contain spaces
#parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##single-season occupancy model
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm <- paste("psi", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("psi", "(", reversed.parm, ")", sep="")}
##not.include <- lapply(cand.set, FUN = function(i) i@formula[[3]])
not.include <- mod_formula
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
##not.include <- lapply(cand.set, FUN = function(i) i@formula[[2]])
not.include <- mod_formula
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##set up matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of the same variable in a given model (e.g., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip holds parm without its psi( ) or p( ) wrapper,
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm.strip, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude these models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
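##----------------------------------------------------------------------
##Hedged usage sketch for the occuMulti method (all object names below are
##hypothetical, and the snippet is commented out so that sourcing this file
##has no side effects): modavg( ) dispatches here when cand.set holds
##unmarkedFitOccuMulti objects, with parm.type selecting the submodel.
## library(unmarked)
## ## umf: a hypothetical unmarkedFrameOccuMulti with a 'forest' site covariate
## fm.forest <- occuMulti(detformulas = c("~1", "~1"),
##                        stateformulas = c("~forest", "~forest", "~1"),
##                        data = umf)
## fm.null <- occuMulti(detformulas = c("~1", "~1"),
##                      stateformulas = c("~1", "~1", "~1"),
##                      data = umf)
## modavg(cand.set = list(Forest = fm.forest, Null = fm.null),
##        parm = "forest", parm.type = "psi")
##----------------------------------------------------------------------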
##multmixOpen
modavg.AICunmarkedFitMMO <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check whether cand.set is a named list when modnames is not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##open version of N-mixture model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm <- paste("lam", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lam", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$lambdaformula)
parm.type1 <- "lambda"
}
##gamma - recruitment
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$gamma)))
##determine if same H0 on gamma (gamConst, gamAR, gamTrend)
strip.gam <- sapply(mod_formula, FUN = function(i) unlist(strsplit(i, "\\("))[[1]])
unique.gam <- unique(strip.gam)
if(length(unique.gam) > 1) stop("\nDifferent formulations of gamma parameter occur among models:\n",
                                "beta estimates cannot be model-averaged\n")
##create label for parm
parm <- paste(unique.gam, "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste(unique.gam, "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$gammaformula)
parm.type1 <- "gamma"
}
##omega - apparent survival
if(identical(parm.type, "omega")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$omega)))
##create label for parm
parm <- paste("omega", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("omega", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$omegaformula)
parm.type1 <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
##create label for parm
parm <- paste("iota", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("iota", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$iotaformula)
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'iota\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged estimate across all models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$iota)))
parm.type1 <- "iota"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula<-lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$pformula)
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##set up matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of the same variable in a given model (e.g., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip holds parm without its submodel prefix (e.g., lam( )),
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm.strip, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude these models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
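##----------------------------------------------------------------------
##Hedged usage sketch for the multmixOpen method (hypothetical object names,
##commented out so the file sources cleanly): parm.type may be "lambda",
##"gamma", "omega", "iota", or "detect", matching the submodels handled above.
## library(unmarked)
## ## umf: a hypothetical unmarkedFrameMMO with an 'elev' site covariate
## fm.elev <- multmixOpen(lambdaformula = ~elev, gammaformula = ~1,
##                        omegaformula = ~1, pformula = ~1, data = umf)
## fm.null <- multmixOpen(lambdaformula = ~1, gammaformula = ~1,
##                        omegaformula = ~1, pformula = ~1, data = umf)
## modavg(cand.set = list(Elev = fm.elev, Null = fm.null),
##        parm = "elev", parm.type = "lambda")
##----------------------------------------------------------------------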
##distsampOpen
modavg.AICunmarkedFitDSO <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check whether cand.set is a named list when modnames is not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##open version of N-mixture model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm <- paste("lam", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lam", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$lambdaformula)
parm.type1 <- "lambda"
}
##gamma - recruitment
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$gamma)))
##determine if same H0 on gamma (gamConst, gamAR, gamTrend)
strip.gam <- sapply(mod_formula, FUN = function(i) unlist(strsplit(i, "\\("))[[1]])
unique.gam <- unique(strip.gam)
if(length(unique.gam) > 1) stop("\nDifferent formulations of gamma parameter occur among models:\n",
                                "beta estimates cannot be model-averaged\n")
##create label for parm
parm <- paste(unique.gam, "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste(unique.gam, "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$gammaformula)
parm.type1 <- "gamma"
}
##omega - apparent survival
if(identical(parm.type, "omega")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$omega)))
##create label for parm
parm <- paste("omega", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("omega", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$omegaformula)
parm.type1 <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
##create label for parm
parm <- paste("iota", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("iota", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$iotaformula)
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'iota\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged estimate across all models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$iota)))
parm.type1 <- "iota"
}
##detect
if(identical(parm.type, "detect")) {
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(length(keyid) > 1) stop("\nDifferent key functions used across models:\n",
"cannot compute model-averaged estimate\n")
if(identical(keyid, "uniform")) stop("\nDetection parameter not found in models\n")
##set key prefix used in coef( )
if(identical(keyid, "halfnorm")) {
parm.key <- "sigma"
}
if(identical(keyid, "hazard")) {
parm.key <- "shape"
}
if(identical(keyid, "exp")) {
parm.key <- "rate"
}
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste(parm.key, "(", parm, ")", sep="")  #use the prefix matching the key function, not hardcoded "sigma"
if(!is.null(reversed.parm)) {reversed.parm <- paste(parm.key, "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@formlist$pformula[[2]])
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##set up matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of the same variable in a given model (e.g., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip holds parm without its submodel prefix (e.g., lam( )),
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")== "-1" , 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length")=="-1" & attr(regexpr(reversed.parm.strip, form[j],
fixed=TRUE), "match.length")=="-1" , 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude flagged models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and confidence limits
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
##occuMS
modavg.AICunmarkedFitOccuMS <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all leading and trailing white space and within parm
#parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##single-season occupancy model
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm <- paste("psi", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("psi", "(", reversed.parm, ")", sep="")}
not.include <- mod_formula
parm.type1 <- "state"
}
##transition
if(identical(parm.type, "phi")) {
##check that parameter appears in all models
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'phi\' does not appear in single-season models\n")
}
mod_formula <- lapply(cand.set, FUN = function(x) labels(coef(x@estimates@estimates$transition)))
parm <- paste("phi", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("phi", "(", reversed.parm, ")", sep="")}
not.include <- mod_formula
parm.type1 <- "transition"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("p", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("p", "(", reversed.parm, ")", sep="")}
not.include <- mod_formula
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip (saved above) keeps parm without the type prefix (e.g., psi( ))
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1 & attr(regexpr(reversed.parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude flagged models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and confidence limits
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
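##hedged usage sketch (not run): users reach this method through the generic
##modavg( ), which dispatches on the class assigned by formatCands( ); the
##fitted unmarked occuMS objects fm1/fm2 and the covariate name "sitecov"
##below are hypothetical, shown for illustration only
#modavg(cand.set = list(m1 = fm1, m2 = fm2), parm = "sitecov", parm.type = "psi")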
##occuTTD
modavg.AICunmarkedFitOccuTTD <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, exclude = NULL, warn = TRUE,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavg for details\n")}
#####MODIFICATIONS BEGIN#######
##remove all white space (leading, trailing, and internal) from parm
parm <- gsub('[[:space:]]+', "", parm)
parm.strip <- parm #to use later
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##reverse parm
reversed.parm <- reverse.parm(parm)
reversed.parm.strip <- reversed.parm #to use later
exclude <- reverse.exclude(exclude = exclude)
#####MODIFICATIONS END######
##single season or dynamic occupancy model
##psi - initial occupancy
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$psi)))
##create label for parm
parm <- paste("psi", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("psi", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@psiformula)
parm.type1 <- "psi"
}
##gamma - colonization
if(identical(parm.type, "gamma")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'gamma\' does not appear in single-season models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$col)))
##create label for parm
parm <- paste("col", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("col", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@gamformula)
parm.type1 <- "col"
}
##epsilon - extinction
if(identical(parm.type, "epsilon")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'epsilon\' does not appear in single-season models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$ext)))
##create label for parm
parm <- paste("ext", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("ext", "(", reversed.parm, ")", sep="")}
##for epsilon
not.include <- lapply(cand.set, FUN = function(i) i@epsformula)
parm.type1 <- "ext"
}
##detect
if(identical(parm.type, "detect")) {
##detect - lambda parameter is a rate of a species not detected in t to be detected at next time step
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm <- paste("lam", "(", parm, ")", sep="")
if(!is.null(reversed.parm)) {reversed.parm <- paste("lam", "(", reversed.parm, ")", sep="")}
not.include <- lapply(cand.set, FUN = function(i) i@detformula)
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
nmods <- length(cand.set)
##setup matrix to indicate presence of parms in the model
include <- matrix(NA, nrow=nmods, ncol=1)
##add a check for multiple instances of same variable in given model (i.e., interactions)
include.check <- matrix(NA, nrow=nmods, ncol=1)
##################################
##################################
###parm.strip (saved above) keeps parm without the type prefix (e.g., lam( ))
###to enable search with regexpr( )
##iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
idents.check <- NULL
form <- mod_formula[[i]]
######################################################################################################
######################################################################################################
###MODIFICATIONS BEGIN
##iterate over each element of formula[[i]] in list
if(is.null(reversed.parm)) {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j])
##added parm.strip here for regexpr( )
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
} else {
for (j in 1:length(form)) {
idents[j] <- identical(parm, form[j]) | identical(reversed.parm, form[j])
idents.check[j] <- ifelse(attr(regexpr(parm.strip, form[j], fixed=TRUE), "match.length") == -1 & attr(regexpr(reversed.parm.strip, form[j], fixed=TRUE), "match.length") == -1, 0, 1)
}
}
###MODIFICATIONS END
######################################################################################################
######################################################################################################
include[i] <- ifelse(any(idents==1), 1, 0)
include.check[i] <- ifelse(sum(idents.check)>1, "duplicates", "OK")
}
#####################################################
#exclude == NULL; warn=TRUE: warn that duplicates occur and stop
if(is.null(exclude) && identical(warn, TRUE)) {
#check for duplicates in same model
if(any(include.check == "duplicates")) {
stop("\nSome models include more than one instance of the parameter of interest. \n",
"This may be due to the presence of interaction/polynomial terms, or variables\n",
"with similar names:\n",
"\tsee \"?modavg\" for details on variable specification and \"exclude\" argument\n")
}
}
#exclude == NULL; warn=FALSE: compute model-averaged beta estimate from models including variable of interest,
#assuming that the variable is not involved in interaction or higher order polynomial (x^2, x^3, etc...),
#warn that models were not excluded
if(is.null(exclude) && identical(warn, FALSE)) {
if(any(include.check == "duplicates")) {
warning("\nMultiple instances of the parameter of interest in a given model are presumably\n",
"not due to interaction or polynomial terms - these models will not be\n",
"excluded from the computation of the model-averaged estimate\n")
}
}
#warn if exclude is neither a list nor NULL
if(!is.null(exclude)) {
if(!is.list(exclude)) {stop("\nItems in \"exclude\" must be specified as a list")}
}
#if exclude is list
if(is.list(exclude)) {
#determine number of elements in exclude
nexcl <- length(exclude)
#check each formula for presence of exclude variable in not.include list
#not.include <- lapply(cand.set, FUN = formula)
#set up a new list with model formula
forms <- list()
for (i in 1:nmods) {
form.tmp <- as.character(not.include[i]) #changed from other versions as formula returned is of different structure for unmarked objects
if(attr(regexpr("\\+", form.tmp), "match.length")==-1) {
forms[i] <- form.tmp
} else {forms[i] <- strsplit(form.tmp, split=" \\+ ")}
}
#additional check to see whether some variable names include "+"
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\+", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nPlease avoid \"+\" in variable names\n")
##additional check to determine if intercept was removed from models
check.forms <- unlist(lapply(forms, FUN=function(i) any(attr(regexpr("\\- 1", i), "match.length")>0)[[1]]))
if (any(check.forms==TRUE)) stop("\nModels without intercept are not supported in this version, please use alternative parameterization\n")
#search within formula for variables to exclude
mod.exclude <- matrix(NA, nrow=nmods, ncol=nexcl)
#iterate over each element in exclude list
for (var in 1:nexcl) {
#iterate over each formula in mod_formula list
for (i in 1:nmods) {
idents <- NULL
form.excl <- forms[[i]]
#iterate over each element of forms[[i]]
for (j in 1:length(form.excl)) {
idents[j] <- identical(exclude[var][[1]], form.excl[j])
}
mod.exclude[i,var] <- ifelse(any(idents == 1), 1, 0)
}
}
#determine outcome across all variables to exclude
to.exclude <- rowSums(mod.exclude)
#exclude flagged models from model averaging
include[which(to.exclude >= 1)] <- 0
}
##add a check to determine if include always == 0
if (sum(include) == 0) {stop("\nParameter not found in any of the candidate models\n") }
new.cand.set <- cand.set[which(include == 1)] #select models including a given parameter
new.mod.name <- modnames[which(include == 1)] #update model names
new_table <- aictab(cand.set = new.cand.set, modnames = new.mod.name,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(new.cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
##if reversed.parm is not null and varies across models, potentially check for it here
new_table$SE <- unlist(lapply(new.cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##if reversed.parm is not null and varies across models, potentially check for it here
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and confidence limits
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavg", "list")
return(out.modavg)
}
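##hedged usage sketch (not run): fm0/fm1 stand for hypothetical fitted
##unmarked occuTTD objects and "elev" for an illustrative covariate; note
##that parm.type = "detect" maps onto the lam( ) prefix used internally
#modavg(cand.set = list(null = fm0, elev = fm1), parm = "elev", parm.type = "detect")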
##print method
print.modavg <-
function(x, digits = 2, ...) {
ic <- colnames(x$Mod.avg.table)[3]
cat("\nMultimodel inference on \"", x$Parameter, "\" based on ", ic, "\n", sep = "")
cat("\n", ic, " table used to obtain model-averaged estimate:\n", sep = "")
oldtab <- x$Mod.avg.table
if (any(names(oldtab)=="c_hat")) {cat("\t(c-hat estimate = ", oldtab$c_hat[1], ")\n", sep = "")}
cat("\n")
if (any(names(oldtab)=="c_hat")) {
nice.tab <- cbind(oldtab[,2], oldtab[,3], oldtab[,4], oldtab[,6],
oldtab[,9], oldtab[,10])
} else {nice.tab <- cbind(oldtab[,2], oldtab[,3], oldtab[,4], oldtab[,6],
oldtab[,8], oldtab[,9])
}
##modify printing style if multinomial model is used
if(length(x$Mod.avg.beta)==1) {
colnames(nice.tab) <- c(colnames(oldtab)[c(2,3,4,6)], "Estimate", "SE")
rownames(nice.tab) <- oldtab[,1]
print(round(nice.tab, digits=digits))
cat("\nModel-averaged estimate:", eval(round(x$Mod.avg.beta, digits=digits)), "\n")
cat("Unconditional SE:", eval(round(x$Uncond.SE, digits=digits)), "\n")
cat("",x$Conf.level*100, "% Unconditional confidence interval: ", round(x$Lower.CL, digits=digits),
", ", round(x$Upper.CL, digits=digits), "\n\n", sep = "")
} else {
col.ns <- ncol(nice.tab)
nice.tab <- nice.tab[,-c(col.ns-1,col.ns)]
colnames(nice.tab) <- c(colnames(oldtab)[c(2,3,4,6)])
rownames(nice.tab) <- oldtab[,1]
print(round(nice.tab, digits=digits))
cat("\n\nModel-averaged estimates for different levels of response variable:", "\n\n")
resp.labels <- labels(x$Mod.avg.beta)
mult.out <- matrix(NA, nrow=length(resp.labels), ncol=4)
colnames(mult.out) <- c("Model-averaged estimate", "Uncond. SE", paste(x$Conf.level*100,"% lower CL", sep = ""),
paste(x$Conf.level*100, "% upper CL", sep = ""))
rownames(mult.out) <- resp.labels
mult.out[,1] <- round(x$Mod.avg.beta, digits=digits)
mult.out[,2] <- round(x$Uncond.SE, digits=digits)
mult.out[,3] <- round(x$Lower.CL, digits=digits)
mult.out[,4] <- round(x$Upper.CL, digits=digits)
print(mult.out)
cat("\n")
}
}
##utility function to format candidate list of models to new class
##extract class from models in list and create new class
formatCands <- function(cand.set) {
##extract model class
if(!is.list(cand.set)) stop("\n\'cand.set\' needs to be a list of candidate models\n")
n.mods <- length(cand.set)
all.mods <- lapply(cand.set, class)
check.class <- unique(all.mods)
out.class <- NULL
##for "coxph", c("coxph.null", "coxph"), c("clogit", "coxph")
if(all(regexpr("coxph", check.class) != -1)) {
out.class <- "coxph"
}
##if NULL
if(is.null(out.class)) {
if(length(check.class) > 1) stop("\nFunctions do not support mixture of model classes\n")
out.class <- unlist(check.class)
}
##rename class
mod.class.new <- c(paste("AIC", paste(out.class, collapse = "."), sep =""))
##add to list
new.cand.set <- cand.set
##new S3 class
class(new.cand.set) <- mod.class.new
return(new.cand.set)
}
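##illustration of the class renaming (hypothetical lm fits, not run):
#mods <- formatCands(list(lm(y ~ x), lm(y ~ 1)))
#class(mods) #"AIClm" - S3 dispatch then selects the matching modavg/aictab method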
##utility functions used with modavg( ) to accommodate different specifications of interaction terms (e.g., A:B, B:A, A*B, B*A)
##in models of same set
####################################################
##function to reverse terms in interaction
reverse.parm <- function(parm) {
##check if ":" appears in term
val <- grep(pattern = ":", x = parm)
##set value to NULL
parm.alt <- NULL
##if ":" appears, then reverse interaction term
if(length(val) > 0) {
##additional check if interaction involves more than 2 terms
check.terms <- unlist(strsplit(x = parm, split = ":"))
##number of terms in interaction
n.check.terms <- length(check.terms)
##issue warning if more than 2 terms are involved
if(n.check.terms > 2) warning("\nThis function only supports two terms in an interaction:\n",
"for more complex interactions, either create terms manually before analysis \n",
"or double-check that models have been correctly included in model-averaging table\n")
##reverse order of interaction
parm.alt.tmp <- rep(NA, n.check.terms)
for (b in 1:n.check.terms) {
parm.alt.tmp[b] <- check.terms[n.check.terms - b + 1]
}
##paste terms together
parm.alt <- paste(parm.alt.tmp, collapse = ":")
return(parm.alt)
}
}
#example: reverse.parm(parm = "BARE:AGE") returns "AGE:BARE"; returns NULL when parm contains no interaction
####################################################
##function to reverse order of exclude terms with colon or asterisk
reverse.exclude <- function(exclude) {
##remove all white space (leading, trailing, and internal) from each element of exclude
exclude <- lapply(exclude, FUN = function(i) gsub('[[:space:]]+', "", i))
##determine which terms are interactions with colons
which.inter <- grep(pattern = ":", x = exclude)
n.inter <- length(which.inter)
##list to hold reverse terms
excl.list.alt <- list( )
excl.list.alt2 <- list( )
inter.star <- list( )
##if there are interaction terms with colons
if (n.inter > 0) {
##create list for interaction
rev.inter <- exclude[which.inter]
##create list to hold results
excl.rev.list <- list( )
for (b in 1:length(rev.inter)) {
excl.rev.list[b] <- strsplit(x = rev.inter[b][[1]], split = ":")
}
##add interaction with asterisk
inter.star <- lapply(excl.rev.list, FUN = function(i) paste(i, collapse = " * "))
##additional check if interaction involves more than 2 terms
n.check.terms <- unlist(lapply(excl.rev.list, length))
##issue warning if more than 2 terms are involved
if(any(n.check.terms > 2)) warning("\nThis function only supports two terms in an interaction:\n",
"for more complex interactions, either create terms manually before analysis \n",
"or double-check that models have been correctly excluded in model-averaging table\n")
##iterate over each item in excl.rev.list
for(k in 1:n.inter) {
inter.id <- excl.rev.list[k][[1]]
n.elements <- length(inter.id)
##reverse order of interaction
parm.alt.tmp <- rep(NA, n.elements)
for (b in 1:n.elements) {
parm.alt.tmp[b] <- inter.id[n.elements - b + 1]
}
##paste terms together
excl.list.alt[k] <- paste(parm.alt.tmp, collapse = ":")
excl.list.alt2[k] <- paste(parm.alt.tmp, collapse = " * ")
}
}
##determine which terms are interactions with asterisk
which.inter.star <- grep(pattern = "\\*", x = exclude)
n.inter.star <- length(which.inter.star)
##set lists to hold values
inter.space <- list( )
inter.nospace <- list( )
##list to hold reverse terms
excl.list.alt.star <- list( )
excl.list.alt.star2 <- list( )
##if there are interaction terms with asterisks
if (n.inter.star > 0) {
##create list for interaction
rev.inter <- exclude[which.inter.star]
##create vector to hold results
excl.rev.list <- list( )
for (b in 1:length(rev.inter)) {
excl.rev.list[b] <- strsplit(x = rev.inter[b][[1]], split = "\\*")
}
##paste interaction term with space
inter.space <- lapply(excl.rev.list, FUN = function(i) paste(i, collapse = " * "))
inter.nospace <- lapply(excl.rev.list, FUN = function(i) paste(i, collapse = ":"))
##additional check if interaction involves more than 2 terms
n.check.terms <- unlist(lapply(excl.rev.list, length))
##issue warning if more than 2 terms are involved
if(any(n.check.terms > 2)) warning("\nThis function only supports two terms in an interaction:\n",
"for more complex interactions, either create terms manually before analysis \n",
"or double-check that models have been correctly excluded in model-averaging table\n")
##iterate over each item in excl.rev.list
for(k in 1:n.inter.star) {
inter.id <- excl.rev.list[k][[1]]
n.elements <- length(inter.id)
##reverse order of interaction
parm.alt.tmp <- rep(NA, n.elements)
for (b in 1:n.elements) {
parm.alt.tmp[b] <- inter.id[n.elements - b + 1]
}
##paste terms together
excl.list.alt.star[k] <- paste(parm.alt.tmp, collapse = " * ")
excl.list.alt.star2[k] <- paste(parm.alt.tmp, collapse = ":")
}
}
##add step to replicate each term with colon and asterisk
##combine into exclude
exclude.out <- unique(c(exclude, excl.list.alt, excl.list.alt2, inter.space, inter.nospace, excl.list.alt.star, excl.list.alt.star2, inter.star))
if(length(exclude.out) == 0) {exclude.out <- NULL}
return(exclude.out)
}
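##example (hypothetical term names, not run): reverse.exclude() expands a single
##interaction term into all equivalent specifications, with colon and asterisk,
##in both orders, so that exclusion matches however the term was written
#reverse.exclude(exclude = list("BARE:AGE"))
##returns "BARE:AGE", "AGE:BARE", "AGE * BARE", and "BARE * AGE"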
##unexported functions
##create function for fixef to avoid importing nlme and lme4
#fixef <- function (mod){
##if from lme4
# if(isS4(mod)) {
# lme4::fixef(mod)
# }
##if from coxme
# if(identical(class(mod), "coxme")) {
# mod$coefficients
# }
##if from lmekin
# if(identical(class(mod), "lmekin")) {
# mod$coefficients$fixef
# }
# if(identical(class(mod), "lme")) {
# nlme::fixef(mod)
# ##if from nlme, coxme, lmekin
# }
#}
##create function to identify REML models from lme4
isREML <- function(mod){
as.logical(mod@devcomp$dims[["REML"]])
}
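##example (not run; assumes the lme4 package is installed):
##lmer() fits by REML by default, so isREML() returns TRUE here
#m1 <- lme4::lmer(Reaction ~ Days + (1 | Subject), data = lme4::sleepstudy)
#isREML(m1) #TRUE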
##function to extract formula name
formulaShort <- function(mod, unmarked.type = NULL) {
##extract estimates
formEst <- names(mod@estimates@estimates[[unmarked.type]]@estimates)
form.noInt <- formEst[formEst != "(Intercept)"]
if(length(form.noInt) == 0) {
form.noInt <- "." }
##print formula
return(paste(form.noInt, collapse = "+"))
}
##extract ranef function
#ranef <- function (mod){
# ##if from lme4
# if(isS4(mod)) {
# lme4::ranef(mod)
# } else {
# mod$coefficients$random
# } ##if from nlme, coxme, lmekin
#}
##generic
modavgEffect <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
cand.set <- formatCands(cand.set)
UseMethod("modavgEffect", cand.set)
}
##default
modavgEffect.default <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
stop("\nFunction not yet defined for this object class\n")
}
##aov
modavgEffect.AICaov.lm <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference (assumes the two group predictions are independent, i.e., zero covariance)
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##compute model-averaged effect size and unconditional SE - AICc
if(second.ord == TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord==FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##indicate scale of predictions
type <- "response"
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
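##usage sketch (hypothetical models and data, not run): comparing two groups
##defined by a single factor; all predictors in the models must appear in newdata,
##with only the grouping variable differing between the two rows
#iris2 <- droplevels(iris[iris$Species != "virginica", ])
#m1 <- aov(Sepal.Length ~ Species, data = iris2)
#m2 <- aov(Sepal.Length ~ Species + Petal.Width, data = iris2)
#newdat <- data.frame(Species = factor(c("setosa", "versicolor")),
# Petal.Width = mean(iris2$Petal.Width))
#modavgEffect(cand.set = list(m1, m2), modnames = c("species", "species.petal"),
# newdata = newdat)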
##glm
modavgEffect.AICglm.lm <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, gamdisp = NULL,
...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##check family of glm to avoid problems when requesting predictions with argument 'dispersion'
fam.type <- unlist(lapply(cand.set, FUN=function(i) family(i)$family))
fam.unique <- unique(fam.type)
if(identical(fam.unique, "gaussian")) {
dispersion <- NULL #set to NULL if gaussian is used
} else{dispersion <- c.hat}
##poisson and binomial defaults to 1 (no separate parameter for variance)
##for negative binomial - reset to NULL
if(any(regexpr("Negative Binomial", fam.type) != -1)) {
dispersion <- NULL
##check for mixture of negative binomial and other
##number of models with negative binomial
negbin.num <- sum(regexpr("Negative Binomial", fam.type) != -1)
if(negbin.num < length(fam.type)) {
stop("Function does not support mixture of negative binomial with other distributions")
}
}
###################CHANGES####
##############################
if(c.hat > 1) {dispersion <- c.hat }
if(!is.null(gamdisp)) {dispersion <- gamdisp}
if(c.hat > 1 && !is.null(gamdisp)) {stop("\nYou cannot specify values for both \'c.hat\' and \'gamdisp\'\n")}
##dispersion is the dispersion parameter - this influences the SE's (to specify dispersion parameter for either overdispersed Poisson or Gamma glm)
##type enables to specify either "response" (original scale = point estimate) or "link" (linear predictor)
##check if object is of "lm" or "glm" class
##extract classes
mod.class <- unlist(lapply(X = cand.set, FUN = class))
##check if all are identical
check.class <- unique(mod.class)
##check that link function is the same for all models if linear predictor is used
if(identical(type, "link")) {
check.link <- unlist(lapply(X = cand.set, FUN = function(i) i$family$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged beta estimate\n",
"with different link functions\n")}
}
##check if model uses gamma distribution
gam1 <- unlist(lapply(cand.set, FUN = function(i) family(i)$family[1] == "Gamma")) #check for gamma regression models
##correct SE's for estimates of gamma regressions when gamdisp is specified
if(any(gam1)) {
##check for specification of gamdisp argument
if(is.null(gamdisp)) stop("\nYou must specify a gamma dispersion parameter with gamma generalized linear models\n")
}
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata, type = type,
dispersion = dispersion)$fit)), nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata, type = type,
dispersion = dispersion)$se.fit)), nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##compute model-averaged effect size and unconditional SE - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - QAICc
if(second.ord==TRUE && c.hat > 1) {
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##create temporary data.frame to store fitted values and SE - QAIC
if(second.ord == FALSE && c.hat > 1) {
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##gls
modavgEffect.AICgls <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs, sort = FALSE)
#create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##compute model-averaged effect size and unconditional SE - AICc
if(second.ord == TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord==FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##indicate scale of predictions
type <- "response"
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##lm
modavgEffect.AIClm <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs, sort = FALSE)
#create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##compute model-averaged effect size and unconditional SE - AICc
if(second.ord == TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord==FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##indicate scale of predictions
type <- "response"
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##lme
modavgEffect.AIClme <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, ...){
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##AICc case (second.ord = TRUE)
if(second.ord == TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord==FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##indicate scale of predictions
type <- "response"
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
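##Illustrative usage sketch for the lme method above (kept as comments so it is
##not run at load time). The data set and model formulas are hypothetical; only
##the modavgEffect() call pattern reflects the function defined above:
## library(nlme)
## m1 <- lme(mass ~ habitat, random = ~ 1 | site, data = dat)
## m2 <- lme(mass ~ 1, random = ~ 1 | site, data = dat)
## nd <- data.frame(habitat = c("open", "forest"))  #exactly two groups to compare
## modavgEffect(cand.set = list(M1 = m1, M2 = m2), newdata = nd)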
##mer - lme4 version < 1
modavgEffect.AICmer <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", ...) {
##if modnames are not supplied, take them from the named list or generate generic names
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##extract classes
mod.class <- unlist(lapply(X=cand.set, FUN=class))
##check if all are identical
check.class <- unique(mod.class)
##check that link function is the same for all models if linear predictor is used
if(identical(type, "link")) {
link.list <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$link))
check.link <- unique(link.list)
if(length(check.link) > 1) stop("\nIt is not appropriate to compute a model-averaged effect size\n",
"from models using different link functions\n")
}
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata, type = type)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata, type = type)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##AICc case (second.ord = TRUE)
if(second.ord==TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord==FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##glmerMod
modavgEffect.AICglmerMod <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", ...) {
##if modnames are not supplied, take them from the named list or generate generic names
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##newdata is a data frame with the exact structure of the original data frame (same variable names and types)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##extract classes
mod.class <- unlist(lapply(X=cand.set, FUN=class))
##check if all are identical
check.class <- unique(mod.class)
##check that link function is the same for all models if linear predictor is used
if(identical(type, "link")) {
link.list <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$link))
check.link <- unique(link.list)
if(length(check.link) > 1) stop("\nIt is not appropriate to compute a model-averaged effect size\n",
"from models using different link functions\n")
}
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata, type = type)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata, type = type)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##AICc case (second.ord = TRUE)
if(second.ord==TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord==FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
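##Illustrative sketch for the glmerMod method above (comments only; the models
##and data are hypothetical). Note that with type = "link", the code above
##requires all candidate models to share the same link function:
## library(lme4)
## g1 <- glmer(pres ~ cover + (1 | site), family = binomial, data = dat)
## g2 <- glmer(pres ~ 1 + (1 | site), family = binomial, data = dat)
## nd <- data.frame(cover = c("low", "high"))  #two groups defined by one variable
## modavgEffect(cand.set = list(G1 = g1, G2 = g2), newdata = nd, type = "link")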
##lmerMod
modavgEffect.AIClmerMod <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...) {
##if modnames are not supplied, take them from the named list or generate generic names
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##newdata is a data frame with the exact structure of the original data frame (same variable names and types)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##extract classes
mod.class <- unlist(lapply(X=cand.set, FUN=class))
##check if all are identical
check.class <- unique(mod.class)
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##AICc case (second.ord = TRUE)
if(second.ord==TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord==FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##scale of predictions
type <- "response"
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##lmerModLmerTest
modavgEffect.AIClmerModLmerTest <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...) {
##if modnames are not supplied, take them from the named list or generate generic names
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##newdata is a data frame with the exact structure of the original data frame (same variable names and types)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##extract classes
mod.class <- unlist(lapply(X=cand.set, FUN=class))
##check if all are identical
check.class <- unique(mod.class)
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE, newdata = newdata)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##AICc case (second.ord = TRUE)
if(second.ord==TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord==FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##scale of predictions
type <- "response"
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##glm.nb
modavgEffect.AICnegbin.glm.lm <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", ...){
##if modnames are not supplied, take them from the named list or generate generic names
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##newdata is a data frame with the exact structure of the original data frame (same variable names and types)
if(type == "terms") {stop("\ntype = 'terms' is not supported for this function\n")}
###################CHANGES####
##############################
##type specifies either "response" (original scale = point estimate) or "link" (linear predictor)
##check if object is of "lm" or "glm" class
##extract classes
mod.class <- unlist(lapply(X = cand.set, FUN = class))
##check if all are identical
check.class <- unique(mod.class)
##check that link function is the same for all models if linear predictor is used
if(identical(type, "link")) {
check.link <- unlist(lapply(X = cand.set, FUN = function(i) i$family$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model-averaged effect size\n",
"with different link functions\n")}
}
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = type)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = type)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##AICc case (second.ord = TRUE)
if(second.ord == TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##rlm
modavgEffect.AICrlm.lm <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##indicate scale of predictions
type <- "response"
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
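##Example (not run): a minimal usage sketch for the rlm method above.
##All object and variable names ('m1', 'm2', 'dat', 'group', 'x', 'y')
##are hypothetical placeholders; 'newdata' must hold exactly two rows
##that differ only in the grouping variable.
## require(MASS)
## m1 <- rlm(y ~ group + x, data = dat)
## m2 <- rlm(y ~ group, data = dat)
## newdata <- data.frame(group = factor(c("A", "B"), levels = levels(dat$group)),
##                       x = mean(dat$x))
## modavgEffect(cand.set = list(m1, m2), modnames = c("full", "reduced"),
##              newdata = newdata)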
##survreg
modavgEffect.AICsurvreg <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", ...){
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that distribution is the same for all models
if(identical(type, "link")) {
check.dist <- sapply(X = cand.set, FUN = function(i) i$dist)
unique.dist <- unique(x = check.dist)
if(length(unique.dist) > 1) stop("\nFunction does not support model-averaging effect size on link scale using different distributions\n")
}
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute fitted values
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata, type = type)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##compute SE's on fitted values
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata, type = type)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##occu
modavgEffect.AICunmarkedFitOccu <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "state"; parm.id <- "psi"
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
##extract fitted values for both groups
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SEs of the fitted values for both groups
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted values for both groups
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SEs of the fitted values for both groups
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = Group.variable, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
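##Example (not run): a minimal usage sketch for the occu method above,
##model-averaging the effect size on occupancy (psi) between two habitat
##types. Object names ('umf', 'fm1', 'fm2') and the 'Habitat' covariate
##are hypothetical placeholders.
## require(unmarked)
## fm1 <- occu(~ 1 ~ Habitat, data = umf)
## fm2 <- occu(~ 1 ~ 1, data = umf)
## newdata <- data.frame(Habitat = factor(c("open", "forest"),
##                                        levels = c("open", "forest")))
## modavgEffect(cand.set = list(fm1, fm2), modnames = c("habitat", "null"),
##              newdata = newdata, parm.type = "psi", type = "response")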
##colext
modavgEffect.AICunmarkedFitColExt <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "psi"; parm.id <- "psi"
}
##gamma
if(identical(parm.type, "gamma")) {
parm.type1 <- "col"; parm.id <- "col"
}
##epsilon
if(identical(parm.type, "epsilon")) {
parm.type1 <- "ext"; parm.id <- "ext"
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
##extract fitted values for both groups
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SEs of the fitted values for both groups
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted values for both groups
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SEs of the fitted values for both groups
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = Group.variable, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##occuRN
modavgEffect.AICunmarkedFitOccuRN <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "state"; parm.id <- "lam"
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
  ##create object to hold model-averaged estimates and unconditional SEs
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
  ##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ* sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
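Each information-criterion branch above performs the same arithmetic: weight the per-model group differences by the AICc/QAICc (or AIC/QAIC) weights, then combine the conditional SEs into an unconditional SE. A minimal standalone sketch of the "revised" computation (eq. 6.12 of Burnham and Anderson 2002), using entirely made-up weights and estimates for illustration:

```r
## Illustrative only: hypothetical model weights and per-model group differences
wt <- c(0.6, 0.3, 0.1)            # information-criterion weights, sum to 1
differ <- c(0.40, 0.55, 0.35)     # per-model difference between the two groups
se.differ <- c(0.10, 0.12, 0.15)  # per-model conditional SE of the difference

## model-averaged effect size
mod.avg.eff <- sum(wt * differ)

## revised unconditional SE (eq. 6.12, Burnham and Anderson 2002)
uncond.se <- sqrt(sum(wt * (se.differ^2 + (differ - mod.avg.eff)^2)))

## Wald-type confidence interval, as in the methods above
conf.level <- 0.95
zcrit <- qnorm((1 - conf.level)/2, lower.tail = FALSE)
c(lower = mod.avg.eff - zcrit * uncond.se,
  upper = mod.avg.eff + zcrit * uncond.se)
```

The "old" variant (eq. 4.9) instead averages `sqrt(se.differ^2 + (differ - mod.avg.eff)^2)` directly, taking the square root per model before weighting.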
##pcount
modavgEffect.AICunmarkedFitPCount <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "state"; parm.id <- "lam"
##check mixture type for mixture models
mixture.type <- sapply(X = cand.set, FUN = function(i) i@mixture)
unique.mixture <- unique(mixture.type)
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
  ##create object to hold model-averaged estimates and unconditional SEs
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
  ##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ* sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
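In use, these methods are dispatched through the generic `modavgEffect()` applied to a list of fitted `unmarked` models. A hedged sketch of a typical call for the `pcount` method above (requires the `unmarked` package; the data are simulated and the covariate name `x` is purely hypothetical):

```r
## Illustrative only: simulated data, hypothetical covariate
library(unmarked)
set.seed(1)
y <- matrix(rpois(40, lambda = 2), nrow = 20)   # counts: 20 sites x 2 visits
site.covs <- data.frame(x = factor(rep(c("A", "B"), each = 10)))
umf <- unmarkedFramePCount(y = y, siteCovs = site.covs)

## candidate models differing in the abundance (lambda) structure
m1 <- pcount(~ 1 ~ x, data = umf, K = 50)
m2 <- pcount(~ 1 ~ 1, data = umf, K = 50)

## newdata holds exactly two rows, varying only in the grouping variable
nd <- data.frame(x = factor(c("A", "B"), levels = c("A", "B")))

modavgEffect(cand.set = list(m1, m2), modnames = c("x", "null"),
             newdata = nd, parm.type = "lambda", type = "response")
```

The two-row `newdata` constraint and the single varying column are exactly what the checks on `nobserv` and `varies` enforce in the method bodies.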
##unmarkedFitPCO
modavgEffect.AICunmarkedFitPCO <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##gamma
if(identical(parm.type, "gamma")) {
parm.type1 <- "gamma"; parm.id <- "gam"
}
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"; parm.id <- "lam"
##check mixture type for mixture models
mixture.type <- sapply(X = cand.set, FUN = function(i) i@mixture)
unique.mixture <- unique(mixture.type)
if(length(unique.mixture) > 1) {
if(any(unique.mixture == "ZIP")) stop("\nThis function does not yet support mixing ZIP with other distributions\n")
} else {
mixture.id <- unique(mixture.type)
if(identical(unique.mixture, "ZIP")) {
if(identical(type, "link")) stop("\nLink scale not yet supported for ZIP mixtures\n")
}
}
}
##omega
if(identical(parm.type, "omega")) {
parm.type1 <- "omega"; parm.id <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
parm.type1 <- "iota"; parm.id <- "iota"
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type1)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.type1, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged effect size across all models\n")
}
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
##extract fitted value for observation obs
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
} else {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
  ##create object to hold model-averaged estimates and unconditional SEs
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
  ##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ* sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##DS
modavgEffect.AICunmarkedFitDS <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "state"; parm.id <- "lam"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"; parm.id <- "p"
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(any(keyid == "uniform")) stop("\nDetection parameter not found in some models\n")
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
  ##create object to hold model-averaged estimates and unconditional SEs
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
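The model-averaging step shared by all of these methods is a weighted mean of the per-model group differences, with Akaike weights taken from the `aictab` output. A minimal standalone sketch with hypothetical weights and differences (not drawn from any fitted models):

```r
## hypothetical Akaike weights (must sum to 1) and per-model
## differences between the two groups (group 1 - group 2)
wt <- c(0.6, 0.3, 0.1)
differ <- c(1.2, 0.8, 1.5)

## model-averaged effect size: weighted mean across candidate models
mod.avg.eff <- sum(wt * differ)
mod.avg.eff  # 1.11
```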
##gdistsamp
modavgEffect.AICunmarkedFitGDS <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"; parm.id <- "lam"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"; parm.id <- "p"
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(any(keyid == "uniform")) stop("\nDetection parameter not found in some models\n")
}
##availability
if(identical(parm.type, "phi")) {parm.type1 <- "phi"; parm.id <- "phi"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold model-averaged estimates and unconditional SEs
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
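The two `uncond.se` options implemented above differ in where the square root is taken: the "old" version (eq. 4.9 of Burnham and Anderson 2002) averages the per-model SEs, while the "revised" version (eq. 6.12) averages the variances. A short sketch with hypothetical weights, differences, and SEs:

```r
## hypothetical weights, per-model differences, and per-model SEs
wt        <- c(0.6, 0.3, 0.1)
differ    <- c(1.2, 0.8, 1.5)
SE.differ <- c(0.4, 0.5, 0.6)
mod.avg   <- sum(wt * differ)

## "old": eq. 4.9 of Burnham and Anderson (2002)
se.old <- sum(wt * sqrt(SE.differ^2 + (differ - mod.avg)^2))

## "revised": eq. 6.12 of Burnham and Anderson (2002); Anderson (2008)
se.revised <- sqrt(sum(wt * (SE.differ^2 + (differ - mod.avg)^2)))
```

Both versions add a model-selection component, `(differ - mod.avg)^2`, to each model's sampling variance.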
##occuFP
modavgEffect.AICunmarkedFitOccuFP <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "state"; parm.id <- "psi"
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##false positives
if(identical(parm.type, "falsepos") || identical(parm.type, "fp")) {parm.type1 <- "fp"; parm.id <- "fp"}
##certain detections
if(identical(parm.type, "certain")) {
parm.type1 <- "b"; parm.id <- "b"
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type1)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.type1, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged effect size across all models\n")
}
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold model-averaged estimates and unconditional SEs
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
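When `c.hat > 1`, the QAIC/QAICc branches above inflate each model's SE by `sqrt(c.hat)` before averaging, to propagate the estimated overdispersion. A minimal sketch with a hypothetical `c.hat` value:

```r
## per-model SEs of the group difference and a hypothetical
## overdispersion estimate (c.hat > 1)
SE.differ <- c(0.4, 0.5, 0.6)
c.hat <- 2.5

## variance scales by c.hat, so the SE scales by sqrt(c.hat)
SE.inflated <- SE.differ * sqrt(c.hat)
```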
##multinomPois
modavgEffect.AICunmarkedFitMPois <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "state"; parm.id <- "lam"
##set check to NULL for other models
mixture.id <- NULL
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold model-averaged estimates and unconditional SEs
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
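The confidence limits returned by all of these methods are normal-based Wald intervals around the model-averaged effect, using the unconditional SE. A standalone sketch with hypothetical values:

```r
## hypothetical model-averaged effect and unconditional SE
mod.avg.eff <- 1.11
uncond.se   <- 0.51
conf.level  <- 0.95

## two-sided critical value from the standard normal
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)  # ~1.96
lower <- mod.avg.eff - zcrit * uncond.se
upper <- mod.avg.eff + zcrit * uncond.se
```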
##gmultmix
modavgEffect.AICunmarkedFitGMM <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"; parm.id <- "lam"
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##availability
if(identical(parm.type, "phi")) {parm.type1 <- "phi"; parm.id <- "phi"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
    ##begin loop - QAIC
    if(second.ord == FALSE && c.hat > 1){
        ##create temporary data.frame to store fitted values and SE
        QAICtmp <- AICctab
        QAICtmp$differ <- differ
        QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
        ##compute model averaged prediction and store in output matrix
        Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
        ##compute unconditional SE and store in output matrix
        ##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
        if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
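## Illustrative sketch (not part of the package API): when c.hat > 1, the
## per-model SEs of the group difference are inflated by sqrt(c.hat) before
## model averaging, as in the QAIC/QAICc branches above. The overdispersion
## estimate and SEs below are hypothetical values, not package output.
local({
    c.hat <- 2.5                        ## hypothetical overdispersion estimate
    se.differ <- c(0.4, 0.5)            ## hypothetical per-model SEs
    se.adj <- se.differ * sqrt(c.hat)   ## overdispersion-adjusted SEs
    stopifnot(all(se.adj > se.differ))
})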
##gpcount
modavgEffect.AICunmarkedFitGPC <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"; parm.id <- "lam"
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##availability
if(identical(parm.type, "phi")) {parm.type1 <- "phi"; parm.id <- "phi"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
    ##begin loop - QAIC
    if(second.ord == FALSE && c.hat > 1){
        ##create temporary data.frame to store fitted values and SE
        QAICtmp <- AICctab
        QAICtmp$differ <- differ
        QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
        ##compute model averaged prediction and store in output matrix
        Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
        ##compute unconditional SE and store in output matrix
        ##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
        if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
##unmarkedFitMMO
modavgEffect.AICunmarkedFitMMO <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##gamma
if(identical(parm.type, "gamma")) {
parm.type1 <- "gamma"; parm.id <- "gam"
}
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"; parm.id <- "lam"
##check mixture type for mixture models
mixture.type <- sapply(X = cand.set, FUN = function(i) i@mixture)
unique.mixture <- unique(mixture.type)
if(length(unique.mixture) > 1) {
if(any(unique.mixture == "ZIP")) stop("\nThis function does not yet support mixing ZIP with other distributions\n")
} else {
mixture.id <- unique(mixture.type)
if(identical(unique.mixture, "ZIP")) {
if(identical(type, "link")) stop("\nLink scale not yet supported for ZIP mixtures\n")
}
}
}
##omega
if(identical(parm.type, "omega")) {
parm.type1 <- "omega"; parm.id <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
parm.type1 <- "iota"; parm.id <- "iota"
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type1)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.type1, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged effect size across all models\n")
}
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
##extract fitted value for observation obs
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
} else {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
    ##begin loop - QAIC
    if(second.ord == FALSE && c.hat > 1){
        ##create temporary data.frame to store fitted values and SE
        QAICtmp <- AICctab
        QAICtmp$differ <- differ
        QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
        ##compute model averaged prediction and store in output matrix
        Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
        ##compute unconditional SE and store in output matrix
        ##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
        if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
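## Illustrative sketch (not part of the package API): the confidence limits
## assembled above are plain Wald intervals around the model-averaged effect,
## using a normal critical value from qnorm(). The effect size and SE below
## are hypothetical values, not package output.
local({
    conf.level <- 0.95
    mod.avg <- 1.04; uncond.se <- 0.51  ## hypothetical estimate and SE
    zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
    lower <- mod.avg - zcrit * uncond.se
    upper <- mod.avg + zcrit * uncond.se
    stopifnot(abs(zcrit - 1.959964) < 1e-5, lower < mod.avg, mod.avg < upper)
})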
##unmarkedFitDSO
modavgEffect.AICunmarkedFitDSO <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##gamma
if(identical(parm.type, "gamma")) {
parm.type1 <- "gamma"; parm.id <- "gam"
}
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"; parm.id <- "lam"
##check mixture type for mixture models
mixture.type <- sapply(X = cand.set, FUN = function(i) i@mixture)
unique.mixture <- unique(mixture.type)
if(length(unique.mixture) > 1) {
if(any(unique.mixture == "ZIP")) stop("\nThis function does not yet support mixing ZIP with other distributions\n")
} else {
mixture.id <- unique(mixture.type)
if(identical(unique.mixture, "ZIP")) {
if(identical(type, "link")) stop("\nLink scale not yet supported for ZIP mixtures\n")
}
}
}
##omega
if(identical(parm.type, "omega")) {
parm.type1 <- "omega"; parm.id <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
parm.type1 <- "iota"; parm.id <- "iota"
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type1)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.type1, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged effect size across all models\n")
}
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
parm.id <- "p"
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(any(keyid == "uniform")) stop("\nDetection parameter not found in some models\n")
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
##extract fitted value for observation obs
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata)$fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata)$se.fit)),
nrow = nmods, ncol = 2, byrow = TRUE)
} else {
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
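##------------------------------------------------------------------------
## Illustrative sketch (not part of the package): the model-averaged
## effect size and its unconditional SE computed in the methods above
## follow eq. 4.9 ("old") and eq. 6.12 ("revised") of Burnham and
## Anderson (2002). All numbers below are hypothetical and serve only
## to show the arithmetic on a small candidate set.

```r
modavg.sketch <- local({
    wts <- c(0.6, 0.3, 0.1)        ##Akaike weights (sum to 1)
    differ <- c(1.2, 0.8, 1.5)     ##per-model difference between groups
    se.differ <- c(0.4, 0.5, 0.6)  ##per-model SE of the difference
    avg <- sum(wts * differ)       ##model-averaged effect size
    ##eq. 4.9 ("old"): weight the per-model SEs directly
    se.old <- sum(wts * sqrt(se.differ^2 + (differ - avg)^2))
    ##eq. 6.12 ("revised"): weight the variances, then take the square root
    se.rev <- sqrt(sum(wts * (se.differ^2 + (differ - avg)^2)))
    c(avg = avg, se.old = se.old, se.rev = se.rev)
})
```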
##occuTTD
modavgEffect.AICunmarkedFitOccuTTD <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "psi"; parm.id <- "psi"
}
##gamma
if(identical(parm.type, "gamma")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'gamma\' does not appear in single-season models\n")
}
parm.type1 <- "col"; parm.id <- "col"
}
##epsilon
if(identical(parm.type, "epsilon")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'epsilon\' does not appear in single-season models\n")
}
parm.type1 <- "ext"; parm.id <- "ext"
}
##detect
if(identical(parm.type, "detect")) {parm.type1 <- "det"; parm.id <- "p"}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##link scale
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$Predicted)),
nrow = nmods, ncol = 2, byrow = TRUE)
##extract SE for fitted value for observation obs
SE <- matrix(data = unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, backTransform = FALSE)$SE)),
nrow = nmods, ncol = 2, byrow = TRUE)
}
##difference between groups
differ <- fit[, 1] - fit[, 2]
##SE on difference
SE.differ <- sqrt(SE[, 1]^2 + SE[, 2]^2)
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = 1, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.diff", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$differ <- differ
AICctmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICctmp$AICcWt*AICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.differ^2 + (AICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICctmp
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
QAICctmp$differ <- differ
QAICctmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.differ^2 + (QAICctmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICctmp
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
AICtmp <- AICctab
AICtmp$differ <- differ
AICtmp$SE.differ <- SE.differ
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(AICtmp$AICWt*AICtmp$differ)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.differ^2 + (AICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- AICtmp
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
QAICtmp$differ <- differ
QAICtmp$SE.differ <- SE.differ * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[, 1] <- sum(QAICtmp$QAICWt*QAICtmp$differ)
##compute unconditional SE and store in output matrix
if(identical(uncond.se, "old")) {
Mod.avg.out[, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.differ^2 + (QAICtmp$differ - Mod.avg.out[, 1])^2)))
}
##store table
AICc.out <- QAICtmp
}
Group.variable <- paste(parm.id, "(", var.id, ")")
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
Lower.CL <- Mod.avg.out[, 1] - zcrit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + zcrit * Mod.avg.out[, 2]
##arrange in matrix
predsOutMat <- matrix(data = c(Mod.avg.out[, 1], Mod.avg.out[, 2],
Lower.CL, Upper.CL),
nrow = 1, ncol = 4)
colnames(predsOutMat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
rownames(predsOutMat) <- "effect.size"
Mod.eff.list <- list("Group.variable" = var.id, "Group1" = group1,
"Group2" = group2, "Type" = type, "Mod.avg.table" = AICc.out, "Mod.avg.eff" = Mod.avg.out[,1],
"Uncond.se" = Mod.avg.out[,2], "Conf.level" = conf.level, "Lower.CL" = Lower.CL,
"Upper.CL" = Upper.CL, "Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
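##------------------------------------------------------------------------
## Illustrative sketch (not part of the package): the confidence limits
## computed above use the two-sided normal critical value, so for
## conf.level = 0.95 the interval is estimate +/- qnorm(0.975) * SE
## (zcrit is about 1.96). The estimate and SE below are hypothetical.

```r
ci.sketch <- local({
    conf.level <- 0.95
    zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)  ##about 1.96
    est <- 1.11  ##hypothetical model-averaged effect size
    se <- 0.51   ##hypothetical unconditional SE
    c(lower = est - zcrit * se, upper = est + zcrit * se)
})
```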
##occuMS
modavgEffect.AICunmarkedFitOccuMS <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##check if type = "link"
if(identical(type, "link")) stop("\nLink scale predictions not yet supported for this model type\n")
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "state"
parm.id <- "psi"
##because the same elements have different labels in different parts of the results of this object type
parm.type.alt <- parm.type
}
##transition
if(identical(parm.type, "phi")) {
##check that parameter appears in all models
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'phi\' does not appear in single-season models\n")
}
parm.id <- "phi"
parm.type1 <- "transition"
##because the same elements have different labels in different parts of the results of this object type
parm.type.alt <- parm.type
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
parm.id <- "p"
##because the same elements have different labels in different parts of the results of this object type
parm.type.alt <- parm.type1
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
##extract predicted values
predsList <- lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type.alt, ...))
##determine number of parameters
parmFirst <- predsList[[1]]
parmNames <- names(parmFirst)
nparms <- length(parmNames)
##lists to store predictions and SE's
predsEstList <- vector("list", nparms)
names(predsEstList) <- parmNames
predsSEList <- vector("list", nparms)
names(predsSEList) <- parmNames
##iterate over each parm
for(k in 1:nparms) {
predsEstList[[k]] <- lapply(predsList, FUN = function(i) i[[k]]$Predicted)
predsSEList[[k]] <- lapply(predsList, FUN = function(i) i[[k]]$SE)
}
##organize in an nobs x nmodels x nparms array
predsEst <- array(unlist(predsEstList), dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsSE <- array(unlist(predsSEList), dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
##adjust for overdispersion if c-hat > 1
if(c.hat > 1) {predsSE <- predsSE * sqrt(c.hat)}
##difference between groups
differList <- vector("list", nparms)
for(k in 1:nparms) {
differList[[k]] <- predsEst[1, , k] - predsEst[2, , k]
}
##SE on difference
SE.differList <- vector("list", nparms)
for(k in 1:nparms) {
SE.differList[[k]] <- sqrt(predsSE[1, , k]^2 + predsSE[2, , k]^2)
}
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##prepare list for model-averaged predictions and SE's
predsOut <- array(data = NA, dim = c(1, 4, nparms),
dimnames = list(1,
c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL"),
parmNames))
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(AICctmp$AICcWt * differList[[j]])
predsOut[, 2, j] <- sum(AICctmp$AICcWt * sqrt(SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(AICctmp$AICcWt * differList[[j]])
predsOut[, 2, j] <- sqrt(sum(AICctmp$AICcWt * (SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2)))
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(QAICctmp$QAICcWt * differList[[j]])
predsOut[, 2, j] <- sum(QAICctmp$QAICcWt * sqrt(SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(QAICctmp$QAICcWt * differList[[j]])
predsOut[, 2, j] <- sqrt(sum(QAICctmp$QAICcWt * (SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2)))
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(AICtmp$AICWt * differList[[j]])
predsOut[, 2, j] <- sum(AICtmp$AICWt * sqrt(SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(AICtmp$AICWt * differList[[j]])
predsOut[, 2, j] <- sqrt(sum(AICtmp$AICWt * (SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2)))
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(QAICtmp$QAICWt * differList[[j]])
predsOut[, 2, j] <- sum(QAICtmp$QAICWt * sqrt(SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(QAICtmp$QAICWt * differList[[j]])
predsOut[, 2, j] <- sqrt(sum(QAICtmp$QAICWt * (SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2)))
}
}
}
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
for(j in 1:nparms){
predsOut[, 3, j] <- predsOut[, 1, j] - zcrit * predsOut[, 2, j]
predsOut[, 4, j] <- predsOut[, 1, j] + zcrit * predsOut[, 2, j]
}
##format to matrix
arrayToMat <- apply(predsOut, 2L, c)
#if(is.vector(arrayToMat)) {
# predsOutMat <- matrix(arrayToMat, nrow = 1)
# colnames(predsOutMat) <- names(arrayToMat)
# AICctab$differ <- unlist(differList)
# AICctab$SE.differ <- unlist(SE.differList)
#} else {
predsOutMat <- arrayToMat
#}
##create label for rows
rownames(predsOutMat) <- parmNames
##convert array to list
##predsOutList <- lapply(seq(dim(predsOut)[3]), function(i) predsOut[ , , i])
##names(predsOutList) <- parmNames
##store table
AICc.out <- AICctab
Group.variable <- paste(parm.id, "(", var.id, ")")
##organize as list
Mod.eff.list <- list("Group.variable" = Group.variable, "Group1" = group1,
"Group2" = group2, "Type" = type,
"Mod.avg.table" = AICc.out,
"Mod.avg.eff" = predsOut[, 1, ],
"Uncond.se" = predsOut[, 2, ],
"Conf.level" = conf.level,
"Lower.CL" = predsOut[, 3, ],
"Upper.CL" = predsOut[, 4, ],
"Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
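##------------------------------------------------------------------------
## Illustrative sketch (not part of the package): the multi-parameter
## methods above organize predictions in an nobs x nmodels x nparms
## array and take, for each parameter, the difference between row 1
## (group 1) and row 2 (group 2) across models. Dimensions, parameter
## names, and values below are hypothetical.

```r
differ.sketch <- local({
    nobserv <- 2; nmods <- 3
    parmNames <- c("psi", "p")  ##hypothetical parameter names
    ##hypothetical predictions filled column-major into the array
    predsEst <- array(seq_len(nobserv * nmods * length(parmNames)) / 10,
                      dim = c(nobserv, nmods, length(parmNames)),
                      dimnames = list(1:nobserv, paste0("Mod", 1:nmods), parmNames))
    ##per-parameter difference between group 1 (row 1) and group 2 (row 2)
    lapply(seq_along(parmNames), function(k) predsEst[1, , k] - predsEst[2, , k])
})
```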
##occuMulti
modavgEffect.AICunmarkedFitOccuMulti <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL,
...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgEffect for details\n")}
##check if type = "link"
if(identical(type, "link")) stop("\nLink scale predictions not yet supported for this model type\n")
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "state"
parm.id <- "psi"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
parm.id <- "p"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##check on newdata
##determine number of observations in new data set
nobserv <- nrow(newdata)
if(nobserv > 2) stop("\nCurrent maximum number of groups compared is 2:\nmodify newdata argument accordingly\n")
##determine number of columns in new data set
ncolumns <- ncol(newdata)
##if only 1 column, add an additional column to avoid problems in computation
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##determine which column varies
uniques <- apply(X = newdata, MARGIN = 2, FUN = unique)
lengths <- lapply(X = uniques, FUN = length)
varies <- sapply(X = lengths, FUN = function(i) i > 1)
##########################################
##CHANGES: add case when only a single variable appears in data frame
if(ncol(newdata) == 1) {
varies <- 1
}
##add extractX to check that variables appearing in model also appear in data frame
##checkVariables <- extractX(cand.set, parm.type = parm.type)
##if(any(!checkVariables$predictors %in% names(newdata))) {
## stop("\nAll predictors must appear in the 'newdata' data frame\n")
##}
##########################################
##extract name of column
if(sum(varies) == 1) {
var.id <- names(varies)[which(varies == TRUE)]
##determine name of groups compared
group1 <- as.character(newdata[,paste(var.id)][1])
group2 <- as.character(newdata[,paste(var.id)][2])
} else {
##warn that no single variable defines groups
warning("\nGroups do not seem to be defined by a single variable.\n Function proceeding with generic group names\n")
##use generic names
var.id <- "Groups"
group1 <- "group 1"
group2 <- "group 2"
}
##number of models
nmods <- length(modnames)
##compute predicted values
##point estimate
##extract predicted values
predsList <- lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, ...))
##check structure of predsList
##if predictions for all psi parameters
if(is.matrix(predsList[[1]]$Predicted) && identical(parm.type, "psi")) {
##determine number of parameters
##check if species argument was provided in call to function
parmFirst <- predsList[[1]]$Predicted
parmNames <- colnames(parmFirst)
nparms <- length(parmNames)
##lists to store predictions and SE's
predsEstList <- vector("list", nparms)
names(predsEstList) <- parmNames
predsSEList <- vector("list", nparms)
names(predsSEList) <- parmNames
##iterate over each parm
for(k in 1:nparms) {
predsEstList[[k]] <- lapply(predsList, FUN = function(i) i$Predicted[, k])
predsSEList[[k]] <- lapply(predsList, FUN = function(i) i$SE[, k])
}
##organize in an nobs x nmodels x nparms array
predsEst <- array(unlist(predsEstList), dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsSE <- array(unlist(predsSEList), dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
}
##if predictions for single species
if(!is.matrix(predsList[[1]]$Predicted) && identical(parm.type, "psi")) {
parmNames <- parm.type
nparms <- length(parmNames)
predsEstMat <- sapply(predsList, FUN = function(i) i$Predicted)
predsSEMat <- sapply(predsList, FUN = function(i) i$SE)
predsEst <- array(predsEstMat, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsSE <- array(predsSEMat, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
}
##if predictions for detection
if(identical(parm.type, "detect")) {
parmFirst <- predsList[[1]]
if(!is.data.frame(parmFirst)) {
orig.parmNames <- names(parmFirst)
parmNames <- paste("p", orig.parmNames, sep = "-")
nparms <- length(parmNames)
##iterate over species
predsEst <- array(NA, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsSE <- array(NA, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
##iterate over each parm
for(k in 1:nparms) {
predsEst[, , k] <- sapply(predsList, FUN = function(i) i[[k]]$Predicted)
predsSE[, , k] <- sapply(predsList, FUN = function(i) i[[k]]$SE)
}
} else {
##single parameter p
parmNames <- "p"
nparms <- 1
##iterate over species
predsEst <- array(NA, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsSE <- array(NA, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsEst[, , "p"] <- sapply(predsList, FUN = function(i) i[, "Predicted"])
predsSE[, , "p"] <- sapply(predsList, FUN = function(i) i[, "SE"])
}
}
##adjust for overdispersion if c-hat > 1
if(c.hat > 1) {predsSE <- predsSE * sqrt(c.hat)}
##difference between groups
differList <- vector("list", nparms)
for(k in 1:nparms) {
differList[[k]] <- predsEst[1, , k] - predsEst[2, , k]
}
##SE on difference
SE.differList <- vector("list", nparms)
for(k in 1:nparms) {
SE.differList[[k]] <- sqrt(predsSE[1, , k]^2 + predsSE[2, , k]^2)
}
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE, c.hat = c.hat)
##prepare list for model-averaged predictions and SE's
predsOut <- array(data = NA, dim = c(1, 4, nparms),
dimnames = list(1,
c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL"),
parmNames))
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(AICctmp$AICcWt * differList[[j]])
predsOut[, 2, j] <- sum(AICctmp$AICcWt * sqrt(SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(AICctmp$AICcWt * differList[[j]])
predsOut[, 2, j] <- sqrt(sum(AICctmp$AICcWt * (SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2)))
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(QAICctmp$QAICcWt * differList[[j]])
predsOut[, 2, j] <- sum(QAICctmp$QAICcWt * sqrt(SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(QAICctmp$QAICcWt * differList[[j]])
predsOut[, 2, j] <- sqrt(sum(QAICctmp$QAICcWt * (SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2)))
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(AICtmp$AICWt * differList[[j]])
predsOut[, 2, j] <- sum(AICtmp$AICWt * sqrt(SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(AICtmp$AICWt * differList[[j]])
predsOut[, 2, j] <- sqrt(sum(AICtmp$AICWt * (SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2)))
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(QAICtmp$QAICWt * differList[[j]])
predsOut[, 2, j] <- sum(QAICtmp$QAICWt * sqrt(SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[, 1, j] <- sum(QAICtmp$QAICWt * differList[[j]])
predsOut[, 2, j] <- sqrt(sum(QAICtmp$QAICWt * (SE.differList[[j]]^2 + (differList[[j]] - predsOut[, 1, j])^2)))
}
}
}
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
for(j in 1:nparms){
predsOut[, 3, j] <- predsOut[, 1, j] - zcrit * predsOut[, 2, j]
predsOut[, 4, j] <- predsOut[, 1, j] + zcrit * predsOut[, 2, j]
}
##format to matrix
arrayToMat <- apply(predsOut, 2L, c)
if(is.vector(arrayToMat)) {
predsOutMat <- matrix(arrayToMat, nrow = 1)
colnames(predsOutMat) <- names(arrayToMat)
AICctab$differ <- unlist(differList)
AICctab$SE.differ <- unlist(SE.differList)
} else {
predsOutMat <- arrayToMat
}
##create label for rows
rownames(predsOutMat) <- parmNames
##store table
AICc.out <- AICctab
Group.variable <- paste(parm.id, "(", var.id, ")")
##organize as list
Mod.eff.list <- list("Group.variable" = Group.variable, "Group1" = group1,
"Group2" = group2, "Type" = type,
"Mod.avg.table" = AICc.out,
"Mod.avg.eff" = predsOut[, 1, ],
"Uncond.se" = predsOut[, 2, ],
"Conf.level" = conf.level,
"Lower.CL" = predsOut[, 3, ],
"Upper.CL" = predsOut[, 4, ],
"Matrix.output" = predsOutMat)
class(Mod.eff.list) <- c("modavgEffect", "list")
return(Mod.eff.list)
}
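## The effect-size computation above reduces to a weighted sum of per-model
## group differences plus the revised unconditional SE of equation 6.12 of
## Burnham and Anderson (2002). A self-contained sketch with made-up
## weights, estimates, and SE's:

```r
w <- c(0.6, 0.3, 0.1)              ## Akaike weights (made up)
differ <- c(1.2, 0.9, 1.5)         ## per-model group differences (made up)
se.differ <- c(0.4, 0.5, 0.6)      ## per-model SE's of the difference (made up)
mod.avg.eff <- sum(w * differ)
uncond.se <- sqrt(sum(w * (se.differ^2 + (differ - mod.avg.eff)^2)))
round(c(effect = mod.avg.eff, se = uncond.se), 3)
```

## The second term in the variance sum penalizes between-model disagreement,
## so the unconditional SE exceeds a plain weighted average of the SE's.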
print.modavgEffect <- function(x, digits = 2, ...) {
##rework Group.variable labels
old.type <- x$Group.variable
stripped.type <- unlist(strsplit(old.type, split = "\\("))
ic <- colnames(x$Mod.avg.table)[3]
cat("\nModel-averaged effect size on the", x$Type, "scale based on entire model set:\n\n")
##extract elements
if(length(stripped.type) == 1) {
cat("\nMultimodel inference on \"", paste(x$Group.variable, x$Group1, sep = ""), " - ",
paste(x$Group.variable, x$Group2, sep = ""), "\" based on ", ic, "\n", sep = "")
##if unmarkedFit model, then print differently
} else {
##extract parameter name
parm.type <- gsub("(^ +)|( +$)", "", stripped.type[1])
##extract Group.variable name
var.id <- gsub("(^ +)|( +$)", "", unlist(strsplit(stripped.type[2], "\\)"))[1])
cat("\nMultimodel inference on \"", paste(parm.type, "(", var.id, x$Group1, ")", sep = ""), " - ",
paste(parm.type, "(", var.id, x$Group2, ")", sep = ""), "\" based on ", ic, "\n", sep = "")
}
cat("\n", ic, " table used to obtain model-averaged effect size:\n", sep = "")
oldtab <- x$Mod.avg.table
if (any(names(oldtab)=="c_hat")) {cat("\t(c-hat estimate = ", oldtab$c_hat[1], ")\n", sep = "")}
cat("\n")
##check if result is a scalar or vector
if(length(x$Mod.avg.eff) == 1) {
if (any(names(oldtab)=="c_hat")) {
nice.tab <- cbind(oldtab[,2], oldtab[,3], oldtab[,4], oldtab[,6],
oldtab[,9], oldtab[,10])
} else {
nice.tab <- cbind(oldtab[,2], oldtab[,3], oldtab[,4], oldtab[,6],
oldtab[,8], oldtab[,9])
}
colnames(nice.tab) <- c(colnames(oldtab)[c(2,3,4,6)], paste("Effect(", x$Group1, " - ", x$Group2, ")", sep = ""), "SE")
rownames(nice.tab) <- oldtab[,1]
print(round(nice.tab, digits=digits))
cat("\nModel-averaged effect size:", eval(round(x$Mod.avg.eff, digits=digits)), "\n")
cat("Unconditional SE:", eval(round(x$Uncond.se, digits=digits)), "\n")
cat("",x$Conf.level * 100, "% Unconditional confidence interval: ", round(x$Lower.CL, digits=digits),
", ", round(x$Upper.CL, digits=digits), "\n\n", sep = "")
} else { ##if result from occuMulti or occuMS
##extract parameter names
parmNames <- names(x$Mod.avg.eff)
nparms <- length(parmNames)
nice.tab <- cbind(oldtab[,2], oldtab[,3], oldtab[,4], oldtab[,6])
colnames(nice.tab) <- colnames(oldtab)[c(2,3,4,6)]
rownames(nice.tab) <- oldtab[,1]
print(round(nice.tab, digits=digits))
if(nparms <= 3) {
##iterate over each parameter
for(k in 1:nparms) {
cat("\nModel-averaged effect size for ", parmNames[k], ": ", round(x$Mod.avg.eff[k], digits=digits), "\n", sep = "")
cat("Unconditional SE for ", parmNames[k], ": ", eval(round(x$Uncond.se[k], digits=digits)), "\n", sep = "")
cat("",x$Conf.level * 100, "% Unconditional confidence interval for ", parmNames[k], ": ",
round(x$Lower.CL[k], digits=digits), ", ", round(x$Upper.CL[k], digits=digits), "\n",
"---", sep = "")
}
} else {
cat("\n")
cat("Model-averaged effect sizes:\n\n")
nice.mat <- x$Matrix.output
colnames(nice.mat) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
print(round(nice.mat, digits = digits))
}
cat("\n")
}
}
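## The SE of the group difference combined for each model above follows the
## standard rule for independent estimates, SE(diff) = sqrt(SE1^2 + SE2^2).
## A minimal sketch with made-up values:

```r
se.group1 <- 0.25                  ## SE of the prediction for group 1 (made up)
se.group2 <- 0.30                  ## SE of the prediction for group 2 (made up)
se.diff <- sqrt(se.group1^2 + se.group2^2)
se.diff
```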
## ---- end of file: /scratch/gouwar.j/cran-all/cranData/AICcmodavg/R/modavgEffect.R ----
##generic
modavgPred <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...) {
cand.set <- formatCands(cand.set)
UseMethod("modavgPred", cand.set)
}
##default
modavgPred.default <- function(cand.set, modnames = NULL, newdata,
second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, ...) {
stop("\nFunction not yet defined for this object class\n")
}
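## modavgPred follows the S3 pattern used throughout the package: the
## generic reclasses the candidate list (via formatCands) and dispatches on
## the resulting class, with a default method that fails informatively.
## A minimal stand-alone sketch of that dispatch (all names below are made up):

```r
myGeneric <- function(x, ...) UseMethod("myGeneric", x)
myGeneric.default <- function(x, ...) stop("class not supported")
myGeneric.myClass <- function(x, ...) "dispatched to myClass method"
obj <- structure(list(), class = c("myClass", "list"))
myGeneric(obj)
```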
##aov
##lm
modavgPred.AICaov.lm <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
###################CHANGES####
##############################
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
type <- "response"
##############changes start here
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
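## For 'lm' fits the method amounts to weighting per-model predictions by
## AICc weights. A base-R sketch that derives the weights directly from
## AICc instead of calling aictab() (data and models are made up):

```r
set.seed(1)
dat <- data.frame(y = rnorm(30), x1 = rnorm(30), x2 = rnorm(30))
m1 <- lm(y ~ x1, data = dat)
m2 <- lm(y ~ x1 + x2, data = dat)
aicc <- function(m) {
  k <- length(coef(m)) + 1            ## parameters, counting sigma
  n <- nobs(m)
  AIC(m) + 2 * k * (k + 1) / (n - k - 1)
}
delta <- sapply(list(m1, m2), aicc)
delta <- delta - min(delta)
w <- exp(-delta / 2) / sum(exp(-delta / 2))
nd <- data.frame(x1 = 0, x2 = 0)
fits <- sapply(list(m1, m2), function(m) predict(m, newdata = nd))
sum(w * fits)                          ## model-averaged prediction
```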
##glm
modavgPred.AICglm.lm <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, gamdisp = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
if(type == "terms") {stop("\nPredictions of type \"terms\" are not supported by this function\n")}
##check family of glm to avoid problems when requesting predictions with argument 'dispersion'
fam.type <- unlist(lapply(cand.set, FUN=function(i) family(i)$family))
fam.unique <- unique(fam.type)
if(identical(fam.unique, "gaussian")) {
dispersion <- NULL #set to NULL if gaussian is used
} else{dispersion <- c.hat}
##poisson, binomial, and negative binomial defaults to 1 (no separate parameter for variance)
##for negative binomial - reset to NULL
if(any(regexpr("Negative Binomial", fam.type) != -1)) {
dispersion <- NULL
##check for mixture of negative binomial and other
##number of models with negative binomial
negbin.num <- sum(regexpr("Negative Binomial", fam.type) != -1)
if(negbin.num < length(fam.type)) {
stop("Function does not support mixture of negative binomial with other distributions in model set")
}
}
###################CHANGES####
##############################
if(c.hat > 1) {dispersion <- c.hat }
if(!is.null(gamdisp)) {dispersion <- gamdisp}
if(c.hat > 1 && !is.null(gamdisp)) {stop("\nYou cannot specify values for both \'c.hat\' and \'gamdisp\'\n")}
##dispersion is the dispersion parameter - this influences the SE's (to specify dispersion parameter for either overdispersed Poisson or Gamma glm)
##type enables to specify either "response" (original scale = point estimate) or "link" (linear predictor)
##check that link function is the same for all models if linear predictor is used
check.link <- unlist(lapply(X = cand.set, FUN = function(i) i$family$link))
unique.link <- unique(x = check.link)
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##extract inverse link function
link.inv <- unlist(lapply(X = cand.set, FUN = function(i) i$family$linkinv))[[1]]
##check if model uses gamma distribution
gam1 <- unlist(lapply(cand.set, FUN = function(i) family(i)$family[1] == "Gamma")) #check for gamma regression models
##correct SE's for estimates of gamma regressions when gamdisp is specified
if(any(gam1) == TRUE) {
##check for specification of gamdisp argument
if(is.null(gamdisp)) stop("\nYou must specify a gamma dispersion parameter with gamma generalized linear models\n")
}
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, dispersion = dispersion)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, dispersion = dispersion)$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", dispersion = dispersion)$fit))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", dispersion = dispersion)$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##create temporary data.frame to store fitted values and SE - QAICc
if(second.ord==TRUE && c.hat > 1) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN=function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, dispersion = dispersion)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, dispersion = dispersion)$se.fit))
QAICctmp <- AICctab
QAICctmp$fit <- fit
QAICctmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", dispersion = dispersion)$fit))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", dispersion = dispersion)$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE^2 + (QAICctmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE^2 + (QAICctmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, dispersion = dispersion)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, dispersion = dispersion)$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", dispersion = dispersion)$fit))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", dispersion = dispersion)$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##create temporary data.frame to store fitted values and SE - QAIC
if(second.ord == FALSE && c.hat > 1) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, dispersion = dispersion)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, dispersion = dispersion)$se.fit))
QAICtmp <- AICctab
QAICtmp$fit <- fit
QAICtmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", dispersion = dispersion)$fit))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", dispersion = dispersion)$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##############changes start here
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
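## For glm fits on the response scale, the confidence limits above are
## built on the link scale and then back-transformed with the inverse link.
## A sketch for a logit link with made-up numbers:

```r
fit.link <- 0.4                                   ## model-averaged linear predictor (made up)
se.link <- 0.3                                    ## unconditional SE on the link scale (made up)
zcrit <- qnorm(p = (1 - 0.95) / 2, lower.tail = FALSE)
linkinv <- binomial()$linkinv                     ## inverse logit
ci <- linkinv(fit.link + c(-1, 1) * zcrit * se.link)
round(ci, 3)
```

## Back-transforming the link-scale limits keeps the interval inside the
## admissible range of the response (here, between 0 and 1).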
##lm
modavgPred.AIClm <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
###################CHANGES####
##############################
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
type <- "response"
##############changes start here
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
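## The two unconditional SE estimators selected by 'uncond.se' differ only
## in where the square root sits: equation 4.9 ("old") averages the SE's,
## whereas equation 6.12 ("revised") averages the variances, so the revised
## form is never smaller. A sketch with made-up weights, fits, and SE's:

```r
w <- c(0.5, 0.35, 0.15)            ## Akaike weights (made up)
fit <- c(2.0, 2.4, 1.8)            ## per-model predictions (made up)
se <- c(0.30, 0.35, 0.40)          ## per-model SE's (made up)
avg <- sum(w * fit)
se.old <- sum(w * sqrt(se^2 + (fit - avg)^2))
se.revised <- sqrt(sum(w * (se^2 + (fit - avg)^2)))
round(c(old = se.old, revised = se.revised), 4)
```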
##gls
modavgPred.AICgls <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predictSE( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##begin loop - AICc
if(second.ord==TRUE){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ])$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ])$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
}
}
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ])$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ])$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
type <- "response"
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
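##--------------------------------------------------------------------
##Illustrative sketch (editor's note, not package code; toy values):
##the two unconditional SE estimators used above, written out directly.
##eq. 4.9 ("old"):      SE_u = sum(w * sqrt(SE^2 + (fit - fbar)^2))
##eq. 6.12 ("revised"): SE_u = sqrt(sum(w * (SE^2 + (fit - fbar)^2)))
##  w <- c(0.6, 0.3, 0.1)      #Akaike weights (must sum to 1)
##  fit <- c(2.1, 2.4, 1.8)    #per-model predictions for one observation
##  se <- c(0.20, 0.25, 0.30)  #per-model SEs of those predictions
##  fbar <- sum(w * fit)       #model-averaged prediction
##  se.old <- sum(w * sqrt(se^2 + (fit - fbar)^2))
##  se.rev <- sqrt(sum(w * (se^2 + (fit - fbar)^2)))
##--------------------------------------------------------------------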
##lme
modavgPred.AIClme <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95, ...) {
##use the names of the list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predictSE( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord, nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ])$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ])$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ])$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ])$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
type <- "response"
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
##mer
modavgPred.AICmer <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, ...) {
##use the names of the list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
if(c.hat != 1) {warning("\nThis function only allows \'c.hat = 1\' for \'mer\' class objects\n")}
##check that link function is the same for all models if linear predictor is used
check.link <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$link))
unique.link <- unique(check.link)
if(identical(type, "link")) {
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
}
##extract inverse link function
link.inv <- unlist(lapply(X = cand.set, FUN = function(i) i@resp$family$linkinv))[[1]]
##determine number of observations in data set
nobserv <- dim(newdata)[1]
##determine number of columns in data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predictSE.mer( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, level = 0)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, level = 0)$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", level = 0)$fit))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", level = 0)$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, level = 0)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, level = 0)$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", level = 0)$fit))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", level = 0)$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
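##--------------------------------------------------------------------
##Illustrative sketch (editor's note, not package code; toy values):
##for type = "response", the CI above is computed on the link scale and
##back-transformed with the inverse link, which keeps the limits inside
##the admissible range of the response (e.g., (0, 1) for a logit link):
##  linkinv <- binomial()$linkinv            #inverse-logit
##  est.link <- 1.2; se.link <- 0.5          #made-up link-scale values
##  zcrit <- qnorm(p = 0.025, lower.tail = FALSE)
##  linkinv(est.link + c(-1, 1) * zcrit * se.link)  #CI on response scale
##--------------------------------------------------------------------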
##glmerMod
modavgPred.AICglmerMod <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, ...) {
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##use the names of the list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
if(c.hat != 1) {warning("\nThis function only allows \'c.hat = 1\' for \'glmerMod\' class objects\n")}
##check that link function is the same for all models if linear predictor is used
check.link <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$link))
unique.link <- unique(check.link)
if(identical(type, "link")) {
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
}
##extract inverse link function
link.inv <- unlist(lapply(X = cand.set, FUN = function(i) i@resp$family$linkinv))[[1]]
##determine number of observations in data set
nobserv <- dim(newdata)[1]
##determine number of columns in data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predictSE.mer( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, level = 0)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, level = 0)$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", level = 0)$fit))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", level = 0)$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, level = 0)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type, level = 0)$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", level = 0)$fit))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link", level = 0)$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
##lmerMod
modavgPred.AIClmerMod <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95, ...) {
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##use the names of the list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##determine number of observations in data set
nobserv <- dim(newdata)[1]
##determine number of columns in data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predictSE.mer( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
level = 0)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
level = 0)$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
level = 0)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
level = 0)$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
type <- "response"
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
##lmerModLmerTest
modavgPred.AIClmerModLmerTest <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95, ...) {
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##use the names of the list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##determine number of observations in data set
nobserv <- dim(newdata)[1]
##determine number of columns in data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predictSE.mer( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
level = 0)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
level = 0)$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
##create temporary data.frame to store fitted values and SE - AIC
if(second.ord == FALSE) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
level = 0)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predictSE(i, se.fit = TRUE, newdata = newdata[obs, ],
level = 0)$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
type <- "response"
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
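##--------------------------------------------------------------------
##Usage sketch (editor's note, not run; object and variable names are
##hypothetical): the methods in this file back the exported
##modavgPred( ) generic, e.g. for two mixed models:
##  fm1 <- lme4::lmer(y ~ x + (1 | g), data = dat)
##  fm2 <- lme4::lmer(y ~ x + z + (1 | g), data = dat)
##  modavgPred(cand.set = list(A = fm1, B = fm2),
##             newdata = data.frame(x = 0:2, z = 0, g = dat$g[1]))
##--------------------------------------------------------------------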
##glm.nb
modavgPred.AICnegbin.glm.lm <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", ...) {
##use the names of the list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##type enables to specify either "response" (original scale = point estimate) or "link" (linear predictor)
##check that link function is the same for all models if linear predictor is used
check.link <- unlist(lapply(X = cand.set, FUN = function(i) i$family$link))
unique.link <- unique(x = check.link)
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##extract inverse link function
link.inv <- unlist(lapply(X = cand.set, FUN = function(i) i$family$linkinv))[[1]]
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type)$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ],
type = "link")$fit))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ],
type = "link")$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type)$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ],
type = "link")$fit))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE,
newdata = newdata[obs, ],
type = "link")$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
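The two unconditional SE estimators selected by `uncond.se` above can be illustrated numerically. This is a minimal sketch with made-up weights, predictions, and SEs (all values hypothetical); it reproduces equation 4.9 ("old") and equation 6.12 ("revised") of Burnham and Anderson (2002) for a single observation:

```r
## Hypothetical Akaike weights, per-model predictions, and conditional SEs
w   <- c(0.6, 0.3, 0.1)    # model weights (sum to 1)
fit <- c(1.2, 1.5, 0.9)    # per-model predictions for one observation
se  <- c(0.20, 0.25, 0.30) # per-model conditional SEs

## model-averaged prediction: weighted sum of per-model predictions
mavg <- sum(w * fit)

## "old" unconditional SE (eq. 4.9 of Burnham and Anderson 2002)
se.old <- sum(w * sqrt(se^2 + (fit - mavg)^2))

## "revised" unconditional SE (eq. 6.12; Anderson 2008, p. 111)
se.revised <- sqrt(sum(w * (se^2 + (fit - mavg)^2)))
```

Because the square root is concave, the revised estimator is never smaller than the old one for the same inputs.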
##rlm
modavgPred.AICrlm.lm <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95, ...) {
##newdata is a data frame with the same structure as the original data frame (same variable names and types)
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames, second.ord = second.ord,
nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##begin loop - AICc
if(second.ord == TRUE){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ])$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ])$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
##begin loop - AIC
if(second.ord == FALSE) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ])$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ])$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
}
}
}
##rlm predictions are on the original (response) scale
type <- "response"
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
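The Wald interval assembled at the end of the method above uses the upper-tail normal quantile. A quick standalone sketch (prediction and SE values hypothetical):

```r
conf.level <- 0.95
## upper-tail critical value: ~1.96 for a 95% interval
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)

mod.avg.pred <- 2.0   # hypothetical model-averaged prediction
uncond.se <- 0.5      # hypothetical unconditional SE

lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
```

For identity-link models such as rlm fits, the interval is computed directly on the response scale, as done above.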
##survreg
modavgPred.AICsurvreg <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95, type = "response", ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##newdata is a data frame with the same structure as the original data frame (same variable names and types)
if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##'type' specifies either "response" (point estimate on the original scale) or "link" (linear predictor)
##check that distribution is the same for all models
check.dist <- sapply(X = cand.set, FUN = function(i) i$dist)
unique.dist <- unique(x = check.dist)
if(identical(type, "link")) {
if(length(unique.dist) > 1) stop("\nFunction does not support model-averaging linear predictors using different distributions\n")
}
##extract inverse link function
link.inv <- sapply(X = cand.set, FUN = function(i) survreg.distributions[[unique.dist]]$itrans)[[1]]
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE){
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type)$se.fit))
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
AICctmp$fit <- fit
AICctmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.dist) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link")$fit))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link")$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different distributions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE) {
for (obs in 1:nobserv) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type)$fit))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = type)$se.fit))
AICtmp <- AICctab
AICtmp$fit <- fit
AICtmp$SE <- SE
##required for CI
if(identical(type, "response")) {
if(length(unique.dist) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link")$fit))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i) predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = "link")$se.fit))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different distributions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit - Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
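For `type = "response"`, the methods above build the confidence interval on the link scale and then back-transform the limits through the inverse link, which keeps them inside the valid range of the response. A sketch using the logistic inverse link with hypothetical link-scale values:

```r
zcrit <- qnorm(p = 0.025, lower.tail = FALSE)  # ~1.96 for a 95% interval

est.link <- 0.4  # hypothetical model-averaged linear predictor
se.link  <- 0.5  # hypothetical unconditional SE on the link scale

## back-transform the link-scale limits (plogis is the logistic inverse link)
lower.CL <- plogis(est.link - zcrit * se.link)
upper.CL <- plogis(est.link + zcrit * se.link)
```

A symmetric interval computed directly on the probability scale could fall outside (0, 1); back-transforming the link-scale limits cannot.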
##occu
modavgPred.AICunmarkedFitOccu <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
##extract inverse link function
if(identical(select.link, "cloglog")) {
link.inv <- function(x) 1 - exp(-exp(x))
}
##newdata is a data frame with the same structure as the original data frame (same variable names and types)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
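When `c.hat > 1`, the occupancy method above switches to quasi-likelihood weights (QAICc or QAIC) and inflates each conditional SE by `sqrt(c.hat)` to account for overdispersion, as in the `SE * sqrt(c.hat)` lines of the loops. A minimal standalone sketch (values hypothetical):

```r
c.hat <- 2.5               # hypothetical overdispersion estimate
SE <- c(0.10, 0.15, 0.12)  # hypothetical conditional SEs

## quasi-likelihood adjustment: variance scales by c.hat,
## so the SE scales by sqrt(c.hat)
SE.inflated <- SE * sqrt(c.hat)
```

The inflated SEs then feed into the same unconditional SE formulas as in the `c.hat == 1` case.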
##colext
modavgPred.AICunmarkedFitColExt <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "psi"
}
##gamma
if(identical(parm.type, "gamma")) {
parm.type1 <- "col"
}
##epsilon
if(identical(parm.type, "epsilon")) {
parm.type1 <- "ext"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
##newdata is a data frame with the same structure as the original data frame (same variable names and types)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
        ##extract fitted value for observation obs
        fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
                                                                    type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##############changes start here
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
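## Minimal numeric sketch (illustration only, not part of the package): with
## hypothetical per-model predictions, SEs, and Akaike weights for a single
## observation, the model-averaged estimate and the two unconditional SE
## variants computed in the loops above (eq. 4.9 vs eq. 6.12 of Burnham and
## Anderson 2002) reduce to:

```r
w   <- c(0.6, 0.3, 0.1)        # Akaike weights (sum to 1)
fit <- c(0.52, 0.48, 0.55)     # per-model predictions for one observation
se  <- c(0.05, 0.07, 0.06)     # per-model SEs of those predictions

## model-averaged prediction: weighted mean of per-model fits
mod.avg <- sum(w * fit)

## "old" unconditional SE (eq. 4.9): weighted sum of per-model SDs
uncond.se.old <- sum(w * sqrt(se^2 + (fit - mod.avg)^2))

## "revised" unconditional SE (eq. 6.12): square root of the weighted
## sum of per-model variances plus squared deviations
uncond.se.revised <- sqrt(sum(w * (se^2 + (fit - mod.avg)^2)))
```

By Jensen's inequality the "old" estimator is never larger than the "revised" one, which is why the revised form is the package default.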
##occuRN
modavgPred.AICunmarkedFitOccuRN <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
if(identical(select.link, "exp")) {
link.inv <- exp
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
        ##extract fitted value for observation obs
        fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
                                                                    type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##############changes start here
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
##pcount
modavgPred.AICunmarkedFitPCount <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "state"
##check mixture type for mixture models
mixture.type <- sapply(X = cand.set, FUN = function(i) i@mixture)
unique.mixture <- unique(mixture.type)
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
if(identical(select.link, "exp")) {
link.inv <- exp
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
        ##extract fitted value for observation obs
        fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
                                                                    type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
          AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
                                                                                  type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
##create temporary data.frame to store fitted values and SE
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
##create temporary data.frame to store fitted values and SE
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##############assemble output and compute confidence intervals
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
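##----------------------------------------------------------------------
##Illustrative sketch (not part of the package): the two unconditional
##SE estimators used in the loops above, on made-up numbers; 'w', 'fit',
##and 'se' are hypothetical placeholders, not objects from this file
##w <- c(0.6, 0.3, 0.1)                  #Akaike weights (sum to 1)
##fit <- c(1.20, 1.35, 1.10)             #per-model predictions
##se <- c(0.10, 0.12, 0.15)              #per-model SEs
##avg <- sum(w * fit)                    #model-averaged prediction
##"old" estimator (eq. 4.9 of Burnham and Anderson 2002):
##se.old <- sum(w * sqrt(se^2 + (fit - avg)^2))
##"revised" estimator (eq. 6.12; Anderson 2008, p. 111):
##se.rev <- sqrt(sum(w * (se^2 + (fit - avg)^2)))
##----------------------------------------------------------------------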
##pcountOpen
modavgPred.AICunmarkedFitPCO <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"
##check mixture type for mixture models
mixture.type <- sapply(X = cand.set, FUN = function(i) i@mixture)
unique.mixture <- unique(mixture.type)
if(length(unique.mixture) > 1) {
if(any(unique.mixture == "ZIP")) stop("\nThis function does not yet support mixing ZIP with other distributions\n")
##no ZIP involved: record one of the non-ZIP mixtures so later checks on mixture.id are defined
mixture.id <- unique.mixture[1]
} else {
mixture.id <- unique.mixture
if(identical(unique.mixture, "ZIP")) {
if(identical(type, "link")) stop("\nLink scale not yet supported for ZIP mixtures\n")
if(identical(type, "response")) warning("\nModel-averaging linear predictor from a ZIP model not yet implemented\n")
}
}
}
##gamma
if(identical(parm.type, "gamma")) {
parm.type1 <- "gamma"
}
##omega
if(identical(parm.type, "omega")) {
parm.type1 <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
parm.type1 <- "iota"
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type1)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.type1, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged predictions across all models\n")
}
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
if(identical(select.link, "exp")) {
link.inv <- exp
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
SE <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
} else {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
##create temporary data.frame to store fitted values and SE
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
SE <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
} else {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
##create temporary data.frame to store fitted values and SE
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
SE <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
} else {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
SE <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
} else {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
##create temporary data.frame to store fitted values and SE
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##############assemble output and compute confidence intervals
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
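##----------------------------------------------------------------------
##Usage sketch (hypothetical model objects): model-averaged predictions
##of lambda from two pcountOpen fits built on the same unmarkedFramePCO;
##'fm1', 'fm2', and 'cov1' are assumed names for illustration only
##Cands <- list(Mod1 = fm1, Mod2 = fm2)
##modavgPred(cand.set = Cands, newdata = data.frame(cov1 = c(0, 1)),
##           parm.type = "lambda", type = "response")
##----------------------------------------------------------------------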
##distsamp
modavgPred.AICunmarkedFitDS <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")){
parm.type1 <- "det"
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(any(keyid == "uniform")) stop("\nDetection parameter not found in some models\n")
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
if(identical(select.link, "exp")) {
link.inv <- exp
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##compute confidence intervals and assemble output
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
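##----------------------------------------------------------------------
##Illustrative sketch (commented out; not executed by the package): the
##two unconditional SE estimators used in the loops above, applied to
##toy values.  The numbers for 'wt', 'fit', and 'se' are hypothetical
##and serve only to show the formulas.
## wt <- c(0.6, 0.3, 0.1)                 ##Akaike weights (sum to 1)
## fit <- c(1.20, 1.35, 1.10)             ##model-specific predictions
## se <- c(0.10, 0.12, 0.15)              ##model-specific SE's
## mavg <- sum(wt * fit)                  ##model-averaged prediction
## ##uncond.se = "old": equation 4.9 of Burnham and Anderson (2002)
## se.old <- sum(wt * sqrt(se^2 + (fit - mavg)^2))
## ##uncond.se = "revised": equation 6.12 of Burnham and Anderson (2002)
## se.revised <- sqrt(sum(wt * (se^2 + (fit - mavg)^2)))
##----------------------------------------------------------------------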
##gdistsamp
modavgPred.AICunmarkedFitGDS <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(any(keyid == "uniform")) stop("\nDetection parameter not found in some models\n")
}
##availability
if(identical(parm.type, "phi")) {
parm.type1 <- "phi"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
if(identical(select.link, "exp")) {
link.inv <- exp
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##compute confidence intervals and assemble output
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
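##----------------------------------------------------------------------
##Illustrative sketch (commented out; not executed by the package): when
##overdispersion is estimated (c.hat > 1), each model-specific SE is
##inflated by sqrt(c.hat) before model averaging, as done in the QAIC
##and QAICc loops above.  'se' and 'c.hat' here are hypothetical values.
## se <- c(0.10, 0.12, 0.15)              ##model-specific SE's
## c.hat <- 2.5                           ##estimated overdispersion
## se.adj <- se * sqrt(c.hat)             ##SE's used in the averaging
##----------------------------------------------------------------------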
##occuFP
modavgPred.AICunmarkedFitOccuFP <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
##false positives
if(identical(parm.type, "falsepos") || identical(parm.type, "fp")) {
parm.type1 <- "fp"
}
##certain detections
if(identical(parm.type, "certain")) {
parm.type1 <- "b"
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type1)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.type1, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged predictions across all models\n")
}
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##############changes start here
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
##multinomPois
modavgPred.AICunmarkedFitMPois <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
if(identical(select.link, "exp")) {
link.inv <- exp
}
####changes#############
    ##newdata is a data frame with the exact structure of the original data frame (same variable names and types)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
    ##if only 1 column, add a fake column so that newdata[obs, ] keeps its data frame structure when subset
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
    ##store AIC or AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
        ##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##############changes start here
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
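##Hypothetical usage sketch for the method above (not part of the package
##source; all object names -- y.mat, site.df, elev -- are illustrative and
##assume a removal-sampling data set is available):
## umf <- unmarked::unmarkedFrameMPois(y = y.mat, siteCovs = site.df,
##                                     type = "removal")
## fits <- list(null = unmarked::multinomPois(~ 1 ~ 1, umf),
##              elev = unmarked::multinomPois(~ 1 ~ elev, umf))
## modavgPred(cand.set = fits, newdata = site.df,
##            parm.type = "lambda", type = "response")
##The named list supplies the model names, and parm.type = "lambda" selects
##the abundance component ("state" in unmarked's internal naming).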
##gmultmix
modavgPred.AICunmarkedFitGMM <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
##availability
if(identical(parm.type, "phi")) {
parm.type1 <- "phi"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
if(identical(select.link, "exp")) {
link.inv <- exp
}
####changes#############
    ##newdata is a data frame with the exact structure of the original data frame (same variable names and types)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
    ##if only 1 column, add a fake column so that newdata[obs, ] keeps its data frame structure when subset
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
    ##store AIC or AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
        ##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##############changes start here
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
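##Hypothetical usage sketch for the method above (not part of the package
##source; all object names are illustrative and assume a generalized
##multinomial-mixture data set with repeated primary periods):
## umf <- unmarked::unmarkedFrameGMM(y = y.mat, siteCovs = site.df,
##                                   numPrimary = 3, type = "removal")
## fits <- list(null = unmarked::gmultmix(~ 1, ~ 1, ~ 1, data = umf),
##              elev = unmarked::gmultmix(~ elev, ~ 1, ~ 1, data = umf))
## modavgPred(cand.set = fits, newdata = site.df,
##            parm.type = "lambda", type = "response")
##Here parm.type may also be "phi" (availability) or "detect" (detection),
##matching the renaming block at the top of the method.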
##gpcount
modavgPred.AICunmarkedFitGPC <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
##availability
if(identical(parm.type, "phi")) {
parm.type1 <- "phi"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
if(identical(select.link, "exp")) {
link.inv <- exp
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems when predicting from a single-row data frame
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##assemble model-averaged predictions and confidence intervals
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
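The confidence limits near the end of each method are a Wald interval built on the link scale and back-transformed with the inverse link. A small sketch with made-up numbers (logit link, hypothetical estimate and SE) illustrates the mechanics:

```r
## two-sided critical value for the requested confidence level
conf.level <- 0.95
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)  # ~1.96

## hypothetical model-averaged linear predictor and unconditional SE (logit scale)
est.link <- 0.5
se.link <- 0.3

## inverse of the "logistic" link, as selected above
link.inv <- plogis
lower.CL <- link.inv(est.link - zcrit * se.link)
upper.CL <- link.inv(est.link + zcrit * se.link)
```

Because the interval is computed before back-transformation, it respects the (0, 1) bounds of a probability, which a symmetric interval on the response scale would not.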
##multmixOpen
modavgPred.AICunmarkedFitMMO <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"
##check mixture type for mixture models
mixture.type <- sapply(X = cand.set, FUN = function(i) i@mixture)
unique.mixture <- unique(mixture.type)
    if(length(unique.mixture) > 1) {
      if(any(unique.mixture == "ZIP")) stop("\nThis function does not yet support mixing ZIP with other distributions\n")
      ##define mixture.id so the ZIP checks below do not fail when several non-ZIP mixtures are combined
      mixture.id <- unique.mixture[1]
    } else {
      mixture.id <- unique(mixture.type)
if(identical(unique.mixture, "ZIP")) {
if(identical(type, "link")) stop("\nLink scale not yet supported for ZIP mixtures\n")
if(identical(type, "response")) warning("\nModel-averaging linear predictor from a ZIP model not yet implemented\n")
}
}
}
##gamma
if(identical(parm.type, "gamma")) {
parm.type1 <- "gamma"
}
##omega
if(identical(parm.type, "omega")) {
parm.type1 <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
parm.type1 <- "iota"
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type1)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.type1, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged predictions across all models\n")
}
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
if(identical(select.link, "exp")) {
link.inv <- exp
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems when predicting from a single-row data frame
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
SE <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
} else {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
      ##store fitted values and SEs
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
SE <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
} else {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
      ##store fitted values and SEs
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
  ##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
SE <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
} else {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
if(identical(parm.type, "lambda") && identical(mixture.id, "ZIP")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$fit))
SE <- unlist(lapply(X = cand.set, FUN = function(i)predictSE(i, se.fit = TRUE,
newdata = newdata[obs, ])$se.fit))
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
} else {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
      ##store fitted values and SEs
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##assemble model-averaged predictions and confidence intervals
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
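Each method above opens with the same model-name defaulting block. As an illustrative refactor (not part of the package API; `get.modnames` is a hypothetical helper), that logic could be factored out once:

```r
## default model names from a candidate list, warning when names are generated
get.modnames <- function(cand.set, modnames = NULL) {
  if(is.null(modnames)) {
    if(is.null(names(cand.set))) {
      modnames <- paste("Mod", 1:length(cand.set), sep = "")
      warning("\nModel names have been supplied automatically in the table\n")
    } else {
      modnames <- names(cand.set)
    }
  }
  modnames
}

get.modnames(list(a = 1, b = 2))            # names taken from the list
suppressWarnings(get.modnames(list(1, 2)))  # "Mod1" "Mod2", with a warning
```

Centralizing the block would keep the generated-name format and the warning text consistent across all the `modavgPred` methods.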
##distsampOpen
modavgPred.AICunmarkedFitDSO <-
function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##lambda
if(identical(parm.type, "lambda")) {
parm.type1 <- "lambda"
##check mixture type for mixture models
mixture.type <- sapply(X = cand.set, FUN = function(i) i@mixture)
unique.mixture <- unique(mixture.type)
    if(length(unique.mixture) > 1) {
      if(any(unique.mixture == "ZIP")) stop("\nThis function does not yet support mixing ZIP with other distributions\n")
      ##define mixture.id so the ZIP checks below do not fail when several non-ZIP mixtures are combined
      mixture.id <- unique.mixture[1]
    } else {
      mixture.id <- unique(mixture.type)
if(identical(unique.mixture, "ZIP")) {
if(identical(type, "link")) stop("\nLink scale not yet supported for ZIP mixtures\n")
if(identical(type, "response")) warning("\nModel-averaging linear predictor from a ZIP model not yet implemented\n")
}
}
}
##gamma - recruitment
if(identical(parm.type, "gamma")) {
parm.type1 <- "gamma"
}
##omega
if(identical(parm.type, "omega")) {
parm.type1 <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
parm.type1 <- "iota"
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.type1)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.type1, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged predictions across all models\n")
}
}
##detect
if(identical(parm.type, "detect")){
parm.type1 <- "det"
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(any(keyid == "uniform")) stop("\nDetection parameter not found in some models\n")
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
##extract inverse link function
if(identical(select.link, "logistic")) {
link.inv <- plogis
}
if(identical(select.link, "exp")) {
link.inv <- exp
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
  ##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
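##Notation for the two 'uncond.se' options used above (sketch derived from the
##computations in this file, with w_i the (Q)AIC(c) weights, theta_i the
##model-specific prediction, SE_i its standard error, and
##theta.bar = sum_i w_i * theta_i the model-averaged prediction):
##  "old" (eq. 4.9, Burnham and Anderson 2002):
##    SE.unc = sum_i w_i * sqrt(SE_i^2 + (theta_i - theta.bar)^2)
##  "revised" (eq. 6.12, Burnham and Anderson 2002; Anderson 2008, p. 111):
##    SE.unc = sqrt(sum_i w_i * (SE_i^2 + (theta_i - theta.bar)^2))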
##occuTTD
modavgPred.AICunmarkedFitOccuTTD <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "psi"
}
##gamma - colonization
if(identical(parm.type, "gamma")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'gamma\' does not appear in single-season models\n")
}
parm.type1 <- "col"
}
##epsilon - extinction
if(identical(parm.type, "epsilon")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'epsilon\' does not appear in single-season models\n")
}
parm.type1 <- "ext"
}
  ##detect ('lambda': the rate at which a species not detected at time t is detected at the next time step)
  if(identical(parm.type, "detect")) {
    parm.type1 <- "det"
  }
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##################
  ##extract inverse link function
  if(identical(select.link, "logistic")) {
    link.inv <- plogis
  }
  if(identical(select.link, "exp")) {
    link.inv <- exp
  }
  if(identical(select.link, "cloglog")) {
    link.inv <- function(x) 1 - exp(-exp(x))
  }
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##create object to hold Model-averaged estimates and unconditional SE's
Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##required for CI
if(identical(type, "response")) {
Mod.avg.out.link <- matrix(NA, nrow = nobserv, ncol = 2)
colnames(Mod.avg.out.link) <- c("Mod.avg.est", "Uncond.SE")
}
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICctmp$fit.link <- NA
AICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICctmp$fit <- fit
AICctmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICctmp$AICcWt*sqrt(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE^2 + (AICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICctmp$AICcWt*AICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICctmp$AICcWt*(AICctmp$SE.link^2 + (AICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICctmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICctmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICctmp$fit.link <- NA
QAICctmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICctmp$fit <- fit
QAICctmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICctmp$QAICcWt * QAICctmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICctmp$QAICcWt * sqrt(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICctmp$QAICcWt*sqrt(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt *(QAICctmp$SE^2 + (QAICctmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICctmp$QAICcWt*QAICctmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICctmp$QAICcWt*(QAICctmp$SE.link^2 + (QAICctmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
if(identical(type, "response")) {
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
AICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
AICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
AICtmp$fit.link <- NA
AICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
AICtmp$fit <- fit
AICtmp$SE <- SE
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(AICtmp$AICWt*sqrt(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE^2 + (AICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(AICtmp$AICWt*AICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(AICtmp$AICWt*(AICtmp$SE.link^2 + (AICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
  ##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
if(identical(type, "response")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1)$SE))
if(length(unique.link) == 1) {
QAICtmp$fit.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
QAICtmp$SE.link <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE)) * sqrt(c.hat)
} else {
warning("\nIt is not appropriate to model-average linear predictors using different link functions\n")
QAICtmp$fit.link <- NA
QAICtmp$SE.link <- NA
}
}
if(identical(type, "link")) {
##extract fitted value for observation obs
fit <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$Predicted))
##extract SE for fitted value for observation obs
SE <- unlist(lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata[obs, ],
type = parm.type1, backTransform = FALSE)$SE))
}
QAICtmp$fit <- fit
QAICtmp$SE <- SE * sqrt(c.hat)
##compute model averaged prediction and store in output matrix
Mod.avg.out[obs, 1] <- sum(QAICtmp$QAICWt * QAICtmp$fit)
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[obs, 2] <- sum(QAICtmp$QAICWt * sqrt(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sum(QAICtmp$QAICWt*sqrt(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[obs, 2] <- sqrt(sum(QAICtmp$QAICWt *(QAICtmp$SE^2 + (QAICtmp$fit- Mod.avg.out[obs, 1])^2)))
if(identical(type, "response")) {
Mod.avg.out.link[obs, 1] <- sum(QAICtmp$QAICWt*QAICtmp$fit.link)
Mod.avg.out.link[obs, 2] <- sqrt(sum(QAICtmp$QAICWt*(QAICtmp$SE.link^2 + (QAICtmp$fit.link - Mod.avg.out.link[obs, 1])^2)))
}
}
}
}
mod.avg.pred <- Mod.avg.out[, 1]
uncond.se <- Mod.avg.out[, 2]
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
if(identical(type, "link")) {
lower.CL <- mod.avg.pred - zcrit * uncond.se
upper.CL <- mod.avg.pred + zcrit * uncond.se
} else {
lower.CL <- link.inv(Mod.avg.out.link[, 1] - zcrit * Mod.avg.out.link[, 2])
upper.CL <- link.inv(Mod.avg.out.link[, 1] + zcrit * Mod.avg.out.link[, 2])
}
##create matrix
matrix.output <- matrix(data = c(mod.avg.pred, uncond.se, lower.CL, upper.CL),
nrow = nrow(newdata), ncol = 4)
colnames(matrix.output) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = mod.avg.pred, "uncond.se" = uncond.se,
"conf.level" = conf.level, "lower.CL" = lower.CL, "upper.CL" = upper.CL,
"matrix.output" = matrix.output)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
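##Example usage (sketch with hypothetical object and covariate names; not run):
##  fits <- list("null" = fm.null, "global" = fm.global)  #fitted occuTTD models
##  nd <- data.frame(elev = c(0, 1))  #covariates matching those in the models
##  modavgPred(cand.set = fits, newdata = nd, parm.type = "detect")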
##occuMS
modavgPred.AICunmarkedFitOccuMS <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##check if type = "link"
if(identical(type, "link")) stop("\nLink scale predictions not yet supported for this model type\n")
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "state"
##because the same elements have different labels in different parts of the results of this object type
parm.type.alt <- parm.type
}
##transition
if(identical(parm.type, "phi")) {
##check that parameter appears in all models
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'phi\' does not appear in single-season models\n")
}
parm.type1 <- "transition"
##because the same elements have different labels in different parts of the results of this object type
parm.type.alt <- parm.type
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
##because the same elements have different labels in different parts of the results of this object type
parm.type.alt <- parm.type1
}
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##extract inverse link function
#if(identical(select.link, "multinomial")) {
# link.inv <- TODO
#}
##extract inverse link function
#if(identical(select.link, "logistic")) {
# link.inv <- plogis
#}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
##extract predicted values
predsList <- lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type.alt, ...))
##determine number of parameters
parmFirst <- predsList[[1]]
parmNames <- names(parmFirst)
nparms <- length(parmNames)
##lists to store predictions and SE's
predsEstList <- vector("list", nparms)
names(predsEstList) <- parmNames
predsSEList <- vector("list", nparms)
names(predsSEList) <- parmNames
##iterate over each parm
for(k in 1:nparms) {
predsEstList[[k]] <- lapply(predsList, FUN = function(i) i[[k]]$Predicted)
predsSEList[[k]] <- lapply(predsList, FUN = function(i) i[[k]]$SE)
}
##organize in an nobs x nmodels x nparms array
predsEst <- array(unlist(predsEstList), dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsSE <- array(unlist(predsSEList), dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
##adjust for overdispersion if c-hat > 1
if(c.hat > 1) {predsSE <- predsSE * sqrt(c.hat)}
##prepare list for model-averaged predictions and SE's
predsOut <- array(data = NA, dim = c(nobserv, 4, nparms),
dimnames = list(1:nobserv,
c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL"),
parmNames))
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(AICctmp$AICcWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sum(AICctmp$AICcWt * sqrt(predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(AICctmp$AICcWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sqrt(sum(AICctmp$AICcWt * (predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(QAICctmp$QAICcWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sum(QAICctmp$QAICcWt * sqrt(predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(QAICctmp$QAICcWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sqrt(sum(QAICctmp$QAICcWt * (predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(AICtmp$AICWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sum(AICtmp$AICWt * sqrt(predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(AICtmp$AICWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sqrt(sum(AICtmp$AICWt * (predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(QAICtmp$QAICWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sum(QAICtmp$QAICWt * sqrt(predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(QAICtmp$QAICWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sqrt(sum(QAICtmp$QAICWt * (predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2)))
}
}
}
}
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
for(j in 1:nparms){
predsOut[, 3, j] <- predsOut[, 1, j] - zcrit * predsOut[, 2, j]
predsOut[, 4, j] <- predsOut[, 1, j] + zcrit * predsOut[, 2, j]
}
##format to matrix
arrayToMat <- apply(predsOut, 2L, c)
arrayToDF <- expand.grid(dimnames(predsOut)[c(1, 3)])
##rename columns
names(arrayToDF)[1:2] <- c("Observation", "Parameter")
rawFrame <- data.frame(arrayToDF, arrayToMat)
predsOut.mat <- as.matrix(rawFrame[, c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")])
#rawFrame <- expand.grid(Obs = 1:matRows, Parms = names(predsOut))
##create label for rows
rowLab <- paste(rawFrame$Parameter, rawFrame$Observation, sep = "-")
rownames(predsOut.mat) <- rowLab
##convert array to list
#predsOutList <- lapply(seq(dim(predsOut)[3]), function(i) predsOut[ , , i])
#names(predsOutList) <- parmNames
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = predsOut[, 1, ],
"uncond.se" = predsOut[, 2, ],
"conf.level" = conf.level,
"lower.CL" = predsOut[, 3, ],
"upper.CL" = predsOut[, 4, ],
"matrix.output" = predsOut.mat)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
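##Illustrative sketch (not run; hypothetical numbers) of the computation in
##the loops above for one observation and one parameter: predictions are
##combined with Akaike weights, and the revised unconditional SE follows
##equation 6.12 of Burnham and Anderson (2002):
##wt <- c(0.6, 0.3, 0.1)     #Akaike weights of three models
##pred <- c(0.42, 0.48, 0.35)    #predictions from each model
##se <- c(0.05, 0.06, 0.07)     #SE of each prediction
##avg <- sum(wt * pred)      #model-averaged prediction
##uncond <- sqrt(sum(wt * (se^2 + (pred - avg)^2)))  #unconditional SE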
##occuMulti
modavgPred.AICunmarkedFitOccuMulti <- function(cand.set, modnames = NULL, newdata, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
type = "response", c.hat = 1, parm.type = NULL, ...) {
##check whether cand.set is a named list when modnames is not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgPred for details\n")}
##check if type = "link"
if(identical(type, "link")) stop("\nLink scale predictions not yet supported for this model type\n")
##rename values according to unmarked to extract from object
##psi
if(identical(parm.type, "psi")) {
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
parm.type1 <- "det"
}
####changes#############
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(identical(type, "link")) {
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
}
##extract inverse link function
#if(identical(select.link, "multinomial")) {
# link.inv <- TODO
#}
##extract inverse link function
#if(identical(select.link, "logistic")) {
# link.inv <- plogis
#}
####changes#############
##newdata is data frame with exact structure of the original data frame (same variable names and type)
##determine number of observations in new data set
nobserv <- dim(newdata)[1]
##determine number of columns in new data set
ncolumns <- dim(newdata)[2]
##if only 1 column, add an additional column to avoid problems in computation with predict( )
if(ncolumns == 1) newdata$blank.fake.column.NAs <- NA
##store AICc table
AICctab <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE, c.hat = c.hat)
#################################################
#################################################
###CHANGES
##create object to hold Model-averaged estimates and unconditional SE's
##Mod.avg.out <- matrix(NA, nrow = nobserv, ncol = 2)
##colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
##extract predicted values
predsList <- lapply(X = cand.set, FUN = function(i)predict(i, se.fit = TRUE, newdata = newdata,
type = parm.type1, ...))
##check structure of predsList
##if predictions for all psi parameters
if(is.matrix(predsList[[1]]$Predicted) && identical(parm.type, "psi")) {
##determine number of parameters
##check if species argument was provided in call to function
parmFirst <- predsList[[1]]$Predicted
parmNames <- colnames(parmFirst)
nparms <- length(parmNames)
##lists to store predictions and SE's
predsEstList <- vector("list", nparms)
names(predsEstList) <- parmNames
predsSEList <- vector("list", nparms)
names(predsSEList) <- parmNames
##iterate over each parm
for(k in 1:nparms) {
predsEstList[[k]] <- lapply(predsList, FUN = function(i) i$Predicted[, k])
predsSEList[[k]] <- lapply(predsList, FUN = function(i) i$SE[, k])
}
##organize in an nobs x nmodels x nparms array
predsEst <- array(unlist(predsEstList), dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsSE <- array(unlist(predsSEList), dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
}
##if predictions for single species
if(!is.matrix(predsList[[1]]$Predicted) && identical(parm.type, "psi")) {
parmNames <- parm.type
nparms <- length(parmNames)
predsEstMat <- sapply(predsList, FUN = function(i) i$Predicted)
predsSEMat <- sapply(predsList, FUN = function(i) i$SE)
predsEst <- array(predsEstMat, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsSE <- array(predsSEMat, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
}
##if predictions for detection
if(identical(parm.type, "detect")) {
parmFirst <- predsList[[1]]
if(!is.data.frame(parmFirst)) {
orig.parmNames <- names(parmFirst)
parmNames <- paste("p", orig.parmNames, sep = "-")
nparms <- length(parmNames)
##iterate over species
predsEst <- array(NA, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsSE <- array(NA, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
##iterate over each parm
for(k in 1:nparms) {
predsEst[, , k] <- sapply(predsList, FUN = function(i) i[[k]]$Predicted)
predsSE[, , k] <- sapply(predsList, FUN = function(i) i[[k]]$SE)
}
} else {
##single parameter p
parmNames <- "p"
nparms <- 1
##iterate over species
predsEst <- array(NA, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsSE <- array(NA, dim = c(nobserv, length(cand.set), nparms),
dimnames = list(1:nobserv, modnames, parmNames))
predsEst[, , "p"] <- sapply(predsList, FUN = function(i) i[, "Predicted"])
predsSE[, , "p"] <- sapply(predsList, FUN = function(i) i[, "SE"])
}
}
##adjust for overdispersion if c-hat > 1
if(c.hat > 1) {predsSE <- predsSE * sqrt(c.hat)}
##prepare array to hold model-averaged predictions and SEs
predsOut <- array(data = NA, dim = c(nobserv, 4, nparms),
dimnames = list(1:nobserv,
c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL"),
parmNames))
##begin loop - AICc
if(second.ord == TRUE && c.hat == 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICctmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(AICctmp$AICcWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sum(AICctmp$AICcWt * sqrt(predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(AICctmp$AICcWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sqrt(sum(AICctmp$AICcWt * (predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2)))
}
}
}
}
##begin loop - QAICc
if(second.ord == TRUE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICctmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(QAICctmp$QAICcWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sum(QAICctmp$QAICcWt * sqrt(predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(QAICctmp$QAICcWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sqrt(sum(QAICctmp$QAICcWt * (predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2)))
}
}
}
}
##begin loop - AIC
if(second.ord == FALSE && c.hat == 1) {
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
AICtmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(AICtmp$AICWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sum(AICtmp$AICWt * sqrt(predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(AICtmp$AICWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sqrt(sum(AICtmp$AICWt * (predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2)))
}
}
}
}
##begin loop - QAIC
if(second.ord == FALSE && c.hat > 1){
for (obs in 1:nobserv) {
##create temporary data.frame to store fitted values and SE
QAICtmp <- AICctab
##compute unconditional SE and store in output matrix
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(QAICtmp$QAICWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sum(QAICtmp$QAICWt * sqrt(predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2))
}
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
for(j in 1:nparms) {
predsOut[obs, 1, j] <- sum(QAICtmp$QAICWt * predsEst[obs, , j])
predsOut[obs, 2, j] <- sqrt(sum(QAICtmp$QAICWt * (predsSE[obs, , j]^2 + (predsEst[obs, , j] - predsOut[obs, 1, j])^2)))
}
}
}
}
##compute confidence interval
zcrit <- qnorm(p=(1-conf.level)/2, lower.tail=FALSE)
for(j in 1:nparms){
predsOut[, 3, j] <- predsOut[, 1, j] - zcrit * predsOut[, 2, j]
predsOut[, 4, j] <- predsOut[, 1, j] + zcrit * predsOut[, 2, j]
}
##format to matrix
arrayToMat <- apply(predsOut, 2L, c)
arrayToDF <- expand.grid(dimnames(predsOut)[c(1, 3)])
##rename columns
names(arrayToDF)[1:2] <- c("Observation", "Parameter")
rawFrame <- data.frame(arrayToDF, arrayToMat)
predsOut.mat <- as.matrix(rawFrame[, c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")])
#rawFrame <- expand.grid(Obs = 1:matRows, Parms = names(predsOut))
##create label for rows
rowLab <- paste(rawFrame$Parameter, rawFrame$Observation, sep = "-")
rownames(predsOut.mat) <- rowLab
##convert array to list
#predsOutList <- lapply(seq(dim(predsOut)[3]), function(i) predsOut[ , , i])
#names(predsOutList) <- parmNames
##organize as list
Mod.pred.list <- list("type" = type, "mod.avg.pred" = predsOut[, 1, ],
"uncond.se" = predsOut[, 2, ],
"conf.level" = conf.level,
"lower.CL" = predsOut[, 3, ],
"upper.CL" = predsOut[, 4, ],
"matrix.output" = predsOut.mat)
class(Mod.pred.list) <- c("modavgPred", "list")
return(Mod.pred.list)
}
print.modavgPred <- function(x, digits = 3, ...) {
if(any(names(x)=="type") ) {
cat("\nModel-averaged predictions on the ", x$type, " scale\n",
"based on entire model set and ",
x$conf.level*100, "% confidence interval:\n\n", sep = "")
} else {cat("\nModel-averaged predictions based on entire\n",
"model set and ", x$conf.level*100,
"% confidence interval:\n\n", sep = "")}
nice.tab <- x$matrix.output
colnames(nice.tab) <- c("mod.avg.pred", "uncond.se", "lower.CL", "upper.CL")
##add check to display output from occuMS or occuMulti
##check if rownames are present
if(is.null(rownames(nice.tab))){
nrows <- dim(nice.tab)[1]
rownames(nice.tab) <- 1:nrows
}
print(round(nice.tab, digits = digits))
cat("\n")
}
## AICcmodavg/R/modavgPred.R
#generic
modavgShrink <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
cand.set <- formatCands(cand.set)
UseMethod("modavgShrink", cand.set)
}
##default
modavgShrink.default <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
stop("\nFunction not yet defined for this object class\n")
}
##aov
modavgShrink.AICaov.lm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check whether cand.set is a named list when modnames is not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all white space from parm (leading, trailing, and internal)
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
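##Illustrative sketch (not run; hypothetical numbers) of the shrinkage
##estimator computed above: a model that excludes the parameter contributes
##an estimate and SE of 0, shrinking the average toward 0:
##wt <- c(0.5, 0.3, 0.2)     #Akaike weights of three models
##beta <- c(1.2, 0.9, 0)     #0 for the model excluding the parameter
##se <- c(0.4, 0.5, 0)
##avg <- sum(wt * beta)      #shrinkage model-averaged estimate
##uncond <- sqrt(sum(wt * (se^2 + (beta - avg)^2)))  #revised unconditional SE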
##betareg
modavgShrink.AICbetareg <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check whether cand.set is a named list when modnames is not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all white space from parm (leading, trailing, and internal)
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract labels
##determine if parameter is on mean or phi
if(regexpr(pattern = "\\(phi\\)_", parm) == "-1") {
parm.phi <- NULL
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients$mean))
} else {
##replace parm
parm.phi <- gsub(pattern = "\\(phi\\)_", "", parm)
if(regexpr(pattern = ":", parm) != "-1") {
warning("\nthis function does not yet support interaction terms on phi:\n",
"use 'modavgCustom' instead\n")
}
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients$precision))
}
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) stop("\nTo compute a shrinkage version of model-averaged estimate, each term must appear with the same frequency across models\n")
##check whether parm is involved in interaction
##if parameters on mean
if(is.null(parm.phi)) {
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
}
##if parameters on phi
if(!is.null(parm.phi)) {
parm.inter <- c(paste(parm.phi, ":", sep = ""), paste(":", parm.phi, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
}
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##sclm
modavgShrink.AICsclm.clm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, ...){
##check whether cand.set is a named list when modnames is not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
##remove all white space from parm (leading, trailing, and internal)
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$beta))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set=cand.set, modnames=modnames,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##clm
modavgShrink.AICclm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE, nobs = NULL,
uncond.se = "revised", conf.level = 0.95, ...){
##check whether cand.set is a named list when modnames is not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
##remove all white space from parm (leading, trailing, and internal)
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$beta))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set=cand.set, modnames=modnames,
second.ord=second.ord, nobs=nobs, sort=FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
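The two unconditional SE computations referenced in the comments above (equation 4.9 "old" and equation 6.12 "revised" of Burnham and Anderson 2002) can be illustrated with a minimal numeric sketch. The numbers below are toy values, not package code; the key point is that the estimate and SE are set to 0 in models that exclude the parameter, which is what makes this the shrinkage version of model-averaging.

```r
## toy Akaike weights (sum to 1) and per-model beta estimates/SEs;
## the third model excludes the parameter, so its beta and SE are 0
wt   <- c(0.5, 0.3, 0.2)
beta <- c(1.2, 0.9, 0)
se   <- c(0.4, 0.5, 0)
## shrinkage model-averaged estimate: weighted sum over ALL models
modavg.beta <- sum(wt * beta)
## "old" unconditional SE, equation 4.9 of Burnham and Anderson (2002)
se.old <- sum(wt * sqrt(se^2 + (beta - modavg.beta)^2))
## "revised" unconditional SE, equation 6.12 of Burnham and Anderson (2002)
se.rev <- sqrt(sum(wt * (se^2 + (beta - modavg.beta)^2)))
```

Both formulas inflate the within-model SEs by the between-model spread of the estimates around the averaged value; the revised form averages variances before taking the square root.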
##clmm
modavgShrink.AICclmm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$beta))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == -1 & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##coxme
modavgShrink.AICcoxme <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) names(fixef(i)))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
terms.freq <- table(pooled.terms)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == -1 & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) extractSE(i)[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##coxph and clogit
modavgShrink.AICcoxph <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
terms.freq <- table(pooled.terms)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == -1 & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
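The interaction check that recurs in each method above can be seen in isolation with a minimal sketch (term names here are hypothetical). With `fixed = TRUE`, `regexpr` returns a `match.length` attribute of -1 for elements where the literal pattern is absent, so a parameter is flagged whenever `parm` followed or preceded by `:` appears in any pooled term:

```r
## hypothetical pooled terms from a candidate set containing an interaction
pooled.terms <- c("(Intercept)", "x", "z", "x:z")
parm <- "x"
## patterns "x:" and ":x" detect parm inside an interaction term
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE),
"match.length") == -1 &
attr(regexpr(parm.inter[2], pooled.terms, fixed = TRUE),
"match.length") == -1, 0, 1)
sum(inter.check) > 0  # TRUE: "x" occurs in the interaction "x:z"
```

This is why the methods stop in that case: averaging a main effect with shrinkage while it also appears in interactions would mix estimates with different interpretations.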
##glm
modavgShrink.AICglm.lm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, gamdisp = NULL, ...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN = function(i) i$family$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
##check family of glm to avoid problems when requesting predictions with argument 'dispersion'
fam.type <- unlist(lapply(cand.set, FUN = function(i) family(i)$family))
fam.unique <- unique(fam.type)
if(identical(fam.unique, "gaussian")) {disp <- NULL} else {disp <- 1}
##poisson and binomial default to 1 (no separate parameter for the variance)
##for negative binomial - reset to NULL
if(any(regexpr("Negative Binomial", fam.type) != -1)) {
disp <- NULL
##check for mixture of negative binomial and other
##number of models with negative binomial
negbin.num <- sum(regexpr("Negative Binomial", fam.type) != -1)
if(negbin.num < length(fam.type)) {
stop("Function does not support mixture of negative binomial with other distributions in model set")
}
}
##gamma is treated separately
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == -1 & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE,
c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i, dispersion = disp)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated, adjust the SEs by multiplying by sqrt(c-hat)
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
gam1 <- unlist(lapply(cand.set, FUN=function(i) family(i)$family[1]=="Gamma")) #check for gamma regression models
##correct SE's for estimates of gamma regressions
if(any(gam1)) {
##check for specification of gamdisp argument
if(is.null(gamdisp)) stop("\nYou must specify a gamma dispersion parameter with gamma generalized linear models\n")
new_table$SE <- unlist(lapply(cand.set,
FUN = function(i) sqrt(diag(vcov(i, dispersion = gamdisp)))[paste(parm)]))
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta, "Uncond.SE" = Uncond_SE,
"Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
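A hedged usage sketch of the exported `modavgShrink` generic on glm candidate models may help situate the method above. The data are simulated and the model names arbitrary; this assumes the AICcmodavg package these methods belong to is installed and loaded.

```r
## illustrative only: simulated data, hypothetical model names
library(AICcmodavg)
set.seed(1)
dat <- data.frame(y = rbinom(50, 1, 0.5), x = rnorm(50), z = rnorm(50))
Cand.mod <- list("null"    = glm(y ~ 1, data = dat, family = binomial),
"x only"  = glm(y ~ x, data = dat, family = binomial),
"x and z" = glm(y ~ x + z, data = dat, family = binomial))
## shrinkage model-averaged estimate of the coefficient on x;
## beta is counted as 0 in the intercept-only model
modavgShrink(cand.set = Cand.mod, parm = "x")
```

Because the list is named, `modnames` can be omitted; the names are picked up automatically, as the fallback at the top of each method shows.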
##glmmTMB
modavgShrink.AICglmmTMB <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, ...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##determine families of model
fam.list <- unlist(lapply(X = cand.set, FUN = function(i) family(i)$family))
check.fam <- unique(fam.list)
if(length(check.fam) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different families of distributions\n")
##determine link functions
link.list <- unlist(lapply(X = cand.set, FUN = function(i) family(i)$link))
check.link <- unique(link.list)
if(length(check.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
###################
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(fixef(i)$cond))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == -1 & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE,
c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) fixef(i)$cond[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)$cond))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated, adjust the SEs by multiplying by sqrt(c-hat)
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##gls
modavgShrink.AICgls <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == -1 & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##hurdle
modavgShrink.AIChurdle <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check whether cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x=check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coefficients(i)))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "count_(Intercept)" & pooled.terms != "zero_(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == -1 & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN=function(i) coefficients(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta, "Uncond.SE" = Uncond_SE,
"Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##lm
modavgShrink.AIClm <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##lme
modavgShrink.AIClme <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(summary(i)$coefficients$fixed))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##lmekin
modavgShrink.AIClmekin <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) labels(fixef(i)))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) extractSE(i)[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##maxlike
modavgShrink.AICmaxlikeFit.list <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, ...){
##check if cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x = check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##check the frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) names(coef(i)))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE,
c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##mer
modavgShrink.AICmer <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##determine families of model
fam.list <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$family))
check.fam <- unique(fam.list)
if(length(check.fam) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different families of distributions\n")
##determine link functions
link.list <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$link))
check.link <- unique(link.list)
if(length(check.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
###################
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(fixef(i)))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) extractSE(i)[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##glmerMod
modavgShrink.AICglmerMod <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##determine families of model
fam.list <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$family))
check.fam <- unique(fam.list)
if(length(check.fam) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different families of distributions\n")
##determine link functions
link.list <- unlist(lapply(X = cand.set, FUN = function(i) fam.link.mer(i)$link))
check.link <- unique(link.list)
if(length(check.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
###################
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(fixef(i)))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) extractSE(i)[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##lmerMod
modavgShrink.AIClmerMod <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
###################
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(fixef(i)))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) extractSE(i)[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##lmerModLmerTest
modavgShrink.AIClmerModLmerTest <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
###################
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(fixef(i)))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) fixef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) extractSE(i)[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##multinom
modavgShrink.AICmultinom.nnet <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, ...){
##check if cand.set is a named list when modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) colnames(summary(i)$coefficients))
nmods <- length(cand.set)
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##determine number of levels - 1
mod.levels <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients)) #extract level of response variable
check.levels <- unlist(unique(mod.levels))
##recompute AIC table and associated measures
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
##create object to store model-averaged estimate and SE's of k - 1 level of response
out.est <- matrix(data = NA, nrow = length(check.levels), ncol = 4)
colnames(out.est) <- c("Mod.avg.est", "Uncond.SE", "Lower.CL", "Upper.CL")
rownames(out.est) <- check.levels
##iterate over levels of response variable
for (g in 1:length(check.levels)) {
##extract coefficients from each model for given level
coefs.levels <- lapply(cand.set, FUN = function(i) coef(i)[check.levels[g], ])
##extract coefficients from each model for all levels
SE.all.levels <- lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i))))
id.coef <- lapply(coefs.levels, FUN = function(i) which(names(i) == paste(parm)))
##temporary matrix to hold estimates and SE's from models and set to 0 otherwise
tmp.coef <- matrix(NA, ncol = 2, nrow = nmods)
for(k in 1:nmods) {
tmp.coef[k, 1] <- ifelse(length(id.coef[[k]]) != 0, coefs.levels[[k]][paste(parm)], 0)
tmp.coef[k, 2] <- ifelse(length(id.coef[[k]]) != 0, SE.all.levels[[k]][paste(check.levels[g], ":",
parm, sep="")], 0)
}
##extract beta estimate for parm
new_table$Beta_est <- tmp.coef[, 1]
##extract SE of estimate for parm
new_table$SE <- tmp.coef[, 2]
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {new_table$SE <- new_table$SE*sqrt(c.hat)}
##compute model-averaged estimates, unconditional SE, and 95% CL
#AICc
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
#unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
#revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
#QAICc
#if c-hat is estimated compute values accordingly and adjust table names
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
#unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
#revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
#AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
#unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
#revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
#QAIC
#if c-hat is estimated compute values accordingly and adjust table names
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
#unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
#revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
out.est[g, 1] <- Modavg_beta
out.est[g, 2] <- Uncond_SE
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
out.est[,3] <- out.est[,1] - zcrit*out.est[,2]
out.est[,4] <- out.est[,1] + zcrit*out.est[,2]
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = out.est[,1],
"Uncond.SE" = out.est[,2], "Conf.level" = conf.level, "Lower.CL"= out.est[,3],
"Upper.CL" = out.est[,4])
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
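For multinom fits, `summary()` returns one row of coefficients per non-reference level of the response, while the names of `sqrt(diag(vcov(i)))` are keyed by level and term; the loop above rebuilds those keys as `"level:parm"` to pull the right SE for each level. A tiny sketch of that name construction (`demoMultinomSEName` is a hypothetical helper; the level and term names are placeholders):

```r
##sketch of the "level:parm" key used to index SEs in the multinom method
demoMultinomSEName <- function(level, parm) {
  paste(level, ":", parm, sep = "")
}
```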
##glm.nb
modavgShrink.AICnegbin.glm.lm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$family$link))
unique.link <- unique(x=check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model averaged beta estimate\n",
"from models using different link functions\n")
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##check for frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table,
"Mod.avg.beta" = Modavg_beta, "Uncond.SE" = Uncond_SE,
"Conf.level" = conf.level, "Lower.CL"= Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##polr
modavgShrink.AICpolr <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) rownames(summary(i)$coefficients))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[attr(regexpr(pattern = "\\|", text = pooled.terms), "match.length") == -1 ]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
##add logical test to distinguish between intercepts and other coefs
if(attr(regexpr(pattern = "\\|", text = parm), "match.length") == -1) {
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)]))
} else {new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) (i)$zeta[paste(parm)])) }
##extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL based on AICc
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est - Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est - Modavg_beta)^2)))
}
}
##AIC
##compute model-averaged estimates, unconditional SE, and 95% CL based on AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est - Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est - Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL"= Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
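The interaction screen repeated in each method flags a parameter that appears as `"parm:"` or `":parm"` in any model term, because shrinkage averaging of a main effect is inappropriate when that term is involved in an interaction. A self-contained sketch of the same `regexpr`-based check (`demoInterCheck` is a hypothetical helper; the example terms are made up):

```r
##sketch of the interaction screen: TRUE if parm occurs inside an interaction term
demoInterCheck <- function(parm, terms) {
  parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
  ##regexpr is vectorized over terms; "match.length" is -1 when there is no match
  hit1 <- attr(regexpr(parm.inter[1], terms, fixed = TRUE), "match.length") != -1
  hit2 <- attr(regexpr(parm.inter[2], terms, fixed = TRUE), "match.length") != -1
  any(hit1 | hit2)
}
```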
##rlm
modavgShrink.AICrlm.lm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##check for frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) rownames(summary(i)$coefficients))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##survreg
modavgShrink.AICsurvreg <- function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that distribution is the same for all models
check.dist <- sapply(X = cand.set, FUN = function(i) i$dist)
unique.dist <- unique(x = check.dist)
if(length(unique.dist) > 1) stop("\nFunction does not support model-averaging estimates from different distributions\n")
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##check for frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN=function(i) names(summary(i)$coefficients))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], pooled.terms, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
pooled.terms, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
} else {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
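Each method closes by building a Wald-type interval: `zcrit` is the upper (1 - conf.level)/2 quantile of the standard normal (about 1.96 for conf.level = 0.95), and the limits are the model-averaged estimate minus/plus `zcrit` times the unconditional SE. A sketch of that step in isolation (`demoWaldCL` is a hypothetical helper name):

```r
##sketch of the Wald-type confidence limits computed at the end of each method
demoWaldCL <- function(est, se, conf.level = 0.95) {
  zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
  c(lower = est - zcrit*se, upper = est + zcrit*se)
}
```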
##vglm
modavgShrink.AICvglm <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, ...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i@family@blurb[3]))
unique.link <- unique(x=check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model averaged beta estimate\n",
"from models using different link functions\n")
##check family of vglm to avoid problems when requesting predictions with argument 'dispersion'
fam.type <- unlist(lapply(cand.set, FUN=function(i) i@family@vfamily))
fam.unique <- unique(fam.type)
if(identical(fam.unique, "gaussianff")) {disp <- NULL} else {disp <- 1}
if(identical(fam.unique, "gammaff")) stop("\nGamma distribution is not supported yet\n")
##poisson and binomial defaults to 1 (no separate parameter for variance)
##for negative binomial - reset to NULL
if(identical(fam.unique, "negbinomial")) {disp <- NULL}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##check for frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coefficients(i)))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "(Intercept)" & pooled.terms != "(Intercept):1" & pooled.terms != "(Intercept):2")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction or if label changes for some models - e.g., ZIP models
##if : not already included
if(regexpr(":", parm, fixed = TRUE) == -1){
##if : not included
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) warning("\nLabel of parameter of interest seems to change across models:\n",
"check model syntax for possible problems\n")
} else {
##if : already included
##remove : from parm
simple.parm <- unlist(strsplit(parm, split = ":"))[1]
##search for simple.parm and parm in model formulae
no.colon <- sum(ifelse(attr(regexpr(simple.parm, mod_formula, fixed = TRUE), "match.length") != "-1", 1, 0))
with.colon <- sum(ifelse(attr(regexpr(parm, mod_formula, fixed = TRUE), "match.length") != "-1", 0, 1))
##check if both are > 0
if(no.colon > 0 && with.colon > 0) warning("\nLabel of parameter of interest seems to change across models:\n",
"check model syntax for possible problems\n")
}
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN=function(i) coefficients(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN=function(i) sqrt(diag(vcov(i, dispersion = disp)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##gam1 <- unlist(lapply(cand.set, FUN=function(i) family(i)$family[1]=="Gamma")) #check for gamma regression models
##correct SE's for estimates of gamma regressions
##if(any(gam1) == TRUE) {
##check for specification of gamdisp argument
## if(is.null(gamdisp)) stop("\nYou must specify a gamma dispersion parameter with gamma generalized linear models\n")
## new_table$SE <- unlist(lapply(cand.set,
## FUN = function(i) sqrt(diag(vcov(i, dispersion = gamdisp)))[paste(parm)]))
##}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta, "Uncond.SE" = Uncond_SE,
"Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##zeroinfl
modavgShrink.AICzeroinfl <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
...){
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check that link function is the same for all models
check.link <- unlist(lapply(X = cand.set, FUN=function(i) i$link))
unique.link <- unique(x=check.link)
if(length(unique.link) > 1) stop("\nIt is not appropriate to compute a model-averaged beta estimate\n",
"from models using different link functions\n")
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##check for frequency of each term
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coefficients(i)))
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != "count_(Intercept)" & pooled.terms != "zero_(Intercept)")]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
##compute table
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs, sort = FALSE) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN=function(i) coefficients(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN=function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1-conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta, "Uncond.SE" = Uncond_SE,
"Conf.level" = conf.level, "Lower.CL"= Lower_CL, "Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
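## Numerical sketch of the shrinkage computations above (illustrative
## values only, not from a fitted model set; kept commented out so it is
## not evaluated when this file is sourced).  With shrinkage, models that
## exclude 'parm' still enter the average with beta = 0 and SE = 0, which
## pulls the model-averaged estimate toward zero:
## w  <- c(0.6, 0.3, 0.1)             # AICc weights
## b  <- c(0.50, 0.40, 0)             # third model excludes parm
## se <- c(0.20, 0.25, 0)
## modavg <- sum(w * b)               # 0.42
## ## revised unconditional SE (eq. 6.12 of Burnham and Anderson 2002):
## sqrt(sum(w * (se^2 + (b - modavg)^2)))   # approximately 0.254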
####added functionality for reversing parameters
##added additional argument parm.type = "psi", "gamma", "epsilon", "lambda", "omega", "detect"
##model type: parameters labeled in unmarked - parameters labeled in AICcmodavg.unmarked
##single season: state, det - USE psi, detect
##multiseason model: psi, col, ext, det - USE psi, gamma, epsilon, detect
##RN heterogeneity model: state, det - USE lambda, detect
##N-mixture: state, det - USE lambda, detect
##Open N-mixture: lambda, gamma, omega, det - USE lambda, gamma, omega, iota, detect
##distsamp: state, det - USE lambda, detect
##gdistsamp: state, det, phi - USE lambda, detect, phi
##false-positive occupancy: state, det, fp - USE psi, detect, fp
##gpcount: lambda, phi, det - USE lambda, phi, detect
##gmultmix: lambda, phi, det - USE lambda, phi, detect
##multinomPois: state, det - USE lambda, detect
##occuMulti: state, det - USE psi, detect
##occuMS: state, det - USE psi, detect
##occuTTD: psi, det, col, ext - USE psi, detect, gamma, epsilon
##pcount.spHDS: state, det - USE lambda, detect
##multmixOpen: lambda, gamma, omega, iota, det - USE lambda, gamma, omega, iota, detect
##distsampOpen: lambda, gamma, omega, iota, det - USE lambda, gamma, omega, iota, detect
##occu
modavgShrink.AICunmarkedFitOccu <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##single-season occupancy model
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
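## Illustrative usage sketch for the occu method above (hypothetical data
## objects 'det.hist' and 'site.covs'; requires the 'unmarked' package).
## Kept commented out so it is not evaluated when this file is sourced:
## library(unmarked)
## umf <- unmarkedFrameOccu(y = det.hist, siteCovs = site.covs)
## fm.elev <- occu(~ 1 ~ elev, data = umf)
## fm.null <- occu(~ 1 ~ 1, data = umf)
## modavgShrink(cand.set = list(Elev = fm.elev, Null = fm.null),
##              parm = "elev", parm.type = "psi")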
##colext
modavgShrink.AICunmarkedFitColExt <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##multiseason occupancy model
##psi - initial occupancy
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$psi)))
##create label for parm
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "psi"
}
##gamma - colonization
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$col)))
##create label for parm
parm.unmarked <- "col"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "col"
}
##epsilon - extinction
if(identical(parm.type, "epsilon")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$ext)))
##create label for parm
parm.unmarked <- "ext"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "ext"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
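## Illustrative usage sketch for the colext method above (hypothetical
## data objects; requires the 'unmarked' package).  Note that
## parm.type = "gamma" targets the colonization submodel, which unmarked
## labels 'col'.  Kept commented out so it is not evaluated on source:
## library(unmarked)
## umf <- unmarkedMultFrame(y = det.hist, siteCovs = site.covs, numPrimary = 3)
## fm.for  <- colext(psiformula = ~ 1, gammaformula = ~ forest,
##                   epsilonformula = ~ 1, pformula = ~ 1, data = umf)
## fm.null <- colext(psiformula = ~ 1, gammaformula = ~ 1,
##                   epsilonformula = ~ 1, pformula = ~ 1, data = umf)
## modavgShrink(cand.set = list(Forest = fm.for, Null = fm.null),
##              parm = "forest", parm.type = "gamma")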
##occuRN
modavgShrink.AICunmarkedFitOccuRN <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##Royle-Nichols heterogeneity model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
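## Illustrative usage sketch for the occuRN method above (hypothetical
## data objects; requires the 'unmarked' package).  Coefficients of the
## Royle-Nichols model are labelled 'lam(...)' and 'p(...)' in unmarked,
## hence parm.type = "lambda" or "detect".  Kept commented out:
## library(unmarked)
## umf <- unmarkedFrameOccu(y = det.hist, siteCovs = site.covs)
## fm1 <- occuRN(~ 1 ~ elev, data = umf)
## fm2 <- occuRN(~ 1 ~ 1, data = umf)
## modavgShrink(cand.set = list(Elev = fm1, Null = fm2),
##              parm = "elev", parm.type = "lambda")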
##pcount
modavgShrink.AICunmarkedFitPCount <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##single season N-mixture model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
##create label for parm
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter" = paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
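## Illustrative usage sketch for the pcount method above (hypothetical
## data objects; requires the 'unmarked' package).  Supplying c.hat > 1
## inflates the SE's by sqrt(c.hat) and switches the weights to QAICc
## (or QAIC when second.ord = FALSE).  Kept commented out:
## library(unmarked)
## umf <- unmarkedFramePCount(y = count.matrix, siteCovs = site.covs)
## fm1 <- pcount(~ 1 ~ elev, data = umf, K = 50)
## fm2 <- pcount(~ 1 ~ 1, data = umf, K = 50)
## modavgShrink(cand.set = list(Elev = fm1, Null = fm2),
##              parm = "elev", parm.type = "lambda", c.hat = 1.8)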
##pcountOpen
modavgShrink.AICunmarkedFitPCO <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##open version of N-mixture model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "lambda"
}
##gamma - recruitment
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$gamma)))
##determine if same H0 on gamma (gamConst, gamAR, gamTrend)
strip.gam <- sapply(mod_formula, FUN = function(i) unlist(strsplit(i, "\\("))[[1]])
unique.gam <- unique(strip.gam)
if(length(unique.gam) > 1) stop("\nDifferent formulations of gamma parameter occur among models:\n",
                                "beta estimates cannot be model-averaged\n")
##create label for parm
parm.unmarked <- unique.gam
parm <- paste(unique.gam, "(", parm, ")", sep="")
parm.type1 <- "gamma"
}
##omega - apparent survival
if(identical(parm.type, "omega")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$omega)))
##create label for parm
parm.unmarked <- "omega"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
##create label for parm
parm.unmarked <- "iota"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.unmarked)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.unmarked, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged estimate across all models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$iota)))
parm.type1 <- "iota"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with the same frequency across models; proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == -1 &
                      attr(regexpr(parm.inter[2], mod_formula, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
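##----------------------------------------------------------------------
##Illustrative sketch (editor's addition, not part of the original
##package code): the shrinkage model-averaging computed above can be
##reproduced by hand. The weights, estimates, and SE's below are
##made-up numbers for three hypothetical candidate models; the
##parameter of interest is absent from the third model, so its estimate
##and SE are set to 0 - this is what produces the shrinkage toward 0.
##Wrapped in if(FALSE) so it is never run when this file is sourced.
if(FALSE) {
  wt <- c(0.5, 0.3, 0.2)  #Akaike weights (sum to 1)
  beta <- c(1.2, 0.8, 0)  #beta set to 0 where the parameter is absent
  se <- c(0.4, 0.5, 0)    #SE set to 0 where the parameter is absent
  ##model-averaged estimate with shrinkage
  modavg.beta <- sum(wt*beta)
  ##unconditional SE, revised formula (eq. 6.12 of Burnham and Anderson 2002)
  uncond.se <- sqrt(sum(wt*(se^2 + (beta - modavg.beta)^2)))
  ##95% confidence limits, as computed at the end of the functions above
  zcrit <- qnorm(p = 0.025, lower.tail = FALSE)
  c(modavg.beta - zcrit*uncond.se, modavg.beta + zcrit*uncond.se)
}
##----------------------------------------------------------------------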
##distsamp
modavgShrink.AICunmarkedFitDS <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##Distance sampling model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "state"
}
##detect, e.g., parm = "sigmaarea"
if(identical(parm.type, "detect")) {
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(length(keyid) > 1) stop("\nDifferent key functions used across models:\n",
"cannot compute model-averaged estimate\n")
if(identical(keyid, "uniform")) stop("\nDetection parameter not found in models\n")
##set key prefix used in coef( )
if(identical(keyid, "halfnorm")) {
parm.key <- "sigma"
}
if(identical(keyid, "hazard")) {
parm.key <- "shape"
}
if(identical(keyid, "exp")) {
parm.key <- "rate"
}
##label for intercept - label different with this model type
if(identical(parm, "Int")) {parm <- "(Intercept)"}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm.key, "(", parm, "))", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) i@estimates@estimates[[parm.type1]]@invlink)
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with the same frequency across models; proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == -1 &
                      attr(regexpr(parm.inter[2], mod_formula, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
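##----------------------------------------------------------------------
##Hedged usage sketch (editor's addition, not part of the original
##package code): one way the distsamp method above might be called.
##'fm.hab' and 'fm.null' are hypothetical unmarkedFitDS objects
##returned by unmarked::distsamp( ). Wrapped in if(FALSE) so it is
##never run when this file is sourced.
if(FALSE) {
  Cands <- list(habitat = fm.hab, null = fm.null)
  ##model-averaged estimate of a hypothetical 'habitat' effect on abundance
  modavgShrink(cand.set = Cands, parm = "habitat", parm.type = "lambda")
  ##for parm.type = "detect", 'parm' refers to a covariate on the key
  ##function parameter (e.g., sigma for the half-normal key function)
  modavgShrink(cand.set = Cands, parm = "(Intercept)", parm.type = "detect")
}
##----------------------------------------------------------------------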
##gdistsamp
modavgShrink.AICunmarkedFitGDS <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##Distance sampling model with availability
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lambda"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "lambda"
}
##detect, e.g., parm = "sigmaarea"
if(identical(parm.type, "detect")) {
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(length(keyid) > 1) stop("\nDifferent key functions used across models:\n",
"cannot compute model-averaged estimate\n")
if(identical(keyid, "uniform")) stop("\nDetection parameter not found in models\n")
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##availability
if(identical(parm.type, "phi")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$phi)))
parm.unmarked <- "phi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "phi"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) i@estimates@estimates[[parm.type1]]@invlink)
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with the same frequency across models; proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == -1 &
                      attr(regexpr(parm.inter[2], mod_formula, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##occuFP
modavgShrink.AICunmarkedFitOccuFP <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##single-season false-positive occupancy model
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##false positives - fp
if(identical(parm.type, "falsepos") || identical(parm.type, "fp")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$fp)))
parm.unmarked <- "fp"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "fp"
}
##certainty of detections - b
if(identical(parm.type, "certain")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$b)))
parm.unmarked <- "b"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.unmarked)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.unmarked, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged estimate across all models\n")
}
parm.type1 <- "b"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) i@estimates@estimates[[parm.type1]]@invlink)
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with the same frequency across models; proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == -1 &
                      attr(regexpr(parm.inter[2], mod_formula, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
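##----------------------------------------------------------------------
##Hedged usage sketch (editor's addition, not part of the original
##package code): the occuFP method above accepts four parm.type values.
##'fp1' and 'fp2' are hypothetical unmarkedFitOccuFP objects returned
##by unmarked::occuFP( ). Wrapped in if(FALSE) so it is never run when
##this file is sourced.
if(FALSE) {
  Cands <- list(mod1 = fp1, mod2 = fp2)
  modavgShrink(cand.set = Cands, parm = "(Intercept)", parm.type = "psi")    #occupancy
  modavgShrink(cand.set = Cands, parm = "(Intercept)", parm.type = "detect") #detection
  modavgShrink(cand.set = Cands, parm = "(Intercept)", parm.type = "fp")     #false positives ("falsepos" also accepted)
  ##parm.type = "certain" additionally requires the 'b' parameter in all models
}
##----------------------------------------------------------------------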
##multinomPois
modavgShrink.AICunmarkedFitMPois <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading, trailing, and internal white space from parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##multinomPois model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "lambda"
##create label for parm
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) i@estimates@estimates[[parm.type1]]@invlink)
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with the same frequency across models; proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == -1 &
                      attr(regexpr(parm.inter[2], mod_formula, fixed = TRUE), "match.length") == -1, 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##gmultmix
modavgShrink.AICunmarkedFitGMM <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##gmultmix model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lambda"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "lambda"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##availability
if(identical(parm.type, "phi")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$phi)))
parm.unmarked <- "phi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "phi"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##gpcount
modavgShrink.AICunmarkedFitGPC <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##gpcount model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lambda"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "lambda"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##availability
if(identical(parm.type, "phi")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$phi)))
parm.unmarked <- "phi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "phi"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##occuMulti
modavgShrink.AICunmarkedFitOccuMulti <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
  ##white-space stripping within parm is disabled for this model type
  ##parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##single-season occupancy model
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "state"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##multmixOpen
modavgShrink.AICunmarkedFitMMO <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##open version of N-mixture model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "lambda"
}
##gamma - recruitment
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$gamma)))
##determine if same H0 on gamma (gamConst, gamAR, gamTrend)
strip.gam <- sapply(mod_formula, FUN = function(i) unlist(strsplit(i, "\\("))[[1]])
unique.gam <- unique(strip.gam)
    if(length(unique.gam) > 1) stop("\nDifferent formulations of gamma parameter occur among models:\n",
                                    "beta estimates cannot be model-averaged\n")
##create label for parm
parm.unmarked <- unique.gam
parm <- paste(unique.gam, "(", parm, ")", sep="")
parm.type1 <- "gamma"
}
##omega - apparent survival
if(identical(parm.type, "omega")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$omega)))
##create label for parm
parm.unmarked <- "omega"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
##create label for parm
parm.unmarked <- "iota"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.unmarked)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.unmarked, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged estimate across all models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$iota)))
parm.type1 <- "iota"
}
##detect
if(identical(parm.type, "detect")) {
    mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
##distsampOpen
modavgShrink.AICunmarkedFitDSO <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##Distance sampling model
##lambda - abundance
if(identical(parm.type, "lambda")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$lambda)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "lambda"
}
##gamma - recruitment
if(identical(parm.type, "gamma")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$gamma)))
##determine if same H0 on gamma (gamConst, gamAR, gamTrend)
strip.gam <- sapply(mod_formula, FUN = function(i) unlist(strsplit(i, "\\("))[[1]])
unique.gam <- unique(strip.gam)
    if(length(unique.gam) > 1) stop("\nDifferent formulations of gamma parameter occur among models:\n",
                                    "beta estimates cannot be model-averaged\n")
##create label for parm
parm.unmarked <- unique.gam
parm <- paste(unique.gam, "(", parm, ")", sep="")
parm.type1 <- "gamma"
}
##omega - apparent survival
if(identical(parm.type, "omega")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$omega)))
##create label for parm
parm.unmarked <- "omega"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "omega"
}
##iota (for immigration = TRUE with dynamics = "autoreg", "trend", "ricker", or "gompertz")
if(identical(parm.type, "iota")) {
##create label for parm
parm.unmarked <- "iota"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
##check that parameter appears in all models
parfreq <- sum(sapply(cand.set, FUN = function(i) any(names(i@estimates@estimates) == parm.unmarked)))
if(!identical(length(cand.set), parfreq)) {
stop("\nParameter \'", parm.unmarked, "\' (parm.type = \"", parm.type, "\") does not appear in all models:",
"\ncannot compute model-averaged estimate across all models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$iota)))
parm.type1 <- "iota"
}
##detect, e.g., parm = "sigmaarea"
if(identical(parm.type, "detect")) {
##check for key function used
keyid <- unique(sapply(cand.set, FUN = function(i) i@keyfun))
if(length(keyid) > 1) stop("\nDifferent key functions used across models:\n",
"cannot compute model-averaged estimate\n")
if(identical(keyid, "uniform")) stop("\nDetection parameter not found in models\n")
##set key prefix used in coef( )
if(identical(keyid, "halfnorm")) {
parm.key <- "sigma"
}
if(identical(keyid, "hazard")) {
parm.key <- "shape"
}
if(identical(keyid, "exp")) {
parm.key <- "rate"
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "sigma"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
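## A minimal numeric sketch of the shrinkage estimator computed above (hypothetical
## weights and estimates, not from any fitted model): with Akaike weights
## w <- c(0.6, 0.3, 0.1) and beta estimates b <- c(0.50, 0.20, 0), where the third
## model excludes the parameter and its estimate is therefore shrunk to 0, the
## model-averaged estimate is sum(w * b) = 0.6*0.50 + 0.3*0.20 + 0.1*0 = 0.36.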
##occuMS
modavgShrink.AICunmarkedFitOccuMS <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##white-space removal within parm is not applied for this model type
#parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##single-season occupancy model
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "state"
}
##transition
if(identical(parm.type, "phi")) {
##check that parameter appears in all models
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'phi\' does not appear in single-season models\n")
}
mod_formula <- lapply(cand.set, FUN = function(x) labels(coef(x@estimates@estimates$transition)))
parm.unmarked <- "phi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "transition"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "p"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
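## Hedged illustration of the two unconditional SE estimators used above
## (hypothetical numbers, not from any fitted model): with w <- c(0.6, 0.4),
## se <- c(0.10, 0.20), b <- c(0.50, 0.30), and Modavg_beta = sum(w * b) = 0.42,
## "old" (eq. 4.9):      sum(w * sqrt(se^2 + (b - 0.42)^2))   #weighted mean of SDs
## "revised" (eq. 6.12): sqrt(sum(w * (se^2 + (b - 0.42)^2))) #sqrt of weighted variance
## The revised form is the default here (uncond.se = "revised").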
##occuTTD
modavgShrink.AICunmarkedFitOccuTTD <-
function(cand.set, parm, modnames = NULL, second.ord = TRUE,
nobs = NULL, uncond.se = "revised", conf.level = 0.95,
c.hat = 1, parm.type = NULL, ...){
##note that parameter is referenced differently from unmarked object - see labels( )
##check if named list if modnames are not supplied
if(is.null(modnames)) {
if(is.null(names(cand.set))) {
modnames <- paste("Mod", 1:length(cand.set), sep = "")
warning("\nModel names have been supplied automatically in the table\n")
} else {
modnames <- names(cand.set)
}
}
##check for parm.type and stop if NULL
if(is.null(parm.type)) {stop("\n'parm.type' must be specified for this model type, see ?modavgShrink for details\n")}
##remove all leading and trailing white space and within parm
parm <- gsub('[[:space:]]+', "", parm)
##if (Intercept) is chosen assign (Int) - for compatibility
if(identical(parm, "(Intercept)")) parm <- "Int"
##single-season or dynamic occupancy model
##psi
if(identical(parm.type, "psi")) {
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$state)))
parm.unmarked <- "psi"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "psi"
}
##gamma - colonization
if(identical(parm.type, "gamma")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'gamma\' does not appear in single-season models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$col)))
parm.unmarked <- "col"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "col"
}
##epsilon - extinction
if(identical(parm.type, "epsilon")) {
nseasons <- unique(sapply(cand.set, FUN = function(i) i@data@numPrimary))
if(nseasons == 1) {
stop("\nParameter \'epsilon\' does not appear in single-season models\n")
}
##extract model formula for each model in cand.set
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$ext)))
##create label for parm
parm.unmarked <- "ext"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "ext"
}
##detect
if(identical(parm.type, "detect")) {
mod_formula <- lapply(cand.set, FUN = function(i) labels(coef(i@estimates@estimates$det)))
parm.unmarked <- "lam"
parm <- paste(parm.unmarked, "(", parm, ")", sep="")
parm.type1 <- "det"
}
##################
##extract link function
check.link <- sapply(X = cand.set, FUN = function(i) eval(parse(text = paste("i@estimates@estimates$",
parm.type1, "@invlink",
sep = ""))))
unique.link <- unique(check.link)
select.link <- unique.link[1]
if(length(unique.link) > 1) {stop("\nIt is not appropriate to compute a model averaged linear predictor\n",
"with different link functions\n")}
##################
##NEED TO PASTE THE PARAMETER TYPE - INCLUDE THIS STEP ABOVE FOR EACH PARM.TYPE
##determine frequency of each term across models (except (Intercept) )
pooled.terms <- unlist(mod_formula)
##remove intercept from vector
no.int <- pooled.terms[which(pooled.terms != paste(parm.unmarked, "(Int)", sep = ""))]
terms.freq <- table(no.int)
if(length(unique(terms.freq)) > 1) warning("\nVariables do not appear with same frequency across models, proceed with caution\n")
##check whether parm is involved in interaction
parm.inter <- c(paste(parm, ":", sep = ""), paste(":", parm, sep = ""))
inter.check <- ifelse(attr(regexpr(parm.inter[1], mod_formula, fixed = TRUE), "match.length") == "-1" & attr(regexpr(parm.inter[2],
mod_formula, fixed = TRUE), "match.length") == "-1", 0, 1)
if(sum(inter.check) > 0) stop("\nParameter of interest should not be involved in interaction for shrinkage version of model-averaging to be appropriate\n")
nmods <- length(cand.set)
new_table <- aictab(cand.set = cand.set, modnames = modnames,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat) #recompute AIC table and associated measures
new_table$Beta_est <- unlist(lapply(cand.set, FUN = function(i) coef(i)[paste(parm)])) #extract beta estimate for parm
new_table$SE <- unlist(lapply(cand.set, FUN = function(i) sqrt(diag(vcov(i)))[paste(parm)]))
##replace NA's with 0
new_table$Beta_est[is.na(new_table$Beta_est)] <- 0
new_table$SE[is.na(new_table$SE)] <- 0
##add a check to determine if parameter occurs in any model
if (isTRUE(all.equal(unique(new_table$Beta_est), 0))) {stop("\nParameter not found in any of the candidate models\n") }
##if c-hat is estimated adjust the SE's by multiplying with sqrt of c-hat
if(c.hat > 1) {
new_table$SE <- new_table$SE*sqrt(c.hat)
}
##AICc
##compute model-averaged estimates, unconditional SE, and 95% CL
if(c.hat == 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$AICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAICc
if(c.hat > 1 && second.ord == TRUE) {
Modavg_beta <- sum(new_table$QAICcWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICcWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICcWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##AIC
if(c.hat == 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$AICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$AICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$AICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
##QAIC
if(c.hat > 1 && second.ord == FALSE) {
Modavg_beta <- sum(new_table$QAICWt*new_table$Beta_est)
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Uncond_SE <- sum(new_table$QAICWt*sqrt(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Uncond_SE <- sqrt(sum(new_table$QAICWt*(new_table$SE^2 + (new_table$Beta_est- Modavg_beta)^2)))
}
}
zcrit <- qnorm(p = (1 - conf.level)/2, lower.tail = FALSE)
Lower_CL <- Modavg_beta - zcrit*Uncond_SE
Upper_CL <- Modavg_beta + zcrit*Uncond_SE
out.modavg <- list("Parameter"=paste(parm), "Mod.avg.table" = new_table, "Mod.avg.beta" = Modavg_beta,
"Uncond.SE" = Uncond_SE, "Conf.level" = conf.level, "Lower.CL" = Lower_CL,
"Upper.CL" = Upper_CL)
class(out.modavg) <- c("modavgShrink", "list")
return(out.modavg)
}
print.modavgShrink <-
function(x, digits = 2, ...) {
ic <- colnames(x$Mod.avg.table)[3]
cat("\nMultimodel inference on \"", x$Parameter, "\" based on ", ic, "\n", sep = "")
cat("\n", ic, " table used to obtain model-averaged estimate with shrinkage:\n", sep = "")
oldtab <- x$Mod.avg.table
if (any(names(oldtab) == "c_hat")) {cat("\t(c-hat estimate = ", oldtab$c_hat[1], ")\n", sep = "")}
cat("\n")
if (any(names(oldtab)=="c_hat")) {
nice.tab <- cbind(oldtab[, 2], oldtab[, 3], oldtab[, 4], oldtab[, 6],
oldtab[, 9], oldtab[, 10])
} else {nice.tab <- cbind(oldtab[, 2], oldtab[, 3], oldtab[, 4], oldtab[, 6],
oldtab[, 8], oldtab[, 9])
}
##modify printing style if multinomial model is used
if(length(x$Mod.avg.beta) == 1) {
colnames(nice.tab) <- c(colnames(oldtab)[c(2, 3, 4, 6)], "Estimate", "SE")
rownames(nice.tab) <- oldtab[, 1]
print(round(nice.tab, digits = digits))
cat("\nModel-averaged estimate with shrinkage:", eval(round(x$Mod.avg.beta, digits = digits)), "\n")
cat("Unconditional SE:", eval(round(x$Uncond.SE, digits = digits)), "\n")
cat("",x$Conf.level*100, "% Unconditional confidence interval: ", round(x$Lower.CL, digits = digits),
", ", round(x$Upper.CL, digits = digits), "\n\n", sep = "")
} else {
col.ns <- ncol(nice.tab)
nice.tab <- nice.tab[,-c(col.ns - 1, col.ns)]
colnames(nice.tab) <- c(colnames(oldtab)[c(2, 3, 4, 6)])
rownames(nice.tab) <- oldtab[, 1]
print(round(nice.tab, digits = digits))
cat("\n\nModel-averaged estimates with shrinkage for different levels of response variable:", "\n\n")
resp.labels <- labels(x$Mod.avg.beta)
mult.out <- matrix(NA, nrow = length(resp.labels), ncol = 4)
colnames(mult.out) <- c("Model-averaged estimate with shrinkage", "Uncond. SE", paste(x$Conf.level*100,"% lower CL", sep = ""),
paste(x$Conf.level*100, "% upper CL", sep = ""))
rownames(mult.out) <- resp.labels
mult.out[, 1] <- round(x$Mod.avg.beta, digits = digits)
mult.out[, 2] <- round(x$Uncond.SE, digits = digits)
mult.out[, 3] <- round(x$Lower.CL, digits = digits)
mult.out[, 4] <- round(x$Upper.CL, digits = digits)
print(mult.out)
cat("\n")
}
}
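## Hedged usage sketch (object and covariate names are hypothetical, not from
## this file): given a named list of fitted unmarked models, e.g.
##   cand.mods <- list(null = fm0, global = fm1)
## a shrinkage model-averaged estimate for a detection covariate could be
## requested with
##   modavgShrink(cand.set = cand.mods, parm = "wind", parm.type = "detect")
## which dispatches to the class-specific methods above and prints via
## print.modavgShrink( ).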
## (end of AICcmodavg/R/modavgShrink.R; AICcmodavg/R/multComp.R begins below)
##create generic
multComp <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL,
sort = TRUE, newdata = NULL, uncond.se = "revised", conf.level = 0.95,
correction = "none", ...) {
UseMethod("multComp", mod)
}
##default
multComp.default <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL,
sort = TRUE, newdata = NULL, uncond.se = "revised", conf.level = 0.95,
correction = "none", ...) {
stop("\nFunction not yet defined for this object class\n")
}
##unmarked models are not supported yet
##aov
multComp.aov <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL,
sort = TRUE, newdata = NULL, uncond.se = "revised", conf.level = 0.95,
correction = "none", ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
data.set <- eval(mod$call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##identify response variable
resp.id <- as.character(formula(mod))[2]
##check for response variable
if(!any(names(data.set) == resp.id)) stop("\nThis function does not support transformations of the response variable in the model formula\n")
##changed to allow when response variable is transformed
##in the formula to avoid error
resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group means
ord.means <- sort(tapply(X = resp, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
#if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
##replicate first line n.groups time
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that no more rows are specified than group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
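## Why the pattern matrices above have 2^(k - 1) rows: once the k group means are
## ordered, each of the (k - 1) gaps between adjacent means is either a break
## between clusters or not, so k = 2, 3, 4, 5, 6 groups yield 2, 4, 8, 16, 32
## candidate parameterizations, matching the matrices defined above.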
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
                 ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
                 ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
                 ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
                 ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
                        pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
##create vector of names from the group contrasts, e.g., m_1111, m_122
model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs, sort = sort)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs, sort = FALSE)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
##model-averaged estimate
mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
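## Illustrative note (comment added for clarity, not original source):
## the model-averaged estimate above is the Akaike-weight-weighted mean
## of the per-model predictions, theta.bar = sum_i w_i * theta_i, and
## the "revised" unconditional SE follows eq. 6.12 of Burnham and
## Anderson (2002):
##   SE = sqrt( sum_i w_i * (SE_i^2 + (theta_i - theta.bar)^2) )
## A hand-checked toy example with hypothetical numbers:
##   w <- c(0.7, 0.3); fit <- c(10, 12); se <- c(1, 1)
##   avg <- sum(w * fit)                      ## 10.6
##   sqrt(sum(w * (se^2 + (fit - avg)^2)))    ## approx. 1.356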
#################################################
###NEW CODE
#################################################
##compute model-average estimates and confidence intervals
##classic uncorrected for multiple comparisons
##extract residual df (lm, rlm, gls, lme)
res.df <- df.residual(mod)
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
t.crit <- qt(p = (1 - conf.level)/2, df = res.df,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
t.crit <- qt(p = alpha.corr/2, df = res.df,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
t.crit <- qt(p = alpha.corr/2, df = res.df,
lower.tail = FALSE)
}
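## Illustrative note (comment added for clarity, not original source):
## with n.groups = 4 there are choose(4, 2) = 6 pairwise comparisons.
## For conf.level = 0.95 the corrected alpha levels are:
##   Bonferroni: alpha.corr = 0.05 / 6         = 0.008333 (approx.)
##   Sidak:      alpha.corr = 1 - 0.95^(1/6)   = 0.008512 (approx.)
## Each is then halved (two-tailed) before being passed to qt().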
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - t.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + t.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
##lm
multComp.lm <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL,
sort = TRUE, newdata = NULL, uncond.se = "revised", conf.level = 0.95,
correction = "none", ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
data.set <- eval(mod$call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##identify response variable
resp.id <- as.character(formula(mod))[2]
##check for response variable
if(!any(names(data.set) == resp.id)) stop("\nThis function does not support transformations of the response variable in the model formula\n")
##changed to allow when response variable is transformed
##in the formula to avoid error
resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group means
ord.means <- sort(tapply(X = resp, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
#if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
##replicate the first row so preds.data has n.groups rows
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that no more rows are specified than group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
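## Illustrative note (comment added for clarity, not original source):
## with the k group means ordered, each candidate model corresponds to
## keeping or dropping a boundary between each of the k - 1 adjacent
## pairs, giving 2^(k - 1) rows in pat.mat:
##   k = 2 -> 2, k = 3 -> 4, k = 4 -> 8, k = 5 -> 16, k = 6 -> 32
## Each row relabels the ordered groups; e.g., c(1, 1, 2) pools the two
## lowest-mean groups and contrasts them against the third.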
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
##create vector of names from the group contrasts, e.g., m_1111, m_122
model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs, sort = sort)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs, sort = FALSE)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
##model-averaged estimate
mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
#################################################
###NEW CODE
#################################################
##compute model-average estimates and confidence intervals
##classic uncorrected for multiple comparisons
##extract residual df (lm, rlm, gls, lme)
res.df <- df.residual(mod)
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
t.crit <- qt(p = (1 - conf.level)/2, df = res.df,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
t.crit <- qt(p = alpha.corr/2, df = res.df,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
t.crit <- qt(p = alpha.corr/2, df = res.df,
lower.tail = FALSE)
}
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - t.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + t.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
##gls
multComp.gls <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL,
sort = TRUE, newdata = NULL, uncond.se = "revised", conf.level = 0.95,
correction = "none", ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
##unmarked models are not supported yet
##check if S4
data.set <- eval(mod$call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##identify response variable
resp.id <- as.character(formula(mod))[2]
##check for response variable
if(!any(names(data.set) == resp.id)) stop("\nThis function does not support transformations of the response variable in the formula\n")
##changed to allow when response variable is transformed
##in the formula to avoid error
resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group means
ord.means <- sort(tapply(X = resp, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
# if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
##replicate the first row so preds.data has n.groups rows
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that no more rows are specified than group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
##determine if gls
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
##determine if gls
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
##determine if gls
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
##determine if gls
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
##create vector of names from the group contrasts, e.g., m_1111, m_122
model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = sort)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = FALSE)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
##model-averaged estimate
mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
#################################################
###NEW CODE
#################################################
##compute model-average estimates
##classic uncorrected for multiple comparisons
##extract residual df (lm, rlm, gls, lme)
res.df <- mod$dims[["N"]] - mod$dims[["p"]]
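## Illustrative note (comment added for clarity, not original source):
## unlike lm fits, gls objects from nlme do not reliably support
## df.residual(), so the residual degrees of freedom are taken directly
## from the fit's dimension list: mod$dims[["N"]] observations minus
## mod$dims[["p"]] estimated mean-model parameters.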
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
t.crit <- qt(p = (1 - conf.level)/2, df = res.df,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
t.crit <- qt(p = alpha.corr/2, df = res.df,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
t.crit <- qt(p = alpha.corr/2, df = res.df,
lower.tail = FALSE)
}
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - t.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + t.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
##glm
multComp.glm <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL, sort = TRUE,
newdata = NULL, uncond.se = "revised", conf.level = 0.95, correction = "none",
type = "response", c.hat = 1, gamdisp = NULL, ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
data.set <- eval(mod$call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##identify response variable
resp.id <- as.character(formula(mod))[2]
##check for response variable
if(!any(names(data.set) == resp.id)) stop("\nThis function does not support transformations of the response variable in the formula\n")
##note: transformations of the response variable inside the formula are not supported
resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group means
ord.means <- sort(tapply(X = resp, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##check family of glm to avoid problems when requesting predictions with argument 'dispersion'
fam.type <- family(mod)$family
if(identical(fam.type, "gaussian")) {
dispersion <- NULL #set to NULL if gaussian is used
} else{dispersion <- c.hat}
##for negative binomial - reset to NULL
if(any(regexpr("Negative Binomial", fam.type) != -1)) {
dispersion <- NULL
##check for mixture of negative binomial and other
##number of models with negative binomial
negbin.num <- sum(regexpr("Negative Binomial", fam.type) != -1)
if(negbin.num < length(fam.type)) {
stop("Function does not support mixture of negative binomial with other distributions in model set")
}
}
if(c.hat > 1) {dispersion <- c.hat }
if(!is.null(gamdisp)) {dispersion <- gamdisp}
if(c.hat > 1 && !is.null(gamdisp)) {stop("\nYou cannot specify values for both \'c.hat\' and \'gamdisp\'\n")}
##gamma regressions require the 'gamdisp' dispersion parameter to compute valid SE's
if(identical(family(mod)$family[1], "Gamma")) {
##check for specification of gamdisp argument
if(is.null(gamdisp)) stop("\nYou must specify a gamma dispersion parameter with gamma generalized linear models\n")
}
##newdata is data frame with exact structure of the original data frame (same variable names and type)
# if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
##replicate the first line n.groups - 1 times
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that the number of rows matches the number of group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data, dispersion = dispersion,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data, dispersion = dispersion,
se.fit = TRUE)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data, dispersion = dispersion,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data, dispersion = dispersion,
se.fit = TRUE)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data, dispersion = dispersion,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data, dispersion = dispersion,
se.fit = TRUE)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data, dispersion = dispersion,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data, dispersion = dispersion,
se.fit = TRUE)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data, dispersion = dispersion,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data, dispersion = dispersion,
se.fit = TRUE)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
##create vector of names
model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))##use contrasts in name
##1111, 122, etc...
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = sort, c.hat = c.hat)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = FALSE, c.hat = c.hat)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
##model-averaged estimate
mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
#################################################
###NEW CODE
#################################################
##compute model-average estimates
##classic uncorrected for multiple comparisons
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
z.crit <- qnorm(p = (1 - conf.level)/2,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - z.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + z.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
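##----------------------------------------------------------------------
##Usage sketch for the glm method above (a hedged illustration, wrapped
##in if(FALSE) so it never runs when this file is sourced). It assumes
##dispatch through the 'multComp' generic; data and object names are
##illustrative only.
if(FALSE) {
set.seed(2)
dat <- data.frame(group = factor(rep(c("a", "b", "c", "d"), each = 15)),
count = rpois(60, lambda = rep(c(2, 2, 5, 9), each = 15)))
##one-way Poisson regression on counts
fit.glm <- glm(count ~ group, family = poisson, data = dat)
##Bonferroni-adjusted CIs on the response scale; setting c.hat > 1
##would request quasi-likelihood adjustments for overdispersion
multComp(mod = fit.glm, factor.id = "group", correction = "bonferroni",
type = "response")
}
##----------------------------------------------------------------------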
##lme
multComp.lme <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL,
sort = TRUE, newdata = NULL, uncond.se = "revised", conf.level = 0.95,
correction = "none", ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
data.set <- eval(mod$call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##identify response variable
resp.id <- as.character(formula(mod))[2]
##check for response variable
if(!any(names(data.set) == resp.id)) stop("\nThis function does not support transformations of the response variable in the formula\n")
##note: transformations of the response variable inside the formula are not supported
resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group means
ord.means <- sort(tapply(X = resp, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
# if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
##replicate the first line n.groups - 1 times
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that the number of rows matches the number of group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
##create vector of names
model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))##use contrasts in name
##1111, 122, etc...
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = sort)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = FALSE)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
##model-averaged estimate
mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
#################################################
###NEW CODE
#################################################
##compute model-average estimates
##classic uncorrected for multiple comparisons
##extract residual df (lm, rlm, gls, lme)
res.df <- mod$fixDF$terms["(Intercept)"]
##use denominator residual df of intercept
##note that anova.lme in nlme requires that denominator DF is identical for all terms
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
t.crit <- qt(p = (1 - conf.level)/2, df = res.df,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
t.crit <- qt(p = alpha.corr/2, df = res.df,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
t.crit <- qt(p = alpha.corr/2, df = res.df,
lower.tail = FALSE)
}
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - t.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + t.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
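##----------------------------------------------------------------------
##Usage sketch for the lme method above (a hedged illustration, wrapped
##in if(FALSE) so it never runs when this file is sourced). It assumes
##dispatch through the 'multComp' generic and uses the Orthodont data
##shipped with nlme; Sex has two levels, so the candidate set reduces
##to the two parameterizations {1, 2} and {12}.
if(FALSE) {
library(nlme)
fit.lme <- lme(distance ~ Sex, random = ~ 1 | Subject, data = Orthodont)
##model-averaged group means with unadjusted confidence intervals
multComp(mod = fit.lme, factor.id = "Sex")
}
##----------------------------------------------------------------------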
##glm.nb
multComp.negbin <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL, sort = TRUE,
newdata = NULL, uncond.se = "revised", conf.level = 0.95, correction = "none",
type = "response", ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
data.set <- eval(mod$call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##identify response variable
resp.id <- as.character(formula(mod))[2]
##check for response variable
if(!any(names(data.set) == resp.id)) stop("\nThis function does not support transformations of the response variable in the formula\n")
##note: transformations of the response variable inside the formula are not supported
resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group means
ord.means <- sort(tapply(X = resp, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
##replicate the first line n.groups - 1 times
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that the number of rows matches the number of group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
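##Illustrative sketch (not part of the original code): each row of pat.mat is a
##composition of the ordered groups, i.e., a choice of which adjacent ordered
##groups share a mean, giving 2^(k - 1) rows; the hardcoded matrices above can
##be generated for any k as cumulative sums over binary "new group" indicators:
##  make.pat.mat <- function(k) {
##    new.grp <- as.matrix(expand.grid(rep(list(c(1, 0)), k - 1)))
##    unname(t(apply(cbind(1, new.grp), 1, cumsum)))
##  }
##  make.pat.mat(3)  ##rows 1 2 3 / 1 1 2 / 1 2 2 / 1 1 1, in some order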
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
                        ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
                               ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
                                      ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
                                             ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
                                                    pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
##create vector of names
model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = "")) ##use group pattern in name
##1111, 122, etc...
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = sort)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = FALSE)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
##model-averaged estimate (column 6 of the AICc table holds the Akaike weights)
mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
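##Illustrative sketch (not part of the original code): with Akaike weights w,
##per-model predictions est, and per-model SEs se, the "revised" unconditional
##SE computed above (eq. 6.12, Burnham and Anderson 2002) reduces to:
##  w <- c(0.6, 0.3, 0.1); est <- c(2.0, 2.4, 1.8); se <- c(0.5, 0.6, 0.4)
##  avg <- sum(w * est)                        ##2.10
##  sqrt(sum(w * (se^2 + (est - avg)^2)))      ##about 0.56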
#################################################
###NEW CODE
#################################################
##compute model-average estimates
##classic uncorrected for multiple comparisons
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
z.crit <- qnorm(p = (1 - conf.level)/2,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
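##Illustrative sketch (not part of the original code): with 4 groups and
##conf.level = 0.95 there are choose(4, 2) = 6 pairwise comparisons, so
##  (1 - 0.95)/6        ##Bonferroni-corrected alpha, about 0.00833
##  1 - 0.95^(1/6)      ##Sidak-corrected alpha, about 0.00851
##both much smaller than the uncorrected alpha of 0.05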
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - z.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + z.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
##rlm
multComp.rlm <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL,
sort = TRUE, newdata = NULL, uncond.se = "revised", conf.level = 0.95,
correction = "none", ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
data.set <- eval(mod$call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##identify response variable
resp.id <- as.character(formula(mod))[2]
##check for response variable
if(!any(names(data.set) == resp.id)) stop("\nThis function does not support transformations of the response variable in the formula\n")
##this check catches transformed responses (e.g., log(y)) early,
##before indexing the data set would fail with a cryptic error
resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group means
ord.means <- sort(tapply(X = resp, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
# if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
##replicate the first row n.groups - 1 times so preds.data has n.groups rows
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that no more rows are specified than group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
                        ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
                               ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
                                      ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
                                             ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
                                                    pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
##create vector of names
model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = "")) ##use group pattern in name
##1111, 122, etc...
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs, sort = sort)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs, sort = FALSE)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
##model-averaged estimate (column 6 of the AICc table holds the Akaike weights)
mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
#################################################
###NEW CODE
#################################################
##compute model-average estimates
##classic uncorrected for multiple comparisons
##extract residual df (lm, rlm, gls, lme)
wresid <- length(mod$wresid)
ptotal <- length(mod$coefficients)
res.df <- wresid - ptotal
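##Illustrative sketch (not part of the original code): unlike the z-based
##intervals used for the glm-type method above, the rlm intervals use a t
##quantile on the residual df; e.g., with res.df = 20 and conf.level = 0.95:
##  qt(0.025, df = 20, lower.tail = FALSE)    ##about 2.086
##  qnorm(0.025, lower.tail = FALSE)          ##about 1.960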
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
t.crit <- qt(p = (1 - conf.level)/2, df = res.df,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
t.crit <- qt(p = alpha.corr/2, df = res.df,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
t.crit <- qt(p = alpha.corr/2, df = res.df,
lower.tail = FALSE)
}
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - t.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + t.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
##survreg
multComp.survreg <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL, sort = TRUE,
newdata = NULL, uncond.se = "revised", conf.level = 0.95, correction = "none",
type = "response", ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
data.set <- eval(mod$call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##use predictions instead of response variable
resp.preds <- predict(mod, type = type)
##identify response variable - here given by Surv( , )
#resp.id <- as.character(formula(mod))[2]
#resp.id2 <- gsub(pattern="Surv\\(", replacement = "", x = resp.id)
#resp.id3 <- unlist(strsplit(gsub(pattern="\\)", replacement = "", x = resp.id2), split = ","))
##check for response variable
#if(!any(names(data.set) == resp.id3[1]) || !any(names(data.set) == resp.id3[1])) stop("\nResponse variable in the formula does not appear in data set\n")
##changed to allow when response variable is transformed
##in the formula to avoid error
#resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group preds
ord.means <- sort(tapply(X = resp.preds, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
# if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
##replicate the first row n.groups - 1 times so preds.data has n.groups rows
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that no more rows are specified than group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
      new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
                         ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
                           ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
                             ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
                               ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
                                 pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predict(mod.list[[k]], type = type, newdata = preds.data,
se.fit = TRUE)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
  ##create vector of model names from the contrast patterns (1111, 122, etc.)
  model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = sort)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = FALSE)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
    ##model-averaged estimate (column 6 of the AIC table holds the Akaike weights)
    mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
#################################################
###NEW CODE
#################################################
##compute model-average estimates
##classic uncorrected for multiple comparisons
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
z.crit <- qnorm(p = (1 - conf.level)/2,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
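  ##Worked illustration (not part of the original source): with conf.level = 0.95
  ##and n.groups = 3, ncomp = choose(3, 2) = 3, so the critical values are
  ##  none:       z.crit = qnorm(0.025, lower.tail = FALSE), about 1.96
  ##  bonferroni: alpha.corr = 0.05/3, z.crit = qnorm(alpha.corr/2,
  ##              lower.tail = FALSE), about 2.39
  ##  sidak:      alpha.corr = 1 - 0.95^(1/3), also giving z.crit of about 2.39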
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - z.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + z.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
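##Illustrative usage sketch (object and variable names here are hypothetical,
##assuming a fitted lm on a data frame with a factor named "group"):
##  fm <- lm(y ~ group, data = dat)
##  multComp(fm, factor.id = "group", correction = "sidak")
##This fits the 2^(k - 1) models implied by the ordered grouping patterns,
##ranks them by AICc, and returns model-averaged group estimates with
##confidence limits.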
##mer
multComp.mer <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL,
sort = TRUE, newdata = NULL, uncond.se = "revised", conf.level = 0.95,
correction = "none", type = "response", ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
##unmarked models are not supported yet
##check if S4
##if mer or merMod objects
data.set <- eval(mod@call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##identify response variable
resp.id <- as.character(formula(mod))[2]
##check for response variable
if(!any(names(data.set) == resp.id)) stop("\nThis function does not support transformations of the response variable in the formula\n")
##changed to allow when response variable is transformed
##in the formula to avoid error
resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group means
ord.means <- sort(tapply(X = resp, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
  if(type == "terms") {stop("\ntype = \"terms\" is not supported by this function\n")}
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
    ##replicate the first row to obtain n.groups rows
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that no more rows are specified than group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
      new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
                         ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
                           ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
                             ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
                               ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
                                 pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
  ##create vector of model names from the contrast patterns (1111, 122, etc.)
  model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = sort)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = FALSE)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
    ##model-averaged estimate (column 6 of the AIC table holds the Akaike weights)
    mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
#################################################
###NEW CODE
#################################################
##compute model-average estimates
##classic uncorrected for multiple comparisons
##here, uses normal approximation instead of t for lmer
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
z.crit <- qnorm(p = (1 - conf.level)/2,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - z.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + z.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
##merMod
multComp.merMod <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL,
sort = TRUE, newdata = NULL, uncond.se = "revised", conf.level = 0.95,
correction = "none", type = "response", ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
##unmarked models are not supported yet
##check if S4
##if mer or merMod objects
data.set <- eval(mod@call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##identify response variable
resp.id <- as.character(formula(mod))[2]
##check for response variable
if(!any(names(data.set) == resp.id)) stop("\nThis function does not support transformations of the response variable in the formula\n")
##changed to allow when response variable is transformed
##in the formula to avoid error
resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group means
ord.means <- sort(tapply(X = resp, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
  if(type == "terms") {stop("\ntype = \"terms\" is not supported by this function\n")}
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
    ##replicate the first row to obtain n.groups rows
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that no more rows are specified than group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE,
type = type)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
##create vector of model names from the grouping patterns (e.g., 1111, 122)
model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = sort)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = FALSE)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
##model-averaged estimate
mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
#################################################
###NEW CODE
#################################################
##compute model-average estimates
##classic uncorrected for multiple comparisons
##here, uses normal approximation instead of t for lmer
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
z.crit <- qnorm(p = (1 - conf.level)/2,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - z.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + z.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
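##Usage sketch (not run): model selection-based multiple comparisons on a
##one-way design; 'crops', 'yield', 'soil.type', and 'block' are
##hypothetical names used only for illustration.
##  fm <- lme4::lmer(yield ~ soil.type + (1 | block), data = crops)
##  mc <- multComp(fm, factor.id = "soil.type", correction = "sidak")
##  print(mc, digits = 3)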
##lmerModLmerTest
multComp.lmerModLmerTest <- function(mod, factor.id, letter.labels = TRUE, second.ord = TRUE, nobs = NULL,
sort = TRUE, newdata = NULL, uncond.se = "revised", conf.level = 0.95,
correction = "none", ...) {
##2^(k - 1) combinations of group patterns
##extract data set from model
##unmarked models are not supported yet
##check if S4
##if mer or merMod objects
data.set <- eval(mod@call$data, environment(formula(mod)))
##add a check for missing values
if(any(is.na(data.set))) stop("\nMissing values occur in the data set:\n", "remove these values before proceeding")
##check for presence of interactions
form.check <- formula(mod)
form.mod <- strsplit(as.character(form.check), split="~")[[3]]
if(attr(regexpr(paste(":", factor.id, sep = ""), form.mod), "match.length") != -1 || attr(regexpr(paste(factor.id, ":", sep = ""),
form.mod), "match.length") != -1 || attr(regexpr(paste("\\*", factor.id), form.mod), "match.length") != -1 ||
attr(regexpr(paste(factor.id, "\\*"), form.mod), "match.length") != -1 ) {
stop("\nDo not involve the factor of interest in interaction terms with this function,\n",
"see \"?multComp\" for details on specifying interaction terms\n")
}
##identify column with factor.id
parm.vals <- data.set[, factor.id]
##check that variable is factor
if(!is.factor(parm.vals)) stop("\n'factor.id' must be a factor\n")
##identify response variable
resp.id <- as.character(formula(mod))[2]
##check for response variable
if(!any(names(data.set) == resp.id)) stop("\nThis function does not support transformations of the response variable in the formula\n")
##changed to allow when response variable is transformed
##in the formula to avoid error
resp <- data.set[, resp.id]
##determine group identity
groups.id <- levels(parm.vals)
##determine number of groups
n.groups <- length(groups.id)
##order group means
ord.means <- sort(tapply(X = resp, INDEX = parm.vals, FUN = mean))
##or sort(tapply(X = fitted(mod), INDEX = parm.vals, FUN = mean))
##order groups
ord.groups <- names(ord.means)
##generic groups
gen.groups <- paste(1:n.groups)
###################CHANGES####
##############################
##newdata is data frame with exact structure of the original data frame (same variable names and type)
#if(type == "terms") {stop("\nThe terms argument is not defined for this function\n")}
##check data frame for predictions
if(is.null(newdata)) {
##if no data set is specified, use first observation in data set
new.data <- data.set[1, ]
preds.data <- new.data
##replicate first line n.groups time
for(k in 1:(n.groups-1)){
preds.data <- rbind(preds.data, new.data)
}
##add ordered groups
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
} else {
preds.data <- newdata
##check that no more rows are specified than group levels
if(nrow(newdata) != n.groups) stop("\nnumber of rows in \'newdata\' must match number of factor levels\n")
preds.data[, factor.id] <- as.factor(ord.groups)
pred.parm <- preds.data[, factor.id]
}
##create pattern matrix of group assignment
if(n.groups == 2) {
##combinations for 2 groups
##{1, 2}
##{12}
##rows correspond to parameterization of different models
pat.mat <- matrix(data =
c(1, 2,
1, 1),
byrow = TRUE,
ncol = 2)
}
if(n.groups == 3) {
##combinations for 3 groups
##{1, 2, 3}
##{12, 3}
##{1, 23}
##{123}
pat.mat <- matrix(data =
c(1, 2, 3,
1, 1, 2,
1, 2, 2,
1, 1, 1),
byrow = TRUE,
ncol = 3)
}
if(n.groups == 4) {
##combinations for 4 groups
##{1, 2, 3, 4}
##{1, 2, 34}
##{12, 3, 4}
##{12, 34}
##{1, 23, 4}
##{1, 234}
##{123, 4}
##{1234}
pat.mat <- matrix(data =
c(1, 2, 3, 4,
1, 2, 3, 3,
1, 1, 2, 3,
1, 1, 2, 2,
1, 2, 2, 3,
1, 2, 2, 2,
1, 1, 1, 2,
1, 1, 1, 1),
byrow = TRUE,
ncol = 4)
}
if(n.groups == 5) {
##combinations for 5 groups
##{1, 2, 3, 4, 5}
##{1, 2, 3, 45}
##{1, 2, 345}
##{1, 2, 34, 5}
##{12, 3, 4, 5}
##{12, 34, 5}
##{12, 3, 45}
##{12, 345}
##{1, 23, 4, 5}
##{1, 23, 45}
##{1, 234, 5}
##{123, 4, 5}
##{123, 45}
##{1234, 5}
##{1, 2345}
##{12345}
pat.mat <- matrix(data =
c(1, 2, 3, 4, 5,
1, 2, 3, 4, 4,
1, 2, 3, 3, 3,
1, 2, 3, 3, 4,
1, 1, 2, 3, 4,
1, 1, 2, 2, 3,
1, 1, 2, 3, 3,
1, 1, 2, 2, 2,
1, 2, 2, 3, 4,
1, 2, 2, 3, 3,
1, 2, 2, 2, 3,
1, 1, 1, 2, 3,
1, 1, 1, 2, 2,
1, 1, 1, 1, 2,
1, 2, 2, 2, 2,
1, 1, 1, 1, 1),
byrow = TRUE,
ncol = 5)
}
##combinations for 6 groups
if(n.groups == 6) {
pat.mat <- matrix(data =
c(1, 2, 2, 2, 2, 2,
1, 2, 2, 2, 2, 3,
1, 2, 2, 2, 3, 3,
1, 2, 2, 2, 3, 4,
1, 2, 2, 3, 3, 3,
1, 2, 3, 3, 3, 4,
1, 2, 3, 3, 3, 3,
1, 2, 3, 3, 4, 5,
1, 2, 3, 3, 4, 4,
1, 2, 3, 4, 4, 4,
1, 2, 3, 4, 4, 5,
1, 2, 3, 4, 5, 6,
1, 2, 3, 4, 5, 5,
1, 2, 2, 3, 4, 5,
1, 2, 2, 3, 4, 4,
1, 2, 2, 3, 3, 4,
1, 1, 2, 2, 2, 2,
1, 1, 2, 2, 2, 3,
1, 1, 2, 2, 3, 3,
1, 1, 2, 3, 3, 3,
1, 1, 1, 2, 2, 2,
1, 1, 1, 2, 2, 3,
1, 1, 1, 2, 3, 3,
1, 1, 1, 2, 3, 4,
1, 1, 2, 3, 4, 5,
1, 1, 1, 1, 2, 3,
1, 1, 1, 1, 2, 2,
1, 1, 1, 1, 1, 2,
1, 1, 1, 1, 1, 1,
1, 1, 2, 3, 4, 4,
1, 1, 2, 2, 3, 4,
1, 1, 2, 3, 3, 4),
byrow = TRUE,
ncol = 6)
}
if(n.groups > 6) stop("\nThis function supports a maximum of 6 groups\n")
##number of models
n.mods <- nrow(pat.mat)
data.iter <- data.set
##create list to store models
mod.list <- vector("list", length = n.mods)
##create list to store preds
pred.list <- vector("list", length = n.mods)
if(n.groups == 2) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
##for model
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1], pat.mat[k, 2])
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 3) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
pat.mat[k, 3]))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 4) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
pat.mat[k, 4])))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 5) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
pat.mat[k, 5]))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
if(n.groups == 6) {
##loop over matrix and change parameterization
for(k in 1:nrow(pat.mat)) {
orig.parm <- data.set[, factor.id]
new.parm <- ifelse(orig.parm == ord.groups[1], pat.mat[k, 1],
ifelse(orig.parm == ord.groups[2], pat.mat[k, 2],
ifelse(orig.parm == ord.groups[3], pat.mat[k, 3],
ifelse(orig.parm == ord.groups[4], pat.mat[k, 4],
ifelse(orig.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##for predictions
new.pred.parm <- ifelse(pred.parm == ord.groups[1], pat.mat[k, 1],
ifelse(pred.parm == ord.groups[2], pat.mat[k, 2],
ifelse(pred.parm == ord.groups[3], pat.mat[k, 3],
ifelse(pred.parm == ord.groups[4], pat.mat[k, 4],
ifelse(pred.parm == ord.groups[5], pat.mat[k, 5],
pat.mat[k, 6])))))
##replace in preds.data
preds.data[, factor.id] <- as.factor(new.pred.parm)
## data.iter[, factor.id] <- new.parm
##convert to factor only for cases with different groups
if(length(unique(new.parm)) > 1) {
data.iter[, factor.id] <- as.factor(new.parm)
##run model on updated data
mod.list[[k]] <- update(mod, data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
} else {
##if constant, remove factor.id from model
mod.list[[k]] <- update(mod, as.formula(paste(". ~ . -", factor.id)), data = data.iter)
##compute predictions
pred.list[[k]] <- predictSE(mod.list[[k]], newdata = preds.data, se.fit = TRUE)
}
}
}
##if group label should be letters
if(letter.labels) {
letter.mat <- matrix(data = letters[pat.mat], ncol = ncol(pat.mat))
pat.mat <- letter.mat
}
##create vector of model names from the grouping patterns (e.g., 1111, 122)
model.names <- apply(X = pat.mat, MARGIN = 1, FUN = function(i) paste(i, collapse = ""))
model.names <- paste("m_", model.names, sep = "")
##compute AICc table for output
out.table <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = sort)
out.table.avg <- aictab(cand.set = mod.list, modnames = model.names,
second.ord = second.ord, nobs = nobs,
sort = FALSE)
Mod.avg.out <- matrix(NA, nrow = nrow(preds.data), ncol = 2)
colnames(Mod.avg.out) <- c("Mod.avg.est", "Uncond.SE")
rownames(Mod.avg.out) <- ord.groups
##iterate over predictions
for(m in 1:nrow(preds.data)){
##add fitted values in table
out.table.avg$fit <- unlist(lapply(X = pred.list, FUN = function(i) i$fit[m]))
out.table.avg$SE <- unlist(lapply(X = pred.list, FUN = function(i) i$se.fit[m]))
##model-averaged estimate
mod.avg.est <- out.table.avg[, 6] %*% out.table.avg$fit
Mod.avg.out[m, 1] <- mod.avg.est
##unconditional SE based on equation 4.9 of Burnham and Anderson 2002
if(identical(uncond.se, "old")) {
Mod.avg.out[m, 2] <- sum(out.table.avg[, 6] * sqrt(out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2))
}
##revised computation of unconditional SE based on equation 6.12 of Burnham and Anderson 2002; Anderson 2008, p. 111
if(identical(uncond.se, "revised")) {
Mod.avg.out[m, 2] <- sqrt(sum(out.table.avg[, 6] * (out.table.avg$SE^2 + (out.table.avg$fit - as.vector(mod.avg.est))^2)))
}
}
#################################################
###NEW CODE
#################################################
##compute model-average estimates
##classic uncorrected for multiple comparisons
##here, uses normal approximation instead of t for lmer
##extract quantile
##no correction for multiple comparisons
if(identical(correction, "none")){
z.crit <- qnorm(p = (1 - conf.level)/2,
lower.tail = FALSE)
}
##number of possible comparisons
ncomp <- choose(n.groups, 2)
##Bonferroni correction
if(identical(correction, "bonferroni")){
alpha.corr <- (1 - conf.level)/ncomp
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
##Sidak correction
if(identical(correction, "sidak")){
alpha.corr <- 1 - (conf.level)^(1/ncomp)
z.crit <- qnorm(p = alpha.corr/2,
lower.tail = FALSE)
}
##compute CI's
Lower.CL <- Mod.avg.out[, 1] - z.crit * Mod.avg.out[, 2]
Upper.CL <- Mod.avg.out[, 1] + z.crit * Mod.avg.out[, 2]
##combine results in matrix
out.matrix <- cbind(Mod.avg.out, Lower.CL, Upper.CL)
##arrange output
results <- list(factor.id = factor.id, models = mod.list, model.names = model.names,
model.table = out.table, ordered.levels = ord.groups, model.avg.est = out.matrix,
conf.level = conf.level, correction = correction)
#################################################
###NEW CODE
#################################################
class(results) <- c("multComp", "list")
return(results)
}
##print method
print.multComp <-
function(x, digits = 2, LL = TRUE, ...) {
##extract model table
parm <- x$factor.id
ord.groups <- x$ordered.levels
mod.avg.out <- x$model.avg.est
conf.level <- x$conf.level
correction <- x$correction
x <- x$model.table
cat("\nModel selection for multiple comparisons of \"", parm, "\" based on ", colnames(x)[3], ":\n", sep = "")
if (any(names(x) == "c_hat")) {cat("(c-hat estimate = ", x$c_hat[1], ")\n", sep = "")}
cat("\n")
#check if Cum.Wt should be printed
if(any(names(x) == "Cum.Wt")) {
nice.tab <- cbind(x[, 2], x[, 3], x[, 4], x[, 6], x[, "Cum.Wt"], x[, 7])
colnames(nice.tab) <- c(colnames(x)[c(2, 3, 4, 6)], "Cum.Wt", colnames(x)[7])
rownames(nice.tab) <- x[, 1]
} else {
nice.tab <- cbind(x[, 2], x[, 3], x[, 4], x[, 6], x[, 7])
colnames(nice.tab) <- c(colnames(x)[c(2, 3, 4, 6, 7)])
rownames(nice.tab) <- x[, 1]
}
#if LL==FALSE
if(identical(LL, FALSE)) {
names.cols <- colnames(nice.tab)
sel.LL <- which(attr(regexpr(pattern = "LL", text = names.cols), "match.length") > 1)
nice.tab <- nice.tab[, -sel.LL]
}
print(round(nice.tab, digits = digits)) #select rounding off with digits argument
cat("\nLabels in model names denote grouping structure and\n")
cat("are ordered based on increasing means:\t", ord.groups, "\n")
cat("\nModel-averaged estimates of group means:", "\n")
print(round(mod.avg.out, digits = digits))
cat("---\n")
if(identical(correction, "none")) {
cat("Note: ", conf.level*100, "% unconditional confidence intervals uncorrected for multiple comparisons\n", sep = "")
}
if(identical(correction, "bonferroni")) {
cat("Note: ", conf.level*100, "% unconditional confidence intervals with Bonferroni adjustment\n", sep = "")
}
if(identical(correction, "sidak")) {
cat("Note: ", conf.level*100, "% unconditional confidence intervals with Sidak adjustment\n", sep = "")
}
cat("\n")
}
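##Standalone numeric sketch of the model-averaging step computed above
##(hypothetical values): the estimate is the Akaike-weighted sum of the
##per-model predictions, and the revised unconditional SE follows
##eq. 6.12 of Burnham and Anderson (2002).
##  w    <- c(0.6, 0.3, 0.1)                        #Akaike weights
##  yhat <- c(10.2, 10.8, 9.9)                      #per-model predictions
##  se   <- c(0.50, 0.55, 0.60)                     #per-model SEs
##  est  <- sum(w * yhat)                           #model-averaged estimate
##  u.se <- sqrt(sum(w * (se^2 + (yhat - est)^2)))  #unconditional SE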
## (source file boundary: AICcmodavg/R/multComp.R ends here; AICcmodavg/R/predictSE.R follows)
##generic
predictSE <- function(mod, newdata, se.fit = TRUE, print.matrix = FALSE, ...){
UseMethod("predictSE", mod)
}
##default
predictSE.default <- function(mod, newdata, se.fit = TRUE, print.matrix = FALSE, ...){
stop("\nFunction not yet defined for this object class\n")
}
##predictions not accounting for correlation structure - using Delta method
##gls
predictSE.gls <- function(mod, newdata, se.fit = TRUE, print.matrix = FALSE, ...){
##first part of code converts data.frame (including factors) into design matrix of model
##fixed <- mod$call$model[-2] #extract only fixed portion of model formula
fixed <- formula(mod)[-2] #modification suggested by C. R. Andersen to extract left part of model formula
tt <- terms.formula(formula(mod))
TT <- delete.response(tt)
newdata <- as.data.frame(newdata)
#################################################################################################################
########################### The following piece of code is modified from predict.lme( ) from nlme package
#################################################################################################################
mfArgs <- list(formula = fixed, data = newdata)
dataMix <- do.call("model.frame", mfArgs)
## making sure factor levels are the same as in contrasts
contr <- mod$contrasts
for(i in names(dataMix)) {
if (inherits(dataMix[,i], "factor") && !is.null(contr[[i]])) {
levs <- levels(dataMix[,i])
levsC <- dimnames(contr[[i]])[[1]]
if (any(wch <- is.na(match(levs, levsC)))) {
stop(paste("Levels", paste(levs[wch], collapse = ","),
"not allowed for", i))
}
attr(dataMix[,i], "contrasts") <- contr[[i]][levs, , drop = FALSE]
}
}
#################################################################################################################
########################### The previous piece of code is modified from predict.lme( ) from nlme package
#################################################################################################################
m <- model.frame(TT, data=dataMix)
des.matrix <- model.matrix(TT, m)
newdata <- des.matrix #we now have a design matrix
######START OF PREDICT FUNCTION
######
fix.coef <- coef(mod)
ncoefs <- length(fix.coef)
names.coef <- labels(fix.coef)
nvals <- dim(newdata)[1]
##check for intercept fixed effect term in model
int.yes <- any(names.coef == "(Intercept)")
##if no intercept term, return error
if(!int.yes) stop("\nThis function does not work with models excluding the intercept term\n")
formula <- character(length=ncoefs)
nbetas <- ncoefs - 1
if(int.yes & nbetas >= 1) {
##create loop to construct formula for derivative
formula <- paste("Beta", 1:nbetas, sep="")
formula <- c("Beta0", formula)
} else {
if(int.yes & nbetas == 0) {
formula <- "Beta0"
}
}
##for models without intercept - formula <- paste("Beta", 1:ncoefs, sep="")
##a loop to assemble formula
##first, identify interaction terms
inters <- rep(NA, ncoefs)
for (m in 1:ncoefs) {
inters[m] <- attr(regexpr(pattern = ":", text = names.coef[m]), "match.length")
}
##label covariates as cov0, cov1, ... (the original 1:ncoefs-1 evaluates to 0:(ncoefs - 1))
names.cov <- paste("cov", 0:(ncoefs - 1), sep = "")
if(!int.yes) {names.cov <- paste("cov", 1:ncoefs, sep = "")}
id <- which(inters == 1)
for (k in seq_along(id)) { ##seq_along( ) avoids the 1:0 trap when no interaction terms are present
names.cov[id[k]] <- paste("inter", k, sep="")
}
##iterate and combine betas and covariates
formula2 <- character(length = ncoefs)
for(b in 1:ncoefs) {
formula2[b] <- paste(formula[b], names.cov[b], sep="*")
}
##replace with Beta0 if fixed intercept term present
if(int.yes) {formula2[1] <- "Beta0"}
##collapse into a single equation and convert to expression
##parse returns the unevaluated expression
eq.space <- parse(text = as.expression(paste(formula2, collapse="+")),
srcfile = NULL)
##add step to remove white space to avoid reaching 500 character limit
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(eq.space))
equation <- parse(text = as.expression(no.space))
##
if(identical(se.fit, TRUE)) {
##determine number of partial derivatives to compute
part.devs <- list( )
for(j in 1:ncoefs) {
part.devs[[j]] <- D(equation, formula[j])
}
}
##determine number of covariates excluding interaction terms
ncovs <- ncoefs - length(id)
##assign values of covariates
cov.values <- list()
##if only intercept, then add column
if(int.yes && ncovs == 1) {
cov.values[[1]] <- 1
}
if(int.yes && ncovs > 1) {
cov.values[[1]] <- rep(1, nvals)
for (q in 2:ncoefs) {
cov.values[[q]] <- newdata[, labels(fix.coef)[q]]
}
} else {
for (q in 1:ncoefs) {
cov.values[[q]] <- newdata[, labels(fix.coef)[q]]
}
}
names(cov.values) <- names.cov
cov.values.mat <- matrix(data = unlist(cov.values), nrow = nvals, ncol = ncoefs)
if(identical(se.fit, TRUE)) {
##substitute a given row for each covariate
predicted.SE <- matrix(NA, nrow = nvals, ncol = 2)
colnames(predicted.SE) <- c("Pred.value", "SE")
rownames(predicted.SE) <- 1:nvals
part.devs.eval <- list( )
part.devs.eval[[1]] <- 1
for (w in 1:nvals) {
if(int.yes && ncovs > 1) {
for (p in 2:ncoefs) {
part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
}
} # else { ##for cases without intercept
##for (p in 1:ncoefs) {
## part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
##}
##}
part.devs.solved <- unlist(part.devs.eval)
##extract vc matrix
vcmat <- vcov(mod)
mat_partialdevs <- as.matrix(part.devs.solved) #column vector of partial derivatives (ncoefs x 1)
mat_tpartialdevs <- t(part.devs.solved) #row vector of partial derivatives (1 x ncoefs)
##delta method: variance of the predicted value
var_hat <- mat_tpartialdevs %*% vcmat %*% mat_partialdevs
SE <- sqrt(var_hat)
predicted.vals <- fix.coef %*% cov.values.mat[w, ]
predicted.SE[w, 1] <- predicted.vals
predicted.SE[w, 2] <- SE
}
out.fit.SE <- list(fit = predicted.SE[,"Pred.value"], se.fit = predicted.SE[, "SE"])
} else {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 1)
colnames(predicted.SE) <- c("Pred.value")
rownames(predicted.SE) <- 1:nvals
for (w in 1:nvals) {
predicted.vals <- fix.coef%*%cov.values.mat[w,]
predicted.SE[w, 1] <- predicted.vals
}
out.fit.SE <- predicted.SE
colnames(out.fit.SE) <- "fit"
}
##print as nice matrix, otherwise print as list
if(identical(print.matrix, TRUE)) {
out.fit.SE <- predicted.SE
if(identical(se.fit, TRUE)) {
colnames(out.fit.SE) <- c("fit", "se.fit")
} else {
colnames(out.fit.SE) <- c("fit")
}
}
return(out.fit.SE)
}
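##Standalone sketch of the delta-method computation used above
##(hypothetical numbers): for a linear predictor x'beta, the prediction
##variance is x' V x, where V is the variance-covariance matrix of the
##coefficient estimates.
##  x  <- c(1, 2.5)                                 #design row: intercept, covariate
##  V  <- matrix(c(0.04, -0.01, -0.01, 0.02), 2, 2) #vcov(mod)
##  se <- sqrt(t(x) %*% V %*% x)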
##lme
predictSE.lme <- function(mod, newdata, se.fit = TRUE, print.matrix = FALSE, level = 0, ...){
##logical test for level
if(!identical(level, 0)) stop("\nThis function does not support computation of predicted values\n",
"or standard errors for higher levels of nesting\n")
##first part of code converts data.frame (including factors) into design matrix of model
#fixed <- mod$call$fixed[-2] #extract only fixed portion of model formula - creates problems if formula specified in separate object
fixed <- formula(mod)[-2] #extract only fixed portion of model formula
tt <- terms(mod)
TT <- delete.response(tt)
newdata <- as.data.frame(newdata)
#################################################################################################################
########################### The following piece of code is modified from predict.lme( ) from nlme package
#################################################################################################################
mfArgs <- list(formula = fixed, data = newdata)
dataMix <- do.call("model.frame", mfArgs)
## making sure factor levels are the same as in contrasts
contr <- mod$contrasts
for(i in names(dataMix)) {
if (inherits(dataMix[,i], "factor") && !is.null(contr[[i]])) {
levs <- levels(dataMix[,i])
levsC <- dimnames(contr[[i]])[[1]] ##could change to rownames(contr[[i]])
if (any(wch <- is.na(match(levs, levsC)))) {
stop(paste("Levels", paste(levs[wch], collapse = ","),
"not allowed for", i))
}
attr(dataMix[,i], "contrasts") <- contr[[i]][levs, , drop = FALSE]
}
}
#################################################################################################################
########################### The previous piece of code is modified from predict.lme( ) from nlme package
#################################################################################################################
m <- model.frame(TT, data=dataMix)
des.matrix <- model.matrix(TT, m)
newdata <- des.matrix #we now have a design matrix
######START OF PREDICT FUNCTION
######
fix.coef <- fixef(mod)
ncoefs <- length(fix.coef)
names.coef <- labels(fix.coef)
nvals <- dim(newdata)[1]
##check for intercept fixed effect term in model
int.yes <- any(names.coef == "(Intercept)")
##if no intercept term, return error
if(!int.yes) stop("\nThis function does not work with models excluding the intercept term\n")
formula <- character(length=ncoefs)
nbetas <- ncoefs - 1
if(int.yes & nbetas >= 1) {
##create loop to construct formula for derivative
formula <- paste("Beta", 1:nbetas, sep="")
formula <- c("Beta0", formula)
} else {
if(int.yes & nbetas == 0) {
formula <- "Beta0"
}
}
##for models without intercept - formula <- paste("Beta", 1:ncoefs, sep="")
##a loop to assemble formula
##first, identify interaction terms
inters <- rep(NA, ncoefs)
for (m in 1:ncoefs) {
inters[m] <- attr(regexpr(pattern = ":", text = names.coef[m]), "match.length")
}
##label covariates as cov0, cov1, ... (the original 1:ncoefs-1 evaluates to 0:(ncoefs - 1))
names.cov <- paste("cov", 0:(ncoefs - 1), sep = "")
if(!int.yes) {names.cov <- paste("cov", 1:ncoefs, sep = "")}
id <- which(inters == 1)
for (k in seq_along(id)) { ##seq_along( ) avoids the 1:0 trap when no interaction terms are present
names.cov[id[k]] <- paste("inter", k, sep="")
}
##iterate and combine betas and covariates
formula2 <- character(length = ncoefs)
for(b in 1:ncoefs) {
formula2[b] <- paste(formula[b], names.cov[b], sep="*")
}
##replace with Beta0 if fixed intercept term present
if(int.yes) {formula2[1] <- "Beta0"}
##collapse into a single equation and convert to expression
##parse returns the unevaluated expression
eq.space <- parse(text = as.expression(paste(formula2, collapse="+")),
srcfile = NULL)
##add step to remove white space to avoid reaching 500 character limit
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(eq.space))
equation <- parse(text = as.expression(no.space))
##
if(identical(se.fit, TRUE)) {
##determine number of partial derivatives to compute
part.devs <- list( )
for(j in 1:ncoefs) {
part.devs[[j]] <- D(equation, formula[j])
}
}
##determine number of covariates excluding interaction terms
ncovs <- ncoefs - length(id)
##assign values of covariates
cov.values <- list()
##if only intercept, then add column
if(int.yes && ncovs == 1) {
cov.values[[1]] <- 1
}
if(int.yes && ncovs > 1) {
cov.values[[1]] <- rep(1, nvals)
for (q in 2:ncoefs) {
cov.values[[q]] <- newdata[, labels(fix.coef)[q]]
}
} else {
for (q in 1:ncoefs) {
cov.values[[q]] <- newdata[, labels(fix.coef)[q]]
}
}
names(cov.values) <- names.cov
cov.values.mat <- matrix(data = unlist(cov.values), nrow = nvals, ncol = ncoefs)
if(identical(se.fit, TRUE)) {
##substitute a given row for each covariate
predicted.SE <- matrix(NA, nrow = nvals, ncol = 2)
colnames(predicted.SE) <- c("Pred.value", "SE")
rownames(predicted.SE) <- 1:nvals
part.devs.eval <- list( )
part.devs.eval[[1]] <- 1
for (w in 1:nvals) {
if(int.yes && ncovs > 1) {
for (p in 2:ncoefs) {
part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
}
} # else { ##for cases without intercept
##for (p in 1:ncoefs) {
## part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
##}
##}
part.devs.solved <- unlist(part.devs.eval)
##extract vc matrix
vcmat <- vcov(mod)
mat_partialdevs <- as.matrix(part.devs.solved) #column vector of partial derivatives (ncoefs x 1)
mat_tpartialdevs <- t(part.devs.solved) #row vector of partial derivatives (1 x ncoefs)
##delta method: variance of the predicted value
var_hat <- mat_tpartialdevs %*% vcmat %*% mat_partialdevs
SE <- sqrt(var_hat)
predicted.vals <- fix.coef %*% cov.values.mat[w, ]
predicted.SE[w, 1] <- predicted.vals
predicted.SE[w, 2] <- SE
}
out.fit.SE <- list(fit = predicted.SE[,"Pred.value"], se.fit = predicted.SE[, "SE"])
} else {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 1)
colnames(predicted.SE) <- c("Pred.value")
rownames(predicted.SE) <- 1:nvals
for (w in 1:nvals) {
predicted.vals <- fix.coef%*%cov.values.mat[w,]
predicted.SE[w, 1] <- predicted.vals
}
out.fit.SE <- predicted.SE
colnames(out.fit.SE) <- "fit"
}
##print as nice matrix, otherwise print as list
if(identical(print.matrix, TRUE)) {
out.fit.SE <- predicted.SE
if(identical(se.fit, TRUE)) {
colnames(out.fit.SE) <- c("fit", "se.fit")
} else {
colnames(out.fit.SE) <- c("fit")
}
}
return(out.fit.SE)
}
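##Note: the standard errors above come from the delta method,
##var(g(beta.hat)) ~ t(grad) %*% vcov(beta.hat) %*% grad, where 'grad' holds
##the partial derivatives of the linear predictor evaluated at the new data.
##Illustrative sketch (commented out; the numbers below are made up):
##grad <- c(1, 0.5)                          #partial derivatives for one observation
##V <- matrix(c(0.04, 0.01, 0.01, 0.09), 2)  #variance-covariance matrix of estimates
##sqrt(t(grad) %*% V %*% grad)               #delta-method SE for that observation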
##mer
##current function only works for offset with Poisson distribution and log link
predictSE.mer <- function(mod, newdata, se.fit = TRUE, print.matrix = FALSE, level = 0, type = "response", ...){
##logical test for level
if(!identical(level, 0)) stop("\nThis function does not support computation of predicted values\n",
"or standard errors for higher levels of nesting\n")
##########################################################################
###determine characteristics of glmm
##########################################################################
mod.details <- fam.link.mer(mod)
fam.type <- mod.details$family
link.type <- mod.details$link
supp.link <- mod.details$supp
if(identical(supp.link, "no")) stop("\nOnly canonical link is supported with current version of function\n")
if(identical(link.type, "other")) stop("\nThis function is not yet defined for the specified link function\n")
##########################################################################
##this part of code converts data.frame (including factors) into design matrix of model
tt <- terms(mod)
TT <- delete.response(tt)
newdata <- as.data.frame(newdata)
#################################################################################################################
########################### The following piece of code is modified from predict.lme( ) from the nlme package
#################################################################################################################
mfArgs <- list(formula = TT, data = newdata)
dataMix <- do.call("model.frame", mfArgs)
## making sure factor levels are the same as in contrasts
###########this part creates a list to hold factors - changed from nlme
orig.frame <- mod@frame
##matrix with info on factors
fact.frame <- attr(attr(orig.frame, "terms"), "dataClasses")[-1]
##continue if factors
if(any(fact.frame == "factor")) {
id.factors <- which(fact.frame == "factor")
fact.name <- names(fact.frame)[id.factors] #identify the rows for factors
contr <- list( )
for(j in fact.name) {
contr[[j]] <- contrasts(orig.frame[, j])
}
}
##########end of code to create list changed from nlme
for(i in names(dataMix)) {
if (inherits(dataMix[,i], "factor") && !is.null(contr[[i]])) {
levs <- levels(dataMix[,i])
levsC <- rownames(contr[[i]])
if (any(wch <- is.na(match(levs, levsC)))) {
stop(paste("Levels", paste(levs[wch], collapse = ","),
"not allowed for", i))
}
attr(dataMix[,i], "contrasts") <- contr[[i]][levs, , drop = FALSE]
}
}
#################################################################################################################
########################### The previous clever piece of code is modified from predict.lme( ) from nlme package
#################################################################################################################
###############################################
###############################################
### THIS BIT IS MODIFIED FOR OFFSET
###############################################
##check for offset
##if(length(mod@offset) > 0) {
## calls <- attr(TT, "variables")
## off.num <- attr(TT, "offset")
##offset values
##offset.values <- eval(calls[[off.num+1]], newdata)
##}
checkCall <- as.list(mod@call)
checkOffset <- any(regexpr("offset", text = as.character(names(checkCall))) != -1)
if(checkOffset) {
calls <- attr(TT, "variables")
off.num <- attr(TT, "offset")
##offset values
offset.values <- eval(attr(mod@frame, "offset"), newdata)
}
###############################################
###END OF MODIFICATIONS FOR OFFSET
###############################################
###############################################
##m <- model.frame(TT, data = dataMix)
##m <- model.frame(TT, data = newdata) gives error when offset is converted to log( ) scale within call
des.matrix <- model.matrix(TT, dataMix)
newdata <- des.matrix #we now have a design matrix
######START OF PREDICT FUNCTION
######
fix.coef <- fixef(mod)
ncoefs <- length(fix.coef)
names.coef <- labels(fix.coef)
nvals <- dim(newdata)[1]
##check for intercept fixed effect term in model
int.yes <- any(names.coef == "(Intercept)")
##if no intercept term, return error
if(!int.yes) stop("\nThis function does not work with models excluding the intercept\n")
formula <- character(length=ncoefs)
nbetas <- ncoefs - 1
if(int.yes & nbetas >= 1) {
##create loop to construct formula for derivative
formula <- paste("Beta", 1:nbetas, sep="")
formula <- c("Beta0", formula)
} else {
if(int.yes & nbetas == 0) {
formula <- "Beta0"
}
}
##for models without intercept - formula <- paste("Beta", 1:ncoefs, sep="")
##a loop to assemble formula
##first, identify interaction terms
inters <- rep(NA, ncoefs)
for (m in 1:ncoefs) {
inters[m] <- attr(regexpr(pattern = ":", text = names.coef[m]), "match.length")
}
##change the name of the labels for flexibility
names.cov <- paste("cov", 1:ncoefs-1, sep="")
if(!int.yes) {names.cov <- paste("cov", 1:ncoefs, sep="")}
id <- which(inters == 1)
for (k in seq_along(id)) { #seq_along avoids iterating over c(1, 0) when no interaction terms are present
names.cov[id[k]] <- paste("inter", k, sep="")
}
##iterate and combine betas and covariates
formula2 <- character(length = ncoefs)
for(b in 1:ncoefs) {
formula2[b] <- paste(formula[b], names.cov[b], sep="*")
}
##replace with Beta0 if fixed intercept term present
if(int.yes) {formula2[1] <- "Beta0"}
##collapse into a single equation and convert to expression
##parse returns the unevaluated expression
eq.space <- parse(text = as.expression(paste(formula2, collapse="+")),
srcfile = NULL)
##add step to remove white space to avoid reaching 500 character limit
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(eq.space))
equation <- parse(text = as.expression(no.space))
##############################################
########BEGIN MODIFIED FOR OFFSET############
##############################################
##############################################
if(checkOffset) {
##iterate and combine betas and covariates
formula2 <- character(length = ncoefs+1)
for(b in 1:ncoefs) {
formula2[b] <- paste(formula[b], names.cov[b], sep="*")
}
##add offset variable to formula
formula2[ncoefs+1] <- "offset.vals"
##replace with Beta0 if fixed intercept term present
if(int.yes) {formula2[1] <- "Beta0"}
##collapse into a single equation and convert to expression
eq.space <- parse(text = as.expression(paste(formula2, collapse="+")),
srcfile = NULL)
##add step to remove white space to avoid reaching 500 character limit
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(eq.space))
equation <- parse(text = as.expression(no.space))
}
##############################################
########END MODIFIED FOR OFFSET###############
##############################################
##############################################
##determine number of covariates excluding interaction terms
ncovs <- ncoefs - length(id)
##assign values of covariates
cov.values <- list( )
##if only intercept, then add column
if(int.yes && ncovs == 1) {
cov.values[[1]] <- 1
}
if(int.yes && ncovs > 1) {
cov.values[[1]] <- rep(1, nvals)
for (q in 2:ncoefs) {
cov.values[[q]] <- newdata[, labels(fix.coef)[q]]
}
} else {
for (q in 1:ncoefs) {
cov.values[[q]] <- newdata[, labels(fix.coef)[q]]
}
}
names(cov.values) <- names.cov
cov.values.mat <- matrix(data = unlist(cov.values), nrow = nvals, ncol = ncoefs)
################################################################
####use the following code to compute predicted values and SE's
####on response scale if identity link is used OR link scale
if((identical(type, "response") && identical(link.type, "identity")) || (identical(type, "link"))) {
if(identical(se.fit, TRUE)) {
##determine number of partial derivatives to compute
part.devs <- list( )
for(j in 1:ncoefs) {
part.devs[[j]] <- D(equation, formula[j])
}
}
if(identical(se.fit, TRUE)) {
##substitute a given row for each covariate
predicted.SE <- matrix(NA, nrow = nvals, ncol = 2)
colnames(predicted.SE) <- c("Pred.value", "SE")
rownames(predicted.SE) <- 1:nvals
part.devs.eval <- list( )
part.devs.eval[[1]] <- 1
for (w in 1:nvals) {
if(int.yes && ncovs > 1) {
for (p in 2:ncoefs) {
part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
}
} # else { ##for cases without intercept
##for (p in 1:ncoefs) {
## part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
##}
##}
part.devs.solved <- unlist(part.devs.eval)
##extract vc matrix
vcmat <- vcov(mod)
        mat_partialdevs<-as.matrix(part.devs.solved) #column vector (ncoefs x 1) of partial derivatives
        mat_tpartialdevs<-t(part.devs.solved) #transposed row vector (1 x ncoefs) of partial derivatives
var_hat<-mat_tpartialdevs%*%vcmat%*%mat_partialdevs
SE<-sqrt(var_hat)
predicted.vals <- fix.coef%*%cov.values.mat[w,]
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
if(length(mod@offset) > 0) {
predicted.vals <- fix.coef%*%cov.values.mat[w,] + offset.values[w]
}
######################################
###END MODIFIED FOR OFFSET
######################################
predicted.SE[w, 1] <- predicted.vals
predicted.SE[w, 2] <- SE@x #to extract only value computed
}
out.fit.SE <- list(fit = predicted.SE[,"Pred.value"], se.fit = predicted.SE[, "SE"])
} else {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 1)
colnames(predicted.SE) <- c("Pred.value")
rownames(predicted.SE) <- 1:nvals
for (w in 1:nvals) {
predicted.vals <- fix.coef%*%cov.values.mat[w,]
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
if(checkOffset) {
predicted.vals <- fix.coef%*%cov.values.mat[w,] + offset.values[w]
}
######################################
###END MODIFIED FOR OFFSET
######################################
predicted.SE[w, 1] <- predicted.vals
}
out.fit.SE <- predicted.SE
colnames(out.fit.SE) <- "fit"
}
}
###################################################################################
###################################################################################
####use the following code to compute predicted values and SE's
####on response scale if other than identity link is used
###################################################################################
###################################################################################
if(identical(type, "response") && !identical(link.type, "identity")) {
##for binomial GLMM with logit link
if(identical(link.type, "logit")) {
##build partial derivatives
logit.eq.space <- parse(text = as.expression(paste("exp(", equation, ")/(1+exp(", equation, "))")),
srcfile = NULL)
##add step to remove white space to avoid reaching 500 character limit
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(logit.eq.space))
logit.eq <- parse(text = as.expression(no.space))
part.devs <- list( )
for(j in 1:ncoefs) {
part.devs[[j]] <- D(logit.eq, formula[j])
}
}
  ##for Poisson, Gaussian, or Gamma GLMM with log link
if(identical(link.type, "log")) {
##build partial derivatives
log.eq.space <- parse(text = as.expression(paste("exp(", equation, ")")),
srcfile = NULL)
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(log.eq.space))
log.eq <- parse(text = as.expression(no.space))
part.devs <- list( )
for(j in 1:ncoefs) {
part.devs[[j]] <- D(log.eq, formula[j])
}
}
##assign values of beta estimates to beta parameters
beta.vals <- fix.coef
names(beta.vals) <- formula
##neat way of assigning beta estimate values to objects using names in beta.vals
for(d in 1:ncoefs) {
assign(names(beta.vals)[d], beta.vals[d])
}
if(identical(se.fit, TRUE)) {
##substitute a given row for each covariate
predicted.SE <- matrix(NA, nrow = nvals, ncol = 2)
colnames(predicted.SE) <- c("Pred.value", "SE")
rownames(predicted.SE) <- 1:nvals
part.devs.eval <- list( )
for (w in 1:nvals) {
if(int.yes && ncovs > 1) {
for (p in 1:ncoefs) {
cmds <- list( )
for(r in 2:ncoefs) {
##create commands
cmds[[r]] <- paste(names.cov[r], "=", "cov.values[[names.cov[", r, "]]][", w, "]")
}
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
##if offset present, add in equation
if(checkOffset) {
cmds[[ncoefs+1]] <- paste("offset.vals = offset.values[w]")
}
######################################
###END MODIFIED FOR OFFSET
######################################
##assemble commands
cmd.arg <- paste(unlist(cmds), collapse = ", ")
cmd.eval <- paste("eval(expr = part.devs[[", p, "]],", "envir = list(", cmd.arg, ")", ")")
##evaluate partial derivative
part.devs.eval[[p]] <- eval(parse(text = cmd.eval))
}
}
if(int.yes && ncovs == 1) { #for cases with intercept only
part.devs.eval[[1]] <- eval(part.devs[[1]])
}
## else { ##for cases without intercept
##for (p in 1:ncoefs) {
## part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
##}
##}
part.devs.solved <- unlist(part.devs.eval)
##extract vc matrix
vcmat <- vcov(mod)
        mat_partialdevs <- as.matrix(part.devs.solved) #column vector (ncoefs x 1) of partial derivatives
        mat_tpartialdevs <- t(part.devs.solved) #transposed row vector (1 x ncoefs) of partial derivatives
var_hat <- mat_tpartialdevs %*% vcmat%*%mat_partialdevs
SE <- sqrt(var_hat)
predicted.vals <- fix.coef %*% cov.values.mat[w,]
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
if(checkOffset) {
predicted.vals <- fix.coef%*%cov.values.mat[w,] + offset.values[w]
}
######################################
###END MODIFIED FOR OFFSET
######################################
if(identical(link.type, "logit")) {
predicted.SE[w, 1] <- exp(predicted.vals)/(1 + exp(predicted.vals))
} else {
if(identical(link.type, "log")) {
predicted.SE[w, 1] <- exp(predicted.vals)
}
}
predicted.SE[w, 2] <- SE@x #to extract only value computed
}
out.fit.SE <- list(fit = predicted.SE[,"Pred.value"], se.fit = predicted.SE[, "SE"])
} else {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 1)
colnames(predicted.SE) <- c("Pred.value")
rownames(predicted.SE) <- 1:nvals
for (w in 1:nvals) {
predicted.vals <- fix.coef%*%cov.values.mat[w,]
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
if(checkOffset) {
predicted.vals <- fix.coef%*%cov.values.mat[w,] + offset.values[w]
}
######################################
###END MODIFIED FOR OFFSET
######################################
if(identical(link.type, "logit")) {
predicted.SE[w, 1] <- exp(predicted.vals)/(1 + exp(predicted.vals))
} else {
if(identical(link.type, "log")) {
predicted.SE[w, 1] <- exp(predicted.vals)
}
}
}
out.fit.SE <- predicted.SE
colnames(out.fit.SE) <- "fit"
}
}
###################################################################
##print as nice matrix, otherwise print as list
if(identical(print.matrix, TRUE)) {
out.fit.SE <- predicted.SE
if(identical(se.fit, TRUE)) {
colnames(out.fit.SE) <- c("fit", "se.fit")
} else {
colnames(out.fit.SE) <- c("fit")
}
}
return(out.fit.SE)
}
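##Illustrative usage of predictSE.mer (commented out; 'dat' and the model
##below are hypothetical and assume a Poisson GLMM of class mer):
##gm <- lme4::glmer(count ~ x + (1 | site), family = poisson, data = dat)
##predictSE.mer(gm, newdata = data.frame(x = c(0, 1)), type = "response")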
##########################
##########################
##merMod (glmerMod and lmerMod) fits
predictSE.merMod <- function(mod, newdata, se.fit = TRUE, print.matrix = FALSE, level = 0, type = "response", ...){
##logical test for level
if(!identical(level, 0)) stop("\nThis function does not support computation of predicted values\n",
"or standard errors for higher levels of nesting\n")
##########################################################################
###determine characteristics of glmm
##########################################################################
mod.details <- fam.link.mer(mod)
fam.type <- mod.details$family
link.type <- mod.details$link
supp.link <- mod.details$supp
if(identical(supp.link, "no")) stop("\nOnly canonical link is supported with current version of function\n")
if(identical(link.type, "other")) stop("\nThis function is not yet defined for the specified link function\n")
##########################################################################
##this part of code converts data.frame (including factors) into design matrix of model
tt <- terms(mod)
TT <- delete.response(tt)
newdata <- as.data.frame(newdata)
#################################################################################################################
########################### The following piece of code is modified from predict.lme( ) from the nlme package
#################################################################################################################
mfArgs <- list(formula = TT, data = newdata)
dataMix <- do.call("model.frame", mfArgs)
## making sure factor levels are the same as in contrasts
###########this part creates a list to hold factors - changed from nlme
orig.frame <- mod@frame
##matrix with info on factors
fact.frame <- attr(attr(orig.frame, "terms"), "dataClasses")[-1]
##continue if factors
if(any(fact.frame == "factor")) {
id.factors <- which(fact.frame == "factor")
fact.name <- names(fact.frame)[id.factors] #identify the rows for factors
contr <- list( )
for(j in fact.name) {
contr[[j]] <- contrasts(orig.frame[, j])
}
}
##########end of code to create list changed from nlme
for(i in names(dataMix)) {
if (inherits(dataMix[,i], "factor") && !is.null(contr[[i]])) {
levs <- levels(dataMix[,i])
levsC <- rownames(contr[[i]])
if (any(wch <- is.na(match(levs, levsC)))) {
stop(paste("Levels", paste(levs[wch], collapse = ","),
"not allowed for", i))
}
attr(dataMix[,i], "contrasts") <- contr[[i]][levs, , drop = FALSE]
}
}
#################################################################################################################
########################### The previous clever piece of code is modified from predict.lme( ) from nlme package
#################################################################################################################
###############################################
###############################################
### THIS BIT IS MODIFIED FOR OFFSET
###############################################
##check for offset
if(length(mod@frame$offset) > 0) {
calls <- attr(TT, "variables")
off.num <- attr(TT, "offset")
##offset values
offset.values <- eval(calls[[off.num+1]], newdata)
}
###############################################
###END OF MODIFICATIONS FOR OFFSET
###############################################
###############################################
##m <- model.frame(TT, data = dataMix)
##m <- model.frame(TT, data = newdata) gives error when offset is converted to log( ) scale within call
des.matrix <- model.matrix(TT, dataMix)
newdata <- des.matrix #we now have a design matrix
######START OF PREDICT FUNCTION
######
fix.coef <- fixef(mod)
ncoefs <- length(fix.coef)
names.coef <- labels(fix.coef)
nvals <- dim(newdata)[1]
##check for intercept fixed effect term in model
int.yes <- any(names.coef == "(Intercept)")
##if no intercept term, return error
if(!int.yes) stop("\nThis function does not work with models excluding the intercept\n")
formula <- character(length=ncoefs)
nbetas <- ncoefs - 1
if(int.yes & nbetas >= 1) {
##create loop to construct formula for derivative
formula <- paste("Beta", 1:nbetas, sep="")
formula <- c("Beta0", formula)
} else {
if(int.yes & nbetas == 0) {
formula <- "Beta0"
}
}
##for models without intercept - formula <- paste("Beta", 1:ncoefs, sep="")
##a loop to assemble formula
##first, identify interaction terms
inters <- rep(NA, ncoefs)
for (m in 1:ncoefs) {
inters[m] <- attr(regexpr(pattern = ":", text = names.coef[m]), "match.length")
}
##change the name of the labels for flexibility
names.cov <- paste("cov", 1:ncoefs-1, sep="")
if(!int.yes) {names.cov <- paste("cov", 1:ncoefs, sep="")}
id <- which(inters == 1)
for (k in seq_along(id)) { #seq_along avoids iterating over c(1, 0) when no interaction terms are present
names.cov[id[k]] <- paste("inter", k, sep="")
}
##iterate and combine betas and covariates
formula2 <- character(length = ncoefs)
for(b in 1:ncoefs) {
formula2[b] <- paste(formula[b], names.cov[b], sep="*")
}
##replace with Beta0 if fixed intercept term present
if(int.yes) {formula2[1] <- "Beta0"}
##collapse into a single equation and convert to expression
##parse returns the unevaluated expression
eq.space <- parse(text = as.expression(paste(formula2, collapse="+")),
srcfile = NULL)
##add step to remove white space to avoid reaching 500 character limit
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(eq.space))
equation <- parse(text = as.expression(no.space))
##############################################
########BEGIN MODIFIED FOR OFFSET############
##############################################
##############################################
if(length(mod@frame$offset) > 0) {
##iterate and combine betas and covariates
formula2 <- character(length = ncoefs+1)
for(b in 1:ncoefs) {
formula2[b] <- paste(formula[b], names.cov[b], sep="*")
}
##add offset variable to formula
formula2[ncoefs+1] <- "offset.vals"
##replace with Beta0 if fixed intercept term present
if(int.yes) {formula2[1] <- "Beta0"}
##collapse into a single equation and convert to expression
eq.space <- parse(text = as.expression(paste(formula2, collapse="+")),
srcfile = NULL)
##add step to remove white space to avoid reaching 500 character limit
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(eq.space))
equation <- parse(text = as.expression(no.space))
}
##############################################
########END MODIFIED FOR OFFSET###############
##############################################
##############################################
##determine number of covariates excluding interaction terms
ncovs <- ncoefs - length(id)
##assign values of covariates
cov.values <- list( )
##if only intercept, then add column
if(int.yes && ncovs == 1) {
cov.values[[1]] <- 1
}
if(int.yes && ncovs > 1) {
cov.values[[1]] <- rep(1, nvals)
for (q in 2:ncoefs) {
cov.values[[q]] <- newdata[, labels(fix.coef)[q]]
}
} else {
for (q in 1:ncoefs) {
cov.values[[q]] <- newdata[, labels(fix.coef)[q]]
}
}
names(cov.values) <- names.cov
cov.values.mat <- matrix(data = unlist(cov.values), nrow = nvals, ncol = ncoefs)
################################################################
####use the following code to compute predicted values and SE's
####on response scale if identity link is used OR link scale
if((identical(type, "response") && identical(link.type, "identity")) || (identical(type, "link"))) {
if(identical(se.fit, TRUE)) {
##determine number of partial derivatives to compute
part.devs <- list( )
for(j in 1:ncoefs) {
part.devs[[j]] <- D(equation, formula[j])
}
}
if(identical(se.fit, TRUE)) {
##substitute a given row for each covariate
predicted.SE <- matrix(NA, nrow = nvals, ncol = 2)
colnames(predicted.SE) <- c("Pred.value", "SE")
rownames(predicted.SE) <- 1:nvals
part.devs.eval <- list( )
part.devs.eval[[1]] <- 1
for (w in 1:nvals) {
if(int.yes && ncovs > 1) {
for (p in 2:ncoefs) {
part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
}
} # else { ##for cases without intercept
##for (p in 1:ncoefs) {
## part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
##}
##}
part.devs.solved <- unlist(part.devs.eval)
##extract vc matrix
vcmat <- vcov(mod)
        mat_partialdevs<-as.matrix(part.devs.solved) #column vector (ncoefs x 1) of partial derivatives
        mat_tpartialdevs<-t(part.devs.solved) #transposed row vector (1 x ncoefs) of partial derivatives
var_hat<-mat_tpartialdevs%*%vcmat%*%mat_partialdevs
SE<-sqrt(var_hat)
predicted.vals <- fix.coef%*%cov.values.mat[w,]
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
if(length(mod@frame$offset) > 0) {
predicted.vals <- fix.coef%*%cov.values.mat[w,] + offset.values[w]
}
######################################
###END MODIFIED FOR OFFSET
######################################
predicted.SE[w, 1] <- predicted.vals
predicted.SE[w, 2] <- SE@x #to extract only value computed
}
out.fit.SE <- list(fit = predicted.SE[,"Pred.value"], se.fit = predicted.SE[, "SE"])
} else {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 1)
colnames(predicted.SE) <- c("Pred.value")
rownames(predicted.SE) <- 1:nvals
for (w in 1:nvals) {
predicted.vals <- fix.coef%*%cov.values.mat[w,]
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
if(length(mod@frame$offset) > 0) {
predicted.vals <- fix.coef%*%cov.values.mat[w,] + offset.values[w]
}
######################################
###END MODIFIED FOR OFFSET
######################################
predicted.SE[w, 1] <- predicted.vals
}
out.fit.SE <- predicted.SE
colnames(out.fit.SE) <- "fit"
}
}
###################################################################################
###################################################################################
####use the following code to compute predicted values and SE's
####on response scale if other than identity link is used
###################################################################################
###################################################################################
if(identical(type, "response") && !identical(link.type, "identity")) {
##for binomial GLMM with logit link
if(identical(link.type, "logit")) {
##build partial derivatives
logit.eq.space <- parse(text = as.expression(paste("exp(", equation, ")/(1+exp(", equation, "))")),
srcfile = NULL)
##add step to remove white space to avoid reaching 500 character limit
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(logit.eq.space))
logit.eq <- parse(text = as.expression(no.space))
part.devs <- list( )
for(j in 1:ncoefs) {
part.devs[[j]] <- D(logit.eq, formula[j])
}
}
##for poisson, gaussian or Gamma GLMM with log link
if(identical(link.type, "log")) {
##build partial derivatives
log.eq.space <- parse(text = as.expression(paste("exp(", equation, ")")),
srcfile = NULL)
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(log.eq.space))
log.eq <- parse(text = as.expression(no.space))
part.devs <- list( )
for(j in 1:ncoefs) {
part.devs[[j]] <- D(log.eq, formula[j])
}
}
##assign values of beta estimates to beta parameters
beta.vals <- fix.coef
names(beta.vals) <- formula
##neat way of assigning beta estimate values to objects using names in beta.vals
for(d in 1:ncoefs) {
assign(names(beta.vals)[d], beta.vals[d])
}
if(identical(se.fit, TRUE)) {
##substitute a given row for each covariate
predicted.SE <- matrix(NA, nrow = nvals, ncol = 2)
colnames(predicted.SE) <- c("Pred.value", "SE")
rownames(predicted.SE) <- 1:nvals
part.devs.eval <- list( )
for (w in 1:nvals) {
if(int.yes && ncovs > 1) {
for (p in 1:ncoefs) {
cmds <- list( )
for(r in 2:ncoefs) {
##create commands
cmds[[r]] <- paste(names.cov[r], "=", "cov.values[[names.cov[", r, "]]][", w, "]")
}
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
##if offset present, add in equation
if(length(mod@frame$offset) > 0) {
cmds[[ncoefs+1]] <- paste("offset.vals = offset.values[w]")
}
######################################
###END MODIFIED FOR OFFSET
######################################
##assemble commands
cmd.arg <- paste(unlist(cmds), collapse = ", ")
cmd.eval <- paste("eval(expr = part.devs[[", p, "]],", "envir = list(", cmd.arg, ")", ")")
##evaluate partial derivative
part.devs.eval[[p]] <- eval(parse(text = cmd.eval))
}
}
if(int.yes && ncovs == 1) { #for cases with intercept only
part.devs.eval[[1]] <- eval(part.devs[[1]])
}
## else { ##for cases without intercept
##for (p in 1:ncoefs) {
## part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
##}
##}
part.devs.solved <- unlist(part.devs.eval)
##extract vc matrix
vcmat <- vcov(mod)
        mat_partialdevs <- as.matrix(part.devs.solved) #column vector (ncoefs x 1) of partial derivatives
        mat_tpartialdevs <- t(part.devs.solved) #transposed row vector (1 x ncoefs) of partial derivatives
var_hat <- mat_tpartialdevs %*% vcmat%*%mat_partialdevs
SE <- sqrt(var_hat)
predicted.vals <- fix.coef %*% cov.values.mat[w,]
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
if(length(mod@frame$offset) > 0) {
predicted.vals <- fix.coef%*%cov.values.mat[w,] + offset.values[w]
}
######################################
###END MODIFIED FOR OFFSET
######################################
if(identical(link.type, "logit")) {
predicted.SE[w, 1] <- exp(predicted.vals)/(1 + exp(predicted.vals))
} else {
if(identical(link.type, "log")) {
predicted.SE[w, 1] <- exp(predicted.vals)
}
}
predicted.SE[w, 2] <- SE@x #to extract only value computed
}
out.fit.SE <- list(fit = predicted.SE[,"Pred.value"], se.fit = predicted.SE[, "SE"])
} else {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 1)
colnames(predicted.SE) <- c("Pred.value")
rownames(predicted.SE) <- 1:nvals
for (w in 1:nvals) {
predicted.vals <- fix.coef%*%cov.values.mat[w,]
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
if(length(mod@frame$offset) > 0) {
predicted.vals <- fix.coef%*%cov.values.mat[w,] + offset.values[w]
}
######################################
###END MODIFIED FOR OFFSET
######################################
if(identical(link.type, "logit")) {
predicted.SE[w, 1] <- exp(predicted.vals)/(1 + exp(predicted.vals))
} else {
if(identical(link.type, "log")) {
predicted.SE[w, 1] <- exp(predicted.vals)
}
}
}
out.fit.SE <- predicted.SE
colnames(out.fit.SE) <- "fit"
}
}
###################################################################
##print as nice matrix, otherwise print as list
if(identical(print.matrix, TRUE)) {
out.fit.SE <- predicted.SE
if(identical(se.fit, TRUE)) {
colnames(out.fit.SE) <- c("fit", "se.fit")
} else {
colnames(out.fit.SE) <- c("fit")
}
}
return(out.fit.SE)
}
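##Illustrative usage of predictSE.merMod (commented out; 'dat' and the model
##below are hypothetical, using a Gaussian LMM so the identity link applies):
##lm1 <- lme4::lmer(y ~ x1 + x2 + (1 | group), data = dat)
##predictSE.merMod(lm1, newdata = data.frame(x1 = 0, x2 = 1), se.fit = TRUE)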
##unmarkedFitPCount
##compute predicted values and SE
predictSE.unmarkedFitPCount <- function(mod, newdata, se.fit = TRUE, print.matrix = FALSE,
type = "response", c.hat = 1, parm.type = "lambda", ...) {
##only response scale is supported for ZIP models
if(identical(type, "link")) stop("\nLink scale not yet supported for predictions of this model type\n")
##extract data from model object
if(!is.data.frame(newdata)) stop("\n'newdata' must be a data frame\n")
new.data.set <- newdata
##nobs
nvals <- nrow(new.data.set)
##extract variables on lambda for pcount( ) model
lam.est <- coef(mod@estimates@estimates$state)
lam.est.noint <- lam.est[-1]
##total parameters lambda + psi
ncoefs <- length(lam.est) + 1
##number of parameters on lambda
n.est.lam <- length(lam.est)
##extract variables on psi
psi.est <- coef(mod@estimates@estimates$psi)
##check if NULL
if(is.null(psi.est)) stop("\nThis function is only for zero-inflated Poisson mixture:\nuse \'predict\' for other cases\n")
##full model labels
mod.lab <- labels(coef(mod))
##extract labels
lam.lab <- labels(lam.est)
lam.lab.noint <- lam.lab[-1]
psi.lab <- labels(psi.est)
##extract formula from model
formula <- mod@formula
##if lambda
if(identical(parm.type, "lambda")) {
form <- as.formula(paste("~", formula[3], sep="")) #state
} else {
    stop("\nThis function only supports predictions on lambda\n")
}
##extract model frame matrix
Mat <- model.frame(formula = form, data = new.data.set)
des.mat <- model.matrix(form, Mat)
##########################################
##########################################
##check for offset
X.offset <- model.offset(Mat)
if(is.null(X.offset)) {
X.offset <- rep(0, nrow(Mat))
}
##check for intercept
if(identical(parm.type, "lambda")) {int.yes <- any(lam.lab == "lam(Int)")}
##if no intercept term, return error
if(!int.yes) stop("\nThis function does not work with models excluding the intercept terms: change model parameterization\n")
##number of estimates (not counting intercept)
n.est <- n.est.lam - 1
if(n.est.lam > 1) {
##create a list holding each cov
covs <- list( )
for(i in 1:n.est) {
covs[[i]] <- paste("cov", i, sep = "")
}
##covariate labels
cov.labels <- unlist(covs)
##change names of columns in design matrix
colnames(des.mat) <- c("(Int)", unlist(covs))
} else {colnames(des.mat) <- "(Int)"}
##names of columns in design matrix
design.names <- colnames(des.mat)
##extract values from new.data.set
cov.values <- list( )
for(i in 1:n.est.lam) {
cov.values[[i]] <- des.mat[, design.names[i]]
}
names(cov.values) <- design.names
##build equation
##iterate over betas except first
if(n.est.lam > 1) {
##betas
betas <- paste("beta", 0:(n.est), sep = "")
betas.noint <- betas[-1]
temp.eq <- list( )
for(i in 1:length(betas.noint)){
temp.eq[i] <- paste(betas.noint[i], "*", covs[i], sep = " ")
}
##linear predictor log scale
lam.eq.log <- paste(c("beta0", unlist(temp.eq)), collapse = " + ")
} else {
betas <- "beta0"
lam.eq.log <- betas
}
##linear predictor back-transformed to the response scale (includes offset)
lam.eq.resp <- paste("exp(", lam.eq.log, "+ Val.offset", ")")
##logit scale for psi0 (zero-inflation intercept)
psi.eq <- paste("(1 - (exp(psi0)/(1 + exp(psi0))))")
##combine both parameters to get abundance
final.eq <- paste(lam.eq.resp, "*", psi.eq)
##total estimates
tot.est.names <- c(betas, "psi0")
if(n.est.lam > 1) {
tot.est <- c(tot.est.names, cov.labels)
} else {
tot.est <- c(tot.est.names)
}
##extract vcov matrix
##multiply by c.hat
vcmat <- vcov(mod)[c(lam.lab, psi.lab), c(lam.lab, psi.lab)] * c.hat
##################################
##start modifications
##################################
eq.space <- parse(text = as.expression(paste(final.eq, collapse = "+")),
srcfile = NULL)
no.space <- gsub("[[:space:]]+", "", as.character(eq.space))
equation <- parse(text = as.expression(no.space))
if (identical(se.fit, TRUE)) {
part.devs <- list( )
for (j in 1:ncoefs) {
part.devs[[j]] <- D(equation, tot.est.names[j])
}
}
##assign values of betas and psi
for(i in 1:n.est.lam) {
assign(betas[i], lam.est[i])
}
psi0 <- psi.est
cov.values.mat <- matrix(data = unlist(cov.values), nrow = nvals,
ncol = n.est.lam)
if (identical(se.fit, TRUE)) {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 2)
colnames(predicted.SE) <- c("Pred.value", "SE")
rownames(predicted.SE) <- 1:nvals
pred.eq <- list()
##extract columns
for (w in 1:nvals) {
if (int.yes) {
for (p in 1:n.est.lam) {
pred.eq[[p]] <- des.mat[w, design.names[p]]
}
}
##values from design matrix
design.vals <- unlist(pred.eq)
##add value of offset for w
Val.offset <- X.offset[w]
##compute values for betas
exp.beta.pred <- exp(lam.est %*% design.vals + Val.offset)
##compute predictions including psi
predicted.vals <- exp.beta.pred * (1 - (exp(psi.est)/(1 + exp(psi.est))))
##assign values for covariates
##columns for covariates only - exclude intercept
if(n.est.lam > 1) {
design.covs <- design.vals[-1]
for (p in 1:length(cov.labels)) {
assign(cov.labels[p], design.covs[p])
}
}
##evaluate partial derivative
part.devs.solved <- list( )
for (j in 1:ncoefs) {
part.devs.solved[[j]] <- eval(part.devs[[j]])
}
mat_partialdevs <- as.matrix(unlist(part.devs.solved))
mat_tpartialdevs <- t(mat_partialdevs)
var_hat <- mat_tpartialdevs %*% vcmat %*% mat_partialdevs
SE <- sqrt(var_hat)
predicted.SE[w, 1] <- predicted.vals
predicted.SE[w, 2] <- SE
}
out.fit.SE <- list(fit = predicted.SE[, "Pred.value"],
se.fit = predicted.SE[, "SE"])
} else {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 1)
colnames(predicted.SE) <- c("Pred.value")
rownames(predicted.SE) <- 1:nvals
pred.eq <- list( )
##extract columns
for (w in 1:nvals) {
if (int.yes) {
for (p in 1:n.est.lam) {
pred.eq[[p]] <- des.mat[w, design.names[p]]
}
}
##values from design matrix
design.vals <- unlist(pred.eq)
##compute values for betas
exp.beta.pred <- exp(lam.est %*% design.vals)
##compute predictions including psi
predicted.vals <- exp.beta.pred * (1 - (exp(psi.est)/(1 + exp(psi.est))))
predicted.SE[w, 1] <- predicted.vals
}
out.fit.SE <- predicted.SE
colnames(out.fit.SE) <- "fit"
}
if (identical(print.matrix, TRUE)) {
out.fit.SE <- predicted.SE
if (identical(se.fit, TRUE)) {
colnames(out.fit.SE) <- c("fit", "se.fit")
}
else {
colnames(out.fit.SE) <- c("fit")
}
}
return(out.fit.SE)
}
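##----------------------------------------------------------------------
##Illustrative usage sketch (kept commented out so nothing is evaluated
##at load time): assumes the 'unmarked' package is installed;
##'count.mat' (a site x visit count matrix) and 'site.covs' (a data
##frame containing 'elev') are hypothetical objects.
##umf <- unmarked::unmarkedFramePCount(y = count.mat, siteCovs = site.covs)
##zip.fit <- unmarked::pcount(~ 1 ~ elev, data = umf, mixture = "ZIP")
##predictSE(zip.fit, newdata = data.frame(elev = c(-1, 0, 1)),
##          type = "response", parm.type = "lambda")
##----------------------------------------------------------------------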
##unmarkedFitPCO
##compute predicted values and SE
predictSE.unmarkedFitPCO <- function(mod, newdata, se.fit = TRUE, print.matrix = FALSE,
type = "response", c.hat = 1, parm.type = "lambda", ...) {
##only response scale is supported for ZIP models
if(identical(type, "link")) stop("\nLink scale not supported for predictions of this model type\n")
##extract data from model object
if(!is.data.frame(newdata)) stop("\n'newdata' must be a data frame\n")
new.data.set <- newdata
##nobs
nvals <- nrow(new.data.set)
##extract variables on lambda for pcountOpen( ) model
lam.est <- coef(mod@estimates@estimates$lambda)
lam.est.noint <- lam.est[-1]
##total parameters lambda + psi
ncoefs <- length(lam.est) + 1
##number of parameters on lambda
n.est.lam <- length(lam.est)
##extract variables on psi
psi.est <- coef(mod@estimates@estimates$psi)
##check if NULL
if(is.null(psi.est)) stop("\nThis function is only for zero-inflated Poisson mixture:\nuse \'predict\' for other cases\n")
##full model labels
mod.lab <- labels(coef(mod))
##extract labels
lam.lab <- labels(lam.est)
lam.lab.noint <- lam.lab[-1]
psi.lab <- labels(psi.est)
##extract formula from model
formula <- mod@formula
##if lambda
if(identical(parm.type, "lambda")) {
form <- mod@formlist$lambdaformula
} else {
stop("\nThis function only supports predictions on lambda\n")
}
##extract model frame matrix
Mat <- model.frame(formula = form, data = new.data.set)
des.mat <- model.matrix(form, Mat)
##########################################
##########################################
##check for offset
X.offset <- model.offset(Mat)
if(is.null(X.offset)) {
X.offset <- rep(0, nrow(Mat))
}
##check for intercept
if(identical(parm.type, "lambda")) {int.yes <- any(lam.lab == "lam(Int)")}
##if no intercept term, return error
if(!int.yes) stop("\nThis function does not work with models excluding the intercept terms: change model parameterization\n")
##number of estimates (not counting intercept)
n.est <- n.est.lam - 1
if(n.est.lam > 1) {
##create a list holding each cov
covs <- list( )
for(i in 1:n.est) {
covs[[i]] <- paste("cov", i, sep = "")
}
##covariate labels
cov.labels <- unlist(covs)
##change names of columns in design matrix
colnames(des.mat) <- c("(Int)", unlist(covs))
} else {colnames(des.mat) <- "(Int)"}
##names of columns in design matrix
design.names <- colnames(des.mat)
##extract values from new.data.set
cov.values <- list( )
for(i in 1:n.est.lam) {
cov.values[[i]] <- des.mat[, design.names[i]]
}
names(cov.values) <- design.names
##build equation
##iterate over betas except first
if(n.est.lam > 1) {
##betas
betas <- paste("beta", 0:(n.est), sep = "")
betas.noint <- betas[-1]
temp.eq <- list( )
for(i in 1:length(betas.noint)){
temp.eq[i] <- paste(betas.noint[i], "*", covs[i], sep = " ")
}
##linear predictor log scale
lam.eq.log <- paste(c("beta0", unlist(temp.eq)), collapse = " + ")
} else {
betas <- "beta0"
lam.eq.log <- betas
}
##linear predictor back-transformed to the response scale (includes offset)
lam.eq.resp <- paste("exp(", lam.eq.log, "+ Val.offset", ")")
##logit scale for psi0 (zero-inflation intercept)
psi.eq <- paste("(1 - (exp(psi0)/(1 + exp(psi0))))")
##combine both parameters to get abundance
final.eq <- paste(lam.eq.resp, "*", psi.eq)
##total estimates
tot.est.names <- c(betas, "psi0")
if(n.est.lam > 1) {
tot.est <- c(tot.est.names, cov.labels)
} else {
tot.est <- c(tot.est.names)
}
##extract vcov matrix
##multiply by c.hat
vcmat <- vcov(mod)[c(lam.lab, psi.lab), c(lam.lab, psi.lab)] * c.hat
##################################
##start modifications
##################################
eq.space <- parse(text = as.expression(paste(final.eq, collapse = "+")),
srcfile = NULL)
no.space <- gsub("[[:space:]]+", "", as.character(eq.space))
equation <- parse(text = as.expression(no.space))
if (identical(se.fit, TRUE)) {
part.devs <- list( )
for (j in 1:ncoefs) {
part.devs[[j]] <- D(equation, tot.est.names[j])
}
}
##assign values of betas and psi
for(i in 1:n.est.lam) {
assign(betas[i], lam.est[i])
}
psi0 <- psi.est
cov.values.mat <- matrix(data = unlist(cov.values), nrow = nvals,
ncol = n.est.lam)
if (identical(se.fit, TRUE)) {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 2)
colnames(predicted.SE) <- c("Pred.value", "SE")
rownames(predicted.SE) <- 1:nvals
pred.eq <- list()
##extract columns
for (w in 1:nvals) {
if (int.yes) {
for (p in 1:n.est.lam) {
pred.eq[[p]] <- des.mat[w, design.names[p]]
}
}
##values from design matrix
design.vals <- unlist(pred.eq)
##add value of offset for w
Val.offset <- X.offset[w]
##compute values for betas
exp.beta.pred <- exp(lam.est %*% design.vals + Val.offset)
##compute predictions including psi
predicted.vals <- exp.beta.pred * (1 - (exp(psi.est)/(1 + exp(psi.est))))
##assign values for covariates
##columns for covariates only - exclude intercept
if(n.est.lam > 1) {
design.covs <- design.vals[-1]
for (p in 1:length(cov.labels)) {
assign(cov.labels[p], design.covs[p])
}
}
##evaluate partial derivative
part.devs.solved <- list( )
for (j in 1:ncoefs) {
part.devs.solved[[j]] <- eval(part.devs[[j]])
}
mat_partialdevs <- as.matrix(unlist(part.devs.solved))
mat_tpartialdevs <- t(mat_partialdevs)
var_hat <- mat_tpartialdevs %*% vcmat %*% mat_partialdevs
SE <- sqrt(var_hat)
predicted.SE[w, 1] <- predicted.vals
predicted.SE[w, 2] <- SE
}
out.fit.SE <- list(fit = predicted.SE[, "Pred.value"],
se.fit = predicted.SE[, "SE"])
} else {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 1)
colnames(predicted.SE) <- c("Pred.value")
rownames(predicted.SE) <- 1:nvals
pred.eq <- list( )
##extract columns
for (w in 1:nvals) {
if (int.yes) {
for (p in 1:n.est.lam) {
pred.eq[[p]] <- des.mat[w, design.names[p]]
}
}
##values from design matrix
design.vals <- unlist(pred.eq)
##compute values for betas
exp.beta.pred <- exp(lam.est %*% design.vals)
##compute predictions including psi
predicted.vals <- exp.beta.pred * (1 - (exp(psi.est)/(1 + exp(psi.est))))
predicted.SE[w, 1] <- predicted.vals
}
out.fit.SE <- predicted.SE
colnames(out.fit.SE) <- "fit"
}
if (identical(print.matrix, TRUE)) {
out.fit.SE <- predicted.SE
if (identical(se.fit, TRUE)) {
colnames(out.fit.SE) <- c("fit", "se.fit")
}
else {
colnames(out.fit.SE) <- c("fit")
}
}
return(out.fit.SE)
}
##########################
##########################
##lmerModLmerTest fits
predictSE.lmerModLmerTest <- function(mod, newdata, se.fit = TRUE, print.matrix = FALSE, level = 0, ...){
##logical test for level
if(!identical(level, 0)) stop("\nThis function does not support computation of predicted values\n",
"or standard errors for higher levels of nesting\n")
##this part of code converts data.frame (including factors) into design matrix of model
tt <- terms(mod)
TT <- delete.response(tt)
newdata <- as.data.frame(newdata)
#################################################################################################################
########################### This following clever piece of code is modified from predict.lme( ) from nlme package
#################################################################################################################
mfArgs <- list(formula = TT, data = newdata)
dataMix <- do.call("model.frame", mfArgs)
## making sure factor levels are the same as in contrasts
###########this part creates a list to hold factors - changed from nlme
orig.frame <- mod@frame
##named vector with the class of each predictor
fact.frame <- attr(attr(orig.frame, "terms"), "dataClasses")[-1]
##continue if factors
if(any(fact.frame == "factor")) {
id.factors <- which(fact.frame == "factor")
fact.name <- names(fact.frame)[id.factors] #identify the rows for factors
contr <- list( )
for(j in fact.name) {
contr[[j]] <- contrasts(orig.frame[, j])
}
}
##########end of code to create list changed from nlme
for(i in names(dataMix)) {
if (inherits(dataMix[,i], "factor") && !is.null(contr[[i]])) {
levs <- levels(dataMix[,i])
levsC <- rownames(contr[[i]])
if (any(wch <- is.na(match(levs, levsC)))) {
stop(paste("Levels", paste(levs[wch], collapse = ","),
"not allowed for", i))
}
attr(dataMix[,i], "contrasts") <- contr[[i]][levs, , drop = FALSE]
}
}
#################################################################################################################
########################### The previous clever piece of code is modified from predict.lme( ) from nlme package
#################################################################################################################
###############################################
###############################################
### THIS BIT IS MODIFIED FOR OFFSET
###############################################
##check for offset
if(length(mod@frame$offset) > 0) {
calls <- attr(TT, "variables")
off.num <- attr(TT, "offset")
##offset values
offset.values <- eval(calls[[off.num+1]], newdata)
}
###############################################
###END OF MODIFICATIONS FOR OFFSET
###############################################
###############################################
##m <- model.frame(TT, data = dataMix)
##m <- model.frame(TT, data = newdata) gives error when offset is converted to log( ) scale within call
des.matrix <- model.matrix(TT, dataMix)
newdata <- des.matrix #we now have a design matrix
######START OF PREDICT FUNCTION
######
fix.coef <- fixef(mod)
ncoefs <- length(fix.coef)
names.coef <- labels(fix.coef)
nvals <- dim(newdata)[1]
##check for intercept fixed effect term in model
int.yes <- any(names.coef == "(Intercept)")
##if no intercept term, return error
if(!int.yes) stop("\nThis function does not work with models excluding the intercept\n")
formula <- character(length=ncoefs)
nbetas <- ncoefs - 1
if(int.yes & nbetas >= 1) {
##create loop to construct formula for derivative
formula <- paste("Beta", 1:nbetas, sep="")
formula <- c("Beta0", formula)
} else {
if(int.yes & nbetas == 0) {
formula <- "Beta0"
}
}
##for models without intercept - formula <- paste("Beta", 1:ncoefs, sep="")
##a loop to assemble formula
##first, identify interaction terms
inters <- rep(NA, ncoefs)
for (m in 1:ncoefs) {
inters[m] <- attr(regexpr(pattern = ":", text = names.coef[m]), "match.length")
}
##change the name of the labels for flexibility
names.cov <- paste("cov", 1:ncoefs-1, sep="")
if(!int.yes) {names.cov <- paste("cov", 1:ncoefs, sep="")}
id <- which(inters == 1)
  ##rename interaction terms, if any (guard avoids iterating over 1:0 when none are present)
  if(length(id) > 0) {
    for (k in seq_along(id)) {
      names.cov[id[k]] <- paste("inter", k, sep="")
    }
  }
##iterate and combine betas and covariates
formula2 <- character(length = ncoefs)
for(b in 1:ncoefs) {
formula2[b] <- paste(formula[b], names.cov[b], sep="*")
}
##replace with Beta0 if fixed intercept term present
if(int.yes) {formula2[1] <- "Beta0"}
##collapse into a single equation and convert to expression
##parse returns the unevaluated expression
eq.space <- parse(text = as.expression(paste(formula2, collapse="+")),
srcfile = NULL)
##add step to remove white space to avoid reaching 500 character limit
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(eq.space))
equation <- parse(text = as.expression(no.space))
##############################################
########BEGIN MODIFIED FOR OFFSET############
##############################################
##############################################
if(length(mod@frame$offset) > 0) {
##iterate and combine betas and covariates
formula2 <- character(length = ncoefs+1)
for(b in 1:ncoefs) {
formula2[b] <- paste(formula[b], names.cov[b], sep="*")
}
##add offset variable to formula
formula2[ncoefs+1] <- "offset.vals"
##replace with Beta0 if fixed intercept term present
if(int.yes) {formula2[1] <- "Beta0"}
##collapse into a single equation and convert to expression
eq.space <- parse(text = as.expression(paste(formula2, collapse="+")),
srcfile = NULL)
##add step to remove white space to avoid reaching 500 character limit
##remove space within expression
no.space <- gsub("[[:space:]]+", "", as.character(eq.space))
equation <- parse(text = as.expression(no.space))
}
##############################################
########END MODIFIED FOR OFFSET###############
##############################################
##############################################
##determine number of covariates excluding interaction terms
ncovs <- ncoefs - length(id)
##assign values of covariates
cov.values <- list( )
##if only intercept, then add column
if(int.yes && ncovs == 1) {
cov.values[[1]] <- 1
}
if(int.yes && ncovs > 1) {
cov.values[[1]] <- rep(1, nvals)
for (q in 2:ncoefs) {
cov.values[[q]] <- newdata[, labels(fix.coef)[q]]
}
} else {
for (q in 1:ncoefs) {
cov.values[[q]] <- newdata[, labels(fix.coef)[q]]
}
}
names(cov.values) <- names.cov
cov.values.mat <- matrix(data = unlist(cov.values), nrow = nvals, ncol = ncoefs)
################################################################
####use the following code to compute predicted values and SE's
####on response scale if identity link is used OR link scale
if(identical(se.fit, TRUE)) {
##determine number of partial derivatives to compute
part.devs <- list( )
for(j in 1:ncoefs) {
part.devs[[j]] <- D(equation, formula[j])
}
}
if(identical(se.fit, TRUE)) {
##substitute a given row for each covariate
predicted.SE <- matrix(NA, nrow = nvals, ncol = 2)
colnames(predicted.SE) <- c("Pred.value", "SE")
rownames(predicted.SE) <- 1:nvals
part.devs.eval <- list( )
part.devs.eval[[1]] <- 1
for (w in 1:nvals) {
if(int.yes && ncovs > 1) {
for (p in 2:ncoefs) {
part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
}
} # else { ##for cases without intercept
##for (p in 1:ncoefs) {
## part.devs.eval[[p]] <- cov.values[names.cov[p]][[1]][w]
##}
##}
part.devs.solved <- unlist(part.devs.eval)
##extract vc matrix
vcmat <- vcov(mod)
      mat_partialdevs <- as.matrix(part.devs.solved) #column vector of partial derivatives (ncoefs x 1)
      mat_tpartialdevs <- t(part.devs.solved) #transpose (1 x ncoefs) for the delta method
      var_hat <- mat_tpartialdevs %*% vcmat %*% mat_partialdevs
      SE <- sqrt(var_hat)
      predicted.vals <- fix.coef %*% cov.values.mat[w, ]
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
if(length(mod@frame$offset) > 0) {
predicted.vals <- fix.coef%*%cov.values.mat[w,] + offset.values[w]
}
######################################
###END MODIFIED FOR OFFSET
######################################
predicted.SE[w, 1] <- predicted.vals
predicted.SE[w, 2] <- SE@x #to extract only value computed
}
out.fit.SE <- list(fit = predicted.SE[,"Pred.value"], se.fit = predicted.SE[, "SE"])
} else {
predicted.SE <- matrix(NA, nrow = nvals, ncol = 1)
colnames(predicted.SE) <- c("Pred.value")
rownames(predicted.SE) <- 1:nvals
for (w in 1:nvals) {
predicted.vals <- fix.coef%*%cov.values.mat[w,]
######################################
###BEGIN MODIFIED FOR OFFSET
######################################
if(length(mod@frame$offset) > 0) {
predicted.vals <- fix.coef%*%cov.values.mat[w,] + offset.values[w]
}
######################################
###END MODIFIED FOR OFFSET
######################################
predicted.SE[w, 1] <- predicted.vals
}
out.fit.SE <- predicted.SE
colnames(out.fit.SE) <- "fit"
}
###################################################################
##print as nice matrix, otherwise print as list
if(identical(print.matrix, TRUE)) {
out.fit.SE <- predicted.SE
if(identical(se.fit, TRUE)) {
colnames(out.fit.SE) <- c("fit", "se.fit")
} else {
colnames(out.fit.SE) <- c("fit")
}
}
return(out.fit.SE)
}
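##----------------------------------------------------------------------
##Illustrative usage sketch (kept commented out, not evaluated): assumes
##the 'lmerTest' and 'lme4' packages are installed; uses the
##'sleepstudy' data set shipped with lme4.
##m1 <- lmerTest::lmer(Reaction ~ Days + (1 | Subject), data = lme4::sleepstudy)
##predictSE(m1, newdata = data.frame(Days = 0:9), se.fit = TRUE)
##----------------------------------------------------------------------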
| /scratch/gouwar.j/cran-all/cranData/AICcmodavg/R/predictSE.R |
##generic
summaryOD <- function(mod, c.hat = 1, conf.level = 0.95,
out.type = "confint", ...){
UseMethod("summaryOD", mod)
}
summaryOD.default <- function(mod, c.hat = 1, conf.level = 0.95,
out.type = "confint", ...){
stop("\nFunction not yet defined for this object class\n")
}
##summaryOD: summary with overdispersion to display CI or P-values
summaryOD.glm <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily <- family(mod)$family
if(!identical(modFamily, "poisson") && !identical(modFamily, "binomial")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n\n")
}
##for binomial, check that number of trials > 1
if(identical(modFamily, "binomial")) {
if(!any(mod$prior.weights > 1)) stop("\nOverdispersion correction only appropriate for success/trials syntax\n\n")
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = conf.level)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
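##----------------------------------------------------------------------
##Illustrative usage sketch (kept commented out, not evaluated): a
##binomial glm with success/trials syntax; 'dat' with columns
##'success', 'trials', and 'x' is hypothetical, and the c.hat value
##would typically come from a goodness-of-fit estimate (e.g., c_hat( )).
##m1 <- glm(cbind(success, trials - success) ~ x, family = binomial, data = dat)
##summaryOD(m1, c.hat = 1.8, out.type = "confint")
##summaryOD(m1, c.hat = 1.8, out.type = "nhst")
##----------------------------------------------------------------------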
##occu
summaryOD.unmarkedFitOccu <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = conf.level)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##colext
summaryOD.unmarkedFitColExt <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = conf.level)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##occuRN
summaryOD.unmarkedFitOccuRN <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = conf.level)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##pcount
summaryOD.unmarkedFitPCount <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily <- mod@mixture
if(!identical(modFamily, "P") && !identical(modFamily, "ZIP")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n\n")
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = conf.level)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##pcountOpen
summaryOD.unmarkedFitPCO <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily <- mod@mixture
if(!identical(modFamily, "P") && !identical(modFamily, "ZIP")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n\n")
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = conf.level)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##distsamp
summaryOD.unmarkedFitDS <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = conf.level)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##gdistsamp
summaryOD.unmarkedFitGDS <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily <- mod@mixture
if(!identical(modFamily, "P")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n\n")
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = conf.level)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##occuFP
summaryOD.unmarkedFitOccuFP <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = conf.level)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##multinomPois
summaryOD.unmarkedFitMPois <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##gmultmix
summaryOD.unmarkedFitGMM <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily <- mod@mixture
if(!identical(modFamily, "P")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n\n")
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##gpcount
summaryOD.unmarkedFitGPC <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily <- mod@mixture
if(!identical(modFamily, "P")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n\n")
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##glmer
summaryOD.glmerMod <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
modFamily <- fam.link.mer(mod)$family
if(!identical(modFamily, "poisson") && !identical(modFamily, "binomial")) {
if(c.hat > 1) stop("\ndistribution not appropriate for overdispersion correction\n\n")
}
##for binomial, check that number of trials > 1
if(identical(modFamily, "binomial")) {
if(!any(mod@resp$weights > 1)) stop("\nOverdispersion correction only appropriate for success/trials syntax\n\n")
}
##extract coefs
coefs <- fixef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##occuMulti
summaryOD.unmarkedFitOccuMulti <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##occuMS
summaryOD.unmarkedFitOccuMS <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##occuTTD
summaryOD.unmarkedFitOccuTTD <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##multmixOpen
summaryOD.unmarkedFitMMO <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##distsampOpen
summaryOD.unmarkedFitDSO <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##glmmTMB
summaryOD.glmmTMB <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check for distributions
##determine family of model
fam <- family(mod)$family
##extract response
##if binomial, check if n > 1 for each case
if(fam == "binomial") {
resp <- mod$frame[, mod$modelInfo$respCol]
if(!is.matrix(resp)) {
if(!any(names(mod$frame) == "(weights)")) {
stop("\nOverdispersion correction only appropriate for success/trials syntax\n\n")
}
}
}
##Poisson or binomial
if(!any(fam == c("poisson", "binomial"))) {
stop("\ndistribution not appropriate for overdispersion correction\n")
}
##extract coefs
coefs <- fixef(mod)$cond
##extract SE's
ses <- sqrt(diag(vcov(mod)$cond * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if nhst
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##maxlike
summaryOD.maxlikeFit <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##multinom
summaryOD.multinom <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##extract coefs
coefs <- as.vector(coef(mod))
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##extract names
parm.names <- names(ses)
##number of estimated parameters
nparms <- length(ses)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- parm.names
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
##vglm
summaryOD.vglm <- function(mod, c.hat = 1,
conf.level = 0.95,
out.type = "confint", ...){
if(c.hat > 4) warning("\nHigh overdispersion: model fit is questionable\n")
if(c.hat < 1) {
warning("\nUnderdispersion: c-hat is fixed to 1\n")
c.hat <- 1
}
##check family of vglm to avoid problems
fam.type <- mod@family@vfamily[1]
if(!(fam.type == "poissonff" || fam.type == "binomialff" || fam.type == "multinomial")) stop("\nDistribution not supported by function\n")
if(fam.type == "binomialff") {
if(!any([email protected] > 1)) stop("\nOverdispersion correction only appropriate for success/trials syntax\n\n")
}
##extract coefs
coefs <- coef(mod)
##extract SE's
ses <- sqrt(diag(vcov(mod) * c.hat))
##number of estimated parameters
nparms <- length(coefs)
##arrange in matrix
outMat <- matrix(NA, nrow = nparms,
ncol = 4)
outMat[, 1:2] <- cbind(coefs, ses)
rownames(outMat) <- names(coefs)
##if interval
if(identical(out.type, "confint")) {
##compute confidence intervals
zstat <- qnorm(p = (1 + conf.level)/2)
outMat[, 3] <- outMat[, 1] - zstat * outMat[, 2]
outMat[, 4] <- outMat[, 1] + zstat * outMat[, 2]
##add names
colnames(outMat) <- c("estimate", "se", "lowlim", "upplim")
}
##if htest
if(identical(out.type, "nhst")) {
##compute P-value
outMat[, 3] <- outMat[, 1]/outMat[, 2]
outMat[, 4] <- 2 * pnorm(abs(outMat[, 3]), lower.tail = FALSE)
##add names
colnames(outMat) <- c("estimate", "se", "z", "pvalue")
}
##assemble in list
outList <- list(out.type = out.type,
c.hat = c.hat,
conf.level = conf.level,
outMat = outMat)
class(outList) <- c("summaryOD", "list")
return(outList)
}
print.summaryOD <- function(x, digits = 4, ...) {
##extract information
conf.level <- x$conf.level
c.hat <- x$c.hat
out.type <- x$out.type
outMat <- x$outMat
if(identical(out.type, "confint")) {
##label for confidence limit
lowLab <- paste("Lower ", conf.level * 100, "%", " CL", sep = "")
uppLab <- paste("Upper ", conf.level * 100, "%", " CL", sep = "")
##add names
colnames(outMat) <- c("Estimate", "Std. Error", lowLab, uppLab)
if(c.hat <= 1) {
cat("\nPrecision unadjusted for overdispersion:\n\n")
} else {
cat("\nPrecision adjusted for overdispersion:\n\n")
}
printCoefmat(outMat, digits = digits)
cat("\n(c-hat = ", c.hat, ")", "\n", sep = "")
cat("\n")
}
if(identical(out.type, "nhst")) {
##label for P-values
zLab <- "z value"
pLab <- "Pr(>|z|)"
##add names
colnames(outMat) <- c("Estimate", "Std. Error", zLab, pLab)
if(c.hat <= 1) {
cat("\nPrecision and hypothesis tests unadjusted for overdispersion:\n\n")
} else {
cat("\nPrecision and hypothesis tests adjusted for overdispersion:\n\n")
}
printCoefmat(outMat, digits = digits, signif.stars = FALSE)
cat("\n(c-hat = ", c.hat, ")", "\n", sep = "")
cat("\n")
}
}
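Every `summaryOD` method above follows the same recipe: inflate the variance-covariance matrix by `c.hat`, then report either Wald confidence limits or z tests. A minimal usage sketch (simulated data and object names are hypothetical; assumes the AICcmodavg package is installed):

```r
## Sketch: overdispersion-adjusted summaries with summaryOD()
## (simulated Poisson data; 'dat' and 'fm' are illustrative names)
library(AICcmodavg)
set.seed(1)
dat <- data.frame(x = rnorm(50))
dat$y <- rpois(50, lambda = exp(0.5 + 0.3 * dat$x))
fm <- glm(y ~ x, family = poisson, data = dat)
## Wald intervals with SEs inflated by sqrt(c.hat)
summaryOD(fm, c.hat = 1.8, out.type = "confint")
## z statistics and p-values adjusted for overdispersion
summaryOD(fm, c.hat = 1.8, out.type = "nhst")
```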
## source file: /scratch/gouwar.j/cran-all/cranData/AICcmodavg/R/summaryOD.R
##compute BIC
##generic
useBIC <- function(mod, return.K = FALSE, nobs = NULL, ...){
UseMethod("useBIC", mod)
}
useBIC.default <- function(mod, return.K = FALSE, nobs = NULL, ...){
stop("\nFunction not yet defined for this object class\n")
}
##aov objects
useBIC.aov <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##betareg objects
useBIC.betareg <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##clm objects
useBIC.clm <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(fitted(mod))} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##clmm objects
useBIC.clmm <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(fitted(mod))} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##coxme objects
useBIC.coxme <- function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$linear.predictor)} else {n <- nobs}
LL <- extractLL(mod)[1]
K <- attr(extractLL(mod), "df") #extract correct number of parameters included in model
BIC <- -2*LL + K * log(n)
if(return.K==TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##coxph objects
useBIC.coxph <- function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(residuals(mod))} else {n <- nobs}
LL <- extractLL(mod)[1]
K <- attr(extractLL(mod), "df") #extract correct number of parameters included in model
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##fitdist (from fitdistrplus)
useBIC.fitdist <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- mod$n} else {n <- nobs}
LL <- logLik(mod)
K <- length(mod$estimate)
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##fitdistr (from MASS)
useBIC.fitdistr <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- mod$n} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##glm and lm objects
useBIC.glm <-
function(mod, return.K = FALSE, nobs = NULL, c.hat = 1, ...){
if(is.null(nobs)) {
n <- length(mod$fitted)
} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(c.hat == 1) {
BIC <- -2*LL + K * log(n)
}
if(c.hat > 1 && c.hat <= 4) {
K <- K+1
BIC <- -2*LL/c.hat + K * log(n)
}
if(c.hat > 4) stop("High overdispersion and model fit is questionable\n")
if(c.hat < 1) stop("You should set \'c.hat\' to 1 if < 1, but values << 1 might also indicate lack of fit\n")
##check if negative binomial and add 1 to K for estimation of theta if glm( ) was used
if(!is.na(charmatch(x="Negative Binomial", table=family(mod)$family))) {
if(!identical(class(mod)[1], "negbin")) { #if not negbin, add + 1 because k of negbin was estimated glm.convert( ) screws up logLik
K <- K+1
BIC <- -2*LL + K * log(n)
}
if(c.hat != 1) stop("You should not use the c.hat argument with the negative binomial")
}
##add 1 for theta parameter in negative binomial fit with glm( )
##check if gamma and add 1 to K for estimation of shape parameter if glm( ) was used
if(identical(family(mod)$family, "Gamma") && c.hat > 1) stop("You should not use the c.hat argument with the gamma")
##an extra condition must be added to avoid adding a parameter for theta with negative binomial when glm.nb( ) is fit which estimates the correct number of parameters
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
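All of the `useBIC` methods compute the same quantity, BIC = -2 log(L) + K log(n); they differ only in how n, K, and the log-likelihood are extracted from each model class. The sketch below (simulated data, hypothetical object names) checks the hand computation against `useBIC()` for a Poisson glm:

```r
## Sketch: BIC by hand versus useBIC() for a Poisson glm
library(AICcmodavg)
set.seed(2)
dat <- data.frame(x = rnorm(40))
dat$y <- rpois(40, lambda = exp(0.2 + 0.5 * dat$x))
fm <- glm(y ~ x, family = poisson, data = dat)
LL <- logLik(fm)[1]           #maximized log-likelihood
K <- attr(logLik(fm), "df")   #number of estimated parameters
n <- length(fm$fitted)        #sample size
bic.hand <- -2 * LL + K * log(n)
all.equal(bic.hand, useBIC(fm))   #should also match stats::BIC(fm) when c.hat = 1
```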
##glmmTMB
useBIC.glmmTMB <-
function(mod, return.K = FALSE, nobs = NULL, c.hat = 1, ...){
if(is.null(nobs)) {
n <- nrow(mod$frame)
names(n) <- NULL
} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(c.hat == 1) {
BIC <- -2*LL + K * log(n)
}
if(c.hat > 1 && c.hat <= 4) {
K <- K+1
BIC <- -2*LL/c.hat + K * log(n)
}
if(c.hat > 4) stop("High overdispersion and model fit is questionable\n")
if(c.hat < 1) stop("You should set \'c.hat\' to 1 if < 1, but values << 1 might also indicate lack of fit\n")
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##gls objects
useBIC.gls <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n<-length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##gnls objects
useBIC.gnls <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(fitted(mod))} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##hurdle objects
useBIC.hurdle <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##lavaan
useBIC.lavaan <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- mod@Data@nobs[[1]]} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##lm objects
useBIC.lm <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##lme objects
useBIC.lme <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- nrow(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##lmekin objects
useBIC.lmekin <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$residuals)} else {n <- nobs}
LL <- extractLL(mod)[1]
K <- attr(extractLL(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
return(BIC)
}
##maxlike objects
useBIC.maxlikeFit <- function(mod, return.K = FALSE, nobs = NULL, c.hat = 1, ...) {
LL <- extractLL(mod)[1]
K <- attr(extractLL(mod), "df")
if(is.null(nobs)) {
n <- nrow(mod$points.retained)
} else {n <- nobs}
BIC <- -2*LL + K * log(n)
if(c.hat != 1) stop("\nThis function does not support overdispersion in \'maxlikeFit\' models\n")
if(identical(return.K, TRUE)) {
return(K)
} else {return(BIC)}
}
##mer object
useBIC.mer <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(is.null(nobs)) {
n <- mod@dims["n"]
} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##merMod objects
useBIC.merMod <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(is.null(nobs)) {
n <- mod@devcomp$dims["n"]
names(n) <- NULL
} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##lmerModLmerTest objects
useBIC.lmerModLmerTest <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(is.null(nobs)) {
n <- mod@devcomp$dims["n"]
names(n) <- NULL
} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##mult objects
useBIC.multinom <-
function(mod, return.K = FALSE, nobs = NULL, c.hat = 1, ...){
if(identical(nobs, NULL)) {n<-length(mod$fitted)/length(mod$lev)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
if(c.hat == 1) {
BIC <- -2*LL + K * log(n)
}
if(c.hat > 1 && c.hat <= 4) {
K <- K+1
BIC <- -2*LL/c.hat + K * log(n)
}
if(c.hat > 4) stop("High overdispersion and model fit is questionable")
if(return.K==TRUE) BIC[1]<-K #attributes the first element of BIC to K
BIC
}
##nlme objects
useBIC.nlme <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- nrow(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##nls objects
useBIC.nls <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(fitted(mod))} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##polr objects
useBIC.polr <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n<-length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##rlm objects
##only valid for M-estimation (Huber M-estimator)
##modified from Tharmaratnam and Claeskens 2013 (equation 8)
##useBIC.rlm <- function(mod, return.K = FALSE, nobs = NULL, ...)
##{
## if(second.ord == TRUE) stop("\nOnly 'second.ord = FALSE' is supported for 'rlm' models\n")
## ##extract design matrix
## X <- model.matrix(mod)
## ##extract scale
## scale.m <- mod$s
## ##extract threshold value
## cval <- mod$k2
## ##extract residuals
## res <- residuals(mod)
## res.scaled <- res/scale.m
## n <- length(res)
## ##partial derivatives based on Huber's loss function
## dPsi <- ifelse(abs(res.scaled) <= cval, 2, 0)
## Psi <- (ifelse(abs(res.scaled) <= cval, 2*res.scaled, 2*cval*sign(res.scaled)))^2
## J <- (t(X) %*% diag(as.vector(dPsi)) %*% X * (1/(scale.m^2)))/n
## inv.J <- solve(J)
## ##variance
## K.var <- (t(X) %*% diag(as.vector(Psi)) %*% X * (1/(scale.m^2)))/n
## AIC <- 2*n*(log(scale.m)) + 2 * sum(diag(inv.J %*%(K.var)))
## if(return.K) {AIC <- 2 * sum(diag(inv.J %*%(K.var)))}
## return(AIC)
##}
##the estimator below extracts the estimates obtained from M- or MM-estimator
##and plugs them in the normal likelihood function
useBIC.rlm <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##survreg objects
useBIC.survreg <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- nrow(mod$y)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
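##Example sketch (not run): each method above computes the usual
##BIC = -2 * logLik + K * log(n). Assuming a hypothetical 'lm' fit 'fm'
##on a data frame 'dat' (and that a useBIC method handles that class),
##the generic and the manual computation agree:
##  fm <- lm(y ~ x, data = dat)
##  useBIC(fm)
##  -2 * logLik(fm)[1] + attr(logLik(fm), "df") * log(nrow(dat))  #same value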
##unmarkedFit objects
##create function to extract BIC from 'unmarkedFit'
useBIC.unmarkedFit <- function(mod, return.K = FALSE, nobs = NULL, c.hat = 1, ...) {
LL <- extractLL(mod)[1]
K <- attr(extractLL(mod), "df")
if(is.null(nobs)) {
n <- dim(mod@data@y)[1]
} else {n <- nobs}
if(c.hat == 1) {
BIC <- -2*LL + K * log(n)
}
if(c.hat > 1 && c.hat <= 4) {
##adjust parameter count to include estimation of dispersion parameter
K <- K + 1
BIC <- -2*LL/c.hat + K * log(n)
}
if(c.hat > 4) stop("\nHigh overdispersion and model fit is questionable\n")
if(c.hat < 1) stop("\nYou should set \'c.hat\' to 1 if < 1, but values << 1 might also indicate lack of fit\n")
if(identical(return.K, TRUE)) {
return(K)
} else {return(BIC)}
}
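##Note (sketch): when 1 < c.hat <= 4, the method above rescales the
##log-likelihood by c.hat and adds one parameter for the estimated
##dispersion, i.e., a quasi-likelihood analogue of BIC:
##  QBIC = -2 * logLik / c.hat + (K + 1) * log(n)
##e.g., useBIC(some.unmarked.fit, c.hat = 2)  #'some.unmarked.fit' is hypothetical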
##vglm objects
useBIC.vglm <- function(mod, return.K = FALSE, nobs = NULL, c.hat = 1, ...){
if(is.null(nobs)) {
n <- nrow([email protected])
} else {n <- nobs}
LL <- extractLL(mod)[1]
##extract number of estimated parameters
K <- attr(extractLL(mod), "df")
if(c.hat !=1) {
fam.name <- mod@family@vfamily
if(fam.name != "poissonff" && fam.name != "binomialff") stop("\nOverdispersion correction only appropriate for Poisson or binomial models\n")
}
if(c.hat == 1) {
BIC <- -2*LL + K * log(n)
}
if(c.hat > 1 && c.hat <= 4) {
K <- K + 1
BIC <- -2*LL/c.hat + K * log(n)
}
if(c.hat > 4) stop("High overdispersion and model fit is questionable\n")
if(c.hat < 1) stop("You should set \'c.hat\' to 1 if < 1, but values << 1 might also indicate lack of fit\n")
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
##zeroinfl objects
useBIC.zeroinfl <-
function(mod, return.K = FALSE, nobs = NULL, ...){
if(identical(nobs, NULL)) {n <- length(mod$fitted)} else {n <- nobs}
LL <- logLik(mod)[1]
K <- attr(logLik(mod), "df") #extract correct number of parameters included in model - this includes LM
BIC <- -2*LL + K * log(n)
if(return.K == TRUE) BIC[1] <- K #attributes the first element of BIC to K
BIC
}
## (end of file: AICcmodavg/R/useBIC.R)
##methods for xtable
##aictab
xtable.aictab <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, include.AICc = TRUE,
include.LL = TRUE, include.Cum.Wt = FALSE, ...) {
##change to nicer names
if(nice.names) {
new.delta <- names(x)[4]
new.weight <- names(x)[6]
names(x)[1] <- "Model"
names(x)[2] <- "K"
#names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(x)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(x)[6] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
names(x)[7] <- "log-Likelihood"
names(x)[8] <- "Cumulative weight"
}
#format to data.frame
x <- data.frame(x, check.names = FALSE)
class(x) <- c("xtable","data.frame")
##with AICc and LL but not Cum.Wt
if(include.AICc && include.LL && !include.Cum.Wt) {
x <- x[, c(1:4, 6:7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f"))
}
##with AICc, but not LL and Cum.Wt
if(include.AICc && !include.LL && !include.Cum.Wt) {
x <- x[, c(1:4, 6)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
}
##without AICc, but with LL but not Cum.Wt
if(!include.AICc && include.LL && !include.Cum.Wt) {
x <- x[, c(1:2, 4, 6:7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
}
##without AICc and LL and Cum.Wt
if(!include.AICc && !include.LL && !include.Cum.Wt) {
x <- x[, c(1:2, 4, 6)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f"))
}
##with AICc and LL and Cum.Wt
if(include.AICc && include.LL && include.Cum.Wt) {
x <- x[, c(1:4, 6:8)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f","f"))
}
##with AICc, but not LL but with Cum.Wt
if(include.AICc && !include.LL && include.Cum.Wt) {
x <- x[, c(1:4, 6, 8)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f"))
}
##without AICc, but with LL and Cum.Wt
if(!include.AICc && include.LL && include.Cum.Wt) {
x <- x[, c(1:2, 4, 6:8)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f"))
}
##without AICc and LL but with Cum.Wt
if(!include.AICc && !include.LL && include.Cum.Wt) {
x <- x[, c(1:2, 4, 6, 8)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
}
caption(x) <- caption
label(x) <- label
return(x)
}
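##Example sketch (not run): formatting a model selection table for LaTeX
##with the method above, assuming 'Cand.models' is a named list of fitted
##models and the xtable package is loaded:
##  out <- aictab(cand.set = Cand.models)
##  print(xtable(out, caption = "Model selection", include.LL = FALSE))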
##modavg
xtable.modavg <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, print.table = FALSE, ...) {
##different format for models of class multinom
if(length(x$Mod.avg.beta) == 1){
if(print.table) {
##extract model selection table
modavg.table <- data.frame(x$Mod.avg.table[, c(1:4, 6, 8:9)], check.names = FALSE)
##change to nicer names
if(nice.names) {
new.delta <- names(modavg.table)[4]
new.weight <- names(modavg.table)[5]
names(modavg.table)[1] <- "Model"
names(modavg.table)[2] <- "K"
##names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(modavg.table)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(modavg.table)[5] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
names(modavg.table)[6] <- paste("Beta(", x$Parameter, ")", sep = "")
names(modavg.table)[7] <- paste("SE(", x$Parameter, ")", sep = "")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f","f"))
}
##print model-averaged estimate, unconditional SE, CI
if(!print.table) {
##model-averaged estimate
modavg.table <- data.frame(Mod.avg.beta = x$Mod.avg.beta, Uncond.SE = x$Uncond.SE,
Lower.CL = x$Lower.CL, Upper.CL = x$Upper.CL, check.names = FALSE)
rownames(modavg.table) <- x$Parameter
##change to nicer names
if(nice.names) {
names(modavg.table)[1] <- "Model-averaged beta estimate"
names(modavg.table)[2] <- "Unconditional SE"
names(modavg.table)[3] <- paste(100*x$Conf.level, "%", " lower limit", sep = "")
names(modavg.table)[4] <- paste(100*x$Conf.level, "%", " upper limit", sep = "")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","f","f","f","f"))
}
}
if(length(x$Mod.avg.beta) > 1){
if(print.table) {
##extract model selection table
modavg.table <- data.frame(x$Mod.avg.table[, c(1:4, 6)], check.names = FALSE)
##change to nicer names
if(nice.names) {
new.delta <- names(modavg.table)[4]
new.weight <- names(modavg.table)[5]
names(modavg.table)[1] <- "Model"
names(modavg.table)[2] <- "K"
##names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(modavg.table)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(modavg.table)[5] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
##names(modavg.table)[6] <- paste("Beta(", x$Parameter, ")", sep = "")
##names(modavg.table)[7] <- paste("SE(", x$Parameter, ")", sep = "")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
}
##print model-averaged estimate, unconditional SE, CI
if(!print.table) {
##model-averaged estimate
modavg.table <- data.frame(Mod.avg.beta = x$Mod.avg.beta, Uncond.SE = x$Uncond.SE,
Lower.CL = x$Lower.CL, Upper.CL = x$Upper.CL, check.names = FALSE)
rownames(modavg.table) <- names(x$Mod.avg.beta)
##change to nicer names
if(nice.names) {
names(modavg.table)[1] <- "Model-averaged beta estimate"
names(modavg.table)[2] <- "Unconditional SE"
names(modavg.table)[3] <- paste(100*x$Conf.level, "%", " lower limit", sep = "")
names(modavg.table)[4] <- paste(100*x$Conf.level, "%", " upper limit", sep = "")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","f","f","f","f"))
}
}
caption(modavg.table) <- caption
label(modavg.table) <- label
return(modavg.table)
}
##modavgShrink
xtable.modavgShrink <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, print.table = FALSE, ...) {
if(print.table) {
##extract model selection table
modavg.table <- data.frame(x$Mod.avg.table[, c(1:4, 6, 8:9)], check.names = FALSE)
##change to nicer names
if(nice.names) {
new.delta <- names(modavg.table)[4]
new.weight <- names(modavg.table)[5]
names(modavg.table)[1] <- "Model"
names(modavg.table)[2] <- "K"
##names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(modavg.table)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(modavg.table)[5] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
names(modavg.table)[6] <- paste("Beta(", x$Parameter, ")", sep = "")
names(modavg.table)[7] <- paste("SE(", x$Parameter, ")", sep = "")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f","f"))
}
##print model-averaged estimate, unconditional SE, CI
if(!print.table) {
##model-averaged estimate
modavg.table <- data.frame(Mod.avg.beta = x$Mod.avg.beta, Uncond.SE = x$Uncond.SE,
Lower.CL = x$Lower.CL, Upper.CL = x$Upper.CL, check.names = FALSE)
rownames(modavg.table) <- x$Parameter
##change to nicer names
if(nice.names) {
names(modavg.table)[1] <- "Model-averaged beta estimate"
names(modavg.table)[2] <- "Unconditional SE"
names(modavg.table)[3] <- paste(100*x$Conf.level, "%", " lower limit", sep = "")
names(modavg.table)[4] <- paste(100*x$Conf.level, "%", " upper limit", sep = "")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","f","f","f","f"))
}
caption(modavg.table) <- caption
label(modavg.table) <- label
return(modavg.table)
}
##modavgPred
xtable.modavgPred <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, ...) {
modavg.pred <- data.frame(x$matrix.output, check.names = FALSE)
##change to nicer names
if(nice.names) {
names(modavg.pred)[1] <- "Model-averaged predictions"
names(modavg.pred)[2] <- "Unconditional SE"
names(modavg.pred)[3] <- paste(100*x$conf.level, "%", " lower limit", sep = "")
names(modavg.pred)[4] <- paste(100*x$conf.level, "%", " upper limit", sep = "")
}
##format to data.frame
class(modavg.pred) <- c("xtable","data.frame")
align(modavg.pred) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(modavg.pred) <- switch(1+is.null(digits), digits, c(0,2,2,2,2))
display(modavg.pred) <- switch(1+is.null(display), display, c("s","f","f","f","f"))
caption(modavg.pred) <- caption
label(modavg.pred) <- label
return(modavg.pred)
}
##dictab
xtable.dictab <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, include.DIC = TRUE,
include.Cum.Wt = FALSE, ...) {
##change to nicer names
if(nice.names) {
names(x)[1] <- "Model"
names(x)[2] <- "pD"
names(x)[3] <- "DIC"
names(x)[4] <- "Delta DIC"
names(x)[6] <- "DIC weight"
names(x)[7] <- "Cumulative weight"
}
#format to data.frame
x <- data.frame(x, check.names = FALSE)
class(x) <- c("xtable","data.frame")
##with DIC and Cum.Wt
if(include.DIC && include.Cum.Wt) {
x <- x[, c(1:4, 6:7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","f","f","f","f","f"))
}
##with DIC but not Cum.Wt
if(include.DIC && !include.Cum.Wt) {
x <- x[, c(1:4, 6)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","f","f","f","f"))
}
##without DIC but with Cum.Wt
if(!include.DIC && include.Cum.Wt) {
x <- x[, c(1:2, 4, 6:7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","f","f","f","f"))
}
##without DIC and Cum.Wt
if(!include.DIC && !include.Cum.Wt) {
x <- x[, c(1:2, 4, 6)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","f","f","f"))
}
caption(x) <- caption
label(x) <- label
return(x)
}
##modavgEffect
xtable.modavgEffect <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, print.table = FALSE, ...) {
if(print.table) {
##check if output from occuMulti or occuMS
if(length(x$Mod.avg.eff) > 1) {
warning("\nToo many effect sizes to print in table\n")
##extract model selection table
modavg.table <- data.frame(x$Mod.avg.table[, c(1:4, 6)], check.names = FALSE)
if(nice.names) {
new.delta <- names(modavg.table)[4]
new.weight <- names(modavg.table)[5]
names(modavg.table)[1] <- "Model"
names(modavg.table)[2] <- "K"
##names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(modavg.table)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(modavg.table)[5] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
} else {
##extract model selection table
modavg.table <- data.frame(x$Mod.avg.table[, c(1:4, 6, 8:9)], check.names = FALSE)
##change to nicer names
if(nice.names) {
new.delta <- names(modavg.table)[4]
new.weight <- names(modavg.table)[5]
names(modavg.table)[1] <- "Model"
names(modavg.table)[2] <- "K"
##names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(modavg.table)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(modavg.table)[5] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
names(modavg.table)[6] <- paste("Effect(", x$Group1, " - ", x$Group2, ")", sep = "")
names(modavg.table)[7] <- paste("SE(", x$Group1, " - ", x$Group2, ")", sep = "")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f","f"))
}
}
##print model-averaged estimate, unconditional SE, CI
if(!print.table) {
modavg.table <- as.data.frame(x$Matrix.output)
##modavg.table <- data.frame(Mod.avg.beta = x$Mod.avg.eff, Uncond.SE = x$Uncond.se,
## Lower.CL = x$Lower.CL, Upper.CL = x$Upper.CL, check.names = FALSE)
##rownames(modavg.table) <- x$Group.variable ##TO CHANGE LABEL TO "Effect Size"
##change to nicer names
if(nice.names) {
names(modavg.table)[1] <- "Model-averaged effect size"
names(modavg.table)[2] <- "Unconditional SE"
names(modavg.table)[3] <- paste(100*x$Conf.level, "%", " lower limit", sep = "")
names(modavg.table)[4] <- paste(100*x$Conf.level, "%", " upper limit", sep = "")
if(length(x$Mod.avg.eff) == 1) {rownames(modavg.table) <- paste("Effect size (", x$Group.variable, ")", sep = "")}
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","f","f","f","f"))
}
caption(modavg.table) <- caption
label(modavg.table) <- label
return(modavg.table)
}
##multComp
xtable.multComp <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, print.table = FALSE, ...) {
if(print.table) {
##extract model selection table
modavg.table <- data.frame(x$model.table[, c(1:4, 6)], check.names = FALSE)
##change to nicer names
if(nice.names) {
new.delta <- names(modavg.table)[4]
new.weight <- names(modavg.table)[5]
names(modavg.table)[1] <- "Group structure"
names(modavg.table)[2] <- "K"
##names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(modavg.table)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(modavg.table)[5] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
}
##print model-averaged estimate, unconditional SE, CI
if(!print.table) {
modavg.table <- data.frame(Group = x$ordered.levels,
Model.avg.est = x$model.avg.est[, "Mod.avg.est"],
Uncond.SE = x$model.avg.est[, "Uncond.SE"],
Lower.CL = x$model.avg.est[, "Lower.CL"],
Upper.CL = x$model.avg.est[, "Upper.CL"], check.names = FALSE)
##change to nicer names
if(nice.names) {
names(modavg.table)[2] <- paste("Model-averaged estimates (", x$factor.id, ")", sep = "")
names(modavg.table)[3] <- "Unconditional SE"
names(modavg.table)[4] <- paste(100*x$conf.level, "%", " lower limit", sep = "")
names(modavg.table)[5] <- paste(100*x$conf.level, "%", " upper limit", sep = "")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,2,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","f","f","f","f","f"))
}
caption(modavg.table) <- caption
label(modavg.table) <- label
return(modavg.table)
}
##boot.wt - class aictab - potentially create new class for boot.wt
xtable.boot.wt <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, include.AICc = TRUE,
include.AICcWt = FALSE, ...) {
##change to nicer names
if(nice.names) {
new.delta <- names(x)[4]
new.weight <- names(x)[6]
names(x)[1] <- "Model"
names(x)[2] <- "K"
names(x)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(x)[6] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
names(x)[7] <- "Pi weight"
}
#format to data.frame
x <- data.frame(x, check.names = FALSE)
class(x) <- c("xtable","data.frame")
##with AICc but not AICc.Wt
if(include.AICc && !include.AICcWt) {
x <- x[, c(1:4, 7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
}
##with AICc and AICc.Wt
if(include.AICc && include.AICcWt) {
x <- x[, c(1:4, 6:7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f"))
}
##without AICc, but with AICc.Wt
if(!include.AICc && include.AICcWt) {
x <- x[, c(1:2, 4, 6:7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
}
##without AICc and AICc.Wt
if(!include.AICc && !include.AICcWt) {
x <- x[, c(1:2, 4, 7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f"))
}
caption(x) <- caption
label(x) <- label
return(x)
}
##mb.chisq - for single-season model
xtable.mb.chisq <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE,
include.detection.histories = TRUE, ...) {
##stop if dynamic occupancy model
if(identical(x$model.type, "dynamic")) {
stop("\n'xtable' does not yet support dynamic occupancy models\n")
}
##extract names
x.names <- names(x)
##stop if no chi-square table is present
if(!any(x.names == "chisq.table")) {
stop("\nchi-square table must be included in the object\n")
}
##extract table
chisq.table <- data.frame("Detection.history" = rownames(x$chisq.table),
x$chisq.table, check.names = FALSE)
##change to nicer names
if(nice.names) {
colnames(chisq.table)[1] <- "Detection history"
##extract rownames
rows <- rownames(x$chisq.table)
##replace NA by ".", here "" to avoid creating endash in LaTeX
##also possible to use {} between consecutive dashes - this requires sanitization in print( )
new.rows <- gsub(pattern = "NA", replacement = ".", rows)
chisq.table[, "Detection history"] <- new.rows
rownames(chisq.table) <- new.rows
}
##format to data.frame
class(chisq.table) <- c("xtable", "data.frame")
##do not include detection history as a column (only rownames)
if(!include.detection.histories) {
##exclude column with detection histories
chisq.table <- chisq.table[, 2:5]
align(chisq.table) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(chisq.table) <- switch(1+is.null(digits), digits, c(0,0,2,2,2))
display(chisq.table) <- switch(1+is.null(display), display, c("s","s","d","f","f")) #2 columns as integers
}
##include detection history as a column
if(include.detection.histories) {
##exclude column with detection histories
chisq.table <- chisq.table[, 1:5]
align(chisq.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(chisq.table) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(chisq.table) <- switch(1+is.null(display), display, c("s","s","d","f","f","f")) #2 columns as integers
}
caption(chisq.table) <- caption
label(chisq.table) <- label
return(chisq.table)
}
##add method for detHist class
xtable.detHist <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, table.detHist = "freq",
...) {
##for single season single species, display frequencies in a single matrix
if(x$n.seasons == 1 && x$n.species == 1) {
##display detection histories
if(identical(table.detHist, "hist")){
det.hist <- x$hist.table.full
det.mat <- matrix(det.hist, nrow = 1)
if(nice.names) {
det.names <- names(det.hist)
new.names <- gsub(pattern = "NA", replacement = ".", det.names)
colnames(det.mat) <- new.names
rownames(det.mat) <- "Season-1"
}
det.frame <- as.data.frame(det.mat)
n.cols <- ncol(det.frame)
}
##display frequencies
if(identical(table.detHist, "freq")) {
det.frame <- as.data.frame(x$out.freqs)
n.cols <- ncol(det.frame)
}
##display proportions
if(identical(table.detHist, "prop")){
det.frame <- as.data.frame(x$out.props)
n.cols <- ncol(det.frame)
}
}
##for single season multiple species, display frequencies in a single matrix
if(x$n.seasons == 1 && x$n.species > 1) {
##display detection histories
if(identical(table.detHist, "hist")){
det.hist <- x$hist.table.full
det.mat <- matrix(det.hist, nrow = 1)
if(nice.names) {
det.names <- names(det.hist)
new.names <- gsub(pattern = "NA", replacement = ".", det.names)
colnames(det.mat) <- new.names
rownames(det.mat) <- "Season-1"
}
det.frame <- as.data.frame(det.mat)
n.cols <- ncol(det.frame)
}
##display frequencies
if(identical(table.detHist, "freq")) {
det.frame <- as.data.frame(x$out.freqs)
n.cols <- ncol(det.frame)
}
##display proportions
if(identical(table.detHist, "prop")){
det.frame <- as.data.frame(x$out.props)
n.cols <- ncol(det.frame)
}
}
if(x$n.seasons > 1 && x$n.species == 1) {
##display entire detection histories
if(identical(table.detHist, "hist")) {
det.hist <- x$hist.table.full
det.mat <- matrix(det.hist, nrow = 1)
if(nice.names) {
det.names <- names(det.hist)
new.names <- gsub(pattern = "NA", replacement = ".", det.names)
colnames(det.mat) <- new.names
rownames(det.mat) <- "All seasons"
}
det.frame <- as.data.frame(det.mat)
n.cols <- ncol(det.frame)
}
##display frequencies
if(identical(table.detHist, "freq")) {
det.frame <- as.data.frame(x$out.freqs)
n.cols <- ncol(det.frame)
}
##display proportions
if(identical(table.detHist, "prop")) {
det.frame <- as.data.frame(x$out.props)
n.cols <- ncol(det.frame)
}
}
##format to data.frame
class(det.frame) <- c("xtable","data.frame")
align(det.frame) <- switch(1+is.null(align), align, c("l", rep("r", n.cols)))
if(identical(table.detHist, "prop")) {
digits(det.frame) <- switch(1+is.null(digits), digits, c(0, rep(2, n.cols)))
} else {
digits(det.frame) <- switch(1+is.null(digits), digits, c(0, rep(0, n.cols)))
}
display(det.frame) <- switch(1+is.null(display), display, c("s", rep("f", n.cols)))
caption(det.frame) <- caption
label(det.frame) <- label
return(det.frame)
}
##add method for countHist class
xtable.countHist <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, table.countHist = "count",
...) {
##for single season, display frequencies in a single matrix
if(x$n.seasons == 1) {
##display count histories
if(identical(table.countHist, "hist")){
det.hist <- x$hist.table.full
det.mat <- matrix(det.hist, nrow = 1)
if(nice.names) {
det.names <- names(det.hist)
new.names <- gsub(pattern = "NA", replacement = ".", det.names)
colnames(det.mat) <- new.names
rownames(det.mat) <- "Season-1"
}
det.frame <- as.data.frame(det.mat)
n.cols <- ncol(det.frame)
}
##display frequencies
if(identical(table.countHist, "freq")) {
det.frame <- as.data.frame(x$out.freqs)
n.cols <- ncol(det.frame)
}
##display counts
if(identical(table.countHist, "count")) {
det.mat <- matrix(x$count.table.full, nrow = 1)
colnames(det.mat) <- names(x$count.table.full)
if(nice.names) {
rownames(det.mat) <- "Season-1"
}
n.cols <- ncol(det.mat)
det.frame <- as.data.frame(det.mat)
}
##display proportions
if(identical(table.countHist, "prop")){
det.frame <- as.data.frame(x$out.props)
n.cols <- ncol(det.frame)
}
} else {
##display entire detection histories
if(identical(table.countHist, "hist")) {
det.hist <- x$hist.table.full
det.mat <- matrix(det.hist, nrow = 1)
if(nice.names) {
det.names <- names(det.hist)
new.names <- gsub(pattern = "NA", replacement = ".", det.names)
colnames(det.mat) <- new.names
rownames(det.mat) <- "All seasons"
}
det.frame <- as.data.frame(det.mat)
n.cols <- ncol(det.frame)
}
##display frequencies
if(identical(table.countHist, "freq")) {
det.frame <- as.data.frame(x$out.freqs)
#det.frame[1, 3:6] <- "."
n.cols <- ncol(det.frame)
}
##display counts
if(identical(table.countHist, "count")) {
det.mat <- matrix(x$count.table.full, nrow = 1)
colnames(det.mat) <- names(x$count.table.full)
if(nice.names) {
rownames(det.mat) <- "All seasons"
}
n.cols <- ncol(det.mat)
det.frame <- as.data.frame(det.mat)
}
##display proportions
if(identical(table.countHist, "prop")) {
det.frame <- as.data.frame(x$out.props)
# det.frame[1, 2:4] <- "."
n.cols <- ncol(det.frame)
}
}
##format to data.frame
class(det.frame) <- c("xtable","data.frame")
align(det.frame) <- switch(1+is.null(align), align, c("l", rep("r", n.cols)))
if(identical(table.countHist, "prop")) {
digits(det.frame) <- switch(1+is.null(digits), digits, c(0, rep(2, n.cols)))
} else {
digits(det.frame) <- switch(1+is.null(digits), digits, c(0, rep(0, n.cols)))
}
display(det.frame) <- switch(1+is.null(display), display, c("s", rep("f", n.cols)))
caption(det.frame) <- caption
label(det.frame) <- label
return(det.frame)
}
##add method for countDist class
xtable.countDist <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, table.countDist = "distance",
...) {
##for single season, display frequencies in a single matrix
if(x$n.seasons == 1) {
##display counts across distance classes
if(identical(table.countDist, "distance")){
det.dist <- matrix(x$dist.sums.full, nrow = 1)
colnames(det.dist) <- names(x$dist.sums.full)
if(nice.names) {
rownames(det.dist) <- "Season-1"
}
n.cols <- ncol(det.dist)
det.frame <- as.data.frame(det.dist)
}
##display frequencies
if(identical(table.countDist, "freq")) {
det.frame <- as.data.frame(x$out.freqs)
n.cols <- ncol(det.frame)
}
##display counts
if(identical(table.countDist, "count")) {
det.mat <- matrix(x$count.table.full, nrow = 1)
colnames(det.mat) <- names(x$count.table.full)
if(nice.names) {
rownames(det.mat) <- "Season-1"
}
n.cols <- ncol(det.mat)
det.frame <- as.data.frame(det.mat)
}
##display proportions
if(identical(table.countDist, "prop")){
det.frame <- as.data.frame(x$out.props)
n.cols <- ncol(det.frame)
}
} else {
##display counts across distance classes
if(identical(table.countDist, "distance")) {
##assumes the same distance classes were used each year
det.dist <- matrix(unlist(x$dist.table.seasons),
nrow = x$n.seasons)
colnames(det.dist) <- names(x$dist.sums.full)
if(nice.names) {
rownames(det.dist) <- paste("Season-", 1:x$n.seasons, sep = "")
}
n.cols <- ncol(det.dist)
det.frame <- as.data.frame(det.dist)
}
##display frequencies
if(identical(table.countDist, "freq")) {
det.frame <- as.data.frame(x$out.freqs)
#det.frame[1, 3:6] <- "."
n.cols <- ncol(det.frame)
}
##display counts
if(identical(table.countDist, "count")) {
det.mat <- matrix(x$count.table.full, nrow = 1)
colnames(det.mat) <- names(x$count.table.full)
if(nice.names) {
rownames(det.mat) <- "All seasons"
}
n.cols <- ncol(det.mat)
det.frame <- as.data.frame(det.mat)
}
##display proportions
if(identical(table.countDist, "prop")) {
det.frame <- as.data.frame(x$out.props)
# det.frame[1, 2:4] <- "."
n.cols <- ncol(det.frame)
}
}
##format to data.frame
class(det.frame) <- c("xtable","data.frame")
align(det.frame) <- switch(1+is.null(align), align, c("l", rep("r", n.cols)))
if(identical(table.countDist, "prop")) {
digits(det.frame) <- switch(1+is.null(digits), digits, c(0, rep(2, n.cols)))
} else {
digits(det.frame) <- switch(1+is.null(digits), digits, c(0, rep(0, n.cols)))
}
display(det.frame) <- switch(1+is.null(display), display, c("s", rep("f", n.cols)))
caption(det.frame) <- caption
label(det.frame) <- label
return(det.frame)
}
##add method for detTime class
xtable.detTime <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, table.detTime = "freq",
...) {
##for single season, display frequencies in a single matrix
if(x$n.seasons == 1) {
##display distribution of detection times
if(identical(table.detTime, "dist")){
det.time <- x$time.table.full
det.mat <- matrix(det.time, nrow = 1)
if(nice.names) {
det.names <- names(det.time)
colnames(det.mat) <- det.names
rownames(det.mat) <- "Season-1"
}
det.frame <- as.data.frame(det.mat)
n.cols <- ncol(det.frame)
}
##display frequencies
if(identical(table.detTime, "freq")) {
det.frame <- as.data.frame(x$out.freqs)
n.cols <- ncol(det.frame)
}
##display proportions
if(identical(table.detTime, "prop")){
det.frame <- as.data.frame(x$out.props)
n.cols <- ncol(det.frame)
}
}
if(x$n.seasons > 1) {
##display distribution of detection times across seasons
if(identical(table.detTime, "dist")) {
det.time <- x$time.table.full
det.mat <- matrix(det.time, nrow = 1)
if(nice.names) {
det.names <- names(det.time)
colnames(det.mat) <- det.names
rownames(det.mat) <- "All seasons"
}
det.frame <- as.data.frame(det.mat)
n.cols <- ncol(det.frame)
}
##display frequencies
if(identical(table.detTime, "freq")) {
det.frame <- as.data.frame(x$out.freqs)
n.cols <- ncol(det.frame)
}
##display proportions
if(identical(table.detTime, "prop")) {
det.frame <- as.data.frame(x$out.props)
n.cols <- ncol(det.frame)
}
}
##format to data.frame
class(det.frame) <- c("xtable","data.frame")
align(det.frame) <- switch(1+is.null(align), align, c("l", rep("r", n.cols)))
if(identical(table.detTime, "prop")) {
digits(det.frame) <- switch(1+is.null(digits), digits, c(0, rep(2, n.cols)))
} else {
digits(det.frame) <- switch(1+is.null(digits), digits, c(0, rep(0, n.cols)))
}
display(det.frame) <- switch(1+is.null(display), display, c("s", rep("f", n.cols)))
caption(det.frame) <- caption
label(det.frame) <- label
return(det.frame)
}
##bictab
xtable.bictab <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, include.BIC = TRUE,
include.LL = TRUE, include.Cum.Wt = FALSE, ...) {
##change to nicer names
if(nice.names) {
new.delta <- names(x)[4]
new.weight <- names(x)[6]
names(x)[1] <- "Model"
names(x)[2] <- "K"
#names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(x)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(x)[6] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
names(x)[7] <- "log-Likelihood"
names(x)[8] <- "Cumulative weight"
}
#format to data.frame
x <- data.frame(x, check.names = FALSE)
class(x) <- c("xtable","data.frame")
##with BIC and LL but not Cum.Wt
if(include.BIC && include.LL && !include.Cum.Wt) {
x <- x[, c(1:4, 6:7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f"))
}
##with BIC, but not LL and Cum.Wt
if(include.BIC && !include.LL && !include.Cum.Wt) {
x <- x[, c(1:4, 6)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
}
##without BIC, but with LL but not Cum.Wt
if(!include.BIC && include.LL && !include.Cum.Wt) {
x <- x[, c(1:2, 4, 6:7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
}
##without BIC and LL and Cum.Wt
if(!include.BIC && !include.LL && !include.Cum.Wt) {
x <- x[, c(1:2, 4, 6)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f"))
}
##with BIC and LL and Cum.Wt
if(include.BIC && include.LL && include.Cum.Wt) {
x <- x[, c(1:4, 6:8)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f","f"))
}
##with BIC, but not LL but with Cum.Wt
if(include.BIC && !include.LL && include.Cum.Wt) {
x <- x[, c(1:4, 6, 8)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f"))
}
##without BIC, but with LL and Cum.Wt
if(!include.BIC && include.LL && include.Cum.Wt) {
x <- x[, c(1:2, 4, 6:8)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f"))
}
##without BIC and LL but with Cum.Wt
if(!include.BIC && !include.LL && include.Cum.Wt) {
x <- x[, c(1:2, 4, 6, 8)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
}
caption(x) <- caption
label(x) <- label
return(x)
}
##checkParms
xtable.checkParms <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, include.variable = TRUE,
include.max.se = TRUE,
include.n.high.se = TRUE, ...) {
##change to nicer names
if(nice.names) {
se.max <- x$se.max
names(x$result)[1] <- "Variable"
names(x$result)[2] <- "Maximum SE"
names(x$result)[3] <- paste("Num parms with SE >", se.max, sep = " ")
}
##format to data.frame
x <- data.frame(x$result, check.names = FALSE)
class(x) <- c("xtable","data.frame")
##with variable, max.se, and n.high.se
if(include.variable && include.max.se && include.n.high.se) {
x <- x[, c(1:3)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,2,2,0))
display(x) <- switch(1+is.null(display), display, c("s","f","f","f"))
}
##with variable and max.se, but not n.high.se
if(include.variable && include.max.se && !include.n.high.se) {
x <- x[, c(1:2)]
align(x) <- switch(1+is.null(align), align, c("l","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,2,2))
display(x) <- switch(1+is.null(display), display, c("s","f","f"))
}
##with variable and n.high.se, but not max.se
if(include.variable && !include.max.se && include.n.high.se) {
x <- x[, c(1, 3)]
align(x) <- switch(1+is.null(align), align, c("l","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,2,0))
display(x) <- switch(1+is.null(display), display, c("s","f","f"))
}
##with n.high.se and max.se, but without variable
if(!include.variable && include.max.se && include.n.high.se) {
x <- x[, c(2:3)]
align(x) <- switch(1+is.null(align), align, c("l","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,2,0))
display(x) <- switch(1+is.null(display), display, c("s","f","f"))
}
##with n.high.se
if(!include.variable && !include.max.se && include.n.high.se) {
x <- x[, 3, drop = FALSE]
align(x) <- switch(1+is.null(align), align, c("l","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0))
display(x) <- switch(1+is.null(display), display, c("s","f"))
}
##with max.se
if(!include.variable && include.max.se && !include.n.high.se) {
x <- x[, 2, drop = FALSE]
align(x) <- switch(1+is.null(align), align, c("l","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,2))
display(x) <- switch(1+is.null(display), display, c("s","f"))
}
##with variable
if(include.variable && !include.max.se && !include.n.high.se) {
x <- x[, 1, drop = FALSE]
align(x) <- switch(1+is.null(align), align, c("l","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,2))
display(x) <- switch(1+is.null(display), display, c("s","f"))
}
caption(x) <- caption
label(x) <- label
return(x)
}
##summaryOD
xtable.summaryOD <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, ...) {
##extract model table
summaryOD.table <- data.frame(x$outMat, check.names = FALSE)
##change to nicer names
if(nice.names) {
if(identical(x$out.type, "interval")) {
names(summaryOD.table)[1] <- "Estimate"
names(summaryOD.table)[2] <- "Standard error"
lowLab <- paste("Lower ", x$conf.level * 100, "%", " CL", sep = "")
uppLab <- paste("Upper ", x$conf.level * 100, "%", " CL", sep = "")
names(summaryOD.table)[3] <- lowLab
names(summaryOD.table)[4] <- uppLab
}
if(identical(x$out.type, "nhst")) {
names(summaryOD.table)[1] <- "Estimate"
names(summaryOD.table)[2] <- "Standard error"
names(summaryOD.table)[3] <- "Wald Z"
names(summaryOD.table)[4] <- "P value"
}
}
##format to data.frame
class(summaryOD.table) <- c("xtable","data.frame")
align(summaryOD.table) <- switch(1+is.null(align), align, c("l","r","r","r", "r"))
digits(summaryOD.table) <- switch(1+is.null(digits), digits, c(0,2,2,2,2))
display(summaryOD.table) <- switch(1+is.null(display), display, c("s","f","f","f","f"))
caption(summaryOD.table) <- caption
label(summaryOD.table) <- label
return(summaryOD.table)
}
##anovaOD
xtable.anovaOD <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, ...) {
##extract model selection table
anovaOD.table <- data.frame(x$devMat, check.names = FALSE)
##change to nicer names
if(nice.names) {
rownames(anovaOD.table) <- c("Model 1", "Model 2")
names(anovaOD.table)[2] <- "log-likelihood"
names(anovaOD.table)[3] <- "Delta K"
names(anovaOD.table)[4] <- "-2(Delta log-likelihoods)"
names(anovaOD.table)[6] <- "P value"
if(x$c.hat == 1) {
names(anovaOD.table)[5] <- "Chi-square"
} else {
names(anovaOD.table)[5] <- "F"
}
}
##format to data.frame
class(anovaOD.table) <- c("xtable","data.frame")
align(anovaOD.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r"))
digits(anovaOD.table) <- switch(1+is.null(digits), digits, c(0,2,2,2,2,2,2))
display(anovaOD.table) <- switch(1+is.null(display), display, c("s","d","f","d","f","f","f"))
caption(anovaOD.table) <- caption
label(anovaOD.table) <- label
return(anovaOD.table)
}
##ictab
xtable.ictab <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, include.IC = TRUE,
include.Cum.Wt = FALSE, ...) {
##change to nicer names
if(nice.names) {
new.delta <- names(x)[4]
new.weight <- names(x)[6]
names(x)[1] <- "Model"
names(x)[2] <- "K"
#names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(x)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(x)[6] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
names(x)[7] <- "Cumulative weight"
}
#format to data.frame
x <- data.frame(x, check.names = FALSE)
class(x) <- c("xtable","data.frame")
##with IC but not Cum.Wt
if(include.IC && !include.Cum.Wt) {
x <- x[, c(1:4, 6)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
if(all(x[, 2] %% 1 == 0)) {
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
} else {
display(x) <- switch(1+is.null(display), display, c("s","s","f","f","f","f"))
}
}
##without IC and Cum.Wt
if(!include.IC && !include.Cum.Wt) {
x <- x[, c(1:2, 4, 6)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2))
if(all(x[, 2] %% 1 == 0)) {
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f"))
} else {
display(x) <- switch(1+is.null(display), display, c("s","s","f","f","f"))
}
}
##with IC and Cum.Wt
if(include.IC && include.Cum.Wt) {
x <- x[, c(1:4, 6:7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2))
if(all(x[, 2] %% 1 == 0)) {
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f"))
} else {
display(x) <- switch(1+is.null(display), display, c("s","s","f","f","f","f","f"))
}
}
##without IC, but with Cum.Wt
if(!include.IC && include.Cum.Wt) {
x <- x[, c(1:2, 4, 6:7)]
align(x) <- switch(1+is.null(align), align, c("l","r","r","r","r","r"))
digits(x) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2))
if(all(x[, 2] %% 1 == 0)) {
display(x) <- switch(1+is.null(display), display, c("s","s","d","f","f","f"))
} else {
display(x) <- switch(1+is.null(display), display, c("s","s","f","f","f","f"))
}
}
caption(x) <- caption
label(x) <- label
return(x)
}
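The `ictab` method above switches the display code for the `K` column between integer (`"d"`) and floating-point (`"f"`) depending on whether every value is a whole number. The `%% 1 == 0` test it relies on works like this:

```r
## a numeric value is whole exactly when its remainder modulo 1 is zero
k.int  <- c(2, 3, 5)
k.frac <- c(2, 3.4, 5)
stopifnot(all(k.int %% 1 == 0))    # all whole -> "d" (integer) display
stopifnot(!all(k.frac %% 1 == 0))  # any fractional -> "f" (floating) display
```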
##modavgIC
xtable.modavgIC <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, print.table = FALSE, ...) {
if(print.table) {
##extract model selection table
modavg.table <- data.frame(x$Mod.avg.table[, c(1:4, 6:8)], check.names = FALSE)
##change to nicer names
if(nice.names) {
new.delta <- names(modavg.table)[4]
new.weight <- names(modavg.table)[5]
names(modavg.table)[1] <- "Model"
names(modavg.table)[2] <- "K"
##names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(modavg.table)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(modavg.table)[5] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
names(modavg.table)[6] <- "Estimate"
names(modavg.table)[7] <- "SE"
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2,2))
if(all(modavg.table$K %% 1 == 0)) {
display(modavg.table) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f","f"))
} else {
display(modavg.table) <- switch(1+is.null(display), display, c("s","s","f","f","f","f","f","f"))
}
}
##print model-averaged estimate, unconditional SE, CI
if(!print.table) {
##model-averaged estimate
modavg.table <- data.frame(Mod.avg.est = x$Mod.avg.est, Uncond.SE = x$Uncond.SE,
Lower.CL = x$Lower.CL, Upper.CL = x$Upper.CL, check.names = FALSE)
rownames(modavg.table) <- "Parameter"
##change to nicer names
if(nice.names) {
names(modavg.table)[1] <- "Model-averaged estimate"
names(modavg.table)[2] <- "Unconditional SE"
names(modavg.table)[3] <- paste(100*x$Conf.level, "%", " lower limit", sep = "")
names(modavg.table)[4] <- paste(100*x$Conf.level, "%", " upper limit", sep = "")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","f","f","f","f"))
}
caption(modavg.table) <- caption
label(modavg.table) <- label
return(modavg.table)
}
##modavgCustom
xtable.modavgCustom <- function(x, caption = NULL, label = NULL, align = NULL,
digits = NULL, display = NULL, auto = FALSE,
nice.names = TRUE, print.table = FALSE, ...) {
if(print.table) {
##extract model selection table
modavg.table <- data.frame(x$Mod.avg.table[, c(1:4, 6, 8:9)], check.names = FALSE)
##change to nicer names
if(nice.names) {
new.delta <- names(modavg.table)[4]
new.weight <- names(modavg.table)[5]
names(modavg.table)[1] <- "Model"
names(modavg.table)[2] <- "K"
##names(x)[4] <- paste("$\\delta$", unlist(strsplit(new.delta, "_"))[2], collapse = " ") #requires sanitize.text.function( )
names(modavg.table)[4] <- paste(unlist(strsplit(new.delta, "_")), collapse = " ")
names(modavg.table)[5] <- paste(unlist(strsplit(new.weight, "Wt")), "weight", collapse = " ")
names(modavg.table)[6] <- "Estimate"
names(modavg.table)[7] <- "SE"
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,0,2,2,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","s","d","f","f","f","f","f"))
}
##print model-averaged estimate, unconditional SE, CI
if(!print.table) {
##model-averaged estimate
modavg.table <- data.frame(Mod.avg.beta = x$Mod.avg.est, Uncond.SE = x$Uncond.SE,
Lower.CL = x$Lower.CL, Upper.CL = x$Upper.CL, check.names = FALSE)
rownames(modavg.table) <- "Parameter"
##change to nicer names
if(nice.names) {
names(modavg.table)[1] <- "Model-averaged estimate"
names(modavg.table)[2] <- "Unconditional SE"
names(modavg.table)[3] <- paste(100*x$Conf.level, "%", " lower limit", sep = "")
names(modavg.table)[4] <- paste(100*x$Conf.level, "%", " upper limit", sep = "")
}
##format to data.frame
class(modavg.table) <- c("xtable","data.frame")
align(modavg.table) <- switch(1+is.null(align), align, c("l","r","r","r","r"))
digits(modavg.table) <- switch(1+is.null(digits), digits, c(0,2,2,2,2))
display(modavg.table) <- switch(1+is.null(display), display, c("s","f","f","f","f"))
}
caption(modavg.table) <- caption
label(modavg.table) <- label
return(modavg.table)
}
## ---- end of /scratch/gouwar.j/cran-all/cranData/AICcmodavg/R/xtable.R ----
### R code from vignette source 'AICcmodavg-unmarked.Rnw'
###################################################
### code chunk number 1: AICcmodavg-unmarked.Rnw:37-38
###################################################
options(width=70, continue = " ")
###################################################
### code chunk number 2: loadPackage
###################################################
##load package
library(AICcmodavg)
##load data frame
data(bullfrog)
###################################################
### code chunk number 3: checkBullfrog
###################################################
##check data structure
str(bullfrog)
##first rows
head(bullfrog)
###################################################
### code chunk number 4: formatData
###################################################
##extract detections
yObs <- bullfrog[, c("V1", "V2", "V3", "V4", "V5", "V6", "V7")]
##extract site variables
siteVars <- bullfrog[, c("Location", "Reed.presence")]
##extract observation variables
##centered sampling effort on each visit
effort <- bullfrog[, c("Effort1", "Effort2", "Effort3", "Effort4",
"Effort5", "Effort6", "Effort7")]
##survey type (0 = call survey, 1 = minnow trap)
type <- bullfrog[, c("Type1", "Type2", "Type3", "Type4", "Type5",
"Type6", "Type7")]
###################################################
### code chunk number 5: loadFormat
###################################################
##load package
library(unmarked)
##format data
bfrogData <- unmarkedFrameOccu(y = yObs,
siteCovs = siteVars,
obsCovs = list(Type = type, Effort = effort))
###################################################
### code chunk number 6: summary1
###################################################
summary(bfrogData)
###################################################
### code chunk number 7: detHist
###################################################
detHist(bfrogData)
###################################################
### code chunk number 8: fitOccu
###################################################
##null model
m1 <- occu(~ 1 ~ 1, data = bfrogData)
##p varies with survey type and effort, occupancy is constant
m2 <- occu(~ Type + Effort ~ 1, data = bfrogData)
##p constant, occupancy varies with reed presence
m3 <- occu(~ 1 ~ Reed.presence, data = bfrogData)
##global model
m4 <- occu(~ Type + Effort ~ Reed.presence, data = bfrogData)
###################################################
### code chunk number 9: checkOut
###################################################
summary(m4)
summaryOD(m4, out.type = "confint")
summaryOD(m4, out.type = "nhst")
###################################################
### code chunk number 10: createList
###################################################
bfrogMods <- list("null" = m1, "psidot.pTypeEffort" = m2,
"psiReed.pdot" = m3,
"psiReed.pTypeEffort" = m4)
###################################################
### code chunk number 11: checkConv
###################################################
##check convergence for a single model
checkConv(m1)
##extract values across all models
sapply(bfrogMods, checkConv)
###################################################
### code chunk number 12: extractCN
###################################################
##extract condition number of single model
extractCN(m1)
##extract condition across all models
sapply(bfrogMods, extractCN)
###################################################
### code chunk number 13: largeSE
###################################################
##check highest SE in single model
checkParms(m1)
##check highest SE across all models
lapply(bfrogMods, checkParms)
###################################################
### code chunk number 14: LRT
###################################################
##compare model with reed presence vs null model
anovaOD(mod.simple = m1, mod.complex = m3)
###################################################
### code chunk number 15: gof (eval = FALSE)
###################################################
## ##this takes 226 min. using 2 cores
## gof <- mb.gof.test(mod = m4, nsim = 10000, parallel = TRUE, ncores = 2)
## gof
## save(gof, file = "gofMod3.Rdata")
###################################################
### code chunk number 16: gof2
###################################################
load("gofMod3.Rdata")
gof
p.value <- sum(gof$t.star >= gof$chi.square)/gof$nsim
if (p.value == 0) {
p.display <- paste("<", round(1/gof$nsim, digits = 4))
} else {
p.display <- paste("=", round(p.value, digits = 4))
}
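The bootstrap P value computed above is simply the proportion of simulated statistics at least as extreme as the observed one; in miniature (the numbers here are invented for illustration):

```r
## bootstrap P value: fraction of simulated statistics >= observed statistic
t.star <- c(1.2, 3.5, 0.8, 4.1, 2.0)  # invented simulated statistics
obs <- 3.0                             # invented observed statistic
p.value <- sum(t.star >= obs) / length(t.star)
stopifnot(identical(p.value, 0.4))     # 2 of 5 simulated values exceed obs
```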
hist(gof$t.star,
main = "Bootstrapped MacKenzie and Bailey fit statistic (10 000 samples)",
xlim = range(c(gof$t.star, gof$chi.square)),
xlab = paste("Simulated statistic (observed = ",
round(gof$chi.square, digits = 2), ")", sep = ""),
cex.axis = 1.2, cex.lab = 1.2, cex.main = 1.2)
title(main = bquote(paste(italic(P), " ", .(p.display))),
line = 0.5, cex.main = 1.2)
abline(v = gof$chi.square, lty = "dashed",
col = "red")
###################################################
### code chunk number 17: summaryOD2
###################################################
##compare inferences
summaryOD(m3)
summaryOD(m3, c.hat = 1.08)
###################################################
### code chunk number 18: aic
###################################################
##when no overdispersion is present
outTab <- aictab(cand.set = bfrogMods)
##accounting for overdispersion
outTabC <- aictab(cand.set = bfrogMods, c.hat = 1.08)
outTab
outTabC
###################################################
### code chunk number 19: evidenceRatio
###################################################
##evidence ratio between top-ranked model vs second-ranked model
evidence(aic.table = outTabC)
###################################################
### code chunk number 20: modavgShrink
###################################################
##model-averaged estimate of reed presence - shrinkage estimator
estReed <- modavgShrink(cand.set = bfrogMods,
parm = "Reed.presence", parm.type = "psi",
c.hat = 1.08)
estReed
###################################################
### code chunk number 21: modavgShrink2
###################################################
estType <- modavgShrink(cand.set = bfrogMods,
parm = "Type", parm.type = "detect",
c.hat = 1.08)
estType
estEffort <- modavgShrink(cand.set = bfrogMods,
parm = "Effort", parm.type = "detect",
c.hat = 1.08)
estEffort
###################################################
### code chunk number 22: checkXpsi
###################################################
##variables on psi
extractX(cand.set = bfrogMods, parm.type = "psi")
##variables on p
extractX(cand.set = bfrogMods, parm.type = "detect")
###################################################
### code chunk number 23: predReed
###################################################
reedFrame <- data.frame(Reed.presence = c(0, 1))
###################################################
### code chunk number 24: predReed2
###################################################
outReed <- modavgPred(cand.set = bfrogMods, newdata = reedFrame,
parm.type = "psi", c.hat = 1.08)
outReed
###################################################
### code chunk number 25: storeReed
###################################################
##store predictions and confidence intervals in data frame
reedFrame$fit <- outReed$mod.avg.pred
reedFrame$low95 <- outReed$lower.CL
reedFrame$upp95 <- outReed$upper.CL
###################################################
### code chunk number 26: plotReed
###################################################
##create plot
xvals <- c(0.2, 0.4)
plot(fit ~ xvals, data = reedFrame,
ylab = "Probability of occupancy",
xlab = "Presence of reed",
ylim = c(0, 1),
cex = 1.2, cex.axis = 1.2, cex.lab = 1.2,
xlim = c(0, 0.6),
xaxt = "n")
#add x axis
axis(side = 1, at = xvals,
labels = c("absent", "present"),
cex.axis = 1.2)
##add error bars
segments(x0 = xvals, y0 = reedFrame$low95,
x1 = xvals, y1 = reedFrame$upp95)
###################################################
### code chunk number 27: predType
###################################################
##vary Type, hold Effort constant at its mean
typeFrame <- data.frame(Type = c(0, 1), Effort = 0)
##model-averaged predictions
outType <- modavgPred(cand.set = bfrogMods, newdata = typeFrame,
parm.type = "detect", c.hat = 1.08)
outType
###################################################
### code chunk number 28: plotType
###################################################
##store predictions and confidence intervals in data frame
typeFrame$fit <- outType$mod.avg.pred
typeFrame$low95 <- outType$lower.CL
typeFrame$upp95 <- outType$upper.CL
##create plot
xvals <- c(0.2, 0.4)
plot(fit ~ xvals, data = typeFrame,
ylab = "Detection probability",
xlab = "Survey type",
ylim = c(0, 1),
cex = 1.2, cex.axis = 1.2, cex.lab = 1.2,
xlim = c(0, 0.6),
xaxt = "n")
#add x axis
axis(side = 1, at = xvals,
labels = c("call survey", "minnow trapping"),
cex.axis = 1.2)
##add error bars
segments(x0 = xvals, y0 = typeFrame$low95,
x1 = xvals, y1 = typeFrame$upp95)
###################################################
### code chunk number 29: extractEffort
###################################################
##extract centered values of sampling effort
effort <- bfrogData@obsCovs$Effort
##create a series of 30 values to plot
Effort.cent <- seq(from = min(effort), to = max(effort),
length.out = 30)
##back-transform values to original scale of variable
Effort.mean <- 8.67 #mean of original variable see ?bullfrog
Effort.orig <- Effort.cent + Effort.mean
###################################################
### code chunk number 30: predEffort
###################################################
##note that all variables on the parameter must appear here
pred.dataEffort <- data.frame(Effort.orig = Effort.orig,
Effort = Effort.cent, #centered variable
Type = 1)
##recall that Type was coded 1 (minnow trap) or 0 (call survey)
##compute model-averaged predictions with modavgPred on probability scale
out.predsEffort <- modavgPred(cand.set = bfrogMods,
newdata = pred.dataEffort, parm.type = "detect",
type = "response", c.hat = 1.08)
###################################################
### code chunk number 31: addPreds
###################################################
##add predictions to data set to keep everything in the same place
pred.dataEffort$fit <- out.predsEffort$mod.avg.pred
pred.dataEffort$se.fit <- out.predsEffort$uncond.se
pred.dataEffort$low95 <- out.predsEffort$lower.CL
pred.dataEffort$upp95 <- out.predsEffort$upper.CL
###################################################
### code chunk number 32: plotEffort
###################################################
##create plot
##plot
plot(fit ~ Effort.orig,
ylab = "Detection probability",
xlab = "Sampling effort",
ylim = c(0, 1),
type = "l",
cex = 1.2, cex.lab = 1.2, cex.axis = 1.2,
data = pred.dataEffort)
##add 95% CI around predictions
lines(low95 ~ Effort.orig, data = pred.dataEffort,
lty = "dashed")
lines(upp95 ~ Effort.orig, data = pred.dataEffort,
lty = "dashed")
###################################################
### code chunk number 33: xtable1
###################################################
library(xtable)
xtable(outTabC)
###################################################
### code chunk number 34: table2
###################################################
xtable(estReed)
###################################################
### code chunk number 35: table3
###################################################
xtable(detHist(m3))
###################################################
### code chunk number 36: table4
###################################################
xtable(mb.chisq(m3))
###################################################
### code chunk number 37: xtableOptions
###################################################
#add caption, suppress log-likelihood, and include cumulative Akaike weight
print(xtable(outTabC,
caption = "Model selection accounting for overdispersion in the bullfrog data.",
include.LL = FALSE, include.Cum.Wt = TRUE),
caption.placement = "top", include.rownames = FALSE)
## ---- end of /scratch/gouwar.j/cran-all/cranData/AICcmodavg/inst/doc/AICcmodavg-unmarked.R ----
### R code from vignette source 'AICcmodavg.Rnw'
###################################################
### code chunk number 1: AICcmodavg.Rnw:37-38
###################################################
options(width=70, continue = " ")
###################################################
### code chunk number 2: import
###################################################
library(AICcmodavg)
data(dry.frog)
###################################################
### code chunk number 3: subData
###################################################
##extract only first 7 columns
frog <- dry.frog[, 1:7]
##first lines
head(frog)
##structure of data frame
str(frog)
###################################################
### code chunk number 4: na
###################################################
any(is.na(frog))
###################################################
### code chunk number 5: centInitialMass
###################################################
##center initial mass
frog$InitMass_cent <- frog$Initial_mass - mean(frog$Initial_mass)
###################################################
### code chunk number 6: InitialMass2
###################################################
frog$InitMass2 <- frog$InitMass_cent^2
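Centering before squaring reduces the correlation between the linear and quadratic terms, which stabilizes the regression estimates. A quick illustration with simulated positive values (the variable names are invented):

```r
## with strictly positive values, x and x^2 are almost perfectly correlated;
## centering first makes the quadratic term nearly orthogonal to the linear one
set.seed(1)
m <- runif(50, min = 5, max = 15)   # invented masses for illustration
cor.raw  <- cor(m, m^2)
m.cent   <- m - mean(m)
cor.cent <- cor(m.cent, m.cent^2)
stopifnot(abs(cor.cent) < abs(cor.raw))
```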
###################################################
### code chunk number 7: checkDiag
###################################################
##run global model
global <- lm(Mass_lost ~ InitMass_cent + InitMass2 + Substrate + Shade,
data = frog)
par(mfrow = c(2, 2))
plot(global)
###################################################
### code chunk number 8: logMass
###################################################
frog$logMass_lost <- log(frog$Mass_lost + 1) #adding 1 due to presence of 0's
###################################################
### code chunk number 9: checkDiag2
###################################################
##run global model
global.log <- lm(logMass_lost ~ InitMass_cent + InitMass2 + Substrate + Shade,
data = frog)
par(mfrow = c(2, 2))
plot(global.log)
###################################################
### code chunk number 10: fitCands
###################################################
m.null <- lm(logMass_lost ~ 1,
data = frog)
m.shade <- lm(logMass_lost ~ Shade,
data = frog)
m.substrate <- lm(logMass_lost ~ Substrate,
data = frog)
m.shade.substrate <- lm(logMass_lost ~ Shade + Substrate,
data = frog)
m.null.mass <- lm(logMass_lost ~ InitMass_cent + InitMass2,
data = frog)
m.shade.mass <- lm(logMass_lost ~ InitMass_cent + InitMass2 + Shade,
data = frog)
m.substrate.mass <- lm(logMass_lost ~ InitMass_cent + InitMass2 + Substrate,
data = frog)
m.global.mass <- global.log
###################################################
### code chunk number 11: storeList
###################################################
##store models in named list
Cand.models <- list("null" = m.null, "shade" = m.shade,
"substrate" = m.substrate,
"shade + substrate" = m.shade.substrate,
"mass" = m.null.mass, "mass + shade" = m.shade.mass,
"mass + substrate" = m.substrate.mass,
"global" = m.global.mass)
###################################################
### code chunk number 12: modTableAICc
###################################################
selectionTable <- aictab(cand.set = Cand.models)
selectionTable
###################################################
### code chunk number 13: modTableAIC
###################################################
aictab(Cand.models, second.ord = FALSE)
###################################################
### code chunk number 14: exportTable3 (eval = FALSE)
###################################################
## library(xtable)
## print(xtable(selectionTable, caption = "Model selection table on frog mass lost.",
## label = "tab:selection"),
## include.rownames = FALSE, caption.placement = "top")
###################################################
### code chunk number 15: exportTable4
###################################################
library(xtable)
print(xtable(selectionTable, caption = "Model selection table on frog mass lost.",
label = "tab:selection"),
include.rownames = FALSE, caption.placement = "top")
###################################################
### code chunk number 16: confSet
###################################################
##confidence set of models
confset(cand.set = Cand.models)
###################################################
### code chunk number 17: evidence
###################################################
##evidence ratios
evidence(aic.table = selectionTable)
###################################################
### code chunk number 18: evidenceSilent
###################################################
evRatio <- evidence(selectionTable)
###################################################
### code chunk number 19: evidence2
###################################################
##compare "substrate" vs "shade"
evidence(selectionTable, model.high = "substrate",
model.low = "shade")
###################################################
### code chunk number 20: evidence2Silent
###################################################
##compare "substrate" vs "shade"
evRatio2 <- evidence(selectionTable, model.high = "substrate",
model.low = "shade")
###################################################
### code chunk number 21: evidenceNull
###################################################
evidence(selectionTable, model.high = "global",
model.low = "null")
###################################################
### code chunk number 22: confint
###################################################
confint(m.global.mass)
###################################################
### code chunk number 23: modavg
###################################################
modavg(cand.set = Cand.models, parm = "Shade")
###################################################
### code chunk number 24: modavg2
###################################################
modavgShade <- modavg(cand.set = Cand.models, parm = "Shade")
###################################################
### code chunk number 25: coef
###################################################
coef(m.global.mass)
###################################################
### code chunk number 26: substrateSPHAG
###################################################
modavg(Cand.models, parm = "SubstrateSPHAGNUM")
###################################################
### code chunk number 27: substrateSPHAG2
###################################################
modavgSphag <- modavg(Cand.models, parm = "SubstrateSPHAGNUM")
###################################################
### code chunk number 28: modavg
###################################################
modavgShrink(cand.set = Cand.models, parm = "Shade")
###################################################
### code chunk number 29: substrateSPHAGShrink
###################################################
modavgShrink(Cand.models, parm = "SubstrateSPHAGNUM")
###################################################
### code chunk number 30: shadePred
###################################################
##data frame to make predictions
##all variables are held constant, except Shade
predData <- data.frame(InitMass_cent = c(0, 0),
InitMass2 = c(0, 0),
Substrate = factor("SOIL",
levels = levels(frog$Substrate)),
Shade = c(0, 1))
##predictions from global model
predict(m.global.mass, newdata = predData, se.fit = TRUE)
##predictions from null model
predict(m.null, newdata = predData, se.fit = TRUE)
###################################################
### code chunk number 31: extractX
###################################################
extractX(cand.set = Cand.models)
###################################################
### code chunk number 32: modavgPred
###################################################
modavgPred(cand.set = Cand.models, newdata = predData)
###################################################
### code chunk number 33: modavgPredSub
###################################################
##data frame holding all variables constant, except Substrate
predSub <- data.frame(InitMass_cent = c(0, 0, 0),
InitMass2 = c(0, 0, 0),
Substrate = factor(c("PEAT", "SOIL", "SPHAGNUM"),
levels = levels(frog$Substrate)),
Shade = c(1, 1, 1))
##model-average predictions
predsMod <- modavgPred(Cand.models, newdata = predSub)
predsMod
###################################################
### code chunk number 34: checkContent
###################################################
##check content of object
str(predsMod)
###################################################
### code chunk number 35: savePreds
###################################################
##add predictions, lower CL, and upper CL
predSub$fit <- predsMod$mod.avg.pred
predSub$low95 <- predsMod$lower.CL
predSub$upp95 <- predsMod$upper.CL
###################################################
### code chunk number 36: plotPreds
###################################################
##create vector for X axis
predSub$xvals <- c(0.25, 0.5, 0.75)
##create empty box
plot(fit ~ xvals,
data = predSub,
xlim = c(0, 1),
ylim = range(low95, upp95),
xlab = "Substrate type",
ylab = "Predicted mass lost (log of mass in g)",
xaxt = "n",
cex.axis = 1.2,
cex.lab = 1.2)
##add x axis
axis(side = 1, at = predSub$xvals,
labels = c("Peat", "Soil", "Sphagnum"),
cex.axis = 1.2)
##add CI's
segments(x0 = predSub$xvals, x1 = predSub$xvals,
y0 = predSub$low95, y1 = predSub$upp95)
###################################################
### code chunk number 37: compGroups
###################################################
predComp <- data.frame(InitMass_cent = c(0, 0),
InitMass2 = c(0, 0),
Substrate = factor(c("PEAT", "SPHAGNUM"),
levels = levels(frog$Substrate)),
Shade = c(1, 1))
##model-average predictions
modavgEffect(Cand.models, newdata = predComp)
###################################################
### code chunk number 38: customAICc
###################################################
##log-likelihoods
modL <- c(-225.4180, -224.0697, -225.4161)
##number of parameters
modK <- c(2, 3, 3)
##model selection
outTab <- aictabCustom(logL = modL,
K = modK,
modnames = c("null", "phi(SVL)p(.)",
"phi(Road)p(.)"),
nobs = 621)
###################################################
### code chunk number 39: evRatioCustom (eval = FALSE)
###################################################
## evidence(outTab, model.high = "phi(SVL)p(.)",
## model.low = "phi(Road)p(.)")
###################################################
### code chunk number 40: evRatioCustom
###################################################
evRatioCust <- evidence(outTab, model.high = "phi(SVL)p(.)",
model.low = "phi(Road)p(.)")
###################################################
### code chunk number 41: estSE
###################################################
##survival estimates with road mitigation
modEst <- c(0.1384450, 0.1266030, 0.1378745)
##SE's of survival estimates with road mitigation
modSE <- c(0.03670327, 0.03347475, 0.03862634)
###################################################
### code chunk number 42: customModavg
###################################################
##model-averaged survival with road mitigation
modavgCustom(logL = modL,
K = modK,
modnames = c("null", "phi(SVL)p(.)",
"phi(Road)p(.)"),
estimate = modEst,
se = modSE,
nobs = 621)
###################################################
### code chunk number 43: customModavg2
###################################################
##survival estimates without road mitigation
modEst2 <- c(0.1384450, 0.1266030, 0.1399727)
##SE's of survival estimates without road mitigation
modSE2 <- c(0.03670327, 0.03347475, 0.04981635)
##model-averaged survival
modavgCustom(logL = modL,
K = modK,
modnames = c("null", "phi(SVL)p(.)",
"phi(Road)p(.)"),
estimate = modEst2,
se = modSE2,
nobs = 621)
## End of file: /scratch/gouwar.j/cran-all/cranData/AICcmodavg/inst/doc/AICcmodavg.R
boxcoxfr <- function(y, x, option="both",lambda = seq(-3,3,0.01), lambda2 = NULL, tau = 0.05, alpha = 0.05, verbose = TRUE){
dname1<-deparse(substitute(y))
dname2<-deparse(substitute(x))
x=factor(x)
k=length(levels(x))
if (length(y) != length(x)) {stop("The lengths of x and y must be equal")}
if(is.null(lambda2)) lambda2<-0
y <- y+lambda2
if (is.na(min(y)) == TRUE) {stop("Data include NA")}
if (min(y) <= 0) {stop("Data must include positive values. Specify shifting parameter, lambda2")}
if (!(option %in% c("both", "nor", "var"))) {stop("option must be one of 'both', 'nor' or 'var'")}
####################################
if ((option=="both")|(option=="nor")){
stor_w=NULL
for (i in 1:k){
for (j in 1:length(lambda)) {
if (lambda[j]!=0){
y1=y[which(x==(levels(x)[i]))]
w=(shapiro.test((y1^(lambda[j]) - 1)/(lambda[j])))
stor_w=rbind(stor_w,c(lambda[j],w$statistic,w$p))
}
if (lambda[j]==0){
y1=y[which(x==(levels(x)[i]))]
w=shapiro.test(log(y1))
stor_w=rbind(stor_w,c(lambda[j],w$statistic,w$p))
}
} # end of inner loop over lambda values (j)
lambda=stor_w[which(stor_w[,3]>=tau),1]
if (length(lambda)==0) {stop("Feasible region is null set. No solution. \n Try to enlarge the range of feasible lambda values, lambda. \n Try to decrease feasible region parameter, tau.")}
stor_w=NULL
} # end of outer loop over factor levels (i)
}
################################
##########
if ((option=="both")|(option=="var")){
stor_w=NULL
for (j in 1:length(lambda)) {
if (lambda[j]!=0){
lt=bartlett.test((y^(lambda[j]) - 1)/(lambda[j]),x)
stor_w=rbind(stor_w,c(lambda[j],lt$statistic,lt$p.value))
}
if (lambda[j]==0){
lt=bartlett.test(log(y),x)
stor_w=rbind(stor_w,c(lambda[j],lt$statistic,lt$p.value))
}
}
lambda=stor_w[which(stor_w[,3]>=tau),1]
if (length(lambda)==0) {stop("Feasible region is null set. No solution. \n Try to enlarge the range of feasible lambda values, lambda. \n Try to decrease feasible region parameter, tau.")}
}
##########
####
van=boxcox(y~x, lambda, plotit = FALSE)
lambda=van$x[which.max(van$y)]
####
################################
stor1=stor2=NULL
for(i in 1:k){
if(lambda!=0){
kk=shapiro.test((y[which(x==(levels(x)[i]))]^lambda-1)/lambda)
}else{
kk=shapiro.test(log(y[which(x==(levels(x)[i]))]))
}
stor1=c(stor1,kk$statistic)
stor2=c(stor2,kk$p)
}
store = data.frame(matrix(NA, nrow = k, ncol = 4))
colnames(store) = c("Level", "statistic", "p.value", "Normality")
store$statistic=stor1
store$p.value=stor2
store$Normality = ifelse(store$p.value > alpha, "YES", "NO")
store$Level=levels(x)
if(lambda!=0){
kk2=bartlett.test((y^lambda-1)/lambda,x)
}else{
kk2=bartlett.test(log(y),x)
}
store2 = data.frame(matrix(NA, nrow = 1, ncol = 4))
colnames(store2) = c("Level","statistic", "p.value", "Homogeneity")
store2$statistic=kk2$statistic
store2$p.value=kk2$p.value
store2$Homogeneity= ifelse(store2$p.value > alpha, "YES", "NO")
store2$Level="All"
if(lambda!=0){
tf.data=(y^lambda-1)/lambda
}else{
tf.data=log(y)
}
if(tau==0){
method="MLE"
}else{
method="MLEFR"
}
if (verbose){
cat("\n"," Box-Cox power transformation", "\n", sep = " ")
cat("---------------------------------------------------------------------", "\n\n", sep = " ")
cat(" lambda.hat :", lambda, "\n\n", sep = " ")
cat("\n"," Shapiro-Wilk normality test for transformed data ","(alpha = ",alpha,")", "\n", sep = "")
cat("-------------------------------------------------------------------", "\n", sep = " ")
print(store)
cat("\n\n"," Bartlett's homogeneity test for transformed data ","(alpha = ",alpha,")", "\n", sep = "")
cat("-------------------------------------------------------------------", "\n", sep = " ")
print(store2)
cat("---------------------------------------------------------------------", "\n\n", sep = " ")
}
out <- list()
out$method <-method
out$lambda.hat <-lambda
out$lambda2 <-lambda2
out$shapiro <- store
out$bartlett <- store2
out$alpha<-as.numeric(alpha)
out$tf.data <- tf.data
out$x <- x
out$y.name <- dname1
out$x.name <- dname2
attr(out, "class") <- "boxcoxfr"
invisible(out)
}
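## ---------------------------------------------------------------
## Illustrative usage of boxcoxfr() (sketch only, not package code):
## estimate a common Box-Cox lambda across groups and inspect the
## normality/homogeneity checks on the transformed data. Assumes the
## AID package and its dependencies (e.g. MASS for boxcox) are
## installed; the data below are simulated, not from the package.
## ---------------------------------------------------------------
## library(AID)
## set.seed(42)
## y <- exp(rnorm(90, mean = 1, sd = 0.4))   # right-skewed, positive response
## g <- rep(c("A", "B", "C"), each = 30)     # three groups
## out <- boxcoxfr(y, g, option = "both", verbose = FALSE)
## out$lambda.hat   # estimated transformation parameter
## out$shapiro      # per-group Shapiro-Wilk tests on transformed data
## out$bartlett     # Bartlett homogeneity test on transformed data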
## End of file: /scratch/gouwar.j/cran-all/cranData/AID/R/boxcoxfr.R
boxcoxlm <-
function(x, y, method="lse", lambda = seq(-3,3,0.01), lambda2 = NULL, plot = TRUE, alpha = 0.05, verbose = TRUE)
{
dname1<-deparse(substitute(y))
dname2<-deparse(substitute(x))
y<-as.numeric(y)
if(is.null(lambda2)) lambda2<-0
y <- y + lambda2
if (!any(class(x)=="matrix")) stop("x must be a matrix")
if (is.na(min(y))==TRUE) stop("response y includes NA")
if (is.na(min(x))==TRUE) stop("x matrix includes NA")
if (min(y)<=0) stop("response y must include positive values. Specify shifting parameter, lambda2")
x<-cbind(1,x)
if (method=="sw") {
store1<-lapply(1:length(lambda), function(i) lambda[i])
store2<-lapply(1:length(lambda), function(i) if (store1[[i]] != 0) (y^store1[[i]]-1)/store1[[i]] else log(y))
store3<-lapply(1:length(lambda), function(i) ginv(t(x) %*% x) %*% t(x) %*% store2[[i]])
store4<-lapply(1:length(lambda), function(i) t(store3[[i]]) %*% t(x))
store5<-lapply(1:length(lambda), function(i) store4[[i]]-store2[[i]])
store6<-lapply(1:length(lambda), function(i) shapiro.test(store5[[i]])$statistic)
pred.lamb<-store1[[which.max(store6)]]
method.name<-"Estimating Box-Cox transformation parameter via Shapiro-Wilk test statistic"
}
else if (method=="ad") {
store1<-lapply(1:length(lambda), function(i) lambda[i])
store2<-lapply(1:length(lambda), function(i) if (store1[[i]] != 0) (y^store1[[i]]-1)/store1[[i]] else log(y))
store3<-lapply(1:length(lambda), function(i) ginv(t(x) %*% x) %*% t(x) %*% store2[[i]])
store4<-lapply(1:length(lambda), function(i) t(store3[[i]]) %*% t(x))
store5<-lapply(1:length(lambda), function(i) store4[[i]]-store2[[i]])
store6<-lapply(1:length(lambda), function(i) ad.test(store5[[i]])$statistic)
pred.lamb<-store1[[which.min(store6)]]
method.name<-"Estimating Box-Cox transformation parameter via Anderson-Darling test statistic"
}
else if (method=="cvm") {
store1<-lapply(1:length(lambda), function(i) lambda[i])
store2<-lapply(1:length(lambda), function(i) if (store1[[i]] != 0) (y^store1[[i]]-1)/store1[[i]] else log(y))
store3<-lapply(1:length(lambda), function(i) ginv(t(x) %*% x) %*% t(x) %*% store2[[i]])
store4<-lapply(1:length(lambda), function(i) t(store3[[i]]) %*% t(x))
store5<-lapply(1:length(lambda), function(i) store4[[i]]-store2[[i]])
store6<-lapply(1:length(lambda), function(i) cvm.test(store5[[i]])$statistic)
pred.lamb<-store1[[which.min(store6)]]
method.name<-"Estimating Box-Cox transformation parameter via Cramer-von Mises test statistic"
}
else if (method=="pt") {
store1<-lapply(1:length(lambda), function(i) lambda[i])
store2<-lapply(1:length(lambda), function(i) if (store1[[i]] != 0) (y^store1[[i]]-1)/store1[[i]] else log(y))
store3<-lapply(1:length(lambda), function(i) ginv(t(x) %*% x) %*% t(x) %*% store2[[i]])
store4<-lapply(1:length(lambda), function(i) t(store3[[i]]) %*% t(x))
store5<-lapply(1:length(lambda), function(i) store4[[i]]-store2[[i]])
store6<-lapply(1:length(lambda), function(i) pearson.test(store5[[i]])$statistic)
pred.lamb<-store1[[which.min(store6)]]
method.name<-"Estimating Box-Cox transformation parameter via Pearson Chi-Square test statistic"
}
else if (method=="sf") {
store1<-lapply(1:length(lambda), function(i) lambda[i])
store2<-lapply(1:length(lambda), function(i) if (store1[[i]] != 0) (y^store1[[i]]-1)/store1[[i]] else log(y))
store3<-lapply(1:length(lambda), function(i) ginv(t(x) %*% x) %*% t(x) %*% store2[[i]])
store4<-lapply(1:length(lambda), function(i) t(store3[[i]]) %*% t(x))
store5<-lapply(1:length(lambda), function(i) store4[[i]]-store2[[i]])
store6<-lapply(1:length(lambda), function(i) sf.test(store5[[i]])$statistic)
pred.lamb<-store1[[which.max(store6)]]
method.name<-"Estimating Box-Cox transformation parameter via Shapiro-Francia test statistic"
}
else if (method=="lt") {
store1<-lapply(1:length(lambda), function(i) lambda[i])
store2<-lapply(1:length(lambda), function(i) if (store1[[i]] != 0) (y^store1[[i]]-1)/store1[[i]] else log(y))
store3<-lapply(1:length(lambda), function(i) ginv(t(x) %*% x) %*% t(x) %*% store2[[i]])
store4<-lapply(1:length(lambda), function(i) t(store3[[i]]) %*% t(x))
store5<-lapply(1:length(lambda), function(i) store4[[i]]-store2[[i]])
store6<-lapply(1:length(lambda), function(i) lillie.test(store5[[i]])$statistic)
pred.lamb<-store1[[which.min(store6)]]
method.name<-"Estimating Box-Cox transformation parameter via Lilliefors test statistic"
}
else if (method=="jb") {
store1<-lapply(1:length(lambda), function(i) lambda[i])
store2<-lapply(1:length(lambda), function(i) if (store1[[i]] != 0) (y^store1[[i]]-1)/store1[[i]] else log(y))
store3<-lapply(1:length(lambda), function(i) ginv(t(x) %*% x) %*% t(x) %*% store2[[i]])
store4<-lapply(1:length(lambda), function(i) t(store3[[i]]) %*% t(x))
store5<-lapply(1:length(lambda), function(i) store4[[i]]-store2[[i]])
store6<-lapply(1:length(lambda), function(i) jarque.bera.test(store5[[i]][1,])$statistic)
pred.lamb<-store1[[which.min(store6)]]
method.name<-"Estimating Box-Cox transformation parameter via Jarque-Bera test statistic"
}
else if (method=="mle") {
store1<-lapply(1:length(lambda), function(i) lambda[i])
store2<-lapply(1:length(lambda), function(x) if (store1[[x]] != 0) (y^store1[[x]]-1)/(store1[[x]]*(geometric.mean(y)^(store1[[x]]-1))) else geometric.mean(y)*log(y))
store3<-lapply(1:length(lambda), function(i) ginv(t(x) %*% x) %*% t(x) %*% store2[[i]])
store4<-lapply(1:length(lambda), function(i) t(store3[[i]]) %*% t(x))
store5<-lapply(1:length(lambda), function(i) store4[[i]]-store2[[i]])
store6<-lapply(1:length(lambda), function(i) sum(log(dnorm(store5[[i]], mean = mean(store5[[i]]), sd = sd(store5[[i]])))))
pred.lamb<-store1[[which.max(store6)]]
method.name<-"Estimating Box-Cox transformation parameter via maximum likelihood estimation"
}
else if (method=="lse") {
store1<-lapply(1:length(lambda), function(i) lambda[i])
store2<-lapply(1:length(lambda), function(x) if (store1[[x]] != 0) (y^store1[[x]]-1)/(store1[[x]]*(geometric.mean(y)^(store1[[x]]-1))) else geometric.mean(y)*log(y))
store3<-lapply(1:length(lambda), function(i) ginv(t(x) %*% x) %*% t(x) %*% store2[[i]])
store4<-lapply(1:length(lambda), function(i) t(store3[[i]]) %*% t(x))
store5<-lapply(1:length(lambda), function(i) store4[[i]]-store2[[i]])
store6<-lapply(1:length(lambda), function(i) sum(store5[[i]]^2))
pred.lamb<-store1[[which.min(store6)]]
method.name<-"Estimating Box-Cox transformation parameter via least square estimation"
}
if (pred.lamb==max(lambda)) stop("Enlarge the range of the lambda")
if (pred.lamb==min(lambda)) stop("Enlarge the range of the lambda")
coef = ginv(t(x) %*% x) %*% t(x) %*% y
ypred = t(coef) %*% t(x)
residual = ypred - y
if (pred.lamb!=0) y.transformed<-((y^pred.lamb)-1)/pred.lamb
if (pred.lamb==0) y.transformed<-log(y)
coef.transformed = ginv(t(x) %*% x) %*% t(x) %*% y.transformed
ypred.transformed = t(coef.transformed) %*% t(x)
residual.transformed = ypred.transformed - y.transformed
if(plot){
par(mfrow=c(2,2))
hist(residual, xlab = "Residuals", prob=TRUE, main = "Histogram of residuals")
lines(density(residual))
hist(residual.transformed, xlab = "Residuals after transformation", prob=TRUE, main = paste("Histogram of residuals after transformation"))
lines(density(residual.transformed))
qqnorm(residual, main = "Q-Q plot of residuals")
qqline(residual)
qqnorm(residual.transformed, main = "Q-Q plot of residuals after transformation")
qqline(residual.transformed)
}
if (method=="sw") {
statistic<-shapiro.test(residual.transformed)$statistic
pvalue<-shapiro.test(residual.transformed)$p.value
nortest.name<-"Shapiro-Wilk normality test"
}
if (method=="ad") {
statistic<-ad.test(residual.transformed)$statistic
pvalue<-ad.test(residual.transformed)$p.value
nortest.name<-"Anderson-Darling normality test"
}
if (method=="cvm") {
statistic<-cvm.test(residual.transformed)$statistic
pvalue<-cvm.test(residual.transformed)$p.value
nortest.name<-"Cramer-von Mises normality test"
}
if (method=="pt") {
statistic<-pearson.test(residual.transformed)$statistic
pvalue<-pearson.test(residual.transformed)$p.value
nortest.name<-"Pearson Chi-square normality test"
}
if (method=="sf") {
statistic<-sf.test(residual.transformed)$statistic
pvalue<-sf.test(residual.transformed)$p.value
nortest.name<-"Shapiro-Francia normality test"
}
if (method=="lt") {
statistic<-lillie.test(residual.transformed)$statistic
pvalue<-lillie.test(residual.transformed)$p.value
nortest.name<-"Lilliefors normality test"
}
if (method=="jb") {
statistic<-jarque.bera.test(residual.transformed[1,])$statistic
pvalue<-jarque.bera.test(residual.transformed[1,])$p.value
nortest.name<-"Jarque-Bera normality test"
}
if ((method=="mle")|(method=="lse")) {
statistic<-shapiro.test(residual.transformed)$statistic
pvalue<-shapiro.test(residual.transformed)$p.value
nortest.name<-"Shapiro-Wilk normality test"
}
if (verbose){
cat("\n"," Box-Cox power transformation", "\n", sep = " ")
cat("--------------------------------------------------------------", "\n\n", sep = " ")
cat(" lambda.hat :", pred.lamb, "\n\n", sep = " ")
cat("\n", " ",nortest.name," (alpha = ",alpha,")", "\n", sep = "")
cat("--------------------------------------------------------------", "\n\n", sep = " ")
cat(" statistic :", statistic, "\n", sep = " ")
cat(" p.value :", pvalue, "\n\n", sep = " ")
cat(if(pvalue > alpha){" Result : Residuals are normal after transformation."}
else {" Result : Residuals are not normal after transformation."},"\n")
cat("--------------------------------------------------------------", "\n\n", sep = " ")
}
out<-list()
out$method<-method.name
out$lambda.hat<-as.numeric(pred.lamb)
out$lambda2<-as.numeric(lambda2)
out$statistic<-as.numeric(statistic)
out$p.value<-as.numeric(pvalue)
out$alpha<-as.numeric(alpha)
out$tf.data<-as.numeric(y.transformed)
out$tf.residuals<-as.numeric(residual.transformed[1,])
out$y.name<-dname1
out$x.name<-dname2
attr(out, "class") <- "boxcoxlm"
invisible(out)
}
## End of file: /scratch/gouwar.j/cran-all/cranData/AID/R/boxcoxlm.R
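## ---------------------------------------------------------------
## Illustrative usage of boxcoxlm() (sketch only, not package code):
## choose lambda so that the residuals of a linear model are closer
## to normal. Simulated data; assumes the AID package is installed.
## ---------------------------------------------------------------
## library(AID)
## set.seed(1)
## x <- matrix(rnorm(100), ncol = 2)                       # 50 x 2 design matrix
## y <- exp(1 + x %*% c(0.5, -0.3) + rnorm(50, sd = 0.2))  # positive, skewed response
## out <- boxcoxlm(x = x, y = y, method = "lse", plot = FALSE, verbose = FALSE)
## out$lambda.hat   # estimated Box-Cox parameter
## out$p.value      # Shapiro-Wilk p-value for transformed residuals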
boxcoxmeta<-function(data,
lambda = seq(-3, 3, 0.01), nboot = 100,
lambda2 = NULL, plot = TRUE, alpha = 0.05, verbose = TRUE){
method = c("sw","ad","jb")
if(is.null(lambda2)) lambda2<-0
if(length(method)<2){
boxcoxnc(data,method=method,alpha=alpha,lambda = lambda,lambda2 = lambda2)
}else{
lambdas<-data.frame(matrix(nrow=1, ncol=length(method)))
boost_lambda<-data.frame(matrix(nrow = nboot, ncol = length(method)))
colnames(boost_lambda)<-method
colnames(lambdas)<-method
for(i in method){
lambdas[,i]<-boxcoxnc(data, method = i, plot =FALSE, verbose = FALSE,lambda = lambda,lambda2 = lambda2,alpha = alpha)$lambda.hat
for (j in c(1:nboot)) {
sample<-sample(1:length(data),length(data),replace = TRUE)
boost_lambda[j,i]<-boxcoxnc(data[sample],method = i, plot = FALSE,verbose =FALSE,lambda = lambda,lambda2 = lambda2,alpha = alpha)$lambda.hat
}
}
sd <- apply(boost_lambda,2,sd)
pred.lamb<-metamean(n=rep(length(data),length(method)),mean =as.double(lambdas[method]),sd=as.double(sd[method]) )$TE.random
if (pred.lamb == max(lambda)) stop("Enlarge the range of the lambda")
if (pred.lamb == min(lambda)) stop("Enlarge the range of the lambda")
if (pred.lamb != 0) data.transformed <- ((data^pred.lamb) - 1)/pred.lamb
if (pred.lamb == 0) data.transformed <- log(data)
dname<-deparse(substitute(data))
nortest.name <- str_replace_all(paste(method,collapse = " "),pattern = " ",replacement = ",")
results<-data.frame(matrix(nrow=length(method),ncol=4))
colnames(results)<-c("Test","Statistic","P.Value","Normality")
row.names(results)<-method
for (i in method) {
if(i=="sw"){
results[i,"Test"]<-"Shapiro-Wilk"
results[i,"Statistic"]<-shapiro.test(data.transformed)$statistic
results[i,"P.Value"]<-shapiro.test(data.transformed)$p.value
results[i,"Normality"]<-ifelse(results[i,"P.Value"]<alpha,"Reject","Not reject")
}else if(i=="ad"){
results[i,"Test"]<-"Anderson Darling"
results[i,"Statistic"]<-ad.test(data.transformed)$statistic
results[i,"P.Value"]<-ad.test(data.transformed)$p.value
results[i,"Normality"]<-ifelse(results[i,"P.Value"]<alpha,"Reject","Not reject")
}else if(i=="jb"){
results[i,"Test"]<-"Jarque-Bera"
results[i,"Statistic"]<-jarque.bera.test(data.transformed)$statistic
results[i,"P.Value"]<-jarque.bera.test(data.transformed)$p.value
results[i,"Normality"]<-ifelse(results[i,"P.Value"]<alpha,"Reject","Not reject")
}
}
row.names(results)<-NULL
if (verbose) {
cat("\n", " Box-Cox power transformation via meta analysis",
"\n", sep = " ")
cat("-------------------------------------------------------",
"\n\n", sep = " ")
cat(" lambda.hat :", pred.lamb, "\n\n",
sep = " ")
cat("\n", " ","Normality tests for transformed data ",
"(alpha = ", alpha, ")", "\n",
sep = "")
cat("-------------------------------------------------------",
"\n", sep = " ")
print(results)
cat("-------------------------------------------------------",
"\n\n", sep = " ")
}
if (plot) {
par(mfrow = c(2, 2))
hist(data, xlab = dname, prob = TRUE, main = paste("Histogram of", dname))
lines(density(data))
hist(data.transformed, xlab = paste("Transformed", dname),
prob = TRUE, main = paste("Histogram of tf", dname))
lines(density(data.transformed))
qqnorm(data, main = paste("Q-Q plot of", dname))
qqline(data)
qqnorm(data.transformed, main = paste("Q-Q plot of tf", dname))
qqline(data.transformed)
}
out <- list()
out$method <- "Ensemble Based Box-Cox Transformation via Meta Analysis"
out$lambda.hat <- as.numeric(pred.lamb)
out$lambda2 <- as.numeric(lambda2)
out$result <- results
out$alpha <- as.numeric(alpha)
out$tf.data <- data.transformed
out$var.name <- dname
attr(out, "class") <- "boxcoxmeta"
invisible(out)
}
}
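## ---------------------------------------------------------------
## Illustrative usage of boxcoxmeta() (sketch only, not package code):
## pool the "sw", "ad" and "jb" estimates of lambda with a
## random-effects meta analysis. Simulated data; assumes AID and its
## dependencies (meta, stringr, nortest, tseries) are installed.
## A small nboot keeps the bootstrap quick for illustration.
## ---------------------------------------------------------------
## library(AID)
## set.seed(7)
## z <- rexp(60) + 0.5    # positive, right-skewed sample
## out <- boxcoxmeta(z, nboot = 20, plot = FALSE, verbose = FALSE)
## out$lambda.hat   # pooled (random-effects) lambda estimate
## out$result       # normality tests on the transformed data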
## End of file: /scratch/gouwar.j/cran-all/cranData/AID/R/boxcoxmeta.R
boxcoxnc <-
function(data, method="sw", lambda = seq(-3,3,0.01), lambda2 = NULL, plot = TRUE, alpha = 0.05, verbose = TRUE)
{
dname<-deparse(substitute(data))
data<-as.numeric(data)
if(is.null(lambda2)) lambda2<-0
data <- data+lambda2
if (is.na(min(data))==TRUE) stop("Data include NA")
if (min(data)<=0) stop("Data must include positive values. Specify shifting parameter, lambda2")
if (method=="sw") {
store1<-lapply(1:length(lambda), function(x) lambda[x])
store2<-lapply(1:length(lambda), function(x) if (store1[[x]] != 0) (data^store1[[x]]-1)/store1[[x]] else log(data))
store3<-lapply(1:length(lambda), function(x) shapiro.test(store2[[x]])$statistic)
pred.lamb<-store1[[which.max(store3)]]
method.name<-"Estimating Box-Cox transformation parameter via Shapiro-Wilk test statistic"
}
else if (method=="ad") {
store1<-lapply(1:length(lambda), function(x) lambda[x])
store2<-lapply(1:length(lambda), function(x) if (store1[[x]] != 0) (data^store1[[x]]-1)/store1[[x]] else log(data))
store3<-lapply(1:length(lambda), function(x) ad.test(store2[[x]])$statistic)
pred.lamb<-store1[[which.min(store3)]]
method.name<-"Estimating Box-Cox transformation parameter via Anderson-Darling test statistic"
}
else if (method=="cvm") {
store1<-lapply(1:length(lambda), function(x) lambda[x])
store2<-lapply(1:length(lambda), function(x) if (store1[[x]] != 0) (data^store1[[x]]-1)/store1[[x]] else log(data))
store3<-lapply(1:length(lambda), function(x) cvm.test(store2[[x]])$statistic)
pred.lamb<-store1[[which.min(store3)]]
method.name<-"Estimating Box-Cox transformation parameter via Cramer-von Mises test statistic"
}
else if (method=="pt") {
store1<-lapply(1:length(lambda), function(x) lambda[x])
store2<-lapply(1:length(lambda), function(x) if (store1[[x]] != 0) (data^store1[[x]]-1)/store1[[x]] else log(data))
store3<-lapply(1:length(lambda), function(x) pearson.test(store2[[x]])$statistic)
pred.lamb<-store1[[which.min(store3)]]
method.name<-"Estimating Box-Cox transformation parameter via Pearson Chi-Square test statistic"
}
else if (method=="sf") {
store1<-lapply(1:length(lambda), function(x) lambda[x])
store2<-lapply(1:length(lambda), function(x) if (store1[[x]] != 0) (data^store1[[x]]-1)/store1[[x]] else log(data))
store3<-lapply(1:length(lambda), function(x) sf.test(store2[[x]])$statistic)
pred.lamb<-store1[[which.max(store3)]]
method.name<-"Estimating Box-Cox transformation parameter via Shapiro-Francia test statistic"
}
else if (method=="lt") {
store1<-lapply(1:length(lambda), function(x) lambda[x])
store2<-lapply(1:length(lambda), function(x) if (store1[[x]] != 0) (data^store1[[x]]-1)/store1[[x]] else log(data))
store3<-lapply(1:length(lambda), function(x) lillie.test(store2[[x]])$statistic)
pred.lamb<-store1[[which.min(store3)]]
method.name<-"Estimating Box-Cox transformation parameter via Lilliefors test statistic"
}
else if (method=="jb") {
store1<-lapply(1:length(lambda), function(x) lambda[x])
store2<-lapply(1:length(lambda), function(x) if (store1[[x]] != 0) (data^store1[[x]]-1)/store1[[x]] else log(data))
store3<-lapply(1:length(lambda), function(x) jarque.bera.test(store2[[x]])$statistic)
pred.lamb<-store1[[which.min(store3)]]
method.name<-"Estimating Box-Cox transformation parameter via Jarque-Bera test statistic"
}
else if (method=="ac") {
set.seed(100)
stor1<-lapply(1:30, function(x) rnorm(length(data),0,100))
stor2<-lapply(1:30, function(x) glm(data~stor1[[x]],family=gaussian))
stor3<-lapply(1:30, function(x) boxcox(stor2[[x]],lambda,plotit=FALSE))
stor4<-sapply(1:30, function(x) stor3[[x]]$x[which.max(stor3[[x]]$y)])
pred.lamb<-mean(stor4)
method.name<-"Estimating Box-Cox transformation parameter via artificial covariate method"
}
else if (method=="mle") {
store1<-lapply(1:length(lambda), function(x) lambda[x])
store2<-lapply(1:length(lambda), function(x) if (store1[[x]] != 0) (data^store1[[x]]-1)/(store1[[x]]*(geometric.mean(data)^(store1[[x]]-1))) else geometric.mean(data)*log(data))
store3<-lapply(1:length(lambda), function(x) sum(log(dnorm(store2[[x]], mean = mean(store2[[x]]), sd = sd(store2[[x]])))))
pred.lamb<-store1[[which.max(store3)]]
method.name<-"Estimating Box-Cox transformation parameter via maximum likelihood estimation"
}
if (pred.lamb==max(lambda)) stop("Enlarge the range of the lambda")
if (pred.lamb==min(lambda)) stop("Enlarge the range of the lambda")
if (pred.lamb!=0) data.transformed<-((data^pred.lamb)-1)/pred.lamb
if (pred.lamb==0) data.transformed<-log(data)
if(plot){
par(mfrow=c(2,2))
hist(data, xlab = dname, prob=TRUE, main = paste("Histogram of", dname))
lines(density(data))
hist(data.transformed, xlab = paste("Transformed", dname), prob=TRUE, main = paste("Histogram of tf", dname))
lines(density(data.transformed))
qqnorm(data, main = paste("Q-Q plot of", dname))
qqline(data)
qqnorm(data.transformed, main = paste("Q-Q plot of tf", dname))
qqline(data.transformed)
}
if (method=="sw") {
statistic<-shapiro.test(data.transformed)$statistic
pvalue<-shapiro.test(data.transformed)$p.value
nortest.name<-"Shapiro-Wilk normality test"
}
if (method=="ad") {
statistic<-ad.test(data.transformed)$statistic
pvalue<-ad.test(data.transformed)$p.value
nortest.name<-"Anderson-Darling normality test"
}
if (method=="cvm") {
statistic<-cvm.test(data.transformed)$statistic
pvalue<-cvm.test(data.transformed)$p.value
nortest.name<-"Cramer-von Mises normality test"
}
if (method=="pt") {
statistic<-pearson.test(data.transformed)$statistic
pvalue<-pearson.test(data.transformed)$p.value
nortest.name<-"Pearson Chi-square normality test"
}
if (method=="sf") {
statistic<-sf.test(data.transformed)$statistic
pvalue<-sf.test(data.transformed)$p.value
nortest.name<-"Shapiro-Francia normality test"
}
if (method=="lt") {
statistic<-lillie.test(data.transformed)$statistic
pvalue<-lillie.test(data.transformed)$p.value
nortest.name<-"Lilliefors normality test"
}
if (method=="jb") {
statistic<-jarque.bera.test(data.transformed)$statistic
pvalue<-jarque.bera.test(data.transformed)$p.value
nortest.name<-"Jarque-Bera normality test"
}
if ((method=="ac")|(method=="mle")) {
statistic<-shapiro.test(data.transformed)$statistic
pvalue<-shapiro.test(data.transformed)$p.value
nortest.name<-"Shapiro-Wilk normality test"
}
if (verbose){
cat("\n"," Box-Cox power transformation", "\n", sep = " ")
cat("-------------------------------------------------------------------", "\n\n", sep = " ")
cat(" lambda.hat :", pred.lamb, "\n\n", sep = " ")
cat("\n", " ",nortest.name," for transformed data ", "(alpha = ",alpha,")", "\n", sep = "")
cat("-------------------------------------------------------------------", "\n\n", sep = " ")
cat(" statistic :", statistic, "\n", sep = " ")
cat(" p.value :", pvalue, "\n\n", sep = " ")
cat(if(pvalue > alpha){" Result : Transformed data are normal."}
else {" Result : Transformed data are not normal."},"\n")
cat("-------------------------------------------------------------------", "\n\n", sep = " ")
}
out<-list()
out$method <- method.name
out$lambda.hat <- as.numeric(pred.lamb)
out$lambda2 <- as.numeric(lambda2)
out$statistic <- as.numeric(statistic)
out$p.value <- as.numeric(pvalue)
out$alpha <- as.numeric(alpha)
out$tf.data <- data.transformed
out$var.name <- dname
attr(out, "class") <- "boxcoxnc"
invisible(out)
}
## source file: /scratch/gouwar.j/cran-all/cranData/AID/R/boxcoxnc.R
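The lambda searches above all follow one pattern: transform the data over a grid of candidate lambdas and score each candidate with a normality criterion. A minimal Python sketch of the `mle` branch (function names here are my own; as in the R code, the transform is rescaled by the geometric mean, and a winning lambda on the grid boundary suggests enlarging the grid):

```python
import math
from statistics import mean, stdev

def boxcox(x, lam):
    """Box-Cox transform of a positive value x for parameter lam."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def boxcox_mle_lambda(data, grid):
    """Pick the lambda in `grid` maximizing the normal log-likelihood of the
    geometric-mean-scaled transform, mirroring the 'mle' branch above."""
    gm = math.exp(mean(math.log(v) for v in data))  # geometric mean
    best_lam, best_ll = None, -math.inf
    for lam in grid:
        if lam == 0:
            z = [gm * math.log(v) for v in data]
        else:
            z = [(v ** lam - 1.0) / (lam * gm ** (lam - 1.0)) for v in data]
        mu, sd = mean(z), stdev(z)
        ll = sum(-0.5 * math.log(2 * math.pi * sd * sd)
                 - (zi - mu) ** 2 / (2 * sd * sd) for zi in z)
        if ll > best_ll:
            best_lam, best_ll = lam, ll
    return best_lam
```

The test-statistic branches (`sw`, `sf`, `lt`, `jb`, ...) differ only in the scoring function and in whether it is maximized or minimized.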
confInt<- function(x,...) UseMethod("confInt")
confInt.boxcoxnc<- function(x, level = 0.95, verbose = TRUE,...){
if ((level<=0)|(level>=1)) stop("Confidence level must be between 0 and 1")
if (x$p.value<=x$alpha) stop(paste("Transformed data must be normally distributed at alpha = ",x$alpha,sep = ""))
if (x$p.value>x$alpha){
meantf <- mean(x$tf.data)
lowertf <- mean(x$tf.data)-qt((1-level)/2,df = (length(x$tf.data)-1),lower.tail = FALSE)*sd(x$tf.data)/sqrt(length(x$tf.data))
uppertf <- mean(x$tf.data)+qt((1-level)/2,df = (length(x$tf.data)-1),lower.tail = FALSE)*sd(x$tf.data)/sqrt(length(x$tf.data))
vectf <- c(meantf, lowertf, uppertf)
if (x$lambda.hat != 0) vecbt <- (vectf*x$lambda.hat+1)^(1/x$lambda.hat)
if (x$lambda.hat == 0) vecbt <- exp(vectf)
}
vecbt<- vecbt-x$lambda2
vecbt<- matrix(vecbt,1,3)
colnames(vecbt)<-c("Mean", paste((1-level)/2*100, "%",sep = ""), paste((1-(1-level)/2)*100, "%",sep = ""))
rownames(vecbt)<-x$var.name
if (verbose){
cat("\n"," Back transformed data", "\n", sep = " ")
cat("---------------------------------------------", "\n", sep = " ")
print(vecbt)
cat("---------------------------------------------", "\n\n", sep = " ")
}
invisible(vecbt)
}
## source file: /scratch/gouwar.j/cran-all/cranData/AID/R/confInt.R
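`confInt.boxcoxnc` builds a t-based interval for the mean on the transformed scale and inverts the Box-Cox transform, `(v*lambda + 1)^(1/lambda)` (or `exp(v)` when lambda is 0), then subtracts the shift `lambda2`. A rough Python sketch of that back-transformation (the function name is hypothetical, and a normal quantile stands in for the R code's `qt()` with n-1 degrees of freedom):

```python
import math
from statistics import NormalDist, mean, stdev

def backtransform_ci(tf_data, lam, lam2=0.0, level=0.95):
    """CI for the mean on the Box-Cox-transformed scale, inverted back to
    the original scale as in confInt.boxcoxnc above."""
    n = len(tf_data)
    m, s = mean(tf_data), stdev(tf_data)
    crit = NormalDist().inv_cdf(1 - (1 - level) / 2)  # qt() in the R code
    half = crit * s / math.sqrt(n)
    tf = (m, m - half, m + half)       # (mean, lower, upper) on transformed scale
    if lam != 0:
        bt = tuple((v * lam + 1.0) ** (1.0 / lam) for v in tf)
    else:
        bt = tuple(math.exp(v) for v in tf)
    return tuple(v - lam2 for v in bt)  # undo the lambda2 shift
```

The grouped variant in `confInt.boxcoxfr` applies the same inversion fold-by-fold for each level of the factor.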
confInt.boxcoxfr<- function(x, level = 0.95, plot = TRUE, xlab = NULL, ylab = NULL, title = NULL, width = NULL, verbose = TRUE,...){
if ((level<=0)|(level>=1)) stop("Confidence level must be between 0 and 1")
if (max((x$shapiro$p.value<x$alpha))==1) stop(paste("Transformed data in each group must be normally distributed at alpha = ",x$alpha,sep = ""))
if (max((x$shapiro$p.value<x$alpha))!=1){
k = length(levels(x$x))
stor1 = stor2 = stor3 = NULL
for (i in 1:k) {
datasub <- x$tf.data[which(x$x == (levels(x$x)[i]))]
meantf<-mean(datasub)
lowertf <- meantf-qt((1-level)/2,df = (length(datasub)-1),lower.tail = FALSE)*sd(datasub)/sqrt(length(datasub))
uppertf <- meantf+qt((1-level)/2,df = (length(datasub)-1),lower.tail = FALSE)*sd(datasub)/sqrt(length(datasub))
stor1 = c(stor1, meantf)
stor2 = c(stor2, lowertf)
stor3 = c(stor3, uppertf)
}
mattf <- cbind(stor1, stor2, stor3)
if (x$lambda.hat != 0) matbt <- (mattf*x$lambda.hat+1)^(1/x$lambda.hat)-x$lambda2
if (x$lambda.hat == 0) matbt <- exp(mattf)-x$lambda2
}
colnames(matbt)<-c("Mean", paste((1-level)/2*100, "%",sep = ""), paste((1-(1-level)/2)*100, "%",sep = ""))
rownames(matbt)<-levels(x$x)
if (verbose){
cat("\n"," Back transformed data", "\n", sep = " ")
cat("-----------------------------------------", "\n", sep = " ")
print(matbt)
cat("-----------------------------------------", "\n\n", sep = " ")
}
resp <- trt <- NULL
if (plot == TRUE){
df <- data.frame(trt = levels(x$x), resp = matbt[,1])
limits <- aes(ymax = matbt[,3], ymin = matbt[,2])
out <- ggplot(df, aes(y = resp, x = trt))
if (is.null(width)) width <- 0.15
out <- out + geom_point() + geom_errorbar(limits, width = width, size = 0.8)
if (is.null(ylab)) out <- out + ylab(x$y.name) else out <- out + ylab(ylab)
if (is.null(xlab)) out <- out + xlab(x$x.name) else out <- out + xlab(xlab)
if (is.null(title)) out <- out + ggtitle("") else out <- out + ggtitle(title)
plot(out)
}
invisible(matbt)
}
## source file: /scratch/gouwar.j/cran-all/cranData/AID/R/confInt.boxcoxfr.R
confInt.boxcoxmeta<- function(x, level = 0.95, verbose = TRUE,...){
if ((level<=0)|(level>=1)) stop("Confidence level must be between 0 and 1")
if (all(x$result$P.Value<=x$alpha)) stop(paste("Transformed data must be normally distributed at alpha = ",x$alpha,sep = ""))
meantf <- mean(x$tf.data)
lowertf <- mean(x$tf.data)-qt((1-level)/2,df = (length(x$tf.data)-1),lower.tail = FALSE)*sd(x$tf.data)/sqrt(length(x$tf.data))
uppertf <- mean(x$tf.data)+qt((1-level)/2,df = (length(x$tf.data)-1),lower.tail = FALSE)*sd(x$tf.data)/sqrt(length(x$tf.data))
vectf <- c(meantf, lowertf, uppertf)
if (x$lambda.hat != 0) vecbt <- (vectf*x$lambda.hat+1)^(1/x$lambda.hat)
if (x$lambda.hat == 0) vecbt <- exp(vectf)
vecbt<- vecbt-x$lambda2
vecbt<- matrix(vecbt,1,3)
colnames(vecbt)<-c("Mean", paste((1-level)/2*100, "%",sep = ""), paste((1-(1-level)/2)*100, "%",sep = ""))
rownames(vecbt)<-x$var.name
if (verbose){
cat("\n"," Back transformed data", "\n", sep = " ")
cat("---------------------------------------------", "\n", sep = " ")
print(vecbt)
cat("---------------------------------------------", "\n\n", sep = " ")
}
invisible(vecbt)
}
## source file: /scratch/gouwar.j/cran-all/cranData/AID/R/confInt.boxcoxmeta.R
#' @title Augmented Inverse Probability Weighting (AIPW)
#'
#' @description An R6Class of AIPW for estimating average causal effects from users' inputs of exposure, outcome, covariates and related
#' libraries for estimating the efficient influence function.
#'
#' @details An AIPW object is constructed by `new()` from users' inputs of data and causal structure; `fit()` then fits the data using the
#' libraries in `Q.SL.library` and `g.SL.library` with `k_split` cross-fitting, and results are provided via the `summary()` method.
#' After using `fit()` and/or `summary()` methods, propensity scores and inverse probability weights by exposure status can be
#' examined with `plot.p_score()` and `plot.ip_weights()`, respectively.
#'
#' If outcome is missing, analysis assumes missing at random (MAR) by estimating propensity scores of I(A=a, observed=1) with all covariates `W`.
#' (`W.Q` and `W.g` are disabled.) Missing exposure is not supported.
#'
#' See examples for illustration.
#'
#' @section Constructor:
#' \code{AIPW$new(Y = NULL, A = NULL, W = NULL, W.Q = NULL, W.g = NULL, Q.SL.library = NULL, g.SL.library = NULL, k_split = 10, verbose = TRUE, save.sl.fit = FALSE)}
#'
#' ## Constructor Arguments
#' \tabular{lll}{
#' \strong{Argument} \tab \strong{Type} \tab \strong{Details} \cr
#'     \code{Y} \tab Numeric \tab A vector of outcome values (binary (0, 1) or continuous) \cr
#' \code{A} \tab Integer \tab A vector of binary exposure (0 or 1) \cr
#' \code{W} \tab Data \tab Covariates for \strong{both} exposure and outcome models. \cr
#' \code{W.Q} \tab Data \tab Covariates for the \strong{outcome} model (Q).\cr
#' \code{W.g} \tab Data \tab Covariates for the \strong{exposure} model (g). \cr
#' \code{Q.SL.library} \tab SL.library \tab Algorithms used for the \strong{outcome} model (Q). \cr
#' \code{g.SL.library} \tab SL.library \tab Algorithms used for the \strong{exposure} model (g). \cr
#' \code{k_split} \tab Integer \tab Number of folds for splitting (Default = 10).\cr
#' \code{verbose} \tab Logical \tab Whether to print the result (Default = TRUE) \cr
#' \code{save.sl.fit} \tab Logical \tab Whether to save Q.fit and g.fit (Default = FALSE) \cr
#' }
#'
#' ## Constructor Argument Details
#' \describe{
#'    \item{\code{W}, \code{W.Q} & \code{W.g}}{Each can be a vector, matrix or data.frame. If and only if `W == NULL`, `W.Q` and `W.g` are used in its place. }
#' \item{\code{Q.SL.library} & \code{g.SL.library}}{Machine learning algorithms from [SuperLearner] libraries}
#'    \item{\code{k_split}}{It ranges from 1 to the number of observations minus 1.
#'                      If k_split=1, no cross-fitting is used; if k_split>=2, cross-fitting is used
#'                      (e.g., `k_split=10` uses 9/10 of the data for estimation and the remaining 1/10 held out for prediction).
#'                      \strong{NOTE: it is recommended to use cross-fitting.} }
#'    \item{\code{save.sl.fit}}{This option allows users to save the fitted sl objects (libs$Q.fit & libs$g.fit) for debugging.
#'    \strong{Warning: Saving the SuperLearner fitted objects may cause substantial storage/memory use.}}
#' }
#'
#'
#' @section Public Methods:
#' \tabular{lll}{
#' \strong{Methods} \tab \strong{Details} \tab \strong{Link} \cr
#' \code{fit()} \tab Fit the data to the [AIPW] object \tab [fit.AIPW] \cr
#' \code{stratified_fit()}\tab Fit the data to the [AIPW] object stratified by `A` \tab [stratified_fit.AIPW] \cr
#' \code{summary()} \tab Summary of the average treatment effects from AIPW \tab [summary.AIPW_base]\cr
#' \code{plot.p_score()} \tab Plot the propensity scores by exposure status \tab [plot.p_score]\cr
#' \code{plot.ip_weights()} \tab Plot the inverse probability weights using truncated propensity scores \tab [plot.ip_weights]\cr
#' }
#'
#' @section Public Variables:
#' \tabular{lll}{
#' \strong{Variable} \tab \strong{Generated by} \tab \strong{Return} \cr
#' \code{n} \tab Constructor \tab Number of observations \cr
#' \code{stratified_fitted}  \tab `stratified_fit()` \tab Indicator that the outcome model was fitted stratified by exposure status \cr
#' \code{obs_est} \tab `fit()` & `summary()` \tab Components calculating average causal effects \cr
#' \code{estimates} \tab `summary()` \tab A list of Risk difference, risk ratio, odds ratio \cr
#' \code{result} \tab `summary()` \tab A matrix containing RD, ATT, ATC, RR and OR with their SEs and 95% CIs  \cr
#' \code{g.plot} \tab `plot.p_score()` \tab A density plot of propensity scores by exposure status\cr
#' \code{ip_weights.plot} \tab `plot.ip_weights()` \tab A box plot of inverse probability weights \cr
#' \code{libs} \tab `fit()` \tab [SuperLearner] libraries and their fitted objects \cr
#' \code{sl.fit} \tab Constructor \tab A wrapper function for fitting [SuperLearner] \cr
#' \code{sl.predict} \tab Constructor \tab A wrapper function using \code{sl.fit} to predict \cr
#' }
#'
#' ## Public Variable Details
#' \describe{
#'    \item{\code{stratified_fitted}}{An indicator of whether the outcome model was fitted stratified by exposure status in the `fit()` method.
#'    Only when `stratified_fit()` is used (setting `stratified_fitted = TRUE`) does `summary()` output average treatment effects among the treated and the controls.}
#' \item{\code{obs_est}}{After using `fit()` and `summary()` methods, this list contains the propensity scores (`p_score`),
#' counterfactual predictions (`mu`, `mu1` & `mu0`) and
#' efficient influence functions (`aipw_eif1` & `aipw_eif0`) for later average treatment effect calculations.}
#' \item{\code{g.plot}}{This plot is generated by `ggplot2::geom_density`}
#' \item{\code{ip_weights.plot}}{This plot uses truncated propensity scores stratified by exposure status (`ggplot2::geom_boxplot`)}
#' }
#'
#' @return \code{AIPW} object
#'
#' @references Zhong Y, Kennedy EH, Bodnar LM, Naimi AI (2021, In Press). AIPW: An R Package for Augmented Inverse Probability Weighted Estimation of Average Causal Effects. \emph{American Journal of Epidemiology}.
#' @references Robins JM, Rotnitzky A (1995). Semiparametric efficiency in multivariate regression models with missing data. \emph{Journal of the American Statistical Association}.
#' @references Chernozhukov V, Chetverikov V, Demirer M, et al (2018). Double/debiased machine learning for treatment and structural parameters. \emph{The Econometrics Journal}.
#' @references Kennedy EH, Sjolander A, Small DS (2015). Semiparametric causal inference in matched cohort studies. \emph{Biometrika}.
#'
#'
#' @examples
#' library(SuperLearner)
#' library(ggplot2)
#'
#' #create an object
#' aipw_sl <- AIPW$new(Y=rbinom(100,1,0.5), A=rbinom(100,1,0.5),
#' W.Q=rbinom(100,1,0.5), W.g=rbinom(100,1,0.5),
#' Q.SL.library="SL.mean",g.SL.library="SL.mean",
#' k_split=1,verbose=FALSE)
#'
#' #fit the object
#' aipw_sl$fit()
#' # or use `aipw_sl$stratified_fit()` to estimate ATE and ATT/ATC
#'
#' #calculate the results
#' aipw_sl$summary(g.bound = 0.025)
#'
#' #check the propensity scores by exposure status after truncation
#' aipw_sl$plot.p_score()
#'
#' @export
AIPW <- R6::R6Class(
"AIPW",
portable = TRUE,
inherit = AIPW_base,
public = list(
#-------------------------public fields-----------------------------#
libs =list(Q.SL.library=NULL,
Q.fit = NULL,
g.SL.library=NULL,
g.fit = NULL,
validation_index = NULL,
validation_index.Q = NULL),
sl.fit = NULL,
sl.predict = NULL,
#-------------------------constructor-----------------------------#
initialize = function(Y=NULL, A=NULL, verbose=TRUE,
W=NULL, W.Q=NULL, W.g=NULL,
Q.SL.library=NULL, g.SL.library=NULL,
k_split=10, save.sl.fit=FALSE){
#-----initialize from AIPW_base class-----#
super$initialize(Y=Y,A=A,verbose=verbose)
#decide covariate set(s): W.Q and W.g only works when W is null.
if (is.null(W) & private$Y.missing==FALSE){
if (any(is.null(W.Q),is.null(W.g))) {
stop("Insufficient covariate sets were provided.")
} else{
tryCatch({
private$Q.set=cbind(A, W.Q)
}, error = function(e) stop('Covariates dimension error: nrow(W.Q) != length(A)'))
private$g.set=W.g
}
} else if (is.null(W) & private$Y.missing==TRUE){
stop("`W.Q` and `W.g` are disabled when missing outcome is detected. Please provide covariates in `W`")
} else{
tryCatch({
private$Q.set=cbind(A, W)
}, error = function(e) stop('Covariates dimension error: nrow(W) != length(A)'))
private$g.set=W
}
#subset observations with complete outcome
private$Q.set = as.data.frame(private$Q.set)
private$g.set = as.data.frame(private$g.set)
#g.set is already a data.frame (coerced above); name a single-covariate column
if (ncol(private$g.set)==1) {
colnames(private$g.set) <- "Z"
}
#save input into private fields
private$k_split=k_split
#whether to save sl.fit (Q.fit and g.fit)
private$save.sl.fit = save.sl.fit
#check data length
if (length(private$Y)!=dim(private$Q.set)[1] | length(private$A)!=dim(private$g.set)[1]){
stop("Please check the dimension of the covariates")
}
#-----determine SuperLearner or sl3 and change accordingly-----#
if (is.character(Q.SL.library) & is.character(g.SL.library)) {
if (any(grepl("SL.",Q.SL.library)) & any(grepl("SL.",g.SL.library))){
#change future package loading
private$sl.pkg <- "SuperLearner"
#create a new local env for superlearner
private$sl.env = new.env()
#find the learners in global env and assign them into sl.env
private$sl.learners = grep("SL.",lsf.str(globalenv()),value = T)
lapply(private$sl.learners, function(x) assign(x=x,value=get(x,globalenv()),envir=private$sl.env))
#change wrapper functions
self$sl.fit = function(Y, X, SL.library, CV){
suppressMessages({
fit <- SuperLearner::SuperLearner(Y = Y, X = X, SL.library = SL.library, family= private$Y.type,
env=private$sl.env, cvControl = CV)
})
return(fit)
}
self$sl.predict = function(fit, newdata){
suppressMessages({
pred <- as.numeric(predict(fit,newdata = newdata)$pred)
})
return(pred)
}
} else{
stop("Input Q.SL.library and/or g.SL.library is not a valid SuperLearner library")
}
} else {
stop("Input Q.SL.library and/or g.SL.library is not a valid SuperLearner library")
}
#input sl libraries
self$libs$Q.SL.library=Q.SL.library
self$libs$g.SL.library=g.SL.library
#------input checking-----#
#check k_split value
if (private$k_split>=self$n){
stop("`k_split` >= number of observations is not allowed.")
}else if (private$k_split < 1){
stop("`k_split` < 1 is not allowed.")
}
#check verbose value
if (!is.logical(private$verbose)){
stop("`verbose` is not valid")
}
#check if SuperLearner and/or sl3 library is loaded
if (!any(names(sessionInfo()$otherPkgs) %in% c("SuperLearner"))){
warning("`SuperLearner` package is not loaded.")
}
#-------check if future.apply is loaded otherwise lapply would be used.------#
if (any(names(sessionInfo()$otherPkgs) %in% c("future.apply"))){
private$.f_lapply = function(iter,func) {
future.apply::future_lapply(iter,func,future.seed = T,future.packages = private$sl.pkg,future.globals = TRUE)
}
}else{
private$.f_lapply = function(iter,func) lapply(iter,func)
}
},
#-------------------------fit method-----------------------------#
fit = function(){
self$stratified_fitted = FALSE
#----------create index for cross-fitting---------#
private$cv$k_index <- sample(rep(1:private$k_split,ceiling(self$n/private$k_split))[1:self$n],replace = F)
private$cv$fold_index = split(1:self$n, private$cv$k_index)
private$cv$fold_length = sapply(private$cv$fold_index,length)
#create non-missing index for the outcome model
if (private$Y.missing) {
private$cv$fold_index.Q = lapply(private$cv$fold_index, function(x) x[x %in% which(private$observed==1)])
private$cv$fold_length.Q = sapply(private$cv$fold_index.Q,length)
} else{
private$cv$fold_index.Q = private$cv$fold_index
private$cv$fold_length.Q = private$cv$fold_length
}
iter <- 1:private$k_split
#----------------progress bar setup----------#
#check if progressr is loaded
if (any(names(sessionInfo()$otherPkgs) %in% c("progressr"))){
private$isLoaded_progressr = TRUE
pb <- progressr::progressor(along = iter)
}
#---------parallelization with future.apply------#
fitted <- private$.f_lapply(
iter=iter,
func=function(i,...){
#when k_split is 1 or 2, no inner cvControl is used
if (private$k_split==1){
train_index <- validation_index <- as.numeric(unlist(private$cv$fold_index))
cv_param <- list()
} else if (private$k_split==2){
train_index <- as.numeric(unlist(private$cv$fold_index[-i]))
validation_index <- as.numeric(unlist(private$cv$fold_index[i]))
cv_param <- list()
} else{
train_index <- as.numeric(unlist(private$cv$fold_index[-i]))
validation_index <- as.numeric(unlist(private$cv$fold_index[i]))
cv_param <- list(V=private$k_split-1,
validRows= private$.new_cv_index(val_fold=i , fold_length = private$cv$fold_length))
}
#when outcome is missing, subset the complete case for Q estimation
if (private$Y.missing){
if (private$k_split==1){
train_index.Q <- validation_index.Q <-as.numeric(unlist(private$cv$fold_index.Q))
cv_param.Q <- list()
} else if (private$k_split==2) {
train_index.Q <- as.numeric(unlist(private$cv$fold_index.Q[-i]))
validation_index.Q <- as.numeric(unlist(private$cv$fold_index.Q[i]))
cv_param.Q <- list()
} else {#special care for cross-fitting indices when outcome is missing
train_index.Q <- as.numeric(unlist(private$cv$fold_index.Q[-i]))
validation_index.Q <- as.numeric(unlist(private$cv$fold_index.Q[i]))
cv_param.Q <- list(V=private$k_split-1,
validRows= private$.new_cv_index(val_fold=i, fold_length =private$cv$fold_length.Q))
}
} else{
train_index.Q = train_index
validation_index.Q = validation_index
cv_param.Q <- cv_param
}
#split the sample based on the index
#Q outcome set
train_set.Q <- private$Q.set[train_index.Q,]
validation_set.Q <- private$Q.set[validation_index.Q,]
#g exposure set
train_set.g <- data.frame(private$g.set[train_index,])
validation_set.g <- data.frame(private$g.set[validation_index,])
colnames(train_set.g)=colnames(validation_set.g)=colnames(private$g.set) #make the g df colnames consistent
#Q model(outcome model: g-comp)
#fit with train set
Q.fit <- self$sl.fit(Y = private$Y[train_index.Q],
X = train_set.Q,
SL.library = self$libs$Q.SL.library,
CV= cv_param.Q)
# predict on validation set
mu0 <- self$sl.predict(Q.fit,newdata=transform(validation_set.Q, A = 0)) #Q0_pred
mu1 <- self$sl.predict(Q.fit,newdata=transform(validation_set.Q, A = 1)) #Q1_pred
#g model(exposure model: propensity score)
# fit with train set
g.fit <- self$sl.fit(Y=private$AxObserved[train_index],
X=train_set.g,
SL.library = self$libs$g.SL.library,
CV= cv_param)
# predict on validation set
raw_p_score <- self$sl.predict(g.fit,newdata = validation_set.g) #g_pred
#add metadata
names(validation_index) <- rep(i,length(validation_index))
if (private$isLoaded_progressr){
pb(sprintf("No.%g of %g iterations", i, private$k_split))
}
if (private$save.sl.fit){
output <- list(validation_index, validation_index.Q, Q.fit, mu0, mu1, g.fit, raw_p_score)
names(output) <- c("validation_index","validation_index.Q","Q.fit","mu0","mu1","g.fit","raw_p_score")
} else {
output <- list(validation_index, validation_index.Q, mu0, mu1, raw_p_score)
names(output) <- c("validation_index","validation_index.Q","mu0","mu1","raw_p_score")
}
return(output)
})
#store fitted values from future to member variables
for (i in fitted){
#add estimates based on the val index
self$obs_est$mu0[i$validation_index.Q] <- i$mu0
self$obs_est$mu1[i$validation_index.Q] <- i$mu1
self$obs_est$raw_p_score[i$validation_index] <- i$raw_p_score
#append fitted objects
if (private$save.sl.fit) {
self$libs$Q.fit = append(self$libs$Q.fit, list(i$Q.fit))
self$libs$g.fit = append(self$libs$g.fit, list(i$g.fit))
}
self$libs$validation_index = append(self$libs$validation_index, i$validation_index)
self$libs$validation_index.Q = append(self$libs$validation_index.Q, i$validation_index.Q)
}
self$obs_est$mu[private$observed==1] <- self$obs_est$mu0[private$observed==1]*(1-private$A[private$observed==1]) +
self$obs_est$mu1[private$observed==1]*(private$A[private$observed==1])#Q_pred
if (private$verbose){
message("Done!\n")
}
invisible(self)
},
#-------------------------stratified_fit method-----------------------------#
stratified_fit = function(){
self$stratified_fitted = TRUE
#----------create index for cross-fitting---------#
private$cv$k_index <- sample(rep(1:private$k_split,ceiling(self$n/private$k_split))[1:self$n],replace = F)
private$cv$fold_index = split(1:self$n, private$cv$k_index)
private$cv$fold_length = sapply(private$cv$fold_index,length)
#create non-missing index for the outcome model
if (private$Y.missing) {
private$cv$fold_index.Q = lapply(private$cv$fold_index, function(x) x[x %in% which(private$observed==1)])
private$cv$fold_length.Q = sapply(private$cv$fold_index.Q,length)
} else{
private$cv$fold_index.Q = private$cv$fold_index
private$cv$fold_length.Q = private$cv$fold_length
}
iter <- 1:private$k_split
#----------------progress bar setup----------#
#check if progressr is loaded
if (any(names(sessionInfo()$otherPkgs) %in% c("progressr"))){
private$isLoaded_progressr = TRUE
pb <- progressr::progressor(along = iter)
}
#---------parallelization with future.apply------#
fitted <- private$.f_lapply(
iter=iter,
func=function(i,...){
#when k_split is 1 or 2, no inner cvControl is used
if (private$k_split==1){
train_index <- validation_index <- as.numeric(unlist(private$cv$fold_index))
} else if (private$k_split>=2){
train_index <- as.numeric(unlist(private$cv$fold_index[-i]))
validation_index <- as.numeric(unlist(private$cv$fold_index[i]))
}
cv_param <- list()
#when outcome is missing, subset the complete case for Q estimation
if (private$Y.missing){
if (private$k_split==1){
train_index.Q <- validation_index.Q <-as.numeric(unlist(private$cv$fold_index.Q))
} else if (private$k_split>=2) {
train_index.Q <- as.numeric(unlist(private$cv$fold_index.Q[-i]))
validation_index.Q <- as.numeric(unlist(private$cv$fold_index.Q[i]))
}
cv_param.Q <- list()
} else{
train_index.Q = train_index
validation_index.Q = validation_index
cv_param.Q <- cv_param
}
#Q model(outcome model: g-comp)
#fit with train set
#A==0
train_index.Q0 <- intersect(train_index.Q, which(private$A==0))
Q0.fit <- self$sl.fit(Y = private$Y[train_index.Q0],
X = private$Q.set[train_index.Q0,],
SL.library = self$libs$Q.SL.library,
CV= cv_param.Q)
#A==1
train_index.Q1 <- intersect(train_index.Q, which(private$A==1))
Q1.fit <- self$sl.fit(Y = private$Y[train_index.Q1],
X = private$Q.set[train_index.Q1,],
SL.library = self$libs$Q.SL.library,
CV= cv_param.Q)
# predict on validation set
mu0 <- self$sl.predict(Q0.fit,newdata=private$Q.set[validation_index.Q,]) #Q0_pred
mu1 <- self$sl.predict(Q1.fit,newdata=private$Q.set[validation_index.Q,]) #Q1_pred
#g model(exposure model: propensity score)
#g exposure set
train_set.g <- data.frame(private$g.set[train_index,])
validation_set.g <- data.frame(private$g.set[validation_index,])
colnames(train_set.g)=colnames(validation_set.g)=colnames(private$g.set) #make the g df colnames consistent
# fit with train set
g.fit <- self$sl.fit(Y=private$AxObserved[train_index],
X=train_set.g,
SL.library = self$libs$g.SL.library,
CV= cv_param)
# predict on validation set
raw_p_score <- self$sl.predict(g.fit,newdata = validation_set.g) #g_pred
#add metadata
names(validation_index) <- rep(i,length(validation_index))
if (private$isLoaded_progressr){
pb(sprintf("No.%g of %g iterations", i, private$k_split))
}
if (private$save.sl.fit){
Q.fit <- list(Q0=Q0.fit, Q1= Q1.fit)
output <- list(validation_index, validation_index.Q, Q.fit, mu0, mu1, g.fit, raw_p_score)
names(output) <- c("validation_index","validation_index.Q","Q.fit","mu0","mu1","g.fit","raw_p_score")
} else {
output <- list(validation_index, validation_index.Q, mu0, mu1, raw_p_score)
names(output) <- c("validation_index","validation_index.Q","mu0","mu1","raw_p_score")
}
return(output)
})
#store fitted values from future to member variables
for (i in fitted){
#add estimates based on the val index
self$obs_est$mu0[i$validation_index.Q] <- i$mu0
self$obs_est$mu1[i$validation_index.Q] <- i$mu1
self$obs_est$raw_p_score[i$validation_index] <- i$raw_p_score
#append fitted objects
if (private$save.sl.fit) {
self$libs$Q.fit = append(self$libs$Q.fit, list(i$Q.fit))
self$libs$g.fit = append(self$libs$g.fit, list(i$g.fit))
}
self$libs$validation_index = append(self$libs$validation_index, i$validation_index)
self$libs$validation_index.Q = append(self$libs$validation_index.Q, i$validation_index.Q)
}
self$obs_est$mu[private$observed==1] <- self$obs_est$mu0[private$observed==1]*(1-private$A[private$observed==1]) +
self$obs_est$mu1[private$observed==1]*(private$A[private$observed==1])#Q_pred
if (private$verbose){
message("Done!\n")
}
invisible(self)
}
),
#-------------------------private fields and methods----------------------------#
private = list(
#input
Q.set=NULL,
g.set=NULL,
k_split=NULL,
save.sl.fit=FALSE,
cv = list(
#a vector stores the groups for splitting
k_index= NULL,
#a list of indices for each fold
fold_index= NULL,
fold_index.Q = NULL,
#a vector of length(fold_index[[i]])
fold_length = NULL,
fold_length.Q = NULL
),
fitted=NULL,
sl.pkg =NULL,
sl.env=NULL,
sl.learners = NULL,
isLoaded_progressr = FALSE,
#private methods
#lapply or future_lapply
.f_lapply =NULL,
#create new index for training set
.new_cv_index = function(val_fold,fold_length=private$cv$fold_length, k_split=private$k_split){
train_fold_length = c(0,fold_length[-val_fold])
train_fold_cumsum = cumsum(train_fold_length)
new_train_index= lapply(1:(k_split-1),
function(x) {
(1:train_fold_length[[x+1]])+ train_fold_cumsum[[x]]
}
)
names(new_train_index) = names(train_fold_length[-1])
return(new_train_index)
}
)
)
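The cross-fitting indices in `fit()` and `stratified_fit()` come from `sample(rep(1:k_split, ceiling(n/k_split))[1:n])` followed by `split()`: each observation is randomly assigned to one of `k_split` folds of near-equal size. A small Python sketch of the same fold assignment (names are illustrative):

```python
import random

def make_folds(n, k, seed=0):
    """Assign each of n observations to one of k near-equal folds, mirroring
    sample(rep(1:k_split, ceiling(n/k_split))[1:n]) followed by split()."""
    labels = (list(range(k)) * -(-n // k))[:n]  # rep 1:k, ceiling(n/k) times, keep n
    random.Random(seed).shuffle(labels)         # random permutation, as sample() does
    folds = {j: [] for j in range(k)}
    for i, j in enumerate(labels):
        folds[j].append(i)
    return folds
```

Each fold then serves once as the validation set while the remaining folds train the Q and g models.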
#' @name fit
#' @aliases fit.AIPW
#' @title Fit the data to the [AIPW] object
#'
#' @description
#' Fitting the data into the [AIPW] object with/without cross-fitting to estimate the efficient influence functions
#'
#' @section R6 Usage:
#' \code{$fit()}
#'
#' @return A fitted [AIPW] object with `obs_est` and `libs` (public variables)
#'
#' @seealso [AIPW]
NULL
#' @name stratified_fit
#' @aliases stratified_fit.AIPW
#' @title Fit the data to the [AIPW] object stratified by `A` for the outcome model
#'
#' @description
#' Fitting the data into the [AIPW] object with/without cross-fitting to estimate the efficient influence functions.
#' Outcome model is fitted, stratified by exposure status `A`
#'
#' @section R6 Usage:
#' \code{$stratified_fit.AIPW()}
#'
#' @return A fitted [AIPW] object with `obs_est` and `libs` (public variables)
#'
#' @seealso [AIPW]
NULL
## source file: /scratch/gouwar.j/cran-all/cranData/AIPW/R/AIPW.R
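The estimator that `AIPW_base$summary()` assembles from `mu0`, `mu1` and the propensity scores is the standard AIPW/efficient-influence-function plug-in: `psi1 = mean(A/g * (Y - mu) + mu1)` and `psi0 = mean((1-A)/(1-g) * (Y - mu) + mu0)`. A minimal Python sketch of the point estimate, ignoring missing outcomes and propensity-score truncation (the function name is hypothetical):

```python
def aipw_ate(Y, A, g, mu0, mu1):
    """Plug-in AIPW average treatment effect from outcome Y, binary exposure A,
    propensity scores g, and counterfactual outcome predictions mu0/mu1."""
    n = len(Y)
    mu = [m1 if a == 1 else m0 for a, m0, m1 in zip(A, mu0, mu1)]
    eif1 = [a / gi * (y - m) + m1
            for y, a, gi, m, m1 in zip(Y, A, g, mu, mu1)]
    eif0 = [(1 - a) / (1 - gi) * (y - m) + m0
            for y, a, gi, m, m0 in zip(Y, A, g, mu, mu0)]
    return sum(eif1) / n - sum(eif0) / n  # risk difference (ATE)
```

When the outcome predictions are exactly right the residual terms vanish and the estimate reduces to `mean(mu1) - mean(mu0)`, which is what the test below checks.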
#' @title Augmented Inverse Probability Weighting Base Class (AIPW_base)
#'
#' @description A base class for AIPW that implements the common methods, such as \code{summary()} and \code{plot.p_score()}, inherited by the [AIPW] and [AIPW_tmle] classes
#'
#' @docType class
#'
#' @importFrom R6 R6Class
#'
#' @return \code{AIPW} base object
#' @seealso [AIPW] and [AIPW_tmle]
#' @format \code{\link{R6Class}} object.
#' @export
AIPW_base <- R6::R6Class(
"AIPW_base",
portable = TRUE,
class = TRUE,
public = list(
#-------------------------public fields-----------------------------#
#Number of observations
n = NULL,
#Number of exposed
n_A1 = NULL,
#Number of unexposed
n_A0 = NULL,
#Indicator of whether the outcome model was fitted stratified by exposure status (only applicable to the AIPW class or manual setup)
stratified_fitted = FALSE,
#Components for estimating the influence functions of all observations to calculate average causal effects
obs_est = list(mu0 = NULL,
mu1 = NULL,
mu = NULL,
raw_p_score = NULL,
p_score = NULL,
ip_weights = NULL,
aipw_eif1 = NULL,
aipw_eif0 = NULL),
#ATE: Risk difference, risk ratio, odds ratio and variance-covariance matrix for SE calculation
estimates = list(risk_A1 = NULL,
risk_A0 = NULL,
RD = NULL,
RR = NULL,
OR = NULL,
sigma_covar = NULL),
#ATT: Risk difference
ATT_estimates = list(RD = NULL),
#ATC: Risk difference
ATC_estimates = list(RD = NULL),
#A matrix containing RD, RR and OR with their SEs and 95% CIs
result = NULL,
#A density plot of propensity scores by exposure status (`ggplot2::geom_density`)
g.plot = NULL,
#A box plot of inverse probability weights using truncated propensity scores by exposure status (`ggplot2::geom_boxplot`)
ip_weights.plot = NULL,
#-------------------------constructor-----------------------------#
initialize = function(Y=NULL, A=NULL,verbose=TRUE){
#save input into private fields
private$Y=as.numeric(Y)
private$A=as.numeric(A)
private$observed = as.numeric(!is.na(private$Y))
private$verbose=verbose
#check data length
if (length(private$Y)!=length(private$A)){
stop("Please check the dimension of the data")
}
#detect outcome is binary or continuous
if (length(unique(private$Y[!is.na(private$Y)]))==2) {
private$Y.type = 'binomial'
} else {
private$Y.type = 'gaussian'
}
#check missing exposure
if (any(is.na(private$A))){
stop("Missing exposure is not allowed.")
}
#check missing outcome
if (any(private$observed == 0)){
warning("Missing outcome is detected. Analysis assumes missing at random (MAR).")
private$Y.missing = TRUE
}
#setup
private$AxObserved = private$A * private$observed #I(A=a, observed==1)
self$n <- length(private$A)
self$n_A1 <- sum(private$A==1)
self$n_A0 <- sum(private$A==0)
self$obs_est$mu0 <- rep(NA,self$n)
self$obs_est$mu1 <- rep(NA,self$n)
self$obs_est$mu <- rep(NA,self$n)
self$obs_est$raw_p_score <- rep(NA,self$n)
},
#-------------------------summary method-----------------------------#
summary = function(g.bound=0.025){
#p_score truncation
if (length(g.bound) > 2){
warning('More than two `g.bound` are provided. Only the first two will be used.')
g.bound = g.bound[1:2]
} else if (length(g.bound) ==1 & g.bound[1] >= 0.5){
stop("`g.bound` >= 0.5 is not allowed when only one `g.bound` value is provided")
}
private$g.bound=g.bound
#check g.bound value
if (!is.numeric(private$g.bound)){
stop("`g.bound` must be numeric")
} else if (max(private$g.bound) > 1 | min(private$g.bound) < 0){
stop("`g.bound` must between 0 and 1")
}
self$obs_est$p_score <- private$.bound(self$obs_est$raw_p_score)
#inverse probability weights
self$obs_est$ip_weights <- (as.numeric(private$A==1)/self$obs_est$p_score) + (as.numeric(private$A==0)/(1-self$obs_est$p_score))
##------AIPW est------##
#### ATE EIF
self$obs_est$aipw_eif1 <- ifelse(private$observed == 1,
(as.numeric(private$A[private$observed==1]==1)/self$obs_est$p_score[private$observed==1])*
(private$Y[private$observed==1] - self$obs_est$mu[private$observed==1]) +
self$obs_est$mu1[private$observed==1],
0)
self$obs_est$aipw_eif0 <- ifelse(private$observed == 1,
(as.numeric(private$A[private$observed==1]==0)/(1-self$obs_est$p_score[private$observed==1]))*
(private$Y[private$observed==1] - self$obs_est$mu[private$observed==1]) +
self$obs_est$mu0[private$observed==1],
0)
root_n <- sqrt(self$n)
## risk for the treated and controls
self$estimates$risk_A1 <- private$get_RD(self$obs_est$aipw_eif1, 0, root_n)
self$estimates$risk_A0 <- private$get_RD(self$obs_est$aipw_eif0, 0, root_n)
## risk difference
self$estimates$RD <- private$get_RD(self$obs_est$aipw_eif1, self$obs_est$aipw_eif0, root_n)
#results on additive scales
self$result <- cbind(matrix(c(self$estimates$risk_A1, self$estimates$risk_A0,
self$estimates$RD), nrow=3, byrow=T),
c( self$n_A1, self$n_A0,rep(self$n,1)))
row.names(self$result) <- c("Risk of exposure", "Risk of control","Risk Difference")
colnames(self$result) <- c("Estimate","SE","95% LCL","95% UCL","N")
if (private$Y.type == 'binomial'){
## var-cov mat for rr and or calculation
self$estimates$sigma_covar <- private$get_sigma_covar(self$obs_est$aipw_eif0,self$obs_est$aipw_eif1)
## risk ratio
self$estimates$RR <- private$get_RR(self$obs_est$aipw_eif1,self$obs_est$aipw_eif0, self$estimates$sigma_covar, root_n)
## odds ratio
self$estimates$OR <- private$get_OR(self$obs_est$aipw_eif1,self$obs_est$aipw_eif0, self$estimates$sigma_covar, root_n)
#results on the multiplicative scale
mult_result <- cbind(matrix(c(self$estimates$RR, self$estimates$OR),nrow=2,byrow=T),self$n)
row.names(mult_result) <- c("Risk Ratio", "Odds Ratio")
self$result <- rbind(self$result, mult_result)
}
#### ATT/ATC
if (self$stratified_fitted) {
#ATT
self$ATT_estimates$RD <- private$get_ATT_RD(mu0 = self$obs_est$mu0[private$observed==1],
p_score = self$obs_est$p_score[private$observed==1],
A_level = 1, root_n=root_n, ATC = F)
self$ATC_estimates$RD <- private$get_ATT_RD(mu0 = self$obs_est$mu1[private$observed==1],
p_score = 1-self$obs_est$p_score[private$observed==1],
A_level = 0, root_n=root_n, ATC = T)
ATT_ATC_result <- matrix(c(self$ATT_estimates$RD, self$n,
self$ATC_estimates$RD, self$n), nrow = 2,byrow = T)
row.names(ATT_ATC_result) <- c("ATT Risk Difference","ATC Risk Difference")
self$result <- rbind(self$result, ATT_ATC_result)
}
if (private$verbose){
print(self$result, digits = 3)
}
invisible(self)
},
#-------------------------plot.p_score method-----------------------------#
plot.p_score = function(print.ip_weights = F){
#check if ggplot2 library is loaded
if (!any(names(sessionInfo()$otherPkgs) %in% c("ggplot2"))){
stop("`ggplot2` package is not loaded.")
}
plot_data_A = factor(private$A, levels = 0:1)
#input check
if (any(is.na(self$obs_est$raw_p_score))){
stop("Propensity scores are not estimated.")
} else if (is.null(self$obs_est$p_score)) {
#p_score before truncation (estimated ps)
plot_data = data.frame(A = plot_data_A,
p_score= self$obs_est$raw_p_score,
trunc = "Not truncated")
message("ATE has not been calculated.")
} else {
plot_data = rbind(data.frame(A = plot_data_A,
p_score= self$obs_est$raw_p_score,
trunc = "Not truncated"),
data.frame(A = plot_data_A,
p_score= self$obs_est$p_score,
trunc = "Truncated"))
}
self$g.plot = ggplot2::ggplot(data = plot_data,ggplot2::aes(x = p_score, group = A, color = A, fill=A)) +
ggplot2::geom_density(alpha=0.5) +
ggplot2::scale_x_continuous(limits = c(0,1)) +
ggplot2::facet_wrap(~trunc) +
      ggplot2::ggtitle("Propensity scores by exposure status") +
      ggplot2::theme_bw() +
      ggplot2::theme(legend.position = 'bottom') +
      ggplot2::xlab('Propensity Scores')
print(self$g.plot)
invisible(self)
}
,
#-------------------------plot.ip_weights method-----------------------------#
plot.ip_weights = function(){
#check if ggplot2 library is loaded
if (!any(names(sessionInfo()$otherPkgs) %in% c("ggplot2"))){
stop("`ggplot2` package is not loaded.")
}
plot_data_A = factor(private$A, levels = 0:1)
#input check
if (any(is.na(self$obs_est$raw_p_score))){
stop("Propensity scores are not estimated.")
} else if (is.null(self$obs_est$p_score)) {
stop("ATE has not been calculated.")
} else {
ipw_plot_data = data.frame(A = plot_data_A, ip_weights= self$obs_est$ip_weights)
self$ip_weights.plot = ggplot2::ggplot(data = ipw_plot_data, ggplot2::aes(y = ip_weights, x = A, fill = A)) +
ggplot2::geom_boxplot(alpha=0.5) +
          ggplot2::ggtitle("IP-weights using truncated propensity scores by exposure status") +
          ggplot2::theme_bw() +
          ggplot2::ylab('Inverse Probability Weights') +
          ggplot2::coord_flip() +
          ggplot2::theme(legend.position = 'bottom')
print(self$ip_weights.plot)
}
invisible(self)
}
),
#-------------------------private fields and methods----------------------------#
private = list(
#input
Y=NULL,
A=NULL,
observed=NULL,
AxObserved = NULL,
verbose=NULL,
g.bound=NULL,
#outcome type
Y.type = NULL,
Y.missing = FALSE,
#private methods
    #Use individual estimates of the efficient influence functions (obs_est$aipw_eif1 & obs_est$aipw_eif0) to calculate RD, RR and OR with SE and 95% CI
get_RD = function(aipw_eif1,aipw_eif0,root_n){
est <- mean(aipw_eif1 - aipw_eif0)
se <- stats::sd(aipw_eif1 - aipw_eif0)/root_n
ci <- get_ci(est,se,ratio=F)
output = c(est, se, ci)
names(output) = c("Estimate","SE","95% LCL","95% UCL")
return(output)
},
get_RR = function(aipw_eif1,aipw_eif0,sigma_covar,root_n){
est <- mean(aipw_eif1)/mean(aipw_eif0)
      #delta-method SE for log(RR)
      se <- sqrt((sigma_covar[1,1]/(mean(aipw_eif0)^2)) -
                   (2*sigma_covar[1,2]/(mean(aipw_eif1)*mean(aipw_eif0))) +
                   (sigma_covar[2,2]/(mean(aipw_eif1)^2)))/root_n
ci <- get_ci(est,se,ratio=T)
output = c(est, se, ci)
names(output) = c("Estimate","SE","95% LCL","95% UCL")
return(output)
},
get_OR = function(aipw_eif1,aipw_eif0,sigma_covar,root_n){
est <- (mean(aipw_eif1)/(1-mean(aipw_eif1))) / (mean(aipw_eif0)/(1-mean(aipw_eif0)))
      #delta-method SE for log(OR)
      se <- sqrt((sigma_covar[1,1]/((mean(aipw_eif0)^2)*(mean(1-aipw_eif0)^2))) -
                   (2*sigma_covar[1,2]/(mean(aipw_eif1)*mean(aipw_eif0)*mean(1-aipw_eif1)*mean(1-aipw_eif0))) +
                   (sigma_covar[2,2]/((mean(aipw_eif1)^2)*(mean(1-aipw_eif1)^2))))/root_n
ci <- get_ci(est,se,ratio=T)
output = c(est, se, ci)
names(output) = c("Estimate","SE","95% LCL","95% UCL")
return(output)
},
get_sigma_covar = function(aipw_eif0,aipw_eif1){
mat <- matrix(c(stats::var(aipw_eif0),
stats::cov(aipw_eif0,aipw_eif1),
stats::cov(aipw_eif1,aipw_eif0),
stats::var(aipw_eif1)),nrow=2)
return(mat)
},
#ATT/ATC calculation
get_ATT_RD = function(A =private$A[private$observed==1], Y = private$Y[private$observed==1],
mu0, p_score, A_level, root_n, ATC = F){
I_A = (A==A_level) / mean(A==A_level)
      I_A_com = as.numeric(A != A_level) / mean(A != A_level) #indicator for the complementary exposure group
eif <- I_A*Y - (I_A*(mu0) + I_A_com*(Y-mu0)*p_score/(1-p_score))
est <- mean(eif)
if (ATC){
est <- -1 * est
}
se <- stats::sd(eif - I_A*est)/root_n
ci <- get_ci(est,se,ratio=F)
output = c(est, se, ci)
names(output) = c("Estimate","SE","95% LCL","95% UCL")
return(output)
},
#setup the bounds for the propensity score to ensure the balance
.bound = function(p_score,bound = private$g.bound){
if (length(bound) == 1){
res <- base::ifelse(p_score<bound, bound,
base::ifelse(p_score > (1-bound), (1-bound) ,p_score))
} else {
res <- base::ifelse(p_score< min(bound), min(bound),
base::ifelse(p_score > max(bound), max(bound), p_score))
}
return(res)
}
)
)
#' @name summary
#' @aliases summary.AIPW_base
#' @title Summary of the average treatment effects from AIPW
#'
#' @description
#' Calculate average causal effects in RD, RR and OR in the fitted [AIPW] or [AIPW_tmle] object using the estimated efficient influence functions
#'
#' @section R6 Usage:
#' \code{$summary(g.bound = 0.025)} \cr
#' \code{$summary(g.bound = c(0.025,0.975))}
#'
#' @param g.bound Value between \[0,1\] at which the propensity score should be truncated.
#' Propensity score will be truncated to \eqn{[g.bound, 1-g.bound]} when one g.bound value is provided, or to \eqn{[min(g.bound), max(g.bound)]} when two values are provided.
#' \strong{Defaults to 0.025}.
#'
#' @seealso [AIPW] and [AIPW_tmle]
#'
#' @return `estimates` and `result` (public variables): Risks, Average treatment effect in RD, RR and OR.
NULL
#' @name plot.p_score
#' @title Plot the propensity scores by exposure status
#'
#' @description
#' Plot and check the balance of propensity scores by exposure status
#'
#' @section R6 Usage:
#' \code{$plot.p_score()}
#'
#' @seealso [AIPW] and [AIPW_tmle]
#'
#' @return `g.plot` (public variable): A density plot of propensity scores by exposure status (`ggplot2::geom_density`)
NULL
#' @name plot.ip_weights
#' @title Plot the inverse probability weights using truncated propensity scores by exposure status
#'
#' @description
#' Plot and check the distribution of inverse probability weights (from truncated propensity scores) by exposure status
#'
#' @section R6 Usage:
#' \code{$plot.ip_weights()}
#'
#' @seealso [AIPW] and [AIPW_tmle]
#'
#' @return `ip_weights.plot` (public variable): A box plot of inverse probability weights using truncated propensity scores by exposure status (`ggplot2::geom_boxplot`)
NULL
## File: R/AIPW_base.R
#' @title Augmented Inverse Probability Weighting with User-supplied Nuisance Estimates (AIPW_nuis)
#'
#' @description `AIPW_nuis` class for users to manually input nuisance functions (estimates from the exposure and the outcome models)
#'
#' @details Create an AIPW_nuis object that uses user-supplied nuisance estimates from the exposure model \eqn{P(A| W)}
#' and the outcome models \eqn{P(Y| do(A=0), W)} and \eqn{P(Y| do(A=1), W)}:
#' \deqn{
#' \psi(a) = E{[ I(A=a) / P(A=a|W) ] * [Y-P(Y=1|A,W)] + P(Y=1| do(A=a),W) }
#' }
#' Note: If outcome is missing, replace (A=a) with (A=a, observed=1) when estimating the propensity scores.
#'
#' @section Constructor:
#' \code{AIPW_nuis$new(Y = NULL, A = NULL, mu0 = NULL, mu1 = NULL, raw_p_score = NULL, verbose = TRUE, stratified_fitted = FALSE)}
#'
#' ## Constructor Arguments
#' \tabular{lll}{
#' \strong{Argument} \tab \strong{Type} \tab \strong{Details} \cr
#' \code{Y} \tab Integer \tab A vector of outcome (binary (0, 1) or continuous) \cr
#' \code{A} \tab Integer \tab A vector of binary exposure (0 or 1) \cr
#' \code{mu0} \tab Numeric \tab User input of \eqn{P(Y=1| do(A = 0),W_Q)} \cr
#' \code{mu1} \tab Numeric \tab User input of \eqn{P(Y=1| do(A = 1),W_Q)} \cr
#' \code{raw_p_score} \tab Numeric \tab User input of \eqn{P(A=a|W_g)} \cr
#' \code{verbose} \tab Logical \tab Whether to print the result (Default = TRUE) \cr
#'    \code{stratified_fitted} \tab Logical \tab Whether mu0 & mu1 were estimated using only the `A=0` & `A=1` subsets, respectively (Default = FALSE) \cr
#' }
#'
#' @section Public Methods:
#' \tabular{lll}{
#' \strong{Methods} \tab \strong{Details} \tab \strong{Link} \cr
#' \code{summary()} \tab Summary of the average treatment effects from AIPW \tab [summary.AIPW_base]\cr
#' \code{plot.p_score()} \tab Plot the propensity scores by exposure status \tab [plot.p_score]\cr
#' \code{plot.ip_weights()} \tab Plot the inverse probability weights using truncated propensity scores \tab [plot.ip_weights]\cr
#' }
#'
#' @section Public Variables:
#' \tabular{lll}{
#' \strong{Variable} \tab \strong{Generated by} \tab \strong{Return} \cr
#' \code{n} \tab Constructor \tab Number of observations \cr
#' \code{obs_est} \tab Constructor \tab Components calculating average causal effects \cr
#' \code{estimates} \tab `summary()` \tab A list of Risk difference, risk ratio, odds ratio \cr
#'  \code{result} \tab `summary()` \tab A matrix containing RD, ATT, ATC, RR and OR with their SEs and 95% CIs \cr
#' \code{g.plot} \tab `plot.p_score()` \tab A density plot of propensity scores by exposure status \cr
#' \code{ip_weights.plot} \tab `plot.ip_weights()` \tab A box plot of inverse probability weights \cr
#' }
#'
#' ## Public Variable Details
#' \describe{
#'  \item{\code{stratified_fitted}}{An indicator for whether `mu0` & `mu1` were estimated stratified by exposure status.
#'  Only when `stratified_fitted = TRUE` does `summary()` output average treatment effects among the treated and the controls.}
#' \item{\code{obs_est}}{This list includes propensity scores (`p_score`), counterfactual predictions (`mu`, `mu1` & `mu0`) and efficient influence functions (`aipw_eif1` & `aipw_eif0`)}
#' \item{\code{g.plot}}{This plot is generated by `ggplot2::geom_density`}
#' \item{\code{ip_weights.plot}}{This plot uses truncated propensity scores stratified by exposure status (`ggplot2::geom_boxplot`)}
#' }
#'
#' @return \code{AIPW_nuis} object
#'
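#' @examples
#' ## Illustrative sketch (not from the original documentation): nuisance
#' ## estimates supplied from simple glm fits. `out_fit`, `ps_fit`, `mu0`,
#' ## `mu1` and `ps` are hypothetical names, and cross-fitting is omitted
#' ## for brevity although it is recommended in practice.
#' \dontrun{
#' data(eager_sim_obs)
#' out_fit <- glm(sim_Y ~ sim_A + eligibility + age, data = eager_sim_obs,
#'                family = binomial)
#' ps_fit <- glm(sim_A ~ eligibility + age, data = eager_sim_obs,
#'               family = binomial)
#' mu1 <- predict(out_fit, newdata = transform(eager_sim_obs, sim_A = 1),
#'                type = "response")
#' mu0 <- predict(out_fit, newdata = transform(eager_sim_obs, sim_A = 0),
#'                type = "response")
#' ps <- predict(ps_fit, type = "response")
#' AIPW_nuis$new(Y = eager_sim_obs$sim_Y, A = eager_sim_obs$sim_A,
#'               mu0 = mu0, mu1 = mu1, raw_p_score = ps)$summary()
#' }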
#' @export
AIPW_nuis <- R6::R6Class(
"AIPW_tmle",
portable = TRUE,
inherit = AIPW_base,
public = list(
#-------------------------constructor----------------------------#
initialize = function(Y=NULL, A=NULL, mu0 = NULL , mu1 = NULL, raw_p_score = NULL, verbose=TRUE, stratified_fitted=FALSE){
#initialize from AIPW_base class
super$initialize(Y=Y,A=A,verbose=verbose)
message("Cross-fitting for estimating nuisance functions is recommended")
self$obs_est$mu0 <- mu0
self$obs_est$mu1 <- mu1
self$obs_est$mu <- self$obs_est$mu0*(1-private$A) + self$obs_est$mu1*(private$A)
self$obs_est$raw_p_score <- raw_p_score
self$stratified_fitted = stratified_fitted
}
)
)
## File: R/AIPW_nuis.R
#' @title Augmented Inverse Probability Weighting (AIPW) uses tmle or tmle3 as inputs
#'
#' @description `AIPW_tmle` class uses a fitted `tmle` or `tmle3` object as input
#'
#' @details Create an AIPW_tmle object that uses the estimated efficient influence function from a fitted `tmle` or `tmle3` object
#'
#' @section Constructor:
#' \code{AIPW_tmle$new(Y = NULL, A = NULL, tmle_fit = NULL, verbose = TRUE)}
#'
#' ## Constructor Arguments
#' \tabular{lll}{
#' \strong{Argument} \tab \strong{Type} \tab \strong{Details} \cr
#' \code{Y} \tab Integer \tab A vector of outcome (binary (0, 1) or continuous) \cr
#' \code{A} \tab Integer \tab A vector of binary exposure (0 or 1) \cr
#' \code{tmle_fit} \tab Object \tab A fitted `tmle` or `tmle3` object \cr
#' \code{verbose} \tab Logical \tab Whether to print the result (Default = TRUE)
#' }
#'
#' @section Public Methods:
#' \tabular{lll}{
#' \strong{Methods} \tab \strong{Details} \tab \strong{Link} \cr
#' \code{summary()} \tab Summary of the average treatment effects from AIPW \tab [summary.AIPW_base]\cr
#' \code{plot.p_score()} \tab Plot the propensity scores by exposure status \tab [plot.p_score]\cr
#' \code{plot.ip_weights()} \tab Plot the inverse probability weights using truncated propensity scores \tab [plot.ip_weights]\cr
#' }
#'
#' @section Public Variables:
#' \tabular{lll}{
#' \strong{Variable} \tab \strong{Generated by} \tab \strong{Return} \cr
#' \code{n} \tab Constructor \tab Number of observations \cr
#' \code{obs_est} \tab Constructor \tab Components calculating average causal effects \cr
#' \code{estimates} \tab `summary()` \tab A list of Risk difference, risk ratio, odds ratio \cr
#'  \code{result} \tab `summary()` \tab A matrix containing RD, ATT, ATC, RR and OR with their SEs and 95% CIs \cr
#' \code{g.plot} \tab `plot.p_score()` \tab A density plot of propensity scores by exposure status \cr
#' \code{ip_weights.plot} \tab `plot.ip_weights()` \tab A box plot of inverse probability weights \cr
#' }
#'
#' ## Public Variable Details
#' \describe{
#' \item{\code{obs_est}}{This list extracts from the fitted `tmle` object.
#' It includes propensity scores (`p_score`), counterfactual predictions (`mu`, `mu1` & `mu0`) and efficient influence functions (`aipw_eif1` & `aipw_eif0`)}
#' \item{\code{g.plot}}{This plot is generated by `ggplot2::geom_density`}
#' \item{\code{ip_weights.plot}}{This plot uses truncated propensity scores stratified by exposure status (`ggplot2::geom_boxplot`)}
#' }
#'
#' @return \code{AIPW_tmle} object
#'
#' @export
#'
#' @examples
#' vec <- function() sample(0:1,100,replace = TRUE)
#' df <- data.frame(replicate(4,vec()))
#' names(df) <- c("A","Y","W1","W2")
#'
#' ## From tmle
#' library(tmle)
#' library(SuperLearner)
#' tmle_fit <- tmle(Y=df$Y,A=df$A,W=subset(df,select=c("W1","W2")),
#' Q.SL.library="SL.glm",
#' g.SL.library="SL.glm",
#' family="binomial")
#' AIPW_tmle$new(A=df$A,Y=df$Y,tmle_fit = tmle_fit,verbose = TRUE)$summary()
AIPW_tmle <- R6::R6Class(
"AIPW_tmle",
portable = TRUE,
inherit = AIPW_base,
public = list(
#-------------------------constructor----------------------------#
initialize = function(Y=NULL,A=NULL,tmle_fit = NULL,verbose=TRUE){
#initialize from AIPW_base class
super$initialize(Y=Y,A=A,verbose=verbose)
#check the fitted object is tmle or tmle3 and import values accordingly
if (any(class(tmle_fit) %in% "tmle")){
message("Cross-fitting is supported only within the outcome model from a fitted tmle object (with cvQinit = TRUE)")
self$obs_est$mu0 <- tmle_fit$Qstar[,1]
self$obs_est$mu1 <- tmle_fit$Qstar[,2]
self$obs_est$mu <- self$obs_est$mu0*(1-private$A) + self$obs_est$mu1*(private$A)
self$obs_est$raw_p_score <- tmle_fit$g$g1W
} else {
stop("The tmle_fit is neither a `tmle` or `tmle3_Fit` object")
}
}
)
)
## File: R/AIPW_tmle.R
#' @title AIPW wrapper function
#'
#' @description
#' A wrapper function for `AIPW$new()$fit()$summary()`
#'
#' @param Y Outcome (binary integer: 0 or 1)
#' @param A Exposure (binary integer: 0 or 1)
#' @param verbose Whether to print the result (logical; Default = TRUE)
#' @param W Covariates for both exposure and outcome models (vector, matrix or data.frame). If null, this function will look for
#' inputs from `W.Q` and `W.g`.
#' @param W.Q Only valid when `W` is null, otherwise it would be replaced by `W`.
#' Covariates for outcome model (vector, matrix or data.frame).
#' @param W.g Only valid when `W` is null, otherwise it would be replaced by `W`.
#' Covariates for exposure model (vector, matrix or data.frame)
#' @param Q.SL.library SuperLearner libraries for outcome model
#' @param g.SL.library SuperLearner libraries for exposure model
#' @param k_split Number of splits for cross-fitting (integer; range: from 1 to number of observations - 1):
#' if k_split=1, no cross-fitting;
#' if k_split>=2, cross-fitting is used
#' (e.g., `k_split=10`, use 9/10 of the data to estimate and the remaining 1/10 leftover to predict).
#' NOTE: it's recommended to use cross-fitting.
#' @param g.bound Value between \[0,1\] at which the propensity score should be truncated. Defaults to 0.025.
#' @param stratified_fit An indicator for whether the outcome model is fitted stratified by exposure status.
#' When `stratified_fit = TRUE`, the wrapper calls `stratified_fit()` and `summary` also outputs average treatment effects among the treated and the controls.
#'
#' @export
#' @seealso [AIPW]
#' @return A fitted `AIPW` object with summarised results
#'
#' @examples
#' library(SuperLearner)
#' aipw_sl <- aipw_wrapper(Y=rbinom(100,1,0.5), A=rbinom(100,1,0.5),
#' W.Q=rbinom(100,1,0.5), W.g=rbinom(100,1,0.5),
#' Q.SL.library="SL.mean",g.SL.library="SL.mean",
#' k_split=1,verbose=FALSE)
aipw_wrapper = function(Y, A, verbose=TRUE,
W=NULL, W.Q=NULL, W.g=NULL,
Q.SL.library, g.SL.library,
k_split=10, g.bound=0.025,stratified_fit=FALSE){
aipw_obj <- AIPW$new(Y=Y,A=A,verbose=verbose,
W=W, W.Q=W.Q,W.g=W.g,
Q.SL.library=Q.SL.library, g.SL.library=g.SL.library,
k_split=k_split)
if (stratified_fit){
aipw_obj$stratified_fit()
} else{
aipw_obj$fit()
}
aipw_obj$summary(g.bound=g.bound)
invisible(aipw_obj)
}
## File: R/aipw_wrapper.R
#' Simulated Observational Study
#'
#' Datasets were simulated using baseline covariates (sampling with replacement) from the Effects of Aspirin in Gestation and Reproduction (EAGeR) study.
#' Data generating mechanisms were described in our manuscript (Zhong et al. (in preparation), Am. J. Epidemiol.).
#' True marginal causal effects on risk difference, log risk ratio and log odds ratio scales were attached to the dataset attributes (true_rd, true_logrr,true_logor).
#'
#' @docType data
#'
#' @usage data(eager_sim_obs)
#'
#' @format An object of class data.frame with 200 rows and 8 columns:
#' \describe{
#'   \item{sim_Y}{binary, simulated outcome, conditional on all other covariates in the dataset}
#'   \item{sim_A}{binary, simulated exposure, conditional on all other covariates except sim_Y}
#' \item{eligibility}{binary, indicator of the eligibility stratum}
#' \item{loss_num}{count, number of prior pregnancy losses}
#' \item{age}{continuous, age in years}
#' \item{time_try_pregnant}{count, months of conception attempts prior to randomization}
#' \item{BMI}{continuous, body mass index}
#' \item{meanAP}{continuous, mean arterial blood pressure}
#' }
#' @references Schisterman, E.F., Silver, R.M., Lesher, L.L., Faraggi, D., Wactawski-Wende, J., Townsend, J.M., Lynch, A.M., Perkins, N.J., Mumford, S.L. and Galai, N., 2014. Preconception low-dose aspirin and pregnancy outcomes: results from the EAGeR randomised trial. The Lancet, 384(9937), pp.29-36.
#' @references Zhong, Y., Naimi, A.I., Kennedy, E.H., (In preparation). AIPW: An R package for Augmented Inverse Probability Weighted Estimation of Average Causal Effects. American Journal of Epidemiology
#' @seealso [eager_sim_rct]
"eager_sim_obs"
#' Simulated Randomized Trial
#'
#' Datasets were simulated using baseline covariates (sampling with replacement) from the Effects of Aspirin in Gestation and Reproduction (EAGeR) study.
#'
#' @docType data
#'
#' @usage data(eager_sim_rct)
#'
#' @format An object of class data.frame with 1228 rows and 8 columns:
#' \describe{
#'   \item{sim_Y}{binary, simulated outcome, conditional on all other covariates in the dataset}
#'   \item{sim_T}{binary, simulated treatment, conditional on eligibility only}
#' \item{eligibility}{binary, indicator of the eligibility stratum}
#' \item{loss_num}{count, number of prior pregnancy losses}
#' \item{age}{continuous, age in years}
#' \item{time_try_pregnant}{count, months of conception attempts prior to randomization}
#' \item{BMI}{continuous, body mass index}
#' \item{meanAP}{continuous, mean arterial blood pressure}
#' }
#'
#' @references Schisterman, E.F., Silver, R.M., Lesher, L.L., Faraggi, D., Wactawski-Wende, J., Townsend, J.M., Lynch, A.M., Perkins, N.J., Mumford, S.L. and Galai, N., 2014. Preconception low-dose aspirin and pregnancy outcomes: results from the EAGeR randomised trial. The Lancet, 384(9937), pp.29-36.
#' @references Zhong, Y., Naimi, A.I., Kennedy, E.H., (In preparation). AIPW: An R package for Augmented Inverse Probability Weighted Estimation of Average Causal Effects. American Journal of Epidemiology
#' @seealso [eager_sim_obs]
"eager_sim_rct"
## File: R/data.R
#' Get 95% Confidence Intervals
#'
#' @param est point estimate
#' @param se standard error
#' @param ratio logical (default = FALSE); when TRUE, compute the CI on the log scale and return the exponentiated bounds
#'
#' @return lower and upper bounds of the 95% confidence interval
#'
#' @noRd
get_ci <- function(est, se, ratio=F) {
if (ratio){
est <- log(est)
lcl <- est - 1.96*se
ucl <- est + 1.96*se
output <- exp(c(lcl,ucl))
} else{
lcl <- est - 1.96*se
ucl <- est + 1.96*se
output <- c(lcl,ucl)
}
return(output)
}
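
# Illustrative note (not part of the original source): on the ratio scale the
# interval is formed on the log scale and then exponentiated, e.g.
#   get_ci(est = 2, se = 0.1, ratio = TRUE)
# returns c(exp(log(2) - 1.96*0.1), exp(log(2) + 1.96*0.1)).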
## File: R/util.R
#' @importFrom R62S3 R62Fun
#' @importFrom R6 R6Class
# R62S3::R62Fun(AIPW_base, assignEnvir = topenv(), scope = c("public"))
# R62S3::R62Fun(AIPW, assignEnvir = topenv(), scope = c("public"))
## File: R/zzz.R
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
fig.width = 6,
comment = "#>"#,
# cache=TRUE
)
## ---- eval = FALSE------------------------------------------------------------
# install.packages("remotes")
# remotes::install_github("yqzhong7/AIPW")
## ---- eval = FALSE------------------------------------------------------------
# #SuperLearner
# install.packages("SuperLearner")
# #sl3
# remotes::install_github("tlverse/sl3")
# install.packages("Rsolnp")
## ----example data-------------------------------------------------------------
library(AIPW)
library(SuperLearner)
library(ggplot2)
set.seed(123)
data("eager_sim_obs")
cov = c("eligibility","loss_num","age", "time_try_pregnant","BMI","meanAP")
## ----one_line-----------------------------------------------------------------
AIPW_SL <- AIPW$new(Y= eager_sim_obs$sim_Y,
A= eager_sim_obs$sim_A,
W= subset(eager_sim_obs,select=cov),
Q.SL.library = c("SL.mean","SL.glm"),
g.SL.library = c("SL.mean","SL.glm"),
k_split = 10,
verbose=FALSE)$
fit()$
#Default truncation is set to 0.025; using 0.25 here is for illustrative purposes and not recommended
summary(g.bound = c(0.25,0.75))$
plot.p_score()$
plot.ip_weights()
## ----SuperLearner, message=FALSE,eval=F---------------------------------------
# library(AIPW)
# library(SuperLearner)
#
# #SuperLearner libraries for outcome (Q) and exposure models (g)
# sl.lib <- c("SL.mean","SL.glm")
#
# #construct an aipw object for later estimations
# AIPW_SL <- AIPW$new(Y= eager_sim_obs$sim_Y,
# A= eager_sim_obs$sim_A,
# W= subset(eager_sim_obs,select=cov),
# Q.SL.library = sl.lib,
# g.SL.library = sl.lib,
# k_split = 10,
# verbose=FALSE)
## -----------------------------------------------------------------------------
#fit the AIPW_SL object
AIPW_SL$fit()
# or you can use stratified_fit
# AIPW_SL$stratified_fit()
## -----------------------------------------------------------------------------
#estimate the average causal effects from the fitted AIPW_SL object
AIPW_SL$summary(g.bound = 0.25) #propensity score truncation
## ----ps_trunc-----------------------------------------------------------------
library(ggplot2)
AIPW_SL$plot.p_score()
AIPW_SL$plot.ip_weights()
## -----------------------------------------------------------------------------
suppressWarnings({
AIPW_SL$stratified_fit()$summary()
})
## ----parallel, eval=FALSE-----------------------------------------------------
# # install.packages("future.apply")
# library(future.apply)
# plan(multiprocess, workers=2, gc=T)
# set.seed(888)
# AIPW_SL <- AIPW$new(Y= eager_sim_obs$sim_Y,
# A= eager_sim_obs$sim_A,
# W= subset(eager_sim_obs,select=cov),
#                  Q.SL.library = c("SL.mean","SL.glm"),
#                  g.SL.library = c("SL.mean","SL.glm"),
# k_split = 10,
# verbose=FALSE)$fit()$summary()
## ----tmle, eval=F-------------------------------------------------------------
# # install.packages("tmle")
# library(tmle)
# library(SuperLearner)
#
# tmle_fit <- tmle(Y=eager_sim_obs$sim_Y,
# A=eager_sim_obs$sim_A,
# W=eager_sim_obs[,-1:-2],
# Q.SL.library=c("SL.mean","SL.glm"),
# g.SL.library=c("SL.mean","SL.glm"),
# family="binomial",
# cvQinit = TRUE)
#
# cat("\nEstimates from TMLE\n")
# unlist(tmle_fit$estimates$ATE)
# unlist(tmle_fit$estimates$RR)
# unlist(tmle_fit$estimates$OR)
#
# cat("\nEstimates from AIPW\n")
# a_tmle <- AIPW_tmle$
# new(A=eager_sim_obs$sim_A,Y=eager_sim_obs$sim_Y,tmle_fit = tmle_fit,verbose = TRUE)$
# summary(g.bound=0.025)
## File: inst/doc/AIPW.R
---
title: "Getting Started with AIPW"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Getting Started with AIPW}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
fig.width = 6,
comment = "#>"#,
# cache=TRUE
)
```
Contents:
* [Installation](#Installation)
* [One-line version](#one_line)
* [Longer version](#details)
+ [Create an AIPW object](#constructor)
+ [Fit the object](#fit)
+ [Calculate average treatment effects](#ate)
+ [Calculate average treatment effects among the treated](#att)
* [Parallelization](#par)
* [Using tmle/tmle3 as input](#tmle_input)
+ [tmle](#tmle)
## <a id="Installation"></a>Installation
1. Install AIPW from [GitHub](https://github.com/yqzhong7/AIPW)
```{r, eval = FALSE}
install.packages("remotes")
remotes::install_github("yqzhong7/AIPW")
```
__* CRAN version only supports SuperLearner and tmle. Please install the Github version (master branch) to use sl3 and tmle3.__
2. Install [SuperLearner](https://CRAN.R-project.org/package=SuperLearner) or [sl3](https://tlverse.org/sl3/articles/intro_sl3.html)
```{r, eval = FALSE}
#SuperLearner
install.packages("SuperLearner")
#sl3
remotes::install_github("tlverse/sl3")
install.packages("Rsolnp")
```
## Input data for analyses
```{r example data}
library(AIPW)
library(SuperLearner)
library(ggplot2)
set.seed(123)
data("eager_sim_obs")
cov = c("eligibility","loss_num","age", "time_try_pregnant","BMI","meanAP")
```
## Using AIPW to estimate the average treatment effect
### <a id="one_line"></a>One line version (Method chaining from R6class)
Using the native `AIPW` class allows users to define different covariate sets for the exposure and the outcome models, respectively.
```{r one_line}
AIPW_SL <- AIPW$new(Y= eager_sim_obs$sim_Y,
A= eager_sim_obs$sim_A,
W= subset(eager_sim_obs,select=cov),
Q.SL.library = c("SL.mean","SL.glm"),
g.SL.library = c("SL.mean","SL.glm"),
k_split = 10,
verbose=FALSE)$
fit()$
#Default truncation is set to 0.025; using 0.25 here is for illustrative purposes and not recommended
summary(g.bound = c(0.25,0.75))$
plot.p_score()$
plot.ip_weights()
```
### <a id="details"></a>A more detailed tutorial
#### 1. <a id="constructor"></a>Create an AIPW object
* ##### Use [SuperLearner](https://CRAN.R-project.org/package=SuperLearner) libraries
```{r SuperLearner, message=FALSE,eval=F}
library(AIPW)
library(SuperLearner)
#SuperLearner libraries for outcome (Q) and exposure models (g)
sl.lib <- c("SL.mean","SL.glm")
#construct an aipw object for later estimations
AIPW_SL <- AIPW$new(Y= eager_sim_obs$sim_Y,
A= eager_sim_obs$sim_A,
W= subset(eager_sim_obs,select=cov),
Q.SL.library = sl.lib,
g.SL.library = sl.lib,
k_split = 10,
verbose=FALSE)
```
If outcome is missing, analysis assumes missing at random (MAR) by estimating propensity scores with I(A=a, observed=1). Missing exposure is not supported.
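
A minimal sketch of this behavior (the missingness below is introduced artificially for illustration; variable names follow the chunks above):

```{r missing_outcome, eval=FALSE}
# Set some outcomes to NA; AIPW$new() warns and proceeds under the MAR assumption
Y_mis <- eager_sim_obs$sim_Y
Y_mis[sample(length(Y_mis), 20)] <- NA
AIPW_mis <- AIPW$new(Y = Y_mis,
                     A = eager_sim_obs$sim_A,
                     W = subset(eager_sim_obs, select = cov),
                     Q.SL.library = c("SL.mean", "SL.glm"),
                     g.SL.library = c("SL.mean", "SL.glm"),
                     k_split = 10,
                     verbose = FALSE)
```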
#### 2. <a id="fit"></a>Fit the AIPW object
This step will fit the data stored in the AIPW object to obtain estimates for later average treatment effect calculations.
```{r}
#fit the AIPW_SL object
AIPW_SL$fit()
# or you can use stratified_fit
# AIPW_SL$stratified_fit()
```
#### 3. <a id="ate"></a>Calculate average treatment effects
* ##### Estimate the ATE with propensity scores truncation
```{r}
#estimate the average causal effects from the fitted AIPW_SL object
AIPW_SL$summary(g.bound = 0.25) #propensity score truncation
```
* ##### Check the balance of propensity scores and inverse probability weights by exposure status after truncation
```{r ps_trunc}
library(ggplot2)
AIPW_SL$plot.p_score()
AIPW_SL$plot.ip_weights()
```
#### 4. <a id="att"></a>Calculate average treatment effects among the treated/controls
* ##### `stratified_fit()` fits the outcome model by exposure status while `fit()` does not. Hence, `stratified_fit()` must be used to compute ATT/ATC [(Kennedy et al. 2015)](http://www.ehkennedy.com/uploads/5/8/4/5/58450265/2015_kennedy_et_al_-_semiparametric_causal_inference_in_matched_cohort_studies.pdf)
```{r}
suppressWarnings({
AIPW_SL$stratified_fit()$summary()
})
```
## <a id="par"></a>Parallelization with future.apply
In default setting, the `AIPW$fit()` method will be run sequentially. The current version of AIPW package supports parallel processing implemented by [future.apply](https://github.com/HenrikBengtsson/future.apply) package under the [future](https://github.com/HenrikBengtsson/future) framework. Before creating a `AIPW` object, simply use `future::plan()` to enable parallelization and `set.seed()` to take care of the random number generation (RNG) problem:
```{r parallel, eval=FALSE}
# install.packages("future.apply")
library(future.apply)
plan(multiprocess, workers=2, gc=T)
set.seed(888)
AIPW_SL <- AIPW$new(Y= eager_sim_obs$sim_Y,
A= eager_sim_obs$sim_A,
W= subset(eager_sim_obs,select=cov),
Q.SL.library = sl3.lib,
g.SL.library = sl3.lib,
k_split = 10,
verbose=FALSE)$fit()$summary()
```
## <a id="tmle_input"></a>Use `tmle` fitted object as input
AIPW shares similar intermediate estimates (nuisance functions) with the Targeted Maximum Likelihood / Minimum Loss-Based Estimation (TMLE). Therefore, `AIPW_tmle` class is designed for using `tmle` fitted object as input. Details about these two packages can be found [here](https://www.jstatsoft.org/article/view/v051i13) and [here](https://tlverse.org/tlverse-handbook/). This feature is designed for debugging and easy comparisons across these three packages because cross-fitting procedures are different in `tmle`. In addition, this feature does not support ATT outputs.
#### <a id="tmle"></a>`tmle`
As shown in the message, [tmle](https://CRAN.R-project.org/package=tmle) only support cross-fitting in the outcome model.
```{r tmle, eval=F}
# install.packages("tmle")
library(tmle)
library(SuperLearner)
tmle_fit <- tmle(Y=eager_sim_obs$sim_Y,
A=eager_sim_obs$sim_A,
W=eager_sim_obs[,-1:-2],
Q.SL.library=c("SL.mean","SL.glm"),
g.SL.library=c("SL.mean","SL.glm"),
family="binomial",
cvQinit = TRUE)
cat("\nEstimates from TMLE\n")
unlist(tmle_fit$estimates$ATE)
unlist(tmle_fit$estimates$RR)
unlist(tmle_fit$estimates$OR)
cat("\nEstimates from AIPW\n")
a_tmle <- AIPW_tmle$
new(A=eager_sim_obs$sim_A,Y=eager_sim_obs$sim_Y,tmle_fit = tmle_fit,verbose = TRUE)$
summary(g.bound=0.025)
```
| /scratch/gouwar.j/cran-all/cranData/AIPW/inst/doc/AIPW.Rmd |
---
title: "Getting Started with AIPW"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Getting Started with AIPW}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
fig.width = 6,
comment = "#>"#,
# cache=TRUE
)
```
Contents:
* [Installation](#Installation)
* [One-line version](#one_line)
* [Longer version](#details)
+ [Create an AIPW object](#constructor)
+ [Fit the object](#fit)
+ [Calculate average treatment effects](#ate)
+ [Calculate average treatment effects among the treated](#att)
* [Parallelization](#par)
* [Using tmle/tmle3 as input](#tmle_input)
+ [tmle](#tmle)
## <a id="Installation"></a>Installation
1. Install AIPW from [GitHub](https://github.com/yqzhong7/AIPW)
```{r, eval = FALSE}
install.packages("remotes")
remotes::install_github("yqzhong7/AIPW")
```
__* The CRAN version only supports SuperLearner and tmle. Please install the GitHub version (master branch) to use sl3 and tmle3.__
2. Install [SuperLearner](https://CRAN.R-project.org/package=SuperLearner) or [sl3](https://tlverse.org/sl3/articles/intro_sl3.html)
```{r, eval = FALSE}
#SuperLearner
install.packages("SuperLearner")
#sl3
remotes::install_github("tlverse/sl3")
install.packages("Rsolnp")
```
## Input data for analyses
```{r example data}
library(AIPW)
library(SuperLearner)
library(ggplot2)
set.seed(123)
data("eager_sim_obs")
cov = c("eligibility","loss_num","age", "time_try_pregnant","BMI","meanAP")
```
## Using AIPW to estimate the average treatment effect
### <a id="one_line"></a>One line version (Method chaining from R6class)
Using the native AIPW class allows users to define different covariate sets for the exposure and outcome models.
```{r one_line}
AIPW_SL <- AIPW$new(Y= eager_sim_obs$sim_Y,
A= eager_sim_obs$sim_A,
W= subset(eager_sim_obs,select=cov),
Q.SL.library = c("SL.mean","SL.glm"),
g.SL.library = c("SL.mean","SL.glm"),
k_split = 10,
verbose=FALSE)$
fit()$
#Default truncation is set to 0.025; using 0.25 here is for illustrative purposes and not recommended
summary(g.bound = c(0.25,0.75))$
plot.p_score()$
plot.ip_weights()
```
### <a id="details"></a>A more detailed tutorial
#### 1. <a id="constructor"></a>Create an AIPW object
* ##### Use [SuperLearner](https://CRAN.R-project.org/package=SuperLearner) libraries
```{r SuperLearner, message=FALSE,eval=F}
library(AIPW)
library(SuperLearner)
#SuperLearner libraries for outcome (Q) and exposure models (g)
sl.lib <- c("SL.mean","SL.glm")
#construct an aipw object for later estimations
AIPW_SL <- AIPW$new(Y= eager_sim_obs$sim_Y,
A= eager_sim_obs$sim_A,
W= subset(eager_sim_obs,select=cov),
Q.SL.library = sl.lib,
g.SL.library = sl.lib,
k_split = 10,
verbose=FALSE)
```
If the outcome is missing, the analysis assumes it is missing at random (MAR) and estimates the propensity scores with I(A=a, observed=1). A missing exposure is not supported.
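As a sketch (not evaluated), an outcome vector containing `NA`s can be supplied directly; `sim_Y_mis` below is a hypothetical modification of the example outcome and is not part of the package data:

```{r missing_outcome, eval=FALSE}
# Sketch only: sim_Y_mis is a hypothetical outcome vector with missing values
sim_Y_mis <- eager_sim_obs$sim_Y
sim_Y_mis[sample(length(sim_Y_mis), 20)] <- NA
AIPW_mis <- AIPW$new(Y = sim_Y_mis,            # NA outcomes handled under MAR
                     A = eager_sim_obs$sim_A,  # exposure must be fully observed
                     W = subset(eager_sim_obs, select = cov),
                     Q.SL.library = c("SL.mean","SL.glm"),
                     g.SL.library = c("SL.mean","SL.glm"),
                     k_split = 10,
                     verbose = FALSE)
```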
#### 2. <a id="fit"></a>Fit the AIPW object
This step will fit the data stored in the AIPW object to obtain estimates for later average treatment effect calculations.
```{r}
#fit the AIPW_SL object
AIPW_SL$fit()
# or you can use stratified_fit
# AIPW_SL$stratified_fit()
```
#### 3. <a id="ate"></a>Calculate average treatment effects
* ##### Estimate the ATE with propensity scores truncation
```{r}
#estimate the average causal effects from the fitted AIPW_SL object
AIPW_SL$summary(g.bound = 0.25) #propensity score truncation
```
* ##### Check the balance of propensity scores and inverse probability weights by exposure status after truncation
```{r ps_trunc}
library(ggplot2)
AIPW_SL$plot.p_score()
AIPW_SL$plot.ip_weights()
```
#### 4. <a id="att"></a>Calculate average treatment effects among the treated/controls
* ##### `stratified_fit()` fits the outcome model by exposure status while `fit()` does not. Hence, `stratified_fit()` must be used to compute ATT/ATC [(Kennedy et al. 2015)](http://www.ehkennedy.com/uploads/5/8/4/5/58450265/2015_kennedy_et_al_-_semiparametric_causal_inference_in_matched_cohort_studies.pdf)
```{r}
suppressWarnings({
AIPW_SL$stratified_fit()$summary()
})
```
## <a id="par"></a>Parallelization with future.apply
In the default setting, the `AIPW$fit()` method runs sequentially. The current version of the AIPW package supports parallel processing implemented by the [future.apply](https://github.com/HenrikBengtsson/future.apply) package under the [future](https://github.com/HenrikBengtsson/future) framework. Before creating an `AIPW` object, simply use `future::plan()` to enable parallelization and `set.seed()` to handle the random number generation (RNG) problem:
```{r parallel, eval=FALSE}
# install.packages("future.apply")
library(future.apply)
plan(multisession, workers=2, gc=TRUE) # 'multiprocess' is deprecated in the future package
set.seed(888)
AIPW_SL <- AIPW$new(Y= eager_sim_obs$sim_Y,
A= eager_sim_obs$sim_A,
W= subset(eager_sim_obs,select=cov),
Q.SL.library = sl.lib,
g.SL.library = sl.lib,
k_split = 10,
verbose=FALSE)$fit()$summary()
```
## <a id="tmle_input"></a>Use `tmle` fitted object as input
AIPW shares similar intermediate estimates (nuisance functions) with Targeted Maximum Likelihood / Minimum Loss-Based Estimation (TMLE). Therefore, the `AIPW_tmle` class is designed to take a `tmle` fitted object as input. Details about these two packages can be found [here](https://www.jstatsoft.org/article/view/v051i13) and [here](https://tlverse.org/tlverse-handbook/). This feature is designed for debugging and for easy comparison across these packages, because the cross-fitting procedures in `tmle` differ. In addition, this feature does not support ATT outputs.
#### <a id="tmle"></a>`tmle`
As shown in the message, [tmle](https://CRAN.R-project.org/package=tmle) only supports cross-fitting in the outcome model.
```{r tmle, eval=F}
# install.packages("tmle")
library(tmle)
library(SuperLearner)
tmle_fit <- tmle(Y=eager_sim_obs$sim_Y,
A=eager_sim_obs$sim_A,
W=eager_sim_obs[,-1:-2],
Q.SL.library=c("SL.mean","SL.glm"),
g.SL.library=c("SL.mean","SL.glm"),
family="binomial",
cvQinit = TRUE)
cat("\nEstimates from TMLE\n")
unlist(tmle_fit$estimates$ATE)
unlist(tmle_fit$estimates$RR)
unlist(tmle_fit$estimates$OR)
cat("\nEstimates from AIPW\n")
a_tmle <- AIPW_tmle$
new(A=eager_sim_obs$sim_A,Y=eager_sim_obs$sim_Y,tmle_fit = tmle_fit,verbose = TRUE)$
summary(g.bound=0.025)
```
| /scratch/gouwar.j/cran-all/cranData/AIPW/vignettes/AIPW.Rmd |
#' @importFrom stats rnorm runif qnorm optim
#' @importFrom methods setClass new show
#' @importFrom graphics lines polygon legend
#' @importFrom plot3D image2D
#' @importFrom grDevices grey
NULL
#' @keywords internal
| /scratch/gouwar.j/cran-all/cranData/AIUQ/R/AIUQ-package.R |
#' Scattering analysis of microscopy
#'
#' @description
#' Fast parameter estimation in scattering analysis of microscopy, using either
#' AIUQ or DDM method.
#'
#' @param intensity intensity profile. See 'Details'.
#' @param intensity_str structure of the intensity profile, options from
#' ('SST_array','S_ST_mat','T_SS_mat'). See 'Details'.
#' @param pxsz size of one pixel in unit of micron, 1 for simulated data
#' @param sz frame size of the intensity profile in x and y directions,
#' number of pixels contained in each frame equals sz_x by sz_y.
#' @param mindt minimum lag time, 1 for simulated data
#' @param AIUQ_thr threshold for wave number selection, numeric vector of two
#' elements with values between 0 and 1. See 'Details'.
#' @param model_name fitted model, options from ('BM','OU','FBM','OU+FBM',
#' 'user_defined'), with Brownian motion as the default model. See 'Details'.
#' @param sigma_0_2_ini initial value for background noise. If NA, use minimum
#' value of absolute square of intensity profile in reciprocal space.
#' @param msd_fn user defined mean squared displacement(MSD) structure, a
#' function of parameters and lag times. NA if \code{model_name} is not
#' 'user_defined'.
#' @param msd_grad_fn gradient for user defined mean squared displacement
#' structure. If \code{NA}, then numerical gradient will be used for parameter
#' estimation in \code{'user_defined'} model.
#' @param num_param number of parameters to be estimated in the intermediate
#' scattering function; must be a non-NA value for the 'user_defined' model.
#' @param param_initial initial values for param estimation.
#' @param num_optim number of optimization.
#' @param uncertainty a logical evaluating to TRUE or FALSE indicating whether
#' parameter uncertainty should be computed.
#' @param M number of particles. See 'Details'.
#' @param sim_object NA or an S4 object of class \code{simulation}.
#' @param msd_truth true MSD or reference MSD value.
#' @param method methods for parameter estimation, options from ('AIUQ', 'DDM').
#' @param index_q_AIUQ index range for wave number when using AIUQ method. See 'Details'.
#' @param index_q_DDM index range for wave number when using DDM method. See 'Details'.
#' @param message_out a logical evaluating to TRUE or FALSE indicating whether
#' or not to output the message.
#' @param square a logical evaluating to TRUE or FALSE indicating whether or not
#' to crop the original intensity profile into square image.
#' @param output_dqt a logical evaluating to TRUE or FALSE indicating whether or
#' not to compute observed dynamic image structure function(Dqt).
#' @param output_isf a logical evaluating to TRUE or FALSE indicating whether or
#' not to compute empirical intermediate scattering function(ISF).
#' @param output_modeled_isf a logical evaluating to TRUE or FALSE indicating
#' whether or not to compute modeled intermediate scattering function(ISF).
#' @param output_modeled_dqt a logical evaluating to TRUE or FALSE indicating
#' whether or not to compute modeled dynamic image structure function(Dqt).
#'
#' @details
#' For simulated data using \code{simulation} in AIUQ package, \code{intensity}
#' will be automatically extracted from \code{simulation} class.
#'
#' By default \code{intensity_str} is set to 'T_SS_mat', a time by space\eqn{\times}{%\times}space
#' matrix, which is the structure of intensity profile obtained from \code{simulation}
#' class. For \code{intensity_str='SST_array'} , input intensity profile should be a
#' space by space by time array, which is the structure from loading a tif file.
#' For \code{intensity_str='S_ST_mat'}, input intensity profile should be a
#' space by space\eqn{\times}{%\times}time matrix.
#'
#' By default \code{AIUQ_thr} is set to \code{c(1,1)}, which uses information
#' from all complete q rings. The first element affects the maximum wave number
#' selected, and the second element controls the minimum proportion of wave
#' numbers selected. When the second element is 1, if the maximum wave number
#' selected is less than the wave number length, the selection is coerced to
#' use all wave numbers unless the user defines another index range through \code{index_q_AIUQ}.
#'
#' If \code{model_name} equals 'user_defined' or NA (which will be coerced to
#' 'user_defined'), then \code{msd_fn} and \code{num_param} need to be provided
#' for parameter estimation.
#'
#' Number of particles \code{M} is set to 50 or automatically extracted from
#' \code{simulation} class for simulated data using \code{simulation} in AIUQ
#' package.
#'
#' By default, all wave vectors from the complete q rings are used for both the
#' \code{AIUQ} and \code{DDM} methods, unless the user defines an index range
#' through \code{index_q_AIUQ} or \code{index_q_DDM}.
#'
#' @return Returns an S4 object of class \code{SAM}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#'
#' @examples
#' library(AIUQ)
#' # Example 1: Estimation for simulated data
#' sim_bm = simulation(len_t=100,sz=100,sigma_bm=0.5)
#' show(sim_bm)
#' sam = SAM(sim_object = sim_bm)
#' show(sam)
SAM <- function(intensity=NA,intensity_str="T_SS_mat",pxsz=1,sz=c(NA,NA),mindt=1,
AIUQ_thr=c(1,1),model_name='BM',sigma_0_2_ini=NaN,param_initial=NA,
num_optim=1,msd_fn=NA,msd_grad_fn=NA,num_param=NA,
uncertainty=FALSE,M=50,sim_object=NA, msd_truth=NA,
method="AIUQ",index_q_AIUQ=NA, index_q_DDM=NA,message_out=TRUE,
square=FALSE, output_dqt=FALSE, output_isf=FALSE,
output_modeled_isf=FALSE,output_modeled_dqt=FALSE){
model <- methods::new("SAM")
#check
if(!is.character(intensity_str)){
stop("Structure of the intensity profile input should be a character. \n")
}
if(intensity_str!="SST_array" && intensity_str!="S_ST_mat" && intensity_str!="T_SS_mat"){
stop("Structure of the intensity profile input should be one of the type listed in help page. \n")
}
if(!is.numeric(pxsz)){
stop("Pixel size should be a numerical value. \n")
}
if(!is.numeric(mindt)){
stop("Lag time between 2 consecutive images should be a numerical value. \n")
}
if(length(AIUQ_thr)==1){
AIUQ_thr = c(AIUQ_thr,1)
}
if(is.na(AIUQ_thr[1])){
AIUQ_thr = c(1,AIUQ_thr[2])
}
if(is.na(AIUQ_thr[2])){
AIUQ_thr = c(AIUQ_thr[1],1)
}
if(!is.numeric(AIUQ_thr)){
stop("AIUQ threshold should be a numerical vector. \n")
}
if(AIUQ_thr[1]<0 || AIUQ_thr[2]<0 || AIUQ_thr[1]>1 || AIUQ_thr[2]>1){
stop("AIUQ threshold has value between 0 and 1. \n")
}
if(!is.numeric(sigma_0_2_ini)){
stop("Initial value for background noise should be numeric. \n")
}
if(class(sim_object)[1]=="simulation"){
intensity = sim_object@intensity
model@pxsz = sim_object@pxsz
#model@mindt = sim_object@mindt
M = sim_object@M
#model_name = sim_object@model_name ##not always inherent
len_t = sim_object@len_t
model@param_truth = get_true_param_sim(param_truth=sim_object@param,model_name=sim_object@model_name)
model@msd_truth = sim_object@theor_msd
model@sigma_2_0_truth = sim_object@sigma_2_0
sz = sim_object@sz
}else{
model@pxsz = pxsz
#model@mindt = mindt
model@msd_truth = msd_truth
model@sigma_2_0_truth = NA
model@param_truth = NA
sz = sz
}
model@mindt = mindt
if(is.vector(intensity)){
if(is.na(intensity)){
stop("Intensity profile can't be missing and should have one of the structures listed in intensity_str. \n")
}
}
if(is.na(model_name)){
model_name = "user_defined"
}
if (model_name == "user_defined" && is.na(num_param)){
stop("For the user-defined model, the number of parameters to be estimated can't be empty. \n")
}
if(!is.character(model_name)){
stop("Fitted model name should be character. \n")
}
if(!is.character(method)){
stop("Method should be character. \n")
}
model@model_name = model_name
model@method = method
# Transform intensity into the same format and crop image into square image
# total number of pixels in each image = sz_x*sz_y
intensity_list = intensity_format_transform(intensity = intensity,
intensity_str = intensity_str,
square = square,sz=sz)
# Fourier transform
fft_list = FFT2D(intensity_list=intensity_list,pxsz=model@pxsz,mindt=model@mindt)
#num of rows and columns of intensity matrix, also representing frame size in y and x directions
model@sz = c(fft_list$sz_y,fft_list$sz_x)
model@len_q = fft_list$len_q
model@len_t = fft_list$len_t
model@q = fft_list$q
model@d_input = fft_list$d_input
if(!is.na(index_q_DDM)[1]){
if(min(index_q_DDM)<1 || max(index_q_DDM)>model@len_q){
stop("Selected q range should be between 1 and half the frame size. \n")
}
}
if(!is.na(index_q_AIUQ)[1]){
if(min(index_q_AIUQ)<1 || max(index_q_AIUQ)>model@len_q){
stop("Selected q range should be between 1 and half the frame size. \n")
}
}
# get each q ring location index
if(model@sz[1]==model@sz[2]){
v = (-(model@sz[1]-1)/2):((model@sz[1]-1)/2)
x = matrix(rep(v,each = model@sz[1]), byrow = FALSE,nrow = model@sz[1])
y = matrix(rep(v,each = model@sz[1]), byrow = TRUE,nrow = model@sz[1])
}else{
v_x = (-(model@sz[2]-1)/2):((model@sz[2]-1)/2)
v_y = (-(model@sz[1]-1)/2):((model@sz[1]-1)/2)
x = matrix(rep(v_x,each = model@sz[1]), byrow = FALSE,nrow = model@sz[1])
y = matrix(rep(v_y,each = model@sz[2]), byrow = TRUE,nrow = model@sz[1])
}
theta_q = cart2polar(x, y)
q_ring_num = theta_q[,(model@sz[2]+1):dim(theta_q)[2]]
q_ring_num = round(q_ring_num)
nq_index = vector(mode = "list")
for(i in 1:model@len_q){
nq_index[[i]] = which(q_ring_num==i)
}
q_ori_ring_loc = fftshift(q_ring_num, dim = 3)
q_ori_ring_loc_index = as.list(1:model@len_q)
total_q_ori_ring_loc_index = NULL
for(i in 1:model@len_q){
q_ori_ring_loc_index[[i]] = which(q_ori_ring_loc==i)
total_q_ori_ring_loc_index = c(total_q_ori_ring_loc_index, q_ori_ring_loc_index[[i]])
}
#model@q_ring_loc = q_ring_num
q_ring_loc = q_ring_num
#model@q_ori_ring_loc = q_ori_ring_loc
#model@q_ori_ring_loc_index = q_ori_ring_loc_index
#model@total_q_ori_ring_loc_index = total_q_ori_ring_loc_index
# Get initial estimates of A and B
avg_I_2_ori = 0
for(i in 1:model@len_t){
avg_I_2_ori = avg_I_2_ori+abs(fft_list$I_q_matrix[,i])^2/(model@sz[1]*model@sz[2])
}
avg_I_2_ori = avg_I_2_ori/model@len_t
model@I_o_q_2_ori = rep(NA,model@len_q)
for(i in 1:model@len_q){
model@I_o_q_2_ori[i] = mean(avg_I_2_ori[q_ori_ring_loc_index[[i]]])
}
I_o_q_2_ori_last = model@I_o_q_2_ori[model@len_q]
model@B_est_ini = 2*I_o_q_2_ori_last
model@A_est_ini = 2*(model@I_o_q_2_ori - I_o_q_2_ori_last)
for (i in 1:model@len_q){
## use >= so the threshold wave number is included
if(sum(model@A_est_ini[1:i])/sum(model@A_est_ini)>=AIUQ_thr[1]){
num_q_max = i
break
}
}
if(num_q_max/model@len_q<=AIUQ_thr[2]){
num_q_max=ceiling(AIUQ_thr[2]*model@len_q)
}
# get unique index within each q ring
# TODO: improve later; an earlier implementation avoids the unique-value search
#model@q_ori_ring_loc_unique_index = as.list(1:model@len_q)
q_ori_ring_loc_unique_index = as.list(1:model@len_q)
for(i in 1:model@len_q){
unique_val = unique(avg_I_2_ori[q_ori_ring_loc_index[[i]]])
unique_val = unique_val[1:(length(q_ori_ring_loc_index[[i]])/2)]
index_selected = NULL
for(j in 1:length(unique_val)){
index_selected = c(index_selected,which(avg_I_2_ori == unique_val[j])[1])
}
q_ori_ring_loc_unique_index[[i]] = index_selected
}
total_q_ori_ring_loc_unique_index = NULL
for(i in 1:model@len_q){
total_q_ori_ring_loc_unique_index=c(total_q_ori_ring_loc_unique_index,
q_ori_ring_loc_unique_index[[i]])
}
if(is.na(sigma_0_2_ini)){sigma_0_2_ini=min(model@I_o_q_2_ori)}
if(sum(is.na(param_initial))>=1){
param_initial = get_initial_param(model_name=model@model_name,
sigma_0_2_ini=sigma_0_2_ini,
num_param=num_param)
}else{
param_initial = log(c(param_initial,sigma_0_2_ini))
}
if(model@method == "AIUQ"){
##if index_q_AIUQ is not defined then we define it; otherwise we use the defined index
if( is.na(index_q_AIUQ)[1]){
index_q_AIUQ = 1:num_q_max
}
p = length(param_initial)-1
num_iteration_max = 50+(p-1)*10
lower_bound = c(rep(-30,p),-Inf)
if(model@model_name == "user_defined"){
if(is.function(msd_grad_fn)){
gr = log_lik_grad
}else{gr = NULL}
}else{gr = log_lik_grad}
m_param = try(optim(param_initial,log_lik, gr=gr,
I_q_cur=fft_list$I_q_matrix,
B_cur=NA,index_q=index_q_AIUQ,
I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,d_input=model@d_input,
q=model@q,model_name=model@model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn,method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max)),TRUE)
if(num_optim>1){
for(i_try in 1:(num_optim-1)){
param_initial_try=param_initial+i_try*runif(p+1)
if(message_out){
cat("start of another optimization, initial values: ",param_initial_try, "\n")
}
m_param_try = try(optim(param_initial_try,log_lik, gr=gr,
I_q_cur=fft_list$I_q_matrix,
B_cur=NA,index_q=index_q_AIUQ,
I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,d_input=model@d_input,
q=model@q,model_name=model@model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn,method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max)),TRUE)
if(class(m_param)[1]!="try-error"){
if(class(m_param_try)[1]!="try-error"){
if(m_param_try$value>m_param$value){
m_param=m_param_try
}
}
}else{##if m_param has an error then change
m_param=m_param_try
}
}
}
count_compute=0 ## if optimization errored, retry a few more times
while(class(m_param)[1]=="try-error"){
count_compute=count_compute+1
param_initial_try=param_initial+count_compute*runif(p+1)
if(message_out){
cat("start of another optimization, initial values: ",param_initial_try, "\n")
}
m_param = try(optim(param_initial_try,log_lik,gr=gr,
I_q_cur=fft_list$I_q_matrix,B_cur=NA,
index_q=index_q_AIUQ,I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,d_input=model@d_input,
q=model@q,model_name=model@model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn,method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max)),TRUE)
if(count_compute>=2){
break
}
}
## if it still does not converge to a finite value, try a derivative-free search
if(class(m_param)[1]=="try-error"){
m_param = try(optim(param_initial,log_lik,
I_q_cur=fft_list$I_q_matrix,
B_cur=NA,index_q=index_q_AIUQ,
I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,d_input=model@d_input,
q=model@q,model_name=model@model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn,method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max)),TRUE)
count_compute=0
while(class(m_param)[1]=="try-error"){
count_compute=count_compute+1
param_initial_try=param_initial+count_compute*runif(p+1)
if(message_out){
cat("start of another optimization, initial values: ",param_initial_try, "\n")
}
m_param = try(optim(param_initial_try,log_lik,
I_q_cur=fft_list$I_q_matrix,B_cur=NA,
index_q=index_q_AIUQ,I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,d_input=model@d_input,
q=model@q,model_name=model@model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn,method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max)),TRUE)
if(count_compute>=2){
break
}
}
}
param_est = m_param$par
model@mle = m_param$value
AIC = 2*(length(param_est)+length(index_q_AIUQ)-m_param$value)
model@sigma_2_0_est = exp(param_est[length(param_est)])
#est_list = get_est_param_MSD(theta=exp(param_est),d_input=model@d_input,model_name = model@model_name)
model@param_est = get_est_param(theta=exp(param_est),model_name = model@model_name)
if(model@model_name=='user_defined'){
model@param_est = model@param_est[-length(param_initial)]
}
model@msd_est = get_MSD(theta=model@param_est,d_input=model@d_input,
model_name = model@model_name, msd_fn=msd_fn)
if(uncertainty && !is.na(M)){
param_uq_range=param_uncertainty(param_est=m_param$par,I_q_cur=fft_list$I_q_matrix,
index_q=index_q_AIUQ,
I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,q=model@q,
d_input=model@d_input,
model_name=model@model_name,M=M,
num_iteration_max=num_iteration_max,
lower_bound=lower_bound,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn)
for(i_p in 1:length(m_param$par)){
param_uq_range[,i_p]=c(min((m_param$par[i_p]),param_uq_range[1,i_p]),
max((m_param$par[i_p]),param_uq_range[2,i_p]))
}
SAM_range_list=get_est_parameters_MSD_SAM_interval(param_uq_range,
model_name=model@model_name,
d_input=model@d_input, msd_fn=msd_fn)
model@uncertainty = uncertainty
model@msd_lower = SAM_range_list$MSD_lower
model@msd_upper = SAM_range_list$MSD_upper
model@param_uq_range = cbind(SAM_range_list$est_parameters_lower,SAM_range_list$est_parameters_upper)
}else{
model@uncertainty = uncertainty
model@msd_lower = NA
model@msd_upper = NA
model@param_uq_range = matrix(NA,1,1)
}
if(output_dqt==FALSE && output_isf==FALSE){
Dqt = matrix(NA,1,1)
isf = matrix(NA,1,1)
}else if(output_dqt==TRUE && output_isf==FALSE){
Dqt = SAM_Dqt(len_q=model@len_q,index_q=1:model@len_q,len_t=model@len_t,
I_q_matrix=fft_list$I_q_matrix,sz=model@sz,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index)
isf = matrix(NA,1,1)
}else if(output_isf==TRUE){
Dqt = matrix(NA,model@len_q,model@len_t-1)
isf = matrix(NA,model@len_q,model@len_t-1)
for (q_j in 1:model@len_q){
index_cur = q_ori_ring_loc_unique_index[[q_j]]
I_q_cur = fft_list$I_q_matrix[index_cur,]
for (t_i in 1:(model@len_t-1)){
Dqt[q_j,t_i]=mean((abs(I_q_cur[,(t_i+1):model@len_t]-I_q_cur[,1:(model@len_t-t_i)]))^2/(model@sz[1]*model@sz[2]),na.rm=T)
}
if(model@A_est_ini[q_j]==0){break}
isf[q_j,] = 1-(Dqt[q_j,]-model@B_est_ini)/model@A_est_ini[q_j]
}
}
}else if(model@method == "DDM_fixedAB"){
if(length(index_q_DDM)==1 && is.na(index_q_DDM)){
index_q_DDM = 1:model@len_q
}
Dqt = SAM_Dqt(len_q=model@len_q,index_q=index_q_DDM,len_t=model@len_t,
I_q_matrix=fft_list$I_q_matrix,sz=model@sz,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index)
if(output_isf==TRUE){
isf = matrix(NA,model@len_q,model@len_t-1)
for (q_j in 1:model@len_q){
if(model@A_est_ini[q_j]==0){break}
isf[q_j,] = 1-(Dqt[q_j,]-model@B_est_ini)/model@A_est_ini[q_j]
}
}else{
isf = matrix(NA,1,1)
}
l2_est_list = theta_est_l2_dqt_fixedAB(param=param_initial[-length(param_initial)],q=model@q,index_q=index_q_DDM,
Dqt=Dqt,A_est_q=model@A_est_ini,B_est=model@B_est_ini,
d_input=model@d_input, model_name=model@model_name,
msd_fn=msd_fn,msd_grad_fn=msd_grad_fn)
model@param_est = l2_est_list$param_est
model@msd_est = l2_est_list$msd_est
model@sigma_2_0_est = model@B_est_ini/2
p = NaN
AIC = NaN
model@mle = NaN
model@param_uq_range = matrix(NA,1,1)
}else if(model@method == "DDM_estAB"){
if(length(index_q_DDM)==1 && is.na(index_q_DDM)){
index_q_DDM = 1:model@len_q
}
Dqt = SAM_Dqt(len_q=model@len_q,index_q=index_q_DDM,len_t=model@len_t,
I_q_matrix=fft_list$I_q_matrix,sz=model@sz,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index)
if(output_isf==TRUE){
isf = matrix(NA,model@len_q,model@len_t-1)
for (q_j in 1:model@len_q){
if(model@A_est_ini[q_j]==0){break}
isf[q_j,] = 1-(Dqt[q_j,]-model@B_est_ini)/model@A_est_ini[q_j]
}
}else{
isf = matrix(NA,1,1)
}
l2_est_list = theta_est_l2_dqt_estAB(param=param_initial,q=model@q,index_q=index_q_DDM,
Dqt=Dqt,A_ini=model@A_est_ini,d_input=model@d_input,
model_name=model@model_name,msd_fn=msd_fn,msd_grad_fn=msd_grad_fn)
model@param_est = l2_est_list$param_est
model@msd_est = l2_est_list$msd_est
model@sigma_2_0_est = l2_est_list$sigma_2_0_est
A_est = l2_est_list$A_est
p = NaN
AIC = NaN
model@mle = NaN
model@param_uq_range = matrix(NA,1,1)
}
model@Dqt = Dqt
model@ISF = isf
if(output_modeled_dqt==FALSE && output_modeled_isf==FALSE){
model@modeled_Dqt = matrix(NA,1,1)
model@modeled_ISF = matrix(NA,1,1)
}else if(output_modeled_isf==TRUE && output_modeled_dqt==FALSE){
model@modeled_ISF = matrix(NA,model@len_q,model@len_t-1)
model@modeled_Dqt = matrix(NA,1,1)
for(q_j in 1:model@len_q){
q_selected = model@q[q_j]
model@modeled_ISF [q_j,] = exp(-q_selected^2*model@msd_est[-1]/4)
}
}else if(output_modeled_dqt==TRUE){
model@modeled_ISF = matrix(NA,model@len_q,model@len_t-1)
model@modeled_Dqt = matrix(NA,model@len_q,model@len_t-1)
for(q_j in 1:model@len_q){
q_selected = model@q[q_j]
model@modeled_ISF[q_j,] = exp(-q_selected^2*model@msd_est[-1]/4)
if(model@A_est_ini[q_j]==0){break}
model@modeled_Dqt[q_j,] = model@A_est_ini[q_j]*(1-model@modeled_ISF[q_j,])+model@sigma_2_0_est*2
}
}
if(model@method=='AIUQ'){
model@index_q = index_q_AIUQ
}else{ ##DDM
model@index_q = index_q_DDM
}
model@I_q = fft_list$I_q_matrix
#model@p = p
model@AIC = AIC
model@q_ori_ring_loc_unique_index = q_ori_ring_loc_unique_index
return(model)
}
| /scratch/gouwar.j/cran-all/cranData/AIUQ/R/SAM.R |
#' Scattering analysis of microscopy for anisotropic processes
#'
#' @description
#' Fast parameter estimation in scattering analysis of microscopy for anisotropic
#' processes, using AIUQ method.
#'
#' @param intensity intensity profile. See 'Details'.
#' @param intensity_str structure of the intensity profile, options from
#' ('SST_array','S_ST_mat','T_SS_mat'). See 'Details'.
#' @param sz frame size of the intensity profile in x and y directions,
#' number of pixels contained in each frame equals sz_x by sz_y.
#' @param pxsz size of one pixel in unit of micron, 1 for simulated data
#' @param mindt minimum lag time, 1 for simulated data
#' @param AIUQ_thr threshold for wave number selection, numeric vector of two
#' elements with values between 0 and 1. See 'Details'.
#' @param model_name fitted model, options from ('BM','OU','FBM','OU+FBM',
#' 'user_defined'), with Brownian motion as the default model. See 'Details'.
#' @param sigma_0_2_ini initial value for background noise. If NA, use minimum
#' value of absolute square of intensity profile in reciprocal space.
#' @param msd_fn user defined mean squared displacement(MSD) structure, a
#' function of parameters and lag times. NA if \code{model_name} is not
#' 'user_defined'.
#' @param msd_grad_fn gradient for user defined mean squared displacement
#' structure. If \code{NA}, then numerical gradient will be used for parameter
#' estimation in \code{'user_defined'} model.
#' @param num_param number of parameters to be estimated in the intermediate
#' scattering function; must be a non-NA value for the 'user_defined' model.
#' @param param_initial initial values for param estimation.
#' @param num_optim number of optimization.
#' @param uncertainty a logical evaluating to TRUE or FALSE indicating whether
#' parameter uncertainty should be computed.
#' @param M number of particles. See 'Details'.
#' @param sim_object NA or an S4 object of class \code{simulation}.
#' @param msd_truth true MSD or reference MSD value.
#' @param method methods for parameter estimation, options from ('AIUQ', 'DDM').
#' @param index_q_AIUQ index range for wave number when using AIUQ method. See 'Details'.
#' @param message_out a logical evaluating to TRUE or FALSE indicating whether
#' or not to output the message.
#' @param square a logical evaluating to TRUE or FALSE indicating whether or not
#' to crop the original intensity profile into a square image.
#'
#' @details
#' For simulated data using \code{aniso_simulation} in AIUQ package, \code{intensity}
#' will be automatically extracted from \code{aniso_simulation} class.
#'
#' By default \code{intensity_str} is set to 'T_SS_mat', a time by space\eqn{\times}{%\times}space
#' matrix, which is the structure of intensity profile obtained from \code{aniso_simulation}
#' class. For \code{intensity_str='SST_array'} , input intensity profile should be a
#' space by space by time array, which is the structure from loading a tif file.
#' For \code{intensity_str='S_ST_mat'}, input intensity profile should be a
#' space by space\eqn{\times}{%\times}time matrix.
#'
#' By default \code{AIUQ_thr} is set to \code{c(1,1)}, which uses information
#' from all complete q rings. The first element controls the maximum wave
#' number selected, and the second element controls the minimum proportion of
#' wave numbers selected. With the second element set to 1, if the maximum
#' wave number selected is less than the total number of wave numbers, the
#' selection is coerced to use all wave numbers, unless the user defines
#' another index range through \code{index_q_AIUQ}.
#'
#' If \code{model_name} equals 'user_defined' or NA (which will be coerced to
#' 'user_defined'), then \code{msd_fn} and \code{num_param} need to be provided
#' for parameter estimation.
#'
#' Number of particles \code{M} is set to 50 or automatically extracted from
#' \code{simulation} class for simulated data using \code{simulation} in AIUQ
#' package.
#'
#' By default, all wave vectors from complete q rings are used for the
#' \code{AIUQ} method, unless the user defines an index range through
#' \code{index_q_AIUQ}.
#'
#' @return Returns an S4 object of class \code{aniso_SAM}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#'
#' @examples
#' library(AIUQ)
#' # Example 1: Estimation for simulated data
#' set.seed(1)
#' aniso_sim = aniso_simulation(sz=100,len_t=100, model_name="BM",M=100,sigma_bm=c(0.5,0.3))
#' show(aniso_sim)
#' plot_traj(object=aniso_sim)
#' aniso_sam = aniso_SAM(sim_object=aniso_sim, model_name="BM",AIUQ_thr = c(0.999,0))
#' show(aniso_sam)
#' plot_MSD(aniso_sam,msd_truth = aniso_sam@msd_truth)
aniso_SAM <- function(intensity=NA,intensity_str="T_SS_mat",pxsz=1, sz=c(NA,NA),
mindt=1,AIUQ_thr=c(1,1),model_name='BM',sigma_0_2_ini=NaN,
param_initial=NA,num_optim=1,msd_fn=NA,msd_grad_fn=NA,
num_param=NA,uncertainty=FALSE,M=50,sim_object=NA, msd_truth=NA,
method="AIUQ",index_q_AIUQ=NA, message_out=TRUE,
square=FALSE){
model <- methods::new("aniso_SAM")
#check
if(!is.character(intensity_str)){
stop("Structure of the intensity profile input should be a character. \n")
}
if(intensity_str!="SST_array" && intensity_str!="S_ST_mat" && intensity_str!="T_SS_mat"){
stop("Structure of the intensity profile input should be one of the type listed in help page. \n")
}
if(!is.numeric(pxsz)){
stop("Pixel size should be a numerical value. \n")
}
if(!is.numeric(mindt)){
stop("Lag time between two consecutive images should be a numerical value. \n")
}
if(length(AIUQ_thr)==1){
AIUQ_thr = c(AIUQ_thr,1)
}
if(is.na(AIUQ_thr[1])){
AIUQ_thr = c(1,AIUQ_thr[2])
}
if(is.na(AIUQ_thr[2])){
AIUQ_thr = c(AIUQ_thr[1],1)
}
if(!is.numeric(AIUQ_thr)){
stop("AIUQ threshold should be a numerical vector. \n")
}
if(AIUQ_thr[1]<0 || AIUQ_thr[2]<0 || AIUQ_thr[1]>1 || AIUQ_thr[2]>1){
stop("AIUQ threshold should have values between 0 and 1. \n")
}
if(!is.numeric(sigma_0_2_ini)){
stop("Initial value for background noise should be numeric. \n")
}
if(class(sim_object)[1]=="simulation"){
intensity = sim_object@intensity
model@pxsz = sim_object@pxsz
model@mindt = sim_object@mindt
M = sim_object@M
len_t = sim_object@len_t
param = matrix(rep(sim_object@param,2),nrow=length(sim_object@param),ncol=2)
if(sim_object@model_name=="BM"){
model_param = apply(param,2,function(x){get_true_param_aniso_sim(param_truth=x,model_name=sim_object@model_name)})
model@param_truth = matrix(model_param,nrow=1,ncol=2)
}else{
model@param_truth = apply(param,2,function(x){get_true_param_aniso_sim(param_truth=x,model_name=sim_object@model_name)})
}
model@msd_truth = apply(model@param_truth, 2,function(x){get_MSD(theta = x ,d_input=0:(len_t-1),model_name=sim_object@model_name)})
model@sigma_2_0_truth = sim_object@sigma_2_0
sz = sim_object@sz
}else if(class(sim_object)[1]=="aniso_simulation"){
intensity = sim_object@intensity
model@pxsz = sim_object@pxsz
model@mindt = sim_object@mindt
M = sim_object@M
len_t = sim_object@len_t
if(sim_object@model_name=="BM"){
model_param = apply(sim_object@param,2,function(x){get_true_param_aniso_sim(param_truth=x,model_name=sim_object@model_name)})
model@param_truth = matrix(model_param,1)
}else{
model@param_truth = apply(sim_object@param,2,function(x){get_true_param_aniso_sim(param_truth=x,model_name=sim_object@model_name)})
}
model@msd_truth = sim_object@theor_msd
model@sigma_2_0_truth = sim_object@sigma_2_0
sz = sim_object@sz
}else{
model@pxsz = pxsz
model@mindt = mindt
model@msd_truth = matrix(msd_truth,1,1)
model@sigma_2_0_truth = NA
model@param_truth = matrix(NA,1,1)
sz = sz
}
if(is.vector(intensity)){
if(is.na(intensity)){
stop("Intensity profile can't be missing and should have one of the structure listed in intensity_str. \n")
}
}
if(is.na(model_name)){
model_name = "user_defined"
}
if (model_name == "user_defined" && is.na(num_param)){
stop("For user defined model, number of parameters that need to be estimated can't be empty. \n")
}
if(!is.character(model_name)){
stop("Fitted model name should be character. \n")
}
if(!is.character(method)){
stop("Method should be character. \n")
}
model@model_name = model_name
model@method = method
# Transform intensity into the same format and crop image into square image
# total number of pixels in each image = sz_x*sz_y
intensity_list = intensity_format_transform(intensity = intensity,
intensity_str = intensity_str,
square = square,sz=sz)
# Fourier transform
fft_list = FFT2D(intensity_list=intensity_list,pxsz=model@pxsz,mindt=model@mindt)
#num of rows and columns of intensity matrix, also representing frame size in y and x directions
model@sz = c(fft_list$sz_y,fft_list$sz_x)
model@len_q = fft_list$len_q
model@len_t = fft_list$len_t
model@q = fft_list$q
model@d_input = fft_list$d_input
if(!is.na(index_q_AIUQ)[1]){
if(min(index_q_AIUQ)<1 || max(index_q_AIUQ)>model@len_q){
stop("Selected q range should be between 1 and half frame size. \n")
}
}
# get each q ring location index
if(model@sz[1]==model@sz[2]){
v = (-(model@sz[1]-1)/2):((model@sz[1]-1)/2)
x = matrix(rep(v,each = model@sz[1]), byrow = FALSE,nrow = model@sz[1])
y = matrix(rep(v,each = model@sz[1]), byrow = TRUE,nrow = model@sz[1])
}else{
v_x = (-(model@sz[2]-1)/2):((model@sz[2]-1)/2)
v_y = (-(model@sz[1]-1)/2):((model@sz[1]-1)/2)
x = matrix(rep(v_x,each = model@sz[1]), byrow = FALSE,nrow = model@sz[1])
y = matrix(rep(v_y,each = model@sz[2]), byrow = TRUE,nrow = model@sz[1])
}
theta_q = cart2polar(x, y)
q_ring_num = theta_q[,(model@sz[2]+1):dim(theta_q)[2]]
q_ring_num = round(q_ring_num)
nq_index = vector(mode = "list")
for(i in 1:model@len_q){
nq_index[[i]] = which(q_ring_num==i)
}
q_ori_ring_loc = fftshift(q_ring_num, dim = 3)
q_ori_ring_loc_index = as.list(1:model@len_q)
total_q_ori_ring_loc_index = NULL
for(i in 1:model@len_q){
q_ori_ring_loc_index[[i]] = which(q_ori_ring_loc==i)
total_q_ori_ring_loc_index = c(total_q_ori_ring_loc_index, q_ori_ring_loc_index[[i]])
}
#model@q_ring_loc = q_ring_num
q_ring_loc = q_ring_num
#model@q_ori_ring_loc = q_ori_ring_loc
#model@q_ori_ring_loc_index = q_ori_ring_loc_index
#model@total_q_ori_ring_loc_index = total_q_ori_ring_loc_index
# Get A and B initial estimates
avg_I_2_ori = 0
for(i in 1:model@len_t){
avg_I_2_ori = avg_I_2_ori+abs(fft_list$I_q_matrix[,i])^2/(model@sz[1]*model@sz[2])
}
avg_I_2_ori = avg_I_2_ori/model@len_t
model@I_o_q_2_ori = rep(NA,model@len_q)
for(i in 1:model@len_q){
model@I_o_q_2_ori[i] = mean(avg_I_2_ori[q_ori_ring_loc_index[[i]]])
}
I_o_q_2_ori_last = model@I_o_q_2_ori[model@len_q]
model@B_est_ini = 2*I_o_q_2_ori_last
model@A_est_ini = 2*(model@I_o_q_2_ori - I_o_q_2_ori_last)
for (i in 1:model@len_q){
if(sum(model@A_est_ini[1:i])/sum(model@A_est_ini)>=AIUQ_thr[1]){
num_q_max = i
break
}
}
if(num_q_max/model@len_q<=AIUQ_thr[2]){
num_q_max=ceiling(AIUQ_thr[2]*model@len_q)
}
# get unique index
#model@q_ori_ring_loc_unique_index = as.list(1:model@len_q)
q_ori_ring_loc_unique_index = as.list(1:model@len_q)
# for(i in 1:model@len_q){
# unique_val = unique(avg_I_2_ori[q_ori_ring_loc_index[[i]]])
# unique_val = unique_val[1:(length(q_ori_ring_loc_index[[i]])/2)]
# index_selected = NULL
# for(j in 1:length(unique_val)){
# index_selected = c(index_selected,which(avg_I_2_ori == unique_val[j])[1])
# }
# q_ori_ring_loc_unique_index[[i]] = index_selected
# }
for(i in 1:model@len_q){
len_here = (length(q_ori_ring_loc_index[[i]])-2)/2
q_ori_ring_loc_unique_index[[i]] = q_ori_ring_loc_index[[i]][c(1,3:(3+len_here-1))]
}
total_q_ori_ring_loc_unique_index = NULL
for(i in 1:model@len_q){
total_q_ori_ring_loc_unique_index=c(total_q_ori_ring_loc_unique_index,
q_ori_ring_loc_unique_index[[i]])
}
#anisotropic
if(model@sz[1]==model@sz[2]){
q1_unique_index=as.list(1:model@len_q)
q2_unique_index=as.list(1:model@len_q)
for(i in 1:model@len_q){
index_here=q_ori_ring_loc_unique_index[[i]]
total_num_unique_index_here=length(q_ori_ring_loc_unique_index[[i]])
q1_unique_index[[i]]=q2_unique_index[[i]]=rep(NA,total_num_unique_index_here)
for(j in 1:(total_num_unique_index_here)){
q1_unique_index[[i]][j]=floor((index_here[j]-1)/model@sz[1]) ##could contain zero
left_here=(index_here[j]-1)%%model@sz[1]
if(left_here<=model@len_q){
q2_unique_index[[i]][j]=(index_here[j]-1)%%model@sz[1] ##could contain zero
}else{
q2_unique_index[[i]][j]=model@sz[1]-(index_here[j]-1)%%model@sz[1]-1 ##could contain zero
}
}
}
q1 = c((1:((model@sz[1]-1)/2))*2*pi/(model@sz[1]*model@pxsz))
q2 = c((1:((model@sz[1]-1)/2))*2*pi/(model@sz[1]*model@pxsz))
}else{stop("Update for rectangular image!")}
if(is.na(sigma_0_2_ini)){sigma_0_2_ini=min(model@I_o_q_2_ori)}
if(sum(is.na(param_initial))>=1){
param_initial = get_initial_param(model_name=paste(model@model_name,"_anisotropic",sep=""),
sigma_0_2_ini=sigma_0_2_ini,
num_param=num_param)
}else{
param_initial = log(c(param_initial,sigma_0_2_ini))
}
if(model@method == "AIUQ"){
##if index_q_AIUQ is not defined then we define it; otherwise we use the defined index
if( is.na(index_q_AIUQ)[1]){
index_q_AIUQ = 1:num_q_max
}
p = (length(param_initial)-1)/2
num_iteration_max = 50+(2*p-1)*10
lower_bound = c(rep(-30,2*p),-Inf)
if(model@model_name == "user_defined"){
if(is.function(msd_grad_fn)==T){
gr = anisotropic_log_lik_grad
}else{gr = NULL}
}else{gr = anisotropic_log_lik_grad}
m_param = try(optim(param_initial,anisotropic_log_lik, #gr=gr,
I_q_cur=fft_list$I_q_matrix,
B_cur=NA,index_q=index_q_AIUQ,
I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,d_input=model@d_input,
q1=q1,q2=q2,q1_unique_index=q1_unique_index,
q2_unique_index=q2_unique_index,
model_name=paste(model@model_name,"_anisotropic",sep=""),
msd_fn=msd_fn,msd_grad_fn=msd_grad_fn,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max)),TRUE)
if(num_optim>1){
for(i_try in 1:(num_optim-1)){
param_initial_try=param_initial+i_try*runif(2*p+1)
if(message_out){
cat("start of another optimization, initial values: ",param_initial_try, "\n")
}
m_param_try = try(optim(param_initial_try,anisotropic_log_lik, #gr=gr,
I_q_cur=fft_list$I_q_matrix,B_cur=NA,
index_q=index_q_AIUQ,I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,d_input=model@d_input,
q1=q1,q2=q2,q1_unique_index=q1_unique_index,
q2_unique_index=q2_unique_index,
model_name=paste(model@model_name,"_anisotropic",sep=""),
msd_fn=msd_fn,msd_grad_fn=msd_grad_fn,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max)),TRUE)
if(class(m_param)[1]!="try-error"){
if(class(m_param_try)[1]!="try-error"){
if(m_param_try$value>m_param$value){
m_param=m_param_try
}
}
}else{##if m_param has an error then change
m_param=m_param_try
}
}
}
count_compute=0 ##if it has an error in optimization, try some more
while(class(m_param)[1]=="try-error"){
count_compute=count_compute+1
param_initial_try=param_initial+count_compute*runif(2*p+1)
if(message_out){
cat("start of another optimization, initial values: ",param_initial_try, "\n")
}
m_param = try(optim(param_initial_try,anisotropic_log_lik, #gr=gr,
I_q_cur=fft_list$I_q_matrix,B_cur=NA,
index_q=index_q_AIUQ,I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,d_input=model@d_input,
q1=q1,q2=q2,q1_unique_index=q1_unique_index,
q2_unique_index=q2_unique_index,
model_name=paste(model@model_name,"_anisotropic",sep=""),
msd_fn=msd_fn,msd_grad_fn=msd_grad_fn,method='L-BFGS-B',
lower=lower_bound,control = list(fnscale=-1,maxit=num_iteration_max)),TRUE)
if(count_compute>=2){
break
}
}
##if still not converge to a finite value, let's try no derivative search
if(class(m_param)[1]=="try-error"){
m_param = try(optim(param_initial,anisotropic_log_lik,
I_q_cur=fft_list$I_q_matrix,
B_cur=NA,index_q=index_q_AIUQ,
I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,d_input=model@d_input,
q1=q1,q2=q2,q1_unique_index=q1_unique_index,
q2_unique_index=q2_unique_index,
model_name=paste(model@model_name,"_anisotropic",sep=""),
msd_fn=msd_fn,msd_grad_fn=msd_grad_fn,method='L-BFGS-B',
lower=lower_bound,control = list(fnscale=-1,maxit=num_iteration_max)),TRUE)
count_compute=0
while(class(m_param)[1]=="try-error"){
count_compute=count_compute+1
#compute_twice=T
##change it to runif
#c(rep(0.5,p),0)
param_initial_try=param_initial+count_compute*runif(2*p+1)
if(message_out){
cat("start of another optimization, initial values: ",param_initial_try, "\n")
}
m_param = try(optim(param_initial_try,anisotropic_log_lik,
I_q_cur=fft_list$I_q_matrix,B_cur=NA,
index_q=index_q_AIUQ,I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,d_input=model@d_input,
q1=q1,q2=q2,q1_unique_index=q1_unique_index,
q2_unique_index=q2_unique_index,
model_name=paste(model@model_name,"_anisotropic",sep=""),
msd_fn=msd_fn,msd_grad_fn=msd_grad_fn,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max)),TRUE)
if(count_compute>=2){
break
}
}
}
param_est = m_param$par
model@mle = m_param$value
AIC = 2*(length(param_est)+length(index_q_AIUQ)-m_param$value)
model@sigma_2_0_est = exp(param_est[length(param_est)])
param_est = matrix(param_est[-length(param_est)],ncol=2)
if(model_name=="BM"){
param_est = apply(param_est,2,function(x){get_est_param(theta=exp(x),model_name = model@model_name)})
model@param_est = matrix(param_est,nrow=1,ncol=2)
}else{
model@param_est = apply(param_est,2,function(x){get_est_param(theta=exp(x),model_name = model@model_name)})
}
model@msd_est = apply(model@param_est, 2,function(x){get_MSD(theta = x,d_input=model@d_input,model_name=model@model_name, msd_fn=msd_fn)})
if(uncertainty==TRUE && is.na(M)!=TRUE){
param_uq_range=param_uncertainty_anisotropic(param_est=m_param$par,I_q_cur=fft_list$I_q_matrix,
index_q=index_q_AIUQ,
I_o_q_2_ori=model@I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
sz=model@sz,len_t=model@len_t,
q1=q1,q2=q2,q1_unique_index=q1_unique_index,
q2_unique_index=q2_unique_index,
d_input=model@d_input,
model_name=paste(model@model_name,"_anisotropic",sep=""),
M=M,num_iteration_max=num_iteration_max,
lower_bound=lower_bound,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn)
for(i_p in 1:length(m_param$par)){
param_uq_range[,i_p]=c(min((m_param$par[i_p]),param_uq_range[1,i_p]),
max((m_param$par[i_p]),param_uq_range[2,i_p]))
}
SAM_range_list=get_est_parameters_MSD_SAM_interval_anisotropic(param_uq_range,
model_name=model@model_name,
d_input=model@d_input, msd_fn=msd_fn)
model@uncertainty = uncertainty
model@msd_x_lower = SAM_range_list$MSD_x_lower
model@msd_x_upper = SAM_range_list$MSD_x_upper
model@msd_y_lower = SAM_range_list$MSD_y_lower
model@msd_y_upper = SAM_range_list$MSD_y_upper
model@param_uq_range = cbind(SAM_range_list$est_parameters_lower,SAM_range_list$est_parameters_upper)
}else{
model@uncertainty = FALSE
model@msd_x_lower = NA
model@msd_x_upper = NA
model@msd_y_lower = NA
model@msd_y_upper = NA
model@param_uq_range = matrix(NA,1,1)
}
}
if(model@method=='AIUQ'){
model@index_q = index_q_AIUQ
}
model@I_q = fft_list$I_q_matrix
#model@p = p
model@AIC = AIC
model@q_ori_ring_loc_unique_index = q_ori_ring_loc_unique_index
return(model)
}
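The wave-number selection driven by `AIUQ_thr` can be illustrated in isolation. Below is a minimal standalone sketch with hypothetical amplitude estimates (the `A_est` values are made up for illustration, not package output), mirroring the selection logic inside `aniso_SAM`:

```r
# Pick the smallest q index whose cumulative amplitude fraction reaches
# thr[1], then enforce the minimum proportion thr[2].
A_est <- c(5, 3, 1, 0.5, 0.3, 0.2)   # hypothetical A_est_ini values
thr <- c(0.9, 0.5)
num_q_max <- which(cumsum(A_est) / sum(A_est) >= thr[1])[1]
if (num_q_max / length(A_est) <= thr[2]) {
  num_q_max <- ceiling(thr[2] * length(A_est))
}
num_q_max  # q rings 1..num_q_max are used unless index_q_AIUQ is supplied
```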
## End of AIUQ/R/aniso_SAM.R
#' Simulate anisotropic 2D particle movement
#'
#' @description
#' Simulate anisotropic 2D particle movement from a user selected stochastic
#' process, and output intensity profiles.
#'
#' @param sz frame size of simulated image with default \code{c(200,200)}.
#' @param len_t number of time steps with default 200.
#' @param M number of particles with default 50.
#' @param model_name stochastic process simulated, options from
#' ('BM','OU','FBM','OU+FBM'), with default 'BM'.
#' @param noise background noise, options from ('uniform','gaussian'),
#' with default 'gaussian'.
#' @param I0 background intensity, value between 0 and 255, with default 20.
#' @param Imax maximum intensity at the center of the particle, value between 0
#' and 255, with default 255.
#' @param pos0 initial position for M particles, matrix with dimension M by 2.
#' @param rho correlation between successive steps in the O-U process, in the
#' x and y directions. A vector of length 2 with values between 0 and 1,
#' default \code{c(0.95,0.9)}.
#' @param H Hurst parameter of fractional Brownian Motion, in x, y-directions.
#' A vector of length 2, value between 0 and 1, default \code{c(0.4,0.3)}.
#' @param sigma_p radius of the spherical particle (3sigma_p), with default 2.
#' @param sigma_bm distance moved per time step of Brownian Motion,
#' in x,y-directions. A vector of length 2 with default \code{c(1,0.5)}.
#' @param sigma_ou distance moved per time step of Ornstein–Uhlenbeck process,
#' in x, y-directions. A vector of length 2 with default \code{c(2,1.5)}.
#' @param sigma_fbm distance moved per time step of fractional Brownian Motion,
#' in x, y-directions. A vector of length 2 with default \code{c(2,1.5)}.
#'
#' @return Returns an S4 object of class \code{aniso_simulation}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#'
#' @examples
#' library(AIUQ)
#' # -------------------------------------------------
#' # Example 1: Simple diffusion for 200 images with
#' # 200 by 200 pixels and 50 particles
#' # -------------------------------------------------
#' aniso_sim_bm = aniso_simulation()
#' show(aniso_sim_bm)
#'
#' # -------------------------------------------------
#' # Example 2: Simple diffusion for 100 images with
#' # 100 by 100 pixels and slower speed
#' # -------------------------------------------------
#' aniso_sim_bm = aniso_simulation(sz=100,len_t=100,sigma_bm=c(0.5,0.1))
#' show(aniso_sim_bm)
#'
#' # -------------------------------------------------
#' # Example 3: Ornstein-Uhlenbeck process
#' # -------------------------------------------------
#' aniso_sim_ou = aniso_simulation(model_name="OU")
#' show(aniso_sim_ou)
aniso_simulation <- function(sz=c(200,200), len_t=200, M=50, model_name="BM",
noise="gaussian", I0=20, Imax=255,
pos0=matrix(NaN,nrow=M,ncol=2), rho=c(0.95,0.9),
H=c(0.4,0.3), sigma_p=2,sigma_bm=c(1,0.5),
sigma_ou=c(2,1.5), sigma_fbm=c(2,1.5)){
model <- methods::new("aniso_simulation")
#check
len_t = as.integer(len_t)
M = as.integer(M)
if(length(sz)==1){
sz=c(sz,sz)
}
if(length(sz)>2){
stop("Frame size of simulated image should be a vector with length 2. \n")
}
if(!is.character(model_name)){
stop("Type of stochastic process should be a character value. \n")
}
if(model_name!="BM" && model_name!="OU" && model_name!="FBM" && model_name!="OU+FBM"){
stop("Type of stochastic process should be one of the type listed in help page. \n")
}
if(!is.character(noise)){
stop("Type of background noise should be a character value. \n")
}
if(noise!="gaussian" && noise!="uniform"){
stop("Type of background noise should be one of the type listed in help page. \n")
}
if(!is.numeric(I0)){
stop("Background intensity should have numeric value. \n")
}
if(I0<0 || I0>255){
stop("Background intensity should have value between 0 and 255. \n")
}
if(!is.numeric(Imax)){
stop("Maximum intensity at the center of the particle should be a numeric value. \n")
}
if(Imax<0 || Imax>255){
stop("Maximum intensity at the center of the particle should have value between 0 and 255. \n")
}
if(!is.numeric(pos0)){
stop("Initial position for particles should be all numeric. \n")
}
if(nrow(pos0)!=M || ncol(pos0)!=2){
stop("Dimension of particle initial position matrix should match M by 2. \n")
}
if(!is.numeric(rho)){
stop("Correlation between steps in O-U process should be numeric. \n")
}
if(length(rho)==1){
rho=c(rho,rho)
}
if(!is.numeric(H)){
stop("Hurst parameter of fractional Brownian Motion should be numeric. \n")
}
if(length(H)==1){
H=c(H,H)
}
if(H[1]<0 || H[2]<0 || H[1]>1 || H[2]>1){
stop("Hurst parameter of fractional Brownian Motion should have value between 0 and 1. \n")
}
if(!is.numeric(sigma_p)){
stop("Radius of the spherical particle should be numeric. \n")
}
if(!is.numeric(sigma_bm)){
stop("Distance moved per time step in Brownian Motion should be numeric. \n")
}
if(length(sigma_bm)==1){
sigma_bm=c(sigma_bm,sigma_bm)
}
if(!is.numeric(sigma_ou)){
stop("Distance moved per time step in Ornstein Uhlenbeck process should be numeric. \n")
}
if(length(sigma_ou)==1){
sigma_ou=c(sigma_ou,sigma_ou)
}
if(!is.numeric(sigma_fbm)){
stop("Distance moved per time step in fractional Brownian Motion should be numeric. \n")
}
if(length(sigma_fbm)==1){
sigma_fbm=c(sigma_fbm,sigma_fbm)
}
# Simulate particle trajectories for the anisotropic process
if(sum(is.na(pos0))>=1){
pos0 = matrix(c(sz[2]/8+0.75*sz[2]*stats::runif(M),
sz[1]/8+0.75*sz[1]*stats::runif(M)),nrow=M,ncol=2)
}
if(model_name == "BM"){
pos = anisotropic_bm_particle_intensity(pos0=pos0,M=M,len_t=len_t,
sigma=sigma_bm)
model@param = matrix(sigma_bm,nrow=1,ncol=2)
}else if(model_name == "OU"){
pos = anisotropic_ou_particle_intensity(pos0=pos0,M=M,len_t=len_t,
sigma=sigma_ou,rho=rho)
model@param = rbind(rho,sigma_ou)
}else if(model_name == "FBM"){
pos = anisotropic_fbm_particle_intensity(pos0=pos0,M=M,len_t=len_t,
sigma=sigma_fbm,H=H)
model@param = rbind(sigma_fbm,H)
}else if(model_name == "OU+FBM"){
pos = anisotropic_fbm_ou_particle_intensity(pos0=pos0,M=M,len_t=len_t,H=H,
rho=rho,sigma_ou = sigma_ou,
sigma_fbm = sigma_fbm)
model@param = rbind(rho,sigma_ou,sigma_fbm,H)
}
if(model_name=="BM"){
model_param = apply(model@param,2,function(x){get_true_param_aniso_sim(param_truth=x,model_name=model_name)})
model_param = matrix(model_param,nrow=1,ncol=2)
}else{
model_param = apply(model@param,2,function(x){get_true_param_aniso_sim(param_truth=x,model_name=model_name)})
}
model@theor_msd = apply(model_param, 2,function(x){get_MSD(theta = x ,d_input=0:(len_t-1),model_name=model_name)})
# Fill intensity
if(length(I0) == len_t){
if(noise == "uniform"){
I = matrix(stats::runif(sz[1]*sz[2]*len_t)-0.5, nrow=len_t,ncol = sz[1]*sz[2])
I = I*I0
model@sigma_2_0 = I0^2/12
}else if(noise == "gaussian"){
I = matrix(stats::rnorm(sz[1]*sz[2]*len_t), nrow=len_t,ncol = sz[1]*sz[2])
I = I*sqrt(I0)
model@sigma_2_0 = I0
}
}else if(length(I0) == 1){
if(noise == "uniform"){
I = matrix(I0*(stats::runif(sz[1]*sz[2]*len_t)-0.5), nrow=len_t,ncol = sz[1]*sz[2])
model@sigma_2_0 = c(I0^2/12)
} else if(noise == "gaussian"){
I = matrix(sqrt(I0)*stats::rnorm(sz[1]*sz[2]*len_t), nrow=len_t,ncol = sz[1]*sz[2])
model@sigma_2_0 = c(I0)
}
}
if(length(Imax)==1){
Ic = rep(Imax,M)
model@intensity = fill_intensity(len_t=len_t,M=M,I=I,pos=pos,Ic=Ic,sz=sz, sigma_p=sigma_p)
}
model@sz = sz
model@pxsz = 1
model@mindt = 1
model@len_t = len_t
model@noise = noise
model@M = M
model@model_name = model_name
model@pos = pos
model@num_msd = anisotropic_numerical_msd(pos=model@pos,M=model@M,len_t=model@len_t)
return(model)
}
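The stored `sigma_2_0` slot corresponds to the theoretical variance of the simulated background noise. A quick standalone check, assuming the default `I0 = 20`:

```r
# Uniform noise I0*(U - 0.5) has variance I0^2/12; Gaussian noise sqrt(I0)*Z
# has variance I0, matching the sigma_2_0 values set in aniso_simulation().
set.seed(1)
I0 <- 20
var(I0 * (stats::runif(1e5) - 0.5))  # close to I0^2/12 = 33.3
var(sqrt(I0) * stats::rnorm(1e5))    # close to I0 = 20
```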
## End of AIUQ/R/aniso_simulation.R
#' Transform Cartesian coordinates to polar coordinates
#' @description
#' Transform ordered pairs (x,y), where x and y denote the
#' directed distances between the point and each of two
#' perpendicular lines, the x-axis and the y-axis, to polar
#' coordinates. Input x and y must have the same length.
#'
#'
#' @param x a vector of x-coordinates
#' @param y a vector of y-coordinates
#'
#' @return A data frame with 2 variables, where r is the
#' directed distance from a point designated as the pole, and
#' theta represents the angle, in radians, between the pole and the point.
#' @export
#' @author \packageAuthor{AIUQ}
#' @concept cart2pol
#' @examples
#' library(AIUQ)
#'
#' # Input in Cartesian coordinates
#' (x <- rep(1:3,each = 3))
#' (y <- rep(1:3,3))
#'
#' # Data frame with polar coordinates
#' (polar <- cart2polar(x, y))
#'
#' @keywords internal
cart2polar <- function(x, y) {
if(length(x)!=length(y)){
stop("Length of points in each coordinate should be the same. \n")
}
if(!is.numeric(x) || !is.numeric(y)){
stop("Inputs should have numeric values. \n")
}
data.frame(theta = atan2(y, x),r = sqrt(x^2 + y^2))
}
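Within `aniso_SAM`, the polar radius returned by `cart2polar` is rounded to assign each pixel to a circular q ring. A minimal standalone sketch for a 5 by 5 frame:

```r
# Centered pixel coordinates for a 5x5 frame, as in the ring construction
# inside aniso_SAM(); rounding the polar radius gives the ring index.
v <- (-2):2
x <- matrix(rep(v, each = 5), nrow = 5, byrow = FALSE)  # x varies by column
y <- matrix(rep(v, each = 5), nrow = 5, byrow = TRUE)   # y varies by row
polar <- cart2polar(as.vector(x), as.vector(y))
q_ring <- matrix(round(polar$r), 5, 5)  # ring index per pixel (0 = center)
which(q_ring == 1)                      # pixel indices on the first ring
```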
#' Transform intensity profile into SS_T matrix
#'
#' @description
#' Transform intensity profile with different formats, ('SST_array','T_SS_mat',
#' 'SS_T_mat','S_ST_mat'), space by space by time array, time by (space by space) matrix,
#' (space by space) by time matrix, or space by (space by time) matrix, into
#' 'SS_T_mat'. In addition, crop each frame with odd frame size.
#'
#' @param intensity intensity profile, array or matrix
#' @param intensity_str structure of the original intensity
#' profile, options from ('SST_array','T_SS_mat','SS_T_mat','S_ST_mat')
#' @param square a logical evaluating to TRUE or FALSE indicating whether or not
#' to crop each frame into a square such that frame size in x direction equals
#' frame size in y direction with \code{sz_x=sz_y}
#' @param sz frame size of each frame. A vector of length 2 with frame size in
#' y/(row) direction, and frame size in x/(column) direction, with default \code{NA}.
#'
#' @return A matrix of transformed intensity profile.
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' # -------------------------------------------------
#' # Example 1: Transform T_SS_mat into SS_T_mat, each
#' # frame contains number 1-9
#' # -------------------------------------------------
#' (m <- matrix(rep(1:9,4),4,9,byrow=TRUE))
#' intensity_format_transform(m,intensity_str="T_SS_mat",sz=c(4,9))
#'
#' # -------------------------------------------------
#' # Example 2: Transform SST_array into SS_T_mat, each
#' # frame contains number 1-9
#' # -------------------------------------------------
#' (m <- array(rep(1:9,4),dim=c(3,3,4)))
#' intensity_format_transform(m,intensity_str="SST_array")
#'
#' @keywords internal
intensity_format_transform<-function(intensity,intensity_str, square=FALSE, sz=NA){
if(intensity_str=='SST_array'){#most real experimental structure
if(square==TRUE){
sz_x = sz_y = min(dim(intensity)[1],dim(intensity)[2])
if(sz_x%%2 == 0){ #if even frame size, crop into odd
sz_x=sz_x-1
sz_y=sz_y-1
}
}else{
sz_y = dim(intensity)[1]
sz_x = dim(intensity)[2]
if(sz_y%%2==0){
sz_y = sz_y-1
}
if(sz_x%%2==0){
sz_x = sz_x-1
}
}
len_t = dim(intensity)[3]
intensity_transform = matrix(NA,sz_x*sz_y,len_t)
for(i in 1:len_t){
intensity_transform[,i] = as.numeric(intensity[1:sz_y,1:sz_x,i])
}
}else if(intensity_str=='T_SS_mat'){#Simulated data using AIUQ simulation class
if(sum(is.na(sz))>0){
stop("Please give a vector of sz that contains frame size of each frame.")
}
intensity=as.matrix(intensity)
if(square==TRUE){
sz_x=sz_y=min(sz)
if(sz_x%%2==0){
sz_x=sz_x-1
sz_y=sz_y-1
}
}else{
sz_y = sz[1]
sz_x = sz[2]
if(sz_y%%2==0){
sz_y = sz_y-1
}
if(sz_x%%2==0){
sz_x = sz_x-1
}
}
len_t = dim(intensity)[1]
intensity_transform=matrix(NA,sz_x*sz_y,len_t)
for(i in 1:len_t){
intensity_mat=matrix(intensity[i,],sz[1],sz[2])
intensity_transform[,i]=as.vector(intensity_mat[1:sz_y,1:sz_x])
}
}else if(intensity_str=='S_ST_mat'){
if(sum(is.na(sz))>0){
stop("Please give a vector of sz that contains frame size of each frame.")
}
intensity=as.matrix(intensity)
if(square==TRUE){
sz_x=sz_y=min(sz)
if(sz_x%%2==0){
sz_x=sz_x-1
sz_y=sz_y-1
}
}else{
sz_y = sz[1]
sz_x = sz[2]
if(sz_y%%2==0){
sz_y=sz_y-1
}
if(sz_x%%2==0){
sz_x=sz_x-1
}
}
len_t = dim(intensity)[2]/sz[2]
intensity_transform=matrix(NA,sz_x*sz_y,len_t)
for(i in 1:len_t){
selected_x = ((1+sz[2]*(i-1)):(sz[2]+sz[2]*(i-1)))[1:sz_x]
intensity_transform[,i]=
as.vector(intensity[1:sz_y,selected_x])
}
}else if(intensity_str=='SS_T_mat'){
if(sum(is.na(sz))>0){
stop("Please give a vector of sz that contains frame size of each frame.")
}
intensity=as.matrix(intensity)
if(square==TRUE){
sz_x=sz_y=min(sz)
if(sz_x%%2==0){
sz_x=sz_x-1
sz_y=sz_y-1
}
}else{
sz_y = sz[1]
sz_x = sz[2]
if(sz_y%%2==0){
sz_y = sz_y-1
}
if(sz_x%%2==0){
sz_x=sz_x-1
}
}
len_t = dim(intensity)[2]
intensity_transform=matrix(NA,sz_x*sz_y,len_t)
for(i in 1:len_t){
intensity_mat=matrix(intensity[,i],sz[1],sz[2])
intensity_transform[,i]=as.vector(intensity_mat[1:sz_y,1:sz_x])
}
}
intensity_transform_list = list()
intensity_transform_list$sz_x = sz_x
intensity_transform_list$sz_y = sz_y
intensity_transform_list$intensity = intensity_transform
return(intensity_transform_list)
}
#' 2D Fourier transformation and calculate wave number
#' @description
#' Perform 2D fast Fourier transformation on an SS_T matrix; record the frame
#' size, the total number of frames, and the sequence of lag times. Calculate
#' and record the circular wave numbers for the complete q ring.
#'
#' @param intensity_list a list containing the intensity profile (SS_T matrix)
#' and the frame sizes \code{sz_x} and \code{sz_y}
#' @param pxsz size of one pixel in unit of micron
#' @param mindt minimum lag time
#'
#' @return A list object containing transformed intensity profile in reciprocal
#' space and corresponding parameters.
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#'
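#' @examples
#' library(AIUQ)
#' # A minimal, hand-built sketch (illustrative values, not from the package
#' # docs): the input list mimics the structure returned by
#' # intensity_format_transform, with 5 x 5 frames and 10 time steps.
#' il = list(sz_x=5, sz_y=5, intensity=matrix(runif(25*10), 25, 10))
#' res = FFT2D(intensity_list=il, pxsz=1, mindt=1)
#' res$len_q
#'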
#' @keywords internal
FFT2D<-function(intensity_list,pxsz,mindt){
sz_x = intensity_list$sz_x
sz_y = intensity_list$sz_y
intensity = intensity_list$intensity
len_t = dim(intensity)[2]
I_q_matrix = matrix(NA,sz_x*sz_y,len_t)
for(i in 1:len_t){
I_q_matrix[,i] = as.vector(fftwtools::fftw2d(matrix(intensity[,i],sz_y,sz_x)))
}
ans_list = list()
#ans_list$sz = sz
ans_list$sz_x = sz_x
ans_list$sz_y = sz_y
ans_list$len_q = length(1:((max(sz_x,sz_y)-1)/2))
ans_list$len_t = len_t
ans_list$I_q_matrix = I_q_matrix
ans_list$q = (1:((max(sz_x,sz_y)-1)/2))*2*pi/(max(sz_x,sz_y)*pxsz)
ans_list$input = mindt*(1:(len_t))
  ans_list$d_input = ans_list$input - ans_list$input[1] ##delta t, including zero
return(ans_list)
}
#' fftshift
#'
#' @description
#' Rearranges a 2D Fourier transform x by shifting the zero-frequency component
#' to the center of the matrix.
#'
#'
#' @param x square matrix input with odd number of rows and columns
#' @param dim shift method. See 'Details'.
#'
#' @return Shifted matrix.
#'
#' @details
#' By default, \code{dim=-1}, swaps the first quadrant of x with the
#' third, and the second quadrant with the fourth. If \code{dim=1}, swaps rows 1
#' to middle with rows (middle+1) to end. If \code{dim=2}, swaps columns 1
#' to middle with columns (middle+1) to end. If \code{dim=3}, performs the
#' reverse fftshift.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#'
#' (m <- matrix(0:8,3,3))
#' fftshift(m)
#'
#' @keywords internal
fftshift <- function(x, dim = -1) {
rows <- dim(x)[1]
cols <- dim(x)[2]
swap_up_down <- function(x) {
rows_half <- ceiling(rows/2)
return(rbind(x[((rows_half+1):rows), (1:cols)], x[(1:rows_half), (1:cols)]))
}
swap_left_right <- function(x) {
cols_half <- ceiling(cols/2)
return(cbind(x[1:rows, ((cols_half+1):cols)], x[1:rows, 1:cols_half]))
}
swap_up_down_reverse <- function(x) {
rows_half <- ceiling(rows/2)
return(rbind(x[((rows_half):rows), (1:cols)], x[1:(rows_half-1), (1:cols)]))
}
swap_left_right_reverse <- function(x) {
cols_half <- ceiling(cols/2)
return(cbind(x[1:rows, ((cols_half):cols)], x[1:rows, 1:(cols_half-1)]))
}
if (dim == -1) {
x <- swap_up_down(x)
return(swap_left_right(x))
}
else if (dim == 1) {
return(swap_up_down(x))
}
else if (dim == 2) {
return(swap_left_right(x))
}else if(dim == 3){
x <- swap_up_down_reverse(x)
return(swap_left_right_reverse(x))
}
else {
stop("Invalid dimension parameter")
}
}
#' Construct MSD and MSD gradient with transformed parameters
#'
#' @description
#' Construct mean squared displacement (MSD) and its gradient for a given
#' stochastic process or a user defined MSD and gradient structure.
#'
#' @param theta transformed parameters in MSD function for MLE estimation
#' @param d_input sequence of lag times
#' @param model_name model name for the process, options from ('BM','OU','FBM',
#' 'OU+FBM', 'user_defined').
#' @param msd_fn user defined MSD structure, a function of \code{theta} and \code{d_input}
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{theta} and \code{d_input}
#'
#' @return A list of two variables, MSD and MSD gradient.
#' @details
#' Note that for models other than \code{user_defined}, \code{msd_fn} and
#' \code{msd_grad_fn} are not needed. For Brownian Motion, the MSD follows
#' \deqn{MSD_{BM}(\Delta t) = \theta_1\Delta t= 4D\Delta t}{%MSD_{BM}(\Delta t) = \theta_1\Delta t= 4D\Delta t}
#' where \code{D} is the diffusion coefficient.
#'
#' For Ornstein–Uhlenbeck process, the MSD follows
#' \deqn{MSD_{OU}(\Delta t) = \theta_2(1-\frac{\theta_1}{1+\theta_1}^{\Delta t})}{%MSD_{OU}(\Delta t) = \theta_2(1-\frac{\theta_1}{1+\theta_1}^{\Delta t})}
#' where \eqn{\frac{\theta_1}{1+\theta_1}=\rho}{%\frac{\theta_1}{1+\theta_1}=\rho}
#' is the correlation with previous steps.
#'
#' For fractional Brownian Motion, the MSD follows
#' \deqn{MSD_{FBM}(\Delta t) =\theta_1\Delta t^{\frac{2\theta_2}{1+\theta_2}}}{%MSD_{FBM}(\Delta t) =\theta_1\Delta t^{\frac{2\theta_2}{1+\theta_2}}}
#' where \eqn{\frac{2\theta_2}{1+\theta_2}=2H}{%\frac{2\theta_2}{1+\theta_2}=2H}
#' with \code{H} being the Hurst parameter.
#'
#' For 'OU+FBM', the MSD follows
#' \deqn{MSD_{OU+FBM}(\Delta t) = \theta_2(1-\frac{\theta_1}{1+\theta_1}^{\Delta t})+\theta_3\Delta t^{\frac{2\theta_4}{1+\theta_4}}}{%MSD_{OU+FBM}(\Delta t) = \theta_2(1-\frac{\theta_1}{1+\theta_1}^{\Delta t})+\theta_3\Delta t^{\frac{2\theta_4}{1+\theta_4}}}
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' @examples
#' library(AIUQ)
#' msd_fn <- function(param, d_input){
#' beta = 2*param[1]^2
#' MSD = beta*d_input
#' }
#' msd_grad_fn <- function(param, d_input){
#' MSD_grad = 4*param[1]*d_input
#' }
#'
#' theta = 2
#' d_input = 0:10
#' model_name = "user_defined"
#'
#' MSD_list = get_MSD_with_grad(theta=theta,d_input=d_input,
#' model_name=model_name,msd_fn=msd_fn,
#' msd_grad_fn=msd_grad_fn)
#' MSD_list$msd
#' MSD_list$msd_grad
#'
#' @keywords internal
get_MSD_with_grad<-function(theta,d_input,model_name,msd_fn=NA,msd_grad_fn=NA){
if(model_name=='user_defined' || model_name=='user_defined_anisotropic'){
    if(is.function(msd_grad_fn)){
MSD = msd_fn(theta, d_input)
MSD_grad = msd_grad_fn(theta, d_input)
}else{
MSD = msd_fn(theta, d_input)
MSD_grad = NA
}
}else if(model_name=='BM'||model_name=='BM_anisotropic'){
beta = theta[1]
MSD = beta*d_input
MSD_grad = as.matrix(d_input)
}else if(model_name=='FBM'||model_name=='FBM_anisotropic'){
beta = theta[1]
alpha = 2*theta[2]/(1+theta[2])
MSD = beta*d_input^alpha
MSD_grad = cbind(d_input^alpha, beta*c(0,log(d_input[-1]))*(d_input^alpha))
}else if(model_name=='OU'||model_name=='OU_anisotropic'){
rho = theta[1]/(1+theta[1])
amplitude = theta[2]
MSD = (amplitude*(1-rho^d_input))
MSD_grad = cbind(-amplitude*d_input*(rho^(d_input-1)), (1-rho^d_input))
}else if(model_name=='OU+FBM'||model_name=='OU+FBM_anisotropic'){
rho = theta[1]/(1+theta[1])
amplitude = theta[2]
beta = theta[3]
alpha = 2*theta[4]/(1+theta[4])
MSD = beta*d_input^alpha+(amplitude*(1-rho^d_input))
MSD_grad = cbind(-amplitude*d_input*(rho^(d_input-1)),
(1-rho^d_input),
d_input^alpha,beta*c(0,log(d_input[-1]))*(d_input^alpha))
}else if(model_name=='VFBM'||model_name=='VFBM_anisotropic'){
a = theta[1]
b = theta[2]
c = theta[3]/(1+theta[3])
d = theta[4]/(1+theta[4])
MSD = a*d_input^((c*d_input)/(1+b*d_input)+d)
MSD_grad = cbind(d_input^((c*d_input)/(1+b*d_input)+d),
-a*d_input^((c*d_input)/(1+b*d_input)+d)*c(0,log(d_input[-1]))*c*(1+b*d_input)^(-2)*d_input^2,
a*d_input^((c*d_input)/(1+b*d_input)+d)*c(0,log(d_input[-1]))*d_input/(1+b*d_input),
a*d_input^((c*d_input)/(1+b*d_input)+d)*c(0,log(d_input[-1])))
}
msd_list = list()
msd_list$msd = MSD
msd_list$msd_grad = MSD_grad
return(msd_list)
}
#' Transform parameters in simulation class to parameters structure in MSD function
#' @description
#' Transform parameters in \code{simulation} class to parameters contained in MSD
#' function with structure \code{theta} in \code{\link{get_MSD}}, in
#' preparation for constructing the true MSD.
#'
#' @param param_truth parameters used in \code{simulation} class
#' @param model_name stochastic process used in \code{simulation}, options from
#' ('BM','OU','FBM','OU+FBM')
#'
#' @return A vector of parameters contained in MSD with structure \code{theta} in
#' \code{\link{get_MSD}}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' # Simulate simple diffusion for 100 images with 100 by 100 pixels and
#' # distance moved per time step is 0.5
#' sim_bm = simulation(sz=100,len_t=100,sigma_bm=0.5)
#' show(sim_bm)
#' get_true_param_sim(param_truth=sim_bm@param,model_name=sim_bm@model_name)
#'
#' @keywords internal
get_true_param_sim<-function(param_truth,model_name){
if(model_name=='BM'){
beta = 2*param_truth[1]^2
param = c(beta)
}else if(model_name=='FBM'){
beta = 2*param_truth[1]^2
alpha = 2*param_truth[2]
param = c(beta, alpha)
}else if(model_name=='OU'){
rho = param_truth[1]
amplitude = 4*param_truth[2]^2
param = c(rho,amplitude)
}else if(model_name=='OU+FBM'){
rho = param_truth[1]
amplitude = 4*param_truth[2]^2
beta = 2*param_truth[1]^2
alpha = 2*param_truth[2]
param = c(rho,amplitude,beta,alpha)
}
return(param)
}
#' Transform parameters in anisotropic simulation class to parameters structure in MSD function
#' @description
#' Transform parameters in \code{aniso_simulation} class to parameters contained in MSD
#' function with structure \code{theta} in \code{\link{get_MSD}}, in
#' preparation for constructing the true MSD.
#'
#' @param param_truth parameters used in \code{aniso_simulation} class
#' @param model_name stochastic process used in \code{aniso_simulation}, options from
#' ('BM','OU','FBM','OU+FBM')
#'
#' @return A vector of parameters contained in MSD with structure \code{theta} in
#' \code{\link{get_MSD}}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' # Simulate simple diffusion for 100 images with 100 by 100 pixels and
#' # distance moved per time step is 0.5
#' aniso_sim_bm = aniso_simulation(sz=100,len_t=100,sigma_bm=c(0.5,0.1))
#' show(aniso_sim_bm)
#' get_true_param_aniso_sim(param_truth=aniso_sim_bm@param,model_name=aniso_sim_bm@model_name)
#'
#' @keywords internal
get_true_param_aniso_sim<-function(param_truth,model_name){
if(model_name=='BM'){
beta = param_truth[1]^2
param = c(beta)
}else if(model_name=='FBM'){
beta = param_truth[1]^2
alpha = 2*param_truth[2]
param = c(beta, alpha)
}else if(model_name=='OU'){
rho = param_truth[1]
amplitude = 2*param_truth[2]^2
param = c(rho,amplitude)
}else if(model_name=='OU+FBM'){
rho = param_truth[1]
amplitude = 2*param_truth[2]^2
beta = param_truth[3]^2
alpha = 2*param_truth[4]
param = c(rho,amplitude,beta,alpha)
}
return(param)
}
#' Transform parameters estimated in SAM class to parameters structure in MSD function
#' @description
#' Transform parameters estimated using Maximum Likelihood Estimation (MLE) in
#' \code{SAM} class to parameters contained in MSD with structure \code{theta}
#' in \code{\link{get_MSD}}.
#'
#' @param theta estimated parameters through MLE
#' @param model_name fitted stochastic process, options from ('BM','OU','FBM','OU+FBM')
#'
#' @return A vector of estimated parameters after transformation with structure
#' \code{theta} in \code{\link{get_MSD}}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
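#' @examples
#' library(AIUQ)
#' # A hedged sketch with illustrative values (not from the package docs):
#' # transform MLE-scale parameters for the FBM model back to the natural
#' # MSD parameterization, beta = theta_1 and alpha = 2*theta_2/(1+theta_2).
#' get_est_param(theta=c(1,1), model_name='FBM') # c(beta, alpha) = c(1, 1)
#'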
#' @keywords internal
get_est_param<-function(theta,model_name){
if(model_name=='BM'){
beta = theta[1]
est_param = c(beta)
}else if(model_name=='FBM'){
beta = theta[1]
alpha = 2*theta[2]/(1+theta[2])
est_param = c(beta, alpha)
}else if(model_name=='OU'){
rho = theta[1]/(1+theta[1])
amplitude = theta[2]
est_param = c(rho,amplitude)
}else if(model_name=='OU+FBM'){
rho = theta[1]/(1+theta[1])
amplitude = theta[2]
beta = theta[3]
alpha = 2*theta[4]/(1+theta[4])
est_param = c(rho,amplitude,beta,alpha)
}else if(model_name=='user_defined'){
est_param = theta
}else if(model_name=='VFBM'){
a = theta[1]
b = theta[2]
c = theta[3]/(1+theta[3])
d = theta[4]/(1+theta[4])
est_param = c(a,b,c,d)
}
return(est_param)
}
#' Construct MSD
#' @description
#' Construct estimated mean squared displacement (MSD) for a given stochastic process.
#'
#' @param theta parameters in MSD function
#' @param d_input sequence of lag times
#' @param model_name model name for the process, options from ('BM','OU','FBM',
#' 'OU+FBM','user_defined')
#' @param msd_fn user defined mean squared displacement structure (MSD), a
#' function of \code{param} parameters and \code{d_input} lag times
#'
#' @return A vector of MSD values for a given sequence of lag times.
#' @details
#' For Brownian Motion, the MSD follows
#' \deqn{MSD_{BM}(\Delta t) = \theta_1\Delta t= 4D\Delta t}{%MSD_{BM}(\Delta t) = \theta_1\Delta t= 4D\Delta t}
#' where \code{D} is the diffusion coefficient.
#'
#' For Ornstein–Uhlenbeck process, the MSD follows
#' \deqn{MSD_{OU}(\Delta t) = \theta_2(1-\theta_1^{\Delta t})}{%MSD_{OU}(\Delta t) = \theta_2(1-\theta_1^{\Delta t})}
#' where \eqn{\theta_1=\rho}{%\theta_1=\rho}
#' is the correlation with previous steps.
#'
#' For fractional Brownian Motion, the MSD follows
#' \deqn{MSD_{FBM}(\Delta t) =\theta_1\Delta t^{\theta_2}}{%MSD_{FBM}(\Delta t) =\theta_1\Delta t^{\theta_2}}
#' where \eqn{\theta_2=2H}{%\theta_2=2H} with \code{H} being the Hurst parameter.
#'
#' For 'OU+FBM', the MSD follows
#' \deqn{MSD_{OU+FBM}(\Delta t) = \theta_2(1-\theta_1^{\Delta t})+\theta_3\Delta t^{\theta_4}}{%MSD_{OU+FBM}(\Delta t) = \theta_2(1-\theta_1^{\Delta t})+\theta_3\Delta t^{\theta_4}}
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#' @examples
#' library(AIUQ)
#' # Construct MSD for BM
#' get_MSD(theta=0.2,d_input=0:100,model_name='BM')
#'
#' @keywords internal
get_MSD<-function(theta,d_input,model_name, msd_fn=NA){
if(model_name=='BM'){
beta = theta[1]
MSD = beta*d_input
}else if(model_name=='FBM'){
beta = theta[1]
alpha = theta[2]
MSD = beta*d_input^alpha
}else if(model_name=='OU'){
rho = theta[1]
amplitude = theta[2]
MSD = amplitude*(1-rho^d_input)
}else if(model_name=='OU+FBM'){
rho = theta[1]
amplitude = theta[2]
beta = theta[3]
alpha = theta[4]
MSD = beta*d_input^alpha+(amplitude*(1-rho^d_input))
}else if(model_name=='user_defined'){
MSD = msd_fn(theta, d_input)
}else if(model_name=='VFBM'){
a = theta[1]
b = theta[2]
c = theta[3]
d = theta[4]
power = (c*d_input)/(b*d_input+1)+d
MSD = a*d_input^power
}
return(MSD)
}
#' Construct 95% confidence interval
#' @description
#' This function constructs the lower and upper bounds of the 95% confidence
#' interval for the estimated parameters and mean squared displacement (MSD)
#' for a given model.
#' See 'References'.
#'
#' @param param_uq_range lower and upper bound for natural logarithm of
#' parameters in the fitted model using \code{AIUQ} method in \code{SAM} class
#' @param model_name model for constructing MSD, options from ('BM','OU',
#' 'FBM','OU+FBM', 'user_defined')
#' @param d_input sequence of lag times
#' @param msd_fn user defined mean squared displacement structure (MSD), a
#' function of \code{param} parameters and \code{d_input} lag times
#'
#' @return A list of lower and upper bound for 95% confidence interval
#' of estimated parameters and MSD for a given model.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
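#' @examples
#' library(AIUQ)
#' # A hedged sketch with illustrative numbers (not from the package docs):
#' # bounds are supplied on the natural-log scale. For 'BM', column 1 holds
#' # log(beta) and the last column log(sigma_2_0); row 1 is the lower bound,
#' # row 2 the upper.
#' rng = matrix(log(c(0.1, 0.2, 0.01, 0.02)), 2, 2)
#' get_est_parameters_MSD_SAM_interval(param_uq_range=rng, model_name='BM',
#'                                     d_input=0:10)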
#' @keywords internal
get_est_parameters_MSD_SAM_interval <- function(param_uq_range,model_name,d_input,msd_fn=NA){
theta=exp(param_uq_range[,-dim(param_uq_range)[2]])
sigma_2_0_est=exp(param_uq_range[,dim(param_uq_range)[2]])
sigma_2_0_est_lower=sigma_2_0_est[1]
sigma_2_0_est_upper=sigma_2_0_est[2]
est_parameters=NA;
if(model_name=='BM'){
beta_lower=theta[1] ##only 1 param
beta_upper=theta[2]
MSD_lower=beta_lower*d_input
MSD_upper=beta_upper*d_input
est_parameters_lower=c(beta_lower,sigma_2_0_est_lower);
est_parameters_upper=c(beta_upper,sigma_2_0_est_upper);
}else if(model_name=='FBM'){
beta_lower=theta[1,1]
beta_upper=theta[2,1]
alpha_lower=2*theta[1,2]/(1+theta[1,2])
alpha_upper=2*theta[2,2]/(1+theta[2,2])
MSD_lower=rep(NA,length(d_input))
MSD_upper=rep(NA,length(d_input))
index_less_than_1=which(d_input<1)
if(length(index_less_than_1)>0){
MSD_lower[index_less_than_1]=beta_lower*d_input[index_less_than_1]^{alpha_upper}
MSD_upper[index_less_than_1]=beta_upper*d_input[index_less_than_1]^{alpha_lower}
MSD_lower[-index_less_than_1]=beta_lower*d_input[-index_less_than_1]^{alpha_lower}
MSD_upper[-index_less_than_1]=beta_upper*d_input[-index_less_than_1]^{alpha_upper}
}else{
MSD_lower=beta_lower*d_input^{alpha_lower}
MSD_upper=beta_upper*d_input^{alpha_upper}
}
est_parameters_lower=c(beta_lower,alpha_lower,sigma_2_0_est_lower);
est_parameters_upper=c(beta_upper,alpha_upper,sigma_2_0_est_upper);
}else if(model_name=='OU'){ #seems okay
rho_lower=theta[1,1]/(1+theta[1,1])
rho_upper=theta[2,1]/(1+theta[2,1])
amplitude_lower=theta[1,2]
amplitude_upper=theta[2,2]
MSD_lower=(amplitude_lower*(1-rho_lower^d_input))
MSD_upper=(amplitude_upper*(1-rho_upper^d_input))
est_parameters_lower=c(rho_lower,amplitude_lower,sigma_2_0_est_lower);
est_parameters_upper=c(rho_upper,amplitude_upper,sigma_2_0_est_upper);
}else if(model_name=='OU+FBM'){
rho_lower=theta[1,1]/(1+theta[1,1])
rho_upper=theta[2,1]/(1+theta[2,1])
amplitude_lower=theta[1,2]
amplitude_upper=theta[2,2]
beta_lower=theta[1,3]
beta_upper=theta[2,3]
alpha_lower=2*theta[1,4]/(1+theta[1,4])
alpha_upper=2*theta[2,4]/(1+theta[2,4])
####change of uq, need to test
MSD_lower=rep(NA,length(d_input))
MSD_upper=rep(NA,length(d_input))
index_less_than_1=which(d_input<1)
if(length(index_less_than_1)>0){
MSD_lower[index_less_than_1]=beta_lower*d_input[index_less_than_1]^{alpha_upper}
MSD_upper[index_less_than_1]=beta_upper*d_input[index_less_than_1]^{alpha_lower}
MSD_lower[-index_less_than_1]=beta_lower*d_input[-index_less_than_1]^{alpha_lower}
MSD_upper[-index_less_than_1]=beta_upper*d_input[-index_less_than_1]^{alpha_upper}
}else{
MSD_lower=beta_lower*d_input^{alpha_lower}
MSD_upper=beta_upper*d_input^{alpha_upper}
}
MSD_lower=MSD_lower+(amplitude_lower*(1-rho_lower^d_input))
MSD_upper=MSD_upper+(amplitude_upper*(1-rho_upper^d_input))
est_parameters_lower=c(rho_lower,amplitude_lower,beta_lower,alpha_lower,sigma_2_0_est_lower);
est_parameters_upper=c(rho_upper,amplitude_upper,beta_upper,alpha_upper,sigma_2_0_est_upper);
}else if(model_name=='user_defined'){
if(is.matrix(theta)){
theta_lower=theta[1,]
theta_upper=theta[2,]
MSD_lower=msd_fn(theta_lower,d_input)
MSD_upper=msd_fn(theta_upper,d_input)
est_parameters_lower=c(theta_lower,sigma_2_0_est_lower)
est_parameters_upper=c(theta_upper,sigma_2_0_est_upper)
}else{
theta_lower=theta[1]
theta_upper=theta[2]
MSD_lower=msd_fn(theta_lower,d_input)
MSD_upper=msd_fn(theta_upper,d_input)
est_parameters_lower=c(theta_lower,sigma_2_0_est_lower)
est_parameters_upper=c(theta_upper,sigma_2_0_est_upper)
}
}
ans_list=list()
ans_list$est_parameters_lower=est_parameters_lower
ans_list$est_parameters_upper=est_parameters_upper
ans_list$MSD_lower=MSD_lower
ans_list$MSD_upper=MSD_upper
return(ans_list)
}
#' Construct 95% confidence interval for anisotropic processes
#' @description
#' This function constructs the lower and upper bounds of the 95% confidence
#' interval for the estimated parameters and mean squared displacement (MSD)
#' for a given anisotropic model. See 'References'.
#'
#' @param param_uq_range lower and upper bound for natural logarithm of
#' parameters in the fitted model using \code{AIUQ} method in \code{aniso_SAM} class
#' @param model_name model for constructing MSD, options from ('BM','OU',
#' 'FBM','OU+FBM', 'user_defined')
#' @param d_input sequence of lag times
#' @param msd_fn user defined mean squared displacement structure (MSD), a
#' function of \code{param} parameters and \code{d_input} lag times
#'
#' @return A list of lower and upper bound for 95% confidence interval
#' of estimated parameters and MSD for a given model.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
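#' @examples
#' library(AIUQ)
#' # A hedged sketch with illustrative numbers (not from the package docs):
#' # for anisotropic 'BM' the log-scale bounds hold log(beta_x), log(beta_y),
#' # and log(sigma_2_0) by column; row 1 is the lower bound, row 2 the upper.
#' rng = matrix(log(c(0.1, 0.2, 0.3, 0.4, 0.01, 0.02)), 2, 3)
#' get_est_parameters_MSD_SAM_interval_anisotropic(param_uq_range=rng,
#'                                                 model_name='BM', d_input=0:10)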
#' @keywords internal
get_est_parameters_MSD_SAM_interval_anisotropic <- function(param_uq_range,model_name,d_input,msd_fn=NA){
theta=exp(param_uq_range[,-dim(param_uq_range)[2]])
sigma_2_0_est=exp(param_uq_range[,dim(param_uq_range)[2]])
sigma_2_0_est_lower=sigma_2_0_est[1]
sigma_2_0_est_upper=sigma_2_0_est[2]
est_parameters=NA
if(model_name=='BM'){
beta_x_lower=theta[1,1] ##only 1 param
beta_x_upper=theta[2,1]
beta_y_lower=theta[1,2] ##only 1 param
beta_y_upper=theta[2,2]
MSD_x_lower=beta_x_lower*d_input
MSD_x_upper=beta_x_upper*d_input
MSD_y_lower=beta_y_lower*d_input
MSD_y_upper=beta_y_upper*d_input
est_parameters_lower=c(beta_x_lower,beta_y_lower,sigma_2_0_est_lower);
est_parameters_upper=c(beta_x_upper,beta_y_upper,sigma_2_0_est_upper);
}else if(model_name=='FBM'){
beta_x_lower=theta[1,1]
beta_x_upper=theta[2,1]
alpha_x_lower=2*theta[1,2]/(1+theta[1,2])
alpha_x_upper=2*theta[2,2]/(1+theta[2,2])
MSD_x_lower=rep(NA,length(d_input))
MSD_x_upper=rep(NA,length(d_input))
index_less_than_1=which(d_input<1)
if(length(index_less_than_1)>0){
MSD_x_lower[index_less_than_1]=beta_x_lower*d_input[index_less_than_1]^{alpha_x_upper}
MSD_x_upper[index_less_than_1]=beta_x_upper*d_input[index_less_than_1]^{alpha_x_lower}
MSD_x_lower[-index_less_than_1]=beta_x_lower*d_input[-index_less_than_1]^{alpha_x_lower}
MSD_x_upper[-index_less_than_1]=beta_x_upper*d_input[-index_less_than_1]^{alpha_x_upper}
}else{
MSD_x_lower=beta_x_lower*d_input^{alpha_x_lower}
MSD_x_upper=beta_x_upper*d_input^{alpha_x_upper}
}
beta_y_lower=theta[1,3]
beta_y_upper=theta[2,3]
alpha_y_lower=2*theta[1,4]/(1+theta[1,4])
alpha_y_upper=2*theta[2,4]/(1+theta[2,4])
MSD_y_lower=rep(NA,length(d_input))
MSD_y_upper=rep(NA,length(d_input))
index_less_than_1=which(d_input<1)
if(length(index_less_than_1)>0){
MSD_y_lower[index_less_than_1]=beta_y_lower*d_input[index_less_than_1]^{alpha_y_upper}
MSD_y_upper[index_less_than_1]=beta_y_upper*d_input[index_less_than_1]^{alpha_y_lower}
MSD_y_lower[-index_less_than_1]=beta_y_lower*d_input[-index_less_than_1]^{alpha_y_lower}
MSD_y_upper[-index_less_than_1]=beta_y_upper*d_input[-index_less_than_1]^{alpha_y_upper}
}else{
MSD_y_lower=beta_y_lower*d_input^{alpha_y_lower}
MSD_y_upper=beta_y_upper*d_input^{alpha_y_upper}
}
est_parameters_lower=c(beta_x_lower,alpha_x_lower,beta_y_lower,alpha_y_lower,sigma_2_0_est_lower);
est_parameters_upper=c(beta_x_upper,alpha_x_upper,beta_y_upper,alpha_y_upper,sigma_2_0_est_upper);
}else if(model_name=='OU'){ #seems okay
rho_x_lower=theta[1,1]/(1+theta[1,1])
rho_x_upper=theta[2,1]/(1+theta[2,1])
amplitude_x_lower=theta[1,2]
amplitude_x_upper=theta[2,2]
MSD_x_lower=(amplitude_x_lower*(1-rho_x_lower^d_input))
MSD_x_upper=(amplitude_x_upper*(1-rho_x_upper^d_input))
rho_y_lower=theta[1,3]/(1+theta[1,3])
rho_y_upper=theta[2,3]/(1+theta[2,3])
amplitude_y_lower=theta[1,4]
amplitude_y_upper=theta[2,4]
MSD_y_lower=(amplitude_y_lower*(1-rho_y_lower^d_input))
MSD_y_upper=(amplitude_y_upper*(1-rho_y_upper^d_input))
est_parameters_lower=c(rho_x_lower,amplitude_x_lower,rho_y_lower,amplitude_y_lower,sigma_2_0_est_lower);
est_parameters_upper=c(rho_x_upper,amplitude_x_upper,rho_y_upper,amplitude_y_upper,sigma_2_0_est_upper);
}else if(model_name=='OU+FBM'){
rho_x_lower=theta[1,1]/(1+theta[1,1])
rho_x_upper=theta[2,1]/(1+theta[2,1])
amplitude_x_lower=theta[1,2]
amplitude_x_upper=theta[2,2]
beta_x_lower=theta[1,3]
beta_x_upper=theta[2,3]
alpha_x_lower=2*theta[1,4]/(1+theta[1,4])
alpha_x_upper=2*theta[2,4]/(1+theta[2,4])
####change of uq, need to test
MSD_x_lower=rep(NA,length(d_input))
MSD_x_upper=rep(NA,length(d_input))
index_less_than_1=which(d_input<1)
if(length(index_less_than_1)>0){
MSD_x_lower[index_less_than_1]=beta_x_lower*d_input[index_less_than_1]^{alpha_x_upper}
MSD_x_upper[index_less_than_1]=beta_x_upper*d_input[index_less_than_1]^{alpha_x_lower}
MSD_x_lower[-index_less_than_1]=beta_x_lower*d_input[-index_less_than_1]^{alpha_x_lower}
MSD_x_upper[-index_less_than_1]=beta_x_upper*d_input[-index_less_than_1]^{alpha_x_upper}
}else{
MSD_x_lower=beta_x_lower*d_input^{alpha_x_lower}
MSD_x_upper=beta_x_upper*d_input^{alpha_x_upper}
}
MSD_x_lower=MSD_x_lower+(amplitude_x_lower*(1-rho_x_lower^d_input))
MSD_x_upper=MSD_x_upper+(amplitude_x_upper*(1-rho_x_upper^d_input))
rho_y_lower=theta[1,5]/(1+theta[1,5])
rho_y_upper=theta[2,5]/(1+theta[2,5])
amplitude_y_lower=theta[1,6]
amplitude_y_upper=theta[2,6]
beta_y_lower=theta[1,7]
beta_y_upper=theta[2,7]
alpha_y_lower=2*theta[1,8]/(1+theta[1,8])
alpha_y_upper=2*theta[2,8]/(1+theta[2,8])
####change of uq, need to test
MSD_y_lower=rep(NA,length(d_input))
MSD_y_upper=rep(NA,length(d_input))
index_less_than_1=which(d_input<1)
if(length(index_less_than_1)>0){
MSD_y_lower[index_less_than_1]=beta_y_lower*d_input[index_less_than_1]^{alpha_y_upper}
MSD_y_upper[index_less_than_1]=beta_y_upper*d_input[index_less_than_1]^{alpha_y_lower}
MSD_y_lower[-index_less_than_1]=beta_y_lower*d_input[-index_less_than_1]^{alpha_y_lower}
MSD_y_upper[-index_less_than_1]=beta_y_upper*d_input[-index_less_than_1]^{alpha_y_upper}
}else{
MSD_y_lower=beta_y_lower*d_input^{alpha_y_lower}
MSD_y_upper=beta_y_upper*d_input^{alpha_y_upper}
}
MSD_y_lower=MSD_y_lower+(amplitude_y_lower*(1-rho_y_lower^d_input))
MSD_y_upper=MSD_y_upper+(amplitude_y_upper*(1-rho_y_upper^d_input))
est_parameters_lower=c(rho_x_lower,amplitude_x_lower,beta_x_lower,alpha_x_lower,
rho_y_lower,amplitude_y_lower,beta_y_lower,alpha_y_lower,sigma_2_0_est_lower);
est_parameters_upper=c(rho_x_upper,amplitude_x_upper,beta_x_upper,alpha_x_upper,
rho_y_upper,amplitude_y_upper,beta_y_upper,alpha_y_upper,sigma_2_0_est_upper);
}else if(model_name=='user_defined'){
theta_x_lower=theta[1,]
theta_x_upper=theta[2,]
theta_y_lower=theta[3,]
theta_y_upper=theta[4,]
MSD_x_lower=msd_fn(theta_x_lower,d_input)
MSD_x_upper=msd_fn(theta_x_upper,d_input)
MSD_y_lower=msd_fn(theta_y_lower,d_input)
MSD_y_upper=msd_fn(theta_y_upper,d_input)
est_parameters_lower=c(theta_x_lower,theta_y_lower,sigma_2_0_est_lower)
est_parameters_upper=c(theta_x_upper,theta_y_upper,sigma_2_0_est_upper)
}else if(model_name=='VFBM'){
a_x_lower=theta[1,1]
a_x_upper=theta[2,1]
b_x_lower=theta[1,2]
b_x_upper=theta[2,2]
c_x_lower=theta[1,3]/(1+theta[1,3])
c_x_upper=theta[2,3]/(1+theta[2,3])
d_x_lower=theta[1,4]/(1+theta[1,4])
d_x_upper=theta[2,4]/(1+theta[2,4])
MSD_x_lower=rep(NA,length(d_input))
MSD_x_upper=rep(NA,length(d_input))
index_less_than_1=which(d_input<1)
#MSD_x_lower=a_x_lower*d_input^((c_x_lower*d_input)/(1+b_x_lower*d_input)+d_x_lower)
#MSD_x_upper=a_x_upper*d_input^((c_x_upper*d_input)/(1+b_x_upper*d_input)+d_x_upper)
if(length(index_less_than_1)>0){
MSD_x_lower[index_less_than_1]=a_x_lower*d_input[index_less_than_1]^{(c_x_upper*d_input[index_less_than_1])/(1+b_x_upper*d_input[index_less_than_1])+d_x_upper}
MSD_x_upper[index_less_than_1]=a_x_upper*d_input[index_less_than_1]^{(c_x_lower*d_input[index_less_than_1])/(1+b_x_lower*d_input[index_less_than_1])+d_x_lower}
MSD_x_lower[-index_less_than_1]=a_x_lower*d_input[-index_less_than_1]^{(c_x_lower*d_input[-index_less_than_1])/(1+b_x_lower*d_input[-index_less_than_1])+d_x_lower}
MSD_x_upper[-index_less_than_1]=a_x_upper*d_input[-index_less_than_1]^{(c_x_upper*d_input[-index_less_than_1])/(1+b_x_upper*d_input[-index_less_than_1])+d_x_upper}
}else{
MSD_x_lower=a_x_lower*d_input^((c_x_lower*d_input)/(1+b_x_lower*d_input)+d_x_lower)
MSD_x_upper=a_x_upper*d_input^((c_x_upper*d_input)/(1+b_x_upper*d_input)+d_x_upper)
}
a_y_lower=theta[1,5]
a_y_upper=theta[2,5]
b_y_lower=theta[1,6]
b_y_upper=theta[2,6]
c_y_lower=theta[1,7]/(1+theta[1,7])
c_y_upper=theta[2,7]/(1+theta[2,7])
d_y_lower=theta[1,8]/(1+theta[1,8])
d_y_upper=theta[2,8]/(1+theta[2,8])
MSD_y_lower=rep(NA,length(d_input))
MSD_y_upper=rep(NA,length(d_input))
index_less_than_1=which(d_input<1)
#MSD_y_lower=a_y_lower*d_input^((c_y_lower*d_input)/(1+b_y_lower*d_input)+d_y_lower)
#MSD_y_upper=a_y_upper*d_input^((c_y_upper*d_input)/(1+b_y_upper*d_input)+d_y_upper)
if(length(index_less_than_1)>0){
MSD_y_lower[index_less_than_1]=a_y_lower*d_input[index_less_than_1]^{(c_y_upper*d_input[index_less_than_1])/(1+b_y_upper*d_input[index_less_than_1])+d_y_upper}
MSD_y_upper[index_less_than_1]=a_y_upper*d_input[index_less_than_1]^{(c_y_lower*d_input[index_less_than_1])/(1+b_y_lower*d_input[index_less_than_1])+d_y_lower}
MSD_y_lower[-index_less_than_1]=a_y_lower*d_input[-index_less_than_1]^{(c_y_lower*d_input[-index_less_than_1])/(1+b_y_lower*d_input[-index_less_than_1])+d_y_lower}
MSD_y_upper[-index_less_than_1]=a_y_upper*d_input[-index_less_than_1]^{(c_y_upper*d_input[-index_less_than_1])/(1+b_y_upper*d_input[-index_less_than_1])+d_y_upper}
}else{
MSD_y_lower=a_y_lower*d_input^((c_y_lower*d_input)/(1+b_y_lower*d_input)+d_y_lower)
MSD_y_upper=a_y_upper*d_input^((c_y_upper*d_input)/(1+b_y_upper*d_input)+d_y_upper)
}
est_parameters_lower=c(a_x_lower,b_x_lower,c_x_lower,d_x_lower,a_y_lower,
b_y_lower, c_y_lower,d_y_lower, sigma_2_0_est_lower);
est_parameters_upper=c(a_x_upper,b_x_upper,c_x_upper,d_x_upper,a_y_upper,
b_y_upper, c_y_upper,d_y_upper, sigma_2_0_est_upper);
}
ans_list=list()
ans_list$est_parameters_lower=est_parameters_lower
ans_list$est_parameters_upper=est_parameters_upper
ans_list$MSD_x_lower=MSD_x_lower
ans_list$MSD_x_upper=MSD_x_upper
ans_list$MSD_y_lower=MSD_y_lower
ans_list$MSD_y_upper=MSD_y_upper
return(ans_list)
}
#' Log likelihood of the model
#' @description
#' This function computes the natural logarithm of the likelihood of the
#' latent factor model for a selected range of wave vectors. See 'References'.
#'
#' @param param a vector of natural logarithm of parameters
#' @param I_q_cur Fourier transformed intensity profile
#' @param B_cur current value of B. This parameter is determined by the noise
#' in the system. See 'References'.
#' @param index_q selected index of wave number
#' @param I_o_q_2_ori absolute square of the Fourier transformed intensity
#' profile, ensemble averaged over time
#' @param d_input sequence of lag times
#' @param q_ori_ring_loc_unique_index index for wave vectors that give unique frequency
#' @param sz frame size of the intensity profile
#' @param len_t number of time steps
#' @param q wave vector in unit of um^-1
#' @param model_name model for constructing MSD, options from ('BM','OU',
#' 'FBM','OU+FBM', 'user_defined')
#' @param msd_fn user defined mean squared displacement structure (MSD), a
#' function of \code{param} parameters and \code{d_input} lag times
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{param} and \code{d_input}
#'
#' @return The numerical value of the natural logarithm of the likelihood.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @keywords internal
log_lik <- function(param,I_q_cur,B_cur,index_q,I_o_q_2_ori,d_input,
q_ori_ring_loc_unique_index,sz,len_t,q,model_name,
msd_fn=NA,msd_grad_fn=NA){
p=length(param)-1
theta=exp(param[-(p+1)]) ##first p parameters are parameters in ISF
  if(is.na(B_cur)){ ##this fixes the dimension
sigma_2_0_hat=exp(param[p+1]) ##noise
B_cur=2*sigma_2_0_hat
}
A_cur = abs(2*(I_o_q_2_ori - B_cur/2))
##the model is defined by MSD
MSD_list = get_MSD_with_grad(theta,d_input,model_name, msd_fn,msd_grad_fn)
MSD = MSD_list$msd
log_lik_sum = 0
NTz <- SuperGauss::NormalToeplitz$new(len_t)
eta=B_cur/4 ##nugget
for(i_q_selected in index_q){
output_re=Re(I_q_cur[q_ori_ring_loc_unique_index[[i_q_selected]],])/(sqrt(sz[1]*sz[2]))
output_im=Im(I_q_cur[q_ori_ring_loc_unique_index[[i_q_selected]],])/(sqrt(sz[1]*sz[2]))
q_selected=q[i_q_selected]
#beta_q = (D*q[i_q_selected]^2)
sigma_2=A_cur[i_q_selected]/4
acf = sigma_2*exp(-q_selected^2*MSD/4) ##assume 2d
acf[1] = acf[1]+eta
acf=as.numeric(acf)
log_lik_sum=log_lik_sum+sum(NTz$logdens(z = t(output_re), acf = acf))+sum(NTz$logdens(z = t(output_im), acf = acf))
}
log_lik_sum=log_lik_sum-0.5*sum(lengths(q_ori_ring_loc_unique_index))*log(2*pi) ##add 2pi
if(is.nan(log_lik_sum)){
#log_lik_sum=-10^15
    log_lik_sum=-10^50 ##make it very small in case of dealing with small values
}
return(log_lik_sum)
}
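## Illustrative sketch (wrapped in if(FALSE) so it never runs at load time):
## how log_lik builds the lag-time autocovariance for a single wave number,
## assuming a hypothetical 2d Brownian-motion MSD. All numeric values below
## are made up for illustration only.
if(FALSE){
  D_hypo <- 0.5                           ## hypothetical diffusion coefficient (um^2/s)
  q_selected <- 1.2                       ## hypothetical wave number (um^-1)
  A_q <- 8; B_cur <- 0.4                  ## hypothetical amplitude and noise
  d_input <- 0:9                          ## lag times
  MSD <- 4*D_hypo*d_input                 ## 2d Brownian motion: MSD(t) = 4*D*t
  sigma_2 <- A_q/4
  eta <- B_cur/4                          ## nugget added at lag 0
  acf <- sigma_2*exp(-q_selected^2*MSD/4)
  acf[1] <- acf[1]+eta
  ## acf is then passed to SuperGauss::NormalToeplitz$logdens() for both the
  ## real and imaginary parts of the Fourier-transformed intensity
}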
#' Log likelihood for anisotropic processes
#' @description
#' This function computes the natural logarithm of the likelihood of the
#' latent factor model for a selected range of wave vectors for anisotropic
#' processes. See 'References'.
#'
#' @param param a vector of natural logarithm of parameters
#' @param I_q_cur Fourier transformed intensity profile
#' @param B_cur current value of B. This parameter is determined by the noise
#' in the system. See 'References'.
#' @param index_q selected index of wave number
#' @param I_o_q_2_ori absolute square of Fourier transformed intensity profile,
#' ensemble over time
#' @param d_input sequence of lag times
#' @param q_ori_ring_loc_unique_index index for wave vectors that give unique frequency
#' @param sz frame size of the intensity profile
#' @param len_t number of time steps
#' @param q1 wave vector in unit of um^-1 in x direction
#' @param q2 wave vector in unit of um^-1 in y direction
#' @param q1_unique_index index for wave vectors that give unique frequency in x direction
#' @param q2_unique_index index for wave vectors that give unique frequency in y direction
#' @param model_name model for constructing MSD, options from ('BM','OU',
#' 'FBM','OU+FBM', 'user_defined')
#' @param msd_fn user defined mean squared displacement structure (MSD), a
#' function of \code{param} parameters and \code{d_input} lag times
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{param} and \code{d_input}
#'
#' @return The numerical value of the natural logarithm of the likelihood.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @keywords internal
anisotropic_log_lik <- function(param,I_q_cur,B_cur,index_q,I_o_q_2_ori,d_input,
q_ori_ring_loc_unique_index,sz,len_t,q1,q2,q1_unique_index,
q2_unique_index,model_name,msd_fn=NA,msd_grad_fn=NA){
p=(length(param)-1)/2
theta_x=exp(param[1:p]) ##first p parameters are parameters in ISF
theta_y=exp(param[(p+1):(2*p)])
  if(is.na(B_cur)){ ##this fixes the dimension
sigma_2_0_hat=exp(param[2*p+1]) ##noise
B_cur=2*sigma_2_0_hat
}
A_cur=abs(2*(I_o_q_2_ori - B_cur/2))
##the model is defined by MSD
MSD_list_x = get_MSD_with_grad(theta_x,d_input,model_name, msd_fn,msd_grad_fn)
MSD_x = MSD_list_x$msd
MSD_list_y = get_MSD_with_grad(theta_y,d_input,model_name, msd_fn,msd_grad_fn)
MSD_y = MSD_list_y$msd
log_lik_sum = 0
NTz <- SuperGauss::NormalToeplitz$new(len_t)
eta=B_cur/4 ##nugget
q1_zero_included=c(0,q1)
q2_zero_included=c(0,q2)
for(i_q_selected in index_q){
for(i_q_ori in 1:length(q_ori_ring_loc_unique_index[[i_q_selected]])){
output_re=Re(I_q_cur[q_ori_ring_loc_unique_index[[i_q_selected]][i_q_ori],])/(sqrt(sz[1]*sz[2]))
output_im=Im(I_q_cur[q_ori_ring_loc_unique_index[[i_q_selected]][i_q_ori],])/(sqrt(sz[1]*sz[2]))
#q_selected=q[i_q_selected]
q1_unique_index_selected=q1_unique_index[[i_q_selected]][i_q_ori]+1
q2_unique_index_selected=q2_unique_index[[i_q_selected]][i_q_ori]+1
sigma_2=A_cur[i_q_selected]/4
#acf = sigma_2*exp(-q_selected^2*MSD/4) ##assume 2d
acf = sigma_2*exp(-(q1_zero_included[q1_unique_index_selected]^2*MSD_x+ q2_zero_included[q2_unique_index_selected]^2*MSD_y)/(2) )
acf[1] = acf[1]+eta
acf=as.numeric(acf)
log_lik_sum=log_lik_sum+sum(NTz$logdens(z = as.numeric(output_re), acf = acf))+sum(NTz$logdens(z = as.numeric(output_im), acf = acf))
}
}
log_lik_sum=log_lik_sum-0.5*sum(length(q1_unique_index)+length(q2_unique_index))*log(2*pi) ##add 2pi
if(is.nan(log_lik_sum)){
#log_lik_sum=-10^15
    log_lik_sum=-10^50 ##make it very small in case of dealing with small values
}
return(log_lik_sum)
}
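## Illustrative sketch (never executed): the anisotropic counterpart of the
## autocovariance, combining direction-specific MSD curves; compare with the
## isotropic form sigma_2*exp(-q^2*MSD/4). All values are hypothetical.
if(FALSE){
  q1_sel <- 1.0; q2_sel <- 0.6            ## hypothetical wave vector components
  d_input <- 0:9
  MSD_x <- 0.8*d_input                    ## hypothetical MSD in x direction
  MSD_y <- 0.3*d_input                    ## hypothetical MSD in y direction
  sigma_2 <- 2; eta <- 0.1
  acf <- sigma_2*exp(-(q1_sel^2*MSD_x+q2_sel^2*MSD_y)/2)
  acf[1] <- acf[1]+eta                    ## nugget at lag 0, as in the loop above
}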
#' Gradient of log likelihood
#' @description
#' This function computes the gradient of the natural logarithm of the likelihood
#' for a selected range of wave vectors. See 'References'.
#'
#' @param param a vector of natural logarithm of parameters
#' @param I_q_cur Fourier transformed intensity profile
#' @param B_cur current value of B. This parameter is determined by the noise
#' in the system. See 'References'.
#' @param index_q selected index of wave number
#' @param I_o_q_2_ori absolute square of Fourier transformed intensity profile,
#' ensemble over time
#' @param d_input sequence of lag times
#' @param q_ori_ring_loc_unique_index index for wave vectors that give unique frequency
#' @param sz frame size of the intensity profile
#' @param len_t number of time steps
#' @param q wave vector in unit of um^-1
#' @param model_name stochastic process for constructing MSD, options from ('BM',
#' 'OU','FBM','OU+FBM', 'user_defined')
#' @param msd_fn user defined mean squared displacement structure (MSD), a
#' function of \code{param} parameters and \code{d_input} lag times
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{param} and \code{d_input}
#'
#' @return The numerical value of the gradient of the natural logarithm of the likelihood.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @keywords internal
log_lik_grad<-function(param,I_q_cur,B_cur,index_q,I_o_q_2_ori,d_input,
q_ori_ring_loc_unique_index,sz,len_t,q,model_name,
msd_fn=NA,msd_grad_fn=NA){
p=length(param)-1
theta=exp(param[-(p+1)]) ##first p parameters are parameters in ISF
  if(is.na(B_cur)){ ##this fixes the dimension
sigma_2_0_hat=exp(param[p+1]) ##noise
B_cur=2*sigma_2_0_hat
}
#A_cur = 2*(I_o_q_2_ori - B_cur/2)
A_cur = abs(2*(I_o_q_2_ori - B_cur/2))
##the model is defined by MSD
MSD_list = get_MSD_with_grad(theta,d_input,model_name, msd_fn,msd_grad_fn)
MSD = MSD_list$msd
MSD_grad = MSD_list$msd_grad
grad_trans = get_grad_trans(theta,d_input,model_name)
eta=B_cur/4 ##nugget
grad=rep(0,p+1)
quad_terms=rep(0,p+1)
trace_terms=rep(0,p+1)
for(i_q_selected in index_q){
output_re=Re(I_q_cur[q_ori_ring_loc_unique_index[[i_q_selected]],])/(sqrt(sz[1]*sz[2]))
output_im=Im(I_q_cur[q_ori_ring_loc_unique_index[[i_q_selected]],])/(sqrt(sz[1]*sz[2]))
n_q=length(q_ori_ring_loc_unique_index[[i_q_selected]])
q_selected=q[i_q_selected]
sigma_2=A_cur[i_q_selected]/4
acf0 = sigma_2*exp(-q_selected^2*MSD/4) ##assume 2d
acf = acf0
    acf[1] = acf[1]+eta ##for the gradient this nugget probably need not be added
NTz=SuperGauss::Toeplitz$new(len_t, acf)
#tilde_Sigma_inv_output_re=solve(NTz,t(output_re))
#tilde_Sigma_inv_output_im=solve(NTz,t(output_im))
tilde_Sigma_inv_output_re=NTz$solve(t(output_re))
tilde_Sigma_inv_output_im=NTz$solve(t(output_im))
acf_grad=matrix(NA,len_t,p+1)
for(i_p in 1:p){
acf_grad[,i_p]=-acf0*q_selected^2/4*MSD_grad[,i_p]*grad_trans[i_p]
NTz_grad=SuperGauss::Toeplitz$new(len_t, as.numeric(acf_grad[,i_p]))
Q_tilde_Sigma_inv_output_re=NTz_grad$prod(tilde_Sigma_inv_output_re)
Q_tilde_Sigma_inv_output_im=NTz_grad$prod(tilde_Sigma_inv_output_im)
quad_terms[i_p]=quad_terms[i_p]+sum(tilde_Sigma_inv_output_re*Q_tilde_Sigma_inv_output_re) ##fast way to compute quadratic terms in grad
quad_terms[i_p]=quad_terms[i_p]+sum(tilde_Sigma_inv_output_im*Q_tilde_Sigma_inv_output_im) ##fast way to compute quadratic terms in grad
trace_terms[i_p]= trace_terms[i_p]+n_q*NTz$trace_grad( as.numeric(acf_grad[,i_p]))
#n_q*sum(diag(solve(NTz, toeplitz(as.numeric(acf_grad[,i_p])))))
}
#acf_grad[,p+1]=(-acf0*0.5/sigma_2)
acf_grad[,p+1]=(-acf0*0.5/sigma_2)*sign(I_o_q_2_ori[i_q_selected] - B_cur/2) ##add the sign as we use absolute value
acf_grad[1,p+1]= acf_grad[1,p+1]+0.5
    acf_grad[,p+1]= acf_grad[,p+1]*sigma_2_0_hat ##sigma_2_0_hat is the Jacobian of the log transformation
NTz_grad=SuperGauss::Toeplitz$new(len_t, as.numeric( acf_grad[,p+1]))
Q_tilde_Sigma_inv_output_re=NTz_grad$prod(tilde_Sigma_inv_output_re)
Q_tilde_Sigma_inv_output_im=NTz_grad$prod(tilde_Sigma_inv_output_im)
quad_terms[p+1]=quad_terms[p+1]+sum(tilde_Sigma_inv_output_re*Q_tilde_Sigma_inv_output_re) ##fast way to compute quadratic terms in grad
quad_terms[p+1]=quad_terms[p+1]+sum(tilde_Sigma_inv_output_im*Q_tilde_Sigma_inv_output_im) ##fast way to compute quadratic terms in grad
trace_terms[p+1]= trace_terms[p+1]+n_q*NTz$trace_grad( as.numeric(acf_grad[,p+1]))
}
  grad=-trace_terms+0.5*quad_terms ##note that there are two trace terms corresponding to the real and imaginary parts
return(grad)
}
#' Gradient of log likelihood for anisotropic processes
#' @description
#' This function computes the gradient of the natural logarithm of the likelihood
#' for a selected range of wave vectors for anisotropic processes. See 'References'.
#'
#' @param param a vector of natural logarithm of parameters
#' @param I_q_cur Fourier transformed intensity profile
#' @param B_cur current value of B. This parameter is determined by the noise
#' in the system. See 'References'.
#' @param index_q selected index of wave number
#' @param I_o_q_2_ori absolute square of Fourier transformed intensity profile,
#' ensemble over time
#' @param d_input sequence of lag times
#' @param q_ori_ring_loc_unique_index index for wave vector that give unique frequency
#' @param sz frame size of the intensity profile
#' @param len_t number of time steps
#' @param q1 wave vector in unit of um^-1 in x direction
#' @param q2 wave vector in unit of um^-1 in y direction
#' @param q1_unique_index index for wave vectors that give unique frequency in x direction
#' @param q2_unique_index index for wave vectors that give unique frequency in y direction
#' @param model_name stochastic process for constructing MSD, options from ('BM',
#' 'OU','FBM','OU+FBM', 'user_defined')
#' @param msd_fn user defined mean squared displacement structure (MSD), a
#' function of \code{param} parameters and \code{d_input} lag times
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{param} and \code{d_input}
#'
#' @return The numerical value of the gradient of the natural logarithm of the likelihood.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @keywords internal
anisotropic_log_lik_grad<-function(param,I_q_cur,B_cur,index_q,I_o_q_2_ori,d_input,
q_ori_ring_loc_unique_index,sz,len_t,model_name,q1,q2,
q1_unique_index,q2_unique_index,msd_fn=NA,msd_grad_fn=NA){
p=(length(param)-1)/2
theta_x=exp(param[1:p])
theta_y=exp(param[(p+1):(2*p)])
  if(is.na(B_cur)){ ##this fixes the dimension
sigma_2_0_hat=exp(param[2*p+1]) ##noise
B_cur=2*sigma_2_0_hat
}
#A_cur = 2*(I_o_q_2_ori - B_cur/2)
A_cur = abs(2*(I_o_q_2_ori - B_cur/2))
##the model is defined by MSD
MSD_list_x = get_MSD_with_grad(theta_x,d_input,model_name, msd_fn,msd_grad_fn)
MSD_x = MSD_list_x$msd
MSD_grad_x = MSD_list_x$msd_grad
MSD_list_y = get_MSD_with_grad(theta_y,d_input,model_name, msd_fn,msd_grad_fn)
MSD_y = MSD_list_y$msd
MSD_grad_y = MSD_list_y$msd_grad
grad_trans_x = get_grad_trans(theta_x,d_input,model_name)
grad_trans_y = get_grad_trans(theta_y,d_input,model_name)
eta=B_cur/4 ##nugget
grad=rep(0,2*p+1)
quad_terms=rep(0,2*p+1)
trace_terms=rep(0,2*p+1)
q1_zero_included=c(0,q1)
q2_zero_included=c(0,q2)
for(i_q_selected in index_q){
for(i_q_ori in 1:length(q_ori_ring_loc_unique_index[[i_q_selected]])){
output_re=Re(I_q_cur[q_ori_ring_loc_unique_index[[i_q_selected]][i_q_ori],])/(sqrt(sz[1]*sz[2]))
output_im=Im(I_q_cur[q_ori_ring_loc_unique_index[[i_q_selected]][i_q_ori],])/(sqrt(sz[1]*sz[2]))
#n_q=length(q_ori_ring_loc_unique_index[[i_q_selected]])
n_q=1
q1_unique_index_selected=q1_unique_index[[i_q_selected]][i_q_ori]+1
q2_unique_index_selected=q2_unique_index[[i_q_selected]][i_q_ori]+1
sigma_2=A_cur[i_q_selected]/4
#acf0 = sigma_2*exp(-q_selected^2*MSD/4) ##assume 2d
acf0 = sigma_2*exp(-(q1_zero_included[q1_unique_index_selected]^2*MSD_x+
q2_zero_included[q2_unique_index_selected]^2*MSD_y)/(2))
acf = acf0
      acf[1] = acf[1]+eta ##for the gradient this nugget probably need not be added
NTz=SuperGauss::Toeplitz$new(len_t, acf)
#tilde_Sigma_inv_output_re=solve(NTz,t(output_re))
#tilde_Sigma_inv_output_im=solve(NTz,t(output_im))
tilde_Sigma_inv_output_re=NTz$solve(as.numeric(output_re))
tilde_Sigma_inv_output_im=NTz$solve(as.numeric(output_im))
acf_grad=matrix(NA,len_t,2*p+1)
for(i_p in 1:p){
acf_grad[,i_p]=-acf0/2*(q1_zero_included[q1_unique_index_selected]^2*MSD_grad_x[,i_p]*grad_trans_x[i_p])
NTz_grad=SuperGauss::Toeplitz$new(len_t, as.numeric(acf_grad[,i_p]))
Q_tilde_Sigma_inv_output_re=NTz_grad$prod(tilde_Sigma_inv_output_re)
Q_tilde_Sigma_inv_output_im=NTz_grad$prod(tilde_Sigma_inv_output_im)
quad_terms[i_p]=quad_terms[i_p]+sum(tilde_Sigma_inv_output_re*Q_tilde_Sigma_inv_output_re) ##fast way to compute quadratic terms in grad
quad_terms[i_p]=quad_terms[i_p]+sum(tilde_Sigma_inv_output_im*Q_tilde_Sigma_inv_output_im) ##fast way to compute quadratic terms in grad
trace_terms[i_p]= trace_terms[i_p]+n_q*NTz$trace_grad(as.numeric(acf_grad[,i_p]))
#n_q*sum(diag(solve(NTz, toeplitz(as.numeric(acf_grad[,i_p])))))
}
for(i_p in (p+1):(2*p)){
acf_grad[,i_p]=-acf0/2*(q2_zero_included[q2_unique_index_selected]^2*MSD_grad_y[,(i_p-p)]*grad_trans_y[(i_p-p)])
NTz_grad=SuperGauss::Toeplitz$new(len_t, as.numeric(acf_grad[,i_p]))
Q_tilde_Sigma_inv_output_re=NTz_grad$prod(tilde_Sigma_inv_output_re)
Q_tilde_Sigma_inv_output_im=NTz_grad$prod(tilde_Sigma_inv_output_im)
quad_terms[i_p]=quad_terms[i_p]+sum(tilde_Sigma_inv_output_re*Q_tilde_Sigma_inv_output_re) ##fast way to compute quadratic terms in grad
quad_terms[i_p]=quad_terms[i_p]+sum(tilde_Sigma_inv_output_im*Q_tilde_Sigma_inv_output_im) ##fast way to compute quadratic terms in grad
trace_terms[i_p]= trace_terms[i_p]+n_q*NTz$trace_grad(as.numeric(acf_grad[,i_p]))
#n_q*sum(diag(solve(NTz, toeplitz(as.numeric(acf_grad[,i_p])))))
}
#acf_grad[,p+1]=(-acf0*0.5/sigma_2)
acf_grad[,2*p+1]=(-acf0*0.5/sigma_2)*sign(I_o_q_2_ori[i_q_selected] - B_cur/2) ##add the sign as we use absolute value
acf_grad[1,2*p+1]= acf_grad[1,2*p+1]+0.5
      acf_grad[,2*p+1]= acf_grad[,2*p+1]*sigma_2_0_hat ##sigma_2_0_hat is the Jacobian of the log transformation
NTz_grad=SuperGauss::Toeplitz$new(len_t, as.numeric( acf_grad[,2*p+1]))
Q_tilde_Sigma_inv_output_re=NTz_grad$prod(tilde_Sigma_inv_output_re)
Q_tilde_Sigma_inv_output_im=NTz_grad$prod(tilde_Sigma_inv_output_im)
quad_terms[2*p+1]=quad_terms[2*p+1]+sum(tilde_Sigma_inv_output_re*Q_tilde_Sigma_inv_output_re) ##fast way to compute quadratic terms in grad
quad_terms[2*p+1]=quad_terms[2*p+1]+sum(tilde_Sigma_inv_output_im*Q_tilde_Sigma_inv_output_im) ##fast way to compute quadratic terms in grad
trace_terms[2*p+1]= trace_terms[2*p+1]+n_q*NTz$trace_grad( as.numeric(acf_grad[,2*p+1]))
}
}
  grad=-trace_terms+0.5*quad_terms ##note that there are two trace terms corresponding to the real and imaginary parts
return(grad)
}
#' Construct initial values for the parameters to be optimized over
#' @description
#' Construct initial values for the parameters to be optimized over in \code{AIUQ}
#' method of \code{SAM} class.
#'
#' @param model_name fitted model, options from ('BM','OU','FBM','OU+FBM',
#' 'user_defined'), with Brownian motion as the default model. See 'Details'.
#' @param sigma_0_2_ini initial value for background noise, default is \code{NA}
#' @param num_param number of parameters need to be estimated in the model,
#' must be a non-NA value for the \code{'user_defined'} model.
#'
#' @return A matrix with one row of initial values for the parameters to be
#' optimized over in \code{AIUQ} method of \code{SAM} class.
#' @export
#' @details
#' If \code{model_name} equals 'user_defined', then \code{num_param} needs to be
#' provided to determine the length of the initial values vector.
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#' @examples
#' library(AIUQ)
#' get_initial_param(model_name = "BM")
#' @keywords internal
get_initial_param <- function(model_name,sigma_0_2_ini=NA, num_param=NA){
if(model_name=='BM'){
param_initial=matrix(NA,1,2) #include B
param_initial[1,]=log(c(1,sigma_0_2_ini))#method='L-BFGS-B'
}else if(model_name=='FBM'){
param_initial=matrix(NA,1,3) #include B
param_initial[1,]=log(c(rep(0.5,2),sigma_0_2_ini))#method='L-BFGS-B'
}else if(model_name=='OU'){
param_initial=matrix(NA,1,3) #include B
param_initial[1,]=log(c(rep(1,2),sigma_0_2_ini))#method='L-BFGS-B'
}else if(model_name=='OU+FBM'){
param_initial=matrix(NA,1,5) #include B
param_initial[1,]=log(c(rep(0.5,4),sigma_0_2_ini))#method='L-BFGS-B',
}else if(model_name=='VFBM'){
param_initial=matrix(NA,1,5) #include B
    param_initial[1,]=log(c(rep(0.1,4),sigma_0_2_ini))#method='L-BFGS-B'
}else if(model_name == 'user_defined'){
param_initial = matrix(NA,1,(num_param+1)) #include B
param_initial[1,] = log(c(rep(0.5,num_param),sigma_0_2_ini))
}else if(model_name=='VFBM_anisotropic'){
param_initial=matrix(NA,1,9) #include B
param_initial[1,]=log(c(rep(0.1,8),sigma_0_2_ini))#method='L-BFGS-B'
}else if(model_name=='BM_anisotropic'){
param_initial=matrix(NA,1,3) #include B
param_initial[1,]=log(c(1,1,sigma_0_2_ini))#method='L-BFGS-B',
}else if(model_name=='FBM_anisotropic'){
param_initial=matrix(NA,1,5) #include B
param_initial[1,]=log(c(0.5,2,0.5,2,sigma_0_2_ini))#method='L-BFGS-B',
}else if(model_name=='OU_anisotropic'){
param_initial=matrix(NA,1,5) #include B
param_initial[1,]=log(c(rep(1,4),sigma_0_2_ini))#method='L-BFGS-B',
}else if(model_name=='OU+BM_anisotropic'){
param_initial=matrix(NA,1,7) #include B
param_initial[1,]=log(c(rep(0.5,6),sigma_0_2_ini))#method='L-BFGS-B',
}else if(model_name=='OU+FBM_anisotropic'){
param_initial=matrix(NA,1,9) #include B
param_initial[1,]=log(c(rep(c(1,1,0.5,2),2),sigma_0_2_ini))#method='L-BFGS-B',
}else if(model_name == 'user_defined_anisotropic'){
param_initial = matrix(NA,1,2*num_param+1) #include B
param_initial[1,] = log(c(rep(0.5,2*num_param),sigma_0_2_ini))
}
return(param_initial)
}
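## Illustrative usage (never executed): initial values are returned on the log
## scale, so exp() recovers the starting parameter values themselves. Values
## here are hypothetical.
if(FALSE){
  ini <- get_initial_param(model_name="FBM", sigma_0_2_ini=0.5)
  dim(ini)          ## 1 x 3: two FBM parameters plus the noise term
  exp(ini[1,])      ## back-transformed starting values: 0.5 0.5 0.5
}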
#' Construct parameter transformation for optimization using exact gradient
#' @description
#' Construct parameter transformation for parameters to be optimized over in \code{AIUQ}
#' method of \code{SAM} class. See 'References'.
#'
#' @param theta parameters to be optimized over
#' @param d_input sequence of lag times
#' @param model_name model name for the fitted model, options from ('BM','OU',
#' 'FBM','OU+FBM','user_defined')
#'
#' @return A vector of transformed parameters to be optimized over in \code{AIUQ}
#' method of \code{SAM} class.
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#' @keywords internal
get_grad_trans<-function(theta,d_input,model_name){
if(model_name=='BM'||model_name=='BM_anisotropic'){
grad_trans = theta[1]
}else if(model_name=='FBM'||model_name=='FBM_anisotropic'){
beta = theta[1]
alpha = 2*theta[2]/(1+theta[2])
grad_trans = c(beta, alpha*(1-alpha/2))
}else if(model_name=='OU'||model_name=='OU_anisotropic'){
rho = theta[1]/(1+theta[1])
amplitude = theta[2]
grad_trans = c(rho*(1-rho), amplitude)
}else if(model_name=='OU+FBM'||model_name=='OU+FBM_anisotropic'){
rho = theta[1]/(1+theta[1])
amplitude = theta[2]
beta = theta[3]
alpha = 2*theta[4]/(1+theta[4])
grad_trans = c(rho*(1-rho), amplitude, beta, alpha*(1-alpha/2))
}else if(model_name=='user_defined'||model_name=='user_defined_anisotropic'){
grad_trans = theta
}else if(model_name=='VFBM'||model_name=='VFBM_anisotropic'){
a = theta[1]
b = theta[2]
c = theta[3]/(1+theta[3])
d = theta[4]/(1+theta[4])
#grad_trans = c(a,b,c*(1-c),d*(1-d))
grad_trans = c(a,b,c/(1-c),d/(1-d))
}
return(grad_trans)
}
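## Numerical check sketch (never executed): for FBM, alpha = 2*theta/(1+theta)
## is optimized over log(theta), and by the chain rule
## d alpha / d log(theta) = alpha*(1-alpha/2), the factor returned above.
## theta2 and h are arbitrary illustrative values.
if(FALSE){
  theta2 <- 1.5
  alpha <- 2*theta2/(1+theta2)
  analytic <- alpha*(1-alpha/2)
  h <- 1e-6                               ## finite-difference step on the log scale
  numeric_fd <- (2*exp(log(theta2)+h)/(1+exp(log(theta2)+h))-alpha)/h
  ## analytic and numeric_fd agree to about 1e-6
}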
#' Compute 95% confidence interval
#' @description
#' This function constructs the lower and upper bounds for the 95% confidence interval
#' of estimated parameters for the given model, including parameters contained
#' in the intermediate scattering function and background noise. See 'References'.
#'
#' @param param_est a vector of natural logarithm of estimated parameters from
#' maximizing the log likelihood. This vector will serve as initial values in the
#' \code{optim} function.
#' @param I_q_cur Fourier transformed intensity profile
#' @param B_cur current value of B. This parameter is determined by the noise
#' in the system. See 'References'.
#' @param index_q selected index of wave number
#' @param I_o_q_2_ori absolute square of Fourier transformed intensity profile,
#' ensemble over time
#' @param q_ori_ring_loc_unique_index index for wave vectors that give unique frequency
#' @param sz frame size of the intensity profile
#' @param len_t number of time steps
#' @param q wave vector in unit of um^-1
#' @param d_input sequence of lag times
#' @param model_name model name for the fitted model, options from ('BM','OU',
#' 'FBM','OU+FBM','user_defined')
#' @param estimation_method method for constructing 95% confidence interval,
#' default is asymptotic
#' @param M number of particles
#' @param num_iteration_max the maximum number of iterations in \code{optim}
#' @param lower_bound lower bound for the "L-BFGS-B" method in \code{optim}
#' @param msd_fn user defined mean squared displacement structure (MSD), a
#' function of \code{param} parameters and \code{d_input} lag times
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{param} and \code{d_input}
#'
#' @return A matrix of lower and upper bound for natural logarithm of
#' parameters in the fitted model using \code{AIUQ} method in \code{SAM} class
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @keywords internal
param_uncertainty<-function(param_est,I_q_cur,B_cur=NA,index_q,
I_o_q_2_ori,q_ori_ring_loc_unique_index,
sz,len_t,d_input,q,model_name,
estimation_method='asymptotic',M,
num_iteration_max,lower_bound,msd_fn=NA,
msd_grad_fn=NA){
p = length(param_est)-1
q_lower=q-min(q)
if(model_name == "user_defined"){
if(is.function(msd_grad_fn)==T){
gr = log_lik_grad
}else{gr = NULL}
}else{gr = log_lik_grad}
#param_ini=param_est
m_param_lower = try(optim(param_est,log_lik,gr = gr,I_q_cur=I_q_cur,B_cur=NA,
index_q=index_q,I_o_q_2_ori=I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max),
sz=sz,len_t=len_t,d_input=d_input,q=q_lower,
model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn),TRUE)
  if(inherits(m_param_lower,"try-error")){
compute_twice=T
m_param_lower = try(optim(param_est+c(rep(0.5,p),0),log_lik,gr=gr,
I_q_cur=I_q_cur,B_cur=NA,
index_q=index_q,I_o_q_2_ori=I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max),
sz=sz,len_t=len_t,d_input=d_input,q=q_lower,
model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn),TRUE)
}
q_upper=q+min(q)
m_param_upper = try(optim(param_est,log_lik,gr=gr,
I_q_cur=I_q_cur,B_cur=NA,
index_q=index_q,I_o_q_2_ori=I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max),
sz=sz,len_t=len_t,d_input=d_input,q=q_upper,
model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn),TRUE)
  if(inherits(m_param_upper,"try-error")){
compute_twice = T
m_param_upper = try(optim(param_est+c(rep(0.5,p),0),log_lik,gr=gr,
I_q_cur=I_q_cur,B_cur=NA,
index_q=index_q,I_o_q_2_ori=I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max),
sz=sz,len_t=len_t,d_input=d_input,q=q_upper,
model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn),TRUE)
}
param_range = matrix(NA,2,p+1)
for(i in 1:(p+1) ){
param_range[1,i] = min(m_param_lower$par[i],m_param_upper$par[i])
param_range[2,i] = max(m_param_lower$par[i],m_param_upper$par[i])
}
half_length_param_range_fft = (param_range[2,]-param_range[1,])/2
if(estimation_method=='asymptotic'){
theta = exp(param_est[-(p+1)]) ##first p parameters are parameters in ISF
    if(is.na(B_cur)){ ##this fixes the dimension
sigma_2_0_hat = exp(param_est[p+1]) ##noise
B_cur = 2*sigma_2_0_hat
}
A_cur = abs(2*(I_o_q_2_ori - B_cur/2))
eta = B_cur/4 ##nugget
MSD_list = get_MSD_with_grad(theta,d_input,model_name,msd_fn,msd_grad_fn)
MSD = MSD_list$msd
MSD_grad = MSD_list$msd_grad
grad_trans = get_grad_trans(theta,d_input,model_name)
Hessian_list = as.list(index_q)
Hessian_sum = 0
for(i_q_selected in index_q){
q_selected=q[i_q_selected]
sigma_2=A_cur[i_q_selected]/4
acf0=sigma_2*exp(-q_selected^2*MSD/4) ##assume 2d
acf=acf0
      acf[1] = acf[1]+eta ##for the gradient this nugget probably need not be added
acf=as.numeric(acf)
Tz <- SuperGauss::Toeplitz$new(len_t,acf=acf)
      Hessian = matrix(NA,p+1,p+1) ##last entry is the noise parameter
acf_grad = matrix(NA,len_t,p+1)
for(i_p in 1:p){
acf_grad[,i_p] = -acf0*q_selected^2/4* MSD_grad[,i_p]*grad_trans[i_p]
}
#acf_grad[,p+1]=(-acf0*0.5/sigma_2)
acf_grad[,p+1]=(-acf0*0.5/sigma_2)*sign(I_o_q_2_ori[i_q_selected] - B_cur/2)
acf_grad[1,p+1]= acf_grad[1,p+1]+0.5
acf_grad[,p+1]= acf_grad[,p+1]*sigma_2_0_hat
for(i_p in 1:(p+1) ){
for(j_p in 1:(p+1) ){
Hessian[i_p,j_p]=Tz$trace_hess(as.numeric(acf_grad[,i_p]), as.numeric(acf_grad[,j_p]) )
}
}
#Hessian_list[[i_q_selected]]=Hessian
Hessian_sum=Hessian_sum+Hessian*length(q_ori_ring_loc_unique_index[[i_q_selected]])
      ###a little more conservative choice is to treat them as perfectly correlated within a ring
#Hessian_sum=Hessian_sum+Hessian
}
Hessian_sum = Hessian_sum*M/sum(lengths(q_ori_ring_loc_unique_index[index_q]))
if(kappa(Hessian_sum)>1e10){
epsilon <- 1e-6
Hessian_sum <- Hessian_sum + epsilon * diag(ncol(Hessian_sum))
}
sd_theta_B = sqrt(diag(solve(Hessian_sum)))
#sd_theta_B=sqrt(diag(Hessian_inv_sum/sum(lengths(q_ori_ring_loc_unique_index))^2 ))
    param_range[1,]=param_range[1,]-sd_theta_B*qnorm(0.975) ##widen the bounds to account for not estimating A and B exactly and other misspecification
param_range[2,]=param_range[2,]+sd_theta_B*qnorm(0.975)
half_length_param_range_est = sd_theta_B*qnorm(0.975)
}
return(param_range)
}
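## Illustrative sketch (never executed): the asymptotic step at the end of
## param_uncertainty widens a range on the log-parameter scale by qnorm(0.975)
## times the asymptotic standard deviation. All numbers are hypothetical.
if(FALSE){
  param_est_hypo <- log(c(1.1, 0.4))      ## hypothetical log-scale estimates
  sd_theta_B <- c(0.05, 0.08)             ## hypothetical sd from the inverse Hessian
  lower <- param_est_hypo - qnorm(0.975)*sd_theta_B
  upper <- param_est_hypo + qnorm(0.975)*sd_theta_B
  exp(rbind(lower, upper))                ## bounds back-transformed to the original scale
}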
#' Compute 95% confidence interval for anisotropic processes
#' @description
#' This function constructs the lower and upper bounds for the 95% confidence interval
#' of estimated parameters for the given anisotropic model, including parameters
#' contained in the intermediate scattering function and background noise.
#' See 'References'.
#'
#' @param param_est a vector of natural logarithm of estimated parameters from
#' maximizing the log likelihood. This vector will serve as initial values in the
#' \code{optim} function.
#' @param I_q_cur Fourier transformed intensity profile
#' @param B_cur current value of B. This parameter is determined by the noise
#' in the system. See 'References'.
#' @param index_q selected index of wave number
#' @param I_o_q_2_ori absolute square of Fourier transformed intensity profile,
#' ensemble over time
#' @param q_ori_ring_loc_unique_index index for wave vectors that give unique frequency
#' @param sz frame size of the intensity profile
#' @param len_t number of time steps
#' @param q1 wave vector in unit of um^-1 in x direction
#' @param q2 wave vector in unit of um^-1 in y direction
#' @param q1_unique_index index for wave vectors that give unique frequency in x direction
#' @param q2_unique_index index for wave vectors that give unique frequency in y direction
#' @param d_input sequence of lag times
#' @param model_name model name for the fitted model, options from ('BM','OU',
#' 'FBM','OU+FBM','user_defined')
#' @param estimation_method method for constructing 95% confidence interval,
#' default is asymptotic
#' @param M number of particles
#' @param num_iteration_max the maximum number of iterations in \code{optim}
#' @param lower_bound lower bound for the "L-BFGS-B" method in \code{optim}
#' @param msd_fn user defined MSD structure, a function of \code{param} and
#' \code{d_input}
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{param} and \code{d_input}
#'
#' @return A matrix of lower and upper bounds for the natural logarithm of
#' parameters in the fitted model using \code{AIUQ} method in \code{SAM} class
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @keywords internal
param_uncertainty_anisotropic<-function(param_est,I_q_cur,B_cur=NA,index_q,
I_o_q_2_ori,q_ori_ring_loc_unique_index,
sz,len_t,d_input,q1,q2,q1_unique_index,q2_unique_index,
model_name,estimation_method='asymptotic',M,
num_iteration_max,lower_bound,msd_fn=NA,
msd_grad_fn=NA){
p=(length(param_est)-1)/2
q1_lower=q1-min(q1)
q2_lower=q2-min(q2)
# if(model_name == "user_defined"){
# if(is.function(msd_grad_fn)==T){
# gr = log_lik_grad
# }else{gr = NULL}
# }else{gr = log_lik_grad}
#param_ini=param_est
m_param_lower = try(optim(param_est,anisotropic_log_lik,#gr = gr,
I_q_cur=I_q_cur,B_cur=NA,
index_q=index_q,I_o_q_2_ori=I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max),
sz=sz,len_t=len_t,d_input=d_input,q1=q1_lower,
q2=q2_lower, q1_unique_index=q1_unique_index,
q2_unique_index=q2_unique_index,
model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn),TRUE)
if(class(m_param_lower)[1]=="try-error"){
compute_twice=T
m_param_lower = try(optim(param_est,anisotropic_log_lik,#gr = gr,
I_q_cur=I_q_cur,B_cur=NA,
index_q=index_q,I_o_q_2_ori=I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max),
sz=sz,len_t=len_t,d_input=d_input,q1=q1_lower,
q2=q2_lower, q1_unique_index=q1_unique_index,
q2_unique_index=q2_unique_index,
model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn),TRUE)
}
q1_upper=q1+min(q1)
q2_upper=q2+min(q2)
m_param_upper = try(optim(param_est,anisotropic_log_lik, #gr=gr,
I_q_cur=I_q_cur,B_cur=NA,
index_q=index_q,I_o_q_2_ori=I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max),
sz=sz,len_t=len_t,d_input=d_input,q1=q1_upper,
q2=q2_upper, q1_unique_index=q1_unique_index,
q2_unique_index=q2_unique_index,
model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn),TRUE)
if(class(m_param_upper)[1]=="try-error"){
compute_twice = T
m_param_upper = try(optim(param_est+c(rep(0.5,p),0),anisotropic_log_lik,#gr=gr,
I_q_cur=I_q_cur,B_cur=NA,
index_q=index_q,I_o_q_2_ori=I_o_q_2_ori,
q_ori_ring_loc_unique_index=q_ori_ring_loc_unique_index,
method='L-BFGS-B',lower=lower_bound,
control = list(fnscale=-1,maxit=num_iteration_max),
sz=sz,len_t=len_t,d_input=d_input,q1=q1_upper,
q2=q2_upper, q1_unique_index=q1_unique_index,
q2_unique_index=q2_unique_index,
model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn),TRUE)
}
param_range = matrix(NA,2,2*p+1)
for(i in 1:(2*p+1) ){
param_range[1,i] = min(m_param_lower$par[i],m_param_upper$par[i])
param_range[2,i] = max(m_param_lower$par[i],m_param_upper$par[i])
}
half_length_param_range_fft = (param_range[2,]-param_range[1,])/2
if(estimation_method=='asymptotic'){
theta_x = exp(param_est[1:p]) ##first p parameters are x-direction ISF parameters
theta_y = exp(param_est[(p+1):(2*p)]) ##next p are y-direction ISF parameters
#theta = exp(param_est[-(p+1)])
sigma_2_0_hat = exp(param_est[2*p+1]) ##noise
if(is.na(B_cur)){ ##estimate B from the fitted noise parameter when B is not given
B_cur = 2*sigma_2_0_hat
}
A_cur = abs(2*(I_o_q_2_ori - B_cur/2))
eta = B_cur/4 ##nugget
MSD_list_x = get_MSD_with_grad(theta_x,d_input,model_name,msd_fn,msd_grad_fn)
MSD_x = MSD_list_x$msd
MSD_grad_x = MSD_list_x$msd_grad
MSD_list_y = get_MSD_with_grad(theta_y,d_input,model_name,msd_fn,msd_grad_fn)
MSD_y = MSD_list_y$msd
MSD_grad_y = MSD_list_y$msd_grad
grad_trans_x = get_grad_trans(theta_x,d_input,model_name)
grad_trans_y = get_grad_trans(theta_y,d_input,model_name)
Hessian_list = as.list(index_q)
Hessian_sum = 0
q1_zero_included=c(0,q1)
q2_zero_included=c(0,q2)
for(i_q_selected in index_q){
for(i_q_ori in 1:length(q_ori_ring_loc_unique_index[[i_q_selected]]) ){
#q_selected=q[i_q_selected]
q1_unique_index_selected=q1_unique_index[[i_q_selected]][i_q_ori]+1
q2_unique_index_selected=q2_unique_index[[i_q_selected]][i_q_ori]+1
sigma_2=A_cur[i_q_selected]/4
#acf0=sigma_2*exp(-q_selected^2*MSD/4) ##assume 2d
acf0 = sigma_2*exp(-(q1_zero_included[q1_unique_index_selected]^2*MSD_x+
q2_zero_included[q2_unique_index_selected]^2*MSD_y)/(2) ) ##assume 2d
acf=acf0
acf[1] = acf[1]+eta ##nugget is added only to the lag-0 term, not to the gradient
acf=as.numeric(acf)
Tz <- SuperGauss::Toeplitz$new(len_t,acf=acf)
Hessian = matrix(NA,2*p+1,2*p+1) ##last parameter is the noise
acf_grad = matrix(NA,len_t,2*p+1)
for(i_p in 1:p){
#acf_grad[,i_p] = -acf0*q_selected^2/4* MSD_grad[,i_p]*grad_trans[i_p]
acf_grad[,i_p]=-acf0*q1_zero_included[q1_unique_index_selected]^2/2*
MSD_grad_x[,i_p]*grad_trans_x[i_p]
acf_grad[,p+i_p]=-acf0*q2_zero_included[q2_unique_index_selected]^2/2*
MSD_grad_y[,i_p]*grad_trans_y[i_p]
}
#acf_grad[,p+1]=(-acf0*0.5/sigma_2)
acf_grad[,2*p+1]=(-acf0*0.5/sigma_2)*sign(I_o_q_2_ori[i_q_selected] - B_cur/2)
acf_grad[1,2*p+1]= acf_grad[1,2*p+1]+0.5
acf_grad[,2*p+1]= acf_grad[,2*p+1]*sigma_2_0_hat
for(i_p in 1:(2*p+1) ){
for(j_p in 1:(2*p+1) ){
Hessian[i_p,j_p]=Tz$trace_hess(as.numeric(acf_grad[,i_p]), as.numeric(acf_grad[,j_p]) )
}
}
#Hessian_list[[i_q_selected]]=Hessian
Hessian_sum=Hessian_sum+Hessian #length(q_ori_ring_loc_unique_index[[i_q_selected]])
###a more conservative option is to treat wave vectors within a ring as perfectly correlated
#Hessian_sum=Hessian_sum+Hessian
}
}
Hessian_sum = Hessian_sum*M/sum(lengths(q_ori_ring_loc_unique_index[index_q]))
if(kappa(Hessian_sum)>1e10){
epsilon <- 1e-6
Hessian_sum <- Hessian_sum + epsilon * diag(ncol(Hessian_sum))
}
sd_theta_B = sqrt(diag(solve(Hessian_sum)))
#sd_theta_B=sqrt(diag(Hessian_inv_sum/sum(lengths(q_ori_ring_loc_unique_index))^2 ))
param_range[1,]=param_range[1,]-sd_theta_B*qnorm(0.975) ##widen by the asymptotic 95% half-width to allow for error in estimating A and B and other misspecification
param_range[2,]=param_range[2,]+sd_theta_B*qnorm(0.975)
half_length_param_range_est = sd_theta_B*qnorm(0.975)
}
return(param_range)
}
#' Simulate 2D particle trajectory following Brownian Motion
#'
#' @description
#' Simulate 2D particle trajectory following Brownian Motion (BM) for \code{M}
#' particles.
#'
#' @param pos0 initial position for \code{M} particles, matrix with dimension M by 2
#' @param M number of particles
#' @param len_t number of time steps
#' @param sigma distance moved per time step
#'
#' @return Position matrix with dimension \code{M}\eqn{\times}{%\times}\code{len_t}
#' by 2 for particle trajectory. The first \code{M} rows contain the initial
#' positions \code{pos0}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' M = 10
#' len_t = 50
#' sigma = 0.5
#' pos0 = matrix(100/8+0.75*100*runif(M*2),nrow=M,ncol=2)
#' pos = bm_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma)
#' @keywords internal
bm_particle_intensity <- function(pos0,M,len_t,sigma){
pos = matrix(NA,M*len_t,2)
pos[1:M,] = pos0
for(i in 1:(len_t-1)){
pos[i*M+(1:M),] = pos[(i-1)*M+(1:M),]+matrix(rnorm(2*M,sd=sigma),M,2)
}
return(pos)
}
#' Simulate 2D particle trajectory following anisotropic Brownian Motion
#'
#' @description
#' Simulate 2D particle trajectory following anisotropic Brownian Motion (BM) for
#' \code{M} particles, with different step sizes in x, y-directions.
#'
#' @param pos0 initial position for \code{M} particles, matrix with dimension M by 2
#' @param M number of particles
#' @param len_t number of time steps
#' @param sigma distance moved per time step in x,y-directions, a vector of length 2
#'
#' @return Position matrix with dimension \code{M}\eqn{\times}{%\times}\code{len_t}
#' by 2 for particle trajectory. The first \code{M} rows contain the initial
#' positions \code{pos0}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' M = 10
#' len_t = 50
#' sigma = c(0.5,0.1)
#' pos0 = matrix(100/8+0.75*100*runif(M*2),nrow=M,ncol=2)
#' pos = anisotropic_bm_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma)
#' @keywords internal
anisotropic_bm_particle_intensity <- function(pos0,M,len_t,sigma){
pos = matrix(NA,M*len_t,2)
pos[1:M,] = pos0
for(i in 1:(len_t-1)){
pos[i*M+(1:M),1] = pos[(i-1)*M+(1:M),1]+matrix(rnorm(M,sd=sigma[1]),M,1)
pos[i*M+(1:M),2] = pos[(i-1)*M+(1:M),2]+matrix(rnorm(M,sd=sigma[2]),M,1)
}
return(pos)
}
#' Simulate 2D particle trajectory following an OU process
#' @description
#' Simulate 2D particle trajectory following an Ornstein–Uhlenbeck (OU) process
#' for \code{M} particles.
#'
#' @param pos0 initial position for \code{M} particles, matrix with dimension M by 2
#' @param M number of particles
#' @param len_t number of time steps
#' @param sigma distance moved per time step
#' @param rho correlation between a step and the previous step,
#' value between 0 and 1
#'
#' @return Position matrix with dimension \code{M}\eqn{\times}{%\times}\code{len_t}
#' by 2 for particle trajectory. The first \code{M} rows contain the initial
#' positions \code{pos0}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' M = 10
#' len_t = 50
#' sigma = 2
#' rho = 0.95
#' pos0 = matrix(100/8+0.75*100*runif(M*2),nrow=M,ncol=2)
#' pos = ou_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma, rho=rho)
#' @keywords internal
ou_particle_intensity <- function(pos0,M,len_t,sigma,rho){
pos = matrix(NA,M*len_t,2)
pos[1:M,] = pos0+matrix(rnorm(2*M, sd=sigma),M,2)
sd_innovation_OU = sqrt(sigma^2*(1-rho^2))
for(i in 1:(len_t-1)){
pos[i*M+(1:M),1] = rho*(pos[(i-1)*M+(1:M),1]-pos0[,1])+pos0[,1]+sd_innovation_OU*rnorm(M)
pos[i*M+(1:M),2] = rho*(pos[(i-1)*M+(1:M),2]-pos0[,2])+pos0[,2]+sd_innovation_OU*rnorm(M)
}
return(pos)
}
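## Illustrative note (an assumption, not from the original source): with
## innovation sd sigma*sqrt(1-rho^2), each coordinate above is a stationary
## AR(1) around pos0 with marginal sd sigma, so the 2D MSD at lag dt is
## 4*sigma^2*(1-rho^dt), saturating at 4*sigma^2 for large lags. A rough check,
## kept commented out so nothing runs at package load time:
# pos0 = matrix(runif(5000*2, 0, 100), 5000, 2)
# pos = ou_particle_intensity(pos0, M = 5000, len_t = 50, sigma = 2, rho = 0.95)
# numerical_msd(pos, M = 5000, len_t = 50)  # should approach 4*sigma^2 = 16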
#' Simulate 2D particle trajectory following an anisotropic OU process
#' @description
#' Simulate 2D particle trajectory following an anisotropic Ornstein–Uhlenbeck
#' (OU) process for \code{M} particles, with different step sizes in x, y-directions.
#'
#' @param pos0 initial position for \code{M} particles, matrix with dimension M by 2
#' @param M number of particles
#' @param len_t number of time steps
#' @param sigma distance moved per time step in x, y-directions, a vector of length 2
#' @param rho correlation between a step and the previous step in x, y-directions,
#' a vector of length 2 with values between 0 and 1
#'
#' @return Position matrix with dimension \code{M}\eqn{\times}{%\times}\code{len_t}
#' by 2 for particle trajectory. The first \code{M} rows contain the initial
#' positions \code{pos0}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' M = 10
#' len_t = 50
#' sigma = c(2,2.5)
#' rho = c(0.95,0.9)
#' pos0 = matrix(100/8+0.75*100*runif(M*2),nrow=M,ncol=2)
#' pos = anisotropic_ou_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma, rho=rho)
#' @keywords internal
anisotropic_ou_particle_intensity <- function(pos0,M,len_t,sigma,rho){
pos = matrix(NA,M*len_t,2)
pos[1:M,1] = pos0[,1]+matrix(rnorm(M, sd=sigma[1]),M,1)
pos[1:M,2] = pos0[,2]+matrix(rnorm(M, sd=sigma[2]),M,1)
sd_innovation_OU_1 = sqrt(sigma[1]^2*(1-rho[1]^2))
sd_innovation_OU_2 = sqrt(sigma[2]^2*(1-rho[2]^2))
for(i in 1:(len_t-1)){
pos[i*M+(1:M),1] = rho[1]*(pos[(i-1)*M+(1:M),1]-pos0[,1])+pos0[,1]+sd_innovation_OU_1*matrix(rnorm(M),M,1)
pos[i*M+(1:M),2] = rho[2]*(pos[(i-1)*M+(1:M),2]-pos0[,2])+pos0[,2]+sd_innovation_OU_2*matrix(rnorm(M),M,1)
}
return(pos)
}
#' Construct covariance matrix of fBM increments
#' @description
#' Construct the covariance matrix of the increments of fractional Brownian
#' motion.
#'
#' @param len_t number of time steps
#' @param H Hurst parameter, value between 0 and 1
#'
#' @return Covariance matrix of fBM increments with dimension \code{len_t-1} by
#' \code{len_t-1}.
#' @export
#' @author \packageAuthor{AIUQ}
#'
#' @examples
#' library(AIUQ)
#' len_t = 50
#' H = 0.3
#' m = corr_fBM(len_t=len_t,H=H)
#' @keywords internal
# corr_fBM <- function(len_t,H){
# corr = matrix(NA, len_t, len_t)
# for(i in 1:(len_t)){
# for(j in 1:(len_t)){
# corr[i,j] = 0.5*(i^(2*H)+j^(2*H)-abs(i-j)^(2*H))
# }
# }
# return(corr)
# }
corr_fBM <- function(len_t,H){
cov = matrix(NA, len_t-1, len_t-1)
for(i in 0:(len_t-2)){
cov[i+1,] = 0.5*(abs(i-(0:(len_t-2))+1)^(2*H))+0.5*(abs(1-i+(0:(len_t-2)))^(2*H))-abs(i-(0:(len_t-2)))^(2*H)
}
return(cov)
}
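## Sanity check (illustrative): for H = 0.5 the fBM increments are independent
## with unit variance, so corr_fBM reduces to the identity matrix, since
## 0.5*|k+1|^(2H) + 0.5*|k-1|^(2H) - |k|^(2H) is 1 for k = 0 and 0 otherwise.
## Kept commented out so nothing runs at package load time:
# all.equal(corr_fBM(len_t = 5, H = 0.5), diag(4))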
#' Simulate 2D particle trajectory following fBM
#' @description
#' Simulate 2D particle trajectory following fractional Brownian Motion (fBM) for
#' \code{M} particles.
#'
#' @param pos0 initial position for \code{M} particles, matrix with dimension M by 2
#' @param M number of particles
#' @param len_t number of time steps
#' @param sigma distance moved per time step
#' @param H Hurst parameter, value between 0 and 1
#'
#' @return Position matrix with dimension \code{M}\eqn{\times}{%\times}\code{len_t}
#' by 2 for particle trajectory. The first \code{M} rows contain the initial
#' positions \code{pos0}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' M = 10
#' len_t = 50
#' sigma = 2
#' H = 0.3
#' pos0 = matrix(100/8+0.75*100*runif(M*2),nrow=M,ncol=2)
#' pos = fbm_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma,H=H)
#' @keywords internal
# fbm_particle_intensity <- function(pos0,M,len_t,sigma,H){
# pos = matrix(NA,M*len_t,2)
# pos[,1] = rep(pos0[,1],len_t)
# pos[,2] = rep(pos0[,2],len_t)
# fBM_corr = corr_fBM(len_t,H)
# L = t(chol(sigma^2*fBM_corr))
# pos[,1] = pos[,1]+as.numeric(t(L%*%matrix(rnorm((len_t)*M),nrow=len_t,ncol=M)))
# pos[,2] = pos[,2]+as.numeric(t(L%*%matrix(rnorm((len_t)*M),nrow=len_t,ncol=M)))
# return(pos)
# }
fbm_particle_intensity <- function(pos0,M,len_t,sigma,H){
pos = matrix(NA,M*len_t,2)
pos[,1] = rep(pos0[,1],len_t)
pos[,2] = rep(pos0[,2],len_t)
fBM_corr = corr_fBM(len_t,H)
L = t(chol(sigma^2*fBM_corr))
increments1 = L%*%matrix(rnorm((len_t-1)*M),nrow=len_t-1,ncol=M)
increments2 = L%*%matrix(rnorm((len_t-1)*M),nrow=len_t-1,ncol=M)
pos[(M+1):(M*len_t),1] = pos[(M+1):(M*len_t),1]+as.numeric(t(apply(increments1,2,cumsum)))
pos[(M+1):(M*len_t),2] = pos[(M+1):(M*len_t),2]+as.numeric(t(apply(increments2,2,cumsum)))
return(pos)
}
#' Simulate 2D particle trajectory following anisotropic fBM
#' @description
#' Simulate 2D particle trajectory following anisotropic fractional Brownian
#' Motion (fBM) for \code{M} particles, with different step sizes in x, y-directions.
#'
#' @param pos0 initial position for \code{M} particles, matrix with dimension M by 2
#' @param M number of particles
#' @param len_t number of time steps
#' @param sigma distance moved per time step in x, y-directions, a vector of length 2
#' @param H Hurst parameter in x, y-directions, a vector of length 2, value
#' between 0 and 1
#'
#' @return Position matrix with dimension \code{M}\eqn{\times}{%\times}\code{len_t}
#' by 2 for particle trajectory. The first \code{M} rows contain the initial
#' positions \code{pos0}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' M = 10
#' len_t = 50
#' sigma = c(2,1)
#' H = c(0.3,0.4)
#' pos0 = matrix(100/8+0.75*100*runif(M*2),nrow=M,ncol=2)
#' pos = anisotropic_fbm_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma,H=H)
#' @keywords internal
anisotropic_fbm_particle_intensity <- function(pos0,M,len_t,sigma,H){
pos = matrix(NA,M*len_t,2)
pos[,1] = rep(pos0[,1],len_t)
pos[,2] = rep(pos0[,2],len_t)
fBM_corr1 = corr_fBM(len_t,H[1])
L1 = t(chol(sigma[1]^2*fBM_corr1))
fBM_corr2 = corr_fBM(len_t,H[2])
L2 = t(chol(sigma[2]^2*fBM_corr2))
increments1 = L1%*%matrix(rnorm((len_t-1)*M),nrow=len_t-1,ncol=M)
increments2 = L2%*%matrix(rnorm((len_t-1)*M),nrow=len_t-1,ncol=M)
pos[(M+1):(M*len_t),1] = pos[(M+1):(M*len_t),1]+as.numeric(t(apply(increments1,2,cumsum)))
pos[(M+1):(M*len_t),2] = pos[(M+1):(M*len_t),2]+as.numeric(t(apply(increments2,2,cumsum)))
return(pos)
}
#' Simulate 2D particle trajectory following fBM plus OU
#' @description
#' Simulate 2D particle trajectory following fractional Brownian Motion (fBM)
#' plus an Ornstein–Uhlenbeck (OU) process for \code{M} particles.
#'
#' @param pos0 initial position for \code{M} particles, matrix with dimension M by 2
#' @param M number of particles
#' @param len_t number of time steps
#' @param sigma_fbm distance moved per time step in fractional Brownian Motion
#' @param sigma_ou distance moved per time step in Ornstein–Uhlenbeck process
#' @param H Hurst parameter of fractional Brownian Motion, value between 0 and 1
#' @param rho correlation between a step and the previous step in the OU process,
#' value between 0 and 1
#'
#' @return Position matrix with dimension \code{M}\eqn{\times}{%\times}\code{len_t}
#' by 2 for particle trajectory. The first \code{M} rows contain the initial
#' positions \code{pos0}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' M = 10
#' len_t = 50
#' sigma_fbm = 2
#' H = 0.3
#' sigma_ou = 2
#' rho = 0.95
#' pos0 = matrix(100/8+0.75*100*runif(M*2),nrow=M,ncol=2)
#'
#' pos = fbm_ou_particle_intensity(pos0=pos0, M=M, len_t=len_t,
#' sigma_fbm=sigma_fbm, sigma_ou=sigma_ou, H=H, rho=rho)
#' @keywords internal
# fbm_ou_particle_intensity <- function(pos0,M,len_t,sigma_fbm,sigma_ou,H,rho){
# pos_ou = matrix(NA,M*len_t,2)
# pos_fbm = matrix(NA,M*len_t,2)
#
# pos0 = pos0 + matrix(rnorm(2*M, sd=sigma_ou),M,2)
# pos_ou[1:M,] = pos0
# sd_innovation_OU=sqrt(sigma_ou^2*(1-rho^2))
#
# for(i in 1:(len_t-1)){
# pos_ou[i*M+(1:M),] = rho*(pos_ou[(i-1)*M+(1:M),]-pos0)+pos0+
# sd_innovation_OU*matrix(rnorm(2*M),M,2)
# }
#
#
# fBM_corr = corr_fBM(len_t,H)
# L = t(chol(sigma_fbm^2*fBM_corr))
# pos_fbm[,1] = as.numeric(t(L%*%matrix(rnorm((len_t)*M),nrow=len_t,ncol=M)))
# pos_fbm[,2] = as.numeric(t(L%*%matrix(rnorm((len_t)*M),nrow=len_t,ncol=M)))
# pos = pos_ou+pos_fbm
#
# return(pos)
# }
fbm_ou_particle_intensity <- function(pos0,M,len_t,sigma_fbm,sigma_ou,H,rho){
pos1 = matrix(NA,M*len_t,2)
pos2 = matrix(NA,M*len_t,2)
pos = matrix(NA,M*len_t,2)
pos0 = pos0 + matrix(rnorm(2*M, sd=sigma_ou),M,2)
pos[1:M,] = pos0
pos1[1:M,] = pos0
sd_innovation_OU=sqrt(sigma_ou^2*(1-rho^2))
for(i in 1:(len_t-1)){
pos1[i*M+(1:M),] = rho*(pos1[(i-1)*M+(1:M),]-pos0)+pos0+sd_innovation_OU*matrix(rnorm(2*M),M,2)
}
pos2[,1] = rep(pos0[,1],len_t)
pos2[,2] = rep(pos0[,2],len_t)
fBM_corr = corr_fBM(len_t,H)
L = t(chol(sigma_fbm^2*fBM_corr))
increments1 = L%*%matrix(rnorm((len_t-1)*M),nrow=len_t-1,ncol=M)
increments2 = L%*%matrix(rnorm((len_t-1)*M),nrow=len_t-1,ncol=M)
pos2[(M+1):(M*len_t),1] = pos2[(M+1):(M*len_t),1]+as.numeric(t(apply(increments1,2,cumsum)))
pos2[(M+1):(M*len_t),2] = pos2[(M+1):(M*len_t),2]+as.numeric(t(apply(increments2,2,cumsum)))
pos = pos1+pos2 - cbind(rep(pos0[,1],len_t), rep(pos0[,2],len_t))
return(pos)
}
#' Simulate 2D particle trajectory following anisotropic fBM plus OU
#' @description
#' Simulate 2D particle trajectory following anisotropic fractional Brownian
#' Motion (fBM) plus an Ornstein–Uhlenbeck (OU) process for \code{M} particles,
#' with different step sizes in x, y-directions.
#'
#' @param pos0 initial position for \code{M} particles, matrix with dimension M by 2
#' @param M number of particles
#' @param len_t number of time steps
#' @param sigma_fbm distance moved per time step in fractional Brownian Motion
#' in x, y-directions, a vector of length 2
#' @param sigma_ou distance moved per time step in Ornstein–Uhlenbeck process
#' in x, y-directions, a vector of length 2
#' @param H Hurst parameter of fractional Brownian Motion in x, y-directions,
#' a vector of length 2 with values between 0 and 1
#' @param rho correlation between a step and the previous step in the OU process
#' in x, y-directions, a vector of length 2 with values between 0 and 1
#'
#' @return Position matrix with dimension \code{M}\eqn{\times}{%\times}\code{len_t}
#' by 2 for particle trajectory. The first \code{M} rows contain the initial
#' positions \code{pos0}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' M = 10
#' len_t = 50
#' sigma_fbm = c(2,1)
#' H = c(0.3,0.4)
#' sigma_ou = c(2,2.5)
#' rho = c(0.95,0.9)
#' pos0 = matrix(100/8+0.75*100*runif(M*2),nrow=M,ncol=2)
#'
#' pos = anisotropic_fbm_ou_particle_intensity(pos0=pos0, M=M, len_t=len_t,
#' sigma_fbm=sigma_fbm, sigma_ou=sigma_ou, H=H, rho=rho)
#' @keywords internal
anisotropic_fbm_ou_particle_intensity <- function(pos0,M,len_t,sigma_fbm,sigma_ou,H,rho){
pos1 = matrix(NA,M*len_t,2)
pos2 = matrix(NA,M*len_t,2)
pos = matrix(NA,M*len_t,2)
pos0[,1] = pos0[,1] + matrix(rnorm(M, sd=sigma_ou[1]),M,1)
pos0[,2] = pos0[,2] + matrix(rnorm(M, sd=sigma_ou[2]),M,1)
pos[1:M,] = pos0
pos1[1:M,] = pos0
sd_innovation_OU_1=sqrt(sigma_ou[1]^2*(1-rho[1]^2))
sd_innovation_OU_2=sqrt(sigma_ou[2]^2*(1-rho[2]^2))
for(i in 1:(len_t-1)){
pos1[i*M+(1:M),1] = rho[1]*(pos1[(i-1)*M+(1:M),1]-pos0[,1])+pos0[,1]+sd_innovation_OU_1*matrix(rnorm(M),M,1)
pos1[i*M+(1:M),2] = rho[2]*(pos1[(i-1)*M+(1:M),2]-pos0[,2])+pos0[,2]+sd_innovation_OU_2*matrix(rnorm(M),M,1)
}
pos2[,1] = rep(pos0[,1],len_t)
pos2[,2] = rep(pos0[,2],len_t)
fBM_corr1 = corr_fBM(len_t,H[1])
L1 = t(chol(sigma_fbm[1]^2*fBM_corr1))
fBM_corr2 = corr_fBM(len_t,H[2])
L2 = t(chol(sigma_fbm[2]^2*fBM_corr2))
increments1 = L1%*%matrix(rnorm((len_t-1)*M),nrow=len_t-1,ncol=M)
increments2 = L2%*%matrix(rnorm((len_t-1)*M),nrow=len_t-1,ncol=M)
pos2[(M+1):(M*len_t),1] = pos2[(M+1):(M*len_t),1]+as.numeric(t(apply(increments1,2,cumsum)))
pos2[(M+1):(M*len_t),2] = pos2[(M+1):(M*len_t),2]+as.numeric(t(apply(increments2,2,cumsum)))
pos = pos1+pos2 - cbind(rep(pos0[,1],len_t), rep(pos0[,2],len_t))
return(pos)
}
#' Construct intensity profile for a given particle trajectory
#' @description
#' Construct intensity profile with structure 'T_SS_mat' for a given particle
#' trajectory, background intensity profile, and user defined radius of particle.
#'
#' @param len_t number of time steps
#' @param M number of particles
#' @param I background intensity profile. See 'Details'.
#' @param pos position matrix for particle trajectory
#' @param Ic vector of maximum intensity of each particle
#' @param sz frame size of simulated square image
#' @param sigma_p standard deviation of the Gaussian intensity profile of a
#' particle; intensity is filled within a radius of 3\code{sigma_p}
#'
#' @return Intensity profile matrix with structure 'T_SS_mat' (matrix with
#' dimension \code{len_t} by \code{sz}\eqn{\times}{%\times}\code{sz}).
#' @details
#' Input \code{I} should have structure 'T_SS_mat', matrix with dimension
#' \code{len_t} by \code{sz}\eqn{\times}{%\times}\code{sz}.
#'
#' Input \code{pos} should be the position matrix with dimension
#' \code{M}\eqn{\times}{%\times}\code{len_t} by 2. See \code{\link{bm_particle_intensity}},
#' \code{\link{ou_particle_intensity}}, \code{\link{fbm_particle_intensity}},
#' \code{\link{fbm_ou_particle_intensity}}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#' @keywords internal
fill_intensity <- function(len_t, M, I, pos, Ic, sz, sigma_p){
for(i in 1:len_t){
for(j in 1:M){
xp = pos[j+M*(i-1),1]
yp = pos[j+M*(i-1),2]
x_range = floor(xp-3*sigma_p):ceiling(xp+3*sigma_p)
y_range = floor(yp-3*sigma_p):ceiling(yp+3*sigma_p)
x = rep(x_range,length(x_range))
y = rep(y_range,each=length(x_range))
dist_2 = (x-xp)^2+(y-yp)^2
binary_result = (dist_2<=((3*sigma_p)^2))
Ip = Ic[j]*exp(-dist_2 / (2*sigma_p^2))
x_fill = x[binary_result]
y_fill = y[binary_result]
index_fill = y_fill+sz[1]*(x_fill-1)
Ip_fill = Ip[binary_result]
legitimate_index = (index_fill>0) & (index_fill<(sz[1]*sz[2]))
if (any(legitimate_index)){ ## only fill pixels that fall inside the frame
I[i,index_fill[legitimate_index]] = I[i,index_fill[legitimate_index]]+Ip_fill[legitimate_index]
}
}
}
return(I)
}
#' Compute numerical MSD
#' @description
#' Compute numerical mean squared displacement (MSD) based on particle trajectory.
#'
#'
#' @param pos position matrix for particle trajectory. See 'Details'.
#' @param M number of particles
#' @param len_t number of time steps
#'
#' @return A vector of numerical MSD for given lag times.
#' @details
#' Input \code{pos} should be the position matrix with dimension
#' \code{M}\eqn{\times}{%\times}\code{len_t} by 2. See \code{\link{bm_particle_intensity}},
#' \code{\link{ou_particle_intensity}}, \code{\link{fbm_particle_intensity}},
#' \code{\link{fbm_ou_particle_intensity}}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' # Simulate particle trajectory for BM
#' M = 10
#' len_t = 50
#' sigma = 0.5
#' pos0 = matrix(100/8+0.75*100*runif(M*2),nrow=M,ncol=2)
#' pos = bm_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma)
#'
#' # Compute numerical MSD
#' (num_msd = numerical_msd(pos=pos, M=M, len_t = len_t))
#' @keywords internal
numerical_msd <- function(pos, M,len_t){
pos_msd = array(pos, dim=c(M, len_t, 2))
msd_i = matrix(NaN,nrow=M,ncol=len_t-1)
for(dt in 1:(len_t-1)){
ndt = len_t-dt
xdiff = pos_msd[,1:ndt,1]-pos_msd[,(1+dt):(ndt+dt),1]
ydiff = pos_msd[,1:ndt,2]-pos_msd[,(1+dt):(ndt+dt),2]
mean_square = xdiff^2+ydiff^2
if (length(dim(mean_square))>1){
msd_i[,dt] = apply(mean_square,1,function(x){mean(x,na.rm=T)})
}else{msd_i[,dt] = mean_square}
}
#result_list = list()
num_msd_mean = apply(msd_i,2,function(x){mean(x,na.rm=T)})
#result_list$num_msd_mean = num_msd_mean
#result_list$num_msd = msd_i
num_msd_mean = c(0,num_msd_mean)
return(num_msd_mean)
}
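## Illustrative check (an assumption, not from the original source): for 2D BM
## with per-axis step sd sigma, the expected MSD at lag dt is 2*sigma^2*dt, so
## the vector returned above should grow roughly linearly in the lag. Kept
## commented out so nothing runs at package load time:
# pos0 = matrix(runif(5000*2, 0, 100), 5000, 2)
# pos = bm_particle_intensity(pos0, M = 5000, len_t = 20, sigma = 1)
# numerical_msd(pos, M = 5000, len_t = 20)  # entry at lag dt is about 2*dt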
#' Compute anisotropic numerical MSD
#' @description
#' Compute numerical mean squared displacement (MSD) based on particle trajectory
#' for anisotropic processes in x,y-directions separately.
#'
#'
#' @param pos position matrix for particle trajectory. See 'Details'.
#' @param M number of particles
#' @param len_t number of time steps
#'
#' @return A matrix of numerical MSD for given lag times, dimension
#' \code{len_t} by 2, with columns corresponding to x,y-directions.
#' @details
#' Input \code{pos} should be the position matrix with dimension
#' \code{M}\eqn{\times}{%\times}\code{len_t} by 2. See \code{\link{bm_particle_intensity}},
#' \code{\link{ou_particle_intensity}}, \code{\link{fbm_particle_intensity}},
#' \code{\link{fbm_ou_particle_intensity}}.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' # Simulate particle trajectory for BM
#' M = 10
#' len_t = 50
#' sigma = c(0.5,0.1)
#' pos0 = matrix(100/8+0.75*100*runif(M*2),nrow=M,ncol=2)
#' pos = anisotropic_bm_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma)
#'
#' # Compute numerical MSD
#' (num_msd = anisotropic_numerical_msd(pos=pos, M=M, len_t=len_t))
#' @keywords internal
anisotropic_numerical_msd <- function(pos, M,len_t){
pos_msd = array(pos, dim=c(M, len_t, 2))
msd_x_i = matrix(NaN,nrow=M,ncol=len_t-1)
msd_y_i = matrix(NaN,nrow=M,ncol=len_t-1)
num_msd_mean = matrix(NaN, nrow=len_t,ncol=2)
num_msd_mean[1,] = c(0,0)
for(dt in 1:(len_t-1)){
ndt = len_t-dt
xdiff = pos_msd[,1:ndt,1]-pos_msd[,(1+dt):(ndt+dt),1]
ydiff = pos_msd[,1:ndt,2]-pos_msd[,(1+dt):(ndt+dt),2]
mean_square_x = xdiff^2
mean_square_y = ydiff^2
if (length(dim(mean_square_x))>1){
msd_x_i[,dt] = apply(mean_square_x,1,function(x){mean(x,na.rm=T)})
}else{msd_x_i[,dt] = mean_square_x}
if (length(dim(mean_square_y))>1){
msd_y_i[,dt] = apply(mean_square_y,1,function(x){mean(x,na.rm=T)})
}else{msd_y_i[,dt] = mean_square_y}
}
#result_list = list()
num_msd_mean[-1,1] = apply(msd_x_i,2,function(x){mean(x,na.rm=T)})
num_msd_mean[-1,2] = apply(msd_y_i,2,function(x){mean(x,na.rm=T)})
return(num_msd_mean)
}
#' Plot 2D particle trajectory
#' @description
#' Function to plot the particle trajectory after the \code{simulation} class
#' has been constructed.
#'
#' @param object an S4 object of class \code{simulation}
#' @param title main title of the plot. If \code{NA}, title is "model_name with
#' M particles" with \code{model_name} and \code{M} being field in \code{simulation}
#' class.
#'
#' @return 2D plot of particle trajectory for a given simulation from \code{simulation}
#' class.
#'
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' sim_bm = simulation(sz=100,len_t=100,sigma_bm=0.5)
#' show(sim_bm)
#' plot_traj(sim_bm)
plot_traj<- function(object, title=NA){
if(class(object)[1]=="simulation" || class(object)[1]=="aniso_simulation"){
if(is.na(title)){
if(class(object)[1]=="simulation"){title=paste(object@model_name,"with",object@M,"particles")
}else if(class(object)[1]=="aniso_simulation"){
title=paste("Anisotropic",object@model_name,"with",object@M,"particles")
}
}
# highlight start and end point?
traj1 = object@pos[seq(1,dim(object@pos)[1],by=object@M),]
# change plot as 0,0 to be the top left corner
#plot(traj1[,1],object@sz[1]-traj1[,2],ylim=c(0,object@sz[1]),
plot(traj1[,1],traj1[,2],ylim=c(0,object@sz[1]),
xlim=c(0,object@sz[2]),type="l",col=1,xlab="frame size",ylab="frame size",
main=title)
for (i in 2:object@M){
v = object@pos[seq(i,dim(object@pos)[1],by=object@M),]
lines(v[,1],v[,2],type="l", col=i)
}
}else{
stop("Please input a simulation or aniso_simulation class object. \n")
}
}
#' Show simulation object
#' @description
#' Function to print the \code{simulation} class object after the \code{simulation}
#' model has been constructed.
#'
#' @param object an S4 object of class \code{simulation}
#'
#' @return Show a list of important parameters in class \code{simulation}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#'
#' # Simulate simple diffusion for 100 images with 100 by 100 pixels
#' sim_bm = simulation(sz=100,len_t=100,sigma_bm=0.5)
#' show(sim_bm)
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
show.simulation <- function(object){
cat("Frame size: ",object@sz, "\n")
cat("Number of time steps: ",object@len_t, "\n")
cat("Number of particles: ",object@M, "\n")
cat("Stochastic process: ",object@model_name, "\n")
cat("Variance of background noise: ",object@sigma_2_0, "\n")
if(object@model_name == "BM"){
cat("sigma_bm: ",object@param, "\n")
}else if(object@model_name == "OU"){
cat("(rho, sigma_ou): ",object@param,"\n")
}else if(object@model_name == "FBM"){
cat("(sigma_fbm, Hurst parameter): ",object@param, "\n")
}else if(object@model_name == "OU+FBM"){
cat("(rho, sigma_ou, sigma_fbm, Hurst parameter): ",object@param, "\n")
}
}
#' Show anisotropic simulation object
#' @description
#' Function to print the \code{aniso_simulation} class object after the
#' \code{aniso_simulation} model has been constructed.
#'
#' @param object an S4 object of class \code{aniso_simulation}
#'
#' @return Show a list of important parameters in class \code{aniso_simulation}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#'
#' # Simulate simple diffusion for 100 images with 100 by 100 pixels
#' aniso_sim_bm = aniso_simulation(sz=100,len_t=100,sigma_bm=c(0.5,0.1))
#' show(aniso_sim_bm)
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
show.aniso_simulation <- function(object){
cat("Frame size: ",object@sz, "\n")
cat("Number of time steps: ",object@len_t, "\n")
cat("Number of particles: ",object@M, "\n")
cat("Stochastic process: ",object@model_name, "\n")
cat("Variance of background noise: ",object@sigma_2_0, "\n")
if(object@model_name == "BM"){
cat("sigma_bm: ",object@param, "\n")
}else if(object@model_name == "OU"){
cat("(rho, sigma_ou): ",object@param,"\n")
}else if(object@model_name == "FBM"){
cat("(sigma_fbm, Hurst parameter): ",object@param, "\n")
}else if(object@model_name == "OU+FBM"){
cat("(rho, sigma_ou,sigma_fbm, Hurst parameter): ",object@param, "\n")
}
}
#' Show scattering analysis of microscopy (SAM) object
#' @description
#' Function to print the \code{SAM} class object after the \code{SAM} model has
#' been constructed.
#'
#' @param object an S4 object of class \code{SAM}
#'
#' @return Show a list of important parameters in class \code{SAM}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#'
#' ## Simulate BM and get estimated parameters using BM model
#' # Simulation
#' sim_bm = simulation(sz=100,len_t=100,sigma_bm=0.5)
#' show(sim_bm)
#'
#' # AIUQ method: fitting using BM model
#' sam = SAM(sim_object=sim_bm)
#' show(sam)
#'
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
show.sam <- function(object){
cat("Fitted model: ",object@model_name, "\n")
cat("Number of q ring: ",object@len_q, "\n")
cat("Index of wave number selected: ",object@index_q, "\n")
cat("True parameters in the model: ",object@param_truth, "\n")
cat("Estimated parameters in the model: ",object@param_est, "\n")
cat("True variance of background noise: ",object@sigma_2_0_truth, "\n")
cat("Estimated variance of background noise: ",object@sigma_2_0_est, "\n")
if(object@method=="AIUQ"){
cat("Maximum log likelihood value: ",object@mle, "\n")
cat("Akaike information criterion score: ",object@AIC, "\n")
}
}
#' Show scattering analysis of microscopy for anisotropic processes (aniso_SAM) object
#' @description
#' Function to print the \code{aniso_SAM} class object after the
#' \code{aniso_SAM} model has been constructed.
#'
#' @param object an S4 object of class \code{aniso_SAM}
#'
#' @return Show a list of important parameters in class \code{aniso_SAM}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#'
#' ## Simulate BM and get estimated parameters using BM model
#' # Simulation
#' aniso_sim_bm = aniso_simulation(sz=100,len_t=100,sigma_bm=c(0.5,0.3))
#' show(aniso_sim_bm)
#'
#' # AIUQ method: fitting using BM model
#' aniso_sam = aniso_SAM(sim_object=aniso_sim_bm, AIUQ_thr=c(0.99,0))
#' show(aniso_sam)
#'
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
show.aniso_sam <- function(object){
cat("Fitted model: ",object@model_name, "\n")
cat("Number of q ring: ",object@len_q, "\n")
cat("Index of wave number selected: ",object@index_q, "\n")
cat("True parameters in the model: ",object@param_truth, "\n")
cat("Estimated parameters in the model: ",object@param_est, "\n")
cat("True variance of background noise: ",object@sigma_2_0_truth, "\n")
cat("Estimated variance of background noise: ",object@sigma_2_0_est, "\n")
if(object@method=="AIUQ"){
cat("Maximum log likelihood value: ",object@mle, "\n")
cat("Akaike information criterion score: ",object@AIC, "\n")
}
}
#' Plot estimated MSD with uncertainty from SAM class
#' @description
#' Function to plot estimated MSD with uncertainty from the \code{SAM} class,
#' versus the true mean squared displacement (MSD) or given reference values.
#'
#' @param object an S4 object of class \code{SAM}
#' @param msd_truth a vector/matrix of true MSD or reference MSD value,
#' default is \code{NA}
#' @param title main title of the plot. If \code{NA}, title is "model_name" with
#' \code{model_name} being a field in \code{SAM} class representing fitted model.
#' @param log10 a logical evaluating to TRUE or FALSE indicating whether a plot
#' in log10 scale is generated
#'
#' @return A plot of estimated MSD with uncertainty versus truth/reference values.
#' @export
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#'
#' ## Simulate BM and get estimated parameters with uncertainty using BM model
#' # Simulation
#' set.seed(1)
#' sim_bm = simulation(sz=100,len_t=100,sigma_bm=0.5)
#' show(sim_bm)
#'
#' # AIUQ method: fitting using BM model
#' sam = SAM(sim_object=sim_bm, uncertainty=TRUE,AIUQ_thr=c(0.999,0))
#' show(sam)
#'
#' plot_MSD(object=sam, msd_truth=sam@msd_truth) #in log10 scale
#' plot_MSD(object=sam, msd_truth=sam@msd_truth,log10=FALSE) #in real scale
#'
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
plot_MSD<-function(object, msd_truth=NA, title=NA, log10=TRUE){
if(class(object)[1]=='SAM'){
if(is.na(title)==T){title=paste(object@model_name)}
if(!is.na(msd_truth)[1]){
len_t = min(length(msd_truth),length(object@d_input))
msd_truth_here = msd_truth[2:len_t]
if(log10==T){
plot(log10(object@d_input[2:len_t]),log10(msd_truth_here),
type='l',col='black',ylab='log10(MSD)',
xlab=expression(paste("log10(", Delta, "t)",sep="")), main=title,
lwd=2,lty=1, ylim = c(log10(min(msd_truth_here)),log10(max(msd_truth_here))*1.1),
mgp=c(2.5,1,0))
if(!is.na(object@msd_upper)[1]){
polygon(log10(c(object@d_input[-1],rev(object@d_input[-1]))),
log10(c(object@msd_upper[-1],rev(object@msd_lower[-1]))),
col = "grey80", border = F)
}
lines(log10(object@d_input[-1]),log10(object@msd_est[-1]),type='p',
col='blue', pch=20, cex=0.8)
lines(log10(object@d_input[2:len_t]),log10(msd_truth_here),
type='l',col='black',lwd=2)
legend('topleft',lty=c(1,NA),pch=c(NA,20),col=c('black','blue'),
legend=c('Reference','SAM'),lwd=c(2,2), cex=0.7)
}else{
plot(object@d_input[2:len_t],msd_truth_here,type='l',col='black',
ylab='MSD',xlab=expression(Delta~t),main=title,lwd=2,lty=1,
ylim = c(0,max(msd_truth_here)*1.2),mgp=c(2.5,1,0))
if(!is.na(object@msd_upper)[1]){
polygon(c(object@d_input[-1],rev(object@d_input[-1])),
c(object@msd_upper[-1],rev(object@msd_lower[-1])),
col = "grey80", border = F)
}
lines(object@d_input[-1],object@msd_est[-1],type='p',col='blue', pch=20, cex=0.8)
lines(object@d_input[2:len_t],msd_truth_here,type='l',col='black',lwd=2)
legend('topleft',lty=c(1,NA),pch=c(NA,20), col=c('black','blue'),
legend=c('Reference','SAM'),lwd=c(2,2), cex=0.7)
}
}else{
if(log10==T){
plot(log10(object@d_input[-1]),log10(object@msd_est[-1]),type='p',
col='blue', pch=20, cex=0.8,ylab='log10(MSD)',
xlab=expression(paste("log10(", Delta, "t)",sep="")),main=title,
lwd=2,lty=1, ylim = c(log10(min(object@msd_est[-1])),log10(max(object@msd_est))*1.2),
mgp=c(2.5,1,0))
if(!is.na(object@msd_upper)[1]){
polygon(log10(c(object@d_input[-1],rev(object@d_input[-1]))),
log10(c(object@msd_upper[-1],rev(object@msd_lower[-1]))),
col = "grey80", border = F)
}
lines(log10(object@d_input[-1]),log10(object@msd_est[-1]),type='p',col='blue', pch=20, cex=0.8)
legend('topleft',pch=20, lty=NA,col=c('blue'),
legend=c('SAM'),lwd=c(2), cex=0.7)
}else{
plot(object@d_input[-1],object@msd_est[-1],type='p',col='blue', pch=20, cex=0.8,
ylab='MSD',xlab=expression(Delta~t),main=title,lwd=2,lty=1,
ylim = c(0,max(object@msd_est[-1])*1.1),mgp=c(2.5,1,0))
if(!is.na(object@msd_upper)[1]){
polygon(c(object@d_input[-1],rev(object@d_input[-1])),
c(object@msd_upper[-1],rev(object@msd_lower[-1])),col = "grey80", border = F)
}
lines(object@d_input[-1],object@msd_est[-1],type='p',col='blue', pch=20, cex=0.8)
legend('topleft',pch=20,lty=NA, col=c('blue'),legend=c('SAM'),
lwd=c(2), cex=0.7)
}
}
}else if(class(object)[1]=='aniso_SAM'){
if(is.na(title)==T){title=paste("Anisotropic",object@model_name)}
if(!is.na(msd_truth)[1]){
if(ncol(msd_truth)!=2){stop("Please input a matrix with 2 columns where
each column holds MSD truth for x,y directions,
respectively.\n")}
len_t = min(nrow(msd_truth),length(object@d_input))
msd_true_x = msd_truth[2:len_t,1]
msd_true_y = msd_truth[2:len_t,2]
msd_x = object@msd_est[-1,1]
msd_y = object@msd_est[-1,2]
if(log10==T){
plot(log10(object@d_input[2:len_t]),log10(msd_true_x),
type='l',col='black',ylab='log10(MSD)',
xlab=expression(paste("log10(", Delta, "t)",sep="")), main=title,
lwd=2,lty=1, ylim = c(log10(min(msd_true_x,msd_true_y)),
log10(max(msd_true_x,msd_true_y))*1.1),
mgp=c(2.5,1,0))
lines(log10(object@d_input[2:len_t]),log10(msd_true_y),col='black',lwd=2,lty=2)
if(!is.na(object@msd_x_upper)[1]){
polygon(log10(c(object@d_input[-1],rev(object@d_input[-1]))),
log10(c(object@msd_x_upper[-1],rev(object@msd_x_lower[-1]))),
col = "grey80", border = F)
}
if(!is.na(object@msd_y_upper)[1]){
polygon(log10(c(object@d_input[-1],rev(object@d_input[-1]))),
log10(c(object@msd_y_upper[-1],rev(object@msd_y_lower[-1]))),
col = "grey80", border = F)
}
lines(log10(object@d_input[-1]),log10(msd_x),type='p',col='blue', pch=20, cex=0.8)
lines(log10(object@d_input[-1]),log10(msd_y),type='p',col='blue', pch=17, cex=0.8)
lines(log10(object@d_input[2:len_t]),log10(msd_true_x),col='black',lwd=2,lty=1)
lines(log10(object@d_input[2:len_t]),log10(msd_true_y),col='black',lwd=2,lty=2)
legend('topleft',lty=c(1,2,NA,NA),pch=c(NA,NA,20,17),
col=c('black','black','blue','blue'),
legend=c('Reference x','Reference y','SAM x','SAM y'),lwd=c(2,2,2,2),
cex=0.7)
}else{
plot(object@d_input[2:len_t],msd_true_x,
type='l',col='black',ylab='MSD',
xlab=expression(paste("", Delta, "t",sep="")), main=title,
lwd=2,lty=1, ylim = c(min(msd_true_x,msd_true_y),
max(msd_true_x,msd_true_y)*1.1),
mgp=c(2.5,1,0))
lines(object@d_input[2:len_t],msd_true_y,col='black',lwd=2,lty=2)
if(!is.na(object@msd_x_upper)[1]){
polygon(c(object@d_input[-1],rev(object@d_input[-1])),
c(object@msd_x_upper[-1],rev(object@msd_x_lower[-1])),
col = "grey80", border = F)
}
if(!is.na(object@msd_y_upper)[1]){
polygon(c(object@d_input[-1],rev(object@d_input[-1])),
c(object@msd_y_upper[-1],rev(object@msd_y_lower[-1])),
col = "grey80", border = F)
}
lines(object@d_input[-1],msd_x,type='p',col='blue', pch=20, cex=0.8)
lines(object@d_input[-1],msd_y,type='p',col='blue', pch=17, cex=0.8)
lines(object@d_input[2:len_t],msd_true_x,col='black',lwd=2,lty=1)
lines(object@d_input[2:len_t],msd_true_y,col='black',lwd=2,lty=2)
legend('topleft',lty=c(1,2,NA,NA),pch=c(NA,NA,20,17),
col=c('black','black','blue','blue'),
legend=c('Reference x','Reference y','SAM x','SAM y'),lwd=c(2,2,2,2),
cex=0.7)
}
}else{
if(log10==T){
msd_x = object@msd_est[-1,1]
msd_y = object@msd_est[-1,2]
plot(log10(object@d_input[-1]),log10(msd_x),type='p', col='blue',pch=20,
cex=0.8, main=title, lwd=2, lty=1, ylab='log10(MSD)',
xlab=expression(paste("log10(", Delta, "t)",sep="")),
ylim = c(log10(min(msd_x,msd_y)),log10(max(msd_x,msd_y))*1.2),
mgp=c(2.5,1,0))
lines(log10(object@d_input[-1]),log10(msd_y),type='p',col='blue', pch=17, cex=0.8)
if(!is.na(object@msd_x_upper)[1]){
polygon(log10(c(object@d_input[-1],rev(object@d_input[-1]))),
log10(c(object@msd_x_upper[-1],rev(object@msd_x_lower[-1]))),
col = "grey80", border = F)}
if(!is.na(object@msd_y_upper)[1]){
polygon(log10(c(object@d_input[-1],rev(object@d_input[-1]))),
log10(c(object@msd_y_upper[-1],rev(object@msd_y_lower[-1]))),
col = "grey80", border = F)}
lines(log10(object@d_input[-1]),log10(msd_x),type='p',col='blue', pch=20, cex=0.8)
lines(log10(object@d_input[-1]),log10(msd_y),type='p',col='blue', pch=17, cex=0.8)
legend('topleft',pch=c(20,17), lty=c(NA,NA),col=c('blue','blue'),
legend=c('SAM x','SAM y'),lwd=c(2,2), cex=0.7)
}else{
msd_x = object@msd_est[-1,1]
msd_y = object@msd_est[-1,2]
plot(object@d_input[-1],msd_x,type='p', col='blue',pch=20,
cex=0.8, main=title, lwd=2, lty=1, ylab='MSD',
xlab=expression(paste( Delta, "t",sep="")),
ylim = c(min(msd_x,msd_y),max(msd_x,msd_y)*1.2),
mgp=c(2.5,1,0))
lines(object@d_input[-1],msd_y,type='p',col='blue', pch=17, cex=0.8)
if(!is.na(object@msd_x_upper)[1]){
polygon(c(object@d_input[-1],rev(object@d_input[-1])),
c(object@msd_x_upper[-1],rev(object@msd_x_lower[-1])),
col = "grey80", border = F)}
if(!is.na(object@msd_y_upper)[1]){
polygon(c(object@d_input[-1],rev(object@d_input[-1])),
c(object@msd_y_upper[-1],rev(object@msd_y_lower[-1])),
col = "grey80", border = F)}
lines(object@d_input[-1],msd_x,type='p',col='blue', pch=20, cex=0.8)
lines(object@d_input[-1],msd_y,type='p',col='blue', pch=17, cex=0.8)
legend('topleft',pch=c(20,17), lty=c(NA,NA),col=c('blue','blue'),
legend=c('SAM x','SAM y'),lwd=c(2,2), cex=0.7)
}
}
}
else{stop("Please input an SAM or aniso_SAM class object. \n")}
}
#' Plot 2D intensity
#' @description
#' Function to plot 2D intensity profile for a certain frame, default is to plot
#' the first frame. Input can be a matrix (2D) or an array (3D).
#'
#' @param intensity intensity profile
#' @param intensity_str structure of the intensity profile, options from
#' ('SST_array','S_ST_mat','T_SS_mat', 'SS_T_mat'). See 'Details'.
#' @param frame frame index
#' @param title main title of the plot. If \code{NA}, title is "intensity profile
#' for frame n" with n being the frame index in \code{frame}.
#' @param color a logical evaluating to TRUE or FALSE indicating whether a colorful
#' plot is generated
#' @param sz frame size of simulated image with default \code{c(200,200)}.
#'
#' @return 2D plot in gray scale (or with color) of selected frame.
#' @details
#' By default \code{intensity_str} is set to 'T_SS_mat', a time by space\eqn{\times}{%\times}space
#' matrix, which is the structure of intensity profile obtained from \code{simulation}
#' class. For \code{intensity_str='SST_array'} , input intensity profile should be a
#' space by space by time array, which is the structure from loading a tif file.
#' For \code{intensity_str='S_ST_mat'}, input intensity profile should be a
#' space by space\eqn{\times}{%\times}time matrix. For \code{intensity_str='SS_T_mat'},
#' input intensity profile should be a space\eqn{\times}{%\times}space by time matrix.
#'
#' @author \packageAuthor{AIUQ}
#' @examples
#' library(AIUQ)
#' sim_bm = simulation(sz=100,len_t=100,sigma_bm=0.5)
#' show(sim_bm)
#' plot_intensity(sim_bm@intensity, sz=sim_bm@sz)
#'
#' @export
plot_intensity<-function(intensity,intensity_str="T_SS_mat",frame=1,sz=NA,
title=NA, color=FALSE){
if(is.na(title)){
title = paste("intensity profile for frame ", frame, sep="")
}
if(length(sz)==1 && is.na(sz)){
intensity_list = intensity_format_transform(intensity=intensity,
intensity_str=intensity_str)
intensity_trans = intensity_list$intensity
sz_x = intensity_list$sz_x
sz_y = intensity_list$sz_y
}else{
intensity_list = intensity_format_transform(intensity=intensity,
intensity_str=intensity_str,sz=sz)
intensity_trans = intensity_list$intensity
sz_x = intensity_list$sz_x
sz_y = intensity_list$sz_y
}
if(color==FALSE){
plot_m = t(matrix(intensity_trans[,frame],sz_y,sz_x))
#reversed_m = plot_m[, ncol(plot_m):1]
plot3D::image2D(plot_m,main=title,col = grey(seq(0, 1, length = 256)))
}else{
plot_m = t(matrix(intensity_trans[,frame],sz_y,sz_x))
#reversed_m = plot_m[, ncol(plot_m):1]
plot3D::image2D(plot_m,main=title)
}
}
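
# A minimal sketch (hypothetical toy values, not part of the package API) of the
# intensity layouts named in 'Details' above, for a 2 by 3 frame over 4 time
# steps:
#   'SST_array': a 2 x 3 x 4 array (space by space by time, e.g. a loaded tif)
#   'T_SS_mat' : a 4 x 6 matrix (time by space*space, the simulation output)
#
#   arr = array(seq_len(2*3*4), dim = c(2, 3, 4))   # SST_array layout
#   tss = t(apply(arr, 3, as.vector))               # 4 x 6 T_SS_mat layout
#   stopifnot(identical(tss[1, ], as.vector(arr[, , 1])))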
#' Compute dynamic image structure function
#' @description
#' Compute the dynamic image structure function (Dqt) using the Fourier
#' transformed intensity profile and a selection of the wave number (q) range.
#'
#' @param len_q number of wave numbers
#' @param index_q a vector of selected wave number indices
#' @param len_t number of time steps
#' @param I_q_matrix intensity profile in reciprocal space (after Fourier transformation)
#' @param q_ori_ring_loc_unique_index indices of wave vectors that give unique frequencies
#' @param sz frame size of intensity profile
#'
#' @return Matrix of the dynamic image structure function with dimension \code{len_q} by \code{len_t-1}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @details
#' The dynamic image structure function (Dqt) can be obtained as the ensemble
#' average of the squared absolute values of the Fourier-transformed intensity
#' differences:
#' \deqn{D(q,\Delta t) = \langle |\Delta \hat{I}(q,t,\Delta t)|^2\rangle}{%D(q,\Delta t) = \langle |\Delta \hat{I}(q,t,\Delta t)|^2\rangle}
#' See 'References'.
#'
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#'
#' @keywords internal
SAM_Dqt<-function(len_q,index_q,len_t,I_q_matrix,q_ori_ring_loc_unique_index,sz){
Dqt = matrix(NA,len_q,len_t-1)
for (q_j in index_q){
I_q_cur = I_q_matrix[q_ori_ring_loc_unique_index[[q_j]],]
for (t_i in 1:(len_t-1)){
Dqt[q_j,t_i]=mean((abs(I_q_cur[,(t_i+1):len_t]-I_q_cur[,1:(len_t-t_i)]))^2/(sz[1]*sz[2]),na.rm=T)
}
}
return(Dqt)
}
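
# A minimal sketch (toy values, not part of the package API) of the ensemble
# average computed in SAM_Dqt above: for one wave-number ring, D(q, dt) averages
# |I_q(t + dt) - I_q(t)|^2 over all start times t and ring pixels, normalized by
# the frame area sz[1]*sz[2].
#
#   I_q_toy = matrix(complex(real = rnorm(15), imaginary = rnorm(15)), 3, 5)
#   sz_toy = c(10, 10); len_t_toy = 5
#   Dqt_toy = sapply(1:(len_t_toy - 1), function(t_i){
#     mean(abs(I_q_toy[, (t_i + 1):len_t_toy, drop = FALSE] -
#              I_q_toy[, 1:(len_t_toy - t_i), drop = FALSE])^2/(sz_toy[1]*sz_toy[2]))
#   })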
#' Compute l2 loss for Dqt with fixed A(q) and B
#' @description
#' Compute the l2 loss for the dynamic image structure function (Dqt) using
#' fixed A(q) and B parameters.
#'
#' @param param a vector of natural logarithm of parameters
#' @param Dqt_cur observed dynamic image structure function. See 'Details'.
#' @param q_cur wave vector in units of um^-1
#' @param A_est_q_cur estimated value of A(q). This parameter is determined by
#' the properties of the imaged material and imaging optics. See 'References'.
#' @param B_est estimated value of B. This parameter is determined by the noise
#' in the system. See 'References'.
#' @param d_input sequence of lag times
#' @param model_name model name for the fitted model, options from ('BM','OU',
#' 'FBM','OU+FBM','user_defined')
#' @param msd_fn user-defined mean squared displacement (MSD) structure, a
#' function of \code{param} parameters and \code{d_input} lag times
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{param} and \code{d_input}
#'
#' @return The sum of squared differences between the observed Dqt and the predicted Dqt.
#' @details
#' The dynamic image structure function (Dqt) can be obtained as the ensemble
#' average of the squared absolute values of the Fourier-transformed intensity
#' differences:
#' \deqn{D(q,\Delta t) = \langle |\Delta \hat{I}(q,t,\Delta t)|^2\rangle}{%D(q,\Delta t) = \langle |\Delta \hat{I}(q,t,\Delta t)|^2\rangle}
#' See 'References'.
#'
#' @author \packageAuthor{AIUQ}
#' @export
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @keywords internal
l2_fixedAB<-function(param,Dqt_cur,q_cur,A_est_q_cur,B_est,
d_input,model_name,msd_fn=NA,msd_grad_fn=NA){
theta = exp(param)
msd_list = get_MSD_with_grad(theta=theta,d_input=d_input[-1],
model_name=model_name,msd_fn,msd_grad_fn=msd_grad_fn)
msd = msd_list$msd
sum((Dqt_cur-(A_est_q_cur*(1-exp(-q_cur^2*msd/4))+B_est))^2)
}
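
# A minimal sketch (toy values, not part of the package API) of the model curve
# this loss compares against: for an assumed MSD sequence, the predicted image
# structure function at wave number q is A(q)*(1 - exp(-q^2*MSD/4)) + B, and
# the l2 loss is the sum of squared deviations from the observed Dqt.
#
#   msd_toy = 0.1*(1:9)                       # hypothetical MSD at lags 1..9
#   q_toy = 1; A_toy = 2; B_toy = 0.5
#   Dqt_pred = A_toy*(1 - exp(-q_toy^2*msd_toy/4)) + B_toy
#   Dqt_obs = Dqt_pred + rnorm(9, sd = 0.01)  # noisy "observed" values
#   loss = sum((Dqt_obs - Dqt_pred)^2)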
#' Compute l2 loss for Dqt
#' @description
#' Compute the l2 loss for the dynamic image structure function (Dqt), where
#' A(q) and B are both estimated within the model.
#'
#' @param param a vector of natural logarithm of parameters
#' @param Dqt_cur observed dynamic image structure function. See 'Details'.
#' @param q_cur wave vector in units of um^-1
#' @param d_input sequence of lag times
#' @param model_name model name for the fitted model, options from ('BM','OU',
#' 'FBM','OU+FBM','user_defined')
#' @param msd_fn user-defined mean squared displacement (MSD) structure, a
#' function of \code{param} parameters and \code{d_input} lag times
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{param} and \code{d_input}
#'
#' @return The sum of squared differences between the observed Dqt and the predicted Dqt.
#' @details
#' The dynamic image structure function (Dqt) can be obtained as the ensemble
#' average of the squared absolute values of the Fourier-transformed intensity
#' differences:
#' \deqn{D(q,\Delta t) = \langle |\Delta \hat{I}(q,t,\Delta t)|^2\rangle}{%D(q,\Delta t) = \langle |\Delta \hat{I}(q,t,\Delta t)|^2\rangle}
#' See 'References'.
#'
#' @author \packageAuthor{AIUQ}
#' @export
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @keywords internal
l2_estAB<-function(param,Dqt_cur,q_cur,d_input,model_name,
msd_fn=NA,msd_grad_fn=NA){
theta = exp(param)
A_cur = theta[length(theta)]
B_cur= theta[length(theta)-1]
theta_msd = theta[-((length(theta)-1):length(theta))]
msd_list = get_MSD_with_grad(theta=theta_msd,d_input=d_input[-1],
model_name=model_name,msd_fn,msd_grad_fn=msd_grad_fn)
msd = msd_list$msd
sum((Dqt_cur-(A_cur*(1-exp(-q_cur^2*msd/4))+B_cur))^2)
}
#' Minimize l2 loss for Dqt with fixed A(q) and B
#' @description
#' Minimize the l2 loss function for the dynamic image structure function (Dqt)
#' with fixed A(q) and B, and return the estimated parameters and mean squared
#' displacement (MSD).
#'
#' @param param a vector of natural logarithm of parameters
#' @param q wave vector in units of um^-1
#' @param index_q selected index of wave number
#' @param Dqt observed dynamic image structure function. See 'Details'.
#' @param A_est_q estimated value of A(q). This parameter is determined by
#' the properties of the imaged material and imaging optics. See 'References'.
#' @param B_est estimated value of B. This parameter is determined by the noise
#' in the system. See 'References'.
#' @param d_input sequence of lag times
#' @param model_name model name for the fitted model, options from ('BM','OU',
#' 'FBM','OU+FBM','user_defined')
#' @param msd_fn user-defined mean squared displacement (MSD) structure, a
#' function of \code{param} parameters and \code{d_input} lag times
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{param} and \code{d_input}
#'
#' @return A list of estimated parameters and MSD from minimizing the l2 loss
#' function.
#' @details
#' The dynamic image structure function (Dqt) can be obtained as the ensemble
#' average of the squared absolute values of the Fourier-transformed intensity
#' differences:
#' \deqn{D(q,\Delta t) = \langle |\Delta \hat{I}(q,t,\Delta t)|^2\rangle}{%D(q,\Delta t) = \langle |\Delta \hat{I}(q,t,\Delta t)|^2\rangle}
#' See 'References'.
#'
#' @author \packageAuthor{AIUQ}
#' @export
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @keywords internal
theta_est_l2_dqt_fixedAB<-function(param,q,index_q,Dqt,A_est_q,B_est,
d_input,model_name,msd_fn=NA,msd_grad_fn=NA){
param_est_l2 = matrix(nrow=length(q),ncol=length(param))
for(q_j in index_q){
m_optim=try(optim(param,l2_fixedAB,Dqt_cur=Dqt[q_j,],d_input=d_input,
A_est_q_cur=A_est_q[q_j],B_est=B_est,q_cur=q[q_j],
model_name=model_name,msd_fn=msd_fn,msd_grad_fn=msd_grad_fn,
method='L-BFGS-B' ),silent=T)
try_num=0
while(!is.numeric(m_optim[[1]])){
try_num=try_num+1
param=param+runif(length(param))
m_optim=try(optim(param,l2_fixedAB,Dqt_cur=Dqt[q_j,],d_input=d_input,
A_est_q_cur=A_est_q[q_j],B_est=B_est,q_cur=q[q_j],
model_name=model_name,msd_fn=msd_fn,msd_grad_fn=msd_grad_fn,
method='L-BFGS-B' ),silent=T)
if(try_num>10){
break
}
}
##try no gradient if still no value
try_num=0
while(!is.numeric(m_optim[[1]])){
try_num=try_num+1
param=param+runif(length(param))
m_optim=try(optim(param,l2_fixedAB,Dqt_cur=Dqt[q_j,],d_input=d_input,
A_est_q_cur=A_est_q[q_j],B_est=B_est,q_cur=q[q_j],
model_name=model_name,msd_fn=msd_fn,msd_grad_fn=msd_grad_fn),silent=T)
if(try_num>10){
break
}
}
param_est_l2[q_j,]=(m_optim$par)
}
param_ddm = apply(exp(param_est_l2),1,function(x){get_est_param(x,model_name=model_name)})
if(is.vector(param_ddm)==T){
param_ddm = mean(param_ddm,na.rm=T)
}else{
param_ddm = apply(param_ddm,1,function(x){mean(x,na.rm=T)})
}
msd_ddm = get_MSD(theta=param_ddm,d_input=d_input,model_name=model_name,msd_fn=msd_fn)
ddm_result = list()
ddm_result$param_est = param_ddm
ddm_result$msd_est = msd_ddm
return(ddm_result)
}
#' Minimize l2 loss for Dqt
#' @description
#' Minimize the l2 loss function for the dynamic image structure function (Dqt),
#' and return the estimated parameters and mean squared displacement (MSD).
#'
#' @param param a vector of natural logarithm of parameters
#' @param A_ini initial value of A(q) to be optimized over. Note true A(q) is
#' determined by the properties of the imaged material and imaging optics.
#' See 'References'.
#' @param q wave vector in units of um^-1
#' @param index_q selected index of wave number
#' @param Dqt observed dynamic image structure function. See 'Details'.
#' @param d_input sequence of lag times
#' @param model_name model name for the fitted model, options from ('BM','OU',
#' 'FBM','OU+FBM','user_defined')
#' @param msd_fn user-defined mean squared displacement (MSD) structure, a
#' function of \code{param} parameters and \code{d_input} lag times
#' @param msd_grad_fn user defined MSD gradient structure, a function of
#' \code{param} and \code{d_input}
#'
#' @return A list of estimated parameters and MSD from minimizing the l2 loss
#' function.
#' @details
#' The dynamic image structure function (Dqt) can be obtained as the ensemble
#' average of the squared absolute values of the Fourier-transformed intensity
#' differences:
#' \deqn{D(q,\Delta t) = \langle |\Delta \hat{I}(q,t,\Delta t)|^2\rangle}{%D(q,\Delta t) = \langle |\Delta \hat{I}(q,t,\Delta t)|^2\rangle}
#' See 'References'.
#'
#' @author \packageAuthor{AIUQ}
#' @export
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @keywords internal
theta_est_l2_dqt_estAB<- function(param,A_ini,q,index_q,Dqt,d_input,
model_name,msd_fn=NA,msd_grad_fn=NA){
param_est_l2 = matrix(nrow=length(q),ncol=length(param)+1)
for(q_j in index_q){
param_start=c(param,log(abs(A_ini[q_j])))
if(q_j==length(q)){
param_start=c(param,0)
}
m_optim=try(optim(param_start,l2_estAB,Dqt_cur=Dqt[q_j,],d_input=d_input,
q_cur=q[q_j],model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn,method='L-BFGS-B' ),silent=T)
try_num=0
while(!is.numeric(m_optim[[1]])){
try_num=try_num+1
param_start=param_start+runif(length(param)+1)
m_optim=try(optim(param_start,l2_estAB,Dqt_cur=Dqt[q_j,],d_input=d_input,
q_cur=q[q_j],model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn,method='L-BFGS-B' ),silent=T)
if(try_num>10){
break
}
}
try_num=0
while(!is.numeric(m_optim[[1]])){
try_num=try_num+1
param_start=param_start+runif(length(param)+1)
m_optim=try(optim(param_start,l2_estAB,Dqt_cur=Dqt[q_j,],d_input=d_input,
q_cur=q[q_j],model_name=model_name,msd_fn=msd_fn,
msd_grad_fn=msd_grad_fn),silent=T)
if(try_num>10){
break
}
}
param_est_l2[q_j,]=(m_optim$par)
}
  # keep only the MSD parameter columns: drop the B column (length(param)) and
  # the A(q) column (length(param)+1), matching theta_msd in l2_estAB
  param_est_msd = param_est_l2[,-(length(param):(length(param)+1)),drop=FALSE]
  param_ddm = apply(exp(param_est_msd),1,
                    function(x){get_est_param(x,model_name=model_name)})
if(is.vector(param_ddm)==T){
param_ddm = mean(param_ddm,na.rm=T)
}else{
param_ddm = apply(param_ddm,1,function(x){mean(x,na.rm=T)})
}
msd_ddm = get_MSD(theta=param_ddm,d_input=d_input,model_name=model_name,msd_fn=msd_fn)
ddm_result = list()
ddm_result$param_est = param_ddm
ddm_result$msd_est = msd_ddm
ddm_result$sigma_2_0_est = mean(exp(param_est_l2[,length(param)]),na.rm=T)
ddm_result$A_est=exp(param_est_l2[,length(param)+1])
return(ddm_result)
}
#' Compute observed dynamic image structure function
#' @description
#' Compute the observed dynamic image structure function (Dqt) from an object
#' of class \code{SAM}.
#'
#' @param object an S4 object of class \code{SAM}
#' @param index_q wavevector range used for computing Dqt
#'
#' @return A matrix of observed dynamic image structure function with dimension
#' \code{len_q} by \code{len_t-1}.
#'
#' @author \packageAuthor{AIUQ}
#' @export
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @examples
#' library(AIUQ)
#' sim_bm = simulation(len_t=100,sz=100,sigma_bm=0.5)
#' show(sim_bm)
#' sam = SAM(sim_object = sim_bm)
#' show(sam)
#' Dqt = get_dqt(object=sam)
get_dqt <- function(object, index_q = NA){
if(class(object)[1]=='SAM'){
len_q = object@len_q
len_t = object@len_t
sz = object@sz
if(length(index_q)==1 && is.na(index_q)){
index_q = 1:len_q
}
Dqt = matrix(NA,len_q,len_t-1)
for (q_j in index_q){
index_cur = object@q_ori_ring_loc_unique_index[[q_j]]
I_q_cur = object@I_q[index_cur,]
for (t_i in 1:(len_t-1)){
Dqt[q_j,t_i]=mean((abs(I_q_cur[,(t_i+1):len_t]-I_q_cur[,1:(len_t-t_i)]))^2/(sz[1]*sz[2]),na.rm=T)
}
}
return(Dqt)
  }else{stop("Please input an SAM class object. \n")}
}
#' Compute empirical intermediate scattering function
#' @description
#' Compute the empirical intermediate scattering function (ISF) from an object
#' of class \code{SAM}.
#'
#' @param object an S4 object of class \code{SAM}
#' @param index_q wavevector range used for computing ISF
#'
#' @return A matrix of empirical intermediate scattering function with dimension
#' \code{len_q} by \code{len_t-1}.
#'
#' @author \packageAuthor{AIUQ}
#' @export
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @examples
#' library(AIUQ)
#' sim_bm = simulation(len_t=100,sz=100,sigma_bm=0.5)
#' show(sim_bm)
#' sam = SAM(sim_object = sim_bm)
#' show(sam)
#' ISF = get_isf(object=sam)
get_isf <- function(object, index_q = NA){
if(class(object)[1]=='SAM'){
len_q = object@len_q
len_t = object@len_t
A_est = object@A_est_ini
B_est = object@B_est_ini
if(length(index_q)==1 && is.na(index_q)){
index_q = 1:len_q
}
if(nrow(object@Dqt)*ncol(object@Dqt)==1 && is.na(object@Dqt)){
sz = object@sz
Dqt = matrix(NA,len_q,len_t-1)
isf = matrix(NA,len_q,len_t-1)
for (q_j in index_q){
index_cur = object@q_ori_ring_loc_unique_index[[q_j]]
I_q_cur = object@I_q[index_cur,]
for (t_i in 1:(len_t-1)){
Dqt[q_j,t_i]=mean((abs(I_q_cur[,(t_i+1):len_t]-I_q_cur[,1:(len_t-t_i)]))^2/(sz[1]*sz[2]),na.rm=T)
}
if(A_est[q_j]==0){break}
isf[q_j,] = 1-(Dqt[q_j,]-B_est)/A_est[q_j]
}
}else{
Dqt = object@Dqt
isf = matrix(NA,len_q,len_t-1)
for (q_j in index_q){
if(A_est[q_j]==0){break}
isf[q_j,] = 1-(Dqt[q_j,]-B_est)/A_est[q_j]
}
}
return(isf)
  }else{stop("Please input an object of class SAM. \n")}
}
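The empirical ISF computed by `get_isf` inverts the usual DDM decomposition of the image structure function (a sketch, using the estimated amplitude \eqn{A(q)} from `A_est_ini` and noise term \eqn{B} from `B_est_ini`):

```latex
D(q,\Delta t) = A(q)\,\big[1 - f(q,\Delta t)\big] + B
\quad\Longrightarrow\quad
f(q,\Delta t) = 1 - \frac{D(q,\Delta t) - B}{A(q)}
```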
#' Compute modeled intermediate scattering function
#' @description
#' Compute modeled intermediate scattering function (ISF) using object of
#' \code{SAM} class.
#'
#' @param object an S4 object of class \code{SAM}
#' @param index_q wavevector range used for computing ISF
#'
#' @return A matrix of modeled intermediate scattering function with dimension
#' \code{len_q} by \code{len_t-1}.
#'
#' @author \packageAuthor{AIUQ}
#' @export
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @examples
#' library(AIUQ)
#' sim_bm = simulation(len_t=100,sz=100,sigma_bm=0.5)
#' show(sim_bm)
#' sam = SAM(sim_object = sim_bm)
#' show(sam)
#' modeled_ISF = modeled_isf(object=sam)
modeled_isf <- function(object, index_q=NA){
if(class(object)[1]=='SAM'){
len_q = object@len_q
len_t = object@len_t
q = object@q
msd_est = object@msd_est[-1]
isf = matrix(NA,len_q,len_t-1)
if(length(index_q)==1 && is.na(index_q)){
index_q = 1:len_q
}
for(q_j in index_q){
q_selected = q[q_j]
isf[q_j,] = exp(-q_selected^2*msd_est/4)
}
return(isf)
  }else{stop("Please input an object of class SAM. \n")}
}
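For a single wave vector, the closed form used in `modeled_isf` is \eqn{f(q,\Delta t) = \exp\{-q^2\,\mathrm{MSD}(\Delta t)/4\}}, which can be checked directly. A minimal base-R sketch with toy values (`q_sel` and `msd_est` below are illustrative assumptions, not taken from a fitted `SAM` object):

```r
# Sketch: modeled ISF for one wave vector, mirroring the loop in modeled_isf().
# Toy inputs -- not from a fitted SAM object.
q_sel <- 0.5                  # wave vector, um^-1
msd_est <- 4 * (1:5)          # BM-like MSD over 5 lag times
isf_row <- exp(-q_sel^2 * msd_est / 4)
# The ISF lies in (0,1) and decays monotonically with lag time
stopifnot(all(diff(isf_row) < 0), isf_row[1] > 0, isf_row[1] < 1)
```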
#' Compute modeled dynamic image structure function
#' @description
#' Compute modeled dynamic image structure function (Dqt) using object of
#' \code{SAM} class.
#'
#' @param object an S4 object of class \code{SAM}
#' @param index_q wavevector range used for computing Dqt
#'
#' @return A matrix of modeled dynamic image structure function with dimension
#' \code{len_q} by \code{len_t-1}.
#'
#' @author \packageAuthor{AIUQ}
#' @export
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#' @examples
#' library(AIUQ)
#' sim_bm = simulation(len_t=100,sz=100,sigma_bm=0.5)
#' show(sim_bm)
#' sam = SAM(sim_object = sim_bm)
#' show(sam)
#' modeled_Dqt = modeled_dqt(object=sam)
modeled_dqt <- function(object, index_q = NA){
if(class(object)[1]=='SAM'){
len_q = object@len_q
len_t = object@len_t
A_est = object@A_est_ini
B_est = object@sigma_2_0_est*2
q = object@q
msd_est = object@msd_est[-1]
if(length(index_q)==1 && is.na(index_q)){
index_q = 1:len_q
}
if(nrow(object@modeled_ISF)*ncol(object@modeled_ISF)==1 && is.na(object@modeled_ISF)){
isf = matrix(NA,len_q,len_t-1)
dqt = matrix(NA,len_q,len_t-1)
for(q_j in index_q){
q_selected = q[q_j]
isf[q_j,] = exp(-q_selected^2*msd_est/4)
if(A_est[q_j]==0){break}
dqt[q_j,] = A_est[q_j]*(1-isf[q_j,])+B_est
}
}else{
isf = object@modeled_ISF
dqt = matrix(NA,len_q,len_t-1)
for (q_j in index_q){
if(A_est[q_j]==0){break}
dqt[q_j,] = A_est[q_j]*(1-isf[q_j,])+B_est
}
}
return(dqt)
  }else{stop("Please input an object of class SAM. \n")}
}
#' Simulate 2D particle movement
#'
#' @description
#' Simulate 2D particle movement from a user selected stochastic process, and
#' output intensity profiles.
#'
#' @param sz frame size of simulated image with default \code{c(200,200)}.
#' @param len_t number of time steps with default 200.
#' @param M number of particles with default 50.
#' @param model_name stochastic process simulated, options from
#' ('BM','OU','FBM','OU+FBM'), with default 'BM'.
#' @param noise background noise, options from ('uniform','gaussian'),
#' with default 'gaussian'.
#' @param I0 background intensity, value between 0 and 255, with default 20.
#' @param Imax maximum intensity at the center of the particle, value between 0
#' and 255, with default 255.
#' @param pos0 initial position for M particles, matrix with dimension M by 2.
#' @param rho correlation between successive step and previous step in O-U
#' process, value between 0 and 1, with default 0.95.
#' @param H Hurst parameter of fractional Brownian Motion, value between 0 and 1,
#' with default 0.3.
#' @param sigma_p radius of the spherical particle (3sigma_p), with default 2.
#' @param sigma_bm distance moved per time step in Brownian Motion, with default 1.
#' @param sigma_ou distance moved per time step in Ornstein–Uhlenbeck process,
#' with default 2.
#' @param sigma_fbm distance moved per time step in fractional Brownian Motion,
#' with default 2.
#' @return Returns an S4 object of class \code{simulation}.
#' @export
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#'
#' @examples
#' library(AIUQ)
#' # -------------------------------------------------
#' # Example 1: Simple diffusion for 200 images with
#' # 200 by 200 pixels and 50 particles
#' # -------------------------------------------------
#' sim_bm = simulation()
#' show(sim_bm)
#'
#' # -------------------------------------------------
#' # Example 2: Simple diffusion for 100 images with
#' # 100 by 100 pixels and slower speed
#' # -------------------------------------------------
#' sim_bm = simulation(sz=100,len_t=100,sigma_bm=0.5)
#' show(sim_bm)
#'
#' # -------------------------------------------------
#' # Example 3: Ornstein-Uhlenbeck process
#' # -------------------------------------------------
#' sim_ou = simulation(model_name="OU")
#' show(sim_ou)
simulation <- function(sz=c(200,200), len_t=200, M=50, model_name="BM",noise="gaussian",
I0=20, Imax=255, pos0=matrix(NaN,nrow=M,ncol=2),rho=0.95,
H=0.3, sigma_p=2, sigma_bm=1,sigma_ou=2, sigma_fbm=2){
model <- methods::new("simulation")
#check
len_t = as.integer(len_t)
M = as.integer(M)
if(length(sz)==1){
sz=c(sz,sz)
}
if(length(sz)>2){
stop("Frame size of simulated image should be a vector with length 2. \n")
}
if(!is.character(model_name)){
stop("Type of stochastic process should be a character value. \n")
}
if(model_name!="BM" && model_name!="OU" && model_name!="FBM" && model_name!="OU+FBM"){
    stop("Type of stochastic process should be one of the types listed in the help page. \n")
}
if(!is.character(noise)){
stop("Type of background noise should be a character value. \n")
}
if(noise!="gaussian" && noise!="uniform"){
    stop("Type of background noise should be one of the types listed in the help page. \n")
}
if(!is.numeric(I0)){
stop("Background intensity should have numeric value. \n")
}
if(I0<0 || I0>255){
stop("Background intensity should have value between 0 and 255. \n")
}
if(!is.numeric(Imax)){
stop("Maximum intensity at the center of the particle should be a numeric value. \n")
}
if(Imax<0 || Imax>255){
stop("Maximum intensity at the center of the particle should have value between 0 and 255. \n")
}
if(!is.numeric(pos0)){
stop("Initial position for particles should be all numeric. \n")
}
if(nrow(pos0)!=M || ncol(pos0)!=2){
stop("Dimension of particle initial position matrix should match M by 2. \n")
}
if(!is.numeric(rho)){
stop("Correlation between steps in O-U process should be numeric. \n")
}
if(!is.numeric(H)){
stop("Hurst parameter of fractional Brownian Motion should be numeric. \n")
}
if(H<0 || H>1){
stop("Hurst parameter of fractional Brownian Motion should have value between 0 and 1. \n")
}
if(!is.numeric(sigma_p)){
stop("Radius of the spherical particle should be numeric. \n")
}
if(!is.numeric(sigma_bm)){
stop("Distance moved per time step in Brownian Motion should be numeric. \n")
}
if(!is.numeric(sigma_ou)){
stop("Distance moved per time step in Ornstein Uhlenbeck process should be numeric. \n")
}
if(!is.numeric(sigma_fbm)){
stop("Distance moved per time step in fractional Brownian Motion should be numeric. \n")
}
# Simulation particle trajectory for isotropic process
if(sum(is.na(pos0))>=1){
pos0 = matrix(c(sz[2]/8+0.75*sz[2]*stats::runif(M),
sz[1]/8+0.75*sz[1]*stats::runif(M)),nrow=M,ncol=2)
}
if(model_name == "BM"){
pos = bm_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma_bm)
model@param = c(sigma_bm)
}else if(model_name == "OU"){
pos = ou_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma_ou,rho=rho)
model@param = c(rho,sigma_ou)
}else if(model_name == "FBM"){
pos = fbm_particle_intensity(pos0=pos0,M=M,len_t=len_t,sigma=sigma_fbm,H=H)
model@param = c(sigma_fbm,H)
}else if(model_name == "OU+FBM"){
pos = fbm_ou_particle_intensity(pos0=pos0,M=M,len_t=len_t,H=H, rho=rho,
sigma_ou = sigma_ou, sigma_fbm = sigma_fbm)
model@param = c(rho,sigma_ou,sigma_fbm,H)
}
model_param = get_true_param_sim(param_truth=model@param,model_name=model_name)
model@theor_msd = get_MSD(theta = model_param ,d_input=0:(len_t-1),model_name=model_name)
# Fill intensity
if(length(I0) == len_t){
if(noise == "uniform"){
I = matrix(stats::runif(sz[1]*sz[2]*len_t)-0.5, nrow=len_t,ncol = sz[1]*sz[2])
I = I*I0
model@sigma_2_0 = I0^2/12
}else if(noise == "gaussian"){
I = matrix(stats::rnorm(sz[1]*sz[2]*len_t), nrow=len_t,ncol = sz[1]*sz[2])
I = I*sqrt(I0)
model@sigma_2_0 = I0
}
}else if(length(I0) == 1){
if(noise == "uniform"){
I = matrix(I0*(stats::runif(sz[1]*sz[2]*len_t)-0.5), nrow=len_t,ncol = sz[1]*sz[2])
model@sigma_2_0 = c(I0^2/12)
} else if(noise == "gaussian"){
I = matrix(sqrt(I0)*stats::rnorm(sz[1]*sz[2]*len_t), nrow=len_t,ncol = sz[1]*sz[2])
model@sigma_2_0 = c(I0)
}
}
if(length(Imax)==1){
Ic = rep(Imax,M)
model@intensity = fill_intensity(len_t=len_t,M=M,I=I,pos=pos,Ic=Ic,sz=sz, sigma_p=sigma_p)
}
model@sz = sz
model@pxsz = 1
model@mindt = 1
model@len_t = len_t
model@noise = noise
model@M = M
model@model_name = model_name
model@pos = pos
model@num_msd = numerical_msd(pos=model@pos,M=model@M,len_t=model@len_t)
return(model)
}
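The `sigma_2_0` values assigned above follow from the noise construction: uniform noise `I0*(U-0.5)` is uniform on \eqn{(-I_0/2, I_0/2)} and hence has variance \eqn{I_0^2/12}, while Gaussian noise `sqrt(I0)*Z` has variance \eqn{I_0}. A quick Monte Carlo sanity check (toy sample size, not part of the package):

```r
# Sketch: verify the background-noise variances used in simulation().
set.seed(1)
I0 <- 20
u_noise <- I0 * (stats::runif(1e5) - 0.5)   # uniform on (-I0/2, I0/2)
g_noise <- sqrt(I0) * stats::rnorm(1e5)     # Gaussian with variance I0
stopifnot(abs(var(u_noise) - I0^2/12) < 1,  # I0^2/12 = 33.33...
          abs(var(g_noise) - I0) < 1)
```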
#' SAM class
#'
#'@description
#' S4 class for fast parameter estimation in scattering analysis of microscopy,
#' using either \code{AIUQ} or \code{DDM} method.
#'
#' @slot pxsz numeric. Size of one pixel in unit of micron with default value 1.
#' @slot mindt numeric. Minimum lag time with default value 1.
#' @slot sz vector. Frame size of the intensity profile in x and y directions,
#' number of pixels contained in each frame equals sz_x by sz_y.
#' @slot len_t integer. Number of time steps.
#' @slot len_q integer. Number of wave vectors.
#' @slot q vector. Wave vector in unit of um^-1.
#' @slot d_input vector. Sequence of lag times.
#' @slot B_est_ini numeric. Estimation of B. This parameter is determined by the
#' noise in the system. See 'References'.
#' @slot A_est_ini vector. Estimation of A(q). Note this parameter is
#' determined by the properties of the imaged material and imaging optics.
#' See 'References'.
#' @slot I_o_q_2_ori vector. Absolute square of Fourier transformed intensity
#' profile, ensemble over time.
#' @slot q_ori_ring_loc_unique_index list. List of location index of non-duplicate
#' values for each q ring.
#' @slot model_name character. Fitted model, options from
#' ('BM','OU','FBM','OU+FBM', 'user_defined').
#' @slot param_est vector. Estimated parameters contained in MSD.
#' @slot sigma_2_0_est numeric. Estimated variance of background noise.
#' @slot msd_est vector. Estimated MSD.
#' @slot uncertainty logical. A logical evaluating to TRUE or FALSE indicating whether
#' parameter uncertainty should be computed.
#' @slot msd_lower vector. Lower bound of 95% confidence interval of MSD.
#' @slot msd_upper vector. Upper bound of 95% confidence interval of MSD.
#' @slot msd_truth vector. True MSD or reference MSD value.
#' @slot sigma_2_0_truth vector. True variance of background noise, non NA for
#' simulated data using \code{simulation}.
#' @slot param_truth vector. True parameters used to construct MSD, non NA for
#' simulated data using \code{simulation}.
#' @slot index_q vector. Selected index of wave vector.
#' @slot Dqt matrix. Dynamic image structure function D(q,delta t).
#' @slot ISF matrix. Empirical intermediate scattering function f(q,delta t).
#' @slot I_q matrix. Fourier transformed intensity profile with structure 'SS_T_mat'.
#' @slot AIC numeric. Akaike information criterion score.
#' @slot mle numeric. Maximum log likelihood value.
#' @slot param_uq_range matrix. 95% confidence interval for estimated parameters.
#' @slot modeled_Dqt matrix. Modeled dynamic image structure function D(q,delta t).
#' @slot modeled_ISF matrix. Modeled intermediate scattering function f(q,delta t).
#'
#' @method show SAM
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#'
#' @keywords classes
methods::setClass("SAM", representation(
#intensity_str = "character",
pxsz = "numeric",
mindt = "numeric",
sz = "vector",
len_t = "integer",
len_q = "integer",
q = "vector",
d_input = "vector",
B_est_ini = "numeric",
A_est_ini = "vector",
#num_q_max = "numeric",
I_o_q_2_ori = "vector",
q_ori_ring_loc_unique_index = "list",
model_name = "character",
param_est = "vector",
sigma_2_0_est = "numeric",
msd_est = "vector",
uncertainty = "logical",
msd_lower = "vector",
msd_upper = "vector",
msd_truth = "vector",
sigma_2_0_truth = "vector",
param_truth = "vector",
method = "character",
index_q = "vector",
Dqt = "matrix",
ISF = "matrix",
I_q = "matrix",
AIC = "numeric",
mle = "numeric",
param_uq_range = "matrix",
modeled_Dqt = "matrix",
modeled_ISF = "matrix"
#p = "numeric"
)
)
## Show
if(!isGeneric("show")){
setGeneric(name = "show",
def = function(object) standardGeneric("show"))
}
setMethod("show", "SAM",
function(object){show.sam(object)})
#' Anisotropic SAM class
#'
#'@description
#' S4 class for fast parameter estimation in scattering analysis of microscopy
#' for anisotropic processes, using either \code{AIUQ} or \code{DDM} method.
#'
#' @slot pxsz numeric. Size of one pixel in unit of micron with default value 1.
#' @slot mindt numeric. Minimum lag time with default value 1.
#' @slot sz vector. Frame size of the intensity profile in x and y directions,
#' number of pixels contained in each frame equals sz_x by sz_y.
#' @slot len_t integer. Number of time steps.
#' @slot len_q integer. Number of wave vectors.
#' @slot q vector. Wave vector in unit of um^-1.
#' @slot d_input vector. Sequence of lag times.
#' @slot B_est_ini numeric. Estimation of B. This parameter is determined by the
#' noise in the system. See 'References'.
#' @slot A_est_ini vector. Estimation of A(q). Note this parameter is
#' determined by the properties of the imaged material and imaging optics.
#' See 'References'.
#' @slot I_o_q_2_ori vector. Absolute square of Fourier transformed intensity
#' profile, ensemble over time.
#' @slot q_ori_ring_loc_unique_index list. List of location index of non-duplicate
#' values for each q ring.
#' @slot model_name character. Fitted model, options from
#' ('BM','OU','FBM','OU+FBM', 'user_defined').
#' @slot param_est matrix. Estimated parameters contained in MSD.
#' @slot sigma_2_0_est vector. Estimated variance of background noise.
#' @slot msd_est matrix. Estimated MSD.
#' @slot uncertainty logical. A logical evaluating to TRUE or FALSE indicating whether
#' parameter uncertainty should be computed.
#' @slot msd_truth matrix. True MSD or reference MSD value.
#' @slot sigma_2_0_truth vector. True variance of background noise, non NA for
#' simulated data using \code{simulation}.
#' @slot param_truth matrix. True parameters used to construct MSD, non NA for
#' simulated data using \code{aniso_simulation}.
#' @slot index_q vector. Selected index of wave vector.
#' @slot I_q matrix. Fourier transformed intensity profile with structure 'SS_T_mat'.
#' @slot AIC numeric. Akaike information criterion score.
#' @slot mle numeric. Maximum log likelihood value.
#' @slot msd_x_lower vector. Lower bound of 95% confidence interval of MSD in x directions.
#' @slot msd_x_upper vector. Upper bound of 95% confidence interval of MSD in x directions.
#' @slot msd_y_lower vector. Lower bound of 95% confidence interval of MSD in y directions.
#' @slot msd_y_upper vector. Upper bound of 95% confidence interval of MSD in y directions.
#' @slot param_uq_range matrix. 95% confidence interval for estimated parameters.
#'
#' @method show aniso_SAM
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#'
#' @keywords classes
methods::setClass("aniso_SAM", representation(
pxsz = "numeric",
mindt = "numeric",
sz = "vector",
len_t = "integer",
len_q = "integer",
q = "vector",
d_input = "vector",
B_est_ini = "numeric",
A_est_ini = "vector",
#num_q_max = "numeric",
I_o_q_2_ori = "vector",
q_ori_ring_loc_unique_index = "list",
model_name = "character",
param_est = "matrix",
sigma_2_0_est = "vector",
msd_est = "matrix",
uncertainty = "logical",
msd_x_lower = "vector",
msd_x_upper = "vector",
msd_y_lower = "vector",
msd_y_upper = "vector",
msd_truth = "matrix",
sigma_2_0_truth = "vector",
param_truth = "matrix",
method = "character",
index_q = "vector",
I_q = "matrix",
AIC = "numeric",
mle = "numeric",
param_uq_range = "matrix"
)
)
## Show
if(!isGeneric("show")){
setGeneric(name = "show",
def = function(object) standardGeneric("show"))
}
setMethod("show", "aniso_SAM",
function(object){show.aniso_sam(object)})
#' Simulation class
#'
#' @description
#' S4 class for 2D particle movement simulation.
#'
#' @slot sz vector. Frame size of the intensity profile, number of pixels
#' contained in each frame equals \code{sz[1]} by \code{sz[2]}.
#' @slot len_t integer. Number of time steps.
#' @slot noise character. Background noise, options from ('uniform','gaussian').
#' @slot model_name character. Simulated stochastic process, options from ('BM','OU','FBM','OU+FBM').
#' @slot M integer. Number of particles.
#' @slot pxsz numeric. Size of one pixel in unit of micron, 1 for simulated data.
#' @slot mindt numeric. Minimum lag time, 1 for simulated data.
#' @slot pos matrix. Position matrix for particle trajectory, see 'Details'.
#' @slot intensity matrix. Filled intensity profile, see 'Details'.
#' @slot num_msd vector. Numerical mean squared displacement (MSD).
#' @slot param vector. Parameters for simulated stochastic process.
#' @slot theor_msd vector. Theoretical MSD.
#' @slot sigma_2_0 vector. Variance of background noise.
#'
#' @method show simulation
#' @details
#' \code{intensity} should have structure 'T_SS_mat', a matrix with dimension
#' \code{len_t} by \code{sz}\eqn{\times}{%\times}\code{sz}.
#'
#' \code{pos} should be the position matrix with dimension
#' \code{M}\eqn{\times}{%\times}\code{len_t}. See \code{\link{bm_particle_intensity}},
#' \code{\link{ou_particle_intensity}}, \code{\link{fbm_particle_intensity}},
#' \code{\link{fbm_ou_particle_intensity}}.
#'
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#'
#'
#'@keywords classes
methods::setClass("simulation", representation(
sz = "vector",
len_t = "integer",
noise = "character",
model_name = "character",
M = "integer",
pxsz = "numeric",
mindt = "numeric",
pos = "matrix", #first M equals pos0
intensity = "matrix",
num_msd = "vector",
param = "vector",
theor_msd = "vector",
sigma_2_0 = "vector"
)
)
## Show
if(!isGeneric("show")){
setGeneric(name = "show",
def = function(object) standardGeneric("show"))
}
setMethod("show", "simulation",
function(object){show.simulation(object)})
#' Anisotropic simulation class
#'
#' @description
#' S4 class for anisotropic 2D particle movement simulation.
#'
#' @slot sz vector. Frame size of the intensity profile, number of pixels
#' contained in each frame equals \code{sz[1]} by \code{sz[2]}.
#' @slot len_t integer. Number of time steps.
#' @slot noise character. Background noise, options from ('uniform','gaussian').
#' @slot model_name character. Simulated stochastic process, options from ('BM','OU','FBM','OU+FBM').
#' @slot M integer. Number of particles.
#' @slot pxsz numeric. Size of one pixel in unit of micron, 1 for simulated data.
#' @slot mindt numeric. Minimum lag time, 1 for simulated data.
#' @slot pos matrix. Position matrix for particle trajectory, see 'Details'.
#' @slot intensity matrix. Filled intensity profile, see 'Details'.
#' @slot num_msd matrix. Numerical mean squared displacement (MSD).
#' @slot param matrix. Parameters used to construct MSD.
#' @slot theor_msd matrix. Theoretical MSD.
#' @slot sigma_2_0 vector. Variance of background noise.
#'
#' @method show aniso_simulation
#' @details
#' \code{intensity} should have structure 'T_SS_mat', a matrix with dimension
#' \code{len_t} by \code{sz}\eqn{\times}{%\times}\code{sz}.
#'
#' \code{pos} should be the position matrix with dimension
#' \code{M}\eqn{\times}{%\times}\code{len_t}. See \code{\link{bm_particle_intensity}},
#' \code{\link{ou_particle_intensity}}, \code{\link{fbm_particle_intensity}},
#' \code{\link{fbm_ou_particle_intensity}}.
#'
#' @author \packageAuthor{AIUQ}
#' @references
#' Gu, M., He, Y., Liu, X., & Luo, Y. (2023). Ab initio uncertainty
#' quantification in scattering analysis of microscopy.
#' arXiv preprint arXiv:2309.02468.
#'
#' Gu, M., Luo, Y., He, Y., Helgeson, M. E., & Valentine, M. T. (2021).
#' Uncertainty quantification and estimation in differential dynamic microscopy.
#' Physical Review E, 104(3), 034610.
#'
#' Cerbino, R., & Trappe, V. (2008). Differential dynamic microscopy: probing
#' wave vector dependent dynamics with a microscope. Physical review letters,
#' 100(18), 188102.
#'
#'
#'@keywords classes
methods::setClass("aniso_simulation", representation(
sz = "vector",
len_t = "integer",
noise = "character",
model_name = "character",
M = "integer",
pxsz = "numeric",
mindt = "numeric",
pos = "matrix", #first M equals pos0
intensity = "matrix",
num_msd = "matrix",
param = "matrix",
theor_msd = "matrix",
sigma_2_0 = "vector"
)
)
## Show
if(!isGeneric("show")){
setGeneric(name = "show",
def = function(object) standardGeneric("show"))
}
setMethod("show", "aniso_simulation",
function(object){show.aniso_simulation(object)})
################################################################################
#############        EM algorithm for quantile regression            ###########
#                           Updated on 17/01/17                                #
################################################################################
################################################################################
### Starting the EM algorithm
################################################################################
EM.qr<-function(y,x=NULL,tau=NULL, error = 0.000001, iter=2000, envelope=FALSE)
{
#############################################################
### ENVELOPES: Bootstrap ###
#############################################################
if(envelope==TRUE){
    n <- length(y)
    #### Quantile regression: envelope of \rho_p(y-mu)/sigma^2 ~ exp(1)
rq <- EM.qr(y,x,tau)
columas <- ncol(x)
muc <- (y-x%*%rq$theta[1:columas])
Ind <- (muc<0)+0
d2s <- muc*(tau-Ind) ### Distancia de mahalobonisb
d2s <- sort(d2s)
xq2 <- qexp(ppoints(n), 1/(rq$theta[4]))
Xsim <- matrix(0,100,n)
for(i in 1:100){
Xsim[i,] <- rexp(n, 1/(rq$theta[4]))
}
Xsim2 <- apply(Xsim,1,sort)
d21 <- matrix(0,n,1)
d22 <- matrix(0,n,1)
for(i in 1:n){
d21[i] <- quantile(Xsim2[i,],0.05)
d22[i] <- quantile(Xsim2[i,],0.95)
}
d2med <-apply(Xsim2,1,mean)
fy <- range(d2s,d21,d22)
plot(xq2,d2s,xlab = expression(paste("Theoretical ",exp(1), " quantiles")),
ylab="Sample values and simulated envelope",pch=20,ylim=fy)
par(new=T)
plot(xq2,d21,type="l",ylim=fy,xlab="",ylab="")
par(new=T)
plot(xq2,d2med,type="l",ylim=fy,xlab="",ylab="")
par(new=T)
plot(xq2,d22,type="l",ylim=fy,xlab="",ylab="")
}
################################################################################
### MI Empirica: Veja givens
################################################################################
MI_empirica<-function(y,x,tau,theta){
p <- ncol(x)
n <- nrow(x)
taup2 <- (2/(tau*(1-tau)))
thep <- (1-2*tau)/(tau*(1-tau))
beta <- theta[1:p]
sigma <- theta[p+1]
mu <- x%*%beta
muc <- y-mu
delta2 <- (y-x%*%beta)^2/(taup2*sigma)
gamma2 <- (2+thep^2/taup2)/sigma
K05P <- 2*besselK(sqrt(delta2*gamma2), 0.5)*(sqrt(delta2/gamma2)^0.5)
K05N <- 2*besselK(sqrt(delta2*gamma2), -0.5)*(sqrt(delta2/gamma2)^(-0.5))
K15P <- 2*besselK(sqrt(delta2*gamma2), 1.5)*(sqrt(delta2/gamma2)^(1.5))
DerG <- matrix(0,nrow=(p+1),ncol=(p+1))
for (i in 1:n)
{
dkibeta <- (muc[i]/(taup2*sigma))*(K05N[i])*x[i,]
dkisigma <- sqrt(delta2[i])/(2*sigma)*K05N[i]+sqrt(gamma2)/(2*sigma)*K15P[i]
GradBeta <- -thep/(taup2*sigma)*x[i,]+(K05P[i])^(-1)*dkibeta
Gradsigma <- -1.5/sigma-thep*muc[i]/(taup2*sigma^2)+ (K05P[i])^(-1)*dkisigma
GradAux <- as.matrix(c(GradBeta,Gradsigma),p+1,1)
DerG <- DerG+GradAux%*%t(GradAux)
}
EP <- sqrt(diag(solve(DerG)))
obj.out <- list(EP = as.vector(EP))
return(obj.out)
}
################################################################################
################################################################################
  ## Quantile regression log-likelihood: using the loss function and the Bessel function
################################################################################
logVQR <- function(y,x,tau,theta)
{
p <- ncol(x)
n <- nrow(x)
beta <- theta[1:p]
sigma <- theta[p+1]
mu <- x%*%beta
muc <- (y-mu)/sigma
Ind <- (muc<0)+0
logver <- sum(-log(sigma)+log(tau*(1-tau))-muc*(tau-Ind))
return(logver)
}
p <- ncol(x)
n <- nrow(x)
reg <- lm(y ~ x[,2:p])
taup2 <- (2/(tau*(1-tau)))
thep <- (1-2*tau)/(tau*(1-tau))
  # Initialize beta and sigma with the least squares estimators
beta <- as.vector(coefficients(reg),mode="numeric")
sigma <- sqrt(sum((y-x%*%beta)^2)/(n-p))
  lk <- lk1 <- lk2 <- logVQR(y,x,tau,c(beta,sigma))  ## log-likelihood
teta_velho <- matrix(c(beta,sigma),ncol=1)
cont <- 0
criterio <- 1
  while(criterio > error)
  {
    print(criterio)
    cont <- (cont+1)
muc <- (y-x%*%beta)
delta2 <- (y-x%*%beta)^2/(taup2*sigma)
gamma2 <- (2+thep^2/taup2)/sigma
vchpN <- besselK(sqrt(delta2*gamma2), 0.5-1)/(besselK(sqrt(delta2*gamma2), 0.5))*(sqrt(delta2/gamma2))^(-1)
vchp1 <- besselK(sqrt(delta2*gamma2), 0.5+1)/(besselK(sqrt(delta2*gamma2), 0.5))*(sqrt(delta2/gamma2))
xM <- c(sqrt(vchpN))*x
suma1 <- t(xM)%*%(xM)
suma2 <- x*c(vchpN*y-thep)
sigma <- sum(vchpN*muc^2-2*muc*thep+vchp1*(thep^2+2*taup2))/(3*n*taup2)
beta <- solve(suma1)%*%apply(suma2,2,sum)
teta_novo <- matrix(c(beta,sigma),ncol=1)
criterio <- sqrt(sum((teta_velho-teta_novo)^2))
lk3 <- logVQR(y,x,tau,c(beta,sigma))
if(cont<2) criterio <- abs(lk2 - lk3)/abs(lk3)
else {
tmp <- (lk3 - lk2)/(lk2 - lk1)
tmp2 <- lk2 + (lk3 - lk2)/(1-tmp)
criterio <- abs(tmp2 - lk3)
}
    lk1 <- lk2
    lk2 <- lk3
if (cont==iter)
{
break
}
teta_velho <- teta_novo
}
Weights <- vchpN*vchp1
EP <- MI_empirica(y,x,tau,teta_novo)$EP
logver <- logVQR(y,x,tau,teta_novo)
return(list(theta=teta_novo,EP=EP,logver=logver,iter=cont,Weights=Weights,di=abs(muc)/sigma))
}
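The inner function `logVQR` above evaluates the asymmetric Laplace log-likelihood in its check-loss form. Written out (a sketch matching the code term by term, with \eqn{\rho_\tau} the quantile loss):

```latex
\ell(\beta,\sigma) \;=\; \sum_{i=1}^{n}\Big[\log\{\tau(1-\tau)\} - \log\sigma
 - \rho_\tau\!\Big(\frac{y_i - x_i^{\top}\beta}{\sigma}\Big)\Big],
\qquad
\rho_\tau(u) \;=\; u\,\{\tau - \mathbb{1}(u<0)\}.
```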
| /scratch/gouwar.j/cran-all/cranData/ALDqr/R/EM.qr.r |
################################################################################
###           DIAGNOSTICS: case-weight perturbation
################################################################################
diag.qr <- function(y,x,tau,theta)
{
## PC= Ponderacao de casos
p <- ncol(x)
n <- nrow(x)
taup2 <- (2/(tau*(1-tau)))
thep <- (1-2*tau)/(tau*(1-tau))
beta <- theta[1:p]
sigma <- theta[p+1]
mu <- x%*%beta
muc <- y-mu
B <- thep/(taup2*sigma)
E <- (thep^2+2*taup2)/(2*taup2*sigma^2)
delta2 <- (y-x%*%beta)^2/(taup2*sigma)
gamma2 <- (2+thep^2/taup2)/sigma
DerBB <- matrix(0,p,p)
DerSS <- 0
DerBS <- matrix(0,1,p)
MatrizQ <- matrix(0,nrow=(p+1),ncol=(p+1))
GradQ <- matrix(0,nrow=(p+1),ncol=n)
vchpN1 <- besselK(sqrt(delta2*gamma2), 0.5-1)/(besselK(sqrt(delta2*gamma2), 0.5))*(sqrt(delta2/gamma2))^(-1)
vchpN2 <- besselK(sqrt(delta2*gamma2), 0.5-2)/(besselK(sqrt(delta2*gamma2), 0.5))*(sqrt(delta2/gamma2))^(-2)
vchpP1 <- besselK(sqrt(delta2*gamma2), 0.5+1)/(besselK(sqrt(delta2*gamma2), 0.5))*(sqrt(delta2/gamma2))
vchpP2 <- besselK(sqrt(delta2*gamma2), 0.5+2)/(besselK(sqrt(delta2*gamma2), 0.5))*(sqrt(delta2/gamma2))^(2)
for(i in 1:n)
{
Ai <- muc[i]/(taup2*sigma)
Ci <- -0.5*(3*taup2*sigma+2*muc[i]*thep)/(taup2*sigma^2)
Di <- muc[i]^2/(2*taup2*sigma^2)
DerBB <- DerBB+ (-vchpN1[i]/(taup2*sigma))*((x[i,])%*%t(x[i,]))
DerBS <- DerBS+(-(vchpN1[i]*muc[i]-thep)/(taup2*sigma^2))*(x[i,])
DerSS <- DerSS+(1.5/sigma^2-(vchpN1[i]*(muc[i])^2-2*thep*muc[i]+vchpP1[i]*(2*taup2+thep^2))/(taup2*sigma^3))
GradQ[,i] <- as.matrix(c((vchpN1[i]*Ai-B)*(x[i,]),(Ci+vchpN1[i]*Di+vchpP1[i]*E)),p+1,1)
}
MatrizQ[1:p,1:p] <- DerBB
MatrizQ[p+1,1:p] <- (DerBS)
MatrizQ[1:p,p+1] <- t(DerBS)
MatrizQ[p+1,p+1] <- DerSS
MatrizQ <- (MatrizQ+t(MatrizQ))/2
#############################################################
### Graphic of the likelihood displacement for data ###
#############################################################
thetaest <- theta
sigmaest <- thetaest[p+1]
betaest <- matrix(thetaest[1:p],p,1)
taup2 <- (2/(tau*(1-tau)))
thep <- (1-2*tau)/(tau*(1-tau))
HessianMatrix <- MatrizQ
Gradiente <- GradQ
sigma <- sigmaest
beta <- betaest
muc <- (y-x%*%beta)
delta2 <- (y-x%*%beta)^2/(taup2*sigma)
gamma2 <- (2+thep^2/taup2)/sigma
vchpN <- besselK(sqrt(delta2*gamma2), 0.5-1)/(besselK(sqrt(delta2*gamma2), 0.5))*(sqrt(delta2/gamma2))^(-1)
vchp1 <- besselK(sqrt(delta2*gamma2), 0.5+1)/(besselK(sqrt(delta2*gamma2), 0.5))*(sqrt(delta2/gamma2))
Q <- -0.5*n*log(sigmaest)-0.5*(sigmaest*taup2)^(-1)*(sum(vchpN*muc^2 - 2*muc*thep + vchp1*(thep^2+2*taup2)))
########################################################
theta_i <- thetaest%*%matrix(1,1,n) +(-solve(HessianMatrix))%*%Gradiente
sigmaest <- theta_i[p+1,]
betaest <- theta_i[1:p,]
sigma <- sigmaest
beta <- betaest
muc <- (y-x%*%beta)
delta2 <- (y-x%*%beta)^2/(taup2*sigma)
gamma2 <- (2+thep^2/taup2)/sigma
vchpN <- besselK(sqrt(delta2*gamma2), 0.5-1)/(besselK(sqrt(delta2*gamma2), 0.5))*(sqrt(delta2/gamma2))^(-1)
vchp1 <- besselK(sqrt(delta2*gamma2), 0.5+1)/(besselK(sqrt(delta2*gamma2), 0.5))*(sqrt(delta2/gamma2))
Q1 <- c()
for (i in 1:n){Q1[i] <- -0.5*n*log(sigmaest[i])-sum(vchpN[,i]*muc[,i]^2 - 2*muc[,i]*thep + vchp1[,i]*(thep^2+2*taup2))/(2*(sigmaest[i]*taup2))}
########################################################
QDi <- 2*(-Q+Q1)
#############################################################
#############################################################
## Graphic of the generalized Cook distance for data ###
#############################################################
HessianMatrix <- MatrizQ
Gradiente <- GradQ
GDi <- c()
for (i in 1:n) {GDi[i] <- t(Gradiente[,i])%*%solve(-HessianMatrix)%*%Gradiente[,i]}
obj.out <- list(MatrizQ = MatrizQ, mdelta=GradQ,QDi=QDi,GDi=GDi)
return(obj.out)
}
#theta <- EM.qr(y,x,tau)$theta
#diag.qr(y,x,tau,theta)
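#The QDi and GDi components returned above can be inspected case by case
#(illustrative sketch, not part of the original file; assumes y, x, tau and
#theta as in the commented lines above):
#dg <- diag.qr(y, x, tau, theta)
#plot(dg$QDi, type = "h") #likelihood displacement per observation
#plot(dg$GDi, type = "h") #generalized Cook distance per observation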
| /scratch/gouwar.j/cran-all/cranData/ALDqr/R/diag.qr.r |
.Random.seed <-
c(403L, 10L, -1849970072L, 478532502L, 151795473L, -1142075197L,
-2008698670L, -2045037212L, -1409046361L, 1596939773L, 1847746548L,
-1934817062L, 1284327973L, -1284831425L, 1816634566L, 186980688L,
-427382413L, -1657629807L, 545744320L, -1316839522L, -1512193687L,
1522987419L, -1077992278L, 289342156L, 956582799L, 2029793221L,
2073645948L, -1359049774L, -146003123L, 679098887L, 1144875502L,
93570248L, 917359563L, 1688116777L, 727307384L, 288919974L, 477901761L,
-1908940717L, 100098L, 1565766196L, 1233053623L, -1005840467L,
2013971652L, -343783190L, -986329259L, -1281556241L, -1347205386L,
697995808L, -1585137693L, -2075837311L, -999556368L, 1394708878L,
375406329L, -1032049525L, -159968070L, -1950445636L, 1705069823L,
-399851435L, -1456696468L, 480299970L, 468843549L, -182018921L,
-1304593922L, -1320805960L, -1433070181L, -1911030791L, 640181064L,
-351654282L, -1515690063L, 942979555L, -701083982L, 1964811012L,
1809530951L, 1924335389L, 2033597844L, 49054010L, -1813234299L,
99178079L, 890126566L, -393477264L, 227566739L, -1747795663L,
-1106737184L, 1698393214L, -2076827895L, -1668497093L, -285612790L,
-2070184724L, -859141777L, 476221093L, 1119207516L, 272406578L,
-1641222355L, -631743001L, -2139463410L, 1493223144L, 1113049323L,
968362377L, -734742248L, -1668830138L, -465767263L, -1495868621L,
461038242L, 2062948756L, 1030424599L, -724062195L, 2085483812L,
149557514L, 1552222325L, 161383375L, -1361021738L, 532374656L,
371186371L, -1566202783L, 816071888L, -131369426L, -1865701415L,
80807915L, 1734213210L, 650795036L, 1816577311L, -164321931L,
-1678691316L, -1193437726L, -280254787L, 702505271L, 206302L,
1937636120L, 1912926843L, 391102489L, -1628079960L, 404699734L,
1216521937L, -154720637L, 1016495378L, 1801906340L, -12597145L,
537684413L, -704699596L, 4352154L, -78958875L, 1415284863L, -738115066L,
1233972624L, -928583629L, -15507503L, -1530890880L, 1491933278L,
-1257113687L, -1403818405L, -1004801430L, 1010417420L, 237238607L,
90702341L, 1142575292L, 1854019986L, 1970737805L, -1069588281L,
-411028050L, 139739144L, -398887925L, -404483863L, -1449553992L,
935290982L, 530127105L, -223285357L, -403709886L, -760141836L,
-578817545L, 83244525L, 686880388L, -18446166L, 1602803221L,
-708089169L, -1379319626L, 1013407456L, 1015792163L, -1787024063L,
-1953435600L, -456799666L, 1767829177L, 1028430539L, -1773405318L,
-1557575044L, -1654677825L, 607766421L, -1509974100L, -225976958L,
-1003265187L, -594530729L, -1132401346L, 712596856L, 1666403931L,
-815814215L, 1274698760L, 449046198L, 591345137L, -368522205L,
-611302286L, 2000361156L, 464416903L, -1509174179L, -681207212L,
-925642630L, -258460731L, 1692345631L, -2116033370L, 863614512L,
-1898287917L, -979811599L, -1723887072L, -1185934146L, -1954111287L,
718267003L, 154003530L, -1943709524L, 1079508911L, -1428477083L,
1319371036L, 1540516210L, 1956411885L, 1141493287L, -1185669554L,
902726312L, -999227221L, 989678025L, 1688602072L, 52900998L,
1848449377L, 2114168360L, 1961541292L, -1492848964L, -1043557902L,
340078896L, -1521076060L, 154000056L, 2055300954L, -1223910000L,
-1980371588L, -453334764L, -1655909054L, -1151339776L, 226214844L,
2054247248L, -771175134L, -22005432L, -938824692L, 211076188L,
1778443474L, 340481024L, 1000249172L, -845579800L, 1676256250L,
-166829360L, -264270548L, 447735028L, 1024033682L, -1701252000L,
467151116L, -1543429408L, 1639935330L, -1165703512L, 1600495180L,
-817224836L, -1936060622L, -1327977872L, 584655172L, -1366124584L,
-5516166L, -282932688L, -745949508L, 1743157076L, -450173374L,
1478260640L, 920298684L, 1195483600L, -1269821086L, 1598951400L,
62002124L, -1969871108L, -1507400718L, 1741024384L, -1334114060L,
-788922232L, -1751867078L, 1743883088L, 1143963756L, -4440940L,
-1300019886L, 1318496992L, -1561383732L, 2016875680L, -91589246L,
-809335576L, -1875172116L, 315201596L, 1788513394L, 1291107440L,
1702056484L, -120502728L, 748806682L, -805876528L, 1206709820L,
1021753492L, 1470514306L, -333274176L, -505910980L, -120443248L,
-696888286L, 644898824L, 396379276L, 1299758108L, -161128302L,
738483328L, -1701041132L, 775051048L, 2145140794L, -2057963312L,
1195929708L, 1743554228L, -1753396206L, 603915232L, 812618700L,
-238978592L, 298567714L, 1242484008L, -2146373364L, 1325937212L,
-461879054L, -1787015952L, 1990147908L, -912319400L, -655610310L,
-1783088912L, 529016508L, -339910124L, 1533531330L, 97229408L,
255594876L, -597414704L, -703516638L, -573975384L, -1067429492L,
84992444L, 992971826L, -1986244800L, 1142467252L, -22500024L,
-279546182L, 776837904L, -1455404820L, 1257417812L, 1931608850L,
-2080117344L, 368555980L, -2072638624L, 1396657346L, 1440686632L,
774061484L, -738327748L, -939774094L, -802685136L, 1954864164L,
1601827384L, 190940890L, 123757456L, 807350524L, 853633428L,
1150892866L, 1330205184L, 2135536700L, 1793903184L, -553487198L,
1169123144L, 144860684L, 2071795420L, -914956206L, 675377280L,
1840108372L, -560434200L, 1570665210L, -1358524592L, 2108960812L,
-532434572L, -1408914798L, 1328009824L, 263347340L, 1846168928L,
-172236574L, 916453544L, 376726220L, -279083524L, -261130318L,
1090531312L, -639548L, 1006840152L, 33811834L, -1769120336L,
-1711580740L, -1592464812L, -1669575230L, 148499232L, 261966780L,
-1036536752L, -1778747422L, 1246468968L, -1684936884L, -639272196L,
480654066L, 888131840L, -2069957516L, 612809224L, 1694261690L,
1897028048L, -856812564L, 1972909972L, -579246894L, 1318465632L,
1779931468L, -2038022368L, -1923988862L, -677334552L, 1412662252L,
-1422442948L, -937364878L, 556660592L, 976797988L, -1219690952L,
-166425190L, 1411241424L, -1608748612L, 1926306452L, 624793218L,
563255744L, 90709820L, 219621008L, -1116118238L, 254995976L,
-1722472692L, 166887324L, 689322002L, -271490176L, -1802649836L,
-1545426648L, -3712966L, -1699772464L, 1354146412L, -1531845580L,
656223378L, 699209056L, 215925068L, 337986016L, 331036578L, -572055640L,
8334476L, -1981256772L, 27482793L, -253768130L, -1434418388L,
585641341L, 175326151L, 1413026480L, 833818830L, -764425733L,
-1268921763L, 1905845642L, -2080195160L, 1345733393L, -357207549L,
-1775631100L, 427802930L, 1769754503L, -1597039151L, 340201302L,
1095183668L, -839097275L, 336432271L, -1998616056L, -976591322L,
-1206190381L, -1429737419L, 905084818L, 169443168L, -407622967L,
-1722982917L, -679466548L, -22085542L, -279405233L, -347007975L,
-1326409170L, -58787780L, 1579642285L, -564162985L, -1192919136L,
-1054483202L, 689799115L, -248653363L, 546495738L, 1769942840L,
1855397089L, 577572755L, -1476869196L, 1214113986L, 1060555991L,
338200865L, -2046721818L, -332208668L, 1843274005L, 894777663L,
669476056L, -912601866L, 133707523L, 1709363269L, -1659752350L,
-1544423472L, 57265401L, 1024575019L, 723841436L, 2125042122L,
1017377471L, -628079607L, 1321432350L, 1783150796L, -594432867L,
682217767L, -154199216L, -2053872146L, -1028865381L, -1479498499L,
-938972182L, 839371912L, -551185871L, -1419533149L, 453322916L,
-1207404206L, 309445799L, 1282171121L, -337995722L, 905955540L,
893615205L, 1849833071L, 1398036648L, 378242566L, -1957877965L,
-290733163L, -2137245710L, 1093489728L, 2106961321L, 1675150363L,
-2053455380L, -1089193478L, -1109598673L, 112003641L, 776668238L,
1121782940L, 467779085L, -596049289L, -1554763136L, -269793058L,
102556587L, -400560979L, -1089842918L, -872348200L, -2002837951L,
921613555L, -796761324L, -1232039262L, 1560529847L, -35502207L,
1441664390L, 200744900L, -737287435L, -500181153L, -390497864L,
-937262954L, 679986851L, 400271461L, 17948034L, 973872880L, -1216228711L,
-1820240245L, -22866052L, -729261654L, -1835961889L, 1435223913L,
-808751234L, -899467412L, 741239741L, -1678952313L, -391031696L,
1493774094L, -1833184197L, 1207614237L, -1734602294L, 670522088L,
-1418654255L, 295747523L, -160937788L, 2096871282L, 423701063L,
1552833041L, -591498346L, 273643252L, 371153029L, 2036673999L,
-2070947640L, -429551258L, -1362402413L, -1595814667L, -1694954286L,
-1850483936L, 674394889L, 1111725115L, -2122114676L, 1859121178L,
1254276239L, 1215161305L, -2117507986L, -1760000004L, 993746285L,
16381719L, 990111456L, 1919259838L, -1495539061L, -813586163L,
-1859747704L)
| /scratch/gouwar.j/cran-all/cranData/ALEPlot/R/ALEPlot-internal.R |
ALEPlot <-
function(X, X.model, pred.fun, J, K = 40, NA.plot = TRUE) {
N = dim(X)[1] #sample size
d = dim(X)[2] #number of predictor variables
if (length(J) == 1) { #calculate main effects ALE plot
if (class(X[,J]) == "factor") {#for categorical X[,J], calculate the ALE plot
#Get rid of any empty levels of x and tabulate level counts and probabilities
X[,J] <- droplevels(X[,J])
x.count <- as.numeric(table(X[,J])) #frequency count vector for levels of X[,J]
x.prob <- x.count/sum(x.count) #probability vector for levels of X[,J]
K <- nlevels(X[,J]) #reset K to the number of levels of X[,J]
D.cum <- matrix(0, K, K) #will be the distance matrix between pairs of levels of X[,J]
D <- matrix(0, K, K) #initialize matrix
#For loop for calculating distance matrix D for each of the other predictors
for (j in setdiff(1:d, J)) {
if (class(X[,j]) == "factor") {#Calculate the distance matrix for each categorical predictor
A=table(X[,J],X[,j]) #frequency table, rows of which will be compared
A=A/x.count
for (i in 1:(K-1)) {
for (k in (i+1):K) {
D[i,k] = sum(abs(A[i,]-A[k,]))/2 #This dissimilarity measure is always within [0,1]
D[k,i] = D[i,k]
}
}
D.cum <- D.cum + D
} #End of if (class(X[,j]) == "factor") statement
else { #calculate the distance matrix for each numerical predictor
q.x.all <- quantile(X[,j], probs = seq(0, 1, length.out = 100), na.rm = TRUE, names = FALSE) #quantiles of X[,j] for all levels of X[,J] combined
x.ecdf=tapply(X[,j], X[,J], ecdf) #list of ecdf's for X[,j] by levels of X[,J]
for (i in 1:(K-1)) {
for (k in (i+1):K) {
D[i,k] = max(abs(x.ecdf[[i]](q.x.all)-x.ecdf[[k]](q.x.all))) #This dissimilarity measure is the Kolmogorov-Smirnov distance between X[,j] for levels i and k of X[,J]. It is always within [0,1]
D[k,i] = D[i,k]
}
}
D.cum <- D.cum + D
} #End of else statement that goes with if (class(X[,j]) == "factor") statement
} #end of for (j in setdiff(1:d, J) loop
#calculate the 1-D MDS representation of D and the ordered levels of X[,J]
D1D <- cmdscale(D.cum, k = 1) #1-dimensional MDS representation of the distance matrix
ind.ord <- sort(D1D, index.return = T)$ix #K-length index vector. The i-th element is the original level index of the i-th lowest ordered level of X[,J].
ord.ind <- sort(ind.ord, index.return = T)$ix #Another K-length index vector. The i-th element is the order of the i-th original level of X[,J].
levs.orig <- levels(X[,J]) #as.character levels of X[,J] in original order
levs.ord <- levs.orig[ind.ord] #as.character levels of X[,J] after ordering
x.ord <- ord.ind[as.numeric(X[,J])] #N-length vector of numerical version of X[,J] with numbers corresponding to the indices of the ordered levels
#Calculate the model predictions with the levels of X[,J] increased and decreased by one
row.ind.plus <- (1:N)[x.ord < K] #indices of rows for which X[,J] was not the highest level
row.ind.neg <- (1:N)[x.ord > 1] #indices of rows for which X[,J] was not the lowest level
X.plus <- X
X.neg <- X
X.plus[row.ind.plus,J] <- levs.ord[x.ord[row.ind.plus]+1] #Note that this leaves the J-th column as a factor with the same levels as X[,J], whereas X.plus[,J] <- . . . would convert it to a character vector
X.neg[row.ind.neg,J] <- levs.ord[x.ord[row.ind.neg]-1]
y.hat <- pred.fun(X.model=X.model, newdata = X)
y.hat.plus <- pred.fun(X.model=X.model, newdata = X.plus[row.ind.plus,])
y.hat.neg <- pred.fun(X.model=X.model, newdata = X.neg[row.ind.neg,])
#Take the appropriate differencing and averaging for the ALE plot
Delta.plus <- y.hat.plus-y.hat[row.ind.plus] #N.plus-length vector of individual local effect values. They are the differences between the predictions with the level of X[,J] increased by one level (in ordered levels) and the predictions with the actual level of X[,J].
Delta.neg <- y.hat[row.ind.neg]-y.hat.neg #N.neg-length vector of individual local effect values. They are the differences between the predictions with the actual level of X[,J] and the predictions with the level of X[,J] decreased (in ordered levels) by one level.
Delta <- as.numeric(tapply(c(Delta.plus, Delta.neg), c(x.ord[row.ind.plus], x.ord[row.ind.neg]-1), mean)) #(K-1)-length vector of averaged local effect values corresponding to the first K-1 ordered levels of X[,J].
fJ <- c(0, cumsum(Delta)) #K length vector of accumulated averaged local effects
#now vertically translate fJ, by subtracting its average (averaged across X[,J])
fJ = fJ - sum(fJ*x.prob[ind.ord])
x <- levs.ord
barplot(fJ, names=x, xlab=paste("x_", J, " (", names(X)[J], ")", sep=""), ylab= paste("f_",J,"(x_",J,")", sep=""), las =3)
} #end of if (class(X[,J]) == "factor") statement
else if (class(X[,J]) == "numeric" | class(X[,J]) == "integer") {#for numerical or integer X[,J], calculate the ALE plot
#find the vector of z values corresponding to the quantiles of X[,J]
z= c(min(X[,J]), as.numeric(quantile(X[,J],seq(1/K,1,length.out=K), type=1))) #vector of K+1 z values
z = unique(z) #necessary if X[,J] is discrete, in which case z could have repeated values
K = length(z)-1 #reset K to the number of unique quantile points
fJ = numeric(K)
#group training rows into bins based on z
a1=as.numeric(cut(X[,J], breaks=z, include.lowest=TRUE)) #N-length index vector indicating into which z-bin the training rows fall
X1 = X
X2 = X
X1[,J] = z[a1]
X2[,J] = z[a1+1]
y.hat1 = pred.fun(X.model=X.model, newdata = X1)
y.hat2 = pred.fun(X.model=X.model, newdata = X2)
Delta=y.hat2-y.hat1 #N-length vector of individual local effect values
Delta = as.numeric(tapply(Delta, a1, mean)) #K-length vector of averaged local effect values
fJ = c(0, cumsum(Delta)) #K+1 length vector
#now vertically translate fJ, by subtracting its average (averaged across X[,J])
b1 <- as.numeric(table(a1)) #frequency count of X[,J] values falling into z intervals
fJ = fJ - sum((fJ[1:K]+fJ[2:(K+1)])/2*b1)/sum(b1)
x <- z
plot(x, fJ, type="l", xlab=paste("x_",J, " (", names(X)[J], ")", sep=""), ylab= paste("f_",J,"(x_",J,")", sep=""))
} #end of else if (class(X[,J]) == "numeric" | class(X[,J]) == "integer") statement
else stop("class(X[,J]) must be either factor or numeric or integer")
} #end of if (length(J) == 1) statement
else if (length(J) == 2) { #calculate second-order effects ALE plot
if (class(X[,J[2]]) != "numeric" & class(X[,J[2]]) != "integer") {
stop("X[,J[2]] must be numeric or integer. Only X[,J[1]] can be a factor")
}
if (class(X[,J[1]]) == "factor") {#for categorical X[,J[1]], calculate the ALE plot
#Get rid of any empty levels of x and tabulate level counts and probabilities
X[,J[1]] <- droplevels(X[,J[1]])
x.count <- as.numeric(table(X[,J[1]])) #frequency count vector for levels of X[,J[1]]
x.prob <- x.count/sum(x.count) #probability vector for levels of X[,J[1]]
K1 <- nlevels(X[,J[1]]) #set K1 to the number of levels of X[,J[1]]
D.cum <- matrix(0, K1, K1) #will be the distance matrix between pairs of levels of X[,J[1]]
D <- matrix(0, K1, K1) #initialize matrix
#For loop for calculating distance matrix D for each of the other predictors
for (j in setdiff(1:d, J[1])) {
if (class(X[,j]) == "factor") {#Calculate the distance matrix for each categorical predictor
A=table(X[,J[1]],X[,j]) #frequency table, rows of which will be compared
A=A/x.count
for (i in 1:(K1-1)) {
for (k in (i+1):K1) {
D[i,k] = sum(abs(A[i,]-A[k,]))/2 #This dissimilarity measure is always within [0,1]
D[k,i] = D[i,k]
}
}
D.cum <- D.cum + D
} #End of if (class(X[,j]) == "factor") statement
else { #calculate the distance matrix for each numerical predictor
q.x.all <- quantile(X[,j], probs = seq(0, 1, length.out = 100), na.rm = TRUE, names = FALSE) #quantiles of X[,j] for all levels of X[,J[1]] combined
x.ecdf=tapply(X[,j], X[,J[1]], ecdf) #list of ecdf's for X[,j] by levels of X[,J[1]]
for (i in 1:(K1-1)) {
for (k in (i+1):K1) {
D[i,k] = max(abs(x.ecdf[[i]](q.x.all)-x.ecdf[[k]](q.x.all))) #This dissimilarity measure is the Kolmogorov-Smirnov distance between X[,j] for levels i and k of X[,J[1]]. It is always within [0,1]
D[k,i] = D[i,k]
}
}
D.cum <- D.cum + D
} #End of else statement that goes with if (class(X[,j]) == "factor") statement
} #end of for (j in setdiff(1:d, J[1]) loop
#calculate the 1-D MDS representation of D and the ordered levels of X[,J[1]]
D1D <- cmdscale(D.cum, k = 1) #1-dimensional MDS representation of the distance matrix
ind.ord <- sort(D1D, index.return = T)$ix #K1-length index vector. The i-th element is the original level index of the i-th lowest ordered level of X[,J[1]].
ord.ind <- sort(ind.ord, index.return = T)$ix #Another K1-length index vector. The i-th element is the order of the i-th original level of X[,J[1]].
levs.orig <- levels(X[,J[1]]) #as.character levels of X[,J[1]] in original order
levs.ord <- levs.orig[ind.ord] #as.character levels of X[,J[1]] after ordering
x.ord <- ord.ind[as.numeric(X[,J[1]])] #N-length index vector of numerical version of X[,J[1]] with numbers corresponding to the indices of the ordered levels
#Calculate the model predictions with the levels of X[,J[1]] increased and decreased by one
z2 = c(min(X[,J[2]]), as.numeric(quantile(X[,J[2]],seq(1/K,1,length.out=K), type=1))) #vector of K+1 z values for X[,J[2]]
z2 = unique(z2) #necessary if X[,J(2)] is discrete, in which case z2 could have repeated values
K2 = length(z2)-1 #reset K2 to the number of unique quantile points
#group training rows into bins based on z2
a2 = as.numeric(cut(X[,J[2]], breaks=z2, include.lowest=TRUE)) #N-length index vector indicating into which z2-bin the training rows fall
row.ind.plus <- (1:N)[x.ord < K1] #indices of rows for which X[,J[1]] was not the highest level
X11 = X #matrix with low X[,J[1]] and low X[,J[2]]
X12 = X #matrix with low X[,J[1]] and high X[,J[2]]
X21 = X #matrix with high X[,J[1]] and low X[,J[2]]
X22 = X #matrix with high X[,J[1]] and high X[,J[2]]
X11[row.ind.plus,J[2]] = z2[a2][row.ind.plus]
X12[row.ind.plus,J[2]] = z2[a2+1][row.ind.plus]
X21[row.ind.plus,J[1]] = levs.ord[x.ord[row.ind.plus]+1]
X22[row.ind.plus,J[1]] = levs.ord[x.ord[row.ind.plus]+1]
X21[row.ind.plus,J[2]] = z2[a2][row.ind.plus]
X22[row.ind.plus,J[2]] = z2[a2+1][row.ind.plus]
y.hat11 = pred.fun(X.model=X.model, newdata = X11[row.ind.plus,])
y.hat12 = pred.fun(X.model=X.model, newdata = X12[row.ind.plus,])
y.hat21 = pred.fun(X.model=X.model, newdata = X21[row.ind.plus,])
y.hat22 = pred.fun(X.model=X.model, newdata = X22[row.ind.plus,])
Delta.plus=(y.hat22-y.hat21)-(y.hat12-y.hat11) #N.plus-length vector of individual local effect values
row.ind.neg <- (1:N)[x.ord > 1] #indices of rows for which X[,J[1]] was not the lowest level
X11 = X #matrix with low X[,J[1]] and low X[,J[2]]
X12 = X #matrix with low X[,J[1]] and high X[,J[2]]
X21 = X #matrix with high X[,J[1]] and low X[,J[2]]
X22 = X #matrix with high X[,J[1]] and high X[,J[2]]
X11[row.ind.neg,J[1]] = levs.ord[x.ord[row.ind.neg]-1]
X12[row.ind.neg,J[1]] = levs.ord[x.ord[row.ind.neg]-1]
X11[row.ind.neg,J[2]] = z2[a2][row.ind.neg]
X12[row.ind.neg,J[2]] = z2[a2+1][row.ind.neg]
X21[row.ind.neg,J[2]] = z2[a2][row.ind.neg]
X22[row.ind.neg,J[2]] = z2[a2+1][row.ind.neg]
y.hat11 = pred.fun(X.model=X.model, newdata = X11[row.ind.neg,])
y.hat12 = pred.fun(X.model=X.model, newdata = X12[row.ind.neg,])
y.hat21 = pred.fun(X.model=X.model, newdata = X21[row.ind.neg,])
y.hat22 = pred.fun(X.model=X.model, newdata = X22[row.ind.neg,])
Delta.neg=(y.hat22-y.hat21)-(y.hat12-y.hat11) #N.neg-length vector of individual local effect values
Delta = as.matrix(tapply(c(Delta.plus, Delta.neg), list(c(x.ord[row.ind.plus], x.ord[row.ind.neg]-1), a2[c(row.ind.plus, row.ind.neg)]), mean)) #(K1-1)xK2 matrix of averaged local effects, which includes NA values if a cell is empty
#replace NA values in Delta by the Delta value in their nearest neighbor non-NA cell
NA.Delta = is.na(Delta) #(K1-1)xK2 matrix indicating cells that contain no observations
NA.ind = which(NA.Delta, arr.ind=T, useNames = F) #2-column matrix of row and column indices for NA cells
if (nrow(NA.ind) > 0) {
notNA.ind = which(!NA.Delta, arr.ind=T, useNames = F) #2-column matrix of row and column indices for non-NA cells
range1 =K1-1
range2 = max(z2)-min(z2)
Z.NA = cbind(NA.ind[,1]/range1, (z2[NA.ind[,2]] + z2[NA.ind[,2]+1])/2/range2) # standardized {z1,z2} values for NA cells corresponding to each row of NA.ind, where z1 =1:(K1-1) represents the ordered levels of X[,J]
Z.notNA = cbind(notNA.ind[,1]/range1, (z2[notNA.ind[,2]] + z2[notNA.ind[,2]+1])/2/range2) #standardized {z1,z2} values for non-NA cells corresponding to each row of notNA.ind
nbrs <- ann(Z.notNA, Z.NA, k=1, verbose = F)$knnIndexDist[,1] #vector of row indices (into Z.notNA) of nearest neighbor non-NA cells for each NA cell; ann() is from the yaImpute package
Delta[NA.ind] = Delta[matrix(notNA.ind[nbrs,], ncol=2)] #Set Delta for NA cells equal to Delta for their closest neighbor non-NA cell. The matrix() command is needed, because if there is only one empty cell, notNA.ind[nbrs] is created as a 2-length vector instead of a 1x2 matrix, which does not index Delta properly
} #end of if (nrow(NA.ind) > 0) statement
#accumulate the values in Delta
fJ = matrix(0,K1-1,K2) #rows correspond to X[,J(1)] and columns to X[,J(2)]
fJ = apply(t(apply(Delta,1,cumsum)),2,cumsum) #second-order accumulated effects before subtracting lower order effects
fJ = rbind(rep(0,K2),fJ) #add a first row to fJ that are all zeros
fJ = cbind(rep(0,K1),fJ) #add a first column to fJ that are all zeros, so fJ is now K1x(K2+1)
#now subtract the lower-order effects from fJ
b=as.matrix(table(x.ord,a2)) #K1xK2 cell count matrix (rows correspond to X[,J[1]]; columns to X[,J[2]])
b2=apply(b,2,sum) #K2x1 count vector summed across X[,J[1]], as function of X[,J[2]]
Delta = fJ[,2:(K2+1)]-fJ[,1:K2] #K1xK2 matrix of differenced fJ values, differenced across X[,J[2]]
b.Delta = b*Delta
Delta.Ave = apply(b.Delta,2,sum)/b2 #K2x1 vector of averaged local effects
fJ2 = c(0, cumsum(Delta.Ave)) #(K2+1)x1 vector of accumulated local effects
b.ave=matrix((b[1:(K1-1),]+b[2:K1,])/2, K1-1, K2) #(K1-1)xK2 cell count matrix (rows correspond to X[,J[1]] but averaged across neighboring levels; columns to X[,J[2]]). Must use "matrix(...)" in case K1=2
b1=apply(b.ave,1,sum) #(K1-1)x1 count vector summed across X[,J[2]], as function of X[,J[1]]
Delta =matrix(fJ[2:K1,]-fJ[1:(K1-1),], K1-1, K2+1) #(K1-1)x(K2+1) matrix of differenced fJ values, differenced across X[,J[1]]
b.Delta = matrix(b.ave*(Delta[,1:K2]+Delta[,2:(K2+1)])/2, K1-1, K2) #(K1-1)xK2 matrix
Delta.Ave = apply(b.Delta,1,sum)/b1 #(K1-1)x1 vector of averaged local effects
fJ1 = c(0,cumsum(Delta.Ave)) #K1x1 vector of accumulated local effects
fJ = fJ - outer(fJ1,rep(1,K2+1)) - outer(rep(1,K1),fJ2)
fJ0 = sum(b*(fJ[,1:K2] + fJ[,2:(K2+1)])/2)/sum(b)
fJ = fJ - fJ0 #K1x(K2+1) matrix
x <- list(levs.ord, z2)
K <- c(K1, K2)
image(1:K1, x[[2]], fJ, xlab=paste("x_",J[1], " (", names(X)[J[1]], ")", sep=""), ylab= paste("x_",J[2], " (", names(X)[J[2]], ")", sep=""), ylim = range(z2), yaxs = "i")
contour(1:K1, x[[2]], fJ, add=TRUE, drawlabels=TRUE)
axis(side=1, labels=x[[1]], at=1:K1, las = 3, padj=1.2) #add level names to x-axis
if (NA.plot == FALSE) {#plot black rectangles over the empty cell regions if NA.plot == FALSE
if (nrow(NA.ind) > 0) {
NA.ind = which(b==0, arr.ind=T, useNames = F) #2-column matrix of row and column indices for empty cells
rect(xleft = NA.ind[,1]-0.5, ybottom = z2[NA.ind[,2]], xright = NA.ind[,1]+0.5, ytop = z2[NA.ind[,2]+1], col="black")
}
}#end of if (NA.plot == FALSE) statement to plot black rectangles for empty cells
} #end of if (class(X[,J[1]]) == "factor") statement
else if (class(X[,J[1]]) == "numeric" | class(X[,J[1]]) == "integer") {#for numerical/integer X[,J[1]], calculate the ALE plot
#find the vectors of z values corresponding to the quantiles of X[,J[1]] and X[,J[2]]
z1 = c(min(X[,J[1]]), as.numeric(quantile(X[,J[1]],seq(1/K,1,length.out=K), type=1))) #vector of K+1 z values for X[,J[1]]
z1 = unique(z1) #necessary if X[,J(1)] is discrete, in which case z1 could have repeated values
K1 = length(z1)-1 #reset K1 to the number of unique quantile points
#group training rows into bins based on z1
a1 = as.numeric(cut(X[,J[1]], breaks=z1, include.lowest=TRUE)) #N-length index vector indicating into which z1-bin the training rows fall
z2 = c(min(X[,J[2]]), as.numeric(quantile(X[,J[2]],seq(1/K,1,length.out=K), type=1))) #vector of K+1 z values for X[,J[2]]
z2 = unique(z2) #necessary if X[,J(2)] is discrete, in which case z2 could have repeated values
K2 = length(z2)-1 #reset K2 to the number of unique quantile points
fJ = matrix(0,K1,K2) #rows correspond to X[,J(1)] and columns to X[,J(2)]
#group training rows into bins based on z2
a2 = as.numeric(cut(X[,J[2]], breaks=z2, include.lowest=TRUE)) #N-length index vector indicating into which z2-bin the training rows fall
X11 = X #matrix with low X[,J[1]] and low X[,J[2]]
X12 = X #matrix with low X[,J[1]] and high X[,J[2]]
X21 = X #matrix with high X[,J[1]] and low X[,J[2]]
X22 = X #matrix with high X[,J[1]] and high X[,J[2]]
X11[,J] = cbind(z1[a1], z2[a2])
X12[,J] = cbind(z1[a1], z2[a2+1])
X21[,J] = cbind(z1[a1+1], z2[a2])
X22[,J] = cbind(z1[a1+1], z2[a2+1])
y.hat11 = pred.fun(X.model=X.model, newdata = X11)
y.hat12 = pred.fun(X.model=X.model, newdata = X12)
y.hat21 = pred.fun(X.model=X.model, newdata = X21)
y.hat22 = pred.fun(X.model=X.model, newdata = X22)
Delta=(y.hat22-y.hat21)-(y.hat12-y.hat11) #N-length vector of individual local effect values
Delta = as.matrix(tapply(Delta, list(a1, a2), mean)) #K1xK2 matrix of averaged local effects, which includes NA values if a cell is empty
#replace NA values in Delta by the Delta value in their nearest neighbor non-NA cell
NA.Delta = is.na(Delta) #K1xK2 matrix indicating cells that contain no observations
NA.ind = which(NA.Delta, arr.ind=T, useNames = F) #2-column matrix of row and column indices for NA cells
if (nrow(NA.ind) > 0) {
notNA.ind = which(!NA.Delta, arr.ind=T, useNames = F) #2-column matrix of row and column indices for non-NA cells
range1 = max(z1)-min(z1)
range2 = max(z2)-min(z2)
Z.NA = cbind((z1[NA.ind[,1]] + z1[NA.ind[,1]+1])/2/range1, (z2[NA.ind[,2]] + z2[NA.ind[,2]+1])/2/range2) #standardized {z1,z2} values for NA cells corresponding to each row of NA.ind
Z.notNA = cbind((z1[notNA.ind[,1]] + z1[notNA.ind[,1]+1])/2/range1, (z2[notNA.ind[,2]] + z2[notNA.ind[,2]+1])/2/range2) #standardized {z1,z2} values for non-NA cells corresponding to each row of notNA.ind
nbrs <- ann(Z.notNA, Z.NA, k=1, verbose = F)$knnIndexDist[,1] #vector of row indices (into Z.notNA) of nearest neighbor non-NA cells for each NA cell
Delta[NA.ind] = Delta[matrix(notNA.ind[nbrs,], ncol=2)] #Set Delta for NA cells equal to Delta for their closest neighbor non-NA cell. The matrix() command is needed, because if there is only one empty cell, notNA.ind[nbrs] is created as a 2-length vector instead of a 1x2 matrix, which does not index Delta properly
} #end of if (nrow(NA.ind) > 0) statement
#accumulate the values in Delta
fJ = apply(t(apply(Delta,1,cumsum)),2,cumsum) #second-order accumulated effects before subtracting lower order effects
fJ = rbind(rep(0,K2),fJ) #add a first row and first column to fJ that are all zeros
fJ = cbind(rep(0,K1+1),fJ)
#now subtract the lower-order effects from fJ
b=as.matrix(table(a1,a2)) #K1xK2 cell count matrix (rows correspond to X[,J[1]]; columns to X[,J[2]])
b1=apply(b,1,sum) #K1x1 count vector summed across X[,J[2]], as function of X[,J[1]]
b2=apply(b,2,sum) #K2x1 count vector summed across X[,J[1]], as function of X[,J[2]]
Delta =fJ[2:(K1+1),]-fJ[1:K1,] #K1x(K2+1) matrix of differenced fJ values, differenced across X[,J[1]]
b.Delta = b*(Delta[,1:K2]+Delta[,2:(K2+1)])/2
Delta.Ave = apply(b.Delta,1,sum)/b1
fJ1 = c(0,cumsum(Delta.Ave))
Delta = fJ[,2:(K2+1)]-fJ[,1:K2] #(K1+1)xK2 matrix of differenced fJ values, differenced across X[,J[2]]
b.Delta = b*(Delta[1:K1,]+Delta[2:(K1+1),])/2
Delta.Ave = apply(b.Delta,2,sum)/b2
fJ2 = c(0, cumsum(Delta.Ave))
fJ = fJ - outer(fJ1,rep(1,K2+1)) - outer(rep(1,K1+1),fJ2)
fJ0 = sum(b*(fJ[1:K1,1:K2] + fJ[1:K1,2:(K2+1)] + fJ[2:(K1+1),1:K2] + fJ[2:(K1+1), 2:(K2+1)])/4)/sum(b)
fJ = fJ - fJ0
x <- list(z1, z2)
K <- c(K1, K2)
image(x[[1]], x[[2]], fJ, xlab=paste("x_",J[1], " (", names(X)[J[1]], ")", sep=""), ylab= paste("x_",J[2], " (", names(X)[J[2]], ")", sep=""), xlim = range(z1), ylim = range(z2), xaxs = "i", yaxs = "i")
contour(x[[1]], x[[2]], fJ, add=TRUE, drawlabels=TRUE)
if (NA.plot == FALSE) {#plot black rectangles over the empty cell regions if NA.plot == FALSE
if (nrow(NA.ind) > 0) {
rect(xleft = z1[NA.ind[,1]], ybottom = z2[NA.ind[,2]], xright = z1[NA.ind[,1]+1], ytop = z2[NA.ind[,2]+1], col="black")
}
}#end of if (NA.plot == FALSE) statement to plot black rectangles for empty cells
} #end of else if (class(X[,J[1]]) == "numeric" | class(X[,J[1]]) == "integer") statement
else stop("class(X[,J[1]]) must be either factor or numeric or integer")
} #end of "if (length(J) == 2)" statement
else stop("J must be a vector of length one or two")
list(K=K, x.values=x, f.values = fJ)
}
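#Example of use (illustrative sketch, not part of the original file; the data
#frame dat and the wrapper yhat are hypothetical). ALEPlot only requires that
#pred.fun(X.model, newdata) return a numeric vector of predictions:
#lm.fit <- lm(y ~ ., data = dat) #dat: response y plus numeric/factor predictors
#yhat <- function(X.model, newdata) as.numeric(predict(X.model, newdata))
#ALEPlot(dat[,-1], lm.fit, pred.fun = yhat, J = 1, K = 40) #main-effect ALE plot for predictor 1
#ALEPlot(dat[,-1], lm.fit, pred.fun = yhat, J = c(1, 2), K = 20) #second-order ALE plot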
| /scratch/gouwar.j/cran-all/cranData/ALEPlot/R/ALEPlot.R |
PDPlot <-
function(X, X.model, pred.fun, J, K) {
N = dim(X)[1] #sample size
d = dim(X)[2] #number of predictor variables
if (length(J) == 1) { #calculate main effects PD plot
if (class(X[,J]) == "numeric" | class(X[,J]) == "integer") {#for numeric or integer X[,J], calculate the ALE plot
fJ = numeric(K)
xmin = min(X[,J])
xmax = max(X[,J])
x = seq(xmin, xmax, length.out=K)
for (k in 1:K) {
X.predict = X
X.predict[,J] = x[k]
y.hat = pred.fun(X.model=X.model, newdata = X.predict)
fJ[k] = mean(y.hat)
} #end of for loop
#now vertically translate fJ, by subtracting its average (averaged across X[,J])
a<-cut(X[,J], breaks=c(xmin-(x[2]-x[1]),x), include.lowest=TRUE)
b<- as.numeric(table(a)) #frequency count vector of X[,J] values falling into x intervals
fJ = fJ - sum(fJ*b)/sum(b)
plot(x, fJ, type="l", xlab = paste("x_",J, " (", names(X)[J], ")", sep=""), ylab = paste("f_",J,"(x_",J,")", sep=""))
} #end of if (class(X[,J]) == "numeric" | class(X[,J]) == "integer") statement
else if (class(X[,J]) == "factor") {#for factor X[,J], calculate the ALE plot
#Get rid of any empty levels of x and tabulate level counts and probabilities
X[,J] <- droplevels(X[,J])
x.count <- as.numeric(table(X[,J])) #frequency count vector for levels of X[,J]
x.prob <- x.count/sum(x.count) #probability vector for levels of X[,J]
K <- nlevels(X[,J]) #reset K to the number of levels of X[,J]
x <- levels(X[,J]) #as.character levels of X[,J] in original order
fJ = numeric(K)
for (k in 1:K) {
X.predict = X
X.predict[,J] = x[k]
y.hat = pred.fun(X.model=X.model, newdata = X.predict)
fJ[k] = mean(y.hat)
} #end of for loop
#now vertically translate fJ, by subtracting its average (averaged across X[,J])
fJ = fJ - sum(fJ*x.prob)
barplot(fJ, names=x, xlab=paste("x_", J, " (", names(X)[J], ")", sep=""), ylab= paste("f_",J,"(x_",J,")", sep=""), las =3)
} #end of else if (class(X[,J]) == "factor") statement
else print("error: class(X[,J]) must be either factor or numeric or integer")
} #end of if (length(J) == 1) statement
else if (length(J) == 2) { #calculate second-order effects PD plot
if (class(X[,J[2]]) != "numeric" & class(X[,J[2]]) != "integer") {
print("error: X[,J[2]] must be numeric or integer. Only X[,J[1]] can be a factor")
}
if (class(X[,J[1]]) == "factor") {#for categorical X[,J[1]], calculate the PD plot
#Get rid of any empty levels of x and tabulate level counts and probabilities
X[,J[1]] <- droplevels(X[,J[1]])
K1 <- nlevels(X[,J[1]]) #set K1 to the number of levels of X[,J[1]]
fJ = matrix(0,K1,K)
x1.char <- levels(X[,J[1]]) #as.character levels of X[,J[1]] in original order
x1.num <- 1:K1 #numeric version of levels of X[,J[1]]
xmin2 = min(X[,J[2]])
xmax2 = max(X[,J[2]])
x2 = seq(xmin2, xmax2, length.out=K)
for (k1 in 1:K1) {
for (k2 in 1:K) {
X.predict = X
X.predict[,J[1]] = x1.char[k1]
X.predict[,J[2]] = x2[k2]
y.hat = pred.fun(X.model=X.model, newdata = X.predict)
fJ[k1,k2] = mean(y.hat)
} #end of k2 for loop
} #end of k1 for loop
#now vertically translate fJ, by subtracting the averaged main effects
b1=as.numeric(table(X[,J[1]])) #K1-length frequency count vector of X[,J[1]] values falling into x1 levels
a2=cut(X[,J[2]], breaks=c(xmin2-(x2[2]-x2[1]),x2), include.lowest=TRUE)
b2=as.numeric(table(a2)) #K-length frequency count vector of X[,J[2]] values falling into x2 intervals
b=as.matrix(table(X[,J[1]],a2)) #K1xK frequency count matrix (rows correspond to x1; columns to x2)
fJ1=apply(t(fJ)*b2,2,sum)/sum(b2) #main PD effect of x1 on fJ
fJ2=apply(fJ*b1,2,sum)/sum(b1) #main PD effect of x2 on fJ
fJ = fJ - outer(fJ1,rep(1,K)) - outer(rep(1,K1),fJ2)
fJ0=sum(fJ*b)/sum(b) #average of fJ
fJ=fJ - fJ0
x <- list(x1.char, x2)
K <- c(K1, K)
image(x1.num, x2, fJ, xlab=paste("x_",J[1], " (", names(X)[J[1]], ")", sep=""), ylab= paste("x_",J[2], " (", names(X)[J[2]], ")", sep=""), ylim = range(x2), yaxs = "i")
contour(x1.num, x2, fJ, add=TRUE, drawlabels=TRUE)
axis(side=1, labels=x1.char, at=1:K1, las = 3, padj=1.2) #add level names to x-axis
} #end of if (class(X[,J[1]]) == "factor") statement
else if (class(X[,J[1]]) == "numeric" | class(X[,J[1]]) == "integer") {#for numerical/integer X[,J[1]], calculate the PD plot
fJ = matrix(0,K,K)
xmin1 = min(X[,J[1]])
xmax1 = max(X[,J[1]])
xmin2 = min(X[,J[2]])
xmax2 = max(X[,J[2]])
x1 = seq(xmin1, xmax1, length.out=K)
x2 = seq(xmin2, xmax2, length.out=K)
for (k1 in 1:K) {
for (k2 in 1:K) {
X.predict = X
X.predict[,J[1]] = x1[k1]
X.predict[,J[2]] = x2[k2]
y.hat = pred.fun(X.model=X.model, newdata = X.predict)
fJ[k1,k2] = mean(y.hat)
} #end of k2 for loop
} #end of k1 for loop
#now vertically translate fJ, by subtracting the averaged main effects
a1=cut(X[,J[1]], breaks=c(xmin1-(x1[2]-x1[1]),x1), include.lowest=TRUE)
a2=cut(X[,J[2]], breaks=c(xmin2-(x2[2]-x2[1]),x2), include.lowest=TRUE)
b1=as.numeric(table(a1)) #frequency count vector of X[,J[1]] values falling into x1 intervals
b2=as.numeric(table(a2)) #frequency count vector of X[,J[2]] values falling into x2 intervals
b=as.matrix(table(a1,a2)) #frequency count matrix (rows correspond to x1; columns to x2)
fJ1=apply(t(fJ)*b2,2,sum)/sum(b2) #main PD effect of x1 on fJ
fJ2=apply(fJ*b1,2,sum)/sum(b1) #main PD effect of x2 on fJ
fJ = fJ - outer(fJ1,rep(1,K)) - outer(rep(1,K),fJ2)
fJ0=sum(fJ*b)/sum(b) #average of fJ
fJ=fJ - fJ0
x <- list(x1, x2)
K <- c(K, K)
image(x1, x2, fJ, xlab=paste("x_",J[1], " (", names(X)[J[1]], ")", sep=""), ylab= paste("x_",J[2], " (", names(X)[J[2]], ")", sep=""), xlim = range(x1), ylim = range(x2), xaxs = "i", yaxs = "i")
contour(x1, x2, fJ, add=TRUE, drawlabels=TRUE)
} #end of else if (class(X[,J[1]]) == "numeric" | class(X[,J[1]]) == "integer") statement
else print("error: class(X[,J[1]]) must be either factor or numeric/integer")
} #end of if (length(J) == 2) statement
else print("error: J must be a vector of length one or two")
list(x.values=x, f.values = fJ)
}
| /scratch/gouwar.j/cran-all/cranData/ALEPlot/R/PDPlot.R |
"als" <- function(CList, PsiList, S=matrix(), WList=list(), thresh =
.001, maxiter = 100, forcemaxiter=FALSE, optS1st=TRUE,
x=1:nrow(CList[[1]]), x2 = 1:nrow(S), baseline=FALSE,
fixed=vector("list", length(PsiList)), uniC=FALSE,
uniS=FALSE, nonnegC = TRUE, nonnegS = TRUE,
normS=0, closureC=list())
{
RD <- 10^20
PsiAll <- do.call("rbind", PsiList)
resid <- vector("list", length(PsiList))
# if a weighting specification is absent, set weights to unity
if(length(WList) == 0){
WList <- vector("list", length(PsiList))
for(i in 1:length(PsiList))
WList[[i]] <- matrix(1, nrow(PsiList[[1]]), ncol(PsiList[[1]]))
}
W <- do.call("rbind",WList)
# initialize residual matrices to zero
for(i in 1:length(PsiList)) resid[[i]] <- matrix(0, nrow(PsiList[[i]]),
ncol(PsiList[[i]]))
# determine the residuals at the starting values
for(j in 1:length(PsiList)) {
for(i in 1:nrow(PsiList[[j]])) {
resid[[j]][i,] <- PsiList[[j]][i,] - CList[[j]][i,] %*% t(S * WList[[j]][i,])
}
}
# set the initial residual sum of squares
initialrss <- oldrss <- sum(unlist(resid)^2)
cat("Initial RSS", initialrss, "\n")
iter <- 1
b <- if(optS1st) 1 else 0
oneMore <- FALSE
while( ((RD > thresh || forcemaxiter ) && maxiter >= iter) || oneMore) {
if(iter %% 2 == b) ## solve for S, get RSS
S <- getS(CList, PsiAll, S, W, baseline, uniS, nonnegS, normS, x2)
else ## solve for CList, get resid matrices in this step only
CList <- getCList(S, PsiList, CList, WList, resid, x, baseline,
fixed, uniC, nonnegC, closureC)
# determine the residuals
for(j in 1:length(PsiList)) {
for(i in 1:nrow(PsiList[[j]])) {
resid[[j]][i,] <- PsiList[[j]][i,] - CList[[j]][i,] %*% t(S * WList[[j]][i,])
}
}
rss <- sum(unlist(resid)^2)
RD <- ((oldrss - rss) / oldrss)
oldrss <- rss
typ <- if(iter %% 2 == b) "S" else "C"
cat("Iteration (opt. ", typ, "): ", iter, ", RSS: ", rss, ", RD: ", RD,
"\n", sep = "")
iter <- iter + 1
## make sure the last iteration enforces any normalization/closure
oneMore <- (normS > 0 && (iter %% 2 != b) && maxiter != 1) ||
(length(closureC) > 0 && (iter %% 2 == b) )
}
cat("Initial RSS / Final RSS =", initialrss, "/", rss, "=",
initialrss/rss,"\n")
return(list(CList = CList, S = S, rss = rss, resid = resid, iter = iter))
}
| /scratch/gouwar.j/cran-all/cranData/ALS/R/als.R |
# Revision 2.1 2005/06/06
# - Modified default behavior with 0's and NA's in
# 'height' so that these values are not plotted.
# - Warning messages added in the case of the above.
# Revision 2.0 2005/04/27
# - Added panel.first and panel.last arguments
# - As per R 2.0.0, the default barplot() method by default uses a
# gamma-corrected grey palette (rather than the heat color
# palette) for coloring its output when given a matrix.
barplot3 <-
function(
height,
width = 1,
space = NULL,
names.arg = NULL,
legend.text = NULL,
beside = FALSE,
horiz = FALSE,
density = NULL,
angle = 45,
col = NULL,
prcol = NULL,
border = par("fg"),
main = NULL,
sub = NULL, xlab = NULL, ylab = NULL,
xlim = NULL, ylim = NULL, xpd = TRUE, log = "",
axes = TRUE, axisnames = TRUE,
cex.axis = par("cex.axis"), cex.names = par("cex.axis"),
inside = TRUE, plot = TRUE, axis.lty = 0, offset = 0,
plot.ci = FALSE, ci.l = NULL, ci.u = NULL,
ci.color = "black", ci.lty = "solid", ci.lwd = 1,
plot.grid = FALSE, grid.inc = NULL,
grid.lty = "dotted", grid.lwd = 1, grid.col = "black",
add = FALSE, panel.first = NULL, panel.last = NULL,
names.side = 1, names.by = 1, ...)
{
if (!missing(inside)) .NotYetUsed("inside", error = FALSE)# -> help(.)
if (missing(space))
space <- if (is.matrix(height) && beside) c(0, 1) else 0.2
space <- space * mean(width)
if (plot && axisnames && missing(names.arg))
names.arg <-
if(is.matrix(height)) colnames(height) else names(height)
if (is.vector(height)
|| (is.array(height) && (length(dim(height)) == 1))) {
## Treat vectors and 1-d arrays the same.
height <- cbind(height)
beside <- TRUE
## The above may look strange, but in particular makes color
## specs work as most likely expected by the users.
if(is.null(col)) col <- "grey"
} else if (is.matrix(height)) {
## In the matrix case, we use "heat colors" by default.
if(is.null(col)) col <- heat.colors(nrow(height))
}
else
stop(paste(sQuote("height"), "must be a vector or a matrix"))
if(is.logical(legend.text))
legend.text <-
if(legend.text && is.matrix(height)) rownames(height)
# Check for log scales
logx <- FALSE
logy <- FALSE
if (log != "")
{
if (any(grep("x", log)))
logx <- TRUE
if (any(grep("y", log)))
logy <- TRUE
}
# Cannot "hatch" with rect() when log scales used
if ((logx || logy) && !is.null(density))
stop("Cannot use shading lines in bars when log scale is used")
NR <- nrow(height)
NC <- ncol(height)
if (beside) {
if (length(space) == 2)
space <- rep.int(c(space[2], rep.int(space[1], NR - 1)), NC)
width <- rep(width, length.out = NR)
} else
width <- rep(width, length.out = NC)
offset <- rep(as.vector(offset), length.out = length(width))
delta <- width / 2
w.r <- cumsum(space + width)
w.m <- w.r - delta
w.l <- w.m - delta
#if graphic will be stacked bars, do not plot ci
if (!beside && (NR > 1) && plot.ci)
plot.ci = FALSE
# error check ci arguments
if (plot && plot.ci)
{
if ((missing(ci.l)) || (missing(ci.u)))
stop("confidence interval values are missing")
if (is.vector(ci.l)
|| (is.array(ci.l) && (length(dim(ci.l)) == 1)))
ci.l <- cbind(ci.l)
else if (!is.matrix(ci.l))
stop(paste(sQuote("ci.l"), "must be a vector or a matrix"))
if (is.vector(ci.u)
|| (is.array(ci.u) && (length(dim(ci.u)) == 1)))
ci.u <- cbind(ci.u)
else if (!is.matrix(ci.u))
stop(paste(sQuote("ci.u"), "must be a vector or a matrix"))
if (any(dim(height) != dim(ci.u)))
stop(paste(sQuote("height"), "and", sQuote("ci.u"),
"must have the same dimensions."))
else if (any(dim(height) != dim(ci.l)))
stop(paste(sQuote("height"), "and", sQuote("ci.l"),
"must have the same dimensions."))
}
# check height + offset/ci.l if using log scale to prevent log(<=0) error
# adjust appropriate ranges and bar base values
if ((logx && horiz) || (logy && !horiz))
{
# Check for NA values and issue warning if required
height.na <- sum(is.na(height))
if (height.na > 0)
{
warning(sprintf("%.0f values == NA in 'height' omitted from logarithmic plot",
height.na), domain = NA)
}
# Check for 0 values and issue warning if required
# _FOR NOW_ change 0's to NA's so that other calculations are not
# affected. 0's and NA's affect plot output in the same way anyway,
# except for stacked bars, so don't change those.
height.lte0 <- sum(height <= 0, na.rm = TRUE)
if (height.lte0 > 0)
{
warning(sprintf("%0.f values <=0 in 'height' omitted from logarithmic plot",
height.lte0), domain = NA)
# If NOT stacked bars, modify 'height'
if (beside)
height[height <= 0] <- NA
}
if (plot.ci && (min(ci.l) <= 0))
stop("log scale error: at least one lower c.i. value <= 0")
if (logx && !is.null(xlim) && (xlim[1] <= 0))
stop("log scale error: 'xlim[1]' <= 0")
if (logy && !is.null(ylim) && (ylim[1] <= 0))
stop("'log scale error: 'ylim[1]' <= 0")
# arbitrary adjustment to display some of bar for min(height) since
# 0 cannot be used with log scales. If plot.ci, also check ci.l
if (plot.ci)
{
rectbase <- c(height[is.finite(height)], ci.l)
rectbase <- min(0.9 * rectbase[rectbase > 0])
}
else
{
rectbase <- height[is.finite(height)]
rectbase <- min(0.9 * rectbase[rectbase > 0])
}
# if axis limit is set to < above, adjust bar base value
# to draw a full bar
if (logy && !is.null(ylim) && !horiz)
rectbase <- ylim[1]
else if (logx && !is.null(xlim) && horiz)
rectbase <- xlim[1]
# if stacked bar, set up base/cumsum levels, adjusting for log scale
if (!beside)
height <- rbind(rectbase, apply(height, 2, cumsum))
# if plot.ci, be sure that appropriate axis limits are set to include range(ci)
lim <-
if (plot.ci)
c(height, ci.l, ci.u)
else
height
rangeadj <- c(0.9 * lim + offset, lim + offset)
rangeadj <- rangeadj[rangeadj > 0]
}
else
{
# Use original bar base value
rectbase <- 0
# if stacked bar, set up base/cumsum levels
if (!beside)
height <- rbind(rectbase, apply(height, 2, cumsum))
# if plot.ci, be sure that appropriate axis limits are set to include range(ci)
lim <-
if (plot.ci)
c(height, ci.l, ci.u)
else
height
# use original range adjustment factor
rangeadj <- c(-0.01 * lim + offset, lim + offset)
}
# define xlim and ylim, adjusting for log-scale if needed
if (horiz)
{
if (missing(xlim)) xlim <- range(rangeadj, na.rm=TRUE)
if (missing(ylim)) ylim <- c(min(w.l), max(w.r))
}
else
{
if (missing(xlim)) xlim <- c(min(w.l), max(w.r))
if (missing(ylim)) ylim <- range(rangeadj, na.rm=TRUE)
}
if (beside)
w.m <- matrix(w.m, ncol = NC)
if(horiz)
names.side <- 2
if(plot) ##-------- Plotting :
{
opar <-
if (horiz) par(xaxs = "i", xpd = xpd)
else par(yaxs = "i", xpd = xpd)
on.exit(par(opar))
# If add = FALSE open new plot window
# else allow for adding new plot to existing window
if (!add)
{
plot.new()
plot.window(xlim, ylim, log = log, ...)
}
# Execute the panel.first expression. This will work here
# even if 'add = TRUE'
panel.first
# Set plot region coordinates
usr <- par("usr")
# adjust par("usr") values if log scale(s) used
if (logx)
{
usr[1] <- 10 ^ usr[1]
usr[2] <- 10 ^ usr[2]
}
if (logy)
{
usr[3] <- 10 ^ usr[3]
usr[4] <- 10 ^ usr[4]
}
# if prcol specified, set plot region color
if (!missing(prcol))
rect(usr[1], usr[3], usr[2], usr[4], col = prcol)
# if plot.grid, draw major y-axis lines if vertical or x axis if horizontal
# R V1.6.0 provided axTicks() as an R equivalent of the C code for
# CreateAtVector. Use this to determine default axis tick marks when log
# scale used to be consistent when no grid is plotted.
# Otherwise if grid.inc is specified, use pretty()
if (plot.grid)
{
par(xpd = FALSE)
if (is.null(grid.inc))
{
if (horiz)
{
grid <- axTicks(1)
abline(v = grid, lty = grid.lty, lwd = grid.lwd, col = grid.col)
}
else
{
grid <- axTicks(2)
abline(h = grid, lty = grid.lty, lwd = grid.lwd, col = grid.col)
}
}
else
{
if (horiz)
{
grid <- pretty(xlim, n = grid.inc)
abline(v = grid, lty = grid.lty, lwd = grid.lwd, col = grid.col)
}
else
{
grid <- pretty(ylim, n = grid.inc)
abline(h = grid, lty = grid.lty, lwd = grid.lwd, col = grid.col)
}
}
par(xpd = xpd)
}
xyrect <- function(x1,y1, x2,y2, horizontal = TRUE, ...)
{
if(horizontal)
rect(x1,y1, x2,y2, ...)
else
rect(y1,x1, y2,x2, ...)
}
if (beside)
xyrect(rectbase + offset, w.l, c(height) + offset, w.r, horizontal=horiz,
angle = angle, density = density, col = col, border = border)
else
{
for (i in 1:NC)
xyrect(height[1:NR, i] + offset[i], w.l[i], height[-1, i] + offset[i], w.r[i],
horizontal=horiz, angle = angle, density = density,
col = col, border = border)
}
# Execute the panel.last expression here
panel.last
if (plot.ci)
{
# CI plot width = barwidth / 2
ci.width = width / 4
if (horiz)
{
segments(ci.l, w.m, ci.u, w.m, col = ci.color, lty = ci.lty, lwd = ci.lwd)
segments(ci.l, w.m - ci.width, ci.l, w.m + ci.width, col = ci.color, lty = ci.lty, lwd = ci.lwd)
segments(ci.u, w.m - ci.width, ci.u, w.m + ci.width, col = ci.color, lty = ci.lty, lwd = ci.lwd)
}
else
{
segments(w.m, ci.l, w.m, ci.u, col = ci.color, lty = ci.lty, lwd = ci.lwd)
segments(w.m - ci.width, ci.l, w.m + ci.width, ci.l, col = ci.color, lty = ci.lty, lwd = ci.lwd)
segments(w.m - ci.width, ci.u, w.m + ci.width, ci.u, col = ci.color, lty = ci.lty, lwd = ci.lwd)
}
}
if (axisnames && !is.null(names.arg)) # specified or from {col}names
{
at.l <-
if (length(names.arg) != length(w.m))
{
if (length(names.arg) == NC) # i.e. beside (!)
colMeans(w.m)
else
stop("incorrect number of names")
}
else w.m
axis(names.side, at = at.l[seq(1, length(at.l), by = names.by)],
labels = names.arg[seq(1, length(at.l), by = names.by)],
lty = axis.lty, cex.axis = cex.names, ...)
}
if(!is.null(legend.text))
{
legend.col <- rep(col, length = length(legend.text))
if((horiz & beside) || (!horiz & !beside))
{
legend.text <- rev(legend.text)
legend.col <- rev(legend.col)
density <- rev(density)
angle <- rev(angle)
}
# adjust legend x and y values if log scaling in use
if (logx)
legx <- usr[2] - ((usr[2] - usr[1]) / 10)
else
legx <- usr[2] - xinch(0.1)
if (logy)
legy <- usr[4] - ((usr[4] - usr[3]) / 10)
else
legy <- usr[4] - yinch(0.1)
legend(legx, legy,
legend = legend.text, angle = angle, density = density,
fill = legend.col, xjust = 1, yjust = 1)
}
title(main = main, sub = sub, xlab = xlab, ylab = ylab, ...)
# if axis is to be plotted, adjust for grid "at" values
if (axes)
{
if(plot.grid)
axis(if(horiz) 1 else 2, at = grid, cex.axis = cex.axis, ...)
else
axis(if(horiz) 1 else 2, cex.axis = cex.axis, ...)
}
invisible(w.m)
}
else w.m
}
| /scratch/gouwar.j/cran-all/cranData/ALS/R/barplot3.R |
"getC" <- function(EAllAllTimes, PsiAll_T, C, baseline, uni, nonnegC, x) {
for(i in 1:ncol(PsiAll_T))
C[i,] <- if(nonnegC) coef(nnls::nnls(A = EAllAllTimes[[i]], b = PsiAll_T[,i]))
else qr.coef(qr( EAllAllTimes[[i]]), PsiAll_T[,i])
if(uni) {
ncolel <- ncol(C)
if(baseline)
ncolel <- ncolel - 1
for(i in 1:ncolel) {
C[,i] <- Iso::ufit(y=C[,i],x=x)$y
}
}
C
}
| /scratch/gouwar.j/cran-all/cranData/ALS/R/getC.R |
"getCList" <- function(S, PsiList, CList, WList, resid, x, baseline,
fixed, uni, nonnegC, closureC) {
for(j in 1:length(PsiList)) {
S[which(is.nan(S))] <- 1
if(length(fixed[[j]])>0)
S <- S[, -fixed[[j]]]
for(i in 1:nrow(PsiList[[j]])) {
if(nonnegC)
cc <- try(nnls::nnls(A = S * WList[[j]][i,], b = PsiList[[j]][i,]))
else
cc <- try(qr.coef(qr(S * WList[[j]][i,]), PsiList[[j]][i,]))
if(inherits(cc, "try-error"))
sol <- rep(1, ncol(S))
else
sol <- if(nonnegC) coef(cc) else cc
cc1 <- rep(NA, ncol(CList[[j]]))
if(length(fixed[[j]])>0)
cc1[fixed[[j]]] <- 0
cc1[is.na(cc1)] <- sol
CList[[j]][i,] <- cc1
}
}
if(uni) {
for(j in 1:length(PsiList)) {
ncolel <- ncol(CList[[j]])
if(baseline)
ncolel <- ncolel - 1
for(i in 1:ncolel) {
CList[[j]][,i] <- Iso::ufit(y=CList[[j]][,i],x=x)$y
}
}
}
if(length(closureC) > 1) {
for(j in 1:length(PsiList))
for(i in 1:nrow(PsiList[[j]]))
CList[[j]][i,] <- sum((CList[[j]][i,]*closureC[[j]][i])/
max(sum(CList[[j]][i,])))
}
CList
}
| /scratch/gouwar.j/cran-all/cranData/ALS/R/getCList.R |
"getS" <- function(CList, PsiAll, S, W, baseline, uni, nonnegS, normS, x2) {
C <- do.call("rbind",CList)
C[which(is.nan(C))] <- 1
for(i in 1:ncol(PsiAll)) {
if(nonnegS)
s <- try(nnls::nnls(A = C * W[,i], b = PsiAll[,i]))
else
s <- try(qr.coef( qr(C * W[,i]), PsiAll[,i]))
if(inherits(s, "try-error"))
S[i,] <- rep(1, ncol(C))
else S[i,] <- if(nonnegS) coef(s) else s
}
if(uni) {
ncolel <- ncol(C)
if(baseline)
ncolel <- ncolel - 1
for(i in 1:ncolel)
S[i,] <- Iso::ufit(y=S[i,],x=x2)$y
}
if(normS>0) {
if(normS==1)
S <- normdat(S)
else {
for(i in 1:ncol(S)) {
nm <- sqrt(sum((S[,i])^2))
S[,i] <- S[,i]/nm
}
}
}
S
}
| /scratch/gouwar.j/cran-all/cranData/ALS/R/getS.R |
`getWSList` <-
function(S, WList, tt) {
SList <- vector("list", length=length(WList))
for(j in 1:length(WList))
SList[[j]] <- t(S * WList[[j]][tt,])
SList
}
| /scratch/gouwar.j/cran-all/cranData/ALS/R/getWSList.R |
`getWSListAllTimes` <-
function(S,WList) {
allWSList <- vector("list",nrow(WList[[1]]))
for(i in 1:nrow(WList[[1]]))
allWSList[[i]] <- getWSList(S, WList, i)
allWSList
}
| /scratch/gouwar.j/cran-all/cranData/ALS/R/getWSListAllTimes.R |
matchFactor <- function(u,s, type = "dot") {
s2u <- sum(u^2)
s2s <- sum(s^2)
if(s2u == 0 || s2s == 0)
ret <- if( s2u == 0 && s2s == 0) 1 else 0
else {
if(type=="euclid")
ret <- 1/ ( 1+sum( ( (u/sqrt(s2u)) - (s/sqrt(s2s)))^2 ))
if(type=="dot")
ret <- (u%*%s)/(sqrt(s2u)*sqrt(s2s))
}
ret
}
| /scratch/gouwar.j/cran-all/cranData/ALS/R/matchFactor.R |
"normdat" <-
function (mat)
{
if(!is.matrix(mat))
mat <- as.matrix(mat)
for (i in 1:ncol(mat))
mat[, i] <- mat[, i]/max(abs(mat[, i]))
mat
}
| /scratch/gouwar.j/cran-all/cranData/ALS/R/normdat.R |
plotS <- function(S, x2, out="",
filename=paste("S.", out,sep=""),
col=vector(), cex = 1, lab="", cex.lab=1) {
if(out=="pdf")
pdf(filename)
if(out=="ps")
postscript(filename)
par(mgp = c(2, 1, 0), mar=c(0,0,0,0), oma = c(1,0,4,0))
if(length(lab)>0)
par(oma=c(4,0,4,0), cex.lab=cex.lab)
if(ncol(S) > 2)
par(mfrow=c( ceiling( ncol(S) / 2), 2))
if(ncol(S) == 2) # make 1 row 2 col if only plotting 2 comp.
par(mfrow=c(2,1))
if(ncol(S) == 1)
par(mfrow=c(1,1))
par(cex=cex)
for(j in 1:ncol(S)) {
names.side <- 0
if(j <= 2)
names.side <- 3
if( j >= ncol(S)-1 ){
names.side <- 1
par(mar=c(2,0,0,0))
}
if(length(col)==0)
cl <- j
else
cl <- col[j]
if(names.side != 0)
barplot3(S[,j], col = cl, border = cl,
names.arg = x2,
names.side = names.side,
names.by = 30,
axes=FALSE)
else
barplot3(S[,j], col = cl, border = cl, axes=FALSE,
names.arg="")
}
if(length(lab)>0)
mtext(lab, side = 1, outer = TRUE, line = 1, cex=cex.lab)
if(out=="ps"||out=="pdf") dev.off()
}
| /scratch/gouwar.j/cran-all/cranData/ALS/R/plotSp.R |
ALS.CPC <-function(alpha,beta,sigma,epsilon,G,nval,D,S){
orthonormal<- function(B){
p <- nrow(B)
M <- ncol(B)
W <- B
if (M > 1){for (i in 2:M){C <- c(crossprod(B[,i],W[,1:(i-1)])) /diag(crossprod(W[,1:(i-1)]))
W[,i] <- B[,i] - matrix(W[,1:(i-1)],nrow=p) %*% matrix(C,nrow=i-1)}}
C <- 1/sqrt(diag(crossprod(W)))
W <- t(t(W) * C)
return(W)
}
#==========================================
QRdecomposition<-function(A){
res<-vector("list",length=2)
Q<-orthonormal(A)
R<-matrix(nrow=dim(A)[2],ncol=dim(A)[2])
for(i in 1:dim(A)[2]){
for( j in 1:dim(A)[2]){
if(i<j){R[i,j]<-c(crossprod(Q[,i],A[,j]))} #R[i,j] is the inner product of column i of Q with column j of A
if(i>j){R[i,j]<-0 }
S<-mat.or.vec(dim(A)[1], 1)
if(i==j){
if(j==1){R[j,j]<-sqrt(t(A[,i])%*%A[,i])}
else{
for(r in 1:(j-1)){S<-S+R[r,j]*Q[,r]};R[j,j]<-sqrt(t(A[,j]-S)%*%(A[,j]-S))}
}}}
res[[1]]<-Q
res[[2]]<-R
return(res)
}
#==================================================
RetractionQR<-function(D,V){
result<-QRdecomposition(D+V)[[1]]
return(result)
}
#=============================================
objectfunctionG<-function(G,nval,D,S){
#S is list of covariance matrices
values<-numeric(G)
for(i in 1:G){values[i]<-nval[i]*log(det(diag(diag(t(D)%*%S[[i]]%*%(D)))))}
return(sum(values))
}
#============================================================
unconsgradfG<-function(G,nval,D,S){
p=dim(D)[1]
#W list of covariance matrices and A list of positive diagonal matrices
components<-vector("list",length=G)
Z=mat.or.vec(p, p)
for(g in 1:G){components[[g]]<-nval[g]*2*t(S[[g]])%*%D%*%solve(diag(diag(t(D)%*%S[[g]]%*%(D))))
Z <- Z+components[[g]]}
return(Z)}
#=================================================================
sym<-function(M){(M+t(M))/2}
#===============================================================
Projection<-function(Z,D){Z-D%*%sym(t(D)%*%Z)}
#================================================
gradfG<-function(G,nval,D,S){Projection(unconsgradfG(G,nval,D,S),D)}
#================================================================
frobenius.product<-function(A,B){
#Frobenius inner product: sum of elementwise products of two equal-sized matrices
sum(A*B)
}
#=================== armijo step size ===========================
Armijo<-function(alpha,beta,sigma,G,nval,D,S){
m<-0
repeat{lower<-alpha*sigma*(beta^m)*frobenius.product(gradfG(G,nval,D,S),gradfG(G,nval,D,S))
Re<-RetractionQR(D,-alpha*beta^m *gradfG(G,nval,D,S))
upper<-objectfunctionG(G,nval,D,S)-objectfunctionG(G,nval,Re,S)
if(upper>=lower){armijo<-m;break}
m<-m+1
}
return(alpha*beta^armijo)
}
#=====================================================
Respons<-vector("list")
Respons[[1]]<-D;t<-NULL
j<-1
repeat{
t[j]<-Armijo(alpha,beta,sigma,G,nval,Respons[[j]],S)
Respons[[j+1]]<-RetractionQR(Respons[[j]],-t[j]*gradfG(G,nval,Respons[[j]],S))
if(abs(objectfunctionG(G,nval,Respons[[j]],S)-objectfunctionG(G,nval,Respons[[j+1]],S))<epsilon){fin<-j;break}
j<-j+1
}
return(Respons[[fin]])
}
| /scratch/gouwar.j/cran-all/cranData/ALSCPC/R/ALSCPC.R |
#' @export
#'
#' @import SuppDists
#'
#'
oneway <- function(y,group,alpha=0.05,MSE=NULL,c.value=0,mc=NULL,residual=c('simple','semistudentized','studentized','studentized.deleted'),omission.variable=NULL){
x<-group
means<-tapply(y,x,mean)
r<-length(table(x))
residual<-match.arg(residual)
a<-alpha
fit <- lm(y ~ factor(x)) ## ok ##BF
nt<-length(y)
n<-tapply(y,x,length)
if ( is.null(MSE) ){
mse<-(deviance(fit))/(fit$df.residual)
}else{
mse<-MSE
}
rvo<-as.integer(names(table(x)))
ci<-c()
cj<-c()
for (i in 1:(r-1)){
ii<-i+1
for ( j in ii:r){
ci<-c(ci,rvo[i])
cj<-c(cj,rvo[j])
}
}
rn<-paste0(ci,' - ' ,cj)
############################### change x
j<-1
xx<-x
for( i in rvo){
x[x==i]<-j
j<-j+1
}
rv.s<-1:r
ci<-c()
cj<-c()
for (i in 1:(r-1)){
ii<-i+1
for ( j in ii:r){
ci<-c(ci,rv.s[i])
cj<-c(cj,rv.s[j])
}
}
##################### descr
out.des<- cbind(Group=rvo,n=n,mean=tapply(y,x,mean),median=tapply(y,x,median),Var=tapply(y,x,var),SD=tapply(y,x,sd), min=tapply(y,x,min),max=tapply(y,x,max))
########### res
res<-fit$residuals
sem<-res/sqrt(mse)
s<-sqrt( (mse*(n-1))/(n) )
stu<-res/s[x]
del<-rstudent(fit)
if ( (residual=='simple')){
e<-res
tt<-'residuals'
}else if((residual=='semistudentized')){
e<-sem # semi
tt<-'semistudentized residuals'
}else if((residual=='studentized')){
e<-stu
tt<-'studentized residuals'
}else if((residual=='studentized.deleted')){
e<-del
tt<-'studentized deleted residuals'
}
########### omission variable
if ( !is.null(omission.variable) ){
o1<-omission.variable==unique(omission.variable)[1]
o2<-omission.variable==unique(omission.variable)[2]
plot(e[o1]~fit$fitted.values[o1],xlim=range(fit$fitted.values),ylim=range(e),xlab='Yhat',ylab=tt,main=paste0(tt,' Plot against Fitted Values, categorized by Omission Variable'))
points(fit$fitted.values[o2],e[o2],pch=20)
plot(e[o1],x[o1],yaxt='n',frame.plot=F ,xlim=range(e),ylim=c(0,r),ylab='Group',xlab=tt)
abline(h = 1:r, lty = 2, col = "gray40", lwd = 1)
axis(2,1:r,labels=rvo)
points(e[o2],x[o2],pch=20)
title(paste0('Aligned ',tt,' Dot Plot, categorized by Omission Variable'))
}
######################## seq
plot(e, type = "b", lty = 2, xlab = "Run", ylab = tt,pch=16)
title("Sequence Plot")
############################ plot
qqnorm(e,main = paste0('Normal Q-Q Plot ',tt ))
qqline(e)
boxplot(e,main=paste0('Box plot ',tt ))
hist(e,xlab=tt,main="")
for (i in rvo){
ee<-e[xx==i]
qqnorm(ee,main = paste0('Normal Q-Q Plot ',tt,' Group ',i,' '))
qqline(ee)
hist(ee,main = paste0(' Group ',i,' '),xlab=tt)
}
boxplot(e~xx,xlab='Group',main=paste0('Box plot ',tt ))
plot(e~fit$fitted.values,xlab='Yhat',ylab=tt)
plot(e,rep(0,nt),frame.plot=F,yaxt='n',xlab=tt,ylab='',ylim = c(0,1),main = paste0(' Dot Plot ',tt))
plot(e,x,yaxt='n',frame.plot=F ,ylim=c(0,r),ylab='Group')
abline(h = 1:r, lty = 2, col = "gray40", lwd = 1)
axis(2,1:r,labels=rvo)
title(paste0('Aligned ',tt, ' Dot Plot '))
#plot(TukeyHSD(aov(y~x)))
plot(means,rep(0,r),frame.plot=F,yaxt='n',xlim = range(y),ylab='',ylim = c(0,1))
text(means,0,rvo,pos=3)
title(" Dot plot level means ")
plot(y,x,yaxt='n',frame.plot=F ,ylim=c(0,r),ylab='Group')
abline(h = 1:r, lty = 2, col = "gray40", lwd = 1)
axis(2,1:r,labels=rvo)
title('Aligned Response Variable Dot Plot')
boxplot(y~xx,main='Boxplot of response by group')
barplot(means, xlab = "Group")
title(" Means level Bar Graph")
plot(means~rvo, type = "o", pch = 19, xlab = "Group")
abline(h = mean(y), lty = 2, col = 2, lwd = 1)
title(main = " Main Effects Plot")
##################### single factor sf
t<-qt(1-a/2,nt-r)
s<-sqrt(mse/n)
tv<-(means-c.value)/s
pv<-2*(1-pt(abs(tv),nt-r))
out.sf<-cbind(rvo,n,means,means-t*s,means+t*s,tv,round(pv,6) )
colnames(out.sf)<- c('Group','size','mean', 'lower', 'upper','t','p-value')
row.names(out.sf)<-NULL
################## plot
diff<-t*s
bar <- barplot(means, xlab = 'Group',ylim=c(min(means-2*diff,0),max(means+2*diff,y)))
arrows(bar, means+diff, bar, means-diff, angle = 90, code = 3)
tt<-paste0(' Bar-Interval Graph ', 1-a ,' percent confidence limits for each factor level')
title(tt)
plot(means~rvo, xlim=c(min(rvo-1),max(rvo+1)),ylim=c( min(means-diff)-min(diff) , max(means+diff)+min(diff) ),xlab = "Design")
arrows(rvo, means+diff, rvo, means-diff, angle = 90, code = 3)
abline(h = mean(y), lty = 2, col =2, lwd = 2)
tt<-paste0('Interval Plot ', 1-a ,' percent confidence limits for each factor level')
title(tt)
############# lsd lsd
d<-means[ci]-means[cj]
s<-sqrt( mse*(1/(n[ci]) +1/(n[cj])) )
t<-qt(1-a/2,nt-r)
tv<-(d-c.value)/s
pv<-2*(1-pt(abs(tv),nt-r))
out.lsd<- cbind(d,d-t*s,d+t*s,tv,round(pv,6))
colnames(out.lsd)<- c('difference', 'lower', 'upper','t','p-value')
rownames(out.lsd)<- rn
##################################### contrast one
if ( !is.null(mc) ){
out.c1<-matrix(1:5,1,5)
for ( q in 1:dim(mc)[1]){
l<-sum(means*mc[q,])
s<-sqrt( mse*sum( (mc[q,]^2)/(n) ) )
tv<-(l)/s
t<-qt(1-a/2,nt-r)
pv<-2*(1-pt(abs(tv),nt-r))
pv<-round(pv,6)
out<- cbind(l,l-t*s,l+t*s,tv,pv)
out.c1<-rbind(out.c1,out)
}
colnames(out.c1)<- c('L', 'lower', 'upper','t','p-value')
out.c1<-out.c1[-1,]
}
###############Tukey
# d<-means[ci]-means[cj]
#s<-sqrt( mse*(1/(n[ci]) +1/(n[cj])) )
#t<-qtukey(1-a,r,nt-r)/sqrt(2)
#q<-(sqrt(2)*d)/s
#pv<-2*(1-ptukey(abs(q),r,nt-r)) ######### p-value should be checked
#out.tky<- cbind(d,d-t*s,d+t*s,q,round(pv,4))
#colnames(out.tky)<- c('diffrence', 'lower', 'upper','q*','p-value')
#rownames(out.tky)<- rn
out.tky<-TukeyHSD(aov(y~factor(x)),conf.level=1-a)
plot(out.tky)
#################################### Scheffe (c.value = 0 always; p-value test)
if ( !is.null(mc) ){
outsh<-matrix(1:5,1,5)
for ( q in 1:dim(mc)[1]){
l<-sum(means*mc[q,])
S<-sqrt( (r - 1)*qf(1- a,r-1,nt - r) )
s<-sqrt( mse*sum( ((mc[q,])^2)/(n) ) )
fv<-(l^2)/((r-1)*(s^2))
pv<-1-pf(fv,r-1,nt-r)
out<- cbind(l,l-S*s,l+S*s,fv,pv)
outsh<-rbind(outsh,out)
}
colnames(outsh)<- c('L', 'lower', 'upper','F','p-value')
out.sh<-outsh[-1,]
}
############# Bonferroni (c = 0; p-value check)
if ( !is.null(mc) ){
out.b<-matrix(1:5,1,5)
g<-dim(mc)[1]
for ( q in 1:dim(mc)[1]){
l<-sum(means*mc[q,])
B<-qt(1-a/(2*g),nt - r)
s<-sqrt( mse*sum( ((mc[q,])^2)/(n) ) )
tv<-l/s
pv<-2*(1-pt(abs(tv),nt-r))
out<- cbind(l,l-B*s,l+B*s,tv,pv)
out.b<-rbind(out.b,out)
}
colnames(out.b)<- c('L', 'lower', 'upper','t','p-value')
out.b<-out.b[-1,]
}
############# Hartley test
df<-as.integer(mean(n))
s2<-tapply(y,x,var)
H<-max(s2)/min(s2)
pv<- 1- pmaxFratio(H, df-1,r)
if ( is.numeric(pv)){
out.ht<-cbind(H,pv)
colnames(out.ht)<-c('H','p-value')
}else{
out.ht<-NULL
}
############### Brown-Forsythe test
d<-unlist(tapply(y,x,function(x) abs(x-median(x))))
names(d)<-NULL
f<-anova(lm(d~factor(xx)))
out.bf<-cbind(f$`F value`[1],f$`Pr(>F)`[1])
colnames(out.bf)<-c('F value','p-value')
########## nonparametric rank F test
d<-rank(y)
f<-anova(lm(d~factor(xx)))
out.n<-cbind(f$`F value`[1],f$`Pr(>F)`[1])
colnames(out.n)<- c('F', 'p.value')
############ pairwise nonparametric comparisons (rank means)
means_n<-tapply(d,x,mean)
d<-means_n[ci]-means_n[cj]
g<-(r*(r-1))/2
s<-sqrt( nt*(nt+1)*(1/(n[ci]) +1/(n[cj]) )*(1/12) )
B<-qnorm(1-a/(2*g))
out.np<-cbind(d,d-B*s,d+B*s )
colnames(out.np)<- c('difference', 'lower', 'upper')
row.names(out.np)<-rn
########### outlier
t.outli<-qt(1-a/(2*nt),nt-r-1)
case<-c(1:nt)[abs(rstudent(fit))>t.outli]
g<-y[case]
if ( length(case) != 0 ){
out.ot<-cbind(case=case,y=g,studentized.deleted.residual=rstudent(fit)[case],t.value=rep(t.outli,length(case) ))
}else{
out.ot<-paste0('t value = ',t.outli,'; no outliers detected')
}
####### ANOM
mmm<-mean(means)
tt<-qt(1-a/(2*r),nt-r)
s=c()
for ( i in 1:r){
s<-c(s,sqrt((mse/n[i]) * ((r-1)/r)^2 + (mse/r^2) * sum(1/n[-i])))
}
out.an<-cbind(rvo,means-mmm-tt*s , means-mmm+tt*s)
out.an2<-cbind(rvo,mmm-tt*s , mmm+tt*s)
colnames(out.an)<- c('factor level', 'lower', 'upper')
dd<-diff(range(means))/4
plot(x = seq(r), means, pch = 20, xlim=c(.5,r+.5),ylim = c(min(out.an2[,2],means,mmm), max(out.an2[,3],means,mmm)), xlab = "Levels of Design", ylab = "Mean", xaxt = 'n')
axis(1, seq(r),labels = rvo)
segments(seq(r), mmm, seq(r), means)
lines(seq(1, r+.5, 0.5), rep(out.an2[,3], each = 2), type = "S")
lines(seq(1, r+.5, 0.5), rep(out.an2[,2], each = 2), type = "S")
abline(h = mmm)
############
if ( !is.null(mc) ){
o<-list(descriptive=out.des,fit=summary(fit),anova=anova(fit), Single.factor.level=out.sf,Contrast.NOT.simultaneous=out.c1 ,LSD=out.lsd,Tukey=out.tky,Scheffe=out.sh ,Bonferroni=out.b ,Nonparametric.Rank.F.Test=out.n,Nonparametric.Rank.F.Test.Pairwise=out.np ,Hartley.Test=out.ht,Brown.Forsythe.test=out.bf,Bonferroni.Test.Outlier=out.ot,ANOM.Bonferroni=out.an,residuals=res ,semistudentized.residuals=sem ,studentized.residuals=stu ,studentized.deleted.residuals= del)
}else{
o<-list( descriptive=out.des,fit=summary(fit),anova=anova(fit), Single.factor.level=out.sf,LSD=out.lsd,Tukey=out.tky,Nonparametric.Rank.F.Test=out.n,Nonparametric.Rank.F.Test.Pairwise=out.np ,Hartley.Test=out.ht,Brown.Forsythe.test=out.bf,Bonferroni.Test.Outlier=out.ot,ANOM.Bonferroni=out.an,residuals=res ,semistudentized.residuals=sem ,studentized.residuals=stu ,studentized.deleted.residuals= del)
}
return(o)
}
| /scratch/gouwar.j/cran-all/cranData/ALSM/R/16_17_18_oneway.R |