--- title: " ADMMsigma Tutorial" #author: "Matt Galloway" #date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{ADMMsigma Tutorial} %\VignetteEngine{knitr::knitr} %\usepackage[UTF-8]{inputenc} --- ```{r setup, include=FALSE} knitr::opts_chunk$set(echo = TRUE, cache = TRUE) ``` ## Introduction In many statistical applications, estimating the covariance for a set of random variables is a critical task. The covariance is useful because it characterizes the *relationship* between variables. For instance, suppose we have three variables $X, Y, \mbox{ and } Z$ and their covariance matrix is of the form \[ \Sigma_{xyz} = \begin{pmatrix} 1 & 0 & 0.5 \\ 0 & 1 & 0 \\ 0.5 & 0 & 1 \end{pmatrix} \] We can gather valuable information from this matrix. First of all, we know that each of the variables has an equal variance of 1. Second, we know that variables $X$ and $Y$ are likely independent because the covariance between the two is equal to 0. This implies that any information in $X$ is useless in trying to gather information about $Y$. Lastly, we know that variables $X$ and $Z$ are moderately, positively correlated because their covariance is 0.5. Unfortunately, estimating $\Sigma$ well is often computationally expensive and, in a few settings, extremely challenging. For this reason, emphasis in the literature and elsewhere has been placed on estimating the inverse of $\Sigma$ which we like to denote as $\Omega \equiv \Sigma^{-1}$. `ADMMsigma` is designed to estimate a robust $\Omega$ efficiently while also allowing for flexibility and rapid experimentation for the end user. We will illustrate this with a short simulation. <br>\vspace{0.5cm} ## Simulation Let us generate some data. <br>\vspace{0.5cm} ```{r, message = FALSE, echo = TRUE} library(ADMMsigma) # generate data from a sparse matrix # first compute covariance matrix S = matrix(0.7, nrow = 5, ncol = 5) for (i in 1:5){ for (j in 1:5){ S[i, j] = S[i, j]^abs(i - j) } } # generate 100 x 5 matrix with rows drawn from iid N_p(0, S) set.seed(123) Z = matrix(rnorm(100*5), nrow = 100, ncol = 5) out = eigen(S, symmetric = TRUE) S.sqrt = out$vectors %*% diag(out$values^0.5) %*% t(out$vectors) X = Z %*% S.sqrt # snap shot of data head(X) ``` <br>\vspace{0.5cm} We have generated 100 samples (5 variables) from a normal distribution with mean equal to zero and an oracle covariance matrix $S$. <br>\vspace{0.5cm} ```{r, message = FALSE, echo = TRUE} # print oracle covariance matrix S # print inverse covariance matrix (omega) round(qr.solve(S), 5) ``` <br>\vspace{0.5cm} It turns out that this particular oracle covariance matrix (tapered matrix) has an inverse - or precision matrix - that is sparse (tri-diagonal). That is, the precision matrix has many zeros. In this particular setting, we could estimate $\Omega$ by taking the inverse of the sample covariance matrix $\hat{S} = \sum_{i = 1}^{n}(X_{i} - \bar{X})(X_{i} - \bar{X})^{T}/n$: <br>\vspace{0.5cm} ```{r, message = FALSE, echo = TRUE} # print inverse of sample precision matrix (perhaps a bad estimate) round(qr.solve(cov(X)*(nrow(X) - 1)/nrow(X)), 5) ``` <br>\vspace{0.5cm} However, because $\Omega$ is sparse, this estimator will likely perform very poorly. Notice the number of zeros in our oracle precision matrix compared to the inverse of the sample covariance matrix. Instead, we will use `ADMMsigma` to estimate $\Omega$. By default, `ADMMsigma` will construct $\Omega$ using an elastic-net penalty and choose the optimal `lam` and `alpha` tuning parameters. 
<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE}
# elastic-net type penalty (set tolerance to 1e-8)
ADMMsigma(X, tol.abs = 1e-8, tol.rel = 1e-8)
```

<br>\vspace{0.5cm}

We can see that the optimal `alpha` value selected was 1. This selection corresponds to a lasso penalty -- a special case of the elastic-net penalty. Further, a lasso penalty embeds an assumption in the estimate (call it $\hat{\Omega}$) that the true $\Omega$ is sparse. Thus the package has automatically selected the penalty that most appropriately matches the *true* data-generating precision matrix.

We can also explicitly assume sparsity in our estimate by restricting the class of penalties to the lasso. We do this by setting `alpha = 1` in our function:

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE}
# lasso penalty (default tolerance)
ADMMsigma(X, alpha = 1)
```

<br>\vspace{0.5cm}

We might also want to restrict `alpha = 0.5`:

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE}
# elastic-net penalty (alpha = 0.5)
ADMMsigma(X, alpha = 0.5)
```

<br>\newpage

Or maybe we want to assume that $\Omega$ is *not* sparse but has entries close to zero. In this case, a ridge penalty would be most appropriate. We can estimate an $\Omega$ of this form by setting `alpha = 0` in the `ADMMsigma` function or by using the `RIDGEsigma` function. `RIDGEsigma` uses a closed-form solution rather than an iterative algorithm to compute its estimate -- and for this reason should be preferred in most cases (it is less computationally intensive).

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE}
# ridge penalty
ADMMsigma(X, alpha = 0)

# ridge penalty (using closed-form solution)
RIDGEsigma(X, lam = 10^seq(-8, 8, 0.01))
```

<br>\newpage

`ADMMsigma` also has the capability to provide plots of the cross validation errors. This allows the user to analyze and select the appropriate tuning parameters. In the heatmap plot below, brighter (white) areas of the heat map correspond to better tuning parameter selections.

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE}
# produce CV heat map for ADMMsigma
ADMM = ADMMsigma(X, tol.abs = 1e-8, tol.rel = 1e-8)
plot(ADMM, type = "heatmap")
```

<br>\vspace{0.5cm}

We can also produce a line graph of the cross validation errors:

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE}
# produce line graph of CV errors for ADMMsigma
plot(ADMM, type = "line")
```

<br>\vspace{0.5cm}

And similarly for `RIDGEsigma`:

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE}
# produce CV heat map for RIDGEsigma
RIDGE = RIDGEsigma(X, lam = 10^seq(-8, 8, 0.01))
plot(RIDGE, type = "heatmap")

# produce line graph of CV errors for RIDGEsigma
plot(RIDGE, type = "line")
```

<br>\vspace{0.5cm}

## More advanced options

`ADMMsigma` contains a number of different criteria for selecting the optimal tuning parameters during cross validation. The package default is to choose the tuning parameters that maximize the log-likelihood (`crit.cv = "loglik"`). Other options include `AIC` and `BIC`.

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE}
# AIC
plot(ADMMsigma(X, crit.cv = "AIC"))

# BIC
plot(ADMMsigma(X, crit.cv = "BIC"))
```

<br>\vspace{0.5cm}

This allows the user to select appropriate tuning parameters under various decision criteria. We also have the option to print *all* of the estimated precision matrices for each tuning parameter combination using the `path` option. This option should be used with *extreme* care when the dimension and sample size are large -- you may run into memory issues.
<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE}
# keep all estimates using path
ADMM = ADMMsigma(X, path = TRUE)

# print only the first three objects
ADMM$Path[,,1:3]
```

<br>\vspace{0.5cm}

A huge issue in precision matrix estimation is the computational complexity when the sample size and dimension of our data are particularly large. There are a number of built-in options in `ADMMsigma` that can be used to improve computation speed (a combined example is sketched after this list):

- Reduce the number of `lam` values during cross validation. The default number is 10.

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE, eval = FALSE}
# reduce number of lam to 5
ADMM = ADMMsigma(X, nlam = 5)
```

<br>\vspace{0.5cm}

- Reduce the number of `K` folds during cross validation. The default number is 5.

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE, eval = FALSE}
# reduce number of folds to 3
ADMM = ADMMsigma(X, K = 3)
```

<br>\vspace{0.5cm}

- Relax the convergence criteria for the ADMM algorithm using the `tol.abs` and `tol.rel` options. The default for each is 1e-4.

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE, eval = FALSE}
# relax convergence criteria
ADMM = ADMMsigma(X, tol.abs = 1e-3, tol.rel = 1e-3)
```

<br>\vspace{0.5cm}

- Adjust the maximum number of iterations. The default is 1e4.

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE, eval = FALSE}
# adjust maximum number of iterations
ADMM = ADMMsigma(X, maxit = 1e3)
```

<br>\vspace{0.5cm}

- Adjust `adjmaxit`. This allows the user to adjust the maximum number of iterations *after* the first `lam` tuning parameter has fully converged during cross validation. This allows for *one-step estimators* and can greatly reduce the time required for the cross validation procedure while still choosing near-optimal tuning parameters.

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE, eval = FALSE}
# adjust adjmaxit
ADMM = ADMMsigma(X, maxit = 1e4, adjmaxit = 2)
```

<br>\vspace{0.5cm}

- We can also opt to run our cross validation procedure in parallel. The user should check how many cores are available on their system before using this option.

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE, eval = FALSE}
# parallel CV
ADMM = ADMMsigma(X, cores = 3)
```

<br>\vspace{0.5cm}
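These options are not mutually exclusive. As a rough sketch (our own combination, not taken from the package documentation; the appropriate trade-off between speed and accuracy depends on the data), several of them can be passed in a single call:

<br>\vspace{0.5cm}

```{r, message = FALSE, echo = TRUE, eval = FALSE}
# combine several speed options (illustrative only; adjust values to your data)
ADMM = ADMMsigma(X, nlam = 5, K = 3, tol.abs = 1e-3, tol.rel = 1e-3, cores = 3)
```

<br>\vspace{0.5cm}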
/scratch/gouwar.j/cran-all/cranData/ADMMsigma/vignettes/Tutorial.Rmd
#-------------------------------------------------------------------------------------------- # R functions for ADMUR package #-------------------------------------------------------------------------------------------- #-------------------------------------------------------------------------------------------- # global variables if(getRversion() >= "2.15.1") utils::globalVariables(c('age','datingType','site','calBP','phase','intcal20')) #-------------------------------------------------------------------------------------------- #-------------------------------------------------------------------------------------------- checkDataStructure <- function(data){ # data: data.frame of 14C dates. Requires 'age' and 'sd'. # helper function to check format of data, and throw warnings x <- 'good' if(class(data)!='data.frame'){warning('data must be a data.frame');return('bad')} if(sum(names(data)%in%c('age','sd'))!=2){warning("data must include 'age' and 'sd'");return('bad')} if(!is.numeric(data$age)){warning('age must be numeric');return('bad')} if(!is.numeric(data$sd)){warning('sd must be numeric');return('bad')} if(min(data$age)<0){warning('some ages are negative');return('bad')} if(min(data$sd)<1){warning('some sd are impossibly small');return('bad')} return(x)} #-------------------------------------------------------------------------------------------- checkData <- function(data){ # structural problems x <- checkDataStructure(data) # a few checks for absolute clangers # check suspicious sds and ages bad1 <- subset(data, sd<15) bad2 <- data[(data$age/data$sd)>1000,] bad3 <- data[(data$sd/data$age)>0.5,] bad4 <- subset(data, age<100 | age>57000) bad <- unique(rbind(bad1,bad2,bad3,bad4)) if(x=='good' & nrow(bad)==0)print('No obvious clangers found') if(nrow(bad)>0){ print('Please check the following samples...') print(bad) } return(NULL)} #-------------------------------------------------------------------------------------------- checkDatingType <- function(data){ # used by a couple of functions, so worth avoiding repetition # assume all dates are 14C if no datingType is provided if(!'datingType'%in%names(data)){ data$datingType <- '14C' warning('data did not contain datingType, so all dates are assumed to be 14C') } # avoid misspellings of '14C' bad <- c('14c','C14','c14','14.C','14.c','c.14','C.14','c-14','C-14','14-C') i <- data$datingType%in%bad if(sum(i)>0){ warning("Misspelling of '14C' in datingType are assumed to need calibrating") data$datingType[i] <- '14C' } return(data)} #-------------------------------------------------------------------------------------------- makeCalArray <- function(calcurve,calrange,inc=5){ # calcurve: intcal20 or any other calibration curve # calrange: vector of two values giving a calendar range to analyse (BP). Narrowing the range from the default c(0,50000) reduces memory and time. # inc: increments to interpolate calendar years. 
# builds a matrix of probabilities representing the calibration curve # rows of the matrix represent c14 years (annual resolution) # columns of the matrix use the calcurve 14C date and error to form Gaussian distributions # therefore it is rather memory intensive, and takes a while, but is only required once for any number of dates # extract the requested section calmin <- min(calrange) calmax <- max(calrange) # an extra 200 years is required to avoid edge effects calmin.extra <- max((calmin - 200),0) calmax.extra <- calmax + 200 # include an extra data row in the calibration curve, to 'feather' extremely old dates onto a 1-to-1 mapping of 14C to cal time calcurve <- rbind(data.frame(cal=60000,C14=60000,error=calcurve$error[1]),calcurve) # interpolate the calcurve to the required resolution cal <- seq(calmin.extra,calmax.extra,by=inc) c14.interp <- approx(x=calcurve$cal,y=calcurve$C14,xout=cal)$y error.interp <- approx(x=calcurve$cal,y=calcurve$error,xout=cal)$y # assume a one-to-one mapping of cal and c14 beyond 60000 i <- is.na(c14.interp) c14.interp[i] <- cal[i] error.interp[i] <- calcurve$error[1] # pick a sensible c14 range, at 1 yr resolution min.c14 <- round(min(c14.interp - 4*error.interp)) if(min.c14<0)min.c14 <- 0 max.c14 <- round(max(c14.interp + 4*error.interp)) c14 <- min.c14:max.c14 # fill the array R <- length(c14) C <- length(cal) probs <- array(0,c(R,C)) for(i in 1:C)probs[,i] <- dnorm(c14,c14.interp[i],error.interp[i]) row.names(probs) <- c14 colnames(probs) <- cal # store a bunch of objects CalArray <- list( probs = probs, calcurve = calcurve, cal = cal[cal>=calmin & cal<=calmax], calrange = calrange, inc = inc ) # give it an attribute, so other functions can check it attr(CalArray, 'creator') <- 'makeCalArray' return(CalArray)} #-------------------------------------------------------------------------------------------- plotCalArray <- function(CalArray){ # CalArray: matrix created by makeCalArray(). Requires row and col names corresponding to c14 and cal years respectively # Generates an image plot of the stacked Gaussians that define the calibration curve # check argument if(attr(CalArray, 'creator')!= 'makeCalArray') stop('CalArray was not made by makeCalArray()' ) P <- CalArray$probs c14 <- as.numeric(row.names(P)) cal <- as.numeric(colnames(P)) colfunc <- colorRampPalette(c('white','steelblue')) image(cal,c14,t(P)^0.1,xlab='Cal BP',ylab='14C',xlim=rev(range(cal)),col = colfunc(20),las=1, cex.axis=0.7, cex.lab=0.7) } #-------------------------------------------------------------------------------------------- plotPD <- function(x){ if(!'year'%in%names(x)){ years <- as.numeric(row.names(x)) } if('year'%in%names(x)){ years <- x$year x <- data.frame(x$pdf) } plot(NULL, type = "n", bty = "n", xlim = rev(range(years)), ylim=c(0,max(x)*1.2),las = 1, cex.axis = 0.7, cex.lab = 0.7, ylab='PD',xlab='calBP') for(n in 1:ncol(x)){ prob <- x[,n] polygon(x = c(years, years[c(length(years), 1)]), y = c(prob, 0, 0), col = "grey", border = 'steelblue') } if(ncol(x)>1)text(x=colSums(x*years)/colSums(x), y=apply(x, 2, max), labels=names(x),cex=0.7, srt=90) } #-------------------------------------------------------------------------------------------- chooseCalrange <- function(data,calcurve){ # data: data.frame of dates. 
Requires 'age' and 'sd' and datingType # calcurve: the object 'intcal13' loaded from intcal13.RData, or any other calibration curve calmin <- min <- calmax <- max <- NA # choose a reasonable calrange for the 14C data C14.data <- subset(data,datingType=='14C') if(nrow(C14.data)>0){ c14min <- min(C14.data$age - 5*pmax(C14.data$sd,20)) c14max <- max(C14.data$age + 5*pmax(C14.data$sd,20)) calmin <- min(calcurve$cal[calcurve$C14>c14min]) calmax <- max(calcurve$cal[calcurve$C14<c14max]) } # choose a reasonable calrange for the nonC14 data nonC14.data <- subset(data,datingType!='14C') if(nrow(nonC14.data)>0){ min <- min(nonC14.data$age - 5*pmax(nonC14.data$sd,20)) max <- max(nonC14.data$age + 5*pmax(nonC14.data$sd,20)) } calrange <- c(min(calmin,min,na.rm=T),max(calmax,max,na.rm=T)) return(calrange)} #-------------------------------------------------------------------------------------------- binner <- function(data, width, calcurve){ # data: data.frame containing at least the following columns :"age", "site", "datingType" # width: any time interval in c14 time, default = 200 c14 years # calcurve: the object 'intcal13' loaded from intcal13.RData, or any other calibration curve # argument checks if(!is.numeric(width))stop('width must be numeric') if(width<1)stop('width must be > 1') if(!is.data.frame(calcurve))stop('calcurve format must be data frame with cal, C14 and error') if(sum(names(calcurve)%in%c('cal','C14','error'))!=3)stop('calcurve format must be data frame with cal, C14 and error') if('phase'%in%names(data))warning('Data was already phased, so should not have been handed to binner(). Check for internal bug') # approximate nonC14 dates ito C14 time, so they can also be binned data.C14 <- subset(data,datingType=='14C') data.nonC14 <- subset(data,datingType!='14C') data.C14$c14age <- data.C14$age if(nrow(data.nonC14)>0)data.nonC14$c14age <- approx(x=calcurve$cal, y=calcurve$C14, xout=data.nonC14$age)$y data <- rbind(data.C14,data.nonC14) if(length(unique(data$site))>1)data <- data[order(data$site,data$c14age),] data$phase <- NA # binning end <- 0 for(s in unique(data$site)){ site.data <- subset(data,site==s) bins <- c() gaps <- c(0,diff(site.data$c14age)) bin <- 1 n <- nrow(site.data) for(i in 1:n){ if(gaps[i]>width)bin <- bin+1 bins[i] <- paste(s,bin,sep='.') } start <- end + 1 end <- start + n - 1 data$phase[start:end] <- bins } return(data)} #-------------------------------------------------------------------------------------------- internalCalibrator <- function(data, CalArray){ # generate a summed probability distribution (SPD) of calibrated 14C dates # This is acheived quickly by converting the uncalibrated dates using a prior, summing, then calibrating once # This is equivalent to calibrating each date, then summing, but faster # calibrates across the full range of CalArray. # data: data.frame of 14C dates. Requires 'age' and 'sd'. # CalArray: object created by makeCalArray(). 
Requires row and col names corresponding to c14 and cal years respectively # check argument if(attr(CalArray, 'creator')!= 'makeCalArray') stop('CalArray was not made by makeCalArray()' ) if(nrow(data)==0){ result <- data.frame(calBP=CalArray$cal,prob=0) return(result) } # generate prior (based on calibration curve, some c14 dates are more likely than others) c14.prior <- rowSums(CalArray$prob) # all c14 likelihoods (Either Gaussians or lognormals in c14 time) c14 <- as.numeric(row.names(CalArray$prob)) all.dates <- t(array(c14,c(length(c14),nrow(data)))) # gaussian # all.c14.lik <- dnorm(all.dates, mean=as.numeric(data$age), sd=as.numeric(data$sd)) # or log normals m <- as.numeric(data$age) v <- as.numeric(data$sd)^2 mu <- log(m^2/sqrt(v+m^2)) sig <- sqrt(log(v/m^2+1)) all.c14.lik <- dlnorm(all.dates, meanlog=mu, sdlog=sig) # all c14 probabilities (bayes theorem requires two steps, multiply prior by likelihood, then divide by integral) all.c14.prob <- c14.prior * t(all.c14.lik) all.c14.prob <- t(all.c14.prob) / colSums(all.c14.prob,na.rm=T) all.c14.prob[is.nan(all.c14.prob)] <- 0 # the division above will cause NaN if 0/0 # in the circumstance that some of the date falls outside the calibration curve range all.c14.prob <- all.c14.prob * rowSums(all.c14.lik) # combined c14 prob c14.prob <- colSums(all.c14.prob) # calibrate to cal # division by prior to ensure the probability of all calendar dates (for a given c14) sum to 1 # ie, use the calibration curve to generate a probability for calendar time (for every c14) # this combines the c14 probability distribution with the calendar probabilities given a c14 date. cal.prob <- as.numeric(crossprod(as.numeric(c14.prob),CalArray$prob/c14.prior)) # truncate to the required range result <- data.frame(calBP=as.numeric(colnames(CalArray$prob)),prob=cal.prob) result <- subset(result, calBP>=min(CalArray$calrange) & calBP<=max(CalArray$calrange)) return(result)} #-------------------------------------------------------------------------------------------- summedCalibrator <- function(data, CalArray, normalise = 'standard', checks = TRUE){ # performs a few checks # separates data into C14 for calibration using internalCalibrator(), and nonC14 dates # combines PDs to produce an SPD # reduces the SPD to the orginal required range and applies normalisation if required if(nrow(data)==0){ result <- data.frame(rep(0,length(CalArray$cal))) row.names(result) <- CalArray$cal names(result) <- NULL return(result) } # check arguments if(checks){ if(checkDataStructure(data)=='bad')stop() if(attr(CalArray, 'creator')!= 'makeCalArray') stop('CalArray was not made by makeCalArray()' ) if(!normalise %in% c('none','standard','full')) stop('normalise must be none, standard or full') data <- checkDatingType(data) } # C14 C14.data <- subset(data, datingType=='14C') C14.PD <- internalCalibrator(C14.data, CalArray)$prob C14.PD <- C14.PD/CalArray$inc # PD needs dividing by inc to convert area to height (PMF to PDF). Not required for nonC14, as dnorm() already generated PDF # nonC14 nonC14.data <- subset(data, datingType!='14C') n <- nrow(nonC14.data) nonC14.PD <- colSums(matrix(dnorm(rep(CalArray$cal,each=n), nonC14.data$age, nonC14.data$sd),n,length(CalArray$cal))) # combine PD <- nonC14.PD + C14.PD # No normalisation. Results in a PD with an area equal to the total number of samples minus any probability mass outside the date range. For example in cases where CalArray is badly specified to the dataset, or visaversa. # Rarely required. 
if(normalise=='none'){ result <- data.frame(PD) } # Standard normalisation adjusts for the number of samples. Results in a PD with a total area of 1 minus any probability mass outside the date range. For example in cases where CalArray is badly specified to the dataset, or visaversa. # should be used with phaseCalibrator(). if(normalise=='standard'){ PD <- PD / nrow(data) result <- data.frame(PD) } # Full normalisation results in a true PDF with a total area equal to 1. # Should be used when all dates are of equal importance, and the resulting SPD is the final product. For example when generating simulated SPDs. if(normalise=='full'){ PD <- PD / (sum(PD) * CalArray$inc) result <- data.frame(PD) } row.names(result) <- CalArray$cal names(result) <- NULL return(result)} #-------------------------------------------------------------------------------------------- phaseCalibrator <- function(data, CalArray, width = 200, remove.external = FALSE){ # generates a normalised SPD for every phase, phasing data through binner if required # remove.external: exludes phases (columns) with less than half their probability mass outside the date range. Useful for modelling. # argument checks if(!is.numeric(width))stop('width must be numeric') if(width<1)stop('width must be > 1') if(attr(CalArray, 'creator')!= 'makeCalArray') stop('CalArray was not made by makeCalArray()' ) if(nrow(data)==0)return(NULL) if(checkDataStructure(data)=='bad')stop() data <- checkDatingType(data) # ensure dates are phased if(!'phase'%in%names(data)){ if(!'site'%in%names(data))stop("data must contain 'phase' or 'site'") calcurve <- CalArray$calcurve data <- binner(data=data, width=width, calcurve=calcurve) warning("data did not contain 'phase', so phases were generated automatically") } phases <- sort(unique(data$phase)) phase.SPDs <- array(0,c(length(CalArray$cal),length(phases))) for(p in 1:length(phases)){ phase.data <- subset(data,phase==phases[p]) phase.SPDs[,p] <- summedCalibrator(phase.data, CalArray, normalise = 'standard', checks = FALSE)[,1] } phase.SPDs <- as.data.frame(phase.SPDs); names(phase.SPDs) <- phases; row.names(phase.SPDs) <- CalArray$cal # remove phases that are mostly outside the date range if(remove.external){ keep.i <- colSums(phase.SPDs)>=(0.5 / CalArray$inc) phase.SPDs <- phase.SPDs[,keep.i, drop=FALSE] } return(phase.SPDs)} #-------------------------------------------------------------------------------------------- summedCalibratorWrapper <- function(data, calcurve=intcal20, plot=TRUE){ # data: data.frame of 14C dates. 
Requires 'age' and 'sd' and optional datingType # calcurve: the object intcal13 loaded from intcal13.RData, or any other calibration curve # function to easily plot a calibrated Summed Probability Distribution from 14C dates # takes care of choosing a sensible date and interpolation increments range automatically if(nrow(data)==0)return(NULL) if(checkDataStructure(data)=='bad')stop() data <- checkDatingType(data) calrange <- chooseCalrange(data,calcurve) inc <- 5 if(diff(calrange)>10000)inc <- 10 if(diff(calrange)>25000)inc <- 20 CalArray <- makeCalArray(calcurve,calrange,inc) SPD <- summedCalibrator(data,CalArray, normalise='standard') if(plot)plotPD(SPD) return(SPD) } #-------------------------------------------------------------------------------------------- summedPhaseCalibrator <- function(data, calcurve, calrange, inc=5, width=200){ CalArray <- makeCalArray(calcurve=calcurve, calrange=calrange, inc=inc) x <- phaseCalibrator(data=data, CalArray=CalArray, width=width, remove.external = FALSE) SPD <- as.data.frame(rowSums(x)) # normalise SPD <- SPD/(sum(SPD) * CalArray$inc) names(SPD) <- NULL return(SPD)} #-------------------------------------------------------------------------------------------- uncalibrateCalendarDates <- function(dates, calcurve){ # dates: vector of calendar dates (point estimates) # randomly samples dates the calcurve error, at the corresponding cal date # returns a vector of point estimates on 14C scale # include an extra data row in the calibration curve, to 'feather' extremely old dates onto a 1-to-1 mapping of 14C to cal time calcurve <- rbind(data.frame(cal=60000,C14=60000,error=calcurve$error[1]),calcurve) simC14.means <- approx(x=calcurve$cal,y=calcurve$C14,xout=dates)$y simC14.errors <- approx(x=calcurve$cal,y=calcurve$error,xout=dates)$y simC14Samples <- numeric(length(dates)) i <- !is.na(simC14.means) & !is.na(simC14.errors) simC14Samples[i] <- round(rnorm(n=sum(i),mean=simC14.means[i],sd=simC14.errors[i])) simC14Samples[!i] <- round(dates[!i]) return(simC14Samples)} #-------------------------------------------------------------------------------------------- interpolate.model.to.PD <- function(PD, model){ years <- as.numeric(row.names(PD)) x <- as.numeric(model$year) y <- model$pdf y.out <- approx(x=x, y=y, xout=years)$y model <- data.frame(year=years, pdf=y.out) } #-------------------------------------------------------------------------------------------- loglik <- function(PD, model){ # relative likelihood of a perfectly precise date is the model PDF # therefore the relative likelihood of a date with uncertainty is an average of the model PDF, weighted by the date probabilities. # Numerically this is the scalar product: sum of (model PDF x date PDF). years <- as.numeric(row.names(PD)) # ensure the date ranges exactly match. If not, interpolate model pdf to match PD. check <- identical(years,model$year) if(!check) model <- interpolate.model.to.PD(PD, model) # ensure model PD is provided as a discretised PDF inc <- (years[2]-years[1]) model$pdf <- model$pdf/(sum(model$pdf)*inc) # convert the date PD pdfs to discrete PMFs to perform a weighted average PMF <- PD * inc # likelihoods weighted by the observational uncertainty weighted.PD <- PMF * model$pdf # sum all possibilities for each date (a calibrated date's probabilities are OR) to give the relative likelihood for each date. 
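	# In symbols: for date i with calibrated probability distribution p_i(t) and model PDF f(t)
	# on a calendar grid with spacing 'inc', the per-date relative likelihood is
	#   L_i = sum over t of f(t) * p_i(t) * inc
	# and the overall log-likelihood computed below is the sum over i of log(L_i).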
liks <- colSums(weighted.PD) # calculate the overall log lik given all the dates loglik <- sum(log(liks)) if(is.nan(loglik))loglik <- -Inf return(loglik)} #-------------------------------------------------------------------------------------------- convertPars <- function(pars, years, type, taphonomy=FALSE){ # The model must be returned as a PDF. I.e, the total area must sum to 1. # sanity checks model.choices <- c('CPL','exp','uniform','norm','sine','cauchy','logistic','power') if(!type%in%model.choices)stop(paste('Unknown model type. Choose from:',paste(model.choices,collapse=', '))) if('data.frame'%in%class(pars))pars <- as.matrix(pars) if('integer'%in%class(years))years <- as.numeric(years) if(!'numeric'%in%class(years))stop('years must be a numeric vector') if('NULL'%in%class(pars) | 'numeric'%in%class(pars)){ res <- convertParsInner(pars, years, type, taphonomy) return(res) } if(!'numeric'%in%class(pars)){ N <- nrow(pars) C <- length(years) res <- as.data.frame(matrix(,N,C)) names(res) <- years for(n in 1:N)res[n,] <- convertParsInner(pars[n,], years, type, taphonomy)$pdf } return(res) } #-------------------------------------------------------------------------------------------- convertParsInner <- function(pars, years, type, taphonomy){ if(taphonomy){ p <- length(pars) model.pars <- pars[0:(p-2)] taph.pars <- pars[(p-1):p] } if(!taphonomy){ model.pars <- pars taph.pars <- c(0,0) } if(type=='CPL'){ tmp <- CPLPDF(years,model.pars) } if(type=='uniform'){ if(length(model.pars)!=0)stop('A uniform model must have zero parameters') tmp <- dunif(years, min(years), max(years)) } if(type=='exp'){ if(length(model.pars)!=1)stop('exponential model requires just one rate parameter') tmp <- exponentialPDF(years, min(years), max(years),model.pars[1]) } if(type=='logistic'){ if(length(model.pars)!=2)stop('logistic model requires two parameters, rate and centre') tmp <- logisticPDF(years, min(years), max(years),model.pars[1], model.pars[2]) } if(type=='norm'){ if(length(model.pars)!=2)stop('A Gaussian model must have two parameters, mean and sd') tmp <- dnorm(years, model.pars[1], model.pars[2]) } if(type=='sine'){ if(length(model.pars)!=3)stop('A sinusoidal model must have three parameters, f, p and r') tmp <- sinewavePDF(years, min(years), max(years), model.pars[1], model.pars[2], model.pars[3]) } if(type=='cauchy'){ if(length(model.pars)!=2)stop('A cauchy model must have two parameters, location and scale') tmp <- cauchyPDF(years, min(years), max(years),model.pars[1], model.pars[2]) } if(type=='power'){ if(length(model.pars)!=2)stop('A power function model must have two parameters, b and c') tmp <- powerPDF(years, min(years), max(years),model.pars[1], model.pars[2]) } # incorporate taphonomy taph <- (years + taph.pars[1])^taph.pars[2] tmp <- tmp * taph inc <- years[2]-years[1] pdf <- tmp/(sum(tmp)*inc) res <- data.frame(year = years, pdf = pdf) return(res)} #-------------------------------------------------------------------------------------------- CPLparsToHinges <- function(pars, years){ if('numeric'%in%class(pars)){ res <- CPLparsToHingesInner(pars, years) return(res)} N <- nrow(pars) C <- (ncol(pars)+1)/2 +1 yr <- pdf <- as.data.frame(matrix(,N,C)) names(yr) <- paste('yr',1:C,sep='') names(pdf) <- paste('pdf',1:C,sep='') for(n in 1:N){ x <- CPLparsToHingesInner(pars[n,],years) yr[n,] <- x$year pdf[n,] <- x$pdf } res <- cbind(yr,pdf) return(res)} #-------------------------------------------------------------------------------------------- CPLparsToHingesInner <- function(pars, years){ # must 
be odd, as (2n-1 parameters where n=number of pieces) cond <- ((length(pars)+1) %% 2) == 0 if(!cond)stop('A CPL model must have an odd number of parameters') # parameters must be between 0 and 1 if(sum(pars>1 | pars<0)!=0)stop('CPL parameters must be between 0 and 1') if(length(pars)==1){ x.par <- c() y.par <- pars } if(length(pars)!=1){ x.par <- pars[1:((length(pars)-1)/2)] y.par <- pars[(length(x.par)+1):length(pars)] } # conversion of pars to raw hinge coordinates x between 0 and 1, and y between 0 and Inf # much more efficient stick breaking algorithm for x # mapping for y (0 to 1) -> (0 to Inf) using (1/(1-y)^2)-1 # y0 is arbitrarily fixed at 3 since (1/(1-0.5)^2)-1 xn <- length(x.par) if(xn>=1)proportion <- qbeta(x.par, 1 , xn:1) if(xn==0)proportion <- c() x.raw <- c(0,1-cumprod(1 - proportion),1) y.raw <- c(3, (1/(1-y.par)^2)-1) # convert x.raw from 0 to 1, to years x <- x.raw * (max(years)-min(years)) + min(years) # area under curve widths <- diff(x) mids <- 0.5*(y.raw[1:(xn+1)]+y.raw[2:(xn+2)]) area <- sum(widths*mids) # convert y.raw to PD y <- y.raw/area # store d <- data.frame(year=x, pdf=y) return(d)} #-------------------------------------------------------------------------------------------- objectiveFunction <- function(pars, PDarray, type, taphonomy=FALSE){ if(!is.data.frame(PDarray))stop('PDarray must be a data frame') years <- as.numeric(row.names(PDarray)) model <- convertPars(pars,years,type,taphonomy) loglik <- loglik(PDarray, model) return(-loglik)} #-------------------------------------------------------------------------------------------- proposalFunction <- function(pars, jumps, type, taphonomy, taph.min, taph.max){ if(taphonomy){ p <- length(pars) taph.pars <- pars[(p-1):p] taph.jumps <- abs(taph.max-taph.min)/30 taph.moves <- rnorm(2,0,taph.jumps) new.taph.pars <- taph.pars + taph.moves # taphonomy constraints to a reasonable prior range if(new.taph.pars[1]<taph.min[1] | new.taph.pars[1]>taph.max[1]) new.taph.pars[1] <- taph.pars[1] if(new.taph.pars[2]<taph.min[2] | new.taph.pars[2]>taph.max[2]) new.taph.pars[2] <- taph.pars[2] pars <- pars[1:(p-2)] } # remaining parameters (non-taph) moves <- rnorm(length(pars),0,jumps) new.pars <- pars + moves # technical constraints. Usually floating point bullshit. if(type=='CPL'){ new.pars[new.pars<0.00000000001] <- 0.00000000001 new.pars[new.pars>0.99999999999] <- 0.99999999999 } if(type=='exp'){ if(new.pars==0)new.pars <- 1e-100 } if(type=='norm'){ new.pars[new.pars<=1] <- 1 } # recombine pars with taph.pars if necessary if(taphonomy)new.pars <- c(new.pars,new.taph.pars) return(new.pars)} #-------------------------------------------------------------------------------------------- mcmc <- function(PDarray, startPars, type, taphonomy=FALSE, taph.min=c(0,-3), taph.max=c(20000,0), N = 30000, burn = 2000, thin = 5, jumps = 0.02){ if(!type%in%c('CPL','exp','norm','sine','cauchy','logistic','power'))stop('unknown model type. 
Only CPL, exp, norm, sine, cauchy, logistic, power currently handled') # starting parameters pars <- startPars all.pars <- matrix(,N,length(startPars)) # mcmc loop accepted <- rep(0,N) for(n in 1:N){ all.pars[n,] <- pars llik <- -objectiveFunction(pars, PDarray, type, taphonomy) prop.pars <- proposalFunction(pars, jumps, type, taphonomy, taph.min, taph.max) prop.llik <- -objectiveFunction(prop.pars, PDarray, type, taphonomy) ratio <- min(exp(prop.llik-llik),1) move <- sample(c(T,F),size=1,prob=c(ratio,1-ratio)) if(move){ pars <- prop.pars accepted[n] <- 1 } if(n%in%seq(0,N,by=1000))print(paste(n,'iterations of',N)) } ar <- sum(accepted[burn:N])/(N-burn) # thinning i <- seq(burn,N,by=thin) res <- all.pars[i,] return(list(res=res,all.pars=all.pars, acceptance.ratio=ar))} #-------------------------------------------------------------------------------------------- simulateCalendarDates <- function(model, n){ # sanity check a few arguments if(!is.data.frame(model))stop('model must be a data frame') cond <- sum(c('year','pdf')%in%names(model)) if(cond!=2)stop('model must include year and pdf') x <- range(model$year) + c(-150,150) years.wide <- min(x):max(x) pdf.wider <- approx(x=model$year,y=model$pdf,xout=years.wide,ties='ordered',rule=2)$y dates <- sample(years.wide, replace=T, size=n, prob=pdf.wider) return(dates)} #-------------------------------------------------------------------------------------------- estimateDataDomain <- function(data, calcurve){ thresholds <- c(60000,20000,4000) incs <- c(100,20,1) min.year <- 0 max.year <- 60000 for(n in 1:length(incs)){ if((max.year - min.year) > thresholds[n])return(c(min.year, max.year)) if((max.year - min.year) <= thresholds[n]){ CalArray <- makeCalArray(calcurve, calrange = c(min.year, max.year), inc = incs[n]) SPD <- summedCalibrator(data, CalArray) cum <- cumsum(SPD[,1])/sum(SPD) min.year <- CalArray$cal[min(which(cum>0.000001)-1)] max.year <- CalArray$cal[max(which(cum<0.999999)+2)] } } return(c(min.year, max.year))} #-------------------------------------------------------------------------------------------- SPDsimulationTest <- function(data, calcurve, calrange, pars, type, inc=5, N=20000){ # 1. generate observed data SPD print('Generating SPD for observed data') CalArray <- makeCalArray(calcurve, calrange, inc) # makeCalArray, used for obs and each simulation x <- phaseCalibrator(data, CalArray, width=200, remove.external = FALSE) SPD.obs <- as.data.frame(rowSums(x)) SPD.obs <- SPD.obs/(sum(SPD.obs) * CalArray$inc) SPD.obs <- SPD.obs[,1] # 2. various sample sizes and effective sample sizes # number of dates in the entire dataset n.dates.all <- nrow(data) # effective number of dates that contribute to the date range. Some dates may be slightly outside, giving non-integer. tmp <- summedCalibrator(data, CalArray, normalise = 'none') n.dates.effective <- round(sum(tmp)*inc,1) # number of phases in entire dataset if(!'phase'%in%names(data))data <- binner(data, width=200, calcurve) n.phases.all <- length(unique(data$phase)) # effective number of phases that contribute to the date range. Some phases may be slightly outside, giving non-integer. n.phases.effective <- round(sum(x)*inc,1) # number of phases that are mostly internal to date range, used for likelihoods n.phases.internal <- sum(colSums(x)>=(0.5 /inc)) # 3. convert best pars to a model print('Converting model parameters into a PDF') model <- convertPars(pars, years=CalArray$cal, type) # 4. 
Generate N simulations print('Generating simulated SPDs under the model') SPD.sims <- matrix(,length(SPD.obs),N) # blank matrix # how many phases to simulate, increasing slightly to account for sampling across a range 300 yrs wider np <- round(sum(x)*inc * (1+300/diff(calrange))) # Generate simulations for(n in 1:N){ cal <- simulateCalendarDates(model=model, n=np) age <- uncalibrateCalendarDates(cal, calcurve) d <- data.frame(age = age, sd = sample(data$sd, replace=T, size=length(age)), datingType = '14C') SPD.sims[,n] <- summedCalibrator(d, CalArray, normalise = 'full')[,1] # house-keeping if(n>1 & n%in%seq(0,N,length.out=11))print(paste(n,'of',N,'simulations completed')) } # 5. Construct various timeseries summaries # calBP years calBP <- CalArray$cal # expected simulation expected.sim <- rowMeans(SPD.sims) # local standard deviation SD <- apply(SPD.sims,1,sd) # CIs CI <- t(apply(SPD.sims,1,quantile,prob=c(0.025,0.125,0.25,0.75,0.875,0.975))) # model mod <- approx(x=model$year,y=model$pdf,xout=calBP,ties='ordered',rule=2)$y # index of SPD.obs above (+1) and below(-1) the 95% CI upper.95 <- CI[,dimnames(CI)[[2]]=="97.5%"] lower.95 <- CI[,dimnames(CI)[[2]]=="2.5%"] index <- as.numeric(SPD.obs>=upper.95)-as.numeric(SPD.obs<=lower.95) # -1,0,1 values # 6. calculate summary statistic for each sim and obs; and GOF p-value print('Generating summary statistics') # for observed # summary stat (SS) is simply the proportion of years outside the 95%CI SS.obs <- sum(SPD.obs>upper.95 | SPD.obs<lower.95) / length(SPD.obs) # for each simulation SS.sims <- numeric(N) for(n in 1:N){ SPD <- SPD.sims[,n] SS.sims[n] <-sum(SPD>upper.95 | SPD<lower.95) / length(SPD.obs) } # calculate p-value pvalue <- sum(SS.sims>=SS.obs)/N # 7. summarise and return timeseries <- cbind(data.frame(calBP=calBP, expected.sim=expected.sim, local.sd=SD, model=mod, SPD=SPD.obs, index=index),CI) return(list(timeseries=timeseries, pvalue=pvalue, observed.stat=SS.obs, simulated.stat=SS.sims, n.dates.all=n.dates.all, n.dates.effective=n.dates.effective, n.phases.all=n.phases.all, n.phases.effective=n.phases.effective, n.phases.internal=n.phases.internal))} #-------------------------------------------------------------------------------------------- relativeDeclineRate <- function(x, y, generation, N){ if('numeric'%in%class(x)){ x <- sort(x, decreasing=T) y <- sort(y, decreasing=T) X <- seq(x[1],x[2], length.out=N) Y <- seq(y[1],y[2], length.out=N) k <- exp(log(Y[2:N]/Y[1:(N-1)])/((X[1:(N-1)]-X[2:N])/generation)) res <- 100*(mean(k)-1) } if(!'numeric'%in%class(x)){ x <- t(apply(x, MARGIN=1, FUN=sort, decreasing=T)) y <- t(apply(y, MARGIN=1, FUN=sort, decreasing=T)) C <- nrow(x) X <- Y <- matrix(,N, C) for(c in 1:C){ X[,c] <- seq(x[c,1],x[c,2],length.out=N) Y[,c] <- seq(y[c,1],y[c,2],length.out=N) } km <- matrix(,N-1,C) for(c in 1:C)km[,c] <- exp(log(Y[2:N,c]/Y[1:(N-1),c])/((X[1:(N-1),c]-X[2:N,c])/generation)) res <- 100*(colMeans(km)-1) } return(res) } #---------------------------------------------------------------------------------------------- relativeRate <- function(x, y, generation=25, N=1000){ if('numeric'%in%class(x)){ grad <- diff(y)/diff(x) if(grad==0)return(0) res <- relativeDeclineRate(x, y, generation, N) if(grad<0)res <- res*(-1) } if(!'numeric'%in%class(x)){ grad <- apply(x, MARGIN=1, FUN=diff)/apply(y, MARGIN=1, FUN=diff) res <- relativeDeclineRate(x, y, generation, N) res[grad<0] <- res[grad<0]*(-1) } return(res)} #---------------------------------------------------------------------------------------------- 
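# plotSimulationSummary: plots the output of SPDsimulationTest()
# summary: list returned by SPDsimulationTest(), containing $timeseries, $pvalue and sample-size summaries
# title, legend.x, legend.y: optional overrides for the default plot title and legend position
# Draws the observed SPD (dashed), its 200-yr rolling mean (solid), the fitted null model (dotted),
# the 50/75/95% simulation envelopes (grey polygons), and shades regions of the SPD outside the 95% CI (firebrick)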
plotSimulationSummary <- function(summary, title=NULL, legend.x=NULL, legend.y=NULL){ X <- summary$timeseries$calBP Y <- summary$timeseries$SPD ymax <- max(Y,summary$timeseries$model,summary$timeseries$'97.5%')*1.05 P <- paste(', p =',round(summary$pvalue,3)) if(round(summary$pvalue,3)==0)P <- ', p < 0.001' xticks <- seq(max(X),min(X),by=-1000) if(is.null(title))title <- paste('Samples N = ',round(summary$n.dates.effective),', bins N = ',round(summary$n.phases.effective),P,sep='') if(is.null(legend.x))legend.x <- max(X)*0.75 if(is.null(legend.y))legend.y <- ymax*0.8 plot(NULL,xlim=rev(range(X)),ylim=c(0,ymax), main='', xlab='calBP', ylab='PD', xaxt='n',las=1, cex.axis=0.6) axis(1,at=xticks,labels=paste(xticks/1000,'kyr'),cex.axis=0.7) text(title,x=mean(X),y=ymax*0.9,cex=1) polygon(x=c(X,rev(X)) ,y=c(summary$timeseries$`2.5%`,rev(summary$timeseries$`97.5%`)) ,col='grey90',border=F) polygon(x=c(X,rev(X)) ,y=c(summary$timeseries$`12.5%`,rev(summary$timeseries$`87.5%`)) ,col='grey70',border=F) polygon(x=c(X,rev(X)) ,y=c(summary$timeseries$`25%`,rev(summary$timeseries$`75%`)) ,col='grey50',border=F) upperpoly <- which(summary$timeseries$index != 1) lowerpoly <- which(summary$timeseries$index != -1) upper.y <- lower.y <- Y upper.y[upperpoly] <- summary$timeseries$model[upperpoly] lower.y[lowerpoly] <- summary$timeseries$model[lowerpoly] polygon(c(X,rev(X)),c(upper.y,rev(summary$timeseries$model)),border=NA, col=scales::alpha('firebrick',alpha=0.6)) polygon(c(X,rev(X)),c(lower.y,rev(summary$timeseries$model)),border=NA, col=scales::alpha('firebrick',alpha=0.6)) lines(x=X, y=summary$timeseries$model, col='steelblue',lty=3, lwd=2) lines(y=Y,x=X,lty=2) smooth <- round(200/mean(diff(X))) Y.smooth <- zoo::rollmean(Y,smooth) X.smooth <- zoo::rollmean(X,smooth) lines(y=Y.smooth,X.smooth,lty=1,lwd=2) legend(legend=c('SPD (200 yrs rolling mean)','SPD','Null model','50% CI','75% CI','95% CI','Outside 95% CI'), x = legend.x, y = legend.y, cex = 0.7, bty = 'n', lty = c(1,2,3,NA,NA,NA,NA), lwd = c(2,1,2,NA,NA,NA,NA), col = c(1,1,'steelblue',NA,NA,NA,NA), fill = c(NA,NA,NA,'grey50','grey70','grey90','firebrick'), border = NA, xjust = 1, x.intersp = c(1,1,1,-0.5,-0.5,-0.5,-0.5)) } #---------------------------------------------------------------------------------------------- CPLPDF <- function(x,pars){ hinges <- CPLparsToHinges(pars, x) pdf <- approx(x=hinges$year, y=hinges$pdf, xout=x)$y return(pdf)} #---------------------------------------------------------------------------------------------- sinewavePDF <- function(x,min,max,f,p,r){ if(r==0)return(dunif(x,min,max)) if(r<0 | r>1)stop('r must be between 0 and 1') if(p<0 | p>(2*pi))stop('p must be between 0 and 2pi') num <- (sin(2*pi*f*x + p) + 1 - log(r)) denom <- (max - min)*(1 - log(r)) + (1/(2*pi*f))*( cos(2*pi*f*min+p) - cos(2*pi*f*max+p) ) pdf <- num/denom return(pdf)} #---------------------------------------------------------------------------------------------- exponentialPDF <- function(x,min,max,r){ if(r==0)return(dunif(x,min,max)) num <- -r*exp(-r*x) denom <- exp(-r*max)-exp(-r*min) pdf <- num/denom return(pdf)} #---------------------------------------------------------------------------------------------- logisticPDF <- function(x,min,max,k,x0){ if(k==0)return(dunif(x,min,max)) num <- 1 / ( 1 + exp( -k * (x0-x) ) ) denom <- (1/k) * log( (1 + exp(k*(x0-min)) ) / (1 + exp(k*(x0-max)) ) ) pdf <- num/denom return(pdf)} #---------------------------------------------------------------------------------------------- cauchyPDF <- function(x,min,max,x0,g){ 
	num <- 1
	denom1 <- g
	denom2 <- 1+((x-x0)/g)^2
	denom3 <- atan((x0-min)/g) - atan((x0-max)/g)
	pdf <- num/(denom1*denom2*denom3)
	return(pdf)}
#----------------------------------------------------------------------------------------------
powerPDF <- function(x,min,max,b,c){
	# normalising constant: the integral of (b+x)^c over [min,max] is ((b+max)^(c+1) - (b+min)^(c+1))/(c+1)
	num <- (c+1)*(b+x)^c
	denom <- (b+max)^(c+1) - (b+min)^(c+1)
	pdf <- num/denom
	return(pdf)}
#----------------------------------------------------------------------------------------------
/scratch/gouwar.j/cran-all/cranData/ADMUR/R/functions.R
## ---- eval = FALSE------------------------------------------------------------ # install.packages('ADMUR') ## ---- eval = FALSE------------------------------------------------------------ # install.packages('devtools') # library(devtools) # install_github('UCL/ADMUR') ## ---- message = FALSE--------------------------------------------------------- library(ADMUR) ## ---- eval = FALSE------------------------------------------------------------ # help(ADMUR) # help(SAAD) ## ---- eval = TRUE------------------------------------------------------------- SAAD[1:5,1:8] ## ---- eval = TRUE------------------------------------------------------------- citation('ADMUR') ## ---- eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- data <- data.frame( age=c(6562,7144), sd=c(44,51) ) x <- summedCalibratorWrapper(data) ## ---- eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- data <- data.frame( age=c(6562,7144), sd=c(44,51), datingType=c('14C','TL') ) x <- summedCalibratorWrapper(data=data, calcurve=shcal20) ## ---- eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- data <- data.frame( age = c(9144), sd=c(151) ) CalArray <- makeCalArray( calcurve=intcal20, calrange=c(8000,13000) ) cal <- summedCalibrator(data, CalArray) plotPD(cal) ## ---- eval = TRUE, fig.height = 5, fig.width=7, fig.align = "center", dev='jpeg'---- x <- makeCalArray( calcurve=shcal20, calrange=c(5500,6000), inc=1 ) plotCalArray(x) ## ---- eval = TRUE------------------------------------------------------------- data <- subset( SAAD, site %in% c('Carrizal','Pacopampa') ) data[,2:7] ## ---- eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- CalArray <- makeCalArray( calcurve=shcal20, calrange=c(2000,6000) ) x <- phaseCalibrator(data=data, CalArray=CalArray) plotPD(x) ## ---- eval = TRUE------------------------------------------------------------- SPD <- as.data.frame( rowSums(x) ) # normalise SPD <- SPD/( sum(SPD) * CalArray$inc ) ## ---- eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- SPD <- summedPhaseCalibrator( data=data, calcurve=shcal20, calrange=c(2000,6000) ) plotPD(SPD) ## ---- eval = TRUE------------------------------------------------------------- set.seed(12345) N <- 350 # randomly sample calendar dates from the toy model cal <- simulateCalendarDates(toy, N) # Convert to 14C dates. 
age <- uncalibrateCalendarDates(cal, shcal20) data <- data.frame(age = age, sd = 50, phase = 1:N, datingType = '14C') # Calibrate each phase, taking care to restrict to the modelled date range with 'remove.external' CalArray <- makeCalArray(shcal20, calrange = range(toy$year)) PD <- phaseCalibrator(data, CalArray, remove.external = TRUE) ## ---- eval = TRUE------------------------------------------------------------- print( ncol(PD) ) ## ---- eval = TRUE------------------------------------------------------------- loglik(PD=PD, model=toy) ## ---- eval = TRUE------------------------------------------------------------- uniform.model <- convertPars(pars=NULL, years=5500:7500, type='uniform') loglik(PD=PD, model=uniform.model) ## ---- eval = TRUE------------------------------------------------------------- exp( loglik(PD=PD, model=toy) - loglik(PD=PD, model=uniform.model) ) ## ---- eval = TRUE------------------------------------------------------------- set.seed(12345) CPLparsToHinges(pars=runif(11), years=5500:7500) ## ---- eval = FALSE------------------------------------------------------------ # library(DEoptimR) # best <- JDEoptim(lower = rep(0,5), # upper = rep(1,5), # fn = objectiveFunction, # PDarray = PD, # type = 'CPL', # NP = 100, # trace = TRUE) ## ---- echo = FALSE------------------------------------------------------------ load('vignette.3CPL.JDEoptim.best.RData') ## ---- eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- CPL <- CPLparsToHinges(pars=best$par, years=5500:7500) SPD <- summedPhaseCalibrator( data=data, calcurve=shcal20, calrange=c(5500,7500) ) plotPD(SPD) lines(CPL$year, CPL$pdf, lwd=2, col='firebrick') legend(x=6300, y=max(CPL$pdf), cex=0.7, lwd=2, col='firebrick', bty='n', legend='best fitted 3-CPL') text(x=CPL$year, y=CPL$pdf, pos=3, labels=c('H1','H2','H3','H4')) ## ---- eval = FALSE------------------------------------------------------------ # chain <- mcmc(PDarray=PD, startPars=best$par, type='CPL', N=100000, burn=2000, thin=5, jumps =0.025) ## ---- eval = FALSE------------------------------------------------------------ # print(chain$acceptance.ratio) # par(mfrow=c(3,2), mar=c(4,3,3,1)) # col <- 'steelblue' # for(n in 1:5){ # plot(chain$all.pars[,n], type='l', ylim=c(0,1), col=col, xlab='', ylab='', main=paste('par',n)) # } ## ---- eval = FALSE------------------------------------------------------------ # hinges <- convertPars(pars=chain$res, years=5500:7500, type='CPL') # par(mfrow=c(3,2), mar=c(4,3,3,1)) # c1 <- 'steelblue' # c2 <- 'firebrick' # lwd <- 3 # pdf.brk <- seq(0,0.0015, length.out=40) # yr.brk <- seq(5500,7500,length.out=40) # names <- c('Date of H2','Date of H3','PD of H1','PD of H2','PD of H3','PD of H4') # hist(hinges$yr2,border=c1,breaks=yr.brk, main=names[1], xlab='');abline(v=CPL$year[2],col=c2,lwd=lwd) # hist(hinges$yr3, border=c1,breaks=yr.brk, main=names[2], xlab='');abline(v=CPL$year[3],col=c2,lwd=lwd) # hist(hinges$pdf1, border=c1,breaks=pdf.brk, main=names[3], xlab='');abline(v=CPL$pdf[1],col=c2,lwd=lwd) # hist(hinges$pdf2, border=c1,breaks=pdf.brk, main=names[4], xlab='');abline(v=CPL$pdf[2],col=c2,lwd=lwd) # hist(hinges$pdf3, border=c1,breaks=pdf.brk, main=names[5], xlab='');abline(v=CPL$pdf[3],col=c2,lwd=lwd) # hist(hinges$pdf4, border=c1,breaks=pdf.brk, main=names[6], xlab='');abline(v=CPL$pdf[4],col=c2,lwd=lwd) ## ---- eval = FALSE------------------------------------------------------------ # require(scales) # par( mfrow=c(1,2) , mar=c(4,4,1.5,2), cex=0.7 ) # 
plot(hinges$yr2, hinges$pdf2, pch=16, col=alpha(1,0.02), ylim=c(0,0.0005)) # points(CPL$year[2], CPL$pdf[2], col='red', pch=16, cex=1.2) # plot(hinges$yr3, hinges$pdf3, pch=16, col=alpha(1,0.02), ylim=c(0,0.0015)) # points(CPL$year[3], CPL$pdf[3], col='red', pch=16, cex=1.2) ## ---- eval = FALSE------------------------------------------------------------ # plot(NULL, xlim=c(7500,5500),ylim=c(0,0.0011), xlab='calBP', ylab='PD', cex=0.7) # for(n in 1:nrow(hinges)){ # x <- c(hinges$yr1[n], hinges$yr2[n], hinges$yr3[n], hinges$yr4[n]) # y <- c(hinges$pdf1[n], hinges$pdf2[n], hinges$pdf3[n], hinges$pdf4[n]) # lines( x, y, col=alpha(1,0.005) ) # } # lines(x=CPL$year, y=CPL$pdf, lwd=2, col=c2) ## ---- eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- N <- 1000 x <- cbind(rep(5100,N),rep(5000,N)) y <- cbind(seq(1,100,length.out=N),seq(100,1,length.out=N)) conventional <- 100 * exp(log(y[,2]/y[,1])/((x[,1]-x[,2])/25))-100 relative <- relativeRate(x,y) plot(conventional, relative, type='l') rect(-100,-100,c(10,0,-10),c(10,0,-10), lty=2,border='grey') ## ---- eval = FALSE------------------------------------------------------------ # # CPL parameters must be between 0 and 1, and an odd length. # CPL.1 <- JDEoptim(lower=0, upper=1, fn=objectiveFunction, PDarray=PD, type='CPL', NP=20) # CPL.2 <- JDEoptim(lower=rep(0,3), upper=rep(1,3), fn=objectiveFunction, PDarray=PD, type='CPL', NP=60) # CPL.3 <- JDEoptim(lower=rep(0,5), upper=rep(1,5), fn=objectiveFunction, PDarray=PD, type='CPL', NP=100) # CPL.4 <- JDEoptim(lower=rep(0,7), upper=rep(1,7), fn=objectiveFunction, PDarray=PD, type='CPL', NP=140) # # # exponential has a single parameter, which can be negative (decay). # exp <- JDEoptim(lower=-0.01, upper=0.01, fn=objectiveFunction, PDarray=PD, type='exp', NP=20) # # # uniform has no parameters so a search is not required. 
# uniform <- objectiveFunction(NULL, PD, type='uniform') ## ---- echo = FALSE------------------------------------------------------------ load('vignette.model.comparison.RData') ## ---- eval = TRUE------------------------------------------------------------- # likelihoods data.frame(L1= -CPL.1$value, L2= -CPL.2$value, L3= -CPL.3$value, L4= -CPL.4$value, Lexp= -exp$value, Lunif= -uniform) BIC.1 <- 1*log(303) - 2*(-CPL.1$value) BIC.2 <- 3*log(303) - 2*(-CPL.2$value) BIC.3 <- 5*log(303) - 2*(-CPL.3$value) BIC.4 <- 7*log(303) - 2*(-CPL.4$value) BIC.exp <- 1*log(303) - 2*(-exp$value) BIC.uniform <- 0 - 2*(-uniform) data.frame(BIC.1,BIC.2,BIC.3,BIC.4,BIC.exp,BIC.uniform) ## ---- eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- # convert parameters to model PDs CPL1 <- convertPars(pars=CPL.1$par, years=5500:7500, type='CPL') CPL2 <- convertPars(pars=CPL.2$par, years=5500:7500, type='CPL') CPL3 <- convertPars(pars=CPL.3$par, years=5500:7500, type='CPL') CPL4 <- convertPars(pars=CPL.4$par, years=5500:7500, type='CPL') EXP <- convertPars(pars=exp$par, years=5500:7500, type='exp') # Plot SPD and five competing models: plotPD(SPD) cols <- c('firebrick','orchid2','coral2','steelblue','goldenrod3') lines(CPL1$year, CPL1$pdf, col=cols[1], lwd=2) lines(CPL2$year, CPL2$pdf, col=cols[2], lwd=2) lines(CPL3$year, CPL3$pdf, col=cols[3], lwd=2) lines(CPL4$year, CPL4$pdf, col=cols[4], lwd=2) lines(EXP$year, EXP$pdf, col=cols[5], lwd=2) legend <- c('1-CPL','2-CPL','3-CPL','4-CPL','exponential') legend(x=6300, y=max(CPL$pdf), cex=0.7, lwd=2, col=cols, bty='n', legend=legend) ## ---- eval = FALSE------------------------------------------------------------ # summary <- SPDsimulationTest(data, calcurve=shcal20, calrange=c(5500,7500), pars=CPL.3$par, type='CPL') ## ---- echo = FALSE------------------------------------------------------------ load('vignette.3CPL.SPDsimulationTest.RData') ## ---- eval = TRUE, fig.height = 5, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- print(summary$pvalue) hist(summary$simulated.stat, main='Summary statistic', xlab='') abline(v=summary$observed.stat, col='red') legend(0.3,6000, bty='n', lwd=c(1,3), col=c('red','grey'), legend=c('observed','simulated')) ## ---- eval = FALSE------------------------------------------------------------ # summary <- SPDsimulationTest(data, calcurve=shcal20, calrange=c(5500,7500), pars=exp$par, type='exp') ## ---- echo = FALSE------------------------------------------------------------ load('vignette.exp.SPDsimulationTest.RData') ## ---- eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE, message=FALSE---- plotSimulationSummary(summary, legend.y=0.0012) ## ---- eval = FALSE------------------------------------------------------------ # # generate SPDs # CalArray <- makeCalArray(intcal20, calrange = c(1000,4000)) # spd1 <- summedCalibrator(data1, CalArray, normalise='full') # spd2 <- summedCalibrator(data2, CalArray, normalise='full') # spd3 <- summedCalibrator(data3, CalArray, normalise='full') # # # calibrate phases # PD1 <- phaseCalibrator(data1, CalArray, remove.external = TRUE) # PD2 <- phaseCalibrator(data2, CalArray, remove.external = TRUE) # PD3 <- phaseCalibrator(data3, CalArray, remove.external = TRUE) # # # effective sample sizes # ncol(PD1) # ncol(PD2) # ncol(PD3) # # # maximum likelihood search, fitting various models to various datasets # norm <- JDEoptim(lower=c(1000,1), upper=c(4000,5000), # 
fn=objectiveFunction, PDarray=PD1, type='norm', NP=40, trace=T) # cauchy <- JDEoptim(lower=c(1000,1), upper=c(4000,5000), # fn=objectiveFunction, PDarray=PD1, type='cauchy', NP=40, trace=T) # sine <- JDEoptim(lower=c(0,0,0), upper=c(1/1000,2*pi,1), # fn=objectiveFunction, PDarray=PD2, type='sine', NP=60, trace=T) # logistic <- JDEoptim(lower=c(0,0000), upper=c(1,10000), # fn=objectiveFunction, PDarray=PD3, type='logistic', NP=40, trace=T) # exp <- JDEoptim(lower=c(0), upper=c(1), # fn=objectiveFunction, PDarray=PD3, type='exp', NP=20, trace=T) # power <- JDEoptim(lower=c(0,-10), upper=c(10000,0), # fn=objectiveFunction, PDarray=PD3, type='power', NP=40, trace=T) # ## ---- eval = FALSE------------------------------------------------------------ # # convert parameters to model PDs # years <- 1000:4000 # mod.norm <- convertPars(pars=norm$par, years, type='norm') # mod.cauchy <- convertPars(pars=cauchy$par, years, type='cauchy') # mod.sine <- convertPars(pars=sine$par, years, type='sine') # mod.uniform <- convertPars(pars=NULL, years, type='uniform') # mod.logistic <- convertPars(pars=logistic$par, years, type='logistic') # mod.exp <- convertPars(pars=exp$par, years, type='exp') # mod.power <- convertPars(pars=power$par, years, type='power') # # # Plot SPDs and various fitted models: # par(mfrow=c(3,1), mar=c(4,4,1,1)) # cols <- c('steelblue','firebrick','orange') # # plotPD(spd1) # lines(mod.norm, col=cols[1], lwd=5) # lines(mod.cauchy, col=cols[2], lwd=5) # legend(x=4000, y=max(spd1)*1.2, lwd=5, col=cols, bty='n', legend=c('Gaussian','Cauchy')) # # plotPD(spd2) # lines(mod.sine, col=cols[1], lwd=5) # lines(mod.uniform, col=cols[2], lwd=5) # legend(x=4000, y=max(spd2)*1.2, lwd=5, col=cols, bty='n', legend=c('Sinewave','Uniform')) # # plotPD(spd3) # lines(mod.logistic, col=cols[1], lwd=5) # lines(mod.exp, col=cols[2], lwd=5) # lines(mod.power, col=cols[3], lwd=5) # legend(x=4000, y=max(spd3)*1.2, lwd=5, col=cols, bty='n', legend=c('Logistic','Exponential','Power Law')) ## ---- eval = FALSE------------------------------------------------------------ # # generate an PD array for each dataset # years <- seq(1000,40000,by=50) # CalArray <- makeCalArray(intcal20, calrange = c(1000,40000),inc=50) # PD1 <- phaseCalibrator(bryson1848, CalArray, remove.external = TRUE) # PD2 <- phaseCalibrator(bluhm2421, CalArray, remove.external = TRUE) # # # MCMC search # chain.bryson <- mcmc(PDarray=PD1, # startPars=c(10000,-1.5), # type='power', N=50000, # burn=2000, # thin=5, # jumps =c(250,0.075)) # # chain.bluhm <- mcmc(PDarray=PD2, # startPars=c(10000,-1.5), # type='power', N=50000, # burn=2000, # thin=5, # jumps =c(250,0.075)) # # # convert parameters to taphonomy curves # curve.bryson <- convertPars(chain.bryson$res, type='power', years=years) # curve.bluhm <- convertPars(chain.bluhm$res, type='power', years=years) # # # plot # plot(NULL, xlim=c(0,12000),ylim=c(-2.5,-1), xlab='parameter b', ylab='parameter c') # points(chain.bryson$res, col=cols[1]) # points(chain.bluhm$res, col=cols[2]) # # plot(NULL, xlim=c(0,40000),ylim=c(0,0.00025), xlab='yrs BP', ylab='PD') # N <- nrow(chain.bryson$res) # for(n in sample(1:N,size=1000)){ # lines(years,curve.bryson[n,], col=cols[1]) # lines(years,curve.bluhm[n,], col=cols[2]) # } ## ---- eval = FALSE------------------------------------------------------------ # best <- JDEoptim(lower=c(0,0,0,0,0), # upper=c(1,1,1,1,1), # fn=objectiveFunction, # PDarray=PD, # type='CPL', # taphonomy=F, # trace=T, # NP=100) # # best.taph <- JDEoptim(lower=c(0,0,0,0,0,0,-3), # 
upper=c(1,1,1,1,1,20000,0), # fn=objectiveFunction, # PDarray=PD, # type='CPL', # taphonomy=T, # trace=T, # NP=140) ## ---- echo = FALSE------------------------------------------------------------ load('vignette.3CPL.JDEoptim.best.RData') load('vignette.3CPL.JDEoptim.best.taph.RData') ## ---- eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- CPL <- convertPars(pars=best$par, years=5500:7500, type='CPL', taphonomy=F) CPL.taph <- convertPars(pars=best.taph$par, years=5500:7500, type='CPL', taphonomy=T) SPD <- summedPhaseCalibrator( data=data, calcurve=shcal20, calrange=c(5500,7500) ) plotPD(SPD) lines(CPL$year, CPL$pdf, lwd=2, col=cols[1]) lines(CPL.taph$year, CPL.taph$pdf, lwd=2, col=cols[2]) legend(x=6300,y=0.001,legend=c('3-CPL','3-CPL with taphonomy'),bty='n',col=cols[1:2],lwd=2,cex=0.7) ## ---- eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE---- pop <- convertPars(pars=best.taph$par[1:5], years=5500:7500, type='CPL') taph <- convertPars(pars=best.taph$par[6:7], years=5500:7500, type='power') plotPD(pop) title('Population dynamics') plotPD(taph) title('Taphonomic loss') ## ---- eval = TRUE------------------------------------------------------------- CPLparsToHinges(pars=best.taph$par[1:5], years=5500:7500) ## ---- eval = FALSE------------------------------------------------------------ # chain.taph <- mcmc(PDarray = PD, # startPars = c(0.5,0.5,0.5,0.5,0.5,10000,-1.5), # type='CPL', taphonomy=T, # N = 30000, # burn = 2000, # thin = 5, # jumps = 0.025) ## ---- eval = FALSE------------------------------------------------------------ # # convert parameters into model PDFs # pop <- convertPars(pars=chain.taph$res[,1:5], years=5500:7500, type='CPL') # taph <- convertPars(pars=chain.taph$res[,6:7], years=seq(1000,30000,by=50), type='power') # # # plot population dynamics PDF # plot(NULL, xlim=c(7500,5500),ylim=c(0,0.0013), xlab='calBP', ylab='PD', las=1) # for(n in 1:nrow(pop))lines(5500:7500, pop[n,],col=alpha(1,0.05)) # # # plot taphonomy PDF # plot(NULL, xlim=c(30000,0),ylim=c(0,0.00025), xlab='calBP', ylab='PD',las=1,) # for(n in 1:nrow(taph))lines(seq(1000,30000,by=50), taph[n,],col=alpha(1,0.02)) # # # plot taphonomic parameters # plot(NULL, xlim=c(0,20000),ylim=c(-3,0), xlab='parameter b', ylab='parameter c',las=1) # for(n in 1:nrow(chain.taph$res))points(chain.taph$res[n,6], chain.taph$res[n,7],col=alpha(1,0.2),pch=20)
/scratch/gouwar.j/cran-all/cranData/ADMUR/inst/doc/guide.R
--- title: | ![](four_logos.png){width=680px} ADMUR: Ancient Demographic Modelling Using Radiocarbon author: "Adrian Timpson" date: "`r Sys.Date()`" output: rmarkdown::html_vignette: toc: true toc_depth: 2 logo: logo.jpg vignette: > %\VignetteEngine{knitr::rmarkdown} %\VignetteIndexEntry{Guide to using ADMUR} %\usepackage[utf8]{inputenc} --- <style> p.caption {font-size: 0.7em;} </style> ********** # 1. Overview ## Introduction to ADMUR This vignette provides a comprehensive guide to modelling population dynamics using the R package ADMUR, and accompanies the publication 'Directly modelling population dynamics in the South American Arid Diagonal using 14C dates', Philosophical Transactions B, 2020, A. Timpson et al. https://doi.org/10.1098/rstb.2019.0723 Throughout this vignette, R code blocks often use objects created earlier in the vignette in previous code blocks. However, the manual for each function provides examples with self sufficient R code blocks. The motivation for creating the ADMUR package is to provide a robust framework to infer population dynamics from radiocarbon datasets, given the uncontroversial assumption that (to a first order of approximation) the archaeological record contains more dateable anthropogenic material from prehistoric periods when population levels were greater. Unfortunately, the spatiotemporal sparsity of radiocarbon data conspires with the wiggly nature of the calibration curve to encourage the overinterpretation of such datasets, often leading to colourful but statistically unjustified interpretations of population dynamics. No statistical method can (or ever will) be able to perfectly reconstruct the true population dynamics from such a dataset. ADMUR is no exception to this, but provides tools to infer a plausible yet conservative reconstruction of population dynamics. ## Installation The ADMUR package can be installed directly from the CRAN in the usual way: ```{r, eval = FALSE} install.packages('ADMUR') ``` Alternatively it can be installed from GitHub, after installing and loading the 'devtools' package on the CRAN: ```{r, eval = FALSE} install.packages('devtools') library(devtools) install_github('UCL/ADMUR') ``` Either way, the ADMUR package can then be locally loaded: ```{r, message = FALSE} library(ADMUR) ``` ## 14C datasets A summary of the available help files and data sets included in the package can be browsed, which include a terrestrial anthropogenic ^14^C dataset from the South American Arid Diagonal: ```{r, eval = FALSE} help(ADMUR) help(SAAD) ``` Datasets must be structured as a data frame that include columns 'age' and 'sd', which represent the uncalibrated ^14^C age and its error, respectively. ```{r, eval = TRUE} SAAD[1:5,1:8] ``` ## Citations Citations are available as follows: ```{r, eval = TRUE} citation('ADMUR') ``` ********** # 2. Date calibration and SPDs The algorithm used by ADMUR to calculate model likelihoods of a ^14^C dataset uses several functions to first calibrate ^14^C dates. These functions are also intrinsically useful for ^14^C date calibration or for generating a Summed Probability Distribution (SPD). ## Calibrated ^14^C date probability distributions Generating a single calibrated date distribution or SPD requires either a two-step process to give the user full control of the date range and temporal resolution, or a simpler one step process using a wrapper function that automatically estimates a sensible date range and resolution from the dataset, performs the two step process internally, and plots the SPD. 
### With the wrapper 1. Use the function [summedCalibratorWrapper()](../html/summedCalibratorWrapper.html) ```{r, eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} data <- data.frame( age=c(6562,7144), sd=c(44,51) ) x <- summedCalibratorWrapper(data) ``` Notice the function assumes the data provided were all ^14^C dates. However, if you have other kinds of date such as thermoluminescence you can specify this. Non-^14^C types are assumed to be in calendar time, BP. You can also specify a particular calibration curve: ```{r, eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} data <- data.frame( age=c(6562,7144), sd=c(44,51), datingType=c('14C','TL') ) x <- summedCalibratorWrapper(data=data, calcurve=shcal20) ``` ### Without the wrapper Generating the SPD without the wrapper gives you more control, and requires a two-step process: 1. Convert a calibration curve to a CalArray using the function [makeCalArray()](../html/makeCalArray.html) 1. Calibrate the ^14^C dates through the CalArray using the function [summedCalibrator()](../html/summedCalibrator.html). This is useful for improving computational times if generating many SPDs, for example in a simulation framework, since the CalArray needs generating only once. ```{r, eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} data <- data.frame( age = c(9144), sd=c(151) ) CalArray <- makeCalArray( calcurve=intcal20, calrange=c(8000,13000) ) cal <- summedCalibrator(data, CalArray) plotPD(cal) ``` The CalArray is essentially a two-dimensional probability array of the calibration curve, and can be viewed using the [plotCalArray()](../html/plotCalArray.html) function. Calibration curves vary in their temporal resolution, and the preferred resolution can be specified using the parameter **inc** which interpolates the calibration curve. It would become prohibitively time and memory costly if analysing the entire 50,000 year range of the calibration curve at a 1 year resolution (requiring a 50,000 by 50,000 array) and in practice the default 5 year resolution provides equivalent results to 1 year resolution for study periods wider than c.1000 years. ```{r, eval = TRUE, fig.height = 5, fig.width=7, fig.align = "center", dev='jpeg'} x <- makeCalArray( calcurve=shcal20, calrange=c(5500,6000), inc=1 ) plotCalArray(x) ``` ## Comparison with other calibration software It is worth noting that the algorithm used by this package to calibrate ^14^C dates gives practically equivalent results to those from [OxCal](https://c14.arch.ox.ac.uk/oxcal.html) generated using [oxcAAR](https://cran.r-project.org/package=oxcAAR) and [Bchron](https://cran.r-project.org/package=Bchron) ![Comparison of calibration software for the ^14^C date: 3000 +/- 50 BP calibrated through intcal13.](software_compare_1.png) However, there are two fringe circumstances where these software programs differ substantially: at the border of the calibration curve; and if a date has a large error. ### Edge effects Consider the real ^14^C date [MAMS-13035] <https://doi.org/10.1016/j.aeae.2015.11.003> age: 50524 +/- 833 BP calibrated through intcal13, which only extends to 46401BP. Bchron throws an error, whilst OxCal applies a one-to-one mapping between Conventional Radiocarbon (CRA) time and calendar time for any date (mean) beyond the range of the calibration curve. 
The latter is in theory a reasonable way to mitigate the problem, however OxCal applies this in a binary manner that can create peculiarities. Instead ADMUR gradually fades the calibration curve to a one-to-one mapping between the end of the curve and 60,000 BP. ![Comparison of calibration software at the limits of intcal13. OxCal and Bchron produce a truncated distribution for date C. Bchron cannot calibrate date D, and OxCal suggests date D is younger than dates A, B and C. ADMUR performs a soft fade at the limit of the calibration curve.](software_compare_2.png) ### Large errors A ^14^C date is typically reported as a mean date with an error, which is often interpreted as representing a symmetric Gaussian distribution before calibration. However, a Gaussian has a non-zero probability at all possible years (between -$\infty$ and +$\infty$), and therefore cannot fairly represent the date uncertainty which must be skewed towards the past. Specifically, if we consider the date in CRA time, it must have a zero probability of occurring in the future. Alternatively, if we consider the date as a ^14^C/^12^C ratio, it cannot be smaller than 1 (the present). Therefore ADMUR assumes a ^14^C date error is lognormally distributed with a mean equal to the CRA date, and a variance equal to the CRA error squared. This naturally skews the distribution away from the present. In practice, this difference is undetectably trivial for typical radiocarbon errors since the lognormal distribution approximates a normal distribution away from zero. However, theoretically the differences can be large if considering dates with large errors that are close to the present. ![Comparison of calibration software for the ^14^C dates 15000 +/- 9000 BP, 15000 +/- 3000 BP and 15000 +/- 1000 BP, using intcal13. The total probability mass of each of the nine curves equals 1. Differences are apparent if a date has a large error (top tile): Bchron assumes the CRA error is Normally distributed, resulting in a truncated curve with a substantial probability at present. OxCal produces a heavily skewed distribution with a low probability at present and a substantial probability at 50,000 BP that suddenly truncates to zero beyond this. ADMUR assumes the CRA error is Lognormally distributed, which is indistinguishable from a normal distribution for typical errors, but naturally prevents any probability mass occurring at the present or future when errors are large.](software_compare_3.png) ## Phased data: adjusting for ascertainment bias A naive approach to generating an SPD as a proxy for population dynamics would be to sum all dates in the dataset, but a more sensible approach is to sum the SPDs of each phase. The need to bin dates into phases is an important step in modelling population dynamics to adjust for the data ascertainment bias of some archaeological finds having more dates by virtue of a larger research interest or budget. Therefore [phaseCalibrator()](../html/phaseCalibrator.html) generates an SPD for each phase in a dataset, and includes a binning algorithm which provides a useful solution to handling large datasets that have not been phased. For example, consider the following 8 dates from 2 sites: ```{r, eval = TRUE} data <- subset( SAAD, site %in% c('Carrizal','Pacopampa') ) data[,2:7] ``` The data have not already been phased (do not include a column 'phase') therefore the default binning algorithm calibrates these dates into four phases. 
This is achieved by binning dates that have a mean ^14^C date within 200 ^14^C years of any other date in that respective bin. Therefore Pacopampa.1 comprises samples 1207 and 1206, Pacopampa.2 comprises sample 1205, Carrizal.1 comprises samples 1196 and 1195 and 1194 and 1193, and Carrizal.2 comprises sample 1192: ```{r, eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} CalArray <- makeCalArray( calcurve=shcal20, calrange=c(2000,6000) ) x <- phaseCalibrator(data=data, CalArray=CalArray) plotPD(x) ``` Finally, the distributions in each phase can be summed and normalised to unity. It is straightforward to achieve this directly from the dataframe created above: ```{r, eval = TRUE} SPD <- as.data.frame( rowSums(x) ) # normalise SPD <- SPD/( sum(SPD) * CalArray$inc ) ``` Alternatively, the wrapper function [summedPhaseCalibrator()](../html/summedPhaseCalibrator.html) will perform this entire workflow internally: ```{r, eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} SPD <- summedPhaseCalibrator( data=data, calcurve=shcal20, calrange=c(2000,6000) ) plotPD(SPD) ``` ********** # 3. Continuous Piecewise Linear (CPL) Modelling A CPL model lends itself well to the objectives of identifying specific demographic events. Its parameters are the (x,y) coordinates of the hinge points, which are the relative population size (y) and timing (x) of these events. Crucially, this package calculates model likelihoods (the probability of the data given some proposed parameter combination). This likelihood is used in a search algorithm to find the maximum likelihood parameters; to compare models with different numbers of parameters to find the best fit without overfitting; in Markov Chain Monte Carlo (MCMC) analysis to estimate credible intervals of those parameters; and in a goodness-of-fit test to check that the data is a typical realisation of the maximum likelihood model and its parameters. ## Calculating likelihoods Theoretically a calibrated date should be a continuous Probability Density Function (PDF); however, in practice a date is represented as a discrete vector of probabilities corresponding to each calendar year, and therefore is a Probability Mass Function (PMF). This discretisation provides the advantage that numerical methods can be used to easily calculate relative likelihoods, provided the model is also discretised to the same time points. A [toy()](../html/toy.html) model is provided to demonstrate how this is achieved. First, we simulate a plausible ^14^C dataset and calibrate it. The function [simulateCalendarDates()](../html/simulateCalendarDates.html) automatically covers a slightly wider date range to ensure simulated ^14^C dates are well represented around the edges: ```{r, eval = TRUE} set.seed(12345) N <- 350 # randomly sample calendar dates from the toy model cal <- simulateCalendarDates(toy, N) # Convert to 14C dates. age <- uncalibrateCalendarDates(cal, shcal20) data <- data.frame(age = age, sd = 50, phase = 1:N, datingType = '14C') # Calibrate each phase, taking care to restrict to the modelled date range with 'remove.external' CalArray <- makeCalArray(shcal20, calrange = range(toy$year)) PD <- phaseCalibrator(data, CalArray, remove.external = TRUE) ``` The argument 'remove.external = TRUE' ensures any calibrated phases with less than 50% of their probability mass within the modelled date range are excluded, reducing the effective sample size from 350 to 303.
This is a crucial step to avoid mischievous edge effects of dates outside the date range. Similarly, notice we constrained the CalArray to the modelled date range. These are important to ensure that we only model the population across a range that is well represented by data. To extend the model beyond the range of available data would be to assume the absence of evidence means evidence of absence. No doubt there may be occasions when this is reasonable (for example if modelling the first colonisation of an island that has been well excavated, and the period before arrival is evidenced by the absence of datable material), but more often the range of representative data is due to research interest, and therefore the logic of only including dates with at least 50% of their probability within the date range is that their true dates are more likely to be internal (within the date range) than external. ```{r, eval = TRUE} print( ncol(PD) ) ``` Finally we calculate the overall relative log likelihood of the model using function [loglik()](../html/loglik.html) ```{r, eval = TRUE} loglik(PD=PD, model=toy) ``` For comparison, we can calculate the overall relative likelihood of a uniform model given exactly the same data. Intuitively this should have a lower relative likelihood, since our dataset was randomly generated from the non-uniform toy population history: ```{r, eval = TRUE} uniform.model <- convertPars(pars=NULL, years=5500:7500, type='uniform') loglik(PD=PD, model=uniform.model) ``` And indeed the toy model is thirty nine million trillion times more likely than the uniform model: ```{r, eval = TRUE} exp( loglik(PD=PD, model=toy) - loglik(PD=PD, model=uniform.model) ) ``` Crucially, [loglik()](../html/loglik.html) calculates the relative likelihoods for each effective sample separately (each phase containing a few dates). The overall model likelihood is the overall product of these individual likelihoods. This means that even in the case where there is no ascertainment bias, each date should still be assigned to its own phase, to ensure phaseCalibrator() calibrates each date separately. In contrast, attempting to calculate a likelihood for a single SPD constructed from the entire dataset would be incorrect, as this would be treating the entire dataset as a single 'average' sample. ## The anatomy of a CPL model Having established how to calculate the relative likelihood of a proposed model given a dataset, we can use any out-of-the-box search algorithm to find the maximum likelihood model. This first requires us to describe the PD of any population model in terms of a small number of parameters, rather than a vector of probabilities for each year. We achieve this using the Continuous Piecewise Linear (CPL) model, which is defined by the (x,y) coordinates of its hinge points. ![Illustration of the toy 3-CPL model PD, described using just four coordinate pairs (hinges).](model_plot.svg) When performing a search for the best 3-CPL model coordinates (given a dataset), only five of these eight values are free parameters. The x-coordinates of the start and end (5500 BP and 7500 BP) are fixed by the choice of date range. Additionally, one of the y-coordinates must be constrained by the other parameters, since the total probability (area) must equal 1. As a result, an n-CPL model will have 2n-1 free parameters. ## Parameter space: The Area Breaking Process We use the function [convertPars()](../html/convertPars.html) to map our search parameters to their corresponding PD coordinates. 
This allows us to propose independent parameter values from a uniform distribution between 0 and 1, and convert them into coordinates that describe a corresponding CPL model PD. This parameter-to-coordinate mapping is achieved using a modified stick-breaking Dirichlet process. The Dirichlet Process (not to be confused with the Dirichlet distribution) is an algorithm that can break a stick (the x-axis date range) into a desired number of pieces, ensuring all lengths are sampled evenly. The length (proportion) of remaining stick to break is chosen by sampling from the Beta distribution, such that we use the Beta CDF (with $\alpha$ = 1 and $\beta$ = the number of pieces still to be broken) to convert an x-parameter into its equivalent x-coordinate value. We extend this algorithm for use with the CPL model by also converting y-parameters to y-coordinates as follows: 1. Fix the y-value of the first hinge (H1, x = 5500 BP) to any constant (y = 3 is arbitrarily chosen since the mapping function below gives 3 for an average y-parameter of 0.5). 1. Use the mapping function $f(y) = (1/(1-y))^2 - 1$ to convert all remaining y-parameters (between 0 and 1) to y-values (between 0 and +$\infty$). 1. Calculate the total area, given the y-values and previously calculated x-coordinates. 1. Divide y-values by the total area, to give the y-coordinates of the final PDF. The parameters must be provided as a single vector with an odd length, each between 0 and 1 (y,x,y,x,...y). For example, a randomly generated 6-CPL model will have 11 parameters and 7 hinges: ```{r, eval = TRUE} set.seed(12345) CPLparsToHinges(pars=runif(11), years=5500:7500) ``` Note: The Area Breaking Algorithm is a heuristic that ensures all parameter space is explored and therefore the maximum likelihood parameters are always found. However, unlike the one-dimensional stick-breaking process, its mapping of random parameters to PD coordinates is not perfectly even, and we welcome ideas for a more elegant algorithm. ## Maximum Likelihood parameter search Any preferred search algorithm can be used. For example, the JDEoptim function from [DEoptimR](https://cran.r-project.org/package=DEoptimR) uses a differential evolution optimisation algorithm that performs very nicely for this application. We recommend increasing the default NP parameter to at least 20 times the number of parameters, and repeating the search to ensure consistency: ```{r, eval = FALSE} library(DEoptimR) best <- JDEoptim(lower = rep(0,5), upper = rep(1,5), fn = objectiveFunction, PDarray = PD, type = 'CPL', NP = 100, trace = TRUE) ``` ```{r, echo = FALSE} load('vignette.3CPL.JDEoptim.best.RData') ``` ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} CPL <- CPLparsToHinges(pars=best$par, years=5500:7500) SPD <- summedPhaseCalibrator( data=data, calcurve=shcal20, calrange=c(5500,7500) ) plotPD(SPD) lines(CPL$year, CPL$pdf, lwd=2, col='firebrick') legend(x=6300, y=max(CPL$pdf), cex=0.7, lwd=2, col='firebrick', bty='n', legend='best fitted 3-CPL') text(x=CPL$year, y=CPL$pdf, pos=3, labels=c('H1','H2','H3','H4')) ``` ## Credible interval parameter search using MCMC The ADMUR function [mcmc()](../html/mcmc.html) uses the Metropolis-Hastings algorithm to search joint parameter values of an n-CPL model, given the calibrated probability distributions of phases in a ^14^C dataset (PDarray).
In principle the starting parameters do not matter if burn is of an appropriate length, but in practice it is more efficient to start in a sensible place such as the maximum likelihood parameters: ```{r, eval = FALSE} chain <- mcmc(PDarray=PD, startPars=best$par, type='CPL', N=100000, burn=2000, thin=5, jumps =0.025) ``` The acceptance ratio (AR) and raw chain (before burn-in and thinning) can be sanity checked. Ideally we want the AR somewhere in the range 0.3 to 0.5 (this can be tuned with the 'jumps' argument), and the raw chain to resemble 'hairy caterpillars': ```{r, eval = FALSE} print(chain$acceptance.ratio) par(mfrow=c(3,2), mar=c(4,3,3,1)) col <- 'steelblue' for(n in 1:5){ plot(chain$all.pars[,n], type='l', ylim=c(0,1), col=col, xlab='', ylab='', main=paste('par',n)) } ``` ![Single chain of all 5 raw parameters](mcmc_chain.png){width=680px} These parameters can then be converted to the hinge coordinates using the [convertPars()](../html/convertPars.html) function, and their marginal distributions plotted. Note that the MLE parameters (red lines) may not exactly match the peaks of these distributions because they are only marginals. Note also that the dates of hinges 1 and 4 are fixed at 5500 and 7500: ```{r, eval = FALSE} hinges <- convertPars(pars=chain$res, years=5500:7500, type='CPL') par(mfrow=c(3,2), mar=c(4,3,3,1)) c1 <- 'steelblue' c2 <- 'firebrick' lwd <- 3 pdf.brk <- seq(0,0.0015, length.out=40) yr.brk <- seq(5500,7500,length.out=40) names <- c('Date of H2','Date of H3','PD of H1','PD of H2','PD of H3','PD of H4') hist(hinges$yr2,border=c1,breaks=yr.brk, main=names[1], xlab='');abline(v=CPL$year[2],col=c2,lwd=lwd) hist(hinges$yr3, border=c1,breaks=yr.brk, main=names[2], xlab='');abline(v=CPL$year[3],col=c2,lwd=lwd) hist(hinges$pdf1, border=c1,breaks=pdf.brk, main=names[3], xlab='');abline(v=CPL$pdf[1],col=c2,lwd=lwd) hist(hinges$pdf2, border=c1,breaks=pdf.brk, main=names[4], xlab='');abline(v=CPL$pdf[2],col=c2,lwd=lwd) hist(hinges$pdf3, border=c1,breaks=pdf.brk, main=names[5], xlab='');abline(v=CPL$pdf[3],col=c2,lwd=lwd) hist(hinges$pdf4, border=c1,breaks=pdf.brk, main=names[6], xlab='');abline(v=CPL$pdf[4],col=c2,lwd=lwd) ``` ![Marginal distributions after conversion to hinge coordinates. Maximum Likelihoods (calculated separately) in red.](mcmc_posteriors.png){width=680px} Some two-dimensional combinations of joint parameters may be preferred, but still these are 2D marginal representations of 5D parameters, again with MLE in red: ```{r, eval = FALSE} require(scales) par( mfrow=c(1,2) , mar=c(4,4,1.5,2), cex=0.7 ) plot(hinges$yr2, hinges$pdf2, pch=16, col=alpha(1,0.02), ylim=c(0,0.0005)) points(CPL$year[2], CPL$pdf[2], col='red', pch=16, cex=1.2) plot(hinges$yr3, hinges$pdf3, pch=16, col=alpha(1,0.02), ylim=c(0,0.0015)) points(CPL$year[3], CPL$pdf[3], col='red', pch=16, cex=1.2) ``` ![2D Marginal distributions. Maximum Likelihoods (calculated separately) in red.](mcmc_2D.png){width=680px} Alternatively, the joint distributions can be visualised by plotting the CPL model for each iteration of the chain, with the MLE in red: ```{r, eval = FALSE} plot(NULL, xlim=c(7500,5500),ylim=c(0,0.0011), xlab='calBP', ylab='PD', cex=0.7) for(n in 1:nrow(hinges)){ x <- c(hinges$yr1[n], hinges$yr2[n], hinges$yr3[n], hinges$yr4[n]) y <- c(hinges$pdf1[n], hinges$pdf2[n], hinges$pdf3[n], hinges$pdf4[n]) lines( x, y, col=alpha(1,0.005) ) } lines(x=CPL$year, y=CPL$pdf, lwd=2, col=c2) ``` ![Joint posterior distributions.
Maximum Likelihood (calculated separately) in red.](mcmc_joint.png){width=680px} ## Relative growth and decline rates Percentage growth rates per generation provide an intuitive statistic to quantify and compare population changes through time. However there are two key issues to overcome when estimating growth rates for a CPL model. 1. CPL modelling allows for the possibility of hiatus periods, defined by pieces between hinges with a zero or near zero PD. Conventionally, the percentage decrease from any value to zero is 100%, however the equivalent percentage increase from zero is undefined. 2. Each section of the CPL is a straight line with a constant gradient. However, a straight line has a constantly changing growth/decline rate. The first problem is an extreme manifestation of the asymmetry from conventionally reporting change always with respect to the first value. For example, if we consider a population of 80 individuals at time $t_1$, changing to 100 at $t_2$ and to 80 at $t_3$, this would be conventionally described as a 25% increase followed by a 20% decrease. This asymmetry is unintuitive and unhelpful in the context of population change, and instead we use a *relative rate* which is always calculated with respect to the larger value (e.g., a '20% relative growth' followed by '20% relative decline'). We overcome the second problem by calculating the expected (mean average) rate across the entire linear piece. This is achieved by notionally breaking the line into $N$ equal pieces, such that the coordinates of the ends of the $i^{th}$ piece are $(x_1,y_1)$ and $(x_2,y_2)$. The generational (25 yr) rate $r$ of this $i^{th}$ piece is: $$r_i=100\times exp[\ln(\frac{y_2}{y_1})/\frac{x_1-x_2}{25}]-100$$ and the expected rate across the entire line as $N$ approaches +$\infty$ is: $$\sum_{i=1}^{N}r_i/N$$ For example, a population decline from n=200 to n=160 across 100 years is conventionally considered to have a generational decline rate of $100\times exp[\ln(\frac{160}{200})/\frac{100}{25}]-100$ = 5.426% loss per generation. If partitioned into just $N=2$ equal sections (n=200, n=180, n=160), we require two generational decline rates: $100\times exp[\ln(\frac{180}{200})/\frac{50}{25}]-100$ = 5.132% and $100\times exp[\ln(\frac{160}{180})/\frac{50}{25}]-100$ = 5.719%, giving a mean of 5.425%. As the number of sections $N$ approaches +$\infty$, the mean rate asymptotically approaches 5.507%. The similarity to the conventional rate of 5.426% is because the total percentage loss is small (20%), therefore an exponential curve between n=200 and n=160 is similar to a straight line. In contrast, a huge percentage loss of 99.5% illustrates the importance of calculating the expected growth rate, averaged across the whole line: An exponential curve between n=200 and n=1 across the same 100 years has a decline rate of $100\times exp[\ln(\frac{1}{200})/\frac{100}{25}]-100$ = 73.409% loss per 25 yr generation. Meanwhile a linear model between n=200 and n=1 across the same 100 years has an expected decline rate of 47.835% loss per generation. The relationship between the conventional rate and relative rate is almost identical for realistic rates of change (c. 
-10% to +10% per generation): ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} N <- 1000 x <- cbind(rep(5100,N),rep(5000,N)) y <- cbind(seq(1,100,length.out=N),seq(100,1,length.out=N)) conventional <- 100 * exp(log(y[,2]/y[,1])/((x[,1]-x[,2])/25))-100 relative <- relativeRate(x,y) plot(conventional, relative, type='l') rect(-100,-100,c(10,0,-10),c(10,0,-10), lty=2,border='grey') ``` ********** # 4. Inference ## Model selection using BIC A fundamentally important issue in modelling is the need to avoid overfitting an unjustifiably complex model to data, by using a formal model selection approach. In the example above we arbitrarily chose a 3-CPL model to fit to the data (since the data was randomly sampled from a 3-CPL toy population), however, given the small sample size (n = 303) it is possible a simpler model may have better predictive power. ADMUR achieves this using the so-called Bayesian Information Criterion (BIC) aka Schwarz Information Criterion, which balances the model likelihood against the number of parameters and sample size. Therefore we should also find the Maximum Likelihood for other plausible models such as a 4-CPL, 2-CPL, 1-CPL, exponential and even a uniform: ```{r, eval = FALSE} # CPL parameters must be between 0 and 1, and an odd length. CPL.1 <- JDEoptim(lower=0, upper=1, fn=objectiveFunction, PDarray=PD, type='CPL', NP=20) CPL.2 <- JDEoptim(lower=rep(0,3), upper=rep(1,3), fn=objectiveFunction, PDarray=PD, type='CPL', NP=60) CPL.3 <- JDEoptim(lower=rep(0,5), upper=rep(1,5), fn=objectiveFunction, PDarray=PD, type='CPL', NP=100) CPL.4 <- JDEoptim(lower=rep(0,7), upper=rep(1,7), fn=objectiveFunction, PDarray=PD, type='CPL', NP=140) # exponential has a single parameter, which can be negative (decay). exp <- JDEoptim(lower=-0.01, upper=0.01, fn=objectiveFunction, PDarray=PD, type='exp', NP=20) # uniform has no parameters so a search is not required. uniform <- objectiveFunction(NULL, PD, type='uniform') ``` ```{r, echo = FALSE} load('vignette.model.comparison.RData') ``` The objective function returns the negative log-likelihood since the search algorithm seeks to minimise the objective function. It is therefore trivial to extract the log-likelihoods, and calculate the BIC scores using the formula $BIC=k\ln(n)-2L$ where $k$ is the number of parameters, $n$ is the effective sample size (i.e. the number of phases = 303), and $L$ is the maximum log-likelihood. ```{r, eval = TRUE} # likelihoods data.frame(L1= -CPL.1$value, L2= -CPL.2$value, L3= -CPL.3$value, L4= -CPL.4$value, Lexp= -exp$value, Lunif= -uniform) BIC.1 <- 1*log(303) - 2*(-CPL.1$value) BIC.2 <- 3*log(303) - 2*(-CPL.2$value) BIC.3 <- 5*log(303) - 2*(-CPL.3$value) BIC.4 <- 7*log(303) - 2*(-CPL.4$value) BIC.exp <- 1*log(303) - 2*(-exp$value) BIC.uniform <- 0 - 2*(-uniform) data.frame(BIC.1,BIC.2,BIC.3,BIC.4,BIC.exp,BIC.uniform) ``` Clearly the 4-CPL has the highest likelihood, however the 3-CPL model has the lowest BIC and is selected as the best. This tells us that the 4-CPL is overfitted to the data and is unjustifiably complex, whilst the other models are underfitted and lack explanatory power. 
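For convenience, the BIC scores computed above can also be gathered into a named vector so that the preferred model is identified programmatically. The following is a small sketch in plain base R rather than an ADMUR function, and simply reuses the objects created in the previous chunk:

```{r, eval = FALSE}
# collect the BIC scores computed above into a named vector
BICs <- c('1-CPL' = BIC.1, '2-CPL' = BIC.2, '3-CPL' = BIC.3,
          '4-CPL' = BIC.4, 'exponential' = BIC.exp, 'uniform' = BIC.uniform)

# the model with the lowest BIC is preferred; for this dataset it is the 3-CPL
names(which.min(BICs))
```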
Nevertheless, for comparison we can plot all the competing models, illustrating that the 4-CPL fits most closely, but cannot warn us that it is overfit: ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} # convert parameters to model PDs CPL1 <- convertPars(pars=CPL.1$par, years=5500:7500, type='CPL') CPL2 <- convertPars(pars=CPL.2$par, years=5500:7500, type='CPL') CPL3 <- convertPars(pars=CPL.3$par, years=5500:7500, type='CPL') CPL4 <- convertPars(pars=CPL.4$par, years=5500:7500, type='CPL') EXP <- convertPars(pars=exp$par, years=5500:7500, type='exp') # Plot SPD and five competing models: plotPD(SPD) cols <- c('firebrick','orchid2','coral2','steelblue','goldenrod3') lines(CPL1$year, CPL1$pdf, col=cols[1], lwd=2) lines(CPL2$year, CPL2$pdf, col=cols[2], lwd=2) lines(CPL3$year, CPL3$pdf, col=cols[3], lwd=2) lines(CPL4$year, CPL4$pdf, col=cols[4], lwd=2) lines(EXP$year, EXP$pdf, col=cols[5], lwd=2) legend <- c('1-CPL','2-CPL','3-CPL','4-CPL','exponential') legend(x=6300, y=max(CPL$pdf), cex=0.7, lwd=2, col=cols, bty='n', legend=legend) ``` ## Goodness of fit (GOF) test It is crucial to test if the selected model is plausible, or in other words, to test if the observed data is a reasonable outcome of the model. If the observed data is highly unlikely, the model must be rejected, even if it was the best model selected. Typically a GOF quantifies how unusual it would be for the observed data to be generated by the model. Of course the probability of any particular dataset being generated by any particular model is vanishingly small, so instead we estimate how probable it is for the model to produce the observed data, *or data that are more extreme*. This is a similar concept to the p-value, but instead of using a null hypothesis we use the best selected model. We can generate many simulated datasets under this model, and calculate a summary statistic for each simulation. A one-tailed test will then establish the proportion of simulations that have a poorer summary statistic (more extreme) than the observed data's summary statistic. For each dataset (simulated and observed) we generate an SPD and use a statistic that measures how divergent each SPD is from expectation, by calculating the proportion of the SPD that sits outside the 95% CI. ```{r, eval = FALSE} summary <- SPDsimulationTest(data, calcurve=shcal20, calrange=c(5500,7500), pars=CPL.3$par, type='CPL') ``` The test provides a p-value of 1.00 for the best model (3-CPL), since all of the 20,000 simulated SPDs were as or more extreme than the observed SPD, providing a sanity check that the data cannot be rejected under this model, and that it is therefore a plausible model: ```{r, echo = FALSE} load('vignette.3CPL.SPDsimulationTest.RData') ``` ```{r, eval = TRUE, fig.height = 5, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} print(summary$pvalue) hist(summary$simulated.stat, main='Summary statistic', xlab='') abline(v=summary$observed.stat, col='red') legend(0.3,6000, bty='n', lwd=c(1,3), col=c('red','grey'), legend=c('observed','simulated')) ``` ## SPD simulation testing Part 2 provided a framework to directly select the best model given a dataset. This contrasts with the SPD simulation methodology which requires the researcher to *a priori* specify a single null model, then generate many simulated datasets under this null model which are compared with the observed dataset to generate a p-value.
Without the model selection framework, the SPD simulation approach alone has several inferential shortcomings: * Recent studies increasingly suggest population fluctuations are ubiquitous throughout history, rendering the application of a null model inappropriate. In contrast, this new model selection framework allows any number of models to be compared. * A low p-value merely allows us to reject (or fail to reject) the tested model, but does not provide us with a plausible alternative explanation. This leaves an inferential vacuum in which it is common for researchers to assign colourful demographic narratives to periods outside the 95% CI, which are not directly supported by the test statistic. Instead the CPL framework provides a single best explanation. * Fitting the null model (and therefore estimating its parameters) is commonly achieved by discretising the SPD, then incorrectly assuming these points somehow represent data points, to which the null model is fitted by minimising their residuals. In contrast, the CPL framework correctly calculates the relative likelihood of the proposed model parameters given the data, and therefore can correctly fit a model. Nevertheless, the p-value from the SPD simulation framework is hugely useful in providing a Goodness of Fit test for the best selected model. Therefore the summary generated in the section *'Goodness of fit test'* by the [SPDsimulationTest()](../html/SPDsimulationTest.html) function provides a number of other useful outputs that can be plotted, including: **pvalue** the proportion of N simulated SPDs that have more points outside the 95% CI than the observed SPD has. **observed.stat** the summary statistic for the observed data (number of points outside the 95% CI). **simulated.stat** a vector of summary statistics (number of points outside the 95% CI), one for each simulated SPD. **n.dates.all** the total number of dates in the whole data set. Trivially, the number of rows in data. **n.dates.effective** the effective number of dates within the date range. Will be non-integer since a proportion of some dates will be outside the date range. **n.phases.all** the total number of phases in the whole data set. **n.phases.effective** the effective number of phases within the date range. Will be non-integer since a proportion of some phases will be outside the date range. **n.phases.internal** an integer subset of n.phases.all that have more than 50% of their total probability mass within the date range. **timeseries** a data frame containing the following: **CI** several vectors of various Confidence Intervals. **calBP** a vector of calendar years BP. **expected.sim** a vector of the expected simulation (mean average of all N simulations). **local.sd** a vector of the local (each year) standard deviation of all N simulations. **model** a vector of the model PDF. **SPD** a vector of the observed SPD PDF, generated from data. **index** a vector of -1,0,+1 corresponding to the SPD points that are above, within or below the 95% CI of all N simulations. A minimal sketch of how these outputs can be used is given below.
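As an illustration only (assuming a 'summary' object returned by [SPDsimulationTest()](../html/SPDsimulationTest.html); the exact internal definition of the statistic may use a strict rather than non-strict inequality), the p-value and observed statistic can be approximately recovered from these outputs, and a simple bespoke plot drawn:

```{r, eval = FALSE}
# proportion of simulations at least as extreme as the observed data
mean(summary$simulated.stat >= summary$observed.stat)   # should be close to summary$pvalue

# proportion of calendar years where the observed SPD falls outside the 95% CI
# (index is 0 within the 95% CI, and -1 or +1 outside it)
ts <- summary$timeseries
mean(ts$index != 0)

# a simple bespoke plot of the observed SPD, the fitted model and the mean simulation
plot(ts$calBP, ts$SPD, type='l', xlim=rev(range(ts$calBP)), xlab='calBP', ylab='PD')
lines(ts$calBP, ts$model, col='red')
lines(ts$calBP, ts$expected.sim, col='grey')
```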
```{r, eval = FALSE} summary <- SPDsimulationTest(data, calcurve=shcal20, calrange=c(5500,7500), pars=exp$par, type='exp') ``` The function [plotSimulationSummary()](../html/plotSimulationSummary.html) then represents these summary results in a single plot: ```{r, echo = FALSE} load('vignette.exp.SPDsimulationTest.RData') ``` ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE, message=FALSE} plotSimulationSummary(summary, legend.y=0.0012) ``` ## Other Models in ADMUR The above modelling components (MCMC, GOF, model comparison, relative likelihoods, BIC etc) are not constrained to CPL models, but can be applied to any model structure. Currently ADMUR offers the following: **CPL, Uniform, Exponential, Gaussian, Cauchy, Sinusoidal, Logistic, Power law** See [convertPars()](../html/convertPars.html) for details. Care should be taken when considering a Gaussian model. The distribution of data from a single event can often superficially appear to be normally distributed due to the tendency to unconsciously apply regression methods (minimising the residuals). However, contrary to appearances (and intuitions) a Gaussian does not 'flatten' towards the tails, but decreases at a greater and greater rate towards zero. As a consequence, small amounts of data that are several standard deviations away from the mean *appear* to fit a Gaussian quite well, but under a likelihood framework are in fact absurdly improbable. Instead, for single events consider a Cauchy model, given the phenomenon that real life data usually has fatter tails than a Gaussian. Alternatively, if the waxing and waning of data is suspected to be driven by an oscillating system (such as climate), a sinusoidal model may be more sensible. (A short numerical comparison of Gaussian and Cauchy tails is sketched after the code below.) The following code uses three toy datasets to demonstrate these models. After calibration through intcal20, they retain effective sample sizes of a little under n = 100. ```{r, eval = FALSE} # generate SPDs CalArray <- makeCalArray(intcal20, calrange = c(1000,4000)) spd1 <- summedCalibrator(data1, CalArray, normalise='full') spd2 <- summedCalibrator(data2, CalArray, normalise='full') spd3 <- summedCalibrator(data3, CalArray, normalise='full') # calibrate phases PD1 <- phaseCalibrator(data1, CalArray, remove.external = TRUE) PD2 <- phaseCalibrator(data2, CalArray, remove.external = TRUE) PD3 <- phaseCalibrator(data3, CalArray, remove.external = TRUE) # effective sample sizes ncol(PD1) ncol(PD2) ncol(PD3) # maximum likelihood search, fitting various models to various datasets norm <- JDEoptim(lower=c(1000,1), upper=c(4000,5000), fn=objectiveFunction, PDarray=PD1, type='norm', NP=40, trace=T) cauchy <- JDEoptim(lower=c(1000,1), upper=c(4000,5000), fn=objectiveFunction, PDarray=PD1, type='cauchy', NP=40, trace=T) sine <- JDEoptim(lower=c(0,0,0), upper=c(1/1000,2*pi,1), fn=objectiveFunction, PDarray=PD2, type='sine', NP=60, trace=T) logistic <- JDEoptim(lower=c(0,0000), upper=c(1,10000), fn=objectiveFunction, PDarray=PD3, type='logistic', NP=40, trace=T) exp <- JDEoptim(lower=c(0), upper=c(1), fn=objectiveFunction, PDarray=PD3, type='exp', NP=20, trace=T) power <- JDEoptim(lower=c(0,-10), upper=c(10000,0), fn=objectiveFunction, PDarray=PD3, type='power', NP=40, trace=T) ``` Note the upper boundaries for the sinewave (see [sinewavePDF()](../html/sinewavePDF.html) for details). The first parameter governs the frequency, so should be constrained by a wavelength no shorter than c. 1/10th of the date range.
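Returning briefly to the comparison of Gaussian and Cauchy tails discussed above, the difference is easy to demonstrate with the base R density functions. This is a purely illustrative sketch using arbitrary values, unrelated to the models fitted above:

```{r, eval = FALSE}
# density at the centre and at 4 scale units from the centre
round(dnorm(c(0, 4)), 6)     # the Gaussian density collapses to ~0.0001
round(dcauchy(c(0, 4)), 6)   # the Cauchy density remains ~0.019

# the Cauchy retains roughly two orders of magnitude more density this far out,
# so outlying dates are far less improbable under a Cauchy model
dcauchy(4) / dnorm(4)
```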
Now the maximum likelihood parameters need to be converted into a PDF and plotted: ```{r, eval = FALSE} # convert parameters to model PDs years <- 1000:4000 mod.norm <- convertPars(pars=norm$par, years, type='norm') mod.cauchy <- convertPars(pars=cauchy$par, years, type='cauchy') mod.sine <- convertPars(pars=sine$par, years, type='sine') mod.uniform <- convertPars(pars=NULL, years, type='uniform') mod.logistic <- convertPars(pars=logistic$par, years, type='logistic') mod.exp <- convertPars(pars=exp$par, years, type='exp') mod.power <- convertPars(pars=power$par, years, type='power') # Plot SPDs and various fitted models: par(mfrow=c(3,1), mar=c(4,4,1,1)) cols <- c('steelblue','firebrick','orange') plotPD(spd1) lines(mod.norm, col=cols[1], lwd=5) lines(mod.cauchy, col=cols[2], lwd=5) legend(x=4000, y=max(spd1)*1.2, lwd=5, col=cols, bty='n', legend=c('Gaussian','Cauchy')) plotPD(spd2) lines(mod.sine, col=cols[1], lwd=5) lines(mod.uniform, col=cols[2], lwd=5) legend(x=4000, y=max(spd2)*1.2, lwd=5, col=cols, bty='n', legend=c('Sinewave','Uniform')) plotPD(spd3) lines(mod.logistic, col=cols[1], lwd=5) lines(mod.exp, col=cols[2], lwd=5) lines(mod.power, col=cols[3], lwd=5) legend(x=4000, y=max(spd3)*1.2, lwd=5, col=cols, bty='n', legend=c('Logistic','Exponential','Power Law')) ``` ![Examples of other models available in ADMUR](further_models.png){width=680px} ********** # 5. Taphonomy Taphonomic loss has an important influence on the amount of datable material that can be recovered, with the obvious bias that older material is less likely to survive. This means that if a constant population deposited a perfectly uniform amount of material through time, we should expect the archaeological record to show an increase in dates towards the present, rather than a uniform distribution. This taphonomic loss rate has been estimated by [Surovell et al.](https://doi.org/10.1016/j.jas.2009.03.029) and [Bluhm and Surovell](https://doi.org/10.1017/qua.2018.78), who make a compelling argument that a power function $a(x+b)^c$ provides a useful model of taphonomic loss through time ($x$): it not only provides a good statistical fit to empirical data, but is also consistent with the mechanism that datable material is subject to greater initial environmental degradation when first deposited on the ground surface, compared to the increasing protection through time as it becomes cocooned from these forces. A simple sketch of the shape of this curve is given after the list below. However, there are two important issues to consider when modelling taphonomy: 1. There is substantial uncertainty regarding the values of the parameters that determine the shape of the power function. We should expect different taphonomic rates in different locations due to variation in environmental and geological conditions. Indeed these studies have estimated different parameter values for the two datasets used. 1. There is a common misunderstanding that the taphonomic curve can be used to 'adjust' or 'correct' the data or an SPD to generate a more faithful representation of the true population dynamics. In fact, the inclusion of taphonomy is achieved with additional appropriate model parameters, resulting in a more complex model. Whether or not this greater complexity is justified is moot - the decision to include or exclude cannot be resolved with model comparison and should be justified with an independent argument. When comparing models using BIC, all should be consistent in either including or excluding taphonomic curve parameters.
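Before estimating these parameters formally, it can help to visualise the general shape of the $a(x+b)^c$ loss curve described above. The sketch below uses arbitrary, purely illustrative values of $b$ and $c$ (not the published estimates from the studies cited above), and rescales the curve to its maximum since only the *relative* loss through time matters here:

```{r, eval = FALSE}
# purely illustrative parameter values, chosen only to show the curve's shape
x <- seq(0, 40000, by = 50)   # years BP
b <- 5000
c <- -1.5
loss <- (x + b)^c             # the constant 'a' only rescales the curve

plot(x, loss/max(loss), type='l', xlim=rev(range(x)),
     xlab='yrs BP', ylab='relative survival of datable material')
```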
## Taphonomic curve parameters The above studies use regression methods to estimate the taphonomic curve parameters. These methods don't incorporate the full information of the calibrated 14C dates (instead a point estimate is used for each date), and therefore are not based on likelihoods. Nor do they provide confidence intervals for the curve parameters. Finally, the parameter $a$ is unnecessary for the purposes of population modelling, since we are not interested in estimating the *absolute* loss in material, merely the *relative* loss through time. Therefore we can consider the taphonomic curve as a PDF such that the total area across the study period equals 1. This results in the following formula, where $x$ is time, and $x_{min}$ and $x_{max}$ are the time boundaries of the study period: $$\frac{(c+1)(b+x)^c}{(b+x_{max})^{(c+1)} - (b+x_{min})^{(c+1)}}$$ This defines ADMUR's power model PDF which we apply within the MCMC framework to estimate the joint parameter distributions of $b$ and $c$ from the same two datasets used in the above studies, constraining the study period (as they did) to between 1kyr and 40kyr BP as follows: ```{r, eval = FALSE} # generate an PD array for each dataset years <- seq(1000,40000,by=50) CalArray <- makeCalArray(intcal20, calrange = c(1000,40000),inc=50) PD1 <- phaseCalibrator(bryson1848, CalArray, remove.external = TRUE) PD2 <- phaseCalibrator(bluhm2421, CalArray, remove.external = TRUE) # MCMC search chain.bryson <- mcmc(PDarray=PD1, startPars=c(10000,-1.5), type='power', N=50000, burn=2000, thin=5, jumps =c(250,0.075)) chain.bluhm <- mcmc(PDarray=PD2, startPars=c(10000,-1.5), type='power', N=50000, burn=2000, thin=5, jumps =c(250,0.075)) # convert parameters to taphonomy curves curve.bryson <- convertPars(chain.bryson$res, type='power', years=years) curve.bluhm <- convertPars(chain.bluhm$res, type='power', years=years) # plot plot(NULL, xlim=c(0,12000),ylim=c(-2.5,-1), xlab='parameter b', ylab='parameter c') points(chain.bryson$res, col=cols[1]) points(chain.bluhm$res, col=cols[2]) plot(NULL, xlim=c(0,40000),ylim=c(0,0.00025), xlab='yrs BP', ylab='PD') N <- nrow(chain.bryson$res) for(n in sample(1:N,size=1000)){ lines(years,curve.bryson[n,], col=cols[1]) lines(years,curve.bluhm[n,], col=cols[2]) } ``` ![Joint taphonomic parameter estimates (and equivalent curves) from the MCMC chain generated in ADMUR using datasets used in Surovell et al. 2009 and Bluhm and Surovell 2018](taphonomy_bryson_bluhm.png){width=680px} Clearly the taphonomic parameters $b$ and $c$ are highly correlated, and although the curves superficially appear very similar, the parameters differ significantly between the two datasets. ## Including taphonomy in a model ### Maximum Likelihood Search Taphonomy can be included in any ADMUR model by including the argument *taphonomy = TRUE*, which will then use the last two model parameters as the taphonomic parameters $b$ and $c$. We suggest constraining these parameters to $0 < b < 20000$ and $-3 < c < 0$, but if there is better prior knowledge of this range (perhaps an independent dataset based on volcanic eruptions for the same study area) then this can be further constrained accordingly. 
For example, we might perform a maximum likelihood search using the previously generated PD array, to find the best 3-CPL model with and without taphonomy as follows: ```{r, eval = FALSE} best <- JDEoptim(lower=c(0,0,0,0,0), upper=c(1,1,1,1,1), fn=objectiveFunction, PDarray=PD, type='CPL', taphonomy=F, trace=T, NP=100) best.taph <- JDEoptim(lower=c(0,0,0,0,0,0,-3), upper=c(1,1,1,1,1,20000,0), fn=objectiveFunction, PDarray=PD, type='CPL', taphonomy=T, trace=T, NP=140) ``` These parameters can then be converted to model PDFs and plotted: ```{r, echo = FALSE} load('vignette.3CPL.JDEoptim.best.RData') load('vignette.3CPL.JDEoptim.best.taph.RData') ``` ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} CPL <- convertPars(pars=best$par, years=5500:7500, type='CPL', taphonomy=F) CPL.taph <- convertPars(pars=best.taph$par, years=5500:7500, type='CPL', taphonomy=T) SPD <- summedPhaseCalibrator( data=data, calcurve=shcal20, calrange=c(5500,7500) ) plotPD(SPD) lines(CPL$year, CPL$pdf, lwd=2, col=cols[1]) lines(CPL.taph$year, CPL.taph$pdf, lwd=2, col=cols[2]) legend(x=6300,y=0.001,legend=c('3-CPL','3-CPL with taphonomy'),bty='n',col=cols[1:2],lwd=2,cex=0.7) ``` The above *3-CPL with taphonomy* model represents a conflation of two model components: the population dynamics and the taphonomic loss. Instead we are interested in separating these components: ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} pop <- convertPars(pars=best.taph$par[1:5], years=5500:7500, type='CPL') taph <- convertPars(pars=best.taph$par[6:7], years=5500:7500, type='power') plotPD(pop) title('Population dynamics') plotPD(taph) title('Taphonomic loss') ``` Finally, the hinge coordinates of the population dynamics can be extracted: ```{r, eval = TRUE} CPLparsToHinges(pars=best.taph$par[1:5], years=5500:7500) ``` ### MCMC search for credible intervals We should always be cautious of assigning too much importance to point estimates. The Maximum Likelihood Estimates above are no exception to this. Smaller sample sizes will always result in larger uncertainties, and it is always better to estimate the plausible range of results. This is of particular concern with taphonomic parameters since the reanalysis of the volcanic datasets above illustrates how a large range of parameter combinations provide very similar taphonomic curves. Furthermore, when taphonomy is included in the model, the taphonomic parameters have the potential to interact with the population dynamics parameters, such that many different parameter combinations can produce similarly plausible overall radiocarbon date distributions.
Therefore we can perform an MCMC parameter search as follows: ```{r, eval = FALSE} chain.taph <- mcmc(PDarray = PD, startPars = c(0.5,0.5,0.5,0.5,0.5,10000,-1.5), type='CPL', taphonomy=T, N = 30000, burn = 2000, thin = 5, jumps = 0.025) ``` These can then be separated into population dynamics parameters and taphonomic parameters for either direct plotting, or converted to model PDFs and plotted: ```{r, eval = FALSE} # convert parameters into model PDFs pop <- convertPars(pars=chain.taph$res[,1:5], years=5500:7500, type='CPL') taph <- convertPars(pars=chain.taph$res[,6:7], years=seq(1000,30000,by=50), type='power') # plot population dynamics PDF plot(NULL, xlim=c(7500,5500),ylim=c(0,0.0013), xlab='calBP', ylab='PD', las=1) for(n in 1:nrow(pop))lines(5500:7500, pop[n,],col=alpha(1,0.05)) # plot taphonomy PDF plot(NULL, xlim=c(30000,0),ylim=c(0,0.00025), xlab='calBP', ylab='PD',las=1,) for(n in 1:nrow(taph))lines(seq(1000,30000,by=50), taph[n,],col=alpha(1,0.02)) # plot taphonomic parameters plot(NULL, xlim=c(0,20000),ylim=c(-3,0), xlab='parameter b', ylab='parameter c',las=1) for(n in 1:nrow(chain.taph$res))points(chain.taph$res[n,6], chain.taph$res[n,7],col=alpha(1,0.2),pch=20) ``` ![Joint posterior distributions of population dynamics only.](mcmc_pop_without_taph.png){width=680px} ![Joint posterior distributions of taphonomy only. Clearly there is not enough information content in such a small toy dataset to narrow the taphonomic parameters better than the initial prior constraints](mcmc_taph.png){width=680px} ********** ![](four_logos.png){width=680px} **********
--- title: | ![](four_logos.png){height=0.5in} Replicating published results author: "Adrian Timpson" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteEngine{knitr::rmarkdown} %\VignetteIndexEntry{Replicating published results from doi:10.1098/rstb.2019.0723} %\usepackage[utf8]{inputenc} --- <style> p.caption {font-size: 0.7em;} </style> ********** This vignette provides the R code used to generate all results, plots and tables in the following publication: ### Directly modelling population dynamics in the South American Arid Diagonal using 14C dates by Adrian Timpson, Ramiro Barberena, Mark G. Thomas, Cesar Mendez and Katie Manning, published in Philosophical Transactions of the Royal Society B, 2020. https://doi.org/10.1098/rstb.2019.0723 The only exception to this is the exclusion of R code for figure 3, which is an adaptation of [Fig 7 from Peel et al 2007](https://doi.org/10.5194/hess-11-1633-2007) and is therefore not novel. Each section of this vignette provides stand-alone R code that is not reliant on objects created earlier in the vignette. As such, there is some repetition between sections. Setting random seeds is not necessary, but can be used to ensure random components are identical to those used in the publication. The generation and calibration of each random dataset takes seconds to complete. In contrast, the simulation tests, the searches performed by JDEoptim and the generation of MCMC chains require several hours to complete. Therefore the code for each section is separated into two or more blocks. The first block always includes all slow components, which are saved by the last line of code. This provides a firewall to allow plots to be quickly generated on a later occasion using the remaining block(s), which run in seconds. Sometimes there is an intermediate block which takes a few seconds to perform some pre-plot processing. ********** # Figure 1 ## Simulating datasets from a 3-CPL toy. Generate key objects: ```{r, eval = FALSE} library(ADMUR) library(DEoptimR) N <- 1500 # generate 5 sets of random calendar dates under the toy model. set.seed(882) cal1 <- simulateCalendarDates(model = toy, N) set.seed(884) cal2 <- simulateCalendarDates(model = toy, N) set.seed(886) cal3 <- simulateCalendarDates(model = toy, N) set.seed(888) cal4 <- simulateCalendarDates(model = toy, N) set.seed(890) cal5 <- simulateCalendarDates(model = toy, N) # Convert to 14C dates. age1 <- uncalibrateCalendarDates(cal1, shcal20) age2 <- uncalibrateCalendarDates(cal2, shcal20) age3 <- uncalibrateCalendarDates(cal3, shcal20) age4 <- uncalibrateCalendarDates(cal4, shcal20) age5 <- uncalibrateCalendarDates(cal5, shcal20) # construct data frames. One date per phase.
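# these data frames use the column format consumed by phaseCalibrator: age, sd, phase, datingType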
data1 <- data.frame(age = age1, sd = 25, phase = 1:N, datingType = '14C') data2 <- data.frame(age = age2, sd = 25, phase = 1:N, datingType = '14C') data3 <- data.frame(age = age3, sd = 25, phase = 1:N, datingType = '14C') data4 <- data.frame(age = age4, sd = 25, phase = 1:N, datingType = '14C') data5 <- data.frame(age = age5, sd = 25, phase = 1:N, datingType = '14C') # Calibrate each phase, taking care to restrict to the modelled date range CalArray <- makeCalArray(shcal20, calrange = range(toy$year), inc = 5) PD1 <- phaseCalibrator(data1, CalArray, remove.external = TRUE) PD2 <- phaseCalibrator(data2, CalArray, remove.external = TRUE) PD3 <- phaseCalibrator(data3, CalArray, remove.external = TRUE) PD4 <- phaseCalibrator(data4, CalArray, remove.external = TRUE) PD5 <- phaseCalibrator(data5, CalArray, remove.external = TRUE) # Generate SPD of each dataset SPD1 <- summedCalibrator(data1, CalArray, normalise='full') SPD2 <- summedCalibrator(data2, CalArray, normalise='full') SPD3 <- summedCalibrator(data3, CalArray, normalise='full') SPD4 <- summedCalibrator(data4, CalArray, normalise='full') SPD5 <- summedCalibrator(data5, CalArray, normalise='full') # 3-CPL parameter search lower <- rep(0,5) upper <- rep(1,5) fn <- objectiveFunction best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=100) best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=100) best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=100) best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=100) best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=100) #save results, for separate plotting save(best1,best2,best3,best4,best5,SPD1,SPD2,SPD3,SPD4,SPD5, file='results.RData',version=2) ``` Generate plot: ```{r, eval = FALSE} library(ADMUR) load('results.RData') oldpar <- par(no.readonly = TRUE) pdf('Fig1.pdf',height=4,width=10) par(mar=c(2,4,0.1,2)) plot(NULL, xlim=c(7500,5500), ylim=c(0,0.0011), xlab='', ylab='', xaxs='i',cex.axis=0.7, bty='n',las=1) axis(1,at=6400,labels='calBP',tick=F) axis(2,at=-0.00005,labels='PD',tick=F, las=1) lwd1 <- 1 lwd2 <- 2 lwd3 <- 3 legend(x=6000, y = 0.0011, bty='n', cex=0.7, legend=c('True (toy) population', 'SPD 1', 'SPD 2', 'SPD 3', 'SPD 4', 'SPD 5', 'Pop model 1', 'Pop model 2', 'Pop model 3', 'Pop model 4', 'Pop model 5'), lwd=c(lwd3,rep(lwd1,5),rep(lwd2,5)), col=c(1,2:6,2:6) ) years <- as.numeric(row.names(SPD1)) # plot SPDs lines(years,SPD1[,1],col=2, lwd=lwd1) lines(years,SPD2[,1],col=3, lwd=lwd1) lines(years,SPD3[,1],col=4, lwd=lwd1) lines(years,SPD4[,1],col=5, lwd=lwd1) lines(years,SPD5[,1],col=6, lwd=lwd1) # convert parameters to model pdfs mod.1 <- convertPars(pars=best1$par, years=years, type='CPL') mod.2 <- convertPars(pars=best2$par, years=years, type='CPL') mod.3 <- convertPars(pars=best3$par, years=years, type='CPL') mod.4 <- convertPars(pars=best4$par, years=years, type='CPL') mod.5 <- convertPars(pars=best5$par, years=years, type='CPL') lines(mod.1$year,mod.1$pdf,col=2,lwd=lwd2) lines(mod.2$year,mod.2$pdf,col=3,lwd=lwd2) lines(mod.3$year,mod.3$pdf,col=4,lwd=lwd2) lines(mod.4$year,mod.4$pdf,col=5,lwd=lwd2) lines(mod.5$year,mod.5$pdf,col=6,lwd=lwd2) # plot true toy model lines(toy$year, toy$pdf, lwd=lwd3) dev.off() par(oldpar) ``` ![Low resolution png of Figure 1](Fig1.png) ********** # Figure 2 ## Model selection with small simulated data. 
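For context (mirroring the calculation performed in the pre-plot processing block below), model comparison in this and the following sections uses the Bayesian Information Criterion (BIC, also called the Schwarz criterion), computed from each model's maximum log likelihood $\hat{L}$, its number of free parameters $K$, and the effective sample size $N$ (taken as the number of columns of the PD array):

$$\text{BIC} = K\ln(N) - 2\ln(\hat{L})$$

The model with the lowest BIC is preferred.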
Generate key objects: ```{r, eval = FALSE} library(ADMUR) library(DEoptimR) set.seed(888) N <- c(6,20,60,180,360,540) names <- c('sample1','sample2','sample3','sample4','sample5','sample6') # generate 6 sets of random calendar dates under the toy model. cal1 <- simulateCalendarDates(model = toy, N[1]) cal2 <- simulateCalendarDates(model = toy, N[2]) cal3 <- simulateCalendarDates(model = toy, N[3]) cal4 <- simulateCalendarDates(model = toy, N[4]) cal5 <- simulateCalendarDates(model = toy, N[5]) cal6 <- simulateCalendarDates(model = toy, N[6]) # Convert to 14C dates. age1 <- uncalibrateCalendarDates(cal1, shcal20) age2 <- uncalibrateCalendarDates(cal2, shcal20) age3 <- uncalibrateCalendarDates(cal3, shcal20) age4 <- uncalibrateCalendarDates(cal4, shcal20) age5 <- uncalibrateCalendarDates(cal5, shcal20) age6 <- uncalibrateCalendarDates(cal6, shcal20) # construct data frames. One date per phase. data1 <- data.frame(age = age1, sd = 25, phase = 1:N[1], datingType = '14C') data2 <- data.frame(age = age2, sd = 25, phase = 1:N[2], datingType = '14C') data3 <- data.frame(age = age3, sd = 25, phase = 1:N[3], datingType = '14C') data4 <- data.frame(age = age4, sd = 25, phase = 1:N[4], datingType = '14C') data5 <- data.frame(age = age5, sd = 25, phase = 1:N[5], datingType = '14C') data6 <- data.frame(age = age6, sd = 25, phase = 1:N[6], datingType = '14C') # narrow domain of the model to the range of data, # since absence of evidence in periods well outside the data should # not be interpreted as evidence of absence. # Only required when sample sizes are extremely small. # Otherwise the data domain is constrained by the model date range. r1 <- estimateDataDomain(data1, shcal20) # narrower range for extremely small samples CalArray1 <- makeCalArray(shcal20, calrange = c( max(r1[1],5500) , min(r1[2],7500) ), inc = 5) CalArray <- makeCalArray(shcal20, calrange = range(toy$year), inc = 5) # Calibrate each phase PD1 <- phaseCalibrator(data1, CalArray1, remove.external = TRUE) PD2 <- phaseCalibrator(data2, CalArray, remove.external = TRUE) PD3 <- phaseCalibrator(data3, CalArray, remove.external = TRUE) PD4 <- phaseCalibrator(data4, CalArray, remove.external = TRUE) PD5 <- phaseCalibrator(data5, CalArray, remove.external = TRUE) PD6 <- phaseCalibrator(data6, CalArray, remove.external = TRUE) PD <- list(PD1, PD2, PD3, PD4, PD5, PD6); names(PD) <- names # Generate SPD of each dataset SPD1 <- summedCalibrator(data1, CalArray, normalise='full') SPD2 <- summedCalibrator(data2, CalArray, normalise='full') SPD3 <- summedCalibrator(data3, CalArray, normalise='full') SPD4 <- summedCalibrator(data4, CalArray, normalise='full') SPD5 <- summedCalibrator(data5, CalArray, normalise='full') SPD6 <- summedCalibrator(data6, CalArray, normalise='full') SPD <- list(SPD1, SPD2, SPD3, SPD4, SPD5, SPD6); names(SPD) <- names # Uniform model: No parameters. # Log Likelihood calculated directly using objectiveFunction, without a search required. 
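# (objectiveFunction returns a negative log likelihood, hence the leading minus sign below)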
unif1.loglik <- -objectiveFunction(pars = NULL, PDarray = PD1, type = 'uniform') unif2.loglik <- -objectiveFunction(pars = NULL, PDarray = PD2, type = 'uniform') unif3.loglik <- -objectiveFunction(pars = NULL, PDarray = PD3, type = 'uniform') unif4.loglik <- -objectiveFunction(pars = NULL, PDarray = PD4, type = 'uniform') unif5.loglik <- -objectiveFunction(pars = NULL, PDarray = PD5, type = 'uniform') unif6.loglik <- -objectiveFunction(pars = NULL, PDarray = PD6, type = 'uniform') uniform <- list(unif1.loglik, unif2.loglik, unif3.loglik, unif4.loglik, unif5.loglik, unif6.loglik) names(uniform) <- names # Best 1-CPL model. Parameters and log likelihood found using search lower <- rep(0,1) upper <- rep(1,1) fn <- objectiveFunction best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=20) best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=20) best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=20) best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=20) best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=20) best6 <- JDEoptim(lower, upper, fn, PDarray=PD6, type='CPL',trace=T,NP=20) CPL1 <- list(best1, best2, best3, best4, best5, best6); names(CPL1) <- names # Best 2-CPL model. Parameters and log likelihood found using search lower <- rep(0,3) upper <- rep(1,3) fn <- objectiveFunction best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=60) best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=60) best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=60) best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=60) best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=60) best6 <- JDEoptim(lower, upper, fn, PDarray=PD6, type='CPL',trace=T,NP=60) CPL2 <- list(best1, best2, best3, best4, best5, best6); names(CPL2) <- names # Best 3-CPL model. Parameters and log likelihood found using search lower <- rep(0,5) upper <- rep(1,5) fn <- objectiveFunction best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=100) best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=100) best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=100) best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=100) best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=100) best6 <- JDEoptim(lower, upper, fn, PDarray=PD6, type='CPL',trace=T,NP=100) CPL3 <- list(best1, best2, best3, best4, best5, best6); names(CPL3) <- names # Best 4-CPL model. Parameters and log likelihood found using search lower <- rep(0,7) upper <- rep(1,7) fn <- objectiveFunction best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=140) best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=140) best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=140) best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=140) best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=140) best6 <- JDEoptim(lower, upper, fn, PDarray=PD6, type='CPL',trace=T,NP=140) CPL4 <- list(best1, best2, best3, best4, best5, best6); names(CPL4) <- names # Best 5-CPL model.
Parameters and log likelihood found using search lower <- rep(0,9) upper <- rep(1,9) fn <- objectiveFunction best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=180) best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=180) best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=180) best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=180) best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=180) best6 <- JDEoptim(lower, upper, fn, PDarray=PD6, type='CPL',trace=T,NP=180) CPL5 <- list(best1, best2, best3, best4, best5, best6); names(CPL5) <- names # save results, for separate plotting save(SPD, PD, uniform, CPL1, CPL2, CPL3, CPL4, CPL5, file='results.RData',version=2) ``` Pre-plot processing: ```{r, eval = FALSE} library(ADMUR) load('results.RData') # Calculate BICs for all six sample sizes and all six models BIC <- as.data.frame(matrix(,6,6)) row.names(BIC) <- c('uniform','1-CPL','2-CPL','3-CPL','4-CPL','5-CPL') for(s in 1:6){ # extract log likelihoods for each model loglik <- c(uniform[[s]], -CPL1[[s]]$value, -CPL2[[s]]$value, -CPL3[[s]]$value, -CPL4[[s]]$value, -CPL5[[s]]$value) # extract effective sample sizes for each model N <- c(rep(ncol(PD[[s]]),6)) # number of parameters for each model K <- c(0, 1, 3, 5, 7, 9) # calculate BIC for each model BIC[,s] <- log(N)*K - 2*loglik # store effective sample size names(BIC)[s] <- paste('N',N[1],sep='=') } # Show all BICs for all sample sizes and models print(BIC) ``` Generate plot: ```{r, eval = FALSE} oldpar <- par(no.readonly = TRUE) # Fig 2 plot pdf('Fig2.pdf',height=6,width=13) layout(mat=matrix(1:14, 2, 7, byrow = F),widths=c(0.3,rep(1,6)), heights=c(1,1.5),respect=T) # plot two blanks first par(mar=c(5,4,1.5,0),las=2) ymax <- 0.0032 plot(NULL, xlim=c(0,1),ylim=c(0,1),main='', xlab='',ylab='',bty='n',xaxt='n',yaxt='n') mtext(side=2, at=0.5,text='BIC',las=0,line=1) plot(NULL, xlim=c(0,1),ylim=sqrt(c(0,ymax)),main='', xlab='',ylab='',bty='n',xaxt='n',yaxt='n') axis(side=2, at=sqrt(seq(0,ymax,by=0.001)), labels=round(seq(0,ymax,by=0.001),4),las=1) mtext(side=2, at=sqrt(0.00025),text='PD',las=0,line=0.8,cex=1) abline(h=sqrt(seq(0,ymax,by=0.001)),col='grey') for(n in 1:6){ # extract the best model (lowest BIC) BICs <- BIC[,n] best <- which(BICs==min(BICs)) # convert parameters to model if(best==1){ type <- 'uniform' pars <- NULL } if(best!=1)type <- 'CPL' if(best==2)pars <- CPL1[[n]]$par if(best==3)pars <- CPL2[[n]]$par if(best==4)pars <- CPL3[[n]]$par if(best==5)pars <- CPL4[[n]]$par if(best==6)pars <- CPL5[[n]]$par spd.years <- as.numeric(row.names(SPD[[n]])) spd.pdf <- SPD[[n]][,1] mod.years <- as.numeric(row.names(PD[[n]])) model <- convertPars(pars, mod.years, type) # plot red <- 'firebrick' col <- rep('grey35',6); col[best] <- red ymin <- min(BIC)-diff(range(BIC))*0.15 par(mar=c(5,3,1.5,1),las=2) plot(BICs,xlab='',ylab='',xaxt='n',pch=20,cex=3,col=col, main='') axis(side=1, at=1:6, labels=c('Uniform','1-piece','2-piece','3-piece','4-piece','5-piece')) par(mar=c(5,1,1.5,1),las=2) plot(NULL,type='l', xlab='Cal Yrs BP', ylab='',yaxt='n', col='steelblue', main=paste('N =',ncol(PD[[n]])), ylim=sqrt(c(0,ymax)), xlim=c(7500,5500)) abline(h=sqrt(seq(0,ymax,by=0.001)),col='grey') polygon(c(min(spd.years),spd.years,max(spd.years)),sqrt(c(0,spd.pdf,0)),col='steelblue',border=NA) lwd=3 lines(toy$year,sqrt(toy$pdf),lwd=lwd) lines(model$year, sqrt(model$pdf), lwd=lwd, col=red) } dev.off() par(oldpar) ``` ![Low resolution png of Figure 2](Fig2.png) ********** # 
Figure 4 ## SPD simulation analysis of SAAD data Generate key objects: ```{r, eval = FALSE} library(ADMUR) set.seed(999) # best exponential parameter previously found using ML search for Fig 5. summary <- SPDsimulationTest(data=SAAD, calcurve=shcal20, calrange=c(2500,14000), pars=-0.0001674152, type='exp', N=20000) save(summary, file='results.RData',version=2) ``` Generate plot: ```{r, eval = FALSE} library(ADMUR) load('results.RData') oldpar <- par(no.readonly = TRUE) pdf('Fig4.pdf',height=4,width=10) par(mar=c(2,4,0.1,0.1)) plotSimulationSummary(summary, legend.x=11500,legend.y=0.0003) axis(side=1, at=2500,labels='calBP',tick=F) dev.off() par(oldpar) ``` ![Low resolution png of Figure 4](Fig4.png) ********** # Figure 5 ## Model selection of SAAD data. Generate key objects: ```{r, eval = FALSE} library(ADMUR) library(DEoptimR) # Generate SPD SPD <- summedPhaseCalibrator(data=SAAD, calcurve=shcal20, calrange = c(2500,14000)) # Calibrate each phase CalArray <- makeCalArray(calcurve=shcal20, calrange = c(2500,14000)) PD <- phaseCalibrator(data=SAAD, CalArray, remove.external = TRUE) # Best exponential model. Parameter and log likelihood found using search exp <- JDEoptim(lower=-0.01, upper=0.01, fn=objectiveFunction, PDarray=PD, type='exp', trace=T, NP=20) # Best CPL models. Parameters and log likelihood found using search fn <- objectiveFunction CPL1 <- JDEoptim(lower=rep(0,1), upper=rep(1,1), fn, PDarray=PD, type='CPL',trace=T,NP=20) CPL2 <- JDEoptim(lower=rep(0,3), upper=rep(1,3), fn, PDarray=PD, type='CPL',trace=T,NP=60) CPL3 <- JDEoptim(lower=rep(0,5), upper=rep(1,5), fn, PDarray=PD, type='CPL',trace=T,NP=100) CPL4 <- JDEoptim(lower=rep(0,7), upper=rep(1,7), fn, PDarray=PD, type='CPL',trace=T,NP=140) CPL5 <- JDEoptim(lower=rep(0,9), upper=rep(1,9), fn, PDarray=PD, type='CPL',trace=T,NP=180) CPL6 <- JDEoptim(lower=rep(0,11),upper=rep(1,11),fn, PDarray=PD, type='CPL',trace=T,NP=220) # save results, for separate plotting save(SPD, PD, exp, CPL1, CPL2, CPL3, CPL4, CPL5, CPL6, file='results.RData',version=2) ``` Pre-plot: ```{r, eval = FALSE} library(ADMUR) load('results.RData') # Calculate BICs for all seven models # name of each model model <- c('exponential','1-CPL','2-CPL','3-CPL','4-CPL','5-CPL','6-CPL') # extract log likelihoods for each model loglik <- c(-exp$value, -CPL1$value, -CPL2$value, -CPL3$value, -CPL4$value, -CPL5$value, -CPL6$value) # extract effective sample sizes N <- c(rep(ncol(PD),7)) # number of parameters for each model K <- c(1, 1, 3, 5, 7, 9, 11) # calculate BIC for each model BICs <- log(N)*K - 2*loglik # convert best 3-CPL parameters into model pdf best <- convertPars(pars=CPL3$par, years=c(2500:14000), type='CPL') ``` Generate plot: ```{r, eval = FALSE} oldpar <- par(no.readonly = TRUE) pdf('Fig5.pdf',height=4,width=10) par(mfrow=c(1,2)) # model comparison par(mar=c(6,6,2,0.1)) red <- 'firebrick' blue <- 'steelblue' col <- rep('grey35',7); col[which(BICs==min(BICs))] <- red plot(BICs,xlab='',ylab='',xaxt='n', pch=20,cex=2,col=col,main='',las=1,cex.axis=0.7) labels <- c('exponential','1-CPL','2-CPL','3-CPL','4-CPL','5-CPL','6-CPL') axis(side=1, at=1:7, las=2, labels=labels, cex.axis=0.9) mtext(side=2, at=mean(BICs),text='BIC',las=0,line=3) # best fitting CPL years <- as.numeric(row.names(SPD)) plot(NULL,xlim=rev(range(years)), ylim=range(SPD), type='l',xlab='kyr cal BP',xaxt='n', ylab='',las=1,cex.axis=0.7) axis(1,at=seq(14000,3000,by=-1000), labels=seq(14,3,by=-1),cex.axis=0.9) mtext(side=2, at=max(SPD[,1])/2,text='PD',las=0,line=3.5,cex=1)
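# draw the SPD as a filled silhouette, then overlay the best-fitting 3-CPL model PDF in red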
polygon(c(min(years),years,max(years)),c(0,SPD[,1],0),col=blue,border=NA) lines(best$year,best$pdf,col=red,lwd=3) legend(x=14000,y=0.0003,lwd=c(5,3),col=c(blue,red),bty='n',legend=c('SPD','3-CPL')) dev.off() par(oldpar) ``` ![Low resolution png of Figure 5](Fig5.png) ********** # Figure 6, Figure 7, Table 2 ## Parameter estimates and CI of SAAD data. Generate key objects: ```{r, eval = FALSE} library(ADMUR) library(DEoptimR) # Calibrate each phase CalArray <- makeCalArray(calcurve=shcal20, calrange = c(2500,14000)) PD <- phaseCalibrator(data=SAAD, CalArray, remove.external = TRUE) # arbitrary starting parameters chain <- mcmc(PDarray=PD, startPars=rep(0.5,5), type='CPL', N=100000, burn=2000, thin=5, jumps=0.025) # find ML parameters best.pars <- JDEoptim(lower=rep(0,5),upper=rep(1,5),fn=objectiveFunction,PDarray=PD,type='CPL',trace=T,NP=100)$par # save results, for separate plotting save(chain, best.pars, file='results.RData',version=2) ``` Pre-plot processing: ```{r, eval = FALSE} library('ADMUR') library('scales') load('results.RData') # Convert Maximum Likelihood parameters to hinge coordinates ML <- CPLparsToHinges(best.pars,years=c(2500,14000)) # Convert MCMC chain of parameters to hinge coordinates hinges <- CPLparsToHinges(chain$res, years=c(2500,14000)) # check the acceptance ratio is sensible (c. 0.2 to 0.5) chain$acceptance.ratio # Eyeball the entire chain, before burn-in and thinning for(n in 1:5)plot(chain$all.pars[,n], type='l', ylim=c(0,1)) # Generate CI for Fig 7 N <- nrow(hinges) years <- 2500:14000 Y <- length(years) pdf.matrix <- matrix(,N,Y) for(n in 1:N){ yr <- c('yr1','yr2','yr3','yr4') pdf <- c('pdf1','pdf2','pdf3','pdf4') pdf.matrix[n,] <- approx(x=hinges[n,yr],y=hinges[n,pdf],xout=years, ties='ordered')$y } CI <- matrix(,Y,6) for(y in 1:Y)CI[y,] <- quantile(pdf.matrix[,y],prob=c(0.025,0.125,0.25,0.75,0.875,0.975)) ``` Generate Figure 6: ```{r, eval = FALSE} oldpar <- par(no.readonly = TRUE) pdf('Fig6.pdf',height=5,width=11) par(mfrow=c(2,3)) lwd <- 3 red='firebrick' grey='grey65' breaks.yr <- seq(14000,2000,length.out=80) breaks.pdf <- seq(0,0.0003,length.out=80) xlab.yr <- 'yrs BP' xlab.pdf <-'PD' names <- c('Date of Hinge B','Date of Hinge C','PD of Hinge A','PD of Hinge B','PD of Hinge C','PD of Hinge D') hist(hinges$yr3, breaks=breaks.yr, col=grey, border=NA, main=names[1], xlab=xlab.yr) abline(v = ML$year[3], col=red, lwd=lwd) hist(hinges$yr2, breaks=breaks.yr, col=grey, border=NA, main=names[2], xlab=xlab.yr) abline(v = ML$year[2], col=red, lwd=lwd) hist(hinges$pdf4, breaks=breaks.pdf, col=grey, border=NA, main=names[3], xlab=xlab.pdf) abline(v = ML$pdf[4], col=red, lwd=lwd) hist(hinges$pdf3, breaks=breaks.pdf, col=grey, border=NA, main=names[5], xlab=xlab.pdf) abline(v = ML$pdf[3], col=red, lwd=lwd) hist(hinges$pdf2, breaks=breaks.pdf, col=grey, border=NA, main=names[4], xlab=xlab.pdf) abline(v = ML$pdf[2], col=red, lwd=lwd) hist(hinges$pdf1, breaks=breaks.pdf, col=grey, border=NA, main=names[6], xlab=xlab.pdf) abline(v = ML$pdf[1], col=red, lwd=lwd) dev.off() par(oldpar) ``` ![Low resolution png of Figure 6](Fig6.png) Generate Figure 7: ```{r, eval = FALSE} oldpar <- par(no.readonly = TRUE) pdf('Fig7.pdf',height=5,width=12) grey1 <- 'grey90' grey2 <- 'grey70' grey3 <- 'grey50' red <- 'firebrick' par(mfrow=c(1,2),las=0) plot(NULL,xlim=c(14000,2500),ylim=c(0,0.00025),xlab='kyr cal BP',xaxt='n', ylab='PD', las=1, cex.axis=0.7) set.seed(888) S <- sample(1:N,size=1000) for(n in 1:1000){
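# draw one piecewise-linear model PDF (connecting its four hinges) for each sampled set of posterior parameters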
lines(x=hinges[S[n],c('yr1','yr2','yr3','yr4')],y=hinges[S[n],c('pdf1','pdf2','pdf3','pdf4')],col=alpha('black',0.05)) } lines(ML$year, ML$pdf,col='firebrick',lwd=2) axis(1,at=seq(14000,3000,by=-1000), labels=seq(14,3,by=-1)) text(x=ML$year, y=ML$pdf + c(-0.00002,-0.00002,0.00002,0.00002), labels=rev(c('A','B','C','D'))) legend(legend=c('Maximum Likelihood model PDF','Model PDF sampled from joint posterior parameters'), x = 6000,y = 0.00024,cex = 0.7,bty = 'n',border = NA, xjust = 1, lwd=c(2,1), col=c(red,grey3)) plot(NULL,xlim=c(14000,2500),ylim=c(0,0.00025),xlab='kyr cal BP',xaxt='n', ylab='PD', las=1, cex.axis=0.7) polygon(x=c(years,rev(years)),c(CI[,1],rev(CI[,6])),col=grey1,border=F) polygon(x=c(years,rev(years)),c(CI[,2],rev(CI[,5])),col=grey2,border=F) polygon(x=c(years,rev(years)),c(CI[,3],rev(CI[,4])),col=grey3,border=F) a <- 0.05 cex <- 0.2 points(hinges$yr1,hinges$pdf1,pch=20,col=alpha(red,alpha=a),cex=cex) points(hinges$yr2,hinges$pdf2,pch=20,col=alpha(red,alpha=a),cex=cex) points(hinges$yr3,hinges$pdf3,pch=20,col=alpha(red,alpha=a),cex=cex) points(hinges$yr4,hinges$pdf4,pch=20,col=alpha(red,alpha=a),cex=cex) axis(1,at=seq(14000,3000,by=-1000), labels=seq(14,3,by=-1)) legend(legend=c('Joint posterior parameters','50% CI of model PDF','75% CI of model PDF','95% CI of model PDF'), x = 10000,y = 0.00024,cex = 0.7,bty = 'n',border = NA, xjust = 1, pch = c(16,NA,NA,NA), col = c(red,NA,NA,NA), fill = c(NA,grey3,grey2,grey1), x.intersp = c(1.5,1,1,1)) dev.off() par(oldpar) ``` ![Low resolution png of Figure 7](Fig7.png) Generate Table 2 ```{r, eval = FALSE} #---------------------------------------------------------------------------------------------- # dates (H = hinge) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - H.A.date <- ML$year[4] H.B.date <- round(ML$year[3]) H.C.date <- round(ML$year[2]) H.D.date <- ML$year[1] H.B.date.CI <- round(quantile(hinges$yr3,prob=c(0.025,0.975))) H.C.date.CI <- round(quantile(hinges$yr2,prob=c(0.025,0.975))) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # gradients (P = phase or piece) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - P.1.gradient <- (ML$pdf[3] - ML$pdf[4]) / (ML$year[4] - ML$year[3]) P.2.gradient <- (ML$pdf[2] - ML$pdf[3]) / (ML$year[3] - ML$year[2]) P.3.gradient <- (ML$pdf[1] - ML$pdf[2]) / (ML$year[2] - ML$year[1]) P.1.gradient.mcmc <- (hinges$pdf3 - hinges$pdf4) / (hinges$yr4 - hinges$yr3) P.2.gradient.mcmc <- (hinges$pdf2 - hinges$pdf3) / (hinges$yr3 - hinges$yr2) P.3.gradient.mcmc <- (hinges$pdf1 - hinges$pdf2) / (hinges$yr2 - hinges$yr1) P.1.gradient.CI <- quantile(P.1.gradient.mcmc,prob=c(0.025,0.975)) P.2.gradient.CI <- quantile(P.2.gradient.mcmc,prob=c(0.025,0.975)) P.3.gradient.CI <- quantile(P.3.gradient.mcmc,prob=c(0.025,0.975)) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # relative growth rate per generation (P = phase or piece) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - P.1.growth <- round(relativeRate(x=c(ML$year[3],ML$year[4]), y=c(ML$pdf[3],ML$pdf[4]) ),2) P.2.growth <- round(relativeRate(x=c(ML$year[2],ML$year[3]), y=c(ML$pdf[2],ML$pdf[3]) ),2) P.3.growth <- round(relativeRate(x=c(ML$year[1],ML$year[2]), y=c(ML$pdf[1],ML$pdf[2]) ),2) P.1.growth.mcmc <- relativeRate(x=hinges[,c('yr3','yr4')], y=hinges[,c('pdf3','pdf4')] ) P.2.growth.mcmc <- 
relativeRate(x=hinges[,c('yr2','yr3')], y=hinges[,c('pdf2','pdf3')] ) P.3.growth.mcmc <- relativeRate(x=hinges[,c('yr1','yr2')], y=hinges[,c('pdf1','pdf2')] ) P.1.growth.CI <- round(quantile(P.1.growth.mcmc,prob=c(0.025,0.975)),2) P.2.growth.CI <- round(quantile(P.2.growth.mcmc,prob=c(0.025,0.975)),2) P.3.growth.CI <- round(quantile(P.3.growth.mcmc,prob=c(0.025,0.975)),2) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # summary # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - headings <- c('Linear phase between hinges', 'Start yrs BP (95% CI)','End yrs BP (95% CI)', 'Gradient (x 10^-9 per year)(95% CI)', 'Relative growth rate per 25 yr generation (95% CI)') all.dates <- c(H.A.date, paste(H.B.date,' (',H.B.date.CI[2],' to ',H.B.date.CI[1],')',sep=''), paste(H.C.date,' (',H.C.date.CI[2],' to ',H.C.date.CI[1],')',sep=''), H.D.date) all.gradients <- round(c(P.1.gradient, P.2.gradient, P.3.gradient) / 1e-09, 1) all.gradients.lower <- round(c(P.1.gradient.CI[1], P.2.gradient.CI[1], P.3.gradient.CI[1]) / 1e-09, 1) all.gradients.upper <- round(c(P.1.gradient.CI[2], P.2.gradient.CI[2], P.3.gradient.CI[2]) / 1e-09, 1) col.1 <- c('1 (A-B)', '2 (B-C)', '3 (C-D)') col.2 <- all.dates[1:3] col.3 <- all.dates[2:4] col.4 <- c(paste(all.gradients[1],' (',all.gradients.lower[1],' to ',all.gradients.upper[1],')',sep=''), paste(all.gradients[2],' (',all.gradients.lower[2],' to ',all.gradients.upper[2],')',sep=''), paste(all.gradients[3],' (',all.gradients.lower[3],' to ',all.gradients.upper[3],')',sep='')) col.5 <- c(paste(P.1.growth,'%',' (',P.1.growth.CI[1],' to ',P.1.growth.CI[2],')',sep=''), paste(P.2.growth,'%',' (',P.2.growth.CI[1],' to ',P.2.growth.CI[2],')',sep=''), paste(P.3.growth,'%',' (',P.3.growth.CI[1],' to ',P.3.growth.CI[2],')',sep='')) res <- cbind(col.1,col.2,col.3,col.4,col.5); colnames(res) <- headings write.csv(res, 'Table 2.csv', row.names=F) ``` ```{r, eval = TRUE, echo = FALSE} tb2 <- read.csv(file='Table2.csv') print(tb2) ``` ********** # Table 1 Generate key objects: ```{r, eval = FALSE} library(ADMUR) library(DEoptimR) # generate a set of random calendar dates under the toy model. set.seed(888) cal <- simulateCalendarDates(model = toy, 1500) # Convert to 14C dates. age <- uncalibrateCalendarDates(cal, shcal20) # construct data frame. One date per phase. data <- data.frame(age = age, sd = 25, phase = 1:1500, datingType = '14C') # Calibrate each phase CalArray <- makeCalArray(shcal20, calrange = range(toy$year), inc = 5) PD <- phaseCalibrator(data, CalArray, remove.external = TRUE) # Generate SPD SPD <- summedCalibrator(data, CalArray) # Uniform model: No parameters. # Log Likelihood calculated directly using objectiveFunction, without a search required. unif.loglik <- -objectiveFunction(pars = NULL, PDarray = PD, type = 'uniform') # Best CPL models. 
Parameters and log likelihood found using seach fn <- objectiveFunction CPL1 <- JDEoptim(lower=rep(0,1), upper=rep(1,1), fn, PDarray=PD, type='CPL',trace=T,NP=20) CPL2 <- JDEoptim(lower=rep(0,3), upper=rep(1,3), fn, PDarray=PD, type='CPL',trace=T,NP=60) CPL3 <- JDEoptim(lower=rep(0,5), upper=rep(1,5), fn, PDarray=PD, type='CPL',trace=T,NP=100) CPL4 <- JDEoptim(lower=rep(0,7), upper=rep(1,7), fn, PDarray=PD, type='CPL',trace=T,NP=140) CPL5 <- JDEoptim(lower=rep(0,9), upper=rep(1,9), fn, PDarray=PD, type='CPL',trace=T,NP=180) # save results, for separate plotting save(SPD, PD, unif.loglik, CPL1, CPL2, CPL3, CPL4, CPL5, file='results.RData',version=2) ``` Pre-process and generate table: ```{r, eval = FALSE} load('results.RData') # Calculate BICs for all six models # name of each model model <- c('uniform','1-CPL','2-CPL','3-CPL','4-CPL','5-CPL') # extract log likelihoods for each model loglik <- c(unif.loglik, -CPL1$value, -CPL2$value, -CPL3$value, -CPL4$value, -CPL5$value) # extract effective sample sizes N <- c(rep(ncol(PD),6)) # number of parameters for each model K <- c(0, 1, 3, 5, 7, 9) # calculate BIC for each model BIC <- log(N)*K - 2*loglik table <- data.frame(Model=model, Parameters=K, MaxLogLikelihood=loglik, BIC=BIC) names(table) <- c('model','parameter','maximum log likelihood','BIC') print(table) write.csv(table,file='Table 1.csv', row.names=F) ``` ```{r, eval = TRUE, echo = FALSE} tb1 <- read.csv(file='Table1.csv') print(tb1) ``` ********** ![](four_logos.png){height=0.55in} **********
--- title: | ![](four_logos.png){width=680px} ADMUR: Ancient Demographic Modelling Using Radiocarbon author: "Adrian Timpson" date: "`r Sys.Date()`" output: rmarkdown::html_vignette: toc: true toc_depth: 2 logo: logo.jpg vignette: > %\VignetteEngine{knitr::rmarkdown} %\VignetteIndexEntry{Guide to using ADMUR} %\usepackage[utf8]{inputenc} --- <style> p.caption {font-size: 0.7em;} </style> ********** # 1. Overview ## Introduction to ADMUR This vignette provides a comprehensive guide to modelling population dynamics using the R package ADMUR, and accompanies the publication 'Directly modelling population dynamics in the South American Arid Diagonal using 14C dates', Philosophical Transactions B, 2020, A. Timpson et al. https://doi.org/10.1098/rstb.2019.0723 Throughout this vignette, R code blocks often use objects created earlier in the vignette in previous code blocks. However, the manual for each function provides examples with self sufficient R code blocks. The motivation for creating the ADMUR package is to provide a robust framework to infer population dynamics from radiocarbon datasets, given the uncontroversial assumption that (to a first order of approximation) the archaeological record contains more dateable anthropogenic material from prehistoric periods when population levels were greater. Unfortunately, the spatiotemporal sparsity of radiocarbon data conspires with the wiggly nature of the calibration curve to encourage the overinterpretation of such datasets, often leading to colourful but statistically unjustified interpretations of population dynamics. No statistical method can (or ever will) be able to perfectly reconstruct the true population dynamics from such a dataset. ADMUR is no exception to this, but provides tools to infer a plausible yet conservative reconstruction of population dynamics. ## Installation The ADMUR package can be installed directly from the CRAN in the usual way: ```{r, eval = FALSE} install.packages('ADMUR') ``` Alternatively it can be installed from GitHub, after installing and loading the 'devtools' package on the CRAN: ```{r, eval = FALSE} install.packages('devtools') library(devtools) install_github('UCL/ADMUR') ``` Either way, the ADMUR package can then be locally loaded: ```{r, message = FALSE} library(ADMUR) ``` ## 14C datasets A summary of the available help files and data sets included in the package can be browsed, which include a terrestrial anthropogenic ^14^C dataset from the South American Arid Diagonal: ```{r, eval = FALSE} help(ADMUR) help(SAAD) ``` Datasets must be structured as a data frame that include columns 'age' and 'sd', which represent the uncalibrated ^14^C age and its error, respectively. ```{r, eval = TRUE} SAAD[1:5,1:8] ``` ## Citations Citations are available as follows: ```{r, eval = TRUE} citation('ADMUR') ``` ********** # 2. Date calibration and SPDs The algorithm used by ADMUR to calculate model likelihoods of a ^14^C dataset uses several functions to first calibrate ^14^C dates. These functions are also intrinsically useful for ^14^C date calibration or for generating a Summed Probability Distribution (SPD). ## Calibrated ^14^C date probability distributions Generating a single calibrated date distribution or SPD requires either a two-step process to give the user full control of the date range and temporal resolution, or a simpler one step process using a wrapper function that automatically estimates a sensible date range and resolution from the dataset, performs the two step process internally, and plots the SPD. 
### With the wrapper 1. Use the function [summedCalibratorWrapper()](../html/summedCalibratorWrapper.html) ```{r, eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} data <- data.frame( age=c(6562,7144), sd=c(44,51) ) x <- summedCalibratorWrapper(data) ``` Notice the function assumes the data provided were all ^14^C dates. However, if you have other kinds of date such as thermoluminescence you can specify this. Non-^14^C types are assumed to be in calendar time, BP. You can also specify a particular calibration curve: ```{r, eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} data <- data.frame( age=c(6562,7144), sd=c(44,51), datingType=c('14C','TL') ) x <- summedCalibratorWrapper(data=data, calcurve=shcal20) ``` ### Without the wrapper Generating the SPD without the wrapper gives you more control, and requires a two-step process: 1. Convert a calibration curve to a CalArray using the function [makeCalArray()](../html/makeCalArray.html) 1. Calibrate the ^14^C dates through the CalArray using the function [summedCalibrator()](../html/summedCalibrator.html). This is useful for improving computational times if generating many SPDs, for example in a simulation framework, since the CalArray needs generating only once. ```{r, eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} data <- data.frame( age = c(9144), sd=c(151) ) CalArray <- makeCalArray( calcurve=intcal20, calrange=c(8000,13000) ) cal <- summedCalibrator(data, CalArray) plotPD(cal) ``` The CalArray is essentially a two-dimensional probability array of the calibration curve, and can be viewed using the [plotCalArray()](../html/plotCalArray.html) function. Calibration curves vary in their temporal resolution, and the preferred resolution can be specified using the parameter **inc** which interpolates the calibration curve. It would become prohibitively time and memory costly if analysing the entire 50,000 year range of the calibration curve at a 1 year resolution (requiring a 50,000 by 50,000 array) and in practice the default 5 year resolution provides equivalent results to 1 year resolution for study periods wider than c.1000 years. ```{r, eval = TRUE, fig.height = 5, fig.width=7, fig.align = "center", dev='jpeg'} x <- makeCalArray( calcurve=shcal20, calrange=c(5500,6000), inc=1 ) plotCalArray(x) ``` ## Comparison with other calibration software It is worth noting that the algorithm used by this package to calibrate ^14^C dates gives practically equivalent results to those from [OxCal](https://c14.arch.ox.ac.uk/oxcal.html) generated using [oxcAAR](https://cran.r-project.org/package=oxcAAR) and [Bchron](https://cran.r-project.org/package=Bchron) ![Comparison of calibration software for the ^14^C date: 3000 +/- 50 BP calibrated through intcal13.](software_compare_1.png) However, there are two fringe circumstances where these software programs differ substantially: at the border of the calibration curve; and if a date has a large error. ### Edge effects Consider the real ^14^C date [MAMS-13035] <https://doi.org/10.1016/j.aeae.2015.11.003> age: 50524 +/- 833 BP calibrated through intcal13, which only extends to 46401BP. Bchron throws an error, whilst OxCal applies a one-to-one mapping between Conventional Radiocarbon (CRA) time and calendar time for any date (mean) beyond the range of the calibration curve. 
The latter is in theory a reasonable way to mitigate the problem, however OxCal applies this in a binary manner that can create peculiarities. Instead ADMUR gradually fades the calibration curve to a one-to-one mapping between the end of the curve and 60,000 BP. ![Comparison of calibration software at the limits of intcal13. OxCal and Bchron produce a truncated distribution for date C. Bchron cannot calibrate date D, and OxCal suggests date D is younger than dates A, B and C. ADMUR performs a soft fade at the limit of the calibration curve.](software_compare_2.png) ### Large errors A ^14^C date is typically reported as a mean date with an error, which is often interpreted as representing a symmetric Gaussian distribution before calibration. However, a Gaussian has a non-zero probability at all possible years (between -$\infty$ and +$\infty$), and therefore cannot fairly represent the date uncertainty which must be skewed towards the past. Specifically, if we consider the date in CRA time, it must have a zero probability of occurring in the future. Alternatively, if we consider the date as a ^14^C/^12^C ratio, it cannot be smaller than 1 (the present). Therefore ADMUR assumes a ^14^C date error is lognormally distributed with a mean equal to the CRA date, and a variance equal to the CRA error squared. This naturally skews the distribution away from the present. In practice, this difference is undetectably trivial for typical radiocarbon errors since the lognormal distribution approximates a normal distribution away from zero. However, theoretically the differences can be large if considering dates with large errors that are close to the present. ![Comparison of calibration software for the ^14^C dates 15000 +/- 9000 BP, 15000 +/- 3000 BP and 15000 +/- 1000 BP, using intcal13. The total probability mass of each of the nine curves equals 1. Differences are apparent if a date has a large error (top tile): Bchron assumes the CRA error is Normally distributed, resulting in a truncated curve with a substantial probability at present. OxCal produces a heavily skewed distribution with a low probability at present and a substantial probability at 50,000 BP that suddenly truncates to zero beyond this. ADMUR assumes the CRA error is Lognormally distributed, which is indistinguishable from a normal distribution for typical errors, but naturally prevents any probability mass occurring at the present or future when errors are large.](software_compare_3.png) ## Phased data: adjusting for ascertainment bias A naive approach to generating an SPD as a proxy for population dynamics would be to sum all dates in the dataset, but a more sensible approach is to sum the SPDs of each phase. The need to bin dates into phases is an important step in modelling population dynamics to adjust for the data ascertainment bias of some archaeological finds having more dates by virtue of a larger research interest or budget. Therefore [phaseCalibrator()](../html/phaseCalibrator.html) generates an SPD for each phase in a dataset, and includes a binning algorithm which provides a useful solution to handling large datasets that have not been phased. For example, consider the following 8 dates from 2 sites: ```{r, eval = TRUE} data <- subset( SAAD, site %in% c('Carrizal','Pacopampa') ) data[,2:7] ``` The data have not already been phased (do not include a column 'phase') therefore the default binning algorithm calibrates these dates into four phases. 
this is achieved by binning dates that have a mean ^14^C date within 200 ^14^C years of any other date in that respective bin. Therefore Pacopampa.1 comprises samples 1207 and 1206, Pacopampa.2 comprises sample 1205, Carrizal.1 comprises samples 1196 and 1195 and 1194 and 1193, and Carrizal.2 comprises sample 1192: ```{r, eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} CalArray <- makeCalArray( calcurve=shcal20, calrange=c(2000,6000) ) x <- phaseCalibrator(data=data, CalArray=CalArray) plotPD(x) ``` Finally, the distributions in each phase can be summed and normalised to unity. It is straight forward to achieve this directly from the dataframe created above: ```{r, eval = TRUE} SPD <- as.data.frame( rowSums(x) ) # normalise SPD <- SPD/( sum(SPD) * CalArray$inc ) ``` Alternatively, the wrapper function [summedPhaseCalibrator()](../html/summedPhaseCalibrator.html) will perform this entire workflow internally: ```{r, eval = TRUE, fig.height = 3, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} SPD <- summedPhaseCalibrator( data=data, calcurve=shcal20, calrange=c(2000,6000) ) plotPD(SPD) ``` ********** # 3. Continuous Piecewise Linear (CPL) Modelling A CPL model lends itself well to the objectives of identifying specific demographic events. Its parameters are the (x,y) coordinates of the hinge points, which are the relative population size (y) and timing (x) of these events. Crucially, this package calculates model likelihoods (the probability of the data given some proposed parameter combination). This likelihood is used in a search algorithm to find the maximum likelihood parameters; to compare models with different numbers parameters to find the best fit without overfitting; in Monte-Carlo Markov Chain (MCMC) analysis to estimate credible intervals of those parameters; and in a goodness-of-fit test to check that the data is a typical realisation of the maximum likelihood model and its parameters. ## Calculating likelihoods Theoretically a calibrated date should be a continuous Probability Density Function (PDF), however in practice a date is represented as a discrete vector of probabilities corresponding to each calendar year, and therefore is a Probability Mass Function (PMF). This discretisation provides the advantage that numerical methods can be used to easily calculate relative likelihoods, provided the model is also discretised to the same time points. A [toy()](../html/toy.html) model is provided to demonstrate how this achieved. First, we simulate a plausible ^14^C dataset and calibrate it. The function [simulateCalendarDates()](../html/simulateCalendarDates.html) automatically covers a slightly wider date range to ensure simulated ^14^C dates are well represented around the edges: ```{r, eval = TRUE} set.seed(12345) N <- 350 # randomly sample calendar dates from the toy model cal <- simulateCalendarDates(toy, N) # Convert to 14C dates. age <- uncalibrateCalendarDates(cal, shcal20) data <- data.frame(age = age, sd = 50, phase = 1:N, datingType = '14C') # Calibrate each phase, taking care to restrict to the modelled date range with 'remove.external' CalArray <- makeCalArray(shcal20, calrange = range(toy$year)) PD <- phaseCalibrator(data, CalArray, remove.external = TRUE) ``` The argument 'remove.external = TRUE' ensures any calibrated phases with less than 50% of their probability mass within the modelled date range are excluded, reducing the effective sample size from 350 to 303. 
This is a crucial step to avoid mischievous edge effects of dates outside the date range. Similarly, notice we constrained the CalArray to the modelled date range. These are important to ensure that we only model the population across a range that is well represented by data. To extend the model beyond the range of available data would be to assume the absence of evidence means evidence of absence. No doubt there may be occasions when this is reasonable (for example if modelling the first colonisation of an island that has been well excavated, and the period before arrival is evidenced by the absence of datable material), but more often the range of representative data is due to research interest, and therefore the logic of only including dates with at least 50% of their probability within the date range is that their true dates are more likely to be internal (within the date range) than external. ```{r, eval = TRUE} print( ncol(PD) ) ``` Finally we calculate the overall relative log likelihood of the model using function [loglik()](../html/loglik.html) ```{r, eval = TRUE} loglik(PD=PD, model=toy) ``` For comparison, we can calculate the overall relative likelihood of a uniform model given exactly the same data. Intuitively this should have a lower relative likelihood, since our dataset was randomly generated from the non-uniform toy population history: ```{r, eval = TRUE} uniform.model <- convertPars(pars=NULL, years=5500:7500, type='uniform') loglik(PD=PD, model=uniform.model) ``` And indeed the toy model is thirty nine million trillion times more likely than the uniform model: ```{r, eval = TRUE} exp( loglik(PD=PD, model=toy) - loglik(PD=PD, model=uniform.model) ) ``` Crucially, [loglik()](../html/loglik.html) calculates the relative likelihoods for each effective sample separately (each phase containing a few dates). The overall model likelihood is the overall product of these individual likelihoods. This means that even in the case where there is no ascertainment bias, each date should still be assigned to its own phase, to ensure phaseCalibrator() calibrates each date separately. In contrast, attempting to calculate a likelihood for a single SPD constructed from the entire dataset would be incorrect, as this would be treating the entire dataset as a single 'average' sample. ## The anatomy of a CPL model Having established how to calculate the relative likelihood of a proposed model given a dataset, we can use any out-of-the-box search algorithm to find the maximum likelihood model. This first requires us to describe the PD of any population model in terms of a small number of parameters, rather than a vector of probabilities for each year. We achieve this using the Continuous Piecewise Linear (CPL) model, which is defined by the (x,y) coordinates of its hinge points. ![Illustration of the toy 3-CPL model PD, described using just four coordinate pairs (hinges).](model_plot.svg) When performing a search for the best 3-CPL model coordinates (given a dataset), only five of these eight values are free parameters. The x-coordinates of the start and end (5500 BP and 7500 BP) are fixed by the choice of date range. Additionally, one of the y-coordinates must be constrained by the other parameters, since the total probability (area) must equal 1. As a result, an n-CPL model will have 2n-1 free parameters. ## Parameter space: The Area Breaking Process We use the function [convertPars()](../html/convertPars.html) to map our search parameters to their corresponding PD coordinates. 
This allows us to propose independent parameter values from a uniform distribution between 0 and 1, and convert them into coordinates that describe a corresponding CPL model PD. This parameter-to-coordinate mapping is achieved using a modified stick breaking Dirichlet process. The Dirichlet Process (not to be confused with the Dirichlet distribution) is an algorithm that can break a stick (the x-axis date range) into a desired number of pieces, ensuring all lengths are sampled evenly. The length (proportion) of remaining stick to break is chosen by sampling from the Beta distribution, such that we use the Beta CDF (with $\alpha$ = 1 and $\beta$ = the number of pieces still to be broken) to convert an x-parameter into its equivalent x-coordinate value. We extend this algorithm for use with the CPL model by also converting y-parameters to y-coordinates as follows: 1. Fix the y-value of the first hinge (H1, x = 5500 BP) to any constant (y = 3 is arbitrarily chosen since the mapping function below gives 3 for an average y-parameter of 0.5). 1. Use the mapping function $f(y) = (1/(1-y))^2 - 1$ to convert all remaining y-parameters (between 0 and 1) to y-values (between 0 and +$\infty$). 1. Calculate the total area, given the y-values and previously calculated x-coordinates. 1. Divide y-values by the total area, to give the y-coordinates of the final PDF. The parameters must be provided as a single vector with an odd length, each between 0 and 1 (y,x,y,x,...y). For example, a randomly generated 6-CPL model will have 11 parameters and 7 hinges: ```{r, eval = TRUE} set.seed(12345) CPLparsToHinges(pars=runif(11), years=5500:7500) ``` Note: The Area Breaking Algorithm is a heuristic that ensures all parameter space is explored and therefore the maximum likelihood parameters are always found. However, unlike the one-dimensional stick-breaking process, its mapping of random parameters to PD coordinates is not perfectly even, and we welcome ideas for a more elegant algorithm. ## Maximum Likelihood parameter search Any preferred search algorithm can be used. For example, the JDEoptim function from [DEoptimR](https://cran.r-project.org/package=DEoptimR) uses a differential evolution optimisation algorithm that performs very nicely for this application. We recommend increasing the default NP parameter to at least 20 times the number of parameters, and repeating the search to ensure consistency: ```{r, eval = FALSE} library(DEoptimR) best <- JDEoptim(lower = rep(0,5), upper = rep(1,5), fn = objectiveFunction, PDarray = PD, type = 'CPL', NP = 100, trace = TRUE) ``` ```{r, echo = FALSE} load('vignette.3CPL.JDEoptim.best.RData') ``` ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} CPL <- CPLparsToHinges(pars=best$par, years=5500:7500) SPD <- summedPhaseCalibrator( data=data, calcurve=shcal20, calrange=c(5500,7500) ) plotPD(SPD) lines(CPL$year, CPL$pdf, lwd=2, col='firebrick') legend(x=6300, y=max(CPL$pdf), cex=0.7, lwd=2, col='firebrick', bty='n', legend='best fitted 3-CPL') text(x=CPL$year, y=CPL$pdf, pos=3, labels=c('H1','H2','H3','H4')) ``` ## Credible interval parameter search using MCMC The ADMUR function [mcmc()](../html/mcmc.html) uses the Metropolis-Hastings algorithm to search joint parameter values of an n-CPL model, given the calibrated probability distributions of phases in a ^14^C dataset (PDarray).
In principle the starting parameters do not matter if burn is of an appropriate length, but in practice it is more efficient to start in a sensible place such as the maximum likelihood parameters: ```{r, eval = FALSE} chain <- mcmc(PDarray=PD, startPars=best$par, type='CPL', N=100000, burn=2000, thin=5, jumps =0.025) ``` The acceptance ratio (AR) and raw chain (before burn-in and thinning) can be sanity checked. Ideally we want the AR somewhere in the range 0.3 to 0.5 (this can be tuned with the 'jumps' argument), and the raw chain to resemble 'hairy caterpillars': ```{r, eval = FALSE} print(chain$acceptance.ratio) par(mfrow=c(3,2), mar=c(4,3,3,1)) col <- 'steelblue' for(n in 1:5){ plot(chain$all.pars[,n], type='l', ylim=c(0,1), col=col, xlab='', ylab='', main=paste('par',n)) } ``` ![Single chain of all 5 raw parameters](mcmc_chain.png){width=680px} These parameters can then be converted to the hinge coordinates using the [convertPars()](../html/convertPars.html) function, and their marginal distributions plotted. Note, the MLE parameters (red lines) may not exactly match the peaks of these distributions because they are only marginals. Note also the dates of hinges 1 and 4 are fixed at 5500 and 7500: ```{r, eval = FALSE} hinges <- convertPars(pars=chain$res, years=5500:7500, type='CPL') par(mfrow=c(3,2), mar=c(4,3,3,1)) c1 <- 'steelblue' c2 <- 'firebrick' lwd <- 3 pdf.brk <- seq(0,0.0015, length.out=40) yr.brk <- seq(5500,7500,length.out=40) names <- c('Date of H2','Date of H3','PD of H1','PD of H2','PD of H3','PD of H4') hist(hinges$yr2,border=c1,breaks=yr.brk, main=names[1], xlab='');abline(v=CPL$year[2],col=c2,lwd=lwd) hist(hinges$yr3, border=c1,breaks=yr.brk, main=names[2], xlab='');abline(v=CPL$year[3],col=c2,lwd=lwd) hist(hinges$pdf1, border=c1,breaks=pdf.brk, main=names[3], xlab='');abline(v=CPL$pdf[1],col=c2,lwd=lwd) hist(hinges$pdf2, border=c1,breaks=pdf.brk, main=names[4], xlab='');abline(v=CPL$pdf[2],col=c2,lwd=lwd) hist(hinges$pdf3, border=c1,breaks=pdf.brk, main=names[5], xlab='');abline(v=CPL$pdf[3],col=c2,lwd=lwd) hist(hinges$pdf4, border=c1,breaks=pdf.brk, main=names[6], xlab='');abline(v=CPL$pdf[4],col=c2,lwd=lwd) ``` ![Marginal distributions after conversion to hinge coordinates. Maximum Likelihoods (calculated separately) in red.](mcmc_posteriors.png){width=680px} Some two-dimensional combinations of joint parameters may be preferred, but still these are 2D marginal representations of 5D parameters, again with MLE in red: ```{r, eval = FALSE} require(scales) par( mfrow=c(1,2) , mar=c(4,4,1.5,2), cex=0.7 ) plot(hinges$yr2, hinges$pdf2, pch=16, col=alpha(1,0.02), ylim=c(0,0.0005)) points(CPL$year[2], CPL$pdf[2], col='red', pch=16, cex=1.2) plot(hinges$yr3, hinges$pdf3, pch=16, col=alpha(1,0.02), ylim=c(0,0.0015)) points(CPL$year[3], CPL$pdf[3], col='red', pch=16, cex=1.2) ``` ![2D Marginal distributions. Maximum Likelihoods (calculated separately) in red.](mcmc_2D.png){width=680px} Alternatively, the joint distributions can be visualised by plotting the CPL model for each iteration of the chain, with the MLE in red: ```{r, eval = FALSE} plot(NULL, xlim=c(7500,5500),ylim=c(0,0.0011), xlab='calBP', ylab='PD', cex=0.7) for(n in 1:nrow(hinges)){ x <- c(hinges$yr1[n], hinges$yr2[n], hinges$yr3[n], hinges$yr4[n]) y <- c(hinges$pdf1[n], hinges$pdf2[n], hinges$pdf3[n], hinges$pdf4[n]) lines( x, y, col=alpha(1,0.005) ) } lines(x=CPL$year, y=CPL$pdf, lwd=2, col=c2) ``` ![Joint posterior distributions.
Maximum Likelihood (calculated separately) in red.](mcmc_joint.png){width=680px} ## Relative growth and decline rates Percentage growth rates per generation provide an intuitive statistic to quantify and compare population changes through time. However there are two key issues to overcome when estimating growth rates for a CPL model. 1. CPL modelling allows for the possibility of hiatus periods, defined by pieces between hinges with a zero or near zero PD. Conventionally, the percentage decrease from any value to zero is 100%, however the equivalent percentage increase from zero is undefined. 2. Each section of the CPL is a straight line with a constant gradient. However, a straight line has a constantly changing growth/decline rate. The first problem is an extreme manifestation of the asymmetry from conventionally reporting change always with respect to the first value. For example, if we consider a population of 80 individuals at time $t_1$, changing to 100 at $t_2$ and to 80 at $t_3$, this would be conventionally described as a 25% increase followed by a 20% decrease. This asymmetry is unintuitive and unhelpful in the context of population change, and instead we use a *relative rate* which is always calculated with respect to the larger value (e.g., a '20% relative growth' followed by '20% relative decline'). We overcome the second problem by calculating the expected (mean average) rate across the entire linear piece. This is achieved by notionally breaking the line into $N$ equal pieces, such that the coordinates of the ends of the $i^{th}$ piece are $(x_1,y_1)$ and $(x_2,y_2)$. The generational (25 yr) rate $r$ of this $i^{th}$ piece is: $$r_i=100\times exp[\ln(\frac{y_2}{y_1})/\frac{x_1-x_2}{25}]-100$$ and the expected rate across the entire line as $N$ approaches +$\infty$ is: $$\sum_{i=1}^{N}r_i/N$$ For example, a population decline from n=200 to n=160 across 100 years is conventionally considered to have a generational decline rate of $100\times exp[\ln(\frac{160}{200})/\frac{100}{25}]-100$ = 5.426% loss per generation. If partitioned into just $N=2$ equal sections (n=200, n=180, n=160), we require two generational decline rates: $100\times exp[\ln(\frac{180}{200})/\frac{50}{25}]-100$ = 5.132% and $100\times exp[\ln(\frac{160}{180})/\frac{50}{25}]-100$ = 5.719%, giving a mean of 5.425%. As the number of sections $N$ approaches +$\infty$, the mean rate asymptotically approaches 5.507%. The similarity to the conventional rate of 5.426% is because the total percentage loss is small (20%), therefore an exponential curve between n=200 and n=160 is similar to a straight line. In contrast, a huge percentage loss of 99.5% illustrates the importance of calculating the expected growth rate, averaged across the whole line: An exponential curve between n=200 and n=1 across the same 100 years has a decline rate of $100\times exp[\ln(\frac{1}{200})/\frac{100}{25}]-100$ = 73.409% loss per 25 yr generation. Meanwhile a linear model between n=200 and n=1 across the same 100 years has an expected decline rate of 47.835% loss per generation. The relationship between the conventional rate and relative rate is almost identical for realistic rates of change (c. 
-10% to +10% per generation): ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} N <- 1000 x <- cbind(rep(5100,N),rep(5000,N)) y <- cbind(seq(1,100,length.out=N),seq(100,1,length.out=N)) conventional <- 100 * exp(log(y[,2]/y[,1])/((x[,1]-x[,2])/25))-100 relative <- relativeRate(x,y) plot(conventional, relative, type='l') rect(-100,-100,c(10,0,-10),c(10,0,-10), lty=2,border='grey') ``` ********** # 4. Inference ## Model selection using BIC A fundamentally important issue in modelling is the need to avoid overfitting an unjustifiably complex model to data, by using a formal model selection approach. In the example above we arbitrarily chose a 3-CPL model to fit to the data (since the data was randomly sampled from a 3-CPL toy population), however, given the small sample size (n = 303) it is possible a simpler model may have better predictive power. ADMUR achieves this using the so-called Bayesian Information Criterion (BIC) aka Schwarz Information Criterion, which balances the model likelihood against the number of parameters and sample size. Therefore we should also find the Maximum Likelihood for other plausible models such as a 4-CPL, 2-CPL, 1-CPL, exponential and even a uniform: ```{r, eval = FALSE} # CPL parameters must be between 0 and 1, and an odd length. CPL.1 <- JDEoptim(lower=0, upper=1, fn=objectiveFunction, PDarray=PD, type='CPL', NP=20) CPL.2 <- JDEoptim(lower=rep(0,3), upper=rep(1,3), fn=objectiveFunction, PDarray=PD, type='CPL', NP=60) CPL.3 <- JDEoptim(lower=rep(0,5), upper=rep(1,5), fn=objectiveFunction, PDarray=PD, type='CPL', NP=100) CPL.4 <- JDEoptim(lower=rep(0,7), upper=rep(1,7), fn=objectiveFunction, PDarray=PD, type='CPL', NP=140) # exponential has a single parameter, which can be negative (decay). exp <- JDEoptim(lower=-0.01, upper=0.01, fn=objectiveFunction, PDarray=PD, type='exp', NP=20) # uniform has no parameters so a search is not required. uniform <- objectiveFunction(NULL, PD, type='uniform') ``` ```{r, echo = FALSE} load('vignette.model.comparison.RData') ``` The objective function returns the negative log-likelihood since the search algorithm seeks to minimise the objective function. It is therefore trivial to extract the log-likelihoods, and calculate the BIC scores using the formula $BIC=k\ln(n)-2L$ where $k$ is the number of parameters, $n$ is the effective sample size (i.e. the number of phases = 303), and $L$ is the maximum log-likelihood. ```{r, eval = TRUE} # likelihoods data.frame(L1= -CPL.1$value, L2= -CPL.2$value, L3= -CPL.3$value, L4= -CPL.4$value, Lexp= -exp$value, Lunif= -uniform) BIC.1 <- 1*log(303) - 2*(-CPL.1$value) BIC.2 <- 3*log(303) - 2*(-CPL.2$value) BIC.3 <- 5*log(303) - 2*(-CPL.3$value) BIC.4 <- 7*log(303) - 2*(-CPL.4$value) BIC.exp <- 1*log(303) - 2*(-exp$value) BIC.uniform <- 0 - 2*(-uniform) data.frame(BIC.1,BIC.2,BIC.3,BIC.4,BIC.exp,BIC.uniform) ``` Clearly the 4-CPL has the highest likelihood, however the 3-CPL model has the lowest BIC and is selected as the best. This tells us that the 4-CPL is overfitted to the data and is unjustifiably complex, whilst the other models are underfitted and lack explanatory power. 
Nevertheless for comparison we can plot all the competing models, illustrating that the 4-CPL fits the closest, but cannot warn us that it is overfit: ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} # convert parameters to model PDs CPL1 <- convertPars(pars=CPL.1$par, years=5500:7500, type='CPL') CPL2 <- convertPars(pars=CPL.2$par, years=5500:7500, type='CPL') CPL3 <- convertPars(pars=CPL.3$par, years=5500:7500, type='CPL') CPL4 <- convertPars(pars=CPL.4$par, years=5500:7500, type='CPL') EXP <- convertPars(pars=exp$par, years=5500:7500, type='exp') # Plot SPD and five competing models: plotPD(SPD) cols <- c('firebrick','orchid2','coral2','steelblue','goldenrod3') lines(CPL1$year, CPL1$pdf, col=cols[1], lwd=2) lines(CPL2$year, CPL2$pdf, col=cols[2], lwd=2) lines(CPL3$year, CPL3$pdf, col=cols[3], lwd=2) lines(CPL4$year, CPL4$pdf, col=cols[4], lwd=2) lines(EXP$year, EXP$pdf, col=cols[5], lwd=2) legend <- c('1-CPL','2-CPL','3-CPL','4-CPL','exponential') legend(x=6300, y=max(CPL$pdf), cex=0.7, lwd=2, col=cols, bty='n', legend=legend) ``` ## Goodness of fit (GOF) test it is crucial to test if the selected model is plausible, or in other words, to test if the observed data is a reasonable outcome of the model. If the observed data is highly unlikely the model must be rejected, even if it was the best model selected. Typically a GOF quantifies how unusual it would be for the observed data to be generated by the model. Of course the probability of any particular dataset being generated by any particular model is vanishingly small, so instead we estimate how probable it is for the model to produce the observed data, *or data that are more extreme*. This is a similar concept to the p-value, but instead of using a null hypothesis we use the best selected model. We can generate many simulated datasets under this model, and calculate a summary statistic for each simulation. A one-tailed test will then establish the proportion of simulations that have a poorer summary statistic (more extreme) than the observed data's summary statistic. For each dataset (simulated and observed) we generate an SPD and use a statistic that measures how divergent each SPD is from expectation, by calculating the proportion of the SPD that sits outside the 95% CI. ```{r, eval = FALSE} summary <- SPDsimulationTest(data, calcurve=shcal20, calrange=c(5500,7500), pars=CPL.3$par, type='CPL') ``` The test provides a p-value of 1.00 for the best model (3-CPL), since all of the 20,000 simulated SPDs were as or more extreme than the observed SPD, providing a sanity check that the data cannot be rejected under this model, and therefore is a plausible model: ```{r, echo = FALSE} load('vignette.3CPL.SPDsimulationTest.RData') ``` ```{r, eval = TRUE, fig.height = 5, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} print(summary$pvalue) hist(summary$simulated.stat, main='Summary statistic', xlab='') abline(v=summary$observed.stat, col='red') legend(0.3,6000, bty='n', lwd=c(1,3), col=c('red','grey'), legend=c('observed','simulated')) ``` ## SPD simulation testing Part 2 provided a framework to directly select the best model given a dataset. This contrasts with the SPD simulation methodology which requires the researcher to *a priori* specify a single null model, then generate many simulated datasets under this null model which are compared with the observed dataset to generate a p-value. 
Without the model selection framework, the SPD simulation approach alone has several inferential shortcomings: * Recent studies increasingly suggest population fluctuations are ubiquitous throughout history, rendering the application of a null model inappropriate. In contrast this new model selection framework allows any number of models to be compared. * A low p-value merely allows us to reject (or fail to reject) the tested model, but does not provide us with a plausible alternative explanation. This leaves an inferential vacuum in which it is common for researchers to assign colourful demographic narratives to periods outside the 95% CI, which are not directly supported by the test statistic. Instead the CPL framework provides a single best explanation. * Fitting the null model (and therefore estimating its parameters) is commonly achieved by discretising the SPD, then incorrectly assuming these points somehow represent data points, to which the null model is fitted by minimising their residuals. In contrast, the CPL framework correctly calculates the relative likelihood of the proposed model parameters given the data, and therefore can correctly fit a model. Nevertheless, the p-value from the SPD simulation framework is hugely useful in providing a Goodness of Fit test for the best selected model. Therefore the summary generated in the section *'Goodness of fit test'* by the [SPDsimulationTest()](../html/SPDsimulationTest.html) function provides a number of other useful outputs that can be plotted, including: **pvalue** the proportion of N simulated SPDs that have more points outside the 95% CI than the observed SPD has. **observed.stat** the summary statistic for the observed data (number of points outside the 95% CI). **simulated.stat** a vector of summary statistics (number of points outside the 95% CI), one for each simulated SPD. **n.dates.all** the total number of dates in the whole data set. Trivially, the number of rows in data. **n.dates.effective** the effective number of dates within the date range. Will be non-integer since a proportion of some dates will be outside the date range. **n.phases.all** the total number of phases in the whole data set. **n.phases.effective** the effective number of phases within the date range. Will be non-integer since a proportion of some phases will be outside the date range. **n.phases.internal** an integer subset of n.phases.all that have more than 50% of their total probability mass within the date range. **timeseries** a data frame containing the following: **CI** several vectors of various Confidence Intervals. **calBP** a vector of calendar years BP. **expected.sim** a vector of the expected simulation (mean average of all N simulations). **local.sd** a vector of the local (each year) standard deviation of all N simulations. **model** a vector of the model PDF. **SPD** a vector of the observed SPD PDF, generated from data. **index** a vector of -1,0,+1 corresponding to the SPD points that are above, within or below the 95% CI of all N simulations.
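These components can also be used directly to build custom plots. The following is a minimal sketch (assuming the component names listed above and a `summary` object returned by [SPDsimulationTest()](../html/SPDsimulationTest.html)) that overlays the observed SPD, the model PDF, and the years flagged by **index** as falling outside the 95% CI:

```{r, eval = FALSE}
# minimal sketch using the timeseries components described above
ts <- summary$timeseries
plot(ts$calBP, ts$SPD, type='l', xlim=rev(range(ts$calBP)), xlab='calBP', ylab='PD')
lines(ts$calBP, ts$model, col='firebrick', lwd=2)

# mark years where the observed SPD sits outside the 95% CI (index of -1 or +1)
outside <- ts$index != 0
points(ts$calBP[outside], ts$SPD[outside], pch=20, cex=0.4, col='steelblue')

legend('topleft', bty='n', cex=0.7, lwd=c(1,2,NA), pch=c(NA,NA,20),
  col=c('black','firebrick','steelblue'),
  legend=c('observed SPD','model PDF','outside 95% CI'))
```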
```{r, eval = FALSE} summary <- SPDsimulationTest(data, calcurve=shcal20, calrange=c(5500,7500), pars=exp$par, type='exp') ``` The function [plotSimulationSummary()](../html/plotSimulationSummary.html) then represents these summary results in a single plot: ```{r, echo = FALSE} load('vignette.exp.SPDsimulationTest.RData') ``` ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE, message=FALSE} plotSimulationSummary(summary, legend.y=0.0012) ``` ## Other Models in ADMUR The above modelling components (MCMC, GOF, model comparison, relative likelihoods, BIC etc) are not constrained to CPL models, but can be applied to any model structure. Currently ADMUR offers the following: **CPL, Uniform, Exponential, Gaussian, Cauchy, Sinusoidal, Logistic, Power law** See [convertPars()](../html/convertPars.html) for details. Care should be taken when considering a Gaussian model. The distribution of data from a single event can often superficially appear to be normally distributed due to the tendency to unconsciously apply regression methods (minimising the residuals). However, contrary to appearances (and intuitions) a Gaussian does not 'flatten' towards the tails, but decreases at a greater and greater rate towards zero. As a consequence, small amounts of data that are several standard deviations away from the mean *appear* to fit a Gaussian quite well, but under a likelihood framework are in fact absurdly improbable. Instead, for single events consider a Cauchy model, given the phenomenon that real life data usually has fatter tails than a Gaussian. Alternatively, if the waxing and waning of data is suspected to be driven by an oscillating system (such as climate) a sinusoidal model may be more sensible. The following code uses three toy datasets to demonstrate these models. After calibration through intcal20, they retain effective sample sizes of a little under n = 100. ```{r, eval = FALSE} # generate SPDs CalArray <- makeCalArray(intcal20, calrange = c(1000,4000)) spd1 <- summedCalibrator(data1, CalArray, normalise='full') spd2 <- summedCalibrator(data2, CalArray, normalise='full') spd3 <- summedCalibrator(data3, CalArray, normalise='full') # calibrate phases PD1 <- phaseCalibrator(data1, CalArray, remove.external = TRUE) PD2 <- phaseCalibrator(data2, CalArray, remove.external = TRUE) PD3 <- phaseCalibrator(data3, CalArray, remove.external = TRUE) # effective sample sizes ncol(PD1) ncol(PD2) ncol(PD3) # maximum likelihood search, fitting various models to various datasets norm <- JDEoptim(lower=c(1000,1), upper=c(4000,5000), fn=objectiveFunction, PDarray=PD1, type='norm', NP=40, trace=T) cauchy <- JDEoptim(lower=c(1000,1), upper=c(4000,5000), fn=objectiveFunction, PDarray=PD1, type='cauchy', NP=40, trace=T) sine <- JDEoptim(lower=c(0,0,0), upper=c(1/1000,2*pi,1), fn=objectiveFunction, PDarray=PD2, type='sine', NP=60, trace=T) logistic <- JDEoptim(lower=c(0,0000), upper=c(1,10000), fn=objectiveFunction, PDarray=PD3, type='logistic', NP=40, trace=T) exp <- JDEoptim(lower=c(0), upper=c(1), fn=objectiveFunction, PDarray=PD3, type='exp', NP=20, trace=T) power <- JDEoptim(lower=c(0,-10), upper=c(10000,0), fn=objectiveFunction, PDarray=PD3, type='power', NP=40, trace=T) ``` Note the upper boundaries for the sinewave (see [sinewavePDF()](../html/sinewavePDF.html) for details). The first parameter governs the frequency, so should be constrained by a wavelength no shorter than c. 1/10th of the date range.
Now the maximum likelihood parameters need to be converted into a PDF and plotted: ```{r, eval = FALSE} # convert parameters to model PDs years <- 1000:4000 mod.norm <- convertPars(pars=norm$par, years, type='norm') mod.cauchy <- convertPars(pars=cauchy$par, years, type='cauchy') mod.sine <- convertPars(pars=sine$par, years, type='sine') mod.uniform <- convertPars(pars=NULL, years, type='uniform') mod.logistic <- convertPars(pars=logistic$par, years, type='logistic') mod.exp <- convertPars(pars=exp$par, years, type='exp') mod.power <- convertPars(pars=power$par, years, type='power') # Plot SPDs and various fitted models: par(mfrow=c(3,1), mar=c(4,4,1,1)) cols <- c('steelblue','firebrick','orange') plotPD(spd1) lines(mod.norm, col=cols[1], lwd=5) lines(mod.cauchy, col=cols[2], lwd=5) legend(x=4000, y=max(spd1)*1.2, lwd=5, col=cols, bty='n', legend=c('Gaussian','Cauchy')) plotPD(spd2) lines(mod.sine, col=cols[1], lwd=5) lines(mod.uniform, col=cols[2], lwd=5) legend(x=4000, y=max(spd2)*1.2, lwd=5, col=cols, bty='n', legend=c('Sinewave','Uniform')) plotPD(spd3) lines(mod.logistic, col=cols[1], lwd=5) lines(mod.exp, col=cols[2], lwd=5) lines(mod.power, col=cols[3], lwd=5) legend(x=4000, y=max(spd3)*1.2, lwd=5, col=cols, bty='n', legend=c('Logistic','Exponential','Power Law')) ``` ![Examples of other models available in ADMUR](further_models.png){width=680px} ********** # 5. Taphonomy Taphonomic loss has an important influence on the amount of datable material that can be recovered, with the obvious bias that older material is less likely to survive. This means that if a constant population deposited a perfectly uniform amount of material through time, we should expect the archaeological record to show an increase in dates towards the present, rather than a uniform distribution. This taphonomic loss rate has been estimated by [Surovell et al.](https://doi.org/10.1016/j.jas.2009.03.029) and [Bluhm and Surovell](https://doi.org/10.1017/qua.2018.78) who make a compelling argument that a power function $a(x+b)^c$ provides a useful model of taphonomic loss through time ($x$), which not only provides a good statistical fit to empirical data, but is also consistent with the mechanism that datable material is subject to greater initial environmental degradation when first deposited on the ground surface compared to the increasing protection through time as it becomes cocooned from these forces (a simple sketch of the shape of this curve is given below). However, there are two important issues to consider when modelling taphonomy: 1. There is substantial uncertainty regarding the values of the parameters that determine the shape of the power function. We should expect different taphonomic rates in different locations due to variation in environmental and geological conditions. Indeed these studies have estimated different parameter values for the two datasets used. 1. There is a common misunderstanding that the taphonomic curve can be used to 'adjust' or 'correct' the data or an SPD to generate a more faithful representation of the true population dynamics. In fact, the inclusion of taphonomy is achieved with additional appropriate model parameters, resulting in a more complex model. Whether or not this greater complexity is justified is moot - the decision to include or exclude cannot be resolved with model comparison and should be justified with an independent argument. When comparing models using BIC, all should be consistent in either including or excluding taphonomic curve parameters.
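Before moving on to parameter estimation, it can help to see the general shape of the $a(x+b)^c$ loss model discussed above. The following sketch uses arbitrary illustrative parameter values chosen only for this purpose (they are not estimates from either of the published datasets):

```{r, eval = FALSE}
# arbitrary illustrative values of a, b and c, chosen only to show the shape of a(x+b)^c
x <- seq(0, 40000, by=50)   # years BP
a <- 1
b <- 3000
c.exponent <- -1.4
loss <- a * (x + b)^c.exponent

# plot the relative survival of datable material through time
plot(x, loss/max(loss), type='l', xlim=c(40000,0), xlab='yrs BP', ylab='relative survival')
```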
## Taphonomic curve parameters The above studies use regression methods to estimate the taphonomic curve parameters. These methods don't incorporate the full information of the calibrated 14C dates (instead a point estimate is used for each date), and therefore are not based on likelihoods. Nor do they provide confidence intervals for the curve parameters. Finally, the parameter $a$ is unnecessary for the purposes of population modelling, since we are not interested in estimating the *absolute* loss in material, merely the *relative* loss through time. Therefore we can consider the taphonomic curve as a PDF such that the total area across the study period equals 1. This results in the following formula, where $x$ is time, and $x_{min}$ and $x_{max}$ are the time boundaries of the study period: $$\frac{(c+1)(b+x)^c}{(b+x_{max})^{(c+1)} - (b+x_{min})^{(c+1)}}$$ This defines ADMUR's power model PDF which we apply within the MCMC framework to estimate the joint parameter distributions of $b$ and $c$ from the same two datasets used in the above studies, constraining the study period (as they did) to between 1kyr and 40kyr BP as follows: ```{r, eval = FALSE} # generate an PD array for each dataset years <- seq(1000,40000,by=50) CalArray <- makeCalArray(intcal20, calrange = c(1000,40000),inc=50) PD1 <- phaseCalibrator(bryson1848, CalArray, remove.external = TRUE) PD2 <- phaseCalibrator(bluhm2421, CalArray, remove.external = TRUE) # MCMC search chain.bryson <- mcmc(PDarray=PD1, startPars=c(10000,-1.5), type='power', N=50000, burn=2000, thin=5, jumps =c(250,0.075)) chain.bluhm <- mcmc(PDarray=PD2, startPars=c(10000,-1.5), type='power', N=50000, burn=2000, thin=5, jumps =c(250,0.075)) # convert parameters to taphonomy curves curve.bryson <- convertPars(chain.bryson$res, type='power', years=years) curve.bluhm <- convertPars(chain.bluhm$res, type='power', years=years) # plot plot(NULL, xlim=c(0,12000),ylim=c(-2.5,-1), xlab='parameter b', ylab='parameter c') points(chain.bryson$res, col=cols[1]) points(chain.bluhm$res, col=cols[2]) plot(NULL, xlim=c(0,40000),ylim=c(0,0.00025), xlab='yrs BP', ylab='PD') N <- nrow(chain.bryson$res) for(n in sample(1:N,size=1000)){ lines(years,curve.bryson[n,], col=cols[1]) lines(years,curve.bluhm[n,], col=cols[2]) } ``` ![Joint taphonomic parameter estimates (and equivalent curves) from the MCMC chain generated in ADMUR using datasets used in Surovell et al. 2009 and Bluhm and Surovell 2018](taphonomy_bryson_bluhm.png){width=680px} Clearly the taphonomic parameters $b$ and $c$ are highly correlated, and although the curves superficially appear very similar, the parameters differ significantly between the two datasets. ## Including taphonomy in a model ### Maximum Likelihood Search Taphonomy can be included in any ADMUR model by including the argument *taphonomy = TRUE*, which will then use the last two model parameters as the taphonomic parameters $b$ and $c$. We suggest constraining these parameters to $0 < b < 20000$ and $-3 < c < 0$, but if there is better prior knowledge of this range (perhaps an independent dataset based on volcanic eruptions for the same study area) then this can be further constrained accordingly. 
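To get a feel for what a particular pair of values for $b$ and $c$ implies, the normalised taphonomic PDF defined in the previous section can be written out directly. This is only an illustrative sketch (ADMUR's 'power' model, used via [convertPars()](../html/convertPars.html) with type='power', handles this internally); the parameter values below simply reuse the MCMC starting values from the previous section:

```{r, eval = FALSE}
# the normalised taphonomic power PDF given above, written out directly
taphPDF <- function(x, b, c, xmin, xmax){
	(c+1)*(b+x)^c / ((b+xmax)^(c+1) - (b+xmin)^(c+1))
	}

x <- seq(1000, 40000, by=1)
d <- taphPDF(x, b=10000, c=-1.5, xmin=1000, xmax=40000)

# in 1-year increments the probability mass sums to approximately 1
sum(d)

plot(x, d, type='l', xlim=c(40000,1000), xlab='yrs BP', ylab='PD')
```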
For example, we might perform a maximum likelihood search using the previously generated PD array, to find the best 3-CPL model with and without taphonomy as follows: ```{r, eval = FALSE} best <- JDEoptim(lower=c(0,0,0,0,0), upper=c(1,1,1,1,1), fn=objectiveFunction, PDarray=PD, type='CPL', taphonomy=F, trace=T, NP=100) best.taph <- JDEoptim(lower=c(0,0,0,0,0,0,-3), upper=c(1,1,1,1,1,20000,0), fn=objectiveFunction, PDarray=PD, type='CPL', taphonomy=T, trace=T, NP=140) ``` These parameters can then be converted to model PDFs and plotted: ```{r, echo = FALSE} load('vignette.3CPL.JDEoptim.best.RData') load('vignette.3CPL.JDEoptim.best.taph.RData') ``` ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} CPL <- convertPars(pars=best$par, years=5500:7500, type='CPL', taphonomy=F) CPL.taph <- convertPars(pars=best.taph$par, years=5500:7500, type='CPL', taphonomy=T) SPD <- summedPhaseCalibrator( data=data, calcurve=shcal20, calrange=c(5500,7500) ) plotPD(SPD) lines(CPL$year, CPL$pdf, lwd=2, col=cols[1]) lines(CPL.taph$year, CPL.taph$pdf, lwd=2, col=cols[2]) legend(x=6300,y=0.001,legend=c('3-CPL','3-CPL with taphonomy'),bty='n',col=cols[1:2],lwd=2,cex=0.7) ``` The above *3-CPL with taphonomy* model represents a conflation of two model components: the population dynamics and the taphonomic loss. Instead we are interested in separating these components: ```{r, eval = TRUE, fig.height = 4, fig.width=7, fig.align = "center", dev='jpeg', quality=100, warning=FALSE} pop <- convertPars(pars=best.taph$par[1:5], years=5500:7500, type='CPL') taph <- convertPars(pars=best.taph$par[6:7], years=5500:7500, type='power') plotPD(pop) title('Population dynamics') plotPD(taph) title('Taphonomic loss') ``` Finally the hinge coordinates of the population dynamics can be extracted: ```{r, eval = TRUE} CPLparsToHinges(pars=best.taph$par[1:5], years=5500:7500) ``` ### MCMC search for credible intervals We should always be cautious of assigning too much importance to point estimates. The Maximum Likelihood Estimates above are no exception to this. Smaller sample sizes will always result in larger uncertainties, and it is always better to estimate the plausible range of results. This is of particular concern with taphonomic parameters since the reanalysis of the volcanic datasets above illustrates how a large range of parameter combinations provide very similar taphonomic curves. Furthermore, when including taphonomy in the model, the taphonomic parameters have the potential to interact with the population dynamics parameters, such that many different parameter combinations can produce similar overall radiocarbon date distributions.
Therefore we can perform an MCMC parameter search as follows: ```{r, eval = FALSE} chain.taph <- mcmc(PDarray = PD, startPars = c(0.5,0.5,0.5,0.5,0.5,10000,-1.5), type='CPL', taphonomy=T, N = 30000, burn = 2000, thin = 5, jumps = 0.025) ``` These can then be separated into population dynamics parameters and taphonomic parameters for either direct plotting, or converted to model PDFs and plotted: ```{r, eval = FALSE} # convert parameters into model PDFs pop <- convertPars(pars=chain.taph$res[,1:5], years=5500:7500, type='CPL') taph <- convertPars(pars=chain.taph$res[,6:7], years=seq(1000,30000,by=50), type='power') # plot population dynamics PDF plot(NULL, xlim=c(7500,5500),ylim=c(0,0.0013), xlab='calBP', ylab='PD', las=1) for(n in 1:nrow(pop))lines(5500:7500, pop[n,],col=alpha(1,0.05)) # plot taphonomy PDF plot(NULL, xlim=c(30000,0),ylim=c(0,0.00025), xlab='calBP', ylab='PD',las=1,) for(n in 1:nrow(taph))lines(seq(1000,30000,by=50), taph[n,],col=alpha(1,0.02)) # plot taphonomic parameters plot(NULL, xlim=c(0,20000),ylim=c(-3,0), xlab='parameter b', ylab='parameter c',las=1) for(n in 1:nrow(chain.taph$res))points(chain.taph$res[n,6], chain.taph$res[n,7],col=alpha(1,0.2),pch=20) ``` ![Joint posterior distributions of population dynamics only.](mcmc_pop_without_taph.png){width=680px} ![Joint posterior distributions of taphonomy only. Clearly there is not enough information content in such a small toy dataset to narrow the taphonomic parameters better than the initial prior constraints](mcmc_taph.png){width=680px} ********** ![](four_logos.png){width=680px} **********
--- title: | ![](four_logos.png){height=0.5in} Replicating published results author: "Adrian Timpson" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteEngine{knitr::rmarkdown} %\VignetteIndexEntry{Replicating published results from doi:0.1098/rstb.2019.0723} %\usepackage[utf8]{inputenc} --- <style> p.caption {font-size: 0.7em;} </style> ********** This vignette provides the R code used to generate all results, plots and tables in the following publication: ### Directly modelling population dynamics in the South American Arid Diagonal using 14C dates by Adrian Timpson, Ramiro Barberena, Mark G. Thomas, Cesar Mendez and Katie Manning, published in Philosophical Transactions of the Royal Society B, 2020. https://doi.org/10.1098/rstb.2019.0723 The only exception to this is the exclusion of R code for figure 3, which is an adaptation of [Fig 7 from Peel et al 2007](https://doi.org/10.5194/hess-11-1633-2007) and is therefore not novel. Each section of this vignette provides stand alone R code that is not reliant on objects created earlier in the vignette. As such, there is some repetition between sections. Setting random seeds is not necessary, but can be used to ensure random components are identical to those used in the publication. The generation and calibration of each random dataset takes seconds to complete. Simulation tests and searches performed by JDEoptim or the generation of MCMC chains then requires several hours to complete. Therefore the code for each section is separated into two or more blocks. The first block always includes all slow components which are saved by the last line of code. This provides a firewall to allow plots to be quickly generated on a later occasion using the remaining block(s), which runs in seconds. Sometimes there is an intermediate block which takes a few seconds to perform some pre-plot processing. ********** # Figure 1 ## Simulating datasets from a 3-CPL toy. Generate key objects: ```{r, eval = FALSE} library(ADMUR) library(DEoptimR) N <- 1500 # generate 5 sets of random calendar dates under the toy model. set.seed(882) cal1 <- simulateCalendarDates(model = toy, N) set.seed(884) cal2 <- simulateCalendarDates(model = toy, N) set.seed(886) cal3 <- simulateCalendarDates(model = toy, N) set.seed(888) cal4 <- simulateCalendarDates(model = toy, N) set.seed(890) cal5 <- simulateCalendarDates(model = toy, N) # Convert to 14C dates. age1 <- uncalibrateCalendarDates(cal1, shcal20) age2 <- uncalibrateCalendarDates(cal2, shcal20) age3 <- uncalibrateCalendarDates(cal3, shcal20) age4 <- uncalibrateCalendarDates(cal4, shcal20) age5 <- uncalibrateCalendarDates(cal5, shcal20) # construct data frames. One date per phase. 
data1 <- data.frame(age = age1, sd = 25, phase = 1:N, datingType = '14C') data2 <- data.frame(age = age2, sd = 25, phase = 1:N, datingType = '14C') data3 <- data.frame(age = age3, sd = 25, phase = 1:N, datingType = '14C') data4 <- data.frame(age = age4, sd = 25, phase = 1:N, datingType = '14C') data5 <- data.frame(age = age5, sd = 25, phase = 1:N, datingType = '14C') # Calibrate each phase, taking care to restrict to the modelled date range CalArray <- makeCalArray(shcal20, calrange = range(toy$year), inc = 5) PD1 <- phaseCalibrator(data1, CalArray, remove.external = TRUE) PD2 <- phaseCalibrator(data2, CalArray, remove.external = TRUE) PD3 <- phaseCalibrator(data3, CalArray, remove.external = TRUE) PD4 <- phaseCalibrator(data4, CalArray, remove.external = TRUE) PD5 <- phaseCalibrator(data5, CalArray, remove.external = TRUE) # Generate SPD of each dataset SPD1 <- summedCalibrator(data1, CalArray, normalise='full') SPD2 <- summedCalibrator(data2, CalArray, normalise='full') SPD3 <- summedCalibrator(data3, CalArray, normalise='full') SPD4 <- summedCalibrator(data4, CalArray, normalise='full') SPD5 <- summedCalibrator(data5, CalArray, normalise='full') # 3-CPL parameter search lower <- rep(0,5) upper <- rep(1,5) fn <- objectiveFunction best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=100) best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=100) best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=100) best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=100) best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=100) #save results, for separate plotting save(best1,best2,best3,best4,best5,SPD1,SPD2,SPD3,SPD4,SPD5, file='results.RData',version=2) ``` Generate plot: ```{r, eval = FALSE} library(ADMUR) load('results.RData') oldpar <- par(no.readonly = TRUE) pdf('Fig1.pdf',height=4,width=10) par(mar=c(2,4,0.1,2)) plot(NULL, xlim=c(7500,5500), ylim=c(0,0.0011), xlab='', ylab='', xaxs='i',cex.axis=0.7, bty='n',las=1) axis(1,at=6400,labels='calBP',tick=F) axis(2,at=-0.00005,labels='PD',tick=F, las=1) lwd1 <- 1 lwd2 <- 2 lwd3 <- 3 legend(x=6000, y = 0.0011, bty='n', cex=0.7, legend=c('True (toy) population', 'SPD 1', 'SPD 2', 'SPD 3', 'SPD 4', 'SPD 5', 'Pop model 1', 'Pop model 2', 'Pop model 3', 'Pop model 4', 'Pop model 5'), lwd=c(lwd3,rep(lwd1,5),rep(lwd2,5)), col=c(1,2:6,2:6) ) years <- as.numeric(row.names(SPD1)) # plot SPDs lines(years,SPD1[,1],col=2, lwd=lwd1) lines(years,SPD2[,1],col=3, lwd=lwd1) lines(years,SPD3[,1],col=4, lwd=lwd1) lines(years,SPD4[,1],col=5, lwd=lwd1) lines(years,SPD5[,1],col=6, lwd=lwd1) # convert parameters to model pdfs mod.1 <- convertPars(pars=best1$par, years=years, type='CPL') mod.2 <- convertPars(pars=best2$par, years=years, type='CPL') mod.3 <- convertPars(pars=best3$par, years=years, type='CPL') mod.4 <- convertPars(pars=best4$par, years=years, type='CPL') mod.5 <- convertPars(pars=best5$par, years=years, type='CPL') lines(mod.1$year,mod.1$pdf,col=2,lwd=lwd2) lines(mod.2$year,mod.2$pdf,col=3,lwd=lwd2) lines(mod.3$year,mod.3$pdf,col=4,lwd=lwd2) lines(mod.4$year,mod.4$pdf,col=5,lwd=lwd2) lines(mod.5$year,mod.5$pdf,col=6,lwd=lwd2) # plot true toy model lines(toy$year, toy$pdf, lwd=lwd3) dev.off() par(oldpar) ``` ![Low resolution png of Figure 1](Fig1.png) ********** # Figure 2 ## Model selection with small simulated data. 
Generate key objects: ```{r, eval = FALSE} library(ADMUR) library(DEoptimR) set.seed(888) N <- c(6,20,60,180,360,540) names <- c('sample1','sample2','sample3','sample4','sample5','sample6') # generate 6 sets of random calendar dates under the toy model. cal1 <- simulateCalendarDates(model = toy, N[1]) cal2 <- simulateCalendarDates(model = toy, N[2]) cal3 <- simulateCalendarDates(model = toy, N[3]) cal4 <- simulateCalendarDates(model = toy, N[4]) cal5 <- simulateCalendarDates(model = toy, N[5]) cal6 <- simulateCalendarDates(model = toy, N[6]) # Convert to 14C dates. age1 <- uncalibrateCalendarDates(cal1, shcal20) age2 <- uncalibrateCalendarDates(cal2, shcal20) age3 <- uncalibrateCalendarDates(cal3, shcal20) age4 <- uncalibrateCalendarDates(cal4, shcal20) age5 <- uncalibrateCalendarDates(cal5, shcal20) age6 <- uncalibrateCalendarDates(cal6, shcal20) # construct data frames. One date per phase. data1 <- data.frame(age = age1, sd = 25, phase = 1:N[1], datingType = '14C') data2 <- data.frame(age = age2, sd = 25, phase = 1:N[2], datingType = '14C') data3 <- data.frame(age = age3, sd = 25, phase = 1:N[3], datingType = '14C') data4 <- data.frame(age = age4, sd = 25, phase = 1:N[4], datingType = '14C') data5 <- data.frame(age = age5, sd = 25, phase = 1:N[5], datingType = '14C') data6 <- data.frame(age = age6, sd = 25, phase = 1:N[6], datingType = '14C') # narrow domain of the model to the range of data, # since absence of evidence in periods well outside the data should # not be interpreted as evidence of absence. # Only required when sample sizes are extremely small. # Otherwise the data domain is constrained by the model date range. r1 <- estimateDataDomain(data1, shcal20) # narrower range for extremely small samples CalArray1 <- makeCalArray(shcal20, calrange = c( max(r1[1],5500) , min(r1[2],7500) ), inc = 5) CalArray <- makeCalArray(shcal20, calrange = range(toy$year), inc = 5) # Calibrate each phase PD1 <- phaseCalibrator(data1, CalArray1, remove.external = TRUE) PD2 <- phaseCalibrator(data2, CalArray, remove.external = TRUE) PD3 <- phaseCalibrator(data3, CalArray, remove.external = TRUE) PD4 <- phaseCalibrator(data4, CalArray, remove.external = TRUE) PD5 <- phaseCalibrator(data5, CalArray, remove.external = TRUE) PD6 <- phaseCalibrator(data6, CalArray, remove.external = TRUE) PD <- list(PD1, PD2, PD3, PD4, PD5, PD6); names(PD) <- names # Generate SPD of each dataset SPD1 <- summedCalibrator(data1, CalArray, normalise='full') SPD2 <- summedCalibrator(data2, CalArray, normalise='full') SPD3 <- summedCalibrator(data3, CalArray, normalise='full') SPD4 <- summedCalibrator(data4, CalArray, normalise='full') SPD5 <- summedCalibrator(data5, CalArray, normalise='full') SPD6 <- summedCalibrator(data6, CalArray, normalise='full') SPD <- list(SPD1, SPD2, SPD3, SPD4, SPD5, SPD6); names(SPD) <- names # Uniform model: No parameters. # Log Likelihood calculated directly using objectiveFunction, without a search required. 
unif1.loglik <- -objectiveFunction(pars = NULL, PDarray = PD1, type = 'uniform')
unif2.loglik <- -objectiveFunction(pars = NULL, PDarray = PD2, type = 'uniform')
unif3.loglik <- -objectiveFunction(pars = NULL, PDarray = PD3, type = 'uniform')
unif4.loglik <- -objectiveFunction(pars = NULL, PDarray = PD4, type = 'uniform')
unif5.loglik <- -objectiveFunction(pars = NULL, PDarray = PD5, type = 'uniform')
unif6.loglik <- -objectiveFunction(pars = NULL, PDarray = PD6, type = 'uniform')
uniform <- list(unif1.loglik, unif2.loglik, unif3.loglik, unif4.loglik, unif5.loglik, unif6.loglik)
names(uniform) <- names

# Best 1-CPL model. Parameters and log likelihood found using search
lower <- rep(0,1)
upper <- rep(1,1)
fn <- objectiveFunction
best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=20)
best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=20)
best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=20)
best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=20)
best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=20)
best6 <- JDEoptim(lower, upper, fn, PDarray=PD6, type='CPL',trace=T,NP=20)
CPL1 <- list(best1, best2, best3, best4, best5, best6); names(CPL1) <- names

# Best 2-CPL model. Parameters and log likelihood found using search
lower <- rep(0,3)
upper <- rep(1,3)
fn <- objectiveFunction
best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=60)
best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=60)
best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=60)
best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=60)
best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=60)
best6 <- JDEoptim(lower, upper, fn, PDarray=PD6, type='CPL',trace=T,NP=60)
CPL2 <- list(best1, best2, best3, best4, best5, best6); names(CPL2) <- names

# Best 3-CPL model. Parameters and log likelihood found using search
lower <- rep(0,5)
upper <- rep(1,5)
fn <- objectiveFunction
best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=100)
best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=100)
best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=100)
best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=100)
best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=100)
best6 <- JDEoptim(lower, upper, fn, PDarray=PD6, type='CPL',trace=T,NP=100)
CPL3 <- list(best1, best2, best3, best4, best5, best6); names(CPL3) <- names

# Best 4-CPL model. Parameters and log likelihood found using search
lower <- rep(0,7)
upper <- rep(1,7)
fn <- objectiveFunction
best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=140)
best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=140)
best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=140)
best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=140)
best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=140)
best6 <- JDEoptim(lower, upper, fn, PDarray=PD6, type='CPL',trace=T,NP=140)
CPL4 <- list(best1, best2, best3, best4, best5, best6); names(CPL4) <- names

# Best 5-CPL model.
Parameters and log likelihood found using search lower <- rep(0,9) upper <- rep(1,9) fn <- objectiveFunction best1 <- JDEoptim(lower, upper, fn, PDarray=PD1, type='CPL',trace=T,NP=180) best2 <- JDEoptim(lower, upper, fn, PDarray=PD2, type='CPL',trace=T,NP=180) best3 <- JDEoptim(lower, upper, fn, PDarray=PD3, type='CPL',trace=T,NP=180) best4 <- JDEoptim(lower, upper, fn, PDarray=PD4, type='CPL',trace=T,NP=180) best5 <- JDEoptim(lower, upper, fn, PDarray=PD5, type='CPL',trace=T,NP=180) best6 <- JDEoptim(lower, upper, fn, PDarray=PD6, type='CPL',trace=T,NP=180) CPL5 <- list(best1, best2, best3, best4, best5, best6); names(CPL5) <- names # save results, for separate plotting save(SPD, PD, uniform, CPL1, CPL2, CPL3, CPL4, CPL5, file='results.RData',version=2) ``` Pre-plot processing: ```{r, eval = FALSE} library(ADMUR) load('results.RData') # Calculate BICs for all six sample sizes and all six models BIC <- as.data.frame(matrix(,6,6)) row.names(BIC) <- c('uniform','1-CPL','2-CPL','3-CPL','4-CPL','5-CPL') for(s in 1:6){ # extract log likelihoods for each model loglik <- c(uniform[[s]], -CPL1[[s]]$value, -CPL2[[s]]$value, -CPL3[[s]]$value, -CPL4[[s]]$value, -CPL5[[s]]$value) # extract effective sample sizes for each model N <- c(rep(ncol(PD[[s]]),6)) # number of parameters for each model K <- c(0, 1, 3, 5, 7, 9) # calculate BIC for each model BIC[,s] <- log(N)*K - 2*loglik # store effective sample size names(BIC)[s] <- paste('N',N[1],sep='=') } # Show all BICs for all sample sizes and models print(BIC) ``` Generate plot: ```{r, eval = FALSE} oldpar <- par(no.readonly = TRUE) # Fig 2 plot pdf('Fig2.pdf',height=6,width=13) layout(mat=matrix(1:14, 2, 7, byrow = F),widths=c(0.3,rep(1,6)), heights=c(1,1.5),respect=T) # plot two blanks first par(mar=c(5,4,1.5,0),las=2) ymax <- 0.0032 plot(NULL, xlim=c(0,1),ylim=c(0,1),main='', xlab='',ylab='',bty='n',xaxt='n',yaxt='n') mtext(side=2, at=0.5,text='BIC',las=0,line=1) plot(NULL, xlim=c(0,1),ylim=sqrt(c(0,ymax)),main='', xlab='',ylab='',bty='n',xaxt='n',yaxt='n') axis(side=2, at=sqrt(seq(0,ymax,by=0.001)), labels=round(seq(0,ymax,by=0.001),4),las=1) mtext(side=2, at=sqrt(0.00025),text='PD',las=0,line=0.8,cex=1) abline(h=sqrt(seq(0,ymax,by=0.001)),col='grey') for(n in 1:6){ # extract the best model (lowest BIC) BICs <- BIC[,n] best <- which(BICs==min(BICs)) # convert parameters to model if(best==1){ type <- 'uniform' pars <- NULL } if(best!=1)type <- 'CPL' if(best==2)pars <- CPL1[[n]]$par if(best==3)pars <- CPL2[[n]]$par if(best==4)pars <- CPL3[[n]]$par if(best==5)pars <- CPL4[[n]]$par if(best==6)pars <- CPL5[[n]]$par spd.years <- as.numeric(row.names(SPD[[n]])) spd.pdf <- SPD[[n]][,1] mod.years <- as.numeric(row.names(PD[[n]])) model <- convertPars(pars, mod.years, type) # plot red <- 'firebrick' col <- rep('grey35',6); col[best] <- red ymin <- min(BIC)-diff(range(BIC))*0.15 par(mar=c(5,3,1.5,1),las=2) plot(BICs,xlab='',ylab='',xaxt='n',pch=20,cex=3,col=col, main='') axis(side=1, at=1:6, labels=c('Uniform','1-piece','2-piece','3-piece','4-piece','5-piece')) par(mar=c(5,1,1.5,1),las=2) plot(NULL,type='l', xlab='Cal Yrs BP', ylab='',yaxt='n', col='steelblue', main=paste('N =',ncol(PD[[n]])), ylim=sqrt(c(0,ymax)), xlim=c(7500,5500)) abline(h=sqrt(seq(0,ymax,by=0.001)),col='grey') polygon(c(min(spd.years),spd.years,max(spd.years)),sqrt(c(0,spd.pdf,0)),col='steelblue',border=NA) lwd=3 lines(toy$year,sqrt(toy$pdf),lwd=lwd) lines(model$year, sqrt(model$pdf), lwd=lwd, col=red) } dev.off() par(oldpar) ``` ![Low resolution png of Figure 2](Fig2.png) ********** # 
Figure 4 ## SPD simulation analysis of SAAD data Generate key objects: ```{r, eval = FALSE} library(ADMUR) set.seed(999) # best exponential parameter previously found using ML search for Fig 5. summary <- SPDsimulationTest(data=SAAD, calcurve=shcal20, calrange=c(2500,14000), pars=-0.0001674152, type='exp', N=20000) save(summary, file='results.RData',version=2) ``` Generate plot: ```{r, eval = FALSE} library(ADMUR) load('results.RData') oldpar <- par(no.readonly = TRUE) pdf('Fig4.pdf',height=4,width=10) par(mar=c(2,4,0.1,0.1)) plotSimulationSummary(summary, legend.x=11500,legend.y=0.0003) axis(side=1, at=2500,labels='calBP',tick=F) dev.off() par(oldpar) ``` ![Low resolution png of Figure 4](Fig4.png) ********** # Figure 5 ## Model selection of SAAD data. Generate key objects: ```{r, eval = FALSE} library(ADMUR) library(DEoptimR) # Generate SPD SPD <- summedPhaseCalibrator(data=SAAD, calcurve=shcal20, calrange = c(2500,14000)) # Calibrate each phase CalArray <- makeCalArray(calcurve=shcal20, calrange = c(2500,14000)) PD <- phaseCalibrator(data=SAAD, CalArray, remove.external = TRUE) # Best exponential model. Parameter and log likelihood found using seach exp <- JDEoptim(lower=-0.01, upper=0.01, fn=objectiveFunction, PDarray=PD, type='exp', trace=T, NP=20) # Best CPL models. Parameters and log likelihood found using seach fn <- objectiveFunction CPL1 <- JDEoptim(lower=rep(0,1), upper=rep(1,1), fn, PDarray=PD, type='CPL',trace=T,NP=20) CPL2 <- JDEoptim(lower=rep(0,3), upper=rep(1,3), fn, PDarray=PD, type='CPL',trace=T,NP=60) CPL3 <- JDEoptim(lower=rep(0,5), upper=rep(1,5), fn, PDarray=PD, type='CPL',trace=T,NP=100) CPL4 <- JDEoptim(lower=rep(0,7), upper=rep(1,7), fn, PDarray=PD, type='CPL',trace=T,NP=140) CPL5 <- JDEoptim(lower=rep(0,9), upper=rep(1,9), fn, PDarray=PD, type='CPL',trace=T,NP=180) CPL6 <- JDEoptim(lower=rep(0,11),upper=rep(1,11),fn, PDarray=PD, type='CPL',trace=T,NP=220) # save results, for separate plotting save(SPD, PD, exp, CPL1, CPL2, CPL3, CPL4, CPL5, CPL6, file='results.RData',version=2) ``` Pre-plot: ```{r, eval = FALSE} library(ADMUR) load('results.RData') # Calculate BICs for all six models # name of each model model <- c('exponential','1-CPL','2-CPL','3-CPL','4-CPL','5-CPL','6-CPL') # extract log likelihoods for each model loglik <- c(-exp$value, -CPL1$value, -CPL2$value, -CPL3$value, -CPL4$value, -CPL5$value, -CPL6$value) # extract effective sample sizes N <- c(rep(ncol(PD),7)) # number of parameters for each model K <- c(1, 1, 3, 5, 7, 9, 11) # calculate BIC for each model BICs <- log(N)*K - 2*loglik # convert best 3-CPL parameters into model pdf best <- convertPars(pars=CPL3$par, years=c(2500:14000), type='CPL') ``` Generate plot: ```{r, eval = FALSE} oldpar <- par(no.readonly = TRUE) pdf('Fig5.pdf',height=4,width=10) par(mfrow=c(1,2)) # model comparison par(mar=c(6,6,2,0.1)) red <- 'firebrick' blue <- 'steelblue' col <- rep('grey35',7); col[which(BICs==min(BICs))] <- red plot(BICs,xlab='',ylab='',xaxt='n', pch=20,cex=2,col=col,main='',las=1,cex.axis=0.7) labels <- c('exponential','1-CPL','2-CPL','3-CPL','4-CPL','5-CPL','6-CPL') axis(side=1, at=1:7, las=2, labels=labels, cex.axis=0.9) mtext(side=2, at=mean(BICs),text='BIC',las=0,line=3) # best fitting CPL years <- as.numeric(row.names(SPD)) plot(NULL,xlim=rev(range(years)), ylim=range(SPD), type='l',xlab='kyr cal BP',xaxt='n', ylab='',las=1,cex.axis=0.7) axis(1,at=seq(14000,3000,by=-1000), labels=seq(14,3,by=-1),cex.axis=0.9) mtext(side=2, at=max(SPD[,1])/2,text='PD',las=0,line=3.5,cex=1) 
polygon(c(min(years),years,max(years)),c(0,SPD[,1],0),col=blue,border=NA) lines(best$year,best$pdf,col=red,lwd=3) legend(x=14000,y=0.0003,lwd=c(5,3),col=c(blue,red),bty='n',legend=c('SPD','3-CPL')) dev.off() par(oldpar) ``` ![Low resolution png of Figure 5](Fig5.png) ********** # Figure 6, Figure 7, Table 2 ## Parameter estimates and CI of SAAD data. Generate key objects: ```{r, eval = FALSE} library(ADMUR) library(DEoptimR) # Calibrate each phase CalArray <- makeCalArray(calcurve=shcal20, calrange = c(2500,14000)) PD <- phaseCalibrator(data=SAAD, CalArray, remove.external = TRUE) # arbitrary starting parameters chain <- mcmc(PDarray=PD, startPars=rep(0.5,5), type='CPL', N=100000, burn=2000, thin=5, jumps=0.025) # find ML parameters best.pars <- JDEoptim(lower=rep(0,5),upper=rep(1,5),fn=objectiveFunction,PDarray=PD,type='CPL',trace=T,NP=100)$par # save results, for separate plotting save(chain, best.pars, file='results.RData',version=2) ``` Pre-plot processing: ```{r, eval = FALSE} library('ADMUR') library('scales') load('results.RData') # Convert Maximum Likelihood parameters to hinge coordinates ML <- CPLparsToHinges(best.pars,years=c(2500,14000)) # Convert MCMC chain of parameters to hinge coordinates hinges <- CPLparsToHinges(chain$res, years=c(2500,14000)) # check the acceptance ratio is sensible (c. 0.2 to 0.5) chain$acceptance.ratio # Eyeball the entire chain, before burn-in and thinning for(n in 1:5)plot(chain$all.pars[,n], type='l', ylim=c(0,1)) # Generate CI for Fig 7 N <- nrow(hinges) years <- 2500:14000 Y <- length(years) pdf.matrix <- matrix(,N,Y) for(n in 1:N){ yr <- c('yr1','yr2','yr3','yr4') pdf <- c('pdf1','pdf2','pdf3','pdf4') pdf.matrix[n,] <- approx(x=hinges[n,yr],y=hinges[n,pdf],xout=years, ties='ordered')$y } CI <- matrix(,Y,6) for(y in 1:Y)CI[y,] <- quantile(pdf.matrix[,y],prob=c(0.025,0.125,0.25,0.75,0.875,0.975)) ``` Generate Figure 6: ```{r, eval = FALSE} oldpar <- par(no.readonly = TRUE) pdf('Fig6.pdf',height=5,width=11) par(mfrow=c(2,3)) lwd <- 3 red='firebrick' grey='grey65' breaks.yr <- seq(14000,2000,length.out=80) breaks.pdf <- seq(0,0.0003,length.out=80) xlab.yr <- 'yrs BP' xlab.pdf <-'PD' names <- c('Date of Hinge B','Date of Hinge C','PD of Hinge A','PD fo Hinge B','PD of Hinge C','PD of Hinge D') hist(hinges$yr3, breaks=breaks.yr, col=grey, border=NA, main=names[1], xlab=xlab.yr) abline(v = ML$year[3], col=red, lwd=lwd) hist(hinges$yr2, breaks=breaks.yr, col=grey, border=NA, main=names[2], xlab=xlab.yr) abline(v = ML$year[2], col=red, lwd=lwd) hist(hinges$pdf4, breaks=breaks.pdf, col=grey, border=NA, main=names[3], xlab=xlab.pdf) abline(v = ML$pdf[4], col=red, lwd=lwd) hist(hinges$pdf3, breaks=breaks.pdf, col=grey, border=NA, main=names[5], xlab=xlab.pdf) abline(v = ML$pdf[3], col=red, lwd=lwd) hist(hinges$pdf2, breaks=breaks.pdf, col=grey, border=NA, main=names[4], xlab=xlab.pdf) abline(v = ML$pdf[2], col=red, lwd=lwd) hist(hinges$pdf1, breaks=breaks.pdf, col=grey, border=NA, main=names[6], xlab=xlab.pdf) abline(v = ML$pdf[1], col=red, lwd=lwd) dev.off() par(oldpar) ``` ![Low resolution png of Figure 6](Fig6.png) Generate Figure 7: ```{r, eval = FALSE} oldpar <- par(no.readonly = TRUE) pdf('Fig7.pdf',height=5,width=12) grey1 <- 'grey90' grey2 <- 'grey70' grey3 <- 'grey50' red <- 'firebrick' par(mfrow=c(1,2),las=0) plot(NULL,xlim=c(14000,2500),ylim=c(0,0.00025),xlab='kyr cal BP',xaxt='n', ylab='PD', las=1, cex.axis=0.7) set.seed(888) S <- sample(1:N,size=1000) for(n in 1:1000){ 
lines(x=hinges[S[n],c('yr1','yr2','yr3','yr4')],y=hinges[S[n],c('pdf1','pdf2','pdf3','pdf4')],col=alpha('black',0.05)) } lines(ML$year, ML$pdf,col='firebrick',lwd=2) axis(1,at=seq(14000,3000,by=-1000), labels=seq(14,3,by=-1)) text(x=ML$year, y=ML$pdf + c(-0.00002,-0.00002,0.00002,0.00002), labels=rev(c('A','B','C','D'))) legend(legend=c('Maximum Likelihood model PDF','Model PDF sampled from joint posterior parameters'), x = 6000,y = 0.00024,cex = 0.7,bty = 'n',border = NA, xjust = 1, lwd=c(2,1), col=c(red,grey3)) plot(NULL,xlim=c(14000,2500),ylim=c(0,0.00025),xlab='kyr cal BP',xaxt='n', ylab='PD', las=1, cex.axis=0.7) polygon(x=c(years,rev(years)),c(CI[,1],rev(CI[,6])),col=grey1,border=F) polygon(x=c(years,rev(years)),c(CI[,2],rev(CI[,5])),col=grey2,border=F) polygon(x=c(years,rev(years)),c(CI[,3],rev(CI[,4])),col=grey3,border=F) a <- 0.05 cex <- 0.2 points(hinges$yr1,hinges$pdf1,pch=20,col=alpha(red,alpha=a),cex=cex) points(hinges$yr2,hinges$pdf2,pch=20,col=alpha(red,alpha=a),cex=cex) points(hinges$yr3,hinges$pdf3,pch=20,col=alpha(red,alpha=a),cex=cex) points(hinges$yr4,hinges$pdf4,pch=20,col=alpha(red,alpha=a),cex=cex) axis(1,at=seq(14000,3000,by=-1000), labels=seq(14,3,by=-1)) legend(legend=c('Joint posterior parameters','50% CI of model PDF','75% CI of model PDF','95% CI of model PDF'), x = 10000,y = 0.00024,cex = 0.7,bty = 'n',border = NA, xjust = 1, pch = c(16,NA,NA,NA), col = c(red,NA,NA,NA), fill = c(NA,grey3,grey2,grey1), x.intersp = c(1.5,1,1,1)) dev.off() par(oldpar) ``` ![Low resolution png of Figure 7](Fig7.png) Generate Table 2 ```{r, eval = FALSE} #---------------------------------------------------------------------------------------------- # dates (H = hinge) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - H.A.date <- ML$year[4] H.B.date <- round(ML$year[3]) H.C.date <- round(ML$year[2]) H.D.date <- ML$year[1] H.B.date.CI <- round(quantile(hinges$yr3,prob=c(0.025,0.975))) H.C.date.CI <- round(quantile(hinges$yr2,prob=c(0.025,0.975))) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # gradients (P = phase or piece) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - P.1.gradient <- (ML$pdf[3] - ML$pdf[4]) / (ML$year[4] - ML$year[3]) P.2.gradient <- (ML$pdf[2] - ML$pdf[3]) / (ML$year[3] - ML$year[2]) P.3.gradient <- (ML$pdf[1] - ML$pdf[2]) / (ML$year[2] - ML$year[1]) P.1.gradient.mcmc <- (hinges$pdf3 - hinges$pdf4) / (hinges$yr4 - hinges$yr3) P.2.gradient.mcmc <- (hinges$pdf2 - hinges$pdf3) / (hinges$yr3 - hinges$yr2) P.3.gradient.mcmc <- (hinges$pdf1 - hinges$pdf2) / (hinges$yr2 - hinges$yr1) P.1.gradient.CI <- quantile(P.1.gradient.mcmc,prob=c(0.025,0.975)) P.2.gradient.CI <- quantile(P.2.gradient.mcmc,prob=c(0.025,0.975)) P.3.gradient.CI <- quantile(P.3.gradient.mcmc,prob=c(0.025,0.975)) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # relative growth rate per generation (P = phase or piece) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - P.1.growth <- round(relativeRate(x=c(ML$year[3],ML$year[4]), y=c(ML$pdf[3],ML$pdf[4]) ),2) P.2.growth <- round(relativeRate(x=c(ML$year[2],ML$year[3]), y=c(ML$pdf[2],ML$pdf[3]) ),2) P.3.growth <- round(relativeRate(x=c(ML$year[1],ML$year[2]), y=c(ML$pdf[1],ML$pdf[2]) ),2) P.1.growth.mcmc <- relativeRate(x=hinges[,c('yr3','yr4')], y=hinges[,c('pdf3','pdf4')] ) P.2.growth.mcmc <- 
relativeRate(x=hinges[,c('yr2','yr3')], y=hinges[,c('pdf2','pdf3')] ) P.3.growth.mcmc <- relativeRate(x=hinges[,c('yr1','yr2')], y=hinges[,c('pdf1','pdf2')] ) P.1.growth.CI <- round(quantile(P.1.growth.mcmc,prob=c(0.025,0.975)),2) P.2.growth.CI <- round(quantile(P.2.growth.mcmc,prob=c(0.025,0.975)),2) P.3.growth.CI <- round(quantile(P.3.growth.mcmc,prob=c(0.025,0.975)),2) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # summary # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - headings <- c('Linear phase between hinges', 'Start yrs BP (95% CI)','End yrs BP (95% CI)', 'Gradient (x 10^-9 per year)(95% CI)', 'Relative growth rate per 25 yr generation (95% CI)') all.dates <- c(H.A.date, paste(H.B.date,' (',H.B.date.CI[2],' to ',H.B.date.CI[1],')',sep=''), paste(H.C.date,' (',H.C.date.CI[2],' to ',H.C.date.CI[1],')',sep=''), H.D.date) all.gradients <- round(c(P.1.gradient, P.2.gradient, P.3.gradient) / 1e-09, 1) all.gradients.lower <- round(c(P.1.gradient.CI[1], P.2.gradient.CI[1], P.3.gradient.CI[1]) / 1e-09, 1) all.gradients.upper <- round(c(P.1.gradient.CI[2], P.2.gradient.CI[2], P.3.gradient.CI[2]) / 1e-09, 1) col.1 <- c('1 (A-B)', '2 (B-C)', '3 (C-D)') col.2 <- all.dates[1:3] col.3 <- all.dates[2:4] col.4 <- c(paste(all.gradients[1],' (',all.gradients.lower[1],' to ',all.gradients.upper[1],')',sep=''), paste(all.gradients[2],' (',all.gradients.lower[2],' to ',all.gradients.upper[2],')',sep=''), paste(all.gradients[3],' (',all.gradients.lower[3],' to ',all.gradients.upper[3],')',sep='')) col.5 <- c(paste(P.1.growth,'%',' (',P.1.growth.CI[1],' to ',P.1.growth.CI[2],')',sep=''), paste(P.2.growth,'%',' (',P.2.growth.CI[1],' to ',P.2.growth.CI[2],')',sep=''), paste(P.3.growth,'%',' (',P.3.growth.CI[1],' to ',P.3.growth.CI[2],')',sep='')) res <- cbind(col.1,col.2,col.3,col.4,col.5); colnames(res) <- headings write.csv(res, 'Table 2.csv', row.names=F) ``` ```{r, eval = TRUE, echo = FALSE} tb2 <- read.csv(file='Table2.csv') print(tb2) ``` ********** # Table 1 Generate key objects: ```{r, eval = FALSE} library(ADMUR) library(DEoptimR) # generate a set of random calendar dates under the toy model. set.seed(888) cal <- simulateCalendarDates(model = toy, 1500) # Convert to 14C dates. age <- uncalibrateCalendarDates(cal, shcal20) # construct data frame. One date per phase. data <- data.frame(age = age, sd = 25, phase = 1:1500, datingType = '14C') # Calibrate each phase CalArray <- makeCalArray(shcal20, calrange = range(toy$year), inc = 5) PD <- phaseCalibrator(data, CalArray, remove.external = TRUE) # Generate SPD SPD <- summedCalibrator(data, CalArray) # Uniform model: No parameters. # Log Likelihood calculated directly using objectiveFunction, without a search required. unif.loglik <- -objectiveFunction(pars = NULL, PDarray = PD, type = 'uniform') # Best CPL models. 
Parameters and log likelihood found using seach fn <- objectiveFunction CPL1 <- JDEoptim(lower=rep(0,1), upper=rep(1,1), fn, PDarray=PD, type='CPL',trace=T,NP=20) CPL2 <- JDEoptim(lower=rep(0,3), upper=rep(1,3), fn, PDarray=PD, type='CPL',trace=T,NP=60) CPL3 <- JDEoptim(lower=rep(0,5), upper=rep(1,5), fn, PDarray=PD, type='CPL',trace=T,NP=100) CPL4 <- JDEoptim(lower=rep(0,7), upper=rep(1,7), fn, PDarray=PD, type='CPL',trace=T,NP=140) CPL5 <- JDEoptim(lower=rep(0,9), upper=rep(1,9), fn, PDarray=PD, type='CPL',trace=T,NP=180) # save results, for separate plotting save(SPD, PD, unif.loglik, CPL1, CPL2, CPL3, CPL4, CPL5, file='results.RData',version=2) ``` Pre-process and generate table: ```{r, eval = FALSE} load('results.RData') # Calculate BICs for all six models # name of each model model <- c('uniform','1-CPL','2-CPL','3-CPL','4-CPL','5-CPL') # extract log likelihoods for each model loglik <- c(unif.loglik, -CPL1$value, -CPL2$value, -CPL3$value, -CPL4$value, -CPL5$value) # extract effective sample sizes N <- c(rep(ncol(PD),6)) # number of parameters for each model K <- c(0, 1, 3, 5, 7, 9) # calculate BIC for each model BIC <- log(N)*K - 2*loglik table <- data.frame(Model=model, Parameters=K, MaxLogLikelihood=loglik, BIC=BIC) names(table) <- c('model','parameter','maximum log likelihood','BIC') print(table) write.csv(table,file='Table 1.csv', row.names=F) ``` ```{r, eval = TRUE, echo = FALSE} tb1 <- read.csv(file='Table1.csv') print(tb1) ``` ********** ![](four_logos.png){height=0.55in} **********
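The BIC formula `log(N)*K - 2*loglik` is typed out by hand in several places above (Figure 2, Figure 5 and Table 1). Purely as an illustration (this helper is hypothetical and not part of ADMUR or the published code), the calculation can be wrapped in a small function and applied to the objects saved in the Table 1 section:
```{r, eval = FALSE}
# hypothetical convenience helper, not used in the publication
bic <- function(loglik, K, N) log(N) * K - 2 * loglik

# e.g. reproducing the BIC column of Table 1 from the saved objects
bic(loglik = c(unif.loglik, -CPL1$value, -CPL2$value, -CPL3$value, -CPL4$value, -CPL5$value),
    K = c(0, 1, 3, 5, 7, 9),
    N = ncol(PD))
```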
/scratch/gouwar.j/cran-all/cranData/ADMUR/vignettes/replicating-timpson-rstb.2020.Rmd
#' How many people use my product #' #' A data base of adoption probability by triers and users #' user_p - a vector of percentage (0<users<1) of users. #' triers_p - a vector of percentage (0<triers<1) of triers #' ADP - vector of predicted percentage (0<ADP<1) of the adoption probability #' of an innovative product in the population. #' @author Mickey Kislev and Shira Kislev #' @details #' The measuring of triers is relatively easy. It is just a question of whether #' a person tried a product even once in his life or not. While measuring the #' rate of people who also adopt it as part of their life is more complicated #' since the adoption of a product is a subjective view of the individual. #' Mickey Kislev and Shira Kislev developed a formula to calculates the prevalence #' of users of a product to overcome this difficulty. The current dataseet #' assists in calculating the users of a product based on the prevalence of #' triers in the population. #' #' For example, suppose that a candy company launched a new chocolate bar. #' A candy company can collect data on the number of people who tried their #' chocolate bar and know from the rate of triers how many people decided to #' consume the new chocolate bar regularly. It should be noticed that the model #' was proved only on consumer behaviour of adults above the age of 21 years old #' and above. #' @seealso \code{\link{ptriers}}, \code{\link{pusers}}, \code{\link{adp.t}}, and \code{\link{adp.u}} #' @source Kislev, Mickey M. & Kislev, Shira, (2020). The Market Trajectory of a #' Radically New Product: E-Cigarettes. IJMS 12(4):63-92, DOI:\href{https://www.ccsenet.org/journal/index.php/ijms/article/view/0/44285}{10.5539/ijms.v12n4p63} ADP <- data.frame(triers_p = 1:100/100) ADP$user_p <- ADP$triers_p * 0.25 * exp(1.35*ADP$triers_p) ADP$ADP <- ADP$user_p / ADP$triers_p #' Calculates the predicted prevalence of triers according to the users' rate #' #' This function develops a prediction of the triers' rate of an innovation in #' the market, according to the number of users. #' #' @param users a vector of percentige (0<users<1) of known users. #' @return a vector of predicted percentige(0<triers<1) of triers in the population of a certain innovation according to known users rate #' @author Mickey Kislev and Shira Kislev #' @details #' This function calculates the rate of triers in the population of a certain #' innovation according to known users rate in that population and measured in a survey. #' @seealso \code{\link{pusers}}, \code{\link{adp.t}}, and \code{\link{adp.u}} #' @source Kislev, Mickey M. & Kislev, Shira, (2020). The Market Trajectory of a #' Radically New Product: E-Cigarettes. IJMS 12(4):63-92, DOI:\href{https://www.ccsenet.org/journal/index.php/ijms/article/view/0/44285}{10.5539/ijms.v12n4p63} #' @examples #' # 50% rate of users #' ptriers(0.5) #' 0.7382 #' # means that 74% of the population tried the product, in case that 50% of #' # the population are using it. #' @export ptriers <- function(users) { if(users < 0 | users > 1){ stop("users prevalence can be dafine in values between 0 to 1", call. 
= FALSE) } p <- length(users) pvalues <- numeric(p) exp_model <- data.frame(triers_p = 1:10000/10000) exp_model$user_p <- exp_model$triers_p * 0.25 * exp(1.35*exp_model$triers_p) for (i in p) { triers <- which(abs(exp_model$user_p-users)==min(abs(exp_model$user_p-users)))/10000 print(triers) } } #' Calculates the predicted prevalence of users according to the triers' rate #' #' This function develops a prediction of the users' rate of an innovation in #' the market, according to the number of triers #' #' @param triers a vector of percentage (0<triers<1) of known triers #' @return a vector of predicted percentage (0<users<1) of users in the population of a certain innovation according to known triers rate #' @author Mickey Kislev and Shira Kislev #' @details #' This function calculates the rate of users in the population of a certain #' innovation according to known triers rate in that population and measured in a survey. #' #' The measuring of triers is relatively easy. It is just a question of whether #' a person tried a product even once in his life or not. While measuring the #' rate of people who also adopt it as part of their life is more complicated #' since the adoption of a product is a subjective view of the individual. #' Mickey Kislev and Shira Kislev developed a formula to calculates the prevalence #' of users of a product to overcome this difficulty. The current function #' assists in calculating the users of a product based on the prevalence of #' triers in the population. #' @seealso \code{\link{ptriers}}, \code{\link{adp.t}}, and \code{\link{adp.u}} #' @references Kislev, Mickey M. & Kislev, Shira, (2020). The Market Trajectory of a #' Radically New Product: E-Cigarettes. IJMS 12(4):63-92, DOI:\href{https://www.ccsenet.org/journal/index.php/ijms/article/view/0/44285}{10.5539/ijms.v12n4p63} #' @examples #' # 50% rate of triers #' pusers(0.5) #' 0.2455041 #' # means that 24.5% of the population uses the product regularly, in case #' # that 50% of the population already tried it. #' @export pusers <- function(triers) { if(triers < 0 | triers > 1){ stop("triers prevalence can be dafine in values between 0 to 1", call. = FALSE) } p <- length(triers) pvalues <- numeric(p) for (i in p) { users <- triers * 0.25 * exp(1.35*triers) print(users) } } #' Calculates the predicted adoption probability according to the triers' rate #' #' #' This function develops a prediction of the adoption rate of an innovation in #' the market, according to the number of triers #' #' @param triers a vector of percentige (0<triers<1) of known triers #' @return a vector of predicted percentige(0<ADP<1) of the adoption probability #' of a innovative product in the population. #' @author Mickey Kislev and Shira Kislev #' @details #' This function calculates the adoption probability in the population of a certain #' innovation according to known triers rate measured in a survey. #' @seealso \code{\link{pusers}}, \code{\link{ptriers}}, and \code{\link{adp.u}} #' @source Kislev, Mickey M. & Kislev, Shira, (2020). The Market Trajectory of a #' Radically New Product: E-Cigarettes. IJMS 12(4):63-92, DOI:\href{https://www.ccsenet.org/journal/index.php/ijms/article/view/0/44285}{10.5539/ijms.v12n4p63} #' @examples #' # 50% rate of triers #' adp.t(0.5) #' 0.4910082 #' # means that every second person who tries the product will adopt it, in case #' # that 50% of the population already tried it. 
#' @export adp.t <- function(triers) { if(triers < 0 | triers > 1){ stop("triers prevalence can be dafine in values between 0 to 1", call. = FALSE) } p <- length(triers) pvalues <- numeric(p) for (i in p) { y <- triers * 0.25 * exp(1.35*triers) ADP <- y / triers print(ADP) } } #' Calculates the predicted adoption probability according to the users' rate #' #' This function develops a prediction of the adoption rate of an innovation in #' the market, according to the number of users. #' #' @param users a vector of percentige (0<users<1) of known users. #' @return a vector of predicted percentige(0<ADP<1) of the adoption probability #' of a innovative product in the population. #' @author Mickey Kislev and Shira Kislev #' @details #' This function calculates the adoption probability in the population of a certain #' innovation according to known users rate measured in a survey. #' @seealso \code{\link{pusers}}, \code{\link{ptriers}}, and \code{\link{adp.t}} #' @source Kislev, Mickey M. & Kislev, Shira, (2020). The Market Trajectory of a #' Radically New Product: E-Cigarettes. IJMS 12(4):63-92, DOI:\href{https://www.ccsenet.org/journal/index.php/ijms/article/view/0/44285}{10.5539/ijms.v12n4p63} #' @examples #' # 50% rate of users #' adp.u(0.5) #' 0.6773232 #' # means that two out of three people who try the product will adopt it, #' # in case that 50% of the population already uses it. #' @export adp.u <- function(users) { if(users < 0 | users > 1){ stop("users prevalence can be dafine in values between 0 to 1", call. = FALSE) } p <- length(users) pvalues <- numeric(p) exp_model <- data.frame(triers_p = 1:10000/10000) exp_model$user_p <- exp_model$triers_p * 0.25 * exp(1.35*exp_model$triers_p) for (i in p) { y <- which(abs(exp_model$user_p-users)==min(abs(exp_model$user_p-users)))/10000 ADP <- users / y print(ADP) } }
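# Illustrative check (a sketch, not part of the package source): the two
# conversions defined above are approximate inverses of one another, up to the
# 1/10000 grid used inside ptriers(). The values follow the examples documented
# in the roxygen blocks above; note these functions print their result.
pusers(0.7382)   # ~0.50   : user prevalence implied by 73.8% triers
ptriers(0.5)     # ~0.7382 : trier prevalence implied by 50% users
adp.t(0.7382)    # ~0.68   : adoption probability = users / triers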
/scratch/gouwar.j/cran-all/cranData/ADP/R/ADP.R
#' @title Adaptive Degree Polynomial Filter [ADPF] #' @description #' ADPF outputs a \code{data.frame} containing a column for the original data, the polynomial degree used to smooth it, and the requested derivative(s). #' @usage ADPF(YData, SthDeriv,MaxOrder,FilterLength, DeltaX, WriteFile) #' @param YData a numeric \code{data.frame}, \code{matrix} or \code{vector} to transform #' @param SthDeriv differentiation order #' @param MaxOrder maximum polynomial order #' @param FilterLength window size (must be odd) #' @param DeltaX optional sampling interval #' @param WriteFile a boolean that writes a \code{data.frame} to the working directory if true #' @author Phillip Barak #' @author Samuel Kruse #' @export #' @importFrom stats qf #' @importFrom utils write.csv #' @examples #' #' ADPF::CHROM #' #' smooth<-ADPF(CHROM[,6],0,9,13) #' numpoints=length(CHROM[,6]) #' plot(x=1:numpoints,y=CHROM[,6]);lines(x=1:numpoints, y=smooth[,3]) #' @details This is a code listing of a smoothing algorithm published in 1995 and written by Phillip Barak. ADPF modifies the Savitzky-Golay algorithm with a statistical heurism that increases signal fidelty while decreasing statisical noise. #' Mathematically, it operates simply as a weighted sum over a given window: #' \deqn{f_{t}^{n,s}=\sum{_{i=-m}^{m} h_{i}^{n,s,t}y_{i}}} #' Where \eqn{h_{i}^{n,s,t}} is the convolution weight of the \eqn{i}th point to the evaluate the \eqn{s}th derivative at point \eqn{t} using a polynomial of degree \eqn{n} #' on 2\eqn{m+1} data points, \eqn{y}. These convolution weights \eqn{h} are calculated using Gram polynomials which are optimally selected using a \eqn{F_{chi}} test. #' This improves upon the signal fidelity of Savitzky-Golay by optimally choosing the Gram polynomial degree between zero and the max polynomial order give by the user while removing statistical noise. #' The sampling interval specified with the \code{DeltaX} argument is used for scaling and get numerically correct derivatives. For more details on the statistical heurism see the Barak, 1995 article. This can be found at http://soils.wisc.edu/facstaff/barak/ under the publications section. #' @references Barak, P., 1995. Smoothing and Differentiation by and Adaptive-Degree Polynomial filter; Anal. Chem. 67, 2758-2762. #' #' Marchand, P.; Marmet, L. Rev. Sci. Instrum. 1983, 54, 1034-1041. #' #' Greville, T. N. E., Ed. Theory and Applications of Spline Functions; Academic Press: New York, 1969. #' #' Press, W. H.; Flannery, B. P.; Teukolsky, S. A.;Vetterling. W. T. Numerical Recipes; Cambridge University Press: Cambridge U.K., 1986. #' #' Savitzky, A., and Golay, M. J. E., 1964. Smoothing and differentiation of data by simplified least squares procedures. Anal. Chem. 36, 1627-1639. #' #' Macauly, F. R. The Smoothing of Time Series; National Bureau of Economic Research, Inc,: New York, 1931. #' #' Gorry, P. A. Anal. Chem. 1964, 36,1627-1639. #' #' Steiner, J.; Termonia, Y.; Deltour, J. Anal. Chem. 1972, 44. 1906-1909. #' #' Ernst, R. R. Adv. magn. Reson. 1966, 2,1-135. #' #' Gorry P. A. Anal. Chem. 1991, 64, 534-536. #' #' Ratzlaff, K. L.; Johnson, J. T. Adal. Chem. 1989, 61, 1303-1305. #' #' Kuo, J. E.; Wang, H.; Pickup, S. Anal. Chem. 1991, 63,630-645. #' #' Enke, C. G; Nieman, T. A. Anal. Chm 1976, 48, 705A-712A. #' #' Phillips, G. R., Harris, J. M. Anal. Chem. 1990, 62, 2749-2752. #' #' Duran, B.S. Polynomial Regression. In Encyclopedia of the Statistical Sciences, Kotz, S., Johnsonn N. L., Eds.; Wiley: New York, 1986; Vol. 7, pp 700-703. #' #' Bevington, P. R. 
Data Reduction and Error Analysis for the Physical Sciences; McGraw-Hill Book Co,: New York, 1969; Chapter 10. #' #' Snedecor, G. W.; Cochran, W. G. Statistical Methods, 6th ed.; Iowa State University Press: Ames, IA, 1967; Chapter 15. #' #' Hanning, R. W. Digital Filters, 2nd ed.; Prentice-Hall: Englewood Cliffs, NJ, 1983; Chapter 3. #' #' Ralston, A. A First Course in Numerical Analysis McGraw-Hill: New York, 1965; Chapter 6. #' #' Robert De Levie. 2008. Advanced Excel for Scientific data analysis. 2nd edn. Chapter 3.15 Least squares for equidistant data. Oxford Univ. Press, New York, NY. #' #' Wentzell, P. D., and Brown, C. D., 2000. Signal processing in analytical chemistry. Encyclopedia of Analytical Chemistry, 9764-9800. ADPF <- function(YData, SthDeriv, MaxOrder, FilterLength, DeltaX, WriteFile) { if (missing(DeltaX)) DeltaX <- 1 if (missing(WriteFile)) WriteFile = FALSE if (is.data.frame(YData)) YData <- as.matrix(YData) if (FilterLength %% 2 != 1) stop("needs an odd filter length") if (MaxOrder >= FilterLength) stop("filter length should be greater than maximum polynomial order") if (MaxOrder < SthDeriv) stop("polynomial order should be geater or equal to differentiation order") if (SthDeriv > 2) stop("Derivative should be less than or equal to 2") #this is the function that ultimately combines the convolution weights with the original data points NPts <- length(YData) MinOrder <- 0 sgbsmooth <- array(dim = c(NPts, 5)) sgbsmooth[, 1] <- YData colnames(sgbsmooth) = c("YData", "Jsignif", "smooth", "1st Derivative", "2nd Derivative") GenFact <- function(A1, B1) { gf = 1 C1 = A1 - B1 + 1 if (C1 > A1) { return(gf) } else{ for (j in C1:A1) { gf = gf * j } } return(gf) } m <- (FilterLength - 1) / 2 q <- FilterLength - m GramPoly = array(dim = c(FilterLength, MaxOrder + 2, SthDeriv + 2)) for (i in 1:FilterLength) { GramPoly[i, 1, 2] = 0 GramPoly[i, 2, 2] = 1 } for (j in-m:m) { n = j + q GramPoly[n, 3, 2] = j / m } for (k in 2:MaxOrder) { A2 = 2 * (2 * k - 1) / (k * (2 * m - k + 1)) B2 = ((k - 1) * (2 * m + k)) / (k * (2 * m - k + 1)) for (i in 0:m) { GramPoly[i + q, k + 2, 2] = A2 * i * GramPoly[i + q, k + 1, 2] - B2 * GramPoly[i + q, k, 2] } for (i in 1:m) { if (k %% 2 == 0) { GramPoly[i, k + 2, 2] <- GramPoly[FilterLength + 1 - i, k + 2, 2] } else{ GramPoly[i, k + 2, 2] <- -1 * (GramPoly[FilterLength + 1 - i, k + 2, 2]) } } } if (SthDeriv > 0) { for (s in 1:SthDeriv) { for (i in 1:FilterLength) { GramPoly[i, 1, s + 2] = 0 } for (i in 1:FilterLength) { GramPoly[i, 2, s + 2] = 0 } for (k in 1:MaxOrder) { A1 = 2 * (2 * k - 1) / (k * (2 * m - k + 1)) B1 = ((k - 1) * (2 * m + k)) / (k * (2 * m - k + 1)) for (i in-m:m) { GramPoly[i + q, k + 2, s + 2] = A1 * (i * GramPoly[i + q, k + 1, s + 2] + s * GramPoly[i + q, k + 1, s + 1]) - B1 * GramPoly[i + q, k, s + 2] } } } } #This is the array of weights that uses gram-polynomials and recursion to produce desired values Weight <- array(dim = c(FilterLength, FilterLength, MaxOrder + 2, SthDeriv + 2)) for (k in 0:MaxOrder) { A = (2 * k + 1) * GenFact(2 * m, k) / GenFact(2 * m + k + 1, k + 1) for (s in 0:SthDeriv) { for (i in 1:FilterLength) { for (t in 1:FilterLength) { Weight[i, t, 1, s + 2] <- 0 Weight[i, t, k + 2, s + 2] = Weight[i, t, k + 1, s + 2] + A * GramPoly[i, k + 2, 2] * GramPoly[t, k + 2, s + 2] } } } } SumX2 <- c(1:MaxOrder) for (j in 1:MaxOrder) { Sum = 0 for (i in 1:FilterLength) { Sum <- Sum + (GramPoly[i, j + 2, 2]) ^ 2 } SumX2[j] <- Sum } FValueTable <- array(dim = c(MaxOrder, FilterLength)) dF2 = 0 while (dF2 < FilterLength) { dF2 = 
dF2 + 1 for (i in 1:MaxOrder) { FValueTable[i, dF2] <- qf(0.05, i, dF2, lower.tail = FALSE) } } Y <- c(FilterLength) SumSquares <- c(MaxOrder + 1) Ftest <- c(MaxOrder + 1) JSignif = 0 #set at 0 for first fit;all others calc off previous fit SGSmooth <- function(j, t, Y, s) { Sumsg = 0 for (i in 1:FilterLength) { Sumsg = Sumsg + Weight[i, t, j + 2, s] * Y[i] } return(Sumsg) } for (k in q:(NPts - m)) { for (i in-m:m) { Y[i + q] <- YData[k + i] } #rezero arrays for (j in 1:MaxOrder + 1) { SumSquares[j] = 0 Ftest[j] = 0 } #Calc Sum of squares for start point if (JSignif == 0) { SumY = 0 SumY2 = 0 for (i in 1:FilterLength) { SumY = SumY + Y[i] SumY2 = SumY2 + (Y[i]) ^ 2 } SumSquares[1] = SumY2 - ((SumY) ^ 2) / (2 * m + 1) } else{ SumSq = 0 for (t in 1:FilterLength) { SumSq = SumSq + (SGSmooth(JSignif, t, Y, 2) - Y[t]) ^ 2 } SumSquares[JSignif + 1] = SumSq } j = JSignif + 1 repeat { SumXY = 0 for (p in 1:FilterLength) { SumXY = SumXY + Y[p] * GramPoly[p, j + 2 , 2] } SumSquares[j + 1] = SumSquares[j] - SumXY ^ 2 / SumX2[j] #calc F-test againsts last significant order Jsignif Ftest[j + 1] = (SumSquares[JSignif + 1] - SumSquares[j + 1]) / (SumSquares[j + 1] / ((2 * m + 1) - j - 1)) dF1 = (j - JSignif) dF2 = (FilterLength - j - 1) FValue = FValueTable[dF1, dF2] if (Ftest[j + 1] > FValue) { JSignif = j } j = j + 1 if ((JSignif + 2) >= MinOrder) { MinOrder = JSignif + 2 if (MinOrder > MaxOrder) { MinOrder = MaxOrder } } if (j > MinOrder) { break } } if (JSignif == 0 || JSignif == 1) { MinMax = 0 } else{ if (JSignif == 2) { MinMax = 1 } else{ MinMax = 0 OldestY = SGSmooth(JSignif, 1, Y, 2) OldY = SGSmooth(JSignif, 2, Y, 2) OldSign = sign(OldY - OldestY) for (t in 3:FilterLength){ NewY = SGSmooth(JSignif, t, Y, 2) Sign = sign(NewY - OldY) if (Sign != OldSign) { MinMax = MinMax + 1 } OldSign = Sign OldY = NewY } } } #this if statement fills out first m spots with the qth polynomial if (k == q) { for (t in 1:m) { sgbsmooth[t, 3] <- SGSmooth(JSignif, t, Y, 2) if (SthDeriv > 0) { sgbsmooth[t, 4] = SGSmooth(JSignif, t, Y, 3) / DeltaX } if (SthDeriv == 2) { sgbsmooth[t, 5] = SGSmooth(JSignif, t, Y, 4) / DeltaX ^ 2 } } } sgbsmooth[k, 3] <- SGSmooth(JSignif, q, Y, 2) sgbsmooth[k, 2] <- JSignif if (SthDeriv > 0) { sgbsmooth[k, 4] = SGSmooth(JSignif, q, Y, 3) / DeltaX } if (SthDeriv == 2) { sgbsmooth[k, 5] = SGSmooth(JSignif, q, Y, 4) / DeltaX ^ 2 } if (k == (NPts - m)) { for (t in 1:m) { sgbsmooth[k + t, 3] = SGSmooth(JSignif, t + q, Y, 2) if (SthDeriv > 0) { sgbsmooth[k + t, 4] = SGSmooth(JSignif, t + q, Y, 3) / DeltaX } if (SthDeriv == 2) { sgbsmooth[k + t, 5] = SGSmooth(JSignif, t + q, Y, 4) / DeltaX ^ 2 } } } MinOrder = MinMax + 1 if (MinOrder > MaxOrder) { MinOrder = MaxOrder } if (MinOrder < 0) { MinOrder = 0 } JSignif = 0 } #print("Smoothed Values are saved in Files Tab and printed below!") if (WriteFile == TRUE) { write.csv(sgbsmooth, "smooth") } if (SthDeriv == 0) { return(sgbsmooth[, 1:3]) } if (SthDeriv == 1) { return(sgbsmooth[, 1:4]) } if (SthDeriv == 2) { return(sgbsmooth[, 1:5]) } }
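## Usage sketch (not part of the original file; assumes the bundled CHROM data,
## as in the roxygen example above): smooth the noise-free column 6 and also
## return its first derivative, using a unit sampling interval.
res <- ADPF(ADPF::CHROM[, 6], SthDeriv = 1, MaxOrder = 9, FilterLength = 13, DeltaX = 1)
head(res)   # columns: YData, Jsignif (fitted polynomial degree), smooth, 1st Derivative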
/scratch/gouwar.j/cran-all/cranData/ADPF/R/ADPF.R
#' @docType data
#' @name CHROM
#' @aliases CHROM
#' @title CHROM
#' @format A data frame of 201 observations of chromatogram data with 6 different levels of statistical noise. Column 6 has no noise.
#' @description Barak gathered this data for the purpose of testing his algorithm.
#' @source BARAK SOILS LAB UW-Madison
#' @references Barak, P., 1995. Smoothing and Differentiation by an Adaptive-Degree Polynomial filter; Anal. Chem. 67, 2758-2762.
#' @export
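## Quick look (a sketch, not part of the original file; assumes the installed
## ADPF package so the data object is available): overlay one of the
## noise-added replicates on the noise-free signal in column 6.
data(CHROM, package = "ADPF")
matplot(CHROM[, c(1, 6)], type = "l", lty = 1, xlab = "index", ylab = "signal")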
/scratch/gouwar.j/cran-all/cranData/ADPF/R/CHROM.R
##' @title Fast Clustering Using Adaptive Density Peak Detection ##' ##' @description Clustering of data by finding cluster centers from estimated density peaks. ADPclust is a non-iterative procedure that incorporates multivariate Gaussian density estimation. The number of clusters as well as bandwidths can either be selected by the user or selected automatically through an internal clustering criterion. ##' ##' @details Given n data points x's in p dimensions, adpclust() calculates f(x) and delta(x) for each data point x, where f(x) is the local density at x, and delta(x) is the shortest distance between x and y for all y such that f(x) <= f(y). Data points with large f and large delta values are labeled class centroids. In other words, they appear as isolated points in the upper right corner of the f vs. delta plot (the decision plot). After cluster centroids are determined, other data points are clustered according to their distances to the closes centroids. ##' ##' A bandwidth (smoothing parameter) h is used to calculate local density f(x) in various ways. See parameter 'fdelta' for details. If centroids = 'user', then h must be explicitly provided. If centroids = 'auto' and h is not specified, then it is automatically selected from a range of testing values: First a reference bandwidth h0 is calculated by one of the two methods: Scott's Rule-of-Thumb value (htype = "ROT") or Wand's Asymptotic-Mean-Integrated-Squared-Error value (htype = "AMISE"), then 10 values equally spread in the range [1/3h0, 3h0] are tested. The value that yields the highest silhouette score is chosen as the final h. ##' ##' @param x numeric data frame where rows are observations and columns are variables. One of x and distm must be provided. ##' @param distm distance matrix of class 'dist'. distm is ignored if x is given. ##' @param p number of variables (ncol(x)). This is only needed if neither x nor h is given. ##' @param centroids character string specifying how cluster centroids are selected. Valid options are "user" and "auto". ##' @param h nonnegative number specifying the bandwidth in density estimation. If h is NULL, the algorithm attempts to find h in a neighborhood centered at either the AMISE bandwidth or ROT bandwidth (see htype). ##' @param htype character string specifying the method used to calculate a reference bandwidth for the density estimation. htype is ignored if h is given. Valid options of are "ROT" and "AMISE" (see details). ##' @param nclust integer, or a vector of integers specifying the pool of the number of clusters in automatic variation. The default is 2:10. ##' @param ac integer indicating which automatic cut method is used. This is ignored if centroids = 'user'. The valid options are: ##' \itemize{ ##' \item{ac = 1: }{centroids are chosen to be the data points x's with the largest delta values such that f(x) >= a'th percentile of all f(x). The number of centroids is given by the parameter nclust. The cutting percentile(s) is given by the parameter f.cut. } ##' \item{ac = 2: }{let l denote the straight line connecting (min(f), max(delta)) and (max(f), min(delta)). The centroids are selected to be data points above l and farthest away from it. The number of centroids is given by the parameter nclust.} ##' } ##' @param f.cut number between (0, 1) or numeric vector of numbers between (0, 1). f.cut is used when centroids = "auto" and ac = 1 to automatically select cluster centroids from the decision plot (see ac). The default is c(0.1, 0.2, 0.3). 
##' @param fdelta character string that specifies the method used to estimate local density f(x) at each data point x. The default (recommended) is "mnorm" that uses a multivariate Gaussian density estimation to calculate f. Other options are listed below. Here 'distm' denotes the distance matrix. ##' \itemize{ ##' \item{unorm}{(f <- 1/(h * sqrt(2 * pi)) * rowSums(exp(-(distm/h)^2/2))); Univariate Gaussian smoother} ##' \item{weighted}{(f <- rowSums(exp(-(distm/h)^2))); Univariate weighted smoother} ##' \item{count}{(f <- rowSums(distm < h) - 1); Histogram estimator (used in Rodriguez [2014])} ##' } ##' @param dmethod character string that is passed to the 'method' argument in function dist(), which is used to calculate the distance matrix if 'distm' is not given. The default is "euclidean". ##' @param draw boolean. If draw = TRUE the clustering result is plotted after the algorithm finishes. The plot is produced by by plot.adpclust(ans), where 'ans' is the outcome of 'adpclust()' ##' @return An 'adpclust' object that contains the list of the following items. ##' \itemize{ ##' \item{clusters}{ Cluster assignments. A vector of the same length as the number of observations.} ##' \item{centers:}{ Indices of the clustering centers.} ##' \item{silhouette:}{ Silhouette score from the final clustering result.} ##' \item{nclust:}{ Number of clusters.} ##' \item{h:}{ Final bandwidth.} ##' \item{f:}{ Final density vector f(x).} ##' \item{delta:}{ Final delta vector delta(x).} ##' \item{selection.type:}{ 'user' or 'auto'.} ##' } ##' ##' @references ##' \itemize{ ##' \item{GitHub: \url{https://github.com/ethanyxu/ADPclust}} ##' \item{Xiao-Feng Wang, and Yifan Xu, (2015) "Fast Clustering Using Adaptive Density Peak Detection." Statistical Methods in Medical Research, doi:10.1177/0962280215609948. } ##' \item{PubMed: \url{http://www.ncbi.nlm.nih.gov/pubmed/26475830}} ##' } ##' @export ##' @examples ##' # Load a data set with 3 clusters ##' data(clust3) ##' ##' # Automatically select cluster centroids ##' ans <- adpclust(clust3, centroids = "auto", draw = FALSE) ##' summary(ans) ##' plot(ans) ##' ##' # Specify distm instead of data ##' distm <- FindDistm(clust3, normalize = TRUE) ##' ans.distm <- adpclust(distm = distm, p = 2, centroids = "auto", draw = FALSE) ##' identical(ans, ans.distm) ##' ##' # Specify the grid of h and nclust ##' ans <- adpclust(clust3, centroids = "auto", h = c(0.1, 0.2, 0.3), nclust = 2:6) ##' ##' # Specify that bandwidths should be searched around ##' # Wand's Asymptotic-Mean-Integrated-Squared-Error bandwidth ##' # Also test 3 to 6 clusters. ##' ans <- adpclust(clust3, centroids = "auto", htype = "AMISE", nclust = 3:6) ##' ##' # Set a specific bandwidth value. 
##' ans <- adpclust(clust3, centroids = "auto", h = 5) ##' ##' # Change method of automatic selection of centers ##' ans <- adpclust(clust3, centroids = "auto", nclust = 2:6, ac = 2) ##' ##' # Specify that the single "ROT" bandwidth value by ##' # using the 'ROT()' function ##' ans <- adpclust(clust3, centroids = "auto", h = ROT(clust3)) ##' ##' # Centroids selected by user ##' \dontrun{ ##' ans <- adpclust(clust3, centroids = "user", h = ROT(clust3)) ##' } ##' ##' # A larger data set ##' data(clust5) ##' ans <- adpclust(clust5, centroids = "auto", htype = "ROT", nclust = 3:5) ##' summary(ans) ##' plot(ans) adpclust <- function(x = NULL, distm = NULL, p = NULL, centroids = 'auto', h = NULL, htype = 'amise', nclust = 2:10, ac = 1, f.cut = c(0.1, 0.2, 0.3), fdelta = 'mnorm', dmethod = 'euclidean', draw = FALSE ){ # ------------------------------------------------------------------------- # Check arguments # ------------------------------------------------------------------------- if(!centroids %in% c('user', 'auto')){ stop('arg centroids must be one of c(\'user\', \'auto\') Got ', centroids) } if(!is.null(h)){ if(!is.numeric(h)) stop('arg h must be numeric. Got ', class(h)) if(length(h) == 0) stop('arg h is empty: ', h) if(min(h) <= 0) stop('arg h must be nonnegative. Got', h) } if(!tolower(htype) %in% c('amise', 'rot')){ stop('arg centroids must be one of c(\'amise\', \'rot\') Got ', htype) } if(!all(nclust == floor(nclust))) stop('arg nclust must all be integers. Got ', nclust) if(min(nclust) <= 1) stop('arg nclust must be integers > 1. Got ', nclust) if(!ac %in% c(1,2)) stop('arg ac must be one of c(1,2). Got ', ac) if(!is.numeric(f.cut)) stop('arg f.cut must be numeric. Got ', class(f.cut)) if(length(f.cut) == 0) stop('arg f.cut is empty: ', f.cut) if(min(f.cut) < 0) stop('arg f.cut must be between 0 - 1. Got', f.cut) if(max(f.cut) >= 1) stop('arg f.cut must be between 0 - 1. Got', f.cut) if(!fdelta %in% c('mnorm', 'unorm', 'weighted', 'count')){ stop('arg fdelta must be one of c(\'mnorm\', \'unorm\', \'weighted\', \'count\'). Got ', fdelta) } if(is.null(x)){ # Use distm if(is.null(distm)) stop("Must provide one of x or distm") if(!inherits(distm, 'dist')) stop("arg distm must inherit dist class. Got ", class(distm)) if(is.null(p) && is.null(h)){ stop("Bandwidth h and data x are not given. Must provide p to calculate h.") } }else{ # Use x. Calculate distm. 
if(fdelta == "mnorm"){ distm <- FindDistm(x, normalize = TRUE, method = "euclidean") }else{ distm <- FindDistm(x, normalize = FALSE, method = dmethod) } p = ncol(x) } # ------------------------------------------------------------------------- # Find bandwidth h # ------------------------------------------------------------------------- if(is.null(h)){ if(fdelta != "mnorm"){ stop("Must give h unless fdelta == 'mnorm'") } h <- FindH(p, attr(distm, 'Size'), htype) } # ------------------------------------------------------------------------- # Clustering with the 'user' option # ------------------------------------------------------------------------- if(centroids == "user"){ if(length(h) > 1){ stop("h must be a scalar when centroids == 'user'") } fd <- FindFD(distm, h, fdelta) ans <- FindClustersManual(distm, fd$f, fd$delta) ans[['h']] <- h ans[['f']] <- fd[['f']] ans[['delta']] <- fd[['delta']] ans[['selection.type']] <- 'user' class(ans) <- c("adpclust", "list") if(draw) plot.adpclust((ans)) return(ans) } # ------------------------------------------------------------------------- # Clustring with the 'auto' option # ------------------------------------------------------------------------- if(centroids == "auto"){ if(length(h) > 1){ h.seq <- h }else{ h.seq <- seq(h / 3, h * 3, length.out = 10) } # Find f and delta for each h fd.list <- lapply(h.seq, function(h) FindFD(distm, h, fdelta)) result.list <- lapply(fd.list, function(fd){ FindClustersAuto(distm = distm, f = fd[['f']], delta = fd[['delta']], ac = ac, nclust = nclust, f.cut = f.cut) }) score.seq <- sapply(result.list, function(x) x$silhouette) iwinner <- which.max(score.seq) # Generate a list of all tested possibilities tested <- list() for(i in seq_along(h.seq)){ for(one.sil in result.list[[i]]$tested.sils){ one.tested <- list(f.cut = attr(one.sil, 'f.cut'), f.cut.value = attr(one.sil, 'f.cut.value'), nclust = attr(one.sil, 'nclust'), h = h.seq[i], sil = as.vector(one.sil)) tested <- c(tested, list(one.tested)) } } ans <- result.list[[iwinner]] ans[['tested.sils']] <- NULL # Redundant. In 'tested' ans[['h']] <- h.seq[iwinner] fd <- fd.list[[iwinner]] ans[['f']] <- fd[['f']] ans[['delta']] <- fd[['delta']] ans[['selection.type']] <- 'auto' ans[['tested']] <- tested class(ans) <- c("adpclust", "list") if(draw) plot.adpclust((ans)) return(ans) } stop('centroids not recognized') # should never reach here. }
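## Illustrative sketch (not part of the package source; assumes the installed
## ADPclust package for its internal helpers and the bundled 'clust3' data):
## inspect the fields documented in the return value above.
library(ADPclust)
data(clust3)
ans <- adpclust(clust3, centroids = "auto")
table(ans$clusters)                         # cluster sizes
ans$centers                                 # row indices of the selected centroids
c(h = ans$h, silhouette = ans$silhouette)   # chosen bandwidth and its silhouette score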
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/ADPclust.R
##' Calculate the AMISE bandwidth from either a data frame, or from the number of observations and the dimension of the data.
##'
##' IMPORTANT NOTE: The standard deviation of each variable is omitted in this formula.
##'
##' @title AMISE bandwidth
##'
##' @export
##'
##' @param x the number of variables (if y is given), or a data frame or a matrix (if y is missing).
##' @param y the number of observations. If y is missing then x is interpreted as the data matrix.
##' @return AMISE bandwidth.
AMISE <- function(x, y = NULL){
    if(is.null(y)){
        if(inherits(x, c("data.frame", "matrix"))){
            n <- nrow(x)
            p <- ncol(x)
        }else{
            stop("y is missing, so x must be a data frame or a matrix.")
        }
    }else{
        if(is.numeric(x) && x > 0 && is.numeric(y) && y > 0){
            p = x
            n = y
        }else stop("Wrong x, y. Both must be positive numbers.")
    }
    h <- (4 / (p + 2)) ^ (1 / (p + 4)) * n ^ (-1 / (p + 4))
    return(h)
}
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/AMISE.R
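The AMISE bandwidth above is a closed-form function of the dimension p and the sample size n, so it is easy to sanity-check. A minimal sketch, not part of the package source, assuming the ADPclust package is installed (`AMISE()` is exported):

```r
library(ADPclust)

# Direct evaluation of h = (4/(p+2))^(1/(p+4)) * n^(-1/(p+4))
p <- 2; n <- 100
h_direct <- (4 / (p + 2))^(1 / (p + 4)) * n^(-1 / (p + 4))

# Same value via the two calling conventions of AMISE()
h_pn   <- AMISE(p, n)                              # p and n given explicitly
h_data <- AMISE(matrix(rnorm(n * p), nrow = n))    # p and n taken from a data matrix

all.equal(h_direct, h_pn)    # TRUE
all.equal(h_direct, h_data)  # TRUE
```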
##' Automatically finds centers with diagonal f(x) vs delta(x) thresholds. This is used in adpclust() with ac = 2. It finds the points that lie above and farthest from the diagonal line in the f vs. delta plot, and labels them as centers.
##'
##' @title Automatically finds centers with diagonal f(x) vs delta(x) thresholds
##' @param f vector of local densities f(x).
##' @param delta vector of minimal distances to higher ground delta(x).
##' @param nclust number of clusters. Can be a single integer or a vector of integers. Duplicates are silently removed.
##' @return a list of vectors. Each vector gives the locations of centers.
##' @author Ethan Xu
FindCentersAutoD <- function(f, delta, nclust){
    if(!is.numeric(nclust)) stop('arg nclust should inherit numeric. Got ', class(nclust))
    if(!all.equal(nclust, as.integer(nclust))) stop('nclust must all be integers')
    if(min(nclust) <= 0) stop('nclust must be positive integers')
    center.list <- list()
    nclust <- unique(nclust)
    ## Line through (min f, max delta) and (max f, min delta)
    x1 <- min(f); y1 <- max(delta)
    x2 <- max(f); y2 <- min(delta)
    ## Signed distance of each (f, delta) point to that line
    pl.dist <- ((x2 - x1) * delta - (y2 - y1) * f + y2 * x1 - x2 * y1) / sqrt((y2 - y1) ^ 2 + (x2 - x1) ^ 2)
    cts <- order(pl.dist, decreasing = TRUE)[1:max(nclust)]
    for(i in seq_along(nclust)){
        centers <- cts[1:nclust[i]]
        attributes(centers) <- list(nclust = nclust[i])
        center.list <- c(center.list, list(centers))
    }
    return(center.list = center.list)
}
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/FindCentersAutoD.R
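To see what the diagonal rule in `FindCentersAutoD()` is doing, here is a toy sketch (not package code) that reproduces the signed point-to-line distance and picks the three points farthest above the diagonal of the f-delta plot:

```r
set.seed(1)
f     <- runif(20)   # stand-ins for local densities
delta <- runif(20)   # stand-ins for distances to higher ground

# Line through (min f, max delta) and (max f, min delta)
x1 <- min(f); y1 <- max(delta)
x2 <- max(f); y2 <- min(delta)

# Signed distance of each (f, delta) point to that line
pl.dist <- ((x2 - x1) * delta - (y2 - y1) * f + y2 * x1 - x2 * y1) /
  sqrt((y2 - y1)^2 + (x2 - x1)^2)

# The points farthest above the diagonal are the candidate centers
order(pl.dist, decreasing = TRUE)[1:3]
```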
##' Automatically find centers with vertical threshold vertical f(x) thresholds. ##' ##' Given f's and delta's, cluster centers are chosen to be the data points whose delta values are high and f values are larger than a fixed threshold. To be more specific, let F denote the set of all f(x). centers are selected as points with the largest m delta values in the set {x | f(x) > a'th percentile of F}. The number of centers m is given by the parameter nclust. The cutting percentile a is given by the parameter f.cut. When at least one of these two parameters are vectors, centers are selected based all combinations of them, and returned in a list. ##' ##' @title Automatically find centers with vertical threshold ##' @param f vector of local distance f(x). See the detail section of the help(adpclust). ##' @param delta vector of minimal distances to higher ground delta(x). See the detail section of the help(adpclust). ##' @param nclust number of clusters. It can be either a single integer or a vector of integers. ##' @param f.cut number between (0, 1) or numeric vector of numbers between (0, 1). Data points whose f values are larger than f.cut with large delta values are selected as centers. The default is c(0.1, 0.2, 0.3). ##' @param rm.dup boolean. If TRUE (default) duplicated centers vectors are removed from returned list. ##' @export ##' @return a list of vectors. Each vector contains the indices of selected centers. ##' @author Ethan Xu FindCentersAutoV <- function(f, delta, f.cut = c(0.1, 0.2, 0.3), nclust, rm.dup = TRUE){ # ------------------------------------------------------------------------- # Check arguments # ------------------------------------------------------------------------- if(!is.numeric(f.cut)) stop('arg f.cut should inherit numeric. Got ', class(f.cut)) if(length(f.cut) == 0) stop('arg f.cut is empty: ', f.cut) if(min(f.cut) < 0) stop('arg f.cut must be between 0 - 1. Got', f.cut) if(max(f.cut) >= 1) stop('arg f.cut must be between 0 - 1. Got', f.cut) if(!is.numeric(nclust)) stop('arg nclust should inherit numeric. Got ', class(nclust)) if(!all.equal(nclust, as.integer(nclust))) stop('nclust must all be integers') if(min(nclust) <= 0) stop('nclust must be positive integers') center.list <- list() for(i in seq_along(f.cut)){ # For each f.cuts ##f0 <- min(f) + f.cut[j] * (max(f) - min(f)) f0 <- stats::quantile(f, probs = f.cut[i]) delta1 <- delta delta1[f < f0] <- -Inf cts <- order(delta1, decreasing = TRUE)[1 : max(nclust)] for(j in seq_along(nclust)){ # For each nclust if(sum(f >= f0) < nclust[j]){ # Number of points that > f.cut is less than nclust. Stop stop("Only (", sum(f >= f0), ") points to the right of f.cut (", f0, "), but nclust = ", nclust[j]) } centers <- cts[1 : nclust[j]] attributes(centers) <- list(f.cut = f.cut[i], f.cut.value = f0, nclust = nclust[j]) if(rm.dup){ if(!IsDup(center.list, centers)){ center.list <- c(center.list, list(centers)) } }else{ center.list <- c(center.list, list(centers)) } } } return(center.list = center.list) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/FindCentersAutoV.R
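The vertical-threshold rule can likewise be illustrated on toy values: keep only the points whose f exceeds the f.cut quantile, then take the largest delta values among them. A sketch with f.cut = 0.1 and nclust = 3; the exported `FindCentersAutoV()` should return the same indices:

```r
set.seed(2)
f     <- runif(20)
delta <- runif(20)

f0 <- quantile(f, probs = 0.1)   # f.cut = 0.1
delta1 <- delta
delta1[f < f0] <- -Inf           # discard low-density points

centers <- order(delta1, decreasing = TRUE)[1:3]   # nclust = 3
centers

# Should agree with (assuming ADPclust is attached):
# FindCentersAutoV(f, delta, f.cut = 0.1, nclust = 3)[[1]]
```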
##' This is the subroutine that automatically finds cluster assignments from given f and delta by testing various parameter settings and find the one that maximizes the silhouette. ##' ##' @title Automatically find cluster assignment given f and delta. ##' @param distm the distance matrix ##' @param f vector of local distance f(x). See the help of adpclust() for details. ##' @param delta vector of minimal distances to higher ground delta(x). See the help of adpclust() for details. ##' @param ac type of auto selection. The valid options are 1 and 2. See the help of adpclust() for details. ##' @param nclust number of clusters to test. Either a single integer or a vector of integers. ##' @param f.cut number between (0, 1) or numeric vector of numbers between (0, 1). Data points whose f values are larger than f.cut with large delta values are selected as centers. The default is c(0.1, 0.2, 0.3). See the help of FindCentersAutoV() for more details. ##' @export ##' ##' @return list of four elements: ##' \itemize{ ##' \item{clusters}{ Cluster assignments. A vector of the same length as the number of observations.} ##' \item{centers:}{ Indices of the clustering centers.} ##' \item{silhouette:}{ Silhouette score from the final clustering result.} ##' \item{nclust:}{ Number of clusters.} ##' } ##' @author Ethan Xu FindClustersAuto <- function(distm, f, delta, ac = 1, nclust = 2:10, f.cut = c(0.1, 0.2, 0.3)){ # ------------------------------------------------------------------------- # Check arguments # ------------------------------------------------------------------------- if(!inherits(distm, 'dist')) stop("arg distm must inherit dist class. Got ", class(distm)) if(!is.numeric(f)) stop("arg f must be numeric. Got ", class(f)) if(!is.numeric(delta)) stop("arg delta must be numeric. Got ", class(delta)) if(attr(distm, 'Size') != length(f)) stop("length of f (", length(f),") not equal to number of observations in distm (", attr(distm, 'Size'), ")") if(attr(distm, 'Size') != length(delta)) stop("length of delta (", length(delta),") not equal to number of observations in distm (", attr(distm, 'Size'), ")") if(!all.equal(nclust, as.integer(nclust))) stop('arg nclust must all be integers. Got ', class(nclust)) if(min(nclust) <= 0) stop('nclust must be positive integers') if(!is.numeric(f.cut)) stop('arg f.cut must be numeric. Got ', class(f.cut)) if(length(f.cut) == 0) stop('arg f.cut is empty: ', f.cut) if(min(f.cut) < 0) stop('arg f.cut must be between 0 - 1. Got', f.cut) if(max(f.cut) >= 1) stop('arg f.cut must be between 0 - 1. Got', f.cut) if(length(ac) != 1) stop('arg ac must have length 1. Got', ac) # ------------------------------------------------------------------------- # Find centers # ------------------------------------------------------------------------- if(ac == 1){ center.list <- FindCentersAutoV(f, delta, f.cut = f.cut, nclust = nclust, rm.dup = FALSE) }else if(ac == 2){ center.list <- FindCentersAutoD(f, delta, nclust = nclust) }else{ stop("Wrong ac. Must be either 1 or 2. 
Got ", ac) } if(length(center.list) == 0){ stop("Failed to find any centers") } # ------------------------------------------------------------------------- # Cluster # ------------------------------------------------------------------------- cluster.list <- lapply(center.list, function(x){ a <- FindClustersGivenCenters(distm, centers = x) attributes(a) <- attributes(x) return(a) }) sils.list <- lapply(cluster.list, function(x){ a <- FindSilhouette(distm, clusters = x) attributes(a) <- attributes(x) return(a) }) sils.vector <- unlist(sils.list) winner.i <- which.max(sils.vector) ans <- list() ans[['clusters']] <- cluster.list[[winner.i]] ans[['centers']] <- center.list[[winner.i]] ans[['silhouette']] <- sils.list[[winner.i]] ans[['nclust']] <- length(ans[['centers']]) ans[['tested.sils']] <- sils.list return(ans) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/FindClustersAuto.R
##' Find cluster assignments from given centers and distance matrix. Each point is assigned to the center that has the shortest Euclidean distance. ##' ##' @title Find cluster assignments given centers and distance matrix ##' @param distm distance matrix ##' @param centers vector of integers that gives the indices of centers. Duplications will be silently dropped. ##' @export ##' @return Cluster assignments. A vector of the same length as the number of observations. FindClustersGivenCenters <- function(distm, centers){ if(!inherits(distm, 'dist')) stop("arg distm must be a dist class") if(!is.numeric(centers)) stop("arg centers must be numeric. Got ", class(centers)) if(length(centers) == 0) stop("arg centers must have length > 1") if(!all(centers == floor(centers))) stop("arg centers must be a (vector of) integer(s). Got ", centers) if(anyDuplicated(centers)) stop("Duplications in centers not allowed") if(anyNA(centers)) stop("NA in centers not allowed") if(min(centers) <= 0) stop("min(centers) <= 0") if(max(centers) > attr(distm, 'Size')) stop("max(centers) larger than number of observations in distm") centers <- unique(centers) if(length(centers) <= 1) stop('length of unique(centers) must be greater than 1') distm <- as.matrix(distm) dist.to.centers <- distm[, centers] clusters <- apply(dist.to.centers, 1, FUN = which.min) return(clusters) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/FindClustersGivenCenters.R
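A short usage sketch of the nearest-center assignment on two well-separated toy clusters, assuming ADPclust is attached (both `FindDistm()` and `FindClustersGivenCenters()` are exported):

```r
library(ADPclust)

set.seed(3)
x <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),   # rows  1-10: cluster near (0, 0)
           matrix(rnorm(20, mean = 5), ncol = 2))   # rows 11-20: cluster near (5, 5)

distm <- FindDistm(x, normalize = FALSE)

# Use one point from each group as the centers; every row is assigned to the
# nearer of the two, so the result should be c(rep(1, 10), rep(2, 10)).
FindClustersGivenCenters(distm, centers = c(1, 15))
```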
#' Plot the f vs. delta plot, then wait for the user to select centers of clusters by left clicking the points. In general points with both large f and large delta are good candidates of cluster centroids. Selected centers are highlighted. Press ESC to end the selection. #' #' @title User-interactive routine to find clusters #' #' @param distm distance matrix. #' @param f vector of local densities f(x). Same length of the number of observations. #' @param delta vector of distances to the closest high ground delta(x). Same length of the number of observations. #' #' @export #' @return a list of the following items: #' \itemize{ ##' \item{clusters}{ Cluster assignments. A vector of the same length as the number of observations.} #' \item{centers:}{ Indices of the clustering centers.} #' \item{silhouette:}{ Silhouette score from the final clustering result.} #' \item{nclust:}{ Number of clusters.} #' } #' #' @examples #' data(clust3) #' distm <- FindDistm(clust3, normalize = TRUE) #' \dontrun{ #' fd <- FindFD(distm, 2, "mnorm") #' ans <- FindClustersManual(distm, fd$f, fd$delta) #' names(ans) #' ans$centers #' } FindClustersManual <- function(distm, f, delta){ if(!inherits(distm, 'dist')) stop('arg distm must inherit \'dist\'. Got: ', class(distm)) if(!inherits(f, 'numeric')) stop('arg f must be numeric. Got: ', class(f)) if(!inherits(delta, 'numeric')) stop('arg delta must be numeric. Got: ', class(delta)) if(length(f) != length(delta)){ stop('lengths of f and delta are different. length(f) = ', length(f), '; length(delta) = ', length(delta)) } mycols <- defCol() ## Plot f(x) vs delta(x) plot. Click to select centerss # dev.new(width = 12, height = 6) graphics::par(mfrow = c(1,1), mar = c(7,4,4,3), mgp = c(3,1,0)) graphics::plot(f, delta, main = "Decision Plot", xlab = "", ylab = paste0(expression(delta), "(x)")) graphics::mtext("f(x)\n Select centroids by left clicking \nPress 'ESC' to end selection", side = 1, line = 4) cat("Waiting user selection of centroids on the density-distance plot.\n") centers <- PickCenter(f, delta, col = mycols, labelcex = 0.6) # indices of centers frange <- range(f); drange <- range(delta) graphics::rect(xleft = frange[1] + 0.3 * (frange[2] - frange[1]), ybottom = drange[1] + 0.4 * (drange[2] - drange[1]), xright = frange[1] + 0.7 * (frange[2] - frange[1]), ytop = drange[1] + 0.6 * (drange[2] - drange[1]), col = "white") graphics::text(mean(frange), mean(drange), labels = "Selection Finished.") if(length(centers) < 2) stop("Select at least two centers.") ## Assign pts to clusters # 'rdist' finds distances between pts to centers; "euclidean distance"; clusters <- FindClustersGivenCenters(distm, centers) silhouette <- FindSilhouette(distm, clusters) ##------------------------------------ ## Return ##------------------------------------ ans <- list() ans[['clusters']] <- clusters ans[['centers']] <- centers ans[['silhouette']] <- silhouette ans[['nclust']] <- length(ans[['centers']]) return(ans) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/FindClustersManual.R
##' A wrapper of the dist() method, with the option to rescale the data with standard deviation of each dimension before calculating the distance matrix. NOTE: If fdelta='mnorm' is passed to adpclust(), then the distm is calculated from rescaled data internally, i.e. distm <- FindDistm(x, normalize = TRUE). ##' ##' @title Find the distance matrix from data. ##' @param x data ##' @param normalize boolean. Normalize data before calculating distance? ##' @param method passed to 'dist()' ##' @export ##' @return distance matrix of class dist. ##' @author Ethan Xu FindDistm <- function(x, normalize = FALSE, method = 'euclidean'){ if(!inherits(x, 'data.frame') && !inherits(x, 'matrix')) stop('arg x must be data frame or matrix') if(nrow(x) == 0) stop('x is empty. Cannot calculate distance matrix.') if(!inherits(normalize, 'logical')) stop('arg normalize must be boolean') if(normalize){ ## distm.std <- as.matrix(dist(scale(dat, center = FALSE, scale = TRUE), ## method = "euclidean")) sds <- apply(x, 2, stats::sd) distm <- stats::dist(scale(x, center = FALSE, scale = sds), method = method, upper = TRUE) }else{ distm <- stats::dist(x, upper = TRUE, method = method) } return(distm) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/FindDistm.R
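As the note above says, `normalize = TRUE` rescales each column by its standard deviation before calling `dist()`. A quick check of that equivalence on the bundled `clust3` data, assuming ADPclust is installed:

```r
library(ADPclust)
data(clust3)

d1 <- FindDistm(clust3, normalize = TRUE)
d2 <- dist(scale(clust3, center = FALSE, scale = apply(clust3, 2, sd)),
           method = "euclidean", upper = TRUE)

all.equal(as.vector(d1), as.vector(d2))   # expected TRUE
```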
##' Calculate f(x) and delta(x) from distm and h. ##' ##' @title Find f and delta from distance matrix. ##' @param distm distance matrix of class 'dist'. ##' @param h bandwidth. ##' @param fdelta character string that specifies the method used to estimate local density f(x) at each data point x. The default is "mnorm" that uses a multivariate Gaussian density estimation to calculate f. Other options are listed below. Here 'distm' denotes the distance matrix. ##' \itemize{ ##' \item{unorm}{(f <- 1/(h * sqrt(2 * pi)) * rowSums(exp(-(distm/h)^2/2))); Univariate Gaussian smoother} ##' \item{weighted}{(f <- rowSums(exp(-(distm/h)^2))); Univariate weighted smoother} ##' \item{count}{(f <- rowSums(distm < h) - 1); Histogram estimator (used in Rodriguez [2014])} ##' } ##' @export ##' @return list of two items: f and delta. FindFD <- function(distm, h, fdelta){ # ------------------------------------------------------------------------- # Check arguments # ------------------------------------------------------------------------- if(!inherits(distm, 'dist')) stop("arg distm must inherit dist class. Got ", class(distm)) n <- attr(distm, 'Size') ## Find f(x) distm <- as.matrix(distm) if(fdelta == "unorm"){ f <- 1/(h * sqrt(2 * pi)) * rowSums(exp(-(distm/h)^2/2)) }else if(fdelta == "weighted"){ f <- rowSums(exp(-(distm/h)^2)) }else if(fdelta == "count"){ f <- rowSums(distm < h) - 1 }else if(fdelta == "mnorm"){ f <- rowSums(exp(-(distm / h) ^ 2 / 2)) }else{ stop("Wrong fdelta, try 'unorm', 'weighted', 'count' or 'mnorm' (recommended).") } # ------------------------------------------------------------------------- # Find f and delta # ------------------------------------------------------------------------- if(fdelta == "count"){ f1 <- rank(f, ties.method = "first") # Break ties in f delta <- apply(distm / outer(f1, f1, FUN = ">"), 2, min, na.rm = TRUE) loc.max <- which.max(delta) delta[loc.max] <- max(delta[-loc.max]) # Equation in the Matlab code }else if(fdelta == "mnorm"){ f.order <- order(f, decreasing = TRUE) delta <- rep(NA, n) delta[f.order[1]] <- Inf for(i in 2:length(f.order)){ delta[f.order[i]] <- min(distm[f.order[i], f.order[1:(i - 1)]]) } delta[f.order[1]] <- max(delta[-f.order[1]]) }else{ delta <- apply(distm / outer(f, f, FUN = ">"), 2, min, na.rm = TRUE) loc.max <- which.max(delta) delta[loc.max] <- max(delta[-loc.max]) # Equation in the Matlab code } return(list(f = f, delta = delta)) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/FindFD.R
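A small end-to-end sketch of the density/isolation computation: build the normalized distance matrix, pick the AMISE reference bandwidth, and compute f and delta with the default "mnorm" estimator. This is illustrative only and assumes ADPclust is installed (`FindDistm()`, `AMISE()` and `FindFD()` are exported):

```r
library(ADPclust)
data(clust3)

distm <- FindDistm(clust3, normalize = TRUE)
h     <- AMISE(ncol(clust3), nrow(clust3))   # AMISE reference bandwidth
fd    <- FindFD(distm, h, "mnorm")

head(fd$f)       # local densities
head(fd$delta)   # distances to the nearest higher-density point
# Points with both large f and large delta are centroid candidates.
```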
##' Find bandwidth h from the number of observations n and the dimension p. ##' ##' @title Find bandwidth h. ##' @param p dimension of data. The number of variables. ##' @param n the number of observations. ##' @param htype methods to calculate h. The valid options are (case insensitive) "amise" or "rot". ##' @return bandwidth h. FindH <- function(p, n, htype){ if(!is.numeric(p)) stop('arg p must be numeric') if(length(p) != 1) stop('arg p must be scalar') if(!is.numeric(n)) stop('arg n must be numeric') if(length(n) != 1) stop('arg n must be scalar') if(!inherits(htype, 'character')) stop('arg htype not character') if(tolower(htype) == "amise"){ return(AMISE(p, n)) }else if(tolower(htype) == "rot"){ return(ROT(p, n)) }else{ stop("htype can only take two options 'amise' or 'rot'") } }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/FindH.R
FindSilhouette <- function(distm, clusters){ if(!inherits(distm, 'dist')) stop("arg distm must inherit dist class. Got: ", class(distm)) if(!all(clusters == floor(clusters))) stop('arg clusters must all be integers') if(length(clusters) != attr(distm, 'Size')) stop('length of clusters', length(clusters), 'not equal to number of observations in distm', attr(distm, 'Size')) ans <- mean(cluster::silhouette(x = clusters, dist = distm)[,3]) return(ans) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/FindSilhouette.R
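`FindSilhouette()` is simply the mean silhouette width of a clustering with respect to the distance matrix. A toy sketch of the same quantity computed directly with `cluster::silhouette()`:

```r
library(cluster)

set.seed(4)
x <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),
           matrix(rnorm(20, mean = 5), ncol = 2))
distm    <- dist(x)
clusters <- rep(1:2, each = 10)

# Mean silhouette width; large for well-separated clusters
mean(silhouette(x = clusters, dist = distm)[, 3])
```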
IsDup <- function(x.list, y){ if(!inherits(x.list, "list")) stop("Expecting a list") dup <- FALSE for(x in x.list){ if(setequal(x, y)){ dup <- TRUE break } } return(dup) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/IsDup.R
## This function is taken and slightly modified from the example of ?identify. ## It uses identify to select points, and paint the selected points. PickCenter <- function(x, y = NULL, n = length(x), pch = 19, col = "red", cex = 1.2, labels = seq_along(x), labelcex = 1, ...) { xy <- grDevices::xy.coords(x, y); x <- xy$x; y <- xy$y sel <- rep(FALSE, length(x)); res <- integer(0) N <- 1 while(sum(sel) < n) { ans <- graphics::identify(x[!sel], y[!sel], n = 1, plot = FALSE, ...) if(!length(ans)) break ans <- which(!sel)[ans] graphics::points(x[ans], y[ans], pch = pch, cex = cex, col = col[(N - 1) %% length(col) + 1]) graphics::text(x[ans], y[ans], labels = ans, pos = 1, cex = labelcex) sel[ans] <- TRUE res <- c(res, ans) N <- N + 1 } if(sum(sel) > length(col)) warning(paste0("Number of clusters (",sum(sel), ") > number of colors(",length(col), "). Colors recycled.")) invisible(res) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/PickCenters.R
##' Calculate the ROT bandwidth either from a data frame, or from p and n.
##'
##' IMPORTANT NOTE: The standard deviation of each variable is omitted in this formula.
##'
##' @title Calculate ROT bandwidth
##'
##' @export
##'
##' @param x the number of variables (if y is given), or a data frame or a matrix (if y is missing).
##' @param y the number of observations. If y is missing, x should be the data matrix.
##' @return ROT bandwidth.
ROT <- function(x, y = NULL){
    if(is.null(y)){
        if(inherits(x, c("data.frame", "matrix"))){
            n <- nrow(x)
            p <- ncol(x)
        }else{
            stop("y is missing, so x must be a data frame or a matrix.")
        }
    }else{
        if(is.numeric(x) && x > 0 && is.numeric(y) && y > 0){
            p = x
            n = y
        }else{
            stop("Wrong x, y. Both must be positive numbers.")
        }
    }
    h <- n ^ (-1 / (p + 4)) ## Scott's ROT
    return(h)
}
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/ROT.R
#' 1000 5-dimensional data points that form ten clusters #' #' Generated from the genRandomClust() function of the "clusterGeneration" package with separation value 0.2. #' #' @docType data #' @keywords datasets #' @format data frame #' @name clust10 NULL
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/clust10.R
#' 90 2-dimensional data points that form three clusters #' #' Randomly generated from three normal distributions. #' #' @docType data #' @keywords datasets #' @format data frame #' @name clust3 NULL
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/clust3.R
#' 500 5-dimensional data points that form five clusters #' #' Generated from the genRandomClust() function of the "clusterGeneration" package with separation value 0.01 (tightly clustered). #' #' @docType data #' @keywords datasets #' @format data frame #' @name clust5.1 NULL
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/clust5.1.R
#' 500 5-dimensional data points that form five clusters #' #' 500 5-dim points in 5 clusters. Generated from the genRandomClust() function of the "clusterGeneration" package with separation value 0.1. #' #' @docType data #' @keywords datasets #' @format data frame #' @name clust5 NULL
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/clust5.R
#' 243-dimensional gene expression data of 38 patients (243 genes) #' #' 38 by 243 matrix. Each row represents a patient. Each column represents a gene. #' #' @docType data #' @keywords datasets #' @format matrix #' @name dat_gene NULL
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/dat_gene.R
##' Returns 10 default colors ##' ##' @title Default colors ##' @return vector of colors defCol <- function(){ mycols <- c("#E41A1C", "#377EB8", "#4DAF4A", "#984EA3", "#FF7F00", "#FFFF33", "#A65628", "#F781BF", "#999999", "blue") return(mycols) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/defCol.R
##' @importFrom graphics abline axis par plot text points segments NULL ##' @importFrom stats complete.cases NULL ##' Plot the f vs. delta plot with selected centroids. ##' ##' @title Visualize the result of adpclust() ##' @param x an object of class "adpclust". Result of adpclust(). ##' @param cols vector of colors used to distinguish different clusters. Recycled if necessary. ##' @param to.plot string vector that indicate which plot(s) to show. The two options are 'cluster.sil' (nclust vs. silhouette) and 'fd' (f vs. delta). ##' @param ... Not used. ##' @return NULL ##' ##' @export ##' ##' @examples ##' ## Load a data set with 3 clusters ##' data(clust3) ##' ## Automatically select cluster centroids ##' ans <- adpclust(clust3, centroids = "auto") ##' plot(ans) ##' plot(ans, to.plot = "fd") ##' plot(ans, to.plot = "cluster.sil") ##' plot(ans, to.plot = c("cluster.sil", "fd")) #Default plot.adpclust <- function(x, cols = "default", to.plot = c("cluster.sil", "fd"), ...) { nclusters <- sils <- NULL # Null out to remove "no visible binding for global variable" note from R check. if(!inherits(x, 'adpclust')) stop('arg x must inherit adpclust. Got ', class(x)) if(cols == "default") cols = defCol() if(!all(to.plot %in% c("cluster.sil", "fd"))) stop('to.plot must be "cluster.sil" and/or "fd".') # Recycle colors if((temp <- ceiling(x$nclust / length(cols))) > 1) cols <- rep(cols, temp)[1:x$nclust] f <- x[['f']] delta <- x[['delta']] centers <- x[['centers']] par(mfrow = c(1, length(unique(to.plot)))) ##-------------------- ## nclust vs. silouette ##-------------------- if("cluster.sil" %in% to.plot){ tried <- data.frame(nclusters = NA, sils = NA) for(i in 1:length(x$tested)){ tried[i, 'nclusters'] <- x$tested[[i]][['nclust']] tried[i, 'sils'] <- x$tested[[i]][['sil']] } tried <- tried[complete.cases(tried), ] tried <- dplyr::group_by(tried, nclusters) tried <- dplyr::summarize(tried, best.sil = max(sils)) plot(tried, type = "b", xlab = "number of clusters", ylab = "silhouette", xaxt = "n", main = "# cluster vs silhouette") abline(v = x$nclust, col = "red", lty = 2) axis(1, at = tried$nclusters, labels = tried$nclusters) } ##-------------------- ## f vs delta ##-------------------- if("fd" %in% to.plot){ plot(f, delta, xlab = "f(x)", ylab = "delta(x)", main = "f(x) vs delta(x) \n chosen centers") f.range <- range(f) delta.range <- range(delta) points(f[centers], delta[centers], col = cols, pch = 19, cex = 1.2) text(f[centers], delta[centers], labels = centers, cex = 0.6, pos = 1) if(x[['selection.type']] == 'auto'){ if(length(attr(centers, 'f.cut')) > 0){ abline(v = attr(centers, 'f.cut.value'), col = "red", lty = 2) }else{ segments(f.range[1], delta.range[2], f.range[2], delta.range[1], col = "red", lty = 2) } } } }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/plot.adpclust.R
##' Summarizes the result from the adpclust() function. ##' ##' @title Summary of adpclust ##' @param object object of class "adpclust" that is returned from adpclust(). ##' @param ... other arguments. NOT used. ##' @return NULL ##' @export summary.adpclust <- function(object, ...){ cat("-- ADPclust Result -- \n\n") cat("Number of obs.: \t", length(object[['clusters']]), "\n") cat("Centroids selection: \t", object[['selection.type']], "\n") cat("Number of clusters: \t", length(object[['centers']]), "\n") cat("Avg. Silhouette: \t", object[['silhouette']], "\n") cat("Elements in result: \t", paste0(names(object)), sep = " $") cat("\nf(x): \n") print(summary(object[['f']])) cat("\ndelta(x): \n") print(summary(object[['delta']])) invisible(NULL) }
/scratch/gouwar.j/cran-all/cranData/ADPclust/R/summary.adpclust.R
## ---- eval=FALSE--------------------------------------------------------- # install.packages("ADPclust", repos = "http://cran.us.r-project.org") ## ------------------------------------------------------------------------ library(ADPclust) ## ---- fig.height=5, fig.width=9------------------------------------------ # Load a simple simulated data set with 3 clusters. data(clust3) ans <- adpclust(clust3) # Above is equivalent to # ans <- adpclust(clust3, centroids = "auto") plot(ans) ## ------------------------------------------------------------------------ summary(ans) ## ---- eval=FALSE--------------------------------------------------------- # # A simple wrapper of dist() with normalization # distm <- FindDistm(clust3, normalize = TRUE) # ans.distm <- adpclust(distm = distm, p = 2) ## ---- eval=FALSE--------------------------------------------------------- # # Result is similar. Not shown. # ans <- adpclust(clust3, htype = "ROT") ## ---- eval=FALSE--------------------------------------------------------- # # Setting a single h. Result not shown. # ans <- adpclust(clust3, h = 10) # # Setting a vector of testing h's. Result not shown. # ans <- adpclust(clust3, h = c(10, 12, 18)) # # Setting h to the 'ROT' bandwidth. result not shown. # ans <- adpclust(clust3, h = ROT(clust3)) ## ---- fig.height=5, fig.width=5------------------------------------------ # Setting different testing cluster numbers ans <- adpclust(clust3, nclust = 2:15) # Specifying one cluster number. ans <- adpclust(clust3, nclust = 3) plot(ans, to.plot = "fd") ## ---- fig.height=5, fig.width=9------------------------------------------ # Load a data set with 10 clusters data(clust10) ans <- adpclust(clust10, f.cut = 0.1, nclust = 5:13, h = ROT(clust10)) plot(ans) ## ---- fig.height=5, fig.width=9------------------------------------------ ans <- adpclust(clust10, f.cut = 0.95, nclust = 5:13, h = ROT(clust10)) plot(ans) ## ---- eval = FALSE------------------------------------------------------- # data(clust5.1) # ans <- adpclust(clust5.1, centroids = "user")
/scratch/gouwar.j/cran-all/cranData/ADPclust/inst/doc/ADPclust.R
--- title: "Fast Clustering Using Adaptive Density Peak Detection (ADPclust)" author: "Yifan (Ethan) Xu ([email protected])\nXiao-Feng Wang ([email protected])" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{ADPclust-vignette} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- This page provides installation instruction and usage examples for the R package **ADPclust**. Please see the paper for details of the procedure: * Wang, Xiao-Feng, and Xu, Yifan. "Fast clustering using adaptive density peak detection." _Statistical methods in medical research (2015) doi\:10.1177/0962280215609948_ Most recent developments are in [this GitHub repo](https://github.com/ethanyxu/ADPclust). ## Introduction ADPclust is a non-iterative procedure that finds the number of clusters and cluster assignments of large amount of high dimensional data by identifying cluster centroids from estimated local densities. The procedure is built upon the work by Rodriguez [2014]. ADPclust automatically identifies cluster centroids from a projected two dimensional decision plot that separates cluster centroids from the rest of the points. This decision plot is generated from the local density $f(\mathbf{x})$ and an "isolation" score $\delta(\mathbf{x})$ for each data point $\mathbf{x}$. For a data set $\{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$ where each $\mathbf{x}_i$ is a d dimensional vector, ADPclust first estimates the local multivariate Gaussian density $f(\mathbf{x}_i), i=1,\ldots,n$ by $$\hat{f}(\mathbf{x}_i; h_1,...h_d) = n^{-1} \left(\prod_{l=1}^d h_l \right)^{-1} \cdot \sum_{j=1}^n K\left(\frac{x_{i1} - x_{j1}}{h_1}, ..., \frac{x_{id} - x_{jd}}{h_d}\right). $$ where $h_1,...,h_d$ are bandwidths at each dimension. Two default $h$ values are provided in ADPclust: 1. *rule-of-thumb (ROT)* bandwidth by Scott [2002] 2. *asymptotic mean integrated squared error (AMISE)* bandwidth by Wand [1994]. Other bandwidths can also be specified if the default values don't give satisfactory results. Given density estimation $\hat{f}(\mathbf{x}_i), i = 1,...,n$, the "isolation" indices $\delta(\mathbf{x}_i)'s$ are found by: $$\hat{\delta}(\mathbf{x}_i) = \min_{j:\hat{f}(\mathbf{x}_i) < \hat{f}(\mathbf{x}_j)}{d(\mathbf{x}_i,\mathbf{x}_j)}.$$ where $d(\mathbf{x}_i,\mathbf{x}_j)$ is the distance between $\mathbf{x}_i$ and $\mathbf{x}_j$. The scatter plot of $(\hat{f}(\mathbf{x}_i), \hat{\delta}(\mathbf{x}_i)), i = 1,...,n$ is called a decision plot, from which $k$ centroids are selected automatically or manually from the upper-right corner, and all other points are clustered according to their distances to the closest centroid. The average silhouette score is calculated after clusters are assigned, and is used to chose the best number of clusters among a sequence of testing $k$'s. ## Installation Run the following line to install the package. ```{r, eval=FALSE} install.packages("ADPclust", repos = "http://cran.us.r-project.org") ``` Run the following line to load the package. ```{r} library(ADPclust) ``` ## Example 1: Automatic centroids selection in ADPclust ### Default settings The automatic centroids selection by ADPclust finds the best bandwidth $h$ and number of clusters $k$ from a grid of $(h,k)$ pairs. By default, the testing $h's$ are 10 values evenly spread in the interval $[1/3h_0, 3h_0]$, where $h_0$ is the Wand's asymptotic mean integrated squared error bandwidth (AMISE). The default testing cluster numbers are $k = 2,\ldots,10$. 
Here is a simple example: ```{r, fig.height=5, fig.width=9} # Load a simple simulated data set with 3 clusters. data(clust3) ans <- adpclust(clust3) # Above is equivalent to # ans <- adpclust(clust3, centroids = "auto") plot(ans) ``` The output of ADPclust `ans` is an object of class `adpclust` associated with a `summary` and a `plot` method. `plot(ans)` produces a figure similar to the one shown above. `summary(ans)` gives a fitting summary: ```{r} summary(ans) ``` ### Input Distance Matrix Instead of Raw Data The input can be a distance matrix of class `dist` instead of the raw data frame. Note that if `centroid = "auto"` (default), then the dimension `p` must be provided to calculate `h`. ```{r, eval=FALSE} # A simple wrapper of dist() with normalization distm <- FindDistm(clust3, normalize = TRUE) ans.distm <- adpclust(distm = distm, p = 2) ``` ### Change the Bandwidth h The reference bandwidth $h_0$ can be changed to Scott's rule-of-thumb (ROT) value by setting `htype = "ROT"`: ```{r, eval=FALSE} # Result is similar. Not shown. ans <- adpclust(clust3, htype = "ROT") ``` Passing a specific value to the optional argument `h` specifies the bandwidth and suppresses `htype`. If a numeric vector is passed to `h` then every entry of it is tested to find the one given the best clustering result, according to average silhouette. Note `h` is a relative value so manually setting its value often requires trial and error. ```{r, eval=FALSE} # Setting a single h. Result not shown. ans <- adpclust(clust3, h = 10) # Setting a vector of testing h's. Result not shown. ans <- adpclust(clust3, h = c(10, 12, 18)) # Setting h to the 'ROT' bandwidth. result not shown. ans <- adpclust(clust3, h = ROT(clust3)) ``` ### Change the Number of Clusters to Test The number of (testing) cluster(s) can be set by the `nclust` argument. ```{r, fig.height=5, fig.width=5} # Setting different testing cluster numbers ans <- adpclust(clust3, nclust = 2:15) # Specifying one cluster number. ans <- adpclust(clust3, nclust = 3) plot(ans, to.plot = "fd") ``` ### Change f Cutoff in Auto Selection Another important argument is `f.cut`, denoting the cutoff value of $f's$ (red dotted line in the middle figure) for centroid/outlier discrimination. Points to the right of the line with high $delta's$ are potential cluster centroids. Points to the left of it with high $delta's$ are potential outliers. `f.cut` is the percentile value of $f$ with default value at 10%. ```{r, fig.height=5, fig.width=9} # Load a data set with 10 clusters data(clust10) ans <- adpclust(clust10, f.cut = 0.1, nclust = 5:13, h = ROT(clust10)) plot(ans) ``` Setting `f.cut` to different values could result in different cluster assignment. In the following case `f.cut` is obviously set too high: ```{r, fig.height=5, fig.width=9} ans <- adpclust(clust10, f.cut = 0.95, nclust = 5:13, h = ROT(clust10)) plot(ans) ``` ## Example 2: User interactive centroids selection in ADPclust ADPclust also allow user to interactively select cluster centroids from the $(f(x), \delta(x))$ decision scatter plot. After running the following line, the first figure below is displayed, on which you can click arbitrary number of centroids, then hit "ESC" to end selection. The right figure then shows the corresponding clustering result. ```{r, eval = FALSE} data(clust5.1) ans <- adpclust(clust5.1, centroids = "user") ``` <img src="./manual.png", height="300px" width="550px" />
/scratch/gouwar.j/cran-all/cranData/ADPclust/inst/doc/ADPclust.Rmd
--- title: "Fast Clustering Using Adaptive Density Peak Detection (ADPclust)" author: "Yifan (Ethan) Xu ([email protected])\nXiao-Feng Wang ([email protected])" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{ADPclust-vignette} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- This page provides installation instruction and usage examples for the R package **ADPclust**. Please see the paper for details of the procedure: * Wang, Xiao-Feng, and Xu, Yifan. "Fast clustering using adaptive density peak detection." _Statistical methods in medical research (2015) doi\:10.1177/0962280215609948_ Most recent developments are in [this GitHub repo](https://github.com/ethanyxu/ADPclust). ## Introduction ADPclust is a non-iterative procedure that finds the number of clusters and cluster assignments of large amount of high dimensional data by identifying cluster centroids from estimated local densities. The procedure is built upon the work by Rodriguez [2014]. ADPclust automatically identifies cluster centroids from a projected two dimensional decision plot that separates cluster centroids from the rest of the points. This decision plot is generated from the local density $f(\mathbf{x})$ and an "isolation" score $\delta(\mathbf{x})$ for each data point $\mathbf{x}$. For a data set $\{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$ where each $\mathbf{x}_i$ is a d dimensional vector, ADPclust first estimates the local multivariate Gaussian density $f(\mathbf{x}_i), i=1,\ldots,n$ by $$\hat{f}(\mathbf{x}_i; h_1,...h_d) = n^{-1} \left(\prod_{l=1}^d h_l \right)^{-1} \cdot \sum_{j=1}^n K\left(\frac{x_{i1} - x_{j1}}{h_1}, ..., \frac{x_{id} - x_{jd}}{h_d}\right). $$ where $h_1,...,h_d$ are bandwidths at each dimension. Two default $h$ values are provided in ADPclust: 1. *rule-of-thumb (ROT)* bandwidth by Scott [2002] 2. *asymptotic mean integrated squared error (AMISE)* bandwidth by Wand [1994]. Other bandwidths can also be specified if the default values don't give satisfactory results. Given density estimation $\hat{f}(\mathbf{x}_i), i = 1,...,n$, the "isolation" indices $\delta(\mathbf{x}_i)'s$ are found by: $$\hat{\delta}(\mathbf{x}_i) = \min_{j:\hat{f}(\mathbf{x}_i) < \hat{f}(\mathbf{x}_j)}{d(\mathbf{x}_i,\mathbf{x}_j)}.$$ where $d(\mathbf{x}_i,\mathbf{x}_j)$ is the distance between $\mathbf{x}_i$ and $\mathbf{x}_j$. The scatter plot of $(\hat{f}(\mathbf{x}_i), \hat{\delta}(\mathbf{x}_i)), i = 1,...,n$ is called a decision plot, from which $k$ centroids are selected automatically or manually from the upper-right corner, and all other points are clustered according to their distances to the closest centroid. The average silhouette score is calculated after clusters are assigned, and is used to chose the best number of clusters among a sequence of testing $k$'s. ## Installation Run the following line to install the package. ```{r, eval=FALSE} install.packages("ADPclust", repos = "http://cran.us.r-project.org") ``` Run the following line to load the package. ```{r} library(ADPclust) ``` ## Example 1: Automatic centroids selection in ADPclust ### Default settings The automatic centroids selection by ADPclust finds the best bandwidth $h$ and number of clusters $k$ from a grid of $(h,k)$ pairs. By default, the testing $h's$ are 10 values evenly spread in the interval $[1/3h_0, 3h_0]$, where $h_0$ is the Wand's asymptotic mean integrated squared error bandwidth (AMISE). The default testing cluster numbers are $k = 2,\ldots,10$. 
Here is a simple example: ```{r, fig.height=5, fig.width=9} # Load a simple simulated data set with 3 clusters. data(clust3) ans <- adpclust(clust3) # Above is equivalent to # ans <- adpclust(clust3, centroids = "auto") plot(ans) ``` The output of ADPclust `ans` is an object of class `adpclust` associated with a `summary` and a `plot` method. `plot(ans)` produces a figure similar to the one shown above. `summary(ans)` gives a fitting summary: ```{r} summary(ans) ``` ### Input Distance Matrix Instead of Raw Data The input can be a distance matrix of class `dist` instead of the raw data frame. Note that if `centroid = "auto"` (default), then the dimension `p` must be provided to calculate `h`. ```{r, eval=FALSE} # A simple wrapper of dist() with normalization distm <- FindDistm(clust3, normalize = TRUE) ans.distm <- adpclust(distm = distm, p = 2) ``` ### Change the Bandwidth h The reference bandwidth $h_0$ can be changed to Scott's rule-of-thumb (ROT) value by setting `htype = "ROT"`: ```{r, eval=FALSE} # Result is similar. Not shown. ans <- adpclust(clust3, htype = "ROT") ``` Passing a specific value to the optional argument `h` specifies the bandwidth and suppresses `htype`. If a numeric vector is passed to `h` then every entry of it is tested to find the one given the best clustering result, according to average silhouette. Note `h` is a relative value so manually setting its value often requires trial and error. ```{r, eval=FALSE} # Setting a single h. Result not shown. ans <- adpclust(clust3, h = 10) # Setting a vector of testing h's. Result not shown. ans <- adpclust(clust3, h = c(10, 12, 18)) # Setting h to the 'ROT' bandwidth. result not shown. ans <- adpclust(clust3, h = ROT(clust3)) ``` ### Change the Number of Clusters to Test The number of (testing) cluster(s) can be set by the `nclust` argument. ```{r, fig.height=5, fig.width=5} # Setting different testing cluster numbers ans <- adpclust(clust3, nclust = 2:15) # Specifying one cluster number. ans <- adpclust(clust3, nclust = 3) plot(ans, to.plot = "fd") ``` ### Change f Cutoff in Auto Selection Another important argument is `f.cut`, denoting the cutoff value of $f's$ (red dotted line in the middle figure) for centroid/outlier discrimination. Points to the right of the line with high $delta's$ are potential cluster centroids. Points to the left of it with high $delta's$ are potential outliers. `f.cut` is the percentile value of $f$ with default value at 10%. ```{r, fig.height=5, fig.width=9} # Load a data set with 10 clusters data(clust10) ans <- adpclust(clust10, f.cut = 0.1, nclust = 5:13, h = ROT(clust10)) plot(ans) ``` Setting `f.cut` to different values could result in different cluster assignment. In the following case `f.cut` is obviously set too high: ```{r, fig.height=5, fig.width=9} ans <- adpclust(clust10, f.cut = 0.95, nclust = 5:13, h = ROT(clust10)) plot(ans) ``` ## Example 2: User interactive centroids selection in ADPclust ADPclust also allow user to interactively select cluster centroids from the $(f(x), \delta(x))$ decision scatter plot. After running the following line, the first figure below is displayed, on which you can click arbitrary number of centroids, then hit "ESC" to end selection. The right figure then shows the corresponding clustering result. ```{r, eval = FALSE} data(clust5.1) ans <- adpclust(clust5.1, centroids = "user") ``` <img src="./manual.png", height="300px" width="550px" />
/scratch/gouwar.j/cran-all/cranData/ADPclust/vignettes/ADPclust.Rmd
# Analyze the frequency content of a series' autocorrelation function: compute
# the sample ACF, take its discrete Fourier transform, and return the modulus
# of the transform on a grid of frequencies in [0, max_frequency].
Analyze_Fre_Acf <- function(ts, max_frequency = 0.5) {
  # as.vector() drops the (lag x 1 x 1) array dimensions returned by acf()
  acf_values <- as.vector(acf(ts, plot = FALSE)$acf)
  fft_values <- fft(acf_values)
  frequencies <- seq(0, max_frequency, length.out = length(fft_values))
  return(data.frame(frequency = frequencies, fft_value = Mod(fft_values)))
}
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/Analyze_Fre_Acf.R
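A toy sketch of the same computation using base R only (again dropping the singleton array dimensions that `acf()` returns), applied to a noisy sinusoid with period 12:

```r
set.seed(5)
y <- sin(2 * pi * (1:200) / 12) + rnorm(200, sd = 0.3)

acf_values <- as.vector(acf(y, plot = FALSE)$acf)
fft_values <- fft(acf_values)
spec <- data.frame(frequency = seq(0, 0.5, length.out = length(fft_values)),
                   fft_value  = Mod(fft_values))

head(spec)
# plot(spec$frequency, spec$fft_value, type = "h")  # periodic ACF -> peaked spectrum
```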
Astimate_Acf_Band <- function(ts, confidence_level = 0.95) {
  n <- length(ts)
  acf_values <- acf(ts, plot = FALSE)$acf

  # Large-sample standard error of the sample autocorrelations
  se_acf <- sqrt((1 + 2*sum(acf_values^2)) / n)

  # Confidence intervals for autocorrelation
  ci_upper <- acf_values + qnorm(confidence_level)*se_acf
  ci_lower <- acf_values - qnorm(confidence_level)*se_acf

  # Calculate bandwidth: the first lag whose confidence interval contains zero,
  # i.e. where the autocorrelation is no longer significantly different from zero
  bandwidth <- which(ci_lower <= 0 & ci_upper >= 0)

  if (length(bandwidth) > 0) {
    return(min(bandwidth))
  } else {
    return(NA) # If no confidence interval contains zero
  }
}
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/Astimate_Acf_Band.R
Cal_Cross_Corr <- function(ts1, ts2, max_lag) { # Calculation of cross-correlation cc_result <- ccf(ts1, ts2, lag.max = max_lag, plot = FALSE) # Extracting the results lags <- cc_result$lag correlations <- cc_result$acf # Return the results as a data frame return(data.frame(lag = lags, correlation = correlations)) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/Cal_Cross_Corr.R
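`Cal_Cross_Corr()` is a thin wrapper around `ccf()`. A base-R sketch of the idea: cross-correlating a series with a copy of itself delayed by 3 steps puts the largest correlation at |lag| = 3 (the sign of the lag follows `ccf()`'s convention of comparing `x[t+k]` with `y[t]`):

```r
set.seed(6)
x <- rnorm(200)
y <- c(rep(0, 3), x[1:197])          # y is x delayed by 3 time steps

cc  <- ccf(x, y, lag.max = 10, plot = FALSE)
out <- data.frame(lag = as.vector(cc$lag), correlation = as.vector(cc$acf))

out[which.max(abs(out$correlation)), ]   # peak at |lag| = 3
```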
Der_Lev_Pac <- function(x){ lgmx <- length(x) phi_value <- rep(NA,lgmx) phi_value[1] <- x[1] phi_value.lk <- vector("list",lgmx+1) phi_value.lk[[1]] <- 0 phi_value.lk[[2]] <- as.vector(x[1]) for(l in 2:lgmx){ phi_value.lk[[l+1]] <- rep(NA,l) if(l>2){ for(k in 1:(l-2)){ phi_value.lk[[l]][k] <- phi_value.lk[[l-1]][k] - phi_value.lk[[l]][l-1]*phi_value.lk[[l-1]][l-1-k] } } numer <- x[l] - phi_value.lk[[l]][1:(l-1)]%*%x[(l-1):1] denom <- 1-phi_value.lk[[l]][1:(l-1)]%*%x[1:(l-1)] phi_value.lk[[l+1]][l] <- numer / denom phi_value[l] <- numer/denom } return(phi_value) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/Der_Lev_Pac.R
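`Der_Lev_Pac()` implements the Durbin-Levinson recursion that maps autocorrelations at lags 1..L to partial autocorrelations. A sketch, assuming the function defined above is in scope: its output should agree, up to numerical precision, with `acf(..., type = "partial")` applied to the same series.

```r
set.seed(7)
x <- arima.sim(model = list(ar = c(0.6, -0.3)), n = 500)
L <- 10

rho     <- acf(x, lag.max = L, plot = FALSE)$acf[2:(L + 1)]               # lags 1..L
phi     <- Der_Lev_Pac(rho)                                               # PACF via the recursion
phi_ref <- acf(x, lag.max = L, type = "partial", plot = FALSE)$acf[1:L]   # reference PACF

max(abs(phi - phi_ref))   # expected to be numerically negligible
```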
Estimate_Acps <- function(ts, method = "periodogram") { # Calculating autocorrelation acf_values <- acf(ts, plot = FALSE)$acf # Estimation of power spectrum based on periodogram method spectrum <- spectrum(acf_values, plot = FALSE) return(spectrum$spec) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/Estimate_Acps.R
JN_ACSDM <- function(ts,lgmx){ l_ts<-length(ts) ahat<-vector('list',2) names(ahat) = c('acf','pacf') acf_y<-matrix(NA,nrow=l_ts,ncol=lgmx) pacf_y<-matrix(NA,nrow=l_ts,ncol=lgmx) for(t in 1:l_ts){ ts.j<-ts[-t] acf_y[t,]<-acf(ts.j,lag.max=lgmx,plot=F, na.action = na.pass)$acf[2:(lgmx+1)] pacf_y[t,]<-acf(ts.j,lag.max=lgmx,plot=F, na.action = na.pass,type = 'partial')$acf[1:lgmx] } ahat$acf = apply(acf_y,2,get.ahat) ahat$pacf = apply(pacf_y,2,get.ahat) return(ahat) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/JN_ACSDM.R
JN_VMBBA <- function(ts,lgmx,bs){ l_ts<- length(ts) ahat<- vector('list',2); names(ahat) = c('acf','pacf') acf_y<- vector('list',lgmx) pacf_y<- vector('list',lgmx) for(i in 1:lgmx){ pair.id = rbind(1:(l_ts-i),(1+i):(l_ts)) upper = l_ts-i-bs+1 tmp = rep(NA,upper) for(j in 1:upper){ sel = j:(j+bs-1) x.j<-ts[pair.id[1,-sel]] y.j<-ts[pair.id[2,-sel]] tmp1 = try(cor(x.j,y.j,use = 'complete.obs'),silent = TRUE) if(inherits(tmp1,'try-error')==0 && is.na(tmp1)==0){ if((round(tmp1,3) > -1) && (round(tmp1,3) < 1)){ tmp[j]<-tmp1 } } } acf_y[[i]] = tmp pacf_y[[i]] = rep(NA,upper) } for(t in 1:(l_ts-bs)){ v.acf = unlist(lapply(acf_y,function(x,i) x[i], i = t)) tmp = Der_Lev_Pac(v.acf) if(length(na.omit(tmp))>0){ for(i in 1:length(tmp)){ pacf_y[[i]][t] = tmp[i] } } } ahat$acf = unlist(lapply(acf_y,get.ahat)) ahat$pacf = unlist(lapply(pacf_y,get.ahat)) return(ahat) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/JN_VMBBA.R
MB_Ac <- function(pair_mat,ts){ lgmx = length(pair_mat) lcorr= rep(NA,lgmx) l_ts = length(ts) for(i in 1:lgmx){ x = ts[pair_mat[[i]][1,]] y = ts[pair_mat[[i]][2,]] tmp = try(cor(x,y,use = 'complete.obs'),silent=TRUE) if(inherits(tmp,'try-error')==0 && is.na(tmp)==0){ if((round(tmp,3) > -1) && (round(tmp,3) < 1)){ lcorr[i] <- tmp } } } return(lcorr) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/MB_Ac.R
P_CI <- function(e.b,a1,a2){ lagmax = ncol(e.b) CI.l <- as.vector(apply(e.b,2,quantile,a1,na.rm = TRUE)) CI.u <- as.vector(apply(e.b,2,quantile,a2,na.rm = TRUE)) CI.per = cbind(CI.l,CI.u) rownames(CI.per) = paste('lag',1:lagmax,sep='') colnames(CI.per) = c('low','up') return(CI.per) } ###### B_CI <- function(e.b,e,B,ahat,a1,a2){ lagmax = ncol(e.b) num <- apply(rbind(e,e.b),2,function(x) sum(x[2:(B+1)] < x[1],na.rm=TRUE)) B.na = apply(e.b,2,function(x) length(na.omit(x))) z0 <- qnorm(num/B.na) if(sum(num == B.na)>0){z0[num==B.na]=1000} if(min(num) == 0){z0[num==0]=-1000} zlow <- qnorm(a1) qlow <- z0 + (z0+zlow)/(1-ahat*(z0+zlow)) plow <- matrix(pnorm(qlow),nrow = 1) zup <- qnorm(a2) qup <- z0 + (z0+zup)/(1-ahat*(z0+zup)) pup <- matrix(pnorm(qup),nrow = 1) BCa.l <- as.vector(apply(rbind(plow,e.b),2, function(x) quantile(x[2:(B+1)],prob=x[1],na.rm = TRUE))) BCa.u <- as.vector(apply(rbind(pup,e.b),2, function(x) quantile(x[2:(B+1)],prob=x[1],na.rm = TRUE))) CI.BCa <-cbind(BCa.l,BCa.u) rownames(CI.BCa) = paste('lag',1:lagmax,sep='') colnames(CI.BCa) = c('low','up') return(CI.BCa) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/P_CI.R
Period_ts <- function(ts) { specvalues <- spec.pgram(ts, taper=0, log='no', plot = FALSE) ind <- which.max(specvalues$spec) dd <- specvalues$freq[ind] return(1/dd) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/Period_ts.R
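`Period_ts()` returns the reciprocal of the frequency at which the raw periodogram peaks. A quick base-R sketch on a sinusoid with period 12:

```r
set.seed(8)
y <- sin(2 * pi * (1:240) / 12) + rnorm(240, sd = 0.2)

sp <- spec.pgram(y, taper = 0, log = "no", plot = FALSE)
1 / sp$freq[which.max(sp$spec)]   # approximately 12
```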
Sug_dm <- function(ahat,ts,a1,a2,boot,lgmx){ acf_y<-matrix(NA,nrow=boot,ncol=lgmx) pacf_y<-matrix(NA,nrow=boot,ncol=lgmx) l_ts<-length(ts) for(i in 1:boot){ for(j in 1:50){ ts.b<-ts[sample(1:l_ts,l_ts,replace=FALSE)] tmp.acf<-acf(ts.b,lag.max=lgmx,plot=F, na.action = na.pass)$acf[2:(lgmx+1)] tmp.pacf<-acf(ts.b,lag.max=lgmx,plot=F, na.action = na.pass,type = 'partial')$acf[1:lgmx] if(sum(abs(tmp.pacf>1) + abs(tmp.pacf < (-1)),na.rm = TRUE)==0){break} } acf_y[i,]<- tmp.acf pacf_y[i,]<- tmp.pacf } acf.l <- list(se = apply(acf_y,2,sd,na.rm = TRUE), CI = list(per = P_CI(acf_y,a1,a2), BCa = B_CI(acf_y,rep(0,lgmx),boot,ahat$acf,a1,a2))) pacf.l <- list(se = apply(pacf_y,2,sd,na.rm = TRUE), CI = list(per = P_CI(pacf_y,a1,a2), BCa = B_CI(pacf_y,rep(0,lgmx),boot,ahat$pacf,a1,a2))) res = list(acf= acf.l,pacf = pacf.l) return(res) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/Sug_dm.R
VMBB <- function(acf.est,pacf.est,ahat,ts,bs,a1,a2,boot,lgmx){ acf_y<-matrix(NA,nrow=boot,ncol=lgmx) pacf_y<-matrix(NA,nrow=boot,ncol=lgmx) l_ts<-length(ts) k <- floor(l_ts/bs) num_bl<-l_ts-bs+1 #number of blocks seq_bl<-seq(1,num_bl,by=1) odr<- seq_bl %x% t(rep(1,bs)) add<- rep(1,num_bl) %x% t(0:(bs-1)) odr <- odr + add for(i in 1:boot){ for(j in 1:50){ temp<- as.vector(sample(seq_bl,k+1,replace=TRUE)) odri<- t(odr[temp,]) pair_mat <- pairwise_MBL(odri,lgmx,l_ts) tmp.acf<- MB_Ac(pair_mat,ts) tmp.pacf<- Der_Lev_Pac(tmp.acf) if(sum(abs(tmp.pacf>1) + abs(tmp.pacf < (-1)),na.rm = TRUE)==0){break} } acf_y[i,]<- tmp.acf pacf_y[i,]<- tmp.pacf } acf_l <- list(se = apply(acf_y,2,sd,na.rm = TRUE), CI = list(per = P_CI(acf_y,a1,a2), BCa = B_CI(acf_y,acf.est,boot,ahat$acf,a1,a2))) pacf_l <- list(se = apply(pacf_y,2,sd,na.rm = TRUE), CI = list(per = P_CI(pacf_y,a1,a2), BCa = B_CI(pacf_y,pacf.est,boot,ahat$pacf,a1,a2))) return(list(acf = acf_l,pacf = pacf_l)) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/VMBB.R
# Skewness-based acceleration estimate (a-hat) for BCa bootstrap confidence
# intervals, computed from a vector of replicates x (NAs are dropped).
get.ahat <- function(x){
  x = na.omit(x)
  x = as.vector(x)
  x.c = mean(x,na.rm = TRUE)- x
  x3 =t(x.c)%*%(x.c^2)       # sum of cubed deviations from the mean
  sd.x = sqrt(t(x.c)%*%x.c)  # square root of the sum of squared deviations
  a = x3/6/(sd.x^3)
  return(a)
}
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/get.ahat.R
pairwise_MBL <- function(mat,lgmx,l_ts){ l = nrow(mat) pair.mat = vector('list',lgmx) for(i in 1:lgmx){ tmp = mat+i tmp2 = rbind(c(mat),c(tmp)) sel = which(tmp2[2,]>l_ts) if(length(sel)>0){tmp2 = tmp2[,-sel]} if(ncol(tmp2)>l_ts){tmp2 = tmp2[,1:l_ts]} pair.mat[[i]] = tmp2 } return(pair.mat) }
/scratch/gouwar.j/cran-all/cranData/ADTSA/R/pairwise_MBL.R
QRS <- function(x, y, Nsims = 100) { re_Order <- function(model_df) { xy.lm <- lm(y ~ ., data = model_df) coefOrder <- order(summary(xy.lm)$coefficients[,4]) return(list(coefOrder = coefOrder, model = xy.lm)) } Beta_Estimation = function(lm.object, coefOrder, Nsims){ y <- model.frame(lm.object)$y X <- model.matrix(lm.object)[, coefOrder] n <- length(y) p <- length(coef(lm.object)) X.qr <- qr(X) Q <- qr.Q(X.qr, complete = TRUE) Q1 <- Q[, (1 : p)] Q2 <- Q[, -(1 : p)] U <- qr.R(X.qr) Uinv<-backsolve(U, diag(p)) uBhat <- t(Q1) %*% y Qres<- t(Q2) %*% y cutoff <- min(abs(quantile(Qres, c(.005, .995)))) indices <- (abs(uBhat) > cutoff) UbHat_keep <- uBhat*indices betaHat <- as.vector(backsolve(U, UbHat_keep)) if (p<n){ # if #feature are smaller than #samples sigma2hat <- sum(Qres^2) sigma2hat <- sigma2hat/(n-p) } eps <- matrix(rnorm(Nsims*n , sd = sqrt(sigma2hat)),nrow =n) temp<-t(Q1)%*%eps tempPlusUbhat <- matrix(uBhat,nrow = p,ncol = Nsims) +temp temp_indices <- abs(tempPlusUbhat)>cutoff Vhat<-tempPlusUbhat*temp_indices Vstar<-cov(t(Vhat)) vUstar_temp<-Uinv%*%Vstar vUstar<-vUstar_temp%*%t(Uinv) SE <- sqrt(diag(vUstar)) SE <- as.vector(SE) SE <- sqrt(diag(solve(t(U)%*%U))*sigma2hat) z <- betaHat/SE pvalue <- 2*(1 - pnorm(abs(z))) mylist = list(coefs = betaHat, SE = SE, z = z, pvalue = pvalue, sigma2 = sigma2hat, modelmatrix = X, rank = sum(abs(betaHat)>1e-16), effects=c(UbHat_keep, Qres), qr = X.qr, df.residuals=n-p) return(mylist) } xy <- data.frame(x[,-1], y) orderOut <- re_Order(xy) cOrder <- orderOut$coefOrder xy.lm <- orderOut$model fit <-Beta_Estimation(xy.lm, cOrder, Nsims) est_coef <- fit$coefs sigma2 <- fit$sigma2 std_error <- fit$SE X <- fit$modelmatrix fitteds <- X%*%matrix(est_coef, ncol=1) residuals <- y - fitteds qrs<- list(coefficients = est_coef, residuals = residuals, effects = fit$effects, rank = fit$rank, fitted.values = fitteds, sigma2 = sigma2, std_error = std_error, df.residual = fit$df.residuals, x = x, y = y, qr = fit$qr, coefOrder=cOrder) class(qrs)<- c("QRS", "lm") return(qrs) }
/scratch/gouwar.j/cran-all/cranData/ADVICE/R/QRS.R
confint.QRS <- function(object, parm, level = .95, ...){
  se <- object$std_error
  betaHat <- object$coefficients
  parnames <- object$names[object$coefOrder]
  parnames[parnames=="y"] <- "Intercept"
  if (missing(parm)) parm <- parnames
  else if (is.numeric(parm)) parm <- parnames[parm]
  betaHat <- setNames(betaHat, parnames)
  se <- setNames(se, parnames)
  n <- nrow(object$y)
  p <- ncol(object$x)
  degf <- object$df.residual
  alpha <- (1-level)/2
  a <- c(alpha, 1 - alpha)
  fac <- qt(a, object$df.residual)
  ci <- array(NA_real_, dim = c(length(parm), 2L), dimnames = list(parm, a))
  ci[] <- betaHat[parm] + se[parm] %o% fac
  ci
}
/scratch/gouwar.j/cran-all/cranData/ADVICE/R/confint.QRS.R
# Hello, world! # # This is an example function named 'hello' # which prints 'Hello, world!'. # # You can learn more about package authoring with RStudio at: # # http://r-pkgs.had.co.nz/ # # Some useful keyboard shortcuts for package authoring: # # Install Package: 'Ctrl + Shift + B' # Check Package: 'Ctrl + Shift + E' # Test Package: 'Ctrl + Shift + T' hello <- function() { print("Hello, world!") }
/scratch/gouwar.j/cran-all/cranData/ADVICE/R/hello.R
ices <- function(formula, data, model = TRUE, x = FALSE, y = FALSE, qr = TRUE) { ret.x <- x ret.y <- y cl <- match.call() mf <- match.call(expand.dots = FALSE) m <- match(c("formula", "data"), names(mf), 0L) mf <- mf[c(1L, m)] mf$drop.unused.levels <- TRUE mf[[1L]] <- quote(stats::model.frame) mf <- eval(mf, parent.frame()) mt <- attr(mf, "terms") y <- model.response(mf, "numeric") x <- model.matrix(mt, mf) z <- QRS(x, y, Nsims = 100) z$call <- cl z$terms <- mt z$names <- names(mf) if (model) z$model <- mf if (ret.x) z$x <- x if (ret.y) z$y <- y if (!qr) z$qr <- NULL class(z)<- c("QRS", "lm") z }
/scratch/gouwar.j/cran-all/cranData/ADVICE/R/ices.R
plot.QRS <- function(x, normqq = FALSE, scaleloc = FALSE, ...){ res <- x$residuals fit <- x$fitted.values if (normqq) { qqnorm(res, ...) qqline(res) } else { if (scaleloc) { sigma <- sqrt(x$sigma2) stres <- res/sigma sqstres <- sqrt(abs(stres)) plot(sqstres ~ fit, xlab = "Fitted values", ylab = expression(sqrt(abs("Standardized residuals"))), ...) lines(lowess(fit, sqstres), col=2) } else { plot(fit, res, type = 'p', main = "Residuals vs Fitted Values", xlab = "Fitted", ylab = "Residual", ...) abline(h = 0, lty = 2) lines(lowess(fit, res), col=2) } } invisible() }
/scratch/gouwar.j/cran-all/cranData/ADVICE/R/plot.QRS.R
rmultreg <- function(n, k = 1, minimum = 0, maximum = 1, p = 0.5, dfnoise = 100, sdnoise = 1) { beta <- runif(n = k+1, min = minimum, max = maximum)*rbinom(n = k+1, size = 1, prob = p) xy <- data.frame(matrix(rnorm(k*n), nrow = n)) names(xy) <- paste("x", 1 : k, sep = "") noise <- rt(n, df = dfnoise)*sdnoise*sqrt((dfnoise - 2)/dfnoise) y <- as.matrix(xy)%*%beta[-1] + beta[1] + noise names(beta) <- c("Intercept", names(xy)) xy$y <- as.vector(y) names(xy)[k+1] <- "y" return(list(data = xy, coefficients = beta)) }
/scratch/gouwar.j/cran-all/cranData/ADVICE/R/rmultreg.R
summary.QRS <- function(object, ...){ cat("Residuals:") cat("\n") res <- quantile(object$residuals, seq(0,1,.25)) names(res)<- c(" Min"," 1Q", " Median", " 3Q", " Max") print(res) cat("\n") cat("Coefficients:") cat("\n") Nrows<- ncol(object$x) coef_qrs <-matrix(0, nrow = Nrows, ncol = 4) colnames(coef_qrs)<- c(" Estimate " , " Std. Error ", " z score ", " Pr(>|z|) ") coefnames <- object$names[object$coefOrder] coefnames[coefnames == "y"] <- "Intercept" rownames(coef_qrs) <- coefnames indices <- 1:Nrows coef_qrs[,1] <- object$coefficients coef_qrs[,2] <- object$std_error coef_qrs[,3] <- object$coefficients/object$std_error abs_z<- abs(coef_qrs[,3]) pr_greaterThan_z <-pnorm(abs_z, lower.tail = FALSE) coef_qrs[,4] <- 2*pr_greaterThan_z print(coef_qrs[indices,]) cat("\n") df<-object$df.residual rse<- sqrt(object$sigma2) # rse <- sqrt(sum(object$residuals^2)/(length(y)-Nrows)) cat("Residual standard error:", rse, "on", df, "degrees of freedom" ) cat("\n") }
/scratch/gouwar.j/cran-all/cranData/ADVICE/R/summary.QRS.R
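Putting the pieces together: `rmultreg()` simulates a sparse regression and `ices()` fits it via the QRS routine, with the `summary` and `confint` methods defined above. A minimal sketch, assuming the ADVICE package (or the functions above) is loaded:

```r
set.seed(9)
sim <- rmultreg(n = 200, k = 5, p = 0.5)   # some true coefficients are set to zero

fit <- ices(y ~ ., data = sim$data)
summary(fit)
confint(fit)

sim$coefficients   # true coefficients, for comparison
```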
#' CPI Function
#'
#' Incorporate change point analysis in ARIMA forecasting
#' @param myts a time series object
#' @param startChangePoint a positive integer giving the minimum number of change points
#' @param endChangePoint a positive integer giving the maximum number of change points. If 0, only startChangePoint change points are used. Otherwise it must be greater than startChangePoint, and the algorithm loops over all values in between, subject to step
#' @param step an integer step size for the loop over change points
#' @param num bump model number (see below)
#' @param cpmeth change point method. Default is BinSeg. See the changepoint package for details
#' @param CPpenalty penalty for the change point method. Default is SIC. See the changepoint package for details
#' @param showModel default is FALSE. If TRUE, plots all models for all change points; if an integer, all models for that change point; if a string, all change points for that model
#' @return A data frame with all the results from the analysis
#' @export
#' @importFrom "grDevices" "dev.new"
#' @importFrom "graphics" "grid" "plot"
#' @importFrom "stats" "is.ts"
cpi <- function(myts, startChangePoint = 1, endChangePoint = 0, step = 1,
                num = 15, cpmeth = 'BinSeg', CPpenalty = "SIC", showModel = FALSE) {
  if (is.ts(myts) == FALSE) {
    message("Error: First parameter must be a time series")
  } else if (!((startChangePoint > 0) && (startChangePoint %% 1 == 0))) {
    message("Error: Starting change point must be a positive integer")
  } else if (!((endChangePoint > 0) && (endChangePoint %% 1 == 0)) && (endChangePoint != 0)) {
    message("Error: Ending change point must be a positive integer")
  } else if ((endChangePoint <= startChangePoint) && (endChangePoint != 0)) {
    message("Error: Ending change point must be greater than starting change point")
  } else {
    f <- function(t) {
      return (exp(-1/t) * (t > 0))
    }
    # match the post
    StackCutoff <- function(t, start_level = 1, start_transition = 1, stop_transition = 2) {
      tr <- 1 + (t - start_transition) / (stop_transition - start_transition)
      f2t <- f(2 - abs(tr))
      ft1 <- f(abs(tr) - 1)
      return (start_level * f2t / (ft1 + f2t))
    }
    ExpCos <- function(t, start_point, amplitude = 1, decay.speed = 0.1, period = 20) {
      return (amplitude * cos(2*pi*(t - start_point)/period) / exp((t - start_point) * decay.speed))
    }
    ModulatedExp <- function(t, start_point, amplitude = 0.1, decay.speed = 0.1, period = 20) {
      return ((1 + amplitude * cos(2*pi*(t - start_point)/period)) / exp((t - start_point) * decay.speed))
    }
    DF2 <- 0
    DFtemp <- c()
    cpiP2 <- function(myts, n = startChangePoint, cpmeth. = cpmeth, CPpenalty. = CPpenalty,
                      num. = num, showModel. = showModel) {
      DF <- data.frame('Model' = rep("", num), 'No.ChangePoints' = rep(n, num),
                       'AIC' = rep(0, num), stringsAsFactors = FALSE,
                       'cpiv' = I(vector(mode = "list", length = num)))
      nSamples <- length(myts)
      model11_vector <- signal::bartlett(nSamples)
      model12_vector <- abs(signal::flattopwin(nSamples))
      showThisModel <- FALSE
      if (class(showModel) == "logical") {
        showThisModel <- (showModel == TRUE)
      }
      for (k in 1:num) {
        #for (abc in 1:2) { #k = 8
        m.bin <- suppressWarnings(changepoint::cpt.mean(myts, penalty = CPpenalty, method = cpmeth, Q = n))
        aveTS = mean(myts)
        temp1 <- c()
        temp2 <- c()
        cpiv <- matrix(0, ncol = n, nrow = length(myts))
        if (class(showModel) == "character") {
          showThisModel <- ((sprintf("model %d", k) == showModel))
        }
        for (i in 1:n) {
          if (class(showModel) == "numeric") {
            showThisModel <- (showModel == i)
          }
          for (j in 1:m.bin@cpts[i]) {
            cpiv[j, i] <- 0  # for the ith intervention variable, fill cpiv with 0s up to the ith change point row
          }
          temp1 <- length(myts) - m.bin@cpts[i]
          for (j in 1:temp1) {
            temp2 <- j + m.bin@cpts[i]
            if (k == 1) {
              cpiv[temp2, i] <- 1 - (1 - (log(temp2)/temp2))
              #DF[1,1] <- "1 - (1 - log(changepoint/changepoint)"
              DF[1, 1] <- "model 1"
            } #model 1
            else if (k == 2) {
              cpiv[temp2, i] <- aveTS*(1 - (temp2 - m.bin@cpts[i])/(1 + (temp2 - m.bin@cpts[i])))
              DF[2, 1] <- "model 2"
            } #model 2
            else if (k == 3) {
              cpiv[temp2, i] <- aveTS*(1 - (1/(temp2 - m.bin@cpts[i])))
              DF[3, 1] <- "model 3"
            } #model 3
            else if (k == 4) {
              cpiv[temp2, i] <- aveTS*(1 - (1/(temp2 - m.bin@cpts[i])))
              DF[4, 1] <- "model 4"
            } #model 4
            else if (k == 5) {
              cpiv[temp2, i] <- aveTS*exp(-((temp2 - m.bin@cpts[i])^2)/((length(myts)^2)*2.5))
              DF[5, 1] <- "model 5"
            } #model 5
            else if (k == 6) {
              cpiv[temp2, i] <- aveTS*(1 - (1/(temp2 - m.bin@cpts[i])))
              DF[6, 1] <- "model 6"
            } #model 6
            else if (k == 7) {
              cpiv[temp2, i] <- 1 - (1 - (1/log(temp2)))
              DF[7, 1] <- "model 7"
            } #model 7
            else if (k == 8) {
              cpiv[temp2, i] <- 1
              DF[8, 1] <- "model 8"
            } # model 8
            # additional bump functions
            else if (k == 9) {
              cpiv[temp2, i] <- StackCutoff(temp2 - m.bin@cpts[i], start_level = aveTS,
                                            start_transition = 1, stop_transition = nSamples - m.bin@cpts[i])
              DF[9, 1] <- "model 9"
            } #model 9
            else if (k == 10) {
              # currently use abs to get a positive value, can be removed
              cpiv[temp2, i] <- abs(ExpCos(temp2 - m.bin@cpts[i], start_point = 0, amplitude = 2*aveTS,
                                           decay.speed = 4/nSamples, period = nSamples/3))
              DF[10, 1] <- "model 10"
            } #model 10
            else if (k == 11) {
              # all standard window functions in signal can be used, here is an example
              # note that the function call is outside the inner loop, here we just take one point at a time
              cpiv[temp2, i] <- aveTS * model11_vector[temp2]
              DF[11, 1] <- "model 11"
            } # model 11
            else if (k == 12) {
              # all standard window functions in signal can be used, here is an example
              # note that the function call is outside the inner loop, here we just take one point at a time
              cpiv[temp2, i] <- aveTS * model12_vector[temp2]
              DF[12, 1] <- "model 12"
            } # model 12
            else if (k == 13) {
              cpiv[temp2, i] <- aveTS * ModulatedExp(temp2 - m.bin@cpts[i], start_point = 0, amplitude = 0.5,
                                                     decay.speed = 4/nSamples, period = nSamples/6)
              DF[13, 1] <- "model 13"
            } #model 13
            else if (k == 14) {
              # sinc based, constrained to be positive; the construction ensures the end point has value 0
              cpiv[temp2, i] <- aveTS * sin(pi*temp2/nSamples) / (pi*temp2/nSamples)
              DF[14, 1] <- "model 14"
            } # model 14
            else if (k == 15) {
              # sinc based, constrained to be positive; the construction ensures the end point has value 0, but with more oscillations
              cpiv[temp2, i] <- aveTS * abs(sin(4*pi*temp2/nSamples) / (pi*temp2/nSamples))
              DF[15, 1] <- "model 15"
            } # model 15
          }
          if (showThisModel) {
            dev.new()
            plot(cpiv[, i], col = "blue", type = "l", main = sprintf("%s, changepoint %d", DF[k, 1], i))
            grid()
          }
        }
        xxx = suppressWarnings(forecast::auto.arima(myts, xreg = cpiv))
        DF[k, 3] <- xxx$aic
        DF[[k, 4]] <- list(cpiv)
      }
      return(DF)
    }
    suppressWarnings(if (endChangePoint == 0) {
      DF2 <- cpiP2(myts)
    } else {
      for (z in seq(startChangePoint, endChangePoint, step)) {
        if (DF2 == 0) {
          DF2 <- cpiP2(myts, z)
        } else {
          DFtemp <- cpiP2(myts, z)
          DF2 <- rbind(DF2, DFtemp)
        }
      }
    })
    return(DF2)
  }
}
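
# A hedged usage sketch (not part of the package sources): the call below is an
# illustration based only on the argument list documented above; the simulated
# series and all parameter values are assumptions. Running it requires the
# changepoint, signal, and forecast packages.
if (FALSE) {
  set.seed(1)
  myts <- ts(c(rnorm(40, 0), rnorm(40, 5), rnorm(40, 2)), frequency = 12)
  res <- cpi(myts, startChangePoint = 1, endChangePoint = 3, step = 1)
  # one row per (bump model, number of change points); smaller AIC is better
  res[which.min(res$AIC), c("Model", "No.ChangePoints", "AIC")]
}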
/scratch/gouwar.j/cran-all/cranData/AEDForecasting/R/AED_Function.r
paep<-function(x, alpha, sigma, mu, epsilon, log.p = FALSE, lower.tail = TRUE) { n<-length(x) cdf0<-rep(NA, n) for (i in 1:n) { if (x[i]> mu){ cdf0[i] <- 1-((1+epsilon)/2-(1+epsilon)/2*pgamma((x[i]-mu)^alpha/(sigma*(1+epsilon))^alpha,1/alpha,1)) } if (x[i]<= mu){ cdf0[i] <- ((1-epsilon)/2-(1-epsilon)/2*pgamma((mu-x[i])^alpha/(sigma*(1-epsilon))^alpha,1/alpha,1)) } } if(log.p==TRUE & lower.tail == FALSE) cdf0<-suppressWarnings( log(1-cdf0) ) if(log.p==TRUE & lower.tail == TRUE) cdf0<-suppressWarnings( log(cdf0) ) if(log.p==FALSE & lower.tail == FALSE) cdf0<-suppressWarnings( 1-cdf0 ) return(cdf0) } daep<-function(x, alpha, sigma, mu, epsilon, log = FALSE) { n<-length(x) pdf0<-rep(NA, n) for(i in 1:n) { pdf0[i]<-1/(2*sigma*gamma(1+1/alpha))*exp(-(abs(x[i]-mu)/(sigma*(1+sign(x[i]-mu)*epsilon)))^alpha) } suppressWarnings(if(log==TRUE) pdf0<-log(pdf0)) return(pdf0) } raep<-function(n, alpha, sigma, mu, epsilon) { sab<-function(n, a, beta) { y <- c() m <- c() b <- beta/a i <- 1 bbb <- a^(-a)*(1-a)^(-(1-a)) si <- 1/sqrt(b*a*(1-a)) while(i<=n) { if (si>=sqrt(2*pi)) { uu<-runif(1,0,pi) vv<-runif(1,0,1) if( (vv*bbb^b)<((sin(uu)/((sin(a*uu))^(a)*(sin((1-a)*uu))^(1-a)))^b) ) { y[i]<-uu i<-i+1 } } else { N<-rnorm(1,0,1) vv<-runif(1,0,1) if( ((si*abs(N))<pi) && (vv*bbb^b*exp(-N^2/2))<((sin(si*abs(N))/((sin(a*si*abs(N)))^(a)*(sin((1-a)*si*abs(N)))^(1-a)))^b) ) { y[i]<-si*abs(N) i<-i+1 } } } m<-(((sin(a*y))^(a)*(sin((1-a)*y))^(1-a))/sin(y))^(1/(1-a)) return(m) } W <- (sab(n,alpha/2,.5)/rgamma(n,(1+1*(1-alpha/2)/alpha),1))^(2*(1-alpha/2)/alpha) rb<- rbinom(n,1,(1-epsilon)/2) X <- (1-rb)*(1+epsilon)*abs(rnorm(n))-(rb)*(1-epsilon)*abs(rnorm(n)) Y <- sigma*(X)/sqrt(2*W)+mu return(Y) } qaep<-function(u, alpha, sigma, mu, epsilon) { n<-length(u) R<-rep(NA, n) for (i in 1:n) { if ( u[i]<(1-epsilon)/2 ) { R[i] <- mu - sigma*(1-epsilon)*( qgamma(1-2*u[i]/(1-epsilon), 1/alpha, 1) )^(1/alpha) }else{ R[i] <- mu + sigma*(1+epsilon)*( qgamma((u[i]-(1-epsilon)/2)*2/(1+epsilon), 1/alpha, 1) )^(1/alpha) } } return(R) } fitaep<-function(x, initial = FALSE, starts) { N <- 6000 cri<- 10e-5 j <- 2 Eps<- 1 del<- 10e-6 n <- length(x) E <- anderson <- u<- von<- rep(NA,n) A <- matrix(NA, nrow=N, ncol=4) OFIM <- matrix(NA, nrow=4, ncol=4) if(initial==FALSE) { f0.alpha <- function(w){-mean((x-mean(x))^4)/((n-1)/n*var(x))^2+gamma(5/w)*gamma(1/w)/(gamma(3/w))^2} alpha <- ifelse( f0.alpha(0.05)*f0.alpha(2)<0,uniroot(f0.alpha, lower=0.05, upper=2)$root, 1 ) mu <- median(x) epsilon <- 1-2*sum( ifelse( (x-mu)<0, 1, 0) )/n sigma <- sqrt(var(x)*gamma(1/alpha)/gamma(3/alpha)) } if(initial==TRUE) { alpha <- starts[1] sigma <- starts[2] mu <- starts[3] epsilon <- starts[4] } A[1,] <- c(alpha, sigma, mu, epsilon) while ( Eps>0.5 & j<N ) { E <- ifelse( abs(x-mu)<=0.00000001, mu, alpha/2*(abs(x-mu)/sigma)^(alpha-2))*abs(1+sign(x-mu)*epsilon)^(2-alpha ) mu <- sum( x*E/(1+sign(x-mu)*epsilon)^2 )/sum( E/(1+sign(x-mu)*epsilon)^2 ) sigma <- (2/n*sum( (x-mu)^2*E/(1+sign(x-mu)*epsilon)^2) )^(1/2) F <- function(par) sum( (x-mu)^2*E/( sigma^2*(1 + sign(x-mu)*par[1])^2 ) ) epsilon <- optimize(F, c(-0.999,0.999) )$minimum f <- function(par) n*lgamma(1+1/par[1])+sum( (abs(x-mu)/(sigma*(1+sign(x-mu)*epsilon)) )^par[1]) alpha <- optimize(f, c(.01,2) )$minimum A[j,] <- c(alpha, sigma, mu, epsilon) #print(c(j,A[j,])) if ( sum( abs(A[j-1,]-A[j,]) )<cri || j>=(N-1) ) { Eps<-0 }else{ j<-j+1 } } alpha <- A[j,1] sigma <- A[j,2] mu <- A[j,3] epsilon <- A[j,4] D <-cbind( ( digamma(1+1/alpha)/(alpha^2)-(abs(x-mu)/(sigma*(1+sign(x-mu)*epsilon) ) )^(alpha)*log( 
(abs(x-mu)/(sigma*(1+sign(x-mu)*epsilon) ) ) ) ), ( -1/sigma+ alpha*sigma^(-alpha-1)*( abs(x-mu)/ (1+sign(x-mu)*epsilon) )^alpha ), alpha*sign(x-mu)/( sigma*(1+ sign(x-mu)*epsilon) )* (abs(x-mu)/(sigma*(1+sign(x-mu)*epsilon) ) )^(alpha-1), alpha*sign(x-mu)/( 1+ sign(x-mu)*epsilon )*(abs(x-mu)/(sigma*(1+sign(x-mu)*epsilon) ) )^(alpha) ) OFIM <- solve(t(D)%*%D) s.Y <- sort(x) cdf0 <- paep(s.Y, alpha, sigma, mu, epsilon) pdf0 <- daep(s.Y, alpha, sigma, mu, epsilon) for(i in 1:n) { u[i] <- ifelse( cdf0[i]==1, 0.99999999, cdf0[i] ) von[i] <- ( cdf0[i]-(2*i-1)/(2*n) )^2 anderson[i] <- suppressWarnings( (2*i-1)*log(cdf0[i])+(2*n+1-2*i)*log(1-cdf0[i]) ) } von.stat <- suppressWarnings( sum(von)+1/(12*n) ) n.p <- 3 log.likelihood <- suppressWarnings( sum( log(pdf0) ) ) I <- seq(1,n) ks.stat <- suppressWarnings( max( I/n-cdf0, cdf0-(I-1)/n ) ) anderson.stat <- suppressWarnings( -n - mean(anderson) ) CAIC <- -2*log.likelihood + 2*n.p + 2*(n.p*(n.p+1))/(n-n.p-1) AIC <- -2*log.likelihood + 2*n.p BIC <- -2*log.likelihood + n.p*log(n) HQIC <- -2*log.likelihood + 2*log(log(n))*n.p out1 <- cbind(alpha, sigma, mu, epsilon) out2 <- cbind(AIC, CAIC, BIC, HQIC, anderson.stat, von.stat, ks.stat, log.likelihood) colnames(out1) <- c("alpha", "sigma", "mu", "epsilon") colnames(out2) <- c("AIC", "CAIC", "BIC", "HQIC", "AD", "CVM", "KS", "log.likelihood") out3 <- OFIM colnames(out3) <- c("alpha","sigma", "mu", "epsilon") rownames(out3) <- c("alpha","sigma", "mu", "epsilon") list("estimate" = out1, "measures" = out2, "Inverted OFIM" = out3) } regaep<-function(y, x){ if( any(is.na(y)) ) warning('y contains missing values') if( any(is.na(x)) ) warning('x contains missing values') n <- length(y) regout<-function(Y, X){ n <- length(Y) m2 <- 0.8 m1 <- 0.2 N <- 2000 cri <- 10e-5 j <- 2 Eps <- 1 u <- rep( NA, n ) y <- rep( NA, n ) k <- dim( cbind(Y, X) )[2] Y1 <- matrix( NA, nrow = n, ncol = k) OFIM <- matrix( NA, k, k) x <- cbind( rep(1, n), X ) out <- matrix( NA, ncol = (3+k), nrow = N ) X1 <- subset( cbind(Y,X), (Y<quantile(Y,0.8) | Y>quantile(Y,0.2)) ) Beta <- summary(lm( X1[,1]~X1[,2], data=data.frame(X1) ))$coefficients[1:k] f0.alpha <- function(x){-mean((Y-mean(Y))^4)/((n-1)/n*var(Y))^2+gamma(5/x)*gamma(1/x)/(gamma(3/x))^2} alpha <- ifelse(f0.alpha(0.05)*f0.alpha(2)<0, uniroot(f0.alpha, lower=0.05, upper=2)$root, 1) epsilon <- (quantile(Y,m2)[[1]]-2*quantile(Y,.5)[[1]]+quantile(Y,m1)[[1]])/(quantile(Y,m2)[[1]]-quantile(Y,m1)[[1]]) mu <- quantile(Y,(1-(epsilon))/2)[[1]] sigma <- sqrt(var(Y)*gamma(1/alpha)/gamma(3/alpha) ) out[1,1:k] <- Beta out[1,(k+1):(k+3)] <- c(alpha, sigma, epsilon) while (Eps>0.5 & j<N) { y <- Y-x%*%Beta E <- ifelse( abs(y-mu)<=0.00000001, mu, alpha/2*(abs(y-mu)/sigma)^(alpha-2))*abs(1+sign(y-mu)*epsilon)^(2-alpha ) mu <- 0 # sum( y*E/(1+sign(y-mu)*epsilon)^2 )/sum( E/(1+sign(y-mu)*epsilon)^2 ) sigma <- (2/n*sum( (y-mu)^2*E/(1+sign(y-mu)*epsilon)^2) )^(1/2) F <- function(par) sum( (y-mu)^2*E/( sigma^2*(1 + sign(y-mu)*par[1])^2 ) ) epsilon <- optimize(F, c(-0.999,0.999) )$minimum f <- function(par) n*lgamma(1+1/par[1])+sum( (abs(y-mu)/(sigma*(1+sign(y-mu)*epsilon)) )^par[1]) alpha <- optimize(f, c(.01,2) )$minimum T <- matrix(0, k, k) Y1 <- t(x)%*%( (Y-mu)*E/(1+sign(y-mu)*epsilon)^2 ) for (i in 1:n) { T <- T+(x[i,])%*%t(x[i,])*(E[i]/(1+sign(y[i]-mu)*epsilon)^2)[1] } Beta <- as.vector( solve(T)%*%Y1 ) out[j,] <- c(Beta, alpha, sigma, epsilon ) # print(c(j,out[j,])) if (sum(abs(out[j-1,1:(k+3)]-out[j,1:(k+3)]))<cri || j>=(N-1)) { Eps <- 0}else{ j <- j+1 } } list( Beta = Beta, alpha = alpha, sigma = 
sigma, epsilon = epsilon ) } out<-regout(y, x) w <- y-cbind(1,x)%*%out$Beta alpha <- out$alpha sigma <- out$sigma epsilon <- out$epsilon p <- length(out$Beta) D<-cbind( cbind(1,x)* matrix( rep( alpha*sign(w)/( sigma*(1+ sign(w)*epsilon) )*(abs(w)/( sigma*(1+sign(w)*epsilon) ) )^(alpha-1), p ), nrow = n, ncol = p ), digamma(1+1/alpha)/(alpha^2)-(abs(w)/(sigma*(1+sign(w)*epsilon) ) )^(alpha)*log( (abs(w)/(sigma*(1+sign(w)*epsilon) ) ) ), ( -1/sigma+ alpha*sigma^(-alpha-1)*( abs(w)/( (1+sign(w)*epsilon) ) )^(alpha) ), alpha*sign(w)/( 1+ sign(w)*epsilon )*(abs(w)/(sigma*(1+sign(w)*epsilon) ) )^(alpha) ) colnames(D) <- NULL OFIM <- solve( t(D)%*%D ) Error <- w S.E <- sum( Error^2 ) S.T <- sum( (y-mean(y))^2 ) F.value <- (S.T-S.E)/(p-1)*(n-p)*S.E out1 <- cbind(out$Beta) colnames(out1) <- c("Estimate") rownames(out1) <- c("beta.0", rownames(out1[2:p,], do.NULL = FALSE, prefix = "beta.") ) out2 <- cbind( min(Error), quantile(Error,0.25)[[1]], quantile(Error,0.50)[[1]], mean(Error), quantile(Error,0.75)[[1]], max(Error) ) colnames(out2) <- c("Min", "1Q", "Median", "Mean", "3Q", "Max") out3 <- cbind( F.value, p-1, n-p, 1-pf(F.value, p-1, n-p) ) colnames(out3) <- cbind("Value", "DF1", "DF2", "P value") rownames(out3) <- c("F-statistic") out4 <- cbind(out$sigma, p-1) colnames(out4) <- cbind("Value", "DF") rownames(out4) <- c("Residual Std. Error") out5 <- cbind( 1-S.E/S.T, 1-(n-1)/(n-p)*(S.E/S.T) ) colnames(out5) <- cbind("Non-adjusted", "Adjusted") rownames(out5) <- c("Multiple R-Squared") out6 <- cbind(out$alpha, out$sigma, out$epsilon) colnames(out6) <- c("Tail thickness", "Scale", "Skewness") colnames(OFIM) <- NULL out7 <- OFIM colnames(out7) <- c("beta.0",colnames(out7[,2:p], do.NULL = FALSE, prefix = "beta."), "alpha", "sigma", "epsilon") rownames(out7) <- c("beta.0",colnames(out7[,2:p], do.NULL = FALSE, prefix = "beta."), "alpha", "sigma", "epsilon") list("Coefficients:" = out1, "Residuals:" = out2, "F:" = out3, "MSE:" = out4, "R2:" = out5, "Estimated Parameters for Error Distribution:" = out6, "Inverted Observed Fisher Information Matrix:" = out7) }
/scratch/gouwar.j/cran-all/cranData/AEP/R/AEP.R
welcome<- function(){ msg <- c(paste0( " Welcome to _____________ ____________ _____________ | | | | | | | _____ | | ________| | _____ | | | | | | | | | | | | |_____| | | |________ | |_____| | | | | | | | | _____ | | ________| | _________| | | | | | | | | | | | | | |________ | | | | | | | | | | |___| |___| |____________| |___| version ", packageVersion("AEP")),"\nType 'citation(\"AEP\")' for citing this R package in publications.") return(msg) } .onAttach <- function(libname, pkgname) { mess <- welcome() if(!interactive()) mess[1] <- paste("Package 'AEP' version", packageVersion("AEP")) packageStartupMessage(mess) invisible() }
/scratch/gouwar.j/cran-all/cranData/AEP/R/welcome.R
coeftest.multinom <- function(x, vcov. = NULL, df = NULL, ..., save = FALSE)
{
  ## extract coefficients
  est <- coef(x)
  if(!is.null(dim(est))) {
    est <- structure(as.vector(t(est)),
      names = as.vector(t(outer(rownames(est), colnames(est), paste, sep = ":"))))
  }

  ## process vcov.
  if(is.null(vcov.)) vc <- vcov(x) else {
    if(is.function(vcov.)) vc <- vcov.(x) else vc <- vcov.
  }
  se <- sqrt(diag(vc))
  tval <- as.vector(est)/se

  ## process degrees of freedom
  if(is.null(df)) df <- Inf
  if(is.finite(df) && df > 0) {
    pval <- 2 * pt(abs(tval), df = df, lower.tail = FALSE)
    cnames <- c("Estimate", "Std. Error", "t value", "Pr(>|t|)")
    mthd <- "t"
  } else {
    pval <- 2 * pnorm(abs(tval), lower.tail = FALSE)
    cnames <- c("Estimate", "Std. Error", "z value", "Pr(>|z|)")
    mthd <- "z"
  }

  rval <- cbind(est, se, tval, pval)
  colnames(rval) <- cnames
  class(rval) <- "coeftest"
  attr(rval, "method") <- paste(mthd, "test of coefficients")
  attr(rval, "df") <- df
  attr(rval, "logLik") <- logLik(x)
  if(save) attr(rval, "object") <- x
  return(rval)
}

coeftest.polr <- function(x, vcov. = NULL, df = NULL, ..., save = FALSE)
{
  ## extract coefficients
  est <- c(x$coefficients, x$zeta)

  ## process vcov.
  if(is.null(vcov.)) vc <- vcov(x) else {
    if(is.function(vcov.)) vc <- vcov.(x) else vc <- vcov.
  }
  se <- sqrt(diag(vc))
  tval <- as.vector(est)/se

  ## process degrees of freedom
  if(is.null(df)) df <- Inf
  if(is.finite(df) && df > 0) {
    pval <- 2 * pt(abs(tval), df = df, lower.tail = FALSE)
    cnames <- c("Estimate", "Std. Error", "t value", "Pr(>|t|)")
    mthd <- "t"
  } else {
    pval <- 2 * pnorm(abs(tval), lower.tail = FALSE)
    cnames <- c("Estimate", "Std. Error", "z value", "Pr(>|z|)")
    mthd <- "z"
  }

  rval <- cbind(est, se, tval, pval)
  colnames(rval) <- cnames
  class(rval) <- "coeftest"
  attr(rval, "method") <- paste(mthd, "test of coefficients")
  attr(rval, "df") <- df
  attr(rval, "nobs") <- nobs(x)
  attr(rval, "logLik") <- logLik(x)
  if(save) attr(rval, "object") <- x
  return(rval)
}

lrtest.fitdistr <- function(object, ..., name = NULL)
{
  if(is.null(name)) name <- function(x) if(is.null(names(x$estimate))) {
    paste(round(x$estimate, digits = max(getOption("digits") - 3, 2)), collapse = ", ")
  } else {
    paste(names(x$estimate), "=",
      round(x$estimate, digits = max(getOption("digits") - 3, 2)), collapse = ", ")
  }
  lrtest.default(object, ..., name = name)
}
/scratch/gouwar.j/cran-all/cranData/AER/R/coeftest-methods.R
dispersiontest <- function(object, trafo = NULL, alternative = c("greater", "two.sided", "less"))
{
  if(!inherits(object, "glm") || family(object)$family != "poisson")
    stop("only Poisson GLMs can be tested")
  alternative <- match.arg(alternative)
  otrafo <- trafo
  if(is.numeric(otrafo)) trafo <- function(x) x^otrafo

  y <- if(is.null(object$y)) model.response(model.frame(object)) else object$y
  yhat <- fitted(object)
  aux <- ((y - yhat)^2 - y)/yhat

  if(is.null(trafo)) {
    STAT <- sqrt(length(aux)) * mean(aux)/sd(aux)
    NVAL <- c(dispersion = 1)
    EST <- c(dispersion = mean(aux) + 1)
  } else {
    auxreg <- lm(aux ~ 0 + I(trafo(yhat)/yhat))
    STAT <- as.vector(summary(auxreg)$coef[1, 3])
    NVAL <- c(alpha = 0)
    EST <- c(alpha = as.vector(coef(auxreg)[1]))
  }

  rval <- list(statistic = c(z = STAT),
    p.value = switch(alternative,
      "greater" = pnorm(STAT, lower.tail = FALSE),
      "two.sided" = pnorm(abs(STAT), lower.tail = FALSE)*2,
      "less" = pnorm(STAT)),
    estimate = EST,
    null.value = NVAL,
    alternative = alternative,
    method = switch(alternative,
      "greater" = "Overdispersion test",
      "two.sided" = "Dispersion test",
      "less" = "Underdispersion test"),
    data.name = deparse(substitute(object)))
  class(rval) <- "htest"
  return(rval)
}

## NB. score tests a la DCluster now implemented in countreg
##
## TODO:
## LRT for Poi vs NB2.
## fix DCluster::test.nb.pois() and pscl::odTest()
## proposed interface:
## poistest(object, object2 = NULL)
## where either a "negbin" and a "glm" object have to be
## supplied or only one of them, then update via either
## cl <- object$call
## cl[[1]] <- as.name("glm.nb")
## cl$link <- object$family$link
## cl$family <- NULL
## or
## cl <- object$call
## cl[[1]] <- as.name("glm")
## cl$family <- call("poisson")
## cl$family$link <- object$family$link
## cl$link <- NULL
## cl$init.theta <- NULL
## and evaluate the call "cl" appropriately.
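
# A hedged usage sketch (not part of the package sources): dispersiontest()
# expects a fitted Poisson GLM. The RecreationDemand data from AER is used here
# purely as an illustration.
if (FALSE) {
  data("RecreationDemand", package = "AER")
  fm <- glm(trips ~ ., data = RecreationDemand, family = poisson)
  dispersiontest(fm)             # H0: equidispersion, alternative: overdispersion
  dispersiontest(fm, trafo = 2)  # alternative of the form Var(y) = mu + alpha * mu^2
}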
/scratch/gouwar.j/cran-all/cranData/AER/R/dispersiontest.R
ivreg <- function(formula, instruments, data, subset, na.action, weights, offset, contrasts = NULL, model = TRUE, y = TRUE, x = FALSE, ...) { ## set up model.frame() call cl <- match.call() if(missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) m <- match(c("formula", "data", "subset", "na.action", "weights", "offset"), names(mf), 0) mf <- mf[c(1, m)] mf$drop.unused.levels <- TRUE ## handle instruments for backward compatibility if(!missing(instruments)) { formula <- Formula::as.Formula(formula, instruments) cl$instruments <- NULL cl$formula <- formula(formula) } else { formula <- Formula::as.Formula(formula) } stopifnot(length(formula)[1] == 1L, length(formula)[2] %in% 1:2) ## try to handle dots in formula has_dot <- function(formula) inherits(try(terms(formula), silent = TRUE), "try-error") if(has_dot(formula)) { f1 <- formula(formula, rhs = 1) f2 <- formula(formula, lhs = 0, rhs = 2) if(!has_dot(f1) & has_dot(f2)) formula <- Formula::as.Formula(f1, update(formula(formula, lhs = 0, rhs = 1), f2)) } ## call model.frame() mf$formula <- formula mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) ## extract response, terms, model matrices Y <- model.response(mf, "numeric") mt <- terms(formula, data = data) mtX <- terms(formula, data = data, rhs = 1) X <- model.matrix(mtX, mf, contrasts) if(length(formula)[2] < 2L) { mtZ <- NULL Z <- NULL } else { mtZ <- delete.response(terms(formula, data = data, rhs = 2)) Z <- model.matrix(mtZ, mf, contrasts) } ## weights and offset weights <- model.weights(mf) offset <- model.offset(mf) if(is.null(offset)) offset <- 0 if(length(offset) == 1) offset <- rep(offset, NROW(Y)) offset <- as.vector(offset) ## call default interface rval <- ivreg.fit(X, Y, Z, weights, offset, ...) ## enhance information stored in fitted model object rval$call <- cl rval$formula <- formula(formula) rval$terms <- list(regressors = mtX, instruments = mtZ, full = mt) rval$na.action <- attr(mf, "na.action") rval$levels <- .getXlevels(mt, mf) rval$contrasts <- list(regressors = attr(X, "contrasts"), instruments = attr(Z, "contrasts")) if(model) rval$model <- mf if(y) rval$y <- Y if(x) rval$x <- list(regressors = X, instruments = Z, projected = rval$x) else rval$x <- NULL class(rval) <- "ivreg" return(rval) } ivreg.fit <- function(x, y, z, weights, offset, ...) { ## model dimensions n <- NROW(y) p <- ncol(x) ## defaults if(missing(z)) z <- NULL if(missing(weights)) weights <- NULL if(missing(offset)) offset <- rep(0, n) ## sanity checks stopifnot(n == nrow(x)) if(!is.null(z)) stopifnot(n == nrow(z)) if(!is.null(weights)) stopifnot(n == NROW(weights)) stopifnot(n == NROW(offset)) ## project regressors x on image of instruments z if(!is.null(z)) { if(ncol(z) < ncol(x)) warning("more regressors than instruments") auxreg <- if(is.null(weights)) lm.fit(z, x, ...) else lm.wfit(z, x, weights, ...) xz <- as.matrix(auxreg$fitted.values) # pz <- z %*% chol2inv(auxreg$qr$qr) %*% t(z) colnames(xz) <- colnames(x) } else { xz <- x # pz <- diag(NROW(x)) # colnames(pz) <- rownames(pz) <- rownames(x) } ## main regression fit <- if(is.null(weights)) lm.fit(xz, y, offset = offset, ...) else lm.wfit(xz, y, weights, offset = offset, ...) 
## model fit information ok <- which(!is.na(fit$coefficients)) yhat <- drop(x[, ok, drop = FALSE] %*% fit$coefficients[ok]) names(yhat) <- names(y) res <- y - yhat ucov <- chol2inv(fit$qr$qr[1:length(ok), 1:length(ok), drop = FALSE]) colnames(ucov) <- rownames(ucov) <- names(fit$coefficients[ok]) rss <- if(is.null(weights)) sum(res^2) else sum(weights * res^2) ## hat <- diag(x %*% ucov %*% t(x) %*% pz) ## names(hat) <- rownames(x) rval <- list( coefficients = fit$coefficients, residuals = res, fitted.values = yhat, weights = weights, offset = if(identical(offset, rep(0, n))) NULL else offset, n = n, nobs = if(is.null(weights)) n else sum(weights > 0), rank = fit$rank, df.residual = fit$df.residual, cov.unscaled = ucov, sigma = sqrt(rss/fit$df.residual), ## NOTE: Stata divides by n here and uses z tests rather than t tests... # hatvalues = hat, x = xz ) return(rval) } vcov.ivreg <- function(object, ...) object$sigma^2 * object$cov.unscaled bread.ivreg <- function (x, ...) x$cov.unscaled * x$nobs estfun.ivreg <- function (x, ...) { xmat <- model.matrix(x) if(any(alias <- is.na(coef(x)))) xmat <- xmat[, !alias, drop = FALSE] wts <- weights(x) if(is.null(wts)) wts <- 1 res <- residuals(x) rval <- as.vector(res) * wts * xmat attr(rval, "assign") <- NULL attr(rval, "contrasts") <- NULL return(rval) } hatvalues.ivreg <- function(model, ...) { xz <- model.matrix(model, component = "projected") x <- model.matrix(model, component = "regressors") z <- model.matrix(model, component = "instruments") solve_qr <- function(x) chol2inv(qr.R(qr(x))) diag(x %*% solve_qr(xz) %*% t(x) %*% z %*% solve_qr(z) %*% t(z)) } terms.ivreg <- function(x, component = c("regressors", "instruments"), ...) x$terms[[match.arg(component)]] model.matrix.ivreg <- function(object, component = c("projected", "regressors", "instruments"), ...) { component <- match.arg(component, c("projected", "regressors", "instruments")) if(!is.null(object$x)) rval <- object$x[[component]] else if(!is.null(object$model)) { X <- model.matrix(object$terms$regressors, object$model, contrasts = object$contrasts$regressors) Z <- if(is.null(object$terms$instruments)) NULL else model.matrix(object$terms$instruments, object$model, contrasts = object$contrasts$instruments) w <- weights(object) XZ <- if(is.null(Z)) { X } else if(is.null(w)) { lm.fit(Z, X)$fitted.values } else { lm.wfit(Z, X, w)$fitted.values } if(is.null(dim(XZ))) { XZ <- matrix(XZ, ncol = 1L, dimnames = list(names(XZ), colnames(X))) attr(XZ, "assign") <- attr(X, "assign") } rval <- switch(component, "regressors" = X, "instruments" = Z, "projected" = XZ) } else stop("not enough information in fitted model to return model.matrix") return(rval) } predict.ivreg <- function(object, newdata, na.action = na.pass, ...) { if(missing(newdata)) fitted(object) else { mf <- model.frame(delete.response(object$terms$full), newdata, na.action = na.action, xlev = object$levels) X <- model.matrix(delete.response(object$terms$regressors), mf, contrasts = object$contrasts$regressors) ok <- !is.na(object$coefficients) drop(X[, ok, drop = FALSE] %*% object$coefficients[ok]) } } print.ivreg <- function(x, digits = max(3, getOption("digits") - 3), ...) { cat("\nCall:\n", deparse(x$call), "\n\n", sep = "") cat("Coefficients:\n") print.default(format(coef(x), digits = digits), print.gap = 2, quote = FALSE) cat("\n") invisible(x) } summary.ivreg <- function(object, vcov. = NULL, df = NULL, diagnostics = FALSE, ...) 
{ ## weighted residuals res <- object$residuals y <- object$fitted.values + res n <- NROW(res) w <- object$weights if(is.null(w)) w <- rep(1, n) res <- res * sqrt(w) ## R-squared rss <- sum(res^2) if(attr(object$terms$regressors, "intercept")) { tss <- sum(w * (y - weighted.mean(y, w))^2) dfi <- 1 } else { tss <- sum(w * y^2) dfi <- 0 } r.squared <- 1 - rss/tss adj.r.squared <- 1 - (1 - r.squared) * ((n - dfi)/object$df.residual) ## degrees of freedom (for z vs. t test) if(is.null(df)) df <- object$df.residual if(!is.finite(df)) df <- 0 if(df > 0 & (df != object$df.residual)) { df <- object$df.residual } ## covariance matrix if(is.null(vcov.)) vc <- vcov(object) else { if(is.function(vcov.)) vc <- vcov.(object) else vc <- vcov. } ## Wald test of each coefficient cf <- lmtest::coeftest(object, vcov. = vc, df = df, ...) attr(cf, "method") <- NULL class(cf) <- "matrix" ## Wald test of all coefficients Rmat <- if(attr(object$terms$regressors, "intercept")) cbind(0, diag(length(na.omit(coef(object)))-1)) else diag(length(na.omit(coef(object)))) waldtest <- car::linearHypothesis(object, Rmat, vcov. = vcov., test = ifelse(df > 0, "F", "Chisq"), singular.ok = TRUE) waldtest <- c(waldtest[2,3], waldtest[2,4], abs(waldtest[2,2]), if(df > 0) waldtest[2,1] else NULL) ## diagnostic tests diag <- if(diagnostics) ivdiag(object, vcov. = vcov.) else NULL rval <- list( call = object$call, terms = object$terms, residuals = res, weights <- object$weights, coefficients = cf, sigma = object$sigma, df = c(object$rank, if(df > 0) df else Inf, object$rank), ## aliasing r.squared = r.squared, adj.r.squared = adj.r.squared, waldtest = waldtest, vcov = vc, diagnostics = diag) class(rval) <- "summary.ivreg" return(rval) } print.summary.ivreg <- function(x, digits = max(3, getOption("digits") - 3), signif.stars = getOption("show.signif.stars"), ...) { cat("\nCall:\n") cat(paste(deparse(x$call), sep = "\n", collapse = "\n"), "\n\n", sep = "") cat(if(!is.null(x$weights) && diff(range(x$weights))) "Weighted ", "Residuals:\n", sep = "") if(NROW(x$residuals) > 5L) { nam <- c("Min", "1Q", "Median", "3Q", "Max") rq <- if(length(dim(x$residuals)) == 2) structure(apply(t(x$residuals), 1, quantile), dimnames = list(nam, dimnames(x$residuals)[[2]])) else structure(quantile(x$residuals), names = nam) print(rq, digits = digits, ...) } else { print(x$residuals, digits = digits, ...) } cat("\nCoefficients:\n") printCoefmat(x$coefficients, digits = digits, signif.stars = signif.stars, signif.legend = signif.stars & is.null(x$diagnostics), na.print = "NA", ...) if(!is.null(x$diagnostics)) { cat("\nDiagnostic tests:\n") printCoefmat(x$diagnostics, cs.ind = 1L:2L, tst.ind = 3L, has.Pvalue = TRUE, P.values = TRUE, digits = digits, signif.stars = signif.stars, na.print = "NA", ...) } cat("\nResidual standard error:", format(signif(x$sigma, digits)), "on", x$df[2L], "degrees of freedom\n") cat("Multiple R-Squared:", formatC(x$r.squared, digits = digits)) cat(",\tAdjusted R-squared:", formatC(x$adj.r.squared, digits = digits), "\nWald test:", formatC(x$waldtest[1L], digits = digits), "on", x$waldtest[3L], if(length(x$waldtest) > 3L) c("and", x$waldtest[4L]) else NULL, "DF, p-value:", format.pval(x$waldtest[2L], digits = digits), "\n\n") invisible(x) } anova.ivreg <- function(object, object2, test = "F", vcov = NULL, ...) 
{ rval <- waldtest(object, object2, test = test, vcov = vcov) if(is.null(vcov)) { head <- attr(rval, "heading") head[1] <- "Analysis of Variance Table\n" rss <- sapply(list(object, object2), function(x) sum(residuals(x)^2)) dss <- c(NA, -diff(rss)) rval <- cbind(rval, cbind("RSS" = rss, "Sum of Sq" = dss))[,c(1L, 5L, 2L, 6L, 3L:4L)] attr(rval, "heading") <- head class(rval) <- c("anova", "data.frame") } return(rval) } update.ivreg <- function (object, formula., ..., evaluate = TRUE) { if(is.null(call <- getCall(object))) stop("need an object with call component") extras <- match.call(expand.dots = FALSE)$... if(!missing(formula.)) call$formula <- formula(update(Formula(formula(object)), formula.)) if(length(extras)) { existing <- !is.na(match(names(extras), names(call))) for (a in names(extras)[existing]) call[[a]] <- extras[[a]] if(any(!existing)) { call <- c(as.list(call), extras[!existing]) call <- as.call(call) } } if(evaluate) eval(call, parent.frame()) else call } ivdiag <- function(obj, vcov. = NULL) { ## extract data y <- model.response(model.frame(obj)) x <- model.matrix(obj, component = "regressors") z <- model.matrix(obj, component = "instruments") w <- weights(obj) ## names of "regressors" and "instruments" xnam <- colnames(x) znam <- colnames(z) ## relabel "instruments" to match order from "regressors" fx <- attr(terms(obj, component = "regressors"), "factors") fz <- attr(terms(obj, component = "instruments"), "factors") fz <- fz[c(rownames(fx)[rownames(fx) %in% rownames(fz)], rownames(fz)[!(rownames(fz) %in% rownames(fx))]), , drop = FALSE] nz <- apply(fz > 0, 2, function(x) paste(rownames(fz)[x], collapse = ":")) nz <- nz[names(nz) != nz] nz <- nz[nz %in% colnames(fx)] if(length(nz) > 0L) znam[names(nz)] <- nz ## endogenous/instrument variables endo <- which(!(xnam %in% znam)) inst <- which(!(znam %in% xnam)) if((length(endo) <= 0L) | (length(inst) <= 0L)) stop("no endogenous/instrument variables") ## return value rval <- matrix(NA, nrow = length(endo) + 2L, ncol = 4L) colnames(rval) <- c("df1", "df2", "statistic", "p-value") rownames(rval) <- c(if(length(endo) > 1L) paste0("Weak instruments (", xnam[endo], ")") else "Weak instruments", "Wu-Hausman", "Sargan") ## convenience functions lmfit <- function(x, y, w = NULL) { rval <- if(is.null(w)) lm.fit(x, y) else lm.wfit(x, y, w) rval$x <- x rval$y <- y return(rval) } rss <- function(obj, weights = NULL) if(is.null(weights)) sum(obj$residuals^2) else sum(weights * obj$residuals^2) wald <- function(obj0, obj1, vcov. = NULL, weights = NULL) { df <- c(obj1$rank - obj0$rank, obj1$df.residual) if(!is.function(vcov.)) { w <- ((rss(obj0, w) - rss(obj1, w)) / df[1L]) / (rss(obj1, w)/df[2L]) } else { if(NCOL(obj0$coefficients) > 1L) { cf0 <- structure(as.vector(obj0$coefficients), .Names = c(outer(rownames(obj0$coefficients), colnames(obj0$coefficients), paste, sep = ":"))) cf1 <- structure(as.vector(obj1$coefficients), .Names = c(outer(rownames(obj1$coefficients), colnames(obj1$coefficients), paste, sep = ":"))) } else { cf0 <- obj0$coefficients cf1 <- obj1$coefficients } cf0 <- na.omit(cf0) cf1 <- na.omit(cf1) ovar <- which(!(names(cf1) %in% names(cf0))) vc <- vcov.(lm(obj1$y ~ 0 + obj1$x, weights = w)) w <- t(cf1[ovar]) %*% solve(vc[ovar,ovar]) %*% cf1[ovar] w <- w / df[1L] } pval <- pf(w, df[1L], df[2L], lower.tail = FALSE) c(df, w, pval) } # Test for weak instruments for(i in seq_along(endo)) { aux0 <- lmfit(z[, -inst, drop = FALSE], x[, endo[i]], w) aux1 <- lmfit(z, x[, endo[i]], w) rval[i, ] <- wald(aux0, aux1, vcov. 
= vcov., weights = w) } ## Wu-Hausman test for endogeneity if(length(endo) > 1L) aux1 <- lmfit(z, x[, endo], w) xfit <- as.matrix(aux1$fitted.values) colnames(xfit) <- paste("fit", colnames(xfit), sep = "_") auxo <- lmfit( x, y, w) auxe <- lmfit(cbind(x, xfit), y, w) rval[nrow(rval) - 1L, ] <- wald(auxo, auxe, vcov. = vcov., weights = w) ## Sargan test of overidentifying restrictions r <- residuals(obj) auxs <- lmfit(z, r, w) rssr <- if(is.null(w)) sum((r - mean(r))^2) else sum(w * (r - weighted.mean(r, w))^2) rval[nrow(rval), 1L] <- length(inst) - length(endo) if(rval[nrow(rval), 1L] > 0L) { rval[nrow(rval), 3L] <- length(r) * (1 - rss(auxs, w)/rssr) rval[nrow(rval), 4L] <- pchisq(rval[nrow(rval), 3L], rval[nrow(rval), 1L], lower.tail = FALSE) } return(rval) } ## If #Instruments = #Regressors then ## b = (Z'X)^{-1} Z'y ## and solves the estimating equations ## Z' (y - X beta) = 0 ## For ## cov(y) = Omega ## the following holds ## cov(b) = (Z'X)^{-1} Z' Omega Z (X'Z)^{-1} ## ## Generally: ## b = (X' P_Z X)^{-1} X' P_Z y ## with estimating equations ## X' P_Z (y - X beta) = 0 ## where P_Z is the usual projector (hat matrix wrt Z) and ## cov(b) = (X' P_Z X)^{-1} X' P_Z Omega P_Z X (X' P_Z X)^{-1} ## Thus meat is X' P_Z Omega P_Z X and bread i (X' P_Z X)^{-1} ## ## See ## http://www.stata.com/support/faqs/stat/2sls.html
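
## A hedged numerical sketch of the formulas above (not part of the package
## sources): for simulated data, the closed-form 2SLS estimator
## b = (X' P_Z X)^{-1} X' P_Z y matches the coefficients returned by
## ivreg.fit(). All variable names and parameter values below are illustrative.
if (FALSE) {
  set.seed(1)
  n  <- 200
  z1 <- rnorm(n); z2 <- rnorm(n)
  u  <- rnorm(n)
  x1 <- 0.8 * z1 + 0.5 * z2 + 0.5 * u + rnorm(n)  # endogenous regressor
  y  <- 1 + 2 * x1 + u
  X  <- cbind(1, x1)
  Z  <- cbind(1, z1, z2)
  PZ <- Z %*% solve(crossprod(Z)) %*% t(Z)        # projector onto the column space of Z
  b  <- solve(t(X) %*% PZ %*% X, t(X) %*% PZ %*% y)
  cbind(closed_form = drop(b),
        ivreg_fit   = ivreg.fit(X, y, Z)$coefficients)
}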
/scratch/gouwar.j/cran-all/cranData/AER/R/ivreg.R
## Dedicated methods for "tobit" objects that really should be inherited ## from "survival". However, some versions of "survival" did not provide ## these at all or had bugs. With survival >= 3.1-6 the methods in ## "survival" are ok. So for now we still keep the "tobit" methods but ## might remove them in future versions. fitted.tobit <- function(object, ...) predict(object, type = "response", se.fit = FALSE) nobs.tobit <- function(object, ...) length(object$linear.predictors) weights.tobit <- function(object, ...) model.weights(model.frame(object)) vcov.tobit <- function(object, ...) { vc <- NextMethod() if(is.null(colnames(vc))) { nam <- names(object$coefficients) nam <- if(length(nam) == ncol(vc)) { nam } else if(length(nam) == ncol(vc) - 1L) { c(nam, "Log(scale)") } else { c(nam, names(object$scale)) } colnames(vc) <- rownames(vc) <- nam } return(vc) } bread.tobit <- function(x, ...) { length(x$linear.predictors) * vcov(x) } ## "survival" chose not to include this deviance() method ## so this needs to be provided even if "survival" >= 3.1-6 ## is required. deviance.survreg <- function(object, ...) sum(residuals(object, type = "deviance")^2) ## convenience tobit() interface to survreg() tobit <- function(formula, left = 0, right = Inf, dist = "gaussian", subset = NULL, data = list(), ...) { ## remember original environment oenv <- environment(formula) oformula <- eval(formula) ## process censoring stopifnot(all(left < right)) lfin <- any(is.finite(left)) rfin <- any(is.finite(right)) ## formula processing: replace dependent variable ## original y <- formula[[2]] if(lfin & rfin) { ## interval censoring formula[[2]] <- call("Surv", call("ifelse", call(">=", y, substitute(right)), substitute(right), call("ifelse", call("<=", y, substitute(left)), substitute(left), y)), time2 = substitute(right), call("ifelse", call(">=", y, substitute(right)), 0, call("ifelse", call("<=", y, substitute(left)), 2, 1)), type = "interval") } else if(!rfin) { ## left censoring formula[[2]] <- call("Surv", call("ifelse", call("<=", y, substitute(left)), substitute(left), y), call(">", y, substitute(left)) , type = "left") } else { ## right censoring formula[[2]] <- call("Surv", call("ifelse", call(">=", y, substitute(right)), substitute(right), y), call("<", y, substitute(right)) , type = "right") } ## ensure the the fully-qualified survival::Surv() is used rather than just Surv() formula[[2]][[1]] <- quote(survival::Surv) ## call survreg cl <- ocl <- match.call() cl$formula <- formula cl$left <- NULL cl$right <- NULL cl$dist <- dist cl[[1]] <- quote(survival::survreg) rval <- eval(cl, oenv) ## slightly modify result class(rval) <- c("tobit", class(rval)) ocl$formula <- oformula rval$call <- ocl rval$formula <- formula return(rval) } ## add printing and summary methods that are more similar to ## the corresponding methods for lm objects print.tobit <- function(x, digits = max(3, getOption("digits") - 3), ...) 
{ ## failure if(!is.null(x$fail)) { cat("tobit/survreg failed.", x$fail, "\n") return(invisible(x)) } ## call cat("\nCall:\n") cat(paste(deparse(x$call), sep = "\n", collapse = "\n"), "\n\n", sep = "") ## coefficients coef <- x$coefficients if(any(nas <- is.na(coef))) { if (is.null(names(coef))) names(coef) <- paste("b", 1:length(coef), sep = "") cat("Coefficients: (", sum(nas), " not defined because of singularities)\n", sep = "") } else cat("Coefficients:\n") print.default(format(coef, digits = digits), print.gap = 2, quote = FALSE) ## scale if(nrow(x$var) == length(coef)) cat("\nScale fixed at", format(x$scale, digits = digits), "\n") else if (length(x$scale) == 1) cat("\nScale:", format(x$scale, digits = digits), "\n") else { cat("\nScale:\n") print(format(x$scale, digits = digits), ...) } ## return cat("\n") invisible(x) } summary.tobit <- function(object, correlation = FALSE, symbolic.cor = FALSE, vcov. = NULL, ...) { ## failure if(!is.null(object$fail)) { warning("tobit/survreg failed.", object$fail, " No summary provided\n") return(invisible(object)) } ## rank if(all(is.na(object$coefficients))) { warning("This model has zero rank --- no summary is provided") return(invisible(object)) } ## vcov if(is.null(vcov.)) vcov. <- vcov(object) else { if(is.function(vcov.)) vcov. <- vcov.(object) } ## coefmat coef <- coeftest(object, vcov. = vcov., ...) attr(coef, "method") <- NULL ## Wald test nc <- length(coef(object)) has_intercept <- attr(terms(object), "intercept") > 0.5 wald <- if(nc <= has_intercept) NULL else linearHypothesis(object, if(has_intercept) cbind(0, diag(nc-1)) else diag(nc), vcov. = vcov.)[2,3] ## instead of: waldtest(object, vcov = vcov.) ## correlation correlation <- if(correlation) cov2cor(vcov.) else NULL ## distribution dist <- object$dist if(is.character(dist)) sd <- survival::survreg.distributions[[dist]] else sd <- dist if(length(object$parms)) pprint <- paste(sd$name, "distribution: parmameters =", object$parms) else pprint <- paste(sd$name, "distribution") ## number of observations ## (incorporating "bug fix" change for $y in survival 2.42-7) surv_table <- function(y) { if(!inherits(y, "Surv")) y <- y$y type <- attr(y, "type") if(is.null(type) || (type == "left" && any(y[, 2L] > 1))) type <- "old" y <- switch(type, "left" = 2 - y[, 2L], "interval" = y[, 3L], y[, 2L] ) table(factor(y, levels = c(2, 1, 0, 3), labels = c("Left-censored", "Uncensored", "Right-censored", "Interval-censored"))) } nobs <- surv_table(object$y) nobs <- c("Total" = sum(nobs), nobs[1:3]) rval <- object[match(c("call", "df", "loglik", "iter", "na.action", "idf", "scale"), names(object), nomatch = 0)] rval <- c(rval, list(coefficients = coef, correlation = correlation, symbolic.cor = symbolic.cor, parms = pprint, n = nobs, wald = wald)) class(rval) <- "summary.tobit" return(rval) } print.summary.tobit <- function(x, digits = max(3, getOption("digits") - 3), ...) { ## call cat("\nCall:\n") cat(paste(deparse(x$call), sep = "\n", collapse = "\n"), "\n\n", sep = "") ## observations and censoring if(length(x$na.action)) cat("Observations: (", naprint(x$na.action), ")\n", sep = "") else cat("Observations:\n") print(x$n) ## coefficients if(any(nas <- is.na(x$coefficients[,1]))) cat("\nCoefficients: (", sum(nas), " not defined because of singularities)\n", sep = "") else cat("\nCoefficients:\n") printCoefmat(x$coefficients, digits = digits, ...) 
## scale if("Log(scale)" %in% rownames(x$coefficients)) cat("\nScale:", format(x$scale, digits = digits), "\n") else cat("\nScale fixed at", format(x$scale, digits = digits), "\n") ## logLik and Chi-squared test cat(paste("\n", x$parms, "\n", sep = "")) cat("Number of Newton-Raphson Iterations:", format(trunc(x$iter)), "\n") cat("Log-likelihood:", formatC(x$loglik[2], digits = digits), "on", x$df, "Df\n") if(!is.null(x$wald)) cat("Wald-statistic:", formatC(x$wald, digits = digits), "on", sum(x$df) - x$idf, "Df, p-value:", format.pval(pchisq(x$wald, sum(x$df) - x$idf, lower.tail = FALSE)), "\n") ## correlation correl <- x$correlation if (!is.null(correl)) { p <- NCOL(correl) if (p > 1) { cat("\nCorrelation of Coefficients:\n") if (is.logical(x$symbolic.cor) && x$symbolic.cor) { print(symnum(correl, abbr.colnames = NULL)) } else { correl <- format(round(correl, 2), nsmall = 2, digits = digits) correl[!lower.tri(correl)] <- "" print(correl[-1, -p, drop = FALSE], quote = FALSE) } } } ## return cat("\n") invisible(x) } ## as the apparent y ~ ... and actual Surv(y) ~ ... formula ## differ, some standard functionality has to be done by work-arounds formula.tobit <- function(x, ...) x$formula model.frame.tobit <- function(formula, ...) { Call <- formula$call Call[[1]] <- quote(stats::model.frame) Call <- Call[match(c("", "formula", "data", "weights", "subset", "na.action"), names(Call), 0)] dots <- list(...) nargs <- dots[match(c("data", "na.action", "subset"), names(dots), 0)] Call[names(nargs)] <- nargs Call$formula <- formula$formula env <- environment(formula$terms) if(is.null(env)) env <- parent.frame() eval(Call, env) } update.tobit <- function(object, formula., ..., evaluate = TRUE) { call <- object$call extras <- match.call(expand.dots = FALSE)$... if(!missing(formula.)) { ff <- formula(object) ff[[2]] <- call$formula[[2]] call$formula <- update.formula(ff, formula.) } if (length(extras) > 0) { existing <- !is.na(match(names(extras), names(call))) for (a in names(extras)[existing]) call[[a]] <- extras[[a]] if (any(!existing)) { call <- c(as.list(call), extras[!existing]) call <- as.call(call) } } if(evaluate) eval(call, parent.frame()) else call } waldtest.tobit <- function(object, ..., test = c("Chisq", "F"), name = NULL) { if(is.null(name)) name <- function(x) paste(deparse(x$call$formula), collapse="\n") waldtest.default(object, ..., test = match.arg(test), name = name) } lrtest.tobit <- function(object, ..., name = NULL) { if(is.null(name)) name <- function(x) paste(deparse(x$call$formula), collapse="\n") lrtest.default(object, ..., name = name) } linearHypothesis.tobit <- function(model, hypothesis.matrix, rhs = NULL, vcov. = NULL, ...) { if(is.null(vcov.)) { vcov. <- vcov(model) } else { if(is.function(vcov.)) vcov. <- vcov.(model) } if("Log(scale)" %in% rownames(vcov.)) vcov. <- vcov.[-nrow(vcov.), -ncol(vcov.)] model$formula <- model$call$formula car::linearHypothesis.default(model, hypothesis.matrix = hypothesis.matrix, rhs = rhs, vcov. = vcov., ...) }
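
## A hedged usage sketch (not part of the package sources): tobit() wraps
## survival::survreg() for censored responses. The Affairs data from AER and
## the chosen regressors are used purely as an illustration.
if (FALSE) {
  data("Affairs", package = "AER")
  fm <- tobit(affairs ~ age + yearsmarried + religiousness + occupation + rating,
              data = Affairs)
  summary(fm)  # left-censoring at 0 is the default (left = 0, right = Inf)
}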
/scratch/gouwar.j/cran-all/cranData/AER/R/tobit.R
################################################### ### chunk number 1: setup ################################################### options(prompt = "R> ", continue = "+ ", width = 64, digits = 4, show.signif.stars = FALSE, useFancyQuotes = FALSE) options(SweaveHooks = list(onefig = function() {par(mfrow = c(1,1))}, twofig = function() {par(mfrow = c(1,2))}, threefig = function() {par(mfrow = c(1,3))}, fourfig = function() {par(mfrow = c(2,2))}, sixfig = function() {par(mfrow = c(3,2))})) library("AER") suppressWarnings(RNGversion("3.5.0")) set.seed(1071) ################################################### ### chunk number 2: calc1 ################################################### 1 + 1 2^3 ################################################### ### chunk number 3: calc2 ################################################### log(exp(sin(pi/4)^2) * exp(cos(pi/4)^2)) ################################################### ### chunk number 4: vec1 ################################################### x <- c(1.8, 3.14, 4, 88.169, 13) ################################################### ### chunk number 5: length ################################################### length(x) ################################################### ### chunk number 6: vec2 ################################################### 2 * x + 3 5:1 * x + 1:5 ################################################### ### chunk number 7: vec3 ################################################### log(x) ################################################### ### chunk number 8: subset1 ################################################### x[c(1, 4)] ################################################### ### chunk number 9: subset2 ################################################### x[-c(2, 3, 5)] ################################################### ### chunk number 10: pattern1 ################################################### ones <- rep(1, 10) even <- seq(from = 2, to = 20, by = 2) trend <- 1981:2005 ################################################### ### chunk number 11: pattern2 ################################################### c(ones, even) ################################################### ### chunk number 12: matrix1 ################################################### A <- matrix(1:6, nrow = 2) ################################################### ### chunk number 13: matrix2 ################################################### t(A) ################################################### ### chunk number 14: matrix3 ################################################### dim(A) nrow(A) ncol(A) ################################################### ### chunk number 15: matrix-subset ################################################### A1 <- A[1:2, c(1, 3)] ################################################### ### chunk number 16: matrix4 ################################################### solve(A1) ################################################### ### chunk number 17: matrix-solve ################################################### A1 %*% solve(A1) ################################################### ### chunk number 18: diag ################################################### diag(4) ################################################### ### chunk number 19: matrix-combine1 ################################################### cbind(1, A1) ################################################### ### chunk number 20: matrix-combine2 ################################################### rbind(A1, diag(4, 2)) ################################################### ### chunk number 21: 
vector-mode ################################################### x <- c(1.8, 3.14, 4, 88.169, 13) ################################################### ### chunk number 22: logical ################################################### x > 3.5 ################################################### ### chunk number 23: names ################################################### names(x) <- c("a", "b", "c", "d", "e") x ################################################### ### chunk number 24: subset-more ################################################### x[3:5] x[c("c", "d", "e")] x[x > 3.5] ################################################### ### chunk number 25: list1 ################################################### mylist <- list(sample = rnorm(5), family = "normal distribution", parameters = list(mean = 0, sd = 1)) mylist ################################################### ### chunk number 26: list2 ################################################### mylist[[1]] mylist[["sample"]] mylist$sample ################################################### ### chunk number 27: list3 ################################################### mylist[[3]]$sd ################################################### ### chunk number 28: logical2 ################################################### x <- c(1.8, 3.14, 4, 88.169, 13) x > 3 & x <= 4 ################################################### ### chunk number 29: logical3 ################################################### which(x > 3 & x <= 4) ################################################### ### chunk number 30: logical4 ################################################### all(x > 3) any(x > 3) ################################################### ### chunk number 31: logical5 ################################################### (1.5 - 0.5) == 1 (1.9 - 0.9) == 1 ################################################### ### chunk number 32: logical6 ################################################### all.equal(1.9 - 0.9, 1) ################################################### ### chunk number 33: logical7 ################################################### 7 + TRUE ################################################### ### chunk number 34: coercion1 ################################################### is.numeric(x) is.character(x) as.character(x) ################################################### ### chunk number 35: coercion2 ################################################### c(1, "a") ################################################### ### chunk number 36: rng1 ################################################### set.seed(123) rnorm(2) rnorm(2) set.seed(123) rnorm(2) ################################################### ### chunk number 37: rng2 ################################################### sample(1:5) sample(c("male", "female"), size = 5, replace = TRUE, prob = c(0.2, 0.8)) ################################################### ### chunk number 38: flow1 ################################################### x <- c(1.8, 3.14, 4, 88.169, 13) if(rnorm(1) > 0) sum(x) else mean(x) ################################################### ### chunk number 39: flow2 ################################################### ifelse(x > 4, sqrt(x), x^2) ################################################### ### chunk number 40: flow3 ################################################### for(i in 2:5) { x[i] <- x[i] - x[i-1] } x[-1] ################################################### ### chunk number 41: flow4 ################################################### while(sum(x) < 100) { x <- 2 * x } x 
################################################### ### chunk number 42: cmeans ################################################### cmeans <- function(X) { rval <- rep(0, ncol(X)) for(j in 1:ncol(X)) { mysum <- 0 for(i in 1:nrow(X)) mysum <- mysum + X[i,j] rval[j] <- mysum/nrow(X) } return(rval) } ################################################### ### chunk number 43: colmeans1 ################################################### X <- matrix(1:20, ncol = 2) cmeans(X) ################################################### ### chunk number 44: colmeans2 ################################################### colMeans(X) ################################################### ### chunk number 45: colmeans3 ################################################### X <- matrix(rnorm(2*10^6), ncol = 2) system.time(colMeans(X)) system.time(cmeans(X)) ################################################### ### chunk number 46: colmeans4 ################################################### cmeans2 <- function(X) { rval <- rep(0, ncol(X)) for(j in 1:ncol(X)) rval[j] <- mean(X[,j]) return(rval) } ################################################### ### chunk number 47: colmeans5 ################################################### system.time(cmeans2(X)) ################################################### ### chunk number 48: colmeans6 eval=FALSE ################################################### ## apply(X, 2, mean) ################################################### ### chunk number 49: colmeans7 ################################################### system.time(apply(X, 2, mean)) ################################################### ### chunk number 50: formula1 ################################################### f <- y ~ x class(f) ################################################### ### chunk number 51: formula2 ################################################### x <- seq(from = 0, to = 10, by = 0.5) y <- 2 + 3 * x + rnorm(21) ################################################### ### chunk number 52: formula3 eval=FALSE ################################################### ## plot(y ~ x) ## lm(y ~ x) ################################################### ### chunk number 53: formula3a ################################################### print(lm(y ~ x)) ################################################### ### chunk number 54: formula3b ################################################### plot(y ~ x) ################################################### ### chunk number 55: formula3c ################################################### fm <- lm(y ~ x) ################################################### ### chunk number 56: mydata1 ################################################### mydata <- data.frame(one = 1:10, two = 11:20, three = 21:30) ################################################### ### chunk number 57: mydata1a ################################################### mydata <- as.data.frame(matrix(1:30, ncol = 3)) names(mydata) <- c("one", "two", "three") ################################################### ### chunk number 58: mydata2 ################################################### mydata$two mydata[, "two"] mydata[, 2] ################################################### ### chunk number 59: attach ################################################### attach(mydata) mean(two) detach(mydata) ################################################### ### chunk number 60: with ################################################### with(mydata, mean(two)) ################################################### ### chunk number 61: mydata-subset 
################################################### mydata.sub <- subset(mydata, two <= 16, select = -two) ################################################### ### chunk number 62: write-table ################################################### write.table(mydata, file = "mydata.txt", col.names = TRUE) ################################################### ### chunk number 63: read-table ################################################### newdata <- read.table("mydata.txt", header = TRUE) ################################################### ### chunk number 64: save ################################################### save(mydata, file = "mydata.rda") ################################################### ### chunk number 65: load ################################################### load("mydata.rda") ################################################### ### chunk number 66: file-remove ################################################### file.remove("mydata.rda") ################################################### ### chunk number 67: data ################################################### data("Journals", package = "AER") ################################################### ### chunk number 68: foreign ################################################### library("foreign") write.dta(mydata, file = "mydata.dta") ################################################### ### chunk number 69: read-dta ################################################### mydata <- read.dta("mydata.dta") ################################################### ### chunk number 70: cleanup ################################################### file.remove("mydata.dta") ################################################### ### chunk number 71: factor ################################################### g <- rep(0:1, c(2, 4)) g <- factor(g, levels = 0:1, labels = c("male", "female")) g ################################################### ### chunk number 72: na1 ################################################### newdata <- read.table("mydata.txt", na.strings = "-999") ################################################### ### chunk number 73: na2 ################################################### file.remove("mydata.txt") ################################################### ### chunk number 74: oop1 ################################################### x <- c(1.8, 3.14, 4, 88.169, 13) g <- factor(rep(c(0, 1), c(2, 4)), levels = c(0, 1), labels = c("male", "female")) ################################################### ### chunk number 75: oop2 ################################################### summary(x) summary(g) ################################################### ### chunk number 76: oop3 ################################################### class(x) class(g) ################################################### ### chunk number 77: oop4 ################################################### summary ################################################### ### chunk number 78: oop5 ################################################### normsample <- function(n, ...) { rval <- rnorm(n, ...) class(rval) <- "normsample" return(rval) } ################################################### ### chunk number 79: oop6 ################################################### set.seed(123) x <- normsample(10, mean = 5) class(x) ################################################### ### chunk number 80: oop7 ################################################### summary.normsample <- function(object, ...) 
{ rval <- c(length(object), mean(object), sd(object)) names(rval) <- c("sample size","mean","standard deviation") return(rval) } ################################################### ### chunk number 81: oop8 ################################################### summary(x) ################################################### ### chunk number 82: journals-data eval=FALSE ################################################### ## data("Journals") ## Journals$citeprice <- Journals$price/Journals$citations ## attach(Journals) ## plot(log(subs), log(citeprice)) ## rug(log(subs)) ## rug(log(citeprice), side = 2) ## detach(Journals) ################################################### ### chunk number 83: journals-data1 ################################################### data("Journals") Journals$citeprice <- Journals$price/Journals$citations attach(Journals) plot(log(subs), log(citeprice)) rug(log(subs)) rug(log(citeprice), side = 2) detach(Journals) ################################################### ### chunk number 84: plot-formula ################################################### plot(log(subs) ~ log(citeprice), data = Journals) ################################################### ### chunk number 85: graphics1 ################################################### plot(log(subs) ~ log(citeprice), data = Journals, pch = 20, col = "blue", ylim = c(0, 8), xlim = c(-7, 4), main = "Library subscriptions") ################################################### ### chunk number 86: graphics2 ################################################### pdf("myfile.pdf", height = 5, width = 6) plot(1:20, pch = 1:20, col = 1:20, cex = 2) dev.off() ################################################### ### chunk number 87: dnorm-annotate eval=FALSE ################################################### ## curve(dnorm, from = -5, to = 5, col = "slategray", lwd = 3, ## main = "Density of the standard normal distribution") ## text(-5, 0.3, expression(f(x) == frac(1, sigma ~~ ## sqrt(2*pi)) ~~ e^{-frac((x - mu)^2, 2*sigma^2)}), adj = 0) ################################################### ### chunk number 88: dnorm-annotate1 ################################################### curve(dnorm, from = -5, to = 5, col = "slategray", lwd = 3, main = "Density of the standard normal distribution") text(-5, 0.3, expression(f(x) == frac(1, sigma ~~ sqrt(2*pi)) ~~ e^{-frac((x - mu)^2, 2*sigma^2)}), adj = 0) ################################################### ### chunk number 89: eda1 ################################################### data("CPS1985") str(CPS1985) ################################################### ### chunk number 90: eda2 ################################################### head(CPS1985) ################################################### ### chunk number 91: eda3 ################################################### levels(CPS1985$occupation)[c(2, 6)] <- c("techn", "mgmt") attach(CPS1985) ################################################### ### chunk number 92: eda4 ################################################### summary(wage) ################################################### ### chunk number 93: eda5 ################################################### mean(wage) median(wage) ################################################### ### chunk number 94: eda6 ################################################### var(wage) sd(wage) ################################################### ### chunk number 95: wage-hist ################################################### hist(wage, freq = FALSE) hist(log(wage), freq = FALSE) 
lines(density(log(wage)), col = 4) ################################################### ### chunk number 96: wage-hist1 ################################################### hist(wage, freq = FALSE) hist(log(wage), freq = FALSE) lines(density(log(wage)), col = 4) ################################################### ### chunk number 97: occ-table ################################################### summary(occupation) ################################################### ### chunk number 98: occ-table ################################################### tab <- table(occupation) prop.table(tab) ################################################### ### chunk number 99: occ-barpie ################################################### barplot(tab) pie(tab) ################################################### ### chunk number 100: occ-barpie ################################################### par(mar = c(4, 3, 1, 1)) barplot(tab, las = 3) par(mar = c(2, 3, 1, 3)) pie(tab, radius = 1) ################################################### ### chunk number 101: xtabs ################################################### xtabs(~ gender + occupation, data = CPS1985) ################################################### ### chunk number 102: spine eval=FALSE ################################################### ## plot(gender ~ occupation, data = CPS1985) ################################################### ### chunk number 103: spine1 ################################################### plot(gender ~ occupation, data = CPS1985) ################################################### ### chunk number 104: wageeduc-cor ################################################### cor(log(wage), education) cor(log(wage), education, method = "spearman") ################################################### ### chunk number 105: wageeduc-scatter eval=FALSE ################################################### ## plot(log(wage) ~ education) ################################################### ### chunk number 106: wageeduc-scatter1 ################################################### plot(log(wage) ~ education) ################################################### ### chunk number 107: tapply ################################################### tapply(log(wage), gender, mean) ################################################### ### chunk number 108: boxqq1 eval=FALSE ################################################### ## plot(log(wage) ~ gender) ################################################### ### chunk number 109: boxqq2 eval=FALSE ################################################### ## mwage <- subset(CPS1985, gender == "male")$wage ## fwage <- subset(CPS1985, gender == "female")$wage ## qqplot(mwage, fwage, xlim = range(wage), ylim = range(wage), ## xaxs = "i", yaxs = "i", xlab = "male", ylab = "female") ## abline(0, 1) ################################################### ### chunk number 110: qq ################################################### plot(log(wage) ~ gender) mwage <- subset(CPS1985, gender == "male")$wage fwage <- subset(CPS1985, gender == "female")$wage qqplot(mwage, fwage, xlim = range(wage), ylim = range(wage), xaxs = "i", yaxs = "i", xlab = "male", ylab = "female") abline(0, 1) ################################################### ### chunk number 111: detach ################################################### detach(CPS1985)
###################################################
### end of demo script: AER/demo/Ch-Basics.R
###################################################
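## The next demo script (AER/demo/Ch-Intro.R) reproduces the introductory
## examples: a log-log regression for the Journals data, OLS and quantile
## regression (quantreg::rq) for the CPS1985 wages, a bivariate kernel
## density estimate via KernSmooth::bkde2D(), and basic R usage
## (install.packages(), library(), objects(), search(), apropos()).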
################################################### ### chunk number 1: setup ################################################### options(prompt = "R> ", continue = "+ ", width = 64, digits = 4, show.signif.stars = FALSE, useFancyQuotes = FALSE) options(SweaveHooks = list(onefig = function() {par(mfrow = c(1,1))}, twofig = function() {par(mfrow = c(1,2))}, threefig = function() {par(mfrow = c(1,3))}, fourfig = function() {par(mfrow = c(2,2))}, sixfig = function() {par(mfrow = c(3,2))})) library("AER") suppressWarnings(RNGversion("3.5.0")) set.seed(1071) ################################################### ### chunk number 2: journals-data ################################################### data("Journals", package = "AER") ################################################### ### chunk number 3: journals-dim ################################################### dim(Journals) names(Journals) ################################################### ### chunk number 4: journals-plot eval=FALSE ################################################### ## plot(log(subs) ~ log(price/citations), data = Journals) ################################################### ### chunk number 5: journals-lm eval=FALSE ################################################### ## j_lm <- lm(log(subs) ~ log(price/citations), data = Journals) ## abline(j_lm) ################################################### ### chunk number 6: journals-lmplot ################################################### plot(log(subs) ~ log(price/citations), data = Journals) j_lm <- lm(log(subs) ~ log(price/citations), data = Journals) abline(j_lm) ################################################### ### chunk number 7: journals-lm-summary ################################################### summary(j_lm) ################################################### ### chunk number 8: cps-data ################################################### data("CPS1985", package = "AER") cps <- CPS1985 ################################################### ### chunk number 9: cps-data1 eval=FALSE ################################################### ## data("CPS1985", package = "AER") ## cps <- CPS1985 ################################################### ### chunk number 10: cps-reg ################################################### library("quantreg") cps_lm <- lm(log(wage) ~ experience + I(experience^2) + education, data = cps) cps_rq <- rq(log(wage) ~ experience + I(experience^2) + education, data = cps, tau = seq(0.2, 0.8, by = 0.15)) ################################################### ### chunk number 11: cps-predict ################################################### cps2 <- data.frame(education = mean(cps$education), experience = min(cps$experience):max(cps$experience)) cps2 <- cbind(cps2, predict(cps_lm, newdata = cps2, interval = "prediction")) cps2 <- cbind(cps2, predict(cps_rq, newdata = cps2, type = "")) ################################################### ### chunk number 12: rq-plot eval=FALSE ################################################### ## plot(log(wage) ~ experience, data = cps) ## for(i in 6:10) lines(cps2[,i] ~ experience, ## data = cps2, col = "red") ################################################### ### chunk number 13: rq-plot1 ################################################### plot(log(wage) ~ experience, data = cps) for(i in 6:10) lines(cps2[,i] ~ experience, data = cps2, col = "red") ################################################### ### chunk number 14: srq-plot eval=FALSE ################################################### ## plot(summary(cps_rq)) 
###################################################
### chunk number 15: srq-plot1
###################################################
plot(summary(cps_rq))

###################################################
### chunk number 16: bkde-fit
###################################################
library("KernSmooth")
cps_bkde <- bkde2D(cbind(cps$experience, log(cps$wage)),
  bandwidth = c(3.5, 0.5), gridsize = c(200, 200))

###################################################
### chunk number 17: bkde-plot eval=FALSE
###################################################
## image(cps_bkde$x1, cps_bkde$x2, cps_bkde$fhat,
##   col = rev(gray.colors(10, gamma = 1)),
##   xlab = "experience", ylab = "log(wage)")
## box()
## lines(fit ~ experience, data = cps2)
## lines(lwr ~ experience, data = cps2, lty = 2)
## lines(upr ~ experience, data = cps2, lty = 2)

###################################################
### chunk number 18: bkde-plot1
###################################################
image(cps_bkde$x1, cps_bkde$x2, cps_bkde$fhat,
  col = rev(gray.colors(10, gamma = 1)),
  xlab = "experience", ylab = "log(wage)")
box()
lines(fit ~ experience, data = cps2)
lines(lwr ~ experience, data = cps2, lty = 2)
lines(upr ~ experience, data = cps2, lty = 2)

###################################################
### chunk number 19: install eval=FALSE
###################################################
## install.packages("AER")

###################################################
### chunk number 20: library
###################################################
library("AER")

###################################################
### chunk number 21: objects
###################################################
objects()

###################################################
### chunk number 22: search
###################################################
search()

###################################################
### chunk number 23: assignment
###################################################
x <- 2
objects()

###################################################
### chunk number 24: remove
###################################################
remove(x)
objects()

###################################################
### chunk number 25: log eval=FALSE
###################################################
## log(16, 2)
## log(x = 16, 2)
## log(16, base = 2)
## log(base = 2, x = 16)

###################################################
### chunk number 26: q eval=FALSE
###################################################
## q()

###################################################
### chunk number 27: apropos
###################################################
apropos("help")
###################################################
### end of demo script: AER/demo/Ch-Intro.R
###################################################
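## The next demo script (AER/demo/Ch-LinearRegression.R) covers linear
## regression: the Journals subscription model (summary(), anova(),
## confint(), predictions, diagnostic plots, linearHypothesis()), wage
## equations for CPS1988 (nested models, interactions, regression splines
## via bs()), weighted and feasible GLS, dynamic linear models for
## USMacroG with dynlm, panel models with plm (pooling, within, random
## effects, Arellano-Bond via pgmm()), and SUR estimation via systemfit.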
################################################### ### chunk number 1: setup ################################################### options(prompt = "R> ", continue = "+ ", width = 64, digits = 4, show.signif.stars = FALSE, useFancyQuotes = FALSE) options(SweaveHooks = list(onefig = function() {par(mfrow = c(1,1))}, twofig = function() {par(mfrow = c(1,2))}, threefig = function() {par(mfrow = c(1,3))}, fourfig = function() {par(mfrow = c(2,2))}, sixfig = function() {par(mfrow = c(3,2))})) library("AER") suppressWarnings(RNGversion("3.5.0")) set.seed(1071) ################################################### ### chunk number 2: data-journals ################################################### data("Journals") journals <- Journals[, c("subs", "price")] journals$citeprice <- Journals$price/Journals$citations summary(journals) ################################################### ### chunk number 3: linreg-plot eval=FALSE ################################################### ## plot(log(subs) ~ log(citeprice), data = journals) ## jour_lm <- lm(log(subs) ~ log(citeprice), data = journals) ## abline(jour_lm) ################################################### ### chunk number 4: linreg-plot1 ################################################### plot(log(subs) ~ log(citeprice), data = journals) jour_lm <- lm(log(subs) ~ log(citeprice), data = journals) abline(jour_lm) ################################################### ### chunk number 5: linreg-class ################################################### class(jour_lm) ################################################### ### chunk number 6: linreg-names ################################################### names(jour_lm) ################################################### ### chunk number 7: linreg-summary ################################################### summary(jour_lm) ################################################### ### chunk number 8: linreg-summary ################################################### jour_slm <- summary(jour_lm) class(jour_slm) names(jour_slm) ################################################### ### chunk number 9: linreg-coef ################################################### jour_slm$coefficients ################################################### ### chunk number 10: linreg-anova ################################################### anova(jour_lm) ################################################### ### chunk number 11: journals-coef ################################################### coef(jour_lm) ################################################### ### chunk number 12: journals-confint ################################################### confint(jour_lm, level = 0.95) ################################################### ### chunk number 13: journals-predict ################################################### predict(jour_lm, newdata = data.frame(citeprice = 2.11), interval = "confidence") predict(jour_lm, newdata = data.frame(citeprice = 2.11), interval = "prediction") ################################################### ### chunk number 14: predict-plot eval=FALSE ################################################### ## lciteprice <- seq(from = -6, to = 4, by = 0.25) ## jour_pred <- predict(jour_lm, interval = "prediction", ## newdata = data.frame(citeprice = exp(lciteprice))) ## plot(log(subs) ~ log(citeprice), data = journals) ## lines(jour_pred[, 1] ~ lciteprice, col = 1) ## lines(jour_pred[, 2] ~ lciteprice, col = 1, lty = 2) ## lines(jour_pred[, 3] ~ lciteprice, col = 1, lty = 2) ################################################### 
### chunk number 15: predict-plot1 ################################################### lciteprice <- seq(from = -6, to = 4, by = 0.25) jour_pred <- predict(jour_lm, interval = "prediction", newdata = data.frame(citeprice = exp(lciteprice))) plot(log(subs) ~ log(citeprice), data = journals) lines(jour_pred[, 1] ~ lciteprice, col = 1) lines(jour_pred[, 2] ~ lciteprice, col = 1, lty = 2) lines(jour_pred[, 3] ~ lciteprice, col = 1, lty = 2) ################################################### ### chunk number 16: journals-plot eval=FALSE ################################################### ## par(mfrow = c(2, 2)) ## plot(jour_lm) ## par(mfrow = c(1, 1)) ################################################### ### chunk number 17: journals-plot1 ################################################### par(mfrow = c(2, 2)) plot(jour_lm) par(mfrow = c(1, 1)) ################################################### ### chunk number 18: journal-lht ################################################### linearHypothesis(jour_lm, "log(citeprice) = -0.5") ################################################### ### chunk number 19: CPS-data ################################################### data("CPS1988") summary(CPS1988) ################################################### ### chunk number 20: CPS-base ################################################### cps_lm <- lm(log(wage) ~ experience + I(experience^2) + education + ethnicity, data = CPS1988) ################################################### ### chunk number 21: CPS-visualization-unused eval=FALSE ################################################### ## ex <- 0:56 ## ed <- with(CPS1988, tapply(education, ## list(ethnicity, experience), mean))[, as.character(ex)] ## fm <- cps_lm ## wago <- predict(fm, newdata = data.frame(experience = ex, ## ethnicity = "cauc", education = as.numeric(ed["cauc",]))) ## wagb <- predict(fm, newdata = data.frame(experience = ex, ## ethnicity = "afam", education = as.numeric(ed["afam",]))) ## plot(log(wage) ~ experience, data = CPS1988, pch = ".", ## col = as.numeric(ethnicity)) ## lines(ex, wago) ## lines(ex, wagb, col = 2) ################################################### ### chunk number 22: CPS-summary ################################################### summary(cps_lm) ################################################### ### chunk number 23: CPS-noeth ################################################### cps_noeth <- lm(log(wage) ~ experience + I(experience^2) + education, data = CPS1988) anova(cps_noeth, cps_lm) ################################################### ### chunk number 24: CPS-anova ################################################### anova(cps_lm) ################################################### ### chunk number 25: CPS-noeth2 eval=FALSE ################################################### ## cps_noeth <- update(cps_lm, formula = . ~ . - ethnicity) ################################################### ### chunk number 26: CPS-waldtest ################################################### waldtest(cps_lm, . ~ . 
- ethnicity) ################################################### ### chunk number 27: CPS-spline ################################################### library("splines") cps_plm <- lm(log(wage) ~ bs(experience, df = 5) + education + ethnicity, data = CPS1988) ################################################### ### chunk number 28: CPS-spline-summary eval=FALSE ################################################### ## summary(cps_plm) ################################################### ### chunk number 29: CPS-BIC ################################################### cps_bs <- lapply(3:10, function(i) lm(log(wage) ~ bs(experience, df = i) + education + ethnicity, data = CPS1988)) structure(sapply(cps_bs, AIC, k = log(nrow(CPS1988))), .Names = 3:10) ################################################### ### chunk number 30: plm-plot eval=FALSE ################################################### ## cps <- data.frame(experience = -2:60, education = ## with(CPS1988, mean(education[ethnicity == "cauc"])), ## ethnicity = "cauc") ## cps$yhat1 <- predict(cps_lm, newdata = cps) ## cps$yhat2 <- predict(cps_plm, newdata = cps) ## ## plot(log(wage) ~ jitter(experience, factor = 3), pch = 19, ## col = rgb(0.5, 0.5, 0.5, alpha = 0.02), data = CPS1988) ## lines(yhat1 ~ experience, data = cps, lty = 2) ## lines(yhat2 ~ experience, data = cps) ## legend("topleft", c("quadratic", "spline"), lty = c(2,1), ## bty = "n") ################################################### ### chunk number 31: plm-plot1 ################################################### cps <- data.frame(experience = -2:60, education = with(CPS1988, mean(education[ethnicity == "cauc"])), ethnicity = "cauc") cps$yhat1 <- predict(cps_lm, newdata = cps) cps$yhat2 <- predict(cps_plm, newdata = cps) plot(log(wage) ~ jitter(experience, factor = 3), pch = 19, col = rgb(0.5, 0.5, 0.5, alpha = 0.02), data = CPS1988) lines(yhat1 ~ experience, data = cps, lty = 2) lines(yhat2 ~ experience, data = cps) legend("topleft", c("quadratic", "spline"), lty = c(2,1), bty = "n") ################################################### ### chunk number 32: CPS-int ################################################### cps_int <- lm(log(wage) ~ experience + I(experience^2) + education * ethnicity, data = CPS1988) coeftest(cps_int) ################################################### ### chunk number 33: CPS-int2 eval=FALSE ################################################### ## cps_int <- lm(log(wage) ~ experience + I(experience^2) + ## education + ethnicity + education:ethnicity, ## data = CPS1988) ################################################### ### chunk number 34: CPS-sep ################################################### cps_sep <- lm(log(wage) ~ ethnicity / (experience + I(experience^2) + education) - 1, data = CPS1988) ################################################### ### chunk number 35: CPS-sep-coef ################################################### cps_sep_cf <- matrix(coef(cps_sep), nrow = 2) rownames(cps_sep_cf) <- levels(CPS1988$ethnicity) colnames(cps_sep_cf) <- names(coef(cps_lm))[1:4] cps_sep_cf ################################################### ### chunk number 36: CPS-sep-anova ################################################### anova(cps_sep, cps_lm) ################################################### ### chunk number 37: CPS-sep-visualization-unused eval=FALSE ################################################### ## ex <- 0:56 ## ed <- with(CPS1988, tapply(education, list(ethnicity, ## experience), mean))[, as.character(ex)] ## fm <- cps_lm ## wago <- 
predict(fm, newdata = data.frame(experience = ex, ## ethnicity = "cauc", education = as.numeric(ed["cauc",]))) ## wagb <- predict(fm, newdata = data.frame(experience = ex, ## ethnicity = "afam", education = as.numeric(ed["afam",]))) ## plot(log(wage) ~ jitter(experience, factor = 2), ## data = CPS1988, pch = ".", col = as.numeric(ethnicity)) ## ## ## plot(log(wage) ~ as.factor(experience), data = CPS1988, ## pch = ".") ## lines(ex, wago, lwd = 2) ## lines(ex, wagb, col = 2, lwd = 2) ## fm <- cps_sep ## wago <- predict(fm, newdata = data.frame(experience = ex, ## ethnicity = "cauc", education = as.numeric(ed["cauc",]))) ## wagb <- predict(fm, newdata = data.frame(experience = ex, ## ethnicity = "afam", education = as.numeric(ed["afam",]))) ## lines(ex, wago, lty = 2, lwd = 2) ## lines(ex, wagb, col = 2, lty = 2, lwd = 2) ################################################### ### chunk number 38: CPS-region ################################################### CPS1988$region <- relevel(CPS1988$region, ref = "south") cps_region <- lm(log(wage) ~ ethnicity + education + experience + I(experience^2) + region, data = CPS1988) coef(cps_region) ################################################### ### chunk number 39: wls1 ################################################### jour_wls1 <- lm(log(subs) ~ log(citeprice), data = journals, weights = 1/citeprice^2) ################################################### ### chunk number 40: wls2 ################################################### jour_wls2 <- lm(log(subs) ~ log(citeprice), data = journals, weights = 1/citeprice) ################################################### ### chunk number 41: journals-wls1 eval=FALSE ################################################### ## plot(log(subs) ~ log(citeprice), data = journals) ## abline(jour_lm) ## abline(jour_wls1, lwd = 2, lty = 2) ## abline(jour_wls2, lwd = 2, lty = 3) ## legend("bottomleft", c("OLS", "WLS1", "WLS2"), ## lty = 1:3, lwd = 2, bty = "n") ################################################### ### chunk number 42: journals-wls11 ################################################### plot(log(subs) ~ log(citeprice), data = journals) abline(jour_lm) abline(jour_wls1, lwd = 2, lty = 2) abline(jour_wls2, lwd = 2, lty = 3) legend("bottomleft", c("OLS", "WLS1", "WLS2"), lty = 1:3, lwd = 2, bty = "n") ################################################### ### chunk number 43: fgls1 ################################################### auxreg <- lm(log(residuals(jour_lm)^2) ~ log(citeprice), data = journals) jour_fgls1 <- lm(log(subs) ~ log(citeprice), weights = 1/exp(fitted(auxreg)), data = journals) ################################################### ### chunk number 44: fgls2 ################################################### gamma2i <- coef(auxreg)[2] gamma2 <- 0 while(abs((gamma2i - gamma2)/gamma2) > 1e-7) { gamma2 <- gamma2i fglsi <- lm(log(subs) ~ log(citeprice), data = journals, weights = 1/citeprice^gamma2) gamma2i <- coef(lm(log(residuals(fglsi)^2) ~ log(citeprice), data = journals))[2] } jour_fgls2 <- lm(log(subs) ~ log(citeprice), data = journals, weights = 1/citeprice^gamma2) ################################################### ### chunk number 45: fgls2-coef ################################################### coef(jour_fgls2) ################################################### ### chunk number 46: journals-fgls ################################################### plot(log(subs) ~ log(citeprice), data = journals) abline(jour_lm) abline(jour_fgls2, lty = 2, lwd = 2) 
################################################### ### chunk number 47: usmacro-plot eval=FALSE ################################################### ## data("USMacroG") ## plot(USMacroG[, c("dpi", "consumption")], lty = c(3, 1), ## plot.type = "single", ylab = "") ## legend("topleft", legend = c("income", "consumption"), ## lty = c(3, 1), bty = "n") ################################################### ### chunk number 48: usmacro-plot1 ################################################### data("USMacroG") plot(USMacroG[, c("dpi", "consumption")], lty = c(3, 1), plot.type = "single", ylab = "") legend("topleft", legend = c("income", "consumption"), lty = c(3, 1), bty = "n") ################################################### ### chunk number 49: usmacro-fit ################################################### library("dynlm") cons_lm1 <- dynlm(consumption ~ dpi + L(dpi), data = USMacroG) cons_lm2 <- dynlm(consumption ~ dpi + L(consumption), data = USMacroG) ################################################### ### chunk number 50: usmacro-summary1 ################################################### summary(cons_lm1) ################################################### ### chunk number 51: usmacro-summary2 ################################################### summary(cons_lm2) ################################################### ### chunk number 52: dynlm-plot eval=FALSE ################################################### ## plot(merge(as.zoo(USMacroG[,"consumption"]), fitted(cons_lm1), ## fitted(cons_lm2), 0, residuals(cons_lm1), ## residuals(cons_lm2)), screens = rep(1:2, c(3, 3)), ## lty = rep(1:3, 2), ylab = c("Fitted values", "Residuals"), ## xlab = "Time", main = "") ## legend(0.05, 0.95, c("observed", "cons_lm1", "cons_lm2"), ## lty = 1:3, bty = "n") ################################################### ### chunk number 53: dynlm-plot1 ################################################### plot(merge(as.zoo(USMacroG[,"consumption"]), fitted(cons_lm1), fitted(cons_lm2), 0, residuals(cons_lm1), residuals(cons_lm2)), screens = rep(1:2, c(3, 3)), lty = rep(1:3, 2), ylab = c("Fitted values", "Residuals"), xlab = "Time", main = "") legend(0.05, 0.95, c("observed", "cons_lm1", "cons_lm2"), lty = 1:3, bty = "n") ################################################### ### chunk number 54: encompassing1 ################################################### cons_lmE <- dynlm(consumption ~ dpi + L(dpi) + L(consumption), data = USMacroG) ################################################### ### chunk number 55: encompassing2 ################################################### anova(cons_lm1, cons_lmE, cons_lm2) ################################################### ### chunk number 56: encompassing3 ################################################### encomptest(cons_lm1, cons_lm2) ################################################### ### chunk number 57: pdata.frame ################################################### data("Grunfeld", package = "AER") library("plm") gr <- subset(Grunfeld, firm %in% c("General Electric", "General Motors", "IBM")) pgr <- pdata.frame(gr, index = c("firm", "year")) ################################################### ### chunk number 58: plm-pool ################################################### gr_pool <- plm(invest ~ value + capital, data = pgr, model = "pooling") ################################################### ### chunk number 59: plm-FE ################################################### gr_fe <- plm(invest ~ value + capital, data = pgr, model = "within") summary(gr_fe) 
###################################################
### chunk number 60: plm-pFtest
###################################################
pFtest(gr_fe, gr_pool)

###################################################
### chunk number 61: plm-RE
###################################################
gr_re <- plm(invest ~ value + capital, data = pgr,
  model = "random", random.method = "walhus")
summary(gr_re)

###################################################
### chunk number 62: plm-plmtest
###################################################
plmtest(gr_pool)

###################################################
### chunk number 63: plm-phtest
###################################################
phtest(gr_re, gr_fe)

###################################################
### chunk number 64: EmplUK-data
###################################################
data("EmplUK", package = "plm")

###################################################
### chunk number 65: plm-AB
###################################################
empl_ab <- pgmm(log(emp) ~ lag(log(emp), 1:2) + lag(log(wage), 0:1) +
  log(capital) + lag(log(output), 0:1) | lag(log(emp), 2:99),
  data = EmplUK, index = c("firm", "year"),
  effect = "twoways", model = "twosteps")

###################################################
### chunk number 66: plm-AB-summary
###################################################
summary(empl_ab)

###################################################
### chunk number 67: systemfit
###################################################
library("systemfit")
gr2 <- subset(Grunfeld, firm %in% c("Chrysler", "IBM"))
pgr2 <- pdata.frame(gr2, c("firm", "year"))

###################################################
### chunk number 68: SUR
###################################################
gr_sur <- systemfit(invest ~ value + capital,
  method = "SUR", data = pgr2)
summary(gr_sur, residCov = FALSE, equations = FALSE)

###################################################
### chunk number 69: nlme eval=FALSE
###################################################
## library("nlme")
## g1 <- subset(Grunfeld, firm == "Westinghouse")
## gls(invest ~ value + capital, data = g1, correlation = corAR1())
###################################################
### end of demo script: AER/demo/Ch-LinearRegression.R
###################################################
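## The next demo script (AER/demo/Ch-Microeconometrics.R) covers models
## for microdata: a probit for SwissLabor (average marginal effects,
## McFadden's pseudo-R-squared, confusion matrix, ROC curves via ROCR),
## a logit for MurderRates together with a check for separation,
## count-data models for RecreationDemand (Poisson, overdispersion tests,
## negative binomial, sandwich standard errors, zero-inflated and hurdle
## models from pscl), a tobit model for the Affairs data, and multinomial
## (nnet::multinom) and ordered (MASS::polr) models for BankWages.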
################################################### ### chunk number 1: setup ################################################### options(prompt = "R> ", continue = "+ ", width = 64, digits = 4, show.signif.stars = FALSE, useFancyQuotes = FALSE) options(SweaveHooks = list(onefig = function() {par(mfrow = c(1,1))}, twofig = function() {par(mfrow = c(1,2))}, threefig = function() {par(mfrow = c(1,3))}, fourfig = function() {par(mfrow = c(2,2))}, sixfig = function() {par(mfrow = c(3,2))})) library("AER") suppressWarnings(RNGversion("3.5.0")) set.seed(1071) ################################################### ### chunk number 2: swisslabor-data ################################################### data("SwissLabor") swiss_probit <- glm(participation ~ . + I(age^2), data = SwissLabor, family = binomial(link = "probit")) summary(swiss_probit) ################################################### ### chunk number 3: swisslabor-plot eval=FALSE ################################################### ## plot(participation ~ age, data = SwissLabor) ################################################### ### chunk number 4: swisslabor-plot-refined ################################################### plot(participation ~ education, data = SwissLabor) fm <- glm(participation ~ education + I(education^2), data = SwissLabor, family = binomial) edu <- sort(unique(SwissLabor$education)) prop <- sapply(edu, function(x) mean(SwissLabor$education <= x)) lines(predict(fm, newdata = data.frame(education = edu), type = "response") ~ prop, col = 2) plot(participation ~ age, data = SwissLabor) fm <- glm(participation ~ age + I(age^2), data = SwissLabor, family = binomial) ag <- sort(unique(SwissLabor$age)) prop <- sapply(ag, function(x) mean(SwissLabor$age <= x)) lines(predict(fm, newdata = data.frame(age = ag), type = "response") ~ prop, col = 2) ################################################### ### chunk number 5: effects1 ################################################### fav <- mean(dnorm(predict(swiss_probit, type = "link"))) fav * coef(swiss_probit) ################################################### ### chunk number 6: effects2 ################################################### av <- colMeans(SwissLabor[, -c(1, 7)]) av <- data.frame(rbind(swiss = av, foreign = av), foreign = factor(c("no", "yes"))) av <- predict(swiss_probit, newdata = av, type = "link") av <- dnorm(av) av["swiss"] * coef(swiss_probit)[-7] ################################################### ### chunk number 7: effects3 ################################################### av["foreign"] * coef(swiss_probit)[-7] ################################################### ### chunk number 8: mcfadden ################################################### swiss_probit0 <- update(swiss_probit, formula = . 
~ 1) 1 - as.vector(logLik(swiss_probit)/logLik(swiss_probit0)) ################################################### ### chunk number 9: confusion-matrix ################################################### table(true = SwissLabor$participation, pred = round(fitted(swiss_probit))) ################################################### ### chunk number 10: confusion-matrix1 ################################################### tab <- table(true = SwissLabor$participation, pred = round(fitted(swiss_probit))) tabp <- round(100 * c(tab[1,1] + tab[2,2], tab[2,1] + tab[1,2])/sum(tab), digits = 2) ################################################### ### chunk number 11: roc-plot eval=FALSE ################################################### ## library("ROCR") ## pred <- prediction(fitted(swiss_probit), ## SwissLabor$participation) ## plot(performance(pred, "acc")) ## plot(performance(pred, "tpr", "fpr")) ## abline(0, 1, lty = 2) ################################################### ### chunk number 12: roc-plot1 ################################################### library("ROCR") pred <- prediction(fitted(swiss_probit), SwissLabor$participation) plot(performance(pred, "acc")) plot(performance(pred, "tpr", "fpr")) abline(0, 1, lty = 2) ################################################### ### chunk number 13: rss ################################################### deviance(swiss_probit) sum(residuals(swiss_probit, type = "deviance")^2) sum(residuals(swiss_probit, type = "pearson")^2) ################################################### ### chunk number 14: coeftest eval=FALSE ################################################### ## coeftest(swiss_probit, vcov = sandwich) ################################################### ### chunk number 15: murder ################################################### data("MurderRates") murder_logit <- glm(I(executions > 0) ~ time + income + noncauc + lfp + southern, data = MurderRates, family = binomial) ################################################### ### chunk number 16: murder-coeftest ################################################### coeftest(murder_logit) ################################################### ### chunk number 17: murder2 ################################################### murder_logit2 <- glm(I(executions > 0) ~ time + income + noncauc + lfp + southern, data = MurderRates, family = binomial, control = list(epsilon = 1e-15, maxit = 50, trace = FALSE)) ################################################### ### chunk number 18: murder2-coeftest ################################################### coeftest(murder_logit2) ################################################### ### chunk number 19: separation ################################################### table(I(MurderRates$executions > 0), MurderRates$southern) ################################################### ### chunk number 20: countreg-pois ################################################### data("RecreationDemand") rd_pois <- glm(trips ~ ., data = RecreationDemand, family = poisson) ################################################### ### chunk number 21: countreg-pois-coeftest ################################################### coeftest(rd_pois) ################################################### ### chunk number 22: countreg-pois-logLik ################################################### logLik(rd_pois) ################################################### ### chunk number 23: countreg-odtest1 ################################################### dispersiontest(rd_pois) 
################################################### ### chunk number 24: countreg-odtest2 ################################################### dispersiontest(rd_pois, trafo = 2) ################################################### ### chunk number 25: countreg-nbin ################################################### library("MASS") rd_nb <- glm.nb(trips ~ ., data = RecreationDemand) coeftest(rd_nb) logLik(rd_nb) ################################################### ### chunk number 26: countreg-se ################################################### round(sqrt(rbind(diag(vcov(rd_pois)), diag(sandwich(rd_pois)))), digits = 3) ################################################### ### chunk number 27: countreg-sandwich ################################################### coeftest(rd_pois, vcov = sandwich) ################################################### ### chunk number 28: countreg-OPG ################################################### round(sqrt(diag(vcovOPG(rd_pois))), 3) ################################################### ### chunk number 29: countreg-plot ################################################### plot(table(RecreationDemand$trips), ylab = "") ################################################### ### chunk number 30: countreg-zeros ################################################### rbind(obs = table(RecreationDemand$trips)[1:10], exp = round( sapply(0:9, function(x) sum(dpois(x, fitted(rd_pois)))))) ################################################### ### chunk number 31: countreg-pscl ################################################### library("pscl") ################################################### ### chunk number 32: countreg-zinb ################################################### rd_zinb <- zeroinfl(trips ~ . | quality + income, data = RecreationDemand, dist = "negbin") ################################################### ### chunk number 33: countreg-zinb-summary ################################################### summary(rd_zinb) ################################################### ### chunk number 34: countreg-zinb-expected ################################################### round(colSums(predict(rd_zinb, type = "prob")[,1:10])) ################################################### ### chunk number 35: countreg-hurdle ################################################### rd_hurdle <- hurdle(trips ~ . 
| quality + income, data = RecreationDemand, dist = "negbin") summary(rd_hurdle) ################################################### ### chunk number 36: countreg-hurdle-expected ################################################### round(colSums(predict(rd_hurdle, type = "prob")[,1:10])) ################################################### ### chunk number 37: tobit1 ################################################### data("Affairs") aff_tob <- tobit(affairs ~ age + yearsmarried + religiousness + occupation + rating, data = Affairs) summary(aff_tob) ################################################### ### chunk number 38: tobit2 ################################################### aff_tob2 <- update(aff_tob, right = 4) summary(aff_tob2) ################################################### ### chunk number 39: tobit3 ################################################### linearHypothesis(aff_tob, c("age = 0", "occupation = 0"), vcov = sandwich) ################################################### ### chunk number 40: numeric-response ################################################### SwissLabor$partnum <- as.numeric(SwissLabor$participation) - 1 ################################################### ### chunk number 41: kleinspady eval=FALSE ################################################### ## library("np") ## swiss_bw <- npindexbw(partnum ~ income + age + education + ## youngkids + oldkids + foreign + I(age^2), data = SwissLabor, ## method = "kleinspady", nmulti = 5) ################################################### ### chunk number 42: kleinspady-bw eval=FALSE ################################################### ## summary(swiss_bw) ################################################### ### chunk number 43: kleinspady-summary eval=FALSE ################################################### ## swiss_ks <- npindex(bws = swiss_bw, gradients = TRUE) ## summary(swiss_ks) ################################################### ### chunk number 44: probit-confusion ################################################### table(Actual = SwissLabor$participation, Predicted = round(predict(swiss_probit, type = "response"))) ################################################### ### chunk number 45: bw-tab ################################################### data("BankWages") edcat <- factor(BankWages$education) levels(edcat)[3:10] <- rep(c("14-15", "16-18", "19-21"), c(2, 3, 3)) tab <- xtabs(~ edcat + job, data = BankWages) prop.table(tab, 1) ################################################### ### chunk number 46: bw-plot eval=FALSE ################################################### ## plot(job ~ edcat, data = BankWages, off = 0) ################################################### ### chunk number 47: bw-plot1 ################################################### plot(job ~ edcat, data = BankWages, off = 0) box() ################################################### ### chunk number 48: bw-multinom ################################################### library("nnet") bank_mnl <- multinom(job ~ education + minority, data = BankWages, subset = gender == "male", trace = FALSE) ################################################### ### chunk number 49: bw-multinom-coeftest ################################################### coeftest(bank_mnl) ################################################### ### chunk number 50: bw-polr ################################################### library("MASS") bank_polr <- polr(job ~ education + minority, data = BankWages, subset = gender == "male", Hess = TRUE) coeftest(bank_polr) 
###################################################
### chunk number 51: bw-AIC
###################################################
AIC(bank_mnl)
AIC(bank_polr)
###################################################
### end of demo script: AER/demo/Ch-Microeconometrics.R
###################################################
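## The next demo script (AER/demo/Ch-Programming.R) covers programming
## tasks: a small Monte Carlo study comparing the power of the
## Durbin-Watson and Breusch-Godfrey tests (dwtest(), bgtest()) with a
## lattice display of the results, case-based bootstrapping of the
## Journals regression with the boot package, maximum-likelihood
## estimation via optim() for the Equipment data, and reproducible
## documents via Sweave()/Stangle().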
################################################### ### chunk number 1: setup ################################################### options(prompt = "R> ", continue = "+ ", width = 64, digits = 4, show.signif.stars = FALSE, useFancyQuotes = FALSE) options(SweaveHooks = list(onefig = function() {par(mfrow = c(1,1))}, twofig = function() {par(mfrow = c(1,2))}, threefig = function() {par(mfrow = c(1,3))}, fourfig = function() {par(mfrow = c(2,2))}, sixfig = function() {par(mfrow = c(3,2))})) library("AER") suppressWarnings(RNGversion("3.5.0")) set.seed(1071) ################################################### ### chunk number 2: DGP ################################################### dgp <- function(nobs = 15, model = c("trend", "dynamic"), corr = 0, coef = c(0.25, -0.75), sd = 1) { model <- match.arg(model) coef <- rep(coef, length.out = 2) err <- as.vector(filter(rnorm(nobs, sd = sd), corr, method = "recursive")) if(model == "trend") { x <- 1:nobs y <- coef[1] + coef[2] * x + err } else { y <- rep(NA, nobs) y[1] <- coef[1] + err[1] for(i in 2:nobs) y[i] <- coef[1] + coef[2] * y[i-1] + err[i] x <- c(0, y[1:(nobs-1)]) } return(data.frame(y = y, x = x)) } ################################################### ### chunk number 3: simpower ################################################### simpower <- function(nrep = 100, size = 0.05, ...) { pval <- matrix(rep(NA, 2 * nrep), ncol = 2) colnames(pval) <- c("dwtest", "bgtest") for(i in 1:nrep) { dat <- dgp(...) pval[i,1] <- dwtest(y ~ x, data = dat, alternative = "two.sided")$p.value pval[i,2] <- bgtest(y ~ x, data = dat)$p.value } return(colMeans(pval < size)) } ################################################### ### chunk number 4: simulation-function ################################################### simulation <- function(corr = c(0, 0.2, 0.4, 0.6, 0.8, 0.9, 0.95, 0.99), nobs = c(15, 30, 50), model = c("trend", "dynamic"), ...) { prs <- expand.grid(corr = corr, nobs = nobs, model = model) nprs <- nrow(prs) pow <- matrix(rep(NA, 2 * nprs), ncol = 2) for(i in 1:nprs) pow[i,] <- simpower(corr = prs[i,1], nobs = prs[i,2], model = as.character(prs[i,3]), ...) 
rval <- rbind(prs, prs) rval$test <- factor(rep(1:2, c(nprs, nprs)), labels = c("dwtest", "bgtest")) rval$power <- c(pow[,1], pow[,2]) rval$nobs <- factor(rval$nobs) return(rval) } ################################################### ### chunk number 5: simulation ################################################### set.seed(123) psim <- simulation() ################################################### ### chunk number 6: simulation-table ################################################### tab <- xtabs(power ~ corr + test + model + nobs, data = psim) ftable(tab, row.vars = c("model", "nobs", "test"), col.vars = "corr") ################################################### ### chunk number 7: simulation-visualization ################################################### library("lattice") xyplot(power ~ corr | model + nobs, groups = ~ test, data = psim, type = "b") ################################################### ### chunk number 8: simulation-visualization1 ################################################### library("lattice") trellis.par.set(theme = canonical.theme(color = FALSE)) print(xyplot(power ~ corr | model + nobs, groups = ~ test, data = psim, type = "b")) ################################################### ### chunk number 9: journals-lm ################################################### data("Journals") journals <- Journals[, c("subs", "price")] journals$citeprice <- Journals$price/Journals$citations jour_lm <- lm(log(subs) ~ log(citeprice), data = journals) ################################################### ### chunk number 10: journals-residuals-based-resampling-unused eval=FALSE ################################################### ## refit <- function(data, i) { ## d <- data ## d$subs <- exp(d$fitted + d$res[i]) ## coef(lm(log(subs) ~ log(citeprice), data = d)) ## } ################################################### ### chunk number 11: journals-case-based-resampling ################################################### refit <- function(data, i) coef(lm(log(subs) ~ log(citeprice), data = data[i,])) ################################################### ### chunk number 12: journals-boot ################################################### library("boot") set.seed(123) jour_boot <- boot(journals, refit, R = 999) ################################################### ### chunk number 13: journals-boot-print ################################################### jour_boot ################################################### ### chunk number 14: journals-lm-coeftest ################################################### coeftest(jour_lm) ################################################### ### chunk number 15: journals-boot-ci ################################################### boot.ci(jour_boot, index = 2, type = "basic") ################################################### ### chunk number 16: journals-lm-ci ################################################### confint(jour_lm, parm = 2) ################################################### ### chunk number 17: ml-loglik ################################################### data("Equipment", package = "AER") nlogL <- function(par) { beta <- par[1:3] theta <- par[4] sigma2 <- par[5] Y <- with(Equipment, valueadded/firms) K <- with(Equipment, capital/firms) L <- with(Equipment, labor/firms) rhs <- beta[1] + beta[2] * log(K) + beta[3] * log(L) lhs <- log(Y) + theta * Y rval <- sum(log(1 + theta * Y) - log(Y) + dnorm(lhs, mean = rhs, sd = sqrt(sigma2), log = TRUE)) return(-rval) } ################################################### ### chunk number 18: 
ml-0 ################################################### fm0 <- lm(log(valueadded/firms) ~ log(capital/firms) + log(labor/firms), data = Equipment) ################################################### ### chunk number 19: ml-0-coef ################################################### par0 <- as.vector(c(coef(fm0), 0, mean(residuals(fm0)^2))) ################################################### ### chunk number 20: ml-optim ################################################### opt <- optim(par0, nlogL, hessian = TRUE) ################################################### ### chunk number 21: ml-optim-output ################################################### opt$par sqrt(diag(solve(opt$hessian)))[1:4] -opt$value ################################################### ### chunk number 22: Sweave eval=FALSE ################################################### ## Sweave("Sweave-journals.Rnw") ################################################### ### chunk number 23: Stangle eval=FALSE ################################################### ## Stangle("Sweave-journals.Rnw") ################################################### ### chunk number 24: texi2dvi eval=FALSE ################################################### ## texi2dvi("Sweave-journals.tex", pdf = TRUE) ################################################### ### chunk number 25: vignette eval=FALSE ################################################### ## vignette("Sweave-journals", package = "AER")
###################################################
### end of demo script: AER/demo/Ch-Programming.R
###################################################
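## The next demo script (AER/demo/Ch-TimeSeries.R) covers time series:
## filtering, decompose()/stl(), Holt-Winters forecasting and ARIMA
## modeling for UKDriverDeaths and UKNonDurables, unit-root and
## cointegration tests for PepperPrice (tseries and urca), dynamic
## regressions with dynlm, structural-change tests and breakpoint dating
## with strucchange, a structural time-series model via StructTS(), and
## a GARCH model for the MarkPound series (tseries::garch()).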
################################################### ### chunk number 1: setup ################################################### options(prompt = "R> ", continue = "+ ", width = 64, digits = 4, show.signif.stars = FALSE, useFancyQuotes = FALSE) options(SweaveHooks = list(onefig = function() {par(mfrow = c(1,1))}, twofig = function() {par(mfrow = c(1,2))}, threefig = function() {par(mfrow = c(1,3))}, fourfig = function() {par(mfrow = c(2,2))}, sixfig = function() {par(mfrow = c(3,2))})) library("AER") suppressWarnings(RNGversion("3.5.0")) set.seed(1071) ################################################### ### chunk number 2: options ################################################### options(digits = 6) ################################################### ### chunk number 3: ts-plot eval=FALSE ################################################### ## data("UKNonDurables") ## plot(UKNonDurables) ################################################### ### chunk number 4: UKNonDurables-data ################################################### data("UKNonDurables") ################################################### ### chunk number 5: tsp ################################################### tsp(UKNonDurables) ################################################### ### chunk number 6: window ################################################### window(UKNonDurables, end = c(1956, 4)) ################################################### ### chunk number 7: filter eval=FALSE ################################################### ## data("UKDriverDeaths") ## plot(UKDriverDeaths) ## lines(filter(UKDriverDeaths, c(1/2, rep(1, 11), 1/2)/12), ## col = 2) ################################################### ### chunk number 8: ts-plot1 ################################################### data("UKNonDurables") plot(UKNonDurables) data("UKDriverDeaths") plot(UKDriverDeaths) lines(filter(UKDriverDeaths, c(1/2, rep(1, 11), 1/2)/12), col = 2) ################################################### ### chunk number 9: filter1 eval=FALSE ################################################### ## data("UKDriverDeaths") ## plot(UKDriverDeaths) ## lines(filter(UKDriverDeaths, c(1/2, rep(1, 11), 1/2)/12), ## col = 2) ################################################### ### chunk number 10: rollapply ################################################### plot(rollapply(UKDriverDeaths, 12, sd)) ################################################### ### chunk number 11: ar-sim ################################################### set.seed(1234) x <- filter(rnorm(100), 0.9, method = "recursive") ################################################### ### chunk number 12: decompose ################################################### dd_dec <- decompose(log(UKDriverDeaths)) dd_stl <- stl(log(UKDriverDeaths), s.window = 13) ################################################### ### chunk number 13: decompose-components ################################################### plot(dd_dec$trend, ylab = "trend") lines(dd_stl$time.series[,"trend"], lty = 2, lwd = 2) ################################################### ### chunk number 14: seat-mean-sd ################################################### plot(dd_dec$trend, ylab = "trend") lines(dd_stl$time.series[,"trend"], lty = 2, lwd = 2) plot(rollapply(UKDriverDeaths, 12, sd)) ################################################### ### chunk number 15: stl ################################################### plot(dd_stl) ################################################### ### chunk number 16: Holt-Winters 
################################################### dd_past <- window(UKDriverDeaths, end = c(1982, 12)) dd_hw <- HoltWinters(dd_past) dd_pred <- predict(dd_hw, n.ahead = 24) ################################################### ### chunk number 17: Holt-Winters-plot ################################################### plot(dd_hw, dd_pred, ylim = range(UKDriverDeaths)) lines(UKDriverDeaths) ################################################### ### chunk number 18: Holt-Winters-plot1 ################################################### plot(dd_hw, dd_pred, ylim = range(UKDriverDeaths)) lines(UKDriverDeaths) ################################################### ### chunk number 19: acf eval=FALSE ################################################### ## acf(x) ## pacf(x) ################################################### ### chunk number 20: acf1 ################################################### acf(x, ylim = c(-0.2, 1)) pacf(x, ylim = c(-0.2, 1)) ################################################### ### chunk number 21: ar ################################################### ar(x) ################################################### ### chunk number 22: window-non-durab ################################################### nd <- window(log(UKNonDurables), end = c(1970, 4)) ################################################### ### chunk number 23: non-durab-acf ################################################### acf(diff(nd), ylim = c(-1, 1)) pacf(diff(nd), ylim = c(-1, 1)) acf(diff(diff(nd, 4)), ylim = c(-1, 1)) pacf(diff(diff(nd, 4)), ylim = c(-1, 1)) ################################################### ### chunk number 24: non-durab-acf1 ################################################### acf(diff(nd), ylim = c(-1, 1)) pacf(diff(nd), ylim = c(-1, 1)) acf(diff(diff(nd, 4)), ylim = c(-1, 1)) pacf(diff(diff(nd, 4)), ylim = c(-1, 1)) ################################################### ### chunk number 25: arima-setup ################################################### nd_pars <- expand.grid(ar = 0:2, diff = 1, ma = 0:2, sar = 0:1, sdiff = 1, sma = 0:1) nd_aic <- rep(0, nrow(nd_pars)) for(i in seq(along = nd_aic)) nd_aic[i] <- AIC(arima(nd, unlist(nd_pars[i, 1:3]), unlist(nd_pars[i, 4:6])), k = log(length(nd))) nd_pars[which.min(nd_aic),] ################################################### ### chunk number 26: arima ################################################### nd_arima <- arima(nd, order = c(0,1,1), seasonal = c(0,1,1)) nd_arima ################################################### ### chunk number 27: tsdiag ################################################### tsdiag(nd_arima) ################################################### ### chunk number 28: tsdiag1 ################################################### tsdiag(nd_arima) ################################################### ### chunk number 29: arima-predict ################################################### nd_pred <- predict(nd_arima, n.ahead = 18 * 4) ################################################### ### chunk number 30: arima-compare ################################################### plot(log(UKNonDurables)) lines(nd_pred$pred, col = 2) ################################################### ### chunk number 31: arima-compare1 ################################################### plot(log(UKNonDurables)) lines(nd_pred$pred, col = 2) ################################################### ### chunk number 32: pepper ################################################### data("PepperPrice") plot(PepperPrice, plot.type = "single", col = 1:2) legend("topleft", 
c("black", "white"), bty = "n", col = 1:2, lty = rep(1,2)) ################################################### ### chunk number 33: pepper1 ################################################### data("PepperPrice") plot(PepperPrice, plot.type = "single", col = 1:2) legend("topleft", c("black", "white"), bty = "n", col = 1:2, lty = rep(1,2)) ################################################### ### chunk number 34: adf1 ################################################### library("tseries") adf.test(log(PepperPrice[, "white"])) ################################################### ### chunk number 35: adf1 ################################################### adf.test(diff(log(PepperPrice[, "white"]))) ################################################### ### chunk number 36: pp ################################################### pp.test(log(PepperPrice[, "white"]), type = "Z(t_alpha)") ################################################### ### chunk number 37: urca eval=FALSE ################################################### ## library("urca") ## pepper_ers <- ur.ers(log(PepperPrice[, "white"]), ## type = "DF-GLS", model = "const", lag.max = 4) ## summary(pepper_ers) ################################################### ### chunk number 38: kpss ################################################### kpss.test(log(PepperPrice[, "white"])) ################################################### ### chunk number 39: po ################################################### po.test(log(PepperPrice)) ################################################### ### chunk number 40: joh-trace ################################################### library("urca") pepper_jo <- ca.jo(log(PepperPrice), ecdet = "const", type = "trace") summary(pepper_jo) ################################################### ### chunk number 41: joh-lmax eval=FALSE ################################################### ## pepper_jo2 <- ca.jo(log(PepperPrice), ecdet = "const", type = "eigen") ## summary(pepper_jo2) ################################################### ### chunk number 42: dynlm-by-hand ################################################### dd <- log(UKDriverDeaths) dd_dat <- ts.intersect(dd, dd1 = lag(dd, k = -1), dd12 = lag(dd, k = -12)) lm(dd ~ dd1 + dd12, data = dd_dat) ################################################### ### chunk number 43: dynlm ################################################### library("dynlm") dynlm(dd ~ L(dd) + L(dd, 12)) ################################################### ### chunk number 44: efp ################################################### library("strucchange") dd_ocus <- efp(dd ~ dd1 + dd12, data = dd_dat, type = "OLS-CUSUM") ################################################### ### chunk number 45: efp-test ################################################### sctest(dd_ocus) ################################################### ### chunk number 46: efp-plot eval=FALSE ################################################### ## plot(dd_ocus) ################################################### ### chunk number 47: Fstats ################################################### dd_fs <- Fstats(dd ~ dd1 + dd12, data = dd_dat, from = 0.1) plot(dd_fs) sctest(dd_fs) ################################################### ### chunk number 48: ocus-supF ################################################### plot(dd_ocus) plot(dd_fs, main = "supF test") ################################################### ### chunk number 49: GermanM1 ################################################### data("GermanM1") LTW <- dm ~ dy2 + dR + dR1 + dp + m1 + y1 
+ R1 + season ################################################### ### chunk number 50: re eval=FALSE ################################################### ## m1_re <- efp(LTW, data = GermanM1, type = "RE") ## plot(m1_re) ################################################### ### chunk number 51: re1 ################################################### m1_re <- efp(LTW, data = GermanM1, type = "RE") plot(m1_re) ################################################### ### chunk number 52: dating ################################################### dd_bp <- breakpoints(dd ~ dd1 + dd12, data = dd_dat, h = 0.1) ################################################### ### chunk number 53: dating-coef ################################################### coef(dd_bp, breaks = 2) ################################################### ### chunk number 54: dating-plot eval=FALSE ################################################### ## plot(dd) ## lines(fitted(dd_bp, breaks = 2), col = 4) ## lines(confint(dd_bp, breaks = 2)) ################################################### ### chunk number 55: dating-plot1 ################################################### plot(dd_bp, legend = FALSE, main = "") plot(dd) lines(fitted(dd_bp, breaks = 2), col = 4) lines(confint(dd_bp, breaks = 2)) ################################################### ### chunk number 56: StructTS ################################################### dd_struct <- StructTS(log(UKDriverDeaths)) ################################################### ### chunk number 57: StructTS-plot eval=FALSE ################################################### ## plot(cbind(fitted(dd_struct), residuals(dd_struct))) ################################################### ### chunk number 58: StructTS-plot1 ################################################### dd_struct_plot <- cbind(fitted(dd_struct), residuals = residuals(dd_struct)) colnames(dd_struct_plot) <- c("level", "slope", "season", "residuals") plot(dd_struct_plot, main = "") ################################################### ### chunk number 59: garch-plot ################################################### data("MarkPound") plot(MarkPound, main = "") ################################################### ### chunk number 60: garch ################################################### mp <- garch(MarkPound, grad = "numerical", trace = FALSE) summary(mp)
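###################################################
### added chunk: follow-ups (eval = FALSE)
###################################################
## A hedged addition, not part of the original demo: quick numerical follow-ups on
## the objects fitted above, written in the demo's eval = FALSE style. breakdates()
## (strucchange) reports the estimated break positions; coef() and logLik() summarize
## the fitted GARCH model from tseries.
## breakdates(dd_bp, breaks = 2)
## coef(mp)
## logLik(mp)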
## Source: AER/demo/Ch-TimeSeries.R
################################################### ### chunk number 1: setup ################################################### options(prompt = "R> ", continue = "+ ", width = 64, digits = 4, show.signif.stars = FALSE, useFancyQuotes = FALSE) options(SweaveHooks = list(onefig = function() {par(mfrow = c(1,1))}, twofig = function() {par(mfrow = c(1,2))}, threefig = function() {par(mfrow = c(1,3))}, fourfig = function() {par(mfrow = c(2,2))}, sixfig = function() {par(mfrow = c(3,2))})) library("AER") suppressWarnings(RNGversion("3.5.0")) set.seed(1071) ################################################### ### chunk number 2: ps-summary ################################################### data("PublicSchools") summary(PublicSchools) ################################################### ### chunk number 3: ps-plot eval=FALSE ################################################### ## ps <- na.omit(PublicSchools) ## ps$Income <- ps$Income / 10000 ## plot(Expenditure ~ Income, data = ps, ylim = c(230, 830)) ## ps_lm <- lm(Expenditure ~ Income, data = ps) ## abline(ps_lm) ## id <- c(2, 24, 48) ## text(ps[id, 2:1], rownames(ps)[id], pos = 1, xpd = TRUE) ################################################### ### chunk number 4: ps-plot1 ################################################### ps <- na.omit(PublicSchools) ps$Income <- ps$Income / 10000 plot(Expenditure ~ Income, data = ps, ylim = c(230, 830)) ps_lm <- lm(Expenditure ~ Income, data = ps) abline(ps_lm) id <- c(2, 24, 48) text(ps[id, 2:1], rownames(ps)[id], pos = 1, xpd = TRUE) ################################################### ### chunk number 5: ps-lmplot eval=FALSE ################################################### ## plot(ps_lm, which = 1:6) ################################################### ### chunk number 6: ps-lmplot1 ################################################### plot(ps_lm, which = 1:6) ################################################### ### chunk number 7: ps-hatvalues eval=FALSE ################################################### ## ps_hat <- hatvalues(ps_lm) ## plot(ps_hat) ## abline(h = c(1, 3) * mean(ps_hat), col = 2) ## id <- which(ps_hat > 3 * mean(ps_hat)) ## text(id, ps_hat[id], rownames(ps)[id], pos = 1, xpd = TRUE) ################################################### ### chunk number 8: ps-hatvalues1 ################################################### ps_hat <- hatvalues(ps_lm) plot(ps_hat) abline(h = c(1, 3) * mean(ps_hat), col = 2) id <- which(ps_hat > 3 * mean(ps_hat)) text(id, ps_hat[id], rownames(ps)[id], pos = 1, xpd = TRUE) ################################################### ### chunk number 9: influence-measures1 eval=FALSE ################################################### ## influence.measures(ps_lm) ################################################### ### chunk number 10: which-hatvalues ################################################### which(ps_hat > 3 * mean(ps_hat)) ################################################### ### chunk number 11: influence-measures2 ################################################### summary(influence.measures(ps_lm)) ################################################### ### chunk number 12: ps-noinf eval=FALSE ################################################### ## plot(Expenditure ~ Income, data = ps, ylim = c(230, 830)) ## abline(ps_lm) ## id <- which(apply(influence.measures(ps_lm)$is.inf, 1, any)) ## text(ps[id, 2:1], rownames(ps)[id], pos = 1, xpd = TRUE) ## ps_noinf <- lm(Expenditure ~ Income, data = ps[-id,]) ## abline(ps_noinf, lty = 2) 
################################################### ### chunk number 13: ps-noinf1 ################################################### plot(Expenditure ~ Income, data = ps, ylim = c(230, 830)) abline(ps_lm) id <- which(apply(influence.measures(ps_lm)$is.inf, 1, any)) text(ps[id, 2:1], rownames(ps)[id], pos = 1, xpd = TRUE) ps_noinf <- lm(Expenditure ~ Income, data = ps[-id,]) abline(ps_noinf, lty = 2) ################################################### ### chunk number 14: journals-age ################################################### data("Journals") journals <- Journals[, c("subs", "price")] journals$citeprice <- Journals$price/Journals$citations journals$age <- 2000 - Journals$foundingyear ################################################### ### chunk number 15: journals-lm ################################################### jour_lm <- lm(log(subs) ~ log(citeprice), data = journals) ################################################### ### chunk number 16: bptest1 ################################################### bptest(jour_lm) ################################################### ### chunk number 17: bptest2 ################################################### bptest(jour_lm, ~ log(citeprice) + I(log(citeprice)^2), data = journals) ################################################### ### chunk number 18: gqtest ################################################### gqtest(jour_lm, order.by = ~ citeprice, data = journals) ################################################### ### chunk number 19: resettest ################################################### resettest(jour_lm) ################################################### ### chunk number 20: raintest ################################################### raintest(jour_lm, order.by = ~ age, data = journals) ################################################### ### chunk number 21: harvtest ################################################### harvtest(jour_lm, order.by = ~ age, data = journals) ################################################### ### chunk number 22: ################################################### library("dynlm") ################################################### ### chunk number 23: usmacro-dynlm ################################################### data("USMacroG") consump1 <- dynlm(consumption ~ dpi + L(dpi), data = USMacroG) ################################################### ### chunk number 24: dwtest ################################################### dwtest(consump1) ################################################### ### chunk number 25: Box-test ################################################### Box.test(residuals(consump1), type = "Ljung-Box") ################################################### ### chunk number 26: bgtest ################################################### bgtest(consump1) ################################################### ### chunk number 27: vcov ################################################### vcov(jour_lm) vcovHC(jour_lm) ################################################### ### chunk number 28: coeftest ################################################### coeftest(jour_lm, vcov = vcovHC) ################################################### ### chunk number 29: sandwiches ################################################### t(sapply(c("const", "HC0", "HC1", "HC2", "HC3", "HC4"), function(x) sqrt(diag(vcovHC(jour_lm, type = x))))) ################################################### ### chunk number 30: ps-anova ################################################### ps_lm <- lm(Expenditure ~ Income, 
data = ps) ps_lm2 <- lm(Expenditure ~ Income + I(Income^2), data = ps) anova(ps_lm, ps_lm2) ################################################### ### chunk number 31: ps-waldtest ################################################### waldtest(ps_lm, ps_lm2, vcov = vcovHC(ps_lm2, type = "HC4")) ################################################### ### chunk number 32: vcovHAC ################################################### rbind(SE = sqrt(diag(vcov(consump1))), QS = sqrt(diag(kernHAC(consump1))), NW = sqrt(diag(NeweyWest(consump1)))) ################################################### ### chunk number 33: solow-lm ################################################### data("OECDGrowth") solow_lm <- lm(log(gdp85/gdp60) ~ log(gdp60) + log(invest) + log(popgrowth + .05), data = OECDGrowth) summary(solow_lm) ################################################### ### chunk number 34: solow-plot eval=FALSE ################################################### ## plot(solow_lm) ################################################### ### chunk number 35: solow-lts ################################################### library("MASS") solow_lts <- lqs(log(gdp85/gdp60) ~ log(gdp60) + log(invest) + log(popgrowth + .05), data = OECDGrowth, psamp = 13, nsamp = "exact") ################################################### ### chunk number 36: solow-smallresid ################################################### smallresid <- which( abs(residuals(solow_lts)/solow_lts$scale[2]) <= 2.5) ################################################### ### chunk number 37: solow-nohighlev ################################################### X <- model.matrix(solow_lm)[,-1] Xcv <- cov.rob(X, nsamp = "exact") nohighlev <- which( sqrt(mahalanobis(X, Xcv$center, Xcv$cov)) <= 2.5) ################################################### ### chunk number 38: solow-goodobs ################################################### goodobs <- unique(c(smallresid, nohighlev)) ################################################### ### chunk number 39: solow-badobs ################################################### rownames(OECDGrowth)[-goodobs] ################################################### ### chunk number 40: solow-rob ################################################### solow_rob <- update(solow_lm, subset = goodobs) summary(solow_rob) ################################################### ### chunk number 41: quantreg ################################################### library("quantreg") ################################################### ### chunk number 42: cps-lad ################################################### library("quantreg") data("CPS1988") cps_f <- log(wage) ~ experience + I(experience^2) + education cps_lad <- rq(cps_f, data = CPS1988) summary(cps_lad) ################################################### ### chunk number 43: cps-rq ################################################### cps_rq <- rq(cps_f, tau = c(0.25, 0.75), data = CPS1988) summary(cps_rq) ################################################### ### chunk number 44: cps-rqs ################################################### cps_rq25 <- rq(cps_f, tau = 0.25, data = CPS1988) cps_rq75 <- rq(cps_f, tau = 0.75, data = CPS1988) anova(cps_rq25, cps_rq75) ################################################### ### chunk number 45: cps-rq-anova ################################################### anova(cps_rq25, cps_rq75, joint = FALSE) ################################################### ### chunk number 46: rqbig ################################################### cps_rqbig <- rq(cps_f, tau = 
seq(0.05, 0.95, by = 0.05), data = CPS1988) cps_rqbigs <- summary(cps_rqbig) ################################################### ### chunk number 47: rqbig-plot eval=FALSE ################################################### ## plot(cps_rqbigs) ################################################### ### chunk number 48: rqbig-plot1 ################################################### plot(cps_rqbigs)
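###################################################
### added chunk: rq follow-ups (eval = FALSE)
###################################################
## A hedged addition, not part of the original demo: the quantile-regression fits
## above can also be inspected numerically. coef() on the multi-tau fit returns one
## column per quantile; summary(..., se = "boot") gives bootstrap standard errors
## (can be slow on the full CPS1988 data).
## coef(cps_rqbig)[, 1:3]
## summary(cps_rq25, se = "boot")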
## Source: AER/demo/Ch-Validation.R
### R code from vignette source 'AER.Rnw'

###################################################
### code chunk number 1: options
###################################################
options(prompt = "R> ", digits = 4, show.signif.stars = FALSE)

###################################################
### code chunk number 2: demo (eval = FALSE)
###################################################
## demo("Ch-Intro", package = "AER")

###################################################
### code chunk number 3: data (eval = FALSE)
###################################################
## data(package = "AER")

###################################################
### code chunk number 4: help (eval = FALSE)
###################################################
## help("Greene2003", package = "AER")

###################################################
### code chunk number 5: pgmm-new (eval = FALSE)
###################################################
## empl_ab <- pgmm(log(emp) ~ lag(log(emp), 1:2) + lag(log(wage), 0:1)
##   + log(capital) + lag(log(output), 0:1) | lag(log(emp), 2:99),
##   data = EmplUK, index = c("firm", "year"),
##   effect = "twoways", model = "twosteps")
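###################################################
### added chunk: package overview (eval = FALSE)
###################################################
## A hedged addition, not part of the shipped vignette code: list the available
## book-chapter demos and open the package vignette itself.
## demo(package = "AER")
## vignette("AER", package = "AER")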
## Source: AER/inst/doc/AER.R
### R code from vignette source 'Sweave-journals.Rnw'

###################################################
### code chunk number 1: Sweave-journals.Rnw:8-11
###################################################
data("Journals", package = "AER")
journals_lm <- lm(log(subs) ~ log(price/citations), data = Journals)
journals_lm

###################################################
### code chunk number 2: Sweave-journals.Rnw:17-19
###################################################
plot(log(subs) ~ log(price/citations), data = Journals)
abline(journals_lm)
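###################################################
### added chunk: model summary (eval = FALSE)
###################################################
## A hedged addition, not part of the original vignette code: numerical summaries
## of the regression plotted above.
## summary(journals_lm)
## confint(journals_lm)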
## Source: AER/inst/doc/Sweave-journals.R
## Function Fisher_enrichment: ------------------------------------------------ ## performe Enrichment test using Fisher test ### Input: ### 1. data (data.frame): ### Type I: ID DRUG_TYPE AE_NAME ### 201 FLUN Insomnia ### ... ... ... ### 299 FLU Chills ### ### 2. dd.group (data.frame): dd.meddra (AE_NAME GROUP_NAME) ### ### 3. drug.case: Name of the Target vaccine ### 4. drug.control: Name of the Reference vaccine ### 5. n_perms: Number of simulations to get null distribution of ES ### 6. q.cut (numerical value): q value cut deciding the significance of ### each AE ### 7. or.cut (numerical value): odds ratio cut deciding the significance ### of each AE ### 8. zero: Default False, perform classic fisher exact test. If ### True, add zero indicator to the Enrichment score. ### 9. min_size: The minimum size of group required for enrichment analysis. ### 10. min_AE: The minimum number of cases required to start counting ### for a specific AE. ### 11. cores: The number of cores to use for parallel execution. ### Output: ### 1. a list with 2 data.frames: ### 1. Final_result (data.frame): ### GROUP_NAME ES p_value GROUP_SIZE ### Acid-base disorders 0 1 2 ### Allergic conditions 0 1 41 ### ... ... ... ... ### Angioedema and urticaria 0 1 17 ### Anxiety disorders and symptoms 0 1 11 ### ### 2. AE_info (data.frame): ### AE_NAME OR p_value ### Abdominal mass 2.13 0.175 ### ... ... ... ### Abdominal pain 0 1 ### `95Lower` `95Upper` `se(logOR)` ### 2.0 2.2 ... ### ... ... ... ### 0 0 ... #------------------------------------------------------------------------------ Fisher_enrichment = function(data, dd.group, drug.case, drug.control = NULL, n_perms = 1000, q.cut = 0.05, or.cut = 1.5, zero = FALSE, min_size = 5, covar = NULL, min_AE = 10, cores = detectCores()){ # Get odds ratio estimate and related info for null simulation odds_out = odds_ratio(data, drug.case, drug.control, covar = covar, min_AE = min_AE, cores = cores) fisher_res = odds_out$res %>% mutate(OR = exp(BETA)) %>% select(-BETA) %>% mutate(isRatio0 = as.logical(ifelse(OR < 1e-5, "TRUE", "FALSE"))) %>% relocate(AE_NAME, OR, p_value, isRatio0, `95Lower`, `95Upper`, `se(logOR)`) fisher_res_withQ = fisher_res %>% select(-`95Lower`, -`95Upper`, -`se(logOR)`) %>% mutate(qval = (qvalue(p_value))$qvalue) # Get the group size for groups of interest dd.group = dd.group[!duplicated(dd.group), ] %>% filter(!is.na(GROUP_NAME)) df_size = dd.group %>% filter(AE_NAME %in% odds_out$ae) %>% group_by(GROUP_NAME) %>% summarise(GROUP_SIZE = n(), .groups = 'drop_last') %>% filter(GROUP_SIZE >= as.integer(min_size)) ## filter out groups in which the number of AEs is less than the minimum size dd.group = dd.group %>% filter(GROUP_NAME %in% df_size$GROUP_NAME) %>% filter(AE_NAME %in% odds_out$ae) # Calculate the Enrichment Score for each group fisher_result = fisher_test(dd.group, fisher_res_withQ, q.cut, or.cut, n_perms, zero) Final_result_fisher = fisher_result %>% merge(df_size, by = 'GROUP_NAME') %>% as_tibble() return(list(Final_result = Final_result_fisher, AE_info = fisher_res[,-4])) }
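## Illustrative call (a sketch, not run): Fisher_enrichment() is an internal helper
## normally reached through enrich(method = "aefisher"); the argument values below
## mirror the packaged covid1 and group example data documented elsewhere in this
## package.
# Fisher_enrichment(data = covid1, dd.group = group, drug.case = "COVID19",
#                   drug.control = "OTHER", n_perms = 100, q.cut = 0.05,
#                   or.cut = 1.5, covar = c("SEX", "AGE"), cores = 2)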
## Source: AEenrich/R/Fisher_enrichment.R
## Function HitMiss_Curve: ---------------------------------------------------- ## Calculate miss and hit value in the KS method(considering ties), called by ## get_ES function. ### ### Input: ### 1. ddF: a data frame ### AE_NAME OR GROUP_NAME ### <chr> <dbl> <chr> ### Dysgeusia 5.30 Tongue conditions ### ... ... ... ### Tongue disorder 4.15 Tongue conditions ### Paraesthesia oral 4.09 Oral soft tissue conditions ### Paraesthesia oral 4.09 Neurological disorders NEC ### ### 2. miss_ind (vector): if miss, 1; Otherwise, 0 ### 3. p: An exponent p to control the weight of the step. ### ### Output: ### a list with three vectors: P_hit, P_miss and position # 79: ------------------------------------------------------------------------- HitMiss_Curve = function(ddF, miss_ind, p){ # create hit index based on the miss index. # This is to avoid multiple selections of a certain AE which corresponds to # multiple groups. ddF_temp = ddF %>% mutate(miss = miss_ind) %>% mutate(hit = ifelse(miss_ind == TRUE, FALSE, TRUE)) %>% select(AE_NAME, OR, hit, miss) %>% distinct() # found distinct positions position = sapply(unique(ddF_temp$OR), function(x) tail(which(ddF_temp$OR == x), n = 1), simplify = T) # number of different positions n_pos = length(position) # The sum of modified correlation metric(odds ratio here) # reduces to the standard Kolmogorov-Smirnov statistic when p = 0 N_R = ddF_temp %>% filter(hit == TRUE) %>% mutate(OR_p = abs(OR)^p) %>% summarize(Nr = sum(OR_p)) %>% as.numeric() ## Number of miss hits N_miss = ddF_temp %>% summarize(N_M = sum(miss_ind)) %>% as.numeric() if (N_R == 0){ hit_value = rep(0, n_pos) }else{ if(p == 0){ hit_value = cumsum(ddF_temp$hit / N_R) hit_value = hit_value[position] } else{ OR_hit = ddF_temp %>% mutate(OR_p = abs(OR)^p * hit) hit_value = cumsum(OR_hit$OR_p / N_R) hit_value = hit_value[position] } } miss_value = cumsum(ddF_temp$miss / N_miss) miss_value = miss_value[position] return(list(hit = hit_value, miss = miss_value, pos = position)) }
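## Illustrative sketch (not run): a toy ranked AE list with hypothetical AE names,
## where the 2nd and 4th entries are misses for the group of interest.
# ddF_toy <- data.frame(AE_NAME = c("AE1", "AE2", "AE3", "AE4"),
#                       OR = c(4, 3, 2, 1), GROUP_NAME = "G1")
# HitMiss_Curve(ddF_toy, miss_ind = c(FALSE, TRUE, FALSE, TRUE), p = 0)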
## Source: AEenrich/R/HitMiss_curve.R
## Function KS_enrichment: ---------------------------------------------------- ## perform Enrichment analysis using proposed KS score and permutation test ### ### Input: ### 1. data: ### Type I: ID DRUG_TYPE AE_NAME AGE SEX ### 201 FLUN Insomnia 69 F ### ... ... ... ... ... ### 299 FLU Chills 68 M ### ### Type II: DRUG_TYPE AE_NAME COUNT(YES) COUNT(NO) AGE SEX ### FLUN Insomnia 640 6544 69 F ### ... ... ... ... ... ... ### FLU Chills 586 3720 68 M ### ### For Type II data, the 3rd and 4th Columns give the numbers of ### successes(have AE) and failures(Do not have AE) respectively. ### ### 2. dd.group (data.frame): dd.meddra (AE_NAME GROUP_NAME) ### 3. drug.case: Name of the target vaccine. ### 4. drug.control: Name of the Reference vaccine. ### 5. n_perms: Number of permutations to get null distribution of ES. ### 6. p: An exponent p to control the weight of the step. ### 7. zero: logical, if TRUE, calculate zero inflated KS score. If FALSE, ### calculate KS score without zero indicator. ### 8. min_size: The minimum size of group required for enrichment analysis. ### 9. min_AE: The minimum number of cases required to start counting ### for a specific AE. ### 10. cores: The number of cores to use for parallel execution. ### Output: ### a list with 2 data.frames: ### 1. Final_result (data.frame): ### GROUP_NAME ES p_value GROUP_SIZE ### 1 Acid-base disorders 0.0000000 1.000 2 ### 2 Allergic conditions 0.0000000 1.000 41 ### 3 Anaemias nonhaemolytic and marrow depression 0.6874658 0.300 1 ### 4 Ancillary infectious topics 0.4867987 0.206 3 ### 5 Angioedema and urticaria 0.0000000 1.000 17 ### 6 Anxiety disorders and symptoms 0.0000000 1.000 11 ### ### 2. AE_info (data.frame): ### AE_NAME RR p_value L U ### 1 Abdomen scan normal 0.25 0.428 ### 2 Abdominal discomfort 0.137 0.517 ### 3 Abdominal distension 0.269 0.065 ### 4 Abdominal mass 1 0.137 ### 5 Abdominal pain 0.282 0 ### 6 Abdominal pain lower 0.25 0.216 # 79: ------------------------------------------------------------------------- KS_enrichment = function(data, drug.case, drug.control = NULL, covar = NULL, dd.group, n_perms = 1000, p = 0, zero = FALSE, min_size = 5, min_AE = 10, cores = detectCores()){ i <- "Muted" ## check p is between 0 and 1 if(p > 1 | p < 0){ stop("P should take any value between 0 and 1") } n_perms = as.integer(n_perms) ## Calculate the log odds ratio. odds_out = odds_ratio(data, drug.case, drug.control, covar = covar, min_AE = min_AE, cores = cores) ## Convert log odds ratio to odds ratio. OR_data = odds_out$res %>% mutate(OR = exp(BETA)) %>% select(AE_NAME, OR, p_value, `95Lower`, `95Upper`, `se(logOR)`) # Get the group size for groups of interest dd.group = dd.group[!duplicated(dd.group), ] %>% filter(!is.na(GROUP_NAME)) df_size = dd.group %>% filter(AE_NAME %in% odds_out$ae) %>% group_by(GROUP_NAME) %>% summarise(GROUP_SIZE = n(), .groups = 'drop_last') %>% filter(GROUP_SIZE >= as.integer(min_size)) ## filter out groups in which the number of AEs is less than the minimum size dd.group = dd.group %>% filter(GROUP_NAME %in% df_size$GROUP_NAME) %>% filter(AE_NAME %in% odds_out$ae) ## calculate the enrichment score(generalized KS statistic). ks_raw = get_ES(dd.group, OR_data[, 1:2], p, zero) ## Change the ES from long to wide. ks_true = ks_raw %>% as_tibble() %>% pivot_wider(names_from = group, values_from = ES) ## get ready for permutation test. perms = modelr::permute(OR_data, n_perms, OR) ## calculate ES for each permutation. 
perm_object = perms$perm cl = makeCluster(cores) registerDoParallel(cl) models = foreach(i = 1:length(perm_object), .packages = c("tidyverse"), .export = c("get_ES", "HitMiss_Curve") ) %dopar% { dat = perm_object[i] %>% as.data.frame() get_ES(dd.group, dat[,1:2], p, zero) } stopCluster(cl) ## combine all the results of permutations together into a data frame. cl = makeCluster(cores) registerDoParallel(cl) ks_null = foreach(i = 1:length(models), .packages = c("tidyverse"), .combine = bind_rows ) %dopar% { ES_i = models[[i]] %>% as_tibble() %>% pivot_wider(names_from = group, values_from = ES) ES_i } stopCluster(cl) ## add true ES to the end of the data frame. ks_all = ks_true %>% bind_rows(ks_null) ## calculate the p value based on permutation results. p_value_ks = sapply(ks_all, function(x) mean(x[2:n_perms+1] >= x[1])) Final_result_ks = tibble(GROUP_NAME = ks_raw$group, ES = ks_raw$ES, p_value = p_value_ks) Final_result_ks = Final_result_ks %>% left_join(df_size, by = "GROUP_NAME") return(list(Final_result = Final_result_ks, AE_info = OR_data)) }
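## Illustrative call (a sketch, not run): KS_enrichment() is normally reached through
## enrich(method = "aeks"); the argument values mirror the packaged covid2 and group
## example data documented elsewhere in this package.
# KS_enrichment(data = covid2, drug.case = "DrugYes", drug.control = "DrugNo",
#               covar = c("SEX", "AGE"), dd.group = group, n_perms = 100,
#               p = 0, cores = 2)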
## Source: AEenrich/R/KS_enrichment.R
#' Convert data on the report level to aggregated data. #' #' The count_cases function is used to convert data on the report #' level to aggregated data, grouping by specified covariates. #' #' @param data a data.frame with at least 3 columns, consisting data on the report #' level, having ID, Drug type and AE name as the first 3 columns with #' covariates(optional) followed. The order of columns is not interchangeable. #' @param drug.case a character string for the target drug of interest. #' @param drug.control a character string for the reference drug. If NULL(default), #' all other drugs combined are the reference. #' @param covar_cont a character vector of continuous covariates. #' @param covar_disc a character vector of categorical covariates. #' @param breaks a list consists of vectors used for creating specific bins to #' transform continuous covariates into categorical. Breaks Should have the same #' length as covar_cont. Given a vector of non-decreasing breakpoints in `breaks[i]`, #' find the interval containing each element of `covar_cont[i]`; i.e., for each index #' j in `breaks[i]`, value j is assigned to `covar_cont[i]` if and only if `breaks[i][j]` #' `<= covar_cont[i] < breaks[i][j+1]`. #' @param min_AE the minimum number of cases required to start counting #' for a specific AE. Default 10. #' @param cores the number of cores to use for parallel execution. #' #' @return A **data.frame** consists of aggregated data. #' #' The returned data.frame contains the following columns: #' \itemize{ #' \item{DRUG_TYPE: }{type of the drug, DrugYes for target drug and DrugNo for referenced drug} #' \item{AE_NAME: }{the name of the adverse event} #' \item{AEyes: }{number of observations that have this AE} #' \item{AEno: }{number of observations that do not have this AE} #' \item{covariates: }{covariates specifed by user} #' } #' #' @examples #' #' # count_cases(data = covid1, drug.case = "COVID19", drug.control = "OTHER", #' # covar_cont = c("AGE"), covar_disc = c("SEX"), #' # breaks = list(c(16,30,50,65,120))) #' count_cases = function(data, drug.case = drug.case, drug.control = NULL, covar_disc = NULL, covar_cont = NULL, breaks = NULL, cores = detectCores(), min_AE = 10){ . 
<- "Muted" data = as_tibble(data) ## check the breaks-covar_cont pairs if(!length(breaks) == length(covar_cont)){ stop("The length of breaks does not match that of continuous covariates") } ## check the basic first three columns if(length(names(data)) < 3){ stop("Unexpected data type") } ## check the existence of continuous covariates if(!is.null(covar_cont)){ if(!all(covar_cont %in% names(data))){ stop("nonexistent column") } else { if(!all(sapply(data, is.numeric)[covar_cont])){ stop("Discrete covariates misclassified as continuous") } } } ## check the existence of discrete covariates if(!is.null(covar_disc)){ if(!all(covar_disc %in% names(data))){ stop("nonexistent column") }else { if(any(sapply(data, is.numeric)[covar_disc])){ stop("Continuous covariates misclassified as discrete") } } } ## rename the first three columns names(data)[1:3] = c('ID', 'DRUG_TYPE', 'AE_NAME') ## filter by drug.case and drug.control if(!is.null(drug.control)){ drug_list = c(drug.case, drug.control) data = data[data$DRUG_TYPE %in% drug_list, ] } ## remove NAs rename drug type based on drug.case data = data[complete.cases(data), ] %>% mutate(DRUG_TYPE = ifelse(DRUG_TYPE %in% drug.case, "DrugYes", "DrugNo") ) ## filter out reports with both case and control vaccines ID_No = data %>% filter(DRUG_TYPE == "DrugNo") %>% .$ID %>% unique() ID_Yes = data %>% filter(DRUG_TYPE == "DrugYes") %>% .$ID %>% unique() Confused_ID = intersect(ID_No, ID_Yes) ## convert all the character variable to factor data_temp = data %>% filter(!ID %in% Confused_ID) ## for continuous covariates, classifying each obs by argument 'breaks' if(!is.null(covar_cont)){ for(i in 1:length(breaks)){ breaks_temp = breaks[[i]] var_temp = covar_cont[i] covar_group = data_temp %>% dplyr::select(all_of(var_temp)) %>% .[[1]] %>% findInterval(breaks_temp) data_temp = data_temp %>% dplyr::select(-all_of(var_temp)) %>% tibble({{var_temp}} := covar_group) %>% filter(!!as.symbol(var_temp) != 0) %>% filter(!!as.symbol(var_temp) != length(breaks_temp)) %>% mutate(!!as.symbol(var_temp) := as.factor(!!as.symbol(var_temp))) } } ## filter out AE with less than 10 observations AE_list = data_temp %>% group_by(AE_NAME) %>% summarise(count = n()) %>% filter(count >= as.integer(min_AE)) %>% .$AE_NAME %>% as.character() data_comp = data_temp %>% filter(AE_NAME %in% AE_list) %>% mutate_if(sapply(data, is.character), as.factor) cl = makeCluster(cores) registerDoParallel(cl) results = foreach(i = 1:length(AE_list), .packages = c("tidyverse"), .combine = bind_rows ) %dopar% { AE = AE_list[i] AE_yes = data_comp %>% filter(AE_NAME == AE) %>% mutate(AE_NAME = "AEYes") %>% distinct(ID, .keep_all = TRUE) ## filter by ID AE_no = data_comp %>% filter(! ID %in% AE_yes$ID) %>% mutate(AE_NAME = "AENo") %>% distinct(ID, .keep_all = TRUE) ## combine together data_AE = AE_yes %>% bind_rows(AE_no) %>% mutate(AE_NAME = as.factor(AE_NAME)) ## grouping variables grp_cols = c("DRUG_TYPE", "AE_NAME", covar_cont, covar_disc) ## convertnto count data_count = data_AE %>% dplyr::select(-ID) %>% group_by_at(grp_cols) %>% summarise(count = n(), .groups = "drop") %>% pivot_wider(names_from = AE_NAME, values_from = count) %>% mutate(AEYes = replace_na(AEYes, 0), AENo = replace_na(AENo, 0)) %>% mutate(AE_NAME = {{AE}}) %>% relocate(DRUG_TYPE, AE_NAME, AEYes, AENo) data_count } stopCluster(cl) return(results) } #' @description The count_cases function is used to convert data on the report #' level to aggregated data, grouping by specified covariates. 
#' #' Use the function `count_cases` to convert report level data into aggregated data. #' #' See our \href{https://github.com/umich-biostatistics/AEenrich}{Github home page} #' or run ?count_cases for examples. "_PACKAGE"
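## Sketch of a typical workflow (not run): aggregate report-level data with
## count_cases() and pass the result to enrich(); the argument values mirror the
## commented example above, and the aggregated drug labels become DrugYes/DrugNo.
# covid_agg <- count_cases(data = covid1, drug.case = "COVID19",
#                          drug.control = "OTHER", covar_cont = c("AGE"),
#                          covar_disc = c("SEX"),
#                          breaks = list(c(16, 30, 50, 65, 120)))
# enrich(data = covid_agg, covar = c("SEX", "AGE"), drug.case = "DrugYes",
#        drug.control = "DrugNo", dd.group = group, method = "aeks")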
## Source: AEenrich/R/count_cases.R
#' Covid Vaccine Adverse Event Data
#'
#' Adverse event data in the long format. Each row is a single adverse
#' event, along with covariates.
#'
#' \itemize{
#'   \item VAERS_ID Event ID
#'   \item VAX_LABEL Vaccine type
#'   \item AE_NAME Adverse event name
#'   \item AGE covariate
#'   \item SEX covariate
#' }
#'
"covid1"

#' Covid Vaccine Adverse Event Data
#'
#' Adverse event data in the short format. Each row is a count of adverse
#' events with the given name.
#'
#' \itemize{
#'   \item DRUG_TYPE Vaccine type
#'   \item AE_NAME Adverse event name
#'   \item AEYes Number of observations that have this AE
#'   \item AENo Number of observations that do not have this AE
#'   \item AGE covariate
#'   \item SEX covariate
#' }
#'
"covid2"

#' Group Structure Data
#'
#' Identifies which group each set of adverse events belongs to.
#'
#' \itemize{
#'   \item AE_NAME Adverse event name
#'   \item GROUP_NAME Group name
#' }
#'
"group"
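## Quick look at the packaged datasets (a hedged illustration, not run):
# data("covid1", package = "AEenrich"); str(covid1)
# data("covid2", package = "AEenrich"); str(covid2)
# data("group", package = "AEenrich"); head(group)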
## Source: AEenrich/R/data.R
#' Perform Adverse Event Enrichment Tests #' #' The enrich function is used to perform Adverse event (AE) enrichment analysis. #' Unlike the continuous gene expression data, AE data are counts. Therefore, #' AE data has many zeros and ties. We propose two enrichment tests. AEFisher is #' a modified Fisher's exact test based on pre-selected significant AEs, while #' AEKS is based on a modified Kolmogorov-Smirnov statistic. #' #' @param data a data.frame. Two data types are allowed. Type I data consisting #' data on the report level, having ID, Drug type and AE name as the first 3 #' columns with covariates(optional) followed. Type II data have drug type and #' AE name as the first two columns, with the 3rd and 4th Columns giving the #' numbers of successes(have AE) and failures(Do not have AE) respectively, then #' followed by covariates. See example data for details. #' @param dd.group a data.frame with AE name and Group name. This data.frame have #' the group information for each individual AE. #' @param drug.case a character string for the target drug of interest. #' @param drug.control a character string for the reference drug. If NULL(default), #' all other drugs combined are the reference. #' @param method a character string specifying the method for the enrichment test. #' It must take "aeks" (default) or "aefisher"; "aeks" is the rank-based #' enrichment test, and "aefisher" is the Fisher enrichment test. See details #' described in the paper (see reference section of this document). #' @param n_perms an integer value specifying the number of permutations in #' permutation test. #' @param covar a character vector specifying the columns of covariates, default #' NULL. #' @param p a numerical value to control the weight of the step, can take any #' value between 0 and 1. If 0(default), reduces to the standard Kolmogorov-Smirnov #' statistics. #' @param q.cut a numerical value specifying the significance cut for q value #' of AEs in aefisher. #' @param or.cut a numerical value specifying the significance cut for odds ratio #' of AEs in aefisher. #' @param zero logical, default FALSE.If TRUE, add zero indicator to enrichment score. #' @param min_size the minimum size of group required for enrichment analysis. #' @param min_AE the minimum number of cases required to start counting #' for a specific AE. #' @param cores the number of cores to use for parallel execution. #' @references Li, S. and Zhao, L. (2020). Adverse event enrichment tests using #' VAERS. \href{https://arxiv.org/abs/2007.02266}{arXiv:2007.02266}. #' #' Subramanian, A.e.a. (2005). Gene set enrichment analysis: a knowledge-based #' approach for interpreting genome-wide expression profiles. Proc Natl Acad #' Sci U S A. Proceedings of the National Academy of Sciences. 102. 15545-15550. #' #' Tian, Lu & Greenberg, Steven & Kong, Sek Won & Altschuler, Josiah & Kohane, Isaac & Park, #' Peter. (2005). Discovering statistically significant pathways in expression profiling studies. #' Proceedings of the National Academy of Sciences of the United States of America. #' 102. 13544-9. 10.1073/pnas.0506577102. #' #' @return A list containing 2 data.frames named **Final_result** and **AE_info**. 
#' #' The **Final_result** data.frame contains the following columns: #' \itemize{ #' \item{GROUP_NAME: }{AE group names} #' \item{ES: }{enrichment score} #' \item{p_value: }{p value of the enrichment test} #' \item{GROUP_SIZE: }{number of AEs per group} #' } #' #' The **AE_info** contains the following columns: #' \itemize{ #' \item{AE_NAME: }{AE names} #' \item{OR: }{odds ratio for each individual AE} #' \item{p_value: }{p value for AE-drug association} #' \item{95Lower: }{lower bound of 95 percent confidence interval of odds ratio} #' \item{95Lower: }{upper bound of 95 percent confidence interval of odds ratio} #' \item{se(logOR): }{standard error of log odds ratio} #' } #' #' @examples #' #' \donttest{ #'# AEKS #' #'### Type I data: data on report level #'# enrich(data = covid1, covar = c("SEX", "AGE"), p = 0, method = "aeks", #'# n_perms = 1000, drug.case = "COVID19", dd.group = group, cores = 2, #'# drug.control = "OTHER", min_size = 5, min_AE = 10, zero = FALSE) #' #'## Type II data: aggregated data #'# enrich(data = covid2, covar = c("SEX", "AGE"), p = 0, method = "aeks", #'# n_perms = 1000, drug.case = "DrugYes", dd.group = group, cores = 2, #'# drug.control = "DrugNo", min_size = 5, min_AE = 10) #' #'# AEFISHER #'## Type I data: data on report level #'# enrich(data = covid1, covar = c("SEX", "AGE"), p = 0, method = "aefisher", #'# n_perms = 1000, drug.case = "COVID19", dd.group = group, #'# drug.control = "OTHER", min_size = 5, min_AE = 10, q.cut = 0.05, #'# or.cut = 1.5, cores = 2) #' #'## Type II data: aggregated data #'# enrich(data = covid2, covar = c("SEX", "AGE"), p = 0, method = "aefisher", #'# n_perms = 1000, drug.case = "DrugYes", dd.group = group, #'# drug.control = "DrugNo", min_size = 5, min_AE = 10, cores = 2) #' } enrich = function(data, dd.group, drug.case, drug.control = NULL, method = 'aeks', n_perms = 1000, covar = NULL, p = 0, q.cut = 0.1, or.cut = 1.5, zero = FALSE, min_size = 5, min_AE = 10, cores = detectCores() ) { names(dd.group) = c('AE_NAME', 'GROUP_NAME') if (method == 'aeks'){ KS_result = KS_enrichment(data, drug.case, drug.control, covar = covar, dd.group = dd.group, n_perms = n_perms, p = p, zero = zero, min_size = min_size, min_AE = min_AE, cores = cores) return(KS_result) }else if (method == 'aefisher'){ fisher_result = Fisher_enrichment(data, dd.group, drug.case, drug.control, n_perms = n_perms, q.cut = q.cut, or.cut = or.cut, zero = zero, min_size = min_size, covar = covar, min_AE = min_AE, cores = cores) return(fisher_result) }else{ stop('Please choose one of two methods: aeks or fisher') } } #' @description Perform Adverse Event Enrichment Tests #' The enrich function is used to perform Adverse event (AE) enrichment analysis. #' Unlike the continuous gene expression data, AE data are counts. Therefore, #' AE data has many zeros and ties. We propose two enrichment tests. AEFisher is #' a modified Fisher's exact test based on pre-selected significant AEs, while #' AEKS is based on a modified Kolmogorov-Smirnov statistic. #' #' Use the function `enrich` to fit models and inspect results. #' #' See our \href{https://github.com/umich-biostatistics/AEenrich}{Github home page} #' or run ?enrich for examples. "_PACKAGE"
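## Inspecting results (a sketch, not run): enrich() returns a list with two data
## frames, Final_result and AE_info, as documented above.
# res <- enrich(data = covid2, drug.case = "DrugYes", drug.control = "DrugNo",
#               dd.group = group, method = "aeks", cores = 2)
# head(res$Final_result[order(res$Final_result$p_value), ])
# head(res$AE_info)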
## Source: AEenrich/R/enrich.R
## Function fisher_test: ------------------------------------------------------ ## Permute significant AE label; for each permutated data, construct a ## contingency table for each Group and compute OR*Zero indicator. ## grp ## sig 0 1 ## 0 1614 17 ## 1 197 0 ## Used in Fisher_enrichment function ### Input: ### 1. dd.group (data.frame): dd.meddra (AE_NAME GROUP_NAME) ### ### 2. fisher_res (data.frame): ### AE_NAME OR qval isRatio0 ### ### 1 Abdomen scan normal 0.25 0.611 FALSE ### 2 Abdominal discomfort 0.137 0.684 FALSE ### 3 Abdominal distension 0.269 0.273 FALSE ### ### 3. q.cut (numerical value): q value cut deciding the significance of ### each AE ### 4. or.cut (numerical value): odds ratio cut deciding the significance of ### each AE ### 5. n_perms: number of permutations ### 6. zero: Default False, perform classic fisher exact test. If ### True, add zero indicator to the Enrichment score. ### ### Output: ### 1. Final_result (data.frame): ### GROUP_NAME ES p_value ### 1 Acid-base disorders 0 1 ### 2 Allergic conditions 0 1 ### 3 Anaemias nonhaemolytic and marrow depression 0 1 ### 4 Ancillary infectious topics 0 1 ### 5 Angioedema and urticaria 0 1 ### 6 Anxiety disorders and symptoms 0 1 #------------------------------------------------------------------------------ fisher_test = function(dd.group, fisher_res, q.cut, or.cut, n_perms, zero = FALSE){ # add statistics for AE not mentioned with the target vaccine. # (OR = 0, q = 1, isRatio0 = TRUE) . <- "Muted" ddF = fisher_res %>% right_join(dd.group, by = "AE_NAME") %>% mutate(OR = coalesce(OR, 0), qval = coalesce(qval, 1), isRatio0 = coalesce(isRatio0, TRUE)) # determine the significance of each AE ddF$sig = factor(ifelse( (ddF$qval < q.cut) & (ddF$OR>or.cut), 1, 0 ), levels = c(0,1) ) # get interesting group name group.enrich = ddF %>% arrange(GROUP_NAME) %>% distinct(GROUP_NAME) %>% .[[1]] ng = length(group.enrich) # put group name together for a single AE. 
# For example, if AE "pain" belongs to two groups "G1" and "G2", the original # data frame looks like # AE_NAME OR isRatio0 sig GROUP_NAME # pain 1.3 FALSE 0 G1 # pain 1.3 FALSE 0 G2 # But after we summarise, it will be like # AE_NAME OR isRatio0 sig GROUP_NAME # pain 1.3 FALSE 0 list(G1,G2) ddF_new = ddF %>% group_by(AE_NAME) %>% summarise(OR = OR[1], isRatio0 = isRatio0[1], sig = sig[1], GROUP_NAME = list(GROUP_NAME)) # Calculate true ES # initialize values ES = c() grp_info = list() ## Classic fisher exact test or including zero indicator(zero) if(zero == TRUE){ for(j in 1:ng){ # This group indicator function can handle one AE with multiple groups grp = sapply(ddF_new$GROUP_NAME, function(x) group.enrich[j] %in% x) # store the grp info so that can speed up the program grp_info[[j]] = grp # calculate 0 proportion for each group in_0 = sum(ddF_new$isRatio0[grp == T]) / sum(grp) out_0 = sum(ddF_new$isRatio0[grp == F]) / sum(grp == F) # do one-sided fisher's exact test table2 = table(sig = ddF_new$sig, grp) or = table2[1,1] * table2[2,2] / (table2[1,2] * table2[2,1]) # in_0 is the ratio of zero (#zero AE/ total #AE in target group) for target drug; # out_0 is the ratio of zero (#zero AE/ total #AE in other groups) for target drug; ES[j] = or * (in_0 <= out_0) if (is.na(ES[j])){ ES[j] = 0 # Inf*0 = NAN } } # get the index of ES not 0 index = which(ES!=0) # Calculate the null distribution of ES ES_null_df = data.frame(matrix(NA, ncol = length(ES), nrow = n_perms)) for (perm in 1:n_perms){ # use permutation to construct null data.frame ind = sample(nrow(ddF_new)) ddF_null = ddF_new %>% mutate(sig = sig[ind], isRatio0 = isRatio0[ind]) ES_null = rep(0, ng) for(j in index){ grp = grp_info[[j]] # calculate 0 proportion in_0 = sum(ddF_null$isRatio0[grp == T]) / sum(grp) out_0 = sum(ddF_null$isRatio0[grp == F]) / sum(grp == F) # do one-sided fisher's exact test table2 = table(sig = ddF_null$sig,grp) or = table2[1,1] * table2[2,2] / (table2[1,2] * table2[2,1]) # in_0 is the ratio of zero (#zero AE/ total #AE in target group) for target drug; # out_0 is the ratio of zero (#zero AE/ total #AE in other groups) for target drug; ES_null[j] = or*(in_0<=out_0) if (is.na(ES_null[j])){ ES_null[j] = 0 # Inf*0 = NAN } } ES_null_df[perm,] = ES_null } # calculate tail probability (p value) ES_true_df = data.frame(matrix(NA, nrow = 1, ncol = length(ES))) ES_true_df[1,] = ES # combines true ES and permuted ES ES_all = rbind(ES_null_df, ES_true_df) p_value = sapply(ES_all, function(x) mean(x[1:n_perms]>=x[n_perms+1])) res = cbind.data.frame(GROUP_NAME = group.enrich, ES = ES, p_value = p_value) } else { for(j in 1:ng){ # This group indicator function can handle one AE with multiple groups grp = sapply(ddF_new$GROUP_NAME, function(x) group.enrich[j] %in% x) # store the grp info so that can speed up the program grp_info[[j]] = grp # do one-sided fisher's exact test table2 = table(sig = ddF_new$sig, grp) or = table2[1,1] * table2[2,2] / (table2[1,2] * table2[2,1]) # in_0 is the ratio of zero (#zero AE/ total #AE in target group) for target drug; # out_0 is the ratio of zero (#zero AE/ total #AE in other groups) for target drug; ES[j] = or if (is.na(ES[j])){ ES[j] = 0 # Inf*0 = NAN } } # get the index of ES not 0 index = which(ES != 0) # Calculate the null distribution of ES ES_null_df = data.frame(matrix(NA, ncol = length(ES), nrow = n_perms)) for (perm in 1:n_perms){ # use permutation to construct null data.frame ind = sample(nrow(ddF_new)) ddF_null = ddF_new %>% mutate(sig = sig[ind], isRatio0 = isRatio0[ind]) 
ES_null = rep(0, ng) for(j in index){ grp = grp_info[[j]] # do one-sided fisher's exact test table2 = table(sig = ddF_null$sig,grp) or = table2[1,1] * table2[2,2] / (table2[1,2] * table2[2,1]) # in_0 is the ratio of zero (#zero AE/ total #AE in target group) for target drug; # out_0 is the ratio of zero (#zero AE/ total #AE in other groups) for target drug; ES_null[j] = or if (is.na(ES_null[j])){ ES_null[j] = 0 # Inf*0 = NAN } } ES_null_df[perm,] = ES_null } # calculate tail probability (p value) ES_true_df = data.frame(matrix(NA, nrow = 1, ncol = length(ES))) ES_true_df[1,] = ES # combines true ES and permuted ES ES_all = rbind(ES_null_df, ES_true_df) p_value = sapply(ES_all, function(x) mean(x[1:n_perms] >= x[n_perms+1])) res = cbind.data.frame(GROUP_NAME = group.enrich, ES = ES, p_value = p_value) } return(res) }
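## Worked 2x2 illustration (hedged, using the contingency table sketched in the
## header comment): the enrichment score is the odds ratio of AE significance
## against group membership.
# table2 <- matrix(c(1614, 197, 17, 0), nrow = 2,
#                  dimnames = list(sig = c(0, 1), grp = c(0, 1)))
# or <- table2[1, 1] * table2[2, 2] / (table2[1, 2] * table2[2, 1])  # 0 here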
## Source: AEenrich/R/fisher_test.R
## Function get_ES: ----------------------------------------------------------- ## Calculate Enrichment Score over AE Groups, called by KS_enrichment function. ### ### Input: ### 1. dd.group (data.frame): dd.meddra (AE_NAME GROUP_NAME) ### 2. data_Ratio (data.frame): AE_NAME OR ### <chr> <dbl> ### Injected limb mobility decreased 0.0914 ### Injection site joint pain 0.206 ### ... ... ### Arthralgia 1.25 ### 3. p: An exponent p to control the weight of the step. Default 0, which ### corresponds to standard Kolmogorov-Smirnov statistic. ### 4. zero: logical, if TRUE, calculate zero inflated KS score. If FALSE, ### calculate KS score without zero indicator. ### ### Output: ### a data.frame: group ES ### Respiratory 0.83 ### ... ... ### Infections 0.46 # 79: ------------------------------------------------------------------------- get_ES = function(dd.group, data_Ratio, p, zero){ . <- "Muted" # add zero to AE if it was not mentioned with the target vaccine ddF = data_Ratio %>% right_join(dd.group, by = "AE_NAME") %>% mutate(OR = coalesce(OR, 0)) %>% arrange(desc(OR)) # order by odds ratio # check if there are 0's flag_0 = any(ddF$OR == 0) # get interesting group names group.enrich = ddF %>% .$GROUP_NAME %>% unique() ng = length(group.enrich) get_score = function(ddF, j, p, flag_0, zero){ # check which AE in this group hit_ind = ddF$GROUP_NAME == group.enrich[j] AE_vec = (ddF$AE_NAME)[hit_ind] # get the miss index (handle one AE with multiple groups) miss_ind = (hit_ind == FALSE & !(ddF$AE_NAME %in% AE_vec)) h_m_lst = HitMiss_Curve(ddF, miss_ind = miss_ind, p) h_vec = h_m_lst$hit m_vec = h_m_lst$miss position = h_m_lst$pos n_pos = length(position) if ((flag_0) & (zero == TRUE)){ n_pos = length(position) ES = max((h_vec-m_vec)[1:(n_pos-1)])*((h_vec-m_vec)[n_pos-1]>=0) }else{ ES = max(h_vec-m_vec) } ES } ES_vec = sapply(1:ng, function(x) get_score(ddF, x, p, flag_0, zero)) return(data.frame(group = group.enrich, ES = ES_vec)) }
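## Illustrative call (not run): enrichment score for a tiny hypothetical ranking
## with two groups, so that each group has both hits and misses.
# toy_ratio <- data.frame(AE_NAME = c("AE1", "AE2", "AE3"), OR = c(3, 2, 0.5))
# toy_group <- data.frame(AE_NAME = c("AE1", "AE2", "AE3"),
#                         GROUP_NAME = c("G1", "G2", "G1"))
# get_ES(toy_group, toy_ratio, p = 0, zero = FALSE)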
## Source: AEenrich/R/get_ES.R
## Function odds_ratio: ------------------------------------------------------- ## Estimate log odds ratio for each AE, perform Logistic regression on data, ## use Logit as link function. ### ### Input: ### 1. data: ### Type I: ID DRUG_TYPE AE_NAME AGE SEX ### 201 FLUN Insomnia 69 F ### ... ... ... ... ... ### 299 FLU Chills 68 M ### ### Type II: DRUG_TYPE AE_NAME COUNT(YES) COUNT(NO) AGE SEX ### FLUN Insomnia 640 6544 69 F ### ... ... ... ... ... ... ### FLU Chills 586 3720 68 M ### ### For Type II data, the 3rd and 4th Columns give the numbers of ### successes(have AE) and failures(Do not have AE) respectively. ### ### 2. drug.case: Target vaccine type ### 3. drug.control: Reference vaccine type ### 4. covar: Covariates for logistic regression ### 5. min_AE: The minimum number of cases required to start counting ### for a specific AE. ### 6. cores: The number of cores to use for parallel execution. ### Output: ### a tibble(data.frame) with six columns: ### ### AE_NAME BETA p_value `95Lower` ### <chr> <dbl> <dbl> <dbl> ### Injection site joint pain -1.58 4.75e- 18 ... ### Apathy -0.166 7.02e- 1 ... ### Arthralgia 0.225 2.07e- 24 ... ### ### `95Upper` `se(logOR)` ### <dbl> <dbl> ### ... ... ### ... ... ### ... ... ### AE_NAME: The name of the adverse event. ### BETA: Log odds ratio of AEs. ### p_value: p value of log odds. ### `95Lower` and `95Upper`: The lower/upper bound of confidence ### interval of odds ratio. # 79: ------------------------------------------------------------------------- odds_ratio = function(data, drug.case, drug.control = NULL, covar = NULL, min_AE = 10, cores = detectCores()){ i <- "Muted" . <- "Muted" data = as_tibble(data) if(!is.null(covar)){ if(!all(covar %in% names(data))){ stop("covariates not found") } } ## Check data type if (!sapply(data, is.numeric)[3]){ ## change the names of columns names(data)[1:3] = c('ID', 'DRUG_TYPE', 'AE_NAME') if(!is.null(drug.control)){ drug_list = c(drug.case, drug.control) data = data[data$DRUG_TYPE %in% drug_list, ] } ## remove NA data = data[complete.cases(data), ] data = data %>% mutate(DRUG_TYPE = ifelse(DRUG_TYPE %in% drug.case, "DrugYes", "DrugNo") ) ## filter out AE with less than 10 observations AE_list = data %>% group_by(AE_NAME) %>% summarise(count = n()) %>% filter(count >= as.integer(min_AE)) %>% .$AE_NAME %>% as.character() ## Convert character columns to factor data_temp = data %>% filter(AE_NAME %in% AE_list) %>% mutate_if(sapply(data, is.character), as.factor) data_comp = data_temp ## A set consists of unique AE names AE_SET = unique(as.character(data_comp$AE_NAME)) cl = makeCluster(cores) registerDoParallel(cl) results = foreach(i = 1:length(AE_SET), .packages = c("tidyverse"), .combine = bind_rows ) %dopar% { AE = AE_SET[i] ## those have AE AE_yes = data_comp %>% filter(AE_NAME == AE) %>% mutate(AE_NAME = "AEYes") %>% distinct(ID, .keep_all = TRUE) ## those who don't ID_list = AE_yes$ID ## filter by ID AE_no = data_comp %>% filter(! 
ID %in% ID_list) %>% mutate(AE_NAME = "AENo") %>% distinct(ID, .keep_all = TRUE) ## combine data together data_AE = AE_yes %>% bind_rows(AE_no) %>% mutate(AE_NAME = as.factor(AE_NAME)) covar_formula = ifelse(is.null(covar), "", paste("+", paste(covar, collapse = " + "))) str_formula = as.formula(paste("AE_NAME ~ DRUG_TYPE", covar_formula)) ## Logistic regression mod = glm(formula = str_formula, family = binomial(link = logit), data = data_AE) ## return the log odds ratio tibble(AE_NAME = AE, BETA = coef(mod)["DRUG_TYPEDrugYes"], p_value = coef(summary(mod))[,4]["DRUG_TYPEDrugYes"], `95Lower` = exp(confint.default(mod)[2,])[1], `95Upper` = exp(confint.default(mod)[2,])[2], `se(logOR)` = coef(summary(mod))[,2]["DRUG_TYPEDrugYes"]) } stopCluster(cl) } else { if (!sapply(data, is.numeric)[4]){ stop("Invalid data type") } ## change the names of columns names(data)[1:4] = c('DRUG_TYPE', 'AE_NAME', 'YES', 'NO') if(!is.null(drug.control)){ drug_list = c(drug.case, drug.control) data = data[data$DRUG_TYPE %in% drug_list, ] } ## remove NA data = data[complete.cases(data), ] data = data %>% mutate(DRUG_TYPE = ifelse(DRUG_TYPE %in% drug.case, "DrugYes", "DrugNo") ) ## Type two data, so every covariate should be factor if(length(names(data)) == 4){ index = 1:2 } else{ index = c(1:2, 5:length(names(data))) } data_temp = data %>% mutate_at(.vars = index, as.factor) # filter out AEs with less than 10 observations AE_list = data_temp %>% group_by(AE_NAME) %>% summarize(count = sum(YES)) %>% filter(count >= as.integer(min_AE)) %>% .$AE_NAME %>% as.character() data_comp = data_temp %>% filter(AE_NAME %in% AE_list) ## A set consists of AE names AE_SET = unique(as.character(data_comp$AE_NAME)) cl = makeCluster(cores) registerDoParallel(cl) results = foreach(i = 1:length(AE_SET), .packages = c("tidyverse"), .combine = bind_rows ) %dopar% { AE = AE_SET[i] data_count = data_comp %>% filter(AE_NAME == {{AE}}) %>% mutate(YES = YES + 1, NO = NO + 1) covar_formula = ifelse(is.null(covar), "", paste("+", paste(covar, collapse = " + "))) str_formula = as.formula(paste("cbind(YES, NO) ~ DRUG_TYPE", covar_formula)) ## Logistic regression mod = glm(formula = str_formula, family = binomial(link = logit), data = data_count) ## return the log odds ratio tibble(AE_NAME = AE, BETA = coef(mod)["DRUG_TYPEDrugYes"], p_value = coef(summary(mod))[,4]["DRUG_TYPEDrugYes"], `95Lower` = exp(confint.default(mod)[2,])[1], `95Upper` = exp(confint.default(mod)[2,])[2], `se(logOR)` = coef(summary(mod))[,2]["DRUG_TYPEDrugYes"]) } stopCluster(cl) } return(list(res = results, ae = AE_list)) }
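## The core model fit, in miniature (a hedged sketch using the made-up counts from
## the header comment): for a single AE, a binomial GLM of (AEYes, AENo) counts on
## drug type; the exponentiated coefficient is the odds ratio for the target drug.
# toy <- data.frame(DRUG_TYPE = factor(c("DrugYes", "DrugNo")),
#                   YES = c(640, 586), NO = c(6544, 3720))
# mod <- glm(cbind(YES, NO) ~ DRUG_TYPE, family = binomial(link = logit), data = toy)
# exp(coef(mod)["DRUG_TYPEDrugYes"])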
## Source: AEenrich/R/odds_ratio.R
AENo <- AEYes <- count <- AE_NAME <- DRUG_TYPE <- Freq <- GROUP_NAME <- NULL
YES <- ID <- OR <- Ratio <- n_AE <- p_value <- qval <- isRatio0 <- sig <- NULL
NO <- miss <- OR_p <- N_M <- hit <- BETA <- `95Lower` <- `95Upper` <- NULL
ES <- group <- GROUP_SIZE <- Nr <- `se(logOR)` <- NULL
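## These NULL assignments are a common device for silencing R CMD check NOTEs about
## "no visible binding for global variable" triggered by dplyr's non-standard
## evaluation in the functions above.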
## Source: AEenrich/R/set_null.R
############## AF function for matched and unmatched case-control ##################### #' @title Attributable fraction for mached and non-matched case-control sampling designs. NOTE! Deprecated function. Use \code{\link[AF]{AFglm}} (for unmatched case-control studies) or \code{\link[AF]{AFclogit}} (for matched case-control studies). #' @description \code{AF.cc} estimates the model-based adjusted attributable fraction for data from matched and non-matched case-control sampling designs. #' @param formula an object of class "\code{formula}" (or one that can be coerced to that class): a symbolic description of the model used for confounder adjustment. The exposure and confounders should be specified as independent (right-hand side) variables. The outcome should be specified as dependent (left-hand side) variable. The formula is used to object a logistic regression by \code{\link[stats]{glm}} for non-matched case-control and conditional logistic regression by \code{\link[drgee]{gee}} (in package \code{\link[drgee]{drgee}}) for matched case-control. #' @param data an optional data frame, list or environment (or object coercible by \code{as.data.frame} to a data frame) containing the variables in the model. If not found in \code{data}, the variables are taken from environment (\code{formula}), typically the environment from which the function is called. #' @param exposure the name of the exposure variable as a string. The exposure must be binary (0/1) where unexposed is coded as 0. #' @param matched a logical that specifies if the sampling design is matched (TRUE) or non-matched (FALSE) case-control. Default setting is non-matched (\code{matched = FALSE}). #' @param clusterid the name of the cluster identifier variable as a string, if data are clustered (e.g. matched). #' @return \item{AF.est}{estimated attributable fraction.} #' @return \item{AF.var}{estimated variance of \code{AF.est}. The variance is obtained by combining the delta methods with the sandwich formula.} #' @return \item{log.or}{a vector of the estimated log odds ratio for every individual. \code{log.or} contains the estimated coefficient for the exposure variable \code{X} for every level of the confounder \code{Z} as specified by the user in the formula. If the model to be estimated is #' \deqn{logit\{Pr(Y=1|X,Z)\} = \alpha+\beta{X}+\gamma{Z}}{logit {Pr(Y=1|X,Z)} = \alpha + \beta X + \gamma Z} #' then \code{log.or} is the estimate of \eqn{\beta}. #' If the model to be estimated is #' \deqn{logit\{Pr(Y=1|X,Z)\}=\alpha+\beta{X}+\gamma{Z}+\psi{XZ}}{logit{Pr(Y=1|X,Z)} = \alpha + \beta X +\gamma Z +\psi XZ} #' then \code{log.odds} is the estimate of #' \eqn{\beta + \psi{Z}}{\beta + \psi Z}.} #' @return \item{object}{the fitted model. Fitted using logistic regression, \code{\link{glm}}, for non-matched case-control and conditional logistic regression, \code{\link[drgee]{gee}}, for matched case-control.} #' @details \code{Af.cc} estimates the attributable fraction for a binary outcome \code{Y} #' under the hypothetical scenario where a binary exposure \code{X} is eliminated from the population. #' The estimate is adjusted for confounders \code{Z} by logistic regression for unmatched case-control (\code{\link[stats]{glm}}) and conditional logistic regression for matched case-control (\code{\link[drgee]{gee}}). #' The estimation assumes that the outcome is rare so that the risk ratio can be approximated by the odds ratio, for details see Bruzzi et. al. 
#' Let the AF be defined as #' \deqn{AF = 1 - \frac{Pr(Y_0=1)}{Pr(Y = 1)}}{AF = 1 - Pr(Y0 = 1) / Pr(Y = 1)} #' where \eqn{Pr(Y_0=1)}{Pr(Y0 = 1)} denotes the counterfactual probability of the outcome if #' the exposure would have been eliminated from the population. If \code{Z} is sufficient for confounding control then the probability \eqn{Pr(Y_0=1)}{Pr(Y0 = 1)} can be expressed as #' \deqn{Pr(Y_0=1)=E_Z\{Pr(Y=1\mid{X}=0,Z)\}.}{Pr(Y0=1) = E_z{Pr(Y = 1 | X = 0, Z)}.} #' Using Bayes' theorem this implies that the AF can be expressed as #' \deqn{AF = 1-\frac{E_Z\{Pr(Y=1\mid X=0,Z)\}}{Pr(Y=1)}=1-E_Z\{RR^{-X}(Z)\mid{Y = 1}\}}{ #' AF = 1 - E_z{Pr( Y = 1 | X = 0, Z)} / Pr(Y = 1) = 1 - E_z{RR^{-X} (Z) | Y = 1}} #' where \eqn{RR(Z)} is the risk ratio \deqn{\frac{Pr(Y=1\mid{X=1,Z})}{Pr(Y=1\mid{X=0,Z})}.}{Pr(Y = 1 | X = 1,Z)/Pr(Y=1 | X = 0, Z).} #' Moreover, the risk ratio can be approximated by the odds ratio if the outcome is rare. Thus, #' \deqn{ AF \approx 1 - E_Z\{OR^{-X}(Z)\mid{Y = 1}\}.}{AF is approximately 1 - E_z{OR^{-X}(Z) | Y = 1}.} #' The odds ratio is estimated by logistic regression or conditional logistic regression. #' If \code{clusterid} is supplied, then a clustered sandwich formula is used in all variance calculations. #' @author Elisabeth Dahlqwist, Arvid \enc{Sjölander}{Sjolander} #' @seealso The new and more general version of the function: \code{\link[AF]{AFglm}} for non-matched and \code{\link[AF]{AFclogit}} for matched case-control sampling designs. \code{\link[stats]{glm}} and \code{\link[drgee]{gee}} used for fitting the logistic regression model (for non-matched case-control) and the conditional logistic regression model (for matched case-control). #' @references Bruzzi, P., Green, S. B., Byar, D., Brinton, L. A., and Schairer, C. (1985). Estimating the population attributable risk for multiple risk factors using case-control data. \emph{American Journal of Epidemiology} \bold{122}, 904-914. 
#' @examples #' expit <- function(x) 1 / (1 + exp( - x)) #' NN <- 1000000 #' n <- 500 #' #' # Example 1: non matched case-control #' # Simulate a sample from a non matched case-control sampling design #' # Make the outcome a rare event by setting the intercept to -6 #' intercept <- -6 #' Z <- rnorm(n = NN) #' X <- rbinom(n = NN, size = 1, prob = expit(Z)) #' Y <- rbinom(n = NN, size = 1, prob = expit(intercept + X + Z)) #' population <- data.frame(Z, X, Y) #' Case <- which(population$Y == 1) #' Control <- which(population$Y == 0) #' # Sample cases and controls from the population #' case <- sample(Case, n) #' control <- sample(Control, n) #' data <- population[c(case, control), ] #' #' # Estimation of the attributable fraction #' AF.cc_est <- AF.cc(formula = Y ~ X + Z + X * Z, data = data, exposure = "X") #' summary(AF.cc_est) #' #' # Example 2: matched case-control #' # Duplicate observations in order to create a matched data sample #' # Create an unobserved confounder U common for each pair of individuals #' U <- rnorm(n = NN) #' Z1 <- rnorm(n = NN) #' Z2 <- rnorm(n = NN) #' X1 <- rbinom(n = NN, size = 1, prob = expit(U + Z1)) #' X2 <- rbinom(n = NN, size = 1, prob = expit(U + Z2)) #' Y1 <- rbinom(n = NN, size = 1, prob = expit(intercept + U + Z1 + X1)) #' Y2 <- rbinom(n = NN, size = 1, prob = expit(intercept + U + Z2 + X2)) #' # Select discordant pairs #' discordant <- which(Y1!=Y2) #' id <- rep(1:n, 2) #' # Sample from discordant pairs #' incl <- sample(x = discordant, size = n, replace = TRUE) #' data <- data.frame(id = id, Y = c(Y1[incl], Y2[incl]), X = c(X1[incl], X2[incl]), #' Z = c(Z1[incl], Z2[incl])) #' #' # Estimation of the attributable fraction #' AF.cc_match <- AF.cc(formula = Y ~ X + Z + X * Z, data = data, #' exposure = "X", clusterid = "id", matched = TRUE) #' summary(AF.cc_match) #' @import drgee #' @export AF.cc<-function(formula, data, exposure, clusterid, matched = FALSE){ warning("NOTE! Deprecated function. Use AFglm (for unmatched case-control studies) or AFclogit (for matched case-control studies).", call = FALSE) call <- match.call() mm <- match(c("formula", "data", "exposure", "clusterid", "matched"), names(call), 0L) #### Preparation of dataset #### ## Delete rows with missing on variables in the model ## rownames(data) <- 1:nrow(data) m <- model.matrix(object = formula, data = data) complete <- as.numeric(rownames(m)) data <- data[complete, ] outcome <- as.character(terms(formula)[[2]]) if(matched == TRUE){ ni.vals <- ave(as.vector(data[, outcome]), data[, clusterid], FUN = function(y) { length(unique(y[which(!is.na(y))])) }) compl.rows <- (ni.vals > 1) data <- data[compl.rows, ] } ## Checks ## if(is.binary(data[, outcome]) == FALSE) stop("Only binary outcome (0/1) is accepted.", call. = FALSE) if(is.binary(data[, exposure]) == FALSE) stop("Only binary exposure (0/1) is accepted.", call. = FALSE) if(max(all.vars(formula[[3]]) == exposure) == 0) stop("The exposure variable is not included in the formula.", call. 
= FALSE) if(missing(clusterid)) n.cluster <- 0 else n.cluster <- length(unique(data[, clusterid])) #### Methods for non-matched or matched sampling designs #### n <- nrow(data) n.cases <- sum(data[, outcome]) if (!missing(clusterid)) data <- data[order(data[, clusterid]), ] data0 <- data data0[, exposure] <- 0 #### Estimate model #### if(matched == FALSE) object <- glm(formula = formula, family = binomial, data = data) if(matched == TRUE) object <- gee(formula, link = "logit", data, cond = TRUE, clusterid = clusterid) npar <- length(object$coef) ## Design matrices ## if(matched == FALSE){ design <- as.matrix(model.matrix(object = delete.response(terms(object)), data = data)) design0 <- as.matrix(model.matrix(object = delete.response(terms(object)), data = data0)) } if(matched == TRUE){ design <- as.matrix(model.matrix(object = formula, data = data)[, - 1]) design0 <- as.matrix(model.matrix(object = formula, data = data0)[, - 1]) } ## Create linear predictors to estimate the log odds ratio ## diff.design <- design0 - design linearpredictor <- design %*% coef(object) linearpredictor0 <- design0 %*% coef(object) #log odds ratio# log.or <- linearpredictor - linearpredictor0 ## Estimate approximate AF ## AF.est <- 1 - sum(data[, outcome] * exp( - log.or)) / sum(data[, outcome]) #### Meat: score equations #### ## Score equation 1 ## individual estimating equations of the estimate of AF score.AF <- data[, outcome] * (exp( - log.or) - AF.est) ## Score equation 2 ## individual estimating equations from conditional logistic reg. if(matched == FALSE) pred.diff <- data[, outcome] - predict(object, newdata = data, type = "response") if(matched == TRUE) pred.diff <- object$res score.beta <- design * pred.diff score.equations <- cbind(score.AF, score.beta) if (!missing(clusterid)) score.equations <- aggregate(score.equations, list(data[, clusterid]), sum)[, - 1] meat <- var(score.equations, na.rm=TRUE) #### Bread: hessian of score equations #### ## Hessian of score equation 1 ## #### Estimating variance using Sandwich estimator #### hessian.AF1 <- - data[, outcome] hessian.AF2 <- (design0 - design) * as.vector(data[, outcome] * exp( - log.or)) if (!missing(clusterid)){ if(length(all.vars(formula[[3]]))>1){ hessian.AF <- cbind(mean(aggregate(hessian.AF1, list(data[, clusterid]), sum)[, - 1], na.rm=TRUE) , t(colMeans(aggregate(hessian.AF2 , list(data[, clusterid]), sum)[, - 1], na.rm = TRUE))) } if(length(all.vars(formula[[3]]))==1){ hessian.AF <- cbind(mean(aggregate(hessian.AF1, list(data[, clusterid]), sum)[, - 1], na.rm=TRUE) , t(mean(aggregate(hessian.AF2 , list(data[, clusterid]), sum)[, - 1], na.rm = TRUE))) } } else hessian.AF <- cbind(mean(hessian.AF1), t(colMeans(hessian.AF2, na.rm = TRUE))) hessian.beta <- cbind(matrix(rep(0, npar), nrow = npar, ncol = 1), - solve(vcov(object = object)) / n) ### Bread ### bread <- rbind(hessian.AF, hessian.beta) #### Sandwich #### if (!missing(clusterid)) sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) * n.cluster/ n ^ 2 ) [1:2, 1:2] else sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) / n) [1:2, 1:2] AF.var <- sandwich[1, 1] clusterid <- data[, clusterid] #### Output #### out <- c(list(hessian.beta = hessian.beta, hessian.AF= hessian.AF,clusterid = clusterid, score.equations= score.equations, hessian.beta = hessian.beta, bread = bread, meat = meat, AF.est = AF.est, AF.var = AF.var, log.or = log.or, objectcall = object$call, call = call, exposure = exposure, outcome = outcome, object = object, sandwich = sandwich, formula = formula, n = n, 
                n.cases = n.cases, n.cluster = n.cluster))
  class(out) <- "AF"
  return(out)
}
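
#### Illustration: Bruzzi-type point estimate by hand ####
# Standalone sketch, not part of the package API: it mirrors only the
# point-estimate step of AF.cc, AF ~ 1 - E{OR^(-X)(Z) | Y = 1}, from an
# ordinary glm() fit. The simulated data, the variable names (Y, X, Z) and
# the function name are hypothetical, and no sandwich variance is computed.
# Wrapped in a function so that sourcing this file does not run the simulation.
sketch_AF_cc <- function() {
  set.seed(1)
  n <- 2000
  Z <- rnorm(n)
  X <- rbinom(n, size = 1, prob = plogis(Z))
  Y <- rbinom(n, size = 1, prob = plogis(-2 + X + Z))
  d <- data.frame(Y, X, Z)

  fit <- glm(Y ~ X * Z, family = binomial, data = d)

  # Counterfactual data set with the exposure set to 0
  d0 <- d
  d0$X <- 0
  design  <- model.matrix(~ X * Z, data = d)
  design0 <- model.matrix(~ X * Z, data = d0)

  # Individual log odds ratios: beta + psi * Z when an interaction is present
  log.or <- design %*% coef(fit) - design0 %*% coef(fit)

  # Point estimate: average OR^(-X) among the cases only
  1 - sum(d$Y * exp(-log.or)) / sum(d$Y)
}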
## File: AF/R/AFcc.R
############## AF function for cohort time-to-event outcomes ##################### #' @title Attributable fraction function for cohort sampling designs with time-to-event outcomes. NOTE! Deprecated function. Use \code{\link[AF]{AFcoxph}}. #' @description \code{AF.ch} estimates the model-based adjusted attributable fraction function for data from cohort sampling designs with time-to-event outcomes. #' @param formula a formula object, with the response on the left of a ~ operator, and the terms on the right. The response must be a survival object as returned by the \code{Surv} function (\code{\link[survival]{Surv}}). The exposure and confounders should be specified as independent (right-hand side) variables. The time-to-event outcome should be specified by the survival object. The formula is used to fit a Cox proportional hazards model. #' @param data an optional data frame, list or environment (or object coercible by \code{as.data.frame} to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment (\code{formula}), typically the environment from which the function is called. #' @param exposure the name of the exposure variable as a string. The exposure must be binary (0/1) where unexposed is coded as 0. #' @param ties a character string specifying the method for tie handling. If there are no tied death times all the methods are equivalent. Uses the Breslow method by default. #' @param times a scalar or vector of time points specified by the user for which the attributable fraction function is estimated. If not specified the observed death times will be used. #' @param clusterid the name of the cluster identifier variable as a string, if data are clustered. #' @return \item{AF.est}{estimated attributable fraction function for every time point specified by \code{times}.} #' @return \item{AF.var}{estimated variance of \code{AF.est}. The variance is obtained by combining the delta methods with the sandwich formula.} #' @return \item{S.est}{estimated factual survival function; \eqn{S(t)}.} #' @return \item{S.var}{estimated variance of \code{S.est}. The variance is obtained by the sandwich formula.} #' @return \item{S0.est}{estimated counterfactual survival function if exposure would be eliminated; \eqn{S_0(t)}{S0(t)}.} #' @return \item{S0.var}{estimated variance of \code{S0.est}. The variance is obtained by the sandwich formula.} #' @return \item{object}{the fitted model. Fitted using Cox proportional hazard, \code{\link[survival]{coxph}}.} #' @details \code{Af.ch} estimates the attributable fraction for a time-to-event outcome #' under the hypothetical scenario where a binary exposure \code{X} is eliminated from the population. The estimate is adjusted for confounders \code{Z} #' by the Cox proportional hazards model (\code{\link[survival]{coxph}}). Let the AF function be defined as #' \deqn{AF=1-\frac{\{1-S_0(t)\}}{\{1-S(t)\}}}{AF = 1 - {1 - S0(t)} / {1 - S(t)}} #' where \eqn{S_0(t)}{S0(t)} denotes the counterfactual survival function for the event if #' the exposure would have been eliminated from the population at baseline and \eqn{S(t)} denotes the factual survival function. #' If \code{Z} is sufficient for confounding control, then \eqn{S_0(t)}{S0(t)} can be expressed as \eqn{E_Z\{S(t\mid{X=0,Z })\}}{E_z{S(t|X=0,Z)}}. 
#' The function uses Cox proportional hazards regression to estimate \eqn{S(t\mid{X=0,Z})}{S(t|X=0,Z)}, and the marginal sample distribution of \code{Z} #' to approximate the outer expectation (\enc{Sjölander}{Sjolander} and Vansteelandt, 2014). If \code{clusterid} is supplied, then a clustered sandwich formula is used in all variance calculations. #' @author Elisabeth Dahlqwist, Arvid \enc{Sjölander}{Sjolander} #' @seealso The new and more general version of the function: \code{\link[AF]{AFcoxph}}. \code{\link[survival]{coxph}} and \code{\link[survival]{Surv}} used for fitting the Cox proportional hazards model. #' @references Chen, L., Lin, D. Y., and Zeng, D. (2010). Attributable fraction functions for censored event times. \emph{Biometrika} \bold{97}, 713-726. #' @references \enc{Sjölander}{Sjolander}, A. and Vansteelandt, S. (2014). Doubly robust estimation of attributable fractions in survival analysis. \emph{Statistical Methods in Medical Research}. doi: 10.1177/0962280214564003. #' @examples #' # Simulate a sample from a cohort sampling design with time-to-event outcome #' expit <- function(x) 1 / (1 + exp( - x)) #' n <- 500 #' time <- c(seq(from = 0.2, to = 1, by = 0.2)) #' Z <- rnorm(n = n) #' X <- rbinom(n = n, size = 1, prob = expit(Z)) #' Tim <- rexp(n = n, rate = exp(X + Z)) #' C <- rexp(n = n, rate = exp(X + Z)) #' Tobs <- pmin(Tim, C) #' D <- as.numeric(Tobs < C) #' #Ties created by rounding #' Tobs <- round(Tobs, digits = 2) #' #' # Example 1: non clustered data from a cohort sampling design with time-to-event outcomes #' data <- data.frame(Tobs, D, X, Z) #' #' # Estimation of the attributable fraction #' AF.ch_est <- AF.ch(formula = Surv(Tobs, D) ~ X + Z + X * Z, data = data, #' exposure = "X", times = time) #' summary(AF.ch_est) #' #' # Example 2: clustered data from a cohort sampling design with time-to-event outcomes #' # Duplicate observations in order to create clustered data #' id <- rep(1:n, 2) #' data <- data.frame(Tobs = c(Tobs, Tobs), D = c(D, D), X = c(X, X), Z = c(Z, Z), id = id) #' #' # Estimation of the attributable fraction #' AF.ch_clust <- AF.ch(formula = Surv(Tobs, D) ~ X + Z + X * Z, data = data, #' exposure = "X", times = time, clusterid = "id") #' summary(AF.ch_clust) #' plot(AF.ch_clust, CI = TRUE) #' @import survival data.table #' @export AF.ch <- function(formula, data, exposure, ties="breslow", times, clusterid){ warning("NOTE! Deprecated function. Use AFcoxph.", call = FALSE) call <- match.call() mm <- match(c("formula", "data", "exposure", "ties", "times", "clusterid"), names(call), 0L) #### Preparation of dataset #### ## Delete rows with missing on variables in the model ## rownames(data) <- 1:nrow(data) m <- model.matrix(object = formula, data = data) complete <- as.numeric(rownames(m)) data <- data[complete, ] ## If times is missing ## if(missing(times)) times <- fit.detail$time ## Checks ## if(!is.binary(data[, exposure])) stop("Only binary exposure (0/1) is accepted.", call. = FALSE) if(max(all.vars(formula[[3]]) == exposure) == 0) stop("The exposure variable is not included in the formula.", call. 
= FALSE) if(missing(clusterid)) n.cluster <- 0 else n.cluster <- length(unique(data[, clusterid])) ## Find names of end variable and event variable rr <- rownames(attr(terms(formula), "factors"))[1] temp <- gregexpr(", ", rr)[[1]] if(length(temp == 1)){ endvar <- substr(rr, 6, temp[1] - 1) eventvar <- substr(rr, temp[1] + 2, nchar(rr) - 1) } if(length(temp) == 2){ endvar <- substr(rr, temp[1] + 2, temp[2] - 1) eventvar <- substr(rr, temp[2] + 2, nchar(rr) - 1) } n <- nrow(data) n.cases <- sum(data[, eventvar]) # Sort on "end-variable" data <- data[order(data[, endvar]), ] # Create dataset data0 for counterfactual X=0 data0 <- data data0[, exposure] <- 0 #### Estimate model #### ## Fit a Cox PH model ## environment(formula) <- new.env() object <- coxph(formula = formula, data = data, ties = "breslow") npar <- length(object$coef) fit.detail <- coxph.detail(object = object) ## Design matrices ## design <- as.matrix(model.matrix(object = delete.response(terms(object)), data = data)[, -1]) design0 <- as.matrix(model.matrix(object = delete.response(terms(object)), data = data0)[, -1]) ### Estimate the survival functions ### ## Hazard increment ## dH0 <- fit.detail$hazard H0 <- cumsum(dH0) ## Baseline hazard function ## H0step <- stepfun(fit.detail$time, c(0, H0)) H0res <- rep(0, n) dH0.untied <- rep(dH0, fit.detail$nevent) / rep(fit.detail$nevent, fit.detail$nevent) H0res[data[, eventvar] == 1] <- dH0.untied * n #handle ties #H0res[data[, eventvar] == 1] <- dH0 * n ## Predict based on the Cox PH model ## epred <- predict(object = object, newdata = data, type = "risk") epred0 <- predict(object = object, newdata = data0, type = "risk") ### Meat ### ## Score equation 4 ## for the Cox PH model (made outside of loop) score.beta <- residuals(object = object, type = "score") ## Weighted mean of the variable at event for all at risk at that time ## E <- matrix(0, nrow = n, ncol = npar) means <- as.matrix(fit.detail$means) means <- means[rep(1:nrow(means), fit.detail$nevent), ] #handle ties E[data[, eventvar] == 1, ] <- means #E[data[, eventvar] == 1, ] <- fit.detail$means ## One point and variance estimate for each time t in times ## S.est <- vector(length = length(times)) S0.est <- vector(length = length(times)) AF.var <- vector(length = length(times)) S.var <- vector(length = length(times)) S0.var <- vector(length = length(times)) # Loop over all t in times for (i in 1:length(times)){ t <- times[i] #### Meat: score equations #### ## Score equation 1 ## for the factual survival function score.S <- exp( - H0step(t) * epred) ## Score equation 2 ## for the counterfactual survival function score.S0 <- exp( - H0step(t) * epred0) ## Score equation 3 ## for the Breslow estimator score.H0 <- H0res * (data[, endvar] <= t) ## Score equation 4 ## for the Cox PH model (made outside of loop) ### Meat ### score.equations <- cbind(score.S, score.S0, score.H0, score.beta) if (!missing(clusterid)){ #score.equations <-aggregate(score.equations, by = list(data[, clusterid]), sum)[, - 1] score.equations <- data.table(score.equations) score.equations <- score.equations[, j=lapply(.SD,sum), by=clusterid] score.equations <- as.matrix(score.equations) score.equations <- score.equations[, -1] } meat <- var(score.equations, na.rm = TRUE) #### Bread: hessian of score equations #### ## Hessian of score equation 1 ## hessian.S <- c(-1, 0, mean(epred * score.S), colMeans(design * H0step(t) * epred * score.S)) ## Hessian of score equation 2 ## hessian.S0 <- c(0, -1, mean(epred0 * score.S0), colMeans(design0 * H0step(t) * epred0 * 
score.S0)) ## Hessian of score equation 3 ## hessian.H0 <- c(rep(0,2), - 1, - colMeans(E * score.H0, na.rm = TRUE)) ## Hessian of score equation 4 ## hessian.beta <- cbind(matrix(0, nrow = npar, ncol = 3), - solve(vcov(object = object)) / n) ### Bread ### bread<-rbind(hessian.S, hessian.S0, hessian.H0, hessian.beta) ### Sandwich ### if (!missing(clusterid)) sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) * n.cluster/ n^2 ) [1:2, 1:2] else sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) / n) [1:2, 1:2] #### For point estimate #### S.est[i] <- mean(x = score.S, na.rm = TRUE) S0.est[i] <- mean(x = score.S0, na.rm = TRUE) #### Estimate of variance using the delta method #### gradient <- as.matrix(c( - (1 - S0.est[i]) / (1 - S.est[i]) ^ 2, 1 / (1 - S.est[i])) , nrow = 2, ncol = 1) AF.var[i] <- t(gradient) %*% sandwich %*% gradient S.var[i] <- sandwich[1, 1] S0.var[i] <- sandwich[2, 2] } ### The AF function estimate ### AF.est <- 1 - (1 - S0.est) / (1 - S.est) #### Output #### #func <- AF.cc out <- c(list(AF.est = AF.est, AF.var = AF.var, S.est = S.est, S0.est = S0.est, S.var = S.var, S0.var = S0.var, objectcall = object$call, call = call, exposure = exposure, outcome = eventvar, object = object, sandwich = sandwich, gradient = gradient, formula = formula, n = n, n.cases = n.cases, n.cluster = n.cluster, times = times)) class(out) <- "AF" return(out) }
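
#### Illustration: AF function from averaged survival curves ####
# Standalone sketch, not part of the package API: it approximates the AF
# function AF(t) = 1 - {1 - S0(t)} / {1 - S(t)} estimated by AF.ch, but uses
# survfit() predictions averaged over the sample instead of the Breslow
# bookkeeping done inside AF.ch, so small numerical differences are expected.
# Requires the survival package; data, variable names and the function name
# are hypothetical, and no variance is computed.
sketch_AF_ch <- function(times = seq(0.2, 1, by = 0.2)) {
  set.seed(2)
  n <- 500
  Z <- rnorm(n)
  X <- rbinom(n, size = 1, prob = plogis(Z))
  Tim <- rexp(n, rate = exp(X + Z))
  C <- rexp(n, rate = 1)
  d <- data.frame(Tobs = pmin(Tim, C), D = as.numeric(Tim <= C), X, Z)

  fit <- coxph(Surv(Tobs, D) ~ X + Z, data = d, ties = "breslow")

  # Counterfactual data with the exposure eliminated at baseline
  d0 <- d
  d0$X <- 0

  # S(t) and S0(t): individual predicted curves averaged over the sample
  S  <- rowMeans(summary(survfit(fit, newdata = d),  times = times, extend = TRUE)$surv)
  S0 <- rowMeans(summary(survfit(fit, newdata = d0), times = times, extend = TRUE)$surv)

  cbind(times = times, AF = 1 - (1 - S0) / (1 - S))
}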
## File: AF/R/AFch.R
############## AF function for a clogit object ##################### #' @title Attributable fraction estimation based on a conditional logistic regression model as a \code{clogit} object (commonly used for matched case-control sampling designs). #' @description \code{AFclogit} estimates the model-based adjusted attributable fraction from a conditional logistic regression model in form of a \code{\link[survival]{clogit}} object. This model is model is commonly used for data from matched case-control sampling designs. #' @param object a fitted conditional logistic regression model object of class "\code{\link[survival]{clogit}}". #' @param data an optional data frame, list or environment (or object coercible by \code{as.data.frame} to a data frame) containing the variables in the model. If not found in \code{data}, the variables are taken from environment (\code{formula}), typically the environment from which the function is called. #' @param exposure the name of the exposure variable as a string. The exposure must be binary (0/1) where unexposed is coded as 0. #' @param clusterid the name of the cluster identifier variable as a string. Because conditional logistic regression is only used for clustered data, this argument must be supplied. #' @return \item{AF.est}{estimated attributable fraction.} #' @return \item{AF.var}{estimated variance of \code{AF.est}. The variance is obtained by combining the delta methods with the sandwich formula.} #' @return \item{log.or}{a vector of the estimated log odds ratio for every individual. \code{log.or} contains the estimated coefficient for the exposure variable \code{X} for every level of the confounder \code{Z} as specified by the user in the formula. If the model to be estimated is #' \deqn{logit\{Pr(Y=1|X,Z)\} = \alpha+\beta{X}+\gamma{Z}}{logit {Pr(Y=1|X,Z)} = \alpha + \beta X + \gamma Z} #' then \code{log.or} is the estimate of \eqn{\beta}. #' If the model to be estimated is #' \deqn{logit\{Pr(Y=1|X,Z)\}=\alpha+\beta{X}+\gamma{Z}+\psi{XZ}}{logit{Pr(Y=1|X,Z)} = \alpha + \beta X +\gamma Z +\psi XZ} #' then \code{log.odds} is the estimate of #' \eqn{\beta + \psi{Z}}{\beta + \psi Z}.} #' @details \code{AFclogit} estimates the attributable fraction for a binary outcome \code{Y} #' under the hypothetical scenario where a binary exposure \code{X} is eliminated from the population. #' The estimate is adjusted for confounders \code{Z} by conditional logistic regression. #' The estimation assumes that the outcome is rare so that the risk ratio can be approximated by the odds ratio, for details see Bruzzi et. al. #' Let the AF be defined as #' \deqn{AF = 1 - \frac{Pr(Y_0=1)}{Pr(Y = 1)}}{AF = 1 - Pr(Y0 = 1) / Pr(Y = 1)} #' where \eqn{Pr(Y_0=1)}{Pr(Y0 = 1)} denotes the counterfactual probability of the outcome if #' the exposure would have been eliminated from the population. If \code{Z} is sufficient for confounding control then the probability \eqn{Pr(Y_0=1)}{Pr(Y0 = 1)} can be expressed as #' \deqn{Pr(Y_0=1)=E_Z\{Pr(Y=1\mid{X}=0,Z)\}.}{Pr(Y0=1) = E_z{Pr(Y = 1 | X = 0, Z)}.} #' Using Bayes' theorem this implies that the AF can be expressed as #' \deqn{AF = 1-\frac{E_Z\{Pr(Y=1\mid X=0,Z)\}}{Pr(Y=1)}=1-E_Z\{RR^{-X}(Z)\mid{Y = 1}\}}{ #' AF = 1 - E_z{Pr( Y = 1 | X = 0, Z)} / Pr(Y = 1) = 1 - E_z{RR^{-X} (Z) | Y = 1}} #' where \eqn{RR(Z)} is the risk ratio \deqn{\frac{Pr(Y=1\mid{X=1,Z})}{Pr(Y=1\mid{X=0,Z})}.}{Pr(Y = 1 | X = 1,Z)/Pr(Y=1 | X = 0, Z).} #' Moreover, the risk ratio can be approximated by the odds ratio if the outcome is rare. 
Thus, #' \deqn{ AF \approx 1 - E_Z\{OR^{-X}(Z)\mid{Y = 1}\}.}{AF is approximately 1 - E_z{OR^{-X}(Z) | Y = 1}.} #' The odds ratio is estimated by conditional logistic regression. #' The function \code{\link[drgee]{gee}} in the \code{drgee} package is used to get the score contributions for each cluster and the hessian. #' A clustered sandwich formula is used in the variance calculation. #' @author Elisabeth Dahlqwist, Arvid \enc{Sjölander}{Sjolander} #' @seealso \code{\link[survival]{clogit}} used for fitting the conditional logistic regression model for matched case-control designs. For non-matched case-control designs see \code{\link[AF]{AFglm}}. #' @references Bruzzi, P., Green, S. B., Byar, D., Brinton, L. A., and Schairer, C. (1985). Estimating the population attributable risk for multiple risk factors using case-control data. \emph{American Journal of Epidemiology} \bold{122}, 904-914. #' @examples #' expit <- function(x) 1 / (1 + exp( - x)) #' NN <- 1000000 #' n <- 500 #' #' # Example 1: matched case-control #' # Duplicate observations in order to create a matched data sample #' # Create an unobserved confounder U common for each pair of individuals #' intercept <- -6 #' U <- rnorm(n = NN) #' Z1 <- rnorm(n = NN) #' Z2 <- rnorm(n = NN) #' X1 <- rbinom(n = NN, size = 1, prob = expit(U + Z1)) #' X2 <- rbinom(n = NN, size = 1, prob = expit(U + Z2)) #' Y1 <- rbinom(n = NN, size = 1, prob = expit(intercept + U + Z1 + X1)) #' Y2 <- rbinom(n = NN, size = 1, prob = expit(intercept + U + Z2 + X2)) #' # Select discordant pairs #' discordant <- which(Y1!=Y2) #' id <- rep(1:n, 2) #' # Sample from discordant pairs #' incl <- sample(x = discordant, size = n, replace = TRUE) #' data <- data.frame(id = id, Y = c(Y1[incl], Y2[incl]), X = c(X1[incl], X2[incl]), #' Z = c(Z1[incl], Z2[incl])) #' #' # Fit a clogit object #' fit <- clogit(formula = Y ~ X + Z + X * Z + strata(id), data = data) #' #' # Estimate the attributable fraction from the fitted conditional logistic regression #' AFclogit_est <- AFclogit(fit, data, exposure = "X", clusterid="id") #' summary(AFclogit_est) #' @import survival drgee data.table #' @export AFclogit<-function(object, data, exposure, clusterid){ call <- match.call() # Warning if the object is not a clogit object objectcall <- object$userCall if(!(class(object)[1])=="clogit") stop("The object is not a clogit object", call. = FALSE) if(missing(clusterid)) stop("Argument 'clusterid' must be provided by the user", call. = FALSE) #### Preparation of variables #### formula <- object$formula npar <- length(object$coef) ## Delete rows with missing on variables in the model ## #rownames(data) <- 1:nrow(data) #m <- model.matrix(object = formula, data = data) #complete <- as.numeric(rownames(m)) #data <- data[complete, ] #data <- complete_cases(data, formula) outcome <- as.character(terms(formula)[[2]])[3] variables <- attr(object$coefficients, "names") ## Create a formula which can be used to create a design matrix formula.model <- as.formula(paste(outcome, "~", paste(variables, collapse=" + "))) ni.vals <- ave(as.vector(data[, outcome]), data[, clusterid], FUN = function(y) { length(unique(y[which(!is.na(y))])) }) compl.rows <- (ni.vals > 1) data <- data[compl.rows, ] ## Checks ## if(is.binary(data[, outcome]) == FALSE) stop("Only binary outcome (0/1) is accepted.", call. = FALSE) if(is.binary(data[, exposure]) == FALSE) stop("Only binary exposure (0/1) is accepted.", call. 
= FALSE) if(max(all.vars(formula[[3]]) == exposure) == 0) stop("The exposure variable is not included in the formula.", call. = FALSE) #### Methods for non-matched or matched sampling designs #### n <- nrow(data) n.cases <- sum(data[, outcome]) n.cluster <- length(unique(data[, clusterid])) data <- data[order(data[, clusterid]), ] # Create dataset data0 for counterfactual X = 0s data0 <- data data0[, exposure] <- 0 clusters <- data[, clusterid] ## Design matrices ## design <- model.matrix(object = formula.model, data = data)[, - 1, drop = FALSE] design0 <- model.matrix(object = formula.model, data = data0)[, - 1, drop = FALSE] ## Create linear predictors to estimate the log odds ratio ## diff.design <- design0 - design linearpredictor <- design %*% coef(object) linearpredictor0 <- design0 %*% coef(object) #log odds ratio# log.or <- linearpredictor - linearpredictor0 ## Estimate approximate AF ## AF.est <- 1 - sum(data[, outcome] * exp( - log.or)) / sum(data[, outcome]) #### Meat: score equations #### ## Score equation 1 ## individual estimating equations of the estimate of AF score.AF <- data[, outcome] * (exp( - log.or) - AF.est) ## Score equation 2 ## individual estimating equations from conditional logistic reg. pred.diff <- getScoreResidualsFromClogit(fit = object, y = data[, outcome], x = design, id = clusters) if(missing(pred.diff)) warning("Use the latest version of package 'drgee'", call. = FALSE) score.beta <- pred.diff$U score.equations <- cbind(score.AF, score.beta) score.equations <- aggr(x = score.equations, clusters = clusters) meat <- var(score.equations, na.rm=TRUE) #### Bread: hessian of score equations #### ### Hessian of score equation 1 ## #### Estimating variance using Sandwich estimator #### ### Aggregate data ### hessian.AF1 <- - data[, outcome] hessian.AF1 <- aggr(x = hessian.AF1, clusters = clusters) hessian.AF2 <- cbind(as.matrix((design0 - design) * as.vector(data[, outcome] * exp( - log.or)))) hessian.AF2 <- aggr(x = hessian.AF2, clusters = clusters) hessian.AF <- cbind(mean(hessian.AF1), t(colMeans(hessian.AF2))) hessian.beta <- cbind(matrix(rep(0, npar), nrow = npar, ncol = 1), pred.diff$dU.sum / n) ### Bread ### bread <- rbind(hessian.AF, hessian.beta) #### Sandwich #### sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) * n.cluster/ n ^ 2 ) AF.var <- sandwich[1, 1] #### Output #### out <- c(list(hessian.beta = hessian.beta, hessian.AF = hessian.AF, clusterid = clusterid, score.equations = score.equations, hessian.beta = hessian.beta, bread = bread, meat = meat, AF.est = AF.est, AF.var = AF.var, log.or = log.or, objectcall = objectcall, call = call, exposure = exposure, outcome = outcome, object = object, sandwich = sandwich, formula = formula, n = n, n.cases = n.cases, n.cluster = n.cluster)) class(out) <- "AF" return(out) }
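
#### Illustration: per-individual log OR from a clogit fit ####
# Standalone sketch, not part of the package API: it shows how the individual
# log odds ratios and the AF point estimate in AFclogit can be reproduced from
# a clogit() fit by rebuilding a design matrix from the coefficient names, so
# that the strata() term does not enter it. Requires the survival package;
# simulated data, variable names and the function name are hypothetical, and
# the sandwich variance (which uses drgee score residuals) is not reproduced.
sketch_AF_clogit <- function() {
  set.seed(3)
  n  <- 500
  id <- rep(1:n, each = 2)
  U  <- rep(rnorm(n), each = 2)   # shared (matched-on) confounder
  Z  <- rnorm(2 * n)
  X  <- rbinom(2 * n, size = 1, prob = plogis(U + Z))
  Y  <- rbinom(2 * n, size = 1, prob = plogis(-2 + U + Z + X))
  d  <- data.frame(id, Y, X, Z)

  fit <- clogit(Y ~ X + Z + X:Z + strata(id), data = d)

  # Keep only clusters where the outcome varies, as AFclogit does
  d <- d[ave(d$Y, d$id, FUN = function(y) length(unique(y))) > 1, ]

  # Design matrices built from the coefficient names (no strata term)
  form <- reformulate(names(coef(fit)), response = "Y")
  d0 <- d
  d0$X <- 0
  design  <- model.matrix(form, data = d)[, -1, drop = FALSE]
  design0 <- model.matrix(form, data = d0)[, -1, drop = FALSE]

  log.or <- design %*% coef(fit) - design0 %*% coef(fit)
  1 - sum(d$Y * exp(-log.or)) / sum(d$Y)
}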
## File: AF/R/AFclogit.R
############## AF function for a coxph object ##################### #' @title Attributable fraction function based on a Cox Proportional Hazard regression model as a \code{coxph} object (commonly used for cohort sampling designs with time-to-event outcomes). #' @description \code{AFcoxph} estimates the model-based adjusted attributable fraction function from a Cox Proportional Hazard regression model in form of a \code{\link[survival]{coxph}} object. This model is commonly used for data from cohort sampling designs with time-to-event outcomes. #' @param object a fitted Cox Proportional Hazard regression model object of class "\code{\link[survival]{coxph}}". Method for handling ties must be breslow since this is assumed in the calculation of the standard errors. No special terms such as \code{cluster}, \code{strata} and \code{tt} is allowed in the formula for the fitted object. #' @param data an optional data frame, list or environment (or object coercible by \code{as.data.frame} to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment (\code{formula}), typically the environment from which the function is called. #' @param exposure the name of the exposure variable as a string. The exposure must be binary (0/1) where unexposed is coded as 0. #' @param times a scalar or vector of time points specified by the user for which the attributable fraction function is estimated. If not specified the observed event times will be used. #' @param clusterid the name of the cluster identifier variable as a string, if data are clustered. Cluster robust standard errors will be calculated. #' @return \item{AF.est}{estimated attributable fraction function for every time point specified by \code{times}.} #' @return \item{AF.var}{estimated variance of \code{AF.est}. The variance is obtained by combining the delta methods with the sandwich formula.} #' @return \item{S.est}{estimated factual survival function; \eqn{S(t)}.} #' @return \item{S.var}{estimated variance of \code{S.est}. The variance is obtained by the sandwich formula.} #' @return \item{S0.est}{estimated counterfactual survival function if exposure would be eliminated; \eqn{S_0(t)}{S0(t)}.} #' @return \item{S0.var}{estimated variance of \code{S0.est}. The variance is obtained by the sandwich formula.} #' @details \code{AFcoxph} estimates the attributable fraction for a time-to-event outcome #' under the hypothetical scenario where a binary exposure \code{X} is eliminated from the population. The estimate is adjusted for confounders \code{Z} #' by the Cox proportional hazards model (\code{\link[survival]{coxph}}). Let the AF function be defined as #' \deqn{AF=1-\frac{\{1-S_0(t)\}}{\{1-S(t)\}}}{AF = 1 - {1 - S0(t)} / {1 - S(t)}} #' where \eqn{S_0(t)}{S0(t)} denotes the counterfactual survival function for the event if #' the exposure would have been eliminated from the population at baseline and \eqn{S(t)} denotes the factual survival function. #' If \code{Z} is sufficient for confounding control, then \eqn{S_0(t)}{S0(t)} can be expressed as \eqn{E_Z\{S(t\mid{X=0,Z })\}}{E_z{S(t|X=0,Z)}}. #' The function uses a fitted Cox proportional hazards regression to estimate \eqn{S(t\mid{X=0,Z})}{S(t|X=0,Z)}, and the marginal sample distribution of \code{Z} #' to approximate the outer expectation (\enc{Sjölander}{Sjolander} and Vansteelandt, 2014). If \code{clusterid} is supplied, then a clustered sandwich formula is used in all variance calculations. 
#' @author Elisabeth Dahlqwist, Arvid \enc{Sjölander}{Sjolander} #' @seealso \code{\link[survival]{coxph}} and \code{\link[survival]{Surv}} used for fitting the Cox proportional hazards model. #' @references Chen, L., Lin, D. Y., and Zeng, D. (2010). Attributable fraction functions for censored event times. \emph{Biometrika} \bold{97}, 713-726. #' @references \enc{Sjölander}{Sjolander}, A. and Vansteelandt, S. (2014). Doubly robust estimation of attributable fractions in survival analysis. \emph{Statistical Methods in Medical Research}. doi: 10.1177/0962280214564003. #' @examples #' # Simulate a sample from a cohort sampling design with time-to-event outcome #' expit <- function(x) 1 / (1 + exp( - x)) #' n <- 500 #' time <- c(seq(from = 0.2, to = 1, by = 0.2)) #' Z <- rnorm(n = n) #' X <- rbinom(n = n, size = 1, prob = expit(Z)) #' Tim <- rexp(n = n, rate = exp(X + Z)) #' C <- rexp(n = n, rate = exp(X + Z)) #' Tobs <- pmin(Tim, C) #' D <- as.numeric(Tobs < C) #' #Ties created by rounding #' Tobs <- round(Tobs, digits = 2) #' #' # Example 1: non clustered data from a cohort sampling design with time-to-event outcomes #' data <- data.frame(Tobs, D, X, Z) #' #' # Fit a Cox PH regression model #' fit <- coxph(formula = Surv(Tobs, D) ~ X + Z + X * Z, data = data, ties="breslow") #' #' # Estimate the attributable fraction from the fitted Cox PH regression model #' AFcoxph_est <- AFcoxph(fit, data=data, exposure ="X", times = time) #' summary(AFcoxph_est) #' #' # Example 2: clustered data from a cohort sampling design with time-to-event outcomes #' # Duplicate observations in order to create clustered data #' id <- rep(1:n, 2) #' data <- data.frame(Tobs = c(Tobs, Tobs), D = c(D, D), X = c(X, X), Z = c(Z, Z), id = id) #' #' # Fit a Cox PH regression model #' fit <- coxph(formula = Surv(Tobs, D) ~ X + Z + X * Z, data = data, ties="breslow") #' #' # Estimate the attributable fraction from the fitted Cox PH regression model #' AFcoxph_clust <- AFcoxph(object = fit, data = data, #' exposure = "X", times = time, clusterid = "id") #' summary(AFcoxph_clust) #' plot(AFcoxph_clust, CI = TRUE) #' #' # Estimate the attributable fraction from the fitted Cox PH regression model, time unspecified #' AFcoxph_clust_no_time <- AFcoxph(object = fit, data = data, #' exposure = "X", clusterid = "id") #' summary(AFcoxph_clust_no_time) #' plot(AFcoxph_clust, CI = TRUE) #' @import survival data.table #' @export AFcoxph <- function(object, data, exposure, times, clusterid){ call <- match.call() #### Preparation of dataset #### formula <- object$formula vars <- as.character(attr(terms(formula),"variables"))[-1] npar <- length(object$coef) # Warning if the object is not a glm object if(!(as.character(object$call[1]) == "coxph") | !is.null(object$userCall)) stop("The object is not a coxph object", call. = FALSE) # Warning if specials are in the object formula specials <- pmatch(c("strata(","cluster(","tt("), attr(terms(formula),"variables")) if(any(!is.na(specials))) stop("No special terms are allowed in the formula") ## Delete rows with missing on variables in the model ## rownames(data) <- 1:nrow(data) m <- model.matrix(object = formula, data = data) complete <- as.numeric(rownames(m)) data <- data[complete, ] ## Define object.detail object.detail <- coxph.detail(object = object) ## If times is missing ## if(missing(times)) times <- object.detail$time ## Checks ## if(!object$method=="breslow") stop("Only breslow method for handling ties is allowed.", call. 
= FALSE) if(!is.binary(data[, exposure])) stop("Only binary exposure (0/1) is accepted.", call. = FALSE) if(max(all.vars(formula[[3]]) == exposure) == 0) stop("The exposure variable is not included in the formula.", call. = FALSE) if(missing(clusterid)) n.cluster <- 0 else n.cluster <- length(unique(data[, clusterid])) ## Find names of end variable and event variable rr <- rownames(attr(terms(formula), "factors"))[1] temp <- gregexpr(", ", rr)[[1]] if(length(temp == 1)){ endvar <- substr(rr, 6, temp[1] - 1) eventvar <- substr(rr, temp[1] + 2, nchar(rr) - 1) } if(length(temp) == 2){ endvar <- substr(rr, temp[1] + 2, temp[2] - 1) eventvar <- substr(rr, temp[2] + 2, nchar(rr) - 1) } n <- nrow(data) n.cases <- sum(data[, eventvar]) clusters <- data[, clusterid] npar <- length(object$coef) # Sort on "end-variable" ord <- order(data[, endvar]) data <- data[ord, ] # Create dataset data0 for counterfactual X = 0 data0 <- data data0[, exposure] <- 0 ## Design matrices ## design <- as.matrix(model.matrix(object = delete.response(terms(object)), data = data)[, -1]) design0 <- as.matrix(model.matrix(object = delete.response(terms(object)), data = data0)[, -1]) ### Estimate the survival functions ### ## Hazard increment ## dH0 <- object.detail$hazard H0 <- cumsum(dH0) ## Baseline hazard function ## H0step <- stepfun(object.detail$time, c(0, H0)) H0res <- rep(0, n) dH0.untied <- rep(dH0, object.detail$nevent) / rep(object.detail$nevent, object.detail$nevent) H0res[data[, eventvar] == 1] <- dH0.untied * n #handle ties ## Predict based on the Cox PH model ## epred <- predict(object = object, newdata = data, type = "risk") epred0 <- predict(object = object, newdata = data0, type = "risk") ### Meat ### ## Score equation 4 ## for the Cox PH model (made outside of loop) score.beta <- as.matrix(residuals(object = object, type = "score")) score.beta <- score.beta[ord, ] ## Weighted mean of the variable at event for all at risk at that time ## E <- matrix(0, nrow = n, ncol = npar) means <- as.matrix(object.detail$means) means <- means[rep(1:nrow(means), object.detail$nevent), ] #handle ties E[data[, eventvar] == 1, ] <- means ## One point and variance estimate for each time t in times ## S.est <- vector(length = length(times)) S0.est <- vector(length = length(times)) AF.var <- vector(length = length(times)) S.var <- vector(length = length(times)) S0.var <- vector(length = length(times)) # Loop over all t in times for (i in 1:length(times)){ t <- times[i] #### Meat: score equations #### ## Score equation 1 ## for the factual survival function score.S <- exp( - H0step(t) * epred) ## Score equation 2 ## for the counterfactual survival function score.S0 <- exp( - H0step(t) * epred0) ## Score equation 3 ## for the breslow estimator score.H0 <- H0res * (data[, endvar] <= t) ## Score equation 4 ## for the Cox PH model (made outside of loop) ### Meat ### score.equations <- cbind(score.S, score.S0, score.H0, score.beta) if (!missing(clusterid)){ score.equations <- score.equations score.equations <- aggr(score.equations, clusters = clusters) } meat <- var(score.equations, na.rm = TRUE) #### Bread: hessian of score equations #### ## Hessian of score equation 1 ## hessian.S <- c(-1, 0, mean(epred * score.S), colMeans(design * H0step(t) * epred * score.S)) ## Hessian of score equation 2 ## hessian.S0 <- c(0, -1, mean(epred0 * score.S0), colMeans(design0 * H0step(t) * epred0 * score.S0)) ## Hessian of score equation 3 ## hessian.H0 <- c(rep(0,2), - 1, - colMeans(E * score.H0, na.rm = TRUE)) ## Hessian of score equation 4 ## 
hessian.beta <- cbind(matrix(0, nrow = npar, ncol = 3), - solve(vcov(object = object)) / n) ### Bread ### bread<-rbind(hessian.S, hessian.S0, hessian.H0, hessian.beta) ### Sandwich ### if (!missing(clusterid)) sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) * n.cluster/ n^2 ) [1:2, 1:2] else sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) / n) [1:2, 1:2] #### For point estimate #### S.est[i] <- mean(x = score.S, na.rm = TRUE) S0.est[i] <- mean(x = score.S0, na.rm = TRUE) #### Estimate of variance using the delta method #### gradient <- as.matrix(c( - (1 - S0.est[i]) / (1 - S.est[i]) ^ 2, 1 / (1 - S.est[i])) , nrow = 2, ncol = 1) AF.var[i] <- t(gradient) %*% sandwich %*% gradient S.var[i] <- sandwich[1, 1] S0.var[i] <- sandwich[2, 2] } ### The AF function estimate ### AF.est <- 1 - (1 - S0.est) / (1 - S.est) #### Output #### #func <- AF.cc out <- c(list(AF.est = AF.est, AF.var = AF.var, S.est = S.est, S0.est = S0.est, S.var = S.var, S0.var = S0.var, objectcall = object$call, call = call, exposure = exposure, outcome = eventvar, object = object, sandwich = sandwich, gradient = gradient, formula = formula, n = n, n.cases = n.cases, n.cluster = n.cluster, times = times)) class(out) <- "AF" return(out) }
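
#### Illustration: the delta-method step in isolation ####
# Standalone sketch, not part of the package API: given point estimates of
# S(t) and S0(t) and a 2 x 2 sandwich covariance matrix for them (in that
# order), the variance of AF(t) = 1 - {1 - S0(t)} / {1 - S(t)} follows from
# the gradient of that transformation, exactly as in the loop above. All
# default inputs below are made-up numbers, purely to show the arithmetic.
sketch_delta_method <- function(S.est = 0.80, S0.est = 0.85,
                                sandwich = matrix(c(4e-4, 1e-4,
                                                    1e-4, 5e-4), nrow = 2)) {
  # Partial derivatives of AF with respect to (S, S0), in that order
  gradient <- c(-(1 - S0.est) / (1 - S.est)^2, 1 / (1 - S.est))

  AF.est <- 1 - (1 - S0.est) / (1 - S.est)
  AF.var <- as.numeric(t(gradient) %*% sandwich %*% gradient)
  c(AF = AF.est, SE = sqrt(AF.var))
}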
## File: AF/R/AFcoxph.R
############## AF function for cross-sectional sampling design ##################### #' @title Attributable fraction for cross-sectional sampling designs. NOTE! Deprecated function. Use \code{\link[AF]{AFglm}}. #' @description \code{AF.cs} estimates the model-based adjusted attributable fraction for data from cross-sectional sampling designs. #' @param formula an object of class "\code{\link{formula}}" (or one that can be coerced to that class): a symbolic description of the model used for adjusting for confounders. The exposure and confounders should be specified as independent (right-hand side) variables. The outcome should be specified as dependent (left-hand side) variable. The formula is used to object a logistic regression by \code{\link[stats]{glm}}. #' @param data an optional data frame, list or environment (or object coercible by \code{as.data.frame} to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment (\code{formula}), typically the environment from which the function is called. #' @param exposure the name of the exposure variable as a string. The exposure must be binary (0/1) where unexposed is coded as 0. #' @param clusterid the name of the cluster identifier variable as a string, if data are clustered. #' @return \item{AF.est}{estimated attributable fraction.} #' @return \item{AF.var}{estimated variance of \code{AF.est}. The variance is obtained by combining the delta method with the sandwich formula.} #' @return \item{P.est}{estimated factual proportion of cases; \eqn{Pr(Y=1)}.} #' @return \item{P.var}{estimated variance of \code{P.est}. The variance is obtained by the sandwich formula.} #' @return \item{P0.est}{estimated counterfactual proportion of cases if exposure would be eliminated; \eqn{Pr(Y_0=1)}{Pr(Y0=1)}.} #' @return \item{P0.var}{estimated variance of \code{P0.est}. The variance is obtained by the sandwich formula.} #' @return \item{object}{the fitted model. Fitted using logistic regression, \code{\link{glm}}.} #' @details \code{Af.cs} estimates the attributable fraction for a binary outcome \code{Y} #' under the hypothetical scenario where a binary exposure \code{X} is eliminated from the population. #' The estimate is adjusted for confounders \code{Z} by logistic regression (\code{\link{glm}}). #' Let the AF be defined as #' \deqn{AF=1-\frac{Pr(Y_0=1)}{Pr(Y=1)}}{AF = 1 - Pr(Y0 = 1) / Pr(Y = 1)} #' where \eqn{Pr(Y_0=1)}{Pr(Y0 = 1)} denotes the counterfactual probability of the outcome if #' the exposure would have been eliminated from the population and \eqn{Pr(Y = 1)} denotes the factual probability of the outcome. #' If \code{Z} is sufficient for confounding control, then \eqn{Pr(Y_0=1)}{Pr(Y0 = 1)} can be expressed as #' \eqn{E_Z\{Pr(Y=1\mid{X=0,Z})\}.}{E_z{Pr(Y = 1 |X = 0,Z)}.} #' The function uses logistic regression to estimate \eqn{Pr(Y=1\mid{X=0,Z})}{Pr(Y=1|X=0,Z)}, and the marginal sample distribution of \code{Z} #' to approximate the outer expectation (\enc{Sjölander}{Sjolander} and Vansteelandt, 2012). #' If \code{clusterid} is supplied, then a clustered sandwich formula is used in all variance calculations. #' @author Elisabeth Dahlqwist, Arvid \enc{Sjölander}{Sjolander} #' @seealso The new and more general version of the function: \code{\link[AF]{AFglm}}. #' @references Greenland, S. and Drescher, K. (1993). Maximum Likelihood Estimation of the Attributable Fraction from logistic Models. \emph{Biometrics} \bold{49}, 865-872. #' @references \enc{Sjölander}{Sjolander}, A. 
and Vansteelandt, S. (2011). Doubly robust estimation of attributable fractions. \emph{Biostatistics} \bold{12}, 112-121. #' @examples #' # Simulate a cross-sectional sample #' expit <- function(x) 1 / (1 + exp( - x)) #' n <- 1000 #' Z <- rnorm(n = n) #' X <- rbinom(n = n, size = 1, prob = expit(Z)) #' Y <- rbinom(n = n, size = 1, prob = expit(Z + X)) #' #' # Example 1: non clustered data from a cross-sectional sampling design #' data <- data.frame(Y, X, Z) #' #' # Estimation of the attributable fraction #' AF.cs_est <- AF.cs(formula = Y ~ X + Z + X * Z, data = data, exposure = "X") #' summary(AF.cs_est) #' #' # Example 2: clustered data from a cross-sectional sampling design #' # Duplicate observations in order to create clustered data #' id <- rep(1:n, 2) #' data <- data.frame(id = id, Y = c(Y, Y), X = c(X, X), Z = c(Z, Z)) #' #' # Estimation of the attributable fraction #' AF.cs_clust <- AF.cs(formula = Y ~ X + Z + X * Z, data = data, #' exposure = "X", clusterid = "id") #' summary(AF.cs_clust) #' @importFrom stats aggregate ave binomial coef delete.response family glm model.matrix pnorm predict qnorm residuals stepfun terms var vcov #' @export AF.cs<- function(formula, data, exposure, clusterid){ warning("NOTE! Deprecated function. Use AFglm.") call <- match.call() mm <- match(c("formula", "data", "exposure", "clusterid"), names(call), 0L) #### Preparation of dataset #### ## Delete rows with missing on variables in the model ## rownames(data) <- 1:nrow(data) m <- model.matrix(object = formula, data = data) complete <- as.numeric(rownames(m)) data <- data[complete, ] outcome <- as.character(terms(formula)[[2]]) n <- nrow(data) n.cases <- sum(data[, outcome]) if(missing(clusterid)) n.cluster <- 0 else { n.cluster <- length(unique(data[, clusterid])) data <- data[order(data[, clusterid]), ] } ## Checks ## if(!is.binary(data[, outcome])) stop("Only binary outcome (0/1) is accepted.", call. = FALSE) if(!is.binary(data[, exposure])) stop("Only binary exposure (0/1) is accepted.", call. = FALSE) if(max(all.vars(formula[[3]]) == exposure) == 0) stop("The exposure variable is not included in the formula.", call. 
= FALSE) ## Counterfactual dataset ## data0 <- data data0[, exposure] <- 0 #### Estimate model #### object <- glm(formula = formula, family = binomial, data = data) npar <- length(object$coef) ## Design matrices ## design <- model.matrix(object = delete.response(terms(object)), data = data) design0 <- model.matrix(object = delete.response(terms(object)), data = data0) #### Meat: score equations #### ## Score equation 1 ## score.P <- data[, outcome] pred.Y <- predict(object, newdata = data, type = "response") ## Score equation 2 ## score.P0 <- predict(object, newdata = data0, type = "response") ## Score equation 3 ## score.beta <- design * (score.P - pred.Y) ### Meat ### score.equations <- cbind(score.P, score.P0, score.beta) if (!missing(clusterid)){ score.equations <- aggregate(score.equations, list(data[, clusterid]), sum)[, - 1] } meat <- var(score.equations, na.rm = TRUE) #### Bread: hessian of score equations #### ## Hessian of score equation 1 ## hessian.P <- matrix(c(- 1, 0, rep(0,npar)), nrow = 1, ncol = 2 + npar) ## Hessian of score equation 2 ## g <- family(object)$mu.eta dmu.deta <- g(predict(object = object, newdata = data0)) deta.dbeta <- design0 dmu.dbeta <- dmu.deta * deta.dbeta hessian.P0 <- matrix(c(0, - 1, colMeans(dmu.dbeta)), nrow = 1, ncol = 2 + npar) ## Hessian of score equation 3 ## hessian.beta <- cbind(matrix(rep(0, npar * 2), nrow = npar, ncol = 2) , - solve(vcov(object = object)) / n) ### Bread ### bread <- rbind(hessian.P, hessian.P0, hessian.beta) #### Sandwich #### if (!missing(clusterid)) sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) * n.cluster / n^2 ) [1:2, 1:2] else sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) / n) [1:2, 1:2] #### Point estimate of AF #### P.est <- mean(score.P, na.rm = TRUE) P0.est <- mean(score.P0, na.rm = TRUE) AF.est <- 1 - P0.est / P.est ## Delta method for variance estimate ## gradient <- as.matrix(c(P0.est / P.est ^ 2, - 1 / P.est), nrow = 2, ncol = 1) AF.var <- t(gradient) %*% sandwich %*% gradient P.var <- sandwich[1, 1] P0.var <- sandwich[2, 2] #### Output #### out <- c(list(AF.est = AF.est, AF.var = AF.var, P.est = P.est, P0.est = P0.est, P.var = P.var, P0.var = P0.var, objectcall = object$call, call = call, exposure = exposure, outcome = outcome, object = object, sandwich = sandwich, gradient = gradient, formula = formula, n = n, n.cases = n.cases, n.cluster = n.cluster)) class(out) <- "AF" return(out) }
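
#### Illustration: standardized AF for cross-sectional data ####
# Standalone sketch, not part of the package API: it mirrors only the point
# estimate computed by AF.cs, AF = 1 - E_Z{Pr(Y=1|X=0,Z)} / Pr(Y=1), obtained
# directly from a glm() fit. Simulated data, variable names and the function
# name are hypothetical; the sandwich variance is not reproduced.
sketch_AF_cs <- function() {
  set.seed(5)
  n <- 1000
  Z <- rnorm(n)
  X <- rbinom(n, size = 1, prob = plogis(Z))
  Y <- rbinom(n, size = 1, prob = plogis(Z + X))
  d <- data.frame(Y, X, Z)

  fit <- glm(Y ~ X * Z, family = binomial, data = d)

  # Counterfactual predictions with the exposure set to 0
  d0 <- d
  d0$X <- 0
  P0 <- predict(fit, newdata = d0, type = "response")

  1 - mean(P0) / mean(d$Y)
}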
## File: AF/R/AFcs.R
############## Common functions ############## globalVariables(".SD") plogis <- stats::plogis is.binary <- function(v) { x <- unique(v) if((length(x) - sum(is.na(x)) == 2L) & (max(x) == 1 & min(x) == 0)) TRUE else FALSE } # The function "aggr" aggregate the data table x by clusterid aggr <- function(x, clusters){ temp <- data.table(x) temp <- as.matrix(temp[, j = lapply(.SD, sum), by = clusters])[, -1] } ######### Remove missing expand <- function(x, names){ n <- length(names) if(is.vector(x)){ temp <- rep(NA, n) names(temp) <- names mm <- match(names(temp), names(x)) vv <- !is.na(mm) mm <- mm[vv] temp[vv] <- x[mm] } if(is.matrix(x)){ temp <- matrix(NA, nrow=n, ncol=ncol(x)) rownames(temp) <- names colnames(temp) <- colnames(x) mm <- match(rownames(temp), rownames(x)) vv <- !is.na(mm) mm <- mm[vv] temp[vv, ] <- x[mm, ] } return(temp) } deriv_matrix <- function(m1, m2, n, npar, npsi){ a1 <- aperm(array(m1, c(n, npar, npsi)), c(1, 3, 2)) a2 <- array(m2, c(n, npsi, npar)) a <- a1 * a2 out <- list(a1 = a1, a2 = a2, a = a) return(out) } ############## Summary and print functions ############## #' @export print.AF<-function(x, ...){ if(!x$n.cluster == 0) { Std.Error <- "Robust SE" se <- "cluster-robust standard error" } else { Std.Error <- "Std.Error" se <- "standard error" } cat("\nEstimated attributable fraction (AF) and", se, ":", "\n") cat("\n") table.est <- cbind(x$AF.est, sqrt(x$AF.var)) colnames(table.est) <- c("AF", Std.Error) r <- rep("", , length(x$AF.est)) rownames(table.est) <- c(r) modelcall <- as.character(x$objectcall[1]) if(modelcall == "coxph") { table.est <- cbind(x$times, table.est) colnames(table.est) <- c("Time", "AF", Std.Error) print.default(table.est) } else { print.default(table.est) } } CI.AF <- function(AF, Std.Error, confidence.level, CI.transform){ if(CI.transform == "untransformed"){ lower <- AF - abs(qnorm((1 - confidence.level) / 2)) * Std.Error upper <- AF + abs(qnorm((1 - confidence.level) / 2)) * Std.Error } if(CI.transform == "log"){ lower <- AF * exp( - abs(qnorm((1 - confidence.level) / 2)) * Std.Error / AF) upper <- AF * exp(abs(qnorm((1 - confidence.level) / 2)) * Std.Error / AF) } if(CI.transform == "logit"){ logit <- function(x) log(x / (1 - x)) lower <- exp(logit(AF) - abs(qnorm((1 - confidence.level) / 2)) * Std.Error / (AF * (1 - AF))) / (1 + exp(logit(AF) - abs(qnorm((1 - confidence.level) / 2)) * Std.Error / (AF * (1 - AF)))) upper <- exp(logit(AF) + abs(qnorm((1 - confidence.level) / 2)) * Std.Error / (AF * (1 - AF))) / (1 + exp(logit(AF) + abs(qnorm((1 - confidence.level) / 2)) * Std.Error / (AF * (1 - AF)))) } CI.AF <- cbind(lower, upper) return(CI.AF) } #' @title Summary function for objects of class "\code{AF}". #' @description Gives a summary of the AF estimate(s) including z-value, p-value and confidence interval(s). #' @param object an object of class \code{AF} from \code{\link{AFglm}}, \code{\link{AFcoxph}}, \code{\link{AFclogit}}, \code{\link{AFparfrailty}} or \code{\link{AFivglm}} functions. #' @param confidence.level user-specified confidence level for the confidence intervals. If not specified it defaults to 95 percent. Should be specified in decimals such as 0.95 for 95 percent. #' @param CI.transform user-specified transformation of the Wald confidence interval(s). Options are \code{untransformed}, \code{log} and \code{logit}. If not specified untransformed will be calculated. #' @param digits maximum number of digits. #' @param ... further arguments to be passed to the summary function. See \code{\link[base]{summary}}. 
#' @author Elisabeth Dahlqwist, Arvid \enc{Sjölander}{Sjolander} #' @importFrom graphics legend lines plot.default #' @export summary.AF <- function(object, digits = max(3L, getOption("digits") - 3L), confidence.level, CI.transform, ...){ if(missing(confidence.level)) confidence.level <- 0.95 if(missing(CI.transform)) CI.transform <- "untransformed" se <- sqrt(object$AF.var) zvalue <- object$AF.est / sqrt(object$AF.var) pvalue <- 2 * pnorm( - abs(zvalue)) confidence.interval <- CI.AF(AF = object$AF.est, Std.Error = se, confidence.level = confidence.level, CI.transform = CI.transform) colnames(confidence.interval) <- c("Lower limit", "Upper limit") if(!object$n.cluster == 0) Std.Error <- "Robust SE" else Std.Error <- "Std.Error" AF <- cbind(object$AF.est, se, zvalue, pvalue) colnames(AF) <- c("AF estimate", Std.Error, "z value", "Pr(>|z|)") modelcall <- as.character(object$objectcall[1]) if(modelcall == "glm") method = "Logistic regression" if(modelcall == "coxph") method = "Cox Proportional Hazards model" if(modelcall == "gee") method = "Conditional logistic regression" if(modelcall == "clogit") method = "Conditional logistic regression" if(modelcall == "parfrailty") method = "Weibull gamma-frailty model" if(modelcall == "ivglm") modelcall = object$inputcall if(modelcall == "g") method = "G-estimation" if(modelcall == "ts") method = "Two Stage Least Square" if(modelcall == "coxph" | modelcall == "parfrailty"){ ans <- list(AF = AF, times = object$times, CI.transform = CI.transform, confidence.level = confidence.level, confidence.interval = confidence.interval, n.obs = object$n, n.cases = object$n.cases, n.cluster = object$n.cluster, modelcall = modelcall, objectcall = object$objectcall, method = method, formula = object$formula, exposure = object$exposure, outcome = object$outcome, object = object, sandwich = object$sandwich, Std.Error = se, times = object$times, call = object$call) } else if(modelcall == "g"| modelcall == "ts"){ ans <- list(AF = AF, CI.transform = CI.transform, confidence.level = confidence.level, confidence.interval = confidence.interval, n.obs = object$n, n.cases = object$n.cases, n.cluster = object$n.cluster, modelcall = modelcall, objectcall = object$objectcall, link = object$link, method = method, formula = object$formula, exposure = object$exposure, outcome = object$outcome, object = object, sandwich = object$sandwich, Std.Error = se, formula = object$formula, psi = object$psi, fitY = object$fitY, fitZ = object$fitZ, call = object$call, inputcall = object$inputcall) } else{ ans <- list(AF = AF, CI.transform = CI.transform, confidence.level = confidence.level, confidence.interval = confidence.interval, n.obs = object$n, n.cases = object$n.cases, n.cluster = object$n.cluster, modelcall = modelcall, objectcall = object$objectcall, method = method, formula = object$formula, exposure = object$exposure, outcome = object$outcome, object = object, sandwich = object$sandwich, Std.Error = se, call = object$call) } class(ans) <- "summary.AF" return(ans) } #' @export print.summary.AF <- function(x, digits = max(3L, getOption("digits") - 3L), ...){ cat("Call: ", "\n") print.default(x$call) if(!x$n.cluster == 0) Std.Error <- "Robust SE" else Std.Error <- "Std.Error" if(x$CI.transform == "log") x$CI.transform <- "log transformed" if(x$CI.transform == "logit") x$CI.transform <- "logit transformed" level <- x$confidence.level * 100 CI.text <- paste0(as.character(level),"%") cat("\nEstimated attributable fraction (AF) and", x$CI.transform, CI.text, "Wald CI:", "\n") cat("\n") 
table.est <- cbind(x$AF, x$confidence.interval) colnames(table.est) <- c("AF", Std.Error, "z value", "Pr(>|z|)", "Lower limit", "Upper limit") r <- rep("", , nrow(x$AF)) rownames(table.est) <- c(r) modelcall <- as.character(x$call[1]) if(x$modelcall == "coxph" | x$modelcall == "parfrailty"){ table.est <- cbind(x$times, table.est) colnames(table.est) <- c("Time", "AF", Std.Error, "z value", "Pr(>|z|)", "Lower limit", "Upper limit") print.default(table.est) } else { print.default(table.est) } cat("\nExposure", ":", x$exposure, "\n") if(x$modelcall == "coxph" | x$modelcall == "parfrailty") outcome <- "Event " else outcome <- "Outcome " #cat("\n") cat(outcome, ":", x$outcome, "\n") cat("\n") table.nr <- cbind(x$n.obs, x$n.cases) rownames(table.nr) <- c("") if(x$modelcall == "coxph" | x$modelcall == "parfrailty") number <- "Events" else number <- "Cases" colnames(table.nr) <- c("Observations", number) if (x$n.cluster == 0) print.default(table.nr) else{ table.nr.cluster <- cbind(table.nr, x$n.cluster) colnames(table.nr.cluster) <- c("Observations", number, "Clusters") print.default(table.nr.cluster) } if(x$modelcall == "g"){ cat("\nMethod for confounder adjustment: G-estimation with", x$link, "-link", "\n") target <- ifelse(x$link == "log", "Causal Risk Ratio:", "Causal Odds Ratio:") est <- ifelse(x$link == "log", exp(x$psi), exp(x$psi)) Target_param <- paste("\n", target, sep="") cat(Target_param, as.character(round(est, 2)), "\n") cat("Call: ", "\n") print.default(x$objectcall) if(length(x$fitZ$coef) > 1){ cat("\nConfounder adjustment of the IV-outcome relationship:", "\n") cat("Call: ", "\n") print.default(x$fitZ$call) } if(length(x$psi) > 1){ cat("\nInteraction model for the IV-outcome confounders and exposure:", "\n") cat("Call: ", "\n") print.default(x$formula) } if(x$link == "logit"){ cat("\nAssociation model:", "\n") cat("Call: ", "\n") print.default(x$fitY$call) } } else{ cat("\nMethod for confounder adjustment: ", x$method, "\n") if(x$modelcall == "ts"){ Target_param <- paste("\n", "Causal Risk Ratio:", sep="") cat(Target_param, as.character(round(exp(x$psi), 2)), "\n") } cat("Call: ", "\n") print.default(x$objectcall) } return(table.est) } #' @title Plot function for objects of class "\code{AF}" from the function \code{AFcoxph} or \code{AFparfrailty}. #' @description Creates a simple scatterplot for the AF function with time sequence (specified by the user as \code{times} in the \code{\link{AFcoxph}} function) on the x-axis and the AF function estimate on the y-axis. #' @param x an object of class \code{AF} from the \code{\link{AFcoxph}} or \code{\link{AFparfrailty}} function. #' @param CI if TRUE confidence intervals are estimated and ploted in the graph. #' @param confidence.level user-specified confidence level for the confidence intervals. If not specified it defaults to 95 percent. Should be specified in decimals such as 0.95 for 95 percent. #' @param CI.transform user-specified transformation of the Wald confidence interval(s). Options are \code{untransformed}, \code{log} and \code{logit}. If not specified untransformed will be calculated. #' @param xlab label on the x-axis. If not specified the label \emph{"Time"} will be displayed. #' @param main main title of the plot. If not specified the lable \emph{"Estimate of the attributable fraction function"} will be displayed. #' @param ylim limits on the y-axis of the plot. 
If not specified the minimum value of the lower bound of the confidence interval will be used as the minimal value and the maximum value of the upper bound of the confidence interval will be used as the maximum of y-axis of the plot. #' @param ... further arguments to be passed to the plot function. See \code{\link[graphics]{plot}}. #' @author Elisabeth Dahlqwist, Arvid \enc{Sjölander}{Sjolander} #' @importFrom graphics legend lines plot.default #' @export plot.AF <- function(x, CI = TRUE, confidence.level, CI.transform, xlab, main, ylim, ...){ modelcall <- as.character(x$objectcall[1]) if(modelcall != "coxph" & modelcall != "parfrailty") stop("Plot function is only available for the attributable fraction function. That is objects from the AFcoxph or AFparfrailty functions", call. = FALSE) if(missing(confidence.level)) confidence.level <- 0.95 if(missing(CI.transform)) CI.transform <- "untransformed" if(missing(xlab)) xlab <- "Time" if(missing(main)) main <- "" if(CI == TRUE){ confidence.interval <- CI.AF(AF = x$AF.est, Std.Error = sqrt(x$AF.var), confidence.level = confidence.level, CI.transform = CI.transform) if(missing(ylim)) ylim <- c(min(confidence.interval), max(confidence.interval)) plot.default(x$times, x$AF.est, main = main, ylab = "Attributable fraction function" , xlab = xlab, ylim = ylim, pch = 19, lty = 1, type = "o", ...) lines( x$times, confidence.interval[, 2], lty = 2) lines( x$times, confidence.interval[, 1], lty = 2) level <- confidence.level * 100 CI <- paste0(as.character(level),"% Conf. Interval") if(CI.transform == "log") transform <- "(log transformed)" if(CI.transform == "logit") transform <- "(logit transformed)" if(CI.transform == "untransformed") transform <- "" legend("topright", legend = c("AF estimate", CI, transform), pch = c(19, NA, NA), lty = c(1, 2, 0), bty = "n") } }
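# --------------------------------------------------------------------------
# Illustrative usage sketch (not part of the package source): exercising the
# summary() method defined above on an "AF" object. The simulation mirrors the
# AFglm() example documented later in this dump; the confidence.level and
# CI.transform values are simply the optional arguments of summary.AF() above,
# chosen here for illustration. plot() is only available for AF objects created
# by AFcoxph() or AFparfrailty().
library(AF)
expit <- function(x) 1 / (1 + exp(-x))
n <- 1000
Z <- rnorm(n)
X <- rbinom(n, size = 1, prob = expit(Z))
Y <- rbinom(n, size = 1, prob = expit(Z + X))
d <- data.frame(Y, X, Z)
fit <- glm(Y ~ X + Z, family = binomial, data = d)
AFest <- AFglm(object = fit, data = d, exposure = "X")
summary(AFest, confidence.level = 0.90, CI.transform = "logit")
# --------------------------------------------------------------------------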
/scratch/gouwar.j/cran-all/cranData/AF/R/AFfunctions.R
AFgest <- function(object, data){ call <- match.call() link <- object$input$link inputcall <- object$input$estmethod objectcall <- object$call # Warning if the object is not a gest object if(!(as.character(inputcall) == "g")) stop("The object is not the G-estimator", call. = FALSE) # Warning if the object is not a gest object with log or logit link if(!(link == "log" | link == "logit")) stop("The link is not log or logit", call. = FALSE) ####################### # Preparation of object to fit format psi <- object$est X <- object$input$X Y <- object$input$Y ####################### fitY <- object$input$fitY.LZX fitZ <- object$fitZ.L formulaZ <- fitZ$formula Z <- as.character(formulaZ[2]) formula <- object$input$formula #### remove NA based on additional model temp <- model.matrix(object = formula, data = data) if(formula == ~1) rownames(temp) <- rownames(data) ## Delete rows with missing on variables in the model ## designpsi <- expand(temp, rownames(data)) npsi <- ncol(designpsi) nZ <- length(coef(fitZ)) n <- nrow(designpsi) n.cases <- sum(data[ , Y], na.rm = TRUE) if(object$converged == TRUE) { if(link == "log"){ y0 <- data[, Y] * as.vector(exp( -(designpsi %*% psi) * data[, X])) P0.est <- mean(y0, na.rm = TRUE) P.est <- mean(data[, Y], na.rm = TRUE) AF.est <- 1 - P0.est / P.est ### Score equations ## S(y0) score.y0 <- y0 - P0.est ## S(y) score.y <- data[, Y] - P.est ## Score functions of all parameters (y, y0, psi, E(z)) score <- cbind(score.y, score.y0, object$estfun) ### Meat for the sandwich estimator meat <- var(score, na.rm = TRUE) ### Hessian I.y <- c(-1, 0, rep(0, npsi + nZ)) dy0.dpsi <- colMeans(-designpsi * data[, X] * y0, na.rm = TRUE) I.y0 <- c(0, -1, dy0.dpsi, rep(0, nZ)) ## Bread bread <- rbind(I.y, I.y0, cbind(matrix(0, ncol = 2, nrow = npsi + nZ), object$d.estfun)) ## Variance sandwich <- (solve(bread) %*% meat %*% t(solve(bread)) / n)[1:2, 1:2] gradient <- as.matrix(c(P0.est / P.est ^ 2, - 1 / P.est), nrow = 2, ncol = 1) AF.var <- t(gradient) %*% sandwich %*% gradient } if(link == "logit"){ nY <- length(coef(fitY)) designY <- expand(model.matrix(object=fitY, data = data), rownames(data)) linear_predY <- predict(object = fitY, newdata = data) ## Use only observations with non-missing for Y and X linear_predY <- linear_predY y0 <- plogis(linear_predY - (designpsi %*% psi) * data[, X]) P0.est <- mean(y0, na.rm = TRUE) P.est <- mean(data[, Y], na.rm = TRUE) AF.est <- 1 - P0.est / P.est ### Score equations ## S(y0) score.y0 <- y0 - mean(y0, na.rm = TRUE) ## S(y) score.y <- data[, Y] - mean(data[, Y], na.rm = TRUE) ## Score functions of all parameters (y, y0, psi, E(z), alpha) score <- cbind(score.y, score.y0, object$estfun) ### Meat for the sandwich estimator meat <- var(score, na.rm = TRUE) ### Hessian I.y <- c(-1, 0, rep(0, npsi + nZ + nY)) dy0.dalpha <- colMeans(designY * as.vector((y0 * (1- y0))), na.rm = TRUE) dy0.dpsi <- colMeans(-(designpsi * data[, X]) * as.vector((y0 * (1- y0))), na.rm = TRUE) I.y0 <- c(0, -1, dy0.dpsi, rep(0, nZ), dy0.dalpha) ## Bread bread <- rbind(I.y, I.y0, cbind(matrix(0, ncol = 2, nrow = npsi + nZ + nY), object$d.estfun)) ## Variance sandwich <- (solve(bread) %*% meat %*% t(solve(bread)) / n)[1:2, 1:2] gradient <- as.matrix(c(P0.est / P.est ^ 2, - 1 / P.est), nrow = 2, ncol = 1) AF.var <- t(gradient) %*% sandwich %*% gradient } #### Output out <- list(AF.est = AF.est, AF.var = AF.var, link = link, objectcall = objectcall, call = call, inputcall = inputcall, exposure = X, outcome = Y, n = n, n.cases = n.cases, n.cluster = 0, formula = formula, psi 
= psi, fitZ = fitZ, nZ = nZ, fitY = fitY, Z = Z) } else{ #### Output out <- list(AF.est = NA, AF.var = NA, link = link, objectcall = objectcall, call = call, inputcall = inputcall, exposure = X, outcome = Y, n = n, n.cases = n.cases, n.cluster = 0, formula = formula, psi = psi, fitZ = fitZ, nZ = nZ, fitY = fitY, Z = Z) } class(out) <- "AF" return(out) }
/scratch/gouwar.j/cran-all/cranData/AF/R/AFgest.R
############## AF function for a glm object ##################### #' @title Attributable fraction estimation based on a logistic regression model from a \code{glm} object (commonly used for cross-sectional or case-control sampling designs). #' @description \code{AFglm} estimates the model-based adjusted attributable fraction for data from a logistic regression model in the form of a \code{\link[stats]{glm}} object. This model is commonly used for data from a cross-sectional or non-matched case-control sampling design. #' @param object a fitted logistic regression model object of class "\code{\link[stats]{glm}}". #' @param data an optional data frame, list or environment (or object coercible by \code{as.data.frame} to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment (\code{formula}), typically the environment from which the function is called. #' @param exposure the name of the exposure variable as a string. The exposure must be binary (0/1) where unexposed is coded as 0. #' @param clusterid the name of the cluster identifier variable as a string, if data are clustered. Cluster robust standard errors will be calculated. #' @param case.control can be set to \code{TRUE} if the data is from a non-matched case control study. By default \code{case.control} is set to \code{FALSE} which is used for cross-sectional sampling designs. #' @return \item{AF.est}{estimated attributable fraction.} #' @return \item{AF.var}{estimated variance of \code{AF.est}. The variance is obtained by combining the delta method with the sandwich formula.} #' @return \item{P.est}{estimated factual proportion of cases; \eqn{Pr(Y=1)}. Returned by default when \code{case.control = FALSE}.} #' @return \item{P.var}{estimated variance of \code{P.est}. The variance is obtained by the sandwich formula. Returned by default when \code{case.control = FALSE}.} #' @return \item{P0.est}{estimated counterfactual proportion of cases if exposure would be eliminated; \eqn{Pr(Y_0=1)}{Pr(Y0=1)}. Returned by default when \code{case.control = FALSE}.} #' @return \item{P0.var}{estimated variance of \code{P0.est}. The variance is obtained by the sandwich formula. Returned by default when \code{case.control = FALSE}.} #' @return \item{log.or}{a vector of the estimated log odds ratio for every individual. \code{log.or} contains the estimated coefficient for the exposure variable \code{X} for every level of the confounder \code{Z} as specified by the user in the formula. If the model to be estimated is #' \deqn{logit\{Pr(Y=1|X,Z)\} = \alpha+\beta{X}+\gamma{Z}}{logit {Pr(Y=1|X,Z)} = \alpha + \beta X + \gamma Z} #' then \code{log.or} is the estimate of \eqn{\beta}. #' If the model to be estimated is #' \deqn{logit\{Pr(Y=1|X,Z)\}=\alpha+\beta{X}+\gamma{Z}+\psi{XZ}}{logit{Pr(Y=1|X,Z)} = \alpha + \beta X +\gamma Z +\psi XZ} #' then \code{log.odds} is the estimate of #' \eqn{\beta + \psi{Z}}{\beta + \psi Z}. Only returned if argument \code{case.control} is set to \code{TRUE}.} #' @details \code{AFglm} estimates the attributable fraction for a binary outcome \code{Y} #' under the hypothetical scenario where a binary exposure \code{X} is eliminated from the population. #' The estimate is adjusted for confounders \code{Z} by logistic regression using the (\code{\link[stats]{glm}}) function. #' The estimation strategy is different for cross-sectional and case-control sampling designs even if the underlying logististic regression model is the same. 
#' For cross-sectional sampling designs the AF can be defined as #' \deqn{AF=1-\frac{Pr(Y_0=1)}{Pr(Y=1)}}{AF = 1 - Pr(Y0 = 1) / Pr(Y = 1)} #' where \eqn{Pr(Y_0=1)}{Pr(Y0 = 1)} denotes the counterfactual probability of the outcome if #' the exposure would have been eliminated from the population and \eqn{Pr(Y = 1)} denotes the factual probability of the outcome. #' If \code{Z} is sufficient for confounding control, then \eqn{Pr(Y_0=1)}{Pr(Y0 = 1)} can be expressed as #' \eqn{E_Z\{Pr(Y=1\mid{X=0,Z})\}.}{E_z{Pr(Y = 1 |X = 0,Z)}.} #' The function uses logistic regression to estimate \eqn{Pr(Y=1\mid{X=0,Z})}{Pr(Y=1|X=0,Z)}, and the marginal sample distribution of \code{Z} #' to approximate the outer expectation (\enc{Sjölander}{Sjolander} and Vansteelandt, 2012). #' For case-control sampling designs the outcome prevalence is fixed by sampling design and absolute probabilities (\code{P.est} and \code{P0.est}) can not be estimated. #' Instead adjusted log odds ratios (\code{log.or}) are estimated for each individual. #' This is done by setting \code{case.control} to \code{TRUE}. It is then assumed that the outcome is rare so that the risk ratio can be approximated by the odds ratio. #' For case-control sampling designs the AF be defined as (Bruzzi et. al) #' \deqn{AF = 1 - \frac{Pr(Y_0=1)}{Pr(Y = 1)}}{AF = 1 - Pr(Y0 = 1) / Pr(Y = 1)} #' where \eqn{Pr(Y_0=1)}{Pr(Y0 = 1)} denotes the counterfactual probability of the outcome if #' the exposure would have been eliminated from the population. If \code{Z} is sufficient for confounding control then the probability \eqn{Pr(Y_0=1)}{Pr(Y0 = 1)} can be expressed as #' \deqn{Pr(Y_0=1)=E_Z\{Pr(Y=1\mid{X}=0,Z)\}.}{Pr(Y0=1) = E_z{Pr(Y = 1 | X = 0, Z)}.} #' Using Bayes' theorem this implies that the AF can be expressed as #' \deqn{AF = 1-\frac{E_Z\{Pr(Y=1\mid X=0,Z)\}}{Pr(Y=1)}=1-E_Z\{RR^{-X}(Z)\mid{Y = 1}\}}{ #' AF = 1 - E_z{Pr( Y = 1 | X = 0, Z)} / Pr(Y = 1) = 1 - E_z{RR^{-X} (Z) | Y = 1}} #' where \eqn{RR(Z)} is the risk ratio \deqn{\frac{Pr(Y=1\mid{X=1,Z})}{Pr(Y=1\mid{X=0,Z})}.}{Pr(Y = 1 | X = 1,Z)/Pr(Y=1 | X = 0, Z).} #' Moreover, the risk ratio can be approximated by the odds ratio if the outcome is rare. Thus, #' \deqn{ AF \approx 1 - E_Z\{OR^{-X}(Z)\mid{Y = 1}\}.}{AF is approximately 1 - E_z{OR^{-X}(Z) | Y = 1}.} #' If \code{clusterid} is supplied, then a clustered sandwich formula is used in all variance calculations. #' @author Elisabeth Dahlqwist, Arvid \enc{Sjölander}{Sjolander} #' @seealso \code{\link[stats]{glm}} used for fitting the logistic regression model. For conditional logistic regression (commonly for data from a matched case-control sampling design) see \code{\link[AF]{AFclogit}}. #' @references Bruzzi, P., Green, S. B., Byar, D., Brinton, L. A., and Schairer, C. (1985). Estimating the population attributable risk for multiple risk factors using case-control data. \emph{American Journal of Epidemiology} \bold{122}, 904-914. #' @references Greenland, S. and Drescher, K. (1993). Maximum Likelihood Estimation of the Attributable Fraction from logistic Models. \emph{Biometrics} \bold{49}, 865-872. #' @references \enc{Sjölander}{Sjolander}, A. and Vansteelandt, S. (2011). Doubly robust estimation of attributable fractions. \emph{Biostatistics} \bold{12}, 112-121. 
#' @examples #' # Simulate a cross-sectional sample #' #' expit <- function(x) 1 / (1 + exp( - x)) #' n <- 1000 #' Z <- rnorm(n = n) #' X <- rbinom(n = n, size = 1, prob = expit(Z)) #' Y <- rbinom(n = n, size = 1, prob = expit(Z + X)) #' #' # Example 1: non clustered data from a cross-sectional sampling design #' data <- data.frame(Y, X, Z) #' #' # Fit a glm object #' fit <- glm(formula = Y ~ X + Z + X * Z, family = binomial, data = data) #' #' # Estimate the attributable fraction from the fitted logistic regression #' AFglm_est <- AFglm(object = fit, data = data, exposure = "X") #' summary(AFglm_est) #' #' # Example 2: clustered data from a cross-sectional sampling design #' # Duplicate observations in order to create clustered data #' id <- rep(1:n, 2) #' data <- data.frame(id = id, Y = c(Y, Y), X = c(X, X), Z = c(Z, Z)) #' #' # Fit a glm object #' fit <- glm(formula = Y ~ X + Z + X * Z, family = binomial, data = data) #' #' # Estimate the attributable fraction from the fitted logistic regression #' AFglm_clust <- AFglm(object = fit, data = data, #' exposure = "X", clusterid = "id") #' summary(AFglm_clust) #' #' #' # Example 3: non matched case-control #' # Simulate a sample from a non matched case-control sampling design #' # Make the outcome a rare event by setting the intercept to -6 #' #' expit <- function(x) 1 / (1 + exp( - x)) #' NN <- 1000000 #' n <- 500 #' intercept <- -6 #' Z <- rnorm(n = NN) #' X <- rbinom(n = NN, size = 1, prob = expit(Z)) #' Y <- rbinom(n = NN, size = 1, prob = expit(intercept + X + Z)) #' population <- data.frame(Z, X, Y) #' Case <- which(population$Y == 1) #' Control <- which(population$Y == 0) #' # Sample cases and controls from the population #' case <- sample(Case, n) #' control <- sample(Control, n) #' data <- population[c(case, control), ] #' #' # Fit a glm object #' fit <- glm(formula = Y ~ X + Z + X * Z, family = binomial, data = data) #' #' # Estimate the attributable fraction from the fitted logistic regression #' AFglm_est_cc <- AFglm(object = fit, data = data, exposure = "X", case.control = TRUE) #' summary(AFglm_est_cc) #' @importFrom stats aggregate as.formula ave binomial coef delete.response family model.matrix pnorm predict qnorm residuals stepfun terms var vcov #' @import data.table #' @export AFglm <- function(object, data, exposure, clusterid, case.control = FALSE){ call <- match.call() # Warning if the object is not a glm object if(!(as.character(object$call[1]) == "glm")) stop("The object is not a glm object", call. = FALSE) # Warning if the object is not a logistic regression if(!(object$family[1] == "binomial" & object$family[2] == "logit")) stop("The object is not a logistic regression", call. = FALSE) #### Preparation of dataset #### formula <- object$formula #data <- object$data npar <- length(object$coef) ## Delete rows with missing on variables in the model ## rownames(data) <- 1:nrow(data) m <- model.matrix(object = formula, data = data) complete <- as.numeric(rownames(m)) data <- data[complete, ] outcome <- as.character(terms(formula)[[2]]) n <- nrow(data) n.cases <- sum(data[, outcome]) clusters <- data[, clusterid] if(missing(clusterid)) n.cluster <- 0 else { n.cluster <- length(unique(data[, clusterid])) } ## Checks ## if(!class(exposure) == "character") stop("Exposure must be a string.", call. = FALSE) if(!is.binary(data[, exposure])) stop("Only binary exposure (0/1) is accepted.", call. = FALSE) if(max(all.vars(formula[[3]]) == exposure) == 0) stop("The exposure variable is not included in the formula.", call. 
= FALSE) # Create dataset data0 for counterfactual X = 0 data0 <- data data0[, exposure] <- 0 ## Design matrices ## design <- model.matrix(object = delete.response(terms(object)), data = data) design0 <- model.matrix(object = delete.response(terms(object)), data = data0) #### Meat: score equations #### ## If sampling design is case-control ## if (case.control == TRUE){ ## Create linear predictors to estimate the log odds ratio ## diff.design <- design0 - design linearpredictor <- design %*% coef(object) linearpredictor0 <- design0 %*% coef(object) #log odds ratio# log.or <- linearpredictor - linearpredictor0 ## Estimate approximate AF ## AF.est <- 1 - sum(data[, outcome] * exp( - log.or)) / sum(data[, outcome]) #### Meat: score equations #### ## Score equation 1 ## individual estimating equations of the estimate of AF score.AF <- data[, outcome] * (exp( - log.or) - AF.est) ## Score equation 2 ## individual estimating equations from conditional logistic reg. pred.diff <- data[, outcome] - predict(object, newdata = data, type = "response") score.beta <- design * pred.diff score.equations <- cbind(score.AF, score.beta) if (!missing(clusterid)){ score.equations <- score.equations score.equations <- aggr(score.equations, clusters = clusters) } meat <- var(score.equations, na.rm=TRUE) #### Bread: hessian of score equations #### ## Hessian of score equation 1 ## #### Estimating variance using Sandwich estimator #### hessian.AF1 <- - data[, outcome] hessian.AF2 <- (design0 - design) * as.vector(data[, outcome] * exp( - log.or)) hessian.AF <- cbind(mean(hessian.AF1), t(colMeans(hessian.AF2, na.rm = TRUE))) hessian.beta <- cbind(matrix(rep(0, npar), nrow = npar, ncol = 1), - solve(vcov(object = object)) / n) ### Bread ### bread <- rbind(hessian.AF, hessian.beta) #### Sandwich #### if (!missing(clusterid)) sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) * n.cluster / n^2 ) [1:2, 1:2] else sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) / n) [1:2, 1:2] AF.var <- sandwich[1, 1] #### Output #### out <- c(list(AF.est = AF.est, AF.var = AF.var, log.or = log.or, objectcall = object$call, call = call, exposure = exposure, outcome = outcome, object = object, sandwich = sandwich, formula = formula, n = n, n.cases = n.cases, n.cluster = n.cluster)) } ## If sampling design is cross-sectional ## else { ## Score equation 1 ## score.P <- data[, outcome] pred.Y <- predict(object, newdata = data, type = "response") ## Score equation 2 ## score.P0 <- predict(object, newdata = data0, type = "response") ## Score equation 3 ## score.beta <- design * (score.P - pred.Y) ### Meat ### score.equations <- cbind(score.P, score.P0, score.beta) if (!missing(clusterid)){ score.equations <- score.equations score.equations <- aggr(score.equations, clusters = clusters) } meat <- var(score.equations, na.rm = TRUE) #### Bread: hessian of score equations #### ## Hessian of score equation 1 ## hessian.P <- matrix(c(- 1, 0, rep(0,npar)), nrow = 1, ncol = 2 + npar) ## Hessian of score equation 2 ## g <- family(object)$mu.eta dmu.deta <- g(predict(object = object, newdata = data0)) deta.dbeta <- design0 dmu.dbeta <- dmu.deta * deta.dbeta hessian.P0 <- matrix(c(0, - 1, colMeans(dmu.dbeta)), nrow = 1, ncol = 2 + npar) ## Hessian of score equation 3 ## hessian.beta <- cbind(matrix(rep(0, npar * 2), nrow = npar, ncol = 2) , - solve(vcov(object = object)) / n) ### Bread ### bread <- rbind(hessian.P, hessian.P0, hessian.beta) #### Sandwich #### if (!missing(clusterid)) sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) * 
n.cluster / n^2 ) [1:2, 1:2] else sandwich <- (solve (bread) %*% meat %*% t(solve (bread)) / n) [1:2, 1:2] #### Point estimate of AF #### P.est <- mean(score.P, na.rm = TRUE) P0.est <- mean(score.P0, na.rm = TRUE) AF.est <- 1 - P0.est / P.est ## Delta method for variance estimate ## gradient <- as.matrix(c(P0.est / P.est ^ 2, - 1 / P.est), nrow = 2, ncol = 1) AF.var <- t(gradient) %*% sandwich %*% gradient P.var <- sandwich[1, 1] P0.var <- sandwich[2, 2] objectcall <- object$call #### Output #### out <- c(list(AF.est = AF.est, AF.var = AF.var, P.est = P.est, P0.est = P0.est, P.var = P.var, P0.var = P0.var, objectcall = objectcall, call = call, exposure = exposure, outcome = outcome, object = object, sandwich = sandwich, gradient = gradient, formula = formula, n = n, n.cases = n.cases, n.cluster = n.cluster)) } class(out) <- "AF" return(out) }
/scratch/gouwar.j/cran-all/cranData/AF/R/AFglm.R
############## AF function for a ivglm object ##################### #' @title Attributable fraction function based on Instrumental Variables (IV) regression as an \code{\link[ivtools]{ivglm}} object in the \code{ivtools} package. #' @description \code{AFivglm} estimates the model-based adjusted attributable fraction from a Instrumental Variable regression from a \code{\link[ivtools]{ivglm}} object. The IV regression can be estimated by either G-estimation or Two Stage estimation for a binary exposure and outcome. #' @param object a fitted Instrumental Variable regression of class "\code{\link[ivtools]{ivglm}}". #' @param data an optional data frame, list or environment (or object coercible by \code{as.data.frame} to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment (\code{formula}), typically the environment from which the function is called. #' @return \item{AF.est}{estimated attributable fraction.} #' @return \item{AF.var}{estimated variance of \code{AF.est}. The variance is obtained by combining the delta methods with the sandwich formula.} #' @details \code{AFivglm} estimates the attributable fraction for an IV regression #' under the hypothetical scenario where a binary exposure \code{X} is eliminated from the population. #' The estimate can be adjusted for IV-outcome confounders \code{L} in the \code{\link[ivtools]{ivglm}} function. #' Let the AF function be defined as #' \deqn{AF=1-\frac{Pr(Y_0=1)}{Pr(Y=1)}}{AF = 1 - \frac{Pr(Y0=1)}{Pr(Y=1)}} #' where \eqn{Pr(Y_0=1)}{Pr(Y0=1)} denotes the counterfactual outcome prevalence had everyone been unexposed and \eqn{Pr(Y=1)}{Pr(Y=1)} denotes the factual outcome prevalence. #' If the instrument \code{Z} is valid, conditional on covariates \code{L}, i.e. fulfills the IV assumptions 1) the IV should have a (preferably strong) association with #'the exposure, 2) the effect of the IV on the outcome should only go through the exposure and 3) the IV-outcome association should be unconfounded #' (Imbens and Angrist, 1994) then \eqn{Pr(Y_0=1)}{Pr(Y0=1)} can be estimated. #' @author Elisabeth Dahlqwist, Arvid \enc{Sjölander}{Sjolander} #' @seealso \code{\link[ivtools]{ivglm}} used for fitting the causal risk ratio or odds ratio using the G-estimator or Two stage estimator. #' @references Dahlqwist E., Kutalik Z., \enc{Sjölander}{Sjolander}, A. (2019). Using Instrumental Variables to estimate the attributable fraction. \emph{Manuscript}. 
#' @examples #' # Example 1 #' set.seed(2) #' n <- 5000 #' ## parameter a0 determines the outcome prevalence #' a0 <- -4 #' psi.true <- 1 #' l <- rbinom(n, 1, 0.5) #' u <- rbinom(n, 1, 0.5) #' z <- rbinom(n, 1, plogis(a0)) #' x <- rbinom(n, 1, plogis(a0+3*z+ u)) #' y <- rbinom(n, 1, exp(a0+psi.true*x+u)) #' d <- data.frame(z,u,x,y,l) #' ## Outcome prevalence #' mean(d$y) #' #' ####### G-estimation #' ## log CRR #' fitz.l <- glm(z~1, family=binomial, data=d) #' gest_log <- ivglm(estmethod="g", X="x", Y="y", #' fitZ.L=fitz.l, data=d, link="log") #' AFgestlog <- AFivglm(gest_log, data=d) #' summary(AFgestlog) #' #' ## log COR #' ## Associational model, saturated #' fit_y <- glm(y~x+z+x*z, family="binomial", data=d) #' ## Estimations of COR and AF #' gest_logit <- ivglm(estmethod="g", X="x", Y="y", #' fitZ.L=fitz.l, fitY.LZX=fit_y, #' data=d, link="logit") #' AFgestlogit <- AFivglm(gest_logit, data = d) #' summary(AFgestlogit) #' #' ####### TS estimation #' ## log CRR #' # First stage #' fitx <- glm(x ~ z, family=binomial, data=d) #' # Second stage #' fity <- glm(y ~ x, family=poisson, data=d) #' ## Estimations of CRR and AF #' TSlog <- ivglm(estmethod="ts", X="x", Y="y", #' fitY.LX=fity, fitX.LZ=fitx, data=d, link="log") #' AFtslog <- AFivglm(TSlog, data=d) #' summary(AFtslog) #' #' ## log COR #' # First stage #' fitx_logit <- glm(x ~ z, family=binomial, data=d) #' # Second stage #' fity_logit <- glm(y ~ x, family=binomial, data=d) #' ## Estimations of COR and AF #' TSlogit <- ivglm(estmethod="ts", X="x", Y="y", #' fitY.LX=fity_logit, fitX.LZ=fitx_logit, #' data=d, link="logit") #' AFtslogit <- AFivglm(TSlogit, data=d) #' summary(AFtslogit) #' #' ## Example 2: IV-outcome confounding by L #' ####### G-estimation #' ## log CRR #' fitz.l <- glm(z~l, family=binomial, data=d) #' gest_log <- ivglm(estmethod="g", X="x", Y="y", #' fitZ.L=fitz.l, data=d, link="log") #' AFgestlog <- AFivglm(gest_log, data=d) #' summary(AFgestlog) #' #' ## log COR #' ## Associational model #' fit_y <- glm(y~x+z+l+x*z+x*l+z*l, family="binomial", data=d) #' ## Estimations of COR and AF #' gest_logit <- ivglm(estmethod="g", X="x", Y="y", #' fitZ.L=fitz.l, fitY.LZX=fit_y, #' data=d, link="logit") #' AFgestlogit <- AFivglm(gest_logit, data = d) #' summary(AFgestlogit) #' #' ####### TS estimation #' ## log CRR #' # First stage #' fitx <- glm(x ~ z+l, family=binomial, data=d) #' # Second stage #' fity <- glm(y ~ x+l, family=poisson, data=d) #' ## Estimations of CRR and AF #' TSlog <- ivglm(estmethod="ts", X="x", Y="y", #' fitY.LX=fity, fitX.LZ=fitx, data=d, #' link="log") #' AFtslog <- AFivglm(TSlog, data=d) #' summary(AFtslog) #' #' ## log COR #' # First stage #' fitx_logit <- glm(x ~ z+l, family=binomial, data=d) #' # Second stage #' fity_logit <- glm(y ~ x+l, family=binomial, data=d) #' ## Estimations of COR and AF #' TSlogit <- ivglm(estmethod="ts", X="x", Y="y", #' fitY.LX=fity_logit, fitX.LZ=fitx_logit, #' data=d, link="logit") #' AFtslogit <- AFivglm(TSlogit, data=d) #' summary(AFtslogit) #' @importFrom stats plogis #' @import ivtools #' @export AFivglm<- function(object, data){ call <- match.call() method <- object$input$estmethod # Warning if the object is not a ivglm object if(!(as.character(object$call[1]) == "ivglm")) stop("The object is not an ivglm object", call. = FALSE) ####################### if(method == "g"){ result <- AFgest(object = object, data = data) } if(method == "ts"){ result <- AFtsest(object = object, data = data) } #### Output out <- result return(out) }
/scratch/gouwar.j/cran-all/cranData/AF/R/AFivglm.R
############## AF function for a parfrailty object ##################### #' @title Attributable fraction function based on a Weibull gamma-frailty model as a \code{\link[stdReg]{parfrailty}} object (commonly used for cohort sampling family designs with time-to-event outcomes). #' @description \code{AFparfrailty} estimates the model-based adjusted attributable fraction function from a shared Weibull gamma-frailty model in form of a \code{\link[stdReg]{parfrailty}} object. This model is commonly used for data from cohort sampling familty designs with time-to-event outcomes. #' @param object a fitted Weibull gamma-parfrailty object of class "\code{\link[stdReg]{parfrailty}}". #' @param data an optional data frame, list or environment (or object coercible by \code{as.data.frame} to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment (\code{formula}), typically the environment from which the function is called. #' @param exposure the name of the exposure variable as a string. The exposure must be binary (0/1) where unexposed is coded as 0. #' @param times a scalar or vector of time points specified by the user for which the attributable fraction function is estimated. If not specified the observed death times will be used. #' @param clusterid the name of the cluster identifier variable as a string, if data are clustered. #' @return \item{AF.est}{estimated attributable fraction function for every time point specified by \code{times}.} #' @return \item{AF.var}{estimated variance of \code{AF.est}. The variance is obtained by combining the delta methods with the sandwich formula.} #' @return \item{S.est}{estimated factual survival function; \eqn{S(t)}.} #' @return \item{S.var}{estimated variance of \code{S.est}. The variance is obtained by the sandwich formula.} #' @return \item{S0.est}{estimated counterfactual survival function if exposure would be eliminated; \eqn{S_0(t)}{S0(t)}.} #' @return \item{S0.var}{estimated variance of \code{S0.est}. The variance is obtained by the sandwich formula.} #' @details \code{AFparfrailty} estimates the attributable fraction for a time-to-event outcome #' under the hypothetical scenario where a binary exposure \code{X} is eliminated from the population. #' The estimate is adjusted for confounders \code{Z} by the shared frailty model (\code{\link[stdReg]{parfrailty}}). #' The baseline hazard is assumed to follow a Weibull distribution and the unobserved shared frailty effects \code{U} are assumed to be gamma distributed. #' Let the AF function be defined as #' \deqn{AF=1-\frac{\{1-S_0(t)\}}{\{1-S(t)\}}}{AF = 1 - {1 - S0(t)} / {1 - S(t)}} #' where \eqn{S_0(t)}{S0(t)} denotes the counterfactual survival function for the event if #' the exposure would have been eliminated from the population at baseline and \eqn{S(t)} denotes the factual survival function. #' If \code{Z} and \code{U} are sufficient for confounding control, then \eqn{S_0(t)}{S0(t)} can be expressed as \eqn{E_Z\{S(t\mid{X=0,Z })\}}{E_z{S(t|X=0,Z)}}. #' The function uses a fitted Weibull gamma-frailty model to estimate \eqn{S(t\mid{X=0,Z})}{S(t|X=0,Z)}, and the marginal sample distribution of \code{Z} #' to approximate the outer expectation. A clustered sandwich formula is used in all variance calculations. 
#' @author Elisabeth Dahlqwist, Arvid \enc{Sjölander}{Sjolander} #' @seealso \code{\link[stdReg]{parfrailty}} used for fitting the Weibull gamma-frailty and \code{\link[stdReg]{stdParfrailty}} used for standardization of a \code{parfrailty} object. #' @examples #'# Example 1: clustered data with frailty U #' expit <- function(x) 1 / (1 + exp( - x)) #' n <- 100 #' m <- 2 #' alpha <- 1.5 #' eta <- 1 #' phi <- 0.5 #' beta <- 1 #' id <- rep(1:n,each=m) #' U <- rep(rgamma(n, shape = 1 / phi, scale = phi), each = m) #' Z <- rnorm(n * m) #' X <- rbinom(n * m, size = 1, prob = expit(Z)) #' # Reparametrize scale as in rweibull function #' weibull.scale <- alpha / (U * exp(beta * X)) ^ (1 / eta) #' t <- rweibull(n * m, shape = eta, scale = weibull.scale) #' #' # Right censoring #' c <- runif(n * m, 0, 10) #' delta <- as.numeric(t < c) #' t <- pmin(t, c) #' #' data <- data.frame(t, delta, X, Z, id) #' #' # Fit a parfrailty object #' library(stdReg) #' fit <- parfrailty(formula = Surv(t, delta) ~ X + Z + X * Z, data = data, clusterid = "id") #' summary(fit) #' #' # Estimate the attributable fraction from the fitted frailty model #' #' time <- c(seq(from = 0.2, to = 1, by = 0.2)) #' #' AFparfrailty_est <- AFparfrailty(object = fit, data = data, exposure = "X", #' times = time, clusterid = "id") #' summary(AFparfrailty_est) #' plot(AFparfrailty_est, CI = TRUE, ylim=c(0.1,0.7)) #' @import survival data.table stdReg #' @importFrom stats model.extract model.frame #' @export AFparfrailty <- function(object, data, exposure, times, clusterid){ call <- match.call() formula <- object$formula npar <- length(object$est) ## Delete rows with missing on variables in the model ## rownames(data) <- 1:nrow(data) m <- model.frame(formula, data = data) complete <- as.numeric(rownames(m)) data <- data[complete, ] ## Find names of outcome rr <- rownames(attr(terms(formula), "factors"))[1] temp <- gregexpr(", ", rr)[[1]] if(length(temp == 1)){ outcome <- substr(rr, temp[1] + 2, nchar(rr) - 1) } if(length(temp) == 2){ outcome <- substr(rr, temp[2] + 2, nchar(rr) - 1) } ## Define end variable and event variable Y <- model.extract(frame = m, "response") if(ncol(Y) == 2){ endvar <- Y[, 1] eventvar <- Y[, 2] } if(ncol(Y) == 3){ endvar <- Y[, 2] eventvar <- Y[, 3] } ## Defining parameters and variables## logalpha <- object$est[1] alpha <- exp(logalpha) logeta <- object$est[2] eta <- exp(logeta) logphi <- object$est[3] phi <- exp(logphi) beta <- object$est[(3 + 1):npar] # Assign value to t if missing if(missing(times)){ times <- endvar[eventvar == 1] } times <- sort(times) n <- nrow(data) n.cases <- sum(eventvar) n.cluster <- object$ncluster ## Counterfactual dataset ## data0 <- data data0[, exposure] <- 0 ## Design matrices ## design <- model.matrix(object = formula, data = data)[, -1, drop=FALSE] design0 <- model.matrix(object = formula, data = data0)[, -1, drop=FALSE] clusters <- data[, clusterid] ### Estimate the survival functions ### ########### order of beta has to be the same as order of design matrix, not fixed predX <- design %*% beta pred0 <- design0 %*% beta ## One point and variance estimate for each time t in times ## S.est <- vector(length = length(times)) S0.est <- vector(length = length(times)) AF.var <- vector(length = length(times)) S.var <- vector(length = length(times)) S0.var <- vector(length = length(times)) # Loop over all t in times for (i in 1:length(times)){ t <- times[i] H0t <- (t / alpha) ^ eta ### Survival functions temp <- 1 + phi * H0t * exp(predX) temp0 <- 1 + phi * H0t * exp(pred0) surv <- temp ^ 
( - 1 / phi) surv0 <- temp0 ^ ( - 1 / phi) S.est[i] <- mean(surv, na.rm = TRUE) S0.est[i] <- mean(surv0, na.rm = TRUE) ## Score functions sres <- surv - S.est[i] sres0 <- surv0 - S0.est[i] Scores.S <- cbind(sres, sres0) Scores.S <- aggr(x = Scores.S, clusters = clusters) coefres <- object$score res <- cbind(Scores.S, coefres) meat <- var(res, na.rm = TRUE) ### Hessian for the factual survival function dS.dlogalpha <- sum(H0t * eta * exp(predX) / temp ^ (1 / phi + 1)) / n.cluster dS.dlogeta <- sum(-H0t * exp(predX) * log(t / alpha) * eta / temp ^ (1 / phi + 1)) / n.cluster dS.dlogphi <- sum(log(temp) / (phi * temp ^ (1 / phi)) - H0t * exp(predX) / temp ^ (1 / phi + 1)) / n.cluster dS.dbeta <- colSums(-H0t * as.vector(exp(predX)) * design / as.vector(temp) ^ (1 / phi + 1)) / n.cluster ### Hessian for the counterfactual survival function dS0.dlogalpha <- sum(H0t * eta * exp(pred0) / temp0 ^ (1 / phi + 1)) / n.cluster dS0.dlogeta <- sum( - H0t * exp(pred0) * log(t / alpha) * eta / temp0 ^ (1 / phi + 1)) /n.cluster dS0.dlogphi <- sum(log(temp0) / (phi * temp0 ^ (1 / phi)) - H0t * exp(pred0) / temp0 ^ (1 / phi + 1)) / n.cluster dS0.dbeta <- colSums(-H0t * as.vector(exp(pred0)) * design0 / as.vector(temp0) ^ (1 / phi + 1)) / n.cluster #Note: the term n/n.cluster is because SI.logalpha, SI.logeta, SI.logphi, #and SI.beta are clustered, which they are not in stdCoxph S.hessian <- cbind(-diag(2) * n / n.cluster, rbind(dS.dlogalpha, dS0.dlogalpha), rbind(dS.dlogeta, dS0.dlogeta), rbind(dS.dlogphi, dS0.dlogphi), rbind(dS.dbeta, dS0.dbeta)) par.hessian <- cbind(matrix(0, nrow = npar, ncol = 2), -solve(object$vcov) / n.cluster) bread <- rbind(S.hessian, par.hessian) sandwich <- (solve(bread) %*% meat %*% t(solve(bread)) / n.cluster)[1:2, 1:2] #### Estimate of variance using the delta method #### gradient <- as.matrix(c( - (1 - S0.est[i]) / (1 - S.est[i]) ^ 2, 1 / (1 - S.est[i])), nrow = 2, ncol = 1) AF.var[i] <- t(gradient) %*% sandwich %*% gradient S.var[i] <- sandwich[1, 1] S0.var[i] <- sandwich[2, 2] } ### The AF function estimate ### AF.est <- 1 - (1 - S0.est) / (1 - S.est) out <- c(list(AF.est = AF.est, AF.var = AF.var, S.est = S.est, S0.est = S0.est, S.var = S.var, S0.var = S0.var, objectcall = object$call, call = call, exposure = exposure, outcome = outcome, object = object, sandwich = sandwich, gradient = gradient, formula = formula, n = n, n.cases = n.cases, n.cluster = n.cluster, times = times)) class(out) <- "AF" return(out) }
/scratch/gouwar.j/cran-all/cranData/AF/R/AFparfrailty.R
AFtsest <- function(object, data){
  call <- match.call()
  inputcall <- object$input$estmethod
  objectcall <- object$call
  # Warning if the object is not a tsest object
  if(!(as.character(inputcall) == "ts"))
    stop("The object is not a TS estimate", call. = FALSE)
  psi <- object$est[2]
  fitY <- object$input$fitY.LX
  fitX <- object$input$fitX.LZ
  link <- family(fitY)$link
  Y <- as.character(fitY$formula)[2]
  X <- as.character(fitX$formula)[2]
  ################
  nX <- length(coef(fitX))
  nY <- length(coef(fitY))
  n <- nrow(data)
  n.cases <- sum(data[, Y], na.rm = TRUE)
  ########### AF estimate #############
  y0 <- data[, Y] * as.vector(exp( -psi * data[, X]))
  P0.est <- mean(y0, na.rm = TRUE)
  P.est <- mean(data[, Y], na.rm = TRUE)
  AF.est <- 1 - P0.est / P.est
  ### Score equations
  ## S(y0)
  score.y0 <- y0 - P0.est
  ## S(y)
  score.y <- data[, Y] - P.est
  ## Score functions of all parameters (y, y0, psi, E(z))
  score <- cbind(score.y, score.y0, object$estfun)
  ### Meat for the sandwich estimator
  meat <- var(score, na.rm = TRUE)
  ### Hessian
  I.y <- c(-1, 0, rep(0, nY + nX))
  dy0.dpsi <- mean(data[, X] * y0, na.rm = TRUE)
  I.y0 <- c(0, -1, rep(0, nX + nY - 1), dy0.dpsi)
  ## Bread
  bread <- rbind(I.y, I.y0,
                 cbind(matrix(0, ncol = 2, nrow = nX + nY), object$d.estfun))
  ## Variance
  sandwich <- (solve(bread) %*% meat %*% t(solve(bread)) / n)[1:2, 1:2]
  gradient <- as.matrix(c(P0.est / P.est ^ 2, - 1 / P.est), nrow = 2, ncol = 1)
  AF.var <- t(gradient) %*% sandwich %*% gradient
  #### Output
  out <- list(AF.est = AF.est, AF.var = AF.var, link = link, objectcall = objectcall,
              call = call, inputcall = inputcall, exposure = X, outcome = Y, n = n,
              n.cases = n.cases, n.cluster = 0, psi = psi, nX = nX, fitY = fitY, fitX = fitX)
  class(out) <- "AF"
  return(out)
}
/scratch/gouwar.j/cran-all/cranData/AF/R/AFtsest.R
#' Birthweight data clustered on the mother.
#'
#' This dataset is borrowed from "An introduction to Stata for health researchers" (Juul and Frydenberg, 2010).
#' The dataset contains data on 189 mothers who have given birth to one or several children. In total, the dataset contains data on 487 births.
#'
#' @docType data
#' @name clslowbwt
#' @usage data(clslowbwt)
#' @format The dataset is structured so that each row corresponds to one birth/child. It contains the following variables:
#' \describe{
#'   \item{id}{the identification number of the mother.}
#'   \item{birth}{the number of the birth, i.e. "1" for the mother's first birth, "2" for the mother's second birth etc.}
#'   \item{smoke}{a categorical variable indicating if the mother is a smoker or not with levels "\code{0. No}" and "\code{1. Yes}".}
#'   \item{race}{the race of the mother with levels "\code{1. White}", "\code{2. Black}" or "\code{3. Other}".}
#'   \item{age}{the age of the mother at childbirth.}
#'   \item{lwt}{weight of the mother at the last menstrual period (in pounds).}
#'   \item{bwt}{birthweight of the newborn.}
#'   \item{low}{a categorical variable indicating if the newborn is categorized as a low birthweight baby (<2500 grams) or not with levels "\code{0. No}" and "\code{1. Yes}".}
#'   \item{smoker}{a numeric indicator if the mother is a smoker or not. Recoded version of the variable "\code{smoke}" where "\code{0.No}" is recoded as "0" and "\code{1.Yes}" is recoded as "1".}
#'   \item{lbw}{a numeric indicator of whether the newborn is categorized as a low birthweight baby (<2500 grams) or not. Recoded version of the variable "\code{low}" where "\code{0.No}" is recoded as "0" and "\code{1.Yes}" is recoded as "1".}
#' }
#'
#' \strong{The following changes have been made to the original data in Juul & Frydenberg (2010):}
#'
#' - The variable "\code{low}" is recoded into the numeric indicator variable "\code{lbw}":
#'
#' \code{clslowbwt$lbw <- as.numeric(clslowbwt$low == "1. Yes")}
#'
#' - The variable "\code{smoke}" is recoded into the numeric indicator variable "\code{smoker}":
#'
#' \code{clslowbwt$smoker <- as.numeric(clslowbwt$smoke == "1. Yes")}
#'
#' @references Juul, Svend & Frydenberg, Morten (2010). \emph{An introduction to Stata for health researchers}, Texas, Stata press, 2010 (Third edition).
#' @references \url{http://www.stata-press.com/data/ishr3.html}
#'
NULL
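# --------------------------------------------------------------------------
# Illustrative sketch (not part of the package source): one way the clslowbwt
# data could be analysed with AFglm() from this package, taking maternal smoking
# ("smoker") as the exposure and low birthweight ("lbw") as the outcome, with
# cluster-robust standard errors over the mother id. The confounder set used in
# the regression is an assumption made purely for illustration.
library(AF)
data(clslowbwt)
fit <- glm(lbw ~ smoker + race + age + lwt, family = binomial, data = clslowbwt)
AF_smoke <- AFglm(object = fit, data = clslowbwt, exposure = "smoker", clusterid = "id")
summary(AF_smoke)
# --------------------------------------------------------------------------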
/scratch/gouwar.j/cran-all/cranData/AF/R/data-clslowbwt.R
#' Cohort study on breast cancer patients from the Netherlands.
#'
#' This dataset is borrowed from
#' "Flexible parametric survival analysis using Stata: beyond the Cox model" (Royston and Lambert, 2011).
#' It contains follow-up data on 2982 women with breast cancer who have gone through breast surgery.
#' The women are followed from the time of surgery until death, relapse or censoring.
#'
#' @docType data
#' @name rott2
#' @usage data(rott2)
#' @format The dataset \code{rott2} contains the following variables:
#' \describe{
#'   \item{pid}{patient ID number.}
#'   \item{year}{year of breast surgery (i.e. year of enrollment into the study), between the years 1978-1993.}
#'   \item{rf}{relapse-free interval measured in months.}
#'   \item{rfi}{relapse indicator.}
#'   \item{m}{metastasis free.}
#'   \item{mfi}{metastasis status.}
#'   \item{os}{overall survival.}
#'   \item{osi}{overall survival indicator.}
#'   \item{age}{age at surgery measured in years.}
#'   \item{meno}{menopausal status with levels "\code{pre}" and "\code{post}".}
#'   \item{size}{tumor size in three classes: \code{<=20mm, >20-50mm} and \code{>50mm}.}
#'   \item{grade}{differentiation grade with levels 2 or 3.}
#'   \item{pr}{progesterone receptors, fmol/l.}
#'   \item{er}{oestrogen receptors, fmol/l.}
#'   \item{nodes}{the number of positive lymph nodes.}
#'   \item{hormon}{hormonal therapy with levels "\code{no}" and "\code{yes}".}
#'   \item{chemo}{categorical variable indicating whether the patient received chemotherapy or not, with levels "\code{no}" and "\code{yes}".}
#'   \item{recent}{an indicator of whether the tumor was discovered recently, with levels "\code{1978-87}" and "\code{1988-93}".}
#'   \item{no.chemo}{a numerical indicator of whether the patient did not receive chemotherapy. Recoded version of "\code{chemo}" where "\code{yes}" is recoded as 0 and "\code{no}" is recoded as 1.}
#' }
#'
#' \strong{The following changes have been made to the original data in Royston and Lambert (2011):}
#'
#' - The variable "\code{chemo}" is recoded into the numeric indicator variable "\code{no.chemo}":
#'
#' \code{rott2$no.chemo <- as.numeric(rott2$chemo == "no")}
#'
#' The following variables have been removed from the original dataset: \code{enodes, pr_1, enodes_1, _st, _d, _t, _t0}
#' since they are recodings of some existing variables which are not used in this analysis.
#' @references Royston, Patrick & Lambert, Paul. C (2011). \emph{Flexible parametric survival analysis using Stata: beyond the Cox model}. College Station, Texas, U.S, Stata press.
#' @references \url{http://www.stata-press.com/data/fpsaus.html}
NULL
/scratch/gouwar.j/cran-all/cranData/AF/R/data-rott2.R
#' Case-control study on oesophageal cancer in Chinese Singapore men.
#'
#' This dataset is borrowed from "Aetiological factors in oesophageal cancer in Singapore Chinese" by De Jong UW, Breslow N, Hong JG, Sridharan M, Shanmugaratnam K (1974).
#'
#' @docType data
#' @name singapore
#' @usage data(singapore)
#' @format The dataset contains the following variables:
#' \describe{
#'   \item{Age}{age of the patient.}
#'   \item{Dial}{dialect group where 1 represents "\code{Hokhien/Teochew}" and 0 represents "\code{Cantonese/Other}".}
#'   \item{Samsu}{a numeric indicator of whether the patient consumes Samsu wine or not.}
#'   \item{Cigs}{number of cigarettes smoked per day.}
#'   \item{Bev}{number of beverages drunk at "burning hot" temperatures, ranging from 0 to 3 different drinks per day.}
#'   \item{Everhotbev}{a numeric indicator of whether the patient ever drinks "burning hot" beverages or not. Recoded from the variable "\code{Bev}".}
#'   \item{Set}{matched set identification number.}
#'   \item{CC}{a numeric variable where 1 indicates that the patient is a case, 2 that the patient is a control from the same ward as the case and 3 that the patient is a control from an orthopedic hospital.}
#'   \item{Oesophagealcancer}{a numeric indicator variable of whether the patient is a case of oesophageal cancer or not.}
#' }
#'
#' \strong{The following changes have been made to the original data in De Jong UW (1974):}
#'
#' - The variable "\code{Bev}" is recoded into the numeric indicator variable "\code{Everhotbev}":
#'
#' \code{singapore$Everhotbev <- ifelse(singapore$Bev >= 1, 1, 0)}
#'
#' @references De Jong UW, Breslow N, Hong JG, Sridharan M, Shanmugaratnam K. (1974). Aetiological factors in oesophageal cancer in Singapore Chinese. \emph{Int J Cancer} Mar 15;13(3), 291-303.
#' @references \url{http://faculty.washington.edu/heagerty/Courses/b513/WEB2002/datasets.html}
NULL
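# --------------------------------------------------------------------------
# Illustrative sketch (not part of the package source): the singapore matched
# case-control data analysed with conditional logistic regression and AFclogit()
# (referenced in the AFglm documentation above). The argument names assumed for
# AFclogit() (object, data, exposure, clusterid) and the confounder choice are
# illustrative assumptions, not taken from this file.
library(AF)
library(survival)
data(singapore)
fit <- clogit(Oesophagealcancer ~ Everhotbev + Cigs + Samsu + strata(Set),
              data = singapore)
AF_hotbev <- AFclogit(fit, data = singapore, exposure = "Everhotbev",
                      clusterid = "Set")
summary(AF_hotbev)
# --------------------------------------------------------------------------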
/scratch/gouwar.j/cran-all/cranData/AF/R/data-singapore.R
#' @title Accelerated Functional Failure Time Model with Error-Contaminated Survival Times
#'
#' @description The package AFFECT, short for Accelerated Functional Failure time model with
#' Error-Contaminated survival Times, aims to recover the functional covariates under accelerated
#' functional failure time (AFT) models when the data are subject to an error-prone response and a
#' misclassified censoring status. This package primarily contains three functions. \code{data_gen}
#' generates artificial data based on accelerated functional failure time models, including
#' potential covariates, an error-prone response and a misclassified censoring status.
#' \code{ME_correction} corrects the error-prone response variable and the misclassified censoring
#' status, and \code{Boosting} recovers the functional covariates under accelerated functional
#' failure time models.
#'
#' @details This package aims to estimate functional covariates under AFT models with an error-prone
#' response and a misclassified censoring status. The strategy is to derive an unbiased estimating
#' function via the Buckley-James estimator, with the measurement error in the response and the
#' misclassification in the censoring status corrected. Finally, the functional forms of the
#' informative covariates under the AFT model are recovered by the boosting procedure.
#' @return No return value, called for side effects.
#' @export
AFFECT <- function(){}
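# --------------------------------------------------------------------------
# Illustrative end-to-end sketch (not part of the package source): the
# three-step workflow described above, mirroring the example given in the
# Boosting() documentation below. All tuning values (n, p, the
# misclassification probabilities pi_01/pi_10, the error variance and the
# number of boosting iterations) are illustrative choices only.
library(AFFECT)
b <- matrix(0, ncol = 6, nrow = 1)
b[1, 1] <- 1                              # true failure time depends on covariate 1
dat <- data_gen(n = 50, p = 6, pi_01 = 0.9, pi_10 = 0.9,
                gamma0 = 1, gamma1 = b, e_var = 0.75)
corrected <- ME_correction(pi_10 = 0.9, pi_01 = 0.9, gamma0 = 1, gamma1 = b,
                           cor_covar = diag(6), y = dat[, 1],
                           indicator = dat[, 2], covariate = dat[, 3:8])
corrected <- cbind(corrected, dat[, 3:8])
fit <- Boosting(data = corrected, iter = 2)
fit$covariates                            # covariates selected by the boosting steps
# --------------------------------------------------------------------------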
/scratch/gouwar.j/cran-all/cranData/AFFECT/R/AFTER.R
# Boosting step weight: regress the current working response r_star on the
# fitted base learner g_x by ordinary least squares and return the coefficients.
W <- function(r_star, g_x){
  a <- data.frame(r_star)
  b <- data.frame(g_x)
  data <- cbind(a, b)
  colnames(data) <- c("Y", "X")
  w <- lm(Y ~ X, data = data)
  return(w$coefficients)
}
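# --------------------------------------------------------------------------
# Illustrative note (not part of the package source): W() returns the
# least-squares coefficients of the working response on the base learner, which
# the boosting routine uses as a step weight. Note that Boosting() below
# re-defines its own W() with an intercept-free fit (Y ~ X + 0). A toy call with
# made-up numbers:
W(r_star = c(2.1, 3.8, 6.2, 7.9), g_x = c(1, 2, 3, 4))
# --------------------------------------------------------------------------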
/scratch/gouwar.j/cran-all/cranData/AFFECT/R/Boost_weight.R
#' @title Estimation of Functional Forms of Covaraites under AFT Models #' #' @description The function aims to select informative covariates under the AFT model and estimate their corresponding #' functional forms with survival time. Specifically, the first step in this function is to derive an unbiased #' estimating function by the Buckley-James method with corrected survival times and censoring status. After that, a #' boosting algorithm with the cubic-spline method is implemented to an unbiased estimating function to detect #' informative covariates and estimate the functional forms of covariates iteratively. #' #' @param data A \code{c(n,p+2)} dimension of data, where \code{n} is sample size and #' \code{p} is the number of covariates. The first column is survival time and second #' column is censoring status, and the other columns are covariates. #' @param iter The iteration times of the boosting procedure. The default value = 50 and the iteration will stop #' when the absolute value of increment of every estimated value is small than 0.01. #' #' @importFrom ggplot2 "geom_line" "theme_minimal" "labs" "aes" #' @importFrom ggplot2 "ggplot" #' @importFrom stats "lm" "smooth.spline" "fitted" #' #' @return covariates The first ten covariates that are selected in the iteration. #' @return functional_forms The functional forms of the first ten covariates that are selected #' in the iteration. #' @return predicted_failure_time The predicted failure time of every sample #' @return survival_curve Predicted survival curve of the sample. #' #' @examples #' ## generate data with misclassification = 0.9 with n = 50, p = 6 #' ## and variance of noise term is 0.75. The y* is is related to the first #' ## covariate. #' #' b <- matrix(0,ncol=6, nrow = 1) #' b[1,1] <- 1 #' data <- data_gen(n=50, p=6, pi_01=0.9, pi_10 = 0.9, gamma0=1, #' gamma1=b, e_var=0.75) #' #' ## Assume that covariates are independent and observed failure time is #' ## related to first covariate with weight equals 1. And the scalar #' ## in the classical additive measurement error model is 1 and #' ## Misclassifcation probability = 0.9. #' #' matrixb <- diag(6) #' gamma_0 <- 1 #' gamma_1 <- matrix(0,ncol=6, nrow =1) #' gamma_1[1,1] <- 1 #' data1 <- ME_correction(pi_10=0.9,pi_01=0.9,gamma0 = gamma_0, #' gamma1 = gamma_1, #' cor_covar=matrixb, y=data[,1], #' indicator=data[,2], covariate = data[,3:8]) #' data1 <- cbind(data1,data[,3:8]) #' #' ## Data in boosting procedure with iteration times =2 #' #' result <- Boosting(data=data1, iter=2) #' #' #' @export Boosting <- function(data, iter=50){ W <- function(r_star,g_x){ a <- data.frame(r_star); b <- data.frame(g_x) data <- cbind(a,b) colnames(data) <- c('Y',"X") w <- lm(Y~X+0,data = data) return(w$coefficients) } interval<- function(lower_bound,upper_bound){ interval_size = (upper_bound-lower_bound)/7 interval_points <- c(lower_bound) for(i in (1:7)){ interval_points <- c(interval_points,lower_bound+interval_size*i) } return(interval_points) } corrected_data<- data colnames(corrected_data)[1:2] <- c('Y','censoring_indicator') # number of sample n = dim(corrected_data)[1] # dimension p = dim(corrected_data)[2]-2 censoring_indicator <- corrected_data$censoring_indicator variable_catch <- c() y <- corrected_data$Y #step 0. 
sum_of_every_fitted_value <- rep(0,times=n) y_star <- corrected_data$Y r <- y - sum_of_every_fitted_value r_star <- y_star - sum_of_every_fitted_value df3 <- data.frame(corrected_data$censoring_indicator,r_star,r) colnames(df3)[1] <- "censoring_indicator" survival_probability <- c() for (i in c(1:dim(df3)[1])){ yy <- df3[df3$r<=df3$r[i],] u <- c(yy$r) data_producted <- 1 for (j in c(1:length(u))){ total_sum_of_denominator <- sum((df3$r>=u[j])*1) censoring_indicator_1<- df3[df3$censoring_indicator==1,] total_sum_of_numerator <- sum((censoring_indicator_1$r==u[j])*1) data_producted = data_producted * (1-total_sum_of_numerator/total_sum_of_denominator) } survival_probability[i] <- data_producted } survival_probability[which(survival_probability==0)] <- 0.000001 sum_riemann_for_every_n <- c() upper = max(r) for (i in c(1:length(df3$r))){ point <- interval(df3$r[i],upper) riemann_x <- c() data_producted <- 1 for (j in c(1:length(point))){ xx <- df3[df3$r<=point[j],] u <- c(xx$r) for (k in c(1:length(u))){ total_sum_of_denominator <- sum((df3$r>=u[k])*1) censoring_indicator_1<- df3[df3$censoring_indicator==1,] total_sum_of_numerator <- sum((censoring_indicator_1$r==u[k])*1) data_producted = data_producted * (1-total_sum_of_numerator/total_sum_of_denominator) } riemann_x[j]<- data_producted } riemann_minus <- c() for (l in c(1:length(riemann_x)-1)){ riemann_minus[l] <- riemann_x[l+1]-riemann_x[l] } sum_integrate <-c(0) for (m in c(1:length(riemann_minus))){ sum_integrate <- sum_integrate + point[m+1] * riemann_minus[m] } sum_riemann_for_every_n[i] <- sum_integrate } y_star_update <- c() for (i in c(1:length(y))){ y_star_update[i] <- censoring_indicator[i] * y[i]+ (1-censoring_indicator[i])*(sum_of_every_fitted_value[i] - sum_riemann_for_every_n[i]/survival_probability[i]) } y_star <- y_star_update add_g_every_time <- data.frame() df <- as.data.frame(matrix(numeric(0),ncol = p, nrow = n)) df[is.na(df)] <- 0 for (i in c(1:p)){ colnames(df)[i] <- i } iterations = 0 stop_times = iter while (TRUE) { r_star <- y_star - sum_of_every_fitted_value residual<- c(0) add_f <- c() times <- c(0) variable <- c() for (i in c(1:p)){ times = times + 1 f_variables <- smooth.spline(x=corrected_data[,i+2],y=r_star,cv=FALSE,all.knots=c(0,0.2,0.4,0.6,0.8,1)) if (residual==0){ add_f <- f_variables residual <- f_variables$pen.crit variable <- times } else if(f_variables$pen.crit < residual){ add_f <- f_variables variable <- times residual <- f_variables$pen.crit } } variable_catch <- c(variable_catch,variable) fit = fitted(add_f) fit[which(is.na(fit))]=0 w <- W(r_star,fit) add_g <- as.data.frame(w * fit, ncol = 1) stop_value <- 1e-2 number_of_added_value <- sum((abs(add_g) < stop_value)*1) if (iterations == stop_times){ break } if (number_of_added_value != n && iterations!= stop_times){ df[as.character(variable)] <- df[,variable] + add_g sum_of_every_fitted_value <- sum_of_every_fitted_value + w * fit r_star = y_star - sum_of_every_fitted_value r <- y - sum_of_every_fitted_value df1 <- data.frame(corrected_data$censoring_indicator,r_star,r) colnames(df1)[1] <- "censoring_indicator" survival_probability <- c() for (i in c(1:dim(df1)[1])){ yy <- df1[df1$r<=df1$r[i],] u <- c(yy$r) data_producted <- 1 for (j in c(1:length(u))){ total_sum_of_denominator <- sum((df1$r>=u[j])*1) censoring_indicator_1<- df1[df1$censoring_indicator==1,] total_sum_of_numerator <- sum((censoring_indicator_1$r==u[j])*1) data_producted = data_producted * (1-total_sum_of_numerator/total_sum_of_denominator) } survival_probability[i] <- data_producted } 
survival_probability[which(survival_probability==0)] <- 0.000001 sum_riemann_for_every_n <- c() upper = max(r) for (i in c(1:length(df1$r))){ point <- interval(df1$r[i],upper) riemann_x <- c() data_producted <- 1 for (j in c(1:length(point))){ xx <- df1[df1$r<=point[j],] u <- c(xx$r) for (k in c(1:length(u))){ total_sum_of_denominator <- sum((df1$r>=u[k])*1) total_sum_of_denominator censoring_indicator_1<- df1[df1$censoring_indicator==1,] total_sum_of_numerator <- sum((censoring_indicator_1$r==u[k])*1) data_producted = data_producted * (1-total_sum_of_numerator/total_sum_of_denominator) } riemann_x[j]<- data_producted } riemann_minus <- c() for (l in c(1:length(riemann_x)-1)){ riemann_minus[l] <- riemann_x[l+1]-riemann_x[l] } sum_integrate <-c(0) for (m in c(1:length(riemann_minus))){ sum_integrate <- sum_integrate + point[m+1] * riemann_minus[m] } sum_riemann_for_every_n[i] <- sum_integrate } y_star_update <- c() for (i in c(1:length(y))){ y_star_update[i] <- censoring_indicator[i] * y[i]+ (1-censoring_indicator[i])*(sum_of_every_fitted_value[i] - sum_riemann_for_every_n[i]/survival_probability[i]) } y_star <- y_star_update iterations = iterations + 1 }else{ break } } survival_data<- df predict_failure_time <- apply(survival_data,1,sum) sur_time <-apply(survival_data,1,sum) sur_time <-exp(sur_time) sur_time <-sort(sur_time) sur_time <-data.frame(sur_time) sur_time pro <- seq(1,dim(sur_time)[1])/dim(sur_time)[1] pro <- sort(pro, decreasing= TRUE) pro <- data.frame(pro) # survival probability ddf1 <- cbind(sur_time, pro) survival_curve <- ggplot(ddf1, aes(x=sur_time,y=pro))+geom_line()+ theme_minimal(14)+ labs(x="time", y="survival probability", title='survival curve') variable_catch <- variable_catch[1:10] variable_catch <- as.numeric(names(table(variable_catch))) variable_names <- c() for (i in c(1:length(variable_catch))){ variable_names[i] <- colnames(corrected_data)[variable_catch[i]+2] } pictures <- list() for (i in c(1:length(variable_catch))){ temp <- as.data.frame(cbind(corrected_data[,variable_catch[i]+2],df[,variable_catch[i]])) colnames(temp) <- c('x','y') temp <- temp [order(temp$x),] pic <- ggplot(temp, aes(x=x,y=y))+geom_line()+ theme_minimal(14)+ labs(x="x", y="f", title=colnames(corrected_data)[variable_catch[i]+2]) pictures[[i]] <- pic } names(pictures) <- variable_names results <-list(predict_failure_time=predict_failure_time, covariates = variable_names, function_forms = pictures,survival_curve=survival_curve) x = c() return(results) }
/scratch/gouwar.j/cran-all/cranData/AFFECT/R/boosting.R