Appendix
========
Texts
-----
### Donald Barthelme
> “I have to admit we are mired in the most exquisite mysterious muck.
> This muck heaves and palpitates. It is multi-directional and has a mayor.”
> “You may not be interested in absurdity, but absurdity is interested in you.”
#### The First Thing the Baby Did Wrong
This short story is essentially a how-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
### Raymond Carver
> “It ought to make us feel ashamed when we talk like we know what we’re talking about when we talk about love.”
> “That’s all we have, finally, the words, and they had better be the right ones.”
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, i.e. the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
### Billy Dee Shakespeare
> “It works every time.”
These old works have pretty much no relevance today, and are mostly forgotten by everyone except humanities faculty. The analysis of them depicted in this document is pretty much definitive, and leaves little else to say regarding them, so don’t bother reading them if you haven’t already.
R
-
Until just a couple of years ago, R was *terrible* at text. You really only had base R for basic processing and a couple of packages that were not straightforward to use, and there was little for scraping the web. Nowadays, I would say it’s probably easier to deal with text in R than it is elsewhere, including Python. Packages like rvest, stringr/stringi, tidytext, and others make it almost easy enough to jump right in.
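As a rough sketch of how those pieces fit together, the following shows a minimal scrape-and-tokenize workflow. The URL is just a placeholder and the particular cleaning steps are illustrative, not something used elsewhere in this document.
```
library(rvest)     # web scraping
library(stringr)   # string manipulation
library(dplyr)     # data manipulation
library(tidytext)  # tidy tokenization and stop word lists

# grab the paragraph text from a web page (placeholder URL)
raw_text = read_html('https://example.com/some-article') %>%
  html_nodes('p') %>%
  html_text()

# tokenize, drop stop words, and count word frequencies
tibble(text = str_squish(raw_text)) %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = 'word') %>%
  count(word, sort = TRUE)
```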
One can peruse the Natural Language Processing task view to start getting a sense of what all is available in R.
[NLP task view](https://www.r-pkg.org/ctv/NaturalLanguageProcessing)
The one drawback with R is that much of the text processing is slow and/or memory intensive. The Shakespeare texts are only a few dozen, not very long, works, and yet a basic LDA might still take a minute or so. Many text analysis situations involve thousands to millions of texts, such that the corpus itself may be too much to hold in memory, and thus R, at least on a standard computing device or with the usual methods, might not be viable for your needs.
Python
------
While R has done a lot to catch up, more advanced text analysis techniques are developed in Python (if not lower-level languages), and so the state of the art may be found there. Furthermore, much of text analysis is a high-volume affair, and such work will likely be done more efficiently in the Python environment, though one might still need a high-performance computing environment. Here are some of the popular modules in Python.
* nltk
* textblob (the tidytext for Python)
* gensim (topic modeling)
* spaCy
A Faster LDA
------------
We noted in the Shakespeare start-to-finish example that there are faster alternatives to the standard LDA in topicmodels. In particular, the powerful text2vec package contains a faster and less memory-intensive implementation of LDA, and of text processing generally, both of which are very important if you want to use R for text analysis. The other nice thing is that it works with LDAvis for visualization.
For the following, we’ll use one of the partially cleaned document-term matrices for the Shakespeare texts. One thing to get used to is that text2vec uses the newer R6 class system, hence the `$` approach you see for calling specific methods.
```
library(text2vec)

load('data/shakes_dtm_stemmed.RData')
# load('data/shakes_words_df.RData')  # non-stemmed

# convert to the sparse matrix representation using Matrix package
shakes_dtm = as(shakes_dtm, 'CsparseMatrix')

# setup the model
lda_model = LDA$new(n_topics = 10, doc_topic_prior = 0.1, topic_word_prior = 0.01)

# fit the model
doc_topic_distr = lda_model$fit_transform(x = shakes_dtm,
                                          n_iter = 1000,
                                          convergence_tol = 0.0001,
                                          n_check_convergence = 25,
                                          progressbar = FALSE)
```
```
INFO [2018-03-06 19:16:15] iter 25 loglikelihood = -1746173.024
INFO [2018-03-06 19:16:16] iter 50 loglikelihood = -1683541.903
INFO [2018-03-06 19:16:17] iter 75 loglikelihood = -1660985.396
INFO [2018-03-06 19:16:17] iter 100 loglikelihood = -1648984.411
INFO [2018-03-06 19:16:18] iter 125 loglikelihood = -1641481.467
INFO [2018-03-06 19:16:19] iter 150 loglikelihood = -1638983.461
INFO [2018-03-06 19:16:20] iter 175 loglikelihood = -1636730.733
INFO [2018-03-06 19:16:20] iter 200 loglikelihood = -1636356.883
INFO [2018-03-06 19:16:21] iter 225 loglikelihood = -1636487.222
INFO [2018-03-06 19:16:21] early stopping at 225 iteration
```
```
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 1)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "prai" "hear" "ey" "love" "word" "natur" "night" "god" "friend" "death"
[2,] "honor" "madam" "sweet" "dai" "letter" "fortun" "fear" "dai" "hand" "grace"
[3,] "heaven" "bring" "fair" "true" "hous" "world" "ear" "england" "nobl" "soul"
[4,] "life" "sea" "heart" "wit" "prai" "power" "sleep" "crown" "word" "live"
[5,] "matter" "bear" "light" "fair" "sweet" "poor" "death" "war" "stand" "blood"
[6,] "honest" "seek" "desir" "live" "husband" "set" "dead" "arm" "rome" "life"
[7,] "fellow" "heard" "beauti" "youth" "woman" "nobl" "bid" "majesti" "honor" "dai"
[8,] "hear" "lose" "black" "heart" "reason" "truth" "bed" "fight" "leav" "hope"
[9,] "heart" "strang" "kiss" "marri" "hand" "leav" "mad" "sword" "deed" "heaven"
[10,] "friend" "sister" "sun" "night" "talk" "command" "hand" "heart" "tear" "die"
```
```
which.max(doc_topic_distr['Hamlet', ])
```
```
[1] 7
```
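Since `doc_topic_distr` is simply a documents-by-topics matrix with the plays as rows, the same idea extends to every text at once. The following is a small aside, not part of the original output, using the objects created above.
```
# dominant topic for each play
dominant_topic = apply(doc_topic_distr, 1, which.max)

# how the plays are distributed across the 10 topics
table(dominant_topic)
```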
```
# top words can also be sorted by "relevance", which takes into account
# the frequency of a word in the corpus (0 < lambda < 1)
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 0.2)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "honest" "madam" "ey" "love" "letter" "natur" "ear" "england" "rome" "bloodi"
[2,] "beseech" "sea" "cheek" "youth" "merri" "report" "sleep" "majesti" "deed" "royal"
[3,] "knave" "water" "black" "wit" "woo" "spirit" "beat" "field" "banish" "graciou"
[4,] "warrant" "sister" "wretch" "signior" "jest" "judgment" "night" "uncl" "countri" "high"
[5,] "glad" "women" "flower" "count" "finger" "worst" "air" "march" "citi" "subject"
[6,] "action" "hair" "sweet" "lover" "choos" "author" "soft" "lieg" "son" "sovereign"
[7,] "worship" "lose" "vow" "danc" "ring" "qualiti" "knock" "fight" "rise" "foe"
[8,] "matter" "entreat" "mortal" "song" "horn" "virgin" "poison" "battl" "kneel" "flourish"
[9,] "fellow" "seek" "wing" "paint" "bond" "wine" "shake" "harri" "fly" "king"
[10,] "walk" "passion" "short" "wed" "troth" "direct" "move" "crown" "wert" "tide"
```
```
# ldavis not shown
# lda_model$plot()
```
Given that most text analysis modeling can be very time consuming, consider any approach that might give you more efficiency.
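If you want to verify that a faster implementation actually pays off for your own corpus, timing the fit directly is a simple check. Below is a minimal sketch using base R’s `system.time()`, assuming the same objects as above; timings will naturally depend on your data and hardware.
```
# rough wall-clock timing of the text2vec fit; rerun with your own corpus
system.time(
  lda_model$fit_transform(x = shakes_dtm,
                          n_iter = 1000,
                          convergence_tol = 0.0001,
                          n_check_convergence = 25,
                          progressbar = FALSE)
)
```
Treat any single run as a rough guide rather than a benchmark.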
Texts
-----
### Donald Barthelme
[
> “I have to admit we are mired in the most exquisite mysterious muck.
> This muck heaves and palpitates. It is multi\-directional and has a mayor.”
> “You may not be interested in absurdity, but absurdity is interested in you.”
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
### Raymond Carver
[
> “It ought to make us feel ashamed when we talk like we know what we’re talking about when we talk about love.”
> “That’s all we have, finally, the words, and they had better be the right ones.”
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
### Billy Dee Shakespeare
> “It works every time.”
These old works have pretty much no relevance today, and are mostly forgotten by everyone except humanities faculty. The analysis of them depicted in this document is pretty much definitive, and leaves little else to say regarding them, so don’t bother reading them if you haven’t already.
### Donald Barthelme
[
> “I have to admit we are mired in the most exquisite mysterious muck.
> This muck heaves and palpitates. It is multi\-directional and has a mayor.”
> “You may not be interested in absurdity, but absurdity is interested in you.”
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
### Raymond Carver
[
> “It ought to make us feel ashamed when we talk like we know what we’re talking about when we talk about love.”
> “That’s all we have, finally, the words, and they had better be the right ones.”
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
### Billy Dee Shakespeare
> “It works every time.”
These old works have pretty much no relevance today, and are mostly forgotten by everyone except humanities faculty. The analysis of them depicted in this document is pretty much definitive, and leaves little else to say regarding them, so don’t bother reading them if you haven’t already.
R
-
Up until even a couple years ago, R was *terrible* at text. You really only had base R for basic processing and a couple packages that were not straightforward to use. There was little for scraping the web. Nowadays, I would say it’s probably easier to deal with text in R than it is elsewhere, including Python. Packages like rvest, stringr/stringi, and tidytext and more make it almost easy enough to jump right in.
One can peruse the Natural Language Processing task view to start getting a sense of what all is available in R.
[NLP task view](https://www.r-pkg.org/ctv/NaturalLanguageProcessing)
The one drawback with R is that most of the dealing with text is slow and/or memory intensive. The Shakespeare texts are only a few dozen and not very long works, and yet your basic LDA might still take a minute or so. Most text analysis situations might have thousands to millions of texts, such that the corpus itself may be too much to hold in memory, and thus R, at least on a standard computing device or with the usual methods, might not be viable for your needs.
Python
------
While R has done a lot to catch up, more advanced text analysis techniques are developed in Python (if not lower level languages), and so the state of the art may be found there. Furthermore, much of text analysis is a high volume affair, and that means it will likely be done much more efficiently in the Python environment if so, though one still might need a high performance computing environment. Here are some of the popular modules in Python.
* nltk
* textblob (the tidytext for Python)
* gensim (topic modeling)
* spaCy
A Faster LDA
------------
We noted in the Shakespeare start to finish example that there are faster alternatives than the standard LDA in topicmodels. In particular, the powerful text2vec package contains a faster and less memory intensive implementation of LDA and dealing with text generally. Both of which are very important if you’re wanting to use R for text analysis. The other nice thing is that it works with LDAvis for visualization.
For the following, we’ll use one of the partially cleaned document term matrix for the Shakespeare texts. One of the things to get used to is that text2vec uses the newer R6 classes of R objects, hence the `$` approach you see to using specific methods.
```
library(text2vec)
load('data/shakes_dtm_stemmed.RData')
# load('data/shakes_words_df.RData') # non-stemmed
# convert to the sparse matrix representation using Matrix package
shakes_dtm = as(shakes_dtm, 'CsparseMatrix')
# setup the model
lda_model = LDA$new(n_topics = 10, doc_topic_prior = 0.1, topic_word_prior = 0.01)
# fit the model
doc_topic_distr = lda_model$fit_transform(x = shakes_dtm,
n_iter = 1000,
convergence_tol = 0.0001,
n_check_convergence = 25,
progressbar = FALSE)
```
```
INFO [2018-03-06 19:16:15] iter 25 loglikelihood = -1746173.024
INFO [2018-03-06 19:16:16] iter 50 loglikelihood = -1683541.903
INFO [2018-03-06 19:16:17] iter 75 loglikelihood = -1660985.396
INFO [2018-03-06 19:16:17] iter 100 loglikelihood = -1648984.411
INFO [2018-03-06 19:16:18] iter 125 loglikelihood = -1641481.467
INFO [2018-03-06 19:16:19] iter 150 loglikelihood = -1638983.461
INFO [2018-03-06 19:16:20] iter 175 loglikelihood = -1636730.733
INFO [2018-03-06 19:16:20] iter 200 loglikelihood = -1636356.883
INFO [2018-03-06 19:16:21] iter 225 loglikelihood = -1636487.222
INFO [2018-03-06 19:16:21] early stopping at 225 iteration
```
```
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 1)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "prai" "hear" "ey" "love" "word" "natur" "night" "god" "friend" "death"
[2,] "honor" "madam" "sweet" "dai" "letter" "fortun" "fear" "dai" "hand" "grace"
[3,] "heaven" "bring" "fair" "true" "hous" "world" "ear" "england" "nobl" "soul"
[4,] "life" "sea" "heart" "wit" "prai" "power" "sleep" "crown" "word" "live"
[5,] "matter" "bear" "light" "fair" "sweet" "poor" "death" "war" "stand" "blood"
[6,] "honest" "seek" "desir" "live" "husband" "set" "dead" "arm" "rome" "life"
[7,] "fellow" "heard" "beauti" "youth" "woman" "nobl" "bid" "majesti" "honor" "dai"
[8,] "hear" "lose" "black" "heart" "reason" "truth" "bed" "fight" "leav" "hope"
[9,] "heart" "strang" "kiss" "marri" "hand" "leav" "mad" "sword" "deed" "heaven"
[10,] "friend" "sister" "sun" "night" "talk" "command" "hand" "heart" "tear" "die"
```
```
which.max(doc_topic_distr['Hamlet', ])
```
```
[1] 7
```
```
# top-words could be sorted by “relevance” which also takes into account
# frequency of word in the corpus (0 < lambda < 1)
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 0.2)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "honest" "madam" "ey" "love" "letter" "natur" "ear" "england" "rome" "bloodi"
[2,] "beseech" "sea" "cheek" "youth" "merri" "report" "sleep" "majesti" "deed" "royal"
[3,] "knave" "water" "black" "wit" "woo" "spirit" "beat" "field" "banish" "graciou"
[4,] "warrant" "sister" "wretch" "signior" "jest" "judgment" "night" "uncl" "countri" "high"
[5,] "glad" "women" "flower" "count" "finger" "worst" "air" "march" "citi" "subject"
[6,] "action" "hair" "sweet" "lover" "choos" "author" "soft" "lieg" "son" "sovereign"
[7,] "worship" "lose" "vow" "danc" "ring" "qualiti" "knock" "fight" "rise" "foe"
[8,] "matter" "entreat" "mortal" "song" "horn" "virgin" "poison" "battl" "kneel" "flourish"
[9,] "fellow" "seek" "wing" "paint" "bond" "wine" "shake" "harri" "fly" "king"
[10,] "walk" "passion" "short" "wed" "troth" "direct" "move" "crown" "wert" "tide"
```
```
# ldavis not shown
# lda_model$plot()
```
Given that most text analysis can be very time consuming for a model, consider any approach that might give you more efficiency.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/appendix.html |
Appendix
========
Texts
-----
### Donald Barthelme
[
> “I have to admit we are mired in the most exquisite mysterious muck.
> This muck heaves and palpitates. It is multi\-directional and has a mayor.”
> “You may not be interested in absurdity, but absurdity is interested in you.”
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
### Raymond Carver
[
> “It ought to make us feel ashamed when we talk like we know what we’re talking about when we talk about love.”
> “That’s all we have, finally, the words, and they had better be the right ones.”
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
### Billy Dee Shakespeare
> “It works every time.”
These old works have pretty much no relevance today, and are mostly forgotten by everyone except humanities faculty. The analysis of them depicted in this document is pretty much definitive, and leaves little else to say regarding them, so don’t bother reading them if you haven’t already.
R
-
Up until even a couple years ago, R was *terrible* at text. You really only had base R for basic processing and a couple packages that were not straightforward to use. There was little for scraping the web. Nowadays, I would say it’s probably easier to deal with text in R than it is elsewhere, including Python. Packages like rvest, stringr/stringi, and tidytext and more make it almost easy enough to jump right in.
One can peruse the Natural Language Processing task view to start getting a sense of what all is available in R.
[NLP task view](https://www.r-pkg.org/ctv/NaturalLanguageProcessing)
The one drawback with R is that most of the dealing with text is slow and/or memory intensive. The Shakespeare texts are only a few dozen and not very long works, and yet your basic LDA might still take a minute or so. Most text analysis situations might have thousands to millions of texts, such that the corpus itself may be too much to hold in memory, and thus R, at least on a standard computing device or with the usual methods, might not be viable for your needs.
Python
------
While R has done a lot to catch up, more advanced text analysis techniques are developed in Python (if not lower level languages), and so the state of the art may be found there. Furthermore, much of text analysis is a high volume affair, and that means it will likely be done much more efficiently in the Python environment if so, though one still might need a high performance computing environment. Here are some of the popular modules in Python.
* nltk
* textblob (the tidytext for Python)
* gensim (topic modeling)
* spaCy
A Faster LDA
------------
We noted in the Shakespeare start to finish example that there are faster alternatives than the standard LDA in topicmodels. In particular, the powerful text2vec package contains a faster and less memory intensive implementation of LDA and dealing with text generally. Both of which are very important if you’re wanting to use R for text analysis. The other nice thing is that it works with LDAvis for visualization.
For the following, we’ll use one of the partially cleaned document term matrix for the Shakespeare texts. One of the things to get used to is that text2vec uses the newer R6 classes of R objects, hence the `$` approach you see to using specific methods.
```
library(text2vec)
load('data/shakes_dtm_stemmed.RData')
# load('data/shakes_words_df.RData') # non-stemmed
# convert to the sparse matrix representation using Matrix package
shakes_dtm = as(shakes_dtm, 'CsparseMatrix')
# setup the model
lda_model = LDA$new(n_topics = 10, doc_topic_prior = 0.1, topic_word_prior = 0.01)
# fit the model
doc_topic_distr = lda_model$fit_transform(x = shakes_dtm,
n_iter = 1000,
convergence_tol = 0.0001,
n_check_convergence = 25,
progressbar = FALSE)
```
```
INFO [2018-03-06 19:16:15] iter 25 loglikelihood = -1746173.024
INFO [2018-03-06 19:16:16] iter 50 loglikelihood = -1683541.903
INFO [2018-03-06 19:16:17] iter 75 loglikelihood = -1660985.396
INFO [2018-03-06 19:16:17] iter 100 loglikelihood = -1648984.411
INFO [2018-03-06 19:16:18] iter 125 loglikelihood = -1641481.467
INFO [2018-03-06 19:16:19] iter 150 loglikelihood = -1638983.461
INFO [2018-03-06 19:16:20] iter 175 loglikelihood = -1636730.733
INFO [2018-03-06 19:16:20] iter 200 loglikelihood = -1636356.883
INFO [2018-03-06 19:16:21] iter 225 loglikelihood = -1636487.222
INFO [2018-03-06 19:16:21] early stopping at 225 iteration
```
```
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 1)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "prai" "hear" "ey" "love" "word" "natur" "night" "god" "friend" "death"
[2,] "honor" "madam" "sweet" "dai" "letter" "fortun" "fear" "dai" "hand" "grace"
[3,] "heaven" "bring" "fair" "true" "hous" "world" "ear" "england" "nobl" "soul"
[4,] "life" "sea" "heart" "wit" "prai" "power" "sleep" "crown" "word" "live"
[5,] "matter" "bear" "light" "fair" "sweet" "poor" "death" "war" "stand" "blood"
[6,] "honest" "seek" "desir" "live" "husband" "set" "dead" "arm" "rome" "life"
[7,] "fellow" "heard" "beauti" "youth" "woman" "nobl" "bid" "majesti" "honor" "dai"
[8,] "hear" "lose" "black" "heart" "reason" "truth" "bed" "fight" "leav" "hope"
[9,] "heart" "strang" "kiss" "marri" "hand" "leav" "mad" "sword" "deed" "heaven"
[10,] "friend" "sister" "sun" "night" "talk" "command" "hand" "heart" "tear" "die"
```
```
which.max(doc_topic_distr['Hamlet', ])
```
```
[1] 7
```
```
# top-words could be sorted by “relevance” which also takes into account
# frequency of word in the corpus (0 < lambda < 1)
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 0.2)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "honest" "madam" "ey" "love" "letter" "natur" "ear" "england" "rome" "bloodi"
[2,] "beseech" "sea" "cheek" "youth" "merri" "report" "sleep" "majesti" "deed" "royal"
[3,] "knave" "water" "black" "wit" "woo" "spirit" "beat" "field" "banish" "graciou"
[4,] "warrant" "sister" "wretch" "signior" "jest" "judgment" "night" "uncl" "countri" "high"
[5,] "glad" "women" "flower" "count" "finger" "worst" "air" "march" "citi" "subject"
[6,] "action" "hair" "sweet" "lover" "choos" "author" "soft" "lieg" "son" "sovereign"
[7,] "worship" "lose" "vow" "danc" "ring" "qualiti" "knock" "fight" "rise" "foe"
[8,] "matter" "entreat" "mortal" "song" "horn" "virgin" "poison" "battl" "kneel" "flourish"
[9,] "fellow" "seek" "wing" "paint" "bond" "wine" "shake" "harri" "fly" "king"
[10,] "walk" "passion" "short" "wed" "troth" "direct" "move" "crown" "wert" "tide"
```
```
# ldavis not shown
# lda_model$plot()
```
Given that most text analysis can be very time consuming for a model, consider any approach that might give you more efficiency.
Texts
-----
### Donald Barthelme
[
> “I have to admit we are mired in the most exquisite mysterious muck.
> This muck heaves and palpitates. It is multi\-directional and has a mayor.”
> “You may not be interested in absurdity, but absurdity is interested in you.”
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
### Raymond Carver
[
> “It ought to make us feel ashamed when we talk like we know what we’re talking about when we talk about love.”
> “That’s all we have, finally, the words, and they had better be the right ones.”
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
### Billy Dee Shakespeare
> “It works every time.”
These old works have pretty much no relevance today, and are mostly forgotten by everyone except humanities faculty. The analysis of them depicted in this document is pretty much definitive, and leaves little else to say regarding them, so don’t bother reading them if you haven’t already.
### Donald Barthelme
[
> “I have to admit we are mired in the most exquisite mysterious muck.
> This muck heaves and palpitates. It is multi\-directional and has a mayor.”
> “You may not be interested in absurdity, but absurdity is interested in you.”
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
### Raymond Carver
[
> “It ought to make us feel ashamed when we talk like we know what we’re talking about when we talk about love.”
> “That’s all we have, finally, the words, and they had better be the right ones.”
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
### Billy Dee Shakespeare
> “It works every time.”
These old works have pretty much no relevance today, and are mostly forgotten by everyone except humanities faculty. The analysis of them depicted in this document is pretty much definitive, and leaves little else to say regarding them, so don’t bother reading them if you haven’t already.
R
-
Up until even a couple years ago, R was *terrible* at text. You really only had base R for basic processing and a couple packages that were not straightforward to use. There was little for scraping the web. Nowadays, I would say it’s probably easier to deal with text in R than it is elsewhere, including Python. Packages like rvest, stringr/stringi, and tidytext and more make it almost easy enough to jump right in.
One can peruse the Natural Language Processing task view to start getting a sense of what all is available in R.
[NLP task view](https://www.r-pkg.org/ctv/NaturalLanguageProcessing)
The one drawback with R is that most of the dealing with text is slow and/or memory intensive. The Shakespeare texts are only a few dozen and not very long works, and yet your basic LDA might still take a minute or so. Most text analysis situations might have thousands to millions of texts, such that the corpus itself may be too much to hold in memory, and thus R, at least on a standard computing device or with the usual methods, might not be viable for your needs.
Python
------
While R has done a lot to catch up, more advanced text analysis techniques are developed in Python (if not lower level languages), and so the state of the art may be found there. Furthermore, much of text analysis is a high volume affair, and that means it will likely be done much more efficiently in the Python environment if so, though one still might need a high performance computing environment. Here are some of the popular modules in Python.
* nltk
* textblob (the tidytext for Python)
* gensim (topic modeling)
* spaCy
A Faster LDA
------------
We noted in the Shakespeare start to finish example that there are faster alternatives than the standard LDA in topicmodels. In particular, the powerful text2vec package contains a faster and less memory intensive implementation of LDA and dealing with text generally. Both of which are very important if you’re wanting to use R for text analysis. The other nice thing is that it works with LDAvis for visualization.
For the following, we’ll use one of the partially cleaned document term matrix for the Shakespeare texts. One of the things to get used to is that text2vec uses the newer R6 classes of R objects, hence the `$` approach you see to using specific methods.
```
library(text2vec)
load('data/shakes_dtm_stemmed.RData')
# load('data/shakes_words_df.RData') # non-stemmed
# convert to the sparse matrix representation using Matrix package
shakes_dtm = as(shakes_dtm, 'CsparseMatrix')
# setup the model
lda_model = LDA$new(n_topics = 10, doc_topic_prior = 0.1, topic_word_prior = 0.01)
# fit the model
doc_topic_distr = lda_model$fit_transform(x = shakes_dtm,
n_iter = 1000,
convergence_tol = 0.0001,
n_check_convergence = 25,
progressbar = FALSE)
```
```
INFO [2018-03-06 19:16:15] iter 25 loglikelihood = -1746173.024
INFO [2018-03-06 19:16:16] iter 50 loglikelihood = -1683541.903
INFO [2018-03-06 19:16:17] iter 75 loglikelihood = -1660985.396
INFO [2018-03-06 19:16:17] iter 100 loglikelihood = -1648984.411
INFO [2018-03-06 19:16:18] iter 125 loglikelihood = -1641481.467
INFO [2018-03-06 19:16:19] iter 150 loglikelihood = -1638983.461
INFO [2018-03-06 19:16:20] iter 175 loglikelihood = -1636730.733
INFO [2018-03-06 19:16:20] iter 200 loglikelihood = -1636356.883
INFO [2018-03-06 19:16:21] iter 225 loglikelihood = -1636487.222
INFO [2018-03-06 19:16:21] early stopping at 225 iteration
```
```
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 1)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "prai" "hear" "ey" "love" "word" "natur" "night" "god" "friend" "death"
[2,] "honor" "madam" "sweet" "dai" "letter" "fortun" "fear" "dai" "hand" "grace"
[3,] "heaven" "bring" "fair" "true" "hous" "world" "ear" "england" "nobl" "soul"
[4,] "life" "sea" "heart" "wit" "prai" "power" "sleep" "crown" "word" "live"
[5,] "matter" "bear" "light" "fair" "sweet" "poor" "death" "war" "stand" "blood"
[6,] "honest" "seek" "desir" "live" "husband" "set" "dead" "arm" "rome" "life"
[7,] "fellow" "heard" "beauti" "youth" "woman" "nobl" "bid" "majesti" "honor" "dai"
[8,] "hear" "lose" "black" "heart" "reason" "truth" "bed" "fight" "leav" "hope"
[9,] "heart" "strang" "kiss" "marri" "hand" "leav" "mad" "sword" "deed" "heaven"
[10,] "friend" "sister" "sun" "night" "talk" "command" "hand" "heart" "tear" "die"
```
```
which.max(doc_topic_distr['Hamlet', ])
```
```
[1] 7
```
```
# top-words could be sorted by “relevance” which also takes into account
# frequency of word in the corpus (0 < lambda < 1)
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 0.2)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "honest" "madam" "ey" "love" "letter" "natur" "ear" "england" "rome" "bloodi"
[2,] "beseech" "sea" "cheek" "youth" "merri" "report" "sleep" "majesti" "deed" "royal"
[3,] "knave" "water" "black" "wit" "woo" "spirit" "beat" "field" "banish" "graciou"
[4,] "warrant" "sister" "wretch" "signior" "jest" "judgment" "night" "uncl" "countri" "high"
[5,] "glad" "women" "flower" "count" "finger" "worst" "air" "march" "citi" "subject"
[6,] "action" "hair" "sweet" "lover" "choos" "author" "soft" "lieg" "son" "sovereign"
[7,] "worship" "lose" "vow" "danc" "ring" "qualiti" "knock" "fight" "rise" "foe"
[8,] "matter" "entreat" "mortal" "song" "horn" "virgin" "poison" "battl" "kneel" "flourish"
[9,] "fellow" "seek" "wing" "paint" "bond" "wine" "shake" "harri" "fly" "king"
[10,] "walk" "passion" "short" "wed" "troth" "direct" "move" "crown" "wert" "tide"
```
```
# ldavis not shown
# lda_model$plot()
```
Given that most text analysis can be very time consuming for a model, consider any approach that might give you more efficiency.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/appendix.html |
Appendix
========
Texts
-----
### Donald Barthelme
[
> “I have to admit we are mired in the most exquisite mysterious muck.
> This muck heaves and palpitates. It is multi\-directional and has a mayor.”
> “You may not be interested in absurdity, but absurdity is interested in you.”
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
### Raymond Carver
[
> “It ought to make us feel ashamed when we talk like we know what we’re talking about when we talk about love.”
> “That’s all we have, finally, the words, and they had better be the right ones.”
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
### Billy Dee Shakespeare
> “It works every time.”
These old works have pretty much no relevance today, and are mostly forgotten by everyone except humanities faculty. The analysis of them depicted in this document is pretty much definitive, and leaves little else to say regarding them, so don’t bother reading them if you haven’t already.
R
-
Up until even a couple years ago, R was *terrible* at text. You really only had base R for basic processing and a couple packages that were not straightforward to use. There was little for scraping the web. Nowadays, I would say it’s probably easier to deal with text in R than it is elsewhere, including Python. Packages like rvest, stringr/stringi, and tidytext and more make it almost easy enough to jump right in.
One can peruse the Natural Language Processing task view to start getting a sense of what all is available in R.
[NLP task view](https://www.r-pkg.org/ctv/NaturalLanguageProcessing)
The one drawback with R is that most of the dealing with text is slow and/or memory intensive. The Shakespeare texts are only a few dozen and not very long works, and yet your basic LDA might still take a minute or so. Most text analysis situations might have thousands to millions of texts, such that the corpus itself may be too much to hold in memory, and thus R, at least on a standard computing device or with the usual methods, might not be viable for your needs.
Python
------
While R has done a lot to catch up, more advanced text analysis techniques are developed in Python (if not lower level languages), and so the state of the art may be found there. Furthermore, much of text analysis is a high volume affair, and that means it will likely be done much more efficiently in the Python environment if so, though one still might need a high performance computing environment. Here are some of the popular modules in Python.
* nltk
* textblob (the tidytext for Python)
* gensim (topic modeling)
* spaCy
A Faster LDA
------------
We noted in the Shakespeare start to finish example that there are faster alternatives than the standard LDA in topicmodels. In particular, the powerful text2vec package contains a faster and less memory intensive implementation of LDA and dealing with text generally. Both of which are very important if you’re wanting to use R for text analysis. The other nice thing is that it works with LDAvis for visualization.
For the following, we’ll use one of the partially cleaned document term matrix for the Shakespeare texts. One of the things to get used to is that text2vec uses the newer R6 classes of R objects, hence the `$` approach you see to using specific methods.
```
library(text2vec)
load('data/shakes_dtm_stemmed.RData')
# load('data/shakes_words_df.RData') # non-stemmed
# convert to the sparse matrix representation using Matrix package
shakes_dtm = as(shakes_dtm, 'CsparseMatrix')
# setup the model
lda_model = LDA$new(n_topics = 10, doc_topic_prior = 0.1, topic_word_prior = 0.01)
# fit the model
doc_topic_distr = lda_model$fit_transform(x = shakes_dtm,
n_iter = 1000,
convergence_tol = 0.0001,
n_check_convergence = 25,
progressbar = FALSE)
```
```
INFO [2018-03-06 19:16:15] iter 25 loglikelihood = -1746173.024
INFO [2018-03-06 19:16:16] iter 50 loglikelihood = -1683541.903
INFO [2018-03-06 19:16:17] iter 75 loglikelihood = -1660985.396
INFO [2018-03-06 19:16:17] iter 100 loglikelihood = -1648984.411
INFO [2018-03-06 19:16:18] iter 125 loglikelihood = -1641481.467
INFO [2018-03-06 19:16:19] iter 150 loglikelihood = -1638983.461
INFO [2018-03-06 19:16:20] iter 175 loglikelihood = -1636730.733
INFO [2018-03-06 19:16:20] iter 200 loglikelihood = -1636356.883
INFO [2018-03-06 19:16:21] iter 225 loglikelihood = -1636487.222
INFO [2018-03-06 19:16:21] early stopping at 225 iteration
```
```
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 1)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "prai" "hear" "ey" "love" "word" "natur" "night" "god" "friend" "death"
[2,] "honor" "madam" "sweet" "dai" "letter" "fortun" "fear" "dai" "hand" "grace"
[3,] "heaven" "bring" "fair" "true" "hous" "world" "ear" "england" "nobl" "soul"
[4,] "life" "sea" "heart" "wit" "prai" "power" "sleep" "crown" "word" "live"
[5,] "matter" "bear" "light" "fair" "sweet" "poor" "death" "war" "stand" "blood"
[6,] "honest" "seek" "desir" "live" "husband" "set" "dead" "arm" "rome" "life"
[7,] "fellow" "heard" "beauti" "youth" "woman" "nobl" "bid" "majesti" "honor" "dai"
[8,] "hear" "lose" "black" "heart" "reason" "truth" "bed" "fight" "leav" "hope"
[9,] "heart" "strang" "kiss" "marri" "hand" "leav" "mad" "sword" "deed" "heaven"
[10,] "friend" "sister" "sun" "night" "talk" "command" "hand" "heart" "tear" "die"
```
```
which.max(doc_topic_distr['Hamlet', ])
```
```
[1] 7
```
```
# top-words could be sorted by “relevance” which also takes into account
# frequency of word in the corpus (0 < lambda < 1)
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 0.2)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "honest" "madam" "ey" "love" "letter" "natur" "ear" "england" "rome" "bloodi"
[2,] "beseech" "sea" "cheek" "youth" "merri" "report" "sleep" "majesti" "deed" "royal"
[3,] "knave" "water" "black" "wit" "woo" "spirit" "beat" "field" "banish" "graciou"
[4,] "warrant" "sister" "wretch" "signior" "jest" "judgment" "night" "uncl" "countri" "high"
[5,] "glad" "women" "flower" "count" "finger" "worst" "air" "march" "citi" "subject"
[6,] "action" "hair" "sweet" "lover" "choos" "author" "soft" "lieg" "son" "sovereign"
[7,] "worship" "lose" "vow" "danc" "ring" "qualiti" "knock" "fight" "rise" "foe"
[8,] "matter" "entreat" "mortal" "song" "horn" "virgin" "poison" "battl" "kneel" "flourish"
[9,] "fellow" "seek" "wing" "paint" "bond" "wine" "shake" "harri" "fly" "king"
[10,] "walk" "passion" "short" "wed" "troth" "direct" "move" "crown" "wert" "tide"
```
```
# ldavis not shown
# lda_model$plot()
```
Given that most text analysis can be very time consuming for a model, consider any approach that might give you more efficiency.
Texts
-----
### Donald Barthelme
[
> “I have to admit we are mired in the most exquisite mysterious muck.
> This muck heaves and palpitates. It is multi\-directional and has a mayor.”
> “You may not be interested in absurdity, but absurdity is interested in you.”
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
### Raymond Carver
[
> “It ought to make us feel ashamed when we talk like we know what we’re talking about when we talk about love.”
> “That’s all we have, finally, the words, and they had better be the right ones.”
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
### Billy Dee Shakespeare
> “It works every time.”
These old works have pretty much no relevance today, and are mostly forgotten by everyone except humanities faculty. The analysis of them depicted in this document is pretty much definitive, and leaves little else to say regarding them, so don’t bother reading them if you haven’t already.
### Donald Barthelme
[
> “I have to admit we are mired in the most exquisite mysterious muck.
> This muck heaves and palpitates. It is multi\-directional and has a mayor.”
> “You may not be interested in absurdity, but absurdity is interested in you.”
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
### Raymond Carver
[
> “It ought to make us feel ashamed when we talk like we know what we’re talking about when we talk about love.”
> “That’s all we have, finally, the words, and they had better be the right ones.”
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
### Billy Dee Shakespeare
> “It works every time.”
These old works have pretty much no relevance today, and are mostly forgotten by everyone except humanities faculty. The analysis of them depicted in this document is pretty much definitive, and leaves little else to say regarding them, so don’t bother reading them if you haven’t already.
R
-
Up until even a couple years ago, R was *terrible* at text. You really only had base R for basic processing and a couple packages that were not straightforward to use. There was little for scraping the web. Nowadays, I would say it’s probably easier to deal with text in R than it is elsewhere, including Python. Packages like rvest, stringr/stringi, and tidytext and more make it almost easy enough to jump right in.
One can peruse the Natural Language Processing task view to start getting a sense of what all is available in R.
[NLP task view](https://www.r-pkg.org/ctv/NaturalLanguageProcessing)
The one drawback with R is that most of the dealing with text is slow and/or memory intensive. The Shakespeare texts are only a few dozen and not very long works, and yet your basic LDA might still take a minute or so. Most text analysis situations might have thousands to millions of texts, such that the corpus itself may be too much to hold in memory, and thus R, at least on a standard computing device or with the usual methods, might not be viable for your needs.
Python
------
While R has done a lot to catch up, more advanced text analysis techniques are developed in Python (if not lower level languages), and so the state of the art may be found there. Furthermore, much of text analysis is a high volume affair, and that means it will likely be done much more efficiently in the Python environment if so, though one still might need a high performance computing environment. Here are some of the popular modules in Python.
* nltk
* textblob (the tidytext for Python)
* gensim (topic modeling)
* spaCy
A Faster LDA
------------
We noted in the Shakespeare start to finish example that there are faster alternatives than the standard LDA in topicmodels. In particular, the powerful text2vec package contains a faster and less memory intensive implementation of LDA and dealing with text generally. Both of which are very important if you’re wanting to use R for text analysis. The other nice thing is that it works with LDAvis for visualization.
For the following, we’ll use one of the partially cleaned document term matrix for the Shakespeare texts. One of the things to get used to is that text2vec uses the newer R6 classes of R objects, hence the `$` approach you see to using specific methods.
```
library(text2vec)
load('data/shakes_dtm_stemmed.RData')
# load('data/shakes_words_df.RData') # non-stemmed
# convert to the sparse matrix representation using Matrix package
shakes_dtm = as(shakes_dtm, 'CsparseMatrix')
# setup the model
lda_model = LDA$new(n_topics = 10, doc_topic_prior = 0.1, topic_word_prior = 0.01)
# fit the model
doc_topic_distr = lda_model$fit_transform(x = shakes_dtm,
n_iter = 1000,
convergence_tol = 0.0001,
n_check_convergence = 25,
progressbar = FALSE)
```
```
INFO [2018-03-06 19:16:15] iter 25 loglikelihood = -1746173.024
INFO [2018-03-06 19:16:16] iter 50 loglikelihood = -1683541.903
INFO [2018-03-06 19:16:17] iter 75 loglikelihood = -1660985.396
INFO [2018-03-06 19:16:17] iter 100 loglikelihood = -1648984.411
INFO [2018-03-06 19:16:18] iter 125 loglikelihood = -1641481.467
INFO [2018-03-06 19:16:19] iter 150 loglikelihood = -1638983.461
INFO [2018-03-06 19:16:20] iter 175 loglikelihood = -1636730.733
INFO [2018-03-06 19:16:20] iter 200 loglikelihood = -1636356.883
INFO [2018-03-06 19:16:21] iter 225 loglikelihood = -1636487.222
INFO [2018-03-06 19:16:21] early stopping at 225 iteration
```
```
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 1)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "prai" "hear" "ey" "love" "word" "natur" "night" "god" "friend" "death"
[2,] "honor" "madam" "sweet" "dai" "letter" "fortun" "fear" "dai" "hand" "grace"
[3,] "heaven" "bring" "fair" "true" "hous" "world" "ear" "england" "nobl" "soul"
[4,] "life" "sea" "heart" "wit" "prai" "power" "sleep" "crown" "word" "live"
[5,] "matter" "bear" "light" "fair" "sweet" "poor" "death" "war" "stand" "blood"
[6,] "honest" "seek" "desir" "live" "husband" "set" "dead" "arm" "rome" "life"
[7,] "fellow" "heard" "beauti" "youth" "woman" "nobl" "bid" "majesti" "honor" "dai"
[8,] "hear" "lose" "black" "heart" "reason" "truth" "bed" "fight" "leav" "hope"
[9,] "heart" "strang" "kiss" "marri" "hand" "leav" "mad" "sword" "deed" "heaven"
[10,] "friend" "sister" "sun" "night" "talk" "command" "hand" "heart" "tear" "die"
```
```
which.max(doc_topic_distr['Hamlet', ])
```
```
[1] 7
```
```
# top-words could be sorted by “relevance” which also takes into account
# frequency of word in the corpus (0 < lambda < 1)
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 0.2)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "honest" "madam" "ey" "love" "letter" "natur" "ear" "england" "rome" "bloodi"
[2,] "beseech" "sea" "cheek" "youth" "merri" "report" "sleep" "majesti" "deed" "royal"
[3,] "knave" "water" "black" "wit" "woo" "spirit" "beat" "field" "banish" "graciou"
[4,] "warrant" "sister" "wretch" "signior" "jest" "judgment" "night" "uncl" "countri" "high"
[5,] "glad" "women" "flower" "count" "finger" "worst" "air" "march" "citi" "subject"
[6,] "action" "hair" "sweet" "lover" "choos" "author" "soft" "lieg" "son" "sovereign"
[7,] "worship" "lose" "vow" "danc" "ring" "qualiti" "knock" "fight" "rise" "foe"
[8,] "matter" "entreat" "mortal" "song" "horn" "virgin" "poison" "battl" "kneel" "flourish"
[9,] "fellow" "seek" "wing" "paint" "bond" "wine" "shake" "harri" "fly" "king"
[10,] "walk" "passion" "short" "wed" "troth" "direct" "move" "crown" "wert" "tide"
```
```
# ldavis not shown
# lda_model$plot()
```
Given that most text analysis can be very time consuming for a model, consider any approach that might give you more efficiency.
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/appendix.html |
Appendix
========
Texts
-----
### Donald Barthelme
[
> “I have to admit we are mired in the most exquisite mysterious muck.
> This muck heaves and palpitates. It is multi\-directional and has a mayor.”
> “You may not be interested in absurdity, but absurdity is interested in you.”
#### The First Thing the Baby Did Wrong
This short story is essentially a how\-to on parenting.
[link](http://jessamyn.com/barth/baby.html)
#### The Balloon
This story is about a balloon that can represent whatever you want it to.
[link](http://www.uni.edu/oloughli/elit11/Balloon.rtf)
#### Some of Us Had Been Threatening Our Friend Colby
A brief work about etiquette and how to act in society.
[link](http://jessamyn.com/barth/colby.html)
### Raymond Carver
[
> “It ought to make us feel ashamed when we talk like we know what we’re talking about when we talk about love.”
> “That’s all we have, finally, the words, and they had better be the right ones.”
#### What We Talk About When We Talk About Love
The text we use is actually *Beginners*, or the unedited version. A drink is required in order to read it with the proper context. Probably several. No. Definitely several.
[link](http://www.newyorker.com/magazine/2007/12/24/beginners)
### Billy Dee Shakespeare
> “It works every time.”
These old works have pretty much no relevance today, and are mostly forgotten by everyone except humanities faculty. The analysis of them depicted in this document is pretty much definitive, and leaves little else to say regarding them, so don’t bother reading them if you haven’t already.
R
-
Up until even a couple years ago, R was *terrible* at text. You really only had base R for basic processing and a couple packages that were not straightforward to use. There was little for scraping the web. Nowadays, I would say it’s probably easier to deal with text in R than it is elsewhere, including Python. Packages like rvest, stringr/stringi, and tidytext and more make it almost easy enough to jump right in.
One can peruse the Natural Language Processing task view to start getting a sense of what all is available in R.
[NLP task view](https://www.r-pkg.org/ctv/NaturalLanguageProcessing)
The one drawback with R is that most of the dealing with text is slow and/or memory intensive. The Shakespeare texts are only a few dozen and not very long works, and yet your basic LDA might still take a minute or so. Most text analysis situations might have thousands to millions of texts, such that the corpus itself may be too much to hold in memory, and thus R, at least on a standard computing device or with the usual methods, might not be viable for your needs.
Python
------
While R has done a lot to catch up, more advanced text analysis techniques are developed in Python (if not lower level languages), and so the state of the art may be found there. Furthermore, much of text analysis is a high volume affair, and that means it will likely be done much more efficiently in the Python environment if so, though one still might need a high performance computing environment. Here are some of the popular modules in Python.
* nltk
* textblob (the tidytext for Python)
* gensim (topic modeling)
* spaCy
A Faster LDA
------------
We noted in the Shakespeare start to finish example that there are faster alternatives than the standard LDA in topicmodels. In particular, the powerful text2vec package contains a faster and less memory intensive implementation of LDA and dealing with text generally. Both of which are very important if you’re wanting to use R for text analysis. The other nice thing is that it works with LDAvis for visualization.
For the following, we’ll use one of the partially cleaned document term matrix for the Shakespeare texts. One of the things to get used to is that text2vec uses the newer R6 classes of R objects, hence the `$` approach you see to using specific methods.
```
library(text2vec)
load('data/shakes_dtm_stemmed.RData')
# load('data/shakes_words_df.RData') # non-stemmed
# convert to the sparse matrix representation using Matrix package
shakes_dtm = as(shakes_dtm, 'CsparseMatrix')
# setup the model
lda_model = LDA$new(n_topics = 10, doc_topic_prior = 0.1, topic_word_prior = 0.01)
# fit the model
doc_topic_distr = lda_model$fit_transform(x = shakes_dtm,
n_iter = 1000,
convergence_tol = 0.0001,
n_check_convergence = 25,
progressbar = FALSE)
```
```
INFO [2018-03-06 19:16:15] iter 25 loglikelihood = -1746173.024
INFO [2018-03-06 19:16:16] iter 50 loglikelihood = -1683541.903
INFO [2018-03-06 19:16:17] iter 75 loglikelihood = -1660985.396
INFO [2018-03-06 19:16:17] iter 100 loglikelihood = -1648984.411
INFO [2018-03-06 19:16:18] iter 125 loglikelihood = -1641481.467
INFO [2018-03-06 19:16:19] iter 150 loglikelihood = -1638983.461
INFO [2018-03-06 19:16:20] iter 175 loglikelihood = -1636730.733
INFO [2018-03-06 19:16:20] iter 200 loglikelihood = -1636356.883
INFO [2018-03-06 19:16:21] iter 225 loglikelihood = -1636487.222
INFO [2018-03-06 19:16:21] early stopping at 225 iteration
```
```
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 1)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "prai" "hear" "ey" "love" "word" "natur" "night" "god" "friend" "death"
[2,] "honor" "madam" "sweet" "dai" "letter" "fortun" "fear" "dai" "hand" "grace"
[3,] "heaven" "bring" "fair" "true" "hous" "world" "ear" "england" "nobl" "soul"
[4,] "life" "sea" "heart" "wit" "prai" "power" "sleep" "crown" "word" "live"
[5,] "matter" "bear" "light" "fair" "sweet" "poor" "death" "war" "stand" "blood"
[6,] "honest" "seek" "desir" "live" "husband" "set" "dead" "arm" "rome" "life"
[7,] "fellow" "heard" "beauti" "youth" "woman" "nobl" "bid" "majesti" "honor" "dai"
[8,] "hear" "lose" "black" "heart" "reason" "truth" "bed" "fight" "leav" "hope"
[9,] "heart" "strang" "kiss" "marri" "hand" "leav" "mad" "sword" "deed" "heaven"
[10,] "friend" "sister" "sun" "night" "talk" "command" "hand" "heart" "tear" "die"
```
```
which.max(doc_topic_distr['Hamlet', ])
```
```
[1] 7
```
```
# top-words could be sorted by “relevance” which also takes into account
# frequency of word in the corpus (0 < lambda < 1)
lda_model$get_top_words(n = 10, topic_number = 1:10, lambda = 0.2)
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] "honest" "madam" "ey" "love" "letter" "natur" "ear" "england" "rome" "bloodi"
[2,] "beseech" "sea" "cheek" "youth" "merri" "report" "sleep" "majesti" "deed" "royal"
[3,] "knave" "water" "black" "wit" "woo" "spirit" "beat" "field" "banish" "graciou"
[4,] "warrant" "sister" "wretch" "signior" "jest" "judgment" "night" "uncl" "countri" "high"
[5,] "glad" "women" "flower" "count" "finger" "worst" "air" "march" "citi" "subject"
[6,] "action" "hair" "sweet" "lover" "choos" "author" "soft" "lieg" "son" "sovereign"
[7,] "worship" "lose" "vow" "danc" "ring" "qualiti" "knock" "fight" "rise" "foe"
[8,] "matter" "entreat" "mortal" "song" "horn" "virgin" "poison" "battl" "kneel" "flourish"
[9,] "fellow" "seek" "wing" "paint" "bond" "wine" "shake" "harri" "fly" "king"
[10,] "walk" "passion" "short" "wed" "troth" "direct" "move" "crown" "wert" "tide"
```
```
# ldavis not shown
# lda_model$plot()
```
Given that most text analysis can be very time consuming for a model, consider any approach that might give you more efficiency.
| Text Analysis |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/index.html |
Preface
=======
This is material that was developed as part of a course we teach at the University of Washington on applied time series analysis for fisheries and environmental data. You can find our lectures on our course website [ATSA](https://nwfsc-timeseries.github.io/atsa/).
### Book package
The book uses a number of R packages and a variety of fisheries data sets. The packages and data sets can be installed by installing our **atsalibrary** package which is hosted on GitHub:
```
library(devtools)
# Windows users will likely need to set this
# Sys.setenv('R_REMOTES_NO_ERRORS_FROM_WARNINGS' = 'true')
devtools::install_github("nwfsc-timeseries/atsalibrary")
```
### Authors
Links to more code and publications can be found on our academic websites at the University of Washington:
* Elizabeth Eli Holmes <http://faculty.washington.edu/eeholmes>
* Mark D. Scheuerell <http://faculty.washington.edu/scheuerl>
* Eric J. Ward <http://faculty.washington.edu/warde>
### Citation
Holmes, E. E., M. D. Scheuerell, and E. J. Ward. Applied time series analysis for fisheries and environmental data. Edition 2021\. Contacts [[email protected]](mailto:[email protected]), [[email protected]](mailto:[email protected]), and [[email protected]](mailto:[email protected])
### License
This book was developed by United States federal government employees as part of their official duties. As such, it is not subject to copyright protection and is considered “public domain” (see 17 USC § 105\). Public domain works can be used by anyone for any purpose, and cannot be released under a copyright license. *However if you use our work, please cite it and give proper attribution.*
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/chap-basicmat.html |
Chapter 1 Basic matrix math in R
================================
This chapter reviews the basic matrix math operations that you will need to understand the course material and shows how to do these operations in R.
A script with all the R code in the chapter can be downloaded [here](./Rcode/basic-matrix-math.R).
After reviewing the material, you can check your knowledge via an [online quiz](https://atsa.shinyapps.io/matrix/) (with solutions) or run the quiz from R using the atsalibrary package:
```
learnr::run_tutorial("matrix", package="atsalibrary")
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-basicmat-create.html |
1\.1 Creating matrices in R
---------------------------
Create a \\(3 \\times 4\\) matrix, meaning 3 rows and 4 columns, that is all 1s:
```
matrix(1, 3, 4)
```
```
[,1] [,2] [,3] [,4]
[1,] 1 1 1 1
[2,] 1 1 1 1
[3,] 1 1 1 1
```
Create a \\(3 \\times 4\\) matrix filled in with the numbers 1 to 12 by column (default) and by row:
```
matrix(1:12, 3, 4)
```
```
[,1] [,2] [,3] [,4]
[1,] 1 4 7 10
[2,] 2 5 8 11
[3,] 3 6 9 12
```
```
matrix(1:12, 3, 4, byrow = TRUE)
```
```
[,1] [,2] [,3] [,4]
[1,] 1 2 3 4
[2,] 5 6 7 8
[3,] 9 10 11 12
```
Create a matrix with one column:
```
matrix(1:4, ncol = 1)
```
```
[,1]
[1,] 1
[2,] 2
[3,] 3
[4,] 4
```
Create a matrix with one row:
```
matrix(1:4, nrow = 1)
```
```
[,1] [,2] [,3] [,4]
[1,] 1 2 3 4
```
Check the dimensions of a matrix
```
A = matrix(1:6, 2, 3)
A
```
```
[,1] [,2] [,3]
[1,] 1 3 5
[2,] 2 4 6
```
```
dim(A)
```
```
[1] 2 3
```
Get the number of rows in a matrix:
```
dim(A)[1]
```
```
[1] 2
```
```
nrow(A)
```
```
[1] 2
```
Create a 3D matrix (called array):
```
A = array(1:6, dim = c(2, 3, 2))
A
```
```
, , 1
[,1] [,2] [,3]
[1,] 1 3 5
[2,] 2 4 6
, , 2
[,1] [,2] [,3]
[1,] 1 3 5
[2,] 2 4 6
```
```
dim(A)
```
```
[1] 2 3 2
```
Check if an object is a matrix. A data frame is not a matrix. A vector is not a matrix.
```
A = matrix(1:4, 1, 4)
A
```
```
[,1] [,2] [,3] [,4]
[1,] 1 2 3 4
```
```
class(A)
```
```
[1] "matrix" "array"
```
```
B = data.frame(A)
B
```
```
X1 X2 X3 X4
1 1 2 3 4
```
```
class(B)
```
```
[1] "data.frame"
```
```
C = 1:4
C
```
```
[1] 1 2 3 4
```
```
class(C)
```
```
[1] "integer"
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-basicmat-multiply.html |
1\.2 Matrix multiplication, addition and transpose
--------------------------------------------------
You will need to be very solid in matrix multiplication for the course. If you haven’t done it in a while, google ‘matrix multiplication youtube’ and you’ll find lots of 5\-minute videos to remind you.
In R, you use the `%*%` operator to do matrix multiplication. When you do matrix multiplication, the number of columns of the matrix on the left must equal the number of rows of the matrix on the right. The result is a matrix that has the number of rows of the matrix on the left and the number of columns of the matrix on the right.
\\\[(n \\times m)(m \\times p) \= (n \\times p)\\]
```
A=matrix(1:6, 2, 3) #2 rows, 3 columns
B=matrix(1:6, 3, 2) #3 rows, 2 columns
A%*%B #this works
```
```
[,1] [,2]
[1,] 22 49
[2,] 28 64
```
```
B%*%A #this works
```
```
[,1] [,2] [,3]
[1,] 9 19 29
[2,] 12 26 40
[3,] 15 33 51
```
```
try(B%*%B) #this doesn't
```
```
Error in B %*% B : non-conformable arguments
```
To add two matrices use `+`. The matrices have to have the same dimensions.
```
A+A #works
```
```
[,1] [,2] [,3]
[1,] 2 6 10
[2,] 4 8 12
```
```
A+t(B) #works
```
```
[,1] [,2] [,3]
[1,] 2 5 8
[2,] 6 9 12
```
```
try(A+B) #does not work since A has 2 rows and B has 3
```
```
Error in A + B : non-conformable arrays
```
The transpose of a matrix is denoted \\(\\mathbf{A}^\\top\\) or \\(\\mathbf{A}^\\prime\\). To transpose a matrix in R, you use `t()`.
```
A=matrix(1:6, 2, 3) #2 rows, 3 columns
t(A) #is the transpose of A
```
```
[,1] [,2]
[1,] 1 2
[2,] 3 4
[3,] 5 6
```
```
try(A%*%A) #this won't work
```
```
Error in A %*% A : non-conformable arguments
```
```
A%*%t(A) #this will
```
```
[,1] [,2]
[1,] 35 44
[2,] 44 56
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-basicmat-subset.html |
1\.3 Subsetting a matrix
------------------------
To subset a matrix, we use `[ ]`:
```
A=matrix(1:9, 3, 3) #3 rows, 3 columns
#get the first and second rows of A
#it's a 2x3 matrix
A[1:2,]
```
```
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
```
```
#get the top 2 rows and left 2 columns
A[1:2,1:2]
```
```
[,1] [,2]
[1,] 1 4
[2,] 2 5
```
```
#What does this do?
A[c(1,3),c(1,3)]
```
```
[,1] [,2]
[1,] 1 7
[2,] 3 9
```
```
#This?
A[c(1,2,1),c(2,3)]
```
```
[,1] [,2]
[1,] 4 7
[2,] 5 8
[3,] 4 7
```
If you have used MATLAB, you know you can write something like `A[1,end]` to denote the element of a matrix in row 1 and the last column. R does not have `end`. To do the same in R, you do something like:
```
A=matrix(1:9, 3, 3)
A[1,ncol(A)]
```
```
[1] 7
```
```
#or
A[1,dim(A)[2]]
```
```
[1] 7
```
**Warning R will create vectors from subsetting matrices!**
One of the really bad things that R does with matrices is create a vector if you happen to subset a matrix to create a matrix with 1 row or 1 column. Look at this:
```
A=matrix(1:9, 3, 3)
#take the first 2 rows
B=A[1:2,]
#everything is ok
dim(B)
```
```
[1] 2 3
```
```
class(B)
```
```
[1] "matrix" "array"
```
```
#take the first row
B=A[1,]
#oh no! It should be a 1x3 matrix but it is not.
dim(B)
```
```
NULL
```
```
#It is not even a matrix any more
class(B)
```
```
[1] "integer"
```
```
#and what happens if we take the transpose?
#Oh no, it's a 1x3 matrix not a 3x1 (transpose of 1x3)
t(B)
```
```
[,1] [,2] [,3]
[1,] 1 4 7
```
```
#A%*%B should fail because A is (3x3) and B is (1x3)
A%*%B
```
```
[,1]
[1,] 66
[2,] 78
[3,] 90
```
```
#It works? That is horrible!
```
This will create hard\-to\-find bugs in your code because you will look at `B=A[1,]` and everything looks fine, yet R insists it is not a matrix. To stop R from doing this, use `drop=FALSE`.
```
B=A[1,,drop=FALSE]
#Now it is a matrix as it should be
dim(B)
```
```
[1] 1 3
```
```
class(B)
```
```
[1] "matrix" "array"
```
```
#this fails as it should (alerting you to a problem!)
try(A%*%B)
```
```
Error in A %*% B : non-conformable arguments
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-basicmat-replace.html |
1\.4 Replacing elements in a matrix
-----------------------------------
Replace 1 element.
```
A=matrix(1, 3, 3)
A[1,1]=2
A
```
```
[,1] [,2] [,3]
[1,] 2 1 1
[2,] 1 1 1
[3,] 1 1 1
```
Replace a whole row with a single value or with a sequence of values:
```
A=matrix(1, 3, 3)
A[1,]=2
A
```
```
[,1] [,2] [,3]
[1,] 2 2 2
[2,] 1 1 1
[3,] 1 1 1
```
```
A[1,]=1:3
A
```
```
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 1 1 1
[3,] 1 1 1
```
Replace a group of elements. This often does not work as one expects, so be sure to look at your matrix after trying something like this. Here I want to replace elements (1,3\) and (3,1\) with 2, but it didn’t work as I wanted.
```
A=matrix(1, 3, 3)
A[c(1,3),c(3,1)]=2
A
```
```
[,1] [,2] [,3]
[1,] 2 1 2
[2,] 1 1 1
[3,] 2 1 2
```
How do I replace just elements (1,3\) and (3,1\) with 2 then? It’s tedious. If you have a lot of elements to replace, you might want to use a for loop (see the sketch after the example below).
```
A=matrix(1, 3, 3)
A[1,3]=2
A[3,1]=2
A
```
```
[,1] [,2] [,3]
[1,] 1 1 2
[2,] 1 1 1
[3,] 2 1 1
```
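Here is a minimal sketch of the for\-loop approach just mentioned, looping over a small set of (row, column) positions; the `pos` object is just our name for this illustration.
```
A = matrix(1, 3, 3)
# positions to replace, one (row, column) pair per row
pos = rbind(c(1, 3), c(3, 1))
for (i in 1:nrow(pos)) {
    A[pos[i, 1], pos[i, 2]] = 2
}
A
```
As it happens, R will also accept a two\-column matrix of indices directly, so `A[pos] = 2` does the same replacement without a loop.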
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-basicmat-diagonal.html |
1\.5 Diagonal matrices and identity matrices
--------------------------------------------
A diagonal matrix is one that is square, meaning number of rows equals number of columns, and it has 0s on the off\-diagonal and non\-zeros on the diagonal. In R, you form a diagonal matrix with the `diag()` function:
```
diag(1,3) #put 1 on diagonal of 3x3 matrix
```
```
[,1] [,2] [,3]
[1,] 1 0 0
[2,] 0 1 0
[3,] 0 0 1
```
```
diag(2, 3) #put 2 on diagonal of 3x3 matrix
```
```
[,1] [,2] [,3]
[1,] 2 0 0
[2,] 0 2 0
[3,] 0 0 2
```
```
diag(1:4) #put 1 to 4 on diagonal of 4x4 matrix
```
```
[,1] [,2] [,3] [,4]
[1,] 1 0 0 0
[2,] 0 2 0 0
[3,] 0 0 3 0
[4,] 0 0 0 4
```
The `diag()` function can also be used to replace elements on the diagonal of a matrix:
```
A = matrix(3, 3, 3)
diag(A) = 1
A
```
```
[,1] [,2] [,3]
[1,] 1 3 3
[2,] 3 1 3
[3,] 3 3 1
```
```
A = matrix(3, 3, 3)
diag(A) = 1:3
A
```
```
[,1] [,2] [,3]
[1,] 1 3 3
[2,] 3 2 3
[3,] 3 3 3
```
```
A = matrix(3, 3, 4)
diag(A[1:3, 2:4]) = 1
A
```
```
[,1] [,2] [,3] [,4]
[1,] 3 1 3 3
[2,] 3 3 1 3
[3,] 3 3 3 1
```
The `diag()` function is also used to get the diagonal of a matrix.
```
A = matrix(1:9, 3, 3)
diag(A)
```
```
[1] 1 5 9
```
The identity matrix is a special kind of diagonal matrix with 1s on the diagonal. It is denoted \\(\\mathbf{I}\\). \\(\\mathbf{I}\_3\\) would mean a \\(3 \\times 3\\) identity matrix. An identity matrix has the property that \\(\\mathbf{A}\\mathbf{I}\=\\mathbf{A}\\) and \\(\\mathbf{I}\\mathbf{A}\=\\mathbf{A}\\), so it acts like a 1\.
```
A = matrix(1:9, 3, 3)
I = diag(3) #shortcut for 3x3 identity matrix
A %*% I
```
```
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
[3,] 3 6 9
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-basicmat-inverse.html |
1\.6 Taking the inverse of a square matrix
------------------------------------------
The inverse of a matrix is denoted \\(\\mathbf{A}^{\-1}\\). You can think of the inverse of a matrix like \\(1/a\\): \\(1/a \\times a \= 1\\), and \\(\\mathbf{A}^{\-1}\\mathbf{A} \= \\mathbf{A}\\mathbf{A}^{\-1} \= \\mathbf{I}\\). The inverse of a matrix does not always exist; for one thing, the matrix has to be square. We’ll be using inverses for variance\-covariance matrices, and by definition (of a variance\-covariance matrix) the inverses of those exist. In R, there are a couple of common ways to take the inverse of a variance\-covariance matrix (or something with the same properties). `solve()` is probably the most common:
```
A = diag(3, 3) + matrix(1, 3, 3)
invA = solve(A)
invA %*% A
```
```
[,1] [,2] [,3]
[1,] 1.000000e+00 -6.938894e-18 0
[2,] 2.081668e-17 1.000000e+00 0
[3,] 0.000000e+00 0.000000e+00 1
```
```
A %*% invA
```
```
[,1] [,2] [,3]
[1,] 1.000000e+00 -6.938894e-18 0
[2,] 2.081668e-17 1.000000e+00 0
[3,] 0.000000e+00 0.000000e+00 1
```
Another option is to use `chol2inv()` which uses a Cholesky decomposition:
```
A = diag(3, 3) + matrix(1, 3, 3)
invA = chol2inv(chol(A))
invA %*% A
```
```
[,1] [,2] [,3]
[1,] 1.000000e+00 6.938894e-17 0.000000e+00
[2,] 2.081668e-17 1.000000e+00 -2.775558e-17
[3,] -5.551115e-17 0.000000e+00 1.000000e+00
```
```
A %*% invA
```
```
[,1] [,2] [,3]
[1,] 1.000000e+00 2.081668e-17 -5.551115e-17
[2,] 6.938894e-17 1.000000e+00 0.000000e+00
[3,] 0.000000e+00 -2.775558e-17 1.000000e+00
```
For the purpose of this course, `solve()` is fine.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-basicmat-problems.html |
1\.7 Problems
-------------
1. Build a \\(4 \\times 3\\) matrix with the numbers 1 through 3 in each column. Try the same with the numbers 1 through 4 in each row.
2. Extract the elements in the 1st and 2nd rows and 1st and 2nd columns (you’ll have a \\(2 \\times 2\\) matrix). Show the R code that will do this.
3. Build a \\(4 \\times 3\\) matrix with the numbers 1 through 12 by row (meaning the first row will have the numbers 1 through 3 in it).
4. Extract the 3rd row of the above. Show R code to do this where you end up with a vector and how to do this where you end up with a \\(1 \\times 3\\) matrix.
5. Build a \\(4 \\times 3\\) matrix that is all 1s except a 2 in the (2,3\) element (2nd row, 3rd column).
6. Take the transpose of the above.
7. Build a \\(4 \\times 4\\) diagonal matrix with 1 through 4 on the diagonal.
8. Build a \\(5 \\times 5\\) identity matrix.
9. Replace the diagonal in the above matrix with 2 (the number 2\).
10. Build a matrix with 2 on the diagonal and 1s on the offdiagonals.
11. Take the inverse of the above.
12. Build a \\(3 \\times 3\\) matrix with the first 9 letters of the alphabet. First column should be “a,” “b,” “c.” `letters[1:9]` gives you these letters.
13. Replace the diagonal of this matrix with the word “cat.”
14. Build a \\(4 \\times 3\\) matrix with all 1s. Multiply by a \\(3 \\times 4\\) matrix with all 2s.
15. If \\(\\mathbf{A}\\) is a \\(4 \\times 3\\) matrix, is \\(\\mathbf{A} \\mathbf{A}\\) possible? Is \\(\\mathbf{A} \\mathbf{A}^\\top\\) possible? Show how to write \\(\\mathbf{A}\\mathbf{A}^\\top\\) in R.
16. In the equation, \\(\\mathbf{A} \\mathbf{B} \= \\mathbf{C}\\), let \\(\\mathbf{A}\=\\left\[ \\begin{smallmatrix}1\&4\&7\\\\2\&5\&8\\\\3\&6\&9\\end{smallmatrix}\\right]\\). Build a \\(3 \\times 3\\) \\(\\mathbf{B}\\) matrix with only 1s and 0s such that the values on the diagonal of \\(\\mathbf{C}\\) are 1, 8, 6 (in that order). Show your R code for \\(\\mathbf{A}\\), \\(\\mathbf{B}\\) and \\(\\mathbf{A} \\mathbf{B}\\).
17. Same \\(\\mathbf{A}\\) matrix as above and same equation \\(\\mathbf{A} \\mathbf{B} \= \\mathbf{C}\\). Build a \\(3 \\times 3\\) \\(\\mathbf{B}\\) matrix such that \\(\\mathbf{C}\=2\\mathbf{A}\\). So \\(\\mathbf{C}\=\\left\[ \\begin{smallmatrix}2\&8\&14\\\\ 4\&10\&16\\\\ 6\&12\&18\\end{smallmatrix}\\right]\\). Hint, \\(\\mathbf{B}\\) is diagonal.
18. Same \\(\\mathbf{A}\\) and \\(\\mathbf{A} \\mathbf{B}\=\\mathbf{C}\\) equation. Build a \\(\\mathbf{B}\\) matrix to compute the row sums of \\(\\mathbf{A}\\). So the first \`row sum’ would be \\(1\+4\+7\\), the sum of all elements in row 1 of \\(\\mathbf{A}\\). \\(\\mathbf{C}\\) will be \\(\\left\[ \\begin{smallmatrix}12\\\\ 15\\\\ 18\\end{smallmatrix}\\right]\\), the row sums of \\(\\mathbf{A}\\). Hint, \\(\\mathbf{B}\\) is a column matrix (1 column).
19. Same \\(\\mathbf{A}\\) matrix as above but now equation \\(\\mathbf{B} \\mathbf{A} \= \\mathbf{C}\\). Build a \\(\\mathbf{B}\\) matrix to compute the column sums of \\(\\mathbf{A}\\). So the first ‘column sum’ would be \\(1\+2\+3\\). \\(\\mathbf{C}\\) will be a \\(1 \\times 3\\) matrix.
20. Let \\(\\mathbf{A} \\mathbf{B}\=\\mathbf{C}\\) equation but \\(\\mathbf{A}\=\\left\[ \\begin{smallmatrix}2\&1\&1\\\\1\&2\&1\\\\1\&1\&2\\end{smallmatrix}\\right]\\) (so A\=`diag(3)+1`). Build a \\(\\mathbf{B}\\) matrix such that \\(\\mathbf{C}\=\\left\[ \\begin{smallmatrix}3\\\\ 3\\\\ 3\\end{smallmatrix}\\right]\\). Hint, you need to use the inverse of \\(\\mathbf{A}\\).
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/chap-mlr.html |
Chapter 2 Linear regression in matrix form
==========================================
This chapter shows how to write linear regression models in matrix form. The purpose is to get you comfortable writing multivariate linear models in different matrix forms before we start working with time series versions of these models. Each matrix form is an equivalent model for the data, but written in different forms. You do not need to worry which form is better or worse at this point. Simply get comfortable writing multivariate linear models in different matrix forms.
A script with all the R code in the chapter can be downloaded [here](./Rcode/linear-regression-models-matrix.R). The Rmd file of this chapter can be downloaded [here](./Rmds/linear-regression-models-matrix.Rmd).
### Data and packages
This chapter uses the **stats**, **MARSS** and **datasets** packages. Install those packages, if needed, and load:
```
library(stats)
library(MARSS)
library(datasets)
```
We will work with the `stackloss` dataset available in the **datasets** package. The dataset consists of 21 observations on the efficiency of a plant that produces nitric acid as a function of three explanatory variables: air flow, water temperature and acid concentration. We are going to use just the first 4 datapoints so that it is easier to write the matrices, but the concepts extend to as many datapoints as you have.
```
data(stackloss, package = "datasets")
dat = stackloss[1:4, ] #subsetted first 4 rows
dat
```
```
Air.Flow Water.Temp Acid.Conc. stack.loss
1 80 27 89 42
2 80 27 88 37
3 75 25 90 37
4 62 24 87 28
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mlr-lr1.html |
2\.1 A simple regression: one explanatory variable
--------------------------------------------------
We will start by regressing stack loss against air flow. In R using the `lm()` function this is
```
# the dat data.frame is defined on the first page of the
# chapter
lm(stack.loss ~ Air.Flow, data = dat)
```
This fits the following model for the \\(i\\)\-th measurement:
\\\[\\begin{equation}
\\tag{2\.1}
stack.loss\_i \= \\alpha \+ \\beta air\_i \+ e\_i, \\text{ where } e\_i \\sim \\text{N}(0,\\sigma^2\)
\\end{equation}\\]
We will write the model for all the measurements together in two different ways, Form 1 and Form 2\.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mlr-form1.html |
2\.2 Matrix Form 1
------------------
In this form, we have the explanatory variables in a matrix on the left of our parameter matrix:
\\\[\\begin{equation}
\\tag{2\.2}
\\begin{bmatrix}stack.loss\_1\\\\stack.loss\_2\\\\stack.loss\_3\\\\stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}1\&air\_1\\\\1\&air\_2\\\\1\&air\_3\\\\1\&air\_4\\end{bmatrix}
\\begin{bmatrix}\\alpha\\\\ \\beta\\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}
\\end{equation}\\]
You should work through the matrix algebra to make sure you understand why Equation [(2\.2\)](sec-mlr-form1.html#eq:stackloss-form1) is Equation [(2\.1\)](sec-mlr-lr1.html#eq:stacklossi) for all the \\(i\\) data points together.
We can write the first line of Equation [(2\.2\)](sec-mlr-form1.html#eq:stackloss-form1) succinctly as
\\\[\\begin{equation}
\\tag{2\.3}
\\mathbf{y} \= \\mathbf{Z}\\mathbf{x} \+ \\mathbf{e}
\\end{equation}\\]
where \\(\\mathbf{x}\\) are our parameters, \\(\\mathbf{y}\\) are our response variables, and \\(\\mathbf{Z}\\) are our explanatory variables (with a column of 1s for the intercept). The `lm()` function uses Form 1, and we can recover the \\(\\mathbf{Z}\\) matrix for Form 1 by using the `model.matrix()` function on the output from an `lm()` call:
```
fit = lm(stack.loss ~ Air.Flow, data = dat)
Z = model.matrix(fit)
Z[1:4, ]
```
```
(Intercept) Air.Flow
1 1 80
2 1 80
3 1 75
4 1 62
```
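As a quick numerical sketch (using the `fit` and `Z` objects just created), multiplying \\(\\mathbf{Z}\\) by the estimated parameters reproduces the fitted values, and what is left over is \\(\\mathbf{e}\\), the residuals.
```
x = coef(fit)  # the estimated alpha and beta
# Z %*% x gives the fitted values; the residuals are y minus that
cbind(Z %*% x, fitted(fit), residuals(fit))
```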
### 2\.2\.1 Solving for the parameters
Note: You will not need to know how to solve linear matrix equations for this course. This section just shows you what the `lm()` function is doing to estimate the parameters.
Notice that \\(\\mathbf{Z}\\) is not a square matrix and its inverse does not exist, but the inverse of \\(\\mathbf{Z}^\\top\\mathbf{Z}\\) exists—if this is a solvable problem. We can go through the following steps to solve for \\(\\mathbf{x}\\), our parameters \\(\\alpha\\) and \\(\\beta\\).
Start with \\(\\mathbf{y} \= \\mathbf{Z}\\mathbf{x} \+ \\mathbf{e}\\) and multiply by \\(\\mathbf{Z}^\\top\\) on the left to get
\\\[\\begin{equation\*}
\\mathbf{Z}^\\top\\mathbf{y} \= \\mathbf{Z}^\\top\\mathbf{Z}\\mathbf{x} \+ \\mathbf{Z}^\\top\\mathbf{e}
\\end{equation\*}\\]
Multiply that by \\((\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\) on the left to get
\\\[\\begin{equation\*}
(\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{y} \= (\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{Z}\\mathbf{x} \+ (\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{e}
\\end{equation\*}\\]
\\((\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{Z}\\) equals the identity matrix, thus
\\\[\\begin{equation\*}
(\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{y} \= \\mathbf{x} \+ (\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{e}\\\\
\\end{equation\*}\\]
Move \\(\\mathbf{x}\\) to the right by itself, to get
\\\[\\begin{equation\*}
(\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{y} \- (\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{e} \= \\mathbf{x}
\\end{equation\*}\\]
Let’s assume our errors, the \\(\\mathbf{e}\\), are i.i.d. which means that
\\\[\\begin{equation\*}
\\mathbf{e} \\sim \\text{MVN}\\begin{pmatrix}0,
\\begin{bmatrix}
\\sigma^2\&0\&0\&0\\\\ 0\&\\sigma^2\&0\&0\\\\ 0\&0\&\\sigma^2\&0\\\\ 0\&0\&0\&\\sigma^2
\\end{bmatrix}
\\end{pmatrix}
\\end{equation\*}\\]
This equation means \\(\\mathbf{e}\\) is drawn from a multivariate normal distribution with a variance\-covariance matrix that is diagonal with equal variances.
Under that assumption, the expected value of \\((\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{e}\\) is zero. So we can solve for \\(\\mathbf{x}\\) as
\\\[\\begin{equation\*}
\\mathbf{x} \= (\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{y}
\\end{equation\*}\\]
Let’s try that with R and compare to what you get with `lm()`:
```
y = matrix(dat$stack.loss, ncol = 1)
Z = cbind(1, dat$Air.Flow) #or use model.matrix() to get Z
solve(t(Z) %*% Z) %*% t(Z) %*% y
```
```
[,1]
[1,] -11.6159170
[2,] 0.6412918
```
```
coef(lm(stack.loss ~ Air.Flow, data = dat))
```
```
(Intercept) Air.Flow
-11.6159170 0.6412918
```
As you see, you get the same values.
### 2\.2\.2 Form 1 with multiple explanatory variables
We can easily extend Form 1 to multiple explanatory variables. Let’s say we wanted to fit this model:
\\\[\\begin{equation}
\\tag{2\.4}
stack.loss\_i \= \\alpha \+ \\beta\_1 air\_i \+ \\beta\_2 water\_i \+ \\beta\_3 acid\_i \+ e\_i
\\end{equation}\\]
With `lm()`, we can fit this with
```
fit1.mult = lm(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
data = dat)
```
Written in matrix form (Form 1\), this is
\\\[\\begin{equation}
\\tag{2\.5}
\\begin{bmatrix}stack.loss\_1\\\\stack.loss\_2\\\\stack.loss\_3\\\\stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}1\&air\_1\&water\_1\&acid\_1\\\\1\&air\_2\&water\_2\&acid\_2\\\\1\&air\_3\&water\_3\&acid\_3\\\\1\&air\_4\&water\_4\&acid\_4\\end{bmatrix}
\\begin{bmatrix}\\alpha\\\\ \\beta\_1 \\\\ \\beta\_2 \\\\ \\beta\_3\\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}
\\end{equation}\\]
Now \\(\\mathbf{Z}\\) is a matrix with 4 columns and \\(\\mathbf{x}\\) is a column vector with 4 rows. We can show the \\(\\mathbf{Z}\\) matrix again directly from our `lm()` fit:
```
Z = model.matrix(fit1.mult)
Z
```
```
(Intercept) Air.Flow Water.Temp Acid.Conc.
1 1 80 27 89
2 1 80 27 88
3 1 75 25 90
4 1 62 24 87
attr(,"assign")
[1] 0 1 2 3
```
We can solve for \\(\\mathbf{x}\\) just like before and compare to what we get with `lm()`:
```
y = matrix(dat$stack.loss, ncol = 1)
Z = cbind(1, dat$Air.Flow, dat$Water.Temp, dat$Acid.Conc)
# or Z=model.matrix(fit2)
solve(t(Z) %*% Z) %*% t(Z) %*% y
```
```
[,1]
[1,] -524.904762
[2,] -1.047619
[3,] 7.619048
[4,] 5.000000
```
```
coef(fit1.mult)
```
```
(Intercept) Air.Flow Water.Temp Acid.Conc.
-524.904762 -1.047619 7.619048 5.000000
```
Take a look at the \\(\\mathbf{Z}\\) we made in R. It looks exactly like what is in our model written in matrix form (Equation [(2\.5\)](sec-mlr-form1.html#eq:stackloss-form1-mult)).
### 2\.2\.3 When does Form 1 arise?
This form of writing a regression model will come up when you work with dynamic linear models (DLMs). With DLMs, you will be fitting models of the form \\(\\mathbf{y}\_t\=\\mathbf{Z}\_t\\mathbf{x}\_t\+\\mathbf{e}\_t\\). In these models you have multiple \\(\\mathbf{y}\\) at regular time points and you allow your regression parameters, the \\(\\mathbf{x}\\), to evolve through time as a random walk.
### 2\.2\.4 Matrix Form 1b: The transpose of Form 1
We could also write Form 1 as follows:
\\\[\\begin{equation}
\\tag{2\.6}
\\begin{split}
\\begin{bmatrix}stack.loss\_1\&stack.loss\_2\&stack.loss\_3 \&stack.loss\_4\\end{bmatrix}
\= \\\\
\\begin{bmatrix}\\alpha\& \\beta\_1 \& \\beta\_2 \& \\beta\_3 \\end{bmatrix}
\\begin{bmatrix}1\&1\&1\&1\\\\air\_1\&air\_2\&air\_3\&air\_4\\\\water\_1\&water\_2\&water\_3\&water\_4\\\\acid\_1\&acid\_2\&acid\_3\&acid\_4\\end{bmatrix}
\+
\\begin{bmatrix}e\_1\&e\_2\&e\_3\&e\_4\\end{bmatrix}
\\end{split}
\\end{equation}\\]
This is just the transpose of Form 1\. Work through the matrix algebra to make sure you understand why Equation [(2\.6\)](sec-mlr-form1.html#eq:stackloss-form1b) is Equation [(2\.1\)](sec-mlr-lr1.html#eq:stacklossi) for all the \\(i\\) data points together and why it is equal to the transpose of Equation [(2\.2\)](sec-mlr-form1.html#eq:stackloss-form1). You’ll need the relationship \\((\\mathbf{A}\\mathbf{B})^\\top\=\\mathbf{B}^\\top \\mathbf{A}^\\top\\).
Let’s write Equation [(2\.6\)](sec-mlr-form1.html#eq:stackloss-form1b) as \\(\\mathbf{y} \= \\mathbf{D}\\mathbf{d} \+ \\mathbf{e}\\), where \\(\\mathbf{D}\\) contains our parameters. Then we can solve for \\(\\mathbf{D}\\) following the steps in Section [2\.2\.1](sec-mlr-form1.html#sec-mlr-solveform1) but multiplying from the right instead of from the left. Work through the steps to show that
\\(\\mathbf{D} \= \\mathbf{y}\\mathbf{d}^\\top(\\mathbf{d}\\mathbf{d}^\\top)^{\-1}\\).
```
y = matrix(dat$stack.loss, nrow = 1)
d = rbind(1, dat$Air.Flow, dat$Water.Temp, dat$Acid.Conc)
y %*% t(d) %*% solve(d %*% t(d))
```
```
[,1] [,2] [,3] [,4]
[1,] -524.9048 -1.047619 7.619048 5
```
```
coef(fit1.mult)
```
```
(Intercept) Air.Flow Water.Temp Acid.Conc.
-524.904762 -1.047619 7.619048 5.000000
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mlr-form2.html |
2\.3 Matrix Form 2
------------------
In this form, we have the explanatory variables in a matrix on the right of our parameter matrix as in Form 1b but we arrange everything a little differently:
\\\[\\begin{equation}
\\tag{2\.7}
\\begin{bmatrix}stack.loss\_1\\\\stack.loss\_2\\\\stack.loss\_3\\\\stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}
\\beta\&0\&0\&0\\\\
0\&\\beta\&0\&0\\\\
0\&0\&\\beta\&0\\\\
0\&0\&0\&\\beta
\\end{bmatrix}
\\begin{bmatrix}air\_1\\\\air\_2\\\\air\_3\\\\air\_4\\end{bmatrix}
\+
\\begin{bmatrix}
\\alpha\\\\
\\alpha\\\\
\\alpha\\\\
\\alpha
\\end{bmatrix} \+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}
\\end{equation}\\]
Work through the matrix algebra to make sure you understand why Equation [(2\.7\)](sec-mlr-form2.html#eq:stackloss-form2) is the same as Equation [(2\.1\)](sec-mlr-lr1.html#eq:stacklossi) for all the \\(i\\) data points together.
We will write Form 2 succinctly as
\\\[\\begin{equation}
\\tag{2\.8}
\\mathbf{y}\=\\mathbf{Z}\\mathbf{x}\+\\mathbf{a}\+\\mathbf{e}
\\end{equation}\\]
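Here is a rough sketch of Form 2 in R for the simple air flow regression, refitting the model from Section 2\.2 and checking that \\(\\mathbf{Z}\\mathbf{x}\+\\mathbf{a}\\) reproduces the fitted values. The object names `Z2`, `x2` and `a2` are just ours for this illustration.
```
fit = lm(stack.loss ~ Air.Flow, data = dat)
beta = coef(fit)[2]
alpha = coef(fit)[1]
Z2 = diag(beta, 4)                   # beta on the diagonal, 0s elsewhere
x2 = matrix(dat$Air.Flow, ncol = 1)  # the explanatory variable
a2 = matrix(alpha, 4, 1)             # the intercepts
cbind(Z2 %*% x2 + a2, fitted(fit))   # the two columns match
```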
### 2\.3\.1 Form 2 with multiple explanatory variables
The \\(\\mathbf{x}\\) is a column vector of the explanatory variables. If we have more explanatory variables, we add them to the column vector at the bottom. So if we had air flow, water temperature and acid concentration as explanatory variables, \\(\\mathbf{x}\\) looks like
\\\[\\begin{equation}
\\tag{2\.9}
\\begin{bmatrix}air\_1 \\\\ air\_2 \\\\ air\_3 \\\\ air\_4 \\\\ water\_1 \\\\ water\_2 \\\\ water\_3 \\\\ water\_4 \\\\ acid\_1 \\\\ acid\_2 \\\\ acid\_3 \\\\ acid\_4 \\end{bmatrix}
\\end{equation}\\]
Add columns to the \\(\\mathbf{Z}\\) matrix for each new variable.
\\\[\\begin{equation}
\\begin{bmatrix}
\\beta\_1 \& 0 \& 0 \& 0 \& \\beta\_2 \& 0 \& 0 \& 0 \& \\beta\_3 \& 0 \& 0 \& 0\\\\
0 \& \\beta\_1 \& 0 \& 0 \& 0 \& \\beta\_2 \& 0 \& 0 \& 0 \& \\beta\_3 \& 0 \& 0\\\\
0\&0\&\\beta\_1\&0\&0\&0\&\\beta\_2\&0\&0\&0\&\\beta\_3\&0\\\\
0\&0\&0\&\\beta\_1\&0\&0\&0\&\\beta\_2\&0\&0\&0\&\\beta\_3
\\end{bmatrix}
\\end{equation}\\]
The number of rows of \\(\\mathbf{Z}\\) is always \\(n\\), the number of rows of \\(\\mathbf{y}\\), because the number of rows on the left and right of the equal sign must match. The number of columns in \\(\\mathbf{Z}\\) is determined by the size of \\(\\mathbf{x}\\). Each explanatory variable (like air flow and water temperature) appears \\(n\\) times (\\(air\_1\\), \\(air\_2\\), \\(\\dots\\), \\(air\_n\\), etc). So if the number of explanatory variables is \\(k\\), the number of columns in \\(\\mathbf{Z}\\) is \\(k \\times n\\). The \\(\\mathbf{a}\\) column matrix holds the intercept terms.
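As a sketch of how you might build this larger \\(\\mathbf{Z}\\) and \\(\\mathbf{x}\\) in R for the three\-variable stack loss model (reusing `fit1.mult` from Section 2\.2; the object names are ours), note that each block is \\(\\beta\_k\\) times an identity matrix, which `kronecker()` builds in one step.
```
n = 4
betas = coef(fit1.mult)[2:4]                      # beta_1, beta_2, beta_3
Z2 = kronecker(matrix(betas, nrow = 1), diag(n))  # [beta_1*I, beta_2*I, beta_3*I]
x2 = matrix(c(dat$Air.Flow, dat$Water.Temp, dat$Acid.Conc.), ncol = 1)
a2 = matrix(coef(fit1.mult)[1], n, 1)             # the intercepts
cbind(Z2 %*% x2 + a2, fitted(fit1.mult))          # the two columns match
```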
### 2\.3\.2 When does Form 2 arise?
Form 2 is similar to how multivariate time series models are typically written for reading by humans (on a whiteboard or paper). In these models, we see equations like this:
\\\[\\begin{equation}
\\tag{2\.10}
\\begin{bmatrix}y\_1\\\\y\_2\\\\y\_3\\\\y\_4\\end{bmatrix}\_t
\=
\\begin{bmatrix}
\\beta\_a\&\\beta\_b\\\\
\\beta\_a\&0\.1\\\\
\\beta\_b\&\\beta\_a\\\\
0\&\\beta\_a
\\end{bmatrix}
\\begin{bmatrix}x\_1 \\\\ x\_2 \\end{bmatrix}\_t
\+
\\begin{bmatrix}
a\\\\
a\\\\
a\\\\
a
\\end{bmatrix} \+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}\_t
\\end{equation}\\]
In this case, \\(\\mathbf{y}\_t\\) is the set of four observations at time \\(t\\) and \\(\\mathbf{x}\_t\\) is the set of two explanatory variables at time \\(t\\). The \\(\\mathbf{Z}\\) is showing how we are modeling the effects of \\(x\_1\\) and \\(x\_2\\) on the \\(y\\)s. Notice that the effects are not consistent across the \\(x\\) and \\(y\\). This model would not be possible to fit with `lm()` but will be easy to fit with `MARSS()`.
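As an aside, a \\(\\mathbf{Z}\\) like the one in Equation (2\.10\), with some elements shared, some estimated and some fixed, can be written down in R as a list matrix: character strings name the parameters to be estimated and numbers give the fixed values. This is a sketch of that idea (the style the MARSS package uses to specify such matrices); the actual model fitting with `MARSS()` is not shown here.
```
# Z from Equation (2.10): strings are estimated parameters, numbers are fixed
Z = matrix(list("beta_a", "beta_a", "beta_b", 0,
                "beta_b", 0.1, "beta_a", "beta_a"),
           nrow = 4, ncol = 2)
Z
```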
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mlr-intercepts.html |
2\.4 Groups of intercepts
-------------------------
Let’s say that the odd numbered plants are in the north and the even numbered are in the south. We want to include this as a factor in our model that affects the intercept. Let’s go back to just having air flow be our explanatory variable. Now if the plant is in the north our model is
\\\[\\begin{equation}
\\tag{2\.11}
stack.loss\_i \= \\alpha\_n \+ \\beta air\_i \+ e\_i, \\text{ where } e\_i \\sim \\text{N}(0,\\sigma^2\)
\\end{equation}\\]
If the plant is in the south, our model is
\\\[\\begin{equation}
\\tag{2\.12}
stack.loss\_i \= \\alpha\_s \+ \\beta air\_i \+ e\_i, \\text{ where } e\_i \\sim \\text{N}(0,\\sigma^2\)
\\end{equation}\\]
We’ll add north/south as a factor called `reg` (region) to our dataframe:
```
dat = cbind(dat, reg = rep(c("n", "s"), 4)[1:4])
dat
```
```
Air.Flow Water.Temp Acid.Conc. stack.loss reg
1 80 27 89 42 n
2 80 27 88 37 s
3 75 25 90 37 n
4 62 24 87 28 s
```
And we can easily fit this model with `lm()`.
```
fit2 = lm(stack.loss ~ -1 + Air.Flow + reg, data = dat)
coef(fit2)
```
```
Air.Flow regn regs
0.5358166 -2.0257880 -5.5429799
```
The \-1 is added to the `lm()` call to get rid of \\(\\alpha\\). We just want the \\(\\alpha\_n\\) and \\(\\alpha\_s\\) intercepts coming from our regions.
### 2\.4\.1 North/South intercepts in Form 1
Written in matrix form, Form 1 for this model is
\\\[\\begin{equation}
\\tag{2\.13}
\\begin{bmatrix}stack.loss\_1\\\\ stack.loss\_2\\\\ stack.loss\_3\\\\ stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}air\_1\&1\&0\\\\ air\_2\&0\&1 \\\\air\_3\&1\&0\\\\air\_4\&0\&1\\end{bmatrix}
\\begin{bmatrix}\\beta \\\\ \\alpha\_n \\\\ \\alpha\_s \\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}
\\end{equation}\\]
Notice that odd plants get \\(\\alpha\_n\\) and even plants get \\(\\alpha\_s\\). Use `model.matrix()` to see that this is the \\(\\mathbf{Z}\\) matrix that `lm()` formed. Notice the matrix output by `model.matrix()` looks exactly like \\(\\mathbf{Z}\\) in Equation [(2\.13\)](sec-mlr-intercepts.html#eq:stackloss-form1-ns).
```
Z = model.matrix(fit2)
Z[1:4, ]
```
```
Air.Flow regn regs
1 80 1 0
2 80 0 1
3 75 1 0
4 62 0 1
```
We can solve for the parameters using \\(\\mathbf{x} \= (\\mathbf{Z}^\\top\\mathbf{Z})^{\-1}\\mathbf{Z}^\\top\\mathbf{y}\\) as we did for Form 1 before by adding on the 1s and 0s columns we see in the \\(\\mathbf{Z}\\) matrix in Equation [(2\.13\)](sec-mlr-intercepts.html#eq:stackloss-form1-ns). We could build this \\(\\mathbf{Z}\\) using the following R code:
```
Z = cbind(dat$Air.Flow, c(1, 0, 1, 0), c(0, 1, 0, 1))
colnames(Z) = c("beta", "regn", "regs")
```
Or just use `model.matrix()`. This will save time when models are more complex.
```
Z = model.matrix(fit2)
Z[1:4, ]
```
```
Air.Flow regn regs
1 80 1 0
2 80 0 1
3 75 1 0
4 62 0 1
```
Now we can solve for the parameters:
```
y = matrix(dat$stack.loss, ncol = 1)
solve(t(Z) %*% Z) %*% t(Z) %*% y
```
```
[,1]
Air.Flow 0.5358166
regn -2.0257880
regs -5.5429799
```
Compare to the output from `lm()` and you will see it is the same.
```
coef(fit2)
```
```
Air.Flow regn regs
0.5358166 -2.0257880 -5.5429799
```
### 2\.4\.2 North/South intercepts in Form 2
We would write this model in Form 2 as
\\\[\\begin{equation}
\\tag{2\.14}
\\begin{bmatrix}stack.loss\_1\\\\ stack.loss\_2\\\\ stack.loss\_3\\\\ stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}
\\beta\&0\&0\&0\\\\
0\&\\beta\&0\&0\\\\
0\&0\&\\beta\&0\\\\
0\&0\&0\&\\beta
\\end{bmatrix}\\begin{bmatrix}air\_1\\\\air\_2\\\\air\_3\\\\air\_4\\end{bmatrix}
\+
\\begin{bmatrix}
\\alpha\_n\\\\
\\alpha\_s\\\\
\\alpha\_n\\\\
\\alpha\_s
\\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}\=\\mathbf{Z}\\mathbf{x}\+\\mathbf{a}\+\\mathbf{e}
\\end{equation}\\]
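As a quick sanity check, we can rebuild the Form 2 pieces from the `lm()` estimates above and confirm that `Z %*% x + a` reproduces the fitted values:
```
b = coef(fit2)
Z = diag(b["Air.Flow"], 4)  # beta on the diagonal
x = matrix(dat$Air.Flow, ncol = 1)
a = matrix(b[c("regn", "regs", "regn", "regs")], ncol = 1)
cbind(form2 = as.vector(Z %*% x + a), lm = fitted(fit2))
```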
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mlr-betas.html |
2\.5 Groups of \\(\\beta\\)’s
-----------------------------
Now let’s say that the plants have different owners, Sue and Aneesh, and we want to have \\(\\beta\\) for the air flow effect vary by owner. If the plant is in the north and owned by Sue, the model is
\\\[\\begin{equation}
\\tag{2\.15}
stack.loss\_i \= \\alpha\_n \+ \\beta\_s air\_i \+ e\_i, \\text{ where } e\_i \\sim \\text{N}(0,\\sigma^2\)
\\end{equation}\\]
If it is in the south and owned by Aneesh, the model is
\\\[\\begin{equation}
\\tag{2\.16}
stack.loss\_i \= \\alpha\_s \+ \\beta\_a air\_i \+ e\_i, \\text{ where } e\_i \\sim \\text{N}(0,\\sigma^2\)
\\end{equation}\\]
You get the idea.
Now we need to add an operator variable as a factor in our stackloss dataframe. Plants 1,3 are run by Sue and plants 2,4 are run by Aneesh.
```
dat = cbind(dat, owner = c("s", "a"))
dat
```
```
Air.Flow Water.Temp Acid.Conc. stack.loss reg owner
1 80 27 89 42 n s
2 80 27 88 37 s a
3 75 25 90 37 n s
4 62 24 87 28 s a
```
Since the operator names can be recycled to the length of our data set, R fills in the operator column by repeating our string of operator names to the right length, conveniently (or alarmingly).
We can easily fit this model with `lm()` using the “:” notation.
```
coef(lm(stack.loss ~ -1 + Air.Flow:owner + reg, data = dat))
```
```
regn regs Air.Flow:ownera Air.Flow:owners
-38.0 -3.0 0.5 1.0
```
Notice that we have 4 data points and are estimating 4 parameters. We are not going to be able to estimate any more parameters than data points. If we want to estimate any more, we’ll need to use the full stackloss dataset (which has 21 data points).
### 2\.5\.1 Owner \\(\\beta\\)’s in Form 1
Written in Form 1, this model is
\\\[\\begin{equation}
\\tag{2\.17}
\\begin{bmatrix}stack.loss\_1\\\\ stack.loss\_2\\\\ stack.loss\_3\\\\ stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}1\&0\&0\&air\_1\\\\ 0\&1\&air\_2\&0 \\\\ 1\&0\&0\&air\_3\\\\ 0\&1\&air\_4\&0\\end{bmatrix}
\\begin{bmatrix}\\alpha\_n \\\\ \\alpha\_s \\\\ \\beta\_a \\\\ \\beta\_s \\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}\=\\mathbf{Z}\\mathbf{x}\+\\mathbf{e}
\\end{equation}\\]
The air data have been written to the right of the 1s and 0s for north/south intercepts because that is how `lm()` writes this model in Form 1 and I want to duplicate that (for teaching purposes). Also the \\(\\beta\\)’s are ordered to be alphabetical because `lm()` writes the \\(\\mathbf{Z}\\) matrix like that.
Now our model is more complicated and using `model.matrix()` to get our \\(\\mathbf{Z}\\) saves us a lot of tedious matrix building.
```
fit3 = lm(stack.loss ~ -1 + Air.Flow:owner + reg, data = dat)
Z = model.matrix(fit3)
Z[1:4, ]
```
```
regn regs Air.Flow:ownera Air.Flow:owners
1 1 0 0 80
2 0 1 80 0
3 1 0 0 75
4 0 1 62 0
```
Notice the matrix output by `model.matrix()` looks exactly like \\(\\mathbf{Z}\\) in Equation [(2\.17\)](sec-mlr-betas.html#eq:stackloss-form1-owner) (ignore the attributes info). Now we can solve for the parameters:
```
y = matrix(dat$stack.loss, ncol = 1)
solve(t(Z) %*% Z) %*% t(Z) %*% y
```
```
[,1]
regn -38.0
regs -3.0
Air.Flow:ownera 0.5
Air.Flow:owners 1.0
```
Compare to the output from `lm()` and you will see it is the same.
### 2\.5\.2 Owner \\(\\beta\\)’s in Form 2
To write this model in Form 2, we just add subscripts to the \\(\\beta\\)’s in our Form 2 \\(\\mathbf{Z}\\) matrix:
\\\[\\begin{equation}
\\tag{2\.18}
\\begin{bmatrix}stack.loss\_1\\\\ stack.loss\_2\\\\ stack.loss\_3\\\\ stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}
\\beta\_s\&0\&0\&0\\\\
0\&\\beta\_a\&0\&0\\\\
0\&0\&\\beta\_s\&0\\\\
0\&0\&0\&\\beta\_a
\\end{bmatrix}\\begin{bmatrix}air\_1\\\\air\_2\\\\air\_3\\\\air\_4\\end{bmatrix}
\+
\\begin{bmatrix}
\\alpha\_n\\\\
\\alpha\_s\\\\
\\alpha\_n\\\\
\\alpha\_s
\\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}\=\\mathbf{Z}\\mathbf{x}\+\\mathbf{a}\+\\mathbf{e}
\\end{equation}\\]
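As before, a quick check that this Form 2 representation matches the `lm()` fit: build the diagonal matrix of owner slopes and the intercept vector from the estimates and compare `Z %*% x + a` to the fitted values.
```
b = coef(fit3)
Z = diag(b[c("Air.Flow:owners", "Air.Flow:ownera",
             "Air.Flow:owners", "Air.Flow:ownera")])
x = matrix(dat$Air.Flow, ncol = 1)
a = matrix(b[c("regn", "regs", "regn", "regs")], ncol = 1)
cbind(form2 = as.vector(Z %*% x + a), lm = fitted(fit3))
```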
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mlr-season-factor.html |
2\.6 Seasonal effect as a factor
--------------------------------
Let’s imagine that the data were taken consecutively in time by quarter. We want to model the seasonal effect as an intercept change. We will drop all other effects for now.
If the data were collected in quarter 1, the model is
\\\[\\begin{equation}
\\tag{2\.19}
stack.loss\_i \= \\alpha\_1 \+ e\_i, \\text{ where } e\_i \\sim \\text{N}(0,\\sigma^2\)
\\end{equation}\\]
If collected in quarter 2, the model is
\\\[\\begin{equation}
\\tag{2\.20}
stack.loss\_i \= \\alpha\_2 \+ e\_i, \\text{ where } e\_i \\sim \\text{N}(0,\\sigma^2\)
\\end{equation}\\]
etc.
We add a column to our dataframe to account for season:
```
dat = cbind(dat, qtr = paste(rep("qtr", 4), 1:4, sep = ""))
dat
```
```
Air.Flow Water.Temp Acid.Conc. stack.loss reg owner qtr
1 80 27 89 42 n s qtr1
2 80 27 88 37 s a qtr2
3 75 25 90 37 n s qtr3
4 62 24 87 28 s a qtr4
```
And we can easily fit this model with `lm()`.
```
coef(lm(stack.loss ~ -1 + qtr, data = dat))
```
```
qtrqtr1 qtrqtr2 qtrqtr3 qtrqtr4
42 37 37 28
```
The \-1 is added to the `lm()` call to get rid of \\(\\alpha\\). We just want the \\(\\alpha\_1\\), \\(\\alpha\_2\\), etc. intercepts coming from our quarters.
For comparison look at
```
coef(lm(stack.loss ~ qtr, data = dat))
```
```
(Intercept) qtrqtr2 qtrqtr3 qtrqtr4
42 -5 -5 -14
```
Why does it look like that when \-1 is missing from the `lm()` call? Where did the intercept for quarter 1 go and why are the other intercepts so much smaller?
### 2\.6\.1 Seasonal intercepts written in Form 1
Remembering that `lm()` puts models in Form 1, look at the \\(\\mathbf{Z}\\) matrix for Form 1:
```
fit4 = lm(stack.loss ~ -1 + qtr, data = dat)
Z = model.matrix(fit4)
Z[1:4, ]
```
```
qtrqtr1 qtrqtr2 qtrqtr3 qtrqtr4
1 1 0 0 0
2 0 1 0 0
3 0 0 1 0
4 0 0 0 1
```
Written in Form 1, this model is
\\\[\\begin{equation}
\\tag{2\.21}
\\begin{bmatrix}stack.loss\_1\\\\ stack.loss\_2\\\\ stack.loss\_3\\\\ stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}1\&0\&0\&0\\\\ 0\&1\&0\&0 \\\\ 0\&0\&1\&0\\\\ 0\&0\&0\&1\\end{bmatrix}
\\begin{bmatrix}\\alpha\_1 \\\\ \\alpha\_2 \\\\ \\alpha\_3 \\\\ \\alpha\_4 \\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}\=\\mathbf{Z}\\mathbf{x}\+\\mathbf{e}
\\end{equation}\\]
Compare to the model that `lm()` is using when the intercept is included. What does this model look like written in matrix form?
```
fit5 = lm(stack.loss ~ qtr, data = dat)
Z = model.matrix(fit5)
Z[1:4, ]
```
```
(Intercept) qtrqtr2 qtrqtr3 qtrqtr4
1 1 0 0 0
2 1 1 0 0
3 1 0 1 0
4 1 0 0 1
```
### 2\.6\.2 Seasonal intercepts written in Form 2
We do not need to add 1s and 0s to our \\(\\mathbf{Z}\\) matrix in Form 2; we just add subscripts to our intercepts matrix like we did when we had north\-south intercepts. In this model, we do not have any explanatory variables so \\(\\mathbf{Z}\\mathbf{x}\\) does not appear.
\\\[\\begin{equation}
\\tag{2\.22}
\\begin{bmatrix}stack.loss\_1\\\\ stack.loss\_2\\\\ stack.loss\_3\\\\ stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}
\\alpha\_1\\\\
\\alpha\_2\\\\
\\alpha\_3\\\\
\\alpha\_4
\\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}\=\\mathbf{a}\+\\mathbf{e}
\\end{equation}\\]
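Since there are no explanatory variables and our little data set has one observation per quarter, the quarter intercepts are simply the observations themselves, which is easy to confirm:
```
cbind(a = coef(fit4), y = dat$stack.loss)
```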
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mlr-season-w-var.html |
2\.7 Seasonal effect plus other explanatory variables\*
-------------------------------------------------------
With our four data points, we are limited to estimating four parameters. Let’s use the full 21 data points so we can estimate some more complex models. We’ll add an owner variable and a quarter variable to the stackloss dataset.
```
data(stackloss, package = "datasets")
fulldat = stackloss
n = nrow(fulldat)
fulldat = cbind(fulldat, owner = rep(c("sue", "aneesh", "joe"),
n)[1:n], qtr = paste("qtr", rep(1:4, n)[1:n], sep = ""),
reg = rep(c("n", "s"), n)[1:n])
```
Let’s fit a model where there is only an effect of air flow, but that effect varies by owner and by quarter. We also want a different intercept for each quarter. So if datapoint \\(i\\) is from quarter \\(j\\) on a plant owned by owner \\(k\\), the model is
\\\[\\begin{equation}
\\tag{2\.23}
stack.loss\_i \= \\alpha\_j \+ \\beta\_{j,k} air\_i \+ e\_i
\\end{equation}\\]
So there are \\(4 \\times 3\\) \\(\\beta\\)’s (4 quarters and 3 owners) and 4 \\(\\alpha\\)’s (4 quarters).
With `lm()`, we fit the model as:
```
fit7 = lm(stack.loss ~ -1 + qtr + Air.Flow:qtr:owner, data = fulldat)
```
Take a look at \\(\\mathbf{Z}\\) for Form 1 using `model.matrix(fit7)`. It’s not shown here since it is large:
```
model.matrix(fit7)
```
The \\(\\mathbf{x}\\) will be
\\\[\\begin{equation}
\\begin{bmatrix}\\alpha\_1 \\\\ \\alpha\_2 \\\\ \\alpha\_3 \\\\ \\alpha\_4 \\\\ \\beta\_{1,a} \\\\ \\beta\_{2,a} \\\\ \\beta\_{3,a} \\\\ \\dots \\end{bmatrix}
\\end{equation}\\]
Take a look at the model matrix that `lm()` is using and make sure you understand how \\(\\mathbf{Z}\\mathbf{x}\\) produces Equation [(2\.23\)](sec-mlr-season-w-var.html#eq:stackloss-mult-beta).
```
Z = model.matrix(fit7)
```
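As in the earlier examples, we can also solve for the parameters directly with the normal equations and check that they match what `lm()` returned:
```
y = matrix(fulldat$stack.loss, ncol = 1)
cbind(direct = as.vector(solve(t(Z) %*% Z) %*% t(Z) %*% y), lm = coef(fit7))
```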
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mlr-confound.html |
2\.8 Models with confounded parameters\*
----------------------------------------
Try adding region as another factor in your model along with quarter and fit with `lm()`:
```
coef(lm(stack.loss ~ -1 + Air.Flow + reg + qtr, data = fulldat))
```
```
Air.Flow regn regs qtrqtr2 qtrqtr3 qtrqtr4
1.066524 -49.024320 -44.831760 -3.066094 3.499428 NA
```
The estimate for quarter 1 is gone (actually it was set to 0\) and the estimate for quarter 4 is NA. Look at the \\(\\mathbf{Z}\\) matrix for Form 1 and see if you can figure out the problem. Try also writing out the model for the 1st plant and you’ll see part of the problem and why the estimate for quarter 1 is fixed at 0\.
```
fit = lm(stack.loss ~ -1 + Air.Flow + reg + qtr, data = fulldat)
Z = model.matrix(fit)
```
But why is the estimate for quarter 4 equal to NA? What if the ordering of north and south regions was different, say 1 through 4 north, 5 through 8 south, 9 through 12 north, etc?
```
fulldat2 = fulldat
fulldat2$reg2 = rep(c("n", "n", "n", "n", "s", "s", "s", "s"),
3)[1:21]
fit = lm(stack.loss ~ Air.Flow + reg2 + qtr, data = fulldat2)
coef(fit)
```
```
(Intercept) Air.Flow reg2s qtrqtr2 qtrqtr3 qtrqtr4
-45.6158421 1.0407975 -3.5754722 0.7329027 3.0389763 3.6960928
```
Now an estimate for quarter 4 appears.
The problem is two\-fold. First, by having both region and quarter intercepts, we created models where 2 intercepts appear in the model for a single \\(i\\) and we cannot estimate both. `lm()` helps us out by setting one of the factor effects to 0\. It will choose the first alphabetically. But as we saw with the model where odd\-numbered plants were north and even\-numbered were south, we can still have a situation where one of the intercepts is non\-identifiable. `lm()` alerts us to the problem by setting that estimate to NA.
Once you start developing your own models, you will need to make sure that all your parameters are identifiable. If they are not, your code will simply 'chase its tail'. The code will generally take forever to converge, or, if you did not try different starting conditions, it may look like it converged when in fact the estimates for the confounded parameters are meaningless. So you will need to think carefully about the model you are fitting and consider whether there are multiple parameters measuring the same thing (for example, 2 intercept parameters).
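One simple way to catch this kind of problem is to compare the rank of the model matrix to its number of columns; if the rank is smaller, some parameters are confounded. A sketch for the model above:
```
Z = model.matrix(~-1 + Air.Flow + reg + qtr, data = fulldat)
c(rank = qr(Z)$rank, columns = ncol(Z))
```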
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mlr-solveform2.html |
2\.9 Solving for the parameters for Form 2\*
--------------------------------------------
Solving for the parameters when the model is written in Form 2 is not straight\-forward. We could re\-write the model in Form 1, or another approach is to use Kronecker products and permutation matrices.
To solve for \\(\\alpha\\) and \\(\\beta\\), we need our parameters in a column matrix like so \\(\\left\[ \\begin{smallmatrix}\\alpha\\\\\\beta\\end{smallmatrix} \\right]\\). We start by moving the intercept matrix, \\(\\mathbf{a}\\) into \\(\\mathbf{Z}\\).
\\\[\\begin{equation}
\\tag{2\.24}
\\begin{bmatrix}stack.loss\_1\\\\stack.loss\_2\\\\stack.loss\_3\\\\stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}
\\alpha\&\\beta\&0\&0\&0\\\\
\\alpha\&0\&\\beta\&0\&0\\\\
\\alpha\&0\&0\&\\beta\&0\\\\
\\alpha\&0\&0\&0\&\\beta
\\end{bmatrix}
\\begin{bmatrix}1\\\\air\_1\\\\air\_2\\\\air\_3\\\\air\_4\\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}
\= \\mathbf{Z}\\mathbf{x} \+ \\mathbf{e}.
\\end{equation}\\]
Then we rewrite \\(\\mathbf{Z}\\mathbf{x}\\) in Equation [(2\.24\)](sec-mlr-solveform2.html#eq:stackloss-form2-solve) in \`vec’ form: if \\(\\mathbf{Z}\\) is a \\(n \\times m\\) matrix and \\(\\mathbf{x}\\) is a matrix with 1 column and \\(m\\) rows, then \\(\\mathbf{Z}\\mathbf{x} \= (\\mathbf{x}^\\top \\otimes \\mathbf{I}\_n)\\,\\text{vec}(\\mathbf{Z})\\). The symbol \\(\\otimes\\) means Kronecker product and just ignore it since you’ll never see it again in our course (or google ‘kronecker product’ if you are curious). The “vec” of a matrix is that matrix rearranged as a single column:
\\\[\\begin{equation\*}
\\,\\text{vec} \\begin{bmatrix}
1\&2\\\\
3\&4
\\end{bmatrix} \= \\begin{bmatrix}
1\\\\3\\\\2\\\\4
\\end{bmatrix}
\\end{equation\*}\\]
Notice how you just take each column one by one and stack them under each other. In R, the vec is
```
A = matrix(1:6, nrow = 2, byrow = TRUE)
vecA = matrix(A, ncol = 1)
```
\\(\\mathbf{I}\_n\\) is a \\(n \\times n\\) identity matrix, a diagonal matrix with all 0s on the off\-diagonals and all 1s on the diagonal. In R, this is simply `diag(n)`.
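We can convince ourselves of this identity with a small numerical check:
```
Zex = matrix(1:6, nrow = 2)         # a 2 x 3 matrix
xex = matrix(c(1, 2, 3), ncol = 1)  # a 3 x 1 column vector
## Z x should equal (t(x) kronecker I_n) vec(Z)
all.equal(Zex %*% xex, kronecker(t(xex), diag(2)) %*% matrix(Zex, ncol = 1))
```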
To show how we solve for \\(\\alpha\\) and \\(\\beta\\), let’s use an example with only 3 data points so Equation [(2\.24\)](sec-mlr-solveform2.html#eq:stackloss-form2-solve) becomes:
\\\[\\begin{equation}
\\tag{2\.25}
\\begin{bmatrix}stack.loss\_1\\\\stack.loss\_2\\\\stack.loss\_3\\end{bmatrix}
\=
\\begin{bmatrix}
\\alpha\&\\beta\&0\&0\\\\
\\alpha\&0\&\\beta\&0\\\\
\\alpha\&0\&0\&\\beta
\\end{bmatrix}
\\begin{bmatrix}1\\\\air\_1\\\\air\_2\\\\air\_3\\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\end{bmatrix}
\\end{equation}\\]
Using \\(\\mathbf{Z}\\mathbf{x} \= (\\mathbf{x}^\\top \\otimes \\mathbf{I}\_n)\\,\\text{vec}(\\mathbf{Z})\\), this means
\\\[\\begin{equation}
\\begin{bmatrix}
\\alpha\&\\beta\&0\&0\\\\
\\alpha\&0\&\\beta\&0\\\\
\\alpha\&0\&0\&\\beta
\\end{bmatrix}
\\begin{bmatrix}1\\\\air\_1\\\\air\_2\\\\air\_3\\end{bmatrix}
\=\\big(\\begin{bmatrix}1\&air\_1\&air\_2\& air\_3\\end{bmatrix} \\otimes \\begin{bmatrix}1\&0\&0\\\\ 0\&1\&0 \\\\ 0\&0\&1 \\end{bmatrix} \\bigr)
\\begin{bmatrix}
\\alpha\\\\
\\alpha\\\\
\\alpha\\\\
\\beta\\\\
0\\\\
0\\\\
0\\\\
\\beta\\\\
0\\\\
0\\\\
0\\\\
\\beta
\\end{bmatrix}
\\end{equation}\\]
We need to rewrite the \\(\\,\\text{vec}(\\mathbf{Z})\\) as a \`permutation’ matrix times \\(\\left\[ \\begin{smallmatrix}\\alpha\\\\\\beta\\end{smallmatrix} \\right]\\):
\\\[\\begin{equation}
\\begin{bmatrix}
\\alpha\\\\
\\alpha\\\\
\\alpha\\\\
\\beta\\\\
0\\\\
0\\\\
0\\\\
\\beta\\\\
0\\\\
0\\\\
0\\\\
\\beta
\\end{bmatrix}
\=
\\begin{bmatrix}
1\&0\\\\
1\&0\\\\
1\&0\\\\
0\&1\\\\
0\&0\\\\
0\&0\\\\
0\&0\\\\
0\&1\\\\
0\&0\\\\
0\&0\\\\
0\&0\\\\
0\&1\\\\
\\end{bmatrix}
\\begin{bmatrix}
\\alpha\\\\
\\beta
\\end{bmatrix} \= \\mathbf{P}\\mathbf{p}
\\end{equation}\\]
where \\(\\mathbf{P}\\) is the permutation matrix and \\(\\mathbf{p}\=\\left\[ \\begin{smallmatrix}\\alpha\\\\\\beta\\end{smallmatrix} \\right]\\).
Thus,
\\\[\\begin{equation}
\\tag{2\.26}
\\mathbf{y}\=\\mathbf{Z}\\mathbf{x}\+\\mathbf{e} \= (\\mathbf{x}^\\top \\otimes \\mathbf{I}\_n)\\mathbf{P}\\begin{bmatrix}\\alpha\\\\ \\beta\\end{bmatrix} \= \\mathbf{M}\\mathbf{p} \+ \\mathbf{e}
\\end{equation}\\]
where \\(\\mathbf{M}\=(\\mathbf{x}^\\top \\otimes \\mathbf{I}\_n)\\mathbf{P}\\).
We can solve for \\(\\mathbf{p}\\), the parameters, using
\\\[(\\mathbf{M}^\\top\\mathbf{M})^{\-1}\\mathbf{M}^\\top\\mathbf{y}\\]
as before.
#### 2\.9\.0\.1 Code to solve for parameters in Form 2
In the homework, you will use the R code in this section to solve for the parameters in Form 2\.
```
#make your y and x matrices
y=matrix(dat$stack.loss, ncol=1)
x=matrix(c(1,dat$Air.Flow),ncol=1)
#make the Z matrix
n=nrow(dat) #number of rows in our data file
k=1
#Z has n rows and 1 col for intercept, and n cols for the n air data points
#a list matrix allows us to combine "characters" and numbers
Z=matrix(list(0),n,k*n+1)
Z[,1]="alpha"
diag(Z[1:n,1+1:n])="beta"
#this function creates that permutation matrix for you
P=MARSS:::convert.model.mat(Z)$free[,,1]
M=kronecker(t(x),diag(n))%*%P
solve(t(M)%*%M)%*%t(M)%*%y
```
```
[,1]
alpha -11.6159170
beta 0.6412918
```
```
coef(lm(dat$stack.loss ~ dat$Air.Flow))
```
```
(Intercept) dat$Air.Flow
-11.6159170 0.6412918
```
Go through this code line by line at the R command line. Look at `Z`. It is a list matrix that allows you to combine numbers (the 0s) with character strings (the names of parameters). Notice that `class(Z[1,3])="numeric"` while `class(Z[1,2])="character"`. This is important. `0` in R is a number while `"0"` would be a character (the name of a parameter).
Look at the permutation matrix `P`. Try `MARSS:::convert.model.mat(Z)$free` and see that it returns a 3D matrix, which is why the `[,,1]` appears (to get us a 2D matrix). To use more data points, you can redefine
`dat` to say `dat=stackloss` to use all 21 data points.
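For example, a quick look at the pieces built above:
```
Z[1:2, 1:3]  # first rows of the list matrix: "alpha" in column 1, "beta" or 0 elsewhere
dim(P)       # vec(Z) has n*(n+1) rows and there are 2 parameters (alpha, beta)
```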
Here’s another example. Rewrite the model with multiple intercepts (Equation [(2\.14\)](sec-mlr-intercepts.html#eq:stackloss-form2-ns) ) as
\\\[\\begin{equation}
\\tag{2\.27}
\\begin{bmatrix}stack.loss\_1\\\\ stack.loss\_2\\\\ stack.loss\_3\\\\ stack.loss\_4\\end{bmatrix}
\=
\\begin{bmatrix}
\\alpha\_n\&\\beta\&0\&0\&0\\\\
\\alpha\_s\&0\&\\beta\&0\&0\\\\
\\alpha\_n\&0\&0\&\\beta\&0\\\\
\\alpha\_s\&0\&0\&0\&\\beta
\\end{bmatrix}\\begin{bmatrix}1\\\\air\_1\\\\air\_2\\\\air\_3\\\\air\_4\\end{bmatrix}
\+
\\begin{bmatrix}e\_1\\\\e\_2\\\\e\_3\\\\e\_4\\end{bmatrix}\=\\mathbf{Z}\\mathbf{x}\+\\mathbf{a}\+\\mathbf{e}
\\end{equation}\\]
To estimate the parameters, we need to be able to write a list matrix that looks like \\(\\mathbf{Z}\\) in Equation [(2\.27\)](sec-mlr-solveform2.html#eq:stackloss-form2-ns-compact). We can use the same code as above with \\(\\mathbf{Z}\\) changed to look like that in Equation [(2\.27\)](sec-mlr-solveform2.html#eq:stackloss-form2-ns-compact).
```
y = matrix(dat$stack.loss, ncol = 1)
x = matrix(c(1, dat$Air.Flow), ncol = 1)
n = nrow(dat)
k = 1
# list matrix allows us to combine numbers and character
# strings
Z = matrix(list(0), n, k * n + 1)
Z[seq(1, n, 2), 1] = "alphanorth"
Z[seq(2, n, 2), 1] = "alphasouth"
diag(Z[1:n, 1 + 1:n]) = "beta"
P = MARSS:::convert.model.mat(Z)$free[, , 1]
M = kronecker(t(x), diag(n)) %*% P
solve(t(M) %*% M) %*% t(M) %*% y
```
```
[,1]
alphanorth -2.0257880
alphasouth -5.5429799
beta 0.5358166
```
Similarly to estimate the parameters for Equation [(2\.18\)](sec-mlr-betas.html#eq:stackloss-form2-owners), we change the \\(\\beta\\)’s in our \\(\\mathbf{Z}\\) list matrix to have owner designations:
```
Z = matrix(list(0), n, k * n + 1)
Z[seq(1, n, 2), 1] = "alphanorth"
Z[seq(2, n, 2), 1] = "alphasouth"
diag(Z[1:n, 1 + 1:n]) = rep(c("beta.s", "beta.a"), n)[1:n]
P = MARSS:::convert.model.mat(Z)$free[, , 1]
M = kronecker(t(x), diag(n)) %*% P
solve(t(M) %*% M) %*% t(M) %*% y
```
```
[,1]
alphanorth -38.0
alphasouth -3.0
beta.s 1.0
beta.a 0.5
```
The parameter estimates are the same as with the model in Form 1, though the \\(\\beta\\)’s are given in reversed order simply due to the way `convert.model.mat()` orders the columns in Form 2’s \\(\\mathbf{Z}\\).
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mlr-problems.html |
2\.10 Problems
--------------
For the homework questions, we will using part of the `airquality` data set in R. Load that as
```
data(airquality, package="datasets")
#remove any rows with NAs omitted.
airquality=na.omit(airquality)
#make Month a factor (i.e., the Month number is a name rather than a number)
airquality$Month=as.factor(airquality$Month)
#add a region factor
airquality$region = rep(c("north","south"),60)[1:111]
#Only use 5 data points for the homework so you can show the matrices easily
homeworkdat = airquality[1:5,]
```
1. Using Form 1 \\(\\mathbf{y}\=\\mathbf{Z}\\mathbf{x}\+\\mathbf{e}\\), write out the model, showing the \\(\\mathbf{Z}\\) and \\(\\mathbf{x}\\) matrices, being fit by this command
```
fit = lm(Ozone ~ Wind + Temp, data = homeworkdat)
```
2. For the above model, write out the following R code.
1. Create the \\(\\mathbf{y}\\) and \\(\\mathbf{Z}\\) matrices in R.
2. Solve for \\(\\mathbf{x}\\) (the parameters). Show that they match what you get from the first `lm()` call.
3. Add \-1 to your `lm()` call in question 1:
```
fit = lm(Ozone ~ -1 + Wind + Temp, data = homeworkdat)
```
1. What changes in your model?
2. Write out the in Form 1 as an equation. Show the new \\(\\mathbf{Z}\\) and \\(\\mathbf{x}\\) matrices.
3. Solve for the parameters (\\(\\mathbf{x}\\)) and show they match what is returned by `lm()`.
4. For the model for question 1,
1. Write in Form 2 as an equation.
2. Adapt the code from subsection [2\.9\.0\.1](sec-mlr-solveform2.html#sec-mlr-solveform2code) and construct new `Z`, `y` and `x` in R code.
3. Solve for the parameters using the code from subsection [2\.9\.0\.1](sec-mlr-solveform2.html#sec-mlr-solveform2code).
5. A model of the ozone data with only a region (north/south) effect can be written:
```
fit = lm(Ozone ~ -1 + region, data = homeworkdat)
```
1. Write this model in Form 1 as an equation.
2. Solve for the parameter values and show that they match what you get from the `lm()` call.
6. Using the same model from question 5,
1. Write the model in Form 2 as an equation.
2. Write out the `Z` and `x` in R code.
3. Solve for the parameter values and show that they match what you get from the `lm()` call. To do this, you adapt the code from subsection [2\.9\.0\.1](sec-mlr-solveform2.html#sec-mlr-solveform2code).
7. Write the model below in Form 2 as an equation. Show the \\(\\mathbf{Z}\\), \\(\\mathbf{y}\\) and \\(\\mathbf{x}\\) matrices.
```
fit = lm(Ozone ~ Temp:region, data = homeworkdat)
```
8. Using the airquality dataset with 111 data points
1. Write the model below in Form 2\.
```
fit = lm(Ozone ~ -1 + Temp:region + Month, data = airquality)
```
2. Solve for the parameters by adapting code from subsection [2\.9\.0\.1](sec-mlr-solveform2.html#sec-mlr-solveform2code).
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/chap-ts.html |
Chapter 3 Introduction to time series
=====================================
At a very basic level, a time series is a set of observations taken sequentially in time. It is different than non\-temporal data because each data point has an order and is, typically, related to the data points before and after by some process.
A script with all the R code in the chapter can be downloaded [here](./Rcode/intro-to-ts.R). The Rmd for this chapter can be downloaded [here](./Rmds/intro-to-ts.Rmd).
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-ts-examples.html |
3\.1 Examples of time series
----------------------------
```
data(WWWusage, package = "datasets")
par(mai = c(0.9, 0.9, 0.1, 0.1), omi = c(0, 0, 0, 0))
plot.ts(WWWusage, ylab = "", las = 1, col = "blue", lwd = 2)
```
Figure 3\.1: Number of users connected to the internet
```
data(lynx, package = "datasets")
par(mai = c(0.9, 0.9, 0.1, 0.1), omi = c(0, 0, 0, 0))
plot.ts(lynx, ylab = "", las = 1, col = "blue", lwd = 2)
```
Figure 3\.2: Number of lynx trapped in Canada from 1821\-1934
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-ts-classification.html |
3\.2 Classification of time series
----------------------------------
A ts can be represented as a set
\\\[
\\{ x\_1,x\_2,x\_3,\\dots,x\_n \\}
\\]
For example,
\\\[
\\{ 10,31,27,42,53,15 \\}
\\]
It can be further classified.
### 3\.2\.1 By some *index set*
Interval across real time; \\(x(t)\\)
* begin/end: \\(t \\in \[1\.1,2\.5]\\)
Discrete time; \\(x\_t\\)
* Equally spaced: \\(t \= \\{1,2,3,4,5\\}\\)
* Equally spaced w/ missing value: \\(t \= \\{1,2,4,5,6\\}\\)
* Unequally spaced: \\(t \= \\{2,3,4,6,9\\}\\)
### 3\.2\.2 By the *underlying process*
Discrete (eg, total \# of fish caught per trawl)
Continuous (eg, salinity, temperature)
### 3\.2\.3 By the *number of values recorded*
Univariate/scalar (eg, total \# of fish caught)
Multivariate/vector (eg, \# of each spp of fish caught)
### 3\.2\.4 By the *type of values recorded*
Integer (eg, \# of fish in 5 min trawl \= 2413\)
Rational (eg, fraction of unclipped fish \= 47/951\)
Real (eg, fish mass \= 10\.2 g)
Complex (eg, cos(2 \\(\\pi\\) 2\.43\) \+ *i* sin(2 \\(\\pi\\) 2\.43\))
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-ts-stat-analysis.html |
3\.3 Statistical analyses of time series
----------------------------------------
Most statistical analyses are concerned with estimating properties of a population from a sample. For example, we use fish caught in a seine to infer the mean size of fish in a lake. Time series analysis, however, presents a different situation:
* Although we could vary the *length* of an observed time series, it is often impossible to make multiple observations at a *given* point in time
For example, one can’t observe today’s closing price of Microsoft stock more than once. Thus, conventional statistical procedures, based on large sample estimates, are inappropriate.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-ts-definition.html |
3\.4 What is a time series model?
---------------------------------
We use a time series model to analyze time series data. A *time series model* for \\(\\{x\_t\\}\\) is a specification of the joint distributions of a sequence of random variables \\(\\{X\_t\\}\\), of which \\(\\{x\_t\\}\\) is thought to be a realization.
Here is a plot of many realizations from a time series model.
Figure 3\.3: Distribution of realizations
These lines represent the distribution of possible realizations. However, we have only one realization. The time series model allows us to use the one realization we have to make inferences about the underlying joint distribution from whence our realization came.
Figure 3\.4: Blue line is our one realization.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-ts-two-examples.html |
3\.5 Two simple and classic time series models
----------------------------------------------
White noise: \\(x\_t \\sim N(0,1\)\\)
```
## `ww` (a matrix of several simulated white noise series) is assumed here;
## its definition is not shown in this excerpt
ww <- matrix(rnorm(100 * 10), nrow = 100, ncol = 10)
par(mai = c(0.9, 0.9, 0.1, 0.1), omi = c(0, 0, 0, 0))
matplot(ww, type = "l", lty = "solid", las = 1, ylab = expression(italic(x[t])),
    xlab = "Time", col = gray(0.5, 0.4))
```
Random walk: \\(x\_t \= x\_{t\-1} \+ w\_t,\~\\text{with}\~w\_t \\sim N(0,1\)\\)
```
par(mai = c(0.9, 0.9, 0.1, 0.1), omi = c(0, 0, 0, 0))
matplot(apply(ww, 2, cumsum), type = "l", lty = "solid", las = 1,
ylab = expression(italic(x[t])), xlab = "Time", col = gray(0.5,
0.4))
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-ts-classical-decomposition.html |
3\.6 Classical decomposition
----------------------------
Model time series \\(\\{x\_t\\}\\) as a combination of
1. trend (\\(m\_t\\))
2. seasonal component (\\(s\_t\\))
3. remainder (\\(e\_t\\))
\\(x\_t \= m\_t \+ s\_t \+ e\_t\\)
### 3\.6\.1 1\. The trend (\\(m\_t\\))
We need a way to extract the so\-called *signal*. One common method is via “linear filters”
\\\[
m\_t \= \\sum\_{i\=\-\\infty}^{\\infty} \\lambda\_i x\_{t\+i}
\\]
For example, a moving average
\\\[
m\_t \= \\sum\_{i\=\-a}^{a} \\frac{1}{2a \+ 1} x\_{t\+i}
\\]
If \\(a \= 1\\), then
\\\[
m\_t \= \\frac{1}{3}(x\_{t\-1} \+ x\_t \+ x\_{t\+1})
\\]
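For example, a 3\-point moving average (one point on each side) is easy to compute with `stats::filter()`; a small sketch using the airline passenger data from the **datasets** package:
```
x <- as.numeric(AirPassengers)
## equal weights of 1/3 give the centered 3-point moving average
m <- as.numeric(stats::filter(x, filter = rep(1/3, 3), sides = 2))
head(cbind(x, m))
```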
### 3\.6\.2 Example of linear filtering
Here is a time series.
Figure 3\.5: Monthly airline passengers from 1949\-1960
A linear filter with \\(a\=3\\) closely tracks the data.
Figure 3\.6: Monthly airline passengers from 1949\-1960 with a low filter.
As we increase the length of data that is averaged from 1 on each side (\\(a\=3\\)) to 4 on each side (\\(a\=9\\)), the trend line is smoother.
Figure 3\.7: Monthly airline passengers from 1949\-1960 with a medium filter.
When we increase up to 13 points on each side (\\(a\=27\\)), the trend line is very smooth.
Figure 3\.8: Monthly airline passengers from 1949\-1960 with a high filter.
### 3\.6\.3 2\. Seasonal effect (\\(s\_t\\))
Once we have an estimate of the trend \\(m\_t\\), we can estimate \\(s\_t\\) simply by subtraction:
\\\[
s\_t \= x\_t \- m\_t
\\]
This is the seasonal effect (\\(s\_t\\)), assuming \\(\\lambda \= 1/9\\), but \\(s\_t\\) also includes the remainder \\(e\_t\\). Instead we can estimate the mean seasonal effect (\\(s\_t\\)).
```
## `xx` is the monthly airline passenger series plotted above (assumed; its
## definition is not shown in this excerpt)
xx <- AirPassengers
seas_2 <- decompose(xx)$seasonal
par(mai = c(0.9, 0.9, 0.1, 0.1), omi = c(0, 0, 0, 0))
plot.ts(seas_2, las = 1, ylab = "")
```
Figure 3\.9: Mean seasonal effect.
### 3\.6\.4 3\. Remainder (\\(e\_t\\))
Now we can estimate \\(e\_t\\) via subtraction:
\\\[
e\_t \= x\_t \- m\_t \- s\_t
\\]
```
ee <- decompose(xx)$random
par(mai = c(0.9, 0.9, 0.1, 0.1), omi = c(0, 0, 0, 0))
plot.ts(ee, las = 1, ylab = "")
```
Figure 3\.10: Errors.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-ts-decomposition-log-data.html |
3\.7 Decomposition on log\-transformed data
-------------------------------------------
Let’s repeat the decomposition with the log of the airline data.
```
lx <- log(AirPassengers)
par(mai = c(0.9, 0.9, 0.1, 0.1), omi = c(0, 0, 0, 0))
plot.ts(lx, las = 1, ylab = "")
```
Figure 3\.11: Log monthly airline passengers from 1949\-1960
### 3\.7\.1 The trend (\\(m\_t\\))
### 3\.7\.2 Seasonal effect (\\(s\_t\\)) with error (\\(e\_t\\))
### 3\.7\.3 Mean seasonal effect (\\(s\_t\\))
### 3\.7\.4 Remainder (\\(e\_t\\))
```
## `pp` and `seas_2` are the trend and mean seasonal effect estimated from the
## log-transformed series (their computation is not shown in this excerpt)
le <- lx - pp - seas_2
par(mai = c(0.9, 0.9, 0.1, 0.1), omi = c(0, 0, 0, 0))
plot.ts(le, las = 1, ylab = "")
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/chap-tslab.html |
Chapter 4 Basic time series functions in R
==========================================
This chapter introduces you to some of the basic functions in R for plotting and analyzing univariate time series data. Many of the things you learn here will be relevant when we start examining multivariate time series as well. We will begin with the creation and plotting of time series objects in R, and then move on to decomposition, differencing, and correlation (*e.g.*, ACF, PACF) before ending with fitting and simulation of ARMA models.
A script with all the R code in the chapter can be downloaded [here](./Rcode/intro-ts-funcs-lab.R). The Rmd for this chapter can be downloaded [here](./Rmds/intro-ts-funcs-lab.Rmd).
### Data and packages
This chapter uses the **stats** package, which is often loaded by default when you start R, the **MARSS** package and the **forecast** package. The problems use a dataset in the **datasets** package. After installing the packages, if needed, load:
```
library(stats)
library(MARSS)
library(forecast)
library(datasets)
```
The chapter uses data sets which are in the **atsalibrary** package. If needed, install using the **devtools** package.
```
library(devtools)
# Windows users will likely need to set this
# Sys.setenv('R_REMOTES_NO_ERRORS_FROM_WARNINGS' = 'true')
devtools::install_github("nwfsc-timeseries/atsalibrary")
```
The main one is a time series of the atmospheric concentration of CO\\(\_2\\) collected at the Mauna Loa Observatory in Hawai’i (`MLCO2`). The second is Northern Hemisphere land and ocean temperature anomalies from NOAA (`NHTemp`). The problems use a data set on hourly phytoplankton counts (`hourlyphyto`). Use `?MLCO2`, `?NHTemp` and `?hourlyphyto` for information on these datasets.
Load the data.
```
data(NHTemp, package = "atsalibrary")
Temp <- NHTemp
data(MLCO2, package = "atsalibrary")
CO2 <- MLCO2
data(hourlyphyto, package = "atsalibrary")
phyto_dat <- hourlyphyto
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-tslab-time-series-plots.html |
4\.1 Time series plots
----------------------
Time series plots are an excellent way to begin the process of understanding what sort of process might have generated the data of interest. Traditionally, time series have been plotted with the observed data on the \\(y\\)\-axis and time on the \\(x\\)\-axis. Sequential time points are usually connected with some form of line, but sometimes other plot forms can be a useful way of conveying important information in the time series (*e.g.*, barplots of sea\-surface temperature anomalies show nicely the contrasting El Niño and La Niña phenomena).
### 4\.1\.1 **ts** objects and `plot.ts()`
The CO\\(\_2\\) [data](#sec-tslab-data) are stored in R as a `data.frame` object, but we would like to transform the class to a more user\-friendly format for dealing with time series. Fortunately, the `ts()` function will do just that, and return an object of class **ts** as well. In addition to the data themselves, we need to provide `ts()` with 2 pieces of information about the time index for the data.
The first, `frequency`, is a bit of a misnomer because it does not really refer to the number of cycles per unit time, but rather the number of observations/samples per cycle. So, for example, if the data were collected each hour of a day then `frequency = 24`.
The second, `start`, specifies the first sample in terms of (\\(day\\), \\(hour\\)), (\\(year\\), \\(month\\)), etc. So, for example, if the data were collected monthly beginning in November of 1969, then `frequency = 12` and `start = c(1969, 11)`. If the data were collected annually, then you simply specify `start` as a scalar (*e.g.*, `start = 1991`) and omit `frequency` (*i.e.*, R will set `frequency = 1` by default).
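For example, a tiny made\-up monthly series beginning in November of 1969:
```
ts(1:24, frequency = 12, start = c(1969, 11))
```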
The Mauna Loa time series is collected monthly and begins in March of 1958, which we can get from the data themselves, and then pass to `ts()`.
```
## create a time series (ts) object from the CO2 data
co2 <- ts(data = CO2$ppm, frequency = 12, start = c(CO2[1, "year"],
CO2[1, "month"]))
```
Now let’s plot the data using `plot.ts()`, which is designed specifically for **ts** objects like the one we just created above. It’s nice because we don’t need to specify any \\(x\\)\-values as they are taken directly from the **ts** object.
```
## plot the ts
plot.ts(co2, ylab = expression(paste("CO"[2], " (ppm)")))
```
Figure 4\.1: Time series of the atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i measured monthly from March 1958 to present.
Examination of the plotted time series (Figure [4\.1](sec-tslab-time-series-plots.html#fig:tslab-plotdata1)) shows 2 obvious features that would violate any assumption of stationarity: 1\) an increasing (and perhaps non\-linear) trend over time, and 2\) strong seasonal patterns. (*Aside*: Do you know the causes of these 2 phenomena?)
### 4\.1\.2 Combining and plotting multiple **ts** objects
Before we examine the CO\\(\_2\\) data further, however, let’s see a quick example of how you can combine and plot multiple time series together. We’ll use the [data](#sec-tslab-data) on monthly mean temperature anomalies for the Northern Hemisphere (`Temp`). First convert `Temp` to a `ts` object.
```
temp_ts <- ts(data = Temp$Value, frequency = 12, start = c(1880,
1))
```
Before we can plot the two time series together, however, we need to line up their time indices because the temperature data start in January of 1880, but the CO\\(\_2\\) data start in March of 1958\. Fortunately, the `ts.intersect()` function makes this really easy once the data have been transformed to **ts** objects by trimming the data to a common time frame. Also, `ts.union()` works in a similar fashion, but it pads one or both series with the appropriate number of NA’s. Let’s try both.
```
## intersection (only overlapping times)
dat_int <- ts.intersect(co2, temp_ts)
## dimensions of common-time data
dim(dat_int)
```
```
[1] 682 2
```
```
## union (all times)
dat_unn <- ts.union(co2, temp_ts)
## dimensions of all-time data
dim(dat_unn)
```
```
[1] 1647 2
```
As you can see, the intersection of the two data sets is much smaller than the union. If you compare them, you will see that the first 938 rows of `dat_unn` contain `NA` in the `co2` column.
It turns out that the regular `plot()` function in R is smart enough to recognize a **ts** object and use the information contained therein appropriately. Here’s how to plot the intersection of the two time series together with the y\-axes on alternate sides (results are shown in Figure [4\.2](sec-tslab-time-series-plots.html#fig:tslab-plotdata2)):
```
## plot the ts
plot(dat_int, main = "", yax.flip = TRUE)
```
Figure 4\.2: Time series of the atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i (top) and the mean temperature index for the Northern Hemisphere (bottom) measured monthly from March 1958 to present.
4\.2 Decomposition of time series
---------------------------------
Plotting time series data is an important first step in analyzing their various components. Beyond that, however, we need a more formal means for identifying and removing characteristics such as a trend or seasonal variation. As discussed in lecture, the decomposition model reduces a time series into 3 components: trend, seasonal effects, and random errors. In turn, we aim to model the random errors as some form of stationary process.
Let’s begin with a simple, additive decomposition model for a time series \\(x\_t\\)
\\\[\\begin{equation}
\\tag{4\.1}
x\_t \= m\_t \+ s\_t \+ e\_t,
\\end{equation}\\]
where, at time \\(t\\), \\(m\_t\\) is the trend, \\(s\_t\\) is the seasonal effect, and \\(e\_t\\) is a random error that we generally assume to have zero\-mean and to be correlated over time. Thus, by estimating and subtracting both \\(\\{m\_t\\}\\) and \\(\\{s\_t\\}\\) from \\(\\{x\_t\\}\\), we hope to have a time series of stationary residuals \\(\\{e\_t\\}\\).
### 4\.2\.1 Estimating trends
In lecture we discussed how linear filters are a common way to estimate trends in time series. One of the most common linear filters is the moving average, which for time lags from \\(\-a\\) to \\(a\\) is defined as
\\\[\\begin{equation}
\\tag{4\.2}
\\hat{m}\_t \= \\sum\_{k\=\-a}^{a} \\left(\\frac{1}{1\+2a}\\right) x\_{t\+k}.
\\end{equation}\\]
This model works well for moving windows of odd\-numbered lengths, but should be adjusted for even\-numbered lengths by adding only \\(\\frac{1}{2}\\) of the 2 most extreme lags so that the filtered value at time \\(t\\) lines up with the original observation at time \\(t\\). So, for example, in a case with monthly data such as the atmospheric CO\\(\_2\\) concentration where a 12\-point moving average would be an obvious choice, the linear filter would be
\\\[\\begin{equation}
\\tag{4\.3}
\\hat{m}\_t \= \\frac{\\frac{1}{2}x\_{t\-6} \+ x\_{t\-5} \+ \\dots \+ x\_{t\-1} \+ x\_t \+ x\_{t\+1} \+ \\dots \+ x\_{t\+5} \+ \\frac{1}{2}x\_{t\+6}}{12}
\\end{equation}\\]
It is important to note here that our time series of the estimated trend \\(\\{\\hat{m}\_t\\}\\) is actually shorter than the observed time series by \\(2a\\) units.
Conveniently, R has the built\-in function `filter()` in the **stats** package for estimating moving\-average (and other) linear filters. In addition to specifying the time series to be filtered, we need to pass in the filter weights (and 2 other arguments we won’t worry about here–type `?filter` to get more information). The easiest way to create the filter is with the `rep()` function:
```
## weights for moving avg
fltr <- c(1/2, rep(1, times = 11), 1/2)/12
```
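As a quick sanity check, the filter weights should sum to 1 so that the moving average preserves the overall level of the data:

```
## the weights should sum to 1
sum(fltr)
```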
Now let’s get our estimate of the trend \\(\\{\\hat{m}\\}\\) with `filter()` and plot it:
```
## estimate of trend
co2_trend <- stats::filter(co2, filter = fltr, method = "convo",
sides = 2)
## plot the trend
plot.ts(co2_trend, ylab = "Trend", cex = 1)
```
The trend is a more\-or\-less smoothly increasing function over time, the average slope of which does indeed appear to be increasing over time as well (Figure [4\.3](sec-tslab-decomposition-of-time-series.html#fig:tslab-plotTrendTSb)).
Figure 4\.3: Time series of the estimated trend \\(\\{\\hat{m}\_t\\}\\) for the atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i.
### 4\.2\.2 Estimating seasonal effects
Once we have an estimate of the trend for time \\(t\\) (\\(\\hat{m}\_t\\)) we can easily obtain an estimate of the seasonal effect at time \\(t\\) (\\(\\hat{s}\_t\\)) by subtraction
\\\[\\begin{equation}
\\tag{4\.4}
\\hat{s}\_t \= x\_t \- \\hat{m}\_t,
\\end{equation}\\]
which is really easy to do in R:
```
## seasonal effect over time
co2_seas <- co2 - co2_trend
```
This estimate of the seasonal effect for each time \\(t\\) also contains the random error \\(e\_t\\), however, which can be seen by plotting the time series and careful comparison of Equations [(4\.1\)](sec-tslab-decomposition-of-time-series.html#eq:classDecomp) and [(4\.4\)](sec-tslab-decomposition-of-time-series.html#eq:seasEst).
```
## plot the monthly seasonal effects
plot.ts(co2_seas, ylab = "Seasonal effect", xlab = "Month", cex = 1)
```
Figure 4\.4: Time series of seasonal effects plus random errors for the atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i, measured monthly from March 1958 to present.
We can obtain the overall seasonal effect by averaging the estimates of \\(\\{\\hat{s}\_t\\}\\) for each month and repeating this sequence over all years.
```
## length of ts
ll <- length(co2_seas)
## frequency (ie, 12)
ff <- frequency(co2_seas)
## number of periods (years); %/% is integer division
periods <- ll%/%ff
## index of cumulative month
index <- seq(1, ll, by = ff) - 1
## get mean by month
mm <- numeric(ff)
for (i in 1:ff) {
mm[i] <- mean(co2_seas[index + i], na.rm = TRUE)
}
## subtract mean to make overall mean = 0
mm <- mm - mean(mm)
```
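If you prefer a vectorized alternative to the loop above, something like the following sketch should give the same monthly means via the `cycle()` helper for **ts** objects. Note, however, that `cycle()` indexes months by calendar position (1 \= January), whereas `mm[1]` above corresponds to the first month in the series (March).

```
## monthly means via cycle() instead of a loop (indexed Jan-Dec)
mm_alt <- tapply(co2_seas, cycle(co2_seas), mean, na.rm = TRUE)
## subtract mean to make overall mean = 0
mm_alt <- mm_alt - mean(mm_alt)
```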
Before we create the entire time series of seasonal effects, let’s plot them for each month to see what is happening within a year:
```
## plot the monthly seasonal effects
plot.ts(mm, ylab = "Seasonal effect", xlab = "Month", cex = 1)
```
It looks like, on average, the CO\\(\_2\\) concentration is highest in spring (March) and lowest in summer (August) (Figure [4\.5](sec-tslab-decomposition-of-time-series.html#fig:tslab-plotSeasMean)). (*Aside*: Do you know why this is?)
Figure 4\.5: Estimated monthly seasonal effects for the atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i.
Finally, let’s create the entire time series of seasonal effects \\(\\{\\hat{s}\_t\\}\\):
```
## create ts object for season
co2_seas_ts <- ts(rep(mm, periods + 1)[seq(ll)], start = start(co2_seas),
frequency = ff)
```
### 4\.2\.3 Completing the model
The last step in completing our full decomposition model is obtaining the random errors \\(\\{\\hat{e}\_t\\}\\), which we can get via simple subtraction
\\\[\\begin{equation}
\\tag{4\.5}
\\hat{e}\_t \= x\_t \- \\hat{m}\_t \- \\hat{s}\_t.
\\end{equation}\\]
Again, this is really easy in R:
```
## random errors over time
co2_err <- co2 - co2_trend - co2_seas_ts
```
Now that we have all 3 of our model components, let’s plot them together with the observed data \\(\\{x\_t\\}\\). The results are shown in Figure [4\.6](sec-tslab-decomposition-of-time-series.html#fig:tslab-plotTrSeas).
```
## plot the obs ts, trend & seasonal effect
plot(cbind(co2, co2_trend, co2_seas_ts, co2_err), main = "",
yax.flip = TRUE)
```
Figure 4\.6: Time series of the observed atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i (top) along with the estimated trend, seasonal effects, and random errors.
### 4\.2\.4 Using `decompose()` for decomposition
Now that we have seen how to estimate and plot the various components of a classical decomposition model in a piecewise manner, let’s see how to do this in one step in R with the function `decompose()`, which accepts a **ts** object as input and returns an object of class **decomposed.ts**.
```
## decomposition of CO2 data
co2_decomp <- decompose(co2)
```
`co2_decomp` is a list with the following elements, which should be familiar by now:
* `x`: the observed time series \\(\\{x\_t\\}\\)
* `seasonal`: time series of estimated seasonal component \\(\\{\\hat{s}\_t\\}\\)
* `figure`: mean seasonal effect (`length(figure) == frequency(x)`)
* `trend`: time series of estimated trend \\(\\{\\hat{m}\_t\\}\\)
* `random`: time series of random errors \\(\\{\\hat{e}\_t\\}\\)
* `type`: type of error (`"additive"` or `"multiplicative"`)
We can easily make plots of the output and compare them to those in Figure [4\.6](sec-tslab-decomposition-of-time-series.html#fig:tslab-plotTrSeas):
```
## plot the obs ts, trend & seasonal effect
plot(co2_decomp, yax.flip = TRUE)
```
Figure 4\.7: Time series of the observed atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i (top) along with the estimated trend, seasonal effects, and random errors obtained with the function `decompose()`.
The results obtained with `decompose()` (Figure [4\.7](sec-tslab-decomposition-of-time-series.html#fig:tslab-plotDecompB)) are identical to those we estimated previously.
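If you want to convince yourself of this, a quick comparison of the two trend estimates should return `TRUE` (a sketch; `co2_trend` is the filter\-based estimate from above):

```
## compare the decompose() trend to our filter-based estimate
all.equal(co2_decomp$trend, co2_trend)
```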
Another nice feature of the `decompose()` function is that it can be used for decomposition models with multiplicative (*i.e.*, non\-additive) errors (*e.g.*, if the original time series had a seasonal amplitude that increased with time). To do so, pass in the argument `type = "multiplicative"`, which is set to `type = "additive"` by default.
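For example, a multiplicative decomposition of the built\-in `AirPassengers` data set, which has a seasonal amplitude that grows with the level of the series, might look something like this (a sketch):

```
## multiplicative decomposition of the AirPassengers data
ap_decomp <- decompose(AirPassengers, type = "multiplicative")
plot(ap_decomp, yax.flip = TRUE)
```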
4\.3 Differencing to remove a trend or seasonal effects
-------------------------------------------------------
An alternative to decomposition for removing trends is differencing. We saw in lecture how the difference operator works and how it can be used to remove linear and nonlinear trends as well as various seasonal features that might be evident in the data. As a reminder, we define the difference operator as
\\\[\\begin{equation}
\\tag{4\.6}
\\nabla x\_t \= x\_t \- x\_{t\-1},
\\end{equation}\\]
and, more generally, for order \\(d\\)
\\\[\\begin{equation}
\\tag{4\.7}
\\nabla^d x\_t \= (1\-\\mathbf{B})^d x\_t,
\\end{equation}\\]
where **B** is the backshift operator (*i.e.*, \\(\\mathbf{B}^k x\_t \= x\_{t\-k}\\) for \\(k \\geq 1\\)).
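For example, with \\(d \= 2\\) the operator expands to \\(\\nabla^2 x\_t \= (1\-\\mathbf{B})^2 x\_t \= x\_t \- 2x\_{t\-1} \+ x\_{t\-2}\\).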
So, for example, a random walk is one of the simplest and most widely used time series models, but it is not stationary. We can write a random walk model as
\\\[\\begin{equation}
\\tag{4\.8}
x\_t \= x\_{t\-1} \+ w\_t, \\text{ with } w\_t \\sim \\text{N}(0,q).
\\end{equation}\\]
Applying the difference operator to Equation [(4\.8\)](sec-tslab-differencing-to-remove-a-trend-or-seasonal-effects.html#eq:defnRW) will yield a time series of Gaussian white noise errors \\(\\{w\_t\\}\\):
\\\[\\begin{equation}
\\tag{4\.9}
\\begin{aligned}
\\nabla (x\_t \&\= x\_{t\-1} \+ w\_t) \\\\
x\_t \- x\_{t\-1} \&\= x\_{t\-1} \- x\_{t\-1} \+ w\_t \\\\
x\_t \- x\_{t\-1} \&\= w\_t
\\end{aligned}
\\end{equation}\\]
### 4\.3\.1 Using the `diff()` function
In R we can use the `diff()` function for differencing a time series, which requires 3 arguments: `x` (the data), `lag` (the lag at which to difference), and `differences` (the order of differencing; \\(d\\) in Equation [(4\.7\)](sec-tslab-differencing-to-remove-a-trend-or-seasonal-effects.html#eq:diffDefnB)). For example, first\-differencing a time series will remove a linear trend (*i.e.*, `differences = 1`); twice\-differencing will remove a quadratic trend (*i.e.*, `differences = 2`). In addition, first\-differencing a time series at a lag equal to the period will remove a seasonal trend (*e.g.*, set `lag = 12` for monthly data).
Let’s use `diff()` to remove the trend and seasonal signal from the CO\\(\_2\\) time series, beginning with the trend. Close inspection of Figure [4\.1](sec-tslab-time-series-plots.html#fig:tslab-plotdata1) would suggest that there is a nonlinear increase in CO\\(\_2\\) concentration over time, so we’ll set `differences = 2`):
```
## twice-difference the CO2 data
co2_d2 <- diff(co2, differences = 2)
## plot the differenced data
plot(co2_d2, ylab = expression(paste(nabla^2, "CO"[2])))
```
Figure 4\.8: Time series of the twice\-differenced atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i.
We were apparently successful in removing the trend, but the seasonal effect still appears obvious (Figure [4\.8](sec-tslab-differencing-to-remove-a-trend-or-seasonal-effects.html#fig:tslab-plotCO2diff2)). Therefore, let’s go ahead and difference that series at lag\-12 because our data were collected monthly.
```
## difference the differenced CO2 data
co2_d2d12 <- diff(co2_d2, lag = 12)
## plot the newly differenced data
plot(co2_d2d12, ylab = expression(paste(nabla, "(", nabla^2,
"CO"[2], ")")))
```
Figure 4\.9: Time series of the lag\-12 difference of the twice\-differenced atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i.
Now we have a time series that appears to be random errors without any obvious trend or seasonal components (Figure [4\.9](sec-tslab-differencing-to-remove-a-trend-or-seasonal-effects.html#fig:tslab-plotCO2diff12)).
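If you want a quick check on how much autocorrelation remains after all of this differencing, you can look at the sample ACF of the result (the ACF is covered in detail in the next section); a sketch:

```
## ACF of the twice-differenced, seasonally differenced series
acf(co2_d2d12, lag.max = 36)
```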
4\.4 Correlation within and among time series
---------------------------------------------
The concepts of covariance and correlation are very important in time series analysis. In particular, we can examine the correlation structure of the original data or random errors from a decomposition model to help us identify possible form(s) of (non)stationary model(s) for the stochastic process.
### 4\.4\.1 Autocorrelation function (ACF)
Autocorrelation is the correlation of a variable with itself at differing time lags. Recall from lecture that we defined the sample autocovariance function (ACVF), \\(c\_k\\), for some lag \\(k\\) as
\\\[\\begin{equation}
\\tag{4\.10}
c\_k \= \\frac{1}{n}\\sum\_{t\=1}^{n\-k} \\left(x\_t\-\\bar{x}\\right) \\left(x\_{t\+k}\-\\bar{x}\\right)
\\end{equation}\\]
Note that the sample autocovariance of \\(\\{x\_t\\}\\) at lag 0, \\(c\_0\\), equals the sample variance of \\(\\{x\_t\\}\\) calculated with a denominator of \\(n\\). The sample autocorrelation function (ACF) is defined as
\\\[\\begin{equation}
\\tag{4\.11}
r\_k \= \\frac{c\_k}{c\_0} \= \\text{Cor}(x\_t,x\_{t\+k})
\\end{equation}\\]
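To see how these definitions map onto R’s `acf()` function, here is a sketch that computes \\(r\_1\\) for the CO\\(\_2\\) series by hand; it should match the built\-in estimate at lag 1:

```
## sample ACF at lag 1, computed by hand
n <- length(co2)
xbar <- mean(co2)
c0 <- sum((co2 - xbar)^2)/n
c1 <- sum((co2[1:(n - 1)] - xbar) * (co2[2:n] - xbar))/n
c1/c0
## compare to the built-in estimate
acf(co2, lag.max = 1, plot = FALSE)
```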
Recall also that an approximate 95% confidence interval on the ACF can be estimated by
\\\[\\begin{equation}
\\tag{4\.12}
\-\\frac{1}{n} \\pm \\frac{2}{\\sqrt{n}}
\\end{equation}\\]
where \\(n\\) is the number of data points used in the calculation of the ACF.
It is important to remember two things here. First, although the confidence interval is commonly plotted and interpreted as a horizontal line over all time lags, the interval itself actually grows as the lag increases because the number of data points \\(n\\) used to estimate the correlation decreases by 1 for every integer increase in lag. Second, care must be exercised when interpreting the “significance” of the correlation at various lags because we should expect, *a priori*, that approximately 1 out of every 20 correlations will be significant based on chance alone.
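To make this concrete, the approximate bounds in Equation (4\.12\) for the CO\\(\_2\\) series can be computed directly (a sketch):

```
## approximate 95% CI for the ACF of the CO2 series
n_co2 <- length(co2)
-1/n_co2 + c(-2, 2)/sqrt(n_co2)
```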
We can use the `acf()` function in R to compute the sample ACF (note that adding the option `type = "covariance"` will return the sample auto\-covariance (ACVF) instead of the ACF–type `?acf` for details). Calling the function by itself will automatically produce a correlogram (*i.e.*, a plot of the autocorrelation versus time lag). The argument `lag.max` allows you to set the number of positive and negative lags. Let’s try it for the CO\\(\_2\\) data.
```
## correlogram of the CO2 data
acf(co2, lag.max = 36)
```
Figure 4\.10: Correlogram of the observed atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i obtained with the function `acf()`.
There are 4 things about Figure [4\.10](sec-tslab-correlation-within-and-among-time-series.html#fig:tslab-plotACFb) that are noteworthy:
1. the ACF at lag 0, \\(r\_0\\), equals 1 by default (*i.e.*, the correlation of a time series with itself)–it’s plotted as a reference point;
2. the \\(x\\)\-axis has decimal values for lags, which is caused by R using the year index as the lag rather than the month;
3. the horizontal blue lines are the approximate 95% CI’s; and
4. there is very high autocorrelation even out to lags of 36 months.
As an alternative to the default plots for **acf** objects, let’s define a new plot function for **acf** objects with some better features:
```
## better ACF plot
plot.acf <- function(ACFobj) {
rr <- ACFobj$acf[-1]
kk <- length(rr)
nn <- ACFobj$n.used
plot(seq(kk), rr, type = "h", lwd = 2, yaxs = "i", xaxs = "i",
ylim = c(floor(min(rr)), 1), xlim = c(0, kk + 1), xlab = "Lag",
ylab = "Correlation", las = 1)
abline(h = -1/nn + c(-2, 2)/sqrt(nn), lty = "dashed", col = "blue")
abline(h = 0)
}
```
Now we can assign the result of `acf()` to a variable and then use the information contained therein to plot the correlogram with our new plot function.
```
## acf of the CO2 data
co2_acf <- acf(co2, lag.max = 36)
## correlogram of the CO2 data
plot.acf(co2_acf)
```
Figure 4\.11: Correlogram of the observed atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i obtained with the function `plot.acf()`.
Notice that all of the relevant information is still there (Figure [4\.11](sec-tslab-correlation-within-and-among-time-series.html#fig:tslab-plotbetterACF)), but now \\(r\_0\=1\\) is not plotted at lag\-0 and the lags on the \\(x\\)\-axis are displayed correctly as integers.
Before we move on to the PACF, let’s look at the ACF for some deterministic time series, which will help you identify interesting properties (*e.g.*, trends, seasonal effects) in a stochastic time series, and account for them in time series models–an important topic in this course. First, let’s look at a straight line.
```
## length of ts
nn <- 100
## create straight line
tt <- seq(nn)
## set up plot area
par(mfrow = c(1, 2))
## plot line
plot.ts(tt, ylab = expression(italic(x[t])))
## get ACF
line.acf <- acf(tt, plot = FALSE)
## plot ACF
plot.acf(line.acf)
```
Figure 4\.12: Time series plot of a straight line (left) and the correlogram of its ACF (right).
The correlogram for a straight line is itself a linearly decreasing function over time (Figure [4\.12](sec-tslab-correlation-within-and-among-time-series.html#fig:tslab-plotLinearACF)).
Now let’s examine the ACF for a sine wave and see what sort of pattern arises.
```
## create sine wave
tt <- sin(2 * pi * seq(nn)/12)
## set up plot area
par(mfrow = c(1, 2))
## plot line
plot.ts(tt, ylab = expression(italic(x[t])))
## get ACF
sine_acf <- acf(tt, plot = FALSE)
## plot ACF
plot.acf(sine_acf)
```
Figure 4\.13: Time series plot of a discrete sine wave (left) and the correlogram of its ACF (right).
Perhaps not surprisingly, the correlogram for a sine wave is itself a sine wave whose amplitude decreases linearly over time (Figure [4\.13](sec-tslab-correlation-within-and-among-time-series.html#fig:tslab-plotSineACF)).
Now let’s examine the ACF for a sine wave with a linear downward trend and see what sort of patterns arise.
```
## create sine wave with trend
tt <- sin(2 * pi * seq(nn)/12) - seq(nn)/50
## set up plot area
par(mfrow = c(1, 2))
## plot line
plot.ts(tt, ylab = expression(italic(x[t])))
## get ACF
sili_acf <- acf(tt, plot = FALSE)
## plot ACF
plot.acf(sili_acf)
```
Figure 4\.14: Time series plot of a discrete sine wave (left) and the correlogram of its ACF (right).
The correlogram for a sine wave with a trend is itself a nonsymmetrical sine wave whose amplitude and center decrease over time (Figure [4\.14](sec-tslab-correlation-within-and-among-time-series.html#fig:tslab-plotSiLiACF)).
As we have seen, the ACF is a powerful tool in time series analysis for identifying important features in the data. As we will see later, the ACF is also an important diagnostic tool for helping to select the proper order of \\(p\\) and \\(q\\) in ARMA(\\(p\\),\\(q\\)) models.
### 4\.4\.2 Partial autocorrelation function (PACF)
The partial autocorrelation function (PACF) measures the linear correlation of a series \\(\\{x\_t\\}\\) and a lagged version of itself \\(\\{x\_{t\+k}\\}\\) with the linear dependence of \\(\\{x\_{t\-1},x\_{t\-2},\\dots,x\_{t\-(k\-1\)}\\}\\) removed. Recall from lecture that we define the PACF as
\\\[\\begin{equation}
\\tag{4\.13}
f\_k \= \\begin{cases}
\\text{Cor}(x\_1,x\_0\)\=r\_1 \& \\text{if } k \= 1;\\\\
\\text{Cor}(x\_k\-x\_k^{k\-1},x\_0\-x\_0^{k\-1}) \& \\text{if } k \\geq 2;
\\end{cases}
\\end{equation}\\]
with \\(x\_k^{k\-1}\\) denoting the best linear predictor of \\(x\_k\\) based on \\(\\{x\_1,x\_2,\\dots,x\_{k\-1}\\}\\), and \\(x\_0^{k\-1}\\) the analogous predictor of \\(x\_0\\).
It’s easy to compute the PACF for a variable in R using the `pacf()` function, which will automatically plot a correlogram when called by itself (similar to `acf()`). Let’s look at the PACF for the CO\\(\_2\\) data.
```
## PACF of the CO2 data
pacf(co2, lag.max = 36)
```
The default plot for PACF is a bit better than for ACF, but here is another plotting function that might be useful.
```
## better PACF plot
plot.pacf <- function(PACFobj) {
rr <- PACFobj$acf
kk <- length(rr)
nn <- PACFobj$n.used
plot(seq(kk), rr, type = "h", lwd = 2, yaxs = "i", xaxs = "i",
ylim = c(floor(min(rr)), 1), xlim = c(0, kk + 1), xlab = "Lag",
ylab = "PACF", las = 1)
abline(h = -1/nn + c(-2, 2)/sqrt(nn), lty = "dashed", col = "blue")
abline(h = 0)
}
```
Figure 4\.15: Correlogram of the PACF for the observed atmospheric CO\\(\_2\\) concentration at Mauna Loa, Hawai’i obtained with the function `pacf()`.
Notice in Figure [4\.15](sec-tslab-correlation-within-and-among-time-series.html#fig:tslab-plotPACFb) that the partial autocorrelation at lag\-1 is very high (it equals the ACF at lag\-1\), but the other values at lags \> 1 are relatively small, unlike what we saw for the ACF. We will discuss this in more detail later on in this lab.
Notice also that the PACF plot again has real\-valued indices for the time lag, but it does not include any value for lag\-0 because it is impossible to remove any intermediate autocorrelation between \\(t\\) and \\(t\-k\\) when \\(k\=0\\), and therefore the PACF does not exist at lag\-0\. If you would like, you can use the `plot.acf()` function we defined above to plot the PACF estimates because `acf()` and `pacf()` produce identical list structures (results not shown here).
```
## PACF of the CO2 data
co2_pacf <- pacf(co2)
## correlogram of the CO2 data
plot.acf(co2_pacf)
```
As with the ACF, we will see later on how the PACF can also be used to help identify the appropriate order of \\(p\\) and \\(q\\) in ARMA(\\(p\\),\\(q\\)) models.
### 4\.4\.3 Cross\-correlation function (CCF)
Often we are interested in looking for relationships between 2 different time series. There are many ways to do this, but a simple method is via examination of their cross\-covariance and cross\-correlation.
We begin by defining the sample cross\-covariance function (CCVF) in a manner similar to the ACVF, in that
\\\[\\begin{equation}
\\tag{4\.14}
g\_k^{xy} \= \\frac{1}{n}\\sum\_{t\=1}^{n\-k} \\left(y\_t\-\\bar{y}\\right) \\left(x\_{t\+k}\-\\bar{x}\\right),
\\end{equation}\\]
but now we are estimating the correlation between a variable \\(y\\) and a *different* time\-shifted variable \\(x\_{t\+k}\\). The sample cross\-correlation function (CCF) is then defined analogously to the ACF, such that
\\\[\\begin{equation}
\\tag{4\.15}
r\_k^{xy} \= \\frac{g\_k^{xy}}{\\text{SD}\_x\\text{SD}\_y};
\\end{equation}\\]
SD\\(\_x\\) and SD\\(\_y\\) are the sample standard deviations of \\(\\{x\_t\\}\\) and \\(\\{y\_t\\}\\), respectively. It is important to re\-iterate here that \\(r\_k^{xy} \\neq r\_{\-k}^{xy}\\), but \\(r\_k^{xy} \= r\_{\-k}^{yx}\\). Therefore, it is very important to pay particular attention to which variable you call \\(y\\) (*i.e.*, the “response”) and which you call \\(x\\) (*i.e.*, the “predictor”).
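A quick way to see this asymmetry is to compute the CCF in both directions for two arbitrary series; reversing one set of estimates should recover the other (a sketch):

```
## r_k^{xy} equals r_{-k}^{yx}, but not r_{-k}^{xy}
set.seed(1)
x <- rnorm(20)
y <- rnorm(20)
rxy <- drop(ccf(x, y, lag.max = 3, plot = FALSE)$acf)
ryx <- drop(ccf(y, x, lag.max = 3, plot = FALSE)$acf)
## these two rows should match
rbind(rxy, rev(ryx))
```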
As with the ACF, an approximate 95% confidence interval on the CCF can be estimated by
\\\[\\begin{equation}
\\tag{4\.16}
\-\\frac{1}{n} \\pm \\frac{2}{\\sqrt{n}}
\\end{equation}\\]
where \\(n\\) is the number of data points used in the calculation of the CCF, and the same assumptions apply to its interpretation.
Computing the CCF in R is easy with the function `ccf()` and it works just like `acf()`. In fact, `ccf()` is just a “wrapper” function that calls `acf()`. As an example, let’s examine the CCF between sunspot activity and number of lynx trapped in Canada as in the classic paper by Moran.
To begin, let’s get the data, which are conveniently included in the **datasets** package included as part of the base installation of R. Before calculating the CCF, however, we need to find the matching years of data. Again, we’ll use the `ts.intersect()` function.
```
## get the matching years of sunspot data
suns <- ts.intersect(lynx, sunspot.year)[, "sunspot.year"]
## get the matching lynx data
lynx <- ts.intersect(lynx, sunspot.year)[, "lynx"]
```
Here are plots of the time series.
```
## plot time series
plot(cbind(suns, lynx), yax.flip = TRUE)
```
Figure 4\.16: Time series of sunspot activity (top) and lynx trappings in Canada (bottom) from 1821\-1934\.
It is important to remember which of the 2 variables you call \\(y\\) and \\(x\\) when calling `ccf(x, y, ...)`. In this case, it seems most relevant to treat lynx as the \\(y\\) and sunspots as the \\(x\\), in which case we are mostly interested in the CCF at negative lags (*i.e.*, when sunspot activity predates inferred lynx abundance). Furthermore, we’ll use log\-transformed lynx trappings.
```
## CCF of sunspots and lynx
ccf(suns, log(lynx), ylab = "Cross-correlation")
```
Figure 4\.17: CCF for annual sunspot activity and the log of the number of lynx trappings in Canada from 1821\-1934\.
From Figures [4\.16](sec-tslab-correlation-within-and-among-time-series.html#fig:tslab-plotSunsLynx) and [4\.17](sec-tslab-correlation-within-and-among-time-series.html#fig:tslab-plotCCFb) it looks like lynx numbers are relatively low 3\-5 years after high sunspot activity (*i.e.*, significant correlation at lags of \-3 to \-5\).
4\.5 White noise (WN)
---------------------
A time series \\(\\{w\_t\\}\\) is a discrete white noise series (DWN) if the \\(w\_1, w\_2, \\dots, w\_t\\) are independent and identically distributed (IID) with a mean of zero. For most of the examples in this course we will assume that the \\(w\_t \\sim \\text{N}(0,q)\\), and therefore we refer to the time series \\(\\{w\_t\\}\\) as Gaussian white noise. If our time series model has done an adequate job of removing all of the serial autocorrelation in the time series with trends, seasonal effects, etc., then the model residuals (\\(e\_t \= y\_t \- \\hat{y}\_t\\)) will be a WN sequence with the following properties for its mean (\\(\\bar{e}\\)), covariance (\\(c\_k\\)), and autocorrelation (\\(r\_k\\)):
\\\[\\begin{equation}
\\tag{4\.17}
\\begin{aligned}
\\bar{e} \&\= 0 \\\\
c\_k \&\= \\text{Cov}(e\_t,e\_{t\+k}) \= \\begin{cases}
q \& \\text{if } k \= 0 \\\\
0 \& \\text{if } k \\neq 0
\\end{cases} \\\\
r\_k \&\= \\text{Cor}(e\_t,e\_{t\+k}) \= \\begin{cases}
1 \& \\text{if } k \= 0 \\\\
0 \& \\text{if } k \\neq 0\.
\\end{cases}
\\end{aligned}
\\end{equation}\\]
### 4\.5\.1 Simulating white noise
Simulating WN in R is straightforward with a variety of built\-in random number generators for continuous and discrete distributions. Once you know R’s abbreviation for the distribution of interest, you add an \\(\\texttt{r}\\) to the beginning to get the function’s name. For example, a Gaussian (or normal) distribution is abbreviated \\(\\texttt{norm}\\) and so the function is `rnorm()`. All of the random number functions require two things: the number of samples from the distribution and the parameters for the distribution itself (*e.g.*, mean \& SD of a normal). Check the help file for the distribution of interest to find out what parameters you must specify (*e.g.*, type `?rnorm` to see the help for a normal distribution).
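For example, following the same naming convention (a sketch with arbitrary parameter values):

```
## 5 draws from a uniform distribution on [0, 1]
runif(n = 5, min = 0, max = 1)
## 5 draws from a binomial distribution with 10 trials and p = 0.3
rbinom(n = 5, size = 10, prob = 0.3)
```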
Here’s how to generate 100 samples from a normal distribution with mean of 5 and standard deviation of 0\.2, and 50 samples from a Poisson distribution with a rate (\\(\\lambda\\)) of 20\.
```
set.seed(123)
## random normal variates
GWN <- rnorm(n = 100, mean = 5, sd = 0.2)
## random Poisson variates
PWN <- rpois(n = 50, lambda = 20)
```
Here are plots of the time series. Notice that on one occasion the same number was drawn twice in a row from the Poisson distribution, which is discrete. That is virtually guaranteed to never happen with a continuous distribution.
```
## set up plot region
par(mfrow = c(1, 2))
## plot normal variates with mean
plot.ts(GWN)
abline(h = 5, col = "blue", lty = "dashed")
## plot Poisson variates with mean
plot.ts(PWN)
abline(h = 20, col = "blue", lty = "dashed")
```
Figure 4\.18: Time series plots of simulated Gaussian (left) and Poisson (right) white noise.
Now let’s examine the ACF for the 2 white noise series and see if there is, in fact, zero autocorrelation for lags \\(\\geq\\) 1\.
```
## set up plot region
par(mfrow = c(1, 2))
## plot normal variates with mean
acf(GWN, main = "", lag.max = 20)
## plot Poisson variates with mean
acf(PWN, main = "", lag.max = 20)
```
Figure 4\.19: ACF’s for the simulated Gaussian (left) and Poisson (right) white noise shown in Figure [4\.18](sec-tslab-white-noise-wn.html#fig:tslab-plotDWNsims).
Interestingly, the \\(r\_k\\) are all greater than zero in absolute value although they are not statistically different from zero for lags 1\-20\. This is because we are dealing with a *sample* of the distributions rather than the entire population of all random variates. As an exercise, try setting `n = 1e6` instead of `n = 100` or `n = 50` in the calls above to generate the WN sequences and see what effect it has on the estimation of \\(r\_k\\). It is also important to remember, as we discussed earlier, that we should expect approximately 1 in 20 of the \\(r\_k\\) to be statistically greater than zero based on chance alone, especially for relatively small sample sizes, so don’t get too excited if you ever come across a case like this when inspecting model residuals.
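If you try the exercise, a sketch along these lines will show what happens to the estimated \\(r\_k\\) as the sample size grows:

```
## ACF of a much longer white noise sequence
acf(rnorm(1e6), lag.max = 20)
```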
4\.6 Random walks (RW)
----------------------
Random walks receive considerable attention in time series analyses because of their ability to fit a wide range of data despite their surprising simplicity. In fact, random walks are the simplest non\-stationary time series model. A random walk is a time series \\(\\{x\_t\\}\\) where
\\\[\\begin{equation}
\\tag{4\.18}
x\_t \= x\_{t\-1} \+ w\_t,
\\end{equation}\\]
and \\(w\_t\\) is a discrete white noise series where all values are independent and identically distributed (IID) with a mean of zero. In practice, we will almost always assume that the \\(w\_t\\) are Gaussian white noise, such that \\(w\_t \\sim \\text{N}(0,q)\\). We will see later that a random walk is a special case of an autoregressive model.
### 4\.6\.1 Simulating a random walk
Simulating a RW model in R is straightforward with a for loop and the use of `rnorm()` to generate Gaussian errors (type `?rnorm` to see details on the function and its useful relatives `dnorm()` and `pnorm()`). Let’s create 100 obs (we’ll also set the random number seed so everyone gets the same results).
```
## set random number seed
set.seed(123)
## length of time series
TT <- 100
## initialize {x_t} and {w_t}
xx <- ww <- rnorm(n = TT, mean = 0, sd = 1)
## compute values 2 thru TT
for (t in 2:TT) {
xx[t] <- xx[t - 1] + ww[t]
}
```
Now let’s plot the simulated time series and its ACF.
```
## setup plot area
par(mfrow = c(1, 2))
## plot line
plot.ts(xx, ylab = expression(italic(x[t])))
## plot ACF
plot.acf(acf(xx, plot = FALSE))
```
Figure 4\.20: Simulated time series of a random walk model (left) and its associated ACF (right).
Perhaps not surprisingly based on their names, autoregressive models such as RW’s have a high degree of autocorrelation out to long lags (Figure [4\.20](sec-tslab-random-walks-rw.html#fig:tslab-plotRW)).
### 4\.6\.2 Alternative formulation of a random walk
As an aside, let’s use an alternative formulation of a random walk model to see an even shorter way to simulate an RW in R. Based on our definition of a random walk in Equation [(4\.18\)](sec-tslab-random-walks-rw.html#eq:defnRW2), it is easy to see that
\\\[\\begin{equation}
\\tag{4\.19}
\\begin{aligned}
x\_t \&\= x\_{t\-1} \+ w\_t \\\\
x\_{t\-1} \&\= x\_{t\-2} \+ w\_{t\-1} \\\\
x\_{t\-2} \&\= x\_{t\-3} \+ w\_{t\-2} \\\\
\&\\; \\; \\vdots
\\end{aligned}
\\end{equation}\\]
Therefore, if we substitute \\(x\_{t\-2} \+ w\_{t\-1}\\) for \\(x\_{t\-1}\\) in the first equation, and then \\(x\_{t\-3} \+ w\_{t\-2}\\) for \\(x\_{t\-2}\\), and so on in a recursive manner, we get
\\\[\\begin{equation}
\\tag{4\.20}
x\_t \= w\_t \+ w\_{t\-1} \+ w\_{t\-2} \+ \\dots \+ w\_{t\-\\infty} \+ x\_{t\-\\infty}.
\\end{equation}\\]
In practice, however, the time series will not start an infinite time ago, but rather at some \\(t\=1\\), in which case we can write
\\\[\\begin{equation}
\\tag{4\.21}
\\begin{aligned}
x\_t \&\= w\_1 \+ w\_2 \+ \\dots \+ w\_t \\\\
\&\= \\sum\_{k\=1}^{t} w\_k.
\\end{aligned}
\\end{equation}\\]
From Equation [(4\.21\)](sec-tslab-random-walks-rw.html#eq:defnRWalt3) it is easy to see that the value of an RW process at time step \\(t\\) is the sum of all the random errors up through time \\(t\\). Therefore, in R we can easily simulate a realization from an RW process using the `cumsum(x)` function, which does cumulative summation of the vector `x` over its entire length. If we use the same errors as before, we should get the same results.
```
## simulate RW
x2 <- cumsum(ww)
```
Let’s plot both time series to see if it worked.
```
## setup plot area
par(mfrow = c(1, 2))
## plot 1st RW
plot.ts(xx, ylab = expression(italic(x[t])))
## plot 2nd RW
plot.ts(x2, ylab = expression(italic(x[t])))
```
Figure 4\.21: Time series of the same random walk model formulated as Equation [(4\.18\)](sec-tslab-random-walks-rw.html#eq:defnRW2) and simulated via a for loop (left), and as Equation [(4\.21\)](sec-tslab-random-walks-rw.html#eq:defnRWalt3) and simulated via `cumsum()` (right).
Indeed, both methods of generating a RW time series appear to be equivalent.
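If you want to confirm this numerically rather than just by eye, a quick check (not part of the original lab) is:
```
## both series are built from the same errors, so they should match
all.equal(xx, x2)
```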
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-tslab-autoregressive-ar-models.html |
4\.7 Autoregressive (AR) models
-------------------------------
Autoregressive models of order \\(p\\), abbreviated AR(\\(p\\)), are commonly used in time series analyses. In particular, AR(1\) models (and their multivariate extensions) see considerable use in ecology as we will see later in the course. Recall from lecture that an AR(\\(p\\)) model is written as
\\\[\\begin{equation}
x\_t \= \\phi\_1 x\_{t\-1} \+ \\phi\_2 x\_{t\-2} \+ \\dots \+ \\phi\_p x\_{t\-p} \+ w\_t,
\\end{equation}\\]
where \\(\\{w\_t\\}\\) is a white noise sequence with zero mean and some variance \\(\\sigma^2\\). For our purposes we usually assume that \\(w\_t \\sim \\text{N}(0,q)\\). Note that the random walk in Equation [(4\.18\)](sec-tslab-random-walks-rw.html#eq:defnRW2) is a special case of an AR(1\) model where \\(\\phi\_1\=1\\) and \\(\\phi\_k\=0\\) for \\(k \\geq 2\\).
### 4\.7\.1 Simulating an AR(\\(p\\)) process
Although we could simulate an AR(\\(p\\)) process in R using a for loop just as we did for a random walk, it’s much easier with the function `arima.sim()`, which works for all forms and subsets of ARIMA models. To do so, remember that the AR in ARIMA stands for “autoregressive,” the I for “integrated,” and the MA for “moving\-average”; we specify the order of ARIMA models as \\(p,d,q\\). So, for example, we would specify an AR(2\) model as ARIMA(2,0,0\), or an MA(1\) model as ARIMA(0,0,1\). If we had an ARMA(3,1\) model that we applied to data that had been twice\-differenced, then we would have an ARIMA(3,2,1\) model.
`arima.sim()` will accept many arguments, but we are interested primarily in three of them (type `?arima.sim` to learn more):
1. `n`: the length of desired time series
2. `model`: a list with the following elements:
* `order`: a vector of length 3 containing the ARIMA(\\(p,d,q\\)) order
* `ar`: a vector of length \\(p\\) containing the AR(\\(p\\)) coefficients
* `ma`: a vector of length \\(q\\) containing the MA(\\(q\\)) coefficients
3. `sd`: the standard deviation of the Gaussian errors
Note that you can omit the `ma` element entirely if you have an AR(\\(p\\)) model, or omit the `ar` element if you have an MA(\\(q\\)) model. If you omit the `sd` element, `arima.sim()` will assume you want normally distributed errors with SD \= 1\. Also note that you can pass `arima.sim()` your own time series of random errors or the name of a function that will generate the errors (*e.g.*, you could use `rpois()` if you wanted a model with Poisson errors). Type `?arima.sim` for more details.
Let’s begin by simulating some AR(1\) models and comparing their behavior. First, let’s choose models with contrasting AR coefficients. Recall that in order for an AR(1\) model to be stationary, \\(\\lvert \\phi \\rvert \< 1\\), so we’ll try 0\.1 and 0\.9\. We’ll again set the random number seed so we will get the same answers.
```
set.seed(456)
## list description for AR(1) model with small coef
AR_sm <- list(order = c(1, 0, 0), ar = 0.1)
## list description for AR(1) model with large coef
AR_lg <- list(order = c(1, 0, 0), ar = 0.9)
## simulate AR(1)
AR1_sm <- arima.sim(n = 50, model = AR_sm, sd = 0.1)
AR1_lg <- arima.sim(n = 50, model = AR_lg, sd = 0.1)
```
Now let’s plot the 2 simulated series.
```
## setup plot region
par(mfrow = c(1, 2))
## get y-limits for common plots
ylm <- c(min(AR1_sm, AR1_lg), max(AR1_sm, AR1_lg))
## plot the ts
plot.ts(AR1_sm, ylim = ylm, ylab = expression(italic(x)[italic(t)]),
main = expression(paste(phi, " = 0.1")))
plot.ts(AR1_lg, ylim = ylm, ylab = expression(italic(x)[italic(t)]),
main = expression(paste(phi, " = 0.9")))
```
Figure 4\.22: Time series of simulated AR(1\) processes with \\(\\phi\=0\.1\\) (left) and \\(\\phi\=0\.9\\) (right).
What do you notice about the two plots in Figure [4\.22](sec-tslab-autoregressive-ar-models.html#fig:tslab-plotAR1contrast)? It looks like the time series with the smaller AR coefficient is more “choppy” and seems to stay closer to 0 whereas the time series with the larger AR coefficient appears to wander around more. Remember that as the coefficient in an AR(1\) model goes to 0, the model approaches a WN sequence, which is stationary in both the mean and variance. As the coefficient goes to 1, however, the model approaches a random walk, which is not stationary in either the mean or variance.
Next, let’s generate two AR(1\) models that have coefficients of the same magnitude but opposite signs, and compare their behavior.
```
set.seed(123)
## list description for AR(1) model with small coef
AR_pos <- list(order = c(1, 0, 0), ar = 0.5)
## list description for AR(1) model with large coef
AR_neg <- list(order = c(1, 0, 0), ar = -0.5)
## simulate AR(1)
AR1_pos <- arima.sim(n = 50, model = AR_pos, sd = 0.1)
AR1_neg <- arima.sim(n = 50, model = AR_neg, sd = 0.1)
```
OK, let’s plot the 2 simulated series.
```
## setup plot region
par(mfrow = c(1, 2))
## get y-limits for common plots
ylm <- c(min(AR1_pos, AR1_neg), max(AR1_pos, AR1_neg))
## plot the ts
plot.ts(AR1_pos, ylim = ylm, ylab = expression(italic(x)[italic(t)]),
main = expression(paste(phi[1], " = 0.5")))
plot.ts(AR1_neg, ylab = expression(italic(x)[italic(t)]), main = expression(paste(phi[1],
" = -0.5")))
```
Figure 4\.23: Time series of simulated AR(1\) processes with \\(\\phi\_1\=0\.5\\) (left) and \\(\\phi\_1\=\-0\.5\\) (right).
Now it appears like both time series vary around the mean by about the same amount, but the model with the negative coefficient produces a much more “sawtooth” time series. It turns out that any AR(1\) model with \\(\-1\<\\phi\<0\\) will exhibit the 2\-point oscillation you see here.
We can simulate higher order AR(\\(p\\)) models in the same manner, but care must be exercised when choosing a set of coefficients that result in a stationary model or else `arima.sim()` will fail and report an error. For example, an AR(2\) model with both coefficients equal to 0\.5 is not stationary, and therefore this function call will not work:
```
arima.sim(n = 100, model = list(order(2, 0, 0), ar = c(0.5, 0.5)))
```
If you try, R will respond that the “`'ar' part of model is not stationary`.”
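If you want to check a candidate set of AR coefficients before calling `arima.sim()`, one option (not shown in the lab) is to inspect the roots of the AR characteristic polynomial with base R’s `polyroot()`:
```
## an AR(p) model is stationary only if all roots of
## 1 - phi_1*z - ... - phi_p*z^p lie outside the unit circle
ar_coefs <- c(0.5, 0.5)
Mod(polyroot(c(1, -ar_coefs)))
## one root has modulus exactly 1 (a unit root), so this AR(2) is not stationary
```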
### 4\.7\.2 Correlation structure of AR(\\(p\\)) processes
Let’s review what we learned in lecture about the general behavior of the ACF and PACF for AR(\\(p\\)) models. To do so, we’ll simulate four stationary AR(\\(p\\)) models of increasing order \\(p\\) and then examine their ACF’s and PACF’s. Let’s use a really big \\(n\\) so as to make them “pure,” which will provide a much better estimate of the correlation structure.
```
set.seed(123)
## the 4 AR coefficients
AR_p_coef <- c(0.7, 0.2, -0.1, -0.3)
## empty list for storing models
AR_mods <- list()
## loop over orders of p
for (p in 1:4) {
## assume sd = 1, so not specified
AR_mods[[p]] <- arima.sim(n = 10000, list(ar = AR_p_coef[1:p]))
}
```
Now that we have our four AR(\\(p\\)) models, let’s look at plots of the time series, ACF’s, and PACF’s.
```
## set up plot region
par(mfrow = c(4, 3))
## loop over orders of p
for (p in 1:4) {
plot.ts(AR_mods[[p]][1:50], ylab = paste("AR(", p, ")", sep = ""))
acf(AR_mods[[p]], lag.max = 12)
pacf(AR_mods[[p]], lag.max = 12, ylab = "PACF")
}
```
Figure 4\.24: Time series of simulated AR(\\(p\\)) processes (left column) of increasing orders from 1\-4 (rows) with their associated ACF’s (center column) and PACF’s (right column). Note that only the first 50 values of \\(x\_t\\) are plotted.
As we saw in lecture and is evident from our examples shown in Figure [4\.24](sec-tslab-autoregressive-ar-models.html#fig:tslab-plotAR-p-coefComps), the ACF for an AR(\\(p\\)) process tails off toward zero very slowly, but the PACF goes to zero for lags \> \\(p\\). This is an important diagnostic tool when trying to identify the order of \\(p\\) in ARMA(\\(p,q\\)) models.
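As an aside (not part of the lab), you can also compute the correlation structure implied by a set of coefficients directly, without any simulation noise, using `ARMAacf()` from the **stats** package. A sketch for the AR(2\) case defined above:
```
## theoretical ACF and PACF for the AR(2) model used above
round(ARMAacf(ar = AR_p_coef[1:2], lag.max = 12), 3)
round(ARMAacf(ar = AR_p_coef[1:2], lag.max = 12, pacf = TRUE), 3)
## the PACF is zero (to numerical precision) for all lags > 2
```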
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-tslab-moving-average-ma-models.html |
4\.8 Moving\-average (MA) models
--------------------------------
A moving\-average process of order \\(q\\), or MA(\\(q\\)), is a weighted sum of the current random error plus the \\(q\\) most recent errors, and can be written as
\\\[\\begin{equation}
\\tag{4\.22}
x\_t \= w\_t \+ \\theta\_1 w\_{t\-1} \+ \\theta\_2 w\_{t\-2} \+ \\dots \+ \\theta\_q w\_{t\-q},
\\end{equation}\\]
where \\(\\{w\_t\\}\\) is a white noise sequence with zero mean and some variance \\(\\sigma^2\\); for our purposes we usually assume that \\(w\_t \\sim \\text{N}(0,q)\\). Of particular note is that because MA processes are finite sums of stationary errors, they themselves are stationary.
Of interest to us are so\-called “invertible” MA processes that can be expressed as an infinite AR process with no error term. The term invertible comes from the inversion of the backshift operator (**B**) that we discussed in class (*i.e.*, \\(\\mathbf{B} x\_t\= x\_{t\-1}\\)). So, for example, an MA(1\) process with \\(\\lvert \\theta \\rvert \< 1\\) is invertible because it can be written using the backshift operator as
\\\[\\begin{equation}
\\tag{4\.23}
\\begin{aligned}
x\_t \&\= w\_t \- \\theta w\_{t\-1} \\\\
x\_t \&\= w\_t \- \\theta \\mathbf{B} w\_t \\\\
x\_t \&\= (1 \- \\theta \\mathbf{B}) w\_t, \\\\
\&\\Downarrow \\\\
w\_t \&\= \\frac{1}{(1 \- \\theta \\mathbf{B})} x\_t \\\\
w\_t \&\= (1 \+ \\theta \\mathbf{B} \+ \\theta^2 \\mathbf{B}^2 \+ \\theta^3 \\mathbf{B}^3 \+ \\dots) x\_t \\\\
w\_t \&\= x\_t \+ \\theta x\_{t\-1} \+ \\theta^2 x\_{t\-2} \+ \\theta^3 x\_{t\-3} \+ \\dots
\\end{aligned}
\\end{equation}\\]
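To make this inversion concrete, here is a small numerical sketch (not from the original lab) showing that an invertible MA(1\) series can be unwound back into its white noise errors with a recursive filter; the value of `theta` (0\.5\) is an arbitrary choice:
```
set.seed(42)
theta <- 0.5
nn <- 200
ww <- rnorm(nn)
## simulate x_t = w_t - theta * w_{t-1} as in Equation (4.23), taking w_0 = 0
xx <- ww - theta * c(0, ww[-nn])
## invert via the recursion w_t = x_t + theta * w_{t-1}
ww_hat <- as.numeric(stats::filter(xx, filter = theta, method = "recursive"))
## the original errors are recovered (up to numerical precision)
all.equal(ww_hat, ww)
```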
### 4\.8\.1 Simulating an MA(\\(q\\)) process
We can simulate MA(\\(q\\)) processes just as we did for AR(\\(p\\)) processes using `arima.sim()`. Here are 3 different ones with contrasting \\(\\theta\\)’s:
```
set.seed(123)
## list description for MA(1) model with small coef
MA_sm <- list(order = c(0, 0, 1), ma = 0.2)
## list description for MA(1) model with large coef
MA_lg <- list(order = c(0, 0, 1), ma = 0.8)
## list description for MA(1) model with large coef
MA_neg <- list(order = c(0, 0, 1), ma = -0.5)
## simulate MA(1)
MA1_sm <- arima.sim(n = 50, model = MA_sm, sd = 0.1)
MA1_lg <- arima.sim(n = 50, model = MA_lg, sd = 0.1)
MA1_neg <- arima.sim(n = 50, model = MA_neg, sd = 0.1)
```
with their associated plots.
```
## setup plot region
par(mfrow = c(1, 3))
## plot the ts
plot.ts(MA1_sm, ylab = expression(italic(x)[italic(t)]), main = expression(paste(theta,
" = 0.2")))
plot.ts(MA1_lg, ylab = expression(italic(x)[italic(t)]), main = expression(paste(theta,
" = 0.8")))
plot.ts(MA1_neg, ylab = expression(italic(x)[italic(t)]), main = expression(paste(theta,
" = -0.5")))
```
Figure 4\.25: Time series of simulated MA(1\) processes with \\(\\theta\=0\.2\\) (left), \\(\\theta\=0\.8\\) (middle), and \\(\\theta\=\-0\.5\\) (right).
In contrast to AR(1\) processes, MA(1\) models do not exhibit radically different behavior with changing \\(\\theta\\). This should not be too surprising given that they are simply linear combinations of white noise.
### 4\.8\.2 Correlation structure of MA(\\(q\\)) processes
We saw in lecture and above how the ACF and PACF have distinctive features for AR(\\(p\\)) models, and they do for MA(\\(q\\)) models as well. Here are examples of four MA(\\(q\\)) processes. As before, we’ll use a really big \\(n\\) so as to make them “pure,” which will provide a much better estimate of the correlation structure.
```
set.seed(123)
## the 4 MA coefficients
MA_q_coef <- c(0.7, 0.2, -0.1, -0.3)
## empty list for storing models
MA_mods <- list()
## loop over orders of q
for (q in 1:4) {
## assume sd = 1, so not specified
MA_mods[[q]] <- arima.sim(n = 1000, list(ma = MA_q_coef[1:q]))
}
```
Now that we have our four MA(\\(q\\)) models, let’s look at plots of the time series, ACF’s, and PACF’s.
```
## set up plot region
par(mfrow = c(4, 3))
## loop over orders of q
for (q in 1:4) {
plot.ts(MA_mods[[q]][1:50], ylab = paste("MA(", q, ")", sep = ""))
acf(MA_mods[[q]], lag.max = 12)
pacf(MA_mods[[q]], lag.max = 12, ylab = "PACF")
}
```
Figure 4\.26: Time series of simulated MA(\\(q\\)) processes (left column) of increasing orders from 1\-4 (rows) with their associated ACF’s (center column) and PACF’s (right column). Note that only the first 50 values of \\(x\_t\\) are plotted.
Note very little qualitative difference in the realizations of the four MA(\\(q\\)) processes (Figure [4\.26](sec-tslab-moving-average-ma-models.html#fig:tslab-plotMApComps)). As we saw in lecture and is evident from our examples here, however, the ACF for an MA(\\(q\\)) process goes to zero for lags \> \\(q\\), but the PACF tails off toward zero very slowly. This is an important diagnostic tool when trying to identify the order of \\(q\\) in ARMA(\\(p,q\\)) models.
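As with the AR(\\(p\\)) case, the cut\-off in the ACF can be seen without simulation noise via `ARMAacf()` (an aside, not part of the lab):
```
## theoretical ACF for the MA(2) model used above: zero beyond lag 2
round(ARMAacf(ma = MA_q_coef[1:2], lag.max = 6), 3)
```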
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-tslab-autoregressive-moving-average-arma-models.html |
4\.9 Autoregressive moving\-average (ARMA) models
-------------------------------------------------
ARMA(\\(p,q\\)) models have a rich history in the time series literature, but they are not nearly as common in ecology as plain AR(\\(p\\)) models. As we discussed in lecture, both the ACF and PACF are important tools when trying to identify the appropriate order of \\(p\\) and \\(q\\). Here we will see how to simulate time series from AR(\\(p\\)), MA(\\(q\\)), and ARMA(\\(p,q\\)) processes, as well as fit time series models to data based on insights gathered from the ACF and PACF.
We can write an ARMA(\\(p,q\\)) as a mixture of AR(\\(p\\)) and MA(\\(q\\)) models, such that
\\\[\\begin{equation}
\\tag{4\.24}
x\_t \= \\phi\_1x\_{t\-1} \+ \\phi\_2x\_{t\-2} \+ \\dots \+ \\phi\_p x\_{t\-p} \+ w\_t \+ \\theta\_1 w\_{t\-1} \+ \\theta\_2 w\_{t\-2} \+ \\dots \+ \\theta\_q w\_{t\-q},
\\end{equation}\\]
and the \\(w\_t\\) are white noise.
### 4\.9\.1 Fitting ARMA(\\(p,q\\)) models with `arima()`
We have already seen how to simulate AR(\\(p\\)) and MA(\\(q\\)) models with `arima.sim()`; the same concepts apply to ARMA(\\(p,q\\)) models and therefore we will not do that here. Instead, we will move on to fitting ARMA(\\(p,q\\)) models when we only have a realization of the process (*i.e.*, data) and do not know the underlying parameters that generated it.
The function `arima()` accepts a number of arguments, but two of them are most important:
* `x`: a univariate time series
* `order`: a vector of length 3 specifying the order of ARIMA(p,d,q) model
In addition, note that by default `arima()` will estimate an underlying mean of the time series unless \\(d\>0\\). For example, an AR(1\) process with mean \\(\\mu\\) would be written
\\\[\\begin{equation}
\\tag{4\.25}
x\_t \= \\mu \+ \\phi (x\_{t\-1} \- \\mu) \+ w\_t.
\\end{equation}\\]
If you know for a fact that the time series data have a mean of zero (*e.g.*, you already subtracted the mean from them), you should include the argument `include.mean = FALSE`, which is set to `TRUE` by default. Note that ignoring and not estimating a mean in ARMA(\\(p,q\\)) models when one exists will bias the estimates of all other parameters.
Let’s see an example of how `arima()` works. First we’ll simulate an ARMA(2,2\) model and then estimate the parameters to see how well we can recover them. In addition, we’ll add in a constant to create a non\-zero mean, which `arima()` reports as `intercept` in its output.
```
set.seed(123)
## ARMA(2,2) description for arima.sim()
ARMA22 <- list(order = c(2, 0, 2), ar = c(-0.7, 0.2), ma = c(0.7,
0.2))
## mean of process
mu <- 5
## simulated process (+ mean)
ARMA_sim <- arima.sim(n = 10000, model = ARMA22) + mu
## estimate parameters
arima(x = ARMA_sim, order = c(2, 0, 2))
```
```
Call:
arima(x = ARMA_sim, order = c(2, 0, 2))
Coefficients:
ar1 ar2 ma1 ma2 intercept
-0.7079 0.1924 0.6912 0.2001 4.9975
s.e. 0.0291 0.0284 0.0289 0.0236 0.0125
sigma^2 estimated as 0.9972: log likelihood = -14175.92, aic = 28363.84
```
It looks like we were pretty good at estimating the true parameters, but our sample size was admittedly quite large; the estimate of the variance of the process errors is reported as `sigma^2` below the other coefficients. As an exercise, try decreasing the length of time series in the `arima.sim()` call above from 10,000 to something like 100 and see what effect it has on the parameter estimates.
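A minimal sketch of that exercise, reusing the `ARMA22` and `mu` objects defined above (output not shown):
```
## same ARMA(2,2) model, but a much shorter time series
set.seed(123)
ARMA_sim_small <- arima.sim(n = 100, model = ARMA22) + mu
arima(x = ARMA_sim_small, order = c(2, 0, 2))
## expect larger standard errors and estimates further from the true values
```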
### 4\.9\.2 Searching over model orders
In an ideal situation, you could examine the ACF and PACF of the time series of interest and immediately decipher what orders of \\(p\\) and \\(q\\) must have generated the data, but that doesn’t always work in practice. Instead, we are often left with the task of searching over several possible model forms and seeing which of them provides the most parsimonious fit to the data. There are two easy ways to do this for ARIMA models in R. The first is to write a little script that loops over the possible dimensions of \\(p\\) and \\(q\\). Let’s try that for the process we simulated above and search over orders of \\(p\\) and \\(q\\) from 0\-3 (it will take a few moments to run and will likely report a warning about a “`possible convergence problem`,” which you can ignore).
```
## empty list to store model fits
ARMA_res <- list()
## set counter
cc <- 1
## loop over AR
for (p in 0:3) {
## loop over MA
for (q in 0:3) {
ARMA_res[[cc]] <- arima(x = ARMA_sim, order = c(p, 0,
q))
cc <- cc + 1
}
}
```
```
Warning in arima(x = ARMA_sim, order = c(p, 0, q)): possible convergence
problem: optim gave code = 1
```
```
## get AIC values for model evaluation
ARMA_AIC <- sapply(ARMA_res, function(x) x$aic)
## model with lowest AIC is the best
ARMA_res[[which(ARMA_AIC == min(ARMA_AIC))]]
```
```
Call:
arima(x = ARMA_sim, order = c(p, 0, q))
Coefficients:
ar1 ar2 ma1 ma2 intercept
-0.7079 0.1924 0.6912 0.2001 4.9975
s.e. 0.0291 0.0284 0.0289 0.0236 0.0125
sigma^2 estimated as 0.9972: log likelihood = -14175.92, aic = 28363.84
```
It looks like our search worked, so let’s look at the other method for fitting ARIMA models. The `auto.arima()` function in the **forecast** package will conduct an automatic search over all possible orders of ARIMA models that you specify. For details, type `?auto.arima` after loading the package. Let’s repeat our search using the same criteria.
```
## find best ARMA(p,q) model
auto.arima(ARMA_sim, start.p = 0, max.p = 3, start.q = 0, max.q = 3)
```
```
Series: ARMA_sim
ARIMA(2,0,2) with non-zero mean
Coefficients:
ar1 ar2 ma1 ma2 mean
-0.7079 0.1924 0.6912 0.2001 4.9975
s.e. 0.0291 0.0284 0.0289 0.0236 0.0125
sigma^2 estimated as 0.9977: log likelihood=-14175.92
AIC=28363.84 AICc=28363.84 BIC=28407.1
```
We get the same results with an increase in speed and less coding, which is nice. If you want to see the form for each of the models checked by `auto.arima()` and their associated AIC values, include the argument `trace = 1`.
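For example (output not shown), a sketch of that call:
```
## print each candidate model and its information criterion as the search proceeds
auto.arima(ARMA_sim, start.p = 0, max.p = 3, start.q = 0, max.q = 3, trace = 1)
```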
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-tslab-problems.html |
4\.10 Problems
--------------
We have seen how to do a variety of introductory time series analyses with R. Now it is your turn to apply the information you learned here and in lecture to complete some analyses. You have been asked by a colleague to help analyze some time series data she collected as part of an experiment on the effects of light and nutrients on the population dynamics of phytoplankton. Specifically, after controlling for differences in light and temperature, she wants to know if the natural log of population density can be modeled with some form of ARMA(\\(p,q\\)) model.
The data are expressed as the number of cells per milliliter recorded every hour for one week beginning at 8:00 AM on December 1, 2014\. You can load the data using
```
data(hourlyphyto, package = "atsalibrary")
phyto_dat <- hourlyphyto
```
Use the information above to do the following:
1. Convert `phyto_dat`, which is a **data.frame** object, into a **ts** object. This bit of code might be useful to get you started:
```
## what day of 2014 is Dec 1st?
date_begin <- as.Date("2014-12-01")
day_of_year <- (date_begin - as.Date("2014-01-01") + 1)
```
2. Plot the time series of phytoplankton density and provide a brief description of any notable features.
3. Although you do not have the actual measurements for the specific temperature and light regimes used in the experiment, you have been informed that they follow a regular light/dark period with accompanying warm/cool temperatures. Thus, estimating a fixed seasonal effect is justifiable. Also, the instrumentation is precise enough to preclude any systematic change in measurements over time (*i.e.*, you can assume \\(m\_t \= 0\\) for all \\(t\\)). Obtain the time series of the estimated log\-density of phytoplankton absent any hourly effects caused by variation in temperature or light. (*Hint*: You will need to do some decomposition.)
4. Use diagnostic tools to identify the possible order(s) of ARMA model(s) that most likely describes the log of population density for this particular experiment. Note that at this point you should be focusing your analysis on the results obtained in Question 3\.
5. Use some form of search to identify what form of ARMA(\\(p,q\\)) model best describes the log of population density for this particular experiment. Use what you learned in Question 4 to inform possible orders of \\(p\\) and \\(q\\). (*Hint*: if you use `auto.arima()`, include the additional argument `seasonal = FALSE`)
6. Write out the best model in the form of Equation [(4\.24\)](sec-tslab-autoregressive-moving-average-arma-models.html#eq:ARMAdefn) using the underscore notation to refer to subscripts (*e.g.*, write `x_t` for \\(x\_t\\)). You can round any parameters/coefficients to the nearest hundredth. (*Hint*: if the mean of the time series is not zero, refer to Eqn 1\.27 in the lab handout).
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/chap-boxjenkins-.html |
Chapter 5 Box\-Jenkins method
=============================
In this chapter, you will practice selecting and fitting an ARIMA model to catch data using the Box\-Jenkins method. After fitting a model, you will prepare simple forecasts using the **forecast** package.
A script with all the R code in the chapter can be downloaded [here](./Rcode/box-jenkins.R). The Rmd for this chapter can be downloaded [here](./Rmds/box-jenkins.Rmd)
### Data and packages
We will use two data sets for this chapter: the catch landings from Greek waters (`greeklandings`) and the Chinook landings in Washington (`chinook`). Both datasets are in the **atsalibrary** package on GitHub. Install it using the **devtools** package.
```
library(devtools)
# Windows users will likely need to set this
# Sys.setenv('R_REMOTES_NO_ERRORS_FROM_WARNINGS' = 'true')
devtools::install_github("nwfsc-timeseries/atsalibrary")
```
Load the data.
```
data(greeklandings, package = "atsalibrary")
landings <- greeklandings
# Use the monthly data
data(chinook, package = "atsalibrary")
chinook <- chinook.month
```
Ensure you have the necessary packages.
```
library(ggplot2)
library(gridExtra)
library(reshape2)
library(tseries)
library(urca)
library(forecast)
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-boxjenkins-intro.html |
5\.1 Box\-Jenkins method
------------------------
A. Model form selection
1. Evaluate stationarity
2. Selection of the differencing level (d) – to fix stationarity problems
3. Selection of the AR level (p)
4. Selection of the MA level (q)
B. Parameter estimation
C. Model checking
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-boxjenkins-stationarity.html |
5\.2 Stationarity
-----------------
It is important to test and transform (via differencing) your data to ensure stationarity when fitting an ARMA model using standard algorithms. The standard algorithms for ARIMA models assume stationarity, and we will be using those algorithms. It is possible to fit ARMA models without transforming the data; we will cover that in later chapters. However, that is not commonly done in the literature on forecasting with ARMA models, certainly not in the literature on catch forecasting.
Keep in mind also that many ARMA models are stationary and you do not want to get in the situation of trying to fit an incompatible process model to your data. We will see examples of this when we start fitting models to non\-stationary data and random walks.
### 5\.2\.1 Look at stationarity in simulated data
We will start by looking at white noise and a stationary AR(1\) process from simulated data. White noise is simply a string of random numbers drawn from a Normal distribution. `rnorm()` will return random numbers drawn from a Normal distribution. Use `?rnorm` to understand what the function requires.
```
TT <- 100
y <- rnorm(TT, mean = 0, sd = 1) # 100 random numbers
op <- par(mfrow = c(1, 2))
plot(y, type = "l")
acf(y)
```
```
par(op)
```
Here we use `ggplot()` to plot 10 white noise time series.
```
dat <- data.frame(t = 1:TT, y = y)
p1 <- ggplot(dat, aes(x = t, y = y)) + geom_line() + ggtitle("1 white noise time series") +
xlab("") + ylab("value")
ys <- matrix(rnorm(TT * 10), TT, 10)
ys <- data.frame(ys)
ys$id = 1:TT
ys2 <- melt(ys, id.var = "id")
p2 <- ggplot(ys2, aes(x = id, y = value, group = variable)) +
geom_line() + xlab("") + ylab("value") + ggtitle("10 white noise processes")
grid.arrange(p1, p2, ncol = 1)
```
These are stationary because the variance and mean (level) do not change with time.
An AR(1\) process is also stationary.
```
theta <- 0.8
nsim <- 10
ar1 <- arima.sim(TT, model = list(ar = theta))
plot(ar1)
```
We can use ggplot to plot 10 AR(1\) time series, but we need to change the data to a data frame.
```
dat <- data.frame(t = 1:TT, y = ar1)
p1 <- ggplot(dat, aes(x = t, y = y)) + geom_line() + ggtitle("AR-1") +
xlab("") + ylab("value")
ys <- matrix(0, TT, nsim)
for (i in 1:nsim) ys[, i] <- as.vector(arima.sim(TT, model = list(ar = theta)))
ys <- data.frame(ys)
ys$id <- 1:TT
ys2 <- melt(ys, id.var = "id")
p2 <- ggplot(ys2, aes(x = id, y = value, group = variable)) +
geom_line() + xlab("") + ylab("value") + ggtitle("The variance of an AR-1 process is steady")
grid.arrange(p1, p2, ncol = 1)
```
```
Don't know how to automatically pick scale for object of type ts. Defaulting to continuous.
```
### 5\.2\.2 Stationary around a linear trend
Fluctuating around a linear trend is a very common type of stationarity used in ARMA modeling and forecasting. This is just a stationary process, like white noise or AR(1\), around a linear trend up or down.
```
intercept <- 0.5
trend <- 0.1
sd <- 0.5
TT <- 20
wn <- rnorm(TT, sd = sd) # white noise
wni <- wn + intercept # white noise with intercept
wnti <- wn + trend * (1:TT) + intercept
```
See how the white noise with trend is just the white noise overlaid on a linear trend.
```
op <- par(mfrow = c(1, 3))
plot(wn, type = "l")
plot(trend * 1:TT)
plot(wnti, type = "l")
```
```
par(op)
```
We can make a similar plot with ggplot.
```
dat <- data.frame(t = 1:TT, wn = wn, wni = wni, wnti = wnti)
p1 <- ggplot(dat, aes(x = t, y = wn)) + geom_line() + ggtitle("White noise")
p2 <- ggplot(dat, aes(x = t, y = wni)) + geom_line() + ggtitle("with non-zero mean")
p3 <- ggplot(dat, aes(x = t, y = wnti)) + geom_line() + ggtitle("with linear trend")
grid.arrange(p1, p2, p3, ncol = 3)
```
We can make a similar plot with AR(1\) data. Ignore the warnings about not knowing how to pick the scale.
```
beta1 <- 0.8
ar1 <- arima.sim(TT, model = list(ar = beta1), sd = sd)
ar1i <- ar1 + intercept
ar1ti <- ar1 + trend * (1:TT) + intercept
dat <- data.frame(t = 1:TT, ar1 = ar1, ar1i = ar1i, ar1ti = ar1ti)
p4 <- ggplot(dat, aes(x = t, y = ar1)) + geom_line() + ggtitle("AR1")
p5 <- ggplot(dat, aes(x = t, y = ar1i)) + geom_line() + ggtitle("with non-zero mean")
p6 <- ggplot(dat, aes(x = t, y = ar1ti)) + geom_line() + ggtitle("with linear trend")
grid.arrange(p4, p5, p6, ncol = 3)
```
```
Don't know how to automatically pick scale for object of type ts. Defaulting to continuous.
Don't know how to automatically pick scale for object of type ts. Defaulting to continuous.
Don't know how to automatically pick scale for object of type ts. Defaulting to continuous.
```
### 5\.2\.3 Greek landing data
We will look at the anchovy data. Notice the two `==` in the subset call, not one `=`. We will use the Greek data before 1989 for the lab.
```
anchovy <- subset(landings, Species == "Anchovy" & Year <= 1989)$log.metric.tons
anchovyts <- ts(anchovy, start = 1964)
```
Plot the data.
```
plot(anchovyts, ylab = "log catch")
```
Questions to ask.
* Does it have a trend (goes up or down)? Yes, definitely
* Does it have a non\-zero mean? Yes
* Does it look like it might be stationary around a trend? Maybe
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-boxjenkins-aug-dickey-fuller.html |
5\.3 Dickey\-Fuller and Augmented Dickey\-Fuller tests
------------------------------------------------------
### 5\.3\.1 Dickey\-Fuller test
The Dickey\-Fuller test is testing if \\(\\phi\=0\\) in this model of the data:
\\\[y\_t \= \\alpha \+ \\beta t \+ \\phi y\_{t\-1} \+ e\_t\\]
which is written as
\\\[\\Delta y\_t \= y\_t\-y\_{t\-1}\= \\alpha \+ \\beta t \+ \\gamma y\_{t\-1} \+ e\_t\\]
where \\(y\_t\\) is your data. It is written this way so we can do a linear regression of \\(\\Delta y\_t\\) against \\(t\\) and \\(y\_{t\-1}\\) and test if \\(\\gamma\\) is different from 0\. If \\(\\gamma\=0\\), then we have a random walk process. If not and \\(\-1\<1\+\\gamma\<1\\), then we have a stationary process.
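To see this regression form concretely, here is a by\-hand sketch (not the lab’s approach, and for illustration only, since the usual t critical values do not apply to this test statistic; the functions below use the proper critical values):
```
## a manual Dickey-Fuller style regression on a simulated random walk
TT <- 100
y <- cumsum(rnorm(TT)) # a random walk, so gamma should be near zero
dy <- diff(y) # Delta y_t
ylag <- y[-TT] # y_{t-1}
tt <- 2:TT # time index
coef(summary(lm(dy ~ tt + ylag)))["ylag", ] # the row for gamma
```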
### 5\.3\.2 Augmented Dickey\-Fuller test
The Augmented Dickey\-Fuller test allows for higher\-order autoregressive processes by including \\(\\Delta y\_{t\-p}\\) in the model. But our test is still if \\(\\gamma \= 0\\).
\\\[\\Delta y\_t \= \\alpha \+ \\beta t \+ \\gamma y\_{t\-1} \+ \\delta\_1 \\Delta y\_{t\-1} \+ \\delta\_2 \\Delta y\_{t\-2} \+ \\dots\\]
The null hypothesis for both tests is that the data are non\-stationary. We want to REJECT the null hypothesis for this test, so we want a p\-value less than 0\.05 (or smaller).
### 5\.3\.3 ADF test using `adf.test()`
The `adf.test()` from the **tseries** package will do an Augmented Dickey\-Fuller test (Dickey\-Fuller if we set lags equal to 0\) with a trend and an intercept. Use `?adf.test` to read about this function. The function is
```
adf.test(x, alternative = c("stationary", "explosive"),
k = trunc((length(x)-1)^(1/3)))
```
`x` are your data. `alternative="stationary"` means that \\(\-2\<\\gamma\<0\\) (\\(\-1\<\\phi\<1\\)) and `alternative="explosive"` means that it is outside these bounds. `k` is the number of \\(\\delta\\) lags. For a Dickey\-Fuller test, which allows only up to AR(1\) time dependency in our stationary process, we set `k=0` so we have no \\(\\delta\\)’s in our test. Being able to control the lags in our test allows us to avoid a stationarity test that is too complex to be supported by our data.
#### 5\.3\.3\.1 Test on white noise
Let’s start by doing the test on data that we know are stationary, white noise. We will use an Augmented Dickey\-Fuller test where we use the default number of lags (amount of time\-dependency) in our test. For a time\-series of 100, this is 4\.
```
TT <- 100
wn <- rnorm(TT) # white noise
tseries::adf.test(wn)
```
```
Warning in tseries::adf.test(wn): p-value smaller than printed p-value
```
```
Augmented Dickey-Fuller Test
data: wn
Dickey-Fuller = -4.8309, Lag order = 4, p-value = 0.01
alternative hypothesis: stationary
```
The null hypothesis is rejected.
Try a Dickey\-Fuller test. Here the alternative hypothesis allows only AR(1\) stationarity, whereas with the default `k` the alternative allowed higher\-order AR stationarity.
```
tseries::adf.test(wn, k = 0)
```
```
Warning in tseries::adf.test(wn, k = 0): p-value smaller than printed p-value
```
```
Augmented Dickey-Fuller Test
data: wn
Dickey-Fuller = -10.122, Lag order = 0, p-value = 0.01
alternative hypothesis: stationary
```
Notice that the test\-statistic is smaller (more negative). This is a more restrictive test, and the null hypothesis is rejected even more strongly (at a more stringent significance level).
#### 5\.3\.3\.2 Test on white noise with trend
Try the test on white noise with a trend and intercept.
```
intercept <- 1
wnt <- wn + 1:TT + intercept
tseries::adf.test(wnt)
```
```
Warning in tseries::adf.test(wnt): p-value smaller than printed p-value
```
```
Augmented Dickey-Fuller Test
data: wnt
Dickey-Fuller = -4.8309, Lag order = 4, p-value = 0.01
alternative hypothesis: stationary
```
The null hypothesis is still rejected. `adf.test()` uses a model that allows an intercept and trend.
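As an aside (a hedged sketch, not part of the original lab): with the same trend\-plus\-intercept model and the same number of lags, the statistic from `adf.test()` should match the `tau3` statistic from `ur.df()`.
```
# For these data adf.test() used a trend, an intercept, and 4 lags, so the
# comparable urca call is ur.df() with type = "trend" and lags = 4.
tseries::adf.test(wnt)$statistic
urca::ur.df(wnt, type = "trend", lags = 4)@teststat[1]  # tau3 statistic
```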
#### 5\.3\.3\.3 Test on random walk
Let’s try the test on a random walk (nonstationary).
```
rw <- cumsum(rnorm(TT))
tseries::adf.test(rw)
```
```
Augmented Dickey-Fuller Test
data: rw
Dickey-Fuller = -2.3038, Lag order = 4, p-value = 0.4508
alternative hypothesis: stationary
```
The null hypothesis is NOT rejected as the p\-value is greater than 0\.05\.
Try a Dickey\-Fuller test.
```
tseries::adf.test(rw, k = 0)
```
```
Augmented Dickey-Fuller Test
data: rw
Dickey-Fuller = -1.7921, Lag order = 0, p-value = 0.6627
alternative hypothesis: stationary
```
Notice that the test\-statistic is larger.
#### 5\.3\.3\.4 Test the anchovy data
```
tseries::adf.test(anchovyts)
```
```
Augmented Dickey-Fuller Test
data: anchovyts
Dickey-Fuller = -1.6851, Lag order = 2, p-value = 0.6923
alternative hypothesis: stationary
```
The p\-value is greater than 0\.05\. We cannot reject the null hypothesis. The null hypothesis is that the data are non\-stationary.
### 5\.3\.4 ADF test using `ur.df()`
The `ur.df()` Augmented Dickey\-Fuller test in the **urca** package gives us a bit more information on and control over the test.
```
ur.df(y, type = c("none", "drift", "trend"), lags = 1,
selectlags = c("Fixed", "AIC", "BIC"))
```
The `ur.df()` function allows us to specify whether to test stationarity around a zero\-mean with no trend, around a non\-zero mean with no trend, or around a trend with an intercept. This can be useful when we know that our data have no trend, for example if you have removed the trend already. `ur.df()` allows us to specify the lags or select them using model selection.
#### 5\.3\.4\.1 Test on white noise
Let’s first do the test on data we know are stationary, white noise. We have to choose the `type` and `lags`. If you have no particular reason not to include an intercept and trend, then use `type="trend"`. This allows both an intercept and a trend. When might you have a particular reason not to use `"trend"`? When you have removed the trend and/or intercept.
Next you need to choose the `lags`. We will use `lags=0` to do the Dickey\-Fuller test. Note the number of lags you can test will depend on the amount of data that you have. `adf.test()` used a default of `trunc((length(x)-1)^(1/3))` for the lags, but `ur.df()` requires that you pass in a value or use its fixed default of 1\.
`lags=0` is fitting the following model to the data:
`z.diff = gamma * z.lag.1 + intercept + trend * tt`
`z.diff` means \\(\\Delta y\_t\\) and `z.lag.1` is \\(y\_{t\-1}\\). You are testing if the effect for `z.lag.1` is 0\.
When you use `summary()` for the output from `ur.df()`, you will see the estimated values for \\(\\gamma\\) (denoted `z.lag.1`), intercept and trend. If you see `***` or `**` on the coefficients list for `z.lag.1`, it suggests that the effect of `z.lag.1` is significantly different from 0, which supports the assumption of stationarity. However, the significance level shown is for independent data, not time series data. The correct test levels (critical values) are shown at the bottom of the summary output.
```
wn <- rnorm(TT)
test <- urca::ur.df(wn, type = "trend", lags = 0)
urca::summary(test)
```
```
###############################################
# Augmented Dickey-Fuller Test Unit Root Test #
###############################################
Test regression trend
Call:
lm(formula = z.diff ~ z.lag.1 + 1 + tt)
Residuals:
Min 1Q Median 3Q Max
-2.2170 -0.6654 -0.1210 0.5311 2.6277
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.0776865 0.2037709 0.381 0.704
z.lag.1 -1.0797598 0.1014244 -10.646 <2e-16 ***
tt 0.0004891 0.0035321 0.138 0.890
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.004 on 96 degrees of freedom
Multiple R-squared: 0.5416, Adjusted R-squared: 0.532
F-statistic: 56.71 on 2 and 96 DF, p-value: < 2.2e-16
Value of test-statistic is: -10.646 37.806 56.7083
Critical values for test statistics:
1pct 5pct 10pct
tau3 -4.04 -3.45 -3.15
phi2 6.50 4.88 4.16
phi3 8.73 6.49 5.47
```
Note `urca::` in front of `summary()` is needed if you have not loaded the urca package with `library(urca)`.
We need to look at information at the bottom of the summary output for the test statistics and critical values. The part that looks like this
```
Value of test-statistic is: #1 #2 #3
Critical values for test statistics:
1pct 5pct 10pct
tau3 xxx xxx xxx
...
```
The first test statistic number is for \\(\\gamma\=0\\) and will be labeled `tau`, `tau2` or `tau3`.
In our example with white noise, notice that the test statistic is LESS than the critical value for `tau3` at 5 percent. This means the null hypothesis is rejected at \\(\\alpha\=0\.05\\), a standard level for significance testing.
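If you want to make that comparison in code rather than by eye, the statistics and critical values are stored in slots of the object returned by `ur.df()`. A small sketch, assuming the `test` object fit above:
```
# ur.df() returns an S4 object, so its slots are accessed with @.
test@teststat                                 # tau3, phi2, phi3 statistics
test@cval                                     # critical values at 1%, 5%, 10%
test@teststat[1] < test@cval["tau3", "5pct"]  # TRUE means reject the null at the 5% level
```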
#### 5\.3\.4\.2 When you might want to use `ur.df()`
If you have removed the trend (and/or level) from your data, `ur.df()` allows you to drop the corresponding trend and/or level terms from the test regression, which increases the power of the test.
5\.4 KPSS test
--------------
The null hypothesis for the KPSS test is that the data are stationary. For this test, we do NOT want to reject the null hypothesis. In other words, we want the p\-value to be greater than 0\.05 not less than 0\.05\.
### 5\.4\.1 Test on simulated data
Let’s try the KPSS test on white noise with a trend. The default is a null hypothesis with no trend. We will change this to `null="Trend"`.
```
tseries::kpss.test(wnt, null = "Trend")
```
```
Warning in tseries::kpss.test(wnt, null = "Trend"): p-value greater than printed
p-value
```
```
KPSS Test for Trend Stationarity
data: wnt
KPSS Trend = 0.045579, Truncation lag parameter = 4, p-value = 0.1
```
The p\-value is greater than 0\.05\. The null hypothesis of stationarity around a trend is not rejected.
Let’s try the KPSS test on white noise with a trend but let’s use the default of stationary with no trend.
```
tseries::kpss.test(wnt, null = "Level")
```
```
Warning in tseries::kpss.test(wnt, null = "Level"): p-value smaller than printed
p-value
```
```
KPSS Test for Level Stationarity
data: wnt
KPSS Level = 2.1029, Truncation lag parameter = 4, p-value = 0.01
```
The p\-value is less than 0\.05\. The null hypothesis of stationarity around a level is rejected. This is white noise around a trend, so it is definitely a stationary process, but it has a trend. This illustrates that you need to be thoughtful when applying stationarity tests.
### 5\.4\.2 Test the anchovy data
Let’s try the anchovy data.
```
kpss.test(anchovyts, null = "Trend")
```
```
KPSS Test for Trend Stationarity
data: anchovyts
KPSS Trend = 0.14779, Truncation lag parameter = 2, p-value = 0.04851
```
The null is rejected (p\-value less than 0\.05\). Again stationarity is not supported.
5\.5 Dealing with non\-stationarity
-----------------------------------
The anchovy data have failed both tests for stationarity, the Augmented Dickey\-Fuller test and the KPSS test. How do we fix this? The approach in the Box\-Jenkins method is to use differencing.
Let’s see how this works with random walk data. A random walk is non\-stationary, but its first difference is white noise, which is stationary:
\\\[x\_t \- x\_{t\-1} \= e\_t, e\_t \\sim N(0,\\sigma)\\]
```
adf.test(diff(rw))
```
```
Augmented Dickey-Fuller Test
data: diff(rw)
Dickey-Fuller = -3.8711, Lag order = 4, p-value = 0.01834
alternative hypothesis: stationary
```
```
kpss.test(diff(rw))
```
```
Warning in kpss.test(diff(rw)): p-value greater than printed p-value
```
```
KPSS Test for Level Stationarity
data: diff(rw)
KPSS Level = 0.30489, Truncation lag parameter = 3, p-value = 0.1
```
If we difference random walk data, the null is rejected for the ADF test and not rejected for the KPSS test. This is what we want.
Let’s try a single difference with the anchovy data. A single difference means `dat(t)-dat(t-1)`. We get this using `diff(anchovyts)`.
```
diff1dat <- diff(anchovyts)
adf.test(diff1dat)
```
```
Augmented Dickey-Fuller Test
data: diff1dat
Dickey-Fuller = -3.2718, Lag order = 2, p-value = 0.09558
alternative hypothesis: stationary
```
```
kpss.test(diff1dat)
```
```
Warning in kpss.test(diff1dat): p-value greater than printed p-value
```
```
KPSS Test for Level Stationarity
data: diff1dat
KPSS Level = 0.089671, Truncation lag parameter = 2, p-value = 0.1
```
If a first difference were not enough, we would try a second difference, which is the difference of a first difference.
```
diff2dat <- diff(diff1dat)
adf.test(diff2dat)
```
```
Warning in adf.test(diff2dat): p-value smaller than printed p-value
```
```
Augmented Dickey-Fuller Test
data: diff2dat
Dickey-Fuller = -4.8234, Lag order = 2, p-value = 0.01
alternative hypothesis: stationary
```
The null hypothesis of a random walk is now rejected, so you might think that a 2nd difference is needed for the anchovy data. However, the actual problem is that the default for `adf.test()` includes a trend, but we removed the trend with our first difference. Thus we included an unneeded trend parameter in our test. Our data are not that long, and this affects the result.
Let’s repeat without the trend and we’ll see that the null hypothesis is rejected. The number of lags is set to be what would be used by `adf.test()`. See `?adf.test`.
```
k <- trunc((length(diff1dat) - 1)^(1/3))
test <- urca::ur.df(diff1dat, type = "drift", lags = k)
summary(test)
```
```
###############################################
# Augmented Dickey-Fuller Test Unit Root Test #
###############################################
Test regression drift
Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)
Residuals:
Min 1Q Median 3Q Max
-0.37551 -0.13887 0.04753 0.13277 0.28223
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.11062 0.06165 1.794 0.08959 .
z.lag.1 -2.16711 0.64900 -3.339 0.00365 **
z.diff.lag1 0.58837 0.47474 1.239 0.23113
z.diff.lag2 0.13273 0.25299 0.525 0.60623
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.207 on 18 degrees of freedom
Multiple R-squared: 0.7231, Adjusted R-squared: 0.677
F-statistic: 15.67 on 3 and 18 DF, p-value: 2.918e-05
Value of test-statistic is: -3.3391 5.848
Critical values for test statistics:
1pct 5pct 10pct
tau2 -3.75 -3.00 -2.63
phi1 7.88 5.18 4.12
```
### 5\.5\.1 `ndiffs()`
As an alternative to trying many different differences and remembering to include or not include the trend or level, you can use the `ndiffs()` function in the **forecast** package. This automates finding the number of differences needed.
```
forecast::ndiffs(anchovyts, test = "kpss")
```
```
[1] 1
```
```
forecast::ndiffs(anchovyts, test = "adf")
```
```
[1] 1
```
One difference is required to pass both the ADF and KPSS stationarity tests.
5\.6 Summary: stationarity testing
----------------------------------
The basic stationarity diagnostics are the following
* Plot your data. Look for
+ An increasing trend
+ A non\-zero level (if no trend)
+ Strange shocks or steps in your data (indicating something dramatic changed like the data collection methodology)
* Apply stationarity tests
+ `adf.test()` p\-value should be less than 0\.05 (reject null)
+ `kpss.test()` p\-value should be greater than 0\.05 (do not reject null)
* If stationarity tests are failed, then try differencing to correct
+ Try `ndiffs()` in the **forecast** package or manually try different differences.
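As a compact illustration of this checklist, here is a small helper function (my own sketch, not from the lab) that runs both tests on a series and reports the suggested number of differences:
```
# Hypothetical helper wrapping the basic stationarity checks listed above.
check_stationarity <- function(x) {
  adf_p  <- tseries::adf.test(x)$p.value                   # want < 0.05
  kpss_p <- tseries::kpss.test(x, null = "Trend")$p.value  # want > 0.05
  c(adf_p = adf_p,
    kpss_p = kpss_p,
    ndiffs = forecast::ndiffs(x, test = "kpss"),
    looks_stationary = (adf_p < 0.05) && (kpss_p > 0.05))
}
check_stationarity(anchovyts)
```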
5\.7 Estimating ARMA parameters
-------------------------------
Let’s start with fitting to simulated data.
### 5\.7\.1 AR(2\) data
Simulate AR(2\) data and add a mean level so that the data are not mean 0\.
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= 0\.8 x\_{t\-1} \+ 0\.1 x\_{t\-2} \+ e\_t\\\\
y\_t \= x\_t \+ m
\\end{gathered}
\\end{equation}\\]
```
m <- 1
ar2 <- arima.sim(n = 1000, model = list(ar = c(0.8, 0.1))) +
m
```
To see info on `arima.sim()`, type `?arima.sim`.
### 5\.7\.2 Fit with `Arima()`
Fit an AR(2\) model with a non\-zero mean (level) to the data.
```
forecast::Arima(ar2, order = c(2, 0, 0), include.constant = TRUE)
```
```
Series: ar2
ARIMA(2,0,0) with non-zero mean
Coefficients:
ar1 ar2 mean
0.7684 0.1387 0.9561
s.e. 0.0314 0.0314 0.3332
sigma^2 estimated as 0.9832: log likelihood=-1409.77
AIC=2827.54 AICc=2827.58 BIC=2847.17
```
Note, the model being fit by `Arima()` is not this model
\\\[\\begin{equation}
y\_t \= m \+ 0\.8 y\_{t\-1} \+ 0\.1 y\_{t\-2} \+ e\_t
\\end{equation}\\]
It is this model:
\\\[\\begin{equation}
(y\_t \- m) \= 0\.8 (y\_{t\-1}\-m) \+ 0\.1 (y\_{t\-2}\-m)\+ e\_t
\\end{equation}\\]
or as written above:
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= 0\.8 x\_{t\-1} \+ 0\.1 x\_{t\-2} \+ e\_t\\\\
y\_t \= x\_t \+ m
\\end{gathered}
\\end{equation}\\]
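One way to see the difference between the two parameterizations (a small sketch of my own, assuming the `ar2` series simulated above): the constant reported by `Arima()` is the level \\(m\\), and the implied intercept in the first equation is \\(m\\) times one minus the sum of the AR coefficients.
```
# The constant printed as "mean" is the level m, not a regression intercept.
# The implied intercept in y_t = c + ar1*y_{t-1} + ar2*y_{t-2} + e_t is
# c = m * (1 - ar1 - ar2).
fit_ar2 <- forecast::Arima(ar2, order = c(2, 0, 0), include.constant = TRUE)
co <- coef(fit_ar2)
m_hat <- co[setdiff(names(co), c("ar1", "ar2"))]  # the level (printed as "mean")
unname(m_hat * (1 - co["ar1"] - co["ar2"]))       # implied intercept c
```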
We could also use `arima()` to fit to the data.
```
arima(ar2, order = c(2, 0, 0), include.mean = TRUE)
```
```
Warning in arima(ar2, order = c(2, 0, 0), include.mean = TRUE): possible
convergence problem: optim gave code = 1
```
```
Call:
arima(x = ar2, order = c(2, 0, 0), include.mean = TRUE)
Coefficients:
ar1 ar2 intercept
0.7684 0.1387 0.9561
s.e. 0.0314 0.0314 0.3332
sigma^2 estimated as 0.9802: log likelihood = -1409.77, aic = 2827.54
```
However, we will not be using `arima()` directly because, if we have differenced data, it will not allow us to include an estimated mean level. Unless we have transformed our differenced data in a way that ensures it is mean zero, we want to include a mean.
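For example, here is a hedged sketch with hypothetical random\-walk\-with\-drift data: with differencing in the order, `Arima()` with `include.constant=TRUE` estimates a drift term, while `arima()` does not estimate a mean for a differenced model.
```
# Arima() can estimate a drift (the mean of the differences) when d = 1;
# arima() does not estimate a mean for differenced models.
y_drift <- cumsum(rnorm(100, mean = 0.2))                              # made-up data
forecast::Arima(y_drift, order = c(0, 1, 0), include.constant = TRUE)  # reports drift
arima(y_drift, order = c(0, 1, 0))                                     # no mean/drift term
```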
*Try increasing the length of the simulated data (from 100 to 1000 say) and see how that affects your parameter estimates. Run the simulation a few times.*
### 5\.7\.3 AR(1\) simulated data
```
ar1 <- arima.sim(n = 100, model = list(ar = c(0.8))) + m
forecast::Arima(ar1, order = c(1, 0, 0), include.constant = TRUE)
```
```
Series: ar1
ARIMA(1,0,0) with non-zero mean
Coefficients:
ar1 mean
0.7091 0.4827
s.e. 0.0705 0.3847
sigma^2 estimated as 1.34: log likelihood=-155.85
AIC=317.7 AICc=317.95 BIC=325.51
```
### 5\.7\.4 ARMA(1,2\) simulated data
Simulate ARMA(1,2\)
\\\[x\_t \= 0\.8 x\_{t\-1} \+ e\_t \+ 0\.8 e\_{t\-1} \+ 0\.2 e\_{t\-2}\\]
```
arma12 = arima.sim(n = 100, model = list(ar = c(0.8), ma = c(0.8,
0.2))) + m
forecast::Arima(arma12, order = c(1, 0, 2), include.constant = TRUE)
```
```
Series: arma12
ARIMA(1,0,2) with non-zero mean
Coefficients:
ar1 ma1 ma2 mean
0.8138 0.8599 0.1861 0.3350
s.e. 0.0646 0.1099 0.1050 0.8145
sigma^2 estimated as 0.6264: log likelihood=-118.02
AIC=246.03 AICc=246.67 BIC=259.06
```
We will up the number of data points to 1000 because models with an MA component take a lot of data to estimate. Models with MA(\>1\) are not very practical for fisheries data for that reason.
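A sketch of that suggestion (a hypothetical rerun with \\(n\=1000\\) instead of 100; the estimates should land much closer to the generating values 0\.8, 0\.8 and 0\.2):
```
# Rerun the ARMA(1,2) simulation with more data and refit.
arma12_long <- arima.sim(n = 1000, model = list(ar = 0.8, ma = c(0.8, 0.2))) + m
forecast::Arima(arma12_long, order = c(1, 0, 2), include.constant = TRUE)
```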
### 5\.7\.5 These functions work for data with missing values
Create some AR(2\) data and then add missing values (NA).
```
ar2miss <- arima.sim(n = 100, model = list(ar = c(0.8, 0.1)))
ar2miss[sample(100, 50)] <- NA
plot(ar2miss, type = "l")
title("many missing values")
```
Fit
```
fit <- forecast::Arima(ar2miss, order = c(2, 0, 0))
fit
```
```
Series: ar2miss
ARIMA(2,0,0) with non-zero mean
Coefficients:
ar1 ar2 mean
1.0625 -0.2203 -0.0586
s.e. 0.1555 0.1618 0.6061
sigma^2 estimated as 0.9679: log likelihood=-79.86
AIC=167.72 AICc=168.15 BIC=178.06
```
Note `fitted()` does not return the expected value at time \\(t\\). It is the expected value of \\(y\_t\\) given the data up to time \\(t\-1\\).
```
plot(ar2miss, type = "l")
title("many missing values")
lines(fitted(fit), col = "blue")
```
It is easy enough to get the expected value of \\(y\_t\\) for all the missing values but we’ll learn to do that when we learn the **MARSS** package and can apply the Kalman Smoother in that package.
5\.8 Estimating the ARMA orders
-------------------------------
We will use the `auto.arima()` function in **forecast**. This function will estimate the level of differencing needed to make our data stationary and estimate the AR and MA orders using AICc (or BIC if we choose).
### 5\.8\.1 Example: model selection for AR(2\) data
```
forecast::auto.arima(ar2)
```
```
Series: ar2
ARIMA(2,0,2) with non-zero mean
Coefficients:
ar1 ar2 ma1 ma2 mean
0.2795 0.5938 0.4861 -0.0943 0.9553
s.e. 1.1261 1.0413 1.1284 0.1887 0.3398
sigma^2 estimated as 0.9848: log likelihood=-1409.57
AIC=2831.15 AICc=2831.23 BIC=2860.59
```
It works with missing data too, though it might not estimate a model very close to the true (generating) model form.
```
forecast::auto.arima(ar2miss)
```
```
Series: ar2miss
ARIMA(0,1,0)
sigma^2 estimated as 1.066: log likelihood=-82.07
AIC=166.15 AICc=166.19 BIC=168.72
```
### 5\.8\.2 Fitting to 100 simulated data sets
Let’s fit to 100 simulated data sets and see how often the true (generating) model form is selected.
```
save.fits <- rep(NA, 100)
for (i in 1:100) {
a2 <- arima.sim(n = 100, model = list(ar = c(0.8, 0.1)))
fit <- auto.arima(a2, seasonal = FALSE, max.d = 0, max.q = 0)
save.fits[i] <- paste0(fit$arma[1], "-", fit$arma[2])
}
table(save.fits)
```
```
save.fits
1-0 2-0 3-0
71 22 7
```
`auto.arima()` uses AICc for selection by default. You can change that to AIC or BIC using `ic="aic"` or `ic="bic"`.
*Repeat the simulation using AIC and BIC to see how the choice of the information criteria affects the model that is selected.*
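A sketch of that exercise using BIC (same generating model as above; the table of selected orders will vary from run to run):
```
# Repeat the model-selection experiment with BIC instead of the default AICc.
save.fits.bic <- rep(NA, 100)
for (i in 1:100) {
    a2 <- arima.sim(n = 100, model = list(ar = c(0.8, 0.1)))
    fit.i <- forecast::auto.arima(a2, seasonal = FALSE, max.d = 0, max.q = 0, ic = "bic")
    save.fits.bic[i] <- paste0(fit.i$arma[1], "-", fit.i$arma[2])
}
table(save.fits.bic)
```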
### 5\.8\.3 trace\=TRUE
We can set `trace=TRUE` to see which models `auto.arima()` fit.
```
forecast::auto.arima(ar2, trace = TRUE)
```
```
Fitting models using approximations to speed things up...
ARIMA(2,0,2) with non-zero mean : 2824.88
ARIMA(0,0,0) with non-zero mean : 4430.868
ARIMA(1,0,0) with non-zero mean : 2842.785
ARIMA(0,0,1) with non-zero mean : 3690.512
ARIMA(0,0,0) with zero mean : 4602.31
ARIMA(1,0,2) with non-zero mean : 2827.422
ARIMA(2,0,1) with non-zero mean : 2825.235
ARIMA(3,0,2) with non-zero mean : 2830.176
ARIMA(2,0,3) with non-zero mean : 2826.503
ARIMA(1,0,1) with non-zero mean : 2825.438
ARIMA(1,0,3) with non-zero mean : 2829.358
ARIMA(3,0,1) with non-zero mean : Inf
ARIMA(3,0,3) with non-zero mean : 2825.766
ARIMA(2,0,2) with zero mean : 2829.536
Now re-fitting the best model(s) without approximations...
ARIMA(2,0,2) with non-zero mean : 2831.232
Best model: ARIMA(2,0,2) with non-zero mean
```
```
Series: ar2
ARIMA(2,0,2) with non-zero mean
Coefficients:
ar1 ar2 ma1 ma2 mean
0.2795 0.5938 0.4861 -0.0943 0.9553
s.e. 1.1261 1.0413 1.1284 0.1887 0.3398
sigma^2 estimated as 0.9848: log likelihood=-1409.57
AIC=2831.15 AICc=2831.23 BIC=2860.59
```
### 5\.8\.4 stepwise\=FALSE
We can set `stepwise=FALSE` to use an exhaustive search. The selected model may be different from the result of the non\-exhaustive (stepwise) search.
```
forecast::auto.arima(ar2, trace = TRUE, stepwise = FALSE)
```
```
Fitting models using approximations to speed things up...
ARIMA(0,0,0) with zero mean : 4602.31
ARIMA(0,0,0) with non-zero mean : 4430.868
ARIMA(0,0,1) with zero mean : 3815.931
ARIMA(0,0,1) with non-zero mean : 3690.512
ARIMA(0,0,2) with zero mean : 3425.037
ARIMA(0,0,2) with non-zero mean : 3334.754
ARIMA(0,0,3) with zero mean : 3239.347
ARIMA(0,0,3) with non-zero mean : 3170.541
ARIMA(0,0,4) with zero mean : 3114.265
ARIMA(0,0,4) with non-zero mean : 3059.938
ARIMA(0,0,5) with zero mean : 3042.136
ARIMA(0,0,5) with non-zero mean : 2998.531
ARIMA(1,0,0) with zero mean : 2850.655
ARIMA(1,0,0) with non-zero mean : 2842.785
ARIMA(1,0,1) with zero mean : 2830.652
ARIMA(1,0,1) with non-zero mean : 2825.438
ARIMA(1,0,2) with zero mean : 2832.668
ARIMA(1,0,2) with non-zero mean : 2827.422
ARIMA(1,0,3) with zero mean : 2834.675
ARIMA(1,0,3) with non-zero mean : 2829.358
ARIMA(1,0,4) with zero mean : 2835.539
ARIMA(1,0,4) with non-zero mean : 2829.825
ARIMA(2,0,0) with zero mean : 2828.987
ARIMA(2,0,0) with non-zero mean : 2823.774
ARIMA(2,0,1) with zero mean : 2829.952
ARIMA(2,0,1) with non-zero mean : 2825.235
ARIMA(2,0,2) with zero mean : 2829.536
ARIMA(2,0,2) with non-zero mean : 2824.88
ARIMA(2,0,3) with zero mean : 2831.461
ARIMA(2,0,3) with non-zero mean : 2826.503
ARIMA(3,0,0) with zero mean : 2831.057
ARIMA(3,0,0) with non-zero mean : 2826.236
ARIMA(3,0,1) with zero mean : Inf
ARIMA(3,0,1) with non-zero mean : Inf
ARIMA(3,0,2) with zero mean : 2834.788
ARIMA(3,0,2) with non-zero mean : 2830.176
ARIMA(4,0,0) with zero mean : 2833.323
ARIMA(4,0,0) with non-zero mean : 2828.759
ARIMA(4,0,1) with zero mean : 2827.798
ARIMA(4,0,1) with non-zero mean : 2823.853
ARIMA(5,0,0) with zero mean : 2835.315
ARIMA(5,0,0) with non-zero mean : 2830.501
Now re-fitting the best model(s) without approximations...
Best model: ARIMA(2,0,0) with non-zero mean
```
```
Series: ar2
ARIMA(2,0,0) with non-zero mean
Coefficients:
ar1 ar2 mean
0.7684 0.1387 0.9561
s.e. 0.0314 0.0314 0.3332
sigma^2 estimated as 0.9832: log likelihood=-1409.77
AIC=2827.54 AICc=2827.58 BIC=2847.17
```
### 5\.8\.5 Fit to the anchovy data
```
fit <- auto.arima(anchovyts)
fit
```
```
Series: anchovyts
ARIMA(0,1,1) with drift
Coefficients:
ma1 drift
-0.6685 0.0542
s.e. 0.1977 0.0142
sigma^2 estimated as 0.04037: log likelihood=5.39
AIC=-4.79 AICc=-3.65 BIC=-1.13
```
Note `arima()` writes a MA model like:
\\\[x\_t \= e\_t \+ b\_1 e\_{t\-1} \+ b\_2 e\_{t\-2}\\]
while many authors use this notation:
\\\[x\_t \= e\_t \- \\theta\_1 e\_{t\-1} \- \\theta\_2 e\_{t\-2}\\]
so the MA parameters reported by `auto.arima()` will be the NEGATIVE of those reported in Stergiou and Christou (1996\), who analyze these same data. *Note, in Stergiou and Christou, the model is written in backshift notation on page 112\. To see the model as the equation above, I translated from backshift to non\-backshift notation.*
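To recover the coefficient in the Stergiou and Christou sign convention, just flip the sign. A one\-line sketch, assuming the `fit` object from `auto.arima(anchovyts)` above:
```
# theta_1 in the x_t = e_t - theta_1 e_{t-1} notation is minus the ma1
# coefficient reported by arima()/auto.arima().
-coef(fit)["ma1"]
```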
5\.9 Check residuals
--------------------
We can test for autocorrelation of the residuals with `Box.test()`, with `fitdf` adjusted for the number of parameters estimated in the fit: in our case, the MA(1\) and drift parameters.
```
res <- resid(fit)
Box.test(res, type = "Ljung-Box", lag = 12, fitdf = 2)
```
```
Box-Ljung test
data: res
X-squared = 5.1609, df = 10, p-value = 0.8802
```
`checkresiduals()` in the **forecast** package will automate this test and show some standard diagnostic plots.
```
forecast::checkresiduals(fit)
```
```
Ljung-Box test
data: Residuals from ARIMA(0,1,1) with drift
Q* = 1.0902, df = 3, p-value = 0.7794
Model df: 2. Total lags used: 5
```
5\.10 Forecast from a fitted ARIMA model
----------------------------------------
We can create a forecast from our anchovy ARIMA model using `forecast()`. The shading is the 80% and 95% prediction intervals.
```
fr <- forecast::forecast(fit, h = 10)
plot(fr)
```
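The numbers behind the plot can be pulled straight out of the forecast object (a brief sketch, assuming `fr` from above):
```
# Point forecasts and the 80%/95% prediction-interval bounds.
fr$mean    # point forecasts
fr$lower   # lower bounds; columns are the 80% and 95% levels
fr$upper   # upper bounds
```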
5\.11 Seasonal ARIMA model
--------------------------
The Chinook data are monthly and start in January 1990\. To make this into a ts object do
```
chinookts <- ts(chinook$log.metric.tons, start = c(1990, 1),
frequency = 12)
```
`start` is the year and month, and `frequency` is the number of time steps per year (12 for monthly data).
Use `?ts` to see more examples of how to set up ts objects.
### 5\.11\.1 Plot seasonal data
```
plot(chinookts)
```
### 5\.11\.2 `auto.arima()` for seasonal ts
`auto.arima()` will recognize that our data are seasonal and fit a seasonal ARIMA model to our data by default. Let’s define the training data up to 1998 and use 1999 as the test data.
```
traindat <- window(chinookts, c(1990, 10), c(1998, 12))
testdat <- window(chinookts, c(1999, 1), c(1999, 12))
fit <- forecast::auto.arima(traindat)
fit
```
```
Series: traindat
ARIMA(1,0,0)(0,1,0)[12] with drift
Coefficients:
ar1 drift
0.3676 -0.0320
s.e. 0.1335 0.0127
sigma^2 estimated as 0.8053: log likelihood=-107.37
AIC=220.73 AICc=221.02 BIC=228.13
```
Use `?window` to understand how subsetting a ts object works.
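For example, a small illustrative call with arbitrary dates:
```
# window() subsets a ts object by time; here January 1995 through December 1996.
window(chinookts, start = c(1995, 1), end = c(1996, 12))
```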
5\.12 Forecast using a seasonal model
-------------------------------------
Forecasting works the same way for a seasonal model, using the `forecast()` function.
```
fr <- forecast::forecast(fit, h = 12)
plot(fr)
points(testdat)
```
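To quantify how the seasonal forecast compares to the held\-out 1999 data, one option (a hedged sketch) is `accuracy()` from the **forecast** package:
```
# The "Test set" row reports the out-of-sample error measures against testdat.
forecast::accuracy(fr, testdat)
```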
5\.13 Problems
--------------
For these problems, use the catch landings from Greek waters (`greeklandings`) and the Chinook landings (`chinook`) in Washington data. Load the data as follows:
```
data(greeklandings, package = "atsalibrary")
landings <- greeklandings
data(chinook, package = "atsalibrary")
chinook <- chinook.month
```
1. Augmented Dickey\-Fuller tests in R.
1. What is the null hypothesis for the Dickey\-Fuller and Augmented Dickey\-Fuller tests?
2. How do the Dickey\-Fuller and Augmented Dickey\-Fuller tests differ?
3. For `adf.test()`, does the test allow the data to have a non\-zero level? Does the test allow the data to be stationary around a trend (a linear slope)?
4. For `ur.df()`, what do type \= “none,” “drift,” and “trend” mean? Which one gives you the same result as `adf.test()`? What do you have to set the lags equal to in order to get the default lags in `adf.test()`?
5. For `ur.df()`, how do you determine if the null hypothesis is rejected?
6. For `ur.df()`, how do you determine if there is a significant trend in the data? How do you determine if the intercept is different than zero?
2. KPSS tests in R.
1. What is the null hypothesis for the KPSS test?
2. For `kpss.test()`, what does setting null equal to “Level” versus “Trend” change?
3. Repeat the stationarity tests for sardine 1964\-1987 in the landings data set. Here is how to set up the data for another species.
```
datdf <- subset(landings, Species == "Sardine")
dat <- ts(datdf$log.metric.tons, start = 1964)
dat <- window(dat, start = 1964, end = 1987)
```
1. Do a Dickey\-Fuller (DF) test using `ur.df()` and `adf.test()`. You will have to set the lags. What does the result tell you? *Note for `ur.df()` use `summary(ur.df(...))` and look at the bottom of the summary information for the test statistics and critical values. The first test statistic is the one you want, labeled `tau` (or `tau3`).*
2. Do an Augmented Dickey\-Fuller (ADF) test using `ur.df()`. How did you choose to set the lags? How is the ADF test different than the DF test?
3. Do a KPSS test using `kpss.test()`. What does the result tell you?
4. Use the anchovy 1964\-2007 data \[Corrected 1/20\. If you did the HW with 1964\-1987, that’s fine but part b won’t have any models within 2 of the best for the shorter series.]. Fit this time series using `auto.arima()` with `trace=TRUE`.
```
forecast::auto.arima(anchovy, trace = TRUE)
```
1. Fit each of the models listed using `Arima()` and show that you can produce the same AICc value that is shown in the trace table.
2. What models are within \\(\\Delta\\)AICc of 2 of the best model (model with lowest AICc)? What is different about these models?
5. Repeat the stationarity tests and differencing tests for anchovy using the following two time ranges: 1964\-1987 and 1988\-2007\. The following shows you how to subset the data:
```
datdf <- subset(landings, Species == "Anchovy")
dat <- ts(datdf$log.metric.tons, start = 1964)
dat64.87 <- window(dat, start = 1964, end = 1987)
```
1. Plot the time series for the two time periods. For the `kpss.test()`, which null is appropriate, “Level” or “Trend?”
2. Do the conclusions regarding stationarity and the amount of differencing needed change depending on which time period you analyze? For both time periods, use `adf.test()` with default values and `kpss.test()` with null\=“Trend.”
3. Fit each time period using `auto.arima()`. Do the selected models change? What do the coefficients mean? Coefficients means the mean and drifts terms and the AR and MA terms.
4. Discuss the best models for each time period. How are they different?
5. You cannot compare the AIC values for an Arima(0,1,0\) and an Arima(0,0,1\). Why do you think that is? Hint: when comparing AICs, the data being fit must be the same for each model.
6. For the anchovy 1964\-2007 data, use `auto.arima()` with `stepwise=FALSE` to fit models.
1. Find the set of models within \\(\\Delta AICc\=2\\) of the top model.
2. Use `Arima()` to fit the models with Inf or \-Inf in the list. Does the set of models within \\(\\Delta AICc\=2\\) change?
3. Create a 5\-year forecast for each of the top 3 models according to AICc.
4. How do the forecasts differ in trend and size of prediction intervals?
7. Using the `chinook` data set,
1. Set up a monthly time series object for the Chinook log metric tons catch for Jan 1990 to Dec 2015\.
2. Fit a seasonal model to the Chinook Jan 1990 to Dec 1999 data using `auto.arima()`.
3. Create a forecast through 2015 using the model in part b.
4. Plot the forecast with the 2014 and 2015 actual landings added as data points.
5. The model from part b has drift. Fit this model using `Arima()` without drift and compare the 2015 forecast with this model.
Chapter 6 Univariate state\-space models
========================================
This chapter will show you how to fit some basic univariate state\-space models using the **MARSS** package, the `StructTS()` function, and JAGS code. This chapter will also introduce you to the idea of writing AR(1\) models in state\-space form.
A script with all the R code in the chapter can be downloaded [here](./Rcode/fitting-univariate-state-space.R). The Rmd for this chapter can be downloaded [here](./Rmds/fitting-univariate-state-space.Rmd).
### Data and packages
All the data used in the chapter are in the **MARSS** package. The other required packages are **stats** (normally loaded by default when starting R), **datasets** and **forecast**. Install the packages, if needed, and load:
```
library(stats)
library(MARSS)
library(forecast)
library(datasets)
```
To run the JAGS code example (optional), you will also need [JAGS](http://mcmc-jags.sourceforge.net/) installed and the **R2jags**, **rjags** and **coda** R packages. To run the Stan code example (optional), you will need the **rstan** package.
6\.1 Fitting a state\-space model with MARSS
--------------------------------------------
The **MARSS** package fits multivariate auto\-regressive models of this form:
\\\[\\begin{equation}
\\begin{gathered}
\\mathbf{x}\_t \= \\mathbf{B} \\mathbf{x}\_{t\-1}\+\\mathbf{u}\+\\mathbf{w}\_t \\text{ where } \\mathbf{w}\_t \\sim \\,\\text{N}(0,\\mathbf{Q}) \\\\
\\mathbf{y}\_t \= \\mathbf{Z}\\mathbf{x}\_t\+\\mathbf{a}\+\\mathbf{v}\_t \\text{ where } \\mathbf{v}\_t \\sim \\,\\text{N}(0,\\mathbf{R}) \\\\
\\mathbf{x}\_0 \= \\boldsymbol{\\mu}
\\end{gathered}
\\tag{6\.1}
\\end{equation}\\]
To fit your time series model with the **MARSS** package, you need to put your model into the form above. The \\(\\mathbf{B}\\), \\(\\mathbf{Z}\\), \\(\\mathbf{u}\\), \\(\\mathbf{a}\\), \\(\\mathbf{Q}\\), \\(\\mathbf{R}\\) and \\(\\boldsymbol{\\mu}\\) are parameters that are (potentially) estimated. The \\(\\mathbf{y}\\) are your data. The \\(\\mathbf{x}\\) are the hidden state(s). Everything in bold is a matrix; if it is a small bolded letter, it is a matrix with 1 column.
*Important: In the state\-space model equation, \\(\\mathbf{y}\\) is always the data and \\(\\mathbf{x}\\) is a hidden random walk estimated from the data.*
A basic `MARSS()` call looks like
`fit=MARSS(y, model=list(...))`.
The argument `model` tells the function what form the parameters take. The list has the elements with the names: `B`, `U`, `Q`, etc. The names correspond to the parameters with the same names in Equation [(6\.1\)](sec-uss-fitting-a-state-space-model-with-marss.html#eq:uss-marss) except that \\(\\boldsymbol{\\mu}\\) is called `x0`. `tinitx` indicates whether the initial \\(\\mathbf{x}\\) is specified at \\(t\=0\\) so \\(\\mathbf{x}\_0\\) or \\(t\=1\\) so \\(\\mathbf{x}\_1\\).
Here’s an example. Let’s say we want to fit a univariate AR(1\) model observed with error. Here is that model:
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= b x\_{t\-1} \+ w\_t \\text{ where } \\mathbf{w}\_t \\sim \\,\\text{N}(0,q) \\\\
y\_t \= x\_t\+v\_t \\text{ where } v\_t \\sim \\,\\text{N}(0,r) \\\\
x\_0 \= \\mu
\\end{gathered}
\\tag{6\.2}
\\end{equation}\\]
To fit this with `MARSS()`, we need to write Equation [(6\.2\)](sec-uss-fitting-a-state-space-model-with-marss.html#eq:uss-ar1witherror) as Equation [(6\.1\)](sec-uss-fitting-a-state-space-model-with-marss.html#eq:uss-marss). Equation [(6\.1\)](sec-uss-fitting-a-state-space-model-with-marss.html#eq:uss-marss) is in MATRIX form. In the model list, the parameters must be written EXACTLY like they would be written for Equation [(6\.1\)](sec-uss-fitting-a-state-space-model-with-marss.html#eq:uss-marss). For example, `1` is the number 1 in R. It is not a matrix:
```
class(1)
```
```
[1] "numeric"
```
If you need a 1 (or 0\) in your model, you need to pass in the parameter as a \\(1 \\times 1\\) matrix: `matrix(1)`.
With that in mind, our model list for Equation [(6\.2\)](sec-uss-fitting-a-state-space-model-with-marss.html#eq:uss-ar1witherror) is:
```
mod.list <- list(B = matrix(1), U = matrix(0), Q = matrix("q"),
Z = matrix(1), A = matrix(0), R = matrix("r"), x0 = matrix("mu"),
tinitx = 0)
```
We can simulate some AR(1\) plus error data like so
```
q <- 0.1
r <- 0.1
n <- 100
y <- cumsum(rnorm(n, 0, sqrt(q))) + rnorm(n, 0, sqrt(r))
```
And then fit with `MARSS()` using `mod.list` above:
```
fit <- MARSS(y, model = mod.list)
```
```
Success! abstol and log-log tests passed at 16 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 16 iterations.
Log-likelihood: -65.70444
AIC: 137.4089 AICc: 137.6589
Estimate
R.r 0.1066
Q.q 0.0578
x0.mu -0.2024
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
If we wanted to fix \\(q\=0\.1\\), then \\(\\mathbf{Q}\=\[0\.1]\\) (a \\(1 \\times 1\\) matrix with 0\.1\). We just change `mod.list$Q` and re\-fit:
```
mod.list$Q <- matrix(0.1)
fit <- MARSS(y, model = mod.list)
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-uss-examples-using-the-nile-river-data.html |
6\.2 Examples using the Nile river data
---------------------------------------
We will use the data from the Nile River (Figure [6\.1](sec-uss-examples-using-the-nile-river-data.html#fig:uss-plotdata)). We will fit different flow models to the data and compare the models with AIC.
```
library(datasets)
dat <- Nile
```
Figure 6\.1: The Nile River flow volume 1871 to 1970 (`Nile` dataset in R).
### 6\.2\.1 Flat level model
We will start by modeling these data as a simple average river flow with variability around some level \\(\\mu\\).
\\\[\\begin{equation}
y\_t \= \\mu \+ v\_t \\text{ where } v\_t \\sim \\,\\text{N}(0,r)
\\tag{6\.3}
\\end{equation}\\]
where \\(y\_t\\) is the river flow volume at year \\(t\\).
We can write this model as a univariate state\-space model as follows. We use \\(x\_t\\) to model the average flow level. \\(y\_t\\) is just an observation of this flat \\(x\_t\\). Work through \\(x\_1\\), \\(x\_2\\), \\(\\dots\\) starting from \\(x\_0\\) to convince yourself that \\(x\_t\\) will always equal \\(\\mu\\).
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= 1 \\times x\_{t\-1}\+ 0 \+ w\_t \\text{ where } w\_t \\sim \\,\\text{N}(0,0\) \\\\
y\_t \= 1 \\times x\_t \+ 0 \+ v\_t \\text{ where } v\_t \\sim \\,\\text{N}(0,r) \\\\
x\_0 \= \\mu
\\end{gathered}
\\tag{6\.4}
\\end{equation}\\]
The model is specified as a list as follows:
```
mod.nile.0 <- list(B = matrix(1), U = matrix(0), Q = matrix(0),
Z = matrix(1), A = matrix(0), R = matrix("r"), x0 = matrix("mu"),
tinitx = 0)
```
We then fit the model:
```
kem.0 <- MARSS(dat, model = mod.nile.0)
```
Output not shown, but here are the estimates and AICc.
```
c(coef(kem.0, type = "vector"), LL = kem.0$logLik, AICc = kem.0$AICc)
```
```
R.r x0.mu LL AICc
28351.5675 919.3500 -654.5157 1313.1552
```
### 6\.2\.2 Linear trend in flow model
Figure [6\.2](sec-uss-the-structts-function.html#fig:uss-plotfit) shows the fit for the flat average river flow model. Looking at the data, we might expect that a declining average river flow would be better. In MARSS form, that model would be:
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= 1 \\times x\_{t\-1}\+ u \+ w\_t \\text{ where } w\_t \\sim \\,\\text{N}(0,0\) \\\\
y\_t \= 1 \\times x\_t \+ 0 \+ v\_t \\text{ where } v\_t \\sim \\,\\text{N}(0,r) \\\\
x\_0 \= \\mu
\\end{gathered}
\\tag{6\.5}
\\end{equation}\\]
where \\(u\\) is now the average per\-year decline in river flow volume. The model is specified as follows:
```
mod.nile.1 <- list(B = matrix(1), U = matrix("u"), Q = matrix(0),
Z = matrix(1), A = matrix(0), R = matrix("r"), x0 = matrix("mu"),
tinitx = 0)
```
We then fit the model:
```
kem.1 <- MARSS(dat, model = mod.nile.1)
```
Here are the estimates, log\-likelihood and AICc:
```
c(coef(kem.1, type = "vector"), LL = kem.1$logLik, AICc = kem.1$AICc)
```
```
R.r U.u x0.mu LL AICc
22213.595453 -2.692106 1054.935067 -642.315910 1290.881821
```
Figure [6\.2](sec-uss-the-structts-function.html#fig:uss-plotfit) shows the fits for the two models with deterministic models (flat and declining) for mean river flow along with their AICc values (smaller AICc is better). The AICc for the model with a declining river flow is lower by over 20 (which is a lot).
### 6\.2\.3 Stochastic level model
Looking at the flow levels, we might suspect that a model that allows the average flow to change would model the data better and we might suspect that there have been sudden, and anomalous, changes in the river flow level.
We will now model the average river flow at year \\(t\\) as a random walk, specifically an autoregressive process, which means that average river flow in year \\(t\\) is a function of average river flow in year \\(t\-1\\).
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= x\_{t\-1}\+w\_t \\text{ where } w\_t \\sim \\,\\text{N}(0,q) \\\\
y\_t \= x\_t\+v\_t \\text{ where } v\_t \\sim \\,\\text{N}(0,r) \\\\
x\_0 \= \\mu
\\end{gathered}
\\tag{6\.6}
\\end{equation}\\]
As before, \\(y\_t\\) is the river flow volume at year \\(t\\). \\(x\_t\\) is the mean level.
The model is specified as:
```
mod.nile.2 <- list(B = matrix(1), U = matrix(0), Q = matrix("q"),
Z = matrix(1), A = matrix(0), R = matrix("r"), x0 = matrix("mu"),
tinitx = 0)
```
We could also use the text shortcuts to specify the model. Because \\(\\mathbf{R}\\) and \\(\\mathbf{Q}\\) are \\(1 \\times 1\\) matrices, “unconstrained,” “diagonal and unequal,” “diagonal and equal” and “equalvarcov” will all lead to a \\(1 \\times 1\\) matrix with one estimated element. For \\(\\mathbf{a}\\) and \\(\\mathbf{u}\\), the following shortcut could be used:
```
A <- "zero"
U <- "zero"
```
Because \\(\\mathbf{x}\_0\\) is \\(1 \\times 1\\), it could be specified as “unequal,” “equal” or “unconstrained.”
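Putting those shortcuts together, here is a sketch (ours, not shown in the text) of the same stochastic level model specified entirely with text shortcuts; `mod.nile.2b` is a name we made up.
```
# A sketch: the stochastic level model written with MARSS text shortcuts
# instead of explicit 1 x 1 matrices
mod.nile.2b <- list(B = "identity", U = "zero", Q = "unconstrained",
    Z = "identity", A = "zero", R = "unconstrained", x0 = "unequal",
    tinitx = 0)
# kem.2b <- MARSS(dat, model = mod.nile.2b)  # should match the kem.2 fit below
```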
```
kem.2 <- MARSS(dat, model = mod.nile.2)
```
Here are the estimates, log\-likelihood and AICc:
```
c(coef(kem.2, type = "vector"), LL = kem.2$logLik, AICc = kem.2$AICc)
```
```
R.r Q.q x0.mu LL AICc
15065.6121 1425.0030 1111.6338 -637.7631 1281.7762
```
### 6\.2\.4 Stochastic level model with drift
We can add a drift term to our random walk; the \\(u\\) in the process model (\\(x\\)) is the drift term. This causes the random walk to tend to trend up or down.
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= x\_{t\-1}\+u\+w\_t \\text{ where } w\_t \\sim \\,\\text{N}(0,q) \\\\
y\_t \= x\_t\+v\_t \\text{ where } v\_t \\sim \\,\\text{N}(0,r) \\\\
x\_0 \= \\mu
\\end{gathered}
\\tag{6\.7}
\\end{equation}\\]
The model is then specified by changing `U` to indicate that a \\(u\\) is estimated:
```
mod.nile.3 <- list(B = matrix(1), U = matrix("u"), Q = matrix("q"),
Z = matrix(1), A = matrix(0), R = matrix("r"), x0 = matrix("mu"),
tinitx = 0)
```
```
kem.3 <- MARSS(dat, model = mod.nile.3)
```
Here are the estimates, log\-likelihood and AICc:
```
c(coef(kem.3, type = "vector"), LL = kem.3$logLik, AICc = kem.3$AICc)
```
```
R.r U.u Q.q x0.mu LL AICc
15585.278194 -3.248793 1088.987455 1124.044484 -637.302692 1283.026436
```
Figure [6\.2](sec-uss-the-structts-function.html#fig:uss-plotfit) shows all the models along with their AICc values.
6\.3 The StructTS function
--------------------------
The `StructTS` function in the **stats** package in R will also fit the stochastic level model:
```
fit.sts <- StructTS(dat, type = "level")
fit.sts
```
```
Call:
StructTS(x = dat, type = "level")
Variances:
level epsilon
1469 15099
```
The estimates from `StructTS()` will be different (though similar) from `MARSS()` because `StructTS()` uses \\(x\_1 \= y\_1\\), that is the hidden state at \\(t\=1\\) is fixed to be the data at \\(t\=1\\). That is fine if you have a long data set, but would be disastrous for the short data sets typical in fisheries and ecology.
`StructTS()` is much, much faster for long time series. The example in `?StructTS` is pretty much instantaneous with `StructTS()` but takes minutes with the EM algorithm that is the default in `MARSS()`. With the BFGS algorithm, it is much closer to `StructTS()`:
```
trees <- window(treering, start = 0)
fitts <- StructTS(trees, type = "level")
fitem <- MARSS(trees, mod.nile.2)
fitbf <- MARSS(trees, mod.nile.2, method = "BFGS")
```
Note that `mod.nile.2` specifies a univariate stochastic level model so we can use it just fine with other univariate data sets.
In addition, `fitted(fit.sts)` where `fit.sts` is a fit from `StructTS()` is very different than `fit.marss$states` from `MARSS()`.
```
t <- 10
fitted(fit.sts)[t]
```
```
[1] 1162.904
```
is the expected value of \\(y\_{t\+1}\\) (in this case \\(y\_{11}\\) since we set \\(t\=10\\)) given the data up to \\(y\_t\\) (in this case, up to \\(y\_{10}\\)). It is called the one\-step ahead prediction.
We are not going to use the one\-step ahead predictions unless we are forecasting or doing cross\-validation.
Typically, when we analyze fisheries and ecological data, we want to know the estimate of the state, the \\(x\_t\\), given ALL the data (sometimes we might want the estimate of the \\(y\_t\\) process given all the data). For example, we might need an estimate of the population size in year 1990 given a time series of counts from 1930 to 2015\. We don’t want to use only the data up to 1989; we want to use all the information. `fit.marss$states` from `MARSS()` is the expected value of \\(x\_t\\) given all the data. In the MARSS package, this is denoted “xtT.”
```
fitted(kem.2, type = "xtT") %>%
subset(t == 11)
```
If you needed the one\-step predictions from `MARSS()`, you can get that using “xtt1\.”
```
fitted(kem.2, type = "xtt1") %>%
subset(t == 11)
```
This is the expected value of \\(x\_t\\) conditioned on \\(y\_1\\) to \\(y\_{t\-1}\\).
Figure 6\.2: The Nile River flow volume with the model estimated flow rates (solid lines). The bottom model is a stochastic level model, meaning there isn’t one level line. Rather the level line is a distribution that has a mean and standard deviation. The solid state line in the bottom plots is the mean of the stochastic level and the 2 standard deviations are shown. The other two models are deterministic level models so the state is not stochastic and does not have a standard deviation.
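The plotting code behind Figure 6\.2 is not shown in the text; a rough sketch (ours, the exact graphics choices are not the authors’) of overlaying each model’s estimated level on the data:
```
# Overlay the estimated level (fit$states) from each model on the Nile data
par(mfrow = c(3, 1), mar = c(2, 4, 2, 1))
fits <- list(`flat level` = kem.0, `linear trend` = kem.1, `stoc level` = kem.2)
for (i in seq_along(fits)) {
    plot(dat, type = "p", pch = 16, col = "gray50", ylab = "Flow volume",
        main = names(fits)[i])
    lines(as.vector(time(dat)), fits[[i]]$states[1, ], lwd = 2)
}
```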
6\.4 Comparing models with AIC and model weights
------------------------------------------------
To get the AIC or AICc values for a model fit from a MARSS fit, use `fit$AIC` or `fit$AICc`. The log\-likelihood is in `fit$logLik` and the number of estimated parameters in `fit$num.params`. For fits from other functions, try `AIC(fit)` or look at the function documentation.
Let’s put the AICc values 3 Nile models together:
```
nile.aic <- c(kem.0$AICc, kem.1$AICc, kem.2$AICc, kem.3$AICc)
```
Then we calculate the \\(\\Delta\\text{AICc}\\) and compute the model weights. \\(\\Delta\\text{AICc}\\) is each model's AICc minus the minimum AICc in your model set.
```
delAIC <- nile.aic - min(nile.aic)
relLik <- exp(-0.5 * delAIC)
aicweight <- relLik/sum(relLik)
```
And this leads to our model weights table:
```
aic.table <- data.frame(AICc = nile.aic, delAIC = delAIC, relLik = relLik,
weight = aicweight)
rownames(aic.table) <- c("flat level", "linear trend", "stoc level",
"stoc level w drift")
```
Here the table is printed using `round()` to limit the number of digits shown.
```
round(aic.table, digits = 3)
```
```
AICc delAIC relLik weight
flat level 1313.155 31.379 0.000 0.000
linear trend 1290.882 9.106 0.011 0.007
stoc level 1281.776 0.000 1.000 0.647
stoc level w drift 1283.026 1.250 0.535 0.346
```
One thing to keep in mind when comparing models within a set of models is that the model set needs to include at least one model that can fit the data reasonably well. “Reasonably well” means the model can put a fitted line through the data. Can’t all models do that? Definitely not. For example, the flat\-level model cannot put a fitted line through the Nile River data. It is simply impossible. The straight trend model also cannot put a fitted line through the flow data. So if our model set only included flat\-level and straight trend, then we might have said that the straight trend model is “best” even though it is just the better of two bad models.
6\.5 Basic diagnostics
----------------------
The first diagnostic that you do with any statistical analysis is check that your residuals correspond to your assumed error structure. For a basic residuals diagnostic check for a state\-space model, we want to use the ‘innovations residuals.’ This is the observed data at time \\(t\\) minus the value predicted using the model and the data up to time \\(t\-1\\). Innovations residuals should be Gaussian and temporally independent (no autocorrelation).
`residuals(fit)` will return the innovations residuals as a data frame.
```
head(residuals(kem.0))
```
```
type .rownames name t value .fitted .resids .sigma .std.resids
1 ytt1 Y1 model 1871 1120 919.35 200.65 168.3792 1.1916552
2 ytt1 Y1 model 1872 1160 919.35 240.65 168.3792 1.4292142
3 ytt1 Y1 model 1873 963 919.35 43.65 168.3792 0.2592362
4 ytt1 Y1 model 1874 1210 919.35 290.65 168.3792 1.7261629
5 ytt1 Y1 model 1875 1160 919.35 240.65 168.3792 1.4292142
6 ytt1 Y1 model 1876 1160 919.35 240.65 168.3792 1.4292142
```
The innovations residuals should also not be autocorrelated in time. We can check the autocorrelation with the function `acf()`. The autocorrelation plots are shown in Figure [6\.3](sec-uss-basic-diagnostics.html#fig:uss-acfs). The stochastic level model looks the best in that its innovations residuals are fine.
```
par(mfrow = c(2, 2), mar = c(2, 2, 4, 2))
resids <- residuals(kem.0)
acf(resids$.resids, main = "flat level v(t)", na.action = na.pass)
resids <- residuals(kem.1)
acf(resids$.resids, main = "linear trend v(t)", na.action = na.pass)
resids <- residuals(kem.2)
acf(resids$.resids, main = "stoc level v(t)", na.action = na.pass)
```
Figure 6\.3: The model innovations residual acfs for the 3 models.
### 6\.5\.1 Outlier diagnostics
Another type of residual used in state\-space models is smoothation residuals. This residual at time \\(t\\) is conditioned on all the data. Smoothation residuals are used for outlier detection and can help detect anomalous shocks in the data. Smoothation residuals can be autocorrelated but should fluctuate around 0\. They should not have a trend. Looking at your smoothation residuals can help you determine if there are fundamental problems with the structure of your model.
We can get the smoothation residuals by passing in `type="tT"` to the `residuals()` call. Figure [6\.4](sec-uss-basic-diagnostics.html#fig:uss-resids) shows the model and state smoothation residuals. The flat level and linear trend models do not have a stochastic state so their state residuals are all 0\.
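The extraction code is not shown in the text; here is a minimal sketch (ours), assuming the data\-frame form of `residuals()` shown above with a `name` column distinguishing `model` and `state` rows:
```
# Smoothation residuals: conditioned on all the data (type = "tT")
resids.tT <- residuals(kem.2, type = "tT")
par(mfrow = c(1, 2), mar = c(2, 4, 4, 2))
plot(subset(resids.tT, name == "model")$.resids, type = "h",
    main = "model smoothation resids", ylab = "")
abline(h = 0)
plot(subset(resids.tT, name == "state")$.resids, type = "h",
    main = "state smoothation resids", ylab = "")
abline(h = 0)
```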
Figure 6\.4: The model and state smoothations residuals for the first 3 models.
The flat and linear trend models show problems. The model smoothation residuals are positive early and then are slightly negative. They should fluctuate around 0 the whole time series. The stochastic level model looks fine. The residuals fluctuate around 0\.
The smoothation residuals can also help us look for outliers in the data or outlier shifts in the level (sudden anomalous changes). If we standardize by the variance of the residuals (divide by the square root of the variance), then the standardized residuals should have an approximate standard normal distribution.
We will look just at the stochastic level model since the other models do not have a stochastic state (\\(x\\)). Figure [6\.5](sec-uss-basic-diagnostics.html#fig:uss-resids2) (right panel) shows us that there was a sudden level change around 1902\. The Aswan Low Dam was completed in 1902 and changed the mean flow. The Aswan High Dam was completed in 1970 and also affected the flow though not as much. You can see these perturbations in Figure [6\.1](sec-uss-examples-using-the-nile-river-data.html#fig:uss-plotdata).
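A sketch (ours again) of using the standardized state smoothation residuals to look for that level shift; the `.std.resids` column is the standardized residual shown in the `residuals()` output above:
```
# Standardized state smoothation residuals; values well outside +/- 2
# suggest a sudden level shift (e.g. around 1902)
resids.tT <- residuals(kem.2, type = "tT")
state.resids <- subset(resids.tT, name == "state")
plot(state.resids$t, state.resids$.std.resids, type = "h",
    xlab = "", ylab = "standardized state residuals")
abline(h = 0)
abline(h = c(-2, 2), lty = 2)
```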
Figure 6\.5: The model and state smoothations residuals for the first 3 models.
6\.6 Fitting with JAGS
----------------------
Here we show how to fit the stochastic level model, model 3 Equation [(6\.7\)](sec-uss-examples-using-the-nile-river-data.html#eq:uss-random-walk-w-noise-w-drift), with JAGS. This is a model where the level is a random walk with drift and the Nile River flow is that level plus error.
```
library(datasets)
y <- Nile
```
This section requires that you have JAGS installed and the **R2jags**, **rjags** and **coda** R packages loaded.
```
library(R2jags)
library(rjags)
library(coda)
```
The first step is to write the model for JAGS to a file (filename in `model.loc`):
```
model.loc <- "ss_model.txt"
jagsscript <- cat("
model {
# priors on parameters
mu ~ dnorm(Y1, 1/(Y1*100)); # prior on initial level: mean = first obs, precision = 1/(Y1*100)
tau.q ~ dgamma(0.001,0.001); # This is inverse gamma
sd.q <- 1/sqrt(tau.q); # sd is treated as derived parameter
tau.r ~ dgamma(0.001,0.001); # This is inverse gamma
sd.r <- 1/sqrt(tau.r); # sd is treated as derived parameter
u ~ dnorm(0, 0.01);
# Because init X is specified at t=0
X0 <- mu
X[1] ~ dnorm(X0+u,tau.q);
Y[1] ~ dnorm(X[1], tau.r);
for(i in 2:TT) {
predX[i] <- X[i-1]+u;
X[i] ~ dnorm(predX[i],tau.q); # Process variation
Y[i] ~ dnorm(X[i], tau.r); # Observation variation
}
}
",
file = model.loc)
```
Next we specify the data (and any other input) that the JAGS code needs. In this case, we need to pass in the data (`y`) and the number of time steps since that is used in the for loop. We also specify the parameters that we want to monitor. We need to specify at least one, but we will monitor all of them so we can plot them after fitting. Note that the hidden state is a parameter in the Bayesian context (but not in the maximum likelihood context).
```
jags.data <- list(Y = y, TT = length(y), Y1 = y[1])
jags.params <- c("sd.q", "sd.r", "X", "mu", "u")
```
Now we can fit the model:
```
mod_ss <- jags(jags.data, parameters.to.save = jags.params, model.file = model.loc,
n.chains = 3, n.burnin = 5000, n.thin = 1, n.iter = 10000,
DIC = TRUE)
```
We can then show the posteriors along with the MLEs from MARSS on top (Figure [6\.6](sec-uss-fitting-with-jags.html#fig:uss-fig-posteriors) ) using the code below.
```
attach.jags(mod_ss)
par(mfrow = c(2, 2))
hist(mu)
abline(v = coef(kem.3)$x0, col = "red")
hist(u)
abline(v = coef(kem.3)$U, col = "red")
hist(log(sd.q^2))
abline(v = log(coef(kem.3)$Q), col = "red")
hist(log(sd.r^2))
abline(v = log(coef(kem.3)$R), col = "red")
```
Figure 6\.6: The posteriors for model 3 with MLE estimates from `MARSS()` shown in red.
```
detach.jags()
```
To plot the estimated states ( Figure [6\.7](sec-uss-fitting-with-jags.html#fig:uss-fig-bayesian-states) ), we write a helper function:
```
plotModelOutput <- function(jagsmodel, Y) {
attach.jags(jagsmodel)
x <- seq(1, length(Y))
XPred <- cbind(apply(X, 2, quantile, 0.025), apply(X, 2,
mean), apply(X, 2, quantile, 0.975))
ylims <- c(min(c(Y, XPred), na.rm = TRUE), max(c(Y, XPred),
na.rm = TRUE))
plot(Y, col = "white", ylim = ylims, xlab = "", ylab = "State predictions")
polygon(c(x, rev(x)), c(XPred[, 1], rev(XPred[, 3])), col = "grey70",
border = NA)
lines(XPred[, 2])
points(Y)
}
```
```
plotModelOutput(mod_ss, y)
```
```
The following object is masked _by_ .GlobalEnv:
mu
```
```
lines(kem.3$states[1, ], col = "red")
lines(1.96 * kem.3$states.se[1, ] + kem.3$states[1, ], col = "red",
lty = 2)
lines(-1.96 * kem.3$states.se[1, ] + kem.3$states[1, ], col = "red",
lty = 2)
title("State estimate and data from\nJAGS (black) versus MARSS (red)")
```
Figure 6\.7: The estimated states from the Bayesian fit along with 95% credible intervals (black and grey) with the MLE states and 95% confidence intervals in red.
6\.7 Fitting with Stan
----------------------
Let’s fit the same model with Stan using the **rstan** package. If you have not already, you will need to install the **rstan** package. This package depends on a number of other packages which should install automatically when you install **rstan**.
```
library(datasets)
library(rstan)
y <- as.vector(Nile)
```
First we write the model. We could write this to a file (recommended), but for this example, we write it as a character object. Though the syntax is different from the JAGS code, it has many similarities. Note that, unlike JAGS, Stan does **not allow** any NAs in your data. Thus we have to specify the locations of the non\-NA values in our data. The Nile data does not have NAs, but we want to write the code so it would work even if there were NAs.
```
scode <- "
data {
int<lower=0> TT;
int<lower=0> n_pos; // number of non-NA values
int<lower=0> indx_pos[n_pos]; // index of the non-NA values
vector[n_pos] y;
}
parameters {
real x0;
real u;
vector[TT] pro_dev;
real<lower=0> sd_q;
real<lower=0> sd_r;
}
transformed parameters {
vector[TT] x;
x[1] = x0 + u + pro_dev[1];
for(i in 2:TT) {
x[i] = x[i-1] + u + pro_dev[i];
}
}
model {
x0 ~ normal(y[1],10);
u ~ normal(0,2);
sd_q ~ cauchy(0,5);
sd_r ~ cauchy(0,5);
pro_dev ~ normal(0, sd_q);
for(i in 1:n_pos){
y[i] ~ normal(x[indx_pos[i]], sd_r);
}
}
generated quantities {
vector[n_pos] log_lik;
for (i in 1:n_pos) log_lik[i] = normal_lpdf(y[i] | x[indx_pos[i]], sd_r);
}
"
```
Then we call `stan()` and pass in the data, the names of the parameters we wish to have returned, and information on the number of chains, samples (`iter`), and thinning. The output is verbose (hidden here) and may have some warnings.
```
# We pass in the non-NA ys as vector
ypos <- y[!is.na(y)]
n_pos <- sum(!is.na(y)) # number on non-NA ys
indx_pos <- which(!is.na(y)) # index on the non-NAs
mod <- rstan::stan(model_code = scode, data = list(y = ypos,
TT = length(y), n_pos = n_pos, indx_pos = indx_pos), pars = c("sd_q",
"x", "sd_r", "u", "x0"), chains = 3, iter = 1000, thin = 1)
```
We use `extract()` to extract the parameters from the fitted model and we can plot. The estimated level is `x` and we will plot that with the 95% credible intervals.
```
pars <- rstan::extract(mod)
pred_mean <- apply(pars$x, 2, mean)
pred_lo <- apply(pars$x, 2, quantile, 0.025)
pred_hi <- apply(pars$x, 2, quantile, 0.975)
plot(pred_mean, type = "l", lwd = 3, ylim = range(c(pred_mean,
pred_lo, pred_hi)), ylab = "Nile River Level")
lines(pred_lo)
lines(pred_hi)
points(y, col = "blue")
```
Figure 6\.8: Estimated level and 95 percent credible intervals. Blue dots are the actual Nile River levels.
Here is a `ggplot()` version of the plot.
```
library(ggplot2)
nile <- data.frame(y = y, year = 1871:1970)
h <- ggplot(nile, aes(year))
h + geom_ribbon(aes(ymin = pred_lo, ymax = pred_hi), fill = "grey70") +
geom_line(aes(y = pred_mean), size = 1) + geom_point(aes(y = y),
color = "blue") + labs(y = "Nile River level")
```
Figure 6\.9: Estimated level and 95 percent credible intervals
We can plot the histogram of the samples against the values estimated via maximum likelihood.
```
par(mfrow = c(2, 2))
hist(pars$x0)
abline(v = coef(kem.3)$x0, col = "red")
hist(pars$u)
abline(v = coef(kem.3)$U, col = "red")
hist(log(pars$sd_q^2))
abline(v = log(coef(kem.3)$Q), col = "red")
hist(log(pars$sd_r^2))
abline(v = log(coef(kem.3)$R), col = "red")
```
Figure 6\.10: Histogram of the parameter samples versus the estimate (red line) from maximum likelihood.
6\.8 A random walk model of animal movement
-------------------------------------------
A simple random walk model of movement with drift (directional movement) but no correlation is
\\\[\\begin{gather}
x\_{1,t} \= x\_{1,t\-1} \+ u\_1 \+ w\_{1,t}, \\;\\; w\_{1,t} \\sim \\,\\text{N}(0,\\sigma^2\_1\)\\\\
x\_{2,t} \= x\_{2,t\-1} \+ u\_2 \+ w\_{2,t}, \\;\\; w\_{2,t} \\sim \\,\\text{N}(0,\\sigma^2\_2\)
\\tag{6\.8}
\\end{gather}\\]
where \\(x\_{1,t}\\) is the location at time \\(t\\) along one axis (here, longitude) and \\(x\_{2,t}\\) is the location along another, generally orthogonal, axis (here, latitude). The parameter \\(u\_1\\) is the rate of longitudinal movement and \\(u\_2\\) is the rate of latitudinal movement. We add errors to our observations of location:
\\\[\\begin{gather}
y\_{1,t} \= x\_{1,t} \+ v\_{1,t}, \\;\\; v\_{1,t} \\sim \\,\\text{N}(0,\\eta^2\_1\)\\\\
y\_{2,t} \= x\_{2,t} \+ v\_{2,t}, \\;\\; v\_{2,t} \\sim \\,\\text{N}(0,\\eta^2\_2\),
\\tag{6\.9}
\\end{gather}\\]
This model is comprised of two separate univariate state\-space models. Note that \\(y\_1\\) depends only on \\(x\_1\\) and \\(y\_2\\) depends only on \\(x\_2\\). There are no actual interactions between these two univariate models. However, we can write the model down in the form of a multivariate model using diagonal variance\-covariance matrices and a diagonal design (\\(\\mathbf{Z}\\)) matrix. Because the variance\-covariance matrices and \\(\\mathbf{Z}\\) are diagonal, the \\(x\_1\\):\\(y\_1\\) and \\(x\_2\\):\\(y\_2\\) processes will be independent as intended. Here are Equations [(6\.8\)](sec-uss-a-simple-random-walk-model-of-animal-movement.html#eq:uss-movement) and [(6\.9\)](sec-uss-a-simple-random-walk-model-of-animal-movement.html#eq:uss-observe) written as a MARSS model (in matrix form):
\\\[\\begin{gather}
\\begin{bmatrix}x\_{1,t}\\\\x\_{2,t}\\end{bmatrix}
\= \\begin{bmatrix}x\_{1,t\-1}\\\\x\_{2,t\-1}\\end{bmatrix}
\+ \\begin{bmatrix}u\_1\\\\u\_2\\end{bmatrix}
\+ \\begin{bmatrix}w\_{1,t}\\\\w\_{2,t}\\end{bmatrix},
\\textrm{ } \\mathbf{w}\_t \\sim \\,\\text{MVN}\\begin{pmatrix}0,\\begin{bmatrix}\\sigma^2\_1\&0\\\\0\&\\sigma^2\_2\\end{bmatrix} \\end{pmatrix} \\tag{6\.10} \\\\
\\nonumber \\\\
\\begin{bmatrix}y\_{1,t}\\\\y\_{2,t}\\end{bmatrix}
\= \\begin{bmatrix}1\&0\\\\0\&1\\end{bmatrix}
\\begin{bmatrix}x\_{1,t}\\\\x\_{2,t}\\end{bmatrix}
\+ \\begin{bmatrix}v\_{1,t}\\\\ v\_{2,t}\\end{bmatrix},
\\textrm{ } \\mathbf{v}\_t \\sim \\,\\text{MVN}\\begin{pmatrix}0,\\begin{bmatrix}\\eta^2\_1\&0\\\\0\&\\eta^2\_2\\end{bmatrix} \\end{pmatrix} \\tag{6\.11}
\\end{gather}\\]
The variance\-covariance matrix for \\(\\mathbf{w}\_t\\) is a diagonal matrix with unequal variances, \\(\\sigma^2\_1\\) and \\(\\sigma^2\_2\\). The variance\-covariance matrix for \\(\\mathbf{v}\_t\\) is a diagonal matrix with unequal variances, \\(\\eta^2\_1\\) and \\(\\eta^2\_2\\). We can write this succinctly as
\\\[\\begin{gather}
\\mathbf{x}\_t \= \\mathbf{x}\_{t\-1} \+ \\mathbf{u} \+ \\mathbf{w}\_t, \\;\\; \\mathbf{w}\_t \\sim \\,\\text{MVN}(0,\\mathbf{Q}) \\\\
\\mathbf{y}\_t \= \\mathbf{x}\_{t} \+ \\mathbf{v}\_t, \\;\\; \\mathbf{v}\_t \\sim \\,\\text{MVN}(0,\\mathbf{R}).
\\tag{6\.12}
\\end{gather}\\]
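The text gives no code for this model; here is a minimal sketch (our specification, using the MARSS text shortcuts introduced earlier) of how Equations (6\.10\) and (6\.11\) might be passed to `MARSS()`. The object name `dat.move` is hypothetical.
```
# dat.move is assumed to be a 2 x T matrix: row 1 = longitude, row 2 = latitude
mod.move <- list(
    B = "identity", # each x_t depends on its own x_{t-1} with coefficient 1
    U = "unequal", # separate drifts u_1 (lon) and u_2 (lat)
    Q = "diagonal and unequal", # independent process variances sigma^2_1, sigma^2_2
    Z = "identity", # y_1 observes x_1 and y_2 observes x_2
    A = "zero",
    R = "diagonal and unequal", # independent observation variances eta^2_1, eta^2_2
    x0 = "unequal", tinitx = 0)
# fit.move <- MARSS(dat.move, model = mod.move)
```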
6\.9 Problems
-------------
1. Write the equations for each of these models: ARIMA(0,0,0\), ARIMA(0,1,0\), ARIMA(1,0,0\), ARIMA(0,0,1\), ARIMA(1,0,1\). Read the help file for the `Arima()` function (in the **forecast** package) if you are fuzzy on the arima notation.
2. The **MARSS** package includes a data set of sharp\-tailed grouse in Washington. Load the data to use as follows:
```
library(MARSS)
dat <- log(grouse[, 2])
```
Consider these two models for the data:
* Model 1 random walk with no drift observed with no error
* Model 2 random walk with drift observed with no error
Written as a univariate state\-space model, model 1 is
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= x\_{t\-1}\+w\_t \\text{ where } w\_t \\sim \\,\\text{N}(0,q)\\\\
x\_0 \= a \\\\
y\_t \= x\_t
\\end{gathered}
\\tag{6\.13}
\\end{equation}\\]
Model 2 is almost identical except with \\(u\\) added
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= x\_{t\-1}\+u\+w\_t \\text{ where } w\_t \\sim \\,\\text{N}(0,q)\\\\
x\_0 \= a \\\\
y\_t \= x\_t
\\end{gathered}
\\tag{6\.14}
\\end{equation}\\]
\\(y\\) is the log grouse count in year \\(t\\).
1. Plot the data. The year is in column 1 of `grouse`.
2. Fit each model using `MARSS()`.
3. Which one appears better supported given AICc?
4. Load the **forecast** package. Use `?auto.arima` to learn what it does. Then use `auto.arima(dat)` to fit the data. Next run `auto.arima(dat, trace=TRUE)` to see all the ARIMA models that the function compared. Note, ARIMA(0,1,0\) is a random walk with b\=1\. ARIMA(0,1,0\) with drift would be a random walk (b\=1\) with drift (with \\(u\\)).
5. Is the difference in the AICc values between a random walk with and without drift comparable between `MARSS()` and `auto.arima()`?
Note, when using `auto.arima()`, an AR(1\) model of the following form will be fit (notice the \\(b\\)): \\(x\_t \= b x\_{t\-1}\+w\_t\\). `auto.arima()` refers to the model \\(x\_t \= x\_{t\-1}\+w\_t\\), which is also AR(1\) but with \\(b\=1\\), as ARIMA(0,1,0\). This says that the first difference of the data (that’s the 1 in the middle) is an ARMA(0,0\) process (the 0s in the 1st and 3rd spots). So ARIMA(0,1,0\) means this: \\(x\_t \- x\_{t\-1} \= w\_t\\).
3. Create a random walk with drift time series using `cumsum()` and `rnorm()`. Look at the `rnorm()` help file (`?rnorm`) to make sure you know what the arguments to the `rnorm()` are.
```
dat <- cumsum(rnorm(100, 0.1, 1))
```
1. What is the order of this random walk written as ARIMA(p, d, q)? “What is the order” means “what are \\(p\\), \\(d\\), and \\(q\\)?” Model “order” is how `arima()` and `Arima()` specify arima models.
2. Fit that model using `Arima()` in the **forecast** package. You’ll need to specify the arguments `order` and `include.drift`. Use `?Arima` to review what that function does if needed.
3. Write out the equation for this random walk as a univariate state\-space model. Notice that there is no observation error, but still write this as a state\-space model.
4. Fit that model with `MARSS()`.
5. How are the two estimates from `Arima()` and `MARSS()` different?
4. The first\-difference of `dat` used in the previous problem is:
```
diff.dat <- diff(dat)
```
Use `?diff` to check what the `diff()` function does.
1. If \\(x\_t\\) denotes a time series, what is the first difference of \\(x\\)? What is the second difference?
2. What is the \\(\\mathbf{x}\\) model for `diff.dat`? Look at your answer to part (a) and the answer to part (e).
3. Fit `diff.dat` using `Arima()`. You’ll need to change the arguments `order` and `include.mean`.
4. Fit with `MARSS()`. You will need to write the model for `diff.dat` as a state\-space model. If you’ve done this right, the estimated parameters using `Arima()` and `MARSS()` will now be the same.
This question should clue you into the fact that `Arima()` is not exactly fitting Equation [(6\.1\)](sec-uss-fitting-a-state-space-model-with-marss.html#eq:uss-marss). It’s very similar, but not quite written that way. By the way, Equation [(6\.1\)](sec-uss-fitting-a-state-space-model-with-marss.html#eq:uss-marss) is how structural time series observed with error are written (state\-space models). To recover the estimates that a function like `arima()` or `Arima()` returns, you need to write your state\-space model in a specific way (as seen above).
5. `Arima()` will also fit what it calls an “AR(1\) with drift.” An AR(1\) with drift is NOT this model:
\\\[\\begin{equation}
x\_t \= b x\_{t\-1}\+u\+w\_t \\text{ where } w\_t \\sim \\,\\text{N}(0,q)
\\tag{6\.15}
\\end{equation}\\]
In the population dynamics literature, this equation is called the Gompertz model and is a type of density\-dependent population model.
1. Write R code to simulate Equation [(6\.15\)](sec-uss-problems.html#eq:uss-gompertz). Make \\(b\\) less than 1 and greater than 0\. Set \\(u\\) and \\(x\_0\\) to whatever you want. You can use a for loop.
2. Plot the trajectories and show that this model does not “drift” upward or downward. It fluctuates about a mean value.
3. Hold \\(b\\) constant and change \\(u\\). How do the trajectories change?
4. Hold \\(u\\) constant and change \\(b\\). Make sure to use a \\(b\\) close to 1 and another close to 0\. How do the trajectories change?
5. Do 2 simulations each with the same \\(w\_t\\). In one simulation, set \\(u\=1\\) and in the other \\(u\=2\\). For both simulations, set \\(x\_1 \= u/(1\-b)\\). You can set \\(b\\) to whatever you want as long as \\(0\<b\<1\\). Plot the 2 trajectories on the same plot. What is different?
We will fit what `Arima()` calls “AR(1\) with drift” models in the chapter on MARSS models with covariates.
6. The **MARSS** package includes a data set of gray whales. Load the data to use as follows:
```
library(MARSS)
dat <- log(graywhales[, 2])
```
Fit a random walk with drift model observed with error to the data:
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= x\_{t\-1}\+u\+w\_t \\text{ where } w\_t \\sim \\,\\text{N}(0,q) \\\\
y\_t \= x\_t\+v\_t \\text{ where } v\_t \\sim \\,\\text{N}(0,r) \\\\
x\_0 \= a
\\end{gathered}
\\tag{6\.16}
\\end{equation}\\]
\\(y\\) is the log whale count in year \\(t\\). \\(x\\) is interpreted as the ‘true’ unknown population size that we are trying to estimate.
1. Fit this model with `MARSS()`
2. Plot the estimated \\(x\\) as a line with the actual counts added as points. \\(x\\) is in `fit$states`. It is a matrix. To plot using `plot()`, you will need to change it to a vector using `as.vector()` or `fit$states[1,]`
3. Simulate 1000 sample gray whale population trajectories (the \\(x\\) in your model) using the estimated \\(u\\) and \\(q\\) starting at the estimated \\(x\\) in 1997\. You can do this with a couple for loops or write something terse with `cumsum()` and `apply()`.
4. Using these simulated trajectories, what is your estimate of the probability that the gray whale population will be above 50,000 whales in 2007?
5. What kind(s) of uncertainty does your estimate above NOT include?
7. Fit the following models to the graywhales data using MARSS(). Assume \\(b\=1\\).
* Model 1 Process error only model with drift
* Model 2 Process error only model without drift
* Model 3 Process error with drift and observation error with observation error variance fixed \= 0\.05\.
* Model 4 Process error with drift and observation error with observation error variance estimated.
1. Compute the AICc’s for each model and likelihood or deviance (\-2 \* log likelihood). Where to find these? Try `names(fit)`. `logLik()` is the standard R function to return log\-likelihood from fits.
2. Calculate a table of \\(\\Delta\\text{AICc}\\) values and AICc weights.
3. Show the acf of the model and state residuals for the best model. You will need a vector of the residuals to do this. If `fit` is the fit from a fit call like `fit = MARSS(dat)`, you get the residuals using this code:
```
residuals(fit)$state.residuals[1, ]
residuals(fit)$model.residuals[1, ]
```
Do the acf’s suggest any problems?
8. Evaluate the predictive accuracy of forecasts made with the **forecast** package using the `airmiles` dataset.
Load the data to use as follows:
```
library(forecast)
dat <- log(airmiles)
n <- length(dat)
training.dat <- dat[1:(n - 3)]
test.dat <- dat[(n - 2):n]
```
This will prepare the training data and set aside the last 3 data points for validation.
1. Fit the following four models using `Arima()`: ARIMA(0,0,0\), ARIMA(1,0,0\), ARIMA(0,0,1\), ARIMA(1,0,1\).
2. Use `forecast()` to make 3 step ahead forecasts from each.
3. Calculate the MASE statistic for each using the `accuracy()` function in the **forecast** package. Type `?accuracy` to learn how to use this function.
4. Present the results in a table.
5. Which model is best supported based on the MASE statistic?
9. The WhaleNet Archive of STOP Data has movement data on loggerhead turtles on the east coast of the US from ARGOS tags. The **MARSS** package `loggerheadNoisy` dataset is lat/lon data on eight individuals; however, we have corrupted this data severely by adding random errors in order to create a “bad tag” problem (very noisy). Use `head(loggerheadNoisy)` to get an idea of the data. Then load the data on one turtle, MaryLee. MARSS needs time across the columns so you need to transpose the data (as shown).
```
turtlename <- "MaryLee"
dat <- loggerheadNoisy[which(loggerheadNoisy$turtle == turtlename),
5:6]
dat <- t(dat)
```
1. Plot MaryLee’s locations (as a line not dots). Put the latitude locations on the y\-axis and the longitude on the x\-axis. You can use `rownames(dat)` to see which is in which row. You can just use `plot()` for the homework. But if you want, you can look at the MARSS Manual chapter on animal movement to see how to plot the turtle locations on a map using the **maps** package.
2. Analyze the data with a state\-space model (movement observed with error) using
```
fit0 <- MARSS(dat)
```
Look at the output from the above MARSS call. What is the meaning of the parameters output from MARSS in terms of turtle movement? What exactly is the \\(u\\) estimate for example? Look at the data and think about the model you fit.
3. What assumption did the default MARSS model make about observation error and process error? What does that assumption mean in terms of how steps in the N\-S and E\-W directions are related? What does that assumption mean in terms of our assumption about the latitudinal and longitudinal observation errors?
4. Does MaryLee move faster in the latitude direction versus longitude direction?
5. Add MaryLee’s estimated “true” positions to your plot of her locations. You can use `lines(x, y, col="red")` (with x and y replaced with your x and y data). The true position is the “state.” This is in the states element of an output from MARSS `fit0$states`.
6. Fit the following models with different assumptions regarding the movement in the lat/lon direction:
* Lat/lon movements are independent but the variance is the same
* Lat/lon movements are correlated and lat/lon variances are different
* Lat/lon movements are correlated and the lat/lon variances are the same.
You only need to change the `Q` specification. Your MARSS call will now look like the following with `...` replaced with your `Q` specification.
```
fit1 <- MARSS(dat, list(Q = ...))
```
7. Plot your state residuals (true location residuals). What are the problems? Discuss in reference to your plot of the location data. Here is how to get state residuals from `MARSS()` output:
```
resids <- residuals(fit0)$state.residuals
```
The lon residuals are in row 1 and lat residuals are in row 2 (same order as the data).
Chapter 7 MARSS models
======================
This lab will show you how to fit multivariate state\-space (MARSS) models using the **MARSS** package. This class of time\-series models is also called vector autoregressive state\-space (VARSS) models. This chapter works through an example that uses model selection to test different population structures in west coast harbor seals. See for a fuller version of this example.
A script with all the R code in the chapter can be downloaded [here](./Rcode/multivariate-ss.R). The Rmd for this chapter can be downloaded [here](./Rmds/multivariate-ss.Rmd)
### Data and packages
All the data used in the chapter are in the **MARSS** package. For most examples, we will use the `MARSS()` function to fit models via maximum\-likelihood. We also show how to fit a Bayesian model using JAGS and Stan. For these sections you will need the **R2jags**, **coda** and **rstan** packages. To run the JAGS code, you will also need [JAGS](http://mcmc-jags.sourceforge.net/) installed. See Chapter [12](chap-jags.html#chap-jags) for more details on JAGS and Chapter [13](chap-stan.html#chap-stan) for more details on Stan.
```
library(MARSS)
library(R2jags)
library(coda)
library(rstan)
```
7\.1 Overview
-------------
As discussed in Chapter [6](chap-univariate-state-space.html#chap-univariate-state-space), the **MARSS** package fits multivariate state\-space models in this form:
\\\[\\begin{equation}
\\begin{gathered}
\\mathbf{x}\_t \= \\mathbf{B} \\mathbf{x}\_{t\-1}\+\\mathbf{u}\+\\mathbf{w}\_t \\text{ where } \\mathbf{w}\_t \\sim \\,\\text{N}(0,\\mathbf{Q}) \\\\
\\mathbf{y}\_t \= \\mathbf{Z}\\mathbf{x}\_t\+\\mathbf{a}\+\\mathbf{v}\_t \\text{ where } \\mathbf{v}\_t \\sim \\,\\text{N}(0,\\mathbf{R}) \\\\
\\mathbf{x}\_0 \= \\boldsymbol{\\mu}
\\end{gathered}
\\tag{7\.1}
\\end{equation}\\]
where each of the bolded terms is a matrix. Those that are bolded and small (not capitalized) have one column only, so are column matrices.
To fit a multivariate time series model with the **MARSS** package, you need to first determine the size and structure of each of the parameter matrices: \\(\\mathbf{B}\\), \\(\\mathbf{u}\\), \\(\\mathbf{Q}\\), \\(\\mathbf{Z}\\), \\(\\mathbf{a}\\), \\(\\mathbf{R}\\) and \\(\\boldsymbol{\\mu}\\). This requires first writing down your model in matrix form. We will illustrate this with a series of models for the temporal population dynamics of West coast harbor seals.
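As a purely hypothetical illustration of that bookkeeping (ours, not an example from the text): with three observed time series tracking two hidden states, \\(\\mathbf{Z}\\) is \\(3 \\times 2\\), \\(\\mathbf{Q}\\) is \\(2 \\times 2\\) and \\(\\mathbf{R}\\) is \\(3 \\times 3\\). One possible specification sketch:
```
# Hypothetical dimensions: 3 observation series in y, 2 hidden states in x
Z.ex <- matrix(c(1, 0,
    1, 0,
    0, 1), 3, 2, byrow = TRUE) # which row of y indexes which state
mod.ex <- list(B = "identity", U = "unequal", Q = "diagonal and unequal",
    Z = Z.ex, A = "scaling", R = "diagonal and equal", x0 = "unequal",
    tinitx = 0)
# fit <- MARSS(y, model = mod.ex)  # y would be a 3 x T matrix
```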
7\.2 West coast harbor seals counts
-----------------------------------
In this example, we will use multivariate state\-space models to combine surveys from four survey regions to estimate the average long\-term population growth rate and the year\-to\-year variability in that population growth rate.
We have five regions (or sites) where harbor seals were censused from 1978\-1999 while hauled out on land. During the period of this dataset, harbor seals were recovering steadily after having been reduced to low levels by hunting prior to protection. We will assume that the underlying population process is a stochastic exponential growth process with mean rates of increase that did not change over 1978\-1999\.
The survey methodologies were consistent throughout the 20 years of data, but we do not know what fraction of the population each region represents, nor do we know the observation\-error variance for each region. Given differences in the numbers of haul\-outs in each region, the observation errors may be quite different. The regions have had different levels of sampling; the best\-sampled region has only 4 years missing while the worst has over half the years missing (Figure [7\.1](sec-mss-west-coast-harbor-seals-counts.html#fig:mss-fig1)).
Figure 7\.1: Plot of the count data from the five harbor seal regions (Jeffries et al. 2003\). The numbers on each line denote the different regions: 1\) Strait of Juan de Fuca (SJF), 2\) San Juan Islands (SJI), 3\) Eastern Bays (EBays), 4\) Puget Sound (PSnd), and 5\) Hood Canal (HC). Each series is an index of the harbor seal population in that region.
### 7\.2\.1 Load the harbor seal data
The harbor seal data are included in the **MARSS** package as a matrix with years in column 1 and the logged counts in the other columns. Let’s look at the first few years of data:
```
data(harborSealWA, package = "MARSS")
print(harborSealWA[1:8, ], digits = 3)
```
```
Year SJF SJI EBays PSnd HC
[1,] 1978 6.03 6.75 6.63 5.82 6.6
[2,] 1979 NA NA NA NA NA
[3,] 1980 NA NA NA NA NA
[4,] 1981 NA NA NA NA NA
[5,] 1982 NA NA NA NA NA
[6,] 1983 6.78 7.43 7.21 NA NA
[7,] 1984 6.93 7.74 7.45 NA NA
[8,] 1985 7.16 7.53 7.26 6.60 NA
```
We are going to leave out Hood Canal (HC) since that region is somewhat isolated from the others and experiencing very different conditions due to hypoxic events and periodic intense killer whale predation. We will set up the data as follows:
```
dat <- MARSS::harborSealWA
years <- dat[, "Year"]
dat <- dat[, !(colnames(dat) %in% c("Year", "HC"))]
dat <- t(dat) # transpose to have years across columns
colnames(dat) <- years
n <- nrow(dat) - 1
```
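As a quick sanity check, the prepared data can be plotted against year, similar to Figure 7\.1\. This is only a minimal base\-graphics sketch; the code used to produce the actual figure is not shown in the chapter.
```
# Plot the logged counts for the four regions we kept (cf. Figure 7.1)
matplot(years, t(dat), type = "b", pch = 1:4, col = 1:4, lty = 1,
        xlab = "Year", ylab = "log(count)")
legend("bottomright", legend = rownames(dat), pch = 1:4, col = 1:4,
       lty = 1, bty = "n")
```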
7\.3 A single well\-mixed population
------------------------------------
When we are looking at data over a large geographic region, we might make the assumption that the different census regions are measuring a single population if we think animals are moving sufficiently such that the whole area (multiple regions together) is “well\-mixed.” We write a model of the total population abundance for this case as:
\\\[\\begin{equation}
n\_t \= \\,\\text{exp}(u \+ w\_t) n\_{t\-1},
\\tag{7\.2}
\\end{equation}\\]
where \\(n\_t\\) is the total count in year \\(t\\), \\(u\\) is the mean population growth rate, and \\(w\_t\\) is the deviation from that average in year \\(t\\).
We then take the log of both sides and write the model in log space:
\\\[\\begin{equation}
x\_t \= x\_{t\-1} \+ u \+ w\_t, \\textrm{ where } w\_t \\sim \\,\\text{N}(0,q)
\\tag{7\.3}
\\end{equation}\\]
where \\(x\_t\=\\log{n\_t}\\). When there is one effective population, there is one \\(x\\), so \\(\\mathbf{x}\_t\\) is a \\(1 \\times 1\\) matrix. This is our **state** model and \\(x\\) is called the “state.” That is just the jargon used in this type of model (a state\-space model) for the hidden state that you are estimating from the data. “Hidden” means that you observe this state with error.
### 7\.3\.1 The observation process
We assume that all four regional time series are observations of this one population trajectory but they are scaled up or down relative to that trajectory. In effect, we think of each regional survey as an index of the total population. With this model, we do not think the regions represent independent subpopulations but rather independent observations of one population.
Our model for the data, \\(\\mathbf{y}\_t \= \\mathbf{Z} \\mathbf{x}\_t \+ \\mathbf{a} \+ \\mathbf{v}\_t\\), is written as:
\\\[\\begin{equation}
\\left\[ \\begin{array}{c}
y\_{1} \\\\
y\_{2} \\\\
y\_{3} \\\\
y\_{4} \\end{array} \\right]\_t \=
\\left\[ \\begin{array}{c}
1\\\\
1\\\\
1\\\\
1\\end{array} \\right] x\_t \+
\\left\[ \\begin{array}{c}
0 \\\\
a\_2 \\\\
a\_3 \\\\
a\_4 \\end{array} \\right] \+
\\left\[ \\begin{array}{c}
v\_{1} \\\\
v\_{2} \\\\
v\_{3} \\\\
v\_{4} \\end{array} \\right]\_t
\\tag{7\.4}
\\end{equation}\\]
Each \\(y\_{i}\\) is the observed time series of counts for a different region. The \\(a\\)’s are the bias between the regional sample and the total population. \\(\\mathbf{Z}\\) specifies which observation time series, \\(y\_i\\), is associated with which population trajectory, \\(x\_j\\). In this case, \\(\\mathbf{Z}\\) is a matrix with 1 column since each region is an observation of the one population trajectory.
We allow each region to have a unique observation variance and assume that the observation errors are independent between regions. We assume that the observation errors on the log(counts) are normal and thus that the errors on the counts are log\-normal. The assumption of normality is not unreasonable since these regional counts are the sum of counts across multiple haul\-outs. We specify independent observation errors with different variances by specifying that \\(\\mathbf{v} \\sim \\,\\text{MVN}(0,\\mathbf{R})\\), where
\\\[\\begin{equation}
\\mathbf{R} \= \\begin{bmatrix}
r\_1 \& 0 \& 0 \& 0 \\\\
0 \& r\_2 \& 0 \& 0\\\\
0 \& 0 \& r\_3 \& 0 \\\\
0 \& 0 \& 0 \& r\_4 \\end{bmatrix}
\\tag{7\.5}
\\end{equation}\\]
This is a diagonal matrix with unequal variances. The shortcut for this structure in `MARSS()` is `"diagonal and unequal"`.
### 7\.3\.2 Fitting the model
We need to write the model in the form of Equation [(7\.1\)](sec-mss-overview.html#eq:mss-marss) with each parameter written as a matrix. The observation model (Equation [(7\.4\)](sec-mss-a-single-well-mixed-population.html#eq:mss-meas)) is already in matrix form. Let’s write the state model in matrix form too:
\\\[\\begin{equation}
\[x]\_t \= \[1]\[x]\_{t\-1} \+ \[u] \+ \[w]\_t, \\textrm{ where } \[w]\_t \\sim \\,\\text{N}(0,\[q])
\\tag{7\.6}
\\end{equation}\\]
It is very simple since all terms are \\(1 \\times 1\\) matrices.
To fit our model with `MARSS()`, we set up a list which precisely describes the size and structure of each parameter matrix. Fixed values in a matrix are designated with their numeric value and estimated values are given a character name and put in quotes. Our model list for a single well\-mixed population is:
```
mod.list.0 <- list(B = matrix(1), U = matrix("u"), Q = matrix("q"),
Z = matrix(1, 4, 1), A = "scaling", R = "diagonal and unequal",
x0 = matrix("mu"), tinitx = 0)
```
and fit:
```
fit.0 <- MARSS(dat, model = mod.list.0)
```
```
Success! abstol and log-log tests passed at 32 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 32 iterations.
Log-likelihood: 21.62931
AIC: -23.25863 AICc: -19.02786
Estimate
A.SJI 0.79583
A.EBays 0.27528
A.PSnd -0.54335
R.(SJF,SJF) 0.02883
R.(SJI,SJI) 0.03063
R.(EBays,EBays) 0.01661
R.(PSnd,PSnd) 0.01168
U.u 0.05537
Q.q 0.00642
x0.mu 6.22810
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
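As the printed output notes, standard errors are not computed by default. They can be added afterwards with `MARSSparamCIs()`; a minimal example using the default settings:
```
# Compute approximate confidence intervals for the estimated parameters
fit.0.CIs <- MARSSparamCIs(fit.0)
print(fit.0.CIs)
```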
We already discussed that the short\-cut `"diagonal and unequal"` means a diagonal matrix with each diagonal element having a different value. The short\-cut `"scaling"` means the form of \\(\\mathbf{a}\\) in Equation [(7\.4\)](sec-mss-a-single-well-mixed-population.html#eq:mss-meas) with one value set to 0 and the rest estimated. You should run the code that builds each matrix in the list to confirm that each parameter has the same form as in our mathematical equation for the model.
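For reference, here is a sketch of what those short\-cuts expand to when written as explicit list matrices. The element names (`"a2"`, `"r1"`, etc.) are arbitrary labels chosen here for illustration; estimated elements are character strings and fixed elements are numeric.
```
# Explicit list-matrix versions of A = "scaling" and R = "diagonal and unequal"
A.explicit <- matrix(list(0, "a2", "a3", "a4"), 4, 1)
R.explicit <- matrix(list("r1", 0, 0, 0,
                          0, "r2", 0, 0,
                          0, 0, "r3", 0,
                          0, 0, 0, "r4"), 4, 4, byrow = TRUE)
mod.list.0b <- list(B = matrix(1), U = matrix("u"), Q = matrix("q"),
                    Z = matrix(1, 4, 1), A = A.explicit, R = R.explicit,
                    x0 = matrix("mu"), tinitx = 0)
# MARSS(dat, model = mod.list.0b) should give the same fit as mod.list.0
```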
### 7\.3\.3 Model residuals
The model fits fine but look at the model residuals (Figure [7\.2](sec-mss-a-single-well-mixed-population.html#fig:mss-model-resids-plot)). They have problems.
```
par(mfrow = c(2, 2))
resids <- MARSSresiduals(fit.0, type = "tt1")
for (i in 1:4) {
plot(resids$model.residuals[i, ], ylab = "model residuals",
xlab = "")
abline(h = 0)
title(rownames(dat)[i])
}
```
Figure 7\.2: The model residuals for the first model. SJI and EBays do not look good.
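Beyond eyeballing the residual plots, one might also check the residuals for temporal autocorrelation. A small sketch, reusing the `resids` object computed above (the missing years mean `acf()` needs `na.action = na.pass`):
```
# Autocorrelation of the model residuals for each region
par(mfrow = c(2, 2))
for (i in 1:4) {
  acf(resids$model.residuals[i, ], na.action = na.pass,
      main = rownames(dat)[i])
}
```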
7\.4 Four subpopulations with temporally uncorrelated errors
------------------------------------------------------------
The model for one well\-mixed population was not very good. Another reasonable assumption is that the different census regions are measuring four different temporally independent subpopulations. We write a model of the log subpopulation abundances for this case as:
\\\[\\begin{equation}
\\begin{gathered}
\\begin{bmatrix}x\_1\\\\x\_2\\\\x\_3\\\\x\_4\\end{bmatrix}\_t \=
\\begin{bmatrix}
1 \& 0 \& 0 \& 0 \\\\
0 \& 1 \& 0 \& 0 \\\\
0 \& 0 \& 1 \& 0 \\\\
0 \& 0 \& 0 \& 1
\\end{bmatrix}
\\begin{bmatrix}x\_1\\\\x\_2\\\\x\_3\\\\x\_4\\end{bmatrix}\_{t\-1} \+
\\begin{bmatrix}u\\\\u\\\\u\\\\u\\end{bmatrix} \+
\\begin{bmatrix}w\_1\\\\w\_2\\\\w\_3\\\\w\_4\\end{bmatrix}\_t \\\\
\\textrm{ where } \\mathbf{w}\_t \\sim \\,\\text{MVN}\\begin{pmatrix}0,
\\begin{bmatrix}
q \& 0 \& 0 \& 0 \\\\
0 \& q \& 0 \& 0\\\\
0 \& 0 \& q \& 0 \\\\
0 \& 0 \& 0 \& q \\end{bmatrix}\\end{pmatrix}\\\\
\\begin{bmatrix}x\_1\\\\x\_2\\\\x\_3\\\\x\_4\\end{bmatrix}\_0 \= \\begin{bmatrix}\\mu\_1\\\\\\mu\_2\\\\\\mu\_3\\\\\\mu\_4\\end{bmatrix}
\\end{gathered}
\\tag{7\.7}
\\end{equation}\\]
The \\(\\mathbf{Q}\\) matrix is diagonal with a single variance value. This means that the process errors (the year\-to\-year deviations in population growth rate) are independent across regions (good and bad years do not coincide) but the level of variability is the same in each region. We made the \\(\\mathbf{u}\\) matrix with one \\(u\\) value. This means that we assume the mean population growth rate is the same across regions.
Notice that we set the \\(\\mathbf{B}\\) matrix equal to a diagonal matrix with 1 on the diagonal. This is the “identity” matrix and it is like a 1 but for matrices. We do not need \\(\\mathbf{B}\\) for our model, but `MARSS()` requires a value.
### 7\.4\.1 The observation process
In this model, each survey is an observation of a different \\(x\\):
\\\[\\begin{equation}
\\left\[ \\begin{array}{c}
y\_{1} \\\\
y\_{2} \\\\
y\_{3} \\\\
y\_{4} \\end{array} \\right]\_t \=
\\begin{bmatrix}
1 \& 0 \& 0 \& 0 \\\\
0 \& 1 \& 0 \& 0\\\\
0 \& 0 \& 1 \& 0 \\\\
0 \& 0 \& 0 \& 1 \\end{bmatrix} \\begin{bmatrix}x\_1\\\\x\_2\\\\x\_3\\\\x\_4\\end{bmatrix}\_t \+
\\left\[ \\begin{array}{c}
0 \\\\
0 \\\\
0 \\\\
0 \\end{array} \\right] \+
\\left\[ \\begin{array}{c}
v\_{1} \\\\
v\_{2} \\\\
v\_{3} \\\\
v\_{4} \\end{array} \\right]\_t
\\tag{7\.8}
\\end{equation}\\]
No \\(a\\)’s can be estimated since we do not have multiple observations of a given \\(x\\) time series. Our \\(\\mathbf{R}\\) matrix doesn’t change; the observation errors are still assumed to be independent with different variances.
Notice that our \\(\\mathbf{Z}\\) matrix changed. \\(\\mathbf{Z}\\) is specifying which \\(y\_i\\) goes to which \\(x\_j\\). The one we have specified means that \\(y\_1\\) is observing \\(x\_1\\), \\(y\_2\\) observes \\(x\_2\\), etc. We could have set up \\(\\mathbf{Z}\\) like so
\\\[\\begin{equation}
\\begin{bmatrix}
0 \& 1 \& 0 \& 0 \\\\
1 \& 0 \& 0 \& 0 \\\\
0 \& 0 \& 0 \& 1 \\\\
0 \& 0 \& 1 \& 0
\\end{bmatrix}
\\end{equation}\\]
This would mean that \\(y\_1\\) observes \\(x\_2\\), \\(y\_2\\) observes \\(x\_1\\), \\(y\_3\\) observes \\(x\_4\\), and \\(y\_4\\) observes \\(x\_3\\). Which \\(x\\) goes to which \\(y\\) is arbitrary, but the mapping needs to be one\-to\-one. We will stay with \\(\\mathbf{Z}\\) as an identity matrix since \\(y\_i\\) observing \\(x\_i\\) makes it easier to remember which \\(x\\) goes with which \\(y\\).
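For illustration only (we will not use it), that alternative \\(\\mathbf{Z}\\) could be constructed in R as:
```
# A permuted Z: y1 observes x2, y2 observes x1, y3 observes x4, y4 observes x3
Z.alt <- matrix(c(0, 1, 0, 0,
                  1, 0, 0, 0,
                  0, 0, 0, 1,
                  0, 0, 1, 0), 4, 4, byrow = TRUE)
```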
### 7\.4\.2 Fitting the model
We set up the model list for `MARSS()` as:
```
mod.list.1 <- list(B = "identity", U = "equal", Q = "diagonal and equal",
Z = "identity", A = "scaling", R = "diagonal and unequal",
x0 = "unequal", tinitx = 0)
```
We introduced a few more short\-cuts. `"equal"` means all the values in the matrix are the same. `"diagonal and equal"` means that the matrix is diagonal with one value on the diagonal. `"unequal"` means that all values in the matrix are different.
We can then fit our model for 4 subpopulations as:
```
fit.1 <- MARSS::MARSS(dat, model = mod.list.1)
```
7\.5 Four subpopulations with temporally correlated errors
----------------------------------------------------------
Another reasonable assumption is that the different census regions are measuring different subpopulations but that the year\-to\-year population growth rates are correlated (good and bad year coincide). The only parameter that changes is the \\(\\mathbf{Q}\\) matrix:
\\\[\\begin{equation}
\\mathbf{Q}\=\\begin{bmatrix}
q \& c \& c \& c \\\\
c \& q \& c \& c\\\\
c \& c \& q \& c \\\\
c \& c \& c \& q \\end{bmatrix}
\\tag{7\.9}
\\end{equation}\\]
This \\(\\mathbf{Q}\\) matrix structure means that the process variance (variance in year\-to\-year population growth rates) is the same across regions and the covariance in year\-to\-year population growth rates is also the same across regions.
### 7\.5\.1 Fitting the model
Set up the model list for `MARSS()` as:
```
mod.list.2 <- mod.list.1
mod.list.2$Q <- "equalvarcov"
```
`"equalvarcov"` is a shortcut for the matrix form in Equation [(7\.9\)](sec-mss-four-subpopulations-with-temporally-correlated-errors.html#eq:mss-qseg-mod2).
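As a sketch, the equivalent explicit list\-matrix specification would put a shared variance `"q"` on the diagonal and a shared covariance `"c"` everywhere else:
```
# Explicit list-matrix equivalent of Q = "equalvarcov"
Q.evc <- matrix(list("c"), 4, 4)
diag(Q.evc) <- "q"
# mod.list.2$Q <- Q.evc   # would specify the same Q structure as the shortcut
```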
Fit the model with:
```
fit.2 <- MARSS::MARSS(dat, model = mod.list.2)
```
Results are not shown, but here are the AICc values. This last model is much better:
```
c(fit.0$AICc, fit.1$AICc, fit.2$AICc)
```
```
[1] -19.02786 -22.20194 -41.00511
```
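A small helper for comparing the fits on the AICc scale (differences from the best model) might look like this:
```
# Delta-AICc relative to the best (lowest-AICc) model
aicc <- c(fit.0 = fit.0$AICc, fit.1 = fit.1$AICc, fit.2 = fit.2$AICc)
round(aicc - min(aicc), 2)
```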
### 7\.5\.2 Model residuals
Look at the model residuals (Figure [7\.3](sec-mss-four-subpopulations-with-temporally-correlated-errors.html#fig:mss-model-resids-2)). They are also much better.
```
MARSSresiduals.tt1 reported warnings. See msg element of returned residuals object.
```
Figure 7\.3: The model residuals for the model with four temporally correlated subpopulations.
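The code that produced Figure 7\.3 (and the warning above) is not shown in the chapter; it is essentially the same as for Figure 7\.2 but applied to `fit.2`. A sketch:
```
# Model residuals for the model with temporally correlated process errors
par(mfrow = c(2, 2))
resids.2 <- MARSSresiduals(fit.2, type = "tt1")
for (i in 1:4) {
  plot(resids.2$model.residuals[i, ], ylab = "model residuals", xlab = "")
  abline(h = 0)
  title(rownames(dat)[i])
}
```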
Figure [7\.4](sec-mss-four-subpopulations-with-temporally-correlated-errors.html#fig:mss-fig2-plot) shows the estimated states for each region using this code:
```
par(mfrow = c(2, 2))
for (i in 1:4) {
plot(years, fit.2$states[i, ], ylab = "log subpopulation estimate",
xlab = "", type = "l")
lines(years, fit.2$states[i, ] - 1.96 * fit.2$states.se[i,
], type = "l", lwd = 1, lty = 2, col = "red")
lines(years, fit.2$states[i, ] + 1.96 * fit.2$states.se[i,
], type = "l", lwd = 1, lty = 2, col = "red")
title(rownames(dat)[i])
}
```
Figure 7\.4: Plot of the estimate of log harbor seals in each region. The 95% confidence intervals on the population estimates are the dashed lines. These are not the confidence intervals on the observations, and the observations (the numbers) will not fall between the confidence interval lines.