---
output: github_document
---
```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
devtools::load_all()
load("data/bottles.RData")
```
# What is `speccurvieR`?
`speccurvieR` is an R package aimed at making specification curve
analysis easy, fast, and pretty. In other words, it helps you understand how your model changes under different specifications.
# How do I install it?
`speccurvieR` is available [via CRAN](https://cran.r-project.org/package=speccurvieR); just run the following and you’re good to go:
```{r, eval=FALSE}
install.packages("speccurvieR")
library(speccurvieR)
```
# How do I cite it?
If you find this package useful and use it in your own work, a citation would be greatly appreciated:
**Sember, Zayne. "speccurvieR: Easy, Fast, and Pretty Specification Curve Analysis." doi:10.32614/CRAN.package.speccurvieR.**
# Why did you make it?
Data visualization and tinkering in `R` have been some of the most enjoyable parts of my time working on a PhD. The seeds for this package were planted when I took a course on replication in social science with Professor Gareth Nellis. An assignment involved performing a specification curve analysis, for which the professor provided us with some code to generate a specification curve. Not feeling like modifying someone else’s code to get the final plot looking how I wanted, I went looking for alternatives and was disappointed to see the available packages weren’t much better. My biggest gripe was their use of base `R` plots, which aren’t exactly the prettiest ducklings and for which customization is a confusing hassle. Give me my `ggplot`! I want to arrange all the grobs! Long story short, I gave in and used the professor’s code. Fast forward a couple of years and I need to run a specification curve analysis but can’t find the assignment with the professor’s code–guess I’ll just write it myself.
Version 0.4.0 builds on standard specification curve analysis (i.e. comparing coefficient estimates) by offering an easy way to compare different types of standard errors. To date, no other `R` package offers this functionality, leaving modelers to manually try different standard errors to check the robustness of their results . . . or, more realistically, to stick to IID standard errors or a single variety of heteroskedasticity-consistent errors.
# Why should I use it over other packages?
`speccurvieR` seeks to provide everything alternative specification curve analysis packages do, with major improvements to usability and visualization. Some features that currently set the package apart:
* Ability to compare coefficient estimates and statistical significance across model specifications
* Ability to compare different standard error estimates including heteroskedasticity-consistent, clustered, and bootstrapped
* Ability to compare various model fit parameters across models
* Support for parallel computing to speed up model estimation and progress bars to monitor model estimation
* Support for fixed effects
* All plots are generated using ggplot2, allowing a high degree of customization
# Is this just a tool for p-hacking?
Any sensitivity analysis (or statistics in general) can be used for good or evil–specification curve analysis lets the researcher assess the robustness of their estimates by easily trying a variety of model specifications. Using this package to find a model specification with a significant p-value won’t mean that result is robust or consistent with your theory. These tools are meant to allow you to demonstrate that you’re *not* cherry-picking models!
# Can I see it in action?
I thought you’d never ask! Let’s walk through some examples.
## Estimating models
The main function of the package is `sca()` (short for specification curve analysis, not the music genre). This is where you specify the models you want estimated and get back a data frame with useful data for each model. This can then be fed to plotting functions like `plotCurve()` and `plotRMSE()`.
Let’s look at the sample data provided with the package–[a sample of the CalCOFI bottle database](https://calcofi.org/data/oceanographic-data/bottle-database/) with plenty of variables to play around with.
```{r}
names(bottles)
```
Suppose we’re modeling the effect of salinity on ocean temperatures and want to understand how including the concentration of other chemicals affects the model:
```{r}
s <- sca(y = "T_degC", x = "Salnty",
         controls = c("O2Sat", "ChlorA", "NH3uM", "NO2uM",
                      "SiO3uM", "NO2uM*SiO3uM"),
         data = bottles)
```
The function returns a data frame containing a row for every possible combination of controls with all the information needed to generate plots.
Let’s take a look at the first four rows and columns:
```{r}
s[1:4, 1:4]
```
Lots of goodies in there, including the formula used to generate each model, indicator variables for the presence of each control in that row’s model, and the control coefficients for each model.
```{r}
names(s)
```
## Plotting
Now let’s plot the specification curve for our independent variable’s coefficient:
```{r}
plotCurve(s)
```
By default, a bottom panel is provided showing which controls are present in each model. You can get the bottom panel by itself using `plotVars()`:
```{r}
plotVars(s)
```
You can also get just the top panel with the specification curve, add a title, and more:
```{r}
plotCurve(s, plotVars = FALSE, title = "Salinity Coefficient Specification Curve")
```
When `plotVars = FALSE` (i.e. when a single `ggplot` object is returned) you can also customize the plot as you would any `ggplot` object:
```{r}
library(ggplot2)
plotCurve(s, plotVars = FALSE, title = "Salinity Coefficient Specification Curve") +
  theme_minimal() +
  theme(legend.position = "bottom",
        legend.title = element_blank()) +
  labs(title = "I changed my mind and want a different title",
       x = "Model index")
```
Note that you may need to adjust the plot’s margin when customizing like this to avoid text running off the plot’s edge; this can be done easily:
```{r}
plotCurve(s, plotVars = FALSE) +
  labs(title = "I'm missing")
```
```{r}
plotCurve(s, plotVars = FALSE) +
  theme(plot.margin = unit(c(5, 5, 5, 5), "points")) +
  labs(title = "I'm found!")
```
Let’s see what other stuff we can plot.
We can look at model fits across models:
```{r}
plotRMSE(s)
```
```{r}
plotR2Adj(s)
```
We can also look at the distributions of coefficients for our control
variables:
```{r}
plotControlDistributions(s)
```
Or maybe we want histograms:
```{r}
plotControlDistributions(s, type = "histogram")
```
(Note: because the above plot is a facet-wrapped `ggplot` object, you can customize it like any other `ggplot` object.)
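For instance, a quick sketch applying standard `ggplot2` theming to the faceted histograms:
```{r}
plotControlDistributions(s, type = "histogram") +
  theme_minimal() +
  theme(strip.text = element_text(face = "bold"))
```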
## Comparing standard errors
Suppose you want to investigate how your model fares with different standard error types; `se_compare()` lets you do so in a single line of code:
```{r}
se_compare(formula = "Salnty ~ T_degC + ChlorA", data = bottles, types = "all")
```
Note that the estimate column provides the coefficient estimate, not a standard error estimate.
You can provide bootstrapping parameters if you want to investigate bootstrapped errors:
```{r}
se_compare(formula = "Salnty ~ T_degC + ChlorA", data = bottles,
           types = c("iid", "bootstrapped"),
           bootSamples = c(8, 10), bootSampleSize = c(200, 300))
```
Clustered standard errors are also supported:
```{r}
se_compare(formula = "Salnty ~ T_degC + ChlorA", data = bottles,
           types = "HC1", cluster = c("Sta_ID", "Depth_ID"))
```
As well as fixed effects:
```{r, message=FALSE, warning=FALSE}
se_compare(formula = "Salnty ~ T_degC + ChlorA | Sta_ID", data = bottles,
           types = c("CL_FE", "iid", "HC0", "HC1"))
```
Note: `CL_FE` refers to standard errors clustered by the fixed effects variables, i.e. the default errors reported by `fixest::feols()`.
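For reference, here is what `fixest` reports directly (a minimal sketch; `feols()` clusters standard errors on the first fixed effect by default):
```{r, eval=FALSE}
library(fixest)
# feols() defaults to errors clustered on the first fixed effect,
# which is what se_compare()'s CL_FE type reproduces
feols(Salnty ~ T_degC + ChlorA | Sta_ID, data = bottles)
```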
# Other features
## Fixed effects with `fixest::feols`
Pass the name of your fixed effects variable(s) when calling `sca()` or `se_compare()` and all models will be run with `feols()` from the `fixest` package!
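For example, a sketch assuming the argument is named `fixedEffects` (check `?sca` for the exact signature):
```{r, eval=FALSE}
# fixedEffects is assumed here -- verify the argument name with ?sca
s_fe <- sca(y = "T_degC", x = "Salnty",
            controls = c("O2Sat", "ChlorA"),
            data = bottles, fixedEffects = "Sta_ID")
```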
## Parallel computing
`sca()` uses the `parallel` package to offer parallel computing when estimating models. Simply set `parallel = TRUE` and the number of workers you want, e.g. `workers = 2`.
Note: parallelization is only recommended for specification curve analyses involving very large (>1000) numbers of models--for less intensive tasks parallelization will actually slow down model estimation.
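A sketch using the arguments named above (not evaluated here, since a model set this small won’t benefit):
```{r, eval=FALSE}
# Two workers; parallel and workers are the arguments described above
s_parallel <- sca(y = "T_degC", x = "Salnty",
                  controls = c("O2Sat", "ChlorA", "NH3uM", "NO2uM", "SiO3uM"),
                  data = bottles, parallel = TRUE, workers = 2)
```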
## Getting formulae
If you hate the plotting functions I’ve made or need something from the model not provided by the default output of `sca()`, you can always have it return a list of all possible formulae with `returnFormulae = TRUE`:
```{r}
formulae <- sca(y = "T_degC", x = "Salnty",
                controls = c("O2Sat", "NO2uM", "SiO3uM"),
                data = bottles, returnFormulae = TRUE)
formulae
```
Then it’s easy to estimate the models yourself with the pre-made
formulae:
```{r}
my_own_models <- lapply(formulae, lm, data = bottles)
summary(my_own_models[[1]])
```
# What's next?
Feel free to contact me at [zsember@ucsd.edu](mailto:zsember@ucsd.edu) to let me know of features you would find useful. Currently, I hope to add the following:
* Plotting different types of standard errors
* Adding support for all `glm` models in `se_compare()`
* Tools to understand variation in coefficient estimates across model specifications
If you find a bug, please create an issue on GitHub and I'll work to fix it ASAP.