<p>Statistics for Linguists: An Introduction Using R is the first statistics textbook on linear models for linguistics. The book covers linear models from simple uses through generalized linear models to more advanced approaches, maintaining its focus on conceptual issues and avoiding excessive mathematical detail. It contains many applied examples using the R statistical programming environment. Written in an accessible tone and style, this text is the ideal main resource for graduate and advanced undergraduate statistics courses in Linguistics, as well as for students in other fields, including Psychology, Cognitive Science, and Data Science.</p> <p>Table of contents</p><p>0. Preface: Approach and how to use this book</p><p>0.1. Strategy of the book</p><p>0.2. Why R?</p><p>0.3. Why the tidyverse?</p><p>0.4. R packages required for this book</p><p>0.5. What this book is not</p><p>0.6. How to use this book</p><p>0.7. Information for teachers</p><p>1. Introduction to base R</p><p>1.1. Introduction</p><p>1.2. Baby steps: simple math with R</p><p>1.3. Your first R script</p><p>1.4. Assigning variables</p><p>1.5. Numeric vectors</p><p>1.6. Indexing</p><p>1.7. Logical vectors</p><p>1.8. Character vectors</p><p>1.9. Factor vectors</p><p>1.10. Data frames</p><p>1.11. Loading in files</p><p>1.12. Plotting</p><p>1.13. Installing, loading, and citing packages</p><p>1.14. Seeking help</p><p>1.15. A note on keyboard shortcuts</p><p>1.16. Your R journey: The road ahead</p><p>2. Tidy functions and reproducible R workflows</p><p>2.1. Introduction</p><p>2.2. tibble and readr</p><p>2.3. dplyr</p><p>2.4. ggplot2</p><p>2.5. Piping with magrittr</p><p>2.6. A more extensive example: iconicity and the senses</p><p>2.7. R markdown</p><p>2.8. Folder structure for analysis projects</p><p>2.9. Readme files and more markdown</p><p>2.10. Open and reproducible research</p><p>3. Models and distributions</p><p>3.1. Models</p><p>3.2. Distributions</p><p>3.3. The normal distribution</p><p>3.4. 
Thinking of the mean as a model</p><p>3.5. Other summary statistics: median and range</p><p>3.6. Boxplots and the interquartile range</p><p>3.7. Summary statistics in R</p><p>3.8. Exploring the emotional valence ratings</p><p>3.9. Chapter conclusions</p><p>4. Introduction to the linear model: Simple linear regression</p><p>4.1. Word frequency effects</p><p>4.2. Intercepts and slopes</p><p>4.3. Fitted values and residuals</p><p>4.4. Assumptions: Normality and constant variance</p><p>4.5. Measuring model fit with <i>R</i><sup>2</sup></p><p>4.6. A simple linear model in R</p><p>4.7. Linear models with tidyverse functions</p><p>4.8. Model formula notation: Intercept placeholders</p><p>4.9. Chapter conclusions</p><p>5. Correlation, linear, and nonlinear transformations</p><p>5.1. Centering</p><p>5.2. Standardizing</p><p>5.3. Correlation</p><p>5.4. Using logarithms to describe magnitudes</p><p>5.5. Example: Response durations and word frequency</p><p>5.6. Centering and standardization in R</p><p>5.7. Terminological note on the term ‘normalizing’</p><p>5.8. Chapter conclusions</p><p>6. Multiple regression</p><p>6.1. Regression with more than one predictor</p><p>6.2. Multiple regression with standardized coefficients</p><p>6.3. Assessing assumptions</p><p>6.4. Collinearity</p><p>6.5. Adjusted <i>R</i><sup>2</sup></p><p>6.6. Chapter conclusions</p><p>7. Categorical predictors</p><p>7.1. Introduction</p><p>7.2. Modeling the emotional valence of taste and smell words</p><p>7.3. Processing the taste and smell data</p><p>7.4. Treatment coding in R</p><p>7.5. Doing dummy coding ‘by hand’</p><p>7.6. Changing the reference level</p><p>7.7. Sum coding in R</p><p>7.8. Categorical predictors with more than two levels</p><p>7.9. Assumptions again</p><p>7.10. Other coding schemes</p><p>7.11. Chapter conclusions</p><p>8. Interactions and nonlinear effects</p><p>8.1. Introduction</p><p>8.2. Categorical * continuous interactions</p><p>8.3. Categorical * categorical interactions</p><p>8.4. 
Continuous * continuous interactions</p><p>8.5. Continuous interactions and regression planes</p><p>8.6. Higher-order interactions</p><p>8.7. Chapter conclusions</p><p>9. Inferential statistics 1: Significance testing</p><p>9.1. Introduction</p><p>9.2. Effect size: Cohen’s <i>d</i></p><p>9.3. Cohen’s <i>d</i> in R</p><p>9.4. Standard errors and confidence intervals</p><p>9.5. Null hypotheses</p><p>9.6. Using <i>t</i> to measure the incompatibility with the null hypothesis</p><p>9.7. Using the <i>t</i>-distribution to compute <i>p</i>-values</p><p>9.8. Chapter conclusions</p><p>10. Inferential statistics 2: Issues in significance testing</p><p>10.1. Common misinterpretations of <i>p</i>-values</p><p>10.2. Statistical power and Type I, II, M, and S errors</p><p>10.3. Multiple testing</p><p>10.4. Stopping rules</p><p>10.5. Chapter conclusions</p><p>11. Inferential statistics 3: Significance testing in a regression context</p><p>11.1. Introduction</p><p>11.2. Standard errors and confidence intervals for regression coefficients</p><p>11.3. Significance tests with multi-level categorical predictors</p><p>11.4. Another example: the absolute valence of taste and smell words</p><p>11.5. Communicating uncertainty for categorical predictors</p><p>11.6. Communicating uncertainty for continuous predictors</p><p>11.7. Chapter conclusions</p><p>12. Generalized linear models: Logistic regression</p><p>12.1. Motivating generalized linear models</p><p>12.2. Theoretical background: Data-generating processes</p><p>12.3. The log odds function and interpreting logits</p><p>12.4. Speech errors and blood alcohol concentration</p><p>12.5. Predicting the dative alternation</p><p>12.6. Analyzing gesture perception: Hassemer &amp; Winter (2016)</p><p>12.6.1. Exploring the dataset</p><p>12.6.2. Logistic regression analysis</p><p>12.7. Chapter conclusions</p><p>13. Generalized linear models 2: Poisson regression</p><p>13.1. Motivating Poisson regression</p><p>13.2. The Poisson distribution</p><p>13.3. 
Analyzing linguistic diversity using Poisson regression</p><p>13.4. Adding exposure variables</p><p>13.5. Negative binomial regression for overdispersed count data</p><p>13.6. Overview and summary of the generalized linear model framework</p><p>13.7. Chapter conclusions</p><p>14. Mixed models 1: Conceptual introduction</p><p>14.1. Introduction</p><p>14.2. The independence assumption</p><p>14.3. Dealing with non-independence via experimental design and averaging</p><p>14.4. Mixed models: Varying intercepts and varying slopes</p><p>14.5. More on varying intercepts and varying slopes</p><p>14.6. Interpreting random effects and random effect correlations</p><p>14.7. Specifying mixed effects models: lme4 syntax</p><p>14.8. Reasoning about your mixed model: The importance of varying slopes</p><p>14.9. Chapter conclusions</p><p>15. Mixed models 2: Extended example, significance testing, convergence issues</p><p>15.1. Introduction</p><p>15.2. Simulating vowel durations for a mixed model analysis</p><p>15.3. Analyzing the simulated vowel durations with mixed models</p><p>15.4. Extracting information out of lme4 objects</p><p>15.5. Messing up the model</p><p>15.6. Likelihood ratio tests</p><p>15.7. Remaining issues</p><p>15.7.1. <i>R</i>-squared for mixed models</p><p>15.7.2. Predictions from mixed models</p><p>15.7.3. Convergence issues</p><p>15.8. Mixed logistic regression: Ugly selfies</p><p>15.9. Shrinkage and individual differences</p><p>15.10. Chapter conclusions</p><p>16. Outlook and strategies for model building</p><p>16.1. What you have learned so far</p><p>16.2. Model choice</p><p>16.3. The cookbook approach</p><p>16.4. Stepwise regression</p><p>16.5. A plea for subjective and theory-driven statistical modeling</p><p>16.6. Reproducible research</p><p>16.7. Closing words</p><p>References</p><p>Appendix A. Correspondences between significance tests and linear models</p><p>Appendix B. Reading recommendations</p>