
Bayesian statistics for experimental scientists: a general introduction using distribution-free methods

By: Chechile, Richard A.
Material type: Text
Publication details: Cambridge: The MIT Press, 2020
Description: xviii, 454 p.: ill. Includes bibliographical references and index
ISBN:
  • 9780262044585
DDC classification:
  • 519.542 C4B2
Summary: An introduction to the Bayesian approach to statistical inference that demonstrates its superiority to orthodox frequentist statistical analysis.

This book offers an introduction to the Bayesian approach to statistical inference, with a focus on nonparametric and distribution-free methods. It covers not only well-developed methods for doing Bayesian statistics but also novel tools that enable Bayesian statistical analyses for cases that previously did not have a full Bayesian solution. The book's premise is that there are fundamental problems with orthodox frequentist statistical analyses that distort the scientific process. Side-by-side comparisons of Bayesian and frequentist methods illustrate the mismatch between the needs of experimental scientists in making inferences from data and the properties of the standard tools of classical statistics.

The book first covers elementary probability theory, the binomial model, the multinomial model, and methods for comparing different experimental conditions or groups. It then turns its focus to distribution-free statistics based on ranked data, examining data from experimental studies and rank-based correlative methods. Each chapter includes exercises that help readers achieve a more complete understanding of the material.

The book devotes considerable attention not only to the linkage of statistics to practices in experimental science but also to the theoretical foundations of statistics. Frequentist statistical practices often violate their own theoretical premises. The beauty of Bayesian statistics, readers will learn, is that it is an internally coherent system of scientific inference that can be proved from probability theory.

https://mitpress.mit.edu/books/bayesian-statistics-experimental-scientists
Holdings
Item type  Current library            Collection   Call number     Status     Barcode
Book       Ahmedabad, General Stacks  Non-fiction  519.542 C4B2    Available  203042
Book       Raipur                     —            519.542 CHE-20  Available  IIMRP-11733
Total holds: 0

Table of contents:

I.Introduction To Bayesian Analysis For Categorical Data
1.1.Overview
1.2.Statistics as a Tool for Building Evidence
1.3.Broad Data Types
1.3.1.Categorical Data
1.3.2.Ranked Data
1.3.3.Interval and Ratio Data
1.4.Obtaining and Using R Software
1.5.Organization of Part I
1.Probability And Inference
1.2.Samples, Populations, and Statistical Inference
1.2.1.Populations versus Samples
1.2.2.Representative Samples and Human Judgment
1.2.3.Parameters, Statistics, and Statistical Inference
1.3.Defining Probability
1.3.1.Addressable Questions, Sample Spaces, and Events
1.3.2.Kolmogorov Axioms of Probability
1.3.3.Backgammon Example
1.3.4.Properties of Continuous Probability Distributions
1.4.Assigning Probability Values
1.4.1.Problems with Equal-Probability Assignment
1.4.2.Relative-Frequency Theory
1.4.3.Probability as an Encoding of Information
1.4.4.A Hybrid Bayesian Solution
1.4.5.Gambles, Odds, and Probability Measurement
1.5.Conjunctive Events
1.5.1.Conditional Probabilities
1.5.2.Conjunctive Events and Bayes Theorem
1.5.3.Statistical Dependence and Independence
1.5.4.Disjunctions from Conjunctions
1.6.Probability Trees and Unlimited Games
1.7.Exercises
2.Binomial Model
2.1.Overview
2.2.Binomial Features and Examples
2.2.1.Examples of Binomial Sampling
2.3.Binomial Distribution
2.3.1.Normal Approximation to the Binomial
2.3.2.Binomial Model over Experiments
2.4.Bayesian Inference-Discrete Approximation
2.4.1.Point Estimation
2.4.2.Interval Estimation
2.4.3.Hypothesis Testing
2.4.4.Quality of the Discrete-Approach Model
2.5.Bayesian Inference-Continuous Model
2.5.1.The Beta Distribution and the Binomial Model
2.5.2.Monte Carlo Samples from the Posterior Distribution
2.5.3.Case Study Example: TAS2R38 Gene Study
2.5.4.Case Study Example: Machine Recalibration Decision
2.5.5.Bayesian-Sign Test: A Pattern-Recognition Case Study
2.6.Which Prior?
2.6.1.The Fisher Invariance Principle and the Jeffreys Prior
2.6.2.Uninformative versus Informative Priors
2.7.Statistical Decisions and the Bayes Factor
2.7.1.The Predictive Distribution and Sequential Sampling
2.7.2.The Bayes Factor for Interval Hypotheses
2.7.3.Bayes Factor for the Sharp Null Hypothesis
2.7.4.Bayes Factor for a Trivially Small Null Interval
2.7.5.Bayes Factors and Sample Size Planning
2.7.6.Criticisms of the Bayes Factor
2.8.Comparison to the Frequentist Analysis
2.8.1.The Frequentist Maximum Likelihood Estimate
2.8.2.Frequentist Hypothesis Testing
2.8.3.The Confidence Interval
2.8.4.Power and Sample Size Planning
2.8.5.Likelihood Principle
2.8.6.Meta-Analysis Comparisons
2.9.Exercises
3.Multinomial Data
3.1.Overview
3.2.Multinomial Distribution and Examples
3.2.1.Examples of Multinomial Studies
3.2.2.The Multinomial Distribution
3.3.The Dirichlet Distribution
3.3.1.Covariation of the Dirichlet Variables
3.4.Random Samples from a Dirichlet Distribution
3.5.Multinomial Process Models
3.5.1.Logistic Models versus Latent Process Models
3.5.2.Introduction to Two Process-Tree Models
3.5.3.The Recall/2-AFC Follow-Up Model
3.5.4.The Chechile-Soraci (1999) Model
3.6.Markov Chain Monte Carlo Estimation
3.6.1.Classic Monte Carlo Sampling
3.6.2.Introduction to Markov Chain Monte Carlo
3.6.3.MCMC Estimation for the Recall/2-AFC Model
3.6.4.MCMC Estimation for the Chechile-Soraci Model
3.7.Population Parameter Mapping
3.7.1.PPM Estimation for the Recall/2-AFC Model
3.7.2.PPM Estimation for the Chechile-Soraci Model
3.8.Exercises
3.9.Appendix: Proofs of Selected Theorems
4.Condition Effects: Categorical Data
4.1.Overview
4.2.The Importance of Comparison Conditions
4.3.Related Contingency Tables (L = 2 Conditions)
4.3.1.The Classical McNemar Test
4.3.2.Bayesian 2 × 2 RB-Contingency Tables (L = 2)
4.3.3.Classical m × m RB-Contingency Tables (L = 2)
4.3.4.Bayesian m × m RB-Contingency Tables (L = 2)
4.4.Bayesian CR Analysis (L = 2 Conditions)
4.4.1.CR (L = 2) Contingency Table Framework
4.4.2.Bayesian (L = 2, k = 1) Contingency Table Analysis
4.4.3.Bayesian (L = 2, k > 1) Contingency Table Analysis
4.5.Multiple Comparisons for Bayesian Inference
4.5.1.Contrasts and Frequentist Multiple Comparisons
4.5.2.Multiple Comparisons from a Bayesian Perspective
4.5.3.Examples of Bayesian Multiple Comparison Analyses
4.6.L ≥ 2 Completely Randomized Conditions
4.6.1.Frequentist Omnibus Test for Independence
4.6.2.Pointlessness of the Chi-Square Independence Test
4.6.3.Bayesian Analysis for L ≥ 2 Groups or Conditions
4.7.L ≥ 2 Randomized-Block Conditions
4.7.1.Binomial Data: Frequentist Test
4.7.2.Bayesian RB-Contingency Tables for L ≥ 2: Binomial Data
4.7.3.Bayesian RB Analysis for Multinomial Data with L ≥ 2
4.8.2 × 2 Split-Plot or Mixed Designs
4.9.Planning the Sample Size in Advance
4.9.1.Sample-Size Planning for RB Experiments
4.9.2.Sample-Size Planning for CR Experiments
4.10.Overview of Bayesian Comparison Procedures
4.11.Exercises
II.Bayesian Analysis of Ordinal Information
5.Median- And Sign-Based Methods
5.1.Overview
5.2.Median Test
5.2.1.Examples of a Median Test Analysis
5.2.2.Frequentist Median Test for L = 2 Groups
5.2.3.Frequentist Median Test Extension for L > 2
5.2.4.Bayesian Median-Test Analysis L = 2 CR Conditions
5.2.5.Bayesian Median-Test Analysis L > 2 Conditions
5.2.6.Limitations of the Median Test
5.3.Sign Test for RB Research Designs
5.3.1.Bayesian L = 2 Conditions Sign Test
5.3.2.Frequentist Tests for Rank-Based RB Designs for L > 2
5.3.3.Bayesian Multiple-Sign Tests for RB Designs for L > 2
5.3.4.Sample Size and the Bayes-Factor Relative Efficiency
5.4.Bayesian Nonparametric Split-Plot Analysis
5.5.Exercises
6.Wilcoxon Signed-Rank Procedure
6.1.Overview
6.2.Frequentist Wilcoxon Signed-Rank Analysis
6.2.1.Examples for the Wilcoxon Signed-Rank Statistic
6.2.2.Frequentist Wilcoxon Analysis
6.3.Bayesian Discrete Small-Sample Analysis
6.3.1.Introduction to Bayesian Wilcoxon Analysis
6.3.2.Noninteger T+ for n < 25
6.3.3.Comparisons to the Yuan-Johnson Approach
6.4.Continuous Large-Sample Model
6.4.1.The Large-Sample Model
6.4.2.A Meta-Analysis Application
6.5.Comparisons with Other Procedures
6.5.1.Comparisons with the Bayesian Sign Test
6.5.2.Comparisons with the Within-Block t Test
6.6.Exercises
6.7.Appendix: Discrete-Approximation Software
7.Mann-Whitney Procedure
7.1.Overview
7.2.Frequentist Mann-Whitney Statistic
7.2.1.Some Examples for the Mann-Whitney Statistic
7.2.2.The Mann-Whitney Statistic
7.3.Bayesian Mann-Whitney Analysis: Discrete Case
7.3.1.The Population Difference Proportion Parameter
7.3.2.Exponential Mimicry and the Likelihood Function
7.3.3.Discrete Small-Sample Analysis
7.4.Continuous Larger-Sample Approximation
7.4.1.The General Method
7.4.2.Stress-Strength Application
7.5.Planning and Bayes-Factor Relative Efficiency
7.6.Comparisons to the Independent-Groups t Test
7.7.Exercises
7.8.Appendix: Programs and Documentation
7.8.1.Program for the Discrete Approximation Method
7.8.2.Lagrange Estimates for Ω_E(x), n_a, and n_b
8.Distribution-Free Correlation
8.1.Overview
8.2.Introduction to Rank-Based Correlation
8.2.1.Three Correlation Coefficients
8.3.The Kendall Tau with Tied Ranks
8.3.1.The Goodman-Kruskal G Statistic
8.4.Bayesian Analysis for the Kendall Tau
8.4.1.Study of Brain Size and Intelligence
8.4.2.Predicting Consumer Preference
8.4.3.Ordered-Contingency Table Application
8.4.4.Monte Carlo Comparisons
8.4.5.Kendall Tau and Experimental Differences
8.5.Testing Theories with the Kendall Tau
8.5.1.Comparing Scientific Functions
8.5.2.Testing Theories of the Risky Weighting Function
8.5.3.Testing for a Perfect or Near-Perfect Fit
8.6.Exercises

