MARC details
000 - LEADER
Fixed length control field |
12339aam a2200217 4500 |
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION |
Fixed length control field |
210127b2021 ||||| |||| 00| 0 eng d |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER |
International Standard Book Number |
9780367545864 |
082 ## - DEWEY DECIMAL CLASSIFICATION NUMBER |
Classification number |
332.60285631 |
Item number |
C6M2 |
100 ## - MAIN ENTRY--PERSONAL NAME |
Personal name |
Coqueret, Guillaume |
9 (RLIN) |
2509508 |
245 ## - TITLE STATEMENT |
Title |
Machine learning for factor investing: R version |
260 ## - PUBLICATION, DISTRIBUTION, ETC.
Place of publication, distribution, etc. |
Boca Raton |
Name of publisher, distributor, etc. |
CRC Press |
Date of publication, distribution, etc. |
2021 |
300 ## - PHYSICAL DESCRIPTION
Extent |
xix, 321 p. : |
Other physical details |
col. ill. |
504 ## - BIBLIOGRAPHY, ETC. NOTE
Bibliography, etc. note |
Includes bibliographical references and index |
440 ## - SERIES STATEMENT/ADDED ENTRY--TITLE |
Title |
Chapman and Hall/CRC financial mathematics series |
9 (RLIN) |
399734 |
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note |
Table of contents:<br/><br/>I Introduction<br/>1. Notations and data<br/> Notations <br/> Dataset <br/>2. Introduction<br/> Context <br/> Portfolio construction: the workflow <br/> Machine Learning is no Magic Wand <br/>3. Factor investing and asset pricing anomalies<br/> Introduction <br/> Detecting anomalies <br/> Simple portfolio sorts <br/> Factors <br/> Predictive regressions, sorts, and p-value issues <br/> Fama-Macbeth regressions <br/> Factor competition <br/> Advanced techniques <br/> Factors or characteristics? <br/> Hot topics: momentum, timing and ESG <br/> Factor momentum <br/> Factor timing <br/> The green factors <br/> The link with machine learning <br/> A short list of recent references <br/> Explicit connections with asset pricing models <br/> Coding exercises <br/>4. Data preprocessing<br/> Know your data <br/> Missing data <br/> Outlier detection <br/> Feature engineering <br/> Feature selection <br/> Scaling the predictors <br/> Labelling <br/> Simple labels <br/> Categorical labels <br/> The triple barrier method <br/> Filtering the sample <br/> Return horizons <br/> Handling persistence <br/> Extensions <br/> Transforming features <br/> Macro-economic variables <br/> Active learning <br/> Additional code and results <br/> Impact of rescaling: graphical representation <br/> Impact of rescaling: toy example <br/> Coding exercises <br/>II Common supervised algorithms<br/>5. Penalized regressions and sparse hedging for minimum variance portfolios<br/> Penalised regressions <br/> Simple regressions <br/> Forms of penalizations <br/> Illustrations <br/> Sparse hedging for minimum variance portfolios <br/> Presentation and derivations <br/> Example <br/> Predictive regressions <br/> Literature review and principle <br/> Code and results <br/> Coding exercise <br/>6. 
Tree-based methods<br/> Simple trees <br/> Principle <br/> Further details on classification <br/> Pruning criteria <br/> Code and interpretation <br/> Random forests <br/> Principle <br/> Code and results <br/> Boosted trees: Adaboost <br/> Methodology <br/> Illustration <br/> Boosted trees: extreme gradient boosting <br/> Managing Loss <br/> Penalisation <br/> Aggregation <br/> Tree structure <br/> Extensions <br/> Code and results <br/> Instance weighting <br/> Discussion <br/> Coding exercises<br/>7. Neural networks<br/> The original perceptron <br/> Multilayer perceptron (MLP) <br/> Introduction and notations <br/> Universal approximation <br/> Learning via back-propagation <br/> Further details on classification <br/> How deep should we go? And other practical issues <br/> Architectural choices <br/> Frequency of weight updates and learning duration <br/> Penalizations and dropout <br/> Code samples and comments for vanilla MLP <br/> Regression example <br/> Classification example <br/> Custom losses <br/> Recurrent networks <br/> Presentation <br/> Code and results <br/> Other common architectures <br/> Generative adversarial networks <br/> Auto-encoders <br/> A word on convolutional networks <br/> Advanced architectures <br/> Coding exercise<br/>8. Support vector machines<br/> SVM for classification <br/> SVM for regression <br/> Practice <br/> Coding exercises<br/>9. Bayesian methods<br/> The Bayesian framework <br/> Bayesian sampling <br/> Gibbs sampling <br/> Metropolis-Hastings sampling <br/> Bayesian linear regression <br/> Naive Bayes classifier <br/> Bayesian additive trees <br/> General formulation <br/> Priors <br/> Sampling and predictions <br/> Code<br/>III From predictions to portfolios<br/>10. 
Validating and tuning<br/> Learning metrics <br/> Regression analysis <br/> Classification analysis <br/> Validation <br/> The variance-bias tradeoff: theory <br/> The variance-bias tradeoff: illustration <br/> The risk of overfitting: principle <br/> The risk of overfitting: some solutions <br/> The search for good hyperparameters <br/> Methods <br/> Example: grid search <br/> Example: Bayesian optimization <br/> Short discussion on validation in backtests<br/>11. Ensemble models<br/> Linear ensembles <br/> Principles <br/> Example <br/> Stacked ensembles <br/> Two stage training <br/> Code and results <br/> Extensions <br/> Exogenous variables <br/> Shrinking inter-model correlations <br/> Exercise<br/>12. Portfolio backtesting<br/> Setting the protocol <br/> Turning signals into portfolio weights <br/> Performance metrics <br/> Discussion <br/> Pure performance and risk indicators <br/> Factor-based evaluation <br/> Risk-adjusted measures <br/> Transaction costs and turnover <br/> Common errors and issues <br/> Forward looking data <br/> Backtest overfitting <br/> Simple safeguards <br/> Implication of non-stationarity: forecasting is hard <br/> General comments <br/> The no free lunch theorem <br/> Example <br/> Coding exercises<br/>IV Further important topics<br/>13. Interpretability<br/> Global interpretations <br/> Simple models as surrogates <br/> Variable importance (tree-based) <br/> Variable importance (agnostic) <br/> Partial dependence plot <br/> Local interpretations <br/> LIME <br/> Shapley values <br/> Breakdown <br/>14. Two key concepts: causality and non-stationarity<br/> Causality <br/> Granger causality <br/> Causal additive models <br/> Structural time-series models <br/> Dealing with changing environments <br/> Non-stationarity: yet another illustration <br/> Online learning <br/> Homogeneous transfer learning <br/>15. 
Unsupervised learning<br/> The problem with correlated predictors <br/> Principal component analysis and autoencoders <br/> A bit of algebra <br/> PCA <br/> Autoencoders <br/> Application <br/> Clustering via k-means <br/> Nearest neighbors <br/> Coding exercise <br/>16. Reinforcement learning<br/> Theoretical layout <br/> General framework <br/> Q-learning <br/> SARSA <br/> The curse of dimensionality <br/> Policy gradient <br/> Principle <br/> Extensions <br/> Simple examples <br/> Q-learning with simulations <br/> Q-learning with market data <br/> Concluding remarks <br/> Exercises <br/><br/> |
520 ## - SUMMARY, ETC. |
Summary, etc. |
Machine learning (ML) is progressively reshaping the fields of quantitative finance and algorithmic trading. ML tools are increasingly adopted by hedge funds and asset managers, notably for alpha signal generation and stock selection. The technicality of the subject can make it hard for non-specialists to keep up, as the jargon and coding requirements may seem out of reach. Machine Learning for Factor Investing: R Version bridges this gap. It provides a comprehensive tour of modern ML-based investment strategies that rely on firm characteristics.<br/>The book covers a wide array of subjects, ranging from economic rationales to rigorous portfolio back-testing, and encompasses both data processing and model interpretability. Common supervised learning algorithms such as tree models and neural networks are explained in the context of style investing, and the reader can also dig into more complex techniques like autoencoders for asset returns, Bayesian additive trees, and causal models.<br/>All topics are illustrated with self-contained R code samples and snippets applied to a large public dataset that contains over 90 predictors. The material, along with the content of the book, is available online so that readers can reproduce and enhance the examples at their convenience. Readers with even a basic knowledge of quantitative finance will find that this combination of theoretical concepts and practical illustrations helps them learn quickly and deepen their financial and technical expertise.<br/><br/>https://www.routledge.com/Machine-Learning-for-Factor-Investing-R-Version/Coqueret-Guida/p/book/9780367545864 |
650 ## - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name entry element |
R (Computer programming language) |
9 (RLIN) |
1688294 |
650 ## - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name entry element |
Investments - Data processing |
9 (RLIN) |
2509509 |
650 ## - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name entry element |
Machine learning |
9 (RLIN) |
2509510 |
700 ## - ADDED ENTRY--PERSONAL NAME |
Personal name |
Guida, Tony |
Relator term |
Co-author |
9 (RLIN) |
2509511 |
942 ## - ADDED ENTRY ELEMENTS (KOHA) |
Source of classification or shelving scheme |
Dewey Decimal Classification |
Koha item type |
Book |