Table of Contents for
Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition


Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition, by Aurélien Géron. Published by O'Reilly Media, Inc., 2019.
  1. Cover
  2. Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow
  3. 1. The Machine Learning Landscape
  4. 2. End-to-End Machine Learning Project
  5. 3. Classification
  6. 4. Training Models
  7. 5. Support Vector Machines
  8. 6. Decision Trees
  9. 7. Ensemble Learning and Random Forests
  10. 8. Dimensionality Reduction
  11. 9. Unsupervised Learning Techniques
  12. About the Author
  13. Colophon
  1. 1. The Machine Learning Landscape
    1. What Is Machine Learning?
    2. Why Use Machine Learning?
    3. Types of Machine Learning Systems
      1. Supervised/Unsupervised Learning
      2. Batch and Online Learning
      3. Instance-Based Versus Model-Based Learning
    4. Main Challenges of Machine Learning
      1. Insufficient Quantity of Training Data
      2. Nonrepresentative Training Data
      3. Poor-Quality Data
      4. Irrelevant Features
      5. Overfitting the Training Data
      6. Underfitting the Training Data
      7. Stepping Back
    5. Testing and Validating
    6. Exercises
  2. 2. End-to-End Machine Learning Project
    1. Working with Real Data
    2. Look at the Big Picture
      1. Frame the Problem
      2. Select a Performance Measure
      3. Check the Assumptions
    3. Get the Data
      1. Create the Workspace
      2. Download the Data
      3. Take a Quick Look at the Data Structure
      4. Create a Test Set
    4. Discover and Visualize the Data to Gain Insights
      1. Visualizing Geographical Data
      2. Looking for Correlations
      3. Experimenting with Attribute Combinations
    5. Prepare the Data for Machine Learning Algorithms
      1. Data Cleaning
      2. Handling Text and Categorical Attributes
      3. Custom Transformers
      4. Feature Scaling
      5. Transformation Pipelines
    6. Select and Train a Model
      1. Training and Evaluating on the Training Set
      2. Better Evaluation Using Cross-Validation
    7. Fine-Tune Your Model
      1. Grid Search
      2. Randomized Search
      3. Ensemble Methods
      4. Analyze the Best Models and Their Errors
      5. Evaluate Your System on the Test Set
    8. Launch, Monitor, and Maintain Your System
    9. Try It Out!
    10. Exercises
  3. 3. Classification
    1. MNIST
    2. Training a Binary Classifier
    3. Performance Measures
      1. Measuring Accuracy Using Cross-Validation
      2. Confusion Matrix
      3. Precision and Recall
      4. Precision/Recall Tradeoff
      5. The ROC Curve
    4. Multiclass Classification
    5. Error Analysis
    6. Multilabel Classification
    7. Multioutput Classification
    8. Exercises
  4. 4. Training Models
    1. Linear Regression
      1. The Normal Equation
      2. Computational Complexity
    2. Gradient Descent
      1. Batch Gradient Descent
      2. Stochastic Gradient Descent
      3. Mini-batch Gradient Descent
    3. Polynomial Regression
    4. Learning Curves
    5. Regularized Linear Models
      1. Ridge Regression
      2. Lasso Regression
      3. Elastic Net
      4. Early Stopping
    6. Logistic Regression
      1. Estimating Probabilities
      2. Training and Cost Function
      3. Decision Boundaries
      4. Softmax Regression
    7. Exercises
  5. 5. Support Vector Machines
    1. Linear SVM Classification
      1. Soft Margin Classification
    2. Nonlinear SVM Classification
      1. Polynomial Kernel
      2. Adding Similarity Features
      3. Gaussian RBF Kernel
      4. Computational Complexity
    3. SVM Regression
    4. Under the Hood
      1. Decision Function and Predictions
      2. Training Objective
      3. Quadratic Programming
      4. The Dual Problem
      5. Kernelized SVM
      6. Online SVMs
    5. Exercises
  6. 6. Decision Trees
    1. Training and Visualizing a Decision Tree
    2. Making Predictions
    3. Estimating Class Probabilities
    4. The CART Training Algorithm
    5. Computational Complexity
    6. Gini Impurity or Entropy?
    7. Regularization Hyperparameters
    8. Regression
    9. Instability
    10. Exercises
  7. 7. Ensemble Learning and Random Forests
    1. Voting Classifiers
    2. Bagging and Pasting
      1. Bagging and Pasting in Scikit-Learn
      2. Out-of-Bag Evaluation
    3. Random Patches and Random Subspaces
    4. Random Forests
      1. Extra-Trees
      2. Feature Importance
    5. Boosting
      1. AdaBoost
      2. Gradient Boosting
    6. Stacking
    7. Exercises
  8. 8. Dimensionality Reduction
    1. The Curse of Dimensionality
    2. Main Approaches for Dimensionality Reduction
      1. Projection
      2. Manifold Learning
    3. PCA
      1. Preserving the Variance
      2. Principal Components
      3. Projecting Down to d Dimensions
      4. Using Scikit-Learn
      5. Explained Variance Ratio
      6. Choosing the Right Number of Dimensions
      7. PCA for Compression
      8. Randomized PCA
      9. Incremental PCA
    4. Kernel PCA
      1. Selecting a Kernel and Tuning Hyperparameters
    5. LLE
      1. Other Dimensionality Reduction Techniques
    6. Exercises
  9. 9. Unsupervised Learning Techniques
    1. Clustering
      1. K-Means
      2. Limits of K-Means
      3. Using Clustering for Image Segmentation
      4. Using Clustering for Preprocessing
      5. Using Clustering for Semi-Supervised Learning
      6. DBSCAN
      7. Other Clustering Algorithms
    2. Gaussian Mixtures
      1. Anomaly Detection Using Gaussian Mixtures
      2. Selecting the Number of Clusters
      3. Bayesian Gaussian Mixture Models
      4. Other Anomaly Detection and Novelty Detection Algorithms