  1. Help Understanding Cross Validation and Decision Trees

    I've been reading up on Decision Trees and Cross Validation, and I understand both concepts. However, I'm having trouble understanding Cross Validation as it pertains to Decision Trees. Essentially...
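The pattern this question is asking about can be sketched in a few lines of scikit-learn (the iris data, 10 folds, and the tree's parameters are illustrative assumptions, not from the question):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# cv=10: ten separate trees are fit, each scored on the one fold it never saw.
# The scores estimate generalization; a final tree is then refit on all the data.
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print(scores.mean())
```

Cross-validation here evaluates the tree-building procedure, not any single tree; the per-fold trees are thrown away once scored.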

  2. ImportError: No module named sklearn.cross_validation

Jun 5, 2015 · from sklearn.cross_validation import train_test_split. This isn't ideal though, because you're comparing package versions as strings, which usually works but doesn't always.
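The rename behind this ImportError (sklearn.cross_validation became sklearn.model_selection in scikit-learn 0.18) is usually handled with a try/except rather than a version-string comparison; a minimal sketch:

```python
import numpy as np

# Prefer the modern module; fall back for scikit-learn < 0.18.
# This avoids comparing version strings, which is brittle.
try:
    from sklearn.model_selection import train_test_split
except ImportError:
    from sklearn.cross_validation import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
print(len(X_train), len(X_test))  # 7 3
```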

  3. understanding fbprophet cross_validation - Stack Overflow

    Nov 22, 2021 · I was able to perform a cross-validation to assess the model's accuracy, but I am having trouble understanding the output. I have 687 rows; I want to train the model on all my data to …

  4. Is there a rule-of-thumb for how to divide a dataset into training and ...

    Assuming you have enough data to do proper held-out test data (rather than cross-validation), the following is an instructive way to get a handle on variances: Split your data into training and testing …
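The variance check described in this snippet can be sketched by repeating the split with different seeds and looking at the spread of test scores (iris and logistic regression are stand-ins for your own data and model):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

scores = []
for seed in range(10):
    # A fresh random 80/20 split each time; the std of the scores shows how
    # much the estimate depends on which samples land in the test set.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores.append(model.score(X_te, y_te))

print(np.mean(scores), np.std(scores))
```

A large standard deviation suggests the test set is too small for a stable estimate of generalization error.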

  5. Cross Validation in Weka - Stack Overflow

    May 4, 2012 · Then cross-validation is run. Cross-validation involves creating (in this case) 10 new models, training and testing each on segments of the data as has been described. The key is the …

  6. Cross validation in deep neural networks - Stack Overflow

    Jun 10, 2017 · How do you perform cross-validation in a deep neural network? I know that to perform cross-validation you train it on all folds except one and test it on the excluded fold. Then do this for …
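The fold loop this question describes looks the same for a network as for any other estimator; a sketch with scikit-learn's small MLPClassifier standing in for the deep net (the digits data, 3 folds, and layer size are illustrative assumptions):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

fold_scores = []
for train_idx, test_idx in KFold(n_splits=3, shuffle=True, random_state=0).split(X):
    # A fresh network per fold: train on the other folds, test on the held-out one.
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    net.fit(X[train_idx], y[train_idx])
    fold_scores.append(net.score(X[test_idx], y[test_idx]))

print(sum(fold_scores) / len(fold_scores))
```

The crucial point is re-initializing the network inside the loop, so no weights leak from one fold's training set into another fold's evaluation.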

  7. machine learning - Is k-folds cross validation a smarter idea than ...

    May 24, 2022 · Should you be using k-fold cross validation? Compared to a single validation set, k-fold cross-validation avoids over-fitting hyperparameters to a fixed validation set and makes better use of …
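The trade-off this snippet describes is visible in a few lines (iris and logistic regression are illustrative choices; the point is how KFold reuses the data):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)

# Every sample serves as validation data exactly once, so nothing is
# permanently locked away in a fixed validation set.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores.mean(), scores.std())
```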

  8. What is the difference between cross-validation and grid search?

    May 5, 2023 · Cross-validation is a method for robustly estimating test-set performance (generalization) of a model. Grid-search is a way to select the best of a family of models, parametrized by a grid of …
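The distinction this answer draws shows up directly in scikit-learn's GridSearchCV, which composes the two ideas (the SVC estimator and the candidate C values below are illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Grid search enumerates the candidate models (here, three values of C);
# cross-validation (cv=5) is the scoring procedure applied to each candidate.
search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```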

  9. Evaluating Logistic regression with cross validation

    Aug 26, 2016 · I would like to use cross validation to test/train my dataset and evaluate the performance of the logistic regression model on the entire dataset and not only on the test set (e.g. 25%). These co...
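What this question is after — a performance number covering the entire dataset rather than one held-out slice — is what cross_val_predict provides (the breast-cancer data and cv=5 are illustrative assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict

X, y = load_breast_cancer(return_X_y=True)

# Each sample is predicted by a model that never trained on it, so the
# accuracy below is computed over the entire dataset, not a 25% test split.
y_pred = cross_val_predict(LogisticRegression(max_iter=5000), X, y, cv=5)
acc = accuracy_score(y, y_pred)
print(acc)
```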

  10. r - Cross validation for glm() models - Stack Overflow

    I'm trying to do a 10-fold cross validation for some glm models that I have built earlier in R. I'm a little confused about the cv.glm() function in the boot package, although I've read a lot of help