Hyperparameter tuning is where data scientists tend to waste the most time. I’ve watched teams spend three weeks tuning a model that had fundamental data problems. They squeezed out a 2% accuracy gain while ignoring that their feature engineering was garbage.
In reality, hyperparameter tuning is the last thing you should do, not the first. Fix your data. Engineer better features. Choose the right algorithm. Then, and only then, tune hyperparameters.
In this tutorial, we’re tuning a Random Forest model for customer churn prediction. You’ll learn GridSearch, RandomSearch, when tuning actually helps, and when you’re just wasting compute time.
If you need the foundation first, start with Tutorial 1: ML Fundamentals and work through the series. Make sure you’ve covered Tutorial 4: Feature Engineering because good features matter more than tuned hyperparameters.
What We’re Building
We’re optimizing a Random Forest classifier for churn prediction. We’ll start with a baseline model using default parameters, then try GridSearchCV for exhaustive parameter search, followed by RandomizedSearchCV for efficient random sampling. We’ll compare performance across all approaches and analyze whether tuning was worth the computational cost.
By the end, you’ll know when hyperparameter tuning helps and when it’s a waste of time.
Understanding Hyperparameters vs Parameters
Before we tune anything, let’s clarify what hyperparameters actually are. Parameters are learned from the data during training, like model weights and coefficients. Hyperparameters are set before training starts, like learning rate, number of trees, and maximum depth.
You don’t tune parameters. The model learns those. You tune hyperparameters to control how the model learns.
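To make the distinction concrete, here is a minimal sketch using logistic regression on toy data (not part of the churn pipeline): C and max_iter are hyperparameters you choose up front, while coef_ and intercept_ are parameters the model learns during training.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data purely for illustration
X_demo, y_demo = make_classification(n_samples=200, n_features=5, random_state=42)

# C and max_iter are hyperparameters: we set them before training
clf = LogisticRegression(C=1.0, max_iter=1000)
clf.fit(X_demo, y_demo)

# coef_ and intercept_ are parameters: the model learned them from the data
print("Hyperparameters we set:", clf.get_params()['C'], clf.get_params()['max_iter'])
print("Parameters the model learned:", clf.coef_, clf.intercept_)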
Common Random Forest Hyperparameters
from sklearn.ensemble import RandomForestClassifier
# Default Random Forest
model = RandomForestClassifier(
    n_estimators=100,        # Number of trees
    max_depth=None,          # Maximum tree depth (None = unlimited)
    min_samples_split=2,     # Minimum samples to split a node
    min_samples_leaf=1,      # Minimum samples in a leaf node
    max_features='sqrt',     # Number of features per split
    bootstrap=True,          # Use bootstrap sampling
    random_state=42
)
print("Default hyperparameters:")
print(model.get_params())
Each of these controls how the Random Forest learns from data. Tuning means finding better values than the defaults.
The Baseline Model
Always start with a baseline using default hyperparameters. This is your comparison point.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, precision_score, recall_score, f1_score
import time
# Load preprocessed data (from Tutorial 2)
df = pd.read_csv('customer_churn_preprocessed.csv')
# Features and target
X = df.drop(['customer_id', 'churned'], axis=1)
y = df['churned']
# Split data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
# Train baseline model with defaults
start_time = time.time()
baseline_model = RandomForestClassifier(random_state=42)
baseline_model.fit(X_train, y_train)
training_time = time.time() - start_time
# Evaluate
y_pred = baseline_model.predict(X_test)
y_pred_proba = baseline_model.predict_proba(X_test)[:, 1]
print("Baseline Model Performance:")
print(f"Training Time: {training_time:.2f} seconds")
print(f"ROC-AUC: {roc_auc_score(y_test, y_pred_proba):.3f}")
print(f"Precision: {precision_score(y_test, y_pred):.3f}")
print(f"Recall: {recall_score(y_test, y_pred):.3f}")
print(f"F1 Score: {f1_score(y_test, y_pred):.3f}")
The output shows your starting point:
Baseline Model Performance:
Training Time: 2.34 seconds
ROC-AUC: 0.742
Precision: 0.622
Recall: 0.329
F1 Score: 0.430
Any tuning must beat these numbers to be worth the effort.
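A small habit that pays off later: capture the baseline metrics in one place as soon as you compute them, so every tuned model is judged against the same reference. A minimal sketch using the variables from the code above:

# Store baseline metrics for later comparison
baseline_metrics = {
    'roc_auc': roc_auc_score(y_test, y_pred_proba),
    'precision': precision_score(y_test, y_pred),
    'recall': recall_score(y_test, y_pred),
    'f1': f1_score(y_test, y_pred),
    'training_time_s': training_time
}

# Any tuned model has to clear these numbers
print("Baseline to beat:", baseline_metrics)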
Method 1: GridSearchCV (Exhaustive Search)
GridSearchCV tries every combination of hyperparameters you specify. It’s thorough but computationally expensive.
Define the Parameter Grid
from sklearn.model_selection import GridSearchCV
# Define parameter grid
param_grid = {
    'n_estimators': [50, 100, 200],
    'max_depth': [10, 20, 30, None],
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4],
    'max_features': ['sqrt', 'log2']
}
# Calculate total combinations
total_combinations = 1
for key, values in param_grid.items():
    total_combinations *= len(values)
print(f"Total combinations to try: {total_combinations}")
print(f"With 5-fold CV: {total_combinations * 5} model fits")
This grid defines 216 combinations (3 × 4 × 3 × 3 × 2). With 5-fold cross-validation, GridSearch will train 1,080 models. This takes time (and costs money).
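Before committing to the full grid, you can get a rough cost estimate by timing a single cross-validated fit with default settings and multiplying by the number of combinations. This is a back-of-the-envelope sketch (it ignores that some configurations, such as those with more trees, train slower than the default):

from sklearn.model_selection import cross_val_score
import time

# Time one 5-fold CV run with default hyperparameters
start = time.time()
cross_val_score(RandomForestClassifier(random_state=42), X_train, y_train,
                cv=5, scoring='roc_auc')
single_cv_time = time.time() - start

# Very rough estimate: one CV run per parameter combination
estimated_total = single_cv_time * total_combinations
print(f"One 5-fold CV run: {single_cv_time:.1f}s")
print(f"Rough grid search estimate: {estimated_total/60:.1f} minutes (before n_jobs=-1 parallelism)")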
Run GridSearchCV
# Set up GridSearchCV
grid_search = GridSearchCV(
    estimator=RandomForestClassifier(random_state=42),
    param_grid=param_grid,
    cv=5,                     # 5-fold cross-validation
    scoring='roc_auc',        # Optimize for ROC-AUC
    n_jobs=-1,                # Use all CPU cores
    verbose=1,                # Show progress
    return_train_score=True
)
# Run grid search
print("Starting GridSearchCV...")
start_time = time.time()
grid_search.fit(X_train, y_train)
grid_time = time.time() - start_time
print(f"\nGridSearchCV completed in {grid_time:.2f} seconds ({grid_time/60:.1f} minutes)")

print("\nBest parameters found:")
for param, value in grid_search.best_params_.items():
    print(f"  {param}: {value}")

print(f"\nBest cross-validation ROC-AUC: {grid_search.best_score_:.3f}")
After working through all 1,080 fits in 8.1 minutes, GridSearch found that the optimal configuration uses 200 trees, a max depth of 20, min_samples_split of 5, min_samples_leaf of 2, and 'sqrt' for max_features. The best cross-validation ROC-AUC improved to 0.758.
GridSearch found better hyperparameters. But was it worth 8 minutes of compute time?
Evaluate Tuned Model
# Get the best model
best_model_grid = grid_search.best_estimator_
# Evaluate on test set
y_pred_grid = best_model_grid.predict(X_test)
y_pred_proba_grid = best_model_grid.predict_proba(X_test)[:, 1]
print("GridSearch Tuned Model Performance:")
print(f"ROC-AUC: {roc_auc_score(y_test, y_pred_proba_grid):.3f}")
print(f"Precision: {precision_score(y_test, y_pred_grid):.3f}")
print(f"Recall: {recall_score(y_test, y_pred_grid):.3f}")
print(f"F1 Score: {f1_score(y_test, y_pred_grid):.3f}")
# Compare to baseline
print("\nImprovement over baseline:")
print(f"ROC-AUC: +{roc_auc_score(y_test, y_pred_proba_grid) - roc_auc_score(y_test, y_pred_proba):.3f}")
print(f"Recall: +{recall_score(y_test, y_pred_grid) - recall_score(y_test, y_pred):.3f}")
The tuned model achieved 0.758 ROC-AUC, 0.645 precision, 0.371 recall, and 0.471 F1 score. That's a gain of 1.6 points of ROC-AUC and 4.2 points of recall over the baseline. Not nothing, but it took 8 minutes of compute time.
Method 2: RandomizedSearchCV (Efficient Alternative)
RandomizedSearchCV randomly samples parameter combinations instead of trying all of them. Much faster, often as good.
Define Parameter Distributions
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint, uniform
# Define parameter distributions
param_distributions = {
    'n_estimators': randint(50, 500),
    'max_depth': [10, 20, 30, 40, 50, None],
    'min_samples_split': randint(2, 20),
    'min_samples_leaf': randint(1, 10),
    'max_features': ['sqrt', 'log2', 0.5, 0.7],
    'bootstrap': [True, False]
}
print("RandomizedSearch will sample random combinations from:")
for param, dist in param_distributions.items():
    print(f"  {param}: {dist}")
Run RandomizedSearchCV
# Set up RandomizedSearchCV
random_search = RandomizedSearchCV(
    estimator=RandomForestClassifier(random_state=42),
    param_distributions=param_distributions,
    n_iter=100,               # Try 100 random combinations
    cv=5,                     # 5-fold cross-validation
    scoring='roc_auc',
    n_jobs=-1,
    verbose=1,
    random_state=42,
    return_train_score=True
)
# Run random search
print("\nStarting RandomizedSearchCV...")
start_time = time.time()
random_search.fit(X_train, y_train)
random_time = time.time() - start_time
print(f"\nRandomizedSearchCV completed in {random_time:.2f} seconds ({random_time/60:.1f} minutes)")

print("\nBest parameters found:")
for param, value in random_search.best_params_.items():
    print(f"  {param}: {value}")

print(f"\nBest cross-validation ROC-AUC: {random_search.best_score_:.3f}")

print("\nTime comparison:")
print(f"  GridSearch:   {grid_time:.1f}s")
print(f"  RandomSearch: {random_time:.1f}s")
print(f"  Speedup: {grid_time/random_time:.1f}x faster")
RandomizedSearch finished in 5.7 minutes and found even better parameters: 287 trees, max depth of 30, min_samples_split of 8, and min_samples_leaf of 1. The best cross-validation ROC-AUC reached 0.761, and it completed 42% faster than GridSearch.
Evaluate RandomSearch Model
# Get the best model
best_model_random = random_search.best_estimator_
# Evaluate on test set
y_pred_random = best_model_random.predict(X_test)
y_pred_proba_random = best_model_random.predict_proba(X_test)[:, 1]
print("RandomSearch Tuned Model Performance:")
print(f"ROC-AUC: {roc_auc_score(y_test, y_pred_proba_random):.3f}")
print(f"Precision: {precision_score(y_test, y_pred_random):.3f}")
print(f"Recall: {recall_score(y_test, y_pred_random):.3f}")
print(f"F1 Score: {f1_score(y_test, y_pred_random):.3f}")
# Compare to baseline
print("\nImprovement over baseline:")
print(f"ROC-AUC: +{roc_auc_score(y_test, y_pred_proba_random) - roc_auc_score(y_test, y_pred_proba):.3f}")
print(f"Recall: +{recall_score(y_test, y_pred_random) - recall_score(y_test, y_pred):.3f}")
The RandomSearch model achieved 0.763 ROC-AUC, 0.651 precision, 0.386 recall, and 0.485 F1 score. That's 2.1 points of ROC-AUC and 5.7 points of recall over the baseline. RandomizedSearch beat GridSearch on both performance and runtime.
Analyzing Hyperparameter Importance
Which hyperparameters actually mattered? We can analyze the results to see which parameters had the biggest impact on performance.
import numpy as np
import matplotlib.pyplot as plt
# Get results from random search
results = pd.DataFrame(random_search.cv_results_)
# Analyze each parameter's impact
fig, axes = plt.subplots(2, 3, figsize=(15, 10))
axes = axes.ravel()
params_to_analyze = ['n_estimators', 'max_depth', 'min_samples_split',
                     'min_samples_leaf', 'max_features', 'bootstrap']

for idx, param in enumerate(params_to_analyze):
    # Group by parameter value and get mean score
    # (cast values to string so mixed types like None, 'sqrt', and 0.5 group and sort cleanly)
    param_col = f'param_{param}'
    if param_col in results.columns:
        grouped = results.groupby(results[param_col].astype(str))['mean_test_score'].mean().sort_index()
        axes[idx].plot(range(len(grouped)), grouped.values, 'o-')
        axes[idx].set_xticks(range(len(grouped)))
        axes[idx].set_xticklabels(grouped.index, rotation=45, fontsize=8)
        axes[idx].set_title(f'{param} Impact')
        axes[idx].set_xlabel('Parameter Value')
        axes[idx].set_ylabel('Mean ROC-AUC')
        axes[idx].grid(alpha=0.3)
plt.tight_layout()
plt.savefig('hyperparameter_impact.png')
plt.show()
This visualization shows which hyperparameters actually improved performance and which barely mattered. Typically, n_estimators and max_depth have the largest impact, while min_samples_leaf often shows minimal effect.
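If you want a numeric summary instead of eyeballing six subplots, one rough heuristic is to rank parameters by the spread of mean CV scores across their values: a parameter whose values barely move the score barely matters. A sketch using the results DataFrame and parameter list from the code above:

# Rank parameters by how much the mean CV score varies across their values
spreads = {}
for param in params_to_analyze:
    param_col = f'param_{param}'
    if param_col in results.columns:
        means = results.groupby(results[param_col].astype(str))['mean_test_score'].mean()
        spreads[param] = means.max() - means.min()

for param, spread in sorted(spreads.items(), key=lambda x: x[1], reverse=True):
    print(f"{param:20s} score spread: {spread:.4f}")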
The Complete Performance Comparison
Let’s compare all three approaches side by side:
import pandas as pd
# Create comparison DataFrame
comparison = pd.DataFrame({
    'Model': ['Baseline', 'GridSearch', 'RandomSearch'],
    'Training Time (s)': [training_time, grid_time, random_time],
    'ROC-AUC': [
        roc_auc_score(y_test, y_pred_proba),
        roc_auc_score(y_test, y_pred_proba_grid),
        roc_auc_score(y_test, y_pred_proba_random)
    ],
    'Precision': [
        precision_score(y_test, y_pred),
        precision_score(y_test, y_pred_grid),
        precision_score(y_test, y_pred_random)
    ],
    'Recall': [
        recall_score(y_test, y_pred),
        recall_score(y_test, y_pred_grid),
        recall_score(y_test, y_pred_random)
    ],
    'F1': [
        f1_score(y_test, y_pred),
        f1_score(y_test, y_pred_grid),
        f1_score(y_test, y_pred_random)
    ]
})
print("\nComplete Model Comparison:")
print(comparison.to_string(index=False))
# Calculate improvements
print("\n% Improvement over Baseline:")
for metric in ['ROC-AUC', 'Precision', 'Recall', 'F1']:
    baseline_val = comparison.loc[0, metric]
    grid_improvement = ((comparison.loc[1, metric] - baseline_val) / baseline_val) * 100
    random_improvement = ((comparison.loc[2, metric] - baseline_val) / baseline_val) * 100
    print(f"{metric:12s}: GridSearch +{grid_improvement:5.1f}% | RandomSearch +{random_improvement:5.1f}%")
The results tell a clear story. The baseline trained in 2.34 seconds with 0.742 ROC-AUC. GridSearch took 487 seconds and achieved 0.758 ROC-AUC, a 2.2% relative improvement. RandomSearch took 343 seconds and reached 0.763 ROC-AUC, a 2.8% relative improvement. The gains were similar across all metrics, with recall improving the most: 12.8% for GridSearch and 17.3% for RandomSearch.
RandomSearch won with better performance and less time.
When Hyperparameter Tuning Actually Helps
Tuning helps when your data and features are solid, meaning you've already done the work from Tutorials 2 and 4. It helps when you're close to production and need an extra 2-5% of performance, when the business value of that gain is high enough to justify the compute cost, and when you have compute time to spare. It tends to pay off most with tree-based models like Random Forest, XGBoost, or LightGBM, where hyperparameters have a significant impact.
Tuning doesn’t help when your data is garbage. Fix that first before wasting time on optimization. If your features are weak, engineer better ones instead of tuning. If you haven’t tried different algorithms yet, do that before tuning. Sometimes the baseline model already works well enough for your use case. And if you’re using simple models like logistic regression that have few hyperparameters, there’s not much to tune anyway.
The Bottom Line
Hyperparameter tuning in machine learning is the last step, not the first. Fix your data, engineer better features, and try different algorithms before spending time tuning.
When you do tune, try RandomizedSearchCV first. It’s faster than GridSearch and usually finds equally good parameters. Only use GridSearch if you need to exhaustively search a narrow parameter range.
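If you do want GridSearch's exhaustiveness, a common pattern is to let RandomizedSearchCV find a promising region first, then run a small grid around it. Here is a hedged sketch; the specific ranges below are illustrative assumptions, not the tutorial's results:

from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Suppose random search landed near n_estimators=300 and max_depth=30;
# refine with a narrow grid around that neighborhood
narrow_grid = {
    'n_estimators': [250, 300, 350],
    'max_depth': [25, 30, 35],
}

refine_search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid=narrow_grid,
    cv=5,
    scoring='roc_auc',
    n_jobs=-1
)
refine_search.fit(X_train, y_train)
print(refine_search.best_params_, refine_search.best_score_)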
Most importantly, always measure whether tuning actually helped. A 2% improvement that costs 8 hours of compute time might not be worth it. Know when to stop tuning and ship the model.
A well-engineered baseline with good features beats a perfectly tuned model with bad features, every time.
