When I started learning machine learning, I ran into a lot of complexity and confusion. In particular, I was often unsure where certain concepts are actually used. One of those points of confusion was:
The difference between scikit-learn's .fit(), .transform(), and .fit_transform() methods.
- .fit(): This method is primarily used for training machine learning models. It allows the model to learn patterns and relationships in the training data. For example, when working with classification or regression models, you call .fit() to train the model on your training dataset.
```python
from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
model.fit(X_train, y_train)  # Training the model
```
- .transform(): This method is mainly used for data transformation. After a transformation or preprocessing step has been learned with .fit(), you can apply that same transformation to new data using .transform(). This is common in preprocessing tasks like feature scaling or dimensionality reduction (e.g., PCA).
- However, a transformation cannot be performed before fitting, because the parameters needed during the transformation (for example, the mean and standard deviation for StandardScaler) have not been computed yet.
```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.fit(X)                   # Learn the scaling parameters first
X_scaled = scaler.transform(X)  # Transforming the data
```
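To see why the fit step is mandatory, here is a small sketch (the toy array `X` is made up for illustration) showing that calling .transform() on an unfitted StandardScaler raises a NotFittedError, while after .fit() the learned parameters are available:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.exceptions import NotFittedError

X = np.array([[1.0], [2.0], [3.0]])  # toy data

scaler = StandardScaler()
try:
    scaler.transform(X)  # No .fit() yet: the mean and std are unknown
except NotFittedError:
    print("transform() before fit() raises NotFittedError")

scaler.fit(X)                   # Now the parameters are computed
X_scaled = scaler.transform(X)  # scaler.mean_ is 2.0 for this toy data
```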
- .fit_transform(): This method combines both fitting and transforming in a single step. It is often used when you want to learn a transformation from the data and immediately apply it to that same data. It can be more convenient (and sometimes more efficient) than calling .fit() followed by .transform() separately.
*** You may wonder why separate functions exist if we can perform both operations at the same time.
When training a model, we should call .fit() or .fit_transform() only on the training data, because the model learns from the training data and is then evaluated on the test data. If we fit on the test data, we hand the model a preview of, and clues about, the test set, while everything the model learns should come from the training data alone. That is why the training data typically gets .fit_transform() and the test data only .transform() (reusing the parameters learned from the training data).
```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)  # Fit and transform in one step
```
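The train/test pattern described above can be sketched end to end (the data and split parameters here are made up purely for illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Toy data: 20 samples, one feature
X = np.arange(20, dtype=float).reshape(-1, 1)
y = (X.ravel() > 10).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # Learn parameters from the train set only
X_test_scaled = scaler.transform(X_test)        # Reuse the train parameters on the test set
```

Note that the scaler's mean and standard deviation come exclusively from `X_train`; the test set never influences them.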
In summary, the choice of method depends on the specific machine learning task and the preprocessing or transformation step you are performing: .fit() is used for model training, .transform() for data transformation, and .fit_transform() when you want to combine both steps efficiently.
Why should we only use .fit_transform() on the training data when training a model?
1- Preventing Data Leakage: When training a model, the .fit() and .fit_transform() methods must be used only on the training data. Otherwise, information about the test set (such as its mean or variance) leaks into the preprocessing step, which inflates evaluation scores. Restricting the fit to the training data helps keep performance estimates honest and improves the model's generalization ability.
2- Model’s Generalization Ability: Generalization describes how well a model adapts to new, unseen data. Fitting only on the training data helps preserve this ability: the training data serves as a representative of the patterns the model needs to learn, and the test data reveals whether those learned patterns carry over.
3- Statistical Accuracy: To assess a model’s performance reliably, the test data should be used. Results obtained on the training data only show how well the model fits that particular data and do not reflect its generalization ability.
4- Preventing Overfitting: Fitting on the training data carries the risk of overfitting, meaning the model fits the training data too closely and performs poorly on new data. An untouched test set is what makes it possible to detect overfitting when the model is finally evaluated.
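One common way to enforce all four points at once is scikit-learn's Pipeline, which re-fits the preprocessing on each training fold during cross-validation so the held-out fold never influences the learned parameters. A minimal sketch, using a synthetic dataset for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic classification data, purely for demonstration
X, y = make_classification(n_samples=200, random_state=0)

# The pipeline calls fit_transform on the scaler for each training fold only,
# and plain transform on the corresponding validation fold
pipe = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```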
In conclusion, the rule of calling .fit() or .fit_transform() only on the training data helps the model generalize successfully to real-world conditions and allows an accurate evaluation of its performance. The test data should serve as an independent evaluation that better reflects the model's real-world behavior.