I'm trying to combine various transformations with a LightGBM model in a scikit-learn pipeline. The model is meant to predict used-car prices. Once it is trained, I plan to integrate it into an HTML page for real-world use.
```python
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
import joblib

print(numeric_features)
# ['car_year', 'km', 'horse_power', 'cyl_capacity']
print(categorical_features)
# ['make', 'model', 'trimlevel', 'fueltype', 'transmission', 'bodytype', 'color']

# Define transformers for numeric and categorical features
numeric_transformer = Pipeline(steps=[('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[('labelencoder', LabelEncoder())])

# Combine transformers using ColumnTransformer
preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', categorical_transformer, categorical_features)
    ]
)

# Append the LightGBM model to the preprocessing pipeline
pipeline = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('model', best_lgb_model)
])

# Fit the pipeline to training data
pipeline.fit(X_train, y_train)
```
The error I get when fitting the pipeline is:

```
LabelEncoder.fit_transform() takes 2 positional arguments but 3 were given
```
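For context, my understanding so far: `LabelEncoder` is meant for encoding the target `y` (its `fit_transform` accepts only `y`), while `ColumnTransformer` calls transformers with `X` (and `y`), hence the extra-argument error. Below is a minimal sketch of what I believe would work instead, swapping in `OrdinalEncoder` (the toy DataFrame and the stand-in column subset are my own assumptions, not my real data):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler, OrdinalEncoder
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer

# Toy data standing in for my real dataset (values are made up)
X = pd.DataFrame({
    'car_year': [2015, 2018, 2020],
    'km': [120000, 60000, 15000],
    'make': ['ford', 'bmw', 'ford'],
    'fueltype': ['diesel', 'petrol', 'petrol'],
})
numeric_features = ['car_year', 'km']
categorical_features = ['make', 'fueltype']

preprocessor = ColumnTransformer(transformers=[
    ('num', Pipeline([('scaler', StandardScaler())]), numeric_features),
    # OrdinalEncoder operates column-wise on X, unlike LabelEncoder which expects y;
    # unknown categories at predict time are mapped to -1
    ('cat', Pipeline([('ordinal', OrdinalEncoder(handle_unknown='use_encoded_value',
                                                 unknown_value=-1))]), categorical_features),
])

Xt = preprocessor.fit_transform(X)
print(Xt.shape)  # → (3, 4)
```

Is this the right replacement here, given that LightGBM handles integer-coded categoricals natively?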