
Help: TypeError when tuning LSTM hyperparameters with scikit-optimize Bayesian optimization

Fixing the TypeError when combining scikit-optimize with a Keras class method

The error TypeError: wrapper() takes 1 positional argument but 2 were given has two root causes. Both are analyzed below, followed by a fix:

Root cause analysis

  1. Conflict between instance methods and the decorator: the @use_named_args decorator is designed for plain functions. When applied to an instance method it does not account for the implicit first argument self, so the wrapped function receives one more positional argument than it accepts.
  2. Parameter-name mismatch: the dimension is defined with name="adam_decay", but the corresponding parameter of the fitness method is rms_decay, so the decorator cannot map that hyperparameter correctly.
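The mechanism behind cause 1 can be reproduced without Keras or skopt. The sketch below uses a hypothetical, simplified stand-in for skopt.utils.use_named_args (named use_named_args_sketch here; the real decorator fails the same way for this purpose): the wrapper accepts exactly one positional argument (the parameter point), so the implicit self of a bound method arrives as an unexpected second argument:

```python
import functools

def use_named_args_sketch(names):
    """Simplified stand-in for skopt.utils.use_named_args (illustration only)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(point):  # accepts exactly ONE positional argument
            # unpack the point into keyword arguments by dimension name
            return func(**dict(zip(names, point)))
        return wrapper
    return decorator

# On a plain function the decorator works as intended:
@use_named_args_sketch(['x', 'n'])
def objective(x, n):
    return (x - 0.3) ** 2 + n

print(objective([0.5, 3]))  # the point list is unpacked into x=0.5, n=3

class Broken:
    # On an instance method, Broken().objective(point) passes (self, point)
    # to wrapper -> "takes 1 positional argument but 2 were given"
    @use_named_args_sketch(['x', 'n'])
    def objective(self, x, n):
        return (x - 0.3) ** 2 + n

try:
    Broken().objective([0.5, 3])
except TypeError as e:
    print(e)
```

Note that the error message names wrapper (not objective) because Python takes the name from the wrapper's code object, which functools.wraps does not change; this matches the traceback in the question.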

Complete corrected code

The key changes are annotated so you can compare against your version:

from skopt.space import Integer, Categorical, Real
from skopt.utils import use_named_args
from skopt import gp_minimize
import tensorflow
import keras.backend as K
import GetPrediction
import Model

# Dimension definitions are unchanged; note the name adam_decay
dim_learning_rate = Real(low=1e-4, high=1e-2, prior='log-uniform', name='learning_rate')
dim_num_dense_layers = Integer(low=1, high=5, name='num_dense_layers')
dim_num_input_nodes = Integer(low=16, high=128, name='num_input_nodes')
dim_num_dense_nodes = Integer(low=8, high=64, name='num_dense_nodes')
# Note: dropout rates lie between 0 and 1, so high is corrected from 2 to 1.0
dim_dropout = Real(low=0.01, high=1.0, name='dropout')
dim_activation = Categorical(categories=['relu', 'sigmoid'], name='activation')
dim_batch_size = Integer(low=1, high=128, name='batch_size')
dim_adam_decay = Real(low=1e-6, high=1e-2, name="adam_decay")

dimensions = [dim_learning_rate, dim_num_dense_layers, dim_num_input_nodes, dim_num_dense_nodes, dim_dropout, dim_activation, dim_batch_size, dim_adam_decay]
# 512 exceeded the num_input_nodes bound (16-128), so it is corrected to 128;
# pass x0=default_parameters to gp_minimize if you want this point evaluated first
default_parameters = [1e-3, 1, 128, 13, 0.5, 'relu', 64, 1e-3]

class Optimize:
    def __init__(self, _STOCK, _INTERVAL, _TYPE):
        self.stock = _STOCK
        self.interval = _INTERVAL
        self._type = _TYPE

    def Return_BestHyperParameters(self):
        # Key change: define fitness as an inner function to avoid the
        # conflict between instance methods and the decorator
        @use_named_args(dimensions=dimensions)
        def fitness(learning_rate, num_dense_layers, num_input_nodes, num_dense_nodes, dropout, activation, batch_size, adam_decay):
            # Parameter name corrected: matches the adam_decay dimension
            model = Model.Tuning_Model(
                learning_rate=learning_rate, 
                num_dense_layers=num_dense_layers, 
                num_input_nodes=num_input_nodes, 
                num_dense_nodes=num_dense_nodes, 
                dropout=dropout, 
                activation=activation, 
                rms_decay=adam_decay  # keep this keyword if Model.Tuning_Model expects rms_decay
            )

            Train_Closing, \
            Train_Volume, \
            Train_Labels, \
            Test_Closing, \
            Test_Volume, \
            Test_Labels, \
            ClosingData_scaled, \
            VolumeData_scaled = GetPrediction.Return_Data(self.stock, self.interval, self._type)

            blackbox = model.fit(
                [ Train_Closing, Train_Volume ],
                [ Train_Labels ],
                validation_data=(
                    [ Test_Closing, Test_Volume ],
                    [ Test_Labels ]
                ),
                epochs=250,
                batch_size=batch_size
            )

            # val_mae is an error metric and gp_minimize minimizes its objective,
            # so return it directly; negating it (as in the original code) would
            # make the optimizer maximize the validation error instead
            mae = blackbox.history['val_mae'][-1]

            del model
            K.clear_session()
            # TF1-only API; under TensorFlow 2.x use tf.compat.v1.reset_default_graph()
            tensorflow.reset_default_graph()

            return mae

        # Pass the inner fitness function to gp_minimize
        gp_result = gp_minimize(func=fitness, dimensions=dimensions, n_calls=12)
        return gp_result

if __name__ == '__main__':
    MyClass = Optimize('DJI', '', 'Daily')
    print(MyClass.Return_BestHyperParameters())

Summary of key changes

  1. fitness is now an inner function: defining fitness inside Return_BestHyperParameters lets it reach self attributes (such as self.stock) through the closure while avoiding the incompatibility between instance methods and the @use_named_args decorator.
  2. Parameter name corrected: the fitness parameter now matches the dimension name adam_decay, so the decorator can map the hyperparameter correctly. If your Model.Tuning_Model genuinely expects an rms_decay argument, keep the rms_decay=adam_decay assignment.
  3. dropout range corrected: the original high value of 2 lies outside the valid dropout range of 0 to 1; it is now 1.0, which avoids invalid dropout rates during training.
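As an alternative to the inner-function fix, the decorator can also be applied to the bound method at call time: once self is bound, the wrapped callable again receives exactly one positional argument. A minimal sketch of the pattern, once more using a hypothetical simplified stand-in (use_named_args_sketch) rather than the real skopt decorator:

```python
import functools

def use_named_args_sketch(names):
    """Simplified stand-in for skopt.utils.use_named_args (illustration only)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(point):
            return func(**dict(zip(names, point)))
        return wrapper
    return decorator

class Optimize:
    def __init__(self, offset):
        self.offset = offset

    # plain, undecorated method: free to use self
    def _fitness(self, x, n):
        return (x - self.offset) ** 2 + n

    def run(self, point):
        # decorate the BOUND method at call time: self is already bound,
        # so wrapper() receives exactly one positional argument
        fitness = use_named_args_sketch(['x', 'n'])(self._fitness)
        return fitness(point)

print(Optimize(0.5).run([0.5, 3]))  # (0.5 - 0.5) ** 2 + 3
```

With the real library the same pattern would read fitness = use_named_args(dimensions)(self._fitness) before handing fitness to gp_minimize, assuming _fitness takes only the named hyperparameters besides self.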

The question originates from Stack Exchange; asked by Martin Chtilianov.
