I am building a regressor with this architecture (a Masking layer for the varying trajectory lengths, with sequences zero-padded up to the longest trajectory, then LSTM layers, followed by dense layers outputting 2 values) to predict 2 values from each trajectory.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

samples, timesteps, features = x_train.shape
model = Sequential()
model.add(tf.keras.layers.Masking(mask_value=0., input_shape=(timesteps, features), name="mask"))
model.add(LSTM(30, return_sequences=True, name="lstm1"))
model.add(LSTM(30, return_sequences=False, name="lstm2"))
model.add(Dense(20, activation='relu', name="dense1"))
model.add(Dense(20, activation='relu', name="dense2"))
model.add(Dense(2, activation='linear', name="output"))
model.compile(optimizer="adam", loss="mse")
Training is done with:
model.fit(x_train, y_train, epochs=10, batch_size=32)
My input data have the following shapes:
x_train (269, 527, 11) (269 trajectories of 527 timesteps of 11 features)
y_train (269, 2) (these 269 trajectories have 2 target values)
x_test (30, 527, 11) (--- same ---)
y_test (30, 2) (--- same ---)
I have preprocessed my data so that all sequences have a fixed length, with shorter sequences zero-padded in the missing timesteps.
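The padding step described above can be sketched as follows (a minimal NumPy example with hypothetical toy trajectories, not the actual preprocessing code):

```python
import numpy as np

# Hypothetical example: three trajectories of different lengths, 2 features each.
trajectories = [np.ones((3, 2)), np.ones((5, 2)), np.ones((2, 2))]

max_len = max(t.shape[0] for t in trajectories)
n_features = trajectories[0].shape[1]

# Zero-pad each trajectory at the end (post-padding) up to max_len,
# so the Masking(mask_value=0.) layer can skip the padded timesteps.
x = np.zeros((len(trajectories), max_len, n_features), dtype="float32")
for i, t in enumerate(trajectories):
    x[i, : t.shape[0]] = t

print(x.shape)  # (3, 5, 2)
```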
As expected, the output has the following shape:
(30, 2)
But inspecting it, the model appears to be predicting essentially the same values for every trajectory:
[[37.48257 0.7025466 ]
[37.48258 0.70254654]
[37.48257 0.70254654]
[37.48257 0.7025466 ]
[37.48258 0.70254654]
[37.48258 0.70254654]
[37.48258 0.70254654]
[37.48258 0.7025465 ]
[42.243515 0.6581909 ]
[37.48258 0.70254654]
[37.48257 0.70254654]
[37.48258 0.70254654]
[37.48261 0.7025462 ]
[37.48257 0.7025466 ]
[37.482582 0.70254654]
[37.482567 0.70254654]
[37.48257 0.7025466 ]
[37.48258 0.70254654]
[37.48258 0.70254654]
[37.48257 0.7025466 ]
[37.48258 0.70254654]
[37.48258 0.70254654]
[37.48258 0.70254654]
[37.482567 0.7025465 ]
[37.48261 0.7025462 ]
[37.482574 0.7025466 ]
[37.48261 0.7025462 ]
[37.48261 0.70254624]
[37.48258 0.70254654]
[37.48261 0.7025462 ]]
whereas my target values (y_test) are:
[[70. 0.6]
[40. 0.6]
[ 6. 0.6]
[94. 0.7]
[50. 0.6]
[60. 0.6]
[16. 0.6]
[76. 0.9]
[92. 0.6]
[32. 0.8]
[22. 0.7]
[70. 0.7]
[36. 1. ]
[64. 0.7]
[ 0. 0.9]
[82. 0.9]
[38. 0.6]
[54. 0.8]
[28. 0.8]
[62. 0.7]
[12. 0.6]
[72. 0.8]
[66. 0.8]
[ 2. 1. ]
[98. 1. ]
[20. 0.8]
[82. 1. ]
[38. 1. ]
[68. 0.6]
[62. 1. ]]
It is as if the model were treating the whole dataset as a single data point.
Any help is appreciated!