python - Neural-net regression predicts the same value for all test samples


My neural-network regression model predicts a single value for all test samples. Playing with hyperparameters (epochs, batch_size, number of layers, hidden units, learning rate, etc.) only changes the prediction to a new constant.

For comparison, if I test on the training data itself, I get accurate results, with an RMSE of ~1.

Note: the task is to predict the remaining life of machines from run-till-failure time series data. I have used the tsfresh library to generate 1045 features from the original time series data, which had 24 features.
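A minimal sketch of that tsfresh step (the long-format DataFrame layout, the "id"/"time" column names, and the file name are assumptions, not the exact code used):

import pandas as pd
from tsfresh import extract_features

# long format: one row per time step, with a machine id, a time index,
# and the 24 raw sensor columns
df = pd.read_csv("train_series.csv")  # hypothetical file name

# tsfresh expands each machine's series into one wide row of summary features
features = extract_features(df, column_id="id", column_sort="time")
print "extracted feature matrix shape:", features.shape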

What could be causing this behavior? And how should I visualize the development of the neural network model to make sure things are going in the right direction?
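On the visualization part, one simple option (a sketch, assuming the same Keras 1.x API and the create_model builder from the code below) is to fit one model with a validation split and plot the loss curves from the History object:

import matplotlib.pyplot as plt

# h1=75, h2=15, learn_rate=0.0001 mirror the grid values used below
model = create_model(h1=75, h2=15, learn_rate=0.0001)
history = model.fit(train_x, train_y, nb_epoch=500, batch_size=8,
                    validation_split=0.2, verbose=0)

# a flat or diverging curve points at an optimization problem, not the data
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('MSE')
plt.legend()
plt.show()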

print "shape of training_features is", train_x.shape print "shape of train_labels is", train_y.shape print "shape of test_features is", test_x.shape print "shape of test_labels is", test_y.shape  input_dim = train_x.shape[1] # function create model, required kerasregressor def create_model(h1=50, h2=50, act1='sigmoid', act2='sigmoid', init='he_normal', learn_rate=0.001, momentum=0.1, loss='mean_squared_error'):     # create model     model = sequential()     model.add(dense(h1, input_dim=input_dim, init=init, activation=act1))     model.add(dense(h2, init=init, activation=act2))     model.add(dense(1, init=init))     # compile model     optimizer = sgd(lr=learn_rate, momentum=momentum)     model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy'])     return model  ''' real thing ''' # create model model = kerasregressor(build_fn=create_model, verbose=0)  # scoring function grid_scorer = make_scorer(mean_squared_error, greater_is_better=false) # grid search batch_size = [8] epochs = [500] init_mode = ['glorot_uniform'] learn_rate = [0.0001] momentum = [0.1]  hidden_layer_1 = [75] activation_1 = ['sigmoid'] hidden_layer_2 = [15] activation_2 = ['sigmoid']  param_grid = dict(batch_size=batch_size, nb_epoch=epochs, init=init_mode, h1=hidden_layer_1, h2=hidden_layer_2, act1 = activation_1, act2=activation_2, learn_rate=learn_rate, momentum=momentum)  print "\n...begin search..." grid = gridsearchcv(estimator=model, param_grid=param_grid, cv=5, scoring=grid_scorer, verbose=1)  print "\nlet's fit training data..." grid_result = grid.fit(train_x, train_y)  # summarize results print("best: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] mean, stdev, param in zip(means, stds, params):     print("%f (%f) with: %r" % (mean, stdev, param))  predicted = grid.predict(test_x)   print "\nprediction array is\n", predicted rmse = numpy.sqrt(((predicted - test_y) ** 2).mean(axis=0)) print "test rmse is", rmse 

Output:

shape of training_features is (249, 1045)
shape of train_labels is (249,)
shape of test_features is (248, 1045)
shape of test_labels is (248,)

...begin search...

Let's fit the training data...
Fitting 5 folds for each of 1 candidates, totalling 5 fits
Best: -891.761863 using {'learn_rate': 0.0001, 'h2': 15, 'act1': 'sigmoid', 'act2': 'sigmoid', 'h1': 75, 'batch_size': 8, 'init': 'glorot_uniform', 'nb_epoch': 500, 'momentum': 0.1}
-891.761863 (347.253351) with: {'learn_rate': 0.0001, 'h2': 15, 'act1': 'sigmoid', 'act2': 'sigmoid', 'h1': 75, 'batch_size': 8, 'init': 'glorot_uniform', 'nb_epoch': 500, 'momentum': 0.1}

Prediction array is
[ 295.72067261  295.72067261  295.72067261  295.72067261  295.72067261
  295.72067261  295.72067261  ...
  295.72067261  295.72067261  295.72067261
  295.72067261  295.72067261  295.72067261]

Test RMSE is 95.0019297411
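A quick sanity check on that constant output (a sketch, with train_y/test_y as defined above): a network that collapses to a constant usually lands near the mean of the training labels, so it is worth comparing against the predict-the-mean baseline:

import numpy

# if the net only learned the label mean, these two RMSEs will be close
baseline = numpy.full(test_y.shape, train_y.mean())
baseline_rmse = numpy.sqrt(((baseline - test_y) ** 2).mean())
print "train label mean:", train_y.mean()
print "constant-baseline test RMSE:", baseline_rmse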

