python 3.x - How to predict new data using a trained simple feed forward neural network in tensorflow
Forgive me if this sounds like a dumb question. Assuming I have a neural network trained on data of shape [m, n], how do I test the trained network on data of shape [1, 3]?
Here is the code I have:
import numpy as np
import tensorflow as tf

n_hidden_1 = 1024
n_hidden_2 = 1024
n = len(test_data[0]) - 1
m = len(test_data)
alpha = 0.005
training_epoch = 1000
display_epoch = 100

train_x = np.array([i[:-1] for i in test_data]).astype('float32')
train_x = normalize_data(train_x)
train_y = np.array([i[-1:] for i in test_data]).astype('float32')
train_y = normalize_data(train_y)

x = tf.placeholder(dtype=np.float32, shape=[m, n])
y = tf.placeholder(dtype=np.float32, shape=[m, 1])

weights = {
    'h1': tf.Variable(tf.random_normal([n, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, 1]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([1])),
}

layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.sigmoid(layer_1)
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.sigmoid(layer_2)
activation = tf.matmul(layer_2, weights['out']) + biases['out']

cost = tf.reduce_sum(tf.square(activation - y)) / (2 * m)
optimizer = tf.train.GradientDescentOptimizer(alpha).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(training_epoch):
        sess.run([optimizer, cost], feed_dict={x: train_x, y: train_y})
        cost_ = sess.run(cost, feed_dict={x: train_x, y: train_y})
        if epoch % display_epoch == 0:
            print('epoch:', epoch, 'cost:', cost_)
How do I test it on new data? With linear regression I know I can predict for data like [0.4, 0.5, 0.1] with:
predict_x = np.array([0.4, 0.5, 0.1], dtype=np.float32).reshape([1, 3])
predict_x = (predict_x - mean) / std
predict_y = tf.add(tf.matmul(predict_x, w), b)
result = sess.run(predict_y).flatten()[0]
How do I do the same with the neural network?
If you use:
x = tf.placeholder(dtype=np.float32, shape=[None, n])
y = tf.placeholder(dtype=np.float32, shape=[None, 1])
then the first dimension of the two placeholders has variable size, i.e. it can be different at training time (e.g. 720) and at test time (e.g. 1). This is referred to as having "variable batch sizes", and it is quite common to use different batch sizes during training and testing.
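For illustration, here is a minimal standalone sketch (TensorFlow 1.x, with an assumed n = 3 input features and a trivial stand-in op) showing that a placeholder whose first dimension is None accepts batches of any size:

import numpy as np
import tensorflow as tf

n = 3  # assumed number of input features, for illustration
x = tf.placeholder(dtype=np.float32, shape=[None, n])
out = 2.0 * x  # stand-in for the network's output op

with tf.Session() as sess:
    # The same graph accepts a 720-row training batch...
    big = sess.run(out, feed_dict={x: np.zeros((720, n), dtype=np.float32)})
    print(big.shape)  # (720, 3)
    # ...and a single-row test batch, without rebuilding anything.
    small = sess.run(out, feed_dict={x: np.zeros((1, n), dtype=np.float32)})
    print(small.shape)  # (1, 3)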
However, on the line:
cost = tf.reduce_sum(tf.square(activation - y)) / (2 * m)
you are making use of the Python variable m, whose value is fixed when the graph is built. To make this line work with variable batch sizes (since m is unknown before the graph is executed), it should be something like the following (with the batch size cast to float32 so it can divide the float32 sum):
m = tf.cast(tf.shape(x)[0], tf.float32)  # dynamic batch size, known only at runtime
cost = tf.reduce_sum(tf.square(activation - y)) / (tf.multiply(m, 2.0))
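As a quick sanity check, here is a minimal sketch (TF 1.x, with a hypothetical one-layer stand-in for the network above) confirming that this version of the cost evaluates for any batch size:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.placeholder(tf.float32, shape=[None, 1])
activation = tf.layers.dense(x, 1)  # stand-in for the two-layer network above

m = tf.cast(tf.shape(x)[0], tf.float32)  # dynamic batch size as a float
cost = tf.reduce_sum(tf.square(activation - y)) / (tf.multiply(m, 2.0))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for batch in (720, 1):  # the same cost op works for both batch sizes
        c = sess.run(cost, feed_dict={
            x: np.random.rand(batch, 3).astype(np.float32),
            y: np.random.rand(batch, 1).astype(np.float32),
        })
        print(batch, c)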
tf.shape evaluates the dynamic shape of x, i.e. the shape it has at runtime.
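Putting it together: once the placeholders use None for the batch dimension and the cost uses the dynamic m, predicting on a single new sample is just a matter of feeding a [1, 3] array to the network's output op. A sketch, assuming the session and graph from the question are still live, and that mean and std are the normalization statistics computed from the training data:

predict_x = np.array([0.4, 0.5, 0.1], dtype=np.float32).reshape([1, 3])
predict_x = (predict_x - mean) / std  # apply the same normalization as training

# 'activation' is the output op already defined above; no new graph nodes
# are needed for inference, and y is not fed because the cost op is not
# being evaluated.
result = sess.run(activation, feed_dict={x: predict_x}).flatten()[0]
print(result)

Note that this reuses the trained weights held by the session, so it must run before the session is closed.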