tensorflow - Shape mismatch with tf.placeholder -


I'm using 128 x 128 x 128 ndarrays as input to a CNN, declared with:

# input arrays
x = tf.placeholder(tf.float32, [None, 128, 128, 128, 1])

Each ndarray had no colour channel data, so I used:

data = np.reshape(data, (128, 128, 128, 1)) 

in order to fit the placeholder in the first place. I'm getting this error:
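As a side note, the effect of that reshape can be checked in plain NumPy (a minimal sketch using a zero-filled array as a hypothetical stand-in for one volume):

```python
import numpy as np

# hypothetical 128x128x128 volume with no colour channel axis
data = np.zeros((128, 128, 128), dtype=np.float32)

# append a trailing channel axis so the array matches (128, 128, 128, 1)
data = np.reshape(data, (128, 128, 128, 1))
print(data.shape)  # (128, 128, 128, 1)
```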

Traceback (most recent call last):
  File "tfvgg.py", line 287, in <module>
    for i in range(10000 + 1): training_step(i, i % 100 == 0, i % 20 == 0)
  File "tfvgg.py", line 277, in training_step
    a, c = sess.run([accuracy, cross_entropy], {x: batch_x, y: batch_y})
  File "/home/entelechy/tfenv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 717, in run
    run_metadata_ptr)
  File "/home/entelechy/tfenv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 894, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (128, 128, 128, 1) for Tensor 'Placeholder:0', which has shape '(?, 128, 128, 128, 1)'

I'm confused about the way placeholders work. I thought the first parameter was the batch size: by using None there, I thought the placeholder would take any number of (128, 128, 128, 1) inputs. Because it's a 3D net, if I change the placeholder to (128, 128, 128, 1) an error is thrown at the first conv3d layer about a missing parameter.

What am I missing about placeholder parameter passing?

Edit: (train_data is a list of lists, each being [ndarray, label])

This is the training step of the net:

def training_step(i, update_test_data, update_train_data):
    for a in range(len(train_data)):
        batch = train_data[a]
        batch_x = batch[0]
        batch_y = batch[1]

        # learning rate decay
        max_learning_rate = 0.003
        min_learning_rate = 0.0001
        decay_speed = 2000.0
        learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i / decay_speed)

        if update_train_data:
            a, c = sess.run([accuracy, cross_entropy], {x: batch_x, y: batch_y})
            print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")

        if update_test_data:
            a, c = sess.run([accuracy, cross_entropy], {x: test_data[0], y: test_data[1]})
            print(str(i) + ": ********* epoch " + " ********* test accuracy:" + str(a) + " test loss: " + str(c))

        sess.run(train_step, {x: batch_x, y: batch_y, lr: learning_rate})

for i in range(10000 + 1):
    training_step(i, i % 100 == 0, i % 20 == 0)

In the last question you fed the network a list containing one image: [image]. That's why the first dimension of the data was not needed and reshaping to (128, 128, 128, 1) was enough. Feeding [image] or [image1, image2, image3] worked in the last example. However, when you feed the image without a list, as batch[0], the first dimension is gone and it does not work.

[np.reshape(image, (128, 128, 128, 1))] has an overall shape of (1, 128, 128, 128, 1) and works.

np.reshape(image, (1, 128, 128, 128, 1)) has an overall shape of (1, 128, 128, 128, 1) and works too.

np.reshape(image, (128, 128, 128, 1)) without a list has an overall shape of (128, 128, 128, 1) and does not work.

You can either put the image in a list or reshape it directly to (1, 128, 128, 128, 1). In both cases the overall shape is correct. If you plan to input multiple images, it is simpler to use a list and fill it with (128, 128, 128, 1)-shaped images.
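The three cases above can be verified with plain NumPy (a sketch using a zero-filled array as a hypothetical image):

```python
import numpy as np

image = np.zeros((128, 128, 128, 1))  # hypothetical reshaped image

wrapped = np.asarray([np.reshape(image, (128, 128, 128, 1))])  # image in a list
direct = np.reshape(image, (1, 128, 128, 128, 1))              # reshape directly
bare = np.reshape(image, (128, 128, 128, 1))                   # no batch axis

print(wrapped.shape)  # (1, 128, 128, 128, 1) -> matches (?, 128, 128, 128, 1)
print(direct.shape)   # (1, 128, 128, 128, 1) -> matches too
print(bare.shape)     # (128, 128, 128, 1)    -> rank 4, cannot feed a rank-5 placeholder
```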

This way you can use batch_x = [batch[0]] for one image and batch_x = batch[0:4] for multiple images.
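A sketch of both batch constructions, assuming a hypothetical list of (128, 128, 128, 1)-shaped images standing in for the real training data:

```python
import numpy as np

# hypothetical list of four already-reshaped images
batch = [np.zeros((128, 128, 128, 1)) for _ in range(4)]

batch_x_single = np.asarray([batch[0]])  # one image, wrapped in a list
batch_x_multi = np.asarray(batch[0:4])   # a slice of several images

print(batch_x_single.shape)  # (1, 128, 128, 128, 1)
print(batch_x_multi.shape)   # (4, 128, 128, 128, 1)
```

In both cases the leading axis is the batch dimension that the None in the placeholder stands for.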

