python - Caching TensorFlow results on GPU


Using TensorFlow, how can I keep the result of session.run(some_tensors, ...) on the GPU and use it again as input via feed_dicts?

Edit: here is a concrete example of why I need this. I have data (multi-dimensional tensors) with multiple RNNs running on them. The data is so large that I cannot process even one sample at once on the GPU, so I break each sample into parts and run the parts through the RNNs. That means I need to save the final states of the RNNs after processing the first part and pass them to the RNNs processing the next part. Right now I evaluate the part-1 RNN states with session.run(), bring them onto the CPU, and then pass them back in through feed_dicts when evaluating the next part of the data, and so on.
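For context, here is a minimal sketch of that round-trip pattern; the LSTM cell, the placeholder names (inputs, init_c, init_h), and the shapes are illustrative assumptions, not taken from the question. The final state comes back to the host as NumPy arrays and is copied to the GPU again on the next chunk.

    import numpy as np
    import tensorflow as tf  # TF 1.x API

    BATCH, STEPS, FEATURES, UNITS = 4, 50, 8, 16

    inputs = tf.placeholder(tf.float32, [BATCH, STEPS, FEATURES])
    # Placeholders for the carried-over LSTM state (c and h).
    init_c = tf.placeholder(tf.float32, [BATCH, UNITS])
    init_h = tf.placeholder(tf.float32, [BATCH, UNITS])

    cell = tf.nn.rnn_cell.LSTMCell(UNITS)
    initial_state = tf.nn.rnn_cell.LSTMStateTuple(init_c, init_h)
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs,
                                             initial_state=initial_state)

    data = np.random.randn(BATCH, 2 * STEPS, FEATURES).astype(np.float32)
    parts = [data[:, :STEPS], data[:, STEPS:]]

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        state = (np.zeros([BATCH, UNITS], np.float32),) * 2
        for part in parts:
            # final_state is fetched to the CPU as NumPy arrays ...
            out, state = sess.run([outputs, final_state],
                                  feed_dict={inputs: part,
                                             init_c: state[0],
                                             init_h: state[1]})
            # ... and fed back in (copied to the GPU again) on the next pass.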

The answer is sess.partial_run_setup followed by sess.partial_run.

As of r1.3, this is still "experimental and subject to change."

sess.partial_run_setup sets up the graph feeds and fetches for the partial run.

sess.partial_run continues the execution with more feeds and fetches.

More info at https://www.tensorflow.org/api_docs/python/tf/session
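A minimal sketch of that pattern, adapted from the toy example in the linked documentation; the placeholders a, b, c and the two-stage computation are just for illustration, and in the RNN case the intermediate tensor would be the final state of the first chunk:

    import tensorflow as tf  # TF 1.x API

    a = tf.placeholder(tf.float32, shape=[])
    b = tf.placeholder(tf.float32, shape=[])
    c = tf.placeholder(tf.float32, shape=[])
    r1 = a + b   # first stage of the computation
    r2 = r1 * c  # second stage, reuses r1 without re-feeding it

    with tf.Session() as sess:
        # Declare up front every feed and fetch the partial run will use.
        h = sess.partial_run_setup(fetches=[r1, r2], feeds=[a, b, c])

        # First step: compute r1; its value stays inside the session.
        res1 = sess.partial_run(h, r1, feed_dict={a: 1.0, b: 2.0})

        # Second step: compute r2; r1 is reused from the previous step,
        # so only the remaining placeholder c needs to be fed.
        res2 = sess.partial_run(h, r2, feed_dict={c: 3.0})
        print(res1, res2)  # 3.0 9.0

Note that each tensor declared in partial_run_setup can be fed and fetched at most once per handle.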

