2018-04-13 23:06:58.664369: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:279] **************************************************************************************************xx
2018-04-13 23:06:58.664482: W T:\src\github\tensorflow\tensorflow\core\framework\op_kernel.cc:1273] OP_REQUIRES failed at batch_matmul_op_impl.h:489 : Resource exhausted: OOM when allocating tensor with shape[4,65536,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
2018-04-13 23:06:59.164964: W T:\src\github\tensorflow\tensorflow\core\kernels\queue_base.cc:277] _0_input_producer: Skipping cancelled enqueue attempt with queue not closed
2018-04-13 23:06:59.175464: W T:\src\github\tensorflow\tensorflow\core\kernels\queue_base.cc:277] _1_batch/padding_fifo_queue: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
    return fn(*args)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1312, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1420, in _call_tf_sessionrun
    status, run_metadata)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[4,65536,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: gradients/MatMul_grad/MatMul = BatchMatMul[T=DT_FLOAT, adj_x=false, adj_y=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape_5, gradients/truediv_21_grad/RealDiv)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[Node: Adam/update/_226 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2091_Adam/update", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "train.py", line 146, in <module>
    main(FLAGS)
  File "train.py", line 118, in main
    _, loss_t, step = sess.run([train_op, loss, global_step])
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
    run_metadata_ptr)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1140, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
    run_metadata)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[4,65536,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: gradients/MatMul_grad/MatMul = BatchMatMul[T=DT_FLOAT, adj_x=false, adj_y=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape_5, gradients/truediv_21_grad/RealDiv)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[Node: Adam/update/_226 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2091_Adam/update", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'gradients/MatMul_grad/MatMul', defined at:
  File "train.py", line 146, in <module>
    main(FLAGS)
  File "train.py", line 92, in main
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss, global_step=global_step, var_list=variable_to_train)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\training\optimizer.py", line 399, in minimize
    grad_loss=grad_loss)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\training\optimizer.py", line 492, in compute_gradients
    colocate_gradients_with_ops=colocate_gradients_with_ops)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 488, in gradients
    gate_gradients, aggregation_method, stop_gradients)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 625, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 379, in _MaybeCompile
    return grad_fn()  # Exit early
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 625, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\math_grad.py", line 1166, in _BatchMatMul
    grad_x = math_ops.matmul(y, grad, adjoint_a=False, adjoint_b=True)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2071, in matmul
    a, b, adj_x=adjoint_a, adj_y=adjoint_b, name=name)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 1295, in batch_mat_mul
    "BatchMatMul", x=x, y=y, adj_x=adj_x, adj_y=adj_y, name=name)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3290, in create_op
    op_def=op_def)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1654, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
...which was originally created as op 'MatMul', defined at:
  File "train.py", line 146, in <module>
    main(FLAGS)
  File "train.py", line 59, in main
    style_loss, style_loss_summary = losses.style_loss(endpoints_dict, style_features_t, FLAGS.style_layers)
  File "E:\CodeFile\Tensorflow\fast-neural-style-tensorflow\losses.py", line 86, in style_loss
    layer_style_loss = tf.nn.l2_loss(gram(generated_images) - style_gram) * 2 / tf.to_float(size)
  File "E:\CodeFile\Tensorflow\fast-neural-style-tensorflow\losses.py", line 19, in gram
    grams = tf.matmul(filters, filters, transpose_a=True) / tf.to_float(width * height * num_filters)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2071, in matmul
    a, b, adj_x=adjoint_a, adj_y=adjoint_b, name=name)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 1295, in batch_mat_mul
    "BatchMatMul", x=x, y=y, adj_x=adj_x, adj_y=adj_y, name=name)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3290, in create_op
    op_def=op_def)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1654, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[4,65536,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: gradients/MatMul_grad/MatMul = BatchMatMul[T=DT_FLOAT, adj_x=false, adj_y=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape_5, gradients/truediv_21_grad/RealDiv)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[Node: Adam/update/_226 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2091_Adam/update", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
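Following the hint printed above, the failing sess.run call in train.py (line 118 in the traceback) can be given a tf.RunOptions proto with report_tensor_allocations_upon_oom enabled so that the next OOM also dumps the list of tensors currently allocated on GPU:0. This is a minimal sketch assuming the TF 1.x session API shown in the traceback; train_op, loss, global_step and sess are the names from the log, not new code.

import tensorflow as tf

# Ask the runtime to report allocated tensors when an OOM occurs,
# as suggested by the hint in the error message.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

# Same fetch list as the call that failed in train.py, now with options.
_, loss_t, step = sess.run([train_op, loss, global_step],
                           options=run_options)

The extra report only helps diagnose which tensors fill the GPU; the [4,65536,64] gradient tensor from the style-loss gram computation will still need less memory (e.g. a smaller batch or training image size) to actually fit.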