
During handling of the above exception, another exception occurred: a bug encountered in deep machine learning

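This crash appeared while training a style-transfer model with fast-neural-style-tensorflow: partway through training, train.py dies with a ResourceExhaustedError because TensorFlow's BFC GPU allocator cannot fit a float tensor of shape [4, 65536, 64] needed while back-propagating through a BatchMatMul. The full log follows, with notes after each block.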

2018-04-13 23:06:58.664369: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:279] **************************************************************************************************xx
2018-04-13 23:06:58.664482: W T:\src\github\tensorflow\tensorflow\core\framework\op_kernel.cc:1273] OP_REQUIRES failed at batch_matmul_op_impl.h:489 : Resource exhausted: OOM when allocating tensor with shape[4,65536,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
2018-04-13 23:06:59.164964: W T:\src\github\tensorflow\tensorflow\core\kernels\queue_base.cc:277] _0_input_producer: Skipping cancelled enqueue attempt with queue not closed
2018-04-13 23:06:59.175464: W T:\src\github\tensorflow\tensorflow\core\kernels\queue_base.cc:277] _1_batch/padding_fifo_queue: Skipping cancelled enqueue attempt with queue not closed
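The row of asterisks is the tail of the BFC allocator's memory-map dump, and the OP_REQUIRES line is the real failure. The two queue_base warnings are only fallout: once the training step dies, the input-pipeline queues are cancelled mid-enqueue. The Python-side traceback comes next: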
Traceback (most recent call last):
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
    return fn(*args)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1312, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1420, in _call_tf_sessionrun
    status, run_metadata)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[4,65536,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: gradients/MatMul_grad/MatMul = BatchMatMul[T=DT_FLOAT, adj_x=false, adj_y=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape_5, gradients/truediv_21_grad/RealDiv)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[Node: Adam/update/_226 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2091_Adam/update", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
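That hint is worth taking before anything else. In graph-mode TensorFlow 1.x it is passed through RunOptions; a minimal sketch, assuming the sess.run training step from train.py shown further down:

import tensorflow as tf

# Ask TensorFlow to report the live tensor allocations whenever an OOM
# occurs, as the hint in the log suggests.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

# The training step from train.py, with the options attached
# (sess, train_op, loss, global_step as defined in train.py):
_, loss_t, step = sess.run([train_op, loss, global_step], options=run_options)

No such option was set in this run, so TensorFlow simply re-raises the internal error at the user-facing call: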

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 146, in <module>
    main(FLAGS)
  File "train.py", line 118, in main
    _, loss_t, step = sess.run([train_op, loss, global_step])
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
    run_metadata_ptr)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1140, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
    run_metadata)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[4,65536,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: gradients/MatMul_grad/MatMul = BatchMatMul[T=DT_FLOAT, adj_x=false, adj_y=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape_5, gradients/truediv_21_grad/RealDiv)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[Node: Adam/update/_226 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2091_Adam/update", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
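The user-level frames show the crash surfacing at train.py line 118, the sess.run training step. Before touching the model it is worth ruling out the case where TensorFlow's default strategy of reserving nearly all GPU memory at startup is the problem, e.g. because another process shares the card. A common first check (an assumption, not a fix the log confirms) is to allocate on demand:

import tensorflow as tf

# Grow GPU memory usage on demand instead of reserving it all up front.
# This only helps when memory is reserved but unused; a genuine OOM like
# the one above usually needs a smaller batch or image size instead.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    ...  # build the graph and run the training loop as in train.py

The more useful part of the log is the graph-construction trace showing where the failing op came from: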

Caused by op 'gradients/MatMul_grad/MatMul', defined at:
  File "train.py", line 146, in <module>
    main(FLAGS)
  File "train.py", line 92, in main
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss, global_step=global_step, var_list=variable_to_train)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\training\optimizer.py", line 399, in minimize
    grad_loss=grad_loss)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\training\optimizer.py", line 492, in compute_gradients
    colocate_gradients_with_ops=colocate_gradients_with_ops)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 488, in gradients
    gate_gradients, aggregation_method, stop_gradients)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 625, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 379, in _MaybeCompile
    return grad_fn()  # Exit early
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 625, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\math_grad.py", line 1166, in _BatchMatMul
    grad_x = math_ops.matmul(y, grad, adjoint_a=False, adjoint_b=True)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2071, in matmul
    a, b, adj_x=adjoint_a, adj_y=adjoint_b, name=name)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 1295, in batch_mat_mul
    "BatchMatMul", x=x, y=y, adj_x=adj_x, adj_y=adj_y, name=name)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3290, in create_op
    op_def=op_def)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1654, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

...which was originally created as op 'MatMul', defined at:
  File "train.py", line 146, in <module>
    main(FLAGS)
  File "train.py", line 59, in main
    style_loss, style_loss_summary = losses.style_loss(endpoints_dict, style_features_t, FLAGS.style_layers)
  File "E:\CodeFile\Tensorflow\fast-neural-style-tensorflow\losses.py", line 86, in style_loss
    layer_style_loss = tf.nn.l2_loss(gram(generated_images) - style_gram) * 2 / tf.to_float(size)
  File "E:\CodeFile\Tensorflow\fast-neural-style-tensorflow\losses.py", line 19, in gram
    grams = tf.matmul(filters, filters, transpose_a=True) / tf.to_float(width * height * num_filters)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2071, in matmul
    a, b, adj_x=adjoint_a, adj_y=adjoint_b, name=name)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 1295, in batch_mat_mul
    "BatchMatMul", x=x, y=y, adj_x=adj_x, adj_y=adj_y, name=name)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3290, in create_op
    op_def=op_def)
  File "D:\PFiles\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1654, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
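This last trace is the real diagnosis. The tensor that will not fit, shape [4, 65536, 64], is exactly the reshaped feature map inside the gram function of losses.py: batch 4, 65536 = 256 x 256 spatial positions, 64 filters. Only line 19 appears in the log, so the surrounding reshape is an assumption, but the shape bookkeeping looks like this:

import tensorflow as tf

def gram(layer):
    # layer: [batch, height, width, channels] -- here [4, 256, 256, 64].
    # Variable naming is assumed from the quoted line 19.
    shape = tf.shape(layer)
    num_images = shape[0]
    width = shape[1]
    height = shape[2]
    num_filters = shape[3]
    # filters: [4, 65536, 64], the exact shape in the OOM message. The
    # gradient of the BatchMatMul below is a tensor of the same shape:
    # 4 * 65536 * 64 floats * 4 bytes = 64 MiB in one contiguous block.
    filters = tf.reshape(layer, tf.stack([num_images, -1, num_filters]))
    # Gram matrix: [4, 64, 64], tiny -- the memory cost is in the gradient.
    grams = tf.matmul(filters, filters, transpose_a=True) / tf.to_float(width * height * num_filters)
    return grams

64 MiB per tensor sounds modest, but the style loss runs this for every layer in FLAGS.style_layers, on top of all the VGG activations kept for backprop, so by the time this request arrives the allocator is already nearly full.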

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[4,65536,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: gradients/MatMul_grad/MatMul = BatchMatMul[T=DT_FLOAT, adj_x=false, adj_y=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape_5, gradients/truediv_21_grad/RealDiv)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[Node: Adam/update/_226 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2091_Adam/update", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
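Since the failing tensor is [batch, height x width, channels], the dependable fixes are the obvious ones: train with a smaller batch or smaller images (where those knobs live in this repo's configuration is an assumption; the log does not show it). The arithmetic makes the trade-off concrete:

# Memory for one gradient tensor of the failing shape (float32 = 4 bytes):
batch, positions, channels = 4, 65536, 64    # shape[4, 65536, 64] from the log
print(batch * positions * channels * 4 / 2**20)       # 64.0 MiB

# Halving the batch halves it; halving the image edge quarters `positions`:
print(2 * (positions // 4) * channels * 4 / 2**20)    # 8.0 MiB at batch=2, 128x128

Either change should let the gradient of the gram-matrix BatchMatMul fit.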

