
[Unresolved] Kafka broker fails to start with an error

Java-related · 3079 views · 0 comments

The Kafka broker logs the following error on startup:

[2018-07-07 08:10:26,914] ERROR Error while creating log for __consumer_offsets-11 in dir \tmp\kafka-logs (kafka.server.LogDirFailureChannel)
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:67)
at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:53)
at kafka.log.LogSegment$.open(LogSegment.scala:560)
at kafka.log.Log.loadSegments(Log.scala:412)
at kafka.log.Log.<init>(Log.scala:216)
at kafka.log.Log$.apply(Log.scala:1747)
at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:673)
at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:641)
at scala.Option.getOrElse(Option.scala:121)
at kafka.log.LogManager.getOrCreateLog(LogManager.scala:641)
at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:177)
at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:173)
at kafka.utils.Pool.getAndMaybePut(Pool.scala:65)
at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:172)
at kafka.cluster.Partition$$anonfun$6$$anonfun$8.apply(Partition.scala:259)
at kafka.cluster.Partition$$anonfun$6$$anonfun$8.apply(Partition.scala:259)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.cluster.Partition$$anonfun$6.apply(Partition.scala:259)
at kafka.cluster.Partition$$anonfun$6.apply(Partition.scala:253)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
at kafka.cluster.Partition.makeLeader(Partition.scala:253)
at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1165)
at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1163)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:130)
at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1163)
at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1083)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:183)
at kafka.server.KafkaApis.handle(KafkaApis.scala:108)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937)
… 42 more
[2018-07-07 08:10:26,919] INFO [ReplicaManager broker=0] Stopping serving replicas in dir \tmp\kafka-logs (kafka.server.ReplicaManager)
[2018-07-07 08:10:26,922] ERROR [ReplicaManager broker=0] Error while making broker the leader for partition Topic: __consumer_offsets; Partition: 11; Leader: None; AllReplicas: ; InSyncReplicas: in dir None (kafka.server.ReplicaManager)
org.apache.kafka.common.errors.KafkaStorageException: Error while creating log for __consumer_offsets-11 in dir \tmp\kafka-logs
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:67)
at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:53)
at kafka.log.LogSegment$.open(LogSegment.scala:560)
at kafka.log.Log.loadSegments(Log.scala:412)
at kafka.log.Log.<init>(Log.scala:216)
at kafka.log.Log$.apply(Log.scala:1747)
at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:673)
at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:641)
at scala.Option.getOrElse(Option.scala:121)
at kafka.log.LogManager.getOrCreateLog(LogManager.scala:641)
at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:177)
at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:173)
at kafka.utils.Pool.getAndMaybePut(Pool.scala:65)
at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:172)
at kafka.cluster.Partition$$anonfun$6$$anonfun$8.apply(Partition.scala:259)
at kafka.cluster.Partition$$anonfun$6$$anonfun$8.apply(Partition.scala:259)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.cluster.Partition$$anonfun$6.apply(Partition.scala:259)
at kafka.cluster.Partition$$anonfun$6.apply(Partition.scala:253)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
at kafka.cluster.Partition.makeLeader(Partition.scala:253)
at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1165)
at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1163)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:130)
at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1163)
at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1083)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:183)
at kafka.server.KafkaApis.handle(KafkaApis.scala:108)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937)
… 42 more
[2018-07-07 08:10:27,099] ERROR Error while creating log for __consumer_offsets-30 in dir \tmp\kafka-logs (kafka.server.LogDirFailureChannel)
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:67)
at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:53)
at kafka.log.LogSegment$.open(LogSegment.scala:560)
at kafka.log.Log.loadSegments(Log.scala:412)
at kafka.log.Log.<init>(Log.scala:216)
at kafka.log.Log$.apply(Log.scala:1747)
at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:673)
at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:641)
at scala.Option.getOrElse(Option.scala:121)
at kafka.log.LogManager.getOrCreateLog(LogManager.scala:641)
at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:177)
at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:
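The post is unanswered, but the root cause visible in the log is the `java.lang.OutOfMemoryError: Map failed` thrown from `FileChannelImpl.map`: Kafka memory-maps an index file for every log segment (`AbstractIndex` in the trace), and the mapping fails when the JVM runs out of virtual address space rather than heap. This is common on a 32-bit JVM or on Windows (note the `\tmp\kafka-logs` path) when a large heap leaves little address space for mappings. A commonly suggested mitigation, not confirmed by this post and with illustrative values only, is to shrink the per-segment index size and cap the broker heap:

```properties
# server.properties — hypothetical mitigation, values are illustrative.
# Each log segment memory-maps an index file of up to this many bytes
# (broker default is 10485760 = 10 MB); a smaller value reduces the
# virtual address space consumed per partition.
log.index.size.max.bytes=2097152
```

Relatedly, starting the broker with a smaller heap (e.g. `KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"` before `kafka-server-start`) leaves more address space for mappings; moving to a 64-bit JVM removes the address-space limit altogether.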

