How to limit the Apache Flink TaskManager log size

Problem description

I was using duc to check disk usage and found that Apache Flink's logs take up more than 2GB. How can I keep the log size under 100MB? My Apache Flink is deployed in a Kubernetes (v1.15.2) cluster.

[root@uat-k8s-01 opt]# duc ls -Fg /var/lib/docker/overlay2/d1f441865d83867a21dd1dc0b11da2c75ffe1efe39209770cbd5e12e386df065/diff/opt/flink/log/
  1.9G flink--taskexecutor-0-flink-taskmanager-7f9df8fbf6-79746.log  [+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++]
  4.0K flink--taskexecutor-0-flink-taskmanager-7f9df8fbf6-79746.out  [                                                                                                                           ]
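The same usage can also be confirmed from inside the running pod instead of going through the node's overlay2 directory. The pod name below is only inferred from the log file name above and may differ in your cluster:

# Check the log directory from inside the TaskManager pod (pod name inferred from the log file name above)
kubectl exec -it flink-taskmanager-7f9df8fbf6-79746 -- du -sh /opt/flink/log/
kubectl exec -it flink-taskmanager-7f9df8fbf6-79746 -- ls -lh /opt/flink/log/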

I have already searched online and adjusted my configuration as shown below, but it still does not work:

flink-conf.yaml:
jobmanager.rpc.address: flink-jobmanager
taskmanager.numberOfTaskSlots: 6
blob.server.port: 6124
jobmanager.rpc.port: 6123
taskmanager.rpc.port: 6122
jobmanager.heap.size: 1024m
taskmanager.memory.process.size: 1024m
log4j.properties:
log4j.rootLogger=INFO, file
log4j.logger.akka=INFO
log4j.logger.org.apache.kafka=INFO
log4j.logger.org.apache.hadoop=INFO
log4j.logger.org.apache.zookeeper=INFO
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.MaxFileSize=50MB
log4j.appender.file.MaxBackupIndex=1
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, file
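For context, in a typical standalone Flink-on-Kubernetes setup these two files are mounted into the JobManager and TaskManager pods from a ConfigMap (often named flink-config in the upstream examples), and log4j only reads log4j.properties when the JVM starts. Below is a minimal sketch of refreshing such a ConfigMap and recreating the TaskManager pods; the flink-config and flink-taskmanager names are assumptions and should be adjusted to match the actual manifests:

# Assumed ConfigMap/Deployment names; adjust to your own manifests
kubectl create configmap flink-config \
  --from-file=flink-conf.yaml --from-file=log4j.properties \
  --dry-run -o yaml | kubectl apply -f -
# Recreate the TaskManager pods so the new RollingFileAppender settings take effect
kubectl rollout restart deployment/flink-taskmanager

Updating the ConfigMap alone does not change the behaviour of already-running JVMs, which may be why the existing 1.9G log file keeps growing despite the 50MB rolling limit in the configuration.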

Tags: kubernetes

Solution

