Kafka Connect is not logging my connector logs

Problem Description

I wrote a custom Kafka source connector. To test it, I copied the uber JAR to the ${CONFLUENT_HOME}/share/java location, and put the configuration files (connect-standalone.properties and connect-file-source.properties) in the etc location, under a directory I created for the connector.

When I start the connector as follows:

$CONFLUENT_HOME/bin/connect-standalone $CONFLUENT_HOME/etc/kafka/connect-standalone.properties $CONFLUENT_HOME/etc/kafka/connect-file-source.properties

the connector does run: it starts up and produces records to Kafka. However, I can't see my connector's logs anywhere. I tried adding a log4j.properties for the connector, and I also tried updating connect-log4j.properties. I still don't see my logs.

For example, I have a log statement like this:

// `log` here is an SLF4J Logger field declared on the class, e.g.
// private static final Logger log = LoggerFactory.getLogger(getClass());

@Override
public void start(Map<String, String> props) {
    log.info("*** Start Method called ***");

    filename = props.get(FileStreamTailerSourceConnector.FILE_CONFIG);

Below is a snippet of my connect-file-source.properties file.

name=kafka-connect-file
connector.class=com.connect.file.FileStreamTailerSourceConnector
tasks.max=1

#Path to where the file is going to be published.
file=test_data.csv

# Topic to publish file data to.
# MANDATORY FIELD, no default
# Valid Values: non-empty string and no ISO control characters
topic=tailer_test2

# Window of data to pull from log api.
# Valid Values: [2,...,10000]
# The default is 100.
batch.size=2

# Poll interval in milliseconds. E.G. Roughly, how often the connector will connect to the file and read data.
# The default is 1000 as in once a second.
poll.interval=1000

# Schema file for the records of the file
source.schema=test_data_schema.avsc

# Encryption of fields in the record for security
field.encryption=true
encryption.fields=bin,ssn,acn
encryption.class=com.connect.encryption.DummyEncryption

Below is a snippet of my connect-standalone.properties file.

# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
bootstrap.servers=<LIST OF COMMA SEPARATED BOOTSTRAP SERVERS>

# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs
group.id=connect-cluster-file-test1

offset.storage.topic=connect-offsets-test1
offset.storage.replication.factor=1

config.storage.topic=connect-configs-test1
config.storage.replication.factor=1

status.storage.topic=connect-status-test1
status.storage.replication.factor=1
#status.storage.partitions=5

# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000

rest.port=8083


plugin.path=/share/confluent/package/share/java

I can't see the above log messages anywhere. Any help is appreciated!

Tags: java, apache-kafka, log4j, apache-kafka-connect

Solution
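In standalone mode, the connect-standalone script picks up its Log4j configuration from ${CONFLUENT_HOME}/etc/kafka/connect-log4j.properties (passed to the JVM via the KAFKA_LOG4J_OPTS environment variable set by the startup scripts). A common reason connector log lines never appear is that the root logger level, or an explicit logger entry, filters out the connector's package. Below is a minimal sketch of such a file; it assumes the connector classes live under the `com.connect` package shown in the question's `connector.class`:

```properties
# connect-log4j.properties — minimal sketch
log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

# Ensure the connector's own package is not filtered out
log4j.logger.com.connect=DEBUG
```

If the output still does not appear, check that KAFKA_LOG4J_OPTS actually points at the file being edited (for example `export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$CONFLUENT_HOME/etc/kafka/connect-log4j.properties"` before launching), and that the uber JAR does not bundle its own log4j.properties or a conflicting SLF4J binding, which can shadow the worker's configuration.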
