airflow - task fails because the log file cannot be read
Problem description
Composer is failing the task because it cannot read the log file; it complains about incorrect encoding.
This is the log shown in the UI:
*** Unable to read remote log from gs://bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** 'ascii' codec can't decode byte 0xc2 in position 6986: ordinal not in range(128)
*** Log file does not exist: /home/airflow/gcs/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Fetching from: http://airflow-worker-68dc66c9db-x945n:8793/log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-68dc66c9db-x945n', port=8793): Max retries exceeded with url: /log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1c9ff19d10>: Failed to establish a new connection: [Errno -2] Name or service not known',))
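The decode error itself is easy to reproduce in plain Python: 0xc2 is the lead byte of a two-byte UTF-8 sequence (for example a non-breaking space), which the ascii codec cannot handle. A minimal sketch, with illustrative sample bytes rather than the actual log contents:

```python
# 0xc2 starts a two-byte UTF-8 sequence; 0xc2 0xa0 is a non-breaking
# space, which commonly sneaks into log output.
data = b"INFO - done\xc2\xa0"

try:
    data.decode("ascii")
except UnicodeDecodeError as exc:
    # Mirrors the UI error: 'ascii' codec can't decode byte 0xc2 ...
    print(exc)

# Decoding as UTF-8 (what the log actually is) succeeds.
print(data.decode("utf-8"))
```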
I tried viewing the file in the Google Cloud console, and that also throws an error:
Failed to load
Tracking Number: 8075820889980640204
But I can view it via gsutil.
When I look at the file, some text seems to be overwriting other text.
I can't show the whole file, but it looks like this:
--------------------------------------------------------------------------------
Starting attempt 1 of 1
--------------------------------------------------------------------------------
@-@{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,313] {models.py:1569} INFO - Executing <Task(BigQueryOperator): merge_campaign_exceptions> on 2019-08-03T10:00:00+00:00@-@{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,314] {base_task_runner.py:124} INFO - Running: ['bash', '-c', u'airflow run __campaign_exceptions_0_0_1 merge_campaign_exceptions 2019-08-03T10:00:00+00:00 --job_id 22767 --pool _bq_pool --raw -sd DAGS_FOLDER//-campaign-exceptions.py --cfg_path /tmp/tmpyBIVgT']@-@{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:24,658] {base_task_runner.py:107} INFO - Job 22767: Subtask merge_campaign_exceptions [2019-08-04 10:01:24,658] {settings.py:176} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800@-@{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
These @-@{} fragments seem to sit "on top of" the typical log lines.
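If you only need the logs readable, the trailing @-@{...} metadata can be stripped line by line. A hedged sketch, assuming the @-@ marker seen in the output above always precedes a JSON object at the end of the line:

```python
import re

def strip_metadata(line: str) -> str:
    # Drop everything from the "@-@" marker (the per-line JSON
    # metadata) through the end of the line.
    return re.sub(r"@-@\{.*\}$", "", line)

line = ('[2019-08-04 10:01:23,313] {models.py:1569} INFO - Executing task'
        '@-@{"task-id": "merge_campaign_exceptions"}')
print(strip_metadata(line))
```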
Solution
I ran into the same problem. In my case, the issue was that I had deleted the google_gcloud_default connection, which is used to retrieve the logs.
Check the configuration and look up the connection name:
[core]
remote_log_conn_id = google_cloud_default
Then check that the credentials used for that connection name have access to the GCS bucket.
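The setting can also be checked programmatically. A minimal sketch using Python's configparser, assuming a typical airflow.cfg location (the path is an assumption; yours may differ):

```python
import configparser

# Path is an assumption; adjust to where your airflow.cfg lives.
CFG_PATH = "/home/airflow/airflow.cfg"

config = configparser.ConfigParser()
config.read(CFG_PATH)

# The connection Airflow uses to fetch remote logs from GCS.
conn_id = config.get("core", "remote_log_conn_id", fallback=None)
print("remote_log_conn_id =", conn_id)
```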