Looping over a Python log file doesn't actually loop over its contents and gives me different entries

Problem description

I'm currently trying to parse a log file, and I'm a bit confused about how to go about it.

I have a log file, say log_file.log, whose contents look like this:

03/25/2020 16:41:18 - INFO - X -   Loading X with AVX2 support.
03/25/2020 16:41:18 - INFO - __main__ -   Query address: http://163.XXX.XXX.XXX:9011
03/25/2020 16:41:18 - INFO - __main__ -   Doc address: http://163.XXX.XXX.XXX:9020
03/25/2020 16:41:18 - INFO - __main__ -   Index address: http://163.XXX.XXX.XXX:80
03/25/2020 16:41:18 - INFO - mips_new -   using doc ranker functions: <bound method NaturalKB.get_doc_scores of <__main__.NaturalKB object at 0x7fdc3a95dac8>>
03/25/2020 16:41:18 - INFO - mips_new -   Reading dump/
^Mloading idx2id:   0%|          | 0/1 [00:00<?, ?it/s]^Mloading idx2id: 100%|██████████| 1/1 [00:00<00:00, 34.70it/s]
03/25/2020 16:41:20 - INFO - __main__ -   Starting Index server at http://163.XXX.XXX.XXX:80
03/25/2020 16:41:22 - INFO - mips_new -   1st rerank (1000 => 100), (1, 100), 0.4613668918609619
03/25/2020 16:41:23 - INFO - mips_new -   2nd rerank (100 => 10), (1, 10), 0.29343676567077637
03/25/2020 16:41:23 - INFO - tornado.access -   200 GET /search?query=%3F (163.XXX.XXX.XXX) 1080.74ms
03/25/2020 16:41:23 - INFO - tornado.access -   200 GET /files/js/jquery.min.js (163.XXX.XXX.XXX) 9.37ms
03/25/2020 16:41:23 - INFO - tornado.access -   200 GET /files/js/jquery.easing.min.js (163.152.20.191) 1.97ms
03/25/2020 16:41:23 - INFO - tornado.access -   200 GET /files/js/bootstrap.min.js (163.XXX.XXX.XXX) 1.95ms

When I do:

with open(file='log_file.log', mode='r') as f:
    log_file = f.readlines()  # each element keeps its trailing '\n'

print(log_file[0])
print(log_file[1])

I get:

>>> 03/25/2020 16:41:18 - INFO - X -   Loading X with AVX2 support.
>>> 03/25/2020 16:41:18 - INFO - __main__ -   Query address: http://163.XXX.XXX.XXX:9011

However, when I try to loop over it like this:

for idx, line in enumerate(log_file):
    print(line)
    if idx == 5:
        break

I get:

03/25/2020 07:27:14 - INFO - X -   Loading X with AVX2 support.

03/25/2020 07:27:14 - INFO - __main__ -   Query address: http://163.XXX.XXX.XXX:9010

03/25/2020 07:27:24 - INFO - run_natkb -   load with different params =>

03/25/2020 07:27:24 - INFO - run_natkb -   Loaded weight does not have {'module.sparse_end_q.2.query.bias', 'module.sparse_start_q.2.key.bias', 'module.q_linear.bias', 'module.sparse_end_q.1.query.weight', 'module.sparse_end_q.1.key.bias', 'module.sparse_end_q.2.key.weight', 'module.sparse_start_q.1.query.bias', 'module.sparse_end_q.1.query.bias', 'module.sparse_end_q.1.key.weight', 'module.sparse_start_q.2.query.weight', 'module.sparse_start_q.2.key.weight', 'module.sparse_end_q.2.query.weight', 'module.sparse_start_q.1.query.weight', 'module.sparse_start_q.2.query.bias', 'module.q_linear.weight', 'module.sparse_start_q.1.key.weight', 'module.sparse_end_q.2.key.bias', 'module.sparse_start_q.1.key.bias'}

03/25/2020 07:27:24 - INFO - run_natkb -   Model code does not have: {'module.linear.bias', 'module.tfidf_weight', 'module.linear.weight', 'module.true_help'}

03/25/2020 07:27:24 - INFO - __main__ -   Model loaded from /home/user/models/model.pt
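
(Side note: I understand where the extra blank lines come from: readlines keeps the trailing '\n' on each line and print adds another one. Printing with end='' gets rid of them:

for idx, line in enumerate(log_file):
    print(line, end='')  # each element already ends with '\n'
    if idx == 5:
        break

What I don't understand is why the entries and timestamps themselves are different.)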

I assume this is the output of some kind of API call being printed while I loop. Is there any way I can stop that and get the contents of the log file itself?
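
To show the kind of result I'm after, here is a minimal sketch that keeps only lines beginning with a timestamp (assuming, based on the excerpt above, that every real entry starts with MM/DD/YYYY HH:MM:SS; the regex is mine, not part of my actual code):

import re

# Real entries start with "MM/DD/YYYY HH:MM:SS - " (assumption from the excerpt
# above); progress bars and blank lines don't, so they are dropped.
entry_re = re.compile(r'^\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2} - ')

with open('log_file.log', mode='r') as f:
    entries = [line.rstrip('\n') for line in f if entry_re.match(line)]

print(entries[0])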

EDIT

The code I'm running (in an IPython shell) is the following (I've included more lines of output to make my problem a bit clearer):

In [1]: with open(file='./log_file.log', mode='r') as f:
   ...:     lines = f.readlines()
   ...:

In [2]: for idx, line in enumerate(lines):
   ...:     print(line)
   ...:     if idx == 20:
   ...:         break
03/25/2020 07:27:14 - INFO - X -   Loading X with AVX2 support.

03/25/2020 07:27:14 - INFO - __main__ -   Query address: http://163.XXX.XXX.XXX:9010

03/25/2020 07:27:24 - INFO - run_natkb -   load with different params =>

03/25/2020 07:27:24 - INFO - run_natkb -   Loaded weight does not have {'module.sparse_end_q.2.query.bias', 'module.sparse_start_q.2.key.bias', 'module.q_linear.bias', 'module.sparse_end_q.1.query.weight', 'module.sparse_end_q.1.key.bias', 'module.sparse_end_q.2.key.weight', 'module.sparse_start_q.1.query.bias', 'module.sparse_end_q.1.query.bias', 'module.sparse_end_q.1.key.weight', 'module.sparse_start_q.2.query.weight', 'module.sparse_start_q.2.key.weight', 'module.sparse_end_q.2.query.weight', 'module.sparse_start_q.1.query.weight', 'module.sparse_start_q.2.query.bias', 'module.q_linear.weight', 'module.sparse_start_q.1.key.weight', 'module.sparse_end_q.2.key.bias', 'module.sparse_start_q.1.key.bias'}

03/25/2020 07:27:24 - INFO - run_natkb -   Model code does not have: {'module.linear.bias', 'module.tfidf_weight', 'module.linear.weight', 'module.true_help'}

03/25/2020 07:27:24 - INFO - __main__ -   Model loaded from /home/user/models/model.pt

03/25/2020 07:27:24 - INFO - __main__ -   Number of model parameters: 343,540,739

03/25/2020 07:27:24 - INFO - __main__ -   Starting QueryEncoder server at http://163.XXX.XXX.XXX:9010



Converting questions:   0%|          | 0/1 [00:00<?, ?it/s]03/25/2020 07:30:09 - INFO - pre -   tokens: are there geographic variations in the rate of co ##vid - 19 spread ?



Converting questions: 100%|██████████| 1/1 [00:00<00:00, 867.31it/s]

03/25/2020 07:30:09 - INFO - tornado.access -   200 POST /batch_api (163.XXX.XXX.XXX) 76.77ms



Converting questions:   0%|          | 0/1 [00:00<?, ?it/s]03/25/2020 07:30:33 - INFO - pre -   tokens: are there geographic variations in the mortality rate of co ##vid - 19 ?


Converting questions: 100%|██████████| 1/1 [00:00<00:00, 688.72it/s]

03/25/2020 07:30:33 - INFO - tornado.access -   200 POST /batch_api (163.XXX.XXX.XXX) 60.82ms


Converting questions:   0%|          | 0/1 [00:00<?, ?it/s]03/25/2020 07:32:39 - INFO - pre -   tokens: are there geographic variations in the rate of co ##vid - 19 spread ?

The output lines that say Converting questions are not part of the log file.
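
One more thing I noticed: the ^M characters in the excerpt are literal carriage returns, apparently written by a tqdm progress bar redrawing itself in place. If only the final state of such a line matters, keeping the text after the last '\r' recovers it (a sketch under that assumption):

def final_segment(line):
    # tqdm redraws its bar with '\r' (shown as ^M above); the text after the
    # last '\r' is what the terminal ultimately displayed.
    return line.rstrip('\n').split('\r')[-1]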

Tags: python, logging
