Elasticsearch Analyzer: first 4 and last 4 characters

Problem

Using Elasticsearch, I want to specify a search analyzer that tokenizes the first 4 characters and the last 4 characters of each term.

For example: supercalifragilisticexpialidocious => ["supe", "ious"]

I tried an ngram tokenizer, as follows:

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 4,
          "max_gram": 4
        }
      }
    }
  }
}

I am testing the analyzer like this:

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "supercalifragilisticexpialidocious."
}

It returns "supe", then a large number of tokens I don't want, and finally "ous.". My question is: how can I get only the first and the last tokens out of the ngram tokenizer specified above?

{
  "tokens": [
    {
      "token": "supe",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "uper",
      "start_offset": 1,
      "end_offset": 5,
      "type": "word",
      "position": 1
    },
...
    {
      "token": "ciou",
      "start_offset": 29,
      "end_offset": 33,
      "type": "word",
      "position": 29
    },
    {
      "token": "ious",
      "start_offset": 30,
      "end_offset": 34,
      "type": "word",
      "position": 30
    },
    {
      "token": "ous.",
      "start_offset": 31,
      "end_offset": 35,
      "type": "word",
      "position": 31
    }
  ]
}

Tags: elasticsearch, query-analyzer

Solution


One way to achieve this is to leverage the pattern_capture token filter to capture the first 4 and last 4 characters.

First, define your index like this:

PUT my_index
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "my_analyzer": {
            "type": "custom",
            "tokenizer": "keyword",
            "filter": [
              "lowercase",
              "first_last_four"
            ]
          }
        },
        "filter": {
          "first_last_four": {
            "type": "pattern_capture",
            "preserve_original": false,
            "patterns": [
              """(\w{4}).*(\w{4})"""
            ]
          }
        }
      }
    }
  }
}
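The behavior of that pattern_capture regex can be sketched in plain Python. This is only an illustration (Elasticsearch uses Java regular expressions, but this particular pattern behaves the same in both): the greedy `.*` consumes the middle of the word, leaving the two capture groups to grab the first 4 and last 4 word characters, and each group becomes one emitted token.

```python
import re

# Same pattern as in the "first_last_four" filter above.
pattern = re.compile(r"(\w{4}).*(\w{4})")

# The keyword tokenizer passes the whole (lowercased) input as one token;
# pattern_capture then emits one token per capture group.
match = pattern.match("supercalifragilisticexpialidocious")
print(match.groups())  # ('supe', 'ious')
```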

Then, you can test the new custom analyzer:

POST my_index/_analyze
{
  "text": "supercalifragilisticexpialidocious",
  "analyzer": "my_analyzer"
}

and see that the tokens you expect are there:

{
  "tokens" : [
    {
      "token" : "supe",
      "start_offset" : 0,
      "end_offset" : 34,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "ious",
      "start_offset" : 0,
      "end_offset" : 34,
      "type" : "word",
      "position" : 0
    }
  ]
}
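One caveat worth noting (my observation, not part of the original answer): the pattern requires at least 8 word characters, so shorter terms produce no match, and because preserve_original is false they emit no token at all. A quick check with the same regex:

```python
import re

pattern = re.compile(r"(\w{4}).*(\w{4})")

# Words shorter than 8 characters cannot satisfy both capture groups.
print(pattern.match("cat"))                 # None -> no token emitted
print(pattern.match("seven"))               # None -> no token emitted
print(pattern.match("keyboard").groups())   # exactly 8 chars: ('keyb', 'oard')
```

If short terms should still be searchable, setting `"preserve_original": true` on the filter keeps the original token alongside the captured ones.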
