Common Elasticsearch Errors in Practice, Explained with Examples
1. read_only_allow_delete: "true"
When adding a document to an index, you may (in rare cases) hit the following error:
{
"error": {
"root_cause": [
{
"type": "cluster_block_exception",
"reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
}
],
"type": "cluster_block_exception",
"reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
},
"status": 403
}
The error says the index is currently in read-only mode. Check the index settings to confirm:
GET z1/_settings
# response:
{
"z1" : {
"settings" : {
"index" : {
"number_of_shards" : "5",
"blocks" : {
"read_only_allow_delete" : "true"
},
"provided_name" : "z1",
"creation_date" : "1556204559161",
"number_of_replicas" : "1",
"uuid" : "3PEevS9xSm-r3tw54p0o9w",
"version" : {
"created" : "6050499"
}
}
}
}
}
Note "read_only_allow_delete" : "true", which means writes to the index are blocked. We can also reproduce the error deliberately:
PUT z1
{
"mappings": {
"doc": {
"properties": {
"title": {
"type":"text"
}
}
}
},
"settings": {
"index.blocks.read_only_allow_delete": true
}
}
PUT z1/doc/1
{
"title": "es真難學(xué)"
}
If we try to insert this document now, we get the error shown at the start. How do we fix it?
- Free up disk space. Elasticsearch applies this block automatically when disk usage crosses the flood-stage watermark (95% by default; the low watermark is 85%), so bring usage back below those thresholds.
- Reset the setting manually (see the official documentation for details).
Taking the second approach, we reset the setting:
PUT z1/_settings
{
"index.blocks.read_only_allow_delete": null
}
Querying the index settings again shows it back to normal, and both inserts and searches work.
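If the block was triggered by a full disk, it helps to know the watermark settings involved. Below is a sketch of inspecting and temporarily raising them through the cluster settings API, assuming an ES 6.x cluster; the percentage values are illustrative, not recommendations:
GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk*
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}
Raising watermarks only buys time; freeing disk space is the real fix, and on ES 6.x the read_only_allow_delete block must still be cleared manually even after space is freed.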
2. illegal_argument_exception
Sometimes an aggregation fails with an error like this:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
}
],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": "z2",
"node": "NRwiP9PLRFCTJA7w3H9eqA",
"reason": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
}
}
],
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
}
}
},
"status": 400
}
What happened? When aggregating, the target field cannot be of type text (fielddata is disabled on text fields by default). Consider this example:
PUT z2/doc/1
{
"age":"18"
}
PUT z2/doc/2
{
"age":20
}
GET z2/doc/_search
{
"query": {
"match_all": {}
},
"aggs": {
"my_sum": {
"sum": {
"field": "age"
}
}
}
}
When we add a document to Elasticsearch (if the index already exists, the document is created or updated; if not, the index is created first), the mapping type of the age field is determined at that point.
In the example above, when we index the first document (index z2 does not exist yet), Elasticsearch creates the index automatically and infers a mapping for age from the value it sees. Since "18" is a quoted string, age is mapped as text, and that mapping is now fixed: the second document's age is also stored under a text mapping, even though 20 looks like a long. We can confirm by checking the index mappings:
GET z2/_mapping
# mappings:
{
"z2" : {
"mappings" : {
"doc" : {
"properties" : {
"age" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
}
}
}
}
The result confirms that age is mapped as text, which does not support this kind of aggregation, hence the error. The fixes:
- With dynamic mapping, the field types are fixed by the first document you index. It is not the case that a quoted value makes the field text this time and an unquoted value makes it long next time; the first-seen type wins. So make sure the first document uses the types you actually intend.
- If that feels too fragile, create the mappings manually up front, declaring each field's type, so later documents cannot go wrong.
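Following the second fix, here is a sketch of declaring the mapping up front, using a hypothetical new index z3 (the index name is just for illustration) so that age is numeric from the start:
PUT z3
{
  "mappings": {
    "doc": {
      "properties": {
        "age": { "type": "integer" }
      }
    }
  }
}
PUT z3/doc/1
{
  "age": 18
}
GET z3/doc/_search
{
  "aggs": {
    "my_sum": {
      "sum": { "field": "age" }
    }
  }
}
With an explicit integer mapping, the sum aggregation works, and Elasticsearch will coerce a later quoted value like "20" into the declared numeric type rather than silently changing the field to text.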
3. Result window is too large
Often a query matches many documents, and how many Elasticsearch returns in one response is controlled by the size parameter:
GET e2/doc/_search
{
"size": 100000,
"query": {
"match_all": {}
}
}
By default at most 10,000 results can be returned in one request, so asking for more (say 100,000) produces:
Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.
This means the requested result window is too large. As the message suggests, you can use the scroll API for large data sets, or raise the limit via the index.max_result_window setting:
# set it in Kibana
PUT e2/_settings
{
"index": {
"max_result_window": "100000"
}
}
# or set it from Python
from elasticsearch import Elasticsearch
es = Elasticsearch()
es.indices.put_settings(index='e2', body={"index": {"max_result_window": 100000}})
Here we raised index e2's maximum result window to 100,000, so any query matching up to 100,000 documents now comes back in a single response.
Note that this setting persists on the index until explicitly changed, and a large result window makes every deep query expensive, so raise it with care.
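For genuinely large extractions, the scroll API the error message points to is usually the better tool. A minimal sketch (the 2m keep-alive and the size of 1000 are illustrative): the first request opens a scroll context, and each subsequent request passes back the _scroll_id from the previous response:
GET e2/doc/_search?scroll=2m
{
  "size": 1000,
  "query": {
    "match_all": {}
  }
}
# then page through with the returned _scroll_id:
GET _search/scroll
{
  "scroll": "2m",
  "scroll_id": "<_scroll_id from the previous response>"
}
Unlike raising max_result_window, scrolling keeps the per-request memory cost bounded no matter how many documents you pull in total.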
That concludes this walkthrough of common Elasticsearch errors in practice.