---
title: The strange case of Elasticsearch allocation failure
url: the-strange-case-of-elasticsearch-allocation-failure.html
date: 2020-03-29T12:00:00+02:00
draft: false
---

I've been using Elasticsearch in production for 5 years now and never had a
single problem with it. Hell, I never even knew there could be a problem. It
just worked. All this time. The first node that I deployed is still being used
in production, never updated, upgraded, or touched in any way.

All this bliss came to an abrupt end this Friday when I got a notification that
the Elasticsearch cluster went warm. Well, warm is not that bad, right? Wrong!
Quickly after that I got another email which sent chills down my spine. The
cluster was now red. RED! Now the shit had really hit the fan!
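
If you want to see the state for yourself instead of waiting for the next
email, the cluster health API is the standard first stop; it reports the
green/yellow/red status along with the number of unassigned shards:

```yaml
GET /_cluster/health?pretty
```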

I tried googling what the problem could be, and after querying the allocation
API I noticed that some shards were unassigned and 5 allocation attempts had
already been made (which, just my luck, is the maximum). That meant I was
basically fucked. The posts I found also implied that one should wait for the
cluster to re-balance itself. So I waited. One hour, two hours, several hours.
Nothing, still RED.
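
For reference, listing which shards are unassigned and why is a one-liner with
the cat shards API; the column selection below is just mine, nothing special:

```yaml
GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason
```

The limit of 5 attempts comes from the `index.allocation.max_retries` setting,
which defaults to 5.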

The strangest thing about it all was that queries were still being fulfilled.
Data was coming out. From the outside it looked like nothing was wrong, but
anyone who looked at the cluster would know immediately that something was
very, very wrong and that we were living on borrowed time.

> **Please, DO NOT do what I did.** Seriously! Please ask someone on the
official forums or, if you know an expert, consult them. There could be a
million reasons, and these solutions fit my problem. Maybe in your case they
would be disastrous. I had all the data backed up, so even if I failed
spectacularly I would be able to restore it. It would have been a huge pain and
I would have lost a couple of days, but I had a plan B.
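
For the record, that plan B rested on snapshots. A minimal manual snapshot
looks something like this, assuming a snapshot repository is already
registered (the repository and snapshot names here are made up):

```yaml
PUT /_snapshot/my-backup-repo/before-surgery?wait_for_completion=true
```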

Executing the allocation query told me what the problem was, but offered no
clear solution yet.

```yaml
GET /_cat/allocation?format=json
```
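
The allocation explain API gives the same information in much more detail; it
returns an `unassigned_info` section with the failure reason and the number of
allocation attempts already made:

```yaml
GET /_cluster/allocation/explain
```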

I got a message that said `ALLOCATION_FAILED`, with the additional info `failed
to create shard, failure ioexception[failed to obtain in-memory shard lock]`.
Well, splendid! I must also say that our cluster is more than capable of
handling the traffic, and JVM memory pressure was never an issue. So what
really happened then?

I also tried re-routing the failed shards, with no success due to AWS
restrictions on their managed Elasticsearch service (they lock some of the
functions down).

```yaml
POST /_cluster/reroute?retry_failed=true
```

I got back a message that significantly reduced my options.

```json
{
  "Message": "Your request: '/_cluster/reroute' is not allowed."
}
```

After that I went on the hunt again. I won't bother you with all the details,
because hours and days went by until I was finally able to re-index the
problematic index and hope for the best. Until that moment even re-indexing was
giving me errors.

```yaml
POST _reindex
{
  "source": {
    "index": "myindex"
  },
  "dest": {
    "index": "myindex-new"
  }
}
```
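
Before trusting a pass, compare document counts between the two indices; the
`_count` endpoint is standard and the index names are the ones from the
example above:

```yaml
# Totals should match before the original index is deleted.
GET /myindex/_count
GET /myindex-new/_count
```

For long-running jobs, `POST _reindex?wait_for_completion=false` returns a task
id that can be polled through the tasks API instead of holding the connection
open.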

I needed to do this multiple times to get all the documents re-indexed. Then I
dropped the original index with the following command.

```yaml
DELETE /myindex
```
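
One caveat worth flagging: the DELETE above also removes the index mapping,
and re-indexing into a fresh `myindex` will re-create it with dynamically
guessed field types. If the mapping matters, re-create the index explicitly
first; a sketch with a made-up single-field mapping standing in for the real
one (7.x-style syntax):

```yaml
PUT /myindex
{
  "mappings": {
    "properties": {
      "title": { "type": "text" }
    }
  }
}
```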

And then re-indexed the new one back into the original one (well, original by
name only).

```yaml
POST _reindex
{
  "source": {
    "index": "myindex-new"
  },
  "dest": {
    "index": "myindex"
  }
}
```
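
In hindsight, an index alias might have saved this second full copy: once the
original index is deleted, the old name can simply point at the new index and
clients keep working unchanged. A sketch of that alternative, with the same
index names:

```yaml
POST /_aliases
{
  "actions": [
    { "add": { "index": "myindex-new", "alias": "myindex" } }
  ]
}
```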

On the surface it looks like everything is working, but I have a long road in
front of me to get everything truly working again. The cluster now shows Green
status, but I am also getting a notification that the cluster is in a
"processing" state, which could mean a million things.

Godspeed!
