---
title: The strange case of Elasticsearch allocation failure
url: the-strange-case-of-elasticsearch-allocation-failure.html
date: 2020-03-29T12:00:00+02:00
draft: false
---

I've been using Elasticsearch in production for 5 years now and never had a
single problem with it. Hell, I never even knew there could be a problem. It
just worked. All this time. The first node that I deployed is still being used
in production, never updated, upgraded, or touched in any way.

All this bliss came to an abrupt end this Friday when I got a notification that
the Elasticsearch cluster went warm. Well, warm is not that bad, right? Wrong!
Quickly after that I got another email which sent chills down my spine. The
cluster was now red. RED! Now shit had really hit the fan!

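For context, red here refers to the cluster health status, which you can check
yourself at any time. This is the stock Elasticsearch health API (nothing
specific to my setup), so take it as a generic sketch:

```yaml
GET /_cluster/health
```

The interesting fields in the response are `status` (green/yellow/red) and
`unassigned_shards`.
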
I tried googling what could be the problem and after running the allocation
query noticed that some shards were unassigned and that 5 allocation attempts
had already been made (which is, just my luck, the maximum), and that meant I
was basically fucked. The advice was also that one should wait for the cluster
to re-balance itself. So, I waited. One hour, two hours, several hours. Nothing,
still RED.

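If you want to see which shards are stuck instead of guessing, the cat shards
API can list them together with the reason they are unassigned. This is the
stock Elasticsearch API and the column list is just my pick, so treat it as a
sketch:

```yaml
GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason
```

Shards sitting in `UNASSIGNED` state with a reason like `ALLOCATION_FAILED` are
the ones to worry about.
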
The strangest thing about it all was that queries were still being fulfilled.
Data was coming out. On the outside it looked like nothing was wrong, but
anybody who looked at the cluster would know immediately that something was
very, very wrong and that we were living on borrowed time here.

> **Please, DO NOT do what I did.** Seriously! Please ask someone on the official
forums or, if you know an expert, please consult them. There could be a million
reasons and these solutions fit my problem. Maybe in your case it would be
disastrous. I had all the data backed up and even if I failed spectacularly
I would be able to restore the data. It would be a huge pain and I would lose a
couple of days, but I had a plan B.

Executing the allocation query told me what the problem was, but offered no
clear solution yet.

```yaml
GET /_cat/allocation?format=json
```

I got a message saying `ALLOCATION_FAILED` with additional info `failed to create
shard, failure ioexception[failed to obtain in-memory shard lock]`. Well,
splendid! I must also say that our cluster is more than capable of handling the
traffic. Also, JVM memory pressure was never an issue. So what really happened
then?

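For completeness, the allocation explain API is normally the most direct way to
get this kind of detail for a single shard. It is a standard Elasticsearch
endpoint, although I cannot promise every managed offering lets you call it, so
consider this a sketch:

```yaml
GET /_cluster/allocation/explain
```

Called without a body it picks an unassigned shard for you and explains why it
cannot be allocated.
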
I also tried re-routing the failed shards, with no success due to AWS
restrictions on their managed Elasticsearch clusters (they lock some of the
functions).

```yaml
POST /_cluster/reroute?retry_failed=true
```

I got a message that significantly reduced my options.

```json
{
  "Message": "Your request: '/_cluster/reroute' is not allowed."
}
```

After that I went on the hunt again. I won't bother you with all the details,
because hours and days went by until I was finally able to re-index the
problematic index and hope for the best. Until that moment even re-indexing was
giving me errors.

```yaml
POST _reindex
{
  "source": {
    "index": "myindex"
  },
  "dest": {
    "index": "myindex-new"
  }
}
```

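A reindex on a big index can run for a long time and can time out on the HTTP
connection, so if you are repeating this at home it may be worth running it as a
background task and polling it. These are standard Elasticsearch parameters, not
something special I did, so again just a sketch:

```yaml
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "myindex"
  },
  "dest": {
    "index": "myindex-new"
  }
}
```

The response contains a task id which you can then poll with
`GET /_tasks/<task id>` until the copy is done.
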
I needed to do this multiple times to get all the documents re-indexed. Then I
dropped the original one with the following command.

```yaml
DELETE /myindex
```

And then re-indexed the new one back into the original one (well, by name only).

```yaml
POST _reindex
{
  "source": {
    "index": "myindex-new"
  },
  "dest": {
    "index": "myindex"
  }
}
```

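If you go down this road, it is worth comparing document counts between the two
indices after each pass, so you know when everything has actually made it
across. This is the standard count API, with the same placeholder index names as
above:

```yaml
GET /myindex/_count
GET /myindex-new/_count
```

If the numbers do not match, another pass of the reindex is in order.
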
On the surface it looks like everything is working, but I have a long road in
front of me to get all the things working again. The cluster now shows that it
is green, but I am also getting a notification that the cluster has a processing
status, which could mean a million things.

Godspeed!