author     Mitja Felicijan <mitja.felicijan@gmail.com>  2024-03-10 14:59:14 +0100
committer  Mitja Felicijan <mitja.felicijan@gmail.com>  2024-03-10 14:59:14 +0100
commit     1100562e29f6476448b656dbddd4cf22505523f6 (patch)
tree       442eec492199104bd49dfd74474ce89ade8fcac9 /_posts/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
parent     a40d80be378e46a6c490e1b99b0d8f4acd968503 (diff)
download   mitjafelicijan.com-1100562e29f6476448b656dbddd4cf22505523f6.tar.gz
Move back to JBMAFP
Diffstat (limited to '_posts/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md')
-rw-r--r--  _posts/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md  332
1 file changed, 0 insertions, 332 deletions
diff --git a/_posts/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md b/_posts/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
deleted file mode 100644
index d29bd09..0000000
--- a/_posts/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
+++ /dev/null
@@ -1,332 +0,0 @@
---
title: Using DigitalOcean Spaces Object Storage with FUSE
permalink: /using-digitalocean-spaces-object-storage-with-fuse.html
date: 2018-01-16T12:00:00+02:00
layout: post
type: post
draft: false
---

A couple of months ago [DigitalOcean](https://www.digitalocean.com) introduced
a new product called
[Spaces](https://blog.digitalocean.com/introducing-spaces-object-storage/), an
object storage service very similar to Amazon's S3. This really piqued my
interest, because it was something I was missing, and going over the public
internet to another provider for such functionality did not appeal to me. In
keeping with their previous pricing, it is also very cheap, and the pricing
page is a no-brainer compared to AWS or GCE. [Prices are clearly and precisely
defined and outlined](https://www.digitalocean.com/pricing/). You must love
them for that :)

## Initial requirements

* Is it possible to use them as a mounted drive with FUSE? (tl;dr YES)
* Will the performance degrade over time and over different sizes of objects?
  (tl;dr NO & YES)
* Can storage be mounted on multiple machines at the same time and be writable?
  (tl;dr YES)

> Let me be clear: the scripts I use here were made just for benchmarking and
> are not intended for real-life use. That said, I am looking into using these
> approaches with a caching service in front, dumping everything to object
> storage behind it. That could be an interesting post in itself. But if you
> need real-time data without eventual consistency, take these scripts for
> what they are: not usable in such situations.

## Is it possible to use them as a mounted drive with FUSE?

Well, actually they can be used in such a manner. Because they are similar to
[AWS S3](https://aws.amazon.com/s3/), many tools are available and you can find
many articles and [Stackoverflow items](https://stackoverflow.com/search?q=s3+fuse).

To make this work you will need a DigitalOcean account. If you don't have one,
you will not be able to test this code. If you do, go and [create a new
Droplet](https://cloud.digitalocean.com/droplets/new?size=s-1vcpu-1gb&region=ams3&distro=debian&distroImage=debian-9-x64&options=private_networking,install_agent).
If you click on this link you will already have Debian 9 with the smallest VM
option preselected.

* Please be sure to add your SSH key, because we will log in to this machine
  remotely.
* If you change your region, remember which one you chose, because we will
  need this information when we mount the Space to our machine.

Instructions on how to use SSH keys and how to set them up are available in the
article [How To Use SSH Keys with DigitalOcean
Droplets](https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets).

![DigitalOcean Droplets](/assets/posts/do-fuse/fuse-droplets.png){:loading="lazy"}

After we have created the Droplet it's time to create a new Space. This is done
by clicking the [Create](https://cloud.digitalocean.com/spaces/new) button (top
right corner) and selecting Spaces. Choose a pronounceable `Unique name`
because we will use it in the examples below. You can choose either Private or
Public; it doesn't matter in our case, and you can always change it later.

When you have created the new Space, we should [generate an Access
key](https://cloud.digitalocean.com/settings/api/tokens). This link will guide
you to the page where you can generate this key. After you create a new one,
save the provided Key and Secret, because the Secret will not be shown again.

![DigitalOcean Spaces](/assets/posts/do-fuse/fuse-spaces.png){:loading="lazy"}

Now that we have a new Space and an Access key, we should SSH into our machine.

```bash
# replace IP with the IP of your newly created Droplet
ssh root@IP

# this installs the utility for mounting object storage over FUSE
apt install s3fs

# we now need to provide credentials (the access key we created earlier)
# replace KEY and SECRET with your own credentials but keep the colon between them
# we also need to set proper permissions on the file
echo "KEY:SECRET" > .passwd-s3fs
chmod 600 .passwd-s3fs

# now we mount the Space on our machine
# replace UNIQUE-NAME with the name you chose earlier
# if you chose a different region for your Space, adjust the url option accordingly (ams3)
s3fs UNIQUE-NAME /mnt/ -o url=https://ams3.digitaloceanspaces.com -o use_cache=/tmp

# now we try to create a file
# after mounting it may take a couple of seconds to retrieve data
echo "Hello cruel world" > /mnt/hello.txt
```

After all this you can return to your browser, go to [DigitalOcean
Spaces](https://cloud.digitalocean.com/spaces) and click on the Space you
created. If the file hello.txt is present, you have successfully mounted the
Space on your machine and written data to it.
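
Besides checking in the browser, you can verify the mount with a small
round-trip test from the machine itself. This is only a sketch I am adding
here, not part of the original setup; the mount point and file name are
assumptions, so pass wherever you actually mounted the Space:

```python
import os

def roundtrip_check(mount_point):
    """Write a file to the mount, read it back, and clean up.

    Returns True if the content survived the round trip."""
    path = os.path.join(mount_point, "fuse-check.txt")
    payload = "Hello cruel world\n"
    with open(path, "w") as fp:
        fp.write(payload)
    with open(path) as fp:
        ok = fp.read() == payload
    os.remove(path)  # leave the Space clean
    return ok

# e.g. roundtrip_check("/mnt")
```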

I chose the same region for my Droplet and my Space, but you don't have to;
they can be in different regions. What that actually does to performance, I
don't know.

Additional information on FUSE:

* [Github project page for s3fs](https://github.com/s3fs-fuse/s3fs-fuse)
* [FUSE - Filesystem in Userspace](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)

## Will the performance degrade over time and over different sizes of objects?

For this task I didn't want to just read and write text files or upload
images. I actually wanted to figure out whether using something like SQLite is
viable in this case.

### Measurement experiment 1: File copy

```bash
# first we create some dummy files of different sizes
dd if=/dev/zero of=10KB.dat bs=1024 count=10 #10KB
dd if=/dev/zero of=100KB.dat bs=1024 count=100 #100KB
dd if=/dev/zero of=1MB.dat bs=1024 count=1024 #1MB
dd if=/dev/zero of=10MB.dat bs=1024 count=10240 #10MB

# now we set the time command to only report real time
TIMEFORMAT=%R

# now let's test it
(time cp 10KB.dat /mnt/) |& tee -a 10KB.results.txt

# and now we automate
# this performs the same operation 100 times
# and writes the results into separate files based on object size
n=0; while (( n++ < 100 )); do (time cp 10KB.dat /mnt/10KB.$n.dat) |& tee -a 10KB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 100KB.dat /mnt/100KB.$n.dat) |& tee -a 100KB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 1MB.dat /mnt/1MB.$n.dat) |& tee -a 1MB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 10MB.dat /mnt/10MB.$n.dat) |& tee -a 10MB.results.txt; done
```
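
To turn those per-run results files into something comparable, a short summary
helper can compute basic statistics. This is an addition of mine, not from the
original benchmark; the file name matches the `tee` targets above, and the
comma replacement covers locales where `time` prints a decimal comma:

```python
import statistics

def summarize(path):
    """Summarize a results file produced by `time` with TIMEFORMAT=%R
    (one duration per line, in seconds)."""
    with open(path) as fp:
        times = [float(line.strip().replace(",", "."))
                 for line in fp if line.strip()]
    return {
        "runs": len(times),
        "mean": statistics.mean(times),
        "min": min(times),
        "max": max(times),
    }

# e.g. summarize("10KB.results.txt")
```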

Files of size 100MB were not successfully transferred and failed with an error
(cp: failed to close '/mnt/100MB.1.dat': Operation not permitted).

As I suspected, object size is not really that important. Sadly I don't have
the time to test performance over longer periods of time, but if any of you
do, please send me your data. I would be interested in seeing the results.

**Here are plotted results**

You can download the [raw results here](/assets/posts/do-fuse/copy-benchmarks.tsv).
Measurements are in seconds.

<script src="//cdn.plot.ly/plotly-latest.min.js"></script>
<div id="copy-benchmarks"></div>
<script>
(function(){
  var request = new XMLHttpRequest();
  request.open("GET", "/assets/posts/do-fuse/copy-benchmarks.tsv", true);
  request.onload = function() {
    if (request.status >= 200 && request.status < 400) {
      var payload = request.responseText.trim();
      var tsv = payload.split("\n");
      for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
      var traces = [];
      var headers = tsv[0];
      tsv.shift();
      Array.prototype.forEach.call(headers, function(el, idx) {
        var x = [];
        var y = [];
        for (var j=0; j<tsv.length; j++) {
          x.push(j);
          y.push(parseFloat(tsv[j][idx].replace(",", ".")));
        }
        traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
      });
      Plotly.newPlot("copy-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 40, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } }, xaxis: { title: "fn(i)", titlefont: { size: 12 } } });
    }
  };
  request.onerror = function() { };
  request.send(null);
})();
</script>

As far as these tests show, performance is quite stable and predictable, which
is fantastic. But this is a small test spanning only a couple of hours, so you
should not trust it completely.

### Measurement experiment 2: SQLite performance

I was unable to use a database file directly from the mounted drive, so this
is a no-go, as I suspected. Instead I executed the code below on a local disk
just to get some benchmarks: it repeats DROPTABLE, CREATETABLE, INSERTMANY
(1000 records), FETCHALL and COMMIT 1000 times to generate statistics. As you
can see, SQLite's performance is quite amazing. You could then potentially
just copy the file to the mounted drive and be done with it.
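
If copying the file over is good enough, the copy should ideally come from
SQLite's online backup API rather than a raw `cp` of a live database, so the
snapshot is consistent even while writes are happening. A rough sketch of mine,
not from the benchmark itself; the destination path on the mount is an
assumption, and `Connection.backup` requires Python 3.7+:

```python
import shutil
import sqlite3

def snapshot(db_path, dest_path, tmp_path="snapshot.db"):
    """Take a consistent snapshot of a (possibly live) SQLite database
    and copy it to the destination, e.g. a FUSE-mounted Space."""
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(tmp_path)
    with dst:
        src.backup(dst)  # consistent copy via the online backup API
    dst.close()
    src.close()
    # one sequential write, which suits s3fs far better than random I/O
    shutil.copy(tmp_path, dest_path)

# e.g. snapshot("app.db", "/mnt/app.db")
```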

```python
import time
import sqlite3
import sys

if len(sys.argv) < 4:
    print("usage: python sqlite-benchmark.py DB_PATH NUM_RECORDS REPEAT")
    sys.exit(1)

def data_iter(x):
    for i in range(x):
        yield "m" + str(i), "f" + str(i*i)

header_line = "%s\t%s\t%s\t%s\t%s\n" % ("DROPTABLE", "CREATETABLE", "INSERTMANY", "FETCHALL", "COMMIT")
with open("sqlite-benchmarks.tsv", "w") as fp:
    fp.write(header_line)

start_time = time.time()
conn = sqlite3.connect(sys.argv[1])
c = conn.cursor()
end_time = time.time()
result_time = CONNECT = end_time - start_time
print("CONNECT: %g seconds" % (result_time))

start_time = time.time()
c.execute("PRAGMA journal_mode=WAL")
c.execute("PRAGMA temp_store=MEMORY")
c.execute("PRAGMA synchronous=OFF")
end_time = time.time()
result_time = PRAGMA = end_time - start_time
print("PRAGMA: %g seconds" % (result_time))

for i in range(int(sys.argv[3])):
    print("#%i" % (i))

    start_time = time.time()
    c.execute("drop table if exists test")
    end_time = time.time()
    result_time = DROPTABLE = end_time - start_time
    print("DROPTABLE: %g seconds" % (result_time))

    start_time = time.time()
    c.execute("create table if not exists test(a,b)")
    end_time = time.time()
    result_time = CREATETABLE = end_time - start_time
    print("CREATETABLE: %g seconds" % (result_time))

    start_time = time.time()
    c.executemany("INSERT INTO test VALUES (?, ?)", data_iter(int(sys.argv[2])))
    end_time = time.time()
    result_time = INSERTMANY = end_time - start_time
    print("INSERTMANY: %g seconds" % (result_time))

    start_time = time.time()
    c.execute("select count(*) from test")
    res = c.fetchall()
    end_time = time.time()
    result_time = FETCHALL = end_time - start_time
    print("FETCHALL: %g seconds" % (result_time))

    start_time = time.time()
    conn.commit()
    end_time = time.time()
    result_time = COMMIT = end_time - start_time
    print("COMMIT: %g seconds" % (result_time))

    print()
    log_line = "%f\t%f\t%f\t%f\t%f\n" % (DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT)
    with open("sqlite-benchmarks.tsv", "a") as fp:
        fp.write(log_line)

start_time = time.time()
conn.close()
end_time = time.time()
result_time = CLOSE = end_time - start_time
print("CLOSE: %g seconds" % (result_time))
```

You can download the [raw results here](/assets/posts/do-fuse/sqlite-benchmarks.tsv).
Again, these results were obtained on local block storage and do not represent
the capabilities of the object storage. With my current approach and the state
of the test code this cannot be measured there; I would need to make the
Python code much more robust and check locking etc.

<div id="sqlite-benchmarks"></div>
<script>
(function(){
  var request = new XMLHttpRequest();
  request.open("GET", "/assets/posts/do-fuse/sqlite-benchmarks.tsv", true);
  request.onload = function() {
    if (request.status >= 200 && request.status < 400) {
      var payload = request.responseText.trim();
      var tsv = payload.split("\n");
      for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
      var traces = [];
      var headers = tsv[0];
      tsv.shift();
      Array.prototype.forEach.call(headers, function(el, idx) {
        var x = [];
        var y = [];
        for (var j=0; j<tsv.length; j++) {
          x.push(j);
          y.push(parseFloat(tsv[j][idx].replace(",", ".")));
        }
        traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
      });
      Plotly.newPlot("sqlite-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 50, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } } });
    }
  };
  request.onerror = function() { };
  request.send(null);
})();
</script>

## Can storage be mounted on multiple machines at the same time and be writable?

Well, this one didn't take long to test, and the answer is **YES**. I mounted
the Space on both machines and measured the same performance on both. But
because a file is downloaded before a write and uploaded on completion, there
could be problems if another process is trying to access the same file at the
same time.
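
s3fs gives you no cross-machine locking, so if two writers matter, you have to
coordinate them yourself. One crude option I can imagine is an advisory lock
file created with `O_EXCL`; this is only a sketch with an assumed lock path,
and over an eventually consistent object store even the atomicity of the
create is not guaranteed:

```python
import errno
import os

def try_lock(lock_path):
    """Try to take an advisory lock by creating a marker file atomically.

    Returns True if we acquired the lock, False if it is already held."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False  # someone else holds the lock
        raise
    os.write(fd, str(os.getpid()).encode())  # record who holds the lock
    os.close(fd)
    return True

def unlock(lock_path):
    os.remove(lock_path)

# e.g. if try_lock("/mnt/app.lock"): write, then unlock("/mnt/app.lock")
```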

## Observations and conclusion

Using Spaces in this way makes it easier to access and manage files. But
beyond that, you would need to write additional code to make it play nice
with your applications.

Nevertheless, this was extremely simple to set up and use, and it is just
another excellent product in the DigitalOcean line. I found this exercise very
valuable and am thinking about implementing some sort of mechanism for SQLite,
so data can be stored on Spaces and accessed by many VMs. For a project where
data doesn't need to be accessible in real-time and can be a couple of minutes
old, this would be very interesting. If any of you find this proposal
interesting, please write in the comment box below or shoot me an email and I
will keep you posted.