author: Mitja Felicijan <mitja.felicijan@gmail.com> 2021-01-24 01:42:03 +0100
---
Title: Using DigitalOcean Spaces Object Storage with FUSE
Description: Using DigitalOcean Spaces Object Storage with FUSE
Slug: using-digitalocean-spaces-object-storage-with-fuse
Listing: true
Created: 2018, January 16
Tags: []
---

A couple of months ago [DigitalOcean](https://www.digitalocean.com) introduced a new product called [Spaces](https://blog.digitalocean.com/introducing-spaces-object-storage/), an object storage service very similar to Amazon's S3. This really piqued my interest, because it was something I had been missing, and going to another provider just for this functionality did not appeal to me. In keeping with their previous pricing, this too is very cheap, and the pricing page is a no-brainer compared to AWS or GCE. [Prices are clearly and precisely defined and outlined](https://www.digitalocean.com/pricing/). You must love them for that :)

### Initial requirements

* Is it possible to use them as a mounted drive with FUSE? (tl;dr YES)
* Will the performance degrade over time and over different sizes of objects? (tl;dr NO&YES)
* Can storage be mounted on multiple machines at the same time and be writable? (tl;dr YES)

> Let me be clear. The scripts I use here were made just for benchmarking and are not intended for real-life use. That said, I am looking into using these approaches with a caching service in front of the storage, dumping everything to it as objects afterwards. That could be an interesting post of its own. But if you need real-time data without eventual consistency, please take these scripts for what they are: not usable in such situations.
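The caching idea from the note above could be sketched roughly like this (a hypothetical sketch: the class name, directory layout, and flush policy are my own invention, not anything this post implements). Writes land on fast local disk first and are copied to the mounted Space later in one go:

```python
import shutil
from pathlib import Path

class WriteBackCache:
    """Buffer writes on local disk, flush them to a mounted Space later."""

    def __init__(self, cache_dir: str, mount_dir: str):
        self.cache_dir = Path(cache_dir)
        self.mount_dir = Path(mount_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def write(self, name: str, data: bytes) -> None:
        # fast path: only local disk is touched here
        (self.cache_dir / name).write_bytes(data)

    def read(self, name: str) -> bytes:
        # prefer the local copy; fall back to the (slower) mount
        local = self.cache_dir / name
        if local.exists():
            return local.read_bytes()
        return (self.mount_dir / name).read_bytes()

    def flush(self) -> None:
        # push everything buffered locally to the object storage mount
        for path in self.cache_dir.iterdir():
            if path.is_file():
                shutil.copy2(path, self.mount_dir / path.name)
```

The process sees its own writes immediately, while the Space only ever receives whole objects. Anything written between flushes is lost if the machine dies, which is exactly the eventual-consistency trade-off the note above warns about.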

## Is it possible to use them as a mounted drive with FUSE?

Well, actually they can be used in such a manner. Because they are similar to [AWS S3](https://aws.amazon.com/s3/), many tools are available and you can find plenty of articles and [Stackoverflow items](https://stackoverflow.com/search?q=s3+fuse).

To make this work you will need a DigitalOcean account. If you don't have one, you will not be able to test this code. If you do, go and [create a new Droplet](https://cloud.digitalocean.com/droplets/new?size=s-1vcpu-1gb&region=ams3&distro=debian&distroImage=debian-9-x64&options=private_networking,install_agent). If you click on this link, Debian 9 with the smallest VM option will already be preselected.

* Please be sure to add your SSH key, because we will log in to this machine remotely.
* If you change the region, remember which one you chose, because we will need that information when we mount the Space on our machine.

Instructions on how to use SSH keys and how to set them up are available in the article [How To Use SSH Keys with DigitalOcean Droplets](https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets).

![DigitalOcean Droplets](/assets/do-fuse/fuse-droplets.png)

After we have created the Droplet, it's time to create a new Space. This is done by clicking the [Create](https://cloud.digitalocean.com/spaces/new) button (top right corner) and selecting Spaces. Choose a pronounceable `Unique name`, because we will use it in the examples below. You can choose either Private or Public; it doesn't matter in our case, and you can always change it later.

When you have created the new Space, we should [generate an Access key](https://cloud.digitalocean.com/settings/api/tokens). This link will take you to the page where you can generate the key. After you create a new one, please save the provided Key and Secret, because the Secret will not be shown again.

![DigitalOcean Spaces](/assets/do-fuse/fuse-spaces.png)

Now that we have a new Space and an Access key, we can SSH into our machine.

```bash
# replace IP with the IP of your newly created droplet
ssh root@IP

# this will install utilities for mounting storage objects as FUSE
apt install s3fs

# we now need to provide credentials (the access key we created earlier)
# replace KEY and SECRET with your own credentials but keep the colon between them
# we also need to set proper permissions
echo "KEY:SECRET" > .passwd-s3fs
chmod 600 .passwd-s3fs

# now we mount the space on our machine
# replace UNIQUE-NAME with the name you chose earlier
# if you chose a different region for your space, adjust the -ourl option (ams3)
s3fs UNIQUE-NAME /mnt/ -ourl=https://ams3.digitaloceanspaces.com -ouse_cache=/tmp

# now we try to create a file
# after mounting it may take a couple of seconds to retrieve data
echo "Hello cruel world" > /mnt/hello.txt
```

After all this you can return to your browser, go to [DigitalOcean Spaces](https://cloud.digitalocean.com/spaces) and click on the Space you created. If the file hello.txt is present, you have successfully mounted the Space on your machine and written data to it.

I chose the same region for my Droplet and my Space, but you don't have to; they can be in different regions. What that actually does to performance I don't know.

Additional information on FUSE:

* [Github project page for s3fs](https://github.com/s3fs-fuse/s3fs-fuse)
* [FUSE - Filesystem in Userspace](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)

## Will the performance degrade over time and over different sizes of objects?

For this task I didn't want to just read and write text files or upload images. I actually wanted to figure out whether using something like SQLite is viable in this case.

### Measurement experiment 1: File copy

```bash
# first we create some dummy files of different sizes
dd if=/dev/zero of=10KB.dat bs=1024 count=10 #10KB
dd if=/dev/zero of=100KB.dat bs=1024 count=100 #100KB
dd if=/dev/zero of=1MB.dat bs=1024 count=1024 #1MB
dd if=/dev/zero of=10MB.dat bs=1024 count=10240 #10MB

# now we set the time command to only return real time
TIMEFORMAT=%R

# now let's test it
(time cp 10KB.dat /mnt/) |& tee -a 10KB.results.txt

# and now we automate
# this will perform the same operation 100 times
# and output results into separate files based on object size
n=0; while (( n++ < 100 )); do (time cp 10KB.dat /mnt/10KB.$n.dat) |& tee -a 10KB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 100KB.dat /mnt/100KB.$n.dat) |& tee -a 100KB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 1MB.dat /mnt/1MB.$n.dat) |& tee -a 1MB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 10MB.dat /mnt/10MB.$n.dat) |& tee -a 10MB.results.txt; done
```

Files of size 100MB were not transferred successfully and failed with an error (cp: failed to close '/mnt/100MB.1.dat': Operation not permitted).

As I suspected, object size is not really that important. Sadly, I don't have the time to test performance over longer periods, but if some of you do, please send me your data. I would be interested in seeing the results.
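The per-size result files produced above can be reduced to summary statistics with a short script (a sketch; it assumes each `*.results.txt` file contains one duration per line, which is what `TIMEFORMAT=%R` produces, and `summarize` is my own helper name, not something from this post):

```python
import statistics
from pathlib import Path

def summarize(path: str) -> dict:
    """Reduce one results file (one duration per line) to summary statistics."""
    # handle locale decimal commas the same way the plotting code below does
    durations = [float(tok.replace(",", ".")) for tok in Path(path).read_text().split()]
    return {
        "samples": len(durations),
        "mean": statistics.mean(durations),
        "min": min(durations),
        "max": max(durations),
    }

if __name__ == "__main__":
    # summarize whichever result files are present in the current directory
    for path in sorted(Path(".").glob("*.results.txt")):
        print(path.name, summarize(str(path)))
```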

**Here are the plotted results**

You can download the [raw results here](/assets/do-fuse/copy-benchmarks.tsv). Measurements are in seconds.

<script src="//cdn.plot.ly/plotly-latest.min.js"></script>
<div id="copy-benchmarks"></div>
<script>
(function(){
  var request = new XMLHttpRequest();
  request.open("GET", "/assets/do-fuse/copy-benchmarks.tsv", true);
  request.onload = function() {
    if (request.status >= 200 && request.status < 400) {
      var payload = request.responseText.trim();
      var tsv = payload.split("\n");
      for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
      var traces = [];
      var headers = tsv[0];
      tsv.shift();
      Array.prototype.forEach.call(headers, function(el, idx) {
        var x = [];
        var y = [];
        for (var j=0; j<tsv.length; j++) {
          x.push(j);
          y.push(parseFloat(tsv[j][idx].replace(",", ".")));
        }
        traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
      });
      var copy = Plotly.newPlot("copy-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 40, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } }, xaxis: { title: "fn(i)", titlefont: { size: 12 } } });
    } else { }
  };
  request.onerror = function() { };
  request.send(null);
})();
</script>

As far as these tests show, performance is quite stable and predictable, which is fantastic. But this is a small test that spans only a couple of hours, so you should not trust it completely.

### Measurement experiment 2: SQLite performance

I was unable to use a database file directly from the mounted drive, so that is a no-go, as I suspected. Instead I executed the code below on a local disk just to get some benchmarks. I inserted 1000 records and repeated DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT 1000 times to generate statistics. As you can see, the performance of SQLite is quite amazing. You could then potentially just copy the file to the mounted drive and be done with it.

```python
import time
import sqlite3
import sys

if len(sys.argv) < 4:
    print("usage: python sqlite-benchmark.py DB_PATH NUM_RECORDS REPEAT")
    exit()

def data_iter(x):
    for i in range(x):
        yield "m" + str(i), "f" + str(i*i)

header_line = "%s\t%s\t%s\t%s\t%s\n" % ("DROPTABLE", "CREATETABLE", "INSERTMANY", "FETCHALL", "COMMIT")
with open("sqlite-benchmarks.tsv", "w") as fp:
    fp.write(header_line)

start_time = time.time()
conn = sqlite3.connect(sys.argv[1])
c = conn.cursor()
end_time = time.time()
result_time = CONNECT = end_time - start_time
print("CONNECT: %g seconds" % (result_time))

start_time = time.time()
c.execute("PRAGMA journal_mode=WAL")
c.execute("PRAGMA temp_store=MEMORY")
c.execute("PRAGMA synchronous=OFF")
end_time = time.time()
result_time = PRAGMA = end_time - start_time
print("PRAGMA: %g seconds" % (result_time))

for i in range(int(sys.argv[3])):
    print("#%i" % (i))

    start_time = time.time()
    c.execute("drop table if exists test")
    end_time = time.time()
    result_time = DROPTABLE = end_time - start_time
    print("DROPTABLE: %g seconds" % (result_time))

    start_time = time.time()
    c.execute("create table if not exists test(a,b)")
    end_time = time.time()
    result_time = CREATETABLE = end_time - start_time
    print("CREATETABLE: %g seconds" % (result_time))

    start_time = time.time()
    c.executemany("INSERT INTO test VALUES (?, ?)", data_iter(int(sys.argv[2])))
    end_time = time.time()
    result_time = INSERTMANY = end_time - start_time
    print("INSERTMANY: %g seconds" % (result_time))

    start_time = time.time()
    c.execute("select count(*) from test")
    res = c.fetchall()
    end_time = time.time()
    result_time = FETCHALL = end_time - start_time
    print("FETCHALL: %g seconds" % (result_time))

    start_time = time.time()
    conn.commit()
    end_time = time.time()
    result_time = COMMIT = end_time - start_time
    print("COMMIT: %g seconds" % (result_time))

    print()
    log_line = "%f\t%f\t%f\t%f\t%f\n" % (DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT)
    with open("sqlite-benchmarks.tsv", "a") as fp:
        fp.write(log_line)

start_time = time.time()
conn.close()
end_time = time.time()
result_time = CLOSE = end_time - start_time
print("CLOSE: %g seconds" % (result_time))
```

You can download the [raw results here](/assets/do-fuse/sqlite-benchmarks.tsv). And again, these results were produced on local block storage and do not represent the capabilities of object storage. With my current approach and the state of the test code, that measurement cannot be done. I would need to make the Python code much more robust, check locking, etc.
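The "copy the file to the mounted drive" idea mentioned above could look roughly like this (a sketch under my own naming; it uses Python's `sqlite3` backup API, which this post's benchmark does not, in order to get a consistent copy even while the database is being written to):

```python
import shutil
import sqlite3

def snapshot_to_mount(db_path: str, mount_path: str) -> None:
    """Snapshot a live SQLite database and copy the snapshot to a mounted Space."""
    snapshot = db_path + ".snapshot"
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(snapshot)
    # copy page by page; stays consistent even if src has open writers
    src.backup(dst)
    dst.close()
    src.close()
    # one whole-file upload; other machines see old data until this finishes
    shutil.copy2(snapshot, mount_path)
```

Readers on other machines would open the copy on the mount read-only and simply tolerate it being a flush interval behind, which matches the eventual-consistency caveat from the beginning of the post.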

<div id="sqlite-benchmarks"></div>
<script>
(function(){
  var request = new XMLHttpRequest();
  request.open("GET", "/assets/do-fuse/sqlite-benchmarks.tsv", true);
  request.onload = function() {
    if (request.status >= 200 && request.status < 400) {
      var payload = request.responseText.trim();
      var tsv = payload.split("\n");
      for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
      var traces = [];
      var headers = tsv[0];
      tsv.shift();
      Array.prototype.forEach.call(headers, function(el, idx) {
        var x = [];
        var y = [];
        for (var j=0; j<tsv.length; j++) {
          x.push(j);
          y.push(parseFloat(tsv[j][idx].replace(",", ".")));
        }
        traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
      });
      var sqlite = Plotly.newPlot("sqlite-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 50, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } } });
    } else { }
  };
  request.onerror = function() { };
  request.send(null);
})();
</script>

## Can storage be mounted on multiple machines at the same time and be writable?

Well, this one didn't take long to test. And the answer is **YES**. I mounted the Space on both machines and measured the same performance on both. But because a file is downloaded before a write and uploaded once the write completes, there could be problems if another process tries to access the same file at the same time.
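A commonly suggested mitigation for that last caveat is a lock file created with an exclusive flag. Here is a minimal sketch of the idea (function names are mine, and a loud caveat: this is only reliable on a normal local filesystem; on an eventually consistent object storage mount even `O_EXCL` creation gives no real guarantee, so treat it as best-effort):

```python
import os

def try_lock(lock_path: str) -> bool:
    """Best-effort exclusive lock: create the lock file only if it doesn't exist yet."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # someone else holds the lock
    os.close(fd)
    return True

def unlock(lock_path: str) -> None:
    # release the lock by removing the lock file
    os.remove(lock_path)
```

A writer would call `try_lock` before downloading and modifying a shared file, and `unlock` after the upload finishes; on the Space itself this only narrows the race window rather than closing it.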

## Observations and conclusion

Using Spaces in this way makes it easier to access and manage files. But beyond that, you would need to write additional code to make this play nice with your applications.

Nevertheless, this was extremely simple to set up and use, and it is just another excellent product in the DigitalOcean line. I found this exercise very valuable and am thinking about implementing some sort of mechanism for SQLite, so data can be stored on Spaces and accessed by many VMs. For a project where data doesn't need to be accessible in real time and can be a couple of minutes old, this would be very interesting. If any of you find this proposal interesting, please write in the comment box below or shoot me an email and I will keep you posted.