<!doctype html><html lang=en-us><meta charset=utf-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=generator content="JBMAFP - github.com/mitjafelicijan/jbmafp"><link href="data:image/x-icon;base64,AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAL69vf8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAv76+/8LBwQkAAAAAAAAAAAAAAAC+vb3/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAL+9vf/Bv78JAAAAAAAAAAAAAAAAu7q6/wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC7ubr/vr29CAAAAAAAAAAAy8nJAZ6foP8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAnqGj/6GipAoAAAAAHLjU/xcXHf/BwsL/I8XY/yPK3v8XGiD/IbjL/yPF2f8XGiD/Fxkf/yLF2f8gnK3/Fxog/62ztv8fwNf/FRcd/x271v8mz93/GRsi/xkXHf8p097/GiIp/xobIv8p0t3/KdPe/xocIv8fYmr/KNPe/xoZH/8aHCL/J87c/xy81/8VFxz/IsPZ/8zS0/8XGiD/Ir/R/yPH2/8XGiD/Fxkf/yPH2/8dd4T/GBog/yPJ3f8jyNr/uru9/xcUGv8cudb/EhITDKi5vRKlvMP/RUpOERwcHRAdOj4QHTk8EBwdHRAdNTgQHTo/EBwcHRAcHB0QSGduEKW4vf+koqQfHzg+EBqz0ewSFRv7EyMr/xq51vsTERb7ExUb+xq41fsau9j7ExUb+xiPp/sZudb7ExUb+xMVG/sZuNX/GKvI/BIUGfMdvdn/IrfL/xcaIP8n1eb/J9Dh/xkcIf8ZGR7/J8/f/xxCSv8ZGyH/J9Dg/ybQ4P8ZHCL/FSQs/yPK3/8UExj/GE1b/ybS5P8ZGB7/Ghwj/ynW5P8p2Ob/Ghwi/yWrtv8p1eH/Ghwi/xocIv8p1uT/J8XT/xkcIv8m1un/Hb7d/xUYH/8hzOr/HtHu/xcaIf8XGB//I8vi/xgxOv8XGSD/I8rg/yPK4P8XGiD/GUFL/yPP6f8SERj/Fhkh/x3A4f8AAAAAJ2f9/ydr//8mZPH/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlYu38J2v//ydo/f8AAAAAAAAAAAd8/fkFqf//Iob8sAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMY39awWr//8FfP3/AAAAAAAAAAAFm/7/SfD//wR+/f8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOB/f9B7v//BaX+/wAAAAAAAAAAQ878SAyZ/v9n1v4KAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADu9v8DDJb+/z3N/XgAAAAA3/sAAN/7AADf+wAA3/sAAAAAAAAAAAAAAAAAAN/7AAAAAAAAAAAAAAAAAAAAAAAAj/EAAI/5AACP8QAA3/sAAA==" rel=icon type=image/x-icon><title>Using DigitalOcean Spaces Object Storage with FUSE</title><meta name=description content="A couple of months ago DigitalOcean introduced a new product called Spaces which is Object Storage very similar to 
Amazon&#39;s S3."><link rel=alternate type=application/rss+xml title="Mitja Felicijan's posts" href=https://mitjafelicijan.com/index.xml><link rel=alternate type=application/rss+xml title="Mitja Felicijan's notes" href=https://mitjafelicijan.com/notes.xml><style>body{padding:2.5rem;max-width:1900px;background:#fff;font-family:sans-serif;line-height:1.35rem;font-size:16px}hr{margin-block-start:1.5rem}h1,h2,h3{line-height:initial}h1{font-size:xx-large}footer{margin-block-start:2rem}cap{text-transform:capitalize}table{max-width:100%;border-collapse:separate;border-spacing:2px;border:1px solid #000;border-left:1px solid #999;border-top:1px solid #999}blockquote{font-style:italic}table thead{background:#eee}ul.list li{padding:.2em 0}ul{line-height:1.4em}td,th{border:1px solid #000;padding:4px;border-right:1px solid #999;border-bottom:1px solid #999;text-align:left}pre{text-wrap:nowrap;overflow-x:auto;padding:0 1em;border:1px solid #dcdcdc}code{padding:0 3px;font-size:14px;border:0}pre code{line-height:1.3em}pre,code,pre *,code *{font-family:monospace}figure{margin-inline-start:0;margin-inline-end:0}figcaption{text-align:center}figcaption p{margin:.3em 0 0}img,video,audio{width:800px;max-width:100%}header{display:flex;flex-direction:row;gap:6rem}nav{display:flex;gap:.75rem}nav.main{}.pstatus-orange{background:gold}.pstatus-green{background:#9acd32}.pstatus-red{background:#cd5c5c}@media only screen and (max-width:600px){body{padding:15px}header{flex-direction:column;gap:1rem}a{word-wrap:break-word}}</style><header><nav class=main itemscope itemtype=http://schema.org/SiteNavigationElement role=toolbar><a href=/>Home</a>
<a href=https://github.com/mitjafelicijan target=_blank>Code</a>
<a href=/vault.html>Vault</a>
<a href=/mitjafelicijan.pgp.pub.txt target=_blank>PGP</a>
<a href=/curriculum-vitae.html>CV</a>
<a href=/index.xml target=_blank>RSS</a></nav></header><main role=main><article itemtype=http://schema.org/Article><h1 itemtype=headline>Using DigitalOcean Spaces Object Storage with FUSE</h1><p><cap>post</cap>, Jan 16, 2018 on <a href=https://mitjafelicijan.com>Mitja Felicijan's blog</a><div><p>A couple of months ago <a href=https://www.digitalocean.com>DigitalOcean</a> introduced a new
product called
<a href=https://blog.digitalocean.com/introducing-spaces-object-storage/>Spaces</a>, which
is object storage very similar to Amazon's S3. This really piqued my interest,
because it was something I had been missing, and even the thought of going out
over the internet for such functionality did not appeal to me. In keeping with
their previous pricing, it is also very cheap, and the pricing page is a no-brainer
compared to AWS or GCE. <a href=https://www.digitalocean.com/pricing/>Prices are clearly and precisely defined and
outlined</a>. You must love them for that
:)<h2 id=initial-requirements>Initial requirements</h2><ul><li>Is it possible to use them as a mounted drive with FUSE? (tl;dr YES)<li>Will the performance degrade over time and over different sizes of objects?
(tl;dr NO&YES)<li>Can storage be mounted on multiple machines at the same time and be writable?
(tl;dr YES)</ul><blockquote><p>Let me be clear. The scripts I use here were made just for benchmarking and are not
intended to be used in real-life situations. That said, I am looking into
using these approaches with a caching service in front, dumping everything
as objects to storage afterwards. That could be an interesting post of its
own. But if you need real-time data without
eventual consistency, take these scripts for what they are: not usable in such
situations.</blockquote><h2 id=is-it-possible-to-use-them-as-a-mounted-drive-with-fuse>Is it possible to use them as a mounted drive with FUSE?</h2><p>Well, they actually can be used in such a manner. Because they are similar to <a href=https://aws.amazon.com/s3/>AWS
S3</a>, many tools are available and you can find many
articles and <a href="https://stackoverflow.com/search?q=s3+fuse">Stackoverflow items</a>.<p>To make this work you will need a DigitalOcean account. If you don't have one,
you will not be able to follow along. If you do, go ahead and
<a href="https://cloud.digitalocean.com/droplets/new?size=s-1vcpu-1gb&region=ams3&distro=debian&distroImage=debian-9-x64&options=private_networking,install_agent">create a new
Droplet</a>.
The link above already preselects Debian 9 with the
smallest VM option.<ul><li>Be sure to add your SSH key, because we will log in to this machine
remotely.<li>If you change the region, remember which one you chose, because we will
need that information when we mount the Space on our machine.</ul><p>Instructions on how to set up and use SSH keys are available in the
article <a href=https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets>How To Use SSH Keys with DigitalOcean
Droplets</a>.<figure><img src=/posts/do-fuse/fuse-droplets.png alt="DigitalOcean Droplets"></figure><p>After the Droplet is created, it's time to create a new Space. This is done by clicking
the <a href=https://cloud.digitalocean.com/spaces/new>Create</a> button (top right
corner) and selecting Spaces. Choose a pronounceable <code>Unique name</code>, because we
will use it in the examples below. You can choose either Private or Public; it
doesn't matter in our case, and you can always change it later.<p>Once you have created the new Space, you should <a href=https://cloud.digitalocean.com/settings/api/tokens>generate an Access
key</a>. This link will take you
to the page where you can generate the key. After you create one, save the
provided Key and Secret, because the Secret will not be shown again.<figure><img src=/posts/do-fuse/fuse-spaces.png alt="DigitalOcean Spaces"></figure><p>Now that we have a new Space and an Access key, we can SSH into our machine.<pre tabindex=0 style=background-color:#fff><code><span style=display:flex><span><span style=color:green># replace IP with the IP of your newly created droplet</span>
</span></span><span style=display:flex><span>ssh root@IP
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># this will install utilities for mounting storage objects as FUSE</span>
</span></span><span style=display:flex><span>apt install s3fs
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># we now need to provide credentials (access key we created earlier)</span>
</span></span><span style=display:flex><span><span style=color:green># replace KEY and SECRET with your own credentials but leave the colon between them</span>
</span></span><span style=display:flex><span><span style=color:green># we also need to set proper permissions</span>
</span></span><span style=display:flex><span>echo <span style=color:#a31515>"KEY:SECRET"</span> > .passwd-s3fs
</span></span><span style=display:flex><span>chmod 600 .passwd-s3fs
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># now we mount space to our machine</span>
</span></span><span style=display:flex><span><span style=color:green># replace UNIQUE-NAME with the name you chose earlier</span>
</span></span><span style=display:flex><span><span style=color:green># if you chose a different region for your Space, adjust the -ourl option (ams3)</span>
</span></span><span style=display:flex><span>s3fs UNIQUE-NAME /mnt/ -ourl=https://ams3.digitaloceanspaces.com -ouse_cache=/tmp
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># now we try to create a file</span>
</span></span><span style=display:flex><span><span style=color:green># once you mount it may take a couple of seconds to retrieve data</span>
</span></span><span style=display:flex><span>echo <span style=color:#a31515>"Hello cruel world"</span> > /mnt/hello.txt
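</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># a couple of optional sanity checks (my addition): confirm the Space is mounted</span>
</span></span><span style=display:flex><span>mount | grep s3fs
</span></span><span style=display:flex><span>df -h /mnt
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># to mount at boot, s3fs also supports an /etc/fstab entry roughly like this</span>
</span></span><span style=display:flex><span><span style=color:green># (a sketch; check the s3fs docs and use /etc/passwd-s3fs for credentials):</span>
</span></span><span style=display:flex><span><span style=color:green># s3fs#UNIQUE-NAME /mnt fuse _netdev,allow_other,url=https://ams3.digitaloceanspaces.com 0 0</span>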
</span></span></code></pre><p>After all this you can return to your browser, go to <a href=https://cloud.digitalocean.com/spaces>DigitalOcean
Spaces</a> and click on the space you
created. If the file hello.txt is present, you have successfully mounted the Space on your
machine and written data to it.<p>I chose the same region for my Droplet and my Space, but you don't have to. You
can use different regions; what that actually does to performance, I don't know.<p>Additional information on FUSE:<ul><li><a href=https://github.com/s3fs-fuse/s3fs-fuse>Github project page for s3fs</a><li><a href=https://en.wikipedia.org/wiki/Filesystem_in_Userspace>FUSE - Filesystem in Userspace</a></ul><h2 id=will-the-performance-degrade-over-time-and-over-different-sizes-of-objects>Will the performance degrade over time and over different sizes of objects?</h2><p>For this task I didn't want to just read and write text files or upload
images. I actually wanted to figure out whether using something like SQLite is viable
in this case.<h3 id=measurement-experiment-1-file-copy>Measurement experiment 1: File copy</h3><pre tabindex=0 style=background-color:#fff><code><span style=display:flex><span><span style=color:green># first we create some dummy files at different sizes</span>
</span></span><span style=display:flex><span>dd <span style=color:#00f>if</span>=/dev/zero of=10KB.dat bs=1024 count=10 <span style=color:green>#10KB</span>
</span></span><span style=display:flex><span>dd <span style=color:#00f>if</span>=/dev/zero of=100KB.dat bs=1024 count=100 <span style=color:green>#100KB</span>
</span></span><span style=display:flex><span>dd <span style=color:#00f>if</span>=/dev/zero of=1MB.dat bs=1024 count=1024 <span style=color:green>#1MB</span>
</span></span><span style=display:flex><span>dd <span style=color:#00f>if</span>=/dev/zero of=10MB.dat bs=1024 count=10240 <span style=color:green>#10MB</span>
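</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># a 100MB file was also used in the test; as noted below, copying it failed</span>
</span></span><span style=display:flex><span>dd <span style=color:#00f>if</span>=/dev/zero of=100MB.dat bs=1024 count=102400 <span style=color:green>#100MB</span>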
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># now we set time command to only return real</span>
</span></span><span style=display:flex><span>TIMEFORMAT=%R
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># now lets test it</span>
</span></span><span style=display:flex><span>(time cp 10KB.dat /mnt/) |& tee -a 10KB.results.txt
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># and now we automate</span>
</span></span><span style=display:flex><span><span style=color:green># this will perform the same operation 100 times</span>
</span></span><span style=display:flex><span><span style=color:green># this will output results into separate files based on object size</span>
</span></span><span style=display:flex><span>n=0; <span style=color:#00f>while</span> (( n++ < 100 )); <span style=color:#00f>do</span> (time cp 10KB.dat /mnt/10KB.$n.dat) |& tee -a 10KB.results.txt; <span style=color:#00f>done</span>
</span></span><span style=display:flex><span>n=0; <span style=color:#00f>while</span> (( n++ < 100 )); <span style=color:#00f>do</span> (time cp 100KB.dat /mnt/100KB.$n.dat) |& tee -a 100KB.results.txt; <span style=color:#00f>done</span>
</span></span><span style=display:flex><span>n=0; <span style=color:#00f>while</span> (( n++ < 100 )); <span style=color:#00f>do</span> (time cp 1MB.dat /mnt/1MB.$n.dat) |& tee -a 1MB.results.txt; <span style=color:#00f>done</span>
</span></span><span style=display:flex><span>n=0; <span style=color:#00f>while</span> (( n++ < 100 )); <span style=color:#00f>do</span> (time cp 10MB.dat /mnt/10MB.$n.dat) |& tee -a 10MB.results.txt; <span style=color:#00f>done</span>
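</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># a quick way to summarise a results file (my addition; tr handles a comma decimal separator)</span>
</span></span><span style=display:flex><span>tr ',' '.' < 10KB.results.txt | awk '{ s += $1 } END { printf "avg %.3fs over %d runs\n", s/NR, NR }'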
</span></span></code></pre><p>Files of 100MB were not transferred successfully and ended with an
error (cp: failed to close '/mnt/100MB.1.dat': Operation not permitted).<p>As I suspected, object size is not really that important. Sadly, I don't have the
time to test performance over longer periods. But if any of you do,
please send me your data; I would be interested in seeing the results.<p><strong>Here are the plotted results</strong><p>You can download the <a href=/posts/do-fuse/copy-benchmarks.tsv>raw results here</a>.
Measurements are in seconds.</p><script src=//cdn.plot.ly/plotly-latest.min.js></script><div id=copy-benchmarks></div><script>
(function(){
var request = new XMLHttpRequest();
request.open("GET", "/posts/do-fuse/copy-benchmarks.tsv", true);
request.onload = function() {
if (request.status >= 200 && request.status < 400) {
var payload = request.responseText.trim();
var tsv = payload.split("\n");
for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
var traces = [];
var headers = tsv[0];
tsv.shift();
Array.prototype.forEach.call(headers, function(el, idx) {
var x = [];
var y = [];
for (var j=0; j<tsv.length; j++) {
x.push(j);
y.push(parseFloat(tsv[j][idx].replace(",", ".")));
}
traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
});
var copy = Plotly.newPlot("copy-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 40, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } }, xaxis: { title: "fn(i)", titlefont: { size: 12 } } });
} else { }
};
request.onerror = function() { };
request.send(null);
})();
</script><p>As far as these tests show, performance is quite stable and predictable,
which is fantastic. But this is a small test spanning only a couple of
hours, so you should not trust it completely.<h3 id=measurement-experiment-2-sqlite-performanse>Measurement experiment 2: SQLite performance</h3><p>I was unable to use a database file directly from the mounted drive, so this is a no-go,
as I suspected. Instead, I executed the code below on a local disk just to get some
benchmarks. Each round runs DROPTABLE, CREATETABLE, INSERTMANY (1000 records),
FETCHALL and COMMIT, repeated 1000 times to generate statistics. As you can see,
the performance of SQLite is quite amazing. You could then potentially just copy the
file to the mounted drive and be done with it.<pre tabindex=0 style=background-color:#fff><code><span style=display:flex><span><span style=color:#00f>import</span> time
</span></span><span style=display:flex><span><span style=color:#00f>import</span> sqlite3
</span></span><span style=display:flex><span><span style=color:#00f>import</span> sys
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:#00f>if</span> len(sys.argv) < 4:
</span></span><span style=display:flex><span> print(<span style=color:#a31515>"usage: python sqlite-benchmark.py DB_PATH NUM_RECORDS REPEAT"</span>)
</span></span><span style=display:flex><span> exit()
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:#00f>def</span> data_iter(x):
</span></span><span style=display:flex><span> <span style=color:#00f>for</span> i <span style=color:#00f>in</span> range(x):
</span></span><span style=display:flex><span> <span style=color:#00f>yield</span> <span style=color:#a31515>"m"</span> + str(i), <span style=color:#a31515>"f"</span> + str(i*i)
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span>header_line = <span style=color:#a31515>"</span><span style=color:#a31515>%s</span><span style=color:#a31515>\t</span><span style=color:#a31515>%s</span><span style=color:#a31515>\t</span><span style=color:#a31515>%s</span><span style=color:#a31515>\t</span><span style=color:#a31515>%s</span><span style=color:#a31515>\t</span><span style=color:#a31515>%s</span><span style=color:#a31515>\n</span><span style=color:#a31515>"</span> % (<span style=color:#a31515>"DROPTABLE"</span>, <span style=color:#a31515>"CREATETABLE"</span>, <span style=color:#a31515>"INSERTMANY"</span>, <span style=color:#a31515>"FETCHALL"</span>, <span style=color:#a31515>"COMMIT"</span>)
</span></span><span style=display:flex><span><span style=color:#00f>with</span> open(<span style=color:#a31515>"sqlite-benchmarks.tsv"</span>, <span style=color:#a31515>"w"</span>) <span style=color:#00f>as</span> fp:
</span></span><span style=display:flex><span> fp.write(header_line)
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span>start_time = time.time()
</span></span><span style=display:flex><span>conn = sqlite3.connect(sys.argv[1])
</span></span><span style=display:flex><span>c = conn.cursor()
</span></span><span style=display:flex><span>end_time = time.time()
</span></span><span style=display:flex><span>result_time = CONNECT = end_time - start_time
</span></span><span style=display:flex><span>print(<span style=color:#a31515>"CONNECT: </span><span style=color:#a31515>%g</span><span style=color:#a31515> seconds"</span> % (result_time))
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span>start_time = time.time()
</span></span><span style=display:flex><span>c.execute(<span style=color:#a31515>"PRAGMA journal_mode=WAL"</span>)
</span></span><span style=display:flex><span>c.execute(<span style=color:#a31515>"PRAGMA temp_store=MEMORY"</span>)
</span></span><span style=display:flex><span>c.execute(<span style=color:#a31515>"PRAGMA synchronous=OFF"</span>)
</span></span><span style=display:flex><span>end_time = time.time()
</span></span><span style=display:flex><span>result_time = PRAGMA = end_time - start_time
</span></span><span style=display:flex><span>print(<span style=color:#a31515>"PRAGMA: </span><span style=color:#a31515>%g</span><span style=color:#a31515> seconds"</span> % (result_time))
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:#00f>for</span> i <span style=color:#00f>in</span> range(int(sys.argv[3])):
</span></span><span style=display:flex><span> print(<span style=color:#a31515>"#</span><span style=color:#a31515>%i</span><span style=color:#a31515>"</span> % (i))
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span> start_time = time.time()
</span></span><span style=display:flex><span> c.execute(<span style=color:#a31515>"drop table if exists test"</span>)
</span></span><span style=display:flex><span> end_time = time.time()
</span></span><span style=display:flex><span> result_time = DROPTABLE = end_time - start_time
</span></span><span style=display:flex><span> print(<span style=color:#a31515>"DROPTABLE: </span><span style=color:#a31515>%g</span><span style=color:#a31515> seconds"</span> % (result_time))
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span> start_time = time.time()
</span></span><span style=display:flex><span> c.execute(<span style=color:#a31515>"create table if not exists test(a,b)"</span>)
</span></span><span style=display:flex><span> end_time = time.time()
</span></span><span style=display:flex><span> result_time = CREATETABLE = end_time - start_time
</span></span><span style=display:flex><span> print(<span style=color:#a31515>"CREATETABLE: </span><span style=color:#a31515>%g</span><span style=color:#a31515> seconds"</span> % (result_time))
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span> start_time = time.time()
</span></span><span style=display:flex><span> c.executemany(<span style=color:#a31515>"INSERT INTO test VALUES (?, ?)"</span>, data_iter(int(sys.argv[2])))
</span></span><span style=display:flex><span> end_time = time.time()
</span></span><span style=display:flex><span> result_time = INSERTMANY = end_time - start_time
</span></span><span style=display:flex><span> print(<span style=color:#a31515>"INSERTMANY: </span><span style=color:#a31515>%g</span><span style=color:#a31515> seconds"</span> % (result_time))
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span> start_time = time.time()
</span></span><span style=display:flex><span> c.execute(<span style=color:#a31515>"select count(*) from test"</span>)
</span></span><span style=display:flex><span> res = c.fetchall()
</span></span><span style=display:flex><span> end_time = time.time()
</span></span><span style=display:flex><span> result_time = FETCHALL = end_time - start_time
</span></span><span style=display:flex><span> print(<span style=color:#a31515>"FETCHALL: </span><span style=color:#a31515>%g</span><span style=color:#a31515> seconds"</span> % (result_time))
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span> start_time = time.time()
</span></span><span style=display:flex><span> conn.commit()
</span></span><span style=display:flex><span> end_time = time.time()
</span></span><span style=display:flex><span> result_time = COMMIT = end_time - start_time
</span></span><span style=display:flex><span> print(<span style=color:#a31515>"COMMIT: </span><span style=color:#a31515>%g</span><span style=color:#a31515> seconds"</span> % (result_time))
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span>    print()
</span></span><span style=display:flex><span> log_line = <span style=color:#a31515>"</span><span style=color:#a31515>%f</span><span style=color:#a31515>\t</span><span style=color:#a31515>%f</span><span style=color:#a31515>\t</span><span style=color:#a31515>%f</span><span style=color:#a31515>\t</span><span style=color:#a31515>%f</span><span style=color:#a31515>\t</span><span style=color:#a31515>%f</span><span style=color:#a31515>\n</span><span style=color:#a31515>"</span> % (DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT)
</span></span><span style=display:flex><span> <span style=color:#00f>with</span> open(<span style=color:#a31515>"sqlite-benchmarks.tsv"</span>, <span style=color:#a31515>"a"</span>) <span style=color:#00f>as</span> fp:
</span></span><span style=display:flex><span> fp.write(log_line)
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span>start_time = time.time()
</span></span><span style=display:flex><span>conn.close()
</span></span><span style=display:flex><span>end_time = time.time()
</span></span><span style=display:flex><span>result_time = CLOSE = end_time - start_time
</span></span><span style=display:flex><span>print(<span style=color:#a31515>"CLOSE: </span><span style=color:#a31515>%g</span><span style=color:#a31515> seconds"</span> % (result_time))
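</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:green># sketch of the idea mentioned above (my addition): after closing, copy a</span>
</span></span><span style=display:flex><span><span style=color:green># snapshot of the database file to the mounted Space for other machines to read;</span>
</span></span><span style=display:flex><span><span style=color:green># "/mnt/snapshot.db" is an assumed destination path</span>
</span></span><span style=display:flex><span><span style=color:#00f>import</span> shutil
</span></span><span style=display:flex><span>shutil.copy2(sys.argv[1], <span style=color:#a31515>"/mnt/snapshot.db"</span>)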
</span></span></code></pre><p>You can download the <a href=/posts/do-fuse/sqlite-benchmarks.tsv>raw results here</a>. And
again, these results were produced on local block storage and do not represent the
capabilities of object storage. With the current approach and state of the test
code, that measurement cannot be done; I would need to make the Python code much more
robust, check locking, and so on.<div id=sqlite-benchmarks></div><script>
(function(){
var request = new XMLHttpRequest();
request.open("GET", "/posts/do-fuse/sqlite-benchmarks.tsv", true);
request.onload = function() {
if (request.status >= 200 && request.status < 400) {
var payload = request.responseText.trim();
var tsv = payload.split("\n");
for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
var traces = [];
var headers = tsv[0];
tsv.shift();
Array.prototype.forEach.call(headers, function(el, idx) {
var x = [];
var y = [];
for (var j=0; j<tsv.length; j++) {
x.push(j);
y.push(parseFloat(tsv[j][idx].replace(",", ".")));
}
traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
});
var sqlite = Plotly.newPlot("sqlite-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 50, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } } });
} else { }
};
request.onerror = function() { };
request.send(null);
})();
</script><h2 id=can-storage-be-mounted-on-multiple-machines-at-the-same-time-and-be-writable>Can storage be mounted on multiple machines at the same time and be writable?</h2><p>Well, this one didn't take long to test. And the answer is <strong>YES</strong>. I mounted the
space on both machines and measured the same performance on both. But
because a file is downloaded before a write and then uploaded on completion, there
could potentially be problems if another process tries to access the same
file.<h2 id=observations-and-conslusion>Observations and conclusion</h2><p>Using Spaces in this way makes it easier to access and manage files. But beyond
that, you would need to write additional code to make it play nice with your
applications.<p>Nevertheless, this was extremely simple to set up and use, and it is just
another excellent product in the DigitalOcean product line. I found this exercise
very valuable and am thinking about implementing some sort of mechanism for
SQLite, so data can be stored on Spaces and accessed by many VMs. For a project
where data doesn't need to be available in real time and can be a couple of
minutes stale, this would be very interesting. If any of you find this
proposal interesting, please write in the comment box below or shoot me an email
and I will keep you posted.</div></article></main><section><hr><h2>Posts from blogs I follow around the net</h2><ul><li><a href=https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWhyNotDirectoryToFilesystem target=_blank rel=noopener>One reason that ZFS can't turn a directory into a filesystem</a> — <a href=https://utcc.utoronto.ca/~cks/space/blog/>Chris's Wiki :: blog</a><div>One of the wishes that I and other people frequently have for ZFS
is the ability to take an existing directory (and everything
underneath it) in a ZFS filesystem and turn it into a sub-filesystem
of its own. One reason for wanting this is that a number of things
are set and controlled on a per-filesyst…<li><a href=http://www.landley.net/notes-2023.html#28-10-2023 target=_blank rel=noopener>October 28, 2023</a> — <a href=http://www.landley.net/notes-2023.html>Rob Landley's Blog Thing for 2023</a><div>Oh good grief, two of my least favorite licensing people, Larry Rosen
and Bradley Kuhn, are interacting on the OSI's license-discuss
list where the're doing
bad computer history and insisting that a guy Larry Rosen
coincidentally interviewed for a book years ago is clearly the origin of
somethin…<li><a href="http://offbeatpursuit.com:80/blog/?id=25" target=_blank rel=noopener>A fix by any other name</a> — <a href=http://offbeatpursuit.com:80/blog/>WLOG - blog</a><div>tags:
i2c, plan9
Another month, another file system.
Well, if you can’t fix it in software, fix it in hardware (looking at
you, bme680, we’re not
done yet). The show must go on, as they say, and I would like my
experiments to go on.
So a “new” addition to the environmental sensor family connected to
the h…<li><a href=https://mirzapandzo.com/next-image-url-parameter-is-valid-but-upstream-response-is-invalid target=_blank rel=noopener>Next/Image "url" parameter is valid but upstream response is invalid</a> — <a href=https://mirzapandzo.com/>Mirza Pandzo's Blog</a><div>Getting "url" parameter is valid but upstream response is invalid error with Next/Image on WSL2<li><a href=https://drewdevault.com/2023/10/13/Going-off-script.html target=_blank rel=noopener>Going off-script</a> — <a href=https://drewdevault.com>Drew DeVault's blog</a><div>There is a phenomenon in society which I find quite bizarre. Upon our entry to
this mortal coil, we are endowed with self-awareness, agency, and free will.
Each of the 8 billion members of this human race represents a unique person, a
unique worldview, and a unique agency. Yet, many of us have the sam…<li><a href=https://szymonkaliski.com/writing/2023-10-02-building-a-diy-pen-plotter/ target=_blank rel=noopener>Building a DIY Pen Plotter</a> — <a href=http://github.com/dylang/node-rss>Szymon Kaliski</a><div>This article documents my learnings from designing and building a DIY Pen Plotter during the summer of 2023.
My ultimate goal is to build my…<li><a href=https://neil.computer/notes/chart-of-accounts-for-startups-and-saas-companies/ target=_blank rel=noopener>Chart of Accounts for Startups and SaaS Companies</a> — <a href=https://neil.computer/>Neil Panchal</a><div>Accounting is fundamental to starting a business. You need to have a basic understanding of accounting principles and essential bookkeeping. I had to learn it. There was no choice. For filing taxes, your CPA is going to ask you for an Income Statement (also known as P/L statement). If<li><a href=https://journal.valeriansaliou.name/deploy-a-nomad-cluster-on-alpine-linux-with-vultr/ target=_blank rel=noopener>Deploy a Nomad Cluster on Alpine Linux with Vultr</a> — <a href=https://journal.valeriansaliou.name/>Valerian Saliou</a><div>After spending countless hours trying to understand how to deploy my apps on Kubernetes for the first time to host Mirage, an AI API service that I run, I ended up making myself a promise that the next app I work on would be using a more productive & simpler<li><a href=https://jcs.org/2023/10/25/wifi_da target=_blank rel=noopener>BlueSCSI Wi-Fi Desk Accessory 1.0 Released</a> — <a href=https://jcs.org/>joshua stein</a><div>BlueSCSI Wi-Fi Desk Accessory
1.0 has been released:
wifi_da-1.0.sit
(StuffIt 3 archive)
SHA256: ccfc9d27dd5da7412d10cef73b81119a1fec3848e4d1d88ff652a07ffdc6a69aSHA1: ff124972f202ceda6d7fa4788110a67ccda6a13a
This is the initial public release of my BlueSCSI Wi-Fi Desk Accessory for
classic MacOS.<li><a href=https://michael.stapelberg.ch/posts/2023-10-25-my-all-flash-zfs-network-storage-build/ target=_blank rel=noopener>My 2023 all-flash ZFS NAS (Network Storage) build</a> — <a href=https://michael.stapelberg.ch/>Michael Stapelbergs Website</a><div>For over 10 years now, I run two self-built NAS (Network Storage) devices which serve media (currently via Jellyfin) and run daily backups of all my PCs and servers.
In this article, I describe my goals, which hardware I picked for my new build (and why) and how I set it up.
Design Goals
I use my netw…</ul><p>Generated with <a href=https://git.sr.ht/~sircmpwn/openring target=_blank rel=noopener>openring</a>.</section><footer><hr><p><big><strong>Want to comment or have something to add?</strong></big><p>You can write me an email
at <a href=mailto:mitja.felicijan@gmail.com>mitja.felicijan@gmail.com</a> or
catch up with me <a href=https://telegram.me/mitjafelicijan target=_blank>on Telegram</a>.<hr><p>This website does not track you. Content is made available under
the <a href=https://creativecommons.org/licenses/by/4.0/ target=_blank rel=noreferrer>CC BY 4.0 license</a> unless specified
otherwise. Blog is also available as <a href=/index.xml target=_blank>RSS feed</a>.</footer><script>
window.va = window.va || function () { (window.vaq = window.vaq || []).push(arguments); };
</script><script defer src=/_vercel/insights/script.js></script>