-rw-r--r--src/experiments/encoding-binary-data-into-dna-sequence.md345
-rw-r--r--src/experiments/profiling-python-web-applications-with-visual-tools.md184
-rw-r--r--src/experiments/simple-iot-application.md486
-rw-r--r--src/experiments/using-digitalocean-spaces-object-storage-with-fuse.md260
4 files changed, 1275 insertions, 0 deletions
diff --git a/src/experiments/encoding-binary-data-into-dna-sequence.md b/src/experiments/encoding-binary-data-into-dna-sequence.md
new file mode 100644
index 0000000..cc42bd7
--- /dev/null
+++ b/src/experiments/encoding-binary-data-into-dna-sequence.md
@@ -0,0 +1,345 @@
title: Encoding binary data into DNA sequence
date: 2019-01-03
tags: experiment
hide: false
----

## Initial thoughts

Imagine a world where you could go outside, take a leaf from a tree, put it through your personal DNA sequencer, and get data like music, videos or computer programs from it. Well, this is all possible now. It has not been done on a large scale because creating DNA strands is still quite expensive, but it is possible.

Encoding data into a DNA sequence is a relatively simple process once you understand the relationship between binary data and nucleotides, and scientists have been making large leaps in this field in order to provide a viable long-term storage solution for our data, one that could potentially outlive our species in case of a global disaster. We could imprint all the world's knowledge into plants and ensure its survival.

A more optimistic use for this technology would be easier storage of the ever-growing data we produce every day. Once machines for sequencing DNA become fast and cheap enough, this could mean the next evolution of data storage, abandoning classical hard drives and solid state drives in data warehouses.

As things currently stand this is still not viable, but it is quite an amazing and cool technology.

My interests in this field are purely in the encoding process and experimental testing, mainly because I don't have access to these expensive machines. My initial goal was to create a toolkit that anybody can use to encode their data into a proper DNA sequence.

## Glossary

**deoxyribose**
: A five-carbon sugar molecule with a hydrogen atom rather than a hydroxyl group in the 2′ position; the sugar component of DNA nucleotides.

**double helix**
: The molecular shape of DNA, in which two strands of nucleotides wind around each other in a spiral shape.

**nitrogenous base**
: A nitrogen-containing molecule that acts as a base; often referring to one of the purine or pyrimidine components of nucleic acids.

**phosphate group**
: A molecular group consisting of a central phosphorus atom bound to four oxygen atoms.

**RGB**
: The RGB color model is an additive color model in which red, green and blue light are added together in various ways to reproduce a broad array of colors.

**GCC**
: The GNU Compiler Collection is a compiler system produced by the GNU Project supporting various programming languages.

## Data encoding

**TL;DR:** Encoding involves the use of a code to change original data into a form that can be used by an external process [^1].

Encoding is the process of converting data into a format required for a number of information processing needs, including:

- Program compiling and execution
- Data transmission, storage and compression/decompression
- Application data processing, such as file conversion

Encoding can have two meanings [^1]:

- In computer technology, encoding is the process of applying a specific code, such as letters, symbols and numbers, to data for conversion into an equivalent cipher.
- In electronics, encoding refers to analog-to-digital conversion.

## Quick history of DNA

- **1869** - Friedrich Miescher identifies "nuclein".
- **1900s** - The Eugenics Movement.
- **1900** - Mendel's theories are rediscovered by researchers.
- **1944** - Oswald Avery identifies DNA as the 'transforming principle'.
- **1952** - Rosalind Franklin photographs crystallized DNA fibres.
- **1953** - James Watson and Francis Crick discover the double helix structure of DNA.
- **1965** - Marshall Nirenberg is the first person to sequence the bases in each codon.
- **1983** - Huntington's disease is the first mapped genetic disease.
- **1990** - The Human Genome Project begins.
- **1995** - *Haemophilus influenzae* is the first bacterium to have its genome sequenced.
- **1996** - Dolly the sheep is cloned.
- **1999** - The first human chromosome is decoded.
- **2000** - The genetic code of the fruit fly is decoded.
- **2002** - The mouse is the first mammal to have its genome decoded.
- **2003** - The Human Genome Project is completed.
- **2013** - DNA Worldwide and Eurofins Forensic discover identical twins have differences in their genetic makeup [^2].

## What is DNA?

Deoxyribonucleic acid is a self-replicating material which is **present in nearly all living organisms** as the main constituent of chromosomes. It is the **carrier of genetic information**.

> The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, the carbon in our apple pies were made in the interiors of collapsing stars. We are made of starstuff.
>
> **-- Carl Sagan, Cosmos**

A nucleotide in DNA consists of a sugar (deoxyribose), one of four bases (cytosine (C), thymine (T), adenine (A), guanine (G)), and a phosphate group. Cytosine and thymine are pyrimidine bases, while adenine and guanine are purine bases. The sugar and the base together are called a nucleoside.

![DNA](/files/dna-sequence/dna-basics.jpg#center)

*DNA (a) forms a double stranded helix, and (b) adenine pairs with thymine and cytosine pairs with guanine. (credit a: modification of work by Jerome Walker, Dennis Myts) [^3]*

## Encode binary data into DNA sequence

As an input file you can use any file you want:

- ASCII files,
- compiled programs,
- multimedia files (MP3, MP4, MKV, etc.),
- images,
- database files,
- etc.

Note: If you copied all the bytes from RAM to a file, or piped any data to a file, you could encode that data as well, as long as you provide a file pointer to the encoder.

### Basic Encoding

As already mentioned, Basic Encoding is based on a simple mapping. DNA is composed of 4 nucleotides (adenine, cytosine, guanine, thymine; usually referred to by their first letters), so a single nucleotide can encode

$$ \log_2(4) = \log_2(2^2) = 2 \text{ bits} $$

In this way, we are able to use the 4 bases that compose the DNA strand to encode each byte of data with four nucleotides.

| Two bits | Nucleotide       |
| -------- | ---------------- |
| 00       | **A** (Adenine)  |
| 01       | **C** (Cytosine) |
| 10       | **G** (Guanine)  |
| 11       | **T** (Thymine)  |

With this in mind we can encode any data simply by converting it, two bits at a time, into nucleotides.

```pascal
{ Algorithm 1: Naive byte array to DNA encode }
procedure EncodeToDNASequence(f): string
begin
  enc: string
  while not eof(f) do
    c: byte := buffer[0]                { Read 1 byte from buffer }
    bin: string := sprintf('%08b', c)   { Convert to binary string }
    for e in [0, 2, 4, 6] do            { Walk the four two-bit pairs }
      if bin[e] = '0' and bin[e+1] = '0' then      { 00 - A (Adenine) }
        enc += 'A'
      else if bin[e] = '0' and bin[e+1] = '1' then { 01 - C (Cytosine) }
        enc += 'C'
      else if bin[e] = '1' and bin[e+1] = '0' then { 10 - G (Guanine) }
        enc += 'G'
      else if bin[e] = '1' and bin[e+1] = '1' then { 11 - T (Thymine) }
        enc += 'T'
  return enc                            { Return DNA sequence }
end
```
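The same two-bit mapping can be sketched in a few lines of runnable Python (the function and variable names are illustrative, not part of the original toolkit; a decoder is included to show that the mapping from the table above is lossless):

```python
# Two-bit to nucleotide mapping from the table above: 00-A, 01-C, 10-G, 11-T.
ENCODE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DECODE = {base: bits for bits, base in ENCODE.items()}

def encode_to_dna(data: bytes) -> str:
    """Encode each byte as four nucleotides, most significant bit pair first."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(ENCODE[(byte >> shift) & 0b11])
    return "".join(out)

def decode_from_dna(sequence: str) -> bytes:
    """Reverse the mapping: every four nucleotides become one byte."""
    out = bytearray()
    for i in range(0, len(sequence), 4):
        byte = 0
        for base in sequence[i:i + 4]:
            byte = (byte << 2) | DECODE[base]
        out.append(byte)
    return bytes(out)

print(encode_to_dna(b"Hi"))         # CAGACGGC
print(decode_from_dna("CAGACGGC"))  # b'Hi'
```

Round-tripping arbitrary bytes through these two functions returns the original data, which is the property any real DNA storage scheme must preserve.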

Another encoding is **Goldman encoding**. It helps mitigate nonsense mutations (an amino acid codon replaced by a stop codon), which are the most problematic kind during translation because they lead to truncated amino acid sequences, which in turn result in truncated proteins. [^4]

[Where to store big data? In DNA: Nick Goldman at TEDxPrague](https://www.youtube.com/watch?v=a4PiGWNsIEU)

### FASTA file format

In bioinformatics, FASTA format is a text-based format for representing either nucleotide sequences or peptide sequences, in which nucleotides or amino acids are represented using single-letter codes. The format also allows sequence names and comments to precede the sequences. It originates from the FASTA software package, but has since become a standard in the field of bioinformatics. [^5]

Originally, the first line in a FASTA file, starting either with a ">" (greater-than) symbol or, less frequently, a ";" (semicolon), was taken as a comment, and subsequent lines starting with a semicolon would be ignored by software. Since the only comment used was the first one, it quickly came to hold a summary description of the sequence, often starting with a unique library accession number, and over time it has become commonplace to always use ">" for the first line and to not use ";" comments (which would otherwise be ignored).

```text
;LCBO - Prolactin precursor - Bovine
; a sample sequence in FASTA format
MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS
EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL
VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED
ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC*

>MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken
ADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID
FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA
DIDGDGQVNYEEFVQMMTAK*

>gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus]
LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV
EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG
LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL
GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX
IENY
```

FASTA format was extended by the [FASTQ](https://en.wikipedia.org/wiki/FASTQ_format) format from the [Sanger Centre](https://www.sanger.ac.uk/) in Cambridge.

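Writing such a file takes only a few lines of Python. This is a minimal sketch (`write_fasta` is a hypothetical helper of my own); the `SEQ1` defline and 60-character line wrapping mirror the defaults of the `dnae-encode` tool described later in this article:

```python
import textwrap

def write_fasta(sequence: str, path: str, name: str = "SEQ1", columns: int = 60) -> None:
    """Write a sequence as FASTA: one '>' defline, then lines wrapped at `columns`."""
    with open(path, "w") as fh:
        fh.write(">" + name + "\n")
        # textwrap.wrap breaks the unbroken sequence string at the column limit
        fh.write("\n".join(textwrap.wrap(sequence, columns)) + "\n")

write_fasta("GATTACA" * 20, "demo.fa")
```

Reading the file back and joining all lines after the defline recovers the original sequence.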
### PNG encoded DNA sequence

| Nucleotide     | RGB         | Color name |
| -------------- | ----------- | ---------- |
| A -> Adenine   | (0,0,255)   | Blue       |
| G -> Guanine   | (0,100,0)   | Green      |
| C -> Cytosine  | (255,0,0)   | Red        |
| T -> Thymine   | (255,255,0) | Yellow     |

With this in mind we can create a simple algorithm that produces a PNG representation of a DNA sequence.

```pascal
{ Algorithm 2: Naive DNA to PNG encode from FASTA file }
procedure EncodeDNASequenceToPNG(f)
begin
  i: image
  while not eof(f) do
    c: char := buffer[0]            { Read 1 char from buffer }
    case c of
      'A': color := RGB(0, 0, 255)  { Blue }
      'G': color := RGB(0, 100, 0)  { Green }
      'C': color := RGB(255, 0, 0)  { Red }
      'T': color := RGB(255, 255, 0) { Yellow }
    drawRect(i, [x, y], color)      { Draw block at current position }
    x += 1                          { Advance to the next position }
  save(i)                           { Save PNG image }
end
```

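Here is a runnable Python sketch of the same idea, using only the standard library (the PNG chunks are written by hand so no imaging library is needed; `dna_to_png` and the single-row layout are my own simplification, not the toolkit's actual code):

```python
import struct
import zlib

# RGB colors from the table above.
COLORS = {"A": (0, 0, 255), "G": (0, 100, 0), "C": (255, 0, 0), "T": (255, 255, 0)}

def dna_to_png(sequence: str, path: str, scale: int = 10) -> None:
    """Render one colored square per nucleotide on a single row and save as PNG."""
    width, height = len(sequence) * scale, scale
    # One scanline of RGB bytes: each base becomes `scale` pixels of its color.
    row = bytearray()
    for base in sequence:
        row += bytes(COLORS[base]) * scale
    # Each PNG scanline starts with a filter byte (0 = no filter).
    raw = b"".join(b"\x00" + bytes(row) for _ in range(height))

    def chunk(tag: bytes, data: bytes) -> bytes:
        # PNG chunk: length, tag, data, CRC over tag+data.
        body = tag + data
        return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))

    png = (b"\x89PNG\r\n\x1a\n"
           # IHDR: width, height, bit depth 8, color type 2 (truecolor), no interlace.
           + chunk(b"IHDR", struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0))
           + chunk(b"IDAT", zlib.compress(raw))
           + chunk(b"IEND", b""))
    with open(path, "wb") as fh:
        fh.write(png)

dna_to_png("GATTACA", "dna-sketch.png")
```

The resulting file opens in any image viewer; a longer sequence simply produces a wider image.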
## Encoding text file in practice

In this example we will take a simple text file as our input stream for encoding. The file contains a quote from Niels Bohr and is saved as a .txt file.

> How wonderful that we have met with a paradox. Now we have some hope of making progress.
> ― Niels Bohr

First we encode the text file into a FASTA file.

```bash
./dnae-encode -i quote.txt -o quote.fa
2019/01/10 00:38:29 Gathering input file stats
2019/01/10 00:38:29 Starting encoding ...
 106 B / 106 B [==================================] 100.00% 0s
2019/01/10 00:38:29 Saving to FASTA file ...
2019/01/10 00:38:29 Output FASTA file length is 438 B
2019/01/10 00:38:29 Process took 987.263µs
2019/01/10 00:38:29 Done ...
```

The output file `quote.fa` contains the encoded DNA sequence in ASCII format. Note the size: every input byte becomes four nucleotides, so 106 B of text expand to 424 sequence characters, which together with the `>SEQ1` defline and newlines gives the 438 B reported above.

```text
>SEQ1
GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA
GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA
ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA
ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT
GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT
GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC
AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC
AACC
```

Then we take the FASTA file from the previous operation and encode its data into a PNG.

```bash
./dnae-png -i quote.fa -o quote.png
2019/01/10 00:40:09 Gathering input file stats ...
2019/01/10 00:40:09 Deconstructing FASTA file ...
2019/01/10 00:40:09 Compositing image file ...
 424 / 424 [==================================] 100.00% 0s
2019/01/10 00:40:09 Saving output file ...
2019/01/10 00:40:09 Output image file length is 1.1 kB
2019/01/10 00:40:09 Process took 19.036117ms
2019/01/10 00:40:09 Done ...
```

After encoding into PNG format the file looks like this.

![Encoded Quote in PNG format](/files/dna-sequence/quote.png)

The larger the input stream, the larger the PNG file will be.

A basic Hello World C program compiled with [GCC](https://www.gnu.org/software/gcc/) would [look like this](/files/dna-sequence/sample.png).

```c
// gcc -O3 -o sample sample.c
#include <stdio.h>

int main(void) {
    printf("Hello, world!\n");
    return 0;
}
```

## Toolkit for encoding data

I have created a toolkit with two main programs:

- dnae-encode (encodes a file into a FASTA file)
- dnae-png (encodes a FASTA file into a PNG)

The toolkit with full source code is available at [github.com/mitjafelicijan/dna-encoding](https://github.com/mitjafelicijan/dna-encoding).

### dnae-encode

```bash
> ./dnae-encode --help
usage: dnae-encode --input=INPUT [<flags>]

A command-line application that encodes file into DNA sequence.

Flags:
      --help             Show context-sensitive help (also try --help-long and --help-man).
  -i, --input=INPUT      Input file (ASCII or binary) which will be encoded into DNA sequence.
  -o, --output="out.fa"  Output file which stores DNA sequence in FASTA format.
  -s, --sequence=SEQ1    The description line (defline) or header/identifier line, gives a name and/or a unique identifier for the sequence.
  -c, --columns=60       Row characters length (no more than 120 characters). Devices preallocate fixed line sizes in software.
      --version          Show application version.
```

### dnae-png

```bash
> ./dnae-png --help
usage: dnae-png --input=INPUT [<flags>]

A command-line application that encodes FASTA file into PNG image.

Flags:
      --help              Show context-sensitive help (also try --help-long and --help-man).
  -i, --input=INPUT       Input FASTA file which will be encoded into PNG image.
  -o, --output="out.png"  Output file in PNG format that represents DNA sequence in graphical way.
  -s, --size=10           Size of pairings of DNA bases on image in pixels (lower resolution lower file size).
      --version           Show application version.
```


## Benchmarks

First we generate some binary sample data with dd.

```bash
dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=1KB.bin bs=1KB count=1 iflag=fullblock
```

Our freshly generated 1KB file looks something like this (it's full of garbage data, as intended).

![Sample binary file 1KB](/files/dna-sequence/sample-binary-file.png)

We create the following binary files:

- 1KB.bin
- 10KB.bin
- 100KB.bin
- 1MB.bin
- 10MB.bin
- 100MB.bin

After this we create FASTA files for all the binary files by encoding them into DNA sequences.

```bash
./dnae-encode -i 100MB.bin -o 100MB.fa
```

Then we GZIP all the FASTA files to see how much they can be compressed.

```bash
gzip -9 < 10MB.fa > 10MB.fa.gz
```

[Download ODS file with benchmarks](/files/dna-sequence/benchmarks.ods).

## References

[^1]: https://www.techopedia.com/definition/948/encoding
[^2]: https://www.dna-worldwide.com/resource/160/history-dna-timeline
[^3]: https://opentextbc.ca/biology/chapter/9-1-the-structure-of-dna/
[^4]: https://arxiv.org/abs/1801.04774
[^5]: https://en.wikipedia.org/wiki/FASTA_format
diff --git a/src/experiments/profiling-python-web-applications-with-visual-tools.md b/src/experiments/profiling-python-web-applications-with-visual-tools.md
new file mode 100644
index 0000000..29e16d7
--- /dev/null
+++ b/src/experiments/profiling-python-web-applications-with-visual-tools.md
@@ -0,0 +1,184 @@
title: Profiling Python web applications with visual tools
date: 2017-04-21
tags: experiment
hide: false
----

I have been profiling my software with KCachegrind for a long time now, and I was missing this option when developing APIs or other web services. I always knew it was possible but never really took the time to dive into it.

Before we begin there are some requirements. We will need to:

- implement [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) in our web app,
- convert the output to [callgrind](http://valgrind.org/docs/manual/cl-manual.html) format with [pyprof2calltree](https://pypi.python.org/pypi/pyprof2calltree/),
- visualize the data with [KCachegrind](http://kcachegrind.sourceforge.net/html/Home.html) or [Profiling Viewer](http://www.profilingviewer.com/).

If you are using MacOS you should check out [Profiling Viewer](http://www.profilingviewer.com/) or [MacCallGrind](http://www.maccallgrind.com/).

![KCachegrind](/files/kcachegrind.png)

We will be dividing this post into two main parts:

- writing a simple web service,
- visualizing the profile of this web service.

## Simple web-service

Let's use virtualenv so we don't pollute our base system. If you don't have virtualenv installed on your system, you can install it with pip.

```bash
# let's install virtualenv globally
$ sudo pip install virtualenv

# let's also install pyprof2calltree globally
$ sudo pip install pyprof2calltree

# now we create the project
$ mkdir demo-project
$ cd demo-project/

# now let's create a folder where we will store profiles
$ mkdir prof

# now we create an empty virtualenv in the venv/ folder
$ virtualenv --no-site-packages venv

# we now need to activate the virtualenv
$ source venv/bin/activate

# you can check if the virtualenv was correctly initialized by
# checking where your python interpreter is located;
# if the command below points to your created directory and not some
# system dir like /usr/bin/python then everything is fine
$ which python

# we can check now if all is good ➜ if ok a couple of
# lines will be displayed
$ pip freeze
# appdirs==1.4.3
# packaging==16.8
# pyparsing==2.2.0
# six==1.10.0

# now we are ready to install bottlepy ➜ a web micro-framework
$ pip install bottle

# you can deactivate the virtualenv but you will then go
# back to the system domain ➜ for now don't deactivate
$ deactivate
```

We are now ready to write a simple web service. Let's create a file app.py and paste the code below into this newly created file.

```python
# -*- coding: utf-8 -*-

import bottle
import random
import cProfile

app = bottle.Bottle()

# this decorator wraps a function, profiles it, and then
# saves the profile to the subfolder prof/function-name.prof;
# in our example only the awesome_random_number function will
# be profiled because it has @do_cprofile applied
def do_cprofile(func):
    def profiled_func(*args, **kwargs):
        profile = cProfile.Profile()
        try:
            profile.enable()
            result = func(*args, **kwargs)
            profile.disable()
            return result
        finally:
            profile.dump_stats("prof/" + str(func.__name__) + ".prof")
    return profiled_func


# we enable profiling for a specific function by placing
# @do_cprofile above the function declaration
@app.route("/")
@do_cprofile
def awesome_random_number():
    awesome_random_number = random.randint(0, 100)
    return "awesome random number is " + str(awesome_random_number)

@app.route("/test")
def test():
    return "dummy test"

if __name__ == '__main__':
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 4000
    )

# run with 'python app.py'
# open browser 'http://0.0.0.0:4000'
```

When the browser hits the awesome\_random\_number() function, a profile is created in the prof/ subfolder.

## Visualize profile

Now let's create the callgrind format from this cProfile output.

```bash
$ cd prof/
$ pyprof2calltree -i awesome_random_number.prof
# this creates an 'awesome_random_number.prof.log' file in the same folder
```

This file can be opened with the visualizing tools listed above. In this case we will be using Profiling Viewer on MacOS. (You can open the image in a new tab.) As you can see from this example, it shows the hierarchy and execution order of your code.

![Profiling Viewer](/files/profiling-viewer.png)

> Make sure you convert the cProfile output every time you want to refresh and take a look at your possible optimizations, because cProfile updates the .prof file every time the browser hits the function.

This is just a simple example, but when you are developing real-life applications this can be very illuminating, especially for seeing which parts of your code are bottlenecks and need to be optimized.
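If you just want a quick look at a .prof file without a GUI, the stdlib pstats module can print the same data in the terminal. A self-contained sketch (in the setup above you would load prof/awesome\_random\_number.prof instead of the demo file generated here):

```python
import cProfile
import pstats

# Generate a small .prof file so this snippet runs on its own;
# this mirrors what the do_cprofile decorator does per request.
profile = cProfile.Profile()
profile.enable()
sum(i * i for i in range(100000))
profile.disable()
profile.dump_stats("demo.prof")

# Load it back and print the hottest entries, like the visual tools do.
stats = pstats.Stats("demo.prof")
stats.sort_stats("cumulative").print_stats(5)  # top 5 by cumulative time
```

This is handy on a remote server where no graphical tool is available.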

## Update 2017-04-22

Reddit user [mvt](https://www.reddit.com/user/mvt) also recommended [SnakeViz](https://jiffyclub.github.io/snakeviz/), an awesome web-based profile visualizer that directly takes the output of the [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) module.

<div class="reddit-embed" data-embed-media="www.redditmedia.com" data-embed-parent="false" data-embed-live="false" data-embed-uuid="583880c1-002e-41ed-a373-020a0ef2cff9" data-embed-created="2017-04-22T19:46:54.810Z"><a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/dgljhsb/">Comment</a> from discussion <a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/">Profiling Python web applications with visual tools</a>.</div><script async src="https://www.redditstatic.com/comment-embed.js"></script>

```bash
# let's install it globally as well
$ sudo pip install snakeviz

# now let's visualize
$ cd prof/
$ snakeviz awesome_random_number.prof
# this automatically opens a browser window and
# shows the visualized profile
```

![SnakeViz](/files/snakeviz.png)

Reddit user [ccharles](https://www.reddit.com/user/ccharles) suggested a better way of installing pip software: targeting the user level instead of using sudo.

<div class="reddit-embed" data-embed-media="www.redditmedia.com" data-embed-parent="false" data-embed-live="false" data-embed-uuid="f4f0459e-684d-441e-bebe-eb49b2f0a31d" data-embed-created="2017-04-22T19:46:10.874Z"><a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/dglpzkx/">Comment</a> from discussion <a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/">Profiling Python web applications with visual tools</a>.</div><script async src="https://www.redditstatic.com/comment-embed.js"></script>

```bash
# first we need to add this path to our $PATH variable;
# we do this by adding this line at the end of your
# ~/.bashrc file
PATH=$PATH:$HOME/.local/bin/

# in order to use the new configuration you can close
# and reopen the terminal, or reload the .bashrc file
$ source ~/.bashrc

# now let's test if the new directory is present in $PATH
$ echo $PATH

# now we can install at the user level by adding --user
# without the use of sudo
$ pip install snakeviz --user
```

Or, as suggested by [mvt](https://www.reddit.com/user/mvt), you can use [pipsi](https://github.com/mitsuhiko/pipsi).
diff --git a/src/experiments/simple-iot-application.md b/src/experiments/simple-iot-application.md
new file mode 100644
index 0000000..b8744e6
--- /dev/null
+++ b/src/experiments/simple-iot-application.md
@@ -0,0 +1,486 @@
title: Simple IOT application supported by real-time monitoring and data history
date: 2017-08-11
tags: experiment
hide: false
----

## Initial thoughts

I have been developing this kind of application for the better part of the last 5 years, and people keep asking me how to approach developing such applications, so I will give explaining it a try here.

IOT applications are really no different from any other kind of application. We have data that needs to be collected and visualized in some form of tables or charts. The main difference is that most of the time this data is collected by some kind of device that is foreign to a developer who mainly operates in the web domain. But fear not, it's not that different from writing some JavaScript.

There are many devices able to transmit data over a wireless or wired network out of the box, but for the sake of example we will be using the commonly known Arduino with a wireless module already on the board → [Arduino MKR1000](https://store.arduino.cc/arduino-mkr1000).

In order to make this little project as accessible to others as possible I will try to make it as inexpensive as possible. By this I mean that I will avoid using hosted virtual servers and will use my own laptop as a server. You must, however, buy an Arduino MKR1000 to follow the steps below. If you later want to deploy this software I would suggest [DigitalOcean](https://www.digitalocean.com) → their smallest VPS makes this one of the most affordable options out there. Please note that this software will not run on stock web hosting that only supports LAMP (Linux, Apache, MySQL, and PHP).

_But before we begin, please note that this is strictly experimental code and not well optimized. There are much better ways to handle some aspects of the application, but those require much deeper knowledge of the technology, which is not needed for an example like this._

**Development steps**

1. A simple Python API that will receive and store incoming data.
2. Prototype C++ code that will read "sensor data" and transmit it to the API.
3. Data visualization with charts → extends the Python web application.

Steps 1 and 3 will share the same web application. One route will be dedicated to the API and another to serving HTML with a chart.

The schema below represents what we will try to achieve and how the different parts relate to each other.

![Overview](/files/iot-application/simple-iot-application-overview.svg)

## Simple Python API

I have always been a fan of simplicity, so we will be using [Bottle: Python Web Framework](https://bottlepy.org/docs/dev/). It is a single-file web framework that seriously simplifies working with routes and templating, and it has a built-in web server that satisfies our needs in this case.

First we need to install the bottle package. This can be done by downloading ```bottle.py``` and placing it in the root of your application, or by using pip: ```pip install bottle --user```.

If you are using Linux or MacOS then Python is already installed. If you want to test this on Windows, please install [Python for Windows](https://www.python.org/downloads/windows/). There may be some problems with the path when you try to launch ```python webapp.py```, so please take care of this before you continue.

### Basic web application

The most basic bottle application is quite simple. Paste the code below into a ```webapp.py``` file and save it.

```python
# -*- coding: utf-8 -*-

import bottle

# initializing bottle app
app = bottle.Bottle()

# triggered when / is accessed from browser
# only accepts GET → no POST allowed
@app.route("/", method=["GET"])
def route_default():
    return "howdy from python"

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

To run this simple application, open a command prompt or terminal on your machine, go to the folder containing the file and type ```python webapp.py```. If everything goes OK, open your web browser and point it to ```http://0.0.0.0:5000```.

If you would like to change the port of your application (to, say, port 80) and not use root to run your app, this presents a problem. TCP/IP port numbers below 1024 are privileged ports → this is a security feature. So, for both simplicity and security, use a port number above 1024, as I did with port 5000.

If this fails at any point, please fix it before you continue, because nothing below will work otherwise.

We use 0.0.0.0 as the default host so that the app is available over your local network. If you find your local IP with ```ifconfig``` and try accessing the site from your phone (if it is on the same network/router as your machine), it should work as well (an example of such an address is ```http://192.168.1.15:5000```). This is a must-have, because the Arduino will be accessing this application to send its data.

### Web application security

There is a lot to be said about security; it is the topic of many books. It cannot all be covered here, but to establish some basic security → you should always use SSL with your application. Fantastic free certificates are available from [Let's Encrypt - Free SSL/TLS Certificates](https://letsencrypt.org). With an SSL certificate installed, you should then make use of HTTP headers and send your "API key" via a header. If the key is sent via a header, it is encrypted by SSL and travels encrypted over the network. Never send your API key as a GET parameter like ```http://example.com/?api_key=somekeyvalue```; a key sent that way is visible in logs and to network sniffers.

There is a fantastic article describing some aspects of security: [11 Web Application Security Best Practices](https://www.keycdn.com/blog/web-application-security-best-practices/). Please check it out.

### Simple API for writing data-points

We will now take the boilerplate code from the example above and extend it to write data received by the API to local storage. For this example I will use SQLite3, because it plays well with Python and can store quite a large amount of data. I have used it to collect gigabytes of data in a single database without any corruption or problems → your experience may vary.

To avoid writing raw SQL I will be using [Dataset: databases for lazy people](https://dataset.readthedocs.io/en/latest/index.html). This package abstracts SQL away and simplifies writing and reading data from a database. You should install this package with pip: ```pip install dataset --user```.

Because the API will use the POST method, I will test that the code works correctly using the [Restlet Client for Google Chrome](https://chrome.google.com/webstore/detail/restlet-client-rest-api-t/aejoelaoggembcahagimdiliamlcdmfm). This tool also allows you to set headers → needed for basic security with an API key.

To quickly generate passwords or API keys I usually use this nifty website: [RandomKeygen](https://randomkeygen.com/).

Copy and paste the code below over your previous code in ```webapp.py```.

```python
# -*- coding: utf-8 -*-

import time
import bottle
import random
import dataset

# initializing bottle app
app = bottle.Bottle()

# connects to sqlite database
# check_same_thread=False allows using it in multi-threaded mode
app.config["dsn"] = dataset.connect("sqlite:///data.db?check_same_thread=False")

# api key that will be used in Arduino code
app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH"

# triggered when /api is accessed from browser
# only accepts POST → no GET allowed
@app.route("/api", method=["POST"])
def route_default():
    status = 400
    ts = int(time.time())  # current timestamp
    value = bottle.request.body.read()  # data from device
    api_key = bottle.request.get_header("Api_Key")  # api key from header

    # outputs received data to console for debugging
    print ">>> {} :: {}".format(value, api_key)

    # if api_key is correct and value is present
    # then writes the data point to the point table
    if api_key == app.config["api_key"] and value:
        app.config["dsn"]["point"].insert(dict(ts=ts, value=value))
        status = 200

    # we only need to return the status
    return bottle.HTTPResponse(status=status, body="")

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

To run this, simply go to the folder containing the Python file and run ```python webapp.py``` from a terminal. If everything goes well you should have a simple API available via the POST method on the /api route.

After testing the service with Restlet Client you should be able to see your data in the database file ```data.db```.

![REST settings example](/files/iot-application/iot-rest-example.png)

You can also check the contents of the new database file with a desktop client for SQLite → [DB Browser for SQLite](http://sqlitebrowser.org/).

![SQLite database example](/files/iot-application/iot-sqlite-db.png)

The table structure is as simple as it gets: we have ts (timestamp) and value (the value from the Arduino). As you can see, the timestamp is generated on the API side. If you happened to have an accurate clock on the Arduino, it would be better to generate the timestamp there and send it along with the value. This would be particularly useful if we were collecting sensor data at a higher frequency and then sending it to the API in bulk.
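To illustrate the bulk idea, here is a small sketch (my own illustration, not part of the project code) that collects readings with device-side timestamps and packs them into a single JSON payload that one POST request could deliver:

```python
import json
import time

def make_batch(readings):
    # readings: list of (timestamp, value) pairs collected on the device
    # returns a JSON string suitable for a single bulk POST
    return json.dumps([{"ts": ts, "value": v} for ts, v in readings])

# a few fake readings, each carrying its own timestamp
now = int(time.time())
batch = make_batch([(now, 42), (now + 1, 43), (now + 2, 44)])
print(batch)
```

On the API side the payload would then be decoded with `json.loads` and each point inserted with its own ts instead of the server-generated one.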

If you deploy this app with uWSGI in multi-threaded mode, use a DSN (Data Source Name) URL with ```?check_same_thread=False```.

OK, now that we have a working API with some basic security so unwanted people cannot post data to your database, we can proceed further and program the Arduino to send data to the API.

## Sending data to API with Arduino MKR1000

First of all, you need an MKR1000 module and a micro-USB cable to proceed. If you have ever done any work with Arduino, you know that you also need the [Arduino IDE](https://www.arduino.cc/en/Main/Software). Follow the link to download and install the IDE. Once that is done and you have successfully run the blink example, proceed to the next step.

In order to use the wireless capabilities of the MKR1000 you first need to install the [WiFi101 library](https://www.arduino.cc/en/Reference/WiFi101) in the Arduino IDE. Check before installing → you may already have it.

The code below is a working example that sends data to the API. Before you test it, make sure the Python web application is running. Then change the settings for WiFi, the API endpoint and the API key. If for some reason the code below doesn't work for you, please leave a comment and I'll try to help.

Open the IDE, copy in this code, then compile and upload it. Then open the "Serial monitor" to see the output from the Arduino.

```c
#include <WiFi101.h>

// wifi settings
char ssid[] = "ssid-name";
char pass[] = "ssid-password";

// api server endpoint
char server[] = "192.168.6.22";
int port = 5000;

// api key that must be the same as the one in Python code
String api_key = "JtF2aUE5SGHfVJBCG5SH";

// frequency data is sent in ms - every 5 seconds
int timeout = 1000 * 5;

int status = WL_IDLE_STATUS;

void setup() {

    // initialize serial and wait for port to open:
    Serial.begin(9600);
    delay(1000);

    // check for the presence of the shield
    if (WiFi.status() == WL_NO_SHIELD) {
        Serial.println("WiFi shield not present");
        while (true);
    }

    // attempt to connect to wifi network
    while (status != WL_CONNECTED) {
        Serial.print("Attempting to connect to SSID: ");
        Serial.println(ssid);
        status = WiFi.begin(ssid, pass);
        // wait 10 seconds for connection
        delay(10000);
    }

    // output wifi status to serial monitor
    Serial.print("SSID: ");
    Serial.println(WiFi.SSID());

    IPAddress ip = WiFi.localIP();
    Serial.print("IP Address: ");
    Serial.println(ip);

    long rssi = WiFi.RSSI();
    Serial.print("signal strength (RSSI):");
    Serial.print(rssi);
    Serial.println(" dBm");
}

void loop() {

    WiFiClient client;

    if (client.connect(server, port)) {

        // I use a random number generator for this example
        // but you can use analog or digital inputs from the arduino
        String content = String(random(1000));

        client.println("POST /api HTTP/1.1");
        // Host header is mandatory in HTTP/1.1
        client.println("Host: " + String(server));
        client.println("Connection: close");
        client.println("Api-Key: " + api_key);
        client.println("Content-Length: " + String(content.length()));
        client.println();
        client.println(content);

        delay(100);
        client.stop();
        Serial.println("Data sent successfully ...");

    } else {
        Serial.println("Problem sending data ...");
    }

    // waits for x seconds and continues looping
    delay(timeout);

}
```

As the example shows, the Arduino generates a random integer between 0 and 1000. You can easily replace this with a temperature sensor or any other kind of sensor.

Now that we have the API under the hood and the Arduino is sending demo data, we can focus on data visualization.

## Data visualization

Before we continue, let's examine the project folder structure. Currently we have only two files in the project:

_simple-iot-app/_

* _webapp.py_
* _data.db_

We will now add an HTML template that contains CSS and JavaScript inline for simplicity. For the bottle framework to scan the application's root folder for templates, we add ```bottle.TEMPLATE_PATH.insert(0, "./")``` to ```webapp.py```. By default bottle looks for templates in the ```views/``` subfolder. This override is not ideal → if you use bottle to develop real web applications you should keep the native behavior and store templates in the predefined folder, but for the sake of the example we will override it. Be careful to fully replace your previous code with the new code provided below; avoid partially replacing code in the file :) The new code for reading data-points is also included in the Python example below.

First we add a new route to our web application, triggered when the browser hits the root of the application ```http://0.0.0.0:5000/```. This route does nothing more than render the ```frontend.html``` template, via ```return bottle.template("frontend.html")```. Check the code below to see exactly how this is done.

Next we expand the ```/api``` route to use different methods for writing and reading data-points. Writing a data-point uses the POST method; reading uses the GET method. The GET method returns a JSON object with the latest readings and historical data.

There is a fantastic JavaScript library for plotting time-series charts called [MetricsGraphics.js](https://www.metricsgraphicsjs.org), based on the [D3.js](https://d3js.org/) data-visualization library.

MetricsGraphics.js requires a specific data schema → we need to transform the data from the database into this format:

```json
[
    {
        "date": "2017-08-11 01:07:20",
        "value": 933
    },
    {
        "date": "2017-08-11 01:07:30",
        "value": 743
    }
]
```
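The transformation itself takes only a few lines of Python; here is a standalone sketch using hardcoded rows in place of the database query (the sample timestamps are made up):

```python
import datetime

# stand-in for rows returned from the "point" table
rows = [{"ts": 1502406440, "value": 933}, {"ts": 1502406450, "value": 743}]

response = []
for point in rows:
    response.append({
        # MetricsGraphics expects a parseable date string plus a numeric value
        "date": datetime.datetime.fromtimestamp(point["ts"]).strftime("%Y-%m-%d %H:%M:%S"),
        "value": point["value"],
    })
print(response)
```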
292
293Web application is now complete and we only need ```frontend.html``` that we will develop now. If you would try to start web app now and go to root app this will return error because we don't have frontend.html yet.
294
```python
# -*- coding: utf-8 -*-

import time
import datetime
import json
import bottle
import dataset

# initializing bottle app
app = bottle.Bottle()

# adds root directory as template folder
bottle.TEMPLATE_PATH.insert(0, "./")

# connects to sqlite database
# check_same_thread=False allows using it in multi-threaded mode
app.config["db"] = dataset.connect("sqlite:///data.db?check_same_thread=False")

# api key that will be used in Arduino code
app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH"

# triggered when / is accessed from browser
# only accepts GET → no POST allowed
@app.route("/", method=["GET"])
def route_default():
    return bottle.template("frontend.html")

# triggered when /api is accessed
# accepts POST and GET
@app.route("/api", method=["GET", "POST"])
def route_api():

    # if method is POST then we write a datapoint
    if bottle.request.method == "POST":
        status = 400
        ts = int(time.time())  # current timestamp
        value = bottle.request.body.read()  # data from device
        api_key = bottle.request.get_header("Api-Key")  # api key from header

        # outputs received data to console for debugging
        print(">>> {} :: {}".format(value, api_key))

        # if api_key is correct and value is present
        # then writes attribute to point table
        if api_key == app.config["api_key"] and value:
            app.config["db"]["point"].insert(dict(ts=ts, value=value))
            status = 200

        # we only need to return status
        return bottle.HTTPResponse(status=status, body="")

    # if method is GET then we read datapoints
    else:
        response = []
        datapoints = app.config["db"]["point"].all()

        for point in datapoints:
            response.append({
                "date": datetime.datetime.fromtimestamp(int(point["ts"])).strftime("%Y-%m-%d %H:%M:%S"),
                "value": point["value"]
            })

        bottle.response.content_type = "application/json"
        return json.dumps(response)

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app=app,
        host="0.0.0.0",
        port=5000,
        debug=True,
        reloader=True,
        catchall=True,
    )
```

And now, finally, we can implement ```frontend.html```. Create a file with this name and copy in the code below. When you are done you can start the web application; the steps are listed below the code.

```html
<!DOCTYPE html>
<html>

    <head>
        <meta charset="utf-8">
        <title>Simple IOT application</title>
    </head>

    <body>

        <h1>Simple IOT application</h1>

        <div class="chart-placeholder">
            <div id="chart"></div>
        </div>

        <!-- application main script -->
        <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.0/d3.min.js"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/metrics-graphics/2.11.0/metricsgraphics.min.js"></script>
        <script>
            function fetch_and_render() {
                d3.json("/api", function(data) {
                    data = MG.convert.date(data, "date", "%Y-%m-%d %H:%M:%S");
                    MG.data_graphic({
                        data: data,
                        chart_type: "line",
                        full_width: true,
                        height: 270,
                        target: document.getElementById("chart"),
                        x_accessor: "date",
                        y_accessor: "value"
                    });
                });
            }
            window.onload = function() {
                // initial call for rendering
                fetch_and_render();

                // updates chart every 5 seconds
                setInterval(function() {
                    fetch_and_render();
                }, 5000);
            }
        </script>

        <!-- application styles -->
        <style>
            body {
                font: 13px sans-serif;
                padding: 20px 50px;
            }
            .chart-placeholder {
                border: 2px solid #ccc;
                width: 100%;
                user-select: none;
            }
            /* chart styles */
            .mg-line1-color {
                stroke: red;
                stroke-width: 2;
            }
            .mg-main-area, .mg-main-line {
                fill: #fff;
            }
            .mg-x-axis line, .mg-y-axis line {
                stroke: #b3b2b2;
                stroke-width: 1px;
            }
        </style>

    </body>

</html>
```

The folder structure should now look like:

_simple-iot-app/_

* _webapp.py_
* _data.db_
* _frontend.html_

OK, let's now start the application and start feeding it data.

1. ```python webapp.py```
2. connect the Arduino MKR1000 to a power source
3. open a browser and go to ```http://0.0.0.0:5000```

If everything goes well you should see new data-points rendered on the chart every 5 seconds.

When you navigate to ```http://0.0.0.0:5000``` you should see the rendered chart shown in the picture below.

![Application output](/files/iot-application/iot-app-output.png)

The complete application with all the code is available for [download](/files/iot-application/simple-iot-application.zip).

## Conclusion

I hope this clarifies some aspects of IOT application development. Of course this is a minimal example, far from what can be done in real life with a deeper dive into other technologies.

If you would like to continue exploring the IOT world, here are some interesting resources to examine:

* [Reading Sensors with an Arduino](https://www.allaboutcircuits.com/projects/reading-sensors-with-an-arduino/)
* [MQTT 101 – How to Get Started with the lightweight IoT Protocol](http://www.hivemq.com/blog/how-to-get-started-with-mqtt)
* [Stream Updates with Server-Sent Events](https://www.html5rocks.com/en/tutorials/eventsource/basics/)
* [Internet of Things (IoT) Tutorials](http://www.tutorialspoint.com/internet_of_things/)

Any comments or additional ideas are welcome in the comments below.
diff --git a/src/experiments/using-digitalocean-spaces-object-storage-with-fuse.md b/src/experiments/using-digitalocean-spaces-object-storage-with-fuse.md
new file mode 100644
index 0000000..bc00d1e
--- /dev/null
+++ b/src/experiments/using-digitalocean-spaces-object-storage-with-fuse.md
@@ -0,0 +1,260 @@
title: Using DigitalOcean Spaces Object Storage with FUSE
date: 2018-01-16
tags: experiment
hide: false
----

A couple of months ago [DigitalOcean](https://www.digitalocean.com) introduced a new product called [Spaces](https://blog.digitalocean.com/introducing-spaces-object-storage/), an object storage service very similar to Amazon's S3. This really piqued my interest, because it was something I was missing, and even the thought of going elsewhere on the internet for such functionality held no interest for me. In line with their previous pricing, this too is very cheap, and the pricing page is a no-brainer compared to AWS or GCE. [Prices are clearly and precisely defined and outlined](https://www.digitalocean.com/pricing/). You must love them for that :)

### Initial requirements

* Is it possible to use them as a mounted drive with FUSE? (tl;dr YES)
* Will the performance degrade over time and over different sizes of objects? (tl;dr NO&YES)
* Can storage be mounted on multiple machines at the same time and be writable? (tl;dr YES)

> Let me be clear: the scripts I use here are made just for benchmarking and are not intended for real-life use. That said, I am looking into using these approaches with a caching service in front, dumping everything to storage as objects afterwards → that could be an interesting post of its own. But if you need real-time data without eventual consistency, please take these scripts for what they are: not usable in such situations.

## Is it possible to use them as a mounted drive with FUSE?

Well, they actually can be used in such a manner. Because they are similar to [AWS S3](https://aws.amazon.com/s3/), many tools are available and you can find many articles and [Stackoverflow items](https://stackoverflow.com/search?q=s3+fuse).

To make this work you will need a DigitalOcean account; without one you will not be able to test this code. If you have an account, go and [create a new Droplet](https://cloud.digitalocean.com/droplets/new?size=s-1vcpu-1gb&region=ams3&distro=debian&distroImage=debian-9-x64&options=private_networking,install_agent). The link already preselects Debian 9 with the smallest VM option.

* Be sure to add your SSH key, because we will log in to this machine remotely.
* If you change your region, remember which one you chose, because we will need this information when we mount the Space on our machine.

Instructions on how to set up and use SSH keys are available in the article [How To Use SSH Keys with DigitalOcean Droplets](https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets).

![DigitalOcean Droplets](/files/do-fuse/fuse-droplets.png)

After the Droplet is created it's time to create a new Space. This is done by clicking the [Create](https://cloud.digitalocean.com/spaces/new) button (top right corner) and selecting Spaces. Choose a pronounceable ```Unique name``` because we will use it in the examples below. You can choose either Private or Public; it doesn't matter in our case, and you can always change it later.

Once you have created the new Space, we should [generate an Access key](https://cloud.digitalocean.com/settings/api/tokens). The link leads to the page where you can generate this key. After you create one, save the provided Key and Secret, because the Secret will not be shown again.

![DigitalOcean Spaces](/files/do-fuse/fuse-spaces.png)

Now that we have a new Space and an Access key, let's SSH into our machine.

```bash
# replace IP with the ip of your newly created droplet
ssh root@IP

# this will install utilities for mounting storage objects as FUSE
apt install s3fs

# we now need to provide credentials (access key we created earlier)
# replace KEY and SECRET with your own credentials but keep the colon between them
# we also need to set proper permissions
echo "KEY:SECRET" > .passwd-s3fs
chmod 600 .passwd-s3fs

# now we mount the space on our machine
# replace UNIQUE-NAME with the name you chose earlier
# if you chose a different region for your space, adjust the -ourl option (ams3)
s3fs UNIQUE-NAME /mnt/ -ourl=https://ams3.digitaloceanspaces.com -ouse_cache=/tmp

# now we try to create a file
# once mounted it may take a couple of seconds to retrieve data
echo "Hello cruel world" > /mnt/hello.txt
```

After all this you can return to your browser, go to [DigitalOcean Spaces](https://cloud.digitalocean.com/spaces) and click on your Space. If the file hello.txt is present, you have successfully mounted the Space on your machine and written data to it.

I chose the same region for my Droplet and my Space, but you don't have to; regions can differ. What that actually does to performance, I don't know.

Additional information on FUSE:

* [Github project page for s3fs](https://github.com/s3fs-fuse/s3fs-fuse)
* [FUSE - Filesystem in Userspace](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)

## Will the performance degrade over time and over different sizes of objects?

For this task I didn't want to just read and write text files or upload images. I actually wanted to figure out whether using something like SQLite is viable in this case.

### Measurement experiment 1: File copy

```bash
# first we create some dummy files at different sizes
dd if=/dev/zero of=10KB.dat bs=1024 count=10 #10KB
dd if=/dev/zero of=100KB.dat bs=1024 count=100 #100KB
dd if=/dev/zero of=1MB.dat bs=1024 count=1024 #1MB
dd if=/dev/zero of=10MB.dat bs=1024 count=10240 #10MB

# now we set the time command to only return real
TIMEFORMAT=%R

# now lets test it
(time cp 10KB.dat /mnt/) |& tee -a 10KB.results.txt

# and now we automate
# this will perform the same operation 100 times
# this will output results into separate files based on object size
n=0; while (( n++ < 100 )); do (time cp 10KB.dat /mnt/10KB.$n.dat) |& tee -a 10KB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 100KB.dat /mnt/100KB.$n.dat) |& tee -a 100KB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 1MB.dat /mnt/1MB.$n.dat) |& tee -a 1MB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 10MB.dat /mnt/10MB.$n.dat) |& tee -a 10MB.results.txt; done
```

Files of 100MB were not transferred successfully and ended with an error (cp: failed to close '/mnt/100MB.1.dat': Operation not permitted).

As I suspected, object size is not really that important. Sadly I don't have the time to test performance over longer periods. But if any of you do, please send me your data → I would be interested in seeing the results.

**Here are the plotted results**

You can download the [raw results here](/files/do-fuse/copy-benchmarks.tsv). Measurements are in seconds.
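Each results file produced by the `time ... | tee` loop above is just one elapsed time per line, so summarizing one takes only a few lines of Python. A sketch (the sample values are made up; in practice you would iterate over `open("10KB.results.txt")` instead):

```python
# stand-in for the lines of a results file like 10KB.results.txt
sample = ["0.312", "0.287", "0.455", "0.301"]

# some locales print a decimal comma, hence the replace
times = [float(line.replace(",", ".")) for line in sample]
mean = sum(times) / len(times)
print("runs=%d mean=%.3fs min=%.3fs max=%.3fs"
      % (len(times), mean, min(times), max(times)))
```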

<script src="//cdn.plot.ly/plotly-latest.min.js"></script>
<div id="copy-benchmarks"></div>
<script>
(function(){
    var request = new XMLHttpRequest();
    request.open("GET", "/files/do-fuse/copy-benchmarks.tsv", true);
    request.onload = function() {
        if (request.status >= 200 && request.status < 400) {
            var payload = request.responseText.trim();
            var tsv = payload.split("\n");
            for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
            var traces = [];
            var headers = tsv[0];
            tsv.shift();
            Array.prototype.forEach.call(headers, function(el, idx) {
                var x = [];
                var y = [];
                for (var j=0; j<tsv.length; j++) {
                    x.push(j);
                    y.push(parseFloat(tsv[j][idx].replace(",", ".")));
                }
                traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
            });
            var copy = Plotly.newPlot("copy-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 40, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } }, xaxis: { title: "fn(i)", titlefont: { size: 12 } } });
        } else { }
    };
    request.onerror = function() { };
    request.send(null);
})();
</script>

As far as these tests show, performance is quite stable and predictable, which is fantastic. But this is a small test spanning only a couple of hours, so you should not trust it completely.

### Measurement experiment 2: SQLite performance

I was unable to use a database file directly from the mounted drive, so this is a no-go, as I suspected. So I executed the code below on a local disk just to get some benchmarks. Each iteration runs DROPTABLE, CREATETABLE, INSERTMANY (1000 records), FETCHALL and COMMIT, repeated 1000 times to generate statistics. As you can see, SQLite's performance is quite amazing. You could then potentially just copy the file to the mounted drive and be done with it.

```python
import time
import sqlite3
import sys

if len(sys.argv) < 4:
    print("usage: python sqlite-benchmark.py DB_PATH NUM_RECORDS REPEAT")
    sys.exit()

def data_iter(x):
    for i in range(x):
        yield "m" + str(i), "f" + str(i * i)

header_line = "%s\t%s\t%s\t%s\t%s\n" % ("DROPTABLE", "CREATETABLE", "INSERTMANY", "FETCHALL", "COMMIT")
with open("sqlite-benchmarks.tsv", "w") as fp:
    fp.write(header_line)

start_time = time.time()
conn = sqlite3.connect(sys.argv[1])
c = conn.cursor()
end_time = time.time()
result_time = CONNECT = end_time - start_time
print("CONNECT: %g seconds" % (result_time))

start_time = time.time()
c.execute("PRAGMA journal_mode=WAL")
c.execute("PRAGMA temp_store=MEMORY")
c.execute("PRAGMA synchronous=OFF")
end_time = time.time()
result_time = PRAGMA = end_time - start_time
print("PRAGMA: %g seconds" % (result_time))

for i in range(int(sys.argv[3])):
    print("#%i" % (i))

    start_time = time.time()
    c.execute("drop table if exists test")
    end_time = time.time()
    result_time = DROPTABLE = end_time - start_time
    print("DROPTABLE: %g seconds" % (result_time))

    start_time = time.time()
    c.execute("create table if not exists test(a,b)")
    end_time = time.time()
    result_time = CREATETABLE = end_time - start_time
    print("CREATETABLE: %g seconds" % (result_time))

    start_time = time.time()
    c.executemany("INSERT INTO test VALUES (?, ?)", data_iter(int(sys.argv[2])))
    end_time = time.time()
    result_time = INSERTMANY = end_time - start_time
    print("INSERTMANY: %g seconds" % (result_time))

    start_time = time.time()
    c.execute("select count(*) from test")
    res = c.fetchall()
    end_time = time.time()
    result_time = FETCHALL = end_time - start_time
    print("FETCHALL: %g seconds" % (result_time))

    start_time = time.time()
    conn.commit()
    end_time = time.time()
    result_time = COMMIT = end_time - start_time
    print("COMMIT: %g seconds" % (result_time))

    print()
    log_line = "%f\t%f\t%f\t%f\t%f\n" % (DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT)
    with open("sqlite-benchmarks.tsv", "a") as fp:
        fp.write(log_line)

start_time = time.time()
conn.close()
end_time = time.time()
result_time = CLOSE = end_time - start_time
print("CLOSE: %g seconds" % (result_time))
```

You can download the [raw results here](/files/do-fuse/sqlite-benchmarks.tsv). Again, these results were produced on local block storage and do not represent the capabilities of object storage. With my current approach and the state of the test code, that cannot be measured → I would need to make the Python code much more robust and check locking etc.
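The "copy the file to the mounted drive when done" idea can be sketched in a few lines (the paths and helper name are my own; atomic-rename semantics on an s3fs mount are an assumption you should verify before relying on this):

```python
import os
import shutil
import tempfile

def publish_db(local_db, mount_dir):
    # copy the SQLite file onto the mount under a temporary name first,
    # then rename, so readers never see a half-written database file
    tmp_path = os.path.join(mount_dir, ".data.db.tmp")
    shutil.copy2(local_db, tmp_path)
    os.replace(tmp_path, os.path.join(mount_dir, "data.db"))

# demo against a temporary directory standing in for the /mnt mount point
with tempfile.TemporaryDirectory() as mnt:
    local = os.path.join(mnt, "local.db")
    with open(local, "wb") as fp:
        fp.write(b"sqlite-bytes")
    publish_db(local, mnt)
    ok = os.path.exists(os.path.join(mnt, "data.db"))

print(ok)  # True
```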

<div id="sqlite-benchmarks"></div>
<script>
(function(){
    var request = new XMLHttpRequest();
    request.open("GET", "/files/do-fuse/sqlite-benchmarks.tsv", true);
    request.onload = function() {
        if (request.status >= 200 && request.status < 400) {
            var payload = request.responseText.trim();
            var tsv = payload.split("\n");
            for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
            var traces = [];
            var headers = tsv[0];
            tsv.shift();
            Array.prototype.forEach.call(headers, function(el, idx) {
                var x = [];
                var y = [];
                for (var j=0; j<tsv.length; j++) {
                    x.push(j);
                    y.push(parseFloat(tsv[j][idx].replace(",", ".")));
                }
                traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
            });
            var sqlite = Plotly.newPlot("sqlite-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 50, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } } });
        } else { }
    };
    request.onerror = function() { };
    request.send(null);
})();
</script>

## Can storage be mounted on multiple machines at the same time and be writable?

Well, this one didn't take long to test, and the answer is **YES**. I mounted the Space on both machines and measured the same performance on both. But because a file is downloaded before writing and uploaded on completion, there could be problems if another process tries to access the same file at the same time.

## Observations and conclusion

Using Spaces this way makes files easier to access and manage. But beyond that, you would need to write additional code to make it play nicely with your applications.

Nevertheless, this was extremely simple to set up and use, and it is just another excellent product in the DigitalOcean line. I found this exercise very valuable and am thinking about implementing some sort of mechanism for SQLite so data can be stored on Spaces and accessed by many VMs. For a project where data doesn't need to be accessible in real time and can be a couple of minutes old, this would be very interesting. If any of you find this proposal interesting, please write in the comment box below or shoot me an email and I will keep you posted.