path: root/content/posts
authorMitja Felicijan <mitja.felicijan@gmail.com>2023-11-01 22:54:27 +0100
committerMitja Felicijan <mitja.felicijan@gmail.com>2023-11-01 22:54:27 +0100
commit2417a6b7603524dc5cd30d29b153f91024b9443d (patch)
tree9be5ea8e5baba96dd9159217da6badf6157fb595 /content/posts
parent89ba3497f07a8ea43d209b583f39fcc286acc923 (diff)
downloadmitjafelicijan.com-2417a6b7603524dc5cd30d29b153f91024b9443d.tar.gz
Move to Jekyll
Diffstat (limited to 'content/posts')
-rw-r--r--content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md42
-rw-r--r--content/posts/2012-03-09-led-technology-not-so-eco.md33
-rw-r--r--content/posts/2013-10-24-wireless-sensor-networks.md54
-rw-r--r--content/posts/2015-11-10-software-development-pitfalls.md181
-rw-r--r--content/posts/2017-03-07-golang-profiling-simplified.md126
-rw-r--r--content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md199
-rw-r--r--content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md206
-rw-r--r--content/posts/2017-08-11-simple-iot-application.md607
-rw-r--r--content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md331
-rw-r--r--content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md411
-rw-r--r--content/posts/2019-10-14-simplifying-and-reducing-clutter.md59
-rw-r--r--content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md108
-rw-r--r--content/posts/2020-03-22-simple-sse-based-pubsub-server.md454
-rw-r--r--content/posts/2020-03-27-create-placeholder-images-with-sharp.md102
-rw-r--r--content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md108
-rw-r--r--content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md111
-rw-r--r--content/posts/2020-05-05-remote-work.md72
-rw-r--r--content/posts/2020-08-15-systemd-disable-wake-onmouse.md73
-rw-r--r--content/posts/2020-09-06-esp-and-micropython.md225
-rw-r--r--content/posts/2020-09-08-bind-warning-on-login.md54
-rw-r--r--content/posts/2020-09-09-digitalocean-sync.md112
-rw-r--r--content/posts/2021-01-24-replacing-dropbox-with-s3.md114
-rw-r--r--content/posts/2021-01-25-goaccess.md204
-rw-r--r--content/posts/2021-06-26-simple-world-clock.md107
-rw-r--r--content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md103
-rw-r--r--content/posts/2021-08-01-linux-cheatsheet.md287
-rw-r--r--content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md276
-rw-r--r--content/posts/2021-12-25-running-golang-application-as-pid1.md347
-rw-r--r--content/posts/2021-12-30-wap-mobile-web-before-the-web.md202
-rw-r--r--content/posts/2022-06-30-trying-out-helix-editor.md53
-rw-r--r--content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md364
-rw-r--r--content/posts/2022-08-13-algae-spotted-on-river-sava.md31
-rw-r--r--content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md296
-rw-r--r--content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md66
-rw-r--r--content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md253
-rw-r--r--content/posts/2023-05-16-rekindling-my-love-for-programming.md74
-rw-r--r--content/posts/2023-05-23-i-was-wrong-about-git-workflows.md71
-rw-r--r--content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md159
-rw-r--r--content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md281
-rw-r--r--content/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md100
-rw-r--r--content/posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md22
-rw-r--r--content/posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md14
42 files changed, 0 insertions, 7092 deletions
diff --git a/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md b/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md
deleted file mode 100644
index 325bd52..0000000
--- a/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md
+++ /dev/null
@@ -1,42 +0,0 @@
1---
2title: Most likely to succeed in the year of 2011
3url: most-likely-to-succeed-in-year-of-2011.html
4date: 2011-01-13T12:00:00+02:00
5type: post
6draft: false
7---
8
9The year of 2010 was definitely the year of Geo-location. The market responded
10beautifully and lots of very cool services were launched. We all have to thank
11the mobile market for such extensive adoption. With new generations of mobile
12phones that are not only packed with high-tech hardware but also affordable,
13we can now manage tasks that not so long ago seemed almost Star Trek-ish.
14And all of this has had, and still has, a great influence on where we are
15heading now.
16
17Reading all these articles about innovation and new thriving technologies
18makes me wonder what the next step is. The future is the mesh, as Lisa Gansky
19said in her book The Mesh.
20
21Many still have conservative views on distributed systems: concerns about the
22security of information, fear of not controlling every aspect of information
23flow. I am very open to distributed systems and heterogeneous applications,
24and I think this is the correct and best way to proceed.
25
26This year will definitely be about communication platforms. Mobile to mobile.
27Machine to mobile and vice versa. All the tech is available and ready to put
28into action. Wireless is today’s new mantra. And the concept of semantic web is
29now ready for industry.
30
31Applications and developers can now gain access to new layers of systems and
32prepare and build solutions to meet the high-quality needs of the market. Speed
33is everything now.
34
35My vote goes to “Machine to Machine” and “Embedded Systems”!
36
37- [Machine-to-Machine](http://en.wikipedia.org/wiki/Machine-to-Machine)
38- [The ultimate M2M communication protocol](http://www.bitxml.org/)
39- [COOS Project (connectivity initiative)](http://www.coosproject.org/maven-site/1.0.0/project-info.html)
40- [Community for machine-to-machine](http://m2m.com/index.jspa)
41- [Embedded system](http://en.wikipedia.org/wiki/Embedded_system)
42
diff --git a/content/posts/2012-03-09-led-technology-not-so-eco.md b/content/posts/2012-03-09-led-technology-not-so-eco.md
deleted file mode 100644
index 2841d0a..0000000
--- a/content/posts/2012-03-09-led-technology-not-so-eco.md
+++ /dev/null
@@ -1,33 +0,0 @@
1---
2title: LED technology might not be as eco-friendly as you think
3url: led-technology-not-so-eco.html
4date: 2012-03-09T12:00:00+02:00
5type: post
6draft: false
7---
8
9There is a lot of talk about LED technology. It is beginning to infiltrate
10industry at a fast rate, and it's a challenge for designers and engineers.
11I wondered when a weakness would be revealed. Then I stumbled upon an article
12about the harm in using LED technology. It looks like this magical
13technology is not so magical and eco-friendly after all.
14
15A new study from the University of California indicates that LED lights contain
16toxic metals, and should be produced, used and disposed of carefully. Besides
17the lead and nickel, the bulbs and their associated parts were also found to
18contain arsenic, copper, and other metals that have been linked to different
19cancers, neurological damage, kidney disease, hypertension, skin rashes and
20other illnesses in humans, and to ecological damage in waterways.
21
22Since then, I have not found any regulation or standard for the disposal of LED
23lights. This might become a problem in the future, and it is a massive
24drawback that could have quite an impact on the consumer market.
25
26Nevertheless, there is potential, and I am sure the market will adapt. I also
27hope I will soon be reading documents about solutions to this concern.
28
29**Additional resources:**
30
31- [Recycling and Disposal of Light Bulbs](http://ezinearticles.com/?Recycling-and-Disposal-of-Light-Bulbs&id=1091304)
32- [How to Dispose of a Low-Energy Light Bulb](http://www.ehow.com/how_7483442_dispose-lowenergy-light-bulb.html)
33
diff --git a/content/posts/2013-10-24-wireless-sensor-networks.md b/content/posts/2013-10-24-wireless-sensor-networks.md
deleted file mode 100644
index bc6b333..0000000
--- a/content/posts/2013-10-24-wireless-sensor-networks.md
+++ /dev/null
@@ -1,54 +0,0 @@
1---
2title: Wireless sensor networks
3url: wireless-sensor-networks.html
4date: 2013-10-24T12:00:00+02:00
5type: post
6draft: false
7---
8
9Zigbee networks have this wonderful capability to self-heal, which means they
10can reorder connections between nodes if one of them is inoperable. This works
11out of the box when you deploy them. But keep in mind that achieving
12this is not as easy as you would think. None of it is plug&play. So to make
13your life a bit easier, here are some pointers which, I hope, will help you.
14
15- Be careful when you are ordering your equipment from abroad. There are many
16 rules and regulations you need to comply with before you get your Xbee radios.
17 What they do is wait until you prove that you won't use the technology for
18 some kind of evil take-over-the-world project :). For this, they have the
19 EAR (Export Administration Regulations), which basically means "This product
20 may require a license to export from the United States."
21- I don't know if this applies to every country, but when we purchased our Xbee
22 radios from Mouser, this was mandatory! What we needed to do was to print out
23 a form and write information about our company and send them a copy via
24 email. With this document, we proved that we are a legitimate company.
25- When you complete your purchase and send all the documentation, you are not
26 clear yet. Then customs will take it from there :). There will be some
27 additional costs. Before purchasing, make sure you have as much information
28 about costs as possible. Because it can get costly in the end.
29- I suggest you use companies from your own country. You can seriously cut your
30 costs. Here in Slovenia, the best option as far as I know is Farnell. And
31 based on my personal experience, they rock! That's all I need to say!
32- Make plans when ordering larger quantities. Do not, I say, do not make your
33 orders in December! :) Believe me! You will have problems with stock they can
34 provide for you. So, we were forced to buy some things from Mouser, which was
35 extremely painful because of all the regulations you need to obey when
36 importing goods from the USA.
37- Make sure that the firmware version on your Xbee radios is exactly the same!
38 Do not get creative!!! I propose using templates. You can get a template by
39 exporting a settings profile in the X-CTU application. Make sure you have
40 enabled "Upgrade firmware" so you can be sure each radio has the same firmware.
41- And again: make plans! Plan everything! Months in advance! You will thank me
42 later :)
43- Test, test, test. Wireless networks can be tricky.
44
45If you are serious, I suggest you buy the book Building Wireless Sensor
46Networks. You will get a glimpse of how these networks work in layman's terms.
47It is a good starting point for everybody who wants to build wireless networks.
48
49**Additional resources:**
50
51- http://www.digi.com/aboutus/export/generalexportinfo
52- http://doresearch.stanford.edu/research-scholarship/export-controls/export-controlled-or-embargoed-countries-entities-and-persons
53- http://www.bis.doc.gov/licensing/exportingbasics.htm
54
diff --git a/content/posts/2015-11-10-software-development-pitfalls.md b/content/posts/2015-11-10-software-development-pitfalls.md
deleted file mode 100644
index 6a5d9bd..0000000
--- a/content/posts/2015-11-10-software-development-pitfalls.md
+++ /dev/null
@@ -1,181 +0,0 @@
1---
2title: Software development and my favorite pitfalls
3url: software-development-pitfalls.html
4date: 2015-11-10T12:00:00+02:00
5type: post
6draft: false
7---
8
9Over the years I have had the privilege to work on some very exciting projects,
10both in the software development field and in electronics, and every experience
11taught me some invaluable lessons about how NOT TO approach development. In
12this post I will try to point out some absurd, outdated techniques I
13find the most annoying and damaging during a development cycle. There will be
14swearing, because this topic really gets on my nerves and I have never
15coherently tried to explain it in writing. So if I get heated up, bear with me.
16
17As new methods of project management are emerging, underlying processes still
18stay old and outdated. This is mainly because we as people are unable to
19completely shift away from these approaches.
20
21I have always struggled with communication, and many times that cost me a
22relationship or two because I was not on the ball all the time. Through every
23experience I became more convinced that I was the problem, and never once
24doubted that the real problem may be that communication never evolved a single
25step beyond email. And if you think about it for a second, not many things have
26changed around this topic. We just have different representations of email
27(message boards, chats, project management tools). And I believe this is the
28real issue we are facing now.
29
30There are many articles written about hyper-connectivity and the effects that
31are a direct result of it. But the mainstream does nothing about it. We are just
32putting out fires, and we do nothing to prevent them. I am certain this will be
33a major source of grief in the coming years. What we can all do to avoid this is
34to change our mindset and experiment with our communication skills and
35development approaches. We need to maximize the possible output a person can
36give. And to achieve this we need to listen to them and encourage them. I know
37that not everybody is a naturally born leader, but with enough practice and
38encouragement they too can become active participants in leadership.
39
40There is a lot of talk now about methodologies such as Scrum, Kanban, Cleanroom,
41and they all fucking piss me off :). These are all boxes that imprison people
42and take away their freedom of thought. This is a straightforward mindfuck /
43amputation of creativity.
44
45Let me list a couple of things that I find really destructive and bad for a
46project and in a long run company.
47
48## Ping emails
49
50Ping emails are emails you have to write as soon as you receive an email. Their
51sole purpose is to inform the sender that you received their email and you are
52working on it. Their only result is to calm the sender down: their task is
53being dealt with. The intent basically is: I did my job by sending you this
54email, so I am in the clear. I categorize this as a fuck-you email.
55This is one of the most irritating types of emails I need to write. It is the
56ultimate control-freak show you can experience, and it gives the sender a false
57feeling of control. Newsflash: we do not live in 1982, when there was a
58possibility that an email never reached its destination. I really hate this
59from the bottom of my heart.
60
61A reply should be like: "Yes, I am fucking alive, and I am at your service, my
62liege!". I guess if I replied like that, I wouldn't have to write any more
63messages of this kind.
64
65## Everybody is a project manager
66
67Well, this is a tough one. I noticed that as soon as you let people give
68their suggestions, you are basically screwed. There is truth in the saying:
69"Give low expectations and deliver a little more than you promised."
70
71People tend to take on the role of a manager as soon as they are presented with
72an opportunity. And by getting angry at them, you only provoke yourself. They
73are not at fault. You just need to tell them at the beginning that they are only
74giving suggestions, not tasks, and everything will be alright. But if you give
75them a feeling that they are in control, you will have immense problems
76explaining why their features are not in the current release.
77
78The project mission must always lead the project requirements, and any deviation
79from it will result in major project butchering. By this I mean that the
80project will take its own path, and you will be left with half-done software
81that helps nobody. Clear mission goals and clean execution will allow you to
82develop software with clear intent.
83
84## We are never wrong
85
86I find this type of arrogance the worst. We must always conduct ourselves as if
87we are infallible and cannot make mistakes. As soon as a procedure or process is
88established, there is no room for changes or improvements. This is the most
89idiotic thing someone can say or think. I think that processes need to evolve
90and change over time. This is imperative, a must-have in your organization,
91if you want to improve and develop the company. We all need to grow balls and
92change everything in order to adapt to current situations. Being a prisoner of
93predefined processes kills creativity.
94
95I am constantly trying new software for project management and communication. I
96believe every team has its own dynamic, and it needs to be discovered
97organically and naturally through many experiments. By putting the team in a
98box, you are amputating their creativity and therefore minimizing their
99potential. But if you talk to an executive, you will mostly find archetypical
100thinking and a strong need to compartmentalize everything from business
101processes to resource management. And this type of management, which often
102displays micromanagement techniques, only works for short periods (a couple of
103years); then employees either leave the company or become basically retarded
104drones on autopilot.
105
106## Micromanaging
107
108This basically implies that everybody on the team is an idiot who needs a
109to-do list they cannot write themselves. How about spoon-feeding the team
110at lunch, because besides the team leader, everybody must be a retarded idiot
111at best?
112
113I prefer milestones, as they give developers much more freedom and creativity,
114instead of wasting their time checking some bizarre to-do list that was
115not even thought through. Projects change constantly throughout the development
116cycle, and all you are left with at the end is a list of unchecked tasks and
117management's wrath over why they are not completed. Best WTF moment!
118
119## Human contact — no need for it!
120
121We are vigorously trying to eliminate physical contact by replacing short
122meetings with software, with no regard for the fact that we are not machines.
123Many times a simple 5-minute meeting in the morning can solve most of the
124problems. In rapid development, short bursts of face-to-face communication are
125possibly the best way to go.
126
127We now have all this software available, and all we get out of it is a
128giant clusterfuck. An obstacle, not a solution. So why do we still use it?
129
130## MVP is killing innovation
131
132Many will disagree with me on this one, but I stand strongly by this statement.
133What I have noticed in my experience is that all these buzzwords around us only
134mislead us and trap us in a circle of solving issues that already have a
135solution, but we are unable to see it without using some fancy word for it.
136
137The toughest thing for a developer to do is to minimize requirements. Well, this
138is tough only for bad developers. Yes, I said it. There are many types of
139developers out there. And those unable to minimize feature scope are the ones
140you don't need on your team. Their only goal is to solve problems that exist
141only in their heads. And then you have to argue with them and waste energy on
142them, instead of developing your awesome product. They are a cancer, and I
143suggest you cut them off.
144
145MVP as an idea is great, but sadly people don't understand the underlying
146philosophy, and they spend too much time fixating on something that
147every sane person with a normal IQ would understand without some made-up
148acronym. And the result is a lot of talking and barely any execution.
149
150Well, MVP is not directly killing innovation, but stupid people do when they try
151to understand it.
152
153## Pressure wasteland
154
155You must never allow yourself to be pressured into confirming a deadline if you
156are not confident. We often feel that we are in the service of others, which is
157true to some extent. But it is also true that others are in service to us to
158some extent. And we forget this all the time. We are all pressured constantly to
159make decisions just to calm other people down. And when they leave your office,
160you experience a WTF moment :) How the hell did they manage to fuck me up again?
161
162People need to realize that the more pressure you put on somebody, the less they
163will be able to do. So 5-minute update-email requests will only result in a
164mental breakdown and an inability to work that day. Constant poking is probably
165the one thing that makes me lose my mind instantly. To all of you who do this:
166"Stop bothering us with your insecurities and let us do our job. We will do it
167quicker and better without you breathing down our necks."
168
169When this happens to me, I end up with no energy at the end of the day. Don't
170you get it? You will get much more out of me if you ask me like a human being
171and not your personal butler. In the long run, you are destroying your
172relationships, and nobody will want to work with you. Your schizophrenic
173approach will damage only you in the long run. Nobody is anybody's property.
174
175## Conclusion
176
177I am guilty of many things described in this post. And I sometimes find it hard
178to acknowledge this. I lie to myself and vigorously try to find some
179explanation for why I do these things. There is always space for growth. And
180maybe you will also find some of yourself in this post and realize what needs
181to change for you to evolve.
diff --git a/content/posts/2017-03-07-golang-profiling-simplified.md b/content/posts/2017-03-07-golang-profiling-simplified.md
deleted file mode 100644
index f0821c5..0000000
--- a/content/posts/2017-03-07-golang-profiling-simplified.md
+++ /dev/null
@@ -1,126 +0,0 @@
1---
2title: Golang profiling simplified
3url: golang-profiling-simplified.html
4date: 2017-03-07T12:00:00+02:00
5type: post
6draft: false
7---
8
9Many posts have been written about profiling in Golang, and I haven't found a
10proper tutorial on the topic. Almost all of them are missing some piece of
11important information, and it gets pretty frustrating when you have a deadline
12and can't find a simple, distilled solution.
13
14Nevertheless, after searching and experimenting I have found a solution that
15works for me and probably will for you too.
16
17## Where are my pprof files?
18
19By default, pprof files are generated in the /tmp/ folder. You can override the
20folder where these files are generated programmatically in your Go code, as we
21will see in the example below.
22
23## Why is my CPU profile empty?
24
25I have found that sometimes the CPU profile is empty because the program did not
26run long enough. In my case, programs that execute too quickly don't produce a
27usable pprof file. Well, a file is generated, but it only contains 4KB of information.
28
29## Profiling
30
31As you can see from the examples, we execute a dummy_benchmark function to
32ensure some amount of work. Memory profiling can be done without such a
33"complex" function, but CPU profiling needs it.
34
35Both memory and CPU profiling examples are almost the same. Only the parameters
36passed to profile.Start in the main function differ. When we set
37profile.ProfilePath("."), we tell the profiler to store pprof files in the same
38folder as our program.
39
40### Memory profiling
41
42```go
43package main
44
45import (
46 "fmt"
47 "time"
48 "github.com/pkg/profile"
49)
50
51func dummy_benchmark() {
52
53 fmt.Println("first set ...")
54 for i := 0; i < 918231333; i++ {
55 i *= 2
56 i /= 2
57 }
58
59 <-time.After(time.Second*3)
60
61	fmt.Println("second set ...")
62 for i := 0; i < 9182312232; i++ {
63 i *= 2
64 i /= 2
65 }
66}
67
68func main() {
69 defer profile.Start(profile.MemProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop()
70 dummy_benchmark()
71}
72```
73
74### CPU profiling
75
76```go
77package main
78
79import (
80 "fmt"
81 "time"
82 "github.com/pkg/profile"
83)
84
85func dummy_benchmark() {
86
87 fmt.Println("first set ...")
88 for i := 0; i < 918231333; i++ {
89 i *= 2
90 i /= 2
91 }
92
93 <-time.After(time.Second*3)
94
95	fmt.Println("second set ...")
96 for i := 0; i < 9182312232; i++ {
97 i *= 2
98 i /= 2
99 }
100}
101
102func main() {
103 defer profile.Start(profile.CPUProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop()
104 dummy_benchmark()
105}
106```
107
108### Generating profiling reports
109
110```bash
111# memory profiling
112go build mem.go
113./mem
114go tool pprof -pdf ./mem mem.pprof > mem.pdf
115
116# cpu profiling
117go build cpu.go
118./cpu
119go tool pprof -pdf ./cpu cpu.pprof > cpu.pdf
120```
121
122This will generate a PDF document with the visualized profile.
123
124- [Memory PDF profile example](/posts/go-profiling/golang-profiling-mem.pdf)
125- [CPU PDF profile example](/posts/go-profiling/golang-profiling-cpu.pdf)
126
diff --git a/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md b/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md
deleted file mode 100644
index 3a6410f..0000000
--- a/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md
+++ /dev/null
@@ -1,199 +0,0 @@
1---
2title: What I've learned developing ad server
3url: what-i-ve-learned-developing-ad-server.html
4date: 2017-04-17T12:00:00+02:00
5type: post
6draft: false
7---
8
9For the past year and a half I have been developing a native advertising server
10that contextually matches ads and displays them in different template forms on a
11variety of websites. This project grew from serving thousands of ads per day to
12millions.
13
14The system is made up of a couple of core components:
15
16- API for serving ads,
17- Utils - cronjobs and queue management tools,
18- Dashboard UI.
19
20The initial release used [MongoDB](https://www.mongodb.com/) for full-text
21search, but it was later replaced by [Elasticsearch](https://www.elastic.co/)
22for better CPU utilization and search performance. This provided us with many of
23Elasticsearch's amazing features. You should check it out if you do any
24search-related operations.
25
26Because the premise of the server is to provide a native ad experience, ads are
27rendered on the client side via a simple templating engine. This ensures that
28ads can be displayed in a number of different ways based on the visual style of
29the page. It also makes the JavaScript client library quite complex.
30
31So now that you know the basic information about the product, let's get into the
32lessons we learned.
33
34## Aggregate everything
35
36After the beta version was released, everything (impressions, clicks, etc.) was
37written with nanosecond resolution to the database. At that time we were using
38[PostgreSQL](https://www.postgresql.org/), and the database quickly grew to over
39200GB of disk space. And that was problematic. Statistics took a disturbingly
40long time to aggregate. Indexes on the stats table were also no help
41after we reached 500 million datapoints.
42
43> There is marketing product information and there is real-life experience.
44And they tend to be quite the opposite.
45
46This is the reason everything is now aggregated on a daily basis and fed to
47Elastic in the form of a daily summary. With this, we can now track many more
48dimensions, such as zone, channel and platform
49information. And with this information we can now adapt the occurrence of ads in
50specific places more precisely.
51
52We have also adopted [Redis](https://redis.io/) as a full-time citizen in our
53stack. Because Redis also persists data to local disk, we have some sort
54of backup if the server accidentally suffers a failure.
55
56All the real-time statistics for ad serving and redirecting are kept as
57counters in a Redis instance, extracted daily and pushed to Elastic.
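As a rough sketch of this counter pattern (the key layout, function names, and client wiring here are illustrative assumptions, not our actual production code; in production `client` would be a `redis.Redis` instance from redis-py):

```python
from datetime import date

def track_impression(client, ad_id, zone):
    """Increment today's impression counter for an (ad, zone) pair.

    The key layout is a hypothetical example; INCR is atomic, so
    concurrent ad-serving workers can bump the same counter safely.
    """
    key = f"stats:{date.today().isoformat()}:imp:{ad_id}:{zone}"
    return client.incr(key)

def extract_daily_summary(client, day):
    """Collect all counters for one day into a dict -- the kind of
    daily summary document that would then be pushed to Elastic."""
    prefix = f"stats:{day}:"
    summary = {}
    for key in client.scan_iter(match=prefix + "*"):
        name = key.decode() if isinstance(key, bytes) else key
        summary[name[len(prefix):]] = int(client.get(key))
    return summary
```

The nightly job only reads keys for the finished day, so it never races with the live counters for today.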
58
59## Measure everything
60
61The thing about software is that we really don't know how well it performs
62under load until such load is presented. When testing locally everything is
63fine, but in production things tend to fall apart.
64
65As a solution, we measure everything we can: function execution
66time (by wrapping functions with timers), server performance (CPU, memory,
67disk, etc.), and Nginx and [uWSGI](https://uwsgi-docs.readthedocs.io/)
68performance. We sacrifice a bit of performance for the sake of this
69information. And we store all of it for later analysis.
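The "wrapping functions with timers" idea can be done with a small decorator; this is a hypothetical sketch rather than our actual code, with the stats kept in the same counter/avg/elapsed shape:

```python
import time
from collections import defaultdict
from functools import wraps

# Per-function stats: call count, running average, total elapsed seconds.
STATS = defaultdict(lambda: {"counter": 0, "avg": 0.0, "elapsed": 0.0})

def timed(fn):
    """Wrap a function so every call updates its entry in STATS."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            entry = STATS[fn.__name__]
            entry["counter"] += 1
            entry["elapsed"] += time.perf_counter() - start
            entry["avg"] = entry["elapsed"] / entry["counter"]
    return wrapper

@timed
def match_by_context(keywords):
    # Placeholder for the real matching logic.
    return sorted(keywords)
```

Dumping STATS as JSON periodically gives a summary like the example that follows.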
70
71**Example of function execution time**
72
73```json
74{
75 "get_final_filtered_ads": {
76 "counter": 1931250,
77 "avg": 0.0066143431,
78 "elapsed": 12773.9500310003
79 },
80 "store_keywords_statistics": {
81 "counter": 1931011,
82 "avg": 0.0004605267,
83 "elapsed": 889.2821669996
84 },
85 "match_by_context": {
86 "counter": 1931011,
87 "avg": 0.0055960716,
88 "elapsed": 10806.0758889999
89 },
90 "match_by_high_performance": {
91 "counter": 262,
92 "avg": 0.0152770229,
93 "elapsed": 4.00258
94 },
95 "store_impression_stats": {
96 "counter": 1931250,
97 "avg": 0.0006189991,
98 "elapsed": 1195.4419869999
99 }
100}
101```
102
103We have also started profiling with [cProfile](https://pymotw.com/2/profile/)
104and then visualizing with [KCachegrind](http://kcachegrind.sourceforge.net/).
105This provides a much more detailed look into code execution.
106
107## Cache control is your friend
108
109Because we use a JavaScript library for rendering ads, we rely on this script
110extensively, and when needed we must be able to change the behavior of the
111script quickly.
112
113In our case we cannot simply replace the JavaScript URL in the HTML code. It
114usually takes a day or two for the people who maintain the sites to change the
115code or add a ?ver=xxx parameter. This makes rapid deployment and testing very
116difficult and time-consuming. There is a limit to how much you can test locally.
117
118We are now in the process of integrating [Google Tag
119Manager](https://www.google.com/analytics/tag-manager/), but a couple of
120websites are built on the ASP.net platform, which has some problems with Tag
121Manager. With the solution below, we are certain that we are serving the latest
122version of the script.
123
124It only takes one mistake for users to end up with the script cached, and if it
125is cached for 1 year, you probably know where the problem is.
126
127```nginx
128# nginx ➜ /etc/nginx/sites-available/default
129location /static/ {
130 alias /path-to-static-content/;
131 autoindex off;
132 charset utf-8;
133 gzip on;
134 gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
135 location ~* \.(ico|gif|jpeg|jpg|png|woff|ttf|otf|svg|woff2|eot)$ {
136 expires 1y;
137 add_header Pragma public;
138 add_header Cache-Control "public";
139 }
140 location ~* \.(css|js|txt)$ {
141 expires 3600s;
142 add_header Pragma public;
143 add_header Cache-Control "public, must-revalidate";
144 }
145}
146```
147
148Also be careful when redirecting to a URL in your Python code. We noticed that
149if we didn't precisely set the cache-control and expires headers in the
150response, we didn't get the request on the server and therefore couldn't measure
151clicks. So when redirecting, do as follows and there will be no problems.
152
```python
# python ➜ bottlepy web micro-framework
response = bottle.HTTPResponse(status=302)
response.set_header("Cache-Control", "no-store, no-cache, must-revalidate")
response.set_header("Expires", "Thu, 01 Jan 1970 00:00:00 GMT")
response.set_header("Location", url)
return response
```

> Cache control in browsers is quite aggressive and you need to be precise to
> avoid future problems. We learned that lesson the hard way.

## Learn NGINX

When deciding on a web server we went with Nginx as a reverse proxy for our
applications. We adopted a micro-service oriented architecture early in the
project to ensure that when we scale we can easily add additional servers to
our cluster, and Nginx was crucial for load balancing and static content
delivery.

At first our config file was quite simple, but it later grew larger. After much
patching and adding of new settings I sat down and learned more about the guts
of Nginx. This proved to be very useful and we were able to squeeze much more
out of our setup. So I advise you to take your time and read through the
[documentation](https://nginx.org/en/docs/). It saved us a lot of headaches;
googling for solutions only goes so far.

## Use Redis/Memcached

As explained above, we use caching for basically everything. It is the
cornerstone of our services. At first we were very careful about how much we
stored in [Redis](https://redis.io/), but we later found that the memory
footprint stays very low even when storing large amounts of data in it.

So we gradually increased our usage to caching whole HTML outputs of the
dashboard. This improved our performance by an order of magnitude, and the
native TTL support goes hand in hand with our needs.

The reason we chose [Redis](https://redis.io/) over
[Memcached](https://memcached.org/) was Redis's out-of-the-box scalability,
although all of this can be achieved with Memcached as well.

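The whole-page caching pattern above can be sketched in a few lines. This is a
hedged illustration rather than our production code: the `render` callable and
the one-hour TTL are assumptions, and `client` can be any object with
Redis-style `get`/`setex` methods (such as `redis.Redis` from redis-py).

```python
def cached_html(client, key, render, ttl=3600):
    # `client` needs Redis-like get(key) and setex(key, ttl, value) methods
    html = client.get(key)
    if html is not None:
        return html  # cache hit ➜ skip rendering entirely
    html = render()
    # setex stores the value and lets Redis expire it after `ttl` seconds,
    # so no manual invalidation is needed
    client.setex(key, ttl, html)
    return html
```

With a real client this would look like `cached_html(redis.Redis(),
"dashboard:42", render_dashboard)`, where `render_dashboard` is whatever
produces the HTML.
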
## Conclusion

There are many more details that could have been written down, and every single
topic here deserves its own post, but you probably get the idea about the
problems we faced.
diff --git a/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md b/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md
deleted file mode 100644
index 8617abe..0000000
--- a/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md
+++ /dev/null
@@ -1,206 +0,0 @@
---
title: Profiling Python web applications with visual tools
url: profiling-python-web-applications-with-visual-tools.html
date: 2017-04-21T12:00:00+02:00
type: post
draft: false
---

I have been profiling my software with KCachegrind for a long time now, and I
was missing this option when developing APIs or other web services. I always
knew it was possible but never really took the time to dive into it.

Before we begin there are some requirements. We will need to:

- implement [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) in our web app,
- convert the output to [callgrind](http://valgrind.org/docs/manual/cl-manual.html) format with [pyprof2calltree](https://pypi.python.org/pypi/pyprof2calltree/),
- visualize the data with [KCachegrind](http://kcachegrind.sourceforge.net/html/Home.html) or [Profiling Viewer](http://www.profilingviewer.com/).

If you are using MacOS you should check out [Profiling
Viewer](http://www.profilingviewer.com/) or
[MacCallGrind](http://www.maccallgrind.com/).

![KCachegrind](/posts/python-profiling/kcachegrind.png)

We will divide this post into two main parts:

- writing a simple web service,
- visualizing the profile of this web service.

## Simple web-service

Let's use virtualenv so we don't pollute the base system. If you don't have
virtualenv installed you can install it with pip.
```bash
# let's install virtualenv globally
$ sudo pip install virtualenv

# let's also install pyprof2calltree globally
$ sudo pip install pyprof2calltree

# now we create the project
$ mkdir demo-project
$ cd demo-project/

# and a folder where we will store profiles
$ mkdir prof

# now we create an empty virtualenv in the venv/ folder
$ virtualenv --no-site-packages venv

# we now need to activate the virtualenv
$ source venv/bin/activate

# you can check that the virtualenv was correctly initialized by
# checking where your python interpreter is located
# if the command below points to your created directory and not some
# system dir like /usr/bin/python then everything is fine
$ which python

# we can check now if all is good ➜ if ok a couple of
# lines will be displayed
$ pip freeze
# appdirs==1.4.3
# packaging==16.8
# pyparsing==2.2.0
# six==1.10.0

# now we are ready to install bottlepy ➜ web micro-framework
$ pip install bottle

# you could deactivate the virtualenv, but you would then fall back
# to the system interpreter ➜ for now don't deactivate
$ deactivate
```

We are now ready to write a simple web service. Create a file app.py and paste
the code below into it.

```python
# -*- coding: utf-8 -*-

import bottle
import random
import cProfile

app = bottle.Bottle()

# this function is a decorator that wraps a function,
# performs profiling and saves the result to the subfolder
# prof/function-name.prof
# in our example only the awesome_random_number function will
# be profiled because it has do_cprofile applied
def do_cprofile(func):
    def profiled_func(*args, **kwargs):
        profile = cProfile.Profile()
        try:
            profile.enable()
            result = func(*args, **kwargs)
            profile.disable()
            return result
        finally:
            profile.dump_stats("prof/" + str(func.__name__) + ".prof")
    return profiled_func


# we enable profiling of a specific function by placing
# @do_cprofile above the function declaration
@app.route("/")
@do_cprofile
def awesome_random_number():
    awesome_random_number = random.randint(0, 100)
    return "awesome random number is " + str(awesome_random_number)

@app.route("/test")
def test():
    return "dummy test"

if __name__ == '__main__':
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 4000
    )

# run with 'python app.py'
# open browser 'http://0.0.0.0:4000'
```

When the browser hits the awesome\_random\_number() function, a profile is
created in the prof/ subfolder.

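If you just want a quick textual look before reaching for a visualizer, the
standard library's pstats module reads these .prof files directly. A
self-contained sketch (busy\_work is a hypothetical stand-in for a profiled
request handler):

```python
import cProfile
import os
import pstats
import tempfile

def busy_work():
    # hypothetical stand-in for a real request handler
    return sum(i * i for i in range(10000))

profile = cProfile.Profile()
profile.enable()
busy_work()
profile.disable()

# dump_stats writes the same .prof format the decorator above produces
fd, path = tempfile.mkstemp(suffix=".prof")
os.close(fd)
profile.dump_stats(path)

# load it back and print the five most expensive calls by cumulative time
stats = pstats.Stats(path)
stats.sort_stats("cumulative").print_stats(5)
```
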
## Visualize profile

Now let's convert the cProfile output to callgrind format.

```bash
$ cd prof/
$ pyprof2calltree -i awesome_random_number.prof
# this creates an 'awesome_random_number.prof.log' file in the same folder
```

This file can be opened with the visualization tools listed above. In this case
we will be using Profiling Viewer under MacOS (you can open the image in a new
tab). As you can see from this example, it shows the hierarchy and execution
order of your code.

![Profiling Viewer](/posts/python-profiling/profiling-viewer.png)

> Make sure you convert the cProfile output every time you want to refresh and
> look at possible optimizations, because cProfile updates the .prof file every
> time the browser hits the function.

This is just a simple example, but when you are developing real-life
applications this can be very illuminating, especially for seeing which parts
of your code are bottlenecks and need to be optimized.

## Update 2017-04-22

Reddit user [mvt](https://www.reddit.com/user/mvt) also recommended an awesome
web-based profile visualizer, [SnakeViz](https://jiffyclub.github.io/snakeviz/),
that directly takes the output of the
[cProfile](https://docs.python.org/2/library/profile.html#module-cProfile)
module.

<div class="reddit-embed" data-embed-media="www.redditmedia.com" data-embed-parent="false" data-embed-live="false" data-embed-uuid="583880c1-002e-41ed-a373-020a0ef2cff9" data-embed-created="2017-04-22T19:46:54.810Z"><a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/dgljhsb/">Comment</a> from discussion <a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/">Profiling Python web applications with visual tools</a>.</div><script async src="https://www.redditstatic.com/comment-embed.js"></script>

```bash
# let's install it globally as well
$ sudo pip install snakeviz

# now let's visualize
$ cd prof/
$ snakeviz awesome_random_number.prof
# this automatically opens a browser window and
# shows the visualized profile
```

![SnakeViz](/posts/python-profiling/snakeviz.png)

Reddit user [ccharles](https://www.reddit.com/user/ccharles) suggested a better
way of installing pip packages: target the user level instead of using sudo.

<div class="reddit-embed" data-embed-media="www.redditmedia.com" data-embed-parent="false" data-embed-live="false" data-embed-uuid="f4f0459e-684d-441e-bebe-eb49b2f0a31d" data-embed-created="2017-04-22T19:46:10.874Z"><a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/dglpzkx/">Comment</a> from discussion <a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/">Profiling Python web applications with visual tools</a>.</div><script async src="https://www.redditstatic.com/comment-embed.js"></script>

```bash
# now we need to add this path to our $PATH variable
# we do this by adding this line at the end of your
# ~/.bashrc file
PATH=$PATH:$HOME/.local/bin/

# in order to use this new configuration you can close
# and reopen the terminal or reload the .bashrc file
$ source ~/.bashrc

# now let's test that the new directory is present in $PATH
$ echo $PATH

# now we can install at the user level by adding --user,
# without the use of sudo
$ pip install snakeviz --user
```

Or, as suggested by [mvt](https://www.reddit.com/user/mvt), you can
use [pipsi](https://github.com/mitsuhiko/pipsi).
diff --git a/content/posts/2017-08-11-simple-iot-application.md b/content/posts/2017-08-11-simple-iot-application.md
deleted file mode 100644
index e31ac55..0000000
--- a/content/posts/2017-08-11-simple-iot-application.md
+++ /dev/null
@@ -1,607 +0,0 @@
---
title: Simple IOT application supported by real-time monitoring and data history
url: simple-iot-application.html
date: 2017-08-11T12:00:00+02:00
type: post
draft: false
---

## Initial thoughts

I have been developing these kinds of applications for the better part of the
last 5 years, and people keep asking me how to approach building one, so I will
try to explain it here.

IOT applications are really no different from any other kind of application.
We have data that needs to be collected and visualized in some form of tables
or charts. The main difference is that most of the time the data is collected
by some kind of device foreign to a developer who mainly operates in the web
domain. But fear not, it's not that different from writing some JavaScript.

There are many devices able to transmit data via a wireless or wired network
out of the box, but for the sake of example we will be using the commonly known
Arduino with a wireless module already on the board → [Arduino
MKR1000](https://store.arduino.cc/arduino-mkr1000).

In order to make this little project as accessible as possible I will also try
to make it as inexpensive as possible. By this I mean that I will avoid using
hosted virtual servers and will use my own laptop as the server. You must,
however, buy an Arduino MKR1000 to follow the steps below. If you do want to
deploy this software I would suggest using
[DigitalOcean](https://www.digitalocean.com) → the smallest VPS is very
inexpensive, making it one of the most affordable options out there. Please
note that this software will not run on stock web hosting that only supports
LAMP (Linux, Apache, MySQL, and PHP).

Before we begin, please note that this is strictly experimental code and not
well optimized; there are much better ways of handling some aspects of the
application, but those require much deeper knowledge of technology that is not
needed for an example like this.

**Development steps**

1. Simple Python API that will receive and store incoming data.
2. Prototype C++ code that will read "sensor data" and transmit it to the API.
3. Data visualization with charts → extends the Python web application.

Steps 1 and 3 will share the same web application. One route will be dedicated
to the API and another to serving HTML with the chart.

The schema below represents what we will try to achieve and how the different
parts relate to each other.

![Overview](/posts/iot-application/simple-iot-application-overview.svg)

## Simple Python API

I have always been a fan of simplicity, so we will be using [Bottle: Python Web
Framework](https://bottlepy.org/docs/dev/). It is a single-file web framework
that seriously simplifies working with routes and templating, and it has a
built-in web server that satisfies our needs in this case.

First we need to install the bottle package. This can be done by downloading
```bottle.py``` and placing it in the root of your application, or by using
pip: ```pip install bottle --user```.

If you are using Linux or MacOS then Python is already installed. If you want
to try this on Windows please install [Python for
Windows](https://www.python.org/downloads/windows/). There may be some problems
with the path when you try to launch ```python webapp.py```, so please take
care of this before you continue.

### Basic web application

The most basic bottle application is quite simple. Paste the code below into a
```webapp.py``` file and save it.

```python
# -*- coding: utf-8 -*-

import bottle

# initializing bottle app
app = bottle.Bottle()

# triggered when / is accessed from browser
# only accepts GET → no POST allowed
@app.route("/", method=["GET"])
def route_default():
    return "howdy from python"

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

To run this simple application, open a command prompt or terminal, go to the
folder containing the file and type ```python webapp.py```. If everything goes
well, open your web browser and point it to ```http://0.0.0.0:5000```.

If you would like to change the port of your application (to, say, port 80)
without running your app as root, this presents a problem: TCP/IP port numbers
below 1024 are privileged ports → this is a security feature. So for both
simplicity and security, use a port number above 1024, as I have with port
5000.

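You can see the restriction for yourself with a few lines of socket code; a
quick sketch (run it as a normal user, not as root):

```python
import socket

def try_bind(port):
    # attempt to bind a TCP socket to `port`; True on success
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("0.0.0.0", port))
        return True
    except OSError:
        # ports below 1024 raise a permission error for non-root users
        # (an already-taken port fails here too)
        return False
    finally:
        sock.close()

print(try_bind(80))  # usually False for a normal user
print(try_bind(0))   # True → the OS picks a free unprivileged port
```
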
If this fails at any point, please fix it before you continue, because nothing
below will work otherwise.

We use 0.0.0.0 as the default host so that the app is available over your local
network. If you find your local IP with ```ifconfig``` you can try accessing
the site from your phone (if it is on the same network/router as your machine)
at an address like ```http://192.168.1.15:5000```. This is a must, because the
Arduino will be accessing this application to send its data.

### Web application security

There is a lot to be said about security; it is the topic of many books. Of
course all of it cannot be covered here, but to establish some basic security →
you should always use SSL with your application. Some fantastic free
certificates are available from [Let's Encrypt - Free SSL/TLS
Certificates](https://letsencrypt.org). With an SSL certificate installed you
should then make use of HTTP headers and send your "API key" via a header. If
your key is sent via a header it is encrypted by SSL and travels encrypted over
the network. Never send your API key as a GET parameter like
```http://example.com/?api_key=somekeyvalue```. The problem with sending it
this way is that the key is visible in logs and to network sniffers.

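When comparing the received key on the server, it is also worth using a
constant-time comparison so the check itself doesn't leak timing information.
A small sketch (the key here mirrors the one used later in this post; the
helper is hypothetical, not part of the application code below):

```python
import hmac

API_KEY = "JtF2aUE5SGHfVJBCG5SH"  # example key, same as in the code below

def key_is_valid(received):
    # hmac.compare_digest avoids early-exit timing differences
    if received is None:
        return False
    return hmac.compare_digest(received, API_KEY)
```
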
There is a fantastic article describing several aspects of security: [11 Web
Application Security Best
Practices](https://www.keycdn.com/blog/web-application-security-best-practices/).
Please check it out.

### Simple API for writing data-points

We will now take the boilerplate code from the example above and extend it to
write the data received by the API to local storage. For this example I will
use SQLite3, because it plays well with Python and can store quite a large
amount of data. I have been using it to collect gigabytes of data in a single
database without any corruption or problems → your experience may vary.

To avoid hand-writing SQL I will be using [Dataset: databases for lazy
people](https://dataset.readthedocs.io/en/latest/index.html). This package
abstracts SQL away and simplifies writing and reading data from the database.
You can install it with pip: ```pip install dataset --user```.

Because the API will use the POST method, I will verify the code works with the
[Restlet Client for Google
Chrome](https://chrome.google.com/webstore/detail/restlet-client-rest-api-t/aejoelaoggembcahagimdiliamlcdmfm).
This tool also allows you to set headers → for basic security with an API key.

To quickly generate passwords or API keys I usually use this nifty website,
[RandomKeygen](https://randomkeygen.com/).

Copy and paste the code below over your previous code in ```webapp.py```.

```python
# -*- coding: utf-8 -*-

import time
import bottle
import dataset

# initializing bottle app
app = bottle.Bottle()

# connects to sqlite database
# check_same_thread=False allows using it in multi-threaded mode
app.config["dsn"] = dataset.connect("sqlite:///data.db?check_same_thread=False")

# api key that will be used in Arduino code
app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH"

# triggered when /api is accessed
# only accepts POST → no GET allowed
@app.route("/api", method=["POST"])
def route_default():
    status = 400
    ts = int(time.time())  # current timestamp
    value = bottle.request.body.read()  # data from device
    api_key = bottle.request.get_header("Api-Key")  # api key from header

    # outputs received data to console for debugging
    print ">>> {} :: {}".format(value, api_key)

    # if api_key is correct and value is present
    # then writes the data-point to the point table
    if api_key == app.config["api_key"] and value:
        app.config["dsn"]["point"].insert(dict(ts=ts, value=value))
        status = 200

    # we only need to return a status
    return bottle.HTTPResponse(status=status, body="")

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

To run this, simply go to the folder containing the Python file and run
```python webapp.py``` from a terminal. If everything goes well you should have
a simple API available via the POST method on the /api route.

After testing the service with Restlet Client you should be able to view your
data in the database file ```data.db```.

![REST settings example](/posts/iot-application/iot-rest-example.png)

You can also inspect the contents of the new database file using a desktop
client for SQLite → [DB Browser for SQLite](http://sqlitebrowser.org/).

![SQLite database example](/posts/iot-application/iot-sqlite-db.png)

The table structure is as simple as it can be: we have ts (timestamp) and value
(the value from the Arduino). As you can see, the timestamp is generated on the
API side. If you happened to have an accurate clock on the Arduino it would be
better to generate and send the timestamp along with the value. This would be
particularly useful if we were collecting sensor data at a higher frequency and
then sending it to the API in bulk.

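Such a bulk endpoint is not built in this post, but parsing a batch of
device-timestamped readings could be sketched like this (the JSON payload
shape is an assumption, not something the Arduino code below produces):

```python
import json

def parse_bulk(body):
    # parse a JSON array of {"ts": ..., "value": ...} readings and
    # return (ts, value) tuples ready for insertion into the point table;
    # raises ValueError on a malformed payload
    readings = json.loads(body)
    points = []
    for reading in readings:
        # device-side timestamps let us batch without losing timing info
        points.append((int(reading["ts"]), reading["value"]))
    return points
```
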
If you deploy this app with uWSGI in multi-threaded mode, use a DSN (Data
Source Name) URL with ```?check_same_thread=False```.

Now that we have a working API with enough basic security that unwanted
visitors cannot post data to the database, we can proceed and program the
Arduino to send data to the API.

## Sending data to API with Arduino MKR1000

First of all, you need an MKR1000 module and a microUSB cable to proceed. If
you have ever done any work with Arduino you know that you also need the
[Arduino IDE](https://www.arduino.cc/en/Main/Software). From the provided link
you can download and install the IDE. Once that task is completed and you have
successfully run the blink example, proceed to the next step.

In order to use the wireless capabilities of the MKR1000 you first need to
install the [WiFi101 library](https://www.arduino.cc/en/Reference/WiFi101) in
the Arduino IDE. Check before you install → you may already have it.

The code below is a working example that sends data to the API. Before you test
it, make sure the Python web application is running. Then change the settings
for wifi, the API endpoint and api_key. If for some reason the code below
doesn't work for you please leave a comment and I'll try to help.

Once you have opened the IDE and copied this code, try to compile and upload
it. Then open the "Serial monitor" to see the output from the Arduino.

```c
#include <WiFi101.h>

// wifi settings
char ssid[] = "ssid-name";
char pass[] = "ssid-password";

// api server endpoint
char server[] = "192.168.6.22";
int port = 5000;

// api key that must be the same as the one in the Python code
String api_key = "JtF2aUE5SGHfVJBCG5SH";

// how often data is sent, in ms - every 5 seconds
int timeout = 1000 * 5;

int status = WL_IDLE_STATUS;

void setup() {

  // initialize serial and wait for port to open:
  Serial.begin(9600);
  delay(1000);

  // check for the presence of the shield
  if (WiFi.status() == WL_NO_SHIELD) {
    Serial.println("WiFi shield not present");
    while (true);
  }

  // attempt to connect to wifi network
  while (status != WL_CONNECTED) {
    Serial.print("Attempting to connect to SSID: ");
    Serial.println(ssid);
    status = WiFi.begin(ssid, pass);
    // wait 10 seconds for connection
    delay(10000);
  }

  // output wifi status to serial monitor
  Serial.print("SSID: ");
  Serial.println(WiFi.SSID());

  IPAddress ip = WiFi.localIP();
  Serial.print("IP Address: ");
  Serial.println(ip);

  long rssi = WiFi.RSSI();
  Serial.print("signal strength (RSSI):");
  Serial.print(rssi);
  Serial.println(" dBm");
}

void loop() {
  WiFiClient client;

  if (client.connect(server, port)) {

    // I use a random number generator for this example
    // but you can use analog or digital inputs from the arduino
    String content = String(random(1000));

    client.println("POST /api HTTP/1.1");
    client.println("Host: " + String(server)); // HTTP/1.1 expects a Host header
    client.println("Connection: close");
    client.println("Api-Key: " + api_key);
    client.println("Content-Length: " + String(content.length()));
    client.println();
    client.println(content);

    delay(100);
    client.stop();
    Serial.println("Data sent successfully ...");

  } else {
    Serial.println("Problem sending data ...");
  }

  // waits for x seconds and continues looping
  delay(timeout);
}
```

As seen from the example, the Arduino generates a random integer between 0 and
999. You can easily replace this with a temperature sensor or any other kind of
sensor.

Now that we have the API under the hood and the Arduino is sending demo data,
we can focus on data visualization.

## Data visualization

Before we continue, let's examine our project folder structure. Currently we
only have two files in the project:

_simple-iot-app/_

* _webapp.py_
* _data.db_

We will now add an HTML template that contains its CSS and JavaScript inline,
for simplicity. For the bottle framework to be able to scan the root
application folder for templates, we add ```bottle.TEMPLATE_PATH.insert(0,
"./")``` to ```webapp.py```. By default the bottle framework uses a
```views/``` subfolder to store templates. Overriding this is not ideal, and if
you use bottle to develop real web applications you should stick to the native
behavior and store templates in the predefined folder; but for the sake of
example we will override it. Be careful to fully replace your code with the new
code provided below → avoid partially replacing code in the file :) New code
for reading data-points is also included in the Python example below.

First we add a new route to the web application, triggered when the browser
hits the root of the application at ```http://0.0.0.0:5000/```. This route does
nothing more than render the ```frontend.html``` template, via ```return
bottle.template("frontend.html")```. Check the code below to examine exactly
how this is done.

Next we expand the ```/api``` route to use different methods for writing and
reading data-points. For writing a data-point we use the POST method, and for
reading points we use the GET method. The GET method returns a JSON array with
the latest readings and historical data.

There is a fantastic JavaScript library for plotting time-series charts called
[MetricsGraphics.js](https://www.metricsgraphicsjs.org), based on the
[D3.js](https://d3js.org/) data visualization library.

MetricsGraphics.js requires the following data schema, so we need to transform
the data from the database into this format:

```json
[
    {
        "date": "2017-08-11 01:07:20",
        "value": 933
    },
    {
        "date": "2017-08-11 01:07:30",
        "value": 743
    }
]
```

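The transformation itself is only a couple of lines; a sketch of the idea
(using UTC here for illustration, while the full application below uses local
time via fromtimestamp):

```python
import datetime

def to_mg_points(rows):
    # convert (ts, value) rows into the list-of-dicts shape
    # MetricsGraphics.js expects, formatting the epoch timestamp
    # as a "%Y-%m-%d %H:%M:%S" date string
    return [
        {
            "date": datetime.datetime.utcfromtimestamp(int(ts)).strftime("%Y-%m-%d %H:%M:%S"),
            "value": value,
        }
        for ts, value in rows
    ]
```
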
The updated web application is shown below; after that we only need the
```frontend.html``` that we will develop next. If you start the web app now and
go to the application root, it will return an error because frontend.html does
not exist yet.

```python
# -*- coding: utf-8 -*-

import time
import bottle
import json
import datetime
import dataset

# initializing bottle app
app = bottle.Bottle()

# adds root directory as template folder
bottle.TEMPLATE_PATH.insert(0, "./")

# connects to sqlite database
# check_same_thread=False allows using it in multi-threaded mode
app.config["db"] = dataset.connect("sqlite:///data.db?check_same_thread=False")

# api key that will be used in Arduino code
app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH"

# triggered when / is accessed from browser
# only accepts GET → no POST allowed
@app.route("/", method=["GET"])
def route_default():
    return bottle.template("frontend.html")

# triggered when /api is accessed
# accepts POST and GET
@app.route("/api", method=["GET", "POST"])
def route_api():

    # if method is POST then we write a datapoint
    if bottle.request.method == "POST":
        status = 400
        ts = int(time.time())  # current timestamp
        value = bottle.request.body.read()  # data from device
        api_key = bottle.request.get_header("Api-Key")  # api key from header

        # outputs received data to console for debugging
        print ">>> {} :: {}".format(value, api_key)

        # if api_key is correct and value is present
        # then writes the data-point to the point table
        if api_key == app.config["api_key"] and value:
            app.config["db"]["point"].insert(dict(ts=ts, value=value))
            status = 200

        # we only need to return a status
        return bottle.HTTPResponse(status=status, body="")

    # if method is GET then we read the datapoints
    else:
        response = []
        datapoints = app.config["db"]["point"].all()

        for point in datapoints:
            response.append({
                "date": datetime.datetime.fromtimestamp(int(point["ts"])).strftime("%Y-%m-%d %H:%M:%S"),
                "value": point["value"]
            })

        bottle.response.content_type = "application/json"
        return json.dumps(response)

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

And now, finally, we can implement ```frontend.html```. Create a file with this
name and copy in the code below. When you are done you can start the web
application; the steps for this part are listed below the code.

```html
<!DOCTYPE html>
<html>

  <head>
    <meta charset="utf-8">
    <title>Simple IOT application</title>
  </head>

  <body>

    <h1>Simple IOT application</h1>

    <div class="chart-placeholder">
      <div id="chart"></div>
    </div>

    <!-- application main script -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.0/d3.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/metrics-graphics/2.11.0/metricsgraphics.min.js"></script>
    <script>
      function fetch_and_render() {
        d3.json("/api", function(data) {
          data = MG.convert.date(data, "date", "%Y-%m-%d %H:%M:%S");
          MG.data_graphic({
            data: data,
            chart_type: "line",
            full_width: true,
            height: 270,
            target: document.getElementById("chart"),
            x_accessor: "date",
            y_accessor: "value"
          });
        });
      }
      window.onload = function() {
        // initial call for rendering
        fetch_and_render();

        // updates chart every 5 seconds
        setInterval(function() {
          fetch_and_render();
        }, 5000);
      }
    </script>

    <!-- application styles -->
    <style>
      body {
        font: 13px sans-serif;
        padding: 20px 50px;
      }
      .chart-placeholder {
        border: 2px solid #ccc;
        width: 100%;
        user-select: none;
      }
      /* chart styles */
      .mg-line1-color {
        stroke: red;
        stroke-width: 2;
      }
      .mg-main-area, .mg-main-line {
        fill: #fff;
      }
      .mg-x-axis line, .mg-y-axis line {
        stroke: #b3b2b2;
        stroke-width: 1px;
      }
    </style>

  </body>

</html>
```

Now the folder structure should look like this:

_simple-iot-app/_

* _webapp.py_
* _data.db_
* _frontend.html_

Ok, let's now start the application and start feeding it data:

1. ```python webapp.py```
2. connect the Arduino MKR1000 to a power source
3. open a browser and go to ```http://0.0.0.0:5000```

If everything goes well you should see new data-points rendered on the chart
every 5 seconds.

If you navigate to ```http://0.0.0.0:5000``` you should see the rendered chart
as shown in the picture below.

![Application output](/posts/iot-application/iot-app-output.png)

The complete application with all the code is available for
[download](/posts/iot-application/simple-iot-application.zip).

593## Conclusion
594
595I hope this clarifies some aspects of IOT application development. Of course
596this is a minimal example and is far from what can be done in real life with
597some further dive into other technologies.
598
599If you would like to continue exploring IOT world here are some interesting
600resources for you to examine:
601
602* [Reading Sensors with an Arduino](https://www.allaboutcircuits.com/projects/reading-sensors-with-an-arduino/)
603* [MQTT 101 – How to Get Started with the lightweight IoT Protocol](http://www.hivemq.com/blog/how-to-get-started-with-mqtt)
604* [Stream Updates with Server-Sent Events](https://www.html5rocks.com/en/tutorials/eventsource/basics/)
605* [Internet of Things (IoT) Tutorials](http://www.tutorialspoint.com/internet_of_things/)
606
607Any comments or additional ideas are welcome in the comments below.
diff --git a/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md b/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
deleted file mode 100644
index 5ba7b64..0000000
--- a/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
+++ /dev/null
@@ -1,331 +0,0 @@
1---
2title: Using DigitalOcean Spaces Object Storage with FUSE
3url: using-digitalocean-spaces-object-storage-with-fuse.html
4date: 2018-01-16T12:00:00+02:00
5type: post
6draft: false
7---
8
9Couple of months ago [DigitalOcean](https://www.digitalocean.com) introduced new
10product called
11[Spaces](https://blog.digitalocean.com/introducing-spaces-object-storage/) which
12is Object Storage very similar to Amazon's S3. This really piqued my interest,
13because this was something I was missing, and the thought of going over the
14internet for such functionality held no appeal for me. In keeping with
15their previous pricing, this is also very cheap, and the pricing page is a
16no-brainer compared to AWS or GCE. [Prices are clearly and precisely defined and
17outlined](https://www.digitalocean.com/pricing/). You must love them for that
18:)
19
20## Initial requirements
21
22* Is it possible to use them as a mounted drive with FUSE? (tl;dr YES)
23* Will the performance degrade over time and over different sizes of objects?
24 (tl;dr NO&YES)
25* Can storage be mounted on multiple machines at the same time and be writable?
26 (tl;dr YES)
27
28> Let me be clear. The scripts I use here are made just for benchmarking and are
29> not intended to be used in real-life situations. That said, I am looking into
30> using these approaches while adding a caching service in front and then
31> dumping everything as objects to storage. This could potentially be an
32> interesting post of its own. But if you need real-time data without
33> eventual consistency, please take these scripts as they are: not usable in
34> such situations.
35
36## Is it possible to use them as a mounted drive with FUSE?
37
38Well, actually they can be used in such a manner. Because they are similar to [AWS
39S3](https://aws.amazon.com/s3/) many tools are available and you can find many
40articles and [Stackoverflow items](https://stackoverflow.com/search?q=s3+fuse).
41
42To make this work you will need a DigitalOcean account. If you don't have one you
43will not be able to test this code. But if you have an account, go and
44[create a new
45Droplet](https://cloud.digitalocean.com/droplets/new?size=s-1vcpu-1gb&region=ams3&distro=debian&distroImage=debian-9-x64&options=private_networking,install_agent).
46If you click on this link you will already have Debian 9 preselected with the
47smallest VM option.
48
49* Please be sure to add your SSH key, because we will log in to this machine
50 remotely.
51* If you change your region, remember which one you chose because we will
52 need this information when we mount the Space on our machine.
53
54Instructions on how to use SSH keys and how to set them up are available in the
55article [How To Use SSH Keys with DigitalOcean
56Droplets](https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets).
57
58![DigitalOcean Droplets](/posts/do-fuse/fuse-droplets.png)
59
60After we have created the Droplet it's time to create a new Space. This is done by
61clicking the [Create](https://cloud.digitalocean.com/spaces/new) button (top right
62corner) and selecting Spaces. Choose a pronounceable ```Unique name``` because we
63will use it in the examples below. You can choose either Private or Public; it
64doesn't matter in our case, and you can always change it in the future.
65
66When you have created the new Space we should [generate an Access
67key](https://cloud.digitalocean.com/settings/api/tokens). This link will guide you
68to the page where you can generate the key. After you create a new one, please
69save the provided Key and Secret because the Secret will not be shown again.
70
71![DigitalOcean Spaces](/posts/do-fuse/fuse-spaces.png)
72
73Now that we have a new Space and an Access key we can SSH into our machine.
74
75```bash
76# replace IP with the ip of your newly created droplet
77ssh root@IP
78
79# this will install utilities for mounting storage objects as FUSE
80apt install s3fs
81
82# we now need to provide credentials (access key we created earlier)
83# replace KEY and SECRET with your own credentials but leave the colon between them
84# we also need to set proper permissions
85echo "KEY:SECRET" > .passwd-s3fs
86chmod 600 .passwd-s3fs
87
88# now we mount space to our machine
89# replace UNIQUE-NAME with the name you chose earlier
90# if you chose a different region for your Space, adjust the -ourl option (ams3)
91s3fs UNIQUE-NAME /mnt/ -ourl=https://ams3.digitaloceanspaces.com -ouse_cache=/tmp
92
93# now we try to create a file
94# once mounted, it may take a couple of seconds to retrieve data
95echo "Hello cruel world" > /mnt/hello.txt
96```
97
98After all this you can return to your browser and go to [DigitalOcean
99Spaces](https://cloud.digitalocean.com/spaces) and click on the Space you
100created. If the file hello.txt is present, you have successfully mounted the
101Space on your machine and written data to it.
102
103I chose the same region for my Droplet and my Space, but you don't have to; you
104can use different regions. I don't know what this actually does to performance.
105
106Additional information on FUSE:
107
108* [Github project page for s3fs](https://github.com/s3fs-fuse/s3fs-fuse)
109* [FUSE - Filesystem in Userspace](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)
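
If you want the mount to restore itself after a reboot, s3fs can also be driven
from ```/etc/fstab```. Here is a sketch (option names taken from the s3fs-fuse
README; ```UNIQUE-NAME```, the region and the credentials path are the same
placeholders as above, so verify this against your own setup and s3fs version):

```txt
# /etc/fstab entry for mounting the Space at boot (s3fs-fuse syntax)
UNIQUE-NAME /mnt fuse.s3fs _netdev,use_cache=/tmp,passwd_file=/root/.passwd-s3fs,url=https://ams3.digitaloceanspaces.com 0 0
```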
110
111## Will the performance degrade over time and over different sizes of objects?
112
113For this task I didn't want to just read and write text files or upload
114images. I actually wanted to figure out whether using something like SQLite is
115viable in this case.
116
117### Measurement experiment 1: File copy
118
119```bash
120# first we create some dummy files at different sizes
121dd if=/dev/zero of=10KB.dat bs=1024 count=10 #10KB
122dd if=/dev/zero of=100KB.dat bs=1024 count=100 #100KB
123dd if=/dev/zero of=1MB.dat bs=1024 count=1024 #1MB
124dd if=/dev/zero of=10MB.dat bs=1024 count=10240 #10MB
125
126# now we set time command to only return real
127TIMEFORMAT=%R
128
129# now lets test it
130(time cp 10KB.dat /mnt/) |& tee -a 10KB.results.txt
131
132# and now we automate
133# this will perform the same operation 100 times
134# this will output results into separate files based on object size
135n=0; while (( n++ < 100 )); do (time cp 10KB.dat /mnt/10KB.$n.dat) |& tee -a 10KB.results.txt; done
136n=0; while (( n++ < 100 )); do (time cp 100KB.dat /mnt/100KB.$n.dat) |& tee -a 100KB.results.txt; done
137n=0; while (( n++ < 100 )); do (time cp 1MB.dat /mnt/1MB.$n.dat) |& tee -a 1MB.results.txt; done
138n=0; while (( n++ < 100 )); do (time cp 10MB.dat /mnt/10MB.$n.dat) |& tee -a 10MB.results.txt; done
139```
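
To turn those ```*.results.txt``` files into numbers you can compare, a short
Python helper like the one below can be used (a hypothetical addition, not part
of the original setup; it assumes one ```TIMEFORMAT=%R``` measurement per line,
with either a dot or a locale comma as the decimal separator):

```python
import statistics

def summarize(path):
    # one `time` measurement per line; some locales print a comma as the
    # decimal separator, so normalize it before parsing
    with open(path) as fp:
        times = [float(line.strip().replace(",", "."))
                 for line in fp if line.strip()]
    return min(times), statistics.mean(times), max(times)
```

Calling ```summarize("10KB.results.txt")``` then gives the fastest, average and
slowest copy for that object size.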
140
141Files of size 100MB were not transferred successfully and ended up with an
142error (cp: failed to close '/mnt/100MB.1.dat': Operation not permitted).
143
144As I suspected, object size is not really that important. Sadly, I don't have
145the time to test performance over longer periods. But if any of you do, please
146send me your data; I would be interested in seeing the results.
147
148**Here are plotted results**
149
150You can download [raw result here](/posts/do-fuse/copy-benchmarks.tsv).
151Measurements are in seconds.
152
153<script src="//cdn.plot.ly/plotly-latest.min.js"></script>
154<div id="copy-benchmarks"></div>
155<script>
156(function(){
157 var request = new XMLHttpRequest();
158 request.open("GET", "/posts/do-fuse/copy-benchmarks.tsv", true);
159 request.onload = function() {
160 if (request.status >= 200 && request.status < 400) {
161 var payload = request.responseText.trim();
162 var tsv = payload.split("\n");
163 for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
164 var traces = [];
165 var headers = tsv[0];
166 tsv.shift();
167 Array.prototype.forEach.call(headers, function(el, idx) {
168 var x = [];
169 var y = [];
170 for (var j=0; j<tsv.length; j++) {
171 x.push(j);
172 y.push(parseFloat(tsv[j][idx].replace(",", ".")));
173 }
174 traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
175 });
176 var copy = Plotly.newPlot("copy-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 40, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } }, xaxis: { title: "fn(i)", titlefont: { size: 12 } } });
177 } else { }
178 };
179 request.onerror = function() { };
180 request.send(null);
181})();
182</script>
183
184As far as these tests show, performance is quite stable and predictable,
185which is fantastic. But this is a small test spanning only a couple of
186hours, so you should not trust it completely.
187
188### Measurement experiment 2: SQLite performance
189
190I was unable to use the database file directly from the mounted drive, so this
191is a no-go, as I suspected. Instead, I executed the code below on a local disk
192just to get some benchmarks. I repeated DROPTABLE, CREATETABLE, INSERTMANY,
193FETCHALL and COMMIT 1000 times, inserting 1000 records per iteration, to
194generate statistics. As you can see, the performance of SQLite is quite amazing.
195You could then potentially just copy the file to the mounted drive and be done
with it.
196
197```python
198import time
199import sqlite3
200import sys
201
202if len(sys.argv) < 4:
203 print("usage: python sqlite-benchmark.py DB_PATH NUM_RECORDS REPEAT")
204 exit()
205
206def data_iter(x):
207 for i in range(x):
208 yield "m" + str(i), "f" + str(i*i)
209
210header_line = "%s\t%s\t%s\t%s\t%s\n" % ("DROPTABLE", "CREATETABLE", "INSERTMANY", "FETCHALL", "COMMIT")
211with open("sqlite-benchmarks.tsv", "w") as fp:
212 fp.write(header_line)
213
214start_time = time.time()
215conn = sqlite3.connect(sys.argv[1])
216c = conn.cursor()
217end_time = time.time()
218result_time = CONNECT = end_time - start_time
219print("CONNECT: %g seconds" % (result_time))
220
221start_time = time.time()
222c.execute("PRAGMA journal_mode=WAL")
223c.execute("PRAGMA temp_store=MEMORY")
224c.execute("PRAGMA synchronous=OFF")
end_time = time.time()
225result_time = PRAGMA = end_time - start_time
226print("PRAGMA: %g seconds" % (result_time))
227
228for i in range(int(sys.argv[3])):
229 print("#%i" % (i))
230
231 start_time = time.time()
232 c.execute("drop table if exists test")
233 end_time = time.time()
234 result_time = DROPTABLE = end_time - start_time
235 print("DROPTABLE: %g seconds" % (result_time))
236
237 start_time = time.time()
238 c.execute("create table if not exists test(a,b)")
239 end_time = time.time()
240 result_time = CREATETABLE = end_time - start_time
241 print("CREATETABLE: %g seconds" % (result_time))
242
243 start_time = time.time()
244 c.executemany("INSERT INTO test VALUES (?, ?)", data_iter(int(sys.argv[2])))
245 end_time = time.time()
246 result_time = INSERTMANY = end_time - start_time
247 print("INSERTMANY: %g seconds" % (result_time))
248
249 start_time = time.time()
250 c.execute("select count(*) from test")
251 res = c.fetchall()
252 end_time = time.time()
253 result_time = FETCHALL = end_time - start_time
254 print("FETCHALL: %g seconds" % (result_time))
255
256 start_time = time.time()
257 conn.commit()
258 end_time = time.time()
259 result_time = COMMIT = end_time - start_time
260 print("COMMIT: %g seconds" % (result_time))
261
262	print()
263 log_line = "%f\t%f\t%f\t%f\t%f\n" % (DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT)
264 with open("sqlite-benchmarks.tsv", "a") as fp:
265 fp.write(log_line)
266
267start_time = time.time()
268conn.close()
269end_time = time.time()
270result_time = CLOSE = end_time - start_time
271print("CLOSE: %g seconds" % (result_time))
272```
273
274You can download the [raw results here](/posts/do-fuse/sqlite-benchmarks.tsv).
275And again, these results were obtained on local block storage and do not
276represent the capabilities of object storage. With my current approach and the
277state of the test code, those measurements cannot be done; I would need to make
278the Python code much more robust and check locking, etc.
279
280<div id="sqlite-benchmarks"></div>
281<script>
282(function(){
283 var request = new XMLHttpRequest();
284 request.open("GET", "/posts/do-fuse/sqlite-benchmarks.tsv", true);
285 request.onload = function() {
286 if (request.status >= 200 && request.status < 400) {
287 var payload = request.responseText.trim();
288 var tsv = payload.split("\n");
289 for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
290 var traces = [];
291 var headers = tsv[0];
292 tsv.shift();
293 Array.prototype.forEach.call(headers, function(el, idx) {
294 var x = [];
295 var y = [];
296 for (var j=0; j<tsv.length; j++) {
297 x.push(j);
298 y.push(parseFloat(tsv[j][idx].replace(",", ".")));
299 }
300 traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
301 });
302 var sqlite = Plotly.newPlot("sqlite-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 50, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } } });
303 } else { }
304 };
305 request.onerror = function() { };
306 request.send(null);
307})();
308</script>
309
310## Can storage be mounted on multiple machines at the same time and be writable?
311
312Well, this one didn't take long to test. And the answer is **YES**. I mounted
313the Space on both machines and measured the same performance on both. But
314because a file is downloaded before a write and then uploaded on completion,
315there could potentially be problems if another process is trying to access the
316same file.
317
318## Observations and conclusion
319
320Using Spaces this way makes it easier to access and manage files. But beyond
321that, you would need to write additional code to make this play nice with your
322applications.
323
324Nevertheless, this was extremely simple to set up and use, and it is just
325another excellent product in the DigitalOcean product line. I found this exercise
326very valuable and am thinking about implementing some sort of mechanism for
327SQLite, so data can be stored on Spaces and accessed by many VMs. For a project
328where data doesn't need to be accessible in real time and can be a couple of
329minutes old, this would be very interesting. If any of you find this
330proposal interesting, please write in the comment box below or shoot me an email
331and I will keep you posted.
diff --git a/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md b/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md
deleted file mode 100644
index 2ec9387..0000000
--- a/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md
+++ /dev/null
@@ -1,411 +0,0 @@
1---
2title: Encoding binary data into DNA sequence
3url: encoding-binary-data-into-dna-sequence.html
4date: 2019-01-03T12:00:00+02:00
5type: post
6draft: false
7---
8
9## Initial thoughts
10
11Imagine a world where you could go outside, take a leaf from a tree, put
12it through your personal DNA sequencer, and get data like music, videos or
13computer programs from it. Well, this is all possible now. It has not been done
14on a large scale because creating DNA strands is quite expensive, but it's
15possible.
16
17Encoding data into a DNA sequence is a relatively simple process once you
18understand the relationship between binary data and nucleotides, and scientists
19have been making large leaps in this field in order to provide a viable
20long-term storage solution for our data, one that would potentially survive our
21species in case of a global disaster. We could imprint all the world's knowledge
22into plants and ensure the survival of our knowledge.
23
24A more optimistic use for this technology would be easier storage of the
25ever-growing data we produce every day. Once machines for sequencing DNA become
26fast enough and cheap enough, this could mean the next evolution of storing
27data, abandoning classical hard and solid state drives in data warehouses.
28
29As things currently stand, this is still not viable, but it is quite an amazing
30and cool technology.
31
32My interest in this field is purely in the encoding processes and experimental
33testing, mainly because I don't have access to these expensive machines. My
34initial goal was to create a toolkit that anybody can use to encode
35their data into a proper DNA sequence.
36
37## Glossary
38
39**deoxyribose** A five-carbon sugar molecule with a hydrogen atom rather than a
40hydroxyl group in the 2′ position; the sugar component of DNA nucleotides.
41
42**double helix** The molecular shape of DNA in which two strands of nucleotides
43wind around each other in a spiral shape.
44
45**nitrogenous base** A nitrogen-containing molecule that acts as a base; often
46referring to one of the purine or pyrimidine components of nucleic acids.
47
48**phosphate group** A molecular group consisting of a central phosphorus atom
49bound to four oxygen atoms.
50
51**RGB** The RGB color model is an additive color model in which red, green and
52blue light are added together in various ways to reproduce a broad array of
53colors.
54
55**GCC** The GNU Compiler Collection is a compiler system produced by the GNU
56Project supporting various programming languages.
57
58## Data encoding
59
60**TL;DR:** Encoding involves the use of a code to change original data into a
61form that can be used by an external process.
62
63Encoding is the process of converting data into a format required for a number
64of information processing needs, including:
65
66- Program compiling and execution
67- Data transmission, storage and compression/decompression
68- Application data processing, such as file conversion
69
70Encoding can have two meanings:
71
72- In computer technology, encoding is the process of applying a specific code,
73 such as letters, symbols and numbers, to data for conversion into an
74 equivalent cipher.
75- In electronics, encoding refers to analog to digital conversion.
76
77## Quick history of DNA
78
79- **1869** - Friedrich Miescher identifies "nuclein".
80- **1900s** - The Eugenics Movement.
81- **1900** – Mendel's theories are rediscovered by researchers.
82- **1944** - Oswald Avery identifies DNA as the 'transforming principle'.
83- **1952** - Rosalind Franklin photographs crystallized DNA fibres.
84- **1953** - James Watson and Francis Crick discover the double helix structure of DNA.
85- **1965** - Marshall Nirenberg is the first person to sequence the bases in each codon.
86- **1983** - Huntington's disease is the first mapped genetic disease.
87- **1990** - The Human Genome Project begins.
88- **1995** - Haemophilus Influenzae is the first bacterium genome sequenced.
89- **1996** - Dolly the sheep is cloned.
90- **1999** - First human chromosome is decoded.
91- **2000** – Genetic code of the fruit fly is decoded.
92- **2002** – Mouse is the first mammal to have its genome decoded.
93- **2003** – The Human Genome Project is completed.
94- **2013** – DNA Worldwide and Eurofins Forensic discover identical twins have differences in their genetic makeup.
95
96## What is DNA?
97
98Deoxyribonucleic acid, a self-replicating material which is **present in nearly
99all living organisms** as the main constituent of chromosomes. It is the
100**carrier of genetic information**.
101
102> The nitrogen in our DNA, the calcium in our teeth, the iron in our blood,
103> the carbon in our apple pies were made in the interiors of collapsing stars.
104> We are made of starstuff.
105> **-- Carl Sagan, Cosmos**
106
107The nucleotide in DNA consists of a sugar (deoxyribose), one of four bases
108(cytosine (C), thymine (T), adenine (A), guanine (G)), and a phosphate.
109Cytosine and thymine are pyrimidine bases, while adenine and guanine are purine
110bases. The sugar and the base together are called a nucleoside.
111
112![DNA](/posts/dna-sequence/dna-basics.jpg)
113*DNA (a) forms a double stranded helix, and (b) adenine pairs with thymine and
114cytosine pairs with guanine. (credit a: modification of work by Jerome Walker,
115Dennis Myts)*
116
117## Encode binary data into DNA sequence
118
119As an input file you can use any file you want:
120
121- ASCII files,
122- Compiled programs,
123- Multimedia files (MP3, MP4, MKV, etc.),
124- Images,
125- Database files,
126- etc.
127
128Note: If you copy all the bytes from RAM to a file, or pipe data to a file, you
129can encode that data too, as long as you provide a file pointer to the encoder.
130
131### Basic Encoding
132
133As already mentioned, the Basic Encoding is based on a simple mapping: DNA
134is composed of 4 nucleotides (Adenine, Cytosine, Guanine, Thymine; usually
135referred to by their first letter). Using this technique we can encode
136
*log<sub>2</sub>(4) = log<sub>2</sub>(2<sup>2</sup>) = 2 bits*
138
139using a single nucleotide. In this way, we are able to use the 4 bases that
140compose the DNA strand to encode each byte of data.
141
142| Two bits | Nucleotides |
143| -------- | ---------------- |
144| 00 | **A** (Adenine) |
145| 10 | **G** (Guanine) |
146| 01 | **C** (Cytosine) |
147| 11 | **T** (Thymine) |
148
149With this in mind we can simply encode any data using the two-bit to nucleotide
150conversion. For example, the byte 0x48 ('H') is 01001000 in binary, which splits
into the pairs 01 00 10 00 and encodes as CAGA.
151
```python
{ Algorithm 1: Naive byte array to DNA encode }
procedure EncodeToDNASequence(f) string
begin
  enc string
  while not eof(f) do
    c byte := buffer[0]                { Read 1 byte from buffer }
    bin string := sprintf('08b', c)    { Convert to binary string }
    for e in range[0, 2, 4, 6] do
      if bin[e] == '0' and bin[e+1] == '0' then      { 00 - A (Adenine) }
        enc += 'A'
      else if bin[e] == '0' and bin[e+1] == '1' then { 01 - G (Guanine) }
        enc += 'G'
      else if bin[e] == '1' and bin[e+1] == '0' then { 10 - C (Cytosine) }
        enc += 'C'
      else if bin[e] == '1' and bin[e+1] == '1' then { 11 - T (Thymine) }
        enc += 'T'
  return enc                           { Return DNA sequence }
end
```
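The pseudocode above translates almost line-for-line into runnable Python. The helper names here are my own, not part of the toolkit:

```python
# Two-bit to nucleotide mapping (same table as above).
BASE_FOR_BITS = {"00": "A", "01": "G", "10": "C", "11": "T"}

def encode_to_dna(data: bytes) -> str:
    """Encode each byte as four nucleotides, two bits per base."""
    enc = []
    for byte in data:
        bits = format(byte, "08b")   # e.g. 0x48 ('H') -> "01001000"
        for i in range(0, 8, 2):     # walk the byte two bits at a time
            enc.append(BASE_FOR_BITS[bits[i:i + 2]])
    return "".join(enc)

print(encode_to_dna(b"H"))  # -> GACA
```

The byte `H` (0x48, binary `01001000`) becomes `GACA`, which is exactly how the encoded quote further down begins.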
172
Another encoding is **Goldman encoding**. It helps with nonsense mutations
(an amino acid codon replaced by a stop codon), which are among the most
problematic mutations during translation because they lead to truncated amino
acid sequences and, in turn, truncated proteins.

[Where to store big data? In DNA: Nick Goldman at TEDxPrague](https://www.youtube.com/watch?v=a4PiGWNsIEU)
179
### FASTA file format

In bioinformatics, FASTA format is a text-based format for representing either
nucleotide sequences or peptide sequences, in which nucleotides or amino acids
are represented using single-letter codes. The format also allows for sequence
names and comments to precede the sequences. The format originates from the
FASTA software package, but has since become a standard in the field of
bioinformatics.

Originally, the first line in a FASTA file, starting either with a ">"
(greater-than) symbol or, less frequently, a ";" (semicolon), was taken as a
comment. Subsequent lines starting with a semicolon would be ignored by
software. Since only the first comment was used, it quickly came to hold a
summary description of the sequence, often starting with a unique library
accession number, and with time it has become commonplace to always use ">"
for the first line and to not use ";" comments (which would otherwise be
ignored).
196
```txt
;LCBO - Prolactin precursor - Bovine
; a sample sequence in FASTA format
MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS
EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL
VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED
ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC*

>MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken
ADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID
FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA
DIDGDGQVNYEEFVQMMTAK*

>gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus]
LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV
EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG
LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL
GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX
IENY
```
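The format needs very little machinery to read. A minimal parser for the example above might look like this (`parse_fasta` is a hypothetical helper written for illustration, not part of any FASTA toolkit):

```python
def parse_fasta(text: str):
    """Minimal FASTA reader: returns a list of (header, sequence) pairs.
    Lines starting with ';' are treated as legacy comments and skipped."""
    records, header, seq = [], None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(";"):
            continue                      # skip blanks and comments
        if line.startswith(">"):
            if header is not None:
                records.append((header, "".join(seq)))
            header, seq = line[1:], []    # start a new record
        else:
            seq.append(line)              # sequence lines may be wrapped
    if header is not None:
        records.append((header, "".join(seq)))
    return records

print(parse_fasta(">SEQ1\nGACA\nGCTT\n"))  # -> [('SEQ1', 'GACAGCTT')]
```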
217
FASTA format was extended by the [FASTQ](https://en.wikipedia.org/wiki/FASTQ_format)
format from the [Sanger Centre](https://www.sanger.ac.uk/) in Cambridge.

### PNG encoded DNA sequence

| Nucleotides  | RGB         | Color name |
| ------------ | ----------- | ---------- |
| A ➞ Adenine  | (0,0,255)   | Blue       |
| G ➞ Guanine  | (0,100,0)   | Green      |
| C ➞ Cytosine | (255,0,0)   | Red        |
| T ➞ Thymine  | (255,255,0) | Yellow     |

With this mapping we can write a simple algorithm that produces a PNG
representation of a DNA sequence.
232
```python
{ Algorithm 2: Naive DNA to PNG encode from FASTA file }
procedure EncodeDNASequenceToPNG(f)
begin
  i image
  while not eof(f) do
    c char := buffer[0]                { Read 1 char from buffer }
    case c of
      'A': color := RGB(0, 0, 255)     { Blue }
      'G': color := RGB(0, 100, 0)     { Green }
      'C': color := RGB(255, 0, 0)     { Red }
      'T': color := RGB(255, 255, 0)   { Yellow }
    drawRect(i, [x, y], color)         { Draw cell, then advance x, y }
  save(i)                              { Save PNG image }
end
```
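Before any rectangles are drawn, the sequence has to be mapped to colors. A minimal Python sketch of just that mapping step follows; the actual PNG writing could then be handed to any image library, and `dna_to_pixels` is my own name, not the toolkit's:

```python
# Color mapping from the table above.
RGB_FOR_BASE = {
    "A": (0, 0, 255),    # Blue
    "G": (0, 100, 0),    # Green
    "C": (255, 0, 0),    # Red
    "T": (255, 255, 0),  # Yellow
}

def dna_to_pixels(seq: str, width: int):
    """Turn a DNA string into rows of RGB tuples, `width` bases per row."""
    pixels = [RGB_FOR_BASE[base] for base in seq]
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]
```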
249
## Encoding a text file in practice

In this example we take a simple text file as our input stream for encoding.
The file contains a quote from Niels Bohr, saved as a txt file.

> How wonderful that we have met with a paradox. Now we have some hope of
> making progress.
> ― Niels Bohr

First we encode the text file into a FASTA file.
260
```bash
./dnae-encode -i quote.txt -o quote.fa
2019/01/10 00:38:29 Gathering input file stats
2019/01/10 00:38:29 Starting encoding ...
 106 B / 106 B [==================================] 100.00% 0s
2019/01/10 00:38:29 Saving to FASTA file ...
2019/01/10 00:38:29 Output FASTA file length is 438 B
2019/01/10 00:38:29 Process took 987.263µs
2019/01/10 00:38:29 Done ...
```
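The reported 438 B output checks out: every input byte becomes four bases, plus the header line and one newline per 60-column row (assuming each row, including the last, is newline-terminated):

```python
import math

# Figures from the run above: 106-byte input, default 60-column rows,
# and a ">SEQ1" header line.
input_bytes = 106
columns = 60
header = len(">SEQ1\n")

bases = input_bytes * 4                # four nucleotides per byte
newlines = math.ceil(bases / columns)  # one newline per wrapped row
total = header + bases + newlines
print(total)  # -> 438, matching the reported FASTA file length
```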
271
The resulting `quote.fa` file contains the encoded DNA sequence in ASCII format.

```txt
>SEQ1
GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA
GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA
ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA
ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT
GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT
GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC
AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC
AACC
```
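The encoding is trivially reversible. A decoding sketch using the same two-bit mapping (again my own helper, not the toolkit's code) turns the first eight bases above back into the opening characters of the quote:

```python
# Inverse of the two-bit encoding table.
BITS_FOR_BASE = {"A": "00", "G": "01", "C": "10", "T": "11"}

def decode_dna(seq: str) -> bytes:
    """Turn four bases back into one byte each."""
    bits = "".join(BITS_FOR_BASE[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(decode_dna("GACAGCTT"))  # -> b'Ho'
```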
285
Then we encode the FASTA file from the previous step into a PNG.

```bash
./dnae-png -i quote.fa -o quote.png
2019/01/10 00:40:09 Gathering input file stats ...
2019/01/10 00:40:09 Deconstructing FASTA file ...
2019/01/10 00:40:09 Compositing image file ...
 424 / 424 [==================================] 100.00% 0s
2019/01/10 00:40:09 Saving output file ...
2019/01/10 00:40:09 Output image file length is 1.1 kB
2019/01/10 00:40:09 Process took 19.036117ms
2019/01/10 00:40:09 Done ...
```
299
After encoding into PNG format, the file looks like this.

![Encoded Quote in PNG format](/posts/dna-sequence/quote.png)
The larger the input stream, the larger the resulting PNG file.

A basic Hello World C program compiled with
[GCC](https://www.gnu.org/software/gcc/) would [look like
this](/posts/dna-sequence/sample.png).
308
```c
// gcc -O3 -o sample sample.c
#include <stdio.h>

int main(void) {
  printf("Hello, world!\n");
  return 0;
}
```
318
## Toolkit for encoding data

I have created a toolkit with two main programs:

- `dnae-encode` (encodes a file into a FASTA file)
- `dnae-png` (encodes a FASTA file into a PNG)

The toolkit with full source code is available at
[github.com/mitjafelicijan/dna-encoding](https://github.com/mitjafelicijan/dna-encoding).
328
### dnae-encode

```bash
> ./dnae-encode --help
usage: dnae-encode --input=INPUT [<flags>]

A command-line application that encodes file into DNA sequence.

Flags:
  --help                 Show context-sensitive help (also try --help-long and --help-man).
  -i, --input=INPUT      Input file (ASCII or binary) which will be encoded into DNA sequence.
  -o, --output="out.fa"  Output file which stores DNA sequence in FASTA format.
  -s, --sequence=SEQ1    The description line (defline) or header/identifier line, gives a name and/or a unique identifier for the sequence.
  -c, --columns=60       Row characters length (no more than 120 characters). Devices preallocate fixed line sizes in software.
  --version              Show application version.
```
345
### dnae-png

```bash
> ./dnae-png --help
usage: dnae-png --input=INPUT [<flags>]

A command-line application that encodes FASTA file into PNG image.

Flags:
  --help                  Show context-sensitive help (also try --help-long and --help-man).
  -i, --input=INPUT       Input FASTA file which will be encoded into PNG image.
  -o, --output="out.png"  Output file in PNG format that represents DNA sequence in graphical way.
  -s, --size=10           Size of pairings of DNA bases on image in pixels (lower resolution lower file size).
  --version               Show application version.
```
361
## Benchmarks

First we generate some binary sample data with dd.

```bash
dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=1KB.bin bs=1KB count=1 iflag=fullblock
```

![Sample binary file 1KB](/posts/dna-sequence/sample-binary-file.png)
Our freshly generated 1KB file looks something like this (it's full of
garbage data, as intended).
374
We create the following binary files:

- 1KB.bin
- 10KB.bin
- 100KB.bin
- 1MB.bin
- 10MB.bin
- 100MB.bin

After this we create FASTA files for all the binary files by encoding them
into DNA sequences.

```bash
./dnae-encode -i 100MB.bin -o 100MB.fa
```

Then we GZIP all the FASTA files to see how much they can be compressed.

```bash
gzip -9 < 10MB.fa > 10MB.fa.gz
```

![Encode to FASTA](/posts/dna-sequence/chart-speed.svg)
Encoding speed when encoding to the FASTA format.

![File sizes](/posts/dna-sequence/chart-size.svg)
Size of the output file after encoding.

[Download CSV file with benchmarks](/posts/dna-sequence/benchmarks.csv).
404
## References

- https://www.techopedia.com/definition/948/encoding
- https://www.dna-worldwide.com/resource/160/history-dna-timeline
- https://opentextbc.ca/biology/chapter/9-1-the-structure-of-dna/
- https://arxiv.org/abs/1801.04774
- https://en.wikipedia.org/wiki/FASTA_format
diff --git a/content/posts/2019-10-14-simplifying-and-reducing-clutter.md b/content/posts/2019-10-14-simplifying-and-reducing-clutter.md
deleted file mode 100644
index 25f9ca0..0000000
--- a/content/posts/2019-10-14-simplifying-and-reducing-clutter.md
+++ /dev/null
@@ -1,59 +0,0 @@
---
title: Simplifying and reducing clutter in my life and work
url: simplifying-and-reducing-clutter.html
date: 2019-10-14T12:00:00+02:00
type: post
draft: false
---
8
I recently moved my main working machine back from Hackintosh to Linux. The
experiment was interesting and I did some great work on macOS, but it was
time to move back.

I actually really missed Linux. The simplicity of `apt-get`, or just the
amount of software that exists for Linux, should be a no-brainer. I spent most
of my time on macOS finding solutions to make things work. Using
[Brew](https://brew.sh/) was just a horrible experience and far from the
package managers of Linux. At least they managed to get that `sudo` debacle
sorted.

Not all was bad. macOS in general was a perfectly good environment. Things
like Docker and similar tooling worked without any hiccups. My normal tools
like my coding IDE worked flawlessly, and the whole look and feel is just
superb. I had been using a MacBook Air for a couple of years, so I was used to
the system, but never as a daily driver.

One of the things I did after I installed Linux back on my machine was clean
up my Dropbox folder. I have everything on Dropbox, even my projects folder. I
write code for a living, so my whole life revolves around a couple of megs of
code (with assets). So it's not like I have huge files on my machine. I don't
have movies or music or pictures on my PC. All of that stuff is in the cloud.
I use Google Music and I have a Netflix account, which is more than enough for
me.

I also went and deleted some of the repositories on my GitHub account. I have
deleted more code than I have deployed. People find this strange, but for me
deleting something feels cathartic and also forces me to write better code
next time around when I am faced with a similar problem. That was a huge
relief, if I am being totally honest.

The next step was to do something with my webpage. I had been using some
scripts I wrote a while ago to generate static pages from markdown source
posts. I kept adding stuff on top of it and it became a source of frustration.
And this is just a simple blog, and I was using gulp and npm. Anyway, after a
couple of hours of searching and testing static generators I found an
interesting one,
[https://github.com/piranha/gostatic](https://github.com/piranha/gostatic),
and decided to use it. It was the only one that had a simple templating
engine, not that I really need one. The others had this convoluted way of
trying to solve everything and in the end just required a bigger learning
curve than I was ready to take on. So I deleted a couple of old posts,
simplified the HTML, trashed most of the CSS, and went with
[https://motherfuckingwebsite.com/](https://motherfuckingwebsite.com/)
aesthetics. Yeah, the previous site was more visually stimulating, but all I
really care about at this point is the content. And the Times New Roman font
is kind of awesome.

I stopped working on most of my projects in the past couple of months because
the overhead was just too insane. There comes a point when you stretch
yourself too thin, and then you stop progressing, and with that comes
dissatisfaction.

So that's about it. Moving forward, minimal style.
diff --git a/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md b/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md
deleted file mode 100644
index d5729ed..0000000
--- a/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md
+++ /dev/null
@@ -1,108 +0,0 @@
---
title: Using sentiment analysis for clickbait detection in RSS feeds
url: using-sentiment-analysis-for-clickbait-detection-in-rss-feeds.html
date: 2019-10-19T12:00:00+02:00
type: post
draft: false
---
8
## Initial thoughts

One of the things that has interested me for a while now is whether major,
well-established news sites use clickbait titles to drive additional traffic
to their sites and generate additional impressions.

The goal is to see how article titles and the actual content of articles
differ from each other, and whether the titles are clickbaited.

## Preparing and cleaning data

For this example I opted to just use the RSS feed from a news website and
decided to go with [The Guardian](https://www.theguardian.com) World news.
This only gets us limited data (~40 articles), and the description (the
actual content) is trimmed, so it doesn't fully reflect the actual article
contents.

To get better content I could use web scraping, using the RSS feed as a link
list and fetching contents directly from the website, but for this simple
example this will suffice.
28
There are a couple of requirements we need to install before we continue:

- `pip3 install feedparser` (parses an RSS feed from a URL)
- `pip3 install vaderSentiment` (does sentiment polarity analysis)
- `pip3 install matplotlib` (plots a chart of the results)

First we need to fetch the RSS data and sanitize the HTML content from the
description.

```python
import re
import feedparser

feed_url = "https://www.theguardian.com/world/rss"
feed = feedparser.parse(feed_url)

# sanitize html
for item in feed.entries:
    item.description = re.sub('<[^<]+?>', '', item.description)
```
48
## Perform sentiment analysis

Since we now have cleaned-up data in our `feed.entries` object, we can start
performing sentiment analysis.

There are many sentiment analysis libraries available, ranging from
rule-based sentiment analysis up to machine-learning-supported analysis. To
keep things simple I decided to use the rule-based analysis library
[vaderSentiment](https://github.com/cjhutto/vaderSentiment) from
[C.J. Hutto](https://github.com/cjhutto). A really nice library and quite
easy to use.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyser = SentimentIntensityAnalyzer()

sentiment_results = []
for item in feed.entries:
    sentiment_title = analyser.polarity_scores(item.title)
    sentiment_description = analyser.polarity_scores(item.description)
    sentiment_results.append([sentiment_title['compound'], sentiment_description['compound']])
```
71
Now that we have this data in a shape that is compatible with matplotlib, we
can plot the results to see the difference between the title and description
sentiment of an article.

```python
import matplotlib.pyplot as plt

plt.rcParams['figure.figsize'] = (15, 3)
plt.plot(sentiment_results, drawstyle='steps')
plt.title('Sentiment analysis relationship between title and description (Guardian World News)')
plt.legend(['title', 'description'])
plt.show()
```
85
## Results and assets

1. Because of the small sample size, further conclusions are impossible to
   make.
2. A rule-based approach may not be the best way of doing this. By using deep
   learning we would be able to get better insights.
3. **The next step would be to** periodically fetch RSS items, store them over
   a longer period of time, then perform the analysis again with either
   machine learning or deep learning on top of it.

![Relationship between title and description](/posts/sentiment-analysis/guardian-sa-title-desc-relationship.png)

The figure above displays the difference between title and description
sentiment for each RSS feed item. 1 means positive and -1 means negative
sentiment.

[» Download Jupyter Notebook](/posts/sentiment-analysis/sentiment-analysis.ipynb)
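One illustrative way to turn these scores into a clickbait signal is to flag items whose title and description sentiment diverge strongly. This is purely a sketch on top of the `sentiment_results` list built above, and the threshold is an arbitrary choice, not a validated one:

```python
def sentiment_gap(results, threshold=0.5):
    """Return the indices of items whose title compound score diverges
    from the description compound score by more than `threshold`."""
    return [i for i, (title, desc) in enumerate(results)
            if abs(title - desc) > threshold]

# Toy data in the same [title_compound, description_compound] shape.
print(sentiment_gap([[0.8, -0.1], [0.1, 0.2], [-0.7, 0.3]]))  # -> [0, 2]
```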
101
## Going further

- [Twitter Sentiment Analysis by Bryan Schwierzke](https://github.com/bswiss/news_mood)
- [AFINN-based sentiment analysis for Node.js by Andrew Sliwinski](https://github.com/thisandagain/sentiment)
- [Sentiment Analysis with LSTMs in Tensorflow by Adit Deshpande](https://github.com/adeshpande3/LSTM-Sentiment-Analysis)
- [Sentiment analysis on tweets using Naive Bayes, SVM, CNN, LSTM, etc. by Abdul Fatir](https://github.com/abdulfatir/twitter-sentiment-analysis)
diff --git a/content/posts/2020-03-22-simple-sse-based-pubsub-server.md b/content/posts/2020-03-22-simple-sse-based-pubsub-server.md
deleted file mode 100644
index cf5a5d9..0000000
--- a/content/posts/2020-03-22-simple-sse-based-pubsub-server.md
+++ /dev/null
@@ -1,454 +0,0 @@
---
title: Simple Server-Sent Events based PubSub Server
url: simple-server-sent-events-based-pubsub-server.html
date: 2020-03-22T12:00:00+02:00
type: post
draft: false
---
8
## Before we continue ...

The publisher/subscriber model is nothing new and there are many amazing
solutions out there, so writing a new one would be a waste of time, if only
those solutions didn't have quite complex install procedures and weren't so
hard to maintain. To be fair, comparing this simple server with something like
[Kafka](https://kafka.apache.org/) or [RabbitMQ](https://www.rabbitmq.com/) is
laughable at best. Those solutions are enterprise grade and have many
mechanisms to ensure messages aren't lost, and much more. Regardless of these
drawbacks, this method has been tested on a large website and has worked
without any problems so far. Now that we have that cleared up, let's continue.

***Wiki definition:** Publish/subscribe messaging, or pub/sub messaging, is a
form of asynchronous service-to-service communication used in serverless and
microservices architectures. In a pub/sub model, any message published to a
topic is immediately received by all the subscribers to the topic.*
25
## General goals

- provide a simple server that relays messages to all the connected clients,
- messages can be posted on specific topics,
- messages get sent via [Server-Sent
  Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
  to all the subscribers.

## How exactly does the pub/sub model work?

The easiest way to explain this is with the diagram below. The basic function
is simple. We have subscribers that receive messages, and we have publishers
that create and post messages. A similar model is the well-known pattern that
works on a premise of consumers and producers, which take on similar roles.

![How PubSub works](/posts/simple-pubsub-server/pubsub-overview.png)
42
**These are some naive characteristics we want to achieve:**

- the producer publishes messages to a topic,
- the consumer receives messages from a subscribed topic,
- the server is also known as a broker,
- the broker does not store messages or track delivery success,
- the broker uses the
  [FIFO](https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)) method
  for delivering messages,
- if a consumer wants to receive messages from a topic, the producer and
  consumer topics must match,
- a consumer can subscribe to multiple topics,
- a producer can publish to multiple topics,
- each message has a messageId.

**Known drawbacks:**

- messages are not stored in a persistent queue, and there is no
  [DeadLetterQueue](https://en.wikipedia.org/wiki/Dead_letter_queue) for
  unreceived messages, so old messages can be lost on server restart,
- [Server-Sent
  Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
  open a long-running connection between the client and the server, so if your
  setup is load balanced, make sure the load balancer can keep connections
  open for a long time,
- no system moderation due to the dynamic nature of creating queues.
69
## Server-Sent Events

Read more about it on the [official specification
page](https://html.spec.whatwg.org/multipage/server-sent-events.html).

### Current browser support

![Browser support](/posts/simple-pubsub-server/caniuse.png)

Check
[https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource)
for the latest information about browser support.

### Known issues

- Firefox 52 and below do not support EventSource in web/shared workers.
- In Firefox prior to version 36, server-sent events do not reconnect
  automatically in case of a connection interrupt (bug).
- Reportedly, CORS in EventSource is currently supported in Firefox 10+, Opera
  12+, Chrome 26+, Safari 7.0+.
- Antivirus software may block the event streaming data chunks.
93
### Message format

The simplest message that can be sent contains only the data attribute:

```bash
data: this is a simple message
<blank line>
```

You can send message IDs, to be used if the connection is dropped:

```bash
id: 33
data: this is line one
data: this is line two
<blank line>
```

And you can specify your own event types (the above messages all trigger the
generic message event):

```bash
id: 36
event: price
data: 103.34
<blank line>
```
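The framing rules above can be produced mechanically. This small formatter is a Python sketch for illustration only (the server itself is Node.js, and `sse_message` is my own helper):

```python
def sse_message(data, event=None, msg_id=None):
    """Serialize a message into the SSE wire format shown above."""
    lines = []
    if msg_id is not None:
        lines.append(f"id: {msg_id}")
    if event is not None:
        lines.append(f"event: {event}")
    for chunk in str(data).splitlines():
        lines.append(f"data: {chunk}")  # one data: field per payload line
    return "\n".join(lines) + "\n\n"    # a blank line terminates the message

print(sse_message("103.34", event="price", msg_id=36))
```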
121
### Server requirements

The important thing is which headers are sent by the server; they trigger the
browser to treat the response as an EventStream.

The headers responsible for this are:

```bash
Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive
```
134
### Debugging with Google Chrome

Google Chrome provides a built-in debugging and exploration tool for
[Server-Sent
Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events),
which is quite nice and available from Developer Tools under the Network tab.

> You can debug only the client-side events that get received, not the server
> ones. For debugging server events, add `console.log` to the `server.js` code
> and print out events.

![Google Chrome Developer Tools EventStream](/posts/simple-pubsub-server/chrome-debugging.png)
146
## Server implementation

For the sake of this example we will use [Node.js](https://nodejs.org/en/)
with [Express](https://expressjs.com) as our router, since this is the easiest
way to get started, and we will use an already written SSE library for Node,
[sse-pubsub](https://www.npmjs.com/package/sse-pubsub), so we don't reinvent
the wheel.

```bash
npm init --yes

npm install express
npm install body-parser
npm install sse-pubsub
```
162
Basic implementation of a server (`server.js`):

```js
const express = require('express');
const bodyParser = require('body-parser');
const SSETopic = require('sse-pubsub');

const app = express();
const port = process.env.PORT || 4000;

// topics container
const sseTopics = {};

app.use(bodyParser.json());

// open for all cors
app.all('*', (req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type');
  next();
});

// preflight request error fix
app.options('*', async (req, res) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type');
  res.send('OK');
});

// serve the event streams
app.get('/stream/:topic', async (req, res, next) => {
  const topic = req.params.topic;

  if (!(topic in sseTopics)) {
    sseTopics[topic] = new SSETopic({
      pingInterval: 0,
      maxStreamDuration: 15000,
    });
  }

  // subscribing client to topic
  sseTopics[topic].subscribe(req, res);
});

// accepts new messages into topic
app.post('/publish', async (req, res) => {
  let body = req.body;
  let status = 200;

  console.log('Incoming message:', req.body);

  if (
    body.hasOwnProperty('topic') &&
    body.hasOwnProperty('event') &&
    body.hasOwnProperty('message')
  ) {
    const topic = req.body.topic;
    const event = req.body.event;
    const message = req.body.message;

    if (topic in sseTopics) {
      // sends message to all the subscribers
      sseTopics[topic].publish(message, event);
    }
  } else {
    status = 400;
  }

  res.status(status).send({
    status,
  });
});

// returns JSON object of all opened topics
app.get('/status', async (req, res) => {
  res.send(sseTopics);
});

// health-check endpoint
app.get('/', async (req, res) => {
  res.send('OK');
});

// return a 404 if no routes match
app.use((req, res, next) => {
  res.set('Cache-Control', 'private, no-store');
  res.status(404).end('Not found');
});

// starts the server
app.listen(port, () => {
  console.log(`PubSub server running on http://localhost:${port}`);
});
```
257
### Our custom message format

Each message posted to the server must be in a specific format that our
server accepts. Having a structure like this allows us to have multiple
separate types of events on each topic.

With this we can separate streams and only receive events that belong to the
topic.

One example would be that we have an index page and we want to receive
messages about new upvotes or new subscribers, but we don't want to follow
events for other pages. This reduces clutter and overall network traffic. And
the structure is much nicer and more maintainable.

```json
{
  "topic": "sample-topic",
  "event": "sample-event",
  "message": { "name": "John" }
}
```
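A publisher therefore only has to build a JSON body with these three keys. A small Python sketch mirroring the server's validation (the helper names are mine, for illustration):

```python
import json

REQUIRED_KEYS = {"topic", "event", "message"}

def valid_payload(body: dict) -> bool:
    """Mirror of the server's hasOwnProperty checks for POST /publish."""
    return REQUIRED_KEYS.issubset(body)

# Build the JSON body a publisher would POST to /publish.
payload = {"topic": "sample-topic", "event": "sample-event",
           "message": {"name": "John"}}
print(valid_payload(payload), json.dumps(payload))
```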
279
## Publisher and subscriber clients

### Publisher and subscriber in action

<video src="/posts/simple-pubsub-server/clients.m4v" controls></video>

You can download [the code](../simple-pubsub-server/sse-pubsub-server.zip) and
follow along.

### Publisher

As mentioned above, the publisher is the one that sends messages to the
broker/server. The message inside the payload can be whatever you want
(string, object, array). I would, however, personally avoid sending large
chunks of data like blobs and such.
295
```html
<!DOCTYPE html>
<html lang="en">

  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Publisher</title>
  </head>

  <body>

    <h1>Publisher</h1>

    <fieldset>
      <p>
        <label>Server:</label>
        <input type="text" id="server" value="http://localhost:4000">
      </p>
      <p>
        <label>Topic:</label>
        <input type="text" id="topic" value="sample-topic">
      </p>
      <p>
        <label>Event:</label>
        <input type="text" id="event" value="sample-event">
      </p>
      <p>
        <label>Message:</label>
        <input type="text" id="message" value='{"name": "John"}'>
      </p>
      <p>
        <button type="button" id="button">Publish message to topic</button>
      </p>
    </fieldset>

    <script>

      const button = document.querySelector('#button');
      const server = document.querySelector('#server');
      const topic = document.querySelector('#topic');
      const event = document.querySelector('#event');
      const message = document.querySelector('#message');

      button.addEventListener('click', async (evt) => {
        const req = await fetch(`${server.value}/publish`, {
          method: 'post',
          headers: {
            'Accept': 'application/json',
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({
            topic: topic.value,
            event: event.value,
            message: JSON.parse(message.value),
          }),
        });

        const res = await req.json();
        console.log(res);
      });

    </script>

  </body>

</html>
```
364
### Subscriber

The subscriber is responsible for receiving new messages that come from the
server via a publisher. The code below is very rudimentary, but it works and
follows the implementation guidelines for EventSource.

You can use either the Developer Tools Console to see incoming messages, or
you can defer to the Debugging with Google Chrome section above to see all
EventStream messages.

> Don't be alarmed if the subscriber gets disconnected from the server every
> so often. The code we have here resets the connection every 15s, but it
> automatically gets reconnected and fetches all messages up to the last
> received message id. This setting can be adjusted in the `server.js` file;
> search for the `maxStreamDuration` variable.
380
```html
<!DOCTYPE html>
<html lang="en">

  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Subscriber</title>
    <link rel="stylesheet" href="style.css">
  </head>

  <body>

    <h1>Subscriber</h1>

    <fieldset>
      <p>
        <label>Server:</label>
        <input type="text" id="server" value="http://localhost:4000">
      </p>
      <p>
        <label>Topic:</label>
        <input type="text" id="topic" value="sample-topic">
      </p>
      <p>
        <label>Event:</label>
        <input type="text" id="event" value="sample-event">
      </p>
      <p>
        <button type="button" id="button">Subscribe to topic</button>
      </p>
    </fieldset>

    <script>

      const button = document.querySelector('#button');
      const server = document.querySelector('#server');
      const topic = document.querySelector('#topic');
      const event = document.querySelector('#event');

      button.addEventListener('click', async (evt) => {

        let es = new EventSource(`${server.value}/stream/${topic.value}`);

        es.addEventListener(event.value, function (evt) {
          console.log(`incoming message`, JSON.parse(evt.data));
        });

        es.addEventListener('open', function (evt) {
          console.log('connected', evt);
        });

        es.addEventListener('error', function (evt) {
          console.log('error', evt);
        });

      });

    </script>

  </body>

</html>
```
445
## Reading further

- [Using server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
- [Using SSE Instead Of WebSockets For Unidirectional Data Flow Over HTTP/2](https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/)
- [What is Server-Sent Events?](https://apifriends.com/api-streaming/server-sent-events/)
- [An HTTP/2 extension for bidirectional messaging communication](https://tools.ietf.org/id/draft-xie-bidirectional-messaging-01.html)
- [Introduction to HTTP/2](https://developers.google.com/web/fundamentals/performance/http2)
- [The WebSocket API (WebSockets)](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API)
diff --git a/content/posts/2020-03-27-create-placeholder-images-with-sharp.md b/content/posts/2020-03-27-create-placeholder-images-with-sharp.md
deleted file mode 100644
index 1c2b042..0000000
--- a/content/posts/2020-03-27-create-placeholder-images-with-sharp.md
+++ /dev/null
@@ -1,102 +0,0 @@
1---
2title: Create placeholder images with sharp Node.js image processing library
3url: create-placeholder-images-with-sharp.html
4date: 2020-03-27T12:00:00+02:00
5type: post
6draft: false
7---
8
9I have been searching for a way to pre-generate placeholder images for an
10image server I needed to develop that resizes images on S3. I thought this
11would be a 15-minute job and quickly found out how mistaken I was.
12
13Even though Node.js is not really the best tool for this kind of thing (surely
14something written in C, Rust, or even Golang would be the correct way to do
15this, but we didn't need the speed in our case), I found an excellent library:
16[sharp - High performance Node.js image
17processing](https://github.com/lovell/sharp).
18
19Getting things running was a breeze. Note that the snippet below uses top-level `await`, so in a plain Node.js script you should wrap it in an async function.
20
21## Fetch image from S3 and save resized
22
23```js
24const sharp = require('sharp');
25const aws = require('aws-sdk');
26
27const x = 100, y = 100; // target dimensions
28
29aws.config.update({
30 secretAccessKey: 'secretAccessKey',
31 accessKeyId: 'accessKeyId',
32 region: 'region'
33});
34const s3 = new aws.S3({}); // create the client after configuring credentials
35
36const originalImage = await s3.getObject({
37 Bucket: 'some-bucket-name',
38 Key: 'image.jpg',
39}).promise();
40
41const resizedImage = await sharp(originalImage.Body)
42 .resize(x, y)
43 .jpeg({ progressive: true })
44 .toBuffer();
45
46await s3.putObject({
47 Bucket: 'some-bucket-name',
48 Key: `optimized/${x}x${y}/image.jpg`,
49 Body: resizedImage,
50 ContentType: 'image/jpeg',
51 ACL: 'public-read'
52}).promise();
53```
54
55All this code was wrapped inside a web service with some additional security
56checks and defensive coding to detect whether a key is missing on S3.
57
58And at that point I needed to return placeholder images as a response in case
59a key is missing, or x,y are not allowed by the server, etc. I could have
60created PNGs in Gimp and just served them, but I wanted to respect the aspect
61ratio and didn't want to return mangled images.
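The aspect-ratio part is just a scale factor. A minimal sketch of the kind of helper I mean (the function name and signature are mine, not part of the service):

```javascript
// Hypothetical helper: fit an image's dimensions into a bounding box
// while preserving its aspect ratio.
function fitToBox(srcWidth, srcHeight, maxWidth, maxHeight) {
  // scale by the most constraining side
  const scale = Math.min(maxWidth / srcWidth, maxHeight / srcHeight);
  return {
    width: Math.round(srcWidth * scale),
    height: Math.round(srcHeight * scale),
  };
}

console.log(fitToBox(1920, 1080, 100, 100)); // a 16:9 source stays 16:9
```

The result can be fed straight into `sharp().resize(width, height)`.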
62
63> The main problem was finding a clean solution I could copy, paste, and tweak
64> a bit. The API changes constantly and there weren't clear examples, or at
65> least I was unable to find them.
66
67## Generating placeholder images using SVG
68
69What I ended up doing was using SVG to render the text, creating a base image
70with sharp, and using composition to combine both layers. This function returns
71a buffer you can either upload to S3 or save to a local file.
72
73```js
74const generatePlaceholderImageWithText = async (width, height, message) => {
75 const overlay = `<svg width="${width - 20}" height="${height - 20}">
76 <text x="50%" y="50%" font-family="sans-serif" font-size="16" text-anchor="middle">${message}</text>
77 </svg>`;
78
79 return await sharp({
80 create: {
81 width: width,
82 height: height,
83 channels: 4,
84 background: { r: 230, g: 230, b: 230, alpha: 1 }
85 }
86 })
87 .composite([{
88 input: Buffer.from(overlay),
89 gravity: 'center',
90 }])
91 .jpeg()
92 .toBuffer();
93}
94```
95
96That is about it. Nothing more to it. You can change the color of the image by
97changing `background`, and if you want to change the text styling you can adapt
98the SVG to your needs.
99
100> Also be careful about the length of the text. This function positions the text
101> at the center and adds `20px` padding on all sides. If the text is wider than
102> the image it will get cut off.
diff --git a/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md b/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md
deleted file mode 100644
index efe88fa..0000000
--- a/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md
+++ /dev/null
@@ -1,108 +0,0 @@
1---
2title: The strange case of Elasticsearch allocation failure
3url: the-strange-case-of-elasticsearch-allocation-failure.html
4date: 2020-03-29T12:00:00+02:00
5type: post
6draft: false
7---
8
9I've been using Elasticsearch in production for 5 years now and never had a
10single problem with it. Hell, I never even knew there could be a problem. It just
11worked. All this time. The first node that I deployed is still being used in
12production, never updated, upgraded, or touched in any way.
13
14All this bliss came to an abrupt end this Friday when I got a notification that
15the Elasticsearch cluster went warm. Well, warm is not that bad, right? Wrong!
16Quickly after that I got another email which sent chills down my spine. The
17cluster was now red. RED! Now shit had really hit the fan!
18
19I tried googling what the problem could be, and after querying the allocation
20endpoint I noticed that some shards were unassigned and 5 allocation attempts
21had already been made (which is, to my luck, the maximum), meaning I was basically fucked.
22The advice implied that one should wait for the cluster to re-balance itself. So I
23waited. One hour, two hours, several hours. Nothing, still RED.
24
25The strangest thing about it all was that queries were still being fulfilled.
26Data was coming out. On the outside it looked like nothing was wrong, but
27anybody who looked at the cluster would know immediately that something
28was very, very wrong and that we were living on borrowed time.
29
30> **Please, DO NOT do what I did.** Seriously! Please ask someone on the official
31forums or, if you know an expert, consult them. There could be a million
32reasons, and these solutions fit my problem. Maybe in your case it would be
33disastrous. I had all the data backed up, so even if I failed spectacularly
34I would be able to restore it. It would have been a huge pain and I would have lost a
35couple of days, but I had a plan B.
36
37Querying the allocation endpoint told me what the problem was, but offered no clear solution yet.
38
39```yaml
40GET /_cat/allocation?format=json
41```
42
43I got an `ALLOCATION_FAILED` message with the additional info `failed to create
44shard, failure ioexception[failed to obtain in-memory shard lock]`. Well,
45splendid! I must also say that our cluster is more than capable of handling
46the traffic, and JVM memory pressure was never an issue. So what really
47happened, then?
48
49I also tried re-routing the failed shards, with no success due to AWS
50restrictions on managed Elasticsearch clusters (they lock some of the APIs).
51
52```yaml
53POST /_cluster/reroute?retry_failed=true
54```
55
56I got a message that significantly reduced my options.
57
58```json
59{
60 "Message": "Your request: '/_cluster/reroute' is not allowed."
61}
62```
63
64After that I went on a hunt again. I won't bother you with all the details;
65hours and days went by until I was finally able to re-index the problematic
66index and hope for the best. Until that moment even re-indexing was giving me
67errors.
68
69```yaml
70POST _reindex
71{
72 "source": {
73 "index": "myindex"
74 },
75 "dest": {
76 "index": "myindex-new"
77 }
78}
79```
80
81I needed to do this multiple times to get all the documents re-indexed. Then I
82dropped the original one with the following command.
83
84```yaml
85DELETE /myindex
86```
87
88Then I re-indexed the new index back into one with the original name (the same in name only).
89
90```yaml
91POST _reindex
92{
93 "source": {
94 "index": "myindex-new"
95 },
96 "dest": {
97 "index": "myindex"
98 }
99}
100```
101
102On the surface it looks like everything is working, but I have a long road ahead
103of me to get all the things working again. The cluster now shows that it is
104green, but I am also getting a notification that the cluster has a processing
105status, which could mean a million things.
106
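A small poll loop can save some sanity when babysitting a re-index like this. A sketch (the helper name and the crude JSON parsing are mine; `_cluster/health` is the standard health endpoint):

```shell
# Hypothetical helper: poll the _cluster/health endpoint until the
# cluster reports green, or give up after a number of attempts.
# Set HEALTH_CMD to your real call, e.g.:
#   HEALTH_CMD='curl -s https://my-es-domain/_cluster/health'
wait_for_green() {
  attempts="${1:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # crude JSON field extraction; jq would be nicer if available
    status=$(eval "$HEALTH_CMD" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
    if [ "$status" = "green" ]; then
      echo "cluster is green"
      return 0
    fi
    i=$((i + 1))
    sleep "${POLL_INTERVAL:-5}"
  done
  echo "gave up waiting for green status" >&2
  return 1
}
```

Run it as `wait_for_green 30` after setting `HEALTH_CMD`.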
107Godspeed!
108
diff --git a/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md b/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md
deleted file mode 100644
index d083890..0000000
--- a/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md
+++ /dev/null
@@ -1,111 +0,0 @@
1---
2title: My love and hate relationship with Node.js
3url: my-love-and-hate-relationship-with-nodejs.html
4date: 2020-03-30T12:00:00+02:00
5type: post
6draft: false
7---
8
9The previous project I worked on was coded in
10[Golang](https://golang.org/). It was also my first project using it. And damn,
11that was an awesome experience. The whole thing is just superb: how errors
12are handled, the C-like way you handle compiling, the way the language is
13structured, making it incredibly versatile and easy to learn.
14
15It may cause some pain for somebody who is not used to using interfaces to map
16JSON and to recompiling all the time. But we have tools like
17[entr](http://eradman.com/entrproject/) and
18[make](https://www.gnu.org/software/make/) to fix that.
19
20But we are not here to talk about my undying love for **Golang**. Although in
21some ways we probably should: it is an excellent example of how a modern language
22should be designed. And because I have used it extensively in the last couple of
23years, this probably taints my view of other languages and does me a great
24disservice. Nevertheless, here we are.
25
26About two years ago I started flirting with [Node.js](https://nodejs.org/en/)
27for a project I was starting. What I wanted was to have things written in
28a language that is widely used and that we could get additional developers for. As
29much as **Golang** is amazing, it's really hard to get developers for it. Even
30now. And after playing around with Node.js for a week I fell in love with the speed
31of iteration and the massive package ecosystem. Do you want SSO? You got it! Do you
32want some esoteric library for something? There is a strong chance somebody
33wrote it. It is so extensive that you find yourself evaluating packages based on
34**GitHub stars** and the number of contributors. You get swallowed by the vanity
35metrics, and that could become the downfall of Node.js.
36
37Because of the sheer amount of choice I often got anxious when choosing
38libraries. Will I choose the correct one? Is this library something that will be
39supported for the foreseeable future or not? I am used to libraries that have
40been in development for 10-plus years (Python, C), and that gave me some sort of
41comfort. It is probably unfair to Node.js and its community to expect the same
42dedication.
43
44Moving forward... Work started and things were great. **The speed of iteration
45was insane**. A feature that would take me a day in Golang only took an hour
46or two. I became lazy! Using packages all over the place. Falling into the same
47trap as others. Packages on top of packages. And [npm](https://www.npmjs.com/)
48didn't help at all. The way the package manager works is just
49horrendous. And not allowing node_modules to live outside the project is
50the stupidest idea ever.
51
52So at that point I started feeling the technical debt that comes with Node.js
53and the whole ecosystem. What nobody tells you is that **structuring large
54Node.js apps** is more problematic than one would think. Going microservice
55for every single thing is also a bad idea. The amount of networking you
56introduce with that approach always ends up being a pain in the ass. And I don't
57even want to go into system administration here. The overhead is
58insane. Package-lock.json made many days feel like living hell for me. I
59would have eaten the cost of all this if it meant a better development
60experience. Well, it didn't.
61
62The **lack of TypeScript** support in the interpreter is still mind-boggling to
63me. Why they haven't added native support for it yet is beyond me! It would
64have solved so many problems. The lack of type safety became a problem somewhere in
65the middle of the project, where the codebase was sufficiently large to
66present problems. We kept adding arguments to functions and there was **no
67way to declare argument types**. And because at that point there were
68a lot of functions, it became impossible to know what each one accepts, and
69development became more and more trial-and-error based.
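For what it's worth, plain JavaScript can at least document argument types with JSDoc annotations, which editors (and TypeScript's `checkJs` mode) can read for hints; a small sketch, not something from the project itself:

```javascript
/**
 * Hypothetical example: JSDoc annotations give editors (and TypeScript's
 * checkJs mode) type information without any transpile step.
 * @param {string} name    job name
 * @param {number} retries how many times to retry
 * @returns {string}
 */
function describeJob(name, retries) {
  return `${name} (retries: ${retries})`;
}

console.log(describeJob('reindex', 3)); // → "reindex (retries: 3)"
```

It is documentation rather than enforcement, but it does make function signatures discoverable again.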
70
71I tried **implementing TypeScript**, but that would have meant a large refactor
72that we were not willing to do at that point. The benefits were not enough. I
73also tried [Flow - static type checker](https://flow.org/), but the implementation
74was also horrible. What TypeScript and Flow force you to do is keep a src folder and
75then **transpile** your code into a dist folder and run it with node. WTH is that
76all about? Why can't this be done in memory or in some virtual file system? Why? I
77see no reason why it couldn't. But it is what it is. I
78abandoned all hope for static type checking.
79
80One of the problems that resulted from not having interfaces or types was the
81inability to model our data from **Elasticsearch**. I could have done a
82**pedestrian implementation** of it, but there must be a better way of doing
83this without basically resorting to some hack. Or maybe I just haven't found a
84solution, which is also a possibility. I have looked, though. No juice!
85
86**Error handling?** Is that a joke?
87
88Thank god for **await/async**. Without it, I would have probably just abandoned
89the whole thing and went with something else like Python. That's all I am going
90to say about this :)
91
92I started asking myself whether Node.js is actually ready to be used in
93**large-scale applications**. And this was totally the wrong question. What I
94should have been asking myself was how to use Node.js in a large-scale
95application. And you don't get this from the **marketing material** for Express, Koa,
96etc. They never tell you this. Making Node.js scale, in infrastructure or in the
97codebase, is really **more of an art than a science**. And just like with the
98whole JavaScript ecosystem:
100- impossible to master,
101- half of your time you work on your tooling,
102- you just have to accept transpilers that convert one kind of code into another (holy smokes),
103- error handling is a joke,
104- standards? What standards?
105
106But on the other hand. As I did, you will also learn to love it. Learn to use it
107quickly and do impossible things in crazy limited time.
108
109I hate to admit it. But I love Node.js. Dammit, I love it :)
110
111**2023 Update**: I hate Node.js!
diff --git a/content/posts/2020-05-05-remote-work.md b/content/posts/2020-05-05-remote-work.md
deleted file mode 100644
index 905d169..0000000
--- a/content/posts/2020-05-05-remote-work.md
+++ /dev/null
@@ -1,72 +0,0 @@
1---
2title: Remote work and how it affects the daily lives of people
3url: remote-work.html
4date: 2020-05-05T12:00:00+02:00
5type: post
6draft: false
7---
8
9I have been working remotely for the past 5 years. I love it. I love the freedom
10and the make-your-own-schedule thing.
11
12## You work more not less
13
14I've heard things from people like: "Oh, you are so lucky, working from home,
15having all the free time you want". It was obvious they had no clue what working
16remotely means. They had this romantic idea of remote work: you can watch TV
17whenever you like, you can go outside for a picnic if you want, and stuff like
18that.
19
20This may be true if you work a day or two a week from home. But if you go
21completely remote, all this changes. It takes some time to acclimate,
22but then you start feeling the consequences of going fully remote. And it's not
23all rainbows and unicorns. Rather the opposite.
24
25## Feeling lost
26
27At first, I remember feeling lost. I was not used to this kind of environment.
28I felt disoriented, and the part of you that is used to procrastinating turns on.
29You start thinking of a workday as a whole day. And soon this idea of "I can do
30this later" starts creeping in. Well, I have the whole day ahead of me. I can do
31this a bit later.
32
33## Hyper-performance
34
35As a direct result, you become more focused on your work since you don't have
36all the interruptions common in the workplace. And you can quickly get used to
37this hyper-performance. But this mode also requires a lot of peace and quiet.
38
39And here we come to the ugly parts of all this. **People rarely have the
40self-control** to not waste other people's time. It is paralyzing when people
41start calling you, sending you chat messages, etc. The thing is that when I
42achieve this hyper-performance mode I am completely immersed in the problem I
43am solving, and these kinds of interruptions mess with your head. I need at least
44an hour to get back in the zone, and sometimes I never achieve the same focus the
45whole day.
46
47I know that life is not always how you want it to be and takes its own route, but
48from what I've learned, these interruptions can easily be avoided in 90% of cases
49just by closing any chat programs and putting your phone in a drawer.
50
51## Suggestion to all the new remote workers
52
53- Stop wasting other people's time. You don't bother people at their desks in
54 the office either.
55- Do not replace daily chats in the hallways with instant messaging software.
56 It will only interrupt people. Nothing good will come of it.
57- Set your working hours and try to not allow it to bleed outside these
58 boundaries and maintain your routine.
59- Be prepared for hours to be longer regardless of your good intentions and
60 your well-thought-out routine.
61- Try to be hyper-focused and do only one thing at a time. Multitasking is the
62 enemy of progress.
63- Avoid long meetings and, if possible, eliminate them. Rather, take time to write
64 things out and allow others to respond in their own time. Meetings are usually a
65 large waste of time and most of the people attending them are there just
66 because the manager said so.
67- Software will not solve your problems. Neither will throwing money at
68 them.
69- If you are in a managerial position, don't supervise every single minute of your
70 workers' time. They are probably giving you more hours anyway. Track progress
71 weekly, not daily. You hired them, so give them the benefit of the doubt that
72 they will deliver what you agreed upon.
diff --git a/content/posts/2020-08-15-systemd-disable-wake-onmouse.md b/content/posts/2020-08-15-systemd-disable-wake-onmouse.md
deleted file mode 100644
index 8f411d6..0000000
--- a/content/posts/2020-08-15-systemd-disable-wake-onmouse.md
+++ /dev/null
@@ -1,73 +0,0 @@
1---
2title: Disable mouse wake from suspend with systemd service
3url: disable-mouse-wake-from-suspend-with-systemd-service.html
4date: 2020-08-15T12:00:00+02:00
5type: post
6draft: false
7---
8
9I recently bought a [ThinkPad
10X220](https://www.laptopmag.com/reviews/laptops/lenovo-thinkpad-x220) on eBay,
11just as a joke, to test Linux distributions and play around with things without
12destroying my main machine. Little did I know I would fall in love with it. Man,
13they really made awesome machines back then.
14
15After swapping the disk that came with it for an SSD and installing Ubuntu to
16test that everything worked, I noticed that a single touch of my external mouse
17would wake the system up from sleep even though the lid was shut.
18
19I wouldn't even have noticed it if the laptop didn't have an [LED
20sleep indicator](https://support.lenovo.com/lk/en/solutions/~/media/Images/ContentImages/p/pd025386_x1_status_03.ashx?w=426&h=262).
21I already had a bad experience with Linux and its power management. I had a
22[Dell Inspiron 7537](https://www.pcmag.com/reviews/dell-inspiron-15-7537) laptop
23with a touchscreen, and while I was traveling it decided to wake up and started cooking
24in my backpack, to the point that the digitizer responsible for touch actually
25came unglued and the whole screen got wrecked. So, I am a bit touchy about this.
26
27I went hunting for a solution and, to my surprise, there is no easy way to stop
28specific devices from waking the machine. Why this is not under the power
29management tab in Settings is really strange.
30
31After googling for a solution I found [this nice article describing a
32fix](https://codetrips.com/2020/03/18/ubuntu-disable-mouse-wake-from-suspend/)
33that worked for me. The only problem was that the author added his
34fix to `.bashrc`, and this triggers `sudo`, which asks for a password each
35time a new terminal is opened. That gets annoying quickly since I open a lot of
36terminals all the time.
37
38I followed his instructions and arrived at `sudo sh -c "echo 'disabled' >
39/sys/bus/usb/devices/2-1.1/power/wakeup"`.
40
41I then created a systemd service file with `sudo nano
42/etc/systemd/system/disable-mouse-wakeup.service`, removed `sudo`,
43replaced `sh` with `/usr/bin/sh`, and pasted all that into `ExecStart`.
44
45```ini
46[Unit]
47Description=Disables wakeup on mouse event
48After=network.target
49StartLimitIntervalSec=0
50
51[Service]
52Type=simple
53Restart=always
54RestartSec=1
55User=root
56ExecStart=/usr/bin/sh -c "echo 'disabled' > /sys/bus/usb/devices/2-1.1/power/wakeup"
57
58[Install]
59WantedBy=multi-user.target
60```
61
62After that I enabled, started, and checked the status of the service.
63
64```sh
65sudo systemctl enable disable-mouse-wakeup.service
66sudo systemctl start disable-mouse-wakeup.service
67sudo systemctl status disable-mouse-wakeup.service
68```
69
70This permanently stops that device from waking up your computer, re-applying the
71setting on every boot. If you have many devices you would like to suppress from
72waking up your machine, I would create a shell script and call that from the
73service file instead of doing it inline.
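Such a script might look like this; a minimal sketch, where the device IDs `2-1.1` and `2-1.2` are examples (substitute your own from `/sys/bus/usb/devices/`) and the sysfs root is overridable purely so the function is easy to test:

```shell
# Hypothetical helper: disable wakeup for several USB devices at once.
disable_wakeup() {
  root="${SYSFS_ROOT:-/sys/bus/usb/devices}"
  for dev in "$@"; do
    f="$root/$dev/power/wakeup"
    # only touch devices that actually expose a writable wakeup file
    if [ -w "$f" ]; then
      echo 'disabled' > "$f"
    fi
  done
}

disable_wakeup 2-1.1 2-1.2
```

Point `ExecStart` at the script and you can grow the device list without touching the unit file again.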
diff --git a/content/posts/2020-09-06-esp-and-micropython.md b/content/posts/2020-09-06-esp-and-micropython.md
deleted file mode 100644
index fb7e150..0000000
--- a/content/posts/2020-09-06-esp-and-micropython.md
+++ /dev/null
@@ -1,225 +0,0 @@
1---
2title: Getting started with MicroPython and ESP8266
3url: esp8266-and-micropython-guide.html
4date: 2020-09-06T12:00:00+02:00
5type: post
6draft: false
7---
8
9## Introduction
10
11A while ago I bought some
12[ESP8266](https://www.espressif.com/en/products/socs/esp8266) and
13[ESP32](https://www.espressif.com/en/products/socs/esp32) dev boards to play
14around with, and I finally found a project to try them out on.
15
16For my project I used the [ESP32](https://www.espressif.com/en/products/socs/esp32),
17but I could just as easily have chosen the
18[ESP8266](https://www.espressif.com/en/products/socs/esp8266). This guide
19covers the tools I use and how I prepared my workspace to code for the
20[ESP8266](https://www.espressif.com/en/products/socs/esp8266).
21
22![ESP8266 and ESP32 boards](/posts/esp8366-micropython/boards.jpg)
23
24This guide covers:
25
26- flashing the SoC
27- installing the proper tooling
28- deploying a simple script
29
30> Make sure that you are using **a good USB cable**. I had some problems with
31mine and once I replaced it everything started to work.
32
33## Flashing the SOC
34
35Plug your ESP8266 into a USB port and check whether the device was recognized by
36executing `dmesg | grep ch341-uart`.
37
38Then check if the device is available under `/dev/` by running `ls
39/dev/ttyUSB*`.
40
41> **Linux users**: if a device is not available be sure you are in `dialout`
42> group. You can check this by executing `groups $USER`. You can add a user to
43> `dialout` group with `sudo adduser $USER dialout`.
44
45After these conditions are met, navigate to
46[https://micropython.org/download/esp8266/](https://micropython.org/download/esp8266/)
47and download `esp8266-20200902-v1.13.bin`.
48
49```sh
50mkdir esp8266-test
51cd esp8266-test
52
53wget https://micropython.org/resources/firmware/esp8266-20200902-v1.13.bin
54```
55
56After obtaining the firmware we will need some tooling to flash it to the
57board.
58
59```sh
60sudo pip3 install esptool
61```
62
63You can read more about `esptool` at
64[https://github.com/espressif/esptool/](https://github.com/espressif/esptool/).
65
66Before flashing the firmware we need to erase the flash on the device. Substitute
67`USB0` with the device listed in the output of `ls /dev/ttyUSB*`.
68
69```sh
70esptool.py --port /dev/ttyUSB0 erase_flash
71```
72
73If the flash was successfully erased, it is now time to write the new firmware to it.
74
75```sh
76esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash --flash_size=detect 0 esp8266-20200902-v1.13.bin
77```
78
79If everything went OK, you can try accessing the MicroPython REPL with `screen
80/dev/ttyUSB0 115200` or `picocom /dev/ttyUSB0 -b115200`.
81
82> Sometimes you will need to press `ENTER` in `screen` or `picocom` to access
83> REPL.
84
85Once you are in the REPL, you can test that everything is working properly with the following steps.
86
87```py
88>>> import machine
89>>> machine.freq()
90```
91
92This should output a number representing the clock frequency of the CPU (mine
93was `80000000`).
94
95When you are in `screen` or `picocom`, these key bindings can help you a bit.
96
97| Key | Command |
98| -------- | -------------------- |
99| CTRL+d   | performs soft reboot |
100| CTRL+a x | exits picocom |
101| CTRL+a \ | exits screen |
102
103
104## Install better tooling
105
106Now, to make our lives a little bit easier, there are a couple of additional tools
107that will make this whole experience a little more bearable.
108
109There are two cool ways of uploading local files to the SoC's flash:
110
111- ampy → [https://github.com/scientifichackers/ampy](https://github.com/scientifichackers/ampy)
112- rshell → [https://github.com/dhylands/rshell](https://github.com/dhylands/rshell)
113
114### ampy
115
116```bash
117# installing ampy
118sudo pip3 install adafruit-ampy
119```
120
121Listed below are some common commands I used.
122
123```bash
124# uploads file to flash
125ampy --delay 2 --port /dev/ttyUSB0 put boot.py
126
127# lists file on flash
128ampy --delay 2 --port /dev/ttyUSB0 ls
129
130# outputs contents of file on flash
131ampy --delay 2 --port /dev/ttyUSB0 cat boot.py
132```
133
134> I added a `delay` of 2 seconds because I had problems with executing commands otherwise.
135
136### rshell
137
138Even though `ampy` is a cool tool, I opted for `rshell` in the end since it's
139much more polished and feature-rich.
140
141```bash
142# installing rshell
143sudo pip3 install rshell
144```
145
146Now that `rshell` is installed we can connect to the board.
147
148```bash
149rshell --buffer-size=30 -p /dev/ttyUSB0 -a
150```
151
152This will open a shell inside bash, and from here you can execute multiple
153commands. You can check what is supported with `help` once you are inside the
154shell.
155
156```bash
157m@turing ~/Junk/esp8266-test
158$ rshell --buffer-size=30 -p /dev/ttyUSB0 -a
159
160Using buffer-size of 30
161Connecting to /dev/ttyUSB0 (buffer-size 30)...
162Trying to connect to REPL connected
163Testing if ubinascii.unhexlify exists ... Y
164Retrieving root directories ... /boot.py/
165Setting time ... Sep 06, 2020 23:54:28
166Evaluating board_name ... pyboard
167Retrieving time epoch ... Jan 01, 2000
168Welcome to rshell. Use Control-D (or the exit command) to exit rshell.
169/home/m/Junk/esp8266-test> help
170
171Documented commands (type help <topic>):
172========================================
173args cat connect date edit filesize help mkdir rm shell
174boards cd cp echo exit filetype ls repl rsync
175
176Use Control-D (or the exit command) to exit rshell.
177```
178
179> Inside the shell, `ls` will display the list of files on your machine. The
180> device's flash is remapped to the `/pyboard` folder inside the shell, so to
181> list files on flash you must run `ls /pyboard`.
182
183#### Moving files to flash
184
185To avoid copying files one by one all the time, I used the `rsync` command from
186inside `rshell`.
187
188```bash
189rsync . /pyboard
190```
191
192#### Executing scripts
193
194It is a pain to continuously reboot the device to trigger `/pyboard/boot.py`, but
195there is a better way of testing local scripts on a remote device.
196
197Let's assume we have a `src/freq.py` file that displays the CPU frequency of the
198remote device.
199
200```py
201# src/freq.py
202
203import machine
204print(machine.freq())
205```
206
207Now let's upload this and execute it.
208
209```bash
210# syncs files to the remote device
211rsync ./src /pyboard
212
213# goes into REPL
214repl
215
216# importing the module (without the .py extension) runs the script
217> import freq
218
219# CTRL+x will exit REPL
220```
221
222## Additional resources
223
224- https://randomnerdtutorials.com/getting-started-micropython-esp32-esp8266/
225- http://docs.micropython.org/en/latest/esp8266/quickref.html
diff --git a/content/posts/2020-09-08-bind-warning-on-login.md b/content/posts/2020-09-08-bind-warning-on-login.md
deleted file mode 100644
index cb1e0e5..0000000
--- a/content/posts/2020-09-08-bind-warning-on-login.md
+++ /dev/null
@@ -1,54 +0,0 @@
1---
2title: Fix bind warning in .profile on login in Ubuntu
3url: bind-warning-on-login-in-ubuntu.html
4date: 2020-09-08T12:00:00+02:00
5type: post
6draft: false
7---
8
9Recently I moved back to [bash](https://www.gnu.org/software/bash/) as my
10default shell. I was previously using [fish](https://fishshell.com/) and got
11used to its cool features. But, regardless of that, I wanted to move to a
12more standard shell because I kept hopping back and forth over things like
13exporting variables, which got pretty annoying.
14
15So I embarked on a mission to make [bash](https://www.gnu.org/software/bash/)
16more like [fish](https://fishshell.com/) and in the process found that I really
17missed TAB autosuggestions when changing directories.
18
19I found a nice alternative that emulates [zsh](http://zsh.sourceforge.net/)-like
20autosuggestion and autocompletion, so I added the following to my `.bashrc` file.
21
22```bash
23bind "TAB:menu-complete"
24bind "set show-all-if-ambiguous on"
25bind "set completion-ignore-case on"
26bind "set menu-complete-display-prefix on"
27bind '"\e[Z":menu-complete-backward'
28```
29
30I hadn't noticed anything wrong with this and all was working fine until I
31restarted my machine, at which point I got this error.
32
33![Profile bind error](/posts/profile-bind-error/error.jpg)
34
35When I pressed OK, I got into the [Gnome
36shell](https://wiki.gnome.org/Projects/GnomeShell) and all was working fine, but
37the error was still bugging me. I started looking for the reason why this is
38happening and found a solution to this error on [Remote SSH Commands - bash bind
39warning: line editing not enabled](https://superuser.com/a/892682).
40
41So I added a simple `if [ -t 1 ]` guard around the `bind` statements to avoid running
42commands that presume the session is interactive when it isn't.
43
44```bash
45if [ -t 1 ]; then
46 bind "TAB:menu-complete"
47 bind "set show-all-if-ambiguous on"
48 bind "set completion-ignore-case on"
49 bind "set menu-complete-display-prefix on"
50 bind '"\e[Z":menu-complete-backward'
51fi
52```
53
54After logging out and back in the problem was gone.
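For what it's worth, the same readline settings can also live in `~/.inputrc`, which readline only reads when line editing is actually initialized, so the problem never comes up there. Note the slightly different syntax, without `bind`:

```
TAB: menu-complete
set show-all-if-ambiguous on
set completion-ignore-case on
set menu-complete-display-prefix on
"\e[Z": menu-complete-backward
```

The `.bashrc` approach keeps the settings scoped to bash; `.inputrc` applies them to every readline-based program.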
diff --git a/content/posts/2020-09-09-digitalocean-sync.md b/content/posts/2020-09-09-digitalocean-sync.md
deleted file mode 100644
index e16b827..0000000
--- a/content/posts/2020-09-09-digitalocean-sync.md
+++ /dev/null
@@ -1,112 +0,0 @@
1---
2title: Using Digitalocean Spaces to sync between computers
3url: digitalocean-spaces-to-sync-between-computers.html
4date: 2020-09-09T12:00:00+02:00
5type: post
6draft: false
7---
8
9I've been using [Dropbox](https://www.dropbox.com/) for probably **10+ years**
10now, and I've become so used to it running in the background that I can't
11even imagine a world without it. But it's not without problems.
12
13At first I had problems with `.venv` environments for Python: the only way to
14exclude this folder from synchronization was to manually exclude each
15specific folder, which is not really scalable. FYI, my whole project folder is
16synced on [Dropbox](https://www.dropbox.com/). This of course meant syncing a
17lot of files and folders that are not needed, or that even break things on
18other machines. In the case of **Python**, I couldn't use a synced `.venv` on my
19second machine. I needed to delete the `.venv` folder and pip-install it again,
20which synced the files back to the main machine. This was very frustrating.
21**Node.js** handles this much more nicely, and I can just run the scripts without
22deleting `node_modules` and reinstalling. However, `node_modules` is a beast of
23its own. It creates so many files that the OS has a problem counting them when
24you check the folder contents for size.
25
26I wanted something similar to Dropbox. I could live without the instant
27syncing, but it would need to be fast and have the option to exclude folders
28like `node_modules`, `.venv`, and `.git`.
29
30I went on a hunt for an alternative to [Dropbox](https://www.dropbox.com/)
31and found:
32
33- [Tresorit](https://tresorit.com/)
34- [Sync.com](https://sync.com)
35- [Box](https://www.box.com/)
36
37You know, the usual list of suspects. I didn't include [Google
38drive](https://drive.google.com) or [One drive](https://onedrive.live.com/)
39since they are even more draconian than Dropbox.
40
41> All this does not stem from me being paranoid, but recently these companies
42> have become more and more aggressive and they keep violating our privacy by
43> sharing our data with 3rd party services. It is getting out of control.
44
45So, my main problem was still there: no way of excluding a specific folder from
46syncing. And before we go into "*But you have git, isn't that enough?*", I must
47say that many of the files (PDFs, spreadsheets, etc.) I keep in a `git` repo
48don't get pushed upstream, and I still want to have them synced across my
49computers.
50
51I initially wanted to use [rsync](https://linux.die.net/man/1/rsync), but that
52would require a remote VPS or transferring between my computers directly. I
53wanted a solution where all my files are accessible to me even without my
54machine.
55
56> **WARNING: This solution will cost you money!** DigitalOcean Spaces are $5 per
57> month and there are some bandwidth limitations; if you go beyond them you get
58> billed additionally.
59
60Then I remembered that I could use something like
61[S3](https://en.wikipedia.org/wiki/Amazon_S3) since it has versioning and is
62fully managed. I didn't want to go down the AWS rabbit hole with this, so I
63chose [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/).
64
65Then I needed a command-line tool to sync between a source and a target. I found
66this nice tool, [s3cmd](https://s3tools.org/s3cmd), which is available in the
67Ubuntu repositories.
68
69```bash
70sudo apt install s3cmd
71```
72
73After installation, I created a new Spaces bucket on DigitalOcean. Remember the
74region you choose, because you will need it when you configure
75`s3cmd`.
76
77Then I visited [Digitalocean Applications &
78API](https://cloud.digitalocean.com/account/api/tokens) and generated **Spaces
79access keys**. Save both the key and the secret somewhere safe, because once you
80leave the page the secret will no longer be available to you and you will need
81to re-generate it.
82
83```bash
84# enter your key and secret and correct endpoint
85# my endpoint is ams3.digitaloceanspaces.com because
86# I created my bucket in the Amsterdam region
87s3cmd --configure
88```
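
For reference, the wizard stores its answers in `~/.s3cfg`; the relevant fields end up looking roughly like this (the key values are placeholders):

```txt
[default]
access_key = YOUR_KEY
secret_key = YOUR_SECRET
host_base = ams3.digitaloceanspaces.com
host_bucket = %(bucket)s.ams3.digitaloceanspaces.com
```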
89
90After that I played around with options for `s3cmd` and got to the following
91command.
92
93```bash
94# I executed this command from my projects folder
95cd projects
96s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://my-bucket-name/projects/
97```
98
99When syncing in the other direction, you change the order of `SOURCE` and
100`TARGET`: `s3cmd sync --delete-removed s3://my-bucket-name/projects/ ./`.
101
102> Be sure that all the paths have a trailing slash so that sync knows that
103> these are directories.
104
105I am planning to implement some sort of `.ignore` file that will give me
106project-specific exclude options.
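
A rough sketch of how such a file could work, assuming a hypothetical `.syncignore` with one glob pattern per line (s3cmd has no built-in support for this; the function below just assembles `--exclude` flags):

```shell
# Turn an ignore file (one pattern per line, '#' comments and blank
# lines skipped) into a string of --exclude flags for s3cmd.
build_excludes() {
  excludes=""
  while IFS= read -r pattern; do
    case "$pattern" in
      ''|'#'*) continue ;;
    esac
    excludes="$excludes --exclude=$pattern"
  done < "$1"
  echo "$excludes"
}

# Hypothetical usage (patterns containing spaces would need extra
# care because of word splitting):
#   s3cmd sync $(build_excludes .syncignore) ./ s3://my-bucket-name/projects/
```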
107
108I am currently running this every hour as a cronjob, which is perfectly fine
109for now while I am testing how this whole thing works out.
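
For reference, the hourly schedule could look like this in `crontab -e` (paths are illustrative):

```txt
0 * * * * s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' $HOME/projects/ s3://my-bucket-name/projects/
```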
110
111I have also created a small Gnome extension which is still very unstable, but
112when/if this whole experiment pays off I will share it on GitHub.
diff --git a/content/posts/2021-01-24-replacing-dropbox-with-s3.md b/content/posts/2021-01-24-replacing-dropbox-with-s3.md
deleted file mode 100644
index b7fc424..0000000
--- a/content/posts/2021-01-24-replacing-dropbox-with-s3.md
+++ /dev/null
@@ -1,114 +0,0 @@
1---
2title: Replacing Dropbox in favor of DigitalOcean spaces
3url: replacing-dropbox-in-favor-of-digitalocean-spaces.html
4date: 2021-01-24T12:00:00+02:00
5type: post
6draft: false
7---
8
9A few months ago I experimented with DigitalOcean spaces as my backup solution
10that could [replace Dropbox
11eventually](/digitalocean-spaces-to-sync-between-computers.html). That solution
12worked quite nicely, and I was amazed how smashing together a couple of existing
13solutions would work this fine.
14
15I have been running that solution in the background for a couple of months now
16and kind of forgot about it. But recent developments around deplatforming and
17people being held hostage by technology and big companies sped up my goals to
18become less dependent on
19[Google](https://edition.cnn.com/2020/12/17/tech/google-antitrust-lawsuit/index.html),
20[Dropbox](https://www.pcworld.com/article/2048680/dropbox-takes-a-peek-at-files.html),
21etc., and take back some control.
22
23I am not a conspiracy theory nut, but to be honest, what these companies are
24doing lately is out of control. It is a matter of principle at this point. I
25have almost completely degoogled my life all the way from ditching Gmail,
26YouTube and most of the services surrounding Google. And I must tell you, I feel
27so good. I haven't felt this way for a long time.
28
29**Anyways. Let's get to the meat of things.**
30
31Before you continue you should read my post about [syncing to
32Dropbox](/digitalocean-spaces-to-sync-between-computers.html).
33
34> Also to note, I am using Linux on my machine with the Gnome desktop
35> environment. This should work on MacOS too. To use this on Windows, I suggest
36> using the [Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10)
37> or [Cygwin](https://www.cygwin.com/).
38
39## Folder structure
40
41I liked the structure Dropbox uses: one folder where everything is located and
42synced. That's why I adopted it for my sync setup as well.
43
44```txt
45~/Vault
46 ↳ backup
47 ↳ bin
48 ↳ documents
49 ↳ projects
50```
51
52All of my code is located in the `~/Vault/projects` folder, and most of the
53projects are Git repositories. I do not use this sync method for backup per se,
54but in case I reinstall my machine I can easily recreate all the important
55folder structure with one quick command. No external drives needed that can fail, etc.
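
Recreating that layout after a reinstall is a one-liner with shell brace expansion (a sketch; the base path is whatever you use):

```shell
# Recreate the Vault directory layout in one command.
VAULT="${VAULT:-$HOME/Vault}"
mkdir -p "$VAULT"/{backup,bin,documents,projects}
ls "$VAULT"
```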
56
57## Sync script
58
59My sync script is located in `~/Vault/bin/vault-backup.sh`
60
61```bash
62#!/bin/bash
63
64# dconf load /com/gexperts/Tilix/ < tilix.dconf
65# 0 2 * * * sh ~/Vault/bin/vault-backup.sh
66
67cd ~/Vault/backup/dotfiles
68
69MACHINE="$(whoami)@$(hostname)"
70mkdir -p "$MACHINE"
71cd "$MACHINE"
72
73cp ~/.config/VSCodium/User/settings.json settings.json
74cp ~/.s3cfg s3cfg
75cp ~/.bash_extended bash_extended
76cp -rf ~/.ssh ssh
77
78codium --list-extensions > vscode-extension.txt
79dconf dump /com/gexperts/Tilix/ > tilix.dconf
80
81cd ~/Vault
82s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://bucket-name/backup/
83
84echo `date +"%D %T"` >> ~/.vault.log
85
86notify-send \
87 -u normal \
88 -i /usr/share/icons/Adwaita/96x96/status/security-medium-symbolic.symbolic.png \
89 "Vault sync succeeded at `date +"%D %T"`"
90```
91
92This script also backs up some of the dotfiles I use and sends a notification
93to the Gnome notification center. It is a straightforward solution; nothing
94special going on.
95
96> One obvious benefit of this is that I can omit syncing Node's `node_modules`
97> or Python's `.venv` and `.git` folders.
98
99You can use this script in a combination with [Cron](https://en.wikipedia.org/wiki/Cron).
100
101```txt
1020 2 * * * sh ~/Vault/bin/vault-backup.sh
103```
104
105When you start syncing your local stuff with a remote server you can review your
106items on DigitalOcean.
107
108![Dropbox Spaces](/posts/dropbox-sync/dropbox-spaces.png)
109
110I have been using this script for quite some time now, and it's working
111flawlessly. I have also uninstalled Dropbox and stopped using it completely.
112
113All I need to do now is write a Bash script that does the reverse and downloads
114from the remote server to the local folder. That could be another post.
diff --git a/content/posts/2021-01-25-goaccess.md b/content/posts/2021-01-25-goaccess.md
deleted file mode 100644
index 84ea3cd..0000000
--- a/content/posts/2021-01-25-goaccess.md
+++ /dev/null
@@ -1,204 +0,0 @@
1---
2title: Using GoAccess with Nginx to replace Google Analytics
3url: using-goaccess-with-nginx-to-replace-google-analytics.html
4date: 2021-01-25T12:00:00+02:00
5type: post
6draft: false
7---
8
9## Introduction
10
11I know! You cannot simply replace Google Analytics by parsing access logs and
12displaying a couple of charts. But to be honest, I never actually used Google
13Analytics to its fullest extent and was usually only interested in page hits
14and which pages were visited most often.
15
16I recently moved my blog from Firebase to a VPS and decided to remove the
17Google Analytics tracking code from the site, since it's quite malicious: it
18tracks users across other pages, building a profile of each user, and
19I've had it. But I still need some insight into what is happening on the server,
20which content is being read the most, etc.
21
22I have looked at many existing solutions like:
23
24- [Umami](https://umami.is/)
25- [Freshlytics](https://github.com/sheshbabu/freshlytics)
26- [Matomo](https://matomo.org/)
27
28But the more I looked at them, the more I noticed that I was replacing one evil
29with another. Don't get me wrong, some of these solutions are absolutely
30fantastic, but they would require installing a database and something like PHP
31or Node, and I was not ready to put those things on my fresh server. Having
32Docker installed is also out of the question.
33
34## Opting for log parsing
35
36So, I defaulted to parsing the already existing logs and generating HTML reports
37from this data.
38
39I found this amazing software, [GoAccess](https://goaccess.io/), which provides
40all the functionality I need, and it's a single binary written in C.
41
42GoAccess can be used in two different modes.
43
44![GoAccess Terminal](/posts/goaccess/goaccess-dash-term.png)
45
46*Running in a terminal*
47
48![GoAccess HTML](/posts/goaccess/goaccess-dash-html.png)
49
50*Running in a browser*
51
52I, however, need this to run in a browser, so the second option is the way to
53go. The idea is to periodically run a cronjob and export this report into a
54folder that then gets served by Nginx behind Basic authentication.
55
56## Getting Nginx ready
57
58I chose Ubuntu on [DigitalOcean](https://www.digitalocean.com/). First I
59installed [Nginx](https://nginx.org/en/), and
60[Letsencrypt](https://letsencrypt.org/getting-started/) certbot and all the
61necessary dependencies.
62
63```sh
64# log in as root user
65sudo su -
66
67# first let's update the system
68apt update && apt upgrade -y
69
70# let's install
71apt install nginx certbot python3-certbot-nginx apache2-utils
72```
73
74After all this is installed, we can create a new configuration for the
75statistics. Stats will be available at `stats.domain.com`.
76
77```sh
78# creates directory where html will be hosted
79mkdir -p /var/www/html/stats.domain.com
80
81cp /etc/nginx/sites-available/default /etc/nginx/sites-available/stats.domain.com
82nano /etc/nginx/sites-available/stats.domain.com
83```
84
85```nginx
86server {
87 root /var/www/html/stats.domain.com;
88 server_name stats.domain.com;
89
90 index index.html;
91 location / {
92 try_files $uri $uri/ =404;
93 }
94}
95```
96
97Now we check if the configuration is ok with `nginx -t`. If all is ok, we can
98restart Nginx with `service nginx restart`.
99
100After all that, you should add an A record for this domain that points to the
101IP of the droplet.
102
103Before enabling SSL you should test if DNS records have propagated with `curl
104stats.domain.com`.
105
106Now, it's time to provision a TLS certificate. To achieve this, execute the
107command `certbot --nginx`. Follow the wizard and when you are asked about
108redirection, choose 2 (always redirect to HTTPS).
109
110When this is done you can visit https://stats.domain.com and you should get a
111404 Not Found error, which is correct.
112
113## Getting GoAccess ready
114
115If you are using a Debian-like system, GoAccess should be available in the
116repositories. Otherwise, refer to the official website.
117
118```sh
119apt install goaccess
120```
121
122To enable geolocation we also need one additional thing.
123
124```sh
125cd /var/www/html/stats.domain.com
126wget https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb
127```
128
129Now we create a shell script that will be executed every 10 minutes.
130
131```sh
132nano /var/www/html/stats.domain.com/generate-stats.sh
133```
134
135Contents of this file should look like this.
136
137```sh
138#!/bin/sh
139
140zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log
141
142goaccess \
143 --log-file=/var/log/nginx/access-all.log \
144 --log-format=COMBINED \
145 --exclude-ip=0.0.0.0 \
146 --geoip-database=/var/www/html/stats.domain.com/GeoLite2-City.mmdb \
147 --ignore-crawlers \
148 --real-os \
149 --output=/var/www/html/stats.domain.com/index.html
150
151rm /var/log/nginx/access-all.log
152```
153
154Because Nginx rotates its access logs into multiple gzipped files, we use
155[`zcat`](https://linux.die.net/man/1/zcat) to extract their contents into one
156file with all the access logs. After this file is used, we delete it.
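
A tiny demonstration of that behavior: `zcat -f` passes plain files through and decompresses gzipped ones, so rotated logs concatenate into one stream (file names here are made up):

```shell
tmp=$(mktemp -d)
printf 'plain entry\n' > "$tmp/access.log"
printf 'rotated entry\n' | gzip > "$tmp/access.log.2.gz"

# Combine the current and rotated logs, just like the script above.
zcat -f "$tmp"/access.log* > "$tmp/access-all.log"
wc -l < "$tmp/access-all.log"
```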
157
158If you want to exclude your home IP from the results, look at the `--exclude-ip`
159option in the script and replace `0.0.0.0` with your own home IP address. You
160can find your home IP by executing `curl ifconfig.me` from your local machine,
161NOT from the droplet.
162
163Test the script by executing `sh
164/var/www/html/stats.domain.com/generate-stats.sh` and then checking
165`https://stats.domain.com`. If you can see stats instead of a 404, then you are
166set.
167
168It's time to add this script to cron with `crontab -e`.
169
170```txt
171*/10 * * * * sh /var/www/html/stats.domain.com/generate-stats.sh
172```
173
174## Securing with Basic authentication
175
176You probably don't want stats to be publicly available, so we should create a
177user and a password for Basic authentication.
178
179First we create a password for a user `stats` with `htpasswd -c /etc/nginx/.htpasswd stats`.
180
181Now we update the config file with `nano
182/etc/nginx/sites-available/stats.domain.com`. You will probably notice that the
183file looks a bit different from before. This is because `certbot` added
184additional rules for SSL.
185
186The `location` portion of the config file should now look like this. You need
187to add the `auth_basic` and `auth_basic_user_file` lines to the file.
188
189```nginx
190location / {
191 try_files $uri $uri/ =404;
192 auth_basic "Private Property";
193 auth_basic_user_file /etc/nginx/.htpasswd;
194}
195```
196
197Test if the config is still ok with `nginx -t`, and if it is you can restart
198Nginx with `service nginx restart`.
199
200If you now visit `https://stats.domain.com` you should be prompted for username
201and password. If not, try reopening your browser.
202
203That is all. You now have analytics for your server that gets refreshed every 10
204minutes.
diff --git a/content/posts/2021-06-26-simple-world-clock.md b/content/posts/2021-06-26-simple-world-clock.md
deleted file mode 100644
index ef2f12c..0000000
--- a/content/posts/2021-06-26-simple-world-clock.md
+++ /dev/null
@@ -1,107 +0,0 @@
1---
2title: Simple world clock with eInk display and Raspberry Pi Zero
3url: simple-world-clock-with-eiink-display-and-raspberry-pi-zero.html
4date: 2021-06-26T12:00:00+02:00
5type: post
6draft: false
7---
8
9Our team is spread across the world, from the USA all the way to Australia, so
10having some sort of world clock makes sense.
11
12Currently, I am using an extension for Gnome called [Timezone
13extension](https://extensions.gnome.org/extension/2657/timezones-extension/),
14and it serves the purpose quite well.
15
16But I also have a bunch of electronics that I have bought over the years and am
17not using, and it's time to stop hoarding this stuff and use it in a
18project.
19
20A while ago I bought a small eInk display, the [Inky
21pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811), and
22I have a bunch of [Raspberry Pi
23Zeros](https://www.raspberrypi.org/products/raspberry-pi-zero/) lying around
24that I really need to use.
25
26![Inky pHAT, Raspberry Pi Zero](/posts/world-clock/hardware.jpg)
27
28Since the [Inky
29pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811) is
30essentially a HAT, it can easily be added on top of the [Raspberry Pi
31Zero](https://www.raspberrypi.org/products/raspberry-pi-zero/).
32
33First, I installed the necessary software on Raspberry Pi with `pip3 install
34inky`.
35
36And then I created a file `clock.py` in home directory `/home/pi`.
37
38```python
39#!/usr/bin/env python
40# -*- coding: utf-8 -*-
41
42import os
44from inky.auto import auto
45from PIL import Image, ImageFont, ImageDraw
46from font_fredoka_one import FredokaOne
47
48clocks = [
49 'America/New_York',
50 'Europe/Ljubljana',
51 'Australia/Brisbane',
52]
53
54board = auto()
55board.set_border(board.WHITE)
56board.rotation = 90
57
58img = Image.new('P', (board.WIDTH, board.HEIGHT))
59draw = ImageDraw.Draw(img)
60
61big_font = ImageFont.truetype(FredokaOne, 18)
62small_font = ImageFont.truetype(FredokaOne, 13)
63
64x = board.WIDTH / 3
65y = board.HEIGHT / 3
66
67idx = 1
68for clock in clocks:
69 ctime = os.popen('TZ="{}" date +"%a,%H:%M"'.format(clock))
70 ctime = ctime.read().strip().split(',')
71 city = clock.split('/')[1].replace('_', ' ')
72
73 draw.text((15, (idx*y)-y+10), city, fill=board.BLACK, font=small_font)
74 draw.text((110, (idx*y)-y+7), str(ctime[0]), fill=board.BLACK, font=big_font)
75 draw.text((155, (idx*y)-y+7), str(ctime[1]), fill=board.BLACK, font=big_font)
76
77 idx += 1
78
79board.set_image(img)
80board.show()
81```
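
As an aside, the shell-out to `date` with a `TZ` override can be replaced with pure Python using `zoneinfo` (Python 3.9+). This is just a sketch of the time computation, not the original script:

```python
# Compute each city's local day and time with zoneinfo instead of
# spawning `date` via os.popen.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

clocks = [
    'America/New_York',
    'Europe/Ljubljana',
    'Australia/Brisbane',
]

def city_rows(now_utc=None):
    """Return (city, weekday, HH:MM) tuples for every configured timezone."""
    now_utc = now_utc or datetime.now(timezone.utc)
    rows = []
    for tz in clocks:
        local = now_utc.astimezone(ZoneInfo(tz))
        city = tz.split('/')[1].replace('_', ' ')
        rows.append((city, local.strftime('%a'), local.strftime('%H:%M')))
    return rows

for city, day, hhmm in city_rows():
    print(f'{city:<12} {day} {hhmm}')
```

The drawing code stays the same; only the `ctime` lookup changes.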
82
83And because eInk displays are rather slow to refresh and the clock requires
84refreshing only once a minute, this can be done through a cronjob.
85
86Before we add this job to cron we need to make `clock.py` executable with `chmod
87+x clock.py`.
88
89Then we add a cronjob with `crontab -e`.
90
91```txt
92* * * * * /home/pi/clock.py
93```
94
95So, we end up with a result like this.
96
97![World Clock](/posts/world-clock/world-clock.jpg)
98
99As for the enclosure, it can be 3D printed. I haven't designed one yet, but
100something like this can be used.
101
102<iframe id="vs_iframe" src="https://www.viewstl.com/?embedded&url=https%3A%2F%2Fmitjafelicijan.com%2Fposts%2Fworld-clock%2Fenclosure.stl&color=gray&bgcolor=white&edges=no&orientation=front&noborder=no" style="border:0;margin:0;width:100%;height:400px;"></iframe>
103
104You can download my [STL file for the enclosure
105here](/posts/world-clock/enclosure.stl), but make sure the dimensions make
106sense. An opening for the USB port should also be added, or just use a drill
107and some hot glue to make it stick in the enclosure.
diff --git a/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md b/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md
deleted file mode 100644
index 100645b..0000000
--- a/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md
+++ /dev/null
@@ -1,103 +0,0 @@
1---
2title: My journey from being an internet über consumer to being a full hominum again
3url: from-internet-consumer-to-full-hominum-again.html
4date: 2021-07-30T12:00:00+02:00
5type: post
6draft: false
7---
8
9It's been almost a year since I started purging all my online accounts and
10going down this rabbit hole of becoming almost independent of the current
11internet machine. Even though I initially thought that I would have problems
12adapting, I was pleasantly surprised that the transition went so smoothly. Even
13better, it brought many benefits to my life, such as increased focus and less
14stress about trivial things.
15
16It all started with me making small changes, like unsubscribing from emails
17that I had subscribed to by accepting terms and conditions, or even more
18malicious emails that I was getting because I was on a shared mailing list. The
19latter I hate most of all. How the hell do they keep sharing my email, sending
20me unsolicited emails, and getting away with it? I have a suspicion that these
21marketing people share an Excel file between them and keep resubscribing me
22when they import lists into Mailchimp or similar software.
23
24It's fascinating to see how much crap you get subscribed to when you are not
25paying attention. It got so bad that my primary Gmail address is full of junk
26and needs constant monitoring and cleaning up. And because I want to have Inbox
27Zero, this presents an additional problem for me.
28
29The stress that email caused me went unnoticed for a long time. I was
30noticing that I was unable to go a single hour without hysterically
31refreshing email. And if somebody wrote me something, I needed to see it right
32then, even though I didn't immediately reply to it. I can only describe this
33as FOMO (fear of missing out). I have no other explanation. It was
34crippling, and I was constantly context switching, which I will address further
35down this post in more detail.
36
37This was one of the reasons why I spun up my personal email server, and I am
38using it now as my primary and personal email. I still have Gmail as my “junk”
39email that I use for throwaway stuff. I log in to Gmail once a week and check
40whether any important emails arrived, but apart from that, it's sitting
41dormant and collecting dust.
42
43The more I watched the world lose itself by allowing anti-freedom
44things to happen to it, the more I started to realize that something had to
45change. I don't have the power to change the world. And I also don't have a
46grandiose enough opinion of myself to even try. But what I can do is not
47subscribe to this consumer way of thinking. I will not be complicit in this. My
48moral and ethical stances won't allow it. So, this brings us to the second part
49of my journey.
50
51I was using all these 3rd party services because I was either lazy or OK with
52their drawbacks. I watched these services and companies become more and
53more evil. It is evil if you sell your users' data in this manner. Nobody reads
54privacy policies, everybody is OK with accepting them, and they prey on that
55flaw in human nature. I really hate the hypocrisy they manage to muster. These
56companies prey on our laziness, and we are at fault here. Nobody else. And I
57truly understand the reasons why we rather accept and move on, and not object
58and have our lives a little more difficult. They have perfected this through
59years of small changes that make us a little more dependent on them. You could
60not convince a person to give away all their rights and data in one day. It was
61gradual and slow, and it caught us all by surprise. When I really stopped and
62thought about it, I felt repulsed. And by really stopping and thinking about
63it, I mean doing so thoroughly and in depth.
64
65Each step I took depleted my character a bit more, like I was trading myself
66bit by bit without understanding what it all meant. What it meant to be a full
67person, not divided by all this bought attention they want from me. They don't
68just get your data; they also take your attention away from you. They
69scatter your attention and go with the divide-and-conquer tactic from there.
70And a person divided is a person not fully there. Not in the moment. Not fully alive.
71
72I was unable to form long thoughts. Well, I thought I could. But now that I see
73what being a full person is again, I can see that I was not at 100% back
74then.
75
76A revolt was inevitable. There was no other way of continuing my story without
77it. Without taking back my attention, my thoughts, my time, and my privacy,
78regardless of how late it may already be.
79
80This has nothing to do with conspiracy theories. Even less with changing the
81world. All I wanted was to get my life back in order and not waste the energy
82that could be spent in other, better places.
83
84I started reading more. I can focus now fully on things I work on. Furthermore,
85I have the mental acuity that I never had before. My mind feels sharp. I don't
86get angry so much. I can cherish the finer things in life now without the need
87to interpret them intellectually. Not only that, but I have a feeling of
88belonging again. Sense of purpose has returned with a vengeance. And I can now
89help people without depleting myself.
90
91The last step so far was to finish closing all the remaining online accounts
92I still had. When I thought about what value they brought me, I wasn't
93surprised that the answer was none. I wasn't logging in and using them. I
94stopped being afraid of missing out. If somebody wants to get in contact with
95me, they will find a way. I am one search away.
96
97We are not beholden to anybody. Our lives are our own. So dare yourself to
98delete Facebook and LinkedIn. To unsubscribe. Dare yourself to take your time
99and attention back. Use that time and energy to go for a walk without thinking
100about work. Read a book instead of reading comments on social media that you
101will forget in an hour. Enrich your life instead of wasting it. It only
102requires a small step, and you will feel the benefits immediately. Lose the
103weight of the world that is crushing you without your consent.
diff --git a/content/posts/2021-08-01-linux-cheatsheet.md b/content/posts/2021-08-01-linux-cheatsheet.md
deleted file mode 100644
index 20e3382..0000000
--- a/content/posts/2021-08-01-linux-cheatsheet.md
+++ /dev/null
@@ -1,287 +0,0 @@
1---
2title: List of essential Linux commands for server management
3url: linux-cheatsheet.html
4date: 2021-08-01T12:00:00+02:00
5type: post
6draft: false
7---
8
9**Generate SSH key**
10
11```bash
12ssh-keygen -t ed25519 -C "your_email@example.com"
13
14# when no support for Ed25519 present
15ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
16```
17
18Note: By default, SSH keys are stored in the `/home/<username>/.ssh/` folder.
19
20**Login to host via SSH**
21
22```bash
23# connect to host as your local username
24ssh host
25
26# connect to host as user
27ssh <user>@<host>
28
29# connect to host using port
30ssh -p <port> <user>@<host>
31```
32
33**Execute command on a server through SSH**
34
35```bash
36# execute one command
37ssh root@100.100.100.100 "ls /root"
38
39# execute many commands
40ssh root@100.100.100.100 "cd /root;touch file.txt"
41```
42
43**Displays currently logged in users in the system**
44
45```bash
46w
47```
48
49**Displays Linux system information**
50
51```bash
52uname
53```
54
55**Displays kernel release information**
56
57```bash
58uname -r
59```
60
61**Shows the system hostname**
62
63```bash
64hostname
65```
66
67**Shows system reboot history**
68
69```bash
70last reboot
71```
72
73**Displays information about the user**
74
75```bash
76sudo apt install finger
77finger <username>
78```
79
80**Displays IP addresses and all the network interfaces**
81
82```bash
83ip addr show
84```
85
86**Downloads a file from an online source**
87
88```bash
89wget https://example.com/example.tgz
90```
91
92Note: If the URL contains `?` or `&`, enclose it in double quotes.
93
94**Compress a file with gzip**
95
96```bash
97# will not keep the original file
98gzip file.txt
99
100# will keep the original file
101gzip --keep file.txt
102```
103
104**Interactive disk usage analyzer**
105
106```bash
107sudo apt install ncdu
108
109ncdu
110ncdu <path/to/directory>
111```
112
113**Install Node.js using the Node Version Manager**
114
115```bash
116curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
117source ~/.bashrc
118
119nvm install v13
120```
121
122**Too long; didn't read**
123
124```bash
125npm install -g tldr
126
127tldr tar
128```
129
130**Combine all Nginx access logs to one big log file**
131
132```bash
133zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log
134```
135
136**Set up Redis server**
137
138```bash
139sudo apt install redis-server redis-tools
140
141# check if server is running
142sudo service redis status
143
144# set and get a key value
145redis-cli set mykey myvalue
146redis-cli get mykey
147
148# interactive shell
149redis-cli
150```
151
152**Generate statistics of your webserver**
153
154```bash
155sudo apt install goaccess
156
157# check if installed
158goaccess -v
159
160# combine logs
161zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log
162
163# export to single html
164goaccess \
165 --log-file=/var/log/nginx/access-all.log \
166 --log-format=COMBINED \
167 --exclude-ip=0.0.0.0 \
168 --ignore-crawlers \
169 --real-os \
170 --output=/var/www/html/stats.html
171
172# cleanup afterwards
173rm /var/log/nginx/access-all.log
174```
175
176**Search for a given pattern in files**
177
178```bash
179grep -r 'pattern' files
180```
181
182**Find the process ID for a specific program**
183
184```bash
185pgrep nginx
186```
187
188**Print name of current/working directory**
189
190```bash
191pwd
192```
193
194**Creates a blank new file**
195
196```bash
197touch newfile.txt
198```
199
200**Displays first lines in a file**
201
202```bash
203# -n <x> presents the number of lines (10 by default)
204head -n 20 somefile.txt
205```
206
207**Displays last lines in a file**
208
209```bash
210# -n <x> presents the number of lines (10 by default)
211tail -n 20 somefile.txt
212
213# -f follows changes to the file (doesn't close)
214tail -f somefile.txt
215```
216
217**Count lines in a file**
218
219```bash
220wc -l somefile.txt
221```
222
223**Find all instances of the file**
224
225```bash
226sudo apt install mlocate
227
228locate somefile.txt
229```
230
231**Find file names that begin with 'index' in the /home folder**
232
233```bash
234find /home/ -name "index*"
235```
236
237**Find files larger than 100MB in the home folder**
238
239```bash
240find /home -size +100M
241```
242
243**Displays block devices related information**
244
245```bash
246lsblk
247```
248
249**Displays free space on mounted systems**
250
251```bash
252df -h
253```
254
255**Displays free and used memory in the system**
256
257```bash
258free -h
259```
260
261**Displays all active listening ports**
262
263```bash
264sudo apt install net-tools
265
266netstat -pnltu
267```
268
269**Kill a process violently**
270
271```bash
272kill -9 <pid>
273```
274
275**List files opened by user**
276
277```bash
278lsof -u <user>
279```
280
281**Execute "df -h", showing periodic updates**
282
283```bash
284# -n 1 means every second
285watch -n 1 df -h
286```
287
diff --git a/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md b/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md
deleted file mode 100644
index 8c2a870..0000000
--- a/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md
+++ /dev/null
@@ -1,276 +0,0 @@
1---
2title: Debian based riced up distribution for Developers and DevOps folks
3url: debian-based-riced-up-distribution-for-developers-and-devops-folks.html
4date: 2021-12-03T12:00:00+02:00
5type: post
6draft: false
7---
8
9## Introduction
10
11I have been using [Ubuntu](https://ubuntu.com/) for quite a long time now. I
12have used [Debian](https://www.debian.org/) and
13[Manjaro](https://manjaro.org/) in the past, had [Arch](https://archlinux.org/) for
14some time, and even ran [Gentoo](https://www.gentoo.org/) way back.
15
16What I learned from all this is that I prefer running slightly older but stable
17versions over a bleeding-edge rolling release. For that reason, I have
18stuck with Ubuntu for a couple of years now. I am also at a point in my life
19where I just don't care what is cool or hip anymore. I just want a stable system
20that doesn't get in my way.
21
22During all this, I noticed that these distributions were getting very bloated,
23including a lot of software that I usually uninstall right after a fresh
24installation. Maybe this is my OCD speaking, but why does a fresh installation
25have to use at least 1 GB of RAM out of the box just to show a blank screen in front
26of me? I get it, there are many things included in the distro to make my life
27easier. I understand. But at this point I have a feeling that modern Linux
28distributions are becoming similar to a [Node.js project with
29node_modules](https://devhumor.com/content/uploads/images/August2017/node-modules.jpg):
30just a crazy number of packages serving very little or no purpose, merely
31supporting other software.
32
33I felt I needed a fresh start. To start over with something minimal and clean.
34Something that would put a little more joy into using a computer again.
35
36For the first version, I wanted to target the following machines I have at home
37that I want this thing to work on.
38
39```yaml
40# My main stationary work machine
41Resolution: 3840x1080 (Super Ultrawide Monitor 32:9)
42CPU: Intel i7-8700 (12) @ 4.600GHz
43GPU: AMD ATI Radeon RX 470/480/570/570X/580/580X/590
44Memory: 32020MiB
45```
46
47```yaml
48# Thinkpad x220 for testing things and goofing around
49Resolution: 1366x768
50CPU: Intel i5-2520M (4) @ 3.200GHz
51GPU: Intel 2nd Generation Core Processor Family
52Memory: 15891MiB
53```
54
55## How should I approach this?
56
57I knew I wanted to use [minimal Debian netinst
58](https://www.debian.org/CD/netinst/) as the base to give myself a head
59start. There was no reason to go through changing the installer and testing that
60behemoth of a thing. So, some sort of ricing was the only logical option to get
61this thing off the ground somewhat quickly.
62
63> **What is ricing anyway?**
64> The term “RICE” stands for Race Inspired Cosmetic Enhancement. A group of
65> people (could be one, idk) decided to see if they could tweak their own
66> distros like they/others did their cars. This gave rise to a community of
67> Linux/Unix enthusiasts trying to make their distros look cooler and better
68> than others... For more information, read this article
69> [What in the world is ricing!?](https://pesos.github.io/2020/07/14/what-is-ricing.html).
70
71I didn't want this to just be a set of config files for theming purposes. I
72wanted this to include a set of pre-installed tools and services that a modern
73developer uses all the time. Theming is just a tiny part of it:
74fonts applied consistently across the distro and things like that.
75
76First, I chose the terminal installer and let it load additional components.
77Avoid the graphical installer in this case.
78
79![](/posts/dfd-rice/install-00.png)
80
81After that I selected a hostname, created a normal user, set passwords for
82that user and the root user, and chose guided mode for disk partitioning.
83
84![](/posts/dfd-rice/install-01.png)
85
86I let it run to install everything required for the base system and opted
87out of scanning additional media for use by the package manager. Those packages
88will be downloaded from the internet during installation.
89
90![](/posts/dfd-rice/install-02.png)
91
92I opted out of the popularity contest, and **now comes the important part**.
93Uncheck all the boxes in Software selection and only leave 'standard system
94utilities'. I also left the SSH server checked, so I would be able to log in to
95the machine from my main PC.
96
97![](/posts/dfd-rice/install-03.png)
98
99At this point, I installed the GRUB bootloader on the disk where I installed the
100system.
101
102![](/posts/dfd-rice/install-04.png)
103
104That concluded the installation of base Debian and after restarting the computer
105I was prompted with the login screen.
106
107![](/posts/dfd-rice/install-05.png)
108
109Now that I had the base installation, it was time to choose what software I
110wanted to include in this so-called distribution. I wanted an out-of-the-box
111developer experience, so I had plenty to choose from.
112
113Let's not waste time and go through the list.
114
115## Desktop environments
116
117I have been using [Gnome](https://www.gnome.org/) for my whole Linux life. From
118version 2 forward. It's been quite a ride. I hated version 3 when it came out
119and replaced version 2. But I got used to it. And now with version 40+ they also
120made a couple of changes which I found both frustrating and pleasantly surprising.
121
122The amount of vertical space you lose because of the beefy title bars on
123windows is ridiculous. And then in the case of
124[Tilix](https://gnunn1.github.io/tilix-web/) you also have tabs, and you are
125100px deep. Vertical space is one of the most important things for a
126developer. The more real estate you have, the more code you can have in a
127viewport.
128
129But on the other hand, I still love how Gnome feels and looks. I gotta give them
130that. They really are trying to make Gnome feel unified and modern.
131
132Regardless of all the nice things Gnome has, I was looking at the tiling window
133managers for some time, but never had the nerve to actually go with it. But now
134was the ideal time to give it a go. No guts, no glory kind of a thing.
135
136One of the requirements for me was easy custom layouts because I use a really
137strange monitor with an aspect ratio of 32:9. So relying on the built-in layouts
138most of them ship with is a non-starter.
139
140What I was doing in Gnome was having windows in a layout like the diagram
141below. This is my common practice. And if you look at it you can clearly see I
142was replicating a tiling window manager setup in Gnome.
143
144![](/posts/dfd-rice/layout.png)
145
146That made me look into a bunch of tiling window managers and test them
147out. The candidates I was looking at were:
148
149- [i3](https://i3wm.org/)
150- [bspwm](https://github.com/baskerville/bspwm)
151- [awesome](https://awesomewm.org/index.html)
152- [XMonad](https://xmonad.org/)
153- [sway](https://swaywm.org/)
154- [Qtile](http://www.qtile.org/)
155- [dwm](https://dwm.suckless.org/)
156
157You can also check the article [13 Best Tiling Window Managers for
158Linux](https://www.tecmint.com/best-tiling-window-managers-for-linux/) I was
159referencing while testing them out.
160
161While all of them provided what I needed, I liked i3 the most. What particularly
162caught my eye was its ease of use and its tree-based layout model, which allows
163flexible layouts. I know the others can also be set up with custom layouts beyond
164spiral, dwindle, etc. I think i3 is a good entry-level window manager for
165somebody like me.
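For reference, the kind of splitting the layout diagram above relies on maps onto just a few i3 directives. This is a minimal sketch using i3's stock keybindings, not the exact config shipped with the rice:

```txt
# ~/.config/i3/config (fragment)
bindsym $mod+h split h              # next window opens to the right
bindsym $mod+v split v              # next window opens below
bindsym $mod+e layout toggle split  # toggle horizontal/vertical split
bindsym $mod+w layout tabbed        # tab the current container
```

Because splits nest into a tree, a few of these bindings are enough to reproduce arbitrary layouts on an ultrawide screen.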
166
167## Batteries included
168
169The source for the whole thing is located on Github
170https://github.com/mitjafelicijan/dfd-rice.
171
172Currently included:
173
174- `non-free` (enables non-free packages in apt)
175- `sudo` (adds sudo and adds user to sudo group)
176- `essentials` (gcc, htop, zip, curl, etc...)
177- `wifi` (network manager nmtui)
178- `desktop` (i3, dmenu, fonts, configurations)
179- `pulseaudio` (pulseaudio with pavucontrol)
180- `code-editors` (vim, micro, vscode)
181- `ohmybash` (make bash pretty)
182- `file-managers` (mc)
183- `git-ui` (terminal git gui)
184- `meld` (diff tool)
185- `profiling` (kcachegrind, valgrind, strace, ltrace)
186- `browsers` (brave, firefox, chromium)
187- programming languages:
188 - `python`
189 - `golang`
190 - `nodejs`
191 - `rust`
192 - `nim`
193 - `php`
194 - `ruby`
195- `docker` (with docker-compose)
196- `ansible`
197
198The install script also allows you to install only specific packages (for example:
199essentials ohmybash docker rust).
200
201```sh
202su - root \
203 bash -c "$(wget -q https://raw.github.com/mitjafelicijan/dfd-rice/master/tools/install.sh -O -)" -- \
204 essentials ohmybash docker rust
205```
206
207Currently, most of these recipes use what Debian provides, and this is totally fine
208with me since I never use bleeding edge features of a package. But if something
209major comes to light, I will replace the package with a compilation script or
210something similar.
211
212This is some of the output from the installation script.
213
214![](/posts/dfd-rice/script.png)
215
216Let's take a look at some examples in the installation script.
217
218### Docker recipe
219
220```sh
221# docker
222print_header "Installing Docker"
223curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --yes --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
224echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
225apt update
226apt -y install docker-ce docker-ce-cli containerd.io docker-compose
227
228systemctl start docker
229systemctl enable docker
230systemctl status docker --no-pager
231
232/sbin/usermod -aG docker $USERNAME
233```
234
235### Making bash pretty
236
237I really like [Oh My Zsh](https://ohmyz.sh/), but I don't like the zsh shell. When
238I used it, I constantly needed to be aware of it and running bash scripts was a
239pain. So, I was really delighted when I found out that a version for bash
240existed called [Oh My Bash](https://ohmybash.nntoan.com/). Let's take a look at
241the recipe for installing it.
242
243```sh
244# ohmybash
245print_header "Enabling OhMyBash"
246sudo -u $USERNAME sh -c "$(curl -fsSL https://raw.github.com/ohmybash/oh-my-bash/master/tools/install.sh)" &
247T1=${!}
248wait ${T1}
249```
250
251Because OhMyBash does `exec bash` at the end, it traps our script inside
252another shell and our script cannot continue. For that reason, I executed it
253in the background. But that presents a new problem: because it runs in the
254background, we naturally lose track of its progress. The trick with
255`T1=${!}` and `wait ${T1}` waits for the background process to finish before
256continuing to the next task in the script.
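In isolation, the pattern looks like this (a minimal sketch, with a placeholder `sleep` standing in for the OhMyBash installer):

```sh
# kick off a long-running task in the background so an `exec` inside it
# cannot hijack our script
sleep 2 &

# $! expands to the PID of the most recently backgrounded job
T1=${!}

# block until that PID exits, then continue with the next recipe
wait ${T1}
echo "background task finished"
```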
257
258Check [Multi-Threaded Processing in Bash Scripts](https://www.cloudsavvyit.com/12277/how-to-use-multi-threaded-processing-in-bash-scripts/)
259for more details.
260
261## Conclusion
262
263Take a look at
264https://github.com/mitjafelicijan/dfd-rice/blob/develop/tools/install.sh script
265to get familiar with it. This is just a first iteration and I will continue to
266update it because I need this in my life.
267
268The current version boots in 4s to the login prompt, and after you log in, the
269desktop environment loads in 2s. So, it's fast, very fast. And on a clean boot, I
270measured ~230 MB of RAM usage.
271
272And this is how it looks with two terminals side by side. I really like the
273simplicity and clean interface. I will polish the colors and stuff like that,
274but I really do like the results.
275
276![](/posts/dfd-rice/desktop.png)
diff --git a/content/posts/2021-12-25-running-golang-application-as-pid1.md b/content/posts/2021-12-25-running-golang-application-as-pid1.md
deleted file mode 100644
index d4db07d..0000000
--- a/content/posts/2021-12-25-running-golang-application-as-pid1.md
+++ /dev/null
@@ -1,347 +0,0 @@
1---
2title: Running Golang application as PID 1 with Linux kernel
3url: running-golang-application-as-pid1.html
4date: 2021-12-25T12:00:00+02:00
5type: post
6draft: false
7---
8
9## Unikernels, kernels, and alike
10
11I have been reading a lot about
12[unikernels](https://en.wikipedia.org/wiki/Unikernel) lately and found them
13very intriguing. When you push away all the marketing speak and look at the
14idea, it makes a lot of sense.
15
16> A unikernel is a specialized, single address space machine image constructed
17> by using library operating systems. ([Wikipedia](https://en.wikipedia.org/wiki/Unikernel))
18
19I really like the explanation from the article
20[Unikernels: Rise of the Virtual Library Operating System](https://queue.acm.org/detail.cfm?id=2566628).
21Really worth a read.
22
23If we compare a normal operating system to a unikernel side by side, they would
24look something like this.
25
26![Virtual machines vs Containers vs Unikernels](/posts/pid1/unikernels.webp)
27
28From this image, we can see how the complexity significantly decreases with
29the use of Unikernels. This comes with a price, of course. Unikernels are hard
30to get running and require a lot of work since you don't have an actual proper
31kernel running in the background providing network access and drivers etc.
32
33So as a half step to make the stack simpler, I started looking into using
34Linux kernel as a base and going from there. I came across this
35[Youtube video talking about Building the Simplest Possible Linux System](https://www.youtube.com/watch?v=Sk9TatW9ino)
36by [Rob Landley](https://landley.net) and apart from statically compiling the
37application to be run as PID 1 there were really no other obstacles.
38
39## What is PID 1?
40
41PID 1 is the first process that the Linux kernel starts after booting.
42It also has a couple of properties that are unique to it:
43
44- When the process with PID 1 dies for any reason, all other processes are
45 killed with the KILL signal.
46- When any process having children dies for any reason, its children are
47 re-parented to the process with PID 1.
48- Many signals which have a default action of Term do not have one for PID 1.
49- When the process with PID 1 dies for any reason, the kernel panics, which
50 results in a system crash.
51
52PID 1 is considered the init application, which takes care of running other
53applications and handling services like:
54
55- sshd,
56- nginx,
57- pulseaudio,
58- etc.
59
60If you are on a Linux machine, you can check which process has PID 1
61by running the following.
62
63```sh
64$ cat /proc/1/status
65Name: systemd
66Umask: 0000
67State: S (sleeping)
68Tgid: 1
69Ngid: 0
70Pid: 1
71PPid: 0
72...
73```
74
75As we can see, on my machine the process with ID 1 is [systemd](https://systemd.io/)
76which is a software suite that provides an array of system components for Linux
77operating systems. If you look closely you can also see that the `PPid`
78(process id of the parent process) is `0` which additionally confirms that
79this process doesn't have a parent.
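If you only care about the name, `/proc/1/comm` holds exactly that; the output will differ depending on your init system:

```sh
# print just the name of the process running as PID 1 (e.g. systemd or init)
cat /proc/1/comm
```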
80
81## So why even run application as PID 1 instead of just using a container?
82
83Containers are wonderful, but they come with a lot of baggage. And because they
84are layered by nature, the images require quite a lot of space and also a
85lot of additional software to handle them. They are not as lightweight as they
86seem, and many popular images require 500 MB plus disk space.
87
88The idea of running this as PID 1 would result in a significantly smaller footprint,
89as we will see later in the post.
90
91> You could run a simple init system inside Docker container described more
92> in this article [Docker and the PID 1 zombie reaping problem](https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/).
93
94## The master plan
95
961. Compile Linux kernel with the default definitions.
972. Prepare a Hello World application in Golang that is statically compiled.
983. Run it with [QEMU](https://www.qemu.org/) and providing Golang application
99 as init application / PID 1.
100
101For the sake of simplicity, we will not be cross-compiling any of it and just
102use the 64-bit version.
103
104## Compiling Linux kernel
105
106```sh
107$ wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.15.7.tar.xz
108$ tar xf linux-5.15.7.tar.xz
109
110$ cd linux-5.15.7
111
112$ make clean
113
114# read more about this https://stackoverflow.com/a/41886394
115$ make defconfig
116
117$ time make -j `nproc`
118
119$ cd ..
120```
121
122At this point we have a kernel image located at `arch/x86_64/boot/bzImage`.
123We will use this in QEMU later.
124
125To make our lives a bit easier, let's move the kernel image to another place.
126Let's create a folder `bin/` in the root of our project with `mkdir -p bin`.
127
128
129Then we can copy `bzImage` into the `bin/` folder with
130`cp linux-5.15.7/arch/x86_64/boot/bzImage bin/bzImage`.
131
132The folder structure of this experiment should look like this.
133
134```txt
135pid1/
136 bin/
137 bzImage
138 linux-5.15.7/
139 linux-5.15.7.tar.xz
140```
141
142## Preparing PID 1 application in Golang
143
144This step is relatively easy. The only thing we must keep in mind is that we
145will need to compile the binary as a static one.
146
147Let's create `init.go` file in the root of the project.
148
149```go
150package main
151
152import (
153 "fmt"
154 "time"
155)
156
157func main() {
158 for {
159 fmt.Println("Hello from Golang")
160 time.Sleep(1 * time.Second)
161 }
162}
163```
164
165Notice that we have an infinite loop in main, with a simple sleep of 1
166second so as not to overwhelm the CPU. This is because PID 1 should never complete
167and/or exit. That would result in a kernel panic. Which is BAD!
168
169There are two ways of compiling a Go application: statically and dynamically.
170
171To statically compile the binary, use the following command.
172
173```sh
174$ go build -ldflags="-extldflags=-static" init.go
175```
176
177We can also check if the binary is statically compiled with:
178
179```sh
180$ file init
181init: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=Ypu8Zw_4NBxm1Yxg2OYO/H5x721rQ9uTPiDVh-VqP/vZN7kXfGG1zhX_qdHMgH/9vBfmK81tFrygfOXDEOo, not stripped
182
183$ ldd init
184not a dynamic executable
185```
186
187At this point, we need to create an [initramfs](https://www.linuxfromscratch.org/blfs/view/svn/postlfs/initramfs.html)
188("initial RAM file system", the successor of initrd: a cpio
189archive of the initial file system that gets loaded into memory
190during the Linux startup process).
191
192```sh
193$ echo init | cpio -o --format=newc > initramfs
194$ mv initramfs bin/initramfs
195```
196
197The project at this stage should look like this.
198
199```txt
200pid1/
201 bin/
202 bzImage
203 initramfs
204 linux-5.15.7/
205 linux-5.15.7.tar.xz
206 init.go
207```
208
209## Running all of it with QEMU
210
211[QEMU](https://www.qemu.org/) is a free and open-source hypervisor. It emulates
212the machine's processor through dynamic binary translation and provides a set
213of different hardware and device models for the machine, enabling it to run a
214variety of guest operating systems.
215
216```sh
217$ qemu-system-x86_64 -serial stdio -kernel bin/bzImage -initrd bin/initramfs -append "console=ttyS0" -m 128
218```
219
220```sh
221$ qemu-system-x86_64 -serial stdio -kernel bin/bzImage -initrd bin/initramfs -append "console=ttyS0" -m 128
222[ 0.000000] Linux version 5.15.7 (m@khan) (gcc (GCC) 11.2.1 20211203 (Red Hat 11.2.1-7), GNU ld version 2.37-10.fc35) #7 SMP Mon Dec 13 10:23:25 CET 2021
223[ 0.000000] Command line: console=ttyS0
224[ 0.000000] x86/fpu: x87 FPU will use FXSAVE
225[ 0.000000] signal: max sigframe size: 1440
226[ 0.000000] BIOS-provided physical RAM map:
227[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
228[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
229[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
230[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000007fdffff] usable
231[ 0.000000] BIOS-e820: [mem 0x0000000007fe0000-0x0000000007ffffff] reserved
232[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
233[ 0.000000] NX (Execute Disable) protection: active
234[ 0.000000] SMBIOS 2.8 present.
235[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-6.fc35 04/01/2014
236[ 0.000000] tsc: Fast TSC calibration failed
237...
238[ 2.016106] ALSA device list:
239[ 2.016329] No soundcards found.
240[ 2.053176] Freeing unused kernel image (initmem) memory: 1368K
241[ 2.056095] Write protecting the kernel read-only data: 20480k
242[ 2.058248] Freeing unused kernel image (text/rodata gap) memory: 2032K
243[ 2.058811] Freeing unused kernel image (rodata/data gap) memory: 500K
244[ 2.059164] Run /init as init process
245Hello from Golang
246[ 2.386879] tsc: Refined TSC clocksource calibration: 3192.032 MHz
247[ 2.387114] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2e02e31fa14, max_idle_ns: 440795264947 ns
248[ 2.387380] clocksource: Switched to clocksource tsc
249[ 2.587895] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
250Hello from Golang
251Hello from Golang
252Hello from Golang
253```
254
255The whole [log file here](/posts/pid1/qemu.log).
256
257## Size comparison
258
259The cool thing about this approach is that the Linux kernel and the application
260together only take around 12 MB, which is impressive as hell. Keep in mind that
261the size of bzImage (the Linux kernel) could be decreased further
262by going into `make menuconfig` and removing a ton of features from the kernel,
263making it even smaller. I managed to get the kernel size down to 2 MB with
264everything still working properly.
265
266```sh
267total 12M
268-rw-r--r--. 1 m m 9.3M Dec 13 10:24 bzImage
269-rw-r--r--. 1 m m 1.9M Dec 27 01:19 initramfs
270```
271
272## Creating ISO image and running it with Gnome Boxes
273
274First we need to create the proper folder structure with `mkdir -p iso/boot/grub`.
275
276Then we need to download the [grub binary](https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito).
277You can read more about this program on https://github.com/littleosbook/littleosbook.
278
279```sh
280$ wget -O iso/boot/grub/stage2_eltorito https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito
281```
282
283```sh
284$ tree iso/boot/
285iso/boot/
286├── bzImage
287├── grub
288│   ├── menu.lst
289│   └── stage2_eltorito
290└── initramfs
291```
292
293Let's copy files into proper folders.
294
295
296```sh
297$ cp stage2_eltorito iso/boot/grub/
298$ cp bin/bzImage iso/boot/
299$ cp bin/initramfs iso/boot/
300```
301
302Let's create a GRUB config file with `nano iso/boot/grub/menu.lst` with the following contents.
303
304```ini
305default=0
306timeout=5
307
308title GoAsPID1
309kernel /boot/bzImage
310initrd /boot/initramfs
311```
312
313Let's create the ISO file using genisoimage:
314
315```sh
316genisoimage -R \
317 -b boot/grub/stage2_eltorito \
318 -no-emul-boot \
319 -boot-load-size 4 \
320 -A os \
321 -input-charset utf8 \
322 -quiet \
323 -boot-info-table \
324 -o GoAsPID1.iso \
325 iso
326```
327
328This will produce `GoAsPID1.iso` which you can use with [Virtualbox](https://www.virtualbox.org/)
329or [Gnome Boxes](https://apps.gnome.org/app/org.gnome.Boxes/).
330
331<video src="/posts/pid1/boxes.mp4" controls></video>
332
333## Is running applications as PID 1 even worth it?
334
335Well, the answer to this is not as simple as one would think. Sometimes it is
336and sometimes it's not. For embedded systems and very specialized applications
337it is certainly worth it. But for normal use, I don't think so. It was an interesting
338exercise in compiling kernels and looking at the guts of the Linux kernel,
339but sticking to containers for most of the things is a better option in my
340opinion.
341
342An interesting experiment would be creating an image that supports networking
343and could be deployed to AWS as an EC2 instance and observing how it fares.
344But in that case, we would need to write some sort of supervisor that would
345run on a separate EC2 instance and check whether the other EC2 instances are running
346properly. Remember that if your application fails, the kernel panics and the
347whole machine becomes inoperable in this case.
diff --git a/content/posts/2021-12-30-wap-mobile-web-before-the-web.md b/content/posts/2021-12-30-wap-mobile-web-before-the-web.md
deleted file mode 100644
index 5e7ff38..0000000
--- a/content/posts/2021-12-30-wap-mobile-web-before-the-web.md
+++ /dev/null
@@ -1,202 +0,0 @@
1---
2title: Wireless Application Protocol and the mobile web before the web
3url: wap-mobile-web-before-the-web.html
4date: 2021-12-30T12:00:00+02:00
5type: post
6draft: false
7---
8
9## A little stroll down the history lane
10
11About two weeks ago, I watched this outstanding documentary on YouTube
12[Springboard: the secret history of the first real
13smartphone](https://www.youtube.com/watch?v=b9_Vh9h3Ohw) about the history of
14smartphones and phones in general. It brought back so many memories. I never had
15an actual smartphone before Android. The closest thing to a smartphone was the [Sony
16Ericsson P1](https://www.gsmarena.com/sony_ericsson_p1-1982.php). A fantastic
17phone. I broke it in Prague after a party, and that was one of those rare
18occasions where I was actually mad at myself. Nevertheless, after that
19phone, the next one was an Android one.
20
21Before that, I only owned normal phones from Nokia, Siemens, etc. Nothing
22special, actually. These are the phones we are talking about, from before 2007.
23Apple and Android phones didn't exist yet.
24
25These phones were rocking:
26
27- No selfie cameras.
28- ~2 inch displays.
29- ~120 MHz beast CPUs.
30- 144p main cameras.
31- But they had a headphone jack.
32
33Let's take a look at these beauties.
34
35![Old phones](/posts/wap/phones.gif)
36
37## WAP - Wireless Application Protocol
38
39Not that one! We are talking about Wireless Application Protocol and not Cardi
40B's song 😃
41
42WAP stands for Wireless Application Protocol. It is a protocol designed for
43micro-browsers, and it enables internet access on mobile devices. It
44uses the markup language WML (Wireless Markup Language, not HTML), which is
45defined as an XML 1.0 application. Furthermore, it enables creating web
46applications for mobile devices. In 1998, the WAP Forum was founded by Ericsson,
47Motorola, Nokia and Unwired Planet, whose aim was to standardize the various
48wireless technologies via protocols.
49[(source)](https://www.geeksforgeeks.org/wireless-application-protocol/)
50
51The WAP protocol resulted from the joint efforts of the various members of the WAP
52Forum. In 2002, the WAP Forum was merged with various other industry forums,
53resulting in the formation of the Open Mobile Alliance (OMA).
54[(source)](https://www.geeksforgeeks.org/wireless-application-protocol/)
55
56These were some wild times. Devices had tiny screens and data transmission rates
57were abominable. But they were capable of rendering WML (Wireless Markup
58Language). This was very similar to HTML, actually. It is a markup language,
59after all.
60
61These pages could be served by [Apache](https://apache.org/) and could be
62generated by CGI scripts on the backend. The only difference was the limited
63markup language.
64
65## WML - Wireless Markup Language
66
67Just like web browsers use HTML for content structure, older mobile device
68browsers use WML - if you need to support really old mobile phones using WML
69browsers, you will need to know about it. WML is XML-based (an XML vocabulary
70just like XHTML and MathML, but not HTML) and does not use the same metaphor as
71HTML. HTML is a single document with some metadata packed away in the head, and
72a body encapsulating the visible page. With WML, the metaphor does not envisage
73a page, but rather a deck of cards. A WML file might have several pages or cards
74contained within it.
75[(source)](https://www.w3.org/wiki/Introduction_to_mobile_web)
76
77```html
78<?xml version="1.0"?>
79<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" "http://www.wapforum.org/DTD/wml_1.1.xml">
80<wml>
81 <card id="home" title="Example Homepage">
82 <p>Welcome to the Example homepage</p>
83 </card>
84</wml>
85```
86
87There is an amazing tutorial on [Tutorialpoint about
88WML](https://www.tutorialspoint.com/wml/index.htm).
89
90## Converting Digg to WML
91
92This task is completely useless and not really feasible nowadays, but I had to
93give it a try for old times' sake. Since the data is already there in the form of an
94RSS feed, I could take this feed, parse it, and create a WML version of the
95homepage.
96
97We will need:
98
99- Python3 + Pip
100- ImageMagick
101- feedparser and mako templating
102
103```sh
104# for fedora 35
105sudo dnf install ImageMagick python3-pip
106
107# templating engine for python
108pip install mako --user
109
110# for parsing rss feeds
111pip install feedparser --user
112```
113
114Project folder structure should look like the following.
115
116```
11712:43:53 m@khan wap → tree -L 1
118.
119├── generate.py
120└── template.wml
121
122```
123
124After that, I created a small template for the homepage.
125
126```html
127<?xml version="1.0"?>
128<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.2//EN" "http://www.wapforum.org/DTD/wml_1.2.xml">
129
130<wml>
131
132 <card title="Digg - What the Internet is talking about right now">
133
134 % for item in entries:
135 <p><img src="/images/${item.id}.jpg" width="175" height="95" alt="${item.title}" /></p>
136 <p><small>${item.kicker}</small></p>
137 <p><big><b>${item.title}</b></big></p>
138 <p>${item.description}</p>
139 % endfor
140
141 </card>
142
143</wml>
144```
145
146And the parser that parses RSS feed looks like this.
147
148```python
149import os
150import feedparser
151from mako.template import Template
152
153os.system('mkdir -p www/images')
154
155template = Template(filename='template.wml')
156
157feed = feedparser.parse('https://digg.com/rss/top.xml')
158
159entries = feed.entries[:15]
160
161for entry in entries:
162 print('Processing image with id {}'.format(entry.id))
163 os.system('wget -q -O www/images/{}.jpg "{}"'.format(entry.id, entry.links[1].href))
164 os.system('convert www/images/{}.jpg -type Grayscale -resize 175x -depth 3 -quality 30 www/images/{}.jpg'.format(entry.id, entry.id))
165
166html = template.render(entries = entries)
167
168with open('www/index.wml', 'w+') as fp:
169 fp.write(html)
170```
171
172This script will create a folder `www` and, inside it, a folder `www/images` for
173storing the resized images.
174
175> Be sure you don't use SSL and use just normal HTTP for serving the content.
176> These old phones will have problems with TLS 1.3 etc.
177
178If you look at the python file, I convert all the images into tiny B&W images.
179They should be WBMP (Wireless BitMaP) but I chose JPEGs for this, and it seems
180to work properly.
181
182Because I currently don't have a phone old enough to test it on, I used an
183emulator. And it was really hard to find one. I found [WAP
184Proof](http://wap-proof.sharewarejunction.com/) on shareware junction, and it
185did the job well enough. I will try to find an actual device to test it on.
186
187<video src="/posts/wap/emulator.mp4" controls></video>
188
189If you are using Nginx to serve the content, add a directive to the server block
190that will automatically serve the `index.wml` file.
191
192```nginx
193server {
194 index index.wml index.html index.htm index.nginx-debian.html;
195}
196```
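It is also worth checking the `Content-Type` header, since WML's registered MIME type is `text/vnd.wap.wml` and old handsets can be picky about it. A quick probe (the hostname is a placeholder):

```sh
# inspect the Content-Type the server returns for the WML page; old WAP
# browsers expect text/vnd.wap.wml here
curl -sI http://your-server.example/index.wml | grep -i '^content-type'
```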
197
198## Conclusion
199
200Well, this was pointless, but very fun! I hope you enjoyed it as much as I did.
201I will try to find an old phone to test it on. If you have any questions, feel
202free to ask in the comments.
diff --git a/content/posts/2022-06-30-trying-out-helix-editor.md b/content/posts/2022-06-30-trying-out-helix-editor.md
deleted file mode 100644
index dc4cfed..0000000
--- a/content/posts/2022-06-30-trying-out-helix-editor.md
+++ /dev/null
@@ -1,53 +0,0 @@
1---
2title: Trying out Helix code editor as my main editor
3url: tying-out-helix-code-editor.html
4date: 2022-06-30T12:00:00+02:00
5type: post
6draft: false
7---
8
9I have been searching for a lightweight code editor for quite some time. One of
10the main reasons was that I wanted something that doesn't burn through CPU and
11RAM usage is not through the roof. I have been mostly using Visual Studio Code.
12It's been an outstanding editor. I have no quarrel with it at all. It's just
13time to spice life up with something new.
14
I have been on this search for a couple of years. I have tried Vim, Neovim,
Emacs, Doom Emacs, Micro and a couple more. Among them, I liked Micro and
Doom Emacs the most. Micro was a little too basic for me, and Doom Emacs was
a bit too hardcore. This does not reflect on any of the editors; it's just my
personal preference.
20
> I tried Helix Editor about a year ago but didn't pay attention to it. I saw
> it was similar to Vi and just said no. I was too quick to dismiss it.
24
One of the things I actually miss is line wrapping for certain files. When
writing Markdown, line wrapping would be very helpful; editing such a document
is frustrating, to say the least. Some Markdown-to-HTML converters don't take
kindly to new lines between sentences. Not paragraphs, sentences. And I use
Markdown to write this blog you are reading.
30
But other than this, I have been extremely satisfied with it. It's been a
pleasant surprise, with zero issues so far.
33
One thing to do before you can use autocompletion and Language Server support
is to install a language server, for example with NPM:
36
37```sh
38npm install -g typescript typescript-language-server
39```
40
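In my case, Helix picked the server up automatically, but it can also be
pointed at one explicitly in `~/.config/helix/languages.toml`. Treat this as
an untested sketch — the exact keys may differ between Helix versions, so
check the official docs:

```toml
[[language]]
name = "typescript"
language-server = { command = "typescript-language-server", args = ["--stdio"] }
```

Running `hx --health` shows which language servers Helix can actually find.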
I am still getting used to the keyboard shortcuts and getting better. What
Helix does really well is pack in sane defaults; even though there is
currently no plugin support, I haven't found any need for plugins. It has
everything you would need, and it goes to great lengths to show you what is
going on, with popups that list the available keyboard shortcuts.
46
And it comes packed with many
[really good themes](https://github.com/helix-editor/helix/wiki/Themes).
49
50![Editor](/posts/helix-editor/editor.png)
51
It's still young but already has a mature feel. It has sane defaults and
mimics Vim (it works a bit differently, but the overall idea is similar).
diff --git a/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md b/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md
deleted file mode 100644
index 136b9f4..0000000
--- a/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md
+++ /dev/null
@@ -1,364 +0,0 @@
1---
title: What would DNA sound like if synthesized to an audio file
3url: what-would-dna-sound-if-synthesized.html
4date: 2022-07-05T12:00:00+02:00
5type: post
6draft: false
7---
8
9## Introduction
10
Lately, I have been thinking a lot about the nature of life, about its
foundational building blocks and things like that. It's remarkable how complex
and, at the same time, simple creation is when you look at it. The miracle of
life keeps us grounded when our imagination goes wild. If DNA is the building
block of life, you could consider it an API that nature provided us to better
understand all of this chaos masquerading as order.
17
I have been reading a lot about superintelligence and our somewhat misguided
path to creating general artificial intelligence. What would the building
blocks of our creation look like? Is compression really the ultimate storage
of information? Will our creations also ponder these questions when creating
new worlds for themselves, or will we just disappear into the vastness of
possibilities? It is a little offensive that we are playing God while being
completely ignorant of our own reality. Who knows! Like many other
breakthroughs, this one will also come at a cost not known to us when it
finally happens.
27
To keep things a bit lighter, I decided to convert some popular DNA sequences
into audio files for us to listen to. I am not the first, nor will I be the
last, to do this. But it is an interesting exercise in better understanding
the relationship between art and science. Maybe listening to DNA instead of
parsing it will lead to better understanding, or at least to enjoying the
creation and cryptic nature of life.
34
35## DNA encoding and primer example
36
I explored DNA about 3 years ago in my post [Encoding binary data into DNA
sequence](/encoding-binary-data-into-dna-sequence.html), where I converted all
sorts of data into DNA sequences.

This will be a similar exercise, but instead of converting to DNA, I will be
generating tones from nucleotides.
44
| Nucleotides      | Note | Frequency |
| ---------------- | ---- | --------- |
| **A** (Adenine)  | A    | 440 Hz    |
| **C** (Cytosine) | C    | 523.25 Hz |
| **G** (Guanine)  | G    | 783.99 Hz |
| **T** (Thymine)  | D    | 587.33 Hz |
51
Since there is no T note in the equal-tempered scale, I chose D to represent T.
53
You can check [Frequencies for equal-tempered scale, A4 = 440
Hz](https://pages.mtu.edu/~suits/notefreqs.html). This tuning also assumes
`Speed of Sound = 345 m/s = 1130 ft/s = 770 miles/hr`.
57
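These frequencies come straight from equal temperament: a note *n* semitones
above A4 has frequency 440 · 2^(n/12). A quick sketch computing the standard
frequencies of A4, C5, G5 and D5:

```python
def note_freq(semitones_from_a4):
    # Equal temperament: each semitone multiplies the frequency by 2 ** (1/12).
    return 440.0 * 2 ** (semitones_from_a4 / 12)

# A4, C5, G5 and D5 expressed as semitone offsets from A4.
for name, offset in [('A4', 0), ('C5', 3), ('G5', 10), ('D5', 5)]:
    print(name, round(note_freq(offset), 2))
# → A4 440.0, C5 523.25, G5 783.99, D5 587.33
```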
Now that we have this out of the way, we can also brush up on DNA sequencing
a bit. Here is a famous quote I also used for the encoding tests:
61
62> How wonderful that we have met with a paradox. Now we have some hope of
63> making progress.
64> ― Niels Bohr
65
```txt
67>SEQ1
68GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA
69GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA
70ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA
71ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT
72GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT
73GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC
74AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC
75AACC
76```
77
This is what we are going to work with when creating the parser and waveform
generator.
80
81## Parsing DNA data
82
83This step is rather simple one. All we need to do is parse input DNA sequence in
84[FASTA format](https://en.wikipedia.org/wiki/FASTA_format) well known in
85[Bioinformatics](https://en.wikipedia.org/wiki/Bioinformatics) to extract single
86Nucleotides that will be converted into separate tones based on equal-tempered
87scale explained above.
88
```python
nucleotide_tone_map = {
    'A': 440.0,    # A4
    'C': 523.25,   # C5
    'G': 783.99,   # G5
    'T': 587.33,   # no T note, so D5 stands in
}

def parse_fasta(text):
    # Drop FASTA header lines (starting with '>') and join the rest.
    return ''.join(line.strip() for line in text.splitlines()
                   if not line.startswith('>'))

def generate_from_dna_sequence(sequence):
    for nucleotide in sequence:
        print(nucleotide, nucleotide_tone_map[nucleotide])
```
104
105## Generating sine wave
106
Because we are essentially creating a long stream of notes, we will append
each note's sine samples to a global list that we will later use to create a
WAV file.
110
```python
import math

sample_rate = 44100  # samples per second (CD quality)
audio = []           # accumulated samples in the -1.0..1.0 range

def append_sinewave(freq=440.0, duration_milliseconds=500, volume=1.0):
    num_samples = int(duration_milliseconds * (sample_rate / 1000.0))
    for x in range(num_samples):
        audio.append(volume * math.sin(2 * math.pi * freq * (x / sample_rate)))
```
124
The sine wave generated here is a plain beep. If you want something more
aggressive, you could try a square or sawtooth waveform.
127
128## Generating a WAV file from accumulated sine waves
129
```python
import wave
import struct

def save_wav(file_name):
    with wave.open(file_name, 'w') as wav_file:
        nchannels = 1  # mono
        sampwidth = 2  # 16-bit samples
        nframes = len(audio)
        comptype = 'NONE'
        compname = 'not compressed'
        wav_file.setparams((nchannels, sampwidth, sample_rate, nframes, comptype, compname))

        # Scale the -1.0..1.0 floats to 16-bit signed integers.
        for sample in audio:
            wav_file.writeframes(struct.pack('h', int(sample * 32767.0)))
```
150
44100 Hz is the industry-standard sample rate (CD quality). If you need to
save on file size, you can adjust it downwards; a common low-quality rate is
8000 Hz (8 kHz).
154
The WAV files here use 16-bit signed short integers for the sample size, so
we multiply the floating-point data, which ranges from -1.0 to 1.0, by 32767,
the maximum value of a signed short.
158
> It is theoretically possible to use the floating-point -1.0 to 1.0 data
> directly in a WAV file, but it is not obvious how to do that using the wave
> module in Python.
162
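Putting the parser, the sine generator, and the WAV writer together, the whole
pipeline fits in one short script. This is a condensed sketch of the steps
above — the helper names and the 100 ms note length are my own choices, not
from the original script:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

# Frequencies from the table above (T is played as D5).
NOTES = {'A': 440.0, 'C': 523.25, 'G': 783.99, 'T': 587.33}

def sequence_to_samples(sequence, note_ms=100, volume=0.8):
    # One sine-wave burst per nucleotide, concatenated into a flat list.
    samples = []
    per_note = int(SAMPLE_RATE * note_ms / 1000)
    for nucleotide in sequence:
        freq = NOTES[nucleotide]
        for x in range(per_note):
            samples.append(volume * math.sin(2 * math.pi * freq * x / SAMPLE_RATE))
    return samples

def save_wav(file_name, samples):
    # Mono, 16-bit: scale the -1.0..1.0 floats to signed shorts.
    with wave.open(file_name, 'w') as wav_file:
        wav_file.setparams((1, 2, SAMPLE_RATE, len(samples), 'NONE', 'not compressed'))
        wav_file.writeframes(b''.join(
            struct.pack('<h', int(s * 32767)) for s in samples))

fasta = ">SEQ1\nGACAGCTTGT\nGTACAAGTGT"
sequence = ''.join(line for line in fasta.splitlines() if not line.startswith('>'))
save_wav('out.wav', sequence_to_samples(sequence))
```

Feeding it the full SEQ1 instead of the short fragment here produces the audio
used in the examples below.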
## Generating Spectrograms
164
I tried two methods of doing this, and both worked just fine. I opted for
[SoX - Sound eXchange, the Swiss Army knife of audio
manipulation](https://linux.die.net/man/1/sox) because it didn't require
anything else.
169
170```shell
171sox output.wav -n spectrogram -o spectrogram.png
172```
173
An example spectrogram of Ludwig van Beethoven's Symphony No. 6, first movement.
175
176<audio controls>
177 <source src="/posts/dna-synthesized/symphony-no6-1st-movement.mp3" type="audio/mpeg">
178</audio>
179
180![Ludwig van Beethoven Symphony No. 6 First movement](/posts/dna-synthesized/symphony-no6-1st-movement.png)
181
The other option is to combine SoX with [gnuplot](http://www.gnuplot.info/).
This requires an intermediary step, however.
185
186```shell
187sox output.wav audio.dat
188tail -n+3 audio.dat > audio_only.dat
189gnuplot audio.gpi
190```
191
And the input file `audio.gpi` that is passed to gnuplot looks something like
this.
194
195```txt
196# set output format and size
197set term png size 1000,280
198
199# set output file
200set output "audio.png"
201
202# set y range
203set yr [-1:1]
204
205# we want just the data
206unset key
207unset tics
208unset border
209set lmargin 0
210set rmargin 0
211set tmargin 0
212set bmargin 0
213
214# draw rectangle to change background color
215set obj 1 rectangle behind from screen 0,0 to screen 1,1
216set obj 1 fillstyle solid 1.0 fillcolor rgbcolor "#ffffff"
217
218# draw data with foreground color
219plot "audio_only.dat" with lines lt rgb 'red'
220```
221
222## Pre-generated sequences
223
What I did was take interesting parts of animal genomes and feed them to the
tone generator script. This generated WAV files, which I converted to MP3 so
they can be played in a browser. The last step was creating a spectrogram from
each WAV file.
228
229### Niels Bohr quote
230
231<audio controls>
232 <source src="/posts/dna-synthesized/quote/out.mp3" type="audio/mpeg">
233</audio>
234
![Spectrogram](/posts/dna-synthesized/quote/spectogram.png)
236
237### Mouse
238
This is part of a mouse genome, `Mus_musculus.GRCm39.dna.nonchromosomal`. You
can get the [genome data
here](http://ftp.ensembl.org/pub/release-106/fasta/mus_musculus/dna/).
242
243<audio controls>
244 <source src="/posts/dna-synthesized/mouse/out.mp3" type="audio/mpeg">
245</audio>
246
![Spectrogram](/posts/dna-synthesized/mouse/spectogram.png)
248
249### Bison
250
This is part of a bison genome, `Bison_bison_bison.Bison_UMD1.0.cdna`. You can
get the [genome data
here](http://ftp.ensembl.org/pub/release-106/fasta/bison_bison_bison/cdna/).
254
255<audio controls>
256 <source src="/posts/dna-synthesized/bison/out.mp3" type="audio/mpeg">
257</audio>
258
![Spectrogram](/posts/dna-synthesized/bison/spectogram.png)
260
261### Taurus
262
This is part of a taurus genome, `Bos_taurus.ARS-UCD1.2.cdna`. You can get the
[genome data
here](http://ftp.ensembl.org/pub/release-106/fasta/bos_taurus/cdna/).
266
267<audio controls>
268 <source src="/posts/dna-synthesized/taurus/out.mp3" type="audio/mpeg">
269</audio>
270
![Spectrogram](/posts/dna-synthesized/taurus/spectogram.png)
272
273## Making a drummer out of a DNA sequence
274
To make things even more interesting, I decided to send this data via MIDI to
my [Elektron Model:Samples](https://www.elektron.se/en/model-samples). This is
a really cool piece of equipment that accepts MIDI in via USB and a 3.5 mm
audio jack.
279
The Elektron is connected to my MacBook via USB, and its audio out is patched
into a Sony Bluetooth speaker that supports 3.5 mm audio in (the Elektron
doesn't have internal speakers).
283
284![](/posts/dna-synthesized/elektron/IMG_0619.jpg)
285
286![](/posts/dna-synthesized/elektron/IMG_0620.jpg)
287
288![](/posts/dna-synthesized/elektron/IMG_0622.jpg)
289
For communicating with the Elektron, I chose the `pygame` Python module, which
has MIDI support built in. With it, sending notes to the device was rather
simple. All I did was map MIDI notes to the actual nucleotides.
293
Before all of this, I also opened the Audio MIDI Setup app on macOS and
checked MIDI Studio by pressing ⌘-2.
296
297![](/posts/dna-synthesized/elektron/midi-studio.jpg)
298
The whole script that parses the sequence and sends notes to the Elektron
looks like this.
300
```python
import pygame.midi
import time

pygame.midi.init()

print(pygame.midi.get_default_output_id())
print(pygame.midi.get_device_info(0))

player = pygame.midi.Output(1)
player.set_instrument(2)

def send_note(note, velocity):
    player.note_on(note, velocity)
    time.sleep(0.3)
    player.note_off(note, velocity)

# MIDI note numbers must be in the 0-127 range;
# these correspond to the notes in the table above.
nucleotide_midi_map = {
    'A': 69,  # A4
    'C': 72,  # C5
    'G': 79,  # G5
    'T': 74,  # D5 stands in for T
}

with open("quote.fa") as f:
    # Skip FASTA header lines and join the sequence.
    sequence = ''.join(l.strip() for l in f if not l.startswith('>'))

for nucleotide in sequence:
    print("Playing nucleotide {} with MIDI note {}".format(
        nucleotide, nucleotide_midi_map[nucleotide]))
    send_note(nucleotide_midi_map[nucleotide], 127)

del player
pygame.midi.quit()
```
338
339<video src="/posts/dna-synthesized/elektron/elektron.mp4" controls></video>
340
All of this could be made much more interesting by choosing different
instruments for different nucleotides, or doing funkier stuff with the
Elektron. But for now, this should be enough. It is just a proof of concept,
something to play around with.
345
346## Going even further
347
As you have probably noticed, the end results are quite similar to each other.
This is to be expected, because we are essentially operating with only 4
notes. What could make this more interesting is using something like
[Supercollider](https://supercollider.github.io/) to create richer sounds, by
transposing notes or applying effects based on repeated data in a sequence.
The possibilities are endless.
354
It is really astonishing what can be achieved with a little bit of code and an
idea. I could see this becoming an interesting background soundscape
instrument if done properly. It could replace a random note generator with
something more intriguing: biological, natural.
359
I actually find the results fascinating. I took some time and listened to this
music of nature. Even though it's all quite the same, it's also quite
different. The subtle differences on repeat create a kind of music of their
own. It makes you wonder. It kind of puts Occam's Razor in its place: nature
certainly loves to make things as energy efficient as possible.
diff --git a/content/posts/2022-08-13-algae-spotted-on-river-sava.md b/content/posts/2022-08-13-algae-spotted-on-river-sava.md
deleted file mode 100644
index 34d891e..0000000
--- a/content/posts/2022-08-13-algae-spotted-on-river-sava.md
+++ /dev/null
@@ -1,31 +0,0 @@
1---
2title: Aerial photography of algae spotted on river Sava
3url: aerial-photography-of-algae-spotted-on-river-sava.html
4date: 2022-08-13T12:00:00+02:00
5type: post
6draft: false
7---
8
This is a bit of a different post than I usually write, but quite an
interesting one to me. The river Sava has plenty of hydropower plants located
downstream, which makes regulating the strength of the current easier than
usual. Because of the weaker current and high temperatures, algae have formed
on the river. This is the first time I've seen something like this in my whole
life.
14
15Below are some photographs taken from a DJI drone capturing the event.
16
17![Algae on Sava](/posts/algae-sava/dji-algae-0.jpg)
18
19![Algae on Sava](/posts/algae-sava/dji-algae-1.jpg)
20
21![Algae on Sava](/posts/algae-sava/dji-algae-2.jpg)
22
23![Algae on Sava](/posts/algae-sava/dji-algae-3.jpg)
24
25![Algae on Sava](/posts/algae-sava/dji-algae-4.jpg)
26
27![Algae on Sava](/posts/algae-sava/dji-algae-5.jpg)
28
I will try to get more photos of this in the coming days, and if something
intriguing shows up, I will post it on the blog again.
31
diff --git a/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md b/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md
deleted file mode 100644
index ab07a2d..0000000
--- a/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md
+++ /dev/null
@@ -1,296 +0,0 @@
1---
2title: State of Web Technologies and Web development in year 2022
3url: state-of-web-technologies-and-web-development-in-year-2022.html
4date: 2022-10-06T12:00:00+02:00
5type: post
6draft: false
7---
8
9## Initial thoughts
10
*This post is a critique of the current state of web development. It is an
opinionated post! I will learn more about this in the future, and will
probably change my mind slightly about some of the things I criticize.*
14
I started working on a hobby project about two weeks ago, and I wanted to use
it as a learning opportunity: trying new things, new technologies, new tools.
I have always considered myself an adventurous person when it comes to
technology. I never shy away from trying new languages, new operating systems,
etc. I find the whole experience satisfying; it tickles that part of my brain
that sees discovery as the highest mountain to climb.
21
What I always wanted to make was a coding game that you would play in a
browser (just to eliminate building binaries for each operating system), where
you would level up your character and go into scriptable battles. You know,
RPG elements.
26
27So, the natural way to go would be some sort of SPA (single page application)
28with basic routing and some state management. Nothing crazy.
29
30> **Before we move on**, I have to be transparent. Take my views on this with
31> a grain of salt. I have only scratched the surface with these technologies,
32> and my knowledge is full of gaps. This is my experience using some of these
33> products for the first time or in a limited capacity.
34
35Having this out of the way, I got myself a fresh pot of coffee and down the
36rabbit hole I went.
37
38## Giving React JS a spin
39
I first tried [React JS](https://reactjs.org/), and I kind of like it. I have
worked with libraries like this in the past and have even written a couple of
them (nothing at that level), so I had a basic understanding of what was going
on. I rolled up a project quickly and had the basics done in a matter of two
hours, which was impressive.
45
46I prefer using [Tailwind CSS](https://tailwindcss.com/) for my styling
47pleasures, and integrating that was also a painless experience. It was actually
48nice to see that some things got better with time. In about 2 minutes I got
49Tailwind working, and I was able to use classes at my disposal. All that
50`postcss` stuff was taken care of by adding a couple of things in config files
51(all described really well in their documentation).
52
It is not that different from Vue, which I have had more encounters with in
the past. People will probably call me a lunatic for saying this, but you
know, it is the truth. Same same, but different. I still believe that using
libraries like this is beneficial. I am not a JavaScript purist. They all have
their quirks, but at the end of the day, I truly believe it’s worth it.
58
59## Bundlers and Transpilers
60
61I still reject calling [Typescript](https://www.typescriptlang.org/) to
62[JavaScript](https://www.javascript.com/) conversion a "compilation process". I
63call them [transpilers](https://devopedia.org/transpiler), and I don’t care! 😈
64
The first bundler I ever used was [webpack](https://webpack.js.org/), and it
was an absolutely horrific experience. That said, it is a fantastic tool; I
just felt more like a config editor than a programmer. To be fair, I am a
huge fan of [make](https://www.gnu.org/software/make/), and you can do as you
wish with this information. I like my build systems simple.
70
Also, isn’t it interesting that we need something like
[Babel](https://babeljs.io/) to make JavaScript code work in a browser whose
only client-side scripting language is, by no accident, also JavaScript? Why?
I know why it’s needed, but seriously, why.
75
76I haven’t used Babel for years now. Or if I did, it was packaged together by
77some other bundler thingy. Which does not make things better, but at least I
78didn’t need to worry about it.
79
80I really don’t like complicated build systems. I really don’t like abstracting
81code and making things appear magical. The older I get, the more I appreciate
82clear and clean, expressive code. No one-liners, if possible.
83
84But I have to give props to [Vite](https://vitejs.dev/)! This was one of the
85best developer experiences I have ever had. Granted, it still has magical
86properties. And yes, it still is a bundler and abstracts things to the nth
87degree. But at least it didn’t force me to configure 700 lines of JSON. And I
88know that this makes me a hypocrite. You can’t have it all. Nonetheless, my
89reasoning here is, if using bundlers is inevitable, then at least they should
90provide an excellent developer experience.
91
I also noticed that the catch-all phrases now are “blazingly fast”,
“lightning fast”, “next generation” and the like. I mean, yes, tools should
get faster with time. But claiming that starting a project now takes 2 seconds
instead of 20, and that this is somehow a make-or-break deal, is ridiculous. I
don’t mind waiting a couple of seconds every couple of days. I also don’t
create 700 projects every day; who does? This argument has no bite. All I want
is a decent reload time (~100 ms is more than good enough for me), and that is
it.
100
101You don’t need to sell me benefits if I only get them when I start a fresh
102project, and then try to convince me that this is somehow changing the fate of
103the universe. First of all, it is not. And second, if this is your only argument
104for your tool, I would advise you to maybe re-focus your efforts to something
105else. Vite says that startup times are really fast. And if that would be the
106only thing differentiating it from other tools, I would ignore it. But it has
107some really compelling features like [Hot Module
108Replacement](https://www.geeksforgeeks.org/reactjs-hot-module-replacement/) that
109really works well. It was a joy to use.
110
111So, I will be definitely using Vite in the future.
112
## Jam Stack, Mach Stack, no snack
114
115Let's get a couple of the acronyms out of the way, so we all know what we are
116talking about:
117
- Jam Stack - JavaScript, APIs, and Markup
- Mach Stack - Microservices, API-first, Cloud-native SaaS, Headless
120
It is so hard to follow all the new trendy things happening around you that it
gives you massive **FOMO** all the time. On the other hand, you also don’t
want to be that old fart who doesn’t move with the times and still writes his
trusty jQuery code while listening to Blink-182’s “All the Small Things” on
full blast. It’s a good song, don’t get me wrong, but there are other songs
out there.
127
I have to admit, [Vercel](https://vercel.com/) is really cool! I love the
simplicity of the service. You could compare it to
[Netlify](https://www.netlify.com/); I haven’t tried Netlify extensively, but
from a couple of experimental deployments, I still prefer Vercel. It feels
much more streamlined, though maybe that is my bias. I really like Vercel’s
Analytics, which gives you a [Core Web Vitals report](https://web.dev/vitals/)
in the admin console. Kind of cool, I’m not going to lie.
135
136This whole idea about frontend and backend merging into [SSR (server-side
137rendering)](https://www.debugbear.com/blog/server-side-rendering) looks so good
138on paper. It almost doesn’t come with any major flaws.
139
140But when it comes to the actual implementation, there is much to be desired.
141I’m going to lump [Next.js](https://nextjs.org/) and
142[Nuxt.js](https://nuxtjs.org/) together because they are essentially the same
143thing, just a different library.
144
Now comes the reality. Mixing backend and frontend in this manner creates a
weird mental model where you rely on the magical properties of these
libraries. You relinquish control to them for a better developer experience.
But is that really true? Initially, I was stoked about it. However, the more I
used them, the more uncomfortable I felt. I felt dirty, actually. Maybe this
is because I come from the old ways of doing things, where you control every
step of a request, and allowing something to hijack it feels like blasphemy.
152
More than that, some pretty significant technical issues arose from this. How
do you do JWT authentication? You put it in the `api` folder and then do some
fetching and storing in local state management. But doing this also requires
some tinkering with async/await on the React/Vue side of things. And then you
need to write middleware for it. The more I look at it, the more I see that
this whole thing was not meant to be used like this; it all feels and looks
like a huge hack.
160
161The issue I have with this is that they over-promise and under-deliver. They
162want to be an all-in-one replacement for everything, and they don’t deliver on
163this promise. And how could they?! We have to be fair. It is an impossible task.
164
165They sell you [NoOps](https://www.geeksforgeeks.org/overview-of-noops/), but
166when you need to accomplish something a little bit more out of the scope of
167Hello World, you have to make hacky decisions to make it work. And having a
168deployment strategy that relies on many moving parts is never a good idea.
169Abstracting too much is usually a sign of bad architecture.
170
Lately, this has become a huge trend that will surely bite us in the future.
And let’s not get it twisted: by doing this, cloud providers like
[AWS](https://aws.amazon.com/), [GCP](https://cloud.google.com/), etc. obscure
their billing, and you end up paying more than you really should. Even if that
weren’t an issue, it comes down to the principle of things. AWS is known for
having multiple “currencies“ inside its projects, like write operations, read
operations, etc., which add up and create an impossible-to-track billing
scheme. It all behaves suspiciously like a pay-to-win mobile game that scams
you out of your money.
180
And as far as I am concerned, the most important thing was that I was not
coding the functionality of the game I want to make. I was battling libraries
and cloud providers: how to deploy, which settings are relevant, bad
documentation, multiple ways of achieving the same thing. You are bombarded by
all this information, and you don’t really have any control over it.
Production-ready code essentially becomes a joke, especially if you tend to
work on a project for a prolonged period of time.
188
All of these options end up creating fatigue. What to choose, what not to
choose. Unnecessary worrying about whether the stack will still be deemed
worthy in six months. There is elegance in simplicity.
192
193> JavaScript UI frameworks and libraries work in cycles. Every six months or
194> so, a new one pops up, claiming that it has revolutionized UI development.
195> Thousands of developers adopt it into their new projects, blog posts are
196> written, Stack Overflow questions are asked and answered, and then a newer
197> (and even more revolutionary) framework pops up to usurp the throne.
198> — Ian Allen
199
And this jab at these libraries and cloud providers is not made out of malice;
it is a real concern I have about them. In my life, I have seen technologies
come and go, but the basics always stick around. So surrendering all the power
you have to a library or a cloud provider is, in my opinion, a stupid move.
205
206## Tailwind CSS still rocks!
207
You know, many people say negative things about Tailwind. After a lot of
deliberation, I came to the conclusion that Tailwind is good for two types of
developers: complete noobs and senior developers. A complete noob doesn’t
really care about the inner workings of CSS, and a senior developer also
doesn’t care about CSS — well, at least, not anymore. The developers in
between usually have the biggest issues with it. Not always, of course, but in
a lot of cases.
215
216I like the creature comforts of Tailwind. Being utility first would make me
217argue that it is actually more similar to [Sass](https://sass-lang.com/) or
218[Less](https://lesscss.org/) than something like Bootstrap. Not technically, but
219ideologically. After I started using it, I never looked back. I use it every
220time I need to do something web related.
221
Writing CSS for general things feels like going several steps back. Instead of
focusing on what you are actually trying to achieve, you focus on notations
like [BEM](https://en.bem.info/methodology/css/), code structuring, and
optimizing HTML size; doing things that make a 0.1% difference. You know the
saying: premature optimization is the root of all evil. Exactly that.
227
228I am also not saying that Tailwind is the cure for everything. Sometimes custom
229CSS is necessary. But from what I found out in using it for almost two years in
230a production environment (on a site getting quite a lot of traffic and
231constantly being changed), I can say without any reservations that Tailwind
232saved our asses countless times. We would be rewriting CSS all the time without
233it. And I don’t really think writing CSS is the best way to spend my time.
234
I have also noticed that the people who criticize Tailwind the most have never
actually used it in a real project with a long lifetime and plenty of changes
ahead of it.
238
239But you know, whatever floats your boat!
240
241## Code maintainability
242
Somehow, people have also stopped talking about maintenance. If you constantly
try to catch the latest-and-greatest train, you are by definition always
trying new things. That is a good thing if you want to learn about
technologies and try them. But a production environment needs a stable stack
that doesn’t change every 6 months.
248
You can lock dependencies, for sure. Nevertheless, the hype train moves along
anyway, and the mindset this breeds goes against locking the code. This
bleeding-edge, rolling-release cycle is not helping. That is why enterprise
solutions usually look down on these popular stacks and only do the bare
minimum to appear hip and cool.
254
With that said, I still think progress is good, but it should be taken with a
grain of salt. If your project is something that will be built once and then
rarely updated, going with the latest stack is a possible way to go. But if
you are working on a project that will last for years, you should probably
approach it with some caution. Web development is oftentimes too volatile.
260
261## Web development has a marketing issue
262
I noticed that almost every project now has this marketing spin put on it.
Everything is blazingly fast now. I get it, they are competing for your
attention, but what happened to just being truthful and not inflating reality?
266
And in order to appeal to the mass market, they leave things out of their
marketing materials. These open-source projects are now behaving more and more
like companies do. Which is a scary thought in itself.
270
We are also seeing a rise in the concept of building a company in the open,
which is a good thing, don't get me wrong. But when open-source is used to lure
people in and then lock them into an ecosystem, that is where I have issues
with it.
275
This might be because I have been using GNU/Linux for 20 years now and owe so
much of my success to open-source that I see issues when open-source is used to
trick people into a false sense of security, as if these projects were built in
the spirit of open-source. Because there is a difference. They are NOT! They
have a really specific goal in mind, and open-source is being used as a
delivery system. Which is, in my opinion, disgusting!
282
283## Conclusion
284
I will end my post with this. Web development is now running in circles. People
are rediscovering [RPC](https://www.tutorialspoint.com/remote-procedure-call-rpc)
and it is now the next big thing. [GraphQL](https://graphql.org/) is so passé.
And I am so tired of it all. Of blazingly fast libraries, of all these new
technologies that are actually just remakes of old ones. Of just the general
spirit of the web. I will just use what I already know. It worked 10 years ago
and will work 10 years from now. I will adopt a couple of little tools like
Vite. But I will not waste my time on this anymore.
293
It was a good exercise to get in touch with what's new now. Nothing has really
changed that much. FOMO is now cured! Now I have to get my ass back to actually
coding and make the project that I wanted to make in the first place.
diff --git a/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md b/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md
deleted file mode 100644
index 7eb4029..0000000
--- a/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md
+++ /dev/null
@@ -1,66 +0,0 @@
1---
2title: Microsoundtrack — That sound that machine makes when struggling
3url: that-sound-that-machine-makes-when-struggling.html
4date: 2022-10-16T12:00:00+02:00
5type: post
6draft: false
7---
8
A couple of months ago, I got an idea about micro soundtracks. In this concept,
you are the observer, director, and audience in these tiny movies.
11
What you do is attempt to imagine what would be happening around you based on
the title of the song, and let the song help you fill the voids in your story.
14
I made these songs in Logic Pro X. Every year or so I do this kind of thing and
make a couple of songs similar to these. But this is the first time I am
posting about it.
18
You can listen to the whole set on
[YouTube](https://www.youtube.com/watch?v=_5oXBhSmF3c), or scroll down the page
to the embedded players for each song.
22
23## A bunch of inter-dimensional people with loud clocks
24
25A group of inter-dimensional people are going up and down the elevator with you
26while having loud clocks around their necks. Each clock ticks on a different
27frequency. A lot of other sounds are getting drawn into your dimension,
28resulting in a strange merging of dimensions.
29
30<iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1349272965/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe>
31
32## Two black holes conversing about the weather
33
34You are a traveler in a spaceship flying very close to two colliding black holes
35having a discussion about the weather while tearing each other apart. During all
36this your ship is getting pulled into the event horizon of both black holes,
37putting a lot of strain on your spaceship.
38
39<iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1756714200/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe>
40
41## A planet where every organism is a plant
42
43You land on a planet where every living organism is a plant and among those
44plants some of them are highly intelligent, and you were asked to make first
45contact with the native species. Your visit takes place in a giant cave where
46you are meeting these plants, and they are talking to you.
47
48<iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=3710973979/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe>
49
50## Bio implants having a fit and reprogramming your brain
51
52In a distant future where everybody has bio implants, you have just received
53your first one, which happens to be a brain implant. Something goes wrong, and
54your implant is starting to misbehave, and you are experiencing brain
55malfunctions. You are on the streets at night a couple of hours after your
56procedure. You can feel your sanity breaking down.
57
58<iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1157430581/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe>
59
60## Cow animation
61
I also made this little cow animation. Go into full screen to see the effects
in more detail.
64
65<video src="/posts/microsoundtrack/cow.m4v" controls loop></video>
66
diff --git a/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md b/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md
deleted file mode 100644
index 27e227a..0000000
--- a/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md
+++ /dev/null
@@ -1,253 +0,0 @@
1---
title: Trying to build a new kind of terminal emulator for the modern age
3url: trying-to-build-a-new-kind-of-terminal-emulator.html
4date: 2023-01-26T12:00:00+02:00
5type: post
6draft: false
7---
8
Over the past few weeks, I have been really thinking about terminal emulators,
how we interact with computers, and the separation between text-based programs
and GUI ones. To be perfectly honest, I got pissed off one evening when I was
cleaning up files on my computer. Normally, I go into the console, run `ncdu`,
and check where the junk is. Then I start deleting stuff. Without any
discrimination, usually. But when it comes to screenshots, I have learned that
it's good to keep them somewhere near in case I need to refer to something I
was doing. I am an avid screenshot taker. So at that point I checked the
Pictures folder and also did a basic search with `find . -type f -name "*.jpg"`
for all the JPEG files in my home directory, and immediately got pissed off.
Why can't I see thumbnails in my terminal? I know why, but why, in the year
2022, is this still a problem? I am used to traversing my disk via the
terminal. I am faster, and I am more comfortable this way. But when it comes to
visualization, I need to revert to GUI applications and find the same file
again just to see it. I know that programs like `feh` and `sxiv` are available,
but I would just like to see the preview inline, like a [Jupyter
notebook](https://jupyter.org/) does. As part of the result.
26
It also didn't help that I was spending some time with the [Plan
9](https://plan9.io/plan9/) operating system, more specifically
[9FRONT](http://9front.org/). The way the [ACME editor](http://acme.cat-v.org/)
handles text editing is just wonderful. Different and fresh somehow, even
though it's super old.
32
So, I went on the lookout for interesting ways of visualizing the results of a
query. I found these applications to be outstanding examples of how not to be
held captive by a predetermined way of doing things.
36
37- [Wolfram Mathematica](https://www.wolfram.com/mathematica/)
38- [Jupyter notebooks](https://jupyter.org/)
39- [Plan 9 / 9FRONT](http://www.9front.org)
40- [Temple OS](https://templeos.org/)
41- [Emacs](https://www.gnu.org/software/emacs/)
42
My idea is not as out there as ACME is, but it is a spin on terminal emulators.
I like the modes that Vi/Vim provides. I like the way Emacs does its `M-x` and
`M-c`. Furthermore, I really like how Mathematica and Jupyter present data in a
free-flowing form. And I love how Temple OS is basically a C interpreter on
some level.
48
> **Note:** This is part 1 of the journey. Nowhere near finished yet. I am just
> tinkering with this at the moment. This whole thing can easily fail
> spectacularly.
52
So I started. I knew that I wanted to have a couple of modes, but I didn't like
the repetition of keystrokes, so the only option was to have some sort of
toggle and indicate to the user that they are in a special mode, like Vi does
for Normal and Visual mode.
57
58These modes would for the first version be:
59
60- *Preview mode* (toggle with Ctrl + P)
61 - When this mode would be enabled, the `ls` command would try to find images
62 from the results and display thumbnails from them in the terminal itself.
63 No ASCII art. Proper images. In a grid!
64- *Detach mode* (toggle with Ctrl + D)
65 - When this mode would be enabled, every command would open a new window
66 and execute that command in it. This would be useful for starting `htop`
67 in a separate window.
68
The reason for making these modes toggleable is to not ask for previews every
time. You enable a mode and, until you disable it, it behaves that way. Purely
for ergonomic reasons.
72
Mentally, I would like to treat every terminal I open as a session. When I
start using the terminal, I start digging deeper into the issue I am trying to
resolve, and while I am doing this, I would like to open detached windows, etc.
A lot of these things can be done easily with something like
[i3](https://i3wm.org/), but those also pull you out of the context of what you
were doing. I would like to orchestrate everything from one single point.
79
80In planning for this project, I knew that I would need to use a language like C
81and a library such as [SDL2](https://www.libsdl.org/) in order to achieve the
82desired results. I had considered other options, but ultimately determined that
83[SDL2](https://www.libsdl.org/) was the best fit based on its capabilities and
84reputation in the programming community.
85
86At first, I thought the idea of a hardware accelerated terminal was a bit of a
87joke. It seemed like such a niche and unnecessary feature, especially given the
88fact that terminal emulators have been around for decades and have always relied
89on software rendering. But to be fair, [Alacritty](https://alacritty.org/) is
90doing the same thing. Well, they are doing a remarkable job at it.
91
So, I embarked on a journey. Everything has to start somewhere. For me, it
started with creating a window! 🙂
94
95```c
96// Oh, Hi Mark!
97// Create the window, obviously.
98SDL_Window *window = SDL_CreateWindow(
99 WINDOW_TITLE, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
100 WINDOW_WIDTH, WINDOW_HEIGHT,
101 SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
102```
103
104I continued like this to get some text displayed on the screen.
105
I noted that
[`TTF_RenderText_Solid`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_Solid)
rendered text really poorly. There was no antialiasing at all. In my wisdom, I
never checked the documentation. Well, that was a fail. For the uneducated like
me: `TTF_RenderText_Solid` renders Latin1 text at fast quality to a new 8-bit
surface. So that's why the text looked like shit. No wonder.
112
> **Remarks on `TTF_RenderText_Solid`:** This function will allocate a new
> 8-bit, palettized surface. The surface's 0 pixel will be the colorkey, giving
> a transparent background. The 1 pixel will be set to the text color.
116
117After I replaced it with
118[`TTF_RenderText_LCD`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_LCD) which
119renders Latin1 text at LCD subpixel quality to a new ARGB surface, the text
120started looking good. Really make sure you read the documentation. It’s actually
121good. As a side note, you can find all the documentation regarding [SDL2 on
122their Wiki](https://wiki.libsdl.org/).
123
After that was done, I started working on displaying other things, like the
`Preview` and `Detach` modes. This wasn't really that hard. In SDL2 you can
check all the available events with `while (SDL_PollEvent(&event) > 0)` and
have a bunch of switch statements to determine which key is currently being
pressed. More about keys at
[SDLKey](https://documentation.help/SDL/sdlkey.html) and more about polling the
events at [SDL_PollEvent](https://documentation.help/SDL/sdlpollevent.html).
131
132```c
133while (SDL_PollEvent(&event) > 0)
134{
135 switch (event.type)
136 {
137 case SDL_QUIT:
138 running = false;
139 break;
140
141 case SDL_TEXTINPUT:
142 if (!meta_key_pressed)
143 {
                // Append the whole UTF-8 chunk, not just one byte, and never
                // overflow the buffer (input_prompt_text is a fixed-size array).
                strncat(input_prompt_text, event.text.text,
                        sizeof(input_prompt_text) - strlen(input_prompt_text) - 1);
145 update_input_prompt = true;
146 }
147 break;
148 }
149}
150```
151
After that was somewhat working correctly, I started creating a struct that
holds a command and its result; I call these Cells. Yes, I stole that naming
idea from Jupyter.
155
156```c
157typedef struct
158{
159 char *command;
160 char *result;
161 SDL_Surface *surface;
162 SDL_Texture *texture;
163 SDL_Rect rect;
164} Cell;
165```
166
167I am at a place now where I am starting to implement scrolling. This will for
168sure be fun to code. Memory management in C is super easy. 😂
169
170I have also added a simple [INI file like
171configuration](https://en.wikipedia.org/wiki/INI_file) support. It is done in an
172[STB style of
173header](https://github.com/nothings/stb/blob/master/docs/stb_howto.txt) and maps
174to specific options supported by the terminal. It is not universal, and the code
175below demonstrates how I will use it in the future.
176
177```c
178#ifndef CONFIG_H
179#define CONFIG_H
180
181/*
182# This is a comment
183
184# This is the first configuration option
185dettach=value11111
186
187# This is the second configuration option
188preview=value22222
189
190# This is the third configuration option
191debug=value33333
192*/
193
194// Define a struct to hold the configuration options
195typedef struct
196{
197 char dettach[256];
198 char preview[256];
199 char debug[256];
200} Config;
201
202// Read the configuration file and return the options as a struct
203extern Config read_config_file(const char *filename)
204{
205 // Create a struct to hold the configuration options
206 Config config = {0};
207
    // Open the configuration file.
    FILE *file = fopen(filename, "r");
    if (!file)
        return config; // No config file; return zeroed defaults.

211 // Read each line from the file
212 char line[256];
213 while (fgets(line, sizeof(line), file))
214 {
215 // Check if this line is a comment or empty
216 if (line[0] == '#' || line[0] == '\n')
217 continue;
218
        // Parse the line to get the option and value (bounded field widths
        // so a long line cannot overflow the buffers).
        char option[128], value[128];
        if (sscanf(line, "%127[^=]=%127s", option, value) != 2)
222 continue;
223
        // Set the value of the appropriate option in the config struct.
        // Copy at most size-1 bytes; the struct is zero-initialized, so the
        // strings stay null-terminated.
        if (strcmp(option, "dettach") == 0)
        {
            strncpy(config.dettach, value, sizeof(config.dettach) - 1);
        }
        else if (strcmp(option, "preview") == 0)
        {
            strncpy(config.preview, value, sizeof(config.preview) - 1);
        }
        else if (strcmp(option, "debug") == 0)
        {
            strncpy(config.debug, value, sizeof(config.debug) - 1);
        }
237 }
238
239 // Close the configuration file
240 fclose(file);
241
242 // Return the configuration options
243 return config;
244}
245
246#endif
247```
248
This is as far as I have managed to get for now. I have a day job, and that
prevents me from working on these things full time. But I should probably get
back to it and finish this. At least get a simple version working so I can
start testing it on my machines. Fingers crossed. 🕵️‍♂️
253
diff --git a/content/posts/2023-05-16-rekindling-my-love-for-programming.md b/content/posts/2023-05-16-rekindling-my-love-for-programming.md
deleted file mode 100644
index 3c2267b..0000000
--- a/content/posts/2023-05-16-rekindling-my-love-for-programming.md
+++ /dev/null
@@ -1,74 +0,0 @@
1---
2title: Rekindling my love for programming and enjoying the act of creating
3url: rekindling-my-love-for-programming.html
4date: 2023-05-16T12:00:00+02:00
5type: post
6draft: false
7---
8
Programming can be a challenging and rewarding experience, but sometimes it's
easy to feel burnt out or disinterested. I had lost my passion for coding over
the past couple of months, and it looked like I would never enjoy coding as
much as I once did.
13
14I was feeling burnt out with programming. I thought taking a break from it and
15focusing on other activities that I enjoy might be helpful. This way, I could
16come back to programming with a fresh perspective and renewed energy. I also
17thought about learning a new programming language or technology to keep things
18interesting and challenging.
19
20However, what I didn't realize was that learning a new language or technology
21wasn't going to solve the underlying issue. I needed to take a step back and
22re-evaluate why I had lost my passion for programming in the first place. This
23involved taking a deep look into what I was doing that resulted in this rut.
24
25Sometimes, it's easy to get caught up in the hype of new technologies or
26languages, and we can feel like we're missing out if we're not constantly
27learning and experimenting. However, it's important to remember that the latest
28and greatest isn't always the best fit for our projects or our
29interests. Instead of constantly chasing the next big thing, it can be helpful
30to focus on what truly interests us and what we're passionate about. This can
31help us stay motivated and engaged with our work, rather than feeling like we're
32just going through the motions.
33
34I expressed that I had lost my passion for coding over the past couple of
35months, and I realized that the reason behind it was my tendency to spread
36myself too thin and not focus on completing interesting projects. In order to
37regain my passion for coding, I need to focus on projects that truly interest me
38and give me a sense of purpose and motivation.
39
40Recently, I have been playing World of Warcraft more frequently and have become
41interested in developing addons for the game.
42
This quickly resulted in me creating three quality-of-life addons, and I
subsequently developed a more useful addon that encapsulates all the others I
made.
46
47I found it interesting that this action sparked a new interest in me.
48Additionally, I discovered the Lua language, which reminded me that coding
49should be fun rather than just a struggle with a language. It should be pure,
50unadulterated fun.
51
52I wasn't fighting the syntax, nor was I focused on finding the most optimal
53solution. I simply created things without the pressure of making them the best
54they could possibly be.
55
56This made me realize that I actually adore simple languages that get out of the
57way and let you express what you want to do. It forced me to rethink a lot about
58what I use and what I actually enjoy.
59
I have decided to stick to the basics. For a scripting language, I will use
Lua. For networking, I will use Golang. And for any special needs, I will rely
on C. I do not require Rust, Nim, or Zig. This selection is more than
sufficient for my needs. I have to stay true to this simplicity. There is
something to Occam's razor.
65
66I've been struggling with a lack of creativity lately, but now I'm experiencing
67a real change. I realized I needed to take a step back and stop actively trying
68to address the issue. I needed to stop worrying and overthinking it. I simply
69needed some time. Looking back, I don't think I've taken any significant time
70off in the last 10 years.
71
72Suddenly, I find myself with the energy and passion to complete multiple small
73projects. It doesn't feel like a chore at all. Who knew I needed WoW to
74kickstart everything. Inspiration really does come from the strangest places.
diff --git a/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md b/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md
deleted file mode 100644
index e82f50b..0000000
--- a/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md
+++ /dev/null
@@ -1,71 +0,0 @@
1---
2title: I think I was completely wrong about Git workflows
3url: i-was-wrong-about-git-workflows.html
4date: 2023-05-23T12:00:00+02:00
5type: post
6draft: false
7tags: []
8---
9
I have been using some approximation of [Git
Flow](https://jeffkreeftmeijer.com/git-flow/) for years now and, to be honest,
never really questioned it. When I create a repo, I create a develop branch,
set it as the default, and then merge to master from there. Seems reasonable
enough.
14
One thing that I have learned is that long-living branches are the devil. They
always end up making a huge mess when they eventually need to be merged into
master. By that reasoning, what is the develop branch if not the longest-living
feature branch? And in my personal experience, there was never a situation
where I wasn't sweating bullets when I had to merge develop back into master.
20
This realisation started to give me pause. So why the hell am I doing this, and
is there a better way? Well, the solution was always there, and it comes in the
form of [git tags](https://git-scm.com/book/en/v2/Git-Basics-Tagging).
24
25So what are git tags? Git tags are references to specific points in a Git
26repository's history. They are used to mark important milestones, such as
27releases or significant commits, making it easier to identify and access
28specific versions of a project.
29
Somehow we have all hijacked the meaning of the master branch so that it has to
be the most releasable version of the code. This is also where the confusion
about versioning the software kicks in, because the master branch implicitly
says that we are dealing with a rolling-release type of software. And by having
a develop branch, we are hacking around this confusion. With the separation of
develop and master, we lock functionality into place, forcing a stable versus a
development version of the software.
37
But if that is true, and long-living branches are the devil, then why have
develop at all? I think most of this comes down to how continuous integration
is being done. There is usually no granular access to tags, and CD software
deploys whatever is present on a specific branch, be that master for production
or develop for staging. This is a gross simplification, but by having this in
place we have completely removed tagging as a viable way to create a fixed
point in the software cycle that says: this is the production-ready code.
45
One cool thing about tags is that you can check out a specific tag, so they
behave very similarly to branches in that regard. And you don't have the
overhead of maintaining two mainline branches.
49
So what is the solution? One approach is to use a workflow where all changes
are made on small branches and continuously merged into master. When the
software is ready to be pushed to production, you tag the master branch. This
approach eliminates the need for long-lived branches and simplifies the
development process. It also encourages developers to make small, incremental
changes that can be tested and deployed quickly. However, this approach may not
be suitable for all projects, or for teams that rely heavily on automated
deployment based on branch names only.
58
59This also requires that developers always keep production in mind. No more
60living on an island of the develop branch. All your actions and code need to be
61ready to meet production standards on a much smaller timescale.
62
63I think that we have complicated the workflow in an honest attempt to make
64things more streamlined but in the process of doing this, we have inadvertently
65made our lives much more complicated.
66
67In conclusion, it's important to re-evaluate our workflows from time to time to
68see if they still make sense and if there are better alternatives available.
69Long-living branches can be problematic, and using tags to mark important
70milestones can simplify the development process.
71
diff --git a/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md b/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md
deleted file mode 100644
index fd44605..0000000
--- a/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md
+++ /dev/null
@@ -1,159 +0,0 @@
1---
2title: "Re-Inventing Task Runner That I Actually Used Daily"
3url: re-inventing-task-runner-that-i-actually-used-daily.html
4date: 2023-05-31T12:21:10+02:00
5type: post
6draft: false
7---
8
A couple of months ago I had this brilliant idea of re-inventing the wheel by
making an alternative to make. And so I went, boldly into the battle. And to my
big surprise, my attempt resulted in a not completely useless piece of
software.
12
My initial requirements were quite simple but soon grew into something more
ambitious. Looking back, I should have stuck to the simple version. My laziness
was on my side this time, though. Because I never implemented some of the
features, I now realise I really didn't need them; they would have bogged down
the whole program and made it something it was never meant to be.

My basic requirements were the following:
20
21- Syntax should be a tiny bit inspired by Rake and Rakefiles.
22- Should borrow the overall feel of a unit test experience.
23- Using something like Python would be a bit of an overkill.
- The program must be statically compiled, so it can run on the same
  architecture without libc or musl dependencies or things like that.
- Installing Ruby for Rake is a bit of an overkill and cannot be done on
  certain really lightweight distributions like Alpine Linux. This tool should
  be usable on such lightweight systems for remote debugging.
29- I want to use it for more than just compiling things. I want to use it as an
30 entry-point into a project, and I want this to help me indirectly document the
31 project as well.
32- It should be an abstraction over bash shell or the default system shell.
33 - Each task essentially becomes its own shell instance.
34- Must work on Linux and macOS systems.
- By default, running `erd` lists all the available tasks (when I use make, I
  usually put in a disclaimer that you should check the Makefile to see all
  available targets).
38- Should support passing arguments when you run it from a shell.
- Normal variables are the same as environment variables. There is no
  distinction. Every variable is also essentially an environment variable and
  can be used by other programs.
- State between tasks is not shared, which makes these "pure" shell instances.
43- Should be single-threaded for the start and later expanded with `@spawn`
44 command.
45- Variables behave like macros and are preprocessed before evaluation.
46- Should support something like `assure` that would check if programs like C
47 compiler or Python (whatever the project requires) are installed on a machine.
48
Quite a reasonable list of requirements. I already do these things in my
Makefiles and/or Bash scripts, but I would like to avoid repeating myself every
time I start working on something new.
52
53So I started with the following syntax.
54
55```ruby
56@env on
57
58# Override the default shell.
59@shell /bin/bash
60
61# Assure that program is installed.
62@assure docker-compose pip python3
63
64# Load local dotenv files (these are then globally available).
65@dotenv .env
66@dotenv .env.sample
67@dotenv some_other_file
68
# These are local variables but still accessible in tasks.
70@var HI = "hey"
71@var TOKEN = "sometoken"
72@var EMAIL = "m@m.com"
73@var PASSWORD = "pass"
74@var EDITOR = "vim"
75
76@task dev "Test chars .:'}{]!//" does
77 echo "..." $HI
78end
79
80@task clean "Cleans the obj files" does
81 rm .obj
82end
83
84@task greet "Greets the user" does
85 echo "Hi user $TOKEN or $WINDOWID $EMAIL"
86end
87
88@task stack "Starts Docker stack" does
89 docker-compose -f stack.yml up
90end
91
92@task todo "Shows all todos in source files and count them" does
93 grep -ir "TODO|FIXME" . | wc -l
94end
95
96@task test1 "For testing 1" does
97 unknown-command
98 echo "test1"
99 ls -lha
100end
101
102@task test2 "For testing 2" does
103 echo "test1"
104 ls -lha
105 docker-compose -f samples/stack.yml up
106end
107```
108
One thing that I really like about Errand (yes, that is what it is called, and
it is available at https://git.mitjafelicijan.com/errand.git/about/) is that a
task is a persistent shell. By that I mean that the whole task runs in one
shell, even if it contains multiple commands. In make, each line in a target is
its own shell, and you need to combine lines or add `\` at the end of the line.
115
```bash
# How you do these things in make: chain the lines explicitly
# so they run in one shell.
target:
	. .venv/bin/activate && \
	python script.py
```
122
Errand solves this problem. Consider each task, and everything executed in that
task, one shell that only closes when all the commands are completed.
125
By self-documenting I mean that if you are in a directory with an `Errandfile`
and you type just `erd` and press enter, it will by default display all the
possible targets. In make, I was doing this by having the first target be
something like `default` that echoes the message "Check Makefile for all
available targets." Because all tasks in Errand require a description, I use
that to display a, let's call it, table of contents.
132
Because I don't use any external dependencies, this whole thing can be
statically compiled. So that also checked one of the boxes.
135
It works on Linux and on a Mac, so that's also a bonus. I don't believe this
would work on Windows machines because of the way I use shell instances. But
you could use something like Windows Subsystem for Linux and run it in there.
That is a valid option.
140
To finish this essay off: how was it to use in "real life"? I have to be
honest, some of the missing features still bother me. The `@dotenv` directive
is still missing, and I need to implement it ASAP.
144
Another thing that needs to happen is support for streaming output. Currently,
commands like `docker-compose` that run in the foreground are not compatible
with Errand. So commands that stream output are an issue. I need to revisit how
I initiate the shell and how I read stdout and stderr, but that shouldn't be a
problem.
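The fix is conceptually simple: read the child's output line by line while it is still running, instead of collecting everything after the process exits. A minimal sketch of the idea in shell:

```shell
# The pipe delivers each line as soon as it is written, so the consumer
# can react while the producer is still running.
(for i in 1 2 3; do echo "line $i"; done) | while read -r line; do
    echo "streamed: $line"
done
```

A long-running foreground process like `docker-compose` would simply keep feeding lines into the loop until it exits.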
150
I have been very satisfied with this thing. I am pleasantly surprised by how
useful it is. I really wanted to test it in the wild before I committed to it.
I have more abandoned projects than Google, and it's bringing massive shame to
my family at this point. So I wanted to be sure that this is even useful. And
it actually is. I'm quite surprised at myself.
156
I really need to package this now and write proper docs. And maybe rewrite the
tokeniser. It's atrocious right now. A sight to behold! But that is an issue
for another time.
diff --git a/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md b/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md
deleted file mode 100644
index 9059b00..0000000
--- a/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md
+++ /dev/null
@@ -1,281 +0,0 @@
1---
2title: "Bringing all of my projects together under one umbrella"
3url: bringing-all-of-my-projects-together-under-one-umbrella.html
4date: 2023-07-01T18:49:07+02:00
5type: post
6draft: false
7---
8
9## What is the issue anyway?
10
Over the years, I have accumulated a bunch of virtual servers on my
[DigitalOcean](https://www.digitalocean.com/) account for small experimental
projects I dabble in. And this has resulted in quite a bill. I mean, I wouldn't
care if these projects were actually being used. But they were just sitting
there unused and wasting resources, which makes this an unnecessary burden for
me.
16
Most of them are just small HTML pages that have an endpoint or two to read
data from or write data to, and for that reason I wrote servers left and right.
To be honest, all of those things could have been done with [CGI
scripts](https://en.wikipedia.org/wiki/Common_Gateway_Interface) and that would
have been more than enough.
22
Recently, I decided to stop language hopping and focus on a simpler stack
consisting of C, Go and Lua. With it, I can accomplish all the things I am
interested in.
25
26## Finding a web server replacement
27
28Usually I had [Nginx](https://nginx.org/en/) in front of these small web servers
29and I had to manage SSL certificates and all that jazz. I am bored with these
30things. I don't want to manage any of this bullshit anymore.
31
So the logical move forward was to find a solid alternative. I ended up on
[Caddy server](https://caddyserver.com/). I've used it in the past but kind of
forgotten about it. What I really like about it is the ease of use and the
out-of-the-box functionality that comes with it.
36
37These are the _pitch_ points from their website:
38
39- **Secure by Default**: Caddy is the only web server that uses HTTPS by
40 default. A hardened TLS stack with modern protocols preserves privacy and
41 exposes MITM attacks.
42- **Config API**: As its primary mode of configuration, Caddy's REST API makes
43 it easy to automate and integrate with your apps.
44- **No Dependencies**: Because Caddy is written in Go, its binaries are entirely
45 self-contained and run on every platform, including containers without libc.
46- **Modular Stack**: Take back control over your compute edge. Caddy can be
47 extended with everything you need using plugins.
48
49I had just a few requirements:
50
51- Automatic SSL
52- Static file server
53- Basic authentication
54- CGI script support
55
And the vanilla version does all of it except CGI scripts. But that can easily
be fixed with their modular approach. You can build a custom version of the
server on their website, or do it with Docker.
59
60This is a `Dockerfile` I used to build a custom server.
61
```Dockerfile
FROM caddy:builder AS builder

RUN xcaddy build \
    --with github.com/aksdb/caddy-cgi

FROM caddy:latest
RUN apk add --no-cache nano

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```
73
74## Getting rid of all the unnecessary virtual machines
75
76The next step was to get a handle on the number of virtual servers I have all
77over the place.
78
79I decided to move all the projects and services into two main VMs:
80
81- personal server (still Nginx)
82 - git server
83 - static file server
84 - personal blog
85- projects server (Caddy server)
86 - personal experiments
87 - other projects
88
I will focus on the projects server in this post since it's more interesting.
90
91## Testing CGI scripts
92
The first thing I tested was how CGI scripts work under Caddy. This is
particularly important to me because almost all of my experiments and mini
projects need it to work.
96
To configure Caddy server, you must provide the server with a configuration
file. By default, it's called `Caddyfile`.
99
```caddyfile
{
    order cgi before respond
}

examples.mitjafelicijan.com {
    cgi /bash-test /opt/projects/examples/bash-test.sh
    cgi /tcl-test /opt/projects/examples/tcl-test.tcl
    cgi /lua-test /opt/projects/examples/lua-test.lua
    cgi /python-test /opt/projects/examples/python-test.py

    root * /opt/projects/examples
    file_server
}
```
115
- The order is very important. Make sure that `order cgi before respond` is at
  the top of the configuration file.
- Also, when you run Caddy v2, make sure you provide the `adapter` argument,
  like this: `/usr/bin/caddy run --watch --environ --config /etc/caddy/Caddyfile
  --adapter caddyfile`. Otherwise, Caddy will try to use a different format for
  the config file.
122
123I did a small batch of tests with [Bash](https://www.gnu.org/software/bash/),
124[Tcl](https://www.tcl-lang.org/), [Lua](https://www.lua.org/) and
125[Python](https://www.python.org/). Here is a cheat sheet if you need it.
126
127Let's get Bash out of the way first.
128
```bash
#!/usr/bin/bash

printf "Content-type: text/plain\n\n"

printf "Hello from Bash\n\n"
printf "PATH_INFO [%s]\n" "$PATH_INFO"
printf "QUERY_STRING [%s]\n" "$QUERY_STRING"
printf "\n"

for i in {0..9}; do
    printf "> %s\n" "$i"
done

exit 0
```
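Before wiring such a script into Caddy, you can dry-run the same logic locally by exporting the variables the CGI module would set; the values below are made up:

```shell
# Simulate the CGI environment Caddy's cgi module provides.
export PATH_INFO="/bash-test/extra"
export QUERY_STRING="name=bob"

# A CGI response must start with a header block followed by a blank line.
printf "Content-type: text/plain\n\n"
printf "PATH_INFO [%s]\n" "$PATH_INFO"
printf "QUERY_STRING [%s]\n" "$QUERY_STRING"
```

If the header and blank line are missing, the server has nothing to translate into an HTTP response, which is the most common CGI mistake.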
145
146This one is for Tcl script.
147
```tcl
#!/usr/bin/tclsh

puts "Content-type: text/plain\n"

puts "Hello from Tcl\n"
puts "PATH_INFO \[$env(PATH_INFO)\]"
puts "QUERY_STRING \[$env(QUERY_STRING)\]"
puts ""

for {set i 0} {$i < 10} {incr i} {
    puts "> $i"
}
```
162
163And for all you Python enjoyers.
164
```python
#!/usr/bin/python3

import os

print("Content-type: text/plain\n")

print("Hello from Python\n")
print("PATH_INFO [{}]".format(os.environ['PATH_INFO']))
print("QUERY_STRING [{}]".format(os.environ['QUERY_STRING']))
print("")

for i in range(10):
    print("> {}".format(i))
```
180
181And for the final example, Lua.
182
```lua
#!/usr/bin/lua

print("Content-type: text/plain\n")

print("Hello from Lua\n")
print(string.format("PATH_INFO [%s]", os.getenv("PATH_INFO")))
print(string.format("QUERY_STRING [%s]", os.getenv("QUERY_STRING")))
print()

for i = 0, 9 do
    print(string.format("> %d", i))
end
```
197
198## Basic authentication
199
I also wanted an option for some sort of authentication, and something like
[Basic access
authentication](https://en.wikipedia.org/wiki/Basic_access_authentication)
would be more than enough.
204
205Thankfully, Caddy supports this out of the box already. Below is an updated
206example.
207
```caddyfile
{
    order cgi before respond
}

examples.mitjafelicijan.com {
    cgi /bash-test /opt/projects/examples/bash-test.sh
    cgi /tcl-test /opt/projects/examples/tcl-test.tcl
    cgi /lua-test /opt/projects/examples/lua-test.lua
    cgi /python-test /opt/projects/examples/python-test.py

    root * /opt/projects/examples
    file_server

    basicauth * {
        bob $2a$14$/wCgaf9oMnmQa20txB76u.nI1AldGMBT/1J7fXCfgOiRShwz/JOkK
    }
}
```
227
`basicauth *` matches everything under this domain/sub-domain and protects it
with Basic Authentication.

- `bob` is the username
- the bcrypt hash is the password
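For context, Basic Authentication is nothing more than a base64-encoded `user:password` pair in the `Authorization` header; the credentials below are examples:

```shell
# Build the header value a client sends when you use `curl -u bob:secret`.
credentials="bob:secret"
token=$(printf '%s' "$credentials" | base64)
echo "Authorization: Basic $token"   # Authorization: Basic Ym9iOnNlY3JldA==
```

This is also why Basic Auth is only safe over HTTPS, which Caddy gives you by default.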
233
To generate these passwords, execute `caddy hash-password`. This will prompt
you to enter a password twice and spit out a hashed password that you can put
in your configuration file.
237
238Restart the server and you are ready to go.
239
240## Making Caddy a service with systemd
241
242After the tests were successful, I copied `caddy` to `/usr/bin/caddy` and copied
243`Caddyfile` to `/etc/caddy/Caddyfile`.
244
Now off to systemd. Each systemd service requires you to create a service
file, so I created `/etc/systemd/system/caddy.service` and put the following
content in it.
250
```systemd
[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target

[Service]
Type=notify
User=root
Group=root
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile --adapter caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force --adapter caddyfile
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
```
274
275- You might need to reload systemd with `systemctl daemon-reload`.
276- Then I enabled the service with `systemctl enable caddy.service`.
277- And then I started the service with `systemctl start caddy.service`.
278
279This was about all that I needed to do to get it running. Now I can easily add
280new subdomains and domains to the main configuration file and be done with
281it. No manual Let's Encrypt shenanigans needed.
diff --git a/content/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md b/content/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md
deleted file mode 100644
index 4743694..0000000
--- a/content/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md
+++ /dev/null
@@ -1,100 +0,0 @@
1---
2title: "Who knows what the world will look like tomorrow"
3url: who-knows-what-the-world-will-look-like-tomorrow.html
4date: 2023-07-08T18:49:07+02:00
5type: post
6draft: false
7---
8
This site has gone through a lot of changes over the years. From being written
in Flask and Bottle to moving on to static site generators. I have used and
tested probably tens of them by now. From homebrew solutions to the biggest and
the baddest. From Bash scripts to Node.js disasters. I've seen some things, no
doubt. Not all bad.
14
I have been closely observing the web and where the trends are going, and I
don't like what I see. Instead of the internet being this weird place where
experimentation happens, it has all become stale and formulaic. Boring,
actually. Really boring. And sad. Where is that old, revolutionary FU spirit I
remember? It's still there, I know. But it's being drowned out by the voices of
mediocrity and formulaic boredom.
21
It almost feels like the internet stopped for 10 years and only now has
something started happening. With all the insanity around the world. People
hating people without actual reasons, just because it's fashionable to hate and
the crowd says so. Sad state of affairs.
26
27All this is contributing to this overall negativity masked as apathy. Everybody
28walking in lockstep. Instead of being creative and bold, we are just
29re-inventing the world and making the same mistakes. Maybe, just maybe, some
30things are good enough and there is no need to try to be too smart for our own
31good. After N-attempts, maybe something should click inside our heads to maybe
32say: "This thing, opinion, etc. is actually really good, and even after several
33attempts it still holds."
34
35The older I get, the more careful I am of my own thoughts and why I think the
36way I think. More and more, I try to understand people with opposite
37opinions. Far from perfect, but closer to bearable. And then I see people
38hearing or reading a thing on internet and let's fucking goooooo! Strong
39opinions are a sign of a weak and uneducated mind. I am more and more sure of
40this.
41
It's gotten to a point where you can, with great certainty, deduce a person's
personality based on one or two opinions. How boring we have become. No wonder
people can't talk to each other. These would be very quick conversations anyway.
45
I was just reminded of a song, ["Hi
Ren"](https://www.youtube.com/watch?v=s_nc1IVoMxc). The ending talks about
being stiff and not being able to dance. Such an amazing metaphor. And we as
people have gone so far that we can't even walk or crawl normally anymore. We
have forgotten that the most beautiful things in life have a great deal of
uncertainty about them. We want instant gratification. Not only that, but we
want absolute obedience. Complete control over others, because we have zero
control of ourselves. And all the lies we could tell ourselves will not help us
out of this situation.
55
It is funny how I catch myself from time to time being a complete idiot. It's
like having an out-of-body experience. I can see myself being an idiot, and
cannot stop myself. It serves as a lesson to stop before speaking. To think
before saying. And to crawl before walking.
60
So there is still time. We can dance once more. All we need to do is stop for a
second. Me and you. Us two is a start. Let's not try to change the world, but
rather nudge ourselves just a tiny bit. And if we only did that?! Just imagine:
if each of us nudged ourselves a small, tiny bit, the world would heal. If we
just put down the phones and ignored the internet for a day or two. Put
visiting websites that feed on us on hold. Listened to just one sentence from a
person we completely disagree with and tried to understand it. I truly believe
that this is possible.
69
Life is about suffering and joy. And instead of wishing suffering on others and
expecting joy for ourselves, we should for a brief moment want suffering for
ourselves and wish joy on others. Wouldn't that be an amazing sight to see?
73
74I caught myself hating on Rust. And I deeply thought about it afterward. Why did
75I do it? It is obviously not for me. So why the hell was I being so negative
76towards it? I think that I know the answer. I was negative because that is
77easy. Because it's much easier to hate on things than to say to yourself: "Well,
78you know what? This is not for me. I will focus on creation and not
79destruction. This is who I want to be. This is what fills me with joy and
80purpose." Where joy is keeping me happy and purpose scares the shit out of me
81and keeps me honest. This is who I want to be. Admit to myself when I am wrong
82and accept the faults that I have without reservation and with courage march on.
83
I just realized that this blog post is a sort of therapy for me. It's
cathartic. Going through the history of this site and remembering all the
decisions and annoyances that came with it. When I was cursing at the tools.
And time moved on, and the site is still here. It serves as a reminder that
perseverance wins in the end. If we just let things go.
89
This came with a decision: simplifying life and removing all the unnecessary
negativity is key. Rather than worrying about what the internet is saying or
what the world is trying to take from you, you are the only one who can say no.
And create instead of destroy.
94
I don't have an ending for this post, so I will say this. We live in the most
amazing times in recorded history, and we should be eternally grateful for it.
Create and study, this should be my mantra. Just create and let the world
happen. And when you feel yourself becoming too certain, stop and check how
deep in the shit you already are. Strong opinions are a sign of a weak and
uneducated mind. Hate and disdain are for the weak.
diff --git a/content/posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md b/content/posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md
deleted file mode 100644
index c7e12ae..0000000
--- a/content/posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md
+++ /dev/null
@@ -1,22 +0,0 @@
1---
2title: "Fix screen tearing on Debian 12 Xorg and i3"
3url: fix-screen-tearing-on-debian-12-xorg-and-i3.html
4date: 2023-07-10T04:21:48+02:00
5type: note
6draft: false
7---
8
I have been experiencing some issues with Intel® Integrated HD Graphics 3000
under Debian 12 with Xorg and i3. Using the `picom` compositor didn't help. To
fix this issue, create a new file `/etc/X11/xorg.conf.d/20-intel.conf` as root
and put the following in it.
13
```txt
Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    Option "TearFree" "true"
EndSection
```
21
22Reboot the system and that should be it.
diff --git a/content/posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md b/content/posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md
deleted file mode 100644
index 821a80f..0000000
--- a/content/posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md
+++ /dev/null
@@ -1,14 +0,0 @@
1---
2title: "Online radio streaming with MPV from terminal"
3url: online-radio-streaming-with-mpv-from-terminal.html
4date: 2023-07-10T03:34:45+02:00
5type: note
6draft: false
7---
8
Recently I have been using my Thinkpad x220 more, and there are some
constraints I have faced with it. The CPU is not as powerful as on my main
machine, and I really want to listen to some music while using it. Browsers
really are bloat.
12
Check out https://streamurl.link/, copy a stream URL, and then run
`mpv <stream-url>`.