path: root/content/posts
Diffstat (limited to 'content/posts')
-rw-r--r--  content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md  41
-rw-r--r--  content/posts/2012-03-09-led-technology-not-so-eco.md  32
-rw-r--r--  content/posts/2013-10-24-wireless-sensor-networks.md  53
-rw-r--r--  content/posts/2015-11-10-software-development-pitfalls.md  180
-rw-r--r--  content/posts/2017-03-07-golang-profiling-simplified.md  125
-rw-r--r--  content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md  198
-rw-r--r--  content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md  205
-rw-r--r--  content/posts/2017-08-11-simple-iot-application.md  606
-rw-r--r--  content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md  330
-rw-r--r--  content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md  410
-rw-r--r--  content/posts/2019-10-14-simplifying-and-reducing-clutter.md  58
-rw-r--r--  content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md  107
-rw-r--r--  content/posts/2020-03-22-simple-sse-based-pubsub-server.md  453
-rw-r--r--  content/posts/2020-03-27-create-placeholder-images-with-sharp.md  101
-rw-r--r--  content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md  107
-rw-r--r--  content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md  110
-rw-r--r--  content/posts/2020-05-05-remote-work.md  71
-rw-r--r--  content/posts/2020-08-15-systemd-disable-wake-onmouse.md  72
-rw-r--r--  content/posts/2020-09-06-esp-and-micropython.md  225
-rw-r--r--  content/posts/2020-09-08-bind-warning-on-login.md  53
-rw-r--r--  content/posts/2020-09-09-digitalocean-sync.md  111
-rw-r--r--  content/posts/2021-01-24-replacing-dropbox-with-s3.md  113
-rw-r--r--  content/posts/2021-01-25-goaccess.md  202
-rw-r--r--  content/posts/2021-06-26-simple-world-clock.md  107
-rw-r--r--  content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md  102
-rw-r--r--  content/posts/2021-08-01-linux-cheatsheet.md  286
-rw-r--r--  content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md  275
-rw-r--r--  content/posts/2021-12-25-running-golang-application-as-pid1.md  347
-rw-r--r--  content/posts/2021-12-30-wap-mobile-web-before-the-web.md  201
-rw-r--r--  content/posts/2022-06-30-trying-out-helix-editor.md  52
-rw-r--r--  content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md  363
-rw-r--r--  content/posts/2022-08-13-algae-spotted-on-river-sava.md  30
-rw-r--r--  content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md  303
-rw-r--r--  content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md  65
-rw-r--r--  content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md  252
-rw-r--r--  content/posts/2023-05-16-rekindling-my-love-for-programming.md  73
-rw-r--r--  content/posts/2023-05-22-crafting-stories-in-zed-editor.md  87
-rw-r--r--  content/posts/2023-05-23-i-was-wrong-about-git-workflows.md  71
-rw-r--r--  content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md  158
-rw-r--r--  content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md  280
40 files changed, 0 insertions, 7015 deletions
diff --git a/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md b/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md
deleted file mode 100644
index 9fc484a..0000000
--- a/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md
+++ /dev/null
@@ -1,41 +0,0 @@
1---
2title: Most likely to succeed in the year of 2011
3url: most-likely-to-succeed-in-year-of-2011.html
4date: 2011-01-13T12:00:00+02:00
5draft: false
6---
7
8The year of 2010 was definitely the year of Geo-location. The market responded
9beautifully and lots of very cool services were launched. We all have to thank
10the mobile market for such extensive adoption. New generations of mobile
11phones are not only packed with high-tech hardware but also affordable.
12We can now manage tasks that, not so long ago, seemed almost Star Trek’ish.
13And all of this has had, and still has, a great influence on the direction we
14are heading now.
15
16Reading all these articles about new and thriving technologies makes me wonder
17what the next step is. The future is the mesh, as Lisa Gansky argued in her
18book The Mesh.
19
20Many still hold conservative views on distributed systems: concerns about the
21security of information, and fear of not controlling every aspect of information
22flow. I am very open to distributed systems and heterogeneous applications,
23and I think this is the correct and best way to proceed.
24
25This year will definitely be about communication platforms. Mobile to mobile.
26Machine to mobile and vice versa. All the tech is available and ready to put
27into action. Wireless is today’s new mantra. And the concept of semantic web is
28now ready for industry.
29
30Applications and developers can now gain access to new layers of systems and can
31build solutions that meet the high-quality needs of the market. Speed
32is everything now.
33
34My vote goes to “Machine to Machine” and “Embedded Systems”!
35
36- [Machine-to-Machine](http://en.wikipedia.org/wiki/Machine-to-Machine)
37- [The ultimate M2M communication protocol](http://www.bitxml.org/)
38- [COOS Project (connectivity initiative)](http://www.coosproject.org/maven-site/1.0.0/project-info.html)
39- [Community for machine-to-machine](http://m2m.com/index.jspa)
40- [Embedded system](http://en.wikipedia.org/wiki/Embedded_system)
41
diff --git a/content/posts/2012-03-09-led-technology-not-so-eco.md b/content/posts/2012-03-09-led-technology-not-so-eco.md
deleted file mode 100644
index a683aec..0000000
--- a/content/posts/2012-03-09-led-technology-not-so-eco.md
+++ /dev/null
@@ -1,32 +0,0 @@
1---
2title: LED technology might not be as eco-friendly as you think
3url: led-technology-not-so-eco.html
4date: 2012-03-09T12:00:00+02:00
5draft: false
6---
7
8There is a lot of talk about LED technology. It is beginning to infiltrate
9industry at a fast rate, and it’s a challenge for designers and also engineers.
10I wondered when a weakness would be revealed. Then I stumbled upon an article
11talking about the harm of using LED technology. It looks like this magical
12technology is not so magical and eco-friendly.
13
14A new study from the University of California indicates that LED lights contain
15toxic metals, and should be produced, used and disposed of carefully. Besides
16the lead and nickel, the bulbs and their associated parts were also found to
17contain arsenic, copper, and other metals that have been linked to different
18cancers, neurological damage, kidney disease, hypertension, skin rashes and
19other illnesses in humans, and to ecological damage in waterways.
20
21Since then, I haven’t found any regulation or standard for the disposal of LED
22lights. This might become a problem in the future, and it is a massive
23drawback that could have quite an impact on the consumer market.
24
25Nevertheless, there is potential, and I am sure the market will adapt. I also
26hope to be reading about solutions to this concern soon.
27
28**Additional resources:**
29
30- [Recycling and Disposal of Light Bulbs](http://ezinearticles.com/?Recycling-and-Disposal-of-Light-Bulbs&id=1091304)
31- [How to Dispose of a Low-Energy Light Bulb](http://www.ehow.com/how_7483442_dispose-lowenergy-light-bulb.html)
32
diff --git a/content/posts/2013-10-24-wireless-sensor-networks.md b/content/posts/2013-10-24-wireless-sensor-networks.md
deleted file mode 100644
index fc5d372..0000000
--- a/content/posts/2013-10-24-wireless-sensor-networks.md
+++ /dev/null
@@ -1,53 +0,0 @@
1---
2title: Wireless sensor networks
3url: wireless-sensor-networks.html
4date: 2013-10-24T12:00:00+02:00
5draft: false
6---
7
8Zigbee networks have this wonderful capability to self-heal, which means they
9can reorder connections between nodes if one of them is inoperable. This works
10out of the box when you deploy them. But keep in mind that achieving
11this is not as easy as you would think. None of it is plug&play. So to make
12your life a bit easier, here are some pointers which, I hope, will help you.
13
14- Be careful when you are ordering your equipment from abroad. There are many
15 rules and regulations you need to comply with before you get your Xbee radios.
16 What they do is wait until you prove that you won’t use the technology for
17 some kind of evil take-over-the-world project :). For this, they have the
18 EAR (Export Administration Regulations), which basically means “This product
19 may require a license to export from the United States.”.
20- I don’t know if this applies to every country, but when we purchased our Xbee
21 radios from Mouser, this was mandatory! What we needed to do was to print out
22 a form and write information about our company and send them a copy via
23 email. With this document, we proved that we are a legitimate company.
24- When you complete your purchase and send all the documentation, you are not
25 clear yet. Then customs will take it from there :). There will be some
26 additional costs. Before purchasing, gather as much information about these
27 costs as possible, because it can get expensive in the end.
28- I suggest you use companies from your country. You can seriously cut your
29 costs. Here in Slovenia, the best option as far as I know is Farnell. And
30 based on my personal experience, they rock! All I need to say!
31- Make plans when ordering larger quantities. Do not, I say, do not make your
32 orders in December! :) Believe me! You will have problems with the stock they
33 can provide. So, we were forced to buy some things from Mouser, which was
34 extremely painful because of all the regulations you need to obey when
35 importing goods from the USA.
36- Make sure that the firmware version on your Xbee radios is exactly the same!
37 Do not get creative!!! I propose using templates. You can get a template by
38 exporting a settings profile in the X-CTU application. Make sure you have enabled
39 “Upgrade firmware” so you can be sure each radio has the same firmware.
40- And again: make plans! Plan everything! Months in advance! You will thank me
41 later :)
42- Test, test, test. Wireless networks can be tricky.
43
44If you are serious, I suggest you buy this book, Building Wireless Sensor
45Networks. You will get a glimpse of how these networks work, in layman’s terms.
46It is a good starting point for everybody who wants to build wireless networks.
47
48**Additional resources:**
49
50- http://www.digi.com/aboutus/export/generalexportinfo
51- http://doresearch.stanford.edu/research-scholarship/export-controls/export-controlled-or-embargoed-countries-entities-and-persons
52- http://www.bis.doc.gov/licensing/exportingbasics.htm
53
diff --git a/content/posts/2015-11-10-software-development-pitfalls.md b/content/posts/2015-11-10-software-development-pitfalls.md
deleted file mode 100644
index b9edd19..0000000
--- a/content/posts/2015-11-10-software-development-pitfalls.md
+++ /dev/null
@@ -1,180 +0,0 @@
1---
2title: Software development and my favorite pitfalls
3url: software-development-pitfalls.html
4date: 2015-11-10T12:00:00+02:00
5draft: false
6---
7
8Over the years I have had the privilege to work on some very exciting projects,
9both in the software development field and in electronics, and every experience
10taught me some invaluable lessons about how NOT to approach development. And
11through this post I will try to point out some absurd, outdated techniques I
12find the most annoying and damaging during a development cycle. There will be
13swearing because this topic really gets on my nerves and I never coherently
14tried to explain them in writing. So if I get heated up, please bear with me.
15
16As new methods of project management are emerging, underlying processes still
17stay old and outdated. This is mainly because we as people are unable to
18completely shift away from these approaches.
19
20I was always struggling with communication, and many times that cost me a
21relationship or two because I was not on the ball all the time. Through every
22experience, I became more convinced that I was the problem, never considering
23that the real issue may be that communication never evolved a single step from
24emails. And if you think for a second, not many things have changed around this
25topic. We just have different representations of email (message boards, chats,
26project management tools). And I believe this is the real issue we are facing
27now.
28
29There are many articles written about hyper connectivity and the effects that
30are a direct result of it. But the mainstream does nothing about it. We are just
31putting out fires, and we do nothing to prevent them. I am certain this will be a
32major source of grief in the coming years. What we can all do to avoid this is
33to change our mindset and experiment with our communication skills and
34development approaches. We need to maximize the output a person can give. And to
35achieve this we need to listen to them, encourage them. I know that not
36everybody is a naturally born leader, but with enough practice and encouragement
37they also can become active participants in leadership.
38
39There are many talks now about methodologies such as Scrum, Kanban, Cleanroom
40and they all fucking piss me off :). These are all boxes that imprison people and
41take away their freedom of thought. This is a straightforward mindfuck /
42amputation of creativity.
43
44Let me list a couple of things that I find really destructive and bad for a
45project and, in the long run, the company.
46
47## Ping emails
48
49Ping emails are emails you have to write as soon as you receive an email. Its
50sole purpose is to inform the sender that you received their email, and you are
51working on it. Its result is only to calm down the sender that their task is
52being dealt with. Its intent basically is: I did my job by sending you this
53email, so I am in the clear. I categorize this as a fuck-you email.
54This is one of the most irritating types of emails I need to write. This is the
55ultimate control freak show you can experience, and it gives the sender a false
56feeling of control. Newsflash: we do not live in 1982, when there was a real
57possibility that an email never reached its destination. I really hate this from
58the bottom of my heart.
59
60My reply should be: “Yes, I am fucking alive, and I am at your service, my
61liege!”. I guess if I replied like this, I wouldn’t have to write any more
62messages of this kind.
63
64## Everybody is a project manager
65
66Well, this is a tough one. I noticed that as soon as you let people give
67their suggestions, you are basically screwed. There is a truth in the saying:
68“Set low expectations and deliver a little more than you promised.”
69
70People tend to take on the role of a manager as soon as they are presented with
71an opportunity. And by getting angry at them, you only provoke yourself. They are
72not at fault. You just need to tell them they are only giving suggestions and
73not tasks at the beginning and everything will be alright. But if you give them
74a feeling that they are in control, you will have immense problems explaining
75why their features are not in the current release.
76
77The project mission must always lead the project requirements, and any deviation
78from it will result in major project butchering. By this, I mean that the
79project will take its own path, and you will be left with half-done software
80that helps nobody. Clear mission goals and clean execution will allow you to
81develop software with clear intent.
82
83## We are never wrong
84
85I find this type of arrogance the worst. We must always conduct ourselves as if
86we are infallible and cannot make mistakes. As soon as a procedure or process is
87established, there is no room for changes or improvements. This is the most
88idiotic thing someone can say or think. I believe that processes need to evolve
89and change over time. This is an imperative for your organization if you want it
90to improve and develop. We all need to grow balls and change
91everything in order to adapt to current situations. Being a prisoner of
92predefined processes kills creativity.
93
94I am constantly trying new software for project managing and communication. I
95believe every team has its own dynamic, and it needs to be discovered
96organically and naturally through many experiments. By putting the team in a
97box, you are amputating their creativity and therefore minimizing their
98potential. But if you talk to an executive, you will mainly find archetypical
99thinking and a strong need to compartmentalize everything from business
100processes to resource management. And this type of management that often
101displays micromanagement techniques only works for short periods (couple of
102years) and then employees either leave the company or become basically retarded
103drones on autopilot.
104
105## Micromanaging
106
107This basically implies that everybody on the team is an idiot who needs to have
108a to-do list that they cannot write themselves. How about spoon-feeding the team
109at lunch because, besides the team leader, everybody must be a retarded idiot at
110best?
111
112I prefer milestones, as they give developers much more freedom and creativity in
113development, and they don’t waste their time checking some bizarre to-do list that was
114not even thought through. Projects constantly change throughout the development
115cycle, and all you are left with at the end is a list of unchecked tasks and the
116wrath of management asking why they are not completed. Best WTF moment!
117
118## Human contact — no need for it!
119
120We are vigorously trying to eliminate physical contact by replacing short
121meetings with software, with no regard for the fact that we are not machines.
122Many times a simple 5-minute meeting in the morning can solve most of the
123problems. In rapid development, short bursts of face-to-face communication are
124possibly the best way to go.
125
126We now have all this software available, and all we get out of it is a giant
127clusterfuck. An obstacle and not a solution. So why do we still use them?
128
129## MVP is killing innovation
130
131Many will disagree with me on this one, but I stand strong by this statement.
132What I have noticed in my experience is that all these buzzwords around us only
133mislead and capture us in a circle of solving issues that already have a solution,
134but we are unable to see it without using some fancy word for it.
135
136The toughest thing to do for a developer is to minimize requirements. Well, this
137is tough only for bad developers. Yes, I said it. There are many types of
138developers out there. And those unable to minimize feature scope are the ones
139you don’t need on your team. Their only goal is to solve problems that exist
140only in their heads. And then you have to argue with them, and waste energy on
141them, instead of developing your awesome product. They are a cancer and I
142suggest you cut them off.
143
144MVP as an idea is great, but sadly people don’t understand the underlying
145philosophy, and they spend too much time focusing and fixating on something that
146every sane person with a normal IQ would understand without some made-up
147acronym. And the result is a lot of talking and barely any execution.
148
149Well, MVP is not directly killing innovation, but stupid people do when they try
150to understand it.
151
152## Pressure wasteland
153
154You must never allow yourself to be pressured into confirming a deadline if you
155are not confident. We often feel that we are in service of others, which is true
156to some extent. But it is also true that others are in service to us to some
157extent. And we forget this all the time. We are all pressured all the time to
158make decisions just to calm other people down. And when they leave your office
159you experience WTF moment :) How the hell did they manage to fuck me up again?
160
161People need to realize that the more pressure you put on somebody, the less they
162will be able to do. So 5-minute update email requests will only result in a mental
163breakdown and an inability to work that day. Constant poking is the one thing
164that makes me lose my mind instantly. To all of you doing this: “Stop bothering
165us with your insecurities and let us do our job. We will do it quicker and
166better without you breathing down our necks.”
167
168If this happens to me, I end up with no energy left. Don’t you get it?
169You will get much more out of me if you ask me like a human being and
170not like your personal butler. In the long run, you are destroying your relationships,
171and nobody will want to work with you. Your schizophrenic approach will damage
172only you in the long run. Nobody is anybody’s property.
173
174## Conclusion
175
176I am guilty of many things described in this post. And I find it hard sometimes
177to acknowledge this. And I lie to myself and try vigorously to find some
178explanation for why I do these things. There is always room for growth. And maybe
179you will also find some of yourself in this post and realize what needs to
180change for you to evolve.
diff --git a/content/posts/2017-03-07-golang-profiling-simplified.md b/content/posts/2017-03-07-golang-profiling-simplified.md
deleted file mode 100644
index 4bd18b2..0000000
--- a/content/posts/2017-03-07-golang-profiling-simplified.md
+++ /dev/null
@@ -1,125 +0,0 @@
1---
2title: Golang profiling simplified
3url: golang-profiling-simplified.html
4date: 2017-03-07T12:00:00+02:00
5draft: false
6---
7
8Many posts have been written about profiling in Golang, yet I haven’t found a
9proper tutorial on the subject. Almost all of them are missing some piece of
10important information, and it gets pretty frustrating when you have a deadline
11and cannot find a simple, distilled solution.
12
13Nevertheless, after searching and experimenting I have found a solution that
14works for me and should probably work for you too.
15
16## Where are my pprof files?
17
18By default, pprof files are generated in the /tmp/ folder. You can override the
19folder where these files are generated programmatically in your Go code, as we
20will see in the examples below.
21
22## Why is my CPU profile empty?
23
24I have found that sometimes the CPU profile is empty because the program was not
25running long enough (the profiler only samples the stack roughly 100 times per
26second). Programs that finish too quickly yield a pprof file of only about 4KB.
27
28## Profiling
29
30As you can see from the examples, we execute a dummy_benchmark function to
31ensure some amount of work gets done. Memory profiling can be done without such
32a “complex” function, but CPU profiling needs it.
33
34Both the memory and CPU profiling examples are almost the same; only the
35parameters passed to profile.Start in the main function differ. When we set
36profile.ProfilePath(“.”) we tell the profiler to store pprof files in the same
37folder as our program.
38
39### Memory profiling
40
41```go
42package main
43
44import (
45 "fmt"
46 "time"
47 "github.com/pkg/profile"
48)
49
50func dummy_benchmark() {
51
52 fmt.Println("first set ...")
53 for i := 0; i < 918231333; i++ {
54 i *= 2
55 i /= 2
56 }
57
58 <-time.After(time.Second*3)
59
60 fmt.Println("second set ...")
61 for i := 0; i < 9182312232; i++ {
62 i *= 2
63 i /= 2
64 }
65}
66
67func main() {
68 defer profile.Start(profile.MemProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop()
69 dummy_benchmark()
70}
71```
72
73### CPU profiling
74
75```go
76package main
77
78import (
79 "fmt"
80 "time"
81 "github.com/pkg/profile"
82)
83
84func dummy_benchmark() {
85
86 fmt.Println("first set ...")
87 for i := 0; i < 918231333; i++ {
88 i *= 2
89 i /= 2
90 }
91
92 <-time.After(time.Second*3)
93
94 fmt.Println("second set ...")
95 for i := 0; i < 9182312232; i++ {
96 i *= 2
97 i /= 2
98 }
99}
100
101func main() {
102 defer profile.Start(profile.CPUProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop()
103 dummy_benchmark()
104}
105```
106
107### Generating profiling reports
108
109```bash
110# memory profiling
111go build mem.go
112./mem
113go tool pprof -pdf ./mem mem.pprof > mem.pdf
114
115# cpu profiling
116go build cpu.go
117./cpu
118go tool pprof -pdf ./cpu cpu.pprof > cpu.pdf
119```
120
121This will generate a PDF document with a visualized profile (note that Graphviz
122must be installed for pprof to render the graph).
122
123- [Memory PDF profile example](/assets/go-profiling/golang-profiling-mem.pdf)
124- [CPU PDF profile example](/assets/go-profiling/golang-profiling-cpu.pdf)
125
diff --git a/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md b/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md
deleted file mode 100644
index bb98efd..0000000
--- a/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md
+++ /dev/null
@@ -1,198 +0,0 @@
1---
2title: What I've learned developing ad server
3url: what-i-ve-learned-developing-ad-server.html
4date: 2017-04-17T12:00:00+02:00
5draft: false
6---
7
8For the past year and a half I have been developing a native advertising server
9that contextually matches ads and displays them in different template forms on a
10variety of websites. This project grew from serving thousands of ads per day to
11millions.
12
13The system is made up of a couple of core components:
14
15- API for serving ads,
16- Utils - cronjobs and queue management tools,
17- Dashboard UI.
18
19The initial release used [MongoDB](https://www.mongodb.com/) for full-text
20search, but it was later replaced by [Elasticsearch](https://www.elastic.co/)
21for better CPU utilization and better search performance. This gave us access to
22many amazing features of [Elasticsearch](https://www.elastic.co/). You should
23check it out if you do any search-related operations.
24
25Because the premise of the server is to provide a native ad experience, ads are
26rendered on the client side via a simple templating engine. This ensures that
27ads can be displayed in a number of different ways based on the visual style of
28the page. And this makes the JavaScript client library quite complex.
29
30So now that you know the basic information about the product, let’s get into the
31lessons we learned.
32
33## Aggregate everything
34
35After the beta version was released, everything (impressions, clicks, etc.) was
36written to the database at nanosecond resolution. At that time we were using
37[PostgreSQL](https://www.postgresql.org/), and the database quickly grew to well
38over 200GB of disk space. That was problematic. Statistics took a disturbingly
39long time to aggregate, and indexes on the stats table were no help
40after we reached 500 million datapoints.
41
42> There is marketing product information and there is real-life experience.
43And they tend to be quite the opposite.
44
45This is the reason that everything is now aggregated on a daily basis, and this
46data is then fed to Elastic in the form of a daily summary. With this we
47can now track many more dimensions, such as zone, channel and platform
48information. And with this information we can now adjust the occurrence of ads in
49specific places more precisely.
50
51We have also adopted [Redis](https://redis.io/) as a first-class citizen in our
52stack. Because Redis also persists information to local disk, we have some sort
53of backup if the server were to suffer a failure.
54
55All the real-time statistics for ad serving and redirecting are kept as
56counters in a Redis instance, extracted daily and pushed to Elastic.
57
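The counter-and-daily-extraction flow described above could be sketched in a few lines of Python. This is purely illustrative: a plain dict stands in for the Redis instance, and the key layout and field names are made up for the example.

```python
from collections import defaultdict
from datetime import date

# Stand-in for the Redis counters, with keys like "impressions:<zone>:<platform>".
counters = defaultdict(int)

def record_impression(zone, platform):
    # In the real system this would be a Redis INCR on the same kind of key.
    counters[f"impressions:{zone}:{platform}"] += 1

def daily_summary(day):
    """Collapse the raw counters into one summary document per dimension,
    ready to be indexed into Elasticsearch, then reset the counters."""
    docs = []
    for key, count in counters.items():
        metric, zone, platform = key.split(":")
        docs.append({
            "date": day.isoformat(),
            "metric": metric,
            "zone": zone,
            "platform": platform,
            "count": count,
        })
    counters.clear()  # reset after the daily extraction
    return docs

record_impression("sidebar", "mobile")
record_impression("sidebar", "mobile")
record_impression("footer", "desktop")
summary = daily_summary(date(2017, 4, 17))
```

Each summary document carries the zone/platform dimensions mentioned above, which is what makes the later per-placement tuning possible.
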
58## Measure everything
59
60The thing about software is that we really don’t know how well it performs
61under load until such load arrives. When testing locally everything is fine,
62but in production things tend to fall apart.
63
64As a solution, we measure everything we can: function execution
65time (by wrapping functions with timers), server performance (CPU, memory,
66disk, etc.), and Nginx and [uWSGI](https://uwsgi-docs.readthedocs.io/) performance.
67We sacrifice a bit of performance for the sake of this information. And we store
68all this information for later analysis.
69
70**Example of function execution time**
71
72```json
73{
74 "get_final_filtered_ads": {
75 "counter": 1931250,
76 "avg": 0.0066143431,
77 "elapsed": 12773.9500310003
78 },
79 "store_keywords_statistics": {
80 "counter": 1931011,
81 "avg": 0.0004605267,
82 "elapsed": 889.2821669996
83 },
84 "match_by_context": {
85 "counter": 1931011,
86 "avg": 0.0055960716,
87 "elapsed": 10806.0758889999
88 },
89 "match_by_high_performance": {
90 "counter": 262,
91 "avg": 0.0152770229,
92 "elapsed": 4.00258
93 },
94 "store_impression_stats": {
95 "counter": 1931250,
96 "avg": 0.0006189991,
97 "elapsed": 1195.4419869999
98 }
99}
100```
101
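Numbers like these can be collected by wrapping functions with a small timing decorator. This is only a sketch of the "encapsulating functions with timers" idea, not the project's actual implementation; `match_by_context` here is just a toy body reusing one of the names from the JSON above.

```python
import time
from functools import wraps

# Function name -> counter / avg / elapsed, mirroring the JSON structure above.
stats = {}

def timed(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            entry = stats.setdefault(
                func.__name__, {"counter": 0, "avg": 0.0, "elapsed": 0.0}
            )
            entry["counter"] += 1
            entry["elapsed"] += time.perf_counter() - start
            entry["avg"] = entry["elapsed"] / entry["counter"]
    return wrapper

@timed
def match_by_context(keywords):
    return sorted(keywords)

match_by_context(["news", "sports"])
match_by_context(["cars"])
```

The `finally` block ensures the timer is recorded even when the wrapped function raises, so failed requests still show up in the measurements.
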
102We have also started profiling with [cProfile](https://pymotw.com/2/profile/)
103and then visualizing with [KCachegrind](http://kcachegrind.sourceforge.net/).
104This provides a much more detailed look into code execution.
105
106## Cache control is your friend
107
108Because we use a JavaScript library for rendering ads, we rely on this script
109extensively, and when needed we must be able to change the script’s behavior
110quickly.
111
112In our case we cannot simply replace the JavaScript URL in the HTML code. It
113usually takes a day or two for the people who maintain the sites to change the
114code or add a ?ver=xxx attribute, which makes rapid deployment and testing very
115difficult and time consuming. There is a limit to how much you can test locally.
116
117We are now in the process of integrating [Google Tag
118Manager](https://www.google.com/analytics/tag-manager/), but a couple of websites
119are built on the ASP.NET platform and have some problems with Tag Manager. With
120the solution below we are certain that we are serving the latest version of the
121script.
122
123And it only takes one mistake for users to have the script cached; if it is
124cached for 1 year, you probably know where the problem is.
125
126```nginx
127# nginx ➜ /etc/nginx/sites-available/default
128location /static/ {
129 alias /path-to-static-content/;
130 autoindex off;
131 charset utf-8;
132 gzip on;
133 gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
134 location ~* \.(ico|gif|jpeg|jpg|png|woff|ttf|otf|svg|woff2|eot)$ {
135 expires 1y;
136 add_header Pragma public;
137 add_header Cache-Control "public";
138 }
139 location ~* \.(css|js|txt)$ {
140 expires 3600s;
141 add_header Pragma public;
142 add_header Cache-Control "public, must-revalidate";
143 }
144}
145```
146
147Also be careful when redirecting to a URL in your Python code. We noticed that if
148we didn't precisely set up the Cache-Control and Expires headers in the response,
149we didn't get the request on the server and therefore couldn't measure clicks.
150So when redirecting, do as follows and there will be no problems.
151
152```python
153# python ➜ bottlepy web micro-framework
154response = bottle.HTTPResponse(status=302)
155response.set_header("Cache-Control", "no-store, no-cache, must-revalidate")
156response.set_header("Expires", "Thu, 01 Jan 1970 00:00:00 GMT")
157response.set_header("Location", url)
158return response
159```
160
161> Cache control in browsers is quite aggressive and you need to be precise to
162avoid future problems. We learned that lesson the hard way.
163
164## Learn NGINX
165
166When deciding on a web server, we went with Nginx as a reverse proxy for our
167applications. We adopted a micro-service-oriented architecture early in the
168project to ensure that when we scale, we can easily add additional servers to
169our cluster. And Nginx was crucial for load balancing and static content
170delivery.
171
172At first our config file was quite simple, but it grew over time. After
173repeatedly patching in new settings I sat down and learned more about the guts
174of Nginx. This proved very useful and we were able to squeeze much more out of
175our setup. So I advise you to take your time and read through the
176[documentation](https://nginx.org/en/docs/). It saved us a lot of headaches;
177googling for solutions only goes so far.
178
179## Use Redis/Memcached
180
181As explained above, we use caching for basically everything. It is the
182cornerstone of our services. At first we were very careful about how much we
183stored in [Redis](https://redis.io/), but we later found that its memory
184footprint stays low even when storing large amounts of data.
185
186So we gradually increased our usage, up to caching whole HTML outputs of the
187dashboard. This improved our performance by an order of magnitude, and Redis's
188native TTL support goes hand in hand with our needs.
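The caching pattern itself is simple: key by page identity, store the rendered HTML, and let the TTL expire stale entries. Below is a sketch using a tiny in-process stand-in so the semantics are visible without a running server; in a real deployment the `setex`/`get` calls map directly onto redis-py's commands of the same names. The dashboard renderer here is a placeholder, not our actual code:

```python
import time

class TTLCache(object):
    """Minimal in-process stand-in for Redis SETEX/GET semantics."""
    def __init__(self):
        self._store = {}

    def setex(self, key, ttl, value):
        # store value together with its absolute expiry time
        self._store[key] = (time.time() + ttl, value)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        expires_at, value = item
        if time.time() >= expires_at:
            del self._store[key]  # expired, drop it
            return None
        return value

cache = TTLCache()

def render_dashboard(user_id):
    key = "dashboard:%s" % user_id
    html = cache.get(key)
    if html is None:
        # the expensive render would happen here
        html = "<h1>dashboard for %s</h1>" % user_id
        cache.setex(key, 60, html)  # cache the whole HTML for 60 seconds
    return html
```

The expiry does the invalidation for you: no explicit cache-clearing code, just a TTL chosen to match how stale the dashboard is allowed to be.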
189
190The reason we chose [Redis](https://redis.io/) over
191[Memcached](https://memcached.org/) was that Redis scales well out of the box,
192but all of this can also be achieved with Memcached.
193
194## Conclusion
195
196There are many more details that could have been written, and every single topic
197here deserves its own post, but you probably got the idea of the problems
198we faced.
diff --git a/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md b/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md
deleted file mode 100644
index 2e36eaf..0000000
--- a/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md
+++ /dev/null
@@ -1,205 +0,0 @@
1---
2title: Profiling Python web applications with visual tools
3url: profiling-python-web-applications-with-visual-tools.html
4date: 2017-04-21T12:00:00+02:00
5draft: false
6---
7
8I have been profiling my software with KCachegrind for a long time, and I was
9missing this option when developing APIs and other web services. I always knew
10it was possible but never really took the time to dive into it.
11
12Before we begin there are some requirements. We will need to:
13
14- implement [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) into our web app,
15- convert output to [callgrind](http://valgrind.org/docs/manual/cl-manual.html) format with [pyprof2calltree](https://pypi.python.org/pypi/pyprof2calltree/),
16- visualize data with [KCachegrind](http://kcachegrind.sourceforge.net/html/Home.html) or [Profiling Viewer](http://www.profilingviewer.com/).
17
18
19If you are using macOS you should check out [Profiling
20Viewer](http://www.profilingviewer.com/) or
21[MacCallGrind](http://www.maccallgrind.com/).
22
23![KCachegrind](/assets/python-profiling/kcachegrind.png)
24
25We will divide this post into two main parts:
26
27- writing a simple web-service,
28- visualizing the profile of this web-service.
29
30## Simple web-service
31
32Let's use virtualenv so we won't pollute our base system. If you don't have
33virtualenv installed on your system you can install it with pip command.
34
35```bash
36# let's install virtualenv globally
37$ sudo pip install virtualenv
38
39# let's also install pyprof2calltree globally
40$ sudo pip install pyprof2calltree
41
42# now we create project
43$ mkdir demo-project
44$ cd demo-project/
45
46# now let's create folder where we will store profiles
47$ mkdir prof
48
49# now we create empty virtualenv in venv/ folder
50$ virtualenv --no-site-packages venv
51
52# we now need to activate virtualenv
53$ source venv/bin/activate
54
55# you can check if virtualenv was correctly initialized by
56# checking where your python interpreter is located
57# if the command below points to your created directory and not some
58# system dir like /usr/bin/python then everything is fine
59$ which python
60
61# we can check now if all is good ➜ if ok couple of
62# lines will be displayed
63$ pip freeze
64# appdirs==1.4.3
65# packaging==16.8
66# pyparsing==2.2.0
67# six==1.10.0
68
69# now we are ready to install bottlepy ➜ web micro-framework
70$ pip install bottle
71
72# you can deactivate virtualenv but you will then go
73# under system domain ➜ for now don't deactivate
74$ deactivate
75```
76
77We are now ready to write a simple web service. Create the file app.py and paste
78the code below into this newly created file.
79
80```python
81# -*- coding: utf-8 -*-
82
83import bottle
84import random
85import cProfile
86
87app = bottle.Bottle()
88
89# this function is a decorator and encapsulates function
90# and performs profiling and then saves it to subfolder
91# prof/function-name.prof
92# in our example only awesome_random_number function will
93# be profiled because it has do_cprofile defined
94def do_cprofile(func):
95 def profiled_func(*args, **kwargs):
96 profile = cProfile.Profile()
97 try:
98 profile.enable()
99 result = func(*args, **kwargs)
100 profile.disable()
101 return result
102 finally:
103 profile.dump_stats("prof/" + str(func.__name__) + ".prof")
104 return profiled_func
105
106
107# we use profiling over specific function with including
108# @do_cprofile above function declaration
109@app.route("/")
110@do_cprofile
111def awesome_random_number():
112 awesome_random_number = random.randint(0, 100)
113 return "awesome random number is " + str(awesome_random_number)
114
115@app.route("/test")
116def test():
117 return "dummy test"
118
119if __name__ == '__main__':
120 bottle.run(
121 app = app,
122 host = "0.0.0.0",
123 port = 4000
124 )
125
126# run with 'python app.py'
127# open browser 'http://0.0.0.0:4000'
128```
129
130When the browser hits the awesome\_random\_number() function, a profile is
131created in the prof/ subfolder.
132
133## Visualize profile
134
135Now let's create callgrind format from this cProfile output.
136
137```bash
138$ cd prof/
139$ pyprof2calltree -i awesome_random_number.prof
140# this creates 'awesome_random_number.prof.log' file in the same folder
141```
142
143This file can be opened with the visualizing tools listed above. In this case we
144will be using Profiling Viewer on macOS. You can open the image in a new tab. As
145you can see from this example, it shows the hierarchy of the execution order of
146your code.
147
148![Profiling Viewer](/assets/python-profiling/profiling-viewer.png)
149
150> Make sure you re-convert the cProfile output every time you want to refresh
151and look at your possible optimizations, because cProfile updates the .prof
152file every time the browser hits the function.
153
154This is just a simple example, but when you are developing real-life
155applications this can be very illuminating, especially for seeing which parts of
156your code are bottlenecks and need to be optimized.
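If you only need a quick text summary of a `.prof` file without any visualizer, the standard-library `pstats` module can read the same files. The sketch below produces a profile the same way the decorator above does (written to the current directory rather than `prof/`, to keep it self-contained) and then prints the most expensive calls:

```python
import cProfile
import pstats
import random

def awesome_random_number():
    return random.randint(0, 100)

# produce a .prof file, just like do_cprofile does per request
profile = cProfile.Profile()
profile.enable()
awesome_random_number()
profile.disable()
profile.dump_stats("awesome_random_number.prof")

# load it back and print the ten most expensive calls by cumulative time
stats = pstats.Stats("awesome_random_number.prof")
stats.sort_stats("cumulative").print_stats(10)
```

This is handy on a headless server where installing KCachegrind or a browser-based tool is not an option.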
157
158## Update 2017-04-22
159
160Reddit user [mvt](https://www.reddit.com/user/mvt) also recommended this awesome
161web based profile visualizer [SnakeViz](https://jiffyclub.github.io/snakeviz/)
162that directly takes output from
163[cProfile](https://docs.python.org/2/library/profile.html#module-cProfile)
164module.
165
166<div class="reddit-embed" data-embed-media="www.redditmedia.com" data-embed-parent="false" data-embed-live="false" data-embed-uuid="583880c1-002e-41ed-a373-020a0ef2cff9" data-embed-created="2017-04-22T19:46:54.810Z"><a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/dgljhsb/">Comment</a> from discussion <a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/">Profiling Python web applications with visual tools</a>.</div><script async src="https://www.redditstatic.com/comment-embed.js"></script>
167
168```bash
169# let's install it globally as well
170$ sudo pip install snakeviz
171
172# now let's visualize
173$ cd prof/
174$ snakeviz awesome_random_number.prof
175# this automatically opens browser window and
176# shows visualized profile
177```
178
179![SnakeViz](/assets/python-profiling/snakeviz.png)
180
181Reddit user [ccharles](https://www.reddit.com/user/ccharles) suggested a better
182way of installing pip software: targeting the user level instead of using sudo.
183
184<div class="reddit-embed" data-embed-media="www.redditmedia.com" data-embed-parent="false" data-embed-live="false" data-embed-uuid="f4f0459e-684d-441e-bebe-eb49b2f0a31d" data-embed-created="2017-04-22T19:46:10.874Z"><a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/dglpzkx/">Comment</a> from discussion <a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/">Profiling Python web applications with visual tools</a>.</div><script async src="https://www.redditstatic.com/comment-embed.js"></script>
185
186```bash
187# now we need to add this path to our $PATH variable
188# we do this by adding this line at the end of your
189# ~/.bashrc file
190PATH=$PATH:$HOME/.local/bin/
191
192# in order to use this new configuration you can close
193# and reopen terminal or reload .bashrc file
194$ source ~/.bashrc
195
196# now let's test if new directory is present in $PATH
197$ echo $PATH
198
199# now we can install on user level by adding --user
200# without use of sudo
201$ pip install snakeviz --user
202```
203
204Or as suggested by [mvt](https://www.reddit.com/user/mvt) you can
205use [pipsi](https://github.com/mitsuhiko/pipsi).
diff --git a/content/posts/2017-08-11-simple-iot-application.md b/content/posts/2017-08-11-simple-iot-application.md
deleted file mode 100644
index e7e086b..0000000
--- a/content/posts/2017-08-11-simple-iot-application.md
+++ /dev/null
@@ -1,606 +0,0 @@
1---
2title: Simple IOT application supported by real-time monitoring and data history
3url: simple-iot-application.html
4date: 2017-08-11T12:00:00+02:00
5draft: false
6---
7
8## Initial thoughts
9
10I have been developing this kind of application for the better part of the last
11five years, and people keep asking me how to approach developing one, so I will
12try to explain it here.
13
14IOT applications are really no different from any other kind of application. We
15have data that needs to be collected and visualized in some form of tables or
16charts. The main difference is that most of the time this data is collected by
17some kind of device foreign to a developer who mainly operates in the web
18domain. But fear not, it's not that different from writing some JavaScript.
19
20There are many devices able to transmit data over wireless or wired networks out
21of the box, but for the sake of example we will be using the well-known Arduino
22with a wireless module already on the board → [Arduino
23MKR1000](https://store.arduino.cc/arduino-mkr1000).
24
25In order to make this little project as accessible to others as possible I will
26try to keep it inexpensive. By this I mean that I will avoid using hosted
27virtual servers and will use my own laptop as the server. But you must buy an
28Arduino MKR1000 to follow the steps below. If you wanted to deploy this
29software, I would suggest using [DigitalOcean](https://www.digitalocean.com) →
30their smallest VPS plan is very affordable, making it one of the cheapest
31options out there. Please note that this software will not run on stock web
32hosting that only supports LAMP (Linux,
33Apache, MySQL, and PHP).
34
35But before we begin, please note that this is strictly experimental code, not
36well optimized, and there are much better ways of handling some aspects of the
37application, but those require much deeper knowledge of technology that is
38not needed for an example like this.
39
40**Development steps**
41
421. Simple Python API that will receive and store incoming data.
432. Prototype C++ code that will read "sensor data" and transmit it to API.
443. Data visualization with charts → extends Python web application.
45
46Steps 1 and 3 will share the same web application. One route will be dedicated
47to the API and another to serving HTML with the chart.
48
49The schema below shows what we will try to achieve and how the different parts
50relate to each other.
51
52![Overview](/assets/iot-application/simple-iot-application-overview.svg)
53
54## Simple Python API
55
56I have always been a fan of simplicity, so we will use [Bottle: Python Web
57Framework](https://bottlepy.org/docs/dev/). It is a single-file web framework
58that seriously simplifies working with routes and templating, and has a built-in
59web server that satisfies our needs in this case.
60
61First we need to install the bottle package. This can be done by downloading
62```bottle.py``` and placing it in the root of your application or by using pip:
63```pip install bottle --user```.
64
65If you are using Linux or macOS then Python is already installed. If you want to
66test this on Windows please install [Python for
67Windows](https://www.python.org/downloads/windows/). There may be some problems
68with PATH when you try to launch ```python webapp.py``` so please take care
69of this before you continue.
70
71### Basic web application
72
73The most basic bottle application is quite simple. Paste the code below into a
74```webapp.py``` file and save it.
75
76```python
77# -*- coding: utf-8 -*-
78
79import bottle
80
81# initializing bottle app
82app = bottle.Bottle()
83
84# triggered when / is accessed from browser
85# only accepts GET → no POST allowed
86@app.route("/", method=["GET"])
87def route_default():
88 return "howdy from python"
89
90# starting server on http://0.0.0.0:5000
91if __name__ == "__main__":
92 bottle.run(
93 app = app,
94 host = "0.0.0.0",
95 port = 5000,
96 debug = True,
97 reloader = True,
98 catchall = True,
99 )
100```
101
102To run this simple application you should open command prompt or terminal on
103your machine and go to the folder containing your file and type ```python
104webapp.py```. If everything goes ok then open your web browser and point it to
105```http://0.0.0.0:5000```.
106
107If you would like to change the port of your application (e.g. to port 80)
108without running your app as root, this will present a problem: TCP/IP port
109numbers below 1024 are privileged → this is a security feature. So for
110simplicity and security, use a port number above 1024, as I have with port 5000.
111
112If this fails at any time please fix it before you continue, because nothing
113below will work otherwise.
114
115We use 0.0.0.0 as the default host so that this app is available over your local
116network. If you find your local IP with ```ifconfig``` and try accessing the
117site from your phone (on the same network/router as your machine), it should
118work as well (an example of such a URL is ```http://192.168.1.15:5000```). This
119is a must, because the Arduino will be accessing this application to send its data.
120
121### Web application security
122
123There is a lot to be said about security; it is the topic of many books. Of
124course it cannot all be covered here, but to establish some basic security → you
125should always use SSL with your application. Some fantastic free certificates
126are available from [Let's Encrypt - Free SSL/TLS
127Certificates](https://letsencrypt.org). With an SSL certificate installed you
128should then make use of HTTP headers and send your "API key" via a header. If
129the key is sent via a header, it is encrypted by SSL as it travels
130over the network. Never send your API keys as a GET parameter like
131```http://example.com/?api_key=somekeyvalue```: the problem is that such a
132key is visible in logs and to network sniffers.
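A small extra hardening step when checking the key on the server is to compare it in constant time, so that response timing does not leak how many leading characters of a guessed key matched. A sketch using the standard-library `hmac` module (the key value is just the example key used later in this post; generate your own):

```python
import hmac

API_KEY = "JtF2aUE5SGHfVJBCG5SH"  # example key, generate your own

def key_is_valid(presented_key):
    """Constant-time API key check; never falls back to plain ==."""
    if presented_key is None:
        return False
    return hmac.compare_digest(presented_key, API_KEY)
```

In the bottle route you would call `key_is_valid(bottle.request.get_header("Api-Key"))` instead of comparing with `==`.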
133
134There is a fantastic article describing some aspects of web application
135security: [11 Web
136Application Security Best
137Practices](https://www.keycdn.com/blog/web-application-security-best-practices/). Please check it out.
138
139### Simple API for writing data-points
140
141We will now take the boilerplate code from the example above and extend it to be
142able to write data received by the API to local storage. For this example I will
143use SQLite3 because it plays well with Python and can store quite large amounts
144of data. I have been using it to collect gigabytes of data in a single database
145without any corruption or problems → your experience may vary.
146
147To avoid learning SQLite I will use [Dataset: databases for lazy
148people](https://dataset.readthedocs.io/en/latest/index.html). This package
149abstracts the SQL away and simplifies writing and reading database data. You
150should install it with pip: ```pip install dataset --user```.
151
152Because the API will use the POST method, I will test whether the code works
153correctly using [Restlet Client for Google
154Chrome](https://chrome.google.com/webstore/detail/restlet-client-rest-api-t/aejoelaoggembcahagimdiliamlcdmfm).
155This software also allows you to set headers → needed for basic security with the API key.
156
157To quickly generate passwords or API keys I usually use this nifty website
158[RandomKeygen](https://randomkeygen.com/).
159
160Copy and paste code below over your previous code in file ```webapp.py```.
161
162```python
163# -*- coding: utf-8 -*-
164
165import time
166import bottle
167import random
168import dataset
169
170# initializing bottle app
171app = bottle.Bottle()
172
173# connects to sqlite database
174# check_same_thread=False allows using it in multi-threaded mode
175app.config["dsn"] = dataset.connect("sqlite:///data.db?check_same_thread=False")
176
177# api key that will be used in Arduino code
178app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH"
179
180# triggered when /api is accessed from browser
181# only accepts POST → no GET allowed
182@app.route("/api", method=["POST"])
183def route_default():
184 status = 400
185 ts = int(time.time()) # current timestamp
186 value = bottle.request.body.read() # data from device
188 api_key = bottle.request.get_header("Api-Key") # api key from header
188
189 # outputs to console received data for debug reason
190 print ">>> {} :: {}".format(value, api_key)
191
192 # if api_key is correct and value is present
193 # then writes attribute to point table
194 if api_key == app.config["api_key"] and value:
195 app.config["dsn"]["point"].insert(dict(ts=ts, value=value))
196 status = 200
197
198 # we only need to return status
199 return bottle.HTTPResponse(status=status, body="")
200
201# starting server on http://0.0.0.0:5000
202if __name__ == "__main__":
203 bottle.run(
204 app = app,
205 host = "0.0.0.0",
206 port = 5000,
207 debug = True,
208 reloader = True,
209 catchall = True,
210 )
211```
212
213To run this, simply go to the folder containing the Python file and run
214```python webapp.py``` from a terminal. If everything goes OK you should have a
215simple API available via the POST method on the /api route.
216
217After testing the service with Restlet Client you should be able to view your
218data in a database file ```data.db```.
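If you would rather test from Python than from a browser extension, the standard library can build and send the same POST request. A sketch (host, port and API key match the example code above; the web application must be running before you actually send anything):

```python
try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen         # Python 2

def build_point_request(value, host="127.0.0.1", port=5000,
                        api_key="JtF2aUE5SGHfVJBCG5SH"):
    """Build the same POST that the Restlet Client (or the Arduino) sends."""
    url = "http://{}:{}/api".format(host, port)
    # a request with a body defaults to the POST method
    req = Request(url, data=str(value).encode("utf-8"))
    req.add_header("Api-Key", api_key)
    return req

# with webapp.py running, actually send it:
# response = urlopen(build_point_request(42))
# print(response.getcode())  # expect 200 with a correct key
```

This is a quick way to script a batch of test data-points into the database without touching the browser at all.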
219
220![REST settings example](/assets/iot-application/iot-rest-example.png)
221
222You can also check the contents of new database file by using desktop client
223for SQLite → [DB Browser for SQLite](http://sqlitebrowser.org/).
224
225![SQLite database example](/assets/iot-application/iot-sqlite-db.png)
226
227The table structure is as simple as it can be: we have ts (timestamp) and value
228(the value from the Arduino). As you can see, the timestamp is generated on the
229API side. If you happened to have an accurate clock on the Arduino, it would be
230better to generate and send the timestamp with the value. This would be
231particularly useful if we were collecting sensor data at a higher frequency and
232then sending this data in bulk to the API.
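If you did batch readings on the device with their own timestamps, the payload could become a small JSON array instead of a single raw value per request. This format is hypothetical (the API above expects one raw value per POST), just to illustrate the idea:

```python
import json
import time

def build_bulk_payload(readings):
    """readings: list of (timestamp, value) pairs collected on the device."""
    return json.dumps([{"ts": ts, "value": value} for ts, value in readings])

now = int(time.time())
payload = build_bulk_payload([(now - 10, 933), (now - 5, 743), (now, 801)])
# the API side would then json.loads(...) the body and insert the rows in one go
```

Batching like this cuts the number of HTTP requests and lets the device sleep between transmissions, which matters for battery-powered sensors.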
233
234If you deploy this app with uWSGI in multi-threaded mode, use a DSN (Data Source
235Name) URL with ```?check_same_thread=False```.
236
237OK, now that we have a working API with some basic security, so that unwanted
238people cannot post data to your database, we can proceed further and
239try to program the Arduino to send data to the API.
240
241## Sending data to API with Arduino MKR1000
242
243First of all, you need an MKR1000 module and a microUSB cable to proceed. If you
244have ever done any work with Arduino you know that you also need the
245[Arduino IDE](https://www.arduino.cc/en/Main/Software). From the provided link you
246can download and install the IDE. Once that task is completed and you
247have successfully run the blink example, proceed to the next step.
248
249In order to use the wireless capabilities of the MKR1000 you first need to
250install the [WiFi101 library](https://www.arduino.cc/en/Reference/WiFi101) in
251the Arduino IDE. Check before installing; you may already have it.
252
253The code below is a working example that sends data to the API. Before you test
254it, make sure the Python web application is running. Then change the settings
255for wifi, the API endpoint and api_key. If for some reason the code below
256doesn't work for you, please leave a comment and I'll try to help.
257
258Once you have opened the IDE and copied this code, try to compile and upload it.
259Then open the "Serial monitor" to see if the Arduino prints any output.
260
261```c
262#include <WiFi101.h>
263
264// wifi settings
265char ssid[] = "ssid-name";
266char pass[] = "ssid-password";
267
268// api server endpoint
269char server[] = "192.168.6.22";
270int port = 5000;
271
272// api key that must be the same as the one in Python code
273String api_key = "JtF2aUE5SGHfVJBCG5SH";
274
275// frequency data is sent in ms - every 5 seconds
276int timeout = 1000 * 5;
277
278int status = WL_IDLE_STATUS;
279
280void setup() {
281
282 // initialize serial and wait for port to open:
283 Serial.begin(9600);
284 delay(1000);
285
286 // check for the presence of the shield
287 if (WiFi.status() == WL_NO_SHIELD) {
288 Serial.println("WiFi shield not present");
289 while (true);
290 }
291
292 // attempt to connect to wifi network
293 while (status != WL_CONNECTED) {
294 Serial.print("Attempting to connect to SSID: ");
295 Serial.println(ssid);
296 status = WiFi.begin(ssid, pass);
297 // wait 10 seconds for connection
298 delay(10000);
299 }
300
301 // output wifi status to serial monitor
302 Serial.print("SSID: ");
303 Serial.println(WiFi.SSID());
304
305 IPAddress ip = WiFi.localIP();
306 Serial.print("IP Address: ");
307 Serial.println(ip);
308
309 long rssi = WiFi.RSSI();
310 Serial.print("signal strength (RSSI):");
311 Serial.print(rssi);
312 Serial.println(" dBm");
313}
314
315void loop() {
316 WiFiClient client;
317
318 if (client.connect(server, port)) {
319
320 // I use random number generator for this example
321 // but you can use analog or digital inputs from arduino
322 String content = String(random(1000));
323
324 client.println("POST /api HTTP/1.1");
325 client.println("Connection: close");
326 client.println("Api-Key: " + api_key);
327 client.println("Content-Length: " + String(content.length()));
328 client.println();
329 client.println(content);
330
331 delay(100);
332 client.stop();
333 Serial.println("Data sent successfully ...");
334
335 } else {
336 Serial.println("Problem sending data ...");
337 }
338
339 // waits for x seconds and continue looping
340 delay(timeout);
341}
342```
343
344As you can see from the example, the Arduino generates a random integer in the
345range [ 0 .. 999 ] (```random(1000)``` excludes the upper bound). You can easily
346replace this with a temperature sensor or any other kind of sensor.
347
348Now that we have the API under the hood and the Arduino is sending demo data, we
349can focus on data visualization.
350
351## Data visualization
352
353Before we continue we should examine our project folder structure. Currently we
354only have two files in our project:
355
356_simple-iot-app/_
357
358* _webapp.py_
359* _data.db_
360
361We will now add an HTML template that contains CSS and JavaScript inline for
362simplicity. For the bottle framework to scan the root application folder for
363templates we will add ```bottle.TEMPLATE_PATH.insert(0,
364"./")``` in ```webapp.py```. By default bottle uses the ```views/```
365subfolder to store templates. This is not ideal; if you use bottle to develop
366web applications you should use the native behavior and store templates in its
367predefined folder, but for the sake of the example we will override it. Be
368careful to fully replace your code with the new code provided below; avoid
369partially replacing code in the file :) New code for
370reading data-points is also included in the Python example below.
371
372First we add a new route to our web application. It is triggered when the
373browser hits the root of the application, ```http://0.0.0.0:5000/```. This route
374does nothing more than render the ```frontend.html``` template, via ```return
375bottle.template("frontend.html")```. Check the code below to examine exactly how
376this is done.
377
378Now we expand the ```/api``` route to use different methods for writing and
379reading data-points: POST for writing a data-point and GET for reading
380points. The GET method returns a JSON array with the latest
381readings and historical data.
382
383There is a fantastic JavaScript library for plotting time-series charts called
384[MetricsGraphics.js](https://www.metricsgraphicsjs.org), based on the
385[D3.js](https://d3js.org/) data-visualization library.
386
387This is the data schema required by MetricsGraphics.js → we need to
388transform the data from the database into this format:
389
390```json
391[
392 {
393 "date": "2017-08-11 01:07:20",
394 "value": 933
395 },
396 {
397 "date": "2017-08-11 01:07:30",
398 "value": 743
399 }
400]
401```
402
403The web application is now complete; we only need the ```frontend.html``` that
404we will develop next. If you started the web app now and opened the root, it
405would return an error because we don't have frontend.html yet.
406
407```python
408# -*- coding: utf-8 -*-
409
410import time
411import bottle
412import json
413import datetime
414import random
415import dataset
416
417# initializing bottle app
418app = bottle.Bottle()
419
420# adds root directory as template folder
421bottle.TEMPLATE_PATH.insert(0, "./")
422
423# connects to sqlite database
424# check_same_thread=False allows using it in multi-threaded mode
425app.config["db"] = dataset.connect("sqlite:///data.db?check_same_thread=False")
426
427# api key that will be used in Arduino code
428app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH"
429
430# triggered when / is accessed from browser
431# only accepts GET → no POST allowed
432@app.route("/", method=["GET"])
433def route_default():
434 return bottle.template("frontend.html")
435
436# triggered when /api is accessed from browser
437# accepts POST and GET
438@app.route("/api", method=["GET", "POST"])
439def route_api():
440
441 # if method is POST then we write datapoint
442 if bottle.request.method == "POST":
443 status = 400
444 ts = int(time.time()) # current timestamp
445 value = bottle.request.body.read() # data from device
446 api_key = bottle.request.get_header("Api-Key") # api key from header
447
448 # outputs received data to console for debugging
449 print ">>> {} :: {}".format(value, api_key)
450
451 # if api_key is correct and value is present
452 # then writes attribute to point table
453 if api_key == app.config["api_key"] and value:
454 app.config["db"]["point"].insert(dict(ts=ts, value=value))
455 status = 200
456
457 # we only need to return status
458 return bottle.HTTPResponse(status=status, body="")
459
460 # if method is GET then we read datapoint
461 else:
462 response = []
463 datapoints = app.config["db"]["point"].all()
464
465 for point in datapoints:
466 response.append({
467 "date": datetime.datetime.fromtimestamp(int(point["ts"])).strftime("%Y-%m-%d %H:%M:%S"),
468 "value": point["value"]
469 })
470
471 bottle.response.content_type = "application/json"
472 return json.dumps(response)
473
474# starting server on http://0.0.0.0:5000
475if __name__ == "__main__":
476 bottle.run(
477 app = app,
478 host = "0.0.0.0",
479 port = 5000,
480 debug = True,
481 reloader = True,
482 catchall = True,
483 )
484```
485
486And now we can finally implement ```frontend.html```. Create a file with this
487name and copy in the code below. When you are done you can start the web
488application. The steps for this part are listed below the code.
489
490```html
491<!DOCTYPE html>
492<html>
493
494 <head>
495 <meta charset="utf-8">
496 <title>Simple IOT application</title>
497 </head>
498
499 <body>
500
501 <h1>Simple IOT application</h1>
502
503 <div class="chart-placeholder">
504 <div id="chart"></div>
505 </div>
506
507 <!-- application main script -->
508 <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
509 <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.0/d3.min.js"></script>
510 <script src="https://cdnjs.cloudflare.com/ajax/libs/metrics-graphics/2.11.0/metricsgraphics.min.js"></script>
511 <script>
512 function fetch_and_render() {
513 d3.json("/api", function(data) {
514 data = MG.convert.date(data, "date", "%Y-%m-%d %H:%M:%S");
515 MG.data_graphic({
516 data: data,
517 chart_type: "line",
518 full_width: true,
519 height: 270,
520 target: document.getElementById("chart"),
521 x_accessor: "date",
522 y_accessor: "value"
523 });
524 });
525 }
526 window.onload = function() {
527 // initial call for rendering
528 fetch_and_render();
529
530 // updates chart every 5 seconds
531 setInterval(function() {
532 fetch_and_render();
533 }, 5000);
534 }
535 </script>
536
537 <!-- application styles -->
538 <style>
539 body {
540 font: 13px sans-serif;
541 padding: 20px 50px;
542 }
543 .chart-placeholder {
544 border: 2px solid #ccc;
545 width: 100%;
546 user-select: none;
547 }
548 /* chart styles */
549 .mg-line1-color {
550 stroke: red;
551 stroke-width: 2;
552 }
553 .mg-main-area, .mg-main-line {
554 fill: #fff;
555 }
556 .mg-x-axis line, .mg-y-axis line {
557 stroke: #b3b2b2;
558 stroke-width: 1px;
559 }
560 </style>
561
562 </body>
563
564</html>
565```
566
567Now the folder structure should look like:
568
569_simple-iot-app/_
570
571* _webapp.py_
572* _data.db_
573* _frontend.html_
574
575OK, let's now start the application and start feeding it data.
576
5771. ```python webapp.py```
5782. connect Arduino MKR1000 to power source
5793. open browser and go to ```http://0.0.0.0:5000```
580
581If everything goes well you should see new data-points rendered on the chart
582every 5 seconds.
583
584If you navigate to ```http://0.0.0.0:5000``` you should see rendered chart as
585shown on picture below.
586
587![Application output](/assets/iot-application/iot-app-output.png)
588
589Complete application with all the code is available for
590[download](/assets/iot-application/simple-iot-application.zip).
591
592## Conclusion
593
594I hope this clarifies some aspects of IOT application development. Of course
595this is a minimal example, far from what can be done in real life with a
596deeper dive into other technologies.
597
598If you would like to continue exploring IOT world here are some interesting
599resources for you to examine:
600
601* [Reading Sensors with an Arduino](https://www.allaboutcircuits.com/projects/reading-sensors-with-an-arduino/)
602* [MQTT 101 – How to Get Started with the lightweight IoT Protocol](http://www.hivemq.com/blog/how-to-get-started-with-mqtt)
603* [Stream Updates with Server-Sent Events](https://www.html5rocks.com/en/tutorials/eventsource/basics/)
604* [Internet of Things (IoT) Tutorials](http://www.tutorialspoint.com/internet_of_things/)
605
606Any comments or additional ideas are welcome in the comments below.
diff --git a/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md b/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
deleted file mode 100644
index 3a62594..0000000
--- a/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
+++ /dev/null
@@ -1,330 +0,0 @@
1---
2title: Using DigitalOcean Spaces Object Storage with FUSE
3url: using-digitalocean-spaces-object-storage-with-fuse.html
4date: 2018-01-16T12:00:00+02:00
5draft: false
6---
7
A couple of months ago [DigitalOcean](https://www.digitalocean.com) introduced a
new product called
[Spaces](https://blog.digitalocean.com/introducing-spaces-object-storage/),
which is object storage very similar to Amazon's S3. This really piqued my
interest, because it was something I had been missing, and even the thought of
going outside DigitalOcean for such functionality held no appeal for me. In
fashion with their previous pricing, this too is very cheap, and the pricing
page is a no-brainer compared to AWS or GCE. [Prices are clearly and precisely
defined and outlined](https://www.digitalocean.com/pricing/). You must love them
for that :)
18
19## Initial requirements
20
21* Is it possible to use them as a mounted drive with FUSE? (tl;dr YES)
22* Will the performance degrade over time and over different sizes of objects?
23 (tl;dr NO&YES)
24* Can storage be mounted on multiple machines at the same time and be writable?
25 (tl;dr YES)
26
> Let me be clear: the scripts I use here were made just for benchmarking and
> are not intended for real-life use. That said, I am looking into using these
> approaches with a caching service in front, dumping everything to object
> storage afterwards; that could be an interesting post of its own. But if you
> need real-time data without eventual consistency, please take these scripts
> as they are: not usable in such situations.
34
35## Is it possible to use them as a mounted drive with FUSE?
36
Well, actually they can be used in such a manner. Because they are similar to
[AWS S3](https://aws.amazon.com/s3/), many tools are available and you can find
many articles and [Stackoverflow items](https://stackoverflow.com/search?q=s3+fuse).
40
To make this work you will need a DigitalOcean account; without one you will not
be able to test this code. If you have an account, go and [create a new
Droplet](https://cloud.digitalocean.com/droplets/new?size=s-1vcpu-1gb&region=ams3&distro=debian&distroImage=debian-9-x64&options=private_networking,install_agent).
This link preselects Debian 9 with the smallest VM option.
47
* Please be sure to add your SSH key, because we will log in to this machine
  remotely.
* If you change your region, please remember which one you chose, because we
  will need this information when we mount the Space on our machine.
52
Instructions on how to set up and use SSH keys are available in the article
[How To Use SSH Keys with DigitalOcean
Droplets](https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets).
56
57![DigitalOcean Droplets](/assets/do-fuse/fuse-droplets.png)
58
After we created the Droplet it's time to create a new Space. This is done by
clicking the [Create](https://cloud.digitalocean.com/spaces/new) button (top
right corner) and selecting Spaces. Choose a pronounceable ```Unique name```
because we will use it in the examples below. You can choose either Private or
Public; it doesn't matter in our case, and you can always change it in the future.
64
When you have created the new Space, we should [generate an Access
key](https://cloud.digitalocean.com/settings/api/tokens). This link will guide
you to the page where you can generate the key. After you create one, please
save the provided Key and Secret, because the Secret will not be shown again.
69
70![DigitalOcean Spaces](/assets/do-fuse/fuse-spaces.png)
71
Now that we have a new Space and an Access key, we should SSH into our machine.
73
74```bash
75# replace IP with the ip of your newly created droplet
76ssh root@IP
77
78# this will install utilities for mounting storage objects as FUSE
79apt install s3fs
80
81# we now need to provide credentials (access key we created earlier)
82# replace KEY and SECRET with your own credentials but leave the colon between them
83# we also need to set proper permissions
84echo "KEY:SECRET" > .passwd-s3fs
85chmod 600 .passwd-s3fs
86
87# now we mount space to our machine
88# replace UNIQUE-NAME with the name you choose earlier
89# if you choose different region for your space be careful about -ourl option (ams3)
90s3fs UNIQUE-NAME /mnt/ -ourl=https://ams3.digitaloceanspaces.com -ouse_cache=/tmp
91
92# now we try to create a file
93# once you mount it may take a couple of seconds to retrieve data
94echo "Hello cruel world" > /mnt/hello.txt
95```
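The mount above lasts only until reboot. s3fs also supports mounting from `/etc/fstab`, so the Space comes back automatically at boot. A sketch of such an entry, assuming the same UNIQUE-NAME, region and credentials file as above (and that you are logged in as root, so the credentials file lives in /root):

```
# /etc/fstab entry: mount the Space at boot via s3fs
# replace UNIQUE-NAME with your Space name; adjust the region in url= if needed
UNIQUE-NAME /mnt fuse.s3fs _netdev,passwd_file=/root/.passwd-s3fs,url=https://ams3.digitaloceanspaces.com,use_cache=/tmp 0 0
```

After adding the entry, `mount -a` should mount it without a reboot.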
96
After all this you can return to your browser, go to [DigitalOcean
Spaces](https://cloud.digitalocean.com/spaces) and click on your created
Space. If the file hello.txt is present, you have successfully mounted the Space
on your machine and written data to it.
101
I chose the same region for my Droplet and my Space, but you don't have to; they
can be in different regions. What that actually does to performance, I don't know.
104
105Additional information on FUSE:
106
107* [Github project page for s3fs](https://github.com/s3fs-fuse/s3fs-fuse)
108* [FUSE - Filesystem in Userspace](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)
109
110## Will the performance degrade over time and over different sizes of objects?
111
For this task I didn't want to just read and write text files or upload images.
I actually wanted to figure out whether using something like SQLite is viable in
this case.
115
116### Measurement experiment 1: File copy
117
118```bash
119# first we create some dummy files at different sizes
120dd if=/dev/zero of=10KB.dat bs=1024 count=10 #10KB
121dd if=/dev/zero of=100KB.dat bs=1024 count=100 #100KB
122dd if=/dev/zero of=1MB.dat bs=1024 count=1024 #1MB
123dd if=/dev/zero of=10MB.dat bs=1024 count=10240 #10MB
124
125# now we set time command to only return real
126TIMEFORMAT=%R
127
128# now lets test it
129(time cp 10KB.dat /mnt/) |& tee -a 10KB.results.txt
130
# and now we automate
# this will perform the same operation 100 times
# and output results into separate files based on object size
134n=0; while (( n++ < 100 )); do (time cp 10KB.dat /mnt/10KB.$n.dat) |& tee -a 10KB.results.txt; done
135n=0; while (( n++ < 100 )); do (time cp 100KB.dat /mnt/100KB.$n.dat) |& tee -a 100KB.results.txt; done
136n=0; while (( n++ < 100 )); do (time cp 1MB.dat /mnt/1MB.$n.dat) |& tee -a 1MB.results.txt; done
137n=0; while (( n++ < 100 )); do (time cp 10MB.dat /mnt/10MB.$n.dat) |& tee -a 10MB.results.txt; done
138```
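To turn the per-run timings in those `*.results.txt` files into something comparable, a small helper can compute summary statistics. This is a sketch (the `summarize` function is mine, not part of the original scripts); it assumes one `real` time per line, with either `.` or `,` as the decimal separator, matching the `TIMEFORMAT=%R` output above:

```python
import statistics
import sys

def summarize(path):
    """Read one timing value per line and return (mean, stdev, min, max) in seconds."""
    with open(path) as fp:
        times = [float(line.replace(",", ".")) for line in fp if line.strip()]
    return (statistics.mean(times), statistics.stdev(times),
            min(times), max(times))

if __name__ == "__main__":
    # e.g. python summarize.py 10KB.results.txt 100KB.results.txt
    for path in sys.argv[1:]:
        mean, stdev, lo, hi = summarize(path)
        print(f"{path}: mean={mean:.3f}s stdev={stdev:.3f}s "
              f"min={lo:.3f}s max={hi:.3f}s")
```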
139
Files of size 100MB were not transferred successfully and ended with an error
(`cp: failed to close '/mnt/100MB.1.dat': Operation not permitted`).
142
As I suspected, object size is not really that important. Sadly I don't have the
time to test performance over longer periods, but if any of you do, please send
me your data; I would be interested in seeing the results.
146
147**Here are plotted results**
148
149You can download [raw result here](/assets/do-fuse/copy-benchmarks.tsv).
150Measurements are in seconds.
151
152<script src="//cdn.plot.ly/plotly-latest.min.js"></script>
153<div id="copy-benchmarks"></div>
154<script>
155(function(){
156 var request = new XMLHttpRequest();
157 request.open("GET", "/assets/do-fuse/copy-benchmarks.tsv", true);
158 request.onload = function() {
159 if (request.status >= 200 && request.status < 400) {
160 var payload = request.responseText.trim();
161 var tsv = payload.split("\n");
162 for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
163 var traces = [];
164 var headers = tsv[0];
165 tsv.shift();
166 Array.prototype.forEach.call(headers, function(el, idx) {
167 var x = [];
168 var y = [];
169 for (var j=0; j<tsv.length; j++) {
170 x.push(j);
171 y.push(parseFloat(tsv[j][idx].replace(",", ".")));
172 }
173 traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
174 });
175 var copy = Plotly.newPlot("copy-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 40, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } }, xaxis: { title: "fn(i)", titlefont: { size: 12 } } });
176 } else { }
177 };
178 request.onerror = function() { };
179 request.send(null);
180})();
181</script>
182
As far as these tests show, performance is quite stable and predictable, which
is fantastic. But this is a small test spanning only a couple of hours, so you
should not trust it completely.
186
### Measurement experiment 2: SQLite performance
188
As I suspected, I was unable to use a database file directly from the mounted
drive, so that is a no-go. Instead, I executed the code below on a local disk
just to get some benchmarks: 1000 iterations of DROPTABLE, CREATETABLE,
INSERTMANY (1000 records each), FETCHALL and COMMIT. As you can see, SQLite's
performance is quite amazing. You could then potentially just copy the file to
the mounted drive and be done with it.
195
196```python
197import time
198import sqlite3
199import sys
200
if len(sys.argv) < 4:
    print("usage: python sqlite-benchmark.py DB_PATH NUM_RECORDS REPEAT")
    sys.exit(1)
204
205def data_iter(x):
206 for i in range(x):
207 yield "m" + str(i), "f" + str(i*i)
208
209header_line = "%s\t%s\t%s\t%s\t%s\n" % ("DROPTABLE", "CREATETABLE", "INSERTMANY", "FETCHALL", "COMMIT")
210with open("sqlite-benchmarks.tsv", "w") as fp:
211 fp.write(header_line)
212
213start_time = time.time()
214conn = sqlite3.connect(sys.argv[1])
215c = conn.cursor()
216end_time = time.time()
217result_time = CONNECT = end_time - start_time
218print("CONNECT: %g seconds" % (result_time))
219
start_time = time.time()
c.execute("PRAGMA journal_mode=WAL")
c.execute("PRAGMA temp_store=MEMORY")
c.execute("PRAGMA synchronous=OFF")
end_time = time.time()
result_time = PRAGMA = end_time - start_time
print("PRAGMA: %g seconds" % (result_time))
226
227for i in range(int(sys.argv[3])):
228 print("#%i" % (i))
229
230 start_time = time.time()
231 c.execute("drop table if exists test")
232 end_time = time.time()
233 result_time = DROPTABLE = end_time - start_time
234 print("DROPTABLE: %g seconds" % (result_time))
235
236 start_time = time.time()
237 c.execute("create table if not exists test(a,b)")
238 end_time = time.time()
239 result_time = CREATETABLE = end_time - start_time
240 print("CREATETABLE: %g seconds" % (result_time))
241
242 start_time = time.time()
243 c.executemany("INSERT INTO test VALUES (?, ?)", data_iter(int(sys.argv[2])))
244 end_time = time.time()
245 result_time = INSERTMANY = end_time - start_time
246 print("INSERTMANY: %g seconds" % (result_time))
247
248 start_time = time.time()
249 c.execute("select count(*) from test")
250 res = c.fetchall()
251 end_time = time.time()
252 result_time = FETCHALL = end_time - start_time
253 print("FETCHALL: %g seconds" % (result_time))
254
255 start_time = time.time()
256 conn.commit()
257 end_time = time.time()
258 result_time = COMMIT = end_time - start_time
259 print("COMMIT: %g seconds" % (result_time))
260
    print()
262 log_line = "%f\t%f\t%f\t%f\t%f\n" % (DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT)
263 with open("sqlite-benchmarks.tsv", "a") as fp:
264 fp.write(log_line)
265
266start_time = time.time()
267conn.close()
268end_time = time.time()
269result_time = CLOSE = end_time - start_time
270print("CLOSE: %g seconds" % (result_time))
271```
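If you do want a consistent copy of a live SQLite database on the mounted Space, Python's `sqlite3` backup API (available since Python 3.7) is safer than a plain `cp`. A minimal sketch, with `/mnt` standing in for the mount point used above; `snapshot` is a hypothetical helper name, not part of the benchmark code:

```python
import sqlite3

def snapshot(db_path, dest_path):
    # open the live database and the destination file (e.g. a path on the Space)
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(dest_path)
    src.backup(dst)   # page-by-page copy, consistent even under concurrent writes
    dst.close()
    src.close()

# snapshot("local.db", "/mnt/backup.db")
```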
272
You can download the [raw result here](/assets/do-fuse/sqlite-benchmarks.tsv).
And again, these results were obtained on local block storage and do not
represent the capabilities of object storage. With my current approach and the
state of the test code, that test cannot be done; I would need to make the
Python code much more robust, check locking, etc.
278
279<div id="sqlite-benchmarks"></div>
280<script>
281(function(){
282 var request = new XMLHttpRequest();
283 request.open("GET", "/assets/do-fuse/sqlite-benchmarks.tsv", true);
284 request.onload = function() {
285 if (request.status >= 200 && request.status < 400) {
286 var payload = request.responseText.trim();
287 var tsv = payload.split("\n");
288 for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); }
289 var traces = [];
290 var headers = tsv[0];
291 tsv.shift();
292 Array.prototype.forEach.call(headers, function(el, idx) {
293 var x = [];
294 var y = [];
295 for (var j=0; j<tsv.length; j++) {
296 x.push(j);
297 y.push(parseFloat(tsv[j][idx].replace(",", ".")));
298 }
299 traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } });
300 });
301 var sqlite = Plotly.newPlot("sqlite-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 50, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } } });
302 } else { }
303 };
304 request.onerror = function() { };
305 request.send(null);
306})();
307</script>
308
309## Can storage be mounted on multiple machines at the same time and be writable?
310
Well, this one didn't take long to test, and the answer is **YES**. I mounted
the Space on both machines and measured the same performance on both. But
because a file is downloaded before a write and uploaded on completion, there
could be problems if another process tries to access the same file.
316
## Observations and conclusion
318
Using Spaces in this way makes it easy to access and manage files, but beyond
that you would need to write additional code to make it play nicely with your
applications.
322
Nevertheless, this was extremely simple to set up and use, and it is just
another excellent product in the DigitalOcean line. I found this exercise very
valuable, and I am thinking about implementing some sort of mechanism for
SQLite so data can be stored on Spaces and accessed by many VMs. For a project
where data doesn't need to be accessible in real time and can be a couple of
minutes old, this would be very interesting. If any of you find this proposal
interesting, please write in the comment box below or shoot me an email and I
will keep you posted.
diff --git a/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md b/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md
deleted file mode 100644
index f0343ae..0000000
--- a/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md
+++ /dev/null
@@ -1,410 +0,0 @@
1---
2title: Encoding binary data into DNA sequence
3url: encoding-binary-data-into-dna-sequence.html
4date: 2019-01-03T12:00:00+02:00
5draft: false
6---
7
8## Initial thoughts
9
Imagine a world where you could go outside, take a leaf from a tree, put it
through your personal DNA sequencer and get data like music, videos or computer
programs from it. Well, this is all possible now. It has not been done on a
large scale because creating DNA strands is quite expensive, but it is possible.
15
Encoding data into a DNA sequence is a relatively simple process once you
understand the relationship between binary data and nucleotides, and scientists
have been making large leaps in this field in order to provide a viable
long-term storage solution for our data, one that could potentially survive our
species in case of a global disaster. We could imprint all the world's
knowledge into plants and ensure the survival of our knowledge.
22
A more optimistic use for this technology would be easier storage of the
ever-growing data we produce every day. Once machines for sequencing DNA become
fast enough and cheap enough, this could mean the next evolution of data
storage, abandoning classical hard drives and solid state drives in data
warehouses.
27
As things currently stand this is still not viable, but it is quite an amazing
and cool technology.
30
My interest in this field is purely in the encoding process and experimental
testing, mainly because I don't have access to these expensive machines. My
initial goal was to create a toolkit that anybody can use to encode their data
into a proper DNA sequence.
35
36## Glossary
37
38**deoxyribose** A five-carbon sugar molecule with a hydrogen atom rather than a
39hydroxyl group in the 2′ position; the sugar component of DNA nucleotides.
40
41**double helix** The molecular shape of DNA in which two strands of nucleotides
42wind around each other in a spiral shape.
43
44**nitrogenous base** A nitrogen-containing molecule that acts as a base; often
45referring to one of the purine or pyrimidine components of nucleic acids.
46
47**phosphate group** A molecular group consisting of a central phosphorus atom
48bound to four oxygen atoms.
49
50**RGB** The RGB color model is an additive color model in which red, green and
51blue light are added together in various ways to reproduce a broad array of
52colors.
53
54**GCC** The GNU Compiler Collection is a compiler system produced by the GNU
55Project supporting various programming languages.
56
57## Data encoding
58
59**TL;DR:** Encoding involves the use of a code to change original data into a
60form that can be used by an external process.
61
62Encoding is the process of converting data into a format required for a number
63of information processing needs, including:
64
65- Program compiling and execution
66- Data transmission, storage and compression/decompression
67- Application data processing, such as file conversion
68
69Encoding can have two meanings:
70
71- In computer technology, encoding is the process of applying a specific code,
72 such as letters, symbols and numbers, to data for conversion into an
73 equivalent cipher.
74- In electronics, encoding refers to analog to digital conversion.
75
76## Quick history of DNA
77
78- **1869** - Friedrich Miescher identifies "nuclein".
79- **1900s** - The Eugenics Movement.
80- **1900** – Mendel's theories are rediscovered by researchers.
81- **1944** - Oswald Avery identifies DNA as the 'transforming principle'.
82- **1952** - Rosalind Franklin photographs crystallized DNA fibres.
83- **1953** - James Watson and Francis Crick discover the double helix structure of DNA.
84- **1965** - Marshall Nirenberg is the first person to sequence the bases in each codon.
85- **1983** - Huntington's disease is the first mapped genetic disease.
86- **1990** - The Human Genome Project begins.
87- **1995** - Haemophilus Influenzae is the first bacterium genome sequenced.
88- **1996** - Dolly the sheep is cloned.
89- **1999** - First human chromosome is decoded.
90- **2000** – Genetic code of the fruit fly is decoded.
91- **2002** – Mouse is the first mammal to have its genome decoded.
92- **2003** – The Human Genome Project is completed.
93- **2013** – DNA Worldwide and Eurofins Forensic discover identical twins have differences in their genetic makeup.
94
95## What is DNA?
96
97Deoxyribonucleic acid, a self-replicating material which is **present in nearly
98all living organisms** as the main constituent of chromosomes. It is the
99**carrier of genetic information**.
100
101> The nitrogen in our DNA, the calcium in our teeth, the iron in our blood,
102> the carbon in our apple pies were made in the interiors of collapsing stars.
103> We are made of starstuff.
104> **-- Carl Sagan, Cosmos**
105
106The nucleotide in DNA consists of a sugar (deoxyribose), one of four bases
107(cytosine (C), thymine (T), adenine (A), guanine (G)), and a phosphate.
108Cytosine and thymine are pyrimidine bases, while adenine and guanine are purine
109bases. The sugar and the base together are called a nucleoside.
110
111![DNA](/assets/dna-sequence/dna-basics.jpg)
112
113*DNA (a) forms a double stranded helix, and (b) adenine pairs with thymine and
114cytosine pairs with guanine. (credit a: modification of work by Jerome Walker,
115Dennis Myts)*
116
117## Encode binary data into DNA sequence
118
119As an input file you can use any file you want:
120
121- ASCII files,
122- Compiled programs,
- Multimedia files (MP3, MP4, MKV, etc.),
124- Images,
125- Database files,
126- etc.
127
Note: If you copy all the bytes from RAM to a file, or pipe data into a file,
you can encode that data as well, as long as you provide a file pointer to the
encoder.
130
131### Basic Encoding
132
As already mentioned, the Basic Encoding is based on a simple mapping. DNA is
composed of 4 nucleotides (Adenine, Cytosine, Guanine, Thymine; usually
referred to by their first letters). Using this technique we can encode

$$ \log_2(4) = \log_2(2^2) = 2 \text{ bits} $$

with a single nucleotide. In this way, we are able to use the 4 bases that
compose the DNA strand to encode each byte of data as four nucleotides.
141
| Two bits | Nucleotides      |
| -------- | ---------------- |
| 00       | **A** (Adenine)  |
| 01       | **G** (Guanine)  |
| 10       | **C** (Cytosine) |
| 11       | **T** (Thymine)  |
148
149With this in mind we can simply encode any data by using two-bit to Nucleotides
150conversion.
151
```python
# Algorithm 1: Naive byte stream to DNA encode
def encode_to_dna_sequence(f):
    mapping = {"00": "A", "01": "G", "10": "C", "11": "T"}
    enc = []
    while True:
        c = f.read(1)                  # read 1 byte from the stream
        if not c:                      # end of file
            break
        bits = format(c[0], "08b")     # convert byte to binary string, e.g. "01001000"
        for i in range(0, 8, 2):       # two bits per nucleotide
            enc.append(mapping[bits[i:i + 2]])
    return "".join(enc)                # return DNA sequence
```
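The mapping is trivially reversible, which makes round-trip verification easy. A decoding sketch using the same two-bit mapping as Algorithm 1 (00→A, 01→G, 10→C, 11→T); `decode_from_dna_sequence` is my name for the inverse, not part of the toolkit:

```python
def decode_from_dna_sequence(seq):
    # invert the two-bit mapping: each nucleotide becomes two bits,
    # every four nucleotides reassemble into one byte
    bits_for = {"A": "00", "G": "01", "C": "10", "T": "11"}
    bits = "".join(bits_for[n] for n in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# decode_from_dna_sequence("GACAGCTT") == b"Ho"
```

Note that "GACAGCTT" is exactly how the sample FASTA output below begins, decoding back to the "Ho" of "How wonderful...".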
172
Another encoding would be **Goldman encoding**. Using this encoding helps with
nonsense mutations (an amino acid replaced by a stop codon), which are the most
problematic during translation because they lead to truncated amino acid
sequences and, in turn, truncated proteins.
177
178[Where to store big data? In DNA: Nick Goldman at TEDxPrague](https://www.youtube.com/watch?v=a4PiGWNsIEU)
179
180### FASTA file format
181
182In bioinformatics, FASTA format is a text-based format for representing either
183nucleotide sequences or peptide sequences, in which nucleotides or amino acids
184are represented using single-letter codes. The format also allows for sequence
185names and comments to precede the sequences. The format originates from the
186FASTA software package, but has now become a standard in the field of
187bioinformatics.
188
Originally, the first line in a FASTA file started either with a ">"
(greater-than) symbol or, less frequently, a ";" (semicolon), and was taken as
a comment. Subsequent lines starting with a semicolon would be ignored by
software. Since the only comment used was the first one, it quickly became used
to hold a summary description of the sequence, often starting with a unique
library accession number, and over time it has become commonplace to always use
">" for the first line and not to use ";" comments (which would otherwise be
ignored).
196
197```
198;LCBO - Prolactin precursor - Bovine
199; a sample sequence in FASTA format
200MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS
201EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL
202VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED
203ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC*
204
205>MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken
206ADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID
207FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA
208DIDGDGQVNYEEFVQMMTAK*
209
210>gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus]
211LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV
212EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG
213LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL
214GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX
215IENY
216```
217
218FASTA format was extended by [FASTQ](https://en.wikipedia.org/wiki/FASTQ_format)
219format from the [Sanger Centre](https://www.sanger.ac.uk/) in Cambridge.
220
221### PNG encoded DNA sequence
222
223| Nucleotides | RGB | Color name |
224| ------------ | ----------- | ---------- |
225| A ➞ Adenine | (0,0,255) | Blue |
226| G ➞ Guanine | (0,100,0) | Green |
227| C ➞ Cytosine | (255,0,0) | Red |
228| T ➞ Thymine | (255,255,0) | Yellow |
229
230With this in mind we can create a simple algorithm to create PNG representation
231of a DNA sequence.
232
233```python
234{ Algorithm 2: Naive DNA to PNG encode from FASTA file }
235procedure EncodeDNASequenceToPNG(f)
236begin
237 i image
238 while not eof(f) do
239 c char := buffer[0] { Read 1 char from buffer }
240 case c of
241 'A': color := RGB(0, 0, 255) { Blue }
242 'G': color := RGB(0, 100, 0) { Green }
243 'C': color := RGB(255, 0, 0) { Red }
244 'T': color := RGB(255, 255, 0) { Yellow }
245 drawRect(i, [x, y], color)
246 save(i) { Save PNG image }
247end
248```
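The pseudocode above can be sketched in plain Python. To stay dependency-free this writes a PPM image instead of PNG (any image viewer or converter can turn it into PNG); the color palette is the same as in the table, and `dna_to_ppm` is a hypothetical helper, not the toolkit's `dnae-png`:

```python
def dna_to_ppm(seq, path, size=10):
    # RGB color per nucleotide, same palette as the table above
    rgb = {"A": (0, 0, 255), "G": (0, 100, 0), "C": (255, 0, 0), "T": (255, 255, 0)}
    width, height = size * len(seq), size
    row = b"".join(bytes(rgb[n]) * size for n in seq)     # one row of pixels
    with open(path, "wb") as fp:
        fp.write(b"P6\n%d %d\n255\n" % (width, height))   # binary PPM header
        fp.write(row * height)                            # square blocks, one per base

# dna_to_ppm("GACA", "out.ppm")
```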
249
250## Encoding text file in practice
251
252In this example we will take a simple text file as our input stream for
253encoding. This file will have a quote from Niels Bohr and saved as txt file.
254
255> How wonderful that we have met with a paradox. Now we have some hope of
256> making progress.
257> ― Niels Bohr
258
259First we encode text file into FASTA file.
260
261```bash
262./dnae-encode -i quote.txt -o quote.fa
2632019/01/10 00:38:29 Gathering input file stats
2642019/01/10 00:38:29 Starting encoding ...
265 106 B / 106 B [==================================] 100.00% 0s
2662019/01/10 00:38:29 Saving to FASTA file ...
2672019/01/10 00:38:29 Output FASTA file length is 438 B
2682019/01/10 00:38:29 Process took 987.263µs
2692019/01/10 00:38:29 Done ...
270```
271
The output file `quote.fa` contains the encoded DNA sequence in ASCII format.
273
274```
275>SEQ1
276GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA
277GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA
278ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA
279ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT
280GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT
281GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC
282AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC
283AACC
284```
285
Then we take the FASTA file from the previous operation and encode its data into a PNG.
287
288```bash
289./dnae-png -i quote.fa -o quote.png
2902019/01/10 00:40:09 Gathering input file stats ...
2912019/01/10 00:40:09 Deconstructing FASTA file ...
2922019/01/10 00:40:09 Compositing image file ...
293 424 / 424 [==================================] 100.00% 0s
2942019/01/10 00:40:09 Saving output file ...
2952019/01/10 00:40:09 Output image file length is 1.1 kB
2962019/01/10 00:40:09 Process took 19.036117ms
2972019/01/10 00:40:09 Done ...
298```
299
After encoding into PNG format, the file looks like this.
301
302![Encoded Quote in PNG format](/assets/dna-sequence/quote.png)
303
The larger the input stream, the larger the PNG file will be.
305
A basic Hello World C program compiled with
[GCC](https://www.gnu.org/software/gcc/) would [look like
this](/assets/dna-sequence/sample.png).
309
310```c
311// gcc -O3 -o sample sample.c
312#include <stdio.h>
313
int main(void) {
315 printf("Hello, world!\n");
316 return 0;
317}
318```
319
320## Toolkit for encoding data
321
322I have created a toolkit with two main programs:
323
324- dnae-encode (encodes file into FASTA file)
325- dnae-png (encodes FASTA file into PNG)
326
327Toolkit with full source code is available on
328[github.com/mitjafelicijan/dna-encoding](https://github.com/mitjafelicijan/dna-encoding).
329
330### dnae-encode
331
332```bash
333> ./dnae-encode --help
334usage: dnae-encode --input=INPUT [<flags>]
335
336A command-line application that encodes file into DNA sequence.
337
338Flags:
339 --help Show context-sensitive help (also try --help-long and --help-man).
340 -i, --input=INPUT Input file (ASCII or binary) which will be encoded into DNA sequence.
341 -o, --output="out.fa" Output file which stores DNA sequence in FASTA format.
342 -s, --sequence=SEQ1 The description line (defline) or header/identifier line, gives a name and/or a unique identifier for the sequence.
343 -c, --columns=60 Row characters length (no more than 120 characters). Devices preallocate fixed line sizes in software.
344 --version Show application version.
345```
346
347### dnae-png
348
349```bash
350> ./dnae-png --help
351usage: dnae-png --input=INPUT [<flags>]
352
353A command-line application that encodes FASTA file into PNG image.
354
355Flags:
356 --help Show context-sensitive help (also try --help-long and --help-man).
357 -i, --input=INPUT Input FASTA file which will be encoded into PNG image.
358 -o, --output="out.png" Output file in PNG format that represents DNA sequence in graphical way.
359 -s, --size=10 Size of pairings of DNA bases on image in pixels (lower resolution lower file size).
360 --version Show application version.
361```
362
363## Benchmarks
364
365First we generate some binary sample data with dd.
366
367```bash
368dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=1KB.bin bs=1KB count=1 iflag=fullblock
369```
370
Our freshly generated 1KB file looks something like this (it's full of garbage
data, as intended).
373
374![Sample binary file 1KB](/assets/dna-sequence/sample-binary-file.png)
375
376We create following binary files:
377
378- 1KB.bin
379- 10KB.bin
380- 100KB.bin
381- 1MB.bin
382- 10MB.bin
383- 100MB.bin
384
After this we create FASTA files for all the binary files by encoding them
into DNA sequences.
387
388```bash
389./dnae-encode -i 100MB.bin -o 100MB.fa
390```
391
Then we GZIP all the FASTA files to see how much they can be compressed.
393
394```bash
395gzip -9 < 10MB.fa > 10MB.fa.gz
396```
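It is worth sanity-checking what to expect: every input byte becomes four one-byte nucleotide characters, plus a newline every 60 columns and a short header, so a FASTA file should be roughly 4x its source; and since random data carries only 2 bits of entropy per nucleotide character, gzip can at best squeeze it back toward the original size. A quick estimator (a sketch of mine, assuming the default `>SEQ1` header and 60-column rows; it reproduces the 106 B → 438 B figure from the quote example above):

```python
import math

def fasta_size(binary_bytes, columns=60, header=">SEQ1"):
    # 4 nucleotide characters per input byte, one newline per (possibly
    # partial) row, plus the header line and its newline
    chars = binary_bytes * 4
    newlines = math.ceil(chars / columns)
    return len(header) + 1 + chars + newlines

# fasta_size(106) == 438, matching the dnae-encode output above
```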
397
[Download ODS file with benchmarks](/assets/dna-sequence/benchmarks.ods).
399
![Benchmark chart 1](/assets/dna-sequence/chart-1.png)

![Benchmark chart 2](/assets/dna-sequence/chart-2.png)
403
404## References
405
406- https://www.techopedia.com/definition/948/encoding
407- https://www.dna-worldwide.com/resource/160/history-dna-timeline
408- https://opentextbc.ca/biology/chapter/9-1-the-structure-of-dna/
409- https://arxiv.org/abs/1801.04774
410- https://en.wikipedia.org/wiki/FASTA_format
diff --git a/content/posts/2019-10-14-simplifying-and-reducing-clutter.md b/content/posts/2019-10-14-simplifying-and-reducing-clutter.md
deleted file mode 100644
index 97ddb34..0000000
--- a/content/posts/2019-10-14-simplifying-and-reducing-clutter.md
+++ /dev/null
@@ -1,58 +0,0 @@
1---
2title: Simplifying and reducing clutter in my life and work
3url: simplifying-and-reducing-clutter.html
4date: 2019-10-14T12:00:00+02:00
5draft: false
6---
7
I recently moved my main working machine back from Hackintosh to Linux. The
experiment was interesting and I did some great work on macOS, but it was time
to move back.
11
I actually really missed Linux. The simplicity of `apt-get`, or just the amount
of software that exists for Linux, should be a no-brainer. I spent most of my
time on macOS finding solutions to make things work. Using
[Brew](https://brew.sh/) was just a horrible experience, far from the package
managers of Linux. At least they managed to get that `sudo` debacle sorted.
17
Not all was bad. macOS in general was a perfectly good environment. Things like
Docker and similar tooling worked without any hiccups. My usual tools, like my
coding IDE, worked flawlessly, and the whole look and feel is just superb. I
had been using a MacBook Air for a couple of years, so I was used to the
system, but never as a daily driver.
23
One of the first things I did after installing Linux back on my machine was
cleaning up my Dropbox folder. I have everything on Dropbox, even my projects
folder. I write code for a living, so my whole life revolves around a couple of
megs of code (with assets); it's not like I have huge files on my machine. I
don't keep movies, music or pictures on my PC. All of that is in the cloud: I
use Google Music and have a Netflix account, which is more than enough for me.
30
I also went and deleted some of the repositories on my GitHub account. I have
deleted more code than I have deployed. People find this strange, but for me
deleting something feels cathartic, and it forces me to write better code the
next time I face a similar problem. That was a huge relief, if I am being
totally honest.
36
The next step was to do something with my webpage. I had been using some
scripts I wrote a while ago to generate static pages from markdown posts. I
kept adding stuff on top of them and they became a source of frustration. And
this is just a simple blog, yet I was using gulp and npm. After a couple of
hours of searching for and testing static generators I found an interesting
one, [https://github.com/piranha/gostatic](https://github.com/piranha/gostatic),
and decided to use it. It was the only one with a simple templating engine, not
that I really need one. The others had convoluted ways of trying to solve
everything and required a bigger learning curve than I was ready for. So I
deleted a couple of old posts, simplified the HTML, trashed most of the CSS,
and went with
[https://motherfuckingwebsite.com/](https://motherfuckingwebsite.com/)
aesthetics. Yeah, the previous site was more visually stimulating, but all I
really care about at this point is the content. And the Times New Roman font is
kind of awesome.
53
I stopped working on most of my projects in the past couple of months because
the overhead was just insane. There comes a point where you stretch yourself
too thin, you stop progressing, and with that comes dissatisfaction.
57
58So that's about it. Moving forward minimal style.
diff --git a/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md b/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md
deleted file mode 100644
index e7324bb..0000000
--- a/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md
+++ /dev/null
@@ -1,107 +0,0 @@
1---
2title: Using sentiment analysis for clickbait detection in RSS feeds
3url: using-sentiment-analysis-for-clickbait-detection-in-rss-feeds.html
4date: 2019-10-19T12:00:00+02:00
5draft: false
6---
7
8## Initial thoughts
9
One of the things that has interested me for a while is whether major,
well-established news sites use clickbait titles to drive additional traffic to
their sites and generate additional impressions.

The goal is to see how article titles and the actual article content differ
from each other, and whether the titles are clickbaited.
16
17## Preparing and cleaning data
18
For this example I opted to just use the RSS feed of a news website and went
with [The Guardian](https://www.theguardian.com) World news. This gives us
limited data (roughly 40 articles), and the description (the actual content) is
trimmed, so it doesn't fully reflect the article contents.

To get better content I could use the RSS feed as a link list and scrape the
contents directly from the website, but for this simple example this will
suffice.
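
If I did go the scraping route, the fetching could look something like this. A naive sketch: `fetch_article_text` and the regex-based cleanup are my own simplification, and a real scraper should parse the DOM and target the article body element instead.

```python
import re
import urllib.request

def strip_html(html):
    """Remove script/style blocks, then any remaining tags."""
    html = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

def fetch_article_text(url):
    """Fetch a page and return its visible text (very naive)."""
    with urllib.request.urlopen(url) as resp:
        return strip_html(resp.read().decode("utf-8", errors="replace"))

# e.g. texts = [fetch_article_text(item.link) for item in feed.entries]
```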
27
There are a couple of requirements we need to install before we continue:
29
30- `pip3 install feedparser` (parses RSS feed from url)
31- `pip3 install vaderSentiment` (does sentiment polarity analysis)
32- `pip3 install matplotlib` (plots chart of results)
33
34So first we need to fetch RSS data and sanitize HTML content from description.
35
36```python
37import re
38import feedparser
39
40feed_url = "https://www.theguardian.com/world/rss"
41feed = feedparser.parse(feed_url)
42
43# sanitize html
44for item in feed.entries:
45 item.description = re.sub('<[^<]+?>', '', item.description)
46```
47
48## Perform sentiment analysis
49
Since we now have cleaned-up data in our `feed.entries` object, we can perform
the sentiment analysis.

There are many sentiment analysis libraries available, ranging from rule-based
analysis up to machine-learning-backed analysis. To keep things simple I
decided to use the rule-based library
[vaderSentiment](https://github.com/cjhutto/vaderSentiment) from
[C.J. Hutto](https://github.com/cjhutto). It is a really nice library and quite
easy to use.
59
60```python
61from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
62analyser = SentimentIntensityAnalyzer()
63
64sentiment_results = []
65for item in feed.entries:
66 sentiment_title = analyser.polarity_scores(item.title)
67 sentiment_description = analyser.polarity_scores(item.description)
68 sentiment_results.append([sentiment_title['compound'], sentiment_description['compound']])
69```
70
Now that we have the data in a shape compatible with matplotlib, we can plot
the results to see the difference between the title and description sentiment
of each article.
74
75```python
76import matplotlib.pyplot as plt
77
78plt.rcParams['figure.figsize'] = (15, 3)
79plt.plot(sentiment_results, drawstyle='steps')
80plt.title('Sentiment analysis relationship between title and description (Guardian World News)')
81plt.legend(['title', 'description'])
82plt.show()
83```
84
85## Results and assets
86
1. Because of the small sample size, no further conclusions can be drawn.
2. A rule-based approach may not be the best way of doing this. Deep learning
   would likely give better insights.
3. **The next step would be to** periodically fetch RSS items, store them over
   a longer period of time, and then perform the analysis again, with machine
   learning or deep learning on top of it.
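
As a sketch of that next step, the feed entries could be archived into a local SQLite database on each run. The database name and schema here are my own assumptions.

```python
import sqlite3

def archive_entries(entries, db_path="feed-archive.db"):
    """Store feed items for later analysis; `entries` can come straight
    from feedparser.parse(url).entries (or any list of dict-like items)."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS entries (
               link TEXT PRIMARY KEY,
               title TEXT,
               description TEXT
           )"""
    )
    # INSERT OR IGNORE keeps re-runs idempotent, so the script can be
    # scheduled (e.g. via cron) without creating duplicate rows
    for item in entries:
        conn.execute(
            "INSERT OR IGNORE INTO entries (link, title, description) VALUES (?, ?, ?)",
            (item["link"], item["title"], item["description"]),
        )
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM entries").fetchone()[0]
```

Run periodically, this slowly builds the larger dataset the analysis above is missing.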
93
94![Relationship between title and description](/assets/sentiment-analysis/guardian-sa-title-desc-relationship.png)
95
The figure above displays the difference between title and description
sentiment for each RSS feed item. 1 means positive and -1 means negative
sentiment.
98
99[» Download Jupyter Notebook](/assets/sentiment-analysis/sentiment-analysis.ipynb)
100
101## Going further
102
103- [Twitter Sentiment Analysis by Bryan Schwierzke](https://github.com/bswiss/news_mood)
104- [AFINN-based sentiment analysis for Node.js by Andrew Sliwinski](https://github.com/thisandagain/sentiment)
105- [Sentiment Analysis with LSTMs in Tensorflow by Adit Deshpande](https://github.com/adeshpande3/LSTM-Sentiment-Analysis)
106- [Sentiment analysis on tweets using Naive Bayes, SVM, CNN, LSTM, etc. by Abdul Fatir](https://github.com/abdulfatir/twitter-sentiment-analysis)
107
diff --git a/content/posts/2020-03-22-simple-sse-based-pubsub-server.md b/content/posts/2020-03-22-simple-sse-based-pubsub-server.md
deleted file mode 100644
index 60745d0..0000000
--- a/content/posts/2020-03-22-simple-sse-based-pubsub-server.md
+++ /dev/null
@@ -1,453 +0,0 @@
1---
2title: Simple Server-Sent Events based PubSub Server
3url: simple-server-sent-events-based-pubsub-server.html
4date: 2020-03-22T12:00:00+02:00
5draft: false
6---
7
8## Before we continue ...
9
The publisher/subscriber model is nothing new and there are many amazing
solutions out there, so writing a new one would be a waste of time if those
solutions didn't have quite complex install procedures and weren't so hard to
maintain. To be fair, comparing this simple server with something like
[Kafka](https://kafka.apache.org/) or [RabbitMQ](https://www.rabbitmq.com/) is
laughable at best. Those solutions are enterprise grade and have many
mechanisms to ensure messages aren't lost, and much more. Regardless of these
drawbacks, this method has been tested on a large website and has worked
without any problems so far. Now that we have that cleared up, let's continue.
19
20***Wiki definition:** Publish/subscribe messaging, or pub/sub messaging, is a
21form of asynchronous service-to-service communication used in serverless and
22microservices architectures. In a pub/sub model, any message published to a
23topic is immediately received by all the subscribers to the topic.*
24
25## General goals
26
27- provide a simple server that relays messages to all the connected clients,
28- messages can be posted on specific topics,
29- messages get sent via [Server-Sent
30 Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
31 to all the subscribers.
32
33## How exactly does the pub/sub model work?
34
The easiest way to explain this is with the diagram below. The basic function
is simple: we have subscribers that receive messages, and we have publishers
that create and post messages. A closely related model is the well-known
consumer/producer pattern, where the roles are similar.
39
40![How PubSub works](/assets/simple-pubsub-server/pubsub-overview.png)
41
**These are some naive characteristics we want to achieve:**

- the producer publishes messages to a topic,
- the consumer receives messages from subscribed topics,
- the server is also known as the broker,
- the broker does not store messages or track delivery success,
- the broker uses the
  [FIFO](https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)) method
  for delivering messages,
- for a consumer to receive messages from a topic, the producer and consumer
  topics must match,
- a consumer can subscribe to multiple topics,
- a producer can publish to multiple topics,
- each message has a messageId.
56
**Known drawbacks:**

- messages are not stored in a persistent queue, and there is no
  [dead letter queue](https://en.wikipedia.org/wiki/Dead_letter_queue) for
  undelivered messages, so messages can be lost on server restart,
- [Server-Sent
  Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
  opens a long-running connection between the client and the server, so if your
  setup is load balanced, make sure the load balancer can keep long-lived
  connections open,
- no system moderation, due to the dynamic nature of creating queues.
68
69## Server-Sent Events
70
71Read more about it on [official specification
72page](https://html.spec.whatwg.org/multipage/server-sent-events.html).
73
74### Current browser support
75
76![Browser support](/assets/simple-pubsub-server/caniuse.png)
77
78Check
79[https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource)
80for latest information about browser support.
81
82### Known issues
83
84- Firefox 52 and below do not support EventSource in web/shared workers
85- In Firefox prior to version 36 server-sent events do not reconnect
86 automatically in case of a connection interrupt (bug)
87- Reportedly, CORS in EventSource is currently supported in Firefox 10+, Opera
88 12+, Chrome 26+, Safari 7.0+.
89- Antivirus software may block the event streaming data chunks.
90
91Source: [https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource)
92
93### Message format
94
The simplest message that can be sent contains only a data attribute:
96
97```bash
98data: this is a simple message
99<blank line>
100```
101
102You can send message IDs to be used if the connection is dropped:
103
104```bash
105id: 33
106data: this is line one
107data: this is line two
108<blank line>
109```
110
111And you can specify your own event types (the above messages will all trigger
112the message event):
113
114```bash
115id: 36
116event: price
117data: 103.34
118<blank line>
119```
120
121### Server requirements
122
The important thing is which headers are sent by the server, since these
trigger the browser to treat the response as an EventStream.
125
126Headers responsible for this are:
127
128```bash
129Content-Type: text/event-stream
130Cache-Control: no-cache
131Connection: keep-alive
132```
133
134### Debugging with Google Chrome
135
Google Chrome provides a built-in debugging and exploration tool for
[Server-Sent
Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events),
which is quite nice and available from Developer Tools under the Network tab.
139
> You can only debug the client-side events that are received, not the server
> ones. For debugging server events, add `console.log` to the `server.js` code
> and print out the events.
143
144![Google Chrome Developer Tools EventStream](/assets/simple-pubsub-server/chrome-debugging.png)
145
146## Server implementation
147
For the sake of this example we will use [Node.js](https://nodejs.org/en/) with
[Express](https://expressjs.com) as our router, since this is the easiest way
to get started, and we will use an already written SSE library for Node,
[sse-pubsub](https://www.npmjs.com/package/sse-pubsub), so we don't reinvent
the wheel.
153
154```bash
155npm init --yes
156
157npm install express
158npm install body-parser
159npm install sse-pubsub
160```
161
162Basic implementation of a server (`server.js`):
163
164```js
165const express = require('express');
166const bodyParser = require('body-parser');
167const SSETopic = require('sse-pubsub');
168
169const app = express();
170const port = process.env.PORT || 4000;
171
172// topics container
173const sseTopics = {};
174
175app.use(bodyParser.json());
176
177// open for all cors
178app.all('*', (req, res, next) => {
179 res.header('Access-Control-Allow-Origin', '*');
180 res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type');
181 next();
182});
183
184// preflight request error fix
185app.options('*', async (req, res) => {
186 res.header('Access-Control-Allow-Origin', '*');
187 res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type');
188 res.send('OK');
189});
190
191// serve the event streams
192app.get('/stream/:topic', async (req, res, next) => {
193 const topic = req.params.topic;
194
195 if (!(topic in sseTopics)) {
196 sseTopics[topic] = new SSETopic({
197 pingInterval: 0,
198 maxStreamDuration: 15000,
199 });
200 }
201
202 // subscribing client to topic
203 sseTopics[topic].subscribe(req, res);
204});
205
206// accepts new messages into topic
207app.post('/publish', async (req, res) => {
208 let body = req.body;
209 let status = 200;
210
211 console.log('Incoming message:', req.body);
212
213 if (
214 body.hasOwnProperty('topic') &&
215 body.hasOwnProperty('event') &&
216 body.hasOwnProperty('message')
217 ) {
218 const topic = req.body.topic;
219 const event = req.body.event;
220 const message = req.body.message;
221
222 if (topic in sseTopics) {
223 // sends message to all the subscribers
224 sseTopics[topic].publish(message, event);
225 }
226 } else {
227 status = 400;
228 }
229
230 res.status(status).send({
231 status,
232 });
233});
234
235// returns JSON object of all opened topics
236app.get('/status', async (req, res) => {
237 res.send(sseTopics);
238});
239
240// health-check endpoint
241app.get('/', async (req, res) => {
242 res.send('OK');
243});
244
245// return a 404 if no routes match
246app.use((req, res, next) => {
247 res.set('Cache-Control', 'private, no-store');
248 res.status(404).end('Not found');
249});
250
251// starts the server
252app.listen(port, () => {
253 console.log(`PubSub server running on http://localhost:${port}`);
254});
255```
256
257### Our custom message format
258
Each message posted to the server must be in a specific format that our server
accepts. Having a structure like this allows us to have multiple separate types
of events on each topic.

With this we can separate streams and only receive events that belong to the
topic.

For example, on an index page we may want to receive messages about new upvotes
or new subscribers, but we don't want to follow events for other pages. This
reduces clutter and overall network traffic, and the structure is much nicer
and more maintainable.
270
271```json
272{
273 "topic": "sample-topic",
274 "event": "sample-event",
275 "message": { "name": "John" }
276}
277```
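
With the server running, a message in this format can be published straight from the command line (assuming the default port 4000 from `server.js`):

```bash
curl -X POST http://localhost:4000/publish \
  -H 'Content-Type: application/json' \
  -d '{"topic": "sample-topic", "event": "sample-event", "message": {"name": "John"}}'
```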
278
279## Publisher and subscriber clients
280
281### Publisher and subscriber in action
282
283<video src="/assets/simple-pubsub-server/clients.m4v" controls></video>
284
285You can download [the code](../simple-pubsub-server/sse-pubsub-server.zip) and
286follow along.
287
288### Publisher
289
As discussed above, the publisher is the one that sends messages to the
broker/server. The message inside the payload can be whatever you want (string,
object, array). I would, however, personally avoid sending large chunks of data
like blobs and such.
294
295```html
296<!DOCTYPE html>
297<html lang="en">
298
299 <head>
300 <meta charset="UTF-8">
301 <meta name="viewport" content="width=device-width, initial-scale=1.0">
302 <title>Publisher</title>
303 </head>
304
305 <body>
306
307 <h1>Publisher</h1>
308
309 <fieldset>
310 <p>
311 <label>Server:</label>
312 <input type="text" id="server" value="http://localhost:4000">
313 </p>
314 <p>
315 <label>Topic:</label>
316 <input type="text" id="topic" value="sample-topic">
317 </p>
318 <p>
319 <label>Event:</label>
320 <input type="text" id="event" value="sample-event">
321 </p>
322 <p>
323 <label>Message:</label>
324 <input type="text" id="message" value='{"name": "John"}'>
325 </p>
326 <p>
327 <button type="button" id="button">Publish message to topic</button>
328 </p>
329 </fieldset>
330
331 <script>
332
333 const button = document.querySelector('#button');
334 const server = document.querySelector('#server');
335 const topic = document.querySelector('#topic');
336 const event = document.querySelector('#event');
337 const message = document.querySelector('#message');
338
339 button.addEventListener('click', async (evt) => {
340 const req = await fetch(`${server.value}/publish`, {
341 method: 'post',
342 headers: {
343 'Accept': 'application/json',
344 'Content-Type': 'application/json',
345 },
346 body: JSON.stringify({
347 topic: topic.value,
348 event: event.value,
349 message: JSON.parse(message.value),
350 }),
351 });
352
353 const res = await req.json();
354 console.log(res);
355 });
356
357 </script>
358
359 </body>
360
361</html>
362```
363
364### Subscriber
365
The subscriber is responsible for receiving new messages that come through the
server from the publisher. The code below is very rudimentary, but it works and
follows the implementation guidelines for EventSource.

You can use the Developer Tools Console to see incoming messages, or refer to
the Debugging with Google Chrome section above to see all EventStream messages.
373
> Don't be alarmed if the subscriber gets disconnected from the server every so
> often. The code we have here resets the connection every 15s, but it
> automatically reconnects and fetches all messages since the last received
> message id. This setting can be adjusted in the `server.js` file; look for
> the `maxStreamDuration` option.
379
380```html
381<!DOCTYPE html>
382<html lang="en">
383
384 <head>
385 <meta charset="UTF-8">
386 <meta name="viewport" content="width=device-width, initial-scale=1.0">
387 <title>Subscriber</title>
388 <link rel="stylesheet" href="style.css">
389 </head>
390
391 <body>
392
393 <h1>Subscriber</h1>
394
395 <fieldset>
396 <p>
397 <label>Server:</label>
398 <input type="text" id="server" value="http://localhost:4000">
399 </p>
400 <p>
401 <label>Topic:</label>
402 <input type="text" id="topic" value="sample-topic">
403 </p>
404 <p>
405 <label>Event:</label>
406 <input type="text" id="event" value="sample-event">
407 </p>
408 <p>
409 <button type="button" id="button">Subscribe to topic</button>
410 </p>
411 </fieldset>
412
413 <script>
414
415 const button = document.querySelector('#button');
416 const server = document.querySelector('#server');
417 const topic = document.querySelector('#topic');
418 const event = document.querySelector('#event');
419
420 button.addEventListener('click', async (evt) => {
421
422 let es = new EventSource(`${server.value}/stream/${topic.value}`);
423
424 es.addEventListener(event.value, function (evt) {
425 console.log(`incoming message`, JSON.parse(evt.data));
426 });
427
428 es.addEventListener('open', function (evt) {
429 console.log('connected', evt);
430 });
431
432 es.addEventListener('error', function (evt) {
433 console.log('error', evt);
434 });
435
436 });
437
438 </script>
439
440 </body>
441
442</html>
443```
444
445## Reading further
446
447- [Using server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
448- [Using SSE Instead Of WebSockets For Unidirectional Data Flow Over HTTP/2](https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/)
449- [What is Server-Sent Events?](https://apifriends.com/api-streaming/server-sent-events/)
450- [An HTTP/2 extension for bidirectional messaging communication](https://tools.ietf.org/id/draft-xie-bidirectional-messaging-01.html)
451- [Introduction to HTTP/2](https://developers.google.com/web/fundamentals/performance/http2)
452- [The WebSocket API (WebSockets)](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API)
453
diff --git a/content/posts/2020-03-27-create-placeholder-images-with-sharp.md b/content/posts/2020-03-27-create-placeholder-images-with-sharp.md
deleted file mode 100644
index ac4f053..0000000
--- a/content/posts/2020-03-27-create-placeholder-images-with-sharp.md
+++ /dev/null
@@ -1,101 +0,0 @@
1---
2title: Create placeholder images with sharp Node.js image processing library
3url: create-placeholder-images-with-sharp.html
4date: 2020-03-27T12:00:00+02:00
5draft: false
6---
7
I had been searching for a solution to pre-generate some placeholder images for
an image server I needed to develop that resizes images on S3. I thought this
would be a 15-minute job, and quickly found out how very mistaken I was.

Even though Node.js is not really the best tool for this kind of thing (surely
something written in C or Rust or even Golang would be the correct way to do
it, but we didn't need the speed in our case), I found an excellent library:
[sharp - High performance Node.js image
processing](https://github.com/lovell/sharp).

Getting things running was a breeze.
19
20## Fetch image from S3 and save resized
21
```js
const sharp = require('sharp');
const aws = require('aws-sdk');

aws.config.update({
  secretAccessKey: 'secretAccessKey',
  accessKeyId: 'accessKeyId',
  region: 'region'
});

const s3 = new aws.S3({});
const x = 100;
const y = 100;

(async () => {
  const originalImage = await s3.getObject({
    Bucket: 'some-bucket-name',
    Key: 'image.jpg',
  }).promise();

  const resizedImage = await sharp(originalImage.Body)
    .resize(x, y)
    .jpeg({ progressive: true })
    .toBuffer();

  await s3.putObject({
    Bucket: 'some-bucket-name',
    Key: `optimized/${x}x${y}/image.jpg`,
    Body: resizedImage,
    ContentType: 'image/jpeg',
    ACL: 'public-read'
  }).promise();
})();
```
53
All this code was wrapped inside a web service, with some additional security
checks and defensive coding to detect when a key is missing on S3.

At that point I needed to return placeholder images as a response in case the
key is missing, or x and y are not allowed by the server, etc. I could have
created PNGs in Gimp and just served them, but I wanted to respect the aspect
ratio and I didn't want to return mangled images.

> The main problem was finding a clean solution I could copy, paste, and adapt.
> The API is changing constantly, and there weren't clear examples, or I was
> unable to find them.
65
66## Generating placeholder images using SVG
67
What I ended up with was using SVG to render the text, creating a base image
with sharp, and using composition to combine both layers. The response returned
by this function is a buffer you can either upload to S3 or save to a local
file.
71
72```js
73const generatePlaceholderImageWithText = async (width, height, message) => {
74 const overlay = `<svg width="${width - 20}" height="${height - 20}">
75 <text x="50%" y="50%" font-family="sans-serif" font-size="16" text-anchor="middle">${message}</text>
76 </svg>`;
77
78 return await sharp({
79 create: {
80 width: width,
81 height: height,
82 channels: 4,
83 background: { r: 230, g: 230, b: 230, alpha: 1 }
84 }
85 })
86 .composite([{
87 input: Buffer.from(overlay),
88 gravity: 'center',
89 }])
90 .jpeg()
91 .toBuffer();
92}
93```
94
That is about it. Nothing more to it. You can change the color of the image by
changing `background`, and if you want to change the text styling you can adapt
the SVG to your needs.

> Also be careful about the length of the text. This function positions the
> text at the center and adds `20px` padding on all sides. If the text is
> longer than the image, it will get cut off.
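
If you find yourself tweaking the styling often, the overlay can be extracted into a plain string helper. This is a hypothetical refactor of the function above, not part of the original service; the hard-coded inset matches the padding mentioned above.

```js
// builds the SVG overlay used by generatePlaceholderImageWithText;
// width/height are the dimensions of the final image, and the overlay
// is made 20px smaller to leave room around the text
const buildOverlay = (width, height, message, fontSize = 16) =>
  `<svg width="${width - 20}" height="${height - 20}">
    <text x="50%" y="50%" font-family="sans-serif" font-size="${fontSize}" text-anchor="middle">${message}</text>
  </svg>`;
```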
diff --git a/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md b/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md
deleted file mode 100644
index bf1d710..0000000
--- a/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md
+++ /dev/null
@@ -1,107 +0,0 @@
1---
2title: The strange case of Elasticsearch allocation failure
3url: the-strange-case-of-elasticsearch-allocation-failure.html
4date: 2020-03-29T12:00:00+02:00
5draft: false
6---
7
I've been using Elasticsearch in production for 5 years now and never had a
single problem with it. Hell, I never even knew there could be a problem. It
just worked, all this time. The first node that I deployed is still being used
in production, never updated, upgraded, or touched in any way.
12
All this bliss came to an abrupt end this Friday, when I got a notification
that the Elasticsearch cluster went warm. Well, warm is not that bad, right?
Wrong! Quickly after that I got another email which sent chills down my spine.
The cluster is now red. RED! Now shit really hit the fan!
17
I tried googling what the problem could be, and after checking allocation I
noticed that some shards were unassigned and 5 allocation attempts had already
been made (which, to my luck, is the maximum), and that meant I was basically
fucked. Replies also implied that one should wait for the cluster to re-balance
itself. So I waited. One hour, two hours, several hours. Nothing, still RED.
23
The strangest thing about it all was that queries were still being fulfilled.
Data was coming out. On the outside it looked like nothing was wrong, but
anybody who looked at the cluster would know immediately that something was
very, very wrong and we were living on borrowed time.
28
> **Please, DO NOT do what I did.** Seriously! Please ask someone on the
> official forums, or if you know an expert, consult them. There could be a
> million reasons, and this solution fit my problem. Maybe in your case it
> would be disastrous. I had all the data backed up, so even if I failed
> spectacularly I would be able to restore it. It would be a huge pain and I
> would lose a couple of days, but I had a plan B.
35
Checking allocation told me what the problem was, but offered no clear solution
yet.
37
38```yaml
39GET /_cat/allocation?format=json
40```
41
I got a message that allocation status was `ALLOCATION_FAILED`, with the
additional info `failed to create shard, failure ioexception[failed to obtain
in-memory shard lock]`. Well, splendid! I must also say that our cluster is
more than capable of handling the traffic, and JVM memory pressure was never an
issue. So what really happened, then?
47
I also tried re-routing the failed shards, with no success due to AWS
restrictions on managed Elasticsearch clusters (they lock some of the
functions).
50
51```yaml
52POST /_cluster/reroute?retry_failed=true
53```
54
55I got a message that significantly reduced my options.
56
57```json
58{
59 "Message": "Your request: '/_cluster/reroute' is not allowed."
60}
61```
62
After that I went hunting again. I won't bother you with all the details,
because hours and days went by until I was finally able to re-index the
problematic index and hope for the best. Until that moment, even re-indexing
was giving me errors.
67
68```yaml
69POST _reindex
70{
71 "source": {
72 "index": "myindex"
73 },
74 "dest": {
75 "index": "myindex-new"
76 }
77}
78```
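
On larger indices, the re-index can also be started asynchronously and its progress polled via the task management API. These are standard Elasticsearch endpoints per the official docs, though I can't say whether AWS leaves them unlocked on a managed cluster:

```yaml
POST _reindex?wait_for_completion=false

GET _tasks?detailed=true&actions=*reindex
```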
79
I needed to do this multiple times to get all the documents re-indexed. Then I
dropped the original index with the following command.
82
83```yaml
84DELETE /myindex
85```
86
Then I re-indexed the new index back into the original one (well, by name
only).
88
89```yaml
90POST _reindex
91{
92 "source": {
93 "index": "myindex-new"
94 },
95 "dest": {
96 "index": "myindex"
97 }
98}
99```
100
On the surface it looks like everything is working, but I have a long road
ahead of me to get all the things working again. The cluster now shows green,
but I am also getting a notification that the cluster has a processing status,
which could mean a million things.
105
106Godspeed!
107
diff --git a/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md b/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md
deleted file mode 100644
index daebb4c..0000000
--- a/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md
+++ /dev/null
@@ -1,110 +0,0 @@
1---
2title: My love and hate relationship with Node.js
3url: my-love-and-hate-relationship-with-nodejs.html
4date: 2020-03-30T12:00:00+02:00
5draft: false
6---
7
The previous project I was working on was coded in
[Golang](https://golang.org/). It was also my first project using it. And damn,
that was an awesome experience. The whole thing is just superb: how errors are
handled, the C-like way you handle compiling, the way the language is
structured, making it incredibly versatile and easy to learn.
13
It may cause some pain for somebody who is not used to using interfaces to map
JSON, and to doing the recompilation all the time. But we have tools like
[entr](http://eradman.com/entrproject/) and
[make](https://www.gnu.org/software/make/) to fix that.
18
But we are not here to talk about my undying love for **Golang**. Though in
some ways we probably should. It is an excellent example of how a modern
language should be designed. And because I have used it extensively over the
last couple of years, it probably taints my views of other languages, and is
doing me a great disservice. Nevertheless, here we are.
24
About two years ago I started flirting with [Node.js](https://nodejs.org/en/)
for a project I was starting on. What I wanted was to have things written in a
language that is widely used and that we could get additional developers for.
As amazing as **Golang** is, it's really hard to get developers for it, even
now. And after playing around with Node.js for a week, I fell in love with the
speed of iteration and the massive package ecosystem. Do you want SSO? You got
it! Do you want some esoteric library for something? There is a strong chance
somebody wrote it. It is so extensive that you find yourself evaluating
packages based on **GitHub stars** and the number of contributors. You get
swallowed by the vanity metrics, and that may yet become the downfall of
Node.js.
35
Because of the sheer amount of choice, I often got anxiety when choosing
libraries. Will I choose the correct one? Is this library something that will
be supported for the foreseeable future or not? I am used to using libraries
that have been in development for 10-plus years (Python, C), and that gave me
some sort of comfort. It is probably unfair to Node.js and its community to
expect the same dedication.
42
Moving forward ... Work started and things were great. **The speed of iteration
was insane.** A feature that would take me a day in Golang took only an hour or
two. I became lazy! Using packages all over the place. Falling into the same
trap as others. Packages on top of packages. And [npm](https://www.npmjs.com/)
didn't help at all. The way the package manager works is just horrendous. And
not allowing node_modules outside the project is also the stupidest idea ever.

So at that point I started feeling the technical debt that comes with Node.js
and the whole ecosystem. What nobody tells you is that **structuring large
Node.js apps** is more problematic than one would think. And going microservice
for every single thing is also a bad idea. The amount of networking you
introduce with that approach always ends up being a pain in the ass. And I don't
even want to go into system administration here; the overhead is
insane. package-lock.json made many days feel like living hell for me. I would
eat the cost of all this if it meant a better development experience. Well, it
didn't.

The **lack of TypeScript support** in the interpreter is still mind-boggling to
me. Why they haven't added native support for it yet is beyond me. That would
have solved so many problems. Lack of type safety became a problem somewhere in
the middle of the project, once the codebase was large enough to present
problems. We kept adding arguments to functions and there was **no way to
explicitly declare argument types**. And because at that point there were a lot
of functions, it became impossible to know what each one accepts; development
became more and more trial and error.

I tried **adopting TypeScript**, but that would have meant a large refactor that
we were not willing to do at that point. The benefits were not enough. I also
tried [Flow - static type checker](https://flow.org/), but that implementation
was also horrible. What TypeScript and Flow force you into is having a src
folder, **transpiling** your code into a dist folder, and running that with
node. What is that all about? Why can't this be done in memory or in some
virtual file system? Why? I see no reason why it couldn't. But it is what it
is. I abandoned all hope for static type checking.

One of the problems that resulted from not having interfaces or types was the
inability to model our data from **Elasticsearch**. I could have done a
**pedestrian implementation** of it, but there must be a better way of doing
this without resorting to what is basically a hack. Or maybe I just haven't
found a solution, which is also a possibility. I have looked, though. No juice!

**Error handling?** Is that a joke?

Thank god for **async/await**. Without it, I would have probably just abandoned
the whole thing and gone with something else like Python. That's all I am going
to say about this :)

I started asking myself whether Node.js is actually ready to be used in
**large scale applications**. And this was totally the wrong question. What I
should have been asking myself was how to use Node.js in a large scale
application. And you don't get this in the **marketing material** for Express
or Koa etc. They never tell you this. Making Node.js scale, on infrastructure
or in a codebase, is really **more of an art than a science**. And just like
with the whole JavaScript ecosystem:

- impossible to master,
- half of your time you work on your tooling,
- you just accept transpilers that convert one kind of code into another (holy smokes),
- error handling is a joke,
- standards? What standards?

But on the other hand, as I did, you will also learn to love it. Learn to use it
quickly and do impossible things in crazy limited time.

I hate to admit it. But I love Node.js. Dammit, I love it :)

2023 Update: I hate Node.js!
diff --git a/content/posts/2020-05-05-remote-work.md b/content/posts/2020-05-05-remote-work.md
deleted file mode 100644
index 90fca24..0000000
--- a/content/posts/2020-05-05-remote-work.md
+++ /dev/null
@@ -1,71 +0,0 @@
---
title: Remote work and how it affects the daily lives of people
url: remote-work.html
date: 2020-05-05T12:00:00+02:00
draft: false
---

I have been working remotely for the past 5 years. I love it. I love the freedom
and the make-your-own-schedule thing.

## You work more, not less

I've heard people say things like: "Oh, you are so lucky, working from home,
having all the free time you want". It was obvious they had no clue what working
remotely means. They had this romantic idea of remote work. You can watch TV
whenever you like, you can go outside for a picnic if you want, and stuff like
that.

This may be true if you work from home a day or two a week. But if you go
completely remote, all of this changes. It takes some time to acclimate,
but then you start feeling the consequences of going fully remote. And it's not
all rainbows and unicorns. Rather the opposite.

## Feeling lost

At first, I remember I felt lost. I was not used to this kind of environment.
I felt disoriented, and the part of you that is used to procrastinating turns
on. You start thinking of a workday as a whole day. And soon the idea of "I can
do this later" starts creeping in. Well, I have the whole day ahead of me. I can
do this a bit later.

## Hyper-performance

As a direct result, you become more focused on your work, since you don't have
all the interruptions common in the workplace. And you can quickly get used to
this hyper-performance. But this mode also requires a lot of peace and quiet.

And here we come to the ugly part of all this. **People rarely have the
self-control** not to waste other people's time. It is paralyzing when people
start calling you, sending you chat messages, etc. The thing is, when I reach
this hyper-performance mode I am completely absorbed in the problem I am
solving, and these kinds of interruptions mess with your head. I need at least
an hour to get back in the zone, and sometimes I never regain the same focus
the whole day.

I know that life is not how you want it to be and takes its own route, but from
what I've learned, these kinds of interruptions can easily be avoided in 90% of
cases just by closing any chat programs and putting your phone in a drawer.

## Suggestions for all the new remote workers

- Stop wasting other people's time. You don't bother people at their desks in
  the office either.
- Do not replace daily chats in the hallways with instant messaging software.
  It will only interrupt people. Nothing good will come of it.
- Set your working hours, try not to let work bleed outside these boundaries,
  and maintain your routine.
- Be prepared that hours will be longer regardless of your good intentions and
  your well-thought-out routine.
- Try to be hyper-focused and do only one thing at a time. Multitasking is the
  enemy of progress.
- Avoid long meetings and, if possible, eliminate them. Rather, take the time to
  write things out and allow others to respond in their own time. Meetings are
  usually a large waste of time, and most of the people attending them are there
  just because the manager said so.
- Software will not solve your problems. Neither will throwing money at them.
- If you are in a managerial position, don't supervise every single minute of
  your workers. They are probably giving you more hours anyway. Track progress
  weekly, not daily. You hired them; give them the benefit of the doubt that
  they will deliver what you agreed upon.
diff --git a/content/posts/2020-08-15-systemd-disable-wake-onmouse.md b/content/posts/2020-08-15-systemd-disable-wake-onmouse.md
deleted file mode 100644
index 55086b1..0000000
--- a/content/posts/2020-08-15-systemd-disable-wake-onmouse.md
+++ /dev/null
@@ -1,72 +0,0 @@
---
title: Disable mouse wake from suspend with systemd service
url: disable-mouse-wake-from-suspend-with-systemd-service.html
date: 2020-08-15T12:00:00+02:00
draft: false
---

I recently bought a [ThinkPad
X220](https://www.laptopmag.com/reviews/laptops/lenovo-thinkpad-x220) on eBay,
half as a joke, to test Linux distributions and play around with things without
destroying my main machine. Little did I know I would fall in love with it. Man,
they really made awesome machines back then.

After swapping the disk it came with for an SSD and installing Ubuntu to test if
everything works, I noticed that even a single touch of my external mouse would
wake the system from sleep, even though the lid was shut.

I wouldn't even have noticed it if the laptop didn't have an [LED
sleep indicator](https://support.lenovo.com/lk/en/solutions/~/media/Images/ContentImages/p/pd025386_x1_status_03.ashx?w=426&h=262).
I already had a bad experience with Linux and its power management. I had a
[Dell Inspiron 7537](https://www.pcmag.com/reviews/dell-inspiron-15-7537) laptop
with a touchscreen, and while I was traveling it decided to wake up and started
cooking in my backpack, to the point that the digitizer responsible for touch
actually came unglued and the whole screen got wrecked. So, I am a bit touchy
about this.

I went hunting for a solution and, to my surprise, there is no easy way to stop
specific devices from waking the machine. Why this is not under the power
management tab in Settings is really strange.

After googling for a solution I found [this nice article describing one](https://codetrips.com/2020/03/18/ubuntu-disable-mouse-wake-from-suspend/)
that worked for me. The only problem was that the author added his
solution to `.bashrc`, which triggers `sudo` asking for a password each
time a new terminal is opened. That gets annoying quickly, since I open a lot of
terminals all the time.

I followed his instructions and got to the command `sudo sh -c "echo 'disabled' >
/sys/bus/usb/devices/2-1.1/power/wakeup"`.

I created a systemd service file with `sudo nano
/etc/systemd/system/disable-mouse-wakeup.service`, removed `sudo`, replaced
`sh` with `/usr/bin/sh`, and pasted it all into `ExecStart`.

```ini
[Unit]
Description=Disables wakeup on mouse event
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/usr/bin/sh -c "echo 'disabled' > /sys/bus/usb/devices/2-1.1/power/wakeup"

[Install]
WantedBy=multi-user.target
```

After that I enabled, started, and checked the status of the service.

```sh
sudo systemctl enable disable-mouse-wakeup.service
sudo systemctl start disable-mouse-wakeup.service
sudo systemctl status disable-mouse-wakeup.service
```

This permanently prevents that device from waking up your computer. If you have
many devices you would like to suppress from waking up your machine, I would
create a shell script and call that from the service file instead of doing it
directly in `ExecStart`.
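A sketch of that multi-device script (the device IDs `2-1.1` and `2-1.2` are
examples; list yours with `ls /sys/bus/usb/devices/*/power/wakeup`, and the
install path is just a suggestion):

```shell
#!/bin/sh
# /usr/local/bin/disable-wakeup-devices.sh (example path)
# Disables wake-from-suspend for each listed USB device, quietly
# skipping devices that are not present or not writable.

for dev in 2-1.1 2-1.2; do
    wakeup="/sys/bus/usb/devices/$dev/power/wakeup"
    if [ -w "$wakeup" ]; then
        echo 'disabled' > "$wakeup"
    fi
done
```

The service's `ExecStart` would then point at `/usr/bin/sh
/usr/local/bin/disable-wakeup-devices.sh`.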
diff --git a/content/posts/2020-09-06-esp-and-micropython.md b/content/posts/2020-09-06-esp-and-micropython.md
deleted file mode 100644
index 91a04ad..0000000
--- a/content/posts/2020-09-06-esp-and-micropython.md
+++ /dev/null
@@ -1,225 +0,0 @@
---
title: Getting started with MicroPython and ESP8266
url: esp8266-and-micropython-guide.html
date: 2020-09-06T12:00:00+02:00
draft: false
---

## Introduction

A while ago I bought some
[ESP8266](https://www.espressif.com/en/products/socs/esp8266) and
[ESP32](https://www.espressif.com/en/products/socs/esp32) dev boards to play
around with, and I finally found a project to try them out.

For my project I used an [ESP32](https://www.espressif.com/en/products/socs/esp32),
but I could just as easily have chosen an
[ESP8266](https://www.espressif.com/en/products/socs/esp8266). This guide
covers which tools I use and how I prepared my workspace to code for the
[ESP8266](https://www.espressif.com/en/products/socs/esp8266).

![ESP8266 and ESP32 boards](/assets/esp8366-micropython/boards.jpg)

This guide covers:

- flashing the SOC
- installing proper tooling
- deploying a simple script

> Make sure that you are using **a good USB cable**. I had some problems with
> mine, and once I replaced it everything started to work.

## Flashing the SOC

Plug your ESP8266 into a USB port and check that the device was recognized by
executing `dmesg | grep ch341-uart`.

Then check if the device is available under `/dev/` by running `ls
/dev/ttyUSB*`.

> **Linux users**: if the device is not available, be sure you are in the `dialout`
> group. You can check this by executing `groups $USER`. You can add a user to
> the `dialout` group with `sudo adduser $USER dialout`.

After these conditions are met, navigate to
[https://micropython.org/download/esp8266/](https://micropython.org/download/esp8266/)
and download `esp8266-20200902-v1.13.bin`.

```sh
mkdir esp8266-test
cd esp8266-test

wget https://micropython.org/resources/firmware/esp8266-20200902-v1.13.bin
```

After obtaining the firmware we need some tooling to flash it to the board.

```sh
sudo pip3 install esptool
```

You can read more about `esptool` at
[https://github.com/espressif/esptool/](https://github.com/espressif/esptool/).

Before flashing the firmware we need to erase the flash on the device. Substitute
`USB0` with the device listed in the output of `ls /dev/ttyUSB*`.

```sh
esptool.py --port /dev/ttyUSB0 erase_flash
```

If the flash was successfully erased, it is now time to write the new firmware.

```sh
esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash --flash_size=detect 0 esp8266-20200902-v1.13.bin
```

If everything went OK you can try accessing the MicroPython REPL with `screen
/dev/ttyUSB0 115200` or `picocom /dev/ttyUSB0 -b115200`.

> Sometimes you will need to press `ENTER` in `screen` or `picocom` to access
> the REPL.

When you are in the REPL you can test that everything is working properly with
the following steps.

```py
>>> import machine
>>> machine.freq()
```

This should output a number representing the frequency of the CPU (mine was
`80000000`).

When you are in `screen` or `picocom`, these can help you a bit.

| Key           | Command              |
| ------------- | -------------------- |
| CTRL+d        | performs soft reboot |
| CTRL+a CTRL+x | exits picocom        |
| CTRL+a \      | exits screen         |

## Install better tooling

Now, to make our lives a little bit easier, there are a couple of additional
tools that will make this whole experience a little more bearable.

There are two cool ways of uploading local files to the SOC flash:

- ampy → [https://github.com/scientifichackers/ampy](https://github.com/scientifichackers/ampy)
- rshell → [https://github.com/dhylands/rshell](https://github.com/dhylands/rshell)

### ampy

```bash
# installing ampy
sudo pip3 install adafruit-ampy
```

Listed below are some common commands I used.

```bash
# uploads file to flash
ampy --delay 2 --port /dev/ttyUSB0 put boot.py

# lists files on flash
ampy --delay 2 --port /dev/ttyUSB0 ls

# outputs contents of file on flash
ampy --delay 2 --port /dev/ttyUSB0 cat boot.py
```

> I added a `delay` of 2 seconds because I had problems with executing commands.

### rshell

Even though `ampy` is a cool tool, I opted for `rshell` in the end since it's
much more polished and feature-rich.

```bash
# installing rshell
sudo pip3 install rshell
```

Now that `rshell` is installed we can connect to the board.

```bash
rshell --buffer-size=30 -p /dev/ttyUSB0 -a
```

This will open a shell inside bash, and from here you can execute multiple
commands. You can check what is supported with `help` once you are inside the
shell.

```bash
m@turing ~/Junk/esp8266-test
$ rshell --buffer-size=30 -p /dev/ttyUSB0 -a

Using buffer-size of 30
Connecting to /dev/ttyUSB0 (buffer-size 30)...
Trying to connect to REPL  connected
Testing if ubinascii.unhexlify exists ... Y
Retrieving root directories ... /boot.py/
Setting time ... Sep 06, 2020 23:54:28
Evaluating board_name ... pyboard
Retrieving time epoch ... Jan 01, 2000
Welcome to rshell. Use Control-D (or the exit command) to exit rshell.
/home/m/Junk/esp8266-test> help

Documented commands (type help <topic>):
========================================
args    cat  connect  date  edit  filesize  help  mkdir  rm     shell
boards  cd   cp       echo  exit  filetype  ls    repl   rsync

Use Control-D (or the exit command) to exit rshell.
```

> Inside the shell, `ls` will display a list of files on your machine. The
> flash is remapped inside the shell as the `/pyboard` folder, so to list files
> on flash you must run `ls /pyboard`.

#### Moving files to flash

To avoid copying files one by one, I used the `rsync` command from inside
`rshell`.

```bash
rsync . /pyboard
```

#### Executing scripts

It is a pain to continuously reboot the device to trigger `/pyboard/boot.py`,
and there is a better way of testing local scripts on the remote device.

Let's assume we have a `src/freq.py` file that displays the CPU frequency of
the remote device.

```py
# src/freq.py

import machine
print(machine.freq())
```

Now let's upload it and execute it.

```bash
# syncs files to the remote device
rsync ./src /pyboard

# goes into REPL
repl

# import the file without the .py extension and this will run the script
>>> import freq

# CTRL+x exits the REPL
```

## Additional resources

- https://randomnerdtutorials.com/getting-started-micropython-esp32-esp8266/
- http://docs.micropython.org/en/latest/esp8266/quickref.html
diff --git a/content/posts/2020-09-08-bind-warning-on-login.md b/content/posts/2020-09-08-bind-warning-on-login.md
deleted file mode 100644
index 113c67b..0000000
--- a/content/posts/2020-09-08-bind-warning-on-login.md
+++ /dev/null
@@ -1,53 +0,0 @@
---
title: Fix bind warning in .profile on login in Ubuntu
url: bind-warning-on-login-in-ubuntu.html
date: 2020-09-08T12:00:00+02:00
draft: false
---

Recently I moved back to [bash](https://www.gnu.org/software/bash/) as my
default shell. I was previously using [fish](https://fishshell.com/) and got
used to the cool features it has. But, regardless of that, I wanted to move to
a more standard shell, because hopping back and forth with exported variables
and stuff like that got pretty annoying.

So I embarked on a mission to make [bash](https://www.gnu.org/software/bash/)
more like [fish](https://fishshell.com/), and in the process found that I
really missed autosuggestions with TAB when changing directories.

I found a nice alternative that emulates [zsh](http://zsh.sourceforge.net/)-like
autosuggestion and autocomplete, so I added the following to my `.bashrc` file.

```bash
bind "TAB:menu-complete"
bind "set show-all-if-ambiguous on"
bind "set completion-ignore-case on"
bind "set menu-complete-display-prefix on"
bind '"\e[Z":menu-complete-backward'
```

I hadn't noticed anything wrong with this, and all was working fine until I
restarted my machine and got this error.

![Profile bind error](/assets/profile-bind-error/error.jpg)

When I pressed OK, I got into the [Gnome
shell](https://wiki.gnome.org/Projects/GnomeShell) and all was working fine,
but the error was still bugging me. I started looking for the reason why this
was happening and found a solution in [Remote SSH Commands - bash bind
warning: line editing not enabled](https://superuser.com/a/892682).

So I added a simple `if [ -t 1 ]` around the `bind` statements to avoid running
commands that presume the session is interactive when it isn't.

```bash
if [ -t 1 ]; then
  bind "TAB:menu-complete"
  bind "set show-all-if-ambiguous on"
  bind "set completion-ignore-case on"
  bind "set menu-complete-display-prefix on"
  bind '"\e[Z":menu-complete-backward'
fi
```

After logging out and back in, the problem was gone.
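As an aside, `[ -t 1 ]` simply tests whether file descriptor 1 (stdout) is
attached to a terminal, which is exactly what a login-time `.profile` run may
lack. A quick way to see it in action (the `check_tty` helper is just for
illustration):

```shell
# Prints "interactive" when stdout is a terminal, "non-interactive" otherwise
# (e.g. when piped, redirected, or run from a non-interactive login).
check_tty() {
    if [ -t 1 ]; then
        echo "interactive"
    else
        echo "non-interactive"
    fi
}

check_tty         # "interactive" when run directly in a terminal
check_tty | cat   # "non-interactive": stdout is a pipe here, not a tty
```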
diff --git a/content/posts/2020-09-09-digitalocean-sync.md b/content/posts/2020-09-09-digitalocean-sync.md
deleted file mode 100644
index aa3cce4..0000000
--- a/content/posts/2020-09-09-digitalocean-sync.md
+++ /dev/null
@@ -1,111 +0,0 @@
---
title: Using DigitalOcean Spaces to sync between computers
url: digitalocean-spaces-to-sync-between-computers.html
date: 2020-09-09T12:00:00+02:00
draft: false
---

I've been using [Dropbox](https://www.dropbox.com/) for probably **10+ years**
now, and I've become so used to it running in the background that I can't even
imagine a world without it. But it's not without problems.

At first I had problems with `.venv` environments for Python, and the only way
to exclude this folder from synchronization was to manually exclude each
specific folder, which is not really scalable. FYI, my whole projects folder is
synced to [Dropbox](https://www.dropbox.com/). This of course introduced a lot
of syncing of files and folders that are not needed, or that even break things
on other machines. In the case of **Python**, I couldn't use a synced `.venv`
on my second machine. I needed to delete the `.venv` folder and pip-install it
again, which synced the files back to the main machine. This was very
frustrating. **Node.js** handles this much more nicely, and I can just run the
scripts without deleting `node_modules` and reinstalling. However,
`node_modules` is a beast of its own. It creates so many files that the OS has
a problem counting them when you check the folder's size.

I wanted something similar to Dropbox. I could do without the instant syncing,
but it would need to be fast and give me the option to exclude folders like
`node_modules`, `.venv`, `.git` and the like.

I went on a hunt for an alternative to [Dropbox](https://www.dropbox.com/)
and found:

- [Tresorit](https://tresorit.com/)
- [Sync.com](https://sync.com)
- [Box](https://www.box.com/)

You know, the usual list of suspects. I didn't include [Google
Drive](https://drive.google.com) or [OneDrive](https://onedrive.live.com/)
since they are even more draconian than Dropbox.

> All this does not stem from me being paranoid, but recently these companies
> have become more and more aggressive, and they keep violating our privacy by
> sharing our data with 3rd-party services. It is getting out of control.

So, my main problem was still there: no way of excluding a specific folder from
syncing. And before we go into "*But you have git, isn't that enough?*", I must
say that many of the files (PDFs, spreadsheets, etc.) I have in a `git` repo
don't get pushed upstream, and I still want to have them synced across my
computers.

I initially wanted to use [rsync](https://linux.die.net/man/1/rsync), but I
would then need a remote VPS, or to transfer between my computers directly. I
wanted a solution where all my files would be accessible to me without my
machine.

> **WARNING: This solution will cost you money!** DigitalOcean Spaces are $5
> per month, and there are some bandwidth limitations; if you go beyond them
> you get billed additionally.

Then I remembered that I could use something like
[S3](https://en.wikipedia.org/wiki/Amazon_S3), since it has versioning and is
fully managed. I didn't want to go down the AWS rabbit hole with this, so I
chose [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/).

Then I needed a command-line tool to sync between source and target. I found
this nice tool [s3cmd](https://s3tools.org/s3cmd), and it is in the Ubuntu
repositories.

```bash
sudo apt install s3cmd
```

After installation, create a new Spaces bucket on DigitalOcean. Remember the
zone you choose, because you will need it when you configure `s3cmd`.

Then I visited [DigitalOcean Applications &
API](https://cloud.digitalocean.com/account/api/tokens) and generated **Spaces
access keys**. Save both the key and the secret somewhere safe, because once
you leave the page the secret will not be available to you anymore and you
will need to re-generate it.

```bash
# enter your key and secret and the correct endpoint
# my endpoint is ams3.digitaloceanspaces.com because
# I created my bucket in the Amsterdam region
s3cmd --configure
```

After that I played around with the options for `s3cmd` and arrived at the
following command.

```bash
# I executed this command from my projects folder
cd projects
s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://my-bucket-name/projects/
```

When syncing in the other direction you need to swap the `SOURCE` and `TARGET`,
i.e. `s3://my-bucket-name/projects/` and `./`.

> Be sure that all the paths have a trailing slash so that sync knows these
> are directories.

I am planning to implement some sort of `.ignore` file that will let me have
project-specific exclude options.

I am currently running this every hour as a cronjob, which is perfectly fine
for now while I am testing how this whole thing works and how it will all turn
out.

I have also created a small Gnome extension which is still very unstable, but
when/if this whole experiment pays off I will share it on GitHub.
diff --git a/content/posts/2021-01-24-replacing-dropbox-with-s3.md b/content/posts/2021-01-24-replacing-dropbox-with-s3.md
deleted file mode 100644
index 4c6b33e..0000000
--- a/content/posts/2021-01-24-replacing-dropbox-with-s3.md
+++ /dev/null
@@ -1,113 +0,0 @@
---
title: Replacing Dropbox in favor of DigitalOcean Spaces
url: replacing-dropbox-in-favor-of-digitalocean-spaces.html
date: 2021-01-24T12:00:00+02:00
draft: false
---

A few months ago I experimented with DigitalOcean Spaces as a backup solution
that could [replace Dropbox
eventually](/digitalocean-spaces-to-sync-between-computers.html). That solution
worked quite nicely, and I was amazed at how well smashing together a couple of
existing tools turned out.

I have been running that solution in the background for a couple of months now
and kind of forgot about it. But recent developments around deplatforming, and
around holding us hostage to technology and big companies, sped up my goal of
becoming less dependent on
[Google](https://edition.cnn.com/2020/12/17/tech/google-antitrust-lawsuit/index.html),
[Dropbox](https://www.pcworld.com/article/2048680/dropbox-takes-a-peek-at-files.html)
etc. and taking back some control.

I am not a conspiracy-theory nut, but to be honest, what these companies have
been doing lately is out of control. It is a matter of principle at this
point. I have almost completely degoogled my life, all the way from ditching
Gmail and YouTube to most of the services surrounding Google. And I must tell
you, I feel so good. I haven't felt this way in a long time.

**Anyways. Let's get to the meat of things.**

Before you continue, you should read my post about [syncing with DigitalOcean
Spaces](/digitalocean-spaces-to-sync-between-computers.html).

> Also to note, I am using Linux on my machine with the Gnome desktop
> environment. This should work on macOS too. To use this on Windows I suggest
> using the [Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10)
> or [Cygwin](https://www.cygwin.com/).

## Folder structure

I liked the structure from Dropbox: one folder where everything is located and
synced. So that's what I adopted for my sync setup too.

```
~/Vault
 ↳ backup
 ↳ bin
 ↳ documents
 ↳ projects
```

All of my code is located in the `~/Vault/projects` folder, and most of the
projects are Git repositories. I do not use this sync method for backup per se,
but in case I reinstall my machine I can easily recreate all the important
folder structure with one quick command. No external drives needed, which can
fail, etc.

## Sync script

My sync script is located in `~/Vault/bin/vault-backup.sh`.

```bash
#!/bin/bash

# dconf load /com/gexperts/Tilix/ < tilix.dconf
# 0 2 * * * sh ~/Vault/bin/vault-backup.sh

cd ~/Vault/backup/dotfiles

MACHINE=$(whoami)@$(hostname)
mkdir -p $MACHINE
cd $MACHINE

cp ~/.config/VSCodium/User/settings.json settings.json
cp ~/.s3cfg s3cfg
cp ~/.bash_extended bash_extended
cp ~/.ssh ssh -rf

codium --list-extensions > vscode-extension.txt
dconf dump /com/gexperts/Tilix/ > tilix.dconf

cd ~/Vault
s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://bucket-name/backup/

echo `date +"%D %T"` >> ~/.vault.log

notify-send \
    -u normal \
    -i /usr/share/icons/Adwaita/96x96/status/security-medium-symbolic.symbolic.png \
    "Vault sync succeeded at `date +"%D %T"`"
```

This script also backs up some of the dotfiles I use and sends a notification
to the Gnome notification center. It is a straightforward solution. Nothing
special going on.

> One obvious benefit of this is that I can omit syncing Node's `node_modules`,
> Python's `.venv`, and `.git` folders.

You can use this script in combination with [Cron](https://en.wikipedia.org/wiki/Cron).

```
0 2 * * * sh ~/Vault/bin/vault-backup.sh
```

When you start syncing your local files with the remote server, you can review
your items on DigitalOcean.

![Dropbox Spaces](/assets/dropbox-sync/dropbox-spaces.png)

I have been using this script for quite some time now, and it's working
flawlessly. I have also uninstalled Dropbox and stopped using it completely.

All I need to do now is write a Bash script that does the reverse and downloads
from the remote server to the local folder. That could be another post.
diff --git a/content/posts/2021-01-25-goaccess.md b/content/posts/2021-01-25-goaccess.md
deleted file mode 100644
index 1b6a330..0000000
--- a/content/posts/2021-01-25-goaccess.md
+++ /dev/null
@@ -1,202 +0,0 @@
---
title: Using GoAccess with Nginx to replace Google Analytics
url: using-goaccess-with-nginx-to-replace-google-analytics.html
date: 2021-01-25T12:00:00+02:00
draft: false
---

## Introduction

I know! You cannot simply replace Google Analytics with parsing access logs and
displaying a couple of charts. But to be honest, I never actually used Google
Analytics to its fullest extent; I was usually just interested in seeing page
hits and which pages were visited most often.

I recently moved my blog from Firebase to a VPS and also decided to remove the
Google Analytics tracking code from the site, since it's quite malicious: it
tracks users across other pages too and builds a profile of the user, and I've
had it. But I still need some insight into what is happening on the server and
which content is being read the most.

I have looked at many existing solutions, like:

- [Umami](https://umami.is/)
- [Freshlytics](https://github.com/sheshbabu/freshlytics)
- [Matomo](https://matomo.org/)

But the more I looked at them, the more I noticed that I was replacing one evil
with another. Don't get me wrong, some of these solutions are absolutely
fantastic, but they would require installing a database and something like PHP
or Node, and I was not ready to put those things on my fresh server. Having
Docker installed was also out of the question.
32
33## Opting for log parsing
34
35So, I defaulted to parsing already existing logs and generating HTML reports
36from this data.
37
38I found this amazing software [GoAccess](https://goaccess.io/) which provides
39all the functionalities I need, and it's a single binary. Written in Go.
40
41GoAccess can be used in two different modes.
42
43![GoAccess Terminal](/assets/goaccess/goaccess-dash-term.png)
44<center><i>Running in a terminal</i></center>
45
46![GoAccess HTML](/assets/goaccess/goaccess-dash-html.png)
47<center><i>Running in a browser</i></center>
48
49I, however, need this to run in a browser, so the second option is the way to
50go. The idea is to periodically run a cronjob that exports the report into a
51folder which is then served by Nginx behind Basic authentication.
52
53## Getting Nginx ready
54
55I chose Ubuntu on [DigitalOcean](https://www.digitalocean.com/). First I
56installed [Nginx](https://nginx.org/en/), and
57[Letsencrypt](https://letsencrypt.org/getting-started/) certbot and all the
58necessary dependencies.
59
60```sh
61# log in as root user
62sudo su -
63
64# first let's update the system
65apt update && apt upgrade -y
66
67# let's install
68apt install nginx certbot python3-certbot-nginx apache2-utils
69```
70
71After all this is installed, we can create a new configuration for the
72statistics. Stats will be available at `stats.domain.com`.
73
74```sh
75# creates directory where html will be hosted
76mkdir -p /var/www/html/stats.domain.com
77
78cp /etc/nginx/sites-available/default /etc/nginx/sites-available/stats.domain.com
79nano /etc/nginx/sites-available/stats.domain.com
80```
81
82```nginx
83server {
84 root /var/www/html/stats.domain.com;
85 server_name stats.domain.com;
86
87 index index.html;
88 location / {
89 try_files $uri $uri/ =404;
90 }
91}
92```
93
94Now we check if the configuration is valid with `nginx -t`. If all is well, we
95can restart Nginx with `service nginx restart`.
96
97After all that, you should add an A record for this domain that points to the
98IP of the droplet.
99
100Before enabling SSL you should test if DNS records have propagated with `curl
101stats.domain.com`.
102
103Now, it's time to provision a TLS certificate. To achieve this, execute the
104command `certbot --nginx`. Follow the wizard, and when you are asked about
105redirection, choose 2 (always redirect to HTTPS).
106
107When this is done, you can visit https://stats.domain.com and you should get a
108404 Not Found error, which is expected at this point.
109
110## Getting GoAccess ready
111
112If you are using a Debian-like system, GoAccess should be available in the
113repository. Otherwise, refer to the official website.
114
115```sh
116apt install goaccess
117```
118
119To enable geolocation we also need one additional thing, the GeoLite2 database.
120
121```sh
122cd /var/www/html/stats.domain.com
123wget https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb
124```
125
126Now we create a shell script that will be executed every 10 minutes.
127
128```sh
129nano /var/www/html/stats.domain.com/generate-stats.sh
130```
131
132Contents of this file should look like this.
133
134```sh
135#!/bin/sh
136
137zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log
138
139goaccess \
140 --log-file=/var/log/nginx/access-all.log \
141 --log-format=COMBINED \
142 --exclude-ip=0.0.0.0 \
143 --geoip-database=/var/www/html/stats.domain.com/GeoLite2-City.mmdb \
144 --ignore-crawlers \
145 --real-os \
146 --output=/var/www/html/stats.domain.com/index.html
147
148rm /var/log/nginx/access-all.log
149```
150
151Because Nginx rotates access logs into multiple files (some of them gzipped),
152we use [`zcat`](https://linux.die.net/man/1/zcat) to decompress the gzipped
153contents and build one file with all the access logs. After it is used, we delete it.
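The `-f` flag is what makes this work: `zcat -f` decompresses `.gz` files and passes plain files through unchanged, so current and rotated logs can be concatenated in one pass. A throwaway demonstration with temporary files (not your real logs):

```sh
# throwaway demo: one plain log and one rotated, gzipped log
tmp=$(mktemp -d)
echo "current entry" > "$tmp/access.log"
echo "rotated entry" > "$tmp/access.log.1"
gzip "$tmp/access.log.1"                  # becomes access.log.1.gz

# -f passes the plain file through and decompresses the .gz one
zcat -f "$tmp"/access.log* > "$tmp/access-all.log"
wc -l "$tmp/access-all.log"               # both entries end up in one file
rm -r "$tmp"
```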
154
155If you want to exclude visits from your home IP, look at the `--exclude-ip`
156option in the script and replace `0.0.0.0` with your own home IP address. You
157can find your home IP by executing `curl ifconfig.me` from your local machine,
158NOT from the droplet.
159
160Test the script by executing `sh
161/var/www/html/stats.domain.com/generate-stats.sh` and then checking
162`https://stats.domain.com`. If you see stats instead of a 404, then you are
163set.
164
165It's time to add this script to cron with `crontab -e`.
166
167```
168*/10 * * * * sh /var/www/html/stats.domain.com/generate-stats.sh
169```
170
171## Securing with Basic authentication
172
173You probably don't want stats to be publicly available, so we should create a
174user and a password for Basic authentication.
175
176First we create a password for a user `stats` with `htpasswd -c /etc/nginx/.htpasswd stats`.
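To add more users later, run `htpasswd` without `-c` (the `-c` flag creates, and overwrites, the file). If `apache2-utils` is not available, an entry can also be generated with `openssl`; the `alice`/`s3cret` values below are placeholders:

```sh
# append another user without overwriting the existing file
htpasswd /etc/nginx/.htpasswd alice

# alternative without apache2-utils: build an APR1 entry with openssl
# (replace alice/s3cret with real values)
printf 'alice:%s\n' "$(openssl passwd -apr1 s3cret)" >> /etc/nginx/.htpasswd
```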
177
178Now we update the config file with `nano
179/etc/nginx/sites-available/stats.domain.com`. You probably noticed that the
180file looks a bit different from before. This is because `certbot` added
181additional rules for SSL.
182
183The `location` block of your config file should now look like the one below.
184You should add the `auth_basic` and `auth_basic_user_file` lines to it.
185
186```nginx
187location / {
188 try_files $uri $uri/ =404;
189 auth_basic "Private Property";
190 auth_basic_user_file /etc/nginx/.htpasswd;
191}
192```
193
194Test if config is still ok with `nginx -t` and if it is you can restart Nginx
195with `service nginx restart`.
196
197If you now visit `https://stats.domain.com` you should be prompted for username
198and password. If not, try reopening your browser.
199
200That is all. You now have analytics for your server that gets refreshed every 10
201minutes.
202
diff --git a/content/posts/2021-06-26-simple-world-clock.md b/content/posts/2021-06-26-simple-world-clock.md
deleted file mode 100644
index ed248dd..0000000
--- a/content/posts/2021-06-26-simple-world-clock.md
+++ /dev/null
@@ -1,107 +0,0 @@
1---
2title: Simple world clock with eInk display and Raspberry Pi Zero
3url: simple-world-clock-with-eiink-display-and-raspberry-pi-zero.html
4date: 2021-06-26T12:00:00+02:00
5draft: false
6---
7
8Our team is spread across the world, from the USA all the way to Australia, so
9having some sort of world clock makes sense.
10
11Currently, I am using an extension for Gnome called [Timezone
12extension](https://extensions.gnome.org/extension/2657/timezones-extension/),
13and it serves the purpose quite well.
14
15But I also have a bunch of electronics that I bought over time and am not
16using, and it's time to stop hoarding this stuff and use it in a
17project.
18
19A while ago I bought a small eInk display [Inky
20pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811) and I
21have a bunch of [Raspberry Pi's
22Zero](https://www.raspberrypi.org/products/raspberry-pi-zero/) lying around that
23I really need to use.
24
25![Inky pHAT, Raspberry Pi Zero](/assets/world-clock/hardware.jpg)
26
27Since the [Inky
28pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811) is
29essentially a HAT, it can easily be added on top of the [Raspberry Pi
30Zero](https://www.raspberrypi.org/products/raspberry-pi-zero/).
31
32First, I installed the necessary software on Raspberry Pi with `pip3 install
33inky`.
34
35And then I created a file `clock.py` in home directory `/home/pi`.
36
37```python
38#!/usr/bin/env python
39# -*- coding: utf-8 -*-
40
41import sys
42import os
43from inky.auto import auto
44from PIL import Image, ImageFont, ImageDraw
45from font_fredoka_one import FredokaOne
46
47clocks = [
48 'America/New_York',
49 'Europe/Ljubljana',
50 'Australia/Brisbane',
51]
52
53board = auto()
54board.set_border(board.WHITE)
55board.rotation = 90
56
57img = Image.new('P', (board.WIDTH, board.HEIGHT))
58draw = ImageDraw.Draw(img)
59
60big_font = ImageFont.truetype(FredokaOne, 18)
61small_font = ImageFont.truetype(FredokaOne, 13)
62
63x = board.WIDTH / 3
64y = board.HEIGHT / 3
65
66idx = 1
67for clock in clocks:
68 ctime = os.popen('TZ="{}" date +"%a,%H:%M"'.format(clock))
69 ctime = ctime.read().strip().split(',')
70 city = clock.split('/')[1].replace('_', ' ')
71
72 draw.text((15, (idx*y)-y+10), city, fill=board.BLACK, font=small_font)
73 draw.text((110, (idx*y)-y+7), str(ctime[0]), fill=board.BLACK, font=big_font)
74 draw.text((155, (idx*y)-y+7), str(ctime[1]), fill=board.BLACK, font=big_font)
75
76 idx += 1
77
78board.set_image(img)
79board.show()
80```
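The script shells out to `date` with a `TZ` override rather than using a Python timezone library; each `os.popen` call is equivalent to running this in a shell:

```sh
# weekday and time for a given timezone, comma-separated so the
# Python script can split the result into two fields
TZ="America/New_York" date +"%a,%H:%M"
TZ="Europe/Ljubljana" date +"%a,%H:%M"
TZ="Australia/Brisbane" date +"%a,%H:%M"
```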
81
82And because eInk displays are rather slow to refresh, and the clock only needs
83to be refreshed once a minute, this can be done with a cronjob.
84
85Before we add this job to cron we need to make `clock.py` executable with `chmod
86+x clock.py`.
87
88Then we add a cronjob with `crontab -e`.
89
90```
91* * * * * /home/pi/clock.py
92```
93
94So, we end up with a result like this.
95
96![World Clock](/assets/world-clock/world-clock.jpg)
97
98As for the enclosure, it can be 3D printed. I haven't made one yet, but
99something like this could be used.
100
101<iframe id="vs_iframe" src="https://www.viewstl.com/?embedded&url=https%3A%2F%2Fmitjafelicijan.com%2Fassets%2Fworld-clock%2Fenclosure.stl&color=gray&bgcolor=white&edges=no&orientation=front&noborder=no" style="border:0;margin:0;width:100%;height:400px;"></iframe>
102
103You can download my [STL file for the enclosure
104here](/assets/world-clock/enclosure.stl), but make sure the dimensions make
105sense for your parts. An opening for the USB port should also be added, or just
106use a drill and some hot glue to make it stick in the enclosure.
107
diff --git a/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md b/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md
deleted file mode 100644
index 31a2ea0..0000000
--- a/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md
+++ /dev/null
@@ -1,102 +0,0 @@
1---
2title: My journey from being an internet über consumer to being a full hominum again
3url: from-internet-consumer-to-full-hominum-again.html
4date: 2021-07-30T12:00:00+02:00
5draft: false
6---
7
8It's been almost a year since I started purging all my online accounts and
9going down this rabbit hole of being almost independent of the current internet
10machine. Even though I initially thought that I would have problems adapting,
11I was pleasantly surprised that the transition went so smoothly. Even better,
12it brought many benefits to my life. Such as increased focus, less stress
13about trivial things, etc.
14
15It all started with small changes, like unsubscribing from emails that I had
16subscribed to by accepting terms and conditions, or the more malicious emails
17that I was getting because I was on a shared mailing list. The latter I hate
18the most of all. How the hell do they keep sharing my email and sending me
19unsolicited emails and get away with it? I have a suspicion that these
20marketing people share an Excel file between them and keep resubscribing me
21when they import lists into Mailchimp or similar software.
22
23It's fascinating to see how much crap you get subscribed to when you are not
24paying attention. It got so bad that my primary Gmail address is full of junk
25and needs constant monitoring and cleaning up. And because I want to have Inbox
26Zero, this presents an additional problem for me.
27
28For a long time I didn't realize how much stress email was causing me. I
29noticed that I was unable to go a single hour without hysterically
30refreshing email. And if somebody wrote me something, I needed to see it right
31then, even though I didn't immediately reply to it. I can only describe this
32with FOMO (fear of missing out). I have no other explanation than that. It was
33crippling, and I was constantly context switching, which I will address further
34down this post in more details.
35
36This was one of the reasons why I spun up my personal email server, and I am
37using it now as my primary and personal email. I still have Gmail as my “junk”
38email that I use for throwaway stuff. I log in to Gmail once a week and check
39if there are any important emails, but apart from that, it's sitting
40dormant and collecting dust.
41
42The more I watched the world lose itself by allowing anti-freedom
43things to happen to it, the more I started to realize that something had to
44change. I don't have the power to change the world. And I also don't have a
45grandiose opinion of myself to even think to try it. But what I can do is to not
46subscribe to this consumer way of thinking. I will not be complicit in this. My
47moral and ethical stances won't allow it. So, this brings us to the second part
48of my journey.
49
50I was using all these 3rd party services because I was either lazy or OK with
51their drawbacks. I watched these services and companies become more and
52more evil. It is evil if you sell your users' data in this manner. Nobody reads
53privacy policies and everybody is OK with accepting them, and they prey on that
54flaw in human nature. I really hate the hypocrisy they manage to muster. These
55companies prey on our laziness, and we are at fault here. Nobody else. And I
56truly understand the reasons why we rather accept and move on, and not object
57and have our lives a little more difficult. They have perfected this through
58years of small changes that make us a little more dependent on them. You could
59not convince a person to give away all his rights and data in one day. This was
60gradual and slow. And it caught us all in surprise. When I really stopped and
61thought about it, I felt repulsed. By really stopping and thinking about it, I
62really mean stopping and thinking about it. Thoroughly and in depth.
63
64Each step I took depleted my character a bit more. Like I was trading myself bit
65by bit without understanding what it all meant. What it meant to be a full
66person, not divided by all this bought attention they want from me. They don't
67just get your data, but they also take your attention away from you. They
68scatter your attention and go with the divide-and-conquer tactic from there.
69And a person divided is a person not fully there. Not in the moment. Not fully alive.
70
71I was unable to form long thoughts. Well, I thought I was. But now that I see
72what being a full person is again, I can see that I was not at my 100% back
73then.
74
75A revolt was inevitable. There was no other way of continuing my story without
76it. Without taking back my attention, my thoughts, my time, and my privacy,
77regardless of how late it may already be.
78
79This has nothing to do with conspiracy theories. Even less with changing the
80world. All I wanted was to get my life back in order and not waste the energy
81that could be spent in other, better places.
82
83I started reading more. I can now focus fully on things I work on. Furthermore,
84I have the mental acuity that I never had before. My mind feels sharp. I don't
85get angry so much. I can cherish the finer things in life now without the need
86to interpret them intellectually. Not only that, but I have a feeling of
87belonging again. Sense of purpose has returned with a vengeance. And I can now
88help people without depleting myself.
89
90The last step so far was to finish closing all the remaining online accounts
91that I still had. And when I was thinking what value they bring me, I wasn't
92surprised that the answer was none. I wasn't logging in to them or using them. I
93stopped being afraid of missing out. If somebody wants to get in contact with
94me, they will find a way. I am one search away.
95
96We are not beholden to anybody. Our lives are our own. So dare yourself to
97delete Facebook and LinkedIn. To unsubscribe. Dare yourself to take your time and
98attention back. Use that time and energy to go for a walk without thinking about
99work. Read a book instead of reading comments on social media that you will
100forget in an hour. Enrich your life instead of wasting it. It only requires a
101small step. And you will feel the benefits immediately. Lose the weight of the
102world that is crushing you without your consent.
diff --git a/content/posts/2021-08-01-linux-cheatsheet.md b/content/posts/2021-08-01-linux-cheatsheet.md
deleted file mode 100644
index 3747d43..0000000
--- a/content/posts/2021-08-01-linux-cheatsheet.md
+++ /dev/null
@@ -1,286 +0,0 @@
1---
2title: List of essential Linux commands for server management
3url: linux-cheatsheet.html
4date: 2021-08-01T12:00:00+02:00
5draft: false
6---
7
8**Generate SSH key**
9
10```bash
11ssh-keygen -t ed25519 -C "your_email@example.com"
12
13# when no support for Ed25519 present
14ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
15```
16
17Note: By default, SSH keys are stored in the `/home/<username>/.ssh/` folder.
18
19**Login to host via SSH**
20
21```bash
22# connect to host as your local username
23ssh host
24
25# connect to host as user
26ssh <user>@<host>
27
28# connect to host using port
29ssh -p <port> <user>@<host>
30```
31
32**Execute command on a server through SSH**
33
34```bash
35# execute one command
36ssh root@100.100.100.100 "ls /root"
37
38# execute many commands
39ssh root@100.100.100.100 "cd /root;touch file.txt"
40```
41
42**Displays currently logged in users in the system**
43
44```bash
45w
46```
47
48**Displays Linux system information**
49
50```bash
51uname
52```
53
54**Displays kernel release information**
55
56```bash
57uname -r
58```
59
60**Shows the system hostname**
61
62```bash
63hostname
64```
65
66**Shows system reboot history**
67
68```bash
69last reboot
70```
71
72**Displays information about the user**
73
74```bash
75sudo apt install finger
76finger <username>
77```
78
79**Displays IP addresses and all the network interfaces**
80
81```bash
82ip addr show
83```
84
85**Downloads a file from an online source**
86
87```bash
88wget https://example.com/example.tgz
89```
90
91Note: If the URL contains `?` or `&`, enclose it in double quotes.
92
93**Compress a file with gzip**
94
95```bash
96# will not keep the original file
97gzip file.txt
98
99# will keep the original file
100gzip --keep file.txt
101```
102
103**Interactive disk usage analyzer**
104
105```bash
106sudo apt install ncdu
107
108ncdu
109ncdu <path/to/directory>
110```
111
112**Install Node.js using the Node Version Manager**
113
114```bash
115curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
116source ~/.bashrc
117
118nvm install v13
119```
120
121**Too long; didn't read**
122
123```bash
124npm install -g tldr
125
126tldr tar
127```
128
129**Combine all Nginx access logs to one big log file**
130
131```bash
132zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log
133```
134
135**Set up Redis server**
136
137```bash
138sudo apt install redis-server redis-tools
139
140# check if server is running
141sudo service redis status
142
143# set and get a key value
144redis-cli set mykey myvalue
145redis-cli get mykey
146
147# interactive shell
148redis-cli
149```
150
151**Generate statistics of your webserver**
152
153```bash
154sudo apt install goaccess
155
156# check if installed
157goaccess -v
158
159# combine logs
160zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log
161
162# export to single html
163goaccess \
164 --log-file=/var/log/nginx/access-all.log \
165 --log-format=COMBINED \
166 --exclude-ip=0.0.0.0 \
167 --ignore-crawlers \
168 --real-os \
169 --output=/var/www/html/stats.html
170
171# cleanup afterwards
172rm /var/log/nginx/access-all.log
173```
174
175**Search for a given pattern in files**
176
177```bash
178grep -r 'pattern' files
179```
180
181**Find process ID for a specific program**
182
183```bash
184pgrep nginx
185```
186
187**Print name of current/working directory**
188
189```bash
190pwd
191```
192
193**Creates a blank new file**
194
195```bash
196touch newfile.txt
197```
198
199**Displays first lines in a file**
200
201```bash
202# -n <x> sets the number of lines (10 by default)
203head -n 20 somefile.txt
204```
205
206**Displays last lines in a file**
207
208```bash
209# -n <x> sets the number of lines (10 by default)
210tail -n 20 somefile.txt
211
212# -f follows changes to the file (doesn't close)
213tail -f somefile.txt
214```
215
216**Count lines in a file**
217
218```bash
219wc -l somefile.txt
220```
221
222**Find all instances of the file**
223
224```bash
225sudo apt install mlocate
226
227locate somefile.txt
228```
229
230**Find file names that begin with 'index' in the /home folder**
231
232```bash
233find /home/ -name "index*"
234```
235
236**Find files larger than 100MB in the home folder**
237
238```bash
239find /home -size +100M
240```
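The `find` tests can be chained with an implicit AND; for example, to find log files over 1 MB (the `*.log` pattern and the 1 MB threshold are illustrative):

```sh
# name and size tests chained: only *.log files larger than 1 MB match
find /home -name "*.log" -size +1M
```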
241
242**Displays block devices related information**
243
244```bash
245lsblk
246```
247
248**Displays free space on mounted systems**
249
250```bash
251df -h
252```
253
254**Displays free and used memory in the system**
255
256```bash
257free -h
258```
259
260**Displays all active listening ports**
261
262```bash
263sudo apt install net-tools
264
265netstat -pnltu
266```
267
268**Kill a process violently**
269
270```bash
271kill -9 <pid>
272```
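`kill -9` (SIGKILL) cannot be caught, so the process gets no chance to clean up; prefer plain `kill` (SIGTERM) first and escalate only if the process ignores it. A self-contained demo using `sleep` as a stand-in process:

```sh
# try a graceful SIGTERM first; reach for -9 only as a last resort
sleep 60 &               # stand-in for a misbehaving process
pid=$!
kill "$pid"              # default signal is SIGTERM (15)
wait "$pid"
echo "exit status: $?"   # 128 + 15 = 143 for a process ended by SIGTERM
```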
273
274**List files opened by user**
275
276```bash
277lsof -u <user>
278```
279
280**Execute "df -h", showing periodic updates**
281
282```bash
283# -n 1 means every second
284watch -n 1 df -h
285```
286
diff --git a/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md b/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md
deleted file mode 100644
index 0755282..0000000
--- a/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md
+++ /dev/null
@@ -1,275 +0,0 @@
1---
2title: Debian based riced up distribution for Developers and DevOps folks
3url: debian-based-riced-up-distribution-for-developers-and-devops-folks.html
4date: 2021-12-03T12:00:00+02:00
5draft: false
6---
7
8## Introduction
9
10I have been using [Ubuntu](https://ubuntu.com/) for quite a long time now. I have
11used [Debian](https://www.debian.org/) in the past and
12[Manjaro](https://manjaro.org/). Also had [Arch](https://archlinux.org/) for
13some time and even ran [Gentoo](https://www.gentoo.org/) way back.
14
15What I learned from all this is that I prefer running slightly older versions
16that are stable over a bleeding-edge rolling release. For that reason, I have
17stuck with Ubuntu for a couple of years now. I am also at a point in my life
18where I just don't care what is cool or hip anymore. I just want a stable system
19that doesn't get in my way.
20
21During all this, I noticed that these distributions were getting very bloated
22and a lot of software got included that I usually uninstall on fresh
23installation. Maybe this is my OCD speaking, but why does a fresh installation
24have to use a minimum of 1 GB of RAM out of the box just to show a blank screen in front
25of me? I get it, there are many things included in the distro to make my life
26easier. I understand. But at this point I have a feeling that modern Linux
27distributions are becoming similar to [Node.js project with
28node_modules](https://devhumor.com/content/uploads/images/August2017/node-modules.jpg).
29Just a crazy number of packages serving very little or no purpose, just
30supporting other software.
31
32I felt I needed a fresh start. To start over with something minimal and clean.
33Something that would put a little more joy into using a computer again.
34
35For the first version, I wanted to target the following machines I have at home
36that I want this thing to work on.
37
38```yaml
39# My main stationary work machine
40Resolution: 3840x1080 (Super Ultrawide Monitor 32:9)
41CPU: Intel i7-8700 (12) @ 4.600GHz
42GPU: AMD ATI Radeon RX 470/480/570/570X/580/580X/590
43Memory: 32020MiB
44```
45
46```yaml
47# Thinkpad x220 for testing things and goofing around
48Resolution: 1366x768
49CPU: Intel i5-2520M (4) @ 3.200GHz
50GPU: Intel 2nd Generation Core Processor Family
51Memory: 15891MiB
52```
53
54## How should I approach this?
55
56I knew I wanted to use [minimal Debian netinst
57](https://www.debian.org/CD/netinst/) for the base to give myself a head
58start. No reason to go through changing the installer and also testing all that
59behemoth of a thing. So, some sort of ricing was the only logical option to get
60this thing off the ground somewhat quickly.
61
62> **What is ricing anyway?**
63> The term “RICE” stands for Race Inspired Cosmetic Enhancement. A group of
64> people (could be one, idk) decided to see if they could tweak their own
65> distros like they/others did their cars. This gave rise to a community of
66> Linux/Unix enthusiasts trying to make their distros look cooler and better
67> than others... For more information, read this article
68> [What in the world is ricing!?](https://pesos.github.io/2020/07/14/what-is-ricing.html).
69
70I didn't want this to just be a set of config files for theming purposes. I
71wanted this to include a set of pre-installed tools and services that are being
72used all the time by a modern developer. Theming is just a tiny part of it.
73Fonts being applied across the distro and things like that.
74
75First, I chose the terminal installer and let it load additional components.
76Avoid using the graphical installer in this case.
77
78![](/assets/dfd-rice/install-00.png)
79
80After that, I selected a hostname, created a normal user, set a password for
81that user and the root user, and chose guided mode for disk partitioning.
82
83![](/assets/dfd-rice/install-01.png)
84
85I let it run to install all the things required for the base system and opted
86out of scanning additional media for use by the package manager. Those will be
87downloaded from the internet during installation.
88
89![](/assets/dfd-rice/install-02.png)
90
91I opted out of the popularity contest, and **now comes the important part**.
92Uncheck all the boxes in Software selection and only leave 'standard system
93utilities'. I also left an SSH server, so I was able to log in to the machine
94from my main PC.
95
96![](/assets/dfd-rice/install-03.png)
97
98At this point, I installed GRUB bootloader on the disk where I installed the
99system.
100
101![](/assets/dfd-rice/install-04.png)
102
103That concluded the installation of base Debian and after restarting the computer
104I was prompted with the login screen.
105
106![](/assets/dfd-rice/install-05.png)
107
108Now that I had the base installation, it was time to choose what software I
109wanted to include in this so-called distribution. I wanted an out-of-the-box
110developer experience, so I had plenty to choose from.
111
112Let's not waste time and go through the list.
113
114## Desktop environments
115
116I have been using [Gnome](https://www.gnome.org/) for my whole Linux life. From
117version 2 forward. It's been quite a ride. I hated version 3 when it came out
118and replaced version 2. But I got used to it. And now with version 40+ they have
119made a couple of changes which I found both frustrating and pleasantly surprising.
120
121The amount of vertical space you lose because of the beefy title bars on
122windows is ridiculous. And then in case of
123[Tilix](https://gnunn1.github.io/tilix-web/) you also have tabs, and you are
124100px deep. Vertical space is one of the most important things for a
125developer. The more real estate you have, the more code you can have in a
126viewport.
127
128But on the other hand, I still love how Gnome feels and looks. I gotta give them
129that. They really are trying to make Gnome feel unified and modern.
130
131Regardless of all the nice things Gnome has, I was looking at the tiling window
132managers for some time, but never had the nerve to actually go with it. But now
133was the ideal time to give it a go. No guts, no glory kind of a thing.
134
135One of the requirements for me was easy custom layouts because I use a really
136strange monitor with aspect ratio of 32:9. So relying on included layouts most
137of them have is a non-starter.
138
139What I was doing in Gnome was having windows in a layout like the diagram
140below. This is my common practice. And if you look at it you can clearly see I
141was replicating tiling window manager setup in Gnome.
142
143![](/assets/dfd-rice/layout.png)
144
145That made me look into a bunch of tiling window managers and test them
146out. The candidates I was looking at were:
147
148- [i3](https://i3wm.org/)
149- [bspwm](https://github.com/baskerville/bspwm)
150- [awesome](https://awesomewm.org/index.html)
151- [XMonad](https://xmonad.org/)
152- [sway](https://swaywm.org/)
153- [Qtile](http://www.qtile.org/)
154- [dwm](https://dwm.suckless.org/)
155
156You can also check article [13 Best Tiling Window Managers for
157Linux](https://www.tecmint.com/best-tiling-window-managers-for-linux/) I was
158referencing while testing them out.
159
160While all of them provided what I needed, I liked i3 the most. What
161particularly caught my eye was its ease of use and its tree-based layouts,
162which allow flexible arrangements. I know the others can also be set up with
163custom layouts other than spiral, dwindle, etc. I think i3 is a good
164entry-level window manager for somebody like me.
165
166## Batteries included
167
168The source for the whole thing is located on Github
169https://github.com/mitjafelicijan/dfd-rice.
170
171Currently included:
172
173- `non-free` (enables non-free packages in apt)
174- `sudo` (adds sudo and adds user to sudo group)
175- `essentials` (gcc, htop, zip, curl, etc...)
176- `wifi` (network manager nmtui)
177- `desktop` (i3, dmenu, fonts, configurations)
178- `pulseaudio` (pulseaudio with pavucontrol)
179- `code-editors` (vim, micro, vscode)
180- `ohmybash` (make bash pretty)
181- `file-managers` (mc)
182- `git-ui` (terminal git gui)
183- `meld` (diff tool)
184- `profiling` (kcachegrind, valgrind, strace, ltrace)
185- `browsers` (brave, firefox, chromium)
186- programming languages:
187 - `python`
188 - `golang`
189 - `nodejs`
190 - `rust`
191 - `nim`
192 - `php`
193 - `ruby`
194- `docker` (with docker-compose)
195- `ansible`
196
197The install script also allows you to install only specific packages (example:
198essentials ohmybash docker rust).
199
200```sh
201su - root \
202 bash -c "$(wget -q https://raw.github.com/mitjafelicijan/dfd-rice/master/tools/install.sh -O -)" -- \
203 essentials ohmybash docker rust
204```
205
206Currently, most of these recipes use what Debian provides, and this is totally
207fine with me since I never use bleeding-edge features of a package. But if
208something major comes to light, I will replace it with a compilation script or
209something similar.
210
211This is some of the output from the installation script.
212
213![](/assets/dfd-rice/script.png)
214
215Let's take a look at some examples in the installation script.
216
217### Docker recipe
218
219```sh
220# docker
221print_header "Installing Docker"
222curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --yes --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
223echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
224apt update
225apt -y install docker-ce docker-ce-cli containerd.io docker-compose
226
227systemctl start docker
228systemctl enable docker
229systemctl status docker --no-pager
230
231/sbin/usermod -aG docker $USERNAME
232```
233
234### Making bash pretty
235
236I really like [Oh My Zsh](https://ohmyz.sh/), but I don't like zsh shell. When
237I used it, I constantly needed to be aware of it and running bash scripts was a
238pain. So, I was really delighted when I found out that a version for bash
239existed called [Oh My Bash](https://ohmybash.nntoan.com/). Let's take a look at
240the recipe for installing it.
241
242```sh
243# ohmybash
244print_header "Enabling OhMyBash"
245sudo -u $USERNAME sh -c "$(curl -fsSL https://raw.github.com/ohmybash/oh-my-bash/master/tools/install.sh)" &
246T1=${!}
247wait ${T1}
248```
249
250Because OhMyBash does `exec bash` at the end, it traps our script inside
251another shell and the script cannot continue. For that reason, I executed it
252in the background. But that presents a new problem: because it runs in the
253background, we naturally lose track of its progress. The trick with
254`T1=${!}` and `wait ${T1}` waits for the background process to finish before
255continuing to the next task in the bash script.
256
257Check [Multi-Threaded Processing in Bash Scripts](https://www.cloudsavvyit.com/12277/how-to-use-multi-threaded-processing-in-bash-scripts/)
258for more details.
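For comparison, the same launch-in-background-and-wait pattern can be sketched in Python with the standard `subprocess` module (my own illustration, not part of the install script):

```python
import subprocess
import sys

# Start a child process without blocking, the equivalent of bash's `cmd &`.
proc = subprocess.Popen([sys.executable, "-c", "print('hello from the background')"])

# Block until the child finishes, the equivalent of `T1=${!}; wait ${T1}`.
returncode = proc.wait()
print("child exited with", returncode)
```

In bash, `${!}` expands to the PID of the most recently started background job, which is exactly what `wait` then blocks on.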
259
## Conclusion

Take a look at the [install.sh](https://github.com/mitjafelicijan/dfd-rice/blob/develop/tools/install.sh) script to get familiar with it. This is just the first iteration, and I will continue to update it because I need this in my life.
266
The current version boots to the login prompt in 4 seconds, and after you log in, the desktop environment loads in 2 seconds. So it's fast, very fast. And on a clean boot, I measured ~230 MB of RAM usage.

And this is how it looks with two terminals side by side. I really like the simplicity and the clean interface. I will still polish the colors and things like that, but I really do like the results.

![](/assets/dfd-rice/desktop.png)
diff --git a/content/posts/2021-12-25-running-golang-application-as-pid1.md b/content/posts/2021-12-25-running-golang-application-as-pid1.md
deleted file mode 100644
index 60d0400..0000000
--- a/content/posts/2021-12-25-running-golang-application-as-pid1.md
+++ /dev/null
@@ -1,347 +0,0 @@
---
title: Running Golang application as PID 1 with Linux kernel
url: running-golang-application-as-pid1.html
date: 2021-12-25T12:00:00+02:00
draft: false
---
7
## Unikernels, kernels, and alike

I have been reading a lot about [unikernels](https://en.wikipedia.org/wiki/Unikernel) lately and find them very intriguing. When you push away all the marketing speak and look at the idea, it makes a lot of sense.

> A unikernel is a specialized, single address space machine image constructed
> by using library operating systems. ([Wikipedia](https://en.wikipedia.org/wiki/Unikernel))

I really like the explanation in the article [Unikernels: Rise of the Virtual Library Operating System](https://queue.acm.org/detail.cfm?id=2566628). It is well worth a read.

If we compare a normal operating system to a unikernel side by side, they look something like this.
24
![Virtual machines vs Containers vs Unikernels](/assets/pid1/unikernels.png)

From this image, we can see how the complexity significantly decreases with the use of unikernels. This comes at a price, of course. Unikernels are hard to get running and require a lot of work, since you don't have an actual proper kernel running in the background providing network access, drivers, etc.

So, as a half step toward making the stack simpler, I started looking into using the Linux kernel as a base and going from there. I came across this [YouTube video about Building the Simplest Possible Linux System](https://www.youtube.com/watch?v=Sk9TatW9ino) by [Rob Landley](https://landley.net), and apart from statically compiling the application to run as PID 1, there were really no other obstacles.
37
## What is PID 1?

PID 1 is the first process the Linux kernel starts after the boot process.
It also has a couple of properties that are unique to it.

- When the process with PID 1 dies for any reason, all other processes are
  killed with the KILL signal.
- When any process with children dies for any reason, its children are
  re-parented to the process with PID 1.
- Many signals whose default action is Term have no default action for PID 1.
- When the process with PID 1 dies for any reason, the kernel panics, which
  results in a system crash.

PID 1 is usually an init application which takes care of starting other
processes and handling services like:

- sshd,
- nginx,
- pulseaudio,
- etc.

If you are on a Linux machine, you can check which process has PID 1
by running the following.
61
```sh
$ cat /proc/1/status
Name: systemd
Umask: 0000
State: S (sleeping)
Tgid: 1
Ngid: 0
Pid: 1
PPid: 0
...
```
73
As we can see, on my machine the process with ID 1 is [systemd](https://systemd.io/), a software suite that provides an array of system components for Linux operating systems. If you look closely, you can also see that the `PPid` (the process ID of the parent process) is `0`, which additionally confirms that this process doesn't have a parent.
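The status file is just key/value lines, so this check is easy to script. Here is a small Python sketch of my own (not from the original post) that parses a `/proc/<pid>/status` file into a dictionary:

```python
def parse_proc_status(text):
    """Parse the key/value lines of a /proc/<pid>/status file into a dict."""
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(':')
        if key:
            fields[key.strip()] = value.strip()
    return fields

# On a Linux machine:
#   with open('/proc/1/status') as fp:
#       status = parse_proc_status(fp.read())
#   print(status['Name'], status['PPid'])
```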
79
## So why even run an application as PID 1 instead of just using a container?

Containers are wonderful, but they come with a lot of baggage. Because they are layered by nature, the images require quite a lot of space, and also a lot of additional software to handle them. They are not as lightweight as they seem, and many popular images require 500 MB or more of disk space.

Running an application as PID 1 results in a significantly smaller footprint, as we will see later in the post.
89
> You could also run a simple init system inside a Docker container, as described
> in the article [Docker and the PID 1 zombie reaping problem](https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/).
92
## The master plan

1. Compile the Linux kernel with the default configuration.
2. Prepare a statically compiled Hello World application in Golang.
3. Run it with [QEMU](https://www.qemu.org/), providing the Golang application
   as the init application / PID 1.

For the sake of simplicity, we will not be cross-compiling any of it and will just use the 64-bit version.
102
## Compiling Linux kernel

```sh
$ wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.15.7.tar.xz
$ tar xf linux-5.15.7.tar.xz

$ cd linux-5.15.7

$ make clean

# read more about this https://stackoverflow.com/a/41886394
$ make defconfig

$ time make -j `nproc`

$ cd ..
```
120
At this point we have a kernel image located at `arch/x86_64/boot/bzImage`.
We will use it in QEMU later.

To make our lives a bit easier, let's move the kernel image somewhere else.
Create a folder `bin/` in the root of our project with `mkdir -p bin`.

Now we can copy `bzImage` into the `bin/` folder with
`cp linux-5.15.7/arch/x86_64/boot/bzImage bin/bzImage`.

The folder structure of this experiment should look like this.
132
```
pid1/
  bin/
    bzImage
  linux-5.15.7/
  linux-5.15.7.tar.xz
```
140
## Preparing PID 1 application in Golang

This step is relatively easy. The only thing we must keep in mind is that the binary needs to be statically compiled.

Let's create an `init.go` file in the root of the project.
147
```go
package main

import (
    "fmt"
    "time"
)

func main() {
    for {
        fmt.Println("Hello from Golang")
        time.Sleep(1 * time.Second)
    }
}
```
163
Notice that we have a forever loop in `main`, with a simple one-second sleep so we don't overwhelm the CPU. This is because PID 1 should never complete or exit; that would result in a kernel panic, which is BAD!

A Golang application can be compiled either dynamically or statically.
169
To statically compile the binary, use the following command.

```sh
$ go build -ldflags="-extldflags=-static" init.go
```
175
We can also check that the binary really is statically linked with:

```sh
$ file init
init: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=Ypu8Zw_4NBxm1Yxg2OYO/H5x721rQ9uTPiDVh-VqP/vZN7kXfGG1zhX_qdHMgH/9vBfmK81tFrygfOXDEOo, not stripped

$ ldd init
not a dynamic executable
```
185
At this point, we need to create an [initramfs](https://www.linuxfromscratch.org/blfs/view/svn/postlfs/initramfs.html) (short for "initial RAM file system", the successor of initrd). It is a cpio archive of the initial file system that gets loaded into memory during the Linux startup process.
190
```sh
$ echo init | cpio -o --format=newc > initramfs
$ mv initramfs bin/initramfs
```
195
The project at this stage should look like this.

```
pid1/
  bin/
    bzImage
    initramfs
  linux-5.15.7/
  linux-5.15.7.tar.xz
  init.go
```
207
## Running all of it with QEMU

[QEMU](https://www.qemu.org/) is a free and open-source hypervisor. It emulates the machine's processor through dynamic binary translation and provides a set of different hardware and device models for the machine, enabling it to run a variety of guest operating systems.

```sh
$ qemu-system-x86_64 -serial stdio -kernel bin/bzImage -initrd bin/initramfs -append "console=ttyS0" -m 128
```
218
```sh
$ qemu-system-x86_64 -serial stdio -kernel bin/bzImage -initrd bin/initramfs -append "console=ttyS0" -m 128
[ 0.000000] Linux version 5.15.7 (m@khan) (gcc (GCC) 11.2.1 20211203 (Red Hat 11.2.1-7), GNU ld version 2.37-10.fc35) #7 SMP Mon Dec 13 10:23:25 CET 2021
[ 0.000000] Command line: console=ttyS0
[ 0.000000] x86/fpu: x87 FPU will use FXSAVE
[ 0.000000] signal: max sigframe size: 1440
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000007fdffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000000007fe0000-0x0000000007ffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 2.8 present.
[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-6.fc35 04/01/2014
[ 0.000000] tsc: Fast TSC calibration failed
...
[ 2.016106] ALSA device list:
[ 2.016329] No soundcards found.
[ 2.053176] Freeing unused kernel image (initmem) memory: 1368K
[ 2.056095] Write protecting the kernel read-only data: 20480k
[ 2.058248] Freeing unused kernel image (text/rodata gap) memory: 2032K
[ 2.058811] Freeing unused kernel image (rodata/data gap) memory: 500K
[ 2.059164] Run /init as init process
Hello from Golang
[ 2.386879] tsc: Refined TSC clocksource calibration: 3192.032 MHz
[ 2.387114] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2e02e31fa14, max_idle_ns: 440795264947 ns
[ 2.387380] clocksource: Switched to clocksource tsc
[ 2.587895] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Hello from Golang
Hello from Golang
Hello from Golang
```
253
The whole log file is [here](/assets/pid1/qemu.log).
255
## Size comparison

The cool thing about this approach is that the Linux kernel and the application together take only around 12 MB, which is impressive as hell. Note that the size of bzImage (the Linux kernel) could be decreased even further by going into `make menuconfig` and removing a ton of features from the kernel. I managed to get the kernel size down to 2 MB and still have it work properly.

```sh
total 12M
-rw-r--r--. 1 m m 9.3M Dec 13 10:24 bzImage
-rw-r--r--. 1 m m 1.9M Dec 27 01:19 initramfs
```
270
## Creating ISO image and running it with Gnome Boxes

First, we need to create a proper folder structure with `mkdir -p iso/boot/grub`.

Then we need to download the [grub binary](https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito). You can read more about this program at https://github.com/littleosbook/littleosbook.

```sh
$ wget -O iso/boot/grub/stage2_eltorito https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito
```
281
```sh
$ tree iso/boot/
iso/boot/
├── bzImage
├── grub
│   ├── menu.lst
│   └── stage2_eltorito
└── initramfs
```

Let's copy the files into the proper folders.

```sh
$ cp stage2_eltorito iso/boot/grub/
$ cp bin/bzImage iso/boot/
$ cp bin/initramfs iso/boot/
```
300
Let's create a GRUB config file at `iso/boot/grub/menu.lst` with the following contents.

```ini
default=0
timeout=5

title GoAsPID1
kernel /boot/bzImage
initrd /boot/initramfs
```
311
Let's create the ISO file using genisoimage:

```sh
genisoimage -R \
  -b boot/grub/stage2_eltorito \
  -no-emul-boot \
  -boot-load-size 4 \
  -A os \
  -input-charset utf8 \
  -quiet \
  -boot-info-table \
  -o GoAsPID1.iso \
  iso
```
326
This will produce `GoAsPID1.iso`, which you can boot with [Virtualbox](https://www.virtualbox.org/) or [Gnome Boxes](https://apps.gnome.org/app/org.gnome.Boxes/).
329
<video src="/assets/pid1/boxes.mp4" controls></video>
331
## Is running applications as PID 1 even worth it?

Well, the answer is not as simple as one would think. Sometimes it is worth it and sometimes it's not. For embedded systems and very specialized applications it certainly is. But for typical uses, I don't think so. It was an interesting exercise in compiling kernels and looking at the guts of the Linux kernel, but in my opinion, sticking to containers for most things is the better option.

An interesting experiment would be creating an image that supports networking, deploying it to AWS as an EC2 instance, and observing how it fares. But in that case, we would need to write some sort of supervisor, running on a separate EC2 instance, to check that the other instances are running properly. Remember that if your application fails, the kernel panics and the whole machine becomes inoperable.
347
diff --git a/content/posts/2021-12-30-wap-mobile-web-before-the-web.md b/content/posts/2021-12-30-wap-mobile-web-before-the-web.md
deleted file mode 100644
index 6c598fe..0000000
--- a/content/posts/2021-12-30-wap-mobile-web-before-the-web.md
+++ /dev/null
@@ -1,201 +0,0 @@
---
title: Wireless Application Protocol and the mobile web before the web
url: wap-mobile-web-before-the-web.html
date: 2021-12-30T12:00:00+02:00
draft: false
---
7
## A little stroll down history lane

About two weeks ago, I watched an outstanding documentary on YouTube, [Springboard: the secret history of the first real smartphone](https://www.youtube.com/watch?v=b9_Vh9h3Ohw), about the history of smartphones and phones in general. It brought back so many memories. I never had an actual smartphone before Android. The closest I came was the [Sony Ericsson P1](https://www.gsmarena.com/sony_ericsson_p1-1982.php), a fantastic phone. I broke it in Prague after a party, and that was one of those rare occasions where I was actually mad at myself. But nevertheless, after that phone, the next one was an Android.
19
Before that, I only owned normal phones from Nokia, Siemens, etc. Nothing special, actually. These are the phones we are talking about, from before 2007, when Apple and Android phones didn't exist yet.

These phones were rocking:

- No selfie cameras.
- ~2-inch displays.
- ~120 MHz beast CPUs.
- 144p main cameras.
- But they had a headphone jack.

Let's take a look at these beauties.
33
![Old phones](/assets/wap/phones.gif)
35
## WAP - Wireless Application Protocol

Not that one! We are talking about the Wireless Application Protocol, not Cardi B's song 😃

WAP stands for Wireless Application Protocol. It is a protocol designed for micro-browsers, and it enables internet access on mobile devices. It uses the markup language WML (Wireless Markup Language, not HTML), which is defined as an XML 1.0 application. Furthermore, it enables creating web applications for mobile devices. In 1998, the WAP Forum was founded by Ericsson, Motorola, Nokia, and Unwired Planet, with the aim of standardizing the various wireless technologies via protocols. [(source)](https://www.geeksforgeeks.org/wireless-application-protocol/)

The WAP protocol resulted from the joint efforts of the various members of the WAP Forum. In 2002, the WAP Forum merged with various other industry forums, resulting in the formation of the Open Mobile Alliance (OMA). [(source)](https://www.geeksforgeeks.org/wireless-application-protocol/)
54
Those were some wild times. Devices had tiny screens, and data transmission rates were abominable. But they were capable of rendering WML (Wireless Markup Language), which was actually very similar to HTML. It is a markup language, after all.

These pages could be served by [Apache](https://apache.org/) and generated by CGI scripts on the backend. The only difference was the limited markup language.
63
## WML - Wireless Markup Language

Just like web browsers use HTML for content structure, older mobile device browsers use WML. If you need to support really old mobile phones with WML browsers, you will need to know about it. WML is XML-based (an XML vocabulary just like XHTML and MathML, but not HTML) and does not use the same metaphor as HTML. HTML is a single document with some metadata packed away in the head and a body encapsulating the visible page. With WML, the metaphor does not envisage a page, but rather a deck of cards. A WML file might have several pages, or cards, contained within it. [(source)](https://www.w3.org/wiki/Introduction_to_mobile_web)
75
```html
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <card id="home" title="Example Homepage">
    <p>Welcome to the Example homepage</p>
  </card>
</wml>
```
85
There is an amazing tutorial on [Tutorialspoint about WML](https://www.tutorialspoint.com/wml/index.htm).
88
## Converting Digg to WML

This task is completely useless and not really feasible nowadays, but I had to give it a try for old times' sake. Since the data is already there in the form of an RSS feed, I could parse the feed and create a WML version of the homepage.
95
We will need:

- Python 3 + pip
- ImageMagick
- feedparser and Mako templating

```sh
# for fedora 35
sudo dnf install ImageMagick python3-pip

# templating engine for python
pip install mako --user

# for parsing rss feeds
pip install feedparser --user
```
112
The project folder structure should look like the following.

```
$ tree -L 1
.
├── generate.py
└── template.wml
```
122
After that, I created a small template for the homepage.

```html
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.2//EN" "http://www.wapforum.org/DTD/wml_1.2.xml">

<wml>

  <card title="Digg - What the Internet is talking about right now">

    % for item in entries:
    <p><img src="/images/${item.id}.jpg" width="175" height="95" alt="${item.title}" /></p>
    <p><small>${item.kicker}</small></p>
    <p><big><b>${item.title}</b></big></p>
    <p>${item.description}</p>
    % endfor

  </card>

</wml>
```
144
And the script that parses the RSS feed looks like this.

```python
import os
import feedparser
from mako.template import Template

# folder for the generated page and the downloaded images
os.system('mkdir -p www/images')

template = Template(filename='template.wml')

feed = feedparser.parse('https://digg.com/rss/top.xml')

# only take the first 15 entries from the feed
entries = feed.entries[:15]

for entry in entries:
    print('Processing image with id {}'.format(entry.id))
    # download the article image and convert it to a small grayscale JPEG
    os.system('wget -q -O www/images/{}.jpg "{}"'.format(entry.id, entry.links[1].href))
    os.system('convert www/images/{}.jpg -type Grayscale -resize 175x -depth 3 -quality 30 www/images/{}.jpg'.format(entry.id, entry.id))

html = template.render(entries=entries)

with open('www/index.wml', 'w+') as fp:
    fp.write(html)
```
170
This script will create a folder `www` and, inside it, a folder `www/images` for storing the resized images.

> Be sure you don't use SSL and serve the content over plain HTTP. These old
> phones will have problems with TLS 1.3, etc.

If you look at the Python file, I convert all the images into tiny black-and-white images. They should really be WBMP (Wireless BitMaP), but I chose JPEGs for this, and it seems to work properly.
180
Because I currently don't have a phone old enough to test it on, I used an emulator. And it was really hard to find one. I found [WAP Proof](http://wap-proof.sharewarejunction.com/) on Shareware Junction, and it did the job well enough. I will try to find an actual device to test it on.
185
<video src="/assets/wap/emulator.mp4" controls></video>
187
If you are using Nginx to serve the content, add an `index` directive to the virtual host configuration so it automatically serves the `index.wml` file.

```nginx
server {
    index index.wml index.html index.htm index.nginx-debian.html;
}
```
196
## Conclusion

Well, this was pointless, but very fun! I hope you enjoyed it as much as I did. I will try to find an old phone to test it on. If you have any questions, feel free to ask in the comments.
diff --git a/content/posts/2022-06-30-trying-out-helix-editor.md b/content/posts/2022-06-30-trying-out-helix-editor.md
deleted file mode 100644
index 23c1cf3..0000000
--- a/content/posts/2022-06-30-trying-out-helix-editor.md
+++ /dev/null
@@ -1,52 +0,0 @@
---
title: Trying out Helix code editor as my main editor
url: tying-out-helix-code-editor.html
date: 2022-06-30T12:00:00+02:00
draft: false
---
7
I have been searching for a lightweight code editor for quite some time. One of the main reasons is that I wanted something that doesn't burn through CPU and whose RAM usage is not through the roof. I have been mostly using Visual Studio Code. It's been an outstanding editor, and I have no quarrel with it at all. It's just time to spice life up with something new.
13
I have been on this search for a couple of years. I have tried Vim, Neovim, Emacs, Doom Emacs, Micro, and a couple more. Among them, I liked Micro and Doom Emacs the most. The Micro editor was a little too basic for me, and Doom Emacs was a bit too hardcore. This does not reflect on any of the editors; it's just my personal preference.
19
> I tried Helix Editor about a year ago but didn't pay attention to it. I saw
> it's similar to Vi and just said no. I was too quick to dismiss it.
23
One of the things I actually miss is line wrapping for certain files. When writing Markdown, line wrapping would be very helpful; editing such a document without it is frustrating, to say the least. Some Markdown to HTML converters don't take kindly to new lines between sentences. Not paragraphs, sentences. And I use Markdown to write this blog you are reading.
29
But other than this, I have been extremely satisfied with it. It's been a pleasant surprise, and there have been zero issues with the editor.

One thing to do before you can use autocompletion and Language Server support is to install the language server with NPM. For TypeScript, for example:

```sh
npm install -g typescript typescript-language-server
```
39
I am still getting used to the keyboard shortcuts and getting better. What Helix does really well is pack in sane defaults; even though there is currently no plugin support, I haven't found any need for plugins. It has all that you would need. It goes to great lengths to show the user what is going on, with popups that show you what the keyboard shortcuts are.
45
And it comes packed with many
[really good themes](https://github.com/helix-editor/helix/wiki/Themes).
48
![Editor](/assets/helix-editor/editor.png)
50
It's still young but has a mature feeling to it. It has sane defaults and mimics Vim (it works a bit differently, but the overall idea is similar).
diff --git a/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md b/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md
deleted file mode 100644
index e26088b..0000000
--- a/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md
+++ /dev/null
@@ -1,363 +0,0 @@
---
title: What would DNA sound like if synthesized to an audio file
url: what-would-dna-sound-if-synthesized.html
date: 2022-07-05T12:00:00+02:00
draft: false
---
7
## Introduction

Lately, I have been thinking a lot about the nature of life, the foundational building blocks of life, and things like that. It's remarkable how complex, and at the same time how simple, creation is when you look at it. The miracle of life keeps us grounded when our imagination goes wild. If DNA is the building block of life, you could consider it an API that nature provided us to better understand all of this chaos masquerading as order.
16
I have been reading a lot about superintelligence and our somewhat misguided path to creating general artificial intelligence. What would the building blocks of our creation look like? Is compression really the ultimate storage of information? Will our creations also ponder these questions when creating new worlds for themselves, or will we just disappear into the vastness of possibilities? It is a little offensive that we are playing God while being completely ignorant of our own reality. Who knows! Like many other breakthroughs, this one will also come at a cost not known to us when it finally happens.
26
To keep things a bit lighter, I decided to convert some popular DNA sequences into audio files for us to listen to. I am not the first one, nor will I be the last, to do this. But it is an interesting exercise in better understanding the relationship between art and science. Maybe listening to DNA instead of parsing it will lead to a better understanding, or at least to enjoying the creation and the cryptic nature of life.
33
## DNA encoding and primer example

I explored DNA in the past, in my post from about 3 years ago, [Encoding binary data into DNA sequence](/encoding-binary-data-into-dna-sequence.html), where I converted all sorts of data into DNA sequences.

This will be a similar exercise, but instead of converting data to DNA, I will be generating tones from nucleotides.
43
| Nucleotides      | Note | Frequency |
| ---------------- | ---- | --------- |
| **A** (Adenine)  | A    | 440 Hz    |
| **C** (Cytosine) | C    | 523.25 Hz |
| **G** (Guanine)  | G    | 783.99 Hz |
| **T** (Thymine)  | D    | 587.33 Hz |
50
Since there is no T note in the equal-tempered scale, I chose D to represent T.

You can check [Frequencies for equal-tempered scale, A4 = 440 Hz](https://pages.mtu.edu/~suits/notefreqs.html). For this tuning, we also assume `Speed of Sound = 345 m/s = 1130 ft/s = 770 miles/hr`.
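As a quick sanity check of those frequencies, every value on the equal-tempered scale follows from A4 = 440 Hz via f = 440 · 2^(n/12), where n is the number of semitones above A4. A small Python sketch of my own (not part of the original script):

```python
A4 = 440.0  # reference pitch in Hz

def equal_tempered(semitones_above_a4):
    """Frequency, in Hz, of a note n semitones above A4 on the equal-tempered scale."""
    return round(A4 * 2 ** (semitones_above_a4 / 12), 2)

print(equal_tempered(3))   # 523.25 — C5, used for Cytosine
print(equal_tempered(5))   # 587.33 — D5, used for Thymine
print(equal_tempered(10))  # 783.99 — G5, used for Guanine
```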
56
Now that we have this out of the way, we can also brush up on DNA sequencing a bit. This is a famous quote I also used for the encoding tests, and it goes like this.
60
> How wonderful that we have met with a paradox. Now we have some hope of
> making progress.
> ― Niels Bohr
64
```shell
>SEQ1
GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA
GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA
ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA
ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT
GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT
GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC
AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC
AACC
```
76
This is what we are going to work with when creating the parser and the waveform generator.
79
## Parsing DNA data

This step is a rather simple one. All we need to do is parse the input DNA sequence in [FASTA format](https://en.wikipedia.org/wiki/FASTA_format), well known in [Bioinformatics](https://en.wikipedia.org/wiki/Bioinformatics), to extract single nucleotides that will be converted into separate tones based on the equal-tempered scale explained above.
87
```python
nucleotide_tone_map = {
    'A': 440,
    'C': 523.25,
    'G': 783.99,
    'T': 587.33,  # mapped to the D note
}

def generate_from_dna_sequence(sequence):
    # a string is already iterable character by character
    for nucleotide in sequence:
        print(nucleotide, nucleotide_tone_map[nucleotide])
```
103
## Generating sine wave

Because we are essentially creating a long stream of notes, we will append the sine tones to a global array that we will later use to create a WAV file.
109
```python
import math

sample_rate = 44100  # samples per second, CD quality
audio = []  # accumulated samples for the whole track

def append_sinewave(freq=440.0, duration_milliseconds=500, volume=1.0):
    global audio

    num_samples = duration_milliseconds * (sample_rate / 1000.0)

    for x in range(int(num_samples)):
        audio.append(volume * math.sin(2 * math.pi * freq * (x / sample_rate)))
```
123
The sine wave generated here is the standard beep. If you want something more aggressive, you could try a square or sawtooth waveform.
126
## Generating a WAV file from accumulated sine waves

```python
import wave
import struct

def save_wav(file_name):
    wav_file = wave.open(file_name, 'w')
    nchannels = 1
    sampwidth = 2

    nframes = len(audio)
    comptype = 'NONE'
    compname = 'not compressed'
    wav_file.setparams((nchannels, sampwidth, sample_rate, nframes, comptype, compname))

    for sample in audio:
        wav_file.writeframes(struct.pack('h', int(sample * 32767.0)))

    wav_file.close()
```
149
44100 Hz is the industry-standard sample rate, i.e. CD quality. If you need to save on file size, you can adjust it downwards. The standard for low quality is 8000 Hz, or 8 kHz.

The WAV files here use short, 16-bit, signed integers for the samples. So we multiply the floating-point data we have by 32767, the maximum value of a short integer.
157
> It is theoretically possible to use the floating point -1.0 to 1.0 data
> directly in a WAV file, but it is not obvious how to do that using the wave
> module in Python.
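To make the scaling step concrete, here is a tiny sketch of my own (not part of the original script) that converts one floating-point sample in the range [-1.0, 1.0] into the packed signed 16-bit value that ends up in the WAV file:

```python
import struct

def float_to_pcm16(sample):
    """Clamp a float sample to [-1.0, 1.0] and pack it as a signed 16-bit integer."""
    clamped = max(-1.0, min(1.0, sample))
    return struct.pack('h', int(clamped * 32767.0))
```

Clamping first guards against samples that drift outside the valid range (for example when several waveforms get mixed together), which would otherwise overflow the short and make `struct.pack` raise an error.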
161
## Generating spectrograms

I have tried two methods of doing this, and both were just fine. However, I opted for [SoX - Sound eXchange, the Swiss Army knife of audio manipulation](https://linux.die.net/man/1/sox), because it didn't require anything else.
168
```shell
sox output.wav -n spectrogram -o spectrogram.png
```
172
An example spectrogram of Ludwig van Beethoven's Symphony No. 6, first movement.

<audio controls>
  <source src="/assets/dna-synthesized/symphony-no6-1st-movement.mp3" type="audio/mpeg">
</audio>

![Ludwig van Beethoven Symphony No. 6 First movement](/assets/dna-synthesized/symphony-no6-1st-movement.png)
180
The other option is to use [gnuplot](http://www.gnuplot.info/). This requires an intermediary step, however.

```shell
sox output.wav audio.dat
tail -n+3 audio.dat > audio_only.dat
gnuplot audio.gpi
```
190
And the input file `audio.gpi` that is passed to gnuplot looks something like this.

```
# set output format and size
set term png size 1000,280

# set output file
set output "audio.png"

# set y range
set yr [-1:1]

# we want just the data
unset key
unset tics
unset border
set lmargin 0
set rmargin 0
set tmargin 0
set bmargin 0

# draw rectangle to change background color
set obj 1 rectangle behind from screen 0,0 to screen 1,1
set obj 1 fillstyle solid 1.0 fillcolor rgbcolor "#ffffff"

# draw data with foreground color
plot "audio_only.dat" with lines lt rgb 'red'
```
220
## Pre-generated sequences

What I did was take interesting parts of several genomes and feed them to the tone generator script. This generated WAV files, which I converted to MP3 so they can be played in a browser. The last step was creating a spectrogram from each WAV file.
227
### Niels Bohr quote

<audio controls>
  <source src="/assets/dna-synthesized/quote/out.mp3" type="audio/mpeg">
</audio>

![Spectrogram](/assets/dna-synthesized/quote/spectogram.png)

### Mouse

This is part of a mouse genome `Mus_musculus.GRCm39.dna.nonchromosomal`. You
can get the [genome data
here](http://ftp.ensembl.org/pub/release-106/fasta/mus_musculus/dna/).

<audio controls>
  <source src="/assets/dna-synthesized/mouse/out.mp3" type="audio/mpeg">
</audio>

![Spectrogram](/assets/dna-synthesized/mouse/spectogram.png)

### Bison

This is part of a bison genome `Bison_bison_bison.Bison_UMD1.0.cdna`. You can
get the [genome data
here](http://ftp.ensembl.org/pub/release-106/fasta/bison_bison_bison/cdna/).

<audio controls>
  <source src="/assets/dna-synthesized/bison/out.mp3" type="audio/mpeg">
</audio>

![Spectrogram](/assets/dna-synthesized/bison/spectogram.png)

### Taurus

This is part of a taurus genome `Bos_taurus.ARS-UCD1.2.cdna`. You can get the
[genome data
here](http://ftp.ensembl.org/pub/release-106/fasta/bos_taurus/cdna/).

<audio controls>
  <source src="/assets/dna-synthesized/taurus/out.mp3" type="audio/mpeg">
</audio>

![Spectrogram](/assets/dna-synthesized/taurus/spectogram.png)

## Making a drummer out of a DNA sequence

To make things even more interesting, I decided to send this data via MIDI to
my [Elektron Model:Samples](https://www.elektron.se/en/model-samples). This is
a really cool piece of equipment that supports MIDI in via USB and a 3.5 mm
audio jack.

The Elektron is connected to my MacBook via a USB cable, and the audio out is
patched to a Sony Bluetooth speaker I have that supports 3.5 mm audio in. The
Elektron doesn't have internal speakers.

![](/assets/dna-synthesized/elektron/IMG_0619.jpg)

![](/assets/dna-synthesized/elektron/IMG_0620.jpg)

![](/assets/dna-synthesized/elektron/IMG_0622.jpg)

For communicating with the Elektron, I chose the `pygame` Python module, which
has MIDI support built in. With this, it was rather simple to send notes to
the device. All I did was map MIDI notes to the actual nucleotides.

Before all of this, I also opened the Audio MIDI Setup app under macOS and
checked MIDI Studio by pressing ⌘-2.

![](/assets/dna-synthesized/elektron/midi-studio.jpg)

The whole script that parses the sequence and sends notes to the Elektron
looks like this.

```python
import pygame.midi
import time

pygame.midi.init()

print(pygame.midi.get_default_output_id())
print(pygame.midi.get_device_info(0))

player = pygame.midi.Output(1)
player.set_instrument(2)


def send_note(note, velocity):
    player.note_on(note, velocity)
    time.sleep(0.3)
    player.note_off(note, velocity)


# MIDI note numbers must stay within the 0-127 range.
nucleotide_midi_map = {
    'A': 60,
    'C': 90,
    'G': 110,
    'T': 120,
}

with open("quote.fa") as f:
    sequence = f.read().replace('\n', '')

for nucleotide in sequence:
    print("Playing nucleotide {} with MIDI note {}".format(
        nucleotide, nucleotide_midi_map[nucleotide]))
    send_note(nucleotide_midi_map[nucleotide], 127)

del player
pygame.midi.quit()
```

<video src="/assets/dna-synthesized/elektron/elektron.mp4" controls></video>

All of this could be made much more interesting if I chose different
instruments for different nucleotides, or did more funky stuff with the
Elektron. But for now, this should be enough. It is just a proof of concept.
Something to play around with.

## Going even further

As you probably noticed, the end results are quite similar to each other. This
is to be expected, because we are essentially operating with only 4 notes.
What could make this more interesting is using something like
[SuperCollider](https://supercollider.github.io/) to create more interesting
sounds, for example by transposing notes or applying effects based on repeated
data in a sequence. The possibilities are endless.

It is really astonishing what can be achieved with a little bit of code and an
idea. I could see this becoming an interesting background soundscape
instrument if done properly. It could replace a random note generator with
something more intriguing, biological, natural.

I actually find the results fascinating. I took some time and listened to this
music of nature. Even though it's quite the same, it's also quite different.
The subtle differences on repeat kind of create music of their own. It makes
you wonder. It kind of puts Occam's Razor in its place. Nature for sure loves
to make things as energy efficient as possible.
diff --git a/content/posts/2022-08-13-algae-spotted-on-river-sava.md b/content/posts/2022-08-13-algae-spotted-on-river-sava.md
deleted file mode 100644
index e82e364..0000000
--- a/content/posts/2022-08-13-algae-spotted-on-river-sava.md
+++ /dev/null
@@ -1,30 +0,0 @@
---
title: Aerial photography of algae spotted on river Sava
url: aerial-photography-of-algae-spotted-on-river-sava.html
date: 2022-08-13T12:00:00+02:00
draft: false
---

This is a bit of a different post than I usually write, but quite an
interesting one to me. The river Sava has plenty of hydropower plants located
downstream. This makes regulating the strength of the current easier than
usual. Because of the lower stream strength and high temperatures, algae have
formed on the river. This is the first time I've seen something like this in
my whole life.

Below are some photographs taken from a DJI drone capturing the event.

![Algae on Sava](/assets/algae-sava/dji-algae-0.jpg)

![Algae on Sava](/assets/algae-sava/dji-algae-1.jpg)

![Algae on Sava](/assets/algae-sava/dji-algae-2.jpg)

![Algae on Sava](/assets/algae-sava/dji-algae-3.jpg)

![Algae on Sava](/assets/algae-sava/dji-algae-4.jpg)

![Algae on Sava](/assets/algae-sava/dji-algae-5.jpg)

I will try to get more photos of this in the coming days, and if something
intriguing shows up, I will post it again on the blog.

diff --git a/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md b/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md
deleted file mode 100644
index 78595fa..0000000
--- a/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md
+++ /dev/null
@@ -1,303 +0,0 @@
---
title: State of Web Technologies and Web development in year 2022
url: state-of-web-technologies-and-web-development-in-year-2022.html
date: 2022-10-06T12:00:00+02:00
draft: false
---

## Initial thoughts

*This post is a critique of the current state of web development. It is an
opinionated post! I will learn more about this in the future, and probably
slightly change my mind about some of the things I criticize.*

I started working on a hobby project about two weeks ago, and I wanted to use
that situation as a learning one. Trying new things, new technologies, new
tools. I have always considered myself to be an adventurous person when it
comes to technology. I never shy away from trying new languages, new operating
systems, etc. Likewise, I find the whole experience satisfying, and it tickles
that part of my brain that finds discovery the highest of the mountains to
climb.

What I always wanted to make was a coding game that you would play in a
browser (just to eliminate building binaries for each operating system), where
you would level up your character and go into these scriptable battles. You
know, RPG elements.

So, the natural way to go would be some sort of SPA (single page application)
with basic routing and some state management. Nothing crazy.

> **Before we move on**, I have to be transparent. Take my views on this with
> a grain of salt. I have only scratched the surface with these technologies,
> and my knowledge is full of gaps. This is my experience using some of these
> products for the first time or in a limited capacity.

Having gotten this out of the way, I got myself a fresh pot of coffee, and
down the rabbit hole I went.

## Giving React JS a spin

I first tried [React JS](https://reactjs.org/). I kind of like it.
Furthermore, I have worked with libraries like this in the past and have also
written a couple of them (nothing compared to that level), so I had a basic
understanding of what was going on. I rolled up a project quickly and had
basic things done in a matter of two hours, which was impressive.

I prefer using [Tailwind CSS](https://tailwindcss.com/) for my styling
pleasures, and integrating that was also a painless experience. It was
actually nice to see that some things got better with time. In about 2 minutes
I got Tailwind working, and I was able to use classes at my disposal. All that
`postcss` stuff was taken care of by adding a couple of things in config files
(all described really well in their documentation).

It is not that different from Vue, which I have had more encounters with in
the past. People will probably call me a lunatic for saying this. But you
know, it is the truth. Same same, but different. I still believe that using
libraries like this is beneficial. I am not a JavaScript purist. They all have
their quirks, but at the end of the day, I truly believe it's worth it.

## Bundlers and Transpilers

I still reject calling the [TypeScript](https://www.typescriptlang.org/) to
[JavaScript](https://www.javascript.com/) conversion a "compilation process".
I call them [transpilers](https://devopedia.org/transpiler), and I don't care!
😈

And if you want to fight this, take a look at this little chart and be mad at
it!

![Compiling vs Transpiling](/assets/state-of-web/compiling-vs-transpiling.png)

The first one that I ever used was [webpack](https://webpack.js.org/), and it
was an absolutely horrific experience. That said, it is an absolutely
fantastic tool. I just felt more like a config editor than a programmer. To be
fair, I am a huge fan of [make](https://www.gnu.org/software/make/), and you
can do as you wish with this information. I like my build systems simple.

Also, isn't it interesting that we need something like
[Babel](https://babeljs.io/) to make JavaScript code work in a browser that
has only one client-side scripting language available, which is by no accident
also JavaScript. Why? I know why it's needed, but seriously, why.

I haven't used Babel for years now. Or if I did, it was packaged together by
some other bundler thingy. Which does not make things better, but at least I
didn't need to worry about it.

I really don't like complicated build systems. I really don't like abstracting
code and making things appear magical. The older I get, the more I appreciate
clear and clean, expressive code. No one-liners, if possible.

But I have to give props to [Vite](https://vitejs.dev/)! This was one of the
best developer experiences I have ever had. Granted, it still has magical
properties. And yes, it still is a bundler and abstracts things to the nth
degree. But at least it didn't force me to configure 700 lines of JSON. And I
know that this makes me a hypocrite. You can't have it all. Nonetheless, my
reasoning here is, if using bundlers is inevitable, then at least they should
provide an excellent developer experience.

I also noticed that the catch-all phrases are now "blazingly fast" and
"lightning fast" and "next generation" and stuff like that. I mean, yeah,
tools should get faster with time. But claiming that a project now starting in
2 seconds instead of 20 seconds is a make-or-break kind of deal is ridiculous.
I don't mind waiting a couple of seconds every couple of days. I also don't
create 700 projects every day, and who does? This argument has no bite. All I
want is a decent reload time (~100ms is more than good enough for me) and that
is it.

You don't need to sell me benefits that I only get when I start a fresh
project, and then try to convince me that this is somehow changing the fate of
the universe. First of all, it is not. And second, if this is your only
argument for your tool, I would advise you to maybe re-focus your efforts on
something else. Vite says that startup times are really fast. And if that were
the only thing differentiating it from other tools, I would ignore it. But it
has some really compelling features like [Hot Module
Replacement](https://www.geeksforgeeks.org/reactjs-hot-module-replacement/)
that really work well. It was a joy to use.

So, I will definitely be using Vite in the future.

## Jam Stack, Mach Stack, no snack

Let's get a couple of the acronyms out of the way, so we all know what we are
talking about:

- Jam Stack - JavaScript, API and Markup
- Mach Stack - Microservices, API-first, Cloud-Native SaaS, Headless

It is so hard to follow all these new trendy things happening around you that
it gives you massive **FOMO** all the time. But on the other hand, you also
don't want to be that old fart that doesn't move with the times and still
writes his trusty jQuery code while listening to Blink-182's "All the Small
Things" on full blast. It's a good song, don't get me wrong, but there are
other songs out there.

I have to admit, [Vercel](https://vercel.com/) is really cool! I love the
simplicity of the service. You could compare it to
[Netlify](https://www.netlify.com/). I haven't tried Netlify extensively, but
from a couple of experimental deployments I still prefer Vercel. It is much
more streamlined, but maybe this is bias in me. I really like Vercel's
Analytics, which gives you a [Core Web Vitals report](https://web.dev/vitals/)
in their admin console. Kind of cool, I'm not going to lie.

This whole idea about frontend and backend merging into [SSR (server-side
rendering)](https://www.debugbear.com/blog/server-side-rendering) looks so
good on paper. It almost doesn't come with any major flaws.

But when it comes to the actual implementation, there is much to be desired.
I'm going to lump [Next.js](https://nextjs.org/) and
[Nuxt.js](https://nuxtjs.org/) together because they are essentially the same
thing, just for a different library.

Now comes the reality. Mixing backend and frontend in this manner creates this
weird mental model where you kind of rely on the magical properties of these
libraries. You relinquish control over to them for a better developer
experience. But is that really true? Initially, I was so stoked about it.
However, the more I used them, the more uncomfortable I felt. I felt dirty,
actually. Maybe this is because I come from the old ways of doing things,
where you control every step of a request, and allowing something to hijack it
feels like blasphemy.

More than that, some pretty significant technical issues arose from this. How
do you do JWT token authentication? You put it in the `api` folder and then do
some fetching and storing into local state management. But doing this also
requires some tinkering with await/async stuff on the React/Vue side of
things. And then you need to write middleware for it. And the more I look at
it, the more I see that this whole thing was not meant to be used like this,
and it all feels and looks like a huge hack.

The issue I have with this is that they over-promise and under-deliver. They
want to be an all-in-one replacement for everything, and they don't deliver on
this promise. And how could they?! We have to be fair. It is an impossible
task.

They sell you [NoOps](https://www.geeksforgeeks.org/overview-of-noops/), but
when you need to accomplish something a little bit more out of the scope of
Hello World, you have to make hacky decisions to make it work. And having a
deployment strategy that relies on many moving parts is never a good idea.
Abstracting too much is usually a sign of bad architecture.

Lately, this has become a huge trend that will for sure bite us in the future.
And let's not get it twisted. By doing this, cloud providers like
[AWS](https://aws.amazon.com/), [GCP](https://cloud.google.com/), etc. obscure
their billing, and you end up paying more than you really should. And even if
that is not an issue, it comes down to the principle of things. AWS is known
for having multiple "currencies" inside their projects, like write operations,
read operations, etc., which add up, and it creates this impossible-to-track
billing scheme. It all behaves suspiciously like a pay-to-win game you could
find on mobile phones that scams you out of your money.

And as far as I am concerned, the most important thing was that I was not
coding the functionalities for the game I want to make. I was battling
libraries and cloud providers. How to deploy, what settings are relevant. Bad
documentation, or multiple versions of achieving the same thing. You are
getting bombarded by all this information, and you don't really have any
control over it. Production-ready code becomes a joke, essentially. Especially
if you tend to work on that project for a prolonged period of time.

All of these options end up creating fatigue. What to choose, what not to
choose. Unnecessary worrying about whether the stack will still be deemed
worthy in six months. There is elegance in simplicity.

> JavaScript UI frameworks and libraries work in cycles. Every six months or
> so, a new one pops up, claiming that it has revolutionized UI development.
> Thousands of developers adopt it into their new projects, blog posts are
> written, Stack Overflow questions are asked and answered, and then a newer
> (and even more revolutionary) framework pops up to usurp the throne.
> — Ian Allen

![Too many options](/assets/state-of-web/2008-vs-2020.png)

And this jab at these libraries and cloud providers is not done out of malice.
It is a real concern that I have about them. In my life, I have seen
technologies come and go, but the basics always stick around. So surrendering
all the power you have to a library or a cloud provider is, in my opinion, a
stupid move.

## Tailwind CSS still rocks!

You know, many people say negative things about Tailwind. And after a lot of
deliberation, I came to the conclusion that Tailwind is good for two types of
developers: a complete noob or a senior developer. A complete noob doesn't
really care about the inner workings of CSS, and a senior developer also
doesn't care about CSS. Well, at least, not anymore. And developers in between
usually have the biggest issues with it. Not always, of course, but in a lot
of cases.

I like the creature comforts of Tailwind. Being utility-first, I would argue
that it is actually more similar to [Sass](https://sass-lang.com/) or
[Less](https://lesscss.org/) than to something like Bootstrap. Not
technically, but ideologically. After I started using it, I never looked back.
I use it every time I need to do something web related.

Writing CSS for general things feels like going several steps back. Instead of
focusing on what you are actually trying to achieve, you focus on notations
like [BEM](https://en.bem.info/methodology/css/), code structuring, optimizing
HTML size. Just doing things that make a 0.1% difference. You know that
saying: premature optimization is the root of all evil. Exactly that.

I am also not saying that Tailwind is the cure for everything. Sometimes
custom CSS is necessary. But from what I found out using it for almost two
years in a production environment (on a site getting quite a lot of traffic
and constantly being changed), I can say without any reservations that
Tailwind saved our asses countless times. We would be rewriting CSS all the
time without it. And I don't really think writing CSS is the best way to spend
my time.

I have also noticed that the people who criticize Tailwind the most never
actually used it in a real project with a long lifetime and plenty of changes
still to come.

But you know, whatever floats your boat!

## Code maintainability

Somehow, people also stopped talking about maintenance. If you constantly try
to catch the latest-and-greatest train, you are by that logic always trying
new things. Which is a good thing if you want to learn about technologies and
try them. But for a production environment, you have to have a stable stack
that doesn't change every 6 months.

You can lock dependencies, for sure. Nevertheless, the hype train moves along
anyway. And the mindset this breeds goes against locking the code. This
bleeding-edge rolling release cycle is not helping. That is why enterprise
solutions usually look down on these popular stacks and only do the bare
minimum to appear hip and cool.

With that said, I still think that progress is good, but it should be taken
with a grain of salt. If your project is something that should be built once
and then rarely updated, going with the latest stack is a possible way to go.
But if you are working on a project that lasts for years, you should probably
approach it with some level of caution. Web development is oftentimes too
volatile.

## Web development has a marketing issue

I noticed that almost every project now has this marketing spin put on it.
Everything is blazingly fast now. I get it, they are competing for your
attention, but what happened to just being truthful and not inflating reality?

And in order to appeal to the mass market, they leave things out of their
marketing materials. These open-source projects are now behaving more and more
like companies do. Which is a scary thought in itself.

We are also seeing a rise in the concept of building a company in the open,
which is a good thing, don't get me wrong. But when open-source is used to
lure people in and then lock them into an ecosystem, that is where I have
issues with it.

This might be because I have been using GNU/Linux for 20 years now and have
been so beholden for my success to open-source that I see issues when
open-source is being used to trick people into a false sense of security that
these projects are built in the spirit of open-source. Because there is a
difference. They are NOT! They have a really specific goal in mind, and
open-source is being used as a delivery system. Which is, in my opinion,
disgusting!

## Conclusion

I will end my post with this. Web development is now running in circles.
People are discovering
[RPC](https://www.tutorialspoint.com/remote-procedure-call-rpc) now, and this
is the next big thing. [GraphQL](https://graphql.org/) is so passé. And I am
so tired of it all. Of blazingly fast libraries, of all these new technologies
that are actually just remakes of old ones. Of just the general spirit of the
web. I will just use what I already know. Which worked 10 years ago and will
work 10 years after this. I will adopt a couple of little tools like Vite. But
I will not waste my time on this anymore.

It was a good exercise to get in touch with what's new now. Nothing really
changed that much. FOMO is now cured! Now I have to get my ass back to
actually coding and making the project that I wanted to make in the first
place.

diff --git a/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md b/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md
deleted file mode 100644
index 05a8167..0000000
--- a/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md
+++ /dev/null
@@ -1,65 +0,0 @@
---
title: Microsoundtrack — That sound that machine makes when struggling
url: that-sound-that-machine-makes-when-struggling.html
date: 2022-10-16T12:00:00+02:00
draft: false
---

A couple of months ago, I got an idea about micro soundtracks. In this
concept, you are the observer, director, and audience in these tiny movies.

What you do is attempt to imagine what would be happening around you based on
the title of the song and let the song help you fill the void in your story.

I made these songs in Logic Pro X. Every year or so I do this kind of thing
and make a couple of songs similar to this. But this is the first time I am
posting about it.

You can listen to the whole set on
[YouTube](https://www.youtube.com/watch?v=_5oXBhSmF3c) or scroll down the page
to the embedded players for each song.

## A bunch of inter-dimensional people with loud clocks

A group of inter-dimensional people are going up and down the elevator with
you while having loud clocks around their necks. Each clock ticks at a
different frequency. A lot of other sounds are getting drawn into your
dimension, resulting in a strange merging of dimensions.

<iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1349272965/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe>

## Two black holes conversing about the weather

You are a traveler in a spaceship flying very close to two colliding black
holes having a discussion about the weather while tearing each other apart.
During all this, your ship is getting pulled into the event horizon of both
black holes, putting a lot of strain on your spaceship.

<iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1756714200/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe>

## A planet where every organism is a plant

You land on a planet where every living organism is a plant, and among those
plants some are highly intelligent. You were asked to make first contact with
the native species. Your visit takes place in a giant cave where you are
meeting these plants, and they are talking to you.

<iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=3710973979/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe>

## Bio implants having a fit and reprogramming your brain

In a distant future where everybody has bio implants, you have just received
your first one, which happens to be a brain implant. Something goes wrong,
your implant starts to misbehave, and you are experiencing brain malfunctions.
You are on the streets at night a couple of hours after your procedure. You
can feel your sanity breaking down.

<iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1157430581/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe>

## Cow animation

I also made this little cow animation. Go into full screen to see the effects
in more detail.

<video src="/assets/microsoundtrack/cow.m4v" controls loop></video>

diff --git a/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md b/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md
deleted file mode 100644
index a03a2a4..0000000
--- a/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md
+++ /dev/null
@@ -1,252 +0,0 @@
1---
2title: Trying to build a New kind of terminal emulator for the modern age
3url: trying-to-build-a-new-kind-of-terminal-emulator.html
4date: 2023-01-26T12:00:00+02:00
5draft: false
6---
7
8Over the past few weeks, I have been really thinking about terminal emulators,
9how we interact with computers, the separation of text-based programs and GUI
10ones. To be perfectly honest, I got pissed off one evening when I was cleaning
11up files on my computer. Normally, I go into console and do `ncdu` and check
12where the junk is. Then I start deleting stuff. Without any discrimination,
13usually. But when it comes to screenshots, I have learned that it's good to keep
14them somewhere near if I need to refer to something that I was doing. I am an
15avid screenshot taker. So at that point I checked Pictures folder and also did a
16basic search `find . -type f -name "*.jpg"` for all the JPEG files in my home
17directory and immediately got pissed off. Why can’t I see thumbnails in my
18terminal? I know why, but why in the year of 2022 this is still a problem. I am
19used to traversing my disk via terminal. I am faster, and I am more comfortable
20this way. But when it comes to visualization, I then need to revert to GUI
21applications and again find the same file to see it. I know that programs like
22`feh` and `sxiv` are available, but I would just like to see the preview. Like
23[Jupyter notebook](https://jupyter.org/) or something similar. Just having it
24inline. Part of a result.
25
26It also didn’t help that I was spending some time with the [Plan
279](https://plan9.io/plan9/) Operating system. More specifically
28[9FRONT](http://9front.org/). The way that [ACME editor](http://acme.cat-v.org/)
29handles text editing is just wonderful. Different and fresh somehow, even though
30it’s super old.
31
32So, I went on a lookout for an interesting way of visualizing results of some
33query. I found these applications to be outstanding examples of how not to be a
34captive of a predetermined way of doing things.
35
36- [Wolfram Mathematica](https://www.wolfram.com/mathematica/)
37- [Jupyter notebooks](https://jupyter.org/)
38- [Plan 9 / 9FRONT](http://www.9front.org)
39- [Temple OS](https://templeos.org/)
40- [Emacs](https://www.gnu.org/software/emacs/)
41
42My idea is not as out there as ACME is, but it is a spin on the terminal
43emulators. I like the modes that Vi/Vim provides you with. I like the way the
44Emacs does its own `M-x` `M-c`. Furthermore, I really like how Mathematica and
45Jupyter present the data in a free flowing form. And I love how Temple OS is
46basically a C interpreter on some level.
47
48> **Note:** This is part 1 of the journey. Nowhere finished yet. I am just
49> tinkering with this at the moment. This whole thing can easily spectacularly
50> fail.
51
52So I started. I knew that I wanted to have the couple of modes, but I didn’t
53like the repetition of keystrokes, so the only option was to have some sort of
54toggle and indicate to the user that they are in a special mode. Like Vi does
55for Normal and Visual mode.
56
For the first version, these modes would be:

- *Preview mode* (toggle with Ctrl + P)
  - When this mode is enabled, the `ls` command tries to find images among the
    results and displays their thumbnails in the terminal itself. No ASCII art.
    Proper images. In a grid!
- *Detach mode* (toggle with Ctrl + D)
  - When this mode is enabled, every command opens a new window and executes
    in it. This is useful for starting `htop` in a separate window.
67
The reason for making these modes togglable is so you don’t have to ask for
previews every single time. You enable a mode and, until you disable it, the
terminal behaves that way. Purely for ergonomic reasons.
71
I mentally treat every terminal I open as a session. When I start using the
terminal, I start digging deeper into the issue I am trying to resolve, and
while I am doing this, I would like to open detached windows and so on. A lot
of this can be done easily with something like [i3](https://i3wm.org/), but
that also pulls you out of the context of what you were doing. I would like to
orchestrate everything from one single point.

In planning for this project, I knew that I would need a language like C and a
library such as [SDL2](https://www.libsdl.org/) to achieve the desired
results. I considered other options, but ultimately SDL2 was the best fit
given its capabilities and its reputation in the programming community.

At first, I thought the idea of a hardware-accelerated terminal was a bit of a
joke. It seemed like such a niche and unnecessary feature, especially since
terminal emulators have been around for decades and have always relied on
software rendering. But to be fair, [Alacritty](https://alacritty.org/) is
doing the same thing, and they are doing a remarkable job at it.

So, I embarked on a journey. Everything has to start somewhere, and for me it
started with creating a window! 🙂
93
```c
// Oh, Hi Mark!
// Initialize the video subsystem before anything else.
if (SDL_Init(SDL_INIT_VIDEO) != 0) {
    fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
    return 1;
}

// Create the window, obviously.
SDL_Window *window = SDL_CreateWindow(
    WINDOW_TITLE, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
    WINDOW_WIDTH, WINDOW_HEIGHT,
    SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
```
102
I continued like this to get some text displayed on the screen.

I noticed that
[`TTF_RenderText_Solid`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_Solid)
rendered text really poorly. There was no antialiasing at all. In my wisdom, I
never checked the documentation. Well, that was a fail. For the uneducated like
me: `TTF_RenderText_Solid` renders Latin1 text at fast quality to a new 8-bit
surface. So, that's why the text looked like shit. No wonder.
111
The documentation's remarks on `TTF_RenderText_Solid`: this function will
allocate a new 8-bit, palettized surface. The surface's 0 pixel will be the
colorkey, giving a transparent background. The 1 pixel will be set to the text
color.

After I replaced it with
[`TTF_RenderText_LCD`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_LCD),
which renders Latin1 text at LCD subpixel quality to a new ARGB surface, the
text started looking good. Really make sure you read the documentation; it's
actually good. As a side note, you can find all the SDL2 documentation [on
their Wiki](https://wiki.libsdl.org/).
122
After that was done, I started working on displaying other things, like the
`Preview` and `Detach` mode indicators. This wasn't really that hard. In SDL2
you can drain all the pending events with `while (SDL_PollEvent(&event) > 0)`
and have a switch statement to determine which key is currently being
pressed. More about keys at [SDLKey](https://documentation.help/SDL/sdlkey.html)
and more about polling the events at
[SDL_PollEvent](https://documentation.help/SDL/sdlpollevent.html).
130
```c
while (SDL_PollEvent(&event) > 0)
{
    switch (event.type)
    {
    case SDL_QUIT:
        running = false;
        break;

    case SDL_TEXTINPUT:
        if (!meta_key_pressed)
        {
            // Append the whole UTF-8 sequence from the event, leaving
            // room for the terminating NUL. Appending just 1 byte would
            // mangle multi-byte input. Assumes input_prompt_text is a
            // fixed-size array in scope.
            strncat(input_prompt_text, event.text.text,
                    sizeof(input_prompt_text) - strlen(input_prompt_text) - 1);
            update_input_prompt = true;
        }
        break;
    }
}
```
150
After that was somewhat working correctly, I started creating a struct that
holds the commands and their results. I call them Cells. Yes, I stole that
naming idea from Jupyter.
154
```c
typedef struct
{
    char *command;
    char *result;
    SDL_Surface *surface;
    SDL_Texture *texture;
    SDL_Rect rect;
} Cell;
```
165
I am now at the point where I am starting to implement scrolling. This will
for sure be fun to code. Memory management in C is super easy. 😂

I have also added simple [INI-like
configuration](https://en.wikipedia.org/wiki/INI_file) support. It is done as
an [STB-style
header](https://github.com/nothings/stb/blob/master/docs/stb_howto.txt) and
maps to specific options supported by the terminal. It is not universal, and
the code below demonstrates how I will use it in the future.
175
```c
#ifndef CONFIG_H
#define CONFIG_H

/*
# This is a comment

# This is the first configuration option
dettach=value11111

# This is the second configuration option
preview=value22222

# This is the third configuration option
debug=value33333
*/

// Define a struct to hold the configuration options
typedef struct
{
    char dettach[256];
    char preview[256];
    char debug[256];
} Config;

// Read the configuration file and return the options as a struct.
// static so the header can be included from more than one file.
static Config read_config_file(const char *filename)
{
    // Create a zeroed struct to hold the configuration options
    Config config = {0};

    // Open the configuration file; bail out with defaults if it is missing
    FILE *file = fopen(filename, "r");
    if (file == NULL)
        return config;

    // Read each line from the file
    char line[256];
    while (fgets(line, sizeof(line), file))
    {
        // Skip comments and empty lines
        if (line[0] == '#' || line[0] == '\n')
            continue;

        // Parse the line to get the option and value
        char option[128], value[128];
        if (sscanf(line, "%[^=]=%s", option, value) != 2)
            continue;

        // Set the value of the appropriate option in the config struct;
        // copying at most size - 1 bytes keeps the strings NUL-terminated
        if (strcmp(option, "dettach") == 0)
        {
            strncpy(config.dettach, value, sizeof(config.dettach) - 1);
        }
        else if (strcmp(option, "preview") == 0)
        {
            strncpy(config.preview, value, sizeof(config.preview) - 1);
        }
        else if (strcmp(option, "debug") == 0)
        {
            strncpy(config.debug, value, sizeof(config.debug) - 1);
        }
    }

    // Close the configuration file
    fclose(file);

    // Return the configuration options
    return config;
}

#endif
```
247
This is as far as I have managed to get for now. I have a day job, and that
keeps me from working on these things full time. But I should probably get
back to it and finish it, or at least get a simple version working so I can
start testing it on my machines. Fingers crossed. 🕵️‍♂️
252
diff --git a/content/posts/2023-05-16-rekindling-my-love-for-programming.md b/content/posts/2023-05-16-rekindling-my-love-for-programming.md
deleted file mode 100644
index fb8add2..0000000
--- a/content/posts/2023-05-16-rekindling-my-love-for-programming.md
+++ /dev/null
@@ -1,73 +0,0 @@
---
title: Rekindling my love for programming and enjoying the act of creating
url: rekindling-my-love-for-programming.html
date: 2023-05-16T12:00:00+02:00
draft: false
---
7
Programming can be a challenging and rewarding experience, but sometimes it's
easy to feel burnt out or disinterested. I lost my passion for coding over the
past couple of months, and it looked like I would never enjoy it as much as I
used to.

I was feeling burnt out with programming. I thought taking a break from it and
focusing on other activities that I enjoy might be helpful. This way, I could
come back to programming with a fresh perspective and renewed energy. I also
thought about learning a new programming language or technology to keep things
interesting and challenging.

However, what I didn't realize was that learning a new language or technology
wasn't going to solve the underlying issue. I needed to take a step back and
re-evaluate why I had lost my passion for programming in the first place. This
involved taking a deep look at what I was doing that resulted in this rut.
23
Sometimes, it's easy to get caught up in the hype of new technologies or
languages, and we can feel like we're missing out if we're not constantly
learning and experimenting. However, it's important to remember that the latest
and greatest isn't always the best fit for our projects or our
interests. Instead of constantly chasing the next big thing, it can be helpful
to focus on what truly interests us and what we're passionate about. This can
help us stay motivated and engaged with our work, rather than feeling like we're
just going through the motions.
32
I had lost my passion for coding over the past couple of months, and I
realized that the reason behind it was my tendency to spread myself too thin
and not focus on completing interesting projects. In order to regain that
passion, I need to focus on projects that truly interest me and give me a
sense of purpose and motivation.

Recently, I have been playing World of Warcraft more frequently and have become
interested in developing addons for the game.

This quickly resulted in me creating three quality-of-life addons, and I
subsequently developed a more useful addon that encapsulates all the others I
made.

I found it interesting that this sparked a new interest in me. Additionally, I
discovered the Lua language, which reminded me that coding should be fun
rather than just a struggle with a language. It should be pure, unadulterated
fun.
50
I wasn't fighting the syntax, nor was I focused on finding the most optimal
solution. I simply created things without the pressure of making them the best
they could possibly be.

This made me realize that I actually adore simple languages that get out of the
way and let you express what you want to do. It forced me to rethink a lot about
what I use and what I actually enjoy.
58
I have decided to stick to the basics. For a scripting language, I will use
Lua. For networking, I will use Go. And for any special needs, I will rely on
C. I do not require Rust, Nim, or Zig. This selection is more than sufficient
for my needs. I have to stay true to this simplicity. There is something to
Occam's razor.

I've been struggling with a lack of creativity lately, but now I'm experiencing
a real change. I realized I needed to take a step back and stop actively trying
to address the issue. I needed to stop worrying and overthinking it. I simply
needed some time. Looking back, I don't think I've taken any significant time
off in the last 10 years.

Suddenly, I find myself with the energy and passion to complete multiple small
projects. It doesn't feel like a chore at all. Who knew I needed WoW to
kickstart everything? Inspiration really does come from the strangest places.
diff --git a/content/posts/2023-05-22-crafting-stories-in-zed-editor.md b/content/posts/2023-05-22-crafting-stories-in-zed-editor.md
deleted file mode 100644
index ead4276..0000000
--- a/content/posts/2023-05-22-crafting-stories-in-zed-editor.md
+++ /dev/null
@@ -1,87 +0,0 @@
---
title: From General Zod to Superman - Crafting Stories in Zed Editor
url: crafting-stories-in-zed-editor.html
date: 2023-05-22T12:00:00+02:00
draft: false
---
7
Pretentious title! Good start! I have nothing to add to this discussion. I just
like this editor and wanted to write something here that will remind me to use
it again in a while, when/if it becomes available for Linux.

**TLDR:** I think this code editor is very cool and has massive potential. I
hope they don't mess it up by adding a plugin ecosystem!

Out of morbid curiosity, I started using the [Zed editor](https://zed.dev/) on
my Mac. Zed is a high-performance, multiplayer code editor developed by the
creators of Atom and Tree-sitter. Written in Rust, so it has to be blazingly
fast! 😊 It's a joke, calm down.

Over the past year, I have switched between the [Helix
editor](https://helix-editor.com/) and [VS
Code](https://code.visualstudio.com/), but for the last couple of months, I
have been using Helix exclusively.
24
I've been genuinely impressed by Zed. When you open a file, it automatically
detects its type and downloads the corresponding [LSP (language
server)](https://en.wikipedia.org/wiki/Language_Server_Protocol). The list of
supported languages is not extensive, but it's still impressive. It's a great
example of how to create a product that stays out of your way.

![Zed editor](/assets/zed/zed-1.png?style=bigimg)
32
For C development it downloaded [clangd](https://clangd.llvm.org/), and
setting up missing dependencies in code was rather easy. For this project I
use [SDL2](https://www.libsdl.org/) to render a terminal emulator. It's a
hobby project, don't worry about it.

If you are going to give this a try and you are using C, I suggest checking
for two files in the root of your project folder. If you don't have them,
create them.

**compile_flags.txt**

```
-I/opt/homebrew/include
-I/opt/homebrew/include/SDL2
```
47
An easy way of checking what the appropriate includes for a specific library
are is to use `pkg-config`, in my case `pkg-config SDL2 --cflags-only-I`. But
this is nothing new to C/C++ devs. Just a note for people who are coming from
Visual Studio.

**.clang-format**

```
ColumnLimit: 220
BasedOnStyle: Mozilla
```

I prefer the Mozilla coding style for C, so that is what I set up.
60
They really have something special here. Since there is no version available
for Linux yet, I will stick to Helix. This impressive piece of engineering is,
above all, an amazing example of craftsmanship.

They have a bunch of amazing integrated functionalities, like live desktop
sharing and code sharing in a live coding session. There is a lot of
pretentious marketing speak there, but the product is still amazing!

For me, the speed and the simplicity of the product were the most impressive
things. You get that "it just works" feeling. A rare thing in 2023.
71
![Zed editor](/assets/zed/zed-2.png?style=bigimg)

They also managed to add [GitHub Copilot](https://github.com/features/copilot)
in a non-obtrusive way. To me, everything feels very intentional and
specifically selected. It's minimal yet maximally effective.

<video src="https://zed.dev/img/post/copilot/copilot-demo.webm" autoplay loop></video>

It is a perfect balance between VS Code, JetBrains IDEs, and something like
Vim or Helix.

I just hope they **DON'T** add plugin support and keep it like it is. They as
a vendor should add things to it with great deliberation and thought, and that
way the product will stay fast and focused. That's my two cents.

Amazing job!
diff --git a/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md b/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md
deleted file mode 100644
index 16739de..0000000
--- a/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md
+++ /dev/null
@@ -1,71 +0,0 @@
---
title: I think I was completely wrong about Git workflows
url: i-was-wrong-about-git-workflows.html
date: 2023-05-23T12:00:00+02:00
draft: false
type: posts
tags: []
---
9
I have been using some approximation of [Git
Flow](https://jeffkreeftmeijer.com/git-flow/) for years now and never really
questioned it, to be honest. When I create a repo, I create a develop branch,
set it as the default one, and then merge to master from there. Seems
reasonable enough.

One thing that I have learned is that long-living branches are the devil. They
always end up making a huge mess when they eventually need to be merged into
master. By that reasoning, what is the develop branch if not the longest-living
feature branch? And in my personal experience, there was never a situation
where I wasn't sweating bullets when I had to merge develop back into master.

This realisation started to give me pause. So why the hell am I doing this,
and is there a better way? Well, the solution was always there, and it comes
in the form of [git tags](https://git-scm.com/book/en/v2/Git-Basics-Tagging).

So what are git tags? Git tags are references to specific points in a Git
repository's history. They are used to mark important milestones, such as
releases or significant commits, making it easier to identify and access
specific versions of a project.
29
Somehow we have all hijacked the meaning of the master branch to be the most
releasable version of the code. This is also where the confusion about
versioning the software kicks in, because a master branch implicitly says that
we are dealing with rolling-release software. By having a develop branch we
are hacking around this confusion: separating develop and master locks
functionality into place and forces a stable versus development version of the
software.

But if that is true and long-living branches are the devil, then why have
develop at all? I think most of this comes down to how continuous integration
is being done. There usually is no granular access to tags, and CD software
deploys whatever is present on a specific branch, be that master for
production or develop for staging. This is a gross simplification, but by
having this in place we have completely removed tagging as a viable way to
create a fixed point in the software cycle that says: this is the
production-ready code.
45
One cool thing about tags is that you can check out a specific tag, so they
behave very similarly to branches in that regard. And you don't have the
overhead of maintaining two mainstream branches.

So what is the solution? One approach is a trunk-based workflow, where all
changes are made on small branches and continuously merged into master. When
the software is ready to be pushed to production, you tag the master
branch. This approach eliminates the need for long-lived branches and
simplifies the development process. It also encourages developers to make
small, incremental changes that can be tested and deployed quickly. However,
this approach may not be suitable for all projects, or for teams that rely
heavily on automated deployment based on branch names alone.

This also requires that developers always keep production in mind. No more
living on an island on the develop branch. All your actions and code need to
be ready to meet production standards on a much shorter timescale.

I think that we have complicated the workflow in an honest attempt to make
things more streamlined, but in the process we have inadvertently made our
lives much more complicated.

In conclusion, it's important to re-evaluate our workflows from time to time
to see if they still make sense and if there are better alternatives
available. Long-living branches can be problematic, and using tags to mark
important milestones can simplify the development process.
71
diff --git a/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md b/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md
deleted file mode 100644
index 1abfd1e..0000000
--- a/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md
+++ /dev/null
@@ -1,158 +0,0 @@
---
title: "Re-Inventing Task Runner That I Actually Used Daily"
url: re-inventing-task-runner-that-i-actually-used-daily.html
date: 2023-05-31T12:21:10+02:00
draft: false
---
7
A couple of months ago I had this brilliant idea of re-inventing the wheel by
making an alternative to make. And so I went, boldly into battle. And to my
big surprise, my attempt resulted in a not completely useless piece of
software.

My initial requirements were quite simple but soon grew into something more
ambitious. Looking back, I should have stuck to the simple version. My
laziness was on my side this time, though. Because I haven't implemented some
of the features, I now realise I really didn't need them; they would have
bogged down the whole program and made it something it was never meant to be.
17
My basic requirements were the following:

- The syntax should be a tiny bit inspired by Rake and Rakefiles.
- It should borrow the overall feel of a unit-test experience.
- Using something like Python would be a bit of an overkill.
- The program must be statically compiled, so it can run on the same
  architecture without depending on libc, musl, or things like that.
- Installing Ruby for Rake is a bit of an overkill and cannot be done on
  certain really lightweight distributions like Alpine Linux. This tool should
  be usable on such lightweight systems for remote debugging.
- I want to use it for more than just compiling things. I want to use it as an
  entry point into a project, and I want it to help me indirectly document the
  project as well.
- It should be an abstraction over Bash or the default system shell.
  - Each task essentially becomes its own shell instance.
- It must work on Linux and macOS systems.
- By default, running `erd` lists all the available tasks (when I use make, I
  usually put in a disclaimer that you should check the Makefile to see all
  available targets).
- It should support passing arguments when you run it from a shell.
- Normal variables are the same as environment variables. There is no
  distinction. Every variable is essentially an environment variable and can
  be used by other programs.
- State between tasks is not shared, which makes these "pure" shell instances.
- It should be single-threaded for a start and later expanded with a `@spawn`
  command.
- Variables behave like macros and are preprocessed before evaluation.
- It should support something like `assure` that checks whether programs like
  a C compiler or Python (whatever the project requires) are installed on a
  machine.
47
Quite a reasonable list of requirements. I do these things already in my
Makefiles and/or Bash scripts, but I would like to avoid repeating myself
every time I start working on something new.

So I started with the following syntax.
53
```ruby
@env on

# Override the default shell.
@shell /bin/bash

# Assure that programs are installed.
@assure docker-compose pip python3

# Load local dotenv files (these are then globally available).
@dotenv .env
@dotenv .env.sample
@dotenv some_other_file

# These are local variables but still accessible in tasks.
@var HI = "hey"
@var TOKEN = "sometoken"
@var EMAIL = "m@m.com"
@var PASSWORD = "pass"
@var EDITOR = "vim"

@task dev "Test chars .:'}{]!//" does
    echo "..." $HI
end

@task clean "Cleans the obj files" does
    rm .obj
end

@task greet "Greets the user" does
    echo "Hi user $TOKEN or $WINDOWID $EMAIL"
end

@task stack "Starts Docker stack" does
    docker-compose -f stack.yml up
end

@task todo "Shows all todos in source files and counts them" does
    grep -Eir "TODO|FIXME" . | wc -l
end

@task test1 "For testing 1" does
    unknown-command
    echo "test1"
    ls -lha
end

@task test2 "For testing 2" does
    echo "test1"
    ls -lha
    docker-compose -f samples/stack.yml up
end
```
107
One thing that I really like about Errand (yes, this is what it is called, and
it is available at https://git.mitjafelicijan.com/errand.git/about/) is that a
task is a persistent shell. By that I mean that the whole task runs in one
shell, even if it contains multiple commands. In make, each line in a target
is its own shell, and you need to combine lines or add `\` at the end of a
line.

```bash
# How you do these things in make: each recipe line runs in its own
# shell, so commands have to be chained explicitly.
target:
	source .venv/bin/activate && \
	python script.py
```
121
Errand solves this problem. Consider each task, and what is executed in that
task, a shell that only closes when the whole task is completed.

By self-documenting I mean that if you are in a directory with an `Errandfile`
and you only type `erd` and press enter, it will by default display all the
available tasks. In make I was doing this by having the first target be
something like `default` that echoes the message “Check Makefile for all
available targets.” Because every task in Errand requires a description, I use
that to display a table of contents of sorts.

Because I don't use any external dependencies, this whole thing can be
statically compiled. So that also checked one of the boxes.

It works on Linux and on a Mac, so that's also a bonus. I don't believe this
would work on Windows machines because of the way I use shell instances. But
you could use something like Windows Subsystem for Linux and run it there.
That is a valid option.
139
To finish this essay off, how was it to use it in "real life"? I have to be
honest: some of the missing features still bother me. The `@dotenv` directive
is still missing, and I need to implement it ASAP.

Another thing that needs to happen is support for streaming output. Currently,
commands like `docker-compose` that run in foreground mode are not compatible
with Errand. So commands that stream output are an issue. I need to revisit
how I initiate the shell and how I read stdout and stderr. But that shouldn't
be a problem.

I have been very satisfied with this thing. I am pleasantly surprised by how
useful it is. I really wanted to test it in the wild before I commit to it. I
have more abandoned projects than Google, and it's bringing massive shame to
my family at this point. So I wanted to be sure that this is even useful. And
it actually is. Quite surprised at myself.

I really need to package this now and write proper docs. And maybe rewrite
the tokeniser. It's atrocious right now. A sight to behold! But that is an
issue for another time.
diff --git a/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md b/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md
deleted file mode 100644
index 4031df0..0000000
--- a/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md
+++ /dev/null
@@ -1,280 +0,0 @@
---
title: "Bringing all of my projects together under one umbrella"
url: bringing-all-of-my-projects-together-under-one-umbrella.html
date: 2023-07-01T18:49:07+02:00
draft: false
---
7
## What is the issue anyway?

Over the years, I have accumulated a bunch of virtual servers on my
[DigitalOcean](https://www.digitalocean.com/) account for small experimental
projects I dabble in. And this has resulted in quite a bill. I mean, I
wouldn't care if these projects were actually being used, but they were just
sitting there unused and wasting resources, which makes this an unnecessary
burden for me.

Most of them are just small HTML pages that have an endpoint or two to read
data from or write data to, and for that reason I wrote servers left and
right. To be honest, all of those things could have been done with [CGI
scripts](https://en.wikipedia.org/wiki/Common_Gateway_Interface), and that
would have been more than enough.

Recently, I decided to stop language hopping and focus on a simpler stack of
C, Go and Lua. With it I can accomplish all the things I am interested in.
24
## Finding a web server replacement

Usually I had [Nginx](https://nginx.org/en/) in front of these small web
servers and had to manage SSL certificates and all that jazz. I am bored with
these things. I don't want to manage any of this bullshit anymore.

So the logical move forward was to find a solid alternative. I ended up on the
[Caddy server](https://caddyserver.com/). I've used it in the past but kind of
forgot about it. What I really like about it is the ease of use and the bunch
of out-of-the-box functionalities that come with it.
35
These are the _pitch_ points from their website:

- **Secure by Default**: Caddy is the only web server that uses HTTPS by
  default. A hardened TLS stack with modern protocols preserves privacy and
  exposes MITM attacks.
- **Config API**: As its primary mode of configuration, Caddy's REST API makes
  it easy to automate and integrate with your apps.
- **No Dependencies**: Because Caddy is written in Go, its binaries are entirely
  self-contained and run on every platform, including containers without libc.
- **Modular Stack**: Take back control over your compute edge. Caddy can be
  extended with everything you need using plugins.
47
I had just a few requirements:

- Automatic SSL
- Static file server
- Basic authentication
- CGI script support

And the vanilla version does all of it except CGI scripts. But that can easily
be fixed with their modular approach: you can build a custom version of the
server on their website, or do it with Docker.

This is the `Dockerfile` I used to build a custom server.
60
```Dockerfile
FROM caddy:builder AS builder

RUN xcaddy build \
    --with github.com/aksdb/caddy-cgi

FROM caddy:latest
RUN apk add --no-cache nano

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```
72
## Getting rid of all the unnecessary virtual machines

The next step was to get a handle on the number of virtual servers I have all
over the place.

I decided to move all the projects and services onto two main VMs:

- personal server (still Nginx)
  - git server
  - static file server
  - personal blog
- projects server (Caddy server)
  - personal experiments
  - other projects

I will focus on the projects server in this post since it's more interesting.
89
## Testing CGI scripts

The first thing I tested was how CGI scripts work under Caddy. This is
particularly important to me because almost all of my experiments and mini
projects need it to work.

To configure the Caddy server, you must provide it with a configuration
file. By default, it's called the `Caddyfile`.

```caddyfile
{
	order cgi before respond
}

examples.mitjafelicijan.com {
	cgi /bash-test /opt/projects/examples/bash-test.sh
	cgi /tcl-test /opt/projects/examples/tcl-test.tcl
	cgi /lua-test /opt/projects/examples/lua-test.lua
	cgi /python-test /opt/projects/examples/python-test.py

	root * /opt/projects/examples
	file_server
}
```

- The order is very important. Make sure that `order cgi before respond` is at
  the top of the configuration file.
- Also, when you run Caddy v2, make sure you provide the `adapter` argument,
  like this: `/usr/bin/caddy run --watch --environ --config
  /etc/caddy/Caddyfile --adapter caddyfile`. Otherwise, Caddy will try to use
  a different format for the config file.
121
I did a small batch of tests with [Bash](https://www.gnu.org/software/bash/),
[Tcl](https://www.tcl-lang.org/), [Lua](https://www.lua.org/) and
[Python](https://www.python.org/). Here is a cheat sheet if you need it.

Let's get Bash out of the way first.
127
```bash
#!/usr/bin/bash

printf "Content-type: text/plain\n\n"

printf "Hello from Bash\n\n"
printf "PATH_INFO [%s]\n" "$PATH_INFO"
printf "QUERY_STRING [%s]\n" "$QUERY_STRING"
printf "\n"

for i in {0..9..1}; do
    printf "> %s\n" "$i"
done

exit 0
```
144
This one is for a Tcl script.

```tcl
#!/usr/bin/tclsh

puts "Content-type: text/plain\n"

puts "Hello from Tcl\n"
puts "PATH_INFO \[$env(PATH_INFO)\]"
puts "QUERY_STRING \[$env(QUERY_STRING)\]"
puts ""

for {set i 0} {$i < 10} {incr i} {
    puts "> $i"
}
```

And for all you Python enjoyers.

```python
#!/usr/bin/python3

import os

print("Content-type: text/plain\n")

print("Hello from Python\n")
print("PATH_INFO [{}]".format(os.environ['PATH_INFO']))
print("QUERY_STRING [{}]".format(os.environ['QUERY_STRING']))
print("")

for i in range(10):
    print("> {}".format(i))
```
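
The scripts above only echo `QUERY_STRING` verbatim. If a script needs to act
on individual parameters, Python's standard library can parse the raw string
for you. A minimal sketch, using a hypothetical query string:

```python
from urllib.parse import parse_qs

# Hypothetical raw query string, as the server would pass it in
# QUERY_STRING for a request like /python-test?name=world&lang=en
query = parse_qs("name=world&lang=en")

# parse_qs maps each key to a list of values, since a key may repeat
print(query["name"][0])  # world
print(query["lang"][0])  # en
```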

And for the final example, Lua.

```lua
#!/usr/bin/lua

print("Content-type: text/plain\n")

print("Hello from Lua\n")
print(string.format("PATH_INFO [%s]", os.getenv("PATH_INFO")))
print(string.format("QUERY_STRING [%s]", os.getenv("QUERY_STRING")))
print()

for i = 0, 9 do
	print(string.format("> %d", i))
end
```

## Basic authentication

I also wanted some sort of authentication as an option, and something like
[Basic access
authentication](https://en.wikipedia.org/wiki/Basic_access_authentication)
would be more than enough.

Thankfully, Caddy already supports this out of the box. Below is an updated
example.

```caddyfile
{
	order cgi before respond
}

examples.mitjafelicijan.com {
	cgi /bash-test /opt/projects/examples/bash-test.sh
	cgi /tcl-test /opt/projects/examples/tcl-test.tcl
	cgi /lua-test /opt/projects/examples/lua-test.lua
	cgi /python-test /opt/projects/examples/python-test.py

	root * /opt/projects/examples
	file_server

	basicauth * {
		bob $2a$14$/wCgaf9oMnmQa20txB76u.nI1AldGMBT/1J7fXCfgOiRShwz/JOkK
	}
}
```

`basicauth *` matches everything under this domain/sub-domain and protects it
with Basic Authentication.

- `bob` is the username
- the long string after it is the bcrypt hash of the password

To generate these hashes, execute `caddy hash-password`. This will prompt you
to enter a password twice and spit out a hashed password that you can put in
your configuration file.
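
For completeness, the `Authorization` header a client sends for Basic access
authentication is just `user:password` encoded in Base64; the server then
checks the password against the bcrypt hash stored in the `Caddyfile`. A
quick sketch with hypothetical credentials (`bob` / `secret`):

```python
import base64

# Hypothetical credentials; only the bcrypt hash of the password
# ever lives in the Caddyfile, the client sends the plain pair.
credentials = base64.b64encode(b"bob:secret").decode()
header = "Authorization: Basic " + credentials

print(header)  # Authorization: Basic Ym9iOnNlY3JldA==
```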

Restart the server and you are ready to go.

## Making Caddy a service with systemd

After the tests were successful, I copied `caddy` to `/usr/bin/caddy` and
copied the `Caddyfile` to `/etc/caddy/Caddyfile`.

Now off to systemd. Each systemd service requires you to create a service
file.

- I created `/etc/systemd/system/caddy.service` and put the following content
  in the file.

```systemd
[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target

[Service]
Type=notify
User=root
Group=root
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile --adapter caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force --adapter caddyfile
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
```

- You might need to reload systemd with `systemctl daemon-reload`.
- Then I enabled the service with `systemctl enable caddy.service`.
- And then I started the service with `systemctl start caddy.service`.

This was about all that I needed to do to get it running. Now I can easily add
new subdomains and domains to the main configuration file and be done with
it. No manual Let's Encrypt shenanigans needed.
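
As a sketch, adding another site is just one more block in the `Caddyfile`
(the subdomain and path below are hypothetical); Caddy provisions the TLS
certificate for it automatically:

```caddyfile
another-experiment.mitjafelicijan.com {
	root * /opt/projects/another-experiment
	file_server
}
```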