Diffstat (limited to 'content/posts')
43 files changed, 7189 insertions, 0 deletions
diff --git a/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md b/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md new file mode 100644 index 0000000..325bd52 --- /dev/null +++ b/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md | |||
| @@ -0,0 +1,42 @@ | |||
| 1 | --- | ||
| 2 | title: Most likely to succeed in the year of 2011 | ||
| 3 | url: most-likely-to-succeed-in-year-of-2011.html | ||
| 4 | date: 2011-01-13T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | The year 2010 was definitely the year of geolocation. The market responded | ||
| 10 | beautifully and lots of very cool services were launched. We have the mobile | ||
| 11 | market to thank for such extensive adoption: new generations of mobile | ||
| 12 | phones are not only packed with high-tech hardware but are also affordable. | ||
| 13 | We can now manage tasks that, not so long ago, seemed almost Star Trek’ish. | ||
| 14 | And all of this has had, and still has, a great influence on the direction we | ||
| 15 | are heading in now. | ||
| 16 | |||
| 17 | Reading all these articles about new and thriving technologies | ||
| 18 | makes me wonder what the next step is. The future is the mesh, as Lisa Gansky | ||
| 19 | said in her book The Mesh. | ||
| 20 | |||
| 21 | Many still hold conservative views on distributed systems: concerns about the | ||
| 22 | security of information, fear of not controlling every aspect of the | ||
| 23 | information flow. I am very open to distributed systems and heterogeneous | ||
| 24 | applications, and I think this is the correct and best way to proceed. | ||
| 25 | |||
| 26 | This year will definitely be about communication platforms. Mobile to mobile. | ||
| 27 | Machine to mobile and vice versa. All the tech is available and ready to be put | ||
| 28 | into action. Wireless is today’s new mantra. And the concept of the semantic web | ||
| 29 | is now ready for industry. | ||
| 30 | |||
| 31 | Applications and developers can now gain access to new layers of systems and | ||
| 32 | can prepare and build solutions to meet the high-quality needs of the market. | ||
| 33 | Speed is everything now. | ||
| 34 | |||
| 35 | My vote goes to “Machine to Machine” and “Embedded Systems”! | ||
| 36 | |||
| 37 | - [Machine-to-Machine](http://en.wikipedia.org/wiki/Machine-to-Machine) | ||
| 38 | - [The ultimate M2M communication protocol](http://www.bitxml.org/) | ||
| 39 | - [COOS Project (connectivity initiative)](http://www.coosproject.org/maven-site/1.0.0/project-info.html) | ||
| 40 | - [Community for machine-to-machine](http://m2m.com/index.jspa) | ||
| 41 | - [Embedded system](http://en.wikipedia.org/wiki/Embedded_system) | ||
| 42 | |||
diff --git a/content/posts/2012-03-09-led-technology-not-so-eco.md b/content/posts/2012-03-09-led-technology-not-so-eco.md new file mode 100644 index 0000000..2841d0a --- /dev/null +++ b/content/posts/2012-03-09-led-technology-not-so-eco.md | |||
| @@ -0,0 +1,33 @@ | |||
| 1 | --- | ||
| 2 | title: LED technology might not be as eco-friendly as you think | ||
| 3 | url: led-technology-not-so-eco.html | ||
| 4 | date: 2012-03-09T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | There is a lot of talk about LED technology. It is beginning to infiltrate | ||
| 10 | industry at a fast rate, and it’s a challenge for designers and engineers | ||
| 11 | alike. I wondered when a weakness would be revealed. Then I stumbled upon an | ||
| 12 | article about the harm of using LED technology. It looks like this magical | ||
| 13 | technology is not so magical and eco-friendly after all. | ||
| 14 | |||
| 15 | A new study from the University of California indicates that LED lights contain | ||
| 16 | toxic metals, and should be produced, used and disposed of carefully. Besides | ||
| 17 | the lead and nickel, the bulbs and their associated parts were also found to | ||
| 18 | contain arsenic, copper, and other metals that have been linked to different | ||
| 19 | cancers, neurological damage, kidney disease, hypertension, skin rashes and | ||
| 20 | other illnesses in humans, and to ecological damage in waterways. | ||
| 21 | |||
| 22 | Since then, I haven’t found any regulation or standard for the disposal of LED | ||
| 23 | lights. This might be a problem in the future, and it is a massive drawback | ||
| 24 | that might have quite an impact on the consumer market. | ||
| 25 | |||
| 26 | Nevertheless, there is potential, and I am sure the market will adapt. I also | ||
| 27 | hope to be reading documents about a solution to this concern soon. | ||
| 28 | |||
| 29 | **Additional resources:** | ||
| 30 | |||
| 31 | - [Recycling and Disposal of Light Bulbs](http://ezinearticles.com/?Recycling-and-Disposal-of-Light-Bulbs&id=1091304) | ||
| 32 | - [How to Dispose of a Low-Energy Light Bulb](http://www.ehow.com/how_7483442_dispose-lowenergy-light-bulb.html) | ||
| 33 | |||
diff --git a/content/posts/2013-10-24-wireless-sensor-networks.md b/content/posts/2013-10-24-wireless-sensor-networks.md new file mode 100644 index 0000000..bc6b333 --- /dev/null +++ b/content/posts/2013-10-24-wireless-sensor-networks.md | |||
| @@ -0,0 +1,54 @@ | |||
| 1 | --- | ||
| 2 | title: Wireless sensor networks | ||
| 3 | url: wireless-sensor-networks.html | ||
| 4 | date: 2013-10-24T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | Zigbee networks have this wonderful capability to self-heal, which means they | ||
| 10 | can reorder connections between nodes if one of them is inoperable. This works | ||
| 11 | out of the box when you deploy them. But keep in mind that achieving | ||
| 12 | this is not as easy as you would think. None of it is plug & play. So to make | ||
| 13 | your life a bit easier, here are some pointers which, I hope, will help you. | ||
| 14 | |||
| 15 | - Be careful when you are ordering your equipment from abroad. There are many | ||
| 16 | rules and regulations you need to comply with before you get your XBee radios. | ||
| 17 | What they do is wait until you prove that you won’t use the technology for | ||
| 18 | some kind of evil take-over-the-world project :). For this, they have the | ||
| 19 | EAR (Export Administration Regulations), which basically means “This product | ||
| 20 | may require a license to export from the United States.”. | ||
| 21 | - I don’t know if this applies to every country, but when we purchased our XBee | ||
| 22 | radios from Mouser, this was mandatory! What we needed to do was print out | ||
| 23 | a form, fill in information about our company and send them a copy via | ||
| 24 | email. With this document, we proved that we were a legitimate company. | ||
| 25 | - When you complete your purchase and send all the documentation, you are not | ||
| 26 | in the clear yet. Customs will take it from there :). There will be some | ||
| 27 | additional costs. Before purchasing, make sure you have as much information | ||
| 28 | about the costs as possible, because it can get expensive in the end. | ||
| 29 | - I suggest you use companies from your own country. You can seriously cut your | ||
| 30 | costs. Here in Slovenia, the best option as far as I know is Farnell. And | ||
| 31 | based on my personal experience, they rock! That is all I need to say! | ||
| 32 | - Make plans when ordering larger quantities. Do not, I repeat, do not place | ||
| 33 | your orders in December! :) Believe me! You will have problems with the stock | ||
| 34 | they can provide. We were once forced to buy some things from Mouser, which | ||
| 35 | was extremely painful because of all the regulations you need to obey when | ||
| 36 | importing goods from the USA. | ||
| 37 | - Make sure the firmware version on your XBee radios is exactly the same! Do | ||
| 38 | not get creative!!! I propose using templates. You can get a template by | ||
| 39 | exporting a settings profile in the X-CTU application. Make sure you have | ||
| 40 | enabled “Upgrade firmware” so you can be sure each radio has the same firmware. | ||
| 41 | - And again: make plans! Plan everything! Months in advance! You will thank me | ||
| 42 | later :) | ||
| 43 | - Test, test, test. Wireless networks can be tricky. | ||
| 44 | |||
| 45 | If you are serious, I suggest you buy the book Building Wireless Sensor | ||
| 46 | Networks. You will get a glimpse of how these networks work, in layman’s terms. | ||
| 47 | It is a good starting point for everybody who wants to build wireless networks. | ||
| 48 | |||
| 49 | **Additional resources:** | ||
| 50 | |||
| 51 | - http://www.digi.com/aboutus/export/generalexportinfo | ||
| 52 | - http://doresearch.stanford.edu/research-scholarship/export-controls/export-controlled-or-embargoed-countries-entities-and-persons | ||
| 53 | - http://www.bis.doc.gov/licensing/exportingbasics.htm | ||
| 54 | |||
diff --git a/content/posts/2015-11-10-software-development-pitfalls.md b/content/posts/2015-11-10-software-development-pitfalls.md new file mode 100644 index 0000000..6a5d9bd --- /dev/null +++ b/content/posts/2015-11-10-software-development-pitfalls.md | |||
| @@ -0,0 +1,181 @@ | |||
| 1 | --- | ||
| 2 | title: Software development and my favorite pitfalls | ||
| 3 | url: software-development-pitfalls.html | ||
| 4 | date: 2015-11-10T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | Over the years I have had the privilege to work on some very exciting projects, | ||
| 10 | both in software development and in electronics, and every experience | ||
| 11 | taught me some invaluable lessons about how NOT to approach development. | ||
| 12 | In this post I will try to point out some absurd, outdated techniques I | ||
| 13 | find the most annoying and damaging during a development cycle. There will be | ||
| 14 | swearing, because this topic really gets on my nerves and I have never | ||
| 15 | coherently tried to explain it in writing. So if I get heated up, please bear with me. | ||
| 16 | |||
| 17 | As new methods of project management are emerging, underlying processes still | ||
| 18 | stay old and outdated. This is mainly because we as people are unable to | ||
| 19 | completely shift away from these approaches. | ||
| 20 | |||
| 21 | I have always struggled with communication, and many times that cost me a | ||
| 22 | relationship or two because I was not on the ball all the time. With every | ||
| 23 | experience, I became more convinced that I was the problem, and I never doubted | ||
| 24 | that the problem might be that communication never evolved a single step beyond | ||
| 25 | email. And if you think about it for a second, not many things have changed | ||
| 26 | around this topic. We just have different representations of email (message | ||
| 27 | boards, chats, project management tools). And I believe this is the real issue | ||
| 28 | we are facing now. | ||
| 29 | |||
| 30 | There are many articles written about hyperconnectivity and its direct | ||
| 31 | effects, but the mainstream does nothing about it. We are just | ||
| 32 | putting out fires, and we do nothing to prevent them. I am certain this will be | ||
| 33 | a major source of grief in the coming years. What we can all do to avoid it is | ||
| 34 | change our mindset and experiment with our communication skills and development | ||
| 35 | approaches. We need to maximize the output a person can give, and to | ||
| 36 | achieve this we need to listen to them and encourage them. I know that not | ||
| 37 | everybody is a naturally born leader, but with enough practice and encouragement | ||
| 38 | they too can become active participants in leadership. | ||
| 39 | |||
| 40 | There is a lot of talk now about methodologies such as Scrum, Kanban and | ||
| 41 | Cleanroom, and they all fucking piss me off :). These are all boxes that | ||
| 42 | imprison people and take away their freedom of thought. This is a | ||
| 43 | straightforward mindfuck / amputation of creativity. | ||
| 44 | |||
| 45 | Let me list a couple of things that I find really destructive and bad for a | ||
| 46 | project and, in the long run, for the company. | ||
| 47 | |||
| 48 | ## Ping emails | ||
| 49 | |||
| 50 | Ping emails are emails you have to write as soon as you receive an email. Their | ||
| 51 | sole purpose is to inform the sender that you received their email and that you | ||
| 52 | are working on it. Their only result is to reassure the sender that their task | ||
| 53 | is being dealt with. The intent basically is: I did my job by sending you this | ||
| 54 | email, so I am in the clear. I categorize this as the fuck-you email. | ||
| 55 | It is one of the most irritating types of emails I need to write. It is the | ||
| 56 | ultimate control-freak show you can experience, and it gives the sender a false | ||
| 57 | feeling of control. Newsflash: we do not live in 1982, when there was a real | ||
| 58 | possibility that an email never reached its destination. I really hate this from | ||
| 59 | the bottom of my heart. | ||
| 60 | |||
| 61 | My replies should be like: “Yes, I am fucking alive, and I am at your service, | ||
| 62 | my liege!”. I guess if I replied like this, I wouldn’t have to write any more | ||
| 63 | messages of this kind. | ||
| 64 | |||
| 65 | ## Everybody is a project manager | ||
| 66 | |||
| 67 | Well, this is a tough one. I noticed that as soon as you let people give | ||
| 68 | their suggestions, you are basically screwed. There is truth in the saying: | ||
| 69 | “Set low expectations and deliver a little more than you promised.”. | ||
| 70 | |||
| 71 | People tend to take on the role of a manager as soon as they are presented with | ||
| 72 | an opportunity. And by getting angry at them, you only provoke yourself. They | ||
| 73 | are not at fault. You just need to tell them at the beginning that they are only | ||
| 74 | giving suggestions, not assigning tasks, and everything will be alright. But if | ||
| 75 | you give them the feeling that they are in control, you will have immense | ||
| 76 | problems explaining why their features are not in the current release. | ||
| 77 | |||
| 78 | The project mission must always lead the project requirements, and any deviation | ||
| 79 | from it will result in major project butchering. By this, I mean that the | ||
| 80 | project will take a path of its own, and you will be left with half-done | ||
| 81 | software that helps nobody. Clear mission goals and clean execution will allow | ||
| 82 | you to develop software with clear intent. | ||
| 83 | |||
| 84 | ## We are never wrong | ||
| 85 | |||
| 86 | I find this type of arrogance the worst. We must always act as though we | ||
| 87 | are infallible and cannot make mistakes. As soon as a procedure or process is | ||
| 88 | established, there is no room for changes or improvements. This is the most | ||
| 89 | idiotic thing someone can say or think. I believe that processes need to evolve | ||
| 90 | and change over time. This is imperative to have in your organization | ||
| 91 | if you want to improve and develop the company. We all need to grow balls and | ||
| 92 | change everything in order to adapt to current situations. Being a prisoner of | ||
| 93 | predefined processes kills creativity. | ||
| 94 | |||
| 95 | I am constantly trying new software for project management and communication. I | ||
| 96 | believe every team has its own dynamic, and it needs to be discovered | ||
| 97 | organically and naturally through many experiments. By putting the team in a | ||
| 98 | box, you are amputating their creativity and therefore minimizing their | ||
| 99 | potential. But if you talk to an executive, you will mostly find archetypical | ||
| 100 | thinking and a strong need to compartmentalize everything from business | ||
| 101 | processes to resource management. And this type of management, which often | ||
| 102 | displays micromanagement techniques, only works for short periods (a couple of | ||
| 103 | years); then employees either leave the company or become mindless | ||
| 104 | drones on autopilot. | ||
| 105 | |||
| 106 | ## Micromanaging | ||
| 107 | |||
| 108 | This basically implies that everybody on the team is an idiot who needs to have | ||
| 109 | a to-do list that they cannot write themselves. How about spoon-feeding the team | ||
| 110 | at lunch, because besides the team leader, everybody must be a complete idiot at | ||
| 111 | best? | ||
| 112 | |||
| 113 | I prefer milestones, as they give developers much more freedom and creativity in | ||
| 114 | development, without wasting their time checking some bizarre to-do list that | ||
| 115 | was not even thought through. Projects change constantly throughout the | ||
| 116 | development cycle, and all you are left with at the end is a list of unchecked | ||
| 117 | tasks and the wrath of management asking why they are not completed. The best WTF moment! | ||
| 118 | |||
| 119 | ## Human contact — no need for it! | ||
| 120 | |||
| 121 | We are vigorously trying to eliminate physical contact by replacing short | ||
| 122 | meetings with software, with no regard for the fact that we are not machines. | ||
| 123 | Many times a simple 5-minute meeting in the morning can solve most of the | ||
| 124 | problems. In rapid development, short bursts of face-to-face communication are | ||
| 125 | possibly the best way to go. | ||
| 126 | |||
| 127 | We now have all this software available, and all we get out of it is a | ||
| 128 | giant clusterfuck. An obstacle, not a solution. So, why do we still use it? | ||
| 129 | |||
| 130 | ## MVP is killing innovation | ||
| 131 | |||
| 132 | Many will disagree with me on this one, but I stand strongly by this statement. | ||
| 133 | What I have noticed in my experience is that all these buzzwords around us only | ||
| 134 | mislead us and trap us in a circle of solving issues that already have a | ||
| 135 | solution, but we are unable to see it without using some fancy word for it. | ||
| 136 | |||
| 137 | The toughest thing for a developer to do is to minimize requirements. Well, this | ||
| 138 | is tough only for bad developers. Yes, I said it. There are many types of | ||
| 139 | developers out there. And those unable to minimize feature scope are the ones | ||
| 140 | you don’t need on your team. Their only goal is to solve problems that exist | ||
| 141 | only in their heads. And then you have to argue with them, and waste energy on | ||
| 142 | them, instead of developing your awesome product. They are a cancer, and I | ||
| 143 | suggest you cut them off. | ||
| 144 | |||
| 145 | MVP as an idea is great, but sadly people don’t understand the underlying | ||
| 146 | philosophy, and they spend too much time focusing and fixating on something that | ||
| 147 | every sane person with a normal IQ would understand without some made-up | ||
| 148 | acronym. And the result is a lot of talking and barely any execution. | ||
| 149 | |||
| 150 | Well, MVP is not directly killing innovation, but stupid people do when they try | ||
| 151 | to understand it. | ||
| 152 | |||
| 153 | ## Pressure wasteland | ||
| 154 | |||
| 155 | You must never allow yourself to be pressured into confirming a deadline if you | ||
| 156 | are not confident. We often feel a need to be in the service of others, which is | ||
| 157 | true to some extent. But it is also true that others are in service to us to | ||
| 158 | some extent. And we forget this all the time. We are all constantly pressured to | ||
| 159 | make decisions just to calm other people down. And when they leave your office, | ||
| 160 | you experience a WTF moment :) How the hell did they manage to fuck me up again? | ||
| 161 | |||
| 162 | People need to realize that the more pressure you put on somebody, the less they | ||
| 163 | will be able to do. So 5-minute update email requests will only result in a | ||
| 164 | mental breakdown and an inability to work that day. Constant poking is probably | ||
| 165 | the one thing that makes me lose my mind instantly. To all of you doing this: | ||
| 166 | “Stop bothering us with your insecurities and let us do our job. We will do it | ||
| 167 | quicker and better without you breathing down our necks.” | ||
| 168 | |||
| 169 | If this happens to me, I end up with no energy at the end of the day. Don’t you | ||
| 170 | get it? You will get much more out of me if you ask me like a human being and | ||
| 171 | not like your personal butler. In the long run, you are destroying your | ||
| 172 | relationships, and nobody will want to work with you. Your schizophrenic | ||
| 173 | approach will, in the long run, damage only you. Nobody is anybody’s property. | ||
| 174 | |||
| 175 | ## Conclusion | ||
| 176 | |||
| 177 | I am guilty of many things described in this post. And I sometimes find it hard | ||
| 178 | to acknowledge this. I lie to myself and try vigorously to find some | ||
| 179 | explanation for why I do these things. There is always room for growth. And maybe | ||
| 180 | you will also find some of yourself in this post and realize what needs to | ||
| 181 | change for you to evolve. | ||
diff --git a/content/posts/2017-03-07-golang-profiling-simplified.md b/content/posts/2017-03-07-golang-profiling-simplified.md new file mode 100644 index 0000000..ee3a210 --- /dev/null +++ b/content/posts/2017-03-07-golang-profiling-simplified.md | |||
| @@ -0,0 +1,126 @@ | |||
| 1 | --- | ||
| 2 | title: Golang profiling simplified | ||
| 3 | url: golang-profiling-simplified.html | ||
| 4 | date: 2017-03-07T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | Many posts have been written about profiling in Go, and yet I haven’t found a | ||
| 10 | proper tutorial on it. Almost all of them are missing some | ||
| 11 | important information, and it gets pretty frustrating when you have a deadline | ||
| 12 | and cannot find a simple, distilled solution. | ||
| 13 | |||
| 14 | Nevertheless, after some searching and experimenting I have found a solution | ||
| 15 | that works for me and probably will for you too. | ||
| 16 | |||
| 17 | ## Where are my pprof files? | ||
| 18 | |||
| 19 | By default, pprof files are generated in the /tmp/ folder. You can override the | ||
| 20 | folder where these files are generated programmatically in your Go code, as we | ||
| 21 | will see in the example below. | ||
| 22 | |||
| 23 | ## Why is my CPU profile empty? | ||
| 24 | |||
| 25 | I have found that sometimes the CPU profile is empty because the program did not | ||
| 26 | execute long enough. Programs that finish too quickly did not produce a usable | ||
| 27 | pprof file in my case. Well, a file is generated, but it only contains 4KB of information. | ||
| 28 | |||
| 29 | ## Profiling | ||
| 30 | |||
| 31 | As you can see from the examples, we execute a dummy_benchmark function to | ||
| 32 | ensure some amount of work. Memory profiling can be done without such a | ||
| 33 | “complex” function, but CPU profiling needs it. | ||
| 34 | |||
| 35 | The memory and CPU profiling examples are almost identical; only the parameters | ||
| 36 | passed to profile.Start in the main function differ. When we set | ||
| 37 | profile.ProfilePath(".") we tell the profiler to store the pprof files in the | ||
| 38 | same folder as our program. | ||
| 39 | |||
| 40 | ### Memory profiling | ||
| 41 | |||
| 42 | ```go | ||
| 43 | package main | ||
| 44 | |||
| 45 | import ( | ||
| 46 | "fmt" | ||
| 47 | "time" | ||
| 48 | "github.com/pkg/profile" | ||
| 49 | ) | ||
| 50 | |||
| 51 | func dummy_benchmark() { | ||
| 52 | |||
| 53 | fmt.Println("first set ...") | ||
| 54 | for i := 0; i < 918231333; i++ { | ||
| 55 | i *= 2 | ||
| 56 | i /= 2 | ||
| 57 | } | ||
| 58 | |||
| 59 | <-time.After(time.Second*3) | ||
| 60 | |||
| 61 | fmt.Println("second set ...") | ||
| 62 | for i := 0; i < 9182312232; i++ { | ||
| 63 | i *= 2 | ||
| 64 | i /= 2 | ||
| 65 | } | ||
| 66 | } | ||
| 67 | |||
| 68 | func main() { | ||
| 69 | defer profile.Start(profile.MemProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop() | ||
| 70 | dummy_benchmark() | ||
| 71 | } | ||
| 72 | ``` | ||
| 73 | |||
| 74 | ### CPU profiling | ||
| 75 | |||
| 76 | ```go | ||
| 77 | package main | ||
| 78 | |||
| 79 | import ( | ||
| 80 | "fmt" | ||
| 81 | "time" | ||
| 82 | "github.com/pkg/profile" | ||
| 83 | ) | ||
| 84 | |||
| 85 | func dummy_benchmark() { | ||
| 86 | |||
| 87 | fmt.Println("first set ...") | ||
| 88 | for i := 0; i < 918231333; i++ { | ||
| 89 | i *= 2 | ||
| 90 | i /= 2 | ||
| 91 | } | ||
| 92 | |||
| 93 | <-time.After(time.Second*3) | ||
| 94 | |||
| 95 | fmt.Println("second set ...") | ||
| 96 | for i := 0; i < 9182312232; i++ { | ||
| 97 | i *= 2 | ||
| 98 | i /= 2 | ||
| 99 | } | ||
| 100 | } | ||
| 101 | |||
| 102 | func main() { | ||
| 103 | defer profile.Start(profile.CPUProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop() | ||
| 104 | dummy_benchmark() | ||
| 105 | } | ||
| 106 | ``` | ||
| 107 | |||
| 108 | ### Generating profiling reports | ||
| 109 | |||
| 110 | ```bash | ||
| 111 | # memory profiling | ||
| 112 | go build mem.go | ||
| 113 | ./mem | ||
| 114 | go tool pprof -pdf ./mem mem.pprof > mem.pdf | ||
| 115 | |||
| 116 | # cpu profiling | ||
| 117 | go build cpu.go | ||
| 118 | ./cpu | ||
| 119 | go tool pprof -pdf ./cpu cpu.pprof > cpu.pdf | ||
| 120 | ``` | ||
| 121 | |||
| 122 | This will generate a PDF document with the visualized profile. | ||
| 123 | |||
| 124 | - [Memory PDF profile example](/assets/go-profiling/golang-profiling-mem.pdf) | ||
| 125 | - [CPU PDF profile example](/assets/go-profiling/golang-profiling-cpu.pdf) | ||
| 126 | |||
diff --git a/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md b/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md new file mode 100644 index 0000000..3a6410f --- /dev/null +++ b/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md | |||
| @@ -0,0 +1,199 @@ | |||
| 1 | --- | ||
| 2 | title: What I've learned developing ad server | ||
| 3 | url: what-i-ve-learned-developing-ad-server.html | ||
| 4 | date: 2017-04-17T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | For the past year and a half I have been developing a native advertising server | ||
| 10 | that contextually matches ads and displays them in different template forms on a | ||
| 11 | variety of websites. This project grew from serving thousands of ads per day to | ||
| 12 | millions. | ||
| 13 | |||
| 14 | The system is made up of a couple of core components: | ||
| 15 | |||
| 16 | - API for serving ads, | ||
| 17 | - Utils - cronjobs and queue management tools, | ||
| 18 | - Dashboard UI. | ||
| 19 | |||
| 20 | The initial release used [MongoDB](https://www.mongodb.com/) for full-text | ||
| 21 | search, but it was later replaced by [Elasticsearch](https://www.elastic.co/) | ||
| 22 | for better CPU utilization and better search performance. This gave us many of | ||
| 23 | Elasticsearch’s amazing features. You should | ||
| 24 | check it out if you do any search-related operations. | ||
| 25 | |||
| 26 | Because the premise of the server is to provide a native ad experience, ads are | ||
| 27 | rendered on the client side via a simple templating engine. This ensures that | ||
| 28 | ads can be displayed in a number of different ways based on the visual style of | ||
| 29 | the page. And this makes the JavaScript client library quite complex. | ||
| 30 | |||
| 31 | So now that you know the basic information about the product, let’s get into the | ||
| 32 | lessons we learned. | ||
| 33 | |||
| 34 | ## Aggregate everything | ||
| 35 | |||
| 36 | After the beta version was released, everything (impressions, clicks, etc.) was | ||
| 37 | written to the database at nanosecond resolution. At that time we were using | ||
| 38 | [PostgreSQL](https://www.postgresql.org/), and the database quickly grew well | ||
| 39 | above 200GB in disk space. And that was problematic. Statistics took a | ||
| 40 | disturbingly long time to aggregate, and indexes on the stats table were no | ||
| 41 | help after we reached 500 million data points. | ||
| 42 | |||
| 43 | > There is the marketing product information and there is the real-life | ||
| 44 | experience. And they tend to be quite the opposite. | ||
| 45 | |||
| 46 | This is why everything is now aggregated on a daily basis, and this | ||
| 47 | data is then fed to Elastic in the form of a daily summary. With this we can now | ||
| 48 | track many more dimensions, such as zone, channel and platform | ||
| 49 | information, and with this information we can adapt the occurrence of ads in | ||
| 50 | specific places more precisely. | ||
| 51 | |||
| 52 | We have also adopted [Redis](https://redis.io/) as a first-class citizen in our | ||
| 53 | stack. Because Redis also persists its data to the local disk, we have some sort | ||
| 54 | of backup if the server were to suffer a failure. | ||
| 55 | |||
| 56 | All the real-time statistics for ad serving and redirecting are kept as | ||
| 57 | counters in a Redis instance, extracted daily and pushed to Elastic. | ||
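The daily roll-up of raw events into a summary can be sketched roughly like this (plain Python with illustrative event data; the production code naturally reads its counters from Redis instead):

```python
from collections import Counter
from datetime import datetime

# Raw events as they would come out of the real-time counters:
# one (timestamp, ad_id, kind) triple per impression or click.
events = [
    ("2017-04-16T09:12:31", "ad-1", "impression"),
    ("2017-04-16T09:12:35", "ad-1", "click"),
    ("2017-04-16T11:40:02", "ad-2", "impression"),
    ("2017-04-17T08:03:44", "ad-1", "impression"),
]

def daily_summary(events):
    """Collapse fine-grained events into per-day, per-ad counters."""
    summary = Counter()
    for timestamp, ad_id, kind in events:
        day = datetime.fromisoformat(timestamp).date().isoformat()
        summary[(day, ad_id, kind)] += 1
    return summary

# Each (day, ad, kind) triple becomes one small document pushed to Elastic.
summary = daily_summary(events)
print(summary[("2017-04-16", "ad-1", "impression")])  # 1
```

A summary of this shape grows with the number of days and dimensions, not with raw traffic, which is what makes the 500-million-row problem go away.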
| 58 | |||
| 59 | ## Measure everything | ||
| 60 | |||
| 61 | The thing about software is that we really don’t know how well it performs | ||
| 62 | under load until such load is present. When testing locally everything is fine, | ||
| 63 | but in production things tend to fall apart. | ||
| 64 | |||
| 65 | As a solution, we measure everything we can: function execution | ||
| 66 | times (by wrapping functions with timers), server performance (CPU, memory, | ||
| 67 | disk, etc.), and Nginx and [uWSGI](https://uwsgi-docs.readthedocs.io/) performance. | ||
| 68 | We sacrifice a bit of performance for the sake of this information, and we store | ||
| 69 | it all for later analysis. | ||
| 70 | |||
| 71 | **Example of function execution time** | ||
| 72 | |||
| 73 | ```json | ||
| 74 | { | ||
| 75 | "get_final_filtered_ads": { | ||
| 76 | "counter": 1931250, | ||
| 77 | "avg": 0.0066143431, | ||
| 78 | "elapsed": 12773.9500310003 | ||
| 79 | }, | ||
| 80 | "store_keywords_statistics": { | ||
| 81 | "counter": 1931011, | ||
| 82 | "avg": 0.0004605267, | ||
| 83 | "elapsed": 889.2821669996 | ||
| 84 | }, | ||
| 85 | "match_by_context": { | ||
| 86 | "counter": 1931011, | ||
| 87 | "avg": 0.0055960716, | ||
| 88 | "elapsed": 10806.0758889999 | ||
| 89 | }, | ||
| 90 | "match_by_high_performance": { | ||
| 91 | "counter": 262, | ||
| 92 | "avg": 0.0152770229, | ||
| 93 | "elapsed": 4.00258 | ||
| 94 | }, | ||
| 95 | "store_impression_stats": { | ||
| 96 | "counter": 1931250, | ||
| 97 | "avg": 0.0006189991, | ||
| 98 | "elapsed": 1195.4419869999 | ||
| 99 | } | ||
| 100 | } | ||
| 101 | ``` | ||
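A timing wrapper that produces numbers of this shape can be sketched in a few lines of Python (a decorator of my own devising for illustration, not our actual instrumentation):

```python
import time
from collections import defaultdict
from functools import wraps

# Per-function stats: call counter, total elapsed seconds and running average.
STATS = defaultdict(lambda: {"counter": 0, "elapsed": 0.0, "avg": 0.0})

def timed(func):
    """Record how often a function is called and how long it runs."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            entry = STATS[func.__name__]
            entry["counter"] += 1
            entry["elapsed"] += time.perf_counter() - start
            entry["avg"] = entry["elapsed"] / entry["counter"]
    return wrapper

@timed
def match_by_context(ads):
    # Stand-in for the real matching logic.
    return [ad for ad in ads if ad.get("match")]

match_by_context([{"match": True}, {"match": False}])
print(STATS["match_by_context"]["counter"])  # 1
```

Dumping STATS as JSON on a schedule yields exactly the kind of report shown above.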
| 102 | |||
| 103 | We have also started profiling with [cProfile](https://pymotw.com/2/profile/) | ||
| 104 | and visualizing the results with [KCachegrind](http://kcachegrind.sourceforge.net/). | ||
| 105 | This provides a much more detailed look into code execution. | ||
| 106 | |||
| 107 | ## Cache control is your friend | ||
| 108 | |||
| 109 | Because we use a JavaScript library for rendering ads, we rely on this script | ||
| 110 | extensively, and when needed we must be able to change the script’s behavior | ||
| 111 | quickly. | ||
| 112 | |||
| 113 | In our case we cannot simply replace the JavaScript URL in the HTML code. It | ||
| 114 | usually takes a day or two for the people who maintain the sites to change the | ||
| 115 | code or add a ?ver=xxx attribute. This makes rapid deployment and testing very | ||
| 116 | difficult and time-consuming. There is a limit to how much you can test locally. | ||
| 117 | |||
We are now in the process of integrating [Google Tag
Manager](https://www.google.com/analytics/tag-manager/), but a couple of the
websites are built on the ASP.NET platform and have some problems with Tag
Manager. With the solution below we can be certain that we are serving the
latest version of the script.
| 123 | |||
It only takes one mistake for users to end up with the script cached, and if it
was cached for a year you can imagine where that leaves you.
| 126 | |||
| 127 | ```nginx | ||
| 128 | # nginx ➜ /etc/nginx/sites-available/default | ||
| 129 | location /static/ { | ||
| 130 | alias /path-to-static-content/; | ||
| 131 | autoindex off; | ||
| 132 | charset utf-8; | ||
| 133 | gzip on; | ||
| 134 | gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css; | ||
| 135 | location ~* \.(ico|gif|jpeg|jpg|png|woff|ttf|otf|svg|woff2|eot)$ { | ||
| 136 | expires 1y; | ||
| 137 | add_header Pragma public; | ||
| 138 | add_header Cache-Control "public"; | ||
| 139 | } | ||
| 140 | location ~* \.(css|js|txt)$ { | ||
| 141 | expires 3600s; | ||
| 142 | add_header Pragma public; | ||
| 143 | add_header Cache-Control "public, must-revalidate"; | ||
| 144 | } | ||
| 145 | } | ||
| 146 | ``` | ||
| 147 | |||
Also be careful when redirecting to a URL in your Python code. We noticed that
if we didn't set the cache-control and expires headers precisely in the
response, we didn't get the request on the server and therefore couldn't
measure clicks. So when redirecting, set the headers as follows and there will
be no problems.
| 152 | |||
| 153 | ```python | ||
| 154 | # python ➜ bottlepy web micro-framework | ||
| 155 | response = bottle.HTTPResponse(status=302) | ||
| 156 | response.set_header("Cache-Control", "no-store, no-cache, must-revalidate") | ||
| 157 | response.set_header("Expires", "Thu, 01 Jan 1970 00:00:00 GMT") | ||
| 158 | response.set_header("Location", url) | ||
| 159 | return response | ||
| 160 | ``` | ||
| 161 | |||
| 162 | > Cache control in browsers is quite aggressive and you need to be precise to | ||
| 163 | avoid future problems. We learned that lesson the hard way. | ||
| 164 | |||
| 165 | ## Learn NGINX | ||
| 166 | |||
When deciding on a web server we went with Nginx as a reverse proxy for our
applications. We adopted a microservice-oriented architecture early in the
project to ensure that when we scale we can easily add additional servers to
our cluster, and Nginx was crucial for load balancing and static content
delivery.
| 172 | |||
At first our config file was quite simple, but it grew over time. After much
patching and adding of new settings, I sat down and learned more about the guts
of Nginx. This proved very useful, and we were able to squeeze much more out of
our setup. So I advise you to take your time and read through the
[documentation](https://nginx.org/en/docs/). It saved us a lot of headaches;
googling for solutions only goes so far.
| 179 | |||
| 180 | ## Use Redis/Memcached | ||
| 181 | |||
As explained above, we use caching for basically everything. It is the
cornerstone of our services. At first we were very careful about how much we
stored in [Redis](https://redis.io/), but we later found that the memory
footprint stays low even when storing large amounts of data.
| 186 | |||
So we gradually increased our usage, up to caching whole HTML outputs of the
dashboard. This improved our performance by an order of magnitude, and the
native TTL support goes hand in hand with our needs.
| 190 | |||
The reason we chose [Redis](https://redis.io/) over
[Memcached](https://memcached.org/) was that Redis scales out of the box, but
all of this can be achieved with Memcached as well.
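
The pattern behind caching rendered HTML with a TTL is simply
get-or-render-with-expiry. Below is a minimal in-process sketch of the idea;
Redis provides the same semantics over the network via its `SETEX` and `GET`
commands, and the `TTLCache` class here is just an illustrative stand-in, not
our actual code:

```python
import time

class TTLCache:
    """Minimal in-process stand-in for Redis SETEX/GET semantics."""
    def __init__(self):
        self._store = {}  # key -> (expires_at, value)

    def setex(self, key, ttl_seconds, value):
        self._store[key] = (time.time() + ttl_seconds, value)

    def get(self, key):
        item = self._store.get(key)
        if item is None or time.time() > item[0]:
            self._store.pop(key, None)  # expired or missing
            return None
        return item[1]

def render_dashboard(cache, user_id):
    """Return cached HTML, or render it and cache it for 60 seconds."""
    key = "dashboard:%s" % user_id
    html = cache.get(key)
    if html is None:
        html = "<h1>dashboard for %s</h1>" % user_id  # the expensive part
        cache.setex(key, 60, html)
    return html
```

With Redis the `TTLCache` instance would be replaced by a `redis` client, and
the expiry handling happens server-side.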
| 194 | |||
| 195 | ## Conclusion | ||
| 196 | |||
There are many more details that could have been written, and every single
topic here deserves its own post, but you probably got the idea of the
problems we faced.
diff --git a/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md b/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md new file mode 100644 index 0000000..d1cea7c --- /dev/null +++ b/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md | |||
| @@ -0,0 +1,206 @@ | |||
| 1 | --- | ||
| 2 | title: Profiling Python web applications with visual tools | ||
| 3 | url: profiling-python-web-applications-with-visual-tools.html | ||
| 4 | date: 2017-04-21T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
I have been profiling my software with KCachegrind for a long time now, and I
was missing this option when developing APIs or other web services. I always
knew it was possible but never really took the time to dive into it.
| 12 | |||
| 13 | Before we begin there are some requirements. We will need to: | ||
| 14 | |||
| 15 | - implement [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) into our web app, | ||
| 16 | - convert output to [callgrind](http://valgrind.org/docs/manual/cl-manual.html) format with [pyprof2calltree](https://pypi.python.org/pypi/pyprof2calltree/), | ||
| 17 | - visualize data with [KCachegrind](http://kcachegrind.sourceforge.net/html/Home.html) or [Profiling Viewer](http://www.profilingviewer.com/). | ||
| 18 | |||
| 19 | |||
| 20 | If you are using MacOS you should check out [Profiling | ||
| 21 | Viewer](http://www.profilingviewer.com/) or | ||
| 22 | [MacCallGrind](http://www.maccallgrind.com/). | ||
| 23 | |||
| 24 |  | ||
| 25 | |||
| 26 | We will be dividing this post into two main categories: | ||
| 27 | |||
| 28 | - writing simple web-service, | ||
| 29 | - visualize profile of this web-service. | ||
| 30 | |||
| 31 | ## Simple web-service | ||
| 32 | |||
Let's use virtualenv so we won't pollute our base system. If you don't have
virtualenv installed, you can install it with the pip command.
| 35 | |||
| 36 | ```bash | ||
| 37 | # let's install virtualenv globally | ||
| 38 | $ sudo pip install virtualenv | ||
| 39 | |||
| 40 | # let's also install pyprof2calltree globally | ||
| 41 | $ sudo pip install pyprof2calltree | ||
| 42 | |||
| 43 | # now we create project | ||
| 44 | $ mkdir demo-project | ||
| 45 | $ cd demo-project/ | ||
| 46 | |||
| 47 | # now let's create folder where we will store profiles | ||
| 48 | $ mkdir prof | ||
| 49 | |||
| 50 | # now we create empty virtualenv in venv/ folder | ||
| 51 | $ virtualenv --no-site-packages venv | ||
| 52 | |||
| 53 | # we now need to activate virtualenv | ||
| 54 | $ source venv/bin/activate | ||
| 55 | |||
# you can check that virtualenv was correctly initialized by
# checking where your python interpreter is located
# if the command below points to your created directory and not some
# system dir like /usr/bin/python then everything is fine
| 60 | $ which python | ||
| 61 | |||
# we can now check that all is good ➜ if so, a couple of
# lines will be displayed
| 64 | $ pip freeze | ||
| 65 | # appdirs==1.4.3 | ||
| 66 | # packaging==16.8 | ||
| 67 | # pyparsing==2.2.0 | ||
| 68 | # six==1.10.0 | ||
| 69 | |||
| 70 | # now we are ready to install bottlepy ➜ web micro-framework | ||
| 71 | $ pip install bottle | ||
| 72 | |||
# you can deactivate virtualenv, but you will then drop back
# to the system environment ➜ for now, don't deactivate
| 75 | $ deactivate | ||
| 76 | ``` | ||
| 77 | |||
We are now ready to write a simple web service. Create a file named app.py and
paste the code below into it.
| 80 | |||
| 81 | ```python | ||
| 82 | # -*- coding: utf-8 -*- | ||
| 83 | |||
| 84 | import bottle | ||
| 85 | import random | ||
| 86 | import cProfile | ||
| 87 | |||
| 88 | app = bottle.Bottle() | ||
| 89 | |||
# this decorator wraps a function, profiles its execution
# and dumps the stats to prof/function-name.prof
# in our example only the awesome_random_number function is
# profiled, because only it is decorated with @do_cprofile
| 95 | def do_cprofile(func): | ||
| 96 | def profiled_func(*args, **kwargs): | ||
| 97 | profile = cProfile.Profile() | ||
| 98 | try: | ||
| 99 | profile.enable() | ||
| 100 | result = func(*args, **kwargs) | ||
| 101 | profile.disable() | ||
| 102 | return result | ||
| 103 | finally: | ||
| 104 | profile.dump_stats("prof/" + str(func.__name__) + ".prof") | ||
| 105 | return profiled_func | ||
| 106 | |||
| 107 | |||
# we enable profiling for a specific function by adding
# @do_cprofile above its declaration
| 110 | @app.route("/") | ||
| 111 | @do_cprofile | ||
| 112 | def awesome_random_number(): | ||
| 113 | awesome_random_number = random.randint(0, 100) | ||
| 114 | return "awesome random number is " + str(awesome_random_number) | ||
| 115 | |||
| 116 | @app.route("/test") | ||
| 117 | def test(): | ||
| 118 | return "dummy test" | ||
| 119 | |||
| 120 | if __name__ == '__main__': | ||
| 121 | bottle.run( | ||
| 122 | app = app, | ||
| 123 | host = "0.0.0.0", | ||
| 124 | port = 4000 | ||
| 125 | ) | ||
| 126 | |||
| 127 | # run with 'python app.py' | ||
| 128 | # open browser 'http://0.0.0.0:4000' | ||
| 129 | ``` | ||
| 130 | |||
When the browser hits the awesome\_random\_number() function, a profile is
created in the prof/ subfolder.
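
If you only need a quick look before reaching for a visual tool, the standard
library's `pstats` module can read the same dump directly. A small sketch (the
file name is taken from the example above, but here the profile is created
inline rather than by the web app):

```python
import cProfile
import pstats
import random

def awesome_random_number():
    return random.randint(0, 100)

# profile one call and dump the stats, mirroring what the
# do_cprofile decorator does per request
profiler = cProfile.Profile()
profiler.enable()
awesome_random_number()
profiler.disable()
profiler.dump_stats("awesome_random_number.prof")

# load the dump and print the 10 entries with the highest
# cumulative time to the terminal
stats = pstats.Stats("awesome_random_number.prof")
stats.sort_stats("cumulative").print_stats(10)
```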
| 133 | |||
| 134 | ## Visualize profile | ||
| 135 | |||
| 136 | Now let's create callgrind format from this cProfile output. | ||
| 137 | |||
| 138 | ```bash | ||
| 139 | $ cd prof/ | ||
| 140 | $ pyprof2calltree -i awesome_random_number.prof | ||
| 141 | # this creates 'awesome_random_number.prof.log' file in the same folder | ||
| 142 | ``` | ||
| 143 | |||
This file can be opened with the visualizing tools listed above. In this case
we will be using Profiling Viewer under macOS. You can open the image in a new
tab. As you can see from this example, it shows the hierarchy and execution
order of your code.
| 148 | |||
| 149 |  | ||
| 150 | |||
> Make sure you reconvert the cProfile output every time you want to refresh
and look for possible optimizations, because cProfile updates the .prof file
every time the browser hits the function.
| 154 | |||
This is just a simple example, but when you are developing real-life
applications this can be very illuminating, especially for seeing which parts
of your code are bottlenecks and need to be optimized.
| 158 | |||
| 159 | ## Update 2017-04-22 | ||
| 160 | |||
| 161 | Reddit user [mvt](https://www.reddit.com/user/mvt) also recommended this awesome | ||
| 162 | web based profile visualizer [SnakeViz](https://jiffyclub.github.io/snakeviz/) | ||
| 163 | that directly takes output from | ||
| 164 | [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) | ||
| 165 | module. | ||
| 166 | |||
| 167 | <div class="reddit-embed" data-embed-media="www.redditmedia.com" data-embed-parent="false" data-embed-live="false" data-embed-uuid="583880c1-002e-41ed-a373-020a0ef2cff9" data-embed-created="2017-04-22T19:46:54.810Z"><a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/dgljhsb/">Comment</a> from discussion <a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/">Profiling Python web applications with visual tools</a>.</div><script async src="https://www.redditstatic.com/comment-embed.js"></script> | ||
| 168 | |||
| 169 | ```bash | ||
| 170 | # let's install it globally as well | ||
| 171 | $ sudo pip install snakeviz | ||
| 172 | |||
| 173 | # now let's visualize | ||
| 174 | $ cd prof/ | ||
| 175 | $ snakeviz awesome_random_number.prof | ||
| 176 | # this automatically opens browser window and | ||
| 177 | # shows visualized profile | ||
| 178 | ``` | ||
| 179 | |||
| 180 |  | ||
| 181 | |||
Reddit user [ccharles](https://www.reddit.com/user/ccharles) suggested a better
way of installing pip packages: target the user level instead of using sudo.
| 184 | |||
| 185 | <div class="reddit-embed" data-embed-media="www.redditmedia.com" data-embed-parent="false" data-embed-live="false" data-embed-uuid="f4f0459e-684d-441e-bebe-eb49b2f0a31d" data-embed-created="2017-04-22T19:46:10.874Z"><a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/dglpzkx/">Comment</a> from discussion <a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/">Profiling Python web applications with visual tools</a>.</div><script async src="https://www.redditstatic.com/comment-embed.js"></script> | ||
| 186 | |||
| 187 | ```bash | ||
| 188 | # now we need to add this path to our $PATH variable | ||
# we do this by adding this line at the end of your
| 190 | # ~/.bashrc file | ||
| 191 | PATH=$PATH:$HOME/.local/bin/ | ||
| 192 | |||
| 193 | # in order to use this new configuration you can close | ||
| 194 | # and reopen terminal or reload .bashrc file | ||
| 195 | $ source ~/.bashrc | ||
| 196 | |||
| 197 | # now let's test if new directory is present in $PATH | ||
| 198 | $ echo $PATH | ||
| 199 | |||
| 200 | # now we can install on user level by adding --user | ||
| 201 | # without use of sudo | ||
| 202 | $ pip install snakeviz --user | ||
| 203 | ``` | ||
| 204 | |||
| 205 | Or as suggested by [mvt](https://www.reddit.com/user/mvt) you can | ||
| 206 | use [pipsi](https://github.com/mitsuhiko/pipsi). | ||
diff --git a/content/posts/2017-08-11-simple-iot-application.md b/content/posts/2017-08-11-simple-iot-application.md new file mode 100644 index 0000000..00a7802 --- /dev/null +++ b/content/posts/2017-08-11-simple-iot-application.md | |||
| @@ -0,0 +1,607 @@ | |||
| 1 | --- | ||
| 2 | title: Simple IOT application supported by real-time monitoring and data history | ||
| 3 | url: simple-iot-application.html | ||
| 4 | date: 2017-08-11T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## Initial thoughts | ||
| 10 | |||
I have been developing these kinds of applications for the better part of the
last five years, and people keep asking me how to approach building one, so I
will try to explain it here.
| 14 | |||
IoT applications are really no different from any other kind of application.
We have data that needs to be collected and visualized in some form of tables
or charts. The main difference is that most of the time this data is collected
by some kind of device foreign to a developer who mainly operates in the web
domain. But fear not, it's not that different from writing some JavaScript.
| 20 | |||
There are many devices able to transmit data over a wireless or wired network
by default, but for the sake of example we will be using the commonly known
Arduino with a wireless module already on the board → [Arduino
MKR1000](https://store.arduino.cc/arduino-mkr1000).
| 25 | |||
To make this little project as accessible as possible, I will try to keep it
as inexpensive as possible. By this I mean that I will avoid hosted virtual
servers and will use my own laptop as the server, but you must buy an Arduino
MKR1000 to follow the steps below. If you do want to deploy this software, I
would suggest [DigitalOcean](https://www.digitalocean.com) → their smallest VPS
is one of the most affordable options out there. Please note that this software
will not run on stock web hosting that only supports LAMP (Linux, Apache,
MySQL, and PHP).
| 35 | |||
Before we begin, please note that this is strictly experimental code, not well
optimized; there are much better ways of handling some aspects of the
application, but those require a much deeper knowledge of the technology than
an example like this needs.
| 40 | |||
| 41 | **Development steps** | ||
| 42 | |||
| 43 | 1. Simple Python API that will receive and store incoming data. | ||
| 44 | 2. Prototype C++ code that will read "sensor data" and transmit it to API. | ||
| 45 | 3. Data visualization with charts → extends Python web application. | ||
| 46 | |||
Steps 1 and 3 will share the same web application: one route will be dedicated
to the API and another to serving the HTML with the chart.

The schema below represents what we will try to achieve and how the different
parts relate to each other.
| 52 | |||
| 53 |  | ||
| 54 | |||
| 55 | ## Simple Python API | ||
| 56 | |||
I have always been a fan of simplicity, so we will be using [Bottle: Python Web
Framework](https://bottlepy.org/docs/dev/). It is a single-file web framework
that seriously simplifies working with routes and templating, and it has a
built-in web server that satisfies our needs in this case.
| 61 | |||
First we need to install the bottle package. This can be done by downloading
```bottle.py``` and placing it in the root of your application, or by using
pip: ```pip install bottle --user```.
| 65 | |||
If you are using Linux or macOS, Python is already installed. If you want to
test this on Windows, please install [Python for
Windows](https://www.python.org/downloads/windows/). There may be some problems
with the path when you try to launch ```python webapp.py```, so please take
care of this before you continue.
| 71 | |||
| 72 | ### Basic web application | ||
| 73 | |||
The most basic Bottle application is quite simple. Paste the code below into a
```webapp.py``` file and save it.
| 76 | |||
| 77 | ```python | ||
| 78 | # -*- coding: utf-8 -*- | ||
| 79 | |||
| 80 | import bottle | ||
| 81 | |||
| 82 | # initializing bottle app | ||
| 83 | app = bottle.Bottle() | ||
| 84 | |||
| 85 | # triggered when / is accessed from browser | ||
| 86 | # only accepts GET → no POST allowed | ||
| 87 | @app.route("/", method=["GET"]) | ||
| 88 | def route_default(): | ||
| 89 | return "howdy from python" | ||
| 90 | |||
| 91 | # starting server on http://0.0.0.0:5000 | ||
| 92 | if __name__ == "__main__": | ||
| 93 | bottle.run( | ||
| 94 | app = app, | ||
| 95 | host = "0.0.0.0", | ||
| 96 | port = 5000, | ||
| 97 | debug = True, | ||
| 98 | reloader = True, | ||
| 99 | catchall = True, | ||
| 100 | ) | ||
| 101 | ``` | ||
| 102 | |||
To run this simple application, open a command prompt or terminal on your
machine, go to the folder containing your file and type ```python webapp.py```.
If everything goes OK, open your web browser and point it to
```http://0.0.0.0:5000```.
| 107 | |||
If you would like to change the port of your application to something like
port 80 without running your app as root, you will hit a problem: TCP/IP port
numbers below 1024 are privileged ports → this is a security feature. So, for
both simplicity and security, use a port number above 1024, as I have with
port 5000.
| 112 | |||
| 113 | If this fails at any time please fix it before you continue, because nothing | ||
| 114 | below will work otherwise. | ||
| 115 | |||
We use 0.0.0.0 as the default host so that this app is available over your
local network. If you find your local IP with ```ifconfig``` and try accessing
this site with your phone (when it is on the same network/router as your
machine), it should work as well (an example of such an address is
```http://192.168.1.15:5000```). This is a must-have, because the Arduino will
be accessing this application to send its data.
| 121 | |||
| 122 | ### Web application security | ||
| 123 | |||
There is a lot to be said about security; it is the topic of many books. Of
course not all of it can be covered here, but to establish some basic
security → you should always use SSL with your application. Some fantastic free
certificates are available from [Let's Encrypt - Free SSL/TLS
Certificates](https://letsencrypt.org). With an SSL certificate installed, you
should then make use of HTTP headers and send your "API key" via a header. If
your key is sent via a header, it is encrypted by SSL before travelling over
the network. Never send your API keys as a GET parameter like
```http://example.com/?api_key=somekeyvalue```: a key sent this way is visible
in logs and to network sniffers.
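
One extra hardening step worth mentioning (my addition, not part of the code in
this post): compare the received key in constant time with
`hmac.compare_digest`, so that response timing does not leak key bytes to an
attacker probing the API:

```python
import hmac

API_KEY = "JtF2aUE5SGHfVJBCG5SH"  # the example key used later in this post

def key_is_valid(received):
    # hmac.compare_digest compares strings in constant time, so an
    # attacker cannot discover the key byte by byte via response timing
    return received is not None and hmac.compare_digest(received, API_KEY)
```

In the API route below, this function would replace the plain `==` comparison
against `app.config["api_key"]`.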
| 134 | |||
There is a fantastic article describing some aspects of security: [11 Web
Application Security Best
Practices](https://www.keycdn.com/blog/web-application-security-best-practices/).
Please check it out.
| 139 | |||
| 140 | ### Simple API for writing data-points | ||
| 141 | |||
We will now take the boilerplate code from the example above and extend it to
be able to write the data received by the API to local storage. For storage I
will use SQLite3, because it plays well with Python and can store quite a
large amount of data. I have been using it to collect gigabytes of data in a
single database without any corruption or problems → your experience may vary.
| 147 | |||
To avoid learning SQLite, I will be using [Dataset: databases for lazy
people](https://dataset.readthedocs.io/en/latest/index.html). This package
abstracts SQL and simplifies writing and reading data from the database. You
should install this package with pip: ```pip install dataset --user```.
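
For a feel of what dataset is abstracting away, the same insert with the
standard library's `sqlite3` module looks roughly like this (a sketch only;
dataset additionally creates tables and columns on the fly, which the raw
module does not):

```python
import sqlite3
import time

# in-memory database for illustration; the post uses a data.db file
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS point (ts INTEGER, value TEXT)")

def insert_point(value):
    """Store one reading with the current unix timestamp."""
    conn.execute("INSERT INTO point (ts, value) VALUES (?, ?)",
                 (int(time.time()), value))
    conn.commit()

insert_point("42")
rows = conn.execute("SELECT ts, value FROM point").fetchall()
```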
| 152 | |||
Because the API will use the POST method, I will test that the code works
correctly using the [Restlet Client for Google
Chrome](https://chrome.google.com/webstore/detail/restlet-client-rest-api-t/aejoelaoggembcahagimdiliamlcdmfm).
This software also allows you to set headers → needed for basic security with
the API key.
| 157 | |||
| 158 | To quickly generate passwords or API keys I usually use this nifty website | ||
| 159 | [RandomKeygen](https://randomkeygen.com/). | ||
| 160 | |||
Copy and paste the code below over your previous code in the file
```webapp.py```.
| 162 | |||
| 163 | ```python | ||
| 164 | # -*- coding: utf-8 -*- | ||
| 165 | |||
| 166 | import time | ||
| 167 | import bottle | ||
| 168 | import random | ||
| 169 | import dataset | ||
| 170 | |||
| 171 | # initializing bottle app | ||
| 172 | app = bottle.Bottle() | ||
| 173 | |||
| 174 | # connects to sqlite database | ||
| 175 | # check_same_thread=False allows using it in multi-threaded mode | ||
| 176 | app.config["dsn"] = dataset.connect("sqlite:///data.db?check_same_thread=False") | ||
| 177 | |||
| 178 | # api key that will be used in Arduino code | ||
| 179 | app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH" | ||
| 180 | |||
| 181 | # triggered when /api is accessed from browser | ||
| 182 | # only accepts POST → no GET allowed | ||
| 183 | @app.route("/api", method=["POST"]) | ||
| 184 | def route_default(): | ||
| 185 | status = 400 | ||
| 186 | ts = int(time.time()) # current timestamp | ||
| 187 | value = bottle.request.body.read() # data from device | ||
| 188 | api_key = bottle.request.get_header("Api_Key") # api key from header | ||
| 189 | |||
| 190 | # outputs to console received data for debug reason | ||
    print(">>> {} :: {}".format(value, api_key))
| 192 | |||
| 193 | # if api_key is correct and value is present | ||
| 194 | # then writes attribute to point table | ||
| 195 | if api_key == app.config["api_key"] and value: | ||
| 196 | app.config["dsn"]["point"].insert(dict(ts=ts, value=value)) | ||
| 197 | status = 200 | ||
| 198 | |||
| 199 | # we only need to return status | ||
| 200 | return bottle.HTTPResponse(status=status, body="") | ||
| 201 | |||
| 202 | # starting server on http://0.0.0.0:5000 | ||
| 203 | if __name__ == "__main__": | ||
| 204 | bottle.run( | ||
| 205 | app = app, | ||
| 206 | host = "0.0.0.0", | ||
| 207 | port = 5000, | ||
| 208 | debug = True, | ||
| 209 | reloader = True, | ||
| 210 | catchall = True, | ||
| 211 | ) | ||
| 212 | ``` | ||
| 213 | |||
To run this, simply go to the folder containing the Python file and run
```python webapp.py``` from a terminal. If everything goes OK, you should have
a simple API available via the POST method on the /api route.
| 217 | |||
After testing the service with Restlet Client, you should see your data in the
database file ```data.db```.
| 220 | |||
| 221 |  | ||
| 222 | |||
You can also check the contents of the new database file with a desktop client
for SQLite → [DB Browser for SQLite](http://sqlitebrowser.org/).
| 225 | |||
| 226 |  | ||
| 227 | |||
The table structure is as simple as it can be: we have ts (timestamp) and
value (the value from the Arduino). As you can see, the timestamp is generated
on the API side. If you happened to have a real-time clock on the Arduino, it
would be better to generate and send the timestamp along with the value. This
would be particularly useful if we were collecting sensor data at a higher
frequency and then sending it to the API in bulk.
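
If the device did batch readings with its own timestamps, the API could accept
a JSON list and insert it in one pass. A hypothetical sketch of the parsing
step (`parse_bulk_payload` is my name, not part of the post's code):

```python
import json

def parse_bulk_payload(body):
    """Turn a JSON payload like '[{"ts": 1502403000, "value": 42}, ...]'
    into rows ready for insertion into the point table."""
    points = json.loads(body)
    return [dict(ts=int(p["ts"]), value=str(p["value"])) for p in points]

rows = parse_bulk_payload('[{"ts": 1502403000, "value": 42}]')
```

Each returned dict matches the shape passed to ```insert()``` in the API code
above, so the whole batch can be written in a single loop or bulk insert.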
| 234 | |||
If you deploy this app with uWSGI in multi-threaded mode, use a DSN (Data
Source Name) URL with ```?check_same_thread=False```.
| 237 | |||
OK, now that we have a working API with some basic security, so unwanted
people cannot post data to your database, we can proceed and try to program
the Arduino to send data to the API.
| 241 | |||
| 242 | ## Sending data to API with Arduino MKR1000 | ||
| 243 | |||
First of all, you need the MKR1000 module and a micro-USB cable to proceed. If
you have ever done any work with Arduino, you know that you also need the
[Arduino IDE](https://www.arduino.cc/en/Main/Software). From the provided link
you should be able to download and install the IDE. Once that task is
completed and you have successfully run the blink example, proceed to the next
step.
| 249 | |||
To use the wireless capabilities of the MKR1000, you first need to install the
[WiFi101 library](https://www.arduino.cc/en/Reference/WiFi101) in the Arduino
IDE. Check before you install; you may already have it.
| 253 | |||
The code below is a working example that sends data to the API. Before you
test it, make sure the Python web application is running. Then change the
settings for the wifi, the API endpoint and the api_key. If for some reason
the code below doesn't work for you, please leave a comment and I'll try to
help.
| 258 | |||
Once you have opened the IDE and copied this code, try to compile and upload
it. Then open the "Serial monitor" to see if any output is produced by the
Arduino.
| 261 | |||
| 262 | ```c | ||
| 263 | #include <WiFi101.h> | ||
| 264 | |||
| 265 | // wifi settings | ||
| 266 | char ssid[] = "ssid-name"; | ||
| 267 | char pass[] = "ssid-password"; | ||
| 268 | |||
// api server endpoint
| 270 | char server[] = "192.168.6.22"; | ||
| 271 | int port = 5000; | ||
| 272 | |||
| 273 | // api key that must be the same as the one in Python code | ||
| 274 | String api_key = "JtF2aUE5SGHfVJBCG5SH"; | ||
| 275 | |||
| 276 | // frequency data is sent in ms - every 5 seconds | ||
| 277 | int timeout = 1000 * 5; | ||
| 278 | |||
| 279 | int status = WL_IDLE_STATUS; | ||
| 280 | |||
| 281 | void setup() { | ||
| 282 | |||
| 283 | // initialize serial and wait for port to open: | ||
| 284 | Serial.begin(9600); | ||
| 285 | delay(1000); | ||
| 286 | |||
| 287 | // check for the presence of the shield | ||
| 288 | if (WiFi.status() == WL_NO_SHIELD) { | ||
| 289 | Serial.println("WiFi shield not present"); | ||
| 290 | while (true); | ||
| 291 | } | ||
| 292 | |||
| 293 | // attempt to connect to wifi network | ||
| 294 | while (status != WL_CONNECTED) { | ||
| 295 | Serial.print("Attempting to connect to SSID: "); | ||
| 296 | Serial.println(ssid); | ||
| 297 | status = WiFi.begin(ssid, pass); | ||
| 298 | // wait 10 seconds for connection | ||
| 299 | delay(10000); | ||
| 300 | } | ||
| 301 | |||
| 302 | // output wifi status to serial monitor | ||
| 303 | Serial.print("SSID: "); | ||
| 304 | Serial.println(WiFi.SSID()); | ||
| 305 | |||
| 306 | IPAddress ip = WiFi.localIP(); | ||
| 307 | Serial.print("IP Address: "); | ||
| 308 | Serial.println(ip); | ||
| 309 | |||
| 310 | long rssi = WiFi.RSSI(); | ||
| 311 | Serial.print("signal strength (RSSI):"); | ||
| 312 | Serial.print(rssi); | ||
| 313 | Serial.println(" dBm"); | ||
| 314 | } | ||
| 315 | |||
| 316 | void loop() { | ||
| 317 | WiFiClient client; | ||
| 318 | |||
| 319 | if (client.connect(server, port)) { | ||
| 320 | |||
| 321 | // I use random number generator for this example | ||
| 322 | // but you can use analog or digital inputs from arduino | ||
| 323 | String content = String(random(1000)); | ||
| 324 | |||
| 325 | client.println("POST /api HTTP/1.1"); | ||
| 326 | client.println("Connection: close"); | ||
| 327 | client.println("Api-Key: " + api_key); | ||
| 328 | client.println("Content-Length: " + String(content.length())); | ||
| 329 | client.println(); | ||
| 330 | client.println(content); | ||
| 331 | |||
| 332 | delay(100); | ||
| 333 | client.stop(); | ||
| 334 | Serial.println("Data sent successfully ..."); | ||
| 335 | |||
| 336 | } else { | ||
| 337 | Serial.println("Problem sending data ..."); | ||
| 338 | } | ||
| 339 | |||
| 340 | // waits for x seconds and continue looping | ||
| 341 | delay(timeout); | ||
| 342 | } | ||
| 343 | ``` | ||
| 344 | |||
As you can see from the example, the Arduino generates a random integer
between 0 and 1000. You can easily replace this with a temperature sensor or
any other kind of sensor.
| 348 | |||
Now that we have the API in place and the Arduino is sending demo data, we can
focus on data visualization.
| 351 | |||
| 352 | ## Data visualization | ||
| 353 | |||
Before we continue, let's examine our project folder structure. Currently we
have only two files in our project:
| 356 | |||
| 357 | _simple-iot-app/_ | ||
| 358 | |||
| 359 | * _webapp.py_ | ||
| 360 | * _data.db_ | ||
| 361 | |||
We will now add an HTML template that contains the CSS and JavaScript code
inline, for simplicity. For the Bottle framework to scan the root application
folder for templates, we will add ```bottle.TEMPLATE_PATH.insert(0, "./")```
to ```webapp.py```. By default Bottle uses the ```views/``` subfolder to store
templates. This is not the ideal setup, and if you use Bottle to develop web
applications you should stick with the native behavior and store templates in
their predefined folder; but for the sake of example we will override it. Be
careful to fully replace your code with the new code provided below, rather
than partially replacing code in the file :) New code for reading data-points
is also included in the Python example below.
| 372 | |||
| 373 | First we add a new route to our web application. It is triggered when the | ||
| 374 | browser hits the root of the application, ```http://0.0.0.0:5000/```. This route | ||
| 375 | does nothing more than render the ```frontend.html``` template, which is done by | ||
| 376 | ```return bottle.template("frontend.html")```. Check the code below to further | ||
| 377 | examine how exactly this is done. | ||
| 378 | |||
| 379 | Now we expand the ```/api``` route to use different HTTP methods for writing and | ||
| 380 | reading data-points. For writing a data-point we use the POST method, and for | ||
| 381 | reading points we use the GET method. GET returns a JSON array with the latest | ||
| 382 | readings and historical data. | ||
| 383 | |||
| 384 | There is a fantastic JavaScript library for plotting time-series charts called | ||
| 385 | [MetricsGraphics.js](https://www.metricsgraphicsjs.org) that is based on | ||
| 386 | [D3.js](https://d3js.org/) library for visualizing data. | ||
| 387 | |||
| 388 | MetricsGraphics.js expects its data in a specific schema, so we need to | ||
| 389 | transform the rows from the database into this format: | ||
| 390 | |||
| 391 | ```json | ||
| 392 | [ | ||
| 393 | { | ||
| 394 | "date": "2017-08-11 01:07:20", | ||
| 395 | "value": 933 | ||
| 396 | }, | ||
| 397 | { | ||
| 398 | "date": "2017-08-11 01:07:30", | ||
| 399 | "value": 743 | ||
| 400 | } | ||
| 401 | ] | ||
| 402 | ``` | ||
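This transformation can be sketched on its own in a few lines of Python (assuming rows of `(ts, value)` pairs with Unix timestamps, matching the `point` table used in the web application below):

```python
import datetime
import json

def rows_to_schema(rows):
    # converts (unix_ts, value) pairs into the schema MetricsGraphics.js expects
    return [
        {
            "date": datetime.datetime.fromtimestamp(int(ts)).strftime("%Y-%m-%d %H:%M:%S"),
            "value": value,
        }
        for ts, value in rows
    ]

print(json.dumps(rows_to_schema([(1502406440, 933), (1502406450, 743)]), indent=2))
```

The `date` strings come out in the server's local timezone, which is fine here because the browser never parses them as timezone-aware values.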
| 403 | |||
| 404 | The web application is now complete; we only need ```frontend.html```, which we | ||
| 405 | will develop next. If you try to start the web app now and open the application | ||
| 406 | root, it will return an error because frontend.html doesn't exist yet. | ||
| 407 | |||
| 408 | ```python | ||
| 409 | # -*- coding: utf-8 -*- | ||
| 410 | |||
| 411 | import time | ||
| 412 | import bottle | ||
| 413 | import json | ||
| 414 | import datetime | ||
| 415 | import random | ||
| 416 | import dataset | ||
| 417 | |||
| 418 | # initializing bottle app | ||
| 419 | app = bottle.Bottle() | ||
| 420 | |||
| 421 | # adds root directory as template folder | ||
| 422 | bottle.TEMPLATE_PATH.insert(0, "./") | ||
| 423 | |||
| 424 | # connects to sqlite database | ||
| 425 | # check_same_thread=False allows using it in multi-threaded mode | ||
| 426 | app.config["db"] = dataset.connect("sqlite:///data.db?check_same_thread=False") | ||
| 427 | |||
| 428 | # api key that will be used in Arduino code | ||
| 429 | app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH" | ||
| 430 | |||
| 431 | # triggered when / is accessed from browser | ||
| 432 | # only accepts GET → no POST allowed | ||
| 433 | @app.route("/", method=["GET"]) | ||
| 434 | def route_default(): | ||
| 435 | return bottle.template("frontend.html") | ||
| 436 | |||
| 437 | # triggered when /api is accessed from browser | ||
| 438 | # accepts POST and GET | ||
| 439 | @app.route("/api", method=["GET", "POST"]) | ||
| 440 | def route_api(): | ||
| 441 | |||
| 442 | # if method is POST then we write datapoint | ||
| 443 | if bottle.request.method == "POST": | ||
| 444 | status = 400 | ||
| 445 | ts = int(time.time()) # current timestamp | ||
| 446 | value = bottle.request.body.read() # data from device | ||
| 447 | api_key = bottle.request.get_header("Api-Key") # api key from header | ||
| 448 | |||
| 449 | # outputs received data to console for debugging | ||
| 450 | print ">>> {} :: {}".format(value, api_key) | ||
| 451 | |||
| 452 | # if api_key is correct and value is present | ||
| 453 | # then writes attribute to point table | ||
| 454 | if api_key == app.config["api_key"] and value: | ||
| 455 | app.config["db"]["point"].insert(dict(ts=ts, value=value)) | ||
| 456 | status = 200 | ||
| 457 | |||
| 458 | # we only need to return status | ||
| 459 | return bottle.HTTPResponse(status=status, body="") | ||
| 460 | |||
| 461 | # if method is GET then we read datapoint | ||
| 462 | else: | ||
| 463 | response = [] | ||
| 464 | datapoints = app.config["db"]["point"].all() | ||
| 465 | |||
| 466 | for point in datapoints: | ||
| 467 | response.append({ | ||
| 468 | "date": datetime.datetime.fromtimestamp(int(point["ts"])).strftime("%Y-%m-%d %H:%M:%S"), | ||
| 469 | "value": point["value"] | ||
| 470 | }) | ||
| 471 | |||
| 472 | bottle.response.content_type = "application/json" | ||
| 473 | return json.dumps(response) | ||
| 474 | |||
| 475 | # starting server on http://0.0.0.0:5000 | ||
| 476 | if __name__ == "__main__": | ||
| 477 | bottle.run( | ||
| 478 | app = app, | ||
| 479 | host = "0.0.0.0", | ||
| 480 | port = 5000, | ||
| 481 | debug = True, | ||
| 482 | reloader = True, | ||
| 483 | catchall = True, | ||
| 484 | ) | ||
| 485 | ``` | ||
| 486 | |||
| 487 | And now, finally, we can implement ```frontend.html```. Create a file with this | ||
| 488 | name and copy the code below. When you are done you can start the web | ||
| 489 | application. The steps for this part are listed below the code. | ||
| 490 | |||
| 491 | ```html | ||
| 492 | <!DOCTYPE html> | ||
| 493 | <html> | ||
| 494 | |||
| 495 | <head> | ||
| 496 | <meta charset="utf-8"> | ||
| 497 | <title>Simple IOT application</title> | ||
| 498 | </head> | ||
| 499 | |||
| 500 | <body> | ||
| 501 | |||
| 502 | <h1>Simple IOT application</h1> | ||
| 503 | |||
| 504 | <div class="chart-placeholder"> | ||
| 505 | <div id="chart"></div> | ||
| 506 | </div> | ||
| 507 | |||
| 508 | <!-- application main script --> | ||
| 509 | <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script> | ||
| 510 | <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.0/d3.min.js"></script> | ||
| 511 | <script src="https://cdnjs.cloudflare.com/ajax/libs/metrics-graphics/2.11.0/metricsgraphics.min.js"></script> | ||
| 512 | <script> | ||
| 513 | function fetch_and_render() { | ||
| 514 | d3.json("/api", function(data) { | ||
| 515 | data = MG.convert.date(data, "date", "%Y-%m-%d %H:%M:%S"); | ||
| 516 | MG.data_graphic({ | ||
| 517 | data: data, | ||
| 518 | chart_type: "line", | ||
| 519 | full_width: true, | ||
| 520 | height: 270, | ||
| 521 | target: document.getElementById("chart"), | ||
| 522 | x_accessor: "date", | ||
| 523 | y_accessor: "value" | ||
| 524 | }); | ||
| 525 | }); | ||
| 526 | } | ||
| 527 | window.onload = function() { | ||
| 528 | // initial call for rendering | ||
| 529 | fetch_and_render(); | ||
| 530 | |||
| 531 | // updates chart every 5 seconds | ||
| 532 | setInterval(function() { | ||
| 533 | fetch_and_render(); | ||
| 534 | }, 5000); | ||
| 535 | } | ||
| 536 | </script> | ||
| 537 | |||
| 538 | <!-- application styles --> | ||
| 539 | <style> | ||
| 540 | body { | ||
| 541 | font: 13px sans-serif; | ||
| 542 | padding: 20px 50px; | ||
| 543 | } | ||
| 544 | .chart-placeholder { | ||
| 545 | border: 2px solid #ccc; | ||
| 546 | width: 100%; | ||
| 547 | user-select: none; | ||
| 548 | } | ||
| 549 | /* chart styles */ | ||
| 550 | .mg-line1-color { | ||
| 551 | stroke: red; | ||
| 552 | stroke-width: 2; | ||
| 553 | } | ||
| 554 | .mg-main-area, .mg-main-line { | ||
| 555 | fill: #fff; | ||
| 556 | } | ||
| 557 | .mg-x-axis line, .mg-y-axis line { | ||
| 558 | stroke: #b3b2b2; | ||
| 559 | stroke-width: 1px; | ||
| 560 | } | ||
| 561 | </style> | ||
| 562 | |||
| 563 | </body> | ||
| 564 | |||
| 565 | </html> | ||
| 566 | ``` | ||
| 567 | |||
| 568 | Now the folder structure should look like: | ||
| 569 | |||
| 570 | _simple-iot-app/_ | ||
| 571 | |||
| 572 | * _webapp.py_ | ||
| 573 | * _data.db_ | ||
| 574 | * _frontend.html_ | ||
| 575 | |||
| 576 | OK, let's now start the application and start feeding it data. | ||
| 577 | |||
| 578 | 1. ```python webapp.py``` | ||
| 579 | 2. connect Arduino MKR1000 to power source | ||
| 580 | 3. open browser and go to ```http://0.0.0.0:5000``` | ||
| 581 | |||
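If you don't have the board at hand, you can simulate a device reading with Python's standard library. This sketch only builds the same request the Arduino sends (raw value in the body, the key in an ```Api-Key``` header); sending it assumes ```webapp.py``` is already running locally:

```python
import urllib.request

API_KEY = "JtF2aUE5SGHfVJBCG5SH"  # same key as in webapp.py above

def build_reading(value, url="http://0.0.0.0:5000/api"):
    # builds the same POST request the Arduino sends:
    # raw value in the body, the key in an Api-Key header
    return urllib.request.Request(
        url,
        data=str(value).encode("ascii"),
        headers={"Api-Key": API_KEY},
        method="POST",
    )

req = build_reading(42)
print(req.get_method(), req.full_url, req.data)
# to actually send it (webapp.py must be running):
# urllib.request.urlopen(req)
```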
| 582 | If everything goes well you should see new data-points rendered on the chart | ||
| 583 | every 5 seconds. | ||
| 584 | |||
| 585 | If you navigate to ```http://0.0.0.0:5000``` you should see the rendered chart | ||
| 586 | as shown in the picture below. | ||
| 587 | |||
| 588 |  | ||
| 589 | |||
| 590 | Complete application with all the code is available for | ||
| 591 | [download](/assets/iot-application/simple-iot-application.zip). | ||
| 592 | |||
| 593 | ## Conclusion | ||
| 594 | |||
| 595 | I hope this clarifies some aspects of IOT application development. Of course | ||
| 596 | this is a minimal example and far from what can be done in real life by diving | ||
| 597 | further into other technologies. | ||
| 598 | |||
| 599 | If you would like to continue exploring the IOT world, here are some | ||
| 600 | interesting resources for you to examine: | ||
| 601 | |||
| 602 | * [Reading Sensors with an Arduino](https://www.allaboutcircuits.com/projects/reading-sensors-with-an-arduino/) | ||
| 603 | * [MQTT 101 – How to Get Started with the lightweight IoT Protocol](http://www.hivemq.com/blog/how-to-get-started-with-mqtt) | ||
| 604 | * [Stream Updates with Server-Sent Events](https://www.html5rocks.com/en/tutorials/eventsource/basics/) | ||
| 605 | * [Internet of Things (IoT) Tutorials](http://www.tutorialspoint.com/internet_of_things/) | ||
| 606 | |||
| 607 | Any comments or additional ideas are welcome in the comments below. | ||
diff --git a/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md b/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md new file mode 100644 index 0000000..d2fa558 --- /dev/null +++ b/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md | |||
| @@ -0,0 +1,331 @@ | |||
| 1 | --- | ||
| 2 | title: Using DigitalOcean Spaces Object Storage with FUSE | ||
| 3 | url: using-digitalocean-spaces-object-storage-with-fuse.html | ||
| 4 | date: 2018-01-16T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | A couple of months ago [DigitalOcean](https://www.digitalocean.com) introduced a | ||
| 10 | new product called | ||
| 11 | [Spaces](https://blog.digitalocean.com/introducing-spaces-object-storage/), an | ||
| 12 | object storage service very similar to Amazon's S3. This really piqued my | ||
| 13 | interest, because it was something I had been missing, and the thought of going | ||
| 14 | outside DigitalOcean for such functionality did not appeal to me. In line with | ||
| 15 | their previous pricing this is also very cheap, and the pricing page is a | ||
| 16 | no-brainer compared to AWS or GCE. [Prices are clearly and precisely defined and | ||
| 17 | outlined](https://www.digitalocean.com/pricing/). You must love them for that | ||
| 18 | :) | ||
| 19 | |||
| 20 | ## Initial requirements | ||
| 21 | |||
| 22 | * Is it possible to use them as a mounted drive with FUSE? (tl;dr YES) | ||
| 23 | * Will the performance degrade over time and over different sizes of objects? | ||
| 24 | (tl;dr NO&YES) | ||
| 25 | * Can storage be mounted on multiple machines at the same time and be writable? | ||
| 26 | (tl;dr YES) | ||
| 27 | |||
| 28 | > Let me be clear: the scripts I use here are made just for benchmarking and | ||
| 29 | > are not intended for real-life use. Besides that, I am looking into using | ||
| 30 | > these approaches with a caching service in front, dumping everything as | ||
| 31 | > objects to storage afterwards. That could potentially be an interesting post | ||
| 32 | > of its own. But if you need real-time data without eventual consistency, | ||
| 33 | > please take these scripts as they are: not usable in such situations. | ||
| 35 | |||
| 36 | ## Is it possible to use them as a mounted drive with FUSE? | ||
| 37 | |||
| 38 | Well, actually they can be used in such a manner. Because they are similar to | ||
| 39 | [AWS S3](https://aws.amazon.com/s3/), many tools are available and you can find | ||
| 40 | many articles and [Stackoverflow items](https://stackoverflow.com/search?q=s3+fuse). | ||
| 41 | |||
| 42 | To make this work you will need a DigitalOcean account. If you don't have one | ||
| 43 | you will not be able to test this code. If you do have an account, go and | ||
| 44 | [create a new | ||
| 45 | Droplet](https://cloud.digitalocean.com/droplets/new?size=s-1vcpu-1gb&region=ams3&distro=debian&distroImage=debian-9-x64&options=private_networking,install_agent). | ||
| 46 | If you click on this link you will already have Debian 9 preselected with the | ||
| 47 | smallest VM option. | ||
| 48 | |||
| 49 | * Please be sure to add your SSH key, because we will log in to this machine | ||
| 50 | remotely. | ||
| 51 | * If you change your region, remember which one you chose, because we will | ||
| 52 | need this information when we mount the Space on our machine. | ||
| 53 | |||
| 54 | Instructions on how to set up and use SSH keys are available in the | ||
| 55 | article [How To Use SSH Keys with DigitalOcean | ||
| 56 | Droplets](https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets). | ||
| 57 | |||
| 58 |  | ||
| 59 | |||
| 60 | After we have created the Droplet it's time to create a new Space. This is done | ||
| 61 | by clicking the [Create](https://cloud.digitalocean.com/spaces/new) button (top | ||
| 62 | right corner) and selecting Spaces. Choose a pronounceable ```Unique name``` | ||
| 63 | because we will use it in the examples below. You can choose either Private or | ||
| 64 | Public; it doesn't matter in our case, and you can always change it later. | ||
| 65 | |||
| 66 | Once you have created the new Space we should [generate an Access | ||
| 67 | key](https://cloud.digitalocean.com/settings/api/tokens). This link leads to | ||
| 68 | the page where you can generate the key. After you create a new one, please | ||
| 69 | save the provided Key and Secret, because the Secret will not be shown again. | ||
| 70 | |||
| 71 |  | ||
| 72 | |||
| 73 | Now that we have new Space and Access key we should SSH into our machine. | ||
| 74 | |||
| 75 | ```bash | ||
| 76 | # replace IP with the ip of your newly created droplet | ||
| 77 | ssh root@IP | ||
| 78 | |||
| 79 | # this will install utilities for mounting storage objects as FUSE | ||
| 80 | apt install s3fs | ||
| 81 | |||
| 82 | # we now need to provide credentials (access key we created earlier) | ||
| 83 | # replace KEY and SECRET with your own credentials but leave the colon between them | ||
| 84 | # we also need to set proper permissions | ||
| 85 | echo "KEY:SECRET" > .passwd-s3fs | ||
| 86 | chmod 600 .passwd-s3fs | ||
| 87 | |||
| 88 | # now we mount space to our machine | ||
| 89 | # replace UNIQUE-NAME with the name you choose earlier | ||
| 90 | # if you choose different region for your space be careful about -ourl option (ams3) | ||
| 91 | s3fs UNIQUE-NAME /mnt/ -ourl=https://ams3.digitaloceanspaces.com -ouse_cache=/tmp | ||
| 92 | |||
| 93 | # now we try to create a file | ||
| 94 | # once you mount it may take a couple of seconds to retrieve data | ||
| 95 | echo "Hello cruel world" > /mnt/hello.txt | ||
| 96 | ``` | ||
| 97 | |||
| 98 | After all this you can return to your browser, go to [DigitalOcean | ||
| 99 | Spaces](https://cloud.digitalocean.com/spaces) and click on the Space you | ||
| 100 | created. If the file hello.txt is present you have successfully mounted the | ||
| 101 | Space on your machine and written data to it. | ||
| 102 | |||
| 103 | I chose the same region for my Droplet and my Space, but you don't have to; | ||
| 104 | they can be in different regions. What that does to performance I don't know. | ||
| 105 | |||
| 106 | Additional information on FUSE: | ||
| 107 | |||
| 108 | * [Github project page for s3fs](https://github.com/s3fs-fuse/s3fs-fuse) | ||
| 109 | * [FUSE - Filesystem in Userspace](https://en.wikipedia.org/wiki/Filesystem_in_Userspace) | ||
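If you want the mount to survive a reboot, s3fs can also be mounted from ```/etc/fstab```. A sketch of such an entry, assuming the same Space name and region as above and the credentials file moved to ```/root/.passwd-s3fs``` (adjust both to your setup):

```
UNIQUE-NAME /mnt fuse.s3fs _netdev,passwd_file=/root/.passwd-s3fs,url=https://ams3.digitaloceanspaces.com,use_cache=/tmp 0 0
```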
| 110 | |||
| 111 | ## Will the performance degrade over time and over different sizes of objects? | ||
| 112 | |||
| 113 | For this task I didn't want to just read and write text files or upload | ||
| 114 | images. I actually wanted to figure out whether using something like SQLite is | ||
| 115 | viable in this case. | ||
| 116 | |||
| 117 | ### Measurement experiment 1: File copy | ||
| 118 | |||
| 119 | ```bash | ||
| 120 | # first we create some dummy files at different sizes | ||
| 121 | dd if=/dev/zero of=10KB.dat bs=1024 count=10 #10KB | ||
| 122 | dd if=/dev/zero of=100KB.dat bs=1024 count=100 #100KB | ||
| 123 | dd if=/dev/zero of=1MB.dat bs=1024 count=1024 #1MB | ||
| 124 | dd if=/dev/zero of=10MB.dat bs=1024 count=10240 #10MB | ||
| 125 | |||
| 126 | # now we set time command to only return real | ||
| 127 | TIMEFORMAT=%R | ||
| 128 | |||
| 129 | # now lets test it | ||
| 130 | (time cp 10KB.dat /mnt/) |& tee -a 10KB.results.txt | ||
| 131 | |||
| 132 | # and now we automate | ||
| 133 | # this will perform the same operation 100 times | ||
| 134 | # this will output results into separate files based on object size | ||
| 135 | n=0; while (( n++ < 100 )); do (time cp 10KB.dat /mnt/10KB.$n.dat) |& tee -a 10KB.results.txt; done | ||
| 136 | n=0; while (( n++ < 100 )); do (time cp 100KB.dat /mnt/100KB.$n.dat) |& tee -a 100KB.results.txt; done | ||
| 137 | n=0; while (( n++ < 100 )); do (time cp 1MB.dat /mnt/1MB.$n.dat) |& tee -a 1MB.results.txt; done | ||
| 138 | n=0; while (( n++ < 100 )); do (time cp 10MB.dat /mnt/10MB.$n.dat) |& tee -a 10MB.results.txt; done | ||
| 139 | ``` | ||
| 140 | |||
| 141 | Files of size 100MB were not transferred successfully and ended with an | ||
| 142 | error (cp: failed to close '/mnt/100MB.1.dat': Operation not permitted). | ||
| 143 | |||
| 144 | As I suspected, object size is not really that important. Sadly I don't have | ||
| 145 | the time to test performance over longer periods of time. But if any of you do, | ||
| 146 | please send me your data; I would be interested in seeing the results. | ||
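If you rerun these benchmarks, the result files can be summarized with a short Python sketch (my assumption: one ```time``` reading per line, with either a dot or a comma as the decimal separator, as in my raw results):

```python
import statistics

def summarize(path):
    # parses one timing per line, tolerating a comma decimal separator
    with open(path) as fp:
        times = [float(line.strip().replace(",", "."))
                 for line in fp if line.strip()]
    return {
        "count": len(times),
        "min": min(times),
        "mean": statistics.mean(times),
        "max": max(times),
    }

# example: summarize("10KB.results.txt")
```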
| 147 | |||
| 148 | **Here are plotted results** | ||
| 149 | |||
| 150 | You can download [raw result here](/assets/do-fuse/copy-benchmarks.tsv). | ||
| 151 | Measurements are in seconds. | ||
| 152 | |||
| 153 | <script src="//cdn.plot.ly/plotly-latest.min.js"></script> | ||
| 154 | <div id="copy-benchmarks"></div> | ||
| 155 | <script> | ||
| 156 | (function(){ | ||
| 157 | var request = new XMLHttpRequest(); | ||
| 158 | request.open("GET", "/assets/do-fuse/copy-benchmarks.tsv", true); | ||
| 159 | request.onload = function() { | ||
| 160 | if (request.status >= 200 && request.status < 400) { | ||
| 161 | var payload = request.responseText.trim(); | ||
| 162 | var tsv = payload.split("\n"); | ||
| 163 | for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); } | ||
| 164 | var traces = []; | ||
| 165 | var headers = tsv[0]; | ||
| 166 | tsv.shift(); | ||
| 167 | Array.prototype.forEach.call(headers, function(el, idx) { | ||
| 168 | var x = []; | ||
| 169 | var y = []; | ||
| 170 | for (var j=0; j<tsv.length; j++) { | ||
| 171 | x.push(j); | ||
| 172 | y.push(parseFloat(tsv[j][idx].replace(",", "."))); | ||
| 173 | } | ||
| 174 | traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } }); | ||
| 175 | }); | ||
| 176 | var copy = Plotly.newPlot("copy-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 40, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } }, xaxis: { title: "fn(i)", titlefont: { size: 12 } } }); | ||
| 177 | } else { } | ||
| 178 | }; | ||
| 179 | request.onerror = function() { }; | ||
| 180 | request.send(null); | ||
| 181 | })(); | ||
| 182 | </script> | ||
| 183 | |||
| 184 | As far as these tests show, performance is quite stable and predictable, | ||
| 185 | which is fantastic. But this is a small test spanning only a couple of | ||
| 186 | hours, so you should not trust it blindly. | ||
| 187 | |||
| 188 | ### Measurement experiment 2: SQLite performance | ||
| 189 | |||
| 190 | I was unable to use a database file directly from the mounted drive, so this is | ||
| 191 | a no-go, as I suspected. Instead I executed the code below on a local disk just | ||
| 192 | to get some benchmarks. It repeats DROPTABLE, CREATETABLE, INSERTMANY (1000 | ||
| 193 | records), FETCHALL and COMMIT 1000 times to generate statistics. As you can | ||
| 194 | see, the performance of SQLite is quite amazing. You could then potentially | ||
| 195 | just copy the file to the mounted drive and be done with it. | ||
| 196 | |||
| 197 | ```python | ||
| 198 | import time | ||
| 199 | import sqlite3 | ||
| 200 | import sys | ||
| 201 | |||
| 202 | if len(sys.argv) < 4: | ||
| 203 | print("usage: python sqlite-benchmark.py DB_PATH NUM_RECORDS REPEAT") | ||
| 204 | exit() | ||
| 205 | |||
| 206 | def data_iter(x): | ||
| 207 | for i in range(x): | ||
| 208 | yield "m" + str(i), "f" + str(i*i) | ||
| 209 | |||
| 210 | header_line = "%s\t%s\t%s\t%s\t%s\n" % ("DROPTABLE", "CREATETABLE", "INSERTMANY", "FETCHALL", "COMMIT") | ||
| 211 | with open("sqlite-benchmarks.tsv", "w") as fp: | ||
| 212 | fp.write(header_line) | ||
| 213 | |||
| 214 | start_time = time.time() | ||
| 215 | conn = sqlite3.connect(sys.argv[1]) | ||
| 216 | c = conn.cursor() | ||
| 217 | end_time = time.time() | ||
| 218 | result_time = CONNECT = end_time - start_time | ||
| 219 | print("CONNECT: %g seconds" % (result_time)) | ||
| 220 | |||
| 221 | start_time = time.time() | ||
| 222 | c.execute("PRAGMA journal_mode=WAL") | ||
| 223 | c.execute("PRAGMA temp_store=MEMORY") | ||
| 224 | c.execute("PRAGMA synchronous=OFF") | ||
| 225 | end_time = time.time() | ||
| 226 | result_time = PRAGMA = end_time - start_time | ||
| 227 | print("PRAGMA: %g seconds" % (result_time)) | ||
| 227 | |||
| 228 | for i in range(int(sys.argv[3])): | ||
| 229 | print("#%i" % (i)) | ||
| 230 | |||
| 231 | start_time = time.time() | ||
| 232 | c.execute("drop table if exists test") | ||
| 233 | end_time = time.time() | ||
| 234 | result_time = DROPTABLE = end_time - start_time | ||
| 235 | print("DROPTABLE: %g seconds" % (result_time)) | ||
| 236 | |||
| 237 | start_time = time.time() | ||
| 238 | c.execute("create table if not exists test(a,b)") | ||
| 239 | end_time = time.time() | ||
| 240 | result_time = CREATETABLE = end_time - start_time | ||
| 241 | print("CREATETABLE: %g seconds" % (result_time)) | ||
| 242 | |||
| 243 | start_time = time.time() | ||
| 244 | c.executemany("INSERT INTO test VALUES (?, ?)", data_iter(int(sys.argv[2]))) | ||
| 245 | end_time = time.time() | ||
| 246 | result_time = INSERTMANY = end_time - start_time | ||
| 247 | print("INSERTMANY: %g seconds" % (result_time)) | ||
| 248 | |||
| 249 | start_time = time.time() | ||
| 250 | c.execute("select count(*) from test") | ||
| 251 | res = c.fetchall() | ||
| 252 | end_time = time.time() | ||
| 253 | result_time = FETCHALL = end_time - start_time | ||
| 254 | print("FETCHALL: %g seconds" % (result_time)) | ||
| 255 | |||
| 256 | start_time = time.time() | ||
| 257 | conn.commit() | ||
| 258 | end_time = time.time() | ||
| 259 | result_time = COMMIT = end_time - start_time | ||
| 260 | print("COMMIT: %g seconds" % (result_time)) | ||
| 261 | |||
| 262 | |||
| 263 | log_line = "%f\t%f\t%f\t%f\t%f\n" % (DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT) | ||
| 264 | with open("sqlite-benchmarks.tsv", "a") as fp: | ||
| 265 | fp.write(log_line) | ||
| 266 | |||
| 267 | start_time = time.time() | ||
| 268 | conn.close() | ||
| 269 | end_time = time.time() | ||
| 270 | result_time = CLOSE = end_time - start_time | ||
| 271 | print("CLOSE: %g seconds" % (result_time)) | ||
| 272 | ``` | ||
| 273 | |||
| 274 | You can download the [raw results here](/assets/do-fuse/sqlite-benchmarks.tsv). | ||
| 275 | And again, these results were produced on local block storage and do not | ||
| 276 | represent the capabilities of object storage. With my current approach and the | ||
| 277 | state of the test code that cannot be measured; I would need to make the Python | ||
| 278 | code much more robust, check locking, etc. | ||
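If you do end up copying a live SQLite database to the mounted drive, a safer alternative to a plain ```cp``` is SQLite's online backup API (exposed as ```Connection.backup``` in Python 3.7+), which produces a consistent snapshot even while another connection is writing. A sketch:

```python
import sqlite3

def snapshot(src_path, dest_path):
    # copies the database page by page through the online backup API,
    # so the destination file is always a consistent database
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dest_path)
    with dst:
        src.backup(dst)
    dst.close()
    src.close()

# example: snapshot("data.db", "/mnt/data.db")
```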
| 279 | |||
| 280 | <div id="sqlite-benchmarks"></div> | ||
| 281 | <script> | ||
| 282 | (function(){ | ||
| 283 | var request = new XMLHttpRequest(); | ||
| 284 | request.open("GET", "/assets/do-fuse/sqlite-benchmarks.tsv", true); | ||
| 285 | request.onload = function() { | ||
| 286 | if (request.status >= 200 && request.status < 400) { | ||
| 287 | var payload = request.responseText.trim(); | ||
| 288 | var tsv = payload.split("\n"); | ||
| 289 | for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); } | ||
| 290 | var traces = []; | ||
| 291 | var headers = tsv[0]; | ||
| 292 | tsv.shift(); | ||
| 293 | Array.prototype.forEach.call(headers, function(el, idx) { | ||
| 294 | var x = []; | ||
| 295 | var y = []; | ||
| 296 | for (var j=0; j<tsv.length; j++) { | ||
| 297 | x.push(j); | ||
| 298 | y.push(parseFloat(tsv[j][idx].replace(",", "."))); | ||
| 299 | } | ||
| 300 | traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } }); | ||
| 301 | }); | ||
| 302 | var sqlite = Plotly.newPlot("sqlite-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 50, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } } }); | ||
| 303 | } else { } | ||
| 304 | }; | ||
| 305 | request.onerror = function() { }; | ||
| 306 | request.send(null); | ||
| 307 | })(); | ||
| 308 | </script> | ||
| 309 | |||
| 310 | ## Can storage be mounted on multiple machines at the same time and be writable? | ||
| 311 | |||
| 312 | Well, this one didn't take long to test. And the answer is **YES**. I mounted | ||
| 313 | the Space on both machines and measured the same performance on both. But | ||
| 314 | because a file is downloaded before a write and then uploaded when the write | ||
| 315 | completes, there could potentially be problems if another process tries to | ||
| 316 | access the same file at the same time. | ||
| 317 | |||
| 318 | ## Observations and conclusion | ||
| 319 | |||
| 320 | Using Spaces in this way makes it easier to access and manage files. But | ||
| 321 | beyond that you would need to write additional code to make it play nice with | ||
| 322 | your applications. | ||
| 323 | |||
| 324 | Nevertheless, this was extremely simple to set up and use, and it is just | ||
| 325 | another excellent product in the DigitalOcean line-up. I found this exercise | ||
| 326 | very valuable and am thinking about implementing some sort of mechanism for | ||
| 327 | SQLite, so data can be stored on Spaces and accessed by many VMs. For a project | ||
| 328 | where data doesn't need to be accessible in real time and can be a couple of | ||
| 329 | minutes old, this would be very interesting. If any of you find this | ||
| 330 | proposal interesting, please write in the comment box below or shoot me an | ||
| 331 | email and I will keep you posted. | ||
diff --git a/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md b/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md new file mode 100644 index 0000000..b285756 --- /dev/null +++ b/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md | |||
| @@ -0,0 +1,411 @@ | |||
| 1 | --- | ||
| 2 | title: Encoding binary data into DNA sequence | ||
| 3 | url: encoding-binary-data-into-dna-sequence.html | ||
| 4 | date: 2019-01-03T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## Initial thoughts | ||
| 10 | |||
| 11 | Imagine a world where you could go outside, take a leaf from a tree, put it | ||
| 12 | through your personal DNA sequencer and get data like music, videos or | ||
| 13 | computer programs from it. Well, this is all possible now. It has not been done | ||
| 14 | on a large scale because creating DNA strands is still quite expensive, but it | ||
| 15 | is possible. | ||
| 16 | |||
| 17 | Encoding data into a DNA sequence is a relatively simple process once you | ||
| 18 | understand the relationship between binary data and nucleotides, and scientists | ||
| 19 | have been making large leaps in this field in order to provide a viable | ||
| 20 | long-term storage solution for our data, one that could potentially outlive our | ||
| 21 | species in case of a global disaster. We could imprint all the world's | ||
| 22 | knowledge into plants and ensure the survival of our knowledge. | ||
| 23 | |||
| 24 | A more optimistic use for this technology would be easier storage of the | ||
| 25 | ever-growing data we produce every day. Once machines for sequencing DNA become | ||
| 26 | fast and cheap enough, this could mean the next evolution of storing data, | ||
| 27 | abandoning classical hard and solid state drives in data warehouses. | ||
| 28 | |||
| 29 | As things currently stand this is still not viable, but it is quite an amazing | ||
| 30 | and cool technology. | ||
| 31 | |||
| 32 | My interest in this field is purely in the encoding process and experimental | ||
| 33 | testing, mainly because I don't have access to these expensive machines. My | ||
| 34 | initial goal was to create a toolkit that anybody can use to encode | ||
| 35 | their data into a proper DNA sequence. | ||
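To make the relationship between bits and bases concrete before we dive in: the simplest possible scheme maps every pair of bits to one of the four bases. The mapping below (00→A, 01→C, 10→G, 11→T) is just an illustrative convention for this sketch; practical schemes add error correction and avoid long runs of the same base:

```python
# illustrative 2-bit mapping; real encoding schemes add error correction
# and avoid homopolymer runs that sequencers misread
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def encode(data: bytes) -> str:
    # turns every byte into 8 bits, then every 2 bits into one base
    bits = "".join(format(byte, "08b") for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

print(encode(b"Hi"))  # 'H' = 01001000 → CAGA, 'i' = 01101001 → CGGC
```

Each byte becomes exactly four bases, so a kilobyte of data is a 4,096-nucleotide strand.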
| 36 | |||
| 37 | ## Glossary | ||
| 38 | |||
| 39 | **deoxyribose** A five-carbon sugar molecule with a hydrogen atom rather than a | ||
| 40 | hydroxyl group in the 2′ position; the sugar component of DNA nucleotides. | ||
| 41 | |||
| 42 | **double helix** The molecular shape of DNA in which two strands of nucleotides | ||
| 43 | wind around each other in a spiral shape. | ||
| 44 | |||
| 45 | **nitrogenous base** A nitrogen-containing molecule that acts as a base; often | ||
| 46 | referring to one of the purine or pyrimidine components of nucleic acids. | ||
| 47 | |||
| 48 | **phosphate group** A molecular group consisting of a central phosphorus atom | ||
| 49 | bound to four oxygen atoms. | ||
| 50 | |||
| 51 | **RGB** The RGB color model is an additive color model in which red, green and | ||
| 52 | blue light are added together in various ways to reproduce a broad array of | ||
| 53 | colors. | ||
| 54 | |||
| 55 | **GCC** The GNU Compiler Collection is a compiler system produced by the GNU | ||
| 56 | Project supporting various programming languages. | ||
| 57 | |||
| 58 | ## Data encoding | ||
| 59 | |||
| 60 | **TL;DR:** Encoding involves the use of a code to change original data into a | ||
| 61 | form that can be used by an external process. | ||
| 62 | |||
| 63 | Encoding is the process of converting data into a format required for a number | ||
| 64 | of information processing needs, including: | ||
| 65 | |||
| 66 | - Program compiling and execution | ||
| 67 | - Data transmission, storage and compression/decompression | ||
| 68 | - Application data processing, such as file conversion | ||
| 69 | |||
| 70 | Encoding can have two meanings: | ||
| 71 | |||
| 72 | - In computer technology, encoding is the process of applying a specific code, | ||
| 73 | such as letters, symbols and numbers, to data for conversion into an | ||
| 74 | equivalent cipher. | ||
| 75 | - In electronics, encoding refers to analog to digital conversion. | ||
| 76 | |||
| 77 | ## Quick history of DNA | ||
| 78 | |||
- **1869** - Friedrich Miescher identifies "nuclein".
- **1900s** - The Eugenics Movement.
- **1900** - Mendel's theories are rediscovered by researchers.
- **1944** - Oswald Avery identifies DNA as the 'transforming principle'.
- **1952** - Rosalind Franklin photographs crystallized DNA fibres.
- **1953** - James Watson and Francis Crick discover the double helix structure of DNA.
- **1965** - Marshall Nirenberg is the first person to sequence the bases in each codon.
- **1983** - Huntington's disease is the first genetic disease to be mapped.
- **1990** - The Human Genome Project begins.
- **1995** - *Haemophilus influenzae* is the first bacterium to have its genome sequenced.
- **1996** - Dolly the sheep is cloned.
- **1999** - The first human chromosome is decoded.
- **2000** - The genetic code of the fruit fly is decoded.
- **2002** - The mouse is the first mammal to have its genome decoded.
- **2003** - The Human Genome Project is completed.
- **2013** - DNA Worldwide and Eurofins Forensic discover that identical twins have differences in their genetic makeup.
| 95 | |||
| 96 | ## What is DNA? | ||
| 97 | |||
Deoxyribonucleic acid (DNA) is a self-replicating material that is **present in
nearly all living organisms** as the main constituent of chromosomes. It is the
**carrier of genetic information**.
| 101 | |||
| 102 | > The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, | ||
| 103 | > the carbon in our apple pies were made in the interiors of collapsing stars. | ||
| 104 | > We are made of starstuff. | ||
| 105 | > **-- Carl Sagan, Cosmos** | ||
| 106 | |||
| 107 | The nucleotide in DNA consists of a sugar (deoxyribose), one of four bases | ||
| 108 | (cytosine (C), thymine (T), adenine (A), guanine (G)), and a phosphate. | ||
| 109 | Cytosine and thymine are pyrimidine bases, while adenine and guanine are purine | ||
| 110 | bases. The sugar and the base together are called a nucleoside. | ||
| 111 | |||
| 112 |  | ||
| 113 | |||
| 114 | *DNA (a) forms a double stranded helix, and (b) adenine pairs with thymine and | ||
| 115 | cytosine pairs with guanine. (credit a: modification of work by Jerome Walker, | ||
| 116 | Dennis Myts)* | ||
| 117 | |||
| 118 | ## Encode binary data into DNA sequence | ||
| 119 | |||
| 120 | As an input file you can use any file you want: | ||
| 121 | |||
| 122 | - ASCII files, | ||
| 123 | - Compiled programs, | ||
- Multimedia files (MP3, MP4, MKV, etc.),
| 125 | - Images, | ||
| 126 | - Database files, | ||
| 127 | - etc. | ||
| 128 | |||
Note: You could also encode bytes copied from RAM, or data piped into a file,
as long as you provide a file pointer to the encoder.
| 131 | |||
| 132 | ### Basic Encoding | ||
| 133 | |||
As already mentioned, Basic Encoding is based on a simple mapping. DNA is
composed of 4 nucleotides (Adenine, Cytosine, Guanine, Thymine), usually
referred to by their first letters. Using this technique we can encode

$$ \log_2(4) = \log_2(2^2) = 2 \text{ bits} $$

with a single nucleotide. In this way, we are able to use the 4 bases that
compose the DNA strand to encode each byte of data as four nucleotides.
| 142 | |||
| Two bits | Nucleotides      |
| -------- | ---------------- |
| 00       | **A** (Adenine)  |
| 01       | **G** (Guanine)  |
| 10       | **C** (Cytosine) |
| 11       | **T** (Thymine)  |
| 149 | |||
| 150 | With this in mind we can simply encode any data by using two-bit to Nucleotides | ||
| 151 | conversion. | ||
| 152 | |||
```python
# Algorithm 1: Naive byte array to DNA encode
NUCLEOTIDES = {'00': 'A', '01': 'G', '10': 'C', '11': 'T'}

def encode_to_dna_sequence(f):
    enc = ''
    while True:
        chunk = f.read(1)               # read 1 byte from the stream
        if not chunk:                   # end of file
            break
        bits = format(chunk[0], '08b')  # convert byte to a binary string
        for e in range(0, 8, 2):        # walk the string two bits at a time
            enc += NUCLEOTIDES[bits[e:e + 2]]
    return enc                          # return the DNA sequence
```
| 173 | |||
Another encoding is **Goldman encoding**. It helps mitigate nonsense mutations
(an amino acid codon replaced by a stop codon), which are the most problematic
kind during translation because they lead to truncated amino acid sequences
and, in turn, truncated proteins.
| 178 | |||
| 179 | [Where to store big data? In DNA: Nick Goldman at TEDxPrague](https://www.youtube.com/watch?v=a4PiGWNsIEU) | ||
| 180 | |||
| 181 | ### FASTA file format | ||
| 182 | |||
| 183 | In bioinformatics, FASTA format is a text-based format for representing either | ||
| 184 | nucleotide sequences or peptide sequences, in which nucleotides or amino acids | ||
| 185 | are represented using single-letter codes. The format also allows for sequence | ||
| 186 | names and comments to precede the sequences. The format originates from the | ||
| 187 | FASTA software package, but has now become a standard in the field of | ||
| 188 | bioinformatics. | ||
| 189 | |||
Originally, the first line in a FASTA file started either with a ">"
(greater-than) symbol or, less frequently, a ";" (semicolon), and was taken as
a comment. Subsequent lines starting with a semicolon would be ignored by
software. Since the only comment used was the first one, it quickly came to
hold a summary description of the sequence, often starting with a unique
library accession number. With time it has become commonplace to always use ">"
for the first line and to not use ";" comments (which would otherwise be
ignored).
| 197 | |||
| 198 | ``` | ||
| 199 | ;LCBO - Prolactin precursor - Bovine | ||
| 200 | ; a sample sequence in FASTA format | ||
| 201 | MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS | ||
| 202 | EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL | ||
| 203 | VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED | ||
| 204 | ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC* | ||
| 205 | |||
| 206 | >MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken | ||
| 207 | ADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID | ||
| 208 | FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA | ||
| 209 | DIDGDGQVNYEEFVQMMTAK* | ||
| 210 | |||
| 211 | >gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus] | ||
| 212 | LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV | ||
| 213 | EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG | ||
| 214 | LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL | ||
| 215 | GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX | ||
| 216 | IENY | ||
| 217 | ``` | ||
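Based on the description above, a minimal FASTA parser can be sketched in a few lines (a sketch of mine, treating `;` lines as comments and `>` lines as deflines):

```python
def parse_fasta(text):
    """Parse FASTA text into {header: sequence} pairs.
    Lines starting with ';' are treated as comments and ignored."""
    records, header = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(';'):
            continue                  # skip blank lines and comments
        if line.startswith('>'):
            header = line[1:]         # defline without the '>' symbol
            records[header] = ''
        elif header is not None:
            records[header] += line   # a sequence may span many lines
    return records

seqs = parse_fasta(">SEQ1\nGACA\nGCTT\n")
print(seqs)  # → {'SEQ1': 'GACAGCTT'}
```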
| 218 | |||
| 219 | FASTA format was extended by [FASTQ](https://en.wikipedia.org/wiki/FASTQ_format) | ||
| 220 | format from the [Sanger Centre](https://www.sanger.ac.uk/) in Cambridge. | ||
| 221 | |||
| 222 | ### PNG encoded DNA sequence | ||
| 223 | |||
| 224 | | Nucleotides | RGB | Color name | | ||
| 225 | | ------------ | ----------- | ---------- | | ||
| 226 | | A ➞ Adenine | (0,0,255) | Blue | | ||
| 227 | | G ➞ Guanine | (0,100,0) | Green | | ||
| 228 | | C ➞ Cytosine | (255,0,0) | Red | | ||
| 229 | | T ➞ Thymine | (255,255,0) | Yellow | | ||
| 230 | |||
| 231 | With this in mind we can create a simple algorithm to create PNG representation | ||
| 232 | of a DNA sequence. | ||
| 233 | |||
```python
# Algorithm 2: Naive DNA to PNG encode from a FASTA file
from PIL import Image  # pip3 install Pillow

COLORS = {'A': (0, 0, 255),    # Blue
          'G': (0, 100, 0),    # Green
          'C': (255, 0, 0),    # Red
          'T': (255, 255, 0)}  # Yellow

def encode_dna_sequence_to_png(f, out='out.png', width=64):
    # one colored pixel per nucleotide, skipping deflines and comments
    pixels = [COLORS[c] for line in f
              if not line.startswith(('>', ';'))
              for c in line.strip() if c in COLORS]
    height = -(-len(pixels) // width)  # ceiling division
    img = Image.new('RGB', (width, height))
    img.putdata(pixels + [(0, 0, 0)] * (width * height - len(pixels)))
    img.save(out)                      # save PNG image
```
| 250 | |||
| 251 | ## Encoding text file in practice | ||
| 252 | |||
In this example we will take a simple text file as our input stream for
encoding. The file contains a quote from Niels Bohr and is saved as a txt file.
| 255 | |||
| 256 | > How wonderful that we have met with a paradox. Now we have some hope of | ||
| 257 | > making progress. | ||
| 258 | > ― Niels Bohr | ||
| 259 | |||
| 260 | First we encode text file into FASTA file. | ||
| 261 | |||
| 262 | ```bash | ||
| 263 | ./dnae-encode -i quote.txt -o quote.fa | ||
| 264 | 2019/01/10 00:38:29 Gathering input file stats | ||
| 265 | 2019/01/10 00:38:29 Starting encoding ... | ||
| 266 | 106 B / 106 B [==================================] 100.00% 0s | ||
| 267 | 2019/01/10 00:38:29 Saving to FASTA file ... | ||
| 268 | 2019/01/10 00:38:29 Output FASTA file length is 438 B | ||
| 269 | 2019/01/10 00:38:29 Process took 987.263µs | ||
| 270 | 2019/01/10 00:38:29 Done ... | ||
| 271 | ``` | ||
| 272 | |||
| 273 | Output of `quote.fa` file contains the encoded DNA sequence in ASCII format. | ||
| 274 | |||
| 275 | ``` | ||
| 276 | >SEQ1 | ||
| 277 | GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA | ||
| 278 | GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA | ||
| 279 | ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA | ||
| 280 | ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT | ||
| 281 | GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT | ||
| 282 | GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC | ||
| 283 | AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC | ||
| 284 | AACC | ||
| 285 | ``` | ||
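The reported 438 B can be sanity-checked with a quick calculation: each input byte becomes 4 nucleotide characters, plus the `>SEQ1` defline and one newline per row (assuming the 60-character rows visible in the listing):

```python
input_bytes = 106                    # size of quote.txt
nucleotides = input_bytes * 4        # 8 bits per byte / 2 bits per nucleotide
header = len(">SEQ1\n")              # FASTA defline and its newline
rows = -(-nucleotides // 60)         # ceiling division: 60 chars per row
total = header + nucleotides + rows  # sequence body plus one newline per row
print(total)  # → 438, matching the reported FASTA file length
```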
| 286 | |||
Then we take the FASTA file from the previous operation and encode its data into a PNG.
| 288 | |||
| 289 | ```bash | ||
| 290 | ./dnae-png -i quote.fa -o quote.png | ||
| 291 | 2019/01/10 00:40:09 Gathering input file stats ... | ||
| 292 | 2019/01/10 00:40:09 Deconstructing FASTA file ... | ||
| 293 | 2019/01/10 00:40:09 Compositing image file ... | ||
| 294 | 424 / 424 [==================================] 100.00% 0s | ||
| 295 | 2019/01/10 00:40:09 Saving output file ... | ||
| 296 | 2019/01/10 00:40:09 Output image file length is 1.1 kB | ||
| 297 | 2019/01/10 00:40:09 Process took 19.036117ms | ||
| 298 | 2019/01/10 00:40:09 Done ... | ||
| 299 | ``` | ||
| 300 | |||
After encoding into PNG format, the file looks like this:
| 302 | |||
| 303 |  | ||
| 304 | |||
The larger the input stream, the larger the PNG file will be.
| 306 | |||
A basic Hello World C program compiled with
[GCC](https://www.gnu.org/software/gcc/) would [look like
this](/assets/dna-sequence/sample.png).
| 310 | |||
| 311 | ```c | ||
| 312 | // gcc -O3 -o sample sample.c | ||
| 313 | #include <stdio.h> | ||
| 314 | |||
int main(void) {
| 316 | printf("Hello, world!\n"); | ||
| 317 | return 0; | ||
| 318 | } | ||
| 319 | ``` | ||
| 320 | |||
| 321 | ## Toolkit for encoding data | ||
| 322 | |||
| 323 | I have created a toolkit with two main programs: | ||
| 324 | |||
- `dnae-encode` (encodes a file into a FASTA file)
- `dnae-png` (encodes a FASTA file into a PNG image)
| 327 | |||
The toolkit with full source code is available at
| 329 | [github.com/mitjafelicijan/dna-encoding](https://github.com/mitjafelicijan/dna-encoding). | ||
| 330 | |||
| 331 | ### dnae-encode | ||
| 332 | |||
| 333 | ```bash | ||
| 334 | > ./dnae-encode --help | ||
| 335 | usage: dnae-encode --input=INPUT [<flags>] | ||
| 336 | |||
| 337 | A command-line application that encodes file into DNA sequence. | ||
| 338 | |||
| 339 | Flags: | ||
| 340 | --help Show context-sensitive help (also try --help-long and --help-man). | ||
| 341 | -i, --input=INPUT Input file (ASCII or binary) which will be encoded into DNA sequence. | ||
| 342 | -o, --output="out.fa" Output file which stores DNA sequence in FASTA format. | ||
| 343 | -s, --sequence=SEQ1 The description line (defline) or header/identifier line, gives a name and/or a unique identifier for the sequence. | ||
| 344 | -c, --columns=60 Row characters length (no more than 120 characters). Devices preallocate fixed line sizes in software. | ||
| 345 | --version Show application version. | ||
| 346 | ``` | ||
| 347 | |||
| 348 | ### dnae-png | ||
| 349 | |||
| 350 | ```bash | ||
| 351 | > ./dnae-png --help | ||
| 352 | usage: dnae-png --input=INPUT [<flags>] | ||
| 353 | |||
| 354 | A command-line application that encodes FASTA file into PNG image. | ||
| 355 | |||
| 356 | Flags: | ||
| 357 | --help Show context-sensitive help (also try --help-long and --help-man). | ||
| 358 | -i, --input=INPUT Input FASTA file which will be encoded into PNG image. | ||
| 359 | -o, --output="out.png" Output file in PNG format that represents DNA sequence in graphical way. | ||
| 360 | -s, --size=10 Size of pairings of DNA bases on image in pixels (lower resolution lower file size). | ||
| 361 | --version Show application version. | ||
| 362 | ``` | ||
| 363 | |||
| 364 | ## Benchmarks | ||
| 365 | |||
First we generate some binary sample data with `dd`.
| 367 | |||
| 368 | ```bash | ||
| 369 | dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=1KB.bin bs=1KB count=1 iflag=fullblock | ||
| 370 | ``` | ||
| 371 | |||
Our freshly generated 1KB file looks something like this (it's full of garbage
data, as intended).
| 374 | |||
| 375 |  | ||
| 376 | |||
We create the following binary files:
| 378 | |||
| 379 | - 1KB.bin | ||
| 380 | - 10KB.bin | ||
| 381 | - 100KB.bin | ||
| 382 | - 1MB.bin | ||
| 383 | - 10MB.bin | ||
| 384 | - 100MB.bin | ||
| 385 | |||
After this we create FASTA files for all the binary files by encoding them
into DNA sequences.
| 388 | |||
| 389 | ```bash | ||
| 390 | ./dnae-encode -i 100MB.bin -o 100MB.fa | ||
| 391 | ``` | ||
| 392 | |||
Then we GZIP all the FASTA files to see how much they can be compressed.
| 394 | |||
| 395 | ```bash | ||
| 396 | gzip -9 < 10MB.fa > 10MB.fa.gz | ||
| 397 | ``` | ||
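Since the FASTA body uses only four symbols, it carries at most 2 bits of information per 8-bit character, so a good compressor should roughly undo the 4× expansion. A back-of-the-envelope sketch of that bound (my own estimate, not the actual benchmark numbers):

```python
import math

symbols = 4
bits_per_char = math.log2(symbols)  # 2 bits of information per nucleotide char
expansion = 8 / bits_per_char       # each byte becomes 4 nucleotide characters
best_ratio = bits_per_char / 8      # ideal compressed size vs. FASTA size
print(expansion, best_ratio)        # → 4.0 0.25
```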
| 398 | |||
| 399 | [Download ODS file with benchmarks](/dna-sequence/benchmarks.ods). | ||
| 400 | |||
| 401 |  | ||
| 402 | |||
| 403 |  | ||
| 404 | |||
| 405 | ## References | ||
| 406 | |||
| 407 | - https://www.techopedia.com/definition/948/encoding | ||
| 408 | - https://www.dna-worldwide.com/resource/160/history-dna-timeline | ||
| 409 | - https://opentextbc.ca/biology/chapter/9-1-the-structure-of-dna/ | ||
| 410 | - https://arxiv.org/abs/1801.04774 | ||
| 411 | - https://en.wikipedia.org/wiki/FASTA_format | ||
diff --git a/content/posts/2019-10-14-simplifying-and-reducing-clutter.md b/content/posts/2019-10-14-simplifying-and-reducing-clutter.md new file mode 100644 index 0000000..25f9ca0 --- /dev/null +++ b/content/posts/2019-10-14-simplifying-and-reducing-clutter.md | |||
| @@ -0,0 +1,59 @@ | |||
| 1 | --- | ||
| 2 | title: Simplifying and reducing clutter in my life and work | ||
| 3 | url: simplifying-and-reducing-clutter.html | ||
| 4 | date: 2019-10-14T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
I recently moved my main working machine back from a Hackintosh to Linux. The
experiment was interesting and I did some great work on macOS, but it was time
to move back.
| 12 | |||
I actually really missed Linux. The simplicity of `apt-get`, or just the amount
of software that exists for Linux, makes it a no-brainer. I spent most of my
time on macOS finding workarounds to make things work. Using
[Brew](https://brew.sh/) was just a horrible experience, far from the package
managers of Linux. At least they managed to get that `sudo` debacle sorted.
| 18 | |||
Not all was bad. macOS in general was a perfectly good environment. Things like
Docker and similar tooling worked without any hiccups. My usual tools, like my
coding IDE, worked flawlessly, and the whole look and feel is just superb. I
had been using a MacBook Air for a couple of years, so I was used to the
system, but never as a daily driver.
| 24 | |||
One of the things I did after installing Linux back on my machine was cleaning
up my Dropbox folder. I keep everything on Dropbox, even my projects folder. I
write code for a living, so my whole life revolves around a couple of megabytes
of code (with assets). It's not like I have huge files on my machine. I don't
keep movies, music or pictures on my PC; all of that is in the cloud. I use
Google Music and have a Netflix account, which is more than enough for me.
| 31 | |||
I also went and deleted some of the repositories on my GitHub account. I have
deleted more code than I have deployed. People find this strange, but for me
deleting something feels cathartic and forces me to write better code the next
time I face a similar problem. That was a huge relief, if I am being totally
honest.
| 37 | |||
The next step was to do something with my webpage. I had been using some
scripts I wrote a while ago to generate static pages from markdown sources. I
kept adding stuff on top of them and they became a source of frustration. And
this is just a simple blog, yet I was using gulp and npm. Anyway, after a
couple of hours of searching and testing static generators I found an
interesting one,
[https://github.com/piranha/gostatic](https://github.com/piranha/gostatic), and
decided to use it. It was the only one with a simple templating engine, not
that I really need one. The others had convoluted ways of trying to solve
everything and in the end required a bigger learning curve than I was ready to
accept. So I deleted a couple of old posts, simplified the HTML, trashed most
of the CSS and went with the
[https://motherfuckingwebsite.com/](https://motherfuckingwebsite.com/)
aesthetic. Yeah, the previous site was more visually stimulating, but all I
really care about at this point is the content. And the Times New Roman font is
kind of awesome.
| 54 | |||
I stopped working on most of my projects in the past couple of months because
the overhead was just too insane. There comes a point when you stretch yourself
too thin: you stop progressing, and with that comes dissatisfaction.
| 58 | |||
| 59 | So that's about it. Moving forward minimal style. | ||
diff --git a/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md b/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md new file mode 100644 index 0000000..8322e70 --- /dev/null +++ b/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md | |||
| @@ -0,0 +1,108 @@ | |||
| 1 | --- | ||
| 2 | title: Using sentiment analysis for clickbait detection in RSS feeds | ||
| 3 | url: using-sentiment-analysis-for-clickbait-detection-in-rss-feeds.html | ||
| 4 | date: 2019-10-19T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## Initial thoughts | ||
| 10 | |||
One of the things that has interested me for a while is whether major,
well-established news sites use clickbait titles to drive additional traffic to
their sites and generate additional impressions.

The goal is to see how article titles differ from the actual content of the
articles, and whether the titles are clickbaited.
| 17 | |||
| 18 | ## Preparing and cleaning data | ||
| 19 | |||
For this example I opted to just use the RSS feed of a news website and decided
to go with [The Guardian](https://www.theguardian.com) World news. This gets us
limited data (~40 articles), and the description (the actual content) is
trimmed, so it doesn't fully reflect the article contents.

To get better content I could treat the RSS feed as a link list and scrape the
articles directly from the website, but for this simple example it will
suffice.
| 28 | |||
There are a couple of requirements we need to install before we continue:
| 30 | |||
| 31 | - `pip3 install feedparser` (parses RSS feed from url) | ||
| 32 | - `pip3 install vaderSentiment` (does sentiment polarity analysis) | ||
| 33 | - `pip3 install matplotlib` (plots chart of results) | ||
| 34 | |||
| 35 | So first we need to fetch RSS data and sanitize HTML content from description. | ||
| 36 | |||
| 37 | ```python | ||
| 38 | import re | ||
| 39 | import feedparser | ||
| 40 | |||
| 41 | feed_url = "https://www.theguardian.com/world/rss" | ||
| 42 | feed = feedparser.parse(feed_url) | ||
| 43 | |||
| 44 | # sanitize html | ||
| 45 | for item in feed.entries: | ||
| 46 | item.description = re.sub('<[^<]+?>', '', item.description) | ||
| 47 | ``` | ||
| 48 | |||
| 49 | ## Perform sentiment analysis | ||
| 50 | |||
| 51 | Since we now have cleaned up data in our `feed.entries` object we can start with | ||
| 52 | performing sentiment analysis. | ||
| 53 | |||
| 54 | There are many sentiment analysis libraries available that range from rule-based | ||
| 55 | sentiment analysis up to machine learning supported analysis. To keep things | ||
| 56 | simple I decided to use rule-based analysis library | ||
| 57 | [vaderSentiment](https://github.com/cjhutto/vaderSentiment) from | ||
| 58 | [C.J. Hutto](https://github.com/cjhutto). Really nice library and quite easy to | ||
| 59 | use. | ||
| 60 | |||
| 61 | ```python | ||
| 62 | from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer | ||
| 63 | analyser = SentimentIntensityAnalyzer() | ||
| 64 | |||
| 65 | sentiment_results = [] | ||
| 66 | for item in feed.entries: | ||
| 67 | sentiment_title = analyser.polarity_scores(item.title) | ||
| 68 | sentiment_description = analyser.polarity_scores(item.description) | ||
| 69 | sentiment_results.append([sentiment_title['compound'], sentiment_description['compound']]) | ||
| 70 | ``` | ||
| 71 | |||
Now that we have this data in a shape compatible with matplotlib, we can plot
the results to see the difference between the title and description sentiment
of each article.
| 75 | |||
| 76 | ```python | ||
| 77 | import matplotlib.pyplot as plt | ||
| 78 | |||
| 79 | plt.rcParams['figure.figsize'] = (15, 3) | ||
| 80 | plt.plot(sentiment_results, drawstyle='steps') | ||
| 81 | plt.title('Sentiment analysis relationship between title and description (Guardian World News)') | ||
| 82 | plt.legend(['title', 'description']) | ||
| 83 | plt.show() | ||
| 84 | ``` | ||
| 85 | |||
| 86 | ## Results and assets | ||
| 87 | |||
1. Because of the small sample size, no firm conclusions can be drawn.
| 89 | 2. Rule-based approach may not be the best way of doing this. By using deep | ||
| 90 | learning we would be able to get better insights. | ||
| 91 | 3. **Next step would be to** periodically fetch RSS items and store them over a | ||
| 92 | longer period of time and then perform analysis again and use either machine | ||
| 93 | learning or deep learning on top of it. | ||
| 94 | |||
| 95 |  | ||
| 96 | |||
The figure above displays the difference between title and description
sentiment for each RSS feed item; 1 means positive and -1 means negative
sentiment.
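As a follow-up sketch (my own hypothetical heuristic, not part of the analysis above), items whose title sentiment diverges strongly from the description sentiment could be flagged as clickbait candidates:

```python
def flag_clickbait(results, threshold=0.5):
    """Return indices of items whose title and description compound
    sentiment scores differ by more than `threshold`."""
    return [i for i, (title, desc) in enumerate(results)
            if abs(title - desc) > threshold]

# each pair is (title_compound, description_compound)
sample = [(0.8, 0.1), (0.0, -0.1), (-0.7, 0.3)]
print(flag_clickbait(sample))  # → [0, 2]
```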
| 99 | |||
| 100 | [» Download Jupyter Notebook](/assets/sentiment-analysis/sentiment-analysis.ipynb) | ||
| 101 | |||
| 102 | ## Going further | ||
| 103 | |||
| 104 | - [Twitter Sentiment Analysis by Bryan Schwierzke](https://github.com/bswiss/news_mood) | ||
| 105 | - [AFINN-based sentiment analysis for Node.js by Andrew Sliwinski](https://github.com/thisandagain/sentiment) | ||
| 106 | - [Sentiment Analysis with LSTMs in Tensorflow by Adit Deshpande](https://github.com/adeshpande3/LSTM-Sentiment-Analysis) | ||
| 107 | - [Sentiment analysis on tweets using Naive Bayes, SVM, CNN, LSTM, etc. by Abdul Fatir](https://github.com/abdulfatir/twitter-sentiment-analysis) | ||
| 108 | |||
diff --git a/content/posts/2020-03-22-simple-sse-based-pubsub-server.md b/content/posts/2020-03-22-simple-sse-based-pubsub-server.md new file mode 100644 index 0000000..8e46138 --- /dev/null +++ b/content/posts/2020-03-22-simple-sse-based-pubsub-server.md | |||
| @@ -0,0 +1,454 @@ | |||
| 1 | --- | ||
| 2 | title: Simple Server-Sent Events based PubSub Server | ||
| 3 | url: simple-server-sent-events-based-pubsub-server.html | ||
| 4 | date: 2020-03-22T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## Before we continue ... | ||
| 10 | |||
The publisher/subscriber model is nothing new and there are many amazing
solutions out there, so writing a new one would be a waste of time, if only the
existing solutions didn't have quite complex install procedures and weren't so
hard to maintain. To be fair, comparing this simple server with something like
[Kafka](https://kafka.apache.org/) or [RabbitMQ](https://www.rabbitmq.com/) is
laughable at the least; those solutions are enterprise grade and have many
mechanisms to ensure messages aren't lost, and much more. Regardless of these
drawbacks, this method has been tested on a large website and has worked
without any problems so far. Now that we have that cleared up, let's continue.
| 20 | |||
| 21 | ***Wiki definition:** Publish/subscribe messaging, or pub/sub messaging, is a | ||
| 22 | form of asynchronous service-to-service communication used in serverless and | ||
| 23 | microservices architectures. In a pub/sub model, any message published to a | ||
| 24 | topic is immediately received by all the subscribers to the topic.* | ||
| 25 | |||
| 26 | ## General goals | ||
| 27 | |||
| 28 | - provide a simple server that relays messages to all the connected clients, | ||
| 29 | - messages can be posted on specific topics, | ||
| 30 | - messages get sent via [Server-Sent | ||
| 31 | Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events) | ||
| 32 | to all the subscribers. | ||
| 33 | |||
| 34 | ## How exactly does the pub/sub model work? | ||
| 35 | |||
The easiest way to explain this is with the diagram below. The basic function
is simple: we have subscribers that receive messages, and we have publishers
that create and post messages. A similar, well-known pattern works on the
premise of consumers and producers, which take on equivalent roles.
| 40 | |||
| 41 |  | ||
| 42 | |||
| 43 | **These are some naive characteristics we want to achieve:** | ||
| 44 | |||
- the producer publishes messages to a topic,
- the consumer receives messages from its subscribed topics,
- the server is also known as a broker,
- the broker does not store messages or track delivery success,
- the broker uses the
  [FIFO](https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)) method
  for delivering messages,
- for a consumer to receive messages from a topic, the producer and consumer
  topics must match,
- a consumer can subscribe to multiple topics,
- a producer can publish to multiple topics,
- each message has a messageId.
| 57 | |||
| 58 | **Known drawbacks:** | ||
| 59 | |||
- messages are not stored in a persistent queue and there is no
  [DeadLetterQueue](https://en.wikipedia.org/wiki/Dead_letter_queue) for
  unreceived messages, so old messages can be lost on server restart,
- [Server-Sent
  Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
  opens a long-running connection between the client and the server, so if your
  setup is load balanced, make sure the load balancer supports long-lived
  connections,
- no system moderation, due to the dynamic nature of creating queues.
| 69 | |||
| 70 | ## Server-Sent Events | ||
| 71 | |||
| 72 | Read more about it on [official specification | ||
| 73 | page](https://html.spec.whatwg.org/multipage/server-sent-events.html). | ||
| 74 | |||
| 75 | ### Current browser support | ||
| 76 | |||
| 77 |  | ||
| 78 | |||
| 79 | Check | ||
| 80 | [https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource) | ||
| 81 | for latest information about browser support. | ||
| 82 | |||
| 83 | ### Known issues | ||
| 84 | |||
| 85 | - Firefox 52 and below do not support EventSource in web/shared workers | ||
| 86 | - In Firefox prior to version 36 server-sent events do not reconnect | ||
| 87 | automatically in case of a connection interrupt (bug) | ||
| 88 | - Reportedly, CORS in EventSource is currently supported in Firefox 10+, Opera | ||
| 89 | 12+, Chrome 26+, Safari 7.0+. | ||
| 90 | - Antivirus software may block the event streaming data chunks. | ||
| 91 | |||
| 92 | Source: [https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource) | ||
| 93 | |||
| 94 | ### Message format | ||
| 95 | |||
| 96 | The simplest message that can be sent is only with data attribute: | ||
| 97 | |||
| 98 | ```bash | ||
| 99 | data: this is a simple message | ||
| 100 | <blank line> | ||
| 101 | ``` | ||
| 102 | |||
| 103 | You can send message IDs to be used if the connection is dropped: | ||
| 104 | |||
| 105 | ```bash | ||
| 106 | id: 33 | ||
| 107 | data: this is line one | ||
| 108 | data: this is line two | ||
| 109 | <blank line> | ||
| 110 | ``` | ||
| 111 | |||
| 112 | And you can specify your own event types (the above messages will all trigger | ||
| 113 | the message event): | ||
| 114 | |||
| 115 | ```bash | ||
| 116 | id: 36 | ||
| 117 | event: price | ||
| 118 | data: 103.34 | ||
| 119 | <blank line> | ||
| 120 | ``` | ||
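The message shapes above can be produced with one small helper (a sketch; `format_sse` is a hypothetical name of mine, not part of any SSE library):

```python
def format_sse(data, event_id=None, event=None):
    """Build one Server-Sent Events message: optional id and event
    fields, one `data:` line per line of payload, then a blank line."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    lines += [f"data: {line}" for line in str(data).splitlines()]
    return "\n".join(lines) + "\n\n"

print(format_sse("103.34", event_id=36, event="price"))
# id: 36
# event: price
# data: 103.34
```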
| 121 | |||
| 122 | ### Server requirements | ||
| 123 | |||
The important thing is which headers the server sends; they are what triggers
the browser to treat the response as an EventStream.
| 126 | |||
| 127 | Headers responsible for this are: | ||
| 128 | |||
| 129 | ```bash | ||
| 130 | Content-Type: text/event-stream | ||
| 131 | Cache-Control: no-cache | ||
| 132 | Connection: keep-alive | ||
| 133 | ``` | ||
| 134 | |||
| 135 | ### Debugging with Google Chrome | ||
| 136 | |||
Google Chrome provides a built-in debugging and exploration tool for
[Server-Sent
Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events),
which is quite nice and available in Developer Tools under the Network tab.
| 140 | |||
| 141 | > You can only debug client-side events as they are received, not the server | ||
| 142 | > ones. To debug server events, add `console.log` calls to the `server.js` code | ||
| 143 | > and print the events out. | ||
| 144 | |||
| 145 |  | ||
| 146 | |||
| 147 | ## Server implementation | ||
| 148 | |||
| 149 | For the sake of this example we will use [Node.js](https://nodejs.org/en/) with | ||
| 150 | [Express](https://expressjs.com) as our router, since this is the easiest way to | ||
| 151 | get started, and we will use an existing SSE library for Node, | ||
| 152 | [sse-pubsub](https://www.npmjs.com/package/sse-pubsub), so we don't reinvent the | ||
| 153 | wheel. | ||
| 154 | |||
| 155 | ```bash | ||
| 156 | npm init --yes | ||
| 157 | |||
| 158 | npm install express | ||
| 159 | npm install body-parser | ||
| 160 | npm install sse-pubsub | ||
| 161 | ``` | ||
| 162 | |||
| 163 | A basic implementation of the server (`server.js`): | ||
| 164 | |||
| 165 | ```js | ||
| 166 | const express = require('express'); | ||
| 167 | const bodyParser = require('body-parser'); | ||
| 168 | const SSETopic = require('sse-pubsub'); | ||
| 169 | |||
| 170 | const app = express(); | ||
| 171 | const port = process.env.PORT || 4000; | ||
| 172 | |||
| 173 | // topics container | ||
| 174 | const sseTopics = {}; | ||
| 175 | |||
| 176 | app.use(bodyParser.json()); | ||
| 177 | |||
| 178 | // open for all cors | ||
| 179 | app.all('*', (req, res, next) => { | ||
| 180 | res.header('Access-Control-Allow-Origin', '*'); | ||
| 181 | res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type'); | ||
| 182 | next(); | ||
| 183 | }); | ||
| 184 | |||
| 185 | // preflight request error fix | ||
| 186 | app.options('*', async (req, res) => { | ||
| 187 | res.header('Access-Control-Allow-Origin', '*'); | ||
| 188 | res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type'); | ||
| 189 | res.send('OK'); | ||
| 190 | }); | ||
| 191 | |||
| 192 | // serve the event streams | ||
| 193 | app.get('/stream/:topic', async (req, res, next) => { | ||
| 194 | const topic = req.params.topic; | ||
| 195 | |||
| 196 | if (!(topic in sseTopics)) { | ||
| 197 | sseTopics[topic] = new SSETopic({ | ||
| 198 | pingInterval: 0, | ||
| 199 | maxStreamDuration: 15000, | ||
| 200 | }); | ||
| 201 | } | ||
| 202 | |||
| 203 | // subscribing client to topic | ||
| 204 | sseTopics[topic].subscribe(req, res); | ||
| 205 | }); | ||
| 206 | |||
| 207 | // accepts new messages into topic | ||
| 208 | app.post('/publish', async (req, res) => { | ||
| 209 | let body = req.body; | ||
| 210 | let status = 200; | ||
| 211 | |||
| 212 | console.log('Incoming message:', req.body); | ||
| 213 | |||
| 214 | if ( | ||
| 215 | body.hasOwnProperty('topic') && | ||
| 216 | body.hasOwnProperty('event') && | ||
| 217 | body.hasOwnProperty('message') | ||
| 218 | ) { | ||
| 219 | const topic = req.body.topic; | ||
| 220 | const event = req.body.event; | ||
| 221 | const message = req.body.message; | ||
| 222 | |||
| 223 | if (topic in sseTopics) { | ||
| 224 | // sends message to all the subscribers | ||
| 225 | sseTopics[topic].publish(message, event); | ||
| 226 | } | ||
| 227 | } else { | ||
| 228 | status = 400; | ||
| 229 | } | ||
| 230 | |||
| 231 | res.status(status).send({ | ||
| 232 | status, | ||
| 233 | }); | ||
| 234 | }); | ||
| 235 | |||
| 236 | // returns JSON object of all opened topics | ||
| 237 | app.get('/status', async (req, res) => { | ||
| 238 | res.send(sseTopics); | ||
| 239 | }); | ||
| 240 | |||
| 241 | // health-check endpoint | ||
| 242 | app.get('/', async (req, res) => { | ||
| 243 | res.send('OK'); | ||
| 244 | }); | ||
| 245 | |||
| 246 | // return a 404 if no routes match | ||
| 247 | app.use((req, res, next) => { | ||
| 248 | res.set('Cache-Control', 'private, no-store'); | ||
| 249 | res.status(404).end('Not found'); | ||
| 250 | }); | ||
| 251 | |||
| 252 | // starts the server | ||
| 253 | app.listen(port, () => { | ||
| 254 | console.log(`PubSub server running on http://localhost:${port}`); | ||
| 255 | }); | ||
| 256 | ``` | ||
| 257 | |||
| 258 | ### Our custom message format | ||
| 259 | |||
| 260 | Each message posted to the server must be in a specific format that our server | ||
| 261 | accepts. A structure like this allows us to have multiple separate types of | ||
| 262 | events on each topic. | ||
| 263 | |||
| 264 | With this we can separate streams and receive only the events that belong to a | ||
| 265 | given topic. | ||
| 266 | |||
| 267 | For example, we might have an index page where we want to receive messages | ||
| 268 | about new upvotes or new subscribers, but we don't want to follow events for | ||
| 269 | other pages. This reduces clutter and overall network traffic, and the | ||
| 270 | structure is much nicer and more maintainable. | ||
| 271 | |||
| 272 | ```json | ||
| 273 | { | ||
| 274 | "topic": "sample-topic", | ||
| 275 | "event": "sample-event", | ||
| 276 | "message": { "name": "John" } | ||
| 277 | } | ||
| 278 | ``` | ||
| 279 | |||
| 280 | ## Publisher and subscriber clients | ||
| 281 | |||
| 282 | ### Publisher and subscriber in action | ||
| 283 | |||
| 284 | <video src="/assets/simple-pubsub-server/clients.m4v" controls></video> | ||
| 285 | |||
| 286 | You can download [the code](../simple-pubsub-server/sse-pubsub-server.zip) and | ||
| 287 | follow along. | ||
| 288 | |||
| 289 | ### Publisher | ||
| 290 | |||
| 291 | As discussed above, the publisher is the one that sends messages to the | ||
| 292 | broker/server. The message inside the payload can be whatever you want (a | ||
| 293 | string, object, or array). I would, however, personally avoid sending large | ||
| 294 | chunks of data like blobs. | ||
| 295 | |||
| 296 | ```html | ||
| 297 | <!DOCTYPE html> | ||
| 298 | <html lang="en"> | ||
| 299 | |||
| 300 | <head> | ||
| 301 | <meta charset="UTF-8"> | ||
| 302 | <meta name="viewport" content="width=device-width, initial-scale=1.0"> | ||
| 303 | <title>Publisher</title> | ||
| 304 | </head> | ||
| 305 | |||
| 306 | <body> | ||
| 307 | |||
| 308 | <h1>Publisher</h1> | ||
| 309 | |||
| 310 | <fieldset> | ||
| 311 | <p> | ||
| 312 | <label>Server:</label> | ||
| 313 | <input type="text" id="server" value="http://localhost:4000"> | ||
| 314 | </p> | ||
| 315 | <p> | ||
| 316 | <label>Topic:</label> | ||
| 317 | <input type="text" id="topic" value="sample-topic"> | ||
| 318 | </p> | ||
| 319 | <p> | ||
| 320 | <label>Event:</label> | ||
| 321 | <input type="text" id="event" value="sample-event"> | ||
| 322 | </p> | ||
| 323 | <p> | ||
| 324 | <label>Message:</label> | ||
| 325 | <input type="text" id="message" value='{"name": "John"}'> | ||
| 326 | </p> | ||
| 327 | <p> | ||
| 328 | <button type="button" id="button">Publish message to topic</button> | ||
| 329 | </p> | ||
| 330 | </fieldset> | ||
| 331 | |||
| 332 | <script> | ||
| 333 | |||
| 334 | const button = document.querySelector('#button'); | ||
| 335 | const server = document.querySelector('#server'); | ||
| 336 | const topic = document.querySelector('#topic'); | ||
| 337 | const event = document.querySelector('#event'); | ||
| 338 | const message = document.querySelector('#message'); | ||
| 339 | |||
| 340 | button.addEventListener('click', async (evt) => { | ||
| 341 | const req = await fetch(`${server.value}/publish`, { | ||
| 342 | method: 'post', | ||
| 343 | headers: { | ||
| 344 | 'Accept': 'application/json', | ||
| 345 | 'Content-Type': 'application/json', | ||
| 346 | }, | ||
| 347 | body: JSON.stringify({ | ||
| 348 | topic: topic.value, | ||
| 349 | event: event.value, | ||
| 350 | message: JSON.parse(message.value), | ||
| 351 | }), | ||
| 352 | }); | ||
| 353 | |||
| 354 | const res = await req.json(); | ||
| 355 | console.log(res); | ||
| 356 | }); | ||
| 357 | |||
| 358 | </script> | ||
| 359 | |||
| 360 | </body> | ||
| 361 | |||
| 362 | </html> | ||
| 363 | ``` | ||
| 364 | |||
| 365 | ### Subscriber | ||
| 366 | |||
| 367 | The subscriber is responsible for receiving new messages that come from the | ||
| 368 | server via the publisher. The code below is very rudimentary but works and | ||
| 369 | follows the implementation guidelines for EventSource. | ||
| 370 | |||
| 371 | You can use the Developer Tools Console to see incoming messages, or you can | ||
| 372 | refer to the Debugging with Google Chrome section above to see all EventStream | ||
| 373 | messages. | ||
| 374 | |||
| 375 | > Don't be alarmed if the subscriber gets disconnected from the server every so | ||
| 376 | > often. The code we have here resets the connection every 15s, but the client | ||
| 377 | > automatically reconnects and fetches all messages up to the last received | ||
| 378 | > message id. This setting can be adjusted in the `server.js` file; search for | ||
| 379 | > the `maxStreamDuration` option. | ||
| 380 | |||
| 381 | ```html | ||
| 382 | <!DOCTYPE html> | ||
| 383 | <html lang="en"> | ||
| 384 | |||
| 385 | <head> | ||
| 386 | <meta charset="UTF-8"> | ||
| 387 | <meta name="viewport" content="width=device-width, initial-scale=1.0"> | ||
| 388 | <title>Subscriber</title> | ||
| 389 | <link rel="stylesheet" href="style.css"> | ||
| 390 | </head> | ||
| 391 | |||
| 392 | <body> | ||
| 393 | |||
| 394 | <h1>Subscriber</h1> | ||
| 395 | |||
| 396 | <fieldset> | ||
| 397 | <p> | ||
| 398 | <label>Server:</label> | ||
| 399 | <input type="text" id="server" value="http://localhost:4000"> | ||
| 400 | </p> | ||
| 401 | <p> | ||
| 402 | <label>Topic:</label> | ||
| 403 | <input type="text" id="topic" value="sample-topic"> | ||
| 404 | </p> | ||
| 405 | <p> | ||
| 406 | <label>Event:</label> | ||
| 407 | <input type="text" id="event" value="sample-event"> | ||
| 408 | </p> | ||
| 409 | <p> | ||
| 410 | <button type="button" id="button">Subscribe to topic</button> | ||
| 411 | </p> | ||
| 412 | </fieldset> | ||
| 413 | |||
| 414 | <script> | ||
| 415 | |||
| 416 | const button = document.querySelector('#button'); | ||
| 417 | const server = document.querySelector('#server'); | ||
| 418 | const topic = document.querySelector('#topic'); | ||
| 419 | const event = document.querySelector('#event'); | ||
| 420 | |||
| 421 | button.addEventListener('click', async (evt) => { | ||
| 422 | |||
| 423 | let es = new EventSource(`${server.value}/stream/${topic.value}`); | ||
| 424 | |||
| 425 | es.addEventListener(event.value, function (evt) { | ||
| 426 | console.log(`incoming message`, JSON.parse(evt.data)); | ||
| 427 | }); | ||
| 428 | |||
| 429 | es.addEventListener('open', function (evt) { | ||
| 430 | console.log('connected', evt); | ||
| 431 | }); | ||
| 432 | |||
| 433 | es.addEventListener('error', function (evt) { | ||
| 434 | console.log('error', evt); | ||
| 435 | }); | ||
| 436 | |||
| 437 | }); | ||
| 438 | |||
| 439 | </script> | ||
| 440 | |||
| 441 | </body> | ||
| 442 | |||
| 443 | </html> | ||
| 444 | ``` | ||
| 445 | |||
| 446 | ## Reading further | ||
| 447 | |||
| 448 | - [Using server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events) | ||
| 449 | - [Using SSE Instead Of WebSockets For Unidirectional Data Flow Over HTTP/2](https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/) | ||
| 450 | - [What is Server-Sent Events?](https://apifriends.com/api-streaming/server-sent-events/) | ||
| 451 | - [An HTTP/2 extension for bidirectional messaging communication](https://tools.ietf.org/id/draft-xie-bidirectional-messaging-01.html) | ||
| 452 | - [Introduction to HTTP/2](https://developers.google.com/web/fundamentals/performance/http2) | ||
| 453 | - [The WebSocket API (WebSockets)](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) | ||
| 454 | |||
diff --git a/content/posts/2020-03-27-create-placeholder-images-with-sharp.md b/content/posts/2020-03-27-create-placeholder-images-with-sharp.md new file mode 100644 index 0000000..1c2b042 --- /dev/null +++ b/content/posts/2020-03-27-create-placeholder-images-with-sharp.md | |||
| @@ -0,0 +1,102 @@ | |||
| 1 | --- | ||
| 2 | title: Create placeholder images with sharp Node.js image processing library | ||
| 3 | url: create-placeholder-images-with-sharp.html | ||
| 4 | date: 2020-03-27T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | I had been searching for a solution to pre-generate some placeholder images for | ||
| 10 | an image server I needed to develop that resizes images on S3. I thought this | ||
| 11 | would be a 15-minute job and quickly found out how very mistaken I was. | ||
| 12 | |||
| 13 | Even though Node.js is not really the best tool for this kind of thing (surely | ||
| 14 | something written in C, Rust, or even Golang would be faster, but we didn't | ||
| 15 | need the speed in our case), I found an excellent library: | ||
| 16 | [sharp - High performance Node.js image | ||
| 17 | processing](https://github.com/lovell/sharp). | ||
| 18 | |||
| 19 | Getting things running was a breeze. | ||
| 20 | |||
| 21 | ## Fetch image from S3 and save resized | ||
| 22 | |||
| 23 | ```js | ||
| 24 | const sharp = require('sharp'); | ||
| 25 | const aws = require('aws-sdk'); | ||
| 26 | |||
| 27 | const x = 100; | ||
| 28 | const y = 100; | ||
| 29 | |||
| 30 | aws.config.update({ | ||
| 31 |   secretAccessKey: 'secretAccessKey', | ||
| 32 |   accessKeyId: 'accessKeyId', | ||
| 33 |   region: 'region' | ||
| 34 | }); | ||
| 35 | |||
| 36 | const s3 = new aws.S3({}); | ||
| 37 | |||
| 38 | // `await` is not allowed at the top level of a CommonJS module, | ||
| 39 | // so wrap the work in an async function | ||
| 40 | (async () => { | ||
| 41 |   const originalImage = await s3.getObject({ | ||
| 42 |     Bucket: 'some-bucket-name', | ||
| 43 |     Key: 'image.jpg', | ||
| 44 |   }).promise(); | ||
| 45 | |||
| 46 |   const resizedImage = await sharp(originalImage.Body) | ||
| 47 |     .resize(x, y) | ||
| 48 |     .jpeg({ progressive: true }) | ||
| 49 |     .toBuffer(); | ||
| 50 | |||
| 51 |   await s3.putObject({ | ||
| 52 |     Bucket: 'some-bucket-name', | ||
| 53 |     Key: `optimized/${x}x${y}/image.jpg`, | ||
| 54 |     Body: resizedImage, | ||
| 55 |     ContentType: 'image/jpeg', | ||
| 56 |     ACL: 'public-read' | ||
| 57 |   }).promise(); | ||
| 58 | })(); | ||
| 59 | ``` | ||
| 54 | |||
| 55 | All this code was wrapped inside a web service with some additional security | ||
| 56 | checks and defensive coding to detect whether a key is missing on S3. | ||
| 57 | |||
| 58 | At that point I needed to return placeholder images as a response in case the | ||
| 59 | key is missing, or x and y are not allowed by the server, etc. I could have | ||
| 60 | created PNGs in Gimp and just served them, but I wanted to respect the aspect | ||
| 61 | ratio and didn't want to return mangled images. | ||
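Just to make the aspect-ratio idea concrete (this helper is hypothetical, not part of the service), fitting a source size into a bounding box without distortion boils down to a single scale factor:

```javascript
// Fit (srcW x srcH) into (maxW x maxH), preserving aspect ratio and
// never upscaling.
const fitWithin = (srcW, srcH, maxW, maxH) => {
  const scale = Math.min(maxW / srcW, maxH / srcH, 1);
  return { width: Math.round(srcW * scale), height: Math.round(srcH * scale) };
};

console.log(fitWithin(4000, 2000, 800, 800)); // { width: 800, height: 400 }
```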
| 62 | |||
| 63 | > The main problem was finding a clean solution I could copy, paste, and change | ||
| 64 | > a bit. The API is changing constantly and there weren't clear examples, or I | ||
| 65 | > was unable to find them. | ||
| 66 | |||
| 67 | ## Generating placeholder images using SVG | ||
| 68 | |||
| 69 | What I ended up with was using SVG to generate the text, creating the base | ||
| 70 | image with sharp, and using composition to combine both layers. The response | ||
| 71 | returned by this function is a buffer you can either upload to S3 or save to a | ||
| 72 | local file. | ||
| 72 | |||
| 73 | ```js | ||
| 74 | const sharp = require('sharp'); | ||
| 75 | |||
| 76 | const generatePlaceholderImageWithText = async (width, height, message) => { | ||
| 77 |   // SVG overlay with the message centered; 10px smaller than the image on each side | ||
| 78 |   const overlay = `<svg width="${width - 20}" height="${height - 20}"> | ||
| 79 |   <text x="50%" y="50%" font-family="sans-serif" font-size="16" text-anchor="middle">${message}</text> | ||
| 80 | </svg>`; | ||
| 81 | |||
| 82 |   return sharp({ | ||
| 83 |     create: { | ||
| 84 |       width: width, | ||
| 85 |       height: height, | ||
| 86 |       channels: 4, | ||
| 87 |       background: { r: 230, g: 230, b: 230, alpha: 1 } | ||
| 88 |     } | ||
| 89 |   }) | ||
| 90 |     .composite([{ | ||
| 91 |       input: Buffer.from(overlay), | ||
| 92 |       gravity: 'center', | ||
| 93 |     }]) | ||
| 94 |     .jpeg() | ||
| 95 |     .toBuffer(); | ||
| 96 | }; | ||
| 97 | ``` | ||
| 95 | |||
| 96 | That is about it. Nothing more to it. You can change the color of the image by | ||
| 97 | changing `background`, and if you want to change the text styling you can adapt | ||
| 98 | the SVG to your needs. | ||
| 99 | |||
| 100 | > Also be careful about the length of the text. This function positions the text | ||
| 101 | > at the center and leaves `10px` of padding on each side. If the text is wider | ||
| 102 | > than the image it will get cut off. | ||
diff --git a/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md b/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md new file mode 100644 index 0000000..efe88fa --- /dev/null +++ b/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md | |||
| @@ -0,0 +1,108 @@ | |||
| 1 | --- | ||
| 2 | title: The strange case of Elasticsearch allocation failure | ||
| 3 | url: the-strange-case-of-elasticsearch-allocation-failure.html | ||
| 4 | date: 2020-03-29T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | I've been using Elasticsearch in production for 5 years now and never had a | ||
| 10 | single problem with it. Hell, I never even knew there could be a problem. It | ||
| 11 | just worked. All this time. The first node that I deployed is still being used | ||
| 12 | in production; never updated, upgraded, or touched in any way. | ||
| 13 | |||
| 14 | All this bliss came to an abrupt end this Friday when I got a notification that | ||
| 15 | the Elasticsearch cluster went warm. Well, warm is not that bad, right? Wrong! | ||
| 16 | Quickly after that I got another email which sent chills down my spine. The | ||
| 17 | cluster is now red. RED! Now shit had really hit the fan! | ||
| 18 | |||
| 19 | I tried googling what the problem could be, and after querying the allocation | ||
| 20 | state I noticed that some shards were unassigned and 5 allocation attempts had | ||
| 21 | already been made (which is, BTW, to my luck the maximum), and that meant I was | ||
| 22 | basically fucked. The advice was also that one should wait for the cluster to | ||
| 23 | re-balance itself. So, I waited. One hour, two hours, several hours. Nothing, | ||
| 24 | still RED. | ||
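While waiting, the standard cluster health endpoint is the quickest way to keep an eye on the status and the number of unassigned shards (this is a stock Elasticsearch API, nothing AWS-specific):

```yaml
GET /_cluster/health
```

Among other things it returns `status` and `unassigned_shards`, which is what I kept refreshing.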
| 24 | |||
| 25 | The strangest thing about it all was that queries were still being fulfilled. | ||
| 26 | Data was coming out. On the outside it looked like nothing was wrong, but | ||
| 27 | anybody who looked at the cluster would know immediately that something was | ||
| 28 | very, very wrong and that we were living on borrowed time. | ||
| 29 | |||
| 30 | > **Please, DO NOT do what I did.** Seriously! Please ask someone on the | ||
| 31 | > official forums, or if you know an expert, consult them. There could be a | ||
| 32 | > million reasons, and this solution fit my problem. Maybe in your case it | ||
| 33 | > would be disastrous. I had all the data backed up, and even if I failed | ||
| 34 | > spectacularly I would be able to restore it. It would have been a huge pain | ||
| 35 | > and I would have lost a couple of days, but I had a plan B. | ||
| 36 | |||
| 37 | Querying the allocation state told me what the problem was, but offered no clear solution yet. | ||
| 38 | |||
| 39 | ```yaml | ||
| 40 | GET /_cat/allocation?format=json | ||
| 41 | ``` | ||
| 42 | |||
| 43 | I got a message saying `ALLOCATION_FAILED`, with the additional info `failed to | ||
| 44 | create shard, failure ioexception[failed to obtain in-memory shard lock]`. Well, | ||
| 45 | splendid! I must also say that our cluster is more than capable of handling the | ||
| 46 | traffic, and JVM memory pressure was never an issue. So what really happened | ||
| 47 | then? | ||
| 48 | |||
| 49 | I also tried re-routing the failed shards, with no success, due to AWS | ||
| 50 | restrictions on the managed Elasticsearch cluster (they lock down some of the | ||
| 51 | functions). | ||
| 51 | |||
| 52 | ```yaml | ||
| 53 | POST /_cluster/reroute?retry_failed=true | ||
| 54 | ``` | ||
| 55 | |||
| 56 | I got a message that significantly reduced my options. | ||
| 57 | |||
| 58 | ```json | ||
| 59 | { | ||
| 60 | "Message": "Your request: '/_cluster/reroute' is not allowed." | ||
| 61 | } | ||
| 62 | ``` | ||
| 63 | |||
| 64 | After that I went on the hunt again. I won't bother you with all the details, | ||
| 65 | because hours and days went by until I was finally able to re-index the | ||
| 66 | problematic index and hope for the best. Until that moment even re-indexing was | ||
| 67 | giving me errors. | ||
| 68 | |||
| 69 | ```yaml | ||
| 70 | POST _reindex | ||
| 71 | { | ||
| 72 | "source": { | ||
| 73 | "index": "myindex" | ||
| 74 | }, | ||
| 75 | "dest": { | ||
| 76 | "index": "myindex-new" | ||
| 77 | } | ||
| 78 | } | ||
| 79 | ``` | ||
| 80 | |||
| 81 | I needed to do this multiple times to get all the documents re-indexed. Then I | ||
| 82 | dropped the original one with the following command. | ||
| 83 | |||
| 84 | ```yaml | ||
| 85 | DELETE /myindex | ||
| 86 | ``` | ||
| 87 | |||
| 88 | And then I re-indexed the new one back into the original (well, by name only). | ||
| 89 | |||
| 90 | ```yaml | ||
| 91 | POST _reindex | ||
| 92 | { | ||
| 93 | "source": { | ||
| 94 | "index": "myindex-new" | ||
| 95 | }, | ||
| 96 | "dest": { | ||
| 97 | "index": "myindex" | ||
| 98 | } | ||
| 99 | } | ||
| 100 | ``` | ||
| 101 | |||
| 102 | On the surface it looks like everything is working, but I have a long road | ||
| 103 | ahead of me to get everything running again. The cluster now shows Green, but I | ||
| 104 | am also getting a notification that the cluster has a processing status, which | ||
| 105 | could mean a million things. | ||
| 106 | |||
| 107 | Godspeed! | ||
| 108 | |||
diff --git a/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md b/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md new file mode 100644 index 0000000..dec6f8d --- /dev/null +++ b/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md | |||
| @@ -0,0 +1,111 @@ | |||
| 1 | --- | ||
| 2 | title: My love and hate relationship with Node.js | ||
| 3 | url: my-love-and-hate-relationship-with-nodejs.html | ||
| 4 | date: 2020-03-30T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | The previous project I worked on was coded in | ||
| 10 | [Golang](https://golang.org/). It was also my first project using it. And damn, | ||
| 11 | that was an awesome experience. The whole thing is just superb: from how errors | ||
| 12 | are handled, to the C-like way you handle compiling, to the way the language is | ||
| 13 | structured, making it incredibly versatile and easy to learn. | ||
| 14 | |||
| 15 | It may cause some pain for somebody who is not used to using interfaces to map | ||
| 16 | JSON and to doing the recompilation all the time. But we have tools like | ||
| 17 | [entr](http://eradman.com/entrproject/) and | ||
| 18 | [make](https://www.gnu.org/software/make/) to fix that. | ||
| 19 | |||
| 20 | But we are not here to talk about my undying love for **Golang**. Though in | ||
| 21 | some ways we probably should. It is an excellent example of how a modern | ||
| 22 | language should be designed. And because I have used it extensively in the last | ||
| 23 | couple of years, it probably taints my views of other languages and does me a | ||
| 24 | great disservice. Nevertheless, here we are. | ||
| 25 | |||
| 26 | About two years ago I started flirting with [Node.js](https://nodejs.org/en/) | ||
| 27 | for a project I started working on. What I wanted was to have things written in | ||
| 28 | a language that is widely used and that we could get additional developers for. | ||
| 29 | As much as **Golang** is amazing, it's really hard to get developers for it. | ||
| 30 | Even now. And after playing around with Node.js for a week I fell in love with | ||
| 31 | the speed of iteration and the massive package ecosystem. Do you want SSO? You | ||
| 32 | got it! Do you want some esoteric library for something? There is a strong | ||
| 33 | chance somebody wrote it. It is so extensive that you find yourself evaluating | ||
| 34 | packages based on **GitHub stars** and the number of contributors. You get | ||
| 35 | swallowed by the vanity metrics, and that could become the downfall of Node.js. | ||
| 36 | |||
| 37 | Because of the sheer amount of choice I often got anxiety when choosing | ||
| 38 | libraries. Will I choose the correct one? Is this library something that will | ||
| 39 | be supported for the foreseeable future or not? I am used to using libraries | ||
| 40 | that have been in development for 10+ years (Python, C), and that gave me some | ||
| 41 | sort of comfort. It is probably unfair to Node.js and its community to expect | ||
| 42 | the same dedication. | ||
| 43 | |||
| 44 | Moving forward ... Work started and things were great. **The speed of iteration | ||
| 45 | was insane**. A feature that would take me a day in Golang only took an hour or | ||
| 46 | two. I became lazy! Using packages all over the place. Falling into the same | ||
| 47 | trap as others. Packages on top of packages. And [npm](https://www.npmjs.com/) | ||
| 48 | didn't help at all. The way the package manager works is just horrendous. And | ||
| 49 | not allowing node_modules to live outside the project is also the stupidest | ||
| 50 | idea ever. | ||
| 51 | |||
| 52 | So at that point I started feeling the technical debt that comes with Node.js | ||
| 53 | and the whole ecosystem. What nobody tells you is that **structuring large | ||
| 54 | Node.js apps** is more problematic than one would think. And going microservice | ||
| 55 | for every single thing is also a bad idea. The amount of networking you | ||
| 56 | introduce with that approach always ends up being a pain in the ass. And I don't | ||
| 57 | even want to go into system administration here. The overhead is | ||
| 58 | insane. `package-lock.json` made many of my days feel like a living hell. And | ||
| 59 | I would eat the cost of all this if it meant a better development experience. | ||
| 60 | Well, it didn't. | ||
| 61 | |||
| 62 | The **lack of TypeScript** support in the interpreter is still mind-boggling | ||
| 63 | to me. Why they haven't added native support for it yet is beyond me. That | ||
| 64 | would have solved so many problems. The lack of type safety became a problem | ||
| 65 | somewhere in the middle of the project, once the codebase was large enough to | ||
| 66 | cause trouble. We kept adding arguments to functions and there was **no way to | ||
| 67 | explicitly define argument types**. And because at that point there were a lot | ||
| 68 | of functions, it became impossible to know what each one accepts; development | ||
| 69 | became more and more trial-and-error based. | ||
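One lightweight mitigation, which we did not adopt at the time (so treat this as a hypothetical sketch), is JSDoc type annotations: editors and `tsc --checkJs` understand them without introducing a transpile step:

```javascript
/**
 * @param {string} topic - topic the message belongs to
 * @param {{ name: string }} message - payload to publish
 * @returns {boolean} whether the arguments look valid
 */
function isPublishable(topic, message) {
  return typeof topic === 'string' && topic.length > 0 &&
    message !== null && typeof message === 'object';
}

console.log(isPublishable('sample-topic', { name: 'John' })); // true
```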
| 70 | |||
| 71 | I tried **implementing TypeScript**, but that would have meant a large | ||
| 72 | refactor that we were not willing to do at that point. The benefits were not | ||
| 73 | enough. I also tried [Flow - static type checker](https://flow.org/), but that | ||
| 74 | implementation was also horrible. What TypeScript and Flow force you to do is | ||
| 75 | keep a src folder, **transpile** your code into a dist folder, and run that | ||
| 76 | with node. What is that all about? Why can't this be done in memory or in some | ||
| 77 | virtual file system? I see no reason why it couldn't. But it is what it is. I | ||
| 78 | abandoned all hope for static type checking. | ||
| 79 | |||
| 80 | One of the problems that resulted from not having interfaces or types was the | ||
| 81 | inability to model our data from **Elasticsearch**. I could have done a | ||
| 82 | **pedestrian implementation** of it, but there must be a better way of doing | ||
| 83 | this without resorting to what is basically a hack. Or maybe I just haven't | ||
| 84 | found the solution, which is also a possibility. I have looked, though. No juice! | ||
| 85 | |||
| 86 | **Error handling?** Is that a joke? | ||
| 87 | |||
| 88 | Thank god for **async/await**. Without it, I would have probably just abandoned | ||
| 89 | the whole thing and gone with something else like Python. That's all I am going | ||
| 90 | to say about this :) | ||
| 91 | |||
| 92 | I started asking myself whether Node.js is actually ready to be used in | ||
| 93 | **large-scale applications**. And that was totally the wrong question. What I | ||
| 94 | should have been asking myself was how to use Node.js in a large-scale | ||
| 95 | application. And you don't get this in the **marketing material** for Express | ||
| 96 | or Koa etc. They never tell you. Making Node.js scale, in infrastructure or in | ||
| 97 | codebase, is really **more of an art than a science**. And it's just like the | ||
| 98 | whole JavaScript ecosystem: | ||
| 99 | |||
| 100 | - impossible to master, | ||
| 101 | - half your time goes into working on your tooling, | ||
| 102 | - you just have to accept transpilers that convert one kind of code into another (holy smokes), | ||
| 103 | - error handling is a joke, | ||
| 104 | - standards? What standards? | ||
| 105 | |||
| 106 | But on the other hand, as I did, you will also learn to love it. You learn to | ||
| 107 | use it quickly and to do impossible things in crazy-limited time. | ||
| 108 | |||
| 109 | I hate to admit it. But I love Node.js. Dammit, I love it :) | ||
| 110 | |||
| 111 | 2023 Update: I hate Node.js! | ||
diff --git a/content/posts/2020-05-05-remote-work.md b/content/posts/2020-05-05-remote-work.md new file mode 100644 index 0000000..905d169 --- /dev/null +++ b/content/posts/2020-05-05-remote-work.md | |||
| @@ -0,0 +1,72 @@ | |||
| 1 | --- | ||
| 2 | title: Remote work and how it affects the daily lives of people | ||
| 3 | url: remote-work.html | ||
| 4 | date: 2020-05-05T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | I have been working remotely for the past 5 years. I love it. I love the | ||
| 10 | freedom and the make-your-own-schedule thing. | ||
| 11 | |||
| 12 | ## You work more not less | ||
| 13 | |||
| 14 | I've heard things from people like: "Oh, you are so lucky, working from home, | ||
| 15 | having all the free time you want." It was obvious they had no clue what | ||
| 16 | working remotely means. They had this romantic idea of remote work: you can | ||
| 17 | watch TV whenever you like, you can go outside for a picnic if you want, and | ||
| 18 | so on. | ||
| 19 | |||
| 20 | This may be true if you work a day or two a week from home. But if you go | ||
| 21 | completely remote, all of this changes. It takes some time to acclimate, but | ||
| 22 | then you start feeling the consequences of going fully remote. And it's not | ||
| 23 | all rainbows and unicorns. Rather the opposite. | ||
| 24 | |||
| 25 | ## Feeling lost | ||
| 26 | |||
| 27 | At first, I remember I felt lost. I was not used to this kind of environment. | ||
| 28 | I felt disoriented, and the part of you that is used to procrastinating turns | ||
| 29 | on. You start thinking of the workday as a whole day. And soon the idea of "I | ||
| 30 | can do this later" starts creeping in. Well, I have the whole day ahead of me; | ||
| 31 | I can do this a bit later. | ||
| 32 | |||
| 33 | ## Hyper-performance | ||
| 34 | |||
| 35 | As a direct result, you become more focused on your work since you don't have | ||
| 36 | all the interruptions common in the workplace. And you can quickly get used to | ||
| 37 | this hyper-performance. But this mode also requires a lot of peace and quiet. | ||
| 38 | |||
| 39 | And here we come to the ugly part of all this. **People rarely have the | ||
| 40 | self-control** not to waste other people's time. It is paralyzing when people | ||
| 41 | start calling you, sending you chat messages, etc. The thing is, when I reach | ||
| 42 | this hyper-performance mode I am completely engrossed in the problem I am | ||
| 43 | solving, and these kinds of interruptions mess with your head. I need at least | ||
| 44 | an hour to get back in the zone, and sometimes I never reach the same focus | ||
| 45 | again that day. | ||
| 46 | |||
| 47 | I know that life is not how you want it to be and takes its own route, but | ||
| 48 | from what I've learned, these interruptions can easily be avoided in 90% of | ||
| 49 | cases just by closing any chat programs and putting your phone in a drawer. | ||
| 50 | |||
| 51 | ## Suggestion to all the new remote workers | ||
| 52 | |||
| 53 | - Stop wasting other people's time. You don't bother people at their desks in | ||
| 54 | the office either. | ||
| 55 | - Do not replace daily chats in the hallways with instant messaging software. | ||
| 56 | It will only interrupt people. Nothing good will come of it. | ||
| 57 | - Set your working hours, try not to let them bleed outside these | ||
| 58 | boundaries, and maintain your routine. | ||
| 59 | - Be prepared that hours will be longer regardless of your good intentions and | ||
| 60 | your well-thought-out routine. | ||
| 61 | - Try to be hyper-focused and do only one thing at a time. Multitasking is the | ||
| 62 | enemy of progress. | ||
| 63 | - Avoid long meetings and, if possible, eliminate them. Rather, take time to | ||
| 64 | write things out and allow others to respond in their own time. Meetings are | ||
| 65 | usually a large waste of time and most of the people attending are there just | ||
| 66 | because the manager said so. | ||
| 67 | - The software will not solve your problems. Neither will throwing money at | ||
| 68 | them. | ||
| 69 | - If you are in a managerial position, don't supervise every single minute of | ||
| 70 | your workers' time. They are probably giving you more hours anyway. Track | ||
| 71 | progress weekly, not daily. You hired them, so give them the benefit of the | ||
| 72 | doubt that they will deliver what you agreed upon. | ||
diff --git a/content/posts/2020-08-15-systemd-disable-wake-onmouse.md b/content/posts/2020-08-15-systemd-disable-wake-onmouse.md new file mode 100644 index 0000000..8f411d6 --- /dev/null +++ b/content/posts/2020-08-15-systemd-disable-wake-onmouse.md | |||
| @@ -0,0 +1,73 @@ | |||
| 1 | --- | ||
| 2 | title: Disable mouse wake from suspend with systemd service | ||
| 3 | url: disable-mouse-wake-from-suspend-with-systemd-service.html | ||
| 4 | date: 2020-08-15T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | I recently bought a [ThinkPad | ||
| 10 | X220](https://www.laptopmag.com/reviews/laptops/lenovo-thinkpad-x220) just as a | ||
| 11 | joke on eBay, to test Linux distributions and play around with things without | ||
| 12 | destroying my main machine. Little did I know I would fall in love with it. | ||
| 13 | Man, they really made awesome machines back then. | ||
| 14 | |||
| 15 | After swapping the disk that came with it for an SSD and installing Ubuntu to | ||
| 16 | test if everything works, I noticed that even a single touch of my external | ||
| 17 | mouse would wake the system from sleep even though the lid was closed. | ||
| 18 | |||
| 19 | I wouldn't even have noticed it if the laptop didn't have an [LED | ||
| 20 | sleep indicator](https://support.lenovo.com/lk/en/solutions/~/media/Images/ContentImages/p/pd025386_x1_status_03.ashx?w=426&h=262). | ||
| 21 | I already had a bad experience with Linux and its power management. I had a | ||
| 22 | [Dell Inspiron 7537](https://www.pcmag.com/reviews/dell-inspiron-15-7537) laptop | ||
| 23 | with a touchscreen, and while traveling it decided to wake up and started | ||
| 24 | cooking in my backpack to the point that the touch digitizer actually came | ||
| 25 | unglued and the whole screen got wrecked. So, I am a bit touchy about this. | ||
| 26 | |||
| 27 | I went on a solution hunt and to my surprise there is no easy way to prevent | ||
| 28 | specific devices from waking the machine. Why this is not under the power | ||
| 29 | management tab in settings is really strange. | ||
| 30 | |||
| 31 | After googling for a solution I found [this nice article describing the | ||
| 32 | solution](https://codetrips.com/2020/03/18/ubuntu-disable-mouse-wake-from-suspend/) | ||
| 33 | that worked for me. The only problem was that the author added his fix to | ||
| 34 | `.bashrc`, which triggers `sudo` to ask for a password each time a new | ||
| 35 | terminal is opened. That gets annoying quickly since I open a lot of | ||
| 36 | terminals all the time. | ||
| 37 | |||
| 38 | I followed his instructions and arrived at the command `sudo sh -c "echo 'disabled' > | ||
| 39 | /sys/bus/usb/devices/2-1.1/power/wakeup"`. | ||
| 40 | |||
| 41 | I created a systemd service file with `sudo nano | ||
| 42 | /etc/systemd/system/disable-mouse-wakeup.service`, removed `sudo`, replaced | ||
| 43 | `sh` with `/usr/bin/sh`, and pasted it all into `ExecStart`. | ||
| 44 | |||
| 45 | ```ini | ||
| 46 | [Unit] | ||
| 47 | Description=Disables wakeup on mouse event | ||
| 48 | After=network.target | ||
| 49 | StartLimitIntervalSec=0 | ||
| 50 | |||
| 51 | [Service] | ||
| 52 | Type=simple | ||
| 53 | Restart=always | ||
| 54 | RestartSec=1 | ||
| 55 | User=root | ||
| 56 | ExecStart=/usr/bin/sh -c "echo 'disabled' > /sys/bus/usb/devices/2-1.1/power/wakeup" | ||
| 57 | |||
| 58 | [Install] | ||
| 59 | WantedBy=multi-user.target | ||
| 60 | ``` | ||
| 61 | |||
| 62 | After that I enabled, started, and checked the status of the service. | ||
| 63 | |||
| 64 | ```sh | ||
| 65 | sudo systemctl enable disable-mouse-wakeup.service | ||
| 66 | sudo systemctl start disable-mouse-wakeup.service | ||
| 67 | sudo systemctl status disable-mouse-wakeup.service | ||
| 68 | ``` | ||
| 69 | |||
| 70 | This service runs at boot and permanently prevents that device from waking up | ||
| 71 | your computer. If you have many devices you would like to suppress from waking | ||
| 72 | up your machine, I would create a shell script and call that instead of doing | ||
| 73 | it directly in the service file. | ||
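Such a helper script could look roughly like this. This is a minimal sketch under my own assumptions: the device IDs are examples (list candidates with `grep -l enabled /sys/bus/usb/devices/*/power/wakeup`), and the `WAKEUP_BASE` override exists only so the logic can be exercised outside of real sysfs. Run it as root, e.g. from the unit's `ExecStart`.

```shell
#!/bin/sh
# disable-wakeup.sh — write 'disabled' to the wakeup toggle of each
# USB device ID passed as an argument, e.g.: disable-wakeup.sh 2-1.1 2-1.2
# (device IDs are examples; substitute your own)
BASE="${WAKEUP_BASE:-/sys/bus/usb/devices}"

disable_wakeup() {
    for dev in "$@"; do
        f="$BASE/$dev/power/wakeup"
        # only write if the toggle exists and is writable (needs root on sysfs)
        if [ -w "$f" ]; then
            echo disabled > "$f"
        fi
    done
}

disable_wakeup "$@"
```

The unit's `ExecStart` would then point at this script instead of the inline `sh -c` command.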
diff --git a/content/posts/2020-09-06-esp-and-micropython.md b/content/posts/2020-09-06-esp-and-micropython.md new file mode 100644 index 0000000..6a2d5fe --- /dev/null +++ b/content/posts/2020-09-06-esp-and-micropython.md | |||
| @@ -0,0 +1,226 @@ | |||
| 1 | --- | ||
| 2 | title: Getting started with MicroPython and ESP8266 | ||
| 3 | url: esp8266-and-micropython-guide.html | ||
| 4 | date: 2020-09-06T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## Introduction | ||
| 10 | |||
| 11 | A while ago I bought some | ||
| 12 | [ESP8266](https://www.espressif.com/en/products/socs/esp8266) and | ||
| 13 | [ESP32](https://www.espressif.com/en/products/socs/esp32) dev boards to play | ||
| 14 | around with and I finally found a project to try it out. | ||
| 15 | |||
| 16 | For my project, I used [ESP32](https://www.espressif.com/en/products/socs/esp32) | ||
| 17 | but could just as easily have chosen | ||
| 18 | [ESP8266](https://www.espressif.com/en/products/socs/esp8266). This guide | ||
| 19 | covers which tools I use and how I prepared my workspace to code for | ||
| 20 | [ESP8266](https://www.espressif.com/en/products/socs/esp8266). | ||
| 21 | |||
| 22 |  | ||
| 23 | |||
| 24 | This guide covers: | ||
| 25 | |||
| 26 | - flashing the SOC | ||
| 27 | - installing proper tooling | ||
| 28 | - deploying a simple script | ||
| 29 | |||
| 30 | > Make sure that you are using **a good USB cable**. I had some problems with | ||
| 31 | mine and once I replaced it everything started to work. | ||
| 32 | |||
| 33 | ## Flashing the SOC | ||
| 34 | |||
| 35 | Plug your ESP8266 into a USB port and check that the device was recognized by | ||
| 36 | executing `dmesg | grep ch341-uart`. | ||
| 37 | |||
| 38 | Then check if the device is available under `/dev/` by running `ls | ||
| 39 | /dev/ttyUSB*`. | ||
| 40 | |||
| 41 | > **Linux users**: if the device is not available, be sure you are in the | ||
| 42 | > `dialout` group. You can check this by executing `groups $USER` and add a | ||
| 43 | > user to the `dialout` group with `sudo adduser $USER dialout`. | ||
| 44 | |||
| 45 | After these conditions are met, navigate to | ||
| 46 | [https://micropython.org/download/esp8266/](https://micropython.org/download/esp8266/) | ||
| 47 | and download `esp8266-20200902-v1.13.bin`. | ||
| 48 | |||
| 49 | ```sh | ||
| 50 | mkdir esp8266-test | ||
| 51 | cd esp8266-test | ||
| 52 | |||
| 53 | wget https://micropython.org/resources/firmware/esp8266-20200902-v1.13.bin | ||
| 54 | ``` | ||
| 55 | |||
| 56 | After obtaining the firmware we will need some tooling to flash it to the | ||
| 57 | board. | ||
| 58 | |||
| 59 | ```sh | ||
| 60 | sudo pip3 install esptool | ||
| 61 | ``` | ||
| 62 | |||
| 63 | You can read more about `esptool` at | ||
| 64 | [https://github.com/espressif/esptool/](https://github.com/espressif/esptool/). | ||
| 65 | |||
| 66 | Before flashing the firmware we need to erase the flash on the device. Substitute | ||
| 67 | `USB0` with the device listed in output of `ls /dev/ttyUSB*`. | ||
| 68 | |||
| 69 | ```sh | ||
| 70 | esptool.py --port /dev/ttyUSB0 erase_flash | ||
| 71 | ``` | ||
| 72 | |||
| 73 | If the flash was successfully erased, it is now time to write the new firmware. | ||
| 74 | |||
| 75 | ```sh | ||
| 76 | esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash --flash_size=detect 0 esp8266-20200902-v1.13.bin | ||
| 77 | ``` | ||
| 78 | |||
| 79 | If everything went OK you can try accessing the MicroPython REPL with `screen | ||
| 80 | /dev/ttyUSB0 115200` or `picocom /dev/ttyUSB0 -b115200`. | ||
| 81 | |||
| 82 | > Sometimes you will need to press `ENTER` in `screen` or `picocom` to access | ||
| 83 | > REPL. | ||
| 84 | |||
| 85 | When you are in the REPL you can test that all is working with the following steps. | ||
| 86 | |||
| 87 | ```py | ||
| 88 | >>> import machine | ||
| 89 | >>> machine.freq() | ||
| 90 | ``` | ||
| 91 | |||
| 92 | This should output a number representing the frequency of the CPU (mine was | ||
| 93 | `80000000`). | ||
| 94 | |||
| 95 | When you are in `screen` or `picocom` these shortcuts can help you a bit. | ||
| 96 | |||
| 97 | | Key | Command | | ||
| 98 | | -------- | -------------------- | | ||
| 99 | | CTRL+d | performs soft reboot | | ||
| 100 | | CTRL+a x | exits picocom | | ||
| 101 | | CTRL+a \ | exits screen | | ||
| 102 | |||
| 103 | |||
| 104 | ## Install better tooling | ||
| 105 | |||
| 106 | Now, to make our lives a little bit easier, there are a couple of additional tools | ||
| 107 | that will make this whole experience a little more bearable. | ||
| 108 | |||
| 109 | There are two cool ways of uploading local files to SOC flash. | ||
| 110 | |||
| 111 | - ampy → [https://github.com/scientifichackers/ampy](https://github.com/scientifichackers/ampy) | ||
| 112 | - rshell → [https://github.com/dhylands/rshell](https://github.com/dhylands/rshell) | ||
| 113 | |||
| 114 | ### ampy | ||
| 115 | |||
| 116 | ```bash | ||
| 117 | # installing ampy | ||
| 118 | sudo pip3 install adafruit-ampy | ||
| 119 | ``` | ||
| 120 | |||
| 121 | Listed below are some common commands I used. | ||
| 122 | |||
| 123 | ```bash | ||
| 124 | |||
| 125 | # uploads file to flash | ||
| 126 | ampy --delay 2 --port /dev/ttyUSB0 put boot.py | ||
| 127 | |||
| 128 | # lists file on flash | ||
| 129 | ampy --delay 2 --port /dev/ttyUSB0 ls | ||
| 130 | |||
| 131 | # outputs contents of file on flash | ||
| 132 | ampy --delay 2 --port /dev/ttyUSB0 cat boot.py | ||
| 133 | ``` | ||
| 134 | |||
| 135 | > I added `delay` of 2 seconds because I had problems with executing commands. | ||
| 136 | |||
| 137 | ### rshell | ||
| 138 | |||
| 139 | Even though `ampy` is a cool tool, I opted for `rshell` in the end since it's | ||
| 140 | much more polished and feature-rich. | ||
| 141 | |||
| 142 | ```bash | ||
| 143 | # installing rshell | ||
| 144 | sudo pip3 install rshell | ||
| 145 | ``` | ||
| 146 | |||
| 147 | Now that `rshell` is installed we can connect to the board. | ||
| 148 | |||
| 149 | ```bash | ||
| 150 | rshell --buffer-size=30 -p /dev/ttyUSB0 -a | ||
| 151 | ``` | ||
| 152 | |||
| 153 | This will open a shell inside your terminal, and from here you can execute | ||
| 154 | multiple commands. You can check what is supported with `help` once you are | ||
| 155 | inside the shell. | ||
| 156 | |||
| 157 | ```bash | ||
| 158 | m@turing ~/Junk/esp8266-test | ||
| 159 | $ rshell --buffer-size=30 -p /dev/ttyUSB0 -a | ||
| 160 | |||
| 161 | Using buffer-size of 30 | ||
| 162 | Connecting to /dev/ttyUSB0 (buffer-size 30)... | ||
| 163 | Trying to connect to REPL connected | ||
| 164 | Testing if ubinascii.unhexlify exists ... Y | ||
| 165 | Retrieving root directories ... /boot.py/ | ||
| 166 | Setting time ... Sep 06, 2020 23:54:28 | ||
| 167 | Evaluating board_name ... pyboard | ||
| 168 | Retrieving time epoch ... Jan 01, 2000 | ||
| 169 | Welcome to rshell. Use Control-D (or the exit command) to exit rshell. | ||
| 170 | /home/m/Junk/esp8266-test> help | ||
| 171 | |||
| 172 | Documented commands (type help <topic>): | ||
| 173 | ======================================== | ||
| 174 | args cat connect date edit filesize help mkdir rm shell | ||
| 175 | boards cd cp echo exit filetype ls repl rsync | ||
| 176 | |||
| 177 | Use Control-D (or the exit command) to exit rshell. | ||
| 178 | ``` | ||
| 179 | |||
| 180 | > Inside the shell, `ls` will display the list of files on your machine. The | ||
| 181 | > flash storage is remapped to the `/pyboard` folder inside the shell, so to | ||
| 182 | > list files on flash you must run `ls /pyboard`. | ||
| 183 | |||
| 184 | #### Moving files to flash | ||
| 185 | |||
| 186 | To avoid copying files all the time I used the `rsync` command from inside | ||
| 187 | `rshell`. | ||
| 188 | |||
| 189 | ```bash | ||
| 190 | rsync . /pyboard | ||
| 191 | ``` | ||
| 192 | |||
| 193 | #### Executing scripts | ||
| 194 | |||
| 195 | It is a pain to continuously reboot the device to trigger `/pyboard/boot.py`, | ||
| 196 | and there is a better way of testing local scripts on the remote device. | ||
| 197 | |||
| 198 | Let's assume we have a `src/freq.py` file that displays the CPU frequency of | ||
| 199 | the remote device. | ||
| 200 | |||
| 201 | ```py | ||
| 202 | # src/freq.py | ||
| 203 | |||
| 204 | import machine | ||
| 205 | print(machine.freq()) | ||
| 206 | ``` | ||
| 207 | |||
| 208 | Now let's upload this and execute it. | ||
| 209 | |||
| 210 | ```bash | ||
| 211 | # syncs files to the remote device | ||
| 212 | rsync ./src /pyboard | ||
| 213 | |||
| 214 | # goes into REPL | ||
| 215 | repl | ||
| 216 | |||
| 217 | # importing the module (without the .py extension) runs the script | ||
| 218 | >>> import freq | ||
| 219 | |||
| 220 | # CTRL+x will exit the REPL | ||
| 221 | ``` | ||
| 222 | |||
| 223 | ## Additional resources | ||
| 224 | |||
| 225 | - https://randomnerdtutorials.com/getting-started-micropython-esp32-esp8266/ | ||
| 226 | - http://docs.micropython.org/en/latest/esp8266/quickref.html | ||
diff --git a/content/posts/2020-09-08-bind-warning-on-login.md b/content/posts/2020-09-08-bind-warning-on-login.md new file mode 100644 index 0000000..f213cd9 --- /dev/null +++ b/content/posts/2020-09-08-bind-warning-on-login.md | |||
| @@ -0,0 +1,54 @@ | |||
| 1 | --- | ||
| 2 | title: Fix bind warning in .profile on login in Ubuntu | ||
| 3 | url: bind-warning-on-login-in-ubuntu.html | ||
| 4 | date: 2020-09-08T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | Recently I moved back to [bash](https://www.gnu.org/software/bash/) as my | ||
| 10 | default shell. I was previously using [fish](https://fishshell.com/) and got | ||
| 11 | used to the cool features it has. But, regardless of that, I wanted to move to a | ||
| 12 | more standard shell because hopping back and forth with exported variables | ||
| 13 | and stuff like that got pretty annoying. | ||
| 14 | |||
| 15 | So I embarked on a mission to make [bash](https://www.gnu.org/software/bash/) | ||
| 16 | more like [fish](https://fishshell.com/) and in the process found that I really | ||
| 17 | missed autosuggest with TAB on changing directories. | ||
| 18 | |||
| 19 | I found a nice alternative that emulates [zsh](http://zsh.sourceforge.net/)-like | ||
| 20 | autosuggestion and autocomplete so I added the following to my `.bashrc` file. | ||
| 21 | |||
| 22 | ```bash | ||
| 23 | bind "TAB:menu-complete" | ||
| 24 | bind "set show-all-if-ambiguous on" | ||
| 25 | bind "set completion-ignore-case on" | ||
| 26 | bind "set menu-complete-display-prefix on" | ||
| 27 | bind '"\e[Z":menu-complete-backward' | ||
| 28 | ``` | ||
| 29 | |||
| 30 | I haven't noticed anything wrong with this and all was working fine until I | ||
| 31 | restarted my machine and then I got this error. | ||
| 32 | |||
| 33 |  | ||
| 34 | |||
| 35 | When I pressed OK, I got into the [Gnome | ||
| 36 | shell](https://wiki.gnome.org/Projects/GnomeShell) and all was working fine, but | ||
| 37 | the error was still bugging me. I started looking for the reason why this is | ||
| 38 | happening and found a solution to this error on [Remote SSH Commands - bash bind | ||
| 39 | warning: line editing not enabled](https://superuser.com/a/892682). | ||
| 40 | |||
| 41 | So I added a simple `if [ -t 1 ]` check around the `bind` statements to avoid | ||
| 42 | running commands that presume the session is interactive when it isn't. | ||
| 43 | |||
| 44 | ```bash | ||
| 45 | if [ -t 1 ]; then | ||
| 46 | bind "TAB:menu-complete" | ||
| 47 | bind "set show-all-if-ambiguous on" | ||
| 48 | bind "set completion-ignore-case on" | ||
| 49 | bind "set menu-complete-display-prefix on" | ||
| 50 | bind '"\e[Z":menu-complete-backward' | ||
| 51 | fi | ||
| 52 | ``` | ||
| 53 | |||
| 54 | After logging out and back in, the problem was gone. | ||
diff --git a/content/posts/2020-09-09-digitalocean-sync.md b/content/posts/2020-09-09-digitalocean-sync.md new file mode 100644 index 0000000..e16b827 --- /dev/null +++ b/content/posts/2020-09-09-digitalocean-sync.md | |||
| @@ -0,0 +1,112 @@ | |||
| 1 | --- | ||
| 2 | title: Using Digitalocean Spaces to sync between computers | ||
| 3 | url: digitalocean-spaces-to-sync-between-computers.html | ||
| 4 | date: 2020-09-09T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | I've been using [Dropbox](https://www.dropbox.com/) for probably **10+ years** | ||
| 10 | now, and I've become so used to it running in the background that I can't even | ||
| 11 | imagine a world without it. But it's not without problems. | ||
| 12 | |||
| 13 | At first I had problems with `.venv` environments for Python: the only way to | ||
| 14 | exclude such a folder from synchronization was to exclude each specific | ||
| 15 | folder manually, which is not really scalable. FYI, my whole project folder is | ||
| 16 | synced to [Dropbox](https://www.dropbox.com/). This of course introduced a lot | ||
| 17 | of syncing of files and folders that are not needed or that even break things | ||
| 18 | on other machines. In the case of **Python**, I couldn't use a synced `.venv` | ||
| 19 | on my second machine. I needed to delete the `.venv` folder and pip install | ||
| 20 | again, which synced the files back to the main machine. This was very | ||
| 21 | frustrating. **Node.js** handles this much more nicely, and I can just run the | ||
| 22 | scripts without deleting `node_modules` and reinstalling. However, | ||
| 23 | `node_modules` is a beast of its own. It creates so many files that the OS has | ||
| 24 | a problem counting them when you check the folder contents for size. | ||
| 25 | |||
| 26 | I wanted something similar to Dropbox. I could do without the instant syncing, | ||
| 27 | but it would need to be fast and have the option to exclude folders like | ||
| 28 | `node_modules`, `.venv`, `.git` and the like. | ||
| 29 | |||
| 30 | I went on a hunt for an alternative to [Dropbox](https://www.dropbox.com/) | ||
| 31 | and found: | ||
| 32 | |||
| 33 | - [Tresorit](https://tresorit.com/) | ||
| 34 | - [Sync.com](https://sync.com) | ||
| 35 | - [Box](https://www.box.com/) | ||
| 36 | |||
| 37 | You know, the usual list of suspects. I didn't include [Google | ||
| 38 | Drive](https://drive.google.com) or [OneDrive](https://onedrive.live.com/) | ||
| 39 | since they are even more draconian than Dropbox. | ||
| 40 | |||
| 41 | > All this does not stem from me being paranoid, but recently these companies | ||
| 42 | > have become more and more aggressive, and they keep violating our privacy by | ||
| 43 | > sharing our data with 3rd party services. It is getting out of control. | ||
| 44 | |||
| 45 | So, my main problem was still there. No way of excluding a specific folder from | ||
| 46 | syncing. And before we go into "*But you have git, isn't that enough?*", I must | ||
| 47 | say, that many of the files (PDFs, spreadsheets, etc) I have in a `git` repo | ||
| 48 | don't get pushed upstream to Git and I still want to have them synced across my | ||
| 49 | computers. | ||
| 50 | |||
| 51 | I initially wanted to use [rsync](https://linux.die.net/man/1/rsync) but I would | ||
| 52 | then need a remote VPS or to transfer between my computers directly. I | ||
| 53 | wanted a solution where all my files would be accessible to me without my | ||
| 54 | machine. | ||
| 55 | |||
| 56 | > **WARNING: This solution will cost you money!** DigitalOcean Spaces are $5 per | ||
| 57 | month, there are some bandwidth limitations, and if you go beyond them you get | ||
| 58 | billed additionally. | ||
| 59 | |||
| 60 | Then I remembered that I could use something like | ||
| 61 | [S3](https://en.wikipedia.org/wiki/Amazon_S3) since it has versioning and is | ||
| 62 | fully managed. I didn't want to go down the AWS rabbit hole with this so I | ||
| 63 | chose [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/). | ||
| 64 | |||
| 65 | Then I needed a command-line tool to sync between source and target. I found | ||
| 66 | this nice tool [s3cmd](https://s3tools.org/s3cmd) and it is in the Ubuntu | ||
| 67 | repositories. | ||
| 68 | |||
| 69 | ```bash | ||
| 70 | sudo apt install s3cmd | ||
| 71 | ``` | ||
| 72 | |||
| 73 | After installation I created a new Spaces bucket on DigitalOcean. Remember | ||
| 74 | the region you choose because you will need it when you configure | ||
| 75 | `s3cmd`. | ||
| 76 | |||
| 77 | Then I visited [Digitalocean Applications & | ||
| 78 | API](https://cloud.digitalocean.com/account/api/tokens) and generated **Spaces | ||
| 79 | access keys**. Save both the key and the secret somewhere safe, because once | ||
| 80 | you leave the page the secret will not be available to you anymore and you | ||
| 81 | will need to re-generate it. | ||
| 82 | |||
| 83 | ```bash | ||
| 84 | # enter your key and secret and correct endpoint | ||
| 85 | # my endpoint is ams3.digitaloceanspaces.com because | ||
| 86 | # I created my bucket in the Amsterdam region | ||
| 87 | s3cmd --configure | ||
| 88 | ``` | ||
| 89 | |||
| 90 | After that I played around with options for `s3cmd` and got to the following | ||
| 91 | command. | ||
| 92 | |||
| 93 | ```bash | ||
| 94 | # I executed this command from my projects folder | ||
| 95 | cd projects | ||
| 96 | s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://my-bucket-name/projects/ | ||
| 97 | ``` | ||
| 98 | |||
| 99 | When syncing in the other direction you will need to change the order of the | ||
| 100 | `SOURCE` and `TARGET` to `s3://my-bucket-name/projects/` and `./`. | ||
| 101 | |||
| 102 | > Be sure that all the paths have a trailing slash so that sync knows these | ||
| 103 | > are directories. | ||
| 104 | |||
| 105 | I am planning to implement some sort of a `.ignore` file that will enable me to | ||
| 106 | have project-specific exclude options. | ||
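One possible shape for that idea, sketched under my own assumptions: the `.s3ignore` name and its one-glob-per-line format are made up, and `s3cmd` itself also ships an `--exclude-from FILE` option that may already cover much of this.

```shell
#!/bin/sh
# Turn each non-empty line of an ignore file (hypothetical name: .s3ignore)
# into an --exclude=PATTERN argument for s3cmd.
build_excludes() {
    while IFS= read -r pat; do
        [ -n "$pat" ] && printf -- "--exclude=%s " "$pat"
    done < "$1"
}

# Usage sketch:
#   EXCLUDES=$(build_excludes .s3ignore)
#   s3cmd sync --delete-removed $EXCLUDES ./ s3://my-bucket-name/projects/
```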
| 107 | |||
| 108 | I am currently running this every hour as a cronjob, which is perfectly fine | ||
| 109 | for now while I am testing how this whole thing works and how it turns out. | ||
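For reference, the crontab entry could look like this, assuming the sync command above is wrapped in a script (the path `~/bin/projects-sync.sh` is a made-up name for illustration):

```
# crontab -e — run the sync at the top of every hour
0 * * * * sh ~/bin/projects-sync.sh
```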
| 110 | |||
| 111 | I have also created a small Gnome extension which is still very unstable, but | ||
| 112 | when/if this whole experiment pays off I will share it on GitHub. | ||
diff --git a/content/posts/2021-01-24-replacing-dropbox-with-s3.md b/content/posts/2021-01-24-replacing-dropbox-with-s3.md new file mode 100644 index 0000000..a44a1aa --- /dev/null +++ b/content/posts/2021-01-24-replacing-dropbox-with-s3.md | |||
| @@ -0,0 +1,114 @@ | |||
| 1 | --- | ||
| 2 | title: Replacing Dropbox in favor of DigitalOcean spaces | ||
| 3 | url: replacing-dropbox-in-favor-of-digitalocean-spaces.html | ||
| 4 | date: 2021-01-24T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | A few months ago I experimented with DigitalOcean spaces as my backup solution | ||
| 10 | that could [replace Dropbox | ||
| 11 | eventually](/digitalocean-spaces-to-sync-between-computers.html). That solution | ||
| 12 | worked quite nicely, and I was amazed that smashing together a couple of | ||
| 13 | existing solutions could work this well. | ||
| 14 | |||
| 15 | I have been running that solution in the background for a couple of months now | ||
| 16 | and kind of forgot about it. But recent developments around deplatforming and | ||
| 17 | holding people hostage to technology and big companies sped up my goals to | ||
| 18 | become less dependent on | ||
| 19 | [Google](https://edition.cnn.com/2020/12/17/tech/google-antitrust-lawsuit/index.html), | ||
| 20 | [Dropbox](https://www.pcworld.com/article/2048680/dropbox-takes-a-peek-at-files.html) | ||
| 21 | etc., and to take back some control. | ||
| 22 | |||
| 23 | I am not a conspiracy theory nut, but to be honest, what these companies are | ||
| 24 | doing lately is out of control. It is a matter of principle at this point. I | ||
| 25 | have almost completely degoogled my life all the way from ditching Gmail, | ||
| 26 | YouTube and most of the services surrounding Google. And I must tell you, I feel | ||
| 27 | so good. I haven't felt this way for a long time. | ||
| 28 | |||
| 29 | **Anyways. Let's get to the meat of things.** | ||
| 30 | |||
| 31 | Before you continue you should read my post about [syncing to | ||
| 32 | Dropbox](/digitalocean-spaces-to-sync-between-computers.html). | ||
| 33 | |||
| 34 | > Also to note, I am using Linux on my machine with Gnome desktop environment. | ||
| 35 | This should work on MacOS too. To use this on Windows I suggest using | ||
| 36 | [Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10) | ||
| 37 | or [Cygwin](https://www.cygwin.com/). | ||
| 38 | |||
| 39 | ## Folder structure | ||
| 40 | |||
| 41 | I liked the structure of Dropbox: one folder where everything is located and | ||
| 42 | synced. That's why I adopted it for my sync setup too. | ||
| 43 | |||
| 44 | ``` | ||
| 45 | ~/Vault | ||
| 46 | ↳ backup | ||
| 47 | ↳ bin | ||
| 48 | ↳ documents | ||
| 49 | ↳ projects | ||
| 50 | ``` | ||
| 51 | |||
| 52 | All of my code is located in `~/Vault/projects` folder. And most of the projects | ||
| 53 | are Git repositories. I do not use this sync method for backup per se, but in | ||
| 54 | case I reinstall my machine I can easily recreate all the important folder | ||
| 55 | structure with one quick command. No external drives that can fail, etc. | ||
| 56 | |||
| 57 | ## Sync script | ||
| 58 | |||
| 59 | My sync script is located in `~/Vault/bin/vault-backup.sh` | ||
| 60 | |||
| 61 | ```bash | ||
| 62 | #!/bin/bash | ||
| 63 | |||
| 64 | # dconf load /com/gexperts/Tilix/ < tilix.dconf | ||
| 65 | # 0 2 * * * sh ~/Vault/bin/vault-backup.sh | ||
| 66 | |||
| 67 | cd ~/Vault/backup/dotfiles | ||
| 68 | |||
| 69 | MACHINE=$(whoami)@$(hostname) | ||
| 70 | mkdir -p $MACHINE | ||
| 71 | cd $MACHINE | ||
| 72 | |||
| 73 | cp ~/.config/VSCodium/User/settings.json settings.json | ||
| 74 | cp ~/.s3cfg s3cfg | ||
| 75 | cp ~/.bash_extended bash_extended | ||
| 76 | cp ~/.ssh ssh -rf | ||
| 77 | |||
| 78 | codium --list-extensions > vscode-extension.txt | ||
| 79 | dconf dump /com/gexperts/Tilix/ > tilix.dconf | ||
| 80 | |||
| 81 | cd ~/Vault | ||
| 82 | s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://bucket-name/backup/ | ||
| 83 | |||
| 84 | echo `date +"%D %T"` >> ~/.vault.log | ||
| 85 | |||
| 86 | notify-send \ | ||
| 87 | -u normal \ | ||
| 88 | -i /usr/share/icons/Adwaita/96x96/status/security-medium-symbolic.symbolic.png \ | ||
| 89 | "Vault sync succeeded at `date +"%D %T"`" | ||
| 90 | ``` | ||
| 91 | |||
| 92 | This script also backs up some of the dotfiles I use and sends a notification | ||
| 93 | to the Gnome notification center. It is a straightforward solution. Nothing | ||
| 94 | special going on. | ||
| 95 | |||
| 96 | > One obvious benefit of this is that I can omit syncing Node's `node_modules` | ||
| 97 | > or Python's `.venv` and `.git` folders. | ||
| 98 | |||
| 99 | You can use this script in a combination with [Cron](https://en.wikipedia.org/wiki/Cron). | ||
| 100 | |||
| 101 | ``` | ||
| 102 | 0 2 * * * sh ~/Vault/bin/vault-backup.sh | ||
| 103 | ``` | ||
| 104 | |||
| 105 | When you start syncing your local stuff with a remote server you can review your | ||
| 106 | items on DigitalOcean. | ||
| 107 | |||
| 108 |  | ||
| 109 | |||
| 110 | I have been using this script now for quite some time, and it's working | ||
| 111 | flawlessly. I also uninstalled Dropbox and stopped using it completely. | ||
| 112 | |||
| 113 | All I need to do is write a Bash script that does the reverse and downloads | ||
| 114 | from the remote server to the local folder. This could be another post. | ||
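A rough sketch of what that reverse script could look like, using the same `bucket-name` placeholder as above; the `S3CMD` variable is overridable (e.g. `S3CMD=echo`) purely so the assembled command can be inspected without credentials:

```shell
#!/bin/sh
# vault-restore.sh — pull the remote bucket back down into a local folder.
S3CMD="${S3CMD:-s3cmd}"

restore_vault() {
    target="$1"
    mkdir -p "$target"
    # SOURCE and TARGET are simply swapped relative to vault-backup.sh
    $S3CMD sync "s3://bucket-name/backup/" "$target/"
}

# run only when a target directory is given: sh vault-restore.sh "$HOME/Vault"
if [ $# -gt 0 ]; then
    restore_vault "$1"
fi
```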
diff --git a/content/posts/2021-01-25-goaccess.md b/content/posts/2021-01-25-goaccess.md new file mode 100644 index 0000000..0eb2461 --- /dev/null +++ b/content/posts/2021-01-25-goaccess.md | |||
| @@ -0,0 +1,203 @@ | |||
| 1 | --- | ||
| 2 | title: Using GoAccess with Nginx to replace Google Analytics | ||
| 3 | url: using-goaccess-with-nginx-to-replace-google-analytics.html | ||
| 4 | date: 2021-01-25T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## Introduction | ||
| 10 | |||
| 11 | I know! You cannot simply replace Google Analytics with parsing access logs and | ||
| 12 | displaying a couple of charts. But to be honest, I actually never used Google | ||
| 13 | Analytics to the fullest extent and was usually interested in seeing page hits | ||
| 14 | and which pages were visited most often. | ||
| 15 | |||
| 16 | I recently moved my blog from Firebase to a VPS and also decided to remove | ||
| 17 | Google Analytics tracking code from the site since it's quite malicious: it | ||
| 18 | tracks users across other pages too and builds a profile of each user, and | ||
| 19 | I've had it. But I still need some insight into what is happening on the | ||
| 20 | server, which content is being read the most, etc. | ||
| 21 | |||
| 22 | I have looked at many existing solutions like: | ||
| 23 | |||
| 24 | - [Umami](https://umami.is/) | ||
| 25 | - [Freshlytics](https://github.com/sheshbabu/freshlytics) | ||
| 26 | - [Matomo](https://matomo.org/) | ||
| 27 | |||
| 28 | But the more I looked at them the more I noticed that I am replacing one evil | ||
| 29 | with another one. Don't get me wrong. Some of these solutions are absolutely | ||
| 30 | fantastic, but they would require installing a database and something like PHP | ||
| 31 | or Node, and I was not ready to put those things on my fresh server. Also, | ||
| 32 | having Docker installed is out of the question. | ||
| 33 | |||
| 34 | ## Opting for log parsing | ||
| 35 | |||
| 36 | So, I defaulted to parsing already existing logs and generating HTML reports | ||
| 37 | from this data. | ||
| 38 | |||
| 39 | I found this amazing software [GoAccess](https://goaccess.io/) which provides | ||
| 40 | all the functionality I need, and it's a single binary written in C. | ||
| 41 | |||
| 42 | GoAccess can be used in two different modes. | ||
| 43 | |||
| 44 |  | ||
| 45 | <center><i>Running in a terminal</i></center> | ||
| 46 | |||
| 47 |  | ||
| 48 | <center><i>Running in a browser</i></center> | ||
| 49 | |||
| 50 | I, however, need this to run in a browser, so the second option is the way to | ||
| 51 | go. The idea is to run a cronjob periodically and export the report into a | ||
| 52 | folder that is then served by Nginx behind Basic authentication. | ||
| 53 | |||
| 54 | ## Getting Nginx ready | ||
| 55 | |||
| 56 | I chose Ubuntu on [DigitalOcean](https://www.digitalocean.com/). First I | ||
| 57 | installed [Nginx](https://nginx.org/en/), the | ||
| 58 | [Let's Encrypt](https://letsencrypt.org/getting-started/) certbot, and all the | ||
| 59 | necessary dependencies. | ||
| 60 | |||
| 61 | ```sh | ||
| 62 | # log in as root user | ||
| 63 | sudo su - | ||
| 64 | |||
| 65 | # first let's update the system | ||
| 66 | apt update && apt upgrade -y | ||
| 67 | |||
| 68 | # let's install | ||
| 69 | apt install nginx certbot python3-certbot-nginx apache2-utils | ||
| 70 | ``` | ||
| 71 | |||
| 72 | After all this is installed, we can create a new configuration for the | ||
| 73 | statistics. Stats will be available at `stats.domain.com`. | ||
| 74 | |||
| 75 | ```sh | ||
| 76 | # create the directory where the HTML report will be hosted | ||
| 77 | mkdir -p /var/www/html/stats.domain.com | ||
| 78 | |||
| 79 | cp /etc/nginx/sites-available/default /etc/nginx/sites-available/stats.domain.com | ||
| 80 | nano /etc/nginx/sites-available/stats.domain.com | ||
| 81 | ``` | ||
| 82 | |||
| 83 | ```nginx | ||
| 84 | server { | ||
| 85 | root /var/www/html/stats.domain.com; | ||
| 86 | server_name stats.domain.com; | ||
| 87 | |||
| 88 | index index.html; | ||
| 89 | location / { | ||
| 90 | try_files $uri $uri/ =404; | ||
| 91 | } | ||
| 92 | } | ||
| 93 | ``` | ||
| 94 | |||
| 95 | Now we check if the configuration is OK. We can do this with `nginx -t`. If | ||
| 96 | all is OK, we can restart Nginx with `service nginx restart`. | ||
| 97 | |||
| 98 | After all that, you should add an A record for this domain that points to the | ||
| 99 | IP of the droplet. | ||
| 100 | |||
| 101 | Before enabling SSL, you should test whether the DNS record has propagated | ||
| 102 | with `curl stats.domain.com`. | ||
| 103 | |||
| 104 | Now it's time to provision a TLS certificate. To achieve this, execute | ||
| 105 | `certbot --nginx`. Follow the wizard, and when you are asked about | ||
| 106 | redirection, choose 2 (always redirect to HTTPS). | ||
| 107 | |||
| 108 | When this is done, you can visit https://stats.domain.com and you should get a | ||
| 109 | 404 Not Found error, which is expected at this point. | ||
| 110 | |||
| 111 | ## Getting GoAccess ready | ||
| 112 | |||
| 113 | If you are using a Debian-like system, GoAccess should be available in the | ||
| 114 | repository. Otherwise, refer to the official website. | ||
| 115 | |||
| 116 | ```sh | ||
| 117 | apt install goaccess | ||
| 118 | ``` | ||
| 119 | |||
| 120 | To enable geolocation we also need one additional thing. | ||
| 121 | |||
| 122 | ```sh | ||
| 123 | cd /var/www/html/stats.domain.com | ||
| 124 | wget https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb | ||
| 125 | ``` | ||
| 126 | |||
| 127 | Now we create a shell script that will be executed every 10 minutes. | ||
| 128 | |||
| 129 | ```sh | ||
| 130 | nano /var/www/html/stats.domain.com/generate-stats.sh | ||
| 131 | ``` | ||
| 132 | |||
| 133 | Contents of this file should look like this. | ||
| 134 | |||
| 135 | ```sh | ||
| 136 | #!/bin/sh | ||
| 137 | |||
| 138 | zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log | ||
| 139 | |||
| 140 | goaccess \ | ||
| 141 | --log-file=/var/log/nginx/access-all.log \ | ||
| 142 | --log-format=COMBINED \ | ||
| 143 | --exclude-ip=0.0.0.0 \ | ||
| 144 | --geoip-database=/var/www/html/stats.domain.com/GeoLite2-City.mmdb \ | ||
| 145 | --ignore-crawlers \ | ||
| 146 | --real-os \ | ||
| 147 | --output=/var/www/html/stats.domain.com/index.html | ||
| 148 | |||
| 149 | rm /var/log/nginx/access-all.log | ||
| 150 | ``` | ||
| 151 | |||
| 152 | Because Nginx rotates the access logs into multiple files after a while, we | ||
| 153 | use [`zcat`](https://linux.die.net/man/1/zcat) to extract the gzipped contents | ||
| 154 | and create one file with all the access logs. After the report is generated, we delete it. | ||
| 155 | |||
| 156 | If you want to exclude results from your home IP, look at the `--exclude-ip` | ||
| 157 | option in the script and replace `0.0.0.0` with your own home IP address. You | ||
| 158 | can find your home IP by executing `curl ifconfig.me` from your local machine | ||
| 159 | and NOT from the droplet. | ||
| 160 | |||
| 161 | Test the script by executing `sh | ||
| 162 | /var/www/html/stats.domain.com/generate-stats.sh` and then checking | ||
| 163 | `https://stats.domain.com`. If you see stats instead of a 404, then you are | ||
| 164 | all set. | ||
| 165 | |||
| 166 | It's time to add this script to cron with `crontab -e`. | ||
| 167 | |||
| 168 | ``` | ||
| 169 | */10 * * * * sh /var/www/html/stats.domain.com/generate-stats.sh | ||
| 170 | ``` | ||
| 171 | |||
| 172 | ## Securing with Basic authentication | ||
| 173 | |||
| 174 | You probably don't want stats to be publicly available, so we should create a | ||
| 175 | user and a password for Basic authentication. | ||
| 176 | |||
| 177 | First we create a password for a user `stats` with `htpasswd -c /etc/nginx/.htpasswd stats`. | ||
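Since `apache2-utils` was installed earlier, `htpasswd` is the straightforward tool here, but the same entry can also be produced with `openssl` (a sketch; the user name `stats` and the password `secret` are placeholders, and `-apr1` matches the hash scheme `htpasswd` uses by default):

```sh
# Print an apr1 (htpasswd-compatible) entry for user "stats";
# append the output line to /etc/nginx/.htpasswd on the server.
printf 'stats:%s\n' "$(openssl passwd -apr1 'secret')"
```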
| 178 | |||
| 179 | Now we update the config file with `nano | ||
| 180 | /etc/nginx/sites-available/stats.domain.com`. You will probably notice that | ||
| 181 | the file looks a bit different from before. This is because `certbot` added | ||
| 182 | additional rules for SSL. | ||
| 183 | |||
| 184 | The `location` block of your config file should now look like this. You need | ||
| 185 | to add the `auth_basic` and `auth_basic_user_file` lines to the file. | ||
| 186 | |||
| 187 | ```nginx | ||
| 188 | location / { | ||
| 189 | try_files $uri $uri/ =404; | ||
| 190 | auth_basic "Private Property"; | ||
| 191 | auth_basic_user_file /etc/nginx/.htpasswd; | ||
| 192 | } | ||
| 193 | ``` | ||
| 194 | |||
| 195 | Test if the config is still OK with `nginx -t`, and if it is, you can restart | ||
| 196 | Nginx with `service nginx restart`. | ||
| 197 | |||
| 198 | If you now visit `https://stats.domain.com`, you should be prompted for a | ||
| 199 | username and password. If not, try reopening your browser. | ||
| 200 | |||
| 201 | That is all. You now have analytics for your server that gets refreshed every 10 | ||
| 202 | minutes. | ||
| 203 | |||
diff --git a/content/posts/2021-06-26-simple-world-clock.md b/content/posts/2021-06-26-simple-world-clock.md new file mode 100644 index 0000000..0c17f09 --- /dev/null +++ b/content/posts/2021-06-26-simple-world-clock.md | |||
| @@ -0,0 +1,108 @@ | |||
| 1 | --- | ||
| 2 | title: Simple world clock with eInk display and Raspberry Pi Zero | ||
| 3 | url: simple-world-clock-with-eiink-display-and-raspberry-pi-zero.html | ||
| 4 | date: 2021-06-26T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | Our team is spread across the world, from the USA all the way to Australia, so | ||
| 10 | having some sort of world clock makes sense. | ||
| 11 | |||
| 12 | Currently, I am using an extension for Gnome called [Timezone | ||
| 13 | extension](https://extensions.gnome.org/extension/2657/timezones-extension/), | ||
| 14 | and it serves the purpose quite well. | ||
| 15 | |||
| 16 | But I also have a bunch of electronics that I bought over the years and am not | ||
| 17 | using, and it's time to stop hoarding this stuff and use it in a | ||
| 18 | project. | ||
| 19 | |||
| 20 | A while ago I bought a small eInk display [Inky | ||
| 21 | pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811) and I | ||
| 22 | have a bunch of [Raspberry Pi | ||
| 23 | Zeros](https://www.raspberrypi.org/products/raspberry-pi-zero/) lying around | ||
| 24 | that I really need to use. | ||
| 25 | |||
| 26 |  | ||
| 27 | |||
| 28 | Since the [Inky | ||
| 29 | pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811) is | ||
| 30 | essentially a HAT, it can easily be added on top of the [Raspberry Pi | ||
| 31 | Zero](https://www.raspberrypi.org/products/raspberry-pi-zero/). | ||
| 32 | |||
| 33 | First, I installed the necessary software on the Raspberry Pi with `pip3 | ||
| 34 | install inky font-fredoka-one`. | ||
| 35 | |||
| 36 | And then I created a file `clock.py` in home directory `/home/pi`. | ||
| 37 | |||
| 38 | ```python | ||
| 39 | #!/usr/bin/env python | ||
| 40 | # -*- coding: utf-8 -*- | ||
| 41 | |||
| 42 | import os | ||
| 44 | from inky.auto import auto | ||
| 45 | from PIL import Image, ImageFont, ImageDraw | ||
| 46 | from font_fredoka_one import FredokaOne | ||
| 47 | |||
| 48 | clocks = [ | ||
| 49 | 'America/New_York', | ||
| 50 | 'Europe/Ljubljana', | ||
| 51 | 'Australia/Brisbane', | ||
| 52 | ] | ||
| 53 | |||
| 54 | board = auto() | ||
| 55 | board.set_border(board.WHITE) | ||
| 56 | board.rotation = 90 | ||
| 57 | |||
| 58 | img = Image.new('P', (board.WIDTH, board.HEIGHT)) | ||
| 59 | draw = ImageDraw.Draw(img) | ||
| 60 | |||
| 61 | big_font = ImageFont.truetype(FredokaOne, 18) | ||
| 62 | small_font = ImageFont.truetype(FredokaOne, 13) | ||
| 63 | |||
| 64 | x = board.WIDTH / 3 | ||
| 65 | y = board.HEIGHT / 3 | ||
| 66 | |||
| 67 | idx = 1 | ||
| 68 | for clock in clocks: | ||
| 69 | ctime = os.popen('TZ="{}" date +"%a,%H:%M"'.format(clock)) | ||
| 70 | ctime = ctime.read().strip().split(',') | ||
| 71 | city = clock.split('/')[1].replace('_', ' ') | ||
| 72 | |||
| 73 | draw.text((15, (idx*y)-y+10), city, fill=board.BLACK, font=small_font) | ||
| 74 | draw.text((110, (idx*y)-y+7), str(ctime[0]), fill=board.BLACK, font=big_font) | ||
| 75 | draw.text((155, (idx*y)-y+7), str(ctime[1]), fill=board.BLACK, font=big_font) | ||
| 76 | |||
| 77 | idx += 1 | ||
| 78 | |||
| 79 | board.set_image(img) | ||
| 80 | board.show() | ||
| 81 | ``` | ||
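Shelling out to `date` works fine, but the same per-timezone values can also be computed in pure Python with the standard-library `zoneinfo` module (a sketch, assuming Python 3.9+; `clock_rows` is a hypothetical helper, not part of the script above):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

clocks = [
    'America/New_York',
    'Europe/Ljubljana',
    'Australia/Brisbane',
]

def clock_rows():
    """Return (city, weekday, HH:MM) tuples for each configured timezone."""
    rows = []
    for tz in clocks:
        local = datetime.now(ZoneInfo(tz))
        city = tz.split('/')[1].replace('_', ' ')
        rows.append((city, local.strftime('%a'), local.strftime('%H:%M')))
    return rows
```

This avoids spawning a subprocess per timezone and keeps all formatting logic in one place.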
| 82 | |||
| 83 | And because eInk displays are rather slow to refresh, and the clock only needs | ||
| 84 | refreshing once a minute, this can be done through a cronjob. | ||
| 85 | |||
| 86 | Before we add this job to cron we need to make `clock.py` executable with `chmod | ||
| 87 | +x clock.py`. | ||
| 88 | |||
| 89 | Then we add a cronjob with `crontab -e`. | ||
| 90 | |||
| 91 | ``` | ||
| 92 | * * * * * /home/pi/clock.py | ||
| 93 | ``` | ||
| 94 | |||
| 95 | So, we end up with a result like this. | ||
| 96 | |||
| 97 |  | ||
| 98 | |||
| 99 | As for the enclosure, it can be 3D printed. I haven't designed one myself yet, | ||
| 100 | but something like this can be used. | ||
| 101 | |||
| 102 | <iframe id="vs_iframe" src="https://www.viewstl.com/?embedded&url=https%3A%2F%2Fmitjafelicijan.com%2Fassets%2Fworld-clock%2Fenclosure.stl&color=gray&bgcolor=white&edges=no&orientation=front&noborder=no" style="border:0;margin:0;width:100%;height:400px;"></iframe> | ||
| 103 | |||
| 104 | You can download my [STL file for the enclosure | ||
| 105 | here](/assets/world-clock/enclosure.stl), but make sure the dimensions make | ||
| 106 | sense. An opening for the USB port should also be added, or just use a drill | ||
| 107 | and some hot glue to make it stick in the enclosure. | ||
| 108 | |||
diff --git a/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md b/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md new file mode 100644 index 0000000..100645b --- /dev/null +++ b/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md | |||
| @@ -0,0 +1,103 @@ | |||
| 1 | --- | ||
| 2 | title: My journey from being an internet über consumer to being a full hominum again | ||
| 3 | url: from-internet-consumer-to-full-hominum-again.html | ||
| 4 | date: 2021-07-30T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | It's been almost a year since I started purging all my online accounts and | ||
| 10 | going down this rabbit hole of becoming almost independent of the current | ||
| 11 | internet machine. Even though I initially thought that I would have problems | ||
| 12 | adapting, I was pleasantly surprised that the transition went so smoothly. | ||
| 13 | Even better, it brought many benefits to my life, such as increased focus, | ||
| 14 | less stress about trivial things, etc. | ||
| 15 | |||
| 16 | It all started with me making small changes, like unsubscribing from emails | ||
| 17 | that I had either subscribed to by accepting terms and conditions, or more | ||
| 18 | malicious emails that I was getting because I was on a shared mailing list. | ||
| 19 | The latter I hate most of all. How the hell do they keep sharing my email and | ||
| 20 | sending me unsolicited emails and get away with it? I have a suspicion | ||
| 21 | that these marketing people share an Excel file between them and keep | ||
| 22 | resubscribing me when they import lists into Mailchimp or similar software. | ||
| 23 | |||
| 24 | It's fascinating to see how much crap you get subscribed to when you are not | ||
| 25 | paying attention. It got so bad that my primary Gmail address is full of junk | ||
| 26 | and needs constant monitoring and cleaning up. And because I want to have | ||
| 27 | Inbox Zero, this presents an additional problem for me. | ||
| 28 | |||
| 29 | For a long time, I didn't notice the stress that email caused me. I was | ||
| 30 | unable to go a single hour without hysterically | ||
| 31 | refreshing my email. And if somebody wrote me something, I needed to see it | ||
| 32 | right then, even though I didn't immediately reply to it. I can only describe | ||
| 33 | this as FOMO (fear of missing out). I have no other explanation. It was | ||
| 34 | crippling, and I was constantly context switching, which I will address | ||
| 35 | further down this post in more detail. | ||
| 36 | |||
| 37 | This was one of the reasons why I spun up my personal email server, and I am | ||
| 38 | using it now as my primary and personal email. I still have Gmail as my | ||
| 39 | “junk” email that I use for throwaway stuff. I log in to Gmail once a week | ||
| 40 | and check whether I got any important emails, but apart from that, it's | ||
| 41 | sitting dormant and collecting dust. | ||
| 42 | |||
| 43 | The more I watched the world lose itself by allowing anti-freedom | ||
| 44 | things to happen to it, the more I started to realize that something had to | ||
| 45 | change. I don't have the power to change the world. And I also don't have a | ||
| 46 | grandiose enough opinion of myself to even think of trying. But what I can do | ||
| 47 | is not subscribe to this consumer way of thinking. I will not be complicit in | ||
| 48 | this. My moral and ethical stances won't allow it. So, this brings us to the | ||
| 49 | second part of my journey. | ||
| 50 | |||
| 51 | I was using all these 3rd party services because I was either lazy or OK with | ||
| 52 | their drawbacks. I watched these services and companies become more and more | ||
| 53 | evil. It is evil if you sell your users' data in this manner. Nobody reads | ||
| 54 | privacy policies, and everybody is OK with accepting them, and they prey on | ||
| 55 | that flaw in human nature. I really hate the hypocrisy they manage to muster. | ||
| 56 | These companies prey on our laziness, and we are at fault here. Nobody else. | ||
| 57 | And I truly understand the reasons why we rather accept and move on than | ||
| 58 | object and make our lives a little more difficult. They have perfected this | ||
| 59 | through years of small changes that make us a little more dependent on them. | ||
| 60 | You could not convince a person to give away all his rights and data in one | ||
| 61 | day. This was gradual and slow. And it caught us all by surprise. When I | ||
| 62 | really stopped and thought about it, I felt repulsed. And by really stopping | ||
| 63 | and thinking about it, I mean thoroughly and in depth. | ||
| 64 | |||
| 65 | Each step I took depleted my character a bit more, like I was trading myself | ||
| 66 | away bit by bit without understanding what it all meant. What it means to be | ||
| 67 | a full person, not divided by all this bought attention they want from me. | ||
| 68 | They don't just get your data; they also take your attention away from you. | ||
| 69 | They scatter your attention and go with the divide-and-conquer tactic from | ||
| 70 | there. And a person divided is a person not fully there. Not in the moment. Not fully alive. | ||
| 71 | |||
| 72 | I was unable to form long thoughts. Well, I thought I was. But now that I see | ||
| 73 | what being a full person is again, I can see that I was not at my 100% back | ||
| 74 | then. | ||
| 75 | |||
| 76 | A revolt was inevitable. There was no other way of continuing my story | ||
| 77 | without it. Without taking back my attention, my thoughts, my time, and my | ||
| 78 | privacy, regardless of how late it may be. | ||
| 79 | |||
| 80 | This has nothing to do with conspiracy theories. Even less with changing the | ||
| 81 | world. All I wanted was to get my life back in order and not waste the energy | ||
| 82 | that could be spent in other, better places. | ||
| 83 | |||
| 84 | I started reading more. I can focus now fully on things I work on. Furthermore, | ||
| 85 | I have the mental acuity that I never had before. My mind feels sharp. I don't | ||
| 86 | get angry so much. I can cherish the finer things in life now without the need | ||
| 87 | to interpret them intellectually. Not only that, but I have a feeling of | ||
| 88 | belonging again. Sense of purpose has returned with a vengeance. And I can now | ||
| 89 | help people without depleting myself. | ||
| 90 | |||
| 91 | The last step so far was to finish closing all the remaining online accounts | ||
| 92 | that I still had. When I thought about what value they brought me, I wasn't | ||
| 93 | surprised that the answer was none. I wasn't logging in and using them. I | ||
| 94 | stopped being afraid of FOMO. If somebody wants to get in contact with me, | ||
| 95 | they will find a way. I am one search away. | ||
| 96 | |||
| 97 | We are not beholden to anybody. Our lives are our own. So dare yourself to | ||
| 98 | delete Facebook and LinkedIn. To unsubscribe. Dare yourself to take your time | ||
| 99 | and attention back. Use that time and energy to go for a walk without | ||
| 100 | thinking about work. Read a book instead of reading comments on social media | ||
| 101 | that you will forget in an hour. Enrich your life instead of wasting it. It | ||
| 102 | only requires a small step. And you will feel the benefits immediately. Lose | ||
| 103 | the weight of the world that is crushing you without your consent. | ||
diff --git a/content/posts/2021-08-01-linux-cheatsheet.md b/content/posts/2021-08-01-linux-cheatsheet.md new file mode 100644 index 0000000..20e3382 --- /dev/null +++ b/content/posts/2021-08-01-linux-cheatsheet.md | |||
| @@ -0,0 +1,287 @@ | |||
| 1 | --- | ||
| 2 | title: List of essential Linux commands for server management | ||
| 3 | url: linux-cheatsheet.html | ||
| 4 | date: 2021-08-01T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | **Generate SSH key** | ||
| 10 | |||
| 11 | ```bash | ||
| 12 | ssh-keygen -t ed25519 -C "your_email@example.com" | ||
| 13 | |||
| 14 | # when no support for Ed25519 present | ||
| 15 | ssh-keygen -t rsa -b 4096 -C "your_email@example.com" | ||
| 16 | ``` | ||
| 17 | |||
| 18 | Note: By default, SSH keys are stored in the `/home/<username>/.ssh/` folder. | ||
| 19 | |||
| 20 | **Login to host via SSH** | ||
| 21 | |||
| 22 | ```bash | ||
| 23 | # connect to host as your local username | ||
| 24 | ssh host | ||
| 25 | |||
| 26 | # connect to host as user | ||
| 27 | ssh <user>@<host> | ||
| 28 | |||
| 29 | # connect to host using port | ||
| 30 | ssh -p <port> <user>@<host> | ||
| 31 | ``` | ||
| 32 | |||
| 33 | **Execute command on a server through SSH** | ||
| 34 | |||
| 35 | ```bash | ||
| 36 | # execute one command | ||
| 37 | ssh root@100.100.100.100 "ls /root" | ||
| 38 | |||
| 39 | # execute many commands | ||
| 40 | ssh root@100.100.100.100 "cd /root;touch file.txt" | ||
| 41 | ``` | ||
| 42 | |||
| 43 | **Displays currently logged in users in the system** | ||
| 44 | |||
| 45 | ```bash | ||
| 46 | w | ||
| 47 | ``` | ||
| 48 | |||
| 49 | **Displays Linux system information** | ||
| 50 | |||
| 51 | ```bash | ||
| 52 | uname | ||
| 53 | ``` | ||
| 54 | |||
| 55 | **Displays kernel release information** | ||
| 56 | |||
| 57 | ```bash | ||
| 58 | uname -r | ||
| 59 | ``` | ||
| 60 | |||
| 61 | **Shows the system hostname** | ||
| 62 | |||
| 63 | ```bash | ||
| 64 | hostname | ||
| 65 | ``` | ||
| 66 | |||
| 67 | **Shows system reboot history** | ||
| 68 | |||
| 69 | ```bash | ||
| 70 | last reboot | ||
| 71 | ``` | ||
| 72 | |||
| 73 | **Displays information about the user** | ||
| 74 | |||
| 75 | ```bash | ||
| 76 | sudo apt install finger | ||
| 77 | finger <username> | ||
| 78 | ``` | ||
| 79 | |||
| 80 | **Displays IP addresses and all the network interfaces** | ||
| 81 | |||
| 82 | ```bash | ||
| 83 | ip addr show | ||
| 84 | ``` | ||
| 85 | |||
| 86 | **Downloads a file from an online source** | ||
| 87 | |||
| 88 | ```bash | ||
| 89 | wget https://example.com/example.tgz | ||
| 90 | ``` | ||
| 91 | |||
| 92 | Note: If the URL contains `?` or `&`, enclose it in double quotes. | ||
| 93 | |||
| 94 | **Compress a file with gzip** | ||
| 95 | |||
| 96 | ```bash | ||
| 97 | # will not keep the original file | ||
| 98 | gzip file.txt | ||
| 99 | |||
| 100 | # will keep the original file | ||
| 101 | gzip --keep file.txt | ||
| 102 | ``` | ||
| 103 | |||
| 104 | **Interactive disk usage analyzer** | ||
| 105 | |||
| 106 | ```bash | ||
| 107 | sudo apt install ncdu | ||
| 108 | |||
| 109 | ncdu | ||
| 110 | ncdu <path/to/directory> | ||
| 111 | ``` | ||
| 112 | |||
| 113 | **Install Node.js using the Node Version Manager** | ||
| 114 | |||
| 115 | ```bash | ||
| 116 | curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash | ||
| 117 | source ~/.bashrc | ||
| 118 | |||
| 119 | nvm install v13 | ||
| 120 | ``` | ||
| 121 | |||
| 122 | **Too long; didn't read** | ||
| 123 | |||
| 124 | ```bash | ||
| 125 | npm install -g tldr | ||
| 126 | |||
| 127 | tldr tar | ||
| 128 | ``` | ||
| 129 | |||
| 130 | **Combine all Nginx access logs to one big log file** | ||
| 131 | |||
| 132 | ```bash | ||
| 133 | zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log | ||
| 134 | ``` | ||
| 135 | |||
| 136 | **Set up Redis server** | ||
| 137 | |||
| 138 | ```bash | ||
| 139 | sudo apt install redis-server redis-tools | ||
| 140 | |||
| 141 | # check if server is running | ||
| 142 | sudo service redis-server status | ||
| 143 | |||
| 144 | # set and get a key value | ||
| 145 | redis-cli set mykey myvalue | ||
| 146 | redis-cli get mykey | ||
| 147 | |||
| 148 | # interactive shell | ||
| 149 | redis-cli | ||
| 150 | ``` | ||
| 151 | |||
| 152 | **Generate statistics of your webserver** | ||
| 153 | |||
| 154 | ```bash | ||
| 155 | sudo apt install goaccess | ||
| 156 | |||
| 157 | # check if installed | ||
| 158 | goaccess -v | ||
| 159 | |||
| 160 | # combine logs | ||
| 161 | zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log | ||
| 162 | |||
| 163 | # export to single html | ||
| 164 | goaccess \ | ||
| 165 | --log-file=/var/log/nginx/access-all.log \ | ||
| 166 | --log-format=COMBINED \ | ||
| 167 | --exclude-ip=0.0.0.0 \ | ||
| 168 | --ignore-crawlers \ | ||
| 169 | --real-os \ | ||
| 170 | --output=/var/www/html/stats.html | ||
| 171 | |||
| 172 | # cleanup afterwards | ||
| 173 | rm /var/log/nginx/access-all.log | ||
| 174 | ``` | ||
| 175 | |||
| 176 | **Search for a given pattern in files** | ||
| 177 | |||
| 178 | ```bash | ||
| 179 | grep -r 'pattern' <directory> | ||
| 180 | ``` | ||
| 181 | |||
| 182 | **Find process ID for a specific program** | ||
| 183 | |||
| 184 | ```bash | ||
| 185 | pgrep nginx | ||
| 186 | ``` | ||
| 187 | |||
| 188 | **Print name of current/working directory** | ||
| 189 | |||
| 190 | ```bash | ||
| 191 | pwd | ||
| 192 | ``` | ||
| 193 | |||
| 194 | **Creates a blank new file** | ||
| 195 | |||
| 196 | ```bash | ||
| 197 | touch newfile.txt | ||
| 198 | ``` | ||
| 199 | |||
| 200 | **Displays first lines in a file** | ||
| 201 | |||
| 202 | ```bash | ||
| 203 | # -n <x> presents the number of lines (10 by default) | ||
| 204 | head -n 20 somefile.txt | ||
| 205 | ``` | ||
| 206 | |||
| 207 | **Displays last lines in a file** | ||
| 208 | |||
| 209 | ```bash | ||
| 210 | # -n <x> presents the number of lines (10 by default) | ||
| 211 | tail -n 20 somefile.txt | ||
| 212 | |||
| 213 | # -f follows changes to the file (doesn't close) | ||
| 214 | tail -f somefile.txt | ||
| 215 | ``` | ||
| 216 | |||
| 217 | **Count lines in a file** | ||
| 218 | |||
| 219 | ```bash | ||
| 220 | wc -l somefile.txt | ||
| 221 | ``` | ||
| 222 | |||
| 223 | **Find all instances of the file** | ||
| 224 | |||
| 225 | ```bash | ||
| 226 | sudo apt install mlocate | ||
| 227 | |||
| 228 | locate somefile.txt | ||
| 229 | ``` | ||
| 230 | |||
| 231 | **Find file names that begin with ‘index’ in /home folder** | ||
| 232 | |||
| 233 | ```bash | ||
| 234 | find /home/ -name "index*" | ||
| 235 | ``` | ||
| 236 | |||
| 237 | **Find files larger than 100MB in the home folder** | ||
| 238 | |||
| 239 | ```bash | ||
| 240 | find /home -size +100M | ||
| 241 | ``` | ||
| 242 | |||
| 243 | **Displays block devices related information** | ||
| 244 | |||
| 245 | ```bash | ||
| 246 | lsblk | ||
| 247 | ``` | ||
| 248 | |||
| 249 | **Displays free space on mounted systems** | ||
| 250 | |||
| 251 | ```bash | ||
| 252 | df -h | ||
| 253 | ``` | ||
| 254 | |||
| 255 | **Displays free and used memory in the system** | ||
| 256 | |||
| 257 | ```bash | ||
| 258 | free -h | ||
| 259 | ``` | ||
| 260 | |||
| 261 | **Displays all active listening ports** | ||
| 262 | |||
| 263 | ```bash | ||
| 264 | sudo apt install net-tools | ||
| 265 | |||
| 266 | netstat -pnltu | ||
| 267 | ``` | ||
| 268 | |||
| 269 | **Kill a process violently** | ||
| 270 | |||
| 271 | ```bash | ||
| 272 | kill -9 <pid> | ||
| 273 | ``` | ||
| 274 | |||
| 275 | **List files opened by user** | ||
| 276 | |||
| 277 | ```bash | ||
| 278 | lsof -u <user> | ||
| 279 | ``` | ||
| 280 | |||
| 281 | **Execute "df -h", showing periodic updates** | ||
| 282 | |||
| 283 | ```bash | ||
| 284 | # -n 1 means every second | ||
| 285 | watch -n 1 df -h | ||
| 286 | ``` | ||
| 287 | |||
diff --git a/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md b/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md new file mode 100644 index 0000000..58c9b0d --- /dev/null +++ b/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md | |||
| @@ -0,0 +1,276 @@ | |||
| 1 | --- | ||
| 2 | title: Debian based riced up distribution for Developers and DevOps folks | ||
| 3 | url: debian-based-riced-up-distribution-for-developers-and-devops-folks.html | ||
| 4 | date: 2021-12-03T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## Introduction | ||
| 10 | |||
| 11 | I have been using [Ubuntu](https://ubuntu.com/) for quite a long time now. I have | ||
| 12 | used [Debian](https://www.debian.org/) in the past and | ||
| 13 | [Manjaro](https://manjaro.org/). Also had [Arch](https://archlinux.org/) for | ||
| 14 | some time and even ran [Gentoo](https://www.gentoo.org/) way back. | ||
| 15 | |||
| 16 | What I learned from all this is that I prefer running a bit older versions | ||
| 17 | that are stable over a bleeding-edge rolling release. For that reason, I have | ||
| 18 | stuck with Ubuntu for a couple of years now. I am also at a point in my life | ||
| 19 | where I just don't care what is cool or hip anymore. I just want a stable system | ||
| 20 | that doesn't get in my way. | ||
| 21 | |||
| 22 | During all this, I noticed that these distributions were getting very bloated, | ||
| 23 | with a lot of software included that I usually uninstall on a fresh | ||
| 24 | installation. Maybe this is my OCD speaking, but why do I have to give a fresh | ||
| 25 | installation a minimum of 1 GB of RAM out of the box just to have a blank | ||
| 26 | screen in front of me? I get it, there are many things included in the distro | ||
| 27 | to make my life easier. I understand. But at this point I have a feeling that | ||
| 28 | modern Linux distributions are becoming similar to a [Node.js project with | ||
| 29 | node_modules](https://devhumor.com/content/uploads/images/August2017/node-modules.jpg). | ||
| 30 | Just a crazy number of packages serving very little or no purpose, only | ||
| 31 | supporting other software. | ||
| 32 | |||
| 33 | I felt I needed a fresh start. To start over with something minimal and clean. | ||
| 34 | Something that would put a little more joy into using a computer again. | ||
| 35 | |||
| 36 | For the first version, I wanted to target the following machines I have at home | ||
| 37 | that I want this thing to work on. | ||
| 38 | |||
| 39 | ```yaml | ||
| 40 | # My main stationary work machine | ||
| 41 | Resolution: 3840x1080 (Super Ultrawide Monitor 32:9) | ||
| 42 | CPU: Intel i7-8700 (12) @ 4.600GHz | ||
| 43 | GPU: AMD ATI Radeon RX 470/480/570/570X/580/580X/590 | ||
| 44 | Memory: 32020MiB | ||
| 45 | ``` | ||
| 46 | |||
| 47 | ```yaml | ||
| 48 | # Thinkpad x220 for testing things and goofing around | ||
| 49 | Resolution: 1366x768 | ||
| 50 | CPU: Intel i5-2520M (4) @ 3.200GHz | ||
| 51 | GPU: Intel 2nd Generation Core Processor Family | ||
| 52 | Memory: 15891MiB | ||
| 53 | ``` | ||
| 54 | |||
| 55 | ## How should I approach this? | ||
| 56 | |||
| 57 | I knew I wanted to use the [minimal Debian | ||
| 58 | netinst](https://www.debian.org/CD/netinst/) as the base to give myself a head | ||
| 59 | start. There was no reason to go through changing the installer and testing | ||
| 60 | that behemoth of a thing. So, some sort of ricing was the only logical option | ||
| 61 | to get this thing off the ground somewhat quickly. | ||
| 62 | |||
| 63 | > **What is ricing anyway?** | ||
| 64 | > The term “RICE” stands for Race Inspired Cosmetic Enhancement. A group of | ||
| 65 | > people (could be one, idk) decided to see if they could tweak their own | ||
| 66 | > distros like they/others did their cars. This gave rise to a community of | ||
| 67 | > Linux/Unix enthusiasts trying to make their distros look cooler and better | ||
| 68 | > than others... For more information, read this article | ||
| 69 | > [What in the world is ricing!?](https://pesos.github.io/2020/07/14/what-is-ricing.html). | ||
| 70 | |||
| 71 | I didn't want this to just be a set of config files for theming purpose. I | ||
| 72 | wanted this to include a set of pre-installed tools and services that are being | ||
| 73 | used all the time by a modern developer. Theming is just a tiny part of it. | ||
| 74 | Fonts being applied across the distro and things like that. | ||
| 75 | |||
| 76 | First, I chose the terminal installer and let it load additional components. | ||
| 77 | Avoid the graphical installer in this case. | ||
| 78 | |||
| 79 |  | ||
| 80 | |||
| 81 | After that I selected a hostname, created a normal user, set passwords for | ||
| 82 | that user and the root user, and chose guided mode for disk partitioning. | ||
| 83 | |||
| 84 |  | ||
| 85 | |||
| 86 | I let it run to install everything required for the base system and opted | ||
| 87 | out of scanning additional media for use by the package manager. Packages | ||
| 88 | will be downloaded from the internet during installation. | ||
| 89 | |||
| 90 |  | ||
| 91 | |||
I opted out of the popularity contest, and **now comes the important part**.
Uncheck all the boxes in Software selection and only leave 'standard system
utilities'. I also left the SSH server checked, so I was able to log in to the
machine from my main PC.
| 96 | |||
| 97 |  | ||
| 98 | |||
At this point, I installed the GRUB bootloader on the disk where I installed
the system.
| 101 | |||
| 102 |  | ||
| 103 | |||
That concluded the installation of base Debian, and after restarting the
computer I was greeted by the login prompt.
| 106 | |||
| 107 |  | ||
| 108 | |||
Now that I had the base installation, it was time to choose what software I
wanted to include in this so-called distribution. I wanted an out-of-the-box
developer experience, so I had plenty to choose from.
| 112 | |||
| 113 | Let's not waste time and go through the list. | ||
| 114 | |||
| 115 | ## Desktop environments | ||
| 116 | |||
I have been using [Gnome](https://www.gnome.org/) for my whole Linux life,
from version 2 forward. It's been quite a ride. I hated version 3 when it came
out and replaced version 2, but I got used to it. And now with version 40+
they made a couple of changes which I found both frustrating and pleasantly
surprising.
| 121 | |||
The amount of vertical space you lose because of the beefy title bars on
windows is ridiculous. And in the case of
[Tilix](https://gnunn1.github.io/tilix-web/) you also have tabs, so you are
100px deep before any content. Vertical space is one of the most important
things for a developer. The more real estate you have, the more code you can
fit in a viewport.
| 128 | |||
| 129 | But on the other hand, I still love how Gnome feels and looks. I gotta give them | ||
| 130 | that. They really are trying to make Gnome feel unified and modern. | ||
| 131 | |||
Regardless of all the nice things Gnome has, I had been looking at tiling
window managers for some time, but never had the nerve to actually go with
one. Now was the ideal time to give it a go. A no guts, no glory kind of
thing.
| 135 | |||
One of my requirements was easy custom layouts, because I use a really
unusual monitor with an aspect ratio of 32:9. So relying on the included
layouts most of them ship with is a non-starter.
| 139 | |||
What I was doing in Gnome was arranging windows in a layout like the diagram
below. This is my common practice, and if you look at it you can clearly see I
was replicating a tiling window manager setup in Gnome.
| 143 | |||
| 144 |  | ||
| 145 | |||
That made me look into a bunch of tiling window managers and test them out.
The candidates I looked at were:
| 148 | |||
| 149 | - [i3](https://i3wm.org/) | ||
| 150 | - [bspwm](https://github.com/baskerville/bspwm) | ||
| 151 | - [awesome](https://awesomewm.org/index.html) | ||
| 152 | - [XMonad](https://xmonad.org/) | ||
| 153 | - [sway](https://swaywm.org/) | ||
| 154 | - [Qtile](http://www.qtile.org/) | ||
| 155 | - [dwm](https://dwm.suckless.org/) | ||
| 156 | |||
You can also check the article [13 Best Tiling Window Managers for
Linux](https://www.tecmint.com/best-tiling-window-managers-for-linux/) that I
was referencing while testing them out.
| 160 | |||
While all of them provided what I needed, I liked i3 the most. What
particularly caught my eye was its ease of use and the tree-based layout,
which allows flexible arrangements. I know the others can also be set up with
custom layouts other than spiral, dwindle, etc. I think i3 is a good
entry-level window manager for somebody like me.
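
For the custom layouts mentioned above, i3 can restore an arrangement from a
JSON description: `i3-save-tree` dumps the current workspace tree, and
`append_layout` loads it again at startup. A rough sketch (the path is
hypothetical, and the dumped JSON needs its `swallows` criteria filled in by
hand, as the i3 user's guide explains):

```
# Dump the layout of workspace 1 once it is arranged the way you like:
#   i3-save-tree --workspace 1 > ~/.config/i3/ws1.json

# ~/.config/i3/config — restore that layout on every start
exec --no-startup-id i3-msg 'workspace 1; append_layout /home/me/.config/i3/ws1.json'
```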
| 166 | |||
| 167 | ## Batteries included | ||
| 168 | |||
The source for the whole thing is located on GitHub:
https://github.com/mitjafelicijan/dfd-rice.
| 171 | |||
Currently included:
| 173 | |||
| 174 | - `non-free` (enables non-free packages in apt) | ||
| 175 | - `sudo` (adds sudo and adds user to sudo group) | ||
| 176 | - `essentials` (gcc, htop, zip, curl, etc...) | ||
| 177 | - `wifi` (network manager nmtui) | ||
| 178 | - `desktop` (i3, dmenu, fonts, configurations) | ||
| 179 | - `pulseaudio` (pulseaudio with pavucontrol) | ||
| 180 | - `code-editors` (vim, micro, vscode) | ||
| 181 | - `ohmybash` (make bash pretty) | ||
| 182 | - `file-managers` (mc) | ||
| 183 | - `git-ui` (terminal git gui) | ||
| 184 | - `meld` (diff tool) | ||
| 185 | - `profiling` (kcachegrind, valgrind, strace, ltrace) | ||
| 186 | - `browsers` (brave, firefox, chromium) | ||
| 187 | - programming languages: | ||
| 188 | - `python` | ||
| 189 | - `golang` | ||
| 190 | - `nodejs` | ||
| 191 | - `rust` | ||
| 192 | - `nim` | ||
| 193 | - `php` | ||
| 194 | - `ruby` | ||
| 195 | - `docker` (with docker-compose) | ||
| 196 | - `ansible` | ||
| 197 | |||
The install script also allows you to install only specific packages (for
example: essentials ohmybash docker rust).
| 200 | |||
```sh
su - root -c 'bash -c \
  "$(wget -q https://raw.github.com/mitjafelicijan/dfd-rice/master/tools/install.sh -O -)" -- \
  essentials ohmybash docker rust'
```
| 206 | |||
Currently, most of these recipes use what Debian provides, and this is
totally fine with me since I never use bleeding-edge features of a package.
But if something major comes to light, I will replace it with a compilation
script or something similar.
| 211 | |||
| 212 | This is some of the output from the installation script. | ||
| 213 | |||
| 214 |  | ||
| 215 | |||
| 216 | Let's take a look at some examples in the installation script. | ||
| 217 | |||
| 218 | ### Docker recipe | ||
| 219 | |||
| 220 | ```sh | ||
| 221 | # docker | ||
| 222 | print_header "Installing Docker" | ||
| 223 | curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --yes --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg | ||
| 224 | echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null | ||
| 225 | apt update | ||
| 226 | apt -y install docker-ce docker-ce-cli containerd.io docker-compose | ||
| 227 | |||
| 228 | systemctl start docker | ||
| 229 | systemctl enable docker | ||
| 230 | systemctl status docker --no-pager | ||
| 231 | |||
| 232 | /sbin/usermod -aG docker $USERNAME | ||
| 233 | ``` | ||
| 234 | |||
| 235 | ### Making bash pretty | ||
| 236 | |||
| 237 | I really like [Oh My Zsh](https://ohmyz.sh/), but I don't like zsh shell. When | ||
| 238 | I used it, I constantly needed to be aware of it and running bash scripts was a | ||
| 239 | pain. So, I was really delighted when I found out that a version for bash | ||
| 240 | existed called [Oh My Bash](https://ohmybash.nntoan.com/). Let's take a look at | ||
| 241 | the recipe for installing it. | ||
| 242 | |||
| 243 | ```sh | ||
| 244 | # ohmybash | ||
| 245 | print_header "Enabling OhMyBash" | ||
| 246 | sudo -u $USERNAME sh -c "$(curl -fsSL https://raw.github.com/ohmybash/oh-my-bash/master/tools/install.sh)" & | ||
| 247 | T1=${!} | ||
| 248 | wait ${T1} | ||
| 249 | ``` | ||
| 250 | |||
Because Oh My Bash does `exec bash` at the end, it traps our script inside
another shell and the script cannot continue. For that reason, I executed it
in the background. But that presents a new problem: because it runs in the
background, we naturally lose track of its progress. The trick with `T1=${!}`
and `wait ${T1}` captures the PID of the background process and waits for it
to finish before continuing to the next task in the bash script.
| 257 | |||
| 258 | Check [Multi-Threaded Processing in Bash Scripts](https://www.cloudsavvyit.com/12277/how-to-use-multi-threaded-processing-in-bash-scripts/) | ||
| 259 | for more details. | ||
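
Stripped of the Oh My Bash specifics, the same backgrounding pattern looks
like this (`sleep` stands in for the long-running installer):

```sh
# Run a stand-in long task in the background; $! holds its PID.
sleep 1 &
T1=${!}
echo "started background job with PID ${T1}"

# Block until that specific job exits, then carry on with the script.
wait ${T1}
echo "background job finished"
```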
| 260 | |||
| 261 | ## Conclusion | ||
| 262 | |||
| 263 | Take a look at | ||
| 264 | https://github.com/mitjafelicijan/dfd-rice/blob/develop/tools/install.sh script | ||
| 265 | to get familiar with it. This is just a first iteration and I will continue to | ||
| 266 | update it because I need this in my life. | ||
| 267 | |||
The current version boots in 4 seconds to the login prompt, and after you log
in, the desktop environment loads in 2 seconds. So, it's fast, very fast. And
on a clean boot, I measured ~230 MB of RAM usage.
| 271 | |||
And this is how it looks with two terminals side by side. I really like the
simplicity and the clean interface. I will polish the colors and things like
that, but I really do like the results.
| 275 | |||
| 276 |  | ||
diff --git a/content/posts/2021-12-25-running-golang-application-as-pid1.md b/content/posts/2021-12-25-running-golang-application-as-pid1.md new file mode 100644 index 0000000..10543f2 --- /dev/null +++ b/content/posts/2021-12-25-running-golang-application-as-pid1.md | |||
| @@ -0,0 +1,348 @@ | |||
| 1 | --- | ||
| 2 | title: Running Golang application as PID 1 with Linux kernel | ||
| 3 | url: running-golang-application-as-pid1.html | ||
| 4 | date: 2021-12-25T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## Unikernels, kernels, and alike | ||
| 10 | |||
I have been reading a lot about
[unikernels](https://en.wikipedia.org/wiki/Unikernel) lately and found them
very intriguing. When you push away all the marketing speak and look at the
idea, it makes a lot of sense.
| 15 | |||
| 16 | > A unikernel is a specialized, single address space machine image constructed | ||
| 17 | > by using library operating systems. ([Wikipedia](https://en.wikipedia.org/wiki/Unikernel)) | ||
| 18 | |||
| 19 | I really like the explanation from the article | ||
| 20 | [Unikernels: Rise of the Virtual Library Operating System](https://queue.acm.org/detail.cfm?id=2566628). | ||
| 21 | Really worth a read. | ||
| 22 | |||
| 23 | If we compare a normal operating system to a unikernel side by side, they would | ||
| 24 | look something like this. | ||
| 25 | |||
| 26 |  | ||
| 27 | |||
From this image, we can see how the complexity significantly decreases with
the use of unikernels. This comes at a price, of course. Unikernels are hard
to get running and require a lot of work, since you don't have an actual
proper kernel running in the background providing network access, drivers,
etc.
| 32 | |||
So as a half step toward making the stack simpler, I started looking into
using the Linux kernel as a base and going from there. I came across this
[YouTube video about Building the Simplest Possible Linux System](https://www.youtube.com/watch?v=Sk9TatW9ino)
by [Rob Landley](https://landley.net), and apart from statically compiling the
application to be run as PID 1, there were really no other obstacles.
| 38 | |||
| 39 | ## What is PID 1? | ||
| 40 | |||
PID 1 is the first process that the Linux kernel starts after the boot
process. It also has a couple of properties that are unique to it.
| 43 | |||
- When the process with PID 1 dies for any reason, all other processes are
  killed with the KILL signal.
- When any process that has children dies for any reason, its children are
  re-parented to the process with PID 1.
- Many signals which have a default action of Term do not have one for PID 1.
- When the process with PID 1 dies for any reason, the kernel panics, which
  results in a system crash.
| 51 | |||
PID 1 is considered the init application, which takes care of running other
processes and handling services like:
| 54 | |||
| 55 | - sshd, | ||
| 56 | - nginx, | ||
| 57 | - pulseaudio, | ||
| 58 | - etc. | ||
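
Since orphaned processes get re-parented to PID 1, a real init also has to
reap them, or they linger as zombies. A minimal Go sketch of that duty (my
own illustration, not code from any actual init system):

```go
package main

import (
	"fmt"
	"syscall"
)

// reapChildren blocks until every child of this process has been
// collected and returns how many were reaped. PID 1 inherits all
// orphaned processes, so an init must keep calling wait() like this
// (a real init would use syscall.WNOHANG from a SIGCHLD handler
// instead of blocking).
func reapChildren() int {
	n := 0
	for {
		var ws syscall.WaitStatus
		pid, err := syscall.Wait4(-1, &ws, 0, nil)
		if err == syscall.EINTR {
			continue // interrupted by a signal, retry
		}
		if pid <= 0 || err != nil {
			return n // ECHILD: no children are left
		}
		n++
	}
}

func main() {
	// Spawn a short-lived child process, then collect it.
	if _, err := syscall.ForkExec("/bin/true", []string{"true"}, nil); err != nil {
		panic(err)
	}
	fmt.Println("children reaped:", reapChildren())
}
```

On a Linux machine this should print `children reaped: 1`.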
| 59 | |||
If you are on a Linux machine, you can check which process has PID 1 by
running the following.
| 62 | |||
| 63 | ```sh | ||
| 64 | $ cat /proc/1/status | ||
| 65 | Name: systemd | ||
| 66 | Umask: 0000 | ||
| 67 | State: S (sleeping) | ||
| 68 | Tgid: 1 | ||
| 69 | Ngid: 0 | ||
| 70 | Pid: 1 | ||
| 71 | PPid: 0 | ||
| 72 | ... | ||
| 73 | ``` | ||
| 74 | |||
As we can see, on my machine the process with ID 1 is [systemd](https://systemd.io/),
which is a software suite that provides an array of system components for
Linux operating systems. If you look closely, you can also see that the `PPid`
(the process ID of the parent process) is `0`, which additionally confirms
that this process doesn't have a parent.
| 80 | |||
## So why even run an application as PID 1 instead of just using a container?
| 82 | |||
Containers are wonderful, but they come with a lot of baggage. Because they
are layered by nature, the images require quite a lot of space and also a lot
of additional software to handle them. They are not as lightweight as they
seem, and many popular images require 500 MB or more of disk space.
| 87 | |||
Running the application as PID 1 instead results in a significantly smaller
footprint, as we will see later in the post.
| 90 | |||
> You could run a simple init system inside a Docker container, as described
> in the article [Docker and the PID 1 zombie reaping problem](https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/).
| 93 | |||
| 94 | ## The master plan | ||
| 95 | |||
1. Compile the Linux kernel with the default configuration.
2. Prepare a Hello World application in Golang that is statically compiled.
3. Run it with [QEMU](https://www.qemu.org/), providing the Golang application
   as the init application / PID 1.
| 100 | |||
For the sake of simplicity, we will not be cross-compiling any of it and will
just use the 64-bit version.
| 103 | |||
| 104 | ## Compiling Linux kernel | ||
| 105 | |||
| 106 | ```sh | ||
| 107 | $ wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.15.7.tar.xz | ||
| 108 | $ tar xf linux-5.15.7.tar.xz | ||
| 109 | |||
| 110 | $ cd linux-5.15.7 | ||
| 111 | |||
| 112 | $ make clean | ||
| 113 | |||
| 114 | # read more about this https://stackoverflow.com/a/41886394 | ||
| 115 | $ make defconfig | ||
| 116 | |||
| 117 | $ time make -j `nproc` | ||
| 118 | |||
| 119 | $ cd .. | ||
| 120 | ``` | ||
| 121 | |||
At this point we have a kernel image located at `arch/x86_64/boot/bzImage`.
We will use this in QEMU later.
| 124 | |||
To make our lives a bit easier, let's move the kernel image somewhere more
convenient. Create a folder `bin/` in the root of our project with
`mkdir -p bin`.

At this point we can copy `bzImage` into the `bin/` folder with
`cp linux-5.15.7/arch/x86_64/boot/bzImage bin/bzImage`.
| 131 | |||
| 132 | The folder structure of this experiment should look like this. | ||
| 133 | |||
| 134 | ``` | ||
| 135 | pid1/ | ||
| 136 | bin/ | ||
| 137 | bzImage | ||
| 138 | linux-5.15.7/ | ||
| 139 | linux-5.15.7.tar.xz | ||
| 140 | ``` | ||
| 141 | |||
| 142 | ## Preparing PID 1 application in Golang | ||
| 143 | |||
This step is relatively easy. The only thing we must keep in mind is that we
will need to compile the binary as a static one.
| 146 | |||
| 147 | Let's create `init.go` file in the root of the project. | ||
| 148 | |||
| 149 | ```go | ||
| 150 | package main | ||
| 151 | |||
| 152 | import ( | ||
| 153 | "fmt" | ||
| 154 | "time" | ||
| 155 | ) | ||
| 156 | |||
| 157 | func main() { | ||
| 158 | for { | ||
| 159 | fmt.Println("Hello from Golang") | ||
| 160 | time.Sleep(1 * time.Second) | ||
| 161 | } | ||
| 162 | } | ||
| 163 | ``` | ||
| 164 | |||
Notice that we have a forever loop in `main`, with a simple one-second sleep
so we don't overwhelm the CPU. This is because PID 1 should never complete
and/or exit. That would result in a kernel panic. Which is BAD!
| 168 | |||
There are two ways of compiling a Golang application: statically and dynamically.
| 170 | |||
| 171 | To statically compile the binary, use the following command. | ||
| 172 | |||
| 173 | ```sh | ||
| 174 | $ go build -ldflags="-extldflags=-static" init.go | ||
| 175 | ``` | ||
| 176 | |||
| 177 | We can also check if the binary is statically compiled with: | ||
| 178 | |||
| 179 | ```sh | ||
| 180 | $ file init | ||
| 181 | init: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=Ypu8Zw_4NBxm1Yxg2OYO/H5x721rQ9uTPiDVh-VqP/vZN7kXfGG1zhX_qdHMgH/9vBfmK81tFrygfOXDEOo, not stripped | ||
| 182 | |||
| 183 | $ ldd init | ||
| 184 | not a dynamic executable | ||
| 185 | ``` | ||
| 186 | |||
At this point, we need to create an [initramfs](https://www.linuxfromscratch.org/blfs/view/svn/postlfs/initramfs.html)
(short for "initial RAM file system", the successor of initrd: a cpio archive
of the initial file system that gets loaded into memory during the Linux
startup process).
| 191 | |||
| 192 | ```sh | ||
| 193 | $ echo init | cpio -o --format=newc > initramfs | ||
| 194 | $ mv initramfs bin/initramfs | ||
| 195 | ``` | ||
| 196 | |||
The project at this stage should look like this.
| 198 | |||
| 199 | ``` | ||
| 200 | pid1/ | ||
| 201 | bin/ | ||
| 202 | bzImage | ||
| 203 | initramfs | ||
| 204 | linux-5.15.7/ | ||
| 205 | linux-5.15.7.tar.xz | ||
| 206 | init.go | ||
| 207 | ``` | ||
| 208 | |||
| 209 | ## Running all of it with QEMU | ||
| 210 | |||
| 211 | [QEMU](https://www.qemu.org/) is a free and open-source hypervisor. It emulates | ||
| 212 | the machine's processor through dynamic binary translation and provides a set | ||
| 213 | of different hardware and device models for the machine, enabling it to run a | ||
| 214 | variety of guest operating systems. | ||
| 215 | |||
| 220 | ```sh | ||
| 221 | $ qemu-system-x86_64 -serial stdio -kernel bin/bzImage -initrd bin/initramfs -append "console=ttyS0" -m 128 | ||
| 222 | [ 0.000000] Linux version 5.15.7 (m@khan) (gcc (GCC) 11.2.1 20211203 (Red Hat 11.2.1-7), GNU ld version 2.37-10.fc35) #7 SMP Mon Dec 13 10:23:25 CET 2021 | ||
| 223 | [ 0.000000] Command line: console=ttyS0 | ||
| 224 | [ 0.000000] x86/fpu: x87 FPU will use FXSAVE | ||
| 225 | [ 0.000000] signal: max sigframe size: 1440 | ||
| 226 | [ 0.000000] BIOS-provided physical RAM map: | ||
| 227 | [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable | ||
| 228 | [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved | ||
| 229 | [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved | ||
| 230 | [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000007fdffff] usable | ||
| 231 | [ 0.000000] BIOS-e820: [mem 0x0000000007fe0000-0x0000000007ffffff] reserved | ||
| 232 | [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved | ||
| 233 | [ 0.000000] NX (Execute Disable) protection: active | ||
| 234 | [ 0.000000] SMBIOS 2.8 present. | ||
| 235 | [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-6.fc35 04/01/2014 | ||
| 236 | [ 0.000000] tsc: Fast TSC calibration failed | ||
| 237 | ... | ||
| 238 | [ 2.016106] ALSA device list: | ||
| 239 | [ 2.016329] No soundcards found. | ||
| 240 | [ 2.053176] Freeing unused kernel image (initmem) memory: 1368K | ||
| 241 | [ 2.056095] Write protecting the kernel read-only data: 20480k | ||
| 242 | [ 2.058248] Freeing unused kernel image (text/rodata gap) memory: 2032K | ||
| 243 | [ 2.058811] Freeing unused kernel image (rodata/data gap) memory: 500K | ||
| 244 | [ 2.059164] Run /init as init process | ||
| 245 | Hello from Golang | ||
| 246 | [ 2.386879] tsc: Refined TSC clocksource calibration: 3192.032 MHz | ||
| 247 | [ 2.387114] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2e02e31fa14, max_idle_ns: 440795264947 ns | ||
| 248 | [ 2.387380] clocksource: Switched to clocksource tsc | ||
| 249 | [ 2.587895] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 | ||
| 250 | Hello from Golang | ||
| 251 | Hello from Golang | ||
| 252 | Hello from Golang | ||
| 253 | ``` | ||
| 254 | |||
| 255 | The whole [log file here](/assets/pid1/qemu.log). | ||
| 256 | |||
| 257 | ## Size comparison | ||
| 258 | |||
The cool thing about this approach is that the Linux kernel and the
application together take only around 12 MB, which is impressive as hell.
Keep in mind that the size of bzImage (the Linux kernel) could be decreased
further by going into `make menuconfig` and removing a ton of features from
the kernel. I managed to get the kernel size down to 2 MB with everything
still working properly.
| 265 | |||
| 266 | ```sh | ||
| 267 | total 12M | ||
| 268 | -rw-r--r--. 1 m m 9.3M Dec 13 10:24 bzImage | ||
| 269 | -rw-r--r--. 1 m m 1.9M Dec 27 01:19 initramfs | ||
| 270 | ``` | ||
| 271 | |||
| 272 | ## Creating ISO image and running it with Gnome Boxes | ||
| 273 | |||
First, we need to create the proper folder structure with `mkdir -p iso/boot/grub`.
| 275 | |||
Then we need to download the [GRUB stage2 binary](https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito).
You can read more about it at https://github.com/littleosbook/littleosbook.
| 278 | |||
| 279 | ```sh | ||
| 280 | $ wget -O iso/boot/grub/stage2_eltorito https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito | ||
| 281 | ``` | ||
| 282 | |||
Let's copy the kernel and the initramfs into the proper folders (the `wget`
above already placed `stage2_eltorito` into `iso/boot/grub/`).

```sh
$ cp bin/bzImage iso/boot/
$ cp bin/initramfs iso/boot/
```

Once the GRUB config below is in place, the tree should look like this.

```sh
$ tree iso/boot/
iso/boot/
├── bzImage
├── grub
│   ├── menu.lst
│   └── stage2_eltorito
└── initramfs
```
| 301 | |||
Let's create a GRUB config file at `iso/boot/grub/menu.lst` with the following contents.
| 303 | |||
| 304 | ```ini | ||
| 305 | default=0 | ||
| 306 | timeout=5 | ||
| 307 | |||
| 308 | title GoAsPID1 | ||
| 309 | kernel /boot/bzImage | ||
| 310 | initrd /boot/initramfs | ||
| 311 | ``` | ||
| 312 | |||
Let's create the ISO file using genisoimage:
| 314 | |||
| 315 | ```sh | ||
| 316 | genisoimage -R \ | ||
| 317 | -b boot/grub/stage2_eltorito \ | ||
| 318 | -no-emul-boot \ | ||
| 319 | -boot-load-size 4 \ | ||
| 320 | -A os \ | ||
| 321 | -input-charset utf8 \ | ||
| 322 | -quiet \ | ||
| 323 | -boot-info-table \ | ||
| 324 | -o GoAsPID1.iso \ | ||
| 325 | iso | ||
| 326 | ``` | ||
| 327 | |||
| 328 | This will produce `GoAsPID1.iso` which you can use with [Virtualbox](https://www.virtualbox.org/) | ||
| 329 | or [Gnome Boxes](https://apps.gnome.org/app/org.gnome.Boxes/). | ||
| 330 | |||
| 331 | <video src="/assets/pid1/boxes.mp4" controls></video> | ||
| 332 | |||
| 333 | ## Is running applications as PID 1 even worth it? | ||
| 334 | |||
Well, the answer is not as simple as one would think. Sometimes it is and
sometimes it's not. For embedded systems and very specialized applications it
is certainly worth it. But for normal use, I don't think so. It was an
interesting exercise in compiling kernels and looking at the guts of the
Linux kernel, but sticking to containers for most things is a better option
in my opinion.
| 341 | |||
An interesting experiment would be creating an image that supports
networking, deploying it to AWS as an EC2 instance, and observing how it
fares. But in that case, we would need to write some sort of supervisor
running on a separate EC2 instance to check that the other instances are
running properly. Remember that if your application fails, the kernel panics
and the whole machine becomes inoperable.
| 348 | |||
diff --git a/content/posts/2021-12-30-wap-mobile-web-before-the-web.md b/content/posts/2021-12-30-wap-mobile-web-before-the-web.md new file mode 100644 index 0000000..442943b --- /dev/null +++ b/content/posts/2021-12-30-wap-mobile-web-before-the-web.md | |||
| @@ -0,0 +1,202 @@ | |||
| 1 | --- | ||
| 2 | title: Wireless Application Protocol and the mobile web before the web | ||
| 3 | url: wap-mobile-web-before-the-web.html | ||
| 4 | date: 2021-12-30T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## A little stroll down the history lane | ||
| 10 | |||
About two weeks ago, I watched an outstanding documentary on YouTube,
[Springboard: the secret history of the first real
smartphone](https://www.youtube.com/watch?v=b9_Vh9h3Ohw), about the history of
smartphones and phones in general. It brought back so many memories. I never
had an actual smartphone before Android. The closest to a smartphone was the
[Sony Ericsson P1](https://www.gsmarena.com/sony_ericsson_p1-1982.php). A
fantastic phone. I broke it in Prague after a party, and that was one of those
rare occasions where I was actually mad at myself. But nevertheless, after
that phone, the next one was an Android.
| 20 | |||
Before that, I only owned normal phones from Nokia, Siemens, etc. Nothing
special, actually. These are the phones we are talking about, from before
2007, when Apple and Android phones didn't exist yet.
| 24 | |||
| 25 | These phones were rocking: | ||
| 26 | |||
- No selfie cameras.
- ~2 inch displays.
- ~120 MHz "beast" CPUs.
- 144p main cameras.
- But they had a headphone jack.
| 32 | |||
| 33 | Let's take a look at these beauties. | ||
| 34 | |||
| 35 |  | ||
| 36 | |||
| 37 | ## WAP - Wireless Application Protocol | ||
| 38 | |||
| 39 | Not that one! We are talking about Wireless Application Protocol and not Cardi | ||
| 40 | B's song 😃 | ||
| 41 | |||
WAP stands for Wireless Application Protocol. It is a protocol designed for
micro-browsers, and it enables internet access on mobile devices. It uses the
markup language WML (Wireless Markup Language, not HTML); WML is defined as
an XML 1.0 application. Furthermore, it enables creating web applications for
mobile devices. In 1998, the WAP Forum was founded by Ericsson, Motorola,
Nokia and Unwired Planet, whose aim was to standardize the various wireless
technologies via protocols.
[(source)](https://www.geeksforgeeks.org/wireless-application-protocol/)
| 50 | |||
The WAP protocol resulted from the joint efforts of the various members of
the WAP Forum. In 2002, the WAP Forum merged with various other industry
forums, resulting in the formation of the Open Mobile Alliance (OMA).
[(source)](https://www.geeksforgeeks.org/wireless-application-protocol/)
| 54 | [(source)](https://www.geeksforgeeks.org/wireless-application-protocol/) | ||
| 55 | |||
| 56 | These were some wild times. Devices had tiny screens and data transmission rates | ||
| 57 | were abominable. But they were capable of rendering WML (Wireless Markup | ||
| 58 | Language). This was very similar to HTML, actually. It is a markup language, | ||
| 59 | after all. | ||
| 60 | |||
| 61 | These pages could be served by [Apache](https://apache.org/) and could be | ||
| 62 | generated by CGI scripts on the backend. The only difference was the limited | ||
| 63 | markup language. | ||
| 64 | |||
| 65 | ## WML - Wireless Markup Language | ||
| 66 | |||
| 67 | Just like web browsers use HTML for content structure, older mobile device | ||
| 68 | browsers use WML - if you need to support really old mobile phones using WML | ||
| 69 | browsers, you will need to know about it. WML is XML-based (an XML vocabulary | ||
| 70 | just like XHTML and MathML, but not HTML) and does not use the same metaphor as | ||
| 71 | HTML. HTML is a single document with some metadata packed away in the head, and | ||
| 72 | a body encapsulating the visible page. With WML, the metaphor does not envisage | ||
| 73 | a page, but rather a deck of cards. A WML file might have several pages or cards | ||
| 74 | contained within it. | ||
| 75 | [(source)](https://www.w3.org/wiki/Introduction_to_mobile_web) | ||
| 76 | |||
| 77 | ```html | ||
| 78 | <?xml version="1.0"?> | ||
| 79 | <!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" "http://www.wapforum.org/DTD/wml_1.1.xml"> | ||
| 80 | <wml> | ||
| 81 | <card id="home" title="Example Homepage"> | ||
| 82 | <p>Welcome to the Example homepage</p> | ||
| 83 | </card> | ||
| 84 | </wml> | ||
| 85 | ``` | ||
| 86 | |||
There is a great tutorial on [Tutorialspoint about
WML](https://www.tutorialspoint.com/wml/index.htm).
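
Since a WML deck is just XML, ordinary XML tooling can work with it. As a toy
illustration (not part of the original setup), Python's standard library can
list the cards in a deck:

```python
import xml.etree.ElementTree as ET

# A two-card deck; the DOCTYPE is left out since we only care about
# the element structure here.
deck = """<?xml version="1.0"?>
<wml>
  <card id="home" title="Example Homepage">
    <p>Welcome to the Example homepage</p>
  </card>
  <card id="about" title="About">
    <p>About this site</p>
  </card>
</wml>"""

root = ET.fromstring(deck)
titles = [card.get("title") for card in root.findall("card")]
print(titles)  # → ['Example Homepage', 'About']
```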
| 89 | |||
| 90 | ## Converting Digg to WML | ||
| 91 | |||
This task is completely useless and not really feasible nowadays, but I had
to give it a try for old times' sake. Since the data is already there in the
form of an RSS feed, I could take the feed, parse it, and create a WML version
of the homepage.
| 96 | |||
| 97 | We will need: | ||
| 98 | |||
| 99 | - Python3 + Pip | ||
| 100 | - ImageMagick | ||
| 101 | - feedparser and mako templating | ||
| 102 | |||
| 103 | ```sh | ||
| 104 | # for fedora 35 | ||
| 105 | sudo dnf install ImageMagick python3-pip | ||
| 106 | |||
# templating engine for Python
| 108 | pip install mako --user | ||
| 109 | |||
| 110 | # for parsing rss feeds | ||
| 111 | pip install feedparser --user | ||
| 112 | ``` | ||
| 113 | |||
The project folder structure should look like the following.
| 115 | |||
| 116 | ``` | ||
| 117 | 12:43:53 m@khan wap → tree -L 1 | ||
| 118 | . | ||
| 119 | ├── generate.py | ||
| 120 | └── template.wml | ||
| 121 | |||
| 122 | ``` | ||
| 123 | |||
| 124 | After that, I created a small template for the homepage. | ||
| 125 | |||
| 126 | ```html | ||
| 127 | <?xml version="1.0"?> | ||
| 128 | <!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.2//EN" "http://www.wapforum.org/DTD/wml_1.2.xml"> | ||
| 129 | |||
| 130 | <wml> | ||
| 131 | |||
| 132 | <card title="Digg - What the Internet is talking about right now"> | ||
| 133 | |||
| 134 | % for item in entries: | ||
| 135 | <p><img src="/images/${item.id}.jpg" width="175" height="95" alt="${item.title}" /></p> | ||
| 136 | <p><small>${item.kicker}</small></p> | ||
| 137 | <p><big><b>${item.title}</b></big></p> | ||
| 138 | <p>${item.description}</p> | ||
| 139 | % endfor | ||
| 140 | |||
| 141 | </card> | ||
| 142 | |||
| 143 | </wml> | ||
| 144 | ``` | ||
| 145 | |||
| 146 | And the parser that parses RSS feed looks like this. | ||
| 147 | |||
| 148 | ```python | ||
| 149 | import os | ||
| 150 | import feedparser | ||
| 151 | from mako.template import Template | ||
| 152 | |||
| 153 | os.system('mkdir -p www/images') | ||
| 154 | |||
| 155 | template = Template(filename='template.wml') | ||
| 156 | |||
| 157 | feed = feedparser.parse('https://digg.com/rss/top.xml') | ||
| 158 | |||
| 159 | entries = feed.entries[:15] | ||
| 160 | |||
| 161 | for entry in entries: | ||
| 162 | print('Processing image with id {}'.format(entry.id)) | ||
| 163 | os.system('wget -q -O www/images/{}.jpg "{}"'.format(entry.id, entry.links[1].href)) | ||
| 164 | os.system('convert www/images/{}.jpg -type Grayscale -resize 175x -depth 3 -quality 30 www/images/{}.jpg'.format(entry.id, entry.id)) | ||
| 165 | |||
| 166 | html = template.render(entries = entries) | ||
| 167 | |||
| 168 | with open('www/index.wml', 'w+') as fp: | ||
| 169 | fp.write(html) | ||
| 170 | ``` | ||
| 171 | |||
| 172 | This script will create a folder `www` and, inside it, a folder `www/images` for | ||
| 173 | storing resized images. | ||
| 174 | |||
| 175 | > Be sure to serve the content over plain HTTP rather than HTTPS. These old | ||
| 176 | > phones will have problems with modern TLS versions. | ||
| 177 | |||
| 178 | If you look at the Python file, I convert all the images into tiny B&W images. | ||
| 179 | They should be WBMP (Wireless BitMaP), but I chose JPEGs for this, and it seems | ||
| 180 | to work properly. | ||
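If you do want real WBMP output, ImageMagick can write it directly. A minimal sketch, using a generated placeholder image instead of one of the actual Digg images:

```sh
# make a placeholder image so the example is self-contained
convert -size 350x190 gradient: sample.jpg

# resize to display width and force 1-bit black & white, which WBMP requires
convert sample.jpg -resize 175x -monochrome sample.wbmp
```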
| 181 | |||
| 182 | Because I currently don't have a phone old enough to test it on, I used an | ||
| 183 | emulator. And it was really hard to find one. I found [WAP | ||
| 184 | Proof](http://wap-proof.sharewarejunction.com/) on Shareware Junction, and it | ||
| 185 | did the job well enough. I will try to find an actual device to test it on. | ||
| 186 | |||
| 187 | <video src="/assets/wap/emulator.mp4" controls></video> | ||
| 188 | |||
| 189 | If you are using Nginx to serve the contents, add a directive to the virtual | ||
| 190 | host configuration that will automatically serve the `index.wml` file. | ||
| 191 | |||
| 192 | ```nginx | ||
| 193 | server { | ||
| 194 | index index.wml index.html index.htm index.nginx-debian.html; | ||
| 195 | } | ||
| 196 | ``` | ||
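Old WAP browsers also expect WML to be served with the `text/vnd.wap.wml` MIME type. Stock nginx ships this mapping in `mime.types`, but if your build doesn't, it can be added explicitly (a sketch; the include path may differ on your distro):

```nginx
server {
    # a types block replaces the inherited mapping, so re-include
    # the defaults before adding the WAP types
    include /etc/nginx/mime.types;
    types {
        text/vnd.wap.wml   wml;
        image/vnd.wap.wbmp wbmp;
    }
}
```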
| 197 | |||
| 198 | ## Conclusion | ||
| 199 | |||
| 200 | Well, this was pointless, but very fun! I hope you enjoyed it as much as I did. | ||
| 201 | I will try to find an old phone to test it on. If you have any questions, feel | ||
| 202 | free to ask in the comments. | ||
diff --git a/content/posts/2022-06-30-trying-out-helix-editor.md b/content/posts/2022-06-30-trying-out-helix-editor.md new file mode 100644 index 0000000..305d4b7 --- /dev/null +++ b/content/posts/2022-06-30-trying-out-helix-editor.md | |||
| @@ -0,0 +1,53 @@ | |||
| 1 | --- | ||
| 2 | title: Trying out Helix code editor as my main editor | ||
| 3 | url: tying-out-helix-code-editor.html | ||
| 4 | date: 2022-06-30T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | I have been searching for a lightweight code editor for quite some time. One of | ||
| 10 | the main reasons was that I wanted something that doesn't burn through CPU and | ||
| 11 | whose RAM usage is not through the roof. I have been mostly using Visual Studio Code. | ||
| 12 | It's been an outstanding editor. I have no quarrel with it at all. It's just | ||
| 13 | time to spice life up with something new. | ||
| 14 | |||
| 15 | I have been on this search for a couple of years. I have tried Vim, Neovim, | ||
| 16 | Emacs, Doom Emacs, Micro and a couple more. Among them, I liked Micro and | ||
| 17 | Doom Emacs the most. The Micro editor was a little too basic for me. And Doom Emacs | ||
| 18 | was a bit too hardcore. This does not reflect on any of the editors. It's just | ||
| 19 | my personal preference. | ||
| 20 | |||
| 21 | > I tried Helix Editor about a year ago. But I didn't pay attention to it. | ||
| 22 | > I tried it, saw it's similar to Vi, and just said no. I was premature in | ||
| 23 | > dismissing it. | ||
| 24 | |||
| 25 | One of the things I actually miss is line wrapping for certain files. When | ||
| 26 | writing Markdown, line wrapping would be very helpful. Editing such a document | ||
| 27 | is frustrating to say the least. Some of the Markdown to HTML converters don't | ||
| 28 | take kindly to new lines between sentences. Not paragraphs, sentences. And I use | ||
| 29 | Markdown to write this blog you are reading. | ||
| 30 | |||
| 31 | But other than this, I have been extremely satisfied with it. It's been a pleasant | ||
| 32 | surprise. There have been zero issues with the editor. | ||
| 33 | |||
| 34 | One thing to do before you are able to use autocompletion and make use of | ||
| 35 | Language Server support is to install the language server with NPM. | ||
| 36 | |||
| 37 | ```sh | ||
| 38 | npm install -g typescript typescript-language-server | ||
| 39 | ``` | ||
| 40 | |||
| 41 | I am still getting used to the keyboard shortcuts and getting better. What Helix | ||
| 42 | does really well is pack in sane defaults, and even though there is currently | ||
| 43 | no plugin support, I haven't found any need for plugins. It has all that | ||
| 44 | you would need. It goes to great lengths to show the user what is going on, with | ||
| 45 | popups that show you what the keyboard shortcuts are. | ||
| 46 | |||
| 47 | And it comes packed with many | ||
| 48 | [really good themes](https://github.com/helix-editor/helix/wiki/Themes). | ||
| 49 | |||
| 50 |  | ||
| 51 | |||
| 52 | It's still young but has this mature feeling to it. It has sane defaults and | ||
| 53 | mimics Vim (works a bit differently, but the overall idea is similar). | ||
diff --git a/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md b/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md new file mode 100644 index 0000000..968341c --- /dev/null +++ b/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md | |||
| @@ -0,0 +1,364 @@ | |||
| 1 | --- | ||
| 2 | title: What would DNA sound like if synthesized to an audio file | ||
| 3 | url: what-would-dna-sound-if-synthesized.html | ||
| 4 | date: 2022-07-05T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## Introduction | ||
| 10 | |||
| 11 | Lately, I have been thinking a lot about the nature of life, what the | ||
| 12 | foundation blocks of life are, and things like that. It's remarkable how | ||
| 13 | complex and, on the other hand, simple creation is when you look at it. The | ||
| 14 | miracle of life keeps us grounded when our imagination goes wild. If DNA is the | ||
| 15 | building block of life, you could consider it an API nature provided us to | ||
| 16 | better understand all of this chaos masquerading as order. | ||
| 17 | |||
| 18 | I have been reading a lot about superintelligence and our somehow misguided path | ||
| 19 | to create general artificial intelligence. What would the building blocks of our | ||
| 20 | creation look like? Is compression really the ultimate storage of | ||
| 21 | information? Will our creations also ponder these questions when creating new | ||
| 22 | worlds for themselves, or will we just disappear into the vastness of | ||
| 23 | possibilities? It is a little offensive that we are playing God whilst being | ||
| 24 | completely ignorant of our own reality. Who knows! Like many other | ||
| 25 | breakthroughs, this one will also come at a cost not known to us when it finally | ||
| 26 | happens. | ||
| 27 | |||
| 28 | To keep things a bit lighter, I decided to convert some popular DNA sequences | ||
| 29 | into audio files for us to listen to. I am not the first one, nor will I be | ||
| 30 | the last one to do this. But it is an interesting exercise in better | ||
| 31 | understanding the relationship between art and science. Maybe listening to DNA | ||
| 32 | instead of parsing it will open a way to better understanding, or at least | ||
| 33 | enjoying the creation and cryptic nature of life. | ||
| 34 | |||
| 35 | ## DNA encoding and primer example | ||
| 36 | |||
| 37 | I explored DNA in the past in my post from about 3 years ago, | ||
| 38 | [Encoding binary data into DNA | ||
| 39 | sequence](/encoding-binary-data-into-dna-sequence.html), where I converted | ||
| 40 | all sorts of data into DNA sequences. | ||
| 41 | |||
| 42 | This will be a similar exercise but instead of converting to DNA, I will be | ||
| 43 | generating tones from Nucleotides. | ||
| 44 | |||
| 45 | | Nucleotides | Note | Frequency | | ||
| 46 | | ---------------- | ---- | --------- | | ||
| 47 | | **A** (Adenine) | A | 440 Hz | | ||
| 48 | | **C** (Cytosine) | C | 523.25 Hz | | ||
| 49 | | **G** (Guanine) | G | 783.99 Hz | | ||
| 50 | | **T** (Thymine) | D | 587.33 Hz | | ||
| 51 | |||
| 52 | Since there is no T note in the equal-tempered scale, I chose D to represent T. | ||
| 53 | |||
| 54 | You can check [Frequencies for equal-tempered scale, A4 = 440 | ||
| 55 | Hz](https://pages.mtu.edu/~suits/notefreqs.html). For this tuning, we also | ||
| 56 | choose `Speed of Sound = 345 m/s = 1130 ft/s = 770 miles/hr`. | ||
| 57 | |||
| 58 | Now that we have this out of the way, we can also brush up on the DNA sequencing | ||
| 59 | a bit. This is a famous quote I also used for the encoding tests, and it goes | ||
| 60 | like this. | ||
| 61 | |||
| 62 | > How wonderful that we have met with a paradox. Now we have some hope of | ||
| 63 | > making progress. | ||
| 64 | > ― Niels Bohr | ||
| 65 | |||
| 66 | ```shell | ||
| 67 | >SEQ1 | ||
| 68 | GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA | ||
| 69 | GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA | ||
| 70 | ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA | ||
| 71 | ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT | ||
| 72 | GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT | ||
| 73 | GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC | ||
| 74 | AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC | ||
| 75 | AACC | ||
| 76 | ``` | ||
| 77 | |||
| 78 | This is what we are going to work with to get things rolling, when creating the | ||
| 79 | parser and waveform generator. | ||
| 80 | |||
| 81 | ## Parsing DNA data | ||
| 82 | |||
| 83 | This step is a rather simple one. All we need to do is parse the input DNA | ||
| 84 | sequence in [FASTA format](https://en.wikipedia.org/wiki/FASTA_format), well known in | ||
| 85 | [Bioinformatics](https://en.wikipedia.org/wiki/Bioinformatics) to extract single | ||
| 86 | Nucleotides that will be converted into separate tones based on equal-tempered | ||
| 87 | scale explained above. | ||
| 88 | |||
| 89 | ```python | ||
| 90 | nucleotide_tone_map = { | ||
| 91 |     'A': 440, | ||
| 92 |     'C': 523.25, | ||
| 93 |     'G': 783.99, | ||
| 94 |     'T': 587.33,  # converted to D | ||
| 95 | } | ||
| 96 | |||
| 97 | def read_fasta(text): | ||
| 98 |     # skip FASTA header lines starting with '>' and join the rest | ||
| 99 |     return ''.join(line.strip() for line in text.splitlines() | ||
| 100 |                    if not line.startswith('>')) | ||
| 101 | |||
| 102 | def generate_from_dna_sequence(sequence): | ||
| |     for nucleotide in read_fasta(sequence): | ||
| |         print(nucleotide, nucleotide_tone_map[nucleotide]) | ||
| 103 | ``` | ||
| 104 | |||
| 105 | ## Generating sine wave | ||
| 106 | |||
| 107 | Because we are essentially creating a long stream of notes, we will be appending | ||
| 108 | sine samples to a global array that we will later use for creating a WAV file | ||
| 109 | out of it. | ||
| 110 | |||
| 111 | ```python | ||
| 112 | import math | ||
| 113 | |||
| 114 | # shared state, also used when writing the WAV file later | ||
| 115 | sample_rate = 44100 | ||
| 116 | audio = [] | ||
| 117 | |||
| 118 | def append_sinewave(freq=440.0, duration_milliseconds=500, volume=1.0): | ||
| 119 |     num_samples = int(duration_milliseconds * (sample_rate / 1000.0)) | ||
| 120 | |||
| 121 |     for x in range(num_samples): | ||
| 122 |         audio.append(volume * math.sin(2 * math.pi * freq * (x / sample_rate))) | ||
| 123 | ``` | ||
| 124 | |||
| 125 | The sine wave generated here is the standard beep. If you want something more | ||
| 126 | aggressive, you could try a square or sawtooth waveform. | ||
| 127 | |||
| 128 | ## Generating a WAV file from accumulated sine waves | ||
| 129 | |||
| 130 | |||
| 131 | ```python | ||
| 132 | import wave | ||
| 133 | import struct | ||
| 134 | |||
| 135 | def save_wav(file_name): | ||
| 136 | wav_file = wave.open(file_name, 'w') | ||
| 137 | nchannels = 1 | ||
| 138 | sampwidth = 2 | ||
| 139 | |||
| 140 | nframes = len(audio) | ||
| 141 | comptype = 'NONE' | ||
| 142 | compname = 'not compressed' | ||
| 143 | wav_file.setparams((nchannels, sampwidth, sample_rate, nframes, comptype, compname)) | ||
| 144 | |||
| 145 | for sample in audio: | ||
| 146 | wav_file.writeframes(struct.pack('h', int(sample * 32767.0))) | ||
| 147 | |||
| 148 | wav_file.close() | ||
| 149 | ``` | ||
| 150 | |||
| 151 | 44100 Hz is the industry standard sample rate (CD quality). If you need to save | ||
| 152 | on file size, you can adjust it downwards. The standard for low quality is | ||
| 153 | 8000 Hz (8 kHz). | ||
| 154 | |||
| 155 | WAV files here use short, 16-bit, signed integers for the sample size. | ||
| 156 | So, we multiply the floating-point data we have by 32767, the maximum value for | ||
| 157 | a short integer. | ||
| 158 | |||
| 159 | > It is theoretically possible to use the floating point -1.0 to 1.0 data | ||
| 160 | > directly in a WAV file, but it is not obvious how to do that using the wave | ||
| 161 | > in Python. | ||
| 162 | |||
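Putting the parser, tone generator and WAV writer together, an end-to-end run looks roughly like this. This is a self-contained sketch; the shortened note duration and the output filename are my own choices, not taken from the scripts above:

```python
import math
import struct
import wave

sample_rate = 44100
audio = []

nucleotide_tone_map = {'A': 440, 'C': 523.25, 'G': 783.99, 'T': 587.33}

def append_sinewave(freq, duration_milliseconds=200, volume=1.0):
    # one sine tone, appended to the global sample buffer
    num_samples = int(duration_milliseconds * (sample_rate / 1000.0))
    for x in range(num_samples):
        audio.append(volume * math.sin(2 * math.pi * freq * (x / sample_rate)))

def save_wav(file_name):
    with wave.open(file_name, 'w') as wav_file:
        # mono, 16-bit samples, uncompressed PCM
        wav_file.setparams((1, 2, sample_rate, len(audio), 'NONE', 'not compressed'))
        for sample in audio:
            wav_file.writeframes(struct.pack('h', int(sample * 32767.0)))

fasta = """>SEQ1
GACA"""

# drop the FASTA header line, keep only the nucleotides
sequence = ''.join(line for line in fasta.splitlines()
                   if not line.startswith('>'))

for nucleotide in sequence:
    append_sinewave(nucleotide_tone_map[nucleotide])

save_wav('out.wav')
```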
| 163 | ## Generating Spectrograms | ||
| 164 | |||
| 165 | I have tried two methods of doing this and both were just fine. However, I opted | ||
| 166 | for [SoX - Sound eXchange, the Swiss Army knife of audio | ||
| 167 | manipulation](https://linux.die.net/man/1/sox) because it didn't require | ||
| 168 | anything else. | ||
| 169 | |||
| 170 | ```shell | ||
| 171 | sox output.wav -n spectrogram -o spectrogram.png | ||
| 172 | ``` | ||
| 173 | |||
| 174 | An example spectrogram of Ludwig van Beethoven's Symphony No. 6, first movement. | ||
| 175 | |||
| 176 | <audio controls> | ||
| 177 | <source src="/assets/dna-synthesized/symphony-no6-1st-movement.mp3" type="audio/mpeg"> | ||
| 178 | </audio> | ||
| 179 | |||
| 180 |  | ||
| 181 | |||
| 182 | The other option could also be in combination with | ||
| 183 | [gnuplot](http://www.gnuplot.info/). This would require an intermediary step, | ||
| 184 | however. | ||
| 185 | |||
| 186 | ```shell | ||
| 187 | sox output.wav audio.dat | ||
| 188 | tail -n+3 audio.dat > audio_only.dat | ||
| 189 | gnuplot audio.gpi | ||
| 190 | ``` | ||
| 191 | |||
| 192 | And input file `audio.gpi` that would be passed to gnuplot looks something like | ||
| 193 | this. | ||
| 194 | |||
| 195 | ``` | ||
| 196 | # set output format and size | ||
| 197 | set term png size 1000,280 | ||
| 198 | |||
| 199 | # set output file | ||
| 200 | set output "audio.png" | ||
| 201 | |||
| 202 | # set y range | ||
| 203 | set yr [-1:1] | ||
| 204 | |||
| 205 | # we want just the data | ||
| 206 | unset key | ||
| 207 | unset tics | ||
| 208 | unset border | ||
| 209 | set lmargin 0 | ||
| 210 | set rmargin 0 | ||
| 211 | set tmargin 0 | ||
| 212 | set bmargin 0 | ||
| 213 | |||
| 214 | # draw rectangle to change background color | ||
| 215 | set obj 1 rectangle behind from screen 0,0 to screen 1,1 | ||
| 216 | set obj 1 fillstyle solid 1.0 fillcolor rgbcolor "#ffffff" | ||
| 217 | |||
| 218 | # draw data with foreground color | ||
| 219 | plot "audio_only.dat" with lines lt rgb 'red' | ||
| 220 | ``` | ||
| 221 | |||
| 222 | ## Pre-generated sequences | ||
| 223 | |||
| 224 | What I did was take interesting parts of an animal's genome and feed them to the | ||
| 225 | tone generator script. This generated a WAV file, which I then converted to | ||
| 226 | MP3 so it can be played in a browser. The last step was creating a | ||
| 227 | spectrogram based on the WAV file. | ||
| 228 | |||
| 229 | ### Niels Bohr quote | ||
| 230 | |||
| 231 | <audio controls> | ||
| 232 | <source src="/assets/dna-synthesized/quote/out.mp3" type="audio/mpeg"> | ||
| 233 | </audio> | ||
| 234 | |||
| 235 |  | ||
| 236 | |||
| 237 | ### Mouse | ||
| 238 | |||
| 239 | This is part of a mouse genome `Mus_musculus.GRCm39.dna.nonchromosomal`. You | ||
| 240 | can get [genome data | ||
| 241 | here](http://ftp.ensembl.org/pub/release-106/fasta/mus_musculus/dna/). | ||
| 242 | |||
| 243 | <audio controls> | ||
| 244 | <source src="/assets/dna-synthesized/mouse/out.mp3" type="audio/mpeg"> | ||
| 245 | </audio> | ||
| 246 | |||
| 247 |  | ||
| 248 | |||
| 249 | ### Bison | ||
| 250 | |||
| 251 | This is part of a bison genome `Bison_bison_bison.Bison_UMD1.0.cdna`. You can | ||
| 252 | get [genome data | ||
| 253 | here](http://ftp.ensembl.org/pub/release-106/fasta/bison_bison_bison/cdna/). | ||
| 254 | |||
| 255 | <audio controls> | ||
| 256 | <source src="/assets/dna-synthesized/bison/out.mp3" type="audio/mpeg"> | ||
| 257 | </audio> | ||
| 258 | |||
| 259 |  | ||
| 260 | |||
| 261 | ### Taurus | ||
| 262 | |||
| 263 | This is part of a taurus genome `Bos_taurus.ARS-UCD1.2.cdna`. You can get | ||
| 264 | [genome data | ||
| 265 | here](http://ftp.ensembl.org/pub/release-106/fasta/bos_taurus/cdna/). | ||
| 266 | |||
| 267 | <audio controls> | ||
| 268 | <source src="/assets/dna-synthesized/taurus/out.mp3" type="audio/mpeg"> | ||
| 269 | </audio> | ||
| 270 | |||
| 271 |  | ||
| 272 | |||
| 273 | ## Making a drummer out of a DNA sequence | ||
| 274 | |||
| 275 | To make things even more interesting, I decided to send this data via MIDI to my | ||
| 276 | [Elektron Model:Samples](https://www.elektron.se/en/model-samples). This is a | ||
| 277 | really cool piece of equipment that supports MIDI in via USB and 3.5 mm audio | ||
| 278 | jack. | ||
| 279 | |||
| 280 | Elektron is connected to my MacBook via USB cable and audio out is patched to a | ||
| 281 | Sony Bluetooth speaker I have that supports 3.5 mm audio in. Elektron doesn't | ||
| 282 | have internal speakers. | ||
| 283 | |||
| 284 |  | ||
| 285 | |||
| 286 |  | ||
| 287 | |||
| 288 |  | ||
| 289 | |||
| 290 | For communicating with the Elektron, I chose the `pygame` Python module, which | ||
| 291 | has MIDI support built in. With this, it was rather simple to send notes to the | ||
| 292 | device. All I did was map MIDI notes to the actual nucleotides. | ||
| 293 | |||
| 294 | Before all of this, I also checked the Audio MIDI Setup app under macOS and | ||
| 295 | opened MIDI Studio by pressing ⌘-2. | ||
| 296 | |||
| 297 |  | ||
| 298 | |||
| 299 | The whole script that parses the sequence and sends notes to the Elektron looks like this. | ||
| 300 | |||
| 301 | ```python | ||
| 302 | import pygame.midi | ||
| 303 | import time | ||
| 304 | |||
| 305 | pygame.midi.init() | ||
| 306 | |||
| 307 | print(pygame.midi.get_default_output_id()) | ||
| 308 | print(pygame.midi.get_device_info(0)) | ||
| 309 | |||
| 310 | player = pygame.midi.Output(1) | ||
| 311 | player.set_instrument(2) | ||
| 312 | |||
| 313 | def send_note(note, velocity): | ||
| 314 | global player | ||
| 315 | player.note_on(note, velocity) | ||
| 316 | time.sleep(0.3) | ||
| 317 | player.note_off(note, velocity) | ||
| 318 | |||
| 319 | |||
| 320 | # MIDI note numbers must be in the 0-127 range; these match | ||
| 321 | # the tones used earlier: A4, C5, G5 and D5 (for T) | ||
| 322 | nucleotide_midi_map = { | ||
| 323 |     'A': 69,  # A4 | ||
| 324 |     'C': 72,  # C5 | ||
| 325 |     'G': 79,  # G5 | ||
| |     'T': 74,  # D5 | ||
| | } | ||
| 326 | |||
| 327 | with open("quote.fa") as f: | ||
| 328 | sequence = f.read().replace('\n', '') | ||
| 329 | |||
| 330 | for nucleotide in [char for char in sequence]: | ||
| 331 | print("Playing nucleotide {} with MIDI note {}".format( | ||
| 332 | nucleotide, nucleotide_midi_map[nucleotide])) | ||
| 333 | send_note(nucleotide_midi_map[nucleotide], 127) | ||
| 334 | |||
| 335 | del player | ||
| 336 | pygame.midi.quit() | ||
| 337 | ``` | ||
| 338 | |||
| 339 | <video src="/assets/dna-synthesized/elektron/elektron.mp4" controls></video> | ||
| 340 | |||
| 341 | All of this could be made much more interesting if I chose different | ||
| 342 | instruments for different nucleotides, or did more funky stuff with the Elektron. | ||
| 343 | But for now, this should be enough. It is just a proof of concept. Something to | ||
| 344 | play around with. | ||
| 345 | |||
| 346 | ## Going even further | ||
| 347 | |||
| 348 | As you probably noticed, the end results are quite similar to each other. This is | ||
| 349 | to be expected because we are essentially operating with only 4 notes. What | ||
| 350 | could make this more interesting is using something like | ||
| 351 | [SuperCollider](https://supercollider.github.io/) to create more interesting | ||
| 352 | sounds, by transposing notes or using effects based on repeated data in a | ||
| 353 | sequence. Possibilities are endless. | ||
| 354 | |||
| 355 | It is really astonishing what can be achieved with a little bit of code and an | ||
| 356 | idea. I could see this becoming an interesting background soundscape instrument | ||
| 357 | if done properly. It could replace a random note generator with something more | ||
| 358 | intriguing, biological, natural. | ||
| 359 | |||
| 360 | I actually find the results fascinating. I took some time and listened to this | ||
| 361 | music of nature. Even though it's quite the same, it's also quite different. | ||
| 362 | The subtle differences on repeat kind of create music of their own. Makes you | ||
| 363 | wonder. It kind of puts Occam’s Razor in its place. Nature for sure loves to | ||
| 364 | make things as energy efficient as possible. | ||
diff --git a/content/posts/2022-08-13-algae-spotted-on-river-sava.md b/content/posts/2022-08-13-algae-spotted-on-river-sava.md new file mode 100644 index 0000000..ba2dd2b --- /dev/null +++ b/content/posts/2022-08-13-algae-spotted-on-river-sava.md | |||
| @@ -0,0 +1,31 @@ | |||
| 1 | --- | ||
| 2 | title: Aerial photography of algae spotted on river Sava | ||
| 3 | url: aerial-photography-of-algae-spotted-on-river-sava.html | ||
| 4 | date: 2022-08-13T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | This is a bit of a different post than I usually write, but quite interesting | ||
| 10 | one to me. River Sava has plenty of hydropower plants located down the stream. | ||
| 11 | This makes regulating the strength of a current easier than normally. Because of | ||
| 12 | lower stream strength and high temperatures, algae has formed on the river. | ||
| 13 | This is the first time I've seen something like this in my whole life. | ||
| 14 | |||
| 15 | Below are some photographs taken from a DJI drone capturing the event. | ||
| 16 | |||
| 17 |  | ||
| 18 | |||
| 19 |  | ||
| 20 | |||
| 21 |  | ||
| 22 | |||
| 23 |  | ||
| 24 | |||
| 25 |  | ||
| 26 | |||
| 27 |  | ||
| 28 | |||
| 29 | I will try to get more photos of this in the future days and if something | ||
| 30 | intriguing shows up will post it again on the blog. | ||
| 31 | |||
diff --git a/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md b/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md new file mode 100644 index 0000000..e5a0b74 --- /dev/null +++ b/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md | |||
| @@ -0,0 +1,304 @@ | |||
| 1 | --- | ||
| 2 | title: State of Web Technologies and Web development in year 2022 | ||
| 3 | url: state-of-web-technologies-and-web-development-in-year-2022.html | ||
| 4 | date: 2022-10-06T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## Initial thoughts | ||
| 10 | |||
| 11 | *This post is a critique of the current state of web development. It is an | ||
| 12 | opinionated post! I will learn more about this in the future, and probably | ||
| 13 | slightly change my mind about some of the things I criticize.* | ||
| 14 | |||
| 15 | I started working on a hobby project about two weeks ago, and I wanted to | ||
| 16 | use that situation as a learning one. Trying new things, new technologies, new | ||
| 17 | tools. I always considered myself to be an adventurous person when it comes to | ||
| 18 | technology. I never shy away from trying new languages, new operating systems | ||
| 19 | etc. Likewise, I find the whole experience satisfying, and it tickles that part | ||
| 20 | of my brain that finds discovery the highest of the mountains to climb. | ||
| 21 | |||
| 22 | What I always wanted to make was a coding game that you would play in a browser | ||
| 23 | (just to eliminate building binaries for each operating system) where you would | ||
| 24 | level up your character and go into these scriptable battles. You know, RPG | ||
| 25 | elements. | ||
| 26 | |||
| 27 | So, the natural way to go would be some sort of SPA (single page application) | ||
| 28 | with basic routing and some state management. Nothing crazy. | ||
| 29 | |||
| 30 | > **Before we move on**, I have to be transparent. Take my views on this with | ||
| 31 | > a grain of salt. I have only scratched the surface with these technologies, | ||
| 32 | > and my knowledge is full of gaps. This is my experience using some of these | ||
| 33 | > products for the first time or in a limited capacity. | ||
| 34 | |||
| 35 | With this out of the way, I got myself a fresh pot of coffee and down the | ||
| 36 | rabbit hole I went. | ||
| 37 | |||
| 38 | ## Giving React JS a spin | ||
| 39 | |||
| 40 | I first tried [React JS](https://reactjs.org/). I kind of like it. Furthermore, | ||
| 41 | I have worked with libraries like this in the past and also wrote a couple of | ||
| 42 | them (nothing on that level), but I had a basic understanding of what | ||
| 43 | was going on. I spun up a project quickly and had basic things done in a | ||
| 44 | matter of two hours, which was impressive. | ||
| 45 | |||
| 46 | I prefer using [Tailwind CSS](https://tailwindcss.com/) for my styling | ||
| 47 | pleasures, and integrating that was also a painless experience. It was actually | ||
| 48 | nice to see that some things got better with time. In about 2 minutes I got | ||
| 49 | Tailwind working, and I was able to use classes at my disposal. All that | ||
| 50 | `postcss` stuff was taken care of by adding a couple of things in config files | ||
| 51 | (all described really well in their documentation). | ||
| 52 | |||
| 53 | It is not that different from Vue, which I have had more encounters with in the | ||
| 54 | past. People will probably call me a lunatic for saying this. But you know, it is | ||
| 55 | the truth. Same same, but different. I still believe that using libraries like | ||
| 56 | this is beneficial. I am not a JavaScript purist. They all have their quirks, | ||
| 57 | but at the end of the day, I truly believe it’s worth it. | ||
| 58 | |||
| 59 | ## Bundlers and Transpilers | ||
| 60 | |||
| 61 | I still reject calling [Typescript](https://www.typescriptlang.org/) to | ||
| 62 | [JavaScript](https://www.javascript.com/) conversion a "compilation process". I | ||
| 63 | call them [transpilers](https://devopedia.org/transpiler), and I don’t care! 😈 | ||
| 64 | |||
| 65 | And if you want to fight this, take a look at this little chart and be mad at | ||
| 66 | it! | ||
| 67 | |||
| 68 |  | ||
| 69 | |||
| 70 | The first one that I ever used was [webpack](https://webpack.js.org/), and it | ||
| 71 | was an absolutely horrific experience. That said, it is an absolutely fantastic | ||
| 72 | tool. I felt more like a config editor than an actual programmer. To be fair, | ||
| 73 | I am a huge fan of [make](https://www.gnu.org/software/make/), and you can do as | ||
| 74 | you wish with this information. I like my build systems simple. | ||
| 75 | |||
| 76 | Also, isn’t it interesting that we need something like | ||
| 77 | [Babel](https://babeljs.io/) to make JavaScript code work in a browser that has | ||
| 78 | only one client-side scripting language available, which is by no accident also | ||
| 79 | JavaScript. Why? I know why it’s needed, but seriously, why. | ||
| 80 | |||
| 81 | I haven’t used Babel for years now. Or if I did, it was packaged together by | ||
| 82 | some other bundler thingy. Which does not make things better, but at least I | ||
| 83 | didn’t need to worry about it. | ||
| 84 | |||
| 85 | I really don’t like complicated build systems. I really don’t like abstracting | ||
| 86 | code and making things appear magical. The older I get, the more I appreciate | ||
| 87 | clear and clean, expressive code. No one-liners, if possible. | ||
| 88 | |||
| 89 | But I have to give props to [Vite](https://vitejs.dev/)! This was one of the | ||
| 90 | best developer experiences I have ever had. Granted, it still has magical | ||
| 91 | properties. And yes, it still is a bundler and abstracts things to the nth | ||
| 92 | degree. But at least it didn’t force me to configure 700 lines of JSON. And I | ||
| 93 | know that this makes me a hypocrite. You can’t have it all. Nonetheless, my | ||
| 94 | reasoning here is, if using bundlers is inevitable, then at least they should | ||
| 95 | provide an excellent developer experience. | ||
| 96 | |||
| 97 | I also noticed that now the catch-all phrase is “blazingly fast” and “lightning | ||
| 98 | fast” and “next generation” and stuff like that. I mean, yeah, tools should get | ||
| 99 | faster with time. But saying that starting a project now takes 2 seconds instead | ||
| 101 | of 20 seconds is a make-or-break kind of deal is | ||
| 101 | ridiculous. I don’t mind waiting a couple of seconds every couple of days. I | ||
| 102 | also don’t create 700 projects every day, and also who does? This argument has | ||
| 103 | no bite. All I want is a decent reload time (~100ms is more than good enough for | ||
| 104 | me) and that is it. | ||
| 105 | |||
| 106 | You don’t need to sell me on benefits I only get when I start a fresh | ||
| 107 | project, and then try to convince me that this is somehow changing the fate of | ||
| 108 | the universe. First of all, it is not. And second, if this is your only argument | ||
| 109 | for your tool, I would advise you to maybe re-focus your efforts to something | ||
| 110 | else. Vite says that startup times are really fast. And if that were the | ||
| 111 | only thing differentiating it from other tools, I would ignore it. But it has | ||
| 112 | some really compelling features like [Hot Module | ||
| 113 | Replacement](https://www.geeksforgeeks.org/reactjs-hot-module-replacement/) that | ||
| 114 | really works well. It was a joy to use. | ||
| 115 | |||
| 116 | So, I will be definitely using Vite in the future. | ||
| 117 | |||
| 118 | ## Jam Stack, Mach Stack, no snack | ||
| 119 | |||
| 120 | Let's get a couple of the acronyms out of the way, so we all know what we are | ||
| 121 | talking about: | ||
| 122 | |||
| 123 | - Jam Stack - JavaScript, API and Markup | ||
| 124 | - Mach Stack - Microservices, API-first, Cloud-Native SaaS, Headless | ||
| 125 | |||
| 126 | It is so hard to follow all the new trendy things happening around you that it | ||
| 127 | makes you have massive **FOMO** all the time. But on the other hand, you | ||
| 128 | also don’t want to be that old fart that doesn’t move with the times and still | ||
| 129 | writes his trusty jQuery code while listening to Blink-182's "All the Small Things" | ||
| 130 | on full blast. It’s a good song, don’t get me wrong, but there are other songs | ||
| 131 | out there. | ||
| 132 | |||
| 133 | I have to admit. [Vercel](https://vercel.com/) is really cool! Love the | ||
| 134 | simplicity of the service. You could compare it to | ||
| 135 | [Netlify](https://www.netlify.com/). I haven’t tried Netlify extensively, but | ||
| 136 | from a couple of experimental deployments I still prefer Vercel. It is much more | ||
| 137 | streamlined, but maybe that’s just my bias. I really like Vercel’s Analytics, | ||
| 138 | which gives you a [Core Web Vitals report](https://web.dev/vitals/) in their | ||
| 139 | admin console. Kind of cool, I’m not going to lie. | ||
| 140 | |||
| 141 | This whole idea about frontend and backend merging into [SSR (server-side | ||
| 142 | rendering)](https://www.debugbear.com/blog/server-side-rendering) looks so good | ||
| 143 | on paper. It almost doesn’t come with any major flaws. | ||
| 144 | |||
| 145 | But when it comes to the actual implementation, there is much to be desired. | ||
| 146 | I’m going to lump [Next.js](https://nextjs.org/) and | ||
| 147 | [Nuxt.js](https://nuxtjs.org/) together because they are essentially the same | ||
| 148 | thing, just a different library. | ||
| 149 | |||
| 150 | Now comes the reality. Mixing backend and frontend in this manner creates this | ||
| 151 | weird mental model where you kind of rely on magical properties of these | ||
| 152 | libraries. You relinquish control over to them for better developer experience. | ||
| 153 | But is that really true? Initially, I was so stoked about it. However, the more | ||
| 154 | I used them, the more I felt uncomfortable. I felt dirty, actually. Maybe this | ||
| 155 | is because I come from the old ways of doing things, where you control every | ||
| 156 | step of the request, and allowing something to hijack it feels like blasphemy. | ||
| 157 | |||
| 158 | More than that, some pretty significant technical issues arose from this. How do | ||
| 159 | you do JWT token authentication? You put it in the `api` folder and then do some | ||
| 160 | fetching and storing into local state management. But doing this also requires | ||
| 161 | some tinkering with await/async stuff on the React/Vue side of things. And then | ||
| 162 | you need to write middleware for it. And the more I look at it, the more I see | ||
| 163 | that this whole thing was not meant to be used like this, and it all feels and | ||
| 164 | looks like a huge hack. | ||
| 165 | |||
| 166 | The issue I have with this is that they over-promise and under-deliver. They | ||
| 167 | want to be an all-in-one replacement for everything, and they don’t deliver on | ||
| 168 | this promise. And how could they?! We have to be fair. It is an impossible task. | ||
| 169 | |||
| 170 | They sell you [NoOps](https://www.geeksforgeeks.org/overview-of-noops/), but | ||
| 171 | when you need to accomplish something a little bit more out of the scope of | ||
| 172 | Hello World, you have to make hacky decisions to make it work. And having a | ||
| 173 | deployment strategy that relies on many moving parts is never a good idea. | ||
| 174 | Abstracting too much is usually a sign of bad architecture. | ||
| 175 | |||
| 176 | Lately, this has become a huge trend that will for sure bite us in the future. | ||
| 177 | And let’s not get it twisted. By doing this, cloud providers like | ||
| 178 | [AWS](https://aws.amazon.com/), [GCP](https://cloud.google.com/), etc. obscure | ||
| 179 | their billing, and you end up paying more than you really should. And even if | ||
| 180 | that is not an issue, it comes down to the principle of things. AWS is known for | ||
| 181 | having multiple “currencies“ inside their projects like write operations, read | ||
| 182 | operations, etc., which add up, and it creates this impossible-to-track billing | ||
| 183 | scheme. It all behaves suspiciously like a pay-to-win game you could find on | ||
| 184 | mobile phones that scams you out of your money. | ||
| 185 | |||
| 186 | And as far as I am concerned, the most important thing was that I was not | ||
| 187 | coding the functionality for the game I want to make. I was battling libraries | ||
| 188 | and cloud providers: how to deploy, which settings are relevant, bad | ||
| 189 | documentation, multiple ways of achieving the same thing. You are getting bombarded by all | ||
| 190 | this information, and you don’t really have any control over it. | ||
| 191 | Production-ready code becomes a joke, essentially. Especially if you tend to | ||
| 192 | work on that project for a prolonged period of time. | ||
| 193 | |||
| 194 | All of these options end up creating fatigue. What to choose, what not to | ||
| 195 | choose. Unnecessary worrying about whether the stack will still be deemed | ||
| 196 | worthy in six months. There is elegance in simplicity. | ||
| 197 | |||
| 198 | > JavaScript UI frameworks and libraries work in cycles. Every six months or | ||
| 199 | > so, a new one pops up, claiming that it has revolutionized UI development. | ||
| 200 | > Thousands of developers adopt it into their new projects, blog posts are | ||
| 201 | > written, Stack Overflow questions are asked and answered, and then a newer | ||
| 202 | > (and even more revolutionary) framework pops up to usurp the throne. | ||
| 203 | > — Ian Allen | ||
| 204 | |||
| 205 |  | ||
| 206 | |||
| 207 | And this jab at these libraries and cloud providers is not done out of malice. | ||
| 208 | It is a real concern that I have about them. In my life, I have seen | ||
| 209 | technologies come and go, but the basics always stick around. So surrendering | ||
| 210 | all the power you have to a library or a cloud provider is in my opinion a | ||
| 211 | stupid move. | ||
| 212 | |||
| 213 | ## Tailwind CSS still rocks! | ||
| 214 | |||
| 215 | You know, many people say negative things about Tailwind. And after a lot of | ||
| 216 | deliberation, I came to the conclusion that Tailwind is good for two types of | ||
| 217 | developers. Tailwind is good for a complete noob or a senior developer. A | ||
| 218 | complete noob doesn’t really care about the inner workings of CSS, and a senior | ||
| 219 | developer also doesn’t care about CSS. Well, at least, not anymore. And | ||
| 220 | developers in between usually have the biggest issues with it. Not always of | ||
| 221 | course, but in a lot of cases. | ||
| 222 | |||
| 223 | I like the creature comforts of Tailwind. Being utility-first would make me | ||
| 224 | argue that it is actually more similar to [Sass](https://sass-lang.com/) or | ||
| 225 | [Less](https://lesscss.org/) than something like Bootstrap. Not technically, but | ||
| 226 | ideologically. After I started using it, I never looked back. I use it every | ||
| 227 | time I need to do something web related. | ||
| 228 | |||
| 229 | Writing CSS for general things feels like going several steps back. Instead of | ||
| 230 | focusing on what you are actually trying to achieve, you focus on notations like | ||
| 231 | [BEM](https://en.bem.info/methodology/css/), code structuring, optimizing HTML | ||
| 232 | size. Just doing things that make a 0.1% difference. You know that saying: | ||
| 233 | premature optimization is the root of all evil. Exactly that. | ||
| 234 | |||
| 235 | I am also not saying that Tailwind is the cure for everything. Sometimes custom | ||
| 236 | CSS is necessary. But from what I found out in using it for almost two years in | ||
| 237 | a production environment (on a site getting quite a lot of traffic and | ||
| 238 | constantly being changed), I can say without any reservations that Tailwind | ||
| 239 | saved our asses countless times. We would be rewriting CSS all the time without | ||
| 240 | it. And I don’t really think writing CSS is the best way to spend my time. | ||
| 241 | |||
| 242 | I have also noticed that the people who criticize Tailwind the most have never | ||
| 243 | actually used it in a real project with a long lifetime and plenty of changes | ||
| 244 | still to come. | ||
| 245 | |||
| 246 | But you know, whatever floats your boat! | ||
| 247 | |||
| 248 | ## Code maintainability | ||
| 249 | |||
| 250 | Somehow, people also stopped talking about maintenance. If you constantly try to | ||
| 251 | catch the latest and greatest train, you are by that logic always trying new | ||
| 252 | things. Which is a good thing if you want to learn about technologies and try | ||
| 253 | them. But for the production environment, you have to have a stable stack that | ||
| 254 | doesn’t change every 6 months. | ||
| 255 | |||
| 256 | You can lock dependencies for sure. Nevertheless, the hype train moves along | ||
| 257 | anyway. And the mindset this breeds goes against locking the code. This | ||
| 258 | bleeding-edge rolling release cycle is not helping. That is why enterprise | ||
| 259 | solutions usually look down on these popular stacks and only do the bare | ||
| 260 | minimum to appear hip and cool. | ||
| 261 | |||
| 262 | With that said, I still think that progress is good, but it should be taken | ||
| 263 | with a grain of salt. If your project is something that should be built once | ||
| 264 | and then rarely updated, going with the latest stack is a viable option. But if | ||
| 265 | you are working on a project that lasts for years, you should probably approach | ||
| 266 | it with some level of caution. Web development is oftentimes too volatile. | ||
| 267 | |||
| 268 | ## Web development has a marketing issue | ||
| 269 | |||
| 270 | I noticed that almost every project now has this marketing spin put on it. | ||
| 271 | Everything is blazingly fast now. I get it, they are competing for your | ||
| 272 | attention, but what happened to just being truthful and not inflating reality? | ||
| 273 | |||
| 274 | And in order to appeal to the mass market, they leave things out of their | ||
| 275 | marketing materials. These open-source projects are now behaving more and more | ||
| 276 | like companies do. Which is a scary thought in itself. | ||
| 277 | |||
| 278 | We are also seeing the rise of the concept of building a company in the open, | ||
| 279 | which is a good thing, don't get me wrong. But when open-source is used to | ||
| 280 | lure people in and then lock them into an ecosystem, that is where I have | ||
| 281 | issues with it. | ||
| 282 | |||
| 283 | This might be because I have been using GNU/Linux for 20 years now and owe so | ||
| 284 | much of my success to open-source that I see issues when open-source is being | ||
| 285 | used to lull people into a false sense of security that these projects are | ||
| 286 | built in the spirit of open-source. Because there is a difference. They are | ||
| 287 | NOT! They have a really specific goal in mind, and open-source is being used | ||
| 288 | as a delivery system. Which is, in my opinion, disgusting! | ||
| 289 | |||
| 290 | ## Conclusion | ||
| 291 | |||
| 292 | I will end my post with this. Web development is now running in circles. People | ||
| 293 | are discovering [RPC](https://www.tutorialspoint.com/remote-procedure-call-rpc) | ||
| 294 | now, and it is the next big thing. [GraphQL](https://graphql.org/) is | ||
| 295 | so passé. And I am so tired of it all. Of blazingly fast libraries, of all these | ||
| 296 | new technologies that are actually just a remake of old ones. Of just the | ||
| 297 | general spirit of the web. I will just use what I already know. Which worked 10 | ||
| 298 | years ago and will work 10 years after this. I will adopt a couple of little | ||
| 299 | tools like Vite. But I will not waste my time on this anymore. | ||
| 300 | |||
| 301 | It was a good exercise to get in touch with what’s new now. Nothing really | ||
| 302 | changed that much. FOMO is now cured! Now I have to get my ass back to actually | ||
| 303 | coding and making the project that I wanted to make in the first place. | ||
| 304 | |||
diff --git a/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md b/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md new file mode 100644 index 0000000..47a6212 --- /dev/null +++ b/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md | |||
| @@ -0,0 +1,66 @@ | |||
| 1 | --- | ||
| 2 | title: Microsoundtrack — That sound that machine makes when struggling | ||
| 3 | url: that-sound-that-machine-makes-when-struggling.html | ||
| 4 | date: 2022-10-16T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | A couple of months ago, I got an idea about micro soundtracks. In this concept, | ||
| 10 | you are the observer, director, and audience of these tiny movies. | ||
| 11 | |||
| 12 | What you do is attempt to imagine what would be happening around you based on | ||
| 13 | the title of the song and let the song help you fill the void in your story. | ||
| 14 | |||
| 15 | I made these songs in Logic Pro X. Every year or so I do this kind of thing and | ||
| 16 | make a couple of songs similar to this. But this is the first time I am posting | ||
| 17 | about it. | ||
| 18 | |||
| 19 | You can listen to the whole set on | ||
| 20 | [YouTube](https://www.youtube.com/watch?v=_5oXBhSmF3c) or scroll down the page | ||
| 21 | to the embedded players for each song. | ||
| 22 | |||
| 23 | ## A bunch of inter-dimensional people with loud clocks | ||
| 24 | |||
| 25 | A group of inter-dimensional people are going up and down the elevator with you | ||
| 26 | while wearing loud clocks around their necks. Each clock ticks at a different | ||
| 27 | frequency. A lot of other sounds are getting drawn into your dimension, | ||
| 28 | resulting in a strange merging of dimensions. | ||
| 29 | |||
| 30 | <iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1349272965/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe> | ||
| 31 | |||
| 32 | ## Two black holes conversing about the weather | ||
| 33 | |||
| 34 | You are a traveler in a spaceship flying very close to two colliding black holes | ||
| 35 | having a discussion about the weather while tearing each other apart. During all | ||
| 36 | this your ship is getting pulled into the event horizon of both black holes, | ||
| 37 | putting a lot of strain on your spaceship. | ||
| 38 | |||
| 39 | <iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1756714200/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe> | ||
| 40 | |||
| 41 | ## A planet where every organism is a plant | ||
| 42 | |||
| 43 | You land on a planet where every living organism is a plant and among those | ||
| 44 | plants some of them are highly intelligent, and you were asked to make first | ||
| 45 | contact with the native species. Your visit takes place in a giant cave where | ||
| 46 | you are meeting these plants, and they are talking to you. | ||
| 47 | |||
| 48 | <iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=3710973979/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe> | ||
| 49 | |||
| 50 | ## Bio implants having a fit and reprogramming your brain | ||
| 51 | |||
| 52 | In a distant future where everybody has bio implants, you have just received | ||
| 53 | your first one, which happens to be a brain implant. Something goes wrong, and | ||
| 54 | your implant is starting to misbehave, and you are experiencing brain | ||
| 55 | malfunctions. You are on the streets at night a couple of hours after your | ||
| 56 | procedure. You can feel your sanity breaking down. | ||
| 57 | |||
| 58 | <iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1157430581/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe> | ||
| 59 | |||
| 60 | ## Cow animation | ||
| 61 | |||
| 62 | I also made this little cow animation. Go into full screen to see the effects in | ||
| 63 | more detail. | ||
| 64 | |||
| 65 | <video src="/assets/microsoundtrack/cow.m4v" controls loop></video> | ||
| 66 | |||
diff --git a/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md b/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md new file mode 100644 index 0000000..27e227a --- /dev/null +++ b/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md | |||
| @@ -0,0 +1,253 @@ | |||
| 1 | --- | ||
| 2 | title: Trying to build a New kind of terminal emulator for the modern age | ||
| 3 | url: trying-to-build-a-new-kind-of-terminal-emulator.html | ||
| 4 | date: 2023-01-26T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | Over the past few weeks, I have been really thinking about terminal emulators, | ||
| 10 | how we interact with computers, and the separation of text-based programs and GUI | ||
| 11 | ones. To be perfectly honest, I got pissed off one evening when I was cleaning | ||
| 12 | up files on my computer. Normally, I go into the console, run `ncdu`, and check | ||
| 13 | where the junk is. Then I start deleting stuff. Without any discrimination, | ||
| 14 | usually. But when it comes to screenshots, I have learned that it's good to keep | ||
| 15 | them somewhere near if I need to refer to something that I was doing. I am an | ||
| 16 | avid screenshot taker. So at that point I checked the Pictures folder and also did a | ||
| 17 | basic search `find . -type f -name "*.jpg"` for all the JPEG files in my home | ||
| 18 | directory and immediately got pissed off. Why can’t I see thumbnails in my | ||
| 19 | terminal? I know why, but why, in the year 2022, is this still a problem? I am | ||
| 20 | used to traversing my disk via terminal. I am faster, and I am more comfortable | ||
| 21 | this way. But when it comes to visualization, I then need to revert to GUI | ||
| 22 | applications and again find the same file to see it. I know that programs like | ||
| 23 | `feh` and `sxiv` are available, but I would just like to see the preview. Like | ||
| 24 | [Jupyter notebook](https://jupyter.org/) or something similar. Just having it | ||
| 25 | inline. Part of a result. | ||
| 26 | |||
| 27 | It also didn’t help that I was spending some time with the [Plan | ||
| 28 | 9](https://plan9.io/plan9/) operating system. More specifically | ||
| 29 | [9FRONT](http://9front.org/). The way that [ACME editor](http://acme.cat-v.org/) | ||
| 30 | handles text editing is just wonderful. Different and fresh somehow, even though | ||
| 31 | it’s super old. | ||
| 32 | |||
| 33 | So, I went on the lookout for an interesting way of visualizing the results of some | ||
| 34 | query. I found these applications to be outstanding examples of how not to be a | ||
| 35 | captive of a predetermined way of doing things. | ||
| 36 | |||
| 37 | - [Wolfram Mathematica](https://www.wolfram.com/mathematica/) | ||
| 38 | - [Jupyter notebooks](https://jupyter.org/) | ||
| 39 | - [Plan 9 / 9FRONT](http://www.9front.org) | ||
| 40 | - [TempleOS](https://templeos.org/) | ||
| 41 | - [Emacs](https://www.gnu.org/software/emacs/) | ||
| 42 | |||
| 43 | My idea is not as out there as ACME is, but it is a spin on the terminal | ||
| 44 | emulators. I like the modes that Vi/Vim provide you with. I like the way | ||
| 45 | Emacs does its own `M-x` and `M-c`. Furthermore, I really like how Mathematica and | ||
| 46 | Jupyter present the data in a free-flowing form. And I love how TempleOS is | ||
| 47 | basically a C interpreter on some level. | ||
| 48 | |||
| 49 | > **Note:** This is part 1 of the journey. Nowhere near finished yet. I am just | ||
| 50 | > tinkering with this at the moment. This whole thing can easily fail | ||
| 51 | > spectacularly. | ||
| 52 | |||
| 53 | So I started. I knew that I wanted to have a couple of modes, but I didn’t | ||
| 54 | like the repetition of keystrokes, so the only option was to have some sort of | ||
| 55 | toggle and indicate to the user that they are in a special mode. Like Vi does | ||
| 56 | for Normal and Visual mode. | ||
| 57 | |||
| 58 | For the first version, these modes would be: | ||
| 59 | |||
| 60 | - *Preview mode* (toggle with Ctrl + P) | ||
| 61 | - When this mode would be enabled, the `ls` command would try to find images | ||
| 62 | from the results and display thumbnails from them in the terminal itself. | ||
| 63 | No ASCII art. Proper images. In a grid! | ||
| 64 | - *Detach mode* (toggle with Ctrl + D) | ||
| 65 | - When this mode would be enabled, every command would open a new window | ||
| 66 | and execute that command in it. This would be useful for starting `htop` | ||
| 67 | in a separate window. | ||
| 68 | |||
| 69 | The reason for having these modes togglable is to not ask for previews every | ||
| 70 | time. You enable a mode and until you disable it, it behaves that way. Purely | ||
| 71 | out of ergonomic reasons. | ||
| 72 | |||
| 73 | I would like to treat every terminal I open as a session mentally. When I start | ||
| 74 | using the terminal, I start digging deeper into the issue I am trying to | ||
| 75 | resolve. And while I am doing this, I would like to open detached windows | ||
| 76 | etc. A lot of these things can be done easily with something like | ||
| 77 | [i3](https://i3wm.org/), but those also pull you out of the context of what you | ||
| 78 | were doing. I would like to orchestrate everything from one single point. | ||
| 79 | |||
| 80 | In planning for this project, I knew that I would need to use a language like C | ||
| 81 | and a library such as [SDL2](https://www.libsdl.org/) in order to achieve the | ||
| 82 | desired results. I had considered other options, but ultimately determined that | ||
| 83 | [SDL2](https://www.libsdl.org/) was the best fit based on its capabilities and | ||
| 84 | reputation in the programming community. | ||
| 85 | |||
| 86 | At first, I thought the idea of a hardware accelerated terminal was a bit of a | ||
| 87 | joke. It seemed like such a niche and unnecessary feature, especially given the | ||
| 88 | fact that terminal emulators have been around for decades and have always relied | ||
| 89 | on software rendering. But to be fair, [Alacritty](https://alacritty.org/) is | ||
| 90 | doing the same thing. Well, they are doing a remarkable job at it. | ||
| 91 | |||
| 92 | So, I embarked on a journey. Everything has to start somewhere. For me, it | ||
| 93 | started with creating a window! 🙂 | ||
| 94 | |||
| 95 | ```c | ||
| 96 | // Oh, Hi Mark! | ||
| 97 | // Create the window, obviously. | ||
| 98 | SDL_Window *window = SDL_CreateWindow( | ||
| 99 | WINDOW_TITLE, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, | ||
| 100 | WINDOW_WIDTH, WINDOW_HEIGHT, | ||
| 101 | SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN); | ||
| 102 | ``` | ||
| 103 | |||
| 104 | I continued like this to get some text displayed on the screen. | ||
| 105 | |||
| 106 | I noted that | ||
| 107 | [`TTF_RenderText_Solid`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_Solid) | ||
| 108 | rendered text really poorly. There was no antialiasing at all. In my wisdom, I | ||
| 109 | never checked the documentation. Well, that was a fail. To the uneducated like me: | ||
| 110 | `TTF_RenderText_Solid` renders Latin1 text at fast quality to a new 8-bit | ||
| 111 | surface. So, that's why the text looked like shit. No wonder. | ||
| 112 | |||
| 113 | Remarks on `TTF_RenderText_Solid`: This function will allocate a new 8-bit, | ||
| 114 | palettized surface. The surface's 0 pixel will be the colorkey, giving a | ||
| 115 | transparent background. The 1 pixel will be set to the text color. | ||
| 116 | |||
| 117 | After I replaced it with | ||
| 118 | [`TTF_RenderText_LCD`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_LCD) which | ||
| 119 | renders Latin1 text at LCD subpixel quality to a new ARGB surface, the text | ||
| 120 | started looking good. Really make sure you read the documentation. It’s actually | ||
| 121 | good. As a side note, you can find all the documentation regarding [SDL2 on | ||
| 122 | their Wiki](https://wiki.libsdl.org/). | ||
| 123 | |||
| 124 | After that was done, I started working on displaying other things like `Preview` | ||
| 125 | and `Detach` modes. This wasn’t really that hard. In SDL2 you can check all the | ||
| 126 | available events with `while (SDL_PollEvent(&event) > 0)` and have a bunch of | ||
| 127 | switch statements to determine which key is currently being pressed. More about | ||
| 128 | keys at [SDLKey](https://documentation.help/SDL/sdlkey.html) and more about | ||
| 129 | polling the events at | ||
| 130 | [SDL_PollEvent](https://documentation.help/SDL/sdlpollevent.html). | ||
| 131 | |||
| 132 | ```c | ||
| 133 | while (SDL_PollEvent(&event) > 0) | ||
| 134 | { | ||
| 135 | switch (event.type) | ||
| 136 | { | ||
| 137 | case SDL_QUIT: | ||
| 138 | running = false; | ||
| 139 | break; | ||
| 140 | |||
| 141 | case SDL_TEXTINPUT: | ||
| 142 | if (!meta_key_pressed) | ||
| 143 | { | ||
| 144 | strncat(input_prompt_text, event.text.text, 1); | ||
| 145 | update_input_prompt = true; | ||
| 146 | } | ||
| 147 | break; | ||
| 148 | } | ||
| 149 | } | ||
| 150 | ``` | ||
| 151 | |||
| 152 | After that was somewhat working correctly, I started creating a struct that | ||
| 153 | would hold all the commands and results, and I call them Cells. Yes, I stole that | ||
| 154 | naming idea from Jupyter. | ||
| 155 | |||
| 156 | ```c | ||
| 157 | typedef struct | ||
| 158 | { | ||
| 159 | char *command; | ||
| 160 | char *result; | ||
| 161 | SDL_Surface *surface; | ||
| 162 | SDL_Texture *texture; | ||
| 163 | SDL_Rect rect; | ||
| 164 | } Cell; | ||
| 165 | ``` | ||
| 166 | |||
| 167 | I am at a place now where I am starting to implement scrolling. This will for | ||
| 168 | sure be fun to code. Memory management in C is super easy. 😂 | ||
| 169 | |||
| 170 | I have also added a simple [INI file like | ||
| 171 | configuration](https://en.wikipedia.org/wiki/INI_file) support. It is done in an | ||
| 172 | [STB style of | ||
| 173 | header](https://github.com/nothings/stb/blob/master/docs/stb_howto.txt) and maps | ||
| 174 | to specific options supported by the terminal. It is not universal, and the code | ||
| 175 | below demonstrates how I will use it in the future. | ||
| 176 | |||
| 177 | ```c | ||
| 178 | #ifndef CONFIG_H | ||
| 179 | #define CONFIG_H | ||
| 180 | |||
| | #include <stdio.h> | ||
| | #include <string.h> | ||
| | |||
| 181 | /* | ||
| 182 | # This is a comment | ||
| 183 | |||
| 184 | # This is the first configuration option | ||
| 185 | dettach=value11111 | ||
| 186 | |||
| 187 | # This is the second configuration option | ||
| 188 | preview=value22222 | ||
| 189 | |||
| 190 | # This is the third configuration option | ||
| 191 | debug=value33333 | ||
| 192 | */ | ||
| 193 | |||
| 194 | // Define a struct to hold the configuration options | ||
| 195 | typedef struct | ||
| 196 | { | ||
| 197 | char dettach[256]; | ||
| 198 | char preview[256]; | ||
| 199 | char debug[256]; | ||
| 200 | } Config; | ||
| 201 | |||
| 202 | // Read the configuration file and return the options as a struct | ||
| 203 | extern Config read_config_file(const char *filename) | ||
| 204 | { | ||
| 205 | // Create a struct to hold the configuration options | ||
| 206 | Config config = {0}; | ||
| 207 | |||
| 208 | // Open the configuration file; bail out with zeroed defaults on failure | ||
| 209 | FILE *file = fopen(filename, "r"); | ||
| | if (file == NULL) | ||
| | return config; | ||
| 210 | |||
| 211 | // Read each line from the file | ||
| 212 | char line[256]; | ||
| 213 | while (fgets(line, sizeof(line), file)) | ||
| 214 | { | ||
| 215 | // Check if this line is a comment or empty | ||
| 216 | if (line[0] == '#' || line[0] == '\n') | ||
| 217 | continue; | ||
| 218 | |||
| 219 | // Parse the line to get the option and value (bounded to avoid overflow) | ||
| 220 | char option[128], value[128]; | ||
| 221 | if (sscanf(line, "%127[^=]=%127s", option, value) != 2) | ||
| 222 | continue; | ||
| 223 | |||
| 224 | // Set the value of the appropriate option in the config struct | ||
| 225 | if (strcmp(option, "dettach") == 0) | ||
| 226 | { | ||
| 227 | strncpy(config.dettach, value, sizeof(config.dettach) - 1); | ||
| 228 | } | ||
| 229 | else if (strcmp(option, "preview") == 0) | ||
| 230 | { | ||
| 231 | strncpy(config.preview, value, sizeof(config.preview) - 1); | ||
| 232 | } | ||
| 233 | else if (strcmp(option, "debug") == 0) | ||
| 234 | { | ||
| 235 | strncpy(config.debug, value, sizeof(config.debug) - 1); | ||
| 236 | } | ||
| 237 | } | ||
| 238 | |||
| 239 | // Close the configuration file | ||
| 240 | fclose(file); | ||
| 241 | |||
| 242 | // Return the configuration options | ||
| 243 | return config; | ||
| 244 | } | ||
| 245 | |||
| 246 | #endif | ||
| 247 | ``` | ||
| 248 | |||
| 249 | This is as far as I managed to get for now. I have a day job, and that | ||
| 250 | prevents me from working on these things full time. But I should probably get back | ||
| 251 | and finish this. At least get a simple version working, so I can start | ||
| 252 | testing it on my machines. Fingers crossed. 🕵️♂️ | ||
| 253 | |||
diff --git a/content/posts/2023-05-16-rekindling-my-love-for-programming.md b/content/posts/2023-05-16-rekindling-my-love-for-programming.md new file mode 100644 index 0000000..3c2267b --- /dev/null +++ b/content/posts/2023-05-16-rekindling-my-love-for-programming.md | |||
| @@ -0,0 +1,74 @@ | |||
| 1 | --- | ||
| 2 | title: Rekindling my love for programming and enjoying the act of creating | ||
| 3 | url: rekindling-my-love-for-programming.html | ||
| 4 | date: 2023-05-16T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | Programming can be a challenging and rewarding experience, but sometimes it's | ||
| 10 | easy to feel burnt out or disinterested. I lost my passion for coding over | ||
| 11 | the past couple of months, and it looked like I would never enjoy coding as | ||
| 12 | much as I once did. | ||
| 13 | |||
| 14 | I was feeling burnt out with programming. I thought taking a break from it and | ||
| 15 | focusing on other activities that I enjoy might be helpful. This way, I could | ||
| 16 | come back to programming with a fresh perspective and renewed energy. I also | ||
| 17 | thought about learning a new programming language or technology to keep things | ||
| 18 | interesting and challenging. | ||
| 19 | |||
| 20 | However, what I didn't realize was that learning a new language or technology | ||
| 21 | wasn't going to solve the underlying issue. I needed to take a step back and | ||
| 22 | re-evaluate why I had lost my passion for programming in the first place. This | ||
| 23 | involved taking a deep look into what I was doing that resulted in this rut. | ||
| 24 | |||
| 25 | Sometimes, it's easy to get caught up in the hype of new technologies or | ||
| 26 | languages, and we can feel like we're missing out if we're not constantly | ||
| 27 | learning and experimenting. However, it's important to remember that the latest | ||
| 28 | and greatest isn't always the best fit for our projects or our | ||
| 29 | interests. Instead of constantly chasing the next big thing, it can be helpful | ||
| 30 | to focus on what truly interests us and what we're passionate about. This can | ||
| 31 | help us stay motivated and engaged with our work, rather than feeling like we're | ||
| 32 | just going through the motions. | ||
| 33 | |||
| 34 | As I said, I had lost my passion for coding over the past couple of | ||
| 35 | months, and I realized that the reason behind it was my tendency to spread | ||
| 36 | myself too thin and not focus on completing interesting projects. To | ||
| 37 | regain my passion for coding, I need to focus on projects that truly interest me | ||
| 38 | and give me a sense of purpose and motivation. | ||
| 39 | |||
| 40 | Recently, I have been playing World of Warcraft more frequently and have become | ||
| 41 | interested in developing addons for the game. | ||
| 42 | |||
| 43 | This quickly resulted in me creating three quality-of-life addons, and I | ||
| 44 | subsequently developed a more comprehensive addon that encapsulates all | ||
| 45 | the others I made. | ||
| 46 | |||
| 47 | I found it interesting that this action sparked a new interest in me. | ||
| 48 | Additionally, I discovered the Lua language, which reminded me that coding | ||
| 49 | should be fun rather than just a struggle with a language. It should be pure, | ||
| 50 | unadulterated fun. | ||
| 51 | |||
| 52 | I wasn't fighting the syntax, nor was I focused on finding the most optimal | ||
| 53 | solution. I simply created things without the pressure of making them the best | ||
| 54 | they could possibly be. | ||
| 55 | |||
| 56 | This made me realize that I actually adore simple languages that get out of the | ||
| 57 | way and let you express what you want to do. It forced me to rethink a lot about | ||
| 58 | what I use and what I actually enjoy. | ||
| 59 | |||
| 60 | I have decided to stick to the basics. For a scripting language, I will use | ||
| 61 | Lua. For networking, I will use Golang. And for any special needs, I will rely | ||
| 62 | on C. I do not require Rust, Nim, or Zig. This selection is more than sufficient | ||
| 63 | for my needs. I have to stay true to this simplicity. There is something to | ||
| 64 | Occam's razor. | ||
| 65 | |||
| 66 | I've been struggling with a lack of creativity lately, but now I'm experiencing | ||
| 67 | a real change. I realized I needed to take a step back and stop actively trying | ||
| 68 | to address the issue. I needed to stop worrying and overthinking it. I simply | ||
| 69 | needed some time. Looking back, I don't think I've taken any significant time | ||
| 70 | off in the last 10 years. | ||
| 71 | |||
| 72 | Suddenly, I find myself with the energy and passion to complete multiple small | ||
| 73 | projects. It doesn't feel like a chore at all. Who knew I needed WoW to | ||
| 74 | kickstart everything? Inspiration really does come from the strangest places. | ||
diff --git a/content/posts/2023-05-22-crafting-stories-in-zed-editor.md b/content/posts/2023-05-22-crafting-stories-in-zed-editor.md new file mode 100644 index 0000000..dc22e95 --- /dev/null +++ b/content/posts/2023-05-22-crafting-stories-in-zed-editor.md | |||
| @@ -0,0 +1,88 @@ | |||
| 1 | --- | ||
| 2 | title: From General Zod to Superman - Crafting Stories in Zed Editor | ||
| 3 | url: crafting-stories-in-zed-editor.html | ||
| 4 | date: 2023-05-22T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | Pretentious title! Good start! I have nothing to add to this discussion. I just | ||
| 10 | like this editor and wanted to write something here that will remind me to use | ||
| 11 | it again in a while when/if it becomes available for Linux. | ||
| 12 | |||
| 13 | **TLDR:** I think this code editor is very cool and has massive potential. I | ||
| 14 | hope they don’t mess it up by adding a plugin ecosystem! | ||
| 15 | |||
| 16 | Out of morbid curiosity, I started using the [Zed editor](https://zed.dev/) on | ||
| 17 | my Mac. Zed is a high-performance, multiplayer code editor developed by the | ||
| 18 | creators of Atom and Tree-sitter. Written in Rust so it has to be blazingly | ||
| 19 | fast! 😊 It's a joke, calm down. | ||
| 20 | |||
| 21 | Over the past year, I have switched between [Helix | ||
| 22 | editor](https://helix-editor.com/) and [VS | ||
| 23 | Code](https://code.visualstudio.com/), but for the last couple of months, I have | ||
| 24 | been using Helix exclusively. | ||
| 25 | |||
| 26 | I've been genuinely impressed by Zed. When you open a file, it automatically | ||
| 27 | detects its type and downloads the corresponding [LSP (language | ||
| 28 | server)](https://en.wikipedia.org/wiki/Language_Server_Protocol). The list of | ||
| 29 | supported languages is not extensive, but it's still impressive. It's a great | ||
| 30 | example of how to create a product that stays out of your way. | ||
| 31 | |||
| 32 |  | ||
| 33 | |||
| 34 | For C development, it downloaded [clangd](https://clangd.llvm.org/), and setting | ||
| 35 | up missing dependencies in code was rather easy. For this project I use | ||
| 36 | [SDL2](https://www.libsdl.org/) to render a terminal emulator. It’s a hobby | ||
| 37 | project, don’t worry about it. | ||
| 38 | |||
| 39 | If you are going to give this a try and you are using C, I suggest checking two | ||
| 40 | files in the root of your project folder. If you don't have them, create them. | ||
| 41 | |||
| 42 | **compile_flags.txt** | ||
| 43 | |||
| 44 | ``` | ||
| 45 | -I/opt/homebrew/include | ||
| 46 | -I/opt/homebrew/include/SDL2 | ||
| 47 | ``` | ||
| 48 | |||
| 49 | An easy way to check the appropriate includes for a specific library is to | ||
| 50 | use `pkg-config`, in my case `pkg-config SDL2 --cflags-only-I`. But this is | ||
| 51 | nothing new to C/C++ devs. Just a note for people who are used to Visual Studio. | ||
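
Since clangd expects one flag per line in `compile_flags.txt`, the `pkg-config` output can be split into lines. A small sketch; the `flags` string below simulates the `pkg-config SDL2 --cflags-only-I` output from a Homebrew install, and your paths will differ:

```shell
# Simulated `pkg-config SDL2 --cflags-only-I` output; on a real machine
# you would use: flags=$(pkg-config SDL2 --cflags-only-I)
flags='-I/opt/homebrew/include -I/opt/homebrew/include/SDL2'

# clangd wants one flag per line, so split on spaces when writing the file.
printf '%s\n' $flags > compile_flags.txt
cat compile_flags.txt
```

This produces exactly the two-line `compile_flags.txt` shown above.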
| 52 | |||
| 53 | **.clang-format** | ||
| 54 | |||
| 55 | ``` | ||
| 56 | ColumnLimit: 220 | ||
| 57 | BasedOnStyle: Mozilla | ||
| 58 | ``` | ||
| 59 | |||
| 60 | I prefer the Mozilla coding style for C, and this is where you can set that up. | ||
| 61 | |||
| 62 | They really have something special here. Since there is no version available | ||
| 63 | for Linux yet, I will stick to Helix for now. This impressive piece of | ||
| 64 | engineering is, above all, an amazing example of craftsmanship. | ||
| 65 | |||
| 66 | They have a bunch of amazing integrated functionalities, like live desktop | ||
| 67 | sharing and code sharing in a live coding session. There is a lot of pretentious | ||
| 68 | marketing speak on their site, but the product is still amazing! | ||
| 69 | |||
| 70 | For me, the speed and the simplicity of the product were the most impressive | ||
| 71 | things. You get that “it just works” feeling. A rare thing in 2023. | ||
| 72 | |||
| 73 |  | ||
| 74 | |||
| 75 | They also managed to add [Github Copilot](https://github.com/features/copilot) | ||
| 76 | in an unobtrusive way. To me, everything feels very intentional and | ||
| 77 | specifically selected. It's minimal yet maximally effective. | ||
| 78 | |||
| 79 | <video src="https://zed.dev/img/post/copilot/copilot-demo.webm" autoplay loop></video> | ||
| 80 | |||
| 81 | It is a perfect balance between VS Code, JetBrains IDEs, and something like Vim | ||
| 82 | or Helix. | ||
| 83 | |||
| 84 | I just hope they **DON’T** add plugin support and keep it like it is. As the | ||
| 85 | vendor, they should add features with great deliberation and thought. This way | ||
| 86 | the product will stay fast and focused. That’s my two cents. | ||
| 87 | |||
| 88 | Amazing job! | ||
diff --git a/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md b/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md new file mode 100644 index 0000000..e82f50b --- /dev/null +++ b/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md | |||
| @@ -0,0 +1,71 @@ | |||
| 1 | --- | ||
| 2 | title: I think I was completely wrong about Git workflows | ||
| 3 | url: i-was-wrong-about-git-workflows.html | ||
| 4 | date: 2023-05-23T12:00:00+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | tags: [] | ||
| 8 | --- | ||
| 9 | |||
| 10 | I have been using some approximation of [Git | ||
| 11 | Flow](https://jeffkreeftmeijer.com/git-flow/) for years now and never really | ||
| 12 | questioned it, to be honest. When I create a repo, I create a develop branch, set | ||
| 13 | it as the default one, and then merge to master from there. Seems reasonable enough. | ||
| 14 | |||
| 15 | One thing that I have learned is that long-living branches are the devil. They | ||
| 16 | always end up making a huge mess when they eventually need to be merged into | ||
| 17 | master. By that reasoning, what is the develop branch if not the longest-living | ||
| 18 | feature branch? And from my personal experience, there was never a situation | ||
| 19 | where I wasn’t sweating bullets when I had to merge develop back into master. | ||
| 20 | |||
| 21 | This realisation started to give me pause. So why the hell am I doing this, and | ||
| 22 | is there a better way? Well, the solution was always there. And it comes in the | ||
| 23 | form of [git tags](https://git-scm.com/book/en/v2/Git-Basics-Tagging). | ||
| 24 | |||
| 25 | So what are git tags? Git tags are references to specific points in a Git | ||
| 26 | repository's history. They are used to mark important milestones, such as | ||
| 27 | releases or significant commits, making it easier to identify and access | ||
| 28 | specific versions of a project. | ||
| 29 | |||
| 30 | Somehow we have all hijacked the meaning of the master branch so that it has to | ||
| 31 | be the most releasable version of the code. This is also where the confusion | ||
| 32 | about versioning software kicks in, because a master branch implicitly says that | ||
| 33 | we are dealing with rolling-release software. And by having a develop | ||
| 34 | branch, we are hacking around this confusion. With a separation of develop and | ||
| 35 | master, we lock functionality into place, forcing a stable versus a development | ||
| 36 | version of the software. | ||
| 37 | |||
| 38 | But if that is true and long-living branches are the devil, then why have | ||
| 39 | develop at all? I think most of this comes down to how continuous integration is | ||
| 40 | being done. There is usually no granular access to tags, and CD software deploys | ||
| 41 | whatever is present on a specific branch, be that master for production or | ||
| 42 | develop for staging. This is a gross simplification, and by having it in place | ||
| 43 | we have completely removed tagging as a viable way to create a fixed point in the | ||
| 44 | software cycle that says: this is the production-ready code. | ||
| 45 | |||
| 46 | One cool thing about tags is that you can check out a specific tag, so they | ||
| 47 | behave very similarly to branches in that regard. And you don’t have the | ||
| 48 | overhead of maintaining two mainline branches. | ||
| 49 | |||
| 50 | So what is the solution? One approach is a trunk-based workflow, where all | ||
| 51 | changes are made on small branches and continuously merged into | ||
| 52 | master. When the software is ready to be pushed to production, you tag the | ||
| 53 | master branch. This approach eliminates the need for long-lived branches and | ||
| 54 | simplifies the development process. It also encourages developers to make small, | ||
| 55 | incremental changes that can be tested and deployed quickly. However, this | ||
| 56 | approach may not be suitable for all projects or teams that heavily rely on | ||
| 57 | automated deployment based on branch names only. | ||
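
That flow can be sketched in a few commands. Here is a minimal, self-contained demo in a throwaway repo; the version number and messages are illustrative:

```shell
set -e
# Throwaway repo to demonstrate the tag-as-release-marker idea.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Small branches get merged into master continuously; a single
# empty commit stands in for that merged work here.
git commit -q --allow-empty -m "feature merged into master"

# When the code is production-ready, mark the point with an annotated tag.
git tag -a v1.0.0 -m "Release 1.0.0"
git tag -l

# A tag can be checked out just like a branch (detached HEAD).
git checkout -q v1.0.0
```

The CD side then deploys from tag pushes instead of branch heads, which is the granular access most pipelines are missing.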
| 58 | |||
| 59 | This also requires that developers always keep production in mind. No more | ||
| 60 | living on an island of the develop branch. All your actions and code need to be | ||
| 61 | ready to meet production standards on a much smaller timescale. | ||
| 62 | |||
| 63 | I think that we have complicated the workflow in an honest attempt to make | ||
| 64 | things more streamlined but in the process of doing this, we have inadvertently | ||
| 65 | made our lives much more complicated. | ||
| 66 | |||
| 67 | In conclusion, it's important to re-evaluate our workflows from time to time to | ||
| 68 | see if they still make sense and if there are better alternatives available. | ||
| 69 | Long-living branches can be problematic, and using tags to mark important | ||
| 70 | milestones can simplify the development process. | ||
| 71 | |||
diff --git a/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md b/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md new file mode 100644 index 0000000..fd44605 --- /dev/null +++ b/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md | |||
| @@ -0,0 +1,159 @@ | |||
| 1 | --- | ||
| 2 | title: "Re-Inventing Task Runner That I Actually Used Daily" | ||
| 3 | url: re-inventing-task-runner-that-i-actually-used-daily.html | ||
| 4 | date: 2023-05-31T12:21:10+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | A couple of months ago I had this brilliant idea of re-inventing the wheel by | ||
| 10 | making an alternative to make. And so I went, boldly into battle. To my | ||
| 11 | big surprise, my attempt resulted in a not completely useless piece of software. | ||
| 12 | |||
| 13 | My initial requirements were quite simple but soon grew into something more | ||
| 14 | ambitious. Looking back, I should have stuck to the simple version. My | ||
| 15 | laziness was on my side this time, though. Because I never implemented some of | ||
| 16 | the features, I now realise I really didn’t need them; they would have bogged | ||
| 17 | down the whole program and made it something it was never meant to be. | ||
| 18 | |||
| 19 | My basic requirements were the following: | ||
| 20 | |||
| 21 | - Syntax should be a tiny bit inspired by Rake and Rakefiles. | ||
| 22 | - Should borrow the overall feel of a unit test experience. | ||
| 23 | - Using something like Python would be a bit of an overkill. | ||
| 24 | - The program must be statically compiled, so it can run on the same | ||
| 25 | architecture without libc, musl dependencies or things like that. | ||
| 26 | - Installing Ruby for Rake is a bit of an overkill and cannot easily be done on | ||
| 27 | certain really lightweight distributions like Alpine Linux. This tool should be | ||
| 28 | usable on such lightweight systems for remote debugging. | ||
| 29 | - I want to use it for more than just compiling things. I want to use it as an | ||
| 30 | entry-point into a project, and I want this to help me indirectly document the | ||
| 31 | project as well. | ||
| 32 | - It should be an abstraction over bash shell or the default system shell. | ||
| 33 | - Each task essentially becomes its own shell instance. | ||
| 34 | - Must work on Linux and macOS systems. | ||
| 35 | - By default, running `erd` lists all the available tasks (when I use make, I | ||
| 36 | usually put a disclaimer that you should check the Makefile to see all available | ||
| 37 | targets). | ||
| 38 | - Should support passing arguments when you run it from a shell. | ||
| 39 | - Normal variables are the same as environment variables. There is no | ||
| 40 | distinction. Every variable is also essentially an environment variable and | ||
| 41 | can be used by other programs. | ||
| 42 | - State between tasks is not shared, which makes these “pure” shell instances. | ||
| 43 | - Should be single-threaded for the start and later expanded with `@spawn` | ||
| 44 | command. | ||
| 45 | - Variables behave like macros and are preprocessed before evaluation. | ||
| 46 | - Should support something like `assure` that would check if programs like C | ||
| 47 | compiler or Python (whatever the project requires) are installed on a machine. | ||
| 48 | |||
| 49 | Quite a reasonable list of requirements. I already do these things in my | ||
| 50 | Makefiles and/or Bash scripts, but I would like to avoid repeating myself every | ||
| 51 | time I start working on something new. | ||
| 52 | |||
| 53 | So I started with the following syntax. | ||
| 54 | |||
| 55 | ```ruby | ||
| 56 | @env on | ||
| 57 | |||
| 58 | # Override the default shell. | ||
| 59 | @shell /bin/bash | ||
| 60 | |||
| 61 | # Assure that program is installed. | ||
| 62 | @assure docker-compose pip python3 | ||
| 63 | |||
| 64 | # Load local dotenv files (these are then globally available). | ||
| 65 | @dotenv .env | ||
| 66 | @dotenv .env.sample | ||
| 67 | @dotenv some_other_file | ||
| 68 | |||
| 69 | # These are local variables but still accessible in tasks. | ||
| 70 | @var HI = "hey" | ||
| 71 | @var TOKEN = "sometoken" | ||
| 72 | @var EMAIL = "m@m.com" | ||
| 73 | @var PASSWORD = "pass" | ||
| 74 | @var EDITOR = "vim" | ||
| 75 | |||
| 76 | @task dev "Test chars .:'}{]!//" does | ||
| 77 | echo "..." $HI | ||
| 78 | end | ||
| 79 | |||
| 80 | @task clean "Cleans the obj files" does | ||
| 81 | rm .obj | ||
| 82 | end | ||
| 83 | |||
| 84 | @task greet "Greets the user" does | ||
| 85 | echo "Hi user $TOKEN or $WINDOWID $EMAIL" | ||
| 86 | end | ||
| 87 | |||
| 88 | @task stack "Starts Docker stack" does | ||
| 89 | docker-compose -f stack.yml up | ||
| 90 | end | ||
| 91 | |||
| 92 | @task todo "Shows all todos in source files and count them" does | ||
| 93 | grep -irE "TODO|FIXME" . | wc -l | ||
| 94 | end | ||
| 95 | |||
| 96 | @task test1 "For testing 1" does | ||
| 97 | unknown-command | ||
| 98 | echo "test1" | ||
| 99 | ls -lha | ||
| 100 | end | ||
| 101 | |||
| 102 | @task test2 "For testing 2" does | ||
| 103 | echo "test1" | ||
| 104 | ls -lha | ||
| 105 | docker-compose -f samples/stack.yml up | ||
| 106 | end | ||
| 107 | ``` | ||
| 108 | |||
| 109 | One thing that I really like about Errand (yes, this is what it is called, and | ||
| 110 | it is available at https://git.mitjafelicijan.com/errand.git/about/) is that a | ||
| 111 | task is a persistent shell. By that I mean that the whole task, even if it | ||
| 112 | contains multiple commands, runs in one shell. In make, each line in a target is | ||
| 113 | its own shell invocation, and you need to combine lines or add `\` | ||
| 114 | at the end of the line. | ||
| 115 | |||
| 116 | ```bash | ||
| 117 | # How you do these things in make. | ||
| 118 | target: | ||
| 119 | source .venv/bin/activate; \ | ||
| 120 | python script.py | ||
| 121 | ``` | ||
| 122 | |||
| 123 | Errand solves this problem: consider each task, and everything executed in that | ||
| 124 | task, a single shell that only closes when the whole task has completed. | ||
| 125 | |||
| 126 | By self-documenting I mean that if you are in a directory with an `Errandfile` | ||
| 127 | and you just type `erd` and press enter, it displays all the possible tasks by | ||
| 128 | default. In make, I was doing this by having the first target be something | ||
| 129 | like `default` that echoes the message “Check Makefile for all available | ||
| 130 | targets.” Because all of the tasks in Errand require a message, I use that to | ||
| 131 | display a table of contents, so to speak. | ||
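
For reference, that make workaround looks something like the hypothetical Makefile below (target names are illustrative, and the file is written to `/tmp` only for the demo); the listing that Errand derives from task messages is approximated here by grepping the target names:

```shell
# The make-style workaround: the first target only tells you to go read
# the Makefile yourself.
cat > /tmp/Makefile.demo <<'EOF'
default:
	@echo "Check Makefile for all available targets."

build:
	cc -o app main.c
EOF

# What Errand gives you for free: a listing of the available targets.
grep -E '^[a-zA-Z_-]+:' /tmp/Makefile.demo
```

The grep prints `default:` and `build:`, a crude table of contents that Errand builds properly from each task's required description.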
| 132 | |||
| 133 | Because I don’t use any external dependencies, this whole thing can be | ||
| 134 | statically compiled. So that also checked one of the boxes. | ||
| 135 | |||
| 136 | It works on Linux and on a Mac, so that’s also a bonus. I don’t believe this | ||
| 137 | would work on Windows machines because of the way that I use shell instances. But | ||
| 138 | you could use something like Windows Subsystem for Linux and run it in | ||
| 139 | there. That is a valid option. | ||
| 140 | |||
| 141 | To finish this essay off: how was it to use in “real life”? I have to be | ||
| 142 | honest. Some of the missing features still bother me. The `@dotenv` directive is | ||
| 143 | still missing, and I need to implement it ASAP. | ||
| 144 | |||
| 145 | Another thing that needs to happen is support for streaming output. Currently, | ||
| 146 | commands like `docker-compose` that run in foreground mode are not compatible | ||
| 147 | with Errand. So commands that stream output are an issue. I need to revisit how | ||
| 148 | I initiate the shell and how I read stdout and stderr. But that shouldn’t be a | ||
| 149 | problem. | ||
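
The fix amounts to consuming the child process's stdout incrementally instead of waiting for it to exit. The idea, sketched in plain shell (Errand itself would do the equivalent with pipes in its host language):

```shell
# A producer that emits output over time, and a consumer that handles
# each line as it arrives rather than after the producer exits.
(for i in 1 2 3; do echo "tick $i"; sleep 0.1; done) |
while read -r line; do
    echo "streamed: $line"
done
```

With fully buffered capture you would see nothing until the producer finished; reading line by line is what keeps long-running foreground commands like `docker-compose` usable.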
| 150 | |||
| 151 | I have been very satisfied with this thing. I am pleasantly surprised by how | ||
| 152 | useful it is. I really wanted to test it in the wild before I committed to it. I | ||
| 153 | have more abandoned projects than Google, and it’s bringing massive shame to my | ||
| 154 | family at this point. So I wanted to be sure that this is even useful. And it | ||
| 155 | actually is. Quite surprised at myself. | ||
| 156 | |||
| 157 | I really need to package this now and write proper docs. And maybe rewrite the | ||
| 158 | tokeniser. It’s atrocious right now. A sight to behold! But that is an issue for | ||
| 159 | another time. | ||
diff --git a/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md b/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md new file mode 100644 index 0000000..9059b00 --- /dev/null +++ b/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md | |||
| @@ -0,0 +1,281 @@ | |||
| 1 | --- | ||
| 2 | title: "Bringing all of my projects together under one umbrella" | ||
| 3 | url: bringing-all-of-my-projects-together-under-one-umbrella.html | ||
| 4 | date: 2023-07-01T18:49:07+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
| 9 | ## What is the issue anyway? | ||
| 10 | |||
| 11 | Over the years, I have accumulated a bunch of virtual servers on my | ||
| 12 | [DigitalOcean](https://www.digitalocean.com/) account for small experimental | ||
| 13 | projects I dabble in. And this has resulted in quite a bill. I mean, I wouldn't | ||
| 14 | care if these projects were actually being used. But they were just sitting | ||
| 15 | there unused, wasting resources, which makes this an unnecessary burden for me. | ||
| 16 | |||
| 17 | Most of them are just small HTML pages that have an endpoint or two to read or | ||
| 18 | write data, and for that reason I wrote servers left and right. To be honest, | ||
| 19 | all of those things could have been done with [CGI | ||
| 20 | scripts](https://en.wikipedia.org/wiki/Common_Gateway_Interface) and that would | ||
| 21 | have been more than enough. | ||
| 22 | |||
| 23 | Recently, I decided to stop language hopping and focus on a simpler stack, which | ||
| 24 | includes C, Go and Lua. With it, I can accomplish all the things I am interested in. | ||
| 25 | |||
| 26 | ## Finding a web server replacement | ||
| 27 | |||
| 28 | Usually I had [Nginx](https://nginx.org/en/) in front of these small web servers | ||
| 29 | and I had to manage SSL certificates and all that jazz. I am bored with these | ||
| 30 | things. I don't want to manage any of this bullshit anymore. | ||
| 31 | |||
| 32 | So the logical move forward was to find a solid alternative for this. I | ||
| 33 | ended up on [Caddy server](https://caddyserver.com/). I've used it in the past | ||
| 34 | but had kind of forgotten about it. What I really like about it is its ease of | ||
| 35 | use and the bunch of out-of-the-box functionality that comes with it. | ||
| 36 | |||
| 37 | These are the _pitch_ points from their website: | ||
| 38 | |||
| 39 | - **Secure by Default**: Caddy is the only web server that uses HTTPS by | ||
| 40 | default. A hardened TLS stack with modern protocols preserves privacy and | ||
| 41 | exposes MITM attacks. | ||
| 42 | - **Config API**: As its primary mode of configuration, Caddy's REST API makes | ||
| 43 | it easy to automate and integrate with your apps. | ||
| 44 | - **No Dependencies**: Because Caddy is written in Go, its binaries are entirely | ||
| 45 | self-contained and run on every platform, including containers without libc. | ||
| 46 | - **Modular Stack**: Take back control over your compute edge. Caddy can be | ||
| 47 | extended with everything you need using plugins. | ||
| 48 | |||
| 49 | I had just a few requirements: | ||
| 50 | |||
| 51 | - Automatic SSL | ||
| 52 | - Static file server | ||
| 53 | - Basic authentication | ||
| 54 | - CGI script support | ||
| 55 | |||
| 56 | And the vanilla version does all of it except CGI scripts. But that can easily be | ||
| 57 | fixed with their modular approach. You can build a custom version of the server | ||
| 58 | on their website, or do it with Docker. | ||
| 59 | |||
| 60 | This is a `Dockerfile` I used to build a custom server. | ||
| 61 | |||
| 62 | ```Dockerfile | ||
| 63 | FROM caddy:builder AS builder | ||
| 64 | |||
| 65 | RUN xcaddy build \ | ||
| 66 | --with github.com/aksdb/caddy-cgi | ||
| 67 | |||
| 68 | FROM caddy:latest | ||
| 69 | RUN apk add --no-cache nano | ||
| 70 | |||
| 71 | COPY --from=builder /usr/bin/caddy /usr/bin/caddy | ||
| 72 | ``` | ||
| 73 | |||
| 74 | ## Getting rid of all the unnecessary virtual machines | ||
| 75 | |||
| 76 | The next step was to get a handle on the number of virtual servers I have all | ||
| 77 | over the place. | ||
| 78 | |||
| 79 | I decided to move all the projects and services into two main VMs: | ||
| 80 | |||
| 81 | - personal server (still Nginx) | ||
| 82 | - git server | ||
| 83 | - static file server | ||
| 84 | - personal blog | ||
| 85 | - projects server (Caddy server) | ||
| 86 | - personal experiments | ||
| 87 | - other projects | ||
| 88 | |||
| 89 | I will focus on the projects server in this post since it's more interesting. | ||
| 90 | |||
| 91 | ## Testing CGI scripts | ||
| 92 | |||
| 93 | The first thing I tested was how CGI scripts work under Caddy. This is | ||
| 94 | particularly important to me because almost all of my experiments and mini | ||
| 95 | projects need this to work. | ||
| 96 | |||
| 97 | To configure Caddy server, you must provide the server with a configuration | ||
| 98 | file. By default, it's called `Caddyfile`. | ||
| 99 | |||
| 100 | ```caddyfile | ||
| 101 | { | ||
| 102 | order cgi before respond | ||
| 103 | } | ||
| 104 | |||
| 105 | examples.mitjafelicijan.com { | ||
| 106 | cgi /bash-test /opt/projects/examples/bash-test.sh | ||
| 107 | cgi /tcl-test /opt/projects/examples/tcl-test.tcl | ||
| 108 | cgi /lua-test /opt/projects/examples/lua-test.lua | ||
| 109 | cgi /python-test /opt/projects/examples/python-test.py | ||
| 110 | |||
| 111 | root * /opt/projects/examples | ||
| 112 | file_server | ||
| 113 | } | ||
| 114 | ``` | ||
| 115 | |||
| 116 | - The order is very important. Make sure that `order cgi before respond` is at | ||
| 117 | the top of the configuration file. | ||
| 118 | - Also, when you run Caddy v2, make sure you provide the `adapter` argument, | ||
| 119 | like this: `/usr/bin/caddy run --watch --environ --config /etc/caddy/Caddyfile | ||
| 120 | --adapter caddyfile`. Otherwise, Caddy will try to use a different format for | ||
| 121 | the config file. | ||
| 122 | |||
| 123 | I did a small batch of tests with [Bash](https://www.gnu.org/software/bash/), | ||
| 124 | [Tcl](https://www.tcl-lang.org/), [Lua](https://www.lua.org/) and | ||
| 125 | [Python](https://www.python.org/). Here is a cheat sheet if you need it. | ||
| 126 | |||
| 127 | Let's get Bash out of the way first. | ||
| 128 | |||
| 129 | ```bash | ||
| 130 | #!/usr/bin/bash | ||
| 131 | |||
| 132 | printf "Content-type: text/plain\n\n" | ||
| 133 | |||
| 134 | printf "Hello from Bash\n\n" | ||
| 135 | printf "PATH_INFO [%s]\n" $PATH_INFO | ||
| 136 | printf "QUERY_STRING [%s]\n" $QUERY_STRING | ||
| 137 | printf "\n" | ||
| 138 | |||
| 139 | for i in {0..9..1}; do | ||
| 140 | printf "> %s\n" $i | ||
| 141 | done | ||
| 142 | |||
| 143 | exit 0 | ||
| 144 | ``` | ||
| 145 | |||
| 146 | This one is for Tcl script. | ||
| 147 | |||
| 148 | ```tcl | ||
| 149 | #!/usr/bin/tclsh | ||
| 150 | |||
| 151 | puts "Content-type: text/plain\n" | ||
| 152 | |||
| 153 | puts "Hello from Tcl\n" | ||
| 154 | puts "PATH_INFO \[$env(PATH_INFO)\]" | ||
| 155 | puts "QUERY_STRING \[$env(QUERY_STRING)\]" | ||
| 156 | puts "" | ||
| 157 | |||
| 158 | for {set i 0} {$i < 10} {incr i} { | ||
| 159 | puts "> $i" | ||
| 160 | } | ||
| 161 | ``` | ||
| 162 | |||
| 163 | And for all you Python enjoyers. | ||
| 164 | |||
| 165 | ```python | ||
| 166 | #!/usr/bin/python3 | ||
| 167 | |||
| 168 | import os | ||
| 169 | |||
| 170 | print("Content-type: text/plain\n") | ||
| 171 | |||
| 172 | print("Hello from Python\n") | ||
| 173 | print("PATH_INFO [{}]".format(os.environ['PATH_INFO'])) | ||
| 174 | print("QUERY_STRING [{}]".format(os.environ['QUERY_STRING'])) | ||
| 175 | print("") | ||
| 176 | |||
| 177 | for i in range(10): | ||
| 178 | print("> {}".format(i)) | ||
| 179 | ``` | ||
| 180 | |||
| 181 | And for the final example, Lua. | ||
| 182 | |||
| 183 | ```lua | ||
| 184 | #!/usr/bin/lua | ||
| 185 | |||
| 186 | print("Content-type: text/plain\n") | ||
| 187 | |||
| 188 | print("Hello from Lua\n") | ||
| 189 | print(string.format("PATH_INFO [%s]", os.getenv("PATH_INFO"))) | ||
| 190 | print(string.format("QUERY_STRING [%s]", os.getenv("QUERY_STRING"))) | ||
| 191 | print() | ||
| 192 | |||
| 193 | for i = 0, 9 do | ||
| 194 | print(string.format("> %d", i)) | ||
| 195 | end | ||
| 196 | ``` | ||
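
One nice property of CGI is that the scripts are just executables reading environment variables, so you can smoke-test them without Caddy by faking the CGI environment. A quick sketch using the Bash example from above (the `/tmp` path is only for the demo):

```shell
# Recreate the Bash CGI script in /tmp and run it with a fake CGI env.
cat > /tmp/bash-test.sh <<'EOF'
#!/usr/bin/env bash
printf "Content-type: text/plain\n\n"
printf "Hello from Bash\n\n"
printf "PATH_INFO [%s]\n" "$PATH_INFO"
printf "QUERY_STRING [%s]\n" "$QUERY_STRING"
EOF
chmod +x /tmp/bash-test.sh

# Caddy would set these variables from the request; we fake them here.
PATH_INFO=/extra QUERY_STRING=foo=bar /tmp/bash-test.sh
```

The output includes `PATH_INFO [/extra]` and `QUERY_STRING [foo=bar]`, roughly what a request like `/bash-test/extra?foo=bar` would return through Caddy.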
| 197 | |||
| 198 | ## Basic authentication | ||
| 199 | |||
| 200 | One thing was also to have an option for some sort of authentication, and | ||
| 201 | something like [Basic access | ||
| 202 | authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) would | ||
| 203 | be more than enough. | ||
| 204 | |||
| 205 | Thankfully, Caddy supports this out of the box already. Below is an updated | ||
| 206 | example. | ||
| 207 | |||
| 208 | ```Caddyfile | ||
| 209 | { | ||
| 210 | order cgi before respond | ||
| 211 | } | ||
| 212 | |||
| 213 | examples.mitjafelicijan.com { | ||
| 214 | cgi /bash-test /opt/projects/examples/bash-test.sh | ||
| 215 | cgi /tcl-test /opt/projects/examples/tcl-test.tcl | ||
| 216 | cgi /lua-test /opt/projects/examples/lua-test.lua | ||
| 217 | cgi /python-test /opt/projects/examples/python-test.py | ||
| 218 | |||
| 219 | root * /opt/projects/examples | ||
| 220 | file_server | ||
| 221 | |||
| 222 | basicauth * { | ||
| 223 | bob $2a$14$/wCgaf9oMnmQa20txB76u.nI1AldGMBT/1J7fXCfgOiRShwz/JOkK | ||
| 224 | } | ||
| 225 | } | ||
| 226 | ``` | ||
| 227 | |||
| 228 | `basicauth *` matches everything under this domain/sub-domain and protects it | ||
| 229 | with Basic Authentication. | ||
| 230 | |||
| 231 | - `bob` is the username | ||
| 232 | - the long `$2a$14$...` string is the bcrypt hash of the password | ||
| 233 | |||
| 234 | To generate these passwords, execute `caddy hash-password` and this will prompt | ||
| 235 | you to insert a password twice and spit out a hashed password that you can put | ||
| 236 | in your configuration file. | ||
| 237 | |||
| 238 | Restart the server and you are ready to go. | ||
| 239 | |||
| 240 | ## Making Caddy a service with systemd | ||
| 241 | |||
| 242 | After the tests were successful, I copied `caddy` to `/usr/bin/caddy` and copied | ||
| 243 | `Caddyfile` to `/etc/caddy/Caddyfile`. | ||
| 244 | |||
| 245 | Now off to systemd. Each systemd service requires you to create a service | ||
| 246 | file. | ||
| 247 | |||
| 248 | - I created a `/etc/systemd/system/caddy.service` and put the following content | ||
| 249 | in the file. | ||
| 250 | |||
| 251 | ```systemd | ||
| 252 | [Unit] | ||
| 253 | Description=Caddy | ||
| 254 | Documentation=https://caddyserver.com/docs/ | ||
| 255 | After=network.target network-online.target | ||
| 256 | Requires=network-online.target | ||
| 257 | |||
| 258 | [Service] | ||
| 259 | Type=notify | ||
| 260 | User=root | ||
| 261 | Group=root | ||
| 262 | ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile --adapter caddyfile | ||
| 263 | ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force --adapter caddyfile | ||
| 264 | TimeoutStopSec=5s | ||
| 265 | LimitNOFILE=1048576 | ||
| 266 | LimitNPROC=512 | ||
| 267 | PrivateTmp=true | ||
| 268 | ProtectSystem=full | ||
| 269 | AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE | ||
| 270 | |||
| 271 | [Install] | ||
| 272 | WantedBy=multi-user.target | ||
| 273 | ``` | ||
| 274 | |||
| 275 | - You might need to reload systemd with `systemctl daemon-reload`. | ||
| 276 | - Then I enabled the service with `systemctl enable caddy.service`. | ||
| 277 | - And then I started the service with `systemctl start caddy.service`. | ||
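
To check that everything came up cleanly, the usual systemd tooling applies
(unit name as created above):

```shell
# Show current state and the last few log lines of the unit.
systemctl status caddy.service

# Follow the service log live while testing a domain.
journalctl -u caddy.service -f
```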
| 278 | |||
| 279 | This was about all that I needed to do to get it running. Now I can easily add | ||
| 280 | new subdomains and domains to the main configuration file and be done with | ||
| 281 | it. No manual Let's Encrypt shenanigans needed. | ||
diff --git a/content/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md b/content/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md new file mode 100644 index 0000000..46e6167 --- /dev/null +++ b/content/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md | |||
| @@ -0,0 +1,99 @@ | |||
| 1 | --- | ||
| 2 | title: "Who knows what the world will look like tomorrow" | ||
| 3 | url: who-knows-what-the-world-will-look-like-tomorrow.html | ||
| 4 | date: 2023-07-08T18:49:07+02:00 | ||
| 5 | type: post | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
This site has gone through a lot of changes over the years. From being written
in Flask and Bottle to moving on to static site generators. I have used and
tested probably dozens of them by now. From homebrew solutions to the biggest
and the baddest. From Bash scripts to Node.js disasters. I've seen some things,
no doubt. Not all bad.
| 14 | |||
I have been closely observing the web and where the trends are going, and I
don't like what I see. Instead of the internet being this weird place where
experimentation happens, it has all become stale and formulaic. Boring,
actually. Really boring. And sad. Where is that old, revolutionary FU spirit I
remember? It's still there, I know. But it's being drowned out by the voices of
mediocrity and formulaic boredom.
| 21 | |||
It almost feels like the internet stopped for 10 years and only now has
something started happening. With all the insanity around the world. People
hating people without actual reasons, just because it's fashionable to hate and
the crowd says so. Sad state of affairs.
| 26 | |||
All this is contributing to this overall negativity masked as apathy. Everybody
walking in lockstep. Instead of being creative and bold, we are just
re-inventing the world and making the same mistakes. Maybe, just maybe, some
things are good enough and there is no need to try to be too smart for our own
good. After N attempts, something should click inside our heads and say: "This
thing, opinion, etc. is actually really good, and even after several attempts
it still holds."
| 34 | |||
The older I get, the more careful I am about my own thoughts and why I think the
way I think. More and more, I try to understand people with opposite
opinions. Far from perfect, but closer to bearable. And then I see people
hearing or reading a thing on the internet and let's fucking goooooo! Strong
opinions are a sign of a weak and uneducated mind. I am more and more sure of
this.
| 41 | |||
| 42 | It's gotten to a point where you can with great certainty deduce a person's | ||
| 43 | personality based on one or two opinions. How boring have we become. No wonder | ||
| 44 | people can't talk to each other. These would be very quick conversations anyway. | ||
| 45 | |||
I was just reminded of a song, "Hi Ren". The ending talks about being stiff
and not being able to dance. Such an amazing metaphor. And we as people have
| 48 | gone so far, we can't even walk or even crawl normally anymore. We have | ||
| 49 | forgotten that the most beautiful things in life have a great deal of | ||
| 50 | uncertainty about them. We want instant gratification. Not only that, but we | ||
| 51 | want absolute obedience. Complete control over others, because we have zero | ||
| 52 | control of ourselves. And all the lies we could tell ourselves will not help us | ||
| 53 | in this situation. | ||
| 54 | |||
It is funny how I catch myself from time to time being a complete idiot. It's
like having an out-of-body experience. I can see myself being an idiot, and I
cannot stop myself. It serves as a lesson to stop before speaking. To
| 58 | think before saying. And to crawl before walking. | ||
| 59 | |||
So there is still time. We can dance once more. All we need to do is stop for a
second. You and me. The two of us are a start. Let's not try to change the
world, but rather nudge ourselves just a tiny bit. And if we only did that,
each of us nudging ourselves a small, tiny bit, the world would heal. If we
just put down our phones and ignored the internet for a day or two. Put
visiting websites that feed on us on hold. Listened to just one sentence from a
person we completely disagree with and tried to understand it. I truly believe
that this is possible.
| 68 | |||
Life is about suffering and joy. And instead of wishing suffering on others and
expecting joy for ourselves, we should for a brief moment want suffering for
ourselves and wish joy on others. Wouldn't that be an amazing sight to see?
| 72 | |||
| 73 | I caught myself hating on Rust. And I deeply thought about it afterward. Why did | ||
| 74 | I do it? It is obviously not for me. So why the hell was I being so negative | ||
| 75 | towards it? I think that I know the answer. I was negative because that is | ||
| 76 | easy. Because it's much easier to hate on things than to say to yourself: "Well, | ||
| 77 | you know what? This is not for me. I will focus on creation and not | ||
| 78 | destruction. This is who I want to be. This is what fills me with joy and | ||
| 79 | purpose." Where joy is keeping me happy and purpose scares the shit out of me | ||
| 80 | and keeps me honest. This is who I want to be. Admit to myself when I am wrong | ||
| 81 | and accept the faults that I have without reservation and with courage march on. | ||
| 82 | |||
I just realized that this blog post is a sort of therapy for me. It's
cathartic. Going through the history of this site and remembering all the
decisions and annoyances that came with it. When I was cursing at the tools. And
time moved on, and the site is still here. It serves as a reminder that
perseverance wins in the end. If we just let things go.
| 88 | |||
| 89 | This came with a decision that simplifying life and removing all the unnecessary | ||
| 90 | negativity is key. Rather than worrying about what the internet is saying, what | ||
| 91 | the world is trying to take from you, you are the only one who can say no. And | ||
| 92 | create instead of destroy. | ||
| 93 | |||
I don't have an ending for this post, so I will say this. We live in the most
amazing times in recorded history, and we should be eternally grateful for
it. Create and study, this should be my mantra. Just create and let the world
happen. And if you feel yourself becoming too certain, stop and check how deep
in the shit you already are. Strong opinions are a sign of a weak and
uneducated mind. Hate and disdain are for the weak.
diff --git a/content/posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md b/content/posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md new file mode 100644 index 0000000..1de0ffe --- /dev/null +++ b/content/posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md | |||
| @@ -0,0 +1,22 @@ | |||
| 1 | --- | ||
| 2 | title: "Fix screen tearing on Debian 12 Xorg and i3" | ||
| 3 | url: fix-screen-tearing-on-debian-12-xorg-and-i3.html | ||
| 4 | date: 2023-07-10T04:21:48+02:00 | ||
| 5 | type: note | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
I have been experiencing some issues with Intel® Integrated HD Graphics 3000
under Debian 12 with Xorg and i3. Using the `picom` compositor didn't help. To
fix this issue, create a new file `/etc/X11/xorg.conf.d/20-intel.conf` as root
and put the following in it.
| 13 | |||
| 14 | ``` | ||
| 15 | Section "Device" | ||
| 16 | Identifier "Intel Graphics" | ||
| 17 | Driver "intel" | ||
| 18 | Option "TearFree" "true" | ||
| 19 | EndSection | ||
| 20 | ``` | ||
| 21 | |||
| 22 | Reboot the system and that should be it. | ||
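
After the reboot, you can check whether Xorg actually picked the option up; the
log path below is the usual one on Debian, but it can differ per setup:

```shell
# Look for the TearFree option in the Xorg log (path may vary,
# e.g. ~/.local/share/xorg/Xorg.0.log on rootless Xorg).
grep -i tearfree /var/log/Xorg.0.log
```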
diff --git a/content/posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md b/content/posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md new file mode 100644 index 0000000..821a80f --- /dev/null +++ b/content/posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md | |||
| @@ -0,0 +1,14 @@ | |||
| 1 | --- | ||
| 2 | title: "Online radio streaming with MPV from terminal" | ||
| 3 | url: online-radio-streaming-with-mpv-from-terminal.html | ||
| 4 | date: 2023-07-10T03:34:45+02:00 | ||
| 5 | type: note | ||
| 6 | draft: false | ||
| 7 | --- | ||
| 8 | |||
Recently I have been using my Thinkpad x220 more, and it comes with some
constraints. The CPU is not as powerful as my main machine's, and I really want
to listen to some music while using it. Browsers really are bloat.
| 12 | |||
Check out this site https://streamurl.link/, copy the stream URL, and then run
`mpv <stream-url>`.
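
To avoid hunting for URLs every time, a tiny shell helper can map station names
to streams. The station names and URLs below are placeholders; substitute
streams you copied from streamurl.link:

```shell
# radio_url: map a station name to its stream URL.
# The names and URLs are hypothetical examples -- replace them
# with streams copied from streamurl.link.
radio_url() {
  case "$1" in
    jazz) echo "https://example.com/jazz.mp3" ;;
    lofi) echo "https://example.com/lofi.mp3" ;;
    *)    echo "unknown station: $1" >&2; return 1 ;;
  esac
}

# Usage, audio-only so mpv stays in the terminal:
#   mpv --no-video "$(radio_url jazz)"
```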
