Diffstat (limited to 'content/posts')
40 files changed, 0 insertions, 7015 deletions
diff --git a/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md b/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md deleted file mode 100644 index 9fc484a..0000000 --- a/content/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md +++ /dev/null | |||
| @@ -1,41 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Most likely to succeed in the year of 2011 | ||
| 3 | url: most-likely-to-succeed-in-year-of-2011.html | ||
| 4 | date: 2011-01-13T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | The year of 2010 was definitely the year of Geo-location. The market responded | ||
| 9 | beautifully and lots of very cool services were launched. We all have to thank | ||
| 10 | the mobile market for such extensive adoption. With new generations of mobile | ||
| 11 | phones that are not only packed with high-tech hardware but also affordable, we | ||
| 12 | can now manage tasks that not so long ago seemed almost Star Trek’ish. And all | ||
| 13 | of this has had, and still has, a great influence on the direction in which we | ||
| 14 | are heading now. | ||
| 15 | |||
| 16 | Reading all these articles about new, thriving technologies makes me wonder | ||
| 17 | what the next step is. The future is the mesh, as Lisa Gansky said in her book | ||
| 18 | The Mesh. | ||
| 19 | |||
| 20 | Many still have conservative views on distributed systems: concerns about | ||
| 21 | information security, fear of not controlling every aspect of the information | ||
| 22 | flow. I am very open to distributed systems and heterogeneous applications, | ||
| 23 | and I think this is the best way to proceed. | ||
| 24 | |||
| 25 | This year will definitely be about communication platforms. Mobile to mobile. | ||
| 26 | Machine to mobile and vice versa. All the tech is available and ready to put | ||
| 27 | into action. Wireless is today’s new mantra. And the concept of the semantic | ||
| 28 | web is now ready for industry. | ||
| 29 | |||
| 30 | Applications and developers can now gain access to new layers of systems and | ||
| 31 | build solutions to meet the high-quality needs of the market. Speed is | ||
| 32 | everything now. | ||
| 33 | |||
| 34 | My vote goes to “Machine to Machine” and “Embedded Systems”! | ||
| 35 | |||
| 36 | - [Machine-to-Machine](http://en.wikipedia.org/wiki/Machine-to-Machine) | ||
| 37 | - [The ultimate M2M communication protocol](http://www.bitxml.org/) | ||
| 38 | - [COOS Project (connectivity initiative)](http://www.coosproject.org/maven-site/1.0.0/project-info.html) | ||
| 39 | - [Community for machine-to-machine](http://m2m.com/index.jspa) | ||
| 40 | - [Embedded system](http://en.wikipedia.org/wiki/Embedded_system) | ||
| 41 | |||
diff --git a/content/posts/2012-03-09-led-technology-not-so-eco.md b/content/posts/2012-03-09-led-technology-not-so-eco.md deleted file mode 100644 index a683aec..0000000 --- a/content/posts/2012-03-09-led-technology-not-so-eco.md +++ /dev/null | |||
| @@ -1,32 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: LED technology might not be as eco-friendly as you think | ||
| 3 | url: led-technology-not-so-eco.html | ||
| 4 | date: 2012-03-09T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | There is a lot of talk about LED technology. It is infiltrating industry at a | ||
| 9 | fast rate, and it’s a challenge for designers and engineers alike. I wondered | ||
| 10 | when a weakness would be revealed. Then I stumbled upon an article talking | ||
| 11 | about the harm in using LED technology. It looks like this magical technology | ||
| 12 | is not so magical and eco-friendly after all. | ||
| 13 | |||
| 14 | A new study from the University of California indicates that LED lights contain | ||
| 15 | toxic metals and should be produced, used and disposed of carefully. Besides | ||
| 16 | lead and nickel, the bulbs and their associated parts were also found to | ||
| 17 | contain arsenic, copper, and other metals that have been linked to different | ||
| 18 | cancers, neurological damage, kidney disease, hypertension, skin rashes and | ||
| 19 | other illnesses in humans, and to ecological damage in waterways. | ||
| 20 | |||
| 21 | Since then, I haven’t found any regulation or standard for the disposal of LED | ||
| 22 | lights. This might be a problem in the future, and it is a massive drawback. It | ||
| 23 | might have quite an impact on the consumer market. | ||
| 24 | |||
| 25 | Nevertheless, there is potential, and I am sure the market will adapt. I also | ||
| 26 | hope to be reading about solutions to this concern soon. | ||
| 27 | |||
| 28 | **Additional resources:** | ||
| 29 | |||
| 30 | - [Recycling and Disposal of Light Bulbs](http://ezinearticles.com/?Recycling-and-Disposal-of-Light-Bulbs&id=1091304) | ||
| 31 | - [How to Dispose of a Low-Energy Light Bulb](http://www.ehow.com/how_7483442_dispose-lowenergy-light-bulb.html) | ||
| 32 | |||
diff --git a/content/posts/2013-10-24-wireless-sensor-networks.md b/content/posts/2013-10-24-wireless-sensor-networks.md deleted file mode 100644 index fc5d372..0000000 --- a/content/posts/2013-10-24-wireless-sensor-networks.md +++ /dev/null | |||
| @@ -1,53 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Wireless sensor networks | ||
| 3 | url: wireless-sensor-networks.html | ||
| 4 | date: 2013-10-24T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | Zigbee networks have a wonderful capability to self-heal, which means the nodes | ||
| 9 | can reorder connections between them if one of them becomes inoperable. This | ||
| 10 | works out of the box when you deploy them. But keep in mind that getting there | ||
| 11 | is not as easy as you would think. None of it is plug&play. So to make | ||
| 12 | your life a bit easier, here are some pointers which, I hope, will help you. | ||
| 13 | |||
| 14 | - Be careful when you are ordering your equipment from abroad. There are many | ||
| 15 | rules and regulations you need to comply with before you get your Xbee radios. | ||
| 16 | What they do is wait until you prove that you won’t use the technology for some | ||
| 17 | kind of evil take-over-the-world project :). For this, they have the | ||
| 18 | EAR (Export Administration Regulations), which basically means “This product | ||
| 19 | may require a license to export from the United States.”. | ||
| 20 | - I don’t know if this applies to every country, but when we purchased our Xbee | ||
| 21 | radios from Mouser, this was mandatory! What we needed to do was print out | ||
| 22 | a form, write information about our company on it and send them a copy via | ||
| 23 | email. With this document, we proved that we were a legitimate company. | ||
| 24 | - When you complete your purchase and send all the documentation, you are not | ||
| 25 | in the clear yet. Customs will take it from there :). There will be some | ||
| 26 | additional costs. Before purchasing, make sure you have as much information | ||
| 27 | about the costs as possible, because it can get expensive in the end. | ||
| 28 | - I suggest you use companies from your own country. You can seriously cut your | ||
| 29 | costs. Here in Slovenia, the best option as far as I know is Farnell. And | ||
| 30 | based on my personal experience, they rock! That’s all I need to say! | ||
| 31 | - Make plans when ordering larger quantities. Do not, I say, do not place your | ||
| 32 | orders in December! :) Believe me! You will have problems with the stock they | ||
| 33 | can provide for you. We were forced to buy some things from Mouser, which was | ||
| 34 | extremely painful because of all the regulations you need to obey when | ||
| 35 | importing goods from the USA. | ||
| 36 | - Make sure that the firmware version on your Xbee radios is exactly the same! | ||
| 37 | Do not get creative!!! I propose using templates. You can get a template by | ||
| 38 | exporting the settings/profile in the X-CTU application. Make sure you have | ||
| 39 | enabled “Upgrade firmware” so you can be sure each radio has the same firmware. | ||
| 40 | - And again: make plans! Plan everything! Months in advance! You will thank me | ||
| 41 | later :) | ||
| 42 | - Test, test, test. Wireless networks can be tricky. | ||
| 43 | |||
| 44 | If you are serious, I suggest you buy the book Building Wireless Sensor | ||
| 45 | Networks. You will get a glimpse of how these networks work in layman’s terms. | ||
| 46 | It is a good starting point for everybody who wants to build wireless networks. | ||
| 47 | |||
| 48 | **Additional resources:** | ||
| 49 | |||
| 50 | - http://www.digi.com/aboutus/export/generalexportinfo | ||
| 51 | - http://doresearch.stanford.edu/research-scholarship/export-controls/export-controlled-or-embargoed-countries-entities-and-persons | ||
| 52 | - http://www.bis.doc.gov/licensing/exportingbasics.htm | ||
| 53 | |||
diff --git a/content/posts/2015-11-10-software-development-pitfalls.md b/content/posts/2015-11-10-software-development-pitfalls.md deleted file mode 100644 index b9edd19..0000000 --- a/content/posts/2015-11-10-software-development-pitfalls.md +++ /dev/null | |||
| @@ -1,180 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Software development and my favorite pitfalls | ||
| 3 | url: software-development-pitfalls.html | ||
| 4 | date: 2015-11-10T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | Over the years I have had the privilege to work on some very exciting projects, | ||
| 9 | both in the software development field and in electronics, and every experience | ||
| 10 | taught me some invaluable lessons about how NOT TO approach development. | ||
| 11 | Through this post I will try to point out some absurd, outdated techniques I | ||
| 12 | find the most annoying and damaging during a development cycle. There will be | ||
| 13 | swearing, because this topic really gets on my nerves and I have never | ||
| 14 | coherently tried to explain it in writing. So if I get heated up, bear with me. | ||
| 15 | |||
| 16 | As new methods of project management emerge, the underlying processes stay old | ||
| 17 | and outdated. This is mainly because we as people are unable to completely | ||
| 18 | shift away from the old approaches. | ||
| 19 | |||
| 20 | I have always struggled with communication, and many times that cost me a | ||
| 21 | relationship or two because I was not on the ball all the time. With every | ||
| 22 | experience, I became more convinced that I was the problem, never considering | ||
| 23 | that the real issue might be that communication never evolved a single step | ||
| 24 | beyond email. And if you think about it for a second, not many things have | ||
| 25 | changed around this topic. We just have different representations of email | ||
| 26 | (message boards, chats, project management tools). And I believe this is the | ||
| 27 | real issue we are facing now. | ||
| 28 | |||
| 29 | There are many articles written about hyper-connectivity and the effects that | ||
| 30 | are a direct result of it, but the mainstream does nothing about it. We are just | ||
| 31 | putting out fires, and we do nothing to prevent them. I am certain this will be | ||
| 32 | a major source of grief in the coming years. What we can all do to avoid it is | ||
| 33 | to change our mindset and experiment with our communication skills and | ||
| 34 | development approaches. We need to maximize the output a person can give, and | ||
| 35 | to achieve this we need to listen to them and encourage them. I know that not | ||
| 36 | everybody is a natural-born leader, but with enough practice and encouragement | ||
| 37 | they too can become active participants in leadership. | ||
| 38 | |||
| 39 | There is a lot of talk now about methodologies such as Scrum, Kanban and | ||
| 40 | Cleanroom, and they all fucking piss me off :). These are all boxes that | ||
| 41 | imprison people and take away their freedom of thought. This is a | ||
| 42 | straightforward mindfuck / amputation of creativity. | ||
| 43 | |||
| 44 | Let me list a couple of things that I find really destructive and bad for a | ||
| 45 | project and, in the long run, for the company. | ||
| 46 | |||
| 47 | ## Ping emails | ||
| 48 | |||
| 49 | Ping emails are emails you have to write as soon as you receive an email. Their | ||
| 50 | sole purpose is to inform the sender that you received their email and that you | ||
| 51 | are working on it. Their only result is to reassure the sender that their task | ||
| 52 | is being dealt with. The intent basically is: I did my job by sending you this | ||
| 53 | email, so I am in the clear. I categorize this as a fuck-you email. | ||
| 54 | It is one of the most irritating types of emails I need to write. It is the | ||
| 55 | ultimate control-freak show you can experience, and it gives the sender a false | ||
| 56 | feeling of control. Newsflash: we do not live in 1982, when there was a | ||
| 57 | possibility that an email never reached its destination. I really hate this | ||
| 58 | from the bottom of my heart. | ||
| 59 | |||
| 60 | The reply should be like: “Yes, I am fucking alive, and I am at your service, | ||
| 61 | my leash!”. I guess if I replied like this, I wouldn’t have to write any more | ||
| 62 | messages of this kind. | ||
| 63 | |||
| 64 | ## Everybody is a project manager | ||
| 65 | |||
| 66 | Well, this is a tough one. I noticed that as soon as you let people give | ||
| 67 | their suggestions, you are basically screwed. There is truth in the saying: | ||
| 68 | “Set low expectations and deliver a little more than you promised.” | ||
| 69 | |||
| 70 | People tend to take on the role of a manager as soon as they are presented with | ||
| 71 | an opportunity. And by getting angry at them, you only provoke yourself. They | ||
| 72 | are not at fault. You just need to tell them at the beginning that they are only | ||
| 73 | giving suggestions, not tasks, and everything will be alright. But if you give | ||
| 74 | them the feeling that they are in control, you will have immense problems | ||
| 75 | explaining why their features are not in the current release. | ||
| 76 | |||
| 77 | The project mission must always lead the project requirements, and any deviation | ||
| 78 | from it will result in major project butchering. By this I mean that the | ||
| 79 | project will take its own path, and you will be left with half-done software | ||
| 80 | that helps nobody. Clear mission goals and clean execution will allow you to | ||
| 81 | develop software with clear intent. | ||
| 82 | |||
| 83 | ## We are never wrong | ||
| 84 | |||
| 85 | I find this type of arrogance the worst. We must always conduct ourselves as if | ||
| 86 | we are infallible and cannot make mistakes. As soon as a procedure or process is | ||
| 87 | established, there is no room for changes or improvements. This is the most | ||
| 88 | idiotic thing someone can say or think. I believe that processes need to evolve | ||
| 89 | and change over time. This is imperative to have in your organization | ||
| 90 | if you want to improve and develop the company. We all need to grow balls and | ||
| 91 | change everything in order to adapt to the current situation. Being a prisoner | ||
| 92 | of predefined processes kills creativity. | ||
| 93 | |||
| 94 | I am constantly trying new software for project management and communication. I | ||
| 95 | believe every team has its own dynamic, and it needs to be discovered | ||
| 96 | organically and naturally through many experiments. By putting the team in a | ||
| 97 | box, you are amputating their creativity and therefore minimizing their | ||
| 98 | potential. But if you talk to an executive, you will mainly find archetypical | ||
| 99 | thinking and a strong need to compartmentalize everything from business | ||
| 100 | processes to resource management. And this type of management, which often | ||
| 101 | displays micromanagement techniques, only works for short periods (a couple of | ||
| 102 | years); then employees either leave the company or become basically retarded | ||
| 103 | drones on autopilot. | ||
| 104 | |||
| 105 | ## Micromanaging | ||
| 106 | |||
| 107 | This basically implies that everybody on the team is an idiot who needs a to-do | ||
| 108 | list that they cannot write themselves. How about spoon-feeding the team | ||
| 109 | at lunch, because besides the team leader, everybody must be a retarded idiot at | ||
| 110 | best? | ||
| 111 | |||
| 112 | I prefer milestones, as they give developers much more freedom and creativity in | ||
| 113 | development instead of wasting their time checking some bizarre to-do list that | ||
| 114 | was not even thought through. Projects constantly change throughout the | ||
| 115 | development cycle, and all you are left with at the end is a list of unchecked | ||
| 116 | tasks and the wrath of management asking why they are not completed. Best WTF moment! | ||
| 117 | |||
| 118 | ## Human contact — no need for it! | ||
| 119 | |||
| 120 | We are vigorously trying to eliminate physical contact by replacing short | ||
| 121 | meetings with software, with no regard for the fact that we are not machines. | ||
| 122 | Many times a simple 5-minute meeting in the morning can solve most of the | ||
| 123 | problems. In rapid development, short bursts of face-to-face communication are | ||
| 124 | possibly the best way to go. | ||
| 125 | |||
| 126 | We now have all this software available, and all we get out of it is a | ||
| 127 | giant clusterfuck. An obstacle and not a solution. So why do we still use it? | ||
| 128 | |||
| 129 | ## MVP is killing innovation | ||
| 130 | |||
| 131 | Many will disagree with me on this one, but I stand strongly by this statement. | ||
| 132 | What I have noticed in my experience is that all these buzzwords around us only | ||
| 133 | mislead us and trap us in a circle of solving issues that already have a | ||
| 134 | solution, but we are unable to see it without using some fancy word for it. | ||
| 135 | |||
| 136 | The toughest thing for a developer to do is to minimize requirements. Well, this | ||
| 137 | is tough only for bad developers. Yes, I said it. There are many types of | ||
| 138 | developers out there. And those unable to minimize feature scope are the ones | ||
| 139 | you don’t need on your team. Their only goal is to solve problems that exist | ||
| 140 | only in their heads. And then you have to argue with them and waste energy on | ||
| 141 | them, instead of developing your awesome product. They are a cancer, and I | ||
| 142 | suggest you cut them off. | ||
| 143 | |||
| 144 | MVP as an idea is great, but sadly people don’t understand the underlying | ||
| 145 | philosophy, and they spend too much time focusing and fixating on something that | ||
| 146 | every sane person with a normal IQ will understand without some made-up | ||
| 147 | acronym. And the result is a lot of talking and barely any execution. | ||
| 148 | |||
| 149 | Well, MVP is not directly killing innovation, but stupid people do when they try | ||
| 150 | to understand it. | ||
| 151 | |||
| 152 | ## Pressure wasteland | ||
| 153 | |||
| 154 | You must never allow yourself to be pressured into confirming a deadline if you | ||
| 155 | are not confident. We often feel a need to be in the service of others, which is | ||
| 156 | true to some extent. But it is also true that others are in service to us to | ||
| 157 | some extent. And we forget this all the time. We are all constantly pressured to | ||
| 158 | make decisions just to calm other people down. And when they leave your office | ||
| 159 | you experience a WTF moment :) How the hell did they manage to fuck me up again? | ||
| 160 | |||
| 161 | People need to realize that the more pressure you put on somebody, the less they | ||
| 162 | will be able to do. So 5-minute update email requests will only result in a | ||
| 163 | mental breakdown and an inability to work that day. Constant poking is probably | ||
| 164 | the one thing that makes me lose my mind instantly. For all of you who are doing | ||
| 165 | this: “Stop bothering us with your insecurities and let us do our job. We will | ||
| 166 | do it quicker and better without you breathing down our necks.” | ||
| 167 | |||
| 168 | If this happens to me, I end up with no energy at the end of the day. Don’t you | ||
| 169 | get it? You will get much more out of me if you ask me like a human being and | ||
| 170 | not like your personal butler. In the long run, you are destroying your | ||
| 171 | relationships and nobody will want to work with you. Your schizophrenic approach | ||
| 172 | will only damage you in the long run. Nobody is anybody’s property. | ||
| 173 | |||
| 174 | ## Conclusion | ||
| 175 | |||
| 176 | I am guilty of many of the things described in this post. And I sometimes find | ||
| 177 | it hard to acknowledge this. And I lie to myself and vigorously try to find some | ||
| 178 | explanation for why I do these things. There is always room for growth. And | ||
| 179 | maybe you will also find some of yourself in this post and realize what needs to | ||
| 180 | change for you to evolve. | ||
diff --git a/content/posts/2017-03-07-golang-profiling-simplified.md b/content/posts/2017-03-07-golang-profiling-simplified.md deleted file mode 100644 index 4bd18b2..0000000 --- a/content/posts/2017-03-07-golang-profiling-simplified.md +++ /dev/null | |||
| @@ -1,125 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Golang profiling simplified | ||
| 3 | url: golang-profiling-simplified.html | ||
| 4 | date: 2017-03-07T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | Many posts have been written about profiling in Golang, and I haven’t found a | ||
| 9 | proper tutorial on the subject. Almost all of them are missing some piece of | ||
| 10 | important information, and it gets pretty frustrating when you have a deadline | ||
| 11 | and can’t find a simple, distilled solution. | ||
| 12 | |||
| 13 | Nevertheless, after searching and experimenting I have found a solution that | ||
| 14 | works for me and should probably work for you as well. | ||
| 15 | |||
| 16 | ## Where are my pprof files? | ||
| 17 | |||
| 18 | By default pprof files are generated in the /tmp/ folder. You can override the | ||
| 19 | folder where these files are generated programmatically in your Golang code, as | ||
| 20 | we will see in the example below. | ||
| 21 | |||
| 22 | ## Why is my CPU profile empty? | ||
| 23 | |||
| 24 | I have found that sometimes the CPU profile is empty because the program was not | ||
| 25 | executing long enough. Programs that execute too quickly don’t produce a usable | ||
| 26 | pprof file in my case. Well, the file is generated but only contains 4KB of information. | ||
| 27 | |||
| 28 | ## Profiling | ||
| 29 | |||
| 30 | As you can see from the examples, we execute a dummy_benchmark function to | ||
| 31 | ensure some amount of work. Memory profiling can be done without such a | ||
| 32 | “complex” function, but CPU profiling needs it. | ||
| 33 | |||
| 34 | Both the memory and CPU profiling examples are almost the same. Only the | ||
| 35 | parameters passed to profile.Start in the main function are different. When we | ||
| 36 | set profile.ProfilePath(“.”) we tell the profiler to store pprof files in the | ||
| 37 | same folder as our program. | ||
| 38 | |||
| 39 | ### Memory profiling | ||
| 40 | |||
| 41 | ```go | ||
| 42 | package main | ||
| 43 | |||
| 44 | import ( | ||
| 45 | "fmt" | ||
| 46 | "time" | ||
| 47 | "github.com/pkg/profile" | ||
| 48 | ) | ||
| 49 | |||
| 50 | func dummy_benchmark() { | ||
| 51 | |||
| 52 | fmt.Println("first set ...") | ||
| 53 | for i := 0; i < 918231333; i++ { | ||
| 54 | i *= 2 | ||
| 55 | i /= 2 | ||
| 56 | } | ||
| 57 | |||
| 58 | <-time.After(time.Second*3) | ||
| 59 | |||
| 60 | fmt.Println("sencond set ...") | ||
| 61 | for i := 0; i < 9182312232; i++ { | ||
| 62 | i *= 2 | ||
| 63 | i /= 2 | ||
| 64 | } | ||
| 65 | } | ||
| 66 | |||
| 67 | func main() { | ||
| 68 | defer profile.Start(profile.MemProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop() | ||
| 69 | dummy_benchmark() | ||
| 70 | } | ||
| 71 | ``` | ||
| 72 | |||
| 73 | ### CPU profiling | ||
| 74 | |||
| 75 | ```go | ||
| 76 | package main | ||
| 77 | |||
| 78 | import ( | ||
| 79 | "fmt" | ||
| 80 | "time" | ||
| 81 | "github.com/pkg/profile" | ||
| 82 | ) | ||
| 83 | |||
| 84 | func dummy_benchmark() { | ||
| 85 | |||
| 86 | fmt.Println("first set ...") | ||
| 87 | for i := 0; i < 918231333; i++ { | ||
| 88 | i *= 2 | ||
| 89 | i /= 2 | ||
| 90 | } | ||
| 91 | |||
| 92 | <-time.After(time.Second*3) | ||
| 93 | |||
| 94 | fmt.Println("sencond set ...") | ||
| 95 | for i := 0; i < 9182312232; i++ { | ||
| 96 | i *= 2 | ||
| 97 | i /= 2 | ||
| 98 | } | ||
| 99 | } | ||
| 100 | |||
| 101 | func main() { | ||
| 102 | defer profile.Start(profile.CPUProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop() | ||
| 103 | dummy_benchmark() | ||
| 104 | } | ||
| 105 | ``` | ||
| 106 | |||
| 107 | ### Generating profiling reports | ||
| 108 | |||
| 109 | ```bash | ||
| 110 | # memory profiling | ||
| 111 | go build mem.go | ||
| 112 | ./mem | ||
| 113 | go tool pprof -pdf ./mem mem.pprof > mem.pdf | ||
| 114 | |||
| 115 | # cpu profiling | ||
| 116 | go build cpu.go | ||
| 117 | ./cpu | ||
| 118 | go tool pprof -pdf ./cpu cpu.pprof > cpu.pdf | ||
| 119 | ``` | ||
| 120 | |||
| 121 | This will generate a PDF document with the visualized profile. | ||
| 122 | |||
| 123 | - [Memory PDF profile example](/assets/go-profiling/golang-profiling-mem.pdf) | ||
| 124 | - [CPU PDF profile example](/assets/go-profiling/golang-profiling-cpu.pdf) | ||
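| |||
| | If you prefer exploring a profile interactively instead of generating a PDF, `go tool pprof` also has an interactive prompt. The commands below are a minimal sketch using the same binaries and profiles as above: | ||
| |||
| | ```bash | ||
| | # open the CPU profile in an interactive prompt | ||
| | go tool pprof ./cpu cpu.pprof | ||
| |||
| | # useful commands inside the prompt: | ||
| | #   top                    # functions with the highest CPU usage | ||
| | #   list dummy_benchmark   # annotated source for our benchmark function | ||
| | #   web                    # render a call graph in the browser (requires graphviz) | ||
| | ``` | ||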
| 125 | |||
diff --git a/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md b/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md deleted file mode 100644 index bb98efd..0000000 --- a/content/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md +++ /dev/null | |||
| @@ -1,198 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: What I've learned developing ad server | ||
| 3 | url: what-i-ve-learned-developing-ad-server.html | ||
| 4 | date: 2017-04-17T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | For the past year and a half I have been developing a native advertising server | ||
| 9 | that contextually matches ads and displays them in different template forms on | ||
| 10 | a variety of websites. This project grew from serving thousands of ads per day | ||
| 11 | to millions. | ||
| 12 | |||
| 13 | The system is made up of a couple of core components: | ||
| 14 | |||
| 15 | - API for serving ads, | ||
| 16 | - Utils - cronjobs and queue management tools, | ||
| 17 | - Dashboard UI. | ||
| 18 | |||
| 19 | The initial release used [MongoDB](https://www.mongodb.com/) for full-text | ||
| 20 | search, but it was later replaced by [Elasticsearch](https://www.elastic.co/) for | ||
| 21 | better CPU utilization and better search performance. This provided us with many | ||
| 22 | amazing functionalities of [Elasticsearch](https://www.elastic.co/). You should | ||
| 23 | check it out if you do any search-related operations. | ||
| 24 | |||
| 25 | Because the premise of the server is to provide a native ad experience, ads are | ||
| 26 | rendered on the client side via a simple templating engine. This ensures that | ||
| 27 | ads can be displayed in a number of different ways based on the visual style of | ||
| 28 | the page. And this makes the JavaScript client library quite complex. | ||
| 29 | |||
| 30 | So now that you know the basic information about the product, let’s get into the | ||
| 31 | lessons we learned. | ||
| 32 | |||
| 33 | ## Aggregate everything | ||
| 34 | |||
| 35 | After the beta version was released, everything (impressions, clicks, etc.) was | ||
| 36 | written to the database at nanosecond resolution. At that time we were using | ||
| 37 | [PostgreSQL](https://www.postgresql.org/), and the database quickly grew way above | ||
| 38 | 200GB in disk space. And that was problematic. Statistics took a disturbingly | ||
| 39 | long time to aggregate. Also, using indexes on the stats table was no help | ||
| 40 | after we reached 500 million datapoints. | ||
| 41 | |||
| 42 | > There is marketing product information and there is real-life experience. | ||
| 43 | And they tend to be quite the opposite. | ||
| 44 | |||
| 45 | This is the reason why everything is now aggregated on a daily basis and this | ||
| 46 | data is then fed to Elastic in the form of a daily summary. With this we can | ||
| 47 | now track many more dimensions such as zone, channel and platform | ||
| 48 | information. And with this information we can adapt the occurrences of ads in | ||
| 49 | specific places more precisely. | ||
| 50 | |||
| 51 | We have also adopted [Redis](https://redis.io/) as a full-time citizen in our | ||
| 52 | stack. Because Redis also stores information on the local disk, we have some | ||
| 53 | sort of backup if the server accidentally suffers a failure. | ||
| 54 | |||
| 55 | All the real-time statistics for ad serving and redirecting are kept as counters | ||
| 56 | in a Redis instance, then extracted daily and pushed to Elastic, as sketched below. | ||
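| |||
| | A minimal sketch of that counter-then-daily-flush pattern, assuming the standard `redis` and `elasticsearch` Python packages; the key pattern, dimensions and index name are made up for illustration and are not our production code: | ||
| |||
| | ```python | ||
| | # -*- coding: utf-8 -*- | ||
| | import datetime | ||
| |||
| | import redis | ||
| | from elasticsearch import Elasticsearch | ||
| |||
| | r = redis.StrictRedis(host="localhost", port=6379, db=0) | ||
| | es = Elasticsearch(["http://localhost:9200"]) | ||
| |||
| | def track_impression(ad_id, zone, channel, platform): | ||
| |     # hot path: just bump a counter in Redis, keyed by day and dimensions | ||
| |     day = datetime.date.today().isoformat() | ||
| |     key = "imp:{}:{}:{}:{}:{}".format(day, ad_id, zone, channel, platform) | ||
| |     r.incr(key) | ||
| |||
| | def flush_daily_summary(day): | ||
| |     # daily cronjob: read every counter for the day and index one summary document | ||
| |     for key in r.scan_iter("imp:{}:*".format(day)): | ||
| |         _, _, ad_id, zone, channel, platform = key.decode().split(":") | ||
| |         doc = { | ||
| |             "day": day, | ||
| |             "ad_id": ad_id, | ||
| |             "zone": zone, | ||
| |             "channel": channel, | ||
| |             "platform": platform, | ||
| |             "impressions": int(r.get(key)), | ||
| |         } | ||
| |         es.index(index="ad-stats-daily", doc_type="summary", body=doc) | ||
| | ``` | ||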
| 57 | |||
| 58 | ## Measure everything | ||
| 59 | |||
| 60 | The thing about software is that we really don’t know how well it performs | ||
| 61 | under load until such load is present. When testing locally everything is fine, | ||
| 62 | but in production things tend to fall apart. | ||
| 63 | |||
| 64 | As a solution to this we measure everything we can: function execution | ||
| 65 | time (by wrapping functions with timers), server performance (CPU, memory, | ||
| 66 | disk, etc.), and Nginx and [uWSGI](https://uwsgi-docs.readthedocs.io/) performance. | ||
| 67 | We sacrifice a bit of performance for the sake of this information. And we store | ||
| 68 | all this information for later analysis. | ||
| 69 | |||
| 70 | **Example of function execution time** | ||
| 71 | |||
| 72 | ```json | ||
| 73 | { | ||
| 74 | "get_final_filtered_ads": { | ||
| 75 | "counter": 1931250, | ||
| 76 | "avg": 0.0066143431, | ||
| 77 | "elapsed": 12773.9500310003 | ||
| 78 | }, | ||
| 79 | "store_keywords_statistics": { | ||
| 80 | "counter": 1931011, | ||
| 81 | "avg": 0.0004605267, | ||
| 82 | "elapsed": 889.2821669996 | ||
| 83 | }, | ||
| 84 | "match_by_context": { | ||
| 85 | "counter": 1931011, | ||
| 86 | "avg": 0.0055960716, | ||
| 87 | "elapsed": 10806.0758889999 | ||
| 88 | }, | ||
| 89 | "match_by_high_performance": { | ||
| 90 | "counter": 262, | ||
| 91 | "avg": 0.0152770229, | ||
| 92 | "elapsed": 4.00258 | ||
| 93 | }, | ||
| 94 | "store_impression_stats": { | ||
| 95 | "counter": 1931250, | ||
| 96 | "avg": 0.0006189991, | ||
| 97 | "elapsed": 1195.4419869999 | ||
| 98 | } | ||
| 99 | } | ||
| 100 | ``` | ||
| 101 | |||
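| | A wrapper along these lines can produce counters like the ones above. This is a minimal sketch of the timing-decorator idea, not our actual implementation; the in-memory stats dictionary and the example function are illustrative only: | ||
| |||
| | ```python | ||
| | # -*- coding: utf-8 -*- | ||
| | import time | ||
| | from functools import wraps | ||
| |||
| | # in-memory stats; in practice these counters would live in Redis or similar | ||
| | stats = {} | ||
| |||
| | def timed(func): | ||
| |     @wraps(func) | ||
| |     def wrapper(*args, **kwargs): | ||
| |         start = time.time() | ||
| |         try: | ||
| |             return func(*args, **kwargs) | ||
| |         finally: | ||
| |             elapsed = time.time() - start | ||
| |             entry = stats.setdefault(func.__name__, {"counter": 0, "elapsed": 0.0}) | ||
| |             entry["counter"] += 1 | ||
| |             entry["elapsed"] += elapsed | ||
| |             entry["avg"] = entry["elapsed"] / entry["counter"] | ||
| |     return wrapper | ||
| |||
| | @timed | ||
| | def match_by_context(keywords): | ||
| |     # placeholder for the real matching logic | ||
| |     time.sleep(0.005) | ||
| |     return keywords | ||
| | ``` | ||
| |||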
| 102 | We have also started profiling with [cProfile](https://pymotw.com/2/profile/) | ||
| 103 | and then visualizing with [KCachegrind](http://kcachegrind.sourceforge.net/). | ||
| 104 | This provides a much more detailed look into code execution. | ||
| 105 | |||
| 106 | ## Cache control is your friend | ||
| 107 | |||
| 108 | Because we use a JavaScript library for rendering ads, we rely on this script | ||
| 109 | extensively, and when needed we must be able to change the behavior of the script | ||
| 110 | quickly. | ||
| 111 | |||
| 112 | In our case we cannot simply replace the JavaScript URL in the HTML code. It | ||
| 113 | usually takes a day or two for the people who maintain the sites to change the | ||
| 114 | code or add a ?ver=xxx attribute. And this makes rapid deployment and testing | ||
| 115 | very difficult and time-consuming. There is a limit to how much you can test locally. | ||
| 116 | |||
| 117 | We are now in the process of integrating [Google Tag | ||
| 118 | Manager](https://www.google.com/analytics/tag-manager/), but a couple of websites | ||
| 119 | are developed on the ASP.net platform, which has some problems with Tag Manager. | ||
| 120 | With the solution below we are certain that we are serving the latest version of | ||
| 121 | the script. | ||
| 122 | |||
| 123 | It only takes one mistake for users to end up with a stale cached script, and | ||
| 124 | if it is cached for 1 year you probably know where the problem is. | ||
| 125 | |||
| 126 | ```nginx | ||
| 127 | # nginx ➜ /etc/nginx/sites-available/default | ||
| 128 | location /static/ { | ||
| 129 | alias /path-to-static-content/; | ||
| 130 | autoindex off; | ||
| 131 | charset utf-8; | ||
| 132 | gzip on; | ||
| 133 | gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css; | ||
| 134 | location ~* \.(ico|gif|jpeg|jpg|png|woff|ttf|otf|svg|woff2|eot)$ { | ||
| 135 | expires 1y; | ||
| 136 | add_header Pragma public; | ||
| 137 | add_header Cache-Control "public"; | ||
| 138 | } | ||
| 139 | location ~* \.(css|js|txt)$ { | ||
| 140 | expires 3600s; | ||
| 141 | add_header Pragma public; | ||
| 142 | add_header Cache-Control "public, must-revalidate"; | ||
| 143 | } | ||
| 144 | } | ||
| 145 | ``` | ||
| 146 | |||
| 147 | Also be careful when redirecting to a URL in your Python code. We noticed that if | ||
| 148 | we didn’t precisely set up the cache-control and expires headers in the response, | ||
| 149 | we didn’t get the request on the server and therefore couldn’t measure clicks. So | ||
| 150 | when redirecting, do as follows and there will be no problems. | ||
| 151 | |||
| 152 | ```python | ||
| 153 | # python ➜ bottlepy web micro-framework | ||
| 154 | response = bottle.HTTPResponse(status=302) | ||
| 155 | response.set_header("Cache-Control", "no-store, no-cache, must-revalidate") | ||
| 156 | response.set_header("Expires", "Thu, 01 Jan 1970 00:00:00 GMT") | ||
| 157 | response.set_header("Location", url) | ||
| 158 | return response | ||
| 159 | ``` | ||
| 160 | |||
| 161 | > Cache control in browsers is quite aggressive and you need to be precise to | ||
| 162 | avoid future problems. We learned that lesson the hard way. | ||
| 163 | |||
| 164 | ## Learn NGINX | ||
| 165 | |||
| 166 | When deciding on a web server, we went with Nginx as a reverse proxy for our | ||
| 167 | applications. We adopted a micro-service-oriented architecture early in the | ||
| 168 | project to ensure that when we scale we can easily add additional servers to our | ||
| 169 | cluster. And Nginx was crucial for load balancing and static content | ||
| 170 | delivery. | ||
| 171 | |||
| 172 | At first our config file was quite simple, but it later grew larger. After | ||
| 173 | patching and adding new settings, I sat down and learned more about the guts of | ||
| 174 | Nginx. This proved to be very useful, and we were able to squeeze much more out | ||
| 175 | of our setup. So I advise you to take your time and read through the | ||
| 176 | [documentation](https://nginx.org/en/docs/). This saved us a lot of headaches. | ||
| 177 | Googling for solutions only goes so far. | ||
| 178 | |||
| 179 | ## Use Redis/Memcached | ||
| 180 | |||
| 181 | As explained above, we use caching for basically everything. It is the | ||
| 182 | cornerstone of our services. At first we were very careful about the quantity | ||
| 183 | of things we stored in [Redis](https://redis.io/), but we later found out that | ||
| 184 | the memory footprint is very low even when storing large amounts of data in it. | ||
| 185 | |||
| 186 | So we gradually increased our usage to caching whole HTML outputs of the | ||
| 187 | dashboard, as sketched below. This improved our performance by an order of | ||
| 188 | magnitude. And Redis’s native TTL support goes hand in hand with our needs. | ||
| 189 | |||
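| | A minimal sketch of that HTML-caching pattern, assuming the `redis` Python package; the key name, the TTL and the render_dashboard function are made up for illustration: | ||
| |||
| | ```python | ||
| | # -*- coding: utf-8 -*- | ||
| | import redis | ||
| |||
| | r = redis.StrictRedis(host="localhost", port=6379, db=0) | ||
| |||
| | def cached_dashboard(user_id): | ||
| |     key = "html:dashboard:{}".format(user_id) | ||
| |     html = r.get(key) | ||
| |     if html is None: | ||
| |         # cache miss: render the page and store it with a TTL so it expires on its own | ||
| |         html = render_dashboard(user_id)  # hypothetical rendering function | ||
| |         r.setex(key, 300, html)  # keep the rendered HTML for 5 minutes | ||
| |     return html | ||
| | ``` | ||
| |||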
| 190 | The reason why we chose [Redis](https://redis.io/) over | ||
| 191 | [Memcached](https://memcached.org/) was Redis’s out-of-the-box scalability. | ||
| 192 | But all of this can also be achieved with Memcached. | ||
| 193 | |||
| 194 | ## Conclusion | ||
| 195 | |||
| 196 | There are a lot more details that could have been written, and every single topic | ||
| 197 | in here deserves its own post, but you probably got the idea about the problems | ||
| 198 | we faced. | ||
diff --git a/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md b/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md deleted file mode 100644 index 2e36eaf..0000000 --- a/content/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md +++ /dev/null | |||
| @@ -1,205 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Profiling Python web applications with visual tools | ||
| 3 | url: profiling-python-web-applications-with-visual-tools.html | ||
| 4 | date: 2017-04-21T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | I have been profiling my software with KCachegrind for a long time now, and I | ||
| 9 | was missing this option when developing APIs or other web services. I always | ||
| 10 | knew that this was possible but never really took the time to dive into it. | ||
| 11 | |||
| 12 | Before we begin there are some requirements. We will need to: | ||
| 13 | |||
| 14 | - implement [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) into our web app, | ||
| 15 | - convert output to [callgrind](http://valgrind.org/docs/manual/cl-manual.html) format with [pyprof2calltree](https://pypi.python.org/pypi/pyprof2calltree/), | ||
| 16 | - visualize data with [KCachegrind](http://kcachegrind.sourceforge.net/html/Home.html) or [Profiling Viewer](http://www.profilingviewer.com/). | ||
| 17 | |||
| 18 | |||
| 19 | If you are using MacOS you should check out [Profiling | ||
| 20 | Viewer](http://www.profilingviewer.com/) or | ||
| 21 | [MacCallGrind](http://www.maccallgrind.com/). | ||
| 22 | |||
| 23 |  | ||
| 24 | |||
| 25 | We will be dividing this post into two main categories: | ||
| 26 | |||
| 27 | - writing a simple web service, | ||
| 28 | - visualizing the profile of this web service. | ||
| 29 | |||
| 30 | ## Simple web-service | ||
| 31 | |||
| 32 | Let's use virtualenv so we won't pollute our base system. If you don't have | ||
| 33 | virtualenv installed on your system, you can install it with the pip command. | ||
| 34 | |||
| 35 | ```bash | ||
| 36 | # let's install virtualenv globally | ||
| 37 | $ sudo pip install virtualenv | ||
| 38 | |||
| 39 | # let's also install pyprof2calltree globally | ||
| 40 | $ sudo pip install pyprof2calltree | ||
| 41 | |||
| 42 | # now we create project | ||
| 43 | $ mkdir demo-project | ||
| 44 | $ cd demo-project/ | ||
| 45 | |||
| 46 | # now let's create folder where we will store profiles | ||
| 47 | $ mkdir prof | ||
| 48 | |||
| 49 | # now we create empty virtualenv in venv/ folder | ||
| 50 | $ virtualenv --no-site-packages venv | ||
| 51 | |||
| 52 | # we now need to activate virtualenv | ||
| 53 | $ source venv/bin/activate | ||
| 54 | |||
| 55 | # you can check if virtualenv was correctly initialized by | ||
| 56 | # checking where your python interpreter is located | ||
| 57 | # if the command below points to your created directory and not some | ||
| 58 | # system dir like /usr/bin/python then everything is fine | ||
| 59 | $ which python | ||
| 60 | |||
| 61 | # we can check now if all is good ➜ if ok couple of | ||
| 62 | # lines will be displayed | ||
| 63 | $ pip freeze | ||
| 64 | # appdirs==1.4.3 | ||
| 65 | # packaging==16.8 | ||
| 66 | # pyparsing==2.2.0 | ||
| 67 | # six==1.10.0 | ||
| 68 | |||
| 69 | # now we are ready to install bottlepy ➜ web micro-framework | ||
| 70 | $ pip install bottle | ||
| 71 | |||
| 72 | # you can deactivate virtualenv but you will then go | ||
| 73 | # under system domain ➜ for now don't deactivate | ||
| 74 | $ deactivate | ||
| 75 | ``` | ||
| 76 | |||
| 77 | We are now ready to write a simple web service. Let's create the file app.py and | ||
| 78 | paste the code below into this newly created file. | ||
| 79 | |||
| 80 | ```python | ||
| 81 | # -*- coding: utf-8 -*- | ||
| 82 | |||
| 83 | import bottle | ||
| 84 | import random | ||
| 85 | import cProfile | ||
| 86 | |||
| 87 | app = bottle.Bottle() | ||
| 88 | |||
| 89 | # this function is a decorator and encapsulates function | ||
| 90 | # and performs profiling and then saves it to subfolder | ||
| 91 | # prof/function-name.prof | ||
| 92 | # in our example only the awesome_random_number function will | ||
| 93 | # be profiled because it is decorated with @do_cprofile | ||
| 94 | def do_cprofile(func): | ||
| 95 |     def profiled_func(*args, **kwargs): | ||
| 96 |         profile = cProfile.Profile() | ||
| 97 |         try: | ||
| 98 |             profile.enable() | ||
| 99 |             result = func(*args, **kwargs) | ||
| 100 |             profile.disable() | ||
| 101 |             return result | ||
| 102 |         finally: | ||
| 103 |             profile.dump_stats("prof/" + str(func.__name__) + ".prof") | ||
| 104 |     return profiled_func | ||
| 105 | |||
| 106 | |||
| 107 | # we enable profiling for a specific function by adding | ||
| 108 | # @do_cprofile above the function declaration | ||
| 109 | @app.route("/") | ||
| 110 | @do_cprofile | ||
| 111 | def awesome_random_number(): | ||
| 112 |     awesome_random_number = random.randint(0, 100) | ||
| 113 |     return "awesome random number is " + str(awesome_random_number) | ||
| 114 | |||
| 115 | @app.route("/test") | ||
| 116 | def test(): | ||
| 117 |     return "dummy test" | ||
| 118 | |||
| 119 | if __name__ == '__main__': | ||
| 120 |     bottle.run( | ||
| 121 |         app = app, | ||
| 122 |         host = "0.0.0.0", | ||
| 123 |         port = 4000 | ||
| 124 |     ) | ||
| 125 | |||
| 126 | # run with 'python app.py' | ||
| 127 | # open browser 'http://0.0.0.0:4000' | ||
| 128 | ``` | ||
| 129 | |||
| 130 | When the browser hits the awesome\_random\_number() function, a profile is | ||
| 131 | created in the prof/ subfolder. | ||
| 132 | |||
| 133 | ## Visualize profile | ||
| 134 | |||
| 135 | Now let's convert this cProfile output to the callgrind format. | ||
| 136 | |||
| 137 | ```bash | ||
| 138 | $ cd prof/ | ||
| 139 | $ pyprof2calltree -i awesome_random_number.prof | ||
| 140 | # this creates 'awesome_random_number.prof.log' file in the same folder | ||
| 141 | ``` | ||
| 142 | |||
| 143 | This file can be opened with the visualization tools listed above. In this case | ||
| 144 | we will be using Profiling Viewer on MacOS. You can open the image in a new tab. | ||
| 145 | As you can see from this example, there is a hierarchy of the execution order of | ||
| 146 | your code. | ||
| 147 | |||
| 148 |  | ||
| 149 | |||
| 150 | > Make sure you convert the cProfile output every time you want to | ||
| 151 | refresh and take a look at possible optimizations, because cProfile updates the | ||
| 152 | .prof file every time the browser hits the function. | ||
| 153 | |||
| 154 | This is just a simple example, but when you are developing real-life applications | ||
| 155 | this can be very illuminating, especially for seeing which parts of your code are | ||
| 156 | bottlenecks and need to be optimized. | ||
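| |||
| | If you would rather profile every request instead of decorating individual functions, the same idea can be lifted into a small WSGI middleware. This is only a sketch under the same assumptions as above (cProfile dumping .prof files into the prof/ folder); the class name is made up: | ||
| |||
| | ```python | ||
| | # -*- coding: utf-8 -*- | ||
| |||
| | import os | ||
| | import cProfile | ||
| |||
| | class ProfilingMiddleware(object): | ||
| |     """Wraps any WSGI app and dumps one .prof file per request path.""" | ||
| |||
| |     def __init__(self, wsgi_app, prof_dir="prof"): | ||
| |         self.wsgi_app = wsgi_app | ||
| |         self.prof_dir = prof_dir | ||
| |||
| |     def __call__(self, environ, start_response): | ||
| |         profile = cProfile.Profile() | ||
| |         profile.enable() | ||
| |         try: | ||
| |             return self.wsgi_app(environ, start_response) | ||
| |         finally: | ||
| |             profile.disable() | ||
| |             name = environ.get("PATH_INFO", "/").strip("/").replace("/", "_") or "root" | ||
| |             profile.dump_stats(os.path.join(self.prof_dir, name + ".prof")) | ||
| |||
| | # usage: wrap the bottle app before running it, e.g. | ||
| | # bottle.run(app=ProfilingMiddleware(app), host="0.0.0.0", port=4000) | ||
| | ``` | ||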
| 157 | |||
| 158 | ## Update 2017-04-22 | ||
| 159 | |||
| 160 | Reddit user [mvt](https://www.reddit.com/user/mvt) also recommended this awesome | ||
| 161 | web-based profile visualizer [SnakeViz](https://jiffyclub.github.io/snakeviz/) | ||
| 162 | that directly takes the output from the | ||
| 163 | [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) | ||
| 164 | module. | ||
| 165 | |||
| 166 | <div class="reddit-embed" data-embed-media="www.redditmedia.com" data-embed-parent="false" data-embed-live="false" data-embed-uuid="583880c1-002e-41ed-a373-020a0ef2cff9" data-embed-created="2017-04-22T19:46:54.810Z"><a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/dgljhsb/">Comment</a> from discussion <a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/">Profiling Python web applications with visual tools</a>.</div><script async src="https://www.redditstatic.com/comment-embed.js"></script> | ||
| 167 | |||
| 168 | ```bash | ||
| 169 | # let's install it globally as well | ||
| 170 | $ sudo pip install snakeviz | ||
| 171 | |||
| 172 | # now let's visualize | ||
| 173 | $ cd prof/ | ||
| 174 | $ snakeviz awesome_random_number.prof | ||
| 175 | # this automatically opens browser window and | ||
| 176 | # shows visualized profile | ||
| 177 | ``` | ||
| 178 | |||
| 179 |  | ||
| 180 | |||
| 181 | Reddit user [ccharles](https://www.reddit.com/user/ccharles) suggested a better | ||
| 182 | way of installing pip packages: targeting the user level instead of using sudo. | ||
| 183 | |||
| 184 | <div class="reddit-embed" data-embed-media="www.redditmedia.com" data-embed-parent="false" data-embed-live="false" data-embed-uuid="f4f0459e-684d-441e-bebe-eb49b2f0a31d" data-embed-created="2017-04-22T19:46:10.874Z"><a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/dglpzkx/">Comment</a> from discussion <a href="https://www.reddit.com/r/Python/comments/66v373/profiling_python_web_applications_with_visual/">Profiling Python web applications with visual tools</a>.</div><script async src="https://www.redditstatic.com/comment-embed.js"></script> | ||
| 185 | |||
| 186 | ```bash | ||
| 187 | # now we need to add this path to our $PATH variable | ||
| 188 | # we do this by adding this line at the end of your | ||
| 189 | # ~/.bashrc file | ||
| 190 | PATH=$PATH:$HOME/.local/bin/ | ||
| 191 | |||
| 192 | # in order to use this new configuration you can close | ||
| 193 | # and reopen terminal or reload .bashrc file | ||
| 194 | $ source ~/.bashrc | ||
| 195 | |||
| 196 | # now let's test if new directory is present in $PATH | ||
| 197 | $ echo $PATH | ||
| 198 | |||
| 199 | # now we can install on user level by adding --user | ||
| 200 | # without use of sudo | ||
| 201 | $ pip install snakeviz --user | ||
| 202 | ``` | ||
| 203 | |||
| 204 | Or as suggested by [mvt](https://www.reddit.com/user/mvt) you can | ||
| 205 | use [pipsi](https://github.com/mitsuhiko/pipsi). | ||
diff --git a/content/posts/2017-08-11-simple-iot-application.md b/content/posts/2017-08-11-simple-iot-application.md deleted file mode 100644 index e7e086b..0000000 --- a/content/posts/2017-08-11-simple-iot-application.md +++ /dev/null | |||
| @@ -1,606 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Simple IOT application supported by real-time monitoring and data history | ||
| 3 | url: simple-iot-application.html | ||
| 4 | date: 2017-08-11T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## Initial thoughts | ||
| 9 | |||
| 10 | I have been developing this kind of application for the better part of the last | ||
| 11 | 5 years, and people keep asking me how to approach developing such applications, | ||
| 12 | so I will try to explain it here. | ||
| 13 | |||
| 14 | IOT applications are really no different from any other kind of application. | ||
| 15 | We have data that needs to be collected and visualized in some form of tables or | ||
| 16 | charts. The main difference here is that most of the time this data is | ||
| 17 | collected by some kind of device that is foreign to a developer who mainly works | ||
| 18 | in the web domain. But fear not, it's not that different from writing some JavaScript. | ||
| 19 | |||
| 20 | There are many devices able to transmit data via a wireless or wired network by | ||
| 21 | default, but for the sake of example we will be using the commonly known Arduino | ||
| 22 | with a wireless module already on the board → [Arduino | ||
| 23 | MKR1000](https://store.arduino.cc/arduino-mkr1000). | ||
| 24 | |||
| 25 | In order to make this little project as accessible to others as possible, I will | ||
| 26 | try to make it as inexpensive as possible. By this I mean that I will avoid | ||
| 27 | using hosted virtual servers and will be using my own laptop as a server. You | ||
| 28 | must, however, buy an Arduino MKR1000 to follow the steps below. If you want to | ||
| 29 | deploy this software, I would suggest using | ||
| 30 | [DigitalOcean](https://www.digitalocean.com) → the smallest VPS is only per month, | ||
| 31 | making it one of the most affordable options out there. Please note that this | ||
| 32 | software will not run on stock web hosting that only supports LAMP (Linux, | ||
| 33 | Apache, MySQL, and PHP). | ||
| 34 | |||
| 35 | But before we begin, please note that this is strictly experimental code | ||
| 36 | that is not well optimized. There are much better ways of handling some aspects | ||
| 37 | of the application, but they require a much deeper knowledge of the technology | ||
| 38 | that is not needed for an example like this. | ||
| 39 | |||
| 40 | **Development steps** | ||
| 41 | |||
| 42 | 1. Simple Python API that will receive and store incoming data. | ||
| 43 | 2. Prototype C++ code that will read "sensor data" and transmit it to API. | ||
| 44 | 3. Data visualization with charts → extends Python web application. | ||
| 45 | |||
| 46 | Steps 1 and 3 will share the same web application. One route will be dedicated | ||
| 47 | to the API and another to serving HTML with the chart. | ||
| 48 | |||
| 49 | The schema below represents what we will try to achieve and how the different | ||
| 50 | parts relate to each other. | ||
| 51 | |||
| 52 |  | ||
| 53 | |||
| 54 | ## Simple Python API | ||
| 55 | |||
| 56 | I have always been a fan of simplicity, so we will be using [Bottle: Python Web | ||
| 57 | Framework](https://bottlepy.org/docs/dev/). It is a single-file web framework | ||
| 58 | that seriously simplifies working with routes and templating, and it has a | ||
| 59 | built-in web server that satisfies our needs in this case. | ||
| 60 | |||
| 61 | First we need to install the bottle package. This can be done by downloading | ||
| 62 | ```bottle.py``` and placing it in the root of your application, or by using pip: | ||
| 63 | ```pip install bottle --user```. | ||
| 64 | |||
| 65 | If you are using Linux or MacOS, then Python is already installed. If you want | ||
| 66 | to test this on Windows, please install [Python for | ||
| 67 | Windows](https://www.python.org/downloads/windows/). There may be some problems | ||
| 68 | with the path when you try to launch ```python webapp.py```, so please take care | ||
| 69 | of this before you continue. | ||
| 70 | |||
| 71 | ### Basic web application | ||
| 72 | |||
| 73 | The most basic bottle application is quite simple. Paste the code below into a | ||
| 74 | ```webapp.py``` file and save it. | ||
| 75 | |||
| 76 | ```python | ||
| 77 | # -*- coding: utf-8 -*- | ||
| 78 | |||
| 79 | import bottle | ||
| 80 | |||
| 81 | # initializing bottle app | ||
| 82 | app = bottle.Bottle() | ||
| 83 | |||
| 84 | # triggered when / is accessed from browser | ||
| 85 | # only accepts GET → no POST allowed | ||
| 86 | @app.route("/", method=["GET"]) | ||
| 87 | def route_default(): | ||
| 88 |     return "howdy from python" | ||
| 89 | |||
| 90 | # starting server on http://0.0.0.0:5000 | ||
| 91 | if __name__ == "__main__": | ||
| 92 |     bottle.run( | ||
| 93 |         app = app, | ||
| 94 |         host = "0.0.0.0", | ||
| 95 |         port = 5000, | ||
| 96 |         debug = True, | ||
| 97 |         reloader = True, | ||
| 98 |         catchall = True, | ||
| 99 |     ) | ||
| 100 | ``` | ||
| 101 | |||
| 102 | To run this simple application, open a command prompt or terminal on | ||
| 103 | your machine, go to the folder containing your file and type ```python | ||
| 104 | webapp.py```. If everything goes OK, open your web browser and point it to | ||
| 105 | ```http://0.0.0.0:5000```. | ||
| 106 | |||
| 107 | If you would like to change the port of your application (to port 80, say) and | ||
| 108 | not use root to run your app, this will present a problem. TCP/IP port numbers | ||
| 109 | below 1024 are privileged ports → this is a security feature. So for the sake of | ||
| 110 | simplicity and security, use a port number above 1024, as I have done with port 5000. | ||
| 111 | |||
| 112 | If this fails at any time please fix it before you continue, because nothing | ||
| 113 | below will work otherwise. | ||
| 114 | |||
| 115 | We use 0.0.0.0 as the default host so that this app is available over your local | ||
| 116 | network. If you find your local IP with ```ifconfig``` and try accessing this site | ||
| 117 | with your phone (if on the same network/router as your machine), this should work | ||
| 118 | as well (an example of such an IP: ```http://192.168.1.15:5000```). This is a must, | ||
| 119 | because the Arduino will be accessing this application to send its data. | ||
| 120 | |||
| 121 | ### Web application security | ||
| 122 | |||
| 123 | There is a lot to be said about security, and it is the topic of many books. Of | ||
| 124 | course all of it cannot be covered here, but to establish some basic security → | ||
| 125 | you should always use SSL with your application. Some fantastic free certificates | ||
| 126 | are available from [Let's Encrypt - Free SSL/TLS | ||
| 127 | Certificates](https://letsencrypt.org). With an SSL certificate installed, you | ||
| 128 | should then make use of HTTP headers and send your "API key" via a header. If | ||
| 129 | your key is sent via a header, then it is encrypted by SSL and travels encrypted | ||
| 130 | over the network. Never send your API keys via a GET parameter like | ||
| 131 | ```http://example.com/?api_key=somekeyvalue```. The problem with this kind of | ||
| 132 | sending is that the key is visible in logs and to network sniffers. | ||
| 133 | |||
| 134 | There is a fantastic article describing some aspects of security: [11 Web | ||
| 135 | Application Security Best | ||
| 136 | Practices](https://www.keycdn.com/blog/web-application-security-best-practices/). Please | ||
| 137 | check it out. | ||
| 138 | |||
| 139 | ### Simple API for writing data-points | ||
| 140 | |||
| 141 | We will now take the boilerplate code from the example above and extend it to be | ||
| 142 | able to write the data received by the API to local storage. For this example I | ||
| 143 | will use SQLite3, because it plays well with Python and can store quite a large | ||
| 144 | amount of data. I have been using it to collect gigabytes of data in a single | ||
| 145 | database without any corruption or problems → your experience may vary. | ||
| 146 | |||
| 147 | To avoid learning SQLite I will be using [Dataset: databases for lazy | ||
| 148 | people](https://dataset.readthedocs.io/en/latest/index.html). This package | ||
| 149 | abstracts SQL and simplifies writing data to and reading data from the database. | ||
| 150 | You should install this package with pip: ```pip install dataset --user```. | ||
| 151 | |||
| 152 | Because the API will use the POST method, I will be testing whether the code works | ||
| 153 | correctly by using the [Restlet Client for Google | ||
| 154 | Chrome](https://chrome.google.com/webstore/detail/restlet-client-rest-api-t/aejoelaoggembcahagimdiliamlcdmfm). | ||
| 155 | This software also allows you to set headers → for basic security with the API_KEY. | ||
| 156 | |||
| 157 | To quickly generate passwords or API keys I usually use this nifty website | ||
| 158 | [RandomKeygen](https://randomkeygen.com/). | ||
| 159 | |||
| 160 | Copy and paste the code below over your previous code in the file ```webapp.py```. | ||
| 161 | |||
| 162 | ```python | ||
| 163 | # -*- coding: utf-8 -*- | ||
| 164 | |||
| 165 | import time | ||
| 166 | import bottle | ||
| 167 | import random | ||
| 168 | import dataset | ||
| 169 | |||
| 170 | # initializing bottle app | ||
| 171 | app = bottle.Bottle() | ||
| 172 | |||
| 173 | # connects to sqlite database | ||
| 174 | # check_same_thread=False allows using it in multi-threaded mode | ||
| 175 | app.config["dsn"] = dataset.connect("sqlite:///data.db?check_same_thread=False") | ||
| 176 | |||
| 177 | # api key that will be used in Arduino code | ||
| 178 | app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH" | ||
| 179 | |||
| 180 | # triggered when /api is accessed from browser | ||
| 181 | # only accepts POST → no GET allowed | ||
| 182 | @app.route("/api", method=["POST"]) | ||
| 183 | def route_default(): | ||
| 184 |     status = 400 | ||
| 185 |     ts = int(time.time()) # current timestamp | ||
| 186 |     value = bottle.request.body.read() # data from device | ||
| 187 |     api_key = bottle.request.get_header("Api_Key") # api key from header | ||
| 188 |||
| 189 |     # outputs received data to the console for debugging | ||
| 190 |     print ">>> {} :: {}".format(value, api_key) | ||
| 191 |||
| 192 |     # if api_key is correct and value is present | ||
| 193 |     # then write the data-point to the point table | ||
| 194 |     if api_key == app.config["api_key"] and value: | ||
| 195 |         app.config["dsn"]["point"].insert(dict(ts=ts, value=value)) | ||
| 196 |         status = 200 | ||
| 197 |||
| 198 |     # we only need to return the status | ||
| 199 |     return bottle.HTTPResponse(status=status, body="") | ||
| 200 | |||
| 201 | # starting server on http://0.0.0.0:5000 | ||
| 202 | if __name__ == "__main__": | ||
| 203 | bottle.run( | ||
| 204 | app = app, | ||
| 205 | host = "0.0.0.0", | ||
| 206 | port = 5000, | ||
| 207 | debug = True, | ||
| 208 | reloader = True, | ||
| 209 | catchall = True, | ||
| 210 | ) | ||
| 211 | ``` | ||
| 212 | |||
| 213 | To run this, simply go to the folder containing the Python file and run ```python | ||
| 214 | webapp.py``` from a terminal. If everything goes OK you should have a simple API | ||
| 215 | available via the POST method on the /api route. | ||
| 216 | |||
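| | If you prefer testing from a terminal instead of a browser extension, a small | ||
| | Python script can do the same job. Below is only a rough sketch: it assumes the | ||
| | third-party ```requests``` package (```pip install requests --user```), which is | ||
| | not otherwise used in this post, and the API key from the code above. | ||
| | | ||
| | ```python | ||
| | # test_api.py - hedged sketch of a test client for the /api endpoint | ||
| | import requests | ||
| | | ||
| | resp = requests.post( | ||
| |     "http://127.0.0.1:5000/api",                  # adjust host/port if you changed them | ||
| |     data="42",                                    # a dummy sensor value | ||
| |     headers={"Api-Key": "JtF2aUE5SGHfVJBCG5SH"},  # must match app.config["api_key"] | ||
| | ) | ||
| | print(resp.status_code)                           # 200 when the key matches, 400 otherwise | ||
| | ``` | ||
| | | ||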
| 217 | After testing the service with Restlet Client you should be able to view your | ||
| 218 | data in a database file ```data.db```. | ||
| 219 | |||
| 220 |  | ||
| 221 | |||
| 222 | You can also check the contents of new database file by using desktop client | ||
| 223 | for SQLite → [DB Browser for SQLite](http://sqlitebrowser.org/). | ||
| 224 | |||
| 225 |  | ||
| 226 | |||
| 227 | The table structure is as simple as it can be. We have ts (timestamp) and value | ||
| 228 | (the value from the Arduino). As you can see, the timestamp is generated on the | ||
| 229 | API side. If you happen to have an accurate real-time clock on the Arduino it | ||
| 230 | would be better to generate the timestamp there and send it along with the value. | ||
| 231 | This would be particularly useful if we were collecting sensor data at a higher | ||
| 232 | frequency and then sending it in bulk to the API. | ||
| 233 | |||
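| | You can also inspect the collected points from Python itself, using the same | ||
| | dataset package the API uses. A minimal sketch (run it in the project folder, | ||
| | next to ```data.db```): | ||
| | | ||
| | ```python | ||
| | # read_points.py - hedged sketch for listing stored data-points with dataset | ||
| | import dataset | ||
| | | ||
| | db = dataset.connect("sqlite:///data.db") | ||
| | for point in db["point"].all():  # iterates over all rows in the point table | ||
| |     print("{} {}".format(point["ts"], point["value"])) | ||
| | ``` | ||
| | | ||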
| 234 | If you deploy this app with uWSGI in multi-threaded mode, keep the | ||
| 235 | ```?check_same_thread=False``` option in the DSN (Data Source Name) URL. | ||
| 236 | |||
| 237 | OK, now that we have a working API with some basic security, so that unwanted | ||
| 238 | people cannot post data to your database, we can proceed further and program the | ||
| 239 | Arduino to send data to the API. | ||
| 240 | |||
| 241 | ## Sending data to API with Arduino MKR1000 | ||
| 242 | |||
| 243 | First of all, you need an MKR1000 module and a micro-USB cable to proceed. If | ||
| 244 | you have ever done any work with Arduino you will know that you also need the | ||
| 245 | [Arduino IDE](https://www.arduino.cc/en/Main/Software). On the provided link you | ||
| 246 | should be able to download and install the IDE. Once that task is completed and | ||
| 247 | you have successfully run the blink example, proceed to the next step. | ||
| 248 | |||
| 249 | In order to use the wireless capabilities of the MKR1000 you first need to | ||
| 250 | install the [WiFi101 library](https://www.arduino.cc/en/Reference/WiFi101) in the | ||
| 251 | Arduino IDE. Check before you install it, as you may already have it. | ||
| 252 | |||
| 253 | The code below is a working example that sends data to the API. Before you test | ||
| 254 | it, make sure the Python web application is running. Then change the settings | ||
| 255 | for wifi, api endpoint and api_key. If for some reason the code below doesn't | ||
| 256 | work for you, please leave a comment and I'll try to help. | ||
| 257 | |||
| 258 | Once you have opened the IDE and copied this code, try to compile and upload it. | ||
| 259 | Then open the "Serial monitor" to see if any output is produced by the Arduino. | ||
| 260 | |||
| 261 | ```c | ||
| 262 | #include <WiFi101.h> | ||
| 263 | |||
| 264 | // wifi settings | ||
| 265 | char ssid[] = "ssid-name"; | ||
| 266 | char pass[] = "ssid-password"; | ||
| 267 | |||
| 268 | // api server endpoint | ||
| 269 | char server[] = "192.168.6.22"; | ||
| 270 | int port = 5000; | ||
| 271 | |||
| 272 | // api key that must be the same as the one in Python code | ||
| 273 | String api_key = "JtF2aUE5SGHfVJBCG5SH"; | ||
| 274 | |||
| 275 | // frequency data is sent in ms - every 5 seconds | ||
| 276 | int timeout = 1000 * 5; | ||
| 277 | |||
| 278 | int status = WL_IDLE_STATUS; | ||
| 279 | |||
| 280 | void setup() { | ||
| 281 | |||
| 282 | // initialize serial and wait for port to open: | ||
| 283 | Serial.begin(9600); | ||
| 284 | delay(1000); | ||
| 285 | |||
| 286 | // check for the presence of the shield | ||
| 287 | if (WiFi.status() == WL_NO_SHIELD) { | ||
| 288 | Serial.println("WiFi shield not present"); | ||
| 289 | while (true); | ||
| 290 | } | ||
| 291 | |||
| 292 | // attempt to connect to wifi network | ||
| 293 | while (status != WL_CONNECTED) { | ||
| 294 | Serial.print("Attempting to connect to SSID: "); | ||
| 295 | Serial.println(ssid); | ||
| 296 | status = WiFi.begin(ssid, pass); | ||
| 297 | // wait 10 seconds for connection | ||
| 298 | delay(10000); | ||
| 299 | } | ||
| 300 | |||
| 301 | // output wifi status to serial monitor | ||
| 302 | Serial.print("SSID: "); | ||
| 303 | Serial.println(WiFi.SSID()); | ||
| 304 | |||
| 305 | IPAddress ip = WiFi.localIP(); | ||
| 306 | Serial.print("IP Address: "); | ||
| 307 | Serial.println(ip); | ||
| 308 | |||
| 309 | long rssi = WiFi.RSSI(); | ||
| 310 | Serial.print("signal strength (RSSI):"); | ||
| 311 | Serial.print(rssi); | ||
| 312 | Serial.println(" dBm"); | ||
| 313 | } | ||
| 314 | |||
| 315 | void loop() { | ||
| 316 | WiFiClient client; | ||
| 317 | |||
| 318 | if (client.connect(server, port)) { | ||
| 319 | |||
| 320 | // I use random number generator for this example | ||
| 321 | // but you can use analog or digital inputs from arduino | ||
| 322 | String content = String(random(1000)); | ||
| 323 | |||
| 324 | client.println("POST /api HTTP/1.1");
| | client.println("Host: " + String(server)); // host header required by HTTP/1.1 | ||
| 325 | client.println("Connection: close"); | ||
| 326 | client.println("Api-Key: " + api_key); | ||
| 327 | client.println("Content-Length: " + String(content.length())); | ||
| 328 | client.println(); | ||
| 329 | client.println(content); | ||
| 330 | |||
| 331 | delay(100); | ||
| 332 | client.stop(); | ||
| 333 | Serial.println("Data sent successfully ..."); | ||
| 334 | |||
| 335 | } else { | ||
| 336 | Serial.println("Problem sending data ..."); | ||
| 337 | } | ||
| 338 | |||
| 339 | // waits for x seconds and continue looping | ||
| 340 | delay(timeout); | ||
| 341 | } | ||
| 342 | ``` | ||
| 343 | |||
| 344 | As you can see from the example, the Arduino is generating a random integer | ||
| 345 | between 0 and 999. You can easily replace this with a temperature sensor or | ||
| 346 | any other kind of sensor. | ||
| 347 | |||
| 348 | Now that we have the API in place and the Arduino is sending demo data, we can | ||
| 349 | focus on data visualization. | ||
| 350 | |||
| 351 | ## Data visualization | ||
| 352 | |||
| 353 | Before we continue we should examine our project folder structure. Currently we | ||
| 354 | only have two files in our project: | ||
| 355 | |||
| 356 | _simple-iot-app/_ | ||
| 357 | |||
| 358 | * _webapp.py_ | ||
| 359 | * _data.db_ | ||
| 360 | |||
| 361 | We will now add an HTML template that contains CSS and JavaScript code inline, | ||
| 362 | for simplicity. And for the bottle framework to be able to scan the root | ||
| 363 | application folder for templates, we will add ```bottle.TEMPLATE_PATH.insert(0, | ||
| 364 | "./")``` to ```webapp.py```. By default the bottle framework uses the ```views/``` | ||
| 365 | subfolder to store templates. Overriding this is not ideal, and if you use | ||
| 366 | bottle to develop web applications you should stick to the native behavior and | ||
| 367 | store templates in its predefined folder. But for the sake of the example we | ||
| 368 | will override it. Be careful to fully replace your code with the new code | ||
| 369 | provided below; avoid partially replacing code in the file :) New code for | ||
| 370 | reading data-points is also provided in the Python example below. | ||
| 371 | |||
| 372 | First we add a new route to our web application. It is triggered when the | ||
| 373 | browser hits the root of the application ```http://0.0.0.0:5000/```. This route | ||
| 374 | does nothing more than render the ```frontend.html``` template. This is done by | ||
| 375 | ```return bottle.template("frontend.html")```. Check the code below to examine | ||
| 376 | how exactly this is done. | ||
| 377 | |||
| 378 | Now we will expand the ```/api``` route and use different methods to write or | ||
| 379 | read data-points. For writing a data-point we will use the POST method and for | ||
| 380 | reading points we will use the GET method. The GET method will return a JSON | ||
| 381 | array with the latest readings and historical data. | ||
| 382 | |||
| 383 | There is a fantastic JavaScript library for plotting time-series charts called | ||
| 384 | [MetricsGraphics.js](https://www.metricsgraphicsjs.org) that is based on | ||
| 385 | [D3.js](https://d3js.org/) library for visualizing data. | ||
| 386 | |||
| 387 | MetricsGraphics.js expects a specific data schema → to satisfy it we need to | ||
| 388 | transform the data from the database into this format: | ||
| 389 | |||
| 390 | ```json | ||
| 391 | [ | ||
| 392 | { | ||
| 393 | "date": "2017-08-11 01:07:20", | ||
| 394 | "value": 933 | ||
| 395 | }, | ||
| 396 | { | ||
| 397 | "date": "2017-08-11 01:07:30", | ||
| 398 | "value": 743 | ||
| 399 | } | ||
| 400 | ] | ||
| 401 | ``` | ||
| 402 | |||
| 403 | The web application is now complete and we only need ```frontend.html```, which | ||
| 404 | we will develop next. If you tried to start the web app now and went to the root | ||
| 405 | route, it would return an error because we don't have frontend.html yet. | ||
| 406 | |||
| 407 | ```python | ||
| 408 | # -*- coding: utf-8 -*- | ||
| 409 | |||
| 410 | import time | ||
| 411 | import bottle | ||
| 412 | import json | ||
| 413 | import datetime | ||
| 414 | import random | ||
| 415 | import dataset | ||
| 416 | |||
| 417 | # initializing bottle app | ||
| 418 | app = bottle.Bottle() | ||
| 419 | |||
| 420 | # adds root directory as template folder | ||
| 421 | bottle.TEMPLATE_PATH.insert(0, "./") | ||
| 422 | |||
| 423 | # connects to sqlite database | ||
| 424 | # check_same_thread=False allows using it in multi-threaded mode | ||
| 425 | app.config["db"] = dataset.connect("sqlite:///data.db?check_same_thread=False") | ||
| 426 | |||
| 427 | # api key that will be used in Arduino code | ||
| 428 | app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH" | ||
| 429 | |||
| 430 | # triggered when / is accessed from browser | ||
| 431 | # only accepts GET → no POST allowed | ||
| 432 | @app.route("/", method=["GET"]) | ||
| 433 | def route_frontend(): | ||
| 434 | return bottle.template("frontend.html") | ||
| 435 | |||
| 436 | # triggered when /api is accessed from browser | ||
| 437 | # accepts POST and GET | ||
| 438 | @app.route("/api", method=["GET", "POST"]) | ||
| 439 | def route_api(): | ||
| 440 | |||
| 441 | # if method is POST then we write datapoint | ||
| 442 | if bottle.request.method == "POST": | ||
| 443 | status = 400 | ||
| 444 | ts = int(time.time()) # current timestamp | ||
| 445 | value = bottle.request.body.read() # data from device | ||
| 446 | api_key = bottle.request.get_header("Api-Key") # api key from header | ||
| 447 | |||
| 448 | # prints received data to the console for debugging | ||
| 449 | print(">>> {} :: {}".format(value, api_key)) | ||
| 450 | |||
| 451 | # if api_key is correct and value is present | ||
| 452 | # then writes attribute to point table | ||
| 453 | if api_key == app.config["api_key"] and value: | ||
| 454 | app.config["db"]["point"].insert(dict(ts=ts, value=value)) | ||
| 455 | status = 200 | ||
| 456 | |||
| 457 | # we only need to return status | ||
| 458 | return bottle.HTTPResponse(status=status, body="") | ||
| 459 | |||
| 460 | # if method is GET then we read datapoint | ||
| 461 | else: | ||
| 462 | response = [] | ||
| 463 | datapoints = app.config["db"]["point"].all() | ||
| 464 | |||
| 465 | for point in datapoints: | ||
| 466 | response.append({ | ||
| 467 | "date": datetime.datetime.fromtimestamp(int(point["ts"])).strftime("%Y-%m-%d %H:%M:%S"), | ||
| 468 | "value": point["value"] | ||
| 469 | }) | ||
| 470 | |||
| 471 | bottle.response.content_type = "application/json" | ||
| 472 | return json.dumps(response) | ||
| 473 | |||
| 474 | # starting server on http://0.0.0.0:5000 | ||
| 475 | if __name__ == "__main__": | ||
| 476 | bottle.run( | ||
| 477 | app = app, | ||
| 478 | host = "0.0.0.0", | ||
| 479 | port = 5000, | ||
| 480 | debug = True, | ||
| 481 | reloader = True, | ||
| 482 | catchall = True, | ||
| 483 | ) | ||
| 484 | ``` | ||
| 485 | |||
| 486 | And now, finally, we can implement ```frontend.html```. Create a file with this | ||
| 487 | name and copy the code below into it. When you are done you can start the web | ||
| 488 | application. Steps for this part are listed below the code. | ||
| 489 | |||
| 490 | ```html | ||
| 491 | <!DOCTYPE html> | ||
| 492 | <html> | ||
| 493 | |||
| 494 | <head> | ||
| 495 | <meta charset="utf-8"> | ||
| 496 | <title>Simple IOT application</title> | ||
| 497 | </head> | ||
| 498 | |||
| 499 | <body> | ||
| 500 | |||
| 501 | <h1>Simple IOT application</h1> | ||
| 502 | |||
| 503 | <div class="chart-placeholder"> | ||
| 504 | <div id="chart"></div> | ||
| 505 | </div> | ||
| 506 | |||
| 507 | <!-- application main script --> | ||
| 508 | <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script> | ||
| 509 | <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.0/d3.min.js"></script> | ||
| 510 | <script src="https://cdnjs.cloudflare.com/ajax/libs/metrics-graphics/2.11.0/metricsgraphics.min.js"></script> | ||
| 511 | <script> | ||
| 512 | function fetch_and_render() { | ||
| 513 | d3.json("/api", function(data) { | ||
| 514 | data = MG.convert.date(data, "date", "%Y-%m-%d %H:%M:%S"); | ||
| 515 | MG.data_graphic({ | ||
| 516 | data: data, | ||
| 517 | chart_type: "line", | ||
| 518 | full_width: true, | ||
| 519 | height: 270, | ||
| 520 | target: document.getElementById("chart"), | ||
| 521 | x_accessor: "date", | ||
| 522 | y_accessor: "value" | ||
| 523 | }); | ||
| 524 | }); | ||
| 525 | } | ||
| 526 | window.onload = function() { | ||
| 527 | // initial call for rendering | ||
| 528 | fetch_and_render(); | ||
| 529 | |||
| 530 | // updates chart every 5 seconds | ||
| 531 | setInterval(function() { | ||
| 532 | fetch_and_render(); | ||
| 533 | }, 5000); | ||
| 534 | } | ||
| 535 | </script> | ||
| 536 | |||
| 537 | <!-- application styles --> | ||
| 538 | <style> | ||
| 539 | body { | ||
| 540 | font: 13px sans-serif; | ||
| 541 | padding: 20px 50px; | ||
| 542 | } | ||
| 543 | .chart-placeholder { | ||
| 544 | border: 2px solid #ccc; | ||
| 545 | width: 100%; | ||
| 546 | user-select: none; | ||
| 547 | } | ||
| 548 | /* chart styles */ | ||
| 549 | .mg-line1-color { | ||
| 550 | stroke: red; | ||
| 551 | stroke-width: 2; | ||
| 552 | } | ||
| 553 | .mg-main-area, .mg-main-line { | ||
| 554 | fill: #fff; | ||
| 555 | } | ||
| 556 | .mg-x-axis line, .mg-y-axis line { | ||
| 557 | stroke: #b3b2b2; | ||
| 558 | stroke-width: 1px; | ||
| 559 | } | ||
| 560 | </style> | ||
| 561 | |||
| 562 | </body> | ||
| 563 | |||
| 564 | </html> | ||
| 565 | ``` | ||
| 566 | |||
| 567 | Now the folder structure should look like: | ||
| 568 | |||
| 569 | _simple-iot-app/_ | ||
| 570 | |||
| 571 | * _webapp.py_ | ||
| 572 | * _data.db_ | ||
| 573 | * _frontend.html_ | ||
| 574 | |||
| 575 | OK, let's now start the application and start feeding it data. | ||
| 576 | |||
| 577 | 1. ```python webapp.py``` | ||
| 578 | 2. connect Arduino MKR1000 to power source | ||
| 579 | 3. open browser and go to ```http://0.0.0.0:5000``` | ||
| 580 | |||
| 581 | If everything goes well you should see new data-points rendered on the chart | ||
| 582 | every 5 seconds. | ||
| 583 | |||
| 584 | If you navigate to ```http://0.0.0.0:5000``` you should see the rendered chart as | ||
| 585 | shown in the picture below. | ||
| 586 | |||
| 587 |  | ||
| 588 | |||
| 589 | Complete application with all the code is available for | ||
| 590 | [download](/assets/iot-application/simple-iot-application.zip). | ||
| 591 | |||
| 592 | ## Conclusion | ||
| 593 | |||
| 594 | I hope this clarifies some aspects of IoT application development. Of course, | ||
| 595 | this is a minimal example and is far from what can be done in real life with a | ||
| 596 | deeper dive into other technologies. | ||
| 597 | |||
| 598 | If you would like to continue exploring the IoT world, here are some interesting | ||
| 599 | resources for you to examine: | ||
| 600 | |||
| 601 | * [Reading Sensors with an Arduino](https://www.allaboutcircuits.com/projects/reading-sensors-with-an-arduino/) | ||
| 602 | * [MQTT 101 – How to Get Started with the lightweight IoT Protocol](http://www.hivemq.com/blog/how-to-get-started-with-mqtt) | ||
| 603 | * [Stream Updates with Server-Sent Events](https://www.html5rocks.com/en/tutorials/eventsource/basics/) | ||
| 604 | * [Internet of Things (IoT) Tutorials](http://www.tutorialspoint.com/internet_of_things/) | ||
| 605 | |||
| 606 | Any comments or additional ideas are welcome in the comments below. | ||
diff --git a/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md b/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md deleted file mode 100644 index 3a62594..0000000 --- a/content/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md +++ /dev/null | |||
| @@ -1,330 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Using DigitalOcean Spaces Object Storage with FUSE | ||
| 3 | url: using-digitalocean-spaces-object-storage-with-fuse.html | ||
| 4 | date: 2018-01-16T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | A couple of months ago [DigitalOcean](https://www.digitalocean.com) introduced a | ||
| 9 | new product called | ||
| 10 | [Spaces](https://blog.digitalocean.com/introducing-spaces-object-storage/) which | ||
| 11 | is Object Storage very similar to Amazon's S3. This really piqued my interest, | ||
| 12 | because this was something I was missing, and even the thought of going over the | ||
| 13 | internet for such functionality was of no interest to me. In line with | ||
| 14 | their previous pricing this is also very cheap, and the pricing page is a | ||
| 15 | no-brainer compared to AWS or GCE. [Prices are clearly and precisely defined and | ||
| 16 | outlined](https://www.digitalocean.com/pricing/). You must love them for that | ||
| 17 | :) | ||
| 18 | |||
| 19 | ## Initial requirements | ||
| 20 | |||
| 21 | * Is it possible to use them as a mounted drive with FUSE? (tl;dr YES) | ||
| 22 | * Will the performance degrade over time and over different sizes of objects? | ||
| 23 | (tl;dr NO&YES) | ||
| 24 | * Can storage be mounted on multiple machines at the same time and be writable? | ||
| 25 | (tl;dr YES) | ||
| 26 | |||
| 27 | > Let me be clear. The scripts I use here are made just for benchmarking and are | ||
| 28 | > not intended to be used in real-life situations. Besides that, I am looking | ||
| 29 | > into using these approaches while adding a caching service in front and then | ||
| 30 | > dumping everything as an object to storage. This could potentially be an | ||
| 31 | > interesting post of its own. But in case you need real-time data without | ||
| 32 | > eventual consistency, please take these scripts as they are: not usable in such | ||
| 33 | > situations. | ||
| 34 | |||
| 35 | ## Is it possible to use them as a mounted drive with FUSE? | ||
| 36 | |||
| 37 | Well, actually they can be used in such a manner. Because Spaces are similar to | ||
| 38 | [AWS S3](https://aws.amazon.com/s3/) many tools are available, and you can find | ||
| 39 | many articles and [Stackoverflow items](https://stackoverflow.com/search?q=s3+fuse). | ||
| 40 | |||
| 41 | To make this work you will need a DigitalOcean account. If you don't have one you | ||
| 42 | will not be able to test this code. But if you have an account, go and | ||
| 43 | [create a new | ||
| 44 | Droplet](https://cloud.digitalocean.com/droplets/new?size=s-1vcpu-1gb&region=ams3&distro=debian&distroImage=debian-9-x64&options=private_networking,install_agent). | ||
| 45 | If you click on this link you will already have Debian 9 preselected with the | ||
| 46 | smallest VM option. | ||
| 47 | |||
| 48 | * Please be sure to add your SSH key, because we will log in to this machine | ||
| 49 | remotely. | ||
| 50 | * If you change your region, please remember which one you chose because we will | ||
| 51 | need this information when we try to mount the space on our machine. | ||
| 52 | |||
| 53 | Instructions on how to use SSH keys and how to set them up are available in the | ||
| 54 | article [How To Use SSH Keys with DigitalOcean | ||
| 55 | Droplets](https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets). | ||
| 56 | |||
| 57 |  | ||
| 58 | |||
| 59 | After we have created the Droplet it's time to create a new Space. This is done | ||
| 60 | by clicking the [Create](https://cloud.digitalocean.com/spaces/new) button (top | ||
| 61 | right corner) and selecting Spaces. Choose a pronounceable ```Unique name``` | ||
| 62 | because we will use it in the examples below. You can choose either Private or | ||
| 63 | Public; it doesn't matter in our case, and you can always change that later. | ||
| 64 | |||
| 65 | When you have created the new Space, we should [generate an Access | ||
| 66 | key](https://cloud.digitalocean.com/settings/api/tokens). This link will guide | ||
| 67 | you to the page where you can generate this key. After you create a new one, | ||
| 68 | please save the provided Key and Secret because the Secret will not be shown again. | ||
| 69 | |||
| 70 |  | ||
| 71 | |||
| 72 | Now that we have new Space and Access key we should SSH into our machine. | ||
| 73 | |||
| 74 | ```bash | ||
| 75 | # replace IP with the ip of your newly created droplet | ||
| 76 | ssh root@IP | ||
| 77 | |||
| 78 | # this will install utilities for mounting storage objects as FUSE | ||
| 79 | apt install s3fs | ||
| 80 | |||
| 81 | # we now need to provide credentials (access key we created earlier) | ||
| 82 | # replace KEY and SECRET with your own credentials but leave the colon between them | ||
| 83 | # we also need to set proper permissions | ||
| 84 | echo "KEY:SECRET" > .passwd-s3fs | ||
| 85 | chmod 600 .passwd-s3fs | ||
| 86 | |||
| 87 | # now we mount space to our machine | ||
| 88 | # replace UNIQUE-NAME with the name you choose earlier | ||
| 89 | # if you choose different region for your space be careful about -ourl option (ams3) | ||
| 90 | s3fs UNIQUE-NAME /mnt/ -ourl=https://ams3.digitaloceanspaces.com -ouse_cache=/tmp | ||
| 91 | |||
| 92 | # now we try to create a file | ||
| 93 | # once you mount it may take a couple of seconds to retrieve data | ||
| 94 | echo "Hello cruel world" > /mnt/hello.txt | ||
| 95 | ``` | ||
| 96 | |||
| 97 | After all this you can return to your browser, go to [DigitalOcean | ||
| 98 | Spaces](https://cloud.digitalocean.com/spaces) and click on your newly created | ||
| 99 | space. If the file hello.txt is present you have successfully mounted the space | ||
| 100 | on your machine and written data to it. | ||
| 101 | |||
| 102 | I chose the same region for my Droplet and my Space but you don't have to; you | ||
| 103 | can use different regions. What this actually does to performance I don't know. | ||
| 104 | |||
| 105 | Additional information on FUSE: | ||
| 106 | |||
| 107 | * [Github project page for s3fs](https://github.com/s3fs-fuse/s3fs-fuse) | ||
| 108 | * [FUSE - Filesystem in Userspace](https://en.wikipedia.org/wiki/Filesystem_in_Userspace) | ||
| 109 | |||
| 110 | ## Will the performance degrade over time and over different sizes of objects? | ||
| 111 | |||
| 112 | For this task I didn't want to just read and write text files or upload images. | ||
| 113 | I actually wanted to figure out whether using something like SQLite is viable | ||
| 114 | in this case. | ||
| 115 | |||
| 116 | ### Measurement experiment 1: File copy | ||
| 117 | |||
| 118 | ```bash | ||
| 119 | # first we create some dummy files at different sizes | ||
| 120 | dd if=/dev/zero of=10KB.dat bs=1024 count=10 #10KB | ||
| 121 | dd if=/dev/zero of=100KB.dat bs=1024 count=100 #100KB | ||
| 122 | dd if=/dev/zero of=1MB.dat bs=1024 count=1024 #1MB | ||
| 123 | dd if=/dev/zero of=10MB.dat bs=1024 count=10240 #10MB | ||
| 124 | |||
| 125 | # now we set time command to only return real | ||
| 126 | TIMEFORMAT=%R | ||
| 127 | |||
| 128 | # now lets test it | ||
| 129 | (time cp 10KB.dat /mnt/) |& tee -a 10KB.results.txt | ||
| 130 | |||
| 131 | # and now we automate | ||
| 132 | # this will perform the same operation 100 times | ||
| 133 | # this will output results into separate files based on object size | ||
| 134 | n=0; while (( n++ < 100 )); do (time cp 10KB.dat /mnt/10KB.$n.dat) |& tee -a 10KB.results.txt; done | ||
| 135 | n=0; while (( n++ < 100 )); do (time cp 100KB.dat /mnt/100KB.$n.dat) |& tee -a 100KB.results.txt; done | ||
| 136 | n=0; while (( n++ < 100 )); do (time cp 1MB.dat /mnt/1MB.$n.dat) |& tee -a 1MB.results.txt; done | ||
| 137 | n=0; while (( n++ < 100 )); do (time cp 10MB.dat /mnt/10MB.$n.dat) |& tee -a 10MB.results.txt; done | ||
| 138 | ``` | ||
| 139 | |||
| 140 | Files of size 100MB were not transferred successfully and ended up displaying an | ||
| 141 | error (cp: failed to close '/mnt/100MB.1.dat': Operation not permitted). | ||
| 142 | |||
| 143 | As I suspected, object size is not really that important. Sadly I don't have the | ||
| 144 | time to test performance over longer periods of time, but if any of you do it, | ||
| 145 | please send me your data. I would be interested in seeing the results. | ||
| 146 | |||
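| | If you do run these benchmarks yourself, a few lines of Python are enough to | ||
| | summarize a results file. This is just a rough sketch (assuming Python 3); the | ||
| | comma-to-dot replacement handles locales where ```time``` prints decimal commas, | ||
| | the same trick the plotting code below uses. | ||
| | | ||
| | ```python | ||
| | # summarize.py - hedged sketch for summarizing a *.results.txt file from above | ||
| | import statistics | ||
| | import sys | ||
| | | ||
| | with open(sys.argv[1]) as fp: | ||
| |     times = [float(line.strip().replace(",", ".")) for line in fp if line.strip()] | ||
| | | ||
| | print("runs:   %d" % len(times)) | ||
| | print("mean:   %.3f s" % statistics.mean(times)) | ||
| | print("median: %.3f s" % statistics.median(times)) | ||
| | print("max:    %.3f s" % max(times)) | ||
| | ``` | ||
| | | ||
| | Run it as ```python3 summarize.py 10KB.results.txt```. | ||
| | | ||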
| 147 | **Here are plotted results** | ||
| 148 | |||
| 149 | You can download [raw result here](/assets/do-fuse/copy-benchmarks.tsv). | ||
| 150 | Measurements are in seconds. | ||
| 151 | |||
| 152 | <script src="//cdn.plot.ly/plotly-latest.min.js"></script> | ||
| 153 | <div id="copy-benchmarks"></div> | ||
| 154 | <script> | ||
| 155 | (function(){ | ||
| 156 | var request = new XMLHttpRequest(); | ||
| 157 | request.open("GET", "/assets/do-fuse/copy-benchmarks.tsv", true); | ||
| 158 | request.onload = function() { | ||
| 159 | if (request.status >= 200 && request.status < 400) { | ||
| 160 | var payload = request.responseText.trim(); | ||
| 161 | var tsv = payload.split("\n"); | ||
| 162 | for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); } | ||
| 163 | var traces = []; | ||
| 164 | var headers = tsv[0]; | ||
| 165 | tsv.shift(); | ||
| 166 | Array.prototype.forEach.call(headers, function(el, idx) { | ||
| 167 | var x = []; | ||
| 168 | var y = []; | ||
| 169 | for (var j=0; j<tsv.length; j++) { | ||
| 170 | x.push(j); | ||
| 171 | y.push(parseFloat(tsv[j][idx].replace(",", "."))); | ||
| 172 | } | ||
| 173 | traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } }); | ||
| 174 | }); | ||
| 175 | var copy = Plotly.newPlot("copy-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 40, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } }, xaxis: { title: "fn(i)", titlefont: { size: 12 } } }); | ||
| 176 | } else { } | ||
| 177 | }; | ||
| 178 | request.onerror = function() { }; | ||
| 179 | request.send(null); | ||
| 180 | })(); | ||
| 181 | </script> | ||
| 182 | |||
| 183 | As far as these tests show, performance is quite stable and predictable, which is | ||
| 184 | fantastic. But this is a small test and spans only a couple of hours, so you | ||
| 185 | should not trust it completely. | ||
| 186 | |||
| 187 | ### Measurement experiment 2: SQLite performance | ||
| 188 | |||
| 189 | I was unable to use a database file directly from the mounted drive, so this is a | ||
| 190 | no-go as I suspected. Instead I executed the code below on a local disk just to | ||
| 191 | get some benchmarks. I repeated the DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL | ||
| 192 | and COMMIT cycle 1000 times, inserting 1000 records each time, to generate | ||
| 193 | statistics. As you can see, the performance of SQLite is quite amazing. You could | ||
| 194 | then potentially just copy the file to the mounted drive and be done with it. | ||
| 195 | |||
| 196 | ```python | ||
| 197 | import time | ||
| 198 | import sqlite3 | ||
| 199 | import sys | ||
| 200 | |||
| 201 | if len(sys.argv) < 4:  # expects DB_PATH, NUM_RECORDS and REPEAT | ||
| 202 | print("usage: python sqlite-benchmark.py DB_PATH NUM_RECORDS REPEAT") | ||
| 203 | exit() | ||
| 204 | |||
| 205 | def data_iter(x): | ||
| 206 | for i in range(x): | ||
| 207 | yield "m" + str(i), "f" + str(i*i) | ||
| 208 | |||
| 209 | header_line = "%s\t%s\t%s\t%s\t%s\n" % ("DROPTABLE", "CREATETABLE", "INSERTMANY", "FETCHALL", "COMMIT") | ||
| 210 | with open("sqlite-benchmarks.tsv", "w") as fp: | ||
| 211 | fp.write(header_line) | ||
| 212 | |||
| 213 | start_time = time.time() | ||
| 214 | conn = sqlite3.connect(sys.argv[1]) | ||
| 215 | c = conn.cursor() | ||
| 216 | end_time = time.time() | ||
| 217 | result_time = CONNECT = end_time - start_time | ||
| 218 | print("CONNECT: %g seconds" % (result_time)) | ||
| 219 | |||
| 220 | start_time = time.time() | ||
| 221 | c.execute("PRAGMA journal_mode=WAL") | ||
| 222 | c.execute("PRAGMA temp_store=MEMORY") | ||
| 223 | c.execute("PRAGMA synchronous=OFF") | ||
| | end_time = time.time() | ||
| 224 | result_time = PRAGMA = end_time - start_time | ||
| 225 | print("PRAGMA: %g seconds" % (result_time)) | ||
| 226 | |||
| 227 | for i in range(int(sys.argv[3])): | ||
| 228 | print("#%i" % (i)) | ||
| 229 | |||
| 230 | start_time = time.time() | ||
| 231 | c.execute("drop table if exists test") | ||
| 232 | end_time = time.time() | ||
| 233 | result_time = DROPTABLE = end_time - start_time | ||
| 234 | print("DROPTABLE: %g seconds" % (result_time)) | ||
| 235 | |||
| 236 | start_time = time.time() | ||
| 237 | c.execute("create table if not exists test(a,b)") | ||
| 238 | end_time = time.time() | ||
| 239 | result_time = CREATETABLE = end_time - start_time | ||
| 240 | print("CREATETABLE: %g seconds" % (result_time)) | ||
| 241 | |||
| 242 | start_time = time.time() | ||
| 243 | c.executemany("INSERT INTO test VALUES (?, ?)", data_iter(int(sys.argv[2]))) | ||
| 244 | end_time = time.time() | ||
| 245 | result_time = INSERTMANY = end_time - start_time | ||
| 246 | print("INSERTMANY: %g seconds" % (result_time)) | ||
| 247 | |||
| 248 | start_time = time.time() | ||
| 249 | c.execute("select count(*) from test") | ||
| 250 | res = c.fetchall() | ||
| 251 | end_time = time.time() | ||
| 252 | result_time = FETCHALL = end_time - start_time | ||
| 253 | print("FETCHALL: %g seconds" % (result_time)) | ||
| 254 | |||
| 255 | start_time = time.time() | ||
| 256 | conn.commit() | ||
| 257 | end_time = time.time() | ||
| 258 | result_time = COMMIT = end_time - start_time | ||
| 259 | print("COMMIT: %g seconds" % (result_time)) | ||
| 260 | |||
| 261 | |||
| 262 | log_line = "%f\t%f\t%f\t%f\t%f\n" % (DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT) | ||
| 263 | with open("sqlite-benchmarks.tsv", "a") as fp: | ||
| 264 | fp.write(log_line) | ||
| 265 | |||
| 266 | start_time = time.time() | ||
| 267 | conn.close() | ||
| 268 | end_time = time.time() | ||
| 269 | result_time = CLOSE = end_time - start_time | ||
| 270 | print("CLOSE: %g seconds" % (result_time)) | ||
| 271 | ``` | ||
| 272 | |||
| 273 | You can download the [raw results here](/assets/do-fuse/sqlite-benchmarks.tsv). | ||
| 274 | And again, these results were obtained on local block storage and do not | ||
| 275 | represent the capabilities of object storage. With my current approach and the | ||
| 276 | state of the test code, such measurements cannot be done. I would need to make | ||
| 277 | the Python code much more robust and check locking etc. | ||
| 278 | |||
| 279 | <div id="sqlite-benchmarks"></div> | ||
| 280 | <script> | ||
| 281 | (function(){ | ||
| 282 | var request = new XMLHttpRequest(); | ||
| 283 | request.open("GET", "/assets/do-fuse/sqlite-benchmarks.tsv", true); | ||
| 284 | request.onload = function() { | ||
| 285 | if (request.status >= 200 && request.status < 400) { | ||
| 286 | var payload = request.responseText.trim(); | ||
| 287 | var tsv = payload.split("\n"); | ||
| 288 | for (var i=0; i<tsv.length; i++) { tsv[i] = tsv[i].split("\t"); } | ||
| 289 | var traces = []; | ||
| 290 | var headers = tsv[0]; | ||
| 291 | tsv.shift(); | ||
| 292 | Array.prototype.forEach.call(headers, function(el, idx) { | ||
| 293 | var x = []; | ||
| 294 | var y = []; | ||
| 295 | for (var j=0; j<tsv.length; j++) { | ||
| 296 | x.push(j); | ||
| 297 | y.push(parseFloat(tsv[j][idx].replace(",", "."))); | ||
| 298 | } | ||
| 299 | traces.push({ x: x, y: y, type: "scatter", name: el, line: { width: 1, shape: "spline" } }); | ||
| 300 | }); | ||
| 301 | var sqlite = Plotly.newPlot("sqlite-benchmarks", traces, { legend: {"orientation": "h"}, height: 400, margin: { l: 50, r: 0, b: 20, t: 30, pad: 0 }, yaxis: { title: "execution time in seconds", titlefont: { size: 12 } } }); | ||
| 302 | } else { } | ||
| 303 | }; | ||
| 304 | request.onerror = function() { }; | ||
| 305 | request.send(null); | ||
| 306 | })(); | ||
| 307 | </script> | ||
| 308 | |||
| 309 | ## Can storage be mounted on multiple machines at the same time and be writable? | ||
| 310 | |||
| 311 | Well, this one didn't take long to test. And the answer is **YES**. I mounted the | ||
| 312 | space on both machines and measured the same performance on both. But because a | ||
| 313 | file is downloaded before a write and then uploaded when complete, there could | ||
| 314 | potentially be problems if another process is trying to access the same | ||
| 315 | file. | ||
| 316 | |||
| 317 | ## Observations and conclusion | ||
| 318 | |||
| 319 | Using Spaces in this way makes it easier to access and manage files. But besides | ||
| 320 | that, you would need to write additional code to make this approach play nice | ||
| 321 | with your applications. | ||
| 322 | |||
| 323 | Nevertheless, this was extremely simple to set up and use, and it is just | ||
| 324 | another excellent product in the DigitalOcean product line. I found this exercise | ||
| 325 | very valuable and am thinking about implementing some sort of mechanism for | ||
| 326 | SQLite, so data can be stored on Spaces and accessed by many VMs. For a project | ||
| 327 | where data doesn't need to be accessible in real time and can be a couple of | ||
| 328 | minutes old, this would be very interesting. If any of you find this | ||
| 329 | proposal interesting, please write in the comment box below or shoot me an email | ||
| 330 | and I will keep you posted. | ||
diff --git a/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md b/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md deleted file mode 100644 index f0343ae..0000000 --- a/content/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md +++ /dev/null | |||
| @@ -1,410 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Encoding binary data into DNA sequence | ||
| 3 | url: encoding-binary-data-into-dna-sequence.html | ||
| 4 | date: 2019-01-03T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## Initial thoughts | ||
| 9 | |||
| 10 | Imagine a world where you could go outside and take a leaf from a tree and put | ||
| 11 | it through your personal DNA sequencer and get data like music, videos or | ||
| 12 | computer programs from it. Well, this is all possible now. It has not been done | ||
| 13 | on a large scale because it is quite expensive to synthesize DNA strands, but | ||
| 14 | it's possible. | ||
| 15 | |||
| 16 | Encoding data into a DNA sequence is a relatively simple process once you | ||
| 17 | understand the relationship between binary data and nucleotides, and scientists | ||
| 18 | have been making large leaps in this field in order to provide a viable long-term | ||
| 19 | storage solution for our data that would potentially survive our species in case | ||
| 20 | of a global disaster. We could imprint all the world's knowledge into plants and | ||
| 21 | ensure the survival of our knowledge. | ||
| 22 | |||
| 23 | A more optimistic use for this technology would be easier storage of the ever | ||
| 24 | growing data we produce every day. Once machines for sequencing DNA become fast | ||
| 25 | enough and cheaper, this could mean the next evolution of storing data and | ||
| 26 | abandoning classical hard and solid state drives in data warehouses. | ||
| 27 | |||
| 28 | As we currently stand this is still not viable but it is quite an amazing and | ||
| 29 | cool technology. | ||
| 30 | |||
| 31 | My interests in this field are purely in the encoding processes and experimental | ||
| 32 | testing, mainly because I don't have access to these expensive machines. My | ||
| 33 | initial goal was to create a toolkit that can be used by everybody to encode | ||
| 34 | their data into a proper DNA sequence. | ||
| 35 | |||
| 36 | ## Glossary | ||
| 37 | |||
| 38 | **deoxyribose** A five-carbon sugar molecule with a hydrogen atom rather than a | ||
| 39 | hydroxyl group in the 2′ position; the sugar component of DNA nucleotides. | ||
| 40 | |||
| 41 | **double helix** The molecular shape of DNA in which two strands of nucleotides | ||
| 42 | wind around each other in a spiral shape. | ||
| 43 | |||
| 44 | **nitrogenous base** A nitrogen-containing molecule that acts as a base; often | ||
| 45 | referring to one of the purine or pyrimidine components of nucleic acids. | ||
| 46 | |||
| 47 | **phosphate group** A molecular group consisting of a central phosphorus atom | ||
| 48 | bound to four oxygen atoms. | ||
| 49 | |||
| 50 | **RGB** The RGB color model is an additive color model in which red, green and | ||
| 51 | blue light are added together in various ways to reproduce a broad array of | ||
| 52 | colors. | ||
| 53 | |||
| 54 | **GCC** The GNU Compiler Collection is a compiler system produced by the GNU | ||
| 55 | Project supporting various programming languages. | ||
| 56 | |||
| 57 | ## Data encoding | ||
| 58 | |||
| 59 | **TL;DR:** Encoding involves the use of a code to change original data into a | ||
| 60 | form that can be used by an external process. | ||
| 61 | |||
| 62 | Encoding is the process of converting data into a format required for a number | ||
| 63 | of information processing needs, including: | ||
| 64 | |||
| 65 | - Program compiling and execution | ||
| 66 | - Data transmission, storage and compression/decompression | ||
| 67 | - Application data processing, such as file conversion | ||
| 68 | |||
| 69 | Encoding can have two meanings: | ||
| 70 | |||
| 71 | - In computer technology, encoding is the process of applying a specific code, | ||
| 72 | such as letters, symbols and numbers, to data for conversion into an | ||
| 73 | equivalent cipher. | ||
| 74 | - In electronics, encoding refers to analog to digital conversion. | ||
| 75 | |||
| 76 | ## Quick history of DNA | ||
| 77 | |||
| 78 | - **1869** - Friedrich Miescher identifies "nuclein". | ||
| 79 | - **1900s** - The Eugenics Movement. | ||
| 80 | - **1900** – Mendel's theories are rediscovered by researchers. | ||
| 81 | - **1944** - Oswald Avery identifies DNA as the 'transforming principle'. | ||
| 82 | - **1952** - Rosalind Franklin photographs crystallized DNA fibres. | ||
| 83 | - **1953** - James Watson and Francis Crick discover the double helix structure of DNA. | ||
| 84 | - **1965** - Marshall Nirenberg is the first person to sequence the bases in each codon. | ||
| 85 | - **1983** - Huntington's disease is the first mapped genetic disease. | ||
| 86 | - **1990** - The Human Genome Project begins. | ||
| 87 | - **1995** - Haemophilus Influenzae is the first bacterium genome sequenced. | ||
| 88 | - **1996** - Dolly the sheep is cloned. | ||
| 89 | - **1999** - First human chromosome is decoded. | ||
| 90 | - **2000** – Genetic code of the fruit fly is decoded. | ||
| 91 | - **2002** – Mouse is the first mammal to have its genome decoded. | ||
| 92 | - **2003** – The Human Genome Project is completed. | ||
| 93 | - **2013** – DNA Worldwide and Eurofins Forensic discover identical twins have differences in their genetic makeup. | ||
| 94 | |||
| 95 | ## What is DNA? | ||
| 96 | |||
| 97 | Deoxyribonucleic acid, a self-replicating material which is **present in nearly | ||
| 98 | all living organisms** as the main constituent of chromosomes. It is the | ||
| 99 | **carrier of genetic information**. | ||
| 100 | |||
| 101 | > The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, | ||
| 102 | > the carbon in our apple pies were made in the interiors of collapsing stars. | ||
| 103 | > We are made of starstuff. | ||
| 104 | > **-- Carl Sagan, Cosmos** | ||
| 105 | |||
| 106 | The nucleotide in DNA consists of a sugar (deoxyribose), one of four bases | ||
| 107 | (cytosine (C), thymine (T), adenine (A), guanine (G)), and a phosphate. | ||
| 108 | Cytosine and thymine are pyrimidine bases, while adenine and guanine are purine | ||
| 109 | bases. The sugar and the base together are called a nucleoside. | ||
| 110 | |||
| 111 |  | ||
| 112 | |||
| 113 | *DNA (a) forms a double stranded helix, and (b) adenine pairs with thymine and | ||
| 114 | cytosine pairs with guanine. (credit a: modification of work by Jerome Walker, | ||
| 115 | Dennis Myts)* | ||
| 116 | |||
| 117 | ## Encode binary data into DNA sequence | ||
| 118 | |||
| 119 | As an input file you can use any file you want: | ||
| 120 | |||
| 121 | - ASCII files, | ||
| 122 | - Compiled programs, | ||
| 123 | - Multimedia files (MP3, MP4, MVK, etc), | ||
| 124 | - Images, | ||
| 125 | - Database files, | ||
| 126 | - etc. | ||
| 127 | |||
| 128 | Note: If you would copy all the bytes from RAM to file or pipe data to file you | ||
| 129 | could encode also this data as long as you provide file pointer to the encoder. | ||
| 130 | |||
| 131 | ### Basic Encoding | ||
| 132 | |||
| 133 | As already mentioned, the Basic Encoding is based on a simple mapping. DNA | ||
| 134 | is composed of 4 nucleotides (Adenine, Cytosine, Guanine, Thymine; usually | ||
| 135 | referred to by their first letters), so a single nucleotide can encode | ||
| 136 | | ||
| 137 | $$ \log_2(4) = \log_2(2^2) = 2 \text{ bits.} $$ | ||
| 138 | | ||
| 139 | In this way, we are able to use the 4 bases that compose the DNA strand to | ||
| 140 | encode each byte of data: 8 bits map to exactly 4 nucleotides. | ||
| 141 | |||
| 142 | | Two bits | Nucleotides | | ||
| 143 | | -------- | ---------------- | | ||
| 144 | | 00 | **A** (Adenine) | | ||
| 145 | | 01 | **G** (Guanine) | | ||
| 146 | | 10 | **C** (Cytosine) | | ||
| 147 | | 11 | **T** (Thymine) | | ||
| 148 | |||
| 149 | With this in mind we can encode any data simply by converting pairs of bits to | ||
| 150 | nucleotides. | ||
| 151 | |||
| 152 | ```python | ||
| 153 | { Algorithm 1: Naive byte array to DNA encode } | ||
| 154 | procedure EncodeToDNASequence(f) string | ||
| 155 | begin | ||
| 156 | enc string | ||
| 157 | while not eof(f) do | ||
| 158 | c byte := buffer[0] { Read 1 byte from buffer } | ||
| 159 | bin integer := sprintf('08b', c) { Convert to string binary } | ||
| 160 | for e in range[0, 2, 4, 6] do | ||
| 161 | if e[0] == 48 and e[1] == 48 then { 0x00 - A (Adenine) } | ||
| 162 | enc += 'A' | ||
| 163 | else if e[0] == 48 and e[1] == 49 then { 0x01 - G (Guanine) } | ||
| 164 | enc += 'G' | ||
| 165 | else if e[0] == 49 and e[1] == 48 then { 0x10 - C (Cytosine) } | ||
| 166 | enc += 'C' | ||
| 167 | else if e[0] == 49 and e[1] == 49 then { 0x11 - T (Thymine) } | ||
| 168 | enc += 'T' | ||
| 169 | return enc { Return DNA sequence } | ||
| 170 | end | ||
| 171 | ``` | ||
| 172 | |||
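| | If you want to play with the mapping without the toolkit, here is a small | ||
| | runnable sketch in Python 3. It is only my illustration of the Basic Encoding, | ||
| | not the toolkit's implementation. With the table above, the bytes of "How" | ||
| | encode to GACAGCTTGTGT, which matches the beginning of the quote.fa example | ||
| | further down. | ||
| | | ||
| | ```python | ||
| | # Hedged sketch of the Basic Encoding; not the dnae-encode implementation | ||
| | MAPPING = {"00": "A", "01": "G", "10": "C", "11": "T"} | ||
| | | ||
| | def encode_to_dna(data): | ||
| |     out = [] | ||
| |     for byte in data:                   # Python 3: iterating bytes yields integers | ||
| |         bits = format(byte, "08b")      # e.g. 'H' (0x48) -> "01001000" | ||
| |         for i in range(0, 8, 2): | ||
| |             out.append(MAPPING[bits[i:i + 2]])  # two bits per nucleotide | ||
| |     return "".join(out) | ||
| | | ||
| | print(encode_to_dna(b"How"))            # -> GACAGCTTGTGT | ||
| | ``` | ||
| | | ||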
| 173 | Another encoding would be **Goldman encoding**. Using this encoding helps with | ||
| 174 | Nonsense mutation (amino acids replaced by a stop codon) that occurs and is the | ||
| 175 | most problematic during translation because it leads to truncated amino acid | ||
| 176 | sequences, which in turn results in truncated proteins. | ||
| 177 | |||
| 178 | [Where to store big data? In DNA: Nick Goldman at TEDxPrague](https://www.youtube.com/watch?v=a4PiGWNsIEU) | ||
| 179 | |||
| 180 | ### FASTA file format | ||
| 181 | |||
| 182 | In bioinformatics, FASTA format is a text-based format for representing either | ||
| 183 | nucleotide sequences or peptide sequences, in which nucleotides or amino acids | ||
| 184 | are represented using single-letter codes. The format also allows for sequence | ||
| 185 | names and comments to precede the sequences. The format originates from the | ||
| 186 | FASTA software package, but has now become a standard in the field of | ||
| 187 | bioinformatics. | ||
| 188 | |||
| 189 | The first line in a FASTA file started either with a ">" (greater-than) symbol | ||
| 190 | or, less frequently, a ";" (semicolon) was taken as a comment. Subsequent lines | ||
| 191 | starting with a semicolon would be ignored by software. Since the only comment | ||
| 192 | used was the first, it quickly became used to hold a summary description of the | ||
| 193 | sequence, often starting with a unique library accession number, and with time | ||
| 194 | it has become commonplace to always use ">" for the first line and to not use | ||
| 195 | ";" comments (which would otherwise be ignored). | ||
| 196 | |||
| 197 | ``` | ||
| 198 | ;LCBO - Prolactin precursor - Bovine | ||
| 199 | ; a sample sequence in FASTA format | ||
| 200 | MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS | ||
| 201 | EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL | ||
| 202 | VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED | ||
| 203 | ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC* | ||
| 204 | |||
| 205 | >MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken | ||
| 206 | ADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID | ||
| 207 | FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA | ||
| 208 | DIDGDGQVNYEEFVQMMTAK* | ||
| 209 | |||
| 210 | >gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus] | ||
| 211 | LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV | ||
| 212 | EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG | ||
| 213 | LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL | ||
| 214 | GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX | ||
| 215 | IENY | ||
| 216 | ``` | ||
| 217 | |||
| 218 | FASTA format was extended by [FASTQ](https://en.wikipedia.org/wiki/FASTQ_format) | ||
| 219 | format from the [Sanger Centre](https://www.sanger.ac.uk/) in Cambridge. | ||
| 220 | |||
| 221 | ### PNG encoded DNA sequence | ||
| 222 | |||
| 223 | | Nucleotides | RGB | Color name | | ||
| 224 | | ------------ | ----------- | ---------- | | ||
| 225 | | A ➞ Adenine | (0,0,255) | Blue | | ||
| 226 | | G ➞ Guanine | (0,100,0) | Green | | ||
| 227 | | C ➞ Cytosine | (255,0,0) | Red | | ||
| 228 | | T ➞ Thymine | (255,255,0) | Yellow | | ||
| 229 | |||
| 230 | With this in mind we can write a simple algorithm that creates a PNG | ||
| 231 | representation of a DNA sequence. | ||
| 232 | |||
| 233 | ```python | ||
| 234 | { Algorithm 2: Naive DNA to PNG encode from FASTA file } | ||
| 235 | procedure EncodeDNASequenceToPNG(f) | ||
| 236 | begin | ||
| 237 | i image | ||
| 238 | while not eof(f) do | ||
| 239 | c char := buffer[0] { Read 1 char from buffer } | ||
| 240 | case c of | ||
| 241 | 'A': color := RGB(0, 0, 255) { Blue } | ||
| 242 | 'G': color := RGB(0, 100, 0) { Green } | ||
| 243 | 'C': color := RGB(255, 0, 0) { Red } | ||
| 244 | 'T': color := RGB(255, 255, 0) { Yellow } | ||
| 245 | drawRect(i, [x, y], color) | ||
| 246 | save(i) { Save PNG image } | ||
| 247 | end | ||
| 248 | ``` | ||
| 249 | |||
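| | As a quick illustration, here is a rough runnable version of the same idea in | ||
| | Python using the Pillow package (```pip install Pillow```). This is only my | ||
| | sketch, not the dnae-png tool: it draws one pixel per nucleotide instead of | ||
| | configurable blocks. | ||
| | | ||
| | ```python | ||
| | # Hedged sketch of the DNA-to-PNG idea with Pillow; not the dnae-png implementation | ||
| | from PIL import Image | ||
| | | ||
| | COLORS = {"A": (0, 0, 255), "G": (0, 100, 0), "C": (255, 0, 0), "T": (255, 255, 0)} | ||
| | | ||
| | def dna_to_png(sequence, path, width=64): | ||
| |     bases = [c for c in sequence if c in COLORS]  # skip FASTA headers and newlines | ||
| |     height = (len(bases) + width - 1) // width | ||
| |     img = Image.new("RGB", (width, height)) | ||
| |     for i, base in enumerate(bases): | ||
| |         img.putpixel((i % width, i // width), COLORS[base]) | ||
| |     img.save(path) | ||
| | | ||
| | dna_to_png("GACAGCTTGTGTACAAGTGTGCTTGCTCGCGA", "out.png") | ||
| | ``` | ||
| | | ||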
| 250 | ## Encoding text file in practice | ||
| 251 | |||
| 252 | In this example we will take a simple text file as our input stream for | ||
| 253 | encoding. This file contains a quote from Niels Bohr and is saved as a txt file. | ||
| 254 | |||
| 255 | > How wonderful that we have met with a paradox. Now we have some hope of | ||
| 256 | > making progress. | ||
| 257 | > ― Niels Bohr | ||
| 258 | |||
| 259 | First we encode text file into FASTA file. | ||
| 260 | |||
| 261 | ```bash | ||
| 262 | ./dnae-encode -i quote.txt -o quote.fa | ||
| 263 | 2019/01/10 00:38:29 Gathering input file stats | ||
| 264 | 2019/01/10 00:38:29 Starting encoding ... | ||
| 265 | 106 B / 106 B [==================================] 100.00% 0s | ||
| 266 | 2019/01/10 00:38:29 Saving to FASTA file ... | ||
| 267 | 2019/01/10 00:38:29 Output FASTA file length is 438 B | ||
| 268 | 2019/01/10 00:38:29 Process took 987.263µs | ||
| 269 | 2019/01/10 00:38:29 Done ... | ||
| 270 | ``` | ||
| 271 | |||
| 272 | Output of `quote.fa` file contains the encoded DNA sequence in ASCII format. | ||
| 273 | |||
| 274 | ``` | ||
| 275 | >SEQ1 | ||
| 276 | GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA | ||
| 277 | GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA | ||
| 278 | ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA | ||
| 279 | ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT | ||
| 280 | GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT | ||
| 281 | GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC | ||
| 282 | AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC | ||
| 283 | AACC | ||
| 284 | ``` | ||
| 285 | |||
| 286 | Then we encode FASTA file from previous operation to encode this data into PNG. | ||
| 287 | |||
| 288 | ```bash | ||
| 289 | ./dnae-png -i quote.fa -o quote.png | ||
| 290 | 2019/01/10 00:40:09 Gathering input file stats ... | ||
| 291 | 2019/01/10 00:40:09 Deconstructing FASTA file ... | ||
| 292 | 2019/01/10 00:40:09 Compositing image file ... | ||
| 293 | 424 / 424 [==================================] 100.00% 0s | ||
| 294 | 2019/01/10 00:40:09 Saving output file ... | ||
| 295 | 2019/01/10 00:40:09 Output image file length is 1.1 kB | ||
| 296 | 2019/01/10 00:40:09 Process took 19.036117ms | ||
| 297 | 2019/01/10 00:40:09 Done ... | ||
| 298 | ``` | ||
| 299 | |||
| 300 | After encoding into PNG format this file looks like this. | ||
| 301 | |||
| 302 |  | ||
| 303 | |||
| 304 | The larger the input stream is, the larger the PNG file will be. | ||
| 305 | |||
| 306 | A basic Hello World C program compiled with | ||
| 307 | [GCC](https://www.gnu.org/software/gcc/) would [look | ||
| 308 | like this](/assets/dna-sequence/sample.png). | ||
| 309 | |||
| 310 | ```c | ||
| 311 | // gcc -O3 -o sample sample.c | ||
| 312 | #include <stdio.h> | ||
| 313 | |||
| 314 | int main(void) { | ||
| 315 | printf("Hello, world!\n"); | ||
| 316 | return 0; | ||
| 317 | } | ||
| 318 | ``` | ||
| 319 | |||
| 320 | ## Toolkit for encoding data | ||
| 321 | |||
| 322 | I have created a toolkit with two main programs: | ||
| 323 | |||
| 324 | - dnae-encode (encodes file into FASTA file) | ||
| 325 | - dnae-png (encodes FASTA file into PNG) | ||
| 326 | |||
| 327 | Toolkit with full source code is available on | ||
| 328 | [github.com/mitjafelicijan/dna-encoding](https://github.com/mitjafelicijan/dna-encoding). | ||
| 329 | |||
| 330 | ### dnae-encode | ||
| 331 | |||
| 332 | ```bash | ||
| 333 | > ./dnae-encode --help | ||
| 334 | usage: dnae-encode --input=INPUT [<flags>] | ||
| 335 | |||
| 336 | A command-line application that encodes file into DNA sequence. | ||
| 337 | |||
| 338 | Flags: | ||
| 339 | --help Show context-sensitive help (also try --help-long and --help-man). | ||
| 340 | -i, --input=INPUT Input file (ASCII or binary) which will be encoded into DNA sequence. | ||
| 341 | -o, --output="out.fa" Output file which stores DNA sequence in FASTA format. | ||
| 342 | -s, --sequence=SEQ1 The description line (defline) or header/identifier line, gives a name and/or a unique identifier for the sequence. | ||
| 343 | -c, --columns=60 Row characters length (no more than 120 characters). Devices preallocate fixed line sizes in software. | ||
| 344 | --version Show application version. | ||
| 345 | ``` | ||
| 346 | |||
| 347 | ### dnae-png | ||
| 348 | |||
| 349 | ```bash | ||
| 350 | > ./dnae-png --help | ||
| 351 | usage: dnae-png --input=INPUT [<flags>] | ||
| 352 | |||
| 353 | A command-line application that encodes FASTA file into PNG image. | ||
| 354 | |||
| 355 | Flags: | ||
| 356 | --help Show context-sensitive help (also try --help-long and --help-man). | ||
| 357 | -i, --input=INPUT Input FASTA file which will be encoded into PNG image. | ||
| 358 | -o, --output="out.png" Output file in PNG format that represents DNA sequence in graphical way. | ||
| 359 | -s, --size=10 Size of pairings of DNA bases on image in pixels (lower resolution lower file size). | ||
| 360 | --version Show application version. | ||
| 361 | ``` | ||
| 362 | |||
| 363 | ## Benchmarks | ||
| 364 | |||
| 365 | First we generate some binary sample data with dd. | ||
| 366 | |||
| 367 | ```bash | ||
| 368 | dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=1KB.bin bs=1KB count=1 iflag=fullblock | ||
| 369 | ``` | ||
| 370 | |||
| 371 | Our freshly generated 1KB file looks something like this (it's full of garbage | ||
| 372 | data, as intended). | ||
| 373 | |||
| 374 |  | ||
| 375 | |||
| 376 | We create following binary files: | ||
| 377 | |||
| 378 | - 1KB.bin | ||
| 379 | - 10KB.bin | ||
| 380 | - 100KB.bin | ||
| 381 | - 1MB.bin | ||
| 382 | - 10MB.bin | ||
| 383 | - 100MB.bin | ||
| 384 | |||
| 385 | After this we create FASTA files for all the binary files by encoding them | ||
| 386 | into DNA sequence. | ||
| 387 | |||
| 388 | ```bash | ||
| 389 | ./dnae-encode -i 100MB.bin -o 100MB.fa | ||
| 390 | ``` | ||
| 391 | |||
| 392 | Then we GZIP all the FASTA files to see how much they can be compressed. | ||
| 393 | |||
| 394 | ```bash | ||
| 395 | gzip -9 < 10MB.fa > 10MB.fa.gz | ||
| 396 | ``` | ||
| 397 | |||
| 398 | [Download ODS file with benchmarks](/dna-sequence/benchmarks.ods). | ||
| 399 | |||
| 400 |  | ||
| 401 | |||
| 402 |  | ||
| 403 | |||
| 404 | ## References | ||
| 405 | |||
| 406 | - https://www.techopedia.com/definition/948/encoding | ||
| 407 | - https://www.dna-worldwide.com/resource/160/history-dna-timeline | ||
| 408 | - https://opentextbc.ca/biology/chapter/9-1-the-structure-of-dna/ | ||
| 409 | - https://arxiv.org/abs/1801.04774 | ||
| 410 | - https://en.wikipedia.org/wiki/FASTA_format | ||
diff --git a/content/posts/2019-10-14-simplifying-and-reducing-clutter.md b/content/posts/2019-10-14-simplifying-and-reducing-clutter.md deleted file mode 100644 index 97ddb34..0000000 --- a/content/posts/2019-10-14-simplifying-and-reducing-clutter.md +++ /dev/null | |||
| @@ -1,58 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Simplifying and reducing clutter in my life and work | ||
| 3 | url: simplifying-and-reducing-clutter.html | ||
| 4 | date: 2019-10-14T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | I recently moved my main working machine back from a Hackintosh to Linux. Well, | ||
| 9 | the experiment was interesting and I have done some great work on macOS, but it | ||
| 10 | was time to move back. | ||
| 11 | |||
| 12 | I actually really missed Linux. The simplicity of `apt-get` or just the amount | ||
| 13 | of software that exists for Linux should be a no-brainer. I spent most of my | ||
| 14 | time on macOS finding solutions to make things work. Using | ||
| 15 | [Brew](https://brew.sh/) was just a horrible experience and far from package | ||
| 16 | managers of Linux. At least they managed to get that `sudo` debacle sorted. | ||
| 17 | |||
| 18 | Not all was bad. macOS in general was a perfectly good environment. Things like | ||
| 19 | Docker and tooling like this worked without any hiccups. My normal tools like | ||
| 20 | coding IDE worked flawlessly and the whole look and feel is just superb. I have | ||
| 21 | been using a MacBook Air for a couple of years so I was used to the system but never | ||
| 22 | as a daily driver. | ||
| 23 | |||
| 24 | One of the things I did after I installed Linux back on my machine was cleaning | ||
| 25 | up my Dropbox folder. I have everything on Dropbox. Even projects folder. I | ||
| 26 | write code for living so my whole life revolves around couple of megs of code | ||
| 27 | (with assets). So it's not like I have huge files on my machine. I don't have | ||
| 28 | movies or music or pictures on my PC. All of that stuff is in cloud. I use | ||
| 29 | Google music and I have Netflix account which is more than enough for me. | ||
| 30 | |||
| 31 | I also went and deleted some of the repositories on my GitHub account. I have | ||
| 32 | deleted more code than I have deployed. People find this strange, but for me | ||
| 33 | deleting something feels so cathartic, and it also forces me to write better code | ||
| 34 | the next time around when I am faced with a similar problem. That was a huge | ||
| 35 | relief, if I am being totally honest. | ||
| 36 | |||
| 37 | The next step was to do something with my webpage. I had been using some scripts | ||
| 38 | I wrote a while ago to generate static pages from markdown source posts. I kept | ||
| 39 | adding and adding stuff on top of it and it became a source of | ||
| 40 | frustration. And this is just a simple blog, yet I was using gulp and npm. | ||
| 41 | Anyway, after a couple of hours of searching and testing static generators I | ||
| 42 | found an interesting one, | ||
| 43 | [https://github.com/piranha/gostatic](https://github.com/piranha/gostatic), and I | ||
| 44 | just decided to use it. It was the only one that had a simple templating | ||
| 45 | engine, not that I really need one. The others had this convoluted way of trying | ||
| 46 | to solve everything and in the end just required a bigger learning curve than I | ||
| 47 | was ready to take on. So I deleted a couple of old posts, simplified the HTML, | ||
| 48 | trashed most of the CSS and went with | ||
| 49 | [https://motherfuckingwebsite.com/](https://motherfuckingwebsite.com/) | ||
| 50 | aesthetics. Yeah, the previous site was more visually stimulating, but all I | ||
| 51 | really care about at this point is the content. And the Times New Roman font is | ||
| 52 | kind of awesome. | ||
| 53 | |||
| 54 | I stopped working on most of my projects in the past couple of months because | ||
| 55 | the overhead was just too insane. There comes a point when you stretch yourself | ||
| 56 | too thin, and then you stop progressing, and with that comes dissatisfaction. | ||
| 57 | |||
| 58 | So that's about it. Moving forward, minimal style. | ||
diff --git a/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md b/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md deleted file mode 100644 index e7324bb..0000000 --- a/content/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md +++ /dev/null | |||
| @@ -1,107 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Using sentiment analysis for clickbait detection in RSS feeds | ||
| 3 | url: using-sentiment-analysis-for-clickbait-detection-in-rss-feeds.html | ||
| 4 | date: 2019-10-19T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## Initial thoughts | ||
| 9 | |||
| 10 | One of the things that has interested me for a while now is whether major, | ||
| 11 | well-established news sites use clickbait titles to drive additional traffic to | ||
| 12 | their sites and generate additional impressions. | ||
| 13 | |||
| 14 | The goal is to see how article titles and the actual content of articles differ | ||
| 15 | from each other, and whether the titles are clickbait. | ||
| 16 | |||
| 17 | ## Preparing and cleaning data | ||
| 18 | |||
| 19 | For this example I opted to just use the RSS feed of a news website and decided | ||
| 20 | to go with [The Guardian](https://www.theguardian.com) World news. This gives us | ||
| 21 | limited data (~40 articles), and the description (the actual content) is | ||
| 22 | trimmed, so it doesn't fully reflect the article contents. | ||
| 23 | |||
| 24 | To get better content I could use web scraping, treating the RSS feed as a link | ||
| 25 | list and fetching contents directly from the website, but for this simple | ||
| 26 | example this will suffice. | ||
| 27 | |||
| 28 | There are a couple of requirements we need to install before we continue: | ||
| 29 | |||
| 30 | - `pip3 install feedparser` (parses RSS feed from url) | ||
| 31 | - `pip3 install vaderSentiment` (does sentiment polarity analysis) | ||
| 32 | - `pip3 install matplotlib` (plots chart of results) | ||
| 33 | |||
| 34 | So first we need to fetch the RSS data and sanitize the HTML content of the description. | ||
| 35 | |||
| 36 | ```python | ||
| 37 | import re | ||
| 38 | import feedparser | ||
| 39 | |||
| 40 | feed_url = "https://www.theguardian.com/world/rss" | ||
| 41 | feed = feedparser.parse(feed_url) | ||
| 42 | |||
| 43 | # sanitize html | ||
| 44 | for item in feed.entries: | ||
| 45 | item.description = re.sub('<[^<]+?>', '', item.description) | ||
| 46 | ``` | ||
| 47 | |||
| 48 | ## Perform sentiment analysis | ||
| 49 | |||
| 50 | Since we now have cleaned-up data in our `feed.entries` object, we can start | ||
| 51 | performing sentiment analysis. | ||
| 52 | |||
| 53 | There are many sentiment analysis libraries available, ranging from rule-based | ||
| 54 | sentiment analysis up to machine-learning-based analysis. To keep things | ||
| 55 | simple I decided to use the rule-based analysis library | ||
| 56 | [vaderSentiment](https://github.com/cjhutto/vaderSentiment) from | ||
| 57 | [C.J. Hutto](https://github.com/cjhutto). It's a really nice library and quite | ||
| 58 | easy to use. | ||
| 59 | |||
| 60 | ```python | ||
| 61 | from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer | ||
| 62 | analyser = SentimentIntensityAnalyzer() | ||
| 63 | |||
| 64 | sentiment_results = [] | ||
| 65 | for item in feed.entries: | ||
| 66 | sentiment_title = analyser.polarity_scores(item.title) | ||
| 67 | sentiment_description = analyser.polarity_scores(item.description) | ||
| 68 | sentiment_results.append([sentiment_title['compound'], sentiment_description['compound']]) | ||
| 69 | ``` | ||
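| | |||
| | A naive way to quantify the mismatch (not part of the original analysis, just a | ||
| | quick illustration) is to look at the absolute gap between the two compound | ||
| | scores and list the articles with the largest difference: | ||
| | |||
| | ```python | ||
| | # naive clickbait indicator: how far apart are title and description sentiment? | ||
| | gaps = [abs(title - description) for title, description in sentiment_results] | ||
| | |||
| | # print the five articles with the largest sentiment gap | ||
| | for index in sorted(range(len(gaps)), key=lambda i: gaps[i], reverse=True)[:5]: | ||
| |     print(round(gaps[index], 3), feed.entries[index].title) | ||
| | ``` | ||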
| 70 | |||
| 71 | Now that we have this data in a shape that is compatible with matplotlib, we can | ||
| 72 | plot the results to see the difference between the title and description | ||
| 73 | sentiment of each article. | ||
| 74 | |||
| 75 | ```python | ||
| 76 | import matplotlib.pyplot as plt | ||
| 77 | |||
| 78 | plt.rcParams['figure.figsize'] = (15, 3) | ||
| 79 | plt.plot(sentiment_results, drawstyle='steps') | ||
| 80 | plt.title('Sentiment analysis relationship between title and description (Guardian World News)') | ||
| 81 | plt.legend(['title', 'description']) | ||
| 82 | plt.show() | ||
| 83 | ``` | ||
| 84 | |||
| 85 | ## Results and assets | ||
| 86 | |||
| 87 | 1. Because of the small sample size, no firm conclusions can be drawn. | ||
| 88 | 2. A rule-based approach may not be the best way of doing this. By using deep | ||
| 89 | learning we would be able to get better insights. | ||
| 90 | 3. **The next step would be to** periodically fetch RSS items, store them over a | ||
| 91 | longer period of time, and then perform the analysis again with either machine | ||
| 92 | learning or deep learning on top of it. | ||
| 93 | |||
| 94 |  | ||
| 95 | |||
| 96 | The figure above displays the difference between title and description sentiment | ||
| 97 | for each RSS feed item. 1 means positive and -1 means negative sentiment. | ||
| 98 | |||
| 99 | [» Download Jupyter Notebook](/assets/sentiment-analysis/sentiment-analysis.ipynb) | ||
| 100 | |||
| 101 | ## Going further | ||
| 102 | |||
| 103 | - [Twitter Sentiment Analysis by Bryan Schwierzke](https://github.com/bswiss/news_mood) | ||
| 104 | - [AFINN-based sentiment analysis for Node.js by Andrew Sliwinski](https://github.com/thisandagain/sentiment) | ||
| 105 | - [Sentiment Analysis with LSTMs in Tensorflow by Adit Deshpande](https://github.com/adeshpande3/LSTM-Sentiment-Analysis) | ||
| 106 | - [Sentiment analysis on tweets using Naive Bayes, SVM, CNN, LSTM, etc. by Abdul Fatir](https://github.com/abdulfatir/twitter-sentiment-analysis) | ||
| 107 | |||
diff --git a/content/posts/2020-03-22-simple-sse-based-pubsub-server.md b/content/posts/2020-03-22-simple-sse-based-pubsub-server.md deleted file mode 100644 index 60745d0..0000000 --- a/content/posts/2020-03-22-simple-sse-based-pubsub-server.md +++ /dev/null | |||
| @@ -1,453 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Simple Server-Sent Events based PubSub Server | ||
| 3 | url: simple-server-sent-events-based-pubsub-server.html | ||
| 4 | date: 2020-03-22T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## Before we continue ... | ||
| 9 | |||
| 10 | The publisher/subscriber model is nothing new and there are many amazing | ||
| 11 | solutions out there, so writing a new one would be a waste of time if the | ||
| 12 | existing solutions didn't have quite complex install procedures and weren't so | ||
| 13 | hard to maintain. But to be fair, comparing this simple server with something | ||
| 14 | like [Kafka](https://kafka.apache.org/) or [RabbitMQ](https://www.rabbitmq.com/) | ||
| 15 | is laughable, to say the least. Those solutions are enterprise grade and have | ||
| 16 | many mechanisms to ensure messages aren't lost, and much more. Regardless of | ||
| 17 | these drawbacks, this method has been tested on a large website and has worked | ||
| 18 | without any problems so far. Now that we got that cleared up, let's continue. | ||
| 19 | |||
| 20 | ***Wiki definition:** Publish/subscribe messaging, or pub/sub messaging, is a | ||
| 21 | form of asynchronous service-to-service communication used in serverless and | ||
| 22 | microservices architectures. In a pub/sub model, any message published to a | ||
| 23 | topic is immediately received by all the subscribers to the topic.* | ||
| 24 | |||
| 25 | ## General goals | ||
| 26 | |||
| 27 | - provide a simple server that relays messages to all the connected clients, | ||
| 28 | - messages can be posted on specific topics, | ||
| 29 | - messages get sent via [Server-Sent | ||
| 30 | Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events) | ||
| 31 | to all the subscribers. | ||
| 32 | |||
| 33 | ## How exactly does the pub/sub model work? | ||
| 34 | |||
| 35 | The easiest way to explain this is with the diagram below. The basic function is | ||
| 36 | simple. We have subscribers that receive messages, and we have publishers that | ||
| 37 | create and post messages. A similar, well-known pattern works on the premise of | ||
| 38 | consumers and producers, which take on similar roles. | ||
| 39 | |||
| 40 |  | ||
| 41 | |||
| 42 | **These are some naive characteristics we want to achieve:** | ||
| 43 | |||
| 44 | - the producer publishes messages to a topic, | ||
| 45 | - the consumer receives messages from subscribed topics, | ||
| 46 | - the server is also known as a broker, | ||
| 47 | - the broker does not store messages or track delivery success, | ||
| 48 | - the broker uses the | ||
| 49 | [FIFO](https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)) method | ||
| 50 | for delivering messages, | ||
| 51 | - if a consumer wants to receive messages from a topic, the producer and consumer | ||
| 52 | topics must match, | ||
| 53 | - a consumer can subscribe to multiple topics, | ||
| 54 | - a producer can publish to multiple topics, | ||
| 55 | - each message has a messageId. | ||
| 56 | |||
| 57 | **Known drawbacks:** | ||
| 58 | |||
| 59 | - messages are not stored in a persistent queue and there is no | ||
| 60 | [Dead Letter Queue](https://en.wikipedia.org/wiki/Dead_letter_queue) for | ||
| 61 | unreceived messages, so old messages could be lost on server restart, | ||
| 62 | - [Server-Sent | ||
| 63 | Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events) | ||
| 64 | opens a long-running connection between the client and the server, so if your | ||
| 65 | setup is load balanced make sure the load balancer can keep long-lived | ||
| 66 | connections open, | ||
| 67 | - no system moderation, due to the dynamic nature of creating queues. | ||
| 68 | |||
| 69 | ## Server-Sent Events | ||
| 70 | |||
| 71 | Read more about it on [official specification | ||
| 72 | page](https://html.spec.whatwg.org/multipage/server-sent-events.html). | ||
| 73 | |||
| 74 | ### Current browser support | ||
| 75 | |||
| 76 |  | ||
| 77 | |||
| 78 | Check | ||
| 79 | [https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource) | ||
| 80 | for latest information about browser support. | ||
| 81 | |||
| 82 | ### Known issues | ||
| 83 | |||
| 84 | - Firefox 52 and below do not support EventSource in web/shared workers | ||
| 85 | - In Firefox prior to version 36 server-sent events do not reconnect | ||
| 86 | automatically in case of a connection interrupt (bug) | ||
| 87 | - Reportedly, CORS in EventSource is currently supported in Firefox 10+, Opera | ||
| 88 | 12+, Chrome 26+, Safari 7.0+. | ||
| 89 | - Antivirus software may block the event streaming data chunks. | ||
| 90 | |||
| 91 | Source: [https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource) | ||
| 92 | |||
| 93 | ### Message format | ||
| 94 | |||
| 95 | The simplest message that can be sent contains only the data attribute: | ||
| 96 | |||
| 97 | ```bash | ||
| 98 | data: this is a simple message | ||
| 99 | <blank line> | ||
| 100 | ``` | ||
| 101 | |||
| 102 | You can send message IDs to be used if the connection is dropped: | ||
| 103 | |||
| 104 | ```bash | ||
| 105 | id: 33 | ||
| 106 | data: this is line one | ||
| 107 | data: this is line two | ||
| 108 | <blank line> | ||
| 109 | ``` | ||
| 110 | |||
| 111 | And you can specify your own event types (the above messages will all trigger | ||
| 112 | the message event): | ||
| 113 | |||
| 114 | ```bash | ||
| 115 | id: 36 | ||
| 116 | event: price | ||
| 117 | data: 103.34 | ||
| 118 | <blank line> | ||
| 119 | ``` | ||
| 120 | |||
| 121 | ### Server requirements | ||
| 122 | |||
| 123 | The important thing is which headers are sent by the server, since those are | ||
| 124 | what trigger the browser to treat the response as an EventStream. | ||
| 125 | |||
| 126 | Headers responsible for this are: | ||
| 127 | |||
| 128 | ```bash | ||
| 129 | Content-Type: text/event-stream | ||
| 130 | Cache-Control: no-cache | ||
| 131 | Connection: keep-alive | ||
| 132 | ``` | ||
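| | |||
| | If you were wiring up a stream endpoint by hand, setting these headers in an | ||
| | Express handler could look roughly like this (just a sketch with a hypothetical | ||
| | `/stream-raw` route; the `sse-pubsub` library used later takes care of this for | ||
| | us): | ||
| | |||
| | ```js | ||
| | const app = require('express')(); | ||
| | |||
| | // hypothetical route, only to illustrate the headers above | ||
| | app.get('/stream-raw', (req, res) => { | ||
| |   // tell the browser this response is an event stream and keep it open | ||
| |   res.writeHead(200, { | ||
| |     'Content-Type': 'text/event-stream', | ||
| |     'Cache-Control': 'no-cache', | ||
| |     'Connection': 'keep-alive', | ||
| |   }); | ||
| |   res.write('data: hello\n\n'); // a minimal SSE message; the connection stays open | ||
| | }); | ||
| | |||
| | app.listen(4000); | ||
| | ``` | ||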
| 133 | |||
| 134 | ### Debugging with Google Chrome | ||
| 135 | |||
| 136 | Google Chrome provides a built-in debugging and exploration tool for [Server-Sent | ||
| 137 | Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events) | ||
| 138 | which is quite nice and available from Developer Tools under the Network tab. | ||
| 139 | |||
| 140 | > You can only debug client-side events that get received, not the server-side | ||
| 141 | > ones. For debugging server events, add `console.log` calls to the `server.js` | ||
| 142 | > code and print out the events. | ||
| 143 | |||
| 144 |  | ||
| 145 | |||
| 146 | ## Server implementation | ||
| 147 | |||
| 148 | For the sake of this example we will use [Node.js](https://nodejs.org/en/) with | ||
| 149 | [Express](https://expressjs.com) as our router, since this is the easiest way to | ||
| 150 | get started, and we will use an already written SSE library for Node, | ||
| 151 | [sse-pubsub](https://www.npmjs.com/package/sse-pubsub), so we don't reinvent the | ||
| 152 | wheel. | ||
| 153 | |||
| 154 | ```bash | ||
| 155 | npm init --yes | ||
| 156 | |||
| 157 | npm install express | ||
| 158 | npm install body-parser | ||
| 159 | npm install sse-pubsub | ||
| 160 | ``` | ||
| 161 | |||
| 162 | Basic implementation of a server (`server.js`): | ||
| 163 | |||
| 164 | ```js | ||
| 165 | const express = require('express'); | ||
| 166 | const bodyParser = require('body-parser'); | ||
| 167 | const SSETopic = require('sse-pubsub'); | ||
| 168 | |||
| 169 | const app = express(); | ||
| 170 | const port = process.env.PORT || 4000; | ||
| 171 | |||
| 172 | // topics container | ||
| 173 | const sseTopics = {}; | ||
| 174 | |||
| 175 | app.use(bodyParser.json()); | ||
| 176 | |||
| 177 | // open for all cors | ||
| 178 | app.all('*', (req, res, next) => { | ||
| 179 | res.header('Access-Control-Allow-Origin', '*'); | ||
| 180 | res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type'); | ||
| 181 | next(); | ||
| 182 | }); | ||
| 183 | |||
| 184 | // preflight request error fix | ||
| 185 | app.options('*', async (req, res) => { | ||
| 186 | res.header('Access-Control-Allow-Origin', '*'); | ||
| 187 | res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type'); | ||
| 188 | res.send('OK'); | ||
| 189 | }); | ||
| 190 | |||
| 191 | // serve the event streams | ||
| 192 | app.get('/stream/:topic', async (req, res, next) => { | ||
| 193 | const topic = req.params.topic; | ||
| 194 | |||
| 195 | if (!(topic in sseTopics)) { | ||
| 196 | sseTopics[topic] = new SSETopic({ | ||
| 197 | pingInterval: 0, | ||
| 198 | maxStreamDuration: 15000, | ||
| 199 | }); | ||
| 200 | } | ||
| 201 | |||
| 202 | // subscribing client to topic | ||
| 203 | sseTopics[topic].subscribe(req, res); | ||
| 204 | }); | ||
| 205 | |||
| 206 | // accepts new messages into topic | ||
| 207 | app.post('/publish', async (req, res) => { | ||
| 208 | let body = req.body; | ||
| 209 | let status = 200; | ||
| 210 | |||
| 211 | console.log('Incoming message:', req.body); | ||
| 212 | |||
| 213 | if ( | ||
| 214 | body.hasOwnProperty('topic') && | ||
| 215 | body.hasOwnProperty('event') && | ||
| 216 | body.hasOwnProperty('message') | ||
| 217 | ) { | ||
| 218 | const topic = req.body.topic; | ||
| 219 | const event = req.body.event; | ||
| 220 | const message = req.body.message; | ||
| 221 | |||
| 222 | if (topic in sseTopics) { | ||
| 223 | // sends message to all the subscribers | ||
| 224 | sseTopics[topic].publish(message, event); | ||
| 225 | } | ||
| 226 | } else { | ||
| 227 | status = 400; | ||
| 228 | } | ||
| 229 | |||
| 230 | res.status(status).send({ | ||
| 231 | status, | ||
| 232 | }); | ||
| 233 | }); | ||
| 234 | |||
| 235 | // returns JSON object of all opened topics | ||
| 236 | app.get('/status', async (req, res) => { | ||
| 237 | res.send(sseTopics); | ||
| 238 | }); | ||
| 239 | |||
| 240 | // health-check endpoint | ||
| 241 | app.get('/', async (req, res) => { | ||
| 242 | res.send('OK'); | ||
| 243 | }); | ||
| 244 | |||
| 245 | // return a 404 if no routes match | ||
| 246 | app.use((req, res, next) => { | ||
| 247 | res.set('Cache-Control', 'private, no-store'); | ||
| 248 | res.status(404).end('Not found'); | ||
| 249 | }); | ||
| 250 | |||
| 251 | // starts the server | ||
| 252 | app.listen(port, () => { | ||
| 253 | console.log(`PubSub server running on http://localhost:${port}`); | ||
| 254 | }); | ||
| 255 | ``` | ||
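| | |||
| | With the dependencies installed, the server can be started directly with Node: | ||
| | |||
| | ```bash | ||
| | node server.js | ||
| | # PubSub server running on http://localhost:4000 | ||
| | ``` | ||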
| 256 | |||
| 257 | ### Our custom message format | ||
| 258 | |||
| 259 | Each message posted to the server must be in a specific format that our server | ||
| 260 | accepts. Having a structure like this allows us to have multiple separate types | ||
| 261 | of events on each topic. | ||
| 262 | |||
| 263 | With this we can separate streams and only receive events that belong to the | ||
| 264 | topic. | ||
| 265 | |||
| 266 | One example would be that we have an index page and we want to receive messages | ||
| 267 | about new upvotes or new subscribers, but we don't want to follow events for | ||
| 268 | other pages. This reduces clutter and overall network traffic. And the structure | ||
| 269 | is much nicer and more maintainable. | ||
| 270 | |||
| 271 | ```json | ||
| 272 | { | ||
| 273 | "topic": "sample-topic", | ||
| 274 | "event": "sample-event", | ||
| 275 | "message": { "name": "John" } | ||
| 276 | } | ||
| 277 | ``` | ||
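| | |||
| | For a quick test without any client code, a message in this format can be posted | ||
| | to the `/publish` endpoint with curl (assuming the server from above is running | ||
| | on localhost:4000): | ||
| | |||
| | ```bash | ||
| | curl -X POST http://localhost:4000/publish \ | ||
| |   -H 'Content-Type: application/json' \ | ||
| |   -d '{"topic": "sample-topic", "event": "sample-event", "message": {"name": "John"}}' | ||
| | ``` | ||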
| 278 | |||
| 279 | ## Publisher and subscriber clients | ||
| 280 | |||
| 281 | ### Publisher and subscriber in action | ||
| 282 | |||
| 283 | <video src="/assets/simple-pubsub-server/clients.m4v" controls></video> | ||
| 284 | |||
| 285 | You can download [the code](../simple-pubsub-server/sse-pubsub-server.zip) and | ||
| 286 | follow along. | ||
| 287 | |||
| 288 | ### Publisher | ||
| 289 | |||
| 290 | As discussed above, the publisher is the one that sends messages to the | ||
| 291 | broker/server. The message inside the payload can be whatever you want (string, | ||
| 292 | object, array). I would, however, personally avoid sending large chunks of data | ||
| 293 | like blobs and such. | ||
| 294 | |||
| 295 | ```html | ||
| 296 | <!DOCTYPE html> | ||
| 297 | <html lang="en"> | ||
| 298 | |||
| 299 | <head> | ||
| 300 | <meta charset="UTF-8"> | ||
| 301 | <meta name="viewport" content="width=device-width, initial-scale=1.0"> | ||
| 302 | <title>Publisher</title> | ||
| 303 | </head> | ||
| 304 | |||
| 305 | <body> | ||
| 306 | |||
| 307 | <h1>Publisher</h1> | ||
| 308 | |||
| 309 | <fieldset> | ||
| 310 | <p> | ||
| 311 | <label>Server:</label> | ||
| 312 | <input type="text" id="server" value="http://localhost:4000"> | ||
| 313 | </p> | ||
| 314 | <p> | ||
| 315 | <label>Topic:</label> | ||
| 316 | <input type="text" id="topic" value="sample-topic"> | ||
| 317 | </p> | ||
| 318 | <p> | ||
| 319 | <label>Event:</label> | ||
| 320 | <input type="text" id="event" value="sample-event"> | ||
| 321 | </p> | ||
| 322 | <p> | ||
| 323 | <label>Message:</label> | ||
| 324 | <input type="text" id="message" value='{"name": "John"}'> | ||
| 325 | </p> | ||
| 326 | <p> | ||
| 327 | <button type="button" id="button">Publish message to topic</button> | ||
| 328 | </p> | ||
| 329 | </fieldset> | ||
| 330 | |||
| 331 | <script> | ||
| 332 | |||
| 333 | const button = document.querySelector('#button'); | ||
| 334 | const server = document.querySelector('#server'); | ||
| 335 | const topic = document.querySelector('#topic'); | ||
| 336 | const event = document.querySelector('#event'); | ||
| 337 | const message = document.querySelector('#message'); | ||
| 338 | |||
| 339 | button.addEventListener('click', async (evt) => { | ||
| 340 | const req = await fetch(`${server.value}/publish`, { | ||
| 341 | method: 'post', | ||
| 342 | headers: { | ||
| 343 | 'Accept': 'application/json', | ||
| 344 | 'Content-Type': 'application/json', | ||
| 345 | }, | ||
| 346 | body: JSON.stringify({ | ||
| 347 | topic: topic.value, | ||
| 348 | event: event.value, | ||
| 349 | message: JSON.parse(message.value), | ||
| 350 | }), | ||
| 351 | }); | ||
| 352 | |||
| 353 | const res = await req.json(); | ||
| 354 | console.log(res); | ||
| 355 | }); | ||
| 356 | |||
| 357 | </script> | ||
| 358 | |||
| 359 | </body> | ||
| 360 | |||
| 361 | </html> | ||
| 362 | ``` | ||
| 363 | |||
| 364 | ### Subscriber | ||
| 365 | |||
| 366 | The subscriber is responsible for receiving new messages that come from the | ||
| 367 | server via the publisher. The code below is very rudimentary but works and | ||
| 368 | follows the implementation guidelines for EventSource. | ||
| 369 | |||
| 370 | You can either use the Developer Tools Console to see incoming messages or refer | ||
| 371 | to the Debugging with Google Chrome section above to see all EventStream | ||
| 372 | messages. | ||
| 373 | |||
| 374 | > Don't be alarmed if the subscriber gets disconnected from the server every so | ||
| 375 | > often. The code we have here resets the connection every 15s, but it | ||
| 376 | > automatically reconnects and fetches all messages since the last received | ||
| 377 | > message id. This setting can be adjusted in the `server.js` file; search for | ||
| 378 | > the `maxStreamDuration` variable. | ||
| 379 | |||
| 380 | ```html | ||
| 381 | <!DOCTYPE html> | ||
| 382 | <html lang="en"> | ||
| 383 | |||
| 384 | <head> | ||
| 385 | <meta charset="UTF-8"> | ||
| 386 | <meta name="viewport" content="width=device-width, initial-scale=1.0"> | ||
| 387 | <title>Subscriber</title> | ||
| 388 | <link rel="stylesheet" href="style.css"> | ||
| 389 | </head> | ||
| 390 | |||
| 391 | <body> | ||
| 392 | |||
| 393 | <h1>Subscriber</h1> | ||
| 394 | |||
| 395 | <fieldset> | ||
| 396 | <p> | ||
| 397 | <label>Server:</label> | ||
| 398 | <input type="text" id="server" value="http://localhost:4000"> | ||
| 399 | </p> | ||
| 400 | <p> | ||
| 401 | <label>Topic:</label> | ||
| 402 | <input type="text" id="topic" value="sample-topic"> | ||
| 403 | </p> | ||
| 404 | <p> | ||
| 405 | <label>Event:</label> | ||
| 406 | <input type="text" id="event" value="sample-event"> | ||
| 407 | </p> | ||
| 408 | <p> | ||
| 409 | <button type="button" id="button">Subscribe to topic</button> | ||
| 410 | </p> | ||
| 411 | </fieldset> | ||
| 412 | |||
| 413 | <script> | ||
| 414 | |||
| 415 | const button = document.querySelector('#button'); | ||
| 416 | const server = document.querySelector('#server'); | ||
| 417 | const topic = document.querySelector('#topic'); | ||
| 418 | const event = document.querySelector('#event'); | ||
| 419 | |||
| 420 | button.addEventListener('click', async (evt) => { | ||
| 421 | |||
| 422 | let es = new EventSource(`${server.value}/stream/${topic.value}`); | ||
| 423 | |||
| 424 | es.addEventListener(event.value, function (evt) { | ||
| 425 | console.log(`incoming message`, JSON.parse(evt.data)); | ||
| 426 | }); | ||
| 427 | |||
| 428 | es.addEventListener('open', function (evt) { | ||
| 429 | console.log('connected', evt); | ||
| 430 | }); | ||
| 431 | |||
| 432 | es.addEventListener('error', function (evt) { | ||
| 433 | console.log('error', evt); | ||
| 434 | }); | ||
| 435 | |||
| 436 | }); | ||
| 437 | |||
| 438 | </script> | ||
| 439 | |||
| 440 | </body> | ||
| 441 | |||
| 442 | </html> | ||
| 443 | ``` | ||
| 444 | |||
| 445 | ## Reading further | ||
| 446 | |||
| 447 | - [Using server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events) | ||
| 448 | - [Using SSE Instead Of WebSockets For Unidirectional Data Flow Over HTTP/2](https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/) | ||
| 449 | - [What is Server-Sent Events?](https://apifriends.com/api-streaming/server-sent-events/) | ||
| 450 | - [An HTTP/2 extension for bidirectional messaging communication](https://tools.ietf.org/id/draft-xie-bidirectional-messaging-01.html) | ||
| 451 | - [Introduction to HTTP/2](https://developers.google.com/web/fundamentals/performance/http2) | ||
| 452 | - [The WebSocket API (WebSockets)](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) | ||
| 453 | |||
diff --git a/content/posts/2020-03-27-create-placeholder-images-with-sharp.md b/content/posts/2020-03-27-create-placeholder-images-with-sharp.md deleted file mode 100644 index ac4f053..0000000 --- a/content/posts/2020-03-27-create-placeholder-images-with-sharp.md +++ /dev/null | |||
| @@ -1,101 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Create placeholder images with sharp Node.js image processing library | ||
| 3 | url: create-placeholder-images-with-sharp.html | ||
| 4 | date: 2020-03-27T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | I have been searching for a solution to pre-generate some placeholder images for | ||
| 9 | an image server I needed to develop that resizes images on S3. I thought this | ||
| 10 | would be a 15min job and quickly found out how very mistaken I was. | ||
| 11 | |||
| 12 | Even though Node.js is not really the best way to do this kind of thing (surely | ||
| 13 | something written in C or Rust or even Golang would be the correct way to do | ||
| 14 | this, but we didn't need the speed in our case) I found an excellent library, | ||
| 15 | [sharp - High performance Node.js image | ||
| 16 | processing](https://github.com/lovell/sharp). | ||
| 17 | |||
| 18 | Getting things running was a breeze. | ||
| 19 | |||
| 20 | ## Fetch image from S3 and save resized | ||
| 21 | |||
| 22 | ```js | ||
| 23 | const sharp = require('sharp'); | ||
| 24 | const aws = require('aws-sdk'); | ||
| 25 | |||
| 26 | const x = 100, y = 100; // target width and height for the resized image | ||
| 27 | |||
| 28 | aws.config.update({ | ||
| 29 | secretAccessKey: 'secretAccessKey', | ||
| 30 | accessKeyId: 'accessKeyId', | ||
| 31 | region: 'region' | ||
| 32 | }); | ||
| 33 | const s3 = new aws.S3({}); // create the client after the config is set | ||
| 34 | |||
| 35 | const originalImage = await s3.getObject({ | ||
| 36 | Bucket: 'some-bucket-name', | ||
| 37 | Key: 'image.jpg', | ||
| 38 | }).promise(); | ||
| 39 | |||
| 40 | const resizedImage = await sharp(originalImage.Body) | ||
| 41 | .resize(x, y) | ||
| 42 | .jpeg({ progressive: true }) | ||
| 43 | .toBuffer(); | ||
| 44 | |||
| 45 | s3.putObject({ | ||
| 46 | Bucket: 'some-bucket-name', | ||
| 47 | Key: `optimized/${x}x${y}/image.jpg`, | ||
| 48 | Body: resizedImage, | ||
| 49 | ContentType: 'image/jpeg', | ||
| 50 | ACL: 'public-read' | ||
| 51 | }).promise(); | ||
| 52 | ``` | ||
| 53 | |||
| 54 | All this code was wrapped inside a web service with some additional security | ||
| 55 | checks and defensive coding to detect if a key is missing on S3. | ||
| 56 | |||
| 57 | And at that point I needed to return placeholder images as a response in case a | ||
| 58 | key is missing or x,y are not allowed by the server, etc. I could have created | ||
| 59 | PNGs in Gimp and just served them, but I wanted to respect the aspect ratio and | ||
| 60 | didn't want to return mangled images. | ||
| 61 | |||
| 62 | > The main problem was that finding a clean solution I could copy, paste, and | ||
| 63 | > change a bit turned out to be a real task. The API is changing constantly and | ||
| 64 | > there weren't clear examples, or I was unable to find them. | ||
| 65 | |||
| 66 | ## Generating placeholder images using SVG | ||
| 67 | |||
| 68 | What I ended up with was using SVG to generate the text, creating the image with | ||
| 69 | sharp, and using composition to combine both layers. The response returned by | ||
| 70 | this function is a buffer you can either upload to S3 or save to a local file. | ||
| 71 | |||
| 72 | ```js | ||
| 73 | const generatePlaceholderImageWithText = async (width, height, message) => { | ||
| 74 | const overlay = `<svg width="${width - 20}" height="${height - 20}"> | ||
| 75 | <text x="50%" y="50%" font-family="sans-serif" font-size="16" text-anchor="middle">${message}</text> | ||
| 76 | </svg>`; | ||
| 77 | |||
| 78 | return await sharp({ | ||
| 79 | create: { | ||
| 80 | width: width, | ||
| 81 | height: height, | ||
| 82 | channels: 4, | ||
| 83 | background: { r: 230, g: 230, b: 230, alpha: 1 } | ||
| 84 | } | ||
| 85 | }) | ||
| 86 | .composite([{ | ||
| 87 | input: Buffer.from(overlay), | ||
| 88 | gravity: 'center', | ||
| 89 | }]) | ||
| 90 | .jpeg() | ||
| 91 | .toBuffer(); | ||
| 92 | } | ||
| 93 | ``` | ||
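| | |||
| | As a quick usage sketch, saving to a local file (the 300x200 size and file name | ||
| | are just examples; as with the snippets above, imagine the `await` living inside | ||
| | an async handler, and the buffer could just as well go to S3 with `putObject`): | ||
| | |||
| | ```js | ||
| | const fs = require('fs'); | ||
| | |||
| | // 300x200 grey placeholder with a short message | ||
| | const buffer = await generatePlaceholderImageWithText(300, 200, 'Image not found'); | ||
| | fs.writeFileSync('placeholder.jpg', buffer); | ||
| | ``` | ||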
| 94 | |||
| 95 | That is about it. Nothing more to it. You can change the color of the image by | ||
| 96 | changing `background`, and if you want to change the text styling you can adapt | ||
| 97 | the SVG to your needs. | ||
| 98 | |||
| 99 | > Also be careful about the length of the text. This function positions the text | ||
| 100 | > at the center and adds `20px` of padding on all sides. If the text is longer | ||
| 101 | > than the image it will get cut off. | ||
diff --git a/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md b/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md deleted file mode 100644 index bf1d710..0000000 --- a/content/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md +++ /dev/null | |||
| @@ -1,107 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: The strange case of Elasticsearch allocation failure | ||
| 3 | url: the-strange-case-of-elasticsearch-allocation-failure.html | ||
| 4 | date: 2020-03-29T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | I've been using Elasticsearch in production for 5 years now and never had a | ||
| 9 | single problem with it. Hell, I never even knew there could be a problem. It just | ||
| 10 | worked. All this time. The first node that I deployed is still being used in | ||
| 11 | production, never updated, upgraded, or touched in any way. | ||
| 12 | |||
| 13 | All this bliss came to an abrupt end this Friday when I got a notification that | ||
| 14 | the Elasticsearch cluster went warm. Well, warm is not that bad, right? Wrong! | ||
| 15 | Quickly after that I got another email which sent chills down my spine. The | ||
| 16 | cluster is now red. RED! Now shit really hit the fan! | ||
| 17 | |||
| 18 | I tried googling what the problem could be, and after executing the allocation | ||
| 19 | function I noticed that some shards were unassigned and 5 attempts had already | ||
| 20 | been made (which is, BTW, to my luck the maximum), which meant I was basically | ||
| 21 | fucked. The advice was also to wait for the cluster to re-balance itself. So I | ||
| 22 | waited. One hour, two hours, several hours. Nothing, still RED. | ||
| 23 | |||
| 24 | The strangest thing about it all was that queries were still being fulfilled. | ||
| 25 | Data was coming out. On the outside it looked like nothing was wrong, but | ||
| 26 | anybody who looked at the cluster would know immediately that something was very, | ||
| 27 | very wrong and we were living on borrowed time here. | ||
| 28 | |||
| 29 | > **Please, DO NOT do what I did.** Seriously! Please ask someone on the official | ||
| 30 | > forums, or if you know an expert, please consult them. There could be a million | ||
| 31 | > reasons and these solutions fit my problem. Maybe in your case they would be | ||
| 32 | > disastrous. I had all the data backed up, and even if I failed spectacularly I | ||
| 33 | > would be able to restore the data. It would be a huge pain and I would lose a | ||
| 34 | > couple of days, but I had a plan B. | ||
| 35 | |||
| 36 | Executing the allocation query told me what the problem was, but offered no clear solution yet. | ||
| 37 | |||
| 38 | ```yaml | ||
| 39 | GET /_cat/allocation?format=json | ||
| 40 | ``` | ||
| 41 | |||
| 42 | I got a message that `ALLOCATION_FAILED` with the additional info `failed to create | ||
| 43 | shard, failure ioexception[failed to obtain in-memory shard lock]`. Well, | ||
| 44 | splendid! I must also say that our cluster is more than capable of handling the | ||
| 45 | traffic. JVM memory pressure was never an issue either. So what really happened | ||
| 46 | then? | ||
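| | |||
| | Looking back, the allocation explain API would have been a more direct way to get | ||
| | a per-shard reason (it exists on recent Elasticsearch versions, though I am not | ||
| | sure every managed offering exposes it): | ||
| | |||
| | ```yaml | ||
| | GET /_cluster/allocation/explain | ||
| | ``` | ||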
| 47 | |||
| 48 | I also tried re-routing the failed shards, with no success, due to AWS | ||
| 49 | restrictions on the managed Elasticsearch cluster (they lock some of the functions). | ||
| 50 | |||
| 51 | ```yaml | ||
| 52 | POST /_cluster/reroute?retry_failed=true | ||
| 53 | ``` | ||
| 54 | |||
| 55 | I got a message that significantly reduced my options. | ||
| 56 | |||
| 57 | ```json | ||
| 58 | { | ||
| 59 | "Message": "Your request: '/_cluster/reroute' is not allowed." | ||
| 60 | } | ||
| 61 | ``` | ||
| 62 | |||
| 63 | After that I went on a hunt again. I won't bother you with all the details, | ||
| 64 | because hours/days went by until I was finally able to re-index the problematic | ||
| 65 | index and hope for the best. Until that moment even re-indexing was giving me | ||
| 66 | errors. | ||
| 67 | |||
| 68 | ```yaml | ||
| 69 | POST _reindex | ||
| 70 | { | ||
| 71 | "source": { | ||
| 72 | "index": "myindex" | ||
| 73 | }, | ||
| 74 | "dest": { | ||
| 75 | "index": "myindex-new" | ||
| 76 | } | ||
| 77 | } | ||
| 78 | ``` | ||
| 79 | |||
| 80 | I needed to do this multiple times to get all the documents re-indexed. Then I | ||
| 81 | dropped the original one with the following command. | ||
| 82 | |||
| 83 | ```yaml | ||
| 84 | DELETE /myindex | ||
| 85 | ``` | ||
| 86 | |||
| 87 | And then re-indexed the new one back into the original (well, by name only). | ||
| 88 | |||
| 89 | ```yaml | ||
| 90 | POST _reindex | ||
| 91 | { | ||
| 92 | "source": { | ||
| 93 | "index": "myindex-new" | ||
| 94 | }, | ||
| 95 | "dest": { | ||
| 96 | "index": "myindex" | ||
| 97 | } | ||
| 98 | } | ||
| 99 | ``` | ||
| 100 | |||
| 101 | On the surface it looks like everything is working, but I have a long road in | ||
| 102 | front of me to get all the things working again. The cluster now shows that it | ||
| 103 | is Green, but I am also getting a notification that the cluster has a processing | ||
| 104 | status, which could mean a million things. | ||
| 105 | |||
| 106 | Godspeed! | ||
| 107 | |||
diff --git a/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md b/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md deleted file mode 100644 index daebb4c..0000000 --- a/content/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md +++ /dev/null | |||
| @@ -1,110 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: My love and hate relationship with Node.js | ||
| 3 | url: my-love-and-hate-relationship-with-nodejs.html | ||
| 4 | date: 2020-03-30T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | The previous project I was working on was coded in | ||
| 9 | [Golang](https://golang.org/). It was also my first project using it. And damn, | ||
| 10 | that was an awesome experience. The whole thing is just superb. From how errors | ||
| 11 | are handled. The C-like way you handle compiling. The way the language is | ||
| 12 | structured, making it incredibly versatile and easy to learn. | ||
| 13 | |||
| 14 | It may cause some pain for somebody who is not used to using interfaces to map | ||
| 15 | JSON and doing the recompilation all the time. But we have tools like | ||
| 16 | [entr](http://eradman.com/entrproject/) and | ||
| 17 | [make](https://www.gnu.org/software/make/) to fix that. | ||
| 18 | |||
| 19 | But we are not here to talk about my undying love for **Golang**. Though in some | ||
| 20 | ways we probably should be. It is an excellent example of how a modern language | ||
| 21 | should be designed. And because I have used it extensively in the last couple of | ||
| 22 | years, this probably taints my views of other languages. And that is doing me a | ||
| 23 | great disservice. Nevertheless, here we are. | ||
| 24 | |||
| 25 | About two years ago I started flirting with [Node.js](https://nodejs.org/en/) | ||
| 26 | for a project I started working on. What I wanted was to have things written in | ||
| 27 | a language that is widely used and that we could get additional developers for. | ||
| 28 | As much as **Golang** is amazing, it's really hard to get developers for it. Even | ||
| 29 | now. And after playing around with it for a week I fell in love with the speed | ||
| 30 | of iteration and the massive package ecosystem. Do you want SSO? You got it! Do | ||
| 31 | you want some esoteric library for something? There is a strong chance somebody | ||
| 32 | wrote it. It is so extensive that you find yourself evaluating packages based on | ||
| 33 | **GitHub stars** and number of contributors. You get swallowed by the vanity | ||
| 34 | metrics, and that potentially will become the downfall of Node.js. | ||
| 35 | |||
| 36 | Because of the sheer amount of choice I often got anxiety when choosing | ||
| 37 | libraries. Will I choose the correct one? Is this library something that will be | ||
| 38 | supported for the foreseeable future or not? I am used to using libraries that | ||
| 39 | have been in development for 10 years plus (Python, C) and that gave me some sort | ||
| 40 | of comfort. And it is probably unfair to Node.js and its community to expect the | ||
| 41 | same dedication. | ||
| 42 | |||
| 43 | Moving forward ... Work started and things were great. **The speed of iteration | ||
| 44 | was insane**. A feature that would need a day in Golang only took me an hour or | ||
| 45 | two. I became lazy! Using packages all over the place. Falling into the same | ||
| 46 | trap as others. Packages on top of packages. And [npm](https://www.npmjs.com/) | ||
| 47 | didn't help at all. The way that the package manager works is just | ||
| 48 | horrendous. And not allowing node_modules to live outside the project is also | ||
| 49 | the stupidest idea ever. | ||
| 50 | |||
| 51 | So at that point I started feeling the technical debt that comes with Node.js | ||
| 52 | and the whole ecosystem. What nobody tells you is that **structuring large | ||
| 53 | Node.js apps** is more problematic than one would think. And going microservice | ||
| 54 | for every single thing is also a bad idea. The amount of networking you | ||
| 55 | introduce with that approach always ends up being a pain in the ass. And I don't | ||
| 56 | even want to go into system administration here. The overhead is | ||
| 57 | insane. Package-lock.json made many days feel like living hell for me. And I | ||
| 58 | would eat the cost of all this if it meant a better development | ||
| 59 | experience. Well, it didn't. | ||
| 60 | |||
| 61 | The **lack of TypeScript** support in the interpreter is still mind-boggling to | ||
| 62 | me. Why they haven't added native support for this yet is beyond me?! That would | ||
| 63 | have solved so many problems. The lack of type safety became a problem somewhere | ||
| 64 | in the middle of the project, where the codebase was sufficiently large to | ||
| 65 | present problems. We started adding arguments to functions and there was **no | ||
| 66 | way to explicitly declare argument types**. And because at that point there were | ||
| 67 | a lot of functions, it became impossible to know what each one accepts, and | ||
| 68 | development became more and more trial-and-error based. | ||
| 69 | |||
| 70 | I tried **implementing TypeScript**, but that would have meant a large refactor | ||
| 71 | that we were not willing to do at that point. The benefits were not enough. I | ||
| 72 | also tried [Flow - static type checker](https://flow.org/) but the implementation | ||
| 73 | was also horrible. What TypeScript and Flow force you to do is have a src folder, | ||
| 74 | **transpile** your code into a dist folder, and run that with node. WTH is that | ||
| 75 | all about? Why can't this be done in memory or in some virtual file system? Why? | ||
| 76 | I see no reason why this couldn't be done like that. But it is what it is. I | ||
| 77 | abandoned all hope for static type checking. | ||
| 78 | |||
| 79 | One of the problems that resulted from not having interfaces or types was the | ||
| 80 | inability to model out our data from **Elasticsearch**. I could have done a | ||
| 81 | **pedestrian implementation** of it, but there must be a better way of doing | ||
| 82 | this without resorting to some hack basically. Or maybe I haven't found a | ||
| 83 | solution, which is also a possibility. I have looked, though. No juice! | ||
| 84 | |||
| 85 | **Error handling?** Is that a joke? | ||
| 86 | |||
| 87 | Thank god for **await/async**. Without it, I would have probably just abandoned | ||
| 88 | the whole thing and went with something else like Python. That's all I am going | ||
| 89 | to say about this :) | ||
| 90 | |||
| 91 | I started asking myself whether Node.js is actually ready to be used in | ||
| 92 | **large-scale applications**. And this was totally the wrong question. What I | ||
| 93 | should have been asking myself was how to use Node.js in a large-scale | ||
| 94 | application. And you don't get this in the **marketing material** for Express or | ||
| 95 | Koa etc. They never tell you this. Making Node.js scale, in infrastructure or in | ||
| 96 | the codebase, is really **more of an art than a science**. And just like with the | ||
| 97 | whole JavaScript ecosystem: | ||
| 98 | |||
| 99 | - impossible to master, | ||
| 100 | - half of your time is spent working on your tooling, | ||
| 101 | - just accept transpilers that convert one kind of code into another (holy smokes), | ||
| 102 | - error handling is a joke, | ||
| 103 | - standards? What standards? | ||
| 104 | |||
| 105 | But on the other hand, as I did, you will also learn to love it. Learn to use it | ||
| 106 | quickly and do impossible things in crazy limited time. | ||
| 107 | |||
| 108 | I hate to admit it. But I love Node.js. Dammit, I love it :) | ||
| 109 | |||
| 110 | 2023 Update: I hate Node.js! | ||
diff --git a/content/posts/2020-05-05-remote-work.md b/content/posts/2020-05-05-remote-work.md deleted file mode 100644 index 90fca24..0000000 --- a/content/posts/2020-05-05-remote-work.md +++ /dev/null | |||
| @@ -1,71 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Remote work and how it affects the daily lives of people | ||
| 3 | url: remote-work.html | ||
| 4 | date: 2020-05-05T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | I have been working remotely for the past 5 years. I love it. Love the freedom | ||
| 9 | and the make-your-own-schedule thingy. | ||
| 10 | |||
| 11 | ## You work more not less | ||
| 12 | |||
| 13 | I've heard from people things like: "Oh, you are so lucky, working from home, | ||
| 14 | having all the free time you want". It was obvious they had no clue what working | ||
| 15 | remotely means. They had this romantic idea of remote work. You can watch TV | ||
| 16 | whenever you like, you can go outside for a picnic if you want and stuff like | ||
| 17 | that. | ||
| 18 | |||
| 19 | This may be true if you work a day or two a week from home. But if you go | ||
| 20 | completely remote, all of this changes completely. It takes some time to | ||
| 21 | acclimate, but then you start feeling the consequences of going fully remote. | ||
| 22 | And it's not all rainbows and unicorns. Rather the opposite. | ||
| 23 | |||
| 24 | ## Feeling lost | ||
| 25 | |||
| 26 | At first, I remember, I felt lost. I was not used to this kind of environment. | ||
| 27 | I felt disoriented, and the part of you that is used to procrastinating turns on. | ||
| 28 | You start thinking of a workday as a whole day. And soon this idea of "I can do | ||
| 29 | this later" starts creeping in. Well, I have the whole day ahead of me. I can do | ||
| 30 | this a bit later. | ||
| 31 | |||
| 32 | ## Hyper-performance | ||
| 33 | |||
| 34 | As a direct result, you become more focused on your work since you don't have | ||
| 35 | all the interruptions common in the workplace. And you can quickly get used to | ||
| 36 | this hyper-performance. But this mode also requires a lot of peace and quiet. | ||
| 37 | |||
| 38 | And here we come to the ugly parts of all this. **People rarely have the | ||
| 39 | self-control** to not waste other people's time. It is paralyzing when people | ||
| 40 | start calling you, sending you chat messages, etc. The thing is that when I | ||
| 41 | reach this hyper-performance mode I am completely absorbed in the problem I am | ||
| 42 | solving, and these kinds of interruptions mess with your head. I need at least an | ||
| 43 | hour to get back in the zone. Sometimes I don't achieve the same focus for the | ||
| 44 | whole day. | ||
| 45 | |||
| 46 | I know that life is not how you want it to be and takes its own route, but from | ||
| 47 | what I've learned these kinds of interruptions can easily be avoided in 90% of | ||
| 48 | cases just by closing any chat programs and putting your phone in a drawer. | ||
| 49 | |||
| 50 | ## Suggestion to all the new remote workers | ||
| 51 | |||
| 52 | - Stop wasting other people's time. You don't bother people at their desks in | ||
| 53 | the office either. | ||
| 54 | - Do not replace daily chats in the hallways with instant messaging software. | ||
| 55 | It will only interrupt people. Nothing good will come of it. | ||
| 56 | - Set your working hours and try to not allow it to bleed outside these | ||
| 57 | boundaries and maintain your routine. | ||
| 58 | - Be prepared that hours will be longer regardless of your good intentions and | ||
| 59 | your well thought of routine. | ||
| 60 | - Try to be hyper-focused and do only one thing at a time. Multitasking is the | ||
| 61 | enemy of progress. | ||
| 62 | - Avoid long meetings and if possible eliminate them. Rather take time to write | ||
| 63 | them out and allow others to respond in their own time. Meetings are usually a | ||
| 64 | large waste of time and most of the people attending them are there just | ||
| 65 | because the manager said so. | ||
| 66 | - The software will not solve your problems. And throwing money at problems | ||
| 67 | neither. | ||
| 68 | - If you are in a managerial position, don't supervise every single minute of | ||
| 69 | your workers' time. They are probably giving you more hours anyway. Track | ||
| 70 | progress weekly, not daily. You hired them, so give them the benefit of the | ||
| 71 | doubt that they will deliver what you agreed upon. | ||
diff --git a/content/posts/2020-08-15-systemd-disable-wake-onmouse.md b/content/posts/2020-08-15-systemd-disable-wake-onmouse.md deleted file mode 100644 index 55086b1..0000000 --- a/content/posts/2020-08-15-systemd-disable-wake-onmouse.md +++ /dev/null | |||
| @@ -1,72 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Disable mouse wake from suspend with systemd service | ||
| 3 | url: disable-mouse-wake-from-suspend-with-systemd-service.html | ||
| 4 | date: 2020-08-15T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | I recently bought a [ThinkPad | ||
| 9 | X220](https://www.laptopmag.com/reviews/laptops/lenovo-thinkpad-x220) just as a | ||
| 10 | joke on eBay, to test Linux distributions and play around with things without | ||
| 11 | destroying my main machine. Little did I know I would fall in love with it. Man, | ||
| 12 | they really made awesome machines back then. | ||
| 13 | |||
| 14 | After swapping the disk that came with it for an SSD and installing Ubuntu to | ||
| 15 | test if everything works, I noticed that after even a single touch of my external | ||
| 16 | mouse the system would wake up from sleep, even though the lid was shut. | ||
| 17 | |||
| 18 | I wouldn't have even noticed it if the laptop didn't have an [LED | ||
| 19 | sleep indicator](https://support.lenovo.com/lk/en/solutions/~/media/Images/ContentImages/p/pd025386_x1_status_03.ashx?w=426&h=262). | ||
| 20 | I already had a bad experience with Linux and its power management. I had a | ||
| 21 | [Dell Inspiron 7537](https://www.pcmag.com/reviews/dell-inspiron-15-7537) laptop | ||
| 22 | with a touchscreen, and while traveling it decided to wake up and started cooking | ||
| 23 | in my backpack, to the point that the digitizer responsible for touch actually | ||
| 24 | came unglued and the whole screen got wrecked. So, I am a bit touchy about this. | ||
| 25 | |||
| 26 | I went solution hunting, and to my surprise there is no easy way to stop | ||
| 27 | specific devices from waking the machine. Why this is not under the power | ||
| 28 | management tab in the settings is really strange to me. | ||
| 29 | |||
| 30 | After googling for a solution I found [this nice article describing the | ||
| 31 | solution](https://codetrips.com/2020/03/18/ubuntu-disable-mouse-wake-from-suspend/) | ||
| 32 | that worked for me. The only problem with this solution was that he added it to | ||
| 33 | `.bashrc`, and this triggers `sudo`, which asks for a password each time a new | ||
| 34 | terminal is opened. That gets annoying quickly, since I open a lot of terminals | ||
| 35 | all the time. | ||
| 36 | |||
| 37 | I followed his instructions and got to the solution `sudo sh -c "echo 'disabled' > | ||
| 38 | /sys/bus/usb/devices/2-1.1/power/wakeup"`. | ||
| 39 | |||
| 40 | I created a systemd service file with `sudo nano | ||
| 41 | /etc/systemd/system/disable-mouse-wakeup.service`, removed `sudo`, replaced `sh` | ||
| 42 | with `/usr/bin/sh`, and pasted it all into `ExecStart`. | ||
| 43 | |||
| 44 | ```ini | ||
| 45 | [Unit] | ||
| 46 | Description=Disables wakeup on mouse event | ||
| 47 | After=network.target | ||
| 48 | StartLimitIntervalSec=0 | ||
| 49 | |||
| 50 | [Service] | ||
| 51 | Type=simple | ||
| 52 | Restart=always | ||
| 53 | RestartSec=1 | ||
| 54 | User=root | ||
| 55 | ExecStart=/usr/bin/sh -c "echo 'disabled' > /sys/bus/usb/devices/2-1.1/power/wakeup" | ||
| 56 | |||
| 57 | [Install] | ||
| 58 | WantedBy=multi-user.target | ||
| 59 | ``` | ||
| 60 | |||
| 61 | After that I enabled, started, and checked the status of the service. | ||
| 62 | |||
| 63 | ```sh | ||
| 64 | sudo systemctl enable disable-mouse-wakeup.service | ||
| 65 | sudo systemctl start disable-mouse-wakeup.service | ||
| 66 | sudo systemctl status disable-mouse-wakeup.service | ||
| 67 | ``` | ||
| 68 | |||
| 69 | This will permanently stop that device from waking up your computer, since the | ||
| 70 | service applies the setting on every boot. If you have many devices you would | ||
| 71 | like to suppress from waking up your machine, I would create a shell script and | ||
| 72 | call that from the service file instead, along the lines of the sketch below. | ||
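| | |||
| | A minimal sketch of such a script (the device IDs here are only examples; check | ||
| | `/sys/bus/usb/devices/` for the ones that match your hardware): | ||
| | |||
| | ```sh | ||
| | #!/bin/sh | ||
| | # disable-wakeup.sh - stop the listed USB devices from waking the machine | ||
| | # NOTE: the device IDs below are examples; adjust them to your own hardware | ||
| | for dev in 2-1.1 2-1.2; do | ||
| |     echo 'disabled' > "/sys/bus/usb/devices/$dev/power/wakeup" | ||
| | done | ||
| | ``` | ||
| | |||
| | The `ExecStart` line in the service would then just call this script with | ||
| | `/usr/bin/sh`. | ||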
diff --git a/content/posts/2020-09-06-esp-and-micropython.md b/content/posts/2020-09-06-esp-and-micropython.md deleted file mode 100644 index 91a04ad..0000000 --- a/content/posts/2020-09-06-esp-and-micropython.md +++ /dev/null | |||
| @@ -1,225 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Getting started with MicroPython and ESP8266 | ||
| 3 | url: esp8266-and-micropython-guide.html | ||
| 4 | date: 2020-09-06T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## Introduction | ||
| 9 | |||
| 10 | A while ago I bought some | ||
| 11 | [ESP8266](https://www.espressif.com/en/products/socs/esp8266) and | ||
| 12 | [ESP32](https://www.espressif.com/en/products/socs/esp32) dev boards to play | ||
| 13 | around with and I finally found a project to try it out. | ||
| 14 | |||
| 15 | For my project, I used [ESP32](https://www.espressif.com/en/products/socs/esp32) | ||
| 16 | but I could easily choose | ||
| 17 | [ESP8266](https://www.espressif.com/en/products/socs/esp8266). This guide | ||
| 18 | contains which tools I use and how I prepared my workspace to code for | ||
| 19 | [ESP8266](https://www.espressif.com/en/products/socs/esp8266). | ||
| 20 | |||
| 21 |  | ||
| 22 | |||
| 23 | This guide covers: | ||
| 24 | |||
| 25 | - flashing SOC | ||
| 26 | - install proper tooling | ||
| 27 | - deploying a simple script | ||
| 28 | |||
| 29 | > Make sure that you are using **a good USB cable**. I had some problems with | ||
| 30 | > mine and once I replaced it everything started to work. | ||
| 31 | |||
| 32 | ## Flashing the SOC | ||
| 33 | |||
| 34 | Plug your ESP8266 into a USB port and check if the device was recognized by | ||
| 35 | executing `dmesg | grep ch341-uart`. | ||
| 36 | |||
| 37 | Then check if the device is available under `/dev/` by running `ls | ||
| 38 | /dev/ttyUSB*`. | ||
| 39 | |||
| 40 | > **Linux users**: if a device is not available be sure you are in `dialout` | ||
| 41 | > group. You can check this by executing `groups $USER`. You can add a user to | ||
| 42 | > `dialout` group with `sudo adduser $USER dialout`. | ||
| 43 | |||
| 44 | After these conditions are met, navigate to | ||
| 45 | [https://micropython.org/download/esp8266/](https://micropython.org/download/esp8266/) | ||
| 46 | and download `esp8266-20200902-v1.13.bin`. | ||
| 47 | |||
| 48 | ```sh | ||
| 49 | mkdir esp8266-test | ||
| 50 | cd esp8266-test | ||
| 51 | |||
| 52 | wget https://micropython.org/resources/firmware/esp8266-20200902-v1.13.bin | ||
| 53 | ``` | ||
| 54 | |||
| 55 | After obtaining firmware we will need some tooling to flash the firmware to the | ||
| 56 | board. | ||
| 57 | |||
| 58 | ```sh | ||
| 59 | sudo pip3 install esptool | ||
| 60 | ``` | ||
| 61 | |||
| 62 | You can read more about `esptool` at | ||
| 63 | [https://github.com/espressif/esptool/](https://github.com/espressif/esptool/). | ||
| 64 | |||
| 65 | Before flashing the firmware we need to erase the flash on the device. Substitute | ||
| 66 | `USB0` with the device listed in the output of `ls /dev/ttyUSB*`. | ||
| 67 | |||
| 68 | ```sh | ||
| 69 | esptool.py --port /dev/ttyUSB0 erase_flash | ||
| 70 | ``` | ||
| 71 | |||
| 72 | If the flash was successfully erased, it is now time to flash the new firmware to it. | ||
| 73 | |||
| 74 | ```sh | ||
| 75 | esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash --flash_size=detect 0 esp8266-20200902-v1.13.bin | ||
| 76 | ``` | ||
| 77 | |||
| 78 | If everything went ok you can try accessing the MicroPython REPL with `screen | ||
| 79 | /dev/ttyUSB0 115200` or `picocom /dev/ttyUSB0 -b115200`. | ||
| 80 | |||
| 81 | > Sometimes you will need to press `ENTER` in `screen` or `picocom` to access | ||
| 82 | > REPL. | ||
| 83 | |||
| 84 | When you are in the REPL you can test if all is working properly with the following steps. | ||
| 85 | |||
| 86 | ```py | ||
| 87 | > import machine | ||
| 88 | > machine.freq() | ||
| 89 | ``` | ||
| 90 | |||
| 91 | This should output a number representing the frequency of the CPU (mine was | ||
| 92 | `80000000`). | ||
| 93 | |||
| 94 | When you are in `screen` or `picocom`, these shortcuts can help you a bit. | ||
| 95 | |||
| 96 | | Key | Command | | ||
| 97 | | -------- | -------------------- | | ||
| 98 | | CTRL+d | performs soft reboot | | ||
| 99 | | CTRL+a x | exits picocom | | ||
| 100 | | CTRL+a \ | exits screen | | ||
| 101 | |||
| 102 | |||
| 103 | ## Install better tooling | ||
| 104 | |||
| 105 | Now, to make our lives a little bit easier, there are a couple of additional tools | ||
| 106 | that will make this whole experience a little more bearable. | ||
| 107 | |||
| 108 | There are two cool ways of uploading local files to the SOC flash. | ||
| 109 | |||
| 110 | - ampy → [https://github.com/scientifichackers/ampy](https://github.com/scientifichackers/ampy) | ||
| 111 | - rshell → [https://github.com/dhylands/rshell](https://github.com/dhylands/rshell) | ||
| 112 | |||
| 113 | ### ampy | ||
| 114 | |||
| 115 | ```bash | ||
| 116 | # installing ampy | ||
| 117 | sudo pip3 install adafruit-ampy | ||
| 118 | ``` | ||
| 119 | |||
| 120 | Listed below are some common commands I used. | ||
| 121 | |||
| 122 | ```bash | ||
| 123 | |||
| 124 | # uploads file to flash | ||
| 125 | ampy --delay 2 --port /dev/ttyUSB0 put boot.py | ||
| 126 | |||
| 127 | # lists file on flash | ||
| 128 | ampy --delay 2 --port /dev/ttyUSB0 ls | ||
| 129 | |||
| 130 | # outputs contents of file on flash | ||
| 131 | ampy --delay 2 --port /dev/ttyUSB0 cat boot.py | ||
| 132 | ``` | ||
| 133 | |||
| 134 | > I added a `delay` of 2 seconds because I had problems with executing commands. | ||
| 135 | |||
| 136 | ### rshell | ||
| 137 | |||
| 138 | Even though `ampy` is a cool tool, I opted for `rshell` in the end since it's | ||
| 139 | much more polished and feature-rich. | ||
| 140 | |||
| 141 | ```bash | ||
| 142 | # installing rshell | ||
| 143 | sudo pip3 install rshell | ||
| 144 | ``` | ||
| 145 | |||
| 146 | Now that `rshell` is installed we can connect to the board. | ||
| 147 | |||
| 148 | ```bash | ||
| 149 | rshell --buffer-size=30 -p /dev/ttyUSB0 -a | ||
| 150 | ``` | ||
| 151 | |||
| 152 | This will open a shell inside bash and from here you can execute multiple | ||
| 153 | commands. You can check what is supported with `help` once you are inside of a | ||
| 154 | shell. | ||
| 155 | |||
| 156 | ```bash | ||
| 157 | m@turing ~/Junk/esp8266-test | ||
| 158 | $ rshell --buffer-size=30 -p /dev/ttyUSB0 -a | ||
| 159 | |||
| 160 | Using buffer-size of 30 | ||
| 161 | Connecting to /dev/ttyUSB0 (buffer-size 30)... | ||
| 162 | Trying to connect to REPL connected | ||
| 163 | Testing if ubinascii.unhexlify exists ... Y | ||
| 164 | Retrieving root directories ... /boot.py/ | ||
| 165 | Setting time ... Sep 06, 2020 23:54:28 | ||
| 166 | Evaluating board_name ... pyboard | ||
| 167 | Retrieving time epoch ... Jan 01, 2000 | ||
| 168 | Welcome to rshell. Use Control-D (or the exit command) to exit rshell. | ||
| 169 | /home/m/Junk/esp8266-test> help | ||
| 170 | |||
| 171 | Documented commands (type help <topic>): | ||
| 172 | ======================================== | ||
| 173 | args cat connect date edit filesize help mkdir rm shell | ||
| 174 | boards cd cp echo exit filetype ls repl rsync | ||
| 175 | |||
| 176 | Use Control-D (or the exit command) to exit rshell. | ||
| 177 | ``` | ||
| 178 | |||
| 179 | > Inside the shell, `ls` will display the list of files on your local machine. | ||
| 180 | > The board's flash is remapped to the `/pyboard` folder inside the shell, so to | ||
| 181 | > list files on flash you must run `ls /pyboard`. | ||
| 182 | |||
| 183 | #### Moving files to flash | ||
| 184 | |||
| 185 | To avoid copying files one by one I used the `rsync` command from inside of | ||
| 186 | `rshell`. | ||
| 187 | |||
| 188 | ```bash | ||
| 189 | rsync . /pyboard | ||
| 190 | ``` | ||
| 191 | |||
| 192 | #### Executing scripts | ||
| 193 | |||
| 194 | It is a pain to continuously reboot the device to trigger `/pyboard/boot.py`, so | ||
| 195 | there is a better way of testing local scripts on the remote device. | ||
| 196 | |||
| 197 | Let's assume we have a `src/freq.py` file that displays the CPU frequency of the remote | ||
| 198 | device. | ||
| 199 | |||
| 200 | ```py | ||
| 201 | # src/freq.py | ||
| 202 | |||
| 203 | import machine | ||
| 204 | print(machine.freq()) | ||
| 205 | ``` | ||
| 206 | |||
| 207 | Now let's upload it and execute it. | ||
| 208 | |||
| 209 | ```bash | ||
| 210 | # syncs files to the remote device | ||
| 211 | rsync ./src /pyboard | ||
| 212 | |||
| 213 | # goes into REPL | ||
| 214 | repl | ||
| 215 | |||
| 216 | # importing the file without the .py extension will run the script | ||
| 217 | > import freq | ||
| 218 | |||
| 219 | # CTRL+x will exit REPL | ||
| 220 | ``` | ||
| 221 | |||
| 222 | ## Additional resources | ||
| 223 | |||
| 224 | - https://randomnerdtutorials.com/getting-started-micropython-esp32-esp8266/ | ||
| 225 | - http://docs.micropython.org/en/latest/esp8266/quickref.html | ||
diff --git a/content/posts/2020-09-08-bind-warning-on-login.md b/content/posts/2020-09-08-bind-warning-on-login.md deleted file mode 100644 index 113c67b..0000000 --- a/content/posts/2020-09-08-bind-warning-on-login.md +++ /dev/null | |||
| @@ -1,53 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Fix bind warning in .profile on login in Ubuntu | ||
| 3 | url: bind-warning-on-login-in-ubuntu.html | ||
| 4 | date: 2020-09-08T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | Recently I moved back to [bash](https://www.gnu.org/software/bash/) as my | ||
| 9 | default shell. I was previously using [fish](https://fishshell.com/) and got | ||
| 10 | used to the cool features it has. But, regardless of that, I wanted to move to a | ||
| 11 | more standard shell because hopping back and forth between shells when exporting | ||
| 12 | variables and stuff like that got pretty annoying. | ||
| 13 | |||
| 14 | So I embarked on a mission to make [bash](https://www.gnu.org/software/bash/) | ||
| 15 | more like [fish](https://fishshell.com/) and in the process found that I really | ||
| 16 | missed TAB autosuggestions when changing directories. | ||
| 17 | |||
| 18 | I found a nice alternative that emulates [zsh](http://zsh.sourceforge.net/)-like | ||
| 19 | autosuggestion and autocompletion, so I added the following to my `.bashrc` file. | ||
| 20 | |||
| 21 | ```bash | ||
| 22 | bind "TAB:menu-complete" | ||
| 23 | bind "set show-all-if-ambiguous on" | ||
| 24 | bind "set completion-ignore-case on" | ||
| 25 | bind "set menu-complete-display-prefix on" | ||
| 26 | bind '"\e[Z":menu-complete-backward' | ||
| 27 | ``` | ||
| 28 | |||
| 29 | I hadn't noticed anything wrong with this and all was working fine until I | ||
| 30 | restarted my machine, when I got this error. | ||
| 31 | |||
| 32 |  | ||
| 33 | |||
| 34 | When I pressed OK, I got into the [Gnome | ||
| 35 | shell](https://wiki.gnome.org/Projects/GnomeShell) and all was working fine, but | ||
| 36 | the error was still bugging me. I started looking for the reason why this was | ||
| 37 | happening and found a solution to this error on [Remote SSH Commands - bash bind | ||
| 38 | warning: line editing not enabled](https://superuser.com/a/892682). | ||
| 39 | |||
| 40 | So I added a simple `if [ -t 1 ]` around `bind` statements to avoid running | ||
| 41 | commands that presume the session is interactive when it isn't. | ||
| 42 | |||
| 43 | ```bash | ||
| 44 | if [ -t 1 ]; then | ||
| 45 | bind "TAB:menu-complete" | ||
| 46 | bind "set show-all-if-ambiguous on" | ||
| 47 | bind "set completion-ignore-case on" | ||
| 48 | bind "set menu-complete-display-prefix on" | ||
| 49 | bind '"\e[Z":menu-complete-backward' | ||
| 50 | fi | ||
| 51 | ``` | ||
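| | |||
| | Another common way to guard this, shown here only as a sketch and not what I | ||
| | ended up using, is to check the shell flags in `$-` for `i`, which marks an | ||
| | interactive shell: | ||
| | |||
| | ```bash | ||
| | # alternative sketch: only run bind when the shell is interactive | ||
| | case $- in | ||
| |     *i*) | ||
| |         bind "TAB:menu-complete" | ||
| |         ;; | ||
| | esac | ||
| | ``` | ||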
| 52 | |||
| 53 | After logging out and back in the problem was gone. | ||
diff --git a/content/posts/2020-09-09-digitalocean-sync.md b/content/posts/2020-09-09-digitalocean-sync.md deleted file mode 100644 index aa3cce4..0000000 --- a/content/posts/2020-09-09-digitalocean-sync.md +++ /dev/null | |||
| @@ -1,111 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Using Digitalocean Spaces to sync between computers | ||
| 3 | url: digitalocean-spaces-to-sync-between-computers.html | ||
| 4 | date: 2020-09-09T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | I've been using [Dropbox](https://www.dropbox.com/) for probably **10+ years** | ||
| 9 | now and I've become so used to it running in the background that I can't even | ||
| 10 | imagine a world without it. But it's not without problems. | ||
| 11 | |||
| 12 | At first I had problems with `.venv` environments for Python, and the only way | ||
| 13 | to exclude this folder from synchronization was to manually exclude each | ||
| 14 | specific folder, which is not really scalable. FYI, my whole projects folder is | ||
| 15 | synced on [Dropbox](https://www.dropbox.com/). This of course introduced a lot | ||
| 16 | of syncing of files and folders that are not needed or even break things on | ||
| 17 | other machines. In the case of **Python**, I couldn't use a synced environment | ||
| 18 | on my second machine. I needed to delete the `.venv` folder and pip install it | ||
| 19 | again, which synced the files back to the main machine. This was very | ||
| 20 | frustrating. **Nodejs** handles this much more nicely and I can just run the | ||
| 21 | scripts without deleting `node_modules` and reinstalling. However, | ||
| 22 | `node_modules` is a beast of its own. It creates so many files that the OS has | ||
| 23 | a problem counting them when you check the folder contents for size. | ||
| 24 | |||
| 25 | I wanted something similar to Dropbox. I could do without the instant syncing, | ||
| 26 | but it would need to be fast and have an option for me to exclude folders like | ||
| 27 | `node_modules`, `.venv`, `.git` and the like. | ||
| 28 | |||
| 29 | I went on a hunt for an alternative to [Dropbox](https://www.dropbox.com/) | ||
| 30 | and found: | ||
| 31 | |||
| 32 | - [Tresorit](https://tresorit.com/) | ||
| 33 | - [Sync.com](https://sync.com) | ||
| 34 | - [Box](https://www.box.com/) | ||
| 35 | |||
| 36 | You know, the usual list of suspects. I didn't include [Google | ||
| 37 | drive](https://drive.google.com) or [One drive](https://onedrive.live.com/) | ||
| 38 | since they are even more draconian than Dropbox. | ||
| 39 | |||
| 40 | > All this does not stem from me being paranoid, but recently these companies | ||
| 41 | > have become more and more aggressive and they keep violating our privacy by | ||
| 42 | > sharing our data with 3rd party services. It is getting out of control. | ||
| 43 | |||
| 44 | So, my main problem was still there. No way of excluding a specific folder from | ||
| 45 | syncing. And before we go into "*But you have git, isn't that enough?*", I must | ||
| 46 | say that many of the files (PDFs, spreadsheets, etc.) I have in a `git` repo | ||
| 47 | don't get pushed upstream to Git and I still want to have them synced across my | ||
| 48 | computers. | ||
| 49 | |||
| 50 | I initially wanted to use [rsync](https://linux.die.net/man/1/rsync) but I would | ||
| 51 | need to then have a remote VPS or transfer between my computers directly. I | ||
| 52 | wanted a solution where all my files could be accessible to me without my | ||
| 53 | machine. | ||
| 54 | |||
| 55 | > **WARNING: This solution will cost you money!** DigitalOcean Spaces are $5 per | ||
| 56 | > month and there are some bandwidth limitations, and if you go beyond that you get | ||
| 57 | > billed additionally. | ||
| 58 | |||
| 59 | Then I remembered that I could use something like | ||
| 60 | [S3](https://en.wikipedia.org/wiki/Amazon_S3) since it has versioning and is | ||
| 61 | fully managed. I didn't want to go down the AWS rabbit hole with this so I | ||
| 62 | chose [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/). | ||
| 63 | |||
| 64 | Then I needed a command-line tool to sync between source and target. I found | ||
| 65 | this nice tool [s3cmd](https://s3tools.org/s3cmd) and it is in the Ubuntu | ||
| 66 | repositories. | ||
| 67 | |||
| 68 | ```bash | ||
| 69 | sudo apt install s3cmd | ||
| 70 | ``` | ||
| 71 | |||
| 72 | After installation I created a new Spaces bucket on DigitalOcean. Remember | ||
| 73 | the region you choose because you will need it when you configure | ||
| 74 | `s3cmd`. | ||
| 75 | |||
| 76 | Then I visited [Digitalocean Applications & | ||
| 77 | API](https://cloud.digitalocean.com/account/api/tokens) and generated **Spaces | ||
| 78 | access keys**. Save both the key and the secret somewhere safe, because once you | ||
| 79 | leave the page the secret will no longer be available to you and you will need to | ||
| 80 | re-generate it. | ||
| 81 | |||
| 82 | ```bash | ||
| 83 | # enter your key and secret and correct endpoint | ||
| 84 | # my endpoint is ams3.digitaloceanspaces.com because | ||
| 85 | # I created my bucket in the Amsterdam region | ||
| 86 | s3cmd --configure | ||
| 87 | ``` | ||
| 88 | |||
| 89 | After that I played around with options for `s3cmd` and got to the following | ||
| 90 | command. | ||
| 91 | |||
| 92 | ```bash | ||
| 93 | # I executed this command from my projects folder | ||
| 94 | cd projects | ||
| 95 | s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://my-bucket-name/projects/ | ||
| 96 | ``` | ||
| 97 | |||
| 98 | When syncing in the other direction you will need to change the order of the | ||
| 99 | `SOURCE` and `TARGET` to `s3://my-bucket-name/projects/` and `./`. | ||
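| | |||
| | For example, pulling from the bucket down to the local projects folder looks | ||
| | something like this (just a sketch, using the same example bucket name as | ||
| | above): | ||
| | |||
| | ```bash | ||
| | # pull changes from the Spaces bucket to the local projects folder | ||
| | # note: --delete-removed will also delete local files that no longer exist remotely | ||
| | s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' s3://my-bucket-name/projects/ ./ | ||
| | ``` | ||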
| 100 | |||
| 101 | > Be sure that all the paths have a trailing slash so that sync knows that these | ||
| 102 | > are directories. | ||
| 103 | |||
| 104 | I am planning to implement some sort of a `.ignore` file that will enable me to | ||
| 105 | have project-specific exclude options. | ||
| 106 | |||
| 107 | I am currently running this every hour as a cronjob, which is perfectly fine for | ||
| 108 | now while I am testing how this whole thing works and how it will all turn out. | ||
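| | |||
| | For reference, the hourly cronjob looks roughly like this; the wrapper script | ||
| | path is only an assumption, adjust it to wherever you keep the sync command. | ||
| | |||
| | ```bash | ||
| | # crontab -e | ||
| | # run the Spaces sync at the start of every hour (hypothetical script path) | ||
| | 0 * * * * sh /home/user/bin/projects-sync.sh | ||
| | ``` | ||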
| 109 | |||
| 110 | I have also created a small Gnome extension which is still very unstable, but | ||
| 111 | when/if this whole experiment pays off I will share it on GitHub. | ||
diff --git a/content/posts/2021-01-24-replacing-dropbox-with-s3.md b/content/posts/2021-01-24-replacing-dropbox-with-s3.md deleted file mode 100644 index 4c6b33e..0000000 --- a/content/posts/2021-01-24-replacing-dropbox-with-s3.md +++ /dev/null | |||
| @@ -1,113 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Replacing Dropbox in favor of DigitalOcean spaces | ||
| 3 | url: replacing-dropbox-in-favor-of-digitalocean-spaces.html | ||
| 4 | date: 2021-01-24T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | A few months ago I experimented with DigitalOcean spaces as my backup solution | ||
| 9 | that could [replace Dropbox | ||
| 10 | eventually](/digitalocean-spaces-to-sync-between-computers.html). That solution | ||
| 11 | worked quite nicely, and I was amazed how smashing together a couple of existing | ||
| 12 | solutions would work this fine. | ||
| 13 | |||
| 14 | I have been running that solution in the background for a couple of months now | ||
| 15 | and kind of forgot about it. But recent developments around deplatforming and | ||
| 16 | us being held hostage by technology and big companies sped up my goal to | ||
| 17 | become less dependent on | ||
| 18 | [Google](https://edition.cnn.com/2020/12/17/tech/google-antitrust-lawsuit/index.html), | ||
| 19 | [Dropbox](https://www.pcworld.com/article/2048680/dropbox-takes-a-peek-at-files.html) | ||
| 20 | etc., and take back some control. | ||
| 21 | |||
| 22 | I am not a conspiracy theory nut, but to be honest, what these companies are | ||
| 23 | doing lately is out of control. It is a matter of principle at this point. I | ||
| 24 | have almost completely degoogled my life all the way from ditching Gmail, | ||
| 25 | YouTube and most of the services surrounding Google. And I must tell you, I feel | ||
| 26 | so good. I haven't felt this way for a long time. | ||
| 27 | |||
| 28 | **Anyways. Let's get to the meat of things.** | ||
| 29 | |||
| 30 | Before you continue you should read my post about [syncing to | ||
| 31 | Dropbox](/digitalocean-spaces-to-sync-between-computers.html). | ||
| 32 | |||
| 33 | > Also to note, I am using Linux on my machine with Gnome desktop environment. | ||
| 34 | > This should work on MacOS too. To use this on Windows I suggest using | ||
| 35 | > [Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10) | ||
| 36 | > or [Cygwin](https://www.cygwin.com/). | ||
| 37 | |||
| 38 | ## Folder structure | ||
| 39 | |||
| 40 | I liked the structure of Dropbox: one folder where everything is located and | ||
| 41 | synced. So that's why I adopted it for my sync setup as well. | ||
| 42 | |||
| 43 | ``` | ||
| 44 | ~/Vault | ||
| 45 | ↳ backup | ||
| 46 | ↳ bin | ||
| 47 | ↳ documents | ||
| 48 | ↳ projects | ||
| 49 | ``` | ||
| 50 | |||
| 51 | All of my code is located in the `~/Vault/projects` folder, and most of the projects | ||
| 52 | are Git repositories. I do not use this sync method for backup per se, but in | ||
| 53 | case I reinstall my machine I can easily recreate all the important folder | ||
| 54 | structure with one quick command. No external drives needed that can fail, etc. | ||
| 55 | |||
| 56 | ## Sync script | ||
| 57 | |||
| 58 | My sync script is located in `~/Vault/bin/vault-backup.sh` | ||
| 59 | |||
| 60 | ```bash | ||
| 61 | #!/bin/bash | ||
| 62 | |||
| 63 | # dconf load /com/gexperts/Tilix/ < tilix.dconf | ||
| 64 | # 0 2 * * * sh ~/Vault/bin/vault-backup.sh | ||
| 65 | |||
| 66 | cd ~/Vault/backup/dotfiles | ||
| 67 | |||
| 68 | MACHINE=$(whoami)@$(hostname) | ||
| 69 | mkdir -p $MACHINE | ||
| 70 | cd $MACHINE | ||
| 71 | |||
| 72 | cp ~/.config/VSCodium/User/settings.json settings.json | ||
| 73 | cp ~/.s3cfg s3cfg | ||
| 74 | cp ~/.bash_extended bash_extended | ||
| 75 | cp ~/.ssh ssh -rf | ||
| 76 | |||
| 77 | codium --list-extensions > vscode-extension.txt | ||
| 78 | dconf dump /com/gexperts/Tilix/ > tilix.dconf | ||
| 79 | |||
| 80 | cd ~/Vault | ||
| 81 | s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://bucket-name/backup/ | ||
| 82 | |||
| 83 | echo `date +"%D %T"` >> ~/.vault.log | ||
| 84 | |||
| 85 | notify-send \ | ||
| 86 | -u normal \ | ||
| 87 | -i /usr/share/icons/Adwaita/96x96/status/security-medium-symbolic.symbolic.png \ | ||
| 88 | "Vault sync succeded at `date +"%D %T"`" | ||
| 89 | ``` | ||
| 90 | |||
| 91 | This script also backs up some of the dotfiles I use and sends a notification to | ||
| 92 | the Gnome notification center. It is a straightforward solution. Nothing special | ||
| 93 | going on. | ||
| 94 | |||
| 95 | > One obvious benefit of this is that I can omit syncing Node's `node_modules` | ||
| 96 | > or Python's `.venv` and `.git` folders. | ||
| 97 | |||
| 98 | You can use this script in a combination with [Cron](https://en.wikipedia.org/wiki/Cron). | ||
| 99 | |||
| 100 | ``` | ||
| 101 | 0 2 * * * sh ~/Vault/bin/vault-backup.sh | ||
| 102 | ``` | ||
| 103 | |||
| 104 | When you start syncing your local stuff with a remote server you can review your | ||
| 105 | items on DigitalOcean. | ||
| 106 | |||
| 107 |  | ||
| 108 | |||
| 109 | I have been using this script now for quite some time, and it's working | ||
| 110 | flawlessly. I also uninstalled Dropbox and stopped using it completely. | ||
| 111 | |||
| 112 | All I need to do is write a Bash script that does the reverse and downloads from | ||
| 113 | the remote server to the local folder. This could be another post. | ||
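| | |||
| | A rough sketch of what that restore script might look like; this is only an | ||
| | outline I haven't battle-tested, with the same example bucket name as above. | ||
| | |||
| | ```bash | ||
| | #!/bin/bash | ||
| | # restore ~/Vault from the Spaces bucket (sketch, adjust bucket name and paths) | ||
| | mkdir -p ~/Vault | ||
| | cd ~/Vault | ||
| | s3cmd sync s3://bucket-name/backup/ ./ | ||
| | ``` | ||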
diff --git a/content/posts/2021-01-25-goaccess.md b/content/posts/2021-01-25-goaccess.md deleted file mode 100644 index 1b6a330..0000000 --- a/content/posts/2021-01-25-goaccess.md +++ /dev/null | |||
| @@ -1,202 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Using GoAccess with Nginx to replace Google Analytics | ||
| 3 | url: using-goaccess-with-nginx-to-replace-google-analytics.html | ||
| 4 | date: 2021-01-25T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## Introduction | ||
| 9 | |||
| 10 | I know! You cannot simply replace Google Analytics with parsing access logs and | ||
| 11 | displaying a couple of charts. But to be honest, I actually never used Google | ||
| 12 | Analytics to the fullest extent and was usually interested in seeing page hits | ||
| 13 | and which pages were visited most often. | ||
| 14 | |||
| 15 | I recently moved my blog from Firebase to a VPS and also decided to remove the | ||
| 16 | Google Analytics tracking code from the site, since it's quite malicious, | ||
| 17 | tracks users across other pages, and builds a profile of the user, and | ||
| 18 | I've had it. But I also need some insight into what is happening on the server and | ||
| 19 | which content is being read the most, etc. | ||
| 20 | |||
| 21 | I have looked at many existing solutions like: | ||
| 22 | |||
| 23 | - [Umami](https://umami.is/) | ||
| 24 | - [Freshlytics](https://github.com/sheshbabu/freshlytics) | ||
| 25 | - [Matomo](https://matomo.org/) | ||
| 26 | |||
| 27 | But the more I looked at them, the more I noticed that I was replacing one evil | ||
| 28 | with another. Don't get me wrong. Some of these solutions are absolutely | ||
| 29 | fantastic, but they would require installing a database and something like PHP or | ||
| 30 | Node. And I was not ready to put those things on my fresh server. Also, having | ||
| 31 | Docker installed is out of the question. | ||
| 32 | |||
| 33 | ## Opting for log parsing | ||
| 34 | |||
| 35 | So, I defaulted to parsing already existing logs and generating HTML reports | ||
| 36 | from this data. | ||
| 37 | |||
| 38 | I found this amazing piece of software, [GoAccess](https://goaccess.io/), which provides | ||
| 39 | all the functionality I need, and it's a single binary, written in C. | ||
| 40 | |||
| 41 | GoAccess can be used in two different modes. | ||
| 42 | |||
| 43 |  | ||
| 44 | <center><i>Running in a terminal</i></center> | ||
| 45 | |||
| 46 |  | ||
| 47 | <center><i>Running in a browser</i></center> | ||
| 48 | |||
| 49 | I, however, need this to run in a browser. So, the second option is the way to | ||
| 50 | go. The idea is to periodically run a cronjob and export this report into a folder | ||
| 51 | that then gets served by Nginx behind Basic authentication. | ||
| 52 | |||
| 53 | ## Getting Nginx ready | ||
| 54 | |||
| 55 | I chose Ubuntu on [DigitalOcean](https://www.digitalocean.com/). First I | ||
| 56 | installed [Nginx](https://nginx.org/en/), and | ||
| 57 | [Letsencrypt](https://letsencrypt.org/getting-started/) certbot and all the | ||
| 58 | necessary dependencies. | ||
| 59 | |||
| 60 | ```sh | ||
| 61 | # log in as root user | ||
| 62 | sudo su - | ||
| 63 | |||
| 64 | # first let's update the system | ||
| 65 | apt update && apt upgrade -y | ||
| 66 | |||
| 67 | # let's install | ||
| 68 | apt install nginx certbot python3-certbot-nginx apache2-utils | ||
| 69 | ``` | ||
| 70 | |||
| 71 | After all this is installed, we can create a new configuration for the statistics | ||
| 72 | site. Stats will be available at `stats.domain.com`. | ||
| 73 | |||
| 74 | ```sh | ||
| 75 | # creates directory where html will be hosted | ||
| 76 | mkdir -p /var/www/html/stats.domain.com | ||
| 77 | |||
| 78 | cp /etc/nginx/sites-available/default /etc/nginx/sites-available/stats.domain.com | ||
| 79 | nano /etc/nginx/sites-available/stats.domain.com | ||
| 80 | ``` | ||
| 81 | |||
| 82 | ```nginx | ||
| 83 | server { | ||
| 84 | root /var/www/html/stats.domain.com; | ||
| 85 | server_name stats.domain.com; | ||
| 86 | |||
| 87 | index index.html; | ||
| 88 | location / { | ||
| 89 | try_files $uri $uri/ =404; | ||
| 90 | } | ||
| 91 | } | ||
| 92 | ``` | ||
| 93 | |||
| 94 | Now we check if the configuration is OK. We can do this with `nginx -t`. If all | ||
| 95 | is OK, we can restart Nginx with `service nginx restart`. | ||
| 96 | |||
| 97 | After all that, you should add an A record for this domain that points to the IP | ||
| 98 | of the droplet. | ||
| 99 | |||
| 100 | Before enabling SSL you should test if DNS records have propagated with `curl | ||
| 101 | stats.domain.com`. | ||
| 102 | |||
| 103 | Now, it's time to provision a TLS certificate. To achieve this, you execute the | ||
| 104 | command `certbot --nginx`. Follow the wizard, and when you are asked about | ||
| 105 | redirection, choose 2 (always redirect to HTTPS). | ||
| 106 | |||
| 107 | When this is done you can visit https://stats.domain.com and you should get a 404 | ||
| 108 | Not Found error, which is correct. | ||
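| | |||
| | You can also check this from the terminal; just a sketch, where `-I` only | ||
| | fetches the response headers. | ||
| | |||
| | ```sh | ||
| | # should report a 404 status until the report is generated | ||
| | curl -I https://stats.domain.com | ||
| | ``` | ||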
| 109 | |||
| 110 | ## Getting GoAccess ready | ||
| 111 | |||
| 112 | If you are using a Debian-like system, GoAccess should be available in the | ||
| 113 | repository. Otherwise refer to the official website. | ||
| 114 | |||
| 115 | ```sh | ||
| 116 | apt install goaccess | ||
| 117 | ``` | ||
| 118 | |||
| 119 | To enable geolocation we also need one additional thing. | ||
| 120 | |||
| 121 | ```sh | ||
| 122 | cd /var/www/html/stats.domain.com | ||
| 123 | wget https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb | ||
| 124 | ``` | ||
| 125 | |||
| 126 | Now we create a shell script that will be executed every 10 minutes. | ||
| 127 | |||
| 128 | ```sh | ||
| 129 | nano /var/www/html/stats.domain.com/generate-stats.sh | ||
| 130 | ``` | ||
| 131 | |||
| 132 | Contents of this file should look like this. | ||
| 133 | |||
| 134 | ```sh | ||
| 135 | #!/bin/sh | ||
| 136 | |||
| 137 | zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log | ||
| 138 | |||
| 139 | goaccess \ | ||
| 140 | --log-file=/var/log/nginx/access-all.log \ | ||
| 141 | --log-format=COMBINED \ | ||
| 142 | --exclude-ip=0.0.0.0 \ | ||
| 143 | --geoip-database=/var/www/html/stats.domain.com/GeoLite2-City.mmdb \ | ||
| 144 | --ignore-crawlers \ | ||
| 145 | --real-os \ | ||
| 146 | --output=/var/www/html/stats.domain.com/index.html | ||
| 147 | |||
| 148 | rm /var/log/nginx/access-all.log | ||
| 149 | ``` | ||
| 150 | |||
| 151 | Because after a while Nginx rotates the access logs into multiple files, we use | ||
| 152 | [`zcat`](https://linux.die.net/man/1/zcat) to extract the gzipped contents and create | ||
| 153 | a single file that has all the access logs. After this file is used we delete it. | ||
| 154 | |||
| 155 | If you want to exclude your home IP's requests, look at the `--exclude-ip` option | ||
| 156 | in the script and, instead of `0.0.0.0`, add your own home IP address. You can find | ||
| 157 | your home IP by executing `curl ifconfig.me` from your local machine and NOT | ||
| 158 | from the droplet. | ||
| 159 | |||
| 160 | Test the script by executing `sh | ||
| 161 | /var/www/html/stats.domain.com/generate-stats.sh` and then checking | ||
| 162 | `https://stats.domain.com`. If you can see stats instead of a 404 then you are | ||
| 163 | set. | ||
| 164 | |||
| 165 | It's time to add this script to cron with `crontab -e`. | ||
| 166 | |||
| 167 | ``` | ||
| 168 | */10 * * * * sh /var/www/html/stats.domain.com/generate-stats.sh | ||
| 169 | ``` | ||
| 170 | |||
| 171 | ## Securing with Basic authentication | ||
| 172 | |||
| 173 | You probably don't want stats to be publicly available, so we should create a | ||
| 174 | user and a password for Basic authentication. | ||
| 175 | |||
| 176 | First we create a password for a user `stats` with `htpasswd -c /etc/nginx/.htpasswd stats`. | ||
| 177 | |||
| 178 | Now we update the config file with `nano | ||
| 179 | /etc/nginx/sites-available/stats.domain.com`. You probably noticed that the | ||
| 180 | file looks a bit different from before. This is because `certbot` added | ||
| 181 | additional rules for SSL. | ||
| 182 | |||
| 183 | The location portion of the config file should now look like this. You should add | ||
| 184 | the `auth_basic` and `auth_basic_user_file` lines to the file. | ||
| 185 | |||
| 186 | ```nginx | ||
| 187 | location / { | ||
| 188 | try_files $uri $uri/ =404; | ||
| 189 | auth_basic "Private Property"; | ||
| 190 | auth_basic_user_file /etc/nginx/.htpasswd; | ||
| 191 | } | ||
| 192 | ``` | ||
| 193 | |||
| 194 | Test if the config is still OK with `nginx -t`, and if it is you can restart Nginx | ||
| 195 | with `service nginx restart`. | ||
| 196 | |||
| 197 | If you now visit `https://stats.domain.com` you should be prompted for a username | ||
| 198 | and password. If not, try reopening your browser. | ||
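| | |||
| | As with the 404 check, you can verify this from the terminal as well; this is a | ||
| | sketch, substitute your own password. | ||
| | |||
| | ```sh | ||
| | # without credentials this should return 401, with them the report | ||
| | curl -u stats:yourpassword https://stats.domain.com | ||
| | ``` | ||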
| 199 | |||
| 200 | That is all. You now have analytics for your server that gets refreshed every 10 | ||
| 201 | minutes. | ||
| 202 | |||
diff --git a/content/posts/2021-06-26-simple-world-clock.md b/content/posts/2021-06-26-simple-world-clock.md deleted file mode 100644 index ed248dd..0000000 --- a/content/posts/2021-06-26-simple-world-clock.md +++ /dev/null | |||
| @@ -1,107 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Simple world clock with eInk display and Raspberry Pi Zero | ||
| 3 | url: simple-world-clock-with-eiink-display-and-raspberry-pi-zero.html | ||
| 4 | date: 2021-06-26T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | Our team is spread across the world, from the USA all the way to Australia, so | ||
| 9 | having some sort of world clock makes sense. | ||
| 10 | |||
| 11 | Currently, I am using an extension for Gnome called [Timezone | ||
| 12 | extension](https://extensions.gnome.org/extension/2657/timezones-extension/), | ||
| 13 | and it serves the purpose quite well. | ||
| 14 | |||
| 15 | But I also have a bunch of electronics that I bought over time, and I am | ||
| 16 | not using any of it, so it's time to stop hoarding this stuff and use it in a | ||
| 17 | project. | ||
| 18 | |||
| 19 | A while ago I bought a small eInk display [Inky | ||
| 20 | pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811) and I | ||
| 21 | have a bunch of [Raspberry Pi's | ||
| 22 | Zero](https://www.raspberrypi.org/products/raspberry-pi-zero/) lying around that | ||
| 23 | I really need to use. | ||
| 24 | |||
| 25 |  | ||
| 26 | |||
| 27 | Since the [Inky | ||
| 28 | pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811) is | ||
| 29 | essentially a HAT, it can easily be added on top of the [Raspberry Pi | ||
| 30 | Zero](https://www.raspberrypi.org/products/raspberry-pi-zero/). | ||
| 31 | |||
| 32 | First, I installed the necessary software on the Raspberry Pi with `pip3 install | ||
| 33 | inky`. | ||
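| | |||
| | The script below also imports `PIL` and the `FredokaOne` font from | ||
| | `font_fredoka_one`. If those imports fail, they can be installed with pip too; | ||
| | the package names below are my assumption, so check Pimoroni's documentation if | ||
| | they differ. | ||
| | |||
| | ``` | ||
| | # assumption: package names for the imaging library and the Pimoroni font | ||
| | pip3 install pillow font-fredoka-one | ||
| | ``` | ||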
| 34 | |||
| 35 | And then I created a file `clock.py` in the home directory `/home/pi`. | ||
| 36 | |||
| 37 | ```python | ||
| 38 | #!/usr/bin/env python | ||
| 39 | # -*- coding: utf-8 -*- | ||
| 40 | |||
| 41 | import sys | ||
| 42 | import os | ||
| 43 | from inky.auto import auto | ||
| 44 | from PIL import Image, ImageFont, ImageDraw | ||
| 45 | from font_fredoka_one import FredokaOne | ||
| 46 | |||
| 47 | clocks = [ | ||
| 48 | 'America/New_York', | ||
| 49 | 'Europe/Ljubljana', | ||
| 50 | 'Australia/Brisbane', | ||
| 51 | ] | ||
| 52 | |||
| 53 | board = auto() | ||
| 54 | board.set_border(board.WHITE) | ||
| 55 | board.rotation = 90 | ||
| 56 | |||
| 57 | img = Image.new('P', (board.WIDTH, board.HEIGHT)) | ||
| 58 | draw = ImageDraw.Draw(img) | ||
| 59 | |||
| 60 | big_font = ImageFont.truetype(FredokaOne, 18) | ||
| 61 | small_font = ImageFont.truetype(FredokaOne, 13) | ||
| 62 | |||
| 63 | x = board.WIDTH / 3 | ||
| 64 | y = board.HEIGHT / 3 | ||
| 65 | |||
| 66 | idx = 1 | ||
| 67 | for clock in clocks: | ||
| 68 | ctime = os.popen('TZ="{}" date +"%a,%H:%M"'.format(clock)) | ||
| 69 | ctime = ctime.read().strip().split(',') | ||
| 70 | city = clock.split('/')[1].replace('_', ' ') | ||
| 71 | |||
| 72 | draw.text((15, (idx*y)-y+10), city, fill=board.BLACK, font=small_font) | ||
| 73 | draw.text((110, (idx*y)-y+7), str(ctime[0]), fill=board.BLACK, font=big_font) | ||
| 74 | draw.text((155, (idx*y)-y+7), str(ctime[1]), fill=board.BLACK, font=big_font) | ||
| 75 | |||
| 76 | idx += 1 | ||
| 77 | |||
| 78 | board.set_image(img) | ||
| 79 | board.show() | ||
| 80 | ``` | ||
| 81 | |||
| 82 | And because eInk displays are rather slow to refresh and the clock requires | ||
| 83 | refreshing only once a minute, this can be done through a cronjob. | ||
| 84 | |||
| 85 | Before we add this job to cron we need to make `clock.py` executable with `chmod | ||
| 86 | +x clock.py`. | ||
| 87 | |||
| 88 | Then we add a cronjob with `crontab -e`. | ||
| 89 | |||
| 90 | ``` | ||
| 91 | * * * * * /home/pi/clock.py | ||
| 92 | ``` | ||
| 93 | |||
| 94 | So, we end up with a result like this. | ||
| 95 | |||
| 96 |  | ||
| 97 | |||
| 98 | As for the enclosure, it can be 3D printed. I haven't made one yet, but | ||
| 99 | something like this can be used. | ||
| 100 | |||
| 101 | <iframe id="vs_iframe" src="https://www.viewstl.com/?embedded&url=https%3A%2F%2Fmitjafelicijan.com%2Fassets%2Fworld-clock%2Fenclosure.stl&color=gray&bgcolor=white&edges=no&orientation=front&noborder=no" style="border:0;margin:0;width:100%;height:400px;"></iframe> | ||
| 102 | |||
| 103 | You can download my [STL file for the enclosure | ||
| 104 | here](/assets/world-clock/enclosure.stl), but make sure that the dimensions make | ||
| 105 | sense. An opening for the USB port should also be added, or just use a drill and | ||
| 106 | some hot glue to make it fit in the enclosure. | ||
| 107 | |||
diff --git a/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md b/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md deleted file mode 100644 index 31a2ea0..0000000 --- a/content/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md +++ /dev/null | |||
| @@ -1,102 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: My journey from being an internet über consumer to being a full hominum again | ||
| 3 | url: from-internet-consumer-to-full-hominum-again.html | ||
| 4 | date: 2021-07-30T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | It's been almost a year since I started purging all my online accounts and | ||
| 9 | going down this rabbit hole of being almost independent of the current internet | ||
| 10 | machine. Even though I initially thought that I would have problems adapting, | ||
| 11 | I was pleasantly surprised that the transition went so smoothly. Even better, | ||
| 12 | it brought many benefits to my life, such as increased focus, less stress | ||
| 13 | about trivial things, etc. | ||
| 14 | |||
| 15 | It all started with me making small changes like unsubscribing from emails that I | ||
| 16 | had subscribed to by accepting terms and conditions, or even some more | ||
| 17 | malicious emails that I was getting because I was on a shared mailing list. And | ||
| 18 | the latter ones I hate the most of all. How the hell do they keep sharing my | ||
| 19 | email and sending me unsolicited emails and get away with it? I have a suspicion | ||
| 20 | that these marketing people share an Excel file between them and keep | ||
| 21 | resubscribing me when they import lists into Mailchimp or similar software. | ||
| 22 | |||
| 23 | It's fascinating to see how much crap you get subscribed to when you are not | ||
| 24 | paying attention. It got so bad that my primary Gmail address is full of junk | ||
| 25 | and needs constant monitoring and cleaning up. And because I want to have Inbox | ||
| 26 | Zero, this presents an additional problem for me. | ||
| 27 | |||
| 28 | The stress that email was causing me didn't register for a long time. I was | ||
| 29 | noticing that I was unable to go through one single hour without hysterically | ||
| 30 | refreshing email. And if somebody wrote me something, I needed to see it right | ||
| 31 | then, even though I didn't immediately reply to it. I can only describe this | ||
| 32 | with FOMO (fear of missing out). I have no other explanation than that. It was | ||
| 33 | crippling, and I was constantly context switching, which I will address further | ||
| 34 | down this post in more detail. | ||
| 35 | |||
| 36 | This was one of the reasons why I spun up my personal email server, and I am | ||
| 37 | using it now as my primary and personal email. I still have Gmail as my “junk” | ||
| 38 | email that I use for throwaway stuff. I log in to Gmail once a week and check | ||
| 39 | if there are any important emails that I got, but apart from that, it's sitting | ||
| 40 | dormant and collecting dust. | ||
| 41 | |||
| 42 | The more I watched the world lose itself by allowing anti-freedom | ||
| 43 | things to happen to it, the more I started to realize that something has to | ||
| 44 | change. I don't have the power to change the world. And I also don't have a | ||
| 45 | grandiose enough opinion of myself to even think of trying it. But what I can do is not | ||
| 46 | subscribe to this consumer way of thinking. I will not be complicit in this. My | ||
| 47 | moral and ethical stances won't allow it. So, this brings us to the second part | ||
| 48 | of my journey. | ||
| 49 | |||
| 50 | I was using all these 3rd party services because I was either lazy or OK with | ||
| 51 | their drawbacks. I watched these services and companies become more and | ||
| 52 | more evil. It is evil if you sell your users' data in this manner. Nobody reads | ||
| 53 | privacy policies and everybody is OK with accepting them, and they prey on that | ||
| 54 | flaw in human nature. I really hate the hypocrisy they manage to muster. These | ||
| 55 | companies prey on our laziness, and we are at fault here. Nobody else. And I | ||
| 56 | truly understand the reasons why we rather accept and move on, and not object | ||
| 57 | and make our lives a little more difficult. They have perfected this through | ||
| 58 | years of small changes that make us a little more dependent on them. You could | ||
| 59 | not convince a person to give away all his rights and data in one day. This was | ||
| 60 | gradual and slow. And it caught us all by surprise. When I really stopped and | ||
| 61 | thought about it, I felt repulsed. By really stopping and thinking about it, I | ||
| 62 | really mean stopping and thinking about it. Thoroughly and in depth. | ||
| 63 | |||
| 64 | Each step I took depleted my character a bit more. Like I was trading myself bit | ||
| 65 | by bit without understanding what it all meant. What it meant to be a full | ||
| 66 | person, not divided by all this bought attention they want from me. They don't | ||
| 67 | just get your data, but they also take your attention away from you. They | ||
| 68 | scatter it and go with the divide-and-conquer tactic from there. And a person | ||
| 69 | divided is a person not fully there. Not at the moment. Not alive fully. | ||
| 70 | |||
| 71 | I was unable to form long thoughts. Well, I thought I could. But now that I see | ||
| 72 | what being a full person is again, I can see that I was not at my 100% back | ||
| 73 | then. | ||
| 74 | |||
| 75 | A revolt was inevitable. There was no other way of continuing my story without | ||
| 76 | it. Without taking back my attention, my thoughts, my time, and my privacy, | ||
| 77 | regardless of how late it may be. | ||
| 78 | |||
| 79 | This has nothing to do with conspiracy theories. Even less with changing the | ||
| 80 | world. All I wanted was to get my life back in order and not waste the energy | ||
| 81 | that could be spent in other, better places. | ||
| 82 | |||
| 83 | I started reading more. I can focus now fully on things I work on. Furthermore, | ||
| 84 | I have the mental acuity that I never had before. My mind feels sharp. I don't | ||
| 85 | get angry so much. I can cherish the finer things in life now without the need | ||
| 86 | to interpret them intellectually. Not only that, but I have a feeling of | ||
| 87 | belonging again. Sense of purpose has returned with a vengeance. And I can now | ||
| 88 | help people without depleting myself. | ||
| 89 | |||
| 90 | The last step so far was to finish closing all the remaining online accounts | ||
| 91 | that I still had. And when I thought about what value they bring me, I wasn't | ||
| 92 | surprised that the answer was none. I wasn't logging in to them or using them. I | ||
| 93 | stopped being afraid of FOMO. If somebody wants to get in contact with me, they | ||
| 94 | will find a way. I am one search away. | ||
| 95 | |||
| 96 | We are not beholden to anybody. Our lives are our own. So dare yourself to | ||
| 97 | delete Facebook, LinkedIn. To unsubscribe. Dare yourself to take your time and | ||
| 98 | attention back. Use that time and energy to go for a walk without thinking about | ||
| 99 | work. Read a book instead of reading comments on social media that you will | ||
| 100 | forget in an hour. Enrich your life instead of wasting it. It only requires a | ||
| 101 | small step. And you will feel the benefits immediately. Lose the weight of the | ||
| 102 | world that is crushing you without your consent. | ||
diff --git a/content/posts/2021-08-01-linux-cheatsheet.md b/content/posts/2021-08-01-linux-cheatsheet.md deleted file mode 100644 index 3747d43..0000000 --- a/content/posts/2021-08-01-linux-cheatsheet.md +++ /dev/null | |||
| @@ -1,286 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: List of essential Linux commands for server management | ||
| 3 | url: linux-cheatsheet.html | ||
| 4 | date: 2021-08-01T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | **Generate SSH key** | ||
| 9 | |||
| 10 | ```bash | ||
| 11 | ssh-keygen -t ed25519 -C "your_email@example.com" | ||
| 12 | |||
| 13 | # when no support for Ed25519 present | ||
| 14 | ssh-keygen -t rsa -b 4096 -C "your_email@example.com" | ||
| 15 | ``` | ||
| 16 | |||
| 17 | Note: By default SSH keys get stored to `/home/<username>/.ssh/` folder. | ||
| 18 | |||
| 19 | **Login to host via SSH** | ||
| 20 | |||
| 21 | ```bash | ||
| 22 | # connect to host as your local username | ||
| 23 | ssh host | ||
| 24 | |||
| 25 | # connect to host as user | ||
| 26 | ssh <user>@<host> | ||
| 27 | |||
| 28 | # connect to host using port | ||
| 29 | ssh -p <port> <user>@<host> | ||
| 30 | ``` | ||
| 31 | |||
| 32 | **Execute command on a server through SSH** | ||
| 33 | |||
| 34 | ```bash | ||
| 35 | # execute one command | ||
| 36 | ssh root@100.100.100.100 "ls /root" | ||
| 37 | |||
| 38 | # execute many commands | ||
| 39 | ssh root@100.100.100.100 "cd /root;touch file.txt" | ||
| 40 | ``` | ||
| 41 | |||
| 42 | **Displays currently logged in users in the system** | ||
| 43 | |||
| 44 | ```bash | ||
| 45 | w | ||
| 46 | ``` | ||
| 47 | |||
| 48 | **Displays Linux system information** | ||
| 49 | |||
| 50 | ```bash | ||
| 51 | uname | ||
| 52 | ``` | ||
| 53 | |||
| 54 | **Displays kernel release information** | ||
| 55 | |||
| 56 | ```bash | ||
| 57 | uname -r | ||
| 58 | ``` | ||
| 59 | |||
| 60 | **Shows the system hostname** | ||
| 61 | |||
| 62 | ```bash | ||
| 63 | hostname | ||
| 64 | ``` | ||
| 65 | |||
| 66 | **Shows system reboot history** | ||
| 67 | |||
| 68 | ```bash | ||
| 69 | last reboot | ||
| 70 | ``` | ||
| 71 | |||
| 72 | **Displays information about the user** | ||
| 73 | |||
| 74 | ```bash | ||
| 75 | sudo apt install finger | ||
| 76 | finger <username> | ||
| 77 | ``` | ||
| 78 | |||
| 79 | **Displays IP addresses and all the network interfaces** | ||
| 80 | |||
| 81 | ```bash | ||
| 82 | ip addr show | ||
| 83 | ``` | ||
| 84 | |||
| 85 | **Downloads a file from an online source** | ||
| 86 | |||
| 87 | ```bash | ||
| 88 | wget https://example.com/example.tgz | ||
| 89 | ``` | ||
| 90 | |||
| 91 | Note: If the URL contains ? or &, enclose the URL in double quotes. | ||
| 92 | |||
| 93 | **Compress a file with gzip** | ||
| 94 | |||
| 95 | ```bash | ||
| 96 | # will not keep the original file | ||
| 97 | gzip file.txt | ||
| 98 | |||
| 99 | # will keep the original file | ||
| 100 | gzip --keep file.txt | ||
| 101 | ``` | ||
| 102 | |||
| 103 | **Interactive disk usage analyzer** | ||
| 104 | |||
| 105 | ```bash | ||
| 106 | sudo apt install ncdu | ||
| 107 | |||
| 108 | ncdu | ||
| 109 | ncdu <path/to/directory> | ||
| 110 | ``` | ||
| 111 | |||
| 112 | **Install Node.js using the Node Version Manager** | ||
| 113 | |||
| 114 | ```bash | ||
| 115 | curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash | ||
| 116 | source ~/.bashrc | ||
| 117 | |||
| 118 | nvm install v13 | ||
| 119 | ``` | ||
| 120 | |||
| 121 | **Too long; didn't read** | ||
| 122 | |||
| 123 | ```bash | ||
| 124 | npm install -g tldr | ||
| 125 | |||
| 126 | tldr tar | ||
| 127 | ``` | ||
| 128 | |||
| 129 | **Combine all Nginx access logs to one big log file** | ||
| 130 | |||
| 131 | ```bash | ||
| 132 | zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log | ||
| 133 | ``` | ||
| 134 | |||
| 135 | **Set up Redis server** | ||
| 136 | |||
| 137 | ```bash | ||
| 138 | sudo apt install redis-server redis-tools | ||
| 139 | |||
| 140 | # check if server is running | ||
| 141 | sudo service redis status | ||
| 142 | |||
| 143 | # set and get a key value | ||
| 144 | redis-cli set mykey myvalue | ||
| 145 | redis-cli get mykey | ||
| 146 | |||
| 147 | # interactive shell | ||
| 148 | redis-cli | ||
| 149 | ``` | ||
| 150 | |||
| 151 | **Generate statistics of your webserver** | ||
| 152 | |||
| 153 | ```bash | ||
| 154 | sudo apt install goaccess | ||
| 155 | |||
| 156 | # check if installed | ||
| 157 | goaccess -v | ||
| 158 | |||
| 159 | # combine logs | ||
| 160 | zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log | ||
| 161 | |||
| 162 | # export to single html | ||
| 163 | goaccess \ | ||
| 164 | --log-file=/var/log/nginx/access-all.log \ | ||
| 165 | --log-format=COMBINED \ | ||
| 166 | --exclude-ip=0.0.0.0 \ | ||
| 167 | --ignore-crawlers \ | ||
| 168 | --real-os \ | ||
| 169 | --output=/var/www/html/stats.html | ||
| 170 | |||
| 171 | # cleanup afterwards | ||
| 172 | rm /var/log/nginx/access-all.log | ||
| 173 | ``` | ||
| 174 | |||
| 175 | **Search for a given pattern in files** | ||
| 176 | |||
| 177 | ```bash | ||
| 178 | grep -r 'pattern' files | ||
| 179 | ``` | ||
| 180 | |||
| 181 | **Find process ID for a specific program** | ||
| 182 | |||
| 183 | ```bash | ||
| 184 | pgrep nginx | ||
| 185 | ``` | ||
| 186 | |||
| 187 | **Print name of current/working directory** | ||
| 188 | |||
| 189 | ```bash | ||
| 190 | pwd | ||
| 191 | ``` | ||
| 192 | |||
| 193 | **Creates a blank new file** | ||
| 194 | |||
| 195 | ```bash | ||
| 196 | touch newfile.txt | ||
| 197 | ``` | ||
| 198 | |||
| 199 | **Displays first lines in a file** | ||
| 200 | |||
| 201 | ```bash | ||
| 202 | # -n <x> presents the number of lines (10 by default) | ||
| 203 | head -n 20 somefile.txt | ||
| 204 | ``` | ||
| 205 | |||
| 206 | **Displays last lines in a file** | ||
| 207 | |||
| 208 | ```bash | ||
| 209 | # -n <x> presents the number of lines (10 by default) | ||
| 210 | tail -n 20 somefile.txt | ||
| 211 | |||
| 212 | # -f follows changes in the file (doesn't close) | ||
| 213 | tail -f somefile.txt | ||
| 214 | ``` | ||
| 215 | |||
| 216 | **Count lines in a file** | ||
| 217 | |||
| 218 | ```bash | ||
| 219 | wc -l somefile.txt | ||
| 220 | ``` | ||
| 221 | |||
| 222 | **Find all instances of the file** | ||
| 223 | |||
| 224 | ```bash | ||
| 225 | sudo apt install mlocate | ||
| 226 | |||
| 227 | locate somefile.txt | ||
| 228 | ``` | ||
| 229 | |||
| 230 | **Find file names that begin with ‘index’ in /home folder** | ||
| 231 | |||
| 232 | ```bash | ||
| 233 | find /home/ -name "index*" | ||
| 234 | ``` | ||
| 235 | |||
| 236 | **Find files larger than 100MB in the home folder** | ||
| 237 | |||
| 238 | ```bash | ||
| 239 | find /home -size +100M | ||
| 240 | ``` | ||
| 241 | |||
| 242 | **Displays block devices related information** | ||
| 243 | |||
| 244 | ```bash | ||
| 245 | lsblk | ||
| 246 | ``` | ||
| 247 | |||
| 248 | **Displays free space on mounted systems** | ||
| 249 | |||
| 250 | ```bash | ||
| 251 | df -h | ||
| 252 | ``` | ||
| 253 | |||
| 254 | **Displays free and used memory in the system** | ||
| 255 | |||
| 256 | ```bash | ||
| 257 | free -h | ||
| 258 | ``` | ||
| 259 | |||
| 260 | **Displays all active listening ports** | ||
| 261 | |||
| 262 | ```bash | ||
| 263 | sudo apt install net-tools | ||
| 264 | |||
| 265 | netstat -pnltu | ||
| 266 | ``` | ||
| 267 | |||
| 268 | **Kill a process violently** | ||
| 269 | |||
| 270 | ```bash | ||
| 271 | kill -9 <pid> | ||
| 272 | ``` | ||
| 273 | |||
| 274 | **List files opened by user** | ||
| 275 | |||
| 276 | ```bash | ||
| 277 | lsof -u <user> | ||
| 278 | ``` | ||
| 279 | |||
| 280 | **Execute "df -h", showing periodic updates** | ||
| 281 | |||
| 282 | ```bash | ||
| 283 | # -n 1 means every second | ||
| 284 | watch -n 1 df -h | ||
| 285 | ``` | ||
| 286 | |||
diff --git a/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md b/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md deleted file mode 100644 index 0755282..0000000 --- a/content/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md +++ /dev/null | |||
| @@ -1,275 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Debian based riced up distribution for Developers and DevOps folks | ||
| 3 | url: debian-based-riced-up-distribution-for-developers-and-devops-folks.html | ||
| 4 | date: 2021-12-03T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## Introduction | ||
| 9 | |||
| 10 | I have been using [Ubuntu](https://ubuntu.com/) for quite a long time now. I have | ||
| 11 | used [Debian](https://www.debian.org/) in the past and | ||
| 12 | [Manjaro](https://manjaro.org/). Also had [Arch](https://archlinux.org/) for | ||
| 13 | some time and even ran [Gentoo](https://www.gentoo.org/) way back. | ||
| 14 | |||
| 15 | What I learned from all this is that I prefer running a bit older versions and | ||
| 16 | having them be stable than run bleeding edge rolling release. For that reason, I | ||
| 17 | stuck with Ubuntu for a couple of years now. I am also at a point in my life | ||
| 18 | where I just don't care what is cool or hip anymore. I just want a stable system | ||
| 19 | that doesn't get in my way. | ||
| 20 | |||
| 21 | During all this, I noticed that these distributions were getting very bloated | ||
| 22 | and a lot of software got included that I usually uninstall on a fresh | ||
| 23 | installation. Maybe this is my OCD speaking, but why do I have to give a fresh | ||
| 24 | installation a minimum of 1 GB of RAM out of the box just to have a blank screen in front | ||
| 25 | of me? I get it, there are many things included in the distro to make my life | ||
| 26 | easier. I understand. But at this point I have a feeling that modern Linux | ||
| 27 | distributions are becoming similar to [Node.js project with | ||
| 28 | node_modules](https://devhumor.com/content/uploads/images/August2017/node-modules.jpg). | ||
| 29 | Just a crazy number of packages serving very little or no purpose, just | ||
| 30 | supporting other software. | ||
| 31 | |||
| 32 | I felt I needed a fresh start. To start over with something minimal and clean. | ||
| 33 | Something that would put a little more joy into using a computer again. | ||
| 34 | |||
| 35 | For the first version, I wanted to target the following machines I have at home | ||
| 36 | that I want this thing to work on. | ||
| 37 | |||
| 38 | ```yaml | ||
| 39 | # My main stationary work machine | ||
| 40 | Resolution: 3840x1080 (Super Ultrawide Monitor 32:9) | ||
| 41 | CPU: Intel i7-8700 (12) @ 4.600GHz | ||
| 42 | GPU: AMD ATI Radeon RX 470/480/570/570X/580/580X/590 | ||
| 43 | Memory: 32020MiB | ||
| 44 | ``` | ||
| 45 | |||
| 46 | ```yaml | ||
| 47 | # Thinkpad x220 for testing things and goofing around | ||
| 48 | Resolution: 1366x768 | ||
| 49 | CPU: Intel i5-2520M (4) @ 3.200GHz | ||
| 50 | GPU: Intel 2nd Generation Core Processor Family | ||
| 51 | Memory: 15891MiB | ||
| 52 | ``` | ||
| 53 | |||
| 54 | ## How should I approach this? | ||
| 55 | |||
| 56 | I knew I wanted to use [minimal Debian netinst | ||
| 57 | ](https://www.debian.org/CD/netinst/) for the base to give myself a head | ||
| 58 | start. No reason to go through changing the installer and also testing all that | ||
| 59 | behemoth of a thing. So, some sort of ricing was the only logical option to get | ||
| 60 | this thing off the ground somewhat quickly. | ||
| 61 | |||
| 62 | > **What is ricing anyway?** | ||
| 63 | > The term “RICE” stands for Race Inspired Cosmetic Enhancement. A group of | ||
| 64 | > people (could be one, idk) decided to see if they could tweak their own | ||
| 65 | > distros like they/others did their cars. This gave rise to a community of | ||
| 66 | > Linux/Unix enthusiasts trying to make their distros look cooler and better | ||
| 67 | > than others... For more information, read this article | ||
| 68 | > [What in the world is ricing!?](https://pesos.github.io/2020/07/14/what-is-ricing.html). | ||
| 69 | |||
| 70 | I didn't want this to just be a set of config files for theming purposes. I | ||
| 71 | wanted this to include a set of pre-installed tools and services that are being | ||
| 72 | used all the time by a modern developer. Theming is just a tiny part of it. | ||
| 73 | Fonts being applied across the distro and things like that. | ||
| 74 | |||
| 75 | First, I chose the terminal installer and let it load additional components. | ||
| 76 | Avoid using the graphical installer in this case. | ||
| 77 | |||
| 78 |  | ||
| 79 | |||
| 80 | After that I selected a hostname, created a normal user, set passwords for | ||
| 81 | that user and the root user, and chose guided mode for disk partitioning. | ||
| 82 | |||
| 83 |  | ||
| 84 | |||
| 85 | I let it run to install all the things required for the base system and opted | ||
| 86 | out of scanning additional media for use by the package manager. Those will be | ||
| 87 | downloaded from the internet during installation. | ||
| 88 | |||
| 89 |  | ||
| 90 | |||
| 91 | I opted out of the popularity contest, and **now comes the important part**. | ||
| 92 | Uncheck all the boxes in Software selection and only leave 'standard system | ||
| 93 | utilities'. I also left the SSH server selected, so I was able to log in to the machine | ||
| 94 | from my main PC. | ||
| 95 | |||
| 96 |  | ||
| 97 | |||
| 98 | At this point, I installed the GRUB bootloader on the disk where I installed the | ||
| 99 | system. | ||
| 100 | |||
| 101 |  | ||
| 102 | |||
| 103 | That concluded the installation of base Debian and after restarting the computer | ||
| 104 | I was prompted with the login screen. | ||
| 105 | |||
| 106 |  | ||
| 107 | |||
| 108 | Now that I had the base installation, it was time to choose what software I | ||
| 109 | wanted to include in this so-called distribution. I wanted an out-of-the-box | ||
| 110 | developer experience, so I had plenty to choose from. | ||
| 111 | |||
| 112 | Let's not waste time and go through the list. | ||
| 113 | |||
| 114 | ## Desktop environments | ||
| 115 | |||
| 116 | I have been using [Gnome](https://www.gnome.org/) for my whole Linux life. From | ||
| 117 | version 2 forward. It's been quite a ride. I hated version 3 when it came out | ||
| 118 | and replaced version 2. But I got used to it. And now with version 40+ they also | ||
| 119 | made a couple of changes which I found both frustrating and pleasantly surprising. | ||
| 120 | |||
| 121 | The amount of vertical space you lose because of the beefy title bars on | ||
| 122 | windows is ridiculous. And then in the case of | ||
| 123 | [Tilix](https://gnunn1.github.io/tilix-web/) you also have tabs, and you are | ||
| 124 | 100px deep. Vertical space is one of the most important things for a | ||
| 125 | developer. The more real estate you have, the more code you can have in a | ||
| 126 | viewport. | ||
| 127 | |||
| 128 | But on the other hand, I still love how Gnome feels and looks. I gotta give them | ||
| 129 | that. They really are trying to make Gnome feel unified and modern. | ||
| 130 | |||
| 131 | Regardless of all the nice things Gnome has, I was looking at the tiling window | ||
| 132 | managers for some time, but never had the nerve to actually go with it. But now | ||
| 133 | was the ideal time to give it a go. No guts, no glory kind of a thing. | ||
| 134 | |||
| 135 | One of the requirements for me was easy custom layouts because I use a really | ||
| 136 | strange monitor with an aspect ratio of 32:9. So relying on the included layouts most | ||
| 137 | of them have is a non-starter. | ||
| 138 | |||
| 139 | What I was doing in Gnome was having windows in a layout like the diagram | ||
| 140 | below. This is my common practice. And if you look at it you can clearly see I | ||
| 141 | was replicating a tiling window manager setup in Gnome. | ||
| 142 | |||
| 143 |  | ||
| 144 | |||
| 145 | That made me look into a bunch of tiling window managers and test them | ||
| 146 | out. Candidates I was looking at were: | ||
| 147 | |||
| 148 | - [i3](https://i3wm.org/) | ||
| 149 | - [bspwm](https://github.com/baskerville/bspwm) | ||
| 150 | - [awesome](https://awesomewm.org/index.html) | ||
| 151 | - [XMonad](https://xmonad.org/) | ||
| 152 | - [sway](https://swaywm.org/) | ||
| 153 | - [Qtile](http://www.qtile.org/) | ||
| 154 | - [dwm](https://dwm.suckless.org/) | ||
| 155 | |||
| 156 | You can also check article [13 Best Tiling Window Managers for | ||
| 157 | Linux](https://www.tecmint.com/best-tiling-window-managers-for-linux/) I was | ||
| 158 | referencing while testing them out. | ||
| 159 | |||
| 160 | While all of them provided what I needed, I liked i3 the most. What particularly | ||
| 161 | caught my eye was the ease of use and the tree-based layouts, which allow very | ||
| 162 | flexible setups. I know others can also be set up to have custom layouts other than | ||
| 163 | spiral, dwindle etc. I think i3 is a good entry-level window manager for | ||
| 164 | somebody like me. | ||
| 165 | |||
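| | For custom layouts like the one in the diagram above, i3 can also restore a | ||
| | saved layout tree. A hypothetical sketch of that workflow (paths and file names | ||
| | are assumptions, not something shipped with dfd-rice): | ||
| | |||
| | ```sh | ||
| | # 1. arrange a workspace by hand once, then dump its layout tree | ||
| | i3-save-tree --workspace 1 > ~/.config/i3/ws1.json | ||
| | # 2. edit the file and uncomment the "swallows" criteria it suggests | ||
| | # 3. restore the layout later (append_layout wants an absolute path) | ||
| | i3-msg "workspace 1; append_layout /home/me/.config/i3/ws1.json" | ||
| | ``` | ||
| | |||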
| 166 | ## Batteries included | ||
| 167 | |||
| 168 | The source for the whole thing is located on Github | ||
| 169 | https://github.com/mitjafelicijan/dfd-rice. | ||
| 170 | |||
| 171 | Currently included: | ||
| 172 | |||
| 173 | - `non-free` (enables non-free packages in apt) | ||
| 174 | - `sudo` (adds sudo and adds user to sudo group) | ||
| 175 | - `essentials` (gcc, htop, zip, curl, etc...) | ||
| 176 | - `wifi` (network manager nmtui) | ||
| 177 | - `desktop` (i3, dmenu, fonts, configurations) | ||
| 178 | - `pulseaudio` (pulseaudio with pavucontrol) | ||
| 179 | - `code-editors` (vim, micro, vscode) | ||
| 180 | - `ohmybash` (make bash pretty) | ||
| 181 | - `file-managers` (mc) | ||
| 182 | - `git-ui` (terminal git gui) | ||
| 183 | - `meld` (diff tool) | ||
| 184 | - `profiling` (kcachegrind, valgrind, strace, ltrace) | ||
| 185 | - `browsers` (brave, firefox, chromium) | ||
| 186 | - programming languages: | ||
| 187 | - `python` | ||
| 188 | - `golang` | ||
| 189 | - `nodejs` | ||
| 190 | - `rust` | ||
| 191 | - `nim` | ||
| 192 | - `php` | ||
| 193 | - `ruby` | ||
| 194 | - `docker` (with docker-compose) | ||
| 195 | - `ansible` | ||
| 196 | |||
| 197 | The install script also allows you to install only specific packages (in this | ||
| 198 | example: essentials, ohmybash, docker, rust). | ||
| 199 | |||
| 200 | ```sh | ||
| 201 | # become root first, then fetch and run the install script with the recipe names as arguments | ||
| 202 | su - root | ||
| 203 | bash -c "$(wget -q https://raw.github.com/mitjafelicijan/dfd-rice/master/tools/install.sh -O -)" -- \ | ||
| 204 |   essentials ohmybash docker rust | ||
| 204 | ``` | ||
| 205 | |||
| 206 | Currently, most of these recipes use what Debian provides, and this is totally | ||
| 207 | fine with me since I never use the bleeding-edge features of a package. But if | ||
| 208 | something major comes to light, I will replace it with a compilation script or | ||
| 209 | something similar. | ||
| 210 | |||
| 211 | This is some of the output from the installation script. | ||
| 212 | |||
| 213 |  | ||
| 214 | |||
| 215 | Let's take a look at some examples in the installation script. | ||
| 216 | |||
| 217 | ### Docker recipe | ||
| 218 | |||
| 219 | ```sh | ||
| 220 | # docker | ||
| 221 | print_header "Installing Docker" | ||
| 222 | curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --yes --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg | ||
| 223 | echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null | ||
| 224 | apt update | ||
| 225 | apt -y install docker-ce docker-ce-cli containerd.io docker-compose | ||
| 226 | |||
| 227 | systemctl start docker | ||
| 228 | systemctl enable docker | ||
| 229 | systemctl status docker --no-pager | ||
| 230 | |||
| 231 | /sbin/usermod -aG docker $USERNAME | ||
| 232 | ``` | ||
| 233 | |||
| 234 | ### Making bash pretty | ||
| 235 | |||
| 236 | I really like [Oh My Zsh](https://ohmyz.sh/), but I don't like the zsh shell. When | ||
| 237 | I used it, I constantly needed to be aware of it, and running bash scripts was a | ||
| 238 | pain. So, I was really delighted when I found out that a version for bash | ||
| 239 | existed called [Oh My Bash](https://ohmybash.nntoan.com/). Let's take a look at | ||
| 240 | the recipe for installing it. | ||
| 241 | |||
| 242 | ```sh | ||
| 243 | # ohmybash | ||
| 244 | print_header "Enabling OhMyBash" | ||
| 245 | sudo -u $USERNAME sh -c "$(curl -fsSL https://raw.github.com/ohmybash/oh-my-bash/master/tools/install.sh)" & | ||
| 246 | T1=${!} | ||
| 247 | wait ${T1} | ||
| 248 | ``` | ||
| 249 | |||
| 250 | Because OhMyBash does `exec bash` at the end, this traps our script inside | ||
| 251 | another shell and our script cannot continue. For that reason, I executed this | ||
| 252 | in the background. But that presents a new problem: because it runs in the | ||
| 253 | background, we naturally lose track of its progress. The trick with | ||
| 254 | `T1=${!}` and `wait ${T1}` waits for the background process to finish before | ||
| 255 | continuing to the next task in the bash script. | ||
| 256 | |||
| 257 | Check [Multi-Threaded Processing in Bash Scripts](https://www.cloudsavvyit.com/12277/how-to-use-multi-threaded-processing-in-bash-scripts/) | ||
| 258 | for more details. | ||
| 259 | |||
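| | The same trick generalizes to running several steps in parallel. A small sketch | ||
| | with two hypothetical steps (`step_one` and `step_two` are placeholders, not | ||
| | actual dfd-rice recipes): | ||
| | |||
| | ```sh | ||
| | step_one() { sleep 2; }   # placeholder for a real install step | ||
| | step_two() { sleep 3; }   # placeholder for another install step | ||
| | |||
| | # $! holds the PID of the most recently backgrounded job | ||
| | step_one & | ||
| | P1=${!} | ||
| | step_two & | ||
| | P2=${!} | ||
| | wait ${P1} ${P2}   # block until both finish, then carry on with the script | ||
| | ``` | ||
| | |||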
| 260 | ## Conclusion | ||
| 261 | |||
| 262 | Take a look at | ||
| 263 | https://github.com/mitjafelicijan/dfd-rice/blob/develop/tools/install.sh script | ||
| 264 | to get familiar with it. This is just a first iteration and I will continue to | ||
| 265 | update it because I need this in my life. | ||
| 266 | |||
| 267 | The current version boots in 4s to the login prompt, and after you log in, the | ||
| 268 | desktop environment loads in 2s. So, it's fast, very fast. And on a clean boot, I | ||
| 269 | measured ~230 MB of RAM usage. | ||
| 270 | |||
| 271 | And this is how it looks with two terminals side by side. I really like the | ||
| 272 | simplicity and clean interface. I will polish the colors and stuff like that, | ||
| 273 | but I really do like the results. | ||
| 274 | |||
| 275 |  | ||
diff --git a/content/posts/2021-12-25-running-golang-application-as-pid1.md b/content/posts/2021-12-25-running-golang-application-as-pid1.md deleted file mode 100644 index 60d0400..0000000 --- a/content/posts/2021-12-25-running-golang-application-as-pid1.md +++ /dev/null | |||
| @@ -1,347 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Running Golang application as PID 1 with Linux kernel | ||
| 3 | url: running-golang-application-as-pid1.html | ||
| 4 | date: 2021-12-25T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## Unikernels, kernels, and alike | ||
| 9 | |||
| 10 | I have been reading a lot about | ||
| 11 | [unikernels](https://en.wikipedia.org/wiki/Unikernel) lately and found them | ||
| 12 | very intriguing. When you push away all the marketing speak and look at the | ||
| 13 | idea, it makes a lot of sense. | ||
| 14 | |||
| 15 | > A unikernel is a specialized, single address space machine image constructed | ||
| 16 | > by using library operating systems. ([Wikipedia](https://en.wikipedia.org/wiki/Unikernel)) | ||
| 17 | |||
| 18 | I really like the explanation from the article | ||
| 19 | [Unikernels: Rise of the Virtual Library Operating System](https://queue.acm.org/detail.cfm?id=2566628). | ||
| 20 | Really worth a read. | ||
| 21 | |||
| 22 | If we compare a normal operating system to a unikernel side by side, they would | ||
| 23 | look something like this. | ||
| 24 | |||
| 25 |  | ||
| 26 | |||
| 27 | From this image, we can see how the complexity significantly decreases with | ||
| 28 | the use of Unikernels. This comes with a price, of course. Unikernels are hard | ||
| 29 | to get running and require a lot of work since you don't have an actual proper | ||
| 30 | kernel running in the background providing network access, drivers, etc. | ||
| 31 | |||
| 32 | So as a half step to make the stack simpler, I started looking into using the | ||
| 33 | Linux kernel as a base and going from there. I came across this | ||
| 34 | [Youtube video talking about Building the Simplest Possible Linux System](https://www.youtube.com/watch?v=Sk9TatW9ino) | ||
| 35 | by [Rob Landley](https://landley.net) and apart from statically compiling the | ||
| 36 | application to be run as PID 1, there were really no other obstacles. | ||
| 37 | |||
| 38 | ## What is PID 1? | ||
| 39 | |||
| 40 | PID 1 is the first process that the Linux kernel starts after the boot process. | ||
| 41 | It also has a couple of properties that are unique to it. | ||
| 42 | |||
| 43 | - When the process with PID 1 dies for any reason, all other processes are | ||
| 44 | killed with the KILL signal. | ||
| 45 | - When any process that has children dies for any reason, its children are | ||
| 46 | re-parented to the process with PID 1. | ||
| 47 | - Many signals whose default action is Term are ignored when sent to PID 1. | ||
| 48 | - When the process with PID 1 dies for any reason, the kernel panics, which | ||
| 49 | results in a system crash. | ||
| 50 | |||
| 51 | PID 1 is considered the init application, which takes care of running and | ||
| 52 | handling other services like: | ||
| 53 | |||
| 54 | - sshd, | ||
| 55 | - nginx, | ||
| 56 | - pulseaudio, | ||
| 57 | - etc. | ||
| 58 | |||
| 59 | If you are on a Linux machine, you can check which process has PID 1 | ||
| 60 | by running the following. | ||
| 61 | |||
| 62 | ```sh | ||
| 63 | $ cat /proc/1/status | ||
| 64 | Name: systemd | ||
| 65 | Umask: 0000 | ||
| 66 | State: S (sleeping) | ||
| 67 | Tgid: 1 | ||
| 68 | Ngid: 0 | ||
| 69 | Pid: 1 | ||
| 70 | PPid: 0 | ||
| 71 | ... | ||
| 72 | ``` | ||
| 73 | |||
| 74 | As we can see, on my machine the process with ID 1 is [systemd](https://systemd.io/), | ||
| 75 | which is a software suite that provides an array of system components for Linux | ||
| 76 | operating systems. If you look closely, you can also see that the `PPid` | ||
| 77 | (process ID of the parent process) is `0`, which additionally confirms that | ||
| 78 | this process doesn't have a parent. | ||
| 79 | |||
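| | A quicker way to get the same answer (assuming `procps` is installed, which it | ||
| | is on most distributions) is: | ||
| | |||
| | ```sh | ||
| | # show the PID, parent PID and command name of process 1 | ||
| | ps -p 1 -o pid,ppid,comm | ||
| | ``` | ||
| | |||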
| 80 | ## So why even run application as PID 1 instead of just using a container? | ||
| 81 | |||
| 82 | Containers are wonderful, but they come with a lot of baggage. And because they | ||
| 83 | are layered by nature, the images require quite a lot of space and also a | ||
| 84 | lot of additional software to handle them. They are not as lightweight as they | ||
| 85 | seem, and many popular images require 500 MB or more of disk space. | ||
| 86 | |||
| 87 | The idea of running this as PID 1 would result in a significantly smaller footprint, | ||
| 88 | as we will see later in the post. | ||
| 89 | |||
| 90 | > You could run a simple init system inside Docker container described more | ||
| 91 | > in this article [Docker and the PID 1 zombie reaping problem](https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/). | ||
| 92 | |||
| 93 | ## The master plan | ||
| 94 | |||
| 95 | 1. Compile Linux kernel with the default definitions. | ||
| 96 | 2. Prepare a Hello World application in Golang that is statically compiled. | ||
| 97 | 3. Run it with [QEMU](https://www.qemu.org/) and providing Golang application | ||
| 98 | as init application / PID 1. | ||
| 99 | |||
| 100 | For the sake of simplicity, we will not be cross-compiling any of it and will | ||
| 101 | just use the 64-bit version. | ||
| 102 | |||
| 103 | ## Compiling Linux kernel | ||
| 104 | |||
| 105 | ```sh | ||
| 106 | $ wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.15.7.tar.xz | ||
| 107 | $ tar xf linux-5.15.7.tar.xz | ||
| 108 | |||
| 109 | $ cd linux-5.15.7 | ||
| 110 | |||
| 111 | $ make clean | ||
| 112 | |||
| 113 | # read more about this https://stackoverflow.com/a/41886394 | ||
| 114 | $ make defconfig | ||
| 115 | |||
| 116 | $ time make -j `nproc` | ||
| 117 | |||
| 118 | $ cd .. | ||
| 119 | ``` | ||
| 120 | |||
| 121 | At this point we have a kernel image located at `arch/x86_64/boot/bzImage`. | ||
| 122 | We will use this in QEMU later. | ||
| 123 | |||
| 124 | To make our lives a bit easier, let's move the kernel image to another place. | ||
| 125 | Let's create a folder `bin/` in the root of our project with `mkdir -p bin`. | ||
| 126 | |||
| 127 | |||
| 128 | At this point we can copy `bzImage` to the `bin/` folder with | ||
| 129 | `cp linux-5.15.7/arch/x86_64/boot/bzImage bin/bzImage`. | ||
| 130 | |||
| 131 | The folder structure of this experiment should look like this. | ||
| 132 | |||
| 133 | ``` | ||
| 134 | pid1/ | ||
| 135 |   bin/ | ||
| 136 |     bzImage | ||
| 137 |   linux-5.15.7/ | ||
| 138 |   linux-5.15.7.tar.xz | ||
| 139 | ``` | ||
| 140 | |||
| 141 | ## Preparing PID 1 application in Golang | ||
| 142 | |||
| 143 | This step is relatively easy. The only thing we must keep in mind is that we | ||
| 144 | will need to compile the binary as a static one. | ||
| 145 | |||
| 146 | Let's create `init.go` file in the root of the project. | ||
| 147 | |||
| 148 | ```go | ||
| 149 | package main | ||
| 150 | |||
| 151 | import ( | ||
| 152 | 	"fmt" | ||
| 153 | 	"time" | ||
| 154 | ) | ||
| 155 | |||
| 156 | func main() { | ||
| 157 | 	for { | ||
| 158 | 		fmt.Println("Hello from Golang") | ||
| 159 | 		time.Sleep(1 * time.Second) | ||
| 160 | 	} | ||
| 161 | } | ||
| 162 | ``` | ||
| 163 | |||
| 164 | If you notice, we have a forever loop in main, with a simple sleep of 1 | ||
| 165 | second so we don't overwhelm the CPU. This is because PID 1 should never complete | ||
| 166 | and/or exit. That would result in a kernel panic. Which is BAD! | ||
| 167 | |||
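| | As an aside, a real init process also has to reap the orphaned children that get | ||
| | re-parented to it (see the PID 1 properties above), otherwise they linger as | ||
| | zombies. That is not needed for this hello-world experiment, but a hedged sketch | ||
| | of what it could look like on top of our loop: | ||
| | |||
| | ```go | ||
| | package main | ||
| | |||
| | import ( | ||
| | 	"fmt" | ||
| | 	"os" | ||
| | 	"os/signal" | ||
| | 	"syscall" | ||
| | 	"time" | ||
| | ) | ||
| | |||
| | func main() { | ||
| | 	// Reap children that were re-parented to us (a PID 1 duty), | ||
| | 	// so they don't accumulate as zombie processes. | ||
| | 	sigs := make(chan os.Signal, 1) | ||
| | 	signal.Notify(sigs, syscall.SIGCHLD) | ||
| | 	go func() { | ||
| | 		for range sigs { | ||
| | 			for { | ||
| | 				var status syscall.WaitStatus | ||
| | 				pid, err := syscall.Wait4(-1, &status, syscall.WNOHANG, nil) | ||
| | 				if pid <= 0 || err != nil { | ||
| | 					break | ||
| | 				} | ||
| | 			} | ||
| | 		} | ||
| | 	}() | ||
| | |||
| | 	for { | ||
| | 		fmt.Println("Hello from Golang") | ||
| | 		time.Sleep(1 * time.Second) | ||
| | 	} | ||
| | } | ||
| | ``` | ||
| | |||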
| 168 | There are two ways of compiling a Golang application: statically and dynamically. | ||
| 169 | |||
| 170 | To statically compile the binary, use the following command. | ||
| 171 | |||
| 172 | ```sh | ||
| 173 | $ go build -ldflags="-extldflags=-static" init.go | ||
| 174 | ``` | ||
| 175 | |||
| 176 | We can also check if the binary is statically compiled with: | ||
| 177 | |||
| 178 | ```sh | ||
| 179 | $ file init | ||
| 180 | init: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=Ypu8Zw_4NBxm1Yxg2OYO/H5x721rQ9uTPiDVh-VqP/vZN7kXfGG1zhX_qdHMgH/9vBfmK81tFrygfOXDEOo, not stripped | ||
| 181 | |||
| 182 | $ ldd init | ||
| 183 | not a dynamic executable | ||
| 184 | ``` | ||
| 185 | |||
| 186 | At this point, we need to create an [initramfs](https://www.linuxfromscratch.org/blfs/view/svn/postlfs/initramfs.html) | ||
| 187 | (short for "initial RAM file system", the successor of initrd: a cpio archive | ||
| 188 | of the initial file system that gets loaded into memory during the Linux | ||
| 189 | startup process). | ||
| 190 | |||
| 191 | ```sh | ||
| 192 | $ echo init | cpio -o --format=newc > initramfs | ||
| 193 | $ mv initramfs bin/initramfs | ||
| 194 | ``` | ||
| 195 | |||
| 196 | The project at this stage should look like this. | ||
| 197 | |||
| 198 | ``` | ||
| 199 | pid1/ | ||
| 200 |   bin/ | ||
| 201 |     bzImage | ||
| 202 |     initramfs | ||
| 203 |   linux-5.15.7/ | ||
| 204 |   linux-5.15.7.tar.xz | ||
| 205 |   init.go | ||
| 206 | ``` | ||
| 207 | |||
| 208 | ## Running all of it with QEMU | ||
| 209 | |||
| 210 | [QEMU](https://www.qemu.org/) is a free and open-source hypervisor. It emulates | ||
| 211 | the machine's processor through dynamic binary translation and provides a set | ||
| 212 | of different hardware and device models for the machine, enabling it to run a | ||
| 213 | variety of guest operating systems. | ||
| 214 | |||
| 215 | ```sh | ||
| 216 | $ qemu-system-x86_64 -serial stdio -kernel bin/bzImage -initrd bin/initramfs -append "console=ttyS0" -m 128 | ||
| 217 | ``` | ||
| 218 | |||
| 219 | ```sh | ||
| 220 | $ qemu-system-x86_64 -serial stdio -kernel bin/bzImage -initrd bin/initramfs -append "console=ttyS0" -m 128 | ||
| 221 | [ 0.000000] Linux version 5.15.7 (m@khan) (gcc (GCC) 11.2.1 20211203 (Red Hat 11.2.1-7), GNU ld version 2.37-10.fc35) #7 SMP Mon Dec 13 10:23:25 CET 2021 | ||
| 222 | [ 0.000000] Command line: console=ttyS0 | ||
| 223 | [ 0.000000] x86/fpu: x87 FPU will use FXSAVE | ||
| 224 | [ 0.000000] signal: max sigframe size: 1440 | ||
| 225 | [ 0.000000] BIOS-provided physical RAM map: | ||
| 226 | [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable | ||
| 227 | [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved | ||
| 228 | [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved | ||
| 229 | [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000007fdffff] usable | ||
| 230 | [ 0.000000] BIOS-e820: [mem 0x0000000007fe0000-0x0000000007ffffff] reserved | ||
| 231 | [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved | ||
| 232 | [ 0.000000] NX (Execute Disable) protection: active | ||
| 233 | [ 0.000000] SMBIOS 2.8 present. | ||
| 234 | [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-6.fc35 04/01/2014 | ||
| 235 | [ 0.000000] tsc: Fast TSC calibration failed | ||
| 236 | ... | ||
| 237 | [ 2.016106] ALSA device list: | ||
| 238 | [ 2.016329] No soundcards found. | ||
| 239 | [ 2.053176] Freeing unused kernel image (initmem) memory: 1368K | ||
| 240 | [ 2.056095] Write protecting the kernel read-only data: 20480k | ||
| 241 | [ 2.058248] Freeing unused kernel image (text/rodata gap) memory: 2032K | ||
| 242 | [ 2.058811] Freeing unused kernel image (rodata/data gap) memory: 500K | ||
| 243 | [ 2.059164] Run /init as init process | ||
| 244 | Hello from Golang | ||
| 245 | [ 2.386879] tsc: Refined TSC clocksource calibration: 3192.032 MHz | ||
| 246 | [ 2.387114] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2e02e31fa14, max_idle_ns: 440795264947 ns | ||
| 247 | [ 2.387380] clocksource: Switched to clocksource tsc | ||
| 248 | [ 2.587895] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 | ||
| 249 | Hello from Golang | ||
| 250 | Hello from Golang | ||
| 251 | Hello from Golang | ||
| 252 | ``` | ||
| 253 | |||
| 254 | The whole [log file here](/assets/pid1/qemu.log). | ||
| 255 | |||
| 256 | ## Size comparison | ||
| 257 | |||
| 258 | The cool thing about this approach is that the Linux kernel and the application | ||
| 259 | together only take around 12 MB, which is impressive as hell. Note that | ||
| 260 | the size of bzImage (the Linux kernel) could be decreased even further | ||
| 261 | by going into `make menuconfig` and removing a ton of features from the kernel. | ||
| 262 | I managed to get the kernel size down to 2 MB while still having it work | ||
| 263 | properly. | ||
| 264 | |||
| 265 | ```sh | ||
| 266 | total 12M | ||
| 267 | -rw-r--r--. 1 m m 9.3M Dec 13 10:24 bzImage | ||
| 268 | -rw-r--r--. 1 m m 1.9M Dec 27 01:19 initramfs | ||
| 269 | ``` | ||
| 270 | |||
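| | If you want to go further down the size rabbit hole, a rough sketch of that | ||
| | workflow could look like this (which options you have to re-enable depends on | ||
| | your setup, so treat it as a starting point rather than a recipe): | ||
| | |||
| | ```sh | ||
| | make tinyconfig     # start from an extremely small config (too small to boot as-is) | ||
| | make menuconfig     # re-enable 64-bit support, TTY/serial console, initramfs support, ... | ||
| | time make -j `nproc` | ||
| | ls -lh arch/x86_64/boot/bzImage | ||
| | ``` | ||
| | |||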
| 271 | ## Creating ISO image and running it with Gnome Boxes | ||
| 272 | |||
| 273 | First we need to create a proper folder structure with `mkdir -p iso/boot/grub`. | ||
| 274 | |||
| 275 | Then we need to download the [grub binary](https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito). | ||
| 276 | You can read more about this program on https://github.com/littleosbook/littleosbook. | ||
| 277 | |||
| 278 | ```sh | ||
| 279 | $ wget -O iso/boot/grub/stage2_eltorito https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito | ||
| 280 | ``` | ||
| 281 | |||
| 282 | ```sh | ||
| 283 | $ tree iso/boot/ | ||
| 284 | iso/boot/ | ||
| 285 | ├── bzImage | ||
| 286 | ├── grub | ||
| 287 | │ ├── menu.lst | ||
| 288 | │ └── stage2_eltorito | ||
| 289 | └── initramfs | ||
| 290 | ``` | ||
| 291 | |||
| 292 | Let's copy files into proper folders. | ||
| 293 | |||
| 294 | |||
| 295 | ```sh | ||
| 296 | $ cp stage2_eltorito iso/boot/grub/ | ||
| 297 | $ cp bin/bzImage iso/boot/ | ||
| 298 | $ cp bin/initramfs iso/boot/ | ||
| 299 | ``` | ||
| 300 | |||
| 301 | Let's create a GRUB config file at `iso/boot/grub/menu.lst` (e.g. with nano) with the following contents. | ||
| 302 | |||
| 303 | ```ini | ||
| 304 | default=0 | ||
| 305 | timeout=5 | ||
| 306 | |||
| 307 | title GoAsPID1 | ||
| 308 | kernel /boot/bzImage | ||
| 309 | initrd /boot/initramfs | ||
| 310 | ``` | ||
| 311 | |||
| 312 | Let's create the ISO file using genisoimage: | ||
| 313 | |||
| 314 | ```sh | ||
| 315 | genisoimage -R \ | ||
| 316 | -b boot/grub/stage2_eltorito \ | ||
| 317 | -no-emul-boot \ | ||
| 318 | -boot-load-size 4 \ | ||
| 319 | -A os \ | ||
| 320 | -input-charset utf8 \ | ||
| 321 | -quiet \ | ||
| 322 | -boot-info-table \ | ||
| 323 | -o GoAsPID1.iso \ | ||
| 324 | iso | ||
| 325 | ``` | ||
| 326 | |||
| 327 | This will produce `GoAsPID1.iso` which you can use with [Virtualbox](https://www.virtualbox.org/) | ||
| 328 | or [Gnome Boxes](https://apps.gnome.org/app/org.gnome.Boxes/). | ||
| 329 | |||
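| | You can also give the ISO a quick smoke test in QEMU before importing it into a | ||
| | VM manager, mirroring the earlier invocation: | ||
| | |||
| | ```sh | ||
| | qemu-system-x86_64 -cdrom GoAsPID1.iso -m 128 | ||
| | ``` | ||
| | |||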
| 330 | <video src="/assets/pid1/boxes.mp4" controls></video> | ||
| 331 | |||
| 332 | ## Is running applications as PID 1 even worth it? | ||
| 333 | |||
| 334 | Well, the answer to this is not as simple as one would think. Sometimes it is | ||
| 335 | and sometimes it's not. For embedded systems and very specialized applications | ||
| 336 | it is certainly worth it. But for normal use, I don't think so. It was an interesting | ||
| 337 | exercise in compiling kernels and looking at the guts of the Linux kernel, | ||
| 338 | but sticking to containers for most things is a better option in my | ||
| 339 | opinion. | ||
| 340 | |||
| 341 | An interesting experiment would be creating an image that supports networking, | ||
| 342 | deploying it to AWS as an EC2 instance, and observing how it fares. | ||
| 343 | But in that case, we would need to write some sort of supervisor running on a | ||
| 344 | separate EC2 instance that would check whether the other instances are running | ||
| 345 | properly. Remember that if your application fails, the kernel panics and the | ||
| 346 | whole machine becomes inoperable in this case. | ||
| 347 | |||
diff --git a/content/posts/2021-12-30-wap-mobile-web-before-the-web.md b/content/posts/2021-12-30-wap-mobile-web-before-the-web.md deleted file mode 100644 index 6c598fe..0000000 --- a/content/posts/2021-12-30-wap-mobile-web-before-the-web.md +++ /dev/null | |||
| @@ -1,201 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Wireless Application Protocol and the mobile web before the web | ||
| 3 | url: wap-mobile-web-before-the-web.html | ||
| 4 | date: 2021-12-30T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## A little stroll down the history lane | ||
| 9 | |||
| 10 | About two weeks ago, I watched this outstanding documentary on YouTube | ||
| 11 | [Springboard: the secret history of the first real | ||
| 12 | smartphone](https://www.youtube.com/watch?v=b9_Vh9h3Ohw) about the history of | ||
| 13 | smartphones and phones in general. It brought back so many memories. I never had | ||
| 14 | an actual smartphone before Android. The closest thing to a smartphone was the [Sony | ||
| 15 | Ericsson P1](https://www.gsmarena.com/sony_ericsson_p1-1982.php). A fantastic | ||
| 16 | phone. I broke it in Prague after a party, and that was one of those rare | ||
| 17 | occasions where I was actually mad at myself. But nevertheless, after that | ||
| 18 | phone, the next one was an Android one. | ||
| 19 | |||
| 20 | Before that, I only owned normal phones from Nokia, Siemens, etc. Nothing | ||
| 21 | special, actually. These are the phones we are talking about, before 2007. | ||
| 22 | Apple and Android phones didn't exist yet. | ||
| 23 | |||
| 24 | These phones were rocking: | ||
| 25 | |||
| 26 | - No selfie cameras. | ||
| 27 | - ~2 inch displays. | ||
| 28 | - ~120 MHz beast CPUs. | ||
| 29 | - 144p main cameras. | ||
| 30 | - But they had a headphone jack. | ||
| 31 | |||
| 32 | Let's take a look at these beauties. | ||
| 33 | |||
| 34 |  | ||
| 35 | |||
| 36 | ## WAP - Wireless Application Protocol | ||
| 37 | |||
| 38 | Not that one! We are talking about Wireless Application Protocol and not Cardi | ||
| 39 | B's song 😃 | ||
| 40 | |||
| 41 | WAP stands for Wireless Application Protocol. It is a protocol designed for | ||
| 42 | micro-browsers, and it enables internet access on mobile devices. It | ||
| 43 | uses the mark-up language WML (Wireless Markup Language, not HTML); WML is | ||
| 44 | defined as an XML 1.0 application. Furthermore, it enables creating web | ||
| 45 | applications for mobile devices. In 1998, the WAP Forum was founded by Ericsson, | ||
| 46 | Motorola, Nokia and Unwired Planet, whose aim was to standardize the various | ||
| 47 | wireless technologies via protocols. | ||
| 48 | [(source)](https://www.geeksforgeeks.org/wireless-application-protocol/) | ||
| 49 | |||
| 50 | The WAP protocol resulted from the joint efforts of the various members of the WAP | ||
| 51 | Forum. In 2002, the WAP Forum was merged with various other industry forums, | ||
| 52 | resulting in the formation of the Open Mobile Alliance (OMA). | ||
| 53 | [(source)](https://www.geeksforgeeks.org/wireless-application-protocol/) | ||
| 54 | |||
| 55 | These were some wild times. Devices had tiny screens and data transmission rates | ||
| 56 | were abominable. But they were capable of rendering WML (Wireless Markup | ||
| 57 | Language). This was very similar to HTML, actually. It is a markup language, | ||
| 58 | after all. | ||
| 59 | |||
| 60 | These pages could be served by [Apache](https://apache.org/) and could be | ||
| 61 | generated by CGI scripts on the backend. The only difference was the limited | ||
| 62 | markup language. | ||
| 63 | |||
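| | As a rough sketch (not part of any original setup), a CGI script serving WML | ||
| | looks just like one serving HTML, except for the `Content-Type` header: | ||
| | |||
| | ```python | ||
| | #!/usr/bin/env python3 | ||
| | # Hypothetical CGI example: the key detail is the WAP MIME type, | ||
| | # text/vnd.wap.wml, instead of text/html. | ||
| | print("Content-Type: text/vnd.wap.wml") | ||
| | print() | ||
| | print('<?xml version="1.0"?>') | ||
| | print('<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" "http://www.wapforum.org/DTD/wml_1.1.xml">') | ||
| | print('<wml>') | ||
| | print('  <card id="home" title="Hello"><p>Hello from a CGI script</p></card>') | ||
| | print('</wml>') | ||
| | ``` | ||
| | |||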
| 64 | ## WML - Wireless Markup Language | ||
| 65 | |||
| 66 | Just like web browsers use HTML for content structure, older mobile device | ||
| 67 | browsers use WML - if you need to support really old mobile phones using WML | ||
| 68 | browsers, you will need to know about it. WML is XML-based (an XML vocabulary | ||
| 69 | just like XHTML and MathML, but not HTML) and does not use the same metaphor as | ||
| 70 | HTML. HTML is a single document with some metadata packed away in the head, and | ||
| 71 | a body encapsulating the visible page. With WML, the metaphor does not envisage | ||
| 72 | a page, but rather a deck of cards. A WML file might have several pages or cards | ||
| 73 | contained within it. | ||
| 74 | [(source)](https://www.w3.org/wiki/Introduction_to_mobile_web) | ||
| 75 | |||
| 76 | ```html | ||
| 77 | <?xml version="1.0"?> | ||
| 78 | <!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" "http://www.wapforum.org/DTD/wml_1.1.xml"> | ||
| 79 | <wml> | ||
| 80 | <card id="home" title="Example Homepage"> | ||
| 81 | <p>Welcome to the Example homepage</p> | ||
| 82 | </card> | ||
| 83 | </wml> | ||
| 84 | ``` | ||
| 85 | |||
| 86 | There is an amazing tutorial on [Tutorialpoint about | ||
| 87 | WML](https://www.tutorialspoint.com/wml/index.htm). | ||
| 88 | |||
| 89 | ## Converting Digg to WML | ||
| 90 | |||
| 91 | This task is completely useless and not really feasible nowadays, but I had to | ||
| 92 | give it a try for old times' sake. Since the data is already there in the form of | ||
| 93 | an RSS feed, I could take this feed, parse it, and create a WML version of the | ||
| 94 | homepage. | ||
| 95 | |||
| 96 | We will need: | ||
| 97 | |||
| 98 | - Python3 + Pip | ||
| 99 | - ImageMagick | ||
| 100 | - feedparser and mako templating | ||
| 101 | |||
| 102 | ```sh | ||
| 103 | # for fedora 35 | ||
| 104 | sudo dnf install ImageMagick python3-pip | ||
| 105 | |||
| 106 | # templating engine for python | ||
| 107 | pip install mako --user | ||
| 108 | |||
| 109 | # for parsing rss feeds | ||
| 110 | pip install feedparser --user | ||
| 111 | ``` | ||
| 112 | |||
| 113 | Project folder structure should look like the following. | ||
| 114 | |||
| 115 | ``` | ||
| 116 | 12:43:53 m@khan wap → tree -L 1 | ||
| 117 | . | ||
| 118 | ├── generate.py | ||
| 119 | └── template.wml | ||
| 120 | |||
| 121 | ``` | ||
| 122 | |||
| 123 | After that, I created a small template for the homepage. | ||
| 124 | |||
| 125 | ```html | ||
| 126 | <?xml version="1.0"?> | ||
| 127 | <!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.2//EN" "http://www.wapforum.org/DTD/wml_1.2.xml"> | ||
| 128 | |||
| 129 | <wml> | ||
| 130 | |||
| 131 | <card title="Digg - What the Internet is talking about right now"> | ||
| 132 | |||
| 133 | % for item in entries: | ||
| 134 | <p><img src="/images/${item.id}.jpg" width="175" height="95" alt="${item.title}" /></p> | ||
| 135 | <p><small>${item.kicker}</small></p> | ||
| 136 | <p><big><b>${item.title}</b></big></p> | ||
| 137 | <p>${item.description}</p> | ||
| 138 | % endfor | ||
| 139 | |||
| 140 | </card> | ||
| 141 | |||
| 142 | </wml> | ||
| 143 | ``` | ||
| 144 | |||
| 145 | And the parser that parses RSS feed looks like this. | ||
| 146 | |||
| 147 | ```python | ||
| 148 | import os | ||
| 149 | import feedparser | ||
| 150 | from mako.template import Template | ||
| 151 | |||
| 152 | os.system('mkdir -p www/images') | ||
| 153 | |||
| 154 | template = Template(filename='template.wml') | ||
| 155 | |||
| 156 | feed = feedparser.parse('https://digg.com/rss/top.xml') | ||
| 157 | |||
| 158 | entries = feed.entries[:15] | ||
| 159 | |||
| 160 | for entry in entries: | ||
| 161 |     print('Processing image with id {}'.format(entry.id)) | ||
| 162 |     os.system('wget -q -O www/images/{}.jpg "{}"'.format(entry.id, entry.links[1].href)) | ||
| 163 |     os.system('convert www/images/{}.jpg -type Grayscale -resize 175x -depth 3 -quality 30 www/images/{}.jpg'.format(entry.id, entry.id)) | ||
| 164 | |||
| 165 | html = template.render(entries = entries) | ||
| 166 | |||
| 167 | with open('www/index.wml', 'w+') as fp: | ||
| 168 |     fp.write(html) | ||
| 169 | ``` | ||
| 170 | |||
| 171 | This script will create a folder `www` and, inside it, a folder `www/images` for | ||
| 172 | storing the resized images. | ||
| 173 | |||
| 174 | > Be sure you don't use SSL and use just normal HTTP for serving the content. | ||
| 175 | > These old phones will have problems with TLS 1.3 etc. | ||
| 176 | |||
| 177 | If you look at the Python file, I convert all the images into tiny B&W images. | ||
| 178 | They should be WBMP (Wireless BitMaP), but I chose JPEGs for this, and it seems | ||
| 179 | to work properly. | ||
| 180 | |||
| 181 | Because I currently don't have a phone old enough to test it on, I used an | ||
| 182 | emulator. And it was really hard to find one. I found [WAP | ||
| 183 | Proof](http://wap-proof.sharewarejunction.com/) on shareware junction, and it | ||
| 184 | did the job well enough. I will try to find an actual device to test it on. | ||
| 185 | |||
| 186 | <video src="/assets/wap/emulator.mp4" controls></video> | ||
| 187 | |||
| 188 | If you are using Nginx to serve the contents, add a directive to the server block | ||
| 189 | that will automatically serve the `index.wml` file. | ||
| 190 | |||
| 191 | ```nginx | ||
| 192 | server { | ||
| 193 |     index index.wml index.html index.htm index.nginx-debian.html; | ||
| 194 | } | ||
| 195 | ``` | ||
| 196 | |||
| 197 | ## Conclusion | ||
| 198 | |||
| 199 | Well, this was pointless, but very fun! I hope you enjoyed it as much as I did. | ||
| 200 | I will try to find an old phone to test it on. If you have any questions, feel | ||
| 201 | free to ask in the comments. | ||
diff --git a/content/posts/2022-06-30-trying-out-helix-editor.md b/content/posts/2022-06-30-trying-out-helix-editor.md deleted file mode 100644 index 23c1cf3..0000000 --- a/content/posts/2022-06-30-trying-out-helix-editor.md +++ /dev/null | |||
| @@ -1,52 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Trying out Helix code editor as my main editor | ||
| 3 | url: tying-out-helix-code-editor.html | ||
| 4 | date: 2022-06-30T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | I have been searching for a lightweight code editor for quite some time. One of | ||
| 9 | the main reasons was that I wanted something that doesn't burn through CPU and | ||
| 10 | whose RAM usage is not through the roof. I have been mostly using Visual Studio Code. | ||
| 11 | It's been an outstanding editor. I have no quarrel with it at all. It's just | ||
| 12 | time to spice life up with something new. | ||
| 13 | |||
| 14 | I have been on this search for a couple of years. I have tried Vim, Neovim, | ||
| 15 | Emacs, Doom Emacs, Micro and a couple more. Among them, I liked Micro and | ||
| 16 | Doom Emacs the most. The Micro editor was a little too basic for me. And Doom Emacs | ||
| 17 | was a bit too hardcore. This does not reflect on any of the editors. It's just | ||
| 18 | my personal preference. | ||
| 19 | |||
| 20 | > I tried Helix Editor about a year ago. But I didn't pay attention to it. | ||
| 21 | > I tried it, saw it's similar to Vi, and just said no. I was too quick to | ||
| 22 | > dismiss it. | ||
| 23 | |||
| 24 | One of the things I actually miss is line wrapping for certain files. When | ||
| 25 | writing Markdown, line wrapping would be very helpful. Editing such a document | ||
| 26 | is frustrating to say the least. Some of the Markdown to HTML converters don't | ||
| 27 | take kindly to new lines between sentences. Not paragraphs, sentences. And I use | ||
| 28 | Markdown to write this blog you are reading. | ||
| 29 | |||
| 30 | But other than this, I have been extremely satisfied by it. It's been a pleasant | ||
| 31 | surprise. There have been zero issues with the editor. | ||
| 32 | |||
| 33 | One thing to do before you are able to use autocompletion and make use of Language | ||
| 34 | Server support is to install the language server with NPM, for example for TypeScript: | ||
| 35 | |||
| 36 | ```sh | ||
| 37 | npm install -g typescript typescript-language-server | ||
| 38 | ``` | ||
| 39 | |||
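| | To check what Helix detects for a given language (assuming the binary is | ||
| | installed as `hx`), you can run: | ||
| | |||
| | ```sh | ||
| | # prints the configured language server, debug adapter and highlighting status | ||
| | hx --health typescript | ||
| | ``` | ||
| | |||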
| 40 | I am still getting used to the keyboard shortcuts and getting better. What Helix | ||
| 41 | does really well is packing in sane defaults, and even though there is currently | ||
| 42 | no plugin support, I haven't found any need for plugins. It has all that | ||
| 43 | you would need. It goes to great lengths to show the user what is going on, with | ||
| 44 | popups that show you what the keyboard shortcuts are. | ||
| 45 | |||
| 46 | And it comes packed with many | ||
| 47 | [really good themes](https://github.com/helix-editor/helix/wiki/Themes). | ||
| 48 | |||
| 49 |  | ||
| 50 | |||
| 51 | It's still young but has this mature feeling to it. It has sane defaults and | ||
| 52 | mimics Vim (works a bit differently, but the overall idea is similar). | ||
diff --git a/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md b/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md deleted file mode 100644 index e26088b..0000000 --- a/content/posts/2022-07-05-what-would-dna-sound-if-synthesized.md +++ /dev/null | |||
| @@ -1,363 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: What would DNA sound if synthesized to an audio file | ||
| 3 | url: what-would-dna-sound-if-synthesized.html | ||
| 4 | date: 2022-07-05T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## Introduction | ||
| 9 | |||
| 10 | Lately, I have been thinking a lot about the nature of life, what the | ||
| 11 | foundation blocks of life are, and things like that. It's remarkable how complex | ||
| 12 | and, on the other hand, simple creation is when you look at it. The miracle of | ||
| 13 | life keeps us grounded when our imagination goes wild. If DNA is the building block | ||
| 14 | of life, you could consider it an API nature provided us with to better | ||
| 15 | understand all of this chaos masquerading as order. | ||
| 16 | |||
| 17 | I have been reading a lot about superintelligence and our somewhat misguided path | ||
| 18 | to create general artificial intelligence. What would the building blocks of our | ||
| 19 | creation look like? Is compression really the ultimate storage of | ||
| 20 | information? Will our creations also ponder these questions when creating new | ||
| 21 | worlds for themselves, or will we just disappear into the vastness of | ||
| 22 | possibilities? It is a little offensive that we are playing God whilst being | ||
| 23 | completely ignorant of our own reality. Who knows! Like many other | ||
| 24 | breakthroughs, this one will also come at a cost not known to us when it finally | ||
| 25 | happens. | ||
| 26 | |||
| 27 | To keep things a bit lighter, I decided to convert some popular DNA sequences | ||
| 28 | into audio files for us to listen to. I am not the first one, nor will I be | ||
| 29 | the last one to do this. But it is an interesting exercise in better | ||
| 30 | understanding the relationship between art and science. Maybe listening to DNA | ||
| 31 | instead of parsing it will open a way to better understanding, or at least | ||
| 32 | enjoying the creation and cryptic nature of life. | ||
| 33 | |||
| 34 | ## DNA encoding and primer example | ||
| 35 | |||
| 36 | I have been exploring DNA in the past in my post from about 3 years ago in | ||
| 37 | [Encoding binary data into DNA | ||
| 38 | sequence](/encoding-binary-data-into-dna-sequence.html) where I have been | ||
| 39 | converting all sorts of data into DNA sequences. | ||
| 40 | |||
| 41 | This will be a similar exercise but instead of converting to DNA, I will be | ||
| 42 | generating tones from Nucleotides. | ||
| 43 | |||
| 44 | | Nucleotides | Note | Frequency | | ||
| 45 | | ---------------- | ---- | --------- | | ||
| 46 | | **A** (Adenine) | A | 440 Hz | | ||
| 47 | | **C** (Cytosine) | C | 523.25 Hz | | ||
| 48 | | **G** (Guanine) | G | 783.99 Hz | | ||
| 49 | | **T** (Thymine) | D | 587.33 Hz | | ||
| 50 | |||
| 51 | Since we do not have a T in the equal-tempered scale, I chose D to represent the T note. | ||
| 52 | |||
| 53 | You can check [Frequencies for equal-tempered scale, A4 = 440 | ||
| 54 | Hz](https://pages.mtu.edu/~suits/notefreqs.html). For this tuning, we also | ||
| 55 | choose `Speed of Sound = 345 m/s = 1130 ft/s = 770 miles/hr`. | ||
| 56 | |||
| 57 | Now that we have this out of the way, we can also brush up on the DNA sequencing | ||
| 58 | a bit. This is a famous quote I also used for the encoding tests, and it goes | ||
| 59 | like this. | ||
| 60 | |||
| 61 | > How wonderful that we have met with a paradox. Now we have some hope of | ||
| 62 | > making progress. | ||
| 63 | > ― Niels Bohr | ||
| 64 | |||
| 65 | ```shell | ||
| 66 | >SEQ1 | ||
| 67 | GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA | ||
| 68 | GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA | ||
| 69 | ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA | ||
| 70 | ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT | ||
| 71 | GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT | ||
| 72 | GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC | ||
| 73 | AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC | ||
| 74 | AACC | ||
| 75 | ``` | ||
| 76 | |||
| 77 | This is what we are going to work with to get things rolling, when creating | ||
| 78 | the parser and waveform generator. | ||
| 79 | |||
| 80 | ## Parsing DNA data | ||
| 81 | |||
| 82 | This step is a rather simple one. All we need to do is parse the input DNA sequence in | ||
| 83 | [FASTA format](https://en.wikipedia.org/wiki/FASTA_format), well known in | ||
| 84 | [Bioinformatics](https://en.wikipedia.org/wiki/Bioinformatics), to extract single | ||
| 85 | Nucleotides that will be converted into separate tones based on the equal-tempered | ||
| 86 | scale explained above. | ||
| 87 | |||
| 88 | ```python | ||
| 89 | nucleotide_tone_map = { | ||
| 90 |     'A': 440, | ||
| 91 |     'C': 523.25, | ||
| 92 |     'G': 783.99, | ||
| 93 |     'T': 587.33,  # converted to D | ||
| 94 | } | ||
| 95 | |||
| 96 | def split(word): | ||
| 97 |     return [char for char in word] | ||
| 98 | |||
| 99 | # `sequence` is assumed to be the plain nucleotide string, with the FASTA | ||
| 100 | # header line (e.g. >SEQ1) and newlines already stripped | ||
| 101 | def generate_from_dna_sequence(sequence): | ||
| 102 |     for nucleotide in split(sequence): | ||
| 103 |         print(nucleotide, nucleotide_tone_map[nucleotide]) | ||
| 102 | ``` | ||
| 103 | |||
| 104 | ## Generating sine wave | ||
| 105 | |||
| 106 | Because we are essentially creating a long stream of notes, we will be appending | ||
| 107 | sine-wave notes to a global array that we will later use for creating a WAV file out of | ||
| 108 | it. | ||
| 109 | |||
| 110 | ```python | ||
| 111 | import math | ||
| 112 | |||
| 113 | audio = []           # accumulated samples for the whole sequence | ||
| 114 | sample_rate = 44100  # samples per second (CD quality) | ||
| 115 | |||
| 116 | def append_sinewave(freq=440.0, duration_milliseconds=500, volume=1.0): | ||
| 117 |     global audio | ||
| 118 | |||
| 119 |     num_samples = duration_milliseconds * (sample_rate / 1000.0) | ||
| 120 | |||
| 121 |     for x in range(int(num_samples)): | ||
| 122 |         audio.append(volume * math.sin(2 * math.pi * freq * (x / sample_rate))) | ||
| 123 | |||
| 124 |     return | ||
| 122 | ``` | ||
| 123 | |||
| 124 | The sine wave generated here is the standard beep. If you want something more | ||
| 125 | aggressive, you could try a square or sawtooth waveform. | ||
| 126 | |||
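| | For example, a hedged sketch of a square-wave variant (just the same sine | ||
| | clipped to plus/minus the volume, not something from the original script) could | ||
| | look like this: | ||
| | |||
| | ```python | ||
| | def append_squarewave(freq=440.0, duration_milliseconds=500, volume=1.0): | ||
| |     # same idea as append_sinewave, but every sample is clipped to +/- volume, | ||
| |     # which produces the harsher, buzzier square-wave sound | ||
| |     global audio | ||
| |     num_samples = duration_milliseconds * (sample_rate / 1000.0) | ||
| |     for x in range(int(num_samples)): | ||
| |         sample = math.sin(2 * math.pi * freq * (x / sample_rate)) | ||
| |         audio.append(volume if sample >= 0 else -volume) | ||
| | ``` | ||
| | |||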
| 127 | ## Generating a WAV file from accumulated sine waves | ||
| 128 | |||
| 129 | |||
| 130 | ```python | ||
| 131 | import wave | ||
| 132 | import struct | ||
| 133 | |||
| 134 | def save_wav(file_name): | ||
| 135 |     wav_file = wave.open(file_name, 'w') | ||
| 136 |     nchannels = 1 | ||
| 137 |     sampwidth = 2 | ||
| 138 | |||
| 139 |     nframes = len(audio) | ||
| 140 |     comptype = 'NONE' | ||
| 141 |     compname = 'not compressed' | ||
| 142 |     wav_file.setparams((nchannels, sampwidth, sample_rate, nframes, comptype, compname)) | ||
| 143 | |||
| 144 |     for sample in audio: | ||
| 145 |         wav_file.writeframes(struct.pack('h', int(sample * 32767.0))) | ||
| 146 | |||
| 147 |     wav_file.close() | ||
| 148 | ``` | ||
| 149 | |||
| 150 | 44100 is the industry-standard sample rate - CD quality. If you need to save on | ||
| 151 | file size, you can adjust it downwards. The standard for low quality is 8000 Hz, | ||
| 152 | or 8 kHz. | ||
| 153 | |||
| 154 | WAV files here use short, 16-bit, signed integers for the sample size. | ||
| 155 | So, we multiply the floating-point data we have by 32767, the maximum value for | ||
| 156 | a short integer. | ||
| 157 | |||
| 158 | > It is theoretically possible to use the floating point -1.0 to 1.0 data | ||
| 159 | > directly in a WAV file, but not obvious how to do that using the wave module | ||
| 160 | > in Python. | ||
| 161 | |||
| 162 | ## Generating Spectrograms | ||
| 163 | |||
| 164 | I have tried two methods of doing this and both were just fine. I, however, opted | ||
| 165 | to use the [SoX - Sound eXchange, the Swiss Army knife of audio | ||
| 166 | manipulation](https://linux.die.net/man/1/sox) one because it didn't require | ||
| 167 | anything else. | ||
| 168 | |||
| 169 | ```shell | ||
| 170 | sox output.wav -n spectrogram -o spectrogram.png | ||
| 171 | ``` | ||
| 172 | |||
| 173 | An example spectrogram of Ludwig van Beethoven's Symphony No. 6, first movement. | ||
| 174 | |||
| 175 | <audio controls> | ||
| 176 | <source src="/assets/dna-synthesized/symphony-no6-1st-movement.mp3" type="audio/mpeg"> | ||
| 177 | </audio> | ||
| 178 | |||
| 179 |  | ||
| 180 | |||
| 181 | The other option could also be in combination with | ||
| 182 | [gnuplot](http://www.gnuplot.info/). This would require an intermediary step, | ||
| 183 | however. | ||
| 184 | |||
| 185 | ```shell | ||
| 186 | sox output.wav audio.dat | ||
| 187 | tail -n+3 audio.dat > audio_only.dat | ||
| 188 | gnuplot audio.gpi | ||
| 189 | ``` | ||
| 190 | |||
| 191 | And input file `audio.gpi` that would be passed to gnuplot looks something like | ||
| 192 | this. | ||
| 193 | |||
| 194 | ``` | ||
| 195 | # set output format and size | ||
| 196 | set term png size 1000,280 | ||
| 197 | |||
| 198 | # set output file | ||
| 199 | set output "audio.png" | ||
| 200 | |||
| 201 | # set y range | ||
| 202 | set yr [-1:1] | ||
| 203 | |||
| 204 | # we want just the data | ||
| 205 | unset key | ||
| 206 | unset tics | ||
| 207 | unset border | ||
| 208 | set lmargin 0 | ||
| 209 | set rmargin 0 | ||
| 210 | set tmargin 0 | ||
| 211 | set bmargin 0 | ||
| 212 | |||
| 213 | # draw rectangle to change background color | ||
| 214 | set obj 1 rectangle behind from screen 0,0 to screen 1,1 | ||
| 215 | set obj 1 fillstyle solid 1.0 fillcolor rgbcolor "#ffffff" | ||
| 216 | |||
| 217 | # draw data with foreground color | ||
| 218 | plot "audio_only.dat" with lines lt rgb 'red' | ||
| 219 | ``` | ||
| 220 | |||
| 221 | ## Pre-generated sequences | ||
| 222 | |||
| 223 | What I did was take interesting parts of an animal's genome and feed them to the | ||
| 224 | tone generator script. This generated WAV files, which I converted to | ||
| 225 | MP3 so they can be played in a browser. The last step was creating a | ||
| 226 | spectrogram based on each WAV file. | ||
| 227 | |||
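| | A hypothetical end-to-end run for one sequence could look like this (the script | ||
| | name is a placeholder for the snippets above glued together, and ffmpeg is | ||
| | assumed to be built with MP3 support): | ||
| | |||
| | ```sh | ||
| | python3 synthesize.py                             # parse the FASTA file, write output.wav | ||
| | ffmpeg -i output.wav out.mp3                      # convert for in-browser playback | ||
| | sox output.wav -n spectrogram -o spectrogram.png  # render the spectrogram | ||
| | ``` | ||
| | |||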
| 228 | ### Niels Bohr quote | ||
| 229 | |||
| 230 | <audio controls> | ||
| 231 | <source src="/assets/dna-synthesized/quote/out.mp3" type="audio/mpeg"> | ||
| 232 | </audio> | ||
| 233 | |||
| 234 |  | ||
| 235 | |||
| 236 | ### Mouse | ||
| 237 | |||
| 238 | This is part of a mouse genome `Mus_musculus.GRCm39.dna.nonchromosomal`. You | ||
| 239 | can get [genome data | ||
| 240 | here](http://ftp.ensembl.org/pub/release-106/fasta/mus_musculus/dna/). | ||
| 241 | |||
| 242 | <audio controls> | ||
| 243 | <source src="/assets/dna-synthesized/mouse/out.mp3" type="audio/mpeg"> | ||
| 244 | </audio> | ||
| 245 | |||
| 246 |  | ||
| 247 | |||
| 248 | ### Bison | ||
| 249 | |||
| 250 | This is part of a bison genome `Bison_bison_bison.Bison_UMD1.0.cdna`. You can | ||
| 251 | get [genome data | ||
| 252 | here](http://ftp.ensembl.org/pub/release-106/fasta/bison_bison_bison/cdna/). | ||
| 253 | |||
| 254 | <audio controls> | ||
| 255 | <source src="/assets/dna-synthesized/bison/out.mp3" type="audio/mpeg"> | ||
| 256 | </audio> | ||
| 257 | |||
| 258 |  | ||
| 259 | |||
| 260 | ### Taurus | ||
| 261 | |||
| 262 | This is part of a taurus genome `Bos_taurus.ARS-UCD1.2.cdna`. You can get | ||
| 263 | [genome data | ||
| 264 | here](http://ftp.ensembl.org/pub/release-106/fasta/bos_taurus/cdna/). | ||
| 265 | |||
| 266 | <audio controls> | ||
| 267 | <source src="/assets/dna-synthesized/taurus/out.mp3" type="audio/mpeg"> | ||
| 268 | </audio> | ||
| 269 | |||
| 270 |  | ||
| 271 | |||
| 272 | ## Making a drummer out of a DNA sequence | ||
| 273 | |||
| 274 | To make things even more interesting, I decided to send this data via MIDI to my | ||
| 275 | [Elektron Model:Samples](https://www.elektron.se/en/model-samples). This is a | ||
| 276 | really cool piece of equipment that supports MIDI in via USB and 3.5 mm audio | ||
| 277 | jack. | ||
| 278 | |||
| 279 | Elektron is connected to my MacBook via USB cable and audio out is patched to a | ||
| 280 | Sony Bluetooth speaker I have that supports 3.5 mm audio in. Elektron doesn't | ||
| 281 | have internal speakers. | ||
| 282 | |||
| 283 |  | ||
| 284 | |||
| 285 |  | ||
| 286 | |||
| 287 |  | ||
| 288 | |||
| 289 | For communicating with the Elektron, I chose the `pygame` Python module that has MIDI | ||
| 290 | built in. With this, it was rather simple to send notes to the device. All I did | ||
| 291 | was map MIDI notes to the actual Nucleotides. | ||
| 292 | |||
| 293 | Before all of this I also opened the Audio MIDI Setup app under macOS and checked | ||
| 294 | MIDI Studio by pressing ⌘-2. | ||
| 295 | |||
| 296 |  | ||
| 297 | |||
| 298 | The whole script that parses and sends notes to the Elektron looks like this. | ||
| 299 | |||
| 300 | ```python | ||
| 301 | import pygame.midi | ||
| 302 | import time | ||
| 303 | |||
| 304 | pygame.midi.init() | ||
| 305 | |||
| 306 | print(pygame.midi.get_default_output_id()) | ||
| 307 | print(pygame.midi.get_device_info(0)) | ||
| 308 | |||
| 309 | player = pygame.midi.Output(1) | ||
| 310 | player.set_instrument(2) | ||
| 311 | |||
| 312 | def send_note(note, velocity): | ||
| 313 |     global player | ||
| 314 |     player.note_on(note, velocity) | ||
| 315 |     time.sleep(0.3) | ||
| 316 |     player.note_off(note, velocity) | ||
| 317 | |||
| 318 | |||
| 319 | nucleotide_midi_map = { | ||
| 320 |     'A': 60, | ||
| 321 |     'C': 90, | ||
| 322 |     'G': 160, | ||
| 323 |     'T': 180, # is D | ||
| 324 | } | ||
| 325 | |||
| 326 | with open("quote.fa") as f: | ||
| 327 |     sequence = f.read().replace('\n', '') | ||
| 328 | |||
| 329 | for nucleotide in [char for char in sequence]: | ||
| 330 | print("Playing nucleotide {} with MIDI note {}".format( | ||
| 331 | nucleotide, nucleotide_midi_map[nucleotide])) | ||
| 332 | send_note(nucleotide_midi_map[nucleotide], 127) | ||
| 333 | |||
| 334 | del player | ||
| 335 | pygame.midi.quit() | ||
| 336 | ``` | ||
| 337 | |||
| 338 | <video src="/assets/dna-synthesized/elektron/elektron.mp4" controls></video> | ||
| 339 | |||
| 340 | All of this could be made much more interesting if I chose different | ||
| 341 | instruments for different Nucleotides, or did more funky stuff with the Elektron. | ||
| 342 | But for now, this should be enough. It is just a proof of concept. Something to | ||
| 343 | play around with. | ||
| 344 | |||
| 345 | ## Going even further | ||
| 346 | |||
| 347 | As you probably noticed, the end results are quite similar to each other. This is | ||
| 348 | to be expected because we are essentially operating with only 4 notes. | ||
| 349 | could make this more interesting is using something like | ||
| 350 | [Supercollider](https://supercollider.github.io/) to create more interesting | ||
| 351 | sounds. By transposing notes or using effects based on repeated data in a | ||
| 352 | sequence. Possibilities are endless. | ||
| 353 | |||
| 354 | It is really astonishing what can be achieved with a little bit of code and an | ||
| 355 | idea. I could see this becoming an interesting background soundscape instrument | ||
| 356 | if done properly. It could replace a random note generator with something more | ||
| 357 | intriguing, biological, natural. | ||
| 358 | |||
| 359 | I actually find the results fascinating. I took some time and listened to this | ||
| 360 | music of nature. Even though it's quite the same, it's also quite different. | ||
| 361 | The subtle differences on repeat kind of create music of their own. Makes you | ||
| 362 | wonder. It kind of puts Occam’s Razor in its place. Nature for sure loves to | ||
| 363 | make things as energy efficient as possible. | ||
diff --git a/content/posts/2022-08-13-algae-spotted-on-river-sava.md b/content/posts/2022-08-13-algae-spotted-on-river-sava.md deleted file mode 100644 index e82e364..0000000 --- a/content/posts/2022-08-13-algae-spotted-on-river-sava.md +++ /dev/null | |||
| @@ -1,30 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Aerial photography of algae spotted on river Sava | ||
| 3 | url: aerial-photography-of-algae-spotted-on-river-sava.html | ||
| 4 | date: 2022-08-13T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | This is a bit of a different post than I usually write, but quite an interesting | ||
| 9 | one to me. The river Sava has plenty of hydropower plants located along its course. | ||
| 10 | This makes regulating the strength of the current easier than normal. Because of | ||
| 11 | lower stream strength and high temperatures, algae have formed on the river. | ||
| 12 | This is the first time I've seen something like this in my whole life. | ||
| 13 | |||
| 14 | Below are some photographs taken from a DJI drone capturing the event. | ||
| 15 | |||
| 16 |  | ||
| 17 | |||
| 18 |  | ||
| 19 | |||
| 20 |  | ||
| 21 | |||
| 22 |  | ||
| 23 | |||
| 24 |  | ||
| 25 | |||
| 26 |  | ||
| 27 | |||
| 28 | I will try to get more photos of this in the coming days, and if something | ||
| 29 | intriguing shows up, I will post it again on the blog. | ||
| 30 | |||
diff --git a/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md b/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md deleted file mode 100644 index 78595fa..0000000 --- a/content/posts/2022-10-06-state-of-web-technologies-in-year-2022.md +++ /dev/null | |||
| @@ -1,303 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: State of Web Technologies and Web development in year 2022 | ||
| 3 | url: state-of-web-technologies-and-web-development-in-year-2022.html | ||
| 4 | date: 2022-10-06T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## Initial thoughts | ||
| 9 | |||
| 10 | *This post is a critique of the current state of web development. It is an | ||
| 11 | opinionated post! I will learn more about this in the future, and probably | ||
| 12 | slightly change my mind about some of the things I criticize.* | ||
| 13 | |||
| 14 | I started working on a hobby project about two weeks ago, and I wanted to | ||
| 15 | use that situation as a learning one. Trying new things, new technologies, new | ||
| 16 | tools. I always considered myself to be an adventurous person when it comes to | ||
| 17 | technology. I never shy away from trying new languages, new operating systems | ||
| 18 | etc. Likewise, I find the whole experience satisfying, and it tickles that part | ||
| 19 | of my brain that finds discovery the highest of the mountains to climb. | ||
| 20 | |||
| 21 | What I always wanted to make was a coding game that you would play in a browser | ||
| 22 | (just to eliminate building binaries for each operating system) where you would | ||
| 23 | level up your character and go into these scriptable battles. You know, RPG | ||
| 24 | elements. | ||
| 25 | |||
| 26 | So, the natural way to go would be some sort of SPA (single page application) | ||
| 27 | with basic routing and some state management. Nothing crazy. | ||
| 28 | |||
| 29 | > **Before we move on**, I have to be transparent. Take my views on this with | ||
| 30 | > a grain of salt. I have only scratched the surface with these technologies, | ||
| 31 | > and my knowledge is full of gaps. This is my experience using some of these | ||
| 32 | > products for the first time or in a limited capacity. | ||
| 33 | |||
| 34 | Having this out of the way, I got myself a fresh pot of coffee and down the | ||
| 35 | rabbit hole I went. | ||
| 36 | |||
| 37 | ## Giving React JS a spin | ||
| 38 | |||
| 39 | I first tried [React JS](https://reactjs.org/). I kind of like it. Furthermore, | ||
| 40 | I have worked with libraries like this in the past and also wrote a couple of | ||
| 41 | them (nothing compared to that level), but I had the basic understanding of what | ||
| 42 | was going on. I rolled up a project quickly and had basic things done in a | ||
| 43 | matter of two hours, which was impressive. | ||
| 44 | |||
| 45 | I prefer using [Tailwind CSS](https://tailwindcss.com/) for my styling | ||
| 46 | pleasures, and integrating that was also a painless experience. It was actually | ||
| 47 | nice to see that some things got better with time. In about 2 minutes I got | ||
| 48 | Tailwind working, and I was able to use classes at my disposal. All that | ||
| 49 | `postcss` stuff was taken care of by adding a couple of things in config files | ||
| 50 | (all described really well in their documentation). | ||
| 51 | |||
| 52 | It is not that different from Vue, which I have had more encounters with in the | ||
| 53 | past. People will probably call me a lunatic for saying this. But you know, it is | ||
| 54 | the truth. Same same, but different. I still believe that using libraries like | ||
| 55 | this is beneficial. I am not a JavaScript purist. They all have their quirks, | ||
| 56 | but at the end of the day, I truly believe it’s worth it. | ||
| 57 | |||
| 58 | ## Bundlers and Transpilers | ||
| 59 | |||
| 60 | I still reject calling [Typescript](https://www.typescriptlang.org/) to | ||
| 61 | [JavaScript](https://www.javascript.com/) conversion a "compilation process". I | ||
| 62 | call them [transpilers](https://devopedia.org/transpiler), and I don’t care! 😈 | ||
| 63 | |||
| 64 | And if you want to fight this, take a look at this little chart and be mad at | ||
| 65 | it! | ||
| 66 | |||
| 67 |  | ||
| 68 | |||
| 69 | The first one that I ever used was [webpack](https://webpack.js.org/), and it | ||
| 70 | was an absolutely horrific experience. That said, it is an absolutely fantastic | ||
| 71 | tool. I felt more like a config editor than an actual programmer. To be fair, | ||
| 72 | I am a huge fan of [make](https://www.gnu.org/software/make/), and you can do as | ||
| 73 | you wish with this information. I like my build systems simple. | ||
| 74 | |||
| 75 | Also, isn’t it interesting that we need something like | ||
| 76 | [Babel](https://babeljs.io/) to make JavaScript code work in a browser that has | ||
| 77 | only one client side scripting available, which is by no accident also | ||
| 78 | JavaScript. Why? I know why it’s needed, but seriously, why. | ||
| 79 | |||
| 80 | I haven’t used Babel for years now. Or if I did, it was packaged together by | ||
| 81 | some other bundler thingy. Which does not make things better, but at least I | ||
| 82 | didn’t need to worry about it. | ||
| 83 | |||
| 84 | I really don’t like complicated build systems. I really don’t like abstracting | ||
| 85 | code and making things appear magical. The older I get, the more I appreciate | ||
| 86 | clear and clean, expressive code. No one-liners, if possible. | ||
| 87 | |||
| 88 | But I have to give props to [Vite](https://vitejs.dev/)! This was one of the | ||
| 89 | best developer experiences I have ever had. Granted, it still has magical | ||
| 90 | properties. And yes, it still is a bundler and abstracts things to the nth | ||
| 91 | degree. But at least it didn’t force me to configure 700 lines of JSON. And I | ||
| 92 | know that this makes me a hypocrite. You can’t have it all. Nonetheless, my | ||
| 93 | reasoning here is, if using bundlers is inevitable, then at least they should | ||
| 94 | provide an excellent developer experience. | ||
| 95 | |||
| 96 | I also noticed that now the catch-all phrase is “blazingly fast” and “lightning | ||
| 97 | fast” and “next generation” and stuff like that. I mean, yeah, tools should get | ||
| 98 | faster with time. But claiming that a project now starting in 2 seconds instead | ||
| 99 | of 20 is some kind of make-or-break deal is | ||
| 100 | ridiculous. I don’t mind waiting a couple of seconds every couple of days. I | ||
| 101 | also don’t create 700 projects every day, and who does? This argument has | ||
| 102 | no bite. All I want is a decent reload time (~100ms is more than good enough for | ||
| 103 | me) and that is it. | ||
| 104 | |||
| 105 | You don’t need to sell me benefits if I only get them when I start a fresh | ||
| 106 | project, and then try to convince me that this is somehow changing the fate of | ||
| 107 | the universe. First of all, it is not. And second, if this is your only argument | ||
| 108 | for your tool, I would advise you to maybe re-focus your efforts on something | ||
| 109 | else. Vite says that startup times are really fast. And if that were the | ||
| 110 | only thing differentiating it from other tools, I would ignore it. But it has | ||
| 111 | some really compelling features like [Hot Module | ||
| 112 | Replacement](https://www.geeksforgeeks.org/reactjs-hot-module-replacement/) that | ||
| 113 | really works well. It was a joy to use. | ||
| 114 | |||
| 115 | So, I will be definitely using Vite in the future. | ||
| 116 | |||
| 117 | ## Jam Stack, Mach Stack no snack | ||
| 118 | |||
| 119 | Let's get a couple of the acronyms out of the way, so we all know what we are | ||
| 120 | talking about: | ||
| 121 | |||
| 122 | - Jam Stack - JavaScript, API and Markup | ||
| 123 | - Mach Stack - Microservices, API-first, Cloud-Native SaaS, Headless | ||
| 124 | |||
| 125 | It is so hard to follow all these new trendy things happening around you that | ||
| 126 | it gives you massive **FOMO** all the time. But on the other hand, you | ||
| 127 | also don’t want to be that old fart who doesn’t move with the times and still | ||
| 128 | writes his trusty jQuery code while listening to Blink-182’s “All the Small Things” | ||
| 129 | on full blast. It’s a good song, don’t get me wrong, but there are other songs | ||
| 130 | out there. | ||
| 131 | |||
| 132 | I have to admit, [Vercel](https://vercel.com/) is really cool! Love the | ||
| 133 | simplicity of the service. You could compare it to | ||
| 134 | [Netlify](https://www.netlify.com/). I haven’t tried Netlify extensively, but | ||
| 135 | from a couple of experimental deployments I still prefer Vercel. It is much more | ||
| 136 | streamlined, but maybe that is just my bias. I really like Vercel’s Analytics, | ||
| 137 | which gives you a [Core Web Vitals report](https://web.dev/vitals/) in their | ||
| 138 | admin console. Kind of cool, I’m not going to lie. | ||
| 139 | |||
| 140 | This whole idea about frontend and backend merging into [SSR (server-side | ||
| 141 | rendering)](https://www.debugbear.com/blog/server-side-rendering) looks so good | ||
| 142 | on paper. It almost doesn’t come with any major flaws. | ||
| 143 | |||
| 144 | But when it comes to the actual implementation, there is much to be desired. | ||
| 145 | I’m going to lump [Next.js](https://nextjs.org/) and | ||
| 146 | [Nuxt.js](https://nuxtjs.org/) together because they are essentially the same | ||
| 147 | thing, just a different library. | ||
| 148 | |||
| 149 | Now comes the reality. Mixing backend and frontend in this manner creates this | ||
| 150 | weird mental model where you kind of rely on magical properties of these | ||
| 151 | libraries. You relinquish control over to them for a better developer experience. | ||
| 152 | But is that really true? Initially, I was so stoked about it. However, the more | ||
| 153 | I used them, the more I felt uncomfortable. I felt dirty, actually. Maybe this | ||
| 154 | is because I come from the old ways of doing things where you control every step of | ||
| 155 | the request, and allowing something to hijack it feels like blasphemy. | ||
| 156 | |||
| 157 | More than that, some pretty significant technical issues arose from this. How do | ||
| 158 | you do JWT token authentication? You put it in the `api` folder and then do some | ||
| 159 | fetching and store the token in local state management. But doing this also requires | ||
| 160 | some tinkering with async/await stuff on the React/Vue side of things. And then | ||
| 161 | you need to write middleware for it. And the more I look at it, the more I see | ||
| 162 | that this whole thing was not meant to be used like this, and it all feels and | ||
| 163 | looks like a huge hack. | ||
| 164 | |||
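| | To make the complaint concrete, that pattern usually ends up looking roughly like | ||
| | the route below. This is a hedged sketch, assuming Next.js `pages/api` routes and | ||
| | the `jsonwebtoken` package; the route name and the secret handling are made up for | ||
| | illustration, not taken from any real project. | ||
| | |||
| | ```typescript | ||
| | // pages/api/me.ts: verify a JWT on the "backend half" of the app. | ||
| | import type { NextApiRequest, NextApiResponse } from "next"; | ||
| | import jwt from "jsonwebtoken"; | ||
| | |||
| | export default function handler(req: NextApiRequest, res: NextApiResponse) { | ||
| |   const auth = req.headers.authorization ?? ""; | ||
| |   const token = auth.startsWith("Bearer ") ? auth.slice(7) : null; | ||
| | |||
| |   if (!token) { | ||
| |     return res.status(401).json({ error: "Missing token" }); | ||
| |   } | ||
| | |||
| |   try { | ||
| |     // Illustrative secret; in reality this comes from your deployment config. | ||
| |     const payload = jwt.verify(token, process.env.JWT_SECRET as string); | ||
| |     return res.status(200).json({ user: payload }); | ||
| |   } catch { | ||
| |     return res.status(401).json({ error: "Invalid token" }); | ||
| |   } | ||
| | } | ||
| | ``` | ||
| | |||
| | And the React/Vue side still has to await a fetch to this route and stash the | ||
| | result into local state, which is exactly where the middleware and async/await | ||
| | tinkering creeps in. | ||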
| 165 | The issue I have with this is that they over-promise and under-deliver. They | ||
| 166 | want to be an all-in-one replacement for everything, and they don’t deliver on | ||
| 167 | this promise. And how could they?! We have to be fair. It is an impossible task. | ||
| 168 | |||
| 169 | They sell you [NoOps](https://www.geeksforgeeks.org/overview-of-noops/), but | ||
| 170 | when you need to accomplish something a little bit beyond the scope of | ||
| 171 | Hello World, you have to make hacky decisions to make it work. And having a | ||
| 172 | deployment strategy that relies on many moving parts is never a good idea. | ||
| 173 | Abstracting too much is usually a sign of bad architecture. | ||
| 174 | |||
| 175 | Lately, this has become a huge trend that will for sure bite us in the future. | ||
| 176 | And let’s not get it twisted. By doing this, PaaS providers like | ||
| 177 | [AWS](https://aws.amazon.com/), [GCP](https://cloud.google.com/), etc. obscure | ||
| 178 | their billing, and you end up paying more than you really should. And even if | ||
| 179 | that is not an issue, it comes down to the principle of things. AWS is known for | ||
| 180 | having multiple “currencies“ inside their projects like write operations, read | ||
| 181 | operations, etc., which add up and create this impossible-to-track billing | ||
| 182 | scheme. It all behaves suspiciously like a pay-to-win game you could find on | ||
| 183 | mobile phones that scams you out of your money. | ||
| 184 | |||
| 185 | And as far as I am concerned, the most important thing was that I was not coding the | ||
| 186 | functionality for the game I want to make. I was battling libraries and cloud | ||
| 187 | providers instead: how to deploy, which settings are relevant, bad documentation, or | ||
| 188 | multiple ways of achieving the same thing. You are getting bombarded by all | ||
| 189 | this information, and you don’t really have any control over it. | ||
| 190 | Production-ready code becomes a joke, essentially. Especially if you tend to | ||
| 191 | work on that project for a prolonged period of time. | ||
| 192 | |||
| 193 | All of these options end up creating fatigue. What to choose, what not to | ||
| 194 | choose. Unnecessary worrying about whether the stack will still be deemed worthy in | ||
| 195 | six months. There is elegance in simplicity. | ||
| 196 | |||
| 197 | > JavaScript UI frameworks and libraries work in cycles. Every six months or | ||
| 198 | > so, a new one pops up, claiming that it has revolutionized UI development. | ||
| 199 | > Thousands of developers adopt it into their new projects, blog posts are | ||
| 200 | > written, Stack Overflow questions are asked and answered, and then a newer | ||
| 201 | > (and even more revolutionary) framework pops up to usurp the throne. | ||
| 202 | > — Ian Allen | ||
| 203 | |||
| 204 |  | ||
| 205 | |||
| 206 | And this jab at these libraries and cloud providers is not done out of malice. | ||
| 207 | It is a real concern that I have about them. In my life, I have seen | ||
| 208 | technologies come and go, but the basics always stick around. So surrendering | ||
| 209 | all the power you have to a library or a cloud provider is in my opinion a | ||
| 210 | stupid move. | ||
| 211 | |||
| 212 | ## Tailwind CSS still rocks! | ||
| 213 | |||
| 214 | You know, many people say negative things about Tailwind. And after a lot of | ||
| 215 | deliberation, I came to the conclusion that Tailwind is good for two types of | ||
| 216 | developers: a complete noob or a senior developer. A | ||
| 217 | complete noob doesn’t really care about the inner workings of CSS, and a senior | ||
| 218 | developer also doesn’t care about CSS. Well, at least, not anymore. And | ||
| 219 | developers in between usually have the biggest issues with it. Not always of | ||
| 220 | course, but in a lot of cases. | ||
| 221 | |||
| 222 | I like the creature comforts of Tailwind. Being utility first would make me | ||
| 223 | argue that it is actually more similar to [Sass](https://sass-lang.com/) or | ||
| 224 | [Less](https://lesscss.org/) than something like Bootstrap. Not technically, but | ||
| 225 | ideologically. After I started using it, I never looked back. I use it every | ||
| 226 | time I need to do something web related. | ||
| 227 | |||
| 228 | Writing CSS for general things feels like going several steps back. Instead of | ||
| 229 | focusing on what you are actually trying to achieve, you focus on notations like | ||
| 230 | [BEM](https://en.bem.info/methodology/css/), code structuring, optimizing HTML | ||
| 231 | size. Just doing things that make a 0.1% difference. You know the saying: premature | ||
| 232 | optimization is the root of all evil. Exactly that. | ||
| 233 | |||
| 234 | I am also not saying that Tailwind is the cure for everything. Sometimes custom | ||
| 235 | CSS is necessary. But after using it for almost two years in | ||
| 236 | a production environment (on a site getting quite a lot of traffic and | ||
| 237 | constantly being changed), I can say without any reservations that Tailwind | ||
| 238 | saved our asses countless times. We would be rewriting CSS all the time without | ||
| 239 | it. And I don’t really think writing CSS is the best way to spend my time. | ||
| 240 | |||
| 241 | I have also noticed that the people who criticize Tailwind the most have never actually | ||
| 242 | used it in a real project with a long lifetime and plenty of changes still to | ||
| 243 | come. | ||
| 244 | |||
| 245 | But you know, whatever floats your boat! | ||
| 246 | |||
| 247 | ## Code maintainability | ||
| 248 | |||
| 249 | Somehow, people also stopped talking about maintenance. If you constantly try to | ||
| 250 | catch the latest and greatest train, you are by that logic always trying new | ||
| 251 | things. Which is a good thing if you want to learn about technologies and try | ||
| 252 | them. But for the production environment, you have to have a stable stack that | ||
| 253 | doesn’t change every 6 months. | ||
| 254 | |||
| 255 | You can lock dependencies for sure. Nevertheless, the hype train moves along | ||
| 256 | anyway. And the mindset this breeds goes against locking the code. This | ||
| 257 | bleeding-edge rolling release cycle is not helping. That is why enterprise | ||
| 258 | solutions usually look down on these popular stacks and only do the bare minimum to | ||
| 259 | appear hip and cool. | ||
| 260 | |||
| 261 | With that said, I still think that progress is good, but it should be taken with a | ||
| 262 | grain of salt. If your project is something that should be built once and then | ||
| 263 | rarely updated, going with the latest stack is a possible way to go. But, if you | ||
| 264 | are working on a project that lasts for years, you should probably approach it | ||
| 265 | with some level of caution. Web development is oftentimes too volatile. | ||
| 266 | |||
| 267 | ## Web development has a marketing issue | ||
| 268 | |||
| 269 | I noticed that almost every project now has this marketing spin put on it. | ||
| 270 | Everything is blazingly fast now. I get it, they are competing for your | ||
| 271 | attention, but what happened to just being truthful and not inflating reality? | ||
| 272 | |||
| 273 | And in order to appeal to the mass market, they leave things out of their marketing | ||
| 274 | materials. These open-source projects are now behaving more and more like | ||
| 275 | companies do. Which is a scary thought in itself. | ||
| 276 | |||
| 277 | And we are also seeing a rise in the concept of building a company in the open, | ||
| 278 | which is a good thing, don't get me wrong. But when open-source is used to | ||
| 279 | lure people in and then lock them into an ecosystem, that is where I have issues | ||
| 280 | with it. | ||
| 281 | |||
| 282 | This might be because I have been using GNU/Linux for 20 years now and owe | ||
| 283 | so much of my success to open-source that I see issues when open-source is | ||
| 284 | being used to trick people into a false sense of security that these projects | ||
| 285 | are built in the spirit of open-source. Because there is a difference. They are | ||
| 286 | NOT! They have a very specific goal in mind, and open-source is being used | ||
| 287 | as a delivery system. Which is, in my opinion, disgusting! | ||
| 288 | |||
| 289 | ## Conclusion | ||
| 290 | |||
| 291 | I will end my post with this. Web development is now running in circles. People | ||
| 292 | are discovering [RPC](https://www.tutorialspoint.com/remote-procedure-call-rpc) | ||
| 293 | now, and this is now the next big thing. [GraphQL](https://graphql.org/) is | ||
| 294 | so passé. And I am so tired of it all. Of blazingly fast libraries, of all these | ||
| 295 | new technologies that are actually just a remake of old ones. Of just the | ||
| 296 | general spirit of the web. I will just use what I already know, which worked 10 | ||
| 297 | years ago and will work 10 years from now. I will adopt a couple of little | ||
| 298 | tools like Vite. But I will not waste my time on this anymore. | ||
| 299 | |||
| 300 | It was a good exercise to get in touch with what’s new now. Nothing really | ||
| 301 | changed that much. FOMO is now cured! Now I have to get my ass back to actually | ||
| 302 | coding and making the project that I wanted to make in the first place. | ||
| 303 | |||
diff --git a/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md b/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md deleted file mode 100644 index 05a8167..0000000 --- a/content/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md +++ /dev/null | |||
| @@ -1,65 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Microsoundtrack — That sound that machine makes when struggling | ||
| 3 | url: that-sound-that-machine-makes-when-struggling.html | ||
| 4 | date: 2022-10-16T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | A couple of months ago, I got an idea about micro soundtracks. In this concept, | ||
| 9 | you are the observer, director, and audience in these tiny movies. | ||
| 10 | |||
| 11 | What you do is attempt to imagine what would be happening around you based on | ||
| 12 | the title of the song and let the song help you fill the void in your story. | ||
| 13 | |||
| 14 | I made these songs in Logic Pro X. Every year or so I do this kind of thing and | ||
| 15 | make a couple of songs similar to this. But this is the first time I am posting | ||
| 16 | about it. | ||
| 17 | |||
| 18 | You can listen to the whole set on | ||
| 19 | [YouTube](https://www.youtube.com/watch?v=_5oXBhSmF3c) or scroll down the page | ||
| 20 | and there are embedded players for each song. | ||
| 21 | |||
| 22 | ## A bunch of inter-dimensional people with loud clocks | ||
| 23 | |||
| 24 | A group of inter-dimensional people are going up and down the elevator with you | ||
| 25 | while wearing loud clocks around their necks. Each clock ticks at a different | ||
| 26 | frequency. A lot of other sounds are getting drawn into your dimension, | ||
| 27 | resulting in a strange merging of dimensions. | ||
| 28 | |||
| 29 | <iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1349272965/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe> | ||
| 30 | |||
| 31 | ## Two black holes conversing about the weather | ||
| 32 | |||
| 33 | You are a traveler in a spaceship flying very close to two colliding black holes | ||
| 34 | having a discussion about the weather while tearing each other apart. During all | ||
| 35 | this your ship is getting pulled into the event horizon of both black holes, | ||
| 36 | putting a lot of strain on your spaceship. | ||
| 37 | |||
| 38 | <iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1756714200/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe> | ||
| 39 | |||
| 40 | ## A planet where every organism is a plant | ||
| 41 | |||
| 42 | You land on a planet where every living organism is a plant, and some of those | ||
| 43 | plants are highly intelligent. You were asked to make first | ||
| 44 | contact with the native species. Your visit takes place in a giant cave where | ||
| 45 | you are meeting these plants, and they are talking to you. | ||
| 46 | |||
| 47 | <iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=3710973979/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe> | ||
| 48 | |||
| 49 | ## Bio implants having a fit and reprogramming your brain | ||
| 50 | |||
| 51 | In a distant future where everybody has bio implants, you have just received | ||
| 52 | your first one, which happens to be a brain implant. Something goes wrong: | ||
| 53 | your implant starts to misbehave, and you are experiencing brain | ||
| 54 | malfunctions. You are on the streets at night a couple of hours after your | ||
| 55 | procedure. You can feel your sanity breaking down. | ||
| 56 | |||
| 57 | <iframe style="border: 0; width: 100%; height: 42px;" src="https://bandcamp.com/EmbeddedPlayer/album=3913808801/size=small/bgcol=ffffff/linkcol=0687f5/track=1157430581/transparent=true/" seamless title="Bandcamp"><a href="https://mitjafelicijan.bandcamp.com/album/that-sound-that-machine-makes-when-struggling">That sound that machine makes when struggling by Mitja Felicijan</a></iframe> | ||
| 58 | |||
| 59 | ## Cow animation | ||
| 60 | |||
| 61 | I also made this little cow animation. Go into full screen to see the effects in | ||
| 62 | more detail. | ||
| 63 | |||
| 64 | <video src="/assets/microsoundtrack/cow.m4v" controls loop></video> | ||
| 65 | |||
diff --git a/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md b/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md deleted file mode 100644 index a03a2a4..0000000 --- a/content/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md +++ /dev/null | |||
| @@ -1,252 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Trying to build a new kind of terminal emulator for the modern age | ||
| 3 | url: trying-to-build-a-new-kind-of-terminal-emulator.html | ||
| 4 | date: 2023-01-26T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | Over the past few weeks, I have been really thinking about terminal emulators, | ||
| 9 | how we interact with computers, the separation of text-based programs and GUI | ||
| 10 | ones. To be perfectly honest, I got pissed off one evening when I was cleaning | ||
| 11 | up files on my computer. Normally, I go into the console, run `ncdu`, and check | ||
| 12 | where the junk is. Then I start deleting stuff. Without any discrimination, | ||
| 13 | usually. But when it comes to screenshots, I have learned that it's good to keep | ||
| 14 | them somewhere near if I need to refer to something that I was doing. I am an | ||
| 15 | avid screenshot taker. So at that point I checked the Pictures folder and also did a | ||
| 16 | basic search `find . -type f -name "*.jpg"` for all the JPEG files in my home | ||
| 17 | directory and immediately got pissed off. Why can’t I see thumbnails in my | ||
| 18 | terminal? I know why, but why is this still a problem in the year 2022? I am | ||
| 19 | used to traversing my disk via terminal. I am faster, and I am more comfortable | ||
| 20 | this way. But when it comes to visualization, I then need to resort to GUI | ||
| 21 | applications and find the same file again to see it. I know that programs like | ||
| 22 | `feh` and `sxiv` are available, but I would just like to see the preview. Like a | ||
| 23 | [Jupyter notebook](https://jupyter.org/) or something similar. Just having it | ||
| 24 | inline. Part of the result. | ||
| 25 | |||
| 26 | It also didn’t help that I was spending some time with the [Plan | ||
| 27 | 9](https://plan9.io/plan9/) Operating system. More specifically | ||
| 28 | [9FRONT](http://9front.org/). The way that [ACME editor](http://acme.cat-v.org/) | ||
| 29 | handles text editing is just wonderful. Different and fresh somehow, even though | ||
| 30 | it’s super old. | ||
| 31 | |||
| 32 | So, I went on the lookout for an interesting way of visualizing the results of some | ||
| 33 | query. I found these applications to be outstanding examples of how not to be a | ||
| 34 | captive of a predetermined way of doing things. | ||
| 35 | |||
| 36 | - [Wolfram Mathematica](https://www.wolfram.com/mathematica/) | ||
| 37 | - [Jupyter notebooks](https://jupyter.org/) | ||
| 38 | - [Plan 9 / 9FRONT](http://www.9front.org) | ||
| 39 | - [Temple OS](https://templeos.org/) | ||
| 40 | - [Emacs](https://www.gnu.org/software/emacs/) | ||
| 41 | |||
| 42 | My idea is not as out there as ACME, but it is a spin on terminal | ||
| 43 | emulators. I like the modes that Vi/Vim provides you with. I like the way | ||
| 44 | Emacs does its own `M-x` `M-c`. Furthermore, I really like how Mathematica and | ||
| 45 | Jupyter present the data in a free-flowing form. And I love how Temple OS is | ||
| 46 | basically a C interpreter on some level. | ||
| 47 | |||
| 48 | > **Note:** This is part 1 of the journey. Nowhere near finished yet. I am just | ||
| 49 | > tinkering with this at the moment. This whole thing can easily fail | ||
| 50 | > spectacularly. | ||
| 51 | |||
| 52 | So I started. I knew that I wanted to have a couple of modes, but I didn’t | ||
| 53 | like the repetition of keystrokes, so the only option was to have some sort of | ||
| 54 | toggle and indicate to the user that they are in a special mode. Like Vi does | ||
| 55 | for Normal and Visual mode. | ||
| 56 | |||
| 57 | These modes would for the first version be: | ||
| 58 | |||
| 59 | - *Preview mode* (toggle with Ctrl + P) | ||
| 60 | - When this mode would be enabled, the `ls` command would try to find images | ||
| 61 | from the results and display thumbnails from them in the terminal itself. | ||
| 62 | No ASCII art. Proper images. In a grid! | ||
| 63 | - *Detach mode* (toggle with Ctrl + D) | ||
| 64 | - When this mode would be enabled, every command would open a new window | ||
| 65 | and execute that command in it. This would be useful for starting `htop` | ||
| 66 | in a separate window. | ||
| 67 | |||
| 68 | The reason for having these modes be togglable is to avoid asking for previews every | ||
| 69 | time. You enable a mode and, until you disable it, it behaves that way. Purely | ||
| 70 | for ergonomic reasons. | ||
| 71 | |||
| 72 | I would like to treat every terminal I open as a session mentally. When I start | ||
| 73 | using the terminal, I start digging deeper into the issue I am trying to | ||
| 74 | resolve. And while I am doing this, I would like to open detached windows | ||
| 75 | etc. A lot of these things can be done easily with something like | ||
| 76 | [i3](https://i3wm.org/), but that also pulls you out of the context of what you | ||
| 77 | were doing. I would like to orchestrate everything from one single point. | ||
| 78 | |||
| 79 | In planning for this project, I knew that I would need to use a language like C | ||
| 80 | and a library such as [SDL2](https://www.libsdl.org/) in order to achieve the | ||
| 81 | desired results. I had considered other options, but ultimately determined that | ||
| 82 | [SDL2](https://www.libsdl.org/) was the best fit based on its capabilities and | ||
| 83 | reputation in the programming community. | ||
| 84 | |||
| 85 | At first, I thought the idea of a hardware-accelerated terminal was a bit of a | ||
| 86 | joke. It seemed like such a niche and unnecessary feature, especially given the | ||
| 87 | fact that terminal emulators have been around for decades and have always relied | ||
| 88 | on software rendering. But to be fair, [Alacritty](https://alacritty.org/) is | ||
| 89 | doing the same thing. Well, they are doing a remarkable job at it. | ||
| 90 | |||
| 91 | So, I embarked on a journey. Everything has to start somewhere. For me, it | ||
| 92 | started with creating a window! 🙂 | ||
| 93 | |||
| 94 | ```c | ||
| 95 | // Oh, Hi Mark! | ||
| 96 | // Create the window, obviously. | ||
| 97 | SDL_Window *window = SDL_CreateWindow( | ||
| 98 | WINDOW_TITLE, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, | ||
| 99 | WINDOW_WIDTH, WINDOW_HEIGHT, | ||
| 100 | SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN); | ||
| 101 | ``` | ||
| 102 | |||
| 103 | I continued like this to get some text displayed on the screen. | ||
| 104 | |||
| 105 | I noted that | ||
| 106 | [`TTF_RenderText_Solid`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_Solid) | ||
| 107 | rendered text really poorly. There was no antialiasing at all. In my wisdom, I | ||
| 108 | never checked the documentation. Well, that was a fail. For the uneducated like me: | ||
| 109 | `TTF_RenderText_Solid` renders Latin1 text at fast quality to a new 8-bit | ||
| 110 | surface. So, that's why the text looked like shit. No wonder. | ||
| 111 | |||
| 112 | Remarks on `TTF_RenderText_Solid`: This function will allocate a new 8-bit, | ||
| 113 | palettized surface. The surface's 0 pixel will be the colorkey, giving a | ||
| 114 | transparent background. The 1 pixel will be set to the text color. | ||
| 115 | |||
| 116 | After I replaced it with | ||
| 117 | [`TTF_RenderText_LCD`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_LCD), which | ||
| 118 | renders Latin1 text at LCD subpixel quality to a new ARGB surface, the text | ||
| 119 | started looking good. Really make sure you read the documentation. It’s actually | ||
| 120 | good. As a side note, you can find all the documentation regarding [SDL2 on | ||
| 121 | their Wiki](https://wiki.libsdl.org/). | ||
| 122 | |||
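| | For illustration, the swap boils down to something like this. A trimmed sketch, | ||
| | assuming a `TTF_Font *font` and an `SDL_Renderer *renderer` are already set up; | ||
| | the colors, text, and placement rectangle are made up: | ||
| | |||
| | ```c | ||
| | // Render the text with LCD subpixel quality instead of the blocky solid renderer. | ||
| | SDL_Color fg = {255, 255, 255, 255}; | ||
| | SDL_Color bg = {0, 0, 0, 255}; | ||
| | |||
| | // Previously: TTF_RenderText_Solid(font, "hello, world", fg) -> 8-bit, no antialiasing. | ||
| | SDL_Surface *surface = TTF_RenderText_LCD(font, "hello, world", fg, bg); | ||
| | SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface); | ||
| | |||
| | // Copy it onto the screen and clean up the temporary surface and texture. | ||
| | SDL_Rect dst = {16, 16, surface->w, surface->h}; | ||
| | SDL_RenderCopy(renderer, texture, NULL, &dst); | ||
| | |||
| | SDL_FreeSurface(surface); | ||
| | SDL_DestroyTexture(texture); | ||
| | ``` | ||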
| 123 | After that was done, I started working on displaying other things like `Preview` | ||
| 124 | and `Detach` modes. This wasn’t really that hard. In SDL2 you can check all the | ||
| 125 | available events with `while (SDL_PollEvent(&event) > 0)` and have a bunch of | ||
| 126 | switch statements to determine which key is currently being pressed. More about | ||
| 127 | keys at [SDLKey](https://documentation.help/SDL/sdlkey.html) and more about | ||
| 128 | polling the events at | ||
| 129 | [SDL_PollEvent](https://documentation.help/SDL/sdlpollevent.html). | ||
| 130 | |||
| 131 | ```c | ||
| 132 | while (SDL_PollEvent(&event) > 0) | ||
| 133 | { | ||
| 134 | switch (event.type) | ||
| 135 | { | ||
| 136 | case SDL_QUIT: | ||
| 137 | running = false; | ||
| 138 | break; | ||
| 139 | |||
| 140 | case SDL_TEXTINPUT: | ||
| 141 | if (!meta_key_pressed) | ||
| 142 | { | ||
| 143 | strncat(input_prompt_text, event.text.text, 1); | ||
| 144 | update_input_prompt = true; | ||
| 145 | } | ||
| 146 | break; | ||
| 147 | } | ||
| 148 | } | ||
| 149 | ``` | ||
| 150 | |||
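| | The mode toggles hang off the same loop. Below is a rough sketch of an extra | ||
| | `case` for that same `switch (event.type)`, showing how Ctrl+P and Ctrl+D could | ||
| | be handled; the `preview_mode` and `detach_mode` flags are illustrative, not the | ||
| | final implementation: | ||
| | |||
| | ```c | ||
| | // Inside the same event loop, next to SDL_QUIT and SDL_TEXTINPUT. | ||
| | case SDL_KEYDOWN: | ||
| | // Toggle Preview mode with Ctrl+P and Detach mode with Ctrl+D. | ||
| | if (SDL_GetModState() & KMOD_CTRL) | ||
| | { | ||
| | if (event.key.keysym.sym == SDLK_p) | ||
| | preview_mode = !preview_mode; | ||
| | if (event.key.keysym.sym == SDLK_d) | ||
| | detach_mode = !detach_mode; | ||
| | } | ||
| | break; | ||
| | ``` | ||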
| 151 | After that was somewhat working correctly, I started creating a struct that | ||
| 152 | would hold all the commands and results, and I call them Cells. Yes, I stole that | ||
| 153 | naming idea from Jupyter. | ||
| 154 | |||
| 155 | ```c | ||
| 156 | typedef struct | ||
| 157 | { | ||
| 158 | char *command; | ||
| 159 | char *result; | ||
| 160 | SDL_Surface *surface; | ||
| 161 | SDL_Texture *texture; | ||
| 162 | SDL_Rect rect; | ||
| 163 | } Cell; | ||
| 164 | ``` | ||
| 165 | |||
| 166 | I am at a place now where I am starting to implement scrolling. This will for | ||
| 167 | sure be fun to code. Memory management in C is super easy. 😂 | ||
| 168 | |||
| 169 | I have also added simple [INI-file-like | ||
| 170 | configuration](https://en.wikipedia.org/wiki/INI_file) support. It is done in an | ||
| 171 | [STB style of | ||
| 172 | header](https://github.com/nothings/stb/blob/master/docs/stb_howto.txt) and maps | ||
| 173 | to specific options supported by the terminal. It is not universal, and the code | ||
| 174 | below demonstrates how I will use it in the future. | ||
| 175 | |||
| 176 | ```c | ||
| 177 | #ifndef CONFIG_H | ||
| 178 | #define CONFIG_H | ||
| 179 | |||
| 180 | /* | ||
| 181 | # This is a comment | ||
| 182 | |||
| 183 | # This is the first configuration option | ||
| 184 | dettach=value11111 | ||
| 185 | |||
| 186 | # This is the second configuration option | ||
| 187 | preview=value22222 | ||
| 188 | |||
| 189 | # This is the third configuration option | ||
| 190 | debug=value33333 | ||
| 191 | */ | ||
| 192 | |||
| 193 | // Define a struct to hold the configuration options | ||
| 194 | typedef struct | ||
| 195 | { | ||
| 196 | char dettach[256]; | ||
| 197 | char preview[256]; | ||
| 198 | char debug[256]; | ||
| 199 | } Config; | ||
| 200 | |||
| 201 | // Read the configuration file and return the options as a struct | ||
| 202 | extern Config read_config_file(const char *filename) | ||
| 203 | { | ||
| 204 | // Create a struct to hold the configuration options | ||
| 205 | Config config = {0}; | ||
| 206 | |||
| 207 | // Open the configuration file | ||
| 208 | FILE *file = fopen(filename, "r"); | ||
| | if (file == NULL) | ||
| | return config; // No config file found; fall back to defaults. | ||
| 209 | |||
| 210 | // Read each line from the file | ||
| 211 | char line[256]; | ||
| 212 | while (fgets(line, sizeof(line), file)) | ||
| 213 | { | ||
| 214 | // Check if this line is a comment or empty | ||
| 215 | if (line[0] == '#' || line[0] == '\n') | ||
| 216 | continue; | ||
| 217 | |||
| 218 | // Parse the line to get the option and value | ||
| 219 | char option[128], value[128]; | ||
| 220 | if (sscanf(line, "%[^=]=%s", option, value) != 2) | ||
| 221 | continue; | ||
| 222 | |||
| 223 | // Set the value of the appropriate option in the config struct | ||
| 224 | if (strcmp(option, "dettach") == 0) | ||
| 225 | { | ||
| 226 | strncpy(config.dettach, value, sizeof(config.dettach)); | ||
| 227 | } | ||
| 228 | else if (strcmp(option, "preview") == 0) | ||
| 229 | { | ||
| 230 | strncpy(config.preview, value, sizeof(config.preview)); | ||
| 231 | } | ||
| 232 | else if (strcmp(option, "debug") == 0) | ||
| 233 | { | ||
| 234 | strncpy(config.debug, value, sizeof(config.debug)); | ||
| 235 | } | ||
| 236 | } | ||
| 237 | |||
| 238 | // Close the configuration file | ||
| 239 | fclose(file); | ||
| 240 | |||
| 241 | // Return the configuration options | ||
| 242 | return config; | ||
| 243 | } | ||
| 244 | |||
| 245 | #endif | ||
| 246 | ``` | ||
| 247 | |||
| 248 | This is as far as I managed to get for now. I have a day job, and this | ||
| 249 | prohibits me from working on these things full time. But I should probably get back | ||
| 250 | and finish this. At least have a simple version working, so I can start | ||
| 251 | testing it on my machines. Fingers crossed. 🕵️♂️ | ||
| 252 | |||
diff --git a/content/posts/2023-05-16-rekindling-my-love-for-programming.md b/content/posts/2023-05-16-rekindling-my-love-for-programming.md deleted file mode 100644 index fb8add2..0000000 --- a/content/posts/2023-05-16-rekindling-my-love-for-programming.md +++ /dev/null | |||
| @@ -1,73 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: Rekindling my love for programming and enjoying the act of creating | ||
| 3 | url: rekindling-my-love-for-programming.html | ||
| 4 | date: 2023-05-16T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | Programming can be a challenging and rewarding experience, but sometimes it's | ||
| 9 | easy to feel burnt out or disinterested. I had lost my passion for coding over | ||
| 10 | the past couple of months, and it looked like I would never enjoy coding as | ||
| 11 | much as I did. | ||
| 12 | |||
| 13 | I was feeling burnt out with programming. I thought taking a break from it and | ||
| 14 | focusing on other activities that I enjoy might be helpful. This way, I could | ||
| 15 | come back to programming with a fresh perspective and renewed energy. I also | ||
| 16 | thought about learning a new programming language or technology to keep things | ||
| 17 | interesting and challenging. | ||
| 18 | |||
| 19 | However, what I didn't realize was that learning a new language or technology | ||
| 20 | wasn't going to solve the underlying issue. I needed to take a step back and | ||
| 21 | re-evaluate why I had lost my passion for programming in the first place. This | ||
| 22 | involved taking a deep look into what I was doing that resulted in this rut. | ||
| 23 | |||
| 24 | Sometimes, it's easy to get caught up in the hype of new technologies or | ||
| 25 | languages, and we can feel like we're missing out if we're not constantly | ||
| 26 | learning and experimenting. However, it's important to remember that the latest | ||
| 27 | and greatest isn't always the best fit for our projects or our | ||
| 28 | interests. Instead of constantly chasing the next big thing, it can be helpful | ||
| 29 | to focus on what truly interests us and what we're passionate about. This can | ||
| 30 | help us stay motivated and engaged with our work, rather than feeling like we're | ||
| 31 | just going through the motions. | ||
| 32 | |||
| 33 | I expressed that I had lost my passion for coding over the past couple of | ||
| 34 | months, and I realized that the reason behind it was my tendency to spread | ||
| 35 | myself too thin and not focus on completing interesting projects. In order to | ||
| 36 | regain my passion for coding, I need to focus on projects that truly interest me | ||
| 37 | and give me a sense of purpose and motivation. | ||
| 38 | |||
| 39 | Recently, I have been playing World of Warcraft more frequently and have become | ||
| 40 | interested in developing addons for the game. | ||
| 41 | |||
| 42 | This quickly resulted in me creating three addons that improve the quality of | ||
| 43 | life, and I subsequently developed a more useful add-on that encapsulates all | ||
| 44 | the others I made. | ||
| 45 | |||
| 46 | I found it interesting that this action sparked a new interest in me. | ||
| 47 | Additionally, I discovered the Lua language, which reminded me that coding | ||
| 48 | should be fun rather than just a struggle with a language. It should be pure, | ||
| 49 | unadulterated fun. | ||
| 50 | |||
| 51 | I wasn't fighting the syntax, nor was I focused on finding the most optimal | ||
| 52 | solution. I simply created things without the pressure of making them the best | ||
| 53 | they could possibly be. | ||
| 54 | |||
| 55 | This made me realize that I actually adore simple languages that get out of the | ||
| 56 | way and let you express what you want to do. It forced me to rethink a lot about | ||
| 57 | what I use and what I actually enjoy. | ||
| 58 | |||
| 59 | I have decided to stick to the basics. For a scripting language, I will use | ||
| 60 | Lua. For networking, I will use Golang. And for any special needs, I will rely | ||
| 61 | on C. I do not require Rust, Nim, or Zig. This selection is more than sufficient | ||
| 62 | for my needs. I have to stay true to this simplicity. There is something to | ||
| 63 | Occam's razor. | ||
| 64 | |||
| 65 | I've been struggling with a lack of creativity lately, but now I'm experiencing | ||
| 66 | a real change. I realized I needed to take a step back and stop actively trying | ||
| 67 | to address the issue. I needed to stop worrying and overthinking it. I simply | ||
| 68 | needed some time. Looking back, I don't think I've taken any significant time | ||
| 69 | off in the last 10 years. | ||
| 70 | |||
| 71 | Suddenly, I find myself with the energy and passion to complete multiple small | ||
| 72 | projects. It doesn't feel like a chore at all. Who knew I needed WoW to | ||
| 73 | kickstart everything? Inspiration really does come from the strangest places. | ||
diff --git a/content/posts/2023-05-22-crafting-stories-in-zed-editor.md b/content/posts/2023-05-22-crafting-stories-in-zed-editor.md deleted file mode 100644 index ead4276..0000000 --- a/content/posts/2023-05-22-crafting-stories-in-zed-editor.md +++ /dev/null | |||
| @@ -1,87 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: From General Zod to Superman - Crafting Stories in Zed Editor | ||
| 3 | url: crafting-stories-in-zed-editor.html | ||
| 4 | date: 2023-05-22T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | Pretentious title! Good start! I have nothing to add to this discussion. I just | ||
| 9 | like this editor and wanted to write something here that will remind me to use | ||
| 10 | it again in a while when/if it becomes available for Linux. | ||
| 11 | |||
| 12 | **TLDR:** I think this code editor is very cool and has massive potential. I | ||
| 13 | hope they don’t mess it up by adding a plugin ecosystem to it! | ||
| 14 | |||
| 15 | Out of morbid curiosity, I started using the [Zed editor](https://zed.dev/) on | ||
| 16 | my Mac. Zed is a high-performance, multiplayer code editor developed by the | ||
| 17 | creators of Atom and Tree-sitter. Written in Rust so it has to be blazingly | ||
| 18 | fast! 😊 It's a joke, calm down. | ||
| 19 | |||
| 20 | Over the past year, I have switched between [Helix | ||
| 21 | editor](https://helix-editor.com/) and [VS | ||
| 22 | Code](https://code.visualstudio.com/), but for the last couple of months, I have | ||
| 23 | been using Helix exclusively. | ||
| 24 | |||
| 25 | I've been genuinely impressed by Zed. When you open a file, it automatically | ||
| 26 | detects its type and downloads the corresponding [LSP (language | ||
| 27 | server)](https://en.wikipedia.org/wiki/Language_Server_Protocol). The list of | ||
| 28 | supported languages is not extensive, but it's still impressive. It's a great | ||
| 29 | example of how to create a product that stays out of your way. | ||
| 30 | |||
| 31 |  | ||
| 32 | |||
| 33 | For C development, it downloaded [clangd](https://clangd.llvm.org/), and setting | ||
| 34 | up missing dependencies in code was rather easy. For this project I use | ||
| 35 | [SDL2](https://www.libsdl.org/) for rendering the terminal emulator. It’s a hobby | ||
| 36 | project, don’t worry about it. | ||
| 37 | |||
| 38 | If you are going to give this a try and you are using C, I suggest checking two | ||
| 39 | files in the root of your project folder. If you don't have them, create them. | ||
| 40 | |||
| 41 | **compile_flags.txt** | ||
| 42 | |||
| 43 | ``` | ||
| 44 | -I/opt/homebrew/include | ||
| 45 | -I/opt/homebrew/include/SDL2 | ||
| 46 | ``` | ||
| 47 | |||
| 48 | An easy way of checking what the appropriate includes for a specific library are is to | ||
| 49 | use `pkg-config`, in my case `pkg-config SDL2 --cflags-only-I`. But this is | ||
| 50 | nothing new to C/C++ devs. Just a note for people who are using Visual Studio. | ||
| 51 | |||
| 52 | **.clang-format** | ||
| 53 | |||
| 54 | ``` | ||
| 55 | ColumnLimit: 220 | ||
| 56 | BasedOnStyle: Mozilla | ||
| 57 | ``` | ||
| 58 | |||
| 59 | I prefer the Mozilla coding style for C, so you can set that up. | ||
| 60 | |||
| 61 | They really have something special here. Since there is no version available | ||
| 62 | for Linux yet, I will stick to Helix. This impressive piece of engineering is, | ||
| 63 | above all, an amazing example of craftsmanship. | ||
| 64 | |||
| 65 | They have a bunch of amazing integrated functionalities like live desktop | ||
| 66 | sharing and code sharing in a live coding session. There is a lot of pretentious | ||
| 67 | marketing speak there, but the product is still amazing! | ||
| 68 | |||
| 69 | For me, the speed and the simplicity of the product were the most impressive | ||
| 70 | things. You get that “it just works” feeling. A rare thing in 2023. | ||
| 71 | |||
| 72 |  | ||
| 73 | |||
| 74 | They also managed to add [GitHub Copilot](https://github.com/features/copilot) | ||
| 75 | in a non-obtrusive way. To me, everything feels very intentional and | ||
| 76 | specifically selected. It's minimal yet maximally effective. | ||
| 77 | |||
| 78 | <video src="https://zed.dev/img/post/copilot/copilot-demo.webm" autoplay loop></video> | ||
| 79 | |||
| 80 | It is a perfect balance between VS Code, JetBrains IDEs, and something like Vim | ||
| 81 | or Helix. | ||
| 82 | |||
| 83 | I just hope they **DON’T** add plugin support and keep it as it is. As a | ||
| 84 | vendor, they should add stuff to it with great deliberation and thought. That way | ||
| 85 | the product will stay fast and focused. That’s my two cents. | ||
| 86 | |||
| 87 | Amazing job! | ||
diff --git a/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md b/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md deleted file mode 100644 index 16739de..0000000 --- a/content/posts/2023-05-23-i-was-wrong-about-git-workflows.md +++ /dev/null | |||
| @@ -1,71 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: I think I was completely wrong about Git workflows | ||
| 3 | url: i-was-wrong-about-git-workflows.html | ||
| 4 | date: 2023-05-23T12:00:00+02:00 | ||
| 5 | draft: false | ||
| 6 | type: posts | ||
| 7 | tags: [] | ||
| 8 | --- | ||
| 9 | |||
| 10 | I have been using some approximation of [Git | ||
| 11 | Flow](https://jeffkreeftmeijer.com/git-flow/) for years now and never really | ||
| 12 | questioned it, to be honest. When I create a repo, I create a develop branch, set | ||
| 13 | it as the default one, and then merge to master from there. Seems reasonable enough. | ||
| 14 | |||
| 15 | One thing that I have learned is that long-living branches are the devil. They | ||
| 16 | always end up making a huge mess when they eventually need to be merged into | ||
| 17 | master. So by that logic, what is the develop branch if not the longest-living | ||
| 18 | feature branch? And from my personal experience, there was never a situation | ||
| 19 | where I wasn’t sweating bullets when I had to merge develop back to master. | ||
| 20 | |||
| 21 | This realisation started to give me pause. So why the hell am I doing this, and | ||
| 22 | is there a better way? Well, the solution was always there. And it comes in the | ||
| 23 | form of [git tags](https://git-scm.com/book/en/v2/Git-Basics-Tagging). | ||
| 24 | |||
| 25 | So what are git tags? Git tags are references to specific points in a Git | ||
| 26 | repository's history. They are used to mark important milestones, such as | ||
| 27 | releases or significant commits, making it easier to identify and access | ||
| 28 | specific versions of a project. | ||
| 29 | |||
| 30 | Somehow we have all hijacked the meaning of the master branch so that it has to be | ||
| 31 | the most releasable version of the code. And this is also where the confusion about | ||
| 32 | versioning the software kicks in. Because the master branch implicitly says that we | ||
| 33 | are dealing with a rolling-release type of software. And by having a develop | ||
| 34 | branch we are hacking around this confusion. With a separation of develop and | ||
| 35 | master, we lock functionalities into place, forcing a stable vs. development | ||
| 36 | version of the software. | ||
| 37 | |||
| 38 | But if that is true and long-living branches are the devil, then why have | ||
| 39 | develop at all? I think that most of this comes down to how continuous integration is | ||
| 40 | being done. There usually is no granular access to tags, and CD software deploys | ||
| 41 | whatever is present on a specific branch, be that master for production and | ||
| 42 | develop for staging. This is a gross simplification, and by having this in place | ||
| 43 | we have completely removed tagging as a viable option to create a fixed point in | ||
| 44 | the software cycle that says: this is the production-ready code. | ||
| 45 | |||
| 46 | One cool thing about tags is that you can check out a specific tag. So they | ||
| 47 | behave very similarly to branches in that regard. And you don’t have the | ||
| 48 | overhead of having two mainstream branches. | ||
| 49 | |||
| 50 | So what is the solution? One approach is to use a workflow where all | ||
| 51 | changes are made on smaller branches and continuously merged into | ||
| 52 | master. When the software is ready to be pushed to production, you tag the | ||
| 53 | master branch. This approach eliminates the need for long-lived branches and | ||
| 54 | simplifies the development process. It also encourages developers to make small, | ||
| 55 | incremental changes that can be tested and deployed quickly. However, this | ||
| 56 | approach may not be suitable for all projects or teams that heavily rely on | ||
| 57 | automated deployment based on branch names only. | ||
| 58 | |||
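| | In day-to-day terms, the workflow I have in mind looks roughly like this (an | ||
| | illustrative example; the branch name, tag name, and remote are made up): | ||
| | |||
| | ```bash | ||
| | # Small feature branch, merged straight back into master. | ||
| | git checkout -b fix-login master | ||
| | # ...commit the work... | ||
| | git checkout master | ||
| | git merge --no-ff fix-login | ||
| | |||
| | # When master is ready for production, mark that exact point with a tag. | ||
| | git tag -a v1.4.0 -m "Release 1.4.0" | ||
| | git push origin master --tags | ||
| | |||
| | # Later, anyone (or the CD pipeline) can jump to exactly what was released. | ||
| | git checkout v1.4.0 | ||
| | ``` | ||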
| 59 | This also requires that developers always keep production in mind. No more | ||
| 60 | living on an island of the develop branch. All your actions and code need to be | ||
| 61 | ready to meet production standards on a much smaller timescale. | ||
| 62 | |||
| 63 | I think that we have complicated the workflow in an honest attempt to make | ||
| 64 | things more streamlined, but in the process of doing this, we have inadvertently | ||
| 65 | made our lives much more complicated. | ||
| 66 | |||
| 67 | In conclusion, it's important to re-evaluate our workflows from time to time to | ||
| 68 | see if they still make sense and if there are better alternatives available. | ||
| 69 | Long-living branches can be problematic, and using tags to mark important | ||
| 70 | milestones can simplify the development process. | ||
| 71 | |||
diff --git a/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md b/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md deleted file mode 100644 index 1abfd1e..0000000 --- a/content/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md +++ /dev/null | |||
| @@ -1,158 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: "Re-Inventing Task Runner That I Actually Used Daily" | ||
| 3 | url: re-inventing-task-runner-that-i-actually-used-daily.html | ||
| 4 | date: 2023-05-31T12:21:10+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | A couple of months ago I had this brilliant idea of re-inventing the wheel by | ||
| 9 | making an alternative to make. And so I went. Boldly into battle. And to my | ||
| 10 | big surprise, my attempt resulted in a not completely useless piece of software. | ||
| 11 | |||
| 12 | My initial requirements were quite simple but soon grew into something more | ||
| 13 | ambitious. And looking back, I should have stuck to the simple version. My | ||
| 14 | laziness was on my side this time, though. Because I haven’t implemented some of | ||
| 15 | the features, I now realise I really didn’t need them, and they would bog down the | ||
| 16 | whole program and make it something it was never meant to be. | ||
| 17 | |||
| 18 | My basic requirements were the following: | ||
| 19 | |||
| 20 | - Syntax should be a tiny bit inspired by Rake and Rakefiles. | ||
| 21 | - Should borrow the overall feel of a unit test experience. | ||
| 22 | - Using something like Python would be a bit of an overkill. | ||
| 23 | - The program must be statically compiled, so it can run on the same architecture | ||
| 24 | without libc, musl dependencies or things like that. | ||
| 25 | - Installing Ruby for Rake is a bit overkill and cannot be done on certain | ||
| 26 | really lightweight distributions like Alpine Linux. This tool would be usable | ||
| 27 | on such lightweight systems for remote debugging. | ||
| 28 | - I want to use it for more than just compiling things. I want to use it as an | ||
| 29 | entry-point into a project, and I want this to help me indirectly document the | ||
| 30 | project as well. | ||
| 31 | - It should be an abstraction over bash shell or the default system shell. | ||
| 32 | - Each task essentially becomes its own shell instance. | ||
| 33 | - Must work on Linux and macOS systems. | ||
| 34 | - By default, running `erd` lists all the available tasks (when I use make, I | ||
| 35 | usually put a disclaimer that you should check the Makefile to see all available | ||
| 36 | targets). | ||
| 37 | - Should support passing arguments when you run it from a shell. | ||
| 38 | - Normal variables are the same as environment variables. There is no | ||
| 39 | distinction. Every variable is also essentially an environment variable and | ||
| 40 | can be used by other programs. | ||
| 41 | - State between tasks is not shared, and this makes these “pure” shell instances. | ||
| 42 | - Should be single-threaded for the start and later expanded with `@spawn` | ||
| 43 | command. | ||
| 44 | - Variables behave like macros and are preprocessed before evaluation. | ||
| 45 | - Should support something like `assure` that would check if programs like C | ||
| 46 | compiler or Python (whatever the project requires) are installed on a machine. | ||
| 47 | |||
| 48 | Quite a reasonable list of requirements. I do these things already in my | ||
| 49 | Makefiles and/or Bash scripts. But I would like to avoid repeating myself every | ||
| 50 | time I start working on something new. | ||
| 51 | |||
| 52 | So I started with the following syntax. | ||
| 53 | |||
| 54 | ```ruby | ||
| 55 | @env on | ||
| 56 | |||
| 57 | # Override the default shell. | ||
| 58 | @shell /bin/bash | ||
| 59 | |||
| 60 | # Assure that program is installed. | ||
| 61 | @assure docker-compose pip python3 | ||
| 62 | |||
| 63 | # Load local dotenv files (these are then globally available). | ||
| 64 | @dotenv .env | ||
| 65 | @dotenv .env.sample | ||
| 66 | @dotenv some_other_file | ||
| 67 | |||
| 68 | # These are local variables but still accessible in tasks. | ||
| 69 | @var HI = "hey" | ||
| 70 | @var TOKEN = "sometoken" | ||
| 71 | @var EMAIL = "m@m.com" | ||
| 72 | @var PASSWORD = "pass" | ||
| 73 | @var EDITOR = "vim" | ||
| 74 | |||
| 75 | @task dev "Test chars .:'}{]!//" does | ||
| 76 | echo "..." $HI | ||
| 77 | end | ||
| 78 | |||
| 79 | @task clean "Cleans the obj files" does | ||
| 80 | rm .obj | ||
| 81 | end | ||
| 82 | |||
| 83 | @task greet "Greets the user" does | ||
| 84 | echo "Hi user $TOKEN or $WINDOWID $EMAIL" | ||
| 85 | end | ||
| 86 | |||
| 87 | @task stack "Starts Docker stack" does | ||
| 88 | docker-compose -f stack.yml up | ||
| 89 | end | ||
| 90 | |||
| 91 | @task todo "Shows all todos in source files and count them" does | ||
| 92 | grep -ir "TODO|FIXME" . | wc -l | ||
| 93 | end | ||
| 94 | |||
| 95 | @task test1 "For testing 1" does | ||
| 96 | unknown-command | ||
| 97 | echo "test1" | ||
| 98 | ls -lha | ||
| 99 | end | ||
| 100 | |||
| 101 | @task test2 "For testing 2" does | ||
| 102 | echo "test1" | ||
| 103 | ls -lha | ||
| 104 | docker-compose -f samples/stack.yml up | ||
| 105 | end | ||
| 106 | ``` | ||
| 107 | |||
| 108 | One thing that I really like about Errand (yes, this is what it is called, and | ||
| 109 | it is available at https://git.mitjafelicijan.com/errand.git/about/) is that a | ||
| 110 | task is a persistent shell. By that I | ||
| 111 | mean that the whole task, even if it contains multiple commands, runs in one shell. | ||
| 112 | In make, each line in a target is its own shell, and you need to combine lines or add `\` | ||
| 113 | at the end of the line. | ||
| 114 | |||
| 115 | ```bash | ||
| 116 | # How you do these things in make. | ||
| 117 | target: | ||
| 118 | source .venv/bin/activate && \ | ||
| 119 | python script.py | ||
| 120 | ``` | ||
| 121 | |||
| 122 | Errand solves this problem. Consider each task, and what is being executed in that | ||
| 123 | task, a shell that will only close when the whole task is completed. | ||
| 124 | |||
| 125 | By self-documenting I mean that if you are in a directory with an `Errandfile` in it | ||
| 126 | and you just type `erd` and press enter, it should by default display all the | ||
| 127 | possible targets. In make I was doing this by having the first target be something | ||
| 128 | like `default` that echoes the message “Check Makefile for all available targets.” | ||
| 129 | Because all of the tasks in Errand require a message, I use that to display a, let’s | ||
| 130 | call it, table of contents. | ||
| 131 | |||
| 132 | Because I don’t use any external dependencies, this whole thing can be statically | ||
| 133 | compiled. So that also checked one of the boxes. | ||
| 134 | |||
| 135 | It works on Linux and on a Mac, so that’s also a bonus. I don’t believe this | ||
| 136 | would work on Windows machines because of the way that I use shell instances. But | ||
| 137 | you could use something like Windows Subsystem for Linux and run it in | ||
| 138 | there. That is a valid option. | ||
| 139 | |||
| 140 | To finish this essay off, how was it to use it in “real life”? I have to be | ||
| 141 | honest. Some of the missing features still bother me. The `@dotenv` directive is | ||
| 142 | still missing, and I need to implement it ASAP. | ||
| 143 | |||
| 144 | Another thing that needs to happen is support for streaming output. Currently, | ||
| 145 | commands like `docker-compose` that run in foreground mode are not compatible | ||
| 146 | with Errand. So commands that stream output are an issue. I need to revisit how | ||
| 147 | I initiate the shell and how I read stdout and stderr. But that shouldn’t be a | ||
| 148 | problem. | ||
| 149 | |||
| 150 | I have been very satisfied with this thing. I am pleasantly surprised by how | ||
| 151 | useful it is. I really wanted to test this in the wild before I commit to it. I | ||
| 152 | have more abandoned projects than Google, and it’s bringing massive shame to my | ||
| 153 | family at this point. So I wanted to be sure that this is even useful. And it | ||
| 154 | actually is. Quite surprised at myself. | ||
| 155 | |||
| 156 | I really need to package this now and write proper docs. And maybe rewrite the | ||
| 157 | tokeniser. It’s atrocious right now. A sight to behold! But that is an issue for | ||
| 158 | another time. | ||
diff --git a/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md b/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md deleted file mode 100644 index 4031df0..0000000 --- a/content/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md +++ /dev/null | |||
| @@ -1,280 +0,0 @@ | |||
| 1 | --- | ||
| 2 | title: "Bringing all of my projects together under one umbrella" | ||
| 3 | url: bringing-all-of-my-projects-together-under-one-umbrella.html | ||
| 4 | date: 2023-07-01T18:49:07+02:00 | ||
| 5 | draft: false | ||
| 6 | --- | ||
| 7 | |||
| 8 | ## What is the issue anyway? | ||
| 9 | |||
| 10 | Over the years, I have accumulated a bunch of virtual servers on my | ||
| 11 | [DigitalOcean](https://www.digitalocean.com/) account for small experimental | ||
| 12 | projects I dabble in. And this has resulted in quite a bill. I mean, I wouldn't | ||
| 13 | care if these projects were actually being used. But they were just sitting there | ||
| 14 | unused and wasting resources. Which makes this an unnecessary burden for me. | ||
| 15 | |||
| 16 | Most of them are just small HTML pages that have an endpoint or two to read data | ||
| 17 | from or write data to, and for that reason I wrote servers left and right. To be honest, | ||
| 18 | all of those things could have been done with [CGI | ||
| 19 | scripts](https://en.wikipedia.org/wiki/Common_Gateway_Interface) and that would | ||
| 20 | have been more than enough. | ||
| 21 | |||
| 22 | Recently, I decided to stop language hopping and focus on a simpler stack which | ||
| 23 | includes C, Go and Lua. With it I can accomplish all the things I am interested in. | ||
| 24 | |||
| 25 | ## Finding a web server replacement | ||
| 26 | |||
| 27 | Usually I had [Nginx](https://nginx.org/en/) in front of these small web servers | ||
| 28 | and I had to manage SSL certificates and all that jazz. I am bored with these | ||
| 29 | things. I don't want to manage any of this bullshit anymore. | ||
| 30 | |||
| 31 | So the logical move forward was to find a solid alternative for this. I ended | ||
| 32 | up on [Caddy server](https://caddyserver.com/). I've used it in the past but | ||
| 33 | kind of forgot about it. What I really like about it is the ease of use and | ||
| 34 | all the out-of-the-box functionality that comes with it. | ||
| 35 | |||
| 36 | These are the _pitch_ points from their website: | ||
| 37 | |||
| 38 | - **Secure by Default**: Caddy is the only web server that uses HTTPS by | ||
| 39 | default. A hardened TLS stack with modern protocols preserves privacy and | ||
| 40 | exposes MITM attacks. | ||
| 41 | - **Config API**: As its primary mode of configuration, Caddy's REST API makes | ||
| 42 | it easy to automate and integrate with your apps. | ||
| 43 | - **No Dependencies**: Because Caddy is written in Go, its binaries are entirely | ||
| 44 | self-contained and run on every platform, including containers without libc. | ||
| 45 | - **Modular Stack**: Take back control over your compute edge. Caddy can be | ||
| 46 | extended with everything you need using plugins. | ||
| 47 | |||
| 48 | I had just a few requirements: | ||
| 49 | |||
| 50 | - Automatic SSL | ||
| 51 | - Static file server | ||
| 52 | - Basic authentication | ||
| 53 | - CGI script support | ||
| 54 | |||
| 55 | And the vanilla version does all of it except CGI scripts. But that can easily be | ||
| 56 | fixed with their modular approach. You can build a custom version of the server | ||
| 57 | on their website, or do it with Docker. | ||
| 58 | |||
| 59 | This is a `Dockerfile` I used to build a custom server. | ||
| 60 | |||
| 61 | ```Dockerfile | ||
| 62 | FROM caddy:builder AS builder | ||
| 63 | |||
| 64 | RUN xcaddy build \ | ||
| 65 | --with github.com/aksdb/caddy-cgi | ||
| 66 | |||
| 67 | FROM caddy:latest | ||
| 68 | RUN apk add --no-cache nano | ||
| 69 | |||
| 70 | COPY --from=builder /usr/bin/caddy /usr/bin/caddy | ||
| 71 | ``` | ||
| 72 | |||
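| | If you go the Docker route, a build along these lines should confirm that the | ||
| | CGI module actually made it into the binary (the `caddy-cgi` image tag is just | ||
| | my own choice for this example): | ||
| | |||
| | ```sh | ||
| | # Build the custom image from the Dockerfile above. | ||
| | docker build -t caddy-cgi . | ||
| | |||
| | # The cgi module should show up among the compiled-in modules. | ||
| | docker run --rm caddy-cgi caddy list-modules | grep cgi | ||
| | ``` | ||
| | |||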
| 73 | ## Getting rid of all the unnecessary virtual machines | ||
| 74 | |||
| 75 | The next step was to get a handle on the number of virtual servers I have all | ||
| 76 | over the place. | ||
| 77 | |||
| 78 | I decided to move all the projects and services into two main VMs: | ||
| 79 | |||
| 80 | - personal server (still Nginx) | ||
| 81 | - git server | ||
| 82 | - static file server | ||
| 83 | - personal blog | ||
| 84 | - projects server (Caddy server) | ||
| 85 | - personal experiments | ||
| 86 | - other projects | ||
| 87 | |||
| 88 | I will focus on the projects server in this post since it's more interesting. | ||
| 89 | |||
| 90 | ## Testing CGI scripts | ||
| 91 | |||
| 92 | The first thing I tested was how CGI scripts work under Caddy. This is | ||
| 93 | particularly important to me because almost all of my experiments and mini | ||
| 94 | projects need this to work. | ||
| 95 | |||
| 96 | To configure Caddy server, you must provide the server with a configuration | ||
| 97 | file. By default, it's called `Caddyfile`. | ||
| 98 | |||
| 99 | ```caddyfile | ||
| 100 | { | ||
| 101 | order cgi before respond | ||
| 102 | } | ||
| 103 | |||
| 104 | examples.mitjafelicijan.com { | ||
| 105 | cgi /bash-test /opt/projects/examples/bash-test.sh | ||
| 106 | cgi /tcl-test /opt/projects/examples/tcl-test.tcl | ||
| 107 | cgi /lua-test /opt/projects/examples/lua-test.lua | ||
| 108 | cgi /python-test /opt/projects/examples/python-test.py | ||
| 109 | |||
| 110 | root * /opt/projects/examples | ||
| 111 | file_server | ||
| 112 | } | ||
| 113 | ``` | ||
| 114 | |||
| 115 | - The order is very important. Make sure that `order cgi before respond` is at | ||
| 116 | the top of the configuration file. | ||
| 117 | - Also, when you run Caddy v2, make sure you provide the `--adapter` argument, | ||
| 118 | like this: `/usr/bin/caddy run --watch --environ --config /etc/caddy/Caddyfile | ||
| 119 | --adapter caddyfile`. Otherwise, Caddy will try to use a different format for | ||
| 120 | the config file. | ||
| 121 | |||
| 122 | I did a small batch of tests with [Bash](https://www.gnu.org/software/bash/), | ||
| 123 | [Tcl](https://www.tcl-lang.org/), [Lua](https://www.lua.org/) and | ||
| 124 | [Python](https://www.python.org/). Here is a cheat sheet if you need it. | ||
| 125 | |||
| 126 | Let's get Bash out of the way first. | ||
| 127 | |||
| 128 | ```bash | ||
| 129 | #!/usr/bin/bash | ||
| 130 | |||
| 131 | printf "Content-type: text/plain\n\n" | ||
| 132 | |||
| 133 | printf "Hello from Bash\n\n" | ||
| 134 | printf "PATH_INFO [%s]\n" "$PATH_INFO" | ||
| 135 | printf "QUERY_STRING [%s]\n" "$QUERY_STRING" | ||
| 136 | printf "\n" | ||
| 137 | |||
| 138 | for i in {0..9..1}; do | ||
| 139 | printf "> %s\n" $i | ||
| 140 | done | ||
| 141 | |||
| 142 | exit 0 | ||
| 143 | ``` | ||
| 144 | |||
| 145 | This one is for Tcl. | ||
| 146 | |||
| 147 | ```tcl | ||
| 148 | #!/usr/bin/tclsh | ||
| 149 | |||
| 150 | puts "Content-type: text/plain\n" | ||
| 151 | |||
| 152 | puts "Hello from Tcl\n" | ||
| 153 | puts "PATH_INFO \[$env(PATH_INFO)\]" | ||
| 154 | puts "QUERY_STRING \[$env(QUERY_STRING)\]" | ||
| 155 | puts "" | ||
| 156 | |||
| 157 | for {set i 0} {$i < 10} {incr i} { | ||
| 158 | puts "> $i" | ||
| 159 | } | ||
| 160 | ``` | ||
| 161 | |||
| 162 | And for all you Python enjoyers. | ||
| 163 | |||
| 164 | ```python | ||
| 165 | #!/usr/bin/python3 | ||
| 166 | |||
| 167 | import os | ||
| 168 | |||
| 169 | print("Content-type: text/plain\n") | ||
| 170 | |||
| 171 | print("Hello from Python\n") | ||
| 172 | print("PATH_INFO [{}]".format(os.environ['PATH_INFO'])) | ||
| 173 | print("QUERY_STRING [{}]".format(os.environ['QUERY_STRING'])) | ||
| 174 | print("") | ||
| 175 | |||
| 176 | for i in range(10): | ||
| 177 | print("> {}".format(i)) | ||
| 178 | ``` | ||
| 179 | |||
| 180 | And for the final example, Lua. | ||
| 181 | |||
| 182 | ```lua | ||
| 183 | #!/usr/bin/lua | ||
| 184 | |||
| 185 | print("Content-type: text/plain\n") | ||
| 186 | |||
| 187 | print("Hello from Lua\n") | ||
| 188 | print(string.format("PATH_INFO [%s]", os.getenv("PATH_INFO"))) | ||
| 189 | print(string.format("QUERY_STRING [%s]", os.getenv("QUERY_STRING"))) | ||
| 190 | print() | ||
| 191 | |||
| 192 | for i = 0, 9 do | ||
| 193 | print(string.format("> %d", i)) | ||
| 194 | end | ||
| 195 | ``` | ||
| 196 | |||
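| | One practical note: the CGI module executes these files directly, so they | ||
| | generally need to be executable and their shebang interpreters have to exist on | ||
| | the server. A quick sanity check, using the paths and the domain from the | ||
| | example configuration above: | ||
| | |||
| | ```sh | ||
| | # Make the CGI scripts executable so the cgi module can run them. | ||
| | chmod +x /opt/projects/examples/*-test.* | ||
| | |||
| | # Hit one of the endpoints defined in the Caddyfile; PATH_INFO and | ||
| | # QUERY_STRING should come back populated. | ||
| | curl "https://examples.mitjafelicijan.com/bash-test/extra/path?foo=bar" | ||
| | ``` | ||
| | |||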
| 197 | ## Basic authentication | ||
| 198 | |||
| 199 | One requirement was also to have some sort of authentication in place, and | ||
| 200 | something like [Basic access | ||
| 201 | authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) would | ||
| 202 | be more than enough. | ||
| 203 | |||
| 204 | Thankfully, Caddy supports this out of the box already. Below is an updated | ||
| 205 | example. | ||
| 206 | |||
| 207 | ```Caddyfile | ||
| 208 | { | ||
| 209 | order cgi before respond | ||
| 210 | } | ||
| 211 | |||
| 212 | examples.mitjafelicijan.com { | ||
| 213 | cgi /bash-test /opt/projects/examples/bash-test.sh | ||
| 214 | cgi /tcl-test /opt/projects/examples/tcl-test.tcl | ||
| 215 | cgi /lua-test /opt/projects/examples/lua-test.lua | ||
| 216 | cgi /python-test /opt/projects/examples/python-test.py | ||
| 217 | |||
| 218 | root * /opt/projects/examples | ||
| 219 | file_server | ||
| 220 | |||
| 221 | basicauth * { | ||
| 222 | bob $2a$14$/wCgaf9oMnmQa20txB76u.nI1AldGMBT/1J7fXCfgOiRShwz/JOkK | ||
| 223 | } | ||
| 224 | } | ||
| 225 | ``` | ||
| 226 | |||
| 227 | `basicauth *` matches everything under this domain/sub-domain and protects it | ||
| 228 | with Basic Authentication. | ||
| 229 | |||
| 230 | - `bob` is the username | ||
| 231 | - the string after it is the hashed password, not the plain-text password | ||
| 232 | |||
| 233 | To generate these hashes, execute `caddy hash-password`. It will prompt you to | ||
| 234 | enter a password twice and spit out a hashed password that you can put in your | ||
| 235 | configuration file. | ||
| 236 | |||
| 237 | Restart the server and you are ready to go. | ||
| 238 | |||
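| | A quick way to verify the protection is in place, assuming `secret` is the | ||
| | plain-text password that was hashed above (a made-up value for this example): | ||
| | |||
| | ```sh | ||
| | # Without credentials the server should answer 401 Unauthorized. | ||
| | curl -i https://examples.mitjafelicijan.com/ | ||
| | |||
| | # With valid credentials the request should go through. | ||
| | curl -i -u bob:secret https://examples.mitjafelicijan.com/ | ||
| | ``` | ||
| | |||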
| 239 | ## Making Caddy a service with systemd | ||
| 240 | |||
| 241 | After the tests were successful, I copied `caddy` to `/usr/bin/caddy` and copied | ||
| 242 | `Caddyfile` to `/etc/caddy/Caddyfile`. | ||
| 243 | |||
| 244 | Now off to systemd. Each systemd service requires you to create a service | ||
| 245 | file. | ||
| 246 | |||
| 247 | - I created a `/etc/systemd/system/caddy.service` and put the following content | ||
| 248 | in the file. | ||
| 249 | |||
| 250 | ```systemd | ||
| 251 | [Unit] | ||
| 252 | Description=Caddy | ||
| 253 | Documentation=https://caddyserver.com/docs/ | ||
| 254 | After=network.target network-online.target | ||
| 255 | Requires=network-online.target | ||
| 256 | |||
| 257 | [Service] | ||
| 258 | Type=notify | ||
| 259 | User=root | ||
| 260 | Group=root | ||
| 261 | ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile --adapter caddyfile | ||
| 262 | ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force --adapter caddyfile | ||
| 263 | TimeoutStopSec=5s | ||
| 264 | LimitNOFILE=1048576 | ||
| 265 | LimitNPROC=512 | ||
| 266 | PrivateTmp=true | ||
| 267 | ProtectSystem=full | ||
| 268 | AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE | ||
| 269 | |||
| 270 | [Install] | ||
| 271 | WantedBy=multi-user.target | ||
| 272 | ``` | ||
| 273 | |||
| 274 | - You might need to reload systemd with `systemctl daemon-reload`. | ||
| 275 | - Then I enabled the service with `systemctl enable caddy.service`. | ||
| 276 | - And then I started the service with `systemctl start caddy.service`. | ||
| 277 | |||
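| | Putting those steps together, with a quick check that the service actually | ||
| | came up: | ||
| | |||
| | ```sh | ||
| | systemctl daemon-reload | ||
| | systemctl enable caddy.service | ||
| | systemctl start caddy.service | ||
| | |||
| | # Verify the service is running and follow its logs if something looks off. | ||
| | systemctl status caddy.service | ||
| | journalctl -u caddy.service -f | ||
| | ``` | ||
| | |||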
| 278 | This was about all that I needed to do to get it running. Now I can easily add | ||
| 279 | new subdomains and domains to the main configuration file and be done with | ||
| 280 | it. No manual Let's Encrypt shenanigans needed. | ||
