From 4abcce013c9ee3053badf2abda77190233066676 Mon Sep 17 00:00:00 2001 From: Mitja Felicijan Date: Fri, 23 Feb 2024 10:35:22 +0100 Subject: Testing thoughts page --- ...01-13-most-likely-to-succeed-in-year-of-2011.md | 43 -- _posts/2012-03-09-led-technology-not-so-eco.md | 34 -- _posts/2013-10-24-wireless-sensor-networks.md | 55 -- _posts/2015-11-10-software-development-pitfalls.md | 182 ------ _posts/2017-03-07-golang-profiling-simplified.md | 127 ----- ...04-17-what-i-ve-learned-developing-ad-server.md | 200 ------- ...ng-python-web-applications-with-visual-tools.md | 207 ------- _posts/2017-08-11-simple-iot-application.md | 608 --------------------- ...digitalocean-spaces-object-storage-with-fuse.md | 332 ----------- ...01-03-encoding-binary-data-into-dna-sequence.md | 416 -------------- .../2019-10-14-simplifying-and-reducing-clutter.md | 60 -- ...g-sentiment-analysis-for-clickbait-detection.md | 109 ---- .../2020-03-22-simple-sse-based-pubsub-server.md | 455 --------------- ...0-03-27-create-placeholder-images-with-sharp.md | 103 ---- ...nge-case-of-elasticsearch-allocation-failure.md | 109 ---- ...30-my-love-and-hate-relationship-with-nodejs.md | 112 ---- _posts/2020-05-05-remote-work.md | 73 --- _posts/2020-08-15-systemd-disable-wake-onmouse.md | 74 --- _posts/2020-09-06-esp-and-micropython.md | 226 -------- _posts/2020-09-08-bind-warning-on-login.md | 55 -- _posts/2020-09-09-digitalocean-sync.md | 113 ---- _posts/2021-01-24-replacing-dropbox-with-s3.md | 115 ---- _posts/2021-01-25-goaccess.md | 205 ------- _posts/2021-06-26-simple-world-clock.md | 108 ---- ...from-internet-consumer-to-full-hominum-again.md | 104 ---- _posts/2021-08-01-linux-cheatsheet.md | 288 ---------- ...n-based-riced-up-distribution-for-developers.md | 277 ---------- ...021-12-25-running-golang-application-as-pid1.md | 348 ------------ _posts/2021-12-30-wap-mobile-web-before-the-web.md | 203 ------- _posts/2022-06-30-trying-out-helix-editor.md | 55 -- 
...22-07-05-what-would-dna-sound-if-synthesized.md | 365 ------------- _posts/2022-08-13-algae-spotted-on-river-sava.md | 31 -- ...10-06-state-of-web-technologies-in-year-2022.md | 297 ---------- ...hat-sound-that-machine-makes-when-struggling.md | 67 --- ...ing-to-build-a-new-kind-of-terminal-emulator.md | 254 --------- _posts/2023-05-01-cachebusting-in-hugo.md | 18 - _posts/2023-05-05-run-9front-in-qemu.md | 29 - _posts/2023-05-06-git-push-multiple-origins.md | 18 - _posts/2023-05-07-mount-plan9-over-network.md | 24 - _posts/2023-05-08-write-iso-usb.md | 16 - _posts/2023-05-09-catv-weechat-config.md | 22 - _posts/2023-05-10-plan9-screenshot.md | 23 - _posts/2023-05-11-fix-plan9-bootloader.md | 21 - _posts/2023-05-12-install-plan9port-linux.md | 22 - _posts/2023-05-13-download-youtube-videos.md | 26 - _posts/2023-05-14-convert-mkv.md | 23 - _posts/2023-05-15-preview-troff-man-pages.md | 21 - _posts/2023-05-16-mass-set-permission.md | 17 - ...023-05-16-rekindling-my-love-for-programming.md | 75 --- .../2023-05-22-non-blocking-shell-exec-csharp.md | 45 -- _posts/2023-05-23-extend-lua-with-custom-c.md | 55 -- .../2023-05-23-i-was-wrong-about-git-workflows.md | 72 --- _posts/2023-05-23-parse-rss-with-lua.md | 41 -- _posts/2023-05-24-fresh-9front-desktop.md | 15 - _posts/2023-05-25-dcss-new-player-guide.md | 99 ---- _posts/2023-05-25-show-xterm-colors.md | 85 --- _posts/2023-05-25-tmux-sane-defaults.md | 38 -- _posts/2023-05-27-cronjobs-github-with-actions.md | 34 -- _posts/2023-05-27-dcss-on-4k-displays.md | 31 -- _posts/2023-05-27-drawing-pixels-in-plan9.md | 84 --- _posts/2023-05-28-easy-time-took-in-bash.md | 26 - _posts/2023-05-29-grep-to-less-maintain-colors.md | 26 - _posts/2023-05-31-extending-dte-editor.md | 53 -- ...nting-task-runner-that-i-actually-used-daily.md | 160 ------ _posts/2023-06-01-ewd-manuscripts-ebook.md | 23 - _posts/2023-06-04-bulk-make-thumbnails.md | 22 - _posts/2023-06-21-presentations-with-markdown.md | 79 --- 
_posts/2023-06-24-making-cgit-look-nicer.md | 207 ------- ...023-06-25-alacritty-open-links-with-modifier.md | 36 -- ...2023-06-25-development-environments-with-nix.md | 69 --- ...29-10gui-10-finger-multitouch-user-interface.md | 26 - _posts/2023-06-29-60s-ibm-computers-commercial.md | 18 - ...l-of-my-projects-together-under-one-umbrella.md | 282 ---------- ...knows-what-the-world-will-look-like-tomorrow.md | 101 ---- ...-fix-screen-tearing-on-debian-12-xorg-and-i3.md | 23 - ...nline-radio-streaming-with-mpv-from-terminal.md | 15 - ...7-14-set-color-temperature-of-displays-on-i3.md | 16 - ...23-08-01-make-b-w-svg-charts-with-matplotlib.md | 71 --- _posts/2023-08-05-floods-in-slovenia.md | 20 - _posts/2023-09-18-aws-eb-pyyaml-fix.md | 36 -- _posts/2023-09-25-compile-drawterm-on-fedora-38.md | 24 - ...4-using-ffmpeg-to-combine-video-side-by-side.md | 41 -- .../2023-11-05-add-lazy-loading-to-jekyll-posts.md | 34 -- ...titudes-are-sapping-the-fun-from-programming.md | 97 ---- _posts/2023-11-07-personal-sane-vim-defaults.md | 60 -- _posts/2024-02-11-k-mer.md | 140 ----- _posts/2024-02-15-extract-lines-from-file.md | 20 - _posts/2024-02-21-dcss-online-rc-defaults.md | 35 -- ...2024-02-23-uninstall-ollama-from-a-linux-box.md | 26 - .../2022-08-13-algae-spotted-on-river-sava.md | 31 ++ _posts/notes/2023-05-01-cachebusting-in-hugo.md | 18 + _posts/notes/2023-05-05-run-9front-in-qemu.md | 29 + .../notes/2023-05-06-git-push-multiple-origins.md | 18 + .../notes/2023-05-07-mount-plan9-over-network.md | 24 + _posts/notes/2023-05-08-write-iso-usb.md | 16 + _posts/notes/2023-05-09-catv-weechat-config.md | 22 + _posts/notes/2023-05-10-plan9-screenshot.md | 23 + _posts/notes/2023-05-11-fix-plan9-bootloader.md | 21 + _posts/notes/2023-05-12-install-plan9port-linux.md | 22 + _posts/notes/2023-05-13-download-youtube-videos.md | 26 + _posts/notes/2023-05-14-convert-mkv.md | 23 + _posts/notes/2023-05-15-preview-troff-man-pages.md | 21 + _posts/notes/2023-05-16-mass-set-permission.md | 17 
+ .../2023-05-22-non-blocking-shell-exec-csharp.md | 45 ++ .../notes/2023-05-23-extend-lua-with-custom-c.md | 55 ++ _posts/notes/2023-05-23-parse-rss-with-lua.md | 41 ++ _posts/notes/2023-05-24-fresh-9front-desktop.md | 15 + _posts/notes/2023-05-25-dcss-new-player-guide.md | 99 ++++ _posts/notes/2023-05-25-show-xterm-colors.md | 85 +++ _posts/notes/2023-05-25-tmux-sane-defaults.md | 38 ++ .../2023-05-27-cronjobs-github-with-actions.md | 34 ++ _posts/notes/2023-05-27-dcss-on-4k-displays.md | 31 ++ _posts/notes/2023-05-27-drawing-pixels-in-plan9.md | 84 +++ _posts/notes/2023-05-28-easy-time-took-in-bash.md | 26 + .../2023-05-29-grep-to-less-maintain-colors.md | 26 + _posts/notes/2023-05-31-extending-dte-editor.md | 53 ++ _posts/notes/2023-06-01-ewd-manuscripts-ebook.md | 23 + _posts/notes/2023-06-04-bulk-make-thumbnails.md | 22 + .../2023-06-21-presentations-with-markdown.md | 79 +++ _posts/notes/2023-06-24-making-cgit-look-nicer.md | 207 +++++++ ...023-06-25-alacritty-open-links-with-modifier.md | 36 ++ ...2023-06-25-development-environments-with-nix.md | 69 +++ ...29-10gui-10-finger-multitouch-user-interface.md | 26 + .../2023-06-29-60s-ibm-computers-commercial.md | 18 + ...-fix-screen-tearing-on-debian-12-xorg-and-i3.md | 23 + ...nline-radio-streaming-with-mpv-from-terminal.md | 15 + ...7-14-set-color-temperature-of-displays-on-i3.md | 16 + ...23-08-01-make-b-w-svg-charts-with-matplotlib.md | 71 +++ _posts/notes/2023-08-05-floods-in-slovenia.md | 20 + _posts/notes/2023-09-18-aws-eb-pyyaml-fix.md | 36 ++ .../2023-09-25-compile-drawterm-on-fedora-38.md | 24 + ...4-using-ffmpeg-to-combine-video-side-by-side.md | 41 ++ .../2023-11-05-add-lazy-loading-to-jekyll-posts.md | 34 ++ .../notes/2023-11-07-personal-sane-vim-defaults.md | 60 ++ _posts/notes/2024-02-15-extract-lines-from-file.md | 20 + _posts/notes/2024-02-21-dcss-online-rc-defaults.md | 35 ++ ...2024-02-23-uninstall-ollama-from-a-linux-box.md | 26 + ...01-13-most-likely-to-succeed-in-year-of-2011.md | 43 ++ 
.../posts/2012-03-09-led-technology-not-so-eco.md | 34 ++ .../posts/2013-10-24-wireless-sensor-networks.md | 55 ++ .../2015-11-10-software-development-pitfalls.md | 182 ++++++ .../2017-03-07-golang-profiling-simplified.md | 127 +++++ ...04-17-what-i-ve-learned-developing-ad-server.md | 200 +++++++ ...ng-python-web-applications-with-visual-tools.md | 207 +++++++ _posts/posts/2017-08-11-simple-iot-application.md | 608 +++++++++++++++++++++ ...digitalocean-spaces-object-storage-with-fuse.md | 332 +++++++++++ ...01-03-encoding-binary-data-into-dna-sequence.md | 416 ++++++++++++++ .../2019-10-14-simplifying-and-reducing-clutter.md | 60 ++ ...g-sentiment-analysis-for-clickbait-detection.md | 109 ++++ .../2020-03-22-simple-sse-based-pubsub-server.md | 455 +++++++++++++++ ...0-03-27-create-placeholder-images-with-sharp.md | 103 ++++ ...nge-case-of-elasticsearch-allocation-failure.md | 109 ++++ ...30-my-love-and-hate-relationship-with-nodejs.md | 112 ++++ _posts/posts/2020-05-05-remote-work.md | 73 +++ .../2020-08-15-systemd-disable-wake-onmouse.md | 74 +++ _posts/posts/2020-09-06-esp-and-micropython.md | 226 ++++++++ _posts/posts/2020-09-08-bind-warning-on-login.md | 55 ++ _posts/posts/2020-09-09-digitalocean-sync.md | 113 ++++ .../posts/2021-01-24-replacing-dropbox-with-s3.md | 115 ++++ _posts/posts/2021-01-25-goaccess.md | 205 +++++++ _posts/posts/2021-06-26-simple-world-clock.md | 108 ++++ ...from-internet-consumer-to-full-hominum-again.md | 104 ++++ _posts/posts/2021-08-01-linux-cheatsheet.md | 288 ++++++++++ ...n-based-riced-up-distribution-for-developers.md | 277 ++++++++++ ...021-12-25-running-golang-application-as-pid1.md | 348 ++++++++++++ .../2021-12-30-wap-mobile-web-before-the-web.md | 203 +++++++ _posts/posts/2022-06-30-trying-out-helix-editor.md | 55 ++ ...22-07-05-what-would-dna-sound-if-synthesized.md | 365 +++++++++++++ ...10-06-state-of-web-technologies-in-year-2022.md | 297 ++++++++++ ...hat-sound-that-machine-makes-when-struggling.md | 67 +++ 
...ing-to-build-a-new-kind-of-terminal-emulator.md | 254 +++++++++ ...023-05-16-rekindling-my-love-for-programming.md | 75 +++ .../2023-05-23-i-was-wrong-about-git-workflows.md | 72 +++ ...nting-task-runner-that-i-actually-used-daily.md | 160 ++++++ ...l-of-my-projects-together-under-one-umbrella.md | 282 ++++++++++ ...knows-what-the-world-will-look-like-tomorrow.md | 101 ++++ ...titudes-are-sapping-the-fun-from-programming.md | 97 ++++ _posts/posts/2024-02-11-k-mer.md | 141 +++++ _posts/thoughts/.gitkeep | 0 179 files changed, 9151 insertions(+), 9150 deletions(-) delete mode 100644 _posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md delete mode 100644 _posts/2012-03-09-led-technology-not-so-eco.md delete mode 100644 _posts/2013-10-24-wireless-sensor-networks.md delete mode 100644 _posts/2015-11-10-software-development-pitfalls.md delete mode 100644 _posts/2017-03-07-golang-profiling-simplified.md delete mode 100644 _posts/2017-04-17-what-i-ve-learned-developing-ad-server.md delete mode 100644 _posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md delete mode 100644 _posts/2017-08-11-simple-iot-application.md delete mode 100644 _posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md delete mode 100644 _posts/2019-01-03-encoding-binary-data-into-dna-sequence.md delete mode 100644 _posts/2019-10-14-simplifying-and-reducing-clutter.md delete mode 100644 _posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md delete mode 100644 _posts/2020-03-22-simple-sse-based-pubsub-server.md delete mode 100644 _posts/2020-03-27-create-placeholder-images-with-sharp.md delete mode 100644 _posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md delete mode 100644 _posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md delete mode 100644 _posts/2020-05-05-remote-work.md delete mode 100644 _posts/2020-08-15-systemd-disable-wake-onmouse.md delete mode 100644 _posts/2020-09-06-esp-and-micropython.md delete 
mode 100644 _posts/2020-09-08-bind-warning-on-login.md delete mode 100644 _posts/2020-09-09-digitalocean-sync.md delete mode 100644 _posts/2021-01-24-replacing-dropbox-with-s3.md delete mode 100644 _posts/2021-01-25-goaccess.md delete mode 100644 _posts/2021-06-26-simple-world-clock.md delete mode 100644 _posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md delete mode 100644 _posts/2021-08-01-linux-cheatsheet.md delete mode 100644 _posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md delete mode 100644 _posts/2021-12-25-running-golang-application-as-pid1.md delete mode 100644 _posts/2021-12-30-wap-mobile-web-before-the-web.md delete mode 100644 _posts/2022-06-30-trying-out-helix-editor.md delete mode 100644 _posts/2022-07-05-what-would-dna-sound-if-synthesized.md delete mode 100644 _posts/2022-08-13-algae-spotted-on-river-sava.md delete mode 100644 _posts/2022-10-06-state-of-web-technologies-in-year-2022.md delete mode 100644 _posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md delete mode 100644 _posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md delete mode 100644 _posts/2023-05-01-cachebusting-in-hugo.md delete mode 100644 _posts/2023-05-05-run-9front-in-qemu.md delete mode 100644 _posts/2023-05-06-git-push-multiple-origins.md delete mode 100644 _posts/2023-05-07-mount-plan9-over-network.md delete mode 100644 _posts/2023-05-08-write-iso-usb.md delete mode 100644 _posts/2023-05-09-catv-weechat-config.md delete mode 100644 _posts/2023-05-10-plan9-screenshot.md delete mode 100644 _posts/2023-05-11-fix-plan9-bootloader.md delete mode 100644 _posts/2023-05-12-install-plan9port-linux.md delete mode 100644 _posts/2023-05-13-download-youtube-videos.md delete mode 100644 _posts/2023-05-14-convert-mkv.md delete mode 100644 _posts/2023-05-15-preview-troff-man-pages.md delete mode 100644 _posts/2023-05-16-mass-set-permission.md delete mode 100644 _posts/2023-05-16-rekindling-my-love-for-programming.md delete mode 
100644 _posts/2023-05-22-non-blocking-shell-exec-csharp.md delete mode 100644 _posts/2023-05-23-extend-lua-with-custom-c.md delete mode 100644 _posts/2023-05-23-i-was-wrong-about-git-workflows.md delete mode 100644 _posts/2023-05-23-parse-rss-with-lua.md delete mode 100644 _posts/2023-05-24-fresh-9front-desktop.md delete mode 100644 _posts/2023-05-25-dcss-new-player-guide.md delete mode 100644 _posts/2023-05-25-show-xterm-colors.md delete mode 100644 _posts/2023-05-25-tmux-sane-defaults.md delete mode 100644 _posts/2023-05-27-cronjobs-github-with-actions.md delete mode 100644 _posts/2023-05-27-dcss-on-4k-displays.md delete mode 100644 _posts/2023-05-27-drawing-pixels-in-plan9.md delete mode 100644 _posts/2023-05-28-easy-time-took-in-bash.md delete mode 100644 _posts/2023-05-29-grep-to-less-maintain-colors.md delete mode 100644 _posts/2023-05-31-extending-dte-editor.md delete mode 100644 _posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md delete mode 100644 _posts/2023-06-01-ewd-manuscripts-ebook.md delete mode 100644 _posts/2023-06-04-bulk-make-thumbnails.md delete mode 100644 _posts/2023-06-21-presentations-with-markdown.md delete mode 100644 _posts/2023-06-24-making-cgit-look-nicer.md delete mode 100644 _posts/2023-06-25-alacritty-open-links-with-modifier.md delete mode 100644 _posts/2023-06-25-development-environments-with-nix.md delete mode 100644 _posts/2023-06-29-10gui-10-finger-multitouch-user-interface.md delete mode 100644 _posts/2023-06-29-60s-ibm-computers-commercial.md delete mode 100644 _posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md delete mode 100644 _posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md delete mode 100644 _posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md delete mode 100644 _posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md delete mode 100644 _posts/2023-07-14-set-color-temperature-of-displays-on-i3.md delete mode 100644 
_posts/2023-08-01-make-b-w-svg-charts-with-matplotlib.md delete mode 100644 _posts/2023-08-05-floods-in-slovenia.md delete mode 100644 _posts/2023-09-18-aws-eb-pyyaml-fix.md delete mode 100644 _posts/2023-09-25-compile-drawterm-on-fedora-38.md delete mode 100644 _posts/2023-11-04-using-ffmpeg-to-combine-video-side-by-side.md delete mode 100644 _posts/2023-11-05-add-lazy-loading-to-jekyll-posts.md delete mode 100644 _posts/2023-11-05-elitist-attitudes-are-sapping-the-fun-from-programming.md delete mode 100644 _posts/2023-11-07-personal-sane-vim-defaults.md delete mode 100644 _posts/2024-02-11-k-mer.md delete mode 100644 _posts/2024-02-15-extract-lines-from-file.md delete mode 100644 _posts/2024-02-21-dcss-online-rc-defaults.md delete mode 100644 _posts/2024-02-23-uninstall-ollama-from-a-linux-box.md create mode 100644 _posts/notes/2022-08-13-algae-spotted-on-river-sava.md create mode 100644 _posts/notes/2023-05-01-cachebusting-in-hugo.md create mode 100644 _posts/notes/2023-05-05-run-9front-in-qemu.md create mode 100644 _posts/notes/2023-05-06-git-push-multiple-origins.md create mode 100644 _posts/notes/2023-05-07-mount-plan9-over-network.md create mode 100644 _posts/notes/2023-05-08-write-iso-usb.md create mode 100644 _posts/notes/2023-05-09-catv-weechat-config.md create mode 100644 _posts/notes/2023-05-10-plan9-screenshot.md create mode 100644 _posts/notes/2023-05-11-fix-plan9-bootloader.md create mode 100644 _posts/notes/2023-05-12-install-plan9port-linux.md create mode 100644 _posts/notes/2023-05-13-download-youtube-videos.md create mode 100644 _posts/notes/2023-05-14-convert-mkv.md create mode 100644 _posts/notes/2023-05-15-preview-troff-man-pages.md create mode 100644 _posts/notes/2023-05-16-mass-set-permission.md create mode 100644 _posts/notes/2023-05-22-non-blocking-shell-exec-csharp.md create mode 100644 _posts/notes/2023-05-23-extend-lua-with-custom-c.md create mode 100644 _posts/notes/2023-05-23-parse-rss-with-lua.md create mode 100644 
_posts/notes/2023-05-24-fresh-9front-desktop.md create mode 100644 _posts/notes/2023-05-25-dcss-new-player-guide.md create mode 100644 _posts/notes/2023-05-25-show-xterm-colors.md create mode 100644 _posts/notes/2023-05-25-tmux-sane-defaults.md create mode 100644 _posts/notes/2023-05-27-cronjobs-github-with-actions.md create mode 100644 _posts/notes/2023-05-27-dcss-on-4k-displays.md create mode 100644 _posts/notes/2023-05-27-drawing-pixels-in-plan9.md create mode 100644 _posts/notes/2023-05-28-easy-time-took-in-bash.md create mode 100644 _posts/notes/2023-05-29-grep-to-less-maintain-colors.md create mode 100644 _posts/notes/2023-05-31-extending-dte-editor.md create mode 100644 _posts/notes/2023-06-01-ewd-manuscripts-ebook.md create mode 100644 _posts/notes/2023-06-04-bulk-make-thumbnails.md create mode 100644 _posts/notes/2023-06-21-presentations-with-markdown.md create mode 100644 _posts/notes/2023-06-24-making-cgit-look-nicer.md create mode 100644 _posts/notes/2023-06-25-alacritty-open-links-with-modifier.md create mode 100644 _posts/notes/2023-06-25-development-environments-with-nix.md create mode 100644 _posts/notes/2023-06-29-10gui-10-finger-multitouch-user-interface.md create mode 100644 _posts/notes/2023-06-29-60s-ibm-computers-commercial.md create mode 100644 _posts/notes/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md create mode 100644 _posts/notes/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md create mode 100644 _posts/notes/2023-07-14-set-color-temperature-of-displays-on-i3.md create mode 100644 _posts/notes/2023-08-01-make-b-w-svg-charts-with-matplotlib.md create mode 100644 _posts/notes/2023-08-05-floods-in-slovenia.md create mode 100644 _posts/notes/2023-09-18-aws-eb-pyyaml-fix.md create mode 100644 _posts/notes/2023-09-25-compile-drawterm-on-fedora-38.md create mode 100644 _posts/notes/2023-11-04-using-ffmpeg-to-combine-video-side-by-side.md create mode 100644 _posts/notes/2023-11-05-add-lazy-loading-to-jekyll-posts.md create 
mode 100644 _posts/notes/2023-11-07-personal-sane-vim-defaults.md create mode 100644 _posts/notes/2024-02-15-extract-lines-from-file.md create mode 100644 _posts/notes/2024-02-21-dcss-online-rc-defaults.md create mode 100644 _posts/notes/2024-02-23-uninstall-ollama-from-a-linux-box.md create mode 100644 _posts/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md create mode 100644 _posts/posts/2012-03-09-led-technology-not-so-eco.md create mode 100644 _posts/posts/2013-10-24-wireless-sensor-networks.md create mode 100644 _posts/posts/2015-11-10-software-development-pitfalls.md create mode 100644 _posts/posts/2017-03-07-golang-profiling-simplified.md create mode 100644 _posts/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md create mode 100644 _posts/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md create mode 100644 _posts/posts/2017-08-11-simple-iot-application.md create mode 100644 _posts/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md create mode 100644 _posts/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md create mode 100644 _posts/posts/2019-10-14-simplifying-and-reducing-clutter.md create mode 100644 _posts/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md create mode 100644 _posts/posts/2020-03-22-simple-sse-based-pubsub-server.md create mode 100644 _posts/posts/2020-03-27-create-placeholder-images-with-sharp.md create mode 100644 _posts/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md create mode 100644 _posts/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md create mode 100644 _posts/posts/2020-05-05-remote-work.md create mode 100644 _posts/posts/2020-08-15-systemd-disable-wake-onmouse.md create mode 100644 _posts/posts/2020-09-06-esp-and-micropython.md create mode 100644 _posts/posts/2020-09-08-bind-warning-on-login.md create mode 100644 _posts/posts/2020-09-09-digitalocean-sync.md create mode 100644 
_posts/posts/2021-01-24-replacing-dropbox-with-s3.md create mode 100644 _posts/posts/2021-01-25-goaccess.md create mode 100644 _posts/posts/2021-06-26-simple-world-clock.md create mode 100644 _posts/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md create mode 100644 _posts/posts/2021-08-01-linux-cheatsheet.md create mode 100644 _posts/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md create mode 100644 _posts/posts/2021-12-25-running-golang-application-as-pid1.md create mode 100644 _posts/posts/2021-12-30-wap-mobile-web-before-the-web.md create mode 100644 _posts/posts/2022-06-30-trying-out-helix-editor.md create mode 100644 _posts/posts/2022-07-05-what-would-dna-sound-if-synthesized.md create mode 100644 _posts/posts/2022-10-06-state-of-web-technologies-in-year-2022.md create mode 100644 _posts/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md create mode 100644 _posts/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md create mode 100644 _posts/posts/2023-05-16-rekindling-my-love-for-programming.md create mode 100644 _posts/posts/2023-05-23-i-was-wrong-about-git-workflows.md create mode 100644 _posts/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md create mode 100644 _posts/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md create mode 100644 _posts/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md create mode 100644 _posts/posts/2023-11-05-elitist-attitudes-are-sapping-the-fun-from-programming.md create mode 100644 _posts/posts/2024-02-11-k-mer.md create mode 100644 _posts/thoughts/.gitkeep (limited to '_posts') diff --git a/_posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md b/_posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md deleted file mode 100644 index de90494..0000000 --- a/_posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Most likely to succeed in the 
year of 2011 -permalink: /most-likely-to-succeed-in-year-of-2011.html -date: 2011-01-13T12:00:00+02:00 -layout: post -type: post -draft: false ---- - -The year of 2010 was definitely the year of geo-location. The market responded -beautifully and lots of very cool services were launched. We all have to thank -the mobile market for such extensive adoption, with new generations of mobile -phones that are not only buffed with high-tech hardware but are also affordable. -We can now manage tasks that, not so long ago, would have seemed almost Star Trek’ish. -And all this has had, and still has, a great influence on the direction we are -heading now. - -Reading all these articles about innovation and new thriving technologies -makes me wonder what the next step is. The future is the mesh, as Lisa Gansky -said in her book The Mesh. - -Many still hold conservative views on distributed systems: concerns about -the security of information, fear of not controlling every aspect of information -flow. I am very open to distributed systems and heterogeneous applications, -and I think this is the correct and best way to proceed. - -This year will definitely be about communication platforms. Mobile to mobile. -Machine to mobile and vice versa. All the tech is available and ready to put -into action. Wireless is today’s new mantra. And the concept of the semantic web is -now ready for industry. - -Applications and developers can now gain access to new layers of systems and can -prepare and build solutions to meet the high-quality needs of the market. Speed -is everything now. - -My vote goes to “Machine to Machine” and “Embedded Systems”! 
- -- [Machine-to-Machine](http://en.wikipedia.org/wiki/Machine-to-Machine) -- [The ultimate M2M communication protocol](http://www.bitxml.org/) -- [COOS Project (connectivity initiative)](http://www.coosproject.org/maven-site/1.0.0/project-info.html) -- [Community for machine-to-machine](http://m2m.com/index.jspa) -- [Embedded system](http://en.wikipedia.org/wiki/Embedded_system) - diff --git a/_posts/2012-03-09-led-technology-not-so-eco.md b/_posts/2012-03-09-led-technology-not-so-eco.md deleted file mode 100644 index 4c5fda3..0000000 --- a/_posts/2012-03-09-led-technology-not-so-eco.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: LED technology might not be as eco-friendly as you think -permalink: /led-technology-not-so-eco.html -date: 2012-03-09T12:00:00+02:00 -layout: post -type: post -draft: false ---- - -There is a lot of talk about LED technology. It is beginning to infiltrate -industry at a fast rate, and it’s a challenge for designers and engineers alike. -I wondered when a weakness would be revealed. Then I stumbled upon an article -talking about the harm in using LED technology. It looks like this magical -technology is not so magical or eco-friendly. - -A new study from the University of California indicates that LED lights contain -toxic metals, and should be produced, used and disposed of carefully. Besides -lead and nickel, the bulbs and their associated parts were also found to -contain arsenic, copper, and other metals that have been linked to different -cancers, neurological damage, kidney disease, hypertension, skin rashes and -other illnesses in humans, and to ecological damage in waterways. - -Since then, I haven’t yet found any regulation or standard for the disposal of -LED lights. This might be a problem in the future. And it is a -massive drawback. This might have quite an impact on the consumer market. - -Nevertheless, there is potential, and I am sure the market will adapt. 
I also -hope I will soon be reading documents regarding a solution to this concern. - -**Additional resources:** - -- [Recycling and Disposal of Light Bulbs](http://ezinearticles.com/?Recycling-and-Disposal-of-Light-Bulbs&id=1091304) -- [How to Dispose of a Low-Energy Light Bulb](http://www.ehow.com/how_7483442_dispose-lowenergy-light-bulb.html) - diff --git a/_posts/2013-10-24-wireless-sensor-networks.md b/_posts/2013-10-24-wireless-sensor-networks.md deleted file mode 100644 index 6eb3fe1..0000000 --- a/_posts/2013-10-24-wireless-sensor-networks.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: Wireless sensor networks -permalink: /wireless-sensor-networks.html -date: 2013-10-24T12:00:00+02:00 -layout: post -type: post -draft: false ---- - -Zigbee networks have this wonderful capability to self-heal, which means they -can reorder connections between them if one of them is inoperable. This works -out of the box when you deploy them. But you have to keep in mind that achieving -this is not as easy as you would think. None of it is plug&play. So to make -your life a bit easier, here are some pointers which, I hope, will help you. - -- Be careful when you are ordering your equipment abroad. There are many rules - and regulations you need to comply with before you get your Xbee radios. What they - do is they wait until you prove that you won’t use the technology for some - kind of evil take-over-the-world project :). For this, they have the - EAR (Export Administration Regulations), which basically means “This product - may require a license to export from the United States.”. -- I don’t know if this applies to every country, but when we purchased our Xbee - radios from Mouser, this was mandatory! What we needed to do was to print out - a form, write in information about our company, and send them a copy via - email. With this document, we proved that we were a legitimate company. -- When you complete your purchase and send all the documentation, you are not - clear yet. 
Then customs will take it from there :). There will be some - additional costs. Before purchasing, make sure you have as much information - about costs as possible, because it can get costly in the end. -- I suggest you use companies from your own country. You can seriously cut your - costs. Here in Slovenia, the best option as far as I know is Farnell. And - based on my personal experience, they rock! That is all I need to say! -- Make plans when ordering larger quantities. Do not, I say, do not make your - orders in December! :) Believe me! You will have problems with the stock they can - provide for you. We were forced to buy some things from Mouser, which was - extremely painful because of all the regulations you need to obey when - importing goods from the USA. -- Make sure that the firmware version on your Xbee radios is exactly the same! Do - not get creative!!! I propose using templates. You can get a template by - exporting a settings profile in the X-CTU application. Make sure you have enabled - “Upgrade firmware” so you can be sure each radio has the same firmware. -- And again: make plans! Plan everything! Months in advance! You will thank me - later :) -- Test, test, test. Wireless networks can be tricky. - -If you are serious, I suggest you buy the book Building Wireless Sensor -Networks. You will get a glimpse of how these networks work in layman’s terms. It is a -good starting point for everybody who wants to build wireless networks. 
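Since identical firmware across all radios is so critical, it can help to sanity-check it programmatically instead of eyeballing each module in X-CTU. A minimal sketch of the idea (the port names and the pyserial dependency are assumptions; `ATVR` is the standard XBee AT command that reports the firmware version):

```python
from collections import Counter


def firmware_mismatches(versions):
    """Given {radio: firmware_version}, return the radios whose version
    differs from the most common one, i.e. the ones needing a re-flash."""
    if not versions:
        return []
    expected, _ = Counter(versions.values()).most_common(1)[0]
    return sorted(r for r, v in versions.items() if v != expected)


def query_firmware(port, baud=9600):
    """Read one radio's firmware version over serial in transparent (AT)
    mode. Requires pyserial and a connected radio; the port names used
    below are only examples."""
    import time
    import serial  # pyserial, third-party

    with serial.Serial(port, baud, timeout=2) as s:
        time.sleep(1.1)
        s.write(b"+++")          # guard-time escape into command mode
        time.sleep(1.1)
        s.read_until(b"\r")      # expect "OK"
        s.write(b"ATVR\r")       # VR = firmware version query
        version = s.read_until(b"\r").strip().decode()
        s.write(b"ATCN\r")       # drop back out of command mode
        return version


# Usage (hypothetical ports, adjust to your setup):
#   versions = {p: query_firmware(p) for p in ("/dev/ttyUSB0", "/dev/ttyUSB1")}
#   print(firmware_mismatches(versions))  # empty list when everything matches
```

This is a sketch, not a substitute for X-CTU's "Upgrade firmware" step, but it makes "every radio runs the same firmware" a testable property rather than a manual checklist item.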
- -**Additional resources:** - -- http://www.digi.com/aboutus/export/generalexportinfo -- http://doresearch.stanford.edu/research-scholarship/export-controls/export-controlled-or-embargoed-countries-entities-and-persons -- http://www.bis.doc.gov/licensing/exportingbasics.htm - diff --git a/_posts/2015-11-10-software-development-pitfalls.md b/_posts/2015-11-10-software-development-pitfalls.md deleted file mode 100644 index d7b9c1b..0000000 --- a/_posts/2015-11-10-software-development-pitfalls.md +++ /dev/null @@ -1,182 +0,0 @@ ---- -title: Software development and my favorite pitfalls -permalink: /software-development-pitfalls.html -date: 2015-11-10T12:00:00+02:00 -layout: post -type: post -draft: false ---- - -Over the years I have had the privilege of working on some very exciting projects, both in -the software development field and in the electronics field, and every experience -taught me some invaluable lessons about how NOT TO approach development. -Through this post I will try to point out some of the absurd, outdated techniques I -find the most annoying and damaging during a development cycle. There will be -swearing, because this topic really gets on my nerves and I have never coherently -tried to explain these things in writing. So if I get heated up, please bear with me. - -As new methods of project management emerge, the underlying processes still -stay old and outdated. This is mainly because we as people are unable to -completely shift away from these approaches. - -I was always struggling with communication, and many times that cost me a -relationship or two because I was not on the ball all the time. With every -experience, I became more convinced that I was the problem, and I never suspected -that the real problem might be that communication never evolved a single step beyond -email. And if you think about it for a second, not many things have changed around this -topic. We just have different representations of email (message boards, chats, -project management tools). 
And I believe this is the real issue we are facing now.

There are many articles written about hyper-connectivity and the effects that are a direct result of it. But the mainstream does nothing about it. We are just putting out fires and doing nothing to prevent them. I am certain this will be a major source of grief in the coming years. What we can all do to avoid it is change our mindset and experiment with our communication skills and development approaches. We need to maximize the output a person can give, and to achieve this we need to listen to them and encourage them. I know that not everybody is a natural-born leader, but with enough practice and encouragement they too can become active participants in leadership.

There is a lot of talk now about methodologies such as Scrum, Kanban, and Cleanroom, and they all fucking piss me off :). These are all boxes that imprison people and take away their freedom of thought. This is a straightforward mindfuck / amputation of creativity.

Let me list a couple of things that I find really destructive and bad for a project and, in the long run, the company.

## Ping emails

Ping emails are emails you have to write as soon as you receive an email. Their sole purpose is to inform the sender that you received their email and are working on it. Their only result is to calm the sender down that their task is being dealt with. The intent basically is: I did my job by sending you this email, so I am in the clear. I categorize this as a fuck-you email. This is one of the most irritating types of emails I have to write. It is the ultimate control-freak show you can experience, and it gives the sender a false feeling of control. Newsflash: we do not live in 1982, when there was a possibility that an email never reached its destination. I really hate this from the bottom of my heart.

They should be like: "Yes, I am fucking alive, and I am at your service, my leash!".
I guess if I replied like this, I wouldn't have to write any more of these kinds of messages.

## Everybody is a project manager

Well, this is a tough one. I noticed that as soon as you let people give their suggestions, you are basically screwed. There is truth in the saying: "Give low expectations and deliver a little more than you promised.".

People tend to take on the role of a manager as soon as they are presented with an opportunity. And by getting angry at them, you only provoke yourself. They are not at fault. You just need to tell them at the beginning that they are only giving suggestions, not tasks, and everything will be alright. But if you give them a feeling that they are in control, you will have immense problems explaining why their features are not in the current release.

The project mission must always lead the project requirements, and any deviation from it will result in major project butchering. By this I mean that the project will take its own path, and you will be left with half-done software that helps nobody. Clear mission goals and clean execution will allow you to develop software with clear intent.

## We are never wrong

I find this type of arrogance the worst. We must always conduct ourselves as if we are infallible and cannot make mistakes. As soon as a procedure or process is established, there is no room for changes or improvements. This is the most idiotic thing someone can say or think. I believe that processes need to evolve and change over time. This is imperative, a must-have in your organization, if you want to improve and develop the company. We all need to grow balls and change everything in order to adapt to current situations. Being a prisoner of predefined processes kills creativity.

I am constantly trying new software for project management and communication. I believe every team has its own dynamic, and it needs to be discovered organically and naturally through many experiments.
By putting the team in a box, you are amputating their creativity and therefore minimizing their potential. But if you talk to an executive, you will mostly find archetypal thinking and a strong need to compartmentalize everything, from business processes to resource management. And this type of management, which often displays micromanagement techniques, only works for short periods (a couple of years); then employees either leave the company or become basically retarded drones on autopilot.

## Micromanaging

This basically implies that everybody on the team is an idiot who needs a to-do list that they cannot write themselves. How about spoon-feeding the team at lunch, because besides the team leader everybody must be a retarded idiot at best?

I prefer milestones, as they give developers much more freedom and creativity in developing, instead of wasting their time checking some bizarre to-do list that was not even thought through. Projects change constantly throughout the development cycle, and all you are left with at the end is a list of unchecked tasks and the wrath of management asking why they are not completed. Best WTF moment!

## Human contact — no need for it!

We are vigorously trying to eliminate physical contact by replacing short meetings with software, with no regard for the fact that we are not machines. Many times a simple 5-minute meeting in the morning can solve most of the problems. In rapid development, short bursts of face-to-face communication are possibly the best way to go.

We now have all this software available, and all we get out of it is a giant clusterfuck. An obstacle, not a solution. So why do we still use them?

## MVP is killing innovation

Many will disagree with me on this one, but I stand strong by this statement.
What I have noticed in my experience is that all these buzzwords only mislead us and trap us in a circle of solving issues that already have a solution, but we are unable to see it without using some fancy word for it.

The toughest thing for a developer to do is to minimize requirements. Well, this is tough only for bad developers. Yes, I said it. There are many types of developers out there. And those unable to minimize feature scope are the ones you don't need on your team. Their only goal is to solve problems that exist only in their heads. And then you have to argue with them and waste energy on them, instead of developing your awesome product. They are a cancer, and I suggest you cut them off.

MVP as an idea is great, but sadly people don't understand the underlying philosophy, and they spend too much time focusing and fixating on something that every sane person with a normal IQ would understand without some made-up acronym. And the result is a lot of talking and barely any execution.

Well, MVP is not directly killing innovation, but stupid people do when they try to understand it.

## Pressure wasteland

You must never allow yourself to be pressured into confirming a deadline if you are not confident. We often feel that we are in the service of others, which is true to some extent. But it is also true that others are in service to us to some extent. And we forget this all the time. We are all pressured constantly to make decisions just to calm other people down. And when they leave your office you experience a WTF moment :) How the hell did they manage to fuck me up again?

People need to realize that the more pressure you put on somebody, the less they will be able to do. So 5-minute update-email requests will only result in a mental breakdown and an inability to work that day. Constant poking is probably the one thing that makes me lose my mind instantly.
For all of you who are doing this: "Stop bothering us with your insecurities and let us do our job. We will do it quicker and better without you breathing down our necks."

If this happens to me, I end up with no energy at the end of the day. Don't you get it? You will get much more from me and out of me if you ask me like a human being and not your personal butler. In the long run, you are destroying your relationships, and nobody will want to work with you. Your schizophrenic approach will damage only you in the long run. Nobody is anybody's property.

## Conclusion

I am guilty of many of the things described in this post. And I find it hard sometimes to acknowledge this. I lie to myself and vigorously try to find some explanation for why I do these things. There is always space for growth. And maybe you will also find some of yourself in this post and realize what needs to change for you to evolve.

diff --git a/_posts/2017-03-07-golang-profiling-simplified.md b/_posts/2017-03-07-golang-profiling-simplified.md
deleted file mode 100644
index aeea956..0000000
--- a/_posts/2017-03-07-golang-profiling-simplified.md
+++ /dev/null
@@ -1,127 +0,0 @@
---
title: Golang profiling simplified
permalink: /golang-profiling-simplified.html
date: 2017-03-07T12:00:00+02:00
layout: post
type: post
draft: false
---

Many posts have been written regarding profiling in Golang, but I haven't found a proper tutorial on the subject. Almost all of them are missing some important piece of information, and it gets pretty frustrating when you have a deadline and cannot find a simple, distilled solution.

Nevertheless, after searching and experimenting I have found a solution that works for me and probably will for you too.

## Where are my pprof files?

By default, pprof files are generated in the /tmp/ folder. You can override the folder where these files are generated programmatically in your Golang code, as we will see in the example below.

## Why is my CPU profile empty?
I have found that sometimes the CPU profile is empty because the program was not executing long enough. In my experience, programs that execute too quickly don't produce a useful pprof file. Well, the file is generated, but it only contains 4KB of information.

## Profiling

As you can see from the examples, we are executing the dummy_benchmark function to ensure some sort of execution. Memory profiling can be done without such a "complex" function, but CPU profiling needs it.

Both the memory and CPU profiling examples are almost the same. Only the parameters passed to profile.Start in the main function differ. When we set profile.ProfilePath(".") we tell the profiler to store pprof files in the same folder as our program.

### Memory profiling

```go
package main

import (
    "fmt"
    "time"

    "github.com/pkg/profile"
)

func dummy_benchmark() {
    fmt.Println("first set ...")
    for i := 0; i < 918231333; i++ {
        i *= 2
        i /= 2
    }

    <-time.After(time.Second * 3)

    fmt.Println("second set ...")
    for i := 0; i < 9182312232; i++ {
        i *= 2
        i /= 2
    }
}

func main() {
    defer profile.Start(profile.MemProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop()
    dummy_benchmark()
}
```

### CPU profiling

```go
package main

import (
    "fmt"
    "time"

    "github.com/pkg/profile"
)

func dummy_benchmark() {
    fmt.Println("first set ...")
    for i := 0; i < 918231333; i++ {
        i *= 2
        i /= 2
    }

    <-time.After(time.Second * 3)

    fmt.Println("second set ...")
    for i := 0; i < 9182312232; i++ {
        i *= 2
        i /= 2
    }
}

func main() {
    defer profile.Start(profile.CPUProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop()
    dummy_benchmark()
}
```

### Generating profiling reports

```bash
# memory profiling
go build mem.go
./mem
go tool pprof -pdf ./mem mem.pprof > mem.pdf

# cpu profiling
go build cpu.go
./cpu
go tool pprof -pdf ./cpu cpu.pprof > cpu.pdf
```

This will generate a PDF document with the visualized profile.
- [Memory PDF profile example](/assets/posts/go-profiling/golang-profiling-mem.pdf)
- [CPU PDF profile example](/assets/posts/go-profiling/golang-profiling-cpu.pdf)

diff --git a/_posts/2017-04-17-what-i-ve-learned-developing-ad-server.md b/_posts/2017-04-17-what-i-ve-learned-developing-ad-server.md
deleted file mode 100644
index 10aca0d..0000000
--- a/_posts/2017-04-17-what-i-ve-learned-developing-ad-server.md
+++ /dev/null
@@ -1,200 +0,0 @@
---
title: What I've learned developing ad server
permalink: /what-i-ve-learned-developing-ad-server.html
date: 2017-04-17T12:00:00+02:00
layout: post
type: post
draft: false
---

For the past year and a half I have been developing a native advertising server that contextually matches ads and displays them in different template forms on a variety of websites. This project grew from serving thousands of ads per day to millions.

The system is made from a couple of core components:

- API for serving ads,
- Utils - cronjobs and queue management tools,
- Dashboard UI.

The initial release used [MongoDB](https://www.mongodb.com/) for full-text search, but it was later replaced by [Elasticsearch](https://www.elastic.co/) for better CPU utilization and better search performance. This provided us with many of [Elasticsearch](https://www.elastic.co/)'s amazing functionalities. You should check it out if you do any search-related operations.

Because the premise of the server is to provide a native ad experience, ads are rendered on the client side via a simple templating engine. This ensures that ads can be displayed in a number of different ways based on the visual style of the page. And this makes the JavaScript client library quite complex.

So now that you know the basic information about the product, let's get into the lessons we learned.

## Aggregate everything

After the beta version was released, everything (impressions, clicks, etc.) was written at nanosecond resolution to the database.
At that time we were using [PostgreSQL](https://www.postgresql.org/), and the database quickly grew to over 200GB of disk space. And that was problematic. Statistics took a disturbingly long time to aggregate. Using indexes on the stats table was no help either after we reached 500 million datapoints.

> There is marketing product information and there is real-life experience. And they tend to be quite the opposite.

This is the reason that everything is now aggregated on a daily basis, and this data is then fed to Elastic in the form of a daily summary. With this we can now track many more dimensions, such as zone, channel and platform information. And with this information we can adapt occurrences of ads in specific places more precisely.

We have also adopted [Redis](https://redis.io/) as a full-time citizen in our stack. Because Redis also stores information on the local disk, we have some sort of backup if the server were to suffer some failure.

All the real-time statistics for ad serving and redirecting are kept as counters in a Redis instance, extracted daily, and pushed to Elastic.

## Measure everything

The thing about software is that we really don't know how well it performs under load until such load is presented. When testing locally everything is fine, but in production things tend to fall apart.

As a solution, we measure everything we can: function execution time (by encapsulating functions with timers), server performance (CPU, memory, disk, etc.), and Nginx and [uWSGI](https://uwsgi-docs.readthedocs.io/) performance. We sacrifice a bit of performance for the sake of this information. And we store all of it for later analysis.
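The timer encapsulation mentioned above can be sketched as a small decorator. This is an illustrative reconstruction, not our production code; the `timed` and `timings` names are made up here, and `match_by_context` only stands in for a real matching function:

```python
import time
from collections import defaultdict

# per-function aggregates: call counter, total and average elapsed time
timings = defaultdict(lambda: {"counter": 0, "elapsed": 0.0, "avg": 0.0})

def timed(func):
    """Encapsulate func with a timer and aggregate the results."""
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            entry = timings[func.__name__]
            entry["counter"] += 1
            entry["elapsed"] += time.time() - start
            entry["avg"] = entry["elapsed"] / entry["counter"]
    return wrapper

@timed
def match_by_context():
    time.sleep(0.01)  # stand-in for the real matching work

match_by_context()
print(timings["match_by_context"]["counter"])  # prints 1
```

Dumping `timings` as JSON once a day is then enough to produce reports like the example that follows.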
**Example of function execution time**

```json
{
    "get_final_filtered_ads": {
        "counter": 1931250,
        "avg": 0.0066143431,
        "elapsed": 12773.9500310003
    },
    "store_keywords_statistics": {
        "counter": 1931011,
        "avg": 0.0004605267,
        "elapsed": 889.2821669996
    },
    "match_by_context": {
        "counter": 1931011,
        "avg": 0.0055960716,
        "elapsed": 10806.0758889999
    },
    "match_by_high_performance": {
        "counter": 262,
        "avg": 0.0152770229,
        "elapsed": 4.00258
    },
    "store_impression_stats": {
        "counter": 1931250,
        "avg": 0.0006189991,
        "elapsed": 1195.4419869999
    }
}
```

We have also started profiling with [cProfile](https://pymotw.com/2/profile/) and then visualizing with [KCachegrind](http://kcachegrind.sourceforge.net/). This provides a much more detailed look into code execution.

## Cache control is your friend

Because we use a JavaScript library for rendering ads, we rely on this script extensively, and when needed we must be able to change its behavior quickly.

In our case we cannot simply replace the JavaScript URL in the HTML code. It usually takes a day or two for the people who maintain the sites to change the code or add a ?ver=xxx parameter. And this makes rapid deployment and testing very difficult and time-consuming. There is a limit to how much you can test locally.

We are now in the process of integrating [Google Tag Manager](https://www.google.com/analytics/tag-manager/), but a couple of websites are built on the ASP.NET platform, which has some problems with Tag Manager. With the solution below we are certain that we are serving the latest version of the script.

It only takes one mistake for users to end up with the script cached, and if it is cached for 1 year you probably know where the problem is.
```nginx
# nginx ➜ /etc/nginx/sites-available/default
location /static/ {
    alias /path-to-static-content/;
    autoindex off;
    charset utf-8;
    gzip on;
    gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;

    location ~* \.(ico|gif|jpeg|jpg|png|woff|ttf|otf|svg|woff2|eot)$ {
        expires 1y;
        add_header Pragma public;
        add_header Cache-Control "public";
    }

    location ~* \.(css|js|txt)$ {
        expires 3600s;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate";
    }
}
```

Also be careful when redirecting to a URL in your Python code. We noticed that if we didn't precisely set up the cache control and expiry headers in the response, we didn't get the request on the server and therefore couldn't measure clicks. So when redirecting, do as follows and there will be no problems.

```python
# python ➜ bottlepy web micro-framework
response = bottle.HTTPResponse(status=302)
response.set_header("Cache-Control", "no-store, no-cache, must-revalidate")
response.set_header("Expires", "Thu, 01 Jan 1970 00:00:00 GMT")
response.set_header("Location", url)
return response
```

> Cache control in browsers is quite aggressive, and you need to be precise to avoid future problems. We learned that lesson the hard way.

## Learn NGINX

When deciding on a web server, we went with Nginx as a reverse proxy for our applications. We adopted a micro-service-oriented architecture early in the project to ensure that when we scale we can easily add additional servers to our cluster. Nginx was crucial for load balancing and static content delivery.

At first our config file was quite simple; later it grew larger. After much patching and adding of new settings, I sat down and learned more about the guts of Nginx. This proved to be very useful, and we were able to squeeze much more out of our setup. So I advise you to take your time and read through the [documentation](https://nginx.org/en/docs/).
This saved us a lot of headaches. Googling for solutions only goes so far.

## Use Redis/Memcached

As explained above, we use caching for basically everything. It is the cornerstone of our services. At first we were very careful about how much we stored in [Redis](https://redis.io/), but we later found out that the memory footprint is very low even when storing a large amount of data in it.

So we gradually increased our usage to caching whole HTML outputs of the dashboard. This improved our performance by an order of magnitude. And with native TTL support, this goes hand in hand with our needs.

The reason we chose [Redis](https://redis.io/) over [Memcached](https://memcached.org/) was Redis's out-of-the-box scalability. But all of this can be achieved with Memcached as well.

## Conclusion

There are a lot more details that could have been written, and every single topic in here deserves its own post, but you probably got the idea of the problems we faced.

diff --git a/_posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md b/_posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md
deleted file mode 100644
index 2e2ec70..0000000
--- a/_posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md
+++ /dev/null
@@ -1,207 +0,0 @@
---
title: Profiling Python web applications with visual tools
permalink: /profiling-python-web-applications-with-visual-tools.html
date: 2017-04-21T12:00:00+02:00
layout: post
type: post
draft: false
---

I have been profiling my software with KCachegrind for a long time now, and I was missing this option when developing APIs and other web services. I always knew this was possible but never really took the time to dive into it.

Before we begin, there are some requirements.
We will need to: - -- implement [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) into our web app, -- convert output to [callgrind](http://valgrind.org/docs/manual/cl-manual.html) format with [pyprof2calltree](https://pypi.python.org/pypi/pyprof2calltree/), -- visualize data with [KCachegrind](http://kcachegrind.sourceforge.net/html/Home.html) or [Profiling Viewer](http://www.profilingviewer.com/). - - -If you are using MacOS you should check out [Profiling -Viewer](http://www.profilingviewer.com/) or -[MacCallGrind](http://www.maccallgrind.com/). - -![KCachegrind](/assets/posts/python-profiling/kcachegrind.png){:loading="lazy"} - -We will be dividing this post into two main categories: - -- writing simple web-service, -- visualize profile of this web-service. - -## Simple web-service - -Let's use virtualenv so we won't pollute our base system. If you don't have -virtualenv installed on your system you can install it with pip command. - -```bash -# let's install virtualenv globally -$ sudo pip install virtualenv - -# let's also install pyprof2calltree globally -$ sudo pip install pyprof2calltree - -# now we create project -$ mkdir demo-project -$ cd demo-project/ - -# now let's create folder where we will store profiles -$ mkdir prof - -# now we create empty virtualenv in venv/ folder -$ virtualenv --no-site-packages venv - -# we now need to activate virtualenv -$ source venv/bin/activate - -# you can check if virtualenv was correctly initialized by -# checking where your python interpreter is located -# if command bellow points to your created directory and not some -# system dir like /usr/bin/python then everything is fine -$ which python - -# we can check now if all is good ➜ if ok couple of -# lines will be displayed -$ pip freeze -# appdirs==1.4.3 -# packaging==16.8 -# pyparsing==2.2.0 -# six==1.10.0 - -# now we are ready to install bottlepy ➜ web micro-framework -$ pip install bottle - -# you can deactivate virtualenv but you will 
then go
# under system domain ➜ for now don't deactivate
$ deactivate
```

We are now ready to write a simple web service. Let's create a file app.py and paste the code below into this newly created file.

```python
# -*- coding: utf-8 -*-

import bottle
import random
import cProfile

app = bottle.Bottle()

# this function is a decorator that encapsulates a function,
# performs profiling, and then saves the result to a subfolder:
# prof/function-name.prof
# in our example only the awesome_random_number function will
# be profiled because it has @do_cprofile applied
def do_cprofile(func):
    def profiled_func(*args, **kwargs):
        profile = cProfile.Profile()
        try:
            profile.enable()
            result = func(*args, **kwargs)
            profile.disable()
            return result
        finally:
            profile.dump_stats("prof/" + str(func.__name__) + ".prof")
    return profiled_func


# we enable profiling for a specific function by including
# @do_cprofile above the function declaration
@app.route("/")
@do_cprofile
def awesome_random_number():
    awesome_random_number = random.randint(0, 100)
    return "awesome random number is " + str(awesome_random_number)

@app.route("/test")
def test():
    return "dummy test"

if __name__ == '__main__':
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 4000
    )

# run with 'python app.py'
# open browser 'http://0.0.0.0:4000'
```

When the browser hits the awesome\_random\_number() function, a profile is created in the prof/ subfolder.

## Visualize profile

Now let's create the callgrind format from this cProfile output.

```bash
$ cd prof/
$ pyprof2calltree -i awesome_random_number.prof
# this creates 'awesome_random_number.prof.log' file in the same folder
```

This file can be opened with the visualizing tools listed above. In this case we will be using Profiling Viewer under MacOS. You can open the image in a new tab. As you can see from this example, there is a hierarchy showing the execution order of your code.
![Profiling Viewer](/assets/posts/python-profiling/profiling-viewer.png){:loading="lazy"}

> Make sure you convert the cProfile output every time you want to refresh and take a look at your possible optimizations, because cProfile updates the .prof file every time the browser hits the function.

This is just a simple example, but when you are developing real-life applications this can be very illuminating, especially for seeing which parts of your code are bottlenecks and need to be optimized.

## Update 2017-04-22

Reddit user [mvt](https://www.reddit.com/user/mvt) also recommended this awesome web-based profile visualizer, [SnakeViz](https://jiffyclub.github.io/snakeviz/), which directly takes output from the [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) module.
```bash
# let's install it globally as well
$ sudo pip install snakeviz

# now let's visualize
$ cd prof/
$ snakeviz awesome_random_number.prof
# this automatically opens browser window and
# shows visualized profile
```

![SnakeViz](/assets/posts/python-profiling/snakeviz.png){:loading="lazy"}

Reddit user [ccharles](https://www.reddit.com/user/ccharles) suggested a better way of installing pip packages: targeting the user level instead of using sudo.
```bash
# now we need to add this path to our $PATH variable
# we do this by adding this line at the end of your
# ~/.bashrc file
PATH=$PATH:$HOME/.local/bin/

# in order to use this new configuration you can close
# and reopen the terminal or reload the .bashrc file
$ source ~/.bashrc

# now let's test if the new directory is present in $PATH
$ echo $PATH

# now we can install at the user level by adding --user
# without the use of sudo
$ pip install snakeviz --user
```

Or, as suggested by [mvt](https://www.reddit.com/user/mvt), you can use [pipsi](https://github.com/mitsuhiko/pipsi).

diff --git a/_posts/2017-08-11-simple-iot-application.md b/_posts/2017-08-11-simple-iot-application.md
deleted file mode 100644
index b552e8f..0000000
--- a/_posts/2017-08-11-simple-iot-application.md
+++ /dev/null
@@ -1,608 +0,0 @@
---
title: Simple IOT application supported by real-time monitoring and data history
permalink: /simple-iot-application.html
date: 2017-08-11T12:00:00+02:00
layout: post
type: post
draft: false
---

## Initial thoughts

I have been developing these kinds of applications for the better part of the last 5 years, and people keep asking me how to approach developing such an application, so I will take a stab at explaining it here.

IoT applications are really no different from any other kind of application. We have data that needs to be collected and visualized in some form of tables or charts. The main difference here is that most of the time this data is collected by some kind of device foreign to a developer who mainly operates in the web domain. But fear not, it's not that different from writing some JavaScript.

There are many devices able to transmit data via a wireless or wired network by default, but for the sake of example we will be using the commonly known Arduino with a wireless module already on the board → [Arduino MKR1000](https://store.arduino.cc/arduino-mkr1000).
In order to make this little project as accessible to others as possible, I will try to make it as inexpensive as possible. By this I mean that I will avoid using hosted virtual servers and will be using my own laptop as a server. You must, however, buy an Arduino MKR1000 to follow the steps below. If you want to deploy this software, I would suggest using [DigitalOcean](https://www.digitalocean.com) → the smallest VPS is only per month, making this one of the most affordable options out there. Please note that this software will not run on stock web hosting that only supports LAMP (Linux, Apache, MySQL, and PHP).

But before we begin, please take notice that this is strictly experimental code, not well optimized, and there are much better ways of handling some aspects of the application, but those require a much deeper knowledge of technology that is not needed for an example like this.

**Development steps**

1. Simple Python API that will receive and store incoming data.
2. Prototype C++ code that will read "sensor data" and transmit it to the API.
3. Data visualization with charts → extends the Python web application.

Steps 1 and 3 will share the same web application. One route will be dedicated to the API and another to serving HTML with the chart.

The schema below represents what we will try to achieve and how the different parts relate to each other.

![Overview](/assets/posts/iot-application/simple-iot-application-overview.svg){:loading="lazy"}

## Simple Python API

I have always been a fan of simplicity, so we will be using [Bottle: Python Web Framework](https://bottlepy.org/docs/dev/). It is a single-file web framework that seriously simplifies working with routes and templating, and it has a built-in web server that satisfies our needs in this case.

First we need to install the bottle package. This can be done by downloading ```bottle.py``` and placing it in the root of your application, or by using pip: ```pip install bottle --user```.
If you are using Linux or MacOS, then Python is already installed. If you try to test this on Windows, please install [Python for Windows](https://www.python.org/downloads/windows/). There may be some problems with the path when you try to launch ```python webapp.py```, so please take care of this before you continue.

### Basic web application

The most basic bottle application is quite simple. Paste the code below into a ```webapp.py``` file and save it.

```python
# -*- coding: utf-8 -*-

import bottle

# initializing bottle app
app = bottle.Bottle()

# triggered when / is accessed from browser
# only accepts GET → no POST allowed
@app.route("/", method=["GET"])
def route_default():
    return "howdy from python"

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

To run this simple application, open a command prompt or terminal, go to the folder containing your file, and type ```python webapp.py```. If everything goes OK, open your web browser and point it to ```http://0.0.0.0:5000```.

If you would like to change the port of your application (to, say, port 80) without running your app as root, this will present a problem. TCP/IP port numbers below 1024 are privileged ports → this is a security feature. So for both simplicity and security, use a port number above 1024, as I have with port 5000.

If this fails at any time, please fix it before you continue, because nothing below will work otherwise.

We use 0.0.0.0 as the default host so that this app is available over your local network. If you find your local IP with ```ifconfig``` and try accessing this site with your phone (if it is on the same network/router as your machine), this should work as well (an example of such an IP: ```http://192.168.1.15:5000```).
This is a must-have, because the Arduino will be accessing this application to send its data.

### Web application security

There is a lot to be said about security; it is the topic of many books. All of it cannot be covered here, but to establish some basic security → you should always use SSL with your application. Fantastic free certificates are available from [Let's Encrypt - Free SSL/TLS Certificates](https://letsencrypt.org). With an SSL certificate installed, you should then make use of HTTP headers and send your "API key" via a header. If your key is sent via a header, it is encrypted by SSL and travels encrypted over the network. Never send your API key as a GET parameter like ```http://example.com/?api_key=somekeyvalue```. The problem with this kind of sending is that the key is visible in logs and to network sniffers.

There is a fantastic article describing some aspects of security: [11 Web Application Security Best Practices](https://www.keycdn.com/blog/web-application-security-best-practices/). Please check it out.

### Simple API for writing data-points

We will now take the boilerplate code from the example above and extend it to be able to write data received by the API to local storage. For storage I will use SQLite3, because it plays well with Python and can store quite a large amount of data. I have been using it to collect gigabytes of data in a single database without any corruption or problems → your experience may vary.

To avoid learning SQLite I will be using [Dataset: databases for lazy people](https://dataset.readthedocs.io/en/latest/index.html). This package abstracts SQL and simplifies writing and reading data from the database. You should install it with pip: ```pip install dataset --user```.
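Under the hood, Dataset just issues ordinary SQL against SQLite. For the curious, here is roughly the stdlib-only equivalent of the insert/read operations used in this post — a sketch, not the post's code; the table and column names (`point`, `ts`, `value`) match the examples below, and an in-memory database is used here instead of ```data.db```:

```python
import sqlite3
import time

# roughly what the `dataset` calls in this post do behind the scenes;
# using :memory: here for illustration — the post writes to data.db
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS point (ts INTEGER, value TEXT)")

def insert_point(value):
    # store a reading together with the current unix timestamp
    conn.execute("INSERT INTO point (ts, value) VALUES (?, ?)",
                 (int(time.time()), value))
    conn.commit()

def all_points():
    # return all readings in chronological order
    return conn.execute("SELECT ts, value FROM point ORDER BY ts").fetchall()

insert_point("42")
print(all_points())
```

Dataset generates the table and columns for you on the first insert, which is exactly the convenience we are buying here.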
Because the API will use the POST method, I will test that the code works correctly using [Restlet Client for Google Chrome](https://chrome.google.com/webstore/detail/restlet-client-rest-api-t/aejoelaoggembcahagimdiliamlcdmfm). This software also allows you to set headers → for basic security with the API key.

To quickly generate passwords or API keys I usually use this nifty website [RandomKeygen](https://randomkeygen.com/).

Copy and paste the code below over your previous code in ```webapp.py```.

```python
# -*- coding: utf-8 -*-

import time
import bottle
import random
import dataset

# initializing bottle app
app = bottle.Bottle()

# connects to sqlite database
# check_same_thread=False allows using it in multi-threaded mode
app.config["dsn"] = dataset.connect("sqlite:///data.db?check_same_thread=False")

# api key that will be used in Arduino code
app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH"

# triggered when /api is accessed from browser
# only accepts POST → no GET allowed
@app.route("/api", method=["POST"])
def route_default():
    status = 400
    ts = int(time.time())  # current timestamp
    value = bottle.request.body.read()  # data from device
    api_key = bottle.request.get_header("Api-Key")  # api key from header

    # outputs received data to console for debugging
    print(">>> {} :: {}".format(value, api_key))

    # if api_key is correct and value is present
    # then writes attribute to point table
    if api_key == app.config["api_key"] and value:
        app.config["dsn"]["point"].insert(dict(ts=ts, value=value))
        status = 200

    # we only need to return status
    return bottle.HTTPResponse(status=status, body="")

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

To run this, simply go to the folder containing the Python file and run ```python webapp.py``` from a terminal.
If everything goes ok, you should have a simple API available via the POST method on the /api route.

After testing the service with Restlet Client you should be able to view your data in the database file ```data.db```.

![REST settings example](/assets/posts/iot-application/iot-rest-example.png){:loading="lazy"}

You can also check the contents of the new database file with a desktop client for SQLite → [DB Browser for SQLite](http://sqlitebrowser.org/).

![SQLite database example](/assets/posts/iot-application/iot-sqlite-db.png){:loading="lazy"}

The table structure is as simple as it can be. We have ts (timestamp) and value (the value from the Arduino). As you can see, the timestamp is generated on the API side. If you happened to have a real-time clock on the Arduino, it would be better to generate and send the timestamp with the value. This would be particularly useful if we were collecting sensor data at a higher frequency and sending it to the API in bulk.

If you deploy this app with uWSGI in multi-threaded mode, use a DSN (Data Source Name) URL with ```?check_same_thread=False```.

Ok, now that we have a working API with some basic security, so unwanted people cannot post data to your database, we can proceed and program the Arduino to send data to the API.

## Sending data to API with Arduino MKR1000

First of all, you need an MKR1000 module and a micro-USB cable. If you have ever done any work with Arduino you know that you also need the [Arduino IDE](https://www.arduino.cc/en/Main/Software). From the provided link you can download and install the IDE. Once that task is completed and you have successfully run the blink example, proceed to the next step.

To use the wireless capabilities of the MKR1000 you first need to install the [WiFi101 library](https://www.arduino.cc/en/Reference/WiFi101) in the Arduino IDE. Please check before you install → you may already have it.
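Before wiring up the hardware, you can build a request equivalent to what the Arduino sketch below will send using Python's standard urllib — a sketch of mine, not part of the original post; the URL, port and Api-Key value match the examples here, and the last line only works while ```webapp.py``` is running:

```python
import urllib.request

# a POST equivalent to what the Arduino will send;
# URL, port and key match the examples in this post
req = urllib.request.Request(
    "http://0.0.0.0:5000/api",
    data=b"123",  # stands in for a sensor reading
    headers={"Api-Key": "JtF2aUE5SGHfVJBCG5SH"},
    method="POST",
)

# uncomment once webapp.py is running:
# print(urllib.request.urlopen(req).status)
```

If the key is wrong or the body is empty, the API above responds with status 400, so this is also a quick way to verify the security check.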
The code below is a working example that sends data to the API. Before you test it, make sure the Python web application is running. Then change the settings for wifi, the API endpoint and api_key. If for some reason the code below doesn't work for you, please leave a comment and I'll try to help.

Once you have opened the IDE and copied this code, try to compile and upload it. Then open "Serial monitor" to see if any output is presented by the Arduino.

```c
#include <WiFi101.h>

// wifi settings
char ssid[] = "ssid-name";
char pass[] = "ssid-password";

// api server endpoint
char server[] = "192.168.6.22";
int port = 5000;

// api key that must be the same as the one in Python code
String api_key = "JtF2aUE5SGHfVJBCG5SH";

// frequency data is sent in ms - every 5 seconds
int timeout = 1000 * 5;

int status = WL_IDLE_STATUS;

void setup() {

  // initialize serial and wait for port to open:
  Serial.begin(9600);
  delay(1000);

  // check for the presence of the shield
  if (WiFi.status() == WL_NO_SHIELD) {
    Serial.println("WiFi shield not present");
    while (true);
  }

  // attempt to connect to wifi network
  while (status != WL_CONNECTED) {
    Serial.print("Attempting to connect to SSID: ");
    Serial.println(ssid);
    status = WiFi.begin(ssid, pass);
    // wait 10 seconds for connection
    delay(10000);
  }

  // output wifi status to serial monitor
  Serial.print("SSID: ");
  Serial.println(WiFi.SSID());

  IPAddress ip = WiFi.localIP();
  Serial.print("IP Address: ");
  Serial.println(ip);

  long rssi = WiFi.RSSI();
  Serial.print("signal strength (RSSI):");
  Serial.print(rssi);
  Serial.println(" dBm");
}

void loop() {
  WiFiClient client;

  if (client.connect(server, port)) {

    // I use random number generator for this example
    // but you can use analog or digital inputs from arduino
    String content = String(random(1000));

    client.println("POST /api HTTP/1.1");
    client.println("Connection: close");
    client.println("Api-Key: " +
api_key);
    client.println("Content-Length: " + String(content.length()));
    client.println();
    client.println(content);

    delay(100);
    client.stop();
    Serial.println("Data sent successfully ...");

  } else {
    Serial.println("Problem sending data ...");
  }

  // waits for x seconds and continue looping
  delay(timeout);
}
```

As you can see from the example, the Arduino generates a random integer between 0 and 999. You can easily replace this with a temperature sensor or any other kind of sensor.

Now that we have the API under the hood and the Arduino is sending demo data, we can focus on data visualization.

## Data visualization

Before we continue we should examine our project folder structure. Currently we only have two files in our project:

_simple-iot-app/_

* _webapp.py_
* _data.db_

We will now add an HTML template that contains its CSS and JavaScript inline, for simplicity. For the bottle framework to be able to scan the root application folder for templates, we will add ```bottle.TEMPLATE_PATH.insert(0, "./")``` to ```webapp.py```. By default the bottle framework uses a ```views/``` subfolder to store templates. Overriding this is not ideal → if you use bottle to develop real web applications you should follow the native behavior and store templates in its predefined folder. But for the sake of the example we will override it. Be careful to fully replace your code with the new code provided below; avoid partially replacing code in the file :) New code for reading data-points is also included in the Python example below.

First we add a new route to our web application. It is triggered when the browser hits the root of the application, ```http://0.0.0.0:5000/```. This route does nothing more than render the ```frontend.html``` template, via ```return bottle.template("frontend.html")```. Check the code below to examine how exactly this is done.
Now we will expand the ```/api``` route to use different methods for writing and reading data-points. For writing a data-point we use the POST method, and for reading points we use the GET method. The GET method returns a JSON array with the latest readings and historical data.

There is a fantastic JavaScript library for plotting time-series charts called [MetricsGraphics.js](https://www.metricsgraphicsjs.org), based on the [D3.js](https://d3js.org/) data-visualization library.

MetricsGraphics.js requires a particular data schema → we need to transform the data from the database into this format:

```json
[
  {
    "date": "2017-08-11 01:07:20",
    "value": 933
  },
  {
    "date": "2017-08-11 01:07:30",
    "value": 743
  }
]
```

The web application is now complete except for ```frontend.html```, which we will develop next. If you started the web app now and went to the application root, it would return an error because frontend.html doesn't exist yet.

```python
# -*- coding: utf-8 -*-

import time
import bottle
import json
import datetime
import random
import dataset

# initializing bottle app
app = bottle.Bottle()

# adds root directory as template folder
bottle.TEMPLATE_PATH.insert(0, "./")

# connects to sqlite database
# check_same_thread=False allows using it in multi-threaded mode
app.config["db"] = dataset.connect("sqlite:///data.db?check_same_thread=False")

# api key that will be used in Arduino code
app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH"

# triggered when / is accessed from browser
# only accepts GET → no POST allowed
@app.route("/", method=["GET"])
def route_default():
    return bottle.template("frontend.html")

# triggered when /api is accessed from browser
# accepts POST and GET
@app.route("/api", method=["GET", "POST"])
def route_api():

    # if method is POST then we write datapoint
    if bottle.request.method == "POST":
        status = 400
        ts = int(time.time())  # current timestamp
        value = 
bottle.request.body.read()  # data from device
        api_key = bottle.request.get_header("Api-Key")  # api key from header

        # outputs received data to console for debugging
        print(">>> {} :: {}".format(value, api_key))

        # if api_key is correct and value is present
        # then writes attribute to point table
        if api_key == app.config["api_key"] and value:
            app.config["db"]["point"].insert(dict(ts=ts, value=value))
            status = 200

        # we only need to return status
        return bottle.HTTPResponse(status=status, body="")

    # if method is GET then we read datapoints
    else:
        response = []
        datapoints = app.config["db"]["point"].all()

        for point in datapoints:
            response.append({
                "date": datetime.datetime.fromtimestamp(int(point["ts"])).strftime("%Y-%m-%d %H:%M:%S"),
                "value": point["value"]
            })

        bottle.response.content_type = "application/json"
        return json.dumps(response)

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

And now, finally, we can implement ```frontend.html```. Create a file with this name and copy the code below. When you are done you can start the web application. The steps for this part are listed below the code.

```html

Simple IOT application

```

Now the folder structure should look like:

_simple-iot-app/_

* _webapp.py_
* _data.db_
* _frontend.html_

Ok, let's now start the application and start feeding it data:

1. run ```python webapp.py```
2. connect the Arduino MKR1000 to a power source
3. open a browser and go to ```http://0.0.0.0:5000```

If everything goes well, you should see new data-points rendered on the chart every 5 seconds.

If you navigate to ```http://0.0.0.0:5000``` you should see the rendered chart as shown in the picture below.

![Application output](/assets/posts/iot-application/iot-app-output.png){:loading="lazy"}

The complete application with all the code is available for [download](/assets/posts/iot-application/simple-iot-application.zip).

## Conclusion

I hope this clarifies some aspects of IoT application development. Of course, this is a minimal example and is far from what can be done in real life with a deeper dive into other technologies.

If you would like to continue exploring the IoT world, here are some interesting resources to examine:

* [Reading Sensors with an Arduino](https://www.allaboutcircuits.com/projects/reading-sensors-with-an-arduino/)
* [MQTT 101 – How to Get Started with the lightweight IoT Protocol](http://www.hivemq.com/blog/how-to-get-started-with-mqtt)
* [Stream Updates with Server-Sent Events](https://www.html5rocks.com/en/tutorials/eventsource/basics/)
* [Internet of Things (IoT) Tutorials](http://www.tutorialspoint.com/internet_of_things/)

Any comments or additional ideas are welcome in the comments below.
diff --git a/_posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md b/_posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
deleted file mode 100644
index d29bd09..0000000
--- a/_posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
+++ /dev/null
@@ -1,332 +0,0 @@
---
title: Using DigitalOcean Spaces Object Storage with FUSE
permalink: /using-digitalocean-spaces-object-storage-with-fuse.html
date: 2018-01-16T12:00:00+02:00
layout: post
type: post
draft: false
---

A couple of months ago [DigitalOcean](https://www.digitalocean.com) introduced a new product called [Spaces](https://blog.digitalocean.com/introducing-spaces-object-storage/), an object storage service very similar to Amazon's S3. This really piqued my interest, because it was something I had been missing, and going elsewhere on the internet for such functionality was of no interest to me. In line with their previous pricing, this is also very cheap, and the pricing page is a no-brainer compared to AWS or GCE. [Prices are clearly and precisely defined and outlined](https://www.digitalocean.com/pricing/). You must love them for that :)

## Initial requirements

* Is it possible to use them as a mounted drive with FUSE? (tl;dr YES)
* Will the performance degrade over time and over different sizes of objects? (tl;dr NO&YES)
* Can storage be mounted on multiple machines at the same time and be writable? (tl;dr YES)

> Let me be clear: the scripts I use here are made just for benchmarking and are not intended for real-life situations. That said, I am looking into using these approaches with a caching service in front, dumping everything as objects to storage afterwards. That could be an interesting post of its own. But if you need real-time data without eventual consistency, take these scripts for what they are: not usable in such situations.
## Is it possible to use them as a mounted drive with FUSE?

Well, actually they can be used in such a manner. Because Spaces is similar to [AWS S3](https://aws.amazon.com/s3/), many tools are available and you can find many articles and [Stackoverflow items](https://stackoverflow.com/search?q=s3+fuse).

To make this work you will need a DigitalOcean account. If you don't have one, you will not be able to test this code. If you do, go and [create a new Droplet](https://cloud.digitalocean.com/droplets/new?size=s-1vcpu-1gb&region=ams3&distro=debian&distroImage=debian-9-x64&options=private_networking,install_agent). If you click on this link you will already have Debian 9 with the smallest VM option preselected.

* Please be sure to add your SSH key, because we will log in to this machine remotely.
* If you change your region, please remember which one you chose, because we will need this information when we mount the space to our machine.

Instructions on how to set up and use SSH keys are available in the article [How To Use SSH Keys with DigitalOcean Droplets](https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets).

![DigitalOcean Droplets](/assets/posts/do-fuse/fuse-droplets.png){:loading="lazy"}

After we have created the Droplet, it's time to create a new Space. This is done by clicking the [Create](https://cloud.digitalocean.com/spaces/new) button (top right corner) and selecting Spaces. Choose a pronounceable ```Unique name```, because we will use it in the examples below. You can choose either Private or Public; it doesn't matter in our case, and you can always change it in the future.

When you have created the new Space, we should [generate an Access key](https://cloud.digitalocean.com/settings/api/tokens). This link will guide you to the page where you can generate the key. After you create one, save the provided Key and Secret, because the Secret will not be shown again.
![DigitalOcean Spaces](/assets/posts/do-fuse/fuse-spaces.png){:loading="lazy"}

Now that we have a new Space and an Access key, we should SSH into our machine.

```bash
# replace IP with the ip of your newly created droplet
ssh root@IP

# this will install utilities for mounting storage objects as FUSE
apt install s3fs

# we now need to provide credentials (access key we created earlier)
# replace KEY and SECRET with your own credentials but leave the colon between them
# we also need to set proper permissions
echo "KEY:SECRET" > .passwd-s3fs
chmod 600 .passwd-s3fs

# now we mount space to our machine
# replace UNIQUE-NAME with the name you chose earlier
# if you chose a different region for your space, adjust the -ourl option (ams3)
s3fs UNIQUE-NAME /mnt/ -ourl=https://ams3.digitaloceanspaces.com -ouse_cache=/tmp

# now we try to create a file
# once you mount it may take a couple of seconds to retrieve data
echo "Hello cruel world" > /mnt/hello.txt
```

After all this you can return to your browser, go to [DigitalOcean Spaces](https://cloud.digitalocean.com/spaces) and click on your created space. If the file hello.txt is present, you have successfully mounted the space on your machine and written data to it.

I chose the same region for my Droplet and my Space, but you don't have to → you can use different regions. What this actually does to performance I don't know.

Additional information on FUSE:

* [Github project page for s3fs](https://github.com/s3fs-fuse/s3fs-fuse)
* [FUSE - Filesystem in Userspace](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)

## Will the performance degrade over time and over different sizes of objects?

For this task I didn't want to just read and write text files or upload images. I actually wanted to figure out whether using something like SQLite is viable in this case.
### Measurement experiment 1: File copy

```bash
# first we create some dummy files of different sizes
dd if=/dev/zero of=10KB.dat bs=1024 count=10      #10KB
dd if=/dev/zero of=100KB.dat bs=1024 count=100    #100KB
dd if=/dev/zero of=1MB.dat bs=1024 count=1024     #1MB
dd if=/dev/zero of=10MB.dat bs=1024 count=10240   #10MB

# now we set the time command to only return real time
TIMEFORMAT=%R

# now lets test it
(time cp 10KB.dat /mnt/) |& tee -a 10KB.results.txt

# and now we automate
# this will perform the same operation 100 times
# and output results into separate files based on object size
n=0; while (( n++ < 100 )); do (time cp 10KB.dat /mnt/10KB.$n.dat) |& tee -a 10KB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 100KB.dat /mnt/100KB.$n.dat) |& tee -a 100KB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 1MB.dat /mnt/1MB.$n.dat) |& tee -a 1MB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 10MB.dat /mnt/10MB.$n.dat) |& tee -a 10MB.results.txt; done
```

Files of size 100MB were not transferred successfully and ended with an error (cp: failed to close '/mnt/100MB.1.dat': Operation not permitted).

As I suspected, object size is not really that important. Sadly I don't have the time to test performance over longer periods. But if some of you do, please send me your data → I would be interested in seeing the results.

**Here are the plotted results**

You can download the [raw results here](/assets/posts/do-fuse/copy-benchmarks.tsv). Measurements are in seconds.
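To summarize such result files yourself, a small stdlib-only Python sketch like the following works — an addition of mine, assuming each ```*.results.txt``` file holds one ```time``` real value per line, with a dot as the decimal separator:

```python
import statistics

def summarize(path):
    # each *.results.txt file holds one `time` real value per line (seconds)
    times = [float(line) for line in open(path) if line.strip()]
    return min(times), statistics.mean(times), max(times)

for name in ("10KB", "100KB", "1MB", "10MB"):
    try:
        lo, avg, hi = summarize(name + ".results.txt")
        print("%6s  min=%.3f  mean=%.3f  max=%.3f" % (name, lo, avg, hi))
    except FileNotFoundError:
        pass  # run the benchmark loop above first
```

Run it in the same folder as the benchmark output to get a quick min/mean/max table per object size.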
As far as these tests show, performance is quite stable and predictable, which is fantastic. But this is a small test and spans only a couple of hours, so you should not trust it completely.

### Measurement experiment 2: SQLite performance

I was unable to use a database file directly from the mounted drive, so this is a no-go, as I suspected. Instead I executed the code below on a local disk just to get some benchmarks. Each iteration performs DROPTABLE, CREATETABLE, INSERTMANY (1000 records), FETCHALL and COMMIT, repeated 1000 times to generate statistics. As you can see, the performance of SQLite is quite amazing. You could then potentially just copy the file to the mounted drive and be done with it.

```python
import time
import sqlite3
import sys

if len(sys.argv) < 3:
    print("usage: python sqlite-benchmark.py DB_PATH NUM_RECORDS REPEAT")
    exit()

def data_iter(x):
    for i in range(x):
        yield "m" + str(i), "f" + str(i*i)

header_line = "%s\t%s\t%s\t%s\t%s\n" % ("DROPTABLE", "CREATETABLE", "INSERTMANY", "FETCHALL", "COMMIT")
with open("sqlite-benchmarks.tsv", "w") as fp:
    fp.write(header_line)

start_time = time.time()
conn = sqlite3.connect(sys.argv[1])
c = conn.cursor()
end_time = time.time()
result_time = CONNECT = end_time - start_time
print("CONNECT: %g seconds" % (result_time))

start_time = time.time()
c.execute("PRAGMA journal_mode=WAL")
c.execute("PRAGMA temp_store=MEMORY")
c.execute("PRAGMA synchronous=OFF")
end_time = time.time()
result_time = PRAGMA = end_time - start_time
print("PRAGMA: %g seconds" % (result_time))

for i in range(int(sys.argv[3])):
    print("#%i" % (i))

    start_time = time.time()
    c.execute("drop table if exists test")
    end_time = time.time()
    result_time = DROPTABLE = end_time - start_time
    print("DROPTABLE: %g seconds" % (result_time))

    start_time = time.time()
    c.execute("create table if not exists test(a,b)")
    end_time = time.time()
    result_time = CREATETABLE = end_time - start_time
    print("CREATETABLE: %g seconds" %
(result_time)) - - start_time = time.time() - c.executemany("INSERT INTO test VALUES (?, ?)", data_iter(int(sys.argv[2]))) - end_time = time.time() - result_time = INSERTMANY = end_time - start_time - print("INSERTMANY: %g seconds" % (result_time)) - - start_time = time.time() - c.execute("select count(*) from test") - res = c.fetchall() - end_time = time.time() - result_time = FETCHALL = end_time - start_time - print("FETCHALL: %g seconds" % (result_time)) - - start_time = time.time() - conn.commit() - end_time = time.time() - result_time = COMMIT = end_time - start_time - print("COMMIT: %g seconds" % (result_time)) - - print - log_line = "%f\t%f\t%f\t%f\t%f\n" % (DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT) - with open("sqlite-benchmarks.tsv", "a") as fp: - fp.write(log_line) - -start_time = time.time() -conn.close() -end_time = time.time() -result_time = CLOSE = end_time - start_time -print("CLOSE: %g seconds" % (result_time)) -``` - -You can download [raw result here](/assets/posts/do-fuse/sqlite-benchmarks.tsv). And -again, these results are done on a local block storage and do not represent -capabilities of object storage. With my current approach and state of the test -code these can not be done. I would need to make Python code much more robust -and check locking etc. - -
## Can storage be mounted on multiple machines at the same time and be writable?

Well, this one didn't take long to test, and the answer is **YES**. I mounted the space on both machines and measured the same performance on both. But because a file is downloaded before a write and uploaded on completion, there could be problems if another process tries to access the same file.

## Observations and conclusion

Using Spaces in this way makes it easier to access and manage files. Beyond that, you would need to write additional code to make it play nice with your applications.

Nevertheless, this was extremely simple to set up and use, and it is just another excellent product in the DigitalOcean line. I found this exercise very valuable and am thinking about implementing some sort of mechanism for SQLite, so data can be stored on Spaces and accessed by many VMs. For a project where data doesn't need to be accessible in real time and can be a couple of minutes old, this would be very interesting. If any of you find this proposal interesting, please write in the comment box below or shoot me an email and I will keep you posted.

diff --git a/_posts/2019-01-03-encoding-binary-data-into-dna-sequence.md b/_posts/2019-01-03-encoding-binary-data-into-dna-sequence.md
deleted file mode 100644
index 6980ed1..0000000
--- a/_posts/2019-01-03-encoding-binary-data-into-dna-sequence.md
+++ /dev/null
@@ -1,416 +0,0 @@
---
title: Encoding binary data into DNA sequence
permalink: /encoding-binary-data-into-dna-sequence.html
date: 2019-01-03T12:00:00+02:00
layout: post
type: post
draft: false
---

## Initial thoughts

Imagine a world where you could go outside, take a leaf from a tree, put it through your personal DNA sequencer and get data like music, videos or computer programs from it. Well, this is all possible now.
It has not been done on a large scale because creating DNA strands is quite expensive, but it is possible.

Encoding data into a DNA sequence is a relatively simple process once you understand the relationship between binary data and nucleotides, and scientists have been making large leaps in this field in order to provide a viable long-term storage solution for our data → one that could potentially survive our species in case of a global disaster. We could imprint all the world's knowledge into plants and ensure the survival of our knowledge.

A more optimistic use for this technology would be easier storage of the ever-growing data we produce every day. Once machines for sequencing DNA become fast and cheap enough, this could mean the next evolution of data storage, abandoning classical hard drives and solid state drives in data warehouses.

As we currently stand this is still not viable, but it is quite an amazing and cool technology.

My interest in this field is purely in the encoding process and experimental testing, mainly because I don't have access to these expensive machines. My initial goal was to create a toolkit that anybody can use to encode their data into a proper DNA sequence.

## Glossary

**deoxyribose** A five-carbon sugar molecule with a hydrogen atom rather than a hydroxyl group in the 2′ position; the sugar component of DNA nucleotides.

**double helix** The molecular shape of DNA, in which two strands of nucleotides wind around each other in a spiral shape.

**nitrogenous base** A nitrogen-containing molecule that acts as a base; often referring to one of the purine or pyrimidine components of nucleic acids.

**phosphate group** A molecular group consisting of a central phosphorus atom bound to four oxygen atoms.

**RGB** The RGB color model is an additive color model in which red, green and blue light are added together in various ways to reproduce a broad array of colors.
- -**GCC** The GNU Compiler Collection is a compiler system produced by the GNU -Project supporting various programming languages. - -## Data encoding - -**TL;DR:** Encoding involves the use of a code to change original data into a -form that can be used by an external process. - -Encoding is the process of converting data into a format required for a number -of information processing needs, including: - -- Program compiling and execution -- Data transmission, storage and compression/decompression -- Application data processing, such as file conversion - -Encoding can have two meanings: - -- In computer technology, encoding is the process of applying a specific code, - such as letters, symbols and numbers, to data for conversion into an - equivalent cipher. -- In electronics, encoding refers to analog to digital conversion. - -## Quick history of DNA - -- **1869** - Friedrich Miescher identifies "nuclein". -- **1900s** - The Eugenics Movement. -- **1900** – Mendel's theories are rediscovered by researchers. -- **1944** - Oswald Avery identifies DNA as the 'transforming principle'. -- **1952** - Rosalind Franklin photographs crystallized DNA fibres. -- **1953** - James Watson and Francis Crick discover the double helix structure of DNA. -- **1965** - Marshall Nirenberg is the first person to sequence the bases in each codon. -- **1983** - Huntington's disease is the first mapped genetic disease. -- **1990** - The Human Genome Project begins. -- **1995** - Haemophilus Influenzae is the first bacterium genome sequenced. -- **1996** - Dolly the sheep is cloned. -- **1999** - First human chromosome is decoded. -- **2000** – Genetic code of the fruit fly is decoded. -- **2002** – Mouse is the first mammal to have its genome decoded. -- **2003** – The Human Genome Project is completed. -- **2013** – DNA Worldwide and Eurofins Forensic discover identical twins have differences in their genetic makeup. - -## What is DNA? 
Deoxyribonucleic acid is a self-replicating material which is **present in nearly all living organisms** as the main constituent of chromosomes. It is the **carrier of genetic information**.

> The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, the carbon in our apple pies were made in the interiors of collapsing stars. We are made of starstuff.
> **-- Carl Sagan, Cosmos**

The nucleotide in DNA consists of a sugar (deoxyribose), one of four bases (cytosine (C), thymine (T), adenine (A), guanine (G)), and a phosphate. Cytosine and thymine are pyrimidine bases, while adenine and guanine are purine bases. The sugar and the base together are called a nucleoside.

![DNA](/assets/posts/dna-sequence/dna-basics.jpg){:loading="lazy"}

*DNA (a) forms a double stranded helix, and (b) adenine pairs with thymine and cytosine pairs with guanine. (credit a: modification of work by Jerome Walker, Dennis Myts)*

## Encode binary data into DNA sequence

As an input file you can use any file you want:

- ASCII files,
- Compiled programs,
- Multimedia files (MP3, MP4, MKV, etc.),
- Images,
- Database files,
- etc.

Note: if you copied all the bytes from RAM to a file, or piped data to a file, you could encode that data as well, as long as you provide a file pointer to the encoder.

### Basic Encoding

As already mentioned, Basic Encoding is based on a simple mapping. Since DNA is composed of 4 nucleotides (Adenine, Cytosine, Guanine, Thymine; usually referred to by the first letter), we can encode two bits using a single nucleotide. In this way, we are able to use the 4 bases that compose the DNA strand to encode each byte of data.

| Two bits | Nucleotides      |
| -------- | ---------------- |
| 00       | **A** (Adenine)  |
| 01       | **C** (Cytosine) |
| 10       | **G** (Guanine)  |
| 11       | **T** (Thymine)  |

With this in mind we can simply encode any data using two-bits-to-nucleotide conversion.
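The mapping above can be sketched in a few lines of runnable Python — the function name is mine; the bit-to-base mapping follows the table above:

```python
# two bits per nucleotide, following the table above
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def encode_to_dna(data):
    """Encode a byte string into a DNA sequence, 4 bases per byte."""
    out = []
    for byte in data:
        bits = format(byte, "08b")      # byte as 8-character bit string
        for i in range(0, 8, 2):        # consume two bits at a time
            out.append(BITS_TO_BASE[bits[i:i + 2]])
    return "".join(out)

print(encode_to_dna(b"Hi"))  # → CAGACGGC
```

Decoding is the same table read in reverse, four bases back into one byte.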
-
-```python
-{ Algorithm 1: Naive byte array to DNA encode }
-procedure EncodeToDNASequence(f) string
-begin
-  enc string
-  while not eof(f) do
-    c byte := buffer[0]             { Read 1 byte from buffer }
-    bin string := sprintf('08b', c) { Convert byte to a binary string }
-    for e in range[0, 2, 4, 6] do
-      if bin[e] = '0' and bin[e+1] = '0' then      { 00 - A (Adenine) }
-        enc += 'A'
-      else if bin[e] = '0' and bin[e+1] = '1' then { 01 - G (Guanine) }
-        enc += 'G'
-      else if bin[e] = '1' and bin[e+1] = '0' then { 10 - C (Cytosine) }
-        enc += 'C'
-      else if bin[e] = '1' and bin[e+1] = '1' then { 11 - T (Thymine) }
-        enc += 'T'
-  return enc { Return DNA sequence }
-end
-```
-
-Another encoding would be **Goldman encoding**. Using this encoding helps with
-nonsense mutations (an amino acid replaced by a stop codon), which are the most
-problematic mutations during translation because they lead to truncated amino
-acid sequences, which in turn result in truncated proteins.
-
-[Where to store big data? In DNA: Nick Goldman at TEDxPrague](https://www.youtube.com/watch?v=a4PiGWNsIEU)
-
-### FASTA file format
-
-In bioinformatics, FASTA format is a text-based format for representing either
-nucleotide sequences or peptide sequences, in which nucleotides or amino acids
-are represented using single-letter codes. The format also allows for sequence
-names and comments to precede the sequences. The format originates from the
-FASTA software package, but has now become a standard in the field of
-bioinformatics.
-
-The first line in a FASTA file starts either with a ">" (greater-than) symbol
-or, less frequently, a ";" (semicolon), which was taken as a comment. Subsequent
-lines starting with a semicolon would be ignored by software.
Since the only comment -used was the first, it quickly became used to hold a summary description of the -sequence, often starting with a unique library accession number, and with time -it has become commonplace to always use ">" for the first line and to not use -";" comments (which would otherwise be ignored). - -```txt -;LCBO - Prolactin precursor - Bovine -; a sample sequence in FASTA format -MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS -EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL -VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED -ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC* - ->MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken -ADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID -FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA -DIDGDGQVNYEEFVQMMTAK* - ->gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus] -LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV -EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG -LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL -GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX -IENY -``` - -FASTA format was extended by [FASTQ](https://en.wikipedia.org/wiki/FASTQ_format) -format from the [Sanger Centre](https://www.sanger.ac.uk/) in Cambridge. - -### PNG encoded DNA sequence - -| Nucleotides | RGB | Color name | -| ------------ | ----------- | ---------- | -| A ➞ Adenine | (0,0,255) | Blue | -| G ➞ Guanine | (0,100,0) | Green | -| C ➞ Cytosine | (255,0,0) | Red | -| T ➞ Thymine | (255,255,0) | Yellow | - -With this in mind we can create a simple algorithm to create PNG representation -of a DNA sequence. 
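
As a dependency-free illustration of this color mapping, here is a small Python sketch. Instead of PNG (which needs an imaging library) it writes a plain-text PPM image, one pixel per base; the file name is just an example:

```python
# Nucleotide-to-color mapping from the table above, rendered as a plain
# PPM (P3) image so no imaging library is needed. One pixel per base; the
# last row is padded with 'A' (blue) pixels to keep the image rectangular.
BASE_TO_RGB = {
    "A": (0, 0, 255),    # Blue
    "G": (0, 100, 0),    # Green
    "C": (255, 0, 0),    # Red
    "T": (255, 255, 0),  # Yellow
}

def dna_to_ppm(sequence: str, width: int = 60) -> str:
    rows = [sequence[i:i + width] for i in range(0, len(sequence), width)]
    rows[-1] = rows[-1].ljust(width, "A")
    lines = ["P3 %d %d 255" % (width, len(rows))]
    for row in rows:
        lines.append(" ".join("%d %d %d" % BASE_TO_RGB[base] for base in row))
    return "\n".join(lines)

with open("quote.ppm", "w") as fh:
    fh.write(dna_to_ppm("GACAGCTTGTGTACAAGTGTGCTTGCTCGCGA", width=8))
```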
-
-```python
-{ Algorithm 2: Naive DNA to PNG encode from FASTA file }
-procedure EncodeDNASequenceToPNG(f)
-begin
-  i image
-  x, y integer := 0, 0
-  while not eof(f) do
-    c char := buffer[0] { Read 1 char from buffer }
-    case c of
-      'A': color := RGB(0, 0, 255)   { Blue }
-      'G': color := RGB(0, 100, 0)   { Green }
-      'C': color := RGB(255, 0, 0)   { Red }
-      'T': color := RGB(255, 255, 0) { Yellow }
-    drawRect(i, [x, y], color)
-    x, y := advance(x, y) { Move to the next cell, wrapping to a new row }
-  save(i) { Save PNG image }
-end
-```
-
-## Encoding text file in practice
-
-In this example we will take a simple text file as our input stream for
-encoding. The file contains a quote from Niels Bohr, saved as a txt file.
-
-> How wonderful that we have met with a paradox. Now we have some hope of
-> making progress.
-> ― Niels Bohr
-
-First we encode the text file into a FASTA file.
-
-```bash
-./dnae-encode -i quote.txt -o quote.fa
-2019/01/10 00:38:29 Gathering input file stats
-2019/01/10 00:38:29 Starting encoding ...
-   106 B / 106 B [==================================] 100.00% 0s
-2019/01/10 00:38:29 Saving to FASTA file ...
-2019/01/10 00:38:29 Output FASTA file length is 438 B
-2019/01/10 00:38:29 Process took 987.263µs
-2019/01/10 00:38:29 Done ...
-```
-
-The resulting `quote.fa` file contains the encoded DNA sequence in ASCII format.
-
-```txt
->SEQ1
-GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA
-GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA
-ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA
-ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT
-GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT
-GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC
-AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC
-AACC
-```
-
-Then we take the FASTA file from the previous operation and encode this data
-into a PNG.
-
-```bash
-./dnae-png -i quote.fa -o quote.png
-2019/01/10 00:40:09 Gathering input file stats ...
-2019/01/10 00:40:09 Deconstructing FASTA file ...
-2019/01/10 00:40:09 Compositing image file ...
-   424 / 424 [==================================] 100.00% 0s
-2019/01/10 00:40:09 Saving output file ...
-2019/01/10 00:40:09 Output image file length is 1.1 kB
-2019/01/10 00:40:09 Process took 19.036117ms
-2019/01/10 00:40:09 Done ...
-```
-
-After encoding into PNG format, the file looks like this.
-
-![Encoded Quote in PNG format](/assets/posts/dna-sequence/quote.png){:loading="lazy"}
-
-The larger the input stream is, the larger the PNG file will be.
-
-A basic Hello World C program compiled with
-[GCC](https://www.gnu.org/software/gcc/) would [look like
-this](/assets/posts/dna-sequence/sample.png).
-
-```c
-// gcc -O3 -o sample sample.c
-#include <stdio.h>
-
-int main(void) {
-  printf("Hello, world!\n");
-  return 0;
-}
-```
-
-## Toolkit for encoding data
-
-I have created a toolkit with two main programs:
-
-- dnae-encode (encodes a file into a FASTA file)
-- dnae-png (encodes a FASTA file into a PNG)
-
-The toolkit with full source code is available at
-[github.com/mitjafelicijan/dna-encoding](https://github.com/mitjafelicijan/dna-encoding).
-
-### dnae-encode
-
-```bash
-> ./dnae-encode --help
-usage: dnae-encode --input=INPUT [<flags>]
-
-A command-line application that encodes file into DNA sequence.
-
-Flags:
-      --help             Show context-sensitive help (also try --help-long and --help-man).
-  -i, --input=INPUT      Input file (ASCII or binary) which will be encoded into DNA sequence.
-  -o, --output="out.fa"  Output file which stores DNA sequence in FASTA format.
-  -s, --sequence=SEQ1    The description line (defline) or header/identifier line, gives a name and/or a unique identifier for the sequence.
-  -c, --columns=60       Row characters length (no more than 120 characters). Devices preallocate fixed line sizes in software.
-      --version          Show application version.
-```
-
-### dnae-png
-
-```bash
-> ./dnae-png --help
-usage: dnae-png --input=INPUT [<flags>]
-
-A command-line application that encodes FASTA file into PNG image.
-
-Flags:
-      --help             Show context-sensitive help (also try --help-long and --help-man).
-  -i, --input=INPUT       Input FASTA file which will be encoded into PNG image.
-  -o, --output="out.png"  Output file in PNG format that represents DNA sequence in graphical way.
-  -s, --size=10           Size of pairings of DNA bases on image in pixels (lower resolution lower file size).
-      --version           Show application version.
-```
-
-## Benchmarks
-
-First we generate some binary sample data with dd.
-
-```bash
-dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=1KB.bin bs=1KB count=1 iflag=fullblock
-```
-
-![Sample binary file 1KB](/assets/posts/dna-sequence/sample-binary-file.png){:loading="lazy"}
-
-Our freshly generated 1KB file looks something like this (it's full of
-garbage data, as intended).
-
-We create the following binary files:
-
-- 1KB.bin
-- 10KB.bin
-- 100KB.bin
-- 1MB.bin
-- 10MB.bin
-- 100MB.bin
-
-After this we create FASTA files for all the binary files by encoding them
-into DNA sequences.
-
-```bash
-./dnae-encode -i 100MB.bin -o 100MB.fa
-```
-
-Then we GZIP all the FASTA files to see how much they can be compressed.
-
-```bash
-gzip -9 < 10MB.fa > 10MB.fa.gz
-```
-
-![Encode to FASTA](/assets/posts/dna-sequence/chart-speed.svg){:loading="lazy"}
-
-The speed increase that occurs when encoding to FASTA format.
-
-![File sizes](/assets/posts/dna-sequence/chart-size.svg){:loading="lazy"}
-
-Size of the output files after encoding.
-
-[Download CSV file with benchmarks](/assets/posts/dna-sequence/benchmarks.csv).
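
As a sanity check on these numbers, note that the 4x blow-up from bytes to nucleotides is mostly clawed back by gzip, since each character of the sequence only carries two bits of information. A quick sketch of the experiment in Python (illustrative, not the benchmarked toolkit):

```python
# Encode random bytes to a DNA string (4 characters per byte), then gzip it.
# The sequence is 4x the size of the input, and gzip compresses it back down
# because each character only carries two bits of information.
import gzip
import os

PAIR_TO_BASE = {"00": "A", "01": "G", "10": "C", "11": "T"}

data = os.urandom(1024)  # stand-in for 1KB.bin
sequence = "".join(
    PAIR_TO_BASE[format(byte, "08b")[i:i + 2]]
    for byte in data
    for i in range(0, 8, 2)
)

compressed = gzip.compress(sequence.encode(), compresslevel=9)
print(len(data), len(sequence), len(compressed))
```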
-
-## References
-
-- https://www.techopedia.com/definition/948/encoding
-- https://www.dna-worldwide.com/resource/160/history-dna-timeline
-- https://opentextbc.ca/biology/chapter/9-1-the-structure-of-dna/
-- https://arxiv.org/abs/1801.04774
-- https://en.wikipedia.org/wiki/FASTA_format
diff --git a/_posts/2019-10-14-simplifying-and-reducing-clutter.md b/_posts/2019-10-14-simplifying-and-reducing-clutter.md
deleted file mode 100644
index e804ecb..0000000
--- a/_posts/2019-10-14-simplifying-and-reducing-clutter.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title: Simplifying and reducing clutter in my life and work
-permalink: /simplifying-and-reducing-clutter.html
-date: 2019-10-14T12:00:00+02:00
-layout: post
-type: post
-draft: false
----
-
-I recently moved my main working machine back from Hackintosh to Linux. Well,
-the experiment was interesting and I have done some great work on macOS, but it
-was time to move back.
-
-I actually really missed Linux. The simplicity of `apt-get` alone, or just the
-amount of software that exists for Linux, makes it a no-brainer. I spent most
-of my time on macOS finding solutions to make things work. Using
-[Brew](https://brew.sh/) was just a horrible experience and far from the
-package managers on Linux. At least they managed to get that `sudo` debacle
-sorted.
-
-Not all was bad. macOS in general was a perfectly good environment. Docker and
-similar tooling worked without any hiccups. My normal tools like my coding IDE
-worked flawlessly and the whole look and feel is just superb. I have been using
-a MacBook Air for a couple of years, so I was used to the system, but never as
-a daily driver.
-
-One of the things I did after I installed Linux back on my machine was cleaning
-up my Dropbox folder. I have everything on Dropbox. Even my projects folder. I
-write code for a living, so my whole life revolves around a couple of megs of
-code (with assets). So it's not like I have huge files on my machine.
I don't have
-movies or music or pictures on my PC. All of that stuff is in the cloud. I use
-Google Music and I have a Netflix account, which is more than enough for me.
-
-I also went and deleted some of the repositories on my Github account. I have
-deleted more code than I have deployed. People find this strange, but for me
-deleting something feels so cathartic, and it also forces me to write better
-code the next time around when I am faced with a similar problem. That was a
-huge relief, if I am being totally honest.
-
-The next step was to do something with my webpage. I have been using some
-scripts I wrote a while ago to generate static pages from markdown source
-posts. I kept on adding and adding stuff on top of it and it became a source of
-frustration. And this is just a simple blog, and I was using gulp and npm.
-Anyways, after a couple of hours of searching and testing static generators I
-found an interesting one,
-[https://github.com/piranha/gostatic](https://github.com/piranha/gostatic), and
-I just decided to use it. It was the only one that had a simple templating
-engine, not that I really need one. The others had this convoluted way of
-trying to solve everything, and in the end just required a bigger learning
-curve than I was ready to take on. So I deleted a couple of old posts,
-simplified the HTML, trashed most of the CSS and went with
-[https://motherfuckingwebsite.com/](https://motherfuckingwebsite.com/)
-aesthetics. Yeah, the previous site was more visually stimulating, but all I
-really care about is the content at this point. And the Times New Roman font is
-kind of awesome.
-
-I stopped working on most of my projects in the past couple of months because
-the overhead was just too insane. There comes a point when you stretch yourself
-too thin, and then you stop progressing, and with that comes dissatisfaction.
-
-So that's about it. Moving forward, minimal style.
diff --git a/_posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md b/_posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md
deleted file mode 100644
index a1b237b..0000000
--- a/_posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md
+++ /dev/null
@@ -1,109 +0,0 @@
----
-title: Using sentiment analysis for clickbait detection in RSS feeds
-permalink: /using-sentiment-analysis-for-clickbait-detection-in-rss-feeds.html
-date: 2019-10-19T12:00:00+02:00
-layout: post
-type: post
-draft: false
----
-
-## Initial thoughts
-
-One of the things that has interested me for a while now is whether major,
-well-established news sites use clickbait titles to drive additional traffic to
-their sites and generate additional impressions.
-
-The goal is to see how article titles and the actual content of articles differ
-from each other, and whether the titles are clickbait.
-
-## Preparing and cleaning data
-
-For this example I opted to just use the RSS feed from a news website and
-decided to go with [The Guardian](https://www.theguardian.com) World news.
-While this only gets us limited data (~40 articles), and the description (the
-actual content) is trimmed so it doesn't fully reflect the actual article
-contents, it will do.
-
-To get better content I could use web scraping, using the RSS feed as a link
-list and fetching contents directly from the website, but for this simple
-example this will suffice.
-
-There are a couple of requirements we need to install before we continue:
-
-- `pip3 install feedparser` (parses RSS feed from url)
-- `pip3 install vaderSentiment` (does sentiment polarity analysis)
-- `pip3 install matplotlib` (plots chart of results)
-
-So first we need to fetch the RSS data and sanitize the HTML content from the
-description.
- -```python -import re -import feedparser - -feed_url = "https://www.theguardian.com/world/rss" -feed = feedparser.parse(feed_url) - -# sanitize html -for item in feed.entries: - item.description = re.sub('<[^<]+?>', '', item.description) -``` - -## Perform sentiment analysis - -Since we now have cleaned up data in our `feed.entries` object we can start with -performing sentiment analysis. - -There are many sentiment analysis libraries available that range from rule-based -sentiment analysis up to machine learning supported analysis. To keep things -simple I decided to use rule-based analysis library -[vaderSentiment](https://github.com/cjhutto/vaderSentiment) from -[C.J. Hutto](https://github.com/cjhutto). Really nice library and quite easy to -use. - -```python -from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer -analyser = SentimentIntensityAnalyzer() - -sentiment_results = [] -for item in feed.entries: - sentiment_title = analyser.polarity_scores(item.title) - sentiment_description = analyser.polarity_scores(item.description) - sentiment_results.append([sentiment_title['compound'], sentiment_description['compound']]) -``` - -Now that we have this data in a shape that is compatible with matplotlib we can -plot results to see the difference between title and description sentiment of an -article. - -```python -import matplotlib.pyplot as plt - -plt.rcParams['figure.figsize'] = (15, 3) -plt.plot(sentiment_results, drawstyle='steps') -plt.title('Sentiment analysis relationship between title and description (Guardian World News)') -plt.legend(['title', 'description']) -plt.show() -``` - -## Results and assets - -1. Because of the small sample size further conclusions are impossible to make. -2. Rule-based approach may not be the best way of doing this. By using deep - learning we would be able to get better insights. -3. 
**Next step would be to** periodically fetch RSS items and store them over a
-   longer period of time, then perform the analysis again and use either
-   machine learning or deep learning on top of it.
-
-![Relationship between title and description](/assets/posts/sentiment-analysis/guardian-sa-title-desc-relationship.png){:loading="lazy"}
-
-The figure above displays the difference between title and description
-sentiment for each RSS feed item. 1 means positive and -1 means negative
-sentiment.
-
-[» Download Jupyter Notebook](/assets/posts/sentiment-analysis/sentiment-analysis.ipynb)
-
-## Going further
-
-- [Twitter Sentiment Analysis by Bryan Schwierzke](https://github.com/bswiss/news_mood)
-- [AFINN-based sentiment analysis for Node.js by Andrew Sliwinski](https://github.com/thisandagain/sentiment)
-- [Sentiment Analysis with LSTMs in Tensorflow by Adit Deshpande](https://github.com/adeshpande3/LSTM-Sentiment-Analysis)
-- [Sentiment analysis on tweets using Naive Bayes, SVM, CNN, LSTM, etc. by Abdul Fatir](https://github.com/abdulfatir/twitter-sentiment-analysis)
-
diff --git a/_posts/2020-03-22-simple-sse-based-pubsub-server.md b/_posts/2020-03-22-simple-sse-based-pubsub-server.md
deleted file mode 100644
index ffb7285..0000000
--- a/_posts/2020-03-22-simple-sse-based-pubsub-server.md
+++ /dev/null
@@ -1,455 +0,0 @@
----
-title: Simple Server-Sent Events based PubSub Server
-permalink: /simple-server-sent-events-based-pubsub-server.html
-date: 2020-03-22T12:00:00+02:00
-layout: post
-type: post
-draft: false
----
-
-## Before we continue ...
-
-The Publisher Subscriber model is nothing new and there are many amazing
-solutions out there, so writing a new one would be a waste of time if existing
-solutions didn't have quite complex install procedures and weren't so hard to
-maintain. But to be fair, comparing this simple server with something like
-[Kafka](https://kafka.apache.org/) or [RabbitMQ](https://www.rabbitmq.com/) is
-laughable to say the least.
Those solutions are enterprise grade and have
-many mechanisms in place to ensure messages aren't lost, and much more.
-Regardless of these drawbacks, this method has been tested on a large website
-and has worked until now without any problems. So now that we got that cleared
-up, let's continue.
-
-***Wiki definition:** Publish/subscribe messaging, or pub/sub messaging, is a
-form of asynchronous service-to-service communication used in serverless and
-microservices architectures. In a pub/sub model, any message published to a
-topic is immediately received by all the subscribers to the topic.*
-
-## General goals
-
-- provide a simple server that relays messages to all the connected clients,
-- messages can be posted on specific topics,
-- messages get sent via [Server-Sent
-  Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
-  to all the subscribers.
-
-## How exactly does the pub/sub model work?
-
-The easiest way to explain this is with the diagram below. The basic function
-is simple. We have subscribers that receive messages, and we have publishers
-that create and post messages. A similar, well-known pattern works on the
-premise of consumers and producers, which take on similar roles.
-
-![How PubSub works](/assets/posts/simple-pubsub-server/pubsub-overview.png){:loading="lazy"}
-
-**These are some naive characteristics we want to achieve:**
-
-- a producer publishes messages to a topic,
-- a consumer receives messages from a subscribed topic,
-- the server is also known as a broker,
-- the broker does not store messages or track delivery success,
-- the broker uses the
-  [FIFO](https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)) method
-  for delivering messages,
-- if a consumer wants to receive messages from a topic, the producer and
-  consumer topics must match,
-- a consumer can subscribe to multiple topics,
-- a producer can publish to multiple topics,
-- each message has a messageId.
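
These characteristics can be sketched as a tiny in-memory broker (Python is used here purely for illustration; the actual server in this post is Node.js and delivers messages over SSE):

```python
# A tiny in-memory sketch of the broker behaviour described above:
# topics are created on demand, every subscriber of a topic receives
# every published message, and nothing is persisted.
import itertools

class Broker:
    def __init__(self):
        self.topics = {}            # topic name -> list of subscriber callbacks
        self.ids = itertools.count(1)

    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, event, message):
        message_id = next(self.ids)  # each message gets a messageId
        for callback in self.topics.get(topic, []):  # FIFO fan-out
            callback({"id": message_id, "event": event, "message": message})

broker = Broker()
received = []
broker.subscribe("sample-topic", received.append)
broker.subscribe("sample-topic", received.append)
broker.publish("sample-topic", "sample-event", {"name": "John"})
print(received)  # both subscribers got the same message
```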
-
-**Known drawbacks:**
-
-- messages are not stored in a persistent queue, and there is no
-  [DeadLetterQueue](https://en.wikipedia.org/wiki/Dead_letter_queue) for
-  unreceived messages, so old messages could be lost on a server restart,
-- [Server-Sent
-  Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
-  opens a long-running connection between the client and the server, so if your
-  setup is load balanced, make sure the load balancer can keep long-lived
-  connections open,
-- no system moderation, due to the dynamic nature of creating queues.
-
-## Server-Sent Events
-
-Read more about it on the [official specification
-page](https://html.spec.whatwg.org/multipage/server-sent-events.html).
-
-### Current browser support
-
-![Browser support](/assets/posts/simple-pubsub-server/caniuse.png){:loading="lazy"}
-
-Check
-[https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource)
-for the latest information about browser support.
-
-### Known issues
-
-- Firefox 52 and below do not support EventSource in web/shared workers
-- In Firefox prior to version 36, server-sent events do not reconnect
-  automatically in case of a connection interrupt (bug)
-- Reportedly, CORS in EventSource is currently supported in Firefox 10+, Opera
-  12+, Chrome 26+, Safari 7.0+.
-- Antivirus software may block the event streaming data chunks.
-
-Source: [https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource)
-
-### Message format
-
-The simplest message that can be sent contains only the data attribute:
-
-```bash
-data: this is a simple message
-
-```
-
-You can send message IDs, which are used if the connection is dropped:
-
-```bash
-id: 33
-data: this is line one
-data: this is line two
-
-```
-
-And you can specify your own event types (the above messages will all trigger
-the message event):
-
-```bash
-id: 36
-event: price
-data: 103.34
-
-```
-
-### Server requirements
-
-The important thing is which headers are sent by the server, since they are
-what triggers the browser to treat the response as an EventStream.
-
-The headers responsible for this are:
-
-```bash
-Content-Type: text/event-stream
-Cache-Control: no-cache
-Connection: keep-alive
-```
-
-### Debugging with Google Chrome
-
-Google Chrome provides a built-in debugging and exploration tool for [Server-Sent
-Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
-which is quite nice and available from Developer Tools under the Network tab.
-
-> You can only debug client side events that get received, not the server
-> ones. For debugging server events add `console.log` to the `server.js` code
-> and print out the events.
-
-![Google Chrome Developer Tools EventStream](/assets/posts/simple-pubsub-server/chrome-debugging.png){:loading="lazy"}
-
-## Server implementation
-
-For the sake of this example we will use [Node.js](https://nodejs.org/en/) with
-[Express](https://expressjs.com) as our router, since this is the easiest way
-to get started, and we will use an already written SSE library for Node,
-[sse-pubsub](https://www.npmjs.com/package/sse-pubsub), so we don't reinvent
-the wheel.
- -```bash -npm init --yes - -npm install express -npm install body-parser -npm install sse-pubsub -``` - -Basic implementation of a server (`server.js`): - -```js -const express = require('express'); -const bodyParser = require('body-parser'); -const SSETopic = require('sse-pubsub'); - -const app = express(); -const port = process.env.PORT || 4000; - -// topics container -const sseTopics = {}; - -app.use(bodyParser.json()); - -// open for all cors -app.all('*', (req, res, next) => { - res.header('Access-Control-Allow-Origin', '*'); - res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type'); - next(); -}); - -// preflight request error fix -app.options('*', async (req, res) => { - res.header('Access-Control-Allow-Origin', '*'); - res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type'); - res.send('OK'); -}); - -// serve the event streams -app.get('/stream/:topic', async (req, res, next) => { - const topic = req.params.topic; - - if (!(topic in sseTopics)) { - sseTopics[topic] = new SSETopic({ - pingInterval: 0, - maxStreamDuration: 15000, - }); - } - - // subscribing client to topic - sseTopics[topic].subscribe(req, res); -}); - -// accepts new messages into topic -app.post('/publish', async (req, res) => { - let body = req.body; - let status = 200; - - console.log('Incoming message:', req.body); - - if ( - body.hasOwnProperty('topic') && - body.hasOwnProperty('event') && - body.hasOwnProperty('message') - ) { - const topic = req.body.topic; - const event = req.body.event; - const message = req.body.message; - - if (topic in sseTopics) { - // sends message to all the subscribers - sseTopics[topic].publish(message, event); - } - } else { - status = 400; - } - - res.status(status).send({ - status, - }); -}); - -// returns JSON object of all opened topics -app.get('/status', async (req, res) => { - res.send(sseTopics); -}); - -// health-check endpoint -app.get('/', async (req, res) => { - res.send('OK'); -}); - -// return a 404 
if no routes match
-app.use((req, res, next) => {
-  res.set('Cache-Control', 'private, no-store');
-  res.status(404).end('Not found');
-});
-
-// starts the server
-app.listen(port, () => {
-  console.log(`PubSub server running on http://localhost:${port}`);
-});
-```
-
-### Our custom message format
-
-Each message posted to the server must be in a specific format that our server
-accepts. Having a structure like this allows us to have multiple separate types
-of events on each topic.
-
-With this we can separate streams and only receive events that belong to the
-topic.
-
-One example would be that we have an index page and we want to receive messages
-about new upvotes or new subscribers, but we don't want to follow events for
-other pages. This reduces clutter and overall network traffic. And the
-structure is much nicer and maintainable.
-
-```json
-{
-  "topic": "sample-topic",
-  "event": "sample-event",
-  "message": { "name": "John" }
-}
-```
-
-## Publisher and subscriber clients
-
-### Publisher and subscriber in action
-
-You can download [the code](../simple-pubsub-server/sse-pubsub-server.zip) and
-follow along.
-
-### Publisher
-
-As talked about above, the publisher is the one that sends messages to the
-broker/server. The message inside the payload can be whatever you want (string,
-object, array). I would however personally avoid sending large chunks of data
-like blobs and such. The page below is a minimal publisher that posts our
-custom message format to the `/publish` endpoint.
-
-```html
-<!DOCTYPE html>
-<html>
-<head>
-  <meta charset="utf-8">
-  <title>Publisher</title>
-</head>
-<body>
-  <h1>Publisher</h1>
-
-  <form id="publisher">
-    <div>
-      <label for="topic">Topic</label>
-      <input type="text" id="topic" value="sample-topic">
-    </div>
-    <div>
-      <label for="event">Event</label>
-      <input type="text" id="event" value="sample-event">
-    </div>
-    <div>
-      <label for="message">Message (JSON)</label>
-      <input type="text" id="message" value='{ "name": "John" }'>
-    </div>
-    <button type="submit">Publish</button>
-  </form>
-
-  <script>
-    // Posts the message to the broker in our custom format.
-    document.querySelector('#publisher').addEventListener('submit', (evt) => {
-      evt.preventDefault();
-      fetch('http://localhost:4000/publish', {
-        method: 'POST',
-        headers: { 'Content-Type': 'application/json' },
-        body: JSON.stringify({
-          topic: document.querySelector('#topic').value,
-          event: document.querySelector('#event').value,
-          message: JSON.parse(document.querySelector('#message').value),
-        }),
-      });
-    });
-  </script>
-</body>
-</html>
-```
-
-### Subscriber
-
-The subscriber is responsible for receiving new messages that come from the
-server via the publisher. The code below is very rudimentary but works and
-follows the implementation guidelines for EventSource.
-
-You can use either the Developer Tools Console to see incoming messages or you
-can refer to the Debugging with Google Chrome section above to see all
-EventStream messages.
-
-> Don't be alarmed if the subscriber gets disconnected from the server every so
-> often. The code we have here resets the connection every 15s, but it
-> automatically gets reconnected and fetches all messages up to the last
-> received message id. This setting can be adjusted in the `server.js` file;
-> search for the `maxStreamDuration` variable.
-
-```html
-<!DOCTYPE html>
-<html>
-<head>
-  <meta charset="utf-8">
-  <title>Subscriber</title>
-</head>
-<body>
-  <h1>Subscriber</h1>
-
-  <div>
-    <label for="topic">Topic</label>
-    <input type="text" id="topic" value="sample-topic">
-    <button id="subscribe">Subscribe</button>
-  </div>
-
-  <ul id="messages"></ul>
-
-  <script>
-    // Opens an EventSource connection to the broker for the chosen topic
-    // and appends incoming events to the list.
-    document.querySelector('#subscribe').addEventListener('click', () => {
-      const topic = document.querySelector('#topic').value;
-      const source = new EventSource(`http://localhost:4000/stream/${topic}`);
-
-      // Custom events arrive under the event name used by the publisher.
-      source.addEventListener('sample-event', (evt) => {
-        console.log('Received:', evt.data);
-        const item = document.createElement('li');
-        item.textContent = evt.data;
-        document.querySelector('#messages').appendChild(item);
-      });
-    });
-  </script>
-</body>
-</html>
-```
-
-## Reading further
-
-- [Using server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
-- [Using SSE Instead Of WebSockets For Unidirectional Data Flow Over HTTP/2](https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/)
-- [What is Server-Sent Events?](https://apifriends.com/api-streaming/server-sent-events/)
-- [An HTTP/2 extension for bidirectional messaging communication](https://tools.ietf.org/id/draft-xie-bidirectional-messaging-01.html)
-- [Introduction to HTTP/2](https://developers.google.com/web/fundamentals/performance/http2)
-- [The WebSocket API (WebSockets)](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API)
-
diff --git a/_posts/2020-03-27-create-placeholder-images-with-sharp.md b/_posts/2020-03-27-create-placeholder-images-with-sharp.md
deleted file mode 100644
index c129396..0000000
--- a/_posts/2020-03-27-create-placeholder-images-with-sharp.md
+++ /dev/null
@@ -1,103 +0,0 @@
----
-title: Create placeholder images with sharp Node.js image processing library
-permalink: /create-placeholder-images-with-sharp.html
-date: 2020-03-27T12:00:00+02:00
-layout: post
-type: post
-draft: false
----
-
-I have been searching for a solution to pre-generate some placeholder images
-for an image server I needed to develop that resizes images on S3. I thought
-this would be a 15min job and quickly found out how very mistaken I was.
-
-Even though Node.js is not really the best way to do these kinds of things
-(surely something written in C or Rust or even Golang would be the correct way
-to do this, but we didn't need the speed in our case), I found an excellent
-library, [sharp - High performance Node.js image
-processing](https://github.com/lovell/sharp).
-
-Getting things running was a breeze.
-
-## Fetch image from S3 and save resized
-
-```js
-const sharp = require('sharp');
-const aws = require('aws-sdk');
-
-const x = 100;
-const y = 100;
-
-aws.config.update({
-  secretAccessKey: 'secretAccessKey',
-  accessKeyId: 'accessKeyId',
-  region: 'region'
-});
-
-const s3 = new aws.S3({});
-
-// await is only valid inside an async function, so the calls are wrapped.
-const resizeImage = async () => {
-  const originalImage = await s3.getObject({
-    Bucket: 'some-bucket-name',
-    Key: 'image.jpg',
-  }).promise();
-
-  const resizedImage = await sharp(originalImage.Body)
-    .resize(x, y)
-    .jpeg({ progressive: true })
-    .toBuffer();
-
-  await s3.putObject({
-    Bucket: 'some-bucket-name',
-    Key: `optimized/${x}x${y}/image.jpg`,
-    Body: resizedImage,
-    ContentType: 'image/jpeg',
-    ACL: 'public-read'
-  }).promise();
-};
-```
-
-All this code was wrapped inside a web service with some additional security
-checks and defensive coding to detect if a key is missing on S3.
-
-And at that point I needed to return placeholder images as a response in case
-the key is missing or x,y are not allowed by the server, etc. I could have
-created PNGs in Gimp and just served them, but I wanted to respect the aspect
-ratio and I didn't want to return some mangled images.
-
-> The main problem was finding a clean solution I could copy and paste and
-> change a bit. The API is changing constantly and there weren't clear
-> examples, or I was unable to find them.
-
-## Generating placeholder images using SVG
-
-What I ended up doing was using SVG to generate the text, creating an image
-with sharp, and using composition to combine both layers. The response returned
-by this function is a buffer you can use to either upload to S3 or save to a
-local file.
-
-```js
-const generatePlaceholderImageWithText = async (width, height, message) => {
-  // The overlay is plain SVG markup; adapt the text styling to your needs.
-  const overlay = `<svg width="${width}" height="${height}">
-    <text x="50%" y="50%" text-anchor="middle" dominant-baseline="middle"
-      font-family="sans-serif" font-size="16" fill="#666">${message}</text>
-  </svg>`;
-
-  return await sharp({
-    create: {
-      width: width,
-      height: height,
-      channels: 4,
-      background: { r: 230, g: 230, b: 230, alpha: 1 }
-    }
-  })
-    .composite([{
-      input: Buffer.from(overlay),
-      gravity: 'center',
-    }])
-    .jpeg()
-    .toBuffer();
-}
-```
-
-That is about it.
Nothing more to it. You can change the color of the image by
-changing `background`, and if you want to change the text styling you can
-adapt the SVG to your needs.
-
-> Also be careful about the length of the text. This function positions the
-> text at the center and adds `20px` of padding on all sides. If the text is
-> longer than the image it will get cut off.
diff --git a/_posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md b/_posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md
deleted file mode 100644
index 1aa3536..0000000
--- a/_posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md
+++ /dev/null
@@ -1,109 +0,0 @@
----
-title: The strange case of Elasticsearch allocation failure
-permalink: /the-strange-case-of-elasticsearch-allocation-failure.html
-date: 2020-03-29T12:00:00+02:00
-layout: post
-type: post
-draft: false
----
-
-I've been using Elasticsearch in production for 5 years now and never had a
-single problem with it. Hell, I never even knew there could be a problem. It
-just worked. All this time. The first node that I deployed is still being used
-in production, never updated, upgraded, or touched in any way.
-
-All this bliss came to an abrupt end this Friday when I got a notification that
-the Elasticsearch cluster went warm. Well, warm is not that bad, right? Wrong!
-Quickly after that I got another email which sent chills down my spine. The
-cluster was now red. RED! Now, shit really hit the fan!
-
-I tried googling what could be the problem, and after executing the allocation
-query I noticed that some shards were unassigned and 5 attempts had already
-been made (which is, BTW, to my luck the maximum), and that meant I was
-basically fucked. They also implied that one should wait for the cluster to
-re-balance itself. So, I waited. One hour, two hours, several hours. Nothing,
-still RED.
-
-The strangest thing about it all was that queries were still being fulfilled.
-Data was coming out.
On the outside it looked like nothing was wrong, but
anybody who looked at the cluster would know immediately that something
was very, very wrong and that we were living on borrowed time.

> **Please, DO NOT do what I did.** Seriously! Please ask someone on the
> official forums, or if you know an expert, consult them. There could be a
> million reasons, and this solution fit my problem. Maybe in your case it would
> be disastrous. I had all the data backed up, and even if I failed
> spectacularly I would be able to restore it. It would be a huge pain and I
> would lose a couple of days, but I had a plan B.

Executing the allocation call told me what the problem was, but offered no clear
solution yet.

```yaml
GET /_cat/allocation?format=json
```

I got a message that `ALLOCATION_FAILED` with additional info `failed to create
shard, failure ioexception[failed to obtain in-memory shard lock]`. Well,
splendid! I must also say that our cluster is more than capable enough to handle
the traffic. JVM memory pressure was never an issue either. So what really
happened then?

I also tried re-routing the failed shards, with no success due to AWS
restrictions on managed Elasticsearch clusters (they lock some of the
functions).

```yaml
POST /_cluster/reroute?retry_failed=true
```

I got a message that significantly reduced my options.

```json
{
  "Message": "Your request: '/_cluster/reroute' is not allowed."
}
```

After that I went on a hunt again. I won't bother you with all the details,
because hours/days went by until I was finally able to re-index the problematic
index and hope for the best. Until that moment even re-indexing was giving me
errors.

```yaml
POST _reindex
{
  "source": {
    "index": "myindex"
  },
  "dest": {
    "index": "myindex-new"
  }
}
```

I needed to do this multiple times to get all the documents re-indexed. Then I
dropped the original one with the following command.
```yaml
DELETE /myindex
```

And then re-indexed the new one back into the original one (well, by name only).

```yaml
POST _reindex
{
  "source": {
    "index": "myindex-new"
  },
  "dest": {
    "index": "myindex"
  }
}
```

On the surface it looks like everything is working, but I have a long road ahead
of me to get all the things working again. The cluster now shows that it is in
Green mode, but I am also getting a notification that the cluster has processing
status, which could mean a million things.

Godspeed!

diff --git a/_posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md b/_posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md
deleted file mode 100644
index 0299d9d..0000000
--- a/_posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md
+++ /dev/null
@@ -1,112 +0,0 @@
---
title: My love and hate relationship with Node.js
permalink: /my-love-and-hate-relationship-with-nodejs.html
date: 2020-03-30T12:00:00+02:00
layout: post
type: post
draft: false
---

The previous project I was working on was coded in
[Golang](https://golang.org/). It was also my first project using it. And damn,
that was an awesome experience. The whole thing is just superb. From how errors
are handled, to the C-like way you handle compiling, to the way the language is
structured, making it incredibly versatile and easy to learn.

It may cause some pain for somebody who is not used to mapping JSON with
interfaces and recompiling all the time. But we have tools like
[entr](http://eradman.com/entrproject/) and
[make](https://www.gnu.org/software/make/) to fix that.

But we are not here to talk about my undying love for **Golang**. Although in
some way we probably should. It is an excellent example of how a modern language
should be designed. And because I have used it extensively in the last couple of
years, this probably taints my views of other languages. And is doing me a great
disservice. Nevertheless, here we are.
About two years ago I started flirting with [Node.js](https://nodejs.org/en/)
for a project I started working on. What I wanted was to have things written in
a language that is widely used and that we could get additional developers for.
As amazing as **Golang** is, it's really hard to get developers for it. Even
now. And after playing around with it for a week I fell in love with the speed
of iteration and the massive package ecosystem. Do you want SSO? You got it! Do
you want some esoteric library for something? There is a strong chance somebody
wrote it. It is so extensive that you find yourself evaluating packages based on
**GitHub stars** and the number of contributors. You get swallowed by the vanity
metrics, and that will potentially become the downfall of Node.js.

Because of the sheer amount of choice I often got anxiety when choosing
libraries. Will I choose the correct one? Is this library something that will be
supported for the foreseeable future or not? I am used to libraries that have
been in development for 10-plus years (Python, C), and that gave me some sort of
comfort. And it is probably unfair to Node.js and its community to expect the
same dedication.

Moving forward... Work started and things were great. **The speed of iteration
was insane**. A feature that would take me a day in Golang only took an hour or
two. I became lazy! Using packages all over the place. Falling into the same
trap as others. Packages on top of packages. And [npm](https://www.npmjs.com/)
didn't help at all. The way the package manager works is just horrendous. And
not allowing node_modules outside the project is also the stupidest idea ever.

So at that point I started feeling the technical debt that comes with Node.js
and the whole ecosystem. What nobody tells you is that **structuring large
Node.js apps** is more problematic than one would think. And going microservice
for every single thing is also a bad idea.
The amount of networking you
introduce with that approach always ends up being a pain in the ass. And I don't
even want to go into system administration here. The overhead is insane.
Package-lock.json made many days feel like living hell for me. And I would eat
the cost of all this if it meant a better development experience. Well, it
didn't.

The **lack of TypeScript** support in the interpreter is still mind-boggling to
me. Why they haven't added native support for this yet is beyond me! That would
have solved so many problems. The lack of type safety became a problem somewhere
in the middle of the project, where the codebase was large enough to present
problems. We kept adding arguments to functions and there was **no way to
explicitly define argument types**. And because at that point there were a lot
of functions, it became impossible to know what each one accepts, and
development became more and more trial-and-error based.

I tried **implementing TypeScript**, but that would have meant a large refactor
that we were not willing to do at that point. The benefits were not enough. I
also tried [Flow - static type checker](https://flow.org/), but the
implementation was also horrible. What TypeScript and Flow force you to do is
have a src folder and then **transpile** your code into a dist folder and run it
with node. WTH is that all about? Why can't this be done in memory or in some
virtual file system? Why? I see no reason why this couldn't be done like that.
But it is what it is. I abandoned all hope for static type checking.

One of the problems that resulted from not having interfaces or types was the
inability to model out our data from **Elasticsearch**. I could have done a
**pedestrian implementation** of it, but there must be a better way of doing
this without resorting to what is basically a hack. Or maybe I haven't found a
solution, which is also a possibility. I have looked, though. No juice!

**Error handling?** Is that a joke?
Thank god for **await/async**. Without it, I would have probably just abandoned
the whole thing and gone with something else like Python. That's all I am going
to say about this :)

I started asking myself whether Node.js is actually ready to be used in
**large-scale applications**. And this was totally the wrong question. What I
should have been asking myself was how to use Node.js in a large-scale
application. And you don't get this in the **marketing material** for Express or
Koa etc. They never tell you this. Making Node.js scale, on infrastructure or in
the codebase, is really **more of an art than a science**. And just like with
the whole JavaScript ecosystem:

- impossible to master,
- half of your time you work on your tooling,
- just accept transpilers that convert one code into another (holy smokes),
- error handling is a joke,
- standards? What standards?

But on the other hand, as I did, you will also learn to love it. Learn to use it
quickly and do impossible things in crazy limited time.

I hate to admit it. But I love Node.js. Dammit, I love it :)

**2023 Update**: I hate Node.js!

diff --git a/_posts/2020-05-05-remote-work.md b/_posts/2020-05-05-remote-work.md
deleted file mode 100644
index 8eb75d2..0000000
--- a/_posts/2020-05-05-remote-work.md
+++ /dev/null
@@ -1,73 +0,0 @@
---
title: Remote work and how it affects the daily lives of people
permalink: /remote-work.html
date: 2020-05-05T12:00:00+02:00
layout: post
type: post
draft: false
---

I have been working remotely for the past 5 years. I love it. I love the freedom
and the make-your-own-schedule thing.

## You work more not less

I've heard things from people like: "Oh, you are so lucky, working from home,
having all the free time you want". It was obvious they had no clue what working
remotely means. They had this romantic idea of remote work. You can watch TV
whenever you like, you can go outside for a picnic if you want, and stuff like
that.
This may be true if you work a day or two a week from home. But if you go
completely remote, all of this changes completely. It takes some time to
acclimate, but then you start feeling the consequences of going fully remote.
And it's not all rainbows and unicorns. Rather the opposite.

## Feeling lost

At first, I remember, I felt lost. I was not used to this kind of environment.
I felt disoriented, and the part of you that is used to procrastinating turns
on. You start thinking of a workday as a whole day. And soon this idea of "I can
do this later" starts creeping in. Well, I have the whole day ahead of me. I can
do this a bit later.

## Hyper-performance

As a direct result, you become more focused on your work since you don't have
all the interruptions common in the workplace. And you can quickly get used to
this hyper-performance. But this mode also requires a lot of peace and quiet.

And here we come to the ugly parts of all this. **People rarely have the
self-control** not to waste other people's time. It is paralyzing when people
start calling you, sending you chat messages, etc. The thing is, when I achieve
this hyper-performance mode I am completely embroiled in the problem I am
solving, and these kinds of interruptions mess with your head. I need at least
an hour to get back in the zone, sometimes not achieving the same focus for the
whole day.

I know that life is not how you want it to be and takes its own route, but from
what I've learned, these kinds of interruptions can easily be avoided in 90% of
cases just by closing any chat programs and putting your phone in a drawer.

## Suggestion to all the new remote workers

- Stop wasting other people's time. You don't bother people at their desks in
  the office either.
- Do not replace daily chats in the hallways with instant messaging software.
  It will only interrupt people. Nothing good will come of it.
- Set your working hours, try not to let work bleed outside these boundaries,
  and maintain your routine.
- Be prepared that hours will be longer regardless of your good intentions and
  your well-thought-out routine.
- Try to be hyper-focused and do only one thing at a time. Multitasking is the
  enemy of progress.
- Avoid long meetings and, if possible, eliminate them. Rather take the time to
  write things out and allow others to respond in their own time. Meetings are
  usually a large waste of time and most of the people attending them are there
  just because the manager said so.
- Software will not solve your problems. Neither will throwing money at them.
- If you are in a managerial position, don't supervise workers' every single
  minute. They are probably giving you more hours anyway. Track progress
  weekly, not daily. You hired them, so give them the benefit of the doubt that
  they will deliver what you agreed upon.

diff --git a/_posts/2020-08-15-systemd-disable-wake-onmouse.md b/_posts/2020-08-15-systemd-disable-wake-onmouse.md
deleted file mode 100644
index 8122322..0000000
--- a/_posts/2020-08-15-systemd-disable-wake-onmouse.md
+++ /dev/null
@@ -1,74 +0,0 @@
---
title: Disable mouse wake from suspend with systemd service
permalink: /disable-mouse-wake-from-suspend-with-systemd-service.html
date: 2020-08-15T12:00:00+02:00
layout: post
type: post
draft: false
---

I recently bought a [ThinkPad
X220](https://www.laptopmag.com/reviews/laptops/lenovo-thinkpad-x220) on eBay,
just as a joke, to test Linux distributions on and play around with things
without destroying my main machine. Little did I know I would fall in love with
it. Man, they really made awesome machines back then.

After swapping the disk that came with it for an SSD and installing Ubuntu to
test if everything works, I noticed that even a single touch of my external
mouse would wake the system up from sleep, even though the lid was shut.
I wouldn't even have noticed it if the laptop didn't have an [LED
sleep indicator](https://support.lenovo.com/lk/en/solutions/~/media/Images/ContentImages/p/pd025386_x1_status_03.ashx?w=426&h=262).
I already had a bad experience with Linux and its power management. I had a
[Dell Inspiron 7537](https://www.pcmag.com/reviews/dell-inspiron-15-7537) laptop
with a touchscreen, and while I was traveling it decided to wake up and started
cooking in my backpack, to the point that the digitizer responsible for touch
actually came unglued and the whole screen got wrecked. So, I am a bit touchy
about this.

I went solution hunting and, to my surprise, there is no easy way to prevent
specific devices from waking the machine. Why this is not under the power
management tab in settings is really strange.

After googling for a solution I found [this nice article describing the
solution](https://codetrips.com/2020/03/18/ubuntu-disable-mouse-wake-from-suspend/)
that worked for me. The only problem with it was that the author added his
solution to `.bashrc`, and this triggers `sudo`, which asks for a password each
time a new terminal is opened. That gets annoying quickly, since I open a lot of
terminals all the time.

I followed his instructions and got to the solution `sudo sh -c "echo 'disabled' >
/sys/bus/usb/devices/2-1.1/power/wakeup"`.

I created a system service file with `sudo nano
/etc/systemd/system/disable-mouse-wakeup.service`, removed `sudo`, replaced `sh`
with `/usr/bin/sh` and pasted all that into `ExecStart`.

```ini
[Unit]
Description=Disables wakeup on mouse event
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/usr/bin/sh -c "echo 'disabled' > /sys/bus/usb/devices/2-1.1/power/wakeup"

[Install]
WantedBy=multi-user.target
```

After that I enabled, started and checked the status of the service.
```sh
sudo systemctl enable disable-mouse-wakeup.service
sudo systemctl start disable-mouse-wakeup.service
sudo systemctl status disable-mouse-wakeup.service
```

This service runs at every boot and permanently prevents that device from waking
up your computer. If you have many devices you would like to suppress from
waking up your machine, I would create a shell script and call that instead of
doing it directly in the service file.

diff --git a/_posts/2020-09-06-esp-and-micropython.md b/_posts/2020-09-06-esp-and-micropython.md
deleted file mode 100644
index bfd05d9..0000000
--- a/_posts/2020-09-06-esp-and-micropython.md
+++ /dev/null
@@ -1,226 +0,0 @@
---
title: Getting started with MicroPython and ESP8266
permalink: /esp8266-and-micropython-guide.html
date: 2020-09-06T12:00:00+02:00
layout: post
type: post
draft: false
---

## Introduction

A while ago I bought some
[ESP8266](https://www.espressif.com/en/products/socs/esp8266) and
[ESP32](https://www.espressif.com/en/products/socs/esp32) dev boards to play
around with, and I finally found a project to try them out on.

For my project I used an
[ESP32](https://www.espressif.com/en/products/socs/esp32), but I could just as
easily have chosen an
[ESP8266](https://www.espressif.com/en/products/socs/esp8266). This guide
covers which tools I use and how I prepared my workspace to code for the
[ESP8266](https://www.espressif.com/en/products/socs/esp8266).

![ESP8266 and ESP32 boards](/assets/posts/esp8366-micropython/boards.jpg){:loading="lazy"}

This guide covers:

- flashing the SOC
- installing proper tooling
- deploying a simple script

> Make sure that you are using **a good USB cable**. I had some problems with
> mine, and once I replaced it everything started to work.

## Flashing the SOC

Plug your ESP8266 into a USB port and check if the device was recognized by
executing `dmesg | grep ch341-uart`.

Then check if the device is available under `/dev/` by running `ls
/dev/ttyUSB*`.
> **Linux users**: if the device is not available, be sure you are in the
> `dialout` group. You can check this by executing `groups $USER`. You can add a
> user to the `dialout` group with `sudo adduser $USER dialout`.

After these conditions are met, navigate to
[https://micropython.org/download/esp8266/](https://micropython.org/download/esp8266/)
and download `esp8266-20200902-v1.13.bin`.

```sh
mkdir esp8266-test
cd esp8266-test

wget https://micropython.org/resources/firmware/esp8266-20200902-v1.13.bin
```

After obtaining the firmware we need some tooling to flash it to the board.

```sh
sudo pip3 install esptool
```

You can read more about `esptool` at
[https://github.com/espressif/esptool/](https://github.com/espressif/esptool/).

Before flashing the firmware we need to erase the flash on the device.
Substitute `USB0` with the device listed in the output of `ls /dev/ttyUSB*`.

```sh
esptool.py --port /dev/ttyUSB0 erase_flash
```

If the flash was successfully erased, it is now time to write the new firmware.

```sh
esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash --flash_size=detect 0 esp8266-20200902-v1.13.bin
```

If everything went OK you can try accessing the MicroPython REPL with `screen
/dev/ttyUSB0 115200` or `picocom /dev/ttyUSB0 -b115200`.

> Sometimes you will need to press `ENTER` in `screen` or `picocom` to access
> the REPL.

When you are in the REPL you can test if all is working properly with the
following steps.

```py
> import machine
> machine.freq()
```

This should output a number representing the frequency of the CPU (mine was
`80000000`).

When you are in `screen` or `picocom` these can help you a bit.
| Key      | Command              |
| -------- | -------------------- |
| CTRL+d   | performs soft reboot |
| CTRL+a x | exits picocom        |
| CTRL+a \ | exits screen         |


## Install better tooling

Now, to make our lives a little bit easier, there are a couple of additional
tools that will make this whole experience a little more bearable.

There are two cool ways of uploading local files to the SOC flash.

- ampy → [https://github.com/scientifichackers/ampy](https://github.com/scientifichackers/ampy)
- rshell → [https://github.com/dhylands/rshell](https://github.com/dhylands/rshell)

### ampy

```bash
# installing ampy
sudo pip3 install adafruit-ampy
```

Listed below are some common commands I used.

```bash
# uploads file to flash
ampy --delay 2 --port /dev/ttyUSB0 put boot.py

# lists files on flash
ampy --delay 2 --port /dev/ttyUSB0 ls

# outputs contents of a file on flash
ampy --delay 2 --port /dev/ttyUSB0 cat boot.py
```

> I added a `delay` of 2 seconds because I had problems with executing commands.

### rshell

Even though `ampy` is a cool tool, I opted for `rshell` in the end since it's
much more polished and feature-rich.

```bash
# installing rshell
sudo pip3 install rshell
```

Now that `rshell` is installed we can connect to the board.

```bash
rshell --buffer-size=30 -p /dev/ttyUSB0 -a
```

This will open a shell inside bash, and from here you can execute multiple
commands. You can check what is supported with `help` once you are inside the
shell.

```bash
m@turing ~/Junk/esp8266-test
$ rshell --buffer-size=30 -p /dev/ttyUSB0 -a

Using buffer-size of 30
Connecting to /dev/ttyUSB0 (buffer-size 30)...
Trying to connect to REPL  connected
Testing if ubinascii.unhexlify exists ... Y
Retrieving root directories ... /boot.py/
Setting time ... Sep 06, 2020 23:54:28
Evaluating board_name ... pyboard
Retrieving time epoch ... Jan 01, 2000
Welcome to rshell. Use Control-D (or the exit command) to exit rshell.
/home/m/Junk/esp8266-test> help

Documented commands (type help <topic>):
========================================
args   cat  connect  date  edit  filesize  help  mkdir  rm    shell
boards cd   cp       echo  exit  filetype  ls    repl   rsync

Use Control-D (or the exit command) to exit rshell.
```

> Inside the shell, `ls` will display the list of files on your machine. The
> flash folder is remapped to `/pyboard` inside the shell, so to list the files
> on flash you must run `ls /pyboard`.

#### Moving files to flash

To avoid copying files one by one, I used the `rsync` function from inside
`rshell`.

```bash
rsync . /pyboard
```

#### Executing scripts

It is a pain to continuously reboot the device to trigger `/pyboard/boot.py`,
and there is a better way of testing local scripts on the remote device.

Let's assume we have a `src/freq.py` file that displays the CPU frequency of the
remote device.

```py
# src/freq.py

import machine
print(machine.freq())
```

Now let's upload this and execute it.

```bash
# syncs files to the remote device
rsync ./src /pyboard

# goes into the REPL
repl

# importing the file without the .py extension runs the script
> import freq

# CTRL+x will exit the REPL
```

## Additional resources

- https://randomnerdtutorials.com/getting-started-micropython-esp32-esp8266/
- http://docs.micropython.org/en/latest/esp8266/quickref.html

diff --git a/_posts/2020-09-08-bind-warning-on-login.md b/_posts/2020-09-08-bind-warning-on-login.md
deleted file mode 100644
index 4b2c983..0000000
--- a/_posts/2020-09-08-bind-warning-on-login.md
+++ /dev/null
@@ -1,55 +0,0 @@
---
title: Fix bind warning in .profile on login in Ubuntu
permalink: /bind-warning-on-login-in-ubuntu.html
date: 2020-09-08T12:00:00+02:00
layout: post
type: post
draft: false
---

Recently I moved back to [bash](https://www.gnu.org/software/bash/) as my
default shell.
I was previously using [fish](https://fishshell.com/) and got
used to the cool features it has. But regardless of that, I wanted to move to a
more standard shell, because I was hopping back and forth with exporting
variables and things like that, which got pretty annoying.

So I embarked on a mission to make [bash](https://www.gnu.org/software/bash/)
more like [fish](https://fishshell.com/), and in the process found that I really
missed autosuggestions with TAB when changing directories.

I found a nice alternative that emulates [zsh](http://zsh.sourceforge.net/)-like
autosuggestion and autocomplete, so I added the following to my `.bashrc` file.

```bash
bind "TAB:menu-complete"
bind "set show-all-if-ambiguous on"
bind "set completion-ignore-case on"
bind "set menu-complete-display-prefix on"
bind '"\e[Z":menu-complete-backward'
```

I hadn't noticed anything wrong with this, and all was working fine until I
restarted my machine, and then I got this error.

![Profile bind error](/assets/posts/profile-bind-error/error.jpg){:loading="lazy"}

When I pressed OK, I got into the [Gnome
shell](https://wiki.gnome.org/Projects/GnomeShell) and all was working fine, but
the error was still bugging me. I started looking for the reason why this was
happening and found a solution on [Remote SSH Commands - bash bind
warning: line editing not enabled](https://superuser.com/a/892682).

So I added a simple `if [ -t 1 ]` around the `bind` statements to avoid running
commands that presume the session is interactive when it isn't.

```bash
if [ -t 1 ]; then
  bind "TAB:menu-complete"
  bind "set show-all-if-ambiguous on"
  bind "set completion-ignore-case on"
  bind "set menu-complete-display-prefix on"
  bind '"\e[Z":menu-complete-backward'
fi
```

After logging out and back in, the problem was gone.
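The guard works because `[ -t 1 ]` checks whether stdout is attached to a
terminal. A quick sanity check of that behavior, by forcing stdout through a
pipe:

```shell
# When stdout is piped, [ -t 1 ] is false, so the bind calls would be
# skipped instead of printing "line editing not enabled" warnings.
sh -c 'if [ -t 1 ]; then echo "terminal: binds would run"; else echo "no terminal: binds skipped"; fi' | cat
# prints "no terminal: binds skipped"
```

The same check is why the error only appeared at login: `.profile` is sourced in
a non-interactive context where no terminal is attached.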
diff --git a/_posts/2020-09-09-digitalocean-sync.md b/_posts/2020-09-09-digitalocean-sync.md
deleted file mode 100644
index 38696a9..0000000
--- a/_posts/2020-09-09-digitalocean-sync.md
+++ /dev/null
@@ -1,113 +0,0 @@
---
title: Using Digitalocean Spaces to sync between computers
permalink: /digitalocean-spaces-to-sync-between-computers.html
date: 2020-09-09T12:00:00+02:00
layout: post
type: post
draft: false
---

I've been using [Dropbox](https://www.dropbox.com/) for probably **10+ years**
now, and I've become so used to it running in the background that I can't even
imagine a world without it. But it's not without problems.

At first I had problems with `.venv` environments for Python, and the only way
to exclude this folder from synchronization was to manually exclude each
specific folder, which is not really scalable. FYI, my whole project folder is
synced on [Dropbox](https://www.dropbox.com/). This of course introduced a lot
of syncing of files and folders that are not needed or that even break things on
other machines. In the case of **Python**, I couldn't use the environment on my
second machine. I needed to delete the `.venv` folder and pip it again, which
synced the files back to the main machine. This was very frustrating. **Nodejs**
handles this much more nicely, and I can just run the scripts without deleting
`node_modules` and reinstalling. However, `node_modules` is a beast of its own.
It creates so many files that the OS has a problem counting them when you check
the folder contents for size.

I wanted something similar to Dropbox. I could do without the instant syncing,
but it would need to be fast and have the option to exclude folders like
`node_modules`, `.venv`, `.git` and so on.

I went on a hunt for an alternative to [Dropbox](https://www.dropbox.com/)
and found:

- [Tresorit](https://tresorit.com/)
- [Sync.com](https://sync.com)
- [Box](https://www.box.com/)

You know, the usual list of suspects.
I didn't include [Google
Drive](https://drive.google.com) or [OneDrive](https://onedrive.live.com/),
since they are even more draconian than Dropbox.

> All this does not stem from me being paranoid, but recently these companies
> have become more and more aggressive, and they keep violating our privacy by
> sharing our data with 3rd-party services. It is getting out of control.

So, my main problem was still there. No way of excluding a specific folder from
syncing. And before we go into "*But you have git, isn't that enough?*", I must
say that many of the files (PDFs, spreadsheets, etc.) I have in a `git` repo
don't get pushed upstream, and I still want to have them synced across my
computers.

I initially wanted to use [rsync](https://linux.die.net/man/1/rsync), but I
would then need a remote VPS or to transfer between my computers directly. I
wanted a solution where all my files would be accessible to me without my
machine.

> **WARNING: This solution will cost you money!** DigitalOcean Spaces are $5 per
> month, there are some bandwidth limitations, and if you go beyond them you get
> billed additionally.

Then I remembered that I could use something like
[S3](https://en.wikipedia.org/wiki/Amazon_S3), since it has versioning and is
fully managed. I didn't want to go down the AWS rabbit hole with this, so I
chose [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/).

Then I needed a command-line tool to sync between source and target. I found
this nice tool [s3cmd](https://s3tools.org/s3cmd), and it is in the Ubuntu
repositories.

```bash
sudo apt install s3cmd
```

After installation, I created a new Spaces bucket on DigitalOcean. Remember the
zone you choose, because you will need it when configuring `s3cmd`.

Then I visited [Digitalocean Applications &
API](https://cloud.digitalocean.com/account/api/tokens) and generated **Spaces
access keys**.
Save both the key and the secret somewhere safe, because once you
leave the page the secret will not be available anymore and you will need to
re-generate it.

```bash
# enter your key and secret and the correct endpoint
# my endpoint is ams3.digitaloceanspaces.com because
# I created my bucket in the Amsterdam region
s3cmd --configure
```

After that I played around with options for `s3cmd` and arrived at the following
command.

```bash
# I executed this command from my projects folder
cd projects
s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://my-bucket-name/projects/
```

When syncing in the other direction you need to swap the order of `SOURCE` and
`TARGET`, i.e. `s3://my-bucket-name/projects/` and `./`.

> Be sure that all the paths have a trailing slash so that sync knows that these
> are directories.

I am planning to implement some sort of `.ignore` file that will enable me to
have project-specific exclude options.

I am currently running this every hour as a cronjob, which is perfectly fine for
now while I am testing how this whole thing works and how it all will turn out.

I have also created a small Gnome extension, which is still very unstable, but
when/if this whole experiment pays off I will share it on GitHub.

diff --git a/_posts/2021-01-24-replacing-dropbox-with-s3.md b/_posts/2021-01-24-replacing-dropbox-with-s3.md
deleted file mode 100644
index 7599949..0000000
--- a/_posts/2021-01-24-replacing-dropbox-with-s3.md
+++ /dev/null
@@ -1,115 +0,0 @@
---
title: Replacing Dropbox in favor of DigitalOcean spaces
permalink: /replacing-dropbox-in-favor-of-digitalocean-spaces.html
date: 2021-01-24T12:00:00+02:00
layout: post
type: post
draft: false
---

A few months ago I experimented with DigitalOcean Spaces as my backup solution
that could [replace Dropbox
eventually](/digitalocean-spaces-to-sync-between-computers.html).
That solution
worked quite nicely, and I was amazed that smashing together a couple of
existing tools would work this well.

I have been running that solution in the background for a couple of months now
and kind of forgot about it. But recent developments around deplatforming, and
around technology and big companies holding people hostage, sped up my plans to
become less dependent on
[Google](https://edition.cnn.com/2020/12/17/tech/google-antitrust-lawsuit/index.html),
[Dropbox](https://www.pcworld.com/article/2048680/dropbox-takes-a-peek-at-files.html)
etc. and take back some control.

I am not a conspiracy theory nut, but to be honest, what these companies have
been doing lately is out of control. It is a matter of principle at this point.
I have almost completely degoogled my life, all the way from ditching Gmail and
YouTube to most of the services surrounding Google. And I must tell you, I feel
so good. I haven't felt this way for a long time.

**Anyways. Let's get to the meat of things.**

Before you continue, you should read my post about [syncing to
Dropbox](/digitalocean-spaces-to-sync-between-computers.html).

> Also to note, I am using Linux on my machine with the Gnome desktop
> environment. This should work on macOS too. To use this on Windows I suggest
> the [Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10)
> or [Cygwin](https://www.cygwin.com/).

## Folder structure

I liked the structure from Dropbox. One folder where everything is located and
synced. That's why I adopted it for my sync setup as well.

```txt
~/Vault
  ↳ backup
  ↳ bin
  ↳ documents
  ↳ projects
```

All of my code is located in the `~/Vault/projects` folder, and most of the
projects are Git repositories. I do not use this sync method for backup per se,
but in case I reinstall my machine I can easily recreate the whole important
folder structure with one quick command. No external drives needed that can
fail, etc.
## Sync script

My sync script is located in `~/Vault/bin/vault-backup.sh`.

```bash
#!/bin/bash

# dconf load /com/gexperts/Tilix/ < tilix.dconf
# 0 2 * * * sh ~/Vault/bin/vault-backup.sh

cd ~/Vault/backup/dotfiles

MACHINE=$(whoami)@$(hostname)
mkdir -p $MACHINE
cd $MACHINE

cp ~/.config/VSCodium/User/settings.json settings.json
cp ~/.s3cfg s3cfg
cp ~/.bash_extended bash_extended
cp -rf ~/.ssh ssh

codium --list-extensions > vscode-extension.txt
dconf dump /com/gexperts/Tilix/ > tilix.dconf

cd ~/Vault
s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://bucket-name/backup/

echo `date +"%D %T"` >> ~/.vault.log

notify-send \
    -u normal \
    -i /usr/share/icons/Adwaita/96x96/status/security-medium-symbolic.symbolic.png \
    "Vault sync succeeded at `date +"%D %T"`"
```

This script also backs up some of the dotfiles I use and sends a notification to the Gnome notification center. It is a straightforward solution. Nothing special going on.

> One obvious benefit of this is that I can omit syncing Node's `node_modules`
> or Python's `.venv` and `.git` folders.

You can use this script in combination with [Cron](https://en.wikipedia.org/wiki/Cron).

```txt
0 2 * * * sh ~/Vault/bin/vault-backup.sh
```

When you start syncing your local files with the remote server you can review your items on DigitalOcean.

![Dropbox Spaces](/assets/posts/dropbox-sync/dropbox-spaces.png){:loading="lazy"}

I have been using this script for quite some time now, and it's working flawlessly. I have also uninstalled Dropbox and stopped using it completely.

All I need to do now is write a Bash script that does the reverse and downloads from the remote server to the local folder. That could be another post.
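That reverse script could be sketched roughly like this — untested, `bucket-name` is a placeholder, and the `s3cmd` call is guarded so the sketch degrades gracefully on machines without it:

```bash
#!/bin/bash

# Hypothetical restore counterpart to vault-backup.sh: SOURCE and TARGET
# are swapped so the remote bucket gets copied back into the local folder.
if command -v s3cmd >/dev/null 2>&1; then
    s3cmd sync --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' \
        s3://bucket-name/backup/ ~/Vault/ || true  # ignore errors in this sketch
fi

# log the restore the same way the backup script logs syncs
echo "$(date +"%D %T") restore" >> ~/.vault.log
```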
diff --git a/_posts/2021-01-25-goaccess.md b/_posts/2021-01-25-goaccess.md
deleted file mode 100644
index 779bce5..0000000
--- a/_posts/2021-01-25-goaccess.md
+++ /dev/null
@@ -1,205 +0,0 @@
---
title: Using GoAccess with Nginx to replace Google Analytics
permalink: /using-goaccess-with-nginx-to-replace-google-analytics.html
date: 2021-01-25T12:00:00+02:00
layout: post
type: post
draft: false
---

## Introduction

I know! You cannot simply replace Google Analytics with parsing access logs and displaying a couple of charts. But to be honest, I never actually used Google Analytics to its fullest extent; I was usually only interested in page hits and which pages were visited most often.

I recently moved my blog from Firebase to a VPS and also decided to remove the Google Analytics tracking code from the site, since it's quite malicious: it tracks users across other pages too and builds a profile of the user, and I've had it. But I still need some insight into what is happening on the server, which content is being read the most, etc.

I have looked at many existing solutions like:

- [Umami](https://umami.is/)
- [Freshlytics](https://github.com/sheshbabu/freshlytics)
- [Matomo](https://matomo.org/)

But the more I looked at them, the more I noticed that I would be replacing one evil with another. Don't get me wrong, some of these solutions are absolutely fantastic, but they would require installing a database and something like PHP or Node, and I was not ready to put those things on my fresh server. Having Docker installed is also out of the question.

## Opting for log parsing

So, I defaulted to parsing the already existing logs and generating HTML reports from this data.

I found this amazing piece of software, [GoAccess](https://goaccess.io/), which provides all the functionality I need, and it's a single binary written in C.

GoAccess can be used in two different modes.
![GoAccess Terminal](/assets/posts/goaccess/goaccess-dash-term.png){:loading="lazy"}

*Running in a terminal*

![GoAccess HTML](/assets/posts/goaccess/goaccess-dash-html.png){:loading="lazy"}

*Running in a browser*

I, however, need this to run in a browser, so the second option is the way to go. The idea is to periodically run a cronjob that exports the report into a folder which is then served by Nginx behind Basic authentication.

## Getting Nginx ready

I chose Ubuntu on [DigitalOcean](https://www.digitalocean.com/). First I installed [Nginx](https://nginx.org/en/), the [Let's Encrypt](https://letsencrypt.org/getting-started/) certbot, and all the necessary dependencies.

```sh
# log in as root user
sudo su -

# first let's update the system
apt update && apt upgrade -y

# let's install
apt install nginx certbot python3-certbot-nginx apache2-utils
```

After all this is installed we can create a new configuration for the statistics site. Stats will be available at `stats.domain.com`.

```sh
# creates the directory where the html will be hosted
mkdir -p /var/www/html/stats.domain.com

cp /etc/nginx/sites-available/default /etc/nginx/sites-available/stats.domain.com
nano /etc/nginx/sites-available/stats.domain.com
```

```nginx
server {
    root /var/www/html/stats.domain.com;
    server_name stats.domain.com;

    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}
```

Now we check whether the configuration is OK with `nginx -t`. If it is, we can restart Nginx with `service nginx restart`.

After that you should add an A record for this domain that points to the IP of the droplet.

Before enabling SSL you should test whether the DNS records have propagated with `curl stats.domain.com`.

Now it's time to provision a TLS certificate. To do this, execute `certbot --nginx`. Follow the wizard and, when asked about redirection, choose option 2 (always redirect to HTTPS).
When this is done you can visit https://stats.domain.com and you should get a 404 Not Found error, which is correct.

## Getting GoAccess ready

If you are using a Debian-like system, GoAccess should be available in the repository. Otherwise refer to the official website.

```sh
apt install goaccess
```

To enable geolocation we also need one additional thing.

```sh
cd /var/www/html/stats.domain.com
wget https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb
```

Now we create a shell script that will be executed every 10 minutes.

```sh
nano /var/www/html/stats.domain.com/generate-stats.sh
```

The contents of this file should look like this.

```sh
#!/bin/sh

zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log

goaccess \
    --log-file=/var/log/nginx/access-all.log \
    --log-format=COMBINED \
    --exclude-ip=0.0.0.0 \
    --geoip-database=/var/www/html/stats.domain.com/GeoLite2-City.mmdb \
    --ignore-crawlers \
    --real-os \
    --output=/var/www/html/stats.domain.com/index.html

rm /var/log/nginx/access-all.log
```

Because Nginx rotates the access logs into multiple files after a while, we use [`zcat`](https://linux.die.net/man/1/zcat) to extract the gzipped contents and create one file that contains all the access logs. After this file is used we delete it.

If you want to exclude results from your home IP, look at the `--exclude-ip` option in the script and replace `0.0.0.0` with your own home IP address. You can find your home IP by executing `curl ifconfig.me` from your local machine and NOT from the droplet.

Test the script by executing `sh /var/www/html/stats.domain.com/generate-stats.sh` and then checking `https://stats.domain.com`. If you can see stats instead of a 404, then you are set.

It's time to add this script to cron with `crontab -e`.
```txt
*/10 * * * * sh /var/www/html/stats.domain.com/generate-stats.sh
```

## Securing with Basic authentication

You probably don't want the stats to be publicly available, so we should create a user and a password for Basic authentication.

First we create a password for a user `stats` with `htpasswd -c /etc/nginx/.htpasswd stats`.

Now we update the config file with `nano /etc/nginx/sites-available/stats.domain.com`. You will probably notice that the file looks a bit different from before. This is because `certbot` added additional rules for SSL.

The `location` portion of the config file should now look like this; you should add the `auth_basic` and `auth_basic_user_file` lines to the file.

```nginx
location / {
    try_files $uri $uri/ =404;
    auth_basic "Private Property";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```

Test whether the config is still OK with `nginx -t` and, if it is, restart Nginx with `service nginx restart`.

If you now visit `https://stats.domain.com` you should be prompted for a username and password. If not, try reopening your browser.

That is all. You now have analytics for your server that get refreshed every 10 minutes.
diff --git a/_posts/2021-06-26-simple-world-clock.md b/_posts/2021-06-26-simple-world-clock.md
deleted file mode 100644
index d1b53b4..0000000
--- a/_posts/2021-06-26-simple-world-clock.md
+++ /dev/null
@@ -1,108 +0,0 @@
---
title: Simple world clock with eInk display and Raspberry Pi Zero
permalink: /simple-world-clock-with-eiink-display-and-raspberry-pi-zero.html
date: 2021-06-26T12:00:00+02:00
layout: post
type: post
draft: false
---

Our team is spread across the world, from the USA all the way to Australia, so having some sort of world clock makes sense.

Currently, I am using an extension for Gnome called [Timezone extension](https://extensions.gnome.org/extension/2657/timezones-extension/), and it serves the purpose quite well.
But I also have a bunch of electronics that I bought over time and am not using, and it's time to stop hoarding this stuff and use it in a project.

A while ago I bought a small eInk display, the [Inky pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811), and I have a bunch of [Raspberry Pi Zeros](https://www.raspberrypi.org/products/raspberry-pi-zero/) lying around that I really need to use.

![Inky pHAT, Raspberry Pi Zero](/assets/posts/world-clock/hardware.jpg){:loading="lazy"}

Since the [Inky pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811) is essentially a HAT, it can easily be added on top of the [Raspberry Pi Zero](https://www.raspberrypi.org/products/raspberry-pi-zero/).

First, I installed the necessary software on the Raspberry Pi with `pip3 install inky`.

Then I created a file `clock.py` in the home directory `/home/pi`.

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys
import os
from inky.auto import auto
from PIL import Image, ImageFont, ImageDraw
from font_fredoka_one import FredokaOne

clocks = [
    'America/New_York',
    'Europe/Ljubljana',
    'Australia/Brisbane',
]

board = auto()
board.set_border(board.WHITE)
board.rotation = 90

img = Image.new('P', (board.WIDTH, board.HEIGHT))
draw = ImageDraw.Draw(img)

big_font = ImageFont.truetype(FredokaOne, 18)
small_font = ImageFont.truetype(FredokaOne, 13)

x = board.WIDTH / 3
y = board.HEIGHT / 3

idx = 1
for clock in clocks:
    ctime = os.popen('TZ="{}" date +"%a,%H:%M"'.format(clock))
    ctime = ctime.read().strip().split(',')
    city = clock.split('/')[1].replace('_', ' ')

    draw.text((15, (idx*y)-y+10), city, fill=board.BLACK, font=small_font)
    draw.text((110, (idx*y)-y+7), str(ctime[0]), fill=board.BLACK, font=big_font)
    draw.text((155, (idx*y)-y+7), str(ctime[1]), fill=board.BLACK, font=big_font)

    idx += 1

board.set_image(img)
board.show()
```

And
because eInk displays are rather slow to refresh and the clock only needs refreshing once a minute, this can be done through a cronjob.

Before we add this job to cron we need to make `clock.py` executable with `chmod +x clock.py`.

Then we add a cronjob with `crontab -e`.

```txt
* * * * * /home/pi/clock.py
```

So, we end up with a result like this.

![World Clock](/assets/posts/world-clock/world-clock.jpg){:loading="lazy"}

As for the enclosure, it can be 3D printed; I haven't done that yet, but something like this can be used.

You can download my [STL file for the enclosure here](/assets/posts/world-clock/enclosure.stl), but make sure the dimensions make sense. An opening for the USB port should also be added, or just use a drill and some hot glue to make the board stick in the enclosure.
diff --git a/_posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md b/_posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md
deleted file mode 100644
index cbcca37..0000000
--- a/_posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md
+++ /dev/null
@@ -1,104 +0,0 @@
---
title: My journey from being an internet über consumer to being a full hominum again
permalink: /from-internet-consumer-to-full-hominum-again.html
date: 2021-07-30T12:00:00+02:00
layout: post
type: post
draft: false
---

It's been almost a year since I started purging all my online accounts and going down this rabbit hole of becoming almost independent of the current internet machine. Even though I initially thought that I would have problems adapting, I was pleasantly surprised that the transition went so smoothly. Even better, it brought many benefits to my life, such as increased focus, less stress about trivial things, etc.

It all started with me making small changes, like unsubscribing from emails that I had either subscribed to by accepting terms and conditions.
Or even some more malicious emails that I was getting because I was on a shared mailing list. Those latter ones I hate the most of all. How the hell do they keep sharing my email, sending me unsolicited emails, and getting away with it? I have a suspicion that these marketing people share an Excel file between them and keep resubscribing me when they import lists into Mailchimp or similar software.

It's fascinating to see how much crap you get subscribed to when you are not paying attention. It got so bad that my primary Gmail address is full of junk and needs constant monitoring and cleaning up. And because I want to have Inbox Zero, this presents an additional problem for me.

For a long time I didn't notice the stress that email was causing me. I was unable to go a single hour without hysterically refreshing my email. And if somebody wrote me something, I needed to see it right then, even though I didn't immediately reply to it. I can only describe this as FOMO (fear of missing out). I have no other explanation. It was crippling, and I was constantly context switching, which I will address further down this post in more detail.

This was one of the reasons why I spun up my personal email server, and I am using it now as my primary and personal email. I still keep Gmail as my "junk" email for throwaway stuff. I log in to Gmail once a week and check whether there are any important emails, but apart from that, it's sitting dormant and collecting dust.

The more I watched the world lose itself by allowing anti-freedom things to happen, the more I started to realize that something had to change. I don't have the power to change the world, and I also don't have a grandiose enough opinion of myself to even think of trying. But what I can do is not subscribe to this consumer way of thinking. I will not be complicit in this.
My moral and ethical stances won't allow it. So, this brings us to the second part of my journey.

I was using all these 3rd party services because I was either lazy or OK with their drawbacks. I watched these services and companies become more and more evil. It is evil if you sell your users' data in this manner. Nobody reads privacy policies, everybody is OK with accepting them, and they prey on that flaw in human nature. I really hate the hypocrisy they manage to muster. These companies prey on our laziness, and we are at fault here, nobody else. And I truly understand the reasons why we would rather accept and move on than object and make our lives a little more difficult. They have perfected this through years of small changes that make us a little more dependent on them. You could not convince a person to give away all their rights and data in one day. This was gradual and slow, and it caught us all by surprise. When I really stopped and thought about it, I felt repulsed. By really stopping and thinking about it, I really mean stopping and thinking about it. Thoroughly and in depth.

Each step I took depleted my character a bit more, like I was trading myself away bit by bit without understanding what it all meant. What it meant to be a full person, not divided by all this bought attention they want from me. They don't just get your data; they also take your attention away from you. They scatter your attention and go with the divide-and-conquer tactic from there. And a person divided is a person not fully there. Not in the moment. Not fully alive.

I was unable to form long thoughts. Well, I thought I was. But now that I see what being a full person is again, I can see that I was not at my 100% back then.

A revolt was inevitable. There was no other way of continuing my story without it. Without taking back my attention, my thoughts, my time, and my privacy, regardless of how late it may be.
This has nothing to do with conspiracy theories, even less with changing the world. All I wanted was to get my life back in order and not waste energy that could be spent in other, better places.

I started reading more. I can now focus fully on the things I work on. Furthermore, I have a mental acuity that I never had before; my mind feels sharp. I don't get angry as much. I can cherish the finer things in life now without the need to interpret them intellectually. Not only that, but I have a feeling of belonging again. My sense of purpose has returned with a vengeance. And I can now help people without depleting myself.

The last step so far was to finish closing all the remaining online accounts that I still had. When I thought about what value they brought me, I wasn't surprised that the answer was none. I wasn't logging in and using them. I stopped being afraid of missing out. If somebody wants to get in contact with me, they will find a way. I am one search away.

We are not beholden to anybody. Our lives are our own. So dare yourself to delete Facebook and LinkedIn. To unsubscribe. Dare yourself to take your time and attention back. Use that time and energy to go for a walk without thinking about work. Read a book instead of reading comments on social media that you will forget in an hour. Enrich your life instead of wasting it. It only requires a small step, and you will feel the benefits immediately. Lose the weight of the world that is crushing you without your consent.
diff --git a/_posts/2021-08-01-linux-cheatsheet.md b/_posts/2021-08-01-linux-cheatsheet.md
deleted file mode 100644
index b416ffa..0000000
--- a/_posts/2021-08-01-linux-cheatsheet.md
+++ /dev/null
@@ -1,288 +0,0 @@
---
title: List of essential Linux commands for server management
permalink: /linux-cheatsheet.html
date: 2021-08-01T12:00:00+02:00
layout: post
type: post
draft: false
---

**Generate SSH key**

```bash
ssh-keygen -t ed25519 -C "your_email@example.com"

# when no support for Ed25519 is present
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
```

Note: By default SSH keys are stored in the `/home/<user>/.ssh/` folder.

**Log in to a host via SSH**

```bash
# connect to host as your local username
ssh host

# connect to host as a specific user
ssh <user>@<host>

# connect to host using a specific port
ssh -p <port> <user>@<host>
```

**Execute a command on a server through SSH**

```bash
# execute one command
ssh root@100.100.100.100 "ls /root"

# execute many commands
ssh root@100.100.100.100 "cd /root;touch file.txt"
```

**Displays currently logged in users in the system**

```bash
w
```

**Displays Linux system information**

```bash
uname
```

**Displays kernel release information**

```bash
uname -r
```

**Shows the system hostname**

```bash
hostname
```

**Shows system reboot history**

```bash
last reboot
```

**Displays information about a user**

```bash
sudo apt install finger
finger <username>
```

**Displays IP addresses and all the network interfaces**

```bash
ip addr show
```

**Downloads a file from an online source**

```bash
wget https://example.com/example.tgz
```

Note: If the URL contains ? or &, enclose the URL in double quotes.
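A quick illustration of that note — the URL is made up; the quotes are what stop the shell from treating `&` as a background operator and `?` as a glob character:

```bash
# quoted: the URL survives intact
url="https://example.com/example.tgz?foo=1&bar=2"
echo "$url"

# the same rule applies when downloading:
# wget "$url"
```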
**Compress a file with gzip**

```bash
# will not keep the original file
gzip file.txt

# will keep the original file
gzip --keep file.txt
```

**Interactive disk usage analyzer**

```bash
sudo apt install ncdu

ncdu
ncdu <path>
```

**Install Node.js using the Node Version Manager**

```bash
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
source ~/.bashrc

nvm install v13
```

**Too long; didn't read**

```bash
npm install -g tldr

tldr tar
```

**Combine all Nginx access logs into one big log file**

```bash
zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log
```

**Set up a Redis server**

```bash
sudo apt install redis-server redis-tools

# check if the server is running
sudo service redis status

# set and get a key value
redis-cli set mykey myvalue
redis-cli get mykey

# interactive shell
redis-cli
```

**Generate statistics for your webserver**

```bash
sudo apt install goaccess

# check if installed
goaccess -v

# combine logs
zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log

# export to a single html file
goaccess \
    --log-file=/var/log/nginx/access-all.log \
    --log-format=COMBINED \
    --exclude-ip=0.0.0.0 \
    --ignore-crawlers \
    --real-os \
    --output=/var/www/html/stats.html

# cleanup afterwards
rm /var/log/nginx/access-all.log
```

**Search for a given pattern in files**

```bash
grep -r 'pattern' files
```

**Find the process ID of a specific program**

```bash
pgrep nginx
```

**Print name of current/working directory**

```bash
pwd
```

**Creates a blank new file**

```bash
touch newfile.txt
```

**Displays the first lines of a file**

```bash
# -n sets the number of lines (10 by default)
head -n 20 somefile.txt
```

**Displays the last lines of a file**

```bash
# -n sets the number of lines (10 by default)
tail -n 20 somefile.txt

# -f follows the changes in the file (doesn't close)
tail -f somefile.txt
```

**Count lines in a file**

```bash
wc -l somefile.txt
```

**Find all instances of a file**

```bash
sudo apt install mlocate

locate somefile.txt
```

**Find file names that begin with 'index' in the /home folder**

```bash
find /home/ -name "index*"
```

**Find files larger than 100MB in the home folder**

```bash
find /home -size +100M
```

**Displays block device information**

```bash
lsblk
```

**Displays free space on mounted filesystems**

```bash
df -h
```

**Displays free and used memory in the system**

```bash
free -h
```

**Displays all active listening ports**

```bash
sudo apt install net-tools

netstat -pnltu
```

**Kill a process violently**

```bash
kill -9 <pid>
```

**List files opened by a user**

```bash
lsof -u <username>
```

**Execute "df -h", showing periodic updates**

```bash
# -n 1 means every second
watch -n 1 df -h
```

diff --git a/_posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md b/_posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md
deleted file mode 100644
index 4f9bc09..0000000
--- a/_posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md
+++ /dev/null
@@ -1,277 +0,0 @@
---
title: Debian based riced up distribution for Developers and DevOps folks
permalink: /debian-based-riced-up-distribution-for-developers-and-devops-folks.html
date: 2021-12-03T12:00:00+02:00
layout: post
type: post
draft: false
---

## Introduction

I have been using [Ubuntu](https://ubuntu.com/) for quite a long time now. I have used [Debian](https://www.debian.org/) and [Manjaro](https://manjaro.org/) in the past, had [Arch](https://archlinux.org/) for some time, and even ran [Gentoo](https://www.gentoo.org/) way back.

What I learned from all this is that I prefer running slightly older but stable versions over a bleeding edge rolling release.
For that reason, I have stuck with Ubuntu for a couple of years now. I am also at a point in my life where I just don't care what is cool or hip anymore. I just want a stable system that doesn't get in my way.

During all this, I noticed that these distributions were getting very bloated, with a lot of software included that I usually uninstall on a fresh installation. Maybe this is my OCD speaking, but why do I have to give a fresh installation at least 1 GB of RAM out of the box just to have a blank screen in front of me? I get it, there are many things included in the distro to make my life easier. I understand. But at this point I have a feeling that modern Linux distributions are becoming similar to a [Node.js project with node_modules](https://devhumor.com/content/uploads/images/August2017/node-modules.jpg): a crazy number of packages serving very little or no purpose, just supporting other software.

I felt I needed a fresh start. To start over with something minimal and clean. Something that would put a little more joy into using a computer again.

For the first version, I wanted to target the following machines I have at home.

```yaml
# My main stationary work machine
Resolution: 3840x1080 (Super Ultrawide Monitor 32:9)
CPU: Intel i7-8700 (12) @ 4.600GHz
GPU: AMD ATI Radeon RX 470/480/570/570X/580/580X/590
Memory: 32020MiB
```

```yaml
# Thinkpad x220 for testing things and goofing around
Resolution: 1366x768
CPU: Intel i5-2520M (4) @ 3.200GHz
GPU: Intel 2nd Generation Core Processor Family
Memory: 15891MiB
```

## How should I approach this?

I knew I wanted to use the [minimal Debian netinst](https://www.debian.org/CD/netinst/) as the base to give myself a head start. There was no reason to go through changing the installer and testing that whole behemoth of a thing, so some sort of ricing was the only logical option to get this thing off the ground somewhat quickly.
> **What is ricing anyway?**
> The term "RICE" stands for Race Inspired Cosmetic Enhancement. A group of
> people (could be one, idk) decided to see if they could tweak their own
> distros like they/others did their cars. This gave rise to a community of
> Linux/Unix enthusiasts trying to make their distros look cooler and better
> than others... For more information, read this article:
> [What in the world is ricing!?](https://pesos.github.io/2020/07/14/what-is-ricing.html).

I didn't want this to be just a set of config files for theming purposes. I wanted it to include a set of pre-installed tools and services that are used all the time by a modern developer. Theming is just a tiny part of it; fonts being applied across the distro and things like that.

First, I chose the terminal installer and let it load the additional components. Avoid using the graphical installer in this case.

![](/assets/posts/dfd-rice/install-00.png){:loading="lazy"}

After that I selected a hostname, created a normal user, set passwords for that user and the root user, and chose guided mode for disk partitioning.

![](/assets/posts/dfd-rice/install-01.png){:loading="lazy"}

I let it run to install everything required for the base system and opted out of scanning additional media for use by the package manager; those packages will be downloaded from the internet during installation.

![](/assets/posts/dfd-rice/install-02.png){:loading="lazy"}

I opted out of the popularity contest, and **now comes the important part**. Uncheck all the boxes in Software selection and only leave 'standard system utilities'. I also left an SSH server, so I was able to log in to the machine from my main PC.

![](/assets/posts/dfd-rice/install-03.png){:loading="lazy"}

At this point, I installed the GRUB bootloader on the disk where I installed the system.
![](/assets/posts/dfd-rice/install-04.png){:loading="lazy"}

That concluded the installation of base Debian, and after restarting the computer I was greeted with the login screen.

![](/assets/posts/dfd-rice/install-05.png){:loading="lazy"}

Now that I had the base installation, it was time to choose what software I wanted to include in this so-called distribution. I wanted an out-of-the-box developer experience, so I had plenty to choose from.

Let's not waste time and go through the list.

## Desktop environments

I have been using [Gnome](https://www.gnome.org/) for my whole Linux life, from version 2 forward. It's been quite a ride. I hated version 3 when it came out and replaced version 2, but I got used to it. And now with version 40+ they have made a couple of changes that I found both frustrating and pleasantly surprising.

The amount of vertical space you lose because of the beefy title bars on windows is ridiculous. And in the case of [Tilix](https://gnunn1.github.io/tilix-web/) you also have tabs, and you are 100px deep. Vertical space is one of the most important things for a developer: the more real estate you have, the more code you can fit in the viewport.

But on the other hand, I still love how Gnome feels and looks. I gotta give them that. They really are trying to make Gnome feel unified and modern.

Regardless of all the nice things about Gnome, I had been looking at tiling window managers for some time but never had the nerve to actually go with one. Now was the ideal time to give it a go. No guts, no glory kind of a thing.

One of the requirements for me was easy custom layouts, because I use a really strange monitor with an aspect ratio of 32:9, so relying on the included layouts most of them ship with is a non-starter.

What I was doing in Gnome was arranging windows in a layout like the diagram below. This is my common practice.
And if you look at it, you can clearly see I was replicating a tiling window manager setup in Gnome.

![](/assets/posts/dfd-rice/layout.png){:loading="lazy"}

That made me look into a bunch of tiling window managers and test them out. The candidates I was looking at were:

- [i3](https://i3wm.org/)
- [bspwm](https://github.com/baskerville/bspwm)
- [awesome](https://awesomewm.org/index.html)
- [XMonad](https://xmonad.org/)
- [sway](https://swaywm.org/)
- [Qtile](http://www.qtile.org/)
- [dwm](https://dwm.suckless.org/)

You can also check the article [13 Best Tiling Window Managers for Linux](https://www.tecmint.com/best-tiling-window-managers-for-linux/) that I was referencing while testing them out.

While all of them provided what I needed, I liked i3 the most. What particularly caught my eye was its ease of use and its tree-based layout, which allows flexible layouts. I know the others can also be set up with custom layouts other than spiral, dwindle, etc. I think i3 is a good entry-level window manager for somebody like me.

## Batteries included

The source for the whole thing is located on GitHub: https://github.com/mitjafelicijan/dfd-rice.

Currently included:

- `non-free` (enables non-free packages in apt)
- `sudo` (adds sudo and adds user to sudo group)
- `essentials` (gcc, htop, zip, curl, etc...)
- `wifi` (network manager nmtui)
- `desktop` (i3, dmenu, fonts, configurations)
- `pulseaudio` (pulseaudio with pavucontrol)
- `code-editors` (vim, micro, vscode)
- `ohmybash` (make bash pretty)
- `file-managers` (mc)
- `git-ui` (terminal git gui)
- `meld` (diff tool)
- `profiling` (kcachegrind, valgrind, strace, ltrace)
- `browsers` (brave, firefox, chromium)
- programming languages:
  - `python`
  - `golang`
  - `nodejs`
  - `rust`
  - `nim`
  - `php`
  - `ruby`
- `docker` (with docker-compose)
- `ansible`

The install script also allows you to install only specific packages (example for: essentials ohmybash docker rust).
```sh
su - root \
  bash -c "$(wget -q https://raw.github.com/mitjafelicijan/dfd-rice/master/tools/install.sh -O -)" -- \
  essentials ohmybash docker rust
```

Currently, most of these recipes use what Debian ships, and this is totally fine with me since I never use bleeding-edge features of a package. But if something major comes to light, I will replace the recipe with a compilation script or something similar.

This is some of the output from the installation script.

![](/assets/posts/dfd-rice/script.png){:loading="lazy"}

Let's take a look at some examples in the installation script.

### Docker recipe

```sh
# docker
print_header "Installing Docker"
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --yes --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
apt -y install docker-ce docker-ce-cli containerd.io docker-compose

systemctl start docker
systemctl enable docker
systemctl status docker --no-pager

/sbin/usermod -aG docker $USERNAME
```

### Making bash pretty

I really like [Oh My Zsh](https://ohmyz.sh/), but I don't like the zsh shell. When I used it, I constantly needed to be aware of it, and running bash scripts was a pain. So, I was really delighted when I found out that a version for bash existed, called [Oh My Bash](https://ohmybash.nntoan.com/). Let's take a look at the recipe for installing it.

```sh
# ohmybash
print_header "Enabling OhMyBash"
sudo -u $USERNAME sh -c "$(curl -fsSL https://raw.github.com/ohmybash/oh-my-bash/master/tools/install.sh)" &
T1=${!}
wait ${T1}
```

Because OhMyBash does `exec bash` at the end, this traps our script inside another shell and our script cannot continue. For that reason, I executed this in the background.
But that presents a new problem: because this runs in the background, we naturally lose track of its progress. The trick with `T1=${!}` and `wait ${T1}` stores the process ID of the background job and waits for it to finish before continuing to the next task in the bash script.

Check [Multi-Threaded Processing in Bash Scripts](https://www.cloudsavvyit.com/12277/how-to-use-multi-threaded-processing-in-bash-scripts/) for more details.

## Conclusion

Take a look at the https://github.com/mitjafelicijan/dfd-rice/blob/develop/tools/install.sh script to get familiar with it. This is just a first iteration, and I will continue to update it because I need this in my life.

The current version boots in 4s to the login prompt, and after you log in, the desktop environment loads in 2s. So, it's fast, very fast. And on a clean boot, I measured ~230 MB of RAM usage.

And this is how it looks with two terminals side by side. I really like the simplicity and clean interface. I will polish the colors and stuff like that, but I really do like the results.

![](/assets/posts/dfd-rice/desktop.png){:loading="lazy"}
diff --git a/_posts/2021-12-25-running-golang-application-as-pid1.md b/_posts/2021-12-25-running-golang-application-as-pid1.md
deleted file mode 100644
index edd5a57..0000000
--- a/_posts/2021-12-25-running-golang-application-as-pid1.md
+++ /dev/null
@@ -1,348 +0,0 @@
---
title: Running Golang application as PID 1 with Linux kernel
permalink: /running-golang-application-as-pid1.html
date: 2021-12-25T12:00:00+02:00
layout: post
type: post
draft: false
---

## Unikernels, kernels, and alike

I have been reading a lot about [unikernels](https://en.wikipedia.org/wiki/Unikernel) lately and found them very intriguing. When you push away all the marketing speak and look at the idea, it makes a lot of sense.

> A unikernel is a specialized, single address space machine image constructed
> by using library operating systems.
> ([Wikipedia](https://en.wikipedia.org/wiki/Unikernel))

I really like the explanation from the article [Unikernels: Rise of the Virtual Library Operating System](https://queue.acm.org/detail.cfm?id=2566628). Really worth a read.

If we compare a normal operating system to a unikernel side by side, they would look something like this.

![Virtual machines vs Containers vs Unikernels](/assets/posts/pid1/unikernels.webp){:loading="lazy"}

From this image, we can see how the complexity significantly decreases with the use of unikernels. This comes with a price, of course. Unikernels are hard to get running and require a lot of work, since you don't have an actual proper kernel running in the background providing network access, drivers, etc.

So as a half step to make the stack simpler, I started looking into using the Linux kernel as a base and going from there. I came across this [Youtube video talking about Building the Simplest Possible Linux System](https://www.youtube.com/watch?v=Sk9TatW9ino) by [Rob Landley](https://landley.net), and apart from statically compiling the application to be run as PID 1, there were really no other obstacles.

## What is PID 1?

PID 1 is the first process that the Linux kernel starts after the boot process. It also has a couple of properties that are unique to it.

- When the process with PID 1 dies for any reason, all other processes are
  killed with the KILL signal.
- When any process having children dies for any reason, its children are
  re-parented to the process with PID 1.
- Many signals which have a default action of Term do not have one for PID 1.
- When the process with PID 1 dies for any reason, the kernel panics, which
  results in a system crash.

PID 1 is considered the init application, which takes care of running and handling services like:

- sshd,
- nginx,
- pulseaudio,
- etc.

If you are on a Linux machine, you can check which process has PID 1 by running the following.
```sh
$ cat /proc/1/status
Name:   systemd
Umask:  0000
State:  S (sleeping)
Tgid:   1
Ngid:   0
Pid:    1
PPid:   0
...
```

As we can see, on my machine the process with ID 1 is [systemd](https://systemd.io/), which is a software suite that provides an array of system components for Linux operating systems. If you look closely, you can also see that the `PPid` (the process ID of the parent process) is `0`, which additionally confirms that this process doesn't have a parent.

## So why even run an application as PID 1 instead of just using a container?

Containers are wonderful, but they come with a lot of baggage. And because they are layered by nature, the images require quite a lot of space and also a lot of additional software to handle them. They are not as lightweight as they seem, and many popular images require 500 MB plus of disk space.

Running the application as PID 1 would result in a significantly smaller footprint, as we will see later in the post.

> You could run a simple init system inside a Docker container, described in more
> detail in the article [Docker and the PID 1 zombie reaping problem](https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/).

## The master plan

1. Compile the Linux kernel with the default configuration.
2. Prepare a Hello World application in Golang that is statically compiled.
3. Run it with [QEMU](https://www.qemu.org/), providing the Golang application
   as the init application / PID 1.

For the sake of simplicity, we will not be cross-compiling any of it and will just use the 64-bit version.

## Compiling Linux kernel

```sh
$ wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.15.7.tar.xz
$ tar xf linux-5.15.7.tar.xz

$ cd linux-5.15.7

$ make clean

# read more about this https://stackoverflow.com/a/41886394
$ make defconfig

$ time make -j `nproc`

$ cd ..
```

At this point we have a kernel image located at `arch/x86_64/boot/bzImage`.
We will use this in QEMU later.

To make our lives a bit easier, let's move the kernel image to another place. Let's create a folder `bin/` in the root of our project with `mkdir -p bin`.

At this point we can copy `bzImage` to the `bin/` folder with `cp linux-5.15.7/arch/x86_64/boot/bzImage bin/bzImage`.

The folder structure of this experiment should look like this.

```txt
pid1/
  bin/
    bzImage
  linux-5.15.7/
  linux-5.15.7.tar.xz
```

## Preparing PID 1 application in Golang

This step is relatively easy. The only thing we must keep in mind is that we will need to compile the binary as a static one.

Let's create an `init.go` file in the root of the project.

```go
package main

import (
    "fmt"
    "time"
)

func main() {
    for {
        fmt.Println("Hello from Golang")
        time.Sleep(1 * time.Second)
    }
}
```

Notice that we have a forever loop in main, with a simple sleep of 1 second so we don't overwhelm the CPU. This is because PID 1 should never complete and/or exit. That would result in a kernel panic. Which is BAD!

There are two ways of compiling a Golang application: statically and dynamically.

To statically compile the binary, use the following command.

```sh
$ go build -ldflags="-extldflags=-static" init.go
```

We can also check if the binary is statically compiled with:

```sh
$ file init
init: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=Ypu8Zw_4NBxm1Yxg2OYO/H5x721rQ9uTPiDVh-VqP/vZN7kXfGG1zhX_qdHMgH/9vBfmK81tFrygfOXDEOo, not stripped

$ ldd init
not a dynamic executable
```

At this point, we need to create an [initramfs](https://www.linuxfromscratch.org/blfs/view/svn/postlfs/initramfs.html) (abbreviated from "initial RAM file system"; it is the successor of initrd, and is a cpio archive of the initial file system that gets loaded into memory during the Linux startup process).
- -```sh -$ echo init | cpio -o --format=newc > initramfs -$ mv initramfs bin/initramfs -``` - -The projects at this stage should look like this. - -```txt -pid1/ - bin/ - bzImage - initramfs - linux-5.15.7/ - linux-5.15.7.tar.xz - init.go -``` - -## Running all of it with QEMU - -[QEMU](https://www.qemu.org/) is a free and open-source hypervisor. It emulates -the machine's processor through dynamic binary translation and provides a set -of different hardware and device models for the machine, enabling it to run a -variety of guest operating systems. - -```sh -$ qemu-system-x86_64 -serial stdio -kernel bin/bzImage -initrd bin/initramfs -append "console=ttyS0" -m 128 -``` - -```sh -$ qemu-system-x86_64 -serial stdio -kernel bin/bzImage -initrd bin/initramfs -append "console=ttyS0" -m 128 -[ 0.000000] Linux version 5.15.7 (m@khan) (gcc (GCC) 11.2.1 20211203 (Red Hat 11.2.1-7), GNU ld version 2.37-10.fc35) #7 SMP Mon Dec 13 10:23:25 CET 2021 -[ 0.000000] Command line: console=ttyS0 -[ 0.000000] x86/fpu: x87 FPU will use FXSAVE -[ 0.000000] signal: max sigframe size: 1440 -[ 0.000000] BIOS-provided physical RAM map: -[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable -[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved -[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved -[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000007fdffff] usable -[ 0.000000] BIOS-e820: [mem 0x0000000007fe0000-0x0000000007ffffff] reserved -[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved -[ 0.000000] NX (Execute Disable) protection: active -[ 0.000000] SMBIOS 2.8 present. -[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-6.fc35 04/01/2014 -[ 0.000000] tsc: Fast TSC calibration failed -... -[ 2.016106] ALSA device list: -[ 2.016329] No soundcards found. 
[ 2.053176] Freeing unused kernel image (initmem) memory: 1368K
[ 2.056095] Write protecting the kernel read-only data: 20480k
[ 2.058248] Freeing unused kernel image (text/rodata gap) memory: 2032K
[ 2.058811] Freeing unused kernel image (rodata/data gap) memory: 500K
[ 2.059164] Run /init as init process
Hello from Golang
[ 2.386879] tsc: Refined TSC clocksource calibration: 3192.032 MHz
[ 2.387114] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2e02e31fa14, max_idle_ns: 440795264947 ns
[ 2.387380] clocksource: Switched to clocksource tsc
[ 2.587895] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Hello from Golang
Hello from Golang
Hello from Golang
```

The whole [log file is here](/assets/posts/pid1/qemu.log).

## Size comparison

The cool thing about this approach is that the Linux kernel and the application together only take around 12 MB, which is impressive as hell. Note also that the size of bzImage (the Linux kernel) could be greatly decreased by going into `make menuconfig` and removing a ton of features from the kernel, making it even smaller. I managed to get the kernel size down to 2 MB and still have everything work properly.

```sh
total 12M
-rw-r--r--. 1 m m 9.3M Dec 13 10:24 bzImage
-rw-r--r--. 1 m m 1.9M Dec 27 01:19 initramfs
```

## Creating ISO image and running it with Gnome Boxes

First we need to create a proper folder structure with `mkdir -p iso/boot/grub`.

Then we need to download the [grub binary](https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito). You can read more about this program on https://github.com/littleosbook/littleosbook.

```sh
$ wget -O iso/boot/grub/stage2_eltorito https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito
```

```sh
$ tree iso/boot/
iso/boot/
├── bzImage
├── grub
│   ├── menu.lst
│   └── stage2_eltorito
└── initramfs
```

Let's copy the files into the proper folders.
```sh
$ cp stage2_eltorito iso/boot/grub/
$ cp bin/bzImage iso/boot/
$ cp bin/initramfs iso/boot/
```

Let's create a GRUB config file at `iso/boot/grub/menu.lst` with the following contents.

```ini
default=0
timeout=5

title GoAsPID1
kernel /boot/bzImage
initrd /boot/initramfs
```

Let's create the ISO file using genisoimage:

```sh
genisoimage -R \
    -b boot/grub/stage2_eltorito \
    -no-emul-boot \
    -boot-load-size 4 \
    -A os \
    -input-charset utf8 \
    -quiet \
    -boot-info-table \
    -o GoAsPID1.iso \
    iso
```

This will produce `GoAsPID1.iso`, which you can use with [Virtualbox](https://www.virtualbox.org/) or [Gnome Boxes](https://apps.gnome.org/app/org.gnome.Boxes/).

## Is running applications as PID 1 even worth it?

Well, the answer is not as simple as one would think. Sometimes it is and sometimes it's not. For embedded systems and very specialized applications it is worth it for sure. But for normal use, I don't think so. It was an interesting exercise in compiling kernels and looking at the guts of the Linux kernel, but sticking to containers for most things is a better option in my opinion.

An interesting experiment would be creating an image that supports networking, deploying it to AWS as an EC2 instance, and observing how it fares. But in that case, we would need to write some sort of supervisor, running on a separate EC2 instance, that would check whether the other instances are running properly. Remember that if your application fails, the kernel panics and the whole machine becomes inoperable.
diff --git a/_posts/2021-12-30-wap-mobile-web-before-the-web.md b/_posts/2021-12-30-wap-mobile-web-before-the-web.md
deleted file mode 100644
index 665be0f..0000000
--- a/_posts/2021-12-30-wap-mobile-web-before-the-web.md
+++ /dev/null
@@ -1,203 +0,0 @@
---
title: Wireless Application Protocol and the mobile web before the web
permalink: /wap-mobile-web-before-the-web.html
date: 2021-12-30T12:00:00+02:00
layout: post
type: post
draft: false
---

## A little stroll down the history lane

About two weeks ago, I watched this outstanding documentary on YouTube, [Springboard: the secret history of the first real smartphone](https://www.youtube.com/watch?v=b9_Vh9h3Ohw), about the history of smartphones and phones in general. It brought back so many memories. I never had an actual smartphone before Android. The closest to a smartphone was the [Sony Ericsson P1](https://www.gsmarena.com/sony_ericsson_p1-1982.php). A fantastic phone. I broke it in Prague after a party, and that was one of those rare occasions where I was actually mad at myself. But nevertheless, after that phone, the next one was an Android one.

Before that, I only owned normal phones from Nokia, Siemens, etc. Nothing special, actually. These are the phones we are talking about. Before 2007, Apple and Android phones didn't exist yet.

These phones were rocking:

- No selfie cameras.
- ~2 inch displays.
- ~120 MHz beast CPUs.
- 144p main cameras.
- But they had a headphone jack.

Let's take a look at these beauties.

![Old phones](/assets/posts/wap/phones.gif){:loading="lazy"}

## WAP - Wireless Application Protocol

Not that one! We are talking about the Wireless Application Protocol and not Cardi B's song 😃

WAP stands for Wireless Application Protocol. It is a protocol designed for micro-browsers, and it enables internet access on mobile devices.
It uses the mark-up language WML (Wireless Markup Language, not HTML); WML is defined as an XML 1.0 application. Furthermore, it enables creating web applications for mobile devices. In 1998, the WAP Forum was founded by Ericsson, Motorola, Nokia and Unwired Planet, whose aim was to standardize the various wireless technologies via protocols. [(source)](https://www.geeksforgeeks.org/wireless-application-protocol/)

The WAP protocol resulted from the joint efforts of the various members of the WAP Forum. In 2002, the WAP Forum was merged with various other forums of the industry, resulting in the formation of the Open Mobile Alliance (OMA). [(source)](https://www.geeksforgeeks.org/wireless-application-protocol/)

These were some wild times. Devices had tiny screens and data transmission rates were abominable. But they were capable of rendering WML (Wireless Markup Language). This was actually very similar to HTML; it is a markup language, after all.

These pages could be served by [Apache](https://apache.org/) and could be generated by CGI scripts on the backend. The only difference was the limited markup language.

## WML - Wireless Markup Language

Just like web browsers use HTML for content structure, older mobile device browsers use WML - if you need to support really old mobile phones using WML browsers, you will need to know about it. WML is XML-based (an XML vocabulary just like XHTML and MathML, but not HTML) and does not use the same metaphor as HTML. HTML is a single document with some metadata packed away in the head, and a body encapsulating the visible page. With WML, the metaphor does not envisage a page, but rather a deck of cards. A WML file might have several pages or cards contained within it. [(source)](https://www.w3.org/wiki/Introduction_to_mobile_web)

```html

<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <card id="home" title="Example">
    <p>Welcome to the Example homepage</p>
  </card>
</wml>
```

There is an amazing tutorial on [Tutorialspoint about WML](https://www.tutorialspoint.com/wml/index.htm).

## Converting Digg to WML

This task is completely useless and not really feasible nowadays, but I had to give it a try for old times' sake. Since the data is already there in the form of an RSS feed, I could take this feed, parse it, and create a WML version of the homepage.

We will need:

- Python3 + Pip
- ImageMagick
- feedparser and mako templating

```sh
# for fedora 35
sudo dnf install ImageMagick python3-pip

# templating engine for python
pip install mako --user

# for parsing rss feeds
pip install feedparser --user
```

Project folder structure should look like the following.

```
12:43:53 m@khan wap → tree -L 1
.
├── generate.py
└── template.wml
```

After that, I created a small template for the homepage.

```html
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <card id="home" title="Digg">
    % for item in entries:

      <p><img src="images/${item.id}.jpg" alt="${item.title}"/></p>
      <p><em>${item.kicker}</em></p>
      <p><strong>${item.title}</strong></p>
      <p>${item.description}</p>
    % endfor
  </card>
</wml>
```

And the parser that parses the RSS feed looks like this.

```python
import os
import feedparser
from mako.template import Template

os.system('mkdir -p www/images')

template = Template(filename='template.wml')

feed = feedparser.parse('https://digg.com/rss/top.xml')

entries = feed.entries[:15]

for entry in entries:
    print('Processing image with id {}'.format(entry.id))
    os.system('wget -q -O www/images/{}.jpg "{}"'.format(entry.id, entry.links[1].href))
    os.system('convert www/images/{}.jpg -type Grayscale -resize 175x -depth 3 -quality 30 www/images/{}.jpg'.format(entry.id, entry.id))

html = template.render(entries = entries)

with open('www/index.wml', 'w+') as fp:
    fp.write(html)
```

This script will create a `www` folder and, inside it, a `www/images` folder for storing the resized images.

> Be sure you don't use SSL and serve the content over plain HTTP.
> These old phones will have problems with TLS 1.3 etc.

If you look at the Python file, I convert all the images into tiny B&W images. They should be WBMP (Wireless BitMaP), but I chose JPEGs for this, and it seems to work properly.

Because I currently don't have a phone old enough to test it on, I used an emulator. And it was really hard to find one. I found [WAP Proof](http://wap-proof.sharewarejunction.com/) on Shareware Junction, and it did the job well enough. I will try to find an actual device to test it on.

If you are using Nginx to serve the contents, add an `index` directive to the server block so that it automatically serves the `index.wml` file.

```nginx
server {
    index index.wml index.html index.htm index.nginx-debian.html;
}
```

## Conclusion

Well, this was pointless, but very fun! I hope you enjoyed it as much as I did. I will try to find an old phone to test it on. If you have any questions, feel free to ask in the comments.
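As a footnote: if you do want real WBMP output instead of the JPEG shortcut used above, ImageMagick can produce it directly (WBMP is 1-bit, so the image has to be reduced to monochrome first). A small sketch of what the conversion command could look like — the helper and the example paths are my own illustration, not part of the original script:

```python
import subprocess

def wbmp_convert_cmd(src, dst, width=175):
    # Build an ImageMagick command that resizes the image and forces
    # 1-bit output; the "wbmp:" prefix selects the Wireless BitMaP coder.
    return [
        "convert", src,
        "-resize", f"{width}x",
        "-monochrome",
        f"wbmp:{dst}",
    ]

# Example (hypothetical paths); run only if ImageMagick is installed:
# subprocess.run(wbmp_convert_cmd("www/images/1.jpg", "www/images/1.wbmp"), check=True)
```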
diff --git a/_posts/2022-06-30-trying-out-helix-editor.md b/_posts/2022-06-30-trying-out-helix-editor.md
deleted file mode 100644
index be369a1..0000000
--- a/_posts/2022-06-30-trying-out-helix-editor.md
+++ /dev/null
@@ -1,55 +0,0 @@
---
title: Trying out Helix code editor as my main editor
permalink: /tying-out-helix-code-editor.html
date: 2022-06-30T12:00:00+02:00
layout: post
type: post
draft: false
---

I have been searching for a lightweight code editor for quite some time. One of the main reasons was that I wanted something that doesn't burn through CPU and whose RAM usage is not through the roof. I have been mostly using Visual Studio Code. It's been an outstanding editor. I have no quarrel with it at all. It's just time to spice life up with something new.

I have been on this search for a couple of years. I have tried Vim, Neovim, Emacs, Doom Emacs, Micro and a couple more. Among them, I liked Micro and Doom Emacs the most. The Micro editor was a little too basic for me. And Doom Emacs was a bit too hardcore. This does not reflect on any of the editors. It's just my personal preference.

> I tried Helix Editor about a year ago. But I didn't pay attention to it.
> Tried it, saw it's similar to Vi, and just said no. I was premature in
> dismissing it.

One of the things I actually miss is line wrapping for certain files. When writing Markdown, line wrapping would be very helpful. Editing such a document is frustrating, to say the least. Some of the Markdown to HTML converters don't take kindly to new lines between sentences. Not paragraphs, sentences. And I use Markdown to write this blog you are reading.

But other than this, I have been extremely satisfied with it. It's been a pleasant surprise. There have been zero issues with the editor.

One thing to do before you are able to use autocompletion and make use of Language Server support is to install the appropriate language server.

```sh
# For C development this installs the C LSP.
sudo dnf install clang-tools-extra
```

I am still getting used to the keyboard shortcuts and getting better. What Helix does really well is pack in sane defaults; even though there is currently no plugin support, I haven't found any need for plugins. It has all that you would need. It goes to extreme measures to show the user what is going on, with popups that show you what the keyboard shortcuts are.

And it comes packed with many [really good themes](https://github.com/helix-editor/helix/wiki/Themes).

![Editor](/assets/posts/helix-editor/editor.png){:loading="lazy"}

It's still young but has this mature feeling to it. It has sane defaults and mimics Vim (it works a bit differently, but the overall idea is similar).
diff --git a/_posts/2022-07-05-what-would-dna-sound-if-synthesized.md b/_posts/2022-07-05-what-would-dna-sound-if-synthesized.md
deleted file mode 100644
index 6efe559..0000000
--- a/_posts/2022-07-05-what-would-dna-sound-if-synthesized.md
+++ /dev/null
@@ -1,365 +0,0 @@
---
title: What would DNA sound like if synthesized to an audio file
permalink: /what-would-dna-sound-if-synthesized.html
date: 2022-07-05T12:00:00+02:00
layout: post
type: post
draft: false
---

## Introduction

Lately, I have been thinking a lot about the nature of life, what the foundation blocks of life are, and things like that. It's remarkable how complex and, on the other hand, how simple creation is when you look at it. The miracle of life keeps us grounded when our imagination goes wild. If DNA is the building block of life, you could consider it an API nature provided us to better understand all of this chaos masquerading as order.

I have been reading a lot about superintelligence and our somewhat misguided path to creating general artificial intelligence. What would the building blocks of our creation look like? Is compression really the ultimate storage of information?
Will our creations also ponder these questions when creating new worlds for themselves, or will we just disappear into the vastness of possibilities? It is a little offensive that we are playing God whilst being completely ignorant of our own reality. Who knows! Like many other breakthroughs, this one will also come at a cost not known to us when it finally happens.

To keep things a bit lighter, I decided to convert some popular DNA sequences into audio files for us to listen to. I am not the first, nor will I be the last, to do this. But it is an interesting exercise in better understanding the relationship between art and science. Maybe listening to DNA instead of parsing it will find a way into better understanding, or at least enjoying the creation and cryptic nature of life.

## DNA encoding and primer example

I have explored DNA in the past, in my post from about 3 years ago, [Encoding binary data into DNA sequence](/encoding-binary-data-into-dna-sequence.html), where I was converting all sorts of data into DNA sequences.

This will be a similar exercise, but instead of converting to DNA, I will be generating tones from nucleotides.

| Nucleotides      | Note | Frequency |
| ---------------- | ---- | --------- |
| **A** (Adenine)  | A    | 440 Hz    |
| **C** (Cytosine) | C    | 523.25 Hz |
| **G** (Guanine)  | G    | 783.99 Hz |
| **T** (Thymine)  | D    | 587.33 Hz |

Since we do not have a T in the equal-tempered scale, I chose the note D to represent T.

You can check [Frequencies for equal-tempered scale, A4 = 440 Hz](https://pages.mtu.edu/~suits/notefreqs.html). For this tuning, we also assume `Speed of Sound = 345 m/s = 1130 ft/s = 770 miles/hr`.

Now that we have this out of the way, we can also brush up on DNA sequencing a bit. This is a famous quote I also used for the encoding tests, and it goes like this.

> How wonderful that we have met with a paradox. Now we have some hope of
> making progress.
> ― Niels Bohr

```shell
>SEQ1
GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA
GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA
ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA
ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT
GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT
GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC
AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC
AACC
```

This is what we are going to work with when creating the parser and the waveform generator.

## Parsing DNA data

This step is a rather simple one. All we need to do is parse the input DNA sequence in [FASTA format](https://en.wikipedia.org/wiki/FASTA_format), well known in [Bioinformatics](https://en.wikipedia.org/wiki/Bioinformatics), to extract single nucleotides that will be converted into separate tones based on the equal-tempered scale explained above.

```python
nucleotide_tone_map = {
    'A': 440,
    'C': 523.25,
    'G': 783.99,
    'T': 587.33,  # converted to D
}

def split(word):
    return [char for char in word]

def generate_from_dna_sequence(sequence):
    for nucleotide in split(sequence):
        print(nucleotide, nucleotide_tone_map[nucleotide])
```

## Generating sine wave

Because we are essentially creating a long stream of notes, we will be appending sine notes to a global array that we will later use to create a WAV file.

```python
import math

def append_sinewave(freq=440.0, duration_milliseconds=500, volume=1.0):
    global audio

    num_samples = duration_milliseconds * (sample_rate / 1000.0)

    for x in range(int(num_samples)):
        audio.append(volume * math.sin(2 * math.pi * freq * (x / sample_rate)))

    return
```

The sine wave generated here is the standard beep. If you want something more aggressive, you could try a square or sawtooth waveform.
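For instance, a square wave only changes the per-sample function — take the sign of the sine instead of its value. This is my own sketch, not code from the original scripts; `sample_rate` and `audio` mirror the globals used by `append_sinewave`:

```python
import math

sample_rate = 44100  # samples per second, same rate the WAV writer uses
audio = []           # accumulated samples, same role as in append_sinewave

def append_squarewave(freq=440.0, duration_milliseconds=500, volume=1.0):
    # Identical loop to append_sinewave, but every sample is clamped to
    # +volume or -volume, which adds the odd harmonics that make the
    # tone sound harsher.
    num_samples = duration_milliseconds * (sample_rate / 1000.0)
    for x in range(int(num_samples)):
        s = math.sin(2 * math.pi * freq * (x / sample_rate))
        audio.append(volume * math.copysign(1.0, s))

append_squarewave(freq=440.0, duration_milliseconds=100)
```

It drops into the rest of the pipeline unchanged, since the WAV writer only cares about the contents of `audio`.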
## Generating a WAV file from accumulated sine waves

```python
import wave
import struct

def save_wav(file_name):
    wav_file = wave.open(file_name, 'w')
    nchannels = 1
    sampwidth = 2

    nframes = len(audio)
    comptype = 'NONE'
    compname = 'not compressed'
    wav_file.setparams((nchannels, sampwidth, sample_rate, nframes, comptype, compname))

    for sample in audio:
        wav_file.writeframes(struct.pack('h', int(sample * 32767.0)))

    wav_file.close()
```

44100 Hz is the industry standard sample rate - CD quality. If you need to save on file size, you can adjust it downwards. The standard for low quality is 8000 Hz, or 8 kHz.

The WAV files here use short, 16-bit, signed integers for the sample size. So, we multiply the floating-point data we have by 32767, the maximum value for a short integer.

> It is theoretically possible to use the floating point -1.0 to 1.0 data
> directly in a WAV file, but not obvious how to do that using the wave module
> in Python.

## Generating Spectrograms

I tried two methods of doing this and both were just fine. However, I opted for [SoX - Sound eXchange, the Swiss Army knife of audio manipulation](https://linux.die.net/man/1/sox) because it didn't require anything else.

```shell
sox output.wav -n spectrogram -o spectrogram.png
```

An example spectrogram of Ludwig van Beethoven's Symphony No. 6, first movement.

![Ludwig van Beethoven Symphony No. 6 First movement](/assets/posts/dna-synthesized/symphony-no6-1st-movement.png){:loading="lazy"}

The other option could be in combination with [gnuplot](http://www.gnuplot.info/). This would require an intermediary step, however.

```shell
sox output.wav audio.dat
tail -n+3 audio.dat > audio_only.dat
gnuplot audio.gpi
```

And the input file `audio.gpi` that is passed to gnuplot looks something like this.
```txt
# set output format and size
set term png size 1000,280

# set output file
set output "audio.png"

# set y range
set yr [-1:1]

# we want just the data
unset key
unset tics
unset border
set lmargin 0
set rmargin 0
set tmargin 0
set bmargin 0

# draw rectangle to change background color
set obj 1 rectangle behind from screen 0,0 to screen 1,1
set obj 1 fillstyle solid 1.0 fillcolor rgbcolor "#ffffff"

# draw data with foreground color
plot "audio_only.dat" with lines lt rgb 'red'
```

## Pre-generated sequences

What I did was take interesting parts of an animal's genome and feed them to the tone generator script. This generated a WAV file, which I converted to MP3 so it can be played in a browser. The last step was creating a spectrogram based on the WAV file.

### Niels Bohr quote

![Spectrogram](/assets/posts/dna-synthesized/quote/spectogram.png){:loading="lazy"}

### Mouse

This is part of a mouse genome, `Mus_musculus.GRCm39.dna.nonchromosomal`. You can get the [genome data here](http://ftp.ensembl.org/pub/release-106/fasta/mus_musculus/dna/).

![Spectrogram](/assets/posts/dna-synthesized/mouse/spectogram.png){:loading="lazy"}

### Bison

This is part of a bison genome, `Bison_bison_bison.Bison_UMD1.0.cdna`. You can get the [genome data here](http://ftp.ensembl.org/pub/release-106/fasta/bison_bison_bison/cdna/).

![Spectrogram](/assets/posts/dna-synthesized/bison/spectogram.png){:loading="lazy"}

### Taurus

This is part of a taurus genome, `Bos_taurus.ARS-UCD1.2.cdna`. You can get the [genome data here](http://ftp.ensembl.org/pub/release-106/fasta/bos_taurus/cdna/).

![Spectrogram](/assets/posts/dna-synthesized/taurus/spectogram.png){:loading="lazy"}

## Making a drummer out of a DNA sequence

To make things even more interesting, I decided to send this data via MIDI to my [Elektron Model:Samples](https://www.elektron.se/en/model-samples).
This is a
-really cool piece of equipment that supports MIDI in via USB and a 3.5 mm audio
-jack.
-
-The Elektron is connected to my MacBook via a USB cable, and the audio out is
-patched to a Sony Bluetooth speaker I have that supports 3.5 mm audio in. The
-Elektron doesn't have internal speakers.
-
-![](/assets/posts/dna-synthesized/elektron/IMG_0619.jpg){:loading="lazy"}
-
-![](/assets/posts/dna-synthesized/elektron/IMG_0620.jpg){:loading="lazy"}
-
-![](/assets/posts/dna-synthesized/elektron/IMG_0622.jpg){:loading="lazy"}
-
-For communicating with the Elektron, I chose the `pygame` Python module, which
-has MIDI built in. With this, it was rather simple to send notes to the device.
-All I did was map MIDI notes to the actual nucleotides.
-
-Before all of this, I also opened the Audio MIDI Setup app under macOS and
-checked MIDI Studio by pressing ⌘-2.
-
-![](/assets/posts/dna-synthesized/elektron/midi-studio.jpg){:loading="lazy"}
-
-The whole script that parses and sends notes to the Elektron looks like this.
-
-```python
-import pygame.midi
-import time
-
-pygame.midi.init()
-
-print(pygame.midi.get_default_output_id())
-print(pygame.midi.get_device_info(0))
-
-player = pygame.midi.Output(1)
-player.set_instrument(2)
-
-def send_note(note, velocity):
-    global player
-    player.note_on(note, velocity)
-    time.sleep(0.3)
-    player.note_off(note, velocity)
-
-
-# MIDI note numbers have to fit in the 0-127 range.
-nucleotide_midi_map = {
-    'A': 60,
-    'C': 90,
-    'G': 110,
-    'T': 120,
-}
-
-with open("quote.fa") as f:
-    sequence = f.read().replace('\n', '')
-
-for nucleotide in sequence:
-    print("Playing nucleotide {} with MIDI note {}".format(
-        nucleotide, nucleotide_midi_map[nucleotide]))
-    send_note(nucleotide_midi_map[nucleotide], 127)
-
-del player
-pygame.midi.quit()
-```
-
-
-All of this could be made much more interesting if I chose different
-instruments for different nucleotides, or did more funky stuff with the
-Elektron. But for now, this should be enough. It is just a proof of concept.
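The tone generator script itself isn't shown in this post, but a minimal sketch of the idea — mapping each nucleotide to a fixed sine-wave frequency and writing the result with the same `wave`/`struct` approach as `save_wav` above — could look like this. The frequency map here is my own assumption for illustration, not the one used for the recordings above.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

# Hypothetical nucleotide -> frequency map (Hz); pick whatever sounds good.
NUCLEOTIDE_FREQ = {'A': 261.63, 'C': 329.63, 'G': 392.00, 'T': 440.00}

def tone(freq, duration=0.2, amplitude=0.8):
    """One sine tone as a list of floats in [-1.0, 1.0]."""
    n = int(SAMPLE_RATE * duration)
    return [amplitude * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

def sequence_to_audio(sequence):
    # Concatenate one short tone per nucleotide.
    audio = []
    for nucleotide in sequence:
        audio.extend(tone(NUCLEOTIDE_FREQ[nucleotide]))
    return audio

audio = sequence_to_audio("ACGT")

# Write 16-bit mono samples, scaling floats by 32767 as described earlier.
with wave.open("acgt.wav", "w") as wav_file:
    wav_file.setparams((1, 2, SAMPLE_RATE, len(audio), 'NONE', 'not compressed'))
    for sample in audio:
        wav_file.writeframes(struct.pack('h', int(sample * 32767.0)))
```

Feeding it a longer FASTA sequence instead of the hard-coded `"ACGT"` would reproduce the kind of four-note drones described above.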
Something to
-play around with.
-
-## Going even further
-
-As you probably noticed, the end results are quite similar to each other. This
-is to be expected, because we are essentially operating with only 4 notes. What
-could make this more interesting is using something like
-[SuperCollider](https://supercollider.github.io/) to create more interesting
-sounds, by transposing notes or using effects based on repeated data in a
-sequence. The possibilities are endless.
-
-It is really astonishing what can be achieved with a little bit of code and an
-idea. I could see this becoming an interesting background soundscape instrument
-if done properly. It could replace a random note generator with something more
-intriguing, biological, natural.
-
-I actually find the results fascinating. I took some time and listened to this
-music of nature. Even though it's quite the same, it's also quite different.
-The subtle differences on repeat kind of create music on their own. Makes you
-wonder. It kind of puts Occam’s Razor in its place. Nature for sure loves to
-make things as energy efficient as possible.
diff --git a/_posts/2022-08-13-algae-spotted-on-river-sava.md b/_posts/2022-08-13-algae-spotted-on-river-sava.md
deleted file mode 100644
index 02314f4..0000000
--- a/_posts/2022-08-13-algae-spotted-on-river-sava.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Aerial photography of algae spotted on river Sava
-permalink: /aerial-photography-of-algae-spotted-on-river-sava.html
-date: 2022-08-13T12:00:00+02:00
-layout: post
-type: note
-draft: false
----
-
-This is a bit of a different post than I usually write, but a quite interesting
-one to me. The river Sava has plenty of hydropower plants located downstream.
-This makes regulating the strength of the current easier than normal. Because of
-the lower stream strength and high temperatures, algae have formed on the river.
-This is the first time I've seen something like this in my whole life.
-
-Below are some photographs taken from a DJI drone capturing the event.
-
-![Algae on Sava](/assets/posts/algae-sava/dji-algae-0.jpg){:loading="lazy"}
-
-![Algae on Sava](/assets/posts/algae-sava/dji-algae-1.jpg){:loading="lazy"}
-
-![Algae on Sava](/assets/posts/algae-sava/dji-algae-2.jpg){:loading="lazy"}
-
-![Algae on Sava](/assets/posts/algae-sava/dji-algae-3.jpg){:loading="lazy"}
-
-![Algae on Sava](/assets/posts/algae-sava/dji-algae-4.jpg){:loading="lazy"}
-
-![Algae on Sava](/assets/posts/algae-sava/dji-algae-5.jpg){:loading="lazy"}
-
-I will try to get more photos of this in the coming days, and if something
-intriguing shows up, I will post it again on the blog.
diff --git a/_posts/2022-10-06-state-of-web-technologies-in-year-2022.md b/_posts/2022-10-06-state-of-web-technologies-in-year-2022.md
deleted file mode 100644
index e7c8d62..0000000
--- a/_posts/2022-10-06-state-of-web-technologies-in-year-2022.md
+++ /dev/null
@@ -1,297 +0,0 @@
----
-title: State of Web Technologies and Web development in year 2022
-permalink: /state-of-web-technologies-and-web-development-in-year-2022.html
-date: 2022-10-06T12:00:00+02:00
-layout: post
-type: post
-draft: false
----
-
-## Initial thoughts
-
-*This post is a critique of the current state of web development. It is an
-opinionated post! I will learn more about this in the future, and probably
-slightly change my mind about some of the things I criticize.*
-
-I started working on a hobby project about two weeks ago, and I wanted to
-use it as a learning opportunity. Trying new things, new technologies, new
-tools. I have always considered myself to be an adventurous person when it comes
-to technology. I never shy away from trying new languages, new operating
-systems, etc. Likewise, I find the whole experience satisfying, and it tickles
-that part of my brain that finds discovery the highest of mountains to climb.
-
-What I always wanted to make was a coding game that you would play in a browser
-(just to eliminate building binaries for each operating system), where you would
-level up your character and go into these scriptable battles. You know, RPG
-elements.
-
-So, the natural way to go would be some sort of SPA (single page application)
-with basic routing and some state management. Nothing crazy.
-
-> **Before we move on**, I have to be transparent. Take my views on this with
-> a grain of salt. I have only scratched the surface with these technologies,
-> and my knowledge is full of gaps. This is my experience using some of these
-> products for the first time or in a limited capacity.
-
-With this out of the way, I got myself a fresh pot of coffee, and down the
-rabbit hole I went.
-
-## Giving React JS a spin
-
-I first tried [React JS](https://reactjs.org/). I kind of like it. I have
-worked with libraries like this in the past and have also written a couple of
-them (nothing compared to that level), so I had a basic understanding of what
-was going on. I rolled up a project quickly and had basic things done in a
-matter of two hours, which was impressive.
-
-I prefer using [Tailwind CSS](https://tailwindcss.com/) for my styling
-pleasures, and integrating that was also a painless experience. It was actually
-nice to see that some things got better with time. In about 2 minutes I got
-Tailwind working, and I was able to use its classes at my disposal. All that
-`postcss` stuff was taken care of by adding a couple of things to config files
-(all described really well in their documentation).
-
-It is not that different from Vue, which I have had more encounters with in the
-past. People will probably call me a lunatic for saying this. But you know, it
-is the truth. Same same, but different. I still believe that using libraries
-like this is beneficial. I am not a JavaScript purist.
They all have their quirks,
-but at the end of the day, I truly believe it’s worth it.
-
-## Bundlers and Transpilers
-
-I still reject calling [Typescript](https://www.typescriptlang.org/) to
-[JavaScript](https://www.javascript.com/) conversion a "compilation process". I
-call them [transpilers](https://devopedia.org/transpiler), and I don’t care! 😈
-
-The first one that I ever used was [webpack](https://webpack.js.org/), and it
-was an absolutely horrific experience. That said, it is an absolutely fantastic
-tool. I just felt more like a config editor than an actual programmer. To be
-fair, I am a huge fan of [make](https://www.gnu.org/software/make/), and you can
-do as you wish with this information. I like my build systems simple.
-
-Also, isn’t it interesting that we need something like
-[Babel](https://babeljs.io/) to make JavaScript code work in a browser that has
-only one client-side scripting language available, which is by no accident also
-JavaScript. Why? I know why it’s needed, but seriously, why.
-
-I haven’t used Babel for years now. Or if I did, it was packaged together by
-some other bundler thingy. Which does not make things better, but at least I
-didn’t need to worry about it.
-
-I really don’t like complicated build systems. I really don’t like abstracting
-code and making things appear magical. The older I get, the more I appreciate
-clear, clean, expressive code. No one-liners, if possible.
-
-But I have to give props to [Vite](https://vitejs.dev/)! This was one of the
-best developer experiences I have ever had. Granted, it still has magical
-properties. And yes, it still is a bundler and abstracts things to the nth
-degree. But at least it didn’t force me to configure 700 lines of JSON. And I
-know that this makes me a hypocrite. You can’t have it all. Nonetheless, my
-reasoning here is, if using bundlers is inevitable, then at least they should
-provide an excellent developer experience.
-
-I also noticed that the catch-all phrases are now “blazingly fast” and
-“lightning fast” and “next generation” and stuff like that. I mean, yeah, tools
-should get faster with time. But claiming that a project now starting in 2
-seconds instead of 20 is some make-or-break kind of deal is ridiculous. I don’t
-mind waiting a couple of seconds every couple of days. I also don’t create 700
-projects every day, and who does? This argument has no bite. All I want is a
-decent reload time (~100ms is more than good enough for me) and that is it.
-
-You don’t need to sell me benefits if I only get them when I start a fresh
-project, and then try to convince me that this is somehow changing the fate of
-the universe. First of all, it is not. And second, if this is your only argument
-for your tool, I would advise you to maybe re-focus your efforts on something
-else. Vite says that startup times are really fast. And if that were the
-only thing differentiating it from other tools, I would ignore it. But it has
-some really compelling features like [Hot Module
-Replacement](https://www.geeksforgeeks.org/reactjs-hot-module-replacement/) that
-really work well. It was a joy to use.
-
-So, I will definitely be using Vite in the future.
-
-## Jam Stack, Mach Stack, no snack
-
-Let's get a couple of the acronyms out of the way, so we all know what we are
-talking about:
-
-- Jam Stack - JavaScript, APIs, and Markup
-- Mach Stack - Microservices, API-first, Cloud-Native SaaS, Headless
-
-It is so hard to follow all the new trendy things happening around you that
-it gives you massive **FOMO** all the time. But on the other hand, you
-also don’t want to be that old fart that doesn’t move with the times and still
-writes his trusty jQuery code while listening to Blink-182’s “All the Small
-Things” on full blast. It’s a good song, don’t get me wrong, but there are other
-songs out there.
-
-I have to admit,
[Vercel](https://vercel.com/) is really cool! I love the
-simplicity of the service. You could compare it to
-[Netlify](https://www.netlify.com/). I haven’t tried Netlify extensively, but
-from a couple of experimental deployments I still prefer Vercel. It is much more
-streamlined, but maybe that is my bias. I really like Vercel’s Analytics,
-which gives you a [Core Web Vitals report](https://web.dev/vitals/) in the
-admin console. Kind of cool, I’m not going to lie.
-
-This whole idea about frontend and backend merging into [SSR (server-side
-rendering)](https://www.debugbear.com/blog/server-side-rendering) looks so good
-on paper. It almost doesn’t come with any major flaws.
-
-But when it comes to the actual implementation, there is much to be desired.
-I’m going to lump [Next.js](https://nextjs.org/) and
-[Nuxt.js](https://nuxtjs.org/) together because they are essentially the same
-thing, just with a different library.
-
-Now comes the reality. Mixing backend and frontend in this manner creates this
-weird mental model where you kind of rely on the magical properties of these
-libraries. You relinquish control to them for a better developer experience.
-But is that really true? Initially, I was so stoked about it. However, the more
-I used them, the more uncomfortable I felt. I felt dirty, actually. Maybe this
-is because I come from the old ways of doing things, where you control every
-step of a request, and allowing something to hijack it feels like blasphemy.
-
-More than that, some pretty significant technical issues arose from this. How do
-you do JWT token authentication? You put it in the `api` folder and then do some
-fetching and storing in local state management. But doing this also requires
-some tinkering with await/async stuff on the React/Vue side of things. And then
-you need to write middleware for it. And the more I look at it, the more I see
-that this whole thing was not meant to be used like this, and it all feels and
-looks like a huge hack.
-
-The issue I have with this is that they over-promise and under-deliver. They
-want to be an all-in-one replacement for everything, and they don’t deliver on
-this promise. And how could they?! We have to be fair. It is an impossible task.
-
-They sell you [NoOps](https://www.geeksforgeeks.org/overview-of-noops/), but
-when you need to accomplish something a little bit outside the scope of
-Hello World, you have to make hacky decisions to make it work. And having a
-deployment strategy that relies on many moving parts is never a good idea.
-Abstracting too much is usually a sign of bad architecture.
-
-Lately, this has become a huge trend that will for sure bite us in the future.
-And let’s not get it twisted. By doing this, PaaS providers like
-[AWS](https://aws.amazon.com/), [GCS](https://cloud.google.com/), etc. obscure
-their billing, and you end up paying more than you really should. And even if
-that is not an issue, it comes down to the principle of things. AWS is known for
-having multiple “currencies“ inside their projects, like write operations, read
-operations, etc., which add up, and it creates this impossible-to-track billing
-scheme. It all behaves suspiciously like a pay-to-win game you could find on
-mobile phones that scams you out of your money.
-
-And as far as I am concerned, the most important thing was that I was not coding
-the functionality for the game I want to make. I was battling libraries and
-cloud providers. How to deploy, what settings are relevant. Bad documentation,
-or multiple versions of achieving the same thing. You are getting bombarded by
-all this information, and you don’t really have any control over it.
-Production-ready code becomes a joke, essentially. Especially if you tend to
-work on that project for a prolonged period of time.
-
-All of these options end up creating fatigue. What to choose, what not to
-choose. Unnecessary worrying about whether the stack will still be deemed worthy
-in six months.
There is elegance in simplicity.
-
-> JavaScript UI frameworks and libraries work in cycles. Every six months or
-> so, a new one pops up, claiming that it has revolutionized UI development.
-> Thousands of developers adopt it into their new projects, blog posts are
-> written, Stack Overflow questions are asked and answered, and then a newer
-> (and even more revolutionary) framework pops up to usurp the throne.
-> — Ian Allen
-
-And this jab at these libraries and cloud providers is not done out of malice.
-It is a real concern that I have about them. In my life, I have seen
-technologies come and go, but the basics always stick around. So surrendering
-all the power you have to a library or a cloud provider is, in my opinion, a
-stupid move.
-
-## Tailwind CSS still rocks!
-
-You know, many people say negative things about Tailwind. And after a lot of
-deliberation, I came to the conclusion that Tailwind is good for two types of
-developers: a complete noob or a senior developer. A complete noob doesn’t
-really care about the inner workings of CSS, and a senior developer also
-doesn’t care about CSS. Well, at least, not anymore. And developers in between
-usually have the biggest issues with it. Not always, of course, but in a lot of
-cases.
-
-I like the creature comforts of Tailwind. Being utility-first would make me
-argue that it is actually more similar to [Sass](https://sass-lang.com/) or
-[Less](https://lesscss.org/) than to something like Bootstrap. Not technically,
-but ideologically. After I started using it, I never looked back. I use it every
-time I need to do something web related.
-
-Writing CSS for general things feels like going several steps back. Instead of
-focusing on what you are actually trying to achieve, you focus on notations like
-[BEM](https://en.bem.info/methodology/css/), code structuring, and optimizing
-HTML size. Just doing things that make a 0.1% difference. You know that saying:
-premature optimization is the root of all evil.
Exactly that.
-
-I am also not saying that Tailwind is the cure for everything. Sometimes custom
-CSS is necessary. But from what I found out using it for almost two years in
-a production environment (on a site getting quite a lot of traffic and
-constantly being changed), I can say without any reservations that Tailwind
-saved our asses countless times. We would be rewriting CSS all the time without
-it. And I don’t really think writing CSS is the best way to spend my time.
-
-I have also noticed that the people who criticize Tailwind the most have never
-actually used it in a real project with a long lifetime and plenty of changes
-that will happen in the future.
-
-But you know, whatever floats your boat!
-
-## Code maintainability
-
-Somehow, people have also stopped talking about maintenance. If you constantly
-try to catch the latest and greatest train, you are by that logic always trying
-new things. Which is a good thing if you want to learn about technologies and
-try them. But for a production environment, you have to have a stable stack that
-doesn’t change every 6 months.
-
-You can lock dependencies, for sure. Nevertheless, the hype train moves along
-anyway. And the mindset this breeds goes against locking the code. This
-bleeding-edge rolling release cycle is not helping. That is why enterprise
-solutions usually look down on these popular stacks and only do the bare minimum
-to appear hip and cool.
-
-With that said, I still think that progress is good, but it should be taken with
-a grain of salt. If your project is something that should be built once and then
-rarely updated, going with the latest stack is a possible way to go. But if you
-are working on a project that lasts for years, you should probably approach it
-with some level of caution. Web development is oftentimes too volatile.
-
-## Web development has a marketing issue
-
-I noticed that almost every project now has this marketing spin put on it.
-Everything is blazingly fast now.
I get it, they are competing for your
-attention, but what happened to just being truthful and not inflating reality?
-
-And in order to appeal to the mass market, they leave things out of their
-marketing materials. These open-source projects are now behaving more and more
-like companies do. Which is a scary thought in itself.
-
-We are also seeing a rise in the concept of building a company in the open,
-which is a good thing, don't get me wrong. But when open-source is used to
-lure people in and then lock them into an ecosystem, that is where I have issues
-with it.
-
-This might be because I have been using GNU/Linux for 20 years now and owe
-so much of my success to open-source that I see issues when open-source is
-being used to trick people into a false sense of security that these projects
-are built in the spirit of open-source. Because there is a difference. They are
-NOT! They have a really specific goal in mind. And open-source is being used
-as a delivery system. Which is, in my opinion, disgusting!
-
-## Conclusion
-
-I will end my post with this. Web development is now running in circles. People
-are discovering [RPC](https://www.tutorialspoint.com/remote-procedure-call-rpc)
-now, and this is now the next big thing. [GraphQL](https://graphql.org/) is
-so passé. And I am so tired of it all. Of blazingly fast libraries, of all these
-new technologies that are actually just remakes of old ones. Of just the
-general spirit of the web. I will just use what I already know. Which worked 10
-years ago and will work 10 years after this. I will adopt a couple of little
-tools like Vite. But I will not waste my time on this anymore.
-
-It was a good exercise to get in touch with what’s new. Nothing really
-changed that much. My FOMO is now cured! Now I have to get my ass back to
-actually coding and making the project that I wanted to make in the first place.
diff --git a/_posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md b/_posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md
deleted file mode 100644
index 7b019e9..0000000
--- a/_posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title: Microsoundtrack — That sound that machine makes when struggling
-permalink: /that-sound-that-machine-makes-when-struggling.html
-date: 2022-10-16T12:00:00+02:00
-layout: post
-type: post
-draft: false
----
-
-A couple of months ago, I got an idea about micro soundtracks. In this concept,
-you are the observer, director, and audience in these tiny movies.
-
-What you do is attempt to imagine what would be happening around you based on
-the title of the song, and let the song help you fill the void in your story.
-
-I made these songs in Logic Pro X. Every year or so I do this kind of thing and
-make a couple of songs similar to this. But this is the first time I am posting
-about it.
-
-You can listen to the whole set on
-[YouTube](https://www.youtube.com/watch?v=_5oXBhSmF3c) or scroll down the page
-to the embedded players for each song.
-
-## A bunch of inter-dimensional people with loud clocks
-
-A group of inter-dimensional people are going up and down the elevator with you
-while wearing loud clocks around their necks. Each clock ticks at a different
-frequency. A lot of other sounds are getting drawn into your dimension,
-resulting in a strange merging of dimensions.
-
-
-## Two black holes conversing about the weather
-
-You are a traveler in a spaceship flying very close to two colliding black holes
-having a discussion about the weather while tearing each other apart. During all
-this, your ship is getting pulled into the event horizon of both black holes,
-putting a lot of strain on your spaceship.
-
-
-
-## A planet where every organism is a plant
-
-You land on a planet where every living organism is a plant, and among those
-plants some are highly intelligent. You were asked to make first
-contact with the native species. Your visit takes place in a giant cave where
-you are meeting these plants, and they are talking to you.
-
-
-## Bio implants having a fit and reprogramming your brain
-
-In a distant future where everybody has bio implants, you have just received
-your first one, which happens to be a brain implant. Something goes wrong,
-your implant starts to misbehave, and you are experiencing brain
-malfunctions. You are on the streets at night, a couple of hours after your
-procedure. You can feel your sanity breaking down.
-
-
-## Cow animation
-
-I also made this little cow animation. Go into full screen to see the effects in
-more detail.
-
-
diff --git a/_posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md b/_posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md
deleted file mode 100644
index ced58bb..0000000
--- a/_posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md
+++ /dev/null
@@ -1,254 +0,0 @@
----
-title: Trying to build a new kind of terminal emulator for the modern age
-permalink: /trying-to-build-a-new-kind-of-terminal-emulator.html
-date: 2023-01-26T12:00:00+02:00
-layout: post
-type: post
-draft: false
----
-
-Over the past few weeks, I have been thinking a lot about terminal emulators,
-how we interact with computers, and the separation between text-based programs
-and GUI ones. To be perfectly honest, I got pissed off one evening when I was
-cleaning up files on my computer. Normally, I go into the console and run `ncdu`
-to check where the junk is. Then I start deleting stuff. Without any
-discrimination, usually. But when it comes to screenshots, I have learned that
-it's good to keep them somewhere near in case I need to refer to something that
-I was doing.
I am an
-avid screenshot taker. So at that point I checked the Pictures folder and also
-did a basic search, `find . -type f -name "*.jpg"`, for all the JPEG files in my
-home directory, and immediately got pissed off. Why can’t I see thumbnails in my
-terminal? I know why, but why is this still a problem in the year 2022? I am
-used to traversing my disk via the terminal. I am faster, and I am more
-comfortable this way. But when it comes to visualization, I then need to revert
-to GUI applications and find the same file again to see it. I know that programs
-like `feh` and `sxiv` are available, but I would just like to see the preview.
-Like a [Jupyter notebook](https://jupyter.org/) or something similar. Just
-having it inline. Part of a result.
-
-It also didn’t help that I was spending some time with the [Plan
-9](https://plan9.io/plan9/) operating system. More specifically,
-[9FRONT](http://9front.org/). The way that the [ACME
-editor](http://acme.cat-v.org/) handles text editing is just wonderful.
-Different and fresh somehow, even though it’s super old.
-
-So, I went on the lookout for an interesting way of visualizing the results of
-some query. I found these applications to be outstanding examples of how not to
-be a captive of a predetermined way of doing things.
-
-- [Wolfram Mathematica](https://www.wolfram.com/mathematica/)
-- [Jupyter notebooks](https://jupyter.org/)
-- [Plan 9 / 9FRONT](http://www.9front.org)
-- [Temple OS](https://templeos.org/)
-- [Emacs](https://www.gnu.org/software/emacs/)
-
-My idea is not as out there as ACME is, but it is a spin on terminal
-emulators. I like the modes that Vi/Vim provides you with. I like the way
-Emacs does its own `M-x` `M-c`. Furthermore, I really like how Mathematica and
-Jupyter present data in a free-flowing form. And I love how Temple OS is
-basically a C interpreter on some level.
-
-> **Note:** This is part 1 of the journey. Nowhere near finished yet. I am just
-> tinkering with this at the moment.
This whole thing could easily
-> fail spectacularly.
-
-So I started. I knew that I wanted to have a couple of modes, but I didn’t
-like the repetition of keystrokes, so the only option was to have some sort of
-toggle and indicate to the user that they are in a special mode. Like Vi does
-for Normal and Visual mode.
-
-For the first version, these modes would be:
-
-- *Preview mode* (toggle with Ctrl + P)
-  - When this mode is enabled, the `ls` command would try to find images
-    among the results and display thumbnails of them in the terminal itself.
-    No ASCII art. Proper images. In a grid!
-- *Detach mode* (toggle with Ctrl + D)
-  - When this mode is enabled, every command would open a new window
-    and execute that command in it. This would be useful for starting `htop`
-    in a separate window.
-
-The reason for making these modes togglable is to not ask for previews every
-time. You enable a mode, and until you disable it, it behaves that way. Purely
-for ergonomic reasons.
-
-Mentally, I would like to treat every terminal I open as a session. When I start
-using the terminal, I start digging deeper into the issue I am trying to
-resolve. And while I am doing this, I would like to open detached windows,
-etc. A lot of these things can be done easily with something like
-[i3](https://i3wm.org/), but those also pull you out of the context of what you
-were doing. I would like to orchestrate everything from one single point.
-
-In planning this project, I knew that I would need to use a language like C
-and a library such as [SDL2](https://www.libsdl.org/) in order to achieve the
-desired results. I considered other options, but ultimately determined that
-[SDL2](https://www.libsdl.org/) was the best fit based on its capabilities and
-reputation in the programming community.
-
-At first, I thought the idea of a hardware-accelerated terminal was a bit of a
-joke.
It seemed like such a niche and unnecessary feature, especially given the
-fact that terminal emulators have been around for decades and have always relied
-on software rendering. But to be fair, [Alacritty](https://alacritty.org/) is
-doing the same thing. Well, they are doing a remarkable job at it.
-
-So, I embarked on a journey. Everything has to start somewhere. For me, it
-started with creating a window! 🙂
-
-```c
-// Oh, Hi Mark!
-// Create the window, obviously.
-SDL_Window *window = SDL_CreateWindow(
-    WINDOW_TITLE, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
-    WINDOW_WIDTH, WINDOW_HEIGHT,
-    SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
-```
-
-I continued like this to get some text displayed on the screen.
-
-I noted that
-[`TTF_RenderText_Solid`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_Solid)
-rendered text really poorly. There was no antialiasing at all. In my wisdom, I
-never checked the documentation. Well, that was a fail. For the uneducated like
-me: `TTF_RenderText_Solid` renders Latin1 text at fast quality to a new 8-bit
-surface. So, that's why the text looked like shit. No wonder.
-
-Remarks on `TTF_RenderText_Solid`: This function will allocate a new 8-bit,
-palettized surface. The surface's 0 pixel will be the colorkey, giving a
-transparent background. The 1 pixel will be set to the text color.
-
-After I replaced it with
-[`TTF_RenderText_LCD`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_LCD),
-which renders Latin1 text at LCD subpixel quality to a new ARGB surface, the
-text started looking good. Really make sure you read the documentation. It’s
-actually good. As a side note, you can find all the documentation regarding
-[SDL2 on their Wiki](https://wiki.libsdl.org/).
-
-After that was done, I started working on displaying other things, like the
-`Preview` and `Detach` modes. This wasn’t really that hard.
In SDL2 you can check all the
-available events with `while (SDL_PollEvent(&event) > 0)` and have a bunch of
-switch statements to determine which key is currently being pressed. More about
-keys at [SDLKey](https://documentation.help/SDL/sdlkey.html), and more about
-polling the events at
-[SDL_PollEvent](https://documentation.help/SDL/sdlpollevent.html).
-
-```c
-while (SDL_PollEvent(&event) > 0)
-{
-    switch (event.type)
-    {
-    case SDL_QUIT:
-        running = false;
-        break;
-
-    case SDL_TEXTINPUT:
-        if (!meta_key_pressed)
-        {
-            strncat(input_prompt_text, event.text.text, 1);
-            update_input_prompt = true;
-        }
-        break;
-    }
-}
-```
-
-After that was somewhat working correctly, I started creating a struct that
-holds a command and its result. I call these Cells. Yes, I stole that
-naming idea from Jupyter.
-
-```c
-typedef struct
-{
-    char *command;
-    char *result;
-    SDL_Surface *surface;
-    SDL_Texture *texture;
-    SDL_Rect rect;
-} Cell;
-```
-
-I am at a place now where I am starting to implement scrolling. This will for
-sure be fun to code. Memory management in C is super easy. 😂
-
-I have also added simple [INI-file-like
-configuration](https://en.wikipedia.org/wiki/INI_file) support. It is done in an
-[STB style of
-header](https://github.com/nothings/stb/blob/master/docs/stb_howto.txt) and maps
-to specific options supported by the terminal. It is not universal, and the code
-below demonstrates how I will use it in the future.
-
-```c
-#ifndef CONFIG_H
-#define CONFIG_H
-
-#include <stdio.h>
-#include <string.h>
-
-/*
-# This is a comment
-
-# This is the first configuration option
-dettach=value11111
-
-# This is the second configuration option
-preview=value22222
-
-# This is the third configuration option
-debug=value33333
-*/
-
-// Define a struct to hold the configuration options
-typedef struct
-{
-    char dettach[256];
-    char preview[256];
-    char debug[256];
-} Config;
-
-// Read the configuration file and return the options as a struct
-static Config read_config_file(const char *filename)
-{
-    // Create a struct to hold the configuration options
-    Config config = {0};
-
-    // Open the configuration file
-    FILE *file = fopen(filename, "r");
-    if (file == NULL)
-        return config;
-
-    // Read each line from the file
-    char line[256];
-    while (fgets(line, sizeof(line), file))
-    {
-        // Check if this line is a comment or empty
-        if (line[0] == '#' || line[0] == '\n')
-            continue;
-
-        // Parse the line to get the option and value
-        char option[128], value[128];
-        if (sscanf(line, "%127[^=]=%127s", option, value) != 2)
-            continue;
-
-        // Set the value of the appropriate option in the config struct
-        if (strcmp(option, "dettach") == 0)
-        {
-            strncpy(config.dettach, value, sizeof(config.dettach) - 1);
-        }
-        else if (strcmp(option, "preview") == 0)
-        {
-            strncpy(config.preview, value, sizeof(config.preview) - 1);
-        }
-        else if (strcmp(option, "debug") == 0)
-        {
-            strncpy(config.debug, value, sizeof(config.debug) - 1);
-        }
-    }
-
-    // Close the configuration file
-    fclose(file);
-
-    // Return the configuration options
-    return config;
-}
-
-#endif
-```
-
-This is as far as I managed to get for now. I have a day job and that keeps me
-from working on these things full time. But I should probably get back and
-finish this. At least get a simple version working so I can start testing it on
-my machines. Fingers crossed.
🕵️‍♂️
-
diff --git a/_posts/2023-05-01-cachebusting-in-hugo.md b/_posts/2023-05-01-cachebusting-in-hugo.md
deleted file mode 100644
index f8d92b2..0000000
--- a/_posts/2023-05-01-cachebusting-in-hugo.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Cache busting in Hugo
-permalink: /cachebusting-in-hugo.html
-date: 2023-05-01T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [hugo]
----
-
-```html
-\{\{ $cachebuster := delimit (shuffle (split (md5 "6fab11c6669976d759d2992eff1dd5be") "" )) "" \}\}
-
-
-```
-
-This `6fab11c6669976d759d2992eff1dd5be` can be any random string you generate.
-You can use whatever you want.
diff --git a/_posts/2023-05-05-run-9front-in-qemu.md b/_posts/2023-05-05-run-9front-in-qemu.md
deleted file mode 100644
index 853b2c1..0000000
--- a/_posts/2023-05-05-run-9front-in-qemu.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Run 9front in Qemu
-permalink: /run-9front-in-qemu.html
-date: 2023-05-05T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [plan9, qemu]
----
-
-Run 9front in Qemu. This applies to [Plan9](https://9p.io/plan9/) and
-[9front](https://9front.org/).
-
-Download from here http://9front.org/iso/.
-
-```sh
-# Create a qcow2 image.
-qemu-img create -f qcow2 $HOME/VM/9front.qcow2.img 30G
-
-# Run the VM.
-qemu-system-x86_64 -cpu host -enable-kvm -m 1024 \
-    -net nic,model=virtio,macaddr=52:54:00:00:EE:03 -net user \
-    -device virtio-scsi-pci,id=scsi \
-    -drive if=none,id=vd0,file=$HOME/VM/9front.qcow2.img \
-    -device scsi-hd,drive=vd0 \
-    -drive if=none,id=vd1,file=$HOME/VM/ISO/9front.386.iso \
-    -device scsi-cd,drive=vd1,bootindex=0
-```
-
diff --git a/_posts/2023-05-06-git-push-multiple-origins.md b/_posts/2023-05-06-git-push-multiple-origins.md
deleted file mode 100644
index ce7e64b..0000000
--- a/_posts/2023-05-06-git-push-multiple-origins.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Push to multiple origins at once in Git
-permalink: /git-push-multiple-origins.html
-date: 2023-05-06T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [git]
----
-
-Sometimes you want to push to multiple origins at once. This is useful if you
-have a mirror of your repository on another server. You can do this by adding
-multiple push URLs to your git config. Alternatively, the alias below iterates
-over all remotes and pushes to each one.
-
-```sh
-git config --global alias.pushall '!sh -c "git remote | xargs -L1 git push --all"'
-```
-
diff --git a/_posts/2023-05-07-mount-plan9-over-network.md b/_posts/2023-05-07-mount-plan9-over-network.md
deleted file mode 100644
index ad68e80..0000000
--- a/_posts/2023-05-07-mount-plan9-over-network.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: Mount Plan9 over network
-permalink: /mount-plan9-over-network.html
-date: 2023-05-07T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [plan9]
----
-
-- First install libfuse with `sudo apt install libfuse-dev`.
-- Then clone https://github.com/ftrvxmtrx/9pfs and compile it with `make`.
-- Copy `9pfs` to your path.
- -```sh -# On Plan9 side -ip/ipconfig # enables network -aux/listen1 -tv tcp!*!9999 /bin/exportfs -r tmp # export tmp folder - -# On Linux side -9pfs 172.18.0.1 -p 9999 local_folder # mount -umount local_folder # unmount -``` - diff --git a/_posts/2023-05-08-write-iso-usb.md b/_posts/2023-05-08-write-iso-usb.md deleted file mode 100644 index 9c0e9fb..0000000 --- a/_posts/2023-05-08-write-iso-usb.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Write ISO to USB Key -permalink: /write-iso-usb.html -date: 2023-05-08T12:00:00+02:00 -layout: post -type: note -draft: false -tags: [linux] ---- - -Write ISO to USB key. Nothing fancy here. - -```sh -sudo dd if=iso_file.iso of=/dev/sdX bs=4M status=progress conv=fdatasync -``` - diff --git a/_posts/2023-05-09-catv-weechat-config.md b/_posts/2023-05-09-catv-weechat-config.md deleted file mode 100644 index 78d0907..0000000 --- a/_posts/2023-05-09-catv-weechat-config.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: "#cat-v on weechat configuration" -permalink: /catv-weechat-config.html -date: 2023-05-09T12:00:00+02:00 -layout: post -type: note -draft: false -tags: [irc] ---- - -Set up weechat to connect to #cat-v on oftc. This applies to -[weechat](https://weechat.org/) but should be similar for other irc clients. - -```sh -# Install weechat and launch it and execute the following commands. - -/server add oftc irc.oftc.net -tls -/set irc.server.oftc.autoconnect on -/set irc.server.oftc.autojoin "#cat-v" -/set irc.server.oftc.nicks "nick1,nick2,nick3" -``` - diff --git a/_posts/2023-05-10-plan9-screenshot.md b/_posts/2023-05-10-plan9-screenshot.md deleted file mode 100644 index 5aa11bf..0000000 --- a/_posts/2023-05-10-plan9-screenshot.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Take a screenshot in Plan9 -permalink: /plan9-screenshot.html -date: 2023-05-10T12:00:00+02:00 -layout: post -type: note -draft: false -tags: [plan9] ---- - -Take a screenshot in Plan9. 
This applies to [Plan9](https://9p.io/plan9/) and
-[9front](https://9front.org/). The current contents of the screen are exposed
-at `/dev/screen`; piping that file through `topng` converts it to a PNG image.
-
-```sh
-# Instant screenshot.
-cat /dev/screen | topng > screen.png
-
-# Delayed screenshot (5 seconds).
-sleep 5; cat /dev/screen | topng > screen.png
-```
-
diff --git a/_posts/2023-05-11-fix-plan9-bootloader.md b/_posts/2023-05-11-fix-plan9-bootloader.md
deleted file mode 100644
index de030c9..0000000
--- a/_posts/2023-05-11-fix-plan9-bootloader.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Fix bootloader not being written in Plan9
-permalink: /fix-plan9-bootloader.html
-date: 2023-05-11T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [plan9]
----
-
-If the bootloader is not being written to a disk when installing 9front on real
-hardware, try clearing the first sector of the disk with the following command.
-
-```sh
-dd if=/dev/zero of=/dev/sdX bs=512 count=1
-
-# If the command above doesn't work, try this one, wait a couple of seconds,
-# and press the Delete key to stop the command.
-cat /dev/sd*/data
-```
-
diff --git a/_posts/2023-05-12-install-plan9port-linux.md b/_posts/2023-05-12-install-plan9port-linux.md
deleted file mode 100644
index c1cce46..0000000
--- a/_posts/2023-05-12-install-plan9port-linux.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: Install Plan9port on Linux
-permalink: /install-plan9port-linux.html
-date: 2023-05-12T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [plan9]
----
-
-Install Plan9port on Linux. This applies to
-[Plan9port](https://9fans.github.io/plan9port/). This is a port of many Plan 9
-programs to Unix-like operating systems. Useful for programs like `9term` and
-`rc`.
-
-```sh
-sudo apt-get install gcc libx11-dev libxt-dev libxext-dev libfontconfig1-dev
-git clone https://github.com/9fans/plan9port $HOME/plan9
-cd $HOME/plan9
-./INSTALL -r $HOME/plan9
-```
-
diff --git a/_posts/2023-05-13-download-youtube-videos.md b/_posts/2023-05-13-download-youtube-videos.md
deleted file mode 100644
index 9ed8221..0000000
--- a/_posts/2023-05-13-download-youtube-videos.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: Download list of YouTube files
-permalink: /download-youtube-videos.html
-date: 2023-05-13T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [youtube]
----
-
-If you need to download a list of YouTube videos and don't want to download an
-actual YouTube playlist (which `yt-dlp` supports), you can use the following
-method.
-
-```js
-// Used to get the list of raw URLs from a YouTube channel's Videos tab.
-// Copy them into videos.txt.
-document.querySelectorAll('#contents a.ytd-thumbnail.style-scope.ytd-thumbnail').forEach(el => console.log(el.href))
-```
-
-Download and install https://github.com/yt-dlp/yt-dlp.
-
-```sh
-# This will download all videos in videos.txt.
-yt-dlp --batch-file videos.txt -N `nproc` -f webm
-```
-
diff --git a/_posts/2023-05-14-convert-mkv.md b/_posts/2023-05-14-convert-mkv.md
deleted file mode 100644
index 7cc6189..0000000
--- a/_posts/2023-05-14-convert-mkv.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: Convert all MKV files into other formats
-permalink: /convert-mkv.html
-date: 2023-05-14T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [ffmpeg]
----
-
-You will need `ffmpeg` installed on your system. This will convert all MKV files
-into WebM format.
-
-```sh
-# Convert all MKV files into WebM format.
-find ./ -name '*.mkv' -exec bash -c 'ffmpeg -i "$0" -vcodec libvpx -acodec libvorbis -cpu-used 5 -threads 8 "${0%%.mkv}.webm"' {} \;
-```
-
-```sh
-# Convert all MKV files into MP4 format.
-find ./ -name '*.mkv' -exec bash -c 'ffmpeg -i "$0" -c:a copy -c:v copy "${0%%.mkv}.mp4"' {} \;
-```
-
diff --git a/_posts/2023-05-15-preview-troff-man-pages.md b/_posts/2023-05-15-preview-troff-man-pages.md
deleted file mode 100644
index 2f0ca82..0000000
--- a/_posts/2023-05-15-preview-troff-man-pages.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Preview how a man page written in Troff will look
-permalink: /preview-troff-man-pages.html
-date: 2023-05-15T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [troff]
----
-
-Troff is used to write man pages and is difficult to read raw, so this will
-preview how the page will look when it is rendered.
-
-```sh
-# On Linux system.
-groff -man -Tascii filename
-
-# On Plan9 system.
-man 1 filename
-```
-
diff --git a/_posts/2023-05-16-mass-set-permission.md b/_posts/2023-05-16-mass-set-permission.md
deleted file mode 100644
index 654d9d1..0000000
--- a/_posts/2023-05-16-mass-set-permission.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Change permissions of matching files recursively
-permalink: /mass-set-permission.html
-date: 2023-05-16T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [linux]
----
-
-Replace `*.xml` with your pattern. This will remove the executable bit from all
-files matching the pattern. Change `-x` to `+x` to add the executable bit
-instead.
-
-```sh
-find . -type f -name "*.xml" -exec chmod -x {} +
-```
-
diff --git a/_posts/2023-05-16-rekindling-my-love-for-programming.md b/_posts/2023-05-16-rekindling-my-love-for-programming.md
deleted file mode 100644
index dc5344f..0000000
--- a/_posts/2023-05-16-rekindling-my-love-for-programming.md
+++ /dev/null
@@ -1,75 +0,0 @@
----
-title: Rekindling my love for programming and enjoying the act of creating
-permalink: /rekindling-my-love-for-programming.html
-date: 2023-05-16T12:00:00+02:00
-layout: post
-type: post
-draft: false
----
-
-Programming can be a challenging and rewarding experience, but sometimes it's
-easy to feel burnt out or disinterested. I had lost my passion for coding over
-the past couple of months, and it looked like I would never enjoy coding as
-much as I once did.
-
-I was feeling burnt out with programming. I thought taking a break from it and
-focusing on other activities that I enjoy might be helpful. This way, I could
-come back to programming with a fresh perspective and renewed energy. I also
-thought about learning a new programming language or technology to keep things
-interesting and challenging.
-
-However, what I didn't realize was that learning a new language or technology
-wasn't going to solve the underlying issue. I needed to take a step back and
-re-evaluate why I had lost my passion for programming in the first place. This
-involved taking a deep look into what I was doing that resulted in this rut.
-
-Sometimes, it's easy to get caught up in the hype of new technologies or
-languages, and we can feel like we're missing out if we're not constantly
-learning and experimenting. However, it's important to remember that the latest
-and greatest isn't always the best fit for our projects or our
-interests. Instead of constantly chasing the next big thing, it can be helpful
-to focus on what truly interests us and what we're passionate about.
This can
-help us stay motivated and engaged with our work, rather than feeling like we're
-just going through the motions.
-
-I expressed that I had lost my passion for coding over the past couple of
-months, and I realized that the reason behind it was my tendency to spread
-myself too thin and not focus on completing interesting projects. In order to
-regain my passion for coding, I need to focus on projects that truly interest me
-and give me a sense of purpose and motivation.
-
-Recently, I have been playing World of Warcraft more frequently and have become
-interested in developing addons for the game.
-
-This quickly resulted in me creating three addons that improve the quality of
-life, and I subsequently developed a more useful addon that encapsulates all
-the others I made.
-
-I found it interesting that this action sparked a new interest in me.
-Additionally, I discovered the Lua language, which reminded me that coding
-should be fun rather than just a struggle with a language. It should be pure,
-unadulterated fun.
-
-I wasn't fighting the syntax, nor was I focused on finding the most optimal
-solution. I simply created things without the pressure of making them the best
-they could possibly be.
-
-This made me realize that I actually adore simple languages that get out of the
-way and let you express what you want to do. It forced me to rethink a lot about
-what I use and what I actually enjoy.
-
-I have decided to stick to the basics. For a scripting language, I will use
-Lua. For networking, I will use Golang. And for any special needs, I will rely
-on C. I do not require Rust, Nim, or Zig. This selection is more than sufficient
-for my needs. I have to stay true to this simplicity. There is something to
-Occam's razor.
-
-I've been struggling with a lack of creativity lately, but now I'm experiencing
-a real change. I realized I needed to take a step back and stop actively trying
-to address the issue.
I needed to stop worrying and overthinking it. I simply
-needed some time. Looking back, I don't think I've taken any significant time
-off in the last 10 years.
-
-Suddenly, I find myself with the energy and passion to complete multiple small
-projects. It doesn't feel like a chore at all. Who knew I needed WoW to
-kickstart everything? Inspiration really does come from the strangest places.
diff --git a/_posts/2023-05-22-non-blocking-shell-exec-csharp.md b/_posts/2023-05-22-non-blocking-shell-exec-csharp.md
deleted file mode 100644
index f8b9c53..0000000
--- a/_posts/2023-05-22-non-blocking-shell-exec-csharp.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: Execute a non-blocking async shell command in C#
-permalink: /non-blocking-shell-exec-csharp.html
-date: 2023-05-22T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [csharp]
----
-
-Execute a shell command asynchronously in C# without blocking the UI thread.
-
-```c#
-private async Task executeCopyCommand()
-{
-    await Task.Run(() =>
-    {
-        var processStartInfo = new ProcessStartInfo("cmd", "/c dir")
-        {
-            RedirectStandardOutput = true,
-            UseShellExecute = false,
-            CreateNoWindow = true
-        };
-
-        var process = new Process
-        {
-            StartInfo = processStartInfo
-        };
-
-        process.Start();
-        process.WaitForExit();
-    });
-}
-```
-
-Make sure that `async` is present in the function definition and `await` is used
-in the method that calls `executeCopyCommand()`.
- -```c# -private async void button_Click(object sender, EventArgs e) -{ - await executeCopyCommand(); -} -``` - diff --git a/_posts/2023-05-23-extend-lua-with-custom-c.md b/_posts/2023-05-23-extend-lua-with-custom-c.md deleted file mode 100644 index 604d359..0000000 --- a/_posts/2023-05-23-extend-lua-with-custom-c.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: Extend Lua with custom C functions using Clang -permalink: /extend-lua-with-custom-c.html -date: 2023-05-23T12:00:00+02:00 -layout: post -type: note -draft: false -tags: [lua, c] ---- - -Here is a boilerplate for extending Lua with custom C functions. This requires -Clang and Lua 5.1 to be installed. GCC can be used instead of Clang, but the -Makefile will need to be modified. - -- nativefunc.c - - ```c - #include - #include - - static int l_mult50(lua_State *L) { - double number = luaL_checknumber(L, 1); - lua_pushnumber(L, number * 50); - return 1; - } - - int luaopen_nativefunc(lua_State *L) { - static const struct luaL_Reg nativeFuncLib[] = { {"mult50", l_mult50}, {NULL, NULL} }; - - luaL_register(L, "nativelib", nativeFuncLib); - return 1; - } - ``` - -- main.lua - - ```lua - require "nativefunc" - print(nativelib.mult50(50)) - ``` - -- Makefile - - ```Makefile - CC = clang - CFLAGS = - INCLUDES = `pkg-config lua5.1 --cflags-only-I` - - all: - $(CC) -shared -o nativefunc.so -fPIC nativefunc.c $(CFLAGS) $(INCLUDES) - - clean: - rm *.so - ``` - diff --git a/_posts/2023-05-23-i-was-wrong-about-git-workflows.md b/_posts/2023-05-23-i-was-wrong-about-git-workflows.md deleted file mode 100644 index 57d887c..0000000 --- a/_posts/2023-05-23-i-was-wrong-about-git-workflows.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: I think I was completely wrong about Git workflows -permalink: /i-was-wrong-about-git-workflows.html -date: 2023-05-23T12:00:00+02:00 -layout: post -type: post -draft: false -tags: [] ---- - -I have been using some approximation of [Git -Flow](https://jeffkreeftmeijer.com/git-flow/) for years now 
and never really
-questioned it, to be honest. When I create a repo, I create a develop branch,
-set it as the default, and then merge to master from there. Seems reasonable
-enough.
-
-One thing that I have learned is that long living branches are the devil. They
-always end up making a huge mess when they need to be merged eventually into
-master. By that reasoning, what is the develop branch if not the longest living
-feature branch? And from my personal experience there was never a situation
-where I wasn’t sweating bullets when I had to merge develop back to master.
-
-This realisation started to give me pause. So why the hell am I doing this, and
-is there a better way? Well, the solution was always there. And it comes in the
-form of [git tags](https://git-scm.com/book/en/v2/Git-Basics-Tagging).
-
-So what are git tags? Git tags are references to specific points in a Git
-repository's history. They are used to mark important milestones, such as
-releases or significant commits, making it easier to identify and access
-specific versions of a project.
-
-Somehow we have all hijacked the meaning of the master branch so that it has to
-be the most releasable version of the code. And this is also where the confusion
-about versioning the software kicks in. Because the master branch implicitly
-says that we are dealing with a rolling release type of software. And by having
-a develop branch we are hacking around this confusion. With a separation of
-develop and master we lock functionality into place, forcing a stable vs.
-development version of the software.
-
-But if that is true and long living branches are the devil, then why have
-develop at all? I think that most of this comes down to how continuous
-integration is being done. There usually is no granular access to tags, and CD
-software deploys what is present on a specific branch, be that master for
-production and develop for staging.
This is a gross simplification, and by having this in place
-we have completely removed tagging as a viable way to create a fixed point in
-the software cycle that says: this is the production-ready code.
-
-One cool thing about tags is that you can check out a specific tag. So they
-behave very similarly to branches in that regard. And you don’t have the
-overhead of having two mainstream branches.
-
-So what is the solution? One approach is to use a development workflow where all
-changes are made on smaller branches and continuously merged into master. When
-the software is ready to be pushed to production, you tag the master branch.
-This approach eliminates the need for long-lived branches and simplifies the
-development process. It also encourages developers to make small, incremental
-changes that can be tested and deployed quickly. However, this approach may not
-be suitable for all projects or teams that heavily rely on automated deployment
-based on branch names only.
-
-This also requires that developers always keep production in mind. No more
-living on an island of the develop branch. All your actions and code need to be
-ready to meet production standards on a much smaller timescale.
-
-I think that we have complicated the workflow in an honest attempt to make
-things more streamlined, but in the process of doing this, we have inadvertently
-made our lives much more complicated.
-
-In conclusion, it's important to re-evaluate our workflows from time to time to
-see if they still make sense and if there are better alternatives available.
-Long-living branches can be problematic, and using tags to mark important
-milestones can simplify the development process.
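The tag-based flow described above boils down to just a few commands. A minimal sketch in a throwaway repository (the tag name, commit message, and identity are made up for illustration):

```shell
# Keep the sketch self-contained: a fresh repository with one commit stands
# in for work merged into master from short-lived branches.
cd "$(mktemp -d)"
git init -q .
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "small change"

# When the code is ready for production, mark the point with an annotated tag.
git -c user.email=me@example.com -c user.name=me \
    tag -a v1.0.0 -m "Release 1.0.0"

# A tag can be checked out much like a branch.
git checkout -q v1.0.0
git describe --tags   # → v1.0.0
```

In a CI/CD setup this would mean deploying on pushed tags (for example, refs matching `v*`) instead of keying deployments off branch names.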
-
diff --git a/_posts/2023-05-23-parse-rss-with-lua.md b/_posts/2023-05-23-parse-rss-with-lua.md
deleted file mode 100644
index ea8ce8c..0000000
--- a/_posts/2023-05-23-parse-rss-with-lua.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: Parse RSS feeds with Lua
-permalink: /parse-rss-with-lua.html
-date: 2023-05-23T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [lua, rss]
----
-
-Example of parsing RSS feeds with Lua. Before running the script install:
-
-- feedparser with `luarocks install feedparser`
-- luasocket with `luarocks install luasocket`
-
-```lua
-local http = require("socket.http")
-local feedparser = require("feedparser")
-
-local feed_url = "https://mitjafelicijan.com/index.xml"
-
-local response, status, _ = http.request(feed_url)
-if status == 200 then
-    local parsed = feedparser.parse(response)
-
-    -- Print out feed details.
-    print("> Title  ", parsed.feed.title)
-    print("> Author ", parsed.feed.author)
-    print("> ID     ", parsed.feed.id)
-    print("> Entries", #parsed.entries)
-
-    for _, item in ipairs(parsed.entries) do
-        print("GUID    ", item.guid)
-        print("Title   ", item.title)
-        print("Link    ", item.link)
-        print("Summary ", item.summary)
-    end
-else
-    print("! Request failed. Status:", status)
-end
-```
diff --git a/_posts/2023-05-24-fresh-9front-desktop.md b/_posts/2023-05-24-fresh-9front-desktop.md
deleted file mode 100644
index 5da89e7..0000000
--- a/_posts/2023-05-24-fresh-9front-desktop.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: My brand new Plan9/9front desktop
-permalink: /fresh-9front-desktop.html
-date: 2023-05-24T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [plan9]
----
-
-I have been experimenting with Plan9/9front for a week now. Noice! This is how
-my desktop looks.
-
-![9front desktop](/assets/notes/9front-desktop.png){:loading="lazy"}
-
diff --git a/_posts/2023-05-25-dcss-new-player-guide.md b/_posts/2023-05-25-dcss-new-player-guide.md
deleted file mode 100644
index dd63f79..0000000
--- a/_posts/2023-05-25-dcss-new-player-guide.md
+++ /dev/null
@@ -1,99 +0,0 @@
----
-title: Dungeon Crawl Stone Soup - New player guide
-permalink: /dcss-new-player-guide.html
-date: 2023-05-25T22:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [dcss]
----
-
-An amazing game deserves an amazing guide. All this material can be found in
-some form or another in [crawl's](https://github.com/crawl/crawl) official
-repository.
-
-- [DCSS Quickstart](/assets/notes/dcss-quickstart.pdf) - Very short introduction to the
-  game
-- [DCSS Manual](/assets/notes/dcss_manual.pdf) - Extensive manual about the game
-
-![Dungeon Crawl Stone Soup](/assets/notes/dcss.jpg){:loading="lazy"}
-
-**Movement and Exploration**
-
-- You can move around with the numpad (try numlock on and off), vi-keys, or
-  clicking with the mouse. Arrow keys work, though you can't move diagonally
-  with them. Pressing Shift and a direction will move until you see/hit
-  something.
-- Pressing `>` will take you down a staircase, and `<` to go up a staircase.
-- You can open doors by walking into them, and close them with `C`.
-- You can autoexplore by pressing `o`.
-- You can re-view recent messages with `Ctrl-p`.
-
-**Monsters and Combat**
-
-- You can pick up items with `,` or `g`.
-- Wield weapons with `w`. Weapons have different stats.
-  - (You may also engage in Unarmed Combat, though it isn't very effective when
-    untrained).
-- Attack monsters in melee by walking in their direction (or with
-  Ctrl-direction).
-- You can wait with `.` or `s`, passing your turn - such as to get monsters into
-  a corridor with you.
-- You can rest with `5`, waiting until you are fully healed, or something
-  noteworthy happens.
-- Either mouse over and right-click, or use `x` then `v` on the monster to
-  examine monsters. Monsters with a red border are 'dangerous' relative to your
-  current XP level (XL).
-- Quiver (often ranged) actions for further use with `Q`.
-- You can fire ranged weapons manually with `f`, or auto-target your quiver with
-  `p` or `Shift-Tab`. Throwing weapons can be thrown immediately, while
-  launchers (like bows) need to be wielded first.
-
-**Items and Inventory**
-
-- View your inventory by pressing `i`. Most item related commands can also be
-  done with this menu.
-- You can wear armour with `W`; armour gives `AC`, while heavier body armour
-  reduces `EV`.
-- Autoexplore will automatically pick up useful items, such as potions and
-  scrolls, if you aren't in danger.
-- You can read scrolls with `r` and drink ("quaff") potions with `q`.
-- Equipment items may have brands, with special properties. Branded equipment is
-  blue when unidentified.
-- Equipment items may be artifacts, often with unique properties, and are
-  unmodifiable. They are written in white.
-- You can evoke wands with `V`.
-- You can put on jewelry with `P`, and remove it with `R`.
-- Gold is used in shops, which can be interacted with by either `>` or `<`.
-
-**Magic and Spellcasting**
-
-- Once you find a spellbook, you can memorize spells with `M`.
-- You need to be the same XL as the spell's spell level in order to learn it, in
-  addition to training magical skill (to lower failure rate).
-- Cast spells by pressing `z`, then the letter assigned to the spell. You may
-  also Quiver a spell and then use it like a ranged weapon (with Shift-Tab).
-- You can view your memorized spells by pressing `I` (capital-i) or `z`.
-- Like HP, you can recover MP by resting (with 5).
-- Many spells can be positioned more effectively, or combined with other spells,
-  in order to get (more effective) use out of them.
-- Heavier body armour and shields hamper spellcasting.
-
-**Gods and Divine Abilities**
-
-- You may look at a god's overview by praying at their altar (with `>` or `<`).
-  After praying, you can worship the god by pressing Enter.
-- Gods all have unique features about them. Trog, the god of the tutorial, is
-  also the god of rage and bloodshed, and so despises spellcasting.
-- Gods like and dislike different things. Most gods either like killing things
-  (like Trog) or exploring new areas (like Elyvilon), rewarding you with piety
-  (divine favor) for doing so.
-- You should learn to use and even rely on divine abilities often, as they are
-  usually very strong. Trog's Berserk gives you 1.5x health, 1.5x speed (to all
-  valid actions), and a big damage boost. Note that Berserk prevents most
-  actions other than move and melee attack, and runs out very quickly if you
-  aren't attacking. And after berserk ends, you are slowed down and can't
-  berserk again for a short time.
-- In addition, the vast majority of abilities consume piety in the process.
-  Regardless, this ability is very cheap, and the benefits are incredible, so
-  don't hold back!
-- Pressing `^` will let you view your current god, abilities, and piety.
diff --git a/_posts/2023-05-25-show-xterm-colors.md b/_posts/2023-05-25-show-xterm-colors.md
deleted file mode 100644
index 56050fd..0000000
--- a/_posts/2023-05-25-show-xterm-colors.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-title: Display xterm color palette
-permalink: /xterm-color-palette.html
-date: 2023-05-25T12:00:00+02:00
-layout: post
-type: note
-draft: false
-tags: [linux]
----
-
-- `bash xterm-palette.sh` - will show you the number of colors available
-- `bash xterm-palette.sh -v` - will create a list of all colors with codes
-
-![xterm color palette](/assets/notes/xterm-palette.png){:loading="lazy"}
-
-```sh
-#!/usr/bin/env bash
-# xterm-palette.sh
-
-trap 'tput sgr0' exit # Clean up even if user hits ^C
-
-function setfg () {
-    printf '\e[38;5;%dm' $1
-}
-
-function setbg () {
-    printf '\e[48;5;%dm' $1
-}
-
-function showcolors() {
-    # Given an integer, display that many colors
-    for ((i=0; i<$1; i++))
-    do
-        printf '%4d ' $i
-        setbg $i
-        tput el
-        tput sgr0
-        echo
-    done
-    tput sgr0 el
-}
-
-# First, test if terminal supports OSC 4 at all.
-printf '\e]4;%d;?\a' 0
-read -d $'\a' -s -t 0.1
-```
-
-```c
-#include <u.h>
-#include <libc.h>
-#include <draw.h>
-
-void
-main()
-{
-    ulong co;
-    Image *im, *bg;
-    co = 0x0000FFFF;
-
-    if (initdraw(nil, nil, argv0) < 0)
-    {
-        sysfatal("%s: %r", argv0);
-    }
-
-    im = allocimage(display, Rect(0, 0, 300, 300), RGB24, 0, DYellow);
-    bg = allocimage(display, Rect(0, 0, 1, 1), RGB24, 1, co);
-
-    if (im == nil || bg == nil)
-    {
-        sysfatal("not enough memory");
-    }
-
-    draw(screen, screen->r, bg, nil, ZP);
-    draw(screen, screen->r, im, nil, Pt(-40, -40));
-
-    flushimage(display, Refnone);
-
-    // Wait 10 seconds before exiting.
-    sleep(10000);
-
-    exits(nil);
-}
-```
-
-And then compile with `mk` (mkfile below):
-
-```makefile
-# mkfile
-```
-
-Now the markdown file `presentation.md` with the presentation. `---` is used to
-separate slides. Other stuff is just pure markdown.
-
-```md
-class: center, middle
-
-# Main title of the presentation
-
----
-
-# First slide
-
-Eveniet mollitia nemo architecto rerum aut iure iste. Sit nihil nobis libero iusto fugit nam laudantium ut. Dignissimos corrupti laudantium nisi.
-
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit.
-- Integer aliquet mauris a felis fringilla, ut congue massa finibus.
-
----
-
-# Slide two
-
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit.
-- Vestibulum eget leo ac dolor venenatis pulvinar.
-```
diff --git a/_posts/2023-06-24-making-cgit-look-nicer.md b/_posts/2023-06-24-making-cgit-look-nicer.md
deleted file mode 100644
index 0140a3e..0000000
--- a/_posts/2023-06-24-making-cgit-look-nicer.md
+++ /dev/null
@@ -1,207 +0,0 @@
----
-title: "Making cgit look nicer"
-permalink: /making-cgit-look-nicer.html
-date: 2023-06-24T13:33:58+02:00
-layout: post
-type: note
-draft: false
-tags: [git]
----
-
-For personal use I have a [private Git server](https://git.mitjafelicijan.com)
-set up and I use GitHub just as a mirror. By default the cgit theme looks a bit
-dated so I made the following theme.
-
-- `/etc/cgitrc`
-
-```ini
-css=/cgit.css
-logo=/startrek.gif
-favicon=/favicon.png
-source-filter=/usr/lib/cgit/filters/syntax-highlighting-edited.sh
-about-filter=/usr/lib/cgit/filters/about-formatting.sh
-
-local-time=1
-snapshots=tar.gz
-repository-sort=age
-cache-size=1000
-branch-sort=age
-summary-log=200
-max-atom-items=50
-max-repo-count=100
-
-enable-index-owner=0
-enable-follow-links=1
-enable-log-filecount=1
-enable-log-linecount=1
-
-root-title=Place for code, experiments and other bullshit!
-root-desc=
-clone-url=git@git.mitjafelicijan.com:/home/git/$CGIT_REPO_URL
-
-mimetype.gif=image/gif
-mimetype.html=text/html
-mimetype.jpg=image/jpeg
-mimetype.jpeg=image/jpeg
-mimetype.pdf=application/pdf
-mimetype.png=image/png
-mimetype.svg=image/svg+xml
-
-readme=:README.md
-readme=:readme.md
-
-# Must be at the end!
-virtual-root=/ -scan-path=/home/git/ -``` - -For `syntax-highlighting-edited.sh` follow instructions on -[https://wiki.archlinux.org/title/Cgit](https://wiki.archlinux.org/title/Cgit#Using_highlight). - -- `/usr/share/cgit/cgit.css` - -```css -* { - font-size: 11pt; -} - -body { - font-family: monospace; - background: white; - padding: 1em; -} - -th, td { - text-align: left; -} - -/* HEADER */ - -#header { - margin-bottom: 1em; -} - -#header .logo img { - display: block; - height: 3em; - margin-right: 10px; -} - -#header .sub.right { - display: none; -} - -/* FOOTER */ - -.footer { - margin-top: 2em; - font-style: italic; -} - -.footer, .footer a { - color: gray; -} - -/* TABS */ - -.tabs a { - margin-bottom: 2em; - display: inline-block; - margin-right: 1em; -} - -.tabs td a:only-child { - display: none; -} - -/* HIDING ELEMENTS */ - -.cgit-panel, .form { - display: none; -} - -/* LISTS */ - -.list td, .list th { - padding-right: 2em; -} - -.list .nohover a { - color: black; -} - -.list .button { - padding-right: 0.5em; -} - -/* COMMIT */ - -.commit-subject { - padding: 1em 0; -} - -.decoration a { - padding-left: 0.5em; -} - -.commit-info th { - padding-right: 1em; -} - -.commit-subject { - padding: 2em 0; -} - -table.diff div.head { - padding-top: 2em; -} - -table.diffstat td { - padding-right: 1em; -} - -/* CONTENT */ - -.linenumbers { - padding-right: 0.5em; -} - -.linenumbers a { - color: gray; -} - -.pager { - display: flex; - list-style-type: none; - padding: 0; - gap: 0.5em; -} - -/* DIFF COLORS */ - -table.diff { - width: 100%; -} - -table.diff td { - white-space: pre; -} - -table.diff td div.head { - font-weight: bold; - margin-top: 1em; - color: black; -} - -table.diff td div.hunk { - color: #009; -} - -table.diff td div.add { - color: green; -} - -table.diff td div.del { - color: red; -} -``` diff --git a/_posts/2023-06-25-alacritty-open-links-with-modifier.md b/_posts/2023-06-25-alacritty-open-links-with-modifier.md deleted file mode 100644 index 
a26dd14..0000000 --- a/_posts/2023-06-25-alacritty-open-links-with-modifier.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: "Alacritty open links with modifier" -permalink: /alacritty-open-links-with-modifier.html -date: 2023-06-25T17:17:16+02:00 -layout: post -type: note -draft: false -tags: [linux] ---- - -Alacritty by default makes all links in the terminal output clickable and this -gets annoying rather quickly. I liked the default behavior of Gnome terminal -where you needed to hold the Control key and then you could click and open links. - -To achieve this in Alacritty you need to provide a `hint` in the configuration -file. The config file is located at `~/.config/alacritty/alacritty.yml`. - -```yaml -hints: - enabled: - - regex: "(mailto:|gemini:|gopher:|https:|http:|news:|file:|git:|ssh:|ftp:)\ - [^\u0000-\u001F\u007F-\u009F<>\"\\s{-}\\^⟨⟩`]+" - command: xdg-open - post_processing: true - mouse: - enabled: true - mods: Control -``` - -The following should work under any Linux system. For macOS, you will need to -change `command: xdg-open` to something else. - -Now the links will be visible and clickable only while the Control key is -pressed. - -Source: https://github.com/alacritty/alacritty/issues/5246 diff --git a/_posts/2023-06-25-development-environments-with-nix.md b/_posts/2023-06-25-development-environments-with-nix.md deleted file mode 100644 index a905f10..0000000 --- a/_posts/2023-06-25-development-environments-with-nix.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: "Development environments with Nix" -permalink: /development-environments-with-nix.html -date: 2023-06-25T16:38:10+02:00 -layout: post -type: note -draft: false -tags: [random] ---- - -Nix is amazing for making reproducible cross-OS development environments. - -First you need to [install the Nix package -manager](https://nixos.org/download.html). - -- Create a file `shell.nix` in your project folder. -- In the section that has `python3` etc. add the programs you want to use. 
These can - be CLI or GUI applications. It doesn't matter to Nix. - -```nix -{ pkgs ? import <nixpkgs> {} }: - pkgs.mkShell { - nativeBuildInputs = with pkgs.buildPackages; [ - python3 - tinycc - ]; -} -``` - -And then run `nix-shell`. By default it will look for a `shell.nix` file. If -you want to specify a different file use `nix-shell file.nix`. That is about it. - -When the shell is spawned it could happen that your `PS1` prompt will be -overwritten and your prompt will look different. In that case you need to -either do `NIX_SHELL_PRESERVE_PROMPT=1 nix-shell` or add the -`NIX_SHELL_PRESERVE_PROMPT` variable to your `bashrc` or `zshrc` file and set it -to `1`. - -I also have a modified `PS1` prompt for Bash that I use and it also catches the -usage of a Nix shell. - -```sh -NIX_SHELL_PRESERVE_PROMPT=1 - -parse_git_branch() { - git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/' -} - -is_inside_nix_shell() { - nix_shell_name="$(basename "$IN_NIX_SHELL" 2>/dev/null)" - if [[ -n "$nix_shell_name" ]]; then - echo " \e[0;36m(nix-shell)\e[0m" - fi -} - -export PS1="[\[\033[38;5;9m\]\u@\h\[$(tput sgr0)\]]\$(is_inside_nix_shell)\[\033[33m\]\$(parse_git_branch)\[\033[00m\] \w\[$(tput sgr0)\] \n$ " -``` - -And this is what it looks like when you are in a Nix shell. 
Otherwise, that part -of the prompt is omitted. - -![PS1 Prompt](/assets/notes/ps1-prompt.png){:loading="lazy"} - -More resources: - -- https://nixos.wiki/wiki/Development_environment_with_nix-shell -- https://nixos.wiki/wiki/Main_Page -- https://itsfoss.com/why-use-nixos/ -- https://mynixos.com/ diff --git a/_posts/2023-06-29-10gui-10-finger-multitouch-user-interface.md b/_posts/2023-06-29-10gui-10-finger-multitouch-user-interface.md deleted file mode 100644 index d4b8e54..0000000 --- a/_posts/2023-06-29-10gui-10-finger-multitouch-user-interface.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: "10/GUI 10 Finger Multitouch User Interface" -permalink: /10gui-10-finger-multitouch-user-interface.html -date: 2023-06-29T14:51:39+02:00 -layout: post -type: note -draft: false -tags: [graphics] ---- - -Message from the 10/GUI team (the page 10gui.com does not exist anymore): - -*Over a quarter-century ago, Xerox introduced the modern graphical user -interface paradigm we today take for granted.* - -*That it has endured is a testament to the genius of its design. But the -industry is now at a crossroads: New technologies promise higher-bandwidth -interaction, but have yet to find a truly viable implementation.* - -*10/GUI aims to bridge this gap by rethinking the desktop to leverage technology -in an intuitive and powerful way.* - - diff --git a/_posts/2023-06-29-60s-ibm-computers-commercial.md b/_posts/2023-06-29-60s-ibm-computers-commercial.md deleted file mode 100644 index bddca2a..0000000 --- a/_posts/2023-06-29-60s-ibm-computers-commercial.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: "60's IBM Computers Commercial" -permalink: /60s-ibm-computers-commercial.html -date: 2023-06-29T22:13:45+02:00 -layout: post -type: note -draft: false -tags: [random] ---- - -Long commercials such as this typically aired during hour-long programs in the -1960s. They would *not* have aired -during a half-hour program. 
- - diff --git a/_posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md b/_posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md deleted file mode 100644 index 4bc45ce..0000000 --- a/_posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md +++ /dev/null @@ -1,282 +0,0 @@ ---- -title: "Bringing all of my projects together under one umbrella" -permalink: /bringing-all-of-my-projects-together-under-one-umbrella.html -date: 2023-07-01T18:49:07+02:00 -layout: post -type: post -draft: false ---- - -## What is the issue anyway? - -Over the years, I have accumulated a bunch of virtual servers on my -[DigitalOcean](https://www.digitalocean.com/) account for small experimental -projects I dabble in. And this has resulted in quite a bill. I mean, I wouldn't -care if these projects were actually being used. But they were just sitting there -unused and wasting resources, which made them an unnecessary burden for me. - -Most of them are just small HTML pages that have an endpoint or two to read data -from or write to, and for that reason I wrote servers left and right. To be honest, -all of those things could have been done with [CGI -scripts](https://en.wikipedia.org/wiki/Common_Gateway_Interface) and that would -have been more than enough. - -Recently, I decided to stop language hopping and focus on a simpler stack which -includes C, Go and Lua. With it I can accomplish all the things I am interested in. - -## Finding a web server replacement - -Usually I had [Nginx](https://nginx.org/en/) in front of these small web servers -and I had to manage SSL certificates and all that jazz. I am bored with these -things. I don't want to manage any of this bullshit anymore. - -So the logical move forward was to find a solid alternative for this. I -ended up with [Caddy server](https://caddyserver.com/). I've used it in the past -but kind of forgotten about it. 
What I really like about it is its ease of use -and the out-of-the-box functionality that comes with it. - -These are the _pitch_ points from their website: - -- **Secure by Default**: Caddy is the only web server that uses HTTPS by - default. A hardened TLS stack with modern protocols preserves privacy and - exposes MITM attacks. -- **Config API**: As its primary mode of configuration, Caddy's REST API makes - it easy to automate and integrate with your apps. -- **No Dependencies**: Because Caddy is written in Go, its binaries are entirely - self-contained and run on every platform, including containers without libc. -- **Modular Stack**: Take back control over your compute edge. Caddy can be - extended with everything you need using plugins. - -I had just a few requirements: - -- Automatic SSL -- Static file server -- Basic authentication -- CGI script support - -And the vanilla version does all of it except CGI scripts. But that can easily be -fixed with their modular approach. You can do this on their website and build a -custom version of the server, or do it with Docker. - -This is a `Dockerfile` I used to build a custom server. - -```Dockerfile -FROM caddy:builder AS builder - -RUN xcaddy build \ - --with github.com/aksdb/caddy-cgi - -FROM caddy:latest -RUN apk add --no-cache nano - -COPY --from=builder /usr/bin/caddy /usr/bin/caddy -``` - -## Getting rid of all the unnecessary virtual machines - -The next step was to get a handle on the number of virtual servers I have all -over the place. - -I decided to move all the projects and services into two main VMs: - -- personal server (still Nginx) - - git server - - static file server - - personal blog -- projects server (Caddy server) - - personal experiments - - other projects - -I will focus on the projects server in this post since it's more interesting. - -## Testing CGI scripts - -The first thing I tested was how CGI scripts work under Caddy. 
This is -particularly important to me because almost all of my experiments and mini projects -need this to work. - -To configure Caddy server, you must provide the server with a configuration -file. By default, it's called `Caddyfile`. - -```caddyfile -{ - order cgi before respond -} - -examples.mitjafelicijan.com { - cgi /bash-test /opt/projects/examples/bash-test.sh - cgi /tcl-test /opt/projects/examples/tcl-test.tcl - cgi /lua-test /opt/projects/examples/lua-test.lua - cgi /python-test /opt/projects/examples/python-test.py - - root * /opt/projects/examples - file_server -} -``` - -- The order is very important. Make sure that `order cgi before respond` is at - the top of the configuration file. -- Also, when you run with Caddy v2, make sure you provide the `--adapter` argument - like this `/usr/bin/caddy run --watch --environ --config /etc/caddy/Caddyfile - --adapter caddyfile`. Otherwise, Caddy will try to use a different format for - the config file. - -I did a small batch of tests with [Bash](https://www.gnu.org/software/bash/), -[Tcl](https://www.tcl-lang.org/), [Lua](https://www.lua.org/) and -[Python](https://www.python.org/). Here is a cheat sheet if you need it. - -Let's get Bash out of the way first. - -```bash -#!/usr/bin/bash - -printf "Content-type: text/plain\n\n" - -printf "Hello from Bash\n\n" -printf "PATH_INFO [%s]\n" $PATH_INFO -printf "QUERY_STRING [%s]\n" $QUERY_STRING -printf "\n" - -for i in {0..9..1}; do - printf "> %s\n" $i -done - -exit 0 -``` - -This one is for Tcl. - -```tcl -#!/usr/bin/tclsh - -puts "Content-type: text/plain\n" - -puts "Hello from Tcl\n" -puts "PATH_INFO \[$env(PATH_INFO)\]" -puts "QUERY_STRING \[$env(QUERY_STRING)\]" -puts "" - -for {set i 0} {$i < 10} {incr i} { - puts "> $i" -} -``` - -And for all you Python enjoyers. 
- -```python -#!/usr/bin/python3 - -import os - -print("Content-type: text/plain\n") - -print("Hello from Python\n") -print("PATH_INFO [{}]".format(os.environ['PATH_INFO'])) -print("QUERY_STRING [{}]".format(os.environ['QUERY_STRING'])) -print("") - -for i in range(10): - print("> {}".format(i)) -``` - -And for the final example, Lua. - -```lua -#!/usr/bin/lua - -print("Content-type: text/plain\n") - -print("Hello from Lua\n") -print(string.format("PATH_INFO [%s]", os.getenv("PATH_INFO"))) -print(string.format("QUERY_STRING [%s]", os.getenv("QUERY_STRING"))) -print() - -for i = 0, 9 do - print(string.format("> %d", i)) -end -``` - -## Basic authentication - -One thing was also to have an option for some sort of authentication, and -something like [Basic access -authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) would -be more than enough. - -Thankfully, Caddy supports this out of the box already. Below is an updated -example. - -```Caddyfile -{ - order cgi before respond -} - -examples.mitjafelicijan.com { - cgi /bash-test /opt/projects/examples/bash-test.sh - cgi /tcl-test /opt/projects/examples/tcl-test.tcl - cgi /lua-test /opt/projects/examples/lua-test.lua - cgi /python-test /opt/projects/examples/python-test.py - - root * /opt/projects/examples - file_server - - basicauth * { - bob $2a$14$/wCgaf9oMnmQa20txB76u.nI1AldGMBT/1J7fXCfgOiRShwz/JOkK - } -} -``` - -`basicauth *` matches everything under this domain/sub-domain and protects it -with Basic Authentication. - -- `bob` is the username -- `hash` is the password - -To generate these passwords, execute `caddy hash-password` and this will prompt -you to insert a password twice and spit out a hashed password that you can put -in your configuration file. - -Restart the server and you are ready to go. - -## Making Caddy a service with systemd - -After the tests were successful, I copied `caddy` to `/usr/bin/caddy` and copied -`Caddyfile` to `/etc/caddy/Caddyfile`. - -Now off to the systemd. 
Each systemd service requires you to create a service -file. - -- I created `/etc/systemd/system/caddy.service` and put the following content - in the file. - -```systemd -[Unit] -Description=Caddy -Documentation=https://caddyserver.com/docs/ -After=network.target network-online.target -Requires=network-online.target - -[Service] -Type=notify -User=root -Group=root -ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile --adapter caddyfile -ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force --adapter caddyfile -TimeoutStopSec=5s -LimitNOFILE=1048576 -LimitNPROC=512 -PrivateTmp=true -ProtectSystem=full -AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE - -[Install] -WantedBy=multi-user.target -``` - -- You might need to reload systemd with `systemctl daemon-reload`. -- Then I enabled the service with `systemctl enable caddy.service`. -- And then I started the service with `systemctl start caddy.service`. - -This was about all that I needed to do to get it running. Now I can easily add -new subdomains and domains to the main configuration file and be done with -it. No manual Let's Encrypt shenanigans needed. diff --git a/_posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md b/_posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md deleted file mode 100644 index c7d52d5..0000000 --- a/_posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md +++ /dev/null @@ -1,101 +0,0 @@ ---- -title: "Who knows what the world will look like tomorrow" -permalink: /who-knows-what-the-world-will-look-like-tomorrow.html -date: 2023-07-08T18:49:07+02:00 -layout: post -type: post -draft: false ---- - -This site has gone through a lot of changes over the years. From being written -in Flask and Bottle to moving on to static site generators. I have used and -tested probably tens of them by now. From homebrew solutions to the biggest and -the baddest. From Bash scripts to Node.js disasters. 
I've seen some things, no -doubt. Not all bad. - -I have been closely observing the web and where the trends are going, and I -don't like what I see. Instead of the internet being this weird place where -experimentation is happening, it all became stale and formulaic. Boring, -actually. Really boring. And sad. Where is that old, revolutionary FU spirit I -remember? It's still there, I know. But it's being drowned by the voices of -mediocrity and formulaic boredom. - -It almost feels like the internet stopped for 10 years and only now -something has started happening. With all the insanity around the world. People -hating people without actual reasons, just because it's fashionable to hate and -the crowd is saying so. Sad state of affairs. - -All this is contributing to this overall negativity masked as apathy. Everybody -walking in lockstep. Instead of being creative and bold, we are just -re-inventing the world and making the same mistakes. Maybe, just maybe, some -things are good enough and there is no need to try to be too smart for our own -good. After N attempts, maybe something should click inside our heads to maybe -say: "This thing, opinion, etc. is actually really good, and even after several -attempts it still holds." - -The older I get, the more careful I am with my own thoughts and why I think the -way I think. More and more, I try to understand people with opposite -opinions. Far from perfect, but closer to bearable. And then I see people -hearing or reading a thing on the internet and let's fucking goooooo! Strong -opinions are a sign of a weak and uneducated mind. I am more and more sure of -this. - -It's gotten to a point where you can with great certainty deduce a person's -personality based on one or two opinions. How boring have we become. No wonder -people can't talk to each other. These would be very quick conversations anyway. - -I was just reminded of a song, ["Hi -Ren"](https://www.youtube.com/watch?v=s_nc1IVoMxc). 
The ending talks about being -stiff and not being able to dance. Such an amazing metaphor. And we as people -have gone so far, we can't even walk or even crawl normally anymore. We have -forgotten that the most beautiful things in life have a great deal of -uncertainty about them. We want instant gratification. Not only that, but we -want absolute obedience. Complete control over others, because we have zero -control of ourselves. And all the lies we could tell ourselves will not help us -out of this situation. - -It is funny how I catch myself from time to time being a complete idiot. It's -like having an out-of-body experience. I can see myself being an idiot, and -cannot stop myself. It serves as a learning lesson to stop before speaking. To -think before saying. And to crawl before walking. - -So there is still time. We can dance once more. All we need to do is stop for a -second. Me and you. Us two is a start. Let's not try to change the world, but -rather nudge ourselves just a tiny bit. And if we only did that?! Just -imagine. If each of us nudged ourselves a small, tiny bit, the world would heal. If -we just put down the phones and ignored the internet for a day or two. Put -visiting websites that feed on us on hold. Listened to just one sentence from a -person we completely disagree with and tried to understand it. I truly believe -that this is possible. - -Life is about suffering and joy. And instead of wishing suffering on others and -expecting joy for ourselves, we should for a brief moment want suffering for -ourselves and wish joy on others. Wouldn't that be an amazing sight to see? - -I caught myself hating on Rust. And I deeply thought about it afterward. Why did -I do it? It is obviously not for me. So why the hell was I being so negative -towards it? I think that I know the answer. I was negative because that is -easy. Because it's much easier to hate on things than to say to yourself: "Well, -you know what? This is not for me. 
I will focus on creation and not -destruction. This is who I want to be. This is what fills me with joy and -purpose." Where joy keeps me happy and purpose scares the shit out of me -and keeps me honest. This is who I want to be. Admit to myself when I am wrong -and accept the faults that I have without reservation and with courage march on. - -I just realized that this blog post is a sort of therapy for me. It's -cathartic. Going through the history of this site and remembering all the -decisions and annoyances that came with it. When I was cursing at the tools. And -time moved on, and the site is still here. It serves as a reminder that -perseverance wins in the end. If we just let things go. - -This came with a decision that simplifying life and removing all the unnecessary -negativity is key. Rather than worrying about what the internet is saying, what -the world is trying to take from you, you are the only one who can say no. And -create instead of destroy. - -I don't have an ending for this post, so I will say this. We live in the most -amazing times in recorded history, and we should be eternally grateful for -it. Create and study, this should be my mantra. Just create and let the world -happen. And when you feel yourself to be too certain, stop and check how deep in the -shit you are already. Strong opinions are a sign of a weak and uneducated -mind. Hate and disdain are for the weak. 
diff --git a/_posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md b/_posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md deleted file mode 100644 index fa88d99..0000000 --- a/_posts/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: "Fix screen tearing on Debian 12 Xorg and i3" -permalink: /fix-screen-tearing-on-debian-12-xorg-and-i3.html -date: 2023-07-10T04:21:48+02:00 -layout: post -type: note -draft: false ---- - -I have been experiencing some issues with Intel® Integrated HD Graphics 3000 -under Debian 12 with Xorg and i3. Using the `picom` compositor didn't help. To fix -this issue create a new file `/etc/X11/xorg.conf.d/20-intel.conf` as root and put -the following in the file. - -```txt -Section "Device" - Identifier "Intel Graphics" - Driver "intel" - Option "TearFree" "true" -EndSection -``` - -Reboot the system and that should be it. diff --git a/_posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md b/_posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md deleted file mode 100644 index 60daca8..0000000 --- a/_posts/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: "Online radio streaming with MPV from terminal" -permalink: /online-radio-streaming-with-mpv-from-terminal.html -date: 2023-07-10T03:34:45+02:00 -layout: post -type: note -draft: false ---- - -Recently I have been using my Thinkpad x220 more and there are some constraints -I have faced with it. The CPU is not as powerful as on my main machine and I really -want to listen to some music while using the machine. Browsers really are bloat. - -Check out this site https://streamurl.link/, copy the stream url and then do -`mpv <stream-url>`. 
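
If you do this often, a tiny wrapper around mpv saves retyping the flags. A minimal sketch — the `radio` function name, the flags, and the placeholder URL below are my own, not from the original note; `--no-video` simply skips decoding any video or artwork stream, which helps on a weak CPU:

```shell
# radio - wrap mpv with audio-friendly defaults for online streams.
radio() {
  if [ -n "$DRY_RUN" ]; then
    # Print the command instead of running it, so the wrapper can be
    # sanity-checked on machines without mpv installed.
    echo "mpv --no-video --volume=70 $1"
  else
    mpv --no-video --volume=70 "$1"
  fi
}

# Placeholder URL; substitute a real stream URL copied from streamurl.link.
DRY_RUN=1 radio "http://example.com/stream.mp3"
```

Drop the `DRY_RUN` guard if you don't care about testability; the core of it is just `mpv --no-video <stream-url>`.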
diff --git a/_posts/2023-07-14-set-color-temperature-of-displays-on-i3.md b/_posts/2023-07-14-set-color-temperature-of-displays-on-i3.md deleted file mode 100644 index 4618581..0000000 --- a/_posts/2023-07-14-set-color-temperature-of-displays-on-i3.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: "Set color temperature of displays on i3" -permalink: /set-color-temperature-of-displays-on-i3.html -date: 2023-07-14T09:19:31+02:00 -layout: post -type: note -draft: false ---- - -I have been using Gnome's night shift for a while now and I have been missing -this feature under i3wm. This can be done with -[redshift](https://linux.die.net/man/1/redshift). - -- On Debian install with `sudo apt install redshift` -- And then manually set it with `redshift -O 3000` -- Reset the current settings with `redshift -x` diff --git a/_posts/2023-08-01-make-b-w-svg-charts-with-matplotlib.md b/_posts/2023-08-01-make-b-w-svg-charts-with-matplotlib.md deleted file mode 100644 index 461842d..0000000 --- a/_posts/2023-08-01-make-b-w-svg-charts-with-matplotlib.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -title: "Make B/W SVG charts with matplotlib" -permalink: /make-b-w-svg-charts-with-matplotlib.html -date: 2023-08-01T17:04:10+02:00 -layout: post -type: note -draft: false ---- - -Install pip requirements. - -```sh -pip install matplotlib -pip install pandas -``` - -Example of data being used. 
- -```text -Epoch,Connect (NLB),Processing (NLB),Waiting (NLB),Total (NLB),Connect (ALB),Processing (ALB),Waiting (ALB),Total (ALB) -1,57.7,315.7,309.4,321.6,9,104.4,98.3,105.7 -2,121.9,114.4,100.3,176.9,5.8,99.1,97.1,101.1 -3,5.3,229.4,231.2,231.4,14.2,83,69.4,87.9 -4,4.2,134.5,112.2,135.3,5.3,132.4,105.5,134.1 -5,5.8,247.4,246.8,248.1,6,74.3,70.2,75.5 -6,9.9,122.9,100.6,122.7,7.5,241.1,79.3,242.3 -7,6.1,170.2,106.4,170.5,7.2,382.4,375.1,383.8 -8,6.6,194.3,201.4,195.5,7.1,130.9,104.8,132.6 -9,6.4,146.1,122.3,147.7,9.4,95.6,74,96.4 -``` - -In the code you can use `df` as a dataframe and use the headers like `df["Epoch"]`. -This is how you get column data with pandas. - -The Python code responsible for generating a chart: - -```python -import matplotlib.pyplot as plt -import pandas as pd - -# Read the data -df = pd.read_csv("data.csv") - -# Settings -plt.title("Connect median NLB vs ALB") -plt.tight_layout(pad=2) -fig = plt.gcf() -fig.set_size_inches(10, 4) - -# Plotting -plt.plot(df["Epoch"], df["Connect (ALB)"], label = "ALB", color="black", linestyle="-") -plt.plot(df["Epoch"], df["Connect (NLB)"], label = "NLB", color="black", linestyle="--") - -# Adding x and y axis labels -plt.xlabel("Epoch", fontstyle="italic") -plt.ylabel("Median value (ms)", fontstyle="italic") - -# Legend -legend = plt.legend() -legend.get_frame().set_linewidth(0) - -# Export as SVG -plt.savefig("plot.svg", format="svg") -``` - -![SVG Chart](/assets/notes/plot.svg){:loading="lazy"} - -The image above is SVG and you can zoom in and out and check that the image is vector. 
diff --git a/_posts/2023-08-05-floods-in-slovenia.md b/_posts/2023-08-05-floods-in-slovenia.md deleted file mode 100644 index 8b2354a..0000000 --- a/_posts/2023-08-05-floods-in-slovenia.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: "Floods in Slovenia up close" -permalink: /floods-in-slovenia.html -date: 2023-08-05T07:06:50+02:00 -layout: post -type: note -draft: false ---- - - - - - -![](/assets/notes/floods/IMG_1469.webp){:loading="lazy"} - -![](/assets/notes/floods/IMG_1470.webp){:loading="lazy"} - - - - diff --git a/_posts/2023-09-18-aws-eb-pyyaml-fix.md b/_posts/2023-09-18-aws-eb-pyyaml-fix.md deleted file mode 100644 index b1dd0cd..0000000 --- a/_posts/2023-09-18-aws-eb-pyyaml-fix.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: "AWS EB PyYAML fix" -permalink: /aws-eb-pyyaml-fix.html -date: 2023-09-18T07:27:29+02:00 -layout: post -type: note -draft: false ---- - -Recent update of my system completely borked [EB CLI](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-advanced.html) -on my machine. - -I tried installing it with `pip install awsebcli --upgrade --user` and it failed. - -The error was the following. - -```text -Collecting PyYAML<6.1,>=5.3.1 (from awsebcli) - Using cached PyYAML-5.4.1.tar.gz (175 kB) - Installing build dependencies ... done - Getting requirements to build wheel ... error - error: subprocess-exited-with-error - - × Getting requirements to build wheel did not run successfully. - │ exit code: 1 - ╰─> [68 lines of output] -``` - -To fix this issue with PyYAML you must install PyYAML separately. - -Do the following and try installing `eb` again after. 
- -```sh -echo 'Cython < 3.0' > /tmp/constraint.txt -PIP_CONSTRAINT=/tmp/constraint.txt pip install 'PyYAML==5.4.1' -``` diff --git a/_posts/2023-09-25-compile-drawterm-on-fedora-38.md b/_posts/2023-09-25-compile-drawterm-on-fedora-38.md deleted file mode 100644 index 57e1719..0000000 --- a/_posts/2023-09-25-compile-drawterm-on-fedora-38.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: "Compile drawterm on Fedora 38" -permalink: /compile-drawterm-on-fedora-38.html -date: 2023-09-25T09:04:28+02:00 -layout: post -type: note -draft: false ---- - -First install two dependencies: - -```sh -sudo dnf install libX11-devel libXt-devel -``` - -Clone the repo and compile it: - -```sh -git clone git://git.9front.org/plan9front/drawterm -cd drawterm -CONF=unix make -``` - -That should produce the `drawterm` binary. diff --git a/_posts/2023-11-04-using-ffmpeg-to-combine-video-side-by-side.md b/_posts/2023-11-04-using-ffmpeg-to-combine-video-side-by-side.md deleted file mode 100644 index c47a726..0000000 --- a/_posts/2023-11-04-using-ffmpeg-to-combine-video-side-by-side.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: "Using ffmpeg to combine videos side by side" -permalink: /using-ffmpeg-to-combine-video-side-by-side.html -date: 2023-11-04T09:04:28+02:00 -layout: post -type: note -draft: false ---- - -I had 4 webm videos (each 492x451) that I wanted to combine to be played side -by side and I tried [iMovie](https://support.apple.com/imovie) and -[Kdenlive](https://kdenlive.org/) and failed to do it in an easy way. I needed -this for a GitHub readme file so it also needed to be a GIF. - -The following is the [ffmpeg](https://ffmpeg.org/) version of it. 
- -```sh -ffmpeg -y \ - -i 01.webm \ - -i 02.webm \ - -i 03.webm \ - -i 04.webm \ - -filter_complex "\ - [0:v] trim=duration=8, setpts=PTS-STARTPTS, scale=492x451, fps=6 [a0]; \ - [1:v] trim=duration=8, setpts=PTS-STARTPTS, scale=492x451, fps=6 [a1]; \ - [2:v] trim=duration=8, setpts=PTS-STARTPTS, scale=492x451, fps=6 [a2]; \ - [3:v] trim=duration=8, setpts=PTS-STARTPTS, scale=492x451, fps=6 [a3]; \ - [a0][a1][a2][a3] xstack=inputs=4:layout=0_0|w0_0|w0+w1_0|w0+w1+w2_0, scale=1000:-1 [v]" \ - -map "[v]" \ - -crf 23 \ - -preset veryfast \ - trigraphs.gif -``` - -- This will produce `trigraphs.gif` that is also scaled to max 1000px in width - (refer to `scale=1000:-1`). -- The important part for a 4x1 stack is `xstack=inputs=4:layout=0_0|w0_0|w0+w1_0|w0+w1+w2_0`. -- This will also cap the frame rate at 6 (refer to `fps=6`) since that is enough and - this makes playback of GIFs smoother in a browser. - -![Result](./assets/notes/trigraphs.gif){:loading="lazy"} diff --git a/_posts/2023-11-05-add-lazy-loading-to-jekyll-posts.md b/_posts/2023-11-05-add-lazy-loading-to-jekyll-posts.md deleted file mode 100644 index 8293a4d..0000000 --- a/_posts/2023-11-05-add-lazy-loading-to-jekyll-posts.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: "Add lazy loading of images in Jekyll posts" -permalink: /add-lazy-loading-to-jekyll-posts.html -date: 2023-11-05T09:04:28+02:00 -layout: post -type: note -draft: false ---- - -Normally you define images with `![]()` in markdown files. But Jekyll also -provides a way of adding custom attributes to tags with `{:attr="value"}`. - -If you have lots of posts this command will append `{:loading="lazy"}` to all -images in all your markdown files. - -```md -![image-title](/path/to/your/image.jpg) -``` - -will become - -```md -![image-title](/path/to/your/image.jpg){:loading="lazy"} -``` - -Shell one-liner below. Go into the folder where your posts are (probably `_posts`). - -```sh -find . 
-type f -name "*.md" -exec sed -i -E 's/(\!\[.*\]\((.*?)\))$/\1{:loading="lazy"}/' {} \; -``` - -Under the hood this adds `loading="lazy"` to HTML `img` nodes. - -That is about it. diff --git a/_posts/2023-11-05-elitist-attitudes-are-sapping-the-fun-from-programming.md b/_posts/2023-11-05-elitist-attitudes-are-sapping-the-fun-from-programming.md deleted file mode 100644 index ccee72b..0000000 --- a/_posts/2023-11-05-elitist-attitudes-are-sapping-the-fun-from-programming.md +++ /dev/null @@ -1,97 +0,0 @@ ---- -title: "Elitist attitudes are sapping all the fun from programming" -permalink: /elitist-attitudes-are-sapping-all-the-fun-from-programming.html -date: 2023-11-05T09:04:28+02:00 -layout: post -type: post -draft: false ---- - -It's always been like that. Maybe it was even worse before, and I am remembering -it with rose-tinted glasses. But from the best that I can remember, it had at -least some roots in reality. If something was objectively bad, you could point -to it. But what I have started noticing recently is that objectivity is no -longer a precondition for bashing something. More and more, you can use subjective -opinion to say horrible things about a technology, a language or just a specific -manufacturer. - -And all this has achieved is that I don't really listen to anybody anymore. I -don't care what you think about X or Y. I don't care if you like this language -or that one. I don't care if you prefer a Dell or a ThinkPad over a MacBook. Who gives -a fuck, anyway? If you can do your job on it, why even care about this stuff at -all? And if you can't, buy a different machine. - -It's like politics wasn't enough. Now the same tribalism is here as well. C -developers hating on Rust. JavaScript developers laughing at jQuery users. Rust -developers laughing at everybody except Haskell users. And everybody laughing at -JavaScript. It's like this never-ending dream, being stuck in high school. Our -team against yours. It's like we are all stuck being 16.
Such a sad state of -affairs. And it's always been like this. But it's getting worse, I think. - -Everybody is trying to be elitist. Compensating for the lack of JavaScript features (like a -type system, for one) by coming up with this insane terminology to make -JavaScript sound more sophisticated than it is. Let's invent terminology to hide -flaws and sound more educated and academic. And the same goes for C and all the -other languages. All languages are shitty in some ways. For the love of God, -why? Just let it be. For once, let things just be. - -And I, for one, just do not care anymore. Languages are tools and not your -identity. If you need a programming language to fill a void in your life, I -strongly suggest that you re-evaluate where you stand currently. Try something -else. You are not a C developer, or Go developer, or JavaScript developer. You -are a problem solver. That's what you are. And be damn proud of it. You don't -need a label to make that more true or more sophisticated. - -I use Linux and macOS. I have fun on both systems. In my personal experience, -MacBooks are better laptops for what I need them to be. They are a better fit for -me. Portable machines with amazing battery life. That's all that I need from -a laptop. I don't need to come up with these insane hypothetical scenarios where -it will fall short. Yes, it can't water the plants while I am sleeping. OMG, are -we really going there? These insane hypotheticals. Who really cares? I don't! I -use it, it does what I need it to do, and that is the end of the story. Not only -that, but I don't push this down other people's throats. Like Tsoding often -says: It is what it is, and it isn't what it isn't. Such wise words. On my main -machine I have Linux and have had it for more than 20 years and I love it. I LOVE -it. I am used to it. And I've had some shitty experiences with it, but damn it, -I love it. It does what it needs to do. It fits my needs.
And if I needed -Windows, I would find a way to love it too. Why not? There is enough love to go -around when you are not being elitist and a shithead. - -Programming should be fun. Not going through a checklist before you even start, -to see if you are using what is considered the “cool” thing. If you are doing -this, you have already failed, in my opinion. - -Oh, you are not using this “insert here” algorithm? Such a pleb. Don't you know -about O(N) complexity? OMG, such a noob. He doesn't know. Uneducated pleb. 2017 -called, and they want their stack back. - -Yes, there is a place for all of those things. But not everything needs to be -perfect. There is an awesome quote in Uncharted: Sic Parvis Magna. “Greatness, -from small beginnings.” - -I would laugh if it wasn't sad. And in the end, who cares. Let these people -worry about making the perfect solutions that will never ship or take years to -finish because “Premature optimization is the root of all evil.” Everybody has their -own definition of fun. I just don't want to listen to people preaching to others about -how to do stuff. If people would just shut up and think before they speak 5% of the -time, the world would be a different place. But that will never happen. So the -only solution is to not give a fuck. - -This is more a rant than an actual post with some solution, so maybe I am a part -of the problem. Who knows? Just venting. Every so often it helps. - -Do your Rust thing. It's not for me, though. But if it works for you, more power -to you. Do your project with vanilla JavaScript. You don't always need -TypeScript, Next.js or who knows what else to make a button do a thing. Use VS -Code or Vim or Emacs or even Notepad if you wish. If you are having fun, then -just do it. Don't worry about these elitist pricks. They will never be satisfied -anyway. You will never get their approval. So why even bother. Just go for -it. Use C, Rust, OCaml, whatever floats your boat. If it tickles you, just do -it.
To hell with everybody else. And if somebody says O(N) complexity, dude? You -can say, OOOOO, fuck the fuck off. - -If this post triggered you, then you are probably the asshole. You are probably -that guy preaching about O(N) or about how this language is soo slow, -haha. Stop it. Nobody cares! Touch grass. - -Anyway, back to having fun. Cheers! diff --git a/_posts/2023-11-07-personal-sane-vim-defaults.md b/_posts/2023-11-07-personal-sane-vim-defaults.md deleted file mode 100644 index be8b2ae..0000000 --- a/_posts/2023-11-07-personal-sane-vim-defaults.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -title: "Personal sane Vim defaults" -permalink: /apersonal-sane-vim-defaults.html -date: 2023-11-07T01:04:28+02:00 -layout: post -type: note -draft: false ---- - -I have found many "sane" default configs on the net and this is my favorite -personal list. This is what my `.vimrc` file looks like. - -```vimrc -" General sane defaults. -syntax enable -colorscheme sorbet -nnoremap q: <nop> -set nocompatible -set relativenumber -set nohlsearch -set smartcase -set ignorecase -set incsearch -set autoindent -set nowrap -set nobackup -set noswapfile -set autoread -set wildmenu -set encoding=utf8 -set backspace=2 -set scrolloff=4 -set spelllang=en_us - -" Status Line enhancements. -set laststatus=2 -set statusline=%f%m%=%y\ %{strlen(&fenc)?&fenc:'none'}\ %l:%c\ %L\ %P -hi StatusLine cterm=NONE ctermbg=black ctermfg=brown -hi StatusLineNC cterm=NONE ctermbg=black ctermfg=darkgray - -" Commenting blocks of code. -augroup commenting_blocks_of_code - autocmd!
- autocmd FileType c,cpp,go,scala let b:comment_leader = '// ' - autocmd FileType sh,ruby,python let b:comment_leader = '# ' - autocmd FileType conf,fstab let b:comment_leader = '# ' - autocmd FileType lua let b:comment_leader = '-- ' - autocmd FileType vim let b:comment_leader = '" ' -augroup END -noremap <silent> ,cc :<C-B>silent <C-E>s/^/<C-R>=escape(b:comment_leader,'\/')<CR>/<CR>:nohlsearch<CR> -noremap <silent> ,cu :<C-B>silent <C-E>s/^\V<C-R>=escape(b:comment_leader,'\/')<CR>//e<CR>:nohlsearch<CR> - -" Language specific indentation. -filetype plugin indent on -autocmd Filetype make,go,c,cpp setlocal noexpandtab tabstop=4 shiftwidth=4 -autocmd Filetype html,js,css setlocal expandtab tabstop=2 shiftwidth=2 -``` - -I keep it pretty vanilla so this is about everything I have in the file. - diff --git a/_posts/2024-02-11-k-mer.md b/_posts/2024-02-11-k-mer.md deleted file mode 100644 index c3e4a17..0000000 --- a/_posts/2024-02-11-k-mer.md +++ /dev/null @@ -1,140 +0,0 @@ ---- -title: "Navigating the genome using k-mers for DNA analysis and visualization" -permalink: /navigating-the-genome-using-k-mers-for-dna-analysis-and-visualization.html -date: 2024-02-11T01:04:28+02:00 -layout: post -type: post -mathjax: yes -draft: true ---- - -## Brief introduction to K-mer - -A "k-mer" refers to all the possible substrings of length \\(k\\) contained in a -string, which is commonly used in computational biology and bioinformatics. In -the context of DNA, RNA, or protein sequences, a k-mer is a sequence of \\(k\\) -nucleotides (for DNA and RNA) or amino acids (for proteins). - -The concept of k-mers is fundamental in various bioinformatics applications, -including genome assembly, sequence alignment, and identification of repeat -sequences. By analyzing the frequency and distribution of k-mers within a -sequence or set of sequences, researchers can infer structural characteristics, -identify genetic variants, and compare genomic or proteomic compositions between -different organisms or conditions.
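As a quick illustration of the definition above, the k-mers of a short string can be enumerated directly. This is my own minimal sketch (the helper name `kmers` is hypothetical, not from any bioinformatics library):

```python
def kmers(sequence, k):
    # Slide a window of width k across the sequence, one position at a time.
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# A sequence of length n yields n - k + 1 k-mers.
print(kmers("ATGCA", 3))  # ['ATG', 'TGC', 'GCA']
```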
- -For example, in genome assembly, k-mers are used to reconstruct the sequence of -a genome from a collection of short sequencing reads. By finding overlaps -between the k-mers derived from these reads, assembly algorithms can piece -together contiguous sequences (contigs), which represent longer sections of the -genome. - -The choice of \\(k\\) (the length of the k-mer) is crucial and depends on the -specific application. A larger \\(k\\) provides more specificity (useful for -distinguishing between closely related sequences), while a smaller \\(k\\) -offers greater sensitivity (useful for detecting repeats or low-complexity -regions). However, the computational resources required increase with \\(k\\), -as there are \\(4^k\\) possible k-mers for nucleotide sequences (due to the four -types of nucleotides: A, T, C, G) and \\(20^k\\) for amino acid sequences (due -to the twenty standard amino acids). - -## K-mer counting - -K-mer counting is a fundamental process in bioinformatics used for analyzing the -frequency of k-mers (subsequences of length \\(k\\)) in DNA, RNA, or protein -sequences. Efficient k-mer counting is crucial for various applications such as -genome assembly, metagenomics, and sequence comparison. The implementation -typically involves parsing a sequence into all possible k-mers and then counting -the occurrences of each unique k-mer. Here's a general approach to implementing -k-mer counting: - -### Reading the Sequences - -The first step involves reading the genetic or protein sequences from files, -which are often in formats like FASTA or FASTQ. These files contain one or -multiple sequences that will be processed to extract k-mers. - -### Generating K-mers - -For each sequence, generate all possible subsequences of length \\(k\\). This is -done by sliding a window of size \\(k\\) across the sequence, one nucleotide (or -amino acid) at a time, and extracting the subsequence within this window. 
- -### Counting K-mers - -The extracted k-mers are then counted. This can be achieved using various data -structures: - -- **Hash Tables (Dictionaries)**: They offer an efficient way to keep track of - k-mer counts, with k-mers as keys and their frequencies as values. This - approach is straightforward but can become memory-intensive with large - datasets or large values of \\(k\\). -- **Suffix Trees or Arrays**: These data structures are more space-efficient for - k-mer counting, especially for large datasets. They allow for efficient - retrieval of k-mer occurrences but are more complex to implement. -- **Bloom Filters and Count-Min Sketch**: For very large datasets, probabilistic - data structures like Bloom filters or Count-Min Sketch can estimate k-mer - counts using significantly less memory, at the cost of a controlled error - rate. - -### Handling Memory and Performance Issues - -K-mer counting can be memory-intensive, especially for large values of \\(k\\) or -large datasets. Optimizations include: - -- **Compressing K-mers**: Representing k-mers using a binary format rather than - strings can save memory. -- **Parallel Processing**: Distributing the k-mer counting task across multiple - processors or machines can significantly speed up the process. -- **Minimizing I/O Operations**: Efficiently reading and processing sequences - from files in chunks reduces I/O overhead. - -### Post-processing - -After counting, the k-mer frequencies can be used directly for analyses or can -undergo further processing, such as filtering rare k-mers, which are often -errors, or normalizing counts for comparative analysis. 
- -### Implementation Example - -Here's a simple Python example using a dictionary for k-mer counting: - -```python -def count_kmers(sequence, k): - kmer_counts = {} - for i in range(len(sequence) - k + 1): - kmer = sequence[i:i+k] - if kmer in kmer_counts: - kmer_counts[kmer] += 1 - else: - kmer_counts[kmer] = 1 - return kmer_counts - -# Example usage -sequence = "ATGCGATGATCTGATG" -k = 3 -kmer_counts = count_kmers(sequence, k) -print(kmer_counts) -``` - -This code snippet counts the occurrences of each 3-mer in a given sequence. - -For real-world applications, especially those involving large datasets, consider -using specialized bioinformatics tools like Jellyfish, KMC, or khmer, which are -optimized for efficiency and scalability. - -Now that we have the basics out of the way, we can start implementing a basic k-mer -counter in C. - -## Implementing sequence reading in C - -## Additional reading material - -- [2101.08385](https://arxiv.org/pdf/2101.08385.pdf) - Motif Identification using CNN-based Pairwise -- [2112.15107](https://arxiv.org/pdf/2112.15107.pdf) - Probabilistic Models of k-mer Frequencies -- [2205.13915](https://arxiv.org/pdf/2205.13915.pdf) - DiMA: Sequence Diversity Dynamics Analyser for Viruses -- [2209.09242](https://arxiv.org/pdf/2209.09242.pdf) - Computing Phylo-k-mers -- [2305.07545](https://arxiv.org/pdf/2305.07545.pdf) - KmerCo: A lightweight K-mer counting technique with a tiny memory footprint -- [2308.01920](https://arxiv.org/pdf/2308.01920.pdf) - Sequence-Based Nanobody-Antigen Binding -- [2310.10321](https://arxiv.org/pdf/2310.10321.pdf) - Hamming Encoder: Mining Discriminative k-mers for Discrete Sequence Classification -- [2312.03865](https://arxiv.org/pdf/2312.03865.pdf) - Learning Genomic Sequence Representations using Graph Neural Networks over De Bruijn Graphs -- [2401.14025](https://arxiv.org/pdf/2401.14025.pdf) - DNA Sequence Classification with Compressors diff --git a/_posts/2024-02-15-extract-lines-from-file.md
b/_posts/2024-02-15-extract-lines-from-file.md deleted file mode 100644 index 45df9da..0000000 --- a/_posts/2024-02-15-extract-lines-from-file.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: "Extract lines from a file with sed" -permalink: /extract-lines-from-file-with-sed.html -date: 2024-02-15T10:04:28+02:00 -layout: post -type: note -draft: false ---- - -An easy way to extract line ranges (from line 200 to line 210) with sed. - -```sh -sed -n '200,210p' data/Homo_sapiens.GRCh38.dna.chromosome.18.fa - -# then redirect it to a new file with - -sed -n '200,210p' data/Homo_sapiens.GRCh38.dna.chromosome.18.fa > new.fa -``` - -`head` or `tail` could be used to extract from the beginning or the end of the file. diff --git a/_posts/2024-02-21-dcss-online-rc-defaults.md b/_posts/2024-02-21-dcss-online-rc-defaults.md deleted file mode 100644 index cf12109..0000000 --- a/_posts/2024-02-21-dcss-online-rc-defaults.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: "Sane defaults for Dungeon Crawl Stone Soup Online edition" -permalink: /dcss-online-rc-defaults.html -date: 2024-02-21T06:35:11+02:00 -layout: post -type: note -draft: false -tags: [dcss] ---- - -I mostly play Dungeon Crawl Stone Soup online on the cbro.berotato.org server (Ohio, USA) and -when you start playing you can select the version you want to play. Each instance also -has an `rc` file that can customize the way the game behaves. - -This is my sane defaults config. It zooms in the game without needing to zoom in the -browser, adds a bit of delay to exploring, and stops automatic actions when a fight starts.
- -```ini -autofight_stop = 80 -explore_auto_rest = true -explore_delay = 20 - -tile_cell_pixels = 48 -tile_font_crt_size = 24 -tile_font_stat_size = 24 -tile_font_msg_size = 24 -tile_font_tip_size = 24 -tile_font_lbl_size = 24 -tile_map_pixels = 0 -tile_filter_scaling = false -``` - -All the possible options are documented in the [Dungeon Crawl Stone Soup Options -Guide](https://github.com/crawl/crawl/blob/master/crawl-ref/docs/options_guide.txt) -file. diff --git a/_posts/2024-02-23-uninstall-ollama-from-a-linux-box.md b/_posts/2024-02-23-uninstall-ollama-from-a-linux-box.md deleted file mode 100644 index fffd458..0000000 --- a/_posts/2024-02-23-uninstall-ollama-from-a-linux-box.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Uninstall Ollama from a Linux box -permalink: /uninstall-ollama-from-a-linux-box.html -date: 2024-02-23 -layout: post -draft: false -type: note ---- -I have had some issues with Ollama not being up-to-date. If Ollama is installed with a curl command, it adds a systemd service. - -```sh -sudo systemctl stop ollama -sudo systemctl disable ollama -sudo rm /etc/systemd/system/ollama.service -sudo systemctl daemon-reload - -sudo rm /usr/local/bin/ollama - -sudo userdel ollama -sudo groupdel ollama - -rm -r ~/.ollama -sudo rm -rf /usr/share/ollama -``` - -That is about it. \ No newline at end of file diff --git a/_posts/notes/2022-08-13-algae-spotted-on-river-sava.md b/_posts/notes/2022-08-13-algae-spotted-on-river-sava.md new file mode 100644 index 0000000..02314f4 --- /dev/null +++ b/_posts/notes/2022-08-13-algae-spotted-on-river-sava.md @@ -0,0 +1,31 @@ +--- +title: Aerial photography of algae spotted on river Sava +permalink: /aerial-photography-of-algae-spotted-on-river-sava.html +date: 2022-08-13T12:00:00+02:00 +layout: post +type: note +draft: false +--- + +This is a bit of a different post than I usually write, but quite an interesting +one to me. River Sava has plenty of hydropower plants located along its stream.
+This makes regulating the strength of the current easier than normal. Because of +the lower stream strength and high temperatures, algae has formed on the river. +This is the first time I've seen something like this in my whole life. + +Below are some photographs taken from a DJI drone capturing the event. + +![Algae on Sava](/assets/posts/algae-sava/dji-algae-0.jpg){:loading="lazy"} + +![Algae on Sava](/assets/posts/algae-sava/dji-algae-1.jpg){:loading="lazy"} + +![Algae on Sava](/assets/posts/algae-sava/dji-algae-2.jpg){:loading="lazy"} + +![Algae on Sava](/assets/posts/algae-sava/dji-algae-3.jpg){:loading="lazy"} + +![Algae on Sava](/assets/posts/algae-sava/dji-algae-4.jpg){:loading="lazy"} + +![Algae on Sava](/assets/posts/algae-sava/dji-algae-5.jpg){:loading="lazy"} + +I will try to get more photos of this in the coming days, and if something +intriguing shows up I will post it on the blog again. diff --git a/_posts/notes/2023-05-01-cachebusting-in-hugo.md b/_posts/notes/2023-05-01-cachebusting-in-hugo.md new file mode 100644 index 0000000..f8d92b2 --- /dev/null +++ b/_posts/notes/2023-05-01-cachebusting-in-hugo.md @@ -0,0 +1,18 @@ +--- +title: Cache busting in Hugo +permalink: /cachebusting-in-hugo.html +date: 2023-05-01T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [hugo] +--- + +```html +\{\{ $cachebuster := delimit (shuffle (split (md5 "6fab11c6669976d759d2992eff1dd5be") "" )) "" \}\} + + +``` + +This `6fab11c6669976d759d2992eff1dd5be` can be any random string you generate. +You can use whatever you want. diff --git a/_posts/notes/2023-05-05-run-9front-in-qemu.md b/_posts/notes/2023-05-05-run-9front-in-qemu.md new file mode 100644 index 0000000..853b2c1 --- /dev/null +++ b/_posts/notes/2023-05-05-run-9front-in-qemu.md @@ -0,0 +1,29 @@ +--- +title: Run 9front in Qemu +permalink: /run-9front-in-qemu.html +date: 2023-05-05T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [plan9, qemu] +--- + +Run 9front in Qemu.
This applies to [Plan9](https://9p.io/plan9/) and +[9front](https://9front.org/). + +Download the ISO from http://9front.org/iso/. + +```sh +# Create a qcow2 image. +qemu-img create -f qcow2 $HOME/VM/9front.qcow2.img 30G + +# Run the VM. +qemu-system-x86_64 -cpu host -enable-kvm -m 1024 \ + -net nic,model=virtio,macaddr=52:54:00:00:EE:03 -net user \ + -device virtio-scsi-pci,id=scsi \ + -drive if=none,id=vd0,file=$HOME/VM/9front.qcow2.img \ + -device scsi-hd,drive=vd0 \ + -drive if=none,id=vd1,file=$HOME/VM/ISO/9front.386.iso \ + -device scsi-cd,drive=vd1,bootindex=0 +``` + diff --git a/_posts/notes/2023-05-06-git-push-multiple-origins.md b/_posts/notes/2023-05-06-git-push-multiple-origins.md new file mode 100644 index 0000000..ce7e64b --- /dev/null +++ b/_posts/notes/2023-05-06-git-push-multiple-origins.md @@ -0,0 +1,18 @@ +--- +title: Push to multiple origins at once in Git +permalink: /git-push-multiple-origins.html +date: 2023-05-06T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [git] +--- + +Sometimes you want to push to multiple origins at once. This is useful if you +have a mirror of your repository on another server. The alias below iterates over +every configured remote and runs `git push --all` against each one. + +```sh +git config --global alias.pushall '!sh -c "git remote | xargs -L1 git push --all"' +``` + diff --git a/_posts/notes/2023-05-07-mount-plan9-over-network.md b/_posts/notes/2023-05-07-mount-plan9-over-network.md new file mode 100644 index 0000000..ad68e80 --- /dev/null +++ b/_posts/notes/2023-05-07-mount-plan9-over-network.md @@ -0,0 +1,24 @@ +--- +title: Mount Plan9 over network +permalink: /mount-plan9-over-network.html +date: 2023-05-07T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [plan9] +--- + +- First install libfuse with `sudo apt install libfuse-dev`. +- Then clone https://github.com/ftrvxmtrx/9pfs and compile it with `make`. +- Copy `9pfs` to your path.
+ +```sh +# On Plan9 side +ip/ipconfig # enables network +aux/listen1 -tv tcp!*!9999 /bin/exportfs -r tmp # export tmp folder + +# On Linux side +9pfs 172.18.0.1 -p 9999 local_folder # mount +umount local_folder # unmount +``` + diff --git a/_posts/notes/2023-05-08-write-iso-usb.md b/_posts/notes/2023-05-08-write-iso-usb.md new file mode 100644 index 0000000..9c0e9fb --- /dev/null +++ b/_posts/notes/2023-05-08-write-iso-usb.md @@ -0,0 +1,16 @@ +--- +title: Write ISO to USB Key +permalink: /write-iso-usb.html +date: 2023-05-08T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [linux] +--- + +Write ISO to USB key. Nothing fancy here. + +```sh +sudo dd if=iso_file.iso of=/dev/sdX bs=4M status=progress conv=fdatasync +``` + diff --git a/_posts/notes/2023-05-09-catv-weechat-config.md b/_posts/notes/2023-05-09-catv-weechat-config.md new file mode 100644 index 0000000..78d0907 --- /dev/null +++ b/_posts/notes/2023-05-09-catv-weechat-config.md @@ -0,0 +1,22 @@ +--- +title: "#cat-v on weechat configuration" +permalink: /catv-weechat-config.html +date: 2023-05-09T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [irc] +--- + +Set up weechat to connect to #cat-v on oftc. This applies to +[weechat](https://weechat.org/) but should be similar for other irc clients. + +```sh +# Install weechat and launch it and execute the following commands. + +/server add oftc irc.oftc.net -tls +/set irc.server.oftc.autoconnect on +/set irc.server.oftc.autojoin "#cat-v" +/set irc.server.oftc.nicks "nick1,nick2,nick3" +``` + diff --git a/_posts/notes/2023-05-10-plan9-screenshot.md b/_posts/notes/2023-05-10-plan9-screenshot.md new file mode 100644 index 0000000..5aa11bf --- /dev/null +++ b/_posts/notes/2023-05-10-plan9-screenshot.md @@ -0,0 +1,23 @@ +--- +title: Take a screenshot in Plan9 +permalink: /plan9-screenshot.html +date: 2023-05-10T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [plan9] +--- + +Take a screenshot in Plan9. 
This applies to [Plan9](https://9p.io/plan9/) and +[9front](https://9front.org/). This will take a screenshot of the screen and +output it to `/dev/screen`. You can then use `topng` to convert it to a png +image. + +```sh +# Instant screenshot. +cat /dev/screen | topng > screen.png + +# Delayed screenshot (5 seconds). +sleep 5; cat /dev/screen | topng > screen.png +``` + diff --git a/_posts/notes/2023-05-11-fix-plan9-bootloader.md b/_posts/notes/2023-05-11-fix-plan9-bootloader.md new file mode 100644 index 0000000..de030c9 --- /dev/null +++ b/_posts/notes/2023-05-11-fix-plan9-bootloader.md @@ -0,0 +1,21 @@ +--- +title: Fix bootloader not being written in Plan9 +permalink: /fix-plan9-bootloader.html +date: 2023-05-11T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [plan9] +--- + +If the bootloader is not being written to a disk when installing 9front on real +hardware, try clearing the first sector of the disk with the following command. + +```sh +dd if=/dev/zero of=/dev/sdX bs=512 count=1 + +# If the command above doesn't work try this one; wait a couple of seconds and +# press the delete key to stop the command. +cat /dev/sd*/data +``` + diff --git a/_posts/notes/2023-05-12-install-plan9port-linux.md b/_posts/notes/2023-05-12-install-plan9port-linux.md new file mode 100644 index 0000000..c1cce46 --- /dev/null +++ b/_posts/notes/2023-05-12-install-plan9port-linux.md @@ -0,0 +1,22 @@ +--- +title: Install Plan9port on Linux +permalink: /install-plan9port-linux.html +date: 2023-05-12T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [plan9] +--- + +Install Plan9port on Linux. This applies to +[Plan9port](https://9fans.github.io/plan9port/), a port of many Plan 9 +programs to Unix-like operating systems. Useful for programs like `9term` and
+ +```sh +sudo apt-get install gcc libx11-dev libxt-dev libxext-dev libfontconfig1-dev +git clone https://github.com/9fans/plan9port $HOME/plan9 +cd $HOME/plan9/plan9port +./INSTALL -r $HOME/plan9 +``` + diff --git a/_posts/notes/2023-05-13-download-youtube-videos.md b/_posts/notes/2023-05-13-download-youtube-videos.md new file mode 100644 index 0000000..9ed8221 --- /dev/null +++ b/_posts/notes/2023-05-13-download-youtube-videos.md @@ -0,0 +1,26 @@ +--- +title: Download list of YouTube files +permalink: /download-youtube-videos.html +date: 2023-05-13T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [youtube] +--- + +If you need to download a list of YouTube videos and don't want to download the +actual YouTube list (which `yt-dlp` supports), you can use the following method. + +```js +// Used to get list of raw URL's from YouTube's video tab'. +// Copy them into videos.txt. +document.querySelectorAll('#contents a.ytd-thumbnail.style-scope.ytd-thumbnail').forEach(el => console.log(el.href)) +``` + +Download and install https://github.com/yt-dlp/yt-dlp. + +```sh +# This will download all videos in videos.txt. +yt-dlp --batch-file videos.txt -N `nproc` -f webm +``` + diff --git a/_posts/notes/2023-05-14-convert-mkv.md b/_posts/notes/2023-05-14-convert-mkv.md new file mode 100644 index 0000000..7cc6189 --- /dev/null +++ b/_posts/notes/2023-05-14-convert-mkv.md @@ -0,0 +1,23 @@ +--- +title: Convert all MKV files into other formats +permalink: /convert-mkv.html +date: 2023-05-14T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [ffmpeg] +--- + +You will need `ffmpeg` installed on your system. This will convert all MKV files +into WebM format. + +```sh +# Convert all MKV files into WebM format. +find ./ -name '*.mkv' -exec bash -c 'ffmpeg -i "$0" -vcodec libvpx -acodec libvorbis -cpu-used 5 -threads 8 "${0%%.mp4}.webm"' {} \; +``` + +```sh +# Convert all MKV files into MP4 format. 
+find ./ -name '*.mkv' -exec bash -c 'ffmpeg -i "$0" -c:a copy -c:v copy "${0%%.mkv}.mp4"' {} \; +``` + diff --git a/_posts/notes/2023-05-15-preview-troff-man-pages.md b/_posts/notes/2023-05-15-preview-troff-man-pages.md new file mode 100644 index 0000000..2f0ca82 --- /dev/null +++ b/_posts/notes/2023-05-15-preview-troff-man-pages.md @@ -0,0 +1,21 @@ +--- +title: Preview how a man page written in Troff will look +permalink: /preview-troff-man-pages.html +date: 2023-05-15T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [troff] +--- + +Troff is used to write man pages and is difficult to read raw, so this will +preview what the page will look like when rendered. + +```sh +# On Linux system. +groff -man -Tascii filename + +# On Plan9 system. +man 1 filename +``` + diff --git a/_posts/notes/2023-05-16-mass-set-permission.md b/_posts/notes/2023-05-16-mass-set-permission.md new file mode 100644 index 0000000..654d9d1 --- /dev/null +++ b/_posts/notes/2023-05-16-mass-set-permission.md @@ -0,0 +1,17 @@ +--- +title: Change permissions of matching files recursively +permalink: /mass-set-permission.html +date: 2023-05-16T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [linux] +--- + +Replace `*.xml` with your pattern. This will remove the executable bit from all +files matching the pattern. Change `-x` to `+x` to add the executable bit. + +```sh +find .
-type f -name "*.xml" -exec chmod -x {} + +``` + diff --git a/_posts/notes/2023-05-22-non-blocking-shell-exec-csharp.md b/_posts/notes/2023-05-22-non-blocking-shell-exec-csharp.md new file mode 100644 index 0000000..f8b9c53 --- /dev/null +++ b/_posts/notes/2023-05-22-non-blocking-shell-exec-csharp.md @@ -0,0 +1,45 @@ +--- +title: Execute a non-blocking async shell command in C# +permalink: /non-blocking-shell-exec-csharp.html +date: 2023-05-22T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [csharp] +--- + +Execute a shell command asynchronously in C# without blocking the UI thread. + +```c# +private async Task executeCopyCommand() +{ + await Task.Run(() => + { + var processStartInfo = new ProcessStartInfo("cmd", "/c dir") + { + RedirectStandardOutput = true, + UseShellExecute = false, + CreateNoWindow = true + }; + + var process = new Process + { + StartInfo = processStartInfo + }; + + process.Start(); + process.WaitForExit(); + }); +} +``` + +Make sure that `async` is present in the function definition and `await` is used +in the method that calls `executeCopyCommand()`. + +```c# +private async void button_Click(object sender, EventArgs e) +{ + await executeCopyCommand(); +} +``` + diff --git a/_posts/notes/2023-05-23-extend-lua-with-custom-c.md b/_posts/notes/2023-05-23-extend-lua-with-custom-c.md new file mode 100644 index 0000000..604d359 --- /dev/null +++ b/_posts/notes/2023-05-23-extend-lua-with-custom-c.md @@ -0,0 +1,55 @@ +--- +title: Extend Lua with custom C functions using Clang +permalink: /extend-lua-with-custom-c.html +date: 2023-05-23T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [lua, c] +--- + +Here is a boilerplate for extending Lua with custom C functions. This requires +Clang and Lua 5.1 to be installed. GCC can be used instead of Clang, but the +Makefile will need to be modified.
- nativefunc.c + + ```c + #include <lua.h> + #include <lauxlib.h> + + static int l_mult50(lua_State *L) { + double number = luaL_checknumber(L, 1); + lua_pushnumber(L, number * 50); + return 1; + } + + int luaopen_nativefunc(lua_State *L) { + static const struct luaL_Reg nativeFuncLib[] = { {"mult50", l_mult50}, {NULL, NULL} }; + + luaL_register(L, "nativelib", nativeFuncLib); + return 1; + } + ``` + +- main.lua + + ```lua + require "nativefunc" + print(nativelib.mult50(50)) + ``` + +- Makefile + + ```Makefile + CC = clang + CFLAGS = + INCLUDES = `pkg-config lua5.1 --cflags-only-I` + + all: + $(CC) -shared -o nativefunc.so -fPIC nativefunc.c $(CFLAGS) $(INCLUDES) + + clean: + rm *.so + ``` + diff --git a/_posts/notes/2023-05-23-parse-rss-with-lua.md b/_posts/notes/2023-05-23-parse-rss-with-lua.md new file mode 100644 index 0000000..ea8ce8c --- /dev/null +++ b/_posts/notes/2023-05-23-parse-rss-with-lua.md @@ -0,0 +1,41 @@ +--- +title: Parse RSS feeds with Lua +permalink: /parse-rss-with-lua.html +date: 2023-05-23T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [lua, rss] +--- + +Example of parsing RSS feeds with Lua. Before running the script install: + +- feedparser with `luarocks install feedparser` +- luasocket with `luarocks install luasocket` + +```lua +local http = require("socket.http") +local feedparser = require("feedparser") + +local feed_url = "https://mitjafelicijan.com/index.xml" + +local response, status, _ = http.request(feed_url) +if status == 200 then + local parsed = feedparser.parse(response) + + -- Print out feed details. + print("> Title ", parsed.feed.title) + print("> Author ", parsed.feed.author) + print("> ID ", parsed.feed.id) + print("> Entries ", #parsed.entries) + + for _, item in ipairs(parsed.entries) do + print("GUID ", item.guid) + print("Title ", item.title) + print("Link ", item.link) + print("Summary ", item.summary) + end +else + print("! Request failed.
Status:", status) +end +``` diff --git a/_posts/notes/2023-05-24-fresh-9front-desktop.md b/_posts/notes/2023-05-24-fresh-9front-desktop.md new file mode 100644 index 0000000..5da89e7 --- /dev/null +++ b/_posts/notes/2023-05-24-fresh-9front-desktop.md @@ -0,0 +1,15 @@ +--- +title: My brand new Plan9/9front desktop +permalink: /fresh-9front-desktop.html +date: 2023-05-24T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [plan9] +--- + +I have been experimenting with Plan9/9front for a week now. Noice! This is how +my desktop looks like. + +![9front desktop](/assets/notes/9front-desktop.png){:loading="lazy"} + diff --git a/_posts/notes/2023-05-25-dcss-new-player-guide.md b/_posts/notes/2023-05-25-dcss-new-player-guide.md new file mode 100644 index 0000000..dd63f79 --- /dev/null +++ b/_posts/notes/2023-05-25-dcss-new-player-guide.md @@ -0,0 +1,99 @@ +--- +title: Dungeon Crawl Stone Soup - New player guide +permalink: /dcss-new-player-guide.html +date: 2023-05-25T22:00:00+02:00 +layout: post +type: note +draft: false +tags: [dcss] +--- + +An amazing game deserves an amazing guide. All this material can be find in some +form on another on [craw's](https://github.com/crawl/crawl) official repository. + +- [DCSS Quickstart](/assets/notes/dcss-quickstart.pdf) - Very short introduction to the + game +- [DCSS Manual](/assets/notes/dcss_manual.pdf) - Extensive manual about the game + +![Dungeon Crawl Stone Soup](/assets/notes/dcss.jpg){:loading="lazy"} + +**Movement and Exploration** + +- You can move around with the numpad (try numlock on and off), vi-keys, or + clicking with the mouse. Arrow keys work, though you can't move diagonally + with them. Pressing Shift and a direction will move until you see/hit + something. +- Pressing `>` will take you down a staircase, and `<` to go up a staircase. +- You can open doors by walking into them, and close them with `C`. +- You can autoexplore by pressing `o`. +- You can re-view recent messages with `Ctrl-p`. 
+ +**Monsters and Combat** + +- You can pick up items with `,` or `g`. +- Wield weapons with `w`. Weapons have different stats. + - (You may also engage in Unarmed Combat, though it isn't very effective when + untrained). +- Attack monsters in melee by walking in their direction (or with + Ctrl-direction). +- You can wait with `.` or `s`, passing your turn - such as to get monsters into + a corridor with you. +- You can rest with `5`, waiting until you are fully healed, or something + noteworthy happens. +- To examine a monster, either mouse over and right-click it, or use `x` then + `v`. Monsters with a red border are 'dangerous' relative to your current + XP level (XL). +- Quiver (often ranged) actions for further use with `Q`. +- You can fire ranged weapons manually with `f`, or auto-target your quiver with + `p` or `Shift-Tab`. Throwing weapons can be thrown immediately, while + launchers (like bows) need to be wielded first. + +**Items and Inventory** + +- View your inventory by pressing `i`. Most item-related commands can also be + done with this menu. +- You can wear armour with `W`; armour gives `AC`, while heavier body armour + reduces `EV`. +- Autoexplore will automatically pick up useful items, such as potions and + scrolls, if you aren't in danger. +- You can read scrolls with `r` and drink ("quaff") potions with `q`. +- Equipment items may have brands, with special properties. Branded equipment is + blue when unidentified. +- Equipment items may be artifacts, often with unique properties, and are + unmodifiable. They are written in white. +- You can evoke wands with `V`. +- You can put on jewelry with `P`, and remove it with `R`. +- Gold is used in shops, which can be interacted with by either `>` or `<`. + +**Magic and Spellcasting** + +- Once you find a spellbook, you can memorize spells with `M`. +- You need to be the same XL as the spell's spell level in order to learn it, in + addition to training magical skill (to lower failure rate). 
+- Cast spells by pressing `z`, then the letter assigned to the spell. You may + also Quiver a spell and then use it like a ranged weapon (with Shift-Tab). +- You can view your memorized spells by pressing `I` (capital-i) or `z`. +- Like HP, you can recover MP by resting (with 5). +- Many spells can be positioned more effectively, or combined with other spells, + in order to get more use out of them. +- Heavier body armour and shields hamper spellcasting. + +**Gods and Divine Abilities** + +- You may look at a god's overview by praying at their altar (with `>` or `<`). + After praying, you can worship the god by pressing Enter. +- Gods all have unique features about them. Trog, the god of the tutorial, is + also the god of rage and bloodshed, and so despises spellcasting. +- Gods like and dislike different things. Most gods either like killing things + (like Trog) or exploring new areas (like Elyvilon), rewarding you with piety + (divine favor) for doing so. +- You should learn to use and even rely on divine abilities often, as they are + usually very strong. Trog's Berserk gives you 1.5x health, 1.5x speed (to all + valid actions), and a big damage boost. Note that Berserk prevents most + actions other than move and melee attack, and runs out very quickly if you + aren't attacking. And after berserk ends, you are slowed down and can't + berserk again for a short time. +- In addition, the vast majority of abilities consume piety in the process. + Regardless, Berserk is very cheap, and the benefits are incredible, so + don't hold back! +- Pressing `^` will let you view your current god, abilities, and piety. 
diff --git a/_posts/notes/2023-05-25-show-xterm-colors.md b/_posts/notes/2023-05-25-show-xterm-colors.md new file mode 100644 index 0000000..56050fd --- /dev/null +++ b/_posts/notes/2023-05-25-show-xterm-colors.md @@ -0,0 +1,85 @@ +--- +title: Display xterm color palette +permalink: /xterm-color-palette.html +date: 2023-05-25T12:00:00+02:00 +layout: post +type: note +draft: false +tags: [linux] +--- + +- `bash xterm-palette.sh` - will show you the number of available colors +- `bash xterm-palette.sh -v` - will create a list of all colors with codes + +![xterm color palette](/assets/notes/xterm-palette.png){:loading="lazy"} + +```sh +#!/usr/bin/env bash +# xterm-palette.sh + +trap 'tput sgr0' exit # Clean up even if user hits ^C + +function setfg () { + printf '\e[38;5;%dm' $1 +} + +function setbg () { + printf '\e[48;5;%dm' $1 +} + +function showcolors() { + # Given an integer, display that many colors + for ((i=0; i<$1; i++)) + do + printf '%4d ' $i + setbg $i + tput el + tput sgr0 + echo + done + tput sgr0 el +} + +# First, test if terminal supports OSC 4 at all. +printf '\e]4;%d;?\a' 0 +read -d $'\a' -s -t 0.1 #include <u.h> +#include <libc.h> +#include <draw.h> + +void +main() +{ + ulong co; + Image *im, *bg; + co = 0x0000FFFF; + + if (initdraw(nil, nil, argv0) < 0) + { + sysfatal("%s: %r", argv0); + } + + im = allocimage(display, Rect(0, 0, 300, 300), RGB24, 0, DYellow); + bg = allocimage(display, Rect(0, 0, 1, 1), RGB24, 1, co); + + if (im == nil || bg == nil) + { + sysfatal("not enough memory"); + } + + draw(screen, screen->r, bg, nil, ZP); + draw(screen, screen->r, im, nil, Pt(-40, -40)); + + flushimage(display, Refnone); + + // Wait 10 seconds before exiting. + sleep(10000); + + exits(nil); +} +``` + +And then compile with `mk` (mkfile below): + +```makefile +# mkfile + + + + + + + + + + + + + + + + +``` + +Now the markdown file `presentation.md` with the presentation. `---` is used to +separate slides. Other stuff is just pure markdown. 
+ +```md +class: center, middle + +# Main title of the presentation + +--- + +# First slide + +Eveniet mollitia nemo architecto rerum aut iure iste. Sit nihil nobis libero iusto fugit nam laudantium ut. Dignissimos corrupti laudantium nisi. + +- Lorem ipsum dolor sit amet, consectetur adipiscing elit. +- Integer aliquet mauris a felis fringilla, ut congue massa finibus. + +--- + +# Slide two + +- Lorem ipsum dolor sit amet, consectetur adipiscing elit. +- Vestibulum eget leo ac dolor venenatis pulvinar. +``` diff --git a/_posts/notes/2023-06-24-making-cgit-look-nicer.md b/_posts/notes/2023-06-24-making-cgit-look-nicer.md new file mode 100644 index 0000000..0140a3e --- /dev/null +++ b/_posts/notes/2023-06-24-making-cgit-look-nicer.md @@ -0,0 +1,207 @@ +--- +title: "Making cgit look nicer" +permalink: /making-cgit-look-nicer.html +date: 2023-06-24T13:33:58+02:00 +layout: post +type: note +draft: false +tags: [git] +--- + +For personal use I have a [private Git server](https://git.mitjafelicijan.com) +set up and I use GitHub just as a mirror. By default the cgit theme looks a bit +dated so I made the following theme. + +- `/etc/cgitrc` + +```ini +css=/cgit.css +logo=/startrek.gif +favicon=/favicon.png +source-filter=/usr/lib/cgit/filters/syntax-highlighting-edited.sh +about-filter=/usr/lib/cgit/filters/about-formatting.sh + +local-time=1 +snapshots=tar.gz +repository-sort=age +cache-size=1000 +branch-sort=age +summary-log=200 +max-atom-items=50 +max-repo-count=100 + +enable-index-owner=0 +enable-follow-links=1 +enable-log-filecount=1 +enable-log-linecount=1 + +root-title=Place for code, experiments and other bullshit! +root-desc= +clone-url=git@git.mitjafelicijan.com:/home/git/$CGIT_REPO_URL + +mimetype.gif=image/gif +mimetype.html=text/html +mimetype.jpg=image/jpeg +mimetype.jpeg=image/jpeg +mimetype.pdf=application/pdf +mimetype.png=image/png +mimetype.svg=image/svg+xml + +readme=:README.md +readme=:readme.md + +# Must be at the end! 
+virtual-root=/ +scan-path=/home/git/ +``` + +For `syntax-highlighting-edited.sh` follow instructions on +[https://wiki.archlinux.org/title/Cgit](https://wiki.archlinux.org/title/Cgit#Using_highlight). + +- `/usr/share/cgit/cgit.css` + +```css +* { + font-size: 11pt; +} + +body { + font-family: monospace; + background: white; + padding: 1em; +} + +th, td { + text-align: left; +} + +/* HEADER */ + +#header { + margin-bottom: 1em; +} + +#header .logo img { + display: block; + height: 3em; + margin-right: 10px; +} + +#header .sub.right { + display: none; +} + +/* FOOTER */ + +.footer { + margin-top: 2em; + font-style: italic; +} + +.footer, .footer a { + color: gray; +} + +/* TABS */ + +.tabs a { + margin-bottom: 2em; + display: inline-block; + margin-right: 1em; +} + +.tabs td a:only-child { + display: none; +} + +/* HIDING ELEMENTS */ + +.cgit-panel, .form { + display: none; +} + +/* LISTS */ + +.list td, .list th { + padding-right: 2em; +} + +.list .nohover a { + color: black; +} + +.list .button { + padding-right: 0.5em; +} + +/* COMMIT */ + +.commit-subject { + padding: 1em 0; +} + +.decoration a { + padding-left: 0.5em; +} + +.commit-info th { + padding-right: 1em; +} + +.commit-subject { + padding: 2em 0; +} + +table.diff div.head { + padding-top: 2em; +} + +table.diffstat td { + padding-right: 1em; +} + +/* CONTENT */ + +.linenumbers { + padding-right: 0.5em; +} + +.linenumbers a { + color: gray; +} + +.pager { + display: flex; + list-style-type: none; + padding: 0; + gap: 0.5em; +} + +/* DIFF COLORS */ + +table.diff { + width: 100%; +} + +table.diff td { + white-space: pre; +} + +table.diff td div.head { + font-weight: bold; + margin-top: 1em; + color: black; +} + +table.diff td div.hunk { + color: #009; +} + +table.diff td div.add { + color: green; +} + +table.diff td div.del { + color: red; +} +``` diff --git a/_posts/notes/2023-06-25-alacritty-open-links-with-modifier.md b/_posts/notes/2023-06-25-alacritty-open-links-with-modifier.md new file mode 100644 
index 0000000..a26dd14 --- /dev/null +++ b/_posts/notes/2023-06-25-alacritty-open-links-with-modifier.md @@ -0,0 +1,36 @@ +--- +title: "Alacritty open links with modifier" +permalink: /alacritty-open-links-with-modifier.html +date: 2023-06-25T17:17:16+02:00 +layout: post +type: note +draft: false +tags: [linux] +--- + +Alacritty by default makes all links in the terminal output clickable and this +gets annoying rather quickly. I liked the default behavior of Gnome terminal +where you needed to hold Control key and then you could click and open links. + +To achieve this in Alacritty you need to provide a `hint` in the configuration +file. Config file is located at `~/.config/alacritty/alacritty.yml`. + +```yaml +hints: + enabled: + - regex: "(mailto:|gemini:|gopher:|https:|http:|news:|file:|git:|ssh:|ftp:)\ + [^\u0000-\u001F\u007F-\u009F<>\"\\s{-}\\^⟨⟩`]+" + command: xdg-open + post_processing: true + mouse: + enabled: true + mods: Control +``` + +The following should work under any Linux system. For macOS, you will need to +change `command: xdg-open` to something else. + +Now the links will be visible and clickable only when Control key is being +pressed. + +Source: https://github.com/alacritty/alacritty/issues/5246 diff --git a/_posts/notes/2023-06-25-development-environments-with-nix.md b/_posts/notes/2023-06-25-development-environments-with-nix.md new file mode 100644 index 0000000..a905f10 --- /dev/null +++ b/_posts/notes/2023-06-25-development-environments-with-nix.md @@ -0,0 +1,69 @@ +--- +title: "Development environments with Nix" +permalink: /development-environments-with-nix.html +date: 2023-06-25T16:38:10+02:00 +layout: post +type: note +draft: false +tags: [random] +--- + +Nix is amazing for making reproducible cross OS development environment. + +First you need to [install Nix package +manager](https://nixos.org/download.html). + +- Create a file `shell.nix` in your project folder. +- In the section that has `python3` etc add programs you want to use. 
These can + be CLI or GUI applications. It doesn't matter to Nix. + +```nix +{ pkgs ? import {} }: + pkgs.mkShell { + nativeBuildInputs = with pkgs.buildPackages; [ + python3 + tinycc + ]; +} +``` + +And then run it `nix-shell`. By default it will look for `shell.nix` file. If +you want to specify a different file use `nix-shell file.nix`. That is about it. + +When the shell is spawned it could happen that your `PS1` prompt will be +overwritten and your prompt will look differently. In that case you need to +either do `NIX_SHELL_PRESERVE_PROMPT=1 nix shell` or add +`NIX_SHELL_PRESERVE_PROMPT` variable to your `bashrc` or `zshrc` file and set it +to `1`. + +I also have a modified `PS1` prompt for Bash that I use and it also catches the +usage of Nix shell. + +```sh +NIX_SHELL_PRESERVE_PROMPT=1 + +parse_git_branch() { + git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/' +} + +is_inside_nix_shell() { + nix_shell_name="$(basename "$IN_NIX_SHELL" 2>/dev/null)" + if [[ -n "$nix_shell_name" ]]; then + echo " \e[0;36m(nix-shell)\e[0m" + fi +} + +export PS1="[\033[38;5;9m\]\u@\h\[$(tput sgr0)\]]$(is_inside_nix_shell)\[\033[33m\]\$(parse_git_branch)\[\033[00m\] \w\[$(tput sgr0)\] \n$ " +``` + +And this is what it looks like when you are in a Nix shell. 
Otherwise that part +of prompt is omitted + +![PS1 Prompt](/assets/notes/ps1-prompt.png){:loading="lazy"} + +More resources: + +- https://nixos.wiki/wiki/Development_environment_with_nix-shell +- https://nixos.wiki/wiki/Main_Page +- https://itsfoss.com/why-use-nixos/ +- https://mynixos.com/ diff --git a/_posts/notes/2023-06-29-10gui-10-finger-multitouch-user-interface.md b/_posts/notes/2023-06-29-10gui-10-finger-multitouch-user-interface.md new file mode 100644 index 0000000..d4b8e54 --- /dev/null +++ b/_posts/notes/2023-06-29-10gui-10-finger-multitouch-user-interface.md @@ -0,0 +1,26 @@ +--- +title: "10/GUI 10 Finger Multitouch User Interface" +permalink: /10gui-10-finger-multitouch-user-interface.html +date: 2023-06-29T14:51:39+02:00 +layout: post +type: note +draft: false +tags: [graphics] +--- + +Message from 10/GUI team (page 10gui.com does not exist anymore): + +*Over a quarter-century ago, Xerox introduced the modern graphical user +interface paradigm we today take for granted.* + +*That it has endured is a testament to the genius of its design. But the +industry is now at a crossroads: New technologies promise higher-bandwidth +interaction, but have yet to find a truly viable implementation.* + +*10/GUI aims to bridge this gap by rethinking the desktop to leverage technology +in an intuitive and powerful way.* + + diff --git a/_posts/notes/2023-06-29-60s-ibm-computers-commercial.md b/_posts/notes/2023-06-29-60s-ibm-computers-commercial.md new file mode 100644 index 0000000..bddca2a --- /dev/null +++ b/_posts/notes/2023-06-29-60s-ibm-computers-commercial.md @@ -0,0 +1,18 @@ +--- +title: "60's IBM Computers Commercial" +permalink: /60s-ibm-computers-commercial.html +date: 2023-06-29T22:13:45+02:00 +layout: post +type: note +draft: false +tags: [random] +--- + +Likely aired during an hour-long program during the 1960s, long commercials such +as this typically aired during hour-long programs. They would *not* have aired +during a half-hour program. 
+ + diff --git a/_posts/notes/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md b/_posts/notes/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md new file mode 100644 index 0000000..fa88d99 --- /dev/null +++ b/_posts/notes/2023-07-10-fix-screen-tearing-on-debian-12-xorg-and-i3.md @@ -0,0 +1,23 @@ +--- +title: "Fix screen tearing on Debian 12 Xorg and i3" +permalink: /fix-screen-tearing-on-debian-12-xorg-and-i3.html +date: 2023-07-10T04:21:48+02:00 +layout: post +type: note +draft: false +--- + +I have been experiencing some issues with Intel® Integrated HD Graphics 3000 +under Debian 12 with Xorg and i3. Using `picom` compositor didn't help. To fix +this issue create new file `/etc/X11/xorg.conf.d/20-intel.conf` as root and put +the following in the file. + +```txt +Section "Device" + Identifier "Intel Graphics" + Driver "intel" + Option "TearFree" "true" +EndSection +``` + +Reboot the system and that should be it. diff --git a/_posts/notes/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md b/_posts/notes/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md new file mode 100644 index 0000000..60daca8 --- /dev/null +++ b/_posts/notes/2023-07-10-online-radio-streaming-with-mpv-from-terminal.md @@ -0,0 +1,15 @@ +--- +title: "Online radio streaming with MPV from terminal" +permalink: /online-radio-streaming-with-mpv-from-terminal.html +date: 2023-07-10T03:34:45+02:00 +layout: post +type: note +draft: false +--- + +Recently I have been using my Thinkpad x220 more and there are some constraints +I have faced with it. CPU is not as powerful as on my main machine and I really +want to listen to some music while using the machine. Browsers really are bloat. + +Check out this site https://streamurl.link/ and copy the stream url and then do +`mpv streamlink`. 
diff --git a/_posts/notes/2023-07-14-set-color-temperature-of-displays-on-i3.md b/_posts/notes/2023-07-14-set-color-temperature-of-displays-on-i3.md new file mode 100644 index 0000000..4618581 --- /dev/null +++ b/_posts/notes/2023-07-14-set-color-temperature-of-displays-on-i3.md @@ -0,0 +1,16 @@ +--- +title: "Set color temperature of displays on i3" +permalink: /set-color-temperature-of-displays-on-i3.html +date: 2023-07-14T09:19:31+02:00 +layout: post +type: note +draft: false +--- + +I have been using Gnome's night shift for a while now and I have been missing +this feature under i3wm. This can be done with +[redshift](https://linux.die.net/man/1/redshift). + +- On Debian install with `sudo apt install redshift` +- And then manually set it with `redshift -O 3000` +- Reset the current settings with `redshift -x` diff --git a/_posts/notes/2023-08-01-make-b-w-svg-charts-with-matplotlib.md b/_posts/notes/2023-08-01-make-b-w-svg-charts-with-matplotlib.md new file mode 100644 index 0000000..461842d --- /dev/null +++ b/_posts/notes/2023-08-01-make-b-w-svg-charts-with-matplotlib.md @@ -0,0 +1,71 @@ +--- +title: "Make B/W SVG charts with matplotlib" +permalink: /make-b-w-svg-charts-with-matplotlib.html +date: 2023-08-01T17:04:10+02:00 +layout: post +type: note +draft: false +--- + +Install pip requirements. + +```sh +pip install matplotlib +pip install pandas +``` + +Example of data being used. 
+ +```text +Epoch,Connect (NLB),Processing (NLB),Waiting (NLB),Total (NLB),Connect (ALB),Processing (ALB),Waiting (ALB),Total (ALB) +1,57.7,315.7,309.4,321.6,9,104.4,98.3,105.7 +2,121.9,114.4,100.3,176.9,5.8,99.1,97.1,101.1 +3,5.3,229.4,231.2,231.4,14.2,83,69.4,87.9 +4,4.2,134.5,112.2,135.3,5.3,132.4,105.5,134.1 +5,5.8,247.4,246.8,248.1,6,74.3,70.2,75.5 +6,9.9,122.9,100.6,122.7,7.5,241.1,79.3,242.3 +7,6.1,170.2,106.4,170.5,7.2,382.4,375.1,383.8 +8,6.6,194.3,201.4,195.5,7.1,130.9,104.8,132.6 +9,6.4,146.1,122.3,147.7,9.4,95.6,74,96.4 +``` + +In the code you can use `df` as dataframes and use the headers like `df["Epoch"]`. +This is how you get a column data with pandas. + +The Python code responsible for generating a chart: + +```python +import csv +import sys + +import matplotlib.pyplot as plt +import pandas as pd + +# Read the data +df = pd.read_csv("data.csv") + +# Settings +plt.title("Connect median NLB vs ALB") +plt.tight_layout(pad=2) +fig = plt.gcf() +fig.set_size_inches(10, 4) + +# Plotting +plt.plot(df["Epoch"], df["Connect (ALB)"], label = "ALB", color="black", linestyle="-") +plt.plot(df["Epoch"], df["Connect (NLB)"], label = "NLB", color="black", linestyle="--") + +# Adding x and y axis labels +plt.xlabel("Epoch", fontstyle="italic") +plt.ylabel("Median value (ms)", fontstyle="italic") + +# Legend +legend = plt.legend() +legend.get_frame().set_linewidth(0) + +# Export as SVG +plt.savefig("plot.svg", format="svg") +``` + +![SVG Chart](/assets/notes/plot.svg){:loading="lazy"} + +The image above is SVG and you can zoom in and out and check that the image is vector. 
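As a side note on the data-handling part: if pandas is not around, the same column extraction can be done with the standard library alone. A quick sketch (my own helper, not from the original post) using a few rows of the data above, trimmed to two columns:

```python
import csv
import io
import statistics

# A few rows of the CSV shown above, trimmed to two columns for brevity.
SAMPLE = """\
Epoch,Connect (NLB),Connect (ALB)
1,57.7,9
2,121.9,5.8
3,5.3,14.2
"""

def read_columns(fh):
    # Build a dict of column name -> list of floats,
    # roughly what df["..."] gives you in pandas.
    rows = list(csv.DictReader(fh))
    return {name: [float(row[name]) for row in rows] for name in rows[0]}

cols = read_columns(io.StringIO(SAMPLE))
print(statistics.median(cols["Connect (NLB)"]))  # 57.7
print(statistics.median(cols["Connect (ALB)"]))  # 9.0
```

For a real file, pass `open("data.csv", newline="")` instead of the `StringIO` sample.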
diff --git a/_posts/notes/2023-08-05-floods-in-slovenia.md b/_posts/notes/2023-08-05-floods-in-slovenia.md new file mode 100644 index 0000000..8b2354a --- /dev/null +++ b/_posts/notes/2023-08-05-floods-in-slovenia.md @@ -0,0 +1,20 @@ +--- +title: "Floods in Slovenia up close" +permalink: /floods-in-slovenia.html +date: 2023-08-05T07:06:50+02:00 +layout: post +type: note +draft: false +--- + + + + + +![](/assets/notes/floods/IMG_1469.webp){:loading="lazy"} + +![](/assets/notes/floods/IMG_1470.webp){:loading="lazy"} + + + + diff --git a/_posts/notes/2023-09-18-aws-eb-pyyaml-fix.md b/_posts/notes/2023-09-18-aws-eb-pyyaml-fix.md new file mode 100644 index 0000000..b1dd0cd --- /dev/null +++ b/_posts/notes/2023-09-18-aws-eb-pyyaml-fix.md @@ -0,0 +1,36 @@ +--- +title: "AWS EB PyYAML fix" +permalink: /aws-eb-pyyaml-fix.html +date: 2023-09-18T07:27:29+02:00 +layout: post +type: note +draft: false +--- + +Recent update of my system completely borked [EB CLI](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-advanced.html) +on my machine. + +I tried installing it with `pip install awsebcli --upgrade --user` and it failed. + +The error was the following. + +```text +Collecting PyYAML<6.1,>=5.3.1 (from awsebcli) + Using cached PyYAML-5.4.1.tar.gz (175 kB) + Installing build dependencies ... done + Getting requirements to build wheel ... error + error: subprocess-exited-with-error + + × Getting requirements to build wheel did not run successfully. + │ exit code: 1 + ╰─> [68 lines of output] +``` + +To fix this issue with PyYAML you must install PyYAML separately. + +Do the following and try installing `eb` again after. 
+ +```sh +echo 'Cython < 3.0' > /tmp/constraint.txt +PIP_CONSTRAINT=/tmp/constraint.txt pip install 'PyYAML==5.4.1' +``` diff --git a/_posts/notes/2023-09-25-compile-drawterm-on-fedora-38.md b/_posts/notes/2023-09-25-compile-drawterm-on-fedora-38.md new file mode 100644 index 0000000..57e1719 --- /dev/null +++ b/_posts/notes/2023-09-25-compile-drawterm-on-fedora-38.md @@ -0,0 +1,24 @@ +--- +title: "Compile drawterm on Fedora 38" +permalink: /compile-drawterm-on-fedora-38.html +date: 2023-09-25T09:04:28+02:00 +layout: post +type: note +draft: false +--- + +First install two dependencies: + +```sh +sudo dnf install libX11-devel libXt-devel +``` + +Clone the repo and compile it: + +```sh +git clone git://git.9front.org/plan9front/drawterm +cd drawterm +CONF=unix make +``` + +That should produce a `drawterm` binary. diff --git a/_posts/notes/2023-11-04-using-ffmpeg-to-combine-video-side-by-side.md b/_posts/notes/2023-11-04-using-ffmpeg-to-combine-video-side-by-side.md new file mode 100644 index 0000000..c47a726 --- /dev/null +++ b/_posts/notes/2023-11-04-using-ffmpeg-to-combine-video-side-by-side.md @@ -0,0 +1,41 @@ +--- +title: "Using ffmpeg to combine videos side by side" +permalink: /using-ffmpeg-to-combine-video-side-by-side.html +date: 2023-11-04T09:04:28+02:00 +layout: post +type: note +draft: false +--- + +I had 4 webm videos (each 492x451) that I wanted to combine to be played side +by side, and I tried [iMovie](https://support.apple.com/imovie) and +[Kdenlive](https://kdenlive.org/) and failed to do it in an easy way. I needed +this for a GitHub readme file so it also needed to be a GIF. + +The following is the [ffmpeg](https://ffmpeg.org/) version of it. 
+ +```sh +ffmpeg -y \ + -i 01.webm \ + -i 02.webm \ + -i 03.webm \ + -i 04.webm \ + -filter_complex "\ + [0:v] trim=duration=8, setpts=PTS-STARTPTS, scale=492x451, fps=6 [a0]; \ + [1:v] trim=duration=8, setpts=PTS-STARTPTS, scale=492x451, fps=6 [a1]; \ + [2:v] trim=duration=8, setpts=PTS-STARTPTS, scale=492x451, fps=6 [a2]; \ + [3:v] trim=duration=8, setpts=PTS-STARTPTS, scale=492x451, fps=6 [a3]; \ + [a0][a1][a2][a3] xstack=inputs=4:layout=0_0|w0_0|w0+w1_0|w0+w1+w2_0, scale=1000:-1 [v]" \ + -map "[v]" \ + -crf 23 \ + -preset veryfast \ + trigraphs.gif +``` + +- This will produce `trigraphs.gif` that is also scaled to max 1000px in width + (refer to `scale=1000:-1`). +- The important part for the 4x1 stack is `xstack=inputs=4:layout=0_0|w0_0|w0+w1_0|w0+w1+w2_0`. +- This will also cap the frame rate at 6 (refer to `fps=6`) since that is enough and + this makes playback of GIFs smoother in a browser. + +![Result](./assets/notes/trigraphs.gif){:loading="lazy"} diff --git a/_posts/notes/2023-11-05-add-lazy-loading-to-jekyll-posts.md b/_posts/notes/2023-11-05-add-lazy-loading-to-jekyll-posts.md new file mode 100644 index 0000000..8293a4d --- /dev/null +++ b/_posts/notes/2023-11-05-add-lazy-loading-to-jekyll-posts.md @@ -0,0 +1,34 @@ +--- +title: "Add lazy loading of images in Jekyll posts" +permalink: /add-lazy-loading-to-jekyll-posts.html +date: 2023-11-05T09:04:28+02:00 +layout: post +type: note +draft: false +--- + +Normally you define images with `![]()` in markdown files. But Jekyll also +provides a way to add custom attributes to tags with `{:attr="value"}`. + +If you have lots of posts this command will append `{:loading="lazy"}` to all +images in all your markdown files. + +```md +![image-title](/path/to/your/image.jpg) +``` + +will become + +```md +![image-title](/path/to/your/image.jpg){:loading="lazy"} +``` + +Shell line below. Go into the folder where your posts are (probably `_posts`). + +```sh +find . 
-type f -name "*.md" -exec sed -i -E 's/(\!\[.*\]\((.*?)\))$/\1{:loading="lazy"}/' {} \; +``` + +Under the hood this adds `loading="lazy"` to HTML `img` nodes. + +That is about it. diff --git a/_posts/notes/2023-11-07-personal-sane-vim-defaults.md b/_posts/notes/2023-11-07-personal-sane-vim-defaults.md new file mode 100644 index 0000000..be8b2ae --- /dev/null +++ b/_posts/notes/2023-11-07-personal-sane-vim-defaults.md @@ -0,0 +1,60 @@ +--- +title: "Personal sane Vim defaults" +permalink: /apersonal-sane-vim-defaults.html +date: 2023-11-07T01:04:28+02:00 +layout: post +type: note +draft: false +--- + +I have found many "sane" default configs on the net and this is my favorite +personal list. This is how my `.vimrc` file looks like. + +```vimrc +" General sane defaults. +syntax enable +colorscheme sorbet +nnoremap q: +set nocompatible +set relativenumber +set nohlsearch +set smartcase +set ignorecase +set incsearch +set autoindent +set nowrap +set nobackup +set noswapfile +set autoread +set wildmenu +set encoding=utf8 +set backspace=2 +set scrolloff=4 +set spelllang=en_us + +" Status Line enhancements. +set laststatus=2 +set statusline=%f%m%=%y\ %{strlen(&fenc)?&fenc:'none'}\ %l:%c\ %L\ %P +hi StatusLine cterm=NONE ctermbg=black ctermfg=brown +hi StatusLineNC cterm=NONE ctermbg=black ctermfg=darkgray + +" Commenting blocks of code. +augroup commenting_blocks_of_code + autocmd! + autocmd FileType c,cpp,go,scala let b:comment_leader = '// ' + autocmd FileType sh,ruby,python let b:comment_leader = '# ' + autocmd FileType conf,fstab let b:comment_leader = '# ' + autocmd FileType lua let b:comment_leader = '-- ' + autocmd FileType vim let b:comment_leader = '" ' +augroup END +noremap ,cc :silent s/^/=escape(b:comment_leader,'\/')/:nohlsearch +noremap ,cu :silent s/^\V=escape(b:comment_leader,'\/')//e:nohlsearch + +" Language specific indentation. 
+filetype plugin indent on +autocmd Filetype make,go,c,cpp setlocal noexpandtab tabstop=4 shiftwidth=4 +autocmd Filetype html,js,css setlocal expandtab tabstop=2 shiftwidth=2 +``` + +I keep it pretty vanilla so this is about everything I have in the file. + diff --git a/_posts/notes/2024-02-15-extract-lines-from-file.md b/_posts/notes/2024-02-15-extract-lines-from-file.md new file mode 100644 index 0000000..45df9da --- /dev/null +++ b/_posts/notes/2024-02-15-extract-lines-from-file.md @@ -0,0 +1,20 @@ +--- +title: "Extract lines from a file with sed" +permalink: /extract-lines-from-file-with-sed.html +date: 2024-02-15T10:04:28+02:00 +layout: post +type: note +draft: false +--- + +Easy way to extract line ranges (from line 200 to line 210) with sed. + +```sh +sed -n '200,210p' data/Homo_sapiens.GRCh38.dna.chromosome.18.fa + +# then pipe it to a new file with + +sed -n '200,210p' data/Homo_sapiens.GRCh38.dna.chromosome.18.fa > new.fa +``` + +`head` or `tail` could be used to extract from the beginning or the end of the file. diff --git a/_posts/notes/2024-02-21-dcss-online-rc-defaults.md b/_posts/notes/2024-02-21-dcss-online-rc-defaults.md new file mode 100644 index 0000000..cf12109 --- /dev/null +++ b/_posts/notes/2024-02-21-dcss-online-rc-defaults.md @@ -0,0 +1,35 @@ +--- +title: "Sane defaults for Dungeon Crawl Stone Soup Online edition" +permalink: /dcss-online-rc-defaults.html +date: 2024-02-21T06:35:11+02:00 +layout: post +type: note +draft: false +tags: [dcss] +--- + +I mostly play Dungeon Crawl Stone Soup online on the cbro.berotato.org server (Ohio, USA), +and when you start playing you can select the version you want to play. Each instance also +has an `rc` file that can customize the way the game behaves. + +This is my sane-defaults config. It zooms in the game without needing to zoom in the +browser, adds a bit of delay to exploring, and stops at fights. 
+ +```ini +autofight_stop = 80 +explore_auto_rest = true +explore_delay = 20 + +tile_cell_pixels = 48 +tile_font_crt_size = 24 +tile_font_stat_size = 24 +tile_font_msg_size = 24 +tile_font_tip_size = 24 +tile_font_lbl_size = 24 +tile_map_pixels = 0 +tile_filter_scaling = false +``` + +All the possible options are documented in the [Dungeon Crawl Stone Soup Options +Guide](https://github.com/crawl/crawl/blob/master/crawl-ref/docs/options_guide.txt) +file. diff --git a/_posts/notes/2024-02-23-uninstall-ollama-from-a-linux-box.md b/_posts/notes/2024-02-23-uninstall-ollama-from-a-linux-box.md new file mode 100644 index 0000000..fffd458 --- /dev/null +++ b/_posts/notes/2024-02-23-uninstall-ollama-from-a-linux-box.md @@ -0,0 +1,26 @@ +--- +title: Uninstall Ollama from a Linux box +permalink: /uninstall-ollama-from-a-linux-box.html +date: 2024-02-23 +layout: post +draft: false +type: note +--- +I have had some issues with Ollama not being up-to-date. If Ollama is installed with a curl command, it adds a systemd service. + +```sh +sudo systemctl stop ollama +sudo systemctl disable ollama +sudo rm /etc/systemd/system/ollama.service +sudo systemctl daemon-reload + +sudo rm /usr/local/bin/ollama + +sudo userdel ollama +sudo groupdel ollama + +rm -r ~/.ollama +sudo rm -rf /usr/share/ollama +``` + +That is about it. \ No newline at end of file diff --git a/_posts/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md b/_posts/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md new file mode 100644 index 0000000..de90494 --- /dev/null +++ b/_posts/posts/2011-01-13-most-likely-to-succeed-in-year-of-2011.md @@ -0,0 +1,43 @@ +--- +title: Most likely to succeed in the year of 2011 +permalink: /most-likely-to-succeed-in-year-of-2011.html +date: 2011-01-13T12:00:00+02:00 +layout: post +type: post +draft: false +--- + +The year of 2010 was definitely the year of Geo-location. The market responded +beautifully and lots of very cool services were launched. 
We all have to thank +the mobile market for such extensive adoption. With new generations of mobile +phones that are not only packed with high-tech hardware but are also affordable, +we can now manage tasks that not so long ago seemed almost Star Trek’ish. +All of this has had, and still has, a great influence on where we are heading now. + +Reading all these articles about new and thriving technologies +makes me wonder what the next step is. The future is the mesh, like Lisa Gansky +said in her book The Mesh. + +Many still hold conservative views on distributed systems: problems with the +security of information, fear of not controlling every aspect of the information +flow. I am very open to distributed systems and heterogeneous applications, +and I think this is the correct and best way to proceed. + +This year will definitely be about communication platforms. Mobile to mobile. +Machine to mobile and vice versa. All the tech is available and ready to put +into action. Wireless is today’s new mantra. And the semantic web is +now ready for industry. + +Developers and their applications can now gain access to new layers of systems and can +build solutions that meet the high-quality needs of the market. Speed +is everything now. + +My vote goes to “Machine to Machine” and “Embedded Systems”!
+ +- [Machine-to-Machine](http://en.wikipedia.org/wiki/Machine-to-Machine) +- [The ultimate M2M communication protocol](http://www.bitxml.org/) +- [COOS Project (connectivity initiative)](http://www.coosproject.org/maven-site/1.0.0/project-info.html) +- [Community for machine-to-machine](http://m2m.com/index.jspa) +- [Embedded system](http://en.wikipedia.org/wiki/Embedded_system) + diff --git a/_posts/posts/2012-03-09-led-technology-not-so-eco.md b/_posts/posts/2012-03-09-led-technology-not-so-eco.md new file mode 100644 index 0000000..4c5fda3 --- /dev/null +++ b/_posts/posts/2012-03-09-led-technology-not-so-eco.md @@ -0,0 +1,34 @@ +--- +title: LED technology might not be as eco-friendly as you think +permalink: /led-technology-not-so-eco.html +date: 2012-03-09T12:00:00+02:00 +layout: post +type: post +draft: false +--- + +There is a lot of talk about LED technology. It is beginning to infiltrate the +industry at a fast rate, and it is a challenge for designers and engineers alike. +I wondered when a weakness would be revealed. Then I stumbled upon an article +talking about the harm of using LED technology. It looks like this magical +technology is not so magical and eco-friendly after all. + +A new study from the University of California indicates that LED lights contain +toxic metals, and should be produced, used and disposed of carefully. Besides +the lead and nickel, the bulbs and their associated parts were also found to +contain arsenic, copper, and other metals that have been linked to different +cancers, neurological damage, kidney disease, hypertension, skin rashes and +other illnesses in humans, and to ecological damage in waterways. + +Since then, I haven’t found any regulation or standard for the disposal of LED +lights. This might be a problem in the future, and it is a massive drawback that +might have quite an impact on the consumer market. + +Nevertheless, there is potential, and I am sure the market will adapt.
+ I also +hope I will be reading documents regarding a solution to this concern soon. + +**Additional resources:** + +- [Recycling and Disposal of Light Bulbs](http://ezinearticles.com/?Recycling-and-Disposal-of-Light-Bulbs&id=1091304) +- [How to Dispose of a Low-Energy Light Bulb](http://www.ehow.com/how_7483442_dispose-lowenergy-light-bulb.html) + diff --git a/_posts/posts/2013-10-24-wireless-sensor-networks.md b/_posts/posts/2013-10-24-wireless-sensor-networks.md new file mode 100644 index 0000000..6eb3fe1 --- /dev/null +++ b/_posts/posts/2013-10-24-wireless-sensor-networks.md @@ -0,0 +1,55 @@ +--- +title: Wireless sensor networks +permalink: /wireless-sensor-networks.html +date: 2013-10-24T12:00:00+02:00 +layout: post +type: post +draft: false +--- + +Zigbee networks have this wonderful capability to self-heal, which means they +can reorder connections between nodes if one of them is inoperable. This works +out of the box when you deploy them, but keep in mind that achieving this is not +as easy as you would think. None of it is plug&play. So to make +your life a bit easier, here are some pointers which, I hope, will help you. + +- Be careful when you are ordering your equipment abroad. There are many rules + and regulations you need to comply with before you get your Xbee radios. What + they do is wait until you prove that you won’t use the technology for some + kind of evil take-over-the-world project :). For this, they have + EAR (Export Administration Regulations), which basically means “This product + may require a license to export from the United States.”. +- I don’t know if this applies to every country, but when we purchased our Xbee + radios from Mouser, this was mandatory! What we needed to do was print out + a form, fill in information about our company, and send them a copy via + email. With this document, we proved that we are a legitimate company.
+- When you complete your purchase and send all the documentation, you are not + in the clear yet. Customs will take it from there :). There will be some + additional costs, so before purchasing, make sure you have as much information + about costs as possible, because it can get costly in the end. +- I suggest you use companies from your country. You can seriously cut your + costs. Here in Slovenia, the best option as far as I know is Farnell. And + based on my personal experience, they rock! That is all I need to say! +- Make plans when ordering larger quantities. Do not, I say, do not make your + orders in December! :) Believe me! You will have problems with the stock they + can provide for you. We were once forced to buy some things from Mouser, which + was extremely painful because of all the regulations you need to obey when + importing goods from the USA. +- Make sure that the firmware version on your Xbee radios is exactly the same! + Do not get creative!!! I propose using templates. You can get a template by + exporting the settings/profile in the X-CTU application. Make sure you have + enabled “Upgrade firmware” so you can be sure each radio has the same firmware. +- And again: make plans! Plan everything! Months in advance! You will thank me + later :) +- Test, test, test. Wireless networks can be tricky. + +If you are serious, I suggest you buy the book Building Wireless Sensor +Networks. You will get a glimpse of how these networks work in layman’s terms. +It is a good starting point for everybody who wants to build wireless networks.
+ +**Additional resources:** + +- http://www.digi.com/aboutus/export/generalexportinfo +- http://doresearch.stanford.edu/research-scholarship/export-controls/export-controlled-or-embargoed-countries-entities-and-persons +- http://www.bis.doc.gov/licensing/exportingbasics.htm + diff --git a/_posts/posts/2015-11-10-software-development-pitfalls.md b/_posts/posts/2015-11-10-software-development-pitfalls.md new file mode 100644 index 0000000..d7b9c1b --- /dev/null +++ b/_posts/posts/2015-11-10-software-development-pitfalls.md @@ -0,0 +1,182 @@ +--- +title: Software development and my favorite pitfalls +permalink: /software-development-pitfalls.html +date: 2015-11-10T12:00:00+02:00 +layout: post +type: post +draft: false +--- + +Over the years I have had the privilege to work on some very exciting projects, +both in the software development field and in electronics, and every experience +taught me some invaluable lessons about how NOT TO approach development. +Through this post I will try to point out some of the absurd, outdated techniques I +find the most annoying and damaging during a development cycle. There will be +swearing, because this topic really gets on my nerves and I have never coherently +tried to explain it in writing. So if I get heated up, please bear with me. + +As new methods of project management emerge, the underlying processes still +stay old and outdated. This is mainly because we as people are unable to +completely shift away from these approaches. + +I have always struggled with communication, and many times that cost me a +relationship or two because I was not on the ball all the time. Through every +experience, I became more convinced that I am the problem, and never once doubted +that the problem may be that communication never evolved a single step beyond +email. And if you think about it for a second, not many things have changed around this +topic. We just have different representations of email (message boards, chats, +project management tools).
And I believe this is the real issue we are facing +now. + +There are many articles written about hyper-connectivity and the effects that +are a direct result of it. But the mainstream does nothing about it. We are just +putting out fires, and we do nothing to prevent them. I am certain this will be a +major source of grief in the coming years. What we can all do to avoid this is +to change our mindset and experiment with our communication skills and development +approaches. We need to maximize the possible output that a person can give. And to +achieve this we need to listen to them and encourage them. I know that not +everybody is a naturally born leader, but with enough practice and encouragement +they too can become active participants in leadership. + +There is a lot of talk now about methodologies such as Scrum, Kanban and Cleanroom, +and they all fucking piss me off :). These are all boxes that imprison people and +take away their freedom of thought. This is a straightforward mindfuck / +amputation of creativity. + +Let me list a couple of things that I find really destructive and bad for a +project and, in the long run, the company. + +## Ping emails + +Ping emails are emails you have to write as soon as you receive an email. Their +sole purpose is to inform the sender that you received their email and you are +working on it. Their only result is to calm the sender down: their task is +being dealt with. The intent basically is, I did my job by sending you this +email, so I am in the clear. I categorize this as a fuck-you email. +This is one of the most irritating types of emails I need to write. This is the +ultimate control-freak show you can experience, and it gives the sender a false +feeling of control. Newsflash: we do not live in 1982, when there was a +possibility that an email never reached its destination. I really hate this from +the bottom of my heart. + +A reply should be like: “Yes, I am fucking alive, and I am at your service, my +liege!”.
I guess if I replied like this, I wouldn’t have to write any more +messages of this kind. + +## Everybody is a project manager + +Well, this is a tough one. I noticed that as soon as you let people give +their suggestions, you are basically screwed. There is truth in the saying: +“Set low expectations and deliver a little more than you promised.”. + +People tend to take on the role of a manager as soon as they are presented with an +opportunity. And by getting angry at them, you only provoke yourself. They are +not at fault. You just need to tell them at the beginning that they are only giving +suggestions, not tasks, and everything will be alright. But if you give them +a feeling that they are in control, you will have immense problems explaining +why their features are not in the current release. + +The project mission must always lead the project requirements, and any deviation +from it will result in major project butchering. By this, I mean that the +project will take its own path, and you will be left with half-done software that +helps nobody. Clear mission goals and clean execution will allow you to develop +software with clear intent. + +## We are never wrong + +I find this type of arrogance the worst. We must always conduct ourselves as if +we were infallible and could not make mistakes. As soon as a procedure or process is +established, there is no room for changes or improvements. This is the most +idiotic thing someone can say or think. Processes need to evolve +and change over time. This is imperative, a must-have in your organization, +if you want to improve and develop the company. We all need to grow balls and change +everything in order to adapt to current situations. Being a prisoner of +predefined processes kills creativity. + +I am constantly trying new software for project management and communication. I +believe every team has its own dynamic, and it needs to be discovered +organically and naturally through many experiments.
By putting the team in a +box, you are amputating their creativity and therefore minimizing their +potential. But if you talk to an executive, you will mostly find archetypical +thinking and a strong need to compartmentalize everything, from business +processes to resource management. This type of management, which often +displays micromanagement techniques, only works for short periods (a couple of +years); then employees either leave the company or become drones on autopilot. + +## Micromanaging + +This basically implies that everybody on the team is an idiot who needs a +to-do list they cannot write themselves. How about spoon-feeding the team at +lunch, because besides the team leader, everybody must be an idiot at best? + +I prefer milestones, as they give developers much more freedom and creativity, +instead of wasting their time checking some bizarre to-do list that was +not even thought through. Projects change constantly throughout the development +cycle, and all you are left with at the end is a list of unchecked tasks and the +wrath of management asking why they are not completed. The best WTF moment! + +## Human contact — no need for it! + +We are vigorously trying to eliminate physical contact by replacing short +meetings with software, with no regard for the fact that we are not machines. Many times a +simple 5-minute meeting in the morning can solve most of the problems. In rapid +development, short bursts of face-to-face communication are possibly the best way +to go. + +We now have all this software available, and all we get out of it is a +giant clusterfuck. An obstacle and not a solution. So why do we still use it? + +## MVP is killing innovation + +Many will disagree with me on this one, but I stand firmly by this statement.
+What I have noticed in my experience is that all these buzzwords around us only mislead +us and trap us in a circle of solving issues that already have a solution, but +we are unable to see it without using some fancy word for it. + +The toughest thing for a developer to do is to minimize requirements. Well, this +is tough only for bad developers. Yes, I said it. There are many types of +developers out there. And those unable to minimize feature scope are the ones +you don’t need on your team. Their only goal is to solve problems that exist +only in their heads. And then you have to argue with them, and waste energy on +them, instead of developing your awesome product. They are a cancer and I +suggest you cut them off. + +MVP as an idea is great, but sadly people don’t understand the underlying +philosophy, and they spend too much time fixating on something that +every sane person with a normal IQ will understand without some made-up +acronym. And the result is a lot of talking and barely any execution. + +Well, MVP is not directly killing innovation, but stupid people do when they try +to understand it. + +## Pressure wasteland + +You must never allow yourself to be pressured into confirming a deadline if you are not +confident. We often feel a need to be in the service of others, which is true +to some extent. But it is also true that others are in service to us to some +extent. And we forget this all the time. We are pressured all the time to +make decisions just to calm other people down. And when they leave your office +you experience a WTF moment :) How the hell did they manage to fuck me up again? + +People need to realize that the more pressure you put on somebody, the less they +will be able to do. So 5-minute update-email requests will only result in a mental +breakdown and an inability to work that day. Constant poking is probably the one +thing that makes me lose my mind instantly.
For all of you who are doing this: “Stop bothering +us with your insecurities and let us do our job. We will do it quicker and +better without you breathing down our necks.” + +When this happens to me, I end up with no energy at the end of the day. Don’t you get it? +You will get much more out of me if you ask me like a human being and +not like your personal butler. In the long run, you are destroying your relationships, +and nobody will want to work with you. Your schizophrenic approach will only damage +you in the long run. Nobody is anybody’s property. + +## Conclusion + +I am guilty of many things described in this post. And I sometimes find it hard +to acknowledge this. I lie to myself and try vigorously to find some +explanation for why I do these things. There is always space for growth. And maybe +you will also find some of yourself in this post and realize what needs to +change for you to evolve. diff --git a/_posts/posts/2017-03-07-golang-profiling-simplified.md b/_posts/posts/2017-03-07-golang-profiling-simplified.md new file mode 100644 index 0000000..aeea956 --- /dev/null +++ b/_posts/posts/2017-03-07-golang-profiling-simplified.md @@ -0,0 +1,127 @@ +--- +title: Golang profiling simplified +permalink: /golang-profiling-simplified.html +date: 2017-03-07T12:00:00+02:00 +layout: post +type: post +draft: false +--- + +Many posts have been written about profiling in Golang, but I hadn’t found a +proper tutorial on it. Almost all of them are missing some important piece of +information, and it gets pretty frustrating when you have a deadline +and cannot find a simple, distilled solution. + +Nevertheless, after searching and experimenting I have found a solution that +works for me and should probably work for you as well. + +## Where are my pprof files? + +By default, pprof files are generated in the /tmp/ folder. You can override the folder +where these files are generated programmatically in your Golang code, as we will +see in the example below. + +## Why is my CPU profile empty?
+ +I have found that sometimes the CPU profile is empty because the program was not +executing long enough. In my experience, programs that execute too quickly don’t produce a +useful pprof file. Well, the file is generated but contains only 4KB of information. + +## Profiling + +As you can see from the examples, we execute a dummy_benchmark function to +ensure some amount of execution. Memory profiling can be done without such a +“complex” function, but CPU profiling needs it. + +The memory and CPU profiling examples are almost the same. Only the parameters +passed to profile.Start in the main function differ. With +profile.ProfilePath(“.”) we tell the profiler to store pprof files in the same +folder as our program. + +### Memory profiling + +```go +package main + +import ( + "fmt" + "time" + "github.com/pkg/profile" +) + +func dummy_benchmark() { + + fmt.Println("first set ...") + for i := 0; i < 918231333; i++ { + i *= 2 + i /= 2 + } + + <-time.After(time.Second*3) + + fmt.Println("second set ...") + for i := 0; i < 9182312232; i++ { + i *= 2 + i /= 2 + } +} + +func main() { + defer profile.Start(profile.MemProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop() + dummy_benchmark() +} +``` + +### CPU profiling + +```go +package main + +import ( + "fmt" + "time" + "github.com/pkg/profile" +) + +func dummy_benchmark() { + + fmt.Println("first set ...") + for i := 0; i < 918231333; i++ { + i *= 2 + i /= 2 + } + + <-time.After(time.Second*3) + + fmt.Println("second set ...") + for i := 0; i < 9182312232; i++ { + i *= 2 + i /= 2 + } +} + +func main() { + defer profile.Start(profile.CPUProfile, profile.ProfilePath("."), profile.NoShutdownHook).Stop() + dummy_benchmark() +} +``` + +### Generating profiling reports + +```bash +# memory profiling +go build mem.go +./mem +go tool pprof -pdf ./mem mem.pprof > mem.pdf + +# cpu profiling +go build cpu.go +./cpu +go tool pprof -pdf ./cpu cpu.pprof > cpu.pdf +``` + +This will generate a PDF document with a visualized profile.
+ +- [Memory PDF profile example](/assets/posts/go-profiling/golang-profiling-mem.pdf) +- [CPU PDF profile example](/assets/posts/go-profiling/golang-profiling-cpu.pdf) + diff --git a/_posts/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md b/_posts/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md new file mode 100644 index 0000000..10aca0d --- /dev/null +++ b/_posts/posts/2017-04-17-what-i-ve-learned-developing-ad-server.md @@ -0,0 +1,200 @@ +--- +title: What I've learned developing ad server +permalink: /what-i-ve-learned-developing-ad-server.html +date: 2017-04-17T12:00:00+02:00 +layout: post +type: post +draft: false +--- + +For the past year and a half I have been developing a native advertising server that +contextually matches ads and displays them in different template forms on a +variety of websites. This project grew from serving thousands of ads per day to +millions. + +The system is made from a couple of core components: + +- API for serving ads, +- Utils - cronjobs and queue management tools, +- Dashboard UI. + +The initial release used [MongoDB](https://www.mongodb.com/) for full-text +search, but it was later replaced by [Elasticsearch](https://www.elastic.co/) for +better CPU utilization and better search performance. This provided us with many +amazing functionalities of Elasticsearch. You should +check it out if you do any search-related operations. + +Because the premise of the server is to provide a native ad experience, ads are +rendered on the client side via a simple templating engine. This ensures that ads +can be displayed in a number of different ways based on the visual style of the +page. It also makes the JavaScript client library quite complex. + +So now that you know the basics about the product, let’s get into the +lessons we learned. + +## Aggregate everything + +After the beta version was released, everything (impressions, clicks, etc.) was +written at nanosecond resolution to the database.
At that time we were using +[PostgreSQL](https://www.postgresql.org/) and the database quickly grew way above +200GB of disk space. And that was problematic. Statistics took a disturbingly long +time to aggregate. Even indexes on the stats table were no help +after we reached 500 million data points. + +> There is marketing product information and there is real-life experience. +And they tend to be quite the opposite. + +This is the reason everything is now aggregated on a daily basis and this +data is then fed to Elastic in the form of a daily summary. With this we can now +track many more dimensions, such as zone, channel and platform +information. And with this information we can adapt the occurrences of ads in +specific places more precisely. + +We have also adopted [Redis](https://redis.io/) as a first-class citizen in our +stack. Because Redis also stores information on the local disk, we have some sort +of backup if the server should accidentally suffer a failure. + +All the real-time statistics for ad serving and redirecting are kept as +counters in the Redis instance and extracted daily and pushed to Elastic. + +## Measure everything + +The thing about software is that we really don’t know how well it performs +under load until such load is present. When testing locally everything is fine, +but in production things tend to fall apart. + +As a solution, we measure everything we can: function execution +time (by encapsulating functions with timers), server performance (CPU, memory, +disk, etc.), and Nginx and [uWSGI](https://uwsgi-docs.readthedocs.io/) performance. +We sacrifice a bit of performance for the sake of this information. And we store +all of it for later analysis.
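The timer-encapsulation idea can be sketched as a small Python decorator. This is a hypothetical illustration with made-up function names, not the code from our stack; it aggregates a call counter, total elapsed time and average per function, much like the per-function summary below:

```python
import time
from collections import defaultdict
from functools import wraps

# aggregated per-function stats: call counter, total elapsed seconds, average
timings = defaultdict(lambda: {"counter": 0, "elapsed": 0.0, "avg": 0.0})

def timed(func):
    """Encapsulate a function with a timer and aggregate its execution stats."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            stats = timings[func.__name__]
            stats["counter"] += 1
            stats["elapsed"] += time.perf_counter() - start
            stats["avg"] = stats["elapsed"] / stats["counter"]
    return wrapper

@timed
def match_by_context(keywords):
    # stand-in for the real ad-matching logic
    return sorted(keywords)

result = match_by_context(["native", "ads"])
```

Dumping `timings` as JSON then yields the kind of structure we store for analysis.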
+ +**Example of function execution time** + +```json +{ + "get_final_filtered_ads": { + "counter": 1931250, + "avg": 0.0066143431, + "elapsed": 12773.9500310003 + }, + "store_keywords_statistics": { + "counter": 1931011, + "avg": 0.0004605267, + "elapsed": 889.2821669996 + }, + "match_by_context": { + "counter": 1931011, + "avg": 0.0055960716, + "elapsed": 10806.0758889999 + }, + "match_by_high_performance": { + "counter": 262, + "avg": 0.0152770229, + "elapsed": 4.00258 + }, + "store_impression_stats": { + "counter": 1931250, + "avg": 0.0006189991, + "elapsed": 1195.4419869999 + } +} +``` + +We have also started profiling with [cProfile](https://pymotw.com/2/profile/) +and then visualizing the results with [KCachegrind](http://kcachegrind.sourceforge.net/). +This provides a much more detailed look into code execution. + +## Cache control is your friend + +Because we use a JavaScript library for rendering ads, we rely on this script +extensively and need to be able to change its behavior quickly. + +In our case we cannot simply replace the JavaScript URL in the HTML code. It usually +takes a day or two for the people who maintain the sites to change the code or add a +?ver=xxx attribute. This makes rapid deployment and testing very difficult +and time-consuming. There is a limit to how much you can test locally. + +We are now in the process of integrating [Google Tag +Manager](https://www.google.com/analytics/tag-manager/), but a couple of websites +are built on the ASP.NET platform and have some problems with Tag Manager. With +the solution below we are certain that we are serving the latest version of the +script. + +It only takes one mistake for users to end up with the script cached, and if it is +cached for 1 year you probably know where the problem is.
+ +```nginx +# nginx ➜ /etc/nginx/sites-available/default +location /static/ { + alias /path-to-static-content/; + autoindex off; + charset utf-8; + gzip on; + gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css; + location ~* \.(ico|gif|jpeg|jpg|png|woff|ttf|otf|svg|woff2|eot)$ { + expires 1y; + add_header Pragma public; + add_header Cache-Control "public"; + } + location ~* \.(css|js|txt)$ { + expires 3600s; + add_header Pragma public; + add_header Cache-Control "public, must-revalidate"; + } +} +``` + +Also be careful when redirecting to a URL in your Python code. We noticed that if +we didn’t precisely set up the cache-control and expires headers in the response, we didn’t +get the request on the server and therefore couldn’t measure clicks. So when +redirecting, do as follows and there will be no problems. + +```python +# python ➜ bottlepy web micro-framework +response = bottle.HTTPResponse(status=302) +response.set_header("Cache-Control", "no-store, no-cache, must-revalidate") +response.set_header("Expires", "Thu, 01 Jan 1970 00:00:00 GMT") +response.set_header("Location", url) +return response +``` + +> Cache control in browsers is quite aggressive and you need to be precise to +avoid future problems. We learned that lesson the hard way. + +## Learn NGINX + +When deciding on a web server, we went with Nginx as a reverse proxy for our +applications. We adopted a micro-service-oriented architecture early in the +project to ensure that when we scale we can easily add additional servers to our +cluster, and Nginx was crucial for load balancing and static content +delivery. + +At first our config file was quite simple; later it grew larger. After patching +and adding new settings, I sat down and learned more about the guts of Nginx. +This proved very useful, and we were able to squeeze much more out of our +setup. So I advise you to take your time and read through the +[documentation](https://nginx.org/en/docs/).
This saved us a lot of headaches. +Googling for solutions only goes so far. + +## Use Redis/Memcached + +As explained above, we use caching for basically everything. It is the +cornerstone of our services. At first we were very careful about the quantity +of things we stored in [Redis](https://redis.io/), but we later found out that +the memory footprint is very low even when storing large amounts of data in it. + +So we gradually increased our usage to caching whole HTML outputs of the dashboard. +This improved our performance by an order of magnitude. And with native TTL +support, Redis goes hand in hand with our needs. + +The reason we chose [Redis](https://redis.io/) over +[Memcached](https://memcached.org/) was Redis’s out-of-the-box scalability, +though all of this can be achieved with Memcached as well. + +## Conclusion + +A lot more details could have been written, and every single topic +in here deserves its own post, but you probably got the idea of the problems +we faced. diff --git a/_posts/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md b/_posts/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md new file mode 100644 index 0000000..2e2ec70 --- /dev/null +++ b/_posts/posts/2017-04-21-profiling-python-web-applications-with-visual-tools.md @@ -0,0 +1,207 @@ +--- +title: Profiling Python web applications with visual tools +permalink: /profiling-python-web-applications-with-visual-tools.html +date: 2017-04-21T12:00:00+02:00 +layout: post +type: post +draft: false +--- + +I have been profiling my software with KCachegrind for a long time now, and I was +missing this option when developing APIs or other web services. I always +knew that this was possible but never really took the time to dive into it. + +Before we begin, there are some requirements.
We will need to: + +- implement [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) into our web app, +- convert the output to [callgrind](http://valgrind.org/docs/manual/cl-manual.html) format with [pyprof2calltree](https://pypi.python.org/pypi/pyprof2calltree/), +- visualize the data with [KCachegrind](http://kcachegrind.sourceforge.net/html/Home.html) or [Profiling Viewer](http://www.profilingviewer.com/). + + +If you are using MacOS you should check out [Profiling +Viewer](http://www.profilingviewer.com/) or +[MacCallGrind](http://www.maccallgrind.com/). + +![KCachegrind](/assets/posts/python-profiling/kcachegrind.png){:loading="lazy"} + +This post is divided into two main parts: + +- writing a simple web service, +- visualizing a profile of this web service. + +## Simple web-service + +Let's use virtualenv so we won't pollute our base system. If you don't have +virtualenv installed on your system, you can install it with the pip command. + +```bash +# let's install virtualenv globally +$ sudo pip install virtualenv + +# let's also install pyprof2calltree globally +$ sudo pip install pyprof2calltree + +# now we create the project +$ mkdir demo-project +$ cd demo-project/ + +# now let's create the folder where we will store profiles +$ mkdir prof + +# now we create an empty virtualenv in the venv/ folder +$ virtualenv --no-site-packages venv + +# we now need to activate the virtualenv +$ source venv/bin/activate + +# you can check if the virtualenv was correctly initialized by +# checking where your python interpreter is located +# if the command below points to your created directory and not some +# system dir like /usr/bin/python then everything is fine +$ which python + +# we can check now if all is good ➜ if ok a couple of +# lines will be displayed +$ pip freeze +# appdirs==1.4.3 +# packaging==16.8 +# pyparsing==2.2.0 +# six==1.10.0 + +# now we are ready to install bottlepy ➜ web micro-framework +$ pip install bottle + +# you can deactivate virtualenv but you will 
then go +# back to the system domain ➜ for now don't deactivate +$ deactivate +``` + +We are now ready to write a simple web service. Create the file app.py and paste +the code below into it. + +```python +# -*- coding: utf-8 -*- + +import bottle +import random +import cProfile + +app = bottle.Bottle() + +# this function is a decorator that encapsulates a function, +# performs profiling and then saves the result to the subfolder +# prof/function-name.prof +# in our example only the awesome_random_number function will +# be profiled because it is decorated with @do_cprofile +def do_cprofile(func): + def profiled_func(*args, **kwargs): + profile = cProfile.Profile() + try: + profile.enable() + result = func(*args, **kwargs) + profile.disable() + return result + finally: + profile.dump_stats("prof/" + str(func.__name__) + ".prof") + return profiled_func + + +# we enable profiling for a specific function by adding +# @do_cprofile above the function declaration +@app.route("/") +@do_cprofile +def awesome_random_number(): + awesome_random_number = random.randint(0, 100) + return "awesome random number is " + str(awesome_random_number) + +@app.route("/test") +def test(): + return "dummy test" + +if __name__ == '__main__': + bottle.run( + app = app, + host = "0.0.0.0", + port = 4000 + ) + +# run with 'python app.py' +# open browser 'http://0.0.0.0:4000' +``` + +When the browser hits the awesome\_random\_number() function, a profile is created in the +prof/ subfolder. + +## Visualize profile + +Now let's create the callgrind format from this cProfile output. + +```bash +$ cd prof/ +$ pyprof2calltree -i awesome_random_number.prof +# this creates 'awesome_random_number.prof.log' file in the same folder +``` + +This file can be opened with the visualizing tools listed above. In this case we +will be using Profiling Viewer under MacOS. You can open the image in a new tab. As +you can see from this example, there is a hierarchy showing the execution order of your +code.
![Profiling Viewer](/assets/posts/python-profiling/profiling-viewer.png){:loading="lazy"}

> Make sure you convert the cProfile output every time you want to refresh and take a look at your possible optimizations, because cProfile updates the .prof file every time the browser hits the function.

This is just a simple example, but when you are developing real-life applications this can be very illuminating, especially to see which parts of your code are bottlenecks and need to be optimized.

## Update 2017-04-22

Reddit user [mvt](https://www.reddit.com/user/mvt) also recommended this awesome web based profile visualizer [SnakeViz](https://jiffyclub.github.io/snakeviz/) that directly takes output from the [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile) module.

```bash
# let's install it globally as well
$ sudo pip install snakeviz

# now let's visualize
$ cd prof/
$ snakeviz awesome_random_number.prof
# this automatically opens a browser window and
# shows the visualized profile
```

![SnakeViz](/assets/posts/python-profiling/snakeviz.png){:loading="lazy"}

Reddit user [ccharles](https://www.reddit.com/user/ccharles) suggested a better way of installing pip software by targeting the user level instead of using sudo.

```bash
# now we need to add this path to our $PATH variable
# we do this by adding this line at the end of your
# ~/.bashrc file
PATH=$PATH:$HOME/.local/bin/

# in order to use this new configuration you can close
# and reopen the terminal or reload the .bashrc file
$ source ~/.bashrc

# now let's test if the new directory is present in $PATH
$ echo $PATH

# now we can install on the user level by adding --user
# without the use of sudo
$ pip install snakeviz --user
```

Or as suggested by [mvt](https://www.reddit.com/user/mvt) you can use [pipsi](https://github.com/mitsuhiko/pipsi).
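If you just want a quick look at a .prof file without any converter or GUI, the standard library ```pstats``` module can read the same dumps that the do_cprofile decorator writes. A minimal sketch (it profiles a throwaway function so it is self-contained; in the web app you would point pstats at prof/awesome_random_number.prof instead):

```python
import cProfile
import pstats

def work():
    # something cheap to profile
    return sum(i * i for i in range(1000))

# produce a .prof dump the same way the do_cprofile decorator does
profile = cProfile.Profile()
profile.enable()
work()
profile.disable()
profile.dump_stats("example.prof")

# pstats reads the same file that pyprof2calltree consumes
stats = pstats.Stats("example.prof")
stats.sort_stats("cumulative").print_stats(10)
```

This prints the ten most expensive calls by cumulative time, which is often enough to spot a bottleneck before reaching for a visualizer.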
diff --git a/_posts/posts/2017-08-11-simple-iot-application.md b/_posts/posts/2017-08-11-simple-iot-application.md
new file mode 100644
index 0000000..b552e8f
--- /dev/null
+++ b/_posts/posts/2017-08-11-simple-iot-application.md
@@ -0,0 +1,608 @@
---
title: Simple IOT application supported by real-time monitoring and data history
permalink: /simple-iot-application.html
date: 2017-08-11T12:00:00+02:00
layout: post
type: post
draft: false
---

## Initial thoughts

I have been developing this kind of application for the better part of the last five years, and people keep asking me how to approach developing such applications, so I will try to explain it here.

IOT applications are really no different from any other kind of application. We have data that needs to be collected and visualized in some form of tables or charts. The main difference here is that most of the time this data is collected by some kind of device foreign to a developer who mainly operates in the web domain. But fear not, it's not that different from writing some JavaScript.

There are many devices able to transmit data via a wireless or wired network by default, but for the sake of example we will be using the commonly known Arduino with a wireless module already on the board → [Arduino MKR1000](https://store.arduino.cc/arduino-mkr1000).

In order to make this little project as accessible to others as possible I will try to make it as inexpensive as possible. By this I mean that I will avoid using hosted virtual servers and will be using my own laptop as a server. You must, however, buy an Arduino MKR1000 to follow the steps below. If you would want to deploy this software I would suggest using [DigitalOcean](https://www.digitalocean.com) → the smallest VPS is only per month, making it one of the most affordable options out there. Please notice that this software will not run on stock web hosting that only supports LAMP (Linux, Apache, MySQL, and PHP).
But before we begin, please take notice that this is strictly experimental code and not well optimized. There are much better ways of handling some aspects of the application, but those require much deeper knowledge of technology that is not needed for an example like this.

**Development steps**

1. Simple Python API that will receive and store incoming data.
2. Prototype C++ code that will read "sensor data" and transmit it to the API.
3. Data visualization with charts → extends the Python web application.

Steps 1 and 3 will share the same web application. One route will be dedicated to the API and another to serving HTML with the chart.

The schema below represents what we will try to achieve and how the different parts relate to each other.

![Overview](/assets/posts/iot-application/simple-iot-application-overview.svg){:loading="lazy"}

## Simple Python API

I have always been a fan of simplicity, so we will be using [Bottle: Python Web Framework](https://bottlepy.org/docs/dev/). It is a single file web framework that seriously simplifies working with routes and templating, and it has a built-in web server that satisfies our needs in this case.

First we need to install the bottle package. This can be done by downloading ```bottle.py``` and placing it in the root of your application, or by using pip: ```pip install bottle --user```.

If you are using Linux or MacOS then Python is already installed. If you want to test this on Windows please install [Python for Windows](https://www.python.org/downloads/windows/). There may be some problems with PATH when you try to launch ```python webapp.py```, so please take care of this before you continue.

### Basic web application

The most basic bottle application is quite simple. Paste the code below into a ```webapp.py``` file and save.
```python
# -*- coding: utf-8 -*-

import bottle

# initializing bottle app
app = bottle.Bottle()

# triggered when / is accessed from browser
# only accepts GET → no POST allowed
@app.route("/", method=["GET"])
def route_default():
    return "howdy from python"

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

To run this simple application, open a command prompt or terminal on your machine, go to the folder containing your file and type ```python webapp.py```. If everything goes ok, open your web browser and point it to ```http://0.0.0.0:5000```.

If you would like to change the port of your application to something like port 80 and not run your app as root, this will present a problem. TCP/IP port numbers below 1024 are privileged ports → this is a security feature. So for both simplicity and security use a port number above 1024, as I have done with port 5000.

If this fails at any time please fix it before you continue, because nothing below will work otherwise.

We use 0.0.0.0 as the default host so that this app is available over your local network. If you find your local IP with ```ifconfig``` and try accessing this site with your phone (if on the same network/router as your machine) this should work as well (an example of such an IP: ```http://192.168.1.15:5000```). This is a must have because the Arduino will be accessing this application to send its data.

### Web application security

There is a lot to be said about security; it is the topic of many books. Of course it can not all be covered here, but to establish some basic security → you should always use SSL with your application. Fantastic free certificates are available from [Let's Encrypt - Free SSL/TLS Certificates](https://letsencrypt.org).
With an SSL certificate installed you should then make use of HTTP headers and send your "API key" via a header. If your key is sent via a header then this key is encrypted by SSL and travels encrypted over the network. Never send your API keys by GET parameter like ```http://example.com/?api_key=somekeyvalue```. The problem with this kind of sending is that the key is visible in logs and to network sniffers.

There is a fantastic article describing some aspects of security: [11 Web Application Security Best Practices](https://www.keycdn.com/blog/web-application-security-best-practices/). Please check it out.

### Simple API for writing data-points

We will now take the boilerplate code from the example above and extend it to be able to write data received by the API to local storage. For this example I will use SQLite3 because it plays well with Python and can store quite a large amount of data. I have been using it to collect gigabytes of data in a single database without any corruption or problems → your experience may vary.

To avoid learning SQLite I will be using [Dataset: databases for lazy people](https://dataset.readthedocs.io/en/latest/index.html). This package abstracts SQL and simplifies writing and reading data from a database. You should install this package with pip: ```pip install dataset --user```.

Because the API will use the POST method I will be testing if the code works correctly by using [Restlet Client for Google Chrome](https://chrome.google.com/webstore/detail/restlet-client-rest-api-t/aejoelaoggembcahagimdiliamlcdmfm). This software also allows you to set headers → needed for basic security with the API key.

To quickly generate passwords or API keys I usually use this nifty website [RandomKeygen](https://randomkeygen.com/).

Copy and paste the code below over your previous code in the file ```webapp.py```.
```python
# -*- coding: utf-8 -*-

import time
import bottle
import random
import dataset

# initializing bottle app
app = bottle.Bottle()

# connects to sqlite database
# check_same_thread=False allows using it in multi-threaded mode
app.config["dsn"] = dataset.connect("sqlite:///data.db?check_same_thread=False")

# api key that will be used in Arduino code
app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH"

# triggered when /api is accessed from browser
# only accepts POST → no GET allowed
@app.route("/api", method=["POST"])
def route_default():
    status = 400
    ts = int(time.time())  # current timestamp
    value = bottle.request.body.read()  # data from device
    api_key = bottle.request.get_header("Api-Key")  # api key from header

    # outputs received data to console for debugging
    print ">>> {} :: {}".format(value, api_key)

    # if api_key is correct and value is present
    # then writes attribute to point table
    if api_key == app.config["api_key"] and value:
        app.config["dsn"]["point"].insert(dict(ts=ts, value=value))
        status = 200

    # we only need to return status
    return bottle.HTTPResponse(status=status, body="")

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

To run this, simply go to the folder containing the Python file and run ```python webapp.py``` from a terminal. If everything goes ok you should have a simple API available via the POST method on the /api route. Note that the header name is ```Api-Key```, the same name the Arduino code below will send.

After testing the service with Restlet Client you should be able to view your data in the database file ```data.db```.

![REST settings example](/assets/posts/iot-application/iot-rest-example.png){:loading="lazy"}

You can also check the contents of the new database file by using a desktop client for SQLite → [DB Browser for SQLite](http://sqlitebrowser.org/).
![SQLite database example](/assets/posts/iot-application/iot-sqlite-db.png){:loading="lazy"}

The table structure is as simple as it can be. We have ts (timestamp) and value (the value from the Arduino). As you can see, the timestamp is generated on the API side. If you happened to have an accurate clock on the Arduino it would be better to generate and send the timestamp with the value. This would be particularly useful if we were collecting sensor data at a higher frequency and then sending this data in bulk to the API.

If you will deploy this app with uWSGI in multi-threaded mode, use a DSN (Data Source Name) url with ```?check_same_thread=False```.

Ok, now that we have some sort of a working API with some basic security, so unwanted people can not post data to your database, we can proceed further and try to program the Arduino to send data to the API.

## Sending data to API with Arduino MKR1000

First of all you should have an MKR1000 module and a microUSB cable to proceed. If you have ever done any work with Arduino you know that you also need the [Arduino IDE](https://www.arduino.cc/en/Main/Software). On the provided link you should be able to download and install the IDE. Once that task is completed and you have successfully run the blink example you should proceed to the next step.

In order to use the wireless capabilities of the MKR1000 you first need to install the [WiFi101 library](https://www.arduino.cc/en/Reference/WiFi101) in the Arduino IDE. Please check before you install, you may already have it installed.

The code below is a working example that sends data to the API. Before you try to test your code make sure the Python web application is running. Then change the settings for wifi, the api endpoint and api_key. If for some reason the code below doesn't work for you please leave a comment and I'll try to help.

Once you have opened the IDE and copied this code, try to compile and upload it. Then open the "Serial monitor" to see if any output is presented by the Arduino.
```c
#include <WiFi101.h>

// wifi settings
char ssid[] = "ssid-name";
char pass[] = "ssid-password";

// api server endpoint
char server[] = "192.168.6.22";
int port = 5000;

// api key that must be the same as the one in Python code
String api_key = "JtF2aUE5SGHfVJBCG5SH";

// frequency data is sent in ms - every 5 seconds
int timeout = 1000 * 5;

int status = WL_IDLE_STATUS;

void setup() {

  // initialize serial and wait for port to open:
  Serial.begin(9600);
  delay(1000);

  // check for the presence of the shield
  if (WiFi.status() == WL_NO_SHIELD) {
    Serial.println("WiFi shield not present");
    while (true);
  }

  // attempt to connect to wifi network
  while (status != WL_CONNECTED) {
    Serial.print("Attempting to connect to SSID: ");
    Serial.println(ssid);
    status = WiFi.begin(ssid, pass);
    // wait 10 seconds for connection
    delay(10000);
  }

  // output wifi status to serial monitor
  Serial.print("SSID: ");
  Serial.println(WiFi.SSID());

  IPAddress ip = WiFi.localIP();
  Serial.print("IP Address: ");
  Serial.println(ip);

  long rssi = WiFi.RSSI();
  Serial.print("signal strength (RSSI):");
  Serial.print(rssi);
  Serial.println(" dBm");
}

void loop() {
  WiFiClient client;

  if (client.connect(server, port)) {

    // I use a random number generator for this example
    // but you can use analog or digital inputs from the arduino
    String content = String(random(1000));

    client.println("POST /api HTTP/1.1");
    client.println("Connection: close");
    client.println("Api-Key: " + api_key);
    client.println("Content-Length: " + String(content.length()));
    client.println();
    client.println(content);

    delay(100);
    client.stop();
    Serial.println("Data sent successfully ...");

  } else {
    Serial.println("Problem sending data ...");
  }

  // waits for x seconds and continues looping
  delay(timeout);
}
```

As you can see from the example, the Arduino is generating a random integer in the range [ 0 .. 1000 ).
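If you don't have an MKR1000 at hand, you can fake the device from any terminal with curl. This sketch assumes the web app from above is running locally on port 5000 and uses the same example api key; wrap it in a loop with ```sleep 5``` to mimic the 5 second interval:

```shell
# post one random reading in the same shape the Arduino sketch sends;
# if the web app is not running this just reports failure instead of aborting
value=$((RANDOM % 1000))
if curl -s -o /dev/null -X POST "http://localhost:5000/api" \
        -H "Api-Key: JtF2aUE5SGHfVJBCG5SH" \
        --data "$value"; then
  echo "sent $value"
else
  echo "could not reach API"
fi
```
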
You can easily replace this with a temperature sensor or any other kind of sensor.

Now that we have the API under the hood and the Arduino is sending demo data, we can focus on data visualization.

## Data visualization

Before we continue we should examine our project folder structure. Currently we only have two files in our project:

_simple-iot-app/_

* _webapp.py_
* _data.db_

We will now add an HTML template that will contain CSS and JavaScript code inline for the sake of simplicity. And for the bottle framework to be able to scan the root application folder for templates we will add ```bottle.TEMPLATE_PATH.insert(0, "./")``` in ```webapp.py```. By default the bottle framework uses the ```views/``` subfolder to store templates. This is not the ideal situation, and if you will use bottle to develop web applications you should use the native behavior and store templates in its predefined folder. But for the sake of example we will override this. Be careful to fully replace your code with the new code that is provided below. Avoid partially replacing code in the file :) New code for reading data-points is also provided in the Python example below.

First we add a new route to our web application. It is triggered when the browser hits the root of the application ```http://0.0.0.0:5000/```. This route does nothing more than render the ```frontend.html``` template. This is done by ```return bottle.template("frontend.html")```. Check the code below to further examine how exactly this is done.

Then we expand the ```/api``` route and use different methods to write or read data-points. For writing a data-point we use the POST method and for reading points we use the GET method. The GET method returns a JSON object with the latest readings and historical data.

There is a fantastic JavaScript library for plotting time-series charts called [MetricsGraphics.js](https://www.metricsgraphicsjs.org) that is based on the [D3.js](https://d3js.org/) library for visualizing data.
This is the data schema required by MetricsGraphics.js → to achieve this we need to transform the data from the database into this format:

```json
[
  {
    "date": "2017-08-11 01:07:20",
    "value": 933
  },
  {
    "date": "2017-08-11 01:07:30",
    "value": 743
  }
]
```

The web application is now complete and we only need ```frontend.html```, which we will develop next. If you try to start the web app now and go to the root of the app it will return an error because we don't have frontend.html yet.

```python
# -*- coding: utf-8 -*-

import time
import bottle
import json
import datetime
import random
import dataset

# initializing bottle app
app = bottle.Bottle()

# adds root directory as template folder
bottle.TEMPLATE_PATH.insert(0, "./")

# connects to sqlite database
# check_same_thread=False allows using it in multi-threaded mode
app.config["db"] = dataset.connect("sqlite:///data.db?check_same_thread=False")

# api key that will be used in Arduino code
app.config["api_key"] = "JtF2aUE5SGHfVJBCG5SH"

# triggered when / is accessed from browser
# only accepts GET → no POST allowed
@app.route("/", method=["GET"])
def route_default():
    return bottle.template("frontend.html")

# triggered when /api is accessed from browser
# accepts POST and GET
@app.route("/api", method=["GET", "POST"])
def route_api():

    # if method is POST then we write a datapoint
    if bottle.request.method == "POST":
        status = 400
        ts = int(time.time())  # current timestamp
        value = bottle.request.body.read()  # data from device
        api_key = bottle.request.get_header("Api-Key")  # api key from header

        # outputs received data to console for debugging
        print ">>> {} :: {}".format(value, api_key)

        # if api_key is correct and value is present
        # then writes attribute to point table
        if api_key == app.config["api_key"] and value:
            app.config["db"]["point"].insert(dict(ts=ts, value=value))
            status = 200

        # we only need to return the status
        return bottle.HTTPResponse(status=status, body="")

    # if method is GET then we read datapoints
    else:
        response = []
        datapoints = app.config["db"]["point"].all()

        for point in datapoints:
            response.append({
                "date": datetime.datetime.fromtimestamp(int(point["ts"])).strftime("%Y-%m-%d %H:%M:%S"),
                "value": point["value"]
            })

        bottle.response.content_type = "application/json"
        return json.dumps(response)

# starting server on http://0.0.0.0:5000
if __name__ == "__main__":
    bottle.run(
        app = app,
        host = "0.0.0.0",
        port = 5000,
        debug = True,
        reloader = True,
        catchall = True,
    )
```

And now finally we can implement ```frontend.html```. Create a file with this name and copy the code below. When you are done you can start the web application. Steps for this part are listed below the code.

```html
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Simple IOT application</title>
  <!-- MetricsGraphics.js needs d3 plus its own css/js; the CDN versions
       here are indicative, use whichever current release works for you -->
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/metrics-graphics/2.11.0/metricsgraphics.min.css">
  <script src="https://d3js.org/d3.v4.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/metrics-graphics/2.11.0/metricsgraphics.min.js"></script>
  <style>
    body { font-family: sans-serif; }
  </style>
</head>
<body>
  <h1>Simple IOT application</h1>

  <!-- chart is rendered into this element -->
  <div id="chart"></div>

  <script>
    // fetches datapoints from /api and (re)draws the chart
    function drawChart() {
      d3.json("/api", function(data) {
        data = MG.convert.date(data, "date", "%Y-%m-%d %H:%M:%S");
        MG.data_graphic({
          title: "Sensor data",
          data: data,
          full_width: true,
          height: 300,
          target: "#chart",
          x_accessor: "date",
          y_accessor: "value"
        });
      });
    }

    // redraw every 5 seconds so new datapoints show up
    drawChart();
    setInterval(drawChart, 5000);
  </script>
</body>
</html>
```

Now the folder structure should look like:

_simple-iot-app/_

* _webapp.py_
* _data.db_
* _frontend.html_

Ok, let's now start the application and start feeding it data.

1. ```python webapp.py```
2. connect the Arduino MKR1000 to a power source
3. open a browser and go to ```http://0.0.0.0:5000```

If everything goes well you should see new data-points rendered on the chart every 5 seconds.

If you navigate to ```http://0.0.0.0:5000``` you should see the rendered chart as shown in the picture below.

![Application output](/assets/posts/iot-application/iot-app-output.png){:loading="lazy"}

The complete application with all the code is available for [download](/assets/posts/iot-application/simple-iot-application.zip).

## Conclusion

I hope this clarifies some aspects of IOT application development. Of course this is a minimal example and is far from what can be done in real life with some further dive into other technologies.

If you would like to continue exploring the IOT world, here are some interesting resources to examine:

* [Reading Sensors with an Arduino](https://www.allaboutcircuits.com/projects/reading-sensors-with-an-arduino/)
* [MQTT 101 – How to Get Started with the lightweight IoT Protocol](http://www.hivemq.com/blog/how-to-get-started-with-mqtt)
* [Stream Updates with Server-Sent Events](https://www.html5rocks.com/en/tutorials/eventsource/basics/)
* [Internet of Things (IoT) Tutorials](http://www.tutorialspoint.com/internet_of_things/)

Any comments or additional ideas are welcome in the comments below.
diff --git a/_posts/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md b/_posts/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
new file mode 100644
index 0000000..d29bd09
--- /dev/null
+++ b/_posts/posts/2018-01-16-using-digitalocean-spaces-object-storage-with-fuse.md
@@ -0,0 +1,332 @@
---
title: Using DigitalOcean Spaces Object Storage with FUSE
permalink: /using-digitalocean-spaces-object-storage-with-fuse.html
date: 2018-01-16T12:00:00+02:00
layout: post
type: post
draft: false
---

A couple of months ago [DigitalOcean](https://www.digitalocean.com) introduced a new product called [Spaces](https://blog.digitalocean.com/introducing-spaces-object-storage/), which is Object Storage very similar to Amazon's S3. This really piqued my interest, because this was something I was missing, and even the thought of going elsewhere on the internet for such functionality was of no interest to me. In keeping with their previous pricing, this is also very cheap, and the pricing page is a no-brainer compared to AWS or GCE. [Prices are clearly and precisely defined and outlined](https://www.digitalocean.com/pricing/). You must love them for that :)

## Initial requirements

* Is it possible to use them as a mounted drive with FUSE? (tl;dr YES)
* Will the performance degrade over time and over different sizes of objects? (tl;dr NO&YES)
* Can storage be mounted on multiple machines at the same time and be writable? (tl;dr YES)

> Let me be clear. The scripts I use are made just for benchmarking and are not intended to be used in real-life situations. That said, I am looking into using these approaches while adding a caching service in front and then dumping everything as an object to storage. This could potentially be an interesting post of its own. But in case you would need real-time data without eventual consistency, please take these scripts for what they are: not usable in such situations.
## Is it possible to use them as a mounted drive with FUSE?

Well, actually they can be used in such a manner. Because they are similar to [AWS S3](https://aws.amazon.com/s3/), many tools are available and you can find many articles and [Stackoverflow items](https://stackoverflow.com/search?q=s3+fuse).

To make this work you will need a DigitalOcean account. If you don't have one you will not be able to test this code. But if you have an account then go and [create a new Droplet](https://cloud.digitalocean.com/droplets/new?size=s-1vcpu-1gb&region=ams3&distro=debian&distroImage=debian-9-x64&options=private_networking,install_agent). If you click on this link you will already have Debian 9 with the smallest VM option preselected.

* Please be sure to add your SSH key, because we will log in to this machine remotely.
* If you change your region, please remember which one you chose because we will need this information when we try to mount the space to our machine.

Instructions on how to use SSH keys and how to set them up are available in the article [How To Use SSH Keys with DigitalOcean Droplets](https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets).

![DigitalOcean Droplets](/assets/posts/do-fuse/fuse-droplets.png){:loading="lazy"}

After we have created the Droplet it's time to create a new Space. This is done by clicking on the [Create](https://cloud.digitalocean.com/spaces/new) button (top right corner) and selecting Spaces. Choose a pronounceable ```Unique name``` because we will use it in the examples below. You can either choose Private or Public, it doesn't matter in our case. And you can always change that in the future.

When you have created the new Space, we should [generate an Access key](https://cloud.digitalocean.com/settings/api/tokens). This link will guide you to the page where you can generate this key. After you create a new one, please save the provided Key and Secret because the Secret will not be shown again.
![DigitalOcean Spaces](/assets/posts/do-fuse/fuse-spaces.png){:loading="lazy"}

Now that we have a new Space and an Access key we should SSH into our machine.

```bash
# replace IP with the ip of your newly created droplet
ssh root@IP

# this will install utilities for mounting storage objects as FUSE
apt install s3fs

# we now need to provide credentials (access key we created earlier)
# replace KEY and SECRET with your own credentials but leave the colon between them
# we also need to set proper permissions
echo "KEY:SECRET" > .passwd-s3fs
chmod 600 .passwd-s3fs

# now we mount the space to our machine
# replace UNIQUE-NAME with the name you chose earlier
# if you chose a different region for your space be careful about the -ourl option (ams3)
s3fs UNIQUE-NAME /mnt/ -ourl=https://ams3.digitaloceanspaces.com -ouse_cache=/tmp

# now we try to create a file
# once you mount it may take a couple of seconds to retrieve data
echo "Hello cruel world" > /mnt/hello.txt
```

After all this you can return to your browser, go to [DigitalOcean Spaces](https://cloud.digitalocean.com/spaces) and click on your created space. If the file hello.txt is present you have successfully mounted the space to your machine and written data to it.

I chose the same region for my Droplet and my Space but you don't have to. You can have different regions. What this actually does to performance I don't know.

Additional information on FUSE:

* [Github project page for s3fs](https://github.com/s3fs-fuse/s3fs-fuse)
* [FUSE - Filesystem in Userspace](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)

## Will the performance degrade over time and over different sizes of objects?

For this task I didn't want to just read and write text files or upload images. I actually wanted to figure out if using something like SQLite is viable in this case.
### Measurement experiment 1: File copy

```bash
# first we create some dummy files of different sizes
dd if=/dev/zero of=10KB.dat bs=1024 count=10      #10KB
dd if=/dev/zero of=100KB.dat bs=1024 count=100    #100KB
dd if=/dev/zero of=1MB.dat bs=1024 count=1024     #1MB
dd if=/dev/zero of=10MB.dat bs=1024 count=10240   #10MB

# now we set the time command to only return real
TIMEFORMAT=%R

# now let's test it
(time cp 10KB.dat /mnt/) |& tee -a 10KB.results.txt

# and now we automate
# this will perform the same operation 100 times
# this will output results into separate files based on object size
n=0; while (( n++ < 100 )); do (time cp 10KB.dat /mnt/10KB.$n.dat) |& tee -a 10KB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 100KB.dat /mnt/100KB.$n.dat) |& tee -a 100KB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 1MB.dat /mnt/1MB.$n.dat) |& tee -a 1MB.results.txt; done
n=0; while (( n++ < 100 )); do (time cp 10MB.dat /mnt/10MB.$n.dat) |& tee -a 10MB.results.txt; done
```

Files of size 100MB were not transferred successfully and ended up displaying an error (cp: failed to close '/mnt/100MB.1.dat': Operation not permitted).

As I suspected, object size is not really that important. Sadly I don't have the time to test performance over longer periods of time. But if some of you do, please send me your data. I would be interested in seeing the results.

**Here are the plotted results**

You can download the [raw results here](/assets/posts/do-fuse/copy-benchmarks.tsv). Measurements are in seconds.
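A results file like these is easy to reduce to a mean and standard deviation with a few lines of Python. The sketch below uses a few made-up values in place of one of the ```*.results.txt``` files, just to show the shape of it:

```python
import statistics

# stand-in for the contents of e.g. 10KB.results.txt (seconds per copy);
# in practice you would read the file with open(...).read() instead
raw = """0.412
0.398
0.455
0.401"""

times = [float(line) for line in raw.splitlines()]
print("n=%d mean=%.4f s stdev=%.4f s"
      % (len(times), statistics.mean(times), statistics.stdev(times)))
```
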
As far as these tests show, performance is quite stable and can be predicted, which is fantastic. But this is a small test and it spans only a couple of hours, so you should not completely trust it.

### Measurement experiment 2: SQLite performance

I was unable to use a database file directly from the mounted drive, so this is a no-go, as I suspected. So I executed the code below on a local disk just to get some benchmarks. I ran DROPTABLE, CREATETABLE, INSERTMANY (1000 records), FETCHALL and COMMIT 1000 times to generate statistics. As you can see, the performance of SQLite is quite amazing. You could then potentially just copy the file to the mounted drive and be done with it.

```python
import time
import sqlite3
import sys

if len(sys.argv) < 4:
    print("usage: python sqlite-benchmark.py DB_PATH NUM_RECORDS REPEAT")
    exit()

def data_iter(x):
    for i in range(x):
        yield "m" + str(i), "f" + str(i*i)

header_line = "%s\t%s\t%s\t%s\t%s\n" % ("DROPTABLE", "CREATETABLE", "INSERTMANY", "FETCHALL", "COMMIT")
with open("sqlite-benchmarks.tsv", "w") as fp:
    fp.write(header_line)

start_time = time.time()
conn = sqlite3.connect(sys.argv[1])
c = conn.cursor()
end_time = time.time()
result_time = CONNECT = end_time - start_time
print("CONNECT: %g seconds" % (result_time))

start_time = time.time()
c.execute("PRAGMA journal_mode=WAL")
c.execute("PRAGMA temp_store=MEMORY")
c.execute("PRAGMA synchronous=OFF")
end_time = time.time()
result_time = PRAGMA = end_time - start_time
print("PRAGMA: %g seconds" % (result_time))

for i in range(int(sys.argv[3])):
    print("#%i" % (i))

    start_time = time.time()
    c.execute("drop table if exists test")
    end_time = time.time()
    result_time = DROPTABLE = end_time - start_time
    print("DROPTABLE: %g seconds" % (result_time))

    start_time = time.time()
    c.execute("create table if not exists test(a,b)")
    end_time = time.time()
    result_time = CREATETABLE = end_time - start_time
    print("CREATETABLE: %g seconds" %
(result_time))

    start_time = time.time()
    c.executemany("INSERT INTO test VALUES (?, ?)", data_iter(int(sys.argv[2])))
    end_time = time.time()
    result_time = INSERTMANY = end_time - start_time
    print("INSERTMANY: %g seconds" % (result_time))

    start_time = time.time()
    c.execute("select count(*) from test")
    res = c.fetchall()
    end_time = time.time()
    result_time = FETCHALL = end_time - start_time
    print("FETCHALL: %g seconds" % (result_time))

    start_time = time.time()
    conn.commit()
    end_time = time.time()
    result_time = COMMIT = end_time - start_time
    print("COMMIT: %g seconds" % (result_time))

    print("")
    log_line = "%f\t%f\t%f\t%f\t%f\n" % (DROPTABLE, CREATETABLE, INSERTMANY, FETCHALL, COMMIT)
    with open("sqlite-benchmarks.tsv", "a") as fp:
        fp.write(log_line)

start_time = time.time()
conn.close()
end_time = time.time()
result_time = CLOSE = end_time - start_time
print("CLOSE: %g seconds" % (result_time))
```

You can download the [raw results here](/assets/posts/do-fuse/sqlite-benchmarks.tsv). And again, these results were obtained on local block storage and do not represent the capabilities of object storage. With my current approach and the current state of the test code those tests can not be done. I would need to make the Python code much more robust, check locking, etc.
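Since the database can't live on the mount directly, the pattern hinted at above — work locally, then ship a snapshot to the mount — can be sketched with the sqlite3 online backup API (available in Python 3.7+; ```/mnt/``` here stands for the s3fs mount from earlier, and the file names are made up for the example):

```python
import sqlite3

# create a small local database, like data.db in the benchmarks
src = sqlite3.connect("local.db")
src.execute("create table if not exists test(a, b)")
src.execute("insert into test values (1, 2)")
src.commit()

# take a consistent snapshot even while writers are active;
# the snapshot file is what you would then cp to /mnt/
dst = sqlite3.connect("snapshot.db")
src.backup(dst)

print(dst.execute("select count(*) from test").fetchone()[0])
```

This sidesteps the locking problems of SQLite over FUSE at the cost of the eventual-consistency window the post already warns about.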
## Can storage be mounted on multiple machines at the same time and be writable?

Well, this one didn't take long to test. And the answer is **YES**. I mounted
the space on both machines and measured the same performance on both. But
because a file is downloaded before a write and then uploaded when complete,
there could be problems if another process tries to access the same file at the
same time.

## Observations and conclusion

Using Spaces in this way makes it easier to access and manage files. But beyond
that, you would need to write additional code to make it play nice with your
applications.

Nevertheless, this was extremely simple to set up and use, and it is just
another excellent product in the DigitalOcean line. I found this exercise very
valuable and am thinking about implementing some sort of mechanism for SQLite,
so data can be stored on Spaces and accessed by many VMs. For a project where
data doesn't need to be accessible in real time and can tolerate being a couple
of minutes stale, this would be very interesting. If any of you find this
proposal interesting, please write in the comment box below or shoot me an
email and I will keep you posted.

diff --git a/_posts/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md b/_posts/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md
new file mode 100644
index 0000000..6980ed1
--- /dev/null
+++ b/_posts/posts/2019-01-03-encoding-binary-data-into-dna-sequence.md
@@ -0,0 +1,416 @@
---
title: Encoding binary data into DNA sequence
permalink: /encoding-binary-data-into-dna-sequence.html
date: 2019-01-03T12:00:00+02:00
layout: post
type: post
draft: false
---

## Initial thoughts

Imagine a world where you could go outside, take a leaf from a tree, put it
through your personal DNA sequencer, and get data like music, videos or
computer programs from it. Well, this is all possible now.
It has not been done on a large scale because it is quite expensive to create
DNA strands, but it is possible.

Encoding data into a DNA sequence is a relatively simple process once you
understand the relationship between binary data and nucleotides, and scientists
have been making large leaps in this field in order to provide a viable
long-term storage solution for our data, one that would potentially survive our
species in case of a global disaster. We could imprint all the world's
knowledge into plants and ensure the survival of our knowledge.

A more optimistic use for this technology would be easier storage of the
ever-growing data we produce every day. Once machines for sequencing DNA become
fast and cheap enough, this could mean the next evolution of storing data and
abandoning classical hard drives and solid state drives in data warehouses.

As things currently stand this is still not viable, but it is quite an amazing
and cool technology.

My interest in this field is purely in the encoding processes and experimental
testing, mainly because I don't have access to these expensive machines. My
initial goal was to create a toolkit that can be used by everybody to encode
their data into a proper DNA sequence.

## Glossary

**deoxyribose** A five-carbon sugar molecule with a hydrogen atom rather than a
hydroxyl group in the 2′ position; the sugar component of DNA nucleotides.

**double helix** The molecular shape of DNA in which two strands of nucleotides
wind around each other in a spiral shape.

**nitrogenous base** A nitrogen-containing molecule that acts as a base; often
referring to one of the purine or pyrimidine components of nucleic acids.

**phosphate group** A molecular group consisting of a central phosphorus atom
bound to four oxygen atoms.

**RGB** The RGB color model is an additive color model in which red, green and
blue light are added together in various ways to reproduce a broad array of
colors.
+ +**GCC** The GNU Compiler Collection is a compiler system produced by the GNU +Project supporting various programming languages. + +## Data encoding + +**TL;DR:** Encoding involves the use of a code to change original data into a +form that can be used by an external process. + +Encoding is the process of converting data into a format required for a number +of information processing needs, including: + +- Program compiling and execution +- Data transmission, storage and compression/decompression +- Application data processing, such as file conversion + +Encoding can have two meanings: + +- In computer technology, encoding is the process of applying a specific code, + such as letters, symbols and numbers, to data for conversion into an + equivalent cipher. +- In electronics, encoding refers to analog to digital conversion. + +## Quick history of DNA + +- **1869** - Friedrich Miescher identifies "nuclein". +- **1900s** - The Eugenics Movement. +- **1900** – Mendel's theories are rediscovered by researchers. +- **1944** - Oswald Avery identifies DNA as the 'transforming principle'. +- **1952** - Rosalind Franklin photographs crystallized DNA fibres. +- **1953** - James Watson and Francis Crick discover the double helix structure of DNA. +- **1965** - Marshall Nirenberg is the first person to sequence the bases in each codon. +- **1983** - Huntington's disease is the first mapped genetic disease. +- **1990** - The Human Genome Project begins. +- **1995** - Haemophilus Influenzae is the first bacterium genome sequenced. +- **1996** - Dolly the sheep is cloned. +- **1999** - First human chromosome is decoded. +- **2000** – Genetic code of the fruit fly is decoded. +- **2002** – Mouse is the first mammal to have its genome decoded. +- **2003** – The Human Genome Project is completed. +- **2013** – DNA Worldwide and Eurofins Forensic discover identical twins have differences in their genetic makeup. + +## What is DNA? 

Deoxyribonucleic acid is a self-replicating material which is **present in
nearly all living organisms** as the main constituent of chromosomes. It is the
**carrier of genetic information**.

> The nitrogen in our DNA, the calcium in our teeth, the iron in our blood,
> the carbon in our apple pies were made in the interiors of collapsing stars.
> We are made of starstuff.
> **-- Carl Sagan, Cosmos**

The nucleotide in DNA consists of a sugar (deoxyribose), one of four bases
(cytosine (C), thymine (T), adenine (A), guanine (G)), and a phosphate.
Cytosine and thymine are pyrimidine bases, while adenine and guanine are purine
bases. The sugar and the base together are called a nucleoside.

![DNA](/assets/posts/dna-sequence/dna-basics.jpg){:loading="lazy"}

*DNA (a) forms a double stranded helix, and (b) adenine pairs with thymine and
cytosine pairs with guanine. (credit a: modification of work by Jerome Walker,
Dennis Myts)*

## Encode binary data into DNA sequence

As an input file you can use any file you want:

- ASCII files,
- Compiled programs,
- Multimedia files (MP3, MP4, MKV, etc.),
- Images,
- Database files,
- etc.

Note: If you copied all the bytes from RAM to a file, or piped data into a
file, you could encode that data as well, as long as you provide a file pointer
to the encoder.

### Basic Encoding

As already mentioned, Basic Encoding is based on a simple mapping. DNA is
composed of 4 nucleotides (Adenine, Cytosine, Guanine, Thymine; usually
referred to by their first letter), so a single nucleotide can encode two bits.
In this way, we are able to use the 4 bases that compose the DNA strand to
encode each byte of data.

| Two bits | Nucleotides |
| -------- | ---------------- |
| 00 | **A** (Adenine) |
| 10 | **G** (Guanine) |
| 01 | **C** (Cytosine) |
| 11 | **T** (Thymine) |

With this in mind we can simply encode any data by using a two-bit to
nucleotide conversion.

```python
# Algorithm 1: Naive byte array to DNA encode
# (follows the two-bit mapping from the table above)
def encode_to_dna_sequence(f):
    mapping = {"00": "A", "10": "G", "01": "C", "11": "T"}
    enc = ""
    while True:
        c = f.read(1)                  # read 1 byte from the stream
        if not c:                      # EOF
            break
        bits = format(c[0], "08b")     # byte -> 8-character binary string
        for e in range(0, 8, 2):       # two bits per nucleotide
            enc += mapping[bits[e:e + 2]]
    return enc                         # return the DNA sequence
```

Another encoding is **Goldman encoding**. Using it helps with nonsense
mutations (an amino acid replaced by a stop codon), which are the most
problematic during translation because they lead to truncated amino acid
sequences, which in turn result in truncated proteins.

[Where to store big data? In DNA: Nick Goldman at TEDxPrague](https://www.youtube.com/watch?v=a4PiGWNsIEU)

### FASTA file format

In bioinformatics, FASTA format is a text-based format for representing either
nucleotide sequences or peptide sequences, in which nucleotides or amino acids
are represented using single-letter codes. The format also allows for sequence
names and comments to precede the sequences. The format originates from the
FASTA software package, but has now become a standard in the field of
bioinformatics.

The first line in a FASTA file, starting either with a ">" (greater-than)
symbol or, less frequently, a ";" (semicolon), was taken as a comment.
Subsequent lines starting with a semicolon would be ignored by software.
Since the only comment +used was the first, it quickly became used to hold a summary description of the +sequence, often starting with a unique library accession number, and with time +it has become commonplace to always use ">" for the first line and to not use +";" comments (which would otherwise be ignored). + +```txt +;LCBO - Prolactin precursor - Bovine +; a sample sequence in FASTA format +MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS +EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL +VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED +ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC* + +>MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken +ADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID +FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA +DIDGDGQVNYEEFVQMMTAK* + +>gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus] +LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV +EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG +LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL +GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX +IENY +``` + +FASTA format was extended by [FASTQ](https://en.wikipedia.org/wiki/FASTQ_format) +format from the [Sanger Centre](https://www.sanger.ac.uk/) in Cambridge. + +### PNG encoded DNA sequence + +| Nucleotides | RGB | Color name | +| ------------ | ----------- | ---------- | +| A ➞ Adenine | (0,0,255) | Blue | +| G ➞ Guanine | (0,100,0) | Green | +| C ➞ Cytosine | (255,0,0) | Red | +| T ➞ Thymine | (255,255,0) | Yellow | + +With this in mind we can create a simple algorithm to create PNG representation +of a DNA sequence. 

```python
# Algorithm 2: Naive DNA to PNG encode from a FASTA file
# (uses Pillow; draws one pixel per base for simplicity)
from PIL import Image

COLORS = {
    "A": (0, 0, 255),    # Blue
    "G": (0, 100, 0),    # Green
    "C": (255, 0, 0),    # Red
    "T": (255, 255, 0),  # Yellow
}

def encode_dna_sequence_to_png(fasta_path, png_path, width=60):
    with open(fasta_path) as f:
        # skip FASTA headers/comments, keep only the sequence
        seq = "".join(l.strip() for l in f if not l.startswith((">", ";")))
    height = (len(seq) + width - 1) // width
    img = Image.new("RGB", (width, height))
    for i, base in enumerate(seq):
        img.putpixel((i % width, i // width), COLORS[base])
    img.save(png_path)  # save the PNG image
```

## Encoding text file in practice

In this example we will take a simple text file as our input stream for
encoding. The file contains a quote from Niels Bohr, saved as a txt file.

> How wonderful that we have met with a paradox. Now we have some hope of
> making progress.
> ― Niels Bohr

First we encode the text file into a FASTA file.

```bash
./dnae-encode -i quote.txt -o quote.fa
2019/01/10 00:38:29 Gathering input file stats
2019/01/10 00:38:29 Starting encoding ...
   106 B / 106 B [==================================] 100.00% 0s
2019/01/10 00:38:29 Saving to FASTA file ...
2019/01/10 00:38:29 Output FASTA file length is 438 B
2019/01/10 00:38:29 Process took 987.263µs
2019/01/10 00:38:29 Done ...
```

The output `quote.fa` file contains the encoded DNA sequence in ASCII format.

```txt
>SEQ1
GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA
GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA
ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA
ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT
GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT
GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC
AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC
AACC
```

Then we encode the FASTA file from the previous step into a PNG.

```bash
./dnae-png -i quote.fa -o quote.png
2019/01/10 00:40:09 Gathering input file stats ...
2019/01/10 00:40:09 Deconstructing FASTA file ...
2019/01/10 00:40:09 Compositing image file ...
   424 / 424 [==================================] 100.00% 0s
2019/01/10 00:40:09 Saving output file ...
2019/01/10 00:40:09 Output image file length is 1.1 kB
2019/01/10 00:40:09 Process took 19.036117ms
2019/01/10 00:40:09 Done ...
```

After encoding into PNG format the file looks like this.

![Encoded Quote in PNG format](/assets/posts/dna-sequence/quote.png){:loading="lazy"}

The larger the input stream, the larger the PNG file.

A basic Hello World C program compiled with
[GCC](https://www.gnu.org/software/gcc/) would [look
like](/assets/posts/dna-sequence/sample.png).

```c
// gcc -O3 -o sample sample.c
#include <stdio.h>

int main(void) {
    printf("Hello, world!\n");
    return 0;
}
```

## Toolkit for encoding data

I have created a toolkit with two main programs:

- dnae-encode (encodes a file into a FASTA file)
- dnae-png (encodes a FASTA file into a PNG)

The toolkit with full source code is available on
[github.com/mitjafelicijan/dna-encoding](https://github.com/mitjafelicijan/dna-encoding).

### dnae-encode

```bash
> ./dnae-encode --help
usage: dnae-encode --input=INPUT [<flags>]

A command-line application that encodes file into DNA sequence.

Flags:
  --help                 Show context-sensitive help (also try --help-long and --help-man).
  -i, --input=INPUT      Input file (ASCII or binary) which will be encoded into DNA sequence.
  -o, --output="out.fa"  Output file which stores DNA sequence in FASTA format.
  -s, --sequence=SEQ1    The description line (defline) or header/identifier line, gives a name and/or a unique identifier for the sequence.
  -c, --columns=60       Row characters length (no more than 120 characters). Devices preallocate fixed line sizes in software.
  --version              Show application version.
```

### dnae-png

```bash
> ./dnae-png --help
usage: dnae-png --input=INPUT [<flags>]

A command-line application that encodes FASTA file into PNG image.

Flags:
  --help                 Show context-sensitive help (also try --help-long and --help-man).
  -i, --input=INPUT       Input FASTA file which will be encoded into PNG image.
  -o, --output="out.png"  Output file in PNG format that represents DNA sequence in graphical way.
  -s, --size=10           Size of pairings of DNA bases on image in pixels (lower resolution lower file size).
  --version               Show application version.
```

## Benchmarks

First we generate some binary sample data with dd.

```bash
dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=1KB.bin bs=1KB count=1 iflag=fullblock
```

![Sample binary file 1KB](/assets/posts/dna-sequence/sample-binary-file.png){:loading="lazy"}

Our freshly generated 1KB file looks something like this (it's full of garbage
data, as intended).

We create the following binary files:

- 1KB.bin
- 10KB.bin
- 100KB.bin
- 1MB.bin
- 10MB.bin
- 100MB.bin

After this we create FASTA files for all the binary files by encoding them into
a DNA sequence.

```bash
./dnae-encode -i 100MB.bin -o 100MB.fa
```

Then we GZIP all the FASTA files to see how much they can be compressed.

```bash
gzip -9 < 10MB.fa > 10MB.fa.gz
```

![Encode to FASTA](/assets/posts/dna-sequence/chart-speed.svg){:loading="lazy"}

Encoding speed when converting the files to FASTA format.

![File sizes](/assets/posts/dna-sequence/chart-size.svg){:loading="lazy"}

Size of the output files after encoding.

[Download CSV file with benchmarks](/assets/posts/dna-sequence/benchmarks.csv).
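Since the mapping is fixed, decoding is just the inverse lookup. This round-trip sketch is not part of the toolkit; it only demonstrates why every input byte becomes exactly four nucleotides, which matches the 106 B quote growing into a 424-character sequence above:

```python
# Round-trip sketch: bytes -> DNA string -> bytes, using the
# two-bit mapping from the table above (00=A, 10=G, 01=C, 11=T).
ENCODE = {"00": "A", "10": "G", "01": "C", "11": "T"}
DECODE = {v: k for k, v in ENCODE.items()}

def to_dna(data: bytes) -> str:
    bits = "".join(format(b, "08b") for b in data)
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(seq: str) -> bytes:
    bits = "".join(DECODE[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

The 4x growth before compression is also why the GZIP step above is interesting: a four-letter alphabet compresses very well.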

## References

- https://www.techopedia.com/definition/948/encoding
- https://www.dna-worldwide.com/resource/160/history-dna-timeline
- https://opentextbc.ca/biology/chapter/9-1-the-structure-of-dna/
- https://arxiv.org/abs/1801.04774
- https://en.wikipedia.org/wiki/FASTA_format

diff --git a/_posts/posts/2019-10-14-simplifying-and-reducing-clutter.md b/_posts/posts/2019-10-14-simplifying-and-reducing-clutter.md
new file mode 100644
index 0000000..e804ecb
--- /dev/null
+++ b/_posts/posts/2019-10-14-simplifying-and-reducing-clutter.md
@@ -0,0 +1,60 @@
---
title: Simplifying and reducing clutter in my life and work
permalink: /simplifying-and-reducing-clutter.html
date: 2019-10-14T12:00:00+02:00
layout: post
type: post
draft: false
---

I recently moved my main working machine back from a Hackintosh to Linux. Well,
the experiment was interesting and I did some great work on macOS, but it was
time to move back.

I actually really missed Linux. The simplicity of `apt-get`, or just the amount
of software that exists for Linux, makes it a no-brainer. I spent most of my
time on macOS finding solutions to make things work. Using
[Brew](https://brew.sh/) was just a horrible experience and far from the
package managers on Linux. At least they managed to get that `sudo` debacle
sorted.

Not all was bad. macOS in general was a perfectly good environment. Things like
Docker and similar tooling worked without any hiccups. My usual tools, like my
coding IDE, worked flawlessly, and the whole look and feel is just superb. I
had been using a MacBook Air for a couple of years, so I was used to the
system, but never as a daily driver.

One of the things I did after I installed Linux back on my machine was cleaning
up my Dropbox folder. I have everything on Dropbox. Even my projects folder. I
write code for a living, so my whole life revolves around a couple of megs of
code (with assets). So it's not like I have huge files on my machine.
I don't have movies or music or pictures on my PC. All of that stuff is in the
cloud. I use Google Music and I have a Netflix account, which is more than
enough for me.

I also went and deleted some of the repositories on my GitHub account. I have
deleted more code than I have deployed. People find this strange, but for me
deleting something feels cathartic and also forces me to write better code the
next time I face a similar problem. That was a huge relief, if I am being
totally honest.

The next step was to do something with my webpage. I had been using some
scripts I wrote a while ago to generate static pages from markdown source
posts. I kept adding and adding stuff on top of it and it became a source of
frustration. And this is just a simple blog, yet I was using gulp and npm.
Anyway, after a couple of hours of searching and testing static generators I
found an interesting one,
[https://github.com/piranha/gostatic](https://github.com/piranha/gostatic), and
decided to use it. It was the only one with a simple templating engine, not
that I really need one. The others had convoluted ways of trying to solve
everything and in the end required a bigger learning curve than I was ready to
accept. So I deleted a couple of old posts, simplified the HTML, trashed most
of the CSS and went with
[https://motherfuckingwebsite.com/](https://motherfuckingwebsite.com/)
aesthetics. Yeah, the previous site was more visually stimulating, but all I
really care about at this point is the content. And the Times New Roman font is
kind of awesome.

I stopped working on most of my projects in the past couple of months because
the overhead was just too much. There comes a point when you stretch yourself
too thin; you stop progressing, and with that comes dissatisfaction.

So that's about it. Moving forward, minimal style.
diff --git a/_posts/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md b/_posts/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md
new file mode 100644
index 0000000..a1b237b
--- /dev/null
+++ b/_posts/posts/2019-10-19-using-sentiment-analysis-for-clickbait-detection.md
@@ -0,0 +1,109 @@
---
title: Using sentiment analysis for clickbait detection in RSS feeds
permalink: /using-sentiment-analysis-for-clickbait-detection-in-rss-feeds.html
date: 2019-10-19T12:00:00+02:00
layout: post
type: post
draft: false
---

## Initial thoughts

One of the things that has interested me for a while now is whether major,
well-established news sites use clickbait titles to drive additional traffic to
their sites and generate additional impressions.

The goal is to see how article titles and the actual content of articles differ
from each other, and whether the titles are clickbait.

## Preparing and cleaning data

For this example I opted to just use an RSS feed from a news website and
decided to go with [The Guardian](https://www.theguardian.com) World news. This
gets us limited data (~40 articles), and the description (the actual content)
is trimmed, so it doesn't fully reflect the article contents.

To get better content I could use web scraping, using the RSS feed as a link
list and fetching contents directly from the website, but for this simple
example this will suffice.

There are a couple of requirements we need to install before we continue:

- `pip3 install feedparser` (parses an RSS feed from a URL)
- `pip3 install vaderSentiment` (does sentiment polarity analysis)
- `pip3 install matplotlib` (plots a chart of the results)

So first we need to fetch the RSS data and sanitize the HTML content in the
descriptions.

```python
import re
import feedparser

feed_url = "https://www.theguardian.com/world/rss"
feed = feedparser.parse(feed_url)

# sanitize html
for item in feed.entries:
    item.description = re.sub('<[^<]+?>', '', item.description)
```

## Perform sentiment analysis

Since we now have cleaned-up data in our `feed.entries` object, we can start
performing sentiment analysis.

There are many sentiment analysis libraries available, ranging from rule-based
sentiment analysis up to machine-learning-supported analysis. To keep things
simple I decided to use the rule-based analysis library
[vaderSentiment](https://github.com/cjhutto/vaderSentiment) from
[C.J. Hutto](https://github.com/cjhutto). It is a really nice library and quite
easy to use.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyser = SentimentIntensityAnalyzer()

sentiment_results = []
for item in feed.entries:
    sentiment_title = analyser.polarity_scores(item.title)
    sentiment_description = analyser.polarity_scores(item.description)
    sentiment_results.append([sentiment_title['compound'], sentiment_description['compound']])
```

Now that we have this data in a shape that is compatible with matplotlib, we
can plot the results to see the difference between the title and description
sentiment of an article.

```python
import matplotlib.pyplot as plt

plt.rcParams['figure.figsize'] = (15, 3)
plt.plot(sentiment_results, drawstyle='steps')
plt.title('Sentiment analysis relationship between title and description (Guardian World News)')
plt.legend(['title', 'description'])
plt.show()
```

## Results and assets

1. Because of the small sample size, further conclusions are impossible to
   draw.
2. A rule-based approach may not be the best way of doing this. By using deep
   learning we would be able to get better insights.
3.
**The next step would be to** periodically fetch RSS items, store them over a
   longer period of time, and then perform the analysis again, using either
   machine learning or deep learning on top of it.

![Relationship between title and description](/assets/posts/sentiment-analysis/guardian-sa-title-desc-relationship.png){:loading="lazy"}

The figure above displays the difference between title and description
sentiment for each RSS feed item. 1 means positive and -1 means negative
sentiment.

[» Download Jupyter Notebook](/assets/posts/sentiment-analysis/sentiment-analysis.ipynb)

## Going further

- [Twitter Sentiment Analysis by Bryan Schwierzke](https://github.com/bswiss/news_mood)
- [AFINN-based sentiment analysis for Node.js by Andrew Sliwinski](https://github.com/thisandagain/sentiment)
- [Sentiment Analysis with LSTMs in Tensorflow by Adit Deshpande](https://github.com/adeshpande3/LSTM-Sentiment-Analysis)
- [Sentiment analysis on tweets using Naive Bayes, SVM, CNN, LSTM, etc. by Abdul Fatir](https://github.com/abdulfatir/twitter-sentiment-analysis)

diff --git a/_posts/posts/2020-03-22-simple-sse-based-pubsub-server.md b/_posts/posts/2020-03-22-simple-sse-based-pubsub-server.md
new file mode 100644
index 0000000..ffb7285
--- /dev/null
+++ b/_posts/posts/2020-03-22-simple-sse-based-pubsub-server.md
@@ -0,0 +1,455 @@
---
title: Simple Server-Sent Events based PubSub Server
permalink: /simple-server-sent-events-based-pubsub-server.html
date: 2020-03-22T12:00:00+02:00
layout: post
type: post
draft: false
---

## Before we continue ...

The Publisher/Subscriber model is nothing new and there are many amazing
solutions out there, so writing a new one would be a waste of time, if only the
existing solutions didn't have quite complex install procedures and weren't so
hard to maintain. But to be fair, comparing this simple server with something
like [Kafka](https://kafka.apache.org/) or
[RabbitMQ](https://www.rabbitmq.com/) is laughable, to say the least.
Those solutions are enterprise grade and have many mechanisms to ensure
messages aren't lost, and much more. Regardless of these drawbacks, this method
has been tested on a large website and has worked without any problems so far.
So now that we have that cleared up, let's continue.

***Wiki definition:** Publish/subscribe messaging, or pub/sub messaging, is a
form of asynchronous service-to-service communication used in serverless and
microservices architectures. In a pub/sub model, any message published to a
topic is immediately received by all the subscribers to the topic.*

## General goals

- provide a simple server that relays messages to all the connected clients,
- messages can be posted on specific topics,
- messages get sent via [Server-Sent
  Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
  to all the subscribers.

## How exactly does the pub/sub model work?

The easiest way to explain this is with the diagram below. The basic function
is simple. We have subscribers that receive messages, and we have publishers
that create and post messages. A similar, well-known pattern works on the
premise of consumers and producers, which take on comparable roles.

![How PubSub works](/assets/posts/simple-pubsub-server/pubsub-overview.png){:loading="lazy"}

**These are some naive characteristics we want to achieve:**

- the producer publishes messages to a topic,
- the consumer receives messages from its subscribed topics,
- the server is also known as a Broker,
- the broker does not store messages or track delivery success,
- the broker uses the
  [FIFO](https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)) method
  for delivering messages,
- for a consumer to receive messages from a topic, the producer and consumer
  topics must match,
- a consumer can subscribe to multiple topics,
- a producer can publish to multiple topics,
- each message has a messageId.
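To make the characteristics above concrete, here is a minimal in-memory broker sketch in Python. It is illustrative only (the actual server implemented below is Node.js): messages are fanned out to matching topics in FIFO order, nothing is persisted, and each message gets an increasing messageId.

```python
from collections import defaultdict
from itertools import count

class Broker:
    """Naive broker: no persistence, FIFO delivery, topic must match."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks
        self.ids = count(1)                   # monotonically increasing messageId

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        message_id = next(self.ids)
        # Fan out immediately; nothing is stored, so a subscriber
        # that joins later never sees this message.
        for callback in self.subscribers[topic]:
            callback(message_id, message)
        return message_id
```

A subscriber on `"news"` receives only `"news"` messages, and a restart of the broker loses everything in flight, which mirrors the drawbacks listed next.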

**Known drawbacks:**

- messages are not stored in a persistent queue, and there is no
  [DeadLetterQueue](https://en.wikipedia.org/wiki/Dead_letter_queue) for
  unreceived messages, so messages could be lost on server restart,
- [Server-Sent
  Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
  open a long-running connection between the client and the server, so if your
  setup is load balanced, make sure the load balancer allows long-lived
  connections,
- no system moderation, due to the dynamic nature of creating queues.

## Server-Sent Events

Read more about it on the [official specification
page](https://html.spec.whatwg.org/multipage/server-sent-events.html).

### Current browser support

![Browser support](/assets/posts/simple-pubsub-server/caniuse.png){:loading="lazy"}

Check
[https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource)
for the latest information about browser support.

### Known issues

- Firefox 52 and below do not support EventSource in web/shared workers
- In Firefox prior to version 36 server-sent events do not reconnect
  automatically in case of a connection interrupt (bug)
- Reportedly, CORS in EventSource is currently supported in Firefox 10+, Opera
  12+, Chrome 26+, Safari 7.0+.
- Antivirus software may block the event streaming data chunks.

Source: [https://caniuse.com/#feat=eventsource](https://caniuse.com/#feat=eventsource)

### Message format

The simplest message that can be sent contains only the data attribute:

```bash
data: this is a simple message

```

You can send message IDs, to be used if the connection is dropped:

```bash
id: 33
data: this is line one
data: this is line two

```

And you can specify your own event types (the above messages will all trigger
the message event):

```bash
id: 36
event: price
data: 103.34

```

### Server requirements

The important thing is which headers are sent by the server; these are what
trigger the browser to treat the response as an EventStream.

The headers responsible for this are:

```bash
Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive
```

### Debugging with Google Chrome

Google Chrome provides a built-in debugging and exploration tool for
[Server-Sent
Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events),
which is quite nice and available from Developer Tools under the Network tab.

> You can only debug the client-side events that get received, not the server
> ones. For debugging server events, add `console.log` calls to the `server.js`
> code and print out the events.

![Google Chrome Developer Tools EventStream](/assets/posts/simple-pubsub-server/chrome-debugging.png){:loading="lazy"}

## Server implementation

For the sake of this example we will use [Node.js](https://nodejs.org/en/) with
[Express](https://expressjs.com) as our router, since this is the easiest way
to get started, and we will use an already written SSE library for Node,
[sse-pubsub](https://www.npmjs.com/package/sse-pubsub), so we don't reinvent
the wheel.
+ +```bash +npm init --yes + +npm install express +npm install body-parser +npm install sse-pubsub +``` + +Basic implementation of a server (`server.js`): + +```js +const express = require('express'); +const bodyParser = require('body-parser'); +const SSETopic = require('sse-pubsub'); + +const app = express(); +const port = process.env.PORT || 4000; + +// topics container +const sseTopics = {}; + +app.use(bodyParser.json()); + +// open for all cors +app.all('*', (req, res, next) => { + res.header('Access-Control-Allow-Origin', '*'); + res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type'); + next(); +}); + +// preflight request error fix +app.options('*', async (req, res) => { + res.header('Access-Control-Allow-Origin', '*'); + res.header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type'); + res.send('OK'); +}); + +// serve the event streams +app.get('/stream/:topic', async (req, res, next) => { + const topic = req.params.topic; + + if (!(topic in sseTopics)) { + sseTopics[topic] = new SSETopic({ + pingInterval: 0, + maxStreamDuration: 15000, + }); + } + + // subscribing client to topic + sseTopics[topic].subscribe(req, res); +}); + +// accepts new messages into topic +app.post('/publish', async (req, res) => { + let body = req.body; + let status = 200; + + console.log('Incoming message:', req.body); + + if ( + body.hasOwnProperty('topic') && + body.hasOwnProperty('event') && + body.hasOwnProperty('message') + ) { + const topic = req.body.topic; + const event = req.body.event; + const message = req.body.message; + + if (topic in sseTopics) { + // sends message to all the subscribers + sseTopics[topic].publish(message, event); + } + } else { + status = 400; + } + + res.status(status).send({ + status, + }); +}); + +// returns JSON object of all opened topics +app.get('/status', async (req, res) => { + res.send(sseTopics); +}); + +// health-check endpoint +app.get('/', async (req, res) => { + res.send('OK'); +}); + +// return a 404 
if no routes match
+app.use((req, res, next) => {
+  res.set('Cache-Control', 'private, no-store');
+  res.status(404).end('Not found');
+});
+
+// starts the server
+app.listen(port, () => {
+  console.log(`PubSub server running on http://localhost:${port}`);
+});
+```
+
+### Our custom message format
+
+Each message posted to the server must be in a specific format that our server
+accepts. Having a structure like this allows us to have multiple separate types
+of events on each topic.
+
+With this we can separate streams and only receive events that belong to the
+topic.
+
+One example would be that we have an index page and we want to receive messages
+about new upvotes or new subscribers, but we don't want to follow events for
+other pages. This reduces clutter and overall network traffic. And the
+structure is much nicer and more maintainable.
+
+```json
+{
+  "topic": "sample-topic",
+  "event": "sample-event",
+  "message": { "name": "John" }
+}
+```
+
+## Publisher and subscriber clients
+
+### Publisher and subscriber in action
+
+
+
+You can download [the code](../simple-pubsub-server/sse-pubsub-server.zip) and
+follow along.
+
+### Publisher
+
+As discussed above, the publisher is the one that sends messages to the
+broker/server. The message inside the payload can be whatever you want (string,
+object, array). I would, however, personally avoid sending large chunks of data
+like blobs and such.
+
+```html
+
+
+
+
+
+
+ Publisher
+
+
+
+
+

Publisher

+ +
+

+ + +

+

+ + +

+

+ + +

+

+ + +

+

+ +

+
+
+
+
+
+
+
+```
+
+### Subscriber
+
+The subscriber is responsible for receiving new messages that come from the
+server via the publisher. The code below is very rudimentary but works and
+follows the implementation guidelines for EventSource.
+
+You can use either the Developer Tools Console to see incoming messages or you
+can refer to the Debugging with Google Chrome section above to see all
+EventStream messages.
+
+> Don't be alarmed if the subscriber gets disconnected from the server every so
+> often. The code we have here resets the connection every 15s, but it
+> automatically gets reconnected and fetches all messages up to the last
+> received message id. This setting can be adjusted in the `server.js` file;
+> search for the `maxStreamDuration` variable.
+
+```html
+
+
+
+
+
+
+ Subscriber
+
+
+
+
+

Subscriber

+ +
+

+ + +

+

+ + +

+

+ + +

+

+ +

+
+
+
+
+
+
+
+```
+
+## Reading further
+
+- [Using server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
+- [Using SSE Instead Of WebSockets For Unidirectional Data Flow Over HTTP/2](https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/)
+- [What is Server-Sent Events?](https://apifriends.com/api-streaming/server-sent-events/)
+- [An HTTP/2 extension for bidirectional messaging communication](https://tools.ietf.org/id/draft-xie-bidirectional-messaging-01.html)
+- [Introduction to HTTP/2](https://developers.google.com/web/fundamentals/performance/http2)
+- [The WebSocket API (WebSockets)](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API)
+
diff --git a/_posts/posts/2020-03-27-create-placeholder-images-with-sharp.md b/_posts/posts/2020-03-27-create-placeholder-images-with-sharp.md
new file mode 100644
index 0000000..c129396
--- /dev/null
+++ b/_posts/posts/2020-03-27-create-placeholder-images-with-sharp.md
@@ -0,0 +1,103 @@
+---
+title: Create placeholder images with sharp Node.js image processing library
+permalink: /create-placeholder-images-with-sharp.html
+date: 2020-03-27T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+I have been searching for a solution to pre-generate some placeholder images
+for an image server I needed to develop that resizes images on S3. I thought
+this would be a 15min job and quickly found out how very mistaken I was.
+
+Even though Node.js is not really the best way to do this kind of thing (surely
+something written in C or Rust or even Golang would be the correct way to do
+this, but we didn't need the speed in our case) I found an excellent library
+[sharp - High performance Node.js image
+processing](https://github.com/lovell/sharp).
+
+Getting things running was a breeze.
+
+## Fetch image from S3 and save resized
+
+```js
+const sharp = require('sharp');
+const aws = require('aws-sdk');
+
+// target dimensions of the resized image
+const x = 100;
+const y = 100;
+const s3 = new aws.S3({});
+
+aws.config.update({
+  secretAccessKey: 'secretAccessKey',
+  accessKeyId: 'accessKeyId',
+  region: 'region'
+});
+
+// await is only valid inside an async function
+(async () => {
+  const originalImage = await s3.getObject({
+    Bucket: 'some-bucket-name',
+    Key: 'image.jpg',
+  }).promise();
+
+  const resizedImage = await sharp(originalImage.Body)
+    .resize(x, y)
+    .jpeg({ progressive: true })
+    .toBuffer();
+
+  await s3.putObject({
+    Bucket: 'some-bucket-name',
+    Key: `optimized/${x}x${y}/image.jpg`,
+    Body: resizedImage,
+    ContentType: 'image/jpeg',
+    ACL: 'public-read'
+  }).promise();
+})();
+```
+
+All this code was wrapped inside a web service with some additional security
+checks and defensive coding to detect if a key is missing on S3.
+
+And at that point I needed to return placeholder images as a response in case a
+key is missing or x,y are not allowed by the server, etc. I could have created
+PNGs in Gimp and just served them, but I wanted to respect the aspect ratio and
+I didn't want to return some mangled images.
+
+> The main problem was that finding a clean solution I could copy, paste and
+> change a bit was a task in itself. The API is changing constantly and there
+> weren't clear examples, or I was unable to find them.
+
+## Generating placeholder images using SVG
+
+What I ended up doing was using SVG to generate the text, creating the image
+with sharp, and using composition to combine both layers. The response returned
+by this function is a buffer you can use to either upload to S3 or save to a
+local file.
+
+```js
+const generatePlaceholderImageWithText = async (width, height, message) => {
+  // 20px padding on all sides; the composite gravity centers the overlay
+  const overlay = `<svg width="${width - 40}" height="${height - 40}">
+    <text x="50%" y="50%" text-anchor="middle" font-family="sans-serif"
+      font-size="16" fill="#888">${message}</text>
+  </svg>`;
+
+  return await sharp({
+    create: {
+      width: width,
+      height: height,
+      channels: 4,
+      background: { r: 230, g: 230, b: 230, alpha: 1 }
+    }
+  })
+  .composite([{
+    input: Buffer.from(overlay),
+    gravity: 'center',
+  }])
+  .jpeg()
+  .toBuffer();
+}
+```
+
+That is about it.
Nothing more to it. You can change the color of the image by
+changing the `background`, and if you want to change the text styling you can
+adapt the SVG to your needs.
+
+> Also be careful about the length of the text. This function positions the
+> text at the center and adds `20px` of padding on all sides. If the text is
+> longer than the image it will get cut off.
diff --git a/_posts/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md b/_posts/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md
new file mode 100644
index 0000000..1aa3536
--- /dev/null
+++ b/_posts/posts/2020-03-29-the-strange-case-of-elasticsearch-allocation-failure.md
@@ -0,0 +1,109 @@
+---
+title: The strange case of Elasticsearch allocation failure
+permalink: /the-strange-case-of-elasticsearch-allocation-failure.html
+date: 2020-03-29T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+I've been using Elasticsearch in production for 5 years now and never had a
+single problem with it. Hell, I never even knew there could be a problem. It
+just worked. All this time. The first node that I deployed is still being used
+in production; never updated, upgraded, or touched in any way.
+
+All this bliss came to an abrupt end this Friday when I got a notification that
+the Elasticsearch cluster went warm. Well, warm is not that bad, right? Wrong!
+Quickly after that I got another email which sent chills down my spine. The
+cluster was now red. RED! Now shit really hit the fan!
+
+I tried googling what the problem could be and, after executing the allocation
+function, noticed that some shards were unassigned and 5 attempts had already
+been made (which is BTW, to my luck, the maximum) and that meant I was
+basically fucked. They also implied that one should wait for the cluster to
+re-balance itself. So, I waited. One hour, two hours, several hours. Nothing,
+still RED.
+
+The strangest thing about it all was that queries were still being fulfilled.
+Data was coming out.
On the outside it looked like nothing was wrong, but
+anybody who looked at the cluster would know immediately that something was
+very, very wrong and that we were living on borrowed time.
+
+> **Please, DO NOT do what I did.** Seriously! Please ask someone on the
official forums or, if you know an expert, please consult them. There could be
a million reasons, and these solutions fit my problem. Maybe in your case it
would be disastrous. I had all the data backed up, and even if I failed
spectacularly I would be able to restore the data. It would be a huge pain and
I would lose a couple of days, but I had a plan B.
+
+Executing the allocation query told me what the problem was, but offered no
+clear solution yet.
+
+```yaml
+GET /_cat/allocation?format=json
+```
+
+I got a message of `ALLOCATION_FAILED` with the additional info `failed to create
+shard, failure ioexception[failed to obtain in-memory shard lock]`. Well,
+splendid! I must also say that our cluster is more than capable enough to handle
+the traffic. Also, JVM memory pressure was never an issue. So what really
+happened then?
+
+I also tried re-routing the failed shards, with no success due to AWS
+restrictions on having a managed Elasticsearch cluster (they lock some of the
+functions).
+
+```yaml
+POST /_cluster/reroute?retry_failed=true
+```
+
+I got a message that significantly reduced my options.
+
+```json
+{
+  "Message": "Your request: '/_cluster/reroute' is not allowed."
+}
+```
+
+After that I went on a hunt again. I won't bother you with all the details,
+because hours/days went by until I was finally able to re-index the problematic
+index and hope for the best. Until that moment even re-indexing was giving me
+errors.
+
+```yaml
+POST _reindex
+{
+  "source": {
+    "index": "myindex"
+  },
+  "dest": {
+    "index": "myindex-new"
+  }
+}
+```
+
+I needed to do this multiple times to get all the documents re-indexed. Then I
+dropped the original one with the following command.
+
+```yaml
+DELETE /myindex
+```
+
+And then re-indexed the new one back into the original one (well, by name
+only).
+
+```yaml
+POST _reindex
+{
+  "source": {
+    "index": "myindex-new"
+  },
+  "dest": {
+    "index": "myindex"
+  }
+}
+```
+
+On the surface it looks like all is working, but I have a long road in front of
+me to get all the things working again. The cluster now shows that it is in
+Green mode, but I am also getting a notification that the cluster has a
+processing status, which could mean a million things.
+
+Godspeed!
+
diff --git a/_posts/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md b/_posts/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md
new file mode 100644
index 0000000..0299d9d
--- /dev/null
+++ b/_posts/posts/2020-03-30-my-love-and-hate-relationship-with-nodejs.md
@@ -0,0 +1,112 @@
+---
+title: My love and hate relationship with Node.js
+permalink: /my-love-and-hate-relationship-with-nodejs.html
+date: 2020-03-30T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+The previous project I was working on was coded in
+[Golang](https://golang.org/). It was also my first project using it. And damn,
+that was an awesome experience. The whole thing is just superb. From how errors
+are handled, to the C-like way you handle compiling, to the way the language is
+structured, making it incredibly versatile and easy to learn.
+
+It may cause some pain for somebody that is not used to using interfaces to map
+JSON and doing the recompilation all the time. But we have tools like
+[entr](http://eradman.com/entrproject/) and
+[make](https://www.gnu.org/software/make/) to fix that.
+
+But we are not here to talk about my undying love for **Golang**. Only in some
+way we probably should. It is an excellent example of how a modern language
+should be designed. And because I have used it extensively in the last couple
+of years, this probably taints my views of other languages. And is doing me a
+great disservice. Nevertheless, here we are.
+
+About two years ago I started flirting with [Node.js](https://nodejs.org/en/)
+for a project I started working on. What I wanted was to have things written in
+a language that is widely used and that we could get additional developers for.
+As much as **Golang** is amazing, it's really hard to get developers for it.
+Even now. And after playing around with it for a week I fell in love with the
+speed of iteration and the massive package ecosystem. Do you want SSO? You got
+it! Do you want some esoteric library for something? There is a strong chance
+somebody wrote it. It is so extensive that you find yourself evaluating
+packages based on **GitHub stars** and the number of contributors. You get
+swallowed by the vanity metrics, and that potentially will become the downfall
+of Node.js.
+
+Because of the sheer amount of choice I often got anxiety when choosing
+libraries. Will I choose the correct one? Is this library something that will
+be supported for the foreseeable future or not? I am used to using libraries
+that have been in development for 10-plus years (Python, C) and that gave me
+some sort of comfort. And it is probably unfair to Node.js and its community to
+expect the same dedication.
+
+Moving forward ... Work started and things were great. **Speed of iteration
+was insane**. A feature that would take me a day in Golang only took an hour or
+two. I became lazy! Using packages all over the place. Falling into the same
+trap as others. Packages on top of packages. And [npm](https://www.npmjs.com/)
+didn't help at all. The way that the package manager works is just horrendous.
+And not allowing node_modules to live outside the project is also the stupidest
+idea ever.
+
+So at that point I started feeling the technical debt that comes with Node.js
+and the whole ecosystem. What nobody tells you is that **structuring large
+Node.js apps** is more problematic than one would think. And going microservice
+for every single thing is also a bad idea.
The amount of networking you
+introduce with that approach always ends up being a pain in the ass. And I
+don't even want to go into system administration here. The overhead is insane.
+Package-lock.json made many days feel like living hell for me. And I would have
+eaten the cost of all this if it meant a better development experience. Well,
+it didn't.
+
+The **lack of TypeScript** support in the interpreter is still mind-boggling to
+me. Why they haven't added native support for this yet is beyond me! That would
+have solved so many problems. Lack of type safety became a problem somewhere in
+the middle of the project, where the codebase was sufficiently large to present
+problems. We started adding arguments to functions and there was **no way to
+explicitly define argument types**. And because at that point there were a lot
+of functions, it became impossible to know what each one accepts, and
+development became more and more trial-and-error based.
+
+I tried **implementing TypeScript**, but that would have meant a large refactor
+that we were not willing to do at that point. The benefits were not enough. I
+also tried [Flow - static type checker](https://flow.org/) but the
+implementation was also horrible. What TypeScript and Flow force you to do is
+have a src folder and then **transpile** your code into a dist folder and run
+it with node. WTH is that all about? Why can't this be done in memory or in
+some virtual file system? Why? I see no reason why this couldn't be done like
+this. But it is what it is. I abandoned all hope for static type checking.
+
+One of the problems that resulted from not having interfaces or types was the
+inability to model out our data from **Elasticsearch**. I could have done a
+**pedestrian implementation** of it, but there must be a better way of doing
+this without basically resorting to some hack. Or maybe I haven't found a
+solution, which is also a possibility. I have looked, though. No juice!
+
+**Error handling?** Is that a joke?
+
+Thank god for **await/async**. Without it, I would have probably just abandoned
+the whole thing and gone with something else like Python. That's all I am going
+to say about this :)
+
+I started asking myself whether Node.js is actually ready to be used in
+**large-scale applications**. And this was totally the wrong question. What I
+should have been asking myself was how to use Node.js in a large-scale
+application. And you don't get this in the **marketing material** for Express
+or Koa etc. They never tell you this. Making Node.js scale, in infrastructure
+or in a codebase, is really **more of an art than a science**. And just like
+with the whole JavaScript ecosystem:
+
+- impossible to master,
+- half of your time you work on your tooling,
+- just accept transpilers that convert one code into another (holy smokes),
+- error handling is a joke,
+- standards? What standards?
+
+But on the other hand, as I did, you will also learn to love it. Learn to use
+it quickly and do impossible things in crazy limited time.
+
+I hate to admit it. But I love Node.js. Dammit, I love it :)
+
+**2023 Update**: I hate Node.js!
diff --git a/_posts/posts/2020-05-05-remote-work.md b/_posts/posts/2020-05-05-remote-work.md
new file mode 100644
index 0000000..8eb75d2
--- /dev/null
+++ b/_posts/posts/2020-05-05-remote-work.md
@@ -0,0 +1,73 @@
+---
+title: Remote work and how it affects the daily lives of people
+permalink: /remote-work.html
+date: 2020-05-05T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+I have been working remotely for the past 5 years. I love it. I love the
+freedom and the make-your-own-schedule thing.
+
+## You work more, not less
+
+I've heard from people things like: "Oh, you are so lucky, working from home,
+having all the free time you want". It was obvious they had no clue what
+working remotely means. They had this romantic idea of remote work. You can
+watch TV whenever you like, you can go outside for a picnic if you want, and
+stuff like that.
+
+This may be true if you work a day or two a week from home. But if you go
+completely remote, all this changes completely. It takes some time to
+acclimate, but then you start feeling the consequences of going fully remote.
+And it's not all rainbows and unicorns. Rather the opposite.
+
+## Feeling lost
+
+At first, I remember, I felt lost. I was not used to this kind of environment.
+I felt disoriented, and the part of you that is used to procrastinating turns
+on. You start thinking of a workday as a whole day. And soon this idea of "I
+can do this later" starts creeping in. Well, I have the whole day ahead of me.
+I can do this a bit later.
+
+## Hyper-performance
+
+As a direct result, you become more focused on your work, since you don't have
+all the interruptions common in the workplace. And you can quickly get used to
+this hyper-performance. But this mode also requires a lot of peace and quiet.
+
+And here we come to the ugly parts of all this. **People rarely have the
+self-control** not to waste other people's time. It is paralyzing when people
+start calling you, sending you chat messages, etc. The thing is, when I achieve
+this hyper-performance mode I am completely embroiled in the problem I am
+solving, and these kinds of interruptions mess with your head. I need at least
+an hour to get back in the zone. Sometimes not achieving the same focus the
+whole day.
+
+I know that life is not how you want it to be and takes its own route, but from
+what I've learned these kinds of interruptions can easily be avoided in 90% of
+cases just by closing any chat programs and putting your phone in a drawer.
+
+## Suggestions for all the new remote workers
+
+- Stop wasting other people's time. You don't bother people at their desks in
+  the office either.
+- Do not replace daily chats in the hallways with instant messaging software.
+  It will only interrupt people. Nothing good will come of it.
+- Set your working hours, try not to allow them to bleed outside these
+  boundaries, and maintain your routine.
+- Be prepared that hours will be longer regardless of your good intentions and
+  your well-thought-out routine.
+- Try to be hyper-focused and do only one thing at a time. Multitasking is the
+  enemy of progress.
+- Avoid long meetings and, if possible, eliminate them. Rather, take time to
+  write things out and allow others to respond in their own time. Meetings are
+  usually a large waste of time and most of the people attending them are there
+  just because the manager said so.
+- Software will not solve your problems. Neither will throwing money at them.
+- If you are in a managerial position, don't supervise every single minute of
+  your workers' day. They are probably giving you more hours anyway. Track
+  progress weekly, not daily. You hired them; give them the benefit of the
+  doubt that they will deliver what you agreed upon.
diff --git a/_posts/posts/2020-08-15-systemd-disable-wake-onmouse.md b/_posts/posts/2020-08-15-systemd-disable-wake-onmouse.md
new file mode 100644
index 0000000..8122322
--- /dev/null
+++ b/_posts/posts/2020-08-15-systemd-disable-wake-onmouse.md
@@ -0,0 +1,74 @@
+---
+title: Disable mouse wake from suspend with systemd service
+permalink: /disable-mouse-wake-from-suspend-with-systemd-service.html
+date: 2020-08-15T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+I recently bought a [ThinkPad
+X220](https://www.laptopmag.com/reviews/laptops/lenovo-thinkpad-x220) just as a
+joke on eBay, to test Linux distributions and play around with things without
+destroying my main machine. Little did I know that I would fall in love with
+it. Man, they really made awesome machines back then.
+
+After swapping the disk that came with it for an SSD and installing Ubuntu to
+test if everything works, I noticed that even a single touch of my external
+mouse would wake the system from sleep, even though the lid was shut.
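The first step in a situation like this is figuring out which USB device is doing the waking. A small script along these lines (my own sketch, not part of the original setup; the sysfs root is a parameter so it can be dry-run anywhere) lists every USB device that exposes a `power/wakeup` toggle and its current state:

```python
import glob
import os


def list_wakeup_toggles(sysfs_root="/sys/bus/usb/devices"):
    """Return {device: state} for every USB device exposing power/wakeup."""
    states = {}
    for path in glob.glob(os.path.join(sysfs_root, "*", "power", "wakeup")):
        device = path.split(os.sep)[-3]  # e.g. '2-1.1'
        with open(path) as f:
            states[device] = f.read().strip()  # 'enabled' or 'disabled'
    return states


if __name__ == "__main__":
    for device, state in sorted(list_wakeup_toggles().items()):
        print(f"{device}: {state}")
```

Any device reported as `enabled` is a candidate for the treatment described below.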
+
+I wouldn't even have noticed it if the laptop didn't have an [LED
+sleep indicator](https://support.lenovo.com/lk/en/solutions/~/media/Images/ContentImages/p/pd025386_x1_status_03.ashx?w=426&h=262).
+I already had a bad experience with Linux and its power management. I had a
+[Dell Inspiron 7537](https://www.pcmag.com/reviews/dell-inspiron-15-7537)
+laptop with a touchscreen, and while traveling it decided to wake up and
+started cooking in my backpack, to the point that the digitizer responsible
+for touch actually came unglued and the whole screen got wrecked. So, I am a
+bit touchy about this.
+
+I went solution hunting and, to my surprise, there is no easy way to prevent
+specific devices from waking the machine. Why this is not under the power
+management tab in settings is really strange.
+
+After googling for a solution I found [this nice article describing the
+solution](https://codetrips.com/2020/03/18/ubuntu-disable-mouse-wake-from-suspend/)
+that worked for me. The only problem with this solution was that he added it
+to `.bashrc`, and this triggers `sudo`, which asks for a password each time a
+new terminal is opened; that gets annoying quickly since I open a lot of
+terminals all the time.
+
+I followed his instructions and arrived at the solution `sudo sh -c "echo
+'disabled' > /sys/bus/usb/devices/2-1.1/power/wakeup"`.
+
+I created a systemd service file with `sudo nano
+/etc/systemd/system/disable-mouse-wakeup.service`, removed `sudo`, replaced
+`sh` with `/usr/bin/sh` and pasted all that into `ExecStart`.
+
+```ini
+[Unit]
+Description=Disables wakeup on mouse event
+After=network.target
+
+[Service]
+# The command exits immediately, so run it once per boot; Type=simple with
+# Restart=always would restart it in an endless loop.
+Type=oneshot
+RemainAfterExit=yes
+User=root
+ExecStart=/usr/bin/sh -c "echo 'disabled' > /sys/bus/usb/devices/2-1.1/power/wakeup"
+
+[Install]
+WantedBy=multi-user.target
+```
+
+After that I enabled, started and checked the status of the service.
+
+```sh
+sudo systemctl enable disable-mouse-wakeup.service
+sudo systemctl start disable-mouse-wakeup.service
+sudo systemctl status disable-mouse-wakeup.service
+```
+
+This will permanently prevent that device from waking up your computer. If you
+have many devices you would like to suppress from waking up your machine, I
+would create a shell script and call that instead of doing it directly in the
+service file.
diff --git a/_posts/posts/2020-09-06-esp-and-micropython.md b/_posts/posts/2020-09-06-esp-and-micropython.md
new file mode 100644
index 0000000..bfd05d9
--- /dev/null
+++ b/_posts/posts/2020-09-06-esp-and-micropython.md
@@ -0,0 +1,226 @@
+---
+title: Getting started with MicroPython and ESP8266
+permalink: /esp8266-and-micropython-guide.html
+date: 2020-09-06T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+## Introduction
+
+A while ago I bought some
+[ESP8266](https://www.espressif.com/en/products/socs/esp8266) and
+[ESP32](https://www.espressif.com/en/products/socs/esp32) dev boards to play
+around with, and I finally found a project to try them out on.
+
+For my project I used [ESP32](https://www.espressif.com/en/products/socs/esp32),
+but I could easily have chosen
+[ESP8266](https://www.espressif.com/en/products/socs/esp8266). This guide
+covers which tools I use and how I prepared my workspace to code for
+[ESP8266](https://www.espressif.com/en/products/socs/esp8266).
+
+![ESP8266 and ESP32 boards](/assets/posts/esp8366-micropython/boards.jpg){:loading="lazy"}
+
+This guide covers:
+
+- flashing the SoC
+- installing proper tooling
+- deploying a simple script
+
+> Make sure that you are using **a good USB cable**. I had some problems with
+mine and once I replaced it everything started to work.
+
+## Flashing the SoC
+
+Plug your ESP8266 into a USB port and check if the device was recognized by
+executing `dmesg | grep ch341-uart`.
+
+Then check if the device is available under `/dev/` by running `ls
+/dev/ttyUSB*`.
+
+> **Linux users**: if the device is not available, be sure you are in the
+> `dialout` group. You can check this by executing `groups $USER`. You can add
+> a user to the `dialout` group with `sudo adduser $USER dialout`.
+
+After these conditions are met, navigate to
+[https://micropython.org/download/esp8266/](https://micropython.org/download/esp8266/)
+and download `esp8266-20200902-v1.13.bin`.
+
+```sh
+mkdir esp8266-test
+cd esp8266-test
+
+wget https://micropython.org/resources/firmware/esp8266-20200902-v1.13.bin
+```
+
+After obtaining the firmware we will need some tooling to flash it to the
+board.
+
+```sh
+sudo pip3 install esptool
+```
+
+You can read more about `esptool` at
+[https://github.com/espressif/esptool/](https://github.com/espressif/esptool/).
+
+Before flashing the firmware we need to erase the flash on the device.
+Substitute `USB0` with the device listed in the output of `ls /dev/ttyUSB*`.
+
+```sh
+esptool.py --port /dev/ttyUSB0 erase_flash
+```
+
+If the flash was successfully erased, it is now time to flash the new firmware.
+
+```sh
+esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash --flash_size=detect 0 esp8266-20200902-v1.13.bin
+```
+
+If everything went ok you can try accessing the MicroPython REPL with `screen
+/dev/ttyUSB0 115200` or `picocom /dev/ttyUSB0 -b115200`.
+
+> Sometimes you will need to press `ENTER` in `screen` or `picocom` to access
+> the REPL.
+
+When you are in the REPL you can test if all is working properly with the
+following steps.
+
+```py
+> import machine
+> machine.freq()
+```
+
+This should output a number representing the frequency of the CPU (mine was
+`80000000`).
+
+When you are in `screen` or `picocom` these shortcuts can help you a bit.
+
+| Key      | Command              |
+| -------- | -------------------- |
+| CTRL+d   | performs soft reboot |
+| CTRL+a x | exits picocom        |
+| CTRL+a \ | exits screen         |
+
+
+## Install better tooling
+
+Now, to make our lives a little bit easier, there are a couple of additional
+tools that will make this whole experience a little more bearable.
+
+There are two cool ways of uploading local files to the SoC flash.
+
+- ampy → [https://github.com/scientifichackers/ampy](https://github.com/scientifichackers/ampy)
+- rshell → [https://github.com/dhylands/rshell](https://github.com/dhylands/rshell)
+
+### ampy
+
+```bash
+# installing ampy
+sudo pip3 install adafruit-ampy
+```
+
+Listed below are some common commands I used.
+
+```bash
+# uploads file to flash
+ampy --delay 2 --port /dev/ttyUSB0 put boot.py
+
+# lists files on flash
+ampy --delay 2 --port /dev/ttyUSB0 ls
+
+# outputs contents of file on flash
+ampy --delay 2 --port /dev/ttyUSB0 cat boot.py
+```
+
+> I added a `delay` of 2 seconds because I had problems with executing
+> commands.
+
+### rshell
+
+Even though `ampy` is a cool tool, I opted for `rshell` in the end since it's
+much more polished and feature-rich.
+
+```bash
+# installing rshell
+sudo pip3 install rshell
+```
+
+Now that `rshell` is installed we can connect to the board.
+
+```bash
+rshell --buffer-size=30 -p /dev/ttyUSB0 -a
+```
+
+This will open a shell inside bash and from here you can execute multiple
+commands. You can check what is supported with `help` once you are inside the
+shell.
+
+```bash
+m@turing ~/Junk/esp8266-test
+$ rshell --buffer-size=30 -p /dev/ttyUSB0 -a
+
+Using buffer-size of 30
+Connecting to /dev/ttyUSB0 (buffer-size 30)...
+Trying to connect to REPL  connected
+Testing if ubinascii.unhexlify exists ... Y
+Retrieving root directories ... /boot.py/
+Setting time ... Sep 06, 2020 23:54:28
+Evaluating board_name ... pyboard
+Retrieving time epoch ... Jan 01, 2000
+Welcome to rshell. Use Control-D (or the exit command) to exit rshell.
/home/m/Junk/esp8266-test> help
+
+Documented commands (type help <topic>):
+========================================
+args    cat  connect  date  edit  filesize  help  mkdir  rm     shell
+boards  cd   cp       echo  exit  filetype  ls    repl   rsync
+
+Use Control-D (or the exit command) to exit rshell.
+```
+
+> Inside the shell, `ls` will display the list of files on your machine. To get
+> the list of files on the flash, the folder `/pyboard` is remapped inside the
+> shell. To list files on the flash you must perform `ls /pyboard`.
+
+#### Moving files to flash
+
+To avoid copying files all the time I used the `rsync` function from inside
+`rshell`.
+
+```bash
+rsync . /pyboard
+```
+
+#### Executing scripts
+
+It is a pain to continuously reboot the device to trigger `/pyboard/boot.py`,
+and there is a better way of testing local scripts on the remote device.
+
+Let's assume we have a `src/freq.py` file that displays the CPU frequency of
+the remote device.
+
+```py
+# src/freq.py
+
+import machine
+print(machine.freq())
+```
+
+Now let's upload this and execute it.
+
+```bash
+# syncs files to the remote device
+rsync ./src /pyboard
+
+# goes into REPL
+repl
+
+# we import the file without the .py extension and this will run the script
+> import freq
+
+# CTRL+x will exit REPL
+```
+
+## Additional resources
+
+- https://randomnerdtutorials.com/getting-started-micropython-esp32-esp8266/
+- http://docs.micropython.org/en/latest/esp8266/quickref.html
diff --git a/_posts/posts/2020-09-08-bind-warning-on-login.md b/_posts/posts/2020-09-08-bind-warning-on-login.md
new file mode 100644
index 0000000..4b2c983
--- /dev/null
+++ b/_posts/posts/2020-09-08-bind-warning-on-login.md
@@ -0,0 +1,55 @@
+---
+title: Fix bind warning in .profile on login in Ubuntu
+permalink: /bind-warning-on-login-in-ubuntu.html
+date: 2020-09-08T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+Recently I moved back to [bash](https://www.gnu.org/software/bash/) as my
+default shell.
I was previously using [fish](https://fishshell.com/) and got
+used to the cool features it has. But, regardless of that, I wanted to move to
+a more standard shell because I was hopping back and forth with exporting
+variables and stuff like that, which got pretty annoying.
+
+So I embarked on a mission to make [bash](https://www.gnu.org/software/bash/)
+more like [fish](https://fishshell.com/) and in the process found that I really
+missed autosuggest with TAB when changing directories.
+
+I found a nice alternative that emulates [zsh](http://zsh.sourceforge.net/)-like
+autosuggestion and autocomplete, so I added the following to my `.bashrc` file.
+
+```bash
+bind "TAB:menu-complete"
+bind "set show-all-if-ambiguous on"
+bind "set completion-ignore-case on"
+bind "set menu-complete-display-prefix on"
+bind '"\e[Z":menu-complete-backward'
+```
+
+I hadn't noticed anything wrong with this and all was working fine until I
+restarted my machine, and then I got this error.
+
+![Profile bind error](/assets/posts/profile-bind-error/error.jpg){:loading="lazy"}
+
+When I pressed OK, I got into the [Gnome
+shell](https://wiki.gnome.org/Projects/GnomeShell) and all was working fine,
+but the error was still bugging me. I started looking for the reason why this
+was happening and found a solution to this error in [Remote SSH Commands - bash
+bind warning: line editing not enabled](https://superuser.com/a/892682).
+
+So I added a simple `if [ -t 1 ]` around the `bind` statements to avoid running
+commands that presume the session is interactive when it isn't.
+
+```bash
+if [ -t 1 ]; then
+  bind "TAB:menu-complete"
+  bind "set show-all-if-ambiguous on"
+  bind "set completion-ignore-case on"
+  bind "set menu-complete-display-prefix on"
+  bind '"\e[Z":menu-complete-backward'
+fi
+```
+
+After logging out and back in, the problem was gone.
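The same guard translates to other languages too. As a sketch (my analogy, not something from the original setup), a Python script can skip its terminal-only setup with the equivalent of bash's `[ -t 1 ]`:

```python
import os
import sys


def is_interactive(stream=sys.stdout):
    """Mirror bash's [ -t 1 ]: True only when the stream is a real terminal."""
    try:
        return os.isatty(stream.fileno())
    except (OSError, ValueError):
        # streams without a file descriptor (pipes wrapped in StringIO, etc.)
        return False


if __name__ == "__main__" and is_interactive():
    # terminal-only setup (readline bindings, colors, ...) would go here
    print("interactive session")
```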
diff --git a/_posts/posts/2020-09-09-digitalocean-sync.md b/_posts/posts/2020-09-09-digitalocean-sync.md
new file mode 100644
index 0000000..38696a9
--- /dev/null
+++ b/_posts/posts/2020-09-09-digitalocean-sync.md
@@ -0,0 +1,113 @@
---
title: Using Digitalocean Spaces to sync between computers
permalink: /digitalocean-spaces-to-sync-between-computers.html
date: 2020-09-09T12:00:00+02:00
layout: post
type: post
draft: false
---

I've been using [Dropbox](https://www.dropbox.com/) for probably **10+ years**
now, and I've become so used to it running in the background that I can't
imagine a world without it. But it's not without problems.

At first I had problems with `.venv` environments for Python, and the only fix
was to manually exclude each such folder from synchronization, which is not
really scalable. FYI, my whole project folder is synced to
[Dropbox](https://www.dropbox.com/). This of course synced a lot of files and
folders that are not needed, and some that even break things on other machines.
In the case of **Python**, I couldn't use a synced `.venv` on my second machine.
I needed to delete the `.venv` folder and pip-install everything again, which
synced the files back to the main machine. This was very frustrating.
**Node.js** handles this much more nicely, and I can just run the scripts
without deleting `node_modules` and reinstalling. However, `node_modules` is a
beast of its own. It creates so many files that the OS has a problem counting
them when you check the folder contents for size.

I wanted something similar to Dropbox. I could do without the instant syncing,
but it would need to be fast and give me the option to exclude folders like
`node_modules`, `.venv`, and `.git`.

I went on a hunt for an alternative to [Dropbox](https://www.dropbox.com/)
and found:

- [Tresorit](https://tresorit.com/)
- [Sync.com](https://sync.com)
- [Box](https://www.box.com/)

You know, the usual list of suspects.
I didn't include [Google
drive](https://drive.google.com) or [One drive](https://onedrive.live.com/),
since they are even more draconian than Dropbox.

> All this does not stem from me being paranoid, but recently these companies
> have become more and more aggressive, and they keep violating our privacy by
> sharing our data with 3rd party services. It is getting out of control.

So, my main problem was still there: no way of excluding a specific folder from
syncing. And before we go into "*But you have git, isn't that enough?*", I must
say that many of the files (PDFs, spreadsheets, etc.) I keep in a `git` repo
don't get pushed upstream, and I still want to have them synced across my
computers.

I initially wanted to use [rsync](https://linux.die.net/man/1/rsync), but I
would then need a remote VPS, or to transfer between my computers directly. I
wanted a solution where all my files would be accessible to me without my
machine.

> **WARNING: This solution will cost you money!** DigitalOcean Spaces are $5 per
> month, there are some bandwidth limitations, and if you go beyond them you get
> billed additionally.

Then I remembered that I could use something like
[S3](https://en.wikipedia.org/wiki/Amazon_S3), since it has versioning and is
fully managed. I didn't want to go down the AWS rabbit hole with this, so I
chose [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/).

Then I needed a command-line tool to sync between source and target. I found
this nice tool, [s3cmd](https://s3tools.org/s3cmd), and it is in the Ubuntu
repositories.

```bash
sudo apt install s3cmd
```

After installation, I created a new Spaces bucket on DigitalOcean. Remember the
region you choose, because you will need it when configuring `s3cmd`.

Then I visited [Digitalocean Applications &
API](https://cloud.digitalocean.com/account/api/tokens) and generated **Spaces
access keys**.
Save both the key and the secret somewhere safe, because once you leave the
page the secret will not be shown to you again and you will need to regenerate
it.

```bash
# enter your key and secret and the correct endpoint;
# my endpoint is ams3.digitaloceanspaces.com because
# I created my bucket in the Amsterdam region
s3cmd --configure
```

After that I played around with the options for `s3cmd` and arrived at the
following command.

```bash
# I executed this command from my projects folder
cd projects
s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://my-bucket-name/projects/
```

When syncing in the other direction, you need to swap the order of the `SOURCE`
and `TARGET`, i.e. `s3://my-bucket-name/projects/` and `./`.

> Make sure all the paths have a trailing slash, so that sync knows these are
> directories.

I am planning to implement some sort of `.ignore` file that will let me have
project-specific exclude options.

I am currently running this every hour as a cronjob, which is perfectly fine for
now, while I am testing how this whole thing works and how it all turns out.

I have also created a small Gnome extension which is still very unstable, but
when/if this whole experiment pays off, I will share it on GitHub.

diff --git a/_posts/posts/2021-01-24-replacing-dropbox-with-s3.md b/_posts/posts/2021-01-24-replacing-dropbox-with-s3.md
new file mode 100644
index 0000000..7599949
--- /dev/null
+++ b/_posts/posts/2021-01-24-replacing-dropbox-with-s3.md
@@ -0,0 +1,115 @@
---
title: Replacing Dropbox in favor of DigitalOcean spaces
permalink: /replacing-dropbox-in-favor-of-digitalocean-spaces.html
date: 2021-01-24T12:00:00+02:00
layout: post
type: post
draft: false
---

A few months ago I experimented with DigitalOcean Spaces as a backup solution
that could [replace Dropbox
eventually](/digitalocean-spaces-to-sync-between-computers.html).
That solution worked quite nicely, and I was amazed that smashing together a
couple of existing tools would work this well.

I have been running that solution in the background for a couple of months now
and kind of forgot about it. But recent developments around deplatforming, and
around holding people hostage to technology and big companies, sped up my goal
of becoming less dependent on
[Google](https://edition.cnn.com/2020/12/17/tech/google-antitrust-lawsuit/index.html),
[Dropbox](https://www.pcworld.com/article/2048680/dropbox-takes-a-peek-at-files.html)
etc. and taking back some control.

I am not a conspiracy theory nut, but to be honest, what these companies are
doing lately is out of control. It is a matter of principle at this point. I
have almost completely degoogled my life, all the way from ditching Gmail and
YouTube to most of the services surrounding Google. And I must tell you, I feel
so good. I haven't felt this way for a long time.

**Anyways. Let's get to the meat of things.**

Before you continue, you should read my post about [syncing to
Dropbox](/digitalocean-spaces-to-sync-between-computers.html).

> Also to note, I am using Linux on my machine with the Gnome desktop
> environment. This should work on macOS too. To use this on Windows I suggest
> using the [Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10)
> or [Cygwin](https://www.cygwin.com/).

## Folder structure

I liked the structure Dropbox used: one folder where everything is located and
synced. That's why I adopted it for my sync setup as well.

```txt
~/Vault
 ↳ backup
 ↳ bin
 ↳ documents
 ↳ projects
```

All of my code is located in the `~/Vault/projects` folder, and most of the
projects are Git repositories. I do not use this sync method for backup per se,
but in case I reinstall my machine I can easily recreate all the important
folder structure with one quick command. No external drives needed that can
fail, etc.
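That one quick command can be as simple as `mkdir -p` with brace expansion; a
sketch, assuming the folder names above:

```bash
# recreate the ~/Vault skeleton on a fresh install;
# -p creates missing parents and is a no-op for existing folders
mkdir -p ~/Vault/{backup,bin,documents,projects}
```

After this, one `s3cmd sync` from the bucket fills the folders back in.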
## Sync script

My sync script is located at `~/Vault/bin/vault-backup.sh`.

```bash
#!/bin/bash

# dconf load /com/gexperts/Tilix/ < tilix.dconf
# 0 2 * * * sh ~/Vault/bin/vault-backup.sh

cd ~/Vault/backup/dotfiles

MACHINE=$(whoami)@$(hostname)
mkdir -p $MACHINE
cd $MACHINE

cp ~/.config/VSCodium/User/settings.json settings.json
cp ~/.s3cfg s3cfg
cp ~/.bash_extended bash_extended
cp ~/.ssh ssh -rf

codium --list-extensions > vscode-extension.txt
dconf dump /com/gexperts/Tilix/ > tilix.dconf

cd ~/Vault
s3cmd sync --delete-removed --exclude 'node_modules/*' --exclude '.git/*' --exclude '.venv/*' ./ s3://bucket-name/backup/

echo `date +"%D %T"` >> ~/.vault.log

notify-send \
    -u normal \
    -i /usr/share/icons/Adwaita/96x96/status/security-medium-symbolic.symbolic.png \
    "Vault sync succeeded at `date +"%D %T"`"
```

This script also backs up some of the dotfiles I use and sends a notification
to the Gnome notification center. It is a straightforward solution; nothing
special going on.

> One obvious benefit of this is that I can omit syncing Node's `node_modules`,
> Python's `.venv`, and `.git` folders.

You can use this script in combination with [Cron](https://en.wikipedia.org/wiki/Cron).

```txt
0 2 * * * sh ~/Vault/bin/vault-backup.sh
```

When you start syncing your local stuff with the remote server, you can review
your items on DigitalOcean.

![Dropbox Spaces](/assets/posts/dropbox-sync/dropbox-spaces.png){:loading="lazy"}

I have been using this script for quite some time now, and it's working
flawlessly. I have also uninstalled Dropbox and stopped using it completely.

All I need to do now is write a Bash script that does the reverse and downloads
from the remote server to a local folder. That could be another post.
diff --git a/_posts/posts/2021-01-25-goaccess.md b/_posts/posts/2021-01-25-goaccess.md
new file mode 100644
index 0000000..779bce5
--- /dev/null
+++ b/_posts/posts/2021-01-25-goaccess.md
@@ -0,0 +1,205 @@
---
title: Using GoAccess with Nginx to replace Google Analytics
permalink: /using-goaccess-with-nginx-to-replace-google-analytics.html
date: 2021-01-25T12:00:00+02:00
layout: post
type: post
draft: false
---

## Introduction

I know! You cannot simply replace Google Analytics with parsing access logs and
displaying a couple of charts. But to be honest, I never actually used Google
Analytics to its fullest extent; I was usually only interested in seeing page
hits and which pages were visited most often.

I recently moved my blog from Firebase to a VPS and also decided to remove the
Google Analytics tracking code from the site, since it's quite malicious: it
tracks users across other pages as well and builds a profile of each user, and
I've had it. But I still need some insight into what is happening on the server
and which content is being read the most.

I have looked at many existing solutions, like:

- [Umami](https://umami.is/)
- [Freshlytics](https://github.com/sheshbabu/freshlytics)
- [Matomo](https://matomo.org/)

But the more I looked at them, the more I noticed that I would be replacing one
evil with another. Don't get me wrong, some of these solutions are absolutely
fantastic, but they would require installing databases and something like PHP or
Node, and I was not ready to put those things on my fresh server. Having Docker
installed was also out of the question.

## Opting for log parsing

So, I defaulted to parsing the already existing logs and generating HTML reports
from that data.

I found this amazing piece of software, [GoAccess](https://goaccess.io/), which
provides all the functionality I need, and it's a single binary (written in C,
despite the name).

GoAccess can be used in two different modes.
![GoAccess Terminal](/assets/posts/goaccess/goaccess-dash-term.png){:loading="lazy"}

*Running in a terminal*

![GoAccess HTML](/assets/posts/goaccess/goaccess-dash-html.png){:loading="lazy"}

*Running in a browser*

I, however, need this to run in a browser, so the second option is the way to
go. The idea is to periodically run a cronjob and export the report into a
folder that then gets served by Nginx behind Basic authentication.

## Getting Nginx ready

I chose Ubuntu on [DigitalOcean](https://www.digitalocean.com/). First I
installed [Nginx](https://nginx.org/en/), the
[Letsencrypt](https://letsencrypt.org/getting-started/) certbot, and all the
necessary dependencies.

```sh
# log in as root user
sudo su -

# first let's update the system
apt update && apt upgrade -y

# let's install
apt install nginx certbot python3-certbot-nginx apache2-utils
```

After all this is installed, we can create a new configuration for the
statistics site. Stats will be available at `stats.domain.com`.

```sh
# creates the directory where the html will be hosted
mkdir -p /var/www/html/stats.domain.com

cp /etc/nginx/sites-available/default /etc/nginx/sites-available/stats.domain.com
nano /etc/nginx/sites-available/stats.domain.com
```

```nginx
server {
    root /var/www/html/stats.domain.com;
    server_name stats.domain.com;

    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}
```

Now we check if the configuration is OK with `nginx -t`. If all is fine, we can
restart Nginx with `service nginx restart`.

After that, you should add an A record for this domain that points to the IP of
the droplet.

Before enabling SSL, you should test whether the DNS records have propagated
with `curl stats.domain.com`.

Now it's time to provision a TLS certificate. To do this, run `certbot --nginx`.
Follow the wizard, and when you are asked about redirection, choose 2 (always
redirect to HTTPS).
When this is done you can visit https://stats.domain.com, and you should get a
404 Not Found error, which is correct.

## Getting GoAccess ready

If you are using a Debian-like system, GoAccess should be available in the
repository. Otherwise, refer to the official website.

```sh
apt install goaccess
```

To enable geolocation, we also need one additional thing.

```sh
cd /var/www/html/stats.domain.com
wget https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb
```

Now we create a shell script that will be executed every 10 minutes.

```sh
nano /var/www/html/stats.domain.com/generate-stats.sh
```

The contents of this file should look like this.

```sh
#!/bin/sh

zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log

goaccess \
    --log-file=/var/log/nginx/access-all.log \
    --log-format=COMBINED \
    --exclude-ip=0.0.0.0 \
    --geoip-database=/var/www/html/stats.domain.com/GeoLite2-City.mmdb \
    --ignore-crawlers \
    --real-os \
    --output=/var/www/html/stats.domain.com/index.html

rm /var/log/nginx/access-all.log
```

Because Nginx rotates the access logs into multiple files after a while, we use
[`zcat`](https://linux.die.net/man/1/zcat) to extract the gzipped contents and
create one file with all the access logs. After this file is used, we delete it.

If you want to exclude your home IP's requests, look at the `--exclude-ip`
option in the script and replace `0.0.0.0` with your own home IP address. You
can find your home IP by executing `curl ifconfig.me` from your local machine,
NOT from the droplet.

Test the script by executing `sh
/var/www/html/stats.domain.com/generate-stats.sh` and then checking
`https://stats.domain.com`. If you see stats instead of a 404, you are set.

It's time to add this script to cron with `crontab -e`.
```txt
*/10 * * * * sh /var/www/html/stats.domain.com/generate-stats.sh
```

## Securing with Basic authentication

You probably don't want the stats to be publicly available, so we should create
a user and a password for Basic authentication.

First we create a password for a user `stats` with `htpasswd -c /etc/nginx/.htpasswd stats`.

Now we update the config file with `nano
/etc/nginx/sites-available/stats.domain.com`. You will probably notice that the
file looks a bit different from before; this is because `certbot` added
additional rules for SSL.

The `location` portion of the config file should now look like the following:
you should add the `auth_basic` and `auth_basic_user_file` lines to the file.

```nginx
location / {
    try_files $uri $uri/ =404;
    auth_basic "Private Property";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```

Test if the config is still OK with `nginx -t`, and if it is, restart Nginx
with `service nginx restart`.

If you now visit `https://stats.domain.com` you should be prompted for a
username and password. If not, try reopening your browser.

That is all. You now have analytics for your server, refreshed every 10
minutes.

diff --git a/_posts/posts/2021-06-26-simple-world-clock.md b/_posts/posts/2021-06-26-simple-world-clock.md
new file mode 100644
index 0000000..d1b53b4
--- /dev/null
+++ b/_posts/posts/2021-06-26-simple-world-clock.md
@@ -0,0 +1,108 @@
---
title: Simple world clock with eInk display and Raspberry Pi Zero
permalink: /simple-world-clock-with-eiink-display-and-raspberry-pi-zero.html
date: 2021-06-26T12:00:00+02:00
layout: post
type: post
draft: false
---

Our team is spread across the world, from the USA all the way to Australia, so
having some sort of world clock makes sense.

Currently, I am using an extension for Gnome called [Timezone
extension](https://extensions.gnome.org/extension/2657/timezones-extension/),
and it serves the purpose quite well.
But I also have a bunch of electronics that I bought over time and am not
using, and it's time to stop hoarding this stuff and use it in a project.

A while ago I bought a small eInk display, the [Inky
pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811), and
I have a bunch of [Raspberry Pi
Zeros](https://www.raspberrypi.org/products/raspberry-pi-zero/) lying around
that I really need to use.

![Inky pHAT, Raspberry Pi Zero](/assets/posts/world-clock/hardware.jpg){:loading="lazy"}

Since the [Inky
pHAT](https://shop.pimoroni.com/products/inky-phat?variant=12549254217811) is
essentially a HAT, it can easily be added on top of the [Raspberry Pi
Zero](https://www.raspberrypi.org/products/raspberry-pi-zero/).

First, I installed the necessary software on the Raspberry Pi with `pip3 install
inky`.

Then I created a file `clock.py` in the home directory `/home/pi`.

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys
import os
from inky.auto import auto
from PIL import Image, ImageFont, ImageDraw
from font_fredoka_one import FredokaOne

clocks = [
    'America/New_York',
    'Europe/Ljubljana',
    'Australia/Brisbane',
]

board = auto()
board.set_border(board.WHITE)
board.rotation = 90

img = Image.new('P', (board.WIDTH, board.HEIGHT))
draw = ImageDraw.Draw(img)

big_font = ImageFont.truetype(FredokaOne, 18)
small_font = ImageFont.truetype(FredokaOne, 13)

x = board.WIDTH / 3
y = board.HEIGHT / 3

idx = 1
for clock in clocks:
    ctime = os.popen('TZ="{}" date +"%a,%H:%M"'.format(clock))
    ctime = ctime.read().strip().split(',')
    city = clock.split('/')[1].replace('_', ' ')

    draw.text((15, (idx*y)-y+10), city, fill=board.BLACK, font=small_font)
    draw.text((110, (idx*y)-y+7), str(ctime[0]), fill=board.BLACK, font=big_font)
    draw.text((155, (idx*y)-y+7), str(ctime[1]), fill=board.BLACK, font=big_font)

    idx += 1

board.set_image(img)
board.show()
```

And
because eInk displays are rather slow to refresh, and the clock requires
refreshing only once a minute, this can be done through a cronjob.

Before we add the job to cron, we need to make `clock.py` executable with
`chmod +x clock.py`.

Then we add a cronjob with `crontab -e`.

```txt
* * * * * /home/pi/clock.py
```

So, we end up with a result like this.

![World Clock](/assets/posts/world-clock/world-clock.jpg){:loading="lazy"}

As for an enclosure, one can be 3D printed; I haven't made one yet, but
something like this can be used.

You can download my [STL file for the enclosure
here](/assets/posts/world-clock/enclosure.stl), but make sure the dimensions
make sense; an opening for the USB port should also be added, or just use a
drill and some hot glue to make it stick in the enclosure.

diff --git a/_posts/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md b/_posts/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md
new file mode 100644
index 0000000..cbcca37
--- /dev/null
+++ b/_posts/posts/2021-07-30-from-internet-consumer-to-full-hominum-again.md
@@ -0,0 +1,104 @@
---
title: My journey from being an internet über consumer to being a full hominum again
permalink: /from-internet-consumer-to-full-hominum-again.html
date: 2021-07-30T12:00:00+02:00
layout: post
type: post
draft: false
---

It's been almost a year since I started purging all my online accounts and
going down this rabbit hole of becoming almost independent of the current
internet machine. Even though I initially thought that I would have problems
adapting, I was pleasantly surprised that the transition went so smoothly. Even
better, it brought many benefits to my life, such as increased focus and less
stress about trivial things.

It all started with me making small changes, like unsubscribing from emails
that I had subscribed to by accepting terms and conditions.
Or even some more malicious emails that I was getting because I was on a shared
mailing list. Those I hate most of all. How the hell do they keep sharing my
email and sending me unsolicited emails and get away with it? I have a
suspicion that these marketing people share an Excel file between them and keep
resubscribing me when they import their lists into Mailchimp or similar
software.

It's fascinating to see how much crap you get subscribed to when you are not
paying attention. It got so bad that my primary Gmail address is full of junk
and needs constant monitoring and cleaning up. And because I want to have Inbox
Zero, this presents an additional problem for me.

The stress that email caused me went unnoticed for a long time. I was noticing
that I was unable to go a single hour without hysterically refreshing my email.
And if somebody wrote me something, I needed to see it right then, even though
I didn't immediately reply to it. I can only describe this as FOMO (fear of
missing out). I have no other explanation. It was crippling, and I was
constantly context switching, which I will address further down this post in
more detail.

This was one of the reasons why I spun up my personal email server, and I am
using it now as my primary and personal email. I still have Gmail as my “junk”
email that I use for throwaway stuff. I log in to Gmail once a week and check
if there are any important emails, but apart from that, it's sitting dormant
and collecting dust.

The more I watched the world lose itself by allowing anti-freedom things to
happen to it, the more I started to realize that something has to change. I
don't have the power to change the world. And I also don't have a grandiose
enough opinion of myself to even think of trying. But what I can do is refuse
to subscribe to this consumer way of thinking. I will not be complicit in this.
My moral and ethical stances won't allow it. So, this brings us to the second
part of my journey.

I was using all these 3rd party services because I was either lazy or OK with
their drawbacks. I watched these services and companies become more and more
evil. It is evil if you sell your users' data in this manner. Nobody reads
privacy policies, everybody is OK with accepting them, and they prey on that
flaw in human nature. I really hate the hypocrisy they manage to muster. These
companies prey on our laziness, and we are at fault here. Nobody else. And I
truly understand the reasons why we would rather accept and move on than object
and make our lives a little more difficult. They have perfected this through
years of small changes that make us a little more dependent on them. You could
not convince a person to give away all their rights and data in one day. This
was gradual and slow. And it caught us all by surprise. When I really stopped
and thought about it, I felt repulsed. By really stopping and thinking about
it, I mean stopping and thinking about it thoroughly and in depth.

Each step I took depleted my character a bit more. Like I was trading myself
bit by bit without understanding what it all meant. What it meant to be a full
person, not divided by all this bought attention they want from me. They don't
just get your data, they also take your attention away from you. They scatter
your attention and go with the divide-and-conquer tactic from there. And a
person divided is a person not fully there. Not in the moment. Not fully alive.

I was unable to form long thoughts. Well, I thought I was. But now that I see
again what being a full person is, I can see that I was not at my 100% back
then.

A revolt was inevitable. There was no other way of continuing my story without
it. Without taking back my attention, my thoughts, my time, and my privacy,
regardless of how late it may be.
This has nothing to do with conspiracy theories. Even less with changing the
world. All I wanted was to get my life back in order and not waste energy that
could be spent in other, better places.

I started reading more. I can now focus fully on the things I work on.
Furthermore, I have a mental acuity that I never had before. My mind feels
sharp. I don't get angry so much. I can cherish the finer things in life
without the need to interpret them intellectually. Not only that, but I have a
feeling of belonging again. A sense of purpose has returned with a vengeance.
And I can now help people without depleting myself.

The last step so far was to finish closing all the remaining online accounts
that I still had. When I thought about what value they brought me, I wasn't
surprised that the answer was none. I wasn't logging in to them or using them.
I stopped being afraid of missing out. If somebody wants to get in contact with
me, they will find a way. I am one search away.

We are not beholden to anybody. Our lives are our own. So dare yourself to
delete Facebook and LinkedIn. To unsubscribe. Dare yourself to take your time
and attention back. Use that time and energy to go for a walk without thinking
about work. Read a book instead of reading comments on social media that you
will forget in an hour. Enrich your life instead of wasting it. It only
requires a small step. And you will feel the benefits immediately. Lose the
weight of the world that is crushing you without your consent.
diff --git a/_posts/posts/2021-08-01-linux-cheatsheet.md b/_posts/posts/2021-08-01-linux-cheatsheet.md
new file mode 100644
index 0000000..b416ffa
--- /dev/null
+++ b/_posts/posts/2021-08-01-linux-cheatsheet.md
@@ -0,0 +1,288 @@
---
title: List of essential Linux commands for server management
permalink: /linux-cheatsheet.html
date: 2021-08-01T12:00:00+02:00
layout: post
type: post
draft: false
---

**Generate SSH key**

```bash
ssh-keygen -t ed25519 -C "your_email@example.com"

# when no support for Ed25519 is present
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
```

Note: By default, SSH keys get stored in the `/home/<user>/.ssh/` folder.

**Login to host via SSH**

```bash
# connect to a host as your local username
ssh <host>

# connect to a host as a specific user
ssh <user>@<host>

# connect to a host using a specific port
ssh -p <port> <user>@<host>
```

**Execute command on a server through SSH**

```bash
# execute one command
ssh root@100.100.100.100 "ls /root"

# execute many commands
ssh root@100.100.100.100 "cd /root;touch file.txt"
```

**Displays currently logged in users in the system**

```bash
w
```

**Displays Linux system information**

```bash
uname
```

**Displays kernel release information**

```bash
uname -r
```

**Shows the system hostname**

```bash
hostname
```

**Shows system reboot history**

```bash
last reboot
```

**Displays information about the user**

```bash
sudo apt install finger
finger <username>
```

**Displays IP addresses and all the network interfaces**

```bash
ip addr show
```

**Downloads a file from an online source**

```bash
wget https://example.com/example.tgz
```

Note: If the URL contains `?` or `&`, enclose the URL in double quotes.
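To see why the quotes matter: unquoted, the shell treats `&` as "run in
background" and may try to glob `?`, so the URL never reaches `wget` in one
piece. Quoted, it survives as a single argument:

```bash
# the quoted URL arrives as one intact argument
url="https://example.com/example.tgz?a=1&b=2"
printf '%s\n' "$url"
# prints https://example.com/example.tgz?a=1&b=2
```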
**Compress a file with gzip**

```bash
# will not keep the original file
gzip file.txt

# will keep the original file
gzip --keep file.txt
```

**Interactive disk usage analyzer**

```bash
sudo apt install ncdu

ncdu
ncdu <path>
```

**Install Node.js using the Node Version Manager**

```bash
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
source ~/.bashrc

nvm install v13
```

**Too long; didn't read**

```bash
npm install -g tldr

tldr tar
```

**Combine all Nginx access logs into one big log file**

```bash
zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log
```

**Set up Redis server**

```bash
sudo apt install redis-server redis-tools

# check if server is running
sudo service redis status

# set and get a key value
redis-cli set mykey myvalue
redis-cli get mykey

# interactive shell
redis-cli
```

**Generate statistics of your webserver**

```bash
sudo apt install goaccess

# check if installed
goaccess -v

# combine logs
zcat -f /var/log/nginx/access.log* > /var/log/nginx/access-all.log

# export to single html
goaccess \
    --log-file=/var/log/nginx/access-all.log \
    --log-format=COMBINED \
    --exclude-ip=0.0.0.0 \
    --ignore-crawlers \
    --real-os \
    --output=/var/www/html/stats.html

# cleanup afterwards
rm /var/log/nginx/access-all.log
```

**Search for a given pattern in files**

```bash
grep -r 'pattern' files
```

**Find process ID for a specific program**

```bash
pgrep nginx
```

**Print name of current/working directory**

```bash
pwd
```

**Creates a new blank file**

```bash
touch newfile.txt
```

**Displays first lines in a file**

```bash
# -n sets the number of lines (10 by default)
head -n 20 somefile.txt
```

**Displays last lines in a file**

```bash
# -n sets the number of lines (10 by default)
tail -n 20 somefile.txt

# -f follows the changes in the file (doesn't close)
tail -f somefile.txt
```

**Count lines in a file**

```bash
wc -l somefile.txt
```

**Find all instances of the file**

```bash
sudo apt install mlocate

locate somefile.txt
```

**Find file names that begin with 'index' in the /home folder**

```bash
find /home/ -name "index*"
```

**Find files larger than 100MB in the home folder**

```bash
find /home -size +100M
```

**Displays block devices related information**

```bash
lsblk
```

**Displays free space on mounted systems**

```bash
df -h
```

**Displays free and used memory in the system**

```bash
free -h
```

**Displays all active listening ports**

```bash
sudo apt install net-tools

netstat -pnltu
```

**Kill a process violently**

```bash
kill -9 <pid>
```

**List files opened by user**

```bash
lsof -u <username>
```

**Execute "df -h", showing periodic updates**

```bash
# -n 1 means every second
watch -n 1 df -h
```

diff --git a/_posts/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md b/_posts/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md
new file mode 100644
index 0000000..4f9bc09
--- /dev/null
+++ b/_posts/posts/2021-12-03-debian-based-riced-up-distribution-for-developers.md
@@ -0,0 +1,277 @@
---
title: Debian based riced up distribution for Developers and DevOps folks
permalink: /debian-based-riced-up-distribution-for-developers-and-devops-folks.html
date: 2021-12-03T12:00:00+02:00
layout: post
type: post
draft: false
---

## Introduction

I have been using [Ubuntu](https://ubuntu.com/) for quite a long time now. I
have used [Debian](https://www.debian.org/) in the past, and
[Manjaro](https://manjaro.org/). I also had [Arch](https://archlinux.org/) for
some time and even ran [Gentoo](https://www.gentoo.org/) way back.
For that reason, I have
+stuck with Ubuntu for a couple of years now. I am also at a point in my life
+where I just don't care what is cool or hip anymore. I just want a stable system
+that doesn't get in my way.
+
+During all this, I noticed that these distributions were getting very bloated
+and a lot of software got included that I usually uninstall on a fresh
+installation. Maybe this is my OCD speaking, but why do I have to give a fresh
+installation a minimum of 1 GB of RAM out of the box just to have a blank
+screen in front of me? I get it, there are many things included in the distro
+to make my life easier. I understand. But at this point I have a feeling that
+modern Linux distributions are becoming similar to a [Node.js project with
+node_modules](https://devhumor.com/content/uploads/images/August2017/node-modules.jpg).
+Just a crazy number of packages serving very little or no purpose, existing
+only to support other software.
+
+I felt I needed a fresh start. To start over with something minimal and clean.
+Something that would put a little more joy into using a computer again.
+
+For the first version, I wanted to target the following machines I have at home
+that I want this thing to work on.
+
+```yaml
+# My main stationary work machine
+Resolution: 3840x1080 (Super Ultrawide Monitor 32:9)
+CPU: Intel i7-8700 (12) @ 4.600GHz
+GPU: AMD ATI Radeon RX 470/480/570/570X/580/580X/590
+Memory: 32020MiB
+```
+
+```yaml
+# Thinkpad x220 for testing things and goofing around
+Resolution: 1366x768
+CPU: Intel i5-2520M (4) @ 3.200GHz
+GPU: Intel 2nd Generation Core Processor Family
+Memory: 15891MiB
+```
+
+## How should I approach this?
+
+I knew I wanted to use [minimal Debian netinst
+](https://www.debian.org/CD/netinst/) as the base to give myself a head
+start. There was no reason to go through changing the installer and also
+testing that whole behemoth of a thing. So, some sort of ricing was the only
+logical option to get this thing off the ground somewhat quickly. 
+
+> **What is ricing anyway?**
+> The term “RICE” stands for Race Inspired Cosmetic Enhancement. A group of
+> people (could be one, idk) decided to see if they could tweak their own
+> distros like they/others did their cars. This gave rise to a community of
+> Linux/Unix enthusiasts trying to make their distros look cooler and better
+> than others... For more information, read this article
+> [What in the world is ricing!?](https://pesos.github.io/2020/07/14/what-is-ricing.html).
+
+I didn't want this to just be a set of config files for theming purposes. I
+wanted this to include a set of pre-installed tools and services that a modern
+developer uses all the time. Theming is just a tiny part of it. Fonts being
+applied across the distro and things like that.
+
+First, I chose the terminal installer and let it load additional components.
+Avoid the graphical installer in this case.
+
+![](/assets/posts/dfd-rice/install-00.png){:loading="lazy"}
+
+After that I selected a hostname, created a normal user, set passwords for
+that user and for root, and chose guided mode for disk partitioning.
+
+![](/assets/posts/dfd-rice/install-01.png){:loading="lazy"}
+
+I let it run and install everything required for the base system, and opted
+out of scanning additional media for use by the package manager. Packages will
+be downloaded from the internet during installation.
+
+![](/assets/posts/dfd-rice/install-02.png){:loading="lazy"}
+
+I opted out of the popularity contest, and **now comes the important part**.
+Uncheck all the boxes in Software selection and only leave 'standard system
+utilities'. I also left 'SSH server' checked, so I was able to log in to the
+machine from my main PC.
+
+![](/assets/posts/dfd-rice/install-03.png){:loading="lazy"}
+
+At this point, I installed the GRUB bootloader on the disk where I installed
+the system. 
+
+![](/assets/posts/dfd-rice/install-04.png){:loading="lazy"}
+
+That concluded the installation of base Debian, and after restarting the
+computer I was prompted with the login screen.
+
+![](/assets/posts/dfd-rice/install-05.png){:loading="lazy"}
+
+Now that I had the base installation, it was time to choose what software I
+wanted to include in this so-called distribution. I wanted an out-of-the-box
+developer experience, so I had plenty to choose from.
+
+Let's not waste time and go through the list.
+
+## Desktop environments
+
+I have been using [Gnome](https://www.gnome.org/) for my whole Linux life, from
+version 2 onward. It's been quite a ride. I hated version 3 when it came out
+and replaced version 2. But I got used to it. And now with version 40+ they
+made a couple of changes that I found partly frustrating and partly pleasantly
+surprising.
+
+The amount of vertical space you lose because of the beefy title bars on
+windows is ridiculous. And then in the case of
+[Tilix](https://gnunn1.github.io/tilix-web/) you also have tabs, and you are
+100px deep. Vertical space is one of the most important things for a
+developer. The more real estate you have, the more code you can have in a
+viewport.
+
+But on the other hand, I still love how Gnome feels and looks. I gotta give them
+that. They really are trying to make Gnome feel unified and modern.
+
+Regardless of all the nice things Gnome has, I had been looking at tiling
+window managers for some time, but never had the nerve to actually go with one.
+Now was the ideal time to give it a go. No guts, no glory kind of a thing.
+
+One of the requirements for me was easy custom layouts, because I use a really
+strange monitor with an aspect ratio of 32:9, so relying on the stock layouts
+most of them ship with is a non-starter.
+
+What I was doing in Gnome was arranging windows in a layout like the diagram
+below. This is my common practice. 
And if you look at it you can clearly see I
+was replicating a tiling window manager setup in Gnome.
+
+![](/assets/posts/dfd-rice/layout.png){:loading="lazy"}
+
+That made me look into a bunch of tiling window managers and test them out.
+The candidates I was looking at were:
+
+- [i3](https://i3wm.org/)
+- [bspwm](https://github.com/baskerville/bspwm)
+- [awesome](https://awesomewm.org/index.html)
+- [XMonad](https://xmonad.org/)
+- [sway](https://swaywm.org/)
+- [Qtile](http://www.qtile.org/)
+- [dwm](https://dwm.suckless.org/)
+
+You can also check the article [13 Best Tiling Window Managers for
+Linux](https://www.tecmint.com/best-tiling-window-managers-for-linux/) which I
+referenced while testing them out.
+
+While all of them provided what I needed, I liked i3 the most. What
+particularly caught my eye was its ease of use and its tree-based layout
+model, which allows very flexible arrangements. I know the others can also be
+set up with custom layouts beyond spiral, dwindle, etc. I think i3 is a good
+entry-level window manager for somebody like me.
+
+## Batteries included
+
+The source for the whole thing is located on Github at
+https://github.com/mitjafelicijan/dfd-rice.
+
+Currently included:
+
+- `non-free` (enables non-free packages in apt)
+- `sudo` (adds sudo and adds user to sudo group)
+- `essentials` (gcc, htop, zip, curl, etc...)
+- `wifi` (network manager nmtui)
+- `desktop` (i3, dmenu, fonts, configurations)
+- `pulseaudio` (pulseaudio with pavucontrol)
+- `code-editors` (vim, micro, vscode)
+- `ohmybash` (make bash pretty)
+- `file-managers` (mc)
+- `git-ui` (terminal git gui)
+- `meld` (diff tool)
+- `profiling` (kcachegrind, valgrind, strace, ltrace)
+- `browsers` (brave, firefox, chromium)
+- programming languages:
+  - `python`
+  - `golang`
+  - `nodejs`
+  - `rust`
+  - `nim`
+  - `php`
+  - `ruby`
+- `docker` (with docker-compose)
+- `ansible`
+
+The install script also allows you to install only specific packages (example:
+essentials ohmybash docker rust). 
+
+```sh
+su - root \
+  bash -c "$(wget -q https://raw.github.com/mitjafelicijan/dfd-rice/master/tools/install.sh -O -)" -- \
+  essentials ohmybash docker rust
+```
+
+Currently, most of these recipes use what Debian provides, and this is totally
+fine with me since I never use the bleeding-edge features of a package. But if
+something major comes to light, I will replace it with a compilation script or
+something similar.
+
+This is some of the output from the installation script.
+
+![](/assets/posts/dfd-rice/script.png){:loading="lazy"}
+
+Let's take a look at some examples in the installation script.
+
+### Docker recipe
+
+```sh
+# docker
+print_header "Installing Docker"
+curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --yes --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
+apt update
+apt -y install docker-ce docker-ce-cli containerd.io docker-compose
+
+systemctl start docker
+systemctl enable docker
+systemctl status docker --no-pager
+
+/sbin/usermod -aG docker $USERNAME
+```
+
+### Making bash pretty
+
+I really like [Oh My Zsh](https://ohmyz.sh/), but I don't like the zsh shell.
+When I used it, I constantly needed to be aware of it, and running bash scripts
+was a pain. So, I was really delighted when I found out that a version for bash
+existed, called [Oh My Bash](https://ohmybash.nntoan.com/). Let's take a look
+at the recipe for installing it.
+
+```sh
+# ohmybash
+print_header "Enabling OhMyBash"
+sudo -u $USERNAME sh -c "$(curl -fsSL https://raw.github.com/ohmybash/oh-my-bash/master/tools/install.sh)" &
+T1=${!}
+wait ${T1}
+```
+
+Because OhMyBash does `exec bash` at the end, this traps our script inside
+another shell and our script cannot continue. For that reason, I executed this
+in the background. 
But that presents a new problem. Because this is executed in the background,
+we naturally lose track of its progress. The trick with `T1=${!}` and
+`wait ${T1}` waits for the background process to finish before continuing to
+the next task in the bash script.
+
+Check [Multi-Threaded Processing in Bash Scripts](https://www.cloudsavvyit.com/12277/how-to-use-multi-threaded-processing-in-bash-scripts/)
+for more details.
+
+## Conclusion
+
+Take a look at the
+https://github.com/mitjafelicijan/dfd-rice/blob/develop/tools/install.sh script
+to get familiar with it. This is just a first iteration and I will continue to
+update it because I need this in my life.
+
+The current version boots in 4s to the login prompt, and after you log in, the
+desktop environment loads in 2s. So, it's fast, very fast. And on a clean boot,
+I measured ~230 MB of RAM usage.
+
+And this is how it looks with two terminals side by side. I really like the
+simplicity and clean interface. I will polish the colors and stuff like that,
+but I really do like the results.
+
+![](/assets/posts/dfd-rice/desktop.png){:loading="lazy"}
diff --git a/_posts/posts/2021-12-25-running-golang-application-as-pid1.md b/_posts/posts/2021-12-25-running-golang-application-as-pid1.md
new file mode 100644
index 0000000..edd5a57
--- /dev/null
+++ b/_posts/posts/2021-12-25-running-golang-application-as-pid1.md
@@ -0,0 +1,348 @@
+---
+title: Running Golang application as PID 1 with Linux kernel
+permalink: /running-golang-application-as-pid1.html
+date: 2021-12-25T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+## Unikernels, kernels, and the like
+
+I have been reading a lot about
+[unikernels](https://en.wikipedia.org/wiki/Unikernel) lately and found them
+very intriguing. When you push away all the marketing speak and look at the
+idea, it makes a lot of sense.
+
+> A unikernel is a specialized, single address space machine image constructed
+> by using library operating systems. 
([Wikipedia](https://en.wikipedia.org/wiki/Unikernel))
+
+I really like the explanation from the article
+[Unikernels: Rise of the Virtual Library Operating System](https://queue.acm.org/detail.cfm?id=2566628).
+Really worth a read.
+
+If we compare a normal operating system to a unikernel side by side, they would
+look something like this.
+
+![Virtual machines vs Containers vs Unikernels](/assets/posts/pid1/unikernels.webp){:loading="lazy"}
+
+From this image, we can see how the complexity significantly decreases with
+the use of unikernels. This comes with a price, of course. Unikernels are hard
+to get running and require a lot of work, since you don't have an actual proper
+kernel running in the background providing network access, drivers, etc.
+
+So as a half step to make the stack simpler, I started looking into using the
+Linux kernel as a base and going from there. I came across this
+[Youtube video talking about Building the Simplest Possible Linux System](https://www.youtube.com/watch?v=Sk9TatW9ino)
+by [Rob Landley](https://landley.net), and apart from statically compiling the
+application to be run as PID 1 there were really no other obstacles.
+
+## What is PID 1?
+
+PID 1 is the first process that the Linux kernel starts after the boot process.
+It also has a couple of properties that are unique to it.
+
+- When the process with PID 1 dies for any reason, all other processes are
+  killed with the KILL signal.
+- When any process having children dies for any reason, its children are
+  re-parented to the process with PID 1.
+- Many signals whose default action is Term do not have one for PID 1.
+- When the process with PID 1 dies for any reason, the kernel panics, which
+  results in a system crash.
+
+PID 1 is considered the init application, which takes care of running other
+processes and handling services like:
+
+- sshd,
+- nginx,
+- pulseaudio,
+- etc.
+
+If you are on a Linux machine, you can check which process has PID 1
+by running the following. 
+
+```sh
+$ cat /proc/1/status
+Name: systemd
+Umask: 0000
+State: S (sleeping)
+Tgid: 1
+Ngid: 0
+Pid: 1
+PPid: 0
+...
+```
+
+As we can see, on my machine the process with ID 1 is [systemd](https://systemd.io/),
+which is a software suite that provides an array of system components for Linux
+operating systems. If you look closely you can also see that the `PPid`
+(process ID of the parent process) is `0`, which additionally confirms that
+this process doesn't have a parent.
+
+## So why even run an application as PID 1 instead of just using a container?
+
+Containers are wonderful, but they come with a lot of baggage. And because they
+are by their nature layered, the images require quite a lot of space and also a
+lot of additional software to handle them. They are not as lightweight as they
+seem, and many popular images require 500 MB+ of disk space.
+
+Running the application as PID 1 results in a significantly smaller footprint,
+as we will see later in the post.
+
+> You could run a simple init system inside a Docker container, described more
+> in this article [Docker and the PID 1 zombie reaping problem](https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/).
+
+## The master plan
+
+1. Compile the Linux kernel with the default configuration.
+2. Prepare a Hello World application in Golang that is statically compiled.
+3. Run it with [QEMU](https://www.qemu.org/), providing the Golang application
+   as the init application / PID 1.
+
+For the sake of simplicity we will not be cross-compiling any of it and will
+just use the 64-bit version.
+
+## Compiling the Linux kernel
+
+```sh
+$ wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.15.7.tar.xz
+$ tar xf linux-5.15.7.tar.xz
+
+$ cd linux-5.15.7
+
+$ make clean
+
+# read more about this https://stackoverflow.com/a/41886394
+$ make defconfig
+
+$ time make -j `nproc`
+
+$ cd ..
+```
+
+At this point we have a kernel image located at `arch/x86_64/boot/bzImage`. 
+We will use this in QEMU later.
+
+To make our lives a bit easier, let's move the kernel image to another place.
+Let's create a folder `bin/` in the root of our project with `mkdir -p bin`.
+
+At this point we can copy `bzImage` to the `bin/` folder with
+`cp linux-5.15.7/arch/x86_64/boot/bzImage bin/bzImage`.
+
+The folder structure of this experiment should look like this.
+
+```txt
+pid1/
+  bin/
+    bzImage
+  linux-5.15.7/
+  linux-5.15.7.tar.xz
+```
+
+## Preparing PID 1 application in Golang
+
+This step is relatively easy. The only thing we must keep in mind is that the
+binary needs to be statically compiled.
+
+Let's create an `init.go` file in the root of the project.
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+func main() {
+	for {
+		fmt.Println("Hello from Golang")
+		time.Sleep(1 * time.Second)
+	}
+}
+```
+
+If you notice, we have a forever loop in main, with a simple sleep of 1
+second to not overwhelm the CPU. This is because PID 1 should never complete
+and/or exit. That would result in a kernel panic. Which is BAD!
+
+There are two ways of compiling a Golang application: statically and
+dynamically.
+
+To statically compile the binary, use the following command.
+
+```sh
+$ go build -ldflags="-extldflags=-static" init.go
+```
+
+We can also check if the binary is statically compiled with:
+
+```sh
+$ file init
+init: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=Ypu8Zw_4NBxm1Yxg2OYO/H5x721rQ9uTPiDVh-VqP/vZN7kXfGG1zhX_qdHMgH/9vBfmK81tFrygfOXDEOo, not stripped
+
+$ ldd init
+not a dynamic executable
+```
+
+At this point, we need to create an [initramfs](https://www.linuxfromscratch.org/blfs/view/svn/postlfs/initramfs.html)
+(abbreviated from "initial RAM file system"; the successor of initrd. It
+is a cpio archive of the initial file system that gets loaded into memory
+during the Linux startup process). 
+
+```sh
+$ echo init | cpio -o --format=newc > initramfs
+$ mv initramfs bin/initramfs
+```
+
+The project at this stage should look like this.
+
+```txt
+pid1/
+  bin/
+    bzImage
+    initramfs
+  linux-5.15.7/
+  linux-5.15.7.tar.xz
+  init.go
+```
+
+## Running all of it with QEMU
+
+[QEMU](https://www.qemu.org/) is a free and open-source hypervisor. It emulates
+the machine's processor through dynamic binary translation and provides a set
+of different hardware and device models for the machine, enabling it to run a
+variety of guest operating systems.
+
+```sh
+$ qemu-system-x86_64 -serial stdio -kernel bin/bzImage -initrd bin/initramfs -append "console=ttyS0" -m 128
+[ 0.000000] Linux version 5.15.7 (m@khan) (gcc (GCC) 11.2.1 20211203 (Red Hat 11.2.1-7), GNU ld version 2.37-10.fc35) #7 SMP Mon Dec 13 10:23:25 CET 2021
+[ 0.000000] Command line: console=ttyS0
+[ 0.000000] x86/fpu: x87 FPU will use FXSAVE
+[ 0.000000] signal: max sigframe size: 1440
+[ 0.000000] BIOS-provided physical RAM map:
+[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
+[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
+[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
+[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000007fdffff] usable
+[ 0.000000] BIOS-e820: [mem 0x0000000007fe0000-0x0000000007ffffff] reserved
+[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
+[ 0.000000] NX (Execute Disable) protection: active
+[ 0.000000] SMBIOS 2.8 present.
+[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-6.fc35 04/01/2014
+[ 0.000000] tsc: Fast TSC calibration failed
+...
+[ 2.016106] ALSA device list:
+[ 2.016329] No soundcards found. 
+[ 2.053176] Freeing unused kernel image (initmem) memory: 1368K
+[ 2.056095] Write protecting the kernel read-only data: 20480k
+[ 2.058248] Freeing unused kernel image (text/rodata gap) memory: 2032K
+[ 2.058811] Freeing unused kernel image (rodata/data gap) memory: 500K
+[ 2.059164] Run /init as init process
+Hello from Golang
+[ 2.386879] tsc: Refined TSC clocksource calibration: 3192.032 MHz
+[ 2.387114] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2e02e31fa14, max_idle_ns: 440795264947 ns
+[ 2.387380] clocksource: Switched to clocksource tsc
+[ 2.587895] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
+Hello from Golang
+Hello from Golang
+Hello from Golang
+```
+
+The whole [log file is here](/assets/posts/pid1/qemu.log).
+
+## Size comparison
+
+The cool thing about this approach is that the Linux kernel and the application
+together only take around 12 MB, which is impressive as hell. Note also that
+the size of bzImage (the Linux kernel) could be decreased further by going into
+`make menuconfig` and removing a ton of features from the kernel. I managed to
+get the kernel size down to 2 MB and still have it work properly.
+
+```sh
+total 12M
+-rw-r--r--. 1 m m 9.3M Dec 13 10:24 bzImage
+-rw-r--r--. 1 m m 1.9M Dec 27 01:19 initramfs
+```
+
+## Creating ISO image and running it with Gnome Boxes
+
+First we need to create a proper folder structure with `mkdir -p iso/boot/grub`.
+
+Then we need to download the [grub binary](https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito).
+You can read more about this program at https://github.com/littleosbook/littleosbook.
+
+```sh
+$ wget -O iso/boot/grub/stage2_eltorito https://github.com/littleosbook/littleosbook/raw/master/files/stage2_eltorito
+```
+
+```sh
+$ tree iso/boot/
+iso/boot/
+├── bzImage
+├── grub
+│   ├── menu.lst
+│   └── stage2_eltorito
+└── initramfs
+```
+
+Let's copy the files into the proper folders. 
+
+```sh
+$ cp bin/bzImage iso/boot/
+$ cp bin/initramfs iso/boot/
+```
+
+Let's create a GRUB config file at `iso/boot/grub/menu.lst` with the following
+contents.
+
+```ini
+default=0
+timeout=5
+
+title GoAsPID1
+kernel /boot/bzImage
+initrd /boot/initramfs
+```
+
+Let's create the ISO file using genisoimage:
+
+```sh
+genisoimage -R \
+  -b boot/grub/stage2_eltorito \
+  -no-emul-boot \
+  -boot-load-size 4 \
+  -A os \
+  -input-charset utf8 \
+  -quiet \
+  -boot-info-table \
+  -o GoAsPID1.iso \
+  iso
+```
+
+This will produce `GoAsPID1.iso`, which you can use with [Virtualbox](https://www.virtualbox.org/)
+or [Gnome Boxes](https://apps.gnome.org/app/org.gnome.Boxes/).
+
+## Is running applications as PID 1 even worth it?
+
+Well, the answer to this is not as simple as one would think. Sometimes it is
+and sometimes it's not. For embedded systems and very specialized applications
+it is worth it for sure. But for normal use cases, I don't think so. It was an
+interesting exercise in compiling kernels and looking at the guts of the Linux
+kernel, but sticking to containers for most things is a better option in my
+opinion.
+
+An interesting experiment would be creating an image that supports networking,
+deploying it to AWS as an EC2 instance, and observing how it fares. But in
+that case, we would need to write some sort of supervisor running on a
+separate EC2 instance to check that the other instances are running properly.
+Remember: if your application fails, the kernel panics and the whole machine
+becomes inoperable in this case. 
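One loose end worth sketching: the hello-world init above ignores one of the PID 1 duties listed earlier — every orphaned process gets re-parented to PID 1, so a real init must also reap exited children or zombies accumulate. This is a hedged illustration of my own (not from the original post's tested code), using `syscall.Wait4`; the demo in `main` runs it against an ordinary child process rather than as actual PID 1.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// reapChildren collects any already-exited children without blocking and
// returns how many it reaped. A real PID 1 would call this in a loop (for
// example whenever SIGCHLD arrives), since all orphans are re-parented to it.
func reapChildren() int {
	reaped := 0
	for {
		var status syscall.WaitStatus
		pid, err := syscall.Wait4(-1, &status, syscall.WNOHANG, nil)
		if err != nil || pid <= 0 {
			return reaped
		}
		reaped++
	}
}

func main() {
	// Demonstration outside of PID 1: start a short-lived child and
	// reap it the way an init process would.
	cmd := exec.Command("true")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Poll until the child has exited and been collected.
	total := 0
	deadline := time.Now().Add(2 * time.Second)
	for total == 0 && time.Now().Before(deadline) {
		total += reapChildren()
		time.Sleep(10 * time.Millisecond)
	}
	fmt.Println("reaped children:", total)
}
```

Folding a loop like this into the `init.go` above (instead of the bare sleep loop) would make the Go binary behave much more like a traditional init.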
diff --git a/_posts/posts/2021-12-30-wap-mobile-web-before-the-web.md b/_posts/posts/2021-12-30-wap-mobile-web-before-the-web.md
new file mode 100644
index 0000000..665be0f
--- /dev/null
+++ b/_posts/posts/2021-12-30-wap-mobile-web-before-the-web.md
@@ -0,0 +1,203 @@
+---
+title: Wireless Application Protocol and the mobile web before the web
+permalink: /wap-mobile-web-before-the-web.html
+date: 2021-12-30T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+## A little stroll down the history lane
+
+About two weeks ago, I watched an outstanding documentary on YouTube,
+[Springboard: the secret history of the first real
+smartphone](https://www.youtube.com/watch?v=b9_Vh9h3Ohw), about the history of
+smartphones and phones in general. It brought back so many memories. I never
+had an actual smartphone before Android. The closest to a smartphone was the
+[Sony Ericsson P1](https://www.gsmarena.com/sony_ericsson_p1-1982.php). A
+fantastic phone. I broke it in Prague after a party, and that was one of those
+rare occasions where I was actually mad at myself. Nevertheless, the next
+phone after that was an Android one.
+
+Before that, I only owned normal phones from Nokia, Siemens, etc. Nothing
+special, actually. These are the phones we are talking about. Before 2007.
+Apple and Android phones didn't exist yet.
+
+These phones were rocking:
+
+- No selfie cameras.
+- ~2 inch displays.
+- ~120 MHz beast CPUs.
+- 144p main cameras.
+- But they had a headphone jack.
+
+Let's take a look at these beauties.
+
+![Old phones](/assets/posts/wap/phones.gif){:loading="lazy"}
+
+## WAP - Wireless Application Protocol
+
+Not that one! We are talking about Wireless Application Protocol and not Cardi
+B's song 😃
+
+WAP stands for Wireless Application Protocol. It is a protocol designed for
+micro-browsers, and it enables access to the internet on mobile devices. 
It
+uses the markup language WML (Wireless Markup Language, not HTML); WML is
+defined as an XML 1.0 application. Furthermore, it enables creating web
+applications for mobile devices. In 1998, the WAP Forum was founded by
+Ericsson, Motorola, Nokia and Unwired Planet, whose aim was to standardize the
+various wireless technologies via protocols.
+[(source)](https://www.geeksforgeeks.org/wireless-application-protocol/)
+
+The WAP protocol resulted from the joint efforts of the various members of the
+WAP Forum. In 2002, the WAP Forum was merged with various other forums of the
+industry, resulting in the formation of the Open Mobile Alliance (OMA).
+[(source)](https://www.geeksforgeeks.org/wireless-application-protocol/)
+
+These were some wild times. Devices had tiny screens and data transmission rates
+were abominable. But they were capable of rendering WML (Wireless Markup
+Language). This was very similar to HTML, actually. It is a markup language,
+after all.
+
+These pages could be served by [Apache](https://apache.org/) and could be
+generated by CGI scripts on the backend. The only difference was the limited
+markup language.
+
+## WML - Wireless Markup Language
+
+Just like web browsers use HTML for content structure, older mobile device
+browsers use WML - if you need to support really old mobile phones using WML
+browsers, you will need to know about it. WML is XML-based (an XML vocabulary
+just like XHTML and MathML, but not HTML) and does not use the same metaphor as
+HTML. HTML is a single document with some metadata packed away in the head, and
+a body encapsulating the visible page. With WML, the metaphor does not envisage
+a page, but rather a deck of cards. A WML file might have several pages or cards
+contained within it.
+[(source)](https://www.w3.org/wiki/Introduction_to_mobile_web)
+
+```html
+<?xml version="1.0"?>
+<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" "http://www.wapforum.org/DTD/wml_1.1.xml">
+<wml>
+  <card id="home" title="Example">
+    <p>Welcome to the Example homepage</p>
+  </card>
+</wml>
+```
+
+There is an amazing tutorial on [Tutorialpoint about
+WML](https://www.tutorialspoint.com/wml/index.htm).
+
+## Converting Digg to WML
+
+This task is completely useless and not really feasible nowadays, but I had to
+give it a try for old times' sake. Since the data is already there in the form
+of an RSS feed, I could take this feed, parse it, and create a WML version of
+the homepage.
+
+We will need:
+
+- Python3 + Pip
+- ImageMagick
+- feedparser and mako templating
+
+```sh
+# for fedora 35
+sudo dnf install ImageMagick python3-pip
+
+# templating engine for python
+pip install mako --user
+
+# for parsing rss feeds
+pip install feedparser --user
+```
+
+Project folder structure should look like the following.
+
+```
+12:43:53 m@khan wap → tree -L 1
+.
+├── generate.py
+└── template.wml
+
+```
+
+After that, I created a small template for the homepage.
+
+```html
+<?xml version="1.0"?>
+<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" "http://www.wapforum.org/DTD/wml_1.1.xml">
+<wml>
+  <card id="home" title="Digg">
+    % for item in entries:
+    <p><img src="images/${item.id}.jpg" alt="${item.title}"/></p>
+    <p><small>${item.kicker}</small></p>
+    <p><b>${item.title}</b></p>
+    <p>${item.description}</p>
+    % endfor
+  </card>
+</wml>
+```
+
+And the parser that parses the RSS feed looks like this.
+
+```python
+import os
+import feedparser
+from mako.template import Template
+
+os.system('mkdir -p www/images')
+
+template = Template(filename='template.wml')
+
+feed = feedparser.parse('https://digg.com/rss/top.xml')
+
+entries = feed.entries[:15]
+
+for entry in entries:
+    print('Processing image with id {}'.format(entry.id))
+    os.system('wget -q -O www/images/{}.jpg "{}"'.format(entry.id, entry.links[1].href))
+    os.system('convert www/images/{}.jpg -type Grayscale -resize 175x -depth 3 -quality 30 www/images/{}.jpg'.format(entry.id, entry.id))
+
+html = template.render(entries = entries)
+
+with open('www/index.wml', 'w+') as fp:
+    fp.write(html)
+```
+
+This script will create a folder `www` and, inside it, a `www/images` folder
+for storing resized images.
+
+> Be sure you don't use SSL and serve the content over plain HTTP.
+> These old phones will have problems with TLS 1.3 etc.
+
+If you look at the Python file, I convert all the images into tiny B&W images.
+They should be WBMP (Wireless BitMaP) but I chose JPEGs for this, and it seems
+to work properly.
+
+Because I currently don't have a phone old enough to test it on, I used an
+emulator. And it was really hard to find one. I found [WAP
+Proof](http://wap-proof.sharewarejunction.com/) on shareware junction, and it
+did the job well enough. I will try to find an actual device to test it on.
+
+If you are using Nginx to serve the contents, add a directive to the host's
+configuration that will automatically serve the `index.wml` file.
+
+```nginx
+server {
+    index index.wml index.html index.htm index.nginx-debian.html;
+}
+```
+
+## Conclusion
+
+Well, this was pointless, but very fun! I hope you enjoyed it as much as I did.
+I will try to find an old phone to test it on. If you have any questions, feel
+free to ask in the comments. 
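One serving detail worth a postscript: period WAP browsers generally expect WML delivered with its registered MIME type, `text/vnd.wap.wml` (and `image/vnd.wap.wbmp` for WBMP images). The `.wml` extension is usually absent from the stock `mime.types`, so a hedged sketch of an Nginx addition — the surrounding server block is an assumption, adapt it to your site:

```nginx
server {
    index index.wml index.html index.htm;

    # Unmapped extensions fall through to default_type inside this
    # location, so .wml files get the WAP MIME type without having to
    # redefine the whole inherited types map.
    location ~ \.wml$ {
        default_type text/vnd.wap.wml;
    }
}
```

Using `default_type` in a dedicated location avoids the gotcha that a `types {}` block would replace, rather than extend, the inherited MIME map.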
diff --git a/_posts/posts/2022-06-30-trying-out-helix-editor.md b/_posts/posts/2022-06-30-trying-out-helix-editor.md
new file mode 100644
index 0000000..be369a1
--- /dev/null
+++ b/_posts/posts/2022-06-30-trying-out-helix-editor.md
@@ -0,0 +1,55 @@
+---
+title: Trying out Helix code editor as my main editor
+permalink: /tying-out-helix-code-editor.html
+date: 2022-06-30T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+I have been searching for a lightweight code editor for quite some time. One of
+the main reasons was that I wanted something that doesn't burn through CPU and
+whose RAM usage is not through the roof. I have been mostly using Visual Studio
+Code. It's been an outstanding editor. I have no quarrel with it at all. It's
+just time to spice life up with something new.
+
+I have been on this search for a couple of years. I have tried Vim, Neovim,
+Emacs, Doom Emacs, Micro and a couple more. Among them, I liked Micro and
+Doom Emacs the most. The Micro editor was a little too basic for me. And Doom
+Emacs was a bit too hardcore. This does not reflect on any of the editors. It's
+just my personal preference.
+
+> I tried Helix Editor about a year ago. But I didn't pay attention to it. I
+> tried it, saw it's similar to Vi, and just said no. It was premature of me to
+> dismiss it.
+
+One of the things I actually miss is line wrapping for certain files. When
+writing Markdown, line wrapping would be very helpful. Editing such a document
+is frustrating to say the least. Some of the Markdown to HTML converters don't
+take kindly to new lines between sentences. Not paragraphs, sentences. And I use
+Markdown to write this blog you are reading.
+
+But other than this, I have been extremely satisfied with it. It's been a
+pleasant surprise. There have been zero issues with the editor.
+
+One thing to do before you are able to use autocompletion and Language Server
+support is to install the relevant language server. 
+
+```sh
+# For C development this installs the C LSP.
+sudo dnf install clang-tools-extra
+```
+
+I am still getting used to the keyboard shortcuts and getting better. What
+Helix does really well is pack in sane defaults, and even though there is
+currently no plugin support, I haven't found any need for plugins. It has all
+that you would need. It goes to great lengths to show the user what is going
+on, with popups that show you the available keyboard shortcuts.
+
+And it comes packed with many
+[really good themes](https://github.com/helix-editor/helix/wiki/Themes).
+
+![Editor](/assets/posts/helix-editor/editor.png){:loading="lazy"}
+
+It's still young but has this mature feeling to it. It has sane defaults and
+mimics Vim (works a bit differently, but the overall idea is similar).
diff --git a/_posts/posts/2022-07-05-what-would-dna-sound-if-synthesized.md b/_posts/posts/2022-07-05-what-would-dna-sound-if-synthesized.md
new file mode 100644
index 0000000..6efe559
--- /dev/null
+++ b/_posts/posts/2022-07-05-what-would-dna-sound-if-synthesized.md
@@ -0,0 +1,365 @@
+---
+title: What would DNA sound like if synthesized to an audio file
+permalink: /what-would-dna-sound-if-synthesized.html
+date: 2022-07-05T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+## Introduction
+
+Lately, I have been thinking a lot about the nature of life, the building
+blocks of life, and things like that. It's remarkable how complex, and at the
+same time simple, creation is when you look at it. The miracle of life keeps us
+grounded when our imagination goes wild. If DNA is the building block of life,
+you could consider it an API nature provided us to better understand all of
+this chaos masquerading as order.
+
+I have been reading a lot about superintelligence and our somewhat misguided
+path to creating general artificial intelligence. What would the building
+blocks of our creation look like? 
Is compression really the ultimate form of information storage? Will our creations also ponder these questions when creating new worlds for themselves, or will we just disappear into the vastness of possibilities? It is a little offensive that we are playing God whilst being completely ignorant of our own reality. Who knows! Like many other breakthroughs, this one will also come at a cost not known to us when it finally happens.

To keep things a bit lighter, I decided to convert some popular DNA sequences into audio files for us to listen to. I am not the first, nor will I be the last, to do this. But it is an interesting exercise in better understanding the relationship between art and science. Maybe listening to DNA instead of parsing it will open a way to better understanding, or at least to enjoying the creation and cryptic nature of life.

## DNA encoding and primer example

I explored DNA in a post from about 3 years ago, [Encoding binary data into DNA sequence](/encoding-binary-data-into-dna-sequence.html), where I converted all sorts of data into DNA sequences.

This will be a similar exercise, but instead of converting to DNA, I will be generating tones from Nucleotides.

| Nucleotides      | Note | Frequency |
| ---------------- | ---- | --------- |
| **A** (Adenine)  | A    | 440 Hz    |
| **C** (Cytosine) | C    | 523.25 Hz |
| **G** (Guanine)  | G    | 783.99 Hz |
| **T** (Thymine)  | D    | 587.33 Hz |

Since there is no T note in the equal-tempered scale, I chose the note D to represent T.

You can check [Frequencies for equal-tempered scale, A4 = 440 Hz](https://pages.mtu.edu/~suits/notefreqs.html). For this tuning, we also assume `Speed of Sound = 345 m/s = 1130 ft/s = 770 miles/hr`.

Now that we have this out of the way, we can also brush up on DNA sequencing a bit. This is a famous quote I also used for the encoding tests, and it goes like this.
> How wonderful that we have met with a paradox. Now we have some hope of
> making progress.
> ― Niels Bohr

```shell
>SEQ1
GACAGCTTGTGTACAAGTGTGCTTGCTCGCGAGCGGGTACGCGCGTGGGCTAACAAGTGA
GCCAGCAGGTGAACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGCTGGCGGGTGA
ACAAGTGTGCCGGTGAGCCAACAAGCAGACAAGTAAGCAGGTACGCAGGCGAGCTTGTCA
ACTCACAAGATCGCTTGTGTACAAGTGTGCGGACAAGCCAGCAGGTGCGCGGACAAGTAT
GCTTGCTGGCGGACAAGCCAGCTTGTAAGCGGACAAGCTTGCGCACAAGCTGGCAGGCCT
GCCGGCTCGCGTACAAATTCACAAGTAAGTACGCTTGCGTGTACGCGGGTATGTATACTC
AACCTCACCAAACGGGACAAGATCGCCGGCGGGCTAGTATACAAGAACGCTTGCCAGTAC
AACC
```

This is what we are going to work with when creating the parser and waveform generator.

## Parsing DNA data

This step is a rather simple one. All we need to do is parse the input DNA sequence in [FASTA format](https://en.wikipedia.org/wiki/FASTA_format), well known in [Bioinformatics](https://en.wikipedia.org/wiki/Bioinformatics), to extract single Nucleotides that will be converted into separate tones based on the equal-tempered scale explained above.

```python
# Tone map based on the equal-tempered scale (T is played as the note D).
nucleotide_tone_map = {
    'A': 440,
    'C': 523.25,
    'G': 783.99,
    'T': 587.33,  # converted to D
}

def generate_from_dna_sequence(sequence):
    # A string is already iterable, so we can walk it nucleotide by nucleotide.
    for nucleotide in sequence:
        print(nucleotide, nucleotide_tone_map[nucleotide])
```

## Generating sine wave

Because we are essentially creating a long stream of notes, we will be appending sine-wave samples to a global array that we will later use to create a WAV file.

```python
import math

def append_sinewave(freq=440.0, duration_milliseconds=500, volume=1.0):
    # `audio` and `sample_rate` are module-level globals shared with save_wav().
    global audio

    num_samples = int(duration_milliseconds * (sample_rate / 1000.0))

    for x in range(num_samples):
        audio.append(volume * math.sin(2 * math.pi * freq * (x / sample_rate)))
```

The sine wave generated here is the standard beep.
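To sanity-check the pieces so far, here is a minimal end-to-end sketch that wires the tone map into the sine generator. It defines the `sample_rate` and `audio` globals that the snippets above assume, and feeds a short hand-picked sequence through the pipeline:

```python
import math

sample_rate = 44100  # CD-quality sample rate, also used later for the WAV file
audio = []           # global buffer the sine-wave chunks get appended to

# Tone map from the equal-tempered scale table above (T is played as D).
nucleotide_tone_map = {'A': 440, 'C': 523.25, 'G': 783.99, 'T': 587.33}

def append_sinewave(freq=440.0, duration_milliseconds=500, volume=1.0):
    num_samples = int(duration_milliseconds * (sample_rate / 1000.0))
    for x in range(num_samples):
        audio.append(volume * math.sin(2 * math.pi * freq * (x / sample_rate)))

# Feed a short sequence through the pipeline, 100 ms per nucleotide.
for nucleotide in "GATC":
    append_sinewave(freq=nucleotide_tone_map[nucleotide], duration_milliseconds=100)

# 4 notes x 100 ms at 44.1 kHz -> 17640 samples, all within [-1, 1].
print(len(audio))  # 17640
```

Running this on the full `SEQ1` sequence instead of the `"GATC"` placeholder produces the actual soundtrack buffer.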
If you want something more aggressive, you could try a square or sawtooth waveform.

## Generating a WAV file from accumulated sine waves

```python
import wave
import struct

def save_wav(file_name):
    # Mono, 16-bit samples at the global sample rate.
    wav_file = wave.open(file_name, 'w')
    nchannels = 1
    sampwidth = 2

    nframes = len(audio)
    comptype = 'NONE'
    compname = 'not compressed'
    wav_file.setparams((nchannels, sampwidth, sample_rate, nframes, comptype, compname))

    # Scale the [-1.0, 1.0] floats to the signed 16-bit integer range.
    for sample in audio:
        wav_file.writeframes(struct.pack('h', int(sample * 32767.0)))

    wav_file.close()
```

44100 Hz is the industry-standard sample rate (CD quality). If you need to save on file size, you can adjust it downwards. The standard for low quality is 8000 Hz, or 8 kHz.

The WAV files here use short (16-bit, signed) integers for the sample size, so we multiply the floating-point data by 32767, the maximum value of a short integer.

> It is theoretically possible to use the floating point -1.0 to 1.0 data directly in a WAV file, but it is not obvious how to do that using the wave module in Python.

## Generating Spectrograms

I tried two methods of doing this, and both worked just fine. I opted for [SoX - Sound eXchange, the Swiss Army knife of audio manipulation](https://linux.die.net/man/1/sox) because it didn't require anything else.

```shell
sox output.wav -n spectrogram -o spectrogram.png
```

An example spectrogram of Ludwig van Beethoven's Symphony No. 6, first movement.

![Ludwig van Beethoven Symphony No. 6 First movement](/assets/posts/dna-synthesized/symphony-no6-1st-movement.png){:loading="lazy"}

The other option could be in combination with [gnuplot](http://www.gnuplot.info/). This would require an intermediary step, however.

```shell
sox output.wav audio.dat
tail -n+3 audio.dat > audio_only.dat
gnuplot audio.gpi
```

And the input file `audio.gpi` passed to gnuplot looks something like this.
```txt
# set output format and size
set term png size 1000,280

# set output file
set output "audio.png"

# set y range
set yr [-1:1]

# we want just the data
unset key
unset tics
unset border
set lmargin 0
set rmargin 0
set tmargin 0
set bmargin 0

# draw rectangle to change background color
set obj 1 rectangle behind from screen 0,0 to screen 1,1
set obj 1 fillstyle solid 1.0 fillcolor rgbcolor "#ffffff"

# draw data with foreground color
plot "audio_only.dat" with lines lt rgb 'red'
```

## Pre-generated sequences

What I did was take interesting parts of an animal's genome and feed them to the tone generator script. This generated WAV files, which I converted to MP3 so they can be played in a browser. The last step was creating a spectrogram from each WAV file.

### Niels Bohr quote

![Spectrogram](/assets/posts/dna-synthesized/quote/spectogram.png){:loading="lazy"}

### Mouse

This is part of a mouse genome, `Mus_musculus.GRCm39.dna.nonchromosomal`. You can get the [genome data here](http://ftp.ensembl.org/pub/release-106/fasta/mus_musculus/dna/).

![Spectrogram](/assets/posts/dna-synthesized/mouse/spectogram.png){:loading="lazy"}

### Bison

This is part of a bison genome, `Bison_bison_bison.Bison_UMD1.0.cdna`. You can get the [genome data here](http://ftp.ensembl.org/pub/release-106/fasta/bison_bison_bison/cdna/).

![Spectrogram](/assets/posts/dna-synthesized/bison/spectogram.png){:loading="lazy"}

### Taurus

This is part of a taurus genome, `Bos_taurus.ARS-UCD1.2.cdna`. You can get the [genome data here](http://ftp.ensembl.org/pub/release-106/fasta/bos_taurus/cdna/).

![Spectrogram](/assets/posts/dna-synthesized/taurus/spectogram.png){:loading="lazy"}

## Making a drummer out of a DNA sequence

To make things even more interesting, I decided to send this data via MIDI to my [Elektron Model:Samples](https://www.elektron.se/en/model-samples).
This is a really cool piece of equipment that supports MIDI in via USB and a 3.5 mm audio jack.

The Elektron is connected to my MacBook via a USB cable, and the audio out is patched to a Sony Bluetooth speaker I have that supports 3.5 mm audio in. The Elektron doesn't have internal speakers.

![](/assets/posts/dna-synthesized/elektron/IMG_0619.jpg){:loading="lazy"}

![](/assets/posts/dna-synthesized/elektron/IMG_0620.jpg){:loading="lazy"}

![](/assets/posts/dna-synthesized/elektron/IMG_0622.jpg){:loading="lazy"}

For communicating with the Elektron, I chose the `pygame` Python module, which has MIDI support built in. With it, it was rather simple to send notes to the device. All I did was map MIDI notes to the actual Nucleotides.

Before all of this, I also opened the Audio MIDI Setup app under macOS and checked MIDI Studio by pressing ⌘-2.

![](/assets/posts/dna-synthesized/elektron/midi-studio.jpg){:loading="lazy"}

The whole script that parses and sends notes to the Elektron looks like this.

```python
import pygame.midi
import time

pygame.midi.init()

print(pygame.midi.get_default_output_id())
print(pygame.midi.get_device_info(0))

# Device id 1 happens to be the Elektron on my machine; adjust for your setup.
player = pygame.midi.Output(1)
player.set_instrument(2)

def send_note(note, velocity):
    global player
    player.note_on(note, velocity)
    time.sleep(0.3)
    player.note_off(note, velocity)


# MIDI note numbers must be in the 0-127 range; these map the Nucleotides to
# the same notes as the frequency table above (A4, C5, G5 and D5).
nucleotide_midi_map = {
    'A': 69,  # A4
    'C': 72,  # C5
    'G': 79,  # G5
    'T': 74,  # D5, T is played as D
}

with open("quote.fa") as f:
    sequence = f.read().replace('\n', '')

for nucleotide in sequence:
    print("Playing nucleotide {} with MIDI note {}".format(
        nucleotide, nucleotide_midi_map[nucleotide]))
    send_note(nucleotide_midi_map[nucleotide], 127)

del player
pygame.midi.quit()
```

All of this could be made much more interesting if I chose different instruments for different Nucleotides, or did more funky stuff with the Elektron. But for now, this should be enough. It is just a proof of concept.
Something to play around with.

## Going even further

As you probably noticed, the end results are quite similar to each other. This is to be expected because we are essentially operating with only 4 notes. What could make this more interesting is using something like [SuperCollider](https://supercollider.github.io/) to create more interesting sounds, by transposing notes or using effects based on repeated data in a sequence. The possibilities are endless.

It is really astonishing what can be achieved with a little bit of code and an idea. I could see this becoming an interesting background soundscape instrument if done properly. It could replace a random note generator with something more intriguing, biological, natural.

I actually find the results fascinating. I took some time and listened to this music of nature. Even though it's quite the same, it's also quite different. The subtle differences on repeat kind of create music of their own. Makes you wonder. It kind of puts Occam’s Razor in its place. Nature for sure loves to make things as energy efficient as possible.

diff --git a/_posts/posts/2022-10-06-state-of-web-technologies-in-year-2022.md b/_posts/posts/2022-10-06-state-of-web-technologies-in-year-2022.md new file mode 100644 index 0000000..e7c8d62 --- /dev/null +++ b/_posts/posts/2022-10-06-state-of-web-technologies-in-year-2022.md @@ -0,0 +1,297 @@

---
title: State of Web Technologies and Web development in year 2022
permalink: /state-of-web-technologies-and-web-development-in-year-2022.html
date: 2022-10-06T12:00:00+02:00
layout: post
type: post
draft: false
---

## Initial thoughts

*This post is a critique of the current state of web development. It is an opinionated post! I will learn more about this in the future, and probably slightly change my mind about some of the things I criticize.*

I started working on a hobby project about two weeks ago, and I wanted to use that situation as a learning one.
Trying new things, new technologies, new +tools. I always considered myself to be an adventurous person when it comes to +technology. I never shy away from trying new languages, new operating systems +etc. Likewise, I find the whole experience satisfying, and it tickles that part +of my brain that finds discovery the highest of the mountains to climb. + +What I always wanted to make was a coding game, that you would play in a browser +(just to eliminate building binaries for each operating system) where you would +level up your character and go into these scriptable battles. You know, RPG +elements. + +So, the natural way to go would be some sort of SPA (single page application) +with basic routing and some state management. Nothing crazy. + +> **Before we move on**, I have to be transparent. Take my views on this with +> a grain of salt. I have only scratched the surface with these technologies, +> and my knowledge is full of gaps. This is my experience using some of these +> products for the first time or in a limited capacity. + +Having this out of the way, I got myself a fresh pot of coffee and down the +rabbit hole I went. + +## Giving React JS a spin + +I first tried [React JS](https://reactjs.org/). I kind of like it. Furthermore, +I have worked with libraries like this in the past and also wrote a couple of +them (nothing compared to that level), but I had the basic understanding of what +was going on. I rolled up a project quickly and had basic things done in a +matter of two hours, which was impressive. + +I prefer using [Tailwind CSS](https://tailwindcss.com/) for my styling +pleasures, and integrating that was also a painless experience. It was actually +nice to see that some things got better with time. In about 2 minutes I got +Tailwind working, and I was able to use classes at my disposal. All that +`postcss` stuff was taken care of by adding a couple of things in config files +(all described really well in their documentation). 
It is not that different from Vue, which I have had more encounters with in the past. People will probably call me a lunatic for saying this. But you know, it is the truth. Same same, but different. I still believe that using libraries like this is beneficial. I am not a JavaScript purist. They all have their quirks, but at the end of the day, I truly believe it’s worth it.

## Bundlers and Transpilers

I still reject calling [TypeScript](https://www.typescriptlang.org/) to [JavaScript](https://www.javascript.com/) conversion a "compilation process". I call them [transpilers](https://devopedia.org/transpiler), and I don’t care! 😈

The first one that I ever used was [webpack](https://webpack.js.org/), and it was an absolutely horrific experience. That said, it is a fantastic tool. I just felt more like a config editor than a programmer. To be fair, I am a huge fan of [make](https://www.gnu.org/software/make/), and you can do as you wish with this information. I like my build systems simple.

Also, isn’t it interesting that we need something like [Babel](https://babeljs.io/) to make JavaScript code work in a browser that has only one client-side scripting language available, which is by no accident also JavaScript. Why? I know why it’s needed, but seriously, why.

I haven’t used Babel for years now. Or if I did, it was packaged together by some other bundler thingy. Which does not make things better, but at least I didn’t need to worry about it.

I really don’t like complicated build systems. I really don’t like abstracting code and making things appear magical. The older I get, the more I appreciate clear, clean, expressive code. No one-liners, if possible.

But I have to give props to [Vite](https://vitejs.dev/)! This was one of the best developer experiences I have ever had. Granted, it still has magical properties. And yes, it still is a bundler and abstracts things to the nth degree.
But at least it didn’t force me to configure 700 lines of JSON. And I +know that this makes me a hypocrite. You can’t have it all. Nonetheless, my +reasoning here is, if using bundlers is inevitable, then at least they should +provide an excellent developer experience. + +I also noticed that now the catch-all phrase is “blazingly fast” and “lightning +fast” and “next generation” and stuff like that. I mean, yeah, tools should get +faster with time. But saying that starting a project now takes 2 seconds instead +of 20 seconds is something that is a break it or make it kind of a deal is +ridiculous. I don’t mind waiting a couple of seconds every couple of days. I +also don’t create 700 projects every day, and also who does? This argument has +no bite. All I want is a decent reload time (~100ms is more than good enough for +me) and that is it. + +You don’t need to sell me benefits if I only get them when I start a fresh +project, and then try to convince me that this is somehow changing the fate of +the universe. First of all, it is not. And second, if this is your only argument +for your tool, I would advise you to maybe re-focus your efforts to something +else. Vite says that startup times are really fast. And if that would be the +only thing differentiating it from other tools, I would ignore it. But it has +some really compelling features like [Hot Module +Replacement](https://www.geeksforgeeks.org/reactjs-hot-module-replacement/) that +really works well. It was a joy to use. + +So, I will be definitely using Vite in the future. + +## Jam Stack, Mach Stack no snack + +Let's get a couple of the acronyms out of the way, so we all know what we are +talking about: + +- Jam Stack - JavaScript, API and Markup +- Mach Stack - Microservices, API-first, Cloud-Native SaaS, Headless + +It is so hard to follow all these new trendy things happening around you, that +it makes you have a massive **FOMO** all the time. 
But on the other hand, you +also don’t want to be that old fart that doesn’t move with the times and still +writes his trusty jQuery code while listening to Blink 182 All the small things +on full blast. It’s a good song, don’t get me wrong, but there are other songs +out there. + +I have to admit. [Vercel](https://vercel.com/) is really cool! Love the +simplicity of the service. You could compare it to +[Netlify](https://www.netlify.com/). I haven’t tried Netlify extensively, but +from a couple of experimental deployments I still prefer Vercel. It is much more +streamlined, but maybe this is bias in me. I really like Vercel’s Analytics, +which give you a [Core Web Vitals report](https://web.dev/vitals/) in their +admin console. Kind of cool, I’m not going to lie. + +This whole idea about frontend and backend merging into [SSR (server-side +rendering)](https://www.debugbear.com/blog/server-side-rendering) looks so good +on paper. It almost doesn’t come with any major flaws. + +But when it comes to the actual implementation, there is much to be desired. +I’m going to lump [Next.js](https://nextjs.org/) and +[Nuxt.js](https://nuxtjs.org/) together because they are essentially the same +thing, just a different library. + +Now comes the reality. Mixing backend and frontend in this manner creates this +weird mental model where you kind of rely on magical properties of these +libraries. You relinquish control over to them for better developer experience. +But is that really true? Initially, I was so stoked about it. However, the more +I used them, the more I felt uncomfortable. I felt dirty, actually. Maybe this +is because I come from old ways of doing things where you control every step of +request, and allowing something to hijack it feels like blasphemy. + +More than that, some pretty significant technical issues arose from this. How do +you do JWT token authentication? You put it in `api` folder and then do some +fetching and storing into local state management. 
But doing this also requires +some tinkering with await/async stuff on the React/Vue side of things. And then +you need to write middleware for it. And the more I look at it, the more I see +that this whole thing was not meant to be used like this, and it all feels and +looks like a huge hack. + +The issue I have with this is that they over-promise and under-deliver. They +want to be an all-in-one replacement for everything, and they don’t deliver on +this promise. And how could they?! We have to be fair. It is an impossible task. + +They sell you [NoOps](https://www.geeksforgeeks.org/overview-of-noops/), but +when you need to accomplish something a little bit more out of the scope of +Hello World, you have to make hacky decisions to make it work. And having a +deployment strategy that relies on many moving parts is never a good idea. +Abstracting too much is usually a sign of bad architecture. + +Lately, this has become a huge trend that will for sure bite us in the future. +And let’s not get it twisted. By doing this, PaaS providers like +[AWS](https://aws.amazon.com/), [GCS](https://cloud.google.com/), etc. obscure +their billing, and you end up paying more than you really should. And even if +that is not an issue, it comes down to the principle of things. AWS is known for +having multiple “currencies“ inside their projects like write operations, read +operations, etc. which add up, and it creates this impossible to track billing +scheme. It all behaves suspiciously like a pay-to-win game you could find on +mobile phones that scams you out of your money. + +And as far as I am concerned, the most important thing was me not coding the +functionalities for the game I want to make. I was battling libraries and cloud +providers. How to deploy, what settings are relevant. Bad documentation or +multiple versions of achieving the same thing. You are getting bombarded by all +this information, and you don’t really have any control over it. 
+Production-ready code becomes a joke, essentially. Especially if you tend to +work on that project for a prolonged period of time. + +All of these options end up creating a fatigue. What to choose, what not to +choose. Unnecessary worrying about if the stack will still be deemed worthy in +six months. There is elegance in simplicity. + +> JavaScript UI frameworks and libraries work in cycles. Every six months or +> so, a new one pops up, claiming that it has revolutionized UI development. +> Thousands of developers adopt it into their new projects, blog posts are +> written, Stack Overflow questions are asked and answered, and then a newer +> (and even more revolutionary) framework pops up to usurp the throne. +> — Ian Allen + +And this jab at these libraries and cloud providers is not done out of malice. +It is a real concern that I have about them. In my life, I have seen +technologies come and go, but the basics always stick around. So surrendering +all the power you have to a library or a cloud provider is in my opinion a +stupid move. + +## Tailwind CSS still rocks! + +You know, many people say negative things about Tailwind. And after a lot of +deliberation, I came to the conclusion that Tailwind is good for two types of +developers. Tailwind is good for a complete noob or a senior developer. A +complete noob doesn’t really care about inner workings of CSS, and a senior +developer also doesn’t care about CSS. Well, at least, not anymore. And +developers in between usually have the biggest issues with it. Not always of +course, but in a lot of cases. + +I like the creature comforts of Tailwind. Being utility first would make me +argue that it is actually more similar to [Sass](https://sass-lang.com/) or +[Less](https://lesscss.org/) than something like Bootstrap. Not technically, but +ideologically. After I started using it, I never looked back. I use it every +time I need to do something web related. 
+ +Writing CSS for general things feels like going several steps back. Instead of +focusing on what you are actually trying to achieve, you focus on notations like +[BEM](https://en.bem.info/methodology/css/), code structuring, optimizing HTML +size. Just doing things that make 0.1% difference. You know that saying: Early +optimization is the root of all evil. Exactly that. + +I am also not saying that Tailwind is the cure for everything. Sometimes custom +CSS is necessary. But from what I found out in using it for almost two years in +a production environment (on a site getting quite a lot of traffic and +constantly being changed), I can say without any reservations that Tailwind +saved our asses countless times. We would be rewriting CSS all the time without +it. And I don’t really think writing CSS is the best way to spend my time. + +I have also noticed that people who criticize Tailwind the most never actually +used it in a real project that has a long lifetime with plenty of changes that +will happen in the future. + +But you know, whatever floats your boat! + +## Code maintainability + +Somehow, people also stopped talking about maintenance. If you constantly try to +catch the latest and greatest train, you are by that logic always trying new +things. Which is a good thing if you want to learn about technologies and try +them. But for the production environment, you have to have a stable stack that +doesn’t change every 6 months. + +You can lock dependencies for sure. Nevertheless, the hype train moves along +anyway. And the mindset this breeds goes against locking the code. This +bleeding-edge rolling release cycle is not helping. That is why enterprise +solutions usually look down on these popular stacks and only do bare minimum to +appear hip and cool. + +With that said, I still think that progress is good, but should be taken with a +grain of salt. 
If your project is something that should be built once and then rarely updated, going with the latest stack is a possible way to go. But if you are working on a project that lasts for years, you should probably approach it with some level of caution. Web development is oftentimes too volatile.

## Web development has a marketing issue

I noticed that almost every project now has this marketing spin put on it. Everything is blazingly fast now. I get it, they are competing for your attention, but what happened to just being truthful and not inflating reality?

And in order to appeal to the mass market, they leave things out of their marketing materials. These open-source projects are now behaving more and more like companies do. Which is a scary thought in itself.

We are also seeing a rise in the concept of building a company in the open, which is a good thing, don't get me wrong. But when open-source is used to lure people in and then lock them into an ecosystem, that is where I have issues with it.

This might be because I have been using GNU/Linux for 20 years now and owe so much of my success to open-source, so I see issues when open-source is used to trick people into a false sense of security, into believing that these projects are built in the spirit of open-source. Because there is a difference. They are NOT! They have a really specific goal in mind, and open-source is being used as a delivery system. Which is, in my opinion, disgusting!

## Conclusion

I will end my post with this. Web development is now running in circles. People are discovering [RPC](https://www.tutorialspoint.com/remote-procedure-call-rpc) now, and this is the next big thing. [GraphQL](https://graphql.org/) is so passé. And I am so tired of it all. Of blazingly fast libraries, of all these new technologies that are actually just a remake of old ones. Of just the general spirit of the web. I will just use what I already know.
Which worked 10 years ago and will work 10 years after this. I will adopt a couple of little tools like Vite, but I will not waste my time on this anymore.

It was a good exercise to get in touch with what’s new now. Nothing really changed that much. FOMO is now cured! Now I have to get my ass back to actually coding and make the project that I wanted to make in the first place.

diff --git a/_posts/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md b/_posts/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md new file mode 100644 index 0000000..7b019e9 --- /dev/null +++ b/_posts/posts/2022-10-16-that-sound-that-machine-makes-when-struggling.md @@ -0,0 +1,67 @@

---
title: Microsoundtrack — That sound that machine makes when struggling
permalink: /that-sound-that-machine-makes-when-struggling.html
date: 2022-10-16T12:00:00+02:00
layout: post
type: post
draft: false
---

A couple of months ago, I got an idea about micro soundtracks. In this concept, you are the observer, director, and audience of these tiny movies.

What you do is attempt to imagine what would be happening around you based on the title of the song, and let the song help you fill the void in your story.

I made these songs in Logic Pro X. Every year or so I do this kind of thing and make a couple of songs like this, but this is the first time I am posting about it.

You can listen to the whole set on [Youtube](https://www.youtube.com/watch?v=_5oXBhSmF3c) or scroll down the page, where there are embedded players for each song.

## A bunch of inter-dimensional people with loud clocks

A group of inter-dimensional people are going up and down the elevator with you while wearing loud clocks around their necks. Each clock ticks at a different frequency. A lot of other sounds are getting drawn into your dimension, resulting in a strange merging of dimensions.
+ + + +## Two black holes conversing about the weather + +You are a traveler in a spaceship flying very close to two colliding black holes +having a discussion about the weather while tearing each other apart. During all +this your ship is getting pulled into the event horizon of both black holes, +putting a lot of strain on your spaceship. + + + +## A planet where every organism is a plant + +You land on a planet where every living organism is a plant and among those +plants some of them are highly intelligent, and you were asked to make first +contact with the native species. Your visit takes place in a giant cave where +you are meeting these plants, and they are talking to you. + + + +## Bio implants having a fit and reprogramming your brain + +In a distant future where everybody has bio implants, you have just received +your first one, which happens to be a brain implant. Something goes wrong, and +your implant is starting to misbehave, and you are experiencing brain +malfunctions. You are on the streets at night a couple of hours after your +procedure. You can feel your sanity breaking down. + + + +## Cow animation + +I also made this little cow animation. Go into full screen to see the effects in +more details. + + + diff --git a/_posts/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md b/_posts/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md new file mode 100644 index 0000000..ced58bb --- /dev/null +++ b/_posts/posts/2023-01-26-trying-to-build-a-new-kind-of-terminal-emulator.md @@ -0,0 +1,254 @@ +--- +title: Trying to build a New kind of terminal emulator for the modern age +permalink: /trying-to-build-a-new-kind-of-terminal-emulator.html +date: 2023-01-26T12:00:00+02:00 +layout: post +type: post +draft: false +--- + +Over the past few weeks, I have been really thinking about terminal emulators, +how we interact with computers, the separation of text-based programs and GUI +ones. 
To be perfectly honest, I got pissed off one evening when I was cleaning up files on my computer. Normally, I go into the console, run `ncdu`, and check where the junk is. Then I start deleting stuff. Without any discrimination, usually. But when it comes to screenshots, I have learned that it's good to keep them somewhere near in case I need to refer to something that I was doing. I am an avid screenshot taker. So at that point I checked the Pictures folder and also did a basic search, `find . -type f -name "*.jpg"`, for all the JPEG files in my home directory, and immediately got pissed off. Why can’t I see thumbnails in my terminal? I know why, but why is this still a problem in the year 2022? I am used to traversing my disk via the terminal. I am faster, and I am more comfortable this way. But when it comes to visualization, I then need to revert to GUI applications and find the same file again to see it. I know that programs like `feh` and `sxiv` are available, but I would just like to see the preview. Like a [Jupyter notebook](https://jupyter.org/) or something similar. Just having it inline. Part of a result.

It also didn’t help that I had been spending some time with the [Plan 9](https://plan9.io/plan9/) operating system, more specifically [9FRONT](http://9front.org/). The way the [ACME editor](http://acme.cat-v.org/) handles text editing is just wonderful. Different and fresh somehow, even though it’s super old.

So, I went on the lookout for an interesting way of visualizing the results of some query. I found these applications to be outstanding examples of how not to be a captive of a predetermined way of doing things.

- [Wolfram Mathematica](https://www.wolfram.com/mathematica/)
- [Jupyter notebooks](https://jupyter.org/)
- [Plan 9 / 9FRONT](http://www.9front.org)
- [Temple OS](https://templeos.org/)
- [Emacs](https://www.gnu.org/software/emacs/)

My idea is not as out there as ACME is, but it is a spin on terminal emulators.
I like the modes that Vi/Vim provides you with. I like the way Emacs does its
+own `M-x` and `M-c`. Furthermore, I really like how Mathematica and Jupyter
+present data in a free-flowing form. And I love how Temple OS is basically a C
+interpreter on some level.
+
+> **Note:** This is part 1 of the journey. Nowhere near finished yet. I am just
+> tinkering with this at the moment. This whole thing can easily fail
+> spectacularly.
+
+So I started. I knew that I wanted to have a couple of modes, but I didn’t
+like the repetition of keystrokes, so the only option was to have some sort of
+toggle and indicate to the user that they are in a special mode. Like Vi does
+for Normal and Visual mode.
+
+For the first version, these modes would be:
+
+- *Preview mode* (toggle with Ctrl + P)
+  - When this mode is enabled, the `ls` command tries to find images among the
+    results and displays thumbnails of them in the terminal itself.
+    No ASCII art. Proper images. In a grid!
+- *Detach mode* (toggle with Ctrl + D)
+  - When this mode is enabled, every command opens a new window and executes in
+    it. This would be useful for starting `htop` in a separate window.
+
+The reason for making these modes togglable is to not ask for previews every
+time. You enable a mode and until you disable it, it behaves that way. Purely
+for ergonomic reasons.
+
+Mentally, I would like to treat every terminal I open as a session. When I start
+using the terminal, I start digging deeper into the issue I am trying to
+resolve. And while I am doing this, I would like to open detached windows,
+etc. A lot of these things can be done easily with something like
+[i3](https://i3wm.org/), but those also pull you out of the context of what you
+were doing. I would like to orchestrate everything from one single point.
+
+In planning for this project, I knew that I would need to use a language like C
+and a library such as [SDL2](https://www.libsdl.org/) in order to achieve the
+desired results. I had considered other options, but ultimately determined that
+[SDL2](https://www.libsdl.org/) was the best fit based on its capabilities and
+reputation in the programming community.
+
+At first, I thought the idea of a hardware-accelerated terminal was a bit of a
+joke. It seemed like such a niche and unnecessary feature, especially given the
+fact that terminal emulators have been around for decades and have always relied
+on software rendering. But to be fair, [Alacritty](https://alacritty.org/) is
+doing the same thing. Well, they are doing a remarkable job at it.
+
+So, I embarked on a journey. Everything has to start somewhere. For me, it
+started with creating a window! 🙂
+
+```c
+// Oh, Hi Mark!
+// Create the window, obviously.
+SDL_Window *window = SDL_CreateWindow(
+    WINDOW_TITLE, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
+    WINDOW_WIDTH, WINDOW_HEIGHT,
+    SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
+```
+
+I continued like this to get some text displayed on the screen.
+
+I noticed that
+[`TTF_RenderText_Solid`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_Solid)
+rendered text really poorly. There was no antialiasing at all. In my wisdom, I
+never checked the documentation. Well, that was a fail. To the uneducated like
+me: `TTF_RenderText_Solid` renders Latin1 text at fast quality to a new 8-bit
+surface. So, that's why the text looked like shit. No wonder.
+
+Remarks on `TTF_RenderText_Solid`: This function will allocate a new 8-bit,
+palettized surface. The surface's 0 pixel will be the colorkey, giving a
+transparent background. The 1 pixel will be set to the text color.
+
+After I replaced it with
+[`TTF_RenderText_LCD`](https://wiki.libsdl.org/SDL_ttf/TTF_RenderText_LCD),
+which renders Latin1 text at LCD subpixel quality to a new ARGB surface, the
+text started looking good. Really make sure you read the documentation. It’s
+actually good. As a side note, you can find all the documentation regarding
+[SDL2 on their Wiki](https://wiki.libsdl.org/).
+
+After that was done, I started working on displaying other things like `Preview`
+and `Detach` modes. This wasn’t really that hard. In SDL2 you can check all the
+available events with `while (SDL_PollEvent(&event) > 0)` and have a bunch of
+switch statements to determine which key is currently being pressed. More about
+keys at [SDLKey](https://documentation.help/SDL/sdlkey.html) and more about
+polling the events at
+[SDL_PollEvent](https://documentation.help/SDL/sdlpollevent.html).
+
+```c
+while (SDL_PollEvent(&event) > 0)
+{
+    switch (event.type)
+    {
+    case SDL_QUIT:
+        running = false;
+        break;
+
+    case SDL_TEXTINPUT:
+        if (!meta_key_pressed)
+        {
+            // Append the whole UTF-8 text chunk, bounded by the buffer size
+            // (input_prompt_text is a fixed-size char array).
+            strncat(input_prompt_text, event.text.text,
+                    sizeof(input_prompt_text) - strlen(input_prompt_text) - 1);
+            update_input_prompt = true;
+        }
+        break;
+    }
+}
+```
+
+After that was somewhat working correctly, I started creating a struct that
+holds all the commands and results, and I call them Cells. Yes, I stole that
+naming idea from Jupyter.
+
+```c
+typedef struct
+{
+    char *command;
+    char *result;
+    SDL_Surface *surface;
+    SDL_Texture *texture;
+    SDL_Rect rect;
+} Cell;
+```
+
+I am at a place now where I am starting to implement scrolling. This will for
+sure be fun to code. Memory management in C is super easy. 😂
+
+I have also added simple [INI-file-like
+configuration](https://en.wikipedia.org/wiki/INI_file) support. It is done in an
+[STB style of
+header](https://github.com/nothings/stb/blob/master/docs/stb_howto.txt) and maps
+to specific options supported by the terminal. It is not universal, and the code
+below demonstrates how I will use it in the future.
+
+```c
+#ifndef CONFIG_H
+#define CONFIG_H
+
+#include <stdio.h>
+#include <string.h>
+
+/*
+# This is a comment
+
+# This is the first configuration option
+dettach=value11111
+
+# This is the second configuration option
+preview=value22222
+
+# This is the third configuration option
+debug=value33333
+*/
+
+// Define a struct to hold the configuration options
+typedef struct
+{
+    char dettach[256];
+    char preview[256];
+    char debug[256];
+} Config;
+
+// Read the configuration file and return the options as a struct.
+// Static, so including this header from multiple files does not
+// cause multiple-definition errors.
+static Config read_config_file(const char *filename)
+{
+    // Create a struct to hold the configuration options
+    Config config = {0};
+
+    // Open the configuration file
+    FILE *file = fopen(filename, "r");
+    if (file == NULL)
+        return config;
+
+    // Read each line from the file
+    char line[256];
+    while (fgets(line, sizeof(line), file))
+    {
+        // Check if this line is a comment or empty
+        if (line[0] == '#' || line[0] == '\n')
+            continue;
+
+        // Parse the line to get the option and value
+        char option[128], value[128];
+        if (sscanf(line, "%127[^=]=%127s", option, value) != 2)
+            continue;
+
+        // Set the value of the appropriate option in the config struct
+        // (the buffers are zeroed above, so they stay NUL-terminated)
+        if (strcmp(option, "dettach") == 0)
+        {
+            strncpy(config.dettach, value, sizeof(config.dettach) - 1);
+        }
+        else if (strcmp(option, "preview") == 0)
+        {
+            strncpy(config.preview, value, sizeof(config.preview) - 1);
+        }
+        else if (strcmp(option, "debug") == 0)
+        {
+            strncpy(config.debug, value, sizeof(config.debug) - 1);
+        }
+    }
+
+    // Close the configuration file
+    fclose(file);
+
+    // Return the configuration options
+    return config;
+}
+
+#endif
+```
+
+This is as far as I managed to get for now. I have a daily job, which prevents
+me from working on these things full time. But I should probably get back and
+finish this. At least get a simple version working, so I can start testing it on
+my machines. Fingers crossed.
🕵️‍♂️
+
diff --git a/_posts/posts/2023-05-16-rekindling-my-love-for-programming.md b/_posts/posts/2023-05-16-rekindling-my-love-for-programming.md
new file mode 100644
index 0000000..dc5344f
--- /dev/null
+++ b/_posts/posts/2023-05-16-rekindling-my-love-for-programming.md
@@ -0,0 +1,75 @@
+---
+title: Rekindling my love for programming and enjoying the act of creating
+permalink: /rekindling-my-love-for-programming.html
+date: 2023-05-16T12:00:00+02:00
+layout: post
+type: post
+draft: false
+---
+
+Programming can be a challenging and rewarding experience, but sometimes it's
+easy to feel burnt out or disinterested. I had lost my passion for coding over
+the past couple of months, and it looked like I would never enjoy coding as much
+as I used to.
+
+I was feeling burnt out with programming. I thought taking a break from it and
+focusing on other activities that I enjoy might be helpful. This way, I could
+come back to programming with a fresh perspective and renewed energy. I also
+thought about learning a new programming language or technology to keep things
+interesting and challenging.
+
+However, what I didn't realize was that learning a new language or technology
+wasn't going to solve the underlying issue. I needed to take a step back and
+re-evaluate why I had lost my passion for programming in the first place. This
+involved taking a deep look at what I was doing that resulted in this rut.
+
+Sometimes, it's easy to get caught up in the hype of new technologies or
+languages, and we can feel like we're missing out if we're not constantly
+learning and experimenting. However, it's important to remember that the latest
+and greatest isn't always the best fit for our projects or our
+interests. Instead of constantly chasing the next big thing, it can be helpful
+to focus on what truly interests us and what we're passionate about. This can
+help us stay motivated and engaged with our work, rather than feeling like we're
+just going through the motions.
+
+I expressed that I had lost my passion for coding over the past couple of
+months, and I realized that the reason behind it was my tendency to spread
+myself too thin and not focus on completing interesting projects. In order to
+regain my passion for coding, I need to focus on projects that truly interest me
+and give me a sense of purpose and motivation.
+
+Recently, I have been playing World of Warcraft more frequently and have become
+interested in developing addons for the game.
+
+This quickly resulted in me creating three quality-of-life addons, and I
+subsequently developed a more useful addon that encapsulates all the others I
+made.
+
+I found it interesting that this sparked a new interest in me. Additionally, I
+discovered the Lua language, which reminded me that coding should be fun rather
+than just a struggle with a language. It should be pure, unadulterated fun.
+
+I wasn't fighting the syntax, nor was I focused on finding the most optimal
+solution. I simply created things without the pressure of making them the best
+they could possibly be.
+
+This made me realize that I actually adore simple languages that get out of the
+way and let you express what you want to do. It forced me to rethink a lot about
+what I use and what I actually enjoy.
+
+I have decided to stick to the basics. For a scripting language, I will use
+Lua. For networking, I will use Golang. And for any special needs, I will rely
+on C. I do not require Rust, Nim, or Zig. This selection is more than sufficient
+for my needs. I have to stay true to this simplicity. There is something to
+Occam's razor.
+
+I've been struggling with a lack of creativity lately, but now I'm experiencing
+a real change. I realized I needed to take a step back and stop actively trying
+to address the issue. I needed to stop worrying and overthinking it. I simply
+needed some time. Looking back, I don't think I've taken any significant time
+off in the last 10 years.
+
+Suddenly, I find myself with the energy and passion to complete multiple small
+projects. It doesn't feel like a chore at all. Who knew I needed WoW to
+kickstart everything? Inspiration really does come from the strangest places.
diff --git a/_posts/posts/2023-05-23-i-was-wrong-about-git-workflows.md b/_posts/posts/2023-05-23-i-was-wrong-about-git-workflows.md
new file mode 100644
index 0000000..57d887c
--- /dev/null
+++ b/_posts/posts/2023-05-23-i-was-wrong-about-git-workflows.md
@@ -0,0 +1,72 @@
+---
+title: I think I was completely wrong about Git workflows
+permalink: /i-was-wrong-about-git-workflows.html
+date: 2023-05-23T12:00:00+02:00
+layout: post
+type: post
+draft: false
+tags: []
+---
+
+I have been using some approximation of [Git
+Flow](https://jeffkreeftmeijer.com/git-flow/) for years now and never really
+questioned it, to be honest. When I create a repo, I create a develop branch,
+set it as the default one, and then merge to master from there. Seems reasonable
+enough.
+
+One thing that I have learned is that long-living branches are the devil. They
+always end up making a huge mess when they eventually need to be merged into
+master. By that reasoning, what is the develop branch if not the longest-living
+feature branch? And in my personal experience there was never a situation where
+I wasn’t sweating bullets when I had to merge develop back to master.
+
+This realisation started to give me pause. So why the hell am I doing this, and
+is there a better way? Well, the solution was always there. And it comes in the
+form of [git tags](https://git-scm.com/book/en/v2/Git-Basics-Tagging).
+
+So what are git tags? Git tags are references to specific points in a Git
+repository's history. They are used to mark important milestones, such as
+releases or significant commits, making it easier to identify and access
+specific versions of a project.
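As a quick sketch of how that looks in practice (a throwaway repository, with a made-up `v1.0.0` tag name):

```shell
#!/bin/sh
# Create a throwaway repo, mark a "release" with an annotated tag,
# and check that exact state out again later.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=me@example.com -c user.name=demo \
    commit -q --allow-empty -m "work merged to master"

# Pin the production-ready state with an annotated tag.
git -c user.email=me@example.com -c user.name=demo \
    tag -a v1.0.0 -m "First production release"

# A tag can be checked out much like a branch (detached HEAD).
git checkout -q v1.0.0
git tag --list
```

The `-a` makes it an annotated tag, which carries a message and a tagger, exactly the kind of "fix point" a release wants.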
+
+Somehow we have all hijacked the meaning of the master branch into being the
+most releasable version of the code. And this is also where the confusion about
+versioning software kicks in. Because the master branch implicitly says that we
+are dealing with a rolling-release type of software. And by having a develop
+branch we are hacking around this confusion. With a separation of develop and
+master we lock functionalities into place and force a stable vs. development
+version of the software.
+
+But if that is true, and long-living branches are the devil, then why have
+develop at all? I think that most of this comes down to how continuous
+integration is being done. There usually is no granular access to tags, and CD
+software deploys what is present on a specific branch, be that master for
+production and develop for staging. This is a gross simplification, but by
+having this in place we have completely removed tagging as a viable way to
+create a fixed point in the software cycle that says: this is the
+production-ready code.
+
+One cool thing about tags is that you can check out a specific tag. So they
+behave very similarly to branches in that regard. And you don’t have the
+overhead of having two mainstream branches.
+
+So what is the solution? One approach is to use a workflow where all changes are
+made on smaller branches and continuously merged into master. When the software
+is ready to be pushed to production, you tag the master branch. This approach
+eliminates the need for long-lived branches and simplifies the development
+process. It also encourages developers to make small, incremental changes that
+can be tested and deployed quickly. However, this approach may not be suitable
+for all projects, or for teams that rely heavily on automated deployment based
+on branch names only.
+
+This also requires that developers always keep production in mind. No more
+living on an island of the develop branch.
All your actions and code need to be ready to meet production standards on a
+much smaller timescale.
+
+I think that we have complicated the workflow in an honest attempt to make
+things more streamlined, but in the process we have inadvertently made our lives
+much more complicated.
+
+In conclusion, it's important to re-evaluate our workflows from time to time to
+see if they still make sense and if there are better alternatives available.
+Long-living branches can be problematic, and using tags to mark important
+milestones can simplify the development process.
+
diff --git a/_posts/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md b/_posts/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md
new file mode 100644
index 0000000..c595905
--- /dev/null
+++ b/_posts/posts/2023-05-31-re-inventing-task-runner-that-i-actually-used-daily.md
@@ -0,0 +1,160 @@
+---
+title: "Re-Inventing Task Runner That I Actually Used Daily"
+permalink: /re-inventing-task-runner-that-i-actually-used-daily.html
+date: 2023-05-31T12:21:10+02:00
+layout: post
+type: post
+draft: false
+---
+
+A couple of months ago I had this brilliant idea of re-inventing the wheel by
+making an alternative to make. And so I went. Boldly into battle. And to my big
+surprise, my attempt resulted in a not completely useless piece of software.
+
+My initial requirements were quite simple but soon grew into something more
+ambitious. Looking back, I should have stuck to the simple version. My laziness
+was on my side this time, though. Because I didn't implement some of the
+features, I now realise I never really needed them, and they would have bogged
+down the whole program and made it something it was never meant to be.
+
+My basic requirements were the following:
+
+- Syntax should be a tiny bit inspired by Rake and Rakefiles.
+- Should borrow the overall feel of a unit test experience.
+- Using something like Python would be a bit of an overkill.
+- The program must be statically compiled, so it can run on the same
+  architecture without libc, musl, or similar dependencies.
+- Installing Ruby for Rake is a bit of an overkill and cannot be done on certain
+  really lightweight distributions like Alpine Linux. This tool would be usable
+  on such lightweight systems for remote debugging.
+- I want to use it for more than just compiling things. I want to use it as an
+  entry-point into a project, and I want this to help me indirectly document the
+  project as well.
+- It should be an abstraction over the bash shell or the default system shell.
+  - Each task essentially becomes its own shell instance.
+- Must work on Linux and macOS systems.
+- By default, running `erd` lists all the available tasks (when I use make, I
+  usually put a disclaimer that you should check the Makefile to see all
+  available targets).
+- Should support passing arguments when you run it from a shell.
+- Normal variables are the same as environment variables. There is no
+  distinction. Every variable is also essentially an environment variable and
+  can be used by other programs.
+- State between tasks is not shared, which makes these “pure” shell instances.
+- Should be single-threaded at the start and later expanded with a `@spawn`
+  command.
+- Variables behave like macros and are preprocessed before evaluation.
+- Should support something like `assure` that would check if programs like a C
+  compiler or Python (whatever the project requires) are installed on the
+  machine.
+
+Quite a reasonable list of requirements. I already do these things in my
+Makefiles and/or Bash scripts. But I would like to avoid repeating myself every
+time I start working on something new.
+
+So I started with the following syntax.
+
+```ruby
+@env on
+
+# Override the default shell.
+@shell /bin/bash
+
+# Assure that programs are installed.
+@assure docker-compose pip python3
+
+# Load local dotenv files (these are then globally available).
+@dotenv .env
+@dotenv .env.sample
+@dotenv some_other_file
+
+# These are local variables but still accessible in tasks.
+@var HI = "hey"
+@var TOKEN = "sometoken"
+@var EMAIL = "m@m.com"
+@var PASSWORD = "pass"
+@var EDITOR = "vim"
+
+@task dev "Test chars .:'}{]!//" does
+    echo "..." $HI
+end
+
+@task clean "Cleans the obj files" does
+    rm .obj
+end
+
+@task greet "Greets the user" does
+    echo "Hi user $TOKEN or $WINDOWID $EMAIL"
+end
+
+@task stack "Starts Docker stack" does
+    docker-compose -f stack.yml up
+end
+
+@task todo "Shows all todos in source files and count them" does
+    grep -ir "TODO|FIXME" . | wc -l
+end
+
+@task test1 "For testing 1" does
+    unknown-command
+    echo "test1"
+    ls -lha
+end
+
+@task test2 "For testing 2" does
+    echo "test1"
+    ls -lha
+    docker-compose -f samples/stack.yml up
+end
+```
+
+One thing that I really like about Errand. Yes, that is what it is called. And
+it is available at https://git.mitjafelicijan.com/errand.git/about/. Moving
+on. One thing that I really like is that a task is a persistent shell. By that I
+mean that the whole task runs in one shell, even if it contains multiple
+commands. In make, each line in a target is its own shell, and you need to
+combine lines or add `\` at the end of the line.
+
+```bash
+# How you do these things in make: join the lines so they
+# run in a single shell.
+target:
+	source .venv/bin/activate && \
+	python script.py
+```
+
+Errand solves this problem. Consider each task, and everything executed in that
+task, a shell that only closes when all the commands in it have completed.
+
+By self-documenting I mean that if you are in a directory with an `Errandfile`
+in it and you just type `erd` and press enter, it by default displays all the
+possible tasks. In make, I was doing this by having the first target be
+something like `default` that echoes the message “Check Makefile for all
+available targets.” Because all tasks in Errand require a message, I use that to
+display a, let's call it, table of contents.
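The persistent-shell behaviour is easy to demonstrate with plain `sh` (a sketch of the two execution models, not Errand itself):

```shell
#!/bin/sh
# Make-style execution: each line gets a fresh shell, so state is lost
# between lines.
sh -c 'cd /tmp'
sh -c 'pwd'              # prints the directory we started in, not /tmp

# Errand-style execution: the whole task body runs in one shell,
# so the cd carries over to the next command.
sh -c 'cd /tmp && pwd'   # prints /tmp
```

The same applies to environment variables, activated virtualenvs, and anything else that lives in shell state.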
+
+Because I don’t use any external dependencies, this whole thing can be
+statically compiled. So that also checked one of the boxes.
+
+It works on Linux and on a Mac, so that’s also a bonus. I don’t believe this
+would work on Windows machines because of the way I use shell instances. But you
+could use something like Windows Subsystem for Linux and run it in there. That
+is a valid option.
+
+To finish this essay off, how was it to use it in “real life”? I have to be
+honest. Some of the missing features still bother me. The `@dotenv` directive is
+still missing, and I need to implement it ASAP.
+
+Another thing that needs to happen is support for streaming output. Currently,
+commands like `docker-compose` that run in foreground mode are not compatible
+with Errand. So commands that stream output are an issue. I need to revisit how
+I initiate the shell and how I read stdout and stderr. But that shouldn’t be a
+problem.
+
+I have been very satisfied with this thing. I am pleasantly surprised by how
+useful it is. I really wanted to test this in the wild before committing to
+it. I have more abandoned projects than Google, and it’s bringing massive shame
+to my family at this point. So I wanted to be sure that this is even useful. And
+it actually is. Quite surprised at myself.
+
+I really need to package this now and write proper docs. And maybe rewrite the
+tokeniser. It's atrocious right now. A sight to behold! But that is an issue for
+another time.
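As a footnote on the streaming-output problem mentioned above: the shape of the fix is the usual pipe-and-read loop, sketched here in shell (`run_task` is a made-up stand-in for a real task body like `docker-compose`):

```shell
#!/bin/sh
# Forward a long-running command's output line by line as it arrives,
# instead of buffering everything until the process exits.
run_task() {
    for n in 1 2 3; do
        echo "service log line $n"
    done
}

run_task | while IFS= read -r line; do
    printf 'task> %s\n' "$line"
done
```

Whatever language Errand itself is written in, the analogous move is reading the child's stdout/stderr pipes incrementally rather than waiting for exit.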
diff --git a/_posts/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md b/_posts/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md
new file mode 100644
index 0000000..4bc45ce
--- /dev/null
+++ b/_posts/posts/2023-07-01-bringing-all-of-my-projects-together-under-one-umbrella.md
@@ -0,0 +1,282 @@
+---
+title: "Bringing all of my projects together under one umbrella"
+permalink: /bringing-all-of-my-projects-together-under-one-umbrella.html
+date: 2023-07-01T18:49:07+02:00
+layout: post
+type: post
+draft: false
+---
+
+## What is the issue anyway?
+
+Over the years, I have accumulated a bunch of virtual servers on my
+[DigitalOcean](https://www.digitalocean.com/) account for small experimental
+projects I dabble in. And this has resulted in quite a bill. I mean, I wouldn't
+care if these projects were actually being used. But they were just sitting
+there unused, wasting resources, which made them an unnecessary burden for me.
+
+Most of them are just small HTML pages that have an endpoint or two to read data
+from or write data to, and for that reason I wrote servers left and right. To be
+honest, all of those things could have been done with [CGI
+scripts](https://en.wikipedia.org/wiki/Common_Gateway_Interface) and that would
+have been more than enough.
+
+Recently, I decided to stop language hopping and focus on a simpler stack
+consisting of C, Go and Lua. With it, I can accomplish all the things I am
+interested in.
+
+## Finding a web server replacement
+
+Usually I had [Nginx](https://nginx.org/en/) in front of these small web servers
+and I had to manage SSL certificates and all that jazz. I am bored with these
+things. I don't want to manage any of this bullshit anymore.
+
+So the logical move forward was to find a solid alternative for this. I ended up
+with the [Caddy server](https://caddyserver.com/). I've used it in the past but
+kind of forgotten about it.
What I really like about it is its ease of use and the bunch of out-of-the-box
+functionality that comes with it.
+
+These are the _pitch_ points from their website:
+
+- **Secure by Default**: Caddy is the only web server that uses HTTPS by
+  default. A hardened TLS stack with modern protocols preserves privacy and
+  exposes MITM attacks.
+- **Config API**: As its primary mode of configuration, Caddy's REST API makes
+  it easy to automate and integrate with your apps.
+- **No Dependencies**: Because Caddy is written in Go, its binaries are entirely
+  self-contained and run on every platform, including containers without libc.
+- **Modular Stack**: Take back control over your compute edge. Caddy can be
+  extended with everything you need using plugins.
+
+I had just a few requirements:
+
+- Automatic SSL
+- Static file server
+- Basic authentication
+- CGI script support
+
+And the vanilla version does all of it except CGI scripts. But that can easily
+be fixed with their modular approach. You can do this on their website and build
+a custom version of the server, or do it with Docker.
+
+This is the `Dockerfile` I used to build a custom server.
+
+```Dockerfile
+FROM caddy:builder AS builder
+
+RUN xcaddy build \
+    --with github.com/aksdb/caddy-cgi
+
+FROM caddy:latest
+RUN apk add --no-cache nano
+
+COPY --from=builder /usr/bin/caddy /usr/bin/caddy
+```
+
+## Getting rid of all the unnecessary virtual machines
+
+The next step was to get a handle on the number of virtual servers I have all
+over the place.
+
+I decided to move all the projects and services into two main VMs:
+
+- personal server (still Nginx)
+  - git server
+  - static file server
+  - personal blog
+- projects server (Caddy server)
+  - personal experiments
+  - other projects
+
+I will focus on the projects server in this post since it's more interesting.
+
+## Testing CGI scripts
+
+The first thing I tested was how CGI scripts work under Caddy.
This is particularly important to me because almost all of my experiments and
+mini projects need this to work.
+
+To configure the Caddy server, you must provide it with a configuration
+file. By default, it's called `Caddyfile`.
+
+```caddyfile
+{
+    order cgi before respond
+}
+
+examples.mitjafelicijan.com {
+    cgi /bash-test /opt/projects/examples/bash-test.sh
+    cgi /tcl-test /opt/projects/examples/tcl-test.tcl
+    cgi /lua-test /opt/projects/examples/lua-test.lua
+    cgi /python-test /opt/projects/examples/python-test.py
+
+    root * /opt/projects/examples
+    file_server
+}
+```
+
+- The order is very important. Make sure that `order cgi before respond` is at
+  the top of the configuration file.
+- Also, when you run Caddy v2, make sure you provide the `--adapter` argument
+  like this: `/usr/bin/caddy run --watch --environ --config /etc/caddy/Caddyfile
+  --adapter caddyfile`. Otherwise, Caddy will try to use a different format for
+  the config file.
+
+I did a small batch of tests with [Bash](https://www.gnu.org/software/bash/),
+[Tcl](https://www.tcl-lang.org/), [Lua](https://www.lua.org/) and
+[Python](https://www.python.org/). Here is a cheat sheet if you need it.
+
+Let's get Bash out of the way first.
+
+```bash
+#!/usr/bin/bash
+
+printf "Content-type: text/plain\n\n"
+
+printf "Hello from Bash\n\n"
+printf "PATH_INFO [%s]\n" $PATH_INFO
+printf "QUERY_STRING [%s]\n" $QUERY_STRING
+printf "\n"
+
+for i in {0..9..1}; do
+    printf "> %s\n" $i
+done
+
+exit 0
+```
+
+This one is for a Tcl script.
+
+```tcl
+#!/usr/bin/tclsh
+
+puts "Content-type: text/plain\n"
+
+puts "Hello from Tcl\n"
+puts "PATH_INFO \[$env(PATH_INFO)\]"
+puts "QUERY_STRING \[$env(QUERY_STRING)\]"
+puts ""
+
+for {set i 0} {$i < 10} {incr i} {
+    puts "> $i"
+}
+```
+
+And for all you Python enjoyers.
+
+```python
+#!/usr/bin/python3
+
+import os
+
+print("Content-type: text/plain\n")
+
+print("Hello from Python\n")
+print("PATH_INFO [{}]".format(os.environ['PATH_INFO']))
+print("QUERY_STRING [{}]".format(os.environ['QUERY_STRING']))
+print("")
+
+for i in range(10):
+    print("> {}".format(i))
+```
+
+And for the final example, Lua.
+
+```lua
+#!/usr/bin/lua
+
+print("Content-type: text/plain\n")
+
+print("Hello from Lua\n")
+print(string.format("PATH_INFO [%s]", os.getenv("PATH_INFO")))
+print(string.format("QUERY_STRING [%s]", os.getenv("QUERY_STRING")))
+print()
+
+for i = 0, 9 do
+    print(string.format("> %d", i))
+end
+```
+
+## Basic authentication
+
+I also wanted an option for some sort of authentication, and something like
+[Basic access
+authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) would
+be more than enough.
+
+Thankfully, Caddy supports this out of the box already. Below is an updated
+example.
+
+```Caddyfile
+{
+    order cgi before respond
+}
+
+examples.mitjafelicijan.com {
+    cgi /bash-test /opt/projects/examples/bash-test.sh
+    cgi /tcl-test /opt/projects/examples/tcl-test.tcl
+    cgi /lua-test /opt/projects/examples/lua-test.lua
+    cgi /python-test /opt/projects/examples/python-test.py
+
+    root * /opt/projects/examples
+    file_server
+
+    basicauth * {
+        bob $2a$14$/wCgaf9oMnmQa20txB76u.nI1AldGMBT/1J7fXCfgOiRShwz/JOkK
+    }
+}
+```
+
+`basicauth *` matches everything under this domain/sub-domain and protects it
+with Basic Authentication.
+
+- `bob` is the username
+- the long `$2a$14$...` string is the bcrypt hash of the password
+
+To generate these hashes, execute `caddy hash-password`; this will prompt you to
+insert a password twice and spit out a hashed password that you can put in your
+configuration file.
+
+Restart the server and you are ready to go.
+
+## Making Caddy a service with systemd
+
+After the tests were successful, I copied `caddy` to `/usr/bin/caddy` and copied
+`Caddyfile` to `/etc/caddy/Caddyfile`.
+
+Now off to systemd.
Each systemd service requires you to create a service file.
+
+- I created `/etc/systemd/system/caddy.service` and put the following content in
+  the file.
+
+```systemd
+[Unit]
+Description=Caddy
+Documentation=https://caddyserver.com/docs/
+After=network.target network-online.target
+Requires=network-online.target
+
+[Service]
+Type=notify
+User=root
+Group=root
+ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile --adapter caddyfile
+ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force --adapter caddyfile
+TimeoutStopSec=5s
+LimitNOFILE=1048576
+LimitNPROC=512
+PrivateTmp=true
+ProtectSystem=full
+AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
+
+[Install]
+WantedBy=multi-user.target
+```
+
+- You might need to reload systemd with `systemctl daemon-reload`.
+- Then I enabled the service with `systemctl enable caddy.service`.
+- And then I started the service with `systemctl start caddy.service`.
+
+This was about all that I needed to do to get it running. Now I can easily add
+new subdomains and domains to the main configuration file and be done with
+it. No manual Let's Encrypt shenanigans needed.
diff --git a/_posts/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md b/_posts/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md
new file mode 100644
index 0000000..c7d52d5
--- /dev/null
+++ b/_posts/posts/2023-07-08-who-knows-what-the-world-will-look-like-tomorrow.md
@@ -0,0 +1,101 @@
+---
+title: "Who knows what the world will look like tomorrow"
+permalink: /who-knows-what-the-world-will-look-like-tomorrow.html
+date: 2023-07-08T18:49:07+02:00
+layout: post
+type: post
+draft: false
+---
+
+This site has gone through a lot of changes over the years. From being written
+in Flask and Bottle to moving on to static site generators. I have used and
+tested probably tens of them by now. From homebrew solutions to the biggest and
+the baddest. From Bash scripts to Node.js disasters.
I've seen some things, no doubt. Not all bad.
+
+I have been closely observing the web and where the trends are going, and I
+don't like what I see. Instead of the internet being this weird place where
+experimentation happens, it has all become stale and formulaic. Boring,
+actually. Really boring. And sad. Where is that old, revolutionary FU spirit I
+remember? It's still there, I know. But it's being drowned by the voices of
+mediocrity and formulaic boredom.
+
+It almost feels like the internet stopped for 10 years and only now has
+something started happening. With all the insanity around the world. People
+hating people without actual reason, just because it's fashionable to hate and
+the crowd says so. Sad state of affairs.
+
+All this is contributing to this overall negativity masked as apathy.
+Everybody walking in lockstep. Instead of being creative and bold, we are just
+re-inventing the world and making the same mistakes. Maybe, just maybe, some
+things are good enough and there is no need to try to be too smart for our own
+good. After N attempts, something should click inside our heads and make us
+say: "This thing, opinion, etc. is actually really good, and even after
+several attempts it still holds."
+
+The older I get, the more careful I am about my own thoughts and why I think
+the way I think. More and more, I try to understand people with opposite
+opinions. Far from perfect, but closer to bearable. And then I see people
+hearing or reading a thing on the internet and: let's fucking goooooo! Strong
+opinions are a sign of a weak and uneducated mind. I am more and more sure of
+this.
+
+It's gotten to a point where you can with great certainty deduce a person's
+personality based on one or two opinions. How boring we have become. No wonder
+people can't talk to each other. These would be very quick conversations
+anyway.
+
+I was just reminded of a song, ["Hi
+Ren"](https://www.youtube.com/watch?v=s_nc1IVoMxc).
The ending talks about being stiff and not being able to dance. Such an
+amazing metaphor. And we as people have gone so far that we can't even walk,
+or even crawl, normally anymore. We have forgotten that the most beautiful
+things in life have a great deal of uncertainty about them. We want instant
+gratification. Not only that, but we want absolute obedience. Complete control
+over others, because we have zero control of ourselves. And all the lies we
+could tell ourselves will not help us out of this situation.
+
+It is funny how I catch myself from time to time being a complete idiot. It's
+like having an out-of-body experience. I can see myself being an idiot, and
+cannot stop myself. It serves as a lesson to stop before speaking. To think
+before saying anything. And to crawl before walking.
+
+So there is still time. We can dance once more. All we need to do is stop for
+a second. Me and you. The two of us are a start. Let's not try to change the
+world, but rather nudge ourselves just a tiny bit. And if we only did that?!
+Just imagine. If each of us nudged ourselves a small, tiny bit, the world
+would heal. If we just put down our phones and ignored the internet for a day
+or two. Put visiting websites that feed on us on hold. Listened to just one
+sentence from a person we completely disagree with and tried to understand it.
+I truly believe that this is possible.
+
+Life is about suffering and joy. And instead of wishing suffering on others
+and expecting joy for ourselves, we should for a brief moment want suffering
+for ourselves and wish joy on others. Wouldn't that be an amazing sight to
+see?
+
+I caught myself hating on Rust. And I thought deeply about it afterward. Why
+did I do it? It is obviously not for me. So why the hell was I being so
+negative towards it? I think that I know the answer. I was negative because
+that is easy. Because it's much easier to hate on things than to say to
+yourself: "Well, you know what? This is not for me.
I will focus on creation and not destruction. This is who I want to be. This
+is what fills me with joy and purpose." Where joy keeps me happy, and purpose
+scares the shit out of me and keeps me honest. This is who I want to be. To
+admit to myself when I am wrong, to accept the faults that I have without
+reservation, and to march on with courage.
+
+I just realized that this blog post is a sort of therapy for me. It's
+cathartic. Going through the history of this site and remembering all the
+decisions and annoyances that came with it. When I was cursing at the tools.
+And time moved on, and the site is still here. It serves as a reminder that
+perseverance wins in the end. If we just let things go.
+
+This came with a decision that simplifying life and removing all the
+unnecessary negativity is key. Rather than worrying about what the internet is
+saying, or what the world is trying to take from you, you are the only one who
+can say no. And create instead of destroy.
+
+I don't have an ending for this post, so I will say this. We live in the most
+amazing times in recorded history, and we should be eternally grateful for it.
+Create and study, this should be my mantra. Just create and let the world
+happen. And when you feel yourself getting too certain, stop and check how
+deep in the shit you already are. Strong opinions are a sign of a weak and
+uneducated mind. Hate and disdain are for the weak.
diff --git a/_posts/posts/2023-11-05-elitist-attitudes-are-sapping-the-fun-from-programming.md b/_posts/posts/2023-11-05-elitist-attitudes-are-sapping-the-fun-from-programming.md
new file mode 100644
index 0000000..ccee72b
--- /dev/null
+++ b/_posts/posts/2023-11-05-elitist-attitudes-are-sapping-the-fun-from-programming.md
@@ -0,0 +1,97 @@
+---
+title: "Elitist attitudes are sapping all the fun from programming"
+permalink: /elitist-attitudes-are-sapping-all-the-fun-from-programming.html
+date: 2023-11-05T09:04:28+02:00
+layout: post
+type: post
+draft: false
+---
+
+It's always been like that. Maybe it was even worse before, and I am
+remembering it through rose-tinted glasses. But as best as I can remember, it
+at least had some roots in reality. If something was objectively bad, you
+could point to it. But what I have started noticing recently is that
+objectivity is no longer a precondition for bashing something. More and more,
+you can use subjective opinion to say horrible things about a technology, a
+language or just a specific manufacturer.
+
+And all this has achieved is that I don't really listen to anybody anymore. I
+don't care what you think about X or Y. I don't care if you like this language
+or that one. I don't care if you prefer a Dell or a ThinkPad over a MacBook.
+Who gives a fuck, anyway? If you can do your job on it, why even care about
+this stuff at all? And if you can't, buy a different machine.
+
+It's like politics wasn't enough. Now the same tribalism is here as well. C
+developers hating on Rust. JavaScript developers laughing at jQuery users.
+Rust developers laughing at everybody except Haskell users. And everybody
+laughing at JavaScript. It's like this never-ending dream, being stuck in high
+school. Our team against yours. It's like we are all stuck being 16. Such a
+sad state of affairs. And it's always been like this. But it's getting worse,
+I think.
+
+Everybody is trying to be elitist.
Compensating for missing JavaScript features (a type system, for one) by
+coming up with this insane terminology to make JavaScript sound more
+sophisticated than it is. Let's invent terminology to hide flaws and sound
+more educated and academic. And the same goes for C and all the other
+languages. All languages are shitty in some ways. For the love of God, why?
+Just let it be. For once, let things just be.
+
+And I, for one, just do not care anymore. Languages are tools and not your
+identity. If you need a programming language to fill a void in your life, I
+strongly suggest that you re-evaluate where you stand currently. Try something
+else. You are not a C developer, or Go developer, or JavaScript developer. You
+are a problem solver. That's what you are. And be damn proud of it. You don't
+need a label to make that more true or more sophisticated.
+
+I use Linux and macOS. I have fun on both systems. In my personal experience,
+MacBooks are better laptops for what I need them to be. They are a better fit
+for me. Portable machines with an amazing battery life. That's all that I need
+from a laptop. I don't need to come up with these insane hypothetical
+scenarios where it will fall short. Yes, it can't water the plants when I am
+sleeping. OMG, are we really going there? These insane hypotheticals. Who
+really cares? I don't! I use it, it does what I need it to do, and that is the
+end of the story. Not only that, but I don't push this down other people's
+throats. Like Tsoding often says: It is what it is, and it isn't what it
+isn't. Such wise words. On my main machine I have Linux and have had it for
+more than 20 years, and I love it. I LOVE it. I am used to it. And I've had
+some shitty experiences with it, but damn it, I love it. It does what it needs
+to do. It fits my needs. And if I needed Windows, I would find a way to love
+it too. Why not? There is enough love to go around if you are not being an
+elitist and a shithead.
+
+Programming should be fun.
Not going through a checklist before you even start, to see if you are using
+what is considered the “cool” thing. If you are doing this, you have already
+failed in my opinion.
+
+Oh, you are not using this “insert here” algorithm? Such a pleb. Don't you
+know about O(N) complexity? OMG, such a noob. He doesn't know. Uneducated
+pleb. 2017 called, and they want their stack back.
+
+Yes, there is a place for all of those things. But not everything needs to be
+perfect. There is an awesome quote in Uncharted: Sic Parvis Magna.
+“Greatness from small beginnings.”
+
+I would laugh if it wasn't sad. And in the end, who cares. Let these people
+worry about making the perfect solutions that will never ship or take years
+to finish because “Premature optimization is the root of all evil.” Everybody
+has their definition of fun. I just don't want to listen to people preaching
+to others how to do stuff. If people would just shut up and think before they
+speak 5% of the time, the world would be a different place. But that will
+never happen. So the only solution is to not give a fuck.
+
+This is more a rant than an actual post with some solution, so maybe I am a
+part of the problem. Who knows? Just venting. Every so often it helps.
+
+Do your Rust thing. It's not for me, though. But if it works for you, more
+power to you. Do your project with vanilla JavaScript. You don't always need
+TypeScript, Next.js or who knows what else to make a button do a thing. Use
+VS Code or Vim or Emacs or even Notepad if you wish. If you are having fun,
+then just do it. Don't worry about these elitist pricks. They will never be
+satisfied anyway. You will never get their approval. So why even bother. Just
+go for it. Use C, Rust, OCaml, whatever floats your boat. If it tickles you,
+just do it. To hell with everybody else. And if somebody says O(N)
+complexity, dude? You can say, OOOOO, fuck the fuck off.
+
+If this post triggered you, then you are the asshole. Probably.
Then you are probably that guy preaching about O(N) or about how this
+language is soo slow, haha. Stop it. Nobody cares! Touch grass.
+
+Anyway, back to having fun. Cheers!
diff --git a/_posts/posts/2024-02-11-k-mer.md b/_posts/posts/2024-02-11-k-mer.md
new file mode 100644
index 0000000..254b5df
--- /dev/null
+++ b/_posts/posts/2024-02-11-k-mer.md
@@ -0,0 +1,141 @@
+---
+title: "Navigating the genome using k-mers for DNA analysis and visualization"
+permalink: /navigating-the-genome-using-k-mers-for-dna-analysis-and-visualization.html
+date: 2024-02-11T01:04:28+02:00
+layout: post
+type: post
+mathjax: yes
+draft: true
+published: false
+---
+
+## Brief introduction to K-mer
+
+A "k-mer" is a substring of length \\(k\\); the k-mers of a string are all of
+its possible substrings of that length. The term is commonly used in
+computational biology and bioinformatics. In the context of DNA, RNA, or
+protein sequences, a k-mer is a sequence of \\(k\\) nucleotides (for DNA and
+RNA) or amino acids (for proteins).
+
+The concept of k-mers is fundamental in various bioinformatics applications,
+including genome assembly, sequence alignment, and identification of repeat
+sequences. By analyzing the frequency and distribution of k-mers within a
+sequence or set of sequences, researchers can infer structural
+characteristics, identify genetic variants, and compare genomic or proteomic
+compositions between different organisms or conditions.
+
+For example, in genome assembly, k-mers are used to reconstruct the sequence
+of a genome from a collection of short sequencing reads. By finding overlaps
+between the k-mers derived from these reads, assembly algorithms can piece
+together contiguous sequences (contigs), which represent longer sections of
+the genome.
+
+The choice of \\(k\\) (the length of the k-mer) is crucial and depends on the
+specific application.
A larger \\(k\\) provides more specificity (useful for +distinguishing between closely related sequences), while a smaller \\(k\\) +offers greater sensitivity (useful for detecting repeats or low-complexity +regions). However, the computational resources required increase with \\(k\\), +as there are \\(4^k\\) possible k-mers for nucleotide sequences (due to the four +types of nucleotides: A, T, C, G) and \\(20^k\\) for amino acid sequences (due +to the twenty standard amino acids). + +## K-mer counting + +K-mer counting is a fundamental process in bioinformatics used for analyzing the +frequency of k-mers (subsequences of length \\(k\\)) in DNA, RNA, or protein +sequences. Efficient k-mer counting is crucial for various applications such as +genome assembly, metagenomics, and sequence comparison. The implementation +typically involves parsing a sequence into all possible k-mers and then counting +the occurrences of each unique k-mer. Here's a general approach to implementing +k-mer counting: + +### Reading the Sequences + +The first step involves reading the genetic or protein sequences from files, +which are often in formats like FASTA or FASTQ. These files contain one or +multiple sequences that will be processed to extract k-mers. + +### Generating K-mers + +For each sequence, generate all possible subsequences of length \\(k\\). This is +done by sliding a window of size \\(k\\) across the sequence, one nucleotide (or +amino acid) at a time, and extracting the subsequence within this window. + +### Counting K-mers + +The extracted k-mers are then counted. This can be achieved using various data +structures: + +- **Hash Tables (Dictionaries)**: They offer an efficient way to keep track of + k-mer counts, with k-mers as keys and their frequencies as values. This + approach is straightforward but can become memory-intensive with large + datasets or large values of \\(k\\). 
+- **Suffix Trees or Arrays**: These data structures are more space-efficient
+  for k-mer counting, especially for large datasets. They allow for efficient
+  retrieval of k-mer occurrences but are more complex to implement.
+- **Bloom Filters and Count-Min Sketch**: For very large datasets,
+  probabilistic data structures like Bloom filters or Count-Min Sketch can
+  estimate k-mer counts using significantly less memory, at the cost of a
+  controlled error rate.
+
+### Handling Memory and Performance Issues
+
+K-mer counting can be memory-intensive, especially for large values of
+\\(k\\) or large datasets. Optimizations include:
+
+- **Compressing K-mers**: Representing k-mers using a binary format rather
+  than strings can save memory.
+- **Parallel Processing**: Distributing the k-mer counting task across
+  multiple processors or machines can significantly speed up the process.
+- **Minimizing I/O Operations**: Efficiently reading and processing sequences
+  from files in chunks reduces I/O overhead.
+
+### Post-processing
+
+After counting, the k-mer frequencies can be used directly for analyses or
+can undergo further processing, such as filtering rare k-mers, which are
+often errors, or normalizing counts for comparative analysis.
+
+### Implementation Example
+
+Here's a simple Python example using a dictionary for k-mer counting:
+
+```python
+def count_kmers(sequence, k):
+    """Count every substring of length k in the sequence."""
+    kmer_counts = {}
+    # Slide a window of size k across the sequence, one position at a time.
+    for i in range(len(sequence) - k + 1):
+        kmer = sequence[i:i+k]
+        kmer_counts[kmer] = kmer_counts.get(kmer, 0) + 1
+    return kmer_counts
+
+# Example usage
+sequence = "ATGCGATGATCTGATG"
+k = 3
+kmer_counts = count_kmers(sequence, k)
+print(kmer_counts)
+```
+
+This code snippet counts the occurrences of each 3-mer in a given sequence.
+
+For real-world applications, especially those involving large datasets,
+consider using specialized bioinformatics tools like Jellyfish, KMC, or
+khmer, which are optimized for efficiency and scalability.
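As a quick aside, the "compressing k-mers" optimization from the list above can be sketched in a few lines of Python. This is a minimal illustration with made-up helper names (`pack_kmer`, `unpack_kmer`), not code from any of the tools mentioned: each nucleotide maps to 2 bits, so any k-mer with \\(k \le 32\\) fits in a single 64-bit integer instead of a string.

```python
# Sketch of the "compressing k-mers" optimization: 2 bits per base,
# so a k-mer becomes a small integer key instead of a string key.
# Helper names (pack_kmer, unpack_kmer) are invented for this example.

ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
DECODE = "ACGT"

def pack_kmer(kmer):
    """Pack a k-mer string into an integer, 2 bits per nucleotide."""
    value = 0
    for base in kmer:
        value = (value << 2) | ENCODE[base]
    return value

def unpack_kmer(value, k):
    """Recover the original k-mer string from its packed form."""
    bases = []
    for _ in range(k):
        bases.append(DECODE[value & 0b11])
        value >>= 2
    return "".join(reversed(bases))

def count_kmers_packed(sequence, k):
    """Same sliding-window count as above, keyed on packed integers."""
    counts = {}
    for i in range(len(sequence) - k + 1):
        key = pack_kmer(sequence[i:i+k])
        counts[key] = counts.get(key, 0) + 1
    return counts

counts = count_kmers_packed("ATGCGATGATCTGATG", 3)
print(counts[pack_kmer("ATG")])          # 3
print(unpack_kmer(pack_kmer("ATG"), 3))  # ATG
```

The same packing should translate directly to bit shifts on a `uint64_t` once we get to the C version, which is where most of the memory savings come from.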
Now that we have the basics out of the way, we can start implementing a basic
+k-mer counter in C.
+
+## Implementing sequence reading in C
+
+## Additional reading material
+
+- [2101.08385](https://arxiv.org/pdf/2101.08385.pdf) - Motif Identification using CNN-based Pairwise
+- [2112.15107](https://arxiv.org/pdf/2112.15107.pdf) - Probabilistic Models of k-mer Frequencies
+- [2205.13915](https://arxiv.org/pdf/2205.13915.pdf) - DiMA: Sequence Diversity Dynamics Analyser for Viruses
+- [2209.09242](https://arxiv.org/pdf/2209.09242.pdf) - Computing Phylo-k-mers
+- [2305.07545](https://arxiv.org/pdf/2305.07545.pdf) - KmerCo: A lightweight K-mer counting technique with a tiny memory footprint
+- [2308.01920](https://arxiv.org/pdf/2308.01920.pdf) - Sequence-Based Nanobody-Antigen Binding
+- [2310.10321](https://arxiv.org/pdf/2310.10321.pdf) - Hamming Encoder: Mining Discriminative k-mers for Discrete Sequence Classification
+- [2312.03865](https://arxiv.org/pdf/2312.03865.pdf) - Learning Genomic Sequence Representations using Graph Neural Networks over De Bruijn Graphs
+- [2401.14025](https://arxiv.org/pdf/2401.14025.pdf) - DNA Sequence Classification with Compressors
diff --git a/_posts/thoughts/.gitkeep b/_posts/thoughts/.gitkeep
new file mode 100644
index 0000000..e69de29
-- cgit v1.2.3