<![CDATA[Mohammad Mustakim Ali]]>//favicon.pngMohammad Mustakim Ali/Ghost 5.32Sat, 24 Feb 2024 13:11:31 GMT60<![CDATA[Rust-ing into Open Source: My debut with Rust-Analyzer]]>/first-rust-oss-with-rust-analyzer/65d0e0664f411201598e200eSat, 17 Feb 2024 18:39:53 GMT

I've made my first open-source contribution to Rust. It's on rust-analyzer - the compiler front-end for IDEs like VS Code. The code completion list now shows constructors and builder methods first.

Rust-ing into Open Source: My debut with Rust-Analyzer

The PR has been merged, therefore it's available in the nightly version. It should land on the next preview version of rust-analyzer soon (Update: This is now in release 2024-02-19).

Motivation

I want to be able to quickly determine how a type can be created; the established norm is to look for constructors named new, from_ etc. or a builder method.

Norms aren't rules; there are exceptions too: PathBuf and Vec have with_capacity. They are well known, but what if the library author needs to name the constructor something else?

Naming is hard, and there is no "one rule fits all (libraries)" when naming constructors. Instead of frantically looking for these in the completion list, what if the IDE presented those methods first? After all, when I enter YourType:: I am primarily interested in creating an instance of this type. To be precise, I am looking for:

  • Direct Constructors: Methods that create an instance of Self and may or may not take arguments, e.g. fn new() -> Self
  • Constructors: Similar to direct constructors, but Self is wrapped - typical examples are Result<Self, E>, Option<Self> or Self wrapped in other types (both kinds are sketched after the builder example below).
  • Builder methods: Methods that return a different type whose name typically ends in Builder, e.g.
struct MyType;
struct MyTypeBuilder;
impl MyType {
   fn builder() -> MyTypeBuilder {
       MyTypeBuilder
   }
}
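For comparison, here is a small sketch of signatures that fall into the first two categories. The type and method names are made up for illustration; they are not from the rust-analyzer change itself.

struct Config;

impl Config {
    // Direct constructor: returns Self directly
    fn new() -> Self {
        Config
    }

    // Constructor: Self wrapped in another type, here Result
    fn from_file(path: &str) -> Result<Self, std::io::Error> {
        let _ = std::fs::read_to_string(path)?;
        Ok(Config)
    }
}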

Document it

My first attempt at addressing this was to update the rustdoc generator to highlight constructors and builder methods in the generated documentation.

However, I lost motivation even before I got started, as evidenced by this PR. The reason being:

  1. If I'm browsing the documentation, there's a good chance I tried to find a way to create an instance of this type in the IDE first.
  2. It's easy to forget the name of the constructors even if I've used the type before. So #1 applies here too.

I did not want to do something I didn't think would be helpful. So I dropped the idea and took a long pause (I got busy with life) until the quiet period of the year, December, arrived. That is when I started putting some late-night hours (after my daughter is asleep, until I'm too tired to think) into digging into rust-analyzer.

Implement it where it's most useful

This was the exciting part because although I write a lot of Rust code in my day job, those are CRUD web applications and event-based distributed systems designed to process payments. On the other hand, rust-analyzer is a complex monolith, a highly optimised language server intended not to get in my way when I'm building other Rust applications (while being the second best friend - after rustc). This is nothing like what I do during the day, in other words, when it's gloomy outside (in an English village).

I'll be honest, the first implementation wasn't elegant at all. I was keen to put something together and validate the idea with the rust-analyzer contributors before iterating on the implementation. Given the limited time I could put into this outside work, this seemed the best approach.

I am so grateful to the reviewer, Lukas Wirth, for all the pointers in the PR. A few iterations made the implementation more elegant and up to the mark, given a language server's strict performance/overhead expectations.

What's next?

I have a few ideas I'd like to try for rust-analyzer and this contribution will inspire me to strive in that direction. I may post about these on my blog (or not, depending on how excited I am for these or how they turn out).

Conclusion

I'm glad my first contribution to the Rust language is in one of the most frequently used features of rust-analyzer - code completion.

Every time you enter a type in the IDE, the code completion list appears - and if the first item isn't a constructor or builder method, you immediately know you should RTFM; there is no point in looking for a constructor.

]]>
<![CDATA[Liinks - minimalistic link in bio page]]>/liinks-minimalistic-link-in-bio-page/64de88066c5aaf0128b74198Thu, 17 Aug 2023 21:29:06 GMT

Introducing liinks.xyz - a site for designing a minimalistic link in bio page.

A few months ago, I found myself in a situation where I needed to quickly design a page with a few lines of text and a few link buttons to a different webpage. I hoped to link to this page using a permalink from another website.

I immediately went to a few "Link in Bio" services but quickly got demotivated due to the need for registration and lack of minimalism. I then embarked on a weekend-long journey to see how much work was needed to design something good enough as a link in bio page designer and not so complex that it needs registration.

The result was a single index.html file that can render a short webpage without storing anything on the server. This is done by storing the data in the link that is shared, and most importantly, no registration is required. View an example of this post as a liinks.xyz page. The link is 980 characters long, but this is not a problem for most cases (e.g. Twitter, Facebook etc.)
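The underlying idea is roughly this, sketched below in Rust purely for illustration - the real site is a single index.html doing the equivalent in client-side JavaScript, and the encoding shown here (plain hex) is an assumption, not liinks.xyz's actual format: serialise the page content, encode it into the URL itself, and decode it back when the page loads, so nothing ever needs to be stored on a server.

// Illustrative sketch only: round-trip page content through the URL.
fn encode_page(markdown: &str) -> String {
    // hex-encode the body and carry it in the URL fragment
    let encoded: String = markdown.bytes().map(|b| format!("{:02x}", b)).collect();
    format!("https://example.invalid/#{}", encoded)
}

fn decode_page(url: &str) -> Option<String> {
    let data = url.split('#').nth(1)?;
    let bytes: Result<Vec<u8>, _> = (0..data.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&data[i..i + 2], 16))
        .collect();
    String::from_utf8(bytes.ok()?).ok()
}

fn main() {
    let url = encode_page("# Hello\nA few links go here.");
    assert_eq!(decode_page(&url).as_deref(), Some("# Hello\nA few links go here."));
}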

The body is written using markdown, and adding some colour to personalise the page is possible.

Liinks - minimalistic link in bio page

Finally, if you need a short URL - this is possible too.

Liinks - minimalistic link in bio page | Liinks
https://liinks.xyz/i/1efc573e31
You will notice it takes a few seconds to generate short-links. This is Cloudflare Turnstile checking the request to protect against abuse.

The tech stack is Cloudflare Pages (static), Functions, and R2 for storage.

]]>
<![CDATA[Some underrated features of Rust]]>/some-underrated-features-of-rust/64da63f56c5aaf0128b74079Fri, 31 Mar 2023 18:22:00 GMTThis will be an opinionated post about some of the useful features of the Rust Programming language. They are underrated and often don't even get mentioned as useful programming language features.

Imports scoped to a code block / Local Imports

You are used to having all the imports at the beginning of the file - even those used just once in a function.

use some::{Person, Address};
use other::Thing;

fn save(a: Address) {
  Person::new().with_address(a);
}

fn process() {
  // [...]
  Thing::do_thing();
  // [...]
  Thing::do_another_thing();
}

Instead, the process function can import other::Thing in place, scoped within the function. This makes it easier to cut and paste the entire function to another module if needed, and all the imports will follow.

// [...]

fn process() {
  use other::Thing;

  // [...]
  Thing::do_thing();
  // [...]
  Thing::do_another_thing();
}

I prefer this when I import a type that is only used once. It can make your code much more readable and reduce noise at the beginning of the code file.

fn another_example(o: package::model::OrderType) {
  match o {
    package::model::OrderType::Pending(p) => todo!(),
    package::model::OrderType::Shipped(p) => todo!(),
    package::model::OrderType::Delivered(p) => todo!(),
  }
}

fn another_example_better(o: package::model::OrderType) {
  use package::model::OrderType;
  match o {
    OrderType::Pending(p) => todo!(),
    OrderType::Shipped(p) => todo!(),
    OrderType::Delivered(p) => todo!(),
  }
}

Use if/match/loop expression to return value

This is usually the first thing I miss about Rust when I switch to other popular backend languages.

In Rust, if, match, loop etc. are all expressions. That means they can all return a value. This makes initialising a variable, or setting its value based on a condition, concise.

let delivery_speed = match order.user.membership_type {
  Membership::Platinum => Delivery::Today,
  Membership::Gold => Delivery::NextDay,
  Membership::Free => Delivery::Slow,
};

Compare this to

function getDeliverySpeed(order: Order): Delivery {
  switch (order.user.membership_type) {
    case Membership.Platinum:
      return Delivery.Today;
    case Membership.Gold:
      return Delivery.NextDay;
    case Membership.Free:
      return Delivery.Slow;
    default:
      throw new Error("Invalid membership type");
  }
}

const delivery_speed = getDeliverySpeed(order);
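And match isn't special here - if and loop are expressions too. A minimal sketch (the values and names are made up for illustration):

fn main() {
    let stock = 3;

    // `if` is an expression: both branches produce a value
    let label = if stock > 0 { "in stock" } else { "sold out" };

    // `loop` is an expression too: `break` can carry a value out of the loop
    let mut attempts = 0;
    let result = loop {
        attempts += 1;
        if attempts == 3 {
            break attempts * 10;
        }
    };

    println!("{}, {}", label, result);
}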

Let me know if you have any favourite underrated Rust language features in the comments.

]]>
<![CDATA[I'm Rusted 🦀]]>The most difficult part of learning Rust for me isn’t ownership or slow compile time. It’s that thought in the back of my head that "I could do it much quicker if I were using C#".

I could do that quicker using X

So

]]>
/i-am-rusted/61a6cb2a92ce65000122e089Wed, 01 Dec 2021 23:18:38 GMT

The most difficult part of learning Rust for me isn’t ownership or slow compile time. It’s that thought in the back of my head that "I could do it much quicker if I were using C#".

I could do that quicker using X

So far I’ve been reminding myself: no matter how badly implemented it is, the rust version will most likely be faster, correct and probably run correctly when it compiles for the  first time. It’s working great so far.

I'm also glad I won't have to come back and scrutinise my code to allocate less and go easy on the GC in the future.

Ownership rules aren't that different

If you had a brief look at Rust and got scared off by the borrow checker rules or the resulting compiler errors, and ran away (like me) - I invite you to give it another go, but look at it from a different angle this time.

If you are already familiar with profiling, reducing allocations and optimising code on the hot path - you read the code and keep track of what you are allocating and how you are passing that data around. So you are just "following" the memory on your own.

The Rust compiler is doing the same thing for you, with the added benefit of statically writing the (invisible) deallocation code alongside your code, so that nothing sticks around when it is not needed.

It’s a lot difficult to look at old code or even worse, someone else’s code and follow the memory than just do it as you are writing them.

Rust graveyard

I like to learn by doing. This time I am also keeping a list of the projects I'm doing. I am calling the repo "Rust Graveyard". It's a private repo where I add a link to any Rust project I have started. Of course these projects are not "done". However they have served their purpose: to explore the language while building something cool ¯\_(ツ)_/¯.

I'm Rusted 🦀

The list is getting longer; apart from the last project I've blogged about, it has everything from rewriting a C# continuous test runner, to a meme maker web app, to a library for mocking gRPC servers, to a webpage showcasing awesome Rust projects by parsing the awesome-rust repo - and everything in between.

They have just two things in common: they are incomplete and naive. However, every single one of those projects helped me learn so much more.

Where am I going with this?

It's quite difficult to focus on something as different as Rust in whatever little time I get after my day job and an evening of being a dad to a little 💖 human 👼. So I've landed myself a role in another team that uses Rust in a payment product at my company.

I am a bit nervous about this move as this will be the first time I'm taking a role that takes me completely out of my comfort zone. It will be a great challenge for sure, but I have faith in myself and the brilliant, helpful, and humble geniuses I will be working with. As to why I am taking a break from C#? That is a story for some other time.

We are hiring for various Rust and C# IC roles. If you are up for it then come join us.

]]>
<![CDATA[Playing with 🦀 Rust: Building a cli app to estimate disk space usage]]>/playing-with-rust-writing-gnu-du/6162d14392ce65000122dc6fSun, 10 Oct 2021 16:22:46 GMT

I have been fascinated by 🦀 Rust for a while now. However, unlike some other languages I've tried over the years, the journey has not been straightforward so far. That is mostly due to the fact that you have to be the coder as well as the garbage collector at the same time.

I am slowly getting a grip on this, thanks to the best-in-class documentation and community support available on the internet. Since I learn best by "doing", I've spent this weekend building a small CLI tool to estimate the disk space used by a folder, including all subfolders - something like the du command that comes with GNU Coreutils.

$ du -hd 0
156M    .

❓This post was written by a Rust newbie - so you know what to do! I may be talking nonsense here so help this friend out by posting your suggestions in the comment.


The Idea

The CLI will take a path (or default to the current working directory) and then display the number of files it contains and the total disk space used by them.

# current path
$ dux
Total size is 204828703 bytes (204.83 MB) across 1176 items

# specify a path
$ dux ~/bin/
Total size is 586934311 bytes (586.93 MB) across 3372 items

Quick and dirty implementation

The first version was very straightforward: it simply traverses the file system using a single thread. Obviously this was very slow.

[package]
name = "dux"
version = "0.1.0"
edition = "2018"

[profile.release]
lto = true

[dependencies]
pretty-bytes = "0.2.2"
cargo.toml
use pretty_bytes::converter::convert as humanize_byte;
use std::path::{Path, PathBuf};
use std::{env, path};

fn main() {
    let current_path = env::current_dir()
        .expect("")
        .to_str()
        .expect("")
        .to_string();
    let args: Vec<String> = env::args().collect();
    let target = args.get(1).unwrap_or(&current_path);
    let path = path::Path::new(target);

    if !path.exists() {
        eprintln!("Invalid path: {}", &target);
    } else if path.is_file() {
        todo!();
    } else if path.is_dir() {
        // Single threaded
        let size: f64 = size_of_dir_single_threaded(path) as f64;
        println!("Total size is {} bytes ({})", size, humanize_byte(size));
    } else {
        eprintln!("Unknown type {}", target);
    }
}

fn size_of_dir_single_threaded(path: &path::Path) -> u64 {
    if !path.is_dir() {
        return 0;
    }

    let mut count = 0;
    for entry in path.read_dir().expect("Read dir").flatten() {
        let path = entry.path();
        if path.is_file() {
            count += path.metadata().unwrap().len();
        } else if path.is_dir() {
            count += size_of_dir_single_threaded(&path);
        }
    }
    count
}
main.rs

This is obviously going to be painfully slow! We can do better! 👐

Objectives

I am hoping to touch a few more Rust concepts and implement a more performant solution that gives me exposure to these:

  • Channels - multiple producers walk the directory tree and multiple receivers will count them in parallel.
  • Spawning multiple threads (default is 1 thread per CPU core)
  • Exploring relevant Rust traits.

Better version

It always starts with a c̶l̶a̶s̶s̶  struct

Let's start by defining a struct to hold the statistics we will collect as we traverse the file system. We are interested in keeping track of the total disk space in bytes and the number of files we've traversed so far.

#[derive(Default)]
struct Stats {
    size: u64,
    count: i32,
}

We are using the derive macro to auto-generate the code for

  • Default trait: So we can create a default instance using Stats::default(). Another approach could be to implement a new associated function that returns a new instance - but I think this trait is more idiomatic and well understood by others (a hand-written equivalent is sketched below). The initial value for both fields is the default value of their data type: size = 0, count = 0
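For reference, the derive is roughly equivalent to writing this impl by hand (or, as mentioned, a hand-rolled new associated function could do the same job):

impl Default for Stats {
    // what #[derive(Default)] generates for Stats, written out explicitly
    fn default() -> Self {
        Stats { size: 0, count: 0 }
    }
}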

Next, let's implement the Display trait so we can specify how this data is formatted to look like our expected output, e.g. 586934311 bytes (586.93 MB) across 3372 items

use pretty_bytes::converter::convert as humanize_byte;

impl Display for Stats {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(
            f,
            "{} bytes ({}) across {} items",
            self.size,
            humanize_byte(self.size as f64),
            self.count
        )
    }
}

The humanize_byte function comes from a crate called pretty-bytes, so we need to update cargo.toml accordingly. While we are at it, we will also add another crate that lets us retrieve the number of CPU cores available on the computer.

# ...
[dependencies]
pretty-bytes = "0.2.2"
num_cpus = "1.13.0"
cargo.toml

The better size_of_dir function

Let's update our main function to call a new function that walks all the nested subdirectories and returns a Stats object with all the information we are after. We will also specify the number of worker threads to use to speed up the process.

fn main() {    
    // [...]
    if !path.exists() {
        // [...]
    } else if path.is_file() {
       // [...]
    } else if path.is_dir() {
        // Single threaded
        // code is removed

        // Multi threaded
        let cores = num_cpus::get().to_string();
        let cores = std::env::var("WORKERS").unwrap_or(cores).parse().unwrap();
        let stat = size_of_dir(path, cores);

        println!("Total size is {}", stat);
    } else {
    // [...]
    }
}

// new function
fn size_of_dir(path: &path::Path, num_threads: usize) -> Stats {
    todo!();
}

As you can see, we are making it possible to override the number of threads to be used using an environment variable WORKERS.

Type inference

Note that we are taking the environment variable or the number of cores - both of which give us a String - and parsing it into usize without explicitly specifying the type (e.g. parse::<T>()).

// `parse()` doesn't know the target type yet
let cores = std::env::var("WORKERS").unwrap_or(cores).parse().unwrap();

// `cores` is passed as an argument that requires type `usize` so `parse` in line above is inferred as `parse::<usize>()`
let stat = size_of_dir(path, cores);

As soon as we pass cores into the size_of_dir function as the argument for num_threads, which accepts the usize type, the compiler infers that the type requested for parse must be usize. Nice! 💛💛

Hard question: How to channel in 🦀 Rust?

Channels allow you to safely share data between threads. Depending on the type of channel, there can be multiple publishers and multiple receivers on different ends of a channel. This replaces the need to introduce shared variables across threads - which is notoriously hard to get right (in other languages) - by giving us a pub-sub model. Channels are available in all modern languages, like C#'s System.Threading.Channels or Go's channels. I was blown away by the simplicity of using channels across different goroutines.

In my use case, I want

  • thread 1: walks into each folder and then, for each sub-folder, publishes a message using this channel to notify another (idle) worker to process it.
    • For each file it encounters, it increments a Stats object it holds with the size of the file and increments the count by 1.
  • threads 2..n: receive each of these messages, walk into those folders and repeat what thread 1 does.

Now, a basic channel implementation for Rust is in std::sync::mpsc::channel. It's explained brilliantly in Jon Gjengset's Crust Of Rust video about channels.

However looking at the documentation,

The Sender can be cloned to send to the same channel multiple times, but only one Receiver is supported.

I see this supports only one Receiver - not useful for our use case. A quick Google search took me to a similar question in the Rust user forum where a Rustacean suggested the crossbeam_channel crate. Sweet! Let's add this to our cargo.toml

# ...
[dependencies]
# ...
crossbeam-channel = "0.5.1"
cargo.toml

Let's implement our size_of_dir function

use crossbeam_channel::{unbounded, Receiver, Sender};

fn size_of_dir(path: &path::Path, num_threads: usize) -> Stats {
    let mut stats = Vec::new();
    let mut consumers = Vec::new();
    {
        let (producer, rx) = unbounded();

        for idx in 0..num_threads {
            let producer = producer.clone();
            let rx = rx.clone();

            consumers.push(std::thread::spawn(move || worker(idx, rx, &producer)));
        }

        // walk the root folder
        stats.push(walk(path, &producer));
    } // extra block so the channel is dropped early,
      // therefore all threads waiting for new messages will encounter the
      // exit condition and run to completion.

    // wait for all receivers to finish
    for c in consumers {
        let stat = c.join().unwrap();
        stats.push(stat);
    }

    stats.iter().sum()
}

// this is a worker
fn worker(idx: usize, receiver: Receiver<PathBuf>, sender: &Sender<PathBuf>) -> Stats {
    todo!();
    // walks into each PathBuf it receives from receiver,
    // returns the Stat object in the end
}

fn walk(path: &path::Path, sender: &Sender<PathBuf>) -> Stats {
    todo!();
    // actual calculation happens here
    // 1. for each file in path, increment a local Stat object with the size
    // 2. for each folder encountered - publish a message using sender, so another worker can pick up and process this
    // 3. return the Stat object in the end
}

We start by creating a list of Stats objects that we will populate once each of the worker threads has completed - each one returns its own Stats instance. Each worker thread will process an arbitrary number of folders, so the individual Stats objects need to be summed up at the end to produce the final result.

New Trait: Sum<T>

The first error is that we are unable to call sum() on a collection of an arbitrary struct: Stats.

stats.iter().sum()
    Checking dux v0.1.0 (/home/mustakim/code/rust-projects/dux)
error[E0277]: the trait bound `Stats: Sum<&Stats>` is not satisfied
   --> src/main.rs:124:18
    |
124 |     stats.iter().sum()
    |                  ^^^ the trait `Sum<&Stats>` is not implemented for `Stats`

For more information about this error, try `rustc --explain E0277`.
error: could not compile `dux` due to previous error

As in most other cases, the Rust compiler actually tells you exactly what to do next. So I got introduced to the trait Sum<T> and needed to implement it.

impl<'a> std::iter::Sum<&'a Stats> for Stats {
    fn sum<I: Iterator<Item = &'a Stats>>(iter: I) -> Self {
        let mut result = Self::default();
        for stat in iter {
            result.count += stat.count;
            result.size += stat.size;
        }
        result
    }
}

If you are a newbie like me, you may start without any lifetime annotations or even without the generic type parameter - the compiler will guide you there.

The walk function

Looking at the requirements written as comments, this seems more straightforward to implement:

fn walk(path: &path::Path, sender: &Sender<PathBuf>) -> Stats {
    let mut stat = Stats::default();

    // Optimisation (makes it faster)
    // if !path.is_dir() {
    //     return;
    // }
    if let Err(e) = path.read_dir() {
        eprintln!("Error {} ({})", e, path.to_str().unwrap());
        return stat;
    } else if let Ok(dir_items) = path.read_dir() {
        for entry in dir_items.flatten() {
            let path = entry.path();
            if path.is_file() {
                stat.add_file(&path).unwrap();
            } else if path.is_dir() {
            	// publish message to the channel
                sender.try_send(path).unwrap();
            }
        }
    }
    stat
}
The walk function
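The stat.add_file(&path) helper used above isn't shown anywhere in the post; here is a minimal sketch of what it might look like, an assumption based on how Stats is used elsewhere (it reads the file's metadata and bumps both fields):

impl Stats {
    // Add a single file's size to the running totals.
    fn add_file(&mut self, path: &Path) -> std::io::Result<()> {
        let meta = path.metadata()?;
        self.size += meta.len();
        self.count += 1;
        Ok(())
    }
}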

The worker function

Next up is the final piece of the puzzle, the worker code - it handles all incoming messages as long as the channel is active (i.e. not dropped).

fn worker(idx: usize, receiver: Receiver<PathBuf>, sender: &Sender<PathBuf>) -> Stats {
    let mut stat = Stats::default();
    while let Ok(path) = receiver.recv_timeout(Duration::from_millis(50)) {
        let newstat = walk(&path, sender);
        stat += newstat;
    }

    stat
}

The receiver.recv_timeout sleeps until a new message is available or it times out (50 ms). There are other options like receiver.recv(), but in this design every worker also holds a Sender clone, so the channel is never fully disconnected and recv() would block forever once the work dries up - here the timeout doubles as the exit condition.

Once a message is available, we call the walk function with the new path received and then add the Stats returned to the local stat variable.

stat += newstat;
what is going on there?

As I'm exploring Rust, I thought it would look nicer if a Stats object could be added and assigned to another Stats object - something like overloading the += operator. There is a trait for that.

impl std::ops::AddAssign for Stats {
    fn add_assign(&mut self, rhs: Self) {
        self.count += rhs.count;
        self.size += rhs.size;
    }
}

Similarly, if I wanted to allow let s = stat + newstat I'd need to implement the std::ops::Add trait. You don't need to remember these - simply write the code you wish worked, e.g.

stat = stat + newstat;

Let the compiler say it for you

error[E0369]: cannot add `Stats` to `Stats`
   --> src/main.rs:132:21
    |
132 |         stat = stat + newstat;
    |                ---- ^ ------- Stats
    |                |
    |                Stats
    |
    = note: an implementation of `std::ops::Add` might be missing for `Stats`

For more information about this error, try `rustc --explain E0369`.
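For completeness, that std::ops::Add implementation would be just as small as the AddAssign one. It isn't needed in the final code (AddAssign was enough here); this is only a sketch of what the compiler is asking for:

impl std::ops::Add for Stats {
    type Output = Stats;

    // combine two Stats by summing both fields
    fn add(self, rhs: Self) -> Self::Output {
        Stats {
            size: self.size + rhs.size,
            count: self.count + rhs.count,
        }
    }
}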

I ♥ this! Coming from OOP languages, I find it a great relief that I do not need to deal with interfaces, abstract classes etc. to introduce shared behaviour in my own code, and to understand "what's what" in someone else's code.

In my own "how can you do this" mental model - if C# is on the far left 🤷 and Go is on the far right ⛔ - Rust puts itself right in the middle here.

The end result

Let's compare the difference in performance between the first quick-and-dirty version and this implementation

# quick and dirty implementation
$ time dux
Total size is 666559986 bytes (666.56 MB)
dux  0.14s user 3.55s system 50% cpu 7.280 total

# this implementation
$ time dux
Total size is 666559986 bytes (666.56 MB) across 31623 items
dux  0.24s user 3.12s system 76% cpu 4.365 total

That's good enough for now. It was not about performance gain, it was about trying out different Rust elements.

The journey

This was the result of many trials and errors over an entire lazy Saturday afternoon. In the first iteration of the optimised implementation, I was not using a struct; instead I was only counting the disk space and sharing the same variable across all the worker threads (although safely, using Rust's Mutex, Arc and other smart pointers). That implementation looked like this.

fn size_of(path: &path::Path) -> u64 {
    let size = Arc::from(Mutex::new(Box::from(0 as u64))); //<-- yuk
    let mut consumers = Vec::new();
    {
        let (producer, rx) = unbounded();
        let producer = Box::new(producer);

        for idx in 1..10 {
            let producer = producer.clone();
            let rx = rx.clone();
            let size = size.clone();

            consumers.push(std::thread::spawn(move || -> () {
                let p = producer.as_ref().clone();
                worker(idx, rx.clone(), &p, &size);
            }));
        }

        // if in trouble - just `clone` it.  ¯\_(ツ)_/¯
        walk(path, &producer.as_ref().clone(), &size.clone().as_ref());
    }

    for c in consumers {
        c.join().unwrap();
    }

    *size.clone().lock().unwrap().as_ref()  //<-- yuk
}

So I was basically just cloning whenever I was in trouble, as well as starting with shared state (the size variable) even though this problem can be solved without any shared state at all.

During my limited testing, this performed almost the same as the final implementation. I am not sure why the Mutex didn't introduce any perf penalty here.

Anyway, this looked messy to me and I had to put a little bit more effort into coming up with a better implementation.

clippy: the next best friend after rustc

A great friend in this was clippy - it suggested various cases where a clone wasn't necessary. It even suggested I rewrite some loops in a much nicer way.
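For illustration (not the actual suggestion from this project), this is the kind of rewrite clippy's needless_range_loop lint nudges you towards:

// Before: clippy::needless_range_loop fires on this
fn total_before(sizes: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..sizes.len() {
        total += sizes[i];
    }
    total
}

// After: iterate over the slice directly
fn total_after(sizes: &[u64]) -> u64 {
    sizes.iter().sum()
}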

Next step

Not sure if I will come back to this, but if I do, then I'll try to make it act like a proper CLI tool:

  • Use clap to parse command line arguments and render a --help screen.
  • Output the result to make it easier to pipe

The code is available on GitHub. Feel free to suggest any Rust feature that I could have used here.

]]>
<![CDATA[Ergodox EZ: Time to step up your engineering game]]>I have been using a keyboard almost all of my life. I write so much using a keyboard that I don't remember when I actually forgot writing using traditional methods like pen & paper. My handwriting using either of my hand is indistinguishable (so I can claim I

]]>
/ergodox-ez-time-to-step-up-your-engineering-game/5e47da84dcfba200013d2d71Sun, 22 Mar 2020 01:33:31 GMT

I have been using a keyboard almost all of my life. I write so much using a keyboard that I don't remember when I actually forgot how to write using traditional methods like pen & paper. My handwriting using either of my hands is indistinguishable (so I can claim I can write using both hands!).

After using your average keyboard for a long time, the biggest change came when I upgraded to a mechanical keyboard after joining my first job back in 2015. That was because a colleague wasn't ready to switch her keyboard when the company purchased an extra one, and I wanted to try out this expensive keyboard some people were talking about.

There I dropped my Microsoft Ergonomic "mushy" keyboard - the last non-mechanical keyboard I've used to date.

Enter Ergodox EZ

I've made the biggest upgrade in this game in 2020: a split, ergonomic, ortholinear keyboard - the Ergodox EZ.

In ortholinear keyboards the rows are straight and in line with each other.
Staggered keyboards are the regular keyboards with the keys arranged in a zigzag layout.
Ergodox EZ: Time to step up your engineering game

The main reason for getting this one is its split nature. Prolonged use of computers has caused a few issues, and shoulder strain is one of them.

While researching possible split keyboard options (there are not many!) there were only two recommendations: this one and the Keyboardio Model 01. Keyboardio stopped selling the Model 01 right around that time as they started preparing a new project on Kickstarter. So that left me with the Ergodox EZ as the only option.

Ergodox EZ: Time to step up your engineering game
Right half of my Ergodox EZ keyboard

First Week

Frustrating! One simple word. Frustrating to the point it made coding painful. I was only using the default layout (despite the so-called 32 layers of customisability) and the lack of dedicated arrow keys was driving me nuts.

To put that into perspective, my average typing speed (on a regular staggered keyboard) dropped from 75wpm+ to 3.95wpm. The added frustration of not getting "stuff" done was unbearable.

Ergodox EZ: Time to step up your engineering game
Some of my first typing tests (first 4 days)

That was all expected. 80% of it was attributable to the blank keycaps and the rest was due to the ortholinear key positions. I know I went to the extreme with those blank/unprinted keycaps, but I wanted to dive all in to force myself to learn touch typing.

As with each of those typing tests above (they were taken over a span of 4 days), you can see the gradual improvement. I must say we humans are good at adapting to new stuff. Years of muscle memory with the backspace and spacebar slowly started to ease off and I was typing again. Still lower than the max speed I could type with a regular keyboard (and 4 fingers), but I was enjoying every keystroke.

Getting the grips on

There is no such thing as "one size fits all" when it comes to keyboard layouts, so I also started customising the layout to suit my needs. Just like most Ergodox users have their own layout, I slowly started to find my "perfect layout".

It's so relaxing to touch type and have all the necessary keys right underneath the thumbs, like backspace (the most used key), space, ctrl and other modifiers - and most importantly the Ctrl+Z key (yes, it's a single key for me).

Imagine life gave you an undo button conveniently located for the mistakes you make 🤔 ...

I stopped taking typing tests but I can feel it's somewhere between 40-50wpm. I am quite happy with that. Of course coding is a whole different subject, and I am happy with the progress I've made within a month.

A snapshot of my keyboard layout at the time this article was written can be found here.

It's all Click Click

I am ready to take this keyboard into the office - that was supposed to be its destination. However, with the COVID-19 outbreak, I will be working from home indefinitely, so this might take a while. At that point I will have to buy another Ergodox EZ for my home, as I spend more time using a computer at home than at the office if I take WFH and weekends into account.

I love this keyboard, and I love the reduced typing speed that I know is temporary. My shoulders thank me and I love every keystroke and every typo I make with it. This has been the best investment in tech you can make to step up your engineering game (only if you love challenges).

]]>
<![CDATA[Why am I so far away from Windows 💔]]>I never thought I would come to this - as I am writing this blog on my Debian installation dual booted with my beloved Windows I have been using decades.

It all started when I joined my current company back in January 2019, TrueLayer - which is a fastest growing

]]>
/why-am-i-so-far-away-from-windows/5e279d2f2bf85d00010ccb63Sun, 02 Feb 2020 02:04:46 GMT

I never thought I would come to this - as I write this post on my Debian installation, dual booted with the beloved Windows I have been using for decades.

It all started when I joined my current company back in January 2019, TrueLayer - one of the fastest growing fintech startups in London. Like other fintech companies, we deal with financial information (PII, transactions etc). This makes fintech a lucrative target for a hacker, therefore Windows is pretty much banned here. You get to choose a Linux or a macOS laptop as your workhorse.

I am not going to debate about why/whether Windows is less secure than Linux/macOS because

  1. I will be biased towards Windows 🤷
  2. I am nobody to make any statements; DevSecOps sit 5 rows away from me, you know!

Since my personal laptop happened to be a Mac for many years (which was just a fancy internet browser, as I did all of my development work on my Windows desktop back then), my natural choice was a Mac. Not to mention the bad experiences when I tried to install Linux on different computers I owned. They all crashed due to driver issues, instability etc. Although I have heard people argue it's the device manufacturers being uncooperative with Linux, it never worked for me (or maybe I didn't try enough?).

Why am I so far away from Windows 💔
Photo by Rubén Menárguez / Unsplash

The first struggle I had after switching to a Mac at work was the placement of Cmd and Ctrl - and this started injuring my muscle memory. Moreover, when I'd return home and work on my side projects on my beloved Windows PC, I'd injure it even more. Think of it: I'm a keyboard-heavy user and I was spending my days with macOS and the evenings and weekends on Windows. I could not have caused more injury to my muscle memory than this.

I had to stop this madness and make a decision. Changing jobs was not an option, because these are the nerdiest, most genius and also most down-to-earth people I have worked with, and abandoning them was not an option. So I chose to move my personal projects onto the MacBook Pro I had bought some months before joining TrueLayer.

Within this time, I became super comfortable with the scary terminals, tmux etc. I started doing things faster using the CLI. The useful xargs, grep, awk, pbcopy, base64 and many other unix CLI tools were unconventional to adapt to at first, but then made a lot of tasks easier. I stopped using a GUI for kubernetes and some other tasks, with the exceptions being Git and a REST client. I found myself spending more and more of my time in the terminal.

Why am I so far away from Windows 💔
Photo by Sai Kiran Anagani / Unsplash

One of the first things I did after switching to a Mac at work was to find some kind of REPL for C# - an inferior replacement for LinqPad. Having failed to find something cross-platform, I had to write cant-run-linqpad; this is the closest thing to a REPL that "gets the work done" for me. But I still wish LinqPad were cross-platform - something that RoslynPad is trying to do, but it is far from complete.

mustakimali/cant-run-linqpad
When you can’t run LinqPad but you need to write throwaway code - mustakimali/cant-run-linqpad
Why am I so far away from Windows 💔

JetBrains Rider made the transition to being a full-time C# developer on a Mac a breeze! Rider achieved in years what Visual Studio (my all-time favourite IDE and the gold standard of IDEs) took decades to perfect. Apart from the missing memory and performance profilers and my favourite LinqPad, I pretty much don't miss anything.

2019, Long year of solely using macOS

After exclusively using Windows for 18 years, I got used to macOS - though not without its issues. It all comes down to one thing: at the end of the day (just a phrase, we deploy many times a day) I have to deploy my code to a Linux machine. So there will always be some friction when developing on macOS.

Some of the issues I faced were:

  • Scripts needed to be written in such a way that they work on both macOS and Linux (some CLI tools come pre-installed with Linux but not with macOS). So there had to be conditions like this in most of the scripts:
if [[ "$OSTYPE" == "darwin"* ]]; then
  • We use gRPC for many internal services, and HTTP/2 over TLS is not yet (as of writing) supported by ASP.NET Core on macOS. It's not a huge issue, but this is something that would not be a problem on a Linux (or Windows) machine.
  • Then there was the time, after a few excruciating days of debugging, when I found out a popular bank's payment API only works when the HTTP request is made using WebClient and from a Linux machine. Otherwise it just "worked on my machine" but failed without any hint in production. This is of course due to some issues in their API (hint: character encoding and how HttpClient differs from WebClient across platforms) - but I'd have solved the issue at least a couple of days earlier had I been using a Linux distro. This was one rare moment where I didn't feel like getting to the bottom of the issue to find the root cause; as soon as it started working, I deployed the changes and called it a day.

Those issues, along with some others, made me start asking myself: how long can I keep myself away from the destination? If I'm going to be deploying to Linux, I must be developing on Linux. I have to give up my newfound love of some macOS apps like iTerm2, Paw, LittleSnitch, Postico etc. Yes, I agree macOS has some of the most polished apps I've ever used - but I have to say goodbye.

I started my transition by installing Debian (after a couple of failures with Ubuntu) on my desktop PC alongside Windows, which I've now been using for a couple of weeks.

2020 The year of Linux / Ubuntu

I got a new XPS 15 7590 at work and I'm going to spend the next few days setting up Ubuntu and transferring my files over. The installation experience was the worst, and most of the pain points are written up in detail here by a good Samaritan.

My Journey Installing Ubuntu 18.04 on the Dell XPS 15 7590 (2019)
The Dell XPS 15 7590 is the newest laptop in Dell’s XPS 15 series. Sporting a 9th generation Intel CPU, a NVIDIA GTX 1650 GPU, and a 97 Wh battery, it is able to balance high perfomance, excellent…
Why am I so far away from Windows 💔

The fingerprint reader doesn't work, which would be helpful when I have to copy a password from LastPass's vault, as typing my master password is a PITA. Oh, I forgot - LastPass doesn't offer any GUI for Linux. So that means I have to type my login password (I will miss TouchID). The membrane keyboard feels like I am back in the dark ages - even the MacBook's terrible keyboard has better tactile feedback. My list goes on and on - but I have to get used to these limitations to free myself in other aspects. It's all a trade-off I'm willing to accept.

Yes, I am far, far away from Windows now. However, I still speak more C# than my native language, and still promote Azure and Visual Studio. I am doing the same thing with just a little less friction.

]]>
<![CDATA[Fix Windows Sandbox internet connectivity problem]]>Windows sandbox lets you run disposable Windows 10 environment to test potentially dangerous files and programs. This is a major security feature (I'd say) in recent versions of windows. If you are not sure about running an installer, you fire up Windows Sandbox, copy the file inside the

]]>
/fix-windows-sandbox-internet-connectivity-problem/5db3784884935a00013a9f53Fri, 25 Oct 2019 23:08:33 GMT

Windows Sandbox lets you run a disposable Windows 10 environment to test potentially dangerous files and programs. This is a major security feature (I'd say) in recent versions of Windows. If you are not sure about running an installer, you fire up Windows Sandbox, copy the file inside the sandbox and run it. Once you are done, close the sandbox window and it's all gone. Nothing you do inside the sandbox can affect your computer.

To enable the Windows Sandbox feature on Windows 10, follow this guide.

A common issue you may encounter when you run Windows Sandbox for the first time is no internet connectivity. I had to spend some time searching the internet, and the suggestions were more or less as follows:

Make sure you have enabled the required features in the Windows Features dialogue

They are

  • Windows Sandbox
  • Containers
  • Hyper-V
  • Virtual Machine Platform
  • Windows Hypervisor Platform
Fix Windows Sandbox internet connectivity problem

Disable and re-enable Windows Sandbox feature

This is the most commonly suggested solution. Classic Did-You-Try-Restarting, nice!

Disable any VPN

Another very common suggestion.

Bridge your internet adapter and vEthernet (Default Switch) connection

This is the most interesting one, as described here. However, the moment I selected those two connections and chose Create Bridged connection, I lost internet connection on my machine. 🤦‍♂️

Though the article helped me take a step in the right direction, I had to figure out the rest myself 👏. I had to actually configure the IP address manually for the newly created Network Bridge connection.

Fix Windows Sandbox internet connectivity problem
Here is the IPv4 settings for the Network Bridge connection

If none of the above suggestions restored internet connection in your Windows Sandbox environment, try manually configuring the IP address and other settings for the network connection that uses the Microsoft Network Adapter Multiplexor. Just make sure you don't assign an IP address that's already taken on your network.

For the other settings, the Default Gateway is your router's address. For the DNS server, my preferred DNS server is the one running in my home network, but this could be one of the public DNS servers available (Cloudflare: 1.1.1.1 & 1.0.0.1, Google: 8.8.8.8 & 8.8.4.4).

Happy sandboxing and be safe (from 🐛🦟🦠).

]]>
<![CDATA[Why I am ditching Google Analytics in favour of Fathom Analytics]]>I am removing Google Analytics from all of my websites. This includes my website where you are reading this, as well as all of my personal projects. This has less to do with me being paranoid about my visitors being tracked but more to do with recently grown dislike about

]]>
/why-i-am-ditching-google-analytics-in-favour-of-fathom-analytics/5d644cca9251bf00014b0543Mon, 26 Aug 2019 22:47:43 GMT

I am removing Google Analytics from all of my websites. This includes the website where you are reading this, as well as all of my personal projects. This has less to do with me being paranoid about my visitors being tracked and more to do with a recently grown dislike of some of Google's decisions about how they are going to do certain things.

I do not mind being tracked on the internet. This is because the positive outcomes offset the negatives (mostly) - that's just my personal opinion. Especially when it comes to Google's services: we all know Google's business model - one can use almost all of their products without paying a penny at all. This means the actual product is the people. I get it - but it's the tracking that enables Google to deliver better search results, better assistance and suggested content in the Google Now feed and Google News. I am not saying there is no "more to it" - but I am focusing on the positives here. Like it or not, it's all the tracking that makes the internet, as I see it, "relevant" to me.

Just like millions of other websites on the internet, I also added Google Analytics to all of my websites - it enables web administrators to gain insight into their visitors. It's a vital piece of software to understand what part of your website or application is used the most, or how one of your recent changes impacted the user experience. This also helps Google keep a record of websites visited by a user. This (along with other things) enables Google to suggest flight deals in the Google Now feed when it sees I've visited a few flight-comparison websites recently. Again, no problem here.

Why I am ditching Google Analytics in favour of Fathom Analytics

However, a recent suggestion to restrict Chrome extensions' ability to block web requests before they happen caused a lot of outrage over the internet, because it essentially makes ad blocking software useless.

Assuming Manifest V3 is adopted, the new webRequest API would only allow  extensions to observe network requests, but not modify, redirect or  block them.

The over-simplified version of Manifest V3 is: currently an ad blocker gets a notification before the browser is about to connect to different domains to download content. The ad blocking extension can examine the URL and give a 👍 or 👎 on whether to go ahead and download the content. However, in V3, Google's proposal is that extensions need to give a list of rules that the browser can honour beforehand. The rules cannot use arbitrary pattern matching, and there is a limit on the number of rules (originally 30,000, later raised to 150,000). Take that, AdBlock!

Of course, this will improve the performance of the web browser and will make it difficult for malicious extensions to perform ... em... malicious acts. But this will kill ad blocking software as it stands now. There is a nice article over at XDA-Developers about this.

Switched from Chrome to Firefox after 11 years

This controversial suggestion prompted me to switch to Firefox after almost 11 years of using Chrome. When it first launched, Chrome was the snappy, simple web browser that everyone was asking for. Over the last 11 years, it has become a major focus of Google's effort to strengthen its internet stronghold. It has even become an operating system, for god's sake 🤷‍♀️. Recently it even implemented a built-in ad blocker that blocks less than 1% of the ads on the internet. None of this is surprising from a company which makes money through online advertising - yet it didn't bother me as I was happily using Chrome on my phone and computers.

Then I gave up - switched to Firefox despite occasionally missing Google Assistant's integration with Chrome on Android (contextual information upon selecting some text in a website).

Ditch Google Analytics in favour of Fathom Analytics

I do not need to know the ISP my visitors are connecting from, nor do I need their location. All I need is how long people are spending on which page and what the trends look like over time.

Why I am ditching Google Analytics in favour of Fathom Analytics

Fathom Analytics seems to be the privacy-focused web analytics software that would serve my purpose without feeding the giant that has turned annoying recently.

What's Next?

Don't know! 🤣

If you do not need to know all of those extra pieces of information about your visitors, or you care about users' privacy, or you don't want your visitors' information to be mined - you can also try Fathom Analytics. It's an open-source, self-hosted (also available on Docker Hub) or managed web analytics solution that cares about privacy.

]]>
<![CDATA[gRPC for ASP.NET Core 3.0]]>/grpc-for-asp-net-core-3-0/5d546e0e8bf842000131d494Wed, 14 Aug 2019 22:51:07 GMT

gRPC is a high-performance RPC framework - a faster and more efficient alternative to JSON-based REST services. gRPC uses the HTTP/2 protocol and, by default, Google's protocol buffer binary serialisation format to transfer messages. It is mature and a must-have technology in cloud-native products. I will spare you the introduction, as I know you are here to quickly get started with gRPC using ASP.NET Core.

I have recently upgraded a gRPC service at work (that was using .NET Core 2.2) to .NET Core 3.0 Preview 7. It was such a smooth migration. gRPC in .NET Core 3.0 has first-class support in the framework, just like an MVC Controller, SignalR Hub or Razor page.

Starting from ASP.NET Core 3.0, gRPC is supported throughout the framework, from Kestrel up to the new endpoint routing. You no longer need to use a separate binary to generate code - it's as simple as building the project. An ASP.NET application can expose gRPC, MVC, Web API and SignalR Hub endpoints etc. all at the same time. It integrates seamlessly with endpoint routing and the built-in IoC container, and the generated logs are just like any other logs - all out of the box.

A gRPC service has 3 elements: a server, a bunch of clients and a .proto file. If you come from a JSON-based service background then you know what I mean by server and client. What's new here is the proto file.

Protocol buffer file (protobuf) - The Contract

The .proto file contains the schema of your service endpoints and models (in protobuf terms - messages). For our example we will need two RPC endpoints:

  • SayHello that takes a name and returns a string
  • SayHelloToNobody that does not take any parameter and returns a string

Here is the proto file

If you can't see the code above, switch to the non-AMP version of this post.

Each RPC is required to take exactly one message. Therefore we could add one or many properties to the HelloRequest message and assign a unique number to each property. Since protobuf is designed with speed and efficiency in mind, it only sends the number over the wire to identify a property.

You can read more about protobuf files in the Language Guide for proto3 format.

The proto file is then used by gRPC compilers (available for many popular languages) to generate code for a client and a server. In order to do that, we need to put the proto file in a shared place to be accessed by both the client and the server.

In a non-trivial C# project, you would create a NuGet package containing the generated code for the client and server. For simplicity, we are going to put the .proto file and generated code into a separate C# project and reference it from both the client and server projects.

Here is a netstandard2.0 project that references the required NuGet packages and the proto file defined above.

Notice the GrpcServices="Both" (instead of Client or Server - we've asked for Both). That's why, once you build the project, it will generate code for both the client (for consumption) and the server (for implementation). Also, by targeting netstandard2.0 we have enabled wider compatibility with the entire .NET ecosystem.

Server

The generated code translates the contract defined in the .proto file into a programming language (in our case C#). We need to provide the implementation. So we have an ASP.NET Core app that references the Protos.csproj.

Here is the Program (the entry point of a C# app), Startup (convention-based bootstrapping of an ASP.NET Web/API app) and an implementation of our DemoService (defined in the DemoService.proto above).

Once started, the gRPC service will be up at the http://localhost:5000 endpoint. You can use any gRPC client, like the grpcurl CLI or the BloomRPC GUI, to test the server.

gRPC for ASP.NET Core 3.0

The server automatically generates log messages like this

/usr/local/share/dotnet/dotnet /Users/mustakim/dev/grpc-dotnet-post/Server/bin/Debug/netcoreapp3.0/Server.dll
warn: Microsoft.AspNetCore.Server.Kestrel[0]
      Overriding address(es) 'https://localhost:5001, http://localhost:5000'. Binding to endpoints defined in UseKestrel() instead.
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://0.0.0.0:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /Users/mustakim/dev/grpc-dotnet-post/Server
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
      Request starting HTTP/2 POST http://localhost:5000/GrpcDotNetDemoPackage.DemoService/SayHello application/grpc 
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
      Executing endpoint 'gRPC - /GrpcDotNetDemoPackage.DemoService/SayHello'
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
      Executed endpoint 'gRPC - /GrpcDotNetDemoPackage.DemoService/SayHello'
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
      Request finished in 60.1197ms 200 application/grpc

Client

You are not limited to using a C# client. gRPC gives you the freedom to generate code in many popular programming languages like Go, Java etc. All you need is the same proto file and the address of the server (http://localhost:5000 in our case). Here is a C# client that uses a HttpClient and extension methods defined in the Grpc.Net.Client NuGet package to create a client using the generated client code in our Proto.csproj project.

Running the client (while the server is running - of course) works as expected.

gRPC for ASP.NET Core 3.0

Other ways of getting the gRPC client

If the client were an ASP.NET web application (or a console application bootstrapped using the .NET Generic Host), you could use IHttpClientFactory to get a HttpClient (instead of newing one up), which would be reused for improved performance.

Even better, you can use the IServiceCollection.AddGrpcClient extension method like this (needs the Grpc.Net.ClientFactory NuGet package).

Then you can just request the IoC to inject DemoService.DemoServiceClient whenever you need to consume the gRPC service.

Securing your gRPC server

You would never deploy a gRPC server (or any service) without securing it with TLS, so here is a quick overview of what needs to be done.

  • In the server: pass the server certificate (in pfx form) to Kestrel in Program.cs.
ASP.NET Core on macOS does not support HTTP/2 over TLS, so you will get an error if you try this there. Alternatively, you can use Docker.
  • The client needs to specify the https://... endpoint and optionally pass the CA file (if the server certificate is self-signed or signed using your own certificate authority).

Go clone yourself

You can find all of the source code on GitHub:
https://github.com/mustakimali/grpc-dotnet-demo

]]>
<![CDATA[My first meetup: London .NET User Group, August 2019]]>I managed to work in this industry for 5 years without going to any meet-ups, or being in touch with others in the developer community. I prefer watching tech-talks and read blogs, articles etc. and Video tutorials has been my primary source of learning new technologies. As for why I

]]>
/my-first-meetup-london-net-august-2019/5d50266a8bf842000131d360Sun, 11 Aug 2019 21:55:00 GMT

I managed to work in this industry for 5 years without going to any meet-ups or being in touch with others in the developer community. I prefer watching tech talks and reading blogs, articles etc., and video tutorials have been my primary source for learning new technologies. As for why I don't prefer meet-ups and live tech talks? Here are the reasons I can think of:

  • Lacks the comfort of my sofa and shorts.
  • Run by humans - they can't be paused, rewound or fast-forwarded.
  • Lacks giant subtitles underneath the speaker's face; this matters depending on what accent the speaker uses.

However, I decided to become more active in the community and part of that was to attend meet-ups regularly.

London .NET User Group Meetup, August 2019

I went to my first meetup recently. It was hosted by the London .NET User Group at CodeNode in Central London and was run by two amazing speakers, Mark and Willow Rendle.

The first speaker was Willow, Mark's 13-year-old daughter, who is making a game using Unity3D and C#. She started with some basics of Unity3D development using C# and eventually dived deep into game logic and some implementation details of one of her ongoing game development projects. They both also talked a bit about allocations and how they can crop up in a game when the game loop runs 60 times per second. It was amazing to see how a 13-year-old can turn a love of games into a passion for game development.

Next was Mark Rendle of RendleLabs - his topic revolved around an InfluxDB client (RendleLabs.InfluxDB) he wrote that is an order of magnitude faster and more efficient than existing solutions for .NET. He explained his use of Span<T>, Memory<T>, System.Threading.Channels and System.IO.Pipelines to achieve near allocation-free (1.5kb if I remember correctly), high-performance (300k/s) ingestion. This talk was so insightful that I decided to keep attending meet-ups like this.

SIGTERM

It was a coincidence that my first meetup was about performance and micro-optimisations, things I deeply care about. This made it even more exciting. Overall it was a positive experience and hopefully the first of many meetups I am going to attend in the future. 🤞

]]>
<![CDATA[Move to k8s: Using nginx rewrite rule to preserve permalinks]]>I have been moving all of my side projects to a kubernetes cluster to simplify my life. It's been going slow due to my commitments to my day jobs and (new side projects). While I had to say goodbye to my old personal website because it was difficult

]]>
/move-to-k8s-using-nginx-rewrite-rule-to-preserve-permalinks/5d4deeaa8bf842000131d26fFri, 09 Aug 2019 22:52:41 GMT

I have been moving all of my side projects to a kubernetes cluster to simplify my life. It's been going slowly due to my commitments to my day job and (new) side projects. While I had to say goodbye to my old personal website because it was difficult to migrate to a Linux environment, all the other services only required some docker magic as I had already upgraded them to .NET Core, or they were already written in cross-platform languages (like my secret crush Go).

One exception to that was go.mustak.im - I have used it over the years to generate permalinks (for example, go.mustak.im/linkedIn takes you to my LinkedIn page). It is a simple PHP script backed by a MySQL database. Why PHP and MySQL for such a simple site? ... ahem, it was simple at the time I wrote it ... a long time ago. I already had MySQL installed and IIS had the PHP extension installed on my Windows Server VPS. It handled a decent amount of traffic over the years and never required any maintenance.

Being written using a programming language and backed by a database meant I had an admin interface to add/remove permalinks, view hit counts etc. Here's how it looked. Nice, wasn't it?

Move to k8s: Using nginx rewrite rule to preserve permalinks

Now, the whole reason I am writing this post - I didn't want to carry the weight of a MySQL server for this in my new kubernetes environment. So I had to come up with a simpler solution. Here's the idea and what it meant:

  • Kubernetes ingress with hardcoded rewrite rules - simple, lightweight.
  • I will lose access to hit counts - I couldn't care less!
  • I will lose access to a back panel - that's fine; how often do I need to add a permalink? And how difficult is it to run kubectl apply -f k8s-spec.yaml?

Rewrite rules

Here is the Ingress resource with most of the rewrite rules removed for brevity:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: go-mustakim-ing
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      rewrite ^/cant-run-linqpad(.*) "https://about.mustak.im/cant-run-linqpad/$1" last;
      rewrite ^/linked[I|i]n(.*)     "http://uk.linkedin.com/in/mustakimali/$1" last;
      rewrite ^/(.*)$                "/";

spec:
  tls:
    - hosts:
      - go.mustak.im
  rules:
  - host: go.mustak.im
go-mustak-im.yaml

A few things to notice here:

  • Each of the rules only matches the first part of the request and preserves the rest of the URL across the redirect, so the rule /foo -> /dest will also match /foo/bar and redirect to /dest/bar.
  • All of the rewrite rules have the last flag so that nginx skips checking any other rules - to improve speed and to avoid any issues when multiple rules match.
  • The last rule is a catch-all for when none of the other rules match - it redirects to the homepage:
rewrite ^/(.*)$                "/";
]]>
<![CDATA[Firefox spell checker not working? are you missing the dictionary?]]>If the language in Firefox is set to something other than "English (United States)", the spell checker might not work despite the setting "Check your spelling as you type" checked in the preference screen. Here is what I have

In my case the language was selected

]]>
/firefox-spell-checker-not-working-are-you-missing-the-dictionary/5d5046628bf842000131d412Fri, 09 Aug 2019 22:45:00 GMT

If the language in Firefox is set to something other than "English (United States)", the spell checker might not work despite the "Check your spelling as you type" setting being ticked in the preferences screen. Here is what I have:

Firefox spell checker not working? are you missing the dictionary?

In my case the language was set to "English (United Kingdom)" but the spellchecker still wasn't working. It took me a while to figure out that you need to install a dictionary manually for your selected language. Head to the Dictionaries and Language Packs page and download the dictionary for the language you have selected.

Firefox spell checker not working? are you missing the dictionary?
Firefox dictionary for English - United Kingdom locale

Having recently switched from Chrome to Firefox, it feels strange that something like this wasn't taken care of by the browser itself.

Image by Gerd Altmann from Pixabay

]]>
<![CDATA[Server-side kubernetes nginx-ingress log analysis using GoAccess]]>/server-side-kubernetes-nginx-ingress-log-analysis-using-goaccess/5d4a02506a25420001a42cddWed, 07 Aug 2019 21:54:00 GMT

I manage a single node kubernetes cluster to run some of my fun side projects, and I am slowly getting rid of Google Analytics from my projects. Therefore I was looking for a server-side log analyser, and GoAccess seems to have what I need, mostly.

GoAccess is a very fast open source web log analyser and interactive viewer that runs in a terminal in *nix systems or through your browser. It provides fast and valuable HTTP statistics for system administrators that require a visual server report on the fly.

Server-side kubernetes nginx-ingress log analysis using GoAccess
GoAccess Dashboard

Parsing nginx-ingress container log

GoAccess recognises most common logs generated by IIS, Apache and nginx. However, the logs generated by the nginx-ingress container that sits as a gateway to the services running on my kubernetes cluster are quite different. Here is an example log entry:

{"log":"123.123.223.23 - [123.123.223.27] - - [30/Jul/2019:04:45:57 +0000] \"GET /about-me HTTP/1.1\" 500 117356 \"-\" \"Mozilla/5.0 (compatible; AhrefsBot/6.1; +http://ahrefs.com/robot/)\" 458 13.047 [mustakim-site-service-80] 172.17.0.17:5000 117349 13.048 500 830f5081a7740db11eb2ffbce625dafc\n","stream":"stdout","time":"2019-07-30T04:45:57.689166775Z"}

So I had to tell GoAccess how to parse these logs by providing values for the --log-format, --date-format and --time-format command line arguments.

Argument        Value
--log-format    %^ %^ [%h] - - [%d:%t] %~ %~ %m %U %^ %s %b %R %u %^ %^ %^ %^ %^ %T %^
--date-format   %d/%b/%Y
--time-format   %H:%M:%S +0000

Keep in mind that the log JSON property of the container logs (usually found in /var/log/containers/) does not always contain access logs similar to the example above; it also contains other miscellaneous logs generated by the container. Also, all the logs are combined into one or many files (depending on the number of replicas available for the ingress deployment). So I decided that whatever I do:

  • I need to grep with the service name to extract logs for a particular service.
  • If I need to generate a dashboard for all requests, then I'll simply grep Mozilla!

How it's done

  1. Get the names of all kubernetes services.
  2. Generate an index.html file that contains links to each static html file generated by GoAccess - in order to navigate easily. This goes to /storage/goaccess/out/, which is the root of a static web server already running.
  3. Combine all logs generated by the kubernetes nginx-ingress from /var/log/containers/.
  4. grep the logs to extract the logs of a particular service (this makes sure other logs are skipped).
  5. Copy the extracted log to a temporary location (in my case: /storage/goaccess/imported-logs/imported-log.log).
  6. Run GoAccess and pass the log as well as instructions on how to parse it.
  7. The generated static html (that renders the nice dashboard) goes to /storage/goaccess/out/{svc_name}.html.
  8. Repeat steps 3 - 7 for each of the services.
  9. Repeat the above, but grep Mozilla and save as all.html so we have another dashboard for all requests on the server.

The script

goaccess-kubernetes-nginx-ingress.py
#!/usr/bin/python

import os
import subprocess

def process_log_for_svc(svc,out):
  print('Processing ' + svc)

  os.system('find /var/log/containers/ | grep nginx-ingress | xargs sudo cat | grep ' + svc + ' > /storage/goaccess/imported-logs/imported-log.log')
  
  print("Parsing...")
  os.system('goaccess -f /storage/goaccess/imported-logs/imported-log.log --real-os --log-format="%^ %^ [%h] - - [%d:%t] %~ %~ %m %U %^ %s %b %R %u %^ %^ %^ %^ %^ %T %^" --date-format="%d/%b/%Y" --time-format="%H:%M:%S +0000" > /storage/goaccess/out/' + out + '.html')

  print("Cleaning...")
  os.system("rm /storage/goaccess/imported-logs/imported-log.log")

print('Getting all services')
all_svc=os.popen('kubectl get svc | tail -n +2 | awk \'{print $1}\'').read()

# drop empty entries left by the trailing newline in kubectl's output
all_svc_arr = [svc for svc in all_svc.split('\n') if svc]

print('Creating index.html')
index_html=''
for svc in all_svc_arr:
  # link to the per-service dashboard generated below ({svc}.html)
  index_html += '<a href="' + svc + '.html">' + svc + '</a><br/>\n'

# put the all-requests dashboard (all.html) at the top of the index
index_html = '<a href="all.html">-ALL-</a><br/>\n' + index_html

text_file = open("/storage/goaccess/out/index.html", "w")
text_file.write(index_html)
text_file.close()

print('Processing All Logs')
process_log_for_svc('Mozilla','all')

for svc in all_svc_arr:
  process_log_for_svc(svc,svc)

This is nowhere near perfect but it's a good start. I will keep the gist updated as I improve this.

]]>
<![CDATA[A plan, of, not a side project!]]>I am both a dark matter developer which consist of 99% of the developers in the world, as well as in the top 1%. That's because "I can't be seen", in terms of my activity in social media and developer community. I am just

]]>
/a-plan-of-not-a-side-project/5d49c09c6a25420001a42c10Tue, 06 Aug 2019 18:41:26 GMT

I am both a dark matter developer - the kind that makes up 99% of the developers in the world - and in the top 1%. That's because "I can't be seen" in terms of my activity on social media and in the developer community. I am just a 404. At the same time, I play with the latest technologies and use them at work and for personal projects. It's time to pick a side.

This might be like one of those side projects - the ones that never see the light 💡 of reality. However, I am going to give it a try for the first time. I would like to become active in the community, especially the .NET community. Let me tell you how I plan to do this.

Start writing something

Let me be clear, I mean it when I say something. Not being a famous person gives me the advantage of being able to write anything on the internet (= my site). Almost nobody will see it! However, by doing so, I get the satisfaction of doing something that all of the influencers do.

Why something, why not just tech stuff? Because I don't like to promise anything; that would limit my ability to pursue this. It's like "I need to survive" as opposed to "I need to get rich". My activity on social networks and in the developer community since I left my country has been zero, so moving up to one will be an achievement for me.

Again, there is no pressure whatsoever. I love my job, and it keeps me suitably busy - and just as I sometimes pick and do things that are unnatural for me, I am choosing this. I'm not promising how often I will post - let's pick a reasonable number: at least once per week.

Attend meetups

I have recently created an account on meetup.com and joined some developer groups (hint: most of them are .NET related, some Golang). I look for interesting meetups and I am going to go to one soon 🤞. Unsurprisingly, I am planning to write about that experience.

That is, unless something comes up! I had to abort the last one I registered for because something happened for the first time in a long time. Let's see if I can make it to this one.

Engage more at work

I will try to host brown-bags at work to share some of the experiences and technologies I am using. I've had requests from my colleagues at my current company - that gives me some confidence that nobody will eat me alive. That's because:

  1. They are supposed to eat their lunch and listen to the talk (that's why it's called brown-bag)
  2. They are some of the friendliest people I've worked with 🤞

Am I missing anything?

Let it be missing, I feel it's already too much.

Ctrl+C

Years of complete isolation since moving to a new country have turned an introverted person like me into somewhat of a deep introvert. It's time to do some experiments on that 💪 I repeat, "experiment"!

To follow my journey of the changes 👉 #moving-up

]]>