Different ways of getting a development environment

I like to experiment with different technologies, be it languages, frameworks, or even VMs and cloud-related tools, but installing them on my main machine can cause conflicts with my work environment or pollute my dotfiles. For the past few weeks I have been going through different solutions for segregating environments, and I found a few good ones:

  • Nix - nix-shell
  • Container - OCI (Docker or Podman) and LXC
  • VMs - cloud or local

Nix - nix-shell

From Wikipedia - Nix is a cross-platform package manager that utilizes a purely functional deployment model where software is installed into unique directories generated through cryptographic hashes. Dependencies from each software installation are included within each hash, solving the problem of dependency hell. This novel approach to package management promises to generate more reliable, reproducible, and portable packages.

nix-shell is a tool that comes with Nix and helps us create environments by setting up the necessary packages and environment variables. When creating an environment, Nix uses packages available in its store and also caches all packages on your local machine. If some package is not available in the remote store, you can create your own package and reuse it later. It also helps in keeping a consistent work environment when switching between systems.
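
For a quick, throwaway environment you don't even need a file; nix-shell accepts packages directly on the command line with -p:

# ad-hoc shell with Node.js and Lua on PATH, no .nix file needed
nix-shell -p nodejs lua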

Nix uses .nix files to manage configuration, and this can be extended to whole operating system configuration: NixOS is built on this, and the same can be done on other *nix-based systems like Linux and macOS (using nix-darwin).

Here are example .nix files: the first sets up Node.js, wasmtime, and the Rust toolchain, the second sets up Lua. The nix-shell executable looks for a shell.nix or default.nix file in the current directory, or you can pass a .nix file when running it, e.g. nix-shell lua.nix.

# default.nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  name = "dev-shell";
  buildInputs = [
    pkgs.nodejs
    pkgs.wasmtime
    pkgs.rustc
    pkgs.cargo
  ];
}
nix-shell
# lua.nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  name = "dev-shell";
  buildInputs = [
    pkgs.lua
  ];
}
nix-shell lua.nix
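
Once inside the shell, the packages resolve from the Nix store; a quick sanity check looks something like this (exact versions depend on your nixpkgs channel):

nix-shell lua.nix
which lua    # points into /nix/store/...
lua -v       # lua from the Nix store is now on PATH
exit         # leaving the shell removes it from PATH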

By default, nix-shell uses bash in the environment it creates; if you want zsh or fish instead, there is a project that solves exactly this issue: any-nix-shell.

haslersn/any-nix-shell
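
From memory of its README (check the repo for current instructions), setup is a one-liner in your shell config:

# ~/.zshrc - per the any-nix-shell README; the exact flags may change
any-nix-shell zsh --info-right | source /dev/stdin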

Sometimes packages are not available in Nix; for example, I tried to find Node.js 5 for an old project but couldn't. In such cases it's better to go with Docker than to build your own Node.js package for Nix, for two reasons: first, Docker is easy to use, and second, the build takes too much time even if you know how to do it with Nix. One advantage of Nix, though, is that you can also build Docker images with it.
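
As a sketch of that last point: nixpkgs ships dockerTools for building images, so assuming a docker.nix (hypothetical file name) that calls pkgs.dockerTools.buildImage, the flow from the shell is just:

# assuming docker.nix calls pkgs.dockerTools.buildImage
nix-build docker.nix     # builds the image tarball and symlinks it as ./result
docker load < result     # load the tarball into the local Docker daemon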

Containers - Docker and Podman

When developing projects, you need databases, queues, caches, and sometimes other services running, and I don't like running everything on my main system, so I use Docker for those services. I also use docker-compose to create the whole environment with a single command and to make the environment persistent.

I am trying to run the same thing on Podman, using podman-machine on macOS the same way as docker-machine. Podman provides a daemonless container engine. Currently I use podman to run individual services like postgres and use Docker with docker-compose for everything else.
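
podman mirrors the docker CLI, so running a single service looks the same as with docker run; for example, a throwaway postgres:

# run a single postgres service with podman (same flags as docker run)
podman run -d --name pg \
    -e POSTGRES_PASSWORD=postgres \
    -p 5432:5432 \
    postgres:9.6.16
podman ps              # list running containers
podman stop pg         # stop it when done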

Also, most of my things are deployed on Kubernetes, on Docker via Elastic Beanstalk, or on Heroku, which makes it easy for me to shift from docker/docker-compose to those environments.

If you have a docker-compose file, it can also be converted to Kubernetes manifests using a program called kompose, and there is another option called skaffold, which can be used with Kubernetes for the whole development lifecycle, from local development to production deployment.
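
kompose takes the compose file directly; a conversion like the one used for the manifests later in this post would be:

# convert a compose file into Kubernetes manifests under kubernetes/
kompose convert -f docker-compose.yml -o kubernetes/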

For example, for a project consisting of Flask, React, and Postgres, the docker-compose file can be as follows. I don't copy the code into my Docker image but share it as a volume from host to container, so it can reload on save and I don't have to rebuild the image every time.

version: "2"

services:
    db:
        image: postgres:9.6.16
        environment:
            - POSTGRES_USER=postgres
            - POSTGRES_PASSWORD=postgres
            - POSTGRES_DB=blog
        ports:
            - 5432 # if you want to expose it then 5432:5432

    server:
        build: server # using monorepo example - Dockerfile is in server folder
        image: server
        environment:
            - DATABASE_URL=postgres://postgres:postgres@db:5432/blog
        volumes:
            - ./server:/app
        ports:
            - 8000:8000 # exposing port to host also to use with Postman or insomnia
        links:
            - db
        depends_on: # make sure db starts first
            - db

    client:
        build: client # using monorepo example - Dockerfile is in client folder
        image: client
        volumes:
            - ./client:/app
        ports:
            - 3000:3000 # exposing port to host also to access via browser
        links:
            - server
        depends_on: # make sure server starts first
            - server
# if you want to run using docker-compose
docker-compose up

# rebuild images when starting up
docker-compose up --build 

# restart a single service
# docker-compose restart <service_name>
docker-compose restart server

# stop containers
docker-compose stop

# remove containers
docker-compose down

The above can be used with skaffold like this, where the *.yml files are manifests created using kompose.

apiVersion: skaffold/v2beta1
kind: Config
metadata:
    name: my-blog
build: # default build
    local:
        useDockerCLI: true
    artifacts:
        - image: <image_repo_address>
deploy:
    kubectl: # using kubectl to deploy - other methods like helm are also supported
        manifests:
            - kubernetes/server-deployment.yml
            - kubernetes/client-deployment.yml
            - kubernetes/server-service.yml
            - kubernetes/client-service.yml
            - kubernetes/config.yml
profiles:
    - name: dev
      activation:
          - command: dev # activate this profile on skaffold dev
      build:
          local:
              useDockerCLI: true
          artifacts:
              - image: <image_repo_address>
      deploy:
          kubectl:
              manifests:
                  - kubernetes/server-deployment.yml
                  - kubernetes/client-deployment.yml
                  - kubernetes/server-service.yml
                  - kubernetes/client-service.yml
                  - kubernetes/dev-config.yml
    - name: prod
      build:
          local:
              useDockerCLI: true
          artifacts:
              - image: <image_repo_address>
      deploy:
          kubectl:
              manifests:
                  - kubernetes/server-deployment.yml
                  - kubernetes/client-deployment.yml
                  - kubernetes/server-prod_service.yml
                  - kubernetes/client-prod_service.yml
                  - kubernetes/prod-config.yml
# run skaffold in dev environment
skaffold dev

# explicitly run dev profile
skaffold run -p dev

# run prod profile
skaffold run -p prod

# build only prod
skaffold build -p prod

# deploy only prod
skaffold deploy -p prod

Full OS

If I need a full machine, I can go with a VM on a hypervisor like VirtualBox using vagrant, or with multipass. There is also the cloud option: if I need a powerful machine for some task, or my internet is too slow to upload Docker images to a private registry, I build them on cloud VMs and upload from there. Sometimes I also use LXC/LXD when I need to boot up a whole OS fast, or when it's a distro whose ISO image I don't have; LXC images are quick to boot.
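
With multipass, getting a full Ubuntu VM is a couple of commands (the name and sizes below are arbitrary examples, and flags can differ between multipass versions):

# launch an Ubuntu VM; name and sizes are arbitrary
multipass launch --name dev --cpus 2 --mem 4G --disk 20G
multipass shell dev                      # open a shell inside the VM
multipass delete dev && multipass purge  # tear it down when done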

For running LXC on macOS, I use multipass by Canonical and treat the VM like a remote LXC server.
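
As a rough sketch of that setup (assuming the Ubuntu image ships the LXD snap; names here are arbitrary):

# a multipass VM acting as the LXD host
multipass launch --name lxd-host
multipass exec lxd-host -- sudo lxd init --auto               # initialize LXD with defaults
multipass exec lxd-host -- sudo lxc launch ubuntu:18.04 quick # start a container in the VM
multipass exec lxd-host -- sudo lxc exec quick -- hostname    # run a command inside it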

Links of projects: