Category: CLI

  • Google Drive on Linux with rclone

    Recently Dropbox hit me with the following announcement:

    Basic users have a three device limit as of March 2019.

    Being a “basic” user relying on Dropbox across multiple machines, I got unreasonably upset (“How dare you deny me free access to your service?!”) and started looking for a replacement.

    I already store quite a lot of things in Google Drive, so it seemed like a no-brainer: I migrated all my machines to Google Drive overnight. There was only one problem: Google Drive has official clients for Windows and Mac, but there’s nothing for Linux.

    I found the Internets to be surprisingly sparse on the subject, and I ended up trying multiple solutions, spending more time on research than I’d like.

    The best solution for me turned out to be rclone, which mounts Google Drive as a directory. It requires the rclone service to be running constantly in order to access the data, which is actually a plus for me - I’ve accidentally killed the Dropbox daemon in the past and had to deal with conflicts in my files.

    Install rclone (instructions):

    curl https://rclone.org/install.sh | sudo bash
    

    From then on, the rclone website provides some documentation on the setup. I found it somewhat difficult to parse, so here it is paraphrased:

    Launch rclone config and follow the prompts:

    • n) New remote
    • name> remote
    • Type of storage to configure: Google Drive
    • Leave client_id> and client_secret> blank
    • Scope: 1 / Full access to all files
    • Leave root_folder_id> and service_account_file> blank
    • Use auto config? y
    • Configure this as a team drive? n
    • Is this OK? y

    From here on, you can interact with your Google Drive by running rclone commands (e.g. rclone ls remote: to list top-level files). But I’m more interested in a continuously running service, and mount is what I need:

    rclone mount remote: $HOME/Drive
    

    Now my Google Drive is accessible at ~/Drive. All that’s left is to make sure the directory is mounted on startup.

    For Ubuntu/Debian, I added the following line to /etc/rc.local (before exit 0, and you need sudo access to edit the file):

    rclone mount remote: $HOME/Drive
    

    For my i3 setup, all I needed was to add the following to ~/.config/i3/config:

    exec rclone mount remote: $HOME/Drive
    

    It’s been working without an issue for a couple of weeks now - and my migration from Dropbox turned out to be somewhat painless and quick.
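
    As an aside, on systemd-based distributions a user service is another way to keep the mount alive. This is a minimal sketch - the unit name, the rclone path, and the unmount command are my assumptions, so adjust to taste:

```
# ~/.config/systemd/user/rclone-drive.service (hypothetical unit name)
[Unit]
Description=Mount Google Drive via rclone
After=network-online.target

[Service]
# rclone mount runs in the foreground, which is what systemd expects here.
ExecStart=/usr/bin/rclone mount remote: %h/Drive
ExecStop=/bin/fusermount -u %h/Drive
Restart=on-failure

[Install]
WantedBy=default.target
```

    Enable it with systemctl --user enable --now rclone-drive.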

  • Desktop notifications from Chrome Secure Shell

    For the past year or two I’ve been working in the cloud. I use Chrome Secure Shell to connect to my machines, and it works rather well. In fact, I moved away from my work Linux/Mac laptops towards an HP Chromebook, which fulfilled both requirements I had: a browser and a terminal. One thing I missed from a Linux machine, though, was notify-send-like functionality, especially when working with long-running builds.

    Yesterday I pinged the hterm team for assistance with this matter, and it turns out a recent release of Secure Shell supports Chrome desktop notifications! Furthermore, two amazing engineers (thanks Andrew and Mike!) crafted an hterm-notify script, which propagates notifications to Chrome, and by extension to the desktop!

    I made a few tiny changes, mainly because I don’t use screen, and tmux sets my $TERM to screen-256color for some reason:

    #!/bin/sh
    # Copyright 2017 The Chromium OS Authors. All rights reserved.
    # Use of this source code is governed by a BSD-style license that can be
    # found in the LICENSE file.
    
    # Write an error message and exit.
    # Usage: <message>
    die() {
      echo "ERROR: $*"
      exit 1
    }
    
    # Send a notification.
    # Usage: [title] [body]
    notify() {
      local title="${1-}" body="${2-}"
    
      case ${TERM-} in
      screen*)  # This one's really tmux
        printf '\ePtmux;\e\e]777;notify;%s;%s\a\e\\' "${title}" "${body}"
        ;;
      *)        # This one's plain hterm
        printf '\e]777;notify;%s;%s\a' "${title}" "${body}"
        ;;
      esac
    }
    
    # Write tool usage and exit.
    # Usage: [error message]
    usage() {
      if [ $# -gt 0 ]; then
        exec 1>&2
      fi
      cat <<EOF
    Usage: hterm-notify [options] <title> [body]
    
    Send a notification to hterm.
    
    Notes:
    - The title should not have a semi-colon in it.
    - Neither field should have escape sequences in them.
      Best to stick to plain text.
    EOF
    
      if [ $# -gt 0 ]; then
        echo
        die "$@"
      else
        exit 0
      fi
    }
    
    main() {
      set -e
    
      while [ $# -gt 0 ]; do
        case $1 in
        -h|--help)
          usage
          ;;
        -*)
          usage "Unknown option: $1"
          ;;
        *)
          break
          ;;
        esac
      done
    
      if [ $# -eq 0 ]; then
        die "Missing message to send"
      fi
      if [ $# -gt 2 ]; then
        usage "Too many arguments"
      fi
    
      notify "$@"
    }
    main "$@"
    

    Saving this as ~/bin/notify (not forgetting to chmod +x it and to have ~/bin in $PATH), I can get a notification when a long-running command completes:

    sleep 30 && notify Hooray "The sleep's done!"
    
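    With the script on the PATH, a small wrapper makes any command report its outcome. The run_notify name is my own invention, and this is just a sketch assuming the notify script above is available:

```shell
# Run an arbitrary command, then send a notification with its exit status
# via the article's `notify` script (assumed to be on PATH).
run_notify() {
  "$@"
  status=$?
  if [ "$status" -eq 0 ]; then
    notify "Done" "$* finished successfully"
  else
    notify "Failed" "$* exited with status $status"
  fi
  return "$status"
}
```

    Usage: run_notify make -j8, run_notify sleep 30, and so on.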
  • Browsing MSSQL and Vertica from CLI

    Notes to make sure I don’t forget how to do this in the future. First, install the mssql and vcli tools:

    npm install -g sql-cli
    pip install vcli
    

    Encrypt desired database account passwords:

    mkdir -p ~/.passwd
    echo '$PASSWORD' | gpg --use-agent -e > ~/.passwd/$DB_ACCOUNT.gpg
    

    Set up a set of aliases with the desired level of flexibility in ~/.bashrc to avoid typing too much:

    function _sql-helper-command {
      host=$1
      user=$2
      password=$3
      db=$4
      opt_query_file=$5
    
      if [ -z "$opt_query_file" ]; then
        mssql -s "$host" -u "$user" -p "$password" -d "$db"
      else
        mssql -s "$host" -u "$user" -p "$password" -d "$db" -q "$(cat "$opt_query_file")"
      fi
    }
    
    function _vsql-helper-command {
      host=$1
      user=$2
      password=$3
    
      vcli -h "$host" -U "$user" -w "$password"
    }
    
    # Usage: `sql` for interactive mode, `sql filename.sql` to execute a file.
    function sql {
      opt_query_file=$1
    
      host='$SOME_HOST'
      user='$SOME_USER'
      password=$(gpg --use-agent --quiet --batch -d ~/.passwd/$SOME_FILENAME.gpg)
      db='$SOME_DB'
    
      _sql-helper-command "$host" "$user" "$password" "$db" "$opt_query_file"
    }
    
    # Usage: `vsql $VERTICA_HOST`
    function vsql {
      host=$1
      user=$(whoami)
      password=$(gpg --use-agent --quiet --batch -d ~/.passwd/$SOME_FILENAME.gpg)
    
      _vsql-helper-command "$host" "$user" "$password"
    }
    

    Replace $SOME_USER, $SOME_HOST, $SOME_DB, $SOME_FILENAME above with specific user, host, DB, and filenames respectively. I usually make a bunch of aliases for different environments/machines I use: sql-prod, sql-dev, sql-local or vsql-host1, vsql-host2.
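
    Concretely, a per-environment alias is just a thin wrapper around the helper. All the names below (host, user, DB, password file) are placeholders I made up:

```shell
# Hypothetical "dev" environment wrapper; relies on _sql-helper-command
# from the snippet above, and on a GPG-encrypted password file.
function sql-dev {
  opt_query_file=$1

  host='dev-db.example.com'
  user='dev_user'
  password=$(gpg --use-agent --quiet --batch -d ~/.passwd/dev_user.gpg)
  db='dev_db'

  _sql-helper-command "$host" "$user" "$password" "$db" "$opt_query_file"
}
```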

  • Profiling slow bashrc

    I’ve recently noticed that my bash takes a long time to load. I found the following StackOverflow answer to be useful, and I based my solution for finding the startup-time hog in my ~/.bashrc upon it.

    First off, add the following lines to your /etc/bash.bashrc, ~/.bash_profile, or wherever you’d like to begin tracing the script:

    PS4='+ $(date "+%s.%N")\011 '
    exec 3>&2 2>/tmp/bashstart.$$.log
    set -x
    

    And add the following lines where you want the trace to stop:

    set +x
    exec 2>&3 3>&-
    

    Now start your bash session (you can simply open a new terminal window). The above will create /tmp/bashstart.<PID>.log. To analyze it, I wrote a little Python script:

    import argparse
    import heapq
    
    parser = argparse.ArgumentParser(description='Analyze bashstart log for speed.')
    parser.add_argument('filename', help='often /tmp/bashstart.<PID>.log')
    parser.add_argument('-n', default=20, type=int, help='number of results to show')
    args = parser.parse_args()
    
    with open(args.filename) as f:
        q = []
        prev_time = None
        for line in f:
            line = line.split()
            if len(line) < 3 or '+' not in line[0]:
                continue
            text = ' '.join(line[2:])
            seconds, nanoseconds = line[1].split('.')
            # Combine both fields so diffs across a second boundary are correct.
            time = int(seconds) * 10**9 + int(nanoseconds)
            diff = time - prev_time if prev_time is not None else 0
            prev_time = time
            heapq.heappush(q, (diff, text))
    
    for diff, text in heapq.nlargest(args.n, q):
        print(diff / 10**9, 's:', text)
    

    Save it as bashprofile.py, and run it as follows (replace the file name with yours):

    python bashprofile.py /tmp/bashstart.2831.log -n 20
    0.050056909 s: _powerline_init_tmux_support
    0.045323022 s: _powerline_setup_prompt
    0.044722024 s: _powerline_setup_prompt
    0.044423727 s: '[' -f /usr/local/google/home/ruslano/.local/lib/python2.7/site-packages/powerline/bindings/bash/powerline.sh ']'
    0.044364097 s: '[' -f /usr/local/google/home/ruslano/.local/lib/python2.7/site-packages/powerline/bindings/bash/powerline.sh ']'
    0.044137159 s: _powerline_init_tmux_support
    0.015839574 s: __shell_name=bash
    0.010850276 s: command which which
    0.010105462 s: PS2='\[\]  \[\] \[\]'
    0.010000598 s: PS3=' Select variant  '
    0.009837956 s: complete -F _svn -o default -X '@(*/.svn|*/.svn/|.svn|.svn/)' svn
    0.009767517 s: PS2='\[\]  \[\] \[\]'
    0.0095753 s: PS3=' Select variant  '
    0.007915565 s: other_utils=(ant automake autoreconf libtoolize make mount patch readlink)
    0.00771205 s: for script in version functions/selector cd functions/cli cli override_gem
    0.007008299 s: for gnu_util in '"${gnu_utils[@]}"'
    0.00693653 s: complete -F _crow crow
    0.006803049 s: complete -F _svn -o default -X '@(*/.svn|*/.svn/|.svn|.svn/)' svn
    0.006672906 s: for script in version functions/selector cd functions/cli cli override_gem
    0.005912399 s: for entry in '${scripts[@]}'
    

    In my example, Powerline turned out to be a massive hog. Looks like I’ll have to troubleshoot its speed or just disable it.

    Don’t forget to remove the lines you added to your bash configuration files after you’re done profiling.

  • Managing cd bookmarks with apparix

    A couple of months ago I discovered apparix: a set of commands which augment cd with bookmarks. It really is an amazing feeling to zap between distant directories with just a couple of keystrokes! Apparix provides three commands I use daily: to, bm, and apparix (the program suggests aliasing the last one to als). Here’s how I use it:

    $ pwd
    /Users/ruslan
    $ apparix
    --- portals
    --- expansions
    --- bookmarks
    j dotfiles     /Users/ruslan/.dotfiles
    j blog         /Users/ruslan/Projects/ruslanosipov.github.io
    $ to blog
    $ pwd
    /Users/ruslan/Projects/ruslanosipov.github.io
    $ cd source/_posts
    $ bm posts
    added: posts -> /Users/ruslan/Projects/ruslanosipov.github.io/source/_posts
    $ to dotfiles
    $ pwd
    /Users/ruslan/.dotfiles
    $ to posts
    $ pwd
    /Users/ruslan/Projects/ruslanosipov.github.io/source/_posts
    

    The example above is self-explanatory: you can see how, over the span of a year, apparix saves hours of navigating directories you frequent.

    Installation

    If you don’t like reading manuals, installation might be confusing. But in reality it’s straightforward: you just need to add some functions or aliases to your shell’s configuration file.

    Install apparix using your favorite package manager, and then pipe the examples apparix offers into your shell’s rc file:

    apparix --shell-examples >> ~/.bashrc
    

    Open your .bashrc (or another corresponding configuration file) and pick the preferred way of using apparix: you’ll see functions for bash and aliases for csh given as examples. Choose whatever works for your shell, source your rc file, and you’re all set!
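
    For reference, the bash to function that apparix emits in its examples looks roughly like this - paraphrased, so treat it as a sketch rather than the canonical version:

```shell
# Jump to a bookmark: `to mark` (or `to mark subdir`). apparix resolves the
# bookmark to a path; on failure we fall back to "." so cd stays put.
function to {
  if test "$2"; then
    cd "$(apparix "$1" "$2" || echo .)"
  else
    cd "$(apparix "$1" || echo .)"
  fi
}
```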

    Happy jumping!

  • Ranger - the CLI file manager

    Ranger is a lightweight but powerful file manager with Vi-like key bindings. It shines at exploring file trees, looking for specific files, and performing bulk operations on folders and files. The three-column layout will feel familiar to Mac OS X users: the center column shows the contents of the current directory, the left column lists the contents of the parent directory, and the right column contains a preview of the selected file or folder.

    File preview screen in Ranger: parent directory in the left column, current directory in the center column, and selected file preview in the right column.

    Ranger supports movement with the h, j, k, and l keys familiar to Vi users, has an internal command line invoked with :, and offers many other features and key bindings similar to Vi. Another great selling point: Ranger can be extended with custom commands and key bindings. The utility is written in Python, so the commands are nothing more than Python scripts.
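
    As a sketch of what such an extension looks like (the mkcd command below is my own toy example, not something Ranger ships), a custom command lives in ~/.config/ranger/commands.py and subclasses Command:

```python
# ~/.config/ranger/commands.py -- hypothetical example command.
import os

from ranger.api.commands import Command

class mkcd(Command):
    """:mkcd <dirname> -- create a directory and enter it."""
    def execute(self):
        dirname = self.arg(1)
        if not dirname:
            self.fm.notify("Usage: mkcd <dirname>", bad=True)
            return
        os.makedirs(dirname, exist_ok=True)
        self.fm.cd(dirname)
```

    Restart Ranger to pick the new command up; it only runs inside Ranger itself.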

    Marking files for deletion in Ranger.  Files highlighted in yellow will be deleted by executing `:delete` command.

    Installation

    Ranger is easy to install and can be found in most public repositories: just install the ranger package using your favorite package manager. While you’re at it, you may want to install some external utilities to help Ranger properly display file previews (the list is taken from the ArchWiki page on Ranger):

    • atool for archives.
    • highlight for syntax highlighting.
    • img2txt (libcaca) for ASCII-art image previews.
    • lynx, w3m or elinks for HTML.
    • mediainfo or perl-image-exiftool for media file information.
    • poppler (pdftotext) for PDF.
    • transmission-cli for BitTorrent information.
    • w3m for image previews.

    After all the dependencies are installed, quickly start up ranger, exit it with q, and run ranger --copy-config=all to generate configuration files in ~/.config/ranger.

    Usage

    Here are a few of the key bindings and commands I found useful:

    • Use the spacebar to select files one by one. By selecting multiple files, you can perform bulk operations on them. Use V to perform a visual selection; lowercase v reverses the current selection. For instance, you can run :delete after selecting multiple files and folders.
    • As mentioned above, execute :delete to remove the currently selected file (or files).
    • To fullscreen the preview window, hit i. Hit i again to return the preview window to its normal size.
    • Vi’s gg and G allow you to jump to the top and bottom of the file list respectively.
    • Hit zh to toggle hidden files display.
    • As in Vim, / searches for a file in the current directory, while n and N let you navigate to the next and previous matches respectively.
    • Similarly, :filter allows you to limit your view to only the files matching a pattern. It’s also interactive: changes are applied as you type.

    If you’re an avid Vim user, you’ll find using Ranger surprisingly intuitive. Otherwise you might get confused and scared away, probably for a good reason. Ranger is designed to provide Vi-like feel for file browsing, and it does that job well.

  • Power of the command line

    Disclaimer: I am not advocating any specific tools or methodologies, but sharing a workflow I find to be efficient and pleasant.

    I am a huge fan of working with CLI applications. I use Vim for editing code, composing emails, and various kinds of writing. When I have to manipulate huge amounts of email, I use Mutt: its intuitive tagging and regular-expression engine are extremely useful for the task. I employ ack, awk, grep, and sed - Linux utilities which allow for precise and fast text manipulation.

    However, I would not use CLI browsers like elinks or w3m, and the idea of reading every email in Mutt gives me the creeps. I love the visualization a web browser offers, something a text-based prompt is not able to provide. And it doesn’t have to.

    There are two components to most of the tasks performed on a computer: analyzing output and entering input. Certain tasks employ one component more than the other. In most modern applications it’s rare to have both solid control from the user’s perspective and a pleasant, informative UI. With an increased visual component, it’s more time-consuming to make the application do what you need, especially if your needs are esoteric. With more editing power, the visual display becomes less complex in order to make editing tasks easier.

    Where visual tools fall short

    What is the alternative? Using multiple programs with different levels of control to accomplish one task: to edit text. Each of the programs excels in its own field: word-processing software allows for beautiful fonts and document presentation, an IDE lets you access aggregated meta information about your application. But most IDEs and word processors lack the powerful tools needed to manipulate the foundation of what the user is working with - plain text.

    Ode to plain text

    I spend a lot of time writing and editing plain text: source code, emails, documentation, even blog posts. These tasks take up a significant amount of my day, and it is only logical to trade some visual presentation capabilities for effectiveness.

    It is hard to mentally process data which is not explicitly and unambiguously visible: different levels of headings, hidden meta information. Unlike more obscuring formats, plain text is all there is - it has nothing to hide. If you don’t see it, it’s not there. If you do see it, you know exactly what it is.

    One of my favorite tips from “Pragmatic Programmer” goes:

    Use a single editor well

    So I learned one editor well, and now I use it for all my writing and editing needs. I don’t have to jump between an IDE, a browser, and office software: most of the text I edit is manipulated with one editor. There is only one set of key bindings to know, one skill to master and hone. Using a single text editor and all of its powerful features, fast and without additional thought, is imprinted in muscle memory. One less thing to worry about.

    I write my documents in Markdown format, and later convert them to the desired output using pandoc: be it an HTML page, PDF, or a Microsoft Word document. I use Vim, so I can rearrange paragraphs or manipulate lines within a couple of keystrokes. Since I spend so much time editing text, I also touch type, which makes me even more effective at the given task.

    Harness the power of the command line

    When it comes to bulk-manipulating files or working with version control, there is no better candidate than command line applications. There’s no need to go through a number of obscure menus, ticking and unticking checkboxes, and hoping that your desired result can be achieved with the program’s GUI.

    Let’s look at a few scenarios some users face in their daily workflow.

    Creating a backup

    With a GUI, you’d have to take multiple steps:

    1. Right click file.
    2. Left click on “Copy”.
    3. Right click on some empty space.
    4. Left click on “Paste”.
    5. Right click on a newly created copy.
    6. Left click on “Rename”.
    7. Switch to a keyboard.
    8. Type file.bak.

    The above steps can be sped up using shortcuts like C-c or C-v, but not by much. Here’s an alternative in bash:

    cp file{,.bak}
    

    While the first variant works great for a novice or a casual user, the second method is much preferred by an experienced user whose concern is speed.
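
    To see why the bash variant is so short: the shell expands file{,.bak} into file file.bak before cp ever runs. A quick demonstration in a scratch directory:

```shell
# Brace expansion rewrites `cp file{,.bak}` into `cp file file.bak`
# before the command runs.
cd "$(mktemp -d)"
echo "important data" > file
cp file{,.bak}
cmp file file.bak && echo "backup matches"   # prints "backup matches"
```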

    Recursively bulk replacing text in a directory

    Let’s assume we want to bulk-replace text in a directory and all of its subdirectories. We have our trusted IDE, and let’s assume it is already configured to work with the desired directory.

    1. Open your IDE.
    2. Select “Edit” menu.
    3. Select “Find and Replace” submenu.
    4. Click on a “Find” input field.
    5. Switch to a keyboard.
    6. Type function_to_replace.
    7. Switch to a mouse.
    8. Click on “Replace” input field.
    9. Switch to a keyboard.
    10. Type new_function_name.
    11. Switch to a mouse.
    12. Enable “Search in subdirectories” checkbox.
    13. Click “OK”.

    Again, this can be shortened a bit with some keyboard shortcuts, but not by much. You still have to switch between keyboard and mouse a total of 4 times, and you still have to click through all the menus. This gets time-consuming if you do it often. Now let’s try to perform the same task in the command line:

    find . -type f -print0 | xargs -0 sed -i 's/function_to_replace/new_function_name/g'
    

    Much faster, if you’re able to memorize the structure. And remembering what the commands do is much easier than it looks, especially with the help of man or, even better, bro (see http://bropages.org for the latter).

    The above example demonstrates one of the biggest advantages of command line interfaces: the ability to redirect the output of one program into another, chaining the tools together. In this example, we first get a list of all files using the find tool, and then run the sed tool on each of those files to replace the text.

    An output from any CLI tool can be fed into any other CLI tool. This allows for countless possibilities and high adaptability to unscripted scenarios.
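
    To illustrate the chaining further, here is a classic pipeline that ranks the most frequent words in its input; every stage is a standard utility doing one small job:

```shell
# Rank words by frequency: lowercase, one word per line, sort, count, order.
printf 'the cat and the dog and the bird\n' \
  | tr 'A-Z' 'a-z' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -3
```

    The top two lines are “3 the” and “2 and”; swap printf for cat access.log and the same pipeline analyzes real data.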

    Is it worth learning CLI tools over their GUI counterparts?

    This depends on what your intentions are. If you’re a power user who writes and edits a lot of text or manipulates bulk amounts of text on a daily basis, then it’s definitely worth it: time spent learning these tools will pay off. But if you’re a casual user whose needs end with writing an occasional email or two, then you probably don’t need to worry about this.

    Hell, if you’ve read this far, you’re the former case. I can practically guarantee that you will benefit from employing command line tools and modal editors over their GUI counterparts.

    I’ve put together a comparison table between the two. Indeed, there are times when either GUI or CLI tools excel:

    Factor                              CLI   GUI
    Ability to combine/chain tools      Yes   No
    Easy to learn                       No    Yes
    Efficient for a novice user         No    Yes
    Efficient for an experienced user   Yes   No
    Good for occasional use             No    Yes
    Good for repetitive tasks           Yes   No
    Presents visual information well    No    Yes

    As you can see, both CLI and GUI programs have their pluses and minuses. CLI tools appeal to experienced users, while GUI tools are great for novices and excel at presenting visual information. No matter what kind of interface you prefer, it’s crucial to use the right tool for the job.

  • Beyond grep

    I search for things a lot, especially in my code. Or even worse - someone else’s code. For years grep served as an amazing tool for this: fast, simple, and yet powerful. That was until I discovered ack: an incredibly easy-to-use grep replacement built to work with large (or not so large) code trees.

    A lot can be said about the superiority of ack over grep when it comes to working with code, and it’s all said here: ack’s features.

    The amazing thing is - ack doesn’t even need a tutorial. Learning happens naturally, “just happens” by researching use cases as the need arises (ack has a great manual entry).

    Here’s a typical use example for ack:

    ack --shell 'gr[ae]y'
    

    This searches all shell script files in the current code tree for any occurrences of “gray” or “grey”. It will search .sh, .zsh, and just about dot-anything; ack will even check shebang lines for you.
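
    For comparison, the closest GNU grep equivalent I can construct needs the extensions spelled out by hand, and it still cannot look at shebang lines (the scratch files below are just for demonstration):

```shell
# Scratch demo: a GNU grep approximation of `ack --shell 'gr[ae]y'`.
# Extensions must be listed manually; ack would also inspect shebangs.
cd "$(mktemp -d)"
printf 'a gray cat\n' > script.sh
printf 'a grey dog\n' > notes.txt
grep -rE --include='*.sh' --include='*.zsh' 'gr[ae]y' .
# prints: ./script.sh:a gray cat
```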

    Ease of use, readiness out of the box, extensive file-type support, native Perl regular expressions: ack does a really good job of searching through code.

    Download it from Beyond grep.

  • Effective search with Mutt

    I generally don’t use Mutt for everyday email - I find smooth non-monospace fonts more pleasant to the eye, and the visualization my browser offers is hard to beat. The main use case for me is composing long emails: Mutt lets me use my favorite text editor, which speeds up the editing of long and carefully composed responses.

    Recently I added a new use case to my workflow: searching through emails. Mutt has a powerful built-in regular-expression engine, which is something the Gmail web client is missing.

    Mutt has two ways of finding things: search and limit. “Search” simply jumps from one matching letter to the next, along the lines of what the / command does in less, more, or vim. “Limit” is closer to what I’m used to in the web client, and it’s what I use the most.

    Using limits

    Limit works the way regular search works in Gmail: it limits the view to conversations matching the query. Hit l, and enter a search query.

    By default, Mutt will only search through the subject lines, but this behaviour can be changed by prefixing the query with a special identifier. For instance, searching for ~b oranges will limit the view to all the messages which mention “oranges” in the message body. Here are the ones I use the most:

    • ~b – Search in the message body.
    • ~B – Search in the whole message.
    • ~f – Message originated from the user.
    • ~Q – Messages which have been replied to.

    You can find the full list in the Mutt Advanced Usage Manual.

    Patterns can be chained to produce narrower results: ~f joe ~B apples will search for a message mentioning “apples” coming from an author whose name contains “joe”.

    You may find that searching whole messages is slow, especially if you have more than a couple hundred messages to search through. That’s because, by default, Mutt does not store messages for local use. This can be changed by setting the header_cache and message_cachedir variables in your .muttrc file:

    set header_cache     = "$HOME/Mail"
    set message_cachedir = "$HOME/Mail"
    

    Now, after you perform your first search, Mutt will cache every message it opens, making all consecutive searches lightning fast.

    Oh, and keep in mind that Mutt stores messages and headers in plain text, so make sure the cache directory is not shared with anyone but yourself.

  • Three favorite bash tricks

    I spend most of my development time in the shell - be it editing text with Vim or executing various console commands. I have quite a number of tricks in my daily repertoire, and I would like to share three tips today.

    Edit current command with a text editor

    I often end up having to change a long command I just typed, and using arrow keys to get to the right spot is tedious. Bash has a feature which lets you edit the current command in a text editor of your choice. Hit Ctrl + x, Ctrl + e, and you will be dropped into your text editor. Now you can edit the command, and it will be executed as soon as you write the file and exit the editor.

    You can use the editor of your choice by adding the following line to your .bashrc file:

    export EDITOR=vim
    

    Replace vim with the name of your favorite editor.

    Update: It looks like on some machines setting the EDITOR variable is not enough. In this case, you also need to set the VISUAL environment variable.

    Edit recent command

    You can edit your recent commands in a text editor of your choice by executing fc beginning_of_the_command. For instance, if you run fc l, you will open the most recent command starting with the letter “l”.

    You can also execute fc without any arguments to edit the last executed command.

    Bash history autocomplete

    Another great feature is reverse incremental search. If you hit Ctrl + r in your shell, you’ll be greeted by the following prompt:

    (reverse-i-search)`':
    

    Start typing part of a command from your history, and you’ll see suggestions pop up. Hit Enter to pick the command (you’ll be able to edit it before executing), or press Ctrl + g to back out.

    Like any of these tips? Have some of your own? Don’t hesitate to share those in the comments section down below.

  • Elegant Mutt setup for use with Gmail

    I have been using Mutt for a while now. I wouldn’t say it saves me time, but neither does it extend the amount of time I spend reading email. For me, the best part about Mutt is that it lets me use the text editor of my choice - Vim. Everything else - keyboard shortcuts, minimalist design, and simplicity - already exists in Gmail.

    I found the configuration below to work really well for my needs: all the Gmail features important to me are translated. Here’s my .muttrc file:

    bind editor <space> noop
    set alias_file        = '~/.mutt/aliases.txt'
    set copy              = no
    set display_filter    = '$HOME/.mutt/aliases.sh'
    set edit_headers
    set editor            = "vim +/^$ ++1"
    set folder            = "imaps://imap.gmail.com/"
    set hostname          = "gmail.com"
    set imap_check_subscribed
    set imap_pass         = "$PASSWORD"
    set imap_user         = "$USERNAME"
    set mail_check        = 5
    set move              = no
    set postponed         = "+[Gmail]/Drafts"
    set spoolfile         = "+INBOX"
    set text_flowed       = yes
    unset imap_passive
    unset record
    
    # Gmail-style keyboard shortcuts
    macro index ga "<change-folder>=[Gmail]/All Mail<enter>" "Go to all mail"
    macro index gd "<change-folder>=[Gmail]/Drafts<enter>" "Go to drafts"
    macro index gi "<change-folder>=INBOX<enter>" "Go to inbox"
    macro index gs "<change-folder>=[Gmail]/Starred<enter>" "Go to starred messages"
    macro index gt "<change-folder>=[Gmail]/Trash<enter>" "Go to trash"
    macro index,pager d "<save-message>=[Gmail]/Trash<enter><enter>" "Trash"
    macro index,pager y "<save-message>=[Gmail]/All Mail<enter><enter>" "Archive"
    
    source $alias_file
    

    It is quite self-explanatory, and includes such nice features as:

    • Automatically adding addresses from read emails to address book (see below).
    • Using vim as a text editor, with an ability to edit message headers/recipients from within vim.
    • Ability to access all the default Gmail folders: All mail, Drafts, Inbox, Starred, Trash.
    • Key bindings to delete and archive messages bound to d and y respectively (I am a huge fan of a zero-mail inbox).

    You might also want to have your password encrypted with GPG as opposed to leaving it in plain text in your .muttrc file. You can read how to do this here: Using Mutt with GPG.

    As you may have noticed, the .muttrc above sets display_filter to $HOME/.mutt/aliases.sh. This script is executed every time you read an email, and it collects email addresses into $HOME/.mutt/aliases.txt. The contents of aliases.sh are below:

    #!/bin/sh
    # Extract the sender from the message's "From:" header, turn it into a
    # mutt alias line, and append it to the alias file if it's not there yet.
    
    MESSAGE=$(cat)
    
    # Strip commas and quotes, then build "alias <name> <address>" depending
    # on how many fields the From: line contains.
    NEWALIAS=$(echo "${MESSAGE}" | grep ^"From: " | sed s/[\,\"\']//g | awk '{$1=""; if (NF == 3) {print "alias" $0;} else if (NF == 2) {print "alias" $0 $0;} else if (NF > 3) {print "alias", tolower($(NF-1))"-"tolower($2) $0;}}')
    
    if grep -Fxq "$NEWALIAS" $HOME/.mutt/aliases.txt; then
        :
    else
        echo "$NEWALIAS" >> $HOME/.mutt/aliases.txt
    fi
    
    # Mutt expects the display filter to print the message back out.
    echo "${MESSAGE}"
    

    Source: W. Caleb McDaniel.

    This script creates an aliases.txt file containing email addresses for searching and auto-completion.

  • Using Mutt with GPG

    Mutt is a great command line email client, but it does not offer a built-in way to store passwords. That’s where GPG comes in. A while back I wrote an article on how to use GPG to store your passwords: GPG Usage. This is a more practical note about using GPG to store your passwords for Mutt, and it assumes you have already installed and configured GPG (which you can learn how to do in the article linked above).

    First, you will have to record the password into a GPG-encrypted file. Replace $PASSWORD with your password and $ACCOUNT with a desired account alias. You probably want to prefix this command with a space to avoid writing your password to the shell history file.

    echo '$PASSWORD' | gpg --use-agent -e > ~/.passwd/$ACCOUNT.gpg
    

    Next, open your ~/.muttrc file and add the following line:

    set imap_pass = "`gpg --use-agent --quiet --batch -d ~/.passwd/$ACCOUNT.gpg`"
    

    Again, replace $ACCOUNT with the same account alias you specified earlier. Now you don’t have to re-enter your password every time you start Mutt.

  • Use vimdiff as git mergetool

    Using vimdiff as a git mergetool can be pretty confusing - multiple windows and little explanation. This is a short tutorial which explains basic usage and what the LOCAL, BASE, and REMOTE keywords mean. It assumes that you have at least a little bit of basic vim knowledge (how to move, save, and switch between split windows). If you don't, there's a short article for you: Using vim for writing code. Some basic understanding of git and branching is required as well, obviously.

    Git config

    Prior to doing anything, you need to set vimdiff as a git mergetool:

    git config merge.tool vimdiff
    git config merge.conflictstyle diff3
    git config mergetool.prompt false
    

    This sets vimdiff as the default merge tool, displays a common ancestor while merging, and disables the prompt before opening vimdiff.
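
    For reference, the three commands above write the following into your .git/config (or ~/.gitconfig if run with --global):

    ```ini
    [merge]
        tool = vimdiff
        conflictstyle = diff3
    [mergetool]
        prompt = false
    ```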

    Creating merge conflict

    Let’s create a test situation. You are free to skip this part or you can work along with the tutorial.

    mkdir zoo
    cd zoo
    git init
    vi animals.txt
    

    Let’s add some animals:

    cat
    dog
    octopus
    octocat
    

    Save the file.

    git add animals.txt
    git commit -m "Initial commit"
    git branch octodog
    git checkout octodog
    vi animals.txt  # let's change octopus to octodog
    git add animals.txt
    git commit -m "Replace octopus with an octodog"
    git checkout master
    vi animals.txt  # let's change octopus to octoman
    git add animals.txt
    git commit -m "Replace octopus with an octoman"
    git merge octodog  # merge octodog into master
    

    That’s where we get a merge error:

    Auto-merging animals.txt
    CONFLICT (content): Merge conflict in animals.txt
    Automatic merge failed; fix conflicts and then commit the result.
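
    If you'd rather not make the edits interactively, the whole setup can be replayed as a script; sed stands in for the vi edits (GNU sed's -i flag is assumed):

    ```shell
    #!/bin/sh
    # Replay the steps above non-interactively, ending in the same conflict.
    set -e
    cd "$(mktemp -d)"
    git init -q zoo && cd zoo
    git config user.name "Test" && git config user.email "test@example.com"
    printf 'cat\ndog\noctopus\noctocat\n' > animals.txt
    git add animals.txt && git commit -qm "Initial commit"
    main=$(git symbolic-ref --short HEAD)  # master or main, depending on git
    git checkout -qb octodog
    sed -i 's/^octopus$/octodog/' animals.txt
    git add animals.txt && git commit -qm "Replace octopus with an octodog"
    git checkout -q "$main"
    sed -i 's/^octopus$/octoman/' animals.txt
    git add animals.txt && git commit -qm "Replace octopus with an octoman"
    git merge octodog || true  # fails with the conflict shown above
    grep '^<<<<<<<' animals.txt  # the conflict markers are now in the file
    ```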
    

    Resolving merge conflict with vimdiff

    Let’s resolve the conflict:

    git mergetool
    

    Three-way merge using vimdiff. Local changes are in top left, followed by a common ancestor, and branch `octodog` in the top right corner. Resulting file is at the bottom.

    This looks terrifying at first, but let me explain what is going on.

    From left to right, top to bottom:

    • LOCAL – the file from the current branch
    • BASE – the common ancestor; how the file looked before either change
    • REMOTE – the file you are merging into your branch
    • MERGED – the merge result; this is what gets saved in the repository

    Let’s assume that we want to keep the “octodog” change (from REMOTE). For that, move to the MERGED file (Ctrl + w, j), move your cursor to a merge conflict area and then:

    :diffget RE
    

    This gets the corresponding change from REMOTE and puts it in MERGED file. You can also:

    :diffg RE  " get from REMOTE
    :diffg BA  " get from BASE
    :diffg LO  " get from LOCAL
    

    Save the file and quit (a fast way to write and quit multiple files is :wqa).

    Run git commit and you are all set!

    If you’d like to get even better about using Vim, I wrote a book about it: Mastering Vim. I’m pretty proud of how it turned out, and I hope you like it too.

  • Download gists from prompt

    I wrote a little script to download gists from the command prompt.

    Generate your GitHub API token under Settings -> Applications, set it in the script, and then:

    chmod +x shgist.py
    mv shgist.py ~/bin/shgist
    

    Where ~/bin is a directory in your path. Now you can run shgist keywords to quickly download your gists (Gist on GitHub).

    #!/usr/bin/env python
    
    # Ruslan Osipov <ruslan@rosipov.com>
    # Usage: shgist keywords
    # Description: Gists downloader
    
    import urllib2
    import sys
    import json
    
    token = 'Personal API Access Token'  # Github Settings -> Applications
    
    class Gist:
        def __init__(self, token):
            """
            token -- str, github token
            """
            self.token = token
            self.url = 'https://api.github.com'
    
        def find_by_name(self, keywords):
            """
            keywords -- list of strings
            """
            gists, urls = self._get_gists()
            # Filter in a single pass: deleting list items while
            # iterating over them skips elements.
            matches = [(gist, url) for gist, url in zip(gists, urls)
                       if all(keyword in gist for keyword in keywords)]
            gists = [gist for gist, _ in matches]
            urls = [url for _, url in matches]
            if len(gists) == 0:
                print "Sorry, no gists matching your description"
                return
            if len(gists) == 1:
                self._download_gist(gists[0], urls[0])
                return
            for i, gist in enumerate(gists):
                print i, gist
            while True:
                num = raw_input("Gist number, 'q' to quit: ")
                if num == 'q':
                    print "Quiting..."
                    return
                try:
                    num = int(num)
                    if 0 <= num < len(gists):
                        break
                    print "Number should be within specified range"
                except ValueError:
                    print "Only integers or 'q' are allowed"
            self._download_gist(gists[num], urls[num])
    
        def _download_gist(self, name, url):
            """
            name -- str, filename
            url -- str, raw gist url
            """
            print "Downloading %s..." % name
            gist = self._send_get_request(url)
            open(name, 'wb').write(gist)
    
        def _get_gists(self):
            """
            Returns 2 lists which should be treated as ordered dict
            """
            url = '/gists'
            response = self._send_get_request(self.url + url)
            response = json.loads(response)
            gists, urls = [], []
            for gist in response:
                for name, meta in gist['files'].items():
                    gists.append(name)
                    urls.append(meta['raw_url'])
            return gists, urls
    
        def _send_get_request(self, url):
            """
            url -- str
            """
            headers = {
                    'Authorization': 'token ' + self.token
                    }
            request = urllib2.Request(url, headers=headers)
            response = urllib2.urlopen(request)
            return response.read()
    
    argv = sys.argv[1:]
    if not len(argv):
        print "Usage: shgist keywords"
        sys.exit(0)
    
    gist = Gist(token)
    gist.find_by_name(argv)
    
  • My most used bash commands

    Shell history can tell a lot about its owner. What’s in your shell?

    history | awk '{CMD[$2]++;count++;}
    END { for (a in CMD)print CMD[a] " " CMD[a]/count*100 "% " a;}' \
    | grep -v "./" | column -c3 -s " " -t | sort -nr | nl | head -n10
    
         1  580  38.0328%    git         # I keep everything under VCS
         2  202  13.2459%    cd          # Moving around a lot
         3  171  11.2131%    vi          # Favorite text editor
         4  127  8.32787%    ls          # I'm a curious person
         5  43   2.81967%    rm          # I also like when it's clean
         6  26   1.70492%    usrswitch   # https://gist.github.com/ruslanosipov/5453510
         7  25   1.63934%    exit        # I don't like hitting the red cross button
         8  18   1.18033%    source      # Reloading bash configuration files
         9  17   1.11475%    clear       # Like when it's *really* clean
        10  15   0.983607%   gitk        # Sometimes it is too messy for git log
    
  • Colorless week results

    A round-up of The Week Without Colorful Prompt.

    I worked with the colors disabled in bash, git, and vim for a week. So how did it go? It was definitely an interesting experience, but such a harsh change that it doesn't quite work for everything.

    Bash

    Disabling the colorful PS1 and removing color output for ls forced me to concentrate more on the actual text, changing my perception of the general bash workflow. I concentrated more on the task, missed fewer details, and generally paid more attention to the output.

    Git

    Never repeat my mistake of disabling colors for git diff. Log and status are fairly easy to read, but colorless diffs noticeably slow down the workflow.

    Vim

    Vim without syntax highlighting forces you to remember your code structure more effectively, which is a great thing. Not needing to rely on color can be a hint that a programmer has a better understanding of the code they are writing.

    Now that the experiment is over I have mostly returned to using a colorful prompt. But I do turn syntax highlighting off once in a while - it allows you to see problems from a new angle and work more efficiently at finding a solution. Try it and see for yourself!

  • A week without colorful prompt

    I noticed that I rely on colors in the bash terminal a lot, as in git output, diffs, directory and file listings… It gets worse when using vim - I feel lost without the cozy syntax highlight guidance.

    Time to stop using output colors for a week, whether in shell, git, or vim, and use only plain text with no fancy colors. Set git config --global color.ui false and don't use --color flags in shell. Also, run :syntax off and set a simple color scheme in vim.

    What can I gain from all this? It will definitely reduce my productivity for a few days. However, I have a hint of an idea that changing the visual code representation will give me new insight on what I am currently writing.

    Link to the related commit on GitHub.

    Check back in a week to see how it went!

  • Editing bash command in vim

    You can open the current command you are typing for editing in your default text editor by pressing Ctrl + x + e. It will be executed after you write and quit the file. This is perfect for editing long/multi-line commands where typos are likely to occur. Consider something like this:

    for run in {1..10}
    do
        echo "Print me ten times"
    done
    

    Editing this in vim is much more satisfying, isn’t it?

    You can also open the last executed command for editing by running the fc command, or edit the last command starting with a certain pattern using fc [pattern]. Adding the -s option skips the editor and executes the output of fc directly; a useful trick is to have alias r="fc -s", which lets you execute the last command starting with "cc" by running r cc.

    P.S: In order for this trick to open vim and not any other editor, make sure you have the line EDITOR=vim in your ~/.bashrc. Obviously this works with any text editor.

  • IRSSI - ignore all from everyone

    If you visit noisy IRC channels like the programming ones on freenode, you probably want to ignore all the annoying status messages.

    To permanently ignore joins, parts, quits, and nickname changes from every channel in IRSSI:

    /ignore * joins parts quits nicks
    /save
    

    I keep forgetting the exact syntax, so maybe clipping the snippet in a blog post will keep it in my memory.

  • Rename commit author in git

    In some extremely rare cases you end up pushing data to the repo with the wrong credentials. If you are the only author and you’re as picky as I am, it can be corrected easily:

    git filter-branch -f --env-filter \
    "GIT_AUTHOR_NAME='Stan Smith';
    GIT_AUTHOR_EMAIL='stansmith@cia.gov';
    GIT_COMMITTER_NAME='Stan Smith';
    GIT_COMMITTER_EMAIL='stansmith@cia.gov';" HEAD
    git push --force
    

    In the case of multiple people working on a project, you may want to use the following gist posted by anonymous: https://gist.github.com/anonymous/2523336/ (again, followed by git push --force).
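
    For the multi-author case, the same --env-filter idea works with a guard, so only commits matching the old email get rewritten. A self-contained sketch (the repository and emails below are made up for the demo):

    ```shell
    #!/bin/sh
    # Demo: rewrite the author only on commits made with the old email.
    set -e
    export FILTER_BRANCH_SQUELCH_WARNING=1  # newer git warns about filter-branch
    cd "$(mktemp -d)"
    git init -q
    git config user.name "Wrong Name" && git config user.email "old@example.com"
    echo hi > file.txt && git add file.txt && git commit -qm "Initial commit"
    git filter-branch -f --env-filter '
    if [ "$GIT_AUTHOR_EMAIL" = "old@example.com" ]; then
        GIT_AUTHOR_NAME="Stan Smith"
        GIT_AUTHOR_EMAIL="stansmith@cia.gov"
        GIT_COMMITTER_NAME="Stan Smith"
        GIT_COMMITTER_EMAIL="stansmith@cia.gov"
    fi
    ' HEAD
    git log --format="%an <%ae>"  # prints: Stan Smith <stansmith@cia.gov>
    ```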

  • Mintty color scheme (Cygwin)

    Softer colors for mintty.

    I find the default cygwin color palette to be a bit ugly, so here’s one that has softer colors. Add the following lines to your .minttyrc and restart cygwin in order to apply changes.

    ForegroundColour = 131, 148, 150
    BackgroundColour =   0,   0,   0
    CursorColour     = 220,  50,  47
    
    Black            =   7,  54,  66
    BoldBlack        =   0,  43,  54
    Red              = 220,  50,  47
    BoldRed          = 203,  75,  22
    Green            =   0, 200, 132
    BoldGreen        =   0, 200, 132
    Yellow           = 204, 204, 102
    BoldYellow       = 204, 204, 102
    Blue             = 102, 153, 204
    BoldBlue         = 102, 153, 204
    Magenta          = 211,  54, 130
    BoldMagenta      = 108, 113, 196
    Cyan             =  42, 161, 152
    BoldCyan         = 147, 161, 161
    White            = 238, 232, 213
    BoldWhite        = 253, 246, 227
    

    Update (December 2018): This theme is now packaged with the default Mintty distribution! Pull up Mintty/Cygwin and check for a theme called rosipov (I didn’t pick the name).

  • Rails and MongoDB with Cygwin

    Setting up Ruby on Rails with MongoDB on a Windows machine.

    You need to have cygwin installed with ruby and git packages (obviously you may want to have more).

    The following commands are executed in the cygwin prompt:

    git clone git://github.com/rubygems/rubygems.git
    cd rubygems/
    ruby setup.rb
    gem install rails
    

    Go to the MongoDB website and download Windows binaries: http://www.mongodb.org/downloads. Extract the content of the bin/ directory to C:\cygwin\usr\local\bin.

    Create a directory for the db files (the default MongoDB db files directory is C:\data\db):

    cd /cygdrive/c
    mkdir data
    mkdir data/db
    

    Done! Both mongo and rails are in your cygwin’s path now, feel free to tweak it as you see fit.

  • Git: merge two repositories

    Today I had to merge changes from one repository into another. Let’s assume you want to merge beta into alpha.

    Operations are performed in repo alpha:

    git remote add beta_repo git@rosipov.com:beta.git
    git fetch beta_repo
    git merge beta_repo/master
    

    In this case, beta_repo is the name you pick for the remote.

    If you just need to cherry-pick a certain commit from beta, replace the last step with a git cherry-pick of that commit.

    More on the topic of remotes: http://git-scm.com/book/ch2-5.html.
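
    Here is a local rehearsal of the steps above using two throwaway repositories. Note that git 2.9 and later refuses to merge unrelated histories unless you pass --allow-unrelated-histories:

    ```shell
    #!/bin/sh
    # Create repos alpha and beta locally, then merge beta into alpha.
    set -e
    cd "$(mktemp -d)"
    git init -q beta && cd beta
    git config user.name "B" && git config user.email "b@example.com"
    echo beta > beta.txt && git add . && git commit -qm "beta initial"
    cd .. && git init -q alpha && cd alpha
    git config user.name "A" && git config user.email "a@example.com"
    echo alpha > alpha.txt && git add . && git commit -qm "alpha initial"
    git remote add beta_repo ../beta
    branch=$(git -C ../beta symbolic-ref --short HEAD)  # master or main
    git fetch -q beta_repo "$branch"
    git merge --allow-unrelated-histories -m "Merge beta" FETCH_HEAD
    ls  # both alpha.txt and beta.txt are now present
    ```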

  • GPG Usage

    To encrypt and decrypt files in Linux there is a utility called gpg (GNU Privacy Guard). This is a short GPG tutorial.

    Quick usage example

    gpg -c foo.txt
    

    It will prompt you for the passphrase and a confirmation. Now you will have the encrypted foo.txt.gpg file. To decrypt a file:

    gpg -d foo.txt.gpg
    

    This will forward the output to the console. You can output it into a file:

    gpg -d foo.txt.gpg > foo.txt
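
    The round trip above can be scripted for a quick sanity check; with gpg 2.1+ the --batch and --pinentry-mode loopback flags supply the passphrase non-interactively instead of prompting:

    ```shell
    #!/bin/sh
    # Symmetric round trip: encrypt foo.txt, delete it, restore from .gpg.
    set -e
    cd "$(mktemp -d)"
    echo 'secret data' > foo.txt
    gpg --batch --yes --pinentry-mode loopback --passphrase demo -c foo.txt
    rm foo.txt
    gpg --batch --quiet --pinentry-mode loopback --passphrase demo -d foo.txt.gpg > foo.txt
    cat foo.txt  # prints: secret data
    ```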

    GPG keyring

    This is all secure, but not quite enough if you are paranoid. Keys are what make gpg great. Let's generate a private key:

    gpg --gen-key
    

    And create an ASCII version of a public key:

    gpg --armor --export "John Doe" --output johndoe.txt
    

    Public key johndoe.txt can be freely distributed. Now you can encrypt files for yourself only:

    gpg -e -r "John Doe" foo.txt
    

    Now if you decrypt a file it will require the passphrase you specified while generating a key. To encrypt a file for someone else you should have this person’s public key.

    Let’s assume Stan Smith sent you a key, stansmith.txt. You import it using:

    gpg --import stansmith.txt
    

    And encrypt the file:

    gpg -e -r "Stan Smith" foo.txt
    
  • Create gitolite repository

    A reminder on how to initialize a fresh gitolite repository, assuming that gitolite has already been set up.

    All actions are performed on a local machine. In this case, ~/gitolite-admin is the admin repository, ~/foo is the desired new repository, and rosipov.com is the gitolite hostname. The vi command stands in for a text editor; use whichever editor you prefer.

    cd ~/gitolite-admin
    vi conf/gitolite.conf
    

    Add lines (obviously you may want to use individual users instead of @all):

    repo foo
        RW+ = @all
    

    Save it. Next:

    git add conf/gitolite.conf
    git commit -m "Add foo repo for @all"
    git pull --rebase && git push
    mkdir ~/foo
    cd ~/foo
    git init
    git remote add origin git@rosipov.com:foo.git
    

    Add some files at this point. In this example, only .gitkeep is added.

    git add .gitkeep
    git commit -m "Initialize repo"
    git push origin master
    

    The new repository is all set up now.

  • GUI git difftool for Windows

    A quick note on how to set up a GUI difftool to use with git on Windows (Git Bash, Cygwin, etc.).

    Download and install a GUI diff tool of your choice and note the path to its executable.

    Create difftool.sh in a directory included in your path (for example C:\Users\{username}\bin in Git Bash). Let's take SourceGear's DiffMerge as an example.

    #!/bin/sh
    "C:/Program Files/SourceGear/Common/DiffMerge/sgdm.exe" "$1" "$2" | cat
    

    And in your ~/.gitconfig:

    [diff]
        tool = diffmerge
    [difftool "diffmerge"]
        cmd = difftool.sh "$LOCAL" "$REMOTE"
    

    The difftool is now available via the git difftool command.
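
    Alternatively, the same entries can be written with git config commands; note the cmd key, which tells git how to invoke the wrapper script:

    ```shell
    #!/bin/sh
    # Configure the custom difftool from the command line instead of
    # editing ~/.gitconfig by hand (throwaway HOME keeps the demo clean).
    set -e
    export HOME="$(mktemp -d)"
    git config --global diff.tool diffmerge
    git config --global difftool.diffmerge.cmd 'difftool.sh "$LOCAL" "$REMOTE"'
    git config --global diff.tool  # prints: diffmerge
    ```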