#1 2013-10-17 07:12:30

jocom
Member
Registered: 2009-04-29
Posts: 74

On tight integration with the shell (a pager and a browser)

I just hacked together a bunch of scripts that implement a pager and a browser tightly integrated with the shell.

Slogan:

Monolithic programs prevent synergy. Tight integration with the shell promotes synergy and is closer to the UNIX philosophy.

(Furthermore, it enables workflows that would otherwise require a mouse. It enhances accessibility.)

Background:

Some time ago I stumbled upon the nmh and mh mail clients (there is also an implementation in mailutils). I was intrigued by the concept, and surprised that they are not more popular. (One big problem with them is the lack of modern mailbox support, IMAP, and the like.)
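To give a flavour of the mh style (commands from memory; the folder name is just an example), every mail operation is an ordinary shell command:

[user@host ~]$ inc                  # fetch new mail into the inbox
[user@host ~]$ scan                 # list the messages, one per line
[user@host ~]$ show 3               # display message 3
[user@host ~]$ refile 3 +archive    # move it to the +archive folder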

Anyway, they allow really tight integration with the shell. Another such application is taskwarrior [pkg].

Example:
Using taskwarrior and mh, it is very easy to add a task description to an email. In the other direction, one can easily annotate a task with an email.
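For instance, roughly (the task number and draft file are just examples; task _get, task annotate and mhpath are the real commands):

[user@host ~]$ task _get 12.description >> draft.txt        # append the description of task 12 to an email draft
[user@host ~]$ task 12 annotate "see mail $(mhpath cur)"    # annotate task 12 with the path of the current message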

I realised git [pkg] is also such an app. To illustrate what I do not want: you do not use the command `git` to enter a monolith that provides its own shell:

[user@host ~]$ git
git> add myfile
git> commit
git> push origin master
git> quit
[user@host ~]$
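Instead, the real git stays in your shell, where every step composes with ordinary tools:

[user@host ~]$ git add myfile
[user@host ~]$ git commit -m "Add myfile"
[user@host ~]$ git log --oneline | head -3    # output pipes straight into other commands
[user@host ~]$ git push origin master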

The monolithic version would feel really stupid, right? Another program that illustrates what I mean is surfraw [pkg] (usually abbreviated to sr).

OK, so much for background and illustration. On to my scripts.

My scripts

Warning: First of all, I want to stress that I am a bad programmer, my scripts are very fragile, and they provide a crude user experience. This is really just a proof of concept.

Consider the following scenario.

Scenario:
Suppose you are logged in to some server via ssh. No mouse, no X, no screen or tmux. You are writing an email, and want to quote some text from Wikipedia. What do you do?

I provide some scripts that give a crude pager and browser tightly integrated with the shell. The pager p is akin to less (or more), but it just dumps its output into the terminal. You use pd (page down), pu (page up), and pj and pk (scroll 3 lines down/up) to navigate through the output; pcur prints the current page again (see the example below).
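In essence, pd boils down to something like this (a simplified sketch, not the actual code; the cache and offset file names are just illustrative):

# pd -- print the next screenful of the cached dump
# assumes p saved its stdin to ~/.p/cache and keeps the current line offset in ~/.p/offset
cache=$HOME/.p/cache
state=$HOME/.p/offset
page=$(( $(tput lines) - 1 ))                # one screenful, minus a line for the prompt
offset=$(cat "$state" 2>/dev/null || echo 0)
tail -n +"$(( offset + 1 ))" "$cache" | head -n "$page"
echo $(( offset + page )) > "$state"

pu, pj and pk are the same idea with different offset arithmetic.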

The browser b just runs a URL through elinks -dump and saves the output in a cache file. Typically, one would pipe the output of b through p. bl n follows link number n, and bcur just prints the contents of the cache file to the terminal (for example, to pipe it through p if one has not done so yet).
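Again in essence (a simplified sketch, not the actual code; it assumes elinks appends its numbered References list to the dump, and the cache location is just illustrative), b and bl are two tiny scripts, roughly:

# b -- dump a URL as text, cache it, and print it
cache=$HOME/.b/cache
mkdir -p "$HOME/.b"
elinks -dump "$1" | tee "$cache"

# bl -- follow link number $1 from the cached dump
cache=$HOME/.b/cache
url=$(sed -n "s/^ *$1\. \(http.*\)/\1/p" "$cache" | head -n 1)
b "$url"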

The bundles of scripts are available at https://github.com/jcommelin/p and https://github.com/jcommelin/b .

Back to the scenario. With my scripts I can now do:

[user@host ~]$ sr wikipedia linux | p # sr the linux wiki page, pipe through p
[user@host ~]$ pd # page down
[user@host ~]$ pd
[user@host ~]$ pj 10 # scroll 10 lines down
[user@host ~]$ pcur | tail -12 >> email.txt # select last 12 lines of current paging, append to email

(I have obviously omitted the output of the commands, as it would fill the entire screen.)

OK, this post is long enough already. I would appreciate any feedback, for example:

  • Other apps that take the 'tight integration' approach.

  • A niche of apps still lacking such a program.

  • Feedback on my scripts (I don't mind converting stuff to C; it should become pretty robust in the end).

FYI: I already plan on adding browsing history to b.
