How to Actually Publish More Things

I recently published a post to my newsletter for the first time in a year, and mentioned a writing group that I had started. One of my readers asked for more details about it. So in this post I’ll talk about how the members of our small writing group, around ten people, keep each other accountable and encourage each other to write more.

The beginning

In August 2015, I decided to write more, and more regularly. My writing backlog had grown quite a bit, but I was not paring it down much. I had some recent experiences with a weekly fitness challenge and a weekly social media challenge (details to come), so I figured something like a weekly writing challenge would work well. Those challenges showed that social accountability helps me stay focused on a goal. Also, a system where I need to do something each week is flexible enough that I can make progress without being too onerous.

I didn’t just want to write, though. I wanted to have the writing be accessible to others. So I figured something like a weekly minimum to publish would be good.

I didn’t really want to have a financial component like the other two challenges. I wanted to see if the motivation of writing for the group would be enough to get people to write. Plus, I didn’t want to have to deal with the hassle of moving money around or setting up a system that was correctly motivational.

I sent a message to a few people I know who write a decent amount or want to be writing more, and we correspond via a small private Google group. The current group is mostly software / product / marketing people with varying levels of experience.

The process

The general process for the group is:

  1. publish something at least once a week
  2. make a short post on that week’s thread listing the things you published
  3. (optional) respond to others’ posts with minor feedback, encouragement, personal stories, etc.

The goal is to make participating in the group as small a time commitment as possible, leaving most of the time for actual writing. To encourage people to write more, publishing can be anywhere, any time. It can be across multiple blogs / platforms / books / podcasts, etc. Any topic is fine. The key is just to keep writing and to make public what you are writing.

Course-corrections

Things went pretty smoothly from the beginning.

Since the group was self-organizing, we had a little trouble figuring out how to split up the email threads. A thread per week is helpful for remembering to post once per week. Amitai suggested that we use the Unix week as the boundary. Conveniently, Sunday night at midnight begins a new week. The week is fairly easy to figure out1.

Some people got hung up a bit on feeling bad or guilty that they were not writing as much as they “should”. Some wanted to catch up when they missed a week by posting twice the following week. Generally we recommend just posting once a week and, if you miss a week, letting it go. Otherwise it can be easy to fall into a downward motivational spiral (“haven’t published anything in three weeks, need to post three things or I’m a failure.”) The point is never to feel bad or to complain, only to get writing out there, at whatever quality level or length the member determines is acceptable.

The results (so far)

I feel that as a group we have published quite a bit, and I have personally published at least ten times as much as I would have without the group.

Usually I have a lot of things that I know well enough that I can knock them out without too much work, but actually having a deadline helps me put in the time to write them out. There’s also the positive spiral of doing something every week.

I think the group dynamic is great, with people contributing feedback on the high-level points and not getting hung up on small formatting issues. Every week I read great posts that I otherwise might not have seen. The publishing has covered podcasts, books, works of fiction, technical blog posts, book reviews, role-playing writeups, heartfelt honesty, training material hosted on GitHub, and even some highly voted Hacker News comments.

One additional thing I like about the weekly publishing deadline is that it forces me to make definite progress each week. For example, I can’t say “I’m working on my book” for six months without writing anything up. I need to think about how to incrementally publish my work, publish bits and pieces or the artifacts of it, or write up other ideas.

Create your own groups

Trying to do something and want more chances to succeed? Consider forming a group with like-minded people that has some sort of daily or weekly requirement. Some of the members of the group are thinking about starting other groups with similar philosophies around product development, and it will be interesting to see how it plays out.

You can even form your own writing group. I would be happy to hear about it!

Does a writing group like this seem appealing? Would you like to join? Can you commit to writing something each week? Send me an email (domain name minus the .com at gmail.com) with what you are generally interested in writing about, and we can try to add you to the group!


  1. If someone is the first to post for the week, they make a new thread titled with the week of the year. You can get this with date "+%V" (the ISO week number) if you are on a Linux-like machine. If you have Ruby installed, ruby -e "require 'date'; puts DateTime.now.strftime('%V')" also works. 

Ignore URLs and Acronyms While Spell-checking Vim

Today’s post is a short one that I have incorporated into Writing With Vim and will publish when I add a few more changes.

When Vim’s spell check is enabled, words that are not known to be good are highlighted as incorrect. This is the behavior that we want generally. However, there are certain types of text that are commonly marked as incorrect, and it can be tedious or impossible to constantly add them to your dictionary.

One example would be URLs. These are marked as incorrect by Vim by default, but it would be nice to tell Vim to exclude them from spell checking so you can navigate spelling errors more quickly and reduce the amount of noise on the screen. You can accomplish this with:

 " Don't mark URL-like things as spelling errors
 syn match UrlNoSpell '\w\+:\/\/[^[:space:]]\+' contains=@NoSpell

Here, we are defining a new syntax group called UrlNoSpell (it could be called whatever you want), and the contains=@NoSpell clause tells Vim that matching text belongs to the special NoSpell cluster, so it is skipped during spell checking.

The pattern says to skip checking the spelling of any string which has word characters ([0-9A-Za-z_]) before a ://, continuing until a whitespace character is encountered. So any string starting with http://, https://, ftp://, or even anything:// would qualify as a match. There may be things that aren’t true URLs that match this pattern, but it should cover most cases and lead to few false negatives.
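If it helps to see the same logic outside of Vim, here is a rough Python equivalent of that pattern. This is just an illustrative sketch, not what Vim does internally (and Python’s \w is a bit broader than Vim’s, since it also matches Unicode letters):

```python
import re

# Rough Python equivalent of the Vim pattern '\w\+:\/\/[^[:space:]]\+':
# one or more word characters, then "://", then everything up to whitespace.
URL_LIKE = re.compile(r"\w+://\S+")

for text in ["see http://example.com for details",
             "anything://still-matches",
             "no url here"]:
    match = URL_LIKE.search(text)
    print(text, "->", match.group(0) if match else None)
```

As in the Vim version, the scheme part is deliberately loose: anything word-like followed by :// counts.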

When I looked through my spellfile, I saw many acronyms or abbreviations. Things like “AWS”, “B2B”, “CEOs”, “HL7”, “MP3”, “PDF”, and so forth. I removed all of these from the spellfile with the following command:

 " Don't count acronyms / abbreviations as spelling errors
 " (all upper-case letters, at least three characters)
 " Also will not count acronym with 's' at the end a spelling error
 " Also will not count numbers that are part of this
 " Recognizes the following as correct:
 syn match AcronymNoSpell '\<\(\u\|\d\)\{3,}s\?\>' contains=@NoSpell

This match works similarly to the previous one, except the pattern is different. The pattern says: if we have an entire word composed of three or more characters that are all either uppercase letters or digits, possibly followed by “s” (as in “CEOs”), then mark this word as good. Again, while this could lead to false negatives (“ZARBLATZZZZ” is marked as good, for instance), it might be something to consider for your spell setup.
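The acronym rule, again sketched as a Python regular expression for illustration (the same idea, not exactly what Vim evaluates):

```python
import re

# Rough equivalent of '\<\(\u\|\d\)\{3,}s\?\>': a whole word made of three
# or more uppercase letters / digits, optionally followed by a lowercase "s".
ACRONYM = re.compile(r"\b[A-Z0-9]{3,}s?\b")

for word in ["AWS", "B2B", "CEOs", "HL7", "MP3", "ZARBLATZZZZ", "Cat"]:
    print(word, "->", bool(ACRONYM.fullmatch(word)))
```

Note that “Cat” fails because only one of its characters is uppercase, while “B2B” passes because digits count toward the three-character minimum.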

I recommend adding these to your writing function so they are enabled whenever you are in a writing mood.

Running Old Programs In The Browser

I did my first programming in Applesoft BASIC on the Apple IIc, and then I did quite a bit in middle school and early high school in QBasic on a 486 machine.

I’ve been on a bit of an archival kick lately. I recently started a small project of getting some data off of my Apple IIc disks, which is going well. I thought: maybe there is a way to get my old QBasic programs running as well.

QBasic ran under MS-DOS, the command-line interface of pre-Windows and early Windows computers. You can see a more modern descendant by running cmd on a Windows machine. I wanted a way to look at my old programs, and to preserve and distribute them to others.

I looked around for a QBasic emulator and, except for the well-known and battle-tested DOSBox DOS emulator, could only find a partially-completed QBasic interpreter written in JavaScript. It didn’t have graphics mode, which many of the programs that I wrote used.

Archive.org has a huge collection of DOS games. This is great for preserving old software. You can even play old games, although any game saves won’t persist between page reloads1. The DOS collection runs on em-dosbox, which compiles the DOSBox DOS emulator from C++ to JavaScript using Emscripten. Fortunately, all of this is open source.

I was able to get a copy of the QBasic executable and program by digging through archive.org’s copy of the Microsoft website, extracting a self-extracting executable in a Windows virtual machine, then copying that file over to my Mac. Then I could copy my files into the right hierarchy and run a command to compile a directory of the executable and BASIC programs into something that could be run in the browser.

If you want to mess around with the programs right now, you can skip to the bottom of the post.

The tree structure that I had looked roughly like:

...
├── src
│   ├── dosbox.html
│   ├── dosbox.html.mem
│   ├── dosbox.js
│   ├── myqb
│   │   ├── QBASIC
│   │   │   ├── BASIC
│   │   │   │   ├── ...
│   │   │   ├── ...
│   │   ├── QBASIC.HLP
│   │   ├── qbasic.data
│   │   ├── qbasic.exe
│   │   └── qbasic.html
│   ├── packager.py
│   ├── qbasic.data
│   ├── qbasic.html
│   └── ...
├── ...

The command to compile it was:

$ ./packager.py qbasic myqb qbasic.exe

The first argument is the base name of the .html and associated .data file that will be generated. The second argument specifies where all of the files are (this is where C:\ is mounted). The last argument is which executable to run when DOSBox starts up.

You need to copy the following files to be able to host it standalone:

  • dosbox.html.mem
  • dosbox.js
  • qbasic.data (or whatever your .data file is called)
  • qbasic.html (or whatever your .html file is called)

The files in, say, myqb, aren’t actually needed at serve time, since they are packed into the .data file when you run the packager program.
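If you do this often, a small helper that gathers those four runtime files into a deploy directory saves some hand-copying. This is just a convenience sketch assuming the file names listed above (the demo runs against throwaway directories so it works anywhere):

```python
import pathlib
import shutil
import tempfile

# The files the standalone page needs; swap "qbasic" for whatever base
# name you passed to packager.py.
FILES = ["dosbox.html.mem", "dosbox.js", "qbasic.data", "qbasic.html"]

def collect(src_dir, dest_dir):
    """Copy the runtime files from the build tree into a deploy directory."""
    src, dest = pathlib.Path(src_dir), pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for name in FILES:
        shutil.copy(src / name, dest / name)

# Demo against stub files in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "src"
    src.mkdir()
    for name in FILES:
        (src / name).write_text("stub")
    collect(src, pathlib.Path(tmp) / "deploy")
    deployed = sorted(p.name for p in (pathlib.Path(tmp) / "deploy").iterdir())
    print(deployed)
```

Then point any static file server at the deploy directory.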

The programs

The QBasic programs are located here.

The DOSBox emulator will load when you go to that page and immediately run QBasic. To load a program, choose File, Open and navigate under the QBASIC directory. (Yes, the files back then needed to be in 8.3 format.) Under each folder is a set of programs and related assets.

There are a couple of potential issues with running the programs.

One is that some of the programs relied on me pressing Ctrl + Break, and most keyboards today don’t have the Break key. Plus, I think there are some issues with how DOSBox handles the Ctrl+Break combo normally, which is likely only exacerbated when running inside of a browser.

The other is that I originally had them in different places on my drives (hard and floppy) so the paths to certain assets might not be hooked up entirely correctly. But at least it is up there in some form.

Parting thoughts

If you had asked me back then whether I would one day be running programs that used all of the resources of the computer at the time in the web browser of a future computer, I’m not sure that I would have believed you. But, if exponential increases in hardware efficiency continue, the things that are most taxing today might be trivial in the future. Everything you are currently running on your computer might be emulated in a chat client in the future. :)

I may write up some of the programs in an index up there, but it might be a while before I get to it. Until then, feel free to poke around!


  1. There is an open issue related to the ability to persist disk changes between page reloads. 

Deciding From Multiple Open-Source Alternatives

In a Slack organization that I am a part of, someone asked:

Has anyone ever heard of anything that takes Github repos and assigns them some sort of “reliability score”? Something that takes stars, commit frequency, PRs, open/closed issue ratio, issue closing time, etc. into account and gives them a score? It would be really nice for choosing between two or three repos that do similar things.

I had some thoughts on this since it is something that I have done in the past, so I took them and extended them for this post.

Existing alternatives

The Ruby Toolbox tries to give a rough score for Ruby gems based on the features above, and it gets most of the way there. It provides a good overview of classes of gems to get a sense of how well maintained they are. Usually when I am trying to find a gem that does something, I start here. Over time you get a sense of a language’s ecosystem and this becomes less valuable, but it is still useful for finding new gems that might do that thing better.

Personal heuristics

I usually look at some combination of the following to determine whether a package or library is worth using:

  • Does it purport to do what I need done?
  • Does it have documentation?
  • How many people have starred / forked the repo?
  • How many issues are outstanding?
  • Are there many open / ignored pull requests?
  • Does it have tests?
  • Who is the maintainer of the library?

It doesn’t need to score highly on every metric, but more positive signals generally make me more confident in the decision.
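To make that checklist a bit more concrete, here is a toy Python sketch of how I weigh those signals in my head. The metric names, weights, and thresholds are all invented for illustration; nothing here is a real scoring standard:

```python
# Toy heuristic score for a repo; every weight and threshold is made up.
def repo_score(metrics):
    score = 0
    if metrics.get("does_what_i_need"):
        score += 3  # the non-negotiable signal
    if metrics.get("has_docs"):
        score += 2
    if metrics.get("has_tests"):
        score += 2
    if metrics.get("stars", 0) >= 100:
        score += 1
    if metrics.get("open_ignored_prs", 0) > 20:
        score -= 2  # the maintainer may have moved on
    return score

print(repo_score({"does_what_i_need": True, "has_docs": True,
                  "has_tests": True, "stars": 500}))  # prints 8
```

The exact numbers matter much less than the shape: one deal-breaker signal, a few positive signals that accumulate, and a penalty for signs of abandonment.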

More general considerations

To me, choosing an open source library comes down to cost of change. If it would take only a few hours to switch to the other library if one of them proves to be unmaintained or unsuited for the task at hand, then I would just pick one and proceed with implementing it. If it is something that would be very expensive (in terms of time or money), then it is a decision that should be considered more closely.

For example, the other day we tried a library on Github that didn’t have any stars. It seemed like it might get the job done and it was for a minor piece of functionality. Something that was more critical might demand a closer look.

If I am worried about committing to one over the other, I sometimes try to build an adapter layer so that we can seamlessly change which library we use. Basically, I take a bit of time now to increase optionality in the future, with a bit of the “last responsible moment” principle thrown in as well. If you defer the decision on what database to use and build an adapter layer, you can switch databases much more easily in the future.
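As a sketch of what I mean by an adapter layer (the classes here are hypothetical stand-ins, not real libraries):

```python
# Minimal adapter-layer sketch: application code talks only to
# KeyValueStore, so swapping the backing library is a one-line change.
class KeyValueStore:
    def get(self, key):
        raise NotImplementedError
    def put(self, key, value):
        raise NotImplementedError

class InMemoryStore(KeyValueStore):
    """Stand-in for "library A" (here, just a dict)."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

# An adapter wrapping "library B" would subclass KeyValueStore the same way.

store = InMemoryStore()  # the only line that knows which backend is in use
store.put("answer", 42)
print(store.get("answer"))  # prints 42
```

The rest of the codebase only ever imports KeyValueStore, so the cost of changing libraries stays close to the cost of writing one new adapter.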

Another approach is to try a spike of both and then use the knowledge that you gain to make a decision. If you learn that one is easier to work with, fits your domain better, or is faster, then you can act on that. Also, you buy yourself hours or days of additional information by watching the activity on the repos. At worst you end up with one working implementation (if you tried them in serial and were happy after the first) or two (if you were unhappy or tried them in parallel).

Similarly, Martin Fowler writes:

…one of the main source of complexity is the irreversibility of decisions. If you can easily change your decisions, this means it’s less important to get them right - which makes your life much simpler. The consequence for evolutionary design is that designers need to think about how they can avoid irreversibility in their decisions. Rather than trying to get the right decision now, look for a way to either put off the decision until later (when you’ll have more information) or make the decision in such a way that you’ll be able to reverse it later on without too much difficulty.

What this implies for my projects

Of course, the implication for any open source projects that I maintain is that, to provide the most value to others, they should clearly state what the project’s function is, they should be documented, the issues and pull requests should be reasonably gardened, and so forth.

Mac Karabiner Key Remaps I Use

Karabiner (formerly KeyRemap4MacBook) is a tool that lets you remap certain keys on your keyboard. I find this useful as a developer and as someone who writes prose in a few different programs.

Mega Fast Key Repeats

When I pair with someone for the first time or give a presentation, they usually ask “how are you moving the cursor so fast?” I use Karabiner to crank up the cursor speed. This is most useful for navigating quickly between letters, words, and sentences. Also, it’s useful for quickly killing characters. At first the cursor seems to be flying around, but soon you gain control and can’t imagine working another way.

My current settings (see the “Key Repeat” tab) are a “delay until repeat” of 250 milliseconds and a “key repeat” of 10 milliseconds. Any shorter delay until repeat causes me to add extra chaarractersss when I don’t intend to. Similarly, a 10 millisecond key repeat allows me to be pretty precise in cursor placement without sacrificing nearly top speed.

Try it for a week and I’d bet that you won’t go back to slow repeats.

Caps Lock -> Escape

I wrote about remapping caps lock to escape a long time ago, and my thoughts on it have not changed much. If anything, my muscle memory has made it more indispensable. The MacBook keyboard already has a control and alt on the left side, so I’m not sure why you would remap caps lock to those.

The current best way to remap it is to use Karabiner, which has it as an option (just search “caps”.) It works everywhere flawlessly, so there is not much more to say.

Emacs / Readline Shortcuts

Emacs and readline (bash, zsh, etc.) use an interesting set of meta (alt) and control commands to help you navigate around. You probably already use some of the built-in keyboard shortcuts for editing text. For example:

  • Ctrl+a to move to the beginning of the current line
  • Ctrl+e to move to the end of the current line
  • Ctrl+k to kill to the end of the line without copying to clipboard
  • Ctrl+b to move one character backward
  • Ctrl+f to move one character forward
  • Alt+backspace to delete word backward

If you’re a command-line or Emacs junkie, you might want more power or consistency. Plus, by making the behavior system-wide, you’ll make those muscle memories stronger. There are some Karabiner settings that emulate these behaviors:

  • Alt+b to move to previous word
  • Alt+f to move to next word
  • Alt+d to delete word forward
  • Ctrl+d to delete character forward
  • Ctrl+u to delete to the beginning of the line (copying to clipboard if you want)

You can also change Ctrl+k to copy the contents you killed to the clipboard. There are a ton of other Emacs commands, but I don’t use Emacs heavily enough to want all of those bindings. Some of the alt remaps will override the default Mac insertions (Alt+f usually inserts the “ƒ” character, for example), but I don’t use those often enough to justify keeping them around.

To find these settings, look under the “Emacs Mode” section of the main “Change Key” tab. I was digging around and saw this, and it has made me much more effective at moving around.

I primarily use Vim for writing longer prose, so these remaps don’t help there, but I write a non-trivial number of words in Slack and email. After making these changes I find that my editing in those programs is much faster and more intuitive. My readline editing speed and accuracy are up as well.

Hope this post helped you squeeze a little more power out of your keyboard. I’d love to hear about any keyboard hacks you get value out of. Leave a comment!