That One Blog Post That Gets All the Emails

In 2003 I spent half a day figuring out a bad Microsoft error. I hadn’t been able to Google the answer. And I guess this was before the days of Stack Overflow. So I wrote it up as a blog post – https://bluebones.net/2003/07/server-did-not-recognize-http-header-soapaction/

Because I’d been extremely frustrated by the error I included at the bottom, “If you’re having a similar problem but can’t work what I’m saying here, feel free to mail me on bakert+web@gmail.com – I wouldn’t wish my four hours on anyone!”

I never get email about any of my other blog posts. But Gmail tells me I’ve had at least 269 emails about this one. Including within the last two weeks! If you find this phenomenon interesting and wish to discuss it further, feel free to email me at bakert@gmail.com 😉

bakert’s 4th Law

The Backlog Never Gets Smaller

Previously:

  1. All Production Code is Shit
  2. It’s more important to have a standard than what the standard is
  3. TODOs don’t get TODOne

“Perfect” Libraries

Sometimes you use a third party library and the interface is so well designed it’s just effortless. Something that would have been gnarly and murky becomes simple. The kind of library that gets ported to multiple languages because everyone wants access to it.

One slightly obscure example is feedparser, (originally) Mark Pilgrim’s Python 2 library for reading Atom and RSS feeds. It hides all the nonsense of raw RSS and Atom XML behind a simple interface:

>>> import feedparser
>>> d = feedparser.parse('http://www.reddit.com/r/python/.rss')
>>> print(d['feed']['title'])
Python
>>> print(d.feed.subtitle)
news about the dynamic, interpreted, interactive, object-oriented, extensible programming language Python
>>> print(d.headers)
{'content-length': '5393', 'content-encoding': 'gzip', 'vary': 'accept-encoding', 'server': "'; DROP TABLE servertypes; --", 'connection': 'close', 'date': 'Mon, 14 Oct 2013 09:13:34 GMT', 'content-type': 'text/xml; charset=UTF-8'}
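
The entries list works the same way; for example:

print(d.entries[0].title)  # title of the newest post in the feed
print(d.entries[0].link)   # and its URL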

Another library with the same simplicity is Mustache, the logic-less template system. This one has been ported to literally dozens of languages. Every template I ever worked on was kind of a mess until I found Mustache. It’s actually the restrictions that make it sing.

Hello {{name}} 
You have just won {{value}} dollars! 
{{#in_ca}} Well, {{taxed_value}} dollars, after taxes. {{/in_ca}}
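
To see it render, here’s a rough sketch using chevron, one of the Python ports (the library choice and the example data are mine, not from the Mustache docs):

import chevron  # one of several Python Mustache implementations (pip install chevron)

template = '''Hello {{name}}
You have just won {{value}} dollars!
{{#in_ca}}Well, {{taxed_value}} dollars, after taxes.{{/in_ca}}'''

# The {{#in_ca}}…{{/in_ca}} section only renders because in_ca is truthy.
data = {'name': 'Chris', 'value': 10000, 'taxed_value': 6000, 'in_ca': True}

print(chevron.render(template, data))
# Hello Chris
# You have just won 10000 dollars!
# Well, 6000 dollars, after taxes.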

Some other examples:

  • web.py – Dead simple web framework
  • BeautifulSoup – HTML/XML parser
  • requests – Python library for HTTP (a quick sketch follows this list)
  • humps – Underscore-to-camelCase converter (and vice versa) for strings and object keys in JavaScript (has been ported to Python as pyhumps).
  • Markdown – Text format with HTML representation that has taken over the web due to its simplicity and usefulness compared to actual HTML
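
For instance, requests makes an HTTP call and hands you status, headers and parsed JSON with no ceremony (the GitHub URL below is just a convenient example endpoint):

import requests

r = requests.get('https://api.github.com/repos/psf/requests')
print(r.status_code)              # 200, if the request succeeded
print(r.headers['content-type'])  # e.g. application/json; charset=utf-8
print(r.json()['description'])    # the repository description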

Do you know any “perfect” libraries?

GPX to PostGIS, PostGIS to GPX

With ogr2ogr.

export CONN_STRING="host=localhost dbname=DATABASE user=USERNAME password=PASSWORD port=5432"
# Import
ogr2ogr -append -f PostgreSQL PG:"$CONN_STRING" /path/to/your.gpx
# Export
ogr2ogr -f gpx -nlt MULTILINESTRING /path/to/output/tracks.gpx PG:"$CONN_STRING" "tracks(wkb_geometry)"
ogr2ogr -f gpx -nlt MULTILINESTRING /path/to/output/routes.gpx PG:"$CONN_STRING" "routes(wkb_geometry)"
ogr2ogr -f gpx -nlt POINT /path/to/output/waypoints.gpx PG:"$CONN_STRING" "waypoints(wkb_geometry)"

The wkb_geometry references can be replaced with full SQL statements as required.
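
Once the data is in PostGIS you can query it directly; here’s a rough sketch with psycopg2 (the library choice and the ST_Length query are mine; the connection details mirror the CONN_STRING placeholders above):

import psycopg2

conn = psycopg2.connect(host='localhost', dbname='DATABASE', user='USERNAME', password='PASSWORD', port=5432)
with conn, conn.cursor() as cur:
    # Total length of all imported tracks, in meters (the geography cast measures on the spheroid).
    cur.execute('SELECT SUM(ST_Length(wkb_geometry::geography)) FROM tracks')
    print(cur.fetchone()[0])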

Short Variable Names in Go

In a recent code review my colleague took issue with the following code.

func Enqueue(ctx context.Context, properties Properties) (err error) {
	logger := logging.GetLogger(ctx)
	bs, err := json.Marshal(properties)
	if err != nil {
		logger = logger.With().Err(err).Logger()
	} else {
		logger = logger.With().RawJSON("properties", bs).Logger()
	}
	// … go on to log some stuff and enqueue the supplied event with the supplied properties …
}

Specifically, the question was whether `bs` was a reasonable name for the variable holding the JSON version of the properties. My counterargument was that short names are better than long names when they are well understood and/or short in scope, and that Go has a C influence and favors short variable names, which you can see in both the standard library and its examples. The Go encoding/json package calls a []byte variously src and data (code) and b, j and text (examples) – https://golang.org/pkg/encoding/json/

My colleague said it took them longer than zero seconds to understand the variable, so they called it out as a nit (not a blocker), and that they care more about knowing what is contained within than whether it is `[]byte` or not.

I ended up renaming it `propertiesJSON`. It did start a discussion about short variable names in general, and in Go in particular. I’m still not sure how I feel about it. I did find some reading that seemed relevant, though:

  • Notes on Programming in C (Variable names)
  • Go Code Review Comments (Variable Names): “Variable names in Go should be short rather than long. This is especially true for local variables with limited scope. Prefer c to lineCount. Prefer i to sliceIndex.”
  • What’s In a Name? (Local Variables): “Local variables. Keep them short; long names obscure what the code does … Prefer b to buffer.”

Share Clipboard Between Terminal (zsh) and OSX

### System-wide Clipboard mostly from https://gist.github.com/welldan97/5127861
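# Each pb-* widget below wraps the corresponding builtin zle widget and then
# copies $CUTBUFFER to the system clipboard with pbcopy; pb-yank goes the
# other way, pulling the clipboard into $CUTBUFFER with pbpaste before yanking.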

pb-kill-line () {
  zle kill-line
  echo -n $CUTBUFFER | pbcopy
}

pb-backward-kill-line () {
  zle backward-kill-line
  echo -n $CUTBUFFER | pbcopy
}

pb-kill-whole-line () {
  zle kill-whole-line
  echo -n $CUTBUFFER | pbcopy
}

pb-backward-kill-word () {
  zle backward-kill-word
  echo -n $CUTBUFFER | pbcopy
}

pb-kill-word () {
  zle kill-word
  echo -n $CUTBUFFER | pbcopy
}

pb-kill-buffer () {
  zle kill-buffer
  echo -n $CUTBUFFER | pbcopy
}

pb-copy-region-as-kill-deactivate-mark () {
  zle copy-region-as-kill
  zle set-mark-command -n -1
  echo -n $CUTBUFFER | pbcopy
}

pb-yank () {
  CUTBUFFER=$(pbpaste)
  zle yank
}

zle -N pb-kill-line
zle -N pb-backward-kill-line
zle -N pb-kill-whole-line
# This is too extreme - I often want to wrangle a commandline then paste into it.
#zle -N pb-backward-kill-word
#zle -N pb-kill-word
zle -N pb-kill-buffer
zle -N pb-copy-region-as-kill-deactivate-mark
zle -N pb-yank

bindkey '^K'   pb-kill-line
bindkey '^U'   pb-backward-kill-line
#bindkey '\e^?' pb-backward-kill-word
#bindkey '\e^H' pb-backward-kill-word
#bindkey '^W'   pb-backward-kill-word
#bindkey '\ed'  pb-kill-word
#bindkey '\eD'  pb-kill-word
bindkey '^X^K' pb-kill-buffer
bindkey '\ew'  pb-copy-region-as-kill-deactivate-mark
bindkey '\eW'  pb-copy-region-as-kill-deactivate-mark
bindkey '^Y'   pb-yank

Calculating Swiss Record Required to Reach Elimination Rounds

Here’s some Python that calculates how many players will finish on each record in a Swiss tournament, and the record required to reach a Top 8 or similar elimination cut. The expected number of players on a given record is binomial: num_players * C(num_rounds, losses) / 2 ** num_rounds (assuming everyone plays every round and there are no draws).

from typing import Sequence

# Math from https://www.mtgsalvation.com/forums/magic-fundamentals/magic-general/325775-making-the-cut-in-swiss-tournaments
def swisscalc(num_players: int, num_rounds: int, num_elimination_rounds: int) -> Sequence[float]:
    num_players_in_elimination_rounds = 2 ** num_elimination_rounds
    base = num_players / (2 ** num_rounds)
    num_players_by_losses = [0.0] * (num_rounds + 1)
    multiplier = 1.0
    total_so_far = 0.0
    record_required = None
    for losses in range(num_rounds + 1):
        wins = num_rounds - losses
        # Walk up the binomial coefficients: multiplier is C(num_rounds, losses).
        if losses > 0:
            multiplier *= (wins + 1) / losses
        num_players_by_losses[losses] = base * multiplier
        # The first record at which the running total covers the cut is the one you need.
        if record_required is None:
            total_so_far += num_players_by_losses[losses]
            if total_so_far >= num_players_in_elimination_rounds:
                record_required = f'{wins}–{losses}'
    print(f'Record required to make the cut: {record_required}')
    return num_players_by_losses

Example usage:

$ python3
>>> rounds = 4
>>> r = swisscalc(24, rounds, 3)
Record required to make the cut: 2–2
>>> for losses in range(len(r)):
...     print(f'{r[losses]} players at {rounds - losses}–{losses}')
... 
1.5 players at 4–0
6.0 players at 3–1
9.0 players at 2–2
6.0 players at 1–3
1.5 players at 0–4

git popclean

If you git stash when you have a bunch of ignored local files hanging around, git stash pop can refuse to un-stash your saved changes because those files already exist in the working tree. This command deletes the files git complains about and then pops again.

git stash pop 2>&1 | grep already | cut -d' ' -f1 | xargs rm && git stash pop