The Backlog Never Gets Smaller
Previously:
- All Production Code is Shit
- It’s more important to have a standard than what the standard is
- TODOs don’t get TODOne
Adventures in Computer Programming – bakert@gmail.com
The internet has not been an unequivocal good for mankind, but this is pretty great …
Sometimes you use a third party library and the interface is so well designed it’s just effortless. Something that would have been gnarly and murky becomes simple. The kind of library that gets ported to multiple languages because everyone wants access to it.
One slightly obscure example is feedparser, (originally) Mark Pilgrim's Python library for reading Atom and RSS feeds. It hides all the nonsense of raw Atom and RSS XML behind a simple interface.
>>> import feedparser
>>> d = feedparser.parse('http://www.reddit.com/r/python/.rss')
>>> print(d['feed']['title'])
Python
>>> print(d.feed.subtitle)
news about the dynamic, interpreted, interactive, object-oriented, extensible programming language Python
>>> print(d.headers)
{'content-length': '5393', 'content-encoding': 'gzip', 'vary': 'accept-encoding', 'server': "'; DROP TABLE servertypes; --", 'connection': 'close', 'date': 'Mon, 14 Oct 2013 09:13:34 GMT', 'content-type': 'text/xml; charset=UTF-8'}
Another library with the same simplicity is Mustache, the logic-less template language. This one has been ported to literally dozens of languages. Every template I ever worked on was kind of a mess until I found Mustache. It's actually the restrictions that make it sing.
Hello {{name}}
You have just won {{value}} dollars!
{{#in_ca}} Well, {{taxed_value}} dollars, after taxes. {{/in_ca}}
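To show why the restrictions work, here is a toy sketch of logic-less rendering in Python. This is not the real Mustache library (which also handles HTML escaping, lists, partials, inverted sections and more); it supports just the two features used above, with hypothetical example data.

```python
import re

# A toy sketch of logic-less rendering: {{var}} substitution plus
# {{#section}}...{{/section}} blocks that render only when the key is truthy.
# NOT the real Mustache library; illustration only.
def render(template: str, data: dict) -> str:
    def section(match: re.Match) -> str:
        key, body = match.group(1), match.group(2)
        # Keep or drop each section depending on its key.
        return body if data.get(key) else ''
    template = re.sub(r'{{#(\w+)}}(.*?){{/\1}}', section, template, flags=re.DOTALL)
    # Then substitute simple {{var}} tags.
    return re.sub(r'{{(\w+)}}', lambda m: str(data.get(m.group(1), '')), template)

print(render(
    'Hello {{name}}. You have just won {{value}} dollars! '
    '{{#in_ca}}Well, {{taxed_value}} dollars, after taxes.{{/in_ca}}',
    {'name': 'Chris', 'value': 10000, 'taxed_value': 6000.0, 'in_ca': True},
))
# Hello Chris. You have just won 10000 dollars! Well, 6000.0 dollars, after taxes.
```

Because the template language can't express logic, all the decisions live in the data you pass in, which is exactly what keeps real Mustache templates clean.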
Some other examples:
Do you know any “perfect” libraries?
You can import and export GPX files to and from PostGIS with ogr2ogr.
export CONN_STRING="host=localhost dbname=DATABASE user=USERNAME password=PASSWORD port=5432"

# Import
ogr2ogr -append -f PostgreSQL PG:dbname=DATABASE_NAME /path/to/your.gpx

# Export
ogr2ogr -f gpx -nlt MULTILINESTRING /path/to/output/tracks.gpx PG:"$CONN_STRING" "tracks(wkb_geometry)"
ogr2ogr -f gpx -nlt MULTILINESTRING /path/to/output/routes.gpx PG:"$CONN_STRING" "routes(wkb_geometry)"
ogr2ogr -f gpx -nlt POINT /path/to/output/waypoints.gpx PG:"$CONN_STRING" "waypoints(wkb_geometry)"
The wkb_geometry references can be replaced with full SQL statements as required.
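For example, ogr2ogr's -sql flag lets you export the result of an arbitrary query instead of a whole table. A sketch (the WHERE clause and column names here are hypothetical; adjust to your schema):

```shell
# Export only selected tracks, using a full SQL query
# in place of the "tracks(wkb_geometry)" shorthand.
ogr2ogr -f gpx -nlt MULTILINESTRING /path/to/output/tracks.gpx PG:"$CONN_STRING" \
  -sql "SELECT name, wkb_geometry FROM tracks WHERE name LIKE 'Morning%'"
```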
In a recent code review my colleague took issue with the following code.
func Enqueue(properties Properties) (err error) {
	logger := logging.GetLogger(ctx)
	bs, err := json.Marshal(properties)
	if err != nil {
		logger = logger.With().Err(err).Logger()
	} else {
		logger = logger.With().RawJSON("properties", bs).Logger()
	}
	// … go on to log some stuff and enqueue the supplied event with the supplied properties …
}
Specifically the question was around whether `bs` was a reasonable name for the variable holding the JSON version of the properties. My counterargument was that short names are better than long names when well understood and/or short in scope. And that Go has a C influence and favors short variable names which you can see in both the standard library and its examples. The Go encoding/json library calls []byte variously src and data (code), b, j and text (examples) – https://golang.org/pkg/encoding/json/
My colleague said it took them longer than 0s to understand the variable, so they called it out as a nit (not a blocker), and that they care more about knowing what a variable contains than whether it is `[]byte` or not.
I ended up renaming it `propertiesJSON`. It did start a discussion about short variable names in general and in Go in particular. I’m still not sure how I feel about it. I did find some reading that seemed relevant though.
Notes on Programming in C (Variable names)
Variable names in Go should be short rather than long. This is especially true for local variables with limited scope. Prefer c to lineCount. Prefer i to sliceIndex.
Go Code Review Comments (Variable Names)
Local variables. Keep them short; long names obscure what the code does … Prefer b to buffer.
What’s In a Name? (Local Variables)
git fetch origin master:master
git rebase master
This avoids the `git stash && git checkout master && git pull && git checkout $branchname && git rebase master && git stash pop` dance that I’ve been doing for a long time.
### System-wide Clipboard
# mostly from https://gist.github.com/welldan97/5127861
pb-kill-line () {
  zle kill-line
  echo -n $CUTBUFFER | pbcopy
}
pb-backward-kill-line () {
  zle backward-kill-line
  echo -n $CUTBUFFER | pbcopy
}
pb-kill-whole-line () {
  zle kill-whole-line
  echo -n $CUTBUFFER | pbcopy
}
pb-backward-kill-word () {
  zle backward-kill-word
  echo -n $CUTBUFFER | pbcopy
}
pb-kill-word () {
  zle kill-word
  echo -n $CUTBUFFER | pbcopy
}
pb-kill-buffer () {
  zle kill-buffer
  echo -n $CUTBUFFER | pbcopy
}
pb-copy-region-as-kill-deactivate-mark () {
  zle copy-region-as-kill
  zle set-mark-command -n -1
  echo -n $CUTBUFFER | pbcopy
}
pb-yank () {
  CUTBUFFER=$(pbpaste)
  zle yank
}
zle -N pb-kill-line
zle -N pb-backward-kill-line
zle -N pb-kill-whole-line
# This is too extreme - I often want to wrangle a commandline then paste into it.
#zle -N pb-backward-kill-word
#zle -N pb-kill-word
zle -N pb-kill-buffer
zle -N pb-copy-region-as-kill-deactivate-mark
zle -N pb-yank
bindkey '^K' pb-kill-line
bindkey '^U' pb-backward-kill-line
#bindkey '\e^?' pb-backward-kill-word
#bindkey '\e^H' pb-backward-kill-word
#bindkey '^W' pb-backward-kill-word
#bindkey '\ed' pb-kill-word
#bindkey '\eD' pb-kill-word
bindkey '^X^K' pb-kill-buffer
bindkey '\ew' pb-copy-region-as-kill-deactivate-mark
bindkey '\eW' pb-copy-region-as-kill-deactivate-mark
bindkey '^Y' pb-yank
Here’s some Python that calculates how many players will reach each record in a Swiss tournament with a Top 8 or similar cut.
from typing import Sequence

# Math from https://www.mtgsalvation.com/forums/magic-fundamentals/magic-general/325775-making-the-cut-in-swiss-tournaments
def swisscalc(num_players: int, num_rounds: int, num_elimination_rounds: int) -> Sequence[float]:
    num_players_in_elimination_rounds = 2 ** num_elimination_rounds
    base = num_players / (2 ** num_rounds)
    num_players_by_losses = [0] * (num_rounds + 1)
    multiplier = 1.0  # Tracks the binomial coefficient C(num_rounds, losses) as we iterate.
    total_so_far = 0
    record_required = None  # Bookkeeping toward the record needed to make the cut; not yet used.
    for losses in range(0, num_rounds + 1):
        wins = num_rounds - losses
        numerator = wins + 1
        denominator = losses
        if denominator > 0:
            multiplier *= (numerator / denominator)
        num_players_by_losses[losses] = base * multiplier
        if not record_required and num_players_in_elimination_rounds:
            total_so_far += num_players_by_losses[losses]
    return num_players_by_losses
Example usage:
$ python3
>>> rounds = 4
>>> r = swisscalc(24, rounds, 3)
>>> for losses in range(len(r)):
...     print(f'{r[losses]} players at {rounds - losses}–{losses}')
...
1.5 players at 4–0
6.0 players at 3–1
9.0 players at 2–2
6.0 players at 1–3
1.5 players at 0–4
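Those numbers are just a scaled binomial distribution: assuming every player plays all the rounds with no draws, the expected count at a given record is num_players * C(num_rounds, losses) / 2**num_rounds. A quick sanity check:

```python
from math import comb

# Expected players at each record for 24 players over 4 rounds:
# 24 * C(4, losses) / 2**4. Assumes no draws and no drops.
expected = [24 * comb(4, losses) / 2 ** 4 for losses in range(5)]
print(expected)  # [1.5, 6.0, 9.0, 6.0, 1.5]
assert sum(expected) == 24  # Every player ends up with some record.
```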
If you git stash when you have a bunch of local files that are ignored, git stash pop will refuse to un-stash your saved changes. This command cleans that up.
git stash pop 2>&1 | grep already | cut -d' ' -f1 | xargs rm && git stash pop
package main
import (
"bytes"
"encoding/json"
"fmt"
"net/http"
"time"
)
type MyRequest struct {
Name string `json:"name"`
Color string `json:"color"`
Size int `json:"size"`
}
type MyResponse struct {
Status string `json:"status"`
}
func doRequest(httpMethod string, address string, requestBody MyRequest, responseBody *MyResponse) (err error) {
j, err := json.Marshal(requestBody)
if err != nil {
return
}
req, err := http.NewRequest(httpMethod, address, bytes.NewReader(j))
if err != nil {
return
}
req.Header.Set("Content-Type", "application/json")
client := http.Client{Timeout: time.Second * 10}
resp, err := client.Do(req)
if err != nil {
return
}
defer resp.Body.Close()
if resp.StatusCode >= 400 {
return fmt.Errorf("request failed with status %d", resp.StatusCode)
}
err = json.NewDecoder(resp.Body).Decode(responseBody)
if err != nil {
return
}
return
}
func main() {
responseBody := new(MyResponse)
err := doRequest("PUT", "https://example.com/endpoint", MyRequest{Name: "bakert", Color: "red", Size: 10}, responseBody)
if err != nil {
fmt.Println("Failed", err)
} else {
fmt.Println("Status", responseBody.Status)
}
}