# bin
The contents of my bin dir. Useful when I crash, burn, corrupt, despoil, savage, or ravage my system. Eventually this may expand into some kind of Ansible or Puppet setup to reconstitute my desktop.


## Paused git tracking

- `.git` was renamed to `.git.paused`, so git no longer sees this directory as a repo.
- `.git.paused` was added to `.stignore`, so Syncthing won't sync it to other hosts.
- To resume later: `mv ~/bin/.git.paused ~/bin/.git`
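
The pause step can be sketched as a small function. This is an illustrative sketch, parameterized over a directory so it can be tried anywhere; the real commands act directly on `~/bin`:

```shell
# Hypothetical helper mirroring the pause step; the real commands act on ~/bin.
pause_git_tracking() {
  dir="$1"
  if [ -d "$dir/.git" ]; then
    mv "$dir/.git" "$dir/.git.paused"
  fi
  # Keep Syncthing from shipping the paused repo to other hosts.
  if ! grep -qx '.git.paused' "$dir/.stignore" 2>/dev/null; then
    echo '.git.paused' >> "$dir/.stignore"
  fi
}
```

Resuming is the reverse: move `.git.paused` back to `.git` and drop the `.stignore` line.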


## Notes

* `fresh-start` is the big script that installs almost everything
* `~/bin` is the primary git repo on `bigfish`; `clown` has a Syncthing copy for execution only.

DoD certificate setup, in order:

1. Run `install-pkcs11`
1. Download the DoD certs and run `install-os-dodcerts`
1. Run `setup-nssdb`
1. Run `install-nssdb-dodcerts`


## bashrc
Read the comments in bashrc. 


## On a new machine...
- Generate an SSH keypair with `ssh-keygen -b 2048 -t rsa`
- Copy the public key to your GitHub account
- Clone from the home dir: `git clone git@github.com:ahoffer/bin.git`

After cloning...
- Copy `gitconfig_template` to `~/.gitconfig`
- Open `~/bin/bashrc` and copy the parts that add `~/bin` to the PATH into `~/.bashrc`
- Source `~/.bashrc`
- Set up the secret env var in `~/.bashrc`

## MCP server

`mcpserve` starts the Desktop Commander MCP server via `npx @wonderwhy-er/desktop-commander@latest`
and writes stderr logs to `~/log/mcpserve.log`.

To expose it from another host over SSH, add an SSH alias such as:

    Host clown-mcp
      HostName clown
      User aaron

Then point Codex at it with an MCP entry like:

    [mcp_servers.clown]
    command = "ssh"
    args = ["clown-mcp", "mcpserve"]

This repo only provides the `mcpserve` wrapper; the Desktop Commander package is downloaded by `npx`
when the command runs.

## Claude Code and Codex session wrappers

`claude` and `codex` in `~/bin` find the real binary and hand off to `wraplog`, which
runs the binary directly and indexes the session JSONL into `~/logs/` on exit.

The real binaries live at `~/.local/bin/claude` and, for codex, on the nvm-managed path.
The wrappers strip `~/bin` from `PATH` before resolving the real binary to prevent
self-invocation.
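
The PATH-stripping step can be sketched like this (the function name is illustrative, not the wrappers' actual code):

```shell
# Remove every occurrence of a directory from a colon-separated PATH string.
strip_from_path() {
  local dir="$1" path="$2" out="" entry
  local IFS=':'
  for entry in $path; do
    if [ "$entry" != "$dir" ]; then
      out="${out:+$out:}$entry"
    fi
  done
  printf '%s\n' "$out"
}

# Usage inside a wrapper:
#   PATH=$(strip_from_path "$HOME/bin" "$PATH")
#   real=$(command -v claude)
```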

## bigfish-shell

`bigfish-shell` is the resilient interactive entrypoint from clown to bigfish.
It keeps your local terminal unchanged and uses remote tmux on bigfish so work
survives short SSH disconnects.

Behavior:

1. It connects with `ssh bigfish` and runs `tmux new-session -A -s <name>` on
   bigfish (create-or-attach remote session).
1. If SSH fails, it retries with bounded exponential backoff.
1. It must run in an interactive terminal (TTY). Non-interactive runs fail fast.

Usage:

    bigfish-shell
    bigfish-shell my-session

Optional environment variables:

    BIGFISH_HOST=bigfish
    BIGFISH_TMUX_SESSION=bigfish
    BIGFISH_SSH_MAX_ATTEMPTS=5
    BIGFISH_SSH_MAX_DELAY=8
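
The create-or-attach and bounded-backoff behavior can be sketched as follows. Function names and internals here are illustrative; the real script may differ:

```shell
# Seconds to wait before retry N: 1, 2, 4, ... capped at $2.
backoff_delay() {
  attempt="$1" cap="$2"
  delay=$(( 1 << (attempt - 1) ))
  if [ "$delay" -gt "$cap" ]; then
    delay="$cap"
  fi
  echo "$delay"
}

bigfish_shell() {
  host="${BIGFISH_HOST:-bigfish}"
  session="${1:-${BIGFISH_TMUX_SESSION:-bigfish}}"
  max="${BIGFISH_SSH_MAX_ATTEMPTS:-5}"
  cap="${BIGFISH_SSH_MAX_DELAY:-8}"
  attempt=1
  while [ "$attempt" -le "$max" ]; do
    # -t forces a TTY; tmux -A creates the named session or attaches to it.
    if ssh -t "$host" tmux new-session -A -s "$session"; then
      return 0
    fi
    sleep "$(backoff_delay "$attempt" "$cap")"
    attempt=$((attempt + 1))
  done
  return 1
}
```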

Why this exists:

* Keeps remote work alive inside tmux on bigfish.
* Avoids any background watchdog process; recovery is explicit and operator driven.

### Session log index

`wraplog` creates a timestamped symlink in `~/logs/` pointing to the JSONL session
file written by the tool:

    ~/logs/claude-YYYYMMDD-HHMMSS-<session-uuid>.jsonl  -> ~/.claude/projects/.../uuid.jsonl
    ~/logs/codex-YYYYMMDD-HHMMSS-rollout-<ts>-<uuid>.jsonl -> ~/.codex/sessions/.../uuid.jsonl

The JSONL stays in its canonical location. The symlink provides a time-indexed entry
point. If no session file was created, no symlink is made.
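
A minimal sketch of that indexing step, assuming the session-file path is already known (`wraplog`'s real logic also has to discover it):

```shell
# Illustrative helper; names are hypothetical, not wraplog's own.
index_session() {
  tool="$1"; session_file="$2"; logdir="${3:-$HOME/logs}"
  if [ ! -f "$session_file" ]; then
    return 0    # no session file created: omit the symlink
  fi
  mkdir -p "$logdir"
  stamp=$(date +%Y%m%d-%H%M%S)
  base=$(basename "$session_file" .jsonl)
  # Time-indexed entry point; the JSONL stays in its canonical location.
  ln -s "$session_file" "$logdir/$tool-$stamp-$base.jsonl"
}
```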

### Claude Code session JSONs

Stored at `~/.claude/projects/`. Each project gets a subdirectory named after its path
slug, for example `-home-aaron-bin`. Each conversation is one UUID-named `.jsonl` file.
Subagent threads nest under a session UUID in a `subagents/` subdirectory.

Global prompt history is at `~/.claude/history.jsonl`.

No rotation is performed. Files accumulate indefinitely.

### Codex session JSONs

Stored at `~/.codex/sessions/2026/MM/DD/` as `rollout-<timestamp>-<uuid>.jsonl`.
Directories are created per calendar day.

Global prompt history is at `~/.codex/history.jsonl`.

Persistent conversation state lives in `~/.codex/state_5.sqlite` with standard SQLite
WAL files alongside it. The TUI process log is at `~/.codex/log/codex-tui.log`.

No rotation is performed on any of these files.

## Colima scripts

### Fix mixed OCI/Docker v2 image manifest format

Docker 24+ with Colima's containerd-snapshotter enabled stores images in containerd's
OCI format. Base layers pulled from Docker Hub arrive as OCI, while new build layers
get Docker v2 types from the Docker build API. The result is a mixed-manifest image
that skopeo cannot convert to docker-archive format, breaking CI Twistlock scans.

Fix: disable containerd-snapshotter in `~/.colima/default/colima.yaml`:

    docker:
      features:
        containerd-snapshotter: false

Then restart Colima and prune stale snapshotter images before rebuilding:

    colima restart
    docker system prune -a --volumes -f

After the prune, rebuild and repush. The resulting image will have a clean Docker
Schema v2 manifest with all layers as `application/vnd.docker.image.rootfs.diff.tar.gzip`.

Verify with:

    skopeo inspect --raw docker://<registry>/<image>:<tag> \
      | python3 -m json.tool | grep mediaType

Colima launch helpers were removed from `~/bin` to avoid conflicting startup paths.

Removed files:

1. `colima-start-guarded`
1. `start-colima-docker.command`
1. `stop-colima-docker.command`

On clown, Colima-related launchd labels were also disabled:

1. `local.colima.guarded`
1. `homebrew.mxcl.colima`

Check disabled state:

    launchctl print-disabled gui/$(id -u) | rg -i colima

## Xpra on bigfish

`mac/xpra-chrome` is the macOS-side helper that opens `Xpra.app` and attaches it to the forwarded
xpra session exposed by `bigfish` on `tcp://127.0.0.1:14501`.

On the `bigfish` side, the attached xpra X11 display is currently visible as `:100`:

    xpra list
    DISPLAY=:100 xdpyinfo | head

That matters for remote GUI debugging. To run a real headed browser that is visible through the
existing xpra app windows on macOS, point the process at that display instead of using `xvfb-run`.
For Playwright:

    cd ~/projects/cx-search/proximity/test/playwright
    DISPLAY=:100 npx playwright test

For a narrower visible smoke test:

    DISPLAY=:100 npx playwright test tests/proximity.spec.ts -g "01 place ship CoT"

This launches Playwright's own browser window on the live xpra display so it can be watched from
the existing Xpra app flow on the Mac.

### Xpra sessions

| App     | Display | Port  | Service        | Connect script    |
|---------|---------|-------|----------------|-------------------|
| Chrome  | :100    | 14501 | xprachrome     | mac/xpra-chrome   |
| Signal  | :101    | 14502 | xpra-signal    | mac/xpra-signal   |

Signal uses a dedicated user-data dir at `~/.config/signal-xpra`, isolated from any native
Signal installation. Connect from clown with `xprasignal` (delegates to `mac/xpra-signal`).

To add another session: create a service file and connect script following the same pattern,
then add a line to `xpra-healthcheck-server` on bigfish and `xpra-healthcheck-client` on clown.

### Xpra watchdog daemons

One pair of daemons covers all sessions.

On bigfish a systemd timer fires every 60 seconds and runs `xpra-healthcheck-server`. The server
script loops over every known display/service pair, probes each with a 10-second timeout, and
restarts the matching service if a probe fails or hangs.
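
The probe-and-restart loop might look roughly like this, using the display/service pairs from the sessions table above and `xdpyinfo` as an assumed probe; the real script's structure may differ:

```shell
# Illustrative sketch of the server-side watchdog loop.
probe_and_restart() {
  for pair in 100:xprachrome 101:xpra-signal; do
    display=${pair%%:*}
    service=${pair#*:}
    # 10-second bound so a hung X server cannot stall the whole loop.
    if ! timeout 10 env DISPLAY=":$display" xdpyinfo >/dev/null 2>&1; then
      systemctl --user restart "$service"
    fi
  done
}
```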

On clown a launchd agent fires every 60 seconds as `com.aaron.xpra-healthcheck-client` and runs
`xpra-healthcheck-client`. The client script exits quietly if bigfish is unreachable. Otherwise
it loops over every known port/connect-script pair, checks tunnel and probe health for each, and
reconnects only the sessions that are unhealthy. Each tunnel uses its own SSH control socket
(`~/.ssh/cm-xpra-bigfish-PORT`), so reconnects terminate only the dedicated xpra tunnel instead
of any shared SSH master session. The stale-process killer at the top of the script queries all
known ports together, so it never mistakes a healthy session for a stale one.
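
The per-session reconnect logic can be sketched as below. This is illustrative only; the real client script also checks probe health and handles the stale-process killer:

```shell
# One SSH control socket per xpra tunnel, as described above.
socket_for_port() {
  printf '%s\n' "$HOME/.ssh/cm-xpra-bigfish-$1"
}

reconnect_unhealthy() {
  for pair in 14501:xpra-chrome 14502:xpra-signal; do
    port=${pair%%:*}
    script=${pair#*:}
    sock=$(socket_for_port "$port")
    # -O check asks only this tunnel's control master, so a reconnect
    # never tears down a shared SSH master session.
    if ! ssh -S "$sock" -O check bigfish 2>/dev/null; then
      "$HOME/bin/mac/$script" >/dev/null 2>&1 &
    fi
  done
}
```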

Check watchdog status on bigfish:

    systemctl --user status xpra-healthcheck-server.timer
    systemctl --user list-timers xpra-healthcheck-server.timer
    journalctl --user -u xpra-healthcheck-server.service -n 50

Check watchdog status on clown:

    launchctl list com.aaron.xpra-healthcheck-client
    tail -100 ~/Library/Logs/xpra-healthcheck-client.log

### Xpra log scripts

On bigfish, `xprachromelog` follows the Chrome server journal, `xprasignallog` follows the Signal
server journal, and `xprawatchlog` follows the single shared watchdog journal. All three
pass through extra arguments to `journalctl`.

On clown, `xpralogmac` streams Xpra.app output via `log stream` and `xprawatchlogmac`
tails the watchdog log at `~/Library/Logs/xpra-healthcheck-client.log`. `xprasignalclientlog`
tails `~/Library/Logs/xpra-signal.log`, the Xpra.app client output for the Signal session.
`xprachromeclientlog` tails `~/Library/Logs/xpra-chrome.log`, the equivalent for the Chrome session.
