Exiting an unconditional juju debug-hooks session

Dmitrii Shcherbakov dmitrii.shcherbakov at canonical.com
Sun Jun 4 13:56:13 UTC 2017

Hi everybody,

Currently, if you run

juju debug-hooks <unit-name> # no event (hook) in particular

then each time there is a new event you will get a new tmux window
opened, and this happens serially as there is no parallelism in hook
execution on a given logical machine. This is all good and intentional,
but once you have observed the charm behavior and want to let it work
without your interference again, you need to end your tmux session.
This can be tedious to do via the `exit [status]` shell builtin when
you get a lot of events (think of an OpenStack HA deployment): each
time you run

./hooks/$JUJU_HOOK_NAME && exit

you are dropped into window '0', and a new window is created for the
next queued event, for which you have to manually execute a hook and
exit again until you have processed the whole backlog.

tmux list-windows
0: bash- (1 panes) [239x62] [layout bbde,239x62,0,0,1] @1   # <--- dropped here after `exit`
1: update-status* (1 panes) [239x62] [layout bbe0,239x62,0,0,3] @3 (active)

"Note: To allow Juju to continue processing events normally, you must
exit the hook execution with a zero return code (using the exit
command), otherwise all further events on that unit may be blocked."

My initial thought was something like this - send SIGTERM to a child
of sshd, which will terminate your ssh session:

unset n
p=$(pgrep -f 'tmux attach-session.*'"$JUJU_UNIT_NAME")
while [ "$n" != "sshd" ]; do
    pc=$p
    p=$(ps -o ppid= $p | tr -d ' ')
    echo $p
    n=$(basename "$(readlink /proc/$p/exe || echo -n none)")
done && kill $pc

This works because the agent waits for the SSH client to exit.
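If one wanted to keep that approach, the process-tree walk could be
factored into a small function (a sketch; find_child_of is a
hypothetical helper, not something Juju ships):

```shell
#!/bin/bash
# Walk up the process tree from PID $1 until an ancestor whose
# executable name matches $2 is found, then print the PID of the
# process directly below that ancestor (the one to kill).
# Hypothetical helper -- not part of Juju.
find_child_of() {
    local p=$1 want=$2 child name
    while [ -n "$p" ] && [ "$p" -ne 1 ]; do
        child=$p
        p=$(ps -o ppid= -p "$p" | tr -d ' ')
        name=$(ps -o comm= -p "$p" 2>/dev/null | tr -d ' ')
        if [ "$name" = "$want" ]; then
            echo "$child"
            return 0
        fi
    done
    return 1  # no matching ancestor found
}
```

kill "$(find_child_of "$(pgrep -f 'tmux attach-session.*'"$JUJU_UNIT_NAME")" sshd)"
would then replace the one-liner above.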

After thinking about it some more, I thought it would be cleaner to
just kill a specific tmux session:

tmux list-sessions
gluster/0: 2 windows (created Fri Jun  2 20:22:30 2017) [239x62] (attached)

./hooks/$JUJU_HOOK_NAME && tmux kill-session -t $JUJU_UNIT_NAME
Cleaning up the debug session
no server running on /tmp/tmux-0/default
Connection to closed.

The cleanup message comes from debugHooksClientScript, which simply
sets up a bash trap on EXIT.
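The trap itself is ordinary bash; a minimal sketch of the mechanism
(the message string matches the one above, everything else is
illustrative):

```shell
#!/bin/bash
# Illustration of the EXIT-trap cleanup pattern used by the client
# script: the handler runs when the shell exits for any reason,
# including the tmux server being killed out from under it.
cleanup() {
    echo "Cleaning up the debug session"
}
trap cleanup EXIT
# ... the debug session runs here; the message prints on exit.
```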

Judging by the code, it should be pretty safe to do so - unless there
is a debug session in a debug context for a particular unit, other
hooks will be executed regularly by an agent instead of creating a new
tmux window:
debugctx := debug.NewHooksContext(runner.context.UnitName())
if session, _ := debugctx.FindSession(); session != nil && session.MatchHook(hookName) {
	logger.Infof("executing %s via debug-hooks", hookName)
	err = session.RunHook(hookName, runner.paths.GetCharmDir(), env)
} else {
	err = runner.runCharmHook(hookName, env, charmLocation)
}
return runner.context.Flush(hookName, err)

There are two scripts:

- a client script executed via an ssh client when you run juju debug-hooks
- a server script which is executed in the `RunHook` function by an
agent and creates a new window for an existing tmux session.

client side:
script := base64.StdEncoding.EncodeToString([]byte(unitdebug.ClientScript(debugctx,
innercmd := fmt.Sprintf(`F=$(mktemp); echo %s | base64 -d > $F; . $F`, script)
args := []string{fmt.Sprintf("sudo /bin/bash -c '%s'", innercmd)}
c.Args = args
return c.sshCommand.Run(ctx)
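That encode-on-one-side, decode-and-source-on-the-other dance can be
reproduced in plain shell (a sketch; the one-line script body is a
stand-in for the real generated client script):

```shell
#!/bin/bash
# Base64-encode a script locally, then do what the remote end of the
# ssh command does: decode into a temp file and source it. The script
# body here is a stand-in for the generated client script.
script='echo hello from the debug script'
encoded=$(printf '%s' "$script" | base64)
# The remote side of the ssh command:
F=$(mktemp); echo "$encoded" | base64 -d > "$F"; . "$F"; rm -f "$F"
```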

A worker waits until the client exits by monitoring a file lock; the
path of the lock itself is specific to a particular session.
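The wait can be sketched with flock(1): the client holds an exclusive
lock on a per-session file and the worker blocks until it is released
(the lock path and timings below are made up for the demo):

```shell
#!/bin/bash
# "Client" holds an exclusive lock on a session file; the "worker"
# blocks on the same file and proceeds only once the lock is freed.
# The lock path is a stand-in for Juju's per-session lock file.
LOCK=$(mktemp)
( flock -x 9; sleep 1; ) 9>"$LOCK" &       # client: hold the lock ~1s
sleep 0.2                                  # let the client acquire it
flock -x "$LOCK" -c 'echo client exited'   # worker: blocks until free
rm -f "$LOCK"
```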


If this approach of killing a tmux session is fine, then I could
create a PR for the doc repo and for the description in
debugHooksServerScript to mention it explicitly.

I doubt it deserves a dedicated helper command; a more verbose
explanation in the docs should be enough.

Has anybody else encountered the need to do the same?

Best Regards,
Dmitrii Shcherbakov

Field Software Engineer
IRC (freenode): Dmitrii-Sh

More information about the Juju mailing list