Migrating from WATCHOUT 6

If you have been running shows with WATCHOUT 6, much of what you know still applies — timelines, cues, the Stage metaphor, and the general production workflow all carry forward. However, the underlying architecture changed significantly in WATCHOUT 7. This article maps the old concepts to the new ones so you can transfer your expertise without starting from scratch.

Overview

The table below is a quick cheat-sheet of the major shifts. Each row is expanded in the sections that follow.

Category | WATCHOUT 6 | WATCHOUT 7
Service model | 2 roles — Production Computer, Display Computer (Watchpoint) | 4 services — Producer, Director, Runner, Asset Manager
Online / Offline | Explicit online/offline toggle; edits staged before going live | Always connected; changes propagate immediately
Asset management | Manual file placement on display computers or network shares | Centralized Asset Manager distributes files automatically
Show file format | Proprietary .watch format containing media references | .watch files store display config, timelines, and cues — media lives in the Asset Manager
Discovery | IP-based addressing | Multicast discovery (UDP 3011/3012); nodes identified by host alias
External control | TCP text protocol on ports 3039/3040 | Unified variable system — OSC, ArtNet, HTTP, MIDI, WO6 compat all map to WATCHOUT variables
Display software | Watchpoint | Runner

From Two Roles to Four Services

In WATCHOUT 6, the architecture was a straightforward master/slave model. The Production Computer ran the authoring application and controlled one or more Display Computers (Watchpoint). Two programs, two roles.

WATCHOUT 7 splits that model into four distinct services:

  • Producer — the authoring application where you build shows (replaces the Production Computer role).
  • Director — a coordination service that manages playback across all display nodes. It holds the show state and tells Runners what to do.
  • Runner — the display engine that renders output (replaces Watchpoint).
  • Asset Manager — a dedicated service for storing, optimizing, and distributing media files.

The split happened for practical reasons: separating authoring from coordination lets the Director run independently (even without a Producer connected); media distribution can happen in the background without blocking playback; and each service can scale independently on different hardware.

The key mental-model shift: the Director is not the Production Computer. The Director is a headless coordination service that can run on its own. The Producer connects to a Director to author and monitor the show, but the Director keeps running when the Producer disconnects.

For small setups, all four services can still run on a single machine — the added complexity is entirely opt-in.

See Network Overview for the full technical picture of how the services communicate.

Key Executables

Under the hood, WATCHOUT 7 is composed of specialized executables that are started and managed automatically. Understanding what each one does helps when troubleshooting or planning deployments.

Executable | Role
process-manager.exe | Core component present on every WATCHOUT machine. Handles node discovery and service management — it decides which other executables to launch and monitors their health.
Producer.exe | The main user interface for show creation and control.
director.exe | Maintains the master show data, synchronizes timelines, and evaluates expressions. Acts as the NTP time server.
runner.exe | Receives show data from the Director and manages playback logic on display nodes.
visualrenderer.exe | Renders visual output to displays. Managed by the Runner.
audiorenderer.exe | Renders audio output. Managed by the Runner.
asset-manager.exe | Media optimization, storage, and distribution service.
asset-watcher.exe | Monitors designated folders and auto-imports new or changed media into the Asset Manager.
ltc-bridge.exe | Reads SMPTE linear timecode (LTC) input and feeds it to the Director.
midi-bridge.exe | Translates MIDI note, CC, and MSC messages into WATCHOUT variables.
operative.exe | Protocol translation layer that handles OSC, ArtNet, and HTTP REST API communication with external systems. Hosted by the Director.

The Process Manager is the foundation — it runs on every node, discovers other nodes on the network via multicast, and starts or restarts services as needed. You never interact with it directly, but it is the reason WATCHOUT nodes can find each other automatically and recover from crashes.

The Operative deserves special mention because it is the single entry point for all external control protocols. Rather than each protocol having its own standalone service, the Operative translates incoming OSC, ArtNet, and HTTP messages into WATCHOUT's unified variable system, and routes outgoing messages back to the appropriate protocol.
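As a concrete illustration of what "incoming OSC" means on the wire, the sketch below hand-encodes a minimal single-argument OSC message such as a control system might send to the Operative. The variable address used here is purely hypothetical — consult WATCHOUT's OSC documentation for the real namespace — but the encoding (null-padded address, type-tag string, big-endian float32) follows the standard OSC 1.0 format.

```python
import struct


def osc_pad(data: bytes) -> bytes:
    """Null-pad to a 4-byte boundary (OSC strings always get at least one null)."""
    return data + b"\x00" * (4 - len(data) % 4)


def encode_osc_float(address: str, value: float) -> bytes:
    """Build a one-argument OSC message: padded address, type tag ',f', big-endian float32."""
    return (
        osc_pad(address.encode("ascii"))
        + osc_pad(b",f")
        + struct.pack(">f", value)
    )


# Hypothetical variable address -- check the actual WATCHOUT OSC namespace.
msg = encode_osc_float("/variables/main_timeline/speed", 1.0)
# The resulting datagram could then be sent with socket.sendto(msg, (director_ip, osc_port)).
```

The same variable could just as well be driven over ArtNet or HTTP — the Operative normalizes all of them into the variable system.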

Deployment Configurations

Because the services are independent executables, you can distribute them across machines in whatever arrangement suits your production:

  1. All-in-One — a single computer runs Producer, Director, Runner, and Asset Manager. Suitable for development, rehearsal, or small single-screen shows.
  2. Small Production — Producer and Director on one machine, with 1–3 separate Runner nodes driving displays.
  3. Large Production — a dedicated Producer workstation, a separate Director server, and many Runner nodes across the network.
  4. Installation (Headless) — the Director runs on a dedicated server with no Producer connected. Runners play the show autonomously, controlled by external systems via the Operative. Ideal for permanent installations that must run unattended.

System Resilience

The distributed design provides built-in fault tolerance:

  • Runners continue playback if the Director is lost — once a Runner has received the show data, it keeps rendering even if the network connection to the Director drops.
  • Renderers maintain output if the Runner disconnects — the visual and audio renderers hold their last state, avoiding black screens during transient failures.
  • Executables are stopped, started, and restarted automatically — the Process Manager monitors all services and restarts any that crash, without manual intervention.
  • Distributed processing eliminates single points of failure — no single machine needs to handle all tasks.

Always Connected — No More Online/Offline

In WATCHOUT 6, you worked offline by default. You staged your edits, previewed locally, and then explicitly chose Go Online to push changes to the display computers. This two-step workflow gave you a safety net — nothing was live until you said so.

In WATCHOUT 7, there is no offline mode. When Producer is connected to a Director, the system is always live. Every edit you make — moving a layer, adjusting a tween, repositioning a display — propagates to the Runners immediately. Pressing Play is always a live action.

This means you need new habits to protect a running show:

  • Blind Edit Mode — create a temporary copy of a composition, make changes freely, then Take (apply) or Discard the edits. This is the closest equivalent to the old offline workflow. See Blind Edit Mode.
  • Hands Off — a timeline lock that prevents accidental edits during a performance.

The online command in the WO6 compatibility protocol is accepted but has no effect in WATCHOUT 7. If your control system sends it, nothing will break — it is simply a no-op.

For more detail, see Going Online.

Centralized Asset Management

In WATCHOUT 6, media management was manual. You placed files directly on each display computer's hard drive or pointed to a network share. Keeping files in sync across multiple machines was your responsibility.

In WATCHOUT 7, a dedicated Asset Manager service handles media storage, optimization, and distribution. You import media into the Asset Manager (or let the Asset Watcher auto-import from watched folders), and the service takes care of copying the right files to the right Runner nodes in the background.

New capabilities that come with this approach:

  • Automatic distribution — the Asset Manager pushes files to Runners based on which displays need them. No manual copying.
  • Dynamic asset versioning — replace a media file and Runners pick up the new version without stopping the show.
  • Show consolidation — bundle all assets used by a show for archiving or transport.
  • Background transfer — media distribution happens alongside playback without interrupting the show.

See Asset Manager for full documentation.

Show File Format

WATCHOUT 6 used a proprietary show file format. WATCHOUT 7 also uses .watch files, but the content model is different: a .watch file stores display configurations, timeline data, and cue definitions — not media. All media lives in the Asset Manager.

There is no automatic import or conversion path from WATCHOUT 6 show files to WATCHOUT 7. Shows must be rebuilt in the new system.

Network Discovery and Failover

In WATCHOUT 6, display computers were configured by IP address. You typed in the IP of each Watchpoint and the Production Computer connected directly.

In WATCHOUT 7, nodes discover each other automatically via multicast on UDP ports 3011 and 3012. Each machine is identified by a host alias (a human-readable name), not by IP address. As long as nodes are on the same subnet and multicast traffic is allowed, they find each other without manual configuration.

This name-based model also enables automatic failover: if two machines share the same host alias, the system treats them as a redundant pair. If the primary goes down, the standby takes over.

See Network Overview for subnet, VLAN, and switch configuration details.

External Control and Protocols

In WATCHOUT 6, external control used a text-based TCP protocol on ports 3039 (production) and 3040 (display). Control systems sent direct commands like run, halt, load, and gotoControlCue.

In WATCHOUT 7, external control is built around a unified variable system. All incoming protocols — OSC, ArtNet, HTTP, MIDI, the WO6 compatibility layer — are mapped to WATCHOUT variables. A timeline, a cue, or a media property can be driven by any protocol through the same variable binding. This means you can mix and match protocols freely.

Available control surfaces in WATCHOUT 7:

  • HTTP REST API — the recommended protocol for new integrations. Structured, documented, stateless.
  • OSC — bidirectional, supports both input and output.
  • ArtNet / sACN — DMX-style channel control.
  • PSN — PosiStageNet for live tracking data.
  • MIDI Bridge — note and CC mapping.
  • LTC Bridge — SMPTE timecode input.
  • MSC — MIDI Show Control.
  • WO6 compatibility — ports 3039/3040 still respond to the classic text protocol.

The WO6 compatibility layer supports most legacy commands, but some are unimplemented (hitTest, standBy, setRate, and others). The load command replaces the old loadShow. The online command is accepted but ignored.

If you have existing Crestron, AMX, or Extron integrations, they will work via the WO6 compatibility protocol with minimal changes. For new development, use the HTTP REST API.

See WATCHOUT 6 Protocol and External Control Overview for the full reference.

Keyboard and Playback Behavior

In WATCHOUT 6, pressing Space targeted the last active timeline. In WATCHOUT 7, Space targets the currently selected timeline. If you are used to the old behavior, enable Legacy Keyboard Mode from the Edit menu.

Control cues also work differently. In WATCHOUT 6, a control cue targeted a single timeline by reference (This, Id, or Name). WATCHOUT 7 adds two new targeting modes:

  • List mode — target multiple specific timelines.
  • Filter mode — target timelines matching a pattern or tag.
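The exact matching semantics of Filter mode are defined in the playback reference; conceptually, pattern-based targeting can be pictured as glob matching against timeline names. The names and pattern in this sketch are made up for illustration.

```python
from fnmatch import fnmatch


def filter_targets(timelines: list[str], pattern: str) -> list[str]:
    """Select every timeline whose name matches a glob-style pattern."""
    return [name for name in timelines if fnmatch(name, pattern)]


timelines = ["Act1_Main", "Act1_Backup", "Act2_Main", "Preshow"]
print(filter_targets(timelines, "Act1_*"))   # ['Act1_Main', 'Act1_Backup']
```

A single control cue in Filter mode can thus address a whole family of timelines at once, where WATCHOUT 6 would have needed one cue per timeline.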

See Starting and Stopping for the full playback reference.

Running Both Versions

If you need to switch between WATCHOUT 6 and WATCHOUT 7 on WATCHPAX hardware, you can do so from Producer's Nodes window using the Switch to WATCHOUT 6 command. This triggers a full machine reboot and requires that WATCHOUT 6 is pre-installed on the node.

You can also enable or disable the WO6 and WO7 control protocols independently on each node. This lets you run WO7 playback while still accepting commands from a WO6-era control system, or vice versa.

See Software Updates for details on managing software versions across your nodes.