
Manual installation


Docker Compose

Runs the Clawforce control plane as a Docker container on a single machine. Agent instances are created as sibling containers via the Docker socket.

Prerequisites:

  • Docker Engine 20.10+ or Docker Desktop
  • Docker Compose v2
  1. Clone the repository

    git clone https://github.com/clawforceone/clawforce.git
    cd clawforce
  2. Create the data directory

    Clawforce stores its SQLite database and SSH keys here. Use a path that persists across system restarts.

    mkdir -p ~/.clawforce/data
  3. Start the services

    CLAWFORCE_DATA_DIR=~/.clawforce/data docker compose up -d

    Or create a .env file in the repo root so you don’t need to pass the variable every time:

    echo "CLAWFORCE_DATA_DIR=$HOME/.clawforce/data" > .env
    docker compose up -d
  4. Verify it’s running

    docker compose logs -f
    curl http://localhost:8000/health

    The dashboard is available at http://localhost:8000.
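A note on the .env file from step 3: the echo writes $HOME rather than a literal ~ because the shell expands $HOME at write time, and Docker Compose does not tilde-expand a literal ~ inside .env. A quick check of what actually gets written (the /tmp path below is just for the demo):

```shell
# $HOME is expanded by the shell before the file is written,
# so the .env file receives an absolute path rather than a literal "~".
echo "CLAWFORCE_DATA_DIR=$HOME/.clawforce/data" > /tmp/clawforce-demo.env
cat /tmp/clawforce-demo.env
```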

The docker-compose.yml reads these variables from the environment or .env:

Variable              Description                                Default
CLAWFORCE_DATA_DIR    Host path for the database and SSH keys    (required)

Additional Clawforce settings (e.g., CLAWFORCE_AUTH_DISABLED, CLAWFORCE_RP_ID) can be passed as environment variables inside the docker-compose.yml environment: block. See Environment variables for the full list.
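For example, extra settings can be added under the dashboard service like this (the service name and the values shown are illustrative, not defaults; see Environment variables for what each setting accepts):

```yaml
services:
  clawforce-dashboard:                 # service name as used in this guide's examples
    environment:
      - CLAWFORCE_DATA_DIR=${CLAWFORCE_DATA_DIR}
      - CLAWFORCE_AUTH_DISABLED=true   # illustrative value
      - CLAWFORCE_RP_ID=localhost      # illustrative value
```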

Docker socket access

Clawforce needs access to the Docker socket to create and manage agent containers. The docker-compose.yml mounts it automatically:

volumes:
  - /var/run/docker.sock:/var/run/docker.sock

Verify the mount is present if agent creation fails:

docker inspect clawforce-dashboard --format \
  '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}'

You should see /var/run/docker.sock -> /var/run/docker.sock.
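If you script this check (for example in a CI step), a small helper makes the intent explicit. has_docker_socket is a hypothetical name, not part of Clawforce:

```shell
# Hypothetical helper: succeeds if a newline-separated list of mount
# destinations contains the Docker socket path.
has_docker_socket() {
  printf '%s\n' "$1" | grep -qx '/var/run/docker.sock'
}

# Usage against the live container (uncomment where Docker is available):
# mounts=$(docker inspect clawforce-dashboard \
#   --format '{{range .Mounts}}{{.Destination}}{{println}}{{end}}')
# has_docker_socket "$mounts" && echo "socket mounted"
```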

Common management commands:
docker compose logs -f # Stream logs
docker compose down # Stop
docker compose up -d # Start
git pull && docker compose up -d --build # Rebuild from the updated source checkout
docker compose down -v # Stop and delete volumes (destructive)

For production environments, prefer pinned image tags and a snapshot-first upgrade flow instead of upgrading straight to latest. See Safe updates.

Use scripts/control-plane-state.sh snapshot docker before the upgrade if this deployment is business-critical.

To uninstall:
docker compose down
# Remove agent containers
docker ps -a --filter "name=bot-" --format '{{.Names}}' | xargs -r docker rm -f
# Remove data (optional)
rm -rf ~/.clawforce/data

Kubernetes (Helm)

Deploys Clawforce to a Kubernetes cluster using the Helm chart included in the repository.

Prerequisites:

  • Kubernetes 1.24+
  • kubectl configured with cluster access
  • Helm v3+
  • A StorageClass that supports ReadWriteOnce
  1. Clone the repository

    git clone https://github.com/clawforceone/clawforce.git
    cd clawforce
  2. Install the chart

helm install clawforce helm/ \
  --namespace clawforce \
  --create-namespace

    If your kubeconfig is not at the default path:

helm install clawforce helm/ \
  --namespace clawforce \
  --create-namespace \
  --kubeconfig /path/to/kubeconfig
  3. Verify the deployment

    kubectl get pods -n clawforce
    kubectl logs -f deploy/clawforce -n clawforce

Wait for the pod to reach the Running state.

  4. Access the dashboard

    The chart exposes a NodePort service on port 30000 by default.

    # Get a node IP
    kubectl get nodes -o wide
    # Open http://<node-ip>:30000

    For local access without exposing a port:

    kubectl port-forward -n clawforce svc/clawforce 8000:8001
    # Open http://localhost:8000

Create a custom-values.yaml to override defaults:

image:
  tag: 1.4.2
config:
  dataPath: /app/data                  # Path inside the pod for DB and SSH keys
  k8sNamespace: clawforce              # Namespace where agent instances are created
  bootstrapDefaultContainerImage: clawforceone/agent-chromium:1.4.2
service:
  type: NodePort
  port: 8001
  nodePort: 30000                      # External port for the dashboard
persistence:
  enabled: true
  size: 1Gi
  storageClass: ""                     # Empty = use the cluster default StorageClass
  accessMode: ReadWriteOnce
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
rbac:
  create: true                         # Creates ServiceAccount, Role, RoleBinding

Apply your values:

helm install clawforce helm/ \
  --namespace clawforce \
  --create-namespace \
  -f custom-values.yaml

The chart creates a ServiceAccount, Role, and RoleBinding scoped to the clawforce namespace. These grant only the permissions needed to manage agent pods, services, PVCs, secrets, and configmaps.

Set rbac.create: false if you manage RBAC externally.

Verify RBAC resources exist:

kubectl get serviceaccount,role,rolebinding -n clawforce
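If you disable chart-managed RBAC, your external rules must still cover the resource types listed above. Below is a sketch of what a namespace-scoped Role might look like; the verb list and the apps/deployments and pods/log rules are assumptions, so check the chart's templates in helm/ for the authoritative definition:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: clawforce          # name is illustrative
  namespace: clawforce
rules:
  - apiGroups: [""]
    resources: [pods, services, persistentvolumeclaims, secrets, configmaps]
    verbs: [get, list, watch, create, update, patch, delete]
  - apiGroups: [apps]      # agent instances appear as Deployments (see troubleshooting)
    resources: [deployments]
    verbs: [get, list, watch, create, update, patch, delete]
  - apiGroups: [""]
    resources: [pods/log]  # assumed: needed for streaming agent logs
    verbs: [get]
```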
To upgrade the chart:
# Pull latest chart changes
git pull
helm upgrade clawforce helm/ \
  --namespace clawforce \
  -f custom-values.yaml

For production environments, pin the image tag in your values file and take a snapshot of the persistent data volume before upgrading. See Safe updates.

For each release, update both image.tag and config.bootstrapDefaultContainerImage together before running helm upgrade.

Use scripts/control-plane-state.sh snapshot kubernetes before the upgrade if this deployment is business-critical.

To uninstall:
helm uninstall clawforce -n clawforce
kubectl delete namespace clawforce

Troubleshooting

Docker: Check that the container is running:

docker ps --filter "name=clawforce-dashboard"

Kubernetes: Check pod status and the service:

kubectl get pods -n clawforce
kubectl get svc -n clawforce

If agent creation fails, check the control-plane logs first:

# Docker Compose
docker compose logs -f
# Kubernetes
kubectl logs -f deploy/clawforce -n clawforce

Docker: Verify the Docker socket mount (see Docker socket access above).

Kubernetes: Verify RBAC:

kubectl get rolebinding -n clawforce
To inspect a specific agent's logs:
# Docker
docker logs -f bot-<instance-name>
# Kubernetes
kubectl logs -f deploy/bot-<instance-name> -n clawforce
Check the control-plane health endpoint:
curl http://localhost:8000/health

To reset the database (destructive, all Clawforce state is lost):

Docker Compose:

docker compose down
rm -f ~/.clawforce/data/clawforce.db
docker compose up -d

Kubernetes:

kubectl delete pvc clawforce-data -n clawforce
kubectl rollout restart deploy/clawforce -n clawforce

Windows: script fails with “invalid option” error


If you run install.sh on Windows directly (not via WSL), you may see:

: invalid option nameet: pipefail

This is caused by Windows line endings. Use install.ps1 instead, or convert line endings first:

dos2unix install.sh
bash install.sh
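If dos2unix is not installed, stripping the carriage returns with sed achieves the same fix. The demo below creates a small CRLF-damaged script and repairs it (the /tmp path is just for illustration; GNU sed is assumed for -i):

```shell
# Simulate a script saved with Windows (CRLF) line endings.
printf 'set -euo pipefail\r\necho ok\r\n' > /tmp/crlf-demo.sh

# bash chokes on the trailing \r after "pipefail"; strip the carriage returns.
sed -i 's/\r$//' /tmp/crlf-demo.sh

bash /tmp/crlf-demo.sh   # prints: ok
```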