Linux¶
Running Kubernetes? See Kubernetes setup.
Summary¶
- Unpack the deployment kit
- Install the service package
- Add your license
- Configure the service
- Start the service
- Verify the cluster
- Run an example build
- Configure the client
Requirements¶
As of 2022-03-15, our Remote Execution software only runs on Linux. Supported distros:
- Debian 10 or newer
- Ubuntu 18.04 or newer
Need other OS support? Contact us.
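To check whether a machine meets these requirements, you can inspect its release information, for example:

    cat /etc/os-release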
1. Unpack the deployment kit¶
Unpack engflow-re-<VERSION>.zip.
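For example, assuming the unzip utility is installed:

    unzip engflow-re-<VERSION>.zip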
The kit contains:
- this documentation (./index.html)
- the service package (./setup/engflow-re-services.deb)
- the EngFlow config file (./setup/on-prem/config)
- an example Bazel project (./example)
It does not contain a valid license file: ./setup/license is empty. We send you a license separately.
2. Install the service package¶
- Copy ./setup/engflow-re-services.deb to every machine.
- Run the package installation on every machine; a typical command is sketched below. This installs the service binaries under /usr/bin/engflow/ and the scheduler and worker systemd services.
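A minimal sketch of the install step, assuming an apt-based Debian or Ubuntu machine and that the package has been copied to the current directory:

    sudo apt install ./engflow-re-services.deb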
Warning
Do not copy your source tree onto these machines. The build tool uploads files if build actions need them.
3. Add your license¶
Copy your license onto every machine as /etc/engflow/license.
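For example, a hypothetical loop over your machines could distribute the license over SSH; the host names are placeholders, the license file is assumed to be saved locally as ./license, and the commands assume sudo rights on each machine:

    for host in scheduler-1 worker-1 worker-2; do
      scp ./license "$host":/tmp/engflow-license
      ssh "$host" 'sudo install -D -m 0644 /tmp/engflow-license /etc/engflow/license'
    done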
4. Configure the service¶
See the dedicated articles for details.
Tip
All service instances (schedulers and workers) can use the same config file. Schedulers ignore worker-specific options and vice versa.
- Customize ./setup/on-prem/config. Set options common to every machine.
  - Set --discovery:
    - Use multicast if possible.
    - Use static otherwise, and also specify one or more --static_scheduler and one or more --static_cas_node flags. (A static setup is sketched after this list.)
- Copy the file to every machine as /etc/engflow/config.
- Customize the file per-machine:
  - Set --worker_config and --disk_size on workers.
  - Set
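As an illustration only, a static-discovery setup might add lines like the following to the shared config, assuming the config file accepts one --flag=value option per line; the addresses are placeholders and the exact value syntax is described in the Service Options Reference:

    --discovery=static
    --static_scheduler=<scheduler-address>
    --static_cas_node=<worker-address>

With multicast discovery, a single --discovery=multicast line would replace the static entries.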
Tip
For a first-time trial setup we recommend using the default ./setup/on-prem/config. Later (and especially before productionizing) you should customize this config more. Consider:
- network settings (e.g. --public_port, --private_ip_selector)
- authentication (e.g. --tls_certificate, --client_auth)
- execution strategies (e.g. --allow_docker, --allow_sandbox)
- executor pools
- storage use (--external_storage)
- monitoring
- JVM flags
See the Service Options Reference for more info.
5. Start the service¶
- SSH into every machine.
- Start the service on every machine (the worker service on workers, the scheduler service on schedulers):
  - If you use systemd, start the corresponding systemd service; a sketch follows below.
  - Otherwise, run the corresponding service directly.
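A minimal sketch for the systemd case, assuming the unit names scheduler and worker installed by the service package in step 2:

    # On worker machines:
    sudo systemctl start worker
    # On scheduler machines:
    sudo systemctl start scheduler

You can check that a unit came up with sudo systemctl status worker (or scheduler).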
6. Verify the cluster¶
- SSH to a worker instance.
- Look at the service output; a log-viewing sketch follows this list. Scroll down using the arrow keys; jump to the bottom with Shift+G. Somewhere in the log you should see that a cluster formed. On schedulers you should see two clusters: the same one as above, and another one containing only schedulers.
- Optional: ensure you can pull Docker images. Skip this step if you don't plan to run actions in a Docker container. On a worker machine, run a test pull as sketched below.
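A minimal sketch of both checks, assuming the worker systemd unit is named worker and using an arbitrary public image for the pull test:

    # Page through the worker service log (Shift+G jumps to the end):
    sudo journalctl -u worker
    # Optional: confirm the machine can pull a Docker image (the image name is only an example):
    sudo docker pull debian:bullseye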
7. Run an example build¶
Follow the instructions in ./example/README.md
Info
The first build can take a while as Bazel first downloads the Docker image locally, and the cluster software then downloads the Docker image on each worker. You will not see a performance improvement for builds of the example project; it is too small to benefit from the remote execution cluster.
8. Configure the client¶
See Client configuration.
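For orientation only, a hypothetical .bazelrc snippet pointing Bazel at the cluster might look like this; the actual endpoint, port, and TLS settings depend on your deployment and are covered in the Client configuration page:

    # Hypothetical values; replace the placeholders with your scheduler's address and port.
    build --remote_executor=grpc://<scheduler-address>:<port>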