Version 1.43 of the documentation is no longer actively maintained. The site that you are currently viewing is an archived snapshot. For up-to-date documentation, see the latest version.
This document describes how to set up server and client authentication for an EngFlow Remote Execution cluster.
The only supported mechanism for server authentication is TLS. In order to use
it, you have to set the corresponding server TLS flags. You can either use an
existing certificate authority, or you can use self-signed certificates.
By default, Bazel trusts the set of root certificates that is shipped with the
Java Development Kit (JDK). If you are using a non-standard certificate
authority, you have to configure Bazel to accept its certificate via the
corresponding Bazel flag.
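As a sketch, trusting a custom certificate authority from Bazel might look like the following `.bazelrc` fragment; the flag name `--tls_certificate` is Bazel's flag for trusting a server certificate, and the path is illustrative:

```
# .bazelrc sketch: trust a custom CA when talking to the cluster.
# Assumes Bazel's --tls_certificate flag; adjust the path to your CA bundle.
build --tls_certificate=/etc/engflow/ca.crt
```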
The EngFlow Remote Execution Service supports four mechanisms for client
authentication.
If you have already set up GCP RBE access roles, then we recommend using the
GCP RBE mechanism, which allows continued use of your existing IAM setup.
If you already have a TLS certificate authority and distribute client
certificates to your clients, and you use a client that supports mTLS
(e.g. Bazel 3.1 or newer), then we recommend using the mTLS mechanism.
Otherwise, you will have to decide whether to set up a certificate authority
(which may require setting up additional infrastructure, such as HashiCorp
Vault or smallstep), use a VPN with client authentication disabled (none), or
use GCP OAuth 2.0 (email- or RBE-based).
If you disable client authentication, anyone who can initiate a network
connection to the cluster can use it. This must only be used in combination with
network-based usage restrictions, e.g., over a VPN. By default, every client can
read and write to the CAS, and can read and execute actions, but cannot directly
write actions. In order to change the default permissions, use the
--principal_based_permissions flag.
Basic authentication uses username/password pairs. These are transmitted in clear text over a potentially encrypted connection (encrypted only if server-side TLS is enabled); even then, basic authentication may be susceptible to man-in-the-middle attacks, and we do not recommend it.
To enable basic authentication: create an htpasswd file with the user-password
pairs, copy it to every scheduler instance, then pass its path in the
configuration via the --basic_auth_htpasswd flag.
The password entries must be in apr1 format (Apache MD5). See https://httpd.apache.org/docs/2.4/misc/password_encryptions.html for details.
Example (generating an apr1 entry for user alice):

$ htpasswd -m /etc/engflow/htpasswd alice

The generated entry will have the form alice:$apr1$&lt;salt&gt;$&lt;hash&gt;.
By default, all authenticated users have the “user” role. You can specify
more detailed permissions with --principal_based_permissions, for example:

--basic_auth_htpasswd=/etc/engflow/htpasswd --principal_based_permissions=alice->admin --principal_based_permissions+=bob->cache-reader
If you are using Bazel, create a ~/.netrc file with an entry for the remote
execution service address:

machine demo.engflow.com login alice password foo

To use an alternative netrc file, set its path in the NETRC environment
variable before running Bazel.
mTLS or mutual TLS authentication requires each client to present a signed
client TLS certificate whenever it establishes a connection to the cluster.
Use the corresponding server flag to configure the
certificate authority that is trusted to authenticate clients. In addition, you
can configure more fine-grained permissions based on the TLS common name
specified in the client TLS certificate using the --principal_based_permissions
flag.
If you are using Bazel, you can configure it using its client TLS flags.
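As a sketch, a Bazel mTLS setup might look like the following `.bazelrc` fragment, assuming the client TLS flags `--tls_client_certificate` and `--tls_client_key` that Bazel added in 3.1; the paths are illustrative:

```
# .bazelrc sketch: present a client certificate to the cluster (mTLS).
# Assumes Bazel's --tls_client_certificate / --tls_client_key flags;
# point them at the certificate and key issued by your CA.
build --tls_client_certificate=/etc/engflow/client.crt
build --tls_client_key=/etc/engflow/client.key
```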
GCP Email Authentication
GCP email-based authentication uses GCP OAuth 2.0 bearer tokens to prove ownership
of an email address to the cluster. You can then configure per-client
permissions using the --principal_based_permissions flag.
If you are using Bazel, you can configure it using either
--google_default_credentials or --google_credentials.
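As a sketch, passing a service account key through Bazel might look like the following `.bazelrc` fragment, assuming Bazel's `--google_credentials` flag; the key file path is illustrative:

```
# .bazelrc sketch: authenticate with a GCP service account key file.
# Assumes Bazel's --google_credentials flag; substitute your own key path.
build --google_credentials=/etc/engflow/service-account.json
```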
GCP RBE Authentication
GCP RBE-based authentication also uses GCP OAuth 2.0 bearer tokens. However,
instead of relying on verified email addresses, it queries GCP’s IAM for the
Remote Build Executor permissions.
In order to use this authentication mechanism, you must specify a GCP project.
The GCP permissions and roles are documented at https://cloud.google.com/iam/docs/understanding-roles#other-roles.
The EngFlow Remote Execution service supports only a subset of the RBE permissions and roles. Permissions:

remotebuildexecution.actions.create: to run an action remotely
remotebuildexecution.actions.delete: to delete an action cache entry (this is not used by Bazel)
remotebuildexecution.actions.get: to read an action cache entry
remotebuildexecution.actions.write: to write an action cache entry from a client (remotely executed actions always have write access to the action cache)
remotebuildexecution.blobs.create: to write an entry to the CAS
remotebuildexecution.blobs.get: to read an entry from the CAS, or to query whether an entry is in the CAS based on its digest
These are the relevant roles, though note that you can create custom roles for different subsets of permissions:
Remote Build Execution Artifact Creator
Can run actions remotely. This is the most commonly used role, and maps to the
user role defined in EngFlow RE.
Remote Build Execution Artifact Admin
Can run actions remotely, and also delete actions. Cannot write actions to the cache.
Remote Build Execution Action Cache Writer
Can write CAS and action cache entries. This is primarily useful when using the system as a pure cache without remote execution. In this use case, the CI system should be allowed to read and write the cache (requires both this role and the Remote Build Execution Artifact Viewer), while individual engineers are only allowed to read the cache.
Remote Build Execution Artifact Viewer
Can read CAS and action cache entries. This is primarily useful when using the system as a pure cache without remote execution.
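Granting one of these roles happens through GCP IAM, outside of EngFlow itself. A command sketch with the gcloud CLI follows; the project ID and user email are illustrative, and the role ID shown is the standard GCP identifier for the Artifact Creator role (verify it against the GCP documentation linked above):

```
# Command sketch: grant a user the Artifact Creator role on a project.
# Substitute your own project ID, user email, and desired role ID.
gcloud projects add-iam-policy-binding my-project \
  --member=user:alice@example.com \
  --role=roles/remotebuildexecution.artifactCreator
```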