Version 1.52 of the documentation is no longer actively maintained. The site that you are currently viewing is an archived snapshot. For up-to-date documentation, see the latest version.


Server & Client Authentication

This document describes how to set up server and client authentication for an EngFlow Remote Execution cluster.

Server Authentication

The only supported mechanism for server authentication is TLS. To use it, set the --tls_certificate and --tls_key flags. You can either use an existing certificate authority, or you can use self-signed certificates.

By default, Bazel trusts the set of root certificates that is shipped with the Java Development Kit (JDK). If you are using a non-standard certificate authority, you have to configure Bazel to accept its certificate using Bazel’s --tls_certificate flag.
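As a sketch of the self-signed route (the hostname, key size, and paths are illustrative, not recommendations):

```shell
# Illustrative only: create a self-signed certificate for the cluster.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt \
  -days 365 -subj "/CN=cluster.example.com"

# Scheduler config (flags as described above):
#   --tls_certificate=server.crt
#   --tls_key=server.key
#
# Bazel clients trust the same certificate:
#   bazel build --tls_certificate=server.crt //...
```

Since the certificate is self-signed, the same file serves both as the server certificate and as the root that clients are told to trust.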

Client Authentication

The EngFlow Remote Execution Service supports five mechanisms for client authentication, selected with the --client_auth flag: none, basic, mtls, gcp_email, and gcp_rbe.

If you have already set up GCP RBE access roles, then we recommend the gcp_rbe mechanism, which lets you keep using your existing IAM setup.

If you already have a TLS certificate authority and distribute client certificates to your clients, and you use a client that supports mTLS (e.g. Bazel 3.1 or newer), then we recommend using the mtls mechanism.

Otherwise you will have to decide whether to set up a certificate authority (which may require additional infrastructure, such as HashiCorp Vault or smallstep), use a VPN (none), or use GCP OAuth 2.0 (gcp_email).

No Authentication

If you disable client authentication, anyone who can initiate a network connection to the cluster can use it. This must only be used in combination with network-based usage restrictions, e.g., over a VPN. By default, every client can read and write to the CAS, and can read and execute actions, but cannot directly write actions. In order to change the default permissions, use the --principal_based_permissions flag, for example --principal_based_permissions=*->admin.
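For example, a VPN-only cluster that should grant every client full access could combine the two flags along these lines (a sketch; the role assignment follows the syntax shown above):

```
--client_auth=none
--principal_based_permissions=*->admin
```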

Basic Authentication

Basic authentication uses username / password pairs. These are transmitted as clear text over a potentially encrypted connection (if server-side TLS is enabled); however, this may still be susceptible to man-in-the-middle attacks, and we do not recommend basic authentication.

Bazel supports basic authentication as of version 4.0.0 using a .netrc file, but only for remote execution / caching, not for the build event service.

To enable basic authentication: create an htpasswd file with the user-password pairs, copy it to every scheduler instance, then add its path to the config file with --basic_auth_htpasswd.

The password entries must be in apr1 format (Apache MD5); see the Apache htpasswd documentation for details.

Example (generating a htpasswd entry):

$ htpasswd -m /etc/engflow/htpasswd alice

The generated entry has the form (the salt and hash shown here are placeholders):

alice:$apr1$<salt>$<hash>

By default, all authenticated users have the “user” role. You can specify more detailed permissions with --principal_based_permissions.

Example (/etc/engflow/config snippet; the principal name is illustrative):

--principal_based_permissions=*->user
--principal_based_permissions+=alice->admin

(Note the = and += operators: --principal_based_permissions is a list-type flag; = overrides the default flag value, while += appends to it.)

If you are using Bazel, create a ~/.netrc file with an entry for the remote execution service address.

Example (~/.netrc, using an illustrative cluster address):

machine cluster.example.com
login alice
password foo

To use an alternative netrc file, set its path in the NETRC environment variable before running Bazel.
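The netrc setup above can be sketched as a short shell session (the cluster address and file location are illustrative):

```shell
# Write credentials to a non-default location...
mkdir -p "$HOME/.config/engflow"
cat > "$HOME/.config/engflow/netrc" <<'EOF'
machine cluster.example.com
login alice
password foo
EOF
chmod 600 "$HOME/.config/engflow/netrc"

# ...and point Bazel at it via the NETRC environment variable:
export NETRC="$HOME/.config/engflow/netrc"
# bazel build //...
```

Restricting the file's permissions matters because the password is stored in clear text.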

mTLS Authentication

mTLS or mutual TLS authentication requires each client to present a signed client TLS certificate whenever it establishes a connection to the cluster. Use the --tls_trusted_certificate flag to configure the certificate authority that is trusted to authenticate clients. In addition, you can configure more fine-grained permissions based on the TLS common name specified in the client TLS certificate using the --principal_based_permissions flag.

If you are using Bazel, you can configure it using the --tls_client_certificate and --tls_client_key flags.
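Put together, a minimal sketch (all paths are illustrative): the cluster trusts the client CA, and Bazel presents a certificate signed by it.

```
# Cluster side:
--client_auth=mtls
--tls_trusted_certificate=/etc/engflow/client-ca.crt

# Bazel invocation:
bazel build //... \
  --tls_client_certificate=/home/alice/.engflow/alice.crt \
  --tls_client_key=/home/alice/.engflow/alice.key
```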

GCP Email Authentication

GCP email-based authentication uses GCP OAuth 2.0 bearer tokens to prove ownership of an email address to the cluster. You can then configure per-client permissions using the --principal_based_permissions flag.

If you are using Bazel, you can configure it using either --google_default_credentials or --google_credentials, and you also have to set --google_auth_scopes=email.
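As a sketch, a gcp_email setup might look like this (assuming application default credentials are already configured on the client, e.g. via gcloud auth application-default login):

```
# Cluster side:
--client_auth=gcp_email

# Bazel invocation:
bazel build //... \
  --google_default_credentials \
  --google_auth_scopes=email
```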

GCP RBE Authentication

GCP RBE-based authentication also uses GCP OAuth 2.0 bearer tokens. However, instead of relying on verified email addresses, it queries GCP’s IAM for the Google-defined Remote Build Executor permissions.

In order to use this authentication mechanism, you must specify a GCP project using the --gcp_rbe_auth_project flag.
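A minimal sketch, with an illustrative project name:

```
--client_auth=gcp_rbe
--gcp_rbe_auth_project=my-rbe-project
```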

The Google-defined GCP permissions and roles are documented in Google's Cloud IAM reference.

The EngFlow Remote Execution service supports only a subset of the RBE permissions and roles. Permissions:

  • remotebuildexecution.actions.create to run an action remotely
  • remotebuildexecution.actions.delete to delete an action cache entry (this is not used by Bazel)
  • remotebuildexecution.actions.get to read an action cache entry
  • remotebuildexecution.actions.write to write an action cache entry from a client (remotely executed actions always have write access to the action cache)
  • remotebuildexecution.blobs.create to write an entry to the CAS
  • remotebuildexecution.blobs.get to read an entry from the CAS or query whether an entry is in the CAS based on its digest

These are the relevant roles, though note that you can create custom roles for different subsets of permissions:

  • Remote Build Execution Artifact Creator aka roles/remotebuildexecution.artifactCreator

    Can run actions remotely. This is the most commonly used role, and maps to the user role defined in EngFlow RE.

  • Remote Build Execution Artifact Admin aka roles/remotebuildexecution.artifactAdmin

    Can run actions remotely, and also delete actions. Cannot write actions to the cache.

  • Remote Build Execution Action Cache Writer aka roles/remotebuildexecution.actionCacheWriter

    Can write CAS and action cache entries. This is primarily useful when using the system as a pure cache without remote execution. In this use case, the CI system should be allowed to read and write the cache (requires both this role and the Remote Build Execution Artifact Viewer), while individual engineers are only allowed to read the cache.

  • Remote Build Execution Artifact Viewer aka roles/remotebuildexecution.artifactViewer

    Can read CAS and action cache entries. This is primarily useful when using the system as a pure cache without remote execution.
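For instance, granting an engineer the artifact-creator role described above might look like this (the project name and member are illustrative; this is a standard gcloud IAM binding, not an EngFlow-specific command):

```
gcloud projects add-iam-policy-binding my-rbe-project \
  --member=user:alice@example.com \
  --role=roles/remotebuildexecution.artifactCreator
```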