- Deploy nodes in pairs for redundancy
- Offset maintenance windows within node pairs to maintain continuous uptime
- Use the metrics endpoint to monitor performance and inform scaling decisions
- Deploy gateways geographically close to users
- Deploy relays close to resources
StrongDM Gateway & Relay Packaging
StrongDM Gateways and Relays are distributed as a Linux binary and are therefore compatible with many target deployment types. Sample deployment walkthroughs for specific target deployment types can be found below:
Recommended Minimum Specifications for Gateways / Relays
- 2 CPUs
- 4GB Memory
- Disk Space: Minimal, or commensurate with the expected logging volume when session logs are recorded locally on Gateways and Relays
New Releases & Package Updates
The latest version of Gateways and Relays should always be used to ensure compatibility across StrongDM clients, datasources, and peering Gateways and Relays.
StrongDM includes an auto-update system that is enabled by default to ensure nodes are always current. As long as the StrongDM Gateway/Relay is run by a user with the appropriate permissions and network access, it will automatically update itself. It is recommended to stagger maintenance windows for Gateways and Relays to ensure high availability during updates.
In cases where using the StrongDM auto-update system is not compatible with deployment policies, it is possible to freeze versions via an environment variable on each node, so that new versions can be explicitly deployed.
Additional detailed information regarding Gateway/Relay automatic update logic and maintenance window considerations can be found here.
Network Configuration Requirements
StrongDM Gateways are specifically hardened to be exposed for ingress from StrongDM clients. A Gateway accepts connections only from a StrongDM client that has first authenticated and been authorized via the StrongDM Control Plane.
StrongDM Gateways, Relays, and Clients have specific network routability requirements for each that are detailed here.
Gateways automatically peer with each other and, when necessary, fail over between themselves. With this in mind, it is best practice to deploy them in pairs for redundancy.
Gateways and Relays can be scaled vertically by increasing a node's specifications, such as CPU and/or memory, when appropriate. Gateways are primarily CPU-bound, so consider scaling up any Gateway whose CPU saturation consistently exceeds 70%.
Alternatively, Gateways and Relays can be scaled horizontally by adding nodes in parallel, for example via an auto-scaling group that triggers on usage thresholds in existing nodes.
StrongDM Nodes can be placed in auto-scaling groups that deploy new instances based on CPU or other metric spikes during unanticipated high-traffic scenarios. However, because StrongDM Nodes have built-in fail-over and routing capabilities, they should NOT be placed behind a load balancer. If a load balancer is required, such as in a cluster configuration, there must be a 1:1 relationship between port and container.
Reference scripts for auto-scaling deployments can be found here.
Gateways and Relays provide various metrics on performance that can be monitored, and used for expansion and sizing considerations. More information around monitoring can be found here.
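As a concrete illustration, the sustained-CPU guideline above can be reduced to a simple decision helper. This is a sketch only: StrongDM's actual metrics endpoint, port, and payload format are not assumed here, and the sample values are invented; the helper simply operates on CPU-utilization percentages that your monitoring pipeline collects from each node.

```python
# Sketch: decide when sustained CPU saturation warrants scaling a node up.
# How samples are collected (endpoint URL, metric names) depends on your
# monitoring setup and is deliberately left out -- these are not documented
# StrongDM specifics.

def should_scale_up(cpu_samples, threshold=70.0):
    """Return True when every recent CPU sample exceeds the threshold,
    i.e. saturation is sustained rather than a momentary spike."""
    return bool(cpu_samples) and all(s > threshold for s in cpu_samples)

# Five one-minute CPU readings (percent) from a Gateway's metrics feed:
print(should_scale_up([72.5, 81.0, 90.2, 76.4, 74.9]))  # -> True (sustained)
print(should_scale_up([72.5, 34.0, 90.2, 76.4, 74.9]))  # -> False (a dip)
```

Requiring every sample to exceed the threshold, rather than averaging, avoids scaling up on a single transient spike.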
In a connection chain of SDM Client → Gateway → Relay → Resource, the SDM Client's link is expected to be the highest-latency link in the chain. With this in mind, StrongDM recommends deploying Gateways geographically close to users.
Relays, being the last link in the connection chain to a resource, should be placed as close to resources as possible.
In cases where users are geographically distributed, it may be advantageous to also distribute multiple sets of Gateways.
For example, consider an organization with offices in San Francisco and New York and resources scattered across the same geographic regions. Gateways should be deployed in each office network, with Relays located in the resources' networks, to ensure optimal performance.