# KOBIL SHIFT helm chart

[[TOC]]
## Prerequisites

- Kubernetes version 1.23 - 1.29. Currently tested with 1.26 and 1.29.
- Helm version 3.x.
- Postgres version 13.x or 16.x for all components except scp-addressbook, scp-presence, scp-messenger, scp-media, scp-gateway. Note: scram-sha-256 password hashing is NOT supported.
  - Alternatively, Oracle 19c for all components used by Shift-Lite (ast components, scp-notifier, idp components, otp-management).
- MongoDB version 4.4 or 6 for scp-addressbook, scp-presence, scp-messenger, scp-media, scp-gateway.
- Redis Standalone or Redis Cluster, version 6.2.x or 7.2.x.
- Istio. Currently tested with 1.17.2 and 1.18.2.
- Strimzi Kafka Operator with support for Kafka 3.6.1 (versions 0.39.0 - 0.42.0). Currently tested with 0.39.0.

  ```shell
  helm install -n kafka strimzi-kafka-operator strimzi-kafka-operator --version x.y.z --repo https://strimzi.io/charts/ --set watchAnyNamespace=true
  ```

- Access to the KOBIL chart museum:

  ```shell
  helm repo add kobil https://charts.kobil.com --username {chart_username} --password {chart_password}
  ```

- An `imagePullSecret` providing access to the relevant repositories at Azure (kobilsystems.azurecr.io):

  ```shell
  kubectl create secret docker-registry registry-azure \
    --docker-server=kobilsystems.azurecr.io \
    --docker-username=your_user_token_name \
    --docker-password=your_password
  ```

- KOBIL Shift operator
## Deploy KOBIL Shift-Operator

KOBIL Shift Operator charts are available at https://charts.kobil.com.

Before deployment, configure the image pull secret, Docker image registry, and Helm chart repository credentials in the configuration file `shift-operator-values.yaml`:

```yaml
global:
  imagePullSecrets:
    - registry-azure
  registry: kobilsystems.azurecr.io
  helmRepo:
    url: https://charts.kobil.com
    username: ""
    password: ""
```

```shell
helm install shift-operator -f shift-operator-values.yaml -n shift kobil/shift-operator --version x.y.z
```

Verify that KOBIL Shift-Operator is running by executing:

```shell
kubectl -n shift get deployments
```

Also verify that the custom resource definition `servicegroups.shift.kobil.com` is available by executing `kubectl get crd`.
## Deploy KOBIL Shift

The next step is to deploy KOBIL Shift in the same namespace where the Shift Operator is running.

Set the appropriate configuration for the KOBIL Shift services in the configuration file `shift-values.yaml`. The included values.yaml can be used as a template. See sections Values and Issuer CA for additional information.

```shell
helm install shift -f shift-values.yaml -n shift kobil/shift --version x.y.z
```

Deploying the Shift chart creates multiple `servicegroups.shift.kobil.com` objects which are managed by the Shift Operator. Use

```shell
kubectl -n shift get servicegroups.shift.kobil.com
```

to obtain an overview of the deployed servicegroups. The READY column shows the status of each servicegroup. The status changes to `true` once the Shift Operator has successfully deployed all services in the servicegroup and the corresponding workloads are in a ready state.

If a servicegroup fails to become ready, use the command

```shell
kubectl -n shift describe servicegroups.shift.kobil.com <name-of-servicegroup>
```

to obtain information about the deployment error.

In addition, the Shift chart creates resources which are managed by the Strimzi Kafka Operator and Istio. The status of the Kafka cluster can be observed using the command

```shell
kubectl -n shift get kafkas.kafka.strimzi.io
```

The deployed Istio resources can be viewed using the commands

```shell
kubectl -n shift get gateways.networking.istio.io
kubectl -n shift get virtualservices.networking.istio.io
kubectl -n shift get destinationrules.networking.istio.io
```
## Post deployment configuration

The following settings need to be performed in the IDP UI. Open https://idp.{{global.routing.domain}}/auth/admin and log in using the IDP master admin credentials:

```
username: {{ global.idp.adminUser.username }}
password: {{ global.idp.adminUser.password }}
```

Configure SMTP settings under 'Realm Settings -> Email'.

Create a confidential OIDC client `workspacemanagement` and configure its client_id and client_secret in file `shift-values.yaml`:

```yaml
smartdashboardKongConfigurationBackend:
  # -- client_id and client_secret of an OIDC client in IDP master realm. Must be manually created.
  config:
    masterClientId: "workspacemanagement"
    masterClientSecret: "client_secret"
```

Then execute the command

```shell
helm upgrade shift -f shift-values.yaml -n shift kobil/shift --version x.y.z
```

Open the URL https://smartdashboard.{{global.routing.domain}}/dashboard/master/workspace-management to access the KOBIL Portal.
## Issuer CA

Part of every SHIFT deployment are multiple identities acting as Certificate Authorities (CA).

- A CA must have a key pair (CA Key Pair).
- A CA must have a digital certificate (CA Certificate) signed either by its own private key, as a self-signed root certificate, or by another CA's private key, as an intermediate certificate.
- The main CA identity of a SHIFT deployment is named Issuer CA.
- The main CA identity's certificate is named Issuer CA Certificate.
- The main CA identity's key pair is named Issuer CA Key Pair.
- A sub CA Certificate of a SHIFT deployment, which is signed by the Issuer CA's private key, is named Tenant Signer CA Certificate.
- Sub CA key pairs are named Tenant Signer CA Key Pairs.

An Issuer CA Certificate and Issuer CA Key Pair must be generated for each SHIFT deployment. The Issuer CA Key Pair cannot be changed afterwards. Generate the Issuer CA Certificate according to the following instructions.

- The Issuer CA Key Pair must be in PKCS#8 format. Both the Issuer CA Certificate and the Issuer CA Key Pair must be DER-encoded.
- The Issuer CA Key Pair must use one of the supported algorithms:
  - RSA with keys >= 2048 bit
  - ECDSA with one of the supported curves: `secp256r1` (or `P-256`), `secp384r1` (or `P-384`), `secp521r1` (or `P-521`)
  - Ed25519
- Set the key algorithm used by the Issuer CA Key Pair using `ca.signers.key_generation.algorithm`. If using an intermediate certificate, it is recommended to use the same key algorithm that is configured for the root certificate. The `curve` parameter (for ECDSA) or the `strength` parameter (for RSA) can be set to a more secure configuration, compared to the root certificate, to provide additional security.
- The Issuer CA Certificate must have the `Basic Constraints` Certificate Extension with `CA=True` and `pathLen` unset or `>= 1`. The Certificate Extension must be marked as critical.
- The Issuer CA Certificate must have the `Key Usage` Certificate Extension with at least the bits for `keyCertSign` and `cRLSign` set. The Certificate Extension must be marked as critical. Other usage bits should not be set.
- The Issuer CA Certificate must have specific Certificate Extension policies set. These Certificate Extensions depend on the feature sets that are required. Alternatively, the Certificate Extensions may specify `anyPolicy`. These Certificate Extension policies should not be marked critical. The following Certificate Extension policies are supported:
  - `Base` policy with OID `1.3.6.1.4.1.14481.109.4.1`. This policy must always be added.
  - `SCP` policy with OID `1.3.6.1.4.1.14481.109.4.2`. This policy is needed when scp services are deployed and messaging features are used.
  - `mTLS` policy with OID `1.3.6.1.4.1.14481.109.4.3`. This policy is needed when ast services are configured to enforce mutual TLS communication with clients.
- The above policies are umbrella policies, combining multiple single policies required by the associated feature set. It is recommended to use these umbrella policies. They combine the following single policies:
  - `Base` policy contains
    - `1.3.6.1.4.1.14481.109.1.0` (profile `LEAF_CA`)
    - `1.3.6.1.4.1.14481.109.1.4` (profile `AST_DEVICE`)
  - `SCP` policy contains
    - `1.3.6.1.4.1.14481.109.1.1` (profile `SIGNATURE`)
    - `1.3.6.1.4.1.14481.109.1.2` (profile `AUTHENTICATION`)
    - `1.3.6.1.4.1.14481.109.1.3` (profile `ENCRYPTION`)
    - `1.3.6.1.4.1.14481.109.1.6` (profile `SIGNATURE_GATEWAY`)
    - `1.3.6.1.4.1.14481.109.1.7` (profile `AUTHENTICATION_GATEWAY`)
    - `1.3.6.1.4.1.14481.109.1.8` (profile `ENCRYPTION_GATEWAY`)
  - `mTLS` policy contains
    - `1.3.6.1.4.1.14481.109.1.9` (profile `TLS_CLIENT`)
    - `1.3.6.1.4.1.14481.109.1.10` (profile `TLS_CLIENT_AND_KEY`)
- The Issuer CA Certificate may have the `Extended Key Usage` Certificate Extension with the `id_kp_OCSPSigning` key purpose set. Other key purpose IDs should not be set.
- Other Certificate Extensions should not be present.
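The umbrella-to-single policy mapping above can be captured in a small lookup table, e.g. to assemble the `certificatePolicies` line for an OpenSSL config from a chosen feature set. A minimal sketch; the OIDs are taken from the list above, while the helper name and structure are illustrative:

```python
# Umbrella policy OIDs (from the list above).
UMBRELLA_POLICIES = {
    "Base": "1.3.6.1.4.1.14481.109.4.1",
    "SCP": "1.3.6.1.4.1.14481.109.4.2",
    "mTLS": "1.3.6.1.4.1.14481.109.4.3",
}

# Single policy OIDs (profiles) combined by each umbrella policy.
SINGLE_POLICIES = {
    "Base": {
        "1.3.6.1.4.1.14481.109.1.0": "LEAF_CA",
        "1.3.6.1.4.1.14481.109.1.4": "AST_DEVICE",
    },
    "SCP": {
        "1.3.6.1.4.1.14481.109.1.1": "SIGNATURE",
        "1.3.6.1.4.1.14481.109.1.2": "AUTHENTICATION",
        "1.3.6.1.4.1.14481.109.1.3": "ENCRYPTION",
        "1.3.6.1.4.1.14481.109.1.6": "SIGNATURE_GATEWAY",
        "1.3.6.1.4.1.14481.109.1.7": "AUTHENTICATION_GATEWAY",
        "1.3.6.1.4.1.14481.109.1.8": "ENCRYPTION_GATEWAY",
    },
    "mTLS": {
        "1.3.6.1.4.1.14481.109.1.9": "TLS_CLIENT",
        "1.3.6.1.4.1.14481.109.1.10": "TLS_CLIENT_AND_KEY",
    },
}

def certificate_policies_line(features):
    """Build the OpenSSL certificatePolicies value for a feature set.

    The Base policy is mandatory and therefore always included.
    """
    selected = ["Base"] + [f for f in features if f != "Base"]
    return "certificatePolicies = " + ", ".join(
        UMBRELLA_POLICIES[f] for f in selected
    )

print(certificate_policies_line(["SCP", "mTLS"]))
# certificatePolicies = 1.3.6.1.4.1.14481.109.4.1, 1.3.6.1.4.1.14481.109.4.2, 1.3.6.1.4.1.14481.109.4.3
```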
Below is a simple example of how to generate an Issuer CA Certificate as a self-signed root certificate using OpenSSL. The example generates an Issuer CA Certificate for all three umbrella policies described above.

- Create file `openssl.cnf` with the following content:

  ```ini
  [req]
  default_bits = 4096
  encrypt_key = no
  default_md = sha512
  prompt = no
  utf8 = yes
  x509_extensions = v3_req
  distinguished_name = req_distinguished_name

  # Adjust below values as required
  [req_distinguished_name]
  C = DE
  ST = Rheinland-Pfalz
  L = Worms
  O = KOBIL GmbH
  CN = KOBIL Shift Issuer CA

  [v3_req]
  basicConstraints = critical, CA:TRUE, pathlen:1
  keyUsage = critical, keyCertSign, cRLSign
  # explicit policies
  certificatePolicies = 1.3.6.1.4.1.14481.109.4.1, 1.3.6.1.4.1.14481.109.4.2, 1.3.6.1.4.1.14481.109.4.3
  # or alternatively anyPolicy
  # certificatePolicies = 2.5.29.32.0
  ```

- Create an ECDSA key pair for curve P-521, convert it to PKCS#8 format, and store it in file `key.der`:

  ```shell
  openssl ecparam -name P-521 -genkey -noout -outform DER | openssl pkcs8 -inform DER -topk8 -nocrypt -outform DER -out key.der
  ```

- Create a self-signed certificate with a validity of 10 years for the public key generated in the previous step and store it in file `cert.der`:

  ```shell
  openssl req -nodes -x509 -days 3650 -config openssl.cnf -key key.der -keyform DER -out cert.der -outform DER
  ```

- Base64 encode the key and certificate. The content of the resulting files `key.b64` and `cert.b64` can be added to the values `common.ast.issuer.key` and `common.ast.issuer.certs`, respectively.

  ```shell
  openssl enc -a -A -in key.der -out key.b64
  openssl enc -a -A -in cert.der -out cert.b64
  ```
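The two `openssl enc -a -A` calls simply base64 encode the DER files into a single line without line breaks; any base64 tool produces the same result. A small equivalent sketch in Python (function name is illustrative, file names as in the example above):

```python
import base64
from pathlib import Path

def der_to_b64(der_path: str, b64_path: str) -> str:
    """Base64 encode a DER file into a single line, like `openssl enc -a -A`."""
    encoded = base64.b64encode(Path(der_path).read_bytes()).decode("ascii")
    Path(b64_path).write_text(encoded)
    return encoded

# der_to_b64("key.der", "key.b64")
# der_to_b64("cert.der", "cert.b64")
```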
### Updating policies

It is possible to change the supported Certificate Extension policies of the Issuer CA Certificate. This is done by reissuing the Issuer CA Certificate with the updated Certificate Extension policies. When reissuing the Issuer CA Certificate, the Issuer CA Key Pair, both the public and private keys, must not be changed.

When using the above OpenSSL example, edit the `openssl.cnf` config file and adjust the policies accordingly. Then reissue the Issuer CA Certificate using the command:

```shell
openssl req -nodes -x509 -days 3650 -config openssl.cnf -key key.der -keyform DER -out cert.der -outform DER
```

Then base64 encode it using the command

```shell
openssl enc -a -A -in cert.der -out cert.b64
```

and update the value `common.ast.issuer.certs` with the content of file `cert.b64`.

After updating the Certificate Extension policies of the Issuer CA Certificate, existing Tenant Signer CA Certificates must be manually updated using the following instructions:

- For each tenant in which a Tenant Signer CA Certificate exists, obtain an access token with admin write privileges.
  - In the default permission configuration, the required role is `ks-management/Admin`.
  - If the permission configuration was changed from the default, use a token with any of the `api.security.jwtAuth.external.writeAccessRoles` from the AST-CA service's values.
- Execute `PATCH /v1/tenants/<tenant>/signers/admin` with the admin token and an empty request body, e.g.

  ```shell
  curl -X 'PATCH' https://asts.example.com/v1/tenants/<tenant>/signers/admin \
    --header "authorization: bearer <token>"
  ```

- If successful, the AST-CA service returns a body that looks like this:

  ```json
  {
    "id": "<the new signer ID>",
    "tenant": "<tenant>",
    "name": "<signer name>"
  }
  ```

- The AST-CA service has enqueued the Tenant Signer CA Certificate to be reissued and will recreate it using the same Tenant Signer CA Key Pair.
- It is recommended, but not required, to recreate the SDK Config JWT for tenants in which the Tenant Signer CA Certificate has been reissued.
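With many tenants, the per-tenant PATCH calls can be scripted. A minimal sketch using only the Python standard library; the function name is illustrative, the host is a placeholder, and the request is only constructed here, pass it to `urllib.request.urlopen` to actually execute it:

```python
import urllib.request

def reissue_request(base_url: str, tenant: str, token: str) -> urllib.request.Request:
    """Build the PATCH request that triggers reissuing a tenant's
    Tenant Signer CA Certificate (empty request body)."""
    return urllib.request.Request(
        f"{base_url}/v1/tenants/{tenant}/signers/admin",
        method="PATCH",
        headers={"Authorization": f"bearer {token}"},
    )

# Example (placeholder values):
# req = reissue_request("https://asts.example.com", "<tenant>", "<token>")
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
```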
## Mutual TLS

Shift supports a mode where clients are forced to perform mutual TLS authentication when accessing certain endpoints. This feature is configured using the values section `common.mutualTLS:`.

Set `common.mutualTLS.services.enabled: true` to enable this feature. When enabled, the first instance that terminates TLS for client traffic must be configured to optionally perform mutual TLS authentication. The trusted CA certificates to use when verifying client certificates must be the same certificates provided in value `common.ast.issuer.certs` or, alternatively, in the existing secret `common.ast.issuer.existingSecretIssuerCa`.

Client certificates must be added to the request headers of requests which are forwarded to upstream services. The names of the request headers for client certificates must be specified as a list using the value `common.mutualTLS.services.certRequestHeaders:`. Multiple header names are supported.

```yaml
common:
  mutualTLS:
    services:
      enabled: true
      certRequestHeaders:
        - "x-forwarded-client-cert"
```

In case the Istio ingress gateway is the first instance that terminates TLS for client traffic, set `common.mutualTLS.istioIngressGateway.enabled: true` to configure it for mutual TLS.

The trusted CA certificates to use when verifying client certificates must also be configured. The required format is a single-line base64 encoded list of certificates in PEM format. Either provide them directly using value `common.mutualTLS.istioIngressGateway.cacerts:` or manually put them in a Kubernetes secret and set `common.mutualTLS.istioIngressGateway.useExistingCaCertsSecret: true`. The name of the existing secret must be `{{ .Values.global.routing.tlsSecret }}-cacert`, e.g. `tls-secret-cacert` when using the default.

When using mutual TLS on the Istio ingress gateway, the value of `common.mutualTLS.services.certRequestHeaders:` must not be changed.

```yaml
common:
  mutualTLS:
    services:
      enabled: true
    istioIngressGateway:
      enabled: true
      useExistingCaCertsSecret: true
```

Example of an existing CA certificates secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret-cacert
type: Opaque
data:
  cacert: "single line base64 encoded list of certificates in PEM format"
```

When using the mutual TLS feature, the certificate policy `mTLS` must be added to the Issuer CA Certificate. See section Issuer CA for details on the policies. See section Updating policies for details on how to add the `mTLS` policy to an existing Issuer CA Certificate and how to update existing Tenant Signer CA Certificates.
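The `cacert` value is simply the PEM certificate bundle base64 encoded into a single line. A sketch of rendering the whole secret manifest from a bundle file; the function name and the bundle file name are illustrative:

```python
import base64
from pathlib import Path

def cacert_secret_manifest(pem_bundle_path: str, name: str = "tls-secret-cacert") -> str:
    """Render the CA certificates secret, with the PEM bundle base64
    encoded into a single line as required for the `cacert` key."""
    cacert = base64.b64encode(Path(pem_bundle_path).read_bytes()).decode("ascii")
    return (
        "apiVersion: v1\n"
        "kind: Secret\n"
        "metadata:\n"
        f"  name: {name}\n"
        "type: Opaque\n"
        "data:\n"
        f'  cacert: "{cacert}"\n'
    )

# Example: print(cacert_secret_manifest("cacerts.pem")), then apply with kubectl.
```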
## External Kafka clusters

Shift supports external Kafka clusters.

The required topics must be manually created in the external Kafka cluster. See file `topics.yaml` for a list of required topics and their configuration (partitions, retention times). For Shift-lite, the topics from sections `common:`, `asts:`, `scp:`, and `idp:` must be created. When using Smartscreen services, the topics from section `smartscreen:` must be created as well. When using payment services, the topics from section `payment:` must also be created.
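The section selection above can be scripted when generating the topic list for the external cluster. A sketch; it assumes that the parsed `topics.yaml` maps each section key to a mapping of topic names, which should be verified against the actual file layout:

```python
# Sections of topics.yaml needed for Shift-lite (from the text above).
SHIFT_LITE_SECTIONS = ["common", "asts", "scp", "idp"]

def required_sections(smartscreen: bool = False, payment: bool = False) -> list:
    """Return the topics.yaml sections whose topics must be created."""
    sections = list(SHIFT_LITE_SECTIONS)
    if smartscreen:
        sections.append("smartscreen")
    if payment:
        sections.append("payment")
    return sections

def collect_topics(parsed_topics_yaml: dict, **features) -> list:
    """Collect topic names from a parsed topics.yaml.

    ASSUMPTION: each section maps topic names to their configuration;
    check the structure of the actual topics.yaml before relying on this.
    """
    topics = []
    for section in required_sections(**features):
        topics.extend(parsed_topics_yaml.get(section, {}))
    return topics
```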
To configure Shift to use an external Kafka cluster, the following parameters must be set:

- Disable the creation of the custom resources for the Strimzi Kafka operator:

  ```yaml
  strimzi:
    enabled: false
  ```

- Enable usage of an external Kafka cluster and provide hostname and port:

  ```yaml
  common:
    datastores:
      kafka:
        external:
          enabled: true
          broker:
            host: kafka-broker
            port: 9092
  ```
### Authentication

Shift supports only one user for all connections to Kafka. Only the SASL mechanism SCRAM-SHA-512 is supported. Use the following parameters to enable authentication and configure the username.

```yaml
common:
  datastores:
    kafka:
      auth:
        enabled: true
        username: shift-kafka-username
```

The password must be provided in an existing Kubernetes secret. The name of the secret must match the username. The password must be contained in the key `password`.

For username `shift-kafka-username` and password `shift-kafka-password`, the required Kubernetes secret can be generated using the following command:

```shell
kubectl create secret generic shift-kafka-username \
  --from-literal=password=shift-kafka-password
```

The resulting secret should look like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: shift-kafka-username
type: Opaque
data:
  password: c2hpZnQta2Fma2EtcGFzc3dvcmQ=
```
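Kubernetes stores secret values base64 encoded. A quick check that the `password` key of the secret above holds the expected value:

```python
import base64

# Value of the `password` key from the secret shown above.
secret_data = {"password": "c2hpZnQta2Fma2EtcGFzc3dvcmQ="}

decoded = base64.b64decode(secret_data["password"]).decode("utf-8")
print(decoded)  # shift-kafka-password
```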
### TLS

Shift supports TLS for Kafka connections. When TLS is used, authentication must also be enabled. The TLS trust store must be provided in an existing Kubernetes secret. Use the following parameters to enable TLS and configure the name of the existing Kubernetes secret containing the trust store.

```yaml
common:
  datastores:
    kafka:
      external:
        tls:
          enabled: true
          trustStoreSecret: shift-kafka-tls-truststore
```

The existing Kubernetes secret must contain the trust store in two formats. A file containing all required certificates in PEM format must be provided in the key `ca.crt`, encoded as a base64 string. A file containing all required certificates in PKCS#12 format must be provided in the key `ca.p12`, encoded as a base64 string. The import password for the PKCS#12 file must be provided in the key `ca.password`, encoded as a base64 string.

Given the files

- `/path/to/ca.crt` containing all required certificates in PEM format
- `/path/to/ca.p12` containing all required certificates in PKCS#12 format

as well as the import password `ca-import-password`, the required Kubernetes secret can be generated using the following command:

```shell
kubectl create secret generic shift-kafka-tls-truststore \
  --from-file=ca.crt=/path/to/ca.crt \
  --from-file=ca.p12=/path/to/ca.p12 \
  --from-literal=ca.password=ca-import-password
```

The resulting secret should look like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: shift-kafka-tls-truststore
type: Opaque
data:
  ca.crt: LS0tLS1...LS0tLS0K
  ca.p12: MIIGogI...xAgInEA==
  ca.password: Y2EtaW1wb3J0LXBhc3N3b3Jk
```
### Topics prefix

Shift supports an optional prefix added to all Kafka topics. This must be used when running multiple Shift deployments against the same external Kafka cluster, to ensure each deployment uses unique topics. The prefix must contain only lowercase alphanumeric characters and dashes ('-'), must start and end with an alphanumeric character, and must consist of no more than 16 characters. A dot character ('.') is inserted automatically as delimiter between the prefix and the internal topic name. For example, when using prefix `prod`, topic `com.kobil.audit` becomes `prod.com.kobil.audit`.

Use the following parameter to configure a topics prefix.

```yaml
common:
  datastores:
    kafka:
      external:
        topics:
          prefix: ""
```
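The prefix rules above can be checked mechanically before deployment. A small sketch; the helper names are illustrative:

```python
import re

# Prefix rules from the text above: lowercase alphanumerics and dashes,
# must start and end with an alphanumeric character, at most 16 characters.
PREFIX_RE = re.compile(r"[a-z0-9]([a-z0-9-]{0,14}[a-z0-9])?")

def is_valid_prefix(prefix: str) -> bool:
    return bool(PREFIX_RE.fullmatch(prefix))

def prefixed_topic(prefix: str, topic: str) -> str:
    """A dot is inserted automatically between prefix and internal topic name."""
    if not prefix:
        return topic
    if not is_valid_prefix(prefix):
        raise ValueError(f"invalid topics prefix: {prefix!r}")
    return f"{prefix}.{topic}"

print(prefixed_topic("prod", "com.kobil.audit"))  # prod.com.kobil.audit
```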
### Full example

Below is a full example configuring Shift against an external Kafka cluster using TLS, authentication, and topics prefix 'test'.

```yaml
# Disable custom resources for Strimzi Kafka operator
strimzi:
  enabled: false

# Configure Shift against external Kafka using authentication, TLS, and topics prefix.
common:
  datastores:
    kafka:
      auth:
        enabled: true
        username: shift-kafka-username
      external:
        enabled: true
        broker:
          host: kafka-broker
          port: 9092
        topics:
          prefix: "test"
        tls:
          enabled: true
          trustStoreSecret: shift-kafka-tls-truststore
```
## Istio Sidecar Proxy Injection

Shift supports injecting Istio sidecar proxies into its workloads to add Shift components to the Istio service mesh. This is enabled by setting

```yaml
global:
  routing:
    istio:
      options:
        inject: true
```

in `shift-values.yaml`. Shift controls sidecar proxy injection by adding the label `sidecar.istio.io/inject: "true"` to all workloads. Enabling sidecar injection on the namespace level is not required.
### Sidecar proxy configuration using annotations

By default, Shift adds the proxy config annotation

```yaml
annotations:
  proxy.istio.io/config: |
    holdApplicationUntilProxyStarts: true
```

to all workloads in order to delay application startup until the Istio proxy is ready. This avoids startup race conditions. Additional Istio-related annotations can be configured using the value

```yaml
global:
  routing:
    istio:
      resourceAnnotations: |
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
```

The content must be a YAML-formatted text block. To disable the annotations, e.g. in case they are set via global mesh options, set this value to the empty string (`""`). See Resource Annotations and Proxy Config for supported options.
### Istio sidecar proxy injection on OpenShift

OpenShift Service Mesh does not allow init containers to establish network connections outside of the service mesh, see Enabling sidecar injection. Shift components use init containers for performing database migrations, which will fail if the database is outside of the service mesh.

The workaround suggested by Red Hat is to exclude the database port from being redirected through the sidecar proxy. This can be achieved by setting

```yaml
global:
  routing:
    istio:
      resourceAnnotations: |
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
        traffic.sidecar.istio.io/excludeOutboundPorts: "1521"
```

in `shift-values.yaml`, where `1521` is the database port.

In addition, the various values for `database.host:` in `shift-values.yaml` must be set to the IP address, not the hostname, of the database.
### PeerAuthentication resources

Shift creates a PeerAuthentication resource with

```yaml
spec:
  mtls:
    mode: STRICT
```

to ensure that mutual TLS is required for all inbound traffic to the sidecar proxies and therefore the Shift services.

For some ports, mTLS mode `STRICT` cannot be used, e.g. additional ports serving Prometheus metrics. Shift creates additional PeerAuthentication resources to configure mTLS mode `PERMISSIVE` for such ports and the workloads in question.

The creation of PeerAuthentication resources can be disabled by setting

```yaml
global:
  routing:
    istio:
      options:
        createPeerAuthentication: false
```

in `shift-values.yaml`. When no PeerAuthentication resources exist, the default mode `PERMISSIVE` is used. This allows Shift services to accept both plaintext and mutual TLS traffic. See PeerAuthentication and Mutual TLS Migration for details.
### Sidecar resource

To reduce memory usage of Istio sidecar proxies when the mesh is large, Shift creates a Sidecar resource with

```yaml
spec:
  egress:
    - hosts:
        - "./*"
```

This configures the sidecar proxies to allow egress traffic only to other workloads in the same namespace. This affects only egress traffic to services which are part of the service mesh; egress traffic to services outside of the service mesh is not restricted. See Sidecar for details.

The creation of the Sidecar resource can be disabled by setting

```yaml
global:
  routing:
    istio:
      options:
        createSidecar: false
```

in `shift-values.yaml`.
## Upgrading

### Upgrading from versions before 0.189.0

This version adds support for reading Redis credentials from an existing Kubernetes secret to the pay services.

If Shift is configured to read database credentials from an existing Kubernetes secret, the Redis credentials used by the pay services must be added to this secret in key `PAY_SERVICES_REDIS_PASSWORD` before updating to this version.
### Upgrading from versions before 0.186.0

The service ast-webhooks included in Shift version 0.186.0 requires a database. Before updating to Shift version 0.186.0, create the database and configure it in `custom-values.yaml`:

```yaml
astWebhooks:
  database:
    host: postgres
    port: 5432
    name: "ast_webhooks"
    auth:
      username: user
      password: "password"
```

If Shift is configured to read database credentials from an existing Kubernetes secret, the ast-webhooks database credentials must be added to this secret in keys `AST_WEBHOOKS_DB_USERNAME` and `AST_WEBHOOKS_DB_PASSWORD`.
### Upgrading from versions before 0.179.0
This Shift version updates the version of the included Kafka cluster from 3.5.1 to 3.6.1. Required Strimzi Kafka Operator versions change from 0.36.1 - 0.39.0 to 0.39.0 - 0.42.0.
Before applying the update, ensure that Strimzi Kafka Operator version 0.39.0 is installed. This is the only version that supports both Kafka 3.5.1 and 3.6.1, see supported versions of Strimzi Kafka Operator.
Also ensure that a shift version using Kafka 3.5.1 is running before applying the upgrade, i.e. shift version 0.171.0 or newer.
### Upgrading from versions before 0.171.0
This shift version updates the version of the included Kafka cluster from 3.4.0 to 3.5.1. Supported Strimzi Kafka Operator versions change from 0.33.2 - 0.37.0 to 0.36.1 - 0.39.0.
Before applying the update, ensure that Strimzi Kafka Operator version 0.36.1 or 0.37.0 is installed. These are the only versions that support both Kafka 3.4.0 and 3.5.1, see supported versions of Strimzi Kafka Operator.
Also ensure that a shift version using Kafka 3.4.0 is running before applying the upgrade, i.e. shift version 0.153.0 or newer.
### Upgrading from versions before 0.168.0

Shift version 0.168.0 no longer requires the Kafka topics `com.kobil.ast.stream.sse` and `com.kobil.ast.stream-statestoreastmessages-changelog`. Since the included Kafka cluster is by default set to prevent topic deletion, the Kafka topics are not actually deleted. This has no negative impact. Optionally, the following steps can be performed after updating to Shift version 0.168.0 to permanently delete the Kafka topics:

- Enable topic deletion in Kafka by adding the following to custom-values.yaml and upgrade the Shift helm release.

  ```yaml
  strimzi:
    valuesOverride:
      kafka:
        config:
          delete.topic.enable: "true"
  ```

- Delete the Kafkatopic Kubernetes resources `com.kobil.ast.stream.sse` and `com.kobil.ast.stream-statestoreastmessages-changelog`.
- Remove the `valuesOverride` block and upgrade the Shift helm release to disable topic deletion.
### Upgrading from versions before 0.153.0
This shift version updates the version of the included Kafka cluster from 3.2.0 to 3.4.0. Supported Strimzi Kafka Operator versions change from 0.29.0 - 0.33.2 to 0.33.2 - 0.37.0.
Before applying the update, ensure that Strimzi Kafka Operator version 0.33.2 is installed. This is the only version that supports both Kafka 3.2.0 and 3.4.0, c.f. supported versions of Strimzi Kafka Operator.
Also ensure that a shift version using Kafka 3.2.0 is running before applying the upgrade, i.e. shift version 0.133.0 or newer.
### Upgrading from versions before 0.134.0

Shift version 0.134.0 removes two no longer needed Kafkatopic Kubernetes resources (`com.kobil.smartscreen.resource-changes` and `com.kobil.smartscreen.events`). Since the included Kafka cluster is by default set to prevent topic deletion, the Kafka topics are not actually deleted and the Kafkatopic Kubernetes resources will be automatically recreated by the Strimzi topic operator. This has no negative impact. Optionally, the following steps can be performed after updating to Shift version 0.134.0 to permanently delete the Kafka topics:

- Enable topic deletion in Kafka by adding the following to custom-values.yaml and upgrade the Shift helm release.

  ```yaml
  strimzi:
    valuesOverride:
      kafka:
        config:
          delete.topic.enable: "true"
  ```

- Delete the Kafkatopic Kubernetes resources `com.kobil.smartscreen.resource-changes` and `com.kobil.smartscreen.events`:

  ```shell
  kubectl -n shift delete kafkatopics.kafka.strimzi.io com.kobil.smartscreen.resource-changes com.kobil.smartscreen.events
  ```

- Remove the `valuesOverride` block and upgrade the Shift helm release to disable topic deletion.
### Upgrading from versions before 0.133.0
This shift version updates the version of the included Kafka cluster from 3.0.0 to 3.2.0. Supported Strimzi Kafka Operator versions change from 0.26.0 - 0.29.0 to 0.29.0 - 0.33.2.
Before applying the update, ensure that Strimzi Kafka Operator version 0.29.0 is installed. This is the only version that supports both Kafka 3.0.0 and 3.2.0, c.f. supported versions of Strimzi Kafka Operator.
Also ensure that a shift version using Kafka 3.0.0 is running before applying the upgrade, i.e. shift version 0.74.0 or newer.
### Upgrading from versions before 0.93.0

Updating to this version causes downtime of Smart Screen until a manual migration is performed. The existing Smart Screen must be migrated manually after performing the upgrade. Migration is done using an HTTP request to smartscreen-services. If migration is omitted, the services will launch, but clients will see an empty Smart Screen. Any changes to Smart Screen happening after the update and before the migration will be overwritten. Since this service is not exposed outside of the cluster, port forwarding must be used, e.g.

```shell
kubectl -n <shift-namespace> port-forward svc/<svc-name-smartscreen-services> 8080:80
```

Then execute the following curl command:

```shell
curl -X 'POST' http://localhost:8080/v1/commands/migrate
```
### Upgrading from versions before 0.86.0
Due to an incompatibility in the Infinispan version used by idp-core, a rolling update is not possible when using the default idp-core image. The idp-core deployment must be scaled down to 0 before applying the upgrade. This causes downtime.
### Upgrading from versions before 0.80.0

Shift version 0.80.0 removes built-in defaults for security related configurations. This includes the ast-services session and database encryption keys as well as the issuer private key and certificate. To allow existing installations to keep functioning, a new value `testInstallation` was added. When set to `true`, the previous defaults are used.

Note that `testInstallation: true` must only be used for test and demo deployments and is not suitable for productive usage.
### Upgrading from versions before 0.68.0

- Prepare for the upgrade:
  - Some Kafka topics were renamed, which means that the Kafkatopic resources are reinstalled. To avoid topic deletion, ensure that the Kafka option `delete.topic.enable: "false"` is set (this is the default since Shift version 0.47.0).
  - To avoid that the persistent volumes (pv) are deleted if the corresponding persistent volume claims (pvc) are accidentally removed, edit the persistent volume resources and change `spec.persistentVolumeReclaimPolicy` from `Delete` to `Retain`.
  - Some CD tools prune the pvc resources during the upgrade. In case of ArgoCD, this can be avoided by adding the following to values.yaml (this requires Shift version 0.45.0 or newer).

    ```yaml
    kafka:
      kafka:
        extraValues:
          template:
            persistentVolumeClaim:
              metadata:
                annotations:
                  argocd.argoproj.io/sync-options: Prune=false
      zookeeper:
        extraValues:
          template:
            persistentVolumeClaim:
              metadata:
                annotations:
                  argocd.argoproj.io/sync-options: Prune=false
    ```

  - Any of the above config changes must be applied to the currently used version of Shift.
- Before the upgrade:
  - In values.yaml, remove `global.kafka.enabled` and replace it with `strimzi.enabled`.
  - Additional Kafka topics added via value `kafka.topics` must be migrated to value `strimzi.additionalTopics`. See section Values for details.
- After performing the update, the ast-stream application needs to be manually reset using the Kafka Streams Application Reset Tool. This is required because the partitions of the corresponding Kafka topics were increased.
  - Enable topic deletion in Kafka by adding the following to values.yaml and upgrade the Shift helm release.

    ```yaml
    strimzi:
      valuesOverride:
        kafka:
          config:
            delete.topic.enable: "true"
    ```

  - Scale down the ast-stream service to zero. This can be done using the command:

    ```shell
    kubectl -n <shift-namespace> scale --replicas=0 deployment <ast-stream-deployment-name>
    ```

  - Reset the stream application using the command:

    ```shell
    kubectl -n <shift-namespace> exec -it <kafka-pod-name-0> \
      /bin/bash -- bin/kafka-streams-application-reset.sh \
      --bootstrap-servers localhost:9092 \
      --application-id com.kobil.ast.stream
    ```

    The output of the command will be similar to

    ```
    No input or intermediate topics specified. Skipping seek.
    Deleting all internal/auto-created topics for application com.kobil.ast.stream
    Done.
    ```

  - Scale up ast-stream to the previous number of replicas using the command:

    ```shell
    kubectl -n <shift-namespace> scale --replicas=1 deployment <ast-stream-deployment-name>
    ```

  - Disable topic deletion in Kafka by removing the values added in the first step and upgrade the Shift helm release.
- After performing the update, old topics can be deleted. This step is optional.

  1. Enable topic deletion in Kafka by adding the following to values.yaml and upgrade the shift helm release.

     ```yaml
     strimzi:
       valuesOverride:
         kafka:
           config:
             delete.topic.enable: "true"
     ```

  2. Open a shell in the running Kafka container:

     ```shell
     kubectl -n <shift-namespace> exec -it <kafka-pod> -- sh
     ```

  3. Execute the following commands to delete the old topics:

     ```shell
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic ast.audit
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic audit
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic health-check-topic
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.client.management
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.client.management.event
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.ast.healthCheck
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.ast.ca.signerCreated
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.astlogin.events
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.astmanagement.events
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.otp.management.events
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.scp.notifier.push_messages
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.vertx.smartscreen.events
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic com.kobil.vertx.smartscreen.smartScreenTopic
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic checkStatus
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic createOperation
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic createTenant
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic createTransaction
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic creditAction
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic creditActionResult
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic digitalAction
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic digitalActionResult
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic digitalBalanceTransaction
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic initiateCancelTransaction
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic initiateTransactionCreationAndPayment
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic initiateTransactionPayment
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic operationAction
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic operationActionResult
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic operationCallback
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic responseNotification
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic securityNotification
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic statusCallback
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic statusNotification
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic transactionCallback
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic transactionNotificationDeliveryData
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic transactionRequestDeliveryData
     ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic transactionRequestNotification
     ```

  4. Remove the `valuesOverride` block and upgrade the shift helm release to disable topic deletion.
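The per-topic delete commands above can also be run as a loop. This is a sketch, not part of the chart: it assumes the same Kafka container shell as in the exec step, and the leading `echo` makes it a dry run that only prints each command (remove the `echo` to actually delete the topics).

```shell
# Dry run: prints one delete command per legacy topic. The topic list matches
# the commands listed in this section; remove the 'echo' to execute for real.
TOPICS="ast.audit audit health-check-topic com.kobil.client.management \
com.kobil.client.management.event com.kobil.ast.healthCheck \
com.kobil.ast.ca.signerCreated com.kobil.astlogin.events \
com.kobil.astmanagement.events com.kobil.otp.management.events \
com.kobil.scp.notifier.push_messages com.kobil.vertx.smartscreen.events \
com.kobil.vertx.smartscreen.smartScreenTopic checkStatus createOperation \
createTenant createTransaction creditAction creditActionResult \
digitalAction digitalActionResult digitalBalanceTransaction \
initiateCancelTransaction initiateTransactionCreationAndPayment \
initiateTransactionPayment operationAction operationActionResult \
operationCallback responseNotification securityNotification statusCallback \
statusNotification transactionCallback transactionNotificationDeliveryData \
transactionRequestDeliveryData transactionRequestNotification"

for topic in $TOPICS; do
  echo ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic "$topic"
done
```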
Sizing

Sizing of the Kafka cluster deployed using the Strimzi Kafka Operator can be configured via value `strimzi.sizing.mode`. Supported values are 'basic', 'tuned', and 'custom'.
When using mode 'custom', values `strimzi.sizing.custom.kafka` and `strimzi.sizing.custom.zookeeper` must be specified. See also the documentation on sizing and configuration.
Note: Changing the sizing mode after deployment is highly discouraged, as it affects the topic replica count and the partition assignment to nodes. It can even lead to data loss.
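For example, selecting the tuned profile for a new deployment only requires setting the mode in values.yaml; the resource settings this expands to are listed below:

```yaml
strimzi:
  sizing:
    mode: "tuned"
```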
- Performance mode 'basic' corresponds to configuration

  ```yaml
  strimzi:
    sizing:
      mode: "custom"
      custom:
        kafka:
          replicas: 1
          resources:
            requests:
              memory: 2Gi
              cpu: "100m"
            limits:
              memory: 2Gi
          jvmOptions:
            -Xms: 1024m
            -Xmx: 1024m
          config:
            auto.create.topics.enable: "false"
            delete.topic.enable: "false"
            default.replication.factor: 1
            min.insync.replicas: 1
            offsets.topic.replication.factor: 1
            transaction.state.log.replication.factor: 1
            transaction.state.log.min.isr: 1
        zookeeper:
          replicas: 1
          resources:
            requests:
              memory: 768Mi
              cpu: "50m"
            limits:
              memory: 768Mi
          jvmOptions:
            -Xms: 512m
            -Xmx: 512m
  ```
- Performance mode 'tuned' corresponds to configuration

  ```yaml
  strimzi:
    sizing:
      mode: "custom"
      custom:
        kafka:
          replicas: 3
          resources:
            requests:
              memory: 8Gi
              cpu: "2"
            limits:
              memory: 8Gi
          jvmOptions:
            -Xms: 4096m
            -Xmx: 4096m
          config:
            auto.create.topics.enable: "false"
            delete.topic.enable: "false"
            default.replication.factor: 3
            min.insync.replicas: 2
            offsets.topic.replication.factor: 3
            transaction.state.log.replication.factor: 3
            transaction.state.log.min.isr: 2
        zookeeper:
          replicas: 3
          resources:
            requests:
              memory: 1536Mi
              cpu: "1"
            limits:
              memory: 1536Mi
          jvmOptions:
            -Xms: 1024m
            -Xmx: 1024m
  ```
Values
Key | Type | Description | Default |
---|---|---|---|
nameOverride | string | | "" |
fullnameOverride | string | | "" |
global.imagePullSecrets | list | Image pull secrets added to pod specs generated by this chart. | ["registry-secret"] |
global.registry | string | Docker registry for KOBIL provided docker images | "kobilsystems.azurecr.io" |
global.limits | object | Limits for kubernetes object names | {"lengths":{"commonName":63,"fullname":43,"releaseName":16}} |
global.annotations.common | object | Custom annotations added to deployment metadata and pod spec | {} |
global.annotations.workload | object | Custom annotations added to pod spec | {} |
global.labels.workloadPod | object | Custom labels added to pod spec | {} |
global.routing.domain | string | External domain name | "local" |
global.routing.tlsSecret | string | Name of kubernetes TLS secret for the required domain and subdomains | "tls-secret" |
global.routing.ingress.enabled | bool | Globally enable/disable creation of ingress resources for services. | false |
global.routing.ingress.class | string | Ingress class name of the ingress controller. | nil |
global.routing.istio.enabled | bool | Globally enable/disable creation of istio ingress gateways and virtual services. Requires istio operator. | true |
global.routing.istio.resourceAnnotations | string | Istio related annotations to be added to Shift workloads. Only applicable when Istio sidecar proxy injection is enabled. The content must be a yaml formatted text block. To disable annotations, e.g. in case they are set via global mesh options, set this value to the empty string ("" ). See Resource Annotations and Proxy Config for supported options. The default sets 'holdApplicationUntilProxyStarts: true' to delay application startup until the Istio proxy is ready. This avoids startup race conditions. | `"proxy.istio.io/config: |
global.routing.istio.gateways | object | Enable/disable istio ingress gateways for the three API groups. public gateway only routes endpoints marked as public. external gateway routes endpoints marked as public and external. admin gateway routes all endpoints. | {"admin":true,"external":false,"public":false} |
global.routing.istio.options.gatewayNamePrefix | string | Prefix for istio ingress gateway names. The final name is generated by appending -admin , -external , -public , respectively. Note, that istio ingress gateways names must be unique across the cluster. | "istio-ingressgateway" |
global.routing.istio.options.gatewayAddAllHosts | bool | The default setting (false ), configures a wildcard (* ) for the hosts exposed by the Istio ingress gateway. This means that any host is exposed. If set to true , all hosts required by Shift are added explicitly. The default (false ) should be used if dedicated Istio ingress gateways are used for Shift and the gateway workloads are running in the same namespace as shift. Setting this value to true allows using an Istio ingress gateway that is shared by multiple applications. Note: Using shared Istio ingress gateways is currently not supported when the gateway is configured for mutual TLS, i.e. common.mutualTLS.istioIngressGateway.enabled: false must be set when using shared ingress gateways. Note: When using a shared Istio ingress gateway, the TLS certificate and optional ingress resources must be manually created in the namespace where the Istio ingress gateway workload is running. The resources optionally generated by shift will not work in that case. Note: When using a shared Istio ingress gateway, Shift will configure the gateway to perform an SNI match on incoming requests. This will lead to issues if load balancers in front of Istio ingress gateway do not forward the SNI. See here for further information. | false |
global.routing.istio.options.gatewayHttpsRedirect | bool | If set to true, the Istio ingress gateway will send a redirect for all http requests asking clients to use https. | true |
global.routing.istio.options.inject | bool | Globally enable/disable injection of Istio sidecar proxies into Shift components. Shift controls sidecar proxy injection by adding the label sidecar.istio.io/inject: "true" to all workloads. Enabling sidecar injection on the namespace level is not required. | false |
global.routing.istio.options.createPeerAuthentication | bool | Globally enable/disable the creation of Istio PeerAuthentication resources. Only relevant when injection of Istio sidecar proxies is enabled. When enabled, Shift creates a PeerAuthentication resource with mTLS mode STRICT to ensure that mutual TLS is required for all inbound traffic to the sidecar proxies and therefore the Shift services. For some ports mTLS mode STRICT cannot be used, e.g. additional ports serving prometheus metrics. Shift creates additional PeerAuthentication resources to configure mTLS mode PERMISSIVE for such ports and the workload in question. When disabled, no PeerAuthentication is created, mTLS mode PERMISSIVE is used, and services accept both plaintext and mutual TLS traffic. See PeerAuthentication and Mutual TLS Migration for details. | true |
global.routing.istio.options.createSidecar | bool | Enable/disable the creation of Istio Sidecar resource. This reduces the memory usage of Istio sidecar proxies when the mesh is large. Only relevant when injection of Istio sidecar proxies is enabled. The created Sidecar resource configures the sidecar proxies to allow egress traffic only to other workloads in the same namespace. This affects only egress traffic to services which are part of the service mesh. Egress traffic to services outside of the service mesh is not restricted. See Sidecar for details. | true |
global.serviceMonitor.enabled | bool | Globally enable/disable creation of monitoring.coreos.com/v1 serviceMonitor object. Requires prometheus operator. | false |
global.podDisruptionBudget | object | Globally enable/disable creation of pod disruption budgets in the default configuration. Customization of the specific pod disruption budgets must be done in the valuesOverride section of the respective service. Below is an example. The parameters minAvailable (defining the minimum number/percentage of pods that should remain scheduled) and maxUnavailable (defining the Maximum number/percentage of pods that may be made unavailable) cannot both be set. Additional annotations can be added using object annotations . See Kubernetes docs for details. service-name: valuesOverride: pod: disruptionBudget: enabled: true annotations: {} minAvailable: 1 maxUnavailable: "" | {"enabled":false} |
global.certs.managed | bool | | true |
global.certs.issuerName | string | | "mbattery-ca-issuer" |
global.certs.additionalDnsNames | list | | [] |
global.ingress | object | Globally enable/disable service ingress resources. Legacy value. | {"enabled":false} |
global.monitoring | object | Globally enable/disable creation of monitoring.coreos.com/v1 serviceMonitor object. Requires prometheus operator. Legacy value. | {"prometheus":{"serviceMonitor":{"enabled":false}}} |
scp | object | SCP specific values | {"enabled":true} |
idp | object | IDP specific values | {"enabled":true} |
smartscreen | object | Smartscreen specific values | {"enabled":true} |
smartdashboard | object | Smartdashboard specific values | {"enabled":true} |
asts | object | AST services specific values | {"enabled":true} |
payment | object | PAY services specific values | {"enabled":false} |
apiProxy | object | CustomApisSuperapps services specific values | {"enabled":false} |
partof | string | Value of label app.kubernetes.io/part-of added to resources generated by this chart. | "shift" |
component | string | Value of label app.kubernetes.io/component added to resources generated by this chart. | "shift-chart" |
testInstallation | bool | Set to 'true' for test or demo deployments. When set to true, defaults values for security related parameters are applied to simplify deployment. Must not be used for production deployments. | false |
common | object | Section for configuration parameters that are common to more than one service. | {"ast":{"databaseEncryptionMasterKey":"","enforceMutuallyAuthenticatedKeyExchange":false,"existingSecretEncryptionKeys":"","issuer":{"certs":[],"existingSecretIssuerCa":"","key":""},"offerMutuallyAuthenticatedKeyExchange":false,"redis":{"password":"password","user":"default"},"sessionEncryptionMasterKey":""},"datastores":{"database":{"tls":{"mode":"PREFER","trustStore":{"password":"","store":"","type":"JKS"}},"type":"postgres"},"kafka":{"auth":{"enabled":false,"username":"shift-kafka-username"},"external":{"broker":{"host":"kafka-broker","port":9092},"enabled":false,"tls":{"enabled":false,"trustStoreSecret":"shift-kafka-tls-truststore"},"topics":{"prefix":""}}},"mongoDb":{"host":"mongodb","port":27017,"tls":false,"tlsOpts":{"cacerts":"","certKey":""}},"redis":{"host":"redis","mode":"standalone","port":6379}},"existingSecretAdminCredentials":"","existingSecretDatastoreCredentials":"","idp":{"adminUser":{"password":"password","username":"admin"}},"mutualTLS":{"istioIngressGateway":{"cacerts":"","enabled":false,"useExistingCaCertsSecret":false},"services":{"certRequestHeaders":["x-forwarded-client-cert"],"enabled":false}},"payment":{"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"payment","port":5432},"idp":{"authRole":"user","clientId":{"ui":"mpayUIPublic"},"merchantRole":"merchant"},"redis":{"password":"password"},"serviceProvider":{"tenant":"master"},"springBootAdmin":{"password":"password","username":"admin"},"stripe":{"apiKey":"apiKey","webhookSecret":"secret"}},"scp":{"enableP2PChat":true,"mediaMaxSizeBytes":16777216,"service":{"auth":{"password":"password","passwordHash":"password-hash","username":"scp-services"}}},"tracing":{"enable":{"ast":false,"idp":false,"payment":false,"scp":false,"smartscreen":false},"enabled":false,"jaegerGrpcHost":"http://jaeger-collector.tracing.svc.cluster.local:14250","otlpGrpcEndpoint":"","zipkinUrl":"http://jaeger-collector.tracing.svc.cluster.local:9411/api/v2/spans"}} |
common.existingSecretDatastoreCredentials | string | The name of an existing secret with datastore credentials. See README.md for details and the required structure. NOTE: When it's set, the datastore credentials configured in this file are ignored. | "" |
common.existingSecretAdminCredentials | string | The name of an existing secret with admin credentials. See README.md for details and the required structure. NOTE: When it's set, the admin credentials configured in this file are ignored. | "" |
common.datastores.redis | object | Common configuration of Redis used by all services | {"host":"redis","mode":"standalone","port":6379} |
common.datastores.redis.mode | string | Redis mode. Supported options are standalone (for Standalone Redis) and cluster (for Redis Cluster). When using mode cluster , it is recommended to set up a load balancer in front of the Redis cluster nodes and configure the hostname of that load balancer below. This decouples the Redis cluster topology from the configuration of the Shift services and ensures that Shift services are always able to connect to a healthy Redis cluster node to discover the complete Redis cluster. | "standalone" |
common.datastores.database.type | string | Database type. Supported options are postgres and oracle . Only the components used by Shift-Lite (ast components, scp-notifier, idp components, otp-management) provide support for Oracle DBMS. When using type: oracle , the components not supporting Oracle still expect valid configuration for Postgres/MongoDB. | "postgres" |
common.datastores.database.tls | object | Optional TLS configuration for database connection. Currently used by smartscreen and ast services. | {"mode":"PREFER","trustStore":{"password":"","store":"","type":"JKS"}} |
common.datastores.database.tls.mode | string | TLS mode. Supported values are PREFER : This mode tries to establish database connection using TLS. If that fails, it tries non-TLS connection. No server certificate validation is performed. VERIFY_CA : This mode requires TLS connection, i.e. there is no fallback to non-TLS. This mode performs server certificate validation against the provided trust store. VERIFY_FULL : This mode acts like VERIFY_CA with additional hostname verification of the server certificate. | "PREFER" |
common.datastores.database.tls.trustStore.type | string | Type of the truststore. Supported types are JKS and PKCS12 . | "JKS" |
common.datastores.database.tls.trustStore.store | string | Truststore of the selected type in BASE64 encoding. This setting is required for TLS modes VERIFY_CA and VERIFY_FULL . | "" |
common.datastores.database.tls.trustStore.password | string | Password to open the truststore. This setting is required when a truststore is provided. | "" |
common.datastores.mongoDb | object | Common configuration of Mongo DB used by scp services | {"host":"mongodb","port":27017,"tls":false,"tlsOpts":{"cacerts":"","certKey":""}} |
common.datastores.kafka | object | Common Kafka connection configuration. These values affect only Shift-Lite and Smartscreen services. See README.md section External Kafka clusters for details. | {"auth":{"enabled":false,"username":"shift-kafka-username"},"external":{"broker":{"host":"kafka-broker","port":9092},"enabled":false,"tls":{"enabled":false,"trustStoreSecret":"shift-kafka-tls-truststore"},"topics":{"prefix":""}}} |
common.datastores.kafka.auth.enabled | bool | Enable authentication | false |
common.datastores.kafka.external.enabled | bool | Set to true when using an external Kafka cluster and not the one created by Shift using Strimzi Kafka operator. In this case value strimzi.enabled: false must be set. | false |
common.datastores.kafka.external.broker | object | The broker hostname and port of the external Kafka cluster. | {"host":"kafka-broker","port":9092} |
common.datastores.kafka.external.topics | object | Optional prefix to add to all Kafka topics. This must be used when running multiple Shift deployments against the same external Kafka cluster to ensure each deployment uses unique topics. The prefix must contain only lowercase alphanumeric characters and dashes ('-'). The prefix must start and end with an alphanumeric character and consist of no more than 16 characters. Dot character ('.') will be inserted automatically as delimiter between prefix and internal topic name. For example, when using prefix 'prod', topic 'com.kobil.audit' becomes 'prod.com.kobil.audit'. | {"prefix":""} |
common.datastores.kafka.external.tls | object | Optional TLS configuration for connection to external cluster. | {"enabled":false,"trustStoreSecret":"shift-kafka-tls-truststore"} |
common.datastores.kafka.external.tls.enabled | bool | Set to true to enable TLS connection. When TLS is used, authentication must also be enabled. | false |
common.datastores.kafka.external.tls.trustStoreSecret | string | Configure the name of the existing Kubernetes secret that contains the TLS truststore for Kafka. This setting is mandatory when TLS is used. The secret must contain the truststore in two formats. A file containing all required certificates in PEM format must be provided in key ca.crt encoded as base64 string. A file containing all required certificates in PKCS#12 format must be provided in the key ca.p12 encoded as base64 string. The import password for the PKCS#12 file must be provided in the key ca.password encoded as base64 string. | "shift-kafka-tls-truststore" |
common.mutualTLS.services | object | Configure mutual TLS for relevant API endpoints. Set common.mutualTLS.services.enabled: true to enable. If enabled, the first instance that terminates TLS for client traffic must be configured to optionally perform mutual TLS authentication. It must also be configured to forward the received client certificate to upstream services in a request header. The names of the headers that contain client certificates must be specified as list using value common.mutualTLS.services.certRequestHeaders: . Multiple header names are supported. In case Istio ingress gateway is the first instance that terminates TLS for client traffic, use values common.mutualTLS.istioIngressGateway: to configure it and do not change the default value of certRequestHeaders . | {"certRequestHeaders":["x-forwarded-client-cert"],"enabled":false} |
common.mutualTLS.istioIngressGateway | object | Configure optional mutual TLS on the Istio ingress gateways. Enabling this feature is required when mutualTLS is enabled for services (common.mutualTLS.services.enabled: true ) and Istio ingress gateway is the first instance that terminates TLS for client traffic, i.e. all other load balancers are configured to pass-through the TLS connection. Set common.mutualTLS.istioIngressGateway.enabled: true to enable. If enabled, clients are requested to authenticate with a client certificate. The trusted CA certificates to use when verifying client certificates must be configured. These are the same certificates provided in value common.ast.issuer.certs or alternatively in the existing secret common.ast.issuer.existingSecretIssuerCa . The required format is a single line base64 encoded list of certificates in PEM format. Either provide them directly using value common.mutualTLS.istioIngressGateway.cacerts: or manually put them in a Kubernetes secret and set common.mutualTLS.istioIngressGateway.useExistingCaCertsSecret: true . The name of the existing secret must be {{ .Values.global.routing.tlsSecret }}-cacert , e.g. tls-secret-cacert when using the default. | {"cacerts":"","enabled":false,"useExistingCaCertsSecret":false} |
common.scp.mediaMaxSizeBytes | int | The maximum allowed size in Bytes of an attachment sent via SCP. Affects both attachments sent by apps and smartdashboard. | 16777216 |
common.ast.existingSecretEncryptionKeys | string | The name of an existing secret with encryption keys. See README.md for details and the required structure. NOTE: When it's set, the encryption keys configured in this file are ignored. | "" |
common.ast.sessionEncryptionMasterKey | string | Encryption master key for ast sessions. Must be randomly generated and unique for each Shift deployment. Must be set to an alphanumeric (UTF-8) string of length 64. Changing it invalidates all current ast sessions. | "" |
common.ast.databaseEncryptionMasterKey | string | Encryption master key for sensitive data stored in the database. Must be randomly generated and unique for each Shift deployment. Must be set to an alphanumeric (UTF-8) string of length 64. This value cannot be changed after installation. | "" |
common.ast.issuer | object | The issuer CA certificate and private key used to generate tenant signers. See README.md section Issuer CA for requirements on issuer CA generation. | {"certs":[],"existingSecretIssuerCa":"","key":""} |
common.ast.issuer.existingSecretIssuerCa | string | The name of an existing secret with issuer CA. See README.md for details and the required structure. NOTE: When it's set, the issuer CA configured in this file is ignored. | "" |
common.ast.issuer.certs | list | Valid certificate chain for the issuer public key. The list must consist of base64 encoded certificates ordered from the root to the issuer certificate. If the issuer certificate is self-signed, the list consists of this one entry only. The public key of the issuer certificate must match common.ast.issuer.key . While the certificates can be changed, the issuer public key must not. | [] |
common.ast.issuer.key | string | Issuer private and public key in PKCS#8 format as base64 string. Public key must match the issuer certificate in common.ast.issuer.certs . Keys must not be changed after installation. It is recommended to keep a backup of the keys for productive environments. | "" |
common.ast.offerMutuallyAuthenticatedKeyExchange | bool | Configure if mutually authenticated key exchange can be used by clients. Set to true to enable. Enable it when starting the migration of clients to mutually authenticated key exchange. | false |
common.ast.enforceMutuallyAuthenticatedKeyExchange | bool | Configure if mutually authenticated key exchange is enforced by ast services for certain use cases. Set to true to enforce it. Requires offerMutuallyAuthenticatedKeyExchange: true . Enable it only after completing the migration of clients to mutually authenticated key exchange. | false |
common.ast.redis | object | Redis credentials used by ast services. The redis password is also used by idp-scp-connector. | {"password":"password","user":"default"} |
common.tracing.enabled | bool | Globally enable distributed tracing for all components. When set to true , distributed tracing is enabled for all components. When set to false the parameters below (common.tracing.enable: ) are used to determine for which components distributed tracing is enabled. | false |
common.tracing.enable | object | These parameters are DEPRECATED and will be removed in a future release. Use value (common.tracing.enabled: ) instead. Enable / Disable tracing for service groups. | {"ast":false,"idp":false,"payment":false,"scp":false,"smartscreen":false} |
common.tracing.jaegerGrpcHost | string | DEPRECATED: Support for the distributed tracing protocol Jaeger model.proto (JaegerGrpc ) is deprecated and will be removed in a future Shift release. Switch to OpenTelemetry Protocol over gRPC (OTLP/gRPC ) by configuring the tracing endpoint in value common.tracing.otlpGrpcEndpoint . hostname of tracing sink used by idp, ast and smartscreen services. Must support model.proto protocol on port 14250 (gRPC). For example jaeger-collector | "http://jaeger-collector.tracing.svc.cluster.local:14250" |
common.tracing.zipkinUrl | string | hostname of tracing sink used by scp and payment services. Must support zipkin protocol on port 9411 (HTTP). For example jaeger-collector | "http://jaeger-collector.tracing.svc.cluster.local:9411/api/v2/spans" |
common.tracing.otlpGrpcEndpoint | string | Tracing endpoint supporting OpenTelemetry Protocol over gRPC (OTLP/gRPC). When configured, services supporting OTLP/gRPC will send traces to this endpoint instead of the 'jaegerGrpcHost' and 'zipkinUrl' endpoints. For example jaeger-collector: http://jaeger-collector.tracing.svc.cluster.local:4317 . | "" |
common.payment | object | Section for configuring common values for payment services. | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"payment","port":5432},"idp":{"authRole":"user","clientId":{"ui":"mpayUIPublic"},"merchantRole":"merchant"},"redis":{"password":"password"},"serviceProvider":{"tenant":"master"},"springBootAdmin":{"password":"password","username":"admin"},"stripe":{"apiKey":"apiKey","webhookSecret":"secret"}} |
common.payment.idp | object | Values to configure IDP integration of payment services. | {"authRole":"user","clientId":{"ui":"mpayUIPublic"},"merchantRole":"merchant"} |
common.payment.idp.authRole | string | Realm role required for administrative access to GUI. Must exist in master realm. | "user" |
common.payment.idp.merchantRole | string | Realm role required for merchant access to GUI. Must exist in master realm. | "merchant" |
common.payment.idp.clientId.ui | string | ID of public OIDC client used for authenticating admin users. Must exist in master realm. | "mpayUIPublic" |
common.payment.serviceProvider.tenant | string | Name of tenant for GUI admin user. | "master" |
common.payment.redis.password | string | Redis credentials used by payment services | "password" |
common.payment.stripe | object | Configuration for Stripe payment processing platform. | {"apiKey":"apiKey","webhookSecret":"secret"} |
routing.istio.enabled | bool | Enable/disable creation of virtual service resource for endpoints currently maintained in this chart. | true |
routing.istio.ingress | object | Definition of optional ingress resources for istio ingress gateways. For each API group (admin , external , public ), the ingress can be enabled separately. This requires the respective istio ingress gateway to be enabled (global.routing.istio.gateways ). Each ingress can be configured with ingress class , optional annotations . For TLS, the secret defined via global.routing.tlsSecret is used. | {"admin":{"annotations":{},"class":null,"enabled":false},"external":{"annotations":{},"class":null,"enabled":false},"public":{"annotations":{},"class":null,"enabled":false}} |
strimzi | object | Configuration of the Kafka custom resources. Requires Strimzi Kafka operator | {"additionalTopics":null,"enabled":true,"kafkaConnect":{"elasticsearch":{"enabled":false,"url":"http://elasticsearch-master:9200"},"enabled":false,"s3":{"accessKey":"","bucketName":"audit","enabled":false,"flushSize":"100000","region":"eu-central-1","rotateScheduleIntervalMs":"600000","secretAccessKey":"","topicsDir":"topics"}},"kafkaExporter":{"enabled":false},"sizing":{"custom":{"kafka":null,"zookeeper":null},"mode":"basic"},"storage":{"class":{"kafka":null,"zookeeper":null},"size":{"kafka":"20Gi","zookeeper":"5Gi"}}} |
strimzi.enabled | bool | Enable/disable deployment of Kafka custom resources. | true |
strimzi.storage | object | Storage configuration for Kafka and Zookeeper. | {"class":{"kafka":null,"zookeeper":null},"size":{"kafka":"20Gi","zookeeper":"5Gi"}} |
strimzi.storage.size | object | Size of persistent volumes for Kafka and Zookeeper. | {"kafka":"20Gi","zookeeper":"5Gi"} |
strimzi.storage.class | object | Storage class to use for the persistent volumes. The default ~ uses the default Kubernetes storage class. | {"kafka":null,"zookeeper":null} |
strimzi.sizing.mode | string | Configure sizing for the Kafka cluster. Supported values are 'basic', 'tuned', and 'custom'. When using mode 'custom', values .custom.kafka and .custom.zookeeper must be provided. See README.md for details. Note: Changing the sizing mode after deployment is highly discouraged, as it affects the topic replica count and partition assignment to nodes. It can even lead to data loss. | "basic" |
strimzi.additionalTopics | string | Configuration of additional KafkaTopic resources, cf. docs | nil |
strimzi.kafkaConnect | object | Configuration for kafka-connect which streams audit events to external Elasticsearch and S3 object storage. | {"elasticsearch":{"enabled":false,"url":"http://elasticsearch-master:9200"},"enabled":false,"s3":{"accessKey":"","bucketName":"audit","enabled":false,"flushSize":"100000","region":"eu-central-1","rotateScheduleIntervalMs":"600000","secretAccessKey":"","topicsDir":"topics"}} |
strimzi.kafkaConnect.elasticsearch | object | Elasticsearch sink connector config. Required for observing user audit events in smartdashboard user management. Set enabled: true to enable and provide the Elasticsearch url. | {"enabled":false,"url":"http://elasticsearch-master:9200"} |
strimzi.kafkaConnect.s3 | object | S3 sink connector config. Set enabled: true to enable and provide the S3 specific configurations. By default, the S3 sink connector commits files to S3 every 10 minutes or when 100,000 records are available. This behavior can be configured with values rotateScheduleIntervalMs and flushSize . The folder structure at S3 is topics/YYYY/MM/dd , where the prefix 'topic' is configurable using value topicsDir . | {"accessKey":"","bucketName":"audit","enabled":false,"flushSize":"100000","region":"eu-central-1","rotateScheduleIntervalMs":"600000","secretAccessKey":"","topicsDir":"topics"} |
strimzi.kafkaExporter | object | Configuration to enable the Kafka Exporter tool which provides additional Kafka metrics to prometheus. | {"enabled":false} |
idpCore | object | Configuration for idp-core | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"idp_core","port":5432},"enabled":true,"replicaCount":1} |
idpScpConnector | object | Configuration for idp-scp-connector | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"idp_scp_connector","port":5432},"enabled":true,"replicaCount":1} |
idpScheduler | object | Configuration for idp-scheduler | {"enabled":false,"replicaCount":1} |
smartscreenFrontend | object | Configuration for smartscreenfrontend | {"enabled":true,"replicaCount":1} |
smartscreenServices | object | Configuration for smartscreenservices | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"smartscreen_services","port":5432},"enabled":true,"replicaCount":1} |
smartscreenSearch | object | Configuration for smartscreensearch | {"database":{"auth":{"password":"password","username":"user"},"enabled":false,"host":"postgres-segment","name":"shift","port":5432,"schema":"tenant","sslMode":"PREFER","trustStore":"","trustStorePassword":"","trustStoreType":"JKS"},"enabled":true,"forDashboard":{"replicaCount":1},"forFrontend":{"replicaCount":1},"searchProviders":[],"trustStore":{"trustStore":"","trustStorePassword":"","trustStoreType":"JKS"}} |
smartscreenSearch.forFrontend | object | Specific configuration for the smartscreen-search-for-frontend deployment. valuesOverride must be added in this section. | {"replicaCount":1} |
smartscreenSearch.forDashboard | object | Specific configuration for the smartscreen-search-for-dashboard deployment. valuesOverride must be added in this section. | {"replicaCount":1} |
smartscreenSearch.database | object | Optional configuration of database used by Segment | {"auth":{"password":"password","username":"user"},"enabled":false,"host":"postgres-segment","name":"shift","port":5432,"schema":"tenant","sslMode":"PREFER","trustStore":"","trustStorePassword":"","trustStoreType":"JKS"} |
smartscreenSearch.database.sslMode | string | TLS mode. Supported values are PREFER : This mode tries to establish database connection using TLS. If that fails, it tries non-TLS connection. No server certificate validation is performed. VERIFY_CA : This mode requires TLS connection, i.e. there is no fallback to non-TLS. This mode performs server certificate validation against the provided trust store. VERIFY_FULL : This mode acts like VERIFY_CA with additional hostname verification of the server certificate. | "PREFER" |
smartscreenSearch.database.trustStoreType | string | Type of the truststore. Supported types are JKS and PKCS12 . | "JKS" |
smartscreenSearch.database.trustStore | string | Truststore of the selected type in BASE64 encoding. This setting is required for TLS modes VERIFY_CA and VERIFY_FULL . | "" |
smartscreenSearch.database.trustStorePassword | string | Password to open the truststore. This setting is required when a truststore is provided. | "" |
smartscreenSearch.searchProviders | list | An array defining additional search providers. See below for an example. | [] |
smartscreenSearch.trustStore | object | Optional configuration of a TLS truststore. This truststore is used for all TLS connections to any of the configured additional search providers. | {"trustStore":"","trustStorePassword":"","trustStoreType":"JKS"} |
smartscreenSearch.trustStore.trustStoreType | string | Type of the truststore. Supported types are JKS and PKCS12 . | "JKS" |
smartscreenSearch.trustStore.trustStore | string | Truststore of the selected type in BASE64 encoding. | "" |
smartscreenSearch.trustStore.trustStorePassword | string | Password to open the truststore. This setting is required when a truststore is provided. | "" |
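As a sketch of how such a truststore value can be produced (assuming `openssl` and GNU coreutils are available; all file names are illustrative, and the freshly generated self-signed certificate stands in for the real CA of a search provider):

```shell
# Illustrative only: generate a stand-in CA certificate, pack it into a
# PKCS12 truststore, and base64-encode it into the single-line form
# expected by smartscreenSearch.trustStore.trustStore.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Demo CA" \
  -keyout ca-key.pem -out ca.pem -days 1
openssl pkcs12 -export -nokeys -in ca.pem \
  -out truststore.p12 -passout pass:changeit
base64 -w0 truststore.p12 > truststore.b64
```

With this approach, set trustStoreType to PKCS12, trustStore to the contents of truststore.b64, and trustStorePassword to the password chosen above.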
smartscreenDashboard | object | Configuration for smartscreendashboard | {"enabled":true,"replicaCount":1} |
smartscreenConnector | object | Configuration for smartscreenconnector | {"enabled":false,"replicaCount":1} |
smartscreenMedia | object | Configuration for smartscreenmedia. Uses the same physical database and database settings as smartscreenServices. | {"enabled":true,"replicaCount":1} |
smartdashboardRoutes | object | Configuration for smartdashboard-routes | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"smartdashboard","port":5432,"schema":"routes"}} |
smartdashboardFrontend | object | Configuration for smartdashboard-frontend | {"enabled":true,"replicaCount":1} |
smartdashboardSmartscreen | object | Configuration for smartdashboard-smartscreen | {"config":{"authenticationBrowserFlow":"browser"},"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"smartdashboard","port":5432,"schema":"smartscreen"},"enabled":true,"replicaCount":1} |
smartdashboardSmartscreen.config.authenticationBrowserFlow | string | Default browser flow for OIDC clients created for MiniApps. | "browser" |
smartdashboardUserManagement | object | Configuration for smartdashboard-user-management | {"config":{"mailTemplateName":"smartdashboard-update-profile.ftl","mailType":"UPDATE_USER_PROFILE","requiredActions":"KOBIL_UPDATE_USER_PROFILE","updateMailSubject":"Welcome to your KOBIL Shift Portal"},"enabled":true,"replicaCount":1} |
smartdashboardUserManagement.config.mailTemplateName | string | Email template to use when sending emails. | "smartdashboard-update-profile.ftl" |
smartdashboardUserManagement.config.updateMailSubject | string | Email subject for invitation emails. | "Welcome to your KOBIL Shift Portal" |
smartdashboardUserManagement.config.mailType | string | Defines mail type to be sent when inviting a user. Should be set to UPDATE_USER_PROFILE when using ast-services and VERIFY when using SSMS. | "UPDATE_USER_PROFILE" |
smartdashboardUserManagement.config.requiredActions | string | Defines required actions that need to be performed by user during first login. Should be set to KOBIL_UPDATE_USER_PROFILE when using ast-services and UPDATE_PASSWORD when using SSMS. | "KOBIL_UPDATE_USER_PROFILE" |
smartdashboardAnalytics | object | Configuration for smartdashboard-analytics | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres-segment","name":"shift","port":5432,"schema":"tenant"},"enabled":false,"replicaCount":1} |
smartdashboardAnalytics.database | object | Database used by Segment | {"auth":{"password":"password","username":"user"},"host":"postgres-segment","name":"shift","port":5432,"schema":"tenant"} |
smartdashboardReports | object | Configuration for smartdashboard-reports | {"config":{"defaultEnv":"shift"},"enabled":false,"redis":{"password":"password","user":"default"},"replicaCount":1,"sentry":{"env":"environment=store","issueId":"sentry issue id","organization":"kobil-gmbh","project":"customer app","token":"sentry token","url":"https://sentry.io/api/0/","urlEvent":"events"}} |
smartdashboardReports.sentry | object | Sentry configuration | {"env":"environment=store","issueId":"sentry issue id","organization":"kobil-gmbh","project":"customer app","token":"sentry token","url":"https://sentry.io/api/0/","urlEvent":"events"} |
smartdashboardReports.sentry.urlEvent | string | Events endpoint for sentry. Change to eventsv2 for older on premise installations. | "events" |
smartdashboardReports.config.defaultEnv | string | Default sentry environment | "shift" |
smartdashboardReports.redis | object | Redis credentials used by smartdashboard-reports | {"password":"password","user":"default"} |
smartdashboardBroadcast | object | Configuration for smartdashboard-broadcast | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"smartdashboard","port":5432,"schema":"broadcast"},"enabled":true,"replicaCount":1} |
smartdashboardAppManagement | object | Configuration for smartdashboard-app-management | {"enabled":true,"replicaCount":1} |
smartdashboardAppBuilder | object | Configuration for smartdashboard-app-builder | {"config":{"appBuilderProxyBaseUrl":"https://app-builder-proxy.example.com","bundleId":"com.example.app.{tenant}","externalApiKey":"api-key","flavorEnv":"test","flavorName":"shift","segment":{"authToken":"segment auth token","baseUrl":"https://api.segmentapis.com","selectedWarehouseId":"warehouse id for environment","sourceMetadataId":"source meta data id"},"sentry":{"baseToken":"sentry token","baseUrl":"https://sentry.io/api/0/","envName":"shift","orgEvent":"kobil-gmbh","team":"development"},"tlsBundle":""},"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"smartdashboard","port":5432,"schema":"appbuilder"},"enabled":false,"replicaCount":1} |
smartdashboardAppBuilder.config.bundleId | string | App bundle ID | "com.example.app.{tenant}" |
smartdashboardAppBuilder.config.externalApiKey | string | API key for authentication at app builder proxy | "api-key" |
smartdashboardAppBuilder.config.appBuilderProxyBaseUrl | string | URL of app-builder proxy | "https://app-builder-proxy.example.com" |
smartdashboardAppBuilder.config.tlsBundle | string | The TLS bundle to be included in sdk-config.jwt. Only a single certificate is supported. Must be set to the root CA that issued the TLS certificate on the public AST endpoints. Expected format is BASE64 encoded PEM. | "" |
smartdashboardAppBuilder.config.segment | object | Segment configuration | {"authToken":"segment auth token","baseUrl":"https://api.segmentapis.com","selectedWarehouseId":"warehouse id for environment","sourceMetadataId":"source meta data id"} |
smartdashboardAppBuilder.config.sentry | object | Sentry configuration | {"baseToken":"sentry token","baseUrl":"https://sentry.io/api/0/","envName":"shift","orgEvent":"kobil-gmbh","team":"development"} |
smartdashboardWorkspaceManagement | object | Configuration for smartdashboard-workspace-management | {"config":{"emailTheme":"kobilv2","inviteMailSubject":"Welcome to your KOBIL Shift Portal","loginAccountAdminTheme":"kobilv2","loginTheme":"smart-dashboard","mailTemplateName":"smartdashboard-password-reset.ftl"},"enabled":true,"replicaCount":1} |
smartdashboardWorkspaceManagement.config | object | Email and theme settings | {"emailTheme":"kobilv2","inviteMailSubject":"Welcome to your KOBIL Shift Portal","loginAccountAdminTheme":"kobilv2","loginTheme":"smart-dashboard","mailTemplateName":"smartdashboard-password-reset.ftl"} |
smartdashboardKongConfigurationBackend | object | Configuration for smartdashboard-kong-configuration-backend | {"config":{"masterClientId":"client_id","masterClientSecret":"client_secret"},"enabled":true,"replicaCount":1} |
smartdashboardKongConfigurationBackend.config | object | client_id and client_secret of an OIDC client in IDP master realm. Must be manually created. | {"masterClientId":"client_id","masterClientSecret":"client_secret"} |
smartdashboardTile38 | object | Configuration for smartdashboard-tile38 | {"config":{"tile38Host":"hostname","tile38Password":"password","tile38Protocol":"https","tile38User":"user"},"enabled":false,"replicaCount":1} |
smartdashboardTile38.config | object | Configuration of the Tile38 server and basic auth credentials. The Tile38 server must be reachable via standard ports, i.e. 80 for http and 443 for https. | {"tile38Host":"hostname","tile38Password":"password","tile38Protocol":"https","tile38User":"user"} |
audience | object | Configuration for audience services | {"apiGateway":{"replicaCount":1},"custom":{"replicaCount":1},"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"audience","port":5432,"schema":"audience"},"enabled":false,"eventBased":{"database":{"schema":"event"},"enabled":false,"replicaCount":1},"getEndpoints":{"replicaCount":1}} |
audience.getEndpoints | object | Specific configuration for the audience-get-endpoints deployment. valuesOverride must be added in this section. | {"replicaCount":1} |
audience.apiGateway | object | Specific configuration for the audience-api-gateway deployment. valuesOverride must be added in this section. | {"replicaCount":1} |
audience.custom | object | Specific configuration for the audience-custom deployment. valuesOverride must be added in this section. | {"replicaCount":1} |
audience.eventBased | object | Specific configuration for the audience-event-based deployment. valuesOverride must be added in this section. | {"database":{"schema":"event"},"enabled":false,"replicaCount":1} |
scpAddressbook | object | Configuration for scp-addressbook | {"db":{"name":"scp_addressbook","password":"password","poolSize":5,"username":"user"},"enabled":true,"replicaCount":1} |
scpPresence | object | Configuration for scp-presence | {"db":{"name":"scp_presence","password":"password","poolSize":5,"username":"user"},"enabled":true,"replicaCount":1} |
scpMessenger | object | Configuration for scp-messenger | {"db":{"name":"scp_messenger","password":"password","poolSize":5,"username":"user"},"enabled":true,"replicaCount":1} |
scpMedia | object | Configuration for scp-media | {"db":{"name":"scp_media","password":"password","poolSize":5,"username":"user"},"enabled":true,"replicaCount":1} |
scpGateway | object | Configuration for scp-gateway | {"db":{"name":"scp_gateway","password":"password","poolSize":5,"username":"user"},"enabled":true,"replicaCount":1} |
scpNotifier | object | Configuration for scp-notifier | {"app":{"body":"Body","title":"Push notification title"},"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"scp_notifier","poolSize":10,"port":5432,"ssl":{"enabled":false,"trustStore":""}},"enabled":true,"replicaCount":1} |
scpNotifier.database.poolSize | int | Size of database connection pool. | 10 |
scpNotifier.database.ssl.enabled | bool | Set to true to enable SSL connection to the postgres database without certificate validation. | false |
scpNotifier.database.ssl.trustStore | string | When SSL it enabled, specify trust store to enable certificate chain validation. The truststore must be provided as single line string and contain a base64 encoded list of certificates in PEM format. | "" |
scpNotifier.app.title | string | Default push notification title used in case it is not specified in the push notification payload. | "Push notification title" |
scpNotifier.app.body | string | Default push notification body used in case it is not provided in the push notification payload. | "Body" |
astca | object | Configuration for astca | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"ast_ca","port":5432},"enabled":true,"replicaCount":1} |
astcpb | object | Configuration for astcpb | {"enabled":true,"replicaCount":1} |
astClientManagement | object | Configuration for astclientmanagement | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"ast_client_management","port":5432},"enabled":true,"replicaCount":1} |
astClientProperties | object | Configuration for ast-client-properties | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"ast_client_properties","port":5432},"enabled":true,"replicaCount":1} |
astLogin | object | Configuration for astlogin | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"ast_login","port":5432},"enabled":true,"replicaCount":1} |
astStream | object | Configuration for ast-stream | {"enabled":true,"replicaCount":1} |
astVersion | object | Configuration for ast-version | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"ast_version","port":5432},"enabled":true,"replicaCount":1} |
astLocalization | object | Configuration for ast-localization | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"ast_localization","port":5432},"enabled":true,"replicaCount":1} |
astTms | object | Configuration for ast-tms | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"ast_tms","port":5432},"enabled":true,"replicaCount":1} |
astWebhooks | object | Configuration for ast-webhooks | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"ast_webhooks","port":5432},"enabled":true,"replicaCount":1,"trustStore":{"enabled":false,"trustStore":"","trustStorePassword":"","trustStoreType":"JKS"}} |
astWebhooks.trustStore | object | Optional configuration of a truststore containing TLS certificates that are not part of the default truststore. This truststore is required when configured callback addresses use TLS certificates that are not part of the default truststore, e.g. self-issued certificates. | {"enabled":false,"trustStore":"","trustStorePassword":"","trustStoreType":"JKS"} |
astWebhooks.trustStore.enabled | bool | Enable/disable usage of a custom truststore. | false |
astWebhooks.trustStore.trustStoreType | string | Type of the truststore. Supported types are JKS and PKCS12 . | "JKS" |
astWebhooks.trustStore.trustStore | string | Truststore of the selected type in BASE64 encoding. | "" |
astWebhooks.trustStore.trustStorePassword | string | Password to open the truststore. This setting is required when a truststore is provided. | "" |
astKeyProtection | object | Configuration for ast-key-protection | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"ast_key_protection","port":5432},"enabled":true,"replicaCount":1} |
payGui | object | Configuration for pay-gui | {"enabled":true,"replicaCount":1} |
payScheduler | object | Configuration for pay-scheduler | {"enabled":true,"replicaCount":1} |
payMerchant | object | Configuration for pay-merchant | {"enabled":true,"replicaCount":1} |
payUi | object | Configuration for pay-ui | {"enabled":true,"replicaCount":1} |
payScp | object | Configuration for pay-scp | {"enabled":true,"replicaCount":1} |
payNotification | object | Configuration for pay-notification | {"enabled":true,"replicaCount":1} |
payResult | object | Configuration for pay-result | {"enabled":true,"replicaCount":1} |
payPayment | object | Configuration for pay-payment | {"enabled":true,"replicaCount":1} |
payProcessing | object | Configuration for pay-processing | {"enabled":true,"replicaCount":1} |
profileBackend | object | Configuration for profile-backend | {"app":{"awsAccessKeyId":"access-key","awsRegion":"eu-central-1","awsS3BucketName":"profile-backend","awsS3EndpointUrl":"","awsSecretAccessKey":"secret-access-key"},"enabled":true,"replicaCount":1} |
otpManagement | object | Configuration for otp-management | {"database":{"auth":{"password":"password","username":"user"},"host":"postgres","name":"otp_management","port":5432},"enabled":false,"otpVerification":{"atcVariance":10,"maxResyncWindowSize":100,"maxVerifyWindowSize":3,"maximumRetryCounter":10},"replicaCount":1,"tokenImport":{"certificate":"","privateKey":""}} |
otpManagement.otpVerification.maximumRetryCounter | int | How many retries are allowed. | 10 |
otpManagement.otpVerification.maxVerifyWindowSize | int | How many consecutive OTPs the server tries at most in order to find an entered OTP when using SecOVID. | 3 |
otpManagement.otpVerification.maxResyncWindowSize | int | How many values the server is trying out in order to find the 2 OTPs during a resync. | 100 |
otpManagement.otpVerification.atcVariance | int | How many consecutive OTPs the server tries at most in order to find an entered OTP when using SecOPTIC. | 10 |
otpManagement.tokenImport.certificate | string | The base64 encoded token import certificate. Token data (XML format) is encrypted with the corresponding public key. The public key must be provided as a certificate, as the certificate is included in the respective import files. This value must be provided. | "" |
otpManagement.tokenImport.privateKey | string | Token import private key in PKCS#8 format as a base64 string. The private key is required to decrypt token data during import. This value must be provided. | "" |
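As a hedged sketch of how the two token import values can be produced (file names are illustrative; `openssl req` writes private keys in PKCS#8 format, i.e. `BEGIN PRIVATE KEY`, by default):

```shell
# Illustrative: create a token-import certificate and PKCS#8 private key,
# then base64-encode both for otpManagement.tokenImport.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Token Import" \
  -keyout import-key.pem -out import-cert.pem -days 365
base64 -w0 import-cert.pem > certificate.b64
base64 -w0 import-key.pem > privateKey.b64
```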
Configuration
Additional search providers for smartscreen-search
Smartscreen-search supports additional optional search providers, which are accessible via the new endpoint `/tenants/{tenantId}/provider-search`. This endpoint calls search APIs that can be configured via the value `smartscreenSearch.searchProviders` (default `[]`). A sample configuration looks like this:
searchProviders:
  - name: Test
    uriTemplate: https://test.com/{tenantId}
    headers:
      Authorization: Bearer search-test
      Content-Type: application/json
    timeout: 100
    httpMethod: GET
    requestBody: '{ "request": "{query}" }'
The search configuration is templated. Templates are replaced with variables when called. The following variables are currently available:
- The search `query`
- The `language` used by the app (if available)
- The `tenantId` where the query was called
- The OIDC `token` used by the mobile app to authenticate to smartscreen components (if available)
By default, no custom search providers are configured. Calling `/provider-search` without search providers returns an empty result. It is possible to define multiple search providers, in which case they are all queried sequentially and `/provider-search` only returns when all searches have finished (or timed out). Take this into consideration when configuring production systems.
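To illustrate the substitution (not the actual implementation), this bash sketch shows how the sample provider's templates resolve for a query `alice` on tenant `demo`:

```shell
# Illustrative template substitution for the sample provider above.
query="alice"; tenantId="demo"
uriTemplate='https://test.com/{tenantId}'
requestBody='{ "request": "{query}" }'
# Replace each {placeholder} with the variable's value (bash substitution).
uri=${uriTemplate//\{tenantId\}/$tenantId}
body=${requestBody//\{query\}/$query}
echo "$uri"    # https://test.com/demo
echo "$body"   # { "request": "alice" }
```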
Credentials and other sensitive data from existing Kubernetes secrets
Shift supports reading certain credentials and other security-sensitive data from existing Kubernetes secrets. When this feature is used, the corresponding values in values.yaml need not be provided, because they are ignored.
Shift currently supports four existing secrets: database credentials, admin credentials, encryption keys, and issuer CA. Usage of these secrets can be configured independently of each other:
common:
  # -- The name of an existing secret with datastore credentials.
  # See README.md for details and the required structure.
  # NOTE: When it's set, the datastore credentials configured in this file
  # are ignored.
  existingSecretDatastoreCredentials: "shift-datastore-credentials"
  # -- The name of an existing secret with admin credentials.
  # See README.md for details and the required structure.
  # NOTE: When it's set, the admin credentials configured in this file
  # are ignored.
  existingSecretAdminCredentials: "shift-admin-passwords"
ast:
  # -- The name of an existing secret with encryption keys.
  # See README.md for details and the required structure.
  # NOTE: When it's set, the encryption keys configured in this file
  # are ignored.
  existingSecretEncryptionKeys: "shift-encryption-keys"
  # -- The issuer CA certificate and private key used to generate tenant signers.
  # See README.md section [Issuer CA](#issuer-ca) for requirements on issuer CA generation.
  issuer:
    # -- The name of an existing secret with issuer CA.
    # See README.md for details and the required structure.
    # NOTE: When it's set, the issuer CA configured in this file
    # is ignored.
    existingSecretIssuerCa: "shift-issuer-ca"
These secrets must be created in the same namespace where shift is deployed.
Required structure for datastore secrets
Create a secret using the structure below and add:
- Database credentials for ast services, idp-core, idp-scp-connector, and scp-notifier.
- The Redis password used by ast services, pay services, and idp-scp-connector.
Credentials for all supported and enabled services must be added to the secret. Unused (disabled) services can be omitted.
apiVersion: v1
kind: Secret
metadata:
  name: shift-datastore-credentials
type: Generic
stringData:
  AST_SERVICES_REDIS_PASSWORD: "change-me"
  PAY_SERVICES_REDIS_PASSWORD: "change-me"
  IDP_CORE_DB_USERNAME: "change-me"
  IDP_CORE_DB_PASSWORD: "change-me"
  IDP_SCP_CONNECTOR_DB_USERNAME: "change-me"
  IDP_SCP_CONNECTOR_DB_PASSWORD: "change-me"
  AST_CA_DB_USERNAME: "change-me"
  AST_CA_DB_PASSWORD: "change-me"
  AST_CLIENT_MANAGEMENT_DB_USERNAME: "change-me"
  AST_CLIENT_MANAGEMENT_DB_PASSWORD: "change-me"
  AST_CLIENT_PROPERTIES_DB_USERNAME: "change-me"
  AST_CLIENT_PROPERTIES_DB_PASSWORD: "change-me"
  AST_LOGIN_DB_USERNAME: "change-me"
  AST_LOGIN_DB_PASSWORD: "change-me"
  AST_VERSION_DB_USERNAME: "change-me"
  AST_VERSION_DB_PASSWORD: "change-me"
  AST_LOCALIZATION_DB_USERNAME: "change-me"
  AST_LOCALIZATION_DB_PASSWORD: "change-me"
  AST_TMS_DB_USERNAME: "change-me"
  AST_TMS_DB_PASSWORD: "change-me"
  AST_KEY_PROTECTION_DB_USERNAME: "change-me"
  AST_KEY_PROTECTION_DB_PASSWORD: "change-me"
  AST_WEBHOOKS_DB_USERNAME: "change-me"
  AST_WEBHOOKS_DB_PASSWORD: "change-me"
  SCP_NOTIFIER_DB_USERNAME: "change-me"
  SCP_NOTIFIER_DB_PASSWORD: "change-me"
Required structure for admin credentials
Create a secret using the structure below and add admin credentials for idp-core. Unused (disabled) services can be omitted.
apiVersion: v1
kind: Secret
metadata:
  name: shift-admin-passwords
type: Generic
stringData:
  IDP_CORE_ADMIN_USERNAME: "admin"
  IDP_CORE_ADMIN_PASSWORD: "password"
Required structure for encryption keys
Create a secret using the structure below and add the database encryption master key and the session encryption master key. Both keys must be alphanumeric (UTF-8) strings of length 64.
apiVersion: v1
kind: Secret
metadata:
  name: shift-encryption-keys
type: Generic
stringData:
  DATABASE_ENCRYPTION_MASTER_KEY: ""
  SESSION_ENCRYPTION_MASTER_KEY: ""
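A 64-character alphanumeric key of the required form can be generated, for example, like this (file name illustrative):

```shell
# Generate a 64-character alphanumeric master key from random data.
openssl rand -base64 200 | tr -dc 'A-Za-z0-9' | head -c 64 > master.key
```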
Required structure for issuer CA
Create a secret using the structure below and add the issuer CA certificate and key. Only a single self-signed certificate is supported. The certificate must be a base64 encoded self-signed certificate. The key must be the issuer private and public key in PKCS#8 format as a base64 string.
apiVersion: v1
kind: Secret
metadata:
  name: shift-issuer-ca
type: Generic
data:
  ISSUER_CA_CERTIFICATE: ""
  ISSUER_CA_KEY: ""
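As a hedged sketch (file names illustrative; check the [Issuer CA](#issuer-ca) section for the exact requirements), a self-signed issuer CA with a PKCS#8 key can be generated and base64-encoded as follows:

```shell
# Illustrative: self-signed issuer CA; openssl req writes the private key
# in PKCS#8 format (BEGIN PRIVATE KEY) by default.
openssl req -x509 -newkey rsa:4096 -nodes -subj "/CN=Shift Issuer CA" \
  -keyout issuer-key.pem -out issuer-cert.pem -days 3650
base64 -w0 issuer-cert.pem > issuer-ca-certificate.b64
base64 -w0 issuer-key.pem > issuer-ca-key.b64
```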
Internal Features
ServiceGroup for additional helm charts
The Shift chart has experimental support for adding arbitrary add-on helm charts as a ServiceGroup to be managed by shift operator. This feature is configured using the following values:
# -- Section for configuring `add-on` helm charts to be managed by shift operator.
addons:
  # -- Name of the `add-on` helm chart
  chartname:
    # -- enable/disable deployment
    enabled: true
    # -- Chart version
    version: 0.1.0
    # -- ServiceGroup readiness check by shift operator. Set to `true` to enable.
    # A servicegroup is considered ready if all pods created by the ServiceGroup
    # are running. Shift operator uses label `app.kubernetes.io/instance` for
    # selecting pods to check. The add-on helm chart must set label
    # `app.kubernetes.io/instance: {{ .Release.Name }}` on all pods to
    # ensure readiness check works properly.
    readycheck: false
ServiceGroup's sub chart aliases
Shift supports deploying the same helm chart multiple times using aliases. This requires shift operator version 0.9.0 or higher.
The following example demonstrates it in the `spec` context of a service group custom resource. The helm chart with name `service-chart` is deployed twice using the aliases `service-one` and `service-two`.
spec:
  service-one:
    chart: service-chart
    version: 1.0.0
    fullnameOverride: {{ include "ks.siblingFullname" (merge (dict "sibling" "service-one") .) }}
    serviceTwoUrl: http://{{ include "ks.siblingFullname" (merge (dict "sibling" "service-two") .) }}
  service-two:
    chart: service-chart
    version: 1.0.0
    fullnameOverride: {{ include "ks.siblingFullname" (merge (dict "sibling" "service-two") .) }}
Shift operator uses the alias to generate the helm release names.
Note that when overriding the full name (`fullnameOverride`) in a custom resource, the alias must be used in the sibling parameter. The same holds when configuring service names of other services, see the example value `serviceTwoUrl`.
Overriding arbitrary values of service helm charts
Arbitrary default values of the service helm charts can be overridden using the object `valuesOverride:` in a custom `shift-values.yaml`. This feature can be used to change defaults for values that are not exposed by the shift chart.
For example, to increase the memory requests and limits of idp-core to 4Gi use:
idpCore:
  valuesOverride:
    mainContainer:
      resources:
        requests:
          memory: "4Gi"
        limits:
          memory: "4Gi"
Values containing helm templates can be overridden using the object `valuesOverrideTpl:`. Values provided in `valuesOverrideTpl:` have a higher priority than values provided in `valuesOverride:`.
For example, to set the value `baseUrl:` of service `service` to the svc name of idp-core use:
service:
  valuesOverrideTpl:
    baseUrl: 'http://{{ include "ks.siblingFullname" (merge (dict "sibling" "idp-core") .) }}:80/auth'