
Migration install

Migration Install for the Kobil Security Server service - moving from Single-Tenant Security Server R2.* to Multi-Tenant Security Server R3.*

The goal of a migration install is to switch from a single-tenant Security Server installation to a multi-tenant Security Server installation while reusing the existing Security Server-DB database. This ensures that user and device registrations remain valid.

Default processing:
By default, any single-tenant DB data of a Security Server (R2.*) is moved into the "MASTER" tenant context of the multi-tenant Security Server (R3.*).

Advanced processing:
An "installer" Security Server R3.* installation, applied first on top of the existing Security Server R2.*, allows a manual script run (db-migration script) to move the single-tenant Security Server data into a specific MT Security Server tenant context with a specific tenant name (sub-tenant, not MASTER). Once the "db-migration" job has completed, the new k8s Security Server deployment (which is in fact a fresh new install on Kubernetes) is able to open and read the migrated Security Server-DB data.
Find more info here: Security Server-ST Database migration into Subtenant

Security Server migration matrix

  • Security Server up to release 2.11 could be migrated to Security Server MT 3.4.1 (deprecated as of 20.01.2022 - the MT 3.6.1 db-migration scripting now works instead)
  • Security Server release 2.12 and higher can be migrated to Security Server MT 3.5.* and 3.6.*
  • Security Server release 2.12 or higher is not compatible at DB level with Security Server MT 3.4.1
From                        To
Security Server 2.10.0      3.6.latest
Security Server 2.1.11      3.6.latest
Security Server 2.1.14      3.6.latest
Security Server 2.9.0       3.6.latest
Security Server 2.9.1       3.6.latest
Security Server 2.11.0      Not Supported
Security Server 2.12.0      3.6.latest
Security Server 2.12.1      3.6.latest
Security Server 2.12.2      3.6.latest

Very important note:

The migration install reuses the Security Server data stored in the existing Security Server-DB service. It is required that the Security Server service startup on Kubernetes does not trigger a test installation. This is configured by the parameter ssms:certificate:testInstallation: false in the meta-configuration file "values.yaml" (see the snippet below). The default setting is "testInstallation: true", which would try to re-initialize the specified Security Server-DB database as a test-install Security Server-DB.
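For reference, a minimal "values.yaml" fragment with this setting. The nesting is inferred from the parameter path ssms:certificate:testInstallation; the surrounding keys in your actual chart values will differ:

  ssms:
    certificate:
      # do not re-initialize the existing Security Server-DB as a test installation
      testInstallation: false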

Prepare "license context" for Security Server-DB database access before Security Server deployment and Security Server POD startup

  • Retrieve the "config.xml" file from the existing local Security Server installation (required initially). The installer Security Server "config.xml" contains the information required to access and read the existing Security Server-DB data.
  • Ensure that "helm install" is run with a meta-configuration file "values.yaml" containing ssms:certificate:testInstallation: false - this is REQUIRED.
  • Retrieve the DB-service endpoint and credentials (required db configuration parameters for the mpower "values.yaml" in the Security Server section).
  • Retrieve the existing Security Server tuning configuration files (communication.xml, server.xml, ...) and the custom Security Server truststore (optional ConfigMap objects created prior to the initial deployment; see the sketch after this list).
    Find more info on Security Server custom configuration here: Configuration of Kubernetes based Security Server 3.4.x and higher
  • Create the ssms-local-config secret in the targeted namespace prior to the initial deployment (before running "helm install" for the first time), using the installer Security Server "config.xml" data.
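If tuning files are carried over, they can be provided as a ConfigMap before the deployment. A minimal sketch - the ConfigMap name "ssms-local-tuning" and the <namespace> placeholder are hypothetical, so use the object names your chart configuration expects:

  # create a ConfigMap from the retrieved tuning files in the target namespace
  kubectl create configmap ssms-local-tuning \
    --from-file=communication.xml \
    --from-file=server.xml \
    -n <namespace>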

Notes:

The file name must be exactly "config.xml" - no other name is allowed, as the file name becomes the key of the key-value pair in the data section of the secret.
Use the original Security Server "config.xml" file and make sure to address the right namespace when creating the secret named "ssms-local-config".

Run/create the secret:

  kubectl create secret generic ssms-local-config --from-file=config.xml  

Once the secret is created, start the deployment (helm install). For more details about the "ssms-local-config" secret, see the Appendix below.

run "helm install" into namespace covering the prepared "ssms-local-config" secret

  • Ensure the repository pull-secret is created and access to the kobil helm chart repository is enabled before running "helm install".
  • Double-check that the "ssms-local-config" secret is pre-allocated in the right namespace.
  • Double-check that the meta-configuration file "values.yaml" contains ssms:certificate:testInstallation: false.
  • Run helm install mpower -f ./values.yaml kobil/mpower in the target namespace, with the Security Server section of "values.yaml" pointing to the targeted Security Server-DB service (host / credentials / options).
  • Watch the "ssms-master-config*" Pod (job) log output to verify that the Security Server-DB can be accessed.
  • Watch the "ssms-mgt/svc" Pod log output to verify that the Security Server-DB can be accessed and read, matching the Security Server modules to the Security Server table versions. A command sketch for these steps follows this list.

Appendix (import license):

Additional info for the Security Server "ssms-local-config" secret:

When you print the created secret by running "kubectl get secret ssms-local-config -o yaml", you should see an object structure like this:

...
apiVersion: v1
data:
  config.xml: evR0fAxPPOd0/+MpxP5xMF7sib5Xx7EADeVg47X4F9EZu1WayS2RehMqAe5YinqDEvffFyQ2V3wHalw7Z1gCezRsQUErmSdfyXYncM3pEUF/5O9Xwy...HJfwh9bquj7GxxZz9x4q7g==
kind: Secret
metadata:
  creationTimestamp: "2024-12-16T10:56:11Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:config.xml: {}
      f:type: {}
    manager: Kubernetes Java Client
    operation: Update

The data section must contain the "config.xml" key-value pair holding the base64-encoded data.
Once the secret is created, double-check the "config.xml" value/payload.

Verify the content by:

   kubectl get secret ssms-local-config -o jsonpath='{.data.config\.xml}' | base64 --decode

The decoded output should match the original "config.xml", confirming the created secret is fine. (Note the backslash in the jsonpath expression: keys containing dots, such as "config.xml", must be escaped.)
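As an additional check, the decoded payload can be compared directly against the original file. A one-line sketch, assuming the original "config.xml" is in the current directory and a bash shell (process substitution is bash-specific):

   diff <(kubectl get secret ssms-local-config -o jsonpath='{.data.config\.xml}' | base64 --decode) config.xml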

Special notes:

Considerations for using a dependency deployment (using an Ingress-Controller DaemonSet or platform-specific routing for the Kobil services)