diff --git a/AAE/README.md b/AAE/README.md deleted file mode 100644 index 6b320a2f..00000000 --- a/AAE/README.md +++ /dev/null @@ -1,486 +0,0 @@ -# IBM-DBA-AAE-PROD - -IBM Business Automation Application Engine (App Engine) - -## Introduction - -This IBM Business Automation Application Engine Helm chart deploys the App Engine, a user interface service tier to run applications that are built by IBM Business Automation Application Designer (App Designer). This Helm chart is a platform-level Helm chart that deploys all required components. - -## Chart Details - -This chart deploys several services and components. - -In the standard configuration, it includes these components: - -* IBM Resource Registry component -* IBM Business Automation Application Engine (App Engine) component - -To support those components, a standard installation generates: - - * 3 ConfigMaps that manage the configuration of App Engine - * 1 deployment running App Engine - * 1 StatefulSet running Resource Registry - * 4 or more jobs for Resource Registry, depending on the customized configuration - * 1 service account with related role and role binding - * 3 secrets to get access during chart installation - * 3 services and optionally an Ingress or Route (OpenShift) to route the traffic to the App Engine - -## Prerequisites - - * [Red Hat OpenShift 3.11](https://docs.openshift.com/container-platform/3.11/welcome/index.html) or later - * [Helm and Tiller 2.9.1](/~https://github.com/helm/helm/releases) or later if you are [using helm charts](#using-helm-charts) to deploy your container images - * [Cert Manager 0.8.0](https://cert-manager.readthedocs.io/en/latest/getting-started/install/openshift.html) or later if you want to use Cert Manager to create the Transport Layer Security (TLS) key and certificate secrets. Otherwise, you can use Secure Sockets Layer (SSL) tools to create the TLS key and certificate secrets. - * [IBM DB2 11.1.2.2](https://www.ibm.com/products/db2-database) or later - * [IBM Cloud Pack For Automation - User Management Service (UMS)](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/con_ums.html) - * Persistent volume support - -### Preparing the environment for the application engine - -1. Log in to OC (the OpenShift command line interface (CLI)) by running the following command. You are prompted for the password. - - ``` - oc login -u - ``` - -2. Create a project (namespace) for the App engine by running the following command: - - ``` - oc new-project - ``` - -3. Save and exit. - -4. To deploy the service account, role, and role binding successfully, assign the administrator role to the user for this namespace by running the following command: - - ``` - oc project - oc adm policy add-role-to-user admin - ``` - -5. If you want to operate persistent volumes (PVs), you must have the storage-admin cluster role, because PVs are a cluster resource in OpenShift. Add the role by running the following command: - - ``` - oc adm policy add-cluster-role-to-user storage-admin - ``` - -### Uploading the images - -Upload the IBM Business Automation Application Engine images to the Docker registry of the Kubernetes cluster. See [Download a product package from PPA and load the images](https://github.ibm.com/dba/cert-kubernetes/blob/master/README.md#download-ppa-and-load-images). - -### Generating the database script and YAML files - -Use the [App Engine platform Helm installation helper script](configuration) to generate the database script and YAML files for your environment. 
Follow the instructions in the [readme](configuration/README.md) for the following requirements: - -* Setting up the database for App Engine -* Protecting sensitive configuration data -* Setting up the TLS key and certificate secrets -* Setting the service type - -If you don't want to use the helper script, you can create your own secrets and service type by following the instructions in the [Knowledge Center](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/welcome/kc_welcome_dba_distrib.html). - - -#### Notes -* Image pull secret: The script does not generate the image pull secret. You can follow the instructions in [Configuring the secret for pulling Docker images](#Configuring-the-secret-for-pulling-docker-image) to create your own. -* Storage: The script does not generate a YAML file for persistent volumes. You can follow the instructions in [Implementing storage](#implementing-storage) to create your own perstent volumes. -* UMS-related configuration and TLS certificates: You must do this configuration if you have an existing UMS that is in a different namespace from the App Engine Helm chart. - -### Preparing UMS-related configuration and TLS certificates (optional) - -If you have an existing UMS that is in a different namespace from the App Engine Helm chart, follow these steps. - -If the UMS certificate is not signed by the same root CA, you must add the root CA as trusted instead of the UMS certificate. You should first get the root CA which is used to sign the UMS, and then save it to a certificate named like `ums-cert.crt`, then create the secret by running the following command: - - - - kubectl create secret generic ca-tls-secret --from-file=tls.crt=./ums-cert.crt - - -You will get a secret named ca-tls-secret. Enter this secret value in every TLS section for Resource Registry and App Engine that is listed in [Configuration](#configuration). If you use [App Engine platform Helm installation helper script](configuration) to setup App Engine, you can enter this secret value in [`ums.tlsSecretName`](configuration) The components will trust this certificate and communicate with UMS successfully. - - ``` - tls: - tlsSecretName: - tlsTrustList: - - ca-tls-secret - ``` - -### Configuring the secret for pulling Docker images - -If you're pulling Docker images from a private registry, you must provide a secret containing credentials for it. For instructions, see the [Kubernetes information about private registries](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line). - -This command can be used for one repository only. If your Docker images come from different repositories, you can create multiple image pull secrets and add the names in global.imagePullSecrets. Or, you can create secrets by using the custom Docker configuration file. - -The following sample shows the Docker auth file `config.json`: - -``` -{ - "auths": { - "url1.xx.xx.xx.xx": { - "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" - }, - "url2.xx.xx.xx.xx": { - "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" - }, - "url3.xx.xx.xx.xx": { - "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" - }, - "url4.xx.xx.xx.xx": { - "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" - } - } -} -``` - -The key under auths is the link to the Docker repository, and the value inside that repository name is the authentication string that is used for that repository. 
You can create the auth string with base64 by running the following command: - -``` - # echo -n : | base64 -``` - -You can replace the auth string by running the previous command with your config.json file. Then, create the image pull secret by running the following command: - -``` - kubectl create secret generic image-pull-secret --from-file=.dockerconfigjson= --type=kubernetes.io/dockerconfigjson -``` - -### Configuring Redis for App Engine (optional) - -You can configure the App Engine with Remote Dictionary Server (Redis) to provide more reliable service. - -1. Update the Redis host, port, and Time To Live (TTL) settings in `values.yaml` - - ```yaml - redis: - host: - port: - ttl: 1800 - ``` - -2. Set `.Values.appengine.session.useExternalStore` to `true`. -3. If Redis is protected by a password, enter the password in the `REDIS_PASSWORD` field in the `ae-secret-credential` secret that you created in [Protecting sensitive configuration data](#Protecting-sensitive-configuration-data). - -4. If you want to protect Redis communication with TLS, you have the following options: - - * Sign the Redis certificate with a well-known CA. - * Sign the Redis certificate with the same root CA used by this installation. - * Use a zero depth self-signed certificate or sign the certificate with another root CA. Then save the certificate or root CA in the secret and enter the secret name in `.Values.appengine.tls.tlsTrustList`. - -## Red Hat OpenShift SecurityContextConstraints Requirements - -The predefined SecurityContextConstraints name [`restricted`](https://ibm.biz/cpkspec-scc) has been verified for this chart. If your target namespace is bound to this SecurityContextConstraints resource, you can proceed to install the chart. - -This chart also defines a custom SecurityContextConstraints definition that can be used to finely control the permissions and capabilities needed to deploy this chart. - -- From the user interface, you can copy and paste the following snippets to enable the custom SecurityContextConstraints. - - Custom SecurityContextConstraints definition: - - ```yaml - apiVersion: security.openshift.io/v1 - kind: SecurityContextConstraints - metadata: - annotations: - kubernetes.io/description: "This policy is the most restrictive, - requiring pods to run with a non-root UID, and preventing pods from accessing the host." - cloudpak.ibm.com/version: "1.0.0" - name: ibm-dba-aae-scc - allowHostDirVolumePlugin: false - allowHostIPC: false - allowHostNetwork: false - allowHostPID: false - allowHostPorts: false - allowPrivilegedContainer: false - allowPrivilegeEscalation: false - allowedCapabilities: [] - allowedFlexVolumes: [] - allowedUnsafeSysctls: [] - defaultAddCapabilities: [] - defaultPrivilegeEscalation: false - forbiddenSysctls: - - "*" - fsGroup: - type: MustRunAs - ranges: - - max: 65535 - min: 1 - readOnlyRootFilesystem: false - requiredDropCapabilities: - - ALL - runAsUser: - type: MustRunAsNonRoot - seccompProfiles: - - docker/default - seLinuxContext: - type: RunAsAny - supplementalGroups: - type: MustRunAs - ranges: - - max: 65535 - min: 1 - volumes: - - configMap - - downwardAPI - - emptyDir - - persistentVolumeClaim - - projected - - secret - priority: 0 - ``` - -## Resources Required - -Follow the OpenShift instructions in [Planning Your Installation](https://docs.openshift.com/container-platform/3.11/install/index.html#single-master-single-box). 
Then check the required resources in [System and Environment Requirements](https://docs.openshift.com/container-platform/3.11/install/prerequisites.html) and set up your environment. - -| Component name | Container | CPU | Memory | -| --- | --- | --- | --- | -| App Engine | App Engine container | 1 | 512Mi | -| App Engine | Init Containers | 200m | 128Mi | -| Resource Registry | Resource Registry container | 200m | 256Mi | -| Resource Registry | Init containers | 200m | 256Mi | - -## Installing the Chart - -You can deploy your container images with the following methods: - -- [Using Helm charts](helm-charts/README.md) -- [Using Kubernetes YAML](k8s-yaml/README.md) - - -## Configuration - -The following table lists the configurable parameters of the chart and their default values. All properties are required, unless they have a default value or are explicitly optional. Although the chart might seem to install correctly when some parameters are omitted, this kind of configuration is not supported. - -| Parameter | Description | Default | -| -------------------------------------- | ----------------------------------------------------- | ---------------------------------------------------- | -| `global.existingClaimName` | Existing persistent volume claim name for the JDBC and ODBC library | | -| `global.nonProductionMode` | Production mode. This value must be false. | `false` | -| `global.imagePullSecrets` | Existing Docker image secret | | -| `global.caSecretName` | Existing CA secret | | -| `global.dnsBaseName` | Kubernetes Domain Name System (DNS) base name | `svc.cluster.local` | -| `global.contributorToolkitsPVC` | Persistent volume for contributor toolkit storage | | -| `global.image.keytoolInitcontainer` | Image name for TLS init container | `dba-keytool-initcontainer:19.0.2` | -| `global.ums.serviceType` | UMS service type: `NodePort`, `ClusterIP`, or `Ingress` | | -| `global.ums.hostname` | UMS external host name | | -| `global.ums.port` | UMS port (only effective when using NodePort service) | | -| `global.ums.adminSecretName` | Existing UMS administrative secret for sensitive configuration data | | -| `global.resourceRegistry.hostname` | Resource Registry external host name | | -| `global.resourceRegistry.port` | Resource Registry port for using NodePort Service | | -| `global.resourceRegistry.adminSecretName` | Existing Resource Registry administrative secret for sensitive configuration data | | -| `global.appEngine.serviceType` | App Engine service type: `NodePort`, `ClusterIP`, or `Ingress` | | -| `global.appEngine.hostname` | App Engine external host name | | -| `global.appEngine.port` | App Engine port (only effective when using NodePort service) | | -| `appEngine.install` | Switch for installing App Engine | `true` | -| `appEngine.replicaCount` | Number of deployment replicas | `1` | -| `appEngine.probes.initialDelaySeconds` | Number of seconds after the container has started before liveness or readiness probes are initiated | `5` | -| `appEngine.probes.periodSeconds` | How often (in seconds) to perform the probe. The default is 10 seconds. Minimum value is 1. | `10` | -| `appEngine.probes.timeoutSeconds` | Number of seconds after which the probe times out. The default is 1 second. Minimum value is 1. | `5` | -| `appEngine.probes.successThreshold` | Minimum consecutive successes for the probe to be considered successful after failing. Minimum value is 1. 
| `5` | -| `appEngine.probes.failureThreshold` | When a pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Minimum value is 1. | `3` | -| `appEngine.images.appEngine` | Image name for App Engine container | `solution-server:19.0.2` | -| `appEngine.images.tlsInitContainer` | Image name for TLS init container | `dba-keytool-initcontainer:19.0.2` | -| `appEngine.images.dbJob` | Image name for App Engine database job container | `solution-server-helmjob-db:19.0.2` | -| `appEngine.images.oidcJob` | Image name for OpenID Connect (OIDC) registration job container | `dba-umsregistration-initjob:19.0.2` | -| `appEngine.images.dbcompatibilityInitContainer` | Image name for database compatibility init container | `dba-dbcompatibility-initcontainer:19.0.2` | -| `appEngine.images.pullPolicy` | Pull policy for all containers | `IfNotPresent` | -| `appEngine.tls.tlsSecretName` | Existing TLS secret containing `tls.key` and `tls.crt`| | -| `appEngine.tls.tlsTrustList` | Existing TLS trust secret | `[]` | -| `appEngine.database.name` | App Engine database name | | -| `appEngine.database.host` | App Engine database host | | -| `appEngine.database.port` | App Engine database port | | -| `appEngine.database.type` | App Engine database type: `db2` | | -| `appEngine.database.currentSchema` | App Engine database Schema | | -| `appEngine.database.initialPoolSize` | Initial pool size of the App Engine database | `1` | -| `appEngine.database.maxPoolSize` | Maximum pool size of the App Engine database | `10` | -| `appEngine.database.uvThreadPoolSize` | UV thread pool size of the App Engine database | `4` | -| `appEngine.database.maxLRUCacheSize` | Maximum Least Recently Used (LRU) cache size of the App Engine database | `1000` | -| `appEngine.database.maxLRUCacheAge` | Maximum LRU cache age of the App Engine database | `600000` | -| `appEngine.useCustomJDBCDrivers` | Toggle for custom JDBC drivers | `false` | -| `appEngine.adminSecretName` | Existing App Engine administrative secret for sensitive configuration data | | -| `appEngine.logLevel.node` | Log level for output from the App Engine server | `trace` | -| `appEngine.logLevel.browser` | Log level for output from the web browser | `2` | -| `appEngine.contentSecurityPolicy.enable`| Enables the content security policy for the App Engine | `false` | -| `appEngine.contentSecurityPolicy.whitelist`| Configuration of the App Engine content security policy whitelist | `""` | -| `appEngine.session.duration` | Duration of the session | `1800000` | -| `appEngine.session.resave` | Enables session resave | `false` | -| `appEngine.session.rolling` | Send cookie every time | `true` | -| `appEngine.session.saveUninitialized` | Uninitialized sessions will be saved if checked | `false` | -| `appEngine.session.useExternalStore` | Use an external store for storing sessions | `false` | -| `appEngine.redis.host` | Host name of the Redis database that is used by the App Engine | | -| `appEngine.redis.port` | Port number of the Redis database that is used by the App Engine | | -| `appEngine.redis.ttl` | Time to live for the Redis database connection that is used by the App Engine | | -| `appEngine.maxAge.staticAsset` | Maximum age of a static asset | `2592000` | -| `appEngine.maxAge.csrfCookie` | Maximum age of a Cross-Site Request Forgery (CSRF) cookie | `3600000` | -| `appEngine.maxAge.authCookie` | Maximum age of an authentication cookie | `900000` | -| `appEngine.env.serverEnvType` | App Engine server environment type | `development` | -| 
`appEngine.env.maxSizeLRUCacheRR` | Maximum size of the LRU cache for the Resource Registry | `1000` | -| `appEngine.resources.ae.limits.cpu` | Maximum amount of CPU that is required for the App Engine container | `1` | -| `appEngine.resources.ae.limits.memory` | Maximum amount of memory that is required for the App Engine container | `1024Mi` | -| `appEngine.resources.ae.requests.cpu` | Minimum amount of CPU that is required for the App Engine container | `500m` | -| `appEngine.resources.ae.requests.memory` | Minimum amount of memory that is required for the App Engine container | `512Mi` | -| `appEngine.resources.initContainer.limits.cpu` | Maximum amount of CPU that is required for the App Engine init container | `500m` | -| `appEngine.resources.initContainer.limits.memory` | Maximum amount of memory that is required for the App Engine init container | `256Mi` | -| `appEngine.resources.initContainer.requests.cpu` | Minimum amount of CPU that is required for the App Engine init container | `200m` | -| `appEngine.resources.initContainer.requests.memory` | Minimum amount of memory that is required for App Engine init container | `128Mi` | -| `appEngine.autoscaling.enabled` | Enable the Horizontal Pod Autoscaler for App Engine init container | `false` | -| `appEngine.autoscaling.minReplicas` | Minimum limit for the number of pods for the App Engine | `2` | -| `appEngine.autoscaling.maxReplicas` | Maximum limit for the number of pods for the App Engine | `5` | -| `appEngine.autoscaling.targetAverageUtilization` | Target average CPU utilization over all the pods for the App Engine init container | `80` | -| `resourceRegistry.install` | Switch for installing Resource Registry | `true` | -| `resourceRegistry.images.resourceRegistry` | Image name for Resource Registry container | `dba-etcd:19.0.2` | -| `resourceRegistry.images.pullPolicy` | Pull policy for all containers | `IfNotPresent` | -| `resourceRegistry.tls.tlsSecretName` | Existing TLS secret containing `tls.key` and `tls.crt`| | -| `resourceRegistry.replicaCount` | Number of etcd nodes in cluster | `3` | -| `resourceRegistry.resources.limits.cpu` | CPU limit for Resource Registry configuration | `500m` | -| `resourceRegistry.resources.limits.memory` | Memory limit for Resource Registry configuration | `512Mi` | -| `resourceRegistry.resources.requests.cpu` | Requested CPU for Resource Registry configuration | `200m` | -| `resourceRegistry.resources.requests.memory` | Requested memory for Resource Registry configuration | `256Mi` | -| `resourceRegistry.persistence.enabled` | Enables this deployment to use persistent volumes | `false` | -| `resourceRegistry.persistence.useDynamicProvisioning` | Enables dynamic binding of persistent volumes to created persistent volume claims | `true` | -| `resourceRegistry.persistence.storageClassName` | Storage class name | | -| `resourceRegistry.persistence.accessMode` | Access mode as ReadWriteMany ReadWriteOnce | | -| `resourceRegistry.persistence.size` | Storage size | | -| `resourceRegistry.livenessProbe.enabled` | Liveness probe configuration enabled | `true` | -| `resourceRegistry.livenessProbe.initialDelaySeconds` | Number of seconds after the container has started before liveness is initiated | `120` | -| `resourceRegistry.livenessProbe.periodSeconds` | How often (in seconds) to perform the probe | `10` | -| `resourceRegistry.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` | -| `resourceRegistry.livenessProbe.successThreshold` | Minimum consecutive 
successes for the probe to be considered successful after failing. Minimum value is 1. | `1` | -| `resourceRegistry.livenessProbe.failureThreshold` | When a pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Minimum value is 1. | `3` | -| `resourceRegistry.readinessProbe.enabled` | Readiness probe configuration enabled | `true` | -| `resourceRegistry.readinessProbe.initialDelaySeconds` | Number of seconds after the container has started before readiness is initiated | `15` | -| `resourceRegistry.readinessProbe.periodSeconds` | How often (in seconds) to perform the probe | `10` | -| `resourceRegistry.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` | -| `resourceRegistry.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after failing. Minimum value is 1. | `1` | -| `resourceRegistry.readinessProbe.failureThreshold` | When a pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Minimum value is 1. | `6` | -| `resourceRegistry.logLevel` | Log level of the resource registry server. Available options: `debug` `info` `warn` `error` `panic` `fatal` | `info` | - -## Implementing storage - -This chart requires an existing persistent volume of any type. The minimum supported size is 1GB. Additionally, a persistent volume claim must be created and referenced in the configuration. - -### Persistent volume for JDBC Drivers (optional) - -If you don't create this persistent volume and related claim, leave `global.existingClaimName` empty and set `appengine.useCustomJDBCDrivers` to `false`. - -The persistent volume should be shareable by pods across the whole cluster. For a single-node Kubernetes cluster, you can use HostPath to create it. For multiple nodes in a cluster, use shareable storage, such as NFS or GlusterFS, for the persistent volume. It must be passed in the values.yaml files (see the global.existingClaimName property in the configuration). - -The following example shows the HostPath type of persistent volume. - -```yaml -kind: PersistentVolume -apiVersion: v1 -metadata: - name: jdbc-pv-volume - labels: - type: local -spec: - storageClassName: manual - capacity: - storage: 2Gi - accessModes: - - ReadWriteMany - hostPath: - path: "/mnt/data" -``` - -The following example shows the NFS type of persistent volume. - -```yaml -kind: PersistentVolume -apiVersion: v1 -metadata: - name: jdbc-pv-volume - labels: - type: nfs -spec: - storageClassName: manual - capacity: - storage: 2Gi - accessModes: - - ReadWriteMany - nfs: - path: /tmp - server: 172.17.0.2 -``` - -After you create a persistent volume, you can create a persistent volume claim to bind the correct persistent volume with the selector. Or, if you are using GlusterFS with dynamic allocation, create the persistent volume claim with the correct storageClassName to allow the persistent volume to be created automatically. - -The following example shows a persistent volume claim. - -```yaml -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jdbc-pvc -spec: - storageClassName: manual - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi -``` - -The mounted directory must contain a jdbc sub-directory, which in turn holds subdirectories with the required JDBC driver files. 
Add the following structure to the mounted directory (which in this case is called binaries): - -``` -/binaries - /jdbc - /db2 - /db2jcc4.jar - /db2jcc_license_cu.jar -``` - -The /jdbc folder and its contents depend on the configuration. Copy the JDBC driver files to the mounted directory as shown in the previous example. Make sure those files have the correct access. IBM Cloud Pak for Automation products on OpenShift use an arbitrary UID to run the applications, so make sure those files have read access for root(0) group. Enter the persistent volume claim name in the `global.existingClaimName` field. - -### Persistent volume for etcd data for Resource Registry (optional) - -Without a persistent volume, the Resource Registry cluster might be broken during pod relocation. -If you don't need data persistence for Resource Registry, you can skip this section by setting resourceRegistry.persistence.enabled to false in the configuration. Otherwise, you must create a persistent volume. - -The following example shows a persistent volume definition using NFS. - -```yaml -kind: PersistentVolume -apiVersion: v1 -metadata: - name: etcd-data-volume - labels: - type: nfs -spec: - storageClassName: manual - capacity: - storage: 3Gi - accessModes: - - ReadWriteOnce - nfs: - path: /nfs/general/rrdata - server: 172.17.0.2 -``` - -You don't need to create a persistent volume claim for Resource Registry. Resource Registry is a StatefulSet, so it creates the persistent volume claim based on the template in the chart. See the [Kubernetes StatefulSets document](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) for more details. - -Notes: - -* You must give root(0) group read/write access to the mounted directories. Use the following command: - - ``` - chown -R 50001:0 - chmod g+rw - ``` - -* Each Resource Registry server uses its own persistent volume. Create persistent volumes based on the replicas (resourceRegistry.replicaCount in the configuration). - -## Limitations - -* The solution server image only trusts CA due to the limitation of the Node.js server. For example, if external UMS is used and signed with another root CA, you must add the root CA as trusted instead of the UMS certificate. - - * The certificate can be self-signed, or signed by a well-known CA. - * If you're using a depth zero self-signed certificate, it must be listed as a trusted certificate. - * If you're using a certificate signed by a self-signed CA, the self-signed CA must be in the trusted list. Using a leaf certificate in the trusted list is not supported. - -* The App Engine supports only the IBM DB2 database. -* The Helm upgrade and rollback operations must use the Helm command line, not the uder interface. - -## Documentation - -* [Using the IBM Cloud Pak for Automation](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/welcome/kc_welcome_dba_distrib.html) -* [Content Security Policy](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP) diff --git a/AAE/README_config.md b/AAE/README_config.md new file mode 100644 index 00000000..34dc1327 --- /dev/null +++ b/AAE/README_config.md @@ -0,0 +1,119 @@ +# Configuring IBM Business Automation Application Engine (App Engine) 19.0.3 + +These instructions cover the basic installation and configuration of IBM Business Automation Application Engine (App Engine). 
+
+## Table of contents
+- [App Engine Component Details](#app-engine-component-details)
+- [Prerequisites](#prerequisites)
+- [Resources Required](#resources-required)
+- [Step 1: Preparing to install App Engine for Production](#step-1-preparing-to-install-app-engine-for-production)
+- [Step 2: Configuring Redis for App Engine (Optional)](#step-2-configuring-redis-for-app-engine-optional)
+- [Step 3: Implementing storage (Optional)](#step-3-implementing-storage-optional)
+- [Step 4: Configuring the custom resource YAML file for your App Engine deployment](#step-4-configuring-the-custom-resource-yaml-file-for-your-app-engine-deployment)
+- [Step 5: Completing the installation](#step-5-completing-the-installation)
+- [Limitations](#limitations)
+
+## Introduction
+
+This installation deploys the App Engine, a user interface service tier that runs applications built by IBM Business Automation Application Designer (App Designer).
+
+## App Engine Component Details
+
+This component deploys several services and components.
+
+In the standard configuration, it includes these components:
+
+* IBM Resource Registry component
+* IBM Business Automation Application Engine (App Engine) component
+
+To support those components, a standard installation generates:
+
+ * 3 or more ConfigMaps that manage the App Engine configuration, depending on the customized configuration
+ * 1 or more deployments running App Engine, depending on the customized configuration
+ * 4 or more pods for Resource Registry, depending on the customized configuration
+ * 1 service account with its related role and role binding
+ * 3 secrets that grant access during operator installation
+ * 3 services and, optionally, an Ingress or Route (OpenShift) to route traffic to the App Engine
+
+## Prerequisites
+
+ * [Remote Dictionary Server (Redis)](http://download.redis.io/releases/)
+ * [User Management Service](../UMS/README_config.md)
+ * Resource Registry, which is included in the App Engine configuration. If you already configured Resource Registry through another component, you don't need to install it again.
+
+## Resources Required
+
+Follow the OpenShift instructions in [Planning Your Installation 3.11](https://docs.openshift.com/container-platform/3.11/install/index.html#single-master-single-box) or [Planning Your Installation 4.2](https://docs.openshift.com/container-platform/4.2/welcome/index.html). Then check the required resources in [System and Environment Requirements on OCP 3.11](https://docs.openshift.com/container-platform/3.11/install/prerequisites.html) or [System and Environment Requirements on OCP 4.2](https://docs.openshift.com/container-platform/4.2/architecture/architecture.html) and set up your environment.
+
+| Component name | Container | CPU | Memory |
+| --- | --- | --- | --- |
+| App Engine | App Engine container | 1 | 1Gi |
+| App Engine | Init containers | 200m | 128Mi |
+| Resource Registry | Resource Registry container | 200m | 256Mi |
+| Resource Registry | Init containers | 100m | 128Mi |
+
+
+## Step 1: Preparing to install App Engine for Production
+
+In addition to the common steps for setting up the operator environment, complete the following steps before you install App Engine:
+
+* Create the App Engine database. See [Creating the database](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_aeprep_db.html).
+* Create secrets to protect sensitive configuration data. See [Creating secrets to protect sensitive configuration data](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_aeprep_data.html).
+
+## Step 2: Configuring Redis for App Engine (Optional)
+
+You can configure App Engine with Remote Dictionary Server (Redis) to provide a more reliable service. See [Configuring App Engine with Redis](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_aeprep_redis.html).
+
+## Step 3: Implementing storage (Optional)
+
+You can optionally add your own persistent volume (PV) and persistent volume claim (PVC) if you want to use your own JDBC driver or you want Resource Registry to be backed up automatically. The minimum supported size is 1 GB. For instructions, see [Optional: Implementing storage](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_aeprep_storage.html).
+
+
+## Step 4: Configuring the custom resource YAML file for your App Engine deployment
+
+1. Make sure that you've set the configuration parameters for the [User Management Service](../UMS/README_config.md) in your copy of the template custom resource YAML file.
+
+2. Edit your copy of the template custom resource YAML file and make the following updates. After you complete these updates, if you need to install other components, go to [Step 5](README_config.md#step-5-completing-the-installation) and configure those components in the same YAML file.
+
+   a. Uncomment and update the `shared_configuration` section if you haven't done so already.
+
+   b. Update the `application_engine_configuration` and `resource_registry_configuration` sections.
+   * If you want to install App Engine with only the minimal required values, replace the contents of `application_engine_configuration` and `resource_registry_configuration` in your copy of the template custom resource YAML file with the values from the [sample_min_value.yaml](configuration/sample_min_value.yaml) file.
+
+   * If you want to use the full configuration list and customize the values, update the required values in `application_engine_configuration` and `resource_registry_configuration` in your copy of the template custom resource YAML file based on your configuration.
+
+### Configuration
+If you want to customize your custom resource YAML file, refer to the [configuration list](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_ae_params.html) for each parameter.
+
+## Step 5: Completing the installation
+
+Go back to the relevant installation or update page to configure other components and complete the deployment with the operator.
+
+Installation pages:
+ - [Managed OpenShift installation page](../platform/roks/install.md)
+ - [OpenShift installation page](../platform/ocp/install.md)
+ - [Certified Kubernetes installation page](../platform/k8s/install.md)
+
+Update pages:
+ - [Managed OpenShift update page](../platform/roks/update.md)
+ - [OpenShift update page](../platform/ocp/update.md)
+ - [Certified Kubernetes update page](../platform/k8s/update.md)
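+
+After the operator completes the deployment, you can run a quick sanity check from the command line. This is a minimal sketch for an OpenShift target: `<namespace>` and `<cr-name>` are placeholders for your project and your custom resource instance name, and the `icp4acluster` resource kind is assumed to be the one referenced in the Limitations section below.
+
+```
+# Watch the App Engine and Resource Registry pods come up
+oc get pods -n <namespace> -w
+
+# List the routes (OpenShift) to find the App Engine external URL
+oc get routes -n <namespace>
+
+# If the ICP4ACluster CRD is registered under this resource name, inspect the instance status
+oc describe icp4acluster <cr-name> -n <namespace>
+```
+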
+## Limitations
+
+* After you deploy the App Engine, you can't change the App Engine admin user in the admin secret.
+
+* Because of a Node.js server limitation, App Engine trusts only root CAs. If an external service is signed with another root CA, you must add that root CA as trusted instead of the service certificate.
+
+  * The certificate can be self-signed, or signed by a well-known root CA.
+  * If you're using a depth zero self-signed certificate, it must be listed as a trusted certificate.
+  * If you're using a certificate signed by a self-signed root CA, the self-signed CA must be in the trusted list. Using a leaf certificate in the trusted list is not supported.
+  * If you're adding the root CAs of two or more external services to the App Engine trust list, those root CAs can't use the same common name.
+
+* The App Engine supports only the IBM DB2 database.
+
+* Resource Registry limitation
+
+  Because of the design of etcd, to prevent data loss it's recommended that you don't change the replica size after you create the Resource Registry cluster. If you must set the replica size, set it to an odd number. If you reduce the replica size, the pods are destroyed one by one, slowly, to prevent data loss or the cluster getting out of sync.
+  * If you update the Resource Registry admin secret to change the username or password, first delete the -dba-rr- pods so that Resource Registry picks up the updates. Alternatively, you can apply the update manually with etcd commands.
+  * If you update the Resource Registry configuration in the icp4acluster custom resource instance, the update might not affect the Resource Registry pods directly. It will affect newly created pods when you increase the number of replicas.
diff --git a/AAE/README_migrate.md b/AAE/README_migrate.md
new file mode 100644
index 00000000..7459c90b
--- /dev/null
+++ b/AAE/README_migrate.md
@@ -0,0 +1,34 @@
+
+# Migrating from IBM Business Automation Application Engine (App Engine) 19.0.2 to 19.0.3
+
+These instructions cover the migration of IBM Business Automation Application Engine (App Engine) from 19.0.2 to 19.0.3.
+
+## Introduction
+
+If you installed App Engine 19.0.2 and want to continue using your 19.0.2 applications in App Engine 19.0.3, you can migrate your applications from App Engine 19.0.2 to 19.0.3.
+
+## Step 1: Export apps that were authored in 19.0.2
+
+Log in to the admin console in your IBM Business Automation Studio 19.0.2 environment, then export your apps as IBM Business App Installation Package (.zip) files.
+
+## Step 2: Publish the apps to App Engine through Business Automation Navigator
+
+Publish your apps to App Engine through Business Automation Navigator and make sure they work without errors.
+
+## Step 3: Shut down the App Engine 19.0.2 environment
+
+Log in to your OpenShift environment and stop all the development pods. You can scale down the number of development pods to 0 by using the OpenShift console. (Note: JMS and the Resource Registry are stateful and can't be scaled down from the OpenShift console. Keeping them won't impact your next action.)
+
+## Step 4: Reuse the App Engine database from 19.0.2
+
+Reuse the existing App Engine database. Update the database configuration information under application_engine_configuration in the custom resource YAML file.
+
+## Step 5: Install App Engine 19.0.3
+
+[Install IBM Business Automation Application Engine](../AAE/README_config.md).
+
+## Step 6: Migrate IBM Business Automation Navigator from 19.0.2 to 19.0.3 to verify your apps
+
+Following the IBM Business Automation Navigator migration instructions (link to be added when the Navigator migration documentation is available), migrate Business Automation Navigator from 19.0.2 to 19.0.3. Then, test your apps.
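+
+If you prefer the command line to the OpenShift console for the scale-down described in Step 3, the following sketch shows equivalent `oc` commands. The deployment name is a hypothetical placeholder; list your own deployments first, and leave the stateful JMS and Resource Registry pods alone, as noted in Step 3.
+
+```
+# Switch to the project that hosts the 19.0.2 environment
+oc project <namespace>
+
+# List the development deployments to find their names
+oc get deployments
+
+# Scale each development deployment down to 0 replicas (name is a placeholder)
+oc scale deployment <app-engine-deployment> --replicas=0
+```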
+ + diff --git a/AAE/configuration/README.md b/AAE/configuration/README.md deleted file mode 100644 index 7cb42454..00000000 --- a/AAE/configuration/README.md +++ /dev/null @@ -1,90 +0,0 @@ -# App Engine platform Helm installation helper script - -1. Extract the IBM Business Applicaition Studio platform Helm installation helper script from the aae-helper.tar file and copy it to a specified directory, for example, ibm-dba-aae-helper. - -2. Unpack the package by running the following command: - - ``` - tar xvf aae-helper.tar - ``` - -3. Update the `./pre-install/aae.yaml`file with the following settings: - -#### App Engine settings - | Parameter | Description | Default | -| -------------------------------------- | ----------------------------------------------------- | ---------------------------------------------------- | -| `releaseName` | Release Name. If you want to install with a release name other than bastudio, update this field. | | -| `server.type` | Kubernetes cluster type. OpenShift is supported. | `openshift` | -| `server.infrastructureNodeIP` | Infrastructure node IP | | -| `server.certificateManagerIntalled` | Whether to use Cert Manager installation | `false` | -| `admin.username` | Administrative username, which is used by User Management Service (UMS), App Engine, and Business Automation Studio | | -| `admin.password` | Administrative password | | -| `ums.hostname` | UMS external host name | | -| `ums.tlsSecretName` | Enter the UMS root CA secret name in this field | | -| `appEngine.hostname` | App Engine external host name | | -| `appEngine.db.name` | App Engine database name | | -| `appEngine.db.hostname` | App Engine database host | | -| `appEngine.db.port` | App Engine database port | | -| `appEngine.db.username` | App Engine database user name | | -| `appEngine.db.password` | App Engine database password | | -| `appEngine.redis.password` | Set this password only if you are using Redis | `password` | -| `resourceRegistry.hostname` | Resource Registry external host name | | -| `resourceRegistry.root.password` | Resource Registry root password | | -| `resourceRegistry.read.username` | Resource Registry reader user name | | -| `resourceRegistry.read.password` | Resource Registry reader password | | -| `resourceRegistry.write.username` | Resource Registry writer user name | | -| `resourceRegistry.write.password` | Resource Registry writer password | | | -| `images.appEngine` | Image name for App Engine container | `solution-server:19.0.2` | -| `images.dbJob` | Image name for App Engine database job container | `solution-server-helmjob-db:19.0.2` | -| `images.resourceRegistry` | Image name for Resource Registry container | `dba-etcd:19.0.2` | -| `images.umsInitRegistration` | Image name for OpenID Connect (OIDC) registration job container | `dba-umsregistration-initjob:19.0.2` | -| `images.tlsInitContainer` | Image name for TLS init container | `dba-keytool-initcontainer:19.0.2` | -| `images.ltpaInitContainer` | Image name for job container | `dba-keytool-jobcontainer:19.0.2` | -| `images.dbcompatibilityInitContainer` | Image name for database compatibility init container | `dba-dbcompatibility-initcontainer:19.0.2` | -| `ImagePullPolicy` | Pull policy for all containers | `Always` | -| `imagePullSecrets` | Existing Docker image secret | `image-pull-secret` | - - -4. Run the command `./pre-install/prepare-aae.sh -i ./pre-install/aae.yaml`. You'll see the following information on your screen: - -``` -Target folder does not exist. 
Creating folder -wrote ./output/aae-helper/templates/admin-secrets.yaml -wrote ./output/aae-helper/templates/certificate.yaml -wrote ./output/aae-helper/templates/route-ingress.yaml -wrote ./output/aae-helper/templates/NOTES.txt -wrote ./output/aae-helper/templates/db-script.sql -wrote ./output/aae-helper/templates/updateValues.yaml ---- -# Source: aae-helper/templates/NOTES.txt -Generating admin secret- related resources in file -./aae-helper/templates/admin-secrets.yaml - -Generating TLS key and certificate resources with secret in file -./aae-helper/templates/certificate.yaml - -Generating route definition in file -./aae-helper/templates/route-ingress.yaml - -Generating values to update in file -./aae-helper/templates/updateValues.yaml - -You can apply the resources with command: -kubectl apply -f ./admin-secrets.yaml -kubectl apply -f ./certificate.yaml -oc apply -f ./route-ingress.yaml - -Create the database with command: -db2 -tvf ./db-script.sql - -``` - -5. Run the following commands to create sensitive configuration data, create TLS key and certification secrets, and set the service type. - -``` - kubectl apply -f ./admin-secrets.yaml - kubectl apply -f ./certificate.yaml - oc apply -f ./route-ingress.yaml -``` - -6. Copy the database script to your dabase and run the command `db2 -tvf ./db-script.sql` on the database. diff --git a/AAE/configuration/aae-helper.tar b/AAE/configuration/aae-helper.tar deleted file mode 100644 index 3675ea6b..00000000 Binary files a/AAE/configuration/aae-helper.tar and /dev/null differ diff --git a/AAE/configuration/sample_min_value.yaml b/AAE/configuration/sample_min_value.yaml new file mode 100644 index 00000000..56750ff6 --- /dev/null +++ b/AAE/configuration/sample_min_value.yaml @@ -0,0 +1,39 @@ +# Minimal required values for App Engine +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. +spec: + ## production configuration + ## App Engine configuration + application_engine_configuration: + ## The application_engine_configuration is a list, you can deploy multiple instances of AppEngine, you can assign different configurations for each instance. + ## For each instance, application_engine_configuration.name and application_engine_configuration.name.hostname must be assigned to different values. + - name: ae_instance1 + hostname: + port: 443 + admin_secret_name: ae-secret-credential + database: + host: + name: + port: + ## If you setup DB2 HADR and want to use it, you need to configure alternative_host and alternative_port, or else, leave is as blank. + alternative_host: + alternative_port: + images: + db_job: + repository: cp.icr.io/cp/cp4a/aae/solution-server-helmjob-db + tag: 19.0.3 + solution_server: + repository: cp.icr.io/cp/cp4a/aae/solution-server + tag: 19.0.3 + + ## Resource Registry Configuration + ## Important: if you've already configured Resource Registry before, you don't need to change resource_registry_configuration section in your copy of the template custom resource YAML file. + resource_registry_configuration: + admin_secret_name: resource-registry-admin-secret + hostname: + port: 443 + images: + resource_registry: + repository: cp.icr.io/cp/cp4a/aae/dba-etcd + tag: 19.0.3 + diff --git a/AAE/helm-charts/README.md b/AAE/helm-charts/README.md deleted file mode 100644 index a4ea47fe..00000000 --- a/AAE/helm-charts/README.md +++ /dev/null @@ -1,42 +0,0 @@ -# Deploying with Helm charts - -Extract the helm chart from ibm-dba-aae-prod-1.0.0.tgz and copy to your installation directory. 
- - -## Installing the Chart - -1. To install the chart with release name `my-release`, run the following command: - - ``` - helm install --tls --name my-release ibm-dba-aae-prod -f my-values.yaml --namespace ` - ``` - - The command deploys `ibm-dba-aae-prod` onto the Kubernetes cluster, based on the values specified in the `my-values.yaml` file. If you use [App Engine platform helm install helper script](configuration) before, you can use ./aae-helper/templates/updateValues.yaml file generated by the script. The configuration section lists the parameters that can be configured during installation. - - -### Verifying the Chart - -1. After the installation is finished, see the instructions for verifying the chart by running the following command: - - `helm status my-release --tls` - -2. Get the name of the pods that were deployed with ibm-dba-aae-prod by running the following command: - - `kubectl get pod -n ` - -3. For each pod, check under Events to see that the images were successfully pulled and the containers were created and started, by running the following command with the specific pod name: - - `kubectl describe pod -n ` - -4. Go to `https://` in your browser (if you set up App Engine with Route) or `https://:` (if you set up App Engine with NodePort). - -### Uninstalling the Chart -To uninstall and delete the my-release deployment, run the following command: - - helm delete my-release --purge --tls - -This command removes all the Kubernetes components associated with the chart and deletes the release. If a delete can result in orphaned components, you must delete them manually. - -For example, when you delete a release with stateful sets, the associated persistent volume must be deleted. Run the following command after deleting the chart release to clean up orphaned persistent volumes: - - kubectl delete pvc -l release=my-release diff --git a/AAE/helm-charts/ibm-dba-aae-prod-1.0.0.tgz b/AAE/helm-charts/ibm-dba-aae-prod-1.0.0.tgz deleted file mode 100644 index 0fd83cb7..00000000 Binary files a/AAE/helm-charts/ibm-dba-aae-prod-1.0.0.tgz and /dev/null differ diff --git a/AAE/k8s-yaml/README.md b/AAE/k8s-yaml/README.md deleted file mode 100644 index 0dd9cee4..00000000 --- a/AAE/k8s-yaml/README.md +++ /dev/null @@ -1,58 +0,0 @@ -# Deploying with Kubernetes YAML - -Extract the helm chart from ibm-dba-aae-prod-1.0.0.tgz and copy to your installation directory. - - -## Installing the Chart - -To use the Kubernetes command line to install the chart with release name `my-release` - -* Run the following command: - - ``` - helm template --name my-release ibm-dba-aae-prod --namespace --output-dir ./yamls -f my-values.yaml - ``` - - If the directory `/yamls` does not exist, you can create it by running `mkdir yamls`. - - The command deploys `ibm-dba-aae-prod` onto the Kubernetes cluster, based on the values specified in the `my-values.yaml` file. If you use [App Engine platform helm install helper script](configuration) before, you can use ./aae-helper/templates/updateValues.yaml file generated by the script.The configuration section lists the parameters that can be configured during installation. - -* Customize the yamls directory by running the following commands: - - ``` - rm -rf ./yamls/ibm-dba-aae-prod/charts/appengine/templates/tests - rm -rf ./yamls/ibm-dba-aae-prod/charts/resourceRegistry/templates/tests - ``` - -* Search `runAsUser: 50001` in the generated contents. And delete them all. (This step can be avoid after helm new feature added). 
- -* Apply the customization to the server by running the following command: - - kubectl apply -R -f ./yamls - -### Verifying the Chart - -1. After the installation is finished, see the instructions for verifying the chart by running the following command: - - `helm status my-release --tls` - -2. Get the name of the pods that were deployed with ibm-dba-aae-prod by running the following command: - - `kubectl get pod -n ` - -3. For each pod, check under Events to see that the images were successfully pulled and the containers were created and started, by running the following command with the specific pod name: - - `kubectl describe pod -n ` - -4. Go to `https://` in your browser (if you set up App Engine with Route) or `https://:` (if you set up App Engine with NodePort). - -### Uninstalling the Chart -To uninstall and delete the my-release deployment, run the following command: - - `kubectl delete -R -f ./yamls` - -This command removes all the Kubernetes components associated with the chart and deletes the release. If a delete can result in orphaned components, you must delete them manually. - -For example, when you delete a release with stateful sets, the associated persistent volume must be deleted. Run the following command after deleting the chart release to clean up orphaned persistent volumes: - - kubectl delete pvc -l release=my-release diff --git a/AAE/k8s-yaml/ibm-dba-aae-prod-1.0.0.tgz b/AAE/k8s-yaml/ibm-dba-aae-prod-1.0.0.tgz deleted file mode 100644 index 0fd83cb7..00000000 Binary files a/AAE/k8s-yaml/ibm-dba-aae-prod-1.0.0.tgz and /dev/null differ diff --git a/AAE/platform/README-ROKS.md b/AAE/platform/README-ROKS.md deleted file mode 100644 index 4ca2332e..00000000 --- a/AAE/platform/README-ROKS.md +++ /dev/null @@ -1,813 +0,0 @@ -# Deploying IBM Business Automation Application Engine (App Engine) on Red Hat OpenShift on IBM Cloud - -These instructions are for installing IBM Business Automation Application Engine (App Engine) on a managed Red Hat OpenShift cluster on IBM Public Cloud. 
- -## Table of contents - -- [Prerequisites](#prerequisites) -- [Step 1: Preparing your client and environment on IBM Cloud](#step-1-preparing-your-client-and-environment-on-ibm-cloud) -- [Step 2: Preparing the OCP client environment](#step-2-preparing-the-ocp-client-environment) -- [Step 3: Downloading the package and uploading it to the local repository](#step-3-downloading-the-package-and-uploading-it-to-the-local-repository) -- [Step 4: Connecting OpenShift with CLI](#step-4-connecting-openshift-with-cli) -- [Step 5: Creating the database](#step-5-creating-the-database) -- [Step 6: Creating the routes](#step-6-creating-the-routes) -- [Step 7: Protecting sensitive configuration data](#step-7-protecting-sensitive-configuration-data) -- [Step 8: Configuring TLS key and certificate secrets](#step-8-configuring-tls-key-and-certificate-secrets) -- [Step 9: Preparing persistent storage](#step-9-preparing-persistent-storage) -- [Step 10: Installing App Engine 19.0.2 on platform Helm](#step-10-installing-app-engine-1902-on-platform-helm) -- [Creating the Navigator service and configuring its UMS](#creating-the-navigator-service-and-configuring-its-ums) -- [References](#references) - -## Prerequisites - - * [OpenShift 3.11](https://docs.openshift.com/container-platform/3.11/welcome/index.html) or later - * [Helm and Tiller 2.9.1](/~https://github.com/helm/helm/releases) or later - * [Cert Manager 0.8.0](https://cert-manager.readthedocs.io/en/latest/getting-started/install/openshift.html) or later - * [IBM DB2 11.1.2.2](https://www.ibm.com/products/db2-database) or later - * [IBM Cloud Pak For Automation - User Management Service](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/con_ums.html) - * Persistent volume support - -Before you deploy, you must configure your IBM Public Cloud environment, create an OpenShift cluster and load the product images into the registry. Use the following information to configure your environment and deploy the images. - -## Step 1: Preparing your client and environment on IBM Cloud - -1. Create an account on [IBM Cloud](https://cloud.ibm.com/kubernetes/registry/main/start). -2. Create a cluster. - From the [IBM Cloud Overview page](https://cloud.ibm.com/kubernetes/overview), on the OpenShift Cluster tile, click **Create Cluster**. - -3. Install the [IBM Cloud CLI](https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install). -4. Install the [OpenShift Container Platform CLI](https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html#cli-reference-get-started-cli) to manage your applications and to interact with the system. -5. Install [Helm 2.9.1](https://www.ibm.com/links?url=https%3A%2F%2Fgithub.com%2Fhelm%2Fhelm%2Freleases%2Ftag%2Fv2.9.1) to install the Helm charts with Helm and Tiller. -6. Install the [Kubernetes CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/). -7. Install the [Docker CLI](https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install). -8. Get the storage class name for your OpenShift cluster: - ```console - $ oc get sc - ``` - -## Step 2: Preparing the OCP client environment - -**1. 
Log in to IBM Cloud using CLI** - - Open a terminal window on your client machine, then run the following commands: - -```console - ibmcloud login -u -p -c -r - ``` - -r value Name of region, such as 'us-south' or 'eu-gb' - -c value Account ID or owner user ID (such as user@example.com) - -```console -ibmcloud login -u -p -c -r -ibmcloud ks cluster ls -ibmcloud ks cluster config --cluster $cluster | grep export > env.sh -chmod 755 env.sh -. ./env.sh -echo $KUBECONFIG -kubectl version --short - ``` - -**2. Configure IBM Cloud Container Registry** - - **a. Log in with your IBM Cloud account. Use “ibmcloud login --sso” to log in to IBM Cloud CLI** - - **Note:** After you press "Y" to open the URL in the default browser, IBM Cloud generates a one-time code in the browser. Copy and paste it, then press “Enter" to pass authentication. - -```console -$ ibmcloud login --sso -API endpoint: https://cloud.ibm.com -Region: eu-gb - -Get One Time Code from https://identity-2.ap-north.iam.cloud.ibm.com/identity/passcode to proceed. -Open the URL in the default browser? [Y/n] > yes -One Time Code > -Authenticating... -OK - -Select an account: -1. XXXXXX's Account (0xxxxxxxxxxxxxxaa9xxx) -2. XXXXXXXX's Account (c56xxxxxxxxxxxxx74xxxxc) <-> 1...7 -Enter a number> 2 -Targeted account XXXXXXXX's Account (c56xxxxxxxxxxxxx74xxxxc) <-> 1...7 - - -API endpoint: https://cloud.ibm.com -Region: eu-gb -User: xxxxxxx -Account: XXXXXXXX's Account (c56xxxxxxxxxxxxx74xxxxc) <-> 1...7 -Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP' -CF API endpoint: -Org: -Space: - -Tip: If you are managing Cloud Foundry applications and services -- Use 'ibmcloud target --cf' to target Cloud Foundry org/space interactively, or use 'ibmcloud target --cf-api ENDPOINT -o ORG -s SPACE' to target the org/space. -- Use 'ibmcloud cf' if you want to run the Cloud Foundry CLI with current IBM Cloud CLI context. - - -New version 0.19.0 is available. -Release notes: /~https://github.com/IBM-Cloud/ibm-cloud-cli-release/releases/tag/v0.19.0 -TIP: use 'ibmcloud config --check-version=false' to disable update check. - -Do you want to update? [y/N] > y - -Installing version '0.19.0'... -Downloading... - 17.45 MiB / 17.45 MiB [========================================================================================] 100.00% 9s -18301051 bytes downloaded -Saved in /Users/ibm/.bluemix/tmp/bx_746509876/IBM_Cloud_CLI_0.19.0.pkg -``` - -If you encouter errors using "ibmcloud login --sso", you can run "ibmcloud login" and enter your username and password instead. - - **b. Create a namespace** - -```console - $ ibmcloud cr namespace-add -``` - - **c. Check the cluster** -```console -$ oc get pod - ``` - **d. Log in to IBM Cloud Container Registry (cr)** -```console -$ ibmcloud cr login -``` - Example output: - -```console -$ ibmcloud cr login -Logging in to 'registry.eu-gb.bluemix.net'... -Logged in to 'registry.eu-gb.bluemix.net'. - -IBM Cloud Container Registry is adopting new icr.io domain names to align with the rebranding of IBM Cloud for a better user experience. The existing bluemix.net domain names are deprecated, but you can continue to use them for the time being, as an unsupported date will be announced later. For more information about registry domain names, see https://cloud.ibm.com/docs/services/Registry?topic=registry-registry_overview#registry_regions_local - -Logging in to 'us.icr.io'... -Logged in to 'us.icr.io'. 
- -IBM Cloud Container Registry is adopting new icr.io domain names to align with the rebranding of IBM Cloud for a better user experience. The existing bluemix.net domain names are deprecated, but you can continue to use them for the time being, as an unsupported date will be announced later. For more information about registry domain names, see https://cloud.ibm.com/docs/services/Registry?topic=registry-registry_overview#registry_regions_local - -OK -``` -Get the container repository host from the "ibmcloud cr" login output. In this example, the Docker repository host is “us.icr.io”. - - **e. Verify the images are in your private registry:** -```console -$ ibmcloud cr image-list -``` - **f. Create an API key** - - I. Log in to https://cloud.ibm.com. - - II. Select your own cluster account (upper right corner) and click IBM Cloud -> Security -> Manage -> Identity and Access -> Access (IAM) / IBM Cloud API Keys (left menu) --> Create an IBM Cloud API Key. Then download the API key or copy the API key. - - III. Return to your client terminal window and log in to the local Docker registry: - -```console -docker login -u iamapikey -p -``` - Example: -```console -$ docker login -u iamapikey -p us.icr.io -WARNING! Using --password via the CLI is insecure. Use --password-stdin. -Login Succeeded -``` - **g. Create a Docker pull secret in your OpenShift cluster** -```console -oc create secret docker-registry ums-secret --docker-server=us.icr.io --docker-username=iamapikey --docker-password= - ``` -This secret will be passed to the chart in the imagePullSecrets property. Check the "docker-server" name in the output of the previous command “ibmcloud cr login”. - -## Step 3: Downloading the package and uploading it to the local repository - -1. Download and save the [loadimages.sh](/~https://github.com/icp4a/cert-kubernetes/blob/master/scripts/loadimages.sh) script to the client machine. -2. Download the Business Automation Application Engine Passport Advantage packages by following the instructions in [IBM Cloud Pak for Automation 19.0.2 on Certified Kubernetes](/~https://github.com/icp4a/cert-kubernetes/blob/master/README.md#step-2-download-a-product-package-from-ppa-and-load-the-images). -3. Run the following commands to load the images into the Docker repository: -```console -$ ibmcloud cr namespace-add - ``` -Example: -```console -./loadimages.sh -p ./CC3I3ML.tgz -r us.icr.io/ -./loadimages.sh -p ./CC3I4ML.tgz -r us.icr.io/ -./loadimages.sh -p ./CC3I5ML.tgz -r us.icr.io/ -./loadimages.sh -p ./CC3HVML.tgz -r us.icr.io/ - ``` -The name "us.icr.io" is one of the IBM Cloud Container Registry names and your registry name might be different. Get the name from the "ibmcloud cr login" step. - -4. 
Get the following Docker images in the IBM Cloud repository, which can be used for future App Engine deployments: -```console - - us.icr.io//solution-server:19.0.2 - - us.icr.io//dba-etcd:19.0.2 - - us.icr.io//solution-server-helmjob-db:19.0.2 - - us.icr.io//dba-keytool-initcontainer:19.0.2 - - us.icr.io//dba-umsregistration-initjob:19.0.2 - - us.icr.io//dba-dbcompatibility-initcontainer:19.0.2 - - us.icr.io//navigator:ga-306-icn-if002 - - us.icr.io//navigator-sso:ga-306-icn-if002 - - us.icr.io//ums:19.0.2 - - us.icr.io//dba-keytool-initcontainer:19.0.2 - - us.icr.io//dba-keytool-jobcontainer:19.0.2 - - us.icr.io//bastudio:19.0.2 - - us.icr.io//jms:19.0.2 - - us.icr.io//solution-server:19.0.2 - - us.icr.io//dba-etcd:19.0.2 - - us.icr.io//solution-server-helmjob-db:19.0.2 - - us.icr.io//dba-keytool-initcontainer:19.0.2 - - us.icr.io//dba-keytool-jobcontainer:19.0.2 - - us.icr.io//dba-umsregistration-initjob:19.0.2 - - us.icr.io//dba-dbcompatibility-initcontainer:19.0.2 -``` -## Step 4: Connecting OpenShift with CLI -1. Open a browser and log in to the IBM Cloud website (https://cloud.ibm.com) with your IBM Cloud ID, then navigate to the OpenShift category. -2. Find your OpenShift cluster instance in the Clusters list, select ..., and click OpenShift Web Console. -3. In the OpenShift Web Console, click your user ID (top right) and click Copy Login Command. -4. Paste the login command into the shell in your client machine terminal window: -```console - oc login https://: --token= - ``` -5. Create or switch to the namespace you created by running the following command: -```console - oc new-project && oc project - ``` -6. To deploy the service account, role, and role binding successfully, assign the administrator role to the user for this namespace by running the following command: -```console - oc project - oc adm policy add-role-to-user admin -``` -7. If you want to operate persistent volumes (PVs), you must have the storage-admin cluster role, because PVs are a cluster resource in OpenShift. Add the role by running the following command: -```console - oc adm policy add-cluster-role-to-user storage-admin -``` - 8. Grant scc ibm-anyuid-scc to your newly created namespace: - ```console -oc adm policy add-scc-to-group ibm-anyuid-scc system:serviceaccounts: -``` - -## Step 5: Creating the database - -1. Prepare the database for App Engine, following the instructions in [Creating the database](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_aeprep_db.html). - -## Step 6: Creating the routes -1. Choose a release name, for example, “ocp-aae”. You can replace `````` with your own release name in the examples that follow. -2. Choose the route name, for example, "ae-route" for App Engine. -3. Prepare the YAML files for the routes. For example: -ums-route.yaml -```yaml -apiVersion: route.openshift.io/v1 -kind: Route -metadata: - name: ums-route - namespace: -spec: - port: - targetPort: https - tls: - insecureEdgeTerminationPolicy: Redirect - termination: passthrough - to: - kind: Service - name: -ibm-dba-ums - weight: 100 - wildcardPolicy: None -``` -ae-route.yaml: -```yaml -apiVersion: route.openshift.io/v1 -kind: Route -metadata: - name: ae-route - namespace: -spec: - port: - targetPort: https - tls: - insecureEdgeTerminationPolicy: Redirect - termination: passthrough - to: - kind: Service - name: -ibm-dba-ae-service - weight: 100 - wildcardPolicy: None -``` - -4. 
Create the route by running the following command: -```console -oc create -f ae-route.yaml -``` -5. Get the host name for Application Engine. You will need it later. - - a. Run the command "oc get route" to get the host name for each component. -```console -$ oc get route -NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD -ae-route ae-route-bastudio. .us-east.containers.appdomain.cloud aa-ibm-dba-ae-service https passthrough/Redirect None -rr-route rr-route-bastudio. .us-east.containers.appdomain.cloud aa-resource-registry-service https passthrough/Redirect None -ums-route ums-route-bastudio. .us-east.containers.appdomain.cloud aa-ibm-dba-ums https passthrough/Redirect None -``` - - b. Find the host name “ums-route-bastudio..us-east.containers.appdomain.cloud” and write it down. You will use it later when creating secrets. - - c. Ping the host name to get the IP address. - -```console -$ping ums-route-bastudio..us-east.containers.appdomain.cloud -PING dbaclusterxxxxxxxxxxxxxx001.us-east.containers.appdomain.cloud (169.x.x.x) 56(84) bytes of data. -64 bytes from xxx.ip4.static.sl-reverse.com (169.x.x.x): icmp_seq=1 ttl=44 time=72.9 ms -64 bytes from xxx.ip4.static.sl-reverse.com (169.x.x.x): icmp_seq=2 ttl=44 time=72.7 ms -``` -Write down the IP address 169.x.x.x. It will be used later in the . For each route (ums-route, ae-route, rr-route) write down the host name and IP address. - -## Step 7: Protecting sensitive configuration data - -You must create the following secrets manually before you install the chart. - -* Create the UMS Service following the instructions in [Install User Management Service 19.0.2 on Red Hat OpenShift on IBM Cloud](/~https://github.com/icp4a/cert-kubernetes/blob/master/UMS/platform/README-ROKS.md). - -* Follow the instructions in [Preparing UMS-related configuration and TLS certificates](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_aeprep_ums.html) to prepare UMS secrets. - -Follow [Protecting sensitive configuration data](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_aeprep_data.html) to prepare secrets for Resource Registry and App Engine. - -The following sample YAML files are for Resource Registry and App Engine secrets. Update the values with your own user name, database information, and so on. - -Resource Registry yaml: -```yaml - apiVersion: v1 - kind: Secret - metadata: - name: resource-registry-admin-secret - type: Opaque - stringData: - rootPassword: "" - readUser: "reader" - readPassword: "" - writeUser: "writer" - writePassword: "" -``` - -App Engine yaml: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: ae-secret-credential -type: Opaque -stringData: - AE_DATABASE_PWD: "" - AE_DATABASE_USER: "" - OPENID_CLIENT_ID: "app_engine" - OPENID_CLIENT_SECRET: ““ - SESSION_SECRET: "bigblue123solutionserver" - SESSION_COOKIE_NAME: "nsessionid" - REDIS_PASSWORD: "password" -``` - -## Step 8: Configuring TLS key and certificate secrets -Modify all values enclosed in angle brackets like `````` in each of the following xxx.conf files with your own values. - -Follow [Configuring the TLS key and certificate secrets](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_basprep_secrets.html) to create TLS certificate secrets for UMS, Resource Registry, and App Engine services. - -1. Create the root CA. 
- -Run the following three commands: -```console - -openssl genrsa -out rootCA.key.pem 2048 - -openssl req -x509 -new -nodes -key rootCA.key.pem -sha256 -days 3650 \ - -subj "/CN=rootCA" \ - -out rootCA.crt.pem - -kubectl create secret tls ca-tls-secret --key=rootCA.key.pem --cert=rootCA.crt.pem -``` - -2. Generate the UMS TLS key and certificate. - -Example: ums-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -ibm-dba-ums -DNS.2 = -DNS.3 = .svc.cluster.local -DNS.4 = svc.cluster.local -DNS.5 = localhost -IP.1 = -``` -Run the following four commands: -```console -openssl genrsa -out ums.key.pem 2048 -openssl req -new -key ums.key.pem -out ums.csr \ - -subj "/CN= " - -openssl x509 -req -in ums.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out ums.crt.pem \ - -days 1825 -sha256 \ - -extfile ums-extfile.conf -kubectl create secret tls ums-tls-secret --key=ums.key.pem --cert=ums.crt.pem -``` -3. Generate the UMS JKS TLS key and certificate. - -Example ums-jks-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -ibm-dba-ums -DNS.2 = -ibm-dba-ums..svc.cluster.local -DNS.3 = svc.cluster.local -DNS.4 = localhost -DNS.5 = c100-e.us-east.containers.cloud.ibm.com -IP.1 = -``` -Run the following four commands: -```console -openssl genrsa -out ums-jks.key.pem 2048 -openssl req -new -key ums-jks.key.pem -out ums-jks.csr \ - -subj "/CN= " - -openssl x509 -req -in ums-jks.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out ums-jks.crt.pem \ - -days 1825 -sha256 \ - -extfile ums-jks-extfile.conf -kubectl create secret tls ums-jks-tls-secret --key=ums-jks.key.pem --cert=ums-jks.crt.pem -``` -4. Generate the Resource Registry TLS key and certificate. - -Example rr-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -resource-registry-service -DNS.2 = -DNS.3 = -resource-registry-service..svc.cluster.local -DNS.4 = svc.cluster.local -DNS.5 = localhost -DNS.6 = c100-e.us-east.containers.cloud.ibm.com -IP.1 = -``` -Run the following four commands: -```console -openssl genrsa -out rr.key.pem 2048 -openssl req -new -key rr.key.pem -out rr.csr \ - -subj "/CN= " - -openssl x509 -req -in rr.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out rr.crt.pem \ - -days 1825 -sha256 \ - -extfile rr-extfile.conf -kubectl create secret tls rr-tls-secret --key=rr.key.pem --cert=rr.crt.pem -``` -5. Generate the App Engine TLS key and certificate. 
- -Example ae-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -ibm-dba-ae-service -DNS.2 = -DNS.3 = -ibm-dba-ae-service..svc.cluster.local -DNS.4 = svc.cluster.local -DNS.5=localhost -DNS.6=c100-e.us-east.containers.cloud.ibm.com -IP.1 = -``` -Run the following four commands: - -```console -openssl genrsa -out ae.key.pem 2048 -openssl req -new -key ae.key.pem -out ae.csr \ - -subj "/CN=< ip address from above ae-route > " - -openssl x509 -req -in ae.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out ae.crt.pem \ - -days 1825 -sha256 \ - -extfile ae-extfile.conf -kubectl create secret tls ae-tls-secret --key=ae.key.pem --cert=ae.crt.pem -``` -6. Generate the IBM Content Navigator (ICN) TLS key and certificate. - -Example icn-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = icn..nip.io -DNS.2 = svc.cluster.local -DNS.3 = localhost -IP.1 = -``` -Run the following four commands: -```console -openssl genrsa -out icn.key.pem 2048 -openssl req -new -key icn.key.pem -out icn.csr \ - -subj "/CN=< ip address from above ums-route > " - -openssl x509 -req -in icn.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out icn.crt.pem \ - -days 1825 -sha256 \ - -extfile icn-extfile.conf -kubectl create secret tls icn-tls-secret --key=icn.key.pem --cert=icn.crt.pem -``` -7. Generate the JKS TLS key and certificate. - -Example jks-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -ibm-dba-ums -DNS.2 = ums..nip.io -DNS.3 = -ibm-dba-ums..svc.cluster.local -DNS.4 = svc.cluster.local -IP.1 = -``` -Run the following four commands: - -```console -openssl genrsa -out jks.key.pem 2048 -openssl req -new -key jks.key.pem -out jks.csr \ - -subj "/CN=< ip address from above ums-route > " - -openssl x509 -req -in jks.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out jks.crt.pem \ - -days 1825 -sha256 \ - -extfile jks-extfile.conf -kubectl create secret tls jks-tls-secret --key=jks.key.pem --cert=jks.crt.pem -``` - -## Step 9: Preparing persistent storage - -Follow the "Implementing storage" section of [IBM Business Automation Application Engine Installation](/~https://github.com/icp4a/cert-kubernetes/blob/master/AAE/README.md) to prepare the persistent storage for App Engine. - -## Step 10: Installing App Engine 19.0.2 on platform Helm - -To install the App Engine service on a managed Red Hat OpenShift cluster on IBM Public Cloud, choose one of the following options: -* To use Helm charts, follow the instructions in [Deploying with Helm charts](/~https://github.com/icp4a/cert-kubernetes/blob/master/AAE/helm-charts/README.md) - -* To use YAML, follow the instructions in [Deploying with Kubernetes YAML](/~https://github.com/icp4a/cert-kubernetes/blob/master/AAE/k8s-yaml/README.md) - -* To deploy the service on your own, complete the following steps: - -**1. 
Download the Helm charts provided for certificate in the GitHub release page:** -* Download ibm-dba-aae-prod-1.0.0.tgz from [AAE HELM](/~https://github.com/icp4a/cert-kubernetes/tree/master/AAE/helm-charts) - -**Modify the sample values in the YAML files to match your own environment:** - -```yaml -#Shared values across components -global: - # The persistent volume claim name used to store JDBC and ODBC library - existingClaimName: - # Keep this value as false - nonProductionMode: false - # Secret with Docker credentials - imagePullSecrets: ums-secret - # global CA secret name - caSecretName: "ca-tls-secret" - # Kubernetes dns base name - dnsBaseName: "svc.cluster.local" - # Contributor toolkits storage PVC - contributorToolkitsPVC: "" - # Global configuration created by user management service - ums: - serviceType: Ingress - # Get UMS hostname from “oc get route” command - hostname: "ums-route-bastudio. xxxxx.us-east.containers.appdomain.cloud" - port: 443 - # Secret with admin credentials - adminSecretName: ibm-dba-ums-secret - - # Global configuration created by Resource Registry - resourceRegistry: - # Get RR hostname from “oc get route” command - hostname: "rr-route-bastudio. xxxxx.us-east.containers.appdomain.cloud" - port: 31099 - adminSecretName: resource-registry-admin-secret - - # Global configuration created by App Engine - appEngine: - serviceType: "Ingress" - # Get AE hostname from “oc get route” command - hostname: "ae-route-bastudio.xxxxx.us-east.containers.appdomain.cloud" - port: 443 - -appengine: - install: true - - replicaCount: 1 - - probes: - initialDelaySeconds: 5 - periodSeconds: 10 - timeoutSeconds: 5 - successThreshold: 5 - failureThreshold: 3 - - images: - appEngine: us.icr.io//solution-server:19.0.2 - tlsInitContainer: us.icr.io//dba-keytool-initcontainer:19.0.2 - dbJob: us.icr.io//solution-server-helmjob-db:19.0.2 - oidcJob: us.icr.io//dba-umsregistration-initjob:19.0.2 - dbcompatibilityInitContainer: us.icr.io//dba-dbcompatibility-initcontainer:19.0.2 - pullPolicy: Always - - tls: - tlsSecretName: ae-tls-secret - tlsTrustList: [] - - database: - name: APPDB - host: - port: - type: db2 - currentSchema: DBASB - initialPoolSize: 1 - maxPoolSize: 10 - uvThreadPoolSize: 4 - maxLRUCacheSize: 1000 - maxLRUCacheAge: 600000 - - # Toggle for custom JDBC drivers - useCustomJDBCDrivers: false - - adminSecretName: ae-secret-credential - - logLevel: - node: trace - browser: 2 - - contentSecurityPolicy: - enable: false - whitelist: "" - - session: - duration: "1800000" - resave: "false" - rolling: "true" - saveUninitialized: "false" - useExternalStore: "false" - - redis: - host: localhost - port: 6379 - ttl: 1800 - - maxAge: - staticAsset: "2592000" - csrfCookie: "3600000" - authCookie: "900000" - - env: - serverEnvType: development - maxSizeLRUCacheRR: 1000 - - resources: - ae: - limits: - cpu: 1500m - memory: 1024Mi - requests: - cpu: 1 - memory: 512Mi - initContainer: - limits: - cpu: 500m - memory: 256Mi - requests: - cpu: 200m - memory: 128Mi - - autoscaling: - enabled: false - minReplicas: 2 - maxReplicas: 5 - targetAverageUtilization: 80 - -resourceRegistry: - install: true - - # Private images for resource registry - images: - resourceRegistry: us.icr.io//dba-etcd:19.0.2 - keytoolInitcontainer: us.icr.io//dba-keytool-initcontainer:19.0.2 - pullPolicy: Always - - # TLS configurations - tls: - tlsSecretName: rr-tls-secret - - # Resource registry cluster size - replicaCount: 1 - - # RR Resource config - resources: - limits: - cpu: 500m - memory: 512Mi - requests: - cpu: 
200m - memory: 256Mi - - # data persistence config - persistence: - enabled: false - useDynamicProvisioning: true - storageClassName: "manual" - accessMode: "ReadWriteOnce" - size: 3Gi - - livenessProbe: - enabled: true - initialDelaySeconds: 120 - periodSeconds: 10 - timeoutSeconds: 5 - failureThreshold: 3 - successThreshold: 1 - - readinessProbe: - enabled: true - initialDelaySeconds: 15 - periodSeconds: 10 - timeoutSeconds: 5 - failureThreshold: 6 - successThreshold: 1 - - logLevel: info -``` -**2. Generate and customize the deployment YAML files:** - -a.Generate the output folder: -```console -mkdir yamls -``` -b.Generate the deployment YAML files into the created folder: - -```console -helm template --name --namespace --output-dir ./yamls -f aae-values.yaml ibm-dba-aae-prod-1.0.0.tgz -``` -**3. Move to the aae-yamls folder. Remove the test folders:** -```console - rm -rf ./yamls/ibm-dba-aae-prod/charts/appengine/templates/tests - rm -rf ./yamls/ibm-dba-aae-prod/charts/resourceRegistry/templates/tests - rm -rf ./yamls/ibm-dba-aae-prod/templates/tests -``` - -**4. Apply the YAML definitions by running the following command:** -```console -kubectl apply -R -f ./yamls -``` - -## Creating the Navigator service and configuring its UMS -1. Create the Navigator service on MOCP: -* /~https://github.com/icp4a/cert-kubernetes/blob/19.0.1/NAVIGATOR/platform/README_Eval_ROKS.md - -2. Configure it to connect to UMS: -* https://www.ibm.com/support/pages/node/1073240 - -3. Configure it to work with App Engine and IBM Business Automation Workflow using the following instructions: -* [Configuring App Engine with IBM Business Automation Navigator](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_aeconfig_ban.html) -* [Publishing apps](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.bas/topics/tsk_bas_publishapps.html) -* [Configuring App Engine with IBM Business Automation Workflow](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_aeconfig_baw.html) - -## References -* /~https://github.com/icp4a/cert-kubernetes/blob/master/AAE/README.md -* /~https://github.com/icp4a/cert-kubernetes/blob/master/UMS/platform/README-ROKS.md -* https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_bas.html -* https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_install_bas.html - diff --git a/ACA/README_config.md b/ACA/README_config.md new file mode 100644 index 00000000..f9626d69 --- /dev/null +++ b/ACA/README_config.md @@ -0,0 +1,120 @@ +# IBM® Business Automation Content Analyzer +========= + +## Introduction + +This readme provide instruction to deploy IBM Business Automation Content Analyzer with IBM® Cloud Pak for Automation platform. IBM Business Automation Content Analyzer offers the power of intelligent capture with the flexibility of an API that enables you to extend the value of your core enterprise content management (ECM) technology stack and helps you rapidly accelerate extraction and classification of data in your documents. + + +Requirements to Prepare Your Environment +------------ + +### Step 1 - Preparing users for Content Analyzer + +Content Analyzer users need to be configured on the LDAP server. +See [Preparing users for Content Analyzer](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bacak8s_usergroups.html) for detailed instructions. 
+ +### Step 2 - Create DB2 databases for Content Analyzer + +For development or testing purposes, you can skip this step and move to "Step 3 - Initialize the Content Analyzer Base database" if you prefer for the Content Analyzer scripts to create the database for you. + +See [Create the Db2 database](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bacak8s_createdb2.html) for detailed instructions. + +### Step 3 - Initialize the Content Analyzer Base database + +If you do not have a Db2® database set up, do so now. + +See [Initializing the Content Analyzer Base database](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bacak8s_db.html) for detailed instructions. + +### Step 4 - Initialize the Content Analyzer Tenant database(s) + +If you do not have a tenant database, set up a Db2 tenant database. + +See [Initializing the Tenant database](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bacak8s_dbtenant.html) for detailed instructions. + +### Step 5 - Optional - DB2 High-Availability + +You can set up a Db2 High Availability Disaster Recovery (HADR) database. + +See [Setting up Db2 High-Availability](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_cadb2ha.html) for detailed instructions. + +### Step 6 - Create prerequisite resources for IBM Business Automation Content Analyzer + +Set up and configure storage to prepare for the container configuration and deployment. You set up permissions to PVC directories, label worker nodes, create the docker secret, create security, and enable SSL communication for LDAP if necessary. + +See [Configuring storage and the environment](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bacak8s_storage.html) for detailed instructions. + +### Step 7 - Configuring the CR YAML file + +Update the custom YAML file to provide the details that are relevant to your IBM Business Automation Content Analyzer and your decisions for the deployment of the container. + +See [Content Analyzer parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_k8sca_operparams.html) for detailed instructions. + +### Step 8 - Deployment +----------- +1) Once all the required parameters have been filled out for Content Analyzer, the CR can be applied by + +``` + +oc -n apply -f + +``` +where: +`ns` is the namespace name where you want to install Content Analyzer. +`CR yaml` is the CR yaml name. + +2) The Operator container will deploy Content Analyzer. For more information about Operator, please refer to +/~https://github.com/icp4a/cert-kubernetes/tree/19.0.3/ + + + +Post Deployment +-------------- + +## Post Deployment steps for route (OpenShift) setup + +You can deploy IBM Business Automation Content Analyzer by using an OpenShift route as the ingress point to provide fronted and backend services through an externally reachable, unique hostname such as www.backend.example.com and www.frontend.example.com. A defined route and the endpoints, which are identified by its service, can be consumed by a router to provide named connectivity that allows external clients to reach your applications. + +See [Configuring an OpenShift route](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_postcadeploy_routeOS.html) for detailed instructions. 
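+As a concrete illustration of this pattern, the following is a minimal sketch of a backend route definition. The route name, namespace, hostname, and service name are placeholders for illustration only (they are not values defined by this document), and the passthrough TLS termination is an assumption; adjust these to match the services created by your Content Analyzer deployment and see the linked instructions for the exact values.
+
+```
+apiVersion: route.openshift.io/v1
+kind: Route
+metadata:
+  name: ca-backend-route
+  namespace: <namespace>
+spec:
+  host: www.backend.example.com
+  port:
+    targetPort: https
+  tls:
+    termination: passthrough
+    insecureEdgeTerminationPolicy: Redirect
+  to:
+    kind: Service
+    name: <content-analyzer-backend-service>
+    weight: 100
+  wildcardPolicy: None
+```
+
+A second route with its own hostname (for example, www.frontend.example.com) can be created the same way for the frontend service.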
+ +## Post Deployment steps for NodePort (Non OpenShift) setup + +You can modify your LoadBalancer, like the HAProxy, in the Kubernetes cluster to route the request to a specific node port. + +See [Configuring routing to a node port](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_postcadeploy_nodeport_NOS.html) for detailed instructions. + +## Troubleshooting + +This section describes how to get various logs for Content Analyzer. + +### Installation: + +- Retreieve the Ansible installation logs: + +``` +kubectl logs deployment/ibm-cp4a-operator -c operator > Operator.log + +kubectl logs deployment/ibm-cp4a-operator -c ansible > Ansible.log +``` + +### Post install: + +- Content Analyzer logs are located in the log pvc. Logs are separated into sub-folders based on the component names. + +``` +├── backend +├── callerapi +├── classifyprocess-classify +├── frontend +├── mongo +├── mongoadmin +├── ocr-extraction +├── pdfprocess +├── postprocessing +├── processing-extraction +├── setup +├── updatefiledetail +└── utf8process + +``` + diff --git a/ACA/README_migrate.md b/ACA/README_migrate.md new file mode 100644 index 00000000..86dccd4d --- /dev/null +++ b/ACA/README_migrate.md @@ -0,0 +1,28 @@ +# IBM® Business Automation Content Analyzer +========= + +## Introduction + +With these instructions, you can deploy IBM Business Automation Content Analyzer with IBM® Cloud Pak for Automation platform. IBM Business Automation Content Analyzer offers the power of intelligent capture with the flexibility of an API that enables you to extend the value of your core enterprise content management (ECM) technology stack and helps you rapidly accelerate extraction and classification of data in your documents. + + +Upgrade +----------- +## Upgrade from 19.0.1 to 19.0.3 +Upgrade from Content Analyzer 19.0.1 to 19.0.3 is not supported. + +## Upgrade from 19.0.2 to 19.0.3 + +- To upgrade from Content Analyzer 19.0.2 to 19.0.3, do the following steps: + - Back up your ontology through the export function from the UI. + - Back up your Content Analyzer's Base database and Tenant database. + - Copy the `DB2` [folder](/~https://github.com/icp4a/cert-kubernetes/tree/19.0.3/ACA/configuration-ha) to the DB2 server. + - Run the `UpgradeTenantDB.sh` from your database server as `db2inst1` user. + - Delete the previous Content Analyzer 19.0.2 instance by running `delete_ContentAnalyzer.sh`. +- Deploy Content Analyzer 19.0.3 using Operator. Make sure to reuse the Base database and Tenant database by filling out the CR yaml file properly. + + +## Rolling back an upgrade +- Delete the current version of Content Analyzer by following the [README_uninstall.md](README_uninstall.md) +- Restore the Content Analyzer's Base database and Tenant database to the previous release. For example: Restore the Base database and Tenant database to 19.0.2, that you previously backed up, if you want to rollback to 19.0.2. +- Follow the installation procedure to deploy Content Analyzer for that specific version. diff --git a/ACA/README_uninstall.md b/ACA/README_uninstall.md new file mode 100644 index 00000000..effd8ac3 --- /dev/null +++ b/ACA/README_uninstall.md @@ -0,0 +1,20 @@ +# IBM® Business Automation Content Analyzer +========= + +## Introduction + +With these instructions, you can uninstall IBM Business Automation Content Analyzer with IBM® Cloud Pak for Automation platform. 
IBM Business Automation Content Analyzer offers the power of intelligent capture with the flexibility of an API that enables you to extend the value of your core enterprise content management (ECM) technology stack and helps you rapidly accelerate extraction and classification of data in your documents. + + +Uninstall +----------- +1. Backup your ontology. +2. In the CR yaml file: comment out the `ca_configuration` section + +3. Apply the CR. For example: `oc apply -f [PATH TO CR YAML]` + +4. Delete all the subdirectories under the Content Analyzer Data PVC. + +5. Delete all the subdirectories under the Content Analyzer Config PVC. + +6. Delete all the subdirectories under the CA Log PVC. diff --git a/ACA/README_update.md b/ACA/README_update.md new file mode 100644 index 00000000..fa327dc6 --- /dev/null +++ b/ACA/README_update.md @@ -0,0 +1,23 @@ +# IBM® Business Automation Content Analyzer +========= + +## Introduction + +With these instructions, you can update IBM Business Automation Content Analyzer with IBM® Cloud Pak for Automation platform. IBM Business Automation Content Analyzer offers the power of intelligent capture with the flexibility of an API that enables you to extend the value of your core enterprise content management (ECM) technology stack and helps you rapidly accelerate extraction and classification of data in your documents. + + + +## Redeploying Content Analyzer if changes are made to the Role Variables +If you need to make changes to Content Analyzer deployment, you must redeploy Content Analyzer by doing the following: + +Note that this process removes any documents that you processed in Content Analyzer. Download any needed document output from Content Analyzer before doing these steps. + +1) In the CR yaml file: comment out the `ca_configuration` section. + +2) Apply the CR. For example: `oc apply -f [PATH TO CR YAML]`. + +3) Delete the contents under the Content Analyzer Data PVC and Content Analyzer Config PVC. + +4) In the CR yaml file: uncomment the `ca_configuration` section and make the changes. + +5) Apply the CR. For example: `oc apply -f [PATH TO CR YAML]`. diff --git a/BACA/configuration-ha/DB2/AddOntology.sh b/ACA/configuration-ha/DB2/AddOntology.sh similarity index 100% rename from BACA/configuration-ha/DB2/AddOntology.sh rename to ACA/configuration-ha/DB2/AddOntology.sh diff --git a/BACA/configuration-ha/DB2/AddTenant.bat b/ACA/configuration-ha/DB2/AddTenant.bat similarity index 50% rename from BACA/configuration-ha/DB2/AddTenant.bat rename to ACA/configuration-ha/DB2/AddTenant.bat index 05ab9be2..6686f3fd 100755 --- a/BACA/configuration-ha/DB2/AddTenant.bat +++ b/ACA/configuration-ha/DB2/AddTenant.bat @@ -1,143 +1,205 @@ -@echo off - -SETLOCAL -echo Enter '1' to add new tenant and an ontology. -echo Enter '2' to add an ontology for an existing tenant database. -echo Enter anything to abort - -set /p choice="Type input: " - -set /p tenant_id= Enter the tenant ID for the new tenant: (eg. t4900) : - -set /p tenant_db_name= Enter the name of the new BACA tenant database to create: (eg. t4900) : - -set /p baca_database_server_ip= Enter the host/IP of the tenant database server. : - -set /p baca_database_port= Enter the port of the tenant database server : - -set /p tenant_db_user= Please enter the name of tenant database user. 
If no value is entered we will use the following default value 'tenantuser' : -IF NOT DEFINED tenant_db_user SET "tenant_db_user=tenantuser" - -set /p tenant_db_pwd= Enter the password for the tenant database user: - -set /p tenant_ontology= Enter the tenant ontology name. If nothing is entered, the default name will be used 'default' : -IF NOT DEFINED tenant_ontology SET "tenant_ontology=default" - -set /p base_db_name= Enter the name of the Base BACA database with the TENANTINFO Table. If nothing is entered, we will use the following default value 'CABASEDB': -IF NOT DEFINED base_db_name SET "base_db_name=CABASEDB" - -set /p base_db_user= Enter the name of the database user for the Base BACA database. If nothing is entered, we will use the following default value 'CABASEUSER' : -IF NOT DEFINED base_db_user SET "base_db_user=CABASEUSER" - -set /p tenant_company= Please enter the company name for the initial BACA user : - -set /p tenant_first_name= Please enter the first name for the initial BACA user : - -set /p tenant_last_name= Please enter the last name for the initial BACA user : - -set /p tenant_email= Please enter a valid email address for the initial BACA user : - -set /p tenant_user_name= Please enter the login name for the initial BACA user : - -set /p ssl= Please enter the login name for the initial BACA user : - -echo "-- Please confirm these are the desired settings:" -echo " - tenant ID: %tenant_id%" -echo " - tenant database name: %tenant_db_name%" -echo " - database server hostname/IP: %baca_database_server_ip%" -echo " - database server port: %baca_database_port%" -echo " - tenant database user: %tenant_db_user%" -echo " - ontology name: %tenant_ontology%" -echo " - base database: %base_db_name%" -echo " - base database user: %base_db_user%" -echo " - tenant company name: %tenant_company%" -echo " - tenant first name: %tenant_first_name%" -echo " - tenant last name: %tenant_last_name%" -echo " - tenant email address: %tenant_email%" -echo " - tenant login name: %tenant_user_name%" - -set /P c=Are you sure you want to continue[Y/N]? 
-if /I "%c%" EQU "Y" goto :DOCREATE -if /I "%c%" EQU "N" goto :DOEXIT - -:DOCREATE - echo "Running the db script" - REM adding new teneant db need to create db first - IF "%choice%"=="1" ( - echo "Creating db on user input" - db2 CREATE DATABASE %tenant_db_name% AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768 - db2 CONNECT TO %tenant_db_name% - db2 GRANT CONNECT,DATAACCESS ON DATABASE TO USER %tenant_db_user% - db2 GRANT USE OF TABLESPACE USERSPACE1 TO USER %tenant_db_user% - db2 CONNECT RESET - ) - - REM create schema - echo "Connecting to db and creating schema" - db2 CONNECT TO %tenant_db_name% - db2 CREATE SCHEMA %tenant_ontology% - db2 SET SCHEMA %tenant_ontology% - - REM create tables - echo "creating schema tables" - db2 -stvf sql\CreateBacaTables.sql - - REM table permissions to tenant user - echo "Giving permissions on tables" - db2 GRANT ALTER ON TABLE DOC_CLASS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE DOC_ALIAS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE KEY_CLASS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE KEY_ALIAS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE CWORD TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE HEADING TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE HEADING_ALIAS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE USER_DETAIL TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE INTEGRATION TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE IMPORT_ONTOLOGY TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE API_INTEGRATIONS_OBJECTSSTORE TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE SMARTPAGES_OPTIONS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE FONTS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE FONTS_TRANSID TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE DB_BACKUP TO USER %tenant_db_user% - - REM load the tenant Db - echo "Loading default data into tables" - db2 load from CSVFiles\doc_class.csv of del modified by identityoverride insert into doc_class - db2 load from CSVFiles\key_class.csv of del modified by identityoverride insert into key_class - db2 load from CSVFiles\doc_alias.csv of del modified by identityoverride insert into doc_alias - db2 load from CSVFiles\key_alias.csv of del modified by identityoverride insert into key_alias - db2 load from CSVFiles\cword.csv of del modified by identityoverride insert into cword - db2 load from CSVFiles\heading.csv of del modified by identityoverride insert into heading - db2 load from CSVFiles\heading_alias.csv of del modified by identityoverride insert into heading_alias - db2 load from CSVFiles\key_class_dc.csv of del modified by identityoverride insert into key_class_dc - db2 load from CSVFiles\doc_alias_dc.csv of del modified by identityoverride insert into doc_alias_dc - db2 load from CSVFiles\key_alias_dc.csv of del modified by identityoverride insert into key_alias_dc - db2 load from CSVFiles\key_alias_kc.csv of del modified by identityoverride insert into key_alias_kc - db2 load from CSVFiles\heading_dc.csv of del modified by identityoverride insert into heading_dc - db2 load from CSVFiles\heading_alias_dc.csv of del modified by identityoverride insert into heading_alias_dc - db2 load from CSVFiles\heading_alias_h.csv of del modified by identityoverride insert into heading_alias_h - db2 load from CSVFiles\cword_dc.csv of del modified by identityoverride insert into cword_dc - db2 connect reset - - REM Insert InsertTenant - echo "Connecting to base database to insert tenant info" - db2 connect 
to %base_db_name% - db2 set schema %base_db_user% - db2 insert into TENANTINFO (tenantid,ontology,tenanttype,rdbmsengine,bacaversion,rdbmsconnection) values ( '%tenant_id%', '%tenant_ontology%', 0, 'DB2', '1.1', encrypt('DATABASE=%tenant_db_name%;HOSTNAME=%baca_database_server_ip%;PORT=%baca_database_port%;PROTOCOL=TCPIP;UID=%tenant_db_user%;PWD=%tenant_db_pwd%;','AES_KEY')) - db2 connect reset - - REM Insert InsertUser - echo "Connecting to tenant database to insert initial userinfo" - db2 connect to %tenant_db_name% - db2 set schema %tenant_ontology% - db2 insert into user_detail (email,first_name,last_name,user_name,company,expire) values ('%tenant_email%','%tenant_first_name%','%tenant_last_name%','%tenant_user_name%','%tenant_company%',10080) - db2 insert into login_detail (user_id,role,status,logged_in) select user_id,'Admin','1',0 from user_detail where email='%tenant_email%' - db2 connect reset - goto END -:DOEXIT - echo "Exited on user input" - goto END -:END - echo "END" - -ENDLOCAL +@echo off + +SETLOCAL + +IF NOT DEFINED skip_create_tenant_db ( + set skip_create_tenant_db=false +) + +IF "%skip_create_tenant_db%"=="true" ( + echo -- + echo This script will initialize an existing DB2 database for use as a BACA tenant database and add an ontology. + set choice="2" + echo -- +) ELSE ( + echo -- + echo Enter '1' to create an new DB2 database and initialize the database as a tenant DB and create an ontology. An existing database user must exist. + echo Enter '2' to add an ontology for an existing tenant database. + echo Enter '3' to abort. + + set /p choice="Type input: " +) + + +if /I "%choice%" EQU "3" goto :DOEXIT + +set /p tenant_id= Enter the tenant ID for the new tenant: (eg. t4900) : + +IF NOT "%skip_create_tenant_db%"=="true" ( + set /p tenant_db_name= "Enter the name of the new DB2 database to create for the BACA tenant. Please follow the DB2 naming rules :" +) ELSE ( + set /p tenant_db_name= "Enter the name of the existing DB2 database to use for the BACA tenant database (eg. t4900) :" +) +set tenant_dsn_name=%tenant_db_name% + +set /p baca_database_server_ip= "Enter the host/IP of the DB2 database server for the tenant database. :" + +set /p baca_database_port= "Enter the port of the DB2 database server for the tenant database :" + +set /p tenant_db_user= "Please enter the name of tenant database user. If no value is entered we will use the following default value 'tenantuser' :" +IF NOT DEFINED tenant_db_user SET "tenant_db_user=tenantuser" + +REM Use powershell to mask password +set "psCommand=powershell -Command "$pword = read-host 'Enter the password for the tenant database user:' -AsSecureString ; ^ + $BSTR=[System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($pword); ^ + [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)"" +for /f "usebackq delims=" %%p in (`%psCommand%`) do set tenant_db_pwd=%%p +REM Alternative way to prompt for pwd without masking +REM set /p tenant_db_pwd= "Enter the password for the tenant database user:" + +set /p tenant_ontology= "Enter the tenant ontology name. If nothing is entered, the default name will be used 'default' :" +IF NOT DEFINED tenant_ontology SET "tenant_ontology=default" + +set /p base_db_name= "Enter the name of the DB2 BACA Base database with the TENANTINFO Table. If nothing is entered, we will use the following default value 'CABASEDB': " +IF NOT DEFINED base_db_name SET "base_db_name=CABASEDB" + +set /p base_db_user= "Enter the name of the database user for the Base BACA database. 
If nothing is entered, we will use the following default value 'CABASEUSER' : " +IF NOT DEFINED base_db_user SET "base_db_user=CABASEUSER" + +set /p tenant_company= "Please enter the company name for the initial BACA user :" + +set /p tenant_first_name= "Please enter the first name for the initial BACA user :" + +set /p tenant_last_name= "Please enter the last name for the initial BACA user :" + +set /p tenant_email= "Please enter a valid email address for the initial BACA user : " + +set /p tenant_user_name= "Please enter the login name for the initial BACA user (IMPORTANT: if you are using LDAP, you must use the LDAP user name):" + +IF NOT DEFINED rdbmsconnection SET "rdbmsconnection=DSN=%tenant_dsn_name%;UID=%tenant_db_user%;PWD=%tenant_db_pwd%;" +set /p ssl= "Please enter if database is enabled for SSL default is false [Y/N] :" +if /I "%ssl%" EQU "Y" ( + SET rdbmsconnection=%rdbmsconnection%Security=SSL; +) +echo "-- Please confirm these are the desired settings:" +echo " - tenant ID: %tenant_id%" +echo " - tenant database name: %tenant_db_name%" +echo " - database server hostname/IP: %baca_database_server_ip%" +echo " - database server port: %baca_database_port%" +echo " - tenant database user: %tenant_db_user%" +echo " - ontology name: %tenant_ontology%" +echo " - base database: %base_db_name%" +echo " - base database user: %base_db_user%" +echo " - tenant company name: %tenant_company%" +echo " - tenant first name: %tenant_first_name%" +echo " - tenant last name: %tenant_last_name%" +echo " - tenant email address: %tenant_email%" +echo " - tenant login name: %tenant_user_name%" +echo " - tenant ssl: %ssl%" + +set /P c=Are you sure you want to continue[Y/N]? +if /I "%c%" EQU "Y" goto :DOCREATE +if /I "%c%" EQU "N" goto :DOEXIT + +:DOCREATE + echo "Running the db script" + REM adding new teneant db need to create db first + IF "%choice%"=="1" ( + echo "Creating database" + db2 CREATE DATABASE %tenant_db_name% AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768 + db2 CONNECT TO %tenant_db_name% + db2 GRANT CONNECT,DATAACCESS ON DATABASE TO USER %tenant_db_user% + db2 GRANT USE OF TABLESPACE USERSPACE1 TO USER %tenant_db_user% + db2 CONNECT RESET + ) + + REM create schema + echo -- + echo "Connecting to db and creating schema" + db2 CONNECT TO %tenant_db_name% + db2 CREATE SCHEMA %tenant_ontology% + db2 SET SCHEMA %tenant_ontology% + + REM create tables + echo -- + echo "Creating BACA tables" + db2 -stvf sql\CreateBacaTables.sql + + REM table permissions to tenant user + echo -- + echo "Giving permissions on tables" + db2 GRANT ALTER ON TABLE DOC_CLASS TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE DOC_ALIAS TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE KEY_CLASS TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE KEY_ALIAS TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE CWORD TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE HEADING TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE HEADING_ALIAS TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE USER_DETAIL TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE INTEGRATION TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE IMPORT_ONTOLOGY TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE API_INTEGRATIONS_OBJECTSSTORE TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE SMARTPAGES_OPTIONS TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE FONTS TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE FONTS_TRANSID TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE DB_BACKUP TO 
USER %tenant_db_user% + db2 GRANT ALTER ON TABLE PATTERN TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE DOCUMENT TO USER %tenant_db_user% + db2 GRANT ALTER ON TABLE TRAINING_LOG TO USER %tenant_db_user% + + REM load the tenant Db + echo "Loading default data into tables" + db2 load from CSVFiles\doc_class.csv of del insert into doc_class + db2 load from CSVFiles\key_class.csv of del modified by identityoverride insert into key_class + db2 load from CSVFiles\doc_alias.csv of del modified by identityoverride insert into doc_alias + db2 load from CSVFiles\key_alias.csv of del modified by identityoverride insert into key_alias + db2 load from CSVFiles\cword.csv of del modified by identityoverride insert into cword + db2 load from CSVFiles\heading.csv of del modified by identityoverride insert into heading + db2 load from CSVFiles\heading_alias.csv of del modified by identityoverride insert into heading_alias + db2 load from CSVFiles\key_class_dc.csv of del modified by identityoverride insert into key_class_dc + db2 load from CSVFiles\doc_alias_dc.csv of del modified by identityoverride insert into doc_alias_dc + db2 load from CSVFiles\key_alias_dc.csv of del modified by identityoverride insert into key_alias_dc + db2 load from CSVFiles\key_alias_kc.csv of del modified by identityoverride insert into key_alias_kc + db2 load from CSVFiles\heading_dc.csv of del modified by identityoverride insert into heading_dc + db2 load from CSVFiles\heading_alias_dc.csv of del modified by identityoverride insert into heading_alias_dc + db2 load from CSVFiles\heading_alias_h.csv of del modified by identityoverride insert into heading_alias_h + db2 load from CSVFiles\cword_dc.csv of del modified by identityoverride insert into cword_dc + + echo -- + echo "SET INTEGRITY ..." + db2 set integrity for key_class_dc immediate checked + db2 set integrity for doc_alias_dc immediate checked + db2 set integrity for key_alias_dc immediate checked + db2 set integrity for key_alias_kc immediate checked + db2 set integrity for heading_dc immediate checked + db2 set integrity for heading_alias_dc immediate checked + db2 set integrity for heading_alias_h immediate checked + db2 set integrity for cword_dc immediate checked + + echo -- + echo "ALTER TABLE ..." 
+ db2 alter table doc_class alter column doc_class_id restart with 10 + db2 alter table doc_alias alter column doc_alias_id restart with 11 + db2 alter table key_class alter column key_class_id restart with 202 + db2 alter table key_alias alter column key_alias_id restart with 239 + db2 alter table cword alter column cword_id restart with 76 + db2 alter table heading alter column heading_id restart with 3 + db2 alter table heading_alias alter column heading_alias_id restart with 3 + + db2 connect reset + + REM Insert InsertTenant + echo -- + echo "Connecting to base database to insert tenant info" + db2 connect to %base_db_name% + db2 set schema %base_db_user% + db2 insert into TENANTINFO (tenantid,ontology,tenanttype,dailylimit,rdbmsengine,bacaversion,rdbmsconnection,dbname,dbuser,tenantdbversion) values ( '%tenant_id%', '%tenant_ontology%', 0, 0, 'DB2', '1.3', encrypt('%rdbmsconnection%','AES_KEY'),'%tenant_db_name%','%tenant_db_user%','1.3') + db2 connect reset + + REM Insert InsertUser + echo -- + echo "Connecting to tenant database to insert initial userinfo" + db2 connect to %tenant_db_name% + db2 set schema %tenant_ontology% + db2 insert into user_detail (email,first_name,last_name,user_name,company,expire) values ('%tenant_email%','%tenant_first_name%','%tenant_last_name%','%tenant_user_name%','%tenant_company%',10080) + db2 insert into login_detail (user_id,role,status,logged_in) select user_id,'Admin','1',0 from user_detail where email='%tenant_email%' + db2 connect reset + goto END +:DOEXIT + echo "Exited on user input" + goto END +:END + SET skip_create_tenant_db= + echo "END" + +ENDLOCAL diff --git a/BACA/configuration/DB2/AddTenant.sh b/ACA/configuration-ha/DB2/AddTenant.sh similarity index 54% rename from BACA/configuration/DB2/AddTenant.sh rename to ACA/configuration-ha/DB2/AddTenant.sh index 1f17c071..4f012f24 100755 --- a/BACA/configuration/DB2/AddTenant.sh +++ b/ACA/configuration-ha/DB2/AddTenant.sh @@ -1,4 +1,10 @@ #!/bin/bash + +# NOTES: +# This script will create a DB2 database and initialize the database for a Content Analyzer tenant and load it with default data. +# If you prefer to create your own database, and only want the script to initialize the existing database, +# please exit this script and run 'InitTenantDB.sh'. + . ./ScriptFunctions.sh INPUT_PROPS_FILENAME="./common_for_DB2.sh" @@ -16,13 +22,24 @@ if [[ "$NUMARGS" -gt 0 ]]; then use_existing_tenant=$1 fi - if [[ -z "$use_existing_tenant" || $use_existing_tenant -ne 1 ]]; then - echo -e "\n-- This script will create a BACA database and an ontology for a new tenant and load it with default data" - echo + if [[ -z "$tenant_db_exists" || $tenant_db_exists != "true" ]]; then + echo + echo "==================================================" + echo + echo -e "\nThis script will create a DB2 database and initialize the database for a Content Analyzer tenant and load it with default data." + echo + echo -e "If you prefer to create your own database, and only want the script to initialize the existing database, please exit this script and run 'InitTenantDB.sh'." + echo + echo "==================================================" + echo + else + echo -e "\n-- This script will initialize an existing database for a Content Analyzer tenant and load it with default data" + echo + fi fi -if [[ -z "$use_existing_tenant" || $use_existing_tenant -ne 1 ]]; then +if [[ -z "$use_existing_tenant" || $use_existing_tenant -ne 1 ]]; then echo "Enter the tenant ID for the new tenant: (eg. 
t4900)" else echo "Enter the tenant ID for the existing tenant: (eg. t4900)" @@ -38,7 +55,7 @@ if [[ -z "$use_existing_tenant" || $use_existing_tenant -ne 1 ]]; then while [[ $tenant_type == '' || $tenant_type != "0" && $tenant_type != "1" && $tenant_type != "2" ]] # While tenant_type is not valid/set do - echo -e "\n\x1B[1;31mEnter the tenanttype\x1B[0m" + echo -e "\n\x1B[1;31mEnter the tenant type\x1B[0m" echo -e "\x1B[1;31mChoose the number equivalent.\x1B[0m" echo -e "\x1B[1;34m0. Enterprise\x1B[0m" echo -e "\x1B[1;34m1. Trial\x1B[0m" @@ -57,10 +74,10 @@ fi echo -if [[ -z "$use_existing_tenant" || $use_existing_tenant -ne 1 ]]; then - echo "Enter the name of the new BACA tenant database to create: (eg. t4900)" +if [[ -z "$tenant_db_exists" || $tenant_db_exists != "true" ]]; then + echo "Enter the name of the new Content Analyzer Tenant database to create: " else - echo "Enter the name of the existing BACA tenant database: (eg. t4900)" + echo "Enter the name of an existing DB2 database to be used as the Content Analyzer Tenant database: " fi while [[ $tenant_db_name == '' ]] do @@ -74,23 +91,35 @@ do done done -if [[ -z "$baca_database_server_ip" ]]; then - echo -e "\nEnter the host/IP of the database server: " - read baca_database_server_ip +default_dsn_name=$tenant_db_name +if [[ -z "$tenant_dsn_name" ]]; then + echo -e "\nEnter the data source name. This will generally be same name as the " + echo -e "database name unless you specifiy a different value in the 'db2dsdriver.cfg'. " + echo -e "If nothing is entered, we will use the following default value : " $default_dsn_name + read tenant_dsn_name + if [[ -z "$tenant_dsn_name" ]]; then + tenant_dsn_name=$default_dsn_name + fi fi -default_dbport=50000 -if [[ -z "$baca_database_port" ]]; then - echo -e "\nEnter the port of the database server. If nothing is entered we will use the following default value: " $default_dbport - read baca_database_port - if [[ -z "$baca_database_port" ]]; then - baca_database_port=$default_dbport - fi -fi +# if [[ -z "$baca_database_server_ip" ]]; then +# echo -e "\nEnter the host/IP of the database server: " +# read baca_database_server_ip +# fi + +# default_dbport=50000 +# if [[ -z "$baca_database_port" ]]; then +# echo -e "\nEnter the port of the database server. If nothing is entered we will use the following default value: " $default_dbport +# read baca_database_port +# if [[ -z "$baca_database_port" ]]; then +# baca_database_port=$default_dbport +# fi +# fi default_ssl='No' if [[ -z "$ssl" ]]; then - echo -e "\nWould you like to enable SSL to communicate with DB2 server? If nothing is entered we will use the default value: " $default_ssl + echo -e "\nWould you like to enable SSL to communicate with DB2 server? (Please note that additional setup steps are required in order to use SSL with DB2.)" + echo -e "Please enter 'Yes' or 'No'. If nothing is entered we will use the default value of '" $default_ssl "'" read ssl if [[ -z "$ssl" ]]; then ssl=$default_ssl @@ -102,7 +131,7 @@ if [[ $use_existing_tenant -eq 1 ]]; then fi echo -echo "We need a non-admin database user that BACA will use to access your BACA tenant database." +echo "We need a non-admin database user that Content Analyzer will use to access your Content Analyzer Tenant database." 
while [[ -z "$tenant_db_user" || $tenant_db_user == "" ]] do echo @@ -125,7 +154,7 @@ do if [[ "$create_new_user" == "y" || "$create_new_user" = "Y" ]]; then echo "Please enter the name of database user to create: " else - echo "Please enter the name of an existing database user" + echo "Please enter the name of an existing database user with read and write privileges for the Content Analyzer Tenant database: " fi read tenant_db_user done @@ -188,7 +217,8 @@ fi default_basedb='BASECA' if [[ -z "$base_db_name" ]]; then - echo -e "\nEnter the name of the Base BACA database with the TENANTINFO Table. If nothing is entered, we will use the following default value : " $default_basedb + echo -e "\n-- Content Analyzer Base database info: --" + echo -e "\nEnter the name of the Base Content Analyzer Base database. If nothing is entered, we will use the following default value : " $default_basedb read base_db_name if [[ -z "$base_db_name" ]]; then base_db_name=$default_basedb @@ -197,7 +227,7 @@ fi default_basedb_user='CABASEUSER' if [[ -z "$base_db_user" ]]; then - echo -e "\nEnter the name of the database user for the Base BACA database. If nothing is entered, we will use the following default value : " $default_basedb_user + echo -e "\nEnter the name of the database user for the Content Analyzer Base database. If nothing is entered, we will use the following default value : " $default_basedb_user read base_db_user if [[ -z "$base_db_user" ]]; then base_db_user=$default_basedb_user @@ -210,7 +240,7 @@ fi # pwdconfirmed=0 # while [[ $pwdconfirmed -ne 1 ]] # While pwd is not yet received and confirmed (i.e. entered teh same time twice) # do -# echo "Enter the password for the BACA base database user: " +# echo "Enter the password for the Content Analyzer base database user: " # read -s base_tenant_db_pwd # while [[ $base_tenant_db_pwd == '' ]] # While pwd is empty... # do @@ -236,39 +266,39 @@ fi # done echo -echo "Now we will gather information about the initial BACA user that will be defined:" +echo "Now we will gather information about the initial Content Analyzer login user" while [[ $tenant_company == '' ]] do - echo -e "\nPlease enter the company name for the initial BACA user:" + echo -e "\nPlease enter the company name for the initial Content Analyzer user:" read tenant_company done while [[ $tenant_first_name == '' ]] do - echo -e "\nPlease enter the first name for the initial BACA user:" + echo -e "\nPlease enter the first name for the initial Content Analyzer user:" read tenant_first_name done while [[ $tenant_last_name == '' ]] do - echo -e "\nPlease enter the last name for the initial BACA user:" + echo -e "\nPlease enter the last name for the initial Content Analyzer user:" read tenant_last_name done while [[ $tenant_email == '' || ! $tenant_email =~ ^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}$ ]] do - echo -e "\nPlease enter a valid email address for the initial BACA user:" + echo -e "\nPlease enter a valid email address for the initial Content Analyzer user:" read tenant_email done while [[ $tenant_user_name == '' ]] do - echo -e "\nPlease enter the login name for the initial BACA user:" + echo -e "\nPlease enter the login name for the initial Content Analyzer user. 
(IMPORTANT: if you are using LDAP, the login name must be the same as your LDAP username.)" read tenant_user_name done @@ -280,8 +310,8 @@ if [[ $use_existing_tenant -eq 1 ]]; then daily_limit=$(echo $resp | awk '{print $2}') fi -rdbmsconnection="DATABASE=$tenant_db_name;HOSTNAME=$baca_database_server_ip;PORT=$baca_database_port;PROTOCOL=TCPIP;UID=$tenant_db_user;PWD=$tenant_db_pwd;" -if [[ "$ssl" == "Yes" || "$ssl" == "y" || "$ssl" == "Y" ]]; then +rdbmsconnection="DSN=$tenant_dsn_name;UID=$tenant_db_user;PWD=$tenant_db_pwd;" +if [[ "$ssl" == "Yes" || "$ssl" == "yes" || "$ssl" == "YES" || "$ssl" == "y" || "$ssl" == "Y" ]]; then echo rdbmsconnection+="Security=SSL;" echo "--- with SSL rdbstring : " $rdbmsconnection @@ -298,8 +328,8 @@ echo " - tenant ID: $tenant_id" echo " - tenant type: $tenant_type" echo " - daily limit: $daily_limit" echo " - tenant database name: $tenant_db_name" -echo " - database server hostname/IP: $baca_database_server_ip" -echo " - database server port: $baca_database_port" +# echo " - database server hostname/IP: $baca_database_server_ip" +# echo " - database server port: $baca_database_port" echo " - database enabled for ssl : $ssl" if [[ $user_already_defined -ne 1 ]]; then echo " - tenant database user will be created by this script" @@ -331,10 +361,37 @@ if [[ $user_already_defined -ne 1 ]]; then sudo chage -E -1 -M -1 $tenant_db_user fi +# -------- convert certain variables to lower-case to standardize ---- +if [[ ! -z "$tenant_db_exists" ]]; then + tenant_db_exists=$(echo "$tenant_db_exists" | tr '[:upper:]' '[:lower:]') +fi + +if [[ ! -z "$skip_setup_schema" ]]; then + skip_setup_schema=$(echo "$skip_setup_schema" | tr '[:upper:]' '[:lower:]') +fi + +if [[ ! -z "$skip_load_data" ]]; then + skip_load_data=$(echo "$skip_load_data" | tr '[:upper:]' '[:lower:]') +fi + +if [[ ! -z "$skip_set_integrity" ]]; then + skip_set_integrity=$(echo "$skip_set_integrity" | tr '[:upper:]' '[:lower:]') +fi + +if [[ ! -z "$skip_insert_tenant" ]]; then + skip_insert_tenant=$(echo "$skip_insert_tenant" | tr '[:upper:]' '[:lower:]') +fi + +if [[ !
-z "$skip_insert_user" ]]; then + skip_insert_user=$(echo "$skip_insert_user" | tr '[:upper:]' '[:lower:]') +fi +# ----- end convert variables ------ + + # Only create DB for new tenants if [[ $use_existing_tenant -ne 1 ]]; then # allow using existing DB if the flag "tenant_db_exists" is true - if [[ -z "$tenant_db_exists" || $tenant_db_exists == "false" ]]; then + if [[ -z "$tenant_db_exists" || $tenant_db_exists != "true" ]]; then cp sql/CreateDB.sql.template sql/CreateDB.sql sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/CreateDB.sql sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/CreateDB.sql @@ -344,60 +401,76 @@ if [[ $use_existing_tenant -ne 1 ]]; then fi fi -cp sql/CreateBacaSchema.sql.template sql/CreateBacaSchema.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/CreateBacaSchema.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/CreateBacaSchema.sql -echo -e "\nRunning script: sql/CreateBacaSchema.sql" -db2 -stvf sql/CreateBacaSchema.sql - -echo -e "\nRunning script: sql/CreateBacaTables.sql" -db2 -tf sql/CreateBacaTables.sql -echo "CONNECT RESET" -db2 "CONNECT RESET" - -cp sql/TablePermissions.sql.template sql/TablePermissions.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/TablePermissions.sql -sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/TablePermissions.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/TablePermissions.sql -echo -e "\nRunning script: sql/TablePermissions.sql" -db2 -stvf sql/TablePermissions.sql - -cp sql/LoadData.sql.template sql/LoadData.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/LoadData.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/LoadData.sql -echo -e "\nRunning script: sql/LoadData.sql" -db2 -stvf sql/LoadData.sql - -cp sql/InsertTenant.sql.template sql/InsertTenant.sql -sed -i s/\$base_db_name/"$base_db_name"/ sql/InsertTenant.sql -sed -i s/\$base_db_user/"$base_db_user"/ sql/InsertTenant.sql -sed -i s/\$tenant_id/"$tenant_id"/ sql/InsertTenant.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/InsertTenant.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/InsertTenant.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/InsertTenant.sql -sed -i s/\$baca_database_server_ip/"$baca_database_server_ip"/ sql/InsertTenant.sql -sed -i s/\$baca_database_port/"$baca_database_port"/ sql/InsertTenant.sql -sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/InsertTenant.sql -sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/InsertTenant.sql -sed -i s/\$tenant_db_pwd/"$tenant_db_pwd"/ sql/InsertTenant.sql -sed -i s/\$tenant_type/"$tenant_type"/ sql/InsertTenant.sql -sed -i s/\$daily_limit/"$daily_limit"/ sql/InsertTenant.sql -sed -i s/\$rdbmsconnection/"$rdbmsconnection"/ sql/InsertTenant.sql -echo -e "\nRunning script: sql/InsertTenant.sql" -db2 -stvf sql/InsertTenant.sql - - -cp sql/InsertUser.sql.template sql/InsertUser.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/InsertUser.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/InsertUser.sql -sed -i s/\$tenant_email/"$tenant_email"/ sql/InsertUser.sql -sed -i s/\$tenant_first_name/"$tenant_first_name"/ sql/InsertUser.sql -sed -i s/\$tenant_last_name/"$tenant_last_name"/ sql/InsertUser.sql -sed -i s/\$tenant_user_name/"$tenant_user_name"/ sql/InsertUser.sql -sed -i s/\$tenant_company/"$tenant_company"/ sql/InsertUser.sql -sed -i s/\$tenant_email/"$tenant_email"/ sql/InsertUser.sql -echo -e "\nRunning script: sql/InsertUser.sql" -db2 -stvf sql/InsertUser.sql +if [[ -z "$skip_setup_schema" || $skip_setup_schema != "true" ]]; then + cp 
sql/CreateBacaSchema.sql.template sql/CreateBacaSchema.sql + sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/CreateBacaSchema.sql + sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/CreateBacaSchema.sql + echo -e "\nRunning script: sql/CreateBacaSchema.sql" + db2 -stvf sql/CreateBacaSchema.sql + + echo -e "\nRunning script: sql/CreateBacaTables.sql" + db2 -tf sql/CreateBacaTables.sql + echo "CONNECT RESET" + db2 "CONNECT RESET" + + cp sql/TablePermissions.sql.template sql/TablePermissions.sql + sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/TablePermissions.sql + sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/TablePermissions.sql + sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/TablePermissions.sql + echo -e "\nRunning script: sql/TablePermissions.sql" + db2 -stvf sql/TablePermissions.sql +fi + +if [[ -z "$skip_load_data" || $skip_load_data != "true" ]]; then + cp sql/LoadData.sql.template sql/LoadData.sql + sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/LoadData.sql + sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/LoadData.sql + echo -e "\nRunning script: sql/LoadData.sql" + db2 -stvf sql/LoadData.sql +fi + + +if [[ -z "$skip_insert_tenant" || $skip_insert_tenant != "true" ]]; then + cp sql/InsertTenant.sql.template sql/InsertTenant.sql + sed -i s/\$base_db_name/"$base_db_name"/ sql/InsertTenant.sql + sed -i s/\$base_db_user/"$base_db_user"/ sql/InsertTenant.sql + sed -i s/\$tenant_id/"$tenant_id"/ sql/InsertTenant.sql + sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/InsertTenant.sql + sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/InsertTenant.sql + # sed -i s/\$baca_database_server_ip/"$baca_database_server_ip"/ sql/InsertTenant.sql + # sed -i s/\$baca_database_port/"$baca_database_port"/ sql/InsertTenant.sql + sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/InsertTenant.sql + sed -i s/\$tenant_db_pwd/"$tenant_db_pwd"/ sql/InsertTenant.sql + sed -i s/\$tenant_type/"$tenant_type"/ sql/InsertTenant.sql + sed -i s/\$daily_limit/"$daily_limit"/ sql/InsertTenant.sql + sed -i s/\$rdbmsconnection/"$rdbmsconnection"/ sql/InsertTenant.sql + echo -e "\nRunning script: sql/InsertTenant.sql" + db2 -stf sql/InsertTenant.sql +fi + + +if [[ -z "$skip_set_integrity" || $skip_set_integrity != "true" ]]; then + cp sql/SetIntegrity.sql.template sql/SetIntegrity.sql + sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/SetIntegrity.sql + sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/SetIntegrity.sql + echo -e "\nRunning script: sql/SetIntegrity.sql" + db2 -stvf sql/SetIntegrity.sql +fi + + +if [[ -z "$skip_insert_user" || $skip_insert_user != "true" ]]; then + cp sql/InsertUser.sql.template sql/InsertUser.sql + sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/InsertUser.sql + sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/InsertUser.sql + sed -i s/\$tenant_email/"$tenant_email"/ sql/InsertUser.sql + sed -i s/\$tenant_first_name/"$tenant_first_name"/ sql/InsertUser.sql + sed -i s/\$tenant_last_name/"$tenant_last_name"/ sql/InsertUser.sql + sed -i s/\$tenant_user_name/"$tenant_user_name"/ sql/InsertUser.sql + sed -i s/\$tenant_company/"$tenant_company"/ sql/InsertUser.sql + sed -i s/\$tenant_email/"$tenant_email"/ sql/InsertUser.sql + echo -e "\nRunning script: sql/InsertUser.sql" + db2 -stvf sql/InsertUser.sql +fi echo -e "\n-- Add completed succesfully. 
Tenant ID: $tenant_id , Ontology: $tenant_ontology \n" diff --git a/BACA/configuration-ha/DB2/CSVFiles/cword.csv b/ACA/configuration-ha/DB2/CSVFiles/cword.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/cword.csv rename to ACA/configuration-ha/DB2/CSVFiles/cword.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/cword_dc.csv b/ACA/configuration-ha/DB2/CSVFiles/cword_dc.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/cword_dc.csv rename to ACA/configuration-ha/DB2/CSVFiles/cword_dc.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/doc_alias.csv b/ACA/configuration-ha/DB2/CSVFiles/doc_alias.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/doc_alias.csv rename to ACA/configuration-ha/DB2/CSVFiles/doc_alias.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/doc_alias_dc.csv b/ACA/configuration-ha/DB2/CSVFiles/doc_alias_dc.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/doc_alias_dc.csv rename to ACA/configuration-ha/DB2/CSVFiles/doc_alias_dc.csv diff --git a/ACA/configuration-ha/DB2/CSVFiles/doc_class.csv b/ACA/configuration-ha/DB2/CSVFiles/doc_class.csv new file mode 100644 index 00000000..57170b28 --- /dev/null +++ b/ACA/configuration-ha/DB2/CSVFiles/doc_class.csv @@ -0,0 +1,10 @@ +0,__root,Reserved document class,0 +1,Balance Statement,This is a Sample,0 +2,Bill of Lading,This is a Sample,0 +3,Estimates,This is a Sample,0 +4,Invoice,This is a Sample,0 +5,Letter,This is a Sample,0 +6,Medical Record,This is a Sample,0 +7,Police Report,This is a Sample,0 +8,Power of Attorney,This is a Sample,0 +9,Pricing Schedule,This is a Sample,0 diff --git a/BACA/configuration-ha/DB2/CSVFiles/heading.csv b/ACA/configuration-ha/DB2/CSVFiles/heading.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/heading.csv rename to ACA/configuration-ha/DB2/CSVFiles/heading.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/heading_alias.csv b/ACA/configuration-ha/DB2/CSVFiles/heading_alias.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/heading_alias.csv rename to ACA/configuration-ha/DB2/CSVFiles/heading_alias.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/heading_alias_dc.csv b/ACA/configuration-ha/DB2/CSVFiles/heading_alias_dc.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/heading_alias_dc.csv rename to ACA/configuration-ha/DB2/CSVFiles/heading_alias_dc.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/heading_alias_h.csv b/ACA/configuration-ha/DB2/CSVFiles/heading_alias_h.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/heading_alias_h.csv rename to ACA/configuration-ha/DB2/CSVFiles/heading_alias_h.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/heading_dc.csv b/ACA/configuration-ha/DB2/CSVFiles/heading_dc.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/heading_dc.csv rename to ACA/configuration-ha/DB2/CSVFiles/heading_dc.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/key_alias.csv b/ACA/configuration-ha/DB2/CSVFiles/key_alias.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/key_alias.csv rename to ACA/configuration-ha/DB2/CSVFiles/key_alias.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/key_alias_dc.csv b/ACA/configuration-ha/DB2/CSVFiles/key_alias_dc.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/key_alias_dc.csv rename to ACA/configuration-ha/DB2/CSVFiles/key_alias_dc.csv diff --git 
a/BACA/configuration-ha/DB2/CSVFiles/key_alias_kc.csv b/ACA/configuration-ha/DB2/CSVFiles/key_alias_kc.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/key_alias_kc.csv rename to ACA/configuration-ha/DB2/CSVFiles/key_alias_kc.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/key_class.csv b/ACA/configuration-ha/DB2/CSVFiles/key_class.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/key_class.csv rename to ACA/configuration-ha/DB2/CSVFiles/key_class.csv diff --git a/BACA/configuration-ha/DB2/CSVFiles/key_class_dc.csv b/ACA/configuration-ha/DB2/CSVFiles/key_class_dc.csv similarity index 100% rename from BACA/configuration-ha/DB2/CSVFiles/key_class_dc.csv rename to ACA/configuration-ha/DB2/CSVFiles/key_class_dc.csv diff --git a/ACA/configuration-ha/DB2/CreateBaseDB.bat b/ACA/configuration-ha/DB2/CreateBaseDB.bat new file mode 100755 index 00000000..89d93e46 --- /dev/null +++ b/ACA/configuration-ha/DB2/CreateBaseDB.bat @@ -0,0 +1,56 @@ +@echo off +SETLOCAL + +IF NOT DEFINED skip_create_base_db ( + set skip_create_base_db=false +) + +IF "%skip_create_base_db%"=="true" ( + echo -- + echo This script will initialize an existing DB2 database for use as a BACA base database. + echo -- +) ELSE ( + echo -- + echo This script will create and initialize a new DB2 database for use as a BACA base database. An existing database user must exist. + echo -- +) + + +set /p base_db_name= Enter the name of the Base BACA database. If nothing is entered, we will use the following default value 'CABASEDB': +IF NOT DEFINED base_db_name SET "base_db_name=CABASEDB" + +set /p base_db_user= Enter the name of the database user for the Base BACA database. If nothing is entered, we will use the following default value 'CABASEUSER' : +IF NOT DEFINED base_db_user SET "base_db_user=CABASEUSER" + +set /P c=Are you sure you want to continue[Y/N]? +if /I "%c%" EQU "N" goto :DOEXIT + +IF "%skip_create_base_db%"=="true" ( + goto :DOCREATETABLE +) ELSE ( + goto :DOCREATE +) + +:DOCREATE + echo "Creating a database...." + db2 CREATE DATABASE %base_db_name% AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768 + db2 CONNECT TO %base_db_name% + db2 GRANT CONNECT,DATAACCESS ON DATABASE TO USER %base_db_user% + db2 GRANT USE OF TABLESPACE USERSPACE1 TO USER %base_db_user% + db2 CONNECT RESET + goto DOCREATETABLE +:DOCREATETABLE + db2 CONNECT TO %base_db_name% + db2 SET SCHEMA %base_db_user% + echo "Creating table TENANTINFO...." 
+ db2 CREATE TABLE TENANTINFO (tenantid varchar(128) NOT NULL,ontology varchar(128) not null,tenanttype smallint not null with default,dailylimit smallint not null with default 0,rdbmsengine varchar(128) not null,dbname varchar(255) not null,dbuser varchar(255) not null,bacaversion varchar(1024) not null,rdbmsconnection varchar(1024) for bit data default null,mongoconnection varchar(1024) for bit data default null,mongoadminconnection varchar(1024) for bit data default null,featureflags bigint not null with default 0,tenantdbversion varchar(255),CONSTRAINT tenantinfo_pkey PRIMARY KEY (tenantid, ontology) ) + db2 CONNECT RESET + goto END +:DOEXIT + echo "Exited on user input" + goto END +:END + set skip_create_base_db= + echo "END" + +ENDLOCAL \ No newline at end of file diff --git a/BACA/configuration/DB2/CreateBaseDB.sh b/ACA/configuration-ha/DB2/CreateBaseDB.sh similarity index 76% rename from BACA/configuration/DB2/CreateBaseDB.sh rename to ACA/configuration-ha/DB2/CreateBaseDB.sh index c0cd4a41..b60b688e 100755 --- a/BACA/configuration/DB2/CreateBaseDB.sh +++ b/ACA/configuration-ha/DB2/CreateBaseDB.sh @@ -1,5 +1,10 @@ #!/bin/bash +# NOTES: +# This script will create a new DB2 database to be used as the Content Analyzer Base database and initialize the database. +# If you prefer to create your own database, and only want the script to initialize the existing database, +# please exit this script and run 'InitBaseDB.sh'." + . ./ScriptFunctions.sh INPUT_PROPS_FILENAME="./common_for_DB2.sh" @@ -10,12 +15,26 @@ if [ -f $INPUT_PROPS_FILENAME ]; then fi default_basedb='BASECA' -echo -e "\n-- This script will create the BACA Base database." + if [[ -z "$base_db_name" ]]; then - echo -e "\nEnter the name of the BACA Base database to create. (The name must be 8 chars or less). If nothing is entered, we will use this default value : " $default_basedb + echo + if [[ -z "$base_db_exists" || $base_db_exists == "false" ]]; then + echo + echo "==================================================" + echo + echo -e "This script will create a new DB2 database to be used as the Content Analyzer Base database and initialize the database." + echo + echo -e "If you prefer to create your own database, and only want the script to initialize the existing database, please exit this script and run 'InitBaseDB.sh'." + echo + echo "==================================================" + echo + echo -e "\nEnter the name of the database to create. (The name must be 8 chars or less). If nothing is entered, we will use this default value : " $default_basedb + else + echo -e "\nEnter the name of an existing DB2 database to initialize as the Content Analyzer Base database." + fi read base_db_name - if [[ -z "$base_db_name" ]]; then + if [[ -z "$base_db_name" && $base_db_exists != "true" ]]; then base_db_name=$default_basedb fi while [ ${#base_db_name} -gt 8 ]; @@ -54,7 +73,7 @@ do if [[ $base_user_already_defined -ne 1 ]]; then echo "Please enter the name of database user to create: " else - echo "Please enter the name of an existing database user:" + echo "Please enter the name of an existing database user with read and write privileges for this database:" fi read base_db_user done @@ -108,7 +127,7 @@ do done echo -echo "-- Information gathering is completed. Create base DB is about to begin." +echo "-- Information gathering is completed. Script execution is starting ...." 
askForConfirmation if [[ $db_user_pwd_b64_encoded -eq 1 ]]; then diff --git a/BACA/configuration-ha/DB2/DeleteOntology.sh b/ACA/configuration-ha/DB2/DeleteOntology.sh similarity index 100% rename from BACA/configuration-ha/DB2/DeleteOntology.sh rename to ACA/configuration-ha/DB2/DeleteOntology.sh diff --git a/BACA/configuration-ha/DB2/DeleteTenant.sh b/ACA/configuration-ha/DB2/DeleteTenant.sh similarity index 100% rename from BACA/configuration-ha/DB2/DeleteTenant.sh rename to ACA/configuration-ha/DB2/DeleteTenant.sh diff --git a/ACA/configuration-ha/DB2/InitBaseDB.bat b/ACA/configuration-ha/DB2/InitBaseDB.bat new file mode 100755 index 00000000..72325aaf --- /dev/null +++ b/ACA/configuration-ha/DB2/InitBaseDB.bat @@ -0,0 +1,4 @@ +SET skip_create_base_db=true + +CreateBaseDB.bat + diff --git a/ACA/configuration-ha/DB2/InitBaseDB.sh b/ACA/configuration-ha/DB2/InitBaseDB.sh new file mode 100755 index 00000000..92bccdd3 --- /dev/null +++ b/ACA/configuration-ha/DB2/InitBaseDB.sh @@ -0,0 +1,20 @@ +#!/bin/bash + +echo +echo "==================================================" +echo +echo "This script will initialize an existing DB2 database to be used as the Content Analyzer Base database." +echo +echo "If you want the script to create a DB2 database for you, please exit this script and run 'CreateBaseDB.sh' instead." +echo +echo "==================================================" +echo + +# to skip creating user +export create_new_base_user=n + +# To skip creating base DB +export base_db_exists=true + +./CreateBaseDB.sh + diff --git a/ACA/configuration-ha/DB2/InitTenantDB.bat b/ACA/configuration-ha/DB2/InitTenantDB.bat new file mode 100755 index 00000000..97d83a2b --- /dev/null +++ b/ACA/configuration-ha/DB2/InitTenantDB.bat @@ -0,0 +1,4 @@ +SET skip_create_tenant_db=true + +AddTenant.bat + diff --git a/ACA/configuration-ha/DB2/InitTenantDB.sh b/ACA/configuration-ha/DB2/InitTenantDB.sh new file mode 100755 index 00000000..182d5ebf --- /dev/null +++ b/ACA/configuration-ha/DB2/InitTenantDB.sh @@ -0,0 +1,16 @@ +#!/bin/bash + +echo +echo "==================================================" +echo +echo "This script will add a new BACA tenant by initializing a DB2 database to be a CA tenant database and inserting a tenant entry into the CA Base database." +echo +echo "If you want the script to create a DB2 database for you, please exit this script and run 'AddTenant.sh' instead." +echo +echo "==================================================" +echo + +export create_new_user=n +export tenant_db_exists=true + +./AddTenant.sh \ No newline at end of file diff --git a/BACA/configuration-ha/DB2/Readme_windows.txt b/ACA/configuration-ha/DB2/Readme_windows.txt similarity index 91% rename from BACA/configuration-ha/DB2/Readme_windows.txt rename to ACA/configuration-ha/DB2/Readme_windows.txt index b98e4d97..f1942822 100755 --- a/BACA/configuration-ha/DB2/Readme_windows.txt +++ b/ACA/configuration-ha/DB2/Readme_windows.txt @@ -7,5 +7,5 @@ base database and the other is called tenant database. 2. Open db2 administrator command window to run the script files. 3. Run the CreateBaseDB.bat to create the base database. 3. Run AddTenant.bat to add a new tenant db and ontology. - You can aslo run this script file to add a new ontology + You can also run this script file to add a new ontology for existing tenant database. 
\ No newline at end of file diff --git a/BACA/configuration-ha/DB2/ScriptFunctions.sh b/ACA/configuration-ha/DB2/ScriptFunctions.sh similarity index 100% rename from BACA/configuration-ha/DB2/ScriptFunctions.sh rename to ACA/configuration-ha/DB2/ScriptFunctions.sh diff --git a/ACA/configuration-ha/DB2/UpgradeTenantDB.bat b/ACA/configuration-ha/DB2/UpgradeTenantDB.bat new file mode 100755 index 00000000..ae6ce7e8 --- /dev/null +++ b/ACA/configuration-ha/DB2/UpgradeTenantDB.bat @@ -0,0 +1,31 @@ +@echo off + +SETLOCAL + +set /p tenant_db_name= Please enter a valid value for the tenant database name : +set /p tenant_db_user= Please enter a valid value for the tenant database user name : +set /p tenant_ontology= Please enter a valid value for the tenant ontology name : + +echo +echo "-- Please confirm these are the desired settings:" +echo " - tenant database name: %tenant_db_name%" +echo " - tenant database user name: %tenant_db_user%" +echo " - ontology name: %tenant_ontology%" + +set /P c=Are you sure you want to continue[Y/N]? +if /I "%c%" EQU "Y" goto :DOCREATE +if /I "%c%" EQU "N" goto :DOEXIT + +:DOCREATE + echo "Connecting to db and schema" + db2 connect to %tenant_db_name% + db2 set schema %tenant_ontology% + db2 -stvf sql\WinUpgradeTenantDB_1.2_to_1.3.sql + goto END +:DOEXIT + echo "Exited on user input" + goto END +:END + echo "END" + +ENDLOCAL \ No newline at end of file diff --git a/BACA/configuration/DB2/UpgradeTenantDB.sh b/ACA/configuration-ha/DB2/UpgradeTenantDB.sh similarity index 60% rename from BACA/configuration/DB2/UpgradeTenantDB.sh rename to ACA/configuration-ha/DB2/UpgradeTenantDB.sh index c1457886..eb4a4771 100755 --- a/BACA/configuration/DB2/UpgradeTenantDB.sh +++ b/ACA/configuration-ha/DB2/UpgradeTenantDB.sh @@ -1,7 +1,9 @@ #!/usr/bin/env bash . ./ScriptFunctions.sh -INPUT_PROPS_FILENAME="./common_for_DB2_Tenant_Upgrade.sh" +if [[ -z $INPUT_PROPS_FILENAME ]]; then + INPUT_PROPS_FILENAME="./common_for_DB2_Tenant_Upgrade.sh" +fi if [ -f $INPUT_PROPS_FILENAME ]; then echo "Found a $INPUT_PROPS_FILENAME. Reading in variables from that script." 
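The guard added above makes the properties file path overridable from the environment instead of hard-coded. A minimal usage sketch, assuming a caller-supplied properties file (the file name below is hypothetical; the default remains `./common_for_DB2_Tenant_Upgrade.sh` when the variable is unset):

```bash
# Hypothetical properties file name; UpgradeTenantDB.sh falls back to
# ./common_for_DB2_Tenant_Upgrade.sh when INPUT_PROPS_FILENAME is not set.
INPUT_PROPS_FILENAME=./my_tenant_upgrade_props.sh ./UpgradeTenantDB.sh
```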
@@ -42,22 +44,11 @@ echo " - tenant database name: $tenant_db_name" echo " - tenant database user name: $tenant_db_user" askForConfirmation -if [[ $SaaS != "true" || -z $SaaS ]]; then - cp sql/UpgradeTenantDB_to_1.1.sql.template sql/UpgradeTenantDB_to_1.1.sql - sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/UpgradeTenantDB_to_1.1.sql - sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/UpgradeTenantDB_to_1.1.sql - sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/UpgradeTenantDB_to_1.1.sql - echo - echo "Running upgrade script: sql/UpgradeTenantDB_to_1.1.sql" - db2 -stvf sql/UpgradeTenantDB_to_1.1.sql -else - echo "-- Skipping UpgradeTenantDB_to_1.1.sql" -fi - -cp sql/UpgradeTenantDB_1.1_to_1.2.sql.template sql/UpgradeTenantDB_1.1_to_1.2.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/UpgradeTenantDB_1.1_to_1.2.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/UpgradeTenantDB_1.1_to_1.2.sql -sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/UpgradeTenantDB_1.1_to_1.2.sql +echo " -- upgrade from 1.2 to 1.3 ---" +cp sql/UpgradeTenantDB_1.2_to_1.3.sql.template sql/UpgradeTenantDB_1.2_to_1.3.sql +sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/UpgradeTenantDB_1.2_to_1.3.sql +sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/UpgradeTenantDB_1.2_to_1.3.sql +sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/UpgradeTenantDB_1.2_to_1.3.sql echo -echo "Running upgrade script: sql/UpgradeTenantDB_1.1_to_1.2.sql" -db2 -stvf sql/UpgradeTenantDB_1.1_to_1.2.sql \ No newline at end of file +echo "Running upgrade script: sql/UpgradeTenantDB_1.2_to_1.3.sql" +db2 -stvf sql/UpgradeTenantDB_1.2_to_1.3.sql \ No newline at end of file diff --git a/BACA/configuration-ha/DB2/common_for_DB2.sh.sample b/ACA/configuration-ha/DB2/common_for_DB2.sh.sample similarity index 88% rename from BACA/configuration-ha/DB2/common_for_DB2.sh.sample rename to ACA/configuration-ha/DB2/common_for_DB2.sh.sample index 87b77b8d..8d3470e2 100644 --- a/BACA/configuration-ha/DB2/common_for_DB2.sh.sample +++ b/ACA/configuration-ha/DB2/common_for_DB2.sh.sample @@ -21,6 +21,7 @@ baca_database_server_ip=10.126.18.120 baca_database_port=50000 tenant_id=t4910 tenant_db_name=t4910 +tenant_dsn_name=t4910 tenant_db_user=t4910user # To skip creating tenant database user and skip asking for pwd, use these vars below. 
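The new `tenant_dsn_name` entry above pairs with the DSN-based connection string that `AddTenant.sh` now builds (`DSN=$tenant_dsn_name;UID=...;PWD=...`), which the DB2 client resolves through its `db2dsdriver.cfg` file. Below is a minimal sketch of a single-host (non-HA) entry, assuming placeholder host, port, and alias values; the file location varies by DB2 client installation, and the `db2dsdriver.cfg.sample.nonHA` and `db2dsdriver.cfg.sample.HA` files in this directory show the full layouts.

```bash
# Sketch only: host, port, and alias are placeholders; adjust the target path for your DB2 client installation.
cat > ~/sqllib/cfg/db2dsdriver.cfg <<'EOF'
<configuration>
  <dsncollection>
    <dsn alias="t4910" name="t4910" host="db2server.example.com" port="50000"/>
  </dsncollection>
  <databases>
    <database name="t4910" host="db2server.example.com" port="50000"/>
  </databases>
</configuration>
EOF
```

Once `common_for_DB2.sh` is populated from this sample, `AddTenant.sh` skips the prompt for any variable that is already set.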
@@ -48,4 +49,9 @@ tenant_user_name=johnsmith confirmation=y #DB2 ssl Yes/No -ssl=No \ No newline at end of file +ssl=No + + +# if insert tenant is the only part needed, specify "y" (skips populating tenant DB and inserting user) +# this is useful for fixing tenant connection string +insert_tenant_only=y \ No newline at end of file diff --git a/BACA/configuration-ha/DB2/common_for_DB2_Tenant_Upgrade.sh.sample b/ACA/configuration-ha/DB2/common_for_DB2_Tenant_Upgrade.sh.sample similarity index 100% rename from BACA/configuration-ha/DB2/common_for_DB2_Tenant_Upgrade.sh.sample rename to ACA/configuration-ha/DB2/common_for_DB2_Tenant_Upgrade.sh.sample diff --git a/BACA/configuration-ha/DB2/common_for_DB2_Upgrade.sh.sample b/ACA/configuration-ha/DB2/common_for_DB2_Upgrade.sh.sample similarity index 100% rename from BACA/configuration-ha/DB2/common_for_DB2_Upgrade.sh.sample rename to ACA/configuration-ha/DB2/common_for_DB2_Upgrade.sh.sample diff --git a/ACA/configuration-ha/DB2/db2dsdriver.cfg.sample.HA b/ACA/configuration-ha/DB2/db2dsdriver.cfg.sample.HA new file mode 100644 index 00000000..8fe2c347 --- /dev/null +++ b/ACA/configuration-ha/DB2/db2dsdriver.cfg.sample.HA @@ -0,0 +1,43 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/ACA/configuration-ha/DB2/db2dsdriver.cfg.sample.nonHA b/ACA/configuration-ha/DB2/db2dsdriver.cfg.sample.nonHA new file mode 100644 index 00000000..a0ff5685 --- /dev/null +++ b/ACA/configuration-ha/DB2/db2dsdriver.cfg.sample.nonHA @@ -0,0 +1,21 @@ + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/BACA/configuration-ha/DB2/sql/CreateBacaSchema.sql.template b/ACA/configuration-ha/DB2/sql/CreateBacaSchema.sql.template similarity index 100% rename from BACA/configuration-ha/DB2/sql/CreateBacaSchema.sql.template rename to ACA/configuration-ha/DB2/sql/CreateBacaSchema.sql.template diff --git a/BACA/configuration/DB2/sql/CreateBacaTables.sql b/ACA/configuration-ha/DB2/sql/CreateBacaTables.sql similarity index 98% rename from BACA/configuration/DB2/sql/CreateBacaTables.sql rename to ACA/configuration-ha/DB2/sql/CreateBacaTables.sql index 5c6ac1fe..53774168 100644 --- a/BACA/configuration/DB2/sql/CreateBacaTables.sql +++ b/ACA/configuration-ha/DB2/sql/CreateBacaTables.sql @@ -1,9 +1,10 @@ create table doc_class ( - doc_class_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), + doc_class_id INTEGER NOT NULL GENERATED BY DEFAULT AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), doc_class_name VARCHAR (512) NOT NULL, comment varchar(1024), - + trained smallint NOT NULL default 0, + CONSTRAINT doc_class_pkey PRIMARY KEY (doc_class_id), CONSTRAINT doc_class_doc_class_name_key UNIQUE (doc_class_name) @@ -447,7 +448,7 @@ create table feature ); ---status 0.uploaded 1.processing 2.text (completed status) 3.error +--status 0.uploaded 1.processing 2.text (completed status) 3.error 4. 
trained create table document ( id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), @@ -459,6 +460,8 @@ create table document status SMALLINT NOT NULL, error_info VARCHAR(1024), content BLOB(250M), + actual_content BLOB(250M), + flag SMALLINT NOT NULL, CONSTRAINT doc_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) ON UPDATE RESTRICT ON DELETE CASCADE, @@ -482,6 +485,7 @@ create table training_log created_by INTEGER NOT NULL, json_model_input_detail BLOB(250M), global_feature_vector BLOB(250M), + selected_features VARCHAR(1024), CONSTRAINT training_log_pkey PRIMARY KEY (id) ); diff --git a/BACA/configuration-ha/DB2/sql/CreateBaseDB.sql.template b/ACA/configuration-ha/DB2/sql/CreateBaseDB.sql.template similarity index 100% rename from BACA/configuration-ha/DB2/sql/CreateBaseDB.sql.template rename to ACA/configuration-ha/DB2/sql/CreateBaseDB.sql.template diff --git a/BACA/configuration-ha/DB2/sql/CreateBaseTable.sql.template b/ACA/configuration-ha/DB2/sql/CreateBaseTable.sql.template similarity index 100% rename from BACA/configuration-ha/DB2/sql/CreateBaseTable.sql.template rename to ACA/configuration-ha/DB2/sql/CreateBaseTable.sql.template diff --git a/BACA/configuration-ha/DB2/sql/CreateDB.sql.template b/ACA/configuration-ha/DB2/sql/CreateDB.sql.template similarity index 100% rename from BACA/configuration-ha/DB2/sql/CreateDB.sql.template rename to ACA/configuration-ha/DB2/sql/CreateDB.sql.template diff --git a/BACA/configuration-ha/DB2/sql/DropBacaTables.sql b/ACA/configuration-ha/DB2/sql/DropBacaTables.sql similarity index 98% rename from BACA/configuration-ha/DB2/sql/DropBacaTables.sql rename to ACA/configuration-ha/DB2/sql/DropBacaTables.sql index 1eb4506e..349780c2 100644 --- a/BACA/configuration-ha/DB2/sql/DropBacaTables.sql +++ b/ACA/configuration-ha/DB2/sql/DropBacaTables.sql @@ -37,6 +37,7 @@ drop table key_alias; drop table cword; drop table key_class; drop table doc_alias; +drop table feature; drop table doc_class; drop table ontology; drop table classifier; diff --git a/BACA/configuration-ha/DB2/sql/InsertTenant.sql.template b/ACA/configuration-ha/DB2/sql/InsertTenant.sql.template similarity index 70% rename from BACA/configuration-ha/DB2/sql/InsertTenant.sql.template rename to ACA/configuration-ha/DB2/sql/InsertTenant.sql.template index ea921ff8..b9f3d9c1 100644 --- a/BACA/configuration-ha/DB2/sql/InsertTenant.sql.template +++ b/ACA/configuration-ha/DB2/sql/InsertTenant.sql.template @@ -1,4 +1,4 @@ connect to $base_db_name ; set schema $base_db_user ; -insert into TENANTINFO (tenantid,ontology,tenanttype,dailylimit,rdbmsengine,bacaversion,rdbmsconnection,dbname,dbuser,tenantdbversion) values ( '$tenant_id', '$tenant_ontology', $tenant_type, $daily_limit, 'DB2', '1.2', encrypt('$rdbmsconnection','AES_KEY'),'$tenant_db_name','$tenant_db_user','1.2') ; +insert into TENANTINFO (tenantid,ontology,tenanttype,dailylimit,rdbmsengine,bacaversion,rdbmsconnection,dbname,dbuser,tenantdbversion) values ( '$tenant_id', '$tenant_ontology', $tenant_type, $daily_limit, 'DB2', '1.3', encrypt('$rdbmsconnection','AES_KEY'),'$tenant_db_name','$tenant_db_user','1.3') ; connect reset ; diff --git a/BACA/configuration-ha/DB2/sql/InsertUser.sql.template b/ACA/configuration-ha/DB2/sql/InsertUser.sql.template similarity index 100% rename from BACA/configuration-ha/DB2/sql/InsertUser.sql.template rename to ACA/configuration-ha/DB2/sql/InsertUser.sql.template diff --git a/BACA/configuration-ha/DB2/sql/LoadData.sql.template 
b/ACA/configuration-ha/DB2/sql/LoadData.sql.template similarity index 60% rename from BACA/configuration-ha/DB2/sql/LoadData.sql.template rename to ACA/configuration-ha/DB2/sql/LoadData.sql.template index 24c2657e..bdcffa15 100644 --- a/BACA/configuration-ha/DB2/sql/LoadData.sql.template +++ b/ACA/configuration-ha/DB2/sql/LoadData.sql.template @@ -1,7 +1,7 @@ CONNECT TO $tenant_db_name ; SET SCHEMA $tenant_ontology ; -load from ./CSVFiles/doc_class.csv of del modified by identityoverride insert into doc_class ; +load from ./CSVFiles/doc_class.csv of del insert into doc_class ; load from ./CSVFiles/key_class.csv of del modified by identityoverride insert into key_class ; load from ./CSVFiles/doc_alias.csv of del modified by identityoverride insert into doc_alias ; load from ./CSVFiles/key_alias.csv of del modified by identityoverride insert into key_alias ; @@ -17,21 +17,4 @@ load from ./CSVFiles/heading_alias_dc.csv of del modified by identityoverride in load from ./CSVFiles/heading_alias_h.csv of del modified by identityoverride insert into heading_alias_h ; load from ./CSVFiles/cword_dc.csv of del modified by identityoverride insert into cword_dc ; -set integrity for key_class_dc immediate checked ; -set integrity for doc_alias_dc immediate checked ; -set integrity for key_alias_dc immediate checked ; -set integrity for key_alias_kc immediate checked ; -set integrity for heading_dc immediate checked ; -set integrity for heading_alias_dc immediate checked ; -set integrity for heading_alias_h immediate checked ; -set integrity for cword_dc immediate checked ; - -alter table doc_class alter column doc_class_id restart with 10 ; -alter table doc_alias alter column doc_alias_id restart with 11 ; -alter table key_class alter column key_class_id restart with 202 ; -alter table key_alias alter column key_alias_id restart with 239 ; -alter table cword alter column cword_id restart with 76 ; -alter table heading alter column heading_id restart with 3 ; -alter table heading_alias alter column heading_alias_id restart with 3 ; - CONNECT RESET; diff --git a/ACA/configuration-ha/DB2/sql/SetIntegrity.sql.template b/ACA/configuration-ha/DB2/sql/SetIntegrity.sql.template new file mode 100644 index 00000000..01d72031 --- /dev/null +++ b/ACA/configuration-ha/DB2/sql/SetIntegrity.sql.template @@ -0,0 +1,21 @@ +CONNECT TO $tenant_db_name ; +SET SCHEMA $tenant_ontology ; + +set integrity for key_class_dc immediate checked ; +set integrity for doc_alias_dc immediate checked ; +set integrity for key_alias_dc immediate checked ; +set integrity for key_alias_kc immediate checked ; +set integrity for heading_dc immediate checked ; +set integrity for heading_alias_dc immediate checked ; +set integrity for heading_alias_h immediate checked ; +set integrity for cword_dc immediate checked ; + +alter table doc_class alter column doc_class_id restart with 10 ; +alter table doc_alias alter column doc_alias_id restart with 11 ; +alter table key_class alter column key_class_id restart with 202 ; +alter table key_alias alter column key_alias_id restart with 239 ; +alter table cword alter column cword_id restart with 76 ; +alter table heading alter column heading_id restart with 3 ; +alter table heading_alias alter column heading_alias_id restart with 3 ; + +CONNECT RESET; diff --git a/BACA/configuration-ha/DB2/sql/TablePermissions.sql.template b/ACA/configuration-ha/DB2/sql/TablePermissions.sql.template similarity index 89% rename from BACA/configuration-ha/DB2/sql/TablePermissions.sql.template rename to 
ACA/configuration-ha/DB2/sql/TablePermissions.sql.template index d8090bba..897b9679 100644 --- a/BACA/configuration-ha/DB2/sql/TablePermissions.sql.template +++ b/ACA/configuration-ha/DB2/sql/TablePermissions.sql.template @@ -16,5 +16,7 @@ GRANT ALTER ON TABLE $tenant_ontology.FONTS TO USER $tenant_db_user ; GRANT ALTER ON TABLE $tenant_ontology.FONTS_TRANSID TO USER $tenant_db_user ; GRANT ALTER ON TABLE $tenant_ontology.DB_BACKUP TO USER $tenant_db_user ; GRANT ALTER ON TABLE $tenant_ontology.PATTERN TO USER $tenant_db_user ; +GRANT ALTER ON TABLE $tenant_ontology.DOCUMENT TO USER $tenant_db_user ; +GRANT ALTER ON TABLE $tenant_ontology.TRAINING_LOG TO USER $tenant_db_user ; CONNECT RESET; \ No newline at end of file diff --git a/BACA/configuration-ha/DB2/sql/UpgradeBaseDB_1.1_to_1.2.sql.template b/ACA/configuration-ha/DB2/sql/UpgradeBaseDB_1.1_to_1.2.sql.template similarity index 74% rename from BACA/configuration-ha/DB2/sql/UpgradeBaseDB_1.1_to_1.2.sql.template rename to ACA/configuration-ha/DB2/sql/UpgradeBaseDB_1.1_to_1.2.sql.template index c5a5fec8..650ccd7d 100644 --- a/BACA/configuration-ha/DB2/sql/UpgradeBaseDB_1.1_to_1.2.sql.template +++ b/ACA/configuration-ha/DB2/sql/UpgradeBaseDB_1.1_to_1.2.sql.template @@ -4,6 +4,8 @@ set schema $base_db_user ; alter table tenantinfo add column featureflags bigint not null with default 0; alter table tenantinfo add column tenantdbversion varchar(255); +update tenantinfo set bacaversion='1.2'; +update tenantinfo set tenantdbversion='1.2'; reorg table tenantinfo; connect reset; \ No newline at end of file diff --git a/BACA/configuration-ha/DB2/sql/UpgradeBaseDB_to_1.1.sql.template b/ACA/configuration-ha/DB2/sql/UpgradeBaseDB_to_1.1.sql.template similarity index 100% rename from BACA/configuration-ha/DB2/sql/UpgradeBaseDB_to_1.1.sql.template rename to ACA/configuration-ha/DB2/sql/UpgradeBaseDB_to_1.1.sql.template diff --git a/BACA/configuration-ha/DB2/sql/UpgradeTenantDB_1.1_to_1.2.sql.template b/ACA/configuration-ha/DB2/sql/UpgradeTenantDB_1.1_to_1.2.sql.template similarity index 100% rename from BACA/configuration-ha/DB2/sql/UpgradeTenantDB_1.1_to_1.2.sql.template rename to ACA/configuration-ha/DB2/sql/UpgradeTenantDB_1.1_to_1.2.sql.template diff --git a/ACA/configuration-ha/DB2/sql/UpgradeTenantDB_1.2_to_1.3.sql.template b/ACA/configuration-ha/DB2/sql/UpgradeTenantDB_1.2_to_1.3.sql.template new file mode 100644 index 00000000..1e18890c --- /dev/null +++ b/ACA/configuration-ha/DB2/sql/UpgradeTenantDB_1.2_to_1.3.sql.template @@ -0,0 +1,26 @@ +connect to $tenant_db_name ; +set schema $tenant_ontology ; + + +GRANT ALTER ON TABLE $tenant_ontology.document TO USER $tenant_db_user ; +GRANT ALTER ON TABLE $tenant_ontology.training_log TO USER $tenant_db_user ; + +---classification schema changes + +--trained 0.not trained 1.trained +alter table doc_class alter column doc_class_id drop identity; +alter table doc_class alter column doc_class_id set GENERATED BY DEFAULT as IDENTITY; +insert into doc_class (doc_class_id, doc_class_name, comment) VALUES (0, '__root', 'Reserved document class'); +alter table doc_class add column trained smallint NOT NULL default 0; +reorg table doc_class; + +--flag 0.text 1.json +alter table document add column actual_content BLOB(250M); +alter table document add column flag SMALLINT NOT NULL default 0; +alter table document alter column flag drop default; +reorg table document; + +alter table training_log add column selected_features VARCHAR(1024); +reorg table training_log; + + diff --git 
a/BACA/configuration-ha/DB2/sql/UpgradeTenantDB_to_1.1.sql.template b/ACA/configuration-ha/DB2/sql/UpgradeTenantDB_to_1.1.sql.template similarity index 100% rename from BACA/configuration-ha/DB2/sql/UpgradeTenantDB_to_1.1.sql.template rename to ACA/configuration-ha/DB2/sql/UpgradeTenantDB_to_1.1.sql.template diff --git a/BACA/configuration/DB2/sql/UpgradeTenantDB_1.1_to_1.2.sql.template b/ACA/configuration-ha/DB2/sql/WinUpgradeTenantDB_1.2.sql similarity index 97% rename from BACA/configuration/DB2/sql/UpgradeTenantDB_1.1_to_1.2.sql.template rename to ACA/configuration-ha/DB2/sql/WinUpgradeTenantDB_1.2.sql index ff6ecf20..55cf7083 100644 --- a/BACA/configuration/DB2/sql/UpgradeTenantDB_1.1_to_1.2.sql.template +++ b/ACA/configuration-ha/DB2/sql/WinUpgradeTenantDB_1.2.sql @@ -1,5 +1,6 @@ -connect to $tenant_db_name ; -set schema $tenant_ontology ; +alter table integration alter column model_id set data type varchar(1024); + +reorg table integration; --pattern tables create table pattern @@ -126,5 +127,3 @@ create table ontology CONSTRAINT ontology_fkey FOREIGN KEY (default_classifier_id) REFERENCES classifier(id) ON UPDATE RESTRICT ON DELETE RESTRICT ); - -connect reset ; \ No newline at end of file diff --git a/ACA/configuration-ha/DB2/sql/WinUpgradeTenantDB_1.2_1.3.sql b/ACA/configuration-ha/DB2/sql/WinUpgradeTenantDB_1.2_1.3.sql new file mode 100644 index 00000000..dc5eea12 --- /dev/null +++ b/ACA/configuration-ha/DB2/sql/WinUpgradeTenantDB_1.2_1.3.sql @@ -0,0 +1,20 @@ +GRANT ALTER ON TABLE $tenant_ontology.document TO USER $tenant_db_user ; +GRANT ALTER ON TABLE $tenant_ontology.training_log TO USER $tenant_db_user ; + +---classification schema changes + +--trained 0.not trained 1.trained +alter table doc_class alter column doc_class_id drop identity; +alter table doc_class alter column doc_class_id set GENERATED BY DEFAULT as IDENTITY; +insert into doc_class (doc_class_id, doc_class_name, comment) VALUES (0, '__root', 'Reserved document class'); +alter table doc_class add column trained smallint NOT NULL default 0; +reorg table doc_class; + +--flag 0.text 1.json +alter table document add column actual_content BLOB(250M); +alter table document add column flag SMALLINT NOT NULL default 0; +alter table document alter column flag drop default; +reorg table document; + +alter table training_log add column selected_features VARCHAR(1024); +reorg table training_log; \ No newline at end of file diff --git a/BACA/configuration-ha/baca-netpol.yaml b/ACA/configuration-ha/security/baca-netpol.yaml similarity index 71% rename from BACA/configuration-ha/baca-netpol.yaml rename to ACA/configuration-ha/security/baca-netpol.yaml index fa676f1e..d5637c7a 100644 --- a/BACA/configuration-ha/baca-netpol.yaml +++ b/ACA/configuration-ha/security/baca-netpol.yaml @@ -6,6 +6,8 @@ metadata: spec: ingress: - {} - podSelector: {} + podSelector: + matchLabels: + productID: ibm-dba-aca-prod policyTypes: - Ingress \ No newline at end of file diff --git a/BACA/configuration-ha/baca-psp.yaml b/ACA/configuration-ha/security/baca-psp.yaml similarity index 72% rename from BACA/configuration-ha/baca-psp.yaml rename to ACA/configuration-ha/security/baca-psp.yaml index 4b385949..48052f0a 100644 --- a/BACA/configuration-ha/baca-psp.yaml +++ b/ACA/configuration-ha/security/baca-psp.yaml @@ -1,4 +1,4 @@ -apiVersion: extensions/v1beta1 +apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: annotations: @@ -9,34 +9,21 @@ spec: allowPrivilegeEscalation: false fsGroup: ranges: - - max: 65535 - min: 1 + - max: 1 + min: 0 
rule: MustRunAs #rule: RunAsAny requiredDropCapabilities: - - MKNOD - - SETFCAP - - NET_RAW - - NET_BIND_SERVICE - - KILL + - ALL allowedCapabilities: - - SETPCAP - - AUDIT_WRITE - - CHOWN - - FOWNER - - FSETID - - SETUID - - SETGID - - SYS_CHROOT - - DAC_OVERRIDE runAsUser: rule: MustRunAsNonRoot seLinux: rule: RunAsAny supplementalGroups: ranges: - - max: 65535 - min: 1 + - max: 1 + min: 0 rule: MustRunAs #rule: RunAsAny volumes: @@ -50,10 +37,10 @@ spec: - '*' --- apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole +kind: Role metadata: annotations: - name: baca-clusterrole + name: baca-role rules: - apiGroups: - extensions @@ -61,5 +48,6 @@ rules: - baca-psp resources: - podsecuritypolicies + #verbs: ["create", "delete", "deletecollection", "get", "list", "patch", "update", "watch"] verbs: - use diff --git a/ACA/configuration-ha/security/baca-rolebinding.yaml b/ACA/configuration-ha/security/baca-rolebinding.yaml new file mode 100644 index 00000000..30e90fe5 --- /dev/null +++ b/ACA/configuration-ha/security/baca-rolebinding.yaml @@ -0,0 +1,12 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: baca-rolebinding +subjects: +- kind: Group + name: system:serviceaccounts:$KUBE_NAME_SPACE + apiGroup: rbac.authorization.k8s.io +roleRef: + kind: Role #this must be Role or ClusterRole + name: baca-role # this must match the name of the Role or ClusterRole you wish to bind to + apiGroup: rbac.authorization.k8s.io \ No newline at end of file diff --git a/ACA/configuration-ha/security/baca-scc.yaml b/ACA/configuration-ha/security/baca-scc.yaml new file mode 100644 index 00000000..cde9bd56 --- /dev/null +++ b/ACA/configuration-ha/security/baca-scc.yaml @@ -0,0 +1,76 @@ +# This SCC is the most restrictive, and is meant to be a template +# Pass the --validate=false flag when applying +# The ID ranges provided in this template match the PSPs and can be changed + +apiVersion: security.openshift.io/v1 +kind: SecurityContextConstraints +metadata: + annotations: + kubernetes.io/description: "This policy is the most restrictive, + requiring pods to run with a non-root UID, and preventing pods from accessing the host. + The UID and GID will be bound by ranges specified at the Namespace level." 
+ cloudpak.ibm.com/version: "1.1.0" + name: baca-scc +allowHostDirVolumePlugin: false +allowHostIPC: false +allowHostNetwork: false +allowHostPID: false +allowHostPorts: false +allowPrivilegedContainer: false +allowPrivilegeEscalation: false +allowedCapabilities: null +allowedFlexVolumes: null +allowedUnsafeSysctls: null +defaultAddCapabilities: null +defaultAllowPrivilegeEscalation: false +forbiddenSysctls: + - "*" +fsGroup: + type: MustRunAs + ranges: + - max: 1 + min: 0 +readOnlyRootFilesystem: false +requiredDropCapabilities: +- ALL +runAsUser: + type: MustRunAsNonRoot +seccompProfiles: +- docker/default +# This can be customized for seLinuxOptions specific to your host machine +seLinuxContext: + type: RunAsAny +# seLinuxOptions: +# level: +# user: +# role: +# type: +supplementalGroups: + type: MustRunAs + ranges: + - max: 1 + min: 0 +# This can be customized to host specifics +volumes: +- configMap +- downwardAPI +- emptyDir +- persistentVolumeClaim +- projected +- secret + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + annotations: + name: baca-role +rules: +- apiGroups: + - extensions + resourceNames: + - baca-scc + resources: + - podsecuritypolicies + verbs: + - use \ No newline at end of file diff --git a/ADW/README_config.md b/ADW/README_config.md new file mode 100644 index 00000000..c5b0e28b --- /dev/null +++ b/ADW/README_config.md @@ -0,0 +1,130 @@ +# Configuring IBM Automation Digital Worker + +The following instructions cover the basic configuration of IBM Automation Digital Worker. + + +## Prerequisites + +Digital Worker requires: +- A [User Management Service](../UMS/README_config.md) instance in order to protect access to Digital Worker designer and APIs +- An [IBM Business Automation Insights](../BAI/README_config.md) instance (recommended but also optional) in order to collect Digital Worker tasks events and monitor them +- An [IBM Business Automation Studio Resource Registry](../BAS/README_config.md) instance (recommended but also optional) in order to integrate with some other components in the pack + +Digital Worker includes 5 pods corresponding to the following services: + - Digital Worker Designer + - Digital Worker Tasks Runtime + - Digital Worker Management Server + - MongoDB + - NPM registry + +The services require CPU and memory resources. The following table lists the minimum requirements that are used as default values. + +| Component | CPU Minimum (m) | Memory Minimum (Mi) | +| ----------------------------------------| --------------- | -------------------- | +| Digital Worker Designer | 0.1 | 128 | +| Digital Worker Tasks Runtime | 0.1 | 128 | +| Digital Worker Management Server | 0.1 | 512 | +| MongoDB | 0.1 | 128 | +| NPM registry | 0.1 | 128 | + + +In addition to these 5 services there are 2 Jobs: + - Setup + - Registry + +## Preparing for Installation + +Before you configure, make sure that you have prepared your environment. For more information, see [Preparing to install IBM Automation Digital Worker](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_adwk8s.html). + +### Step 1: Configure the custom resource YAML file for your Automation Digital Worker deployment + +In your `my_icp4a_cr.yaml` file, update the `adw_configuration` section with the configuration parameters. 
See [IBM Automation Digital Worker parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_adw_K8s_parameters.html) to find the default values for each ADW parameter and customize these values in your file. + +> **Note**: The [configuration](configuration) folder provides sample configuration files that you might find useful. Download the files and edit them for your own customizations. + +### Step 2: Configuring Security + +#### Step 2.1: Apply Security Context Constraint + +The Digital Worker role requires a SecurityContextConstraint to be bound to the target namespace prior to installation. To meet this requirement, there may be cluster-scoped as well as namespace-scoped pre- and post-installation actions that need to occur. + +The [`ibm-restricted-scc`](https://ibm.biz/cpkspec-scc) SecurityContextConstraint is required to install the chart. + +You must also have a service account that has the [`ibm-restricted-scc`](https://ibm.biz/cpkspec-scc) SecurityContextConstraint to allow running restricted containers: +```bash +oc adm policy add-scc-to-user ibm-restricted-scc -z ibm-cp4a-operator +``` + +> **Note**: You can define a custom SecurityContextConstraints resource to finely control the permissions and capabilities needed to deploy this role. An example has been provided. + + +#### Step 2.2: Apply Pod Security Policy + +Digital Worker requires a pod security policy to be bound to the target namespace prior to installation. To meet this requirement, there may be cluster-scoped as well as namespace-scoped pre- and post-installation actions that need to occur. + +The predefined pod security policy [`ibm-restricted-psp`](https://ibm.biz/cpkspec-psp) has been verified for this chart. If your target namespace is bound to it, no further action is needed in terms of pod security policy. + +This chart also defines a custom PodSecurityPolicy which can be used to finely control the permissions and capabilities needed to deploy this chart. You can enable this custom PodSecurityPolicy using the OCP user interface or via the OCP CLI. + +Using the CLI, you can apply the following YAML file to enable the custom pod security policy: +- [Custom PodSecurityPolicy definition](./configuration/adw-psp.yaml) + +After creating the policy, replace all occurrences of `< NAMESPACE >` with the name of the namespace the operator is deployed in. Then apply it using the following command: + +```bash +kubectl apply -f adw-psp.yaml +``` + +For the custom PodSecurityPolicy to take effect, you must bind the ServiceAccount to a ClusterRole. This can be done via the command line using the following command: + +```bash +kubectl create clusterrolebinding adw-clusterrolebinding --clusterrole=cluster-admin --serviceaccount=< NAMESPACE >:< SERVICE ACCOUNT NAME > +``` + +### Step 3: Prepare and Apply the Secret + +Using the [Preparing to install IBM Automation Digital Worker](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_adwk8s.html) and [IBM Automation Digital Worker parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_adw_K8s_parameters.html) pages, create `adw-secret.yaml`, then apply it to your instance using the following command.
+ + +```bash +kubectl apply -f adw-secret.yaml +``` +> **Note**: An empty secret has been provided: [adw-secret.yaml](configuration/adw-secret.yaml). A sketch of one way to populate its values is shown at the end of this readme. + +## Complete the installation + +When you have finished editing the configuration file, go back to the relevant install or update page to configure other components and complete the deployment with the operator. + +Install pages: + - [Managed OpenShift installation page](../platform/roks/install.md#step-6-configure-the-software-that-you-want-to-install) + - [OpenShift installation page](../platform/ocp/install.md#step-6-configure-the-software-that-you-want-to-install) + - [Certified Kubernetes installation page](../platform/k8s/install.md#step-6-configure-the-software-that-you-want-to-install) + +Update pages: + - [Managed OpenShift installation page](../platform/roks/update.md) + - [OpenShift installation page](../platform/ocp/update.md#step-1-modify-the-software-that-is-installed) + - [Certified Kubernetes installation page](../platform/k8s/update.md) + + +## Post installation + +If you intend to connect Digital Worker to Resource Registry or have provisioned User Management Service using the same custom resource, re-run the setup job after deployment with the following command: + +```bash +oc get job -o json | jq 'del(.spec.selector)' | jq 'del(.spec.template.metadata.labels)' | kubectl replace --force -f - +``` + +## Troubleshooting +### Management pod not going into a ready state +If you are using dynamically provisioned storage, ensure that the following line is present and set to true in your custom resource file. If it is not set, the management pod may fail because it needs to be able to write to the volume: + +```yaml +grantWritePermissionOnMountedVolumes: true +``` +### Digital Worker tile not present in Business Automation Studio + +When integrating with Resource Registry, the management service must be exposed to Resource Registry. If you are using SSL, the certificate used must have a CN matching the pod name `< DEPLOYMENT NAME >-management`. + +### The Operator is attempting to install Digital Worker despite the configuration not being present in the custom resource + +If the custom resource is removed after deployment, the operator may still attempt the installation task of the operator role on its next reconciliation cycle. If this occurs, the ADW deployment in the namespace is removed as intended and the error can be ignored.
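As a supplement to Step 3 above: the `data` values of an `Opaque` Kubernetes secret must be base64 encoded. The following is a minimal sketch of one way to populate a few of the keys from the sample [adw-secret.yaml](configuration/adw-secret.yaml); the credential values shown are placeholders, not defaults.

```bash
# Sketch only: credential values are placeholders; repeat for the remaining keys you need.
echo -n 'myNpmUser'     | base64     # paste the output into npmUser
echo -n 'myNpmPassword' | base64     # paste the output into npmPassword
base64 -w 0 server.crt               # paste the single-line output into server.crt
kubectl apply -f adw-secret.yaml
```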
diff --git a/ADW/configuration/adw-cr.yaml b/ADW/configuration/adw-cr.yaml new file mode 100644 index 00000000..316b0aa0 --- /dev/null +++ b/ADW/configuration/adw-cr.yaml @@ -0,0 +1,97 @@ +apiVersion: icp4a.ibm.com/v1 +kind: ICP4ACluster +metadata: + name: adw-cr + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 +spec: + adw_configuration: + global: + imagePullSecret: < IMAGE SECRET > + kubernetes: + serviceAccountName: "ibm-cp4a-operator" + + adwSecret: < SECRET > + + grantWritePermissionOnMountedVolumes: true + + logLevel: "error" + + networkPolicy: + enabled: true + + restartPolicy: Never + + registry: + endpoint: "" + + npmRegistry: + persistence: + enabled: true + useDynamicProvisioning: true + storageClassName: "< STORAGE CLASS NAME >" + + mongodb: + persistence: + enabled: true + useDynamicProvisioning: true + storageClassName: "< STORAGE CLASS NAME >" + + designer: + image: + repository: "< REGISTRY >/adw-designer" + tag: "19.0.3" + pullPolicy: "Always" + externalPort: 30708 + externalUrl: "" + + runtime: + image: + repository: "< REGISTRY >/adw-runtime" + tag: "19.0.3" + pullPolicy: "Always" + persistence: + useDynamicProvisioning: true + storageClassName: "< STORAGE CLASS NAME >" + service: + type: "NodePort" + externalPort: 30709 + runLogLevel: "warn" + externalUrl: "" + + management: + image: + repository: "< REGISTRY >/adw-management" + tag: "19.0.3" + pullPolicy: "Always" + persistence: + useDynamicProvisioning: true + storageClassName: "< STORAGE CLASS NAME >" + externalPort: 30710 + externalUrl: "" + + setup: + image: + repository: "< REGISTRY >/adw-setup" + tag: "19.0.3" + pullPolicy: "Always" + + init: + image: + repository: "< REGISTRY >/dba/adw-init" + tag: "19.0.3" + pullPolicy: "Always" + + baiKafka: + topic: "BAITOPICFORODM" + bootstrapServers: "" + securityProtocol: "SASL_SSL" + + baiElasticsearch: + url: "" + + oidc: + endpoint: "" diff --git a/ADW/configuration/adw-psp.yaml b/ADW/configuration/adw-psp.yaml new file mode 100755 index 00000000..20d123d1 --- /dev/null +++ b/ADW/configuration/adw-psp.yaml @@ -0,0 +1,63 @@ +apiVersion: extensions/v1beta1 +kind: PodSecurityPolicy +metadata: + annotations: + kubernetes.io/description: "This policy allows pods to run with any UID and GID, but preventing access to the host." 
+ name: adw-psp +spec: + allowPrivilegeEscalation: true + fsGroup: + rule: RunAsAny + requiredDropCapabilities: + - MKNOD + allowedCapabilities: + - CHOWN + runAsUser: + rule: RunAsAny + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + volumes: + - configMap + - emptyDir + - projected + - secret + - downwardAPI + - persistentVolumeClaim + forbiddenSysctls: + - '*' +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: adw-role + namespace: < NAMESPACE > +rules: + - apiGroups: + - extensions + resourceNames: + - adw-psp + resources: + - podsecuritypolicies + verbs: + - use +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: adw-psp-sa +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: adw-rolebinding + namespace: < NAMESPACE > +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: adw-role +subjects: + - kind: ServiceAccount + name: adw-psp-sa + namespace: < NAMESPACE > diff --git a/ADW/configuration/adw-scc.yaml b/ADW/configuration/adw-scc.yaml new file mode 100755 index 00000000..67d690fe --- /dev/null +++ b/ADW/configuration/adw-scc.yaml @@ -0,0 +1,38 @@ +allowHostDirVolumePlugin: false +allowHostIPC: false +allowHostNetwork: false +allowHostPID: false +allowHostPorts: false +allowPrivilegeEscalation: true +allowPrivilegedContainer: false +allowedCapabilities: [] +apiVersion: security.openshift.io/v1 +defaultAddCapabilities: [] +fsGroup: + type: RunAsAny +groups: +- system:authenticated +kind: SecurityContextConstraints +metadata: + name: ibm-cp4a-operator +priority: 0 +readOnlyRootFilesystem: false +requiredDropCapabilities: +- KILL +- MKNOD +- SETUID +- SETGID +runAsUser: + type: MustRunAsRange +seLinuxContext: + type: MustRunAs +supplementalGroups: + type: RunAsAny +users: [] +volumes: +- configMap +- downwardAPI +- emptyDir +- persistentVolumeClaim +- projected +- secret \ No newline at end of file diff --git a/ADW/configuration/adw-secret.yaml b/ADW/configuration/adw-secret.yaml new file mode 100755 index 00000000..a317941d --- /dev/null +++ b/ADW/configuration/adw-secret.yaml @@ -0,0 +1,25 @@ +apiVersion: v1 +kind: Secret +metadata: + name: "" +type: Opaque +data: + server.key: "" + server.crt: "" + npmUser: "" + npmPassword: "" + kafkaUser: "" + kafkaPassword: "" + kafkaServerCert: "" + kafkaKerberosKeytab: "" + kafkaKerberosSaslServiceName: "" + kafkaKerberosRealm: "" + kafkaKerberosKdc: "" + kafkaKerberosPrincipal: "" + skillEncryptionSeed: "" + oidcClientId: "" + oidcClientSecret: "" + oidcUserName: "" + oidcPassword: "" + elasticsearchUser: "" + elasticsearchPassword: "" diff --git a/BACA/README.md b/BACA/README.md deleted file mode 100644 index 205f7f77..00000000 --- a/BACA/README.md +++ /dev/null @@ -1,31 +0,0 @@ -## Deploy IBM Business Automation Content Analyzer - -IBM Business Automation Content Analyzer offers the power of intelligent capture with the flexibility of an API that enables you to extend the value of your core enterprise content management (ECM) technology stack. Advanced AI more accurately classifies data and can be configurable in minutes, instead of weeks. - -For more information, see [IBM Business Automation Content Analyzer: Details](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/con_baca.html) - - -## Deploying with Helm charts - -- Extract [ibm-dba-baca-prod-1.2.0.tgz](./helm-charts/ibm-dba-baca-prod-1.2.0.tgz) for non-HA deployment and reference the readme in ibm-dba-baca-prod/README.md after extraction. 
- -- Extract [ibm-dba-baca-prod-1.2.0_ha.tgz](./helm-charts/ibm-dba-baca-prod-1.2.0_ha.tgz) for HA deployment and reference the readme in ibm-dba-baca-prod/README.md after extraction. - - -## Deploying using Kubernetes YAML - -- [Using Kubernetes YAML](k8s-yaml/README.md) - -## NOTE: - -- We include a sample network policy yaml file (baca-netpol.yaml) inside the `configuration` and `configuration-ha` folder. You can review and further modify to fit your need. To apply the network policy: -``` -export KUBE_NAME_SPACE= -cat baca-netpol.yaml | sed s/\$KUBE_NAME_SPACE/"$KUBE_NAME_SPACE"/ | kubectl apply -f - - -``` - - -## Completing post deployment configuration - -After you deploy your container images, you might need to perform some required and some optional steps to get your Business Automation Content Analyzer environment up and running. For detail instructions, see [Completing post deployment tasks for Business Automation Content Analyzer](docs/post-deployment.md) diff --git a/BACA/configuration-ha/DB2/AddTenant.sh b/BACA/configuration-ha/DB2/AddTenant.sh deleted file mode 100755 index 1f17c071..00000000 --- a/BACA/configuration-ha/DB2/AddTenant.sh +++ /dev/null @@ -1,404 +0,0 @@ -#!/bin/bash -. ./ScriptFunctions.sh - -INPUT_PROPS_FILENAME="./common_for_DB2.sh" - -if [ -f $INPUT_PROPS_FILENAME ]; then - echo "Found a $INPUT_PROPS_FILENAME. Reading in variables from that script." - . $INPUT_PROPS_FILENAME -fi - -NUMARGS=$# - -# if an argument of '1' is passed, it is assumed that a tenant already exists, -# and the script will add a new ontology to an existing tenant -if [[ "$NUMARGS" -gt 0 ]]; then - use_existing_tenant=$1 -fi - - -if [[ -z "$use_existing_tenant" || $use_existing_tenant -ne 1 ]]; then - echo -e "\n-- This script will create a BACA database and an ontology for a new tenant and load it with default data" - echo -fi - -if [[ -z "$use_existing_tenant" || $use_existing_tenant -ne 1 ]]; then - echo "Enter the tenant ID for the new tenant: (eg. t4900)" -else - echo "Enter the tenant ID for the existing tenant: (eg. t4900)" -fi -while [[ -z "$tenant_id" || $tenant_id == '' ]] -do - echo "Please enter a valid value for the tenant ID:" - read tenant_id -done - - -if [[ -z "$use_existing_tenant" || $use_existing_tenant -ne 1 ]]; then - - while [[ $tenant_type == '' || $tenant_type != "0" && $tenant_type != "1" && $tenant_type != "2" ]] # While tenant_type is not valid/set - do - echo -e "\n\x1B[1;31mEnter the tenanttype\x1B[0m" - echo -e "\x1B[1;31mChoose the number equivalent.\x1B[0m" - echo -e "\x1B[1;34m0. Enterprise\x1B[0m" - echo -e "\x1B[1;34m1. Trial\x1B[0m" - echo -e "\x1B[1;34m2. Internal\x1B[0m" - read tenant_type - done - - if [ $tenant_type == 0 ]; then - daily_limit=0 - elif [ $tenant_type == 1 ]; then - daily_limit=100 - elif [ $tenant_type == 2 ]; then - daily_limit=2000 - fi -fi - - -echo -if [[ -z "$use_existing_tenant" || $use_existing_tenant -ne 1 ]]; then - echo "Enter the name of the new BACA tenant database to create: (eg. t4900)" -else - echo "Enter the name of the existing BACA tenant database: (eg. 
t4900)" -fi -while [[ $tenant_db_name == '' ]] -do - echo "Please enter a valid value for the tenant database name of max length 8 :" - read tenant_db_name - while [ ${#tenant_db_name} -gt 8 ]; - do - echo "Please enter a valid value for the tenant database name of max length 8 :" - read tenant_db_name; - echo ${#tenant_db_name}; - done -done - -if [[ -z "$baca_database_server_ip" ]]; then - echo -e "\nEnter the host/IP of the database server: " - read baca_database_server_ip -fi - -default_dbport=50000 -if [[ -z "$baca_database_port" ]]; then - echo -e "\nEnter the port of the database server. If nothing is entered we will use the following default value: " $default_dbport - read baca_database_port - if [[ -z "$baca_database_port" ]]; then - baca_database_port=$default_dbport - fi -fi - -default_ssl='No' -if [[ -z "$ssl" ]]; then - echo -e "\nWould you like to enable SSL to communicate with DB2 server? If nothing is entered we will use the default value: " $default_ssl - read ssl - if [[ -z "$ssl" ]]; then - ssl=$default_ssl - fi -fi - -if [[ $use_existing_tenant -eq 1 ]]; then - user_already_defined=1 -fi - -echo -echo "We need a non-admin database user that BACA will use to access your BACA tenant database." -while [[ -z "$tenant_db_user" || $tenant_db_user == "" ]] -do - echo - if [[ -z "$user_already_defined" || $user_already_defined -ne 1 ]]; then - while [[ "$create_new_user" != "y" && "$create_new_user" != "Y" && "$create_new_user" != "n" && "$create_new_user" != "N" ]] - do - echo "Do you want this script to create a new database user for you (This will create local OS user)? (Please enter y or n)" - read create_new_user - done - - if [[ "$create_new_user" == "n" || "$create_new_user" == "N" ]]; then - user_already_defined=1 - else - user_already_defined=0 - fi - fi - - while [[ -z "$tenant_db_user" || $tenant_db_user == "" ]] - do - if [[ "$create_new_user" == "y" || "$create_new_user" = "Y" ]]; then - echo "Please enter the name of database user to create: " - else - echo "Please enter the name of an existing database user" - fi - read tenant_db_user - done - - if [[ $user_already_defined -ne 1 ]]; then - getent passwd $tenant_db_user > /dev/null - if [[ $? -eq 0 ]]; then - while [[ "$use_existing_user" != "y" && "$use_existing_user" != "Y" && "$use_existing_user" != "n" && "$use_existing_user" != "N" ]] - do - echo "$tenant_db_user already exists. Do you want to use this user (Please enter y or n)" - read use_existing_user - if [ "$use_existing_user" = "y" ] || [ "$use_existing_user" = "Y" ]; then - user_already_defined=1 - else - unset tenant_db_user - unset user_already_defined - unset create_new_user - fi - done - fi - fi -done - - -while [[ $pwdconfirmed -ne 1 ]] # While pwd is not yet received and confirmed (i.e. entered teh same time twice) -do - while [[ $tenant_db_pwd == '' ]] # While pwd is empty... - do - echo "Enter the password for the user: " - read -s tenant_db_pwd - done - - while [[ $tenant_db_pwd2 == '' ]] # While pwd is empty... - do - echo "Please confirm the password by entering it again:" - read -s tenant_db_pwd2 - done - - if [[ "$tenant_db_pwd" == "$tenant_db_pwd2" ]]; then - pwdconfirmed=1 - else - echo "The passwords do not match. Please enter the password again." - unset tenant_db_pwd - unset tenant_db_pwd2 - fi -done - -if [[ $tenant_db_pwd_b64_encoded -eq 1 ]]; then - tenant_db_pwd=$(echo $tenant_db_pwd | base64 --decode) -fi - -default_ontology='default' -if [[ -z "$tenant_ontology" ]]; then - echo -e "\nEnter the tenant ontology name. 
If nothing is entered, the default name will be used: " $default_ontology - read tenant_ontology - if [[ -z "$tenant_ontology" ]]; then - tenant_ontology=$default_ontology - fi -fi - -default_basedb='BASECA' -if [[ -z "$base_db_name" ]]; then - echo -e "\nEnter the name of the Base BACA database with the TENANTINFO Table. If nothing is entered, we will use the following default value : " $default_basedb - read base_db_name - if [[ -z "$base_db_name" ]]; then - base_db_name=$default_basedb - fi -fi - -default_basedb_user='CABASEUSER' -if [[ -z "$base_db_user" ]]; then - echo -e "\nEnter the name of the database user for the Base BACA database. If nothing is entered, we will use the following default value : " $default_basedb_user - read base_db_user - if [[ -z "$base_db_user" ]]; then - base_db_user=$default_basedb_user - fi -fi - -# FOR NOW, there is no need to collect credentials for Base DB, as we are currently assuming that we are running script as DB2 admin (eg. db2inst1) on the DB2 server. -# If we decide to run from a remote machine, then UNCOMMENT the following to collect the DB2 admin credentials - -# pwdconfirmed=0 -# while [[ $pwdconfirmed -ne 1 ]] # While pwd is not yet received and confirmed (i.e. entered teh same time twice) -# do -# echo "Enter the password for the BACA base database user: " -# read -s base_tenant_db_pwd -# while [[ $base_tenant_db_pwd == '' ]] # While pwd is empty... -# do -# echo "Enter a valid value" -# read -r base_tenant_db_pwd -# done - -# echo "Please confirm the password by entering it again:" -# read -s base_tenant_db_pwd2 -# while [[ $base_tenant_db_pwd2 == '' ]] # While pwd is empty... -# do -# echo "Enter a valid value" -# read -r base_tenant_db_pwd2 -# done - -# if [[ "$base_tenant_db_pwd" == "$base_tenant_db_pwd2" ]]; then -# pwdconfirmed=1 -# else -# echo "The passwords do not match. Please enter the password again." -# unset base_tenant_db_pwd -# unset base_tenant_db_pwd2 -# fi -# done - -echo -echo "Now we will gather information about the initial BACA user that will be defined:" - -while [[ $tenant_company == '' ]] -do - echo -e "\nPlease enter the company name for the initial BACA user:" - read tenant_company -done - - -while [[ $tenant_first_name == '' ]] -do - echo -e "\nPlease enter the first name for the initial BACA user:" - read tenant_first_name -done - - -while [[ $tenant_last_name == '' ]] -do - echo -e "\nPlease enter the last name for the initial BACA user:" - read tenant_last_name -done - - -while [[ $tenant_email == '' || ! 
$tenant_email =~ ^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}$ ]] -do - echo -e "\nPlease enter a valid email address for the initial BACA user:" - read tenant_email -done - - -while [[ $tenant_user_name == '' ]] -do - echo -e "\nPlease enter the login name for the initial BACA user:" - read tenant_user_name -done - -if [[ $use_existing_tenant -eq 1 ]]; then - db2 "connect to $base_db_name" - db2 "set schema $base_db_user" - resp=$(db2 -x "select tenanttype,dailylimit from tenantinfo where tenantid = '$tenant_id'") - tenant_type=$(echo $resp | awk '{print $1}') - daily_limit=$(echo $resp | awk '{print $2}') -fi - -rdbmsconnection="DATABASE=$tenant_db_name;HOSTNAME=$baca_database_server_ip;PORT=$baca_database_port;PROTOCOL=TCPIP;UID=$tenant_db_user;PWD=$tenant_db_pwd;" -if [[ "$ssl" == "Yes" || "$ssl" == "y" || "$ssl" == "Y" ]]; then - echo - rdbmsconnection+="Security=SSL;" - echo "--- with SSL rdbstring : " $rdbmsconnection -fi - -echo -if [[ $use_existing_tenant -ne 1 ]]; then - echo "-- Information gathering is completed. Add tenant is about to begin." -else - echo "-- Information gathering is completed. Add ontology is about to begin." -fi -echo "-- Please confirm these are the desired settings:" -echo " - tenant ID: $tenant_id" -echo " - tenant type: $tenant_type" -echo " - daily limit: $daily_limit" -echo " - tenant database name: $tenant_db_name" -echo " - database server hostname/IP: $baca_database_server_ip" -echo " - database server port: $baca_database_port" -echo " - database enabled for ssl : $ssl" -if [[ $user_already_defined -ne 1 ]]; then - echo " - tenant database user will be created by this script" -else - echo " - tenant database user already exists and will not be created by this script" -fi -echo " - tenant database user: $tenant_db_user" -echo " - ontology name: $tenant_ontology" -echo " - base database: $base_db_name" -echo " - base database user: $base_db_user" -echo " - tenant company name: $tenant_company" -echo " - tenant first name: $tenant_first_name" -echo " - tenant last name: $tenant_last_name" -echo " - tenant email address: $tenant_email" -echo " - tenant login name: $tenant_user_name" -askForConfirmation - - -if [[ $user_already_defined -ne 1 ]]; then - encrypted_pwd=$(perl -e 'print crypt($ARGV[0], "pwsalt")' $tenant_db_pwd) - sudo useradd -m -p $encrypted_pwd $tenant_db_user - if [[ $? -eq 0 ]]; then - echo "User $tenant_db_user has been added to system!" - else - echo "ERROR: Failed to add a user $tenant_db_user! Please try again..." 
- exit 1 - fi - echo "setting password to not expire" - sudo chage -E -1 -M -1 $tenant_db_user -fi - -# Only create DB for new tenants -if [[ $use_existing_tenant -ne 1 ]]; then - # allow using existing DB if the flag "tenant_db_exists" is true - if [[ -z "$tenant_db_exists" || $tenant_db_exists == "false" ]]; then - cp sql/CreateDB.sql.template sql/CreateDB.sql - sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/CreateDB.sql - sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/CreateDB.sql - - echo -e "\nRunning script: sql/CreateDB.sql" - db2 -stvf sql/CreateDB.sql - fi -fi - -cp sql/CreateBacaSchema.sql.template sql/CreateBacaSchema.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/CreateBacaSchema.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/CreateBacaSchema.sql -echo -e "\nRunning script: sql/CreateBacaSchema.sql" -db2 -stvf sql/CreateBacaSchema.sql - -echo -e "\nRunning script: sql/CreateBacaTables.sql" -db2 -tf sql/CreateBacaTables.sql -echo "CONNECT RESET" -db2 "CONNECT RESET" - -cp sql/TablePermissions.sql.template sql/TablePermissions.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/TablePermissions.sql -sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/TablePermissions.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/TablePermissions.sql -echo -e "\nRunning script: sql/TablePermissions.sql" -db2 -stvf sql/TablePermissions.sql - -cp sql/LoadData.sql.template sql/LoadData.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/LoadData.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/LoadData.sql -echo -e "\nRunning script: sql/LoadData.sql" -db2 -stvf sql/LoadData.sql - -cp sql/InsertTenant.sql.template sql/InsertTenant.sql -sed -i s/\$base_db_name/"$base_db_name"/ sql/InsertTenant.sql -sed -i s/\$base_db_user/"$base_db_user"/ sql/InsertTenant.sql -sed -i s/\$tenant_id/"$tenant_id"/ sql/InsertTenant.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/InsertTenant.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/InsertTenant.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/InsertTenant.sql -sed -i s/\$baca_database_server_ip/"$baca_database_server_ip"/ sql/InsertTenant.sql -sed -i s/\$baca_database_port/"$baca_database_port"/ sql/InsertTenant.sql -sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/InsertTenant.sql -sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/InsertTenant.sql -sed -i s/\$tenant_db_pwd/"$tenant_db_pwd"/ sql/InsertTenant.sql -sed -i s/\$tenant_type/"$tenant_type"/ sql/InsertTenant.sql -sed -i s/\$daily_limit/"$daily_limit"/ sql/InsertTenant.sql -sed -i s/\$rdbmsconnection/"$rdbmsconnection"/ sql/InsertTenant.sql -echo -e "\nRunning script: sql/InsertTenant.sql" -db2 -stvf sql/InsertTenant.sql - - -cp sql/InsertUser.sql.template sql/InsertUser.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/InsertUser.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/InsertUser.sql -sed -i s/\$tenant_email/"$tenant_email"/ sql/InsertUser.sql -sed -i s/\$tenant_first_name/"$tenant_first_name"/ sql/InsertUser.sql -sed -i s/\$tenant_last_name/"$tenant_last_name"/ sql/InsertUser.sql -sed -i s/\$tenant_user_name/"$tenant_user_name"/ sql/InsertUser.sql -sed -i s/\$tenant_company/"$tenant_company"/ sql/InsertUser.sql -sed -i s/\$tenant_email/"$tenant_email"/ sql/InsertUser.sql -echo -e "\nRunning script: sql/InsertUser.sql" -db2 -stvf sql/InsertUser.sql - -echo -e "\n-- Add completed succesfully. 
Tenant ID: $tenant_id , Ontology: $tenant_ontology \n" - -echo "-- URL (replace frontend with your frontend host): https://frontend/?tid=$tenant_id&ont=$tenant_ontology" diff --git a/BACA/configuration-ha/DB2/CSVFiles/doc_class.csv b/BACA/configuration-ha/DB2/CSVFiles/doc_class.csv deleted file mode 100644 index 0d53dbd4..00000000 --- a/BACA/configuration-ha/DB2/CSVFiles/doc_class.csv +++ /dev/null @@ -1,9 +0,0 @@ -1,Balance Statement,This is a Sample -2,Bill of Lading,This is a Sample -3,Estimates,This is a Sample -4,Invoice,This is a Sample -5,Letter,This is a Sample -6,Medical Record,This is a Sample -7,Police Report,This is a Sample -8,Power of Attorney,This is a Sample -9,Pricing Schedule,This is a Sample diff --git a/BACA/configuration-ha/DB2/CreateBaseDB.bat b/BACA/configuration-ha/DB2/CreateBaseDB.bat deleted file mode 100755 index 95a53fce..00000000 --- a/BACA/configuration-ha/DB2/CreateBaseDB.bat +++ /dev/null @@ -1,32 +0,0 @@ -@echo off -SETLOCAL - -set /p base_db_name= Enter the name of the Base BACA database. If nothing is entered, we will use the following default value 'CABASEDB': -IF NOT DEFINED base_db_name SET "base_db_name=CABASEDB" - -set /p base_db_user= Enter the name of the database user for the Base BACA database. If nothing is entered, we will use the following default value 'CABASEUSER' : -IF NOT DEFINED base_db_user SET "base_db_user=CABASEUSER" - -set /P c=Are you sure you want to continue[Y/N]? -if /I "%c%" EQU "Y" goto :DOCREATE -if /I "%c%" EQU "N" goto :DOEXIT - -:DOCREATE - echo "Running the db script" - db2 CREATE DATABASE %base_db_name% AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768 - db2 CONNECT TO %base_db_name% - db2 GRANT CONNECT,DATAACCESS ON DATABASE TO USER %base_db_user% - db2 GRANT USE OF TABLESPACE USERSPACE1 TO USER %base_db_user% - db2 CONNECT RESET - db2 CONNECT TO %base_db_name% - db2 SET SCHEMA %base_db_user% - db2 CREATE TABLE TENANTINFO (tenantid varchar(128) NOT NULL, ontology varchar(128) not null,tenanttype smallint not null with default, rdbmsengine varchar(128) not null, bacaversion varchar(1024) not null, rdbmsconnection varchar(1024) for bit data default null,mongoconnection varchar(1024) for bit data default null,mongoadminconnection varchar(1024) for bit data default null,CONSTRAINT tenantinfo_pkey PRIMARY KEY (tenantid, ontology)) - db2 CONNECT RESET - goto END -:DOEXIT - echo "Exited on user input" - goto END -:END - echo "END" - -ENDLOCAL \ No newline at end of file diff --git a/BACA/configuration-ha/DB2/CreateBaseDB.sh b/BACA/configuration-ha/DB2/CreateBaseDB.sh deleted file mode 100755 index c0cd4a41..00000000 --- a/BACA/configuration-ha/DB2/CreateBaseDB.sh +++ /dev/null @@ -1,150 +0,0 @@ -#!/bin/bash - -. ./ScriptFunctions.sh - -INPUT_PROPS_FILENAME="./common_for_DB2.sh" - -if [ -f $INPUT_PROPS_FILENAME ]; then - echo "Found a $INPUT_PROPS_FILENAME. Reading in variables from that script." - . $INPUT_PROPS_FILENAME -fi - -default_basedb='BASECA' -echo -e "\n-- This script will create the BACA Base database." - -if [[ -z "$base_db_name" ]]; then - echo -e "\nEnter the name of the BACA Base database to create. (The name must be 8 chars or less). 
If nothing is entered, we will use this default value : " $default_basedb - read base_db_name - if [[ -z "$base_db_name" ]]; then - base_db_name=$default_basedb - fi - while [ ${#base_db_name} -gt 8 ]; - do - echo "Please enter a valid value for the base database name of max length 8 :" - read base_db_name; - echo ${#base_db_name}; - done -fi - -if [[ -z "$base_valid_user" ]]; then - base_valid_user=0 -fi - -while [[ $base_valid_user -ne 1 ]] -do - echo -e "\nWe need a non-admin database user that BACA will use to access your BASE database." - - if [[ -z "$base_user_already_defined" || $base_user_already_defined -ne 1 ]]; then - while [[ "$create_new_base_user" != "y" && "$create_new_base_user" != "Y" && "$create_new_base_user" != "n" && "$create_new_base_user" != "N" ]] - do - echo "Do you want this script to create a new database user for you (This will create local OS user)? (Please enter y or n)" - read create_new_base_user - done - - if [[ "$create_new_base_user" == "n" || "$create_new_base_user" == "N" ]]; then - base_user_already_defined=1 - base_valid_user=1 - else - base_user_already_defined=0 - fi - fi - - while [[ -z "$base_db_user" || $base_db_user == "" ]] - do - if [[ $base_user_already_defined -ne 1 ]]; then - echo "Please enter the name of database user to create: " - else - echo "Please enter the name of an existing database user:" - fi - read base_db_user - done - - if [[ $base_user_already_defined -ne 1 ]]; then - getent passwd $base_db_user > /dev/null - if [[ $? -eq 0 ]]; then - echo "$base_db_user already exists. Do you want to use this existing user (y/n)" - read use_existing_user - if [ "$use_existing_user" = "y" ] || [ "$use_existing_user" = "Y" ]; then - base_base_user_already_defined=1 - base_valid_user=1 - fi - else - base_valid_user=1 - fi - fi -done - -if [[ $base_user_already_defined = 1 ]]; then - base_pwdconfirmed=1 -else - base_pwdconfirmed=0 -fi - -while [[ $base_pwdconfirmed -ne 1 ]] # While pwd is not yet received and confirmed (i.e. entered the same time twice) -do - echo "Enter the password for the user: " - read -s db_user_pwd - while [[ $db_user_pwd == '' ]] # While pwd is empty... - do - echo "Enter a valid value" - read -s db_user_pwd - done - - echo "Please confirm the password by entering it again:" - read -s db_user_pwd2 - while [[ $db_user_pwd2 == '' ]] # While pwd is empty... - do - echo "Enter a valid value" - read -s db_user_pwd2 - done - - if [[ "$db_user_pwd" == "$db_user_pwd2" ]]; then - base_pwdconfirmed=1 - else - echo "The passwords do not match. Please enter the password again." - unset db_user_pwd - unset db_user_pwd2 - fi -done - -echo -echo "-- Information gathering is completed. Create base DB is about to begin." -askForConfirmation - -if [[ $db_user_pwd_b64_encoded -eq 1 ]]; then - db_user_pwd=$(echo $db_user_pwd | base64 --decode) -fi - -if [[ $base_user_already_defined -ne 1 ]]; then - echo - echo "Creating user $base_db_user..." - - encrypted_pwd=$(perl -e 'print crypt($ARGV[0], "pwsalt")' $db_user_pwd) - sudo useradd -m -p $encrypted_pwd $base_db_user - if [[ $? -eq 0 ]]; then - echo "User $base_db_user has been added to system!" - else - echo "ERROR: Failed to add a user $base_db_user! Please try again..." 
- exit 1 - fi - echo "setting password to not expire" - sudo chage -E -1 -M -1 $base_db_user -fi - -# allow using existing DB if the flag "base_db_exists" is true -if [[ -z "$base_db_exists" || $base_db_exists == "false" ]]; then - cp sql/CreateBaseDB.sql.template sql/CreateBaseDB.sql - sed -i s/\$base_db_name/"$base_db_name"/ sql/CreateBaseDB.sql - sed -i s/\$base_db_user/"$base_db_user"/ sql/CreateBaseDB.sql - echo - echo "Running script: sql/CreateBaseDB.sql" - db2 -stvf sql/CreateBaseDB.sql -fi - -cp sql/CreateBaseTable.sql.template sql/CreateBaseTable.sql -sed -i s/\$base_db_name/"$base_db_name"/ sql/CreateBaseTable.sql -sed -i s/\$base_db_user/"$base_db_user"/ sql/CreateBaseTable.sql - -echo -echo "Running script: sql/CreateBaseTable.sql" -db2 -stvf sql/CreateBaseTable.sql diff --git a/BACA/configuration-ha/DB2/UpgradeBaseDB.sh b/BACA/configuration-ha/DB2/UpgradeBaseDB.sh deleted file mode 100755 index 8409eb48..00000000 --- a/BACA/configuration-ha/DB2/UpgradeBaseDB.sh +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env bash -. ./ScriptFunctions.sh - -INPUT_PROPS_FILENAME="./common_for_DB2_Upgrade.sh" - -if [ -f $INPUT_PROPS_FILENAME ]; then - echo "Found a $INPUT_PROPS_FILENAME. Reading in variables from that script." - . $INPUT_PROPS_FILENAME -fi - -echo -e "\n-- This script will upgrade base DB" -echo - -while [[ $base_db_name == '' ]] -do - echo "Please enter a valid value for the base database name :" - read base_db_name - while [ ${#base_db_name} -gt 8 ]; - do - echo "Please enter a valid value for the base database name :" - read base_db_name; - echo ${#base_db_name}; - done -done - -while [[ -z "$base_db_user" || $base_db_user == "" ]] -do - echo "Please enter a valid value for the base database user name :" - read base_db_user -done - -echo -echo "-- Please confirm these are the desired settings:" -echo " - Base database name: $base_db_name" -echo " - Base database user name: $base_db_user" -askForConfirmation - -if [[ $SaaS != "true" || -z $SaaS ]]; then - cp sql/UpgradeBaseDB_to_1.1.sql.template sql/UpgradeBaseDB_to_1.1.sql - sed -i s/\$base_db_name/"$base_db_name"/ sql/UpgradeBaseDB_to_1.1.sql - sed -i s/\$base_db_user/"$base_db_user"/ sql/UpgradeBaseDB_to_1.1.sql - echo - echo "Running upgrade script: sql/UpgradeBaseDB_to_1.1.sql" - db2 -stvf sql/UpgradeBaseDB_to_1.1.sql -else - echo "-- Skipping UpgradeBaseDB_to_1.1.sql" -fi - -cp sql/UpgradeBaseDB_1.1_to_1.2.sql.template sql/UpgradeBaseDB_1.1_to_1.2.sql -sed -i s/\$base_db_name/"$base_db_name"/ sql/UpgradeBaseDB_1.1_to_1.2.sql -sed -i s/\$base_db_user/"$base_db_user"/ sql/UpgradeBaseDB_1.1_to_1.2.sql -echo -echo "Running upgrade script: sql/UpgradeBaseDB_1.1_to_1.2.sql" -db2 -stvf sql/UpgradeBaseDB_1.1_to_1.2.sql \ No newline at end of file diff --git a/BACA/configuration-ha/DB2/UpgradeTenantDB.sh b/BACA/configuration-ha/DB2/UpgradeTenantDB.sh deleted file mode 100755 index c1457886..00000000 --- a/BACA/configuration-ha/DB2/UpgradeTenantDB.sh +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/env bash -. ./ScriptFunctions.sh - -INPUT_PROPS_FILENAME="./common_for_DB2_Tenant_Upgrade.sh" - -if [ -f $INPUT_PROPS_FILENAME ]; then - echo "Found a $INPUT_PROPS_FILENAME. Reading in variables from that script." - . 
$INPUT_PROPS_FILENAME -fi - -echo -e "\n-- This script will upgrade tenant DB" -echo - -while [[ $tenant_db_name == '' ]] -do - echo "Please enter a valid value for the tenant database name :" - read tenant_db_name - while [ ${#tenant_db_name} -gt 8 ]; - do - echo "Please enter a valid value for the tenant database name :" - read tenant_db_name; - echo ${#tenant_db_name}; - done -done - -while [[ -z "$tenant_db_user" || $tenant_db_user == "" ]] -do - echo "Please enter a valid value for the tenant database user name :" - read tenant_db_user -done - -while [[ $tenant_ontology == '' ]] -do - echo "Please enter a valid value for the tenant ontology name :" - read tenant_ontology -done - -echo -echo "-- Please confirm these are the desired settings:" -echo " - ontology: $tenant_ontology" -echo " - tenant database name: $tenant_db_name" -echo " - tenant database user name: $tenant_db_user" -askForConfirmation - -if [[ $SaaS != "true" || -z $SaaS ]]; then - cp sql/UpgradeTenantDB_to_1.1.sql.template sql/UpgradeTenantDB_to_1.1.sql - sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/UpgradeTenantDB_to_1.1.sql - sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/UpgradeTenantDB_to_1.1.sql - sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/UpgradeTenantDB_to_1.1.sql - echo - echo "Running upgrade script: sql/UpgradeTenantDB_to_1.1.sql" - db2 -stvf sql/UpgradeTenantDB_to_1.1.sql -else - echo "-- Skipping UpgradeTenantDB_to_1.1.sql" -fi - -cp sql/UpgradeTenantDB_1.1_to_1.2.sql.template sql/UpgradeTenantDB_1.1_to_1.2.sql -sed -i s/\$tenant_db_name/"$tenant_db_name"/ sql/UpgradeTenantDB_1.1_to_1.2.sql -sed -i s/\$tenant_ontology/"$tenant_ontology"/ sql/UpgradeTenantDB_1.1_to_1.2.sql -sed -i s/\$tenant_db_user/"$tenant_db_user"/ sql/UpgradeTenantDB_1.1_to_1.2.sql -echo -echo "Running upgrade script: sql/UpgradeTenantDB_1.1_to_1.2.sql" -db2 -stvf sql/UpgradeTenantDB_1.1_to_1.2.sql \ No newline at end of file diff --git a/BACA/configuration-ha/DB2/sql/CreateBacaTables.sql b/BACA/configuration-ha/DB2/sql/CreateBacaTables.sql deleted file mode 100644 index 5c6ac1fe..00000000 --- a/BACA/configuration-ha/DB2/sql/CreateBacaTables.sql +++ /dev/null @@ -1,707 +0,0 @@ -create table doc_class -( - doc_class_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - doc_class_name VARCHAR (512) NOT NULL, - comment varchar(1024), - - CONSTRAINT doc_class_pkey PRIMARY KEY (doc_class_id), - - CONSTRAINT doc_class_doc_class_name_key UNIQUE (doc_class_name) -); - -create table doc_alias -( - doc_alias_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - doc_alias_name VARCHAR (512) NOT NULL, - language CHAR(3) NOT NULL, - - CONSTRAINT doc_alias_pkey PRIMARY KEY (doc_alias_id), - - CONSTRAINT doc_alias_doc_alias_name_key UNIQUE (doc_alias_name) -); - -create table key_class -( - key_class_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - key_class_name VARCHAR (512) NOT NULL, - datatype VARCHAR (256) NOT NULL, - mandatory BOOLEAN, - sensitive BOOLEAN, - comment VARCHAR(1024), - - CONSTRAINT key_class_pkey PRIMARY KEY (key_class_id) -); - -create table key_alias -( - key_alias_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - key_alias_name VARCHAR (512) NOT NULL, - language CHAR(3) NOT NULL, - - CONSTRAINT key_alias_pkey PRIMARY KEY (key_alias_id), - - CONSTRAINT key_alias_key_alias_name_key UNIQUE (key_alias_name) -); - -create table cword -( - cword_id INTEGER NOT NULL 
GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - cword_name VARCHAR (512) NOT NULL, - - CONSTRAINT cword_pkey PRIMARY KEY (cword_id), - - CONSTRAINT cword_cword_name_key UNIQUE (cword_name) -); - -create table doc_alias_dc -( - doc_alias_id INTEGER NOT NULL, - doc_class_id INTEGER NOT NULL, - da_count INTEGER NOT NULL, - - CONSTRAINT doc_alias_dc_pkey PRIMARY KEY (doc_alias_id, doc_class_id), - - CONSTRAINT doc_alias_dc_doc_alias_id_fkey FOREIGN KEY (doc_alias_id) REFERENCES doc_alias (doc_alias_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - constraint doc_alias_dc_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE - -); - -create table key_class_dc -( - key_class_id INTEGER NOT NULL, - doc_class_id INTEGER NOT NULL, - CONSTRAINT key_class_dc_pkey PRIMARY KEY (key_class_id, doc_class_id), - - CONSTRAINT key_class_dc_key_class_id_fkey FOREIGN KEY (key_class_id) REFERENCES key_class (key_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT key_class_dc_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table key_alias_dc -( - key_alias_id INTEGER NOT NULL, - doc_class_id INTEGER NOT NULL, - ka_count INTEGER NOT NULL, - - CONSTRAINT key_alias_dc_pkey PRIMARY KEY (key_alias_id, doc_class_id), - - CONSTRAINT key_alias_dc_key_alias_id_fkey FOREIGN KEY (key_alias_id) REFERENCES key_alias (key_alias_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT key_alias_dc_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table key_alias_kc -( - key_alias_id INTEGER NOT NULL, - - key_class_id INTEGER NOT NULL, - - CONSTRAINT key_alias_kc_pkey PRIMARY KEY (key_alias_id, key_class_id), - - CONSTRAINT key_alias_kc_key_alias_id_fkey FOREIGN KEY (key_alias_id) REFERENCES key_alias (key_alias_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT key_alias_kc_key_class_id_fkey FOREIGN KEY (key_class_id) REFERENCES key_class (key_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table cword_dc -( - doc_class_id INTEGER NOT NULL, - cword_id INTEGER NOT NULL, - cw_count INTEGER NOT NULL, - - CONSTRAINT cword_dc_pkey PRIMARY KEY (cword_id, doc_class_id), - - CONSTRAINT cword_dc_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT cword_dc_cword_id_fkey FOREIGN KEY (cword_id) REFERENCES cword (cword_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table heading -( - heading_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - heading_name VARCHAR (512) NOT NULL, - comment VARCHAR(1024), - CONSTRAINT heading_pkey PRIMARY KEY (heading_id) -); - -create table heading_alias -( - heading_alias_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - heading_alias_name VARCHAR (512) NOT NULL, - - CONSTRAINT heading_alias_pkey PRIMARY KEY (heading_alias_id), - - CONSTRAINT heading_alias_heading_alias_name_key unique (heading_alias_name) -); - -create table heading_dc -( - heading_id INTEGER NOT NULL, - - doc_class_id INTEGER NOT NULL, - - CONSTRAINT heading_dc_pkey PRIMARY KEY (heading_id, doc_class_id), - - CONSTRAINT heading_dc_heading_id_fkey FOREIGN KEY (heading_id) REFERENCES heading (heading_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT 
heading_dc_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table heading_alias_h -( - heading_alias_id INTEGER NOT NULL, - heading_id INTEGER NOT NULL, - - CONSTRAINT heading_alias_h_pkey PRIMARY KEY (heading_alias_id, heading_id), - - CONSTRAINT heading_alias_h_heading_alias_id_fkey FOREIGN KEY (heading_alias_id) REFERENCES heading_alias (heading_alias_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT heading_alias_h_heading_id_fkey FOREIGN KEY (heading_id) REFERENCES heading (heading_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table heading_alias_dc -( - heading_alias_id INTEGER NOT NULL, - doc_class_id INTEGER NOT NULL, - - CONSTRAINT heading_alias_dc_pkey PRIMARY KEY (heading_alias_id, doc_class_id), - - CONSTRAINT heading_alias_dc_heading_alias_id_fkey FOREIGN KEY (heading_alias_id) REFERENCES heading_alias (heading_alias_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT heading_alias_dc_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table pattern -( - pattern_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - pattern_name VARCHAR (512) NOT NULL, - description VARCHAR(1024), - namespace SMALLINT NOT NULL, - extraction_tool SMALLINT NOT NULL, - pattern VARCHAR(1024) NOT NULL, - predefined SMALLINT DEFAULT 0, - - CONSTRAINT pattern_pkey PRIMARY KEY (pattern_id), - - CONSTRAINT pattern_pattern_name_key UNIQUE (pattern_name) -); - -create table pattern_kc -( - pattern_id INTEGER NOT NULL, - key_class_id INTEGER NOT NULL, - pattern_type SMALLINT NOT NULL, - - CONSTRAINT pattern_kc_pkey PRIMARY KEY (pattern_id, key_class_id), - - CONSTRAINT pattern_kc_pattern_id_fkey FOREIGN KEY (pattern_id) REFERENCES pattern (pattern_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT pattern_kc_key_class_id_fkey FOREIGN KEY (key_class_id) REFERENCES key_class (key_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table user_detail -( - user_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - email VARCHAR(1024) NOT NULL, - first_name VARCHAR(512) NOT NULL, - last_name VARCHAR(512) NOT NULL, - phone VARCHAR(256), - company VARCHAR(512), - expire INTEGER, - expiry_date BIGINT, - token VARCHAR(1024) FOR BIT DATA DEFAULT NULL, - user_name VARCHAR(1024) NOT NULL, - CONSTRAINT user_detail_pkey PRIMARY KEY (user_id), - CONSTRAINT user_detail_email_key UNIQUE (email), - CONSTRAINT user_name UNIQUE (user_name) -); - -create table login_detail -( - user_id INTEGER, - role VARCHAR(32), - status BOOLEAN, - logged_in BOOLEAN DEFAULT 0, - - CONSTRAINT login_detail_user_id_fkey FOREIGN KEY (user_id) REFERENCES user_detail (user_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table integration -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - type VARCHAR(32), - url VARCHAR(1024), - user_name VARCHAR(256) DEFAULT NULL, - password VARCHAR(512) FOR BIT DATA DEFAULT NULL, - label VARCHAR(256), - status BOOLEAN, - model_id VARCHAR(1024), - api_key VARCHAR(1024) FOR BIT DATA DEFAULT NULL, - flag VARCHAR(64), - CONSTRAINT integration_pkey PRIMARY KEY (id) -); - -create table integration_dc -( - id INTEGER NOT NULL, - doc_class_id INTEGER NOT NULL, - checked SMALLINT, - - CONSTRAINT integration_dc_id_fkey FOREIGN KEY (id) REFERENCES integration (id) - ON UPDATE RESTRICT ON 
DELETE CASCADE, - - CONSTRAINT integration_dc_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT integration_dc_pkey PRIMARY KEY (id, doc_class_id) -); - -create table import_ontology -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - user_id INTEGER, - date BIGINT, - start_time BIGINT, - end_time BIGINT, - complete BOOLEAN, - failure BOOLEAN, - - CONSTRAINT import_ontology_user_id_fkey FOREIGN KEY (user_id) REFERENCES user_detail (user_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT import_ontology_pkey PRIMARY KEY (id) -); - -create table api_integrations_objectsstore -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - user_id INTEGER NOT NULL, - type VARCHAR(64), - bucket_name VARCHAR(128) NOT NULL, - endpoint VARCHAR(1024) NOT NULL, - access_key VARCHAR(1024) NOT NULL FOR BIT DATA, - access_id VARCHAR(1024) NOT NULL FOR BIT DATA, - signatureversion VARCHAR(128) NOT NULL, - forcestylepath boolean, - - CONSTRAINT api_integrations_objectsstore_id_pk PRIMARY KEY (id), - - CONSTRAINT api_integrations_objectsstore_user_detail_user_id_fk FOREIGN KEY (user_id) REFERENCES user_detail (user_id) -); - -create table smartpages_options -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - outputname VARCHAR(6), - company VARCHAR(512), - selections VARCHAR(256), - CONSTRAINT smartpages_options_pkey PRIMARY KEY (id) -); - -create table fonts -( - font_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - font_size VARCHAR(256) NOT NULL, - total_no_of_observations INTEGER, - sum_of_observations_by_no_of_pixels DOUBLE, - sum_of_square_of_observations DOUBLE, - - CONSTRAINT fonts_pkey PRIMARY KEY (font_id) -); - -create table fonts_dc -( - font_id INTEGER NOT NULL, - doc_class_id INTEGER NOT NULL, - - CONSTRAINT fonts_dc_pkey PRIMARY KEY (font_id, doc_class_id), - - CONSTRAINT fonts_dc_font_id_fkey FOREIGN KEY (font_id) REFERENCES fonts (font_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT fonts_dc_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table fonts_transid -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - transid VARCHAR(256) NOT NULL, - - CONSTRAINT fonts_transid_pkey PRIMARY KEY (id), - - CONSTRAINT fonts_transid_transid_key UNIQUE (transid) -); - -create table db_backup -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - date BIGINT NOT NULL, - frequency CHAR(15) NOT NULL, - type VARCHAR(1024) NOT NULL, - start_time BIGINT, - end_time BIGINT, - complete BOOLEAN DEFAULT 0, - failure BOOLEAN DEFAULT 0, - obj_cred_id INTEGER NOT NULL, - - CONSTRAINT db_backup_pkey PRIMARY KEY (id) - - --CONSTRAINT db_backup_obj_cred_id_fkey FOREIGN KEY (obj_cred_id) REFERENCES api_integrations_objectsstore (obj_cred_id) - --ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table key_spacing -( - key_class_id INTEGER NOT NULL, - key_class_count INTEGER, - key_class_count_doc INTEGER, - class_total_docs INTEGER, - sum_x INTEGER, - sum_x_sq INTEGER, - sum_y INTEGER, - sum_y_sq INTEGER, - - CONSTRAINT key_spacing_pkey PRIMARY KEY (key_class_id), - - CONSTRAINT key_spacing_key_class_id_fkey FOREIGN KEY (key_class_id) REFERENCES key_class (key_class_id) - ON UPDATE RESTRICT 
ON DELETE CASCADE -); - - -create table processed_file -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - transaction_id VARCHAR(256) NOT NULL, - file_name VARCHAR(1024) NOT NULL, - number_of_page INTEGER, - date BIGINT, - start_time BIGINT, - end_time BIGINT, - failed_ocr_pages INTEGER DEFAULT 0, - failed_pages INTEGER DEFAULT 0, - failed BOOLEAN DEFAULT FALSE, - - CONSTRAINT processed_file_pkey PRIMARY KEY (id), - CONSTRAINT processed_file_transaction_id_key UNIQUE (transaction_id) -); - -create table error_log -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - transaction_id VARCHAR(256), - error_code CHAR(32), - description VARCHAR(1024), - date BIGINT, - - CONSTRAINT error_log_pkey PRIMARY KEY (id), - - CONSTRAINT error_log_transaction_id_fkey FOREIGN KEY (transaction_id) REFERENCES processed_file (transaction_id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - -create table db_restore -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - start_time BIGINT, - end_time BIGINT, - complete BOOLEAN DEFAULT FALSE, - failure BOOLEAN DEFAULT FALSE, - - CONSTRAINT db_restore_pkey PRIMARY KEY (id) -); - ---flags -0 user defined and default 1. will be training set detected ---rank -relative importance number 0.0 to 1.0 -create table feature -( - doc_class_id INTEGER NOT NULL, - name VARCHAR (512) NOT NULL, - flags SMALLINT NOT NULL DEFAULT 0, - rank REAL DEFAULT 1.0, - - CONSTRAINT feature_doc_class_id_flags_name_key UNIQUE (doc_class_id ,flags, name), - - CONSTRAINT feature_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE - -); - ---status 0.uploaded 1.processing 2.text (completed status) 3.error -create table document -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - name VARCHAR(1024) NOT NULL, - doc_class_id INTEGER NOT NULL, - num_pages SMALLINT NOT NULL, - upload_date BIGINT NOT NULL, - user_uploaded INTEGER NOT NULL, - status SMALLINT NOT NULL, - error_info VARCHAR(1024), - content BLOB(250M), - - CONSTRAINT doc_doc_class_id_fkey FOREIGN KEY (doc_class_id) REFERENCES doc_class (doc_class_id) - ON UPDATE RESTRICT ON DELETE CASCADE, - - CONSTRAINT document_pkey PRIMARY KEY (id) -); - ---1. initialized 2. running 3.error 4.trained ---createdby user ---major_version developer controled no auto increment. Update for each release (1.0) ---minor version in each release increment.Reset after new major version update. 
- -create table training_log -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - status SMALLINT NOT NULL, - created_date BIGINT NOT NULL, - major_version SMALLINT NOT NULL, - minor_version SMALLINT NOT NULL, - error_info VARCHAR(1024), - created_by INTEGER NOT NULL, - json_model_input_detail BLOB(250M), - global_feature_vector BLOB(250M), - - CONSTRAINT training_log_pkey PRIMARY KEY (id) -); - ---create a sequence for minor version -CREATE SEQUENCE MINOR_VER_SEQ AS SMALLINT START WITH 1 INCREMENT BY 1 NO CYCLE NO CACHE ORDER; - ---version developer of classifier specifies -create table classifier -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - training_id INTEGER NOT NULL, - displayname VARCHAR(1024) NOT NULL, - algorithm SMALLINT NOT NULL, - accuracy real, - version SMALLINT, - model_output BLOB(250M), - json_feature_vector BLOB(250M), - json_report BLOB(250M), - - CONSTRAINT classifier_pkey PRIMARY KEY (id), - - CONSTRAINT classifier_fkey FOREIGN KEY (training_id) REFERENCES training_log (id) - ON UPDATE RESTRICT ON DELETE CASCADE -); - ---published_status active ,inactive -create table ontology -( - vid INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - default_classifier_id INTEGER NOT NULL, - name VARCHAR(128) NOT NULL, - published_status SMALLINT default 0, - published_date BIGINT NOT NULL, - published_user INTEGER NOT NULL, - - CONSTRAINT ontology_fkey FOREIGN KEY (default_classifier_id) REFERENCES classifier(id) - ON UPDATE RESTRICT ON DELETE RESTRICT -); - -create table audit_ontology -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - username VARCHAR(1024), - type VARCHAR(256), - action VARCHAR(512), - description VARCHAR(1024), - date BIGINT, - time_elapsed VARCHAR(128), - error BOOLEAN DEFAULT FALSE, - page VARCHAR(32) DEFAULT '', - - CONSTRAINT audit_ontology_pkey PRIMARY KEY (id) -); - -create table audit_login_activity -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - username VARCHAR(1024), - type VARCHAR(256), - action VARCHAR(512), - description VARCHAR(1024), - date BIGINT, - time_elapsed VARCHAR(128), - error BOOLEAN DEFAULT FALSE, - page VARCHAR(32) DEFAULT '', - - CONSTRAINT audit_login_activity_pkey PRIMARY KEY (id) -); - -create table audit_processed_files -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - username VARCHAR(1024), - type VARCHAR(256), - action VARCHAR(512), - description VARCHAR(1024), - date BIGINT, - time_elapsed VARCHAR(128), - transaction_id VARCHAR(256), - error BOOLEAN DEFAULT FALSE, - page VARCHAR(32) DEFAULT '', - - CONSTRAINT audit_processed_files_pkey PRIMARY KEY (id) -); - -create table audit_user_activity -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - username VARCHAR(1024), - type VARCHAR(256), - action VARCHAR(512), - description VARCHAR(1024), - date BIGINT, - time_elapsed VARCHAR(128), - error BOOLEAN DEFAULT FALSE, - page VARCHAR(32) DEFAULT '', - - CONSTRAINT audit_user_activity_pkey PRIMARY KEY (id) -); - -create table audit_api_activity -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - username VARCHAR(1024), - type VARCHAR(256), - action VARCHAR(512), - description VARCHAR(1024), - date BIGINT, - time_elapsed VARCHAR(128), - error BOOLEAN DEFAULT FALSE, - page VARCHAR(32) 
DEFAULT '', - - CONSTRAINT audit_api_activity PRIMARY KEY (id) -); - -create table audit_system_activity -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - username VARCHAR(1024), - type VARCHAR(256), - action VARCHAR(512), - description VARCHAR(1024), - date BIGINT, - time_elapsed VARCHAR(128), - error BOOLEAN DEFAULT FALSE, - page VARCHAR(32) DEFAULT '', - - CONSTRAINT audit_system_activity_pkey PRIMARY KEY (id) -); - -create table audit_integration_activity -( - id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), - username VARCHAR(1024), - type VARCHAR(256), - action VARCHAR(512), - description VARCHAR(1024), - date BIGINT, - time_elapsed VARCHAR(128), - error BOOLEAN DEFAULT FALSE, - page VARCHAR(32) DEFAULT '', - - CONSTRAINT audit_integration_activity_pkey PRIMARY KEY (id) -); - -CREATE OR REPLACE VIEW audit_sys_report AS SELECT audit_ontology.username, - audit_ontology.type, - audit_ontology.action, - audit_ontology.description, - audit_ontology.date, - audit_ontology.time_elapsed, - audit_ontology.error, - audit_ontology.page, - 'Ontology' AS details - FROM audit_ontology -UNION - SELECT audit_processed_files.username, - audit_processed_files.type, - audit_processed_files.action, - audit_processed_files.description, - audit_processed_files.date, - audit_processed_files.time_elapsed, - audit_processed_files.error, - audit_processed_files.page, - 'Processed files' AS details - FROM audit_processed_files -UNION - SELECT audit_login_activity.username, - audit_login_activity.type, - audit_login_activity.action, - audit_login_activity.description, - audit_login_activity.date, - audit_login_activity.time_elapsed, - audit_login_activity.error, - audit_login_activity.page, - 'Login activity' AS details - FROM audit_login_activity -UNION - SELECT audit_user_activity.username, - audit_user_activity.type, - audit_user_activity.action, - audit_user_activity.description, - audit_user_activity.date, - audit_user_activity.time_elapsed, - audit_user_activity.error, - audit_user_activity.page, - 'User activity' AS details - FROM audit_user_activity -UNION - SELECT audit_system_activity.username, - audit_system_activity.type, - audit_system_activity.action, - audit_system_activity.description, - audit_system_activity.date, - audit_system_activity.time_elapsed, - audit_system_activity.error, - audit_system_activity.page, - 'System activity' AS detailsimport_ontology - FROM audit_system_activity -UNION - SELECT audit_integration_activity.username, - audit_integration_activity.type, - audit_integration_activity.action, - audit_integration_activity.description, - audit_integration_activity.date, - audit_integration_activity.time_elapsed, - audit_integration_activity.error, - audit_integration_activity.page, - 'Integration activity' AS details - FROM audit_integration_activity -UNION - SELECT audit_api_activity.username, - audit_api_activity.type, - audit_api_activity.action, - audit_api_activity.description, - audit_api_activity.date, - audit_api_activity.time_elapsed, - audit_api_activity.error, - audit_api_activity.page, - 'API activity' AS details - FROM audit_api_activity -; diff --git a/BACA/configuration-ha/README.md b/BACA/configuration-ha/README.md deleted file mode 100644 index 9ae45484..00000000 --- a/BACA/configuration-ha/README.md +++ /dev/null @@ -1,4 +0,0 @@ -# Please Preparing your environment for Content Analyzer - -Please perform the steps described in the following page in IBM Content Analyzer Knowledge 
Center before proceed to installing the Charts. -https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/topics/tsk_preparing_baca_deploy.html diff --git a/BACA/configuration-ha/bashfunctions.sh b/BACA/configuration-ha/bashfunctions.sh deleted file mode 100755 index 9430e6ce..00000000 --- a/BACA/configuration-ha/bashfunctions.sh +++ /dev/null @@ -1,407 +0,0 @@ -#!/usr/bin/env bash - -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - -# Function to request user for their domain name - -export ICP_clustername=$(echo $DOCKER_REG_FOR_SERVICES | awk -F'[.]' '{print $1}') -export ICP_account_id="id-"$ICP_clustername"-account" - -# Login to ICP, to ensure bx pr and kubectl commands work in later functions -function loginToCluster() { - if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - echo - #echo "\x1B[1;31m Logging into ICP using: bx pr login -a https://$MASTERIP:8443 --skip-ssl-validation -u admin - # -p admin -c id-mycluster-account. \x1B[0m" - export ICP_USER_PASSWORD_DECODE=$(echo $ICP_USER_PASSWORD | base64 --decode) - #ICP 3.10 - cloudctl login -a https://$MASTERIP:8443 --skip-ssl-validation -u $ICP_USER -p $ICP_USER_PASSWORD_DECODE -c $ICP_account_id -n default - fi - if [[ $OCP_VERSION == "3.11" ]]; then - echo - export OCP_USER_PASSWORD_DECODE=$(echo $OCP_USER_PASSWORD | base64 --decode) - #echo "\x1B[1;31m Logging into OCP using: oc login https://$MASTERIP:8443 --insecure-skip-tls-verify=true -u $OCP_USER - # -p $OCP_USER_PASSWORD_DECODE. \x1B[0m" - #OCP 3.11 - oc login https://$MASTERIP:8443 --insecure-skip-tls-verify=true -u $OCP_USER -p $OCP_USER_PASSWORD_DECODE - fi -} - -# ------------------- -# HELM Client setup -# ------------------- -function downloadHelmClient() { - - - if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - echo - echo "Downloading Helm 2.9.1 from ICp" - curl -kLo helm-linux-amd64-v2.9.1.tar.gz https://$MASTERIP:8443/api/cli/helm-linux-amd64.tar.gz - echo - echo "Moving helm to /usr/local/bin and chmod 755 helm" - tar -xvf helm-linux-amd64-v2.9.1.tar.gz - chmod 755 ./linux-amd64/helm && mv ./linux-amd64/helm /usr/local/bin - rm -rf linux-amd64 - # testing Helm - echo Testing Helm CLI using: helm version --tls - helm version --tls - fi - - if [[ $OCP_VERSION == "3.11" ]]; then - echo "Downloading Helm 2.11.0 from Github" - curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz | tar xz - echo - echo "Moving helm to /usr/local/bin and chmod 755 helm" - - chmod 755 ./linux-amd64/helm && mv ./linux-amd64/helm /usr/local/bin - rm -rf linux-amd64 - - fi -} - - -function helmSetup(){ - - if [[ $ICP_VERSION == "3.1.2" ]]; then - # ICP specific setup - echo - echo Initializing Helm CLI using: helm init --client-only - helm init --client-only - echo - echo Creating clusterrolebinding tiller-cluster-admin .... - kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default - fi - - if [[ $OCP_VERSION == "3.11" ]]; then - echo Creating clusterrolebinding tiller-cluster-admin .... 
- export TILLER_NAMESPACE=tiller - oc new-project $TILLER_NAMESPACE - oc project $TILLER_NAMESPACE - oc process -f /~https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="${TILLER_NAMESPACE}" -p HELM_VERSION=v2.11.0 | oc create -f - - oc rollout status deployment tiller - oc project $KUBE_NAME_SPACE - oc policy add-role-to-user $OCP_USER "system:serviceaccount:${TILLER_NAMESPACE}:tiller" - fi - -} - -function checkHelm(){ - - if [[ $ICP_VERSION == "3.1.2" ]]; then - MAX_ITERATIONS=120 - count=0 - while [[ $( kubectl get deployment tiller-deploy --namespace kube-system | sed -n '1!p' | awk '{print $5}' ) == 0 ]] - do - if [ "$count" -eq $MAX_ITERATIONS ]; then - echo "ERROR: Failed to find tiller-deploy after $MAX_ITERATIONS tries. Please check your cluster using kubectl get deployment tiller-deploy --namespace kube-system" - return 1 - fi - echo "Checking that helm tiller is deployed ......................" - sleep 10 - ((count++)) - done - echo "Helm deployed successfully ......................" - fi -} - - - -function getWorkerIPs() { - echo "inside getWorkerIPs" - if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - export ICP_USER_PASSWORD_DECODE=$(echo $ICP_USER_PASSWORD | base64 --decode) - echo "About to get all the worker IPs from $ICP_VERSION" - echo "login -a https://$MASTERIP:8443 --skip-ssl-validation -u $ICP_USER -p $ICP_USER_PASSWORD_DECODE -c $ICP_account_id" - cloudctl login -a https://$MASTERIP:8443 --skip-ssl-validation -u $ICP_USER -p $ICP_USER_PASSWORD_DECODE -c $ICP_account_id -n default - export WORKER_IPs=$(cloudctl cm workers --json | grep "publicIP" | awk '{print $2}' | cut -d ',' -f1 | tr -d '"') - if [ -z "$WORKER_IPs" ]; then - echo "Cannot find public IP for worker nodes. Will try to check for Private IP now" - export WORKER_IPs=$(cloudctl cm workers --json | grep "privateIP" | awk '{print $2}' | cut -d ',' -f1 | tr -d '"') - echo WORKER_IPs=$WORKER_IPs - if [[ -z "$WORKER_IPs" ]]; then exit 1; fi - fi - fi - if [[ $OCP_VERSION == "3.11" ]]; then - echo "About to get all the worker IPs from $OCP_VERSION" - loginToCluster - export WORKER_IPs=$(oc get nodes | grep compute | grep [^Not]Ready | awk '{print $1}' | cut -d ',' -f1 | tr -d '"') - echo WORKER_IPs=$WORKER_IPs - if [[ -z "$WORKER_IPs" ]]; then exit 1; fi - fi - -} -function getWorkerIPBasedOnLabel() { - echo "inside getWorkerIP1s. It will get the worker IPs based on label" - - loginToCluster - if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - export WORKER_IP1s=$(kubectl get nodes --show-labels |grep worker.*$KUBE_NAME_SPACE=baca | grep [^Not]Ready | awk {'print $1'}) - fi - if [[ $OCP_VERSION == "3.11" ]]; then - export WORKER_IP1s=$(kubectl get nodes --show-labels |grep compute=true |grep celery$KUBE_NAME_SPACE'='baca | grep [^Not]Ready | awk {'print $1'}) - fi - echo $WORKER_IP1s - if [[ -z "$WORKER_IP1s" ]]; then exit 1; fi - -} -function clearAllLabels(){ - echo "About to clear ALL label nodes with in $KUBE_NAME_SPACE" - getWorkerIPs - for i in $WORKER_IPs - do - echo "Clear out previous labeling" - kubectl label nodes $i {celery$KUBE_NAME_SPACE-,mongo$KUBE_NAME_SPACE-,mongo-admin$KUBE_NAME_SPACE-} - echo - done -} -#function labelNodes() { -# clearAllLabels -# echo "About to label ALL nodes with celery$KUBE_NAME_SPACE=baca." 
-# getWorkerIPs -# for i in $WORKER_IPs -# do -# echo "Label --overwrite $i with celery$KUBE_NAME_SPACE=baca" -# kubectl label nodes --overwrite $i {celery$KUBE_NAME_SPACE=baca,mongo$KUBE_NAME_SPACE=baca,mongo-admin$KUBE_NAME_SPACE=baca} -# done -#} - -function customLabelNodes() { - loginToCluster - clearAllLabels -# echo "Clear out previous labeling" -# kubectl label nodes $i {celery$KUBE_NAME_SPACE-,mongo$KUBE_NAME_SPACE-,mongo-admin$KUBE_NAME_SPACE-,postgres$KUBE_NAME_SPACE-} - - echo "About to label --overwrite $CA_WORKERS with celery$KUBE_NAME_SPACE=baca." - echo label nodes {$CA_WORKERS} celery$KUBE_NAME_SPACE=baca - for i in $(echo $CA_WORKERS | sed "s/,/ /g") - do - echo "Label $i with celery$KUBE_NAME_SPACE=baca" - kubectl label nodes --overwrite $i celery$KUBE_NAME_SPACE=baca - echo - done - echo - echo "About to label $MONGO_WORKERS with mongo$KUBE_NAME_SPACE=baca." - for i in $(echo $MONGO_WORKERS | sed "s/,/ /g") - do - echo "Label $i with mongo$KUBE_NAME_SPACE=baca" - kubectl label nodes --overwrite $i mongo$KUBE_NAME_SPACE=baca - done - echo - echo "About to label $MONGO_ADMIN_WORKERS with mongo-admin$KUBE_NAME_SPACE=baca." - for i in $(echo $MONGO_ADMIN_WORKERS | sed "s/,/ /g") - do - echo "Label $i with mongo-admin$KUBE_NAME_SPACE=baca" - kubectl label nodes --overwrite $i mongo-admin$KUBE_NAME_SPACE=baca - done - echo -} - - - -function getNFSServer() { - #Get a list of worker IPs - if [[ $PVCCHOICE == "1" ]]; then # This is the option 1 where the script will create everything for Internal usage. - getWorkerIPBasedOnLabel - #Create directories: - echo "Creating required directory for SP by ssh into $NFS_IP" - if [ -z "$SSH_USER" ]; then - export SSH_USER="root" - fi - - if [ "$SSH_USER" == "root" ]; then - export SUDO_CMD="" - else - export SUDO_CMD="sudo " - fi - echo "Creating necessary folder in $NFS_IP..." - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/{logs,data,config}" - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/logs/{backend,frontend,callerapi,processing-extraction,pdfprocess,setup,interprocessing,classifyprocess-classify,ocr-extraction,postprocessing,reanalyze,updatefiledetail,spfrontend,redis,rabbitmq,mongo,mongoadmin,utf8process}" - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/config/backend" - - - - echo "Creating data directory on NFS ..." - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/data/{mongo,mongoadmin,redis,rabbitmq}" - - - echo "Setting owner (51000:51001) for BACA's PVC" - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD chown -R 51000:51001 /exports/smartpages/" - - - - - echo "Checking to see if NFS server is installed..." - if [[ $ICP_VERSION == "3.1.2" ]]; then - ssh $SSH_USER@$NFS_IP "$SUDO_CMD systemctl status nfs-kernel-server" - if [[ $? != "0" ]]; then - echo "We could not find nfs service. We will try to install nfs server" - ssh $SSH_USER@$NFS_IP "$SUDO_CMD apt install nfs-kernel-server && $SUDO_CMD systemctl enable nfs-kernel-server && $SUDO_CMD systemctl restart nfs-kernel-server" - - fi - fi - if [[ $OCP_VERSION == "3.11" ]]; then - ssh $SSH_USER@$NFS_IP "$SUDO_CMD systemctl status nfs-server" - if [[ $? != "0" ]]; then - echo "We could not find nfs service. 
We will try to install nfs server" - ssh $SSH_USER@$NFS_IP "$SUDO_CMD yum install nfs-utils && $SUDO_CMD systemctl enable nfs-server && $SUDO_CMD systemctl restart nfs-server" - fi - fi - - - - - #We will backup the existing /etc/exports - #Compare the icp worker ip w/ the existing IP in the /etc/exports file then insert any missing entry (IP) into /etc/exports. - echo "ssh $SSH_USER@$NFS_IP "$SUDO_CMD cp /etc/exports /etc/exports_bak"" - ssh $SSH_USER@$NFS_IP "$SUDO_CMD cp /etc/exports /etc/exports_bak" - export EXPORTS_FILE=`ssh $SSH_USER@$NFS_IP "$SUDO_CMD cat /etc/exports |grep '/exports/smartpages'" | awk '{print $2}' | cut -d'(' -f1` - echo "from exports files: $EXPORTS_FILE" - echo "from k8's : $WORKER_IP1s" - - #if [[ $? == "1" ]]; then - - echo "Inside writting to /etc/exports routine" - echo $WORKER_IP1s - - for i in $WORKER_IP1s - do - - echo $EXPORTS_FILE |grep $i - if [[ $? == "1" ]]; then - echo $i - echo "Cannot find $i in the /etc/exports file....." - echo "Writing '/exports/smartpages "$i"(rw,sync,no_root_squash)' to $NFS_IP/etc/exports file" - - ssh $SSH_USER@$NFS_IP "echo '/exports/smartpages "$i"(rw,sync,no_root_squash)' | $SUDO_CMD tee --append /etc/exports" - else - echo " $i matched" - fi - - done - - - #restart nfs service if available$KUBE_NAME_SPACE/config - if [[ $ICP_VERSION == "3.1.2" ]]; then - ssh $SSH_USER@$NFS_IP "$SUDO_CMD systemctl restart nfs-kernel-server" - fi - if [[ $OCP_VERSION == "3.11" ]]; then - ssh $SSH_USER@$NFS_IP "$SUDO_CMD systemctl restart nfs-server" - fi - - - else - echo -e "\x1B[1;32mPVCCHOICE is not defined. Therefore, you must create the following pvc name: \x1B[0m" - fi # end if of pvc=1 - -} -function calMemoryLimitedDist(){ - - echo -e "\x1B[1;32mChecking to see if bc package is installed\x1B[0m" - dpkg -l | awk {'print $2'} |grep ^bc$ > /dev/null - if [[ $? 
!= "0" ]]; then - echo "Installing bc package for resource calculation" - apt install bc -y - fi - echo CALLERAPI_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo BACKEND_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo FRONTEND_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo POST_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo PDF_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo UTF8_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo SETUP_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo OCR_EXTRACTION_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.09 * 1024" | bc)Mi" - echo CLASSIFY_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo PROCESSING_EXTRACTION_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.09 * 1024" | bc)Mi" - # echo INTER_PROCESSING_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo REANALYZE_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.045 * 1024" | bc)Mi" - echo UPDATEFILE_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo RABBITMQ_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" -# echo MINIO_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo REDIS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo MONGO_LIMITED_MEMORY="$(echo "$MONGO_SERVER_MEMORY * 0.6 * 1024" | bc)Mi" - echo MONGO_ADMIN_LIMITED_MEMORY="$(echo "$MONGO_ADMIN_SERVER_MEMORY * 0.6 * 1024" | bc)Mi" - export mongo_memory_value="$(echo "$MONGO_SERVER_MEMORY * 0.6 " | bc)" - export mongo_admin_memory_value="$(echo "$MONGO_ADMIN_SERVER_MEMORY * 0.6 " | bc)" - - - export MONGO_WIREDTIGER_LIMIT="$(echo "($mongo_memory_value -1)*0.5" | bc)" - - if [[ 1 -eq $(echo "$MONGO_WIREDTIGER_LIMIT < 0.25" |bc -l) ]];then - echo MONGO_WIREDTIGER_LIMIT='0.25' - - - else - echo "MONGO_WIREDTIGER_LIMIT=$MONGO_WIREDTIGER_LIMIT" - - fi - -# echo "mongo_admin_memory_value=$mongo_admin_memory_value" - export MONGO_ADMIN_WIREDTIGER_LIMIT="$(echo "($mongo_admin_memory_value -1)*0.5" | bc)" - - if [[ 1 -eq $(echo "$MONGO_ADMIN_WIREDTIGER_LIMIT < 0.25" |bc -l) ]];then - echo MONGO_ADMIN_WIREDTIGER_LIMIT='0.25' - - else - echo "MONGO_ADMIN_WIREDTIGER_LIMIT=$MONGO_ADMIN_WIREDTIGER_LIMIT" - fi - -} - -function calMemoryLimitedShared(){ - echo CALLERAPI_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo BACKEND_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo FRONTEND_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo POST_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo PDF_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo UTF8_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo SETUP_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo OCR_EXTRACTION_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.09 * 1024" | bc)Mi" - echo CLASSIFY_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo PROCESSING_EXTRACTION_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.09 * 1024" | bc)Mi" -# echo INTER_PROCESSING_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo REANALYZE_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.045 * 1024" | bc)Mi" - echo UPDATEFILE_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo RABBITMQ_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" -# echo 
MINIO_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo REDIS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo MONGO_LIMITED_MEMORY="$(echo "$MONGO_SERVER_MEMORY * 0.1 * 1024" | bc)Mi" - export mongo_memory_value="$(echo "$MONGO_SERVER_MEMORY * 0.1" | bc)" - echo MONGO_ADMIN_LIMITED_MEMORY="$(echo "$MONGO_ADMIN_SERVER_MEMORY * 0.1 * 1024" | bc)Mi" - export mongo_admin_memory_value="$(echo "$MONGO_ADMIN_SERVER_MEMORY * 0.1" | bc)" - -# echo "mongo_memory_value=$mongo_memory_value" - export MONGO_WIREDTIGER_LIMIT="$(echo "($mongo_memory_value -1)*0.5" | bc)" - #echo "MONGO_WIREDTIGER_LIMIT=$MONGO_WIREDTIGER_LIMIT" - if [[ 1 -eq $(echo "$MONGO_WIREDTIGER_LIMIT < 0.25" |bc -l) ]];then - echo MONGO_WIREDTIGER_LIMIT='0.25' - - else - echo "MONGO_WIREDTIGER_LIMIT=$MONGO_WIREDTIGER_LIMIT" - fi - -# echo "mongo_admin_memory_value=$mongo_admin_memory_value" - export MONGO_ADMIN_WIREDTIGER_LIMIT="$(echo "($mongo_admin_memory_value -1)*0.5" | bc)" - #echo "MONGO_WIREDTIGER_LIMIT=$MONGO_WIREDTIGER_LIMIT" - if [[ 1 -eq $(echo "$MONGO_WIREDTIGER_LIMIT < 0.25" |bc -l) ]];then - echo MONGO_ADMIN_WIREDTIGER_LIMIT='.25' - else - echo "MONGO_ADMIN_WIREDTIGER_LIMIT=$MONGO_ADMIN_WIREDTIGER_LIMIT" - fi - -} -function calNumOfContainers(){ - if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - export numOfCelery=$(kubectl get nodes --show-labels |grep worker.*celery$KUBE_NAME_SPACE=baca | wc -l) - fi - if [[ $OCP_VERSION == "3.11" ]]; then - export numOfCelery=$(oc get nodes --show-labels |grep compute=true | grep celery$KUBE_NAME_SPACE=baca | wc -l) - fi - echo CELERY_REPLICAS=$numOfCelery - echo NON_CELERY_REPLICAS=$numOfCelery - -} diff --git a/BACA/configuration-ha/common.sh b/BACA/configuration-ha/common.sh deleted file mode 100755 index 41e85f38..00000000 --- a/BACA/configuration-ha/common.sh +++ /dev/null @@ -1,29 +0,0 @@ -SERVER_MEMORY=16 -MONGO_SERVER_MEMORY=16 -MONGO_ADMIN_SERVER_MEMORY=16 -USING_HELM=y -HELM_INIT_BEFORE=n -KUBE_NAME_SPACE=sp -DOCKER_REG_FOR_SERVICES=mycluster.icp:8500/sp -LABEL_NODE=y -CA_WORKERS= -MONGO_WORKERS= -MONGO_ADMIN_WORKERS= -ICP_VERSION=3.1.2 -ICP_USER=admin -ICP_USER_PASSWORD=YWRtaW4K -BXDOMAINNAME= -MASTERIP= -SSH_USER=root -PVCCHOICE=1 -NFS_IP= -DATAPVC=sp-data-pvc -LOGPVC=sp-log-pvc -CONFIGPVC=sp-config-pvc -BASE_DB_PWD= -LDAP=n -LDAP_PASSWORD= -LDAP_URL= -LDAP_CRT_NAME= -DB_SSL=n -DB_CRT_NAME= diff --git a/BACA/configuration-ha/common_ICP_template.sh b/BACA/configuration-ha/common_ICP_template.sh deleted file mode 100755 index c06c5760..00000000 --- a/BACA/configuration-ha/common_ICP_template.sh +++ /dev/null @@ -1,27 +0,0 @@ -SERVER_MEMORY=16 -MONGO_SERVER_MEMORY=16 -MONGO_ADMIN_SERVER_MEMORY=16 -USING_HELM=y -HELM_INIT_BEFORE=n -KUBE_NAME_SPACE=sp -DOCKER_REG_FOR_SERVICES=mycluster.icp:8500/sp -LABEL_NODE=y -CA_WORKERS= -MONGO_WORKERS= -MONGO_ADMIN_WORKERS= -ICP_VERSION=3.1.2 -ICP_USER=admin -ICP_USER_PASSWORD=YWRtaW4K -BXDOMAINNAME= -MASTERIP= -SSH_USER=root -PVCCHOICE=1 -NFS_IP= -DATAPVC=sp-data-pvc -LOGPVC=sp-log-pvc -CONFIGPVC=sp-config-pvc -BASE_DB_PWD= -LDAP= -LDAP_PASSWORD= -LDAP_URL=ldap://172.16.194.107 -LDAP_CRT_NAME= \ No newline at end of file diff --git a/BACA/configuration-ha/common_OCP_template.sh b/BACA/configuration-ha/common_OCP_template.sh deleted file mode 100755 index c0bb0f7f..00000000 --- a/BACA/configuration-ha/common_OCP_template.sh +++ /dev/null @@ -1,27 +0,0 @@ -SERVER_MEMORY=16 -MONGO_SERVER_MEMORY=16 -MONGO_ADMIN_SERVER_MEMORY=16 -USING_HELM=y -HELM_INIT_BEFORE=n -KUBE_NAME_SPACE=sp 
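
The sizing arithmetic in `calMemoryLimitedDist` and `calMemoryLimitedShared` above is easier to follow with concrete numbers. A minimal worked example, assuming the common.sh defaults (`SERVER_MEMORY=16`, `MONGO_SERVER_MEMORY=16`) and the distributed path:

```bash
# Each component limit is a fixed fraction of node memory (in GB), converted to Mi.
echo "OCR_EXTRACTION_LIMITED_MEMORY=$(echo "16 * 0.09 * 1024" | bc)Mi"   # 1474.56Mi
echo "BACKEND_LIMITED_MEMORY=$(echo "16 * 0.04 * 1024" | bc)Mi"          # 655.36Mi

# Mongo gets 60% of its node in the distributed layout ...
echo "MONGO_LIMITED_MEMORY=$(echo "16 * 0.6 * 1024" | bc)Mi"             # 9830.4Mi

# ... and the WiredTiger cache is half of what remains after reserving 1 GB,
# floored at 0.25 by the script.
echo "MONGO_WIREDTIGER_LIMIT=$(echo "(16 * 0.6 - 1) * 0.5" | bc)"        # 4.3
```
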
-DOCKER_REG_FOR_SERVICES=docker-registry.default.svc:5000/sp -LABEL_NODE=y -CA_WORKERS= -MONGO_WORKERS= -MONGO_ADMIN_WORKERS= -OCP_VERSION=3.11 -OCP_USER=admin -OCP_USER_PASSWORD=YWRtaW4K -BXDOMAINNAME= -MASTERIP= -SSH_USER=root -PVCCHOICE=1 -NFS_IP= -DATAPVC=sp-data-pvc -LOGPVC=sp-log-pvc -CONFIGPVC=sp-config-pvc -BASE_DB_PWD= -LDAP=y -LDAP_PASSWORD= -LDAP_URL=ldap://172.16.194.107 -LDAP_CRT_NAME= \ No newline at end of file diff --git a/BACA/configuration-ha/createSSLCert.sh b/BACA/configuration-ha/createSSLCert.sh deleted file mode 100755 index 9ef4145b..00000000 --- a/BACA/configuration-ha/createSSLCert.sh +++ /dev/null @@ -1,205 +0,0 @@ -#!/usr/bin/env bash - -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - - -function createSSLCert() { - rm -r *.crt *.pem *.key || true - - echo -e "\x1B[1;32mAbout to create a self-signed SSL cert for ingress, celery, mongo, redis, rabbitmq....\x1B[0m" - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/tls.key -out $PWD/tls.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/tls.key -out $PWD/tls.crt -subj "/CN=127.0.0.1" - cat $PWD/tls.key $PWD/tls.crt > $PWD/tls.pem - - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/celery.key -out $PWD/celery.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/celery.key -out $PWD/celery.crt -subj "/CN=127.0.0.1" - cat $PWD/celery.key $PWD/celery.crt > $PWD/celery.pem - if [[ $HA_ENABLE = false ]]; then - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/mongo.key -out $PWD/mongo.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/mongo.key -out $PWD/mongo.crt -subj "/CN=127.0.0.1" - cat $PWD/mongo.key $PWD/mongo.crt > $PWD/mongo.pem - else - echo "create mongo and mongo admin cluster certifications" - CERT_DOMAIN="svc.cluster.local" - openssl genrsa -out $PWD/CA.key 4096 - openssl req -new -x509 -days 365 -key $PWD/CA.key -out $PWD/CA.crt \ - -subj "/C=CA/ST=NS/L=Halifax/O=IBM/CN=IBM baca" - openssl genrsa -out $PWD/certificate.key 4096 - openssl req -new -nodes -key $PWD/certificate.key -out $PWD/certificate.csr -config $PWD/openssl.cnf -extensions v3_req \ - -subj "/C=CA/ST=NS/L=Halifax/O=IBM/CN=*.${KUBE_NAME_SPACE}.${CERT_DOMAIN}" - openssl x509 -req -days 365 -in $PWD/certificate.csr -CA $PWD/CA.crt -CAkey $PWD/CA.key -set_serial 01 -out $PWD/certificate.crt - cat $PWD/certificate.key $PWD/certificate.crt > $PWD/mongo.key - cat $PWD/CA.key $PWD/CA.crt > $PWD/mongo.pem - cp $PWD/certificate.crt $PWD/mongo.crt - fi - - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/public.crt -out $PWD/public.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/private.key -out $PWD/public.crt -subj "/CN=127.0.0.1" - - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/redis.key -out $PWD/redis.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/redis.key -out $PWD/redis.crt -subj "/CN=127.0.0.1" - cat $PWD/redis.key $PWD/redis.crt > $PWD/redis.pem - echo "changing file permissions for redis.key ..." 
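
`createSSLCert` above writes its self-signed key/certificate pairs (and, in the HA case, a small CA-signed cluster certificate) into the working directory; the RabbitMQ pair follows just below. Before the secrets are created, the output can be inspected with a sketch like this:

```bash
# Print subject, issuer, and validity of the certificates generated by createSSLCert.
# All files are written to the current directory by the function.
for c in tls celery mongo redis rabbitmq; do
  echo "== $c.crt =="
  openssl x509 -in "$PWD/$c.crt" -noout -subject -issuer -dates
done
```
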
- chmod 600 $PWD/redis.key - - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/rabbitmq.key -out $PWD/rabbitmq.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/rabbitmq.key -out $PWD/rabbitmq.crt -subj "/CN=127.0.0.1" - cat $PWD/rabbitmq.key $PWD/rabbitmq.crt > $PWD/rabbitmq.pem - - -} -function createSecret (){ - - echo -e "\x1B[1;32mAbout to create a secrets for ingress, celery, mongo, redis, rabbitmq....\x1B[0m" - echo "kubectl -n $KUBE_NAME_SPACE create secret tls baca-ingress-secret --key $PWD/tls.key --cert $PWD/tls.crt" - kubectl -n $KUBE_NAME_SPACE create secret tls baca-ingress-secret --key $PWD/tls.key --cert $PWD/tls.crt \ - --dry-run -o yaml | kubectl apply -f - - -# if [[ $DB_SSL == "y" || $DB_SSL == "Y" ]]; then -# echo "kubectl -n sp create secret generic baca-db2-secret --from-file=$PWD/db2-cert.arm" -# kubectl -n sp create secret generic baca-db2-secret --from-file=$PWD/db2-cert.arm -# fi - if [[ ($LDAP_URL =~ ^'ldaps' && ! -z $LDAP_CRT_NAME) && ($DB_SSL == "n") ]]; then - echo "kubectl -n $KUBE_NAME_SPACE create secret generic with LDAP certs AND no DB2 cert " - kubectl -n $KUBE_NAME_SPACE create secret generic baca-secrets$KUBE_NAME_SPACE \ - --from-file=$PWD/celery.pem --from-file=$PWD/celery.crt --from-file=$PWD/celery.key \ - --from-file=$PWD/mongo.pem --from-file=$PWD/mongo.crt --from-file=$PWD/mongo.key \ - --from-file=$PWD/public.crt --from-file=$PWD/private.key \ - --from-file=$PWD/redis.pem --from-file=$PWD/redis.key --from-file=$PWD/redis.crt \ - --from-file=$PWD/rabbitmq.pem --from-file=$PWD/rabbitmq.key --from-file=$PWD/rabbitmq.crt \ - --from-file=$PWD/$LDAP_CRT_NAME \ - --dry-run -o yaml | kubectl apply -f - - elif [[ ($LDAP_URL =~ ^'ldaps' && ! -z $LDAP_CRT_NAME) && ($DB_SSL == "y" && ! -z $DB_CRT_NAME) ]]; then - echo "kubectl -n $KUBE_NAME_SPACE create secret generic with DB certs AND LDAP certs " - kubectl -n $KUBE_NAME_SPACE create secret generic baca-secrets$KUBE_NAME_SPACE \ - --from-file=$PWD/celery.pem --from-file=$PWD/celery.crt --from-file=$PWD/celery.key \ - --from-file=$PWD/mongo.pem --from-file=$PWD/mongo.crt --from-file=$PWD/mongo.key \ - --from-file=$PWD/public.crt --from-file=$PWD/private.key \ - --from-file=$PWD/redis.pem --from-file=$PWD/redis.key --from-file=$PWD/redis.crt \ - --from-file=$PWD/rabbitmq.pem --from-file=$PWD/rabbitmq.key --from-file=$PWD/rabbitmq.crt \ - --from-file=$PWD/$LDAP_CRT_NAME \ - --from-file=$PWD/$DB_CRT_NAME \ - --dry-run -o yaml | kubectl apply -f - - elif [[ ($DB_SSL == "y" && ! 
-z $DB_CRT_NAME) && ($LDAP_URL != ^'ldaps') ]]; then - echo "kubectl -n $KUBE_NAME_SPACE create secret generic with DB certs AND NO LDAP certs " - kubectl -n $KUBE_NAME_SPACE create secret generic baca-secrets$KUBE_NAME_SPACE \ - --from-file=$PWD/celery.pem --from-file=$PWD/celery.crt --from-file=$PWD/celery.key \ - --from-file=$PWD/mongo.pem --from-file=$PWD/mongo.crt --from-file=$PWD/mongo.key \ - --from-file=$PWD/public.crt --from-file=$PWD/private.key \ - --from-file=$PWD/redis.pem --from-file=$PWD/redis.key --from-file=$PWD/redis.crt \ - --from-file=$PWD/rabbitmq.pem --from-file=$PWD/rabbitmq.key --from-file=$PWD/rabbitmq.crt \ - --from-file=$PWD/$DB_CRT_NAME \ - --dry-run -o yaml | kubectl apply -f - - else - echo "kubectl -n $KUBE_NAME_SPACE create secret generic with no LDAP and DB2 certs" - kubectl -n $KUBE_NAME_SPACE create secret generic baca-secrets$KUBE_NAME_SPACE \ - --from-file=$PWD/celery.pem --from-file=$PWD/celery.crt --from-file=$PWD/celery.key \ - --from-file=$PWD/mongo.pem --from-file=$PWD/mongo.crt --from-file=$PWD/mongo.key \ - --from-file=$PWD/public.crt --from-file=$PWD/private.key \ - --from-file=$PWD/redis.pem --from-file=$PWD/redis.key --from-file=$PWD/redis.crt \ - --from-file=$PWD/rabbitmq.pem --from-file=$PWD/rabbitmq.key --from-file=$PWD/rabbitmq.crt \ - --dry-run -o yaml | kubectl apply -f - - fi - -} -function createMongoSecrets (){ -echo -e "\x1B[1;32mAbout to create mongo Secrets....\x1B[0m" -if [[ -z "$MONGOADMINENTRYPASSWORD" && -z "$MONGOADMINUSER" && -z "$MONGOADMINPASSWORD" ]]; then - echo -e "\x1B[1;32mCreating mongo admin Secrets using random values....\x1B[0m" - export MONGOADMINENTRYPASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-29) - export MONGOADMINUSER=$(openssl rand -base64 12 | tr -d "=+/" | cut -c1-29) - export MONGOADMINPASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-29) - - kubectl -n $KUBE_NAME_SPACE create secret generic baca-mongo-admin \ - --from-literal=MONGOADMINENTRYPASSWORD="$MONGOADMINENTRYPASSWORD" \ - --from-literal=MONGOADMINUSER="$MONGOADMINUSER" \ - --from-literal=MONGOADMINPASSWORD="$MONGOADMINPASSWORD" \ - --dry-run -o yaml | kubectl apply -f - -else - echo -e "\x1B[1;32mCreating mongo admin Secret based on custom values for MONGOADMINENTRYPASSWORD, MONGOADMINUSER, MONGOADMINPASSWORD\x1B[0m" - kubectl -n $KUBE_NAME_SPACE create secret generic mongo-admin \ - --from-literal=MONGOADMINENTRYPASSWORD="$MONGOADMINENTRYPASSWORD" \ - --from-literal=MONGOADMINUSER="$MONGOADMINUSER" \ - --from-literal=MONGOADMINPASSWORD="$MONGOADMINPASSWORD" \ - --dry-run -o yaml | kubectl apply -f - -fi - -if [[ -z "$MONGOENTRYPASSWORD" && -z "$MONGOUSER" && -z "$MONGOPASSWORD" ]] ; then - echo -e "\x1B[1;32mCreating mongo Secrets using random values....\x1B[0m" - export MONGOENTRYPASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-29) - export MONGOUSER=$(openssl rand -base64 12 | tr -d "=+/" | cut -c1-29) - export MONGOPASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-29) - kubectl -n $KUBE_NAME_SPACE create secret generic baca-mongo \ - --from-literal=MONGOENTRYPASSWORD="$MONGOENTRYPASSWORD" \ - --from-literal=MONGOUSER="$MONGOUSER" \ - --from-literal=MONGOPASSWORD="$MONGOPASSWORD" \ - --dry-run -o yaml | kubectl apply -f - -else - echo -e "\x1B[1;32mCreating mongo Secret based on custom values for MONGOENTRYPASSWORD, MONGOUSER, MONGOPASSWORD\x1B[0m" - kubectl -n $KUBE_NAME_SPACE create secret generic mongo \ - --from-literal=MONGOENTRYPASSWORD="$MONGOENTRYPASSWORD" \ - 
--from-literal=MONGOUSER="$MONGOUSER" \ - --from-literal=MONGOPASSWORD="$MONGOPASSWORD" \ - --dry-run -o yaml | kubectl apply -f - -fi - -} -function createLDAPSecret(){ - -if [[ $LDAP == "y" && $LDAP_PASSWORD != "" ]]; then - echo -e "\x1B[1;32mAbout to create LDAP Secret....\x1B[0m" - echo -e "\x1B[1;32mCreating LDAP Secret....\x1B[0m" - export LDAP_PASSWORD_DECODE=$(echo $LDAP_PASSWORD | base64 --decode) - kubectl -n $KUBE_NAME_SPACE create secret generic baca-ldap \ - --from-literal=LDAP_PASSWORD="$LDAP_PASSWORD_DECODE" \ - --dry-run -o yaml | kubectl apply -f - -fi - -} -function createBaseDbSecret(){ -echo -e "\x1B[1;32mAbout to create secret for Base DB....\x1B[0m" -if [[ -z $BASE_DB_PWD ]]; then - echo -e "\x1B[1;32m Cannot find BASED_DB_PWD from common.sh..Exiting !!\x1B[0m" - exit 1 -else - echo -e "\x1B[1;32mCreating Base DB secret....\x1B[0m" - kubectl -n $KUBE_NAME_SPACE create secret generic baca-basedb \ - --from-literal=BASE_DB_PWD="$BASE_DB_PWD" \ - --dry-run -o yaml | kubectl apply -f - -fi -} - -function createRabbitmaSecret(){ -echo -e "\x1B[1;32mAbout to create secret for RabbitMQ....\x1B[0m" - -export rabbitmq_admin_password=$(openssl rand -base64 10 | tr -d "=+/" | cut -c1-29) -export rabbitmq_erlang_cookie=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-29) -export rabbitmq_password=$(openssl rand -base64 10 | tr -d "=+/" | cut -c1-29) -export rabbitmq_user=$(openssl rand -base64 6 | tr -d "=+/" | cut -c1-29) -export rabbitmq_management_password=$(openssl rand -base64 10 | tr -d "=+/" | cut -c1-29) -export rabbitmq_management_user=$(openssl rand -base64 6 | tr -d "=+/" | cut -c1-29) - -kubectl -n $KUBE_NAME_SPACE create secret generic baca-rabbitmq \ ---from-literal=rabbitmq-admin-password="$rabbitmq_admin_password" \ ---from-literal=rabbitmq-erlang-cookie="$rabbitmq_erlang_cookie" \ ---from-literal=rabbitmq-password="$rabbitmq_password" \ ---from-literal=rabbitmq-user="$rabbitmq_user" \ ---from-literal=rabbitmq-management-password="$rabbitmq_management_password" \ ---from-literal=rabbitmq-management-user="$rabbitmq_management_user" \ ---dry-run -o yaml | kubectl apply -f - - - -} - -function createRedisSecret(){ -echo -e "\x1B[1;32mAbout to create secret for Redis....\x1B[0m" -export redis_password=$(openssl rand -base64 10 | tr -d "=+/" | cut -c1-29) -kubectl -n $KUBE_NAME_SPACE create secret generic baca-redis \ ---from-literal=redis-password="$redis_password" \ ---dry-run -o yaml | kubectl apply -f - -} \ No newline at end of file diff --git a/BACA/configuration-ha/delete_ContentAnalyzer.sh b/BACA/configuration-ha/delete_ContentAnalyzer.sh deleted file mode 100755 index 95b9248b..00000000 --- a/BACA/configuration-ha/delete_ContentAnalyzer.sh +++ /dev/null @@ -1,117 +0,0 @@ -#!/usr/bin/env bash -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - -. ./common.sh -. ./bashfunctions.sh - -today=`date +%Y-%m-%d.%H:%M:%S` -echo $today - -if [ -z "$KUBE_NAME_SPACE" ] -then - echo -e "\x1B[1;31mThe KUBE_NAME_SPACE is not set. The script will exit. To delete everything in the IBM Business Automation Content Analyzer namespace, set the KUBE_NAME_SPACE variable to the name of the namespace where IBM Business Automation Content Analyzer is deployed and rerun. :\x1B[0m" - exit -fi - -if [ $KUBE_NAME_SPACE == "default" ] -then - echo -e "\x1B[1;31mThe KUBE_NAME_SPACE is set to default. The script will exit. We cannot delete all resources from the default namespace. 
To delete everything in the IBM Business Automation Content Analyzer namespace, set the KUBE_NAME_SPACE variable to the name of the namespace where IBM Business Automation Content Analyzer is deployed and rerun. :\x1B[0m" - exit -fi - -# confirm they want to delete -echo -echo -e "\x1B[1;31mThis script will DELETE all the resources, including services, deployments, and pvc, in the namespace : $KUBE_NAME_SPACE . And then delete the namespace $KUBE_NAME_SPACE \x1B[0m" -echo -echo -e "\x1B[1;31mPlease only execute if you are SURE you want to DELETE everything from your namespace $KUBE_NAME_SPACE . \x1B[0m" -echo -echo -e "\x1B[1;31mWARNING: Please note that on ICP this script may not be able to successfully remove all the pods. The pods and the namespace might be left in 'terminating' state . \x1B[0m" -echo - -while [[ $deleteconfirm != "y" && $deleteconfirm != "n" && $deleteconfirm != "yes" && $deleteconfirm != "no" ]] # While deleteconfirm is not y or n... -do - echo -e "\x1B[1;31mWould you like to continue (Y/N):\x1B[0m" - read deleteconfirm - deleteconfirm=$(echo "$deleteconfirm" | tr '[:upper:]' '[:lower:]') -done - - -if [[ $deleteconfirm == "n" || $deleteconfirm == "no" ]] -then - exit -fi - -#Logon to kubectl -loginToCluster - - -echo "----- Deleting Celery ..." -cwd=$(pwd) - -#export HELM="./helm-chart/baca-celery" -#export HELM1="./helm-chart/baca-userportal" -#echo -#echo "cd ${HELM}" -#cd ${HELM} - -echo -if [[ $ICP_VERSION == "3.1.2" ]]; then -echo "helm delete celery${KUBE_NAME_SPACE} --purge --tls" -helm delete celery${KUBE_NAME_SPACE} --purge --tls -fi -if [[ $OCP_VERSION == "3.11" ]]; then -echo "helm delete celery${KUBE_NAME_SPACE} --purge " -helm delete celery${KUBE_NAME_SPACE} --purge -fi - -echo -echo "sleep for 120 secs to wait for celery pods to complete termination...." - -sleep 120 -# -#echo -#echo "return to previous directory: ${cwd}" -#cd ${cwd} - -echo ----- Deleting all BACA resources from namespace : $KUBE_NAME_SPACE -set +e -kubectl delete -n $KUBE_NAME_SPACE --all deploy,svc,pvc,pods --force --grace-period=0 -kubectl delete -n $KUBE_NAME_SPACE secret baca-ingress-secret baca-secrets$KUBE_NAME_SPACE baca-userportal-ingress-secret baca-mongo baca-mongo-admin baca-ldap baca-basedb baca-rabbitmq baca-redis -if [[ $ICP_VERSION == "3.1.2" ]]; then - kubectl delete -n $KUBE_NAME_SPACE rolebinding baca-clusterrole-rolebinding - kubectl delete -n $KUBE_NAME_SPACE clusterrole baca-clusterrole - kubectl delete -n $KUBE_NAME_SPACE psp baca-psp -fi -set -e - - - - -# only delete PVC for internal/dev env. -if [[ $PVCCHOICE == "1" ]]; then - echo ---- Deleting persistent volumes. - count=`kubectl -n $KUBE_NAME_SPACE get pv | awk {'print $1'}| grep ^sp-.*${KUBE_NAME_SPACE}|wc | awk {'print $1'}` - if [[ $count != "0" ]]; then - kubectl -n $KUBE_NAME_SPACE delete pv `kubectl -n $KUBE_NAME_SPACE get pv | awk {'print $1'}| grep ^sp-.*${KUBE_NAME_SPACE}` - fi - echo ---Clean up all pvc subdirectories. You need to run setup.sh or init_deployment.sh again to have these directories re-created. 
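
Before the NFS cleanup that follows, a short, hypothetical check that the deletions above really emptied the namespace and removed the BACA persistent volumes:

```bash
# Run this before the namespace itself is removed. Apart from the namespace's
# default service-account token, nothing should be left behind.
kubectl get all,pvc,secrets -n $KUBE_NAME_SPACE

# The PV pattern mirrors the grep used by the delete script above.
kubectl get pv | grep "^sp-.*${KUBE_NAME_SPACE}" || echo "no BACA persistent volumes remain"
```
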
-# ssh root@$NFS_IP rm -rf /exports/smartpages/$KUBE_NAME_SPACE/* - if [ -z "$SSH_USER" ]; then - export SSH_USER="root" - fi - - if [ "$SSH_USER" == "root" ]; then - export SUDO_CMD="" - else - export SUDO_CMD="sudo " - fi - ssh $SSH_USER@$NFS_IP "$SUDO_CMD rm -rf /exports/smartpages/$KUBE_NAME_SPACE/*" - - -fi - diff --git a/BACA/configuration-ha/generateMemoryValues.sh b/BACA/configuration-ha/generateMemoryValues.sh deleted file mode 100755 index 0e6cf3ea..00000000 --- a/BACA/configuration-ha/generateMemoryValues.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env bash -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# -. ./bashfunctions.sh -. ./common.sh - -echo -e "\x1B[1;32mThis will generate recommended values for setting memory resources in Business Automation Content Analyzer (CA) product.\x1B[0m" -echo -e "\x1B[1;32mUse \"distributed\" flag when you have an distribute environment where mongo DB, mongo-admin DB, and CA processing components are their own nodes. Otherwise, use \"limited\" flag \x1B[0m" -echo -e "\x1B[1;32mThese values may need to be adjusted depending on your workload\x1B[0m" - - -if [[ -z $1 ]]; then - echo -e "\x1B[1;31mYou need to pass in either \"distributed\" or \"limited\" to use this script\x1B[0m" - exit 1 -fi - - -if [[ $1 == "distributed" ]]; then - calMemoryLimitedDist - calNumOfContainers -elif [[ $1 == "limited" ]]; then - calMemoryLimitedShared - calNumOfContainers -fi \ No newline at end of file diff --git a/BACA/configuration-ha/init_deployments.sh b/BACA/configuration-ha/init_deployments.sh deleted file mode 100755 index 59d31c3b..00000000 --- a/BACA/configuration-ha/init_deployments.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/usr/bin/env bash -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - -. ./common.sh -. ./bashfunctions.sh -. ./createSSLCert.sh - -# Login (if necessary) -loginToCluster - -#Creating psp and clusterrole for BACA -export HA_ENABLE=true - - -# Create Kube namespace -echo "\x1B[1;32mCreating $KUBE_NAME_SPACE namespace \x1B[0m" -if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - kubectl create namespace $KUBE_NAME_SPACE -fi - -if [[ $OCP_VERSION == "3.11" ]]; then - oc new-project $KUBE_NAME_SPACE - oc project $KUBE_NAME_SPACE -fi - -if [[ $ICP_VERSION == "3.1.2" ]]; then - checkPsp=$(kubectl get psp |grep baca |wc -l) - - if [[ $checkPsp == "0" ]]; then - - echo -e "\x1B[1;32mCreating psp and clusterrole for BACA\x1B[0m" - kubectl -n $KUBE_NAME_SPACE apply -f ./baca-psp.yaml - echo -e "\x1B[1;32mCreating rolebinding for BACA\x1B[0m" - kubectl -n $KUBE_NAME_SPACE create rolebinding baca-clusterrole-rolebinding --clusterrole=baca-clusterrole --group=system:serviceaccounts:$KUBE_NAME_SPACE - - fi -fi - -if [[ $OCP_VERSION == "3.11" ]]; then - # Allows images to run as the root UID if no USER in specified in the Dockerfile. - oc adm policy add-scc-to-group anyuid system:authenticated -fi - -#label nodes -if [[ ($LABEL_NODE == "y" || $LABEL_NODE == "Y") ]]; then - customLabelNodes -else - echo -e "\x1B[1;32mLABEL_NODE and LABEL_NODE_BY_PARAM parameters are not defined. 
Therefore, you must label your nodes accordingly\x1B[0m" -fi - - -# Create nfs, and pv/pvc -#getNFSServer - - -#Create SSL cert and secret -createSSLCert -createSecret -createMongoSecrets -createLDAPSecret -createBaseDbSecret -createRabbitmaSecret -createRedisSecret -if [[ $PVCCHOICE == "1" ]]; then - echo -e "\x1B[1;32mSetting up PV/PVC storage\x1B[0m" - getNFSServer - ./init_persistent.sh -fi - -echo -e "\x1B[1;32mCalling pre-setup scripts to setup pvc for Mongo and Mongo-admin\x1B[0m" -cd mongo && ./pre-setup.sh -cd .. -cd mongoadmin && ./pre-setup.sh -cd .. - - -#Helm client download and initialization -if [[ $USING_HELM == "y" || $USING_HELM == "yes" ]]; then - if [[ -z $HELM_INIT_BEFORE || $HELM_INIT_BEFORE == "n" || $HELM_INIT_BEFORE == "no" ]]; then - - # setup helm client - downloadHelmClient - - # setup helm on cluster - helmSetup - - # ensure tiller-deploy is successful on cluster - checkHelm - fi -fi - diff --git a/BACA/configuration-ha/init_persistent.sh b/BACA/configuration-ha/init_persistent.sh deleted file mode 100755 index a731d486..00000000 --- a/BACA/configuration-ha/init_persistent.sh +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env bash - -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - -. ./common.sh - - -cat sppersistent.yaml | sed s/\$NFS_IP/"$NFS_IP"/ | sed s/\$KUBE_NAME_SPACE/"$KUBE_NAME_SPACE"/ | sed s/\$DATAPVC/"$DATAPVC"/ | sed s/\$LOGPVC/"$LOGPVC"/ | sed s/\$CONFIGPVC/"$CONFIGPVC"/ |kubectl apply -f - - diff --git a/BACA/configuration-ha/mongo/README.md b/BACA/configuration-ha/mongo/README.md deleted file mode 100644 index 7c8a941f..00000000 --- a/BACA/configuration-ha/mongo/README.md +++ /dev/null @@ -1,119 +0,0 @@ -# Mongodb - -[Mongodb](https://www.mongodb.com/) is a general purpose, document-based, distributed database built for modern application developers and for the cloud era. No database is more productive to use - -## TL;DR; - -```bash -$ helm install stable/mongo-ha -``` - -By default this chart install 12 pods total: - * three pods containing a mongos router - * three pods containing a mongodb config server - * three pods containing a mongdb shard - * three pods containing a mongdb shard -## Introduction - -This chart bootstraps a[Mongodb](https://www.mongodb.com/) highly available Shard+Replica statefulset in a [Kubernetes](http://kubernetes.io) cluster using the Helm package manager. - -## Prerequisites - -- Kubernetes 1.8+ with Beta APIs enabled -- PV provisioner support in the underlying infrastructure or an existing PVC claim created when running `init_deployments.sh` -- PV for shards and replicas will be created in generate.sh -- Change the values for the `reposittory` and `tag` under `image` and tag to match your mongo cluster environment. For example: -``` -image: - repository: mycluster.com:8500/sp/mongocluster - tag: latest - pullPolicy: Always -``` - -mongocluster image can be downloaded from TBD -The current default namespace is `sp`. If you have different namespace, please make sure you update generate.sh as well. Next version will fixed this issue. -openssl.cnf and ssl_generator.sh are used to create x509 certificate for mongo cluster. -## Upgrading the Chart - -You can use Helm to update MongoCluster version in a live release. 
Assuming your release is named as `my-release`, get the values using the command: - -## Installing the Chart - -To install the chart - -```bash -sh generate.sh -``` - -The command will generate templates for mongodb shards and replicas, save them into templates folder. And then create values.yaml based on values-base.yaml. It will deploys Mongodb Cluster on the Kubernetes cluster in the default configuration. By default this chart install 2 shards, 3 mongodb config and 3 mongos router. - -> **Tip**: List all releases using `helm list` - -## Uninstalling the Chart - -To uninstall/delete the deployment: - -```bash -$ helm delete --purge --tls -``` - -The command removes all the Kubernetes components associated with the chart and deletes the release. - -## Configuration - -The following table lists the configurable parameters of the MongoDB chart and their default values. - -| Parameter | Description | Default | -|:-------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------| -| `image.repository` | Mongodb image | `mongocluster` | -| `image.tag` | Mongodb tag | `latest` | -| `image.pullPolicy` | Pull Image policy | `Always` | -| `storageClassName` | Specifies storage class name | local-storage | -| `nfsIP` | The NFS location | | -| `nameSpace` | use kubernetes namespace | `sp` | -| `wiredTigerCache` | mondo db cache limitiation | `0.5` | -| `secretVolume` | Where the certification stored | created from setup.sh script | -| `logs.claimname` | Where the location of log, depends on setup.sh | `` | -| `logs.path` | log path inside the pod | `/var/log/` | -| `logs.logLevel` | log level | `debug` | -| `mongoDBConfig.storageCapacity` | Mongodb config storage size | `10Gi` | -| `mongoDBConfig.labelName` | label name | mongodb-configdb | -| `mongoDBConfig.replicas` | mongodb config replicas, variable in generate.sh | `` | -| `mongoDBConfig.replicaSetName` | replica set name | `ConfigDBRepSet` | -| `mongoDBConfig.resources` | CPU/Memory for init Container node resource requests/limits | `{}` | -| `mongosRouter.name` | name of the mongos router | `mongos-router` | -| `mongosRouter.replicas` | mongodb router replicas, need to change in generate.sh | `` | -| `mongosRouter.configReplset` | generate by generate.sh, do not change. | | -| `mongoDBShard.storageCapacity` | Mongodb shard storage size | `15Gi` | -| `mongoDBShard.replicas` | mongodb shard replicas, variable in generate.sh | `{}` | -| `logs.logLevel` | log level | `[]` | - -Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example, - -```bash -$ helm install \ - --set image=mongocluster \ - --set tag=latest \ - stable/mongo-ha -``` - -The above command sets the Mongodb server within `default` namespace. - - -> **Tip**: There is no [values.yaml](values.yaml) file, and will generate [values.yaml](values.yaml) on the fly based on [values-base.yaml](values-base.yaml) - -Persistence ------------ - -This generate.sh provisions a PersistentVolume and pods will create PersistentVolumeClaim and mounts corresponding persistent volume under the same storage class name to default location `/export/smartpages/`. You'll need physical storage available in the Kubernetes cluster for this to work. 
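
As a sketch of how to confirm what was provisioned (assuming the default `sp` namespace and the volume naming used by the generated templates), the storage can be checked after deployment with:

```bash
# NFS-backed PersistentVolumes created for the config servers and shards
kubectl get pv | grep -E "configdb|shard"

# PersistentVolumeClaims bound by the StatefulSets
kubectl get pvc -n sp
```
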
- -Configure TLS -------------- - -Always enable TLS for mongodb containers, acquire TLS certificates from a CA or create self-signed certificates. While creating / acquiring certificates ensure the corresponding domain names are set as per the standard [DNS naming conventions](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-identity) in a Kubernetes StatefulSet (for a distributed mongodb setup). Then create a secret using - -```bash -$ kubectl create secret generic baca-secrets${NAMESPACE} --from-file=path/to/private.key --from-file=path/to/public.crt -``` - -Then install the chart, specifying the path you'd like to mount to the TLS secret: diff --git a/BACA/configuration-ha/mongo/js_base/add_shard.js b/BACA/configuration-ha/mongo/js_base/add_shard.js deleted file mode 100644 index 43ebc4e1..00000000 --- a/BACA/configuration-ha/mongo/js_base/add_shard.js +++ /dev/null @@ -1,19 +0,0 @@ -var server_list_s = "$SHARD_LIST_S"; -var shard_id = "$SHARD_ID"; -var shard_string = shard_id.concat('\/', server_list_s); -var result; -print("First try to add shard"); -do { - sleep(5000); - result = sh.addShard(shard_string); - if (result.ok == 0) { - print("Failed to add shard and retry in 5 seconds"); - } - // if (result.code == 23) { - // print("already initialized"); - // break; - // } - printjson(result); -} while (result.ok != 1) -// printjson(result); - diff --git a/BACA/configuration-ha/mongo/js_base/mongo_initiate.js b/BACA/configuration-ha/mongo/js_base/mongo_initiate.js deleted file mode 100644 index eed8d9bb..00000000 --- a/BACA/configuration-ha/mongo/js_base/mongo_initiate.js +++ /dev/null @@ -1,27 +0,0 @@ -var server_list_s = "$SERVER_LIST_S"; -var server_list = server_list_s.split(","); -var cfg_id = "$CFG_ID"; -var member_list = []; -for (i = 0; i < server_list.length; i++) { - member_list.push({_id: i, host: server_list[i]}); -} -var cfg = { - _id: cfg_id, - version: 1, - members: member_list -} -print("First try to initiate"); -var result; -do { - sleep(5000); - result = rs.initiate(cfg); - if(result.ok==0) { - print("Failed to initiate and retry in 5 seconds"); - } - if(result.code==23){ - print("already initialized"); - break; - } - printjson(result); -} while (result.ok != 1) -// printjson(result); diff --git a/BACA/configuration-ha/mongo/openssl.cnf b/BACA/configuration-ha/mongo/openssl.cnf deleted file mode 100644 index 7d3892c9..00000000 --- a/BACA/configuration-ha/mongo/openssl.cnf +++ /dev/null @@ -1,38 +0,0 @@ -[req] -default_bits = 2048 -utf8 = yes -distinguished_name = req_distinguished_name -req_extensions = v3_req - -[req_distinguished_name] -countryName = Country Name (2 letter code) -countryName_default = CA -countryName_min = 2 -countryName_max = 2 -stateOrProvinceName = State or Province Name (full name) -stateOrProvinceName_default = NS -stateOrProvinceName_max = 64 -localityName = Locality Name (eg, city) -localityName_default = Halifax -localityName_max = 64 -organizationName = Organization Name (eg, company) -organizationName_default = IBM -organizationName_max = 64 -organizationalUnitName = Organizational Unit Name (eg, section) -organizationalUnitName_default = baca -organizationalUnitName_max = 64 -commonName = *.svc.cluster.local -commonName_max = 64 - -[v3_req] -basicConstraints = CA:FALSE -subjectKeyIdentifier = hash -keyUsage = digitalSignature, keyEncipherment -extendedKeyUsage = clientAuth, serverAuth -subjectAltName = @alt_names - -[alt_names] -DNS.1 = localhost -IP.1 = 127.0.0.1 - - diff --git 
a/BACA/configuration-ha/mongo/post-setup.sh b/BACA/configuration-ha/mongo/post-setup.sh deleted file mode 100755 index 56c4cb40..00000000 --- a/BACA/configuration-ha/mongo/post-setup.sh +++ /dev/null @@ -1,143 +0,0 @@ -#!/usr/bin/env bash - -. ../common.sh - -NUMOFSHARDS=2 - -#LOG_LEVEL=info -ROUTER_REPLICA=3 -SHARD_REPLICA=3 -CONFIG_REPLICA=3 - -CONFIG_PORT=27019 -DB_SHARD_PORT=27018 -ROUTER_PORT=27017 -CONFIG_REPLSET_PREFIX="configReplSet" - -ADD_SHARD='./js_base/add_shard.js' -MONGO_INIT='./js_base/mongo_initiate.js' - -for i in `seq 0 $((CONFIG_REPLICA-1))` -do - CONFIG_SERVER_LIST_S="${CONFIG_SERVER_LIST_S}mongodb-configdb-${i}.mongodb-configdb-service.${KUBE_NAME_SPACE}.svc.cluster.local:${CONFIG_PORT}," -done -CONFIG_SERVER_LIST_S=${CONFIG_SERVER_LIST_S:: -1} -echo "CONFIG_SERVER_LIST_S=${CONFIG_SERVER_LIST_S}" - -echo "Waiting for all the shards and configdb containers up running" -sleep 30 -echo -n " " -until kubectl exec mongodb-configdb-$((CONFIG_REPLICA-1)) --namespace=${KUBE_NAME_SPACE} -c mongodb-configdb-container -- mongo --host 127.0.0.1 --port ${CONFIG_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem --quiet --eval 'db.getMongo()'; do - sleep 5 - echo -n " " -done - -echo -n " " -for i in `seq 0 $((NUMOFSHARDS-1))` -do - until kubectl exec mongodb-shard${i}-$((SHARD_REPLICA-1)) --namespace=${KUBE_NAME_SPACE} -c mongod-shard${i}-container -- mongo --host 127.0.0.1 --port ${DB_SHARD_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem --quiet --eval 'db.getMongo()'; do - sleep 5 - echo -n " " - done -done -echo "...shards & configdb containers are now running" -echo - -sleep 90 - -for i in `seq 0 $((NUMOFSHARDS-1))` -do - for j in `seq 0 $((SHARD_REPLICA-1))` - do - shard_temp="${shard_temp}mongodb-shard${i}-${j}.mongodb-shard${i}-service.${KUBE_NAME_SPACE}.svc.cluster.local:${DB_SHARD_PORT}," - done - SHARD_STRING[${i}]=${shard_temp:: -1} - unset shard_temp -done - -echo "start to initiate config server replicas" -echo - -cat $MONGO_INIT | sed s#\$SERVER_LIST_S#"$CONFIG_SERVER_LIST_S"# | sed s#\$CFG_ID#"${CONFIG_REPLSET_PREFIX}"# > mongo_initiate_config.js -kubectl cp mongo_initiate_config.js ${KUBE_NAME_SPACE}/mongodb-configdb-0:/tmp/ - -kubectl exec mongodb-configdb-0 --namespace=${KUBE_NAME_SPACE} -c mongodb-configdb-container -- mongo --host 127.0.0.1 --port ${CONFIG_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem /tmp/mongo_initiate_config.js - -echo "start to initiate shard server replicas" -echo - -for i in `seq 0 $((NUMOFSHARDS-1))` -do - cat $MONGO_INIT | sed s#\$SERVER_LIST_S#"${SHARD_STRING[$i]}"# | sed s#\$CFG_ID#"rs\-shard$i"# > mongo_initiate_shard${i}.js - kubectl cp mongo_initiate_shard${i}.js ${KUBE_NAME_SPACE}/mongodb-shard${i}-0:/tmp/mongo_initiate_shard.js - kubectl exec mongodb-shard${i}-0 --namespace=${KUBE_NAME_SPACE} -c mongod-shard${i}-container -- mongo --host 127.0.0.1 --port ${DB_SHARD_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem /tmp/mongo_initiate_shard.js -done - - -echo "Wait for each MongoDB Shard's Replica Set + the ConfigDB Replica Set to each have a primary ready" - -kubectl exec mongodb-configdb-0 --namespace=${KUBE_NAME_SPACE} -c mongodb-configdb-container -- mongo --host 127.0.0.1 --port ${CONFIG_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile 
/etc/certs/mongo.pem --quiet --eval 'while (rs.status().hasOwnProperty("myState") && rs.status().myState != 1) { print("."); sleep(1000); };' -for i in `seq 0 $((NUMOFSHARDS-1))` -do - kubectl exec mongodb-shard${i}-0 --namespace=${KUBE_NAME_SPACE} -c mongod-shard${i}-container -- mongo --host 127.0.0.1 --port ${DB_SHARD_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem --eval 'while (rs.status().hasOwnProperty("myState") && rs.status().myState != 1) { print("."); sleep(1000); };' -done - -echo "...initialisation of the MongoDB shard Replica Sets completed" -echo - - -echo "Waiting for the first mongos router to up and run" -echo -n " " -until kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) -c mongos-router-container -- mongo --host 127.0.0.1 --port ${ROUTER_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem --quiet --eval 'db.getMongo()'; do - sleep 2 - echo -n " " -done -echo "...first mongos router is now running" -echo - -echo "start to add shard replicas" -echo -for i in `seq 0 $((NUMOFSHARDS-1))` -do - cat $ADD_SHARD | sed s#\$SHARD_LIST_S#"${SHARD_STRING[$i]}"# | sed s#\$SHARD_ID#"rs\-shard$i"# > add_shard${i}.js - kubectl cp add_shard${i}.js ${KUBE_NAME_SPACE}/$(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ):/tmp/add_shard.js - kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) -c mongos-router-container \ - -- mongo --host 127.0.0.1 --port ${ROUTER_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem \ - --authenticationMechanism=MONGODB-X509 --authenticationDatabase='$external' /tmp/add_shard.js -done - - -# # --------------create admin user start------------------------ - - kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) -- bash -c \ - 'echo "db.getSiblingDB(\"admin\").createUser({user:mongo_initdb_root_username,pwd:entrypassword,roles:[{role:\"root\",db:\"admin\"}, {role:\"clusterAdmin\",db:\"admin\"}]});" > mongo_create_admin.js;' - - kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ - -- bash -c 'echo mongo --host 127.0.0.1 --port 27017 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem --eval \"var mongo_initdb_root_username="'"'MONGO_INITDB_ROOT_USERNAME'"'",entrypassword="'"'ENTRYPASSWORD'"'"\" mongo_create_admin.js > mongo_create_admin_bak.sh' - - kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ - -- bash -c 'cat mongo_create_admin_bak.sh | sed s/MONGO_INITDB_ROOT_USERNAME/$MONGO_INITDB_ROOT_USERNAME/g | sed s/ENTRYPASSWORD/$ENTRYPASSWORD/g > mongo_create_admin.sh' - - kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ - -- bash -c 'sh mongo_create_admin.sh && rm mongo_create_admin.js mongo_create_admin.sh mongo_create_admin_bak.sh' - -# # --------------create admin 
user end------------------------ - -sleep 10 - -# # --------------create regular user start------------------------ - - - kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) -- bash -c \ - 'echo "db.createUser({user:mongo_user,pwd:mongo_password,roles:[{role:\"readWrite\",db:mongo_initdb}, {role:\"readWrite\",db:mongo_seconddb}, {role:\"readWrite\", db:\"cronjobs\"}, {role:\"readWrite\",db:\"smartpages\"}]});" > mongo_create_user.js;' - - kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ - -- bash -c 'echo mongo --host 127.0.0.1 --port 27017 $MONGO_INITDB --sslAllowInvalidCertificates --ssl --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem -u $MONGO_INITDB_ROOT_USERNAME -p $ENTRYPASSWORD --authenticationDatabase admin --eval \"var mongo_user="'"'MONGO_USER'"'", mongo_password="'"'MONGO_PASSWORD'"'", mongo_initdb="'"'MONGO_INITDB'"'", mongo_seconddb="'"'MONGO_SECONDDB'"'"\" mongo_create_user.js > mongo_create_user_bak.sh' - - kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ - -- bash -c 'cat mongo_create_user_bak.sh | sed s/MONGO_USER/$MONGO_USER/g | sed s/MONGO_PASSWORD/$MONGO_PASSWORD/g | sed s/MONGO_INITDB/$MONGO_INITDB/g | sed s/MONGO_SECONDDB/$MONGO_SECONDDB/g > mongo_create_user.sh' - - kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ - -- bash -c 'sh mongo_create_user.sh && rm mongo_create_user.js mongo_create_user.sh mongo_create_user_bak.sh' - -echo "==================Done============================" \ No newline at end of file diff --git a/BACA/configuration-ha/mongo/pre-setup.sh b/BACA/configuration-ha/mongo/pre-setup.sh deleted file mode 100755 index 2b1d04b7..00000000 --- a/BACA/configuration-ha/mongo/pre-setup.sh +++ /dev/null @@ -1,100 +0,0 @@ -#!/usr/bin/env bash - -. ../common.sh - -NUMOFSHARDS=2 -#KUBE_NAME_SPACE=sp - -LOG_LEVEL=info -ROUTER_REPLICA=3 -SHARD_REPLICA=3 -CONFIG_REPLICA=3 - -CONFIG_PORT=27019 -DB_SHARD_PORT=27018 -ROUTER_PORT=27017 -CONFIG_REPLSET_PREFIX="configReplSet" - - -current_templates_path="../../stable/ibm-dba-baca-prod/charts/mongo-ha/templates" -current_base_path="../../stable/ibm-dba-baca-prod/charts/mongo-ha" - -echo "Removing existing yaml before generating the new ones ...." -rm -rf $current_templates_path/* -cp templates_base/mongo-service-base.yaml $current_templates_path/mongo-service.yaml -cp values-base.yaml $current_base_path/values.yaml - -echo LOG_LEVEL=$LOG_LEVEL -sed -i.bak s#\$LOG_LEVEL#$LOG_LEVEL# $current_base_path/values.yaml -echo "Replacing '' with $KUBE_NAME_SPACE" -sed -i.bak s#\$KUBE_NAME_SPACE#$KUBE_NAME_SPACE# $current_base_path/values.yaml -echo "Replacing '' with $NFS_IP" -# sed -i.bak s#\$NFS_IP#$NFS_IP# values.yaml -sed -i.bak s#\$ROUTER_REPLICA#$ROUTER_REPLICA# $current_base_path/values.yaml -sed -i.bak s#\$SHARD_REPLICA#$SHARD_REPLICA# $current_base_path/values.yaml -sed -i.bak s#\$CONFIG_REPLICA#$CONFIG_REPLICA# $current_base_path/values.yaml -sed -i.bak s#\$LOGPVC#$LOGPVC# $current_base_path/values.yaml - -if [ "$SSH_USER" = "root" ]; then - export SUDO_CMD="" -else - export SUDO_CMD="sudo" -fi -if [[ $PVCCHOICE == "1" ]]; then - echo "Creating necessary folder in $NFS_IP..." 
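
Returning to post-setup.sh above: once the shards have been added and the users created, the resulting cluster layout can be checked through the first mongos router. This is a hypothetical one-off, assuming the default `sp` namespace; the connection options mirror the ones post-setup.sh itself uses.

```bash
# Depending on how authentication ended up configured, sh.status() may also require
# the admin credentials; the X509 options below mirror the add_shard step above.
ROUTER=$(kubectl get pod -l "tier=routers" -o jsonpath='{.items[0].metadata.name}' -n sp)
kubectl exec -n sp "$ROUTER" -c mongos-router-container -- \
  mongo --host 127.0.0.1 --port 27017 --ssl --sslAllowInvalidCertificates \
    --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem \
    --authenticationMechanism=MONGODB-X509 --authenticationDatabase='$external' \
    --quiet --eval 'sh.status()'
```
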
- cp templates_base/local-storage-base.yaml $current_templates_path/local-storage.yaml - for i in `seq 0 $((CONFIG_REPLICA-1))` - do - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/configdb-${i}" - done - - for i in `seq 0 $((NUMOFSHARDS-1))` - do - for j in `seq 0 $((SHARD_REPLICA-1))` - do - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/mongodb-shard${i}-${j}" - done - done - - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD chown -R 51000:51001 /exports/smartpages/$KUBE_NAME_SPACE/*" - - echo "-----------------Creating pv and pvc by sp-persistence for shard-------------" - for i in `seq 0 $((NUMOFSHARDS-1))` - do - for j in `seq 0 $((SHARD_REPLICA-1))` - do - sed -e "s/\$KUBE_NAME_SPACE/$KUBE_NAME_SPACE/g; s/\$SHARDX/${i}/g; s/\$COUNTER/${j}/g; s#\$NFS_IP#${NFS_IP}#g" \ - ./templates_base/shard-persistence-base.yaml> $current_templates_path/persistence-shard${i}-${j}.yaml - done - done - - echo "-------------Creating pv and pvc by sp-persistence for mongodb config-----------------" - for i in `seq 0 $((CONFIG_REPLICA-1))` - do - sed -e "s/\$KUBE_NAME_SPACE/$KUBE_NAME_SPACE/g; s/\$COUNTER/${i}/g; s#\$NFS_IP#${NFS_IP}#g" ./templates_base/configdb-persistence-base.yaml> \ - $current_templates_path/configdb-persistence-${i}.yaml - done -fi - -echo "------------cp mongodb configsvr--------------------" -sed -e "s/\$KUBE_NAME_SPACE/$KUBE_NAME_SPACE/g; s/\$PORT_NUMBER/$PORT_NUMBER/g" ./templates_base/configdb-service-base.yaml> $current_templates_path/configdb-service.yaml - - echo "------------cp mongodb shardX------------" - -for i in `seq 0 $((NUMOFSHARDS-1))` -do - sed -e "s/\$SHARDX/${i}/g" ./templates_base/shardX-stateful.yaml> $current_templates_path/shard${i}-stateful.yaml -done - - echo "------------cp mongodb router(mongos)------------" - -for i in `seq 0 $((CONFIG_REPLICA-1))` -do - CONFIG_SERVER_LIST_S="${CONFIG_SERVER_LIST_S}mongodb-configdb-${i}.mongodb-configdb-service.${KUBE_NAME_SPACE}.svc.cluster.local:${CONFIG_PORT}," -done -CONFIG_SERVER_LIST_S=${CONFIG_SERVER_LIST_S:: -1} -CONFIG_REPLSET_VALUE="${CONFIG_REPLSET_PREFIX}/${CONFIG_SERVER_LIST_S}" -echo "CONFIG_REPLSET_VALUE=${CONFIG_REPLSET_VALUE}" - -sed -i.bak s#\$CONFIG_REPLSET_VALUE#$CONFIG_REPLSET_VALUE# $current_base_path/values.yaml -cp ./templates_base/mongos-router-base.yaml $current_templates_path/mongos-router.yaml diff --git a/BACA/configuration-ha/mongo/templates_base/configdb-persistence-base.yaml b/BACA/configuration-ha/mongo/templates_base/configdb-persistence-base.yaml deleted file mode 100644 index e8d7b8b2..00000000 --- a/BACA/configuration-ha/mongo/templates_base/configdb-persistence-base.yaml +++ /dev/null @@ -1,22 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: pv-$KUBE_NAME_SPACE-configdb-$COUNTER - # namespace: {{.Values.global.nameSpace}} - labels: - app: mongo-configdb-pv - configpv: configdb-$COUNTER - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongo-configdb-pv -spec: - accessModes: - - ReadWriteOnce - capacity: - storage: {{.Values.mongoDBConfig.storageCapacity}} - nfs: - # may use variable counter for different shard - path: /exports/smartpages/$KUBE_NAME_SPACE/configdb-$COUNTER - server: $NFS_IP - persistentVolumeReclaimPolicy: Retain - storageClassName: {{.Values.storageClassName}} diff --git a/BACA/configuration-ha/mongo/templates_base/configdb-service-base.yaml 
b/BACA/configuration-ha/mongo/templates_base/configdb-service-base.yaml deleted file mode 100644 index f0b2d03c..00000000 --- a/BACA/configuration-ha/mongo/templates_base/configdb-service-base.yaml +++ /dev/null @@ -1,177 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: mongodb-configdb-service - # namespace: {{ .Values.global.nameSpace }} - labels: - name: {{ .Values.mongoDBConfig.labelName }} - app: mongodb-configdb-service - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-configdb-service -spec: - ports: - - port: {{ .Values.mongoDBConfig.configPort }} - targetPort: {{ .Values.mongoDBConfig.configPort }} - clusterIP: None - selector: - role: {{ .Values.mongoDBConfig.labelName }} ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: mongodb-configdb - labels: - app: mongodb-configdb - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-configdb -spec: - serviceName: mongodb-configdb-service - replicas: {{ .Values.mongoDBConfig.replicas }} - selector: - matchLabels: - role: {{ .Values.mongoDBConfig.labelName }} - template: - metadata: - annotations: - {{- range $key, $value := .Values.global.annotations }} - {{ $key }}: {{ $value | quote }} - {{- end }} - labels: - role: {{ .Values.mongoDBConfig.labelName }} - tier: configdb - replicaset: {{ .Values.mongoDBConfig.replicaSetName }} - app: mongodb-configdb - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-configdb - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: replicaset - operator: In - values: - - {{ .Values.mongoDBConfig.replicaSetName }} - topologyKey: kubernetes.io/hostname - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: mongo{{ .Values.global.namespace.name }} - operator: In - values: - - "baca" - - key: beta.kubernetes.io/arch - operator: In - values: - - {{ .Values.global.arch }} - terminationGracePeriodSeconds: 10 - volumes: - - name: secrets-volume - secret: - secretName: {{ .Values.secretVolume }} - - name: sp-log-pvc - persistentVolumeClaim: - claimName: {{ .Values.global.logs.claimname }} - containers: - - name: mongodb-configdb-container - image: "{{ .Values.global.mongo.image.repository }}:{{ .Values.global.mongo.image.tag }}" - securityContext: - runAsUser: 51000 - allowPrivilegeEscalation: false - privileged: false - readOnlyRootFilesystem: false - runAsNonRoot: true - capabilities: - drop: - - ALL - resources: -{{ toYaml .Values.mongoDBConfig.resources | indent 12 }} - env: - # - name: ENTRYPASSWORD - # value: "bacauser" - # - name: MONGO_USER - # value: "bacauser" - # - name: MONGO_PASSWORD - # value: "bacauser" - # - name: MONGO_INITDB - # value: "bacauser" - - name: LOG_LEVEL - value: {{ .Values.global.logs.logLevel }} - - name: WIREDTIGERCACHE - value: {{ .Values.global.mongo.wiredTigerCache | default 0.5 | quote }} - - name: CERTIFICATE_DIR - value: "/etc/certs" - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: MONGO_TYPE - value: "configsvr" - - name: MONGO_TYPE_VALUE - value: "configReplSet" - - name: CONTAINER_PORT - value: {{ .Values.mongoDBConfig.configPort | quote }} - - name: KUBE_NAME_SPACE - value: {{ .Values.global.nameSpace | quote }} - ports: - - 
containerPort: {{ .Values.mongoDBConfig.configPort }} - livenessProbe: - initialDelaySeconds: 60 - periodSeconds: 60 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27019 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH - readinessProbe: - initialDelaySeconds: 30 - periodSeconds: 30 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27019 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH - imagePullPolicy: {{ .Values.global.mongo.image.pullPolicy }} - volumeMounts: - - name: secrets-volume - readOnly: true - mountPath: "/etc/certs" - - name: mongodb-configdb-storage - mountPath: /data/db - - name: sp-log-pvc -# mountPath: "/var/log/mongodb" - mountPath: {{ .Values.global.logs.path }}{{ .Values.mongo.name }}db -# subPath: mongo - subPath: {{ .Values.mongo.name }} - volumeClaimTemplates: - - metadata: - name: mongodb-configdb-storage - spec: - accessModes: [ "ReadWriteOnce" ] - {{- if $.Values.global.storageClass }} - {{- if (eq "-" $.Values.global.storageClass) }} - storageClassName: {{ .Values.storageClassName | quote }} - {{- else }} - storageClassName: {{ $.Values.global.storageClass | quote }} - {{- end }} - {{- end }} - resources: - requests: - storage: {{.Values.mongoDBConfig.storageCapacity}} \ No newline at end of file diff --git a/BACA/configuration-ha/mongo/templates_base/local-storage-base.yaml b/BACA/configuration-ha/mongo/templates_base/local-storage-base.yaml deleted file mode 100644 index caab5631..00000000 --- a/BACA/configuration-ha/mongo/templates_base/local-storage-base.yaml +++ /dev/null @@ -1,11 +0,0 @@ -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: {{.Values.storageClassName}} - labels: - app: {{.Values.storageClassName}} - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: {{ .Values.storageClassName | quote }} -provisioner: kubernetes.io/no-provisioner -volumeBindingMode: WaitForFirstConsumer \ No newline at end of file diff --git a/BACA/configuration-ha/mongo/templates_base/mongo-service-base.yaml b/BACA/configuration-ha/mongo/templates_base/mongo-service-base.yaml deleted file mode 100644 index cfd55537..00000000 --- a/BACA/configuration-ha/mongo/templates_base/mongo-service-base.yaml +++ /dev/null @@ -1,22 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - app: "{{ .Values.mongosRouter.name }}-service" - role: mongos - tier: routers - heritage: "{{ .Values.mongosRouter.name }}-service" - release: {{ .Values.release | quote}} - chart: "{{ .Values.mongosRouter.name }}-service" - name: {{ .Values.mongosService }} -spec: - type: ClusterIP - selector: - app: {{ .Values.mongosRouter.name }} - ports: - - port: {{.Values.mongosRouter.routerPort}} - protocol: TCP - - - - diff --git a/BACA/configuration-ha/mongo/templates_base/mongos-router-base.yaml b/BACA/configuration-ha/mongo/templates_base/mongos-router-base.yaml deleted file mode 100644 index 8a4e9e86..00000000 --- a/BACA/configuration-ha/mongo/templates_base/mongos-router-base.yaml +++ /dev/null @@ -1,139 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ .Values.mongosRouter.name }} - labels: - app: {{ .Values.mongosRouter.name }} - heritage: {{ .Release.Service | quote }} - release: {{ 
.Release.Name | quote }} - chart: {{ .Values.mongosRouter.name }} - # namespace: {{ .Values.global.nameSpace }} -spec: - replicas: {{ .Values.mongosRouter.replicas }} - selector: - matchLabels: - app: {{ .Values.mongosRouter.name }} - template: - metadata: - annotations: - {{- range $key, $value := .Values.global.annotations }} - {{ $key }}: {{ $value | quote }} - {{- end }} - labels: - app: {{ .Values.mongosRouter.name }} - role: mongos - tier: routers - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: {{ .Values.mongosRouter.name }} - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: tier - operator: In - values: - - routers - topologyKey: kubernetes.io/hostname - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: mongo{{ .Values.global.namespace.name }} - operator: In - values: - - "baca" - - key: beta.kubernetes.io/arch - operator: In - values: - - {{ .Values.global.arch }} - volumes: - - name: secrets-volume - secret: - secretName: {{ .Values.secretVolume }} - - name: sp-log-pvc - persistentVolumeClaim: - claimName: {{ .Values.global.logs.claimname }} - terminationGracePeriodSeconds: 10 - containers: - - name: mongos-router-container - image: "{{ .Values.global.mongo.image.repository }}:{{ .Values.global.mongo.image.tag }}" - imagePullPolicy: {{ .Values.global.mongo.image.pullPolicy }} - env: - - name: ENTRYPASSWORD - valueFrom: - secretKeyRef: - name: "baca-mongo" - key: MONGOENTRYPASSWORD - - name: MONGO_USER - valueFrom: - secretKeyRef: - name: "baca-mongo" - key: MONGOUSER - - name: MONGO_PASSWORD - valueFrom: - secretKeyRef: - name: "baca-mongo" - key: MONGOPASSWORD - - name: MONGO_INITDB - value: "bacauser" - - name: MONGO_SECONDDB - value: "cogdig" - - name: MONGO_TYPE - value: "mongodb-router" - - name: CERTIFICATE_DIR - value: "/etc/certs" - - name: CONTAINER_PORT - value: {{ .Values.mongosRouter.routerPort | quote}} - - name: CONFIG_REPL_SET - value: {{ .Values.mongosRouter.configReplset }} - - name: KUBE_NAME_SPACE - value: {{ .Values.global.nameSpace | quote }} - volumeMounts: - - name: secrets-volume - readOnly: true - mountPath: "/etc/certs" - - name: sp-log-pvc -# mountPath: "/var/log/mongodb" - mountPath: {{ .Values.global.logs.path }}{{ .Values.mongo.name }}db -# subPath: mongo - subPath: {{ .Values.mongo.name }} - resources: -{{ toYaml .Values.mongosRouter.resources | indent 10 }} - securityContext: - runAsUser: 51000 - allowPrivilegeEscalation: false - privileged: false - readOnlyRootFilesystem: false - runAsNonRoot: true - capabilities: - drop: - - ALL - ports: - - containerPort: {{ .Values.mongosRouter.routerPort }} - livenessProbe: - initialDelaySeconds: 60 - periodSeconds: 60 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27017 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH - readinessProbe: - initialDelaySeconds: 30 - periodSeconds: 30 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27017 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH diff --git 
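The router deployment above reads its credentials from a secret named `baca-mongo` with the keys `MONGOENTRYPASSWORD`, `MONGOUSER`, and `MONGOPASSWORD`, so that secret must exist before the pods can start. If it is not already created by your setup scripts, a minimal sketch (with placeholder values) is:

```bash
# Placeholder credentials - only the secret name and key names are dictated by the chart.
kubectl create secret generic baca-mongo --namespace=${KUBE_NAME_SPACE} \
  --from-literal=MONGOUSER=bacauser \
  --from-literal=MONGOPASSWORD='change-me' \
  --from-literal=MONGOENTRYPASSWORD='change-me'
```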
a/BACA/configuration-ha/mongo/templates_base/shard-persistence-base.yaml b/BACA/configuration-ha/mongo/templates_base/shard-persistence-base.yaml deleted file mode 100644 index 8abbb0b9..00000000 --- a/BACA/configuration-ha/mongo/templates_base/shard-persistence-base.yaml +++ /dev/null @@ -1,21 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: pv-mongodb-shard$SHARDX-$COUNTER - # namespace: {{ .Values.global.nameSpace }} - labels: - shard: shard$SHARDX - app: "pv-shard$SHARDX" - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: "pv-shard$SHARDX" -spec: - accessModes: - - ReadWriteOnce - capacity: - storage: {{ .Values.mongoDBShard.storageCapacity }} - nfs: - path: /exports/smartpages/$KUBE_NAME_SPACE/mongodb-shard$SHARDX-$COUNTER - server: $NFS_IP - persistentVolumeReclaimPolicy: Retain - storageClassName: {{.Values.storageClassName}} diff --git a/BACA/configuration-ha/mongo/templates_base/shardX-stateful.yaml b/BACA/configuration-ha/mongo/templates_base/shardX-stateful.yaml deleted file mode 100644 index 2a8b6169..00000000 --- a/BACA/configuration-ha/mongo/templates_base/shardX-stateful.yaml +++ /dev/null @@ -1,183 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: mongodb-shard$SHARDX-service - # namespace: {{ .Values.global.nameSpace }} - labels: - name: mongodb-shard$SHARDX-service - app: shard$SHARDX-service - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-shard$SHARDX-service -spec: - ports: - - port: {{ .Values.mongoDBShard.shardPort }} - targetPort: {{ .Values.mongoDBShard.shardPort }} - clusterIP: None - selector: - role: mongodb-shard$SHARDX ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: mongodb-shard$SHARDX - labels: - app: mongodb-shard$SHARDX - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-shard$SHARDX - # namespace: {{ .Values.global.nameSpace }} -spec: - selector: - matchLabels: - role: mongodb-shard$SHARDX - serviceName: mongodb-shard$SHARDX-service - replicas: {{ .Values.mongoDBShard.replicas }} - template: - metadata: - annotations: - {{- range $key, $value := .Values.global.annotations }} - {{ $key }}: {{ $value | quote }} - {{- end }} - labels: - role: mongodb-shard$SHARDX - tier: mongodb - replicaset: rs-shard$SHARDX - app: mongodb-shard$SHARDX - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-shard$SHARDX - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: replicaset - operator: In - values: - - rs-shard$SHARDX - topologyKey: kubernetes.io/hostname - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: mongo{{ .Values.global.namespace.name }} - operator: In - values: - - "baca" - - key: beta.kubernetes.io/arch - operator: In - values: - - {{ .Values.global.arch }} - terminationGracePeriodSeconds: 10 - volumes: - - name: secrets-volume - secret: - secretName: {{ .Values.secretVolume }} - - name: sp-log-pvc - persistentVolumeClaim: - claimName: {{ .Values.global.logs.claimname }} - containers: - - name: mongod-shard$SHARDX-container - image: "{{ .Values.global.mongo.image.repository }}:{{ .Values.global.mongo.image.tag }}" - imagePullPolicy: {{ .Values.global.mongo.image.pullPolicy }} - resources: -{{ toYaml .Values.mongoDBShard.resources | indent 10 }} - env: - # - 
name: ENTRYPASSWORD - # value: "$ENTRYPASSWORD" - # - name: MONGO_USER - # value: "$MONGO_USER" - # - name: MONGO_PASSWORD - # value: "$MONGO_PASSWORD" - # - name: MONGO_INITDB - # value: "$MONGOADMINAUTHDB" - # - name: MONGO_SECONDDB - # value: "binaryfiles" - - name: LOG_PATH - value: {{ .Values.logs.path }}{{ .Values.mongo.name | substr 0 5 }}db - - name: LOG_LEVEL - value: {{ .Values.global.logs.logLevel }} - - name: CERTIFICATE_DIR - value: "/etc/certs" - - name: WIREDTIGERCACHE - value: {{ .Values.global.mongo.wiredTigerCache | default 0.5 | quote }} - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: MONGO_TYPE - value: "shard" - - name: MONGO_TYPE_VALUE - value: "rs-shard$SHARDX" - - name: CONTAINER_PORT - value: {{ .Values.mongoDBShard.shardPort | quote}} - - name: KUBE_NAME_SPACE - value: {{ .Values.global.nameSpace | quote }} - securityContext: - runAsUser: 51000 - allowPrivilegeEscalation: false - privileged: false - readOnlyRootFilesystem: false - runAsNonRoot: true - capabilities: - drop: - - ALL - livenessProbe: - initialDelaySeconds: 60 - periodSeconds: 60 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27018 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH - readinessProbe: - initialDelaySeconds: 30 - periodSeconds: 30 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27018 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH - ports: - - containerPort: {{ .Values.mongoDBShard.shardPort }} - volumeMounts: - - name: shard$SHARDX-storage - mountPath: /data/db - - name: sp-log-pvc -# mountPath: "/var/log/mongodb" - mountPath: {{ .Values.global.logs.path }}{{ .Values.mongo.name }}db -# subPath: mongo - subPath: {{ .Values.mongo.name }} - - name: secrets-volume # must match the volume name, above - mountPath: "/etc/certs" - volumeClaimTemplates: - - metadata: - name: shard$SHARDX-storage - spec: - accessModes: [ "ReadWriteOnce" ] - {{- if $.Values.global.storageClass }} - {{- if (eq "-" $.Values.global.storageClass) }} - storageClassName: {{ .Values.storageClassName | quote }} - {{- else }} - storageClassName: {{ $.Values.global.storageClass | quote }} - {{- end }} - {{- end }} - resources: - requests: - storage: {{ .Values.mongoDBShard.storageCapacity }} - -# cat sp-shardX-stateful.yaml | sed s/\$SHARDX/"shard1"/ | kubectl apply --validate=true --dry-run=true --filename= diff --git a/BACA/configuration-ha/mongo/values-base.yaml b/BACA/configuration-ha/mongo/values-base.yaml deleted file mode 100644 index e2d3b00a..00000000 --- a/BACA/configuration-ha/mongo/values-base.yaml +++ /dev/null @@ -1,65 +0,0 @@ -# image: -# repository: mycluster.icp:8500/$KUBE_NAME_SPACE/mongocluster -# tag: latest -# pullPolicy: Always - -storageClassName: local-storage -# nfsIP: $NFS_IP -# nameSpace: $KUBE_NAME_SPACE -# existingSecret: true -# wiredTigerCache: "$MONGO_WIREDTIGER_LIMIT" -# wiredTigerCache: "0.5" -secretVolume: baca-secrets$KUBE_NAME_SPACE -mongosService: mongos-service - -mongo: - # nodeSelector: - # mongo$KUBE_NAME_SPACE: baca - name: mongo - -logs: - # claimname: $LOGPVC - path: /var/log/ - # logLevel: $LOG_LEVEL - -mongoDBConfig: - storageCapacity: 
10Gi - labelName: mongodb-configdb - configPort: 27019 - replicas: $CONFIG_REPLICA - replicaSetName: ConfigDBRepSet - resources: - limits: - memory: "1Gi" - cpu: "500m" - requests: - memory: "256Mi" - cpu: "500m" - -mongosRouter: - name: mongos-router - routerPort: 27017 - replicas: $ROUTER_REPLICA - configReplset: "$CONFIG_REPLSET_VALUE" - resources: - limits: - memory: "1Gi" - cpu: "500m" - requests: - memory: "256Mi" - cpu: "500m" - -mongoDBShard: - # heritage: admin-shard - # pvheritage: admin-shardpv - storageCapacity: 15Gi - shardPort: 27018 - replicas: $SHARD_REPLICA - resources: - limits: - memory: "1Gi" - cpu: "500m" - requests: - memory: "256Mi" - cpu: "500m" - \ No newline at end of file diff --git a/BACA/configuration-ha/mongoadmin/README.md b/BACA/configuration-ha/mongoadmin/README.md deleted file mode 100644 index 4ef5ee4b..00000000 --- a/BACA/configuration-ha/mongoadmin/README.md +++ /dev/null @@ -1,118 +0,0 @@ -# Mongodb - -[Mongodb](https://www.mongodb.com/) is a general-purpose, document-based, distributed database built for modern application developers and for the cloud era. No database is more productive to use. - -## TL;DR; - -```bash -$ helm install stable/mongo-ha -``` - -By default, this chart installs 12 pods in total: - * three pods containing a mongos router - * three pods containing a mongodb config server - * three pods containing mongodb shard 0 - * three pods containing mongodb shard 1 - -## Introduction - -This chart bootstraps a [Mongodb](https://www.mongodb.com/) highly available Shard+Replica StatefulSet deployment in a [Kubernetes](http://kubernetes.io) cluster using the Helm package manager. - -## Prerequisites - -- Kubernetes 1.8+ with Beta APIs enabled -- PV provisioner support in the underlying infrastructure, or an existing PVC claim created when running `init_deployments.sh` -- PVs for the shards and replicas are created by generate.sh -- Change the values of the `repository` and `tag` keys under `image` to match your mongo cluster environment. For example: -``` -image: - repository: mycluster.com:8500/sp/mongocluster - tag: latest - pullPolicy: Always -``` -The mongocluster image can be downloaded from TBD. -The current default namespace is `sp`. If you use a different namespace, make sure you also update generate.sh; a later version will fix this. -openssl.cnf and ssl_generator.sh are used to create the x509 certificates for the mongo cluster. - -## Upgrading the Chart - -You can use Helm to update the MongoCluster version in a live release. Assuming your release is named `my-release`, get its current values (for example with `helm get values my-release`) before upgrading. - -## Installing the Chart - -To install the chart: - -```bash -sh generate.sh -``` - -The command generates the templates for the mongodb shards and replicas, saves them into the templates folder, and then creates values.yaml based on values-base.yaml. It deploys the Mongodb cluster on the Kubernetes cluster in the default configuration. By default, this chart installs 2 shards, 3 mongodb config servers, and 3 mongos routers. - -> **Tip**: List all releases using `helm list` - -## Uninstalling the Chart - -To uninstall/delete the deployment: - -```bash -$ helm delete --purge --tls -``` - -The command removes all the Kubernetes components associated with the chart and deletes the release. - -## Configuration - -The following table lists the configurable parameters of the MongoDB chart and their default values.
- -| Parameter | Description | Default | -|:----------|:------------|:--------| -| `image.repository` | Mongodb image | `mongocluster` | -| `image.tag` | Mongodb image tag | `latest` | -| `image.pullPolicy` | Image pull policy | `Always` | -| `storageClassName` | Storage class name | `local-storage` | -| `nfsIP` | The NFS server location | | -| `nameSpace` | Kubernetes namespace to use | `sp` | -| `wiredTigerCache` | Mongodb WiredTiger cache limit | `0.5` | -| `secretVolume` | Secret where the certificates are stored | created by the setup.sh script | -| `logs.claimname` | Log PVC claim name, set by setup.sh | `` | -| `logs.path` | Log path inside the pod | `/var/log/` | -| `logs.logLevel` | Log level | `debug` | -| `mongoDBConfig.storageCapacity` | Mongodb config server storage size | `10Gi` | -| `mongoDBConfig.labelName` | Label name | `mongodb-configdb` | -| `mongoDBConfig.replicas` | Mongodb config server replicas, set as a variable in generate.sh | `` | -| `mongoDBConfig.replicaSetName` | Replica set name | `ConfigDBRepSet` | -| `mongoDBConfig.resources` | CPU/memory requests and limits for the config servers | see values-base.yaml | -| `mongosRouter.name` | Name of the mongos router | `mongos-router` | -| `mongosRouter.replicas` | Mongodb router replicas, set as a variable in generate.sh | `` | -| `mongosRouter.configReplset` | Generated by generate.sh; do not change | | -| `mongoDBShard.storageCapacity` | Mongodb shard storage size | `15Gi` | -| `mongoDBShard.replicas` | Mongodb shard replicas, set as a variable in generate.sh | `` | - -Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example: - -```bash -$ helm install \ - --set image.repository=mongocluster \ - --set image.tag=latest \ - stable/mongo-ha -``` - -The above command overrides the image repository and tag and installs the chart into the `default` namespace. - -> **Tip**: There is no static [values.yaml](values.yaml) file; generate.sh creates [values.yaml](values.yaml) on the fly from [values-base.yaml](values-base.yaml). - -Persistence ----------- - -generate.sh provisions the PersistentVolumes, and the pods create PersistentVolumeClaims that bind to them under the same storage class name; the volumes are backed by the NFS export under the default location `/exports/smartpages/`. You'll need physical storage available in the Kubernetes cluster for this to work. - -Configure TLS ------------- - -Always enable TLS for the mongodb containers: acquire TLS certificates from a CA or create self-signed certificates. While creating or acquiring the certificates, ensure the corresponding domain names are set as per the standard [DNS naming conventions](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-identity) of a Kubernetes StatefulSet (required for a distributed mongodb setup).
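For example, a self-signed key and certificate pair can be generated from the repository's openssl.cnf, whose `[alt_names]` section already lists SAN entries for the shard, configdb, and router host names (hard-coded for the `sp` namespace, so adjust them if you deploy elsewhere). This is only a sketch; ssl_generator.sh / createSSLCert.sh automate the same step, and the output file names are placeholders that should match the `--from-file` arguments you pass next.

```bash
# Sketch: self-signed key/certificate using the bundled openssl.cnf
# (the v3_req extension section carries the subjectAltName list).
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -config openssl.cnf -extensions v3_req \
  -subj "/C=CA/ST=NS/L=Halifax/O=IBM/OU=baca/CN=*.svc.cluster.local" \
  -keyout private.key -out public.crt
```

Note that the pods ultimately read a combined key+certificate file and a CA file from /etc/certs (the probes and admin scripts reference /etc/certs/mongo.key and /etc/certs/mongo.pem), so keep the file names inside the secret consistent with what your setup scripts expect.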
Then create a secret using - -```bash -$ kubectl create secret generic baca-secrets${NAMESPACE} --from-file=path/to/private.key --from-file=path/to/public.crt -``` - -Then install the chart, specifying the path you'd like to mount to the TLS secret: diff --git a/BACA/configuration-ha/mongoadmin/js_base/add_shard.js b/BACA/configuration-ha/mongoadmin/js_base/add_shard.js deleted file mode 100644 index 43ebc4e1..00000000 --- a/BACA/configuration-ha/mongoadmin/js_base/add_shard.js +++ /dev/null @@ -1,19 +0,0 @@ -var server_list_s = "$SHARD_LIST_S"; -var shard_id = "$SHARD_ID"; -var shard_string = shard_id.concat('\/', server_list_s); -var result; -print("First try to add shard"); -do { - sleep(5000); - result = sh.addShard(shard_string); - if (result.ok == 0) { - print("Failed to add shard and retry in 5 seconds"); - } - // if (result.code == 23) { - // print("already initialized"); - // break; - // } - printjson(result); -} while (result.ok != 1) -// printjson(result); - diff --git a/BACA/configuration-ha/mongoadmin/js_base/mongo_initiate.js b/BACA/configuration-ha/mongoadmin/js_base/mongo_initiate.js deleted file mode 100644 index eed8d9bb..00000000 --- a/BACA/configuration-ha/mongoadmin/js_base/mongo_initiate.js +++ /dev/null @@ -1,27 +0,0 @@ -var server_list_s = "$SERVER_LIST_S"; -var server_list = server_list_s.split(","); -var cfg_id = "$CFG_ID"; -var member_list = []; -for (i = 0; i < server_list.length; i++) { - member_list.push({_id: i, host: server_list[i]}); -} -var cfg = { - _id: cfg_id, - version: 1, - members: member_list -} -print("First try to initiate"); -var result; -do { - sleep(5000); - result = rs.initiate(cfg); - if(result.ok==0) { - print("Failed to initiate and retry in 5 seconds"); - } - if(result.code==23){ - print("already initialized"); - break; - } - printjson(result); -} while (result.ok != 1) -// printjson(result); diff --git a/BACA/configuration-ha/mongoadmin/openssl.cnf b/BACA/configuration-ha/mongoadmin/openssl.cnf deleted file mode 100644 index 7d3892c9..00000000 --- a/BACA/configuration-ha/mongoadmin/openssl.cnf +++ /dev/null @@ -1,38 +0,0 @@ -[req] -default_bits = 2048 -utf8 = yes -distinguished_name = req_distinguished_name -req_extensions = v3_req - -[req_distinguished_name] -countryName = Country Name (2 letter code) -countryName_default = CA -countryName_min = 2 -countryName_max = 2 -stateOrProvinceName = State or Province Name (full name) -stateOrProvinceName_default = NS -stateOrProvinceName_max = 64 -localityName = Locality Name (eg, city) -localityName_default = Halifax -localityName_max = 64 -organizationName = Organization Name (eg, company) -organizationName_default = IBM -organizationName_max = 64 -organizationalUnitName = Organizational Unit Name (eg, section) -organizationalUnitName_default = baca -organizationalUnitName_max = 64 -commonName = *.svc.cluster.local -commonName_max = 64 - -[v3_req] -basicConstraints = CA:FALSE -subjectKeyIdentifier = hash -keyUsage = digitalSignature, keyEncipherment -extendedKeyUsage = clientAuth, serverAuth -subjectAltName = @alt_names - -[alt_names] -DNS.1 = localhost -IP.1 = 127.0.0.1 - - diff --git a/BACA/configuration-ha/mongoadmin/post-setup.sh b/BACA/configuration-ha/mongoadmin/post-setup.sh deleted file mode 100755 index 13465a99..00000000 --- a/BACA/configuration-ha/mongoadmin/post-setup.sh +++ /dev/null @@ -1,148 +0,0 @@ -#!/usr/bin/env bash - -. 
../common.sh - -# ENTRYPASSWORD='bacauser' -# NFS_IP=172.16.243.23 - -# KUBE_NAME_SPACE=sp2 -#LOG_LEVEL=info -NUMOFSHARDS=2 -ROUTER_REPLICA=3 -SHARD_REPLICA=3 -CONFIG_REPLICA=3 -CONFIG_PORT=27019 -DB_SHARD_PORT=27018 -ROUTER_PORT=27017 -CONFIG_REPLSET_ADMIN_PREFIX="configReplSetAdmin" - -ADD_SHARD='./js_base/add_shard.js' -MONGO_INIT='./js_base/mongo_initiate.js' - - -for i in `seq 0 $((CONFIG_REPLICA-1))` -do - CONFIG_SERVER_LIST_S="${CONFIG_SERVER_LIST_S}mongodb-admin-configdb-${i}.mongodb-admin-configdb-service.${KUBE_NAME_SPACE}.svc.cluster.local:${CONFIG_PORT}," -done -CONFIG_SERVER_LIST_S=${CONFIG_SERVER_LIST_S:: -1} -echo "CONFIG_SERVER_LIST_S=${CONFIG_SERVER_LIST_S}" - -echo "Waiting for all the shards and configdb containers up running" -sleep 30 -echo -n " " -until kubectl exec mongodb-admin-configdb-$((CONFIG_REPLICA-1)) --namespace=${KUBE_NAME_SPACE} -c mongodb-admin-configdb-container -- mongo --host 127.0.0.1 --port ${CONFIG_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem --quiet --eval 'db.getMongo()'; do - sleep 5 - echo -n " " -done - -echo -n " " -for i in `seq 0 $((NUMOFSHARDS-1))` -do - until kubectl exec mongodb-admin-shard${i}-$((SHARD_REPLICA-1)) --namespace=${KUBE_NAME_SPACE} -c mongod-admin-shard${i}-container -- mongo --host 127.0.0.1 --port ${DB_SHARD_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem --quiet --eval 'db.getMongo()'; do - sleep 5 - echo -n " " - done -done -echo "...shards & configdb containers are now running" -echo - -sleep 90 - -for i in `seq 0 $((NUMOFSHARDS-1))` -do - for j in `seq 0 $((SHARD_REPLICA-1))` - do - shard_temp="${shard_temp}mongodb-admin-shard${i}-${j}.mongodb-admin-shard${i}-service.${KUBE_NAME_SPACE}.svc.cluster.local:${DB_SHARD_PORT}," - done - SHARD_STRING[${i}]=${shard_temp:: -1} - unset shard_temp -done - -echo "start to initiate config admin server replicas" -echo - -cat $MONGO_INIT | sed s#\$SERVER_LIST_S#"$CONFIG_SERVER_LIST_S"# | sed s#\$CFG_ID#"${CONFIG_REPLSET_ADMIN_PREFIX}"# > mongo_initiate_config.js -kubectl cp mongo_initiate_config.js ${KUBE_NAME_SPACE}/mongodb-admin-configdb-0:/tmp/ - -kubectl exec mongodb-admin-configdb-0 --namespace=${KUBE_NAME_SPACE} -c mongodb-admin-configdb-container -- mongo --host 127.0.0.1 --port ${CONFIG_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem /tmp/mongo_initiate_config.js - -echo "start to initiate shard admin server replicas" -echo - -for i in `seq 0 $((NUMOFSHARDS-1))` -do - cat $MONGO_INIT | sed s#\$SERVER_LIST_S#"${SHARD_STRING[$i]}"# | sed s#\$CFG_ID#"rs\-admin\-shard$i"# > mongo_initiate_shard${i}.js - kubectl cp mongo_initiate_shard${i}.js ${KUBE_NAME_SPACE}/mongodb-admin-shard${i}-0:/tmp/mongo_initiate_shard.js - kubectl exec mongodb-admin-shard${i}-0 --namespace=${KUBE_NAME_SPACE} -c mongod-admin-shard${i}-container -- mongo --host 127.0.0.1 --port ${DB_SHARD_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem /tmp/mongo_initiate_shard.js -done - -echo "Wait for each MongoDB admin Shard's Replica Set + the admin ConfigDB Replica Set to each have a primary ready" - -kubectl exec mongodb-admin-configdb-0 --namespace=${KUBE_NAME_SPACE} -c mongodb-admin-configdb-container -- mongo --host 127.0.0.1 --port ${CONFIG_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem 
--quiet --eval 'while (rs.status().hasOwnProperty("myState") && rs.status().myState != 1) { print("."); sleep(1000); };' -for i in `seq 0 $((NUMOFSHARDS-1))` -do - kubectl exec mongodb-admin-shard${i}-0 --namespace=${KUBE_NAME_SPACE} -c mongod-admin-shard${i}-container -- mongo --host 127.0.0.1 --port ${DB_SHARD_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem --eval 'while (rs.status().hasOwnProperty("myState") && rs.status().myState != 1) { print("."); sleep(1000); };' -done - -echo "...initialisation of the MongoDB admin shard Replica Sets completed" -echo - -# Wait for the mongos to have started properly -echo "Waiting for the first mongos admin router to up and run" -echo -n " " -until kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) -c mongos-admin-router-container -- mongo --host 127.0.0.1 --port ${ROUTER_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem --quiet --eval 'db.getMongo()'; do - sleep 2 - echo -n " " -done -echo "...first mongos admin router is now running" -echo - - -echo "start to add shard admin replicas" -echo -for i in `seq 0 $((NUMOFSHARDS-1))` -do - cat $ADD_SHARD | sed s#\$SHARD_LIST_S#"${SHARD_STRING[$i]}"# | sed s#\$SHARD_ID#"rs\-admin\-shard$i"# > add_shard${i}.js - kubectl cp add_shard${i}.js ${KUBE_NAME_SPACE}/$(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ):/tmp/add_shard.js - kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) -c mongos-admin-router-container \ - -- mongo --host 127.0.0.1 --port ${ROUTER_PORT} --ssl --sslAllowInvalidCertificates --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem \ - --authenticationMechanism=MONGODB-X509 --authenticationDatabase='$external' /tmp/add_shard.js -done - - -# --------------create admin user start------------------------ - - - -kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) -- bash -c \ -'echo "db.getSiblingDB(\"admin\").createUser({user:mongo_initdb_root_username,pwd:entrypassword,roles:[{role:\"root\",db:\"admin\"}, {role:\"clusterAdmin\",db:\"admin\"}]});" > mongo_create_admin.js;' - -kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ --- bash -c 'echo mongo --host 127.0.0.1 --port 27017 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem --eval \"var mongo_initdb_root_username="'"'MONGO_INITDB_ROOT_USERNAME'"'",entrypassword="'"'ENTRYPASSWORD'"'"\" mongo_create_admin.js > mongo_create_admin_bak.sh' - -kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ --- bash -c 'cat mongo_create_admin_bak.sh | sed s/MONGO_INITDB_ROOT_USERNAME/$MONGO_INITDB_ROOT_USERNAME/g | sed s/ENTRYPASSWORD/$ENTRYPASSWORD/g > mongo_create_admin.sh' - -kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ --- bash -c 'sh 
mongo_create_admin.sh && rm mongo_create_admin.js mongo_create_admin.sh mongo_create_admin_bak.sh' - -# --------------create admin user end------------------------ - -sleep 10 - -# --------------create regular user start------------------------ - - -kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) -- bash -c \ -'echo "db.createUser({user:mongo_user,pwd:mongo_password,roles:[{role:\"readWrite\",db:mongo_initdb}, {role:\"readWrite\",db:mongo_seconddb}, {role:\"readWrite\", db:\"cronjobs\"}, {role:\"readWrite\",db:\"smartpages\"}]});" > mongo_create_user.js;' - -kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ --- bash -c 'echo mongo --host 127.0.0.1 --port 27017 $MONGO_INITDB --sslAllowInvalidCertificates --ssl --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem -u $MONGO_INITDB_ROOT_USERNAME -p $ENTRYPASSWORD --authenticationDatabase admin --eval \"var mongo_user="'"'MONGO_USER'"'", mongo_password="'"'MONGO_PASSWORD'"'", mongo_initdb="'"'MONGO_INITDB'"'", mongo_seconddb="'"'MONGO_SECONDDB'"'"\" mongo_create_user.js > mongo_create_user_bak.sh' - -kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ --- bash -c 'cat mongo_create_user_bak.sh | sed s/MONGO_USER/$MONGO_USER/g | sed s/MONGO_PASSWORD/$MONGO_PASSWORD/g | sed s/MONGO_INITDB/$MONGO_INITDB/g | sed s/MONGO_SECONDDB/$MONGO_SECONDDB/g > mongo_create_user.sh' - -kubectl exec --namespace=${KUBE_NAME_SPACE} $(kubectl get pod -l "tier=routers-admin" -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE} ) \ --- bash -c 'sh mongo_create_user.sh && rm mongo_create_user.js mongo_create_user.sh mongo_create_user_bak.sh' - -echo "==================Done============================" \ No newline at end of file diff --git a/BACA/configuration-ha/mongoadmin/pre-setup.sh b/BACA/configuration-ha/mongoadmin/pre-setup.sh deleted file mode 100755 index defa876c..00000000 --- a/BACA/configuration-ha/mongoadmin/pre-setup.sh +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env bash - -. ../common.sh - -NUMOFSHARDS=2 -# NFS_IP=172.16.243.23 -#KUBE_NAME_SPACE=sp -# ENTRYPASSWORD='bacauser' -LOG_LEVEL=info -ROUTER_REPLICA=3 -SHARD_REPLICA=3 -CONFIG_REPLICA=3 - -CONFIG_PORT=27019 -DB_SHARD_PORT=27018 -ROUTER_PORT=27017 - -CONFIG_REPLSET_ADMIN_PREFIX="configReplSetAdmin" -current_templates_path="../../stable/ibm-dba-baca-prod/charts/mongoadmin-ha/templates" -current_base_path="../../stable/ibm-dba-baca-prod/charts/mongoadmin-ha" -#current_templates_path=$(pwd)/templates -#mkdir $current_templates_path -echo "Removing existing yaml before generating the new ones ...." 
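Once post-setup.sh prints "Done", each shard and the config server replica set should have a primary, both shards should be registered with the mongos routers, and the admin and application users should exist. A quick sanity check is sketched below; it reuses the same labels, container name, and certificate paths as post-setup.sh and assumes the router container exposes the `MONGO_INITDB_ROOT_USERNAME` and `ENTRYPASSWORD` environment variables that the user-creation steps above rely on.

```bash
# Sketch: confirm both admin shards are registered and reachable through the first mongos router.
ROUTER_POD=$(kubectl get pod -l "tier=routers-admin" \
  -o jsonpath='{.items[0].metadata.name}' --namespace=${KUBE_NAME_SPACE})
kubectl exec --namespace=${KUBE_NAME_SPACE} ${ROUTER_POD} -c mongos-admin-router-container -- bash -c \
  'mongo --host 127.0.0.1 --port 27017 --ssl --sslAllowInvalidCertificates \
     --sslPEMKeyFile /etc/certs/mongo.key --sslCAFile /etc/certs/mongo.pem \
     -u "$MONGO_INITDB_ROOT_USERNAME" -p "$ENTRYPASSWORD" --authenticationDatabase admin \
     --eval "sh.status()"'
```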
-rm -rf $current_templates_path/* - -#cp templates_base/local-storage-base.yaml templates/local-storage-base.yaml -cp templates_base/mongo-service-base.yaml $current_templates_path/mongo-service.yaml -cp values-base.yaml $current_base_path/values.yaml - -echo LOG_LEVEL=$LOG_LEVEL -sed -i.bak s#\$LOG_LEVEL#$LOG_LEVEL# $current_base_path/values.yaml -echo "Replacing '' with $KUBE_NAME_SPACE" -sed -i.bak s#\$KUBE_NAME_SPACE#$KUBE_NAME_SPACE# $current_base_path/values.yaml -echo "Replacing '' with $NFS_IP" -# sed -i.bak s#\$NFS_IP#$NFS_IP# values.yaml -sed -i.bak s#\$ROUTER_REPLICA#$ROUTER_REPLICA# $current_base_path/values.yaml -sed -i.bak s#\$SHARD_REPLICA#$SHARD_REPLICA# $current_base_path/values.yaml -sed -i.bak s#\$CONFIG_REPLICA#$CONFIG_REPLICA# $current_base_path/values.yaml -sed -i.bak s#\$LOGPVC#$LOGPVC# $current_base_path/values.yaml - -if [ "$SSH_USER" = "root" ]; then - export SUDO_CMD="" -else - export SUDO_CMD="sudo" -fi - -if [[ $PVCCHOICE == "1" ]]; then - echo "Creating necessary folder in $NFS_IP..." - cp templates_base/local-storage-base.yaml $current_templates_path/local-storage.yaml - for i in `seq 0 $((CONFIG_REPLICA-1))` - do - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/configdb-admin-${i}" - done - - for i in `seq 0 $((NUMOFSHARDS-1))` - do - for j in `seq 0 $((SHARD_REPLICA-1))` - do - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/mongodb-admin-shard${i}-${j}" - done - done - - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD chown -R 51000:51001 /exports/smartpages/$KUBE_NAME_SPACE/*" - - echo "-----------------Creating pv and pvc by sp-persistence for shard admin-------------" - for i in `seq 0 $((NUMOFSHARDS-1))` - do - for j in `seq 0 $((SHARD_REPLICA-1))` - do - sed -e "s/\$KUBE_NAME_SPACE/$KUBE_NAME_SPACE/g; s/\$SHARDX/${i}/g; s/\$COUNTER/${j}/g; s#\$NFS_IP#${NFS_IP}#g" \ - ./templates_base/shard-persistence-base.yaml> $current_templates_path/persistence-shard${i}-${j}.yaml - done - done - - echo "-------------Creating pv and pvc by sp-persistence for mongodb admin config-----------------" - for i in `seq 0 $((CONFIG_REPLICA-1))` - do - sed -e "s/\$KUBE_NAME_SPACE/$KUBE_NAME_SPACE/g; s/\$COUNTER/${i}/g; s#\$NFS_IP#${NFS_IP}#g" ./templates_base/configdb-persistence-base.yaml> \ - $current_templates_path/configdb-persistence-${i}.yaml - done -fi -echo "------------cp mongodb admin configsvr--------------------" -sed -e "s/\$KUBE_NAME_SPACE/$KUBE_NAME_SPACE/g; s/\$PORT_NUMBER/$PORT_NUMBER/g" ./templates_base/configdb-service-base.yaml> $current_templates_path/configdb-service.yaml - -echo "------------cp mongodb admin shardX------------" -for i in `seq 0 $((NUMOFSHARDS-1))` -do - sed -e "s/\$SHARDX/${i}/g" ./templates_base/shardX-stateful.yaml> $current_templates_path/shard${i}-stateful.yaml -done - -echo "------------cp mongodb admin router(mongos)------------" -# !!!Replicas if your mongodb-admin-configdb has more than x>=3 replicas, please add mongodb-admin-configdb-{x-1}.mongodb-admin-configdb-service.${KUBE_NAME_SPACE}.svc.cluster.local:27019 in the end -for i in `seq 0 $((CONFIG_REPLICA-1))` -do - CONFIG_SERVER_LIST_S="${CONFIG_SERVER_LIST_S}mongodb-admin-configdb-${i}.mongodb-admin-configdb-service.${KUBE_NAME_SPACE}.svc.cluster.local:${CONFIG_PORT}," -done -CONFIG_SERVER_LIST_S=${CONFIG_SERVER_LIST_S:: -1} -CONFIG_REPLSET_VALUE="${CONFIG_REPLSET_ADMIN_PREFIX}/${CONFIG_SERVER_LIST_S}" -echo 
"CONFIG_REPLSET_VALUE=${CONFIG_REPLSET_VALUE}" -#CONFIG_REPLSET_VALUE="configReplSetAdmin/mongodb-admin-configdb-0.mongodb-admin-configdb-service.${KUBE_NAME_SPACE}.svc.cluster.local:27019,mongodb-admin-configdb-1.mongodb-admin-configdb-service.${KUBE_NAME_SPACE}.svc.cluster.local:27019,mongodb-admin-configdb-2.mongodb-admin-configdb-service.${KUBE_NAME_SPACE}.svc.cluster.local:27019" -sed -i.bak s#\$CONFIG_REPLSET_VALUE#$CONFIG_REPLSET_VALUE# $current_base_path/values.yaml -cp ./templates_base/mongos-router-base.yaml $current_templates_path/mongos-router.yaml diff --git a/BACA/configuration-ha/mongoadmin/templates_base/configdb-persistence-base.yaml b/BACA/configuration-ha/mongoadmin/templates_base/configdb-persistence-base.yaml deleted file mode 100644 index 0f3c47db..00000000 --- a/BACA/configuration-ha/mongoadmin/templates_base/configdb-persistence-base.yaml +++ /dev/null @@ -1,22 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: pv-$KUBE_NAME_SPACE-configdb-admin-$COUNTER - # namespace: {{.Values.global.nameSpace}} - labels: - app: mongoadmin-configdb-pv - configpv: configdb-admin-$COUNTER - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongoadmin-configdb-pv -spec: - accessModes: - - ReadWriteOnce - capacity: - storage: {{.Values.mongoDBConfig.storageCapacity}} - nfs: - # may use variable counter for different shard - path: /exports/smartpages/$KUBE_NAME_SPACE/configdb-admin-$COUNTER - server: $NFS_IP - persistentVolumeReclaimPolicy: Retain - storageClassName: {{.Values.storageClassName}} diff --git a/BACA/configuration-ha/mongoadmin/templates_base/configdb-service-base.yaml b/BACA/configuration-ha/mongoadmin/templates_base/configdb-service-base.yaml deleted file mode 100644 index 5c507573..00000000 --- a/BACA/configuration-ha/mongoadmin/templates_base/configdb-service-base.yaml +++ /dev/null @@ -1,177 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: mongodb-admin-configdb-service - # namespace: {{ .Values.global.nameSpace }} - labels: - name: {{ .Values.mongoDBConfig.labelName }} - app: mongodb-admin-configdb-service - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-admin-configdb-service -spec: - ports: - - port: {{ .Values.mongoDBConfig.configPort }} - targetPort: {{ .Values.mongoDBConfig.configPort }} - clusterIP: None - selector: - role: {{ .Values.mongoDBConfig.labelName }} ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: mongodb-admin-configdb - labels: - app: mongodb-admin-configdb - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-admin-configdb -spec: - serviceName: mongodb-admin-configdb-service - replicas: {{ .Values.mongoDBConfig.replicas }} - selector: - matchLabels: - role: {{ .Values.mongoDBConfig.labelName }} - template: - metadata: - annotations: - {{- range $key, $value := .Values.global.annotations }} - {{ $key }}: {{ $value | quote }} - {{- end }} - labels: - role: {{ .Values.mongoDBConfig.labelName }} - tier: configdb-admin - replicaset: {{ .Values.mongoDBConfig.replicaSetName }} - app: mongodb-admin-configdb - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-admin-configdb - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: replicaset - operator: In - values: - - {{ .Values.mongoDBConfig.replicaSetName }} - topologyKey: 
kubernetes.io/hostname - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: mongo-admin{{ .Values.global.namespace.name }} - operator: In - values: - - "baca" - - key: beta.kubernetes.io/arch - operator: In - values: - - {{ .Values.global.arch }} - terminationGracePeriodSeconds: 10 - volumes: - - name: secrets-volume - secret: - secretName: {{ .Values.secretVolume }} - - name: sp-log-pvc - persistentVolumeClaim: - claimName: {{ .Values.global.logs.claimname }} - containers: - - name: mongodb-admin-configdb-container - image: "{{ .Values.global.mongoadmin.image.repository }}:{{ .Values.global.mongoadmin.image.tag }}" - securityContext: - runAsUser: 51000 - allowPrivilegeEscalation: false - privileged: false - readOnlyRootFilesystem: false - runAsNonRoot: true - capabilities: - drop: - - ALL - resources: -{{ toYaml .Values.mongoDBConfig.resources | indent 12 }} - env: - # - name: ENTRYPASSWORD - # value: "bacauser" - # - name: MONGO_USER - # value: "bacauser" - # - name: MONGO_PASSWORD - # value: "bacauser" - # - name: MONGO_INITDB - # value: "bacauser" - - name: LOG_LEVEL - value: {{ .Values.global.logs.logLevel }} - - name: LOG_PATH - value: {{ .Values.global.logs.path }}{{ .Values.mongoAdmin.name | substr 0 5 }}db - - name: WIREDTIGERCACHE - value: {{ .Values.global.mongoadmin.wiredTigerCache | default 0.5 | quote }} - - name: CERTIFICATE_DIR - value: "/etc/certs" - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: MONGO_TYPE - value: "configsvr" - - name: MONGO_TYPE_VALUE - value: "configReplSetAdmin" - - name: CONTAINER_PORT - value: {{ .Values.mongoDBConfig.configPort | quote }} - - name: KUBE_NAME_SPACE - value: {{ .Values.global.nameSpace | quote }} - ports: - - containerPort: {{ .Values.mongoDBConfig.configPort }} - livenessProbe: - initialDelaySeconds: 60 - periodSeconds: 60 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27019 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH - readinessProbe: - initialDelaySeconds: 30 - periodSeconds: 30 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27019 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH - imagePullPolicy: {{ .Values.global.mongoadmin.image.pullPolicy }} - volumeMounts: - - name: secrets-volume - readOnly: true - mountPath: "/etc/certs" - - name: mongodb-admin-configdb-storage - mountPath: /data/db - - name: sp-log-pvc - mountPath: {{ .Values.global.logs.path }}{{ .Values.mongoAdmin.name | substr 0 5 }}db - subPath: {{ .Values.mongoAdmin.name | replace "-" "" }} - volumeClaimTemplates: - - metadata: - name: mongodb-admin-configdb-storage - spec: - accessModes: [ "ReadWriteOnce" ] - {{- if $.Values.global.storageClass }} - {{- if (eq "-" $.Values.global.storageClass) }} - storageClassName: {{ .Values.storageClassName | quote }} - {{- else }} - storageClassName: {{ $.Values.global.storageClass | quote }} - {{- end }} - {{- end }} - resources: - requests: - storage: {{.Values.mongoDBConfig.storageCapacity}} \ No newline at end of file diff --git a/BACA/configuration-ha/mongoadmin/templates_base/local-storage-base.yaml 
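Note that the node affinity in these StatefulSets and Deployments only schedules the mongo pods onto nodes labelled `mongo<namespace>=baca` (or `mongo-admin<namespace>=baca` for the admin cluster) with a matching `beta.kubernetes.io/arch` value. The arch label is set by the kubelet, but the mongo labels have to be applied by you; a minimal sketch, assuming `global.namespace.name` is `sp` and hypothetical worker node names:

```bash
# Label every worker node that should host the mongo and mongo-admin pods.
for node in worker1 worker2 worker3; do
  kubectl label node "$node" mongosp=baca mongo-adminsp=baca --overwrite
done
```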
b/BACA/configuration-ha/mongoadmin/templates_base/local-storage-base.yaml deleted file mode 100644 index caab5631..00000000 --- a/BACA/configuration-ha/mongoadmin/templates_base/local-storage-base.yaml +++ /dev/null @@ -1,11 +0,0 @@ -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: {{.Values.storageClassName}} - labels: - app: {{.Values.storageClassName}} - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: {{ .Values.storageClassName | quote }} -provisioner: kubernetes.io/no-provisioner -volumeBindingMode: WaitForFirstConsumer \ No newline at end of file diff --git a/BACA/configuration-ha/mongoadmin/templates_base/mongo-service-base.yaml b/BACA/configuration-ha/mongoadmin/templates_base/mongo-service-base.yaml deleted file mode 100644 index 78c3591b..00000000 --- a/BACA/configuration-ha/mongoadmin/templates_base/mongo-service-base.yaml +++ /dev/null @@ -1,22 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - app: "{{ .Values.mongosRouter.name }}-service" - role: mongos-admin - tier: routers-admin - heritage: "{{ .Values.mongosRouter.name }}-service" - release: {{ .Values.release | quote}} - chart: "{{ .Values.mongosRouter.name }}-service" - name: {{ .Values.mongosService }} -spec: - type: ClusterIP - selector: - app: {{ .Values.mongosRouter.name }} - ports: - - port: {{.Values.mongosRouter.routerPort}} - protocol: TCP - - - - diff --git a/BACA/configuration-ha/mongoadmin/templates_base/mongos-router-base.yaml b/BACA/configuration-ha/mongoadmin/templates_base/mongos-router-base.yaml deleted file mode 100644 index d1ba8dea..00000000 --- a/BACA/configuration-ha/mongoadmin/templates_base/mongos-router-base.yaml +++ /dev/null @@ -1,139 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ .Values.mongosRouter.name }} - labels: - app: {{ .Values.mongosRouter.name }} - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: {{ .Values.mongosRouter.name }} - # namespace: {{ .Values.global.nameSpace }} -spec: - replicas: {{ .Values.mongosRouter.replicas }} - selector: - matchLabels: - app: {{ .Values.mongosRouter.name }} - template: - metadata: - annotations: - {{- range $key, $value := .Values.global.annotations }} - {{ $key }}: {{ $value | quote }} - {{- end }} - labels: - app: {{ .Values.mongosRouter.name }} - role: mongos - tier: routers-admin - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: {{ .Values.mongosRouter.name }} - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: tier - operator: In - values: - - routers-admin - topologyKey: kubernetes.io/hostname - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: mongo-admin{{ .Values.global.namespace.name }} - operator: In - values: - - "baca" - - key: beta.kubernetes.io/arch - operator: In - values: - - {{ .Values.global.arch }} - volumes: - - name: secrets-volume - secret: - secretName: {{ .Values.secretVolume }} - - name: sp-log-pvc - persistentVolumeClaim: - claimName: {{ .Values.global.logs.claimname }} - terminationGracePeriodSeconds: 10 - containers: - - name: mongos-admin-router-container - image: "{{ .Values.global.mongoadmin.image.repository }}:{{ .Values.global.mongoadmin.image.tag }}" - imagePullPolicy: {{ .Values.global.mongoadmin.image.pullPolicy }} - env: - - name: ENTRYPASSWORD - valueFrom: - 
secretKeyRef: - name: "baca-mongo-admin" - key: MONGOADMINENTRYPASSWORD - - name: MONGO_USER - valueFrom: - secretKeyRef: - name: "baca-mongo-admin" - key: MONGOADMINUSER - - name: MONGO_PASSWORD - valueFrom: - secretKeyRef: - name: "baca-mongo-admin" - key: MONGOADMINPASSWORD - - name: MONGO_INITDB - value: "smartpages" - - name: MONGO_SECONDDB - value: "binaryfiles" - - name: MONGO_TYPE - value: "mongodb-router" - - name: CERTIFICATE_DIR - value: "/etc/certs" - - name: CONTAINER_PORT - value: {{ .Values.mongosRouter.routerPort | quote}} - - name: CONFIG_REPL_SET - value: {{ .Values.mongosRouter.configReplset }} - - name: KUBE_NAME_SPACE - value: {{ .Values.global.nameSpace | quote }} - volumeMounts: - - name: secrets-volume - readOnly: true - mountPath: "/etc/certs" - - name: sp-log-pvc -# mountPath: "/var/log/mongodb" -# subPath: mongo - mountPath: {{ .Values.global.logs.path }}{{ .Values.mongoAdmin.name | substr 0 5 }}db - subPath: {{ .Values.mongoAdmin.name | replace "-" "" }} - resources: -{{ toYaml .Values.mongosRouter.resources | indent 10 }} - securityContext: - runAsUser: 51000 - allowPrivilegeEscalation: false - privileged: false - readOnlyRootFilesystem: false - runAsNonRoot: true - capabilities: - drop: - - ALL - ports: - - containerPort: {{ .Values.mongosRouter.routerPort }} - livenessProbe: - initialDelaySeconds: 60 - periodSeconds: 60 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27017 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH - readinessProbe: - initialDelaySeconds: 30 - periodSeconds: 30 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27017 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH \ No newline at end of file diff --git a/BACA/configuration-ha/mongoadmin/templates_base/shard-persistence-base.yaml b/BACA/configuration-ha/mongoadmin/templates_base/shard-persistence-base.yaml deleted file mode 100644 index 34080b1d..00000000 --- a/BACA/configuration-ha/mongoadmin/templates_base/shard-persistence-base.yaml +++ /dev/null @@ -1,21 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: pv-mongodb-admin-shard$SHARDX-$COUNTER - # namespace: {{ .Values.global.nameSpace }} - labels: - shard: admin-shard$SHARDX - app: pv-admin-shard$SHARDX - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: "pv-admin-shard$SHARDX" -spec: - accessModes: - - ReadWriteOnce - capacity: - storage: {{ .Values.mongoDBShard.storageCapacity }} - nfs: - path: /exports/smartpages/$KUBE_NAME_SPACE/mongodb-admin-shard$SHARDX-$COUNTER - server: $NFS_IP - persistentVolumeReclaimPolicy: Retain - storageClassName: {{.Values.storageClassName}} diff --git a/BACA/configuration-ha/mongoadmin/templates_base/shardX-stateful.yaml b/BACA/configuration-ha/mongoadmin/templates_base/shardX-stateful.yaml deleted file mode 100644 index c4561f98..00000000 --- a/BACA/configuration-ha/mongoadmin/templates_base/shardX-stateful.yaml +++ /dev/null @@ -1,182 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: mongodb-admin-shard$SHARDX-service - # namespace: {{ .Values.global.nameSpace }} - labels: - name: mongodb-admin-shard$SHARDX - app: admin-shard$SHARDX - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | 
quote }} - chart: mongodb-admin-shard$SHARDX-service -spec: - ports: - - port: {{ .Values.mongoDBShard.shardPort }} - targetPort: {{ .Values.mongoDBShard.shardPort }} - clusterIP: None - selector: - role: mongodb-admin-shard$SHARDX ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: mongodb-admin-shard$SHARDX - labels: - app: mongodb-admin-shard$SHARDX - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-shard$SHARDX - # namespace: {{ .Values.global.nameSpace }} -spec: - selector: - matchLabels: - role: mongodb-admin-shard$SHARDX - serviceName: mongodb-admin-shard$SHARDX-service - replicas: {{ .Values.mongoDBShard.replicas }} - template: - metadata: - annotations: - {{- range $key, $value := .Values.global.annotations }} - {{ $key }}: {{ $value | quote }} - {{- end }} - labels: - role: mongodb-admin-shard$SHARDX - tier: mongodb-admin - replicaset: rs-admin-shard$SHARDX - app: mongodb-shard$SHARDX - heritage: {{ .Release.Service | quote }} - release: {{ .Release.Name | quote }} - chart: mongodb-shard$SHARDX - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: replicaset - operator: In - values: - - rs-shard$SHARDX - topologyKey: kubernetes.io/hostname - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: mongo-admin{{ .Values.global.namespace.name }} - operator: In - values: - - "baca" - - key: beta.kubernetes.io/arch - operator: In - values: - - {{ .Values.global.arch }} - terminationGracePeriodSeconds: 10 - volumes: - - name: secrets-volume - secret: - secretName: {{ .Values.secretVolume }} - - name: sp-log-pvc - persistentVolumeClaim: - claimName: {{ .Values.global.logs.claimname }} - containers: - - name: mongod-admin-shard$SHARDX-container - image: "{{ .Values.global.mongoadmin.image.repository }}:{{ .Values.global.mongoadmin.image.tag }}" - imagePullPolicy: {{ .Values.global.mongoadmin.image.pullPolicy }} - resources: -{{ toYaml .Values.mongoDBShard.resources | indent 10 }} - env: - # - name: ENTRYPASSWORD - # value: "$ENTRYPASSWORD" - # - name: MONGO_USER - # value: "$MONGO_USER" - # - name: MONGO_PASSWORD - # value: "$MONGO_PASSWORD" - # - name: MONGO_INITDB - # value: "$MONGOADMINAUTHDB" - # - name: MONGO_SECONDDB - # value: "binaryfiles" - - name: LOG_PATH - value: "{{ .Values.logs.path }}{{ .Values.mongoAdmin.name | substr 0 5 }}db" - - name: LOG_LEVEL - value: {{ .Values.global.logs.logLevel }} - - name: CERTIFICATE_DIR - value: "/etc/certs" - - name: WIREDTIGERCACHE - value: {{ .Values.global.mongoadmin.wiredTigerCache | default 0.5 | quote }} - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: MONGO_TYPE - value: "shard" - - name: MONGO_TYPE_VALUE - value: "rs-admin-shard$SHARDX" - - name: CONTAINER_PORT - value: {{ .Values.mongoDBShard.shardPort | quote}} - - name: KUBE_NAME_SPACE - value: {{ .Values.global.nameSpace | quote }} - securityContext: - runAsUser: 51000 - allowPrivilegeEscalation: false - privileged: false - readOnlyRootFilesystem: false - runAsNonRoot: true - capabilities: - drop: - - ALL - livenessProbe: - initialDelaySeconds: 60 - periodSeconds: 60 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 
127.0.0.1:27018 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH - readinessProbe: - initialDelaySeconds: 30 - periodSeconds: 30 - timeoutSeconds: 20 - successThreshold: 1 - failureThreshold: 5 - exec: - command: - - bash - - -c - - source setup_env.sh && echo 'db.runCommand("ping").ok' | mongo 127.0.0.1:27018 --sslAllowInvalidCertificates --ssl --sslPEMKeyFile $PEMFILE --sslCAFile $CERTIFICATE_PATH - ports: - - containerPort: {{ .Values.mongoDBShard.shardPort }} - volumeMounts: - - name: shard$SHARDX-admin-storage - mountPath: /data/db - - name: sp-log-pvc -# mountPath: "/var/log/mongodb" -# subPath: mongo - mountPath: {{ .Values.global.logs.path }}{{ .Values.mongoAdmin.name | substr 0 5 }}db - subPath: {{ .Values.mongoAdmin.name | replace "-" "" }} - - name: secrets-volume # must match the volume name, above - mountPath: "/etc/certs" - volumeClaimTemplates: - - metadata: - name: shard$SHARDX-admin-storage - spec: - accessModes: [ "ReadWriteOnce" ] - {{- if $.Values.global.storageClass }} - {{- if (eq "-" $.Values.global.storageClass) }} - storageClassName: {{ .Values.storageClassName | quote }} - {{- else }} - storageClassName: {{ $.Values.global.storageClass | quote }} - {{- end }} - {{- end }} - resources: - requests: - storage: {{ .Values.mongoDBShard.storageCapacity }} - diff --git a/BACA/configuration-ha/mongoadmin/values-base.yaml b/BACA/configuration-ha/mongoadmin/values-base.yaml deleted file mode 100644 index 01c6f202..00000000 --- a/BACA/configuration-ha/mongoadmin/values-base.yaml +++ /dev/null @@ -1,66 +0,0 @@ -# image: -# repository: mycluster.icp:8500/$KUBE_NAME_SPACE/mongocluster -# tag: latest -# pullPolicy: Always - -storageClassName: local-storage-admin -# nfsIP: $NFS_IP -# nameSpace: $KUBE_NAME_SPACE -# # existingSecret: true -# # wiredTigerCache: "$MONGO_WIREDTIGER_LIMIT" -wiredTigerCache: "0.5" -secretVolume: baca-secrets$KUBE_NAME_SPACE -mongosService: mongos-admin-service - - -mongoAdmin: - # nodeSelector: - # mongo-admin$KUBE_NAME_SPACE: baca - name: mongo-admin - -logs: - # claimname: $LOGPVC - path: /var/log/ - # logLevel: $LOG_LEVEL - -mongoDBConfig: - storageCapacity: 10Gi - labelName: mongodb-admin-configdb - configPort: 27019 - replicas: $CONFIG_REPLICA - replicaSetName: ConfigDBRepSetAdmin - resources: - limits: - memory: "1Gi" - cpu: "500m" - requests: - memory: "256Mi" - cpu: "500m" - -mongosRouter: - name: mongos-admin-router - routerPort: 27017 - replicas: $ROUTER_REPLICA - configReplset: "$CONFIG_REPLSET_VALUE" - resources: - limits: - memory: "1Gi" - cpu: "500m" - requests: - memory: "256Mi" - cpu: "500m" - -mongoDBShard: - # heritage: admin-shard - # pvheritage: admin-shardpv - storageCapacity: 15Gi - shardPort: 27018 - replicas: $SHARD_REPLICA - resources: - limits: - memory: "1Gi" - cpu: "500m" - requests: - memory: "256Mi" - cpu: "500m" - diff --git a/BACA/configuration-ha/openssl.cnf b/BACA/configuration-ha/openssl.cnf deleted file mode 100644 index 8a8ecb64..00000000 --- a/BACA/configuration-ha/openssl.cnf +++ /dev/null @@ -1,56 +0,0 @@ -[req] -default_bits = 2048 -utf8 = yes -distinguished_name = req_distinguished_name -req_extensions = v3_req - -[req_distinguished_name] -countryName = Country Name (2 letter code) -countryName_default = CA -countryName_min = 2 -countryName_max = 2 -stateOrProvinceName = State or Province Name (full name) -stateOrProvinceName_default = NS -stateOrProvinceName_max = 64 -localityName = Locality Name (eg, city) -localityName_default = Halifax -localityName_max = 64 
-organizationName = Organization Name (eg, company) -organizationName_default = IBM -organizationName_max = 64 -organizationalUnitName = Organizational Unit Name (eg, section) -organizationalUnitName_default = baca -organizationalUnitName_max = 64 -commonName = *.svc.cluster.local -commonName_max = 64 - -[v3_req] -basicConstraints = CA:FALSE -subjectKeyIdentifier = hash -keyUsage = digitalSignature, keyEncipherment -extendedKeyUsage = clientAuth, serverAuth -subjectAltName = @alt_names - -[alt_names] -DNS.1 = localhost -DNS.2 = mongodb-admin-shard0-0.mongodb-admin-shard0-service.sp.svc.cluster.local -DNS.3 = mongodb-admin-shard0-1.mongodb-admin-shard0-service.sp.svc.cluster.local -DNS.4 = mongodb-admin-shard0-2.mongodb-admin-shard0-service.sp.svc.cluster.local -DNS.5 = mongodb-admin-shard1-0.mongodb-admin-shard1-service.sp.svc.cluster.local -DNS.6 = mongodb-admin-shard1-1.mongodb-admin-shard1-service.sp.svc.cluster.local -DNS.7 = mongodb-admin-shard1-2.mongodb-admin-shard1-service.sp.svc.cluster.local -DNS.8 = mongodb-admin-configdb-0.mongodb-admin-configdb-service.sp.svc.cluster.local -DNS.9 = mongodb-admin-configdb-1.mongodb-admin-configdb-service.sp.svc.cluster.local -DNS.10 = mongodb-admin-configdb-2.mongodb-admin-configdb-service.sp.svc.cluster.local -DNS.11 = mongodb-shard0-0.mongodb-shard0-service.sp.svc.cluster.local -DNS.12 = mongodb-shard0-1.mongodb-shard0-service.sp.svc.cluster.local -DNS.13 = mongodb-shard0-2.mongodb-shard0-service.sp.svc.cluster.local -DNS.14 = mongodb-shard1-0.mongodb-shard1-service.sp.svc.cluster.local -DNS.15 = mongodb-shard1-1.mongodb-shard1-service.sp.svc.cluster.local -DNS.16 = mongodb-shard1-2.mongodb-shard1-service.sp.svc.cluster.local -DNS.17 = mongodb-configdb-0.mongodb-configdb-service.sp.svc.cluster.local -DNS.18 = mongodb-configdb-1.mongodb-configdb-service.sp.svc.cluster.local -DNS.19 = mongodb-configdb-2.mongodb-configdb-service.sp.svc.cluster.local -IP.1 = 127.0.0.1 - - diff --git a/BACA/configuration-ha/renewCert.sh b/BACA/configuration-ha/renewCert.sh deleted file mode 100755 index dbaf4e47..00000000 --- a/BACA/configuration-ha/renewCert.sh +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env bash -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - -. ./common.sh -. ./bashfunctions.sh -. ./createSSLCert.sh - - -today=`date +%Y-%m-%d.%H:%M:%S` -echo $today - - -# confirm they want to delete -echo -echo -e "\x1B[1;31mThis script will RENEW all the certificates for IBM Business Automation Content Analyzer in $KUBE_NAME_SPACE \x1B[0m" -echo -echo -e "\x1B[1;31mThe script will delete ALL the IBM Business Automation Content Analyzer pods in $KUBE_NAME_SPACE. Therefore, you must make sure to back up your ontology, etc., and make sure there are no activities on the system \x1B[0m" -echo -ls -al *.pem > /dev/null -if [[ $? == "0" ]]; then - echo -e "\x1B[1;31mBased on the PEM files in the $PWD, the expiration dates for them are: \x1B[0m" - - for pem in ./*.pem; do - printf '%s: %s\n' \ - "$pem expires on" \ - "$(date --date="$(openssl x509 -enddate -noout -in "$pem"|cut -d= -f 2)" --iso-8601)" - done -else - echo -e "\x1B[1;31mWe could not find any existing PEM files in $PWD \x1B[0m" -fi - -while [[ $renewConfirm != "y" && $renewConfirm != "n" && $renewConfirm != "yes" && $renewConfirm != "no" ]] # While renewConfirm is not y or n... 
-do - echo -e "\x1B[1;31mWould you like to continue (Y/N):\x1B[0m" - read renewConfirm - renewConfirm=$(echo "$renewConfirm" | tr '[:upper:]' '[:lower:]') -done - - -if [[ $renewConfirm == "n" || $renewConfirm == "no" ]] -then - exit -else - loginToCluster - createSSLCert - createSecret - echo -e "\x1B[1;31m Deleting all Content Analyzer's pods ... " - kubectl -n sp delete --all pods --force --grace-period=0 -fi \ No newline at end of file diff --git a/BACA/configuration-ha/sppersistent.yaml b/BACA/configuration-ha/sppersistent.yaml deleted file mode 100644 index 03bfc6d3..00000000 --- a/BACA/configuration-ha/sppersistent.yaml +++ /dev/null @@ -1,83 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: sp-data-pv-$KUBE_NAME_SPACE - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - capacity: - storage: 60Gi - nfs: - path: /exports/smartpages/$KUBE_NAME_SPACE/data - server: $NFS_IP - persistentVolumeReclaimPolicy: Retain ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: $DATAPVC - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 60Gi - volumeName: sp-data-pv-$KUBE_NAME_SPACE ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: sp-log-pv-$KUBE_NAME_SPACE - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - capacity: - storage: 35Gi - nfs: - path: /exports/smartpages/$KUBE_NAME_SPACE/logs - server: $NFS_IP - persistentVolumeReclaimPolicy: Retain ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: $LOGPVC - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 35Gi - volumeName: sp-log-pv-$KUBE_NAME_SPACE ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: sp-config-pv-$KUBE_NAME_SPACE - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - capacity: - storage: 5Gi - nfs: - path: /exports/smartpages/$KUBE_NAME_SPACE/config - server: $NFS_IP - persistentVolumeReclaimPolicy: Retain ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: $CONFIGPVC - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 5Gi - volumeName: sp-config-pv-$KUBE_NAME_SPACE \ No newline at end of file diff --git a/BACA/configuration/DB2/AddOntology.sh b/BACA/configuration/DB2/AddOntology.sh deleted file mode 100755 index 29f68640..00000000 --- a/BACA/configuration/DB2/AddOntology.sh +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash - -echo -echo "-- This script will create a new ontology for an existing tenant and load it with default data." -echo - -./AddTenant.sh 1 \ No newline at end of file diff --git a/BACA/configuration/DB2/AddTenant.bat b/BACA/configuration/DB2/AddTenant.bat deleted file mode 100755 index 05ab9be2..00000000 --- a/BACA/configuration/DB2/AddTenant.bat +++ /dev/null @@ -1,143 +0,0 @@ -@echo off - -SETLOCAL -echo Enter '1' to add new tenant and an ontology. -echo Enter '2' to add an ontology for an existing tenant database. -echo Enter anything to abort - -set /p choice="Type input: " - -set /p tenant_id= Enter the tenant ID for the new tenant: (eg. t4900) : - -set /p tenant_db_name= Enter the name of the new BACA tenant database to create: (eg. t4900) : - -set /p baca_database_server_ip= Enter the host/IP of the tenant database server. : - -set /p baca_database_port= Enter the port of the tenant database server : - -set /p tenant_db_user= Please enter the name of tenant database user. 
If no value is entered we will use the following default value 'tenantuser' : -IF NOT DEFINED tenant_db_user SET "tenant_db_user=tenantuser" - -set /p tenant_db_pwd= Enter the password for the tenant database user: - -set /p tenant_ontology= Enter the tenant ontology name. If nothing is entered, the default name 'default' will be used : -IF NOT DEFINED tenant_ontology SET "tenant_ontology=default" - -set /p base_db_name= Enter the name of the Base BACA database with the TENANTINFO Table. If nothing is entered, we will use the following default value 'CABASEDB': -IF NOT DEFINED base_db_name SET "base_db_name=CABASEDB" - -set /p base_db_user= Enter the name of the database user for the Base BACA database. If nothing is entered, we will use the following default value 'CABASEUSER' : -IF NOT DEFINED base_db_user SET "base_db_user=CABASEUSER" - -set /p tenant_company= Please enter the company name for the initial BACA user : - -set /p tenant_first_name= Please enter the first name for the initial BACA user : - -set /p tenant_last_name= Please enter the last name for the initial BACA user : - -set /p tenant_email= Please enter a valid email address for the initial BACA user : - -set /p tenant_user_name= Please enter the login name for the initial BACA user : - -set /p ssl= Please indicate whether SSL is enabled for the tenant database connection (Yes/No) : - -echo "-- Please confirm these are the desired settings:" -echo " - tenant ID: %tenant_id%" -echo " - tenant database name: %tenant_db_name%" -echo " - database server hostname/IP: %baca_database_server_ip%" -echo " - database server port: %baca_database_port%" -echo " - tenant database user: %tenant_db_user%" -echo " - ontology name: %tenant_ontology%" -echo " - base database: %base_db_name%" -echo " - base database user: %base_db_user%" -echo " - tenant company name: %tenant_company%" -echo " - tenant first name: %tenant_first_name%" -echo " - tenant last name: %tenant_last_name%" -echo " - tenant email address: %tenant_email%" -echo " - tenant login name: %tenant_user_name%" - -set /P c=Are you sure you want to continue[Y/N]? 
-if /I "%c%" EQU "Y" goto :DOCREATE -if /I "%c%" EQU "N" goto :DOEXIT - -:DOCREATE - echo "Running the db script" - REM adding new teneant db need to create db first - IF "%choice%"=="1" ( - echo "Creating db on user input" - db2 CREATE DATABASE %tenant_db_name% AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768 - db2 CONNECT TO %tenant_db_name% - db2 GRANT CONNECT,DATAACCESS ON DATABASE TO USER %tenant_db_user% - db2 GRANT USE OF TABLESPACE USERSPACE1 TO USER %tenant_db_user% - db2 CONNECT RESET - ) - - REM create schema - echo "Connecting to db and creating schema" - db2 CONNECT TO %tenant_db_name% - db2 CREATE SCHEMA %tenant_ontology% - db2 SET SCHEMA %tenant_ontology% - - REM create tables - echo "creating schema tables" - db2 -stvf sql\CreateBacaTables.sql - - REM table permissions to tenant user - echo "Giving permissions on tables" - db2 GRANT ALTER ON TABLE DOC_CLASS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE DOC_ALIAS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE KEY_CLASS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE KEY_ALIAS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE CWORD TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE HEADING TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE HEADING_ALIAS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE USER_DETAIL TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE INTEGRATION TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE IMPORT_ONTOLOGY TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE API_INTEGRATIONS_OBJECTSSTORE TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE SMARTPAGES_OPTIONS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE FONTS TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE FONTS_TRANSID TO USER %tenant_db_user% - db2 GRANT ALTER ON TABLE DB_BACKUP TO USER %tenant_db_user% - - REM load the tenant Db - echo "Loading default data into tables" - db2 load from CSVFiles\doc_class.csv of del modified by identityoverride insert into doc_class - db2 load from CSVFiles\key_class.csv of del modified by identityoverride insert into key_class - db2 load from CSVFiles\doc_alias.csv of del modified by identityoverride insert into doc_alias - db2 load from CSVFiles\key_alias.csv of del modified by identityoverride insert into key_alias - db2 load from CSVFiles\cword.csv of del modified by identityoverride insert into cword - db2 load from CSVFiles\heading.csv of del modified by identityoverride insert into heading - db2 load from CSVFiles\heading_alias.csv of del modified by identityoverride insert into heading_alias - db2 load from CSVFiles\key_class_dc.csv of del modified by identityoverride insert into key_class_dc - db2 load from CSVFiles\doc_alias_dc.csv of del modified by identityoverride insert into doc_alias_dc - db2 load from CSVFiles\key_alias_dc.csv of del modified by identityoverride insert into key_alias_dc - db2 load from CSVFiles\key_alias_kc.csv of del modified by identityoverride insert into key_alias_kc - db2 load from CSVFiles\heading_dc.csv of del modified by identityoverride insert into heading_dc - db2 load from CSVFiles\heading_alias_dc.csv of del modified by identityoverride insert into heading_alias_dc - db2 load from CSVFiles\heading_alias_h.csv of del modified by identityoverride insert into heading_alias_h - db2 load from CSVFiles\cword_dc.csv of del modified by identityoverride insert into cword_dc - db2 connect reset - - REM Insert InsertTenant - echo "Connecting to base database to insert tenant info" - db2 connect 
to %base_db_name% - db2 set schema %base_db_user% - db2 insert into TENANTINFO (tenantid,ontology,tenanttype,rdbmsengine,bacaversion,rdbmsconnection) values ( '%tenant_id%', '%tenant_ontology%', 0, 'DB2', '1.1', encrypt('DATABASE=%tenant_db_name%;HOSTNAME=%baca_database_server_ip%;PORT=%baca_database_port%;PROTOCOL=TCPIP;UID=%tenant_db_user%;PWD=%tenant_db_pwd%;','AES_KEY')) - db2 connect reset - - REM Insert InsertUser - echo "Connecting to tenant database to insert initial userinfo" - db2 connect to %tenant_db_name% - db2 set schema %tenant_ontology% - db2 insert into user_detail (email,first_name,last_name,user_name,company,expire) values ('%tenant_email%','%tenant_first_name%','%tenant_last_name%','%tenant_user_name%','%tenant_company%',10080) - db2 insert into login_detail (user_id,role,status,logged_in) select user_id,'Admin','1',0 from user_detail where email='%tenant_email%' - db2 connect reset - goto END -:DOEXIT - echo "Exited on user input" - goto END -:END - echo "END" - -ENDLOCAL diff --git a/BACA/configuration/DB2/CSVFiles/cword.csv b/BACA/configuration/DB2/CSVFiles/cword.csv deleted file mode 100644 index f7240470..00000000 --- a/BACA/configuration/DB2/CSVFiles/cword.csv +++ /dev/null @@ -1,75 +0,0 @@ -12,inspection -13,vin -14,repair -15,estimates -16,policy -17,qty -18,excluding -19,bank -20,cost -21,credit -22,taxable -23,task -24,shipped -25,ship -26,salesperson -27,handling -28,gst -29,client -30,order -31,receipt -32,draft -33,payment -34,fees -35,offer -36,claim -37,report -38,invoice -39,total -40,settlement -41,services -42,amount -43,brand -44,terms -45,tax -46,purchase -47,due -48,acct -49,account -50,campaign -51,letter -52,invitation -53,attn -54,sincerely -55,insurance -56,patient -57,disability -58,health -59,adjuster -60,division -61,investigating -62,attorney -63,power -64,principal -65,designation -66,authority -67,agreement -68,contract -69,pricing -70,provider -71,schedule -72,branch -73,solution -74,authorized -75,sales -1,statement -2,balance -3,capital -4,shipment -5,flag -6,lading -7,master -8,shipper -9,consignee -10,voyage -11,loading diff --git a/BACA/configuration/DB2/CSVFiles/cword_dc.csv b/BACA/configuration/DB2/CSVFiles/cword_dc.csv deleted file mode 100644 index ce162f91..00000000 --- a/BACA/configuration/DB2/CSVFiles/cword_dc.csv +++ /dev/null @@ -1,75 +0,0 @@ -3,12,0 -3,13,0 -3,14,0 -3,15,0 -3,16,0 -4,17,0 -4,18,0 -4,19,0 -4,20,0 -4,21,0 -4,22,0 -4,23,0 -4,24,0 -4,25,0 -4,26,0 -4,27,0 -4,28,0 -4,29,0 -4,30,0 -4,31,0 -4,32,0 -4,33,0 -4,34,0 -4,35,0 -4,36,0 -4,37,0 -4,38,0 -4,39,0 -4,40,0 -4,41,0 -4,42,0 -4,43,0 -4,44,0 -4,45,0 -4,46,0 -4,47,0 -4,48,0 -4,49,0 -4,50,0 -5,51,0 -5,52,0 -5,53,0 -5,54,0 -6,55,0 -6,56,0 -6,57,0 -6,58,0 -7,59,0 -7,60,0 -7,61,0 -8,62,0 -8,63,0 -8,64,0 -8,65,0 -8,66,0 -9,67,0 -9,68,0 -9,69,0 -9,70,0 -9,71,0 -9,72,0 -9,73,0 -9,74,0 -9,75,0 -1,1,0 -1,2,0 -1,3,0 -2,4,0 -2,5,0 -2,6,0 -2,7,0 -2,8,0 -2,9,0 -2,10,0 -2,11,0 diff --git a/BACA/configuration/DB2/CSVFiles/doc_alias.csv b/BACA/configuration/DB2/CSVFiles/doc_alias.csv deleted file mode 100644 index 54990440..00000000 --- a/BACA/configuration/DB2/CSVFiles/doc_alias.csv +++ /dev/null @@ -1,10 +0,0 @@ -1,Capital Balance statement,en -2,valuation report,en -3,Balance statement,en -4,bill of lading,en -5,Tax Invoice,en -6,Invoice,en -7,Letter of Invitation,en -8,Letter of Employment,en -9,Police Report,en -10,Power of Attorney,en diff --git a/BACA/configuration/DB2/CSVFiles/doc_alias_dc.csv b/BACA/configuration/DB2/CSVFiles/doc_alias_dc.csv deleted file mode 100644 index 
2cc9d633..00000000 --- a/BACA/configuration/DB2/CSVFiles/doc_alias_dc.csv +++ /dev/null @@ -1,10 +0,0 @@ -1,1,0 -2,1,0 -3,1,0 -4,2,0 -5,4,0 -6,4,0 -7,5,0 -8,5,0 -9,7,0 -10,8,0 diff --git a/BACA/configuration/DB2/CSVFiles/doc_class.csv b/BACA/configuration/DB2/CSVFiles/doc_class.csv deleted file mode 100644 index 0d53dbd4..00000000 --- a/BACA/configuration/DB2/CSVFiles/doc_class.csv +++ /dev/null @@ -1,9 +0,0 @@ -1,Balance Statement,This is a Sample -2,Bill of Lading,This is a Sample -3,Estimates,This is a Sample -4,Invoice,This is a Sample -5,Letter,This is a Sample -6,Medical Record,This is a Sample -7,Police Report,This is a Sample -8,Power of Attorney,This is a Sample -9,Pricing Schedule,This is a Sample diff --git a/BACA/configuration/DB2/CSVFiles/heading.csv b/BACA/configuration/DB2/CSVFiles/heading.csv deleted file mode 100644 index 77896d2f..00000000 --- a/BACA/configuration/DB2/CSVFiles/heading.csv +++ /dev/null @@ -1,2 +0,0 @@ -1,Principal, -2,designation, diff --git a/BACA/configuration/DB2/CSVFiles/heading_alias.csv b/BACA/configuration/DB2/CSVFiles/heading_alias.csv deleted file mode 100644 index c6d1389d..00000000 --- a/BACA/configuration/DB2/CSVFiles/heading_alias.csv +++ /dev/null @@ -1,2 +0,0 @@ -1,caution to the principal -2,designation of agent diff --git a/BACA/configuration/DB2/CSVFiles/heading_alias_dc.csv b/BACA/configuration/DB2/CSVFiles/heading_alias_dc.csv deleted file mode 100644 index 2e787c85..00000000 --- a/BACA/configuration/DB2/CSVFiles/heading_alias_dc.csv +++ /dev/null @@ -1,2 +0,0 @@ -1,8 -2,8 diff --git a/BACA/configuration/DB2/CSVFiles/heading_alias_h.csv b/BACA/configuration/DB2/CSVFiles/heading_alias_h.csv deleted file mode 100644 index 3bf58f25..00000000 --- a/BACA/configuration/DB2/CSVFiles/heading_alias_h.csv +++ /dev/null @@ -1,2 +0,0 @@ -1,1 -2,2 diff --git a/BACA/configuration/DB2/CSVFiles/heading_dc.csv b/BACA/configuration/DB2/CSVFiles/heading_dc.csv deleted file mode 100644 index 2e787c85..00000000 --- a/BACA/configuration/DB2/CSVFiles/heading_dc.csv +++ /dev/null @@ -1,2 +0,0 @@ -1,8 -2,8 diff --git a/BACA/configuration/DB2/CSVFiles/key_alias.csv b/BACA/configuration/DB2/CSVFiles/key_alias.csv deleted file mode 100644 index 7981e570..00000000 --- a/BACA/configuration/DB2/CSVFiles/key_alias.csv +++ /dev/null @@ -1,238 +0,0 @@ -20,Adjuster,en -21,Written By,en -22,Claim #,en -23,Grand Total,en -24,Vehicle Out,en -25,Type of Loss,en -26,Insured,en -27,Policy #,en -28,Fax,en -29,Workfile ID,en -30,Phone,en -31,Days to Repair,en -32,CUSTOMER PAY,en -33,Subtotal,en -34,INSURANCE PAY,en -35,Condition,en -36,Job #,en -37,Production Date,en -38,State,en -39,Federal ID,en -40,Mileage Out,en -41,RO Number,en -42,Deductible,en -43,License,en -44,VIN,en -45,Point of Impact,en -46,Date of Loss:,en -47,Date Of Loss,en -48,Inspection Location:,en -49,Owner:,en -50,Mileage In,en -51,Exterior Color,en -52,Interior Color,en -53,Page #:,en -54,Job Description,en -55,SB Cess on Taxable Value [B],en -56,SB Cess levied by Vendor [A],en -57,Service Tax on Taxable Value [B],en -58,Inv No#,en -59,Inv Ni #:,en -60,Inv No #:,en -61,TAX INVOICE NUMBER,en -62,Invoice Number,en -63,Invoice #,en -64,Invoice Number:,en -65,Total Cost,en -66,Total Invoice Value (Rs.),en -67,INVOICE TOTAL INCLUDING GST,en -68,Total,en -69,TOTAL INC GST:,en -70,Office,en -71,Address,en -72,Work Site,en -73,Brand,en -74,Website,en -75,ATTORNEY,en -76,Email,en -77,Matter Number,en -78,Matter Number:,en -79,Regd. 
Office,en -80,Terms,en -81,Payment Terms,en -82,Est No,en -83,Est Ni:,en -84,Est Date,en -85,Est Date:,en -86,Campaign Name,en -87,Service Tax levied by Vendor [A],en -88,Agency Commission,en -89,Beneficiary Name,en -90,Sub Brand,en -91,PAN NO:,en -92,Credit,en -93,CIN No:,en -94,Swift Code,en -95,To:,en -96,Customer Name,en -97,Client,en -98,Tel,en -99,Telephone,en -100,BANK Name:,en -101,Bank,en -102,Price,en -103,Qty,en -104,Description,en -105,GL Code / Item,en -106,Sold To,en -107,ABN,en -108,Regarding,en -109,RE:,en -110,Requesting Manager,en -111,Inv Date,en -112,INVOICE DATE,en -113,Date,en -114,DUE DATE,en -115,Account No:,en -116,Acct No,en -117,Account,en -118,BSB,en -119,Acct Name,en -120,Account Name,en -121,Sub Total (Rs.),en -122,INVOICE TOTAL EXCLUDING GST,en -123,SUBTOTAL:,en -124,sales tax,en -125,GST,en -126,P.O. Number,en -127,PO Number,en -128,Purchase Nbr,en -129,Order #,en -130,Order Number,en -131,Ship To:,en -132,Branch Office:,en -133,IFSC Code,en -134,Centralised Billing and Accounting Office:,en -135,Service Tax Category:,en -136,Service Tax Regn No:,en -137,Branch,en -138,Branch:,en -139,Attn,en -140,Date of Birth,en -141,Start Date,en -142,Title,en -143,Place of Birth,en -144,Status,en -145,Employee,en -146,Full Name,en -147,Subject,en -148,Annual Salary,en -149,Citizenship,en -150,Expire Date,en -151,Passport no.,en -152,Gender,en -153,Issue Date,en -154,Smoking Status:,en -155,Service Dept,en -156,PCP,en -157,Progress Notes,en -158,Appointment Facility,en -159,Referring,en -160,med primary,en -161,prescription,en -162,primary care provider,en -163,Ph,en -164,Horne:,en -165,Horme:,en -166,NPI,en -167,Follow Up,en -168,llãollow Up,en -169,Division,en -170,Claim,en -171,Name,en -172,Diabetes,en -173,Appt. Date/Time,en -174,DOB,en -175,Marital status,en -176,Alcohol intake,en -177,Hypertension,en -178,Occupation,en -179,Kidney Stones,en -180,CELEBREX:,en -181,CEI.EBRi=X:,en -182,CËLEBREX:,en -183,Employer,en -184,Vitals,en -185,ROS,en -186,Qty:,en -187,Refills:,en -188,BMI,en -189,Wt,en -190,Encounter Date,en -191,Provider,en -192,Insurance,en -193,Client Name,en -194,Investigating Agency,en -195,County,en -196,PARTY 1:,en -197,Transaction #,en -198,TIME OF LOSS:,en -199,Claim No,en -200,Driver License,en -201,Street,en -202,DIVISION:,en -203,Division Code,en -204,ADJUSTER:,en -205,Report Number,en -206,Report Type,en -207,Tag,en -208,City,en -209,Start Date of Minimum payment period per service component,en -210,Zip Code,en -211,Zi. 
Code:,en -212,Service Components,en -213,existing circuit ids,en -214,State/Province,en -215,Country,en -216,MA Reference No.,en -217,PS/CSA Reference No.,en -218,AT&T PS Reference No.:,en -219,AT&T PA Reference No.:,en -220,pre-existing Contract no (must be included),en -221,account number,en -222,Calculation of early termination charges*,en -223,Customer,en -224,Sales Region,en -225,Pricing Schedule Term,en -226,Sales Strata,en -227,Sales / Branch Manager,en -228,Branch Manager:,en -229,Existing Service,en -230,Street Address,en -231,per Service Component,en -232,Program Code,en -233,scvp name,en -234,Rates following the end of minimum payment,en -235,Branch Transit No.,en -236,Branch Transit No.:,en -237,Rate Stabilization per service component,en -238,Effective Date of this pricing schedule,en -1,Capital Balance,en -2,balance,en -3,capital,en -4,amount,en -5,Fund as of date,en -6,period end date,en -7,Issued Date,en -8,Issued At,en -9,Master,en -10,Shipper,en -11,BL NO:,en -12,Flag,en -13,Consignee,en -14,Consignee:,en -15,Voyage No,en -16,Notify Party,en -17,On board the Tanker,en -18,Loading Port,en -19,To be delivered to the port of,en diff --git a/BACA/configuration/DB2/CSVFiles/key_alias_dc.csv b/BACA/configuration/DB2/CSVFiles/key_alias_dc.csv deleted file mode 100644 index 5aa00e02..00000000 --- a/BACA/configuration/DB2/CSVFiles/key_alias_dc.csv +++ /dev/null @@ -1,255 +0,0 @@ -20,3,0 -21,3,0 -22,3,0 -23,3,0 -24,3,0 -25,3,0 -26,3,0 -27,3,0 -28,3,0 -29,3,0 -30,3,0 -31,3,0 -32,3,0 -33,3,0 -34,3,0 -35,3,0 -36,3,0 -37,3,0 -38,3,0 -39,3,0 -40,3,0 -41,3,0 -42,3,0 -43,3,0 -44,3,0 -45,3,0 -46,3,0 -47,3,0 -48,3,0 -49,3,0 -50,3,0 -51,3,0 -52,3,0 -53,4,0 -54,4,0 -55,4,0 -56,4,0 -57,4,0 -58,4,0 -59,4,0 -60,4,0 -61,4,0 -62,4,0 -63,4,0 -64,4,0 -65,4,0 -66,4,0 -67,4,0 -68,4,0 -69,4,0 -70,4,0 -71,4,0 -72,4,0 -73,4,0 -74,4,0 -75,4,0 -76,4,0 -77,4,0 -78,4,0 -79,4,0 -80,4,0 -81,4,0 -82,4,0 -83,4,0 -84,4,0 -85,4,0 -86,4,0 -87,4,0 -88,4,0 -89,4,0 -90,4,0 -91,4,0 -92,4,0 -93,4,0 -94,4,0 -95,4,0 -96,4,0 -97,4,0 -98,4,0 -99,4,0 -100,4,0 -101,4,0 -102,4,0 -103,4,0 -104,4,0 -105,4,0 -106,4,0 -107,4,0 -108,4,0 -109,4,0 -110,4,0 -111,4,0 -112,4,0 -113,4,0 -114,4,0 -115,4,0 -116,4,0 -117,4,0 -118,4,0 -119,4,0 -120,4,0 -121,4,0 -122,4,0 -123,4,0 -124,4,0 -125,4,0 -126,4,0 -127,4,0 -128,4,0 -129,4,0 -130,4,0 -131,4,0 -132,4,0 -133,4,0 -134,4,0 -135,4,0 -136,4,0 -137,4,0 -138,4,0 -28,4,0 -30,4,0 -139,5,0 -140,5,0 -141,5,0 -142,5,0 -143,5,0 -144,5,0 -145,5,0 -146,5,0 -147,5,0 -148,5,0 -149,5,0 -150,5,0 -151,5,0 -152,5,0 -153,5,0 -154,6,0 -155,6,0 -156,6,0 -157,6,0 -158,6,0 -159,6,0 -160,6,0 -161,6,0 -162,6,0 -163,6,0 -164,6,0 -165,6,0 -166,6,0 -167,6,0 -168,6,0 -169,6,0 -170,6,0 -171,6,0 -172,6,0 -173,6,0 -174,6,0 -175,6,0 -176,6,0 -177,6,0 -178,6,0 -179,6,0 -180,6,0 -181,6,0 -182,6,0 -183,6,0 -184,6,0 -185,6,0 -186,6,0 -187,6,0 -188,6,0 -189,6,0 -190,6,0 -191,6,0 -192,6,0 -99,6,0 -28,6,0 -193,7,0 -194,7,0 -195,7,0 -196,7,0 -197,7,0 -198,7,0 -199,7,0 -200,7,0 -201,7,0 -202,7,0 -203,7,0 -204,7,0 -205,7,0 -206,7,0 -207,7,0 -208,7,0 -97,7,0 -113,7,0 -170,7,0 -38,7,0 -47,7,0 -209,9,0 -210,9,0 -211,9,0 -212,9,0 -213,9,0 -214,9,0 -215,9,0 -216,9,0 -217,9,0 -218,9,0 -219,9,0 -220,9,0 -221,9,0 -222,9,0 -223,9,0 -224,9,0 -225,9,0 -226,9,0 -227,9,0 -228,9,0 -229,9,0 -230,9,0 -231,9,0 -232,9,0 -233,9,0 -234,9,0 -235,9,0 -236,9,0 -237,9,0 -238,9,0 -113,9,0 -139,9,0 -208,9,0 -171,9,0 -142,9,0 -76,9,0 -28,9,0 -99,9,0 -1,1,0 -2,1,0 -3,1,0 -4,1,0 -5,1,0 -6,1,0 -7,2,0 -8,2,0 -9,2,0 -10,2,0 -11,2,0 -12,2,0 -13,2,0 -14,2,0 
-15,2,0 -16,2,0 -17,2,0 -18,2,0 -19,2,0 diff --git a/BACA/configuration/DB2/CSVFiles/key_alias_kc.csv b/BACA/configuration/DB2/CSVFiles/key_alias_kc.csv deleted file mode 100644 index 2f375e78..00000000 --- a/BACA/configuration/DB2/CSVFiles/key_alias_kc.csv +++ /dev/null @@ -1,255 +0,0 @@ -20,17 -21,18 -22,19 -23,20 -24,21 -25,22 -26,23 -27,24 -28,25 -29,26 -30,27 -31,28 -32,29 -33,30 -34,31 -35,32 -36,33 -37,34 -38,35 -39,36 -40,37 -41,38 -42,39 -43,40 -44,41 -45,42 -46,43 -47,43 -48,44 -49,45 -50,46 -51,47 -52,48 -53,49 -54,50 -55,51 -56,51 -57,51 -58,52 -59,52 -60,52 -61,52 -62,52 -63,52 -64,52 -65,54 -66,54 -67,54 -68,54 -69,54 -70,55 -71,55 -72,56 -73,58 -74,59 -75,60 -76,60 -77,61 -78,61 -79,62 -80,63 -81,63 -82,64 -83,64 -84,65 -85,65 -86,66 -87,67 -88,68 -89,69 -90,70 -91,71 -92,72 -93,73 -94,74 -95,75 -96,75 -97,75 -98,76 -99,76 -100,77 -101,77 -102,78 -103,79 -104,80 -105,81 -106,82 -107,83 -108,85 -109,85 -110,86 -111,87 -112,87 -113,87 -114,88 -115,89 -116,89 -117,89 -118,90 -119,91 -120,91 -121,92 -122,92 -123,92 -124,93 -125,93 -126,94 -127,94 -128,94 -129,94 -130,94 -131,95 -132,96 -133,97 -134,98 -135,99 -136,100 -137,101 -138,101 -28,53 -30,84 -139,102 -140,103 -141,104 -142,105 -143,106 -144,107 -145,108 -146,109 -147,110 -148,111 -149,112 -150,113 -151,114 -152,115 -153,116 -154,117 -155,118 -156,119 -157,120 -158,121 -159,122 -160,123 -161,124 -162,125 -163,126 -164,126 -165,126 -166,128 -167,129 -168,129 -169,129 -170,129 -171,130 -172,131 -173,132 -174,133 -175,134 -176,135 -177,136 -178,137 -179,138 -180,139 -181,139 -182,139 -183,140 -184,141 -185,142 -186,143 -187,144 -188,145 -189,146 -190,147 -191,148 -192,149 -99,126 -28,127 -193,150 -194,151 -195,152 -196,153 -197,154 -198,156 -199,157 -200,160 -201,161 -202,162 -203,162 -204,163 -205,164 -206,165 -207,166 -208,167 -97,150 -113,155 -170,157 -38,158 -47,159 -209,168 -210,169 -211,169 -212,170 -213,171 -214,173 -1,3 -2,3 -3,3 -4,3 -5,4 -6,4 -7,5 -8,6 -9,7 -10,8 -11,9 -12,10 -13,11 -14,11 -15,12 -16,13 -17,14 -18,15 -19,16 -215,175 -216,176 -217,176 -218,176 -219,176 -220,177 -221,178 -222,179 -223,180 -224,186 -225,189 -226,190 -227,191 -228,191 -229,193 -230,194 -231,195 -232,196 -233,197 -234,198 -235,199 -236,199 -237,200 -238,201 -113,172 -139,174 -208,185 -171,187 -142,188 -76,192 -28,193 -99,193 diff --git a/BACA/configuration/DB2/CSVFiles/key_class.csv b/BACA/configuration/DB2/CSVFiles/key_class.csv deleted file mode 100644 index af8fdae7..00000000 --- a/BACA/configuration/DB2/CSVFiles/key_class.csv +++ /dev/null @@ -1,201 +0,0 @@ -47,ExteriorColor,char,0,0,Exterior Color -48,InteriorColor,char,0,0,Interior Color -49,Page Number,number,0,0, -50,JobDescription,char,0,0,Job Description -51,SBCess,number,0,0,Swachh Bharat Cess -52,InvoiceNumber,number,1,0,Invoice Number -53,Fax,number,0,0,FaxNo -54,Total,number,1,0,Grand Total -55,Address,char,0,0, -56,WorkSite,char,0,0,Work Site -57,SalesPerson,char,0,0, -58,Brand,char,0,0,Brand -59,Website,char,0,0,Website Address -60,EmailAddress,char,0,0,Email address -61,MatterNumber,char,0,0,Matter Number -62,RegdOffice,char,0,0,Regd Office -63,Terms,char,0,0,Payment Terms -64,EstNo,number,0,0,Est No -65,EstDate,number,0,0,Est Date -66,CampaignName,char,0,0,Campaign Name -67,ServiceTax,number,0,0,Service Tax -68,AgencyCommission,number,0,0,Agency Commission -69,BeneficiaryName,char,0,0,Beneficiary Name -70,Sub Brand,char,0,0, -71,PANNo,number,0,0,PAN NO -72,Credit,char,0,0,Credit -73,CINNo,number,0,0,CIN No -74,SwiftCode,number,0,1,Swift Code -75,CustName,char,0,1, 
-76,Telephone,number,0,0,Telephone -77,BankName,char,0,0,Bank Name -78,Price,number,0,0,Price -79,Qty,number,0,0,Quantity -80,Description,char,0,0,Description -81,GLCode,number,0,0,GL Code -82,SoldTo,char,0,0,Sold To -83,ABN,char,0,0,ABN number -84,Phone,number,0,0,Phone no -85,Regarding,char,0,0,Regarding -86,RequestingManager,char,0,0,Requesting Manager -87,InvoiceDate,number,0,0,Invoice Date -88,DueDate,number,0,0,Due Date -89,AccNo,number,0,0,Account Number -90,BSB,number,0,0,BSB No -91,AccName,char,0,0,Account Name -92,SubTotal,number,0,0,Sub Total before tax -93,Tax,number,0,0,Tac amounts -94,PurchaseNo,number,0,0,Purchase number -95,ShipTo,char,0,0, -96,BranchOffice,char,0,0,Branch Office -97,IFSCCode,number,0,0,IFSC Code -98,CentralisedBillingAndAccOffice,char,0,0,Centralised Billing and Accounting Office -99,ServiceTaxCategory,char,0,0,Service Tax Category -100,ServiceTaxRegnNo,number,0,0,Service Tax Regn No -101,Branch,char,0,0,Branch -168,StartDate,char,0,0,Start Date of Minimum payment period per service component -169,ZipCode,number,0,0,Zip Code -170,ServiceComponents,char,0,0,Service Components -171,ExistingCircuitIds,number,0,0,existing circuit ids -172,SignedDate,number,0,0,Signed Date -173,StateProvince,char,0,0,State Province -174,Attention,char,0,0,Attention -175,Country,char,0,0,Country -176,ReferenceNo,number,0,0,Reference No -177,PreExistingContractNo,number,0,0,Pre Existing Contract No -178,AccNo,number,0,0,AccountNumber -179,PercMonthlyFee,char,0,0,Percentage of Monthly Fee -180,Customer,char,0,0,Customer -181,SDAcode,number,0,0,SDA code -182,ContractIDNo,number,0,0,contract id no -183,DS1No,number,0,0,ds1 no -184,PRINo,char,0,0,PRI No -185,City,char,0,0,City -186,SalesRegion,char,0,0,Sales Region -187,Name,char,0,0,Name -188,Title,char,0,0,Title -189,PricingTerm,char,0,0,Pricing Schedule Term -190,SalesStrata,char,0,0,Sales Strata -191,SalesBranchManager,char,0,0,Sales Branch Manager -192,EmailAddress,char,0,0,Email Address -193,TeleFax,number,0,0,Telephone and Fax -194,StreetAddress,char,0,0,Street Address -195,MinPayPeriod,char,0,0,Minimum Payment Period -196,ProgramCode,number,0,0,Program Code -197,SCVPName,char,0,0,SCVP Name -198,RatesForMinPayment,char,0,0,Rates following the end of minimum payment -199,Branch Transit Number,number,0,1, -200,RateStabilization,char,0,0,Rate Stabilization per service component -201,EfffectiveDate,char,0,0,Effective Date of this pricing schedule -46,MileageIn,number,0,0,Mileage In -102,Attention,char,0,0,Attention -103,DOB,number,0,0,Date of Birth -104,StartDate,number,0,0,Start Date -105,Title,char,0,0,Title -106,PlaceOfBirth,char,0,0,Place of Birth -107,Status,char,0,0,Status -108,Employee,char,0,0,Employee -109,FullName,char,0,0,Full Name -110,Subject,char,0,0,Subject -111,AnnualSalary,number,0,0,Annual Salary -112,Citizenship,char,0,0,Citizenship -113,ExpireDate,number,0,0,Expire Date -114,PassportNo,number,0,0,Passport no -115,Gender,char,0,0,Gender -116,IssueDate,number,0,0,Issue Date -117,Smoking Status,char,0,0, -118,ServiceDept,char,0,0,Service Department -119,PCP,char,0,0,PCP -120,ProgressNotes,char,0,0,Progress Notes -121,AppointmentFacility,char,0,0,Appointment Facility -122,Referring,char,0,0,Referring -123,MedPrimary,char,0,0,med primary -124,Prescription,char,0,0,prescription -125,PrimaryCareProvider,char,0,0,primary care provider -126,Telephone,number,0,0,Telephone -127,FaxNo,number,0,0,Fax Number -128,NPI,number,0,0,NPI -129,FollowUp,char,0,0,Follow Up -130,Name,char,0,0,Name -131,Diabetes,char,0,0, 
-132,AppointmentDateTime,number,0,0,Appt. Date/Time -133,DOB,number,0,0,Date of Birth -134,Marital status,char,0,0, -135,Alcohol intake,char,0,0, -136,Hypertension,char,0,0, -137,Occupation,char,0,0, -138,Kidney Stones,char,0,0, -139,Celebrex,char,0,0, -140,Employer,char,0,0, -141,Vitals,char,0,0, -142,ROS,char,0,0, -143,Quantity,number,0,0, -144,Refills,number,0,0, -145,BodyMassIndex,number,0,0,Body Mass Index (BMI) -146,Weight,number,0,0, -147,EncounterDate,number,0,0,Encounter Date -148,Provider,char,0,0,Provider -149,Insurance,char,0,0,Insurance -150,Client,char,0,0,Client -151,InvestigatingAgency,char,0,0,Investigating Agency -152,County,char,0,0,County -153,Parties,char,0,0,Parties -154,TransactionNo,number,0,0,Transaction Number -155,Date,number,0,0,Date -156,TimeofLoss,number,0,0,Time of Loss -157,ClaimNo,number,0,0,Claim Number -158,State,char,0,0,State -159,DateOfLoss,number,0,0,Date Of Loss -160,DriverLicense,number,0,0,Driver License No -161,Street,char,0,0,Street -162,Division,char,0,0,Division -163,Adjuster,char,0,0,Adjuster -164,ReportNumber,number,0,0,Report Number -165,ReportType,char,0,0,Report Type -166,Tag,char,0,0,Tag -167,City,char,0,0,City -1,InvestmentName,char,1,0, -2,InvestorName,char,1,0, -3,CapBalance,number,1,0, -4,FundAsOfDate,number,1,0, -5,IssuedDate,number,0,0,Issued Date -6,IssuedAt,char,0,0,Issued At -7,Master,char,0,0,Master/Captain -8,Shipper,char,0,0,Shipper -9,BLNo,char,0,0,Bill of Lading number -10,Flag,char,0,0,Flag -11,Consignee,char,0,0,Consignee -12,VoyageNo,number,0,0,Voyage No -13,NotifyParty,char,0,0,Notify Party -14,OnboardTanker,char,0,0,OnboardTanker -15,LoadingPort,char,0,0,Loading Port -16,DeliveryPort,char,0,0,Delivery Port -17,Adjuster,char,0,0,Adjuster -18,WrittenBy,char,0,0,Written By -19,ClaimNo,number,0,0,Claim No -20,GrandTotal,number,0,0,Grand Total -21,VehicleOut,char,0,0,Vehicle Out -22,TypeOfLoss,char,0,0,Type of Loss -23,Insured,char,0,0,Insured -24,PolicyNo,number,0,0,Policy no -25,Fax,number,0,0,Fax -26,WorkfileID,number,0,0,Workfile ID -27,Telephone,number,0,0,Telephone -28,DaysToRepair,number,0,0,Days to Repair -29,CUSTOMERPAY,number,0,0,CUSTOMER PAY -30,Subtotal,number,0,0,Subtotal -31,INSURANCEPAY,number,0,0,INSURANCE PAY -32,Condition,char,0,0,Condition -33,JobNo,number,0,0,Jon no -34,ProductionDate,number,0,0,Production Date -35,State,char,0,0,State -36,FederalID,number,0,0,Federal ID -37,MileageOut,char,0,0,Mileage Out -38,RONumber,number,0,0,RO Number -39,Deductible,number,0,0,Deductible -40,License,char,0,0,License -41,VIN,number,0,0,VIN -42,PointOfImpact,char,0,0,Point of Impact -43,DateOfLoss,number,0,0,Date of Loss -44,InspectionLocation,char,0,0,Inspection Location -45,Owner,char,0,0,Owner diff --git a/BACA/configuration/DB2/CSVFiles/key_class_dc.csv b/BACA/configuration/DB2/CSVFiles/key_class_dc.csv deleted file mode 100644 index bd42f1ae..00000000 --- a/BACA/configuration/DB2/CSVFiles/key_class_dc.csv +++ /dev/null @@ -1,201 +0,0 @@ -46,3 -47,3 -48,3 -49,4 -50,4 -51,4 -52,4 -53,4 -54,4 -55,4 -56,4 -57,4 -58,4 -59,4 -60,4 -61,4 -62,4 -63,4 -64,4 -65,4 -66,4 -67,4 -68,4 -69,4 -70,4 -71,4 -72,4 -73,4 -74,4 -75,4 -76,4 -77,4 -78,4 -79,4 -80,4 -81,4 -82,4 -83,4 -84,4 -85,4 -86,4 -87,4 -88,4 -89,4 -90,4 -91,4 -92,4 -93,4 -94,4 -95,4 -96,4 -97,4 -98,4 -99,4 -100,4 -101,4 -168,9 -169,9 -170,9 -171,9 -172,9 -173,9 -174,9 -175,9 -176,9 -177,9 -178,9 -179,9 -180,9 -181,9 -182,9 -183,9 -184,9 -185,9 -186,9 -187,9 -188,9 -189,9 -190,9 -191,9 -192,9 -193,9 -194,9 -195,9 -196,9 -197,9 -198,9 -199,9 -200,9 -201,9 -102,5 
-103,5 -104,5 -105,5 -106,5 -107,5 -108,5 -109,5 -110,5 -111,5 -112,5 -113,5 -114,5 -115,5 -116,5 -1,1 -2,1 -3,1 -4,1 -5,2 -6,2 -7,2 -8,2 -9,2 -10,2 -11,2 -12,2 -13,2 -14,2 -15,2 -16,2 -17,3 -18,3 -19,3 -20,3 -21,3 -22,3 -23,3 -24,3 -25,3 -26,3 -27,3 -28,3 -29,3 -30,3 -31,3 -32,3 -33,3 -34,3 -35,3 -36,3 -37,3 -38,3 -39,3 -40,3 -41,3 -42,3 -43,3 -44,3 -45,3 -117,6 -118,6 -119,6 -120,6 -121,6 -122,6 -123,6 -124,6 -125,6 -126,6 -127,6 -128,6 -129,6 -130,6 -131,6 -132,6 -133,6 -134,6 -135,6 -136,6 -137,6 -138,6 -139,6 -140,6 -141,6 -142,6 -143,6 -144,6 -145,6 -146,6 -147,6 -148,6 -149,6 -150,7 -151,7 -152,7 -153,7 -154,7 -155,7 -156,7 -157,7 -158,7 -159,7 -160,7 -161,7 -162,7 -163,7 -164,7 -165,7 -166,7 -167,7 diff --git a/BACA/configuration/DB2/CreateBaseDB.bat b/BACA/configuration/DB2/CreateBaseDB.bat deleted file mode 100755 index 95a53fce..00000000 --- a/BACA/configuration/DB2/CreateBaseDB.bat +++ /dev/null @@ -1,32 +0,0 @@ -@echo off -SETLOCAL - -set /p base_db_name= Enter the name of the Base BACA database. If nothing is entered, we will use the following default value 'CABASEDB': -IF NOT DEFINED base_db_name SET "base_db_name=CABASEDB" - -set /p base_db_user= Enter the name of the database user for the Base BACA database. If nothing is entered, we will use the following default value 'CABASEUSER' : -IF NOT DEFINED base_db_user SET "base_db_user=CABASEUSER" - -set /P c=Are you sure you want to continue[Y/N]? -if /I "%c%" EQU "Y" goto :DOCREATE -if /I "%c%" EQU "N" goto :DOEXIT - -:DOCREATE - echo "Running the db script" - db2 CREATE DATABASE %base_db_name% AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768 - db2 CONNECT TO %base_db_name% - db2 GRANT CONNECT,DATAACCESS ON DATABASE TO USER %base_db_user% - db2 GRANT USE OF TABLESPACE USERSPACE1 TO USER %base_db_user% - db2 CONNECT RESET - db2 CONNECT TO %base_db_name% - db2 SET SCHEMA %base_db_user% - db2 CREATE TABLE TENANTINFO (tenantid varchar(128) NOT NULL, ontology varchar(128) not null,tenanttype smallint not null with default, rdbmsengine varchar(128) not null, bacaversion varchar(1024) not null, rdbmsconnection varchar(1024) for bit data default null,mongoconnection varchar(1024) for bit data default null,mongoadminconnection varchar(1024) for bit data default null,CONSTRAINT tenantinfo_pkey PRIMARY KEY (tenantid, ontology)) - db2 CONNECT RESET - goto END -:DOEXIT - echo "Exited on user input" - goto END -:END - echo "END" - -ENDLOCAL \ No newline at end of file diff --git a/BACA/configuration/DB2/DeleteOntology.sh b/BACA/configuration/DB2/DeleteOntology.sh deleted file mode 100755 index b9acc0f6..00000000 --- a/BACA/configuration/DB2/DeleteOntology.sh +++ /dev/null @@ -1,70 +0,0 @@ -#!/bin/bash -. ./ScriptFunctions.sh - -echo -e "\n-- This script will delete an existing ontology from a tenant" -echo - -echo "Enter the tenant ID for the existing tenant: (eg. t4900)" -while [[ -z "$tenant_id" || $tenant_id == '' ]] -do - echo "Please enter a valid value for the tenant ID:" - read tenant_id -done - -echo -e "\nEnter the tenant ontology to delete: " -read tenant_ontology -if [[ -z "$tenant_ontology" ]]; then - tenant_ontology=$default_ontology -fi - - -default_basedb='BASECA' -if [[ -z "$base_db_name" ]]; then - echo -e "\nEnter the name of the Base BACA database with the TENANTINFO Table. 
If nothing is entered, we will use the following default value : " $default_basedb - read base_db_name - if [[ -z "$base_db_name" ]]; then - base_db_name=$default_basedb - fi -fi - -default_basedb_user='CABASEUSER' -if [[ -z "$base_db_user" ]]; then - echo -e "\nEnter the name of the database user for the Base BACA database. If nothing is entered, we will use the following default value : " $default_basedb_user - read base_db_user - if [[ -z "$base_db_user" ]]; then - base_db_user=$default_basedb_user - fi -fi - -db2 "connect to $base_db_name" -db2 "set schema $base_db_user" -resp=$(db2 -x "select dbname,dbuser from tenantinfo where tenantid = '$tenant_id'") -tenant_db=$(echo $resp | awk '{print $1}') -tenant_user=$(echo $resp | awk '{print $2}') - -echo -echo "-- Please confirm these are the desired settings:" -echo " - tenant ID: $tenant_id" -echo " - ontology: $tenant_ontology" -echo " - tenant database name: $tenant_db" -echo " - base database: $base_db_name" -askForConfirmation - -db2 "connect to $tenant_db" -db2 "set schema $tenant_ontology" -db2 -stvf sql/DropBacaTables.sql - -resp=$(db2 -x "drop schema $tenant_ontology restrict") -echo $resp -rc=$(echo $resp | awk '{print $1}') -if [[ "$rc" == "DB20000I" ]] -then - echo ontology delete - db2 connect reset - db2 "connect to $base_db_name" - db2 "set schema $base_db_user" - db2 "delete from tenantinfo where tenantid='$tenant_id' and ontology='$tenant_ontology'" -else - echo ontology delete failed: $rc -fi - diff --git a/BACA/configuration/DB2/DeleteTenant.sh b/BACA/configuration/DB2/DeleteTenant.sh deleted file mode 100755 index b5f93a40..00000000 --- a/BACA/configuration/DB2/DeleteTenant.sh +++ /dev/null @@ -1,70 +0,0 @@ -#!/bin/bash -. ./ScriptFunctions.sh - -echo -e "\n-- This script will delete an existing BACA tenant" -echo - -echo "Enter the tenant ID for the existing tenant: (eg. t4900)" -while [[ -z "$tenant_id" || $tenant_id == '' ]] -do - echo "Please enter a valid value for the tenant ID:" - read tenant_id -done - -default_basedb='BASECA' -if [[ -z "$base_db_name" ]]; then - echo -e "\nEnter the name of the Base BACA database with the TENANTINFO Table. If nothing is entered, we will use the following default value : " $default_basedb - read base_db_name - if [[ -z "$base_db_name" ]]; then - base_db_name=$default_basedb - fi -fi - -default_basedb_user='CABASEUSER' -if [[ -z "$base_db_user" ]]; then - echo -e "\nEnter the name of the database user for the Base BACA database. 
If nothing is entered, we will use the following default value : " $default_basedb_user - read base_db_user - if [[ -z "$base_db_user" ]]; then - base_db_user=$default_basedb_user - fi -fi - - - -db2 "connect to $base_db_name" -db2 "set schema $base_db_user" -resp=$(db2 -x "select dbname,dbuser from tenantinfo where tenantid = '$tenant_id'") -tenant_db=$(echo $resp | awk '{print $1}') -tenant_user=$(echo $resp | awk '{print $2}') - -echo -echo "-- Please confirm these are the desired settings:" -echo " - tenant ID: $tenant_id" -echo " - tenant database name: $tenant_db" -echo " - base database: $base_db_name" -askForConfirmation - -db2 "connect to $tenant_db" -resp=$(db2 -x "QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS") -rc=$(echo $resp | awk '{print $1}') - -if [[ "$rc" == "DB20000I" || "$rc" == "SQL1371W" ]] -then - echo "DB Quiesced" - db2 "unquiesce database" - db2 "connect reset" - resp=$(db2 -x "drop db $tenant_db") - rc=$(echo $resp | awk '{print $1}') - if [[ "$rc" == "DB20000I" ]] - then - echo "DB Dropped" - db2 "connect to $base_db_name" - db2 "set schema $base_db_user" - db2 "delete from tenantinfo where tenantid='$tenant_id'" - else - echo "Failed to drop the database: " $rc - fi -else - echo "Quiesce failed: " $rc -fi - diff --git a/BACA/configuration/DB2/Readme_windows.txt b/BACA/configuration/DB2/Readme_windows.txt deleted file mode 100755 index b98e4d97..00000000 --- a/BACA/configuration/DB2/Readme_windows.txt +++ /dev/null @@ -1,11 +0,0 @@ -Prerequisite : DB2 v11 fixpack 2 or higher -Intructions to create BACA databases. Baca uses two database one is called -base database and the other is called tenant database. -1. Before running the scripts file you need to create two windows non-admin - users who are also db2 regular users.These users are used to connect - databases.The db scripts are initilized with cabaseuser and tenantuser. -2. Open db2 administrator command window to run the script files. -3. Run the CreateBaseDB.bat to create the base database. -3. Run AddTenant.bat to add a new tenant db and ontology. - You can aslo run this script file to add a new ontology - for existing tenant database. \ No newline at end of file diff --git a/BACA/configuration/DB2/ScriptFunctions.sh b/BACA/configuration/DB2/ScriptFunctions.sh deleted file mode 100755 index 4d40ce59..00000000 --- a/BACA/configuration/DB2/ScriptFunctions.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env bash - -function askForConfirmation(){ - while [[ $confirmation != "y" && $confirmation != "n" && $confirmation != "yes" && $confirmation != "no" ]] # While confirmation is not y or n... - do - echo - echo -e "Would you like to continue (Y/N):" - read confirmation - confirmation=$(echo "$confirmation" | tr '[:upper:]' '[:lower:]') - done - - if [[ $confirmation == "n" || $confirmation == "no" ]] - then - exit - fi -} \ No newline at end of file diff --git a/BACA/configuration/DB2/UpgradeBaseDB.sh b/BACA/configuration/DB2/UpgradeBaseDB.sh deleted file mode 100755 index 8409eb48..00000000 --- a/BACA/configuration/DB2/UpgradeBaseDB.sh +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env bash -. ./ScriptFunctions.sh - -INPUT_PROPS_FILENAME="./common_for_DB2_Upgrade.sh" - -if [ -f $INPUT_PROPS_FILENAME ]; then - echo "Found a $INPUT_PROPS_FILENAME. Reading in variables from that script." - . 
$INPUT_PROPS_FILENAME -fi - -echo -e "\n-- This script will upgrade base DB" -echo - -while [[ $base_db_name == '' ]] -do - echo "Please enter a valid value for the base database name :" - read base_db_name - while [ ${#base_db_name} -gt 8 ]; - do - echo "Please enter a valid value for the base database name :" - read base_db_name; - echo ${#base_db_name}; - done -done - -while [[ -z "$base_db_user" || $base_db_user == "" ]] -do - echo "Please enter a valid value for the base database user name :" - read base_db_user -done - -echo -echo "-- Please confirm these are the desired settings:" -echo " - Base database name: $base_db_name" -echo " - Base database user name: $base_db_user" -askForConfirmation - -if [[ $SaaS != "true" || -z $SaaS ]]; then - cp sql/UpgradeBaseDB_to_1.1.sql.template sql/UpgradeBaseDB_to_1.1.sql - sed -i s/\$base_db_name/"$base_db_name"/ sql/UpgradeBaseDB_to_1.1.sql - sed -i s/\$base_db_user/"$base_db_user"/ sql/UpgradeBaseDB_to_1.1.sql - echo - echo "Running upgrade script: sql/UpgradeBaseDB_to_1.1.sql" - db2 -stvf sql/UpgradeBaseDB_to_1.1.sql -else - echo "-- Skipping UpgradeBaseDB_to_1.1.sql" -fi - -cp sql/UpgradeBaseDB_1.1_to_1.2.sql.template sql/UpgradeBaseDB_1.1_to_1.2.sql -sed -i s/\$base_db_name/"$base_db_name"/ sql/UpgradeBaseDB_1.1_to_1.2.sql -sed -i s/\$base_db_user/"$base_db_user"/ sql/UpgradeBaseDB_1.1_to_1.2.sql -echo -echo "Running upgrade script: sql/UpgradeBaseDB_1.1_to_1.2.sql" -db2 -stvf sql/UpgradeBaseDB_1.1_to_1.2.sql \ No newline at end of file diff --git a/BACA/configuration/DB2/common_for_DB2.sh.sample b/BACA/configuration/DB2/common_for_DB2.sh.sample deleted file mode 100644 index 87b77b8d..00000000 --- a/BACA/configuration/DB2/common_for_DB2.sh.sample +++ /dev/null @@ -1,51 +0,0 @@ -# Sample script for running the DB2 scripts non-interactively by providing the needed env vars -# To use: Make a copy and name it "common_for_DB2.sh", update the needed variables. - - -# --- For Base BACA DB: -# update these variables for the BACA Base database -base_db_name=CABASE3 -base_db_user=baseuser3 - - -# To skip creating base databse user and skip asking for pwd, use these vars below. -# Prereq is that the DB2 user (from var "base_db_user") must already be created. -base_valid_user=1 -base_user_already_defined=1 -base_pwdconfirmed=1 - -# --- For adding tenant: -# update these variables -tenant_type=0 # Allowed values: 0 for Enterprise, 1 for Trial, 2 for Internal -baca_database_server_ip=10.126.18.120 -baca_database_port=50000 -tenant_id=t4910 -tenant_db_name=t4910 -tenant_db_user=t4910user - -# To skip creating tenant database user and skip asking for pwd, use these vars below. -# Prereq is that the DB2 user (from var "tenant_db_user") must already be created. 
-user_already_defined=1 -pwdconfirmed=1 - -# update these variables -tenant_db_pwd=xyz123ee -tenant_db_pwd_b64_encoded=1 # set to 1 if "tenant_db_pwd" is base64 encoded -tenant_ontology=ONT1 - -tenant_company=IBM -tenant_first_name=John -tenant_last_name=Smith -tenant_email=johnsmith@ibm.com -tenant_user_name=johnsmith - -# --- For adding ontology to existing tenant -# uncomment this below to add ontology, and comment out "tenant_ontology" line above in this file -#use_existing_tenant=1 -#tenant_ontology=ONT2 - -# skip confirmation prompts: -confirmation=y - -#DB2 ssl Yes/No -ssl=No \ No newline at end of file diff --git a/BACA/configuration/DB2/common_for_DB2_Tenant_Upgrade.sh.sample b/BACA/configuration/DB2/common_for_DB2_Tenant_Upgrade.sh.sample deleted file mode 100644 index a1e773e6..00000000 --- a/BACA/configuration/DB2/common_for_DB2_Tenant_Upgrade.sh.sample +++ /dev/null @@ -1,14 +0,0 @@ -# Sample script for running the DB2 scripts non-interactively by providing the needed env vars -# To use: Make a copy and name it "common_for_DB2.sh", update the needed variables. - -# --- For adding tenant: - -tenant_db_name= - -tenant_ontology= - -tenant_db_user= - -# skip confirmation prompts: -confirmation=y - diff --git a/BACA/configuration/DB2/common_for_DB2_Upgrade.sh.sample b/BACA/configuration/DB2/common_for_DB2_Upgrade.sh.sample deleted file mode 100644 index 1c7cdbed..00000000 --- a/BACA/configuration/DB2/common_for_DB2_Upgrade.sh.sample +++ /dev/null @@ -1,8 +0,0 @@ -# Sample script for running the DB2 scripts non-interactively by providing the needed env vars -# To use: Make a copy and name it "common_for_DB2.sh", update the needed variables. - -# --- For Base BACA DB: -# update these variables for the BACA Base database -base_db_name= -base_db_user= - diff --git a/BACA/configuration/DB2/sql/CreateBacaSchema.sql.template b/BACA/configuration/DB2/sql/CreateBacaSchema.sql.template deleted file mode 100644 index 2968a7ac..00000000 --- a/BACA/configuration/DB2/sql/CreateBacaSchema.sql.template +++ /dev/null @@ -1,6 +0,0 @@ -CONNECT TO $tenant_db_name ; - -CREATE SCHEMA $tenant_ontology ; - -SET SCHEMA $tenant_ontology ; - diff --git a/BACA/configuration/DB2/sql/CreateBaseDB.sql.template b/BACA/configuration/DB2/sql/CreateBaseDB.sql.template deleted file mode 100644 index f316dd92..00000000 --- a/BACA/configuration/DB2/sql/CreateBaseDB.sql.template +++ /dev/null @@ -1,10 +0,0 @@ -CREATE DATABASE $base_db_name AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768; - -CONNECT TO $base_db_name ; - -GRANT CONNECT,DATAACCESS ON DATABASE TO USER $base_db_user ; - -GRANT USE OF TABLESPACE USERSPACE1 TO USER $base_db_user ; - -CONNECT RESET; - diff --git a/BACA/configuration/DB2/sql/CreateBaseTable.sql.template b/BACA/configuration/DB2/sql/CreateBaseTable.sql.template deleted file mode 100644 index 08abaae0..00000000 --- a/BACA/configuration/DB2/sql/CreateBaseTable.sql.template +++ /dev/null @@ -1,26 +0,0 @@ -CONNECT TO $base_db_name ; - -SET SCHEMA $base_db_user ; - ---Following are added to handle seemless updates in feature ---Going forward bacaversion is base db schema version ---tenantdbversion is tenant and ontology schema version - -CREATE TABLE TENANTINFO - (tenantid varchar(128) NOT NULL, - ontology varchar(128) not null, - tenanttype smallint not null with default, - dailylimit smallint not null with default 0, - rdbmsengine varchar(128) not null, - dbname varchar(255) not null, - dbuser varchar(255) not null, - bacaversion varchar(1024) not 
null, - rdbmsconnection varchar(1024) for bit data default null, - mongoconnection varchar(1024) for bit data default null, - mongoadminconnection varchar(1024) for bit data default null, - featureflags bigint not null with default 0, - tenantdbversion varchar(255), - CONSTRAINT tenantinfo_pkey PRIMARY KEY (tenantid, ontology) - ); - -CONNECT RESET; diff --git a/BACA/configuration/DB2/sql/CreateDB.sql.template b/BACA/configuration/DB2/sql/CreateDB.sql.template deleted file mode 100644 index cc8e1636..00000000 --- a/BACA/configuration/DB2/sql/CreateDB.sql.template +++ /dev/null @@ -1,9 +0,0 @@ -CREATE DATABASE $tenant_db_name AUTOMATIC STORAGE YES USING CODESET UTF-8 TERRITORY DEFAULT COLLATE USING SYSTEM PAGESIZE 32768; - -CONNECT TO $tenant_db_name ; - -GRANT CONNECT,DATAACCESS ON DATABASE TO USER $tenant_db_user ; - -GRANT USE OF TABLESPACE USERSPACE1 TO USER $tenant_db_user ; - -CONNECT RESET; \ No newline at end of file diff --git a/BACA/configuration/DB2/sql/DropBacaTables.sql b/BACA/configuration/DB2/sql/DropBacaTables.sql deleted file mode 100644 index 1eb4506e..00000000 --- a/BACA/configuration/DB2/sql/DropBacaTables.sql +++ /dev/null @@ -1,45 +0,0 @@ -drop VIEW audit_sys_report; -drop table audit_integration_activity; -drop table audit_system_activity; -drop table audit_api_activity; -drop table audit_user_activity; -drop table audit_processed_files; -drop table audit_login_activity; -drop table audit_ontology; -drop table db_restore; -drop table error_log; -drop table processed_file; -drop table key_spacing; -drop table db_backup; -drop table fonts_transid; -drop table fonts_dc; -drop table fonts; -drop table smartpages_options; -drop table api_integrations_objectsstore; -drop table import_ontology; -drop table integration_dc; -drop table integration; -drop table login_detail; -drop table user_detail; -drop table pattern_kc; -drop table pattern; -drop table heading_alias_dc; -drop table heading_alias_h; -drop table heading_dc; -drop table heading_alias; -drop table heading; -drop table cword_dc; -drop table key_alias_kc; -drop table key_alias_dc; -drop table key_class_dc; -drop table doc_alias_dc; -drop table key_alias; -drop table cword; -drop table key_class; -drop table doc_alias; -drop table doc_class; -drop table ontology; -drop table classifier; -drop table training_log; -drop table document; -drop sequence MINOR_VER_SEQ; \ No newline at end of file diff --git a/BACA/configuration/DB2/sql/InsertTenant.sql.template b/BACA/configuration/DB2/sql/InsertTenant.sql.template deleted file mode 100644 index ea921ff8..00000000 --- a/BACA/configuration/DB2/sql/InsertTenant.sql.template +++ /dev/null @@ -1,4 +0,0 @@ -connect to $base_db_name ; -set schema $base_db_user ; -insert into TENANTINFO (tenantid,ontology,tenanttype,dailylimit,rdbmsengine,bacaversion,rdbmsconnection,dbname,dbuser,tenantdbversion) values ( '$tenant_id', '$tenant_ontology', $tenant_type, $daily_limit, 'DB2', '1.2', encrypt('$rdbmsconnection','AES_KEY'),'$tenant_db_name','$tenant_db_user','1.2') ; -connect reset ; diff --git a/BACA/configuration/DB2/sql/InsertUser.sql.template b/BACA/configuration/DB2/sql/InsertUser.sql.template deleted file mode 100644 index bcc368d7..00000000 --- a/BACA/configuration/DB2/sql/InsertUser.sql.template +++ /dev/null @@ -1,5 +0,0 @@ -connect to $tenant_db_name ; -set schema $tenant_ontology ; -insert into user_detail (email,first_name,last_name,user_name,company,expire) values ('$tenant_email','$tenant_first_name','$tenant_last_name','$tenant_user_name','$tenant_company',10080) ; 
-insert into login_detail (user_id,role,status,logged_in) select user_id,'Admin','1',0 from user_detail where email='$tenant_email' ; -connect reset ; \ No newline at end of file diff --git a/BACA/configuration/DB2/sql/LoadData.sql.template b/BACA/configuration/DB2/sql/LoadData.sql.template deleted file mode 100644 index 24c2657e..00000000 --- a/BACA/configuration/DB2/sql/LoadData.sql.template +++ /dev/null @@ -1,37 +0,0 @@ -CONNECT TO $tenant_db_name ; -SET SCHEMA $tenant_ontology ; - -load from ./CSVFiles/doc_class.csv of del modified by identityoverride insert into doc_class ; -load from ./CSVFiles/key_class.csv of del modified by identityoverride insert into key_class ; -load from ./CSVFiles/doc_alias.csv of del modified by identityoverride insert into doc_alias ; -load from ./CSVFiles/key_alias.csv of del modified by identityoverride insert into key_alias ; -load from ./CSVFiles/cword.csv of del modified by identityoverride insert into cword ; -load from ./CSVFiles/heading.csv of del modified by identityoverride insert into heading ; -load from ./CSVFiles/heading_alias.csv of del modified by identityoverride insert into heading_alias ; -load from ./CSVFiles/key_class_dc.csv of del modified by identityoverride insert into key_class_dc ; -load from ./CSVFiles/doc_alias_dc.csv of del modified by identityoverride insert into doc_alias_dc ; -load from ./CSVFiles/key_alias_dc.csv of del modified by identityoverride insert into key_alias_dc ; -load from ./CSVFiles/key_alias_kc.csv of del modified by identityoverride insert into key_alias_kc ; -load from ./CSVFiles/heading_dc.csv of del modified by identityoverride insert into heading_dc ; -load from ./CSVFiles/heading_alias_dc.csv of del modified by identityoverride insert into heading_alias_dc ; -load from ./CSVFiles/heading_alias_h.csv of del modified by identityoverride insert into heading_alias_h ; -load from ./CSVFiles/cword_dc.csv of del modified by identityoverride insert into cword_dc ; - -set integrity for key_class_dc immediate checked ; -set integrity for doc_alias_dc immediate checked ; -set integrity for key_alias_dc immediate checked ; -set integrity for key_alias_kc immediate checked ; -set integrity for heading_dc immediate checked ; -set integrity for heading_alias_dc immediate checked ; -set integrity for heading_alias_h immediate checked ; -set integrity for cword_dc immediate checked ; - -alter table doc_class alter column doc_class_id restart with 10 ; -alter table doc_alias alter column doc_alias_id restart with 11 ; -alter table key_class alter column key_class_id restart with 202 ; -alter table key_alias alter column key_alias_id restart with 239 ; -alter table cword alter column cword_id restart with 76 ; -alter table heading alter column heading_id restart with 3 ; -alter table heading_alias alter column heading_alias_id restart with 3 ; - -CONNECT RESET; diff --git a/BACA/configuration/DB2/sql/TablePermissions.sql.template b/BACA/configuration/DB2/sql/TablePermissions.sql.template deleted file mode 100644 index d8090bba..00000000 --- a/BACA/configuration/DB2/sql/TablePermissions.sql.template +++ /dev/null @@ -1,20 +0,0 @@ -CONNECT TO $tenant_db_name ; - -GRANT ALTER ON TABLE $tenant_ontology.DOC_CLASS TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.DOC_ALIAS TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.KEY_CLASS TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.KEY_ALIAS TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.CWORD TO USER $tenant_db_user 
; -GRANT ALTER ON TABLE $tenant_ontology.HEADING TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.HEADING_ALIAS TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.USER_DETAIL TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.INTEGRATION TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.IMPORT_ONTOLOGY TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.API_INTEGRATIONS_OBJECTSSTORE TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.SMARTPAGES_OPTIONS TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.FONTS TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.FONTS_TRANSID TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.DB_BACKUP TO USER $tenant_db_user ; -GRANT ALTER ON TABLE $tenant_ontology.PATTERN TO USER $tenant_db_user ; - -CONNECT RESET; \ No newline at end of file diff --git a/BACA/configuration/DB2/sql/UpgradeBaseDB_1.1_to_1.2.sql.template b/BACA/configuration/DB2/sql/UpgradeBaseDB_1.1_to_1.2.sql.template deleted file mode 100644 index c5a5fec8..00000000 --- a/BACA/configuration/DB2/sql/UpgradeBaseDB_1.1_to_1.2.sql.template +++ /dev/null @@ -1,9 +0,0 @@ ---base DB changes -connect to $base_db_name ; -set schema $base_db_user ; - -alter table tenantinfo add column featureflags bigint not null with default 0; -alter table tenantinfo add column tenantdbversion varchar(255); -reorg table tenantinfo; - -connect reset; \ No newline at end of file diff --git a/BACA/configuration/DB2/sql/UpgradeBaseDB_to_1.1.sql.template b/BACA/configuration/DB2/sql/UpgradeBaseDB_to_1.1.sql.template deleted file mode 100644 index 771f1576..00000000 --- a/BACA/configuration/DB2/sql/UpgradeBaseDB_to_1.1.sql.template +++ /dev/null @@ -1,10 +0,0 @@ ---base DB changes -connect to $base_db_name ; -set schema $base_db_user ; - -alter table tenantinfo add column dailylimit bigint not null with default 0; -alter table tenantinfo add column dbname varchar(255); -alter table tenantinfo add column dbuser varchar(255); -reorg table tenantinfo; - -connect reset; \ No newline at end of file diff --git a/BACA/configuration/DB2/sql/UpgradeTenantDB_to_1.1.sql.template b/BACA/configuration/DB2/sql/UpgradeTenantDB_to_1.1.sql.template deleted file mode 100644 index 8921a752..00000000 --- a/BACA/configuration/DB2/sql/UpgradeTenantDB_to_1.1.sql.template +++ /dev/null @@ -1,7 +0,0 @@ -connect to $tenant_db_name ; -set schema $tenant_ontology ; - -alter table integration alter column model_id set data type varchar(1024); -reorg table integration; - -connect reset ; \ No newline at end of file diff --git a/BACA/configuration/README.md b/BACA/configuration/README.md deleted file mode 100644 index 3fb68728..00000000 --- a/BACA/configuration/README.md +++ /dev/null @@ -1,4 +0,0 @@ -# Preparing your environment for Content Analyzer - -Before you install the charts, perform the steps described on the following page of the IBM Content Analyzer Knowledge Center. 

-https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/topics/tsk_preparing_baca_deploy.html diff --git a/BACA/configuration/baca-netpol.yaml b/BACA/configuration/baca-netpol.yaml deleted file mode 100644 index fa676f1e..00000000 --- a/BACA/configuration/baca-netpol.yaml +++ /dev/null @@ -1,11 +0,0 @@ -kind: NetworkPolicy -apiVersion: networking.k8s.io/v1 -metadata: - namespace: $KUBE_NAME_SPACE - name: baca-netpol -spec: - ingress: - - {} - podSelector: {} - policyTypes: - - Ingress \ No newline at end of file diff --git a/BACA/configuration/baca-psp.yaml b/BACA/configuration/baca-psp.yaml deleted file mode 100644 index 712b3327..00000000 --- a/BACA/configuration/baca-psp.yaml +++ /dev/null @@ -1,65 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: PodSecurityPolicy -metadata: - annotations: - kubernetes.io/description: "This policy allows pods to run with - any UID and GID, but preventing access to the host." - name: baca-anyuid-psp -spec: - allowPrivilegeEscalation: false - fsGroup: - ranges: - - max: 65535 - min: 1 - rule: MustRunAs - #rule: RunAsAny - requiredDropCapabilities: - - MKNOD - - SETFCAP - - NET_RAW - - NET_BIND_SERVICE - - KILL - allowedCapabilities: - - SETPCAP - - AUDIT_WRITE - - CHOWN - - FOWNER - - FSETID - - SETUID - - SETGID - - SYS_CHROOT - - DAC_OVERRIDE - runAsUser: - rule: MustRunAsNonRoot - seLinux: - rule: RunAsAny - supplementalGroups: - ranges: - - max: 65535 - min: 1 - rule: MustRunAs - #rule: RunAsAny - volumes: - - configMap - - emptyDir - - projected - - secret - - downwardAPI - - persistentVolumeClaim - forbiddenSysctls: - - '*' ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - annotations: - name: baca-anyuid-clusterrole -rules: -- apiGroups: - - extensions - resourceNames: - - baca-anyuid-psp - resources: - - podsecuritypolicies - verbs: - - use diff --git a/BACA/configuration/bashfunctions.sh b/BACA/configuration/bashfunctions.sh deleted file mode 100755 index ebdf8714..00000000 --- a/BACA/configuration/bashfunctions.sh +++ /dev/null @@ -1,407 +0,0 @@ -#!/usr/bin/env bash - -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - -# Function to request user for their domain name - -export ICP_clustername=$(echo $DOCKER_REG_FOR_SERVICES | awk -F'[.]' '{print $1}') -export ICP_account_id="id-"$ICP_clustername"-account" - -# Login to ICP, to ensure bx pr and kubectl commands work in later functions -function loginToCluster() { - if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - echo - #echo "\x1B[1;31m Logging into ICP using: bx pr login -a https://$MASTERIP:8443 --skip-ssl-validation -u admin - # -p admin -c id-mycluster-account. \x1B[0m" - export ICP_USER_PASSWORD_DECODE=$(echo $ICP_USER_PASSWORD | base64 --decode) - #ICP 3.10 - cloudctl login -a https://$MASTERIP:8443 --skip-ssl-validation -u $ICP_USER -p $ICP_USER_PASSWORD_DECODE -c $ICP_account_id -n default - fi - if [[ $OCP_VERSION == "3.11" ]]; then - echo - export OCP_USER_PASSWORD_DECODE=$(echo $OCP_USER_PASSWORD | base64 --decode) - #echo "\x1B[1;31m Logging into OCP using: oc login https://$MASTERIP:8443 --insecure-skip-tls-verify=true -u $OCP_USER - # -p $OCP_USER_PASSWORD_DECODE. 
\x1B[0m" - #OCP 3.11 - oc login https://$MASTERIP:8443 --insecure-skip-tls-verify=true -u $OCP_USER -p $OCP_USER_PASSWORD_DECODE - fi -} - -# ------------------- -# HELM Client setup -# ------------------- -function downloadHelmClient() { - - - if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - echo - echo "Downloading Helm 2.9.1 from ICp" - curl -kLo helm-linux-amd64-v2.9.1.tar.gz https://$MASTERIP:8443/api/cli/helm-linux-amd64.tar.gz - echo - echo "Moving helm to /usr/local/bin and chmod 755 helm" - tar -xvf helm-linux-amd64-v2.9.1.tar.gz - chmod 755 ./linux-amd64/helm && mv ./linux-amd64/helm /usr/local/bin - rm -rf linux-amd64 - # testing Helm - echo Testing Helm CLI using: helm version --tls - helm version --tls - fi - - if [[ $OCP_VERSION == "3.11" ]]; then - echo "Downloading Helm 2.11.0 from Github" - curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz | tar xz - echo - echo "Moving helm to /usr/local/bin and chmod 755 helm" - - chmod 755 ./linux-amd64/helm && mv ./linux-amd64/helm /usr/local/bin - rm -rf linux-amd64 - - fi -} - - -function helmSetup(){ - - if [[ $ICP_VERSION == "3.1.2" ]]; then - # ICP specific setup - echo - echo Initializing Helm CLI using: helm init --client-only - helm init --client-only - echo - echo Creating clusterrolebinding tiller-cluster-admin .... - kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default - fi - - if [[ $OCP_VERSION == "3.11" ]]; then - echo Creating clusterrolebinding tiller-cluster-admin .... - export TILLER_NAMESPACE=tiller - oc new-project $TILLER_NAMESPACE - oc project $TILLER_NAMESPACE - oc process -f /~https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="${TILLER_NAMESPACE}" -p HELM_VERSION=v2.11.0 | oc create -f - - oc rollout status deployment tiller - oc project $KUBE_NAME_SPACE - oc policy add-role-to-user $OCP_USER "system:serviceaccount:${TILLER_NAMESPACE}:tiller" - fi - -} - -function checkHelm(){ - - if [[ $ICP_VERSION == "3.1.2" ]]; then - MAX_ITERATIONS=120 - count=0 - while [[ $( kubectl get deployment tiller-deploy --namespace kube-system | sed -n '1!p' | awk '{print $5}' ) == 0 ]] - do - if [ "$count" -eq $MAX_ITERATIONS ]; then - echo "ERROR: Failed to find tiller-deploy after $MAX_ITERATIONS tries. Please check your cluster using kubectl get deployment tiller-deploy --namespace kube-system" - return 1 - fi - echo "Checking that helm tiller is deployed ......................" - sleep 10 - ((count++)) - done - echo "Helm deployed successfully ......................" - fi -} - - - -function getWorkerIPs() { - echo "inside getWorkerIPs" - if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - export ICP_USER_PASSWORD_DECODE=$(echo $ICP_USER_PASSWORD | base64 --decode) - echo "About to get all the worker IPs from $ICP_VERSION" - echo "login -a https://$MASTERIP:8443 --skip-ssl-validation -u $ICP_USER -p $ICP_USER_PASSWORD_DECODE -c $ICP_account_id" - cloudctl login -a https://$MASTERIP:8443 --skip-ssl-validation -u $ICP_USER -p $ICP_USER_PASSWORD_DECODE -c $ICP_account_id -n default - export WORKER_IPs=$(cloudctl cm workers --json | grep "publicIP" | awk '{print $2}' | cut -d ',' -f1 | tr -d '"') - if [ -z "$WORKER_IPs" ]; then - echo "Cannot find public IP for worker nodes. 
Will try to check for Private IP now" - export WORKER_IPs=$(cloudctl cm workers --json | grep "privateIP" | awk '{print $2}' | cut -d ',' -f1 | tr -d '"') - echo WORKER_IPs=$WORKER_IPs - if [[ -z "$WORKER_IPs" ]]; then exit 1; fi - fi - fi - if [[ $OCP_VERSION == "3.11" ]]; then - echo "About to get all the worker IPs from $OCP_VERSION" - loginToCluster - export WORKER_IPs=$(oc get nodes | grep compute | grep [^Not]Ready | awk '{print $1}' | cut -d ',' -f1 | tr -d '"') - echo WORKER_IPs=$WORKER_IPs - if [[ -z "$WORKER_IPs" ]]; then exit 1; fi - fi - -} -function getWorkerIPBasedOnLabel() { - echo "inside getWorkerIP1s. It will get the worker IPs based on label" - - loginToCluster - if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - export WORKER_IP1s=$(kubectl get nodes --show-labels |grep worker.*$KUBE_NAME_SPACE=baca | grep [^Not]Ready | awk {'print $1'}) - fi - if [[ $OCP_VERSION == "3.11" ]]; then - export WORKER_IP1s=$(kubectl get nodes --show-labels |grep compute=true |grep celery$KUBE_NAME_SPACE'='baca | grep [^Not]Ready | awk {'print $1'}) - fi - echo $WORKER_IP1s - if [[ -z "$WORKER_IP1s" ]]; then exit 1; fi - -} -function clearAllLabels(){ - echo "About to clear ALL label nodes with in $KUBE_NAME_SPACE" - getWorkerIPs - for i in $WORKER_IPs - do - echo "Clear out previous labeling" - kubectl label nodes $i {celery$KUBE_NAME_SPACE-,mongo$KUBE_NAME_SPACE-,mongo-admin$KUBE_NAME_SPACE-} - echo - done -} -#function labelNodes() { -# clearAllLabels -# echo "About to label ALL nodes with celery$KUBE_NAME_SPACE=baca." -# getWorkerIPs -# for i in $WORKER_IPs -# do -# echo "Label --overwrite $i with celery$KUBE_NAME_SPACE=baca" -# kubectl label nodes --overwrite $i {celery$KUBE_NAME_SPACE=baca,mongo$KUBE_NAME_SPACE=baca,mongo-admin$KUBE_NAME_SPACE=baca} -# done -#} - -function customLabelNodes() { - loginToCluster - clearAllLabels -# echo "Clear out previous labeling" -# kubectl label nodes $i {celery$KUBE_NAME_SPACE-,mongo$KUBE_NAME_SPACE-,mongo-admin$KUBE_NAME_SPACE-,postgres$KUBE_NAME_SPACE-} - - echo "About to label --overwrite $CA_WORKERS with celery$KUBE_NAME_SPACE=baca." - echo label nodes {$CA_WORKERS} celery$KUBE_NAME_SPACE=baca - for i in $(echo $CA_WORKERS | sed "s/,/ /g") - do - echo "Label $i with celery$KUBE_NAME_SPACE=baca" - kubectl label nodes --overwrite $i celery$KUBE_NAME_SPACE=baca - echo - done - echo - echo "About to label $MONGO_WORKERS with mongo$KUBE_NAME_SPACE=baca." - for i in $(echo $MONGO_WORKERS | sed "s/,/ /g") - do - echo "Label $i with mongo$KUBE_NAME_SPACE=baca" - kubectl label nodes --overwrite $i mongo$KUBE_NAME_SPACE=baca - done - echo - echo "About to label $MONGO_ADMIN_WORKERS with mongo-admin$KUBE_NAME_SPACE=baca." - for i in $(echo $MONGO_ADMIN_WORKERS | sed "s/,/ /g") - do - echo "Label $i with mongo-admin$KUBE_NAME_SPACE=baca" - kubectl label nodes --overwrite $i mongo-admin$KUBE_NAME_SPACE=baca - done - echo -} - - - -function getNFSServer() { - #Get a list of worker IPs - if [[ $PVCCHOICE == "1" ]]; then # This is the option 1 where the script will create everything for Internal usage. - getWorkerIPBasedOnLabel - #Create directories: - echo "Creating required directory for SP by ssh into $NFS_IP" - if [ -z "$SSH_USER" ]; then - export SSH_USER="root" - fi - - if [ "$SSH_USER" == "root" ]; then - export SUDO_CMD="" - else - export SUDO_CMD="sudo " - fi - echo "Creating necessary folder in $NFS_IP..." 
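# The ssh calls that follow lay down the directory tree that backs the Content Analyzer PVs:
# /exports/smartpages/$KUBE_NAME_SPACE/{logs,data,config}, one logs/ subdirectory per
# component, config/backend, and data/{mongo,mongoadmin}; ownership of /exports/smartpages
# is then set to 51000:51001.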
- ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/{logs,data,config}" - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/logs/{backend,frontend,callerapi,processing-extraction,pdfprocess,setup,interprocessing,classifyprocess-classify,ocr-extraction,postprocessing,reanalyze,updatefiledetail,spfrontend,redis,rabbitmq,mongo,mongoadmin,utf8process}" - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/config/backend" - - - - echo "Creating data directory on NFS ..." - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD mkdir -p /exports/smartpages/$KUBE_NAME_SPACE/data/{mongo,mongoadmin}" - - - echo "Setting owner (51000:51001) for BACA's PVC" - ssh $SSH_USER@$NFS_IP -oStrictHostKeyChecking=no "$SUDO_CMD chown -R 51000:51001 /exports/smartpages/" - - - - - echo "Checking to see if NFS server is installed..." - if [[ $ICP_VERSION == "3.1.2" ]]; then - ssh $SSH_USER@$NFS_IP "$SUDO_CMD systemctl status nfs-kernel-server" - if [[ $? != "0" ]]; then - echo "We could not find nfs service. We will try to install nfs server" - ssh $SSH_USER@$NFS_IP "$SUDO_CMD apt install nfs-kernel-server && $SUDO_CMD systemctl enable nfs-kernel-server && $SUDO_CMD systemctl restart nfs-kernel-server" - - fi - fi - if [[ $OCP_VERSION == "3.11" ]]; then - ssh $SSH_USER@$NFS_IP "$SUDO_CMD systemctl status nfs-server" - if [[ $? != "0" ]]; then - echo "We could not find nfs service. We will try to install nfs server" - ssh $SSH_USER@$NFS_IP "$SUDO_CMD yum install nfs-utils && $SUDO_CMD systemctl enable nfs-server && $SUDO_CMD systemctl restart nfs-server" - fi - fi - - - - - #We will backup the existing /etc/exports - #Compare the icp worker ip w/ the existing IP in the /etc/exports file then insert any missing entry (IP) into /etc/exports. - echo "ssh $SSH_USER@$NFS_IP "$SUDO_CMD cp /etc/exports /etc/exports_bak"" - ssh $SSH_USER@$NFS_IP "$SUDO_CMD cp /etc/exports /etc/exports_bak" - export EXPORTS_FILE=`ssh $SSH_USER@$NFS_IP "$SUDO_CMD cat /etc/exports |grep '/exports/smartpages'" | awk '{print $2}' | cut -d'(' -f1` - echo "from exports files: $EXPORTS_FILE" - echo "from k8's : $WORKER_IP1s" - - #if [[ $? == "1" ]]; then - - echo "Inside writting to /etc/exports routine" - echo $WORKER_IP1s - - for i in $WORKER_IP1s - do - - echo $EXPORTS_FILE |grep $i - if [[ $? == "1" ]]; then - echo $i - echo "Cannot find $i in the /etc/exports file....." - echo "Writing '/exports/smartpages "$i"(rw,sync,no_root_squash)' to $NFS_IP/etc/exports file" - - ssh $SSH_USER@$NFS_IP "echo '/exports/smartpages "$i"(rw,sync,no_root_squash)' | $SUDO_CMD tee --append /etc/exports" - else - echo " $i matched" - fi - - done - - - #restart nfs service if available$KUBE_NAME_SPACE/config - if [[ $ICP_VERSION == "3.1.2" ]]; then - ssh $SSH_USER@$NFS_IP "$SUDO_CMD systemctl restart nfs-kernel-server" - fi - if [[ $OCP_VERSION == "3.11" ]]; then - ssh $SSH_USER@$NFS_IP "$SUDO_CMD systemctl restart nfs-server" - fi - - - else - echo -e "\x1B[1;32mPVCCHOICE is not defined. Therefore, you must create the following pvc name: \x1B[0m" - fi # end if of pvc=1 - -} -function calMemoryLimitedDist(){ - - echo -e "\x1B[1;32mChecking to see if bc package is installed\x1B[0m" - dpkg -l | awk {'print $2'} |grep ^bc$ > /dev/null - if [[ $? 
!= "0" ]]; then - echo "Installing bc package for resource calculation" - apt install bc -y - fi - echo CALLERAPI_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo BACKEND_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo FRONTEND_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo POST_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo PDF_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo UTF8_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo SETUP_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo OCR_EXTRACTION_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.09 * 1024" | bc)Mi" - echo CLASSIFY_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo PROCESSING_EXTRACTION_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.09 * 1024" | bc)Mi" - # echo INTER_PROCESSING_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo REANALYZE_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.045 * 1024" | bc)Mi" - echo UPDATEFILE_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo RABBITMQ_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" -# echo MINIO_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo REDIS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo MONGO_LIMITED_MEMORY="$(echo "$MONGO_SERVER_MEMORY * 0.6 * 1024" | bc)Mi" - echo MONGO_ADMIN_LIMITED_MEMORY="$(echo "$MONGO_ADMIN_SERVER_MEMORY * 0.6 * 1024" | bc)Mi" - export mongo_memory_value="$(echo "$MONGO_SERVER_MEMORY * 0.6 " | bc)" - export mongo_admin_memory_value="$(echo "$MONGO_ADMIN_SERVER_MEMORY * 0.6 " | bc)" - - - export MONGO_WIREDTIGER_LIMIT="$(echo "($mongo_memory_value -1)*0.5" | bc)" - - if [[ 1 -eq $(echo "$MONGO_WIREDTIGER_LIMIT < 0.25" |bc -l) ]];then - echo MONGO_WIREDTIGER_LIMIT='0.25' - - - else - echo "MONGO_WIREDTIGER_LIMIT=$MONGO_WIREDTIGER_LIMIT" - - fi - -# echo "mongo_admin_memory_value=$mongo_admin_memory_value" - export MONGO_ADMIN_WIREDTIGER_LIMIT="$(echo "($mongo_admin_memory_value -1)*0.5" | bc)" - - if [[ 1 -eq $(echo "$MONGO_ADMIN_WIREDTIGER_LIMIT < 0.25" |bc -l) ]];then - echo MONGO_ADMIN_WIREDTIGER_LIMIT='0.25' - - else - echo "MONGO_ADMIN_WIREDTIGER_LIMIT=$MONGO_ADMIN_WIREDTIGER_LIMIT" - fi - -} - -function calMemoryLimitedShared(){ - echo CALLERAPI_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo BACKEND_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo FRONTEND_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo POST_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo PDF_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo UTF8_PROCESS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo SETUP_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo OCR_EXTRACTION_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.09 * 1024" | bc)Mi" - echo CLASSIFY_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" - echo PROCESSING_EXTRACTION_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.09 * 1024" | bc)Mi" -# echo INTER_PROCESSING_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo REANALYZE_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.045 * 1024" | bc)Mi" - echo UPDATEFILE_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.03 * 1024" | bc)Mi" - echo RABBITMQ_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.06 * 1024" | bc)Mi" -# echo 
MINIO_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo REDIS_LIMITED_MEMORY="$(echo "$SERVER_MEMORY * 0.04 * 1024" | bc)Mi" - echo MONGO_LIMITED_MEMORY="$(echo "$MONGO_SERVER_MEMORY * 0.1 * 1024" | bc)Mi" - export mongo_memory_value="$(echo "$MONGO_SERVER_MEMORY * 0.1" | bc)" - echo MONGO_ADMIN_LIMITED_MEMORY="$(echo "$MONGO_ADMIN_SERVER_MEMORY * 0.1 * 1024" | bc)Mi" - export mongo_admin_memory_value="$(echo "$MONGO_ADMIN_SERVER_MEMORY * 0.1" | bc)" - -# echo "mongo_memory_value=$mongo_memory_value" - export MONGO_WIREDTIGER_LIMIT="$(echo "($mongo_memory_value -1)*0.5" | bc)" - #echo "MONGO_WIREDTIGER_LIMIT=$MONGO_WIREDTIGER_LIMIT" - if [[ 1 -eq $(echo "$MONGO_WIREDTIGER_LIMIT < 0.25" |bc -l) ]];then - echo MONGO_WIREDTIGER_LIMIT='0.25' - - else - echo "MONGO_WIREDTIGER_LIMIT=$MONGO_WIREDTIGER_LIMIT" - fi - -# echo "mongo_admin_memory_value=$mongo_admin_memory_value" - export MONGO_ADMIN_WIREDTIGER_LIMIT="$(echo "($mongo_admin_memory_value -1)*0.5" | bc)" - #echo "MONGO_WIREDTIGER_LIMIT=$MONGO_WIREDTIGER_LIMIT" - if [[ 1 -eq $(echo "$MONGO_WIREDTIGER_LIMIT < 0.25" |bc -l) ]];then - echo MONGO_ADMIN_WIREDTIGER_LIMIT='.25' - else - echo "MONGO_ADMIN_WIREDTIGER_LIMIT=$MONGO_ADMIN_WIREDTIGER_LIMIT" - fi - -} -function calNumOfContainers(){ - if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - export numOfCelery=$(kubectl get nodes --show-labels |grep worker.*celery$KUBE_NAME_SPACE=baca | wc -l) - fi - if [[ $OCP_VERSION == "3.11" ]]; then - export numOfCelery=$(oc get nodes --show-labels |grep compute=true | grep celery$KUBE_NAME_SPACE=baca | wc -l) - fi - echo CELERY_REPLICAS=$numOfCelery - echo NON_CELERY_REPLICAS=$numOfCelery - -} diff --git a/BACA/configuration/common.sh b/BACA/configuration/common.sh deleted file mode 100755 index 63d75a06..00000000 --- a/BACA/configuration/common.sh +++ /dev/null @@ -1,29 +0,0 @@ -SERVER_MEMORY= -MONGO_SERVER_MEMORY= -MONGO_ADMIN_SERVER_MEMORY= -USING_HELM= -HELM_INIT_BEFORE= -KUBE_NAME_SPACE= -DOCKER_REG_FOR_SERVICES= -LABEL_NODE= -CA_WORKERS= -MONGO_WORKERS= -MONGO_ADMIN_WORKERS= -ICP_VERSION= -ICP_USER= -ICP_USER_PASSWORD= -BXDOMAINNAME= -MASTERIP= -SSH_USER= -PVCCHOICE= -NFS_IP= -DATAPVC= -LOGPVC= -CONFIGPVC= -BASE_DB_PWD= -LDAP= -LDAP_PASSWORD= -LDAP_URL= -LDAP_CRT_NAME= -DB_SSL= -DB_CRT_NAME= \ No newline at end of file diff --git a/BACA/configuration/common_ICP_template.sh b/BACA/configuration/common_ICP_template.sh deleted file mode 100755 index 63d75a06..00000000 --- a/BACA/configuration/common_ICP_template.sh +++ /dev/null @@ -1,29 +0,0 @@ -SERVER_MEMORY= -MONGO_SERVER_MEMORY= -MONGO_ADMIN_SERVER_MEMORY= -USING_HELM= -HELM_INIT_BEFORE= -KUBE_NAME_SPACE= -DOCKER_REG_FOR_SERVICES= -LABEL_NODE= -CA_WORKERS= -MONGO_WORKERS= -MONGO_ADMIN_WORKERS= -ICP_VERSION= -ICP_USER= -ICP_USER_PASSWORD= -BXDOMAINNAME= -MASTERIP= -SSH_USER= -PVCCHOICE= -NFS_IP= -DATAPVC= -LOGPVC= -CONFIGPVC= -BASE_DB_PWD= -LDAP= -LDAP_PASSWORD= -LDAP_URL= -LDAP_CRT_NAME= -DB_SSL= -DB_CRT_NAME= \ No newline at end of file diff --git a/BACA/configuration/common_OCP_template.sh b/BACA/configuration/common_OCP_template.sh deleted file mode 100755 index a5e741ef..00000000 --- a/BACA/configuration/common_OCP_template.sh +++ /dev/null @@ -1,29 +0,0 @@ -SERVER_MEMORY= -MONGO_SERVER_MEMORY= -MONGO_ADMIN_SERVER_MEMORY= -USING_HELM= -HELM_INIT_BEFORE= -KUBE_NAME_SPACE= -DOCKER_REG_FOR_SERVICES= -LABEL_NODE= -CA_WORKERS= -MONGO_WORKERS= -MONGO_ADMIN_WORKERS= -OCP_VERSION= -OCP_USER= -OCP_USER_PASSWORD= -BXDOMAINNAME= -MASTERIP= -SSH_USER= -PVCCHOICE= 
-NFS_IP= -DATAPVC= -LOGPVC= -CONFIGPVC= -BASE_DB_PWD= -LDAP= -LDAP_PASSWORD= -LDAP_URL= -LDAP_CRT_NAME= -DB_SSL= -DB_CRT_NAME= \ No newline at end of file diff --git a/BACA/configuration/createSSLCert.sh b/BACA/configuration/createSSLCert.sh deleted file mode 100755 index cc713f03..00000000 --- a/BACA/configuration/createSSLCert.sh +++ /dev/null @@ -1,191 +0,0 @@ -#!/usr/bin/env bash - -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - - -function createSSLCert() { - rm -r *.crt *.pem *.key || true - - echo -e "\x1B[1;32mAbout to create a self-signed SSL cert for ingress, celery, mongo, redis, rabbitmq....\x1B[0m" - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/tls.key -out $PWD/tls.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/tls.key -out $PWD/tls.crt -subj "/CN=127.0.0.1" - cat $PWD/tls.key $PWD/tls.crt > $PWD/tls.pem - - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/celery.key -out $PWD/celery.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/celery.key -out $PWD/celery.crt -subj "/CN=127.0.0.1" - cat $PWD/celery.key $PWD/celery.crt > $PWD/celery.pem - - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/mongo.key -out $PWD/mongo.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/mongo.key -out $PWD/mongo.crt -subj "/CN=127.0.0.1" - cat $PWD/mongo.key $PWD/mongo.crt > $PWD/mongo.pem - - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/public.crt -out $PWD/public.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/private.key -out $PWD/public.crt -subj "/CN=127.0.0.1" - - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/redis.key -out $PWD/redis.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/redis.key -out $PWD/redis.crt -subj "/CN=127.0.0.1" - cat $PWD/redis.key $PWD/redis.crt > $PWD/redis.pem - echo "changing file permissions for redis.key ..." - chmod 600 $PWD/redis.key - - echo "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/rabbitmq.key -out $PWD/rabbitmq.crt -subj "/CN=127.0.0.1" " - openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $PWD/rabbitmq.key -out $PWD/rabbitmq.crt -subj "/CN=127.0.0.1" - cat $PWD/rabbitmq.key $PWD/rabbitmq.crt > $PWD/rabbitmq.pem - - -} -function createSecret (){ - - echo -e "\x1B[1;32mAbout to create a secrets for ingress, celery, mongo, redis, rabbitmq....\x1B[0m" - echo "kubectl -n $KUBE_NAME_SPACE create secret tls baca-ingress-secret --key $PWD/tls.key --cert $PWD/tls.crt" - kubectl -n $KUBE_NAME_SPACE create secret tls baca-ingress-secret --key $PWD/tls.key --cert $PWD/tls.crt \ - --dry-run -o yaml | kubectl apply -f - - -# if [[ $DB_SSL == "y" || $DB_SSL == "Y" ]]; then -# echo "kubectl -n sp create secret generic baca-db2-secret --from-file=$PWD/db2-cert.arm" -# kubectl -n sp create secret generic baca-db2-secret --from-file=$PWD/db2-cert.arm -# fi - if [[ ($LDAP_URL =~ ^'ldaps' && ! 
-z $LDAP_CRT_NAME) && ($DB_SSL == "n") ]]; then - echo "kubectl -n $KUBE_NAME_SPACE create secret generic with LDAP certs AND no DB2 cert " - kubectl -n $KUBE_NAME_SPACE create secret generic baca-secrets$KUBE_NAME_SPACE \ - --from-file=$PWD/celery.pem --from-file=$PWD/celery.crt --from-file=$PWD/celery.key \ - --from-file=$PWD/mongo.pem --from-file=$PWD/mongo.crt --from-file=$PWD/mongo.key \ - --from-file=$PWD/public.crt --from-file=$PWD/private.key \ - --from-file=$PWD/redis.pem --from-file=$PWD/redis.key --from-file=$PWD/redis.crt \ - --from-file=$PWD/rabbitmq.pem --from-file=$PWD/rabbitmq.key --from-file=$PWD/rabbitmq.crt \ - --from-file=$PWD/$LDAP_CRT_NAME \ - --dry-run -o yaml | kubectl apply -f - - elif [[ ($LDAP_URL =~ ^'ldaps' && ! -z $LDAP_CRT_NAME) && ($DB_SSL == "y" && ! -z $DB_CRT_NAME) ]]; then - echo "kubectl -n $KUBE_NAME_SPACE create secret generic with DB certs AND LDAP certs " - kubectl -n $KUBE_NAME_SPACE create secret generic baca-secrets$KUBE_NAME_SPACE \ - --from-file=$PWD/celery.pem --from-file=$PWD/celery.crt --from-file=$PWD/celery.key \ - --from-file=$PWD/mongo.pem --from-file=$PWD/mongo.crt --from-file=$PWD/mongo.key \ - --from-file=$PWD/public.crt --from-file=$PWD/private.key \ - --from-file=$PWD/redis.pem --from-file=$PWD/redis.key --from-file=$PWD/redis.crt \ - --from-file=$PWD/rabbitmq.pem --from-file=$PWD/rabbitmq.key --from-file=$PWD/rabbitmq.crt \ - --from-file=$PWD/$LDAP_CRT_NAME \ - --from-file=$PWD/$DB_CRT_NAME \ - --dry-run -o yaml | kubectl apply -f - - elif [[ ($DB_SSL == "y" && ! -z $DB_CRT_NAME) && ($LDAP_URL != ^'ldaps') ]]; then - echo "kubectl -n $KUBE_NAME_SPACE create secret generic with DB certs AND NO LDAP certs " - kubectl -n $KUBE_NAME_SPACE create secret generic baca-secrets$KUBE_NAME_SPACE \ - --from-file=$PWD/celery.pem --from-file=$PWD/celery.crt --from-file=$PWD/celery.key \ - --from-file=$PWD/mongo.pem --from-file=$PWD/mongo.crt --from-file=$PWD/mongo.key \ - --from-file=$PWD/public.crt --from-file=$PWD/private.key \ - --from-file=$PWD/redis.pem --from-file=$PWD/redis.key --from-file=$PWD/redis.crt \ - --from-file=$PWD/rabbitmq.pem --from-file=$PWD/rabbitmq.key --from-file=$PWD/rabbitmq.crt \ - --from-file=$PWD/$DB_CRT_NAME \ - --dry-run -o yaml | kubectl apply -f - - else - echo "kubectl -n $KUBE_NAME_SPACE create secret generic with no LDAP and DB2 certs" - kubectl -n $KUBE_NAME_SPACE create secret generic baca-secrets$KUBE_NAME_SPACE \ - --from-file=$PWD/celery.pem --from-file=$PWD/celery.crt --from-file=$PWD/celery.key \ - --from-file=$PWD/mongo.pem --from-file=$PWD/mongo.crt --from-file=$PWD/mongo.key \ - --from-file=$PWD/public.crt --from-file=$PWD/private.key \ - --from-file=$PWD/redis.pem --from-file=$PWD/redis.key --from-file=$PWD/redis.crt \ - --from-file=$PWD/rabbitmq.pem --from-file=$PWD/rabbitmq.key --from-file=$PWD/rabbitmq.crt \ - --dry-run -o yaml | kubectl apply -f - - fi - -} -function createMongoSecrets (){ -echo -e "\x1B[1;32mAbout to create mongo Secrets....\x1B[0m" -if [[ -z "$MONGOADMINENTRYPASSWORD" && -z "$MONGOADMINUSER" && -z "$MONGOADMINPASSWORD" ]]; then - echo -e "\x1B[1;32mCreating mongo admin Secrets using random values....\x1B[0m" - export MONGOADMINENTRYPASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-29) - export MONGOADMINUSER=$(openssl rand -base64 12 | tr -d "=+/" | cut -c1-29) - export MONGOADMINPASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-29) - - kubectl -n $KUBE_NAME_SPACE create secret generic baca-mongo-admin \ - 
--from-literal=MONGOADMINENTRYPASSWORD="$MONGOADMINENTRYPASSWORD" \ - --from-literal=MONGOADMINUSER="$MONGOADMINUSER" \ - --from-literal=MONGOADMINPASSWORD="$MONGOADMINPASSWORD" \ - --dry-run -o yaml | kubectl apply -f - -else - echo -e "\x1B[1;32mCreating mongo admin Secret based on custom values for MONGOADMINENTRYPASSWORD, MONGOADMINUSER, MONGOADMINPASSWORD\x1B[0m" - kubectl -n $KUBE_NAME_SPACE create secret generic mongo-admin \ - --from-literal=MONGOADMINENTRYPASSWORD="$MONGOADMINENTRYPASSWORD" \ - --from-literal=MONGOADMINUSER="$MONGOADMINUSER" \ - --from-literal=MONGOADMINPASSWORD="$MONGOADMINPASSWORD" \ - --dry-run -o yaml | kubectl apply -f - -fi - -if [[ -z "$MONGOENTRYPASSWORD" && -z "$MONGOUSER" && -z "$MONGOPASSWORD" ]] ; then - echo -e "\x1B[1;32mCreating mongo Secrets using random values....\x1B[0m" - export MONGOENTRYPASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-29) - export MONGOUSER=$(openssl rand -base64 12 | tr -d "=+/" | cut -c1-29) - export MONGOPASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-29) - kubectl -n $KUBE_NAME_SPACE create secret generic baca-mongo \ - --from-literal=MONGOENTRYPASSWORD="$MONGOENTRYPASSWORD" \ - --from-literal=MONGOUSER="$MONGOUSER" \ - --from-literal=MONGOPASSWORD="$MONGOPASSWORD" \ - --dry-run -o yaml | kubectl apply -f - -else - echo -e "\x1B[1;32mCreating mongo Secret based on custom values for MONGOENTRYPASSWORD, MONGOUSER, MONGOPASSWORD\x1B[0m" - kubectl -n $KUBE_NAME_SPACE create secret generic mongo \ - --from-literal=MONGOENTRYPASSWORD="$MONGOENTRYPASSWORD" \ - --from-literal=MONGOUSER="$MONGOUSER" \ - --from-literal=MONGOPASSWORD="$MONGOPASSWORD" \ - --dry-run -o yaml | kubectl apply -f - -fi - -} -function createLDAPSecret(){ - -if [[ $LDAP == "y" && $LDAP_PASSWORD != "" ]]; then - echo -e "\x1B[1;32mAbout to create LDAP Secret....\x1B[0m" - echo -e "\x1B[1;32mCreating LDAP Secret....\x1B[0m" - export LDAP_PASSWORD_DECODE=$(echo $LDAP_PASSWORD | base64 --decode) - kubectl -n $KUBE_NAME_SPACE create secret generic baca-ldap \ - --from-literal=LDAP_PASSWORD="$LDAP_PASSWORD_DECODE" \ - --dry-run -o yaml | kubectl apply -f - -fi - -} -function createBaseDbSecret(){ -echo -e "\x1B[1;32mAbout to create secret for Base DB....\x1B[0m" -if [[ -z $BASE_DB_PWD ]]; then - echo -e "\x1B[1;32m Cannot find BASED_DB_PWD from common.sh..Exiting !!\x1B[0m" - exit 1 -else - echo -e "\x1B[1;32mCreating Base DB secret....\x1B[0m" - kubectl -n $KUBE_NAME_SPACE create secret generic baca-basedb \ - --from-literal=BASE_DB_PWD="$BASE_DB_PWD" \ - --dry-run -o yaml | kubectl apply -f - -fi -} - -function createRabbitmaSecret(){ -echo -e "\x1B[1;32mAbout to create secret for RabbitMQ....\x1B[0m" - -export rabbitmq_admin_password=$(openssl rand -base64 10 | tr -d "=+/" | cut -c1-29) -export rabbitmq_erlang_cookie=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-29) -export rabbitmq_password=$(openssl rand -base64 10 | tr -d "=+/" | cut -c1-29) -export rabbitmq_user=$(openssl rand -base64 6 | tr -d "=+/" | cut -c1-29) -export rabbitmq_management_password=$(openssl rand -base64 10 | tr -d "=+/" | cut -c1-29) -export rabbitmq_management_user=$(openssl rand -base64 6 | tr -d "=+/" | cut -c1-29) - -kubectl -n $KUBE_NAME_SPACE create secret generic baca-rabbitmq \ ---from-literal=rabbitmq-admin-password="$rabbitmq_admin_password" \ ---from-literal=rabbitmq-erlang-cookie="$rabbitmq_erlang_cookie" \ ---from-literal=rabbitmq-password="$rabbitmq_password" \ ---from-literal=rabbitmq-user="$rabbitmq_user" \ 
---from-literal=rabbitmq-management-password="$rabbitmq_management_password" \ ---from-literal=rabbitmq-management-user="$rabbitmq_management_user" \ ---dry-run -o yaml | kubectl apply -f - - - -} - -function createRedisSecret(){ -echo -e "\x1B[1;32mAbout to create secret for Redis....\x1B[0m" -export redis_password=$(openssl rand -base64 10 | tr -d "=+/" | cut -c1-29) -kubectl -n $KUBE_NAME_SPACE create secret generic baca-redis \ ---from-literal=redis-password="$redis_password" \ ---dry-run -o yaml | kubectl apply -f - -} \ No newline at end of file diff --git a/BACA/configuration/delete_ContentAnalyzer.sh b/BACA/configuration/delete_ContentAnalyzer.sh deleted file mode 100755 index a116f33b..00000000 --- a/BACA/configuration/delete_ContentAnalyzer.sh +++ /dev/null @@ -1,118 +0,0 @@ -#!/usr/bin/env bash -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - -. ./common.sh -. ./bashfunctions.sh - -today=`date +%Y-%m-%d.%H:%M:%S` -echo $today - -if [ -z "$KUBE_NAME_SPACE" ] -then - echo -e "\x1B[1;31mThe KUBE_NAME_SPACE is not set. The script will exit. To delete everything in the IBM Business Automation Content Analyzer namespace, set the KUBE_NAME_SPACE variable to the name of the namespace where IBM Business Automation Content Analyzer is deployed and rerun. :\x1B[0m" - exit -fi - -if [ $KUBE_NAME_SPACE == "default" ] -then - echo -e "\x1B[1;31mThe KUBE_NAME_SPACE is set to default. The script will exit. We cannot delete all resources from the default namespace. To delete everything in the IBM Business Automation Content Analyzer namespace, set the KUBE_NAME_SPACE variable to the name of the namespace where IBM Business Automation Content Analyzer is deployed and rerun. :\x1B[0m" - exit -fi - -# confirm they want to delete -echo -echo -e "\x1B[1;31mThis script will DELETE all the resources, including services, deployments, and pvc, in the namespace : $KUBE_NAME_SPACE . And then delete the namespace $KUBE_NAME_SPACE \x1B[0m" -echo -echo -e "\x1B[1;31mPlease only execute if you are SURE you want to DELETE everything from your namespace $KUBE_NAME_SPACE . \x1B[0m" -echo -echo -e "\x1B[1;31mWARNING: Please note that on ICP this script may not be able to successfully remove all the pods. The pods and the namespace might be left in 'terminating' state . \x1B[0m" -echo - -while [[ $deleteconfirm != "y" && $deleteconfirm != "n" && $deleteconfirm != "yes" && $deleteconfirm != "no" ]] # While deleteconfirm is not y or n... -do - echo -e "\x1B[1;31mWould you like to continue (Y/N):\x1B[0m" - read deleteconfirm - deleteconfirm=$(echo "$deleteconfirm" | tr '[:upper:]' '[:lower:]') -done - - -if [[ $deleteconfirm == "n" || $deleteconfirm == "no" ]] -then - exit -fi - -#Logon to kubectl -loginToCluster - - -echo "----- Deleting Celery ..." -cwd=$(pwd) - -#export HELM="./helm-chart/baca-celery" -#export HELM1="./helm-chart/baca-userportal" -#echo -#echo "cd ${HELM}" -#cd ${HELM} - -echo -if [[ $USING_HELM == "y" || $USING_HELM == "yes" ]]; then - if [[ $ICP_VERSION == "3.1.2" ]]; then - echo "helm delete celery${KUBE_NAME_SPACE} --purge --tls" - helm delete celery${KUBE_NAME_SPACE} --purge --tls - fi - if [[ $OCP_VERSION == "3.11" ]]; then - echo "helm delete celery${KUBE_NAME_SPACE} --purge --tiller-namespace tiller" - helm delete celery${KUBE_NAME_SPACE} --purge --tiller-namespace tiller - fi -fi -echo -echo "sleep for 120 secs to wait for celery pods to complete termination...." 
- -sleep 120 -# -#echo -#echo "return to previous directory: ${cwd}" -#cd ${cwd} - -echo ----- Deleting all BACA resources from namespace : $KUBE_NAME_SPACE -set +e -kubectl delete -n $KUBE_NAME_SPACE --all deploy,svc,pvc,pods --force --grace-period=0 -kubectl delete -n $KUBE_NAME_SPACE secret baca-ingress-secret baca-secrets$KUBE_NAME_SPACE baca-userportal-ingress-secret baca-mongo baca-mongo-admin baca-ldap baca-basedb baca-rabbitmq baca-redis -if [[ $ICP_VERSION == "3.1.2" ]]; then - kubectl delete -n $KUBE_NAME_SPACE rolebinding baca-clusterrole-rolebinding - kubectl delete -n $KUBE_NAME_SPACE clusterrole baca-anyuid-clusterrole - kubectl delete -n $KUBE_NAME_SPACE psp baca-anyuid-psp -fi -set -e - - - - -# only delete PVC for internal/dev env. -if [[ $PVCCHOICE == "1" ]]; then - echo ---- Deleting persistent volumes. - count=`kubectl -n $KUBE_NAME_SPACE get pv | awk {'print $1'}| grep ^sp-.*${KUBE_NAME_SPACE}$|wc | awk {'print $1'}` - if [[ $count != "0" ]]; then - kubectl -n $KUBE_NAME_SPACE delete pv `kubectl -n $KUBE_NAME_SPACE get pv | awk {'print $1'}| grep ^sp-.*${KUBE_NAME_SPACE}$` - fi - echo ---Clean up all pvc subdirectories. You need to run setup.sh or init_deployment.sh again to have these directories re-created. -# ssh root@$NFS_IP rm -rf /exports/smartpages/$KUBE_NAME_SPACE/* - if [ -z "$SSH_USER" ]; then - export SSH_USER="root" - fi - - if [ "$SSH_USER" == "root" ]; then - export SUDO_CMD="" - else - export SUDO_CMD="sudo " - fi - ssh $SSH_USER@$NFS_IP "$SUDO_CMD rm -rf /exports/smartpages/$KUBE_NAME_SPACE/*" - - -fi - diff --git a/BACA/configuration/generateMemoryValues.sh b/BACA/configuration/generateMemoryValues.sh deleted file mode 100755 index 0e6cf3ea..00000000 --- a/BACA/configuration/generateMemoryValues.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env bash -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# -. ./bashfunctions.sh -. ./common.sh - -echo -e "\x1B[1;32mThis will generate recommended values for setting memory resources in Business Automation Content Analyzer (CA) product.\x1B[0m" -echo -e "\x1B[1;32mUse \"distributed\" flag when you have an distribute environment where mongo DB, mongo-admin DB, and CA processing components are their own nodes. Otherwise, use \"limited\" flag \x1B[0m" -echo -e "\x1B[1;32mThese values may need to be adjusted depending on your workload\x1B[0m" - - -if [[ -z $1 ]]; then - echo -e "\x1B[1;31mYou need to pass in either \"distributed\" or \"limited\" to use this script\x1B[0m" - exit 1 -fi - - -if [[ $1 == "distributed" ]]; then - calMemoryLimitedDist - calNumOfContainers -elif [[ $1 == "limited" ]]; then - calMemoryLimitedShared - calNumOfContainers -fi \ No newline at end of file diff --git a/BACA/configuration/init_deployments.sh b/BACA/configuration/init_deployments.sh deleted file mode 100755 index 37c3ae7a..00000000 --- a/BACA/configuration/init_deployments.sh +++ /dev/null @@ -1,95 +0,0 @@ -#!/usr/bin/env bash -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - -. ./common.sh -. ./bashfunctions.sh -. 
./createSSLCert.sh - -# Login (if necessary) -loginToCluster - -#Creating psp and clusterrole for BACA - - - -# Create Kube namespace -echo "\x1B[1;32mCreating $KUBE_NAME_SPACE namespace \x1B[0m" -if [[ $ICP_VERSION == "3.1.0" || $ICP_VERSION == "3.1.2" ]]; then - kubectl create namespace $KUBE_NAME_SPACE -fi - -if [[ $OCP_VERSION == "3.11" ]]; then - oc new-project $KUBE_NAME_SPACE - oc project $KUBE_NAME_SPACE -fi - -if [[ $ICP_VERSION == "3.1.2" ]]; then - checkPsp=$(kubectl get psp |grep baca |wc -l) - - if [[ $checkPsp == "0" ]]; then - - echo -e "\x1B[1;32mCreating psp and clusterrole for BACA\x1B[0m" - kubectl -n $KUBE_NAME_SPACE apply -f ./baca-psp.yaml - echo -e "\x1B[1;32mCreating rolebinding for BACA\x1B[0m" - kubectl -n $KUBE_NAME_SPACE create rolebinding baca-clusterrole-rolebinding --clusterrole=baca-anyuid-clusterrole --group=system:serviceaccounts:$KUBE_NAME_SPACE - - fi -fi - -if [[ $OCP_VERSION == "3.11" ]]; then - # Allows images to run as the root UID if no USER in specified in the Dockerfile. - oc adm policy add-scc-to-group anyuid system:authenticated -fi - -#label nodes -if [[ ($LABEL_NODE == "y" || $LABEL_NODE == "Y") ]]; then - customLabelNodes -else - echo -e "\x1B[1;32mLABEL_NODE and LABEL_NODE_BY_PARAM parameters are not defined. Therefore, you must label your nodes accordingly\x1B[0m" -fi - - -# Create nfs, and pv/pvc -#getNFSServer - -#Check and rename DB2 cert to db2-cert.arm when DB_SSL=y -if [[ ($DB_SSL == "y" || $DB_SSL == "Y") && ($DB_CRT_NAME != 'db2-cert.arm') ]]; then - echo "renaming DB2 Cert name from $DB_CRT_NAME to db2-cert.arm" - cp $DB_CRT_NAME db2-cert.arm -fi - -#Create SSL cert and secret -createSSLCert -createSecret -createMongoSecrets -createLDAPSecret -createBaseDbSecret -createRabbitmaSecret -createRedisSecret -if [[ $PVCCHOICE == "1" ]]; then - echo -e "\x1B[1;32mSetting up PV/PVC storage\x1B[0m" - getNFSServer - ./init_persistent.sh -fi - - -#Helm client download and initialization -if [[ $USING_HELM == "y" || $USING_HELM == "yes" ]]; then - if [[ -z $HELM_INIT_BEFORE || $HELM_INIT_BEFORE == "n" || $HELM_INIT_BEFORE == "no" ]]; then - - # setup helm client - downloadHelmClient - - # setup helm on cluster - helmSetup - - # ensure tiller-deploy is successful on cluster - checkHelm - fi -fi - diff --git a/BACA/configuration/init_persistent.sh b/BACA/configuration/init_persistent.sh deleted file mode 100755 index a731d486..00000000 --- a/BACA/configuration/init_persistent.sh +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env bash - -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - -. ./common.sh - - -cat sppersistent.yaml | sed s/\$NFS_IP/"$NFS_IP"/ | sed s/\$KUBE_NAME_SPACE/"$KUBE_NAME_SPACE"/ | sed s/\$DATAPVC/"$DATAPVC"/ | sed s/\$LOGPVC/"$LOGPVC"/ | sed s/\$CONFIGPVC/"$CONFIGPVC"/ |kubectl apply -f - - diff --git a/BACA/configuration/renewCert.sh b/BACA/configuration/renewCert.sh deleted file mode 100755 index dbaf4e47..00000000 --- a/BACA/configuration/renewCert.sh +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env bash -# -# Licensed Materials - Property of IBM -# 6949-68N -# -# © Copyright IBM Corp. 2018 All Rights Reserved -# - -. ./common.sh -. ./bashfunctions.sh -. 
./createSSLCert.sh - - -today=`date +%Y-%m-%d.%H:%M:%S` -echo $today - - -# confirm they want to delete -echo -echo -e "\x1B[1;31mThis script will RENEW all the certificates for IBM Business Automation Content Analyzer in $KUBE_NAME_SPACE \x1B[0m" -echo -echo -e "\x1B[1;31mThe script will delete ALL the IBM Business Automation Content Analyzer pods in $KUBE_NAME_SPACE. Therefore, you must make sure to backup your ontology,etc... and make sure there are no activities on the system \x1B[0m" -echo -ls -al *.pem > /dev/null -if [[ $? == "0" ]]; then - echo -e "\x1B[1;31mBased on the PEM files in the $PWD, the expirations date for them are: \x1B[0m" - - for pem in ./*.pem; do - printf '%s: %s\n' \ - "$pem expries on" \ - "$(date --date="$(openssl x509 -enddate -noout -in "$pem"|cut -d= -f 2)" --iso-8601)" - done -else - echo -e "\x1B[1;31mWe could not find any existing PMR files in $PWD \x1B[0m" -fi - -while [[ $renewConfirm != "y" && $renewConfirm != "n" && $renewConfirm != "yes" && $renewConfirm != "no" ]] # While deleteconfirm is not y or n... -do - echo -e "\x1B[1;31mWould you like to continue (Y/N):\x1B[0m" - read renewConfirm - renewConfirm=$(echo "$renewConfirm" | tr '[:upper:]' '[:lower:]') -done - - -if [[ $renewConfirm == "n" || $renewConfirm == "no" ]] -then - exit -else - loginToCluster - createSSLCert - createSecret - echo -e "\x1B[1;31m Deleting all Content Analyzer's pods ... " - kubectl -n sp delete --all pods --force --grace-period=0 -fi \ No newline at end of file diff --git a/BACA/configuration/sppersistent.yaml b/BACA/configuration/sppersistent.yaml deleted file mode 100644 index 03bfc6d3..00000000 --- a/BACA/configuration/sppersistent.yaml +++ /dev/null @@ -1,83 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: sp-data-pv-$KUBE_NAME_SPACE - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - capacity: - storage: 60Gi - nfs: - path: /exports/smartpages/$KUBE_NAME_SPACE/data - server: $NFS_IP - persistentVolumeReclaimPolicy: Retain ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: $DATAPVC - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 60Gi - volumeName: sp-data-pv-$KUBE_NAME_SPACE ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: sp-log-pv-$KUBE_NAME_SPACE - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - capacity: - storage: 35Gi - nfs: - path: /exports/smartpages/$KUBE_NAME_SPACE/logs - server: $NFS_IP - persistentVolumeReclaimPolicy: Retain ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: $LOGPVC - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 35Gi - volumeName: sp-log-pv-$KUBE_NAME_SPACE ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: sp-config-pv-$KUBE_NAME_SPACE - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - capacity: - storage: 5Gi - nfs: - path: /exports/smartpages/$KUBE_NAME_SPACE/config - server: $NFS_IP - persistentVolumeReclaimPolicy: Retain ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: $CONFIGPVC - namespace: $KUBE_NAME_SPACE -spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 5Gi - volumeName: sp-config-pv-$KUBE_NAME_SPACE \ No newline at end of file diff --git a/BACA/docs/DB2_setup.md b/BACA/docs/DB2_setup.md deleted file mode 100644 index e4a67dba..00000000 --- a/BACA/docs/DB2_setup.md +++ /dev/null @@ -1,40 +0,0 @@ -## Creating BaseDB and TenantDB 
on Db2 - -### Create Content Analyzer BaseDB -After the configuration/DB2 directory has been copied to the Db2 server, run the CreateBaseDB.sh script from the command prompt. ->Note: Run the following scripts with a Db2 user such as db2inst1 who has 'su' privilege. - -#### Procedure: -As prompted, enter the following data: - -- Enter the name of the IBM Business Automation Content Analyzer Base database (enter a unique name of 8 characters or less and no special characters. for example, CABASEDB) -- Enter the name of database user – (enter a database user name that will have full permissions to the base database) – this can be a new or existing Db2 user -- Enter the password for the above user – enter a password each time when prompted. If this is an existing user, this prompt will be skipped - -### Create Content Analyzer Tenant DB -Create the Content Analyzer Tenant DB and add it to the basedb by running the AddTenant.sh script on the Db2 server. - -#### Procedure - -As prompted, enter the following: - - Enter the tenant ID – (an alphanumeric value that will be used by the user to reference the database) - - Enter the name of the IBM Business Automation Content Analyzer tenant database to create - (an alphanumeric value for the actual database name in Db2) - - Enter the host/IP of the database server – (the IP address of the database server) - - Enter the port of the database server – Press Enter to accept default of 50000 (or enter the port number if a different port is required) - - Do you want this script to create a database user – y (for yes) - - Enter the name of database user – (this will be the tenant database user - enter an alphanumeric username with no special characters) - - Enter the password for the user – (enter an alphanumeric password each time when prompted) - - Enter the tenant ontology name – Press Enter to accept 'default' (or enter a name to reference the ontology by if desired) - - Enter the name of the IBM Business Automation Content Analyzer base database (enter the database name given when creating the base database) - - Enter the name of the database user for the IBM Business Automation Content Analyzer base database (enter the base username given when creating the base database) -The remaining values will be used to set up the initial user in IBM Business Automation Content Analyzer - - Enter the company name (enter your company name) - - Enter the first name (enter your first name) - - Enter the last name (enter your last name) - - Enter a valid email address (enter your email address) - - Enter the login name (if using LDAP authentication, enter your username as it appears in the LDAP server) - - Would you like to continue (y for yes) - -Save the TenantID and Ontology name for the later steps. - -Back to prerequisite [Overview](../configuration/README.md) diff --git a/BACA/docs/common_sh_values.md b/BACA/docs/common_sh_values.md deleted file mode 100644 index 54b0991e..00000000 --- a/BACA/docs/common_sh_values.md +++ /dev/null @@ -1,47 +0,0 @@ -## Common.sh parameters - -Review common.sh as a reference sample then copy common_ICP_template.sh or common_OCP_template.sh to common.sh based on your platform. - -Note Since the common.sh contains several passwords, you need to protect it by assigning appropriate permission such as read-only. - -#### common.sh parameters -|Description|Possible values| -|-----------|-----------------------| -SERVER_MEMORY| The amount of memory for Content Analyzer worker nodes 16,32, etc. 
Required: Yes -MONGO_SERVER_MEMORY| The amount of memory for the Content Analyzer mongo node 16,32, etc. Required: Yes -MONGO_ADMIN_SERVER_MEMORY| The amount of memory for the Content Analyzer mongoadmin node 16,32, etc. Required: Yes -USING_HELM|Indicates whether to deploy Content Analyzer with the Helm chart. If the value is "n", Content Analyzer is deployed with Kubernetes YAML files. "y" or "n". Required: Yes -HELM_INIT_BEFORE|Used to install the Helm client for a Content Analyzer Helm install. Set it to "n" if you are not installing Content Analyzer using Helm. "y" or "n". Required: Yes -ICP_VERSION or OCP_VERSION|The ICP version is 3.1.2; the OCP version is 3.11. "3.1.2" or "3.11". Required: Yes -KUBE_NAME_SPACE| The Kubernetes namespace that Content Analyzer will be installed in. Any valid namespace. Required: Yes -DOCKER_REG_FOR_SERVICES|The Content Analyzer domain used in the ICP cluster, the Docker registry port, and your namespace. For example: mycluster.icp:8500/sp, where mycluster.icp is the Content Analyzer domain, 8500 is the Docker registry port, and sp is the namespace where you want to install Content Analyzer. Required: Yes -LABEL_NODE |-Content Analyzer (CA) processing components will be deployed on node(s) with label celery=baca
-mongodb will be deployed on node with label mongo=baca
-mongoadmindb will be deployed on node with label mongoadmin=baca
Example: The nodes will have these labels where the namespace is "sp":
-celerysp=baca
-mongosp=baca
-mongoadminsp=baca
You must manually label your nodes per the above guideline if the value of LABEL_NODE is "n". "y" or "n". Required: Yes -CA_WORKERS |A comma-separated list of IP addresses (ICP) or host names (OpenShift) of the worker nodes to be labeled "celery=baca". NOTE: You can share the nodes/IPs if you have a small cluster for development purposes. Required if LABEL_NODE = "y" -MONGO_WORKERS|A comma-separated list of IP addresses (ICP) or host names (OpenShift) of the worker nodes to be labeled "mongo=baca". NOTE: You can share the nodes/IPs if you have a small cluster for development purposes. Required if LABEL_NODE = "y" -MONGO_ADMIN_WORKERS|A comma-separated list of IP addresses (ICP) or host names (OpenShift) of the worker nodes to be labeled "mongoadmin=baca". NOTE: You can share the nodes/IPs if you have a small cluster for development purposes. Required if LABEL_NODE = "y" -ICP_USER or OCP_USER|An ICP or OCP username with enough permissions to deploy Content Analyzer. Required: Yes -ICP_USER_PASSWORD or OCP_USER_PASSWORD|The password for the ICP or OCP user. Must be base64 encoded. Required: Yes -BXDOMAINNAME|The IP address of your ICP proxy node if you are using ICP, or of your OCP infra node if you are using OCP. Required: Yes -MASTERIP|The IP address of your ICP or OCP master node. Required: Yes -PVCCHOICE|Whether the script should create the PVs/PVCs for Content Analyzer. PVCCHOICE=1 means the script creates the directories. See the note below the table for more information. Default: 1. Required: Yes -SSH_USER|The user the script uses to SSH into the NFS server (NFS_IP) to create the necessary folders. This user must have "sudo" privilege. Not required if you create the PVs/PVCs manually. -NFS_IP|The NFS server IP address. Not required if you create the PVs/PVCs manually. -DATAPVC|The name of the data PVC. If you use a different name, you must change it in the values.yaml. Default: sp-data-pvc. Required: Yes -LOGPVC|The name of your log PVC. If you use a different name, you must change it in the values.yaml. Default: sp-log-pvc. Required: Yes -CONFIGPVC|The name of your config PVC. If you use a different name, you must change it in the values.yaml. Default: sp-config-pvc. Required: Yes -BASE_DB_PWD|The base-64 encoded Content Analyzer base database password. Required: Yes -LDAP|Indicates whether to integrate Content Analyzer with an external LDAP. "y" or "n". Required: Yes -LDAP_PASSWORD|The base-64 encoded password for the LDAP bind user. Required: Yes (if LDAP) -LDAP_URL|The LDAP URL, such as ldap://192.168.10.10 for non-SSL. For SSL, use ldaps://192.168.10.10. Required: Yes (if LDAP) -LDAP_CRT_NAME|The name of the LDAP server's certificate, required when using 'ldaps' in the LDAP_URL. For more information on how to generate the required certificate, refer to the LDAP vendor documentation. - -If you select PVCCHOICE=1, the script will perform the following tasks: -1) Create the following directories on the NFS server: - - /exports/smartpages/<namespace>/{config,data,logs} - - /exports/smartpages/<namespace>/data/{mongo,mongoadmin} - - /exports/smartpages/<namespace>/config/backend - - /exports/smartpages/<namespace>/logs/{backend,frontend,callerapi,processing-extraction,pdfprocess,setup,interprocessing,classifyprocess-classify,ocr-extraction,postprocessing,reanalyze,updatefiledetail,minio,redis,rabbitmq,mongo,mongoadmin,utf8process} -2) Change the owner of all folders to 51000:51001 -3) Append all the worker IPs to the /etc/exports file on the NFS server, as shown in the sample below. 

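For reference, the entries that the script appends to /etc/exports have the following shape. This is a minimal sketch: the worker addresses (10.0.0.11, 10.0.0.12) are placeholders for your own node IPs or host names, and the verification commands are standard NFS utilities rather than part of the script itself.

```console
# cat /etc/exports
/exports/smartpages 10.0.0.11(rw,sync,no_root_squash)
/exports/smartpages 10.0.0.12(rw,sync,no_root_squash)
# exportfs -ra
# showmount -e localhost
Export list for localhost:
/exports/smartpages 10.0.0.11,10.0.0.12
```

After updating the file, the script restarts the NFS service itself (nfs-server on OCP, nfs-kernel-server on ICP), which has the same effect as re-exporting.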
- -Back to [Init_Deployment](init_deployment.md) \ No newline at end of file diff --git a/BACA/docs/init_deployment.md b/BACA/docs/init_deployment.md deleted file mode 100644 index 3a039915..00000000 --- a/BACA/docs/init_deployment.md +++ /dev/null @@ -1,38 +0,0 @@ -## Create PVs, PVCs, certificates and secrets using init_deployment.sh - -To use the init_deployments.sh script to create preqrequisites: -1) Populate the common.sh file with appropriate values based on the instructions in [common.sh values](./common_sh_values.md) -2) Run the init_deployments.sh script to create objects based on common.sh values -3) Verify the objects were created by running the following commands: - Check pvcs - ```console - # kubectl -n sp get pvc - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - sp-config-pvc Bound sp-config-pv-sp 5Gi RWX 4d - sp-data-pvc Bound sp-data-pv-sp 60Gi RWX 4d - sp-log-pvc Bound sp-log-pv-sp 35Gi RWX 4d - ``` - and verify that 3 PVCs were created - - Check secrets - ```console - # kubectl -n sp get secrets - NAME TYPE DATA AGE - baca-basedb Opaque 1 4d - baca-ingress-secret kubernetes.io/tls 2 4d - baca-ldap Opaque 1 4d - baca-minio Opaque 2 4d - baca-mongo Opaque 3 4d - baca-mongo-admin Opaque 3 4d - baca-rabbitmq Opaque 4 4d - baca-redis Opaque 1 4d - baca-secretssp Opaque 14 4d - - ``` - and verify that 9 secrets were created (might only be 7 if not using LDAP or ingress) -4) Run `./generateMemoryValues.sh ` or .`/generateMemoryValues.sh ` - >Note For smaller system (5 worker-nodes or less) where the mongo database pods will be on the same worker node as other pods, use limited option. - - Copy these values for replacement in the values.yaml file if you want to deploy CA using Helm chart, or replacing these values in the ca-deploy.yml file if you want to deploy CA using kubernetes YAML files. - - Back to [Overview](../configuration/README.md) diff --git a/BACA/docs/post-deployment.md b/BACA/docs/post-deployment.md deleted file mode 100644 index 8ab61009..00000000 --- a/BACA/docs/post-deployment.md +++ /dev/null @@ -1,79 +0,0 @@ -## Post Deployment steps for non-ingress setup (Option 1) - -Since OpenShift's router does not support URL rewriting, there are some steps necessary post-deployment to enable accessing -IBM Business Automation Content Analyzer via the node ports exposed by the services. Or if you do not want to use path based ingress on ICP, follow the same steps. - -###### Once deployment is started: - -To find the node port for the backend service, execute: -```console -# kubectl get svc spbackend -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -spbackend NodePort 172.1.1.1 8080:30437/TCP 19h -``` -In the above example, the node port is 30437 -1) Execute: `kubectl edit deploy spfrontend` -2) Look for the BACKEND_PORT environment variable and add the value from the previous step in quotes: - for eaxmple, - `- name: BACKEND_HOST` -   `value: myopenshift1.com` - `- name: BACKEND_PROTOCOL` -   `value: https` - **`- name: BACKEND_PORT` -   `value: "30437"`** -3) Ensure that the BACKEND_PATH and FRONTEND_PATH variables are blank (for example, no values) - for eaxmple, - ` - name: BACKEND_PATH` - `- name: FRONTEND_PATH` - `- name: FRONTEND_HOST` -   `value: myopenshift1.com` - -4) Save the changes. This should cause the spfrontend pods to restart. -5) Look at the service list again and note the node port of spfrontend service (for eaxmple, `kubectl get svc spfrontend`). 
-6) Access Content Analyzer using the URL: `https://:/?tid=&ont= ` - (the tenant ID and ontology are defined when adding the tenant to the base Db2 database) - - -## Post Deployment steps for OpenShift route setup (Option 2) - -You can also deploy IBM Business Automation Content Analyzer using an OpenShift route as the ingress point to expose the frontend and backend services via an externally-reachable, unique hostname such as www.backend.example.com and www.frontend.example.com. -A defined route and the endpoints identified by its service can be consumed by a router to provide named connectivity that allows external clients to reach your applications. - -Run the commands below to create the appropriate routes for the services. - -###### Once deployment is started: - -1) To create a route for the frontend service, execute: - ```console - # oc create route passthrough --insecure-policy=Redirect --service=spfrontend --hostname= - ``` - > **Sample**: oc create route passthrough spfrontend-route --insecure-policy=Redirect --service=spfrontend --hostname=www.ca.frontendsp - -2) To create a route for the backend service, execute: - ```console - # oc create route passthrough --insecure-policy=Redirect --service=spbackend --hostname= - ``` - > **Sample**: oc create route passthrough spbackend-route --insecure-policy=Redirect --service=spbackend --hostname=www.ca.backendsp - > **Note**: A route name is limited to 63 characters, and the router hostname is given a wildcard DNS entry and must be unique. - -3) Add the frontend router hostname and backend router hostname that were specified in steps 1 and 2 above to your client hosts file or DNS server, so that external clients can reach the endpoints by name. Both DNS entries should point to the OpenShift Infra node IP address. - -4) Edit the spfrontend deployment - - Execute: `kubectl edit deploy spfrontend` - - Look for the BACKEND_HOST environment variable and change the value to the hostname of the backend router that you specified in step 2, in quotes: - for example, - **`- name: BACKEND_HOST` -   `value: www.ca.backendsp`** - - Ensure that the BACKEND_PATH and FRONTEND_PATH variables are blank (for example, no values) - for example, - ` - name: BACKEND_PATH` - `- name: FRONTEND_PATH` - - - Save the changes. This should cause the spfrontend pods to restart. - -5) Access the backend endpoint to accept its certificate, using the URL: `https://` (the backend_router_hostname was defined when creating the route for the backend service) - - **Note**: If the content **WORKS** appears on the page, the backend route is working. - -6) Access the frontend endpoint to accept its certificate, using the URL: `https:///?tid=&ont= ` -(the frontend_router_hostname was defined when creating the route for the frontend service; the tenant ID and ontology were defined when adding the tenant to the base Db2 database) diff --git a/BACA/docs/values_yaml_parameters.md b/BACA/docs/values_yaml_parameters.md deleted file mode 100644 index 1b6af90a..00000000 --- a/BACA/docs/values_yaml_parameters.md +++ /dev/null @@ -1,121 +0,0 @@ -## Populating values.yaml with correct values - -1. Copy template.yaml to values.yaml -2. Edit values.yaml and fill in values for the following items. - -Note that anything not documented here typically does not need to be changed. - -##### GLOBAL OPTIONS: -The following variables are used in multiple places.
Perform a global search and replace with the correct information (for example, in vi - `:%s/$REGISTRY_NAME/docker-registry.default.svc:5000\/sp/g`): - -|Tag|Description| -|----|----| -$REGISTRY_NAME |Refers to the name of the local registry where the IBM Business Automation Content Analyzer images have been loaded, in the format `/` (for example, docker-registry.default.svc:5000/sp or mycluster.icp:8500/baca). There are 18 occurrences of this tag in the values.yaml that need to be updated. -$VERSION_TAG |Refers to the version tag of the Docker images loaded into the registry (for example, 1.0.1-gm). There are also 18 occurrences of this value in values.yaml that need to be updated. -$CELERY_REPLICAS |Determines the number of celery pods to start. The recommended value is 1 per worker node. There are 11 occurrences of this value. -$NON_CELERY_REPLICAS |Determines the number of non-celery pods. The recommended value is 1 per worker node. There are 2 occurrences of this value. -$KUBE_NAME_SPACE |The Kubernetes namespace or OpenShift project where Content Analyzer will be deployed. There are 5 occurrences of this value. - -##### RESOURCE LIMIT OPTIONS: -You can define resource limits for each of the pods, based on the available memory on the worker/compute nodes, to ensure better operating efficiency. -Use the sample configuration script, [generateMemoryValues.sh](../configuration/generateMemoryValues.sh), to determine the appropriate values for each of the following based on your environment. -The following values need to be set: - -$CALLERAPI_LIMITED_MEMORY -$SETUP_LIMITED_MEMORY -$OCR_EXTRACTION_LIMITED_MEMORY -$CLASSIFY_LIMITED_MEMORY -$PROCESSING_EXTRACTION_LIMITED_MEMORY -$POST_PROCESS_LIMITED_MEMORY -$INTER_PROCESSING_LIMITED_MEMORY -$PDF_PROCESS_LIMITED_MEMORY -$UTF8_PROCESS_LIMITED_MEMORY -$REANALYZE_LIMITED_MEMORY -$UPDATEFILE_LIMITED_MEMORY -$FRONTEND_LIMITED_MEMORY -$BACKEND_LIMITED_MEMORY -$MINIO_LIMITED_MEMORY -$REDIS_LIMITED_MEMORY -$RABBITMQ_LIMITED_MEMORY -$MONGO_LIMITED_MEMORY -$MONGO_ADMIN_LIMITED_MEMORY -$MONGO_WIREDTIGER_LIMIT #Note: enter this value as a number only, in GB (for example, .3 and not .3Gi or 300Mi) - -##### LDAP INTEGRATION OPTIONS: -If integrating with an LDAP repository for logon, set the following: ->Note: If you are not using LDAP, the ldap: setting under spbackend and spfrontend needs to be set to FALSE and the rest of the values left blank. - -###### spfrontend: -- ldap: TRUE OR FALSE depending on whether you are using LDAP - -###### spbackend: -- ldap: TRUE OR FALSE depending on whether you are using LDAP -- ldapFilter: search filter to find the user. Use ‘{{username}}’ as a substitution variable, for example, (&(cn={{username}})(objectClass=person)) -- ldapDn: DN of the bind user (for example, cn=root) -- ldapURL: URL of the LDAP server (for example, ldap://xx.xx.xx.xx) -- ldapPort: LDAP port (for example, 389) -- ldapBase: LDAP search base -- userName: username of the initial user -- ldapCrtName: if using LDAPS, specify the certificate from the LDAP server -- ldapSelfSignedCert: Y if using a self-signed certificate,
N otherwise. - -##### DB2 Parameters: -Set the following parameters on spbackend to tell IBM Business Automation Content Analyzer how to connect to the Base DB on Db2: -###### DB2 Base DB connection info -- baseDB: name of the base database created on Db2 (for example, CABASEDB) -- baseDBServer: host name of the Db2 server -- baseDBPort: listener port for the Db2 server -- baseDBUser: user to log in to Db2 and access the Base DB - >Note: The password for the above user is stored in the secret baca-basedb, created by the init_deployment.sh script or manually. - -##### DEPLOYMENT SPECIFIC OPTIONS: - -Some deployments require additional settings as described below: - -###### spbackend: -- backendPath: #leave blank for most deployments -- backendPort: 8080 #leave at default for most deployments -- nodeTLSRejectUnauthorized: 0 or 1, depending on whether a self-signed certificate is used for SSL. Generally left at 0 - ->Note: Several parameters in spfrontend depend upon whether you wish to use path-based ingress (for ICP only) or simply access the app via exposed node ports. If you are not using ingress on ICP, or you are using OpenShift, be sure there are no values for backendPath and frontendPath; the value for backendPort will need to be added post-deployment, see [Post Deployment Steps](post-deployment.md) -###### spfrontend: -- backendHost: domain used in the URL to access the backend. Usually the same as BXDOMAINNAME, which is usually the name/address of the proxy or infra node. If using ingress with a non-default port (80/443), then include the port in the hostname (for example, my.domain.com:444) -- backendPort: for a non-ingress solution, enter the node port of the spbackend service; otherwise leave blank -- backendPath: if using path-based ingress, specify the path (for example, in http://my.domain.com/backendsp/ the path would be 'backendsp'). Note that port and path are mutually exclusive. Only one should be specified. -- frontendHost: domain used in the URL to access the frontend. Similar to backendHost -- frontendPath: if using path-based ingress, specify the path for the frontend -- nodeTLSRejectUnauthorized: 0 or 1, depending on whether a self-signed certificate is used for SSL -- sso: 0 or 1, depending on whether you need to authenticate through another portal (for example, in IBM Cloud) -- bxDomainName: domain name used to access the frontend/backend - -###### ingress: -- enabled: TRUE OR FALSE to indicate that path-based ingress should be used on ICP (OCP's router does not support URL rewriting and consequently will not work with path-based ingress) -- $HOST_NAME – if ingress is enabled, specify the host name used for access - -###### nodeSelector: -label applied to nodes targeted to run celery workers. The default value created by init_deployment.sh is `celery: baca` - -for example, -nodeSelector: -  celerysp: baca - -###### global: -  configs: -   - claimname: enter the name of the PVC for config files created earlier. Default sp-config-pvc -  logs: -   - claimname: enter the name of the PVC for log files created earlier. Default sp-log-pvc -    - logLevel: blank or debug to enable additional logging -  data: -   - claimname: enter the name of the PVC for data files created earlier.
Default sp-data-pvc -  celery: -   - processTimeout: 300 #timeout for OCR processing -  namespace: -   - name: #the Kubernetes namespace where IBM Business Automation Content Analyzer is to be deployed -  - sslValidate: false #true or false depending on whether you are using a self-signed SSL certificate or not (false=self-signed) -  mongo: -   nodeSelector: -     - mongosp: baca #label applied to nodes targeted to run the mongo pod. Default value is "mongo: baca" -  mongoadmin: -   nodeSelector: -     - mongo-adminsp: baca #label applied to nodes targeted to run the mongo-admin pod. Default value is "mongo-admin: baca" diff --git a/BACA/helm-charts/README.md b/BACA/helm-charts/README.md deleted file mode 100644 index e195b660..00000000 --- a/BACA/helm-charts/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# Instructions to deploy IBM Business Automation Content Analyzer with Helm charts - -- Extract [ibm-dba-baca-prod-1.2.0.tgz](./ibm-dba-baca-prod-1.2.0.tgz) for a non-HA deployment and reference the readme in ibm-dba-baca-prod/README.md after extraction. - -- Extract [ibm-dba-baca-prod-1.2.0_ha.tgz](./ibm-dba-baca-prod-1.2.0_ha.tgz) for an HA deployment and reference the readme in ibm-dba-baca-prod/README.md after extraction. diff --git a/BACA/helm-charts/ibm-dba-baca-prod-1.2.0.tgz b/BACA/helm-charts/ibm-dba-baca-prod-1.2.0.tgz deleted file mode 100644 index 00c78476..00000000 Binary files a/BACA/helm-charts/ibm-dba-baca-prod-1.2.0.tgz and /dev/null differ diff --git a/BACA/helm-charts/ibm-dba-baca-prod-1.2.0_ha.tgz b/BACA/helm-charts/ibm-dba-baca-prod-1.2.0_ha.tgz deleted file mode 100644 index e752a3c5..00000000 Binary files a/BACA/helm-charts/ibm-dba-baca-prod-1.2.0_ha.tgz and /dev/null differ diff --git a/BACA/k8s-yaml/README.md b/BACA/k8s-yaml/README.md deleted file mode 100644 index 6710b6ac..00000000 --- a/BACA/k8s-yaml/README.md +++ /dev/null @@ -1,115 +0,0 @@ -# Deploying with Kubernetes YAML - -If you prefer to use a simpler deployment process that uses a native Kubernetes authorization mechanism (RBAC) instead of Helm and Tiller, use the Helm command line interface (CLI) to generate a Kubernetes manifest. If you choose to use Kubernetes YAML, you cannot use certain capabilities of Helm to manage your deployment. - -Before you install, make sure that you have prepared your environment. - -## Prepare environment - -### Prerequisites -1. If the Helm client is not installed in your Kubernetes cluster, install [Helm 2.11.0](/~https://github.com/helm/helm/releases/tag/v2.11.0). - - -### Step 1 - Create Content Analyzer Base DB -1. Copy the DB2 folder from /~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BACA/configuration/DB2 to your IBM DB2 server. -2. cd to the DB2 folder and run the ./CreateBaseDB.sh script (for example, run it as db2inst1, which has 'sudo' privileges). -3. As prompted, enter the following data: - - Enter the name of the IBM® Business Automation Content Analyzer Base database – (enter a unique name of 8 characters or less and no special characters). - - Enter the name of database user – (enter a database user name that has full permissions to the base database). This can be a new or an existing Db2 user. - - Enter the password for the user – (enter a password) – each time when prompted. If this is an existing user, this prompt is skipped. - -### Step 2 - Create the Content Analyzer Tenant database -1. Still in the DB2 folder, run the ./AddTenant.sh script on the Db2 server. -For more information, see Creating Content Analyzer Tenant database. -2.
As prompted, enter the following parameters: - - Enter the tenant ID – (an alphanumeric value that is used by the user to reference the database) - - Enter the name of the IBM® Business Automation Content Analyzer tenant database - (an alphanumeric value for the actual database name in Db2) - - Enter the host/IP of the database server – (the IP address of the database server) - - Enter the port of the database server – Press Enter to accept the default of 50000 (or enter the port number if a different port is needed) - - Do you want this script to create a database user – y (for yes) - - Enter the name of database user – (this is the tenant database user - enter an alphanumeric user name with no special characters) - - Enter the password for the user – (enter an alphanumeric password each time when prompted) - - Enter the tenant ontology name – Press Enter to accept the default (or enter a name to reference the ontology by, if desired) - - Enter the name of the Base Business Automation Content Analyzer database – (enter the database name given when you created the base database) - - Enter the name of the database user for the Base Business Automation Content Analyzer database – (enter the base user name given when you created the base database) - - Enter the company name – (enter your company name. This parameter and the remaining values are used to set up the initial user in Business Automation Content Analyzer) - - Enter the first name - (enter your first name) - - Enter the last name - (enter your last name) - - Enter a valid email address - (enter your email address) - - Enter the login name – (if you use LDAP authentication, enter your user name as it appears in the LDAP server) - - Would you like to continue – y (for yes) - - Save the tenant ID and ontology name for the later steps. - -### Step 3 - Download the configuration files -1. Download all the files and folders except the DB2 folder from /~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BACA/configuration to where you plan to install Content Analyzer. For example, to a system that can be connected to IBM Cloud Private. - -### Step 4 - Edit common.sh -1. Edit and populate the /configuration/common.sh that was downloaded in step 3 with the correct values from the [Prerequisite install parameters table](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/topics/ref_baca_common_params.html). (Because the Helm server is not being used, be sure that USING_HELM is set to N.) - -### Step 5 - Create prerequisite resources for IBM Business Automation Content Analyzer -1. Run ./init_deployment.sh from the `configuration` folder that was downloaded in step 3. - - The required persistent volumes, volume claims, and secrets are created during the preparation of the environment. - -### Step 6 - Update values.yaml -1. Download the Helm chart to the master node from /~https://github.com/icp4a/cert-kubernetes/blob/19.0.2/BACA/helm-charts/ibm-dba-baca-prod-1.2.0.tgz -2. Extract the Helm chart from ibm-dba-baca-prod-1.2.0.tgz. -3. Go to the ibm-dba-baca-prod/ibm_cloud_pak/pak_extensions directory and copy template.yaml to ibm-dba-baca-prod/values.yaml -4. Edit the values.yaml file and complete the values described in the [Helm Chart configuration parameter section](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/topics/ref_baca_globaloptions_params.html). - -Note that anything not documented does not need to be changed. A consolidated command-line sketch of steps 1 to 3 is shown below.
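For example, steps 1 to 3 of Step 6 can be run together on the master node as follows. This is a minimal sketch: the archive name and extraction paths come from the instructions above, but downloading with `wget` against the raw GitHub URL is an assumption; use whatever transfer method you prefer.

```
# Download the chart archive, extract it, and seed values.yaml from the packaged template
wget /~https://github.com/icp4a/cert-kubernetes/raw/19.0.2/BACA/helm-charts/ibm-dba-baca-prod-1.2.0.tgz
tar xf ibm-dba-baca-prod-1.2.0.tgz
cp ibm-dba-baca-prod/ibm_cloud_pak/pak_extensions/template.yaml ibm-dba-baca-prod/values.yaml
```

Then edit ibm-dba-baca-prod/values.yaml as described in step 4.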
- -### Step 7 - Download IBM Cloud Pak for Automation V19.0.2 and load the IBM Business Automation Content Analyzer base image - -1. Follow the instructions in https://www.ibm.com/support/docview.wss?uid=ibm10958567 to download the CC3SEEN package to a server that is connected to your Docker registry. -2. Download the [loadimages.sh](/~https://github.com/icp4a/cert-kubernetes/blob/19.0.2/scripts/loadimages.sh) script from GitHub. -3. Log in to the specified Docker registry with the docker login command. This command depends on the environment that you have. -4. Run the loadimages.sh script to load the images into your Docker registry. Specify the two mandatory parameters in the command line. - - Note: The docker-registry value depends on the platform that you are using. - - ``` - -p PPA archive files location or archive filename - -r Target Docker registry and namespace - -l Optional: Target a local registry - ``` - The following example shows the input values in the command line. - ``` - # scripts/loadimages.sh -p /Downloads/PPA/ImageArchive.tgz -r /demo-project - ``` -### Step 8 - Generate yaml files and deploy -1. Create a chart YAML template file with the configuration parameters defined in values.yaml by using the following command in the ibm-dba-baca-prod directory. The `--name` argument sets the name of the release to install. - - ```console - $ helm template . -f values.yaml \ - --name celery \ - > generated-k8s-templates.yaml - ``` - -2. Install `celery` by using the following command. - - ```console - $ kubectl -n apply -f generated-k8s-templates.yaml - ``` - -3. Run the following command to see the status of the pods. Wait until all pods are running and ready. - - ```$ kubectl -n get pods``` - - Due to the configuration of the readiness probes, after the pods start, it might take 10 or more minutes before the pods enter a ready state. - -> **Reminder**: After you deploy, return to the instructions for [Completing post deployment tasks for IBM Business Automation Content Analyzer](../docs/post-deployment.md) to review the document for further configuration. - -## Uninstalling a Kubernetes release of IBM Business Automation Content Analyzer - -To uninstall and delete the IBM Business Automation Content Analyzer release, use the following command: - -```console -$ kubectl delete -f generated-k8s-templates.yaml -``` - -The command removes all the Kubernetes components associated with the release, except any Persistent Volume Claims (PVCs). This is the default behavior of Kubernetes, and ensures that valuable data is not deleted. To delete the persisted data of the release, you can delete the PVC using the following command: - -```console -$ kubectl delete pvc my-baca-prod-release-baca-pvclaim -``` - -In the configuration folder, the delete_ContentAnalyzer.sh script can also be used to clean up the PVs, PVCs, secrets, and directories created by the init_deployment.sh script. Simply run delete_ContentAnalyzer.sh from the master node where the configuration directory was copied to. diff --git a/BACA/platform/README_Eval_ROKS.md b/BACA/platform/README_Eval_ROKS.md deleted file mode 100644 index 9f0a3bb2..00000000 --- a/BACA/platform/README_Eval_ROKS.md +++ /dev/null @@ -1,173 +0,0 @@ -# Deploying BACA on Red Hat OpenShift on IBM Cloud - -Before you deploy, you must configure your IBM Public Cloud environment and create an OpenShift cluster. Use the following information to configure your environment and deploy the images. - -## Step 1: Prepare your client and environment on IBM Cloud - -1.
Create an account on [IBM Cloud](https://cloud.ibm.com/kubernetes/registry/main/start). -2. Create a Cluster. - From the [IBM Cloud Overview page](https://cloud.ibm.com/kubernetes/overview), in the OpenShift Cluster tile, click **Create Cluster**. - A cluster comes with attached storage, so you do not need to create persistent volumes. -3. Create a Project. - Select Kubernetes, Clusters. - Select the name of your newly created cluster, then select OpenShift Web Console. - Select Create Project. - For the name and display name, enter your project name. -4. Set up a client workstation. - Install the [IBM Cloud CLI](https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install). - Install the [OpenShift Container Platform CLI](https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html#cli-reference-get-started-cli) to manage your applications and to interact with the system. -5. Install the Container Registry plug-in: - `ibmcloud plugin install container-registry -r Bluemix` -6. On your client workstation, download the following components: - * The ICP4A BACA PPA package from [Passport Advantage](https://spcn.w3cloud.ibm.com/software/spcn/content/Y107038W39561F66.html). - * The BACA installation folder from [GitHub](/~https://github.com/icp4a/cert-kubernetes/tree/19.0.1/BACA). - -## Step 2: Push the images to the IBM Cloud Container Registry - -Push the downloaded images to your private registry. - -1. Log in to your IBM Cloud account with `ibmcloud login -a https://cloud.ibm.com --sso`. - When asked to open the URL in the default browser, select Y. In some cases, your client may not be able to open the browser automatically, in which case you will need to copy the provided URL and open the browser manually. - Paste the One Time Code into the client. Then, as prompted, enter the following: - * Select an account – Enter the number for the Cloud account holding the baca project. - * Select a region – Enter the number for the region where the managed instance is located. -2. Create a namespace. - `ibmcloud cr namespace-add ` -3. Log your local Docker daemon into the IBM Cloud Container Registry. - `ibmcloud cr login` -4. Push and tag the images to the cluster registry: - `./loadimages.sh -p -r us.icr.io/` -5. Verify that your images are in your private registry. - `ibmcloud cr image-list` - -## Step 3: Create the PVCs - -1. Get a list of your storage classes and select one of the choices to be your storage class. - `oc get storageclasses` - Log in to the OpenShift Web Console and select Storage. -2. For each of the three PVCs, click Create PVC and enter the following values. - * Storage Class – - * Access Mode – Shared Access (RWX) - * Name and Size (typical name is sp--pvc- - * data pvc 60GiB - * log pvc 35GiB - * config pvc 20GiB - -## Step 4: Create a Secret ID - -1. Log in to IBM Cloud. -2. Select Manage toward the top right and click Access (IAM). -3. Select Service IDs and click Create. -4. Enter a name and description, and click Create. -5. Select the API keys tab and click Create. -6. Enter the same name and description and click Create. -7. Copy or download the API key. You must save it now. - -## Step 5: Configure the DB2 databases -BACA requires a dedicated DB2 server. - -1. Connect to the database server as a user with administrator-level access to DB2. -2. Copy the DB2 folder from your client installation folder into a work folder that you create on the DB2 server. -3. Create the base database. - `./CreateBaseDB.sh` -4.
As prompted, enter the following: - * Enter the name of the BACA Base database – (enter a unique name of 8 characters or less and no special characters) - * Enter the name of database user – (enter a database user name) – this can be a new or existing DB2 user - * Enter the password for the user – (enter a password) – each time when prompted. If this is an existing user, this prompt will be skipped. -5. Add a tenant. - `./AddTenant.sh` -6. As prompted, enter the following: - * Enter the tenanttype – 0 (for Enterprise) - * Enter the tenant ID – (enter a unique alphanumeric value) - * Enter the name of the BACA tenant database – (recommend using the tenant id, but can be any unique name of 8 characters or less and no special characters) - * Enter the host/IP of the database server – (enter the IP address of the database server) - * Enter the port of the database server – Press Enter to accept default of 50000 - * Do you want this script to create a database user – y (for yes) - * Please enter the name of database user – (enter an alphanumeric username with no special characters) - * Enter the password for the user – (enter an alphanumeric password each time when prompted) - * Enter the tenant ontology name – Press Enter to accept default, or if desired, enter the name you will reference the ontology by. - * Enter the name of the Base BACA database – (enter the database name entered when creating the base database) - * Enter the name of the database user for the Base BACA database – (enter the database user entered when creating the base database) -The remaining entries are for setting up the initial user. - * Please enter the company name – (enter your company name) - * Please enter the first name - (enter your first name) - * Please enter the last name - (enter your last name) - * Please enter a valid email address - (enter your IBM email address) - * Please enter the login name – (if using LDAP, enter your LDAP name – if not using LDAP, enter the name you prefer to use to login with) - * Would you like to continue – y (for yes) - -## Step 6: Run the BACA predeployment - -1. In the configuration folder, copy common_OCP_template.sh to common.sh -2. Edit common.sh following the [Knowledge Center Reference](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/topics/ref_baca_common_params.html). -When editing common.sh, the following are differences specific to OCPoC. - * OCP_VERSION=3.1.1 - * ICP references in documentation are OCP in common.sh - * PVCCHOICE=2 (PVCs previously created) -3. Run the predeployment script. - `./init_deployments.sh` - -## Step 7: Generate memory values -An OCPoC install with multiple products may require a systems designer to determine how memory will be configured. However, for guidance getting a starting point on a basic system, do the following: - -1. Change to the configuration folder. -2. Generate the memory values for a small development system - `./generateMemoryValues.sh limited` - --- or for a larger system with six or more nodes --- - `./generateMemoryValues.sh distributed` -3. Note these values as they will be used in the next step. - -## Step 8: Deploy the Helm Chart - -1. Change to the SmartPages-Helmchart folder. -2. Extract the helm chart. - `tar xf ibm-dba-baca-prod-1.0.0.tgz` -3. Change to the stable/ibm-dba-baca-prod folder. -4. 
Edit values.yaml, changing the following values wherever they appear, using the [GitHub values.yaml Reference](/~https://github.com/icp4a/cert-kubernetes/blob/19.0.1/BACA/docs/values_yaml_parameters.md) -5. When editing values.yaml, for OCPoC under global add the secret ID so the section looks as follows: - ``` - global: - image: - pullSecrets: - - (secret ID name) - ``` -6. Install the helm chart. - `helm install . --name celery -f values.yaml --namespace --tiller-namespace tiller` - -## Step 9: Create an NGINX Pod -These steps create a pod called folder-creation-baca and its purpose is to provide the ability to add the folder structure required for logging. - -1. Change to the platforms folder. -2. Edit the nginx_folders.yaml if needed. -3. Create the pod. - `kubectl apply -f nginx_folders.yaml` -4. Log in to the pod. - `kubectl exec -ti folder-creation-baca bash` -5. Create folders used by BACA. - ``` - cd /logs - mkdir -p {backend,frontend,callerapi,processing-extraction,pdfprocess,setup,interprocessing,classifyprocess-classify,ocr-extraction,postprocessing,reanalyze,updatefiledetail,spfrontend,minio,redis,rabbitmq,mongo,mongoadmin,utf8process} - cd /data - mkdir -p {mongo,mongoadmin,redis,rabbitmq,minio} - cd /config - mkdir -p /config/backend - ``` -6. Set folder permissions to 51000:51001. - ``` - cd / - chown -Rf 51000:51001 /logs - chown -Rf 51000:51001 /data - chown -Rf 51000:51001 /config - ``` -7. Exit the pod. - `exit` - -## Step 10: Configure Routing - -1. Login to the OpenShift Web Console and in the dropdown in the top banner, select Cluster Console. -2. Note the URL, dropping https://console from the front. This will form the second part of the routing URL. -Create pass-through routing. - ``` - oc create route passthrough frontend --insecure-policy=Redirect --service=spfrontend --hostname=frontend. - oc create route passthrough backend --insecure-policy=Redirect --service=spbackend --hostname=backend. - ``` diff --git a/BACA/platform/nginx_folders.yaml b/BACA/platform/nginx_folders.yaml deleted file mode 100644 index c7d50545..00000000 --- a/BACA/platform/nginx_folders.yaml +++ /dev/null @@ -1,30 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: folder-creation-baca - labels: - app: folder-creation-baca - namespace: baca -spec: - volumes: - - name: sp-config-pvc-baca - persistentVolumeClaim: - claimName: sp-config-pvc-baca - - name: sp-log-pvc-baca - persistentVolumeClaim: - claimName: sp-log-pvc-baca - - name: sp-data-pvc-baca - persistentVolumeClaim: - claimName: sp-data-pvc-baca - containers: - - name: folder-creation-baca - image: nginx:latest - ports: - - containerPort: 8080 - volumeMounts: - - name: sp-config-pvc-baca - mountPath: /config - - name: sp-log-pvc-baca - mountPath: /logs - - name: sp-data-pvc-baca - mountPath: /data \ No newline at end of file diff --git a/BAI/README.md b/BAI/README.md deleted file mode 100644 index 04654fc9..00000000 --- a/BAI/README.md +++ /dev/null @@ -1,693 +0,0 @@ -# Installing IBM Business Automation Insights on Certified Kubernetes - - -> **NOTE**: This procedure covers the deployment on certified Kubernetes. To deploy on IBM Cloud Private 3.1.2, see [Getting started with IBM Business Automation Insights](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.bai/topics/tut_getting_started.html). 
- -## Overview - -IBM Business Automation Insights is a platform-level component that provides visualization insights to business owners and feeds a data lake to infuse artificial intelligence into IBM Digital Business Automation. - -Based on state-of-the-art open source technologies, IBM Business Automation Insights captures all events that are generated by the operational systems implemented with the Digital Business Automation products, aggregates these events into business-relevant KPIs, and presents them in meaningful dashboards for lines of business to have a real-time view on their business operations. - -### Entities - -IBM Business Automation Insights processes and produces the following entities: - -- Raw events: Native events that are ingested and processed by IBM Business Automation Insights. - -- Time series: Simplified, flattened versions of raw events. - -- Summaries: Aggregations of time series. For example, each process instance, activity instance, or case instance has a summary entity. Summaries describe the current state of the process, activity, or case instance, and compute their duration. Summaries are complete when the process, activity, or case is completed. - -### Architecture diagram - - - -### Deployed artifacts - -When you install IBM Business Automation Insights, the following main elements are deployed: - -- A `bai-admin` pod in charge of the IBM Business Automation Insights REST API. -- An Apache Flink cluster (`bai-jobmanager` and `bai-taskmanager`) hosting the IBM Business Automation Insights event processing. -- Optionally, an Elasticsearch and Kibana cluster to gather data from the event processing. - -If you want to use an HDFS data lake, you must install it separately. - -## Requirements - -### Kubernetes cluster - -IBM Business Automation Insights requires a certified Kubernetes platform -(see [support statement](../README.md#support-statement)). - -### Helm command line interface - -To install Helm, follow these [instructions](https://docs.helm.sh/using_helm/#installing-helm). - -### Apache Kafka - -An Apache Kafka cluster must be up and running before you deploy IBM Business Automation Insights. -The Apache Kafka connection must be configured in the Helm Chart values. - -For a quick start, try [Confluent Apache Kafka Helm Chart](/~https://github.com/confluentinc/cp-helm-charts). - -To enable secure communications with Confluent Kafka by using the SASL security protocol, you must modify the values.yaml file of this chart. - -```yaml -kafka: - username: "kafka" - password: "kafka-password" - bootstrapServers: "kafka_ip_or_hostname:port" - securityProtocol: "SASL_SSL" - serverCertificate: "" -``` - -Define the username and password supplied in `kafka.username` and `kafka.password` on the Kafka server side in appropriate JAAS configuration files, such as `kafka_jaas.conf` and `zookeeper_jaas.conf`. - -- `kafka_jaas.conf` - -``` -KafkaServer { - org.apache.kafka.common.security.plain.PlainLoginModule required - username="kafka" - password="kafka-password" - user_kafka="kafka-password"; -}; - -Client { - org.apache.zookeeper.server.auth.DigestLoginModule required - username="admin" - password="admin-secret"; -}; -``` - -- `zookeeper_jaas.conf` - -``` -Server { - org.apache.zookeeper.server.auth.DigestLoginModule required - user_super="admin-secret" - user_admin="admin-secret"; -}; -``` - -To use a JAAS configuration and the SASL protocol, pass the following system properties to the Kafka and Zookeeper JVMs. 
- -``` --Djava.security.auth.login.config= --Djava.security.auth.login.config= --Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider --Dzookeeper.requireClientAuthScheme=sasl -``` -To do so, you can set the `KAFKA_OPTS` environment variable and assign it a string that contains these properties. - -To ensure SSL encryption between the Kafka client and the Kafka brokers, the kafka.serverCertificate parameter must contain the base64-encoded CA certificate that is used to sign each certificate of the Kafka brokers. - - -## Before you begin - -### Connect to the cluster - -1. Log in to your Kubernetes cluster. - - For example, on OpenShift: - ``` - oc login https://:8443 - ``` - -2. Create a namespace where to deploy IBM Business Automation Insights: - - ```sh - kubectl create namespace - ``` - -### Upload the images - -You need to upload the IBM Business Automation Insights images to the docker registry of the Kubernetes cluster. See: [Download a product package from PPA and load the images](../README.md#download-ppa-and-load-images). - -### Configure the storage - -IBM Business Automation Insights requires a certain number of persistent volumes. - -Apache Flink needs a persistent volume to store its internal state and to support fault tolerance and high availability. - -Choose between dynamic provisioning or creating the persistent volumes manually. - -#### Dynamic provisioning - -If you use dynamic provisioning, make sure to use a `StorageClass` with a `reclaimPolicy` set to `Retain`. Otherwise, you might lose your data when -you upgrade or update IBM Business Automation Insights because a different persistent volume might be allocated. - -Unless you intend to use the default `StorageClass` of your Kubernetes environment, you must set the following configuration properties with the `StorageClass` name to use: `flinkPv.storageClassName`, `ibm-dba-ek.data.storage.storageClass`, and `ibm-dba-ek.elasticsearch.data.snapshotStorage.storageClassName`. - -You then need to set `persistence.useDynamicProvisioning`,`ibm-dba-ek.elasticsearch.data.storage.useDynamicProvisioning`, and `ibm-dba-ek.elasticsearch.data.snapshotStorage.useDynamicProvisioning` to `true` when you deploy IBM Business Automation Insights. - -`ibm-dba-ek` settings are required only if you install embedded Elasticsearch. - -#### Manual provisioning - -In the current section, `` is a path that is NFS-shared by the NFS server with IP equal to ``. -You must ensure that your Kubernetes nodes have a very fast access to the NFS shared folders. -Usually, the NFS share is set up on the master node of your Kubernetes cluster, thus `` equals ``. - -If dynamic provisioning is not enabled on the Kubernetes cluster or if you prefer to control the provisioning, you must create persistent volumes from scratch. - -1. Create a persistent volume for Apache Flink. - - It is recommended to apply the `Retain` reclaim policy to make sure that data is not lost when you install a new release of IBM Business Automation Insights. -Use the following YAML file to create a persistent volume. Replace the placeholders with the values that are appropriate for your environment. - -```yaml -apiVersion: v1 -kind: PersistentVolume -metadata: - name: ibm-bai-pv -spec: - accessModes: - - ReadWriteMany - capacity: - storage: - nfs: - path: /ibm-bai-pv - server: - persistentVolumeReclaimPolicy: Retain - claimRef: - namespace: - name: -``` - -> **Note**: The `claimRef` section is optional. 
However, you must set it in a production environment if you want to make sure that your release always uses the same volume and if you do not want to lose your data. If you add the `claimRef` section, you must also set the namespace and the name of the persistent volume claim, as in step 2. - -2. *Optional*: Create a persistent volume claim for Apache Flink. - - Use the following YAML file to create a persistent volume claim. Replace the placeholders with the appropriate values. -The value of `` must match the name provided in the `claimRef` section of the persistent volume. -The `` value must be smaller than or equal to the value of the persistent volume storage capacity. -The persistent volume claim must provide enough space to fit the capacity set at installation time. The default capacity is `20Gi`. - -```yaml -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: - namespace: -spec: - storageClassName: "" - accessModes: - - ReadWriteMany - resources: - requests: - storage: -``` - -3. If you use embedded Elasticsearch, deployed together with IBM Business Automation Insights, rather than your own Elasticsearch, create the persistent volumes for Elasticsearch. - - It is recommended to apply the `Retain` reclaim policy to make sure that data is not lost when you install a new release of IBM Business Automation Insights. -The following YAML creates persistent volumes and sets the reclaim policy for two data nodes and a master node. - -```yaml -apiVersion: v1 -kind: PersistentVolume -metadata: - name: ibm-bai-ek-pv-0 -spec: - accessModes: - - ReadWriteOnce - capacity: - storage: 10Gi - nfs: - path: /ibm-bai-ek-pv-0 - server: - persistentVolumeReclaimPolicy: Retain ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: ibm-bai-ek-pv-1 -spec: - accessModes: - - ReadWriteOnce - capacity: - storage: 10Gi - nfs: - path: /ibm-bai-ek-pv-1 - server: - persistentVolumeReclaimPolicy: Retain ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: ibm-bai-ek-pv-2 -spec: - accessModes: - - ReadWriteOnce - capacity: - storage: 10Gi - nfs: - path: /ibm-bai-ek-pv-2 - server: - persistentVolumeReclaimPolicy: Retain -``` - -4. *Optional*: If you want to refine the binding of the persistent volumes, provide a `storageClassName` value in the persistent volume .yaml file and then reference it when you configure the IBM Business Automation Insights installation. - - Modify the sample [pv.yaml](./configuration/pv.yaml) and deploy it as follows: - - ```sh - kubectl apply -f pv.yaml - ``` - -#### Persistent volume access rights - -The access rights to the persistent volumes are as follows: -- user `9999` and group `9999` must have read and write access to the Apache Flink persistent volume. - -- user `1000` and group `1000` must have read and write access to the Elasticsearch persistent volumes. - -### Configure the image policy - -- If you use the Docker registry of the Kubernetes cluster, the default image policy, `default-dockercfg-*`, is applied. Check it out by running the following command: -```sh -kubectl get secrets -n | grep kubernetes.io/dockercfg -``` - -- If you use a Docker registry that is external to the Kubernetes cluster, you must define an image policy to be able to access the Docker registry: - -```sh -kubectl create secret docker-registry --docker-server= --docker-username= --docker-password= --docker-email= -n -``` - -## PodSecurityPolicy Requirements - -Before installation, this chart requires a PodSecurityPolicy resource to be bound to the target namespace. 
-The predefined PodSecurityPolicy resource named [`ibm-anyuid-psp`](https://ibm.biz/cpkspec-psp) has been verified for this chart. - -You must also set up the proper PodSecurityPolicy, Role, ServiceAccount, and RoleBinding Kubernetes resources to allow -the pods to run privileged containers. To achieve this, you must set up a custom PodSecurityPolicy definition. - -1- Adapt the following YAML content to reference your Kubernetes namespace and Business Automation Insights Helm release name, and save it to a file named `bai-psp.yml`, which sets up the custom PodSecurityPolicy definition. -```yaml -apiVersion: policy/v1beta1 -kind: PodSecurityPolicy -metadata: - annotations: - kubernetes.io/description: "This policy is required to allow ibm-dba-ek pods running Elasticsearch to use privileged containers." - name: -bai-psp -spec: - privileged: true - runAsUser: - rule: RunAsAny - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - fsGroup: - rule: RunAsAny - volumes: - - '*' ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: -bai-role - namespace: -rules: -- apiGroups: - - extensions - resourceNames: - - -bai-psp - resources: - - podsecuritypolicies - verbs: - - use ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: -bai-psp-sa ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: -bai-rolebinding - namespace: -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: -bai-role -subjects: -- kind: ServiceAccount - name: -bai-psp-sa - namespace: -``` -2- Execute: -```bash -kubectl create -f bai-psp.yml -n -``` - -This command allows the pods to run the sysctl commands that are needed at initialization. - - -## Red Hat OpenShift SecurityContextConstraints Requirements - -If you are installing the chart on Red Hat OpenShift or OKD, the [ibm-anyuid-scc](https://ibm.biz/cpkscc-spec) SecurityContextConstraint is required to install the chart. - -If you are planning to install Elasticsearch and Kibana as part of IBM Business Automation Insights on Red Hat OpenShift or OKD, you must also create a service account that has the [ibm-privileged-scc](https://ibm.biz/cpkscc-spec) SecurityContextConstraint to allow running privileged containers: -``` -$ oc create serviceaccount -bai-psp-sa -$ oc adm policy add-scc-to-user ibm-privileged-scc -z -bai-psp-sa -``` - -If you cannot or do not want to allow running privileged containers, you can still install IBM Business Automation Insights, but you must configure it to use an external Elasticsearch (in Helm values, set `elasticsearch.install: false`). - -## Installing - -There are two ways to deploy IBM Business Automation Insights to the Kubernetes cluster: - -### Install IBM Business Automation Insights by using the Helm chart and Tiller - -Refer to [Helm instructions](./helm-charts/README.md). - -### Install IBM Business Automation Insights by using Kubernetes YAML - -Refer to [Kubernetes instructions](./k8s-yaml/README.md). - -## Post-installation steps - -IBM Business Automation Insights is correctly deployed when all the jobs are completed, all the pods are running and ready, and all the services are reachable.
- -- Monitor the status of the jobs and check that all of them are marked as successful by executing the following command: - ```sh -kubectl get jobs -n -``` -- Monitor the status of the pods and check that all of them are in `Running` mode and with all their containers `Ready` (for example, 2/2) by executing the following command: - ```sh -kubectl get pods -n -``` -- Verify that all the services are reachable by accessing the corresponding URLs. -When all the services have the default value for `serviceType`, that is, NodePort, the URLs are as follows: - ```sh -export NODE_IP=$(kubectl cluster-info | grep "master" | awk 'match($0, /([0-9]{1,3}\.){3}[0-9]{1,3}/) { print substr( $0, RSTART, RLENGTH )}') -export ADMIN_NODE_PORT=$(kubectl get svc -n "bai-bai-admin-service" -o 'jsonpath={.spec.ports[?(@.targetPort=="admin-rest")].nodePort}') -export ES_NODE_PORT=$(kubectl get svc -n "bai-ibm-dba-ek-client" -o 'jsonpath={.spec.ports[?(@.targetPort=="es-rest")].nodePort}') -export KIBANA_NODE_PORT=$(kubectl get svc -n "bai-ibm-dba-ek-kibana" -o 'jsonpath={.spec.ports[?(@.targetPort=="kibana-ui")].nodePort}') -echo "Admin REST API: https://$NODE_IP:$ADMIN_NODE_PORT" -echo "Elasticsearch REST API: https://$NODE_IP:$ES_NODE_PORT" -echo "Kibana: https://$NODE_IP:$KIBANA_NODE_PORT" -``` -Use the following default login/passwords to authenticate with Elasticsearch REST API and with Kibana: -- demo/demo -- admin/passw0rd - -> **Note:** To check the Admin REST API status, use `https://$NODE_IP:$ADMIN_NODE_PORT/api/health`. - -## Updating - -Depending on the updates that you plan, you might have to deploy new versions of some batch jobs. Because completed jobs cannot be updated, you must delete them before performing the update. - -### Prerequisites - -* Delete the batch jobs related to processing jobs if you plan to update parameters that affect the execution of processing jobs. These parameters include: Apache Flink settings (including RocksDB settings), Kafka configuration options, Elasticsearch general settings, and Kerberos authentication settings. -That is, properties in the values.yaml file that start with `flink.*`, `bpmn.*`, `ingestion.*`, `icm.*`, `odm.*`, `kafka.*`, `settings.*`, `kerberos.*`, or `elasticsearch.*`. -See the full list of properties in the [Configuration parameters](#configuration-parameters) section below. - - * Retrieve the job names: `kubectl get jobs --selector=release= -n | grep -v setup` - * Delete each job in the list: `kubectl delete job -n ` - -* Delete the bai-setup job if you update the `elasticsearch.url` property to change the Elasticsearch instance used by your Business Automation Insights system. - - `kubectl delete job -bai-setup -n ` - -* Delete all the batch jobs if you plan to update the docker images. - -### Update IBM Business Automation Insights by using Helm - -Refer to [Helm instructions](./helm-charts/README.md#update-the-helm-chart). - -### Update IBM Business Automation Insights by using Kubernetes - -Refer to [Kubernetes instructions](./k8s-yaml/README.md#update-ibm-business-automation-insights). - -## Configuration parameters - -Learn more about IBM Business Automation Insights and its configuration in the [Knowledge Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.bai/topics/con_bai_overview.html). 
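Before the parameter tables, here is a minimal sketch of how a few of these values can be overridden at deployment time when you install with the Helm chart (Helm 2 syntax, consistent with the Tiller-based instructions referenced above). The chart directory name `ibm-business-automation-insights` and the `<namespace>` and `<storage-class>` placeholders are illustrative assumptions; the release name `bai` matches the service-name examples in the post-installation steps.

```console
$ helm install ./ibm-business-automation-insights \
    --name bai \
    --namespace <namespace> \
    --set persistence.useDynamicProvisioning=true \
    --set flinkPv.storageClassName=<storage-class> \
    --set kafka.bootstrapServers="kafka_ip_or_hostname:port" \
    --set kafka.securityProtocol=SASL_SSL \
    --set admin.serviceType=NodePort
```

If you deploy with Kubernetes YAML instead, the same keys are typically set in the values file used to generate the manifests.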
- -### General configuration - -Parameter | Description | Default value | --------------------------------------|------------------------------------|-----------------------------------| -`persistence.useDynamicProvisioning` | Use Dynamic Provisioning | `true` | -`settings.egress` | Enable Data Egress to Apache Kafka | `true` | -`settings.ingressTopic` | Apache Kafka ingress topic | `[Release name]-ibm-bai-ingress` | -`settings.egressTopic` | Apache Kafka egress topic | `[Release name]-ibm-bai-egress` | -`settings.serviceTopic` | Apache Kafka service topic | `[Release name]-ibm-bai-service` | -`baiSecret` | Name of a secret that is already deployed to Kubernetes. See [below](#baiSecret) for details. | `None` | - -#### baiSecret - -A secret that contains the following keys: - -- `admin-username`: the username to authenticate against the admin REST API -- `admin-password`: the password to authenticate against the admin REST API -- `admin-key`: the private key in PEM format for secure communications with the administration service -- `admin-cert`: the certificate in PEM format for secure communications with the administration service -- `kafka-username`: the username to authenticate against Kafka -- `kafka-password`: the password to authenticate against Kafka -- `flink-ssl-keystore`: the keystore for secure communications with the Flink REST API -- `flink-ssl-truststore`: the truststore for secure communications with the Flink REST API -- `flink-ssl-internal-keystore`: the keystore for inter-node communications in the Flink cluster -- `flink-ssl-password`: the password of Flink keystore and truststore -- `kafka-server-cert`: the certificate in PEM format for secure communication with Kafka -- `kafka-ca-cert`: the CA certificate in PEM format for secure communication with Kafka -- `flink-security-krb5-keytab`: the Kerberos Keytab -- `elasticsearch-username`: the username for connection to the external Elasticsearch -- `elasticsearch-password`: the password for connection to the external Elasticsearch -- `elasticsearch-server-cert`: the certificate in PEM format for secure communication with Elasticsearch - -> **Note**: The secret must hold a value for each of these keys, even if their value is empty (when they are not relevant in your IBM Business Automation Insights configuration). -When you run `kubectl` to create a secret with empty values, you must turn validation off with the ` --validate=false` argument. - -This secret must be created in a production environment for overriding the default credentials. - -For example: -``` -kubectl create -f bai-prereq-secret.yaml --validate=false -``` - -If `baiSecret` is defined, it overrides the following values: -- `admin.username` -- `admin.password` -- `kafka.username` -- `kafka.password` -- `kafka.serverCertificate` -- `kerberos.keytab` -- `elasticsearch.username` -- `elasticsearch.password` -- `elasticsearch.serverCertificate` - -### Docker registry details - -Parameter | Description | Default value | -----------------------------|--------------------------|----------------| -`imageCredentials.registry` | Docker registry URL | None | -`imageCredentials.username` | Docker registry username | None | -`imageCredentials.password` | Docker registry password | None | -`imageCredentials.imagePullSecret` | The imagePullSecret for Docker images. See [below](#imagecredentials) for details. 
| None -`imagePullPolicy` | The pull policy for Docker images | None | - -#### imageCredentials.imagePullSecret - -An imagePullSecret for Docker images which overrides: -- `imageCredentials.registry` -- `imageCredentials.userName` -- `imageCredentials.password` - -Here is the command to create such a secret: - -``` -kubectl create secret docker-registry regcred --docker-server= --docker-username= --docker-password= --docker-email= -n -``` - -### Apache Kafka - -Parameter | Description | Default -----------------------------------|---------------------------------|-------- -`kafka.bootstrapServers` | Apache Kafka Bootstrap Servers. | `kafka.bootstrapserver1.hostname:9093,kafka.bootstrapserver2.hostname:9093,kafka.bootstrapserver3.hostname:9093` -`kafka.securityProtocol` | Apache Kafka `security.protocol` property value | `SASL_SSL` -`kafka.saslKerberosServiceName` | Apache Kafka `sasl.kerberos.service.name` property value | -`kafka.serverCertificate` | Apache Kafka server certificate for SSL communications (base64 encoded) | -`kafka.username` | Apache Kafka username | -`kafka.password` | Apache Kafka password | -`kafka.propertiesConfigMap` | Name of a ConfigMap already deployed to Kubernetes and that contains Kafka consumer and producer properties. For details, see [Specifying a configuration map for Kafka properties](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.bai/topics/tsk_bai_flink_kub_config_maps_kafka.html). | - -### Elasticsearch settings - -Parameter | Description | Default -----------|-------------|-------- -`elasticsearch.install` | Specifies whether Elasticsearch and Kibana must be deployed by using the ibm-dba-ek subchart | `true` -`elasticsearch.url` | Elasticsearch URL. Only relevant if you do not use the ibm-dba-ek subchart to install Elasticsearch. | -`elasticsearch.username` | Elasticsearch username. Only relevant if you do not use the ibm-dba-ek subchart to install Elasticsearch. | -`elasticsearch.password` | Elasticsearch password. Only relevant if you do not use the ibm-dba-ek subchart to install Elasticsearch. | -`elasticsearch.serverCertificate` | Elasticsearch server certificate for SSL communications (base64 encoded). This attribute is relevant only if you set `Install Elasticsearch` to false. | - -### Setup job - -Parameter | Description | Default -----------|-------------|-------- -`setup.image.repository` | Docker image name for the setup job | `bai-setup` -`setup.image.tag` | Docker image version for the setup job | `19.0.1` - -### Administration service -Parameter | Description | Default -----------|-------------|-------- -`admin.image.repository` | Docker image name for the Administration Service | `bai-admin` -`admin.image.tag` | Docker image version for the Administration Service | `19.0.1` -`admin.replicas` | Number of Administration Service replicas | 2 -`admin.username` | Sets the user name to the Administration Service | `admin` -`admin.password` | Sets the password to the Administration Service API | `passw0rd` -`admin.serviceType` | The way the Administration Service API must be exposed. Can be `NodePort` or `ClusterIP`. If you want to expose the service on Ingress, choose `ClusterIP`. After the Helm chart is deployed, create your own Ingress Kubernetes resource manually. | `NodePort` -`admin.externalPort` | The port to which the Administration Service API is exposed externally. Relevant only if `serviceType` is set to `NodePort`. 
| - -### Apache Flink persistent volume - -Parameter | Description | Default -----------|-------------|-------- -`flinkPv.capacity` | Persistent volume capacity | `20Gi` -`flinkPv.storageClassName` | Storage class name to be used if `persistence.useDynamicProvisioning` is `true` | -`flinkPv.existingClaimName`| By default, a new persistent volume claim is created. Specify an existing claim here if one is available. | - -### Apache Flink - -Parameter | Description | Default -----------|-------------|-------- -`flink.image.repository` | Docker image name for Apache Flink | `bai-flink` -`flink.image.tag` | Docker image version for Apache Flink | `19.0.1` -`flink.taskManagerHeapMemory` | Apache Flink Task Manager heap memory (in megabytes) | 1024 -`flink.taskManagerMemory` | Apache Flink Task Manager total memory (in megabytes). It has to be greater than `flink.taskManagerHeapMemory`. | 1536 -`flink.jobCheckpointingInterval` | Interval between checkpoints of Apache Flink jobs | `5000` -`flink.batchSize` | Batch size for bucketing sink storage | `268435456` -`flink.checkInterval` | How frequently (in milliseconds) the job checks for inactive buckets | `300000` -`flink.bucketThreshold` | The minimum time (in milliseconds) after which a bucket that does not receive new data is considered inactive | `900000` -`flink.storageBucketUrl` | The HDFS URL for long-term storage (e.g. `hdfs://:/bucket_path`) | -`flink.rocksDbPropertiesConfigMap` | Name of a ConfigMap already deployed to Kubernetes that contains advanced RocksDB properties | -`flink.log4jConfigMap` | Name of a configMap already deployed to Kubernetes that overrides the default bai-flink-log4j configMap | -`flink.hadoopConfigMap` | Name of a ConfigMap already deployed to Kubernetes that contains HDFS configuration (core-site.xml and hdfs-site.xml) | -`flink.zookeeper.image.repository` | Docker image name for Apache Zookeeper | `bai-flink` -`flink.zookeeper.image.tag` | Docker image version for Apache Zookeeper | `19.0.1` -`flink.zookeeper.replicas` | Number of Apache Zookeeper replicas | 1 - -### IBM Business Automation Workflow - BPMN processing - -Parameter | Description | Default -----------|-------------|-------- -`bpmn.install` | Whether to install Business Process Model & Notation (BPMN) event processing or not. | `true` -`bpmn.image.repository` | Docker image name for BPMN event processing. | `bai-bpmn` -`bpmn.image.tag` | Docker image version number for BPMN event processing. | `19.0.1` -`bpmn.recoveryPath` | The path to the savepoint or checkpoint from which a job will recover. You can use this path to restart the job from a previous state in case of failure. To use the default workflow of the job, leave this option empty. | -`bpmn.endAggregationDelay` | The delay in milliseconds before clearing the states used for summary transformation. | `10000` -`bpmn.parallelism` | The number of parallel instances (task managers) to use for running the processing job. | - -### IBM Business Automation Workflow - Advanced Processing - -Parameter | Description | Default -----------|-------------|-------- -`bawadv.install` | Whether to install Business Automation Workflow Advanced (BAW) event processing (for BPEL processes, human tasks, ...) or not. | `true` -`bawadv.image.repository` | Docker image name for BAW Advanced event processing. | `bai-bawadv` -`bawadv.image.tag` | Docker image version for BAW Advanced event processing | `19.0.1` -`bawadv.recoveryPath` | The path to the savepoint or checkpoint from which a job will recover. 
You can use this path to restart the job from a previous state in case of failure. To use the default workflow of the job, leave this option empty. | -`bawadv.parallelism` | The number of parallel instances (task managers) to use for running the processing job. | - -### IBM Business Automation Workflow - Case processing - -Parameter | Description | Default -----------|-------------|-------- -`icm.install` | Whether to install IBM Case Manager (ICM) event processing or not. | `true` -`icm.image.repository` | Docker image name for ICM events processing. | `bai-icm` -`icm.image.tag` | Docker image version for ICM events processing. | `19.0.1` -`icm.recoveryPath` | The path to the savepoint or checkpoint from which a job will recover. You can use this path to restart the job from a previous state in case of failure. To use the default workflow of the job, leave this option empty. | -`icm.parallelism` | The number of parallel instances (task managers) to use for running the processing job. | - -### IBM Operational Decision Manager processing - -Parameter | Description | Default -----------|-------------|-------- -`odm.install` | Whether to install IBM Operational Decision Manager (ODM) event processing or not. | `true` -`odm.image.repository` | Docker image name for ODM event processing. | `bai-odm` -`odm.image.tag` | Docker image version for ODM event processing | `19.0.1` -`odm.recoveryPath` | The path to the savepoint or checkpoint from which a job will recover. You can use this path to restart the job from a previous state in case of failure. To use the default workflow of the job, leave this option empty. | -`odm.parallelism` | The number of parallel instances (task managers) to use for running the processing job | - -### IBM Content Platform Engine Processing - -Parameter | Description | Default -----------|-------------|-------- -`content.install` | Whether to install IBM Content Platform Engine (Content) event processing or not. | `true` -`content.image.repository` | Docker image name for Content event processing. | `bai-content` -`content.image.tag` | Docker image version for Content event processing | `19.0.1` -`content.recoveryPath` | The path to the savepoint or checkpoint from which a job will recover. You can use this path to restart the job from a previous state in case of failure. To use the default workflow of the job, leave this option empty. | -`content.parallelism` | The number of parallel instances (task managers) to use for running the processing job. | - -### IBM Business Automation Workflow Advanced processing - -Parameter | Description | Default -----------|-------------|-------- -`bawadv.install` | Whether to install Business Automation Workflow Advanced (BAW) event processing (for BPEL processes, human tasks, ...) or not. | `true` -`bawadv.image.repository` | Docker image name for BAW Advanced event processing. | `bai-bawadv` -`bawadv.image.tag` | Docker image version for BAW Advanced event processing | `latest` -`bawadv.recoveryPath` | The path to the savepoint or checkpoint from which a job will recover. You can use this path to restart the job from a previous state in case of failure. To use the default workflow of the job, leave this option empty. | -`bawadv.parallelism` | The number of parallel instances (task managers) to use for running the processing job. | - -### Raw events processing - -Parameter | Description | Default -----------|-------------|-------- -`ingestion.install` | Whether to install raw event processing or not. 
| true -`ingestion.image.repository` | Docker image name for raw event processing. | `bai-ingestion` -`ingestion.image.tag` | Docker image version for raw event processing | `19.0.1` -`ingestion.recoveryPath` | The path to the savepoint or checkpoint from which a job will recover. You can use this path to restart the job from a previous state in case of failure. To use the default workflow of the job, leave this option empty. | -`ingestion.parallelism` | The number of parallel instances (task managers) to use for running the processing job | - -### Kerberos configuration - -Parameter | Description | Default -----------|-------------|-------- -`kerberos.enabledForKafka` | Set to true to enable Kerberos authentication to the Kafka server | `false` -`kerberos.enabledForHdfs` | Set to true to enable Kerberos authentication to the HDFS server | `false` -`kerberos.realm` | Kerberos default realm name | -`kerberos.kdc` | Kerberos key distribution center host | -`kerberos.principal` | Sets the Kerberos principal to authenticate with | -`kerberos.keytab` | Sets the Kerberos Keytab (base64 encoded) | - -### Init Image configuration - -Parameter | Description | Default -----------|-------------|-------- -`initImage.image.repository` | Docker image name for initialization containers | `bai-init` -`initImage.image.tag` | Docker image version for initialization containers | `19.0.1` - -### Elasticsearch-Kibana subchart - -If `elasticsearch.install` is set to `true`, Elasticsearch and Kibana are deployed as the ibm-dba-ek subchart. - -You can set values for the `ibm-dba-ek` subchart under the `ibm-dba-ek` key. These attributes are relevant only if you use the `ibm-dba-ek` subchart to install Elasticsearch into Kubernetes (see `elasticsearch.install`). You can adjust the values for this subchart if you want to set up your own set of users or to update the deployment topology or persistent storage management. - -With the default configuration, which must not be used in a production environment, you can access Kibana by using the following credentials: - -- admin:passw0rd -- demo:demo - -In a production environment, you must create a secret with the following keys: - -- `elasticsearch-username`: A Kibana username with administration privileges for connection to an external Elasticsearch -- `elasticsearch-password`: A Kibana password for connection to an external Elasticsearch - -The name you choose for that secret must be specified in the values: - -```yaml -ibm-dba-ek: - ekSecret: "" -``` - -For details, regarding the ibm-dba-ek subchart Helm values: -- [Elasticsearch parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/topics/ref_bai_es_params.html) -- [Kibana parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/topics/ref_bai_kibana_params.html) diff --git a/BAI/README_config.md b/BAI/README_config.md new file mode 100644 index 00000000..0bf0c7c7 --- /dev/null +++ b/BAI/README_config.md @@ -0,0 +1,269 @@ +# Configuring IBM® Business Automation Insights + +These instructions cover the basic configuration of IBM Business Automation Insights. + +In order to use Business Automation Insights with other components in the IBM Cloud Pak for Automation you also need to configure them to emit events. + +For more information on the IBM Cloud Pak for Automation, see the [IBM Cloud Pak for Automation Knowledge Center](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/welcome/kc_welcome_dba_distrib.html). 
+ +## Before you start + +If you have not done so, go to the [IBM Cloud Pak for Automation 19.0.x](http://engtest01w.fr.eurolabs.ibm.com:9190/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_preparing_baik8s.html) Knowledge Center and follow the steps to prepare your environment for Business Automation Insights. + +This README will summarize a number of the preparation steps found in the Knowledge Center. For more information at each stage refer to the Knowledge Center links provided. + +## Step 1: Make a copy of the sample Custom Resource + +The IBM Cloud Pak for Automation operator uses a single Custom Resource to install the required Cloud Pak products. These instructions provide an example ICP4ACluster Custom Resource [`configuration/bai-sample-values.yaml`](configuration/bai-sample-values.yaml). You can use this yaml file to customize your Business Automation Insights install, then copy the `bai_configuration` section of the CR yaml to the single ICP4ACluster CR yaml for all Cloud Pak products. + +To begin customizing a basic installation first clone this repository and then copy the [`configuration/bai-sample-values.yaml`](configuration/bai-sample-values.yaml) configuration file into a working directory. + +## Step 2: Edit the Custom Resource + +Open the `bai-sample-values.yaml` ICP4ACluster Custom Resource file in a text/code editor. + +There are a number of values you need to customize: + +* Change all occurrences of `` to the location of the registry hosting the Business Automation Insights Docker images + +* Change all occurrences of `` to the name of the Docker pull secret created above, for example `icp4apull` + +* Ensure the `tag` value for all configuration matches the Docker tag used for the Docker images in your repository + +### Step 2.1: Customize the Apache Kafka Configuration + +#### Step 2.1.1: Apache Kafka connection configuration + +To configure Business Automation Insights to interact with your installation of Apache Kafka you need to customize the `bai_configuration.kafka` section of the Custom Resource. + +Below is an example of a simple Kafka configuration: + +```yaml + kafka: + bootstrapServers: "kafka-0.example.com:9092,kafka-1.example.com:9092,kafka-2.example.com:9092" + securityProtocol: "PLAINTEXT" +``` + +For advanced Apache Kafka configuration, including security options, refer to the [IBM Business Automation Insights Knowledge Center - Apache Kafka parameters](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_bai_k8s_kafka_params.html). + +#### Step 2.1.2: Apache Kafka topic configuration + +Business Automation Insights uses a number of Apache Kafka topics. To customize the names of these topics, uncomment and alter the settings below: + +```yaml + settings: + egress: true + ingressTopic: ibm-bai-ingress + egressTopic: ibm-bai-egress + serviceTopic: ibm-bai-service +``` + +More information about this can be found in the [IBM Business Automation Insights Knowledge Center - Apache Kafka parameters](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_bai_k8s_kafka_params.html), including an explanation of egress functionality. + +### Step 2.2 Persistent Storage +When configuring Business Automation Insights you have a number of options regarding persistent storage. 
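Before you choose between dynamic and static provisioning, it can help to check what your cluster already offers. A minimal check, assuming you have `kubectl` access to the target cluster (these are standard Kubernetes CLI calls, not part of the Business Automation Insights configuration itself):

```bash
# List the storage classes available for dynamic provisioning
kubectl get storageclass

# List any pre-created PersistentVolumes that could back static provisioning
kubectl get pv
```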
+ +Below is a summary of the persistent storage used by Business Automation Insights: + +| Volume | Default volume name | Default Storage | Required | Access Mode | Number of volumes | +| --------------------------------- | ------------------------------------------ | --------------- | -------- | ------------- | ----------------- | +| Flink volume | -bai-pvc | 20Gi | Yes | ReadWriteMany | 1 | +| ElasticSearch Master | data--ibm-dba-ek-master-_replica_ | 10Gi | No | ReadWriteOnce | 1 per replica | +| ElasticSearch Data | data--ibm-dba-ek-data-_replica_ | 10Gi | No | ReadWriteOnce | 1 per replica | +| ElasticSearchSnapshot Storage | -es-snapshot-storage-pvc | 30Gi | No | ReadWriteMany | 1 | + +The Flink volume is used by multiple pods for normal operation of Business Automation Insights. For more information on the Business Automation Insights persistent volume configuration see [IBM Business Automation Insights Knowledge Center - Apache Flink parameters](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_bai_k8s_flink_params.html). + +If you are using the embedded ElasticSearch stack you can choose to enable persistence for the ElasticSearch nodes (with a volume for each replica of the master and data nodes), and for snapshot storage. For more information on the embedded ElasticSearch volume configuration see [IBM Business Automation Insights Knowledge Center - Elasticsearch parameters](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_bai_k8s_es_params.html) + +#### Example configuration using dynamic provisioning + +If your cluster has dynamic volume provisioning the example shows a storage configuration (as found in the `bai-sample-values.yaml` file) when persistence is enabled: + +```yaml + persistence: + useDynamicProvisioning: true + + flinkPv: + storageClassName: "" + + ibm-dba-ek: + elasticsearch: + data: + storage: + persistent: true + useDynamicProvisioning: true + storageClass: "" + snapshotStorage: + enabled: true + useDynamicProvisioning: true + storageClassName: "" +``` + +This configuration creates the four `PersistentVolumeClaim` resources listed with the default configuration. To use dynamic provisioning, change all occurrences of `` and `` to the name of the storage classes appropriate for your deployment platform. + +> Note: The `bai_configuration.flinkPv.storageClassName` and `bai_configuration.ibm-dba-ek.elasticsearch.data.snapshotStorage.storageClassName` storage classes must be capable of access mode `ReadWriteMany`. Additional configuration may be required on some platforms to create a `ReadWriteMany` capable storage class. `bai_configuration.ibm-dba-ek.elasticsearch.data.storage.storageClass` requires a `ReadWriteOnce` access mode capable storage class, available by default on many cloud platforms. + +#### Example configuration using static provisioning + +If you want to manually create `PersistentVolume` and `PersistentVolumeClaim` resources use the following template for an example configuration: + +```yaml + persistence: + useDynamicProvisioning: false + + flinkPv: + existingClaimName: "" + + ibm-dba-ek: + elasticsearch: + data: + storage: + persistent: true + useDynamicProvisioning: false + storageClass: "" + snapshotStorage: + enabled: true + useDynamicProvisioning: false + existingClaimName: "" +``` + +### Step 2.3 Product event processors + +By default, no event processor setup pods are started when Business Automation Insights is installed. 
The event processor setup pods are required in order to configure Business Automation Insights to be able to ingest events from other products in the IBM Cloud Pak for Automation. + +Each product has an `install` parameter in the `bai_configuration` Custom Resource section, as shown below: + +```yaml + ingestion: + install: false + image: + repository: /bai-ingestion + tag: "19.0.3" + + adw: + install: false + image: + repository: /bai-adw + tag: "19.0.3" + + bpmn: + install: false + image: + repository: /bai-bpmn + tag: "19.0.3" + + bawadv: + install: false + image: + repository: /bai-bawadv + tag: "19.0.3" + + icm: + install: false + image: + repository: /bai-icm + tag: "19.0.3" + + odm: + install: false + image: + repository: /bai-odm + tag: "19.0.3" + + content: + install: false + image: + repository: /bai-content + tag: "19.0.3" +``` + +For each products that you want to process events from change the `install` parameter to `true`. For example to process events from IBM Operation Decision Manager set `spec.bai_configuration.odm.install` to `true`. + +## Step 3: Security configuration + +Business Automation Insights requires some additional security configuration. + +### Step 3.1: Create security configuration + +Use the following template to create a [`BAI/configuration/bai-psp-yaml`](configuration/bai-psp.yaml) file containing the required `PodSecurityPolicy`, `Role`, `RoleBinding` and `ServiceAccount` resources needed by BAI. + +**Example bai-psp.yaml** + +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + annotations: + kubernetes.io/description: "This policy is required to allow ibm-dba-ek pods running Elasticsearch to use privileged containers." + name: -bai-psp +spec: + privileged: true + runAsUser: + rule: RunAsAny + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + fsGroup: + rule: RunAsAny + volumes: + - '*' +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: -bai-role +rules: +- apiGroups: + - extensions + resourceNames: + - -bai-psp + resources: + - podsecuritypolicies + verbs: + - use +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: -bai-psp-sa +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: -bai-rolebinding +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: -bai-role +subjects: +- kind: ServiceAccount + name: -bai-psp-sa +``` + +After creating the file, replace all occurrences of `` with the name of your ICP4ACluster Custom Resource created in Step 3. + +### Step 3.2: Apply the security configuration + +To apply the configuration you can use the `kubectl` command line utility: + +```bash +kubectl apply -f bai-psp.yaml +``` + +For RedHat OpenShift, additional policies may be required to enable the `Pod` resources to start containers using the required UIDs. To ensure these containers can start use the `oc` command to add the service accounts to the required `privileged` SCC: + +```bash +oc adm policy add-scc-to-user privileged -z -bai-psp-sa +oc adm policy add-scc-to-user privileged -z default +``` + +## Step 4: Complete the installation + +Go back to the relevant install or update page to configure other components and complete the deployment with the operator. 
+ +Install pages: + - [Managed OpenShift installation page](../platform/roks/install.md) + - [OpenShift installation page](../platform/ocp/install.md) + - [Certified Kubernetes installation page](../platform/k8s/install.md) diff --git a/BAI/README_migrate.md b/BAI/README_migrate.md new file mode 100644 index 00000000..3f2166b4 --- /dev/null +++ b/BAI/README_migrate.md @@ -0,0 +1,85 @@ +# Upgrading IBM® Business Automation Insights + +These instructions cover upgrading IBM® Business Automation Insights. + +## Upgrading from IBM® Business Automation Insights version 19.0.2 to 19.0.3 + +These instructions detail how to upgrade from a Helm / Kubernetes resource installation of Business Automation Insights version 19.0.2 to an Operator installation of Business Automation Insights version 19.0.3. + +### Important note about Elasticsearch snapshot storage + +If Dynamic Provisioning was used to create the Elasticsearch snapshot storage PersistentVolumeClaim for Business Automation Insights version 19.0.2, deleting this release also deletes this PersistentVolumeClaim. It is recommended that you back up the data in the PersistentVolume before uninstalling this release. + +If Static Provisioning was used to provision the snapshot storage PersistentVolumeClaim, this storage can be reused for 19.0.3. The value for `ibm-dba-ek.elasticsearch.data.snapshotStorage.existingClaimName` can be used for the `spec.bai_configuration.ibm-dba-ek.elasticsearch.data.snapshotStorage.existingClaimName` value in the new ICP4ACluster custom resource (see Step 4: Migrate custom values to Custom Resource). + +### Step 1: Get latest configuration values + +Before uninstalling Business Automation Insights version 19.0.2, ensure that the configuration values used for this installation are available. + +To do this, either: +* Retrieve the original `values.yaml` configuration parameter overrides file used in the `helm install` or `helm template` command for the installation. This file would have been specified using the `-f` flag in the original install. +* Alternatively, if the configuration parameters have changed since installation, it is recommended to export the latest values using this command: + +```bash +helm get values my-bai-release +``` + +### Step 2: Uninstall Business Automation Insights version 19.0.2 + +> **Note** Events sent to Kafka by product event processors between the uninstallation of the previous release and the completion of the installation of 19.0.3 are not processed by Business Automation Insights 19.0.3. + +Depending on the method used to install Business Automation Insights, use one of the following procedures to uninstall the 19.0.2 version. + +#### Helm installation (using `helm install`) + +Use the `helm delete` command to delete the Helm release for the Business Automation Insights installation: + +```bash +helm delete --purge my-bai-release +``` + +#### Kubernetes Resource installation (using `helm template`) + +Use the following procedure if the `helm template` command was used to generate Kubernetes YAML files to install Business Automation Insights version 19.0.2: +1. Navigate to the directory where the YAML files were exported. This is the directory set using the `--output-dir` flag in the `helm template` command. +2.
Run the `kubectl delete` command for the installed resources: + +```bash +kubectl delete -f ./ibm-business-automation-insights/templates && \ +kubectl delete -f ./ibm-business-automation-insights/charts/ibm-dba-ek/templates +``` + +### Step 3: Clean up Flink persistent storage + +**IMPORTANT** You must ensure that the PersistentVolume used for Flink in the 19.0.2 release is deleted, or that its contents are cleared. Due to an upgrade of Apache Flink, the stored data cannot be reused between installations. + +For information about cleaning up persistent storage following an uninstallation, see [README_uninstall.md](README_uninstall.md). + +#### Dynamic Provisioning + +If you used dynamic provisioning for your 19.0.2 installation, ensure that the PersistentVolumeClaim that was created as part of the 19.0.2 release has been deleted. + +#### Static Provisioning + +If you used static provisioning, ensure that either: +* The PersistentVolume and PersistentVolumeClaim defined in the `flinkPv.existingClaimName` parameter of your Helm installation have been deleted following uninstallation; or +* The contents of the PersistentVolume have been deleted following uninstallation. This may be applicable if you are using NFS-mounted storage. + +### Step 4: Migrate custom values to Custom Resource + +Copy the configuration parameters used to set up and configure Business Automation Insights from the `values.yaml` override file used for the Helm installation of the 19.0.2 release of Business Automation Insights (as detailed in Step 1) to a new ICP4ACluster Custom Resource under the `bai_configuration` section. + +For more information on how to configure the ICP4ACluster Custom Resource, see [README_config.md](README_config.md). + +### Step 5: Preinstallation steps + +Read [README_config.md](README_config.md) to ensure that all preinstallation instructions have been completed before you install Business Automation Insights version 19.0.3. + +### Step 6: Complete the upgrade + +Go back to the relevant update page to configure other components and complete the deployment with the operator. + +Update pages: + - [Managed OpenShift installation page](../platform/roks/update.md) + - [OpenShift installation page](../platform/ocp/update.md) + - [Certified Kubernetes installation page](../platform/k8s/update.md) diff --git a/BAI/README_uninstall.md b/BAI/README_uninstall.md new file mode 100644 index 00000000..99749247 --- /dev/null +++ b/BAI/README_uninstall.md @@ -0,0 +1,71 @@ +# Uninstalling IBM® Business Automation Insights + +These instructions cover uninstalling IBM® Business Automation Insights. + +> **WARNING** If you used Dynamic Provisioning to provision the snapshot storage used by the embedded Elasticsearch, the PVC will be deleted as part of the uninstall. It is recommended that you back up any snapshots before following these instructions.
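For example, before you start the uninstall you can list the claims in the Business Automation Insights namespace to identify the snapshot storage PVC and the PersistentVolume that backs it; the `bai` namespace below is only an illustrative assumption, so substitute your own project name:

```bash
# List the PersistentVolumeClaims created for Business Automation Insights
# ("bai" is an example namespace; use your own)
kubectl get pvc -n bai

# Show which PersistentVolume each claim is bound to, so its contents can be backed up first
kubectl get pv
```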
+ +## Step 1: Uninstall Custom Resource + +Detailed uninstall instructions can be found on the uninstall page for your platform: + - [Managed OpenShift installation page](../platform/roks/uninstall.md) + - [OpenShift installation page](../platform/ocp/uninstall.md) + - [Certified Kubernetes installation page](../platform/k8s/uninstall.md) + +As mentioned in the above pages to begin the uninstall of Business Automation Insights use `kubectl` to delete the Custom Resource: + +```bash +kubectl delete -f my_icp4a_cr.yaml +``` + +Alternatively, you can use the `oc` command to delete the Custom Resource: + +```bash +oc delete -f my_icp4a_cr.yaml +``` + +The Operator will now start to uninstall Business Automation Insights. + +## Step 2: Deallocate storage + +To clean up storage used by Business Automation Insights, you will have to follow the instructions below. + +### Statically provisioned storage + +If you chose to statically provision storage for Flink or Snapshot Storage, the PersistentVolumeClaims and PersistentVolumes that you manually created will not be deleted. To completely remove all data, you will need to delete this storage manually. + +### Embedded Elasticsearch volumes + +If you installed with the embedded Elasticsearch enabled, the volumes created for the *master* and *data* replicas of the Elasticsearch pods will not be deleted when uninstalling. To completely remove an installation you will need to delete the relevant PersistentVolumeClaims and PersistentVolumes. + +To do this run the command: + +```bash +kubectl delete pvc/pvc-name +``` + +For example: + +```bash +kubectl delete pvc/data-bai-ibm-dba-ek-data-0 +``` + +To get a list of all PersistentVolumeClaims run the command: + +```bash +kubectl get pvc +``` + +## Step 3: Security configuration + +If you used the bai-psp.yaml file referenced in [README_config.yaml](README_config.yaml) to install the required `PodSecurityPolicy`, `Role`, `RoleBinding` and `ServiceAccount` resources needed by Business Automation Insights, you will need to remove this configuration using `kubectl`: + +```bash +kubectl delete -f bai-psp.yaml +``` + +If you are using RedHat OpenShift, it is advised you also remove the default service account and Business Automation Insights service account (defined in the bai-psp.yaml file) from privileged SCC: + +```bash +oc adm policy remove-scc-from-user privileged -z -bai-psp-sa +oc adm policy remove-scc-from-user privileged -z default +``` \ No newline at end of file diff --git a/BAI/configuration/bai-pod-security-policy.yaml b/BAI/configuration/bai-pod-security-policy.yaml deleted file mode 100644 index c4a39ddb..00000000 --- a/BAI/configuration/bai-pod-security-policy.yaml +++ /dev/null @@ -1,31 +0,0 @@ -apiVersion: policy/v1beta1 -kind: PodSecurityPolicy -metadata: - name: bai-psp -spec: - privileged: true - runAsUser: - rule: RunAsAny - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - fsGroup: - rule: RunAsAny - volumes: - - '*' ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - annotations: - name: bai-clusterrole -rules: -- apiGroups: - - extensions - resourceNames: - - bai-psp - resources: - - podsecuritypolicies - verbs: - - use diff --git a/BAI/configuration/bai-psp.yaml b/BAI/configuration/bai-psp.yaml new file mode 100644 index 00000000..4869da2b --- /dev/null +++ b/BAI/configuration/bai-psp.yaml @@ -0,0 +1,59 @@ +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) 
Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. +# +############################################################################### +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + annotations: + kubernetes.io/description: "This policy is required to allow ibm-dba-ek pods running Elasticsearch to use privileged containers." + name: -bai-psp +spec: + privileged: true + runAsUser: + rule: RunAsAny + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + fsGroup: + rule: RunAsAny + volumes: + - '*' +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: -bai-role +rules: +- apiGroups: + - extensions + resourceNames: + - -bai-psp + resources: + - podsecuritypolicies + verbs: + - use +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: -bai-psp-sa +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: -bai-rolebinding +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: -bai-role +subjects: +- kind: ServiceAccount + name: -bai-psp-sa \ No newline at end of file diff --git a/BAI/configuration/bai-sample-values.yaml b/BAI/configuration/bai-sample-values.yaml new file mode 100644 index 00000000..338957b2 --- /dev/null +++ b/BAI/configuration/bai-sample-values.yaml @@ -0,0 +1,141 @@ +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. +# +############################################################################### +apiVersion: icp4a.ibm.com/v1 +kind: ICP4ACluster +metadata: + name: bai-demo + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 +spec: + bai_configuration: + imageCredentials: + imagePullSecret: + + persistence: + useDynamicProvisioning: true + + flinkPv: + storageClassName: "" + + kafka: + bootstrapServers: "kafka.bootstrapserver1.hostname:9092,kafka.bootstrapserver2.hostname:9092,kafka.bootstrapserver3.hostname:9092" + securityProtocol: "PLAINTEXT" + + # settings: + # egress: true + # ingressTopic: ibm-bai-ingress + # egressTopic: ibm-bai-egress + # serviceTopic: ibm-bai-service + + setup: + image: + repository: /bai-setup + tag: "19.0.3" + + admin: + image: + repository: /bai-admin + tag: "19.0.3" + + flink: + initStorageDirectory: true + image: + repository: /bai-flink + tag: "19.0.3" + zookeeper: + image: + repository: /bai-flink-zookeeper + tag: "19.0.3" + + ingestion: + install: false + image: + repository: /bai-ingestion + tag: "19.0.3" + + adw: + install: false + image: + repository: /bai-adw + tag: "19.0.3" + + bpmn: + install: false + image: + repository: /bai-bpmn + tag: "19.0.3" + + bawadv: + install: false + image: + repository: /bai-bawadv + tag: "19.0.3" + + icm: + install: false + image: + repository: /bai-icm + tag: "19.0.3" + + odm: + install: false + image: + repository: /bai-odm + tag: "19.0.3" + + content: + install: false + image: + repository: /bai-content + tag: "19.0.3" + + initImage: + image: + repository: /bai-init + tag: "19.0.3" + + elasticsearch: + install: true + + ibm-dba-ek: + image: + imagePullPolicy: Always + imagePullSecret: + + elasticsearch: + image: + repository: 
/bai-elasticsearch + tag: "19.0.3" + init: + image: + repository: /bai-init + tag: "19.0.3" + data: + storage: + persistent: true + useDynamicProvisioning: true + storageClass: "" + snapshotStorage: + enabled: true + useDynamicProvisioning: true + storageClassName: "" + + kibana: + image: + repository: /bai-kibana + tag: "19.0.3" + init: + image: + repository: /bai-init + tag: "19.0.3" diff --git a/BAI/configuration/pv.yaml b/BAI/configuration/pv.yaml deleted file mode 100644 index ca700a36..00000000 --- a/BAI/configuration/pv.yaml +++ /dev/null @@ -1,103 +0,0 @@ -## persistent volume & claims definition to be run once in the cluster -apiVersion: v1 -kind: PersistentVolume -metadata: - name: bai-pv -spec: - accessModes: - - ReadWriteMany - capacity: - storage: 20Gi - nfs: - path: /export/NFS/bai/bai-pv - server: - persistentVolumeReclaimPolicy: Retain - claimRef: - namespace: bai - name: bai-pvc ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: bai-pvc -spec: - storageClassName: "" - accessModes: - - ReadWriteMany - resources: - requests: - storage: 20Gi ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: bai-ek-data-pv-0 -spec: - storageClassName: "bai-ek-data" - accessModes: - - ReadWriteOnce - capacity: - storage: 10Gi - nfs: - path: /export/NFS/bai/ek-data-0 - server: - persistentVolumeReclaimPolicy: Recycle ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: bai-ek-data-pv-1 -spec: - storageClassName: "bai-ek-data" - accessModes: - - ReadWriteOnce - capacity: - storage: 10Gi - nfs: - path: /export/NFS/bai/ek-data-1 - server: - persistentVolumeReclaimPolicy: Recycle ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: bai-ek-data-pv-2 -spec: - storageClassName: "bai-ek-data" - accessModes: - - ReadWriteOnce - capacity: - storage: 10Gi - nfs: - path: /export/NFS/bai/ek-data-2 - server: - persistentVolumeReclaimPolicy: Recycle ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: bai-ek-snapshots-pv -spec: - accessModes: - - ReadWriteMany - capacity: - storage: 30Gi - nfs: - path: /export/NFS/bai/ek-snapshots - server: - persistentVolumeReclaimPolicy: Retain - claimRef: - namespace: bai - name: bai-ek-snapshots-pvc ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: bai-ek-snapshots-pvc -spec: - storageClassName: "" - accessModes: - - ReadWriteMany - resources: - requests: - storage: 30Gi diff --git a/BAI/configuration/sample-secure-values.yaml b/BAI/configuration/sample-secure-values.yaml deleted file mode 100644 index 54525ffb..00000000 --- a/BAI/configuration/sample-secure-values.yaml +++ /dev/null @@ -1,90 +0,0 @@ -persistence: - useDynamicProvisioning: true - -imageCredentials: - registry: - username: - password: - -kafka: - bootstrapServers: "" - securityProtocol: "SASL_SSL" - username: "" - password: "" - serverCertificate: "" - -settings: - ingressTopic: "bai-release-ingress" - egressTopic: "bai-release-egress" - serviceTopic: "bai-release-service" - -setup: - image: - repository: /bai-setup - -admin: - image: - repository: /bai-admin - externalPort: - -flinkPv: - existingClaimName: "" - -flink: - image: - repository: /bai-flink - storageBucketUrl: "" - - zookeeper: - image: - repository: /bai-flink/zookeeper - -ingestion: - image: - repository: /bai-ingestion - -bpmn: - install: true - image: - repository: /bai-bpmn - -bawadv: - install: false - -icm: - install: false - -odm: - install: false - -content: - install: false - -initImage: - image: - repository: /bai-init - -ibm-dba-ek: - image: - 
credentials: - registry: - username: - password: - elasticsearch: - init: - image: - repository: /bai-init - image: - repository: /bai-elasticsearch - - data: - snapshotStorage: - enabled: - existingClaimName: "" - client: - externalPort: 31200 - - kibana: - image: - repository: /bai-kibana - externalPort: 31501 \ No newline at end of file diff --git a/BAI/configuration/sample-values.yaml b/BAI/configuration/sample-values.yaml deleted file mode 100644 index 9aa567d4..00000000 --- a/BAI/configuration/sample-values.yaml +++ /dev/null @@ -1,60 +0,0 @@ -# This is a customized values.yaml sample. -# In this sample, only the BPMN event processing is enabled. - -persistence: - useDynamicProvisioning: true - -imagePullPolicy: IfNotPresent - -kafka: - bootstrapServers: "kafka-release-cp-kafka-headless:9092" - securityProtocol: "PLAINTEXT" - -elasticsearch: - install: true - -settings: - egress: false - ingressTopic: bai-release-ingress - serviceTopic: bai-release-service - - -admin: - replicas: 1 - serviceType: NodePort - externalPort: 31100 - -# don't install ICM event processing -icm: - install: false - -# don't install ODM event processing -odm: - install: false - -# don't install BAWAdv event processing -bawadv: - install: false - -# don't install Content event processing -content: - install: false - -ingestion: - install: false - -# Overall, the event processing is installed only for BPMN. - -ibm-dba-ek: - elasticsearch: - data: - storage: - persistent: true - useDynamicProvisioning: true - storageClass: "bai-ek-data" - client: - serviceType: NodePort - externalPort: 31200 - kibana: - serviceType: NodePort - externalPort: 31501 diff --git a/BAI/helm-charts/.gitkeep b/BAI/helm-charts/.gitkeep deleted file mode 100644 index e69de29b..00000000 diff --git a/BAI/helm-charts/README.md b/BAI/helm-charts/README.md deleted file mode 100644 index 2fb0c568..00000000 --- a/BAI/helm-charts/README.md +++ /dev/null @@ -1,70 +0,0 @@ -# Install with the Helm chart - -This directory includes the [IBM Business Automation Insights Helm Chart](./ibm-business-automation-insights-3.2.0.tgz) and explains how to install it. - -## Initializing Helm and installing Tiller - -Tiller is a companion to the helm command that runs on your cluster. It receives commands from Helm and communicates directly with the Kubernetes API to create and delete resources. - -To install Tiller on your cluster, run: - -```sh -helm init -``` - -To grant Tiller the required cluster-admin permissions to deploy Business Automation Insights, run: -```sh -kubectl create serviceaccount --namespace kube-system tiller -kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller -kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' -``` - -> **Note:** For clusters where Tiller is already deployed, you only need to initialize the client part: - -```sh -helm init --client-only -``` - -## Installing IBM Business Automation Insights - -### Prerequisites - -First follow the [Requirements](../README.md#requirements) and [Before you begin](../README.md#before-you-begin). 
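As an optional sanity check before installing the chart, and assuming the default Tiller deployment in the `kube-system` namespace, you can confirm that the Helm client and the Tiller server are both reachable:

```sh
# Both client and server versions are reported when Tiller is reachable
helm version

# The tiller-deploy pod should be Running and Ready
kubectl get pods -n kube-system | grep tiller
```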
- -### Install the Helm chart - -To install the IBM Business Automation Helm chart, you need to decide on a release name and use this name when you run the helm command, as follows: - -```sh -helm install ibm-business-automation-insights-3.2.0.tgz --name -n -f values.yaml -``` - -To override the default Business Automation Insights configuration, you must provide a `values.yaml` file with your custom configuration. - -Configuration properties and default values are described in the [Business Automation Insights README.md](../README.md#configuration-parameters). An example `values.yaml` is provided [here](../configuration/sample-values.yaml). - -### Install the event emitters - -You must install the emitters into your IBM Digital Business Automation products to be able to emit events from the products to Business Automation Insights. - -You must only install emitters for the products that you enabled during Business Automation Insights installation process. In the provided sample, only the BPMN job is installed, and so only the BPMN emitter must be installed. - -Refer to the [Knowledge Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.bai/topics/con_bai_top_bmpn_events.html) for instructions. - -## Updating the Helm chart - -Check the Business Automation Insights [Updating](../README.md#updating) section for prerequisites to the update. - -After initial installation, you can update the chart configuration as follows: - -```sh -helm upgrade ibm-business-automation-insights-3.2.0.tgz -n --reuse-values --set a.property=newvalue[,other.property2=newvalue2] -``` - -## Uninstalling the Helm chart - -Run the following command to uninstall the Helm chart: - -```sh -helm delete -``` diff --git a/BAI/helm-charts/ibm-business-automation-insights-3.2.0.tgz b/BAI/helm-charts/ibm-business-automation-insights-3.2.0.tgz deleted file mode 100644 index dcbdf14d..00000000 Binary files a/BAI/helm-charts/ibm-business-automation-insights-3.2.0.tgz and /dev/null differ diff --git a/BAI/k8s-yaml/.gitkeep b/BAI/k8s-yaml/.gitkeep deleted file mode 100644 index e69de29b..00000000 diff --git a/BAI/k8s-yaml/README.md b/BAI/k8s-yaml/README.md deleted file mode 100644 index 5968ab0f..00000000 --- a/BAI/k8s-yaml/README.md +++ /dev/null @@ -1,59 +0,0 @@ -# Install with Kubernetes YAML - -This directory explains how to install IBM Business Automation Insights without the Helm server (Tiller). - -## Initializing Helm - -Initialize the Helm client-side as follows: - -```sh -helm init --client-only -``` - -## Installing IBM Business Automation Insights - -### Prerequisites - -First follow the [Requirements](../README.md#requirements) and [Before you begin](../README.md#before-you-begin). - -### Generate the Kubernetes YAML - -To install IBM Business Automation Insights, generate the Kubernetes YAML files as follows: - -```sh -mkdir yaml-files -helm template ibm-business-automation-insights-3.2.0.tgz --name --output-dir yaml-files -f values.yaml -``` - -To override the default configuration, you must provide a `values.yaml` file that contains your custom configuration. - -Configuration properties and default values are described in the [Business Automation Insights README.md](../README.md#configuration-parameters). An example `values.yaml` is provided [here](../configuration/sample-values.yaml). 
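Optionally, you can review what `helm template` generated before applying anything to the cluster; the directory layout below assumes the `--output-dir yaml-files` value used in the command above:

```sh
# List the rendered manifests produced by helm template
find yaml-files/ibm-business-automation-insights -name '*.yaml' | sort
```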
- -### Install the Kubernetes YAML - -```sh -kubectl apply -f ./yaml-files/ibm-business-automation-insights/templates -n bai && \ -kubectl apply -f ./yaml-files/ibm-business-automation-insights/charts/ibm-dba-ek/templates -n bai -``` - -### Install the event emitters - -You must install the emitters into your IBM Digital Business Automation products to be able to emit events from the products to Business Automation Insights. - -You must only install emitters for the products that you enabled during Business Automation Insights installation process. In the provided sample, only the BPMN job is installed, and so only the BPMN emitter must be installed. - -Refer to the [Knowledge Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.bai/topics/con_bai_top_bmpn_events.html) for instructions. - - -## Updating IBM Business Automation Insights - -Check the Business Automation Insights [Updating](../README.md#updating) section for prerequisites to the update. - -After initial installation, you can update the deployment by following the same steps but passing a different `values.yaml` - -## Uninstalling IBM Business Automation Insights - -```sh -kubectl delete -f ./yaml-files/ibm-business-automation-insights/templates -n bai && \ -kubectl delete -f ./yaml-files/ibm-business-automation-insights/charts/ibm-dba-ek/templates -n bai -``` diff --git a/BAI/platform/README_Eval_Openshift.md b/BAI/platform/README_Eval_Openshift.md deleted file mode 100644 index dc08cfcc..00000000 --- a/BAI/platform/README_Eval_Openshift.md +++ /dev/null @@ -1,105 +0,0 @@ -# Install IBM Business Automation Insights for developers on Red Hat OpenShift - -IBM® Business Automation Insights collects and continuously feeds operational data from IBM Automation Platform for Digital Business on Cloud to data lakes to provide users with a 360-degree view of operations and to enable machine learning from historical data. - -By downloading and installing this no-charge Developer Edition of IBM Business Automation Insights, you can benefit from the following capabilities: - * Collect data from IBM Business Automation Workflow, Operational Decision Manager, IBM FileNet® Content Manager, and BAIW, and store it on Elasticsearch. - * Visualize the data through predefined or user-configured dashboards in Kibana. - -See the following license section for restrictions on the use of this product: http://www14.software.ibm.com/cgi-bin/weblap/lap.pl?li_formnum=L-ASAY-BEEGE4 - -Note: You can use IBM Business Automation Insights Developer Edition only for non-production environments, primarily to try out IBM Business Automation Insights with your own event types and business data. You can connect your existing on-premise IBM Business Automation Workflow non-production systems to IBM Business Automation Insights Developer Edition. - -Note: You can also install Developer Edition on Minikube. For more information, see /~https://github.com/icp4a/cert-kubernetes/blob/19.0.2/BAI/platform/minikube/README.md. 
- -## Step 1: Prerequisites - -Make sure to go through the Prerequisites sections that are documented at /~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BAI/README.md: - - * [Requirements](/~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BAI/README.md#requirements) - * [Connect to the cluster](/~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BAI/README.md#connect-to-the-cluster) - * [Upload the images](/~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BAI/README.md#upload-the-images) This step must be skipped for the Developer Edition because images are pulled from Docker Hub public registry. - * [Configure the storage](/~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BAI/README.md#configure-the-storage). Note that the Developer Edition embeds Elasticsearch. - * [Configure the image policy](/~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BAI/README.md#configure-the-image-policy) This step must be skipped for the Developer Edition. - * [PodSecurityPolicy Requirements](/~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BAI/README.md#podsecuritypolicy-requirements) - * [Red Hat OpenShift SecurityContextConstraints Requirements](/~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BAI/README.md#red-hat-openshift-securitycontextconstraints-requirements) - -Note: When the `kubectl create namespace ` command is executed, an associated Kubernetes namespace is created with the project name. In subsequent commands, replace the `` placeholder with your actual project name. - -## Step 2: Install an IBM Business Automation Insights Developer Edition release - - -1. Create a `values.yaml` file. - - a. Configure the connection between your Kafka tool and Business Automation Insights: - - In the `values.yaml` file, configure the connection to Kafka. - - For example, for a Kafka without authentication: - - ```yaml - kafka: - bootstrapServers: "kafka-hostname:9092" - securityProtocol: "PLAINTEXT" - propertiesConfigMap: "" - ``` - - IBM Business Automation Insights creates Kafka topics if they do not exist. Default Kafka topic names are documented at [General configuration](/~https://github.com/icp4a/cert-kubernetes/tree/19.0.2/BAI/README.md#general-configuration). - - b. Enable event processing. - - For example, to install only ODM event processing, edit your `values.yaml` file as follows. - - ```yaml - bpmn: - install: false - - icm: - install: false - - odm: - install: true - - content: - install: false - - bawadv: - install: false - - baiw: - install: false - ``` - -2. Install the release. - - a. Add the IBM Charts repository - ```console - $ helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable - $ helm repo update - ``` - b. Run the helm install command - - ```console - $ helm install --namespace --name ibm-charts/ibm-business-automation-insights-dev -f ./values.yaml --version=3.2.0 - ``` - -## Step 3: Verify that IBM Business Automation Insights deployment is running - -IBM Business Automation Insights is correctly deployed when all the jobs are completed, all the pods are running and ready, and all the services are reachable. 
- -- Monitor the status of the jobs and check that all of them are marked as successful by executing the following command: -```sh -oc get jobs -n -``` -- Monitor the status of the pods and check that all of them are in `Running` mode and with all their containers `Ready` (for example, 2/2) by executing the following command: -```sh -oc get pods -n -``` - -## To uninstall the release - -To uninstall and delete the release from the Helm CLI, use the following command: - -```console -$ helm delete --purge -``` diff --git a/BAI/platform/README_ROKS.md b/BAI/platform/README_ROKS.md deleted file mode 100644 index 519ed7ad..00000000 --- a/BAI/platform/README_ROKS.md +++ /dev/null @@ -1,284 +0,0 @@ -# Install IBM Business Automation Insights for production on Red Hat OpenShift on IBM Cloud - -## Before you begin: Create a cluster and get access to the container images - -Before you run any installation command, make sure that you have created the IBM Cloud cluster and prepared your own environment. You must also create a pull secret to be able to pull your images from a registry. - -For more information, see [Installing containers on Red Hat OpenShift by using CLIs](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_ROKS.html). - -## Step 1: Install a Business Automation Insights release - -> **Tip**: If you activate Business Automation Insights persistence, you need to specify persistent volumes (PV) to install. PV represents an underlying storage capacity in the infrastructure. Before you can install Business Automation Insights, you must create two PVs with access mode set to ReadWriteOnce and storage capacity of 10Gi or more for Elasticsearch storage, and one PV with access mode set to ReadWriteMany and storage capacity of 10Gi or more for Apache Flink storage. You create a PV in the administration console or in a YAML file (.yml or. yaml file name extension). - -1. Prerequisites: - - * Install a [Kafka distribution](https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem) and make sure it is accessible from the Managed OpenShift cluster. - -2. Get the Business Automation Insights Helm charts: - - a. Download the charts [ibm-business-automation-insights-3.2.0.tgz](../helm-charts/ibm-business-automation-insights-3.2.0.tgz) - -3. Apply the security policy: - - a. Create a file named, for example, 'bai-psp.yaml', based on this PSP template, and set the values of the and placeholders. - * Replace `` with the name of the Business Automation Insights release. - * Replace `` with the name of the namespace that is associated with your OpenShift project. - - ```console - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - annotations: - kubernetes.io/description: "This policy is required to allow ibm-dba-ek pods running Elasticsearch to use privileged containers." 
- name: -bai-psp - spec: - privileged: true - runAsUser: - rule: RunAsAny - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - fsGroup: - rule: RunAsAny - volumes: - - '*' - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: -bai-role - namespace: - rules: - - apiGroups: - - extensions - resourceNames: - - -bai-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: -bai-psp-sa - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: -bai-rolebinding - namespace: - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: -bai-role - subjects: - - kind: ServiceAccount - name: -bai-psp-sa - namespace: - ``` - - b. Apply this policy. - - ```console - $ kubectl apply -f bai-psp.yaml -n - ``` - -4. Grant "ibm-privileged-scc" privileges to the service account. - - ```console - oc adm policy add-scc-to-user ibm-privileged-scc -z -bai-psp-sa -n - ``` - -5. Create a `values.yaml` file. - - a. Pull image secrets - - BAI images are available in IBM Docker registry by using a pull secret name. - Replace the placeholder with the secret that you - created in [Before you begin](#before-you-begin-create-a-cluster-and-get-access-to-the-container-images). Then, add the following parameters in the `values.yaml` file. - - ```yaml - imageCredentials: - imagePullSecret: - - ibm-dba-ek: - image: - imagePullSecret: - ``` - - b. Image repository - - Add the following parameters in the `values.yaml` file. Replace the placeholder with the IBM Docker registry path. - ```yaml - setup: - image: - repository: /bai-setup - admin: - image: - repository: /bai-admin - flink: - image: - repository: /bai-flink - zookeeper: - image: - repository: /bai-flink-zookeeper - bpmn: - image: - repository: /bai-bpmn - icm: - image: - repository: /bai-icm - odm: - image: - repository: /bai-odm - content: - image: - repository: /bai-content - bawadv: - image: - repository: /bai-bawadv - ingestion: - image: - repository: /bai-ingestion - initImage: - image: - repository: /bai-init - ibm-dba-ek: - elasticsearch: - init: - image: - repository: /bai-init - image: - repository: /bai-elasticsearch - kibana: - image: - repository: /bai-kibana - ``` - - c. Activate persistence. - - The following example uses dynamic provisioning and the `ibmc-file-retain-gold` storage class. For Elasticsearch volumes, use the fastest possible storage class. - - ```yaml - persistence: - useDynamicProvisioning: true - - flinkPv: - storageClassName: "ibmc-file-retain-gold" - - ibm-dba-ek: - elasticsearch: - data: - storage: - persistent: true - useDynamicProvisioning: true - storageClass: "ibmc-file-retain-gold" - snapshotStorage: - enabled: true - useDynamicProvisioning: true - storageClass: "ibmc-file-retain-gold" - ``` - - d. Configure the connection between your Kafka tool and Business Automation Insights. - - In the `values.yaml` file, configure the connection to Kafka. - - For example, for a Kafka without authentication: - - ```yaml - kafka: - bootstrapServers: "kafka-hostname:9092" - securityProtocol: "PLAINTEXT" - propertiesConfigMap: "" - ``` - - e. Enable init of the Flink storage directory. - - When deploying IBM Business Automation Insights on IBM Cloud, the Flink init container needs to be run as privileged, such that it can - change the ownership and permissions of its storage directory. 
For details, see https://cloud.ibm.com/docs/containers?topic=containers-cs_troubleshoot_storage#file_app_failures - and https://cloud.ibm.com/docs/containers?topic=containers-cs_troubleshoot_storage#cs_storage_nonroot. To enable initialization - of the Flink storage directory, add `flink.initStorageDirectory: true` in your `values.yaml`. - - ```yaml - flink: - initStorageDirectory: true - ``` - - f. Enable event processing. - - For example, to install only BPMN event processing, edit your `values.yaml` file as follows. - - ```yaml - bpmn: - install: true - - icm: - install: false - - odm: - install: false - - content: - install: false - - bawadv: - install: false - ``` - - g. Configure event ingestion in HDFS. - - By default, events are ingested in HDFS in a dedicated bucket which must be created beforehand with appropriate permissions. - Indicate the path to the HDFS bucket by using the `flink.storageBucketUrl` parameter in your `values.yaml` file. - Replace the placeholders and with the actual values. - - ```yaml - flink: - storageBucketUrl: "hdfs:///" - - ingestion: - install: true - ``` - - For more information about HDFS configuration, see [Preparing to use HDFS](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.bai/topics/tsk_bai_config_hdfs_storage.html). - - To disable event ingestion, edit your `values.yaml` file as follows. - - ```yaml - ingestion: - install: false - ``` - - -10. Install the release. - - ```console - $ helm install --namespace --name /ibm-business-automation-insights-3.2.0.tgz -f ./values.yaml - ``` - -## Step 3: Verify that the Business Automation Insights deployment is running - -1. Monitor the Business Automation Insights pods until they show the *Running* or *Completed* STATUS. - - ```console - $ while oc get pods | grep -E "(Running|Completed|STATUS)"; do sleep 5; done - ``` - -2. Expose the Kibana service to your users by using Openshift routes. - - ```console - $ oc create route passthrough --service=-ibm-dba-ek-kibana -n - ``` - - > **Note**: For more information, refer to the [Openshift documentation](https://docs.openshift.com/container-platform/3.11/dev_guide/routes.html). - - The Kibana URL is available in the 'Routes' section of the Openshift console. - -## To uninstall the release - -To uninstall and delete the release from the Helm CLI, use the following command. - -```console -$ helm delete --purge -``` diff --git a/BAI/platform/minikube/Monitoring.md b/BAI/platform/minikube/Monitoring.md deleted file mode 100644 index 0d9d8cde..00000000 --- a/BAI/platform/minikube/Monitoring.md +++ /dev/null @@ -1,251 +0,0 @@ -# Monitoring an IBM Business Automation Insights installation on Minikube - -After Business Automation Insights is installed on Minikube, you can use the following procedure to monitor the health of you installation and troubleshoot issues. 
- -Table of contents: -- [Retrieving all the logs](#retrieving-all-the-logs) -- [Monitoring Kafka](#monitoring-kafka) -- [Monitoring Elasticsearch](#monitoring-elasticsearch) -- [Monitoring Flink](#monitoring-flink) - -## Retrieving all the logs - -In order to retrieve the logs of all the main BAI runtime components, run the following command: - -``` bash -./get-logs.sh -``` - -This command creates a `logs` directory under which the following log files are created: - -``` -elasticsearch-client.log (last log file for the elasticsearch client pod) -elasticsearch-client.previous.log (previous log file for the elasticsearch client pod) -elasticsearch-data.log (last log file for the elasticsearch data pod) -elasticsearch-data.previous.log (previous log file for the elasticsearch data pod) -elasticsearch-master.log (last log file for the elasticsearch master pod) -elasticsearch-master.previous.log (previous log file for the elasticsearch master pod) -flink-jobmanager.log (last log file for the flink job manager pod) -flink-jobmanager.previous.log (previous log file for the flink job manager pod) -flink-taskmanager-n.log (last log file for the flink task manager pod(s)) -flink-taskmanager-n.previous.log (previous log file for the flink task manager pod(s)) -flink-zookeeper.log (last log file for the flink zookeeper pod) -flink-zookeeper.previous.log (previous log file for the flink zookeeper pod) -kafka-zookeeper.log (last log file for the kafka zookeeper pod) -kafka-zookeeper.previous.log (previous log file for the kafka zookeeper pod) -kafka.log (last log file for the kafka pod) -kafka.previous.log (previous log file for the kafka pod) -``` - -## Monitoring Kafka - -### Checking that Kafka is running - -Run the following command: - -``` bash -kubectl get pods -n kakfa -``` - -The expected output should be similar to the following result, indicating two pods running and ready. - -``` -NAME READY STATUS RESTARTS AGE -kafka-release-cp-kafka-0 2/2 Running 0 60m -kafka-release-cp-zookeeper-0 2/2 Running 12 41h -``` - -After you ensured that your two pods are ready and running, you can check that the Kafka service is correctly exposed by running the following command: - -``` bash -kubectl get services -n kafka -``` - -The expected output should be similar to the following result: a service named `kafka-release-_x_-nodeport` of type `NodePort` should be mapped to TCP port 31090). - -``` -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -kafka-release-0-nodeport NodePort 10.103.253.72 19092:31090/TCP 41h -kafka-release-cp-kafka ClusterIP 10.103.115.114 9092/TCP 41h -kafka-release-cp-kafka-headless ClusterIP None 9092/TCP 41h -kafka-release-cp-zookeeper ClusterIP 10.107.231.119 2181/TCP 41h -kafka-release-cp-zookeeper-headless ClusterIP None 2888/TCP,3888/TCP 41h -``` - -### Checking that the Kafka topics for Business Automation Insights exist - -_Note: Before this verification, make sure to install the [kafka binaries](https://kafka.apache.org/downloads) on your laptop. 
In the following command, ${KAFKA_HOME} refers to the home directory of your Kafka installation._
-
-Run the following Kafka command:
-
-``` bash
-${KAFKA_HOME}/bin/kafka-topics.sh --list --bootstrap-server $(minikube ip):31090
-```
-
-The returned list must include the following three Kafka topics:
-
-```
-bai-release-ibm-bai-egress
-bai-release-ingress
-bai-release-service
-```
-
-### Checking that messages are sent by the emitter
-
-_Note: Before this verification, make sure to install the [kafka binaries](https://kafka.apache.org/downloads) on your laptop. In the following command, ${KAFKA_HOME} refers to the home directory of your Kafka installation._
-
-Run the following Kafka command to display all messages in the `bai-release-ingress` Kafka topic:
-
-``` bash
-${KAFKA_HOME}/bin/kafka-console-consumer.sh --bootstrap-server $(minikube ip):31090 --topic bai-release-ingress --from-beginning
-```
-
-Then, interact with your emitter application (the ODM emitter for IBM Operational Decision Manager, or the BPMN or Case emitter for IBM Business Automation Workflow) and check that you can see messages added to the `bai-release-ingress` topic in your console.
-
-### Getting the Kafka logs
-
-Run the following command to get the logs:
-
-``` bash
-kubectl logs $(kubectl get pods -n kafka | grep kafka-release-cp-kafka- | awk '{print $1}') cp-kafka-broker -n kafka
-```
-
-## Monitoring Elasticsearch
-
-### Checking that Elasticsearch is running
-
-Run the following command to display the list of Elasticsearch and Kibana pods:
-
-``` bash
-kubectl get pods -n bai | grep -e 'RESTARTS\|-ek-'
-```
-
-The expected output should be similar to the following result, indicating four pods running and ready.
-
-```
-NAME                                             READY   STATUS    RESTARTS   AGE
-bai-release-ibm-dba-ek-client-58bc6bf75c-9dwvc   1/1     Running   2          18h
-bai-release-ibm-dba-ek-data-0                    1/1     Running   2          18h
-bai-release-ibm-dba-ek-kibana-7bcfc6ddf9-ff69f   1/1     Running   2          18h
-bai-release-ibm-dba-ek-master-0                  1/1     Running   2          18h
-```
-
-After you have ensured that these pods are ready and running, you can check that the Elasticsearch and Kibana services are correctly exposed by running the following command:
-
-``` bash
-kubectl get services -n bai | grep 'EXTERNAL-IP\|-ek-'
-```
-
-The expected output should be similar to the following result.
- -``` -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -bai-release-ibm-dba-ek-client NodePort 10.110.241.254 9200:31200/TCP 18h -bai-release-ibm-dba-ek-kibana NodePort 10.97.123.220 5601:31501/TCP 18h -bai-release-ibm-dba-ek-master ClusterIP 10.111.170.123 9300/TCP 18h -``` - -### Checking that the Elasticsearch cluster is healthy - -Run the following command to check the health of your Elasticsearch cluster: - -``` bash -curl https://$(minikube ip):31200/_cluster/health?pretty=true --insecure -u admin:passw0rd -``` - -In the returned JSON code, check that the status is `green` or `yellow`, as in the following example: - -``` json -{ - "cluster_name" : "bai-release-ibm-dba-ek-elasticsearch", - "status" : "yellow", - "timed_out" : false, - "number_of_nodes" : 3, - "number_of_data_nodes" : 1, - "active_primary_shards" : 42, - "active_shards" : 42, - "relocating_shards" : 0, - "initializing_shards" : 0, - "unassigned_shards" : 40, - "delayed_unassigned_shards" : 0, - "number_of_pending_tasks" : 0, - "number_of_in_flight_fetch" : 0, - "task_max_waiting_in_queue_millis" : 0, - "active_shards_percent_as_number" : 51.21951219512195 -} -``` - -### Checking that the Elasticsearch indexes exist - -Run the following command to retrieve the list of indexes in the Elasticsearch cluster: - -``` bash -curl https://$(minikube ip):31200/_cat/indices?v --insecure -u admin:passw0rd -``` - -In the returned list, check that all expected indexes exist, are open, and have a `green` or `yellow` health status, as in the following example: - -``` -health status index uuid pri rep docs.count docs.deleted store.size pri.store.size -yellow open security-auditlog-2019.04.25 P7LQybcvTySRYpDUCoUftw 5 1 159 0 671.2kb 671.2kb -yellow open process-summaries-active-idx-ibm-bai-2019.04.25-000001 81GfwYOOTJOK4LD551uVTw 5 1 4 0 96.5kb 96.5kb -green open .kibana_1 HyuwkYF8QvKJyONgyECFtw 1 0 135 9 191.5kb 191.5kb -green open .opendistro_security SOGNgWczThqAT26vcyg71g 1 0 5 0 32kb 32kb -yellow open process-summaries-completed-idx-ibm-bai-2019.04.25-000001 qlwuQ1AqTca3FcQ2LB-9xg 5 1 1 0 25.2kb 25.2kb -yellow open odm-timeseries-idx-ibm-bai-2019.04.25-000001 _SGUSxhfSi-3yfWQ4qdNYQ 5 1 0 0 1.2kb 1.2kb -yellow open case-summaries-active-idx-ibm-bai-2019.04.25-000001 nwwlbYUZRzmtUPisVusJUw 5 1 0 0 1.2kb 1.2kb -yellow open security-auditlog-2019.04.26 -Xqc9GqiQSmLTfVwgzjk9A 5 1 21 0 268.6kb 268.6kb -yellow open content-timeseries-idx-ibm-bai-2019.04.25-000001 gMs6ZjIfQ8O1eyoK7V02eQ 5 1 0 0 1.2kb 1.2kb -yellow open case-summaries-completed-idx-ibm-bai-2019.04.25-000001 AS7uaqCYRAOuvPY1S2g2gw 5 1 0 0 1.2kb 1.2kb -``` - -### Getting the Elasticsearch logs - -Run the following command to get the logs of the Elasticsearch master node: - -``` bash -kubectl logs $(kubectl get pods -n bai | grep bai-release-ibm-dba-ek-master- | awk '{print $1}') -n bai -``` - -Run the following command to get the logs of the Elasticsearch data node: - -``` bash -kubectl logs $(kubectl get pods -n bai | grep bai-release-ibm-dba-ek-data- | awk '{print $1}') -n bai -``` - -Run the following command to get the logs of the Elasticsearch client node: - -``` bash -kubectl logs $(kubectl get pods -n bai | grep bai-release-ibm-dba-ek-client- | awk '{print $1}') -n bai -``` - -### Using Elasticsearch head to introspect your cluster - -To introspect and monitor your Elasticsearch cluster with a user interface, you can install the [Elasticsearch head chrome plugin](https://chrome.google.com/webstore/detail/elasticsearch-head/ffmkiejjmecolpfloofpjologoblkegm). 
- -To connect the plugin to your Elasticsearch cluster, go through the following steps: - -1. Retrieve the URL to the Elasticsearch cluster by running the `echo https://$(minikube ip):31200` command. -1. Enter this URL in your Chrome browser, accept the self-signed certificate if requested, and then use the `admin / passw0rd` credentials to authenticate. -1. After you access the URL, open the Elasticsearch head plugin in the same browser, enter the same URL in the text box at the top of the user interface, and click the `Connect` button. - -## Monitoring Flink - -### Checking that Flink is running - -Run the following command to display the list of Flink pods: - -``` bash -kubectl get pods -n bai | grep -e 'RESTARTS\|-flink-' -``` - -The expected output should be similar to the following result, with all pods running and ready. Note that you might have more or fewer `bai-release-bai-flink-taskmanager-_x_` pods. - -``` -NAME READY STATUS RESTARTS AGE -bai-release-bai-flink-jobmanager-5d8f74f947-zv6wm 1/1 Running 3 19h -bai-release-bai-flink-taskmanager-0 1/1 Running 3 19h -bai-release-bai-flink-taskmanager-1 1/1 Running 3 19h -bai-release-bai-flink-zk-0 1/1 Running 2 19h -``` - diff --git a/BAI/platform/minikube/README.md b/BAI/platform/minikube/README.md deleted file mode 100644 index b70c6a47..00000000 --- a/BAI/platform/minikube/README.md +++ /dev/null @@ -1,276 +0,0 @@ -# Install IBM Business Automation Insights on Minikube - -This procedure guides you to install and run IBM Business Automation Insights Developer Edition on a local Minikube cluster. - -### Disclaimer - -The deployment of IBM Business Automation Insights Developer Edition on Minikube is **not going to provide any high performance, scalability, high availability or allow any long term storage of the data**. Use with care. In order to get high performance, high availability and features not available in the Developer Edition you must install the commercial release of IBM Business Automation Insights on a scalable Kubernetes cluster. -As a consequence, and not limited to the following, machine hibernation or shutdown without having properly shutdown the Minikube virtual machine may have unpredictable effects on Kubernetes persistent storage. This may also prevent Minikube from restarting properly. -*** - -- [Prerequisites](#prerequisites) -- [Automated installation](#automated-installation-fast-path) -- [Step by step installation](#step-by-step-installation) - - [1. Initialize minikube](#1-initialize-minikube) - - [2. Initialize minikube persistent volumes](#2-initialize-minikube-persistent-volumes) - - [3. Initialize Helm](#3-initialize-helm) - - [4. Install Apache Kafka](#4-install-apache-kafka) - - [5. Install IBM Business Automation Insights Developer Edition](#5-install-ibm-business-automation-insights-developer-edition) - - [1. Add IBM Charts repository](#1-add-ibm-charts-repository) - - [2. Create a security policy and a service account for elasticsearch](#2-create-a-security-policy-and-a-service-account-for-elasticsearch) - - [3. Choose the type of event processing you want to deploy](#3-choose-the-type-of-event-processing-you-want-to-deploy) - - [4. Deploy BAI release](#4-deploy-the-bai-release) - - [5. 
Verify](#5-verify) -- [Starting/stopping minikube](#starting-or-stopping-minikube) -- [Next step: configure your Event Emitter](#next-step-configure-your-event-emitter) -- [Troubleshooting](#troubleshooting) -*** - -## Prerequisites - -- Resources: - - MacOS Mojave or Windows 10 - - 2CPUs + 6Gb RAM free space - - In addition to the space for Docker, Minikube, and Helm, 15Gb disk space for images and persisted data - - There are [known networking issues](/~https://github.com/kubernetes/minikube/issues/1099) when using Minikube while Cisco AnyConnect is running on the same machine. Before running Minikube, make sure that your Cisco AnyConnect VPN is NOT running. - -- Tools that must be installed: - - **[Docker](https://docs.docker.com/install)**, tested with [Docker Desktop](https://www.docker.com/products/docker-desktop) on MacOS and [Docker Toolbox](https://docs.docker.com/toolbox/overview/) on Windows - - **[VirtualBox latest](https://www.virtualbox.org/wiki/Downloads)** - - **[Minikube](https://kubernetes.io/docs/setup/minikube)**, tested with [v1.4.0](/~https://github.com/kubernetes/minikube/releases/tag/v1.4.0) (MacOS and Windows) - - **[Helm](https://docs.helm.sh/using_helm/#installing-helm)**, tested with [v2.12.3](/~https://github.com/helm/helm/releases/tag/v2.12.3) (MacOS) and [v2.13.1](/~https://github.com/helm/helm/releases/tag/v2.13.1) (Windows) - - **[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl)**, tested with latest version - - **[jq](https://stedolan.github.io/jq/)**, tested with latest version (MacOS and Windows) - -- IBM Business Automation Insights Developer Edition: - - Choose a destination directory where the installation artifacts below will be downloaded. - - On Windows, this must be on the drive where your ```MINIKUBE_HOME``` environment variable points to. If this is not set, it defaults to the ```C:``` drive (current [restriction of ```minikube```](/~https://github.com/kubernetes/minikube/issues/1574)) - - Download the following files: - - [configuration/easy-install-kafka.yaml](configuration/easy-install-kafka.yaml?raw=true) - - [configuration/easy-install.yaml](configuration/easy-install.yaml?raw=true) - - [configuration/pv.yaml](configuration/pv.yaml?raw=true) - - [configuration/bai-psp.yaml](configuration/bai-psp.yaml?raw=true) - - [install-bai-minikube.sh](./install-bai-minikube.sh?raw=true) - - [install-bai.sh](./install-bai.sh?raw=true) - - [utilities.sh](./utilities.sh?raw=true) - - [Mac/Linux only] Ensure proper execution permissions of downloaded scripts: `chmod +x *.sh` - -## Automated installation ("fast path") - - - See [installation prerequisites](#prerequisites) - - Choose an \ to deal with. The valid values are "odm", "icm", "bpmn", "bawadv", "content", or "baiw". - - If your event emitter is not hosted by the local host, you must use the ```-i ``` option to specify the local machine IP address that is reachable by the event emitter. - - To bypass the check of the Minikube version used, pass the ``` -f ``` option. - - Make sure that the VirtualBox ```VBoxManage``` command is on the ```PATH```. - - Example: ```./install-bai-minikube.sh -e -i 9.128.37.112 -f``` - - On Windows, you must run this command from the Git [```bash```](https://gitforwindows.org/) command tool, which comes with Docker Toolbox. - - Processed data is stored locally in the ```minikube virtual machine /data``` directory and subdirectories. 
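As a quick sanity check after the fast-path installation, you can look at that persisted data from inside the Minikube virtual machine. This is only a sketch; it assumes the default `/data` directories that the installation script creates:

```bash
# List the Flink and Elasticsearch data directories created inside the Minikube VM
minikube ssh "ls /data/bai /data/bai-elasticsearch-data-1 /data/bai-elasticsearch-master-1"
```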
- - -## Step-by-step installation - -Run the following commands from the destination folder where you downloaded the IBM Business Automation Insights Developer Edition files (archive + YAML files). Your working directory structure should be: - -``` -. -|____./configuration/easy-install.yaml -|____./configuration/easy-install-kafka.yaml -|____./configuration/bai-psp.yaml -|____./configuration/pv.yaml -``` - -### 1. Initialize Minikube - -``` -minikube start --cpus 2 --memory 6144 -minikube docker-env -eval $(minikube docker-env) -``` -### 2. Initialize minikube persistent volumes - -``` -kubectl create ns bai -kubectl apply -f configuration/pv.yaml -n bai -minikube ssh "sudo mkdir /data/bai" -minikube ssh "sudo mkdir /data/bai-elasticsearch-data-1" -minikube ssh "sudo mkdir /data/bai-elasticsearch-master-1" -minikube ssh "sudo chmod -R 777 /data" -``` - -### 3. Initialize Helm - -``` -helm init --wait -``` - -### 4. Install Apache Kafka - -#### Scenario 1: Your event emitter (BPMN, BAW Advanced, Case, ODM, Content, or BAIW) is running on your local machine. - -If you plan to feed your Business Automation Insights instance with events from a Business Automation Worfklow server or from an Operational Decision Manager server running on your local machine, use the following procedure to install Apache Kafka: - -``` -helm repo add confluent https://confluentinc.github.io/cp-helm-charts -helm repo update -kubectl create ns kafka -helm install --wait --name kafka-release --namespace kafka -f configuration/easy-install-kafka.yaml --set cp-kafka.customEnv.ADVERTISED_LISTENER_HOST=$(minikube ip) confluent/cp-helm-charts -``` - -After the command completes, check the deployment status of Kafka pods with `kubectl get pods -n kafka` until all pods are running.
The following output shows an example of a successfully completed deployment:

- -``` -NAME READY STATUS RESTARTS AGE -kafka-release-cp-kafka-0 2/2 Running 0 108s -kafka-release-cp-zookeeper-0 2/2 Running 0 108s -``` - -

-
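Optionally, before moving on, you can confirm from your host that Kafka answers on the advertised node port. This is a minimal check, assuming the Apache Kafka client binaries are installed locally and that `${KAFKA_HOME}` points to that installation (the same verification is described in the monitoring guide):

```bash
# List the topics reachable through the Minikube node port (31090)
${KAFKA_HOME}/bin/kafka-topics.sh --list --bootstrap-server $(minikube ip):31090
```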
-
-#### Scenario 2: Your event emitter (BPMN, BAW Advanced, Case, ODM, Content, or BAIW) is running on an external machine.
-
-If you plan to feed your Business Automation Insights instance with events from a Business Automation Workflow server or from an Operational Decision Manager server running on an external machine (for example, on IBM Cloud), you need to go through the following steps:
-
-1. Retrieve the IP address of your local machine (addressable from an external machine).
-1. Set up Kafka so that it advertises this IP address to clients.
-1. Set up VirtualBox to redirect the connection to your local machine IP to the Minikube VM.
-1. Disable your local firewall, or add a rule to allow remote connections to port `31090`. This is particularly important on macOS, where the firewall is enabled by default.
-
-In the following procedure, replace `1.2.3.4` with the actual IP address of your local machine:
-
-```
-VBoxManage controlvm "minikube" natpf1 "kafka service,tcp,,31090,,31090"
-helm repo add confluent https://confluentinc.github.io/cp-helm-charts
-helm repo update
-kubectl create ns kafka
-helm install --wait --name kafka-release --namespace kafka -f configuration/easy-install-kafka.yaml --set cp-kafka.customEnv.ADVERTISED_LISTENER_HOST=1.2.3.4 confluent/cp-helm-charts
-```
-
-After the command completes, check the deployment status of Kafka pods with `kubectl get pods -n kafka` until all pods are running.
The following output shows an example of a successfully completed deployment:

- -``` -NAME READY STATUS RESTARTS AGE -kafka-release-cp-kafka-0 2/2 Running 0 108s -kafka-release-cp-zookeeper-0 2/2 Running 0 108s -``` - -

-
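Before you configure the remote emitter, you can optionally check from the external machine that the forwarded Kafka port is reachable. This is only a sketch; it assumes `1.2.3.4` is the IP address of your local machine, as in the commands above, and that the `nc` (netcat) utility is available on the remote host:

```bash
# Run this from the external machine that hosts the event emitter
nc -vz 1.2.3.4 31090
```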
-
----
-**Re-installing Kafka when the external IP address changes**
-
-If your external IP address changes when you restart your computer, update the Kafka settings so that the new IP address is correctly advertised to Kafka clients.
-
-To update the Kafka settings, run the following command, replacing 2.3.4.5 with the actual new IP address of your local machine:
-
-```
-./ip-update.sh -i 2.3.4.5
-```
-
----
-### 5. Install IBM Business Automation Insights Developer Edition
-
-#### 1. Add IBM Charts repository
-
-```
-helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable
-helm repo update
-```
-
-#### 2. Create a security policy and a service account for Elasticsearch.
-
-```
-kubectl create -f configuration/bai-psp.yaml -n bai
-kubectl create rolebinding bai-rolebinding --role=bai-role --serviceaccount=bai:bai-release-bai-psp-sa -n bai
-```
-
-#### 3. Choose the type of event processing you want to deploy.
-
-You can choose: `bpmn`, `bawadv`, `icm`, `odm`, `content`, or `baiw`.
-
-```
-EVENT_PROCESSING_TYPE=
-```
-
-#### 4. Deploy the BAI release.
-
-```
-helm install ibm-charts/ibm-business-automation-insights-dev --version 3.2.0 --wait --name bai-release --namespace bai -f configuration/easy-install.yaml --set kafka.bootstrapServers=$(minikube ip):31090 --set ${EVENT_PROCESSING_TYPE}.install=true
-```
-
-#### 5. Verify
-
-- Run `kubectl get pods -n bai -w` to monitor the deployment status of the BAI pods.
-
  The following output shows an example of a successfully completed deployment:

    - -``` -$ kubectl get pods -n bai -NAME READY STATUS RESTARTS AGE -bai-release-bai-admin-6bc755fc5f-mwvl7 1/1 Running 0 36m -bai-release-bai-bpmn-bxknx 0/1 Completed 0 36m -bai-release-bai-flink-jobmanager-5bff88579b-vkhmn 1/1 Running 0 36m -bai-release-bai-flink-taskmanager-0 1/1 Running 0 36m -bai-release-bai-flink-zk-0 1/1 Running 0 36m -bai-release-bai-setup-5vrvd 0/1 Completed 0 36m -bai-release-ibm-dba-ek-client-6ccf856d5d-f7xk6 2/2 Running 0 36m -bai-release-ibm-dba-ek-data-0 1/1 Running 0 36m -bai-release-ibm-dba-ek-kibana-6f9c464574-zhxnq 2/2 Running 0 36m -bai-release-ibm-dba-ek-master-0 1/1 Running 0 36m -``` - -

    -
- -- Run `echo "https://$(minikube ip):31501"` to obtain the URL of Kibana. -- Kibana credentials are admin / passw0rd - -Note: -- Elasticsearch REST endpoint is available on port `31200`. -- The Business Automation Insights administration service is available on port `31100`. - -## Starting or stopping Minikube - -- To start Minikube: ```minikube start --cpus 2 --memory 6144``` -- To stop Minikube: ```minikube stop``` - -## Next step: configure your event emitter - -To configure your event emitter, you need the following information: - -- The **Kafka bootstrap URL**. By default, you can connect to Kafka from your host by using the bootstrap URL that is returned by this command: - - `echo $(minikube ip):31090` -- The **name of the Kafka topic** that Event Processing Jobs use to consume messages sent by event emitters: - - `bai-release-ingress` - -## Troubleshooting - -- After Minikube is restarted, the task manager is not running properly (READY: 0/1). Solution: Restart the job manager: `kubectl delete pod -n bai` - -- If your Minikube is not responsive anymore, you probably undersized it and deployed too many elements on it. It is safer to call `minikube delete` and start all over again than to try to fix separate issues. - -- If you get errors such as `Error: error validating "": error validating data: field` when you install Kafka or the Helm Chart for Business Automation Insights, use the exact Minikube and Helm versions this procedure was tested with (see [Prerequisites](#prerequisites)). - -- If, when Minikube starts, you get an error such as : ```💣 Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: creating clusterrolebinding: Post https://192.168.99.110:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.110:8443: connect: network is unreachable```, try the following actions: - - Delete the VirtualBox "vboxnet0" network adapter and try restarting. - - Turn ```off``` your VPN. - - Restart your computer - - See [Can't use Minikube on VPN](/~https://github.com/kubernetes/minikube/issues/1099) - -- If you get errors such as `Error: release kafka-release failed: namespaces "kafka" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "kafka"`, run the following commands to fix the issue: - - `kubectl --namespace kube-system create serviceaccount tiller` - - `kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller` - - `kubectl --namespace kube-system patch deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}` - -- In case the minikube VM is stopped suddently (aborted, power off...), checkpoints might be corrupted. In this case jobmanager will keep crashing, and a `kubectl logs bai-release-bai-flink-jobmanager --namespace bai | egrep -i error.*Could not read any of the . checkpoints from storage"` will show an error. - - run [recover-minikube-bai.sh](./recover-minikube-bai.sh?raw=true) - - monitor proper pod recovery using `kubectl --namespace bai get pods -w` - - Elasticsearch data will be recovered, but the Flink state will be reset, therefore the result of the processing is likely to be lost for the last events. 
- -- Troubleshooting Apache Flink jobs: [Knowledge Center - Troubleshooting Apache Flink jobs](http://engtest01w.fr.eurolabs.ibm.com:9190/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.bai/topics/con_bai_troubleshoot_jobs.html) - -*** diff --git a/BAI/platform/minikube/configuration/bai-psp.yaml b/BAI/platform/minikube/configuration/bai-psp.yaml deleted file mode 100644 index 37c7e10f..00000000 --- a/BAI/platform/minikube/configuration/bai-psp.yaml +++ /dev/null @@ -1,38 +0,0 @@ -apiVersion: policy/v1beta1 -kind: PodSecurityPolicy -metadata: - annotations: - kubernetes.io/description: "This policy is required to allow ibm-dba-ek pods running elasticsearch to use privileged containers" - name: bai-psp -spec: - privileged: true - runAsUser: - rule: RunAsAny - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - fsGroup: - rule: RunAsAny - volumes: - - '*' ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: bai-role - namespace: bai -rules: - - apiGroups: - - extensions - resourceNames: - - bai-psp - resources: - - podsecuritypolicies - verbs: - - use ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: bai-release-bai-psp-sa \ No newline at end of file diff --git a/BAI/platform/minikube/configuration/easy-install-kafka.yaml b/BAI/platform/minikube/configuration/easy-install-kafka.yaml deleted file mode 100644 index 0503d898..00000000 --- a/BAI/platform/minikube/configuration/easy-install-kafka.yaml +++ /dev/null @@ -1,33 +0,0 @@ -# -# BAI on Minikube (easy install) - kafka values definition -# -cp-kafka: - brokers: 1 - customEnv: - ADVERTISED_LISTENER_HOST: "" - configurationOverrides: - "advertised.listeners": |- - EXTERNAL://$(ADVERTISED_LISTENER_HOST):31090 - "offsets.topic.replication.factor": 1 - nodeport: - enabled: true - heapOptions: "-Xms256M -Xmx256M" - persistence: - enabled: false - disksPerBroker: 0 - -cp-zookeeper: - servers: 1 - heapOptions: "-Xms256M -Xmx256M" - persistence: - enabled: false - -cp-schema-registry: - enabled: false -cp-kafka-rest: - enabled: false -cp-kafka-connect: - enabled: false -cp-ksql-server: - enabled: false - diff --git a/BAI/platform/minikube/configuration/easy-install.yaml b/BAI/platform/minikube/configuration/easy-install.yaml deleted file mode 100644 index b96ba5d4..00000000 --- a/BAI/platform/minikube/configuration/easy-install.yaml +++ /dev/null @@ -1,109 +0,0 @@ -# -# BAI on Minikube (easy install) - BAI values definition -# -persistence: - useDynamicProvisioning: true - -kafka: - bootstrapServers: "kafka-release-cp-kafka-headless:9092" - securityProtocol: "PLAINTEXT" - -elasticsearch: - install: true - -settings: - egress: false - ingressTopic: bai-release-ingress - serviceTopic: bai-release-service - -flink: - taskManagerHeapMemory: 400 - taskManagerMemory: 500 - taskManagerCPU: 0.5 - zookeeper: - replicas: 1 - resources: - requests: - memory: "100Mi" - cpu: "50m" - limits: - memory: "200Mi" - cpu: "200m" - -admin: - username: "admin" - password: "passw0rd" - serviceType: NodePort - externalPort: 31100 - -flinkPv: - existingClaimName: "minikube-bai-pvc" - capacity: "2Gi" - -bpmn: - install: false - -icm: - install: false - -odm: - install: false - -content: - install: false - -bawadv: - install: false - -baiw: - install: false - -ibm-dba-ek: - elasticsearch: - probeInitialDelay: 120 - data: - snapshotStorage: - enabled: false - storage: - persistent: true - useDynamicProvisioning: false - storageClass: "bai-elasticsearch-pv" - size: "1Gi" - heapSize: "392m" - resources: - limits: - memory: 
"640Mi" - cpu: "200m" - requests: - memory: "392Mi" - cpu: "100m" - client: - serviceType: NodePort - externalPort: 31200 - heapSize: "392m" - resources: - limits: - memory: "1000Mi" - cpu: "200m" - requests: - memory: "392Mi" - cpu: "100m" - master: - heapSize: "392m" - resources: - limits: - memory: "1000Mi" - cpu: "200m" - requests: - memory: "256Mi" - cpu: "100m" - kibana: - serviceType: NodePort - externalPort: 31501 - resources: - limits: - memory: "512Mi" - cpu: "150m" - requests: - memory: "256Mi" - cpu: "100m" diff --git a/BAI/platform/minikube/configuration/pv.yaml b/BAI/platform/minikube/configuration/pv.yaml deleted file mode 100644 index 9698509e..00000000 --- a/BAI/platform/minikube/configuration/pv.yaml +++ /dev/null @@ -1,55 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: bai-elasticsearch-data-1 -spec: - storageClassName: "bai-elasticsearch-pv" - accessModes: - - ReadWriteOnce - capacity: - storage: 1Gi - hostPath: - path: /data/bai-elasticsearch-data-1 - persistentVolumeReclaimPolicy: Recycle ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: bai-elasticsearch-master-1 -spec: - storageClassName: "bai-elasticsearch-pv" - accessModes: - - ReadWriteOnce - capacity: - storage: 1Gi - hostPath: - path: /data/bai-elasticsearch-master-1 - persistentVolumeReclaimPolicy: Recycle ---- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: minikube-bai-pv -spec: - accessModes: - - ReadWriteMany - capacity: - storage: 2Gi - hostPath: - path: /data/bai - persistentVolumeReclaimPolicy: Retain - claimRef: - namespace: bai - name: minikube-bai-pvc ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: minikube-bai-pvc -spec: - storageClassName: "" - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi diff --git a/BAI/platform/minikube/get-logs.sh b/BAI/platform/minikube/get-logs.sh deleted file mode 100755 index d7697c4f..00000000 --- a/BAI/platform/minikube/get-logs.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -BAI_NAMESPACE="bai" -KAFKA_NAMESPACE="kafka" -LOG_DIR="./logs" - -echo "Creating logs directory" - -mkdir -p ${LOG_DIR} - -echo "Retrieving logs for Kafka components" -kubectl logs $(kubectl get pods -n ${KAFKA_NAMESPACE} | grep kafka-release-cp-kafka- | awk '{print $1}') cp-kafka-broker -n ${KAFKA_NAMESPACE} > ${LOG_DIR}/kafka.log -kubectl logs -p $(kubectl get pods -n ${KAFKA_NAMESPACE} | grep kafka-release-cp-kafka- | awk '{print $1}') cp-kafka-broker -n ${KAFKA_NAMESPACE} > ${LOG_DIR}/kafka.previous.log -kubectl logs $(kubectl get pods -n ${KAFKA_NAMESPACE} | grep kafka-release-cp-zookeeper- | awk '{print $1}') cp-zookeeper-server -n ${KAFKA_NAMESPACE} > ${LOG_DIR}/kafka-zookeeper.log -kubectl logs -p $(kubectl get pods -n ${KAFKA_NAMESPACE} | grep kafka-release-cp-zookeeper- | awk '{print $1}') cp-zookeeper-server -n ${KAFKA_NAMESPACE} > ${LOG_DIR}/kafka-zookeeper.previous.log - -echo "Retrieving logs for Elasticsearch components" -kubectl logs $(kubectl get pods -n ${BAI_NAMESPACE} | grep bai-release-ibm-dba-ek-master- | awk '{print $1}') -n ${BAI_NAMESPACE} > ${LOG_DIR}/elasticsearch-master.log -kubectl logs -p $(kubectl get pods -n ${BAI_NAMESPACE} | grep bai-release-ibm-dba-ek-master- | awk '{print $1}') -n ${BAI_NAMESPACE} > ${LOG_DIR}/elasticsearch-master.previous.log -kubectl logs $(kubectl get pods -n ${BAI_NAMESPACE} | grep bai-release-ibm-dba-ek-data- | awk '{print $1}') -n ${BAI_NAMESPACE} > ${LOG_DIR}/elasticsearch-data.log -kubectl logs -p $(kubectl get pods -n ${BAI_NAMESPACE} | grep 
bai-release-ibm-dba-ek-data- | awk '{print $1}') -n ${BAI_NAMESPACE} > ${LOG_DIR}/elasticsearch-data.previous.log -kubectl logs $(kubectl get pods -n ${BAI_NAMESPACE} | grep bai-release-ibm-dba-ek-client- | awk '{print $1}') -n ${BAI_NAMESPACE} > ${LOG_DIR}/elasticsearch-client.log -kubectl logs -p $(kubectl get pods -n ${BAI_NAMESPACE} | grep bai-release-ibm-dba-ek-client- | awk '{print $1}') -n ${BAI_NAMESPACE} > ${LOG_DIR}/elasticsearch-client.previous.log - -echo "Retrieving logs for Flink components" -kubectl logs $(kubectl get pods -n ${BAI_NAMESPACE} | grep bai-release-bai-flink-jobmanager- | awk '{print $1}') -n ${BAI_NAMESPACE} > ${LOG_DIR}/flink-jobmanager.log -kubectl logs -p $(kubectl get pods -n ${BAI_NAMESPACE} | grep bai-release-bai-flink-jobmanager- | awk '{print $1}') -n ${BAI_NAMESPACE} > ${LOG_DIR}/flink-jobmanager.previous.log -for pod in $(kubectl get pods -n bai | grep bai-release-bai-flink-taskmanager- | awk '{print $1}'); do `kubectl logs $pod -n ${BAI_NAMESPACE} > ${LOG_DIR}/${pod#bai-release-bai-}.log`; done -for pod in $(kubectl get pods -n bai | grep bai-release-bai-flink-taskmanager- | awk '{print $1}'); do `kubectl logs -p $pod -n ${BAI_NAMESPACE} > ${LOG_DIR}/${pod#bai-release-bai-}.previous.log`; done -kubectl logs $(kubectl get pods -n ${BAI_NAMESPACE} | grep bai-release-bai-flink-zk- | awk '{print $1}') -n ${BAI_NAMESPACE} > ${LOG_DIR}/flink-zookeeper.log -kubectl logs -p $(kubectl get pods -n ${BAI_NAMESPACE} | grep bai-release-bai-flink-zk- | awk '{print $1}') -n ${BAI_NAMESPACE} > ${LOG_DIR}/flink-zookeeper.previous.log diff --git a/BAI/platform/minikube/install-bai-minikube.sh b/BAI/platform/minikube/install-bai-minikube.sh deleted file mode 100755 index deac5aa0..00000000 --- a/BAI/platform/minikube/install-bai-minikube.sh +++ /dev/null @@ -1,182 +0,0 @@ -#!/bin/bash - -LVAR_SCRIPT_NAME="$(basename $0)" -LVAR_BAI_VERSION="3.2.0" -LVAR_MINIKUBE_VERSION="1.4.0" -LVAR_LOCALHOST_IP="" -LVAR_FORCE_MINIKUBE_VERSION="false" -LVAR_VM_DRIVER="" -LVAR_VBOX_NETWORKS="" -LVAR_CPUS="2" -LVAR_MEMORY="6144" - -set -e - -# Common script utilities -source ./utilities.sh - -showHelp() { - echo - echo "--------------------------------------------------------------------------" - echo "Installs a Business Automation Insights release on minikube." - echo "--------------------------------------------------------------------------" - echo "Prerequisites" - echo "These files must be present in the same directory:" - echo " - configuration/pv.yaml" - echo " - configuration/bai-psp.yaml" - echo " - configuration/easy-install-kafka.yaml" - echo " - configuration/easy-install.yaml" - echo " - install-bai-minikube.sh" - echo - echo "--------------------------------------------------------------------------" - echo "Arguments:" - echo " -e " - echo " Mandatory. The argument must have one of the following values:" - echo " - bpmn " - echo " - bawadv " - echo " - icm " - echo " - odm " - echo " - content " - echo " - baiw " - echo " -i " - echo " Optional. Needed only if the event emitter is not present on the local machine." - echo " Defaults to the value of \"minikube ip\"." - echo " -c" - echo " Optional. Specifies the number of CPU to be used (defaults to 2)." - echo " -m" - echo " Optional. Specifies the amount of memory (megabytes) to be used (defaults to 6144)." - echo " -f" - echo " Optional. Bypasses the minikube version validation." - echo - echo " -h: Displays this help." 
- echo - echo "Examples:" - echo - echo " ./${LVAR_SCRIPT_NAME} -e odm" - echo - echo "---------------------------------------------------------------" - exit 1 -} -echo - -disableVirtualBoxDHCP() { - # this is supposed to work on both Win10/GitBash and OSx Mojave platforms. - VBoxManage list dhcpservers > dhcpList.txt - IP_MASK=$(minikube ip | cut -d "." -f -3) - - cat dhcpList.txt | grep NetworkName > names.txt - cat dhcpList.txt | grep lowerIPAddress > ips.txt - - LVAR_MINIKUBE_NETWORK_NAME=$(awk 'BEGIN {OFS=" "}{ - getline line < "names.txt" - print $0,line - } ' ips.txt | grep "$IP_MASK" | cut -d ":" -f 3 | tr -s " " | xargs) - - rm dhcpList.txt names.txt ips.txt - VBoxManage dhcpserver modify --netname "$LVAR_MINIKUBE_NETWORK_NAME" --disable - echo "Disabled DHCP server on VirtualBox network name: "$LVAR_MINIKUBE_NETWORK_NAME"" -} - -while getopts :fhd:e:i:c:m: option; -do - case ${option} in - c) - LVAR_CPUS=$OPTARG - echo "Number of CPUs is set to ${LVAR_CPUS} units" - ;; - m) - LVAR_MEMORY=$OPTARG - echo "Amount of memory is set to ${LVAR_MEMORY} megabytes" - ;; - e) - EVENT_PROCESSING_TYPE=$OPTARG - echo "Event processing is for ${EVENT_PROCESSING_TYPE}" - ;; - h) - showHelp - ;; - i) - LVAR_LOCALHOST_IP=$OPTARG - checkValidIP $LVAR_LOCALHOST_IP - echo "Local machine IP address: $LVAR_LOCALHOST_IP" - ;; - f) - LVAR_FORCE_MINIKUBE_VERSION="true" - ;; - d) - LVAR_VM_DRIVER=$OPTARG - echo "Use vm driver: ${LVAR_VM_DRIVER}" - ;; - \?) - echo "Invalid option: -${OPTARG}" - exit 1 - ;; - esac -done -echo - -if [ -z "${EVENT_PROCESSING_TYPE}" ]; then - echo "ERROR: You must provide an event type to process...." - showHelp -fi - -if [ "${EVENT_PROCESSING_TYPE}" != "odm" -a "${EVENT_PROCESSING_TYPE}" != "icm" -a "${EVENT_PROCESSING_TYPE}" != "bpmn" -a "${EVENT_PROCESSING_TYPE}" != "bawadv" -a "${EVENT_PROCESSING_TYPE}" != "content" -a "${EVENT_PROCESSING_TYPE}" != "baiw" ]; then - echo "ERROR: This event type is invalid and cannot be processed: ${EVENT_PROCESSING_TYPE}" - showHelp -fi - -checkFileExist "./configuration/pv.yaml" -checkFileExist "./configuration/bai-psp.yaml" -checkFileExist "./configuration/easy-install-kafka.yaml" -checkFileExist "./configuration/easy-install.yaml" -checkFileExist "./install-bai-minikube.sh" -checkFileExist "./install-bai.sh" - -if [ "$LVAR_FORCE_MINIKUBE_VERSION" == "false" ]; then - echo "Checking the minikube version." - if echo "$(minikube version)" | grep "$LVAR_MINIKUBE_VERSION" > /dev/null; then - echo "The minikube version is correct." - else - echo "The minikube version is NOT correct. Only version $LVAR_MINIKUBE_VERSION is supported. Exiting." - echo "If you wish to skip this check, use the -f option." - exit 1 - fi -else - echo "You have chosen to use an unchecked version of minikube." -fi - -echo "Creating the minikube machine" - -if [ ! -z "$LVAR_VM_DRIVER" ]; then - MINIKUBE_OPTS=" --vm-driver $LVAR_VM_DRIVER" -fi - -# Using minikube version 1.4.0 concurrently with a version of Kubernetes higher than 1.15.4 exposes to -# /~https://github.com/kubernetes/minikube/issues/5429 related to kubernetes apiVersion update. -# Also due to previous versions reported to hang with macOS Catalina, minikube v1.4.0 becomes the recommended version. 
-minikube $MINIKUBE_OPTS start --cpus ${LVAR_CPUS} --memory ${LVAR_MEMORY} --kubernetes-version=v1.15.4 - -minikube docker-env --shell bash -eval $(minikube docker-env --shell bash) - -# setting kafka communication address -if [ -z "$LVAR_LOCALHOST_IP" ]; then - LVAR_LOCALHOST_IP="$(minikube ip)" -fi - - -minikube ssh "sudo mkdir -p /data/bai" -minikube ssh "sudo mkdir -p /data/bai-elasticsearch-data-1" -minikube ssh "sudo mkdir -p /data/bai-elasticsearch-master-1" -minikube ssh "sudo chmod -R 777 /data" - -if command -v VBoxManage; then - echo "Opening the Kafka communication port" - VBoxManage controlvm "minikube" natpf1 "kafka service,tcp,,31090,,31090" - disableVirtualBoxDHCP -else - echo "Warning: VirtualBox does not exist. The Kafka communication port cannot be opened." - echo "The event emitter must be hosted locally." -fi - -./install-bai.sh -e "$EVENT_PROCESSING_TYPE" -i "$(minikube ip)" -j "$LVAR_LOCALHOST_IP" -p ./configuration/pv.yaml -s ./configuration/bai-psp.yaml -k ./configuration/easy-install-kafka.yaml -b ./configuration/easy-install.yaml diff --git a/BAI/platform/minikube/install-bai.sh b/BAI/platform/minikube/install-bai.sh deleted file mode 100755 index ca5654da..00000000 --- a/BAI/platform/minikube/install-bai.sh +++ /dev/null @@ -1,171 +0,0 @@ -#!/bin/bash - - -LVAR_SCRIPT_NAME="$(basename $0)" -LVAR_EMITTER_IP="" -LVAR_KAFKA_IP="" -LVAR_PV_YAML="" -LVAR_KAFKA_YAML="" -LVAR_BAI_YAML="" -LVAR_PSP_YAML="" -LVAR_CONFIG_MAP_YAML="" - -set -e - -# Common script utilities -source ./utilities.sh - -showHelp() { - echo - echo "--------------------------------------------------------------------------" - echo "Installs a Business Automation Insights release." - echo "--------------------------------------------------------------------------" - echo "Prerequisites" - echo "These files must be present in the same directory:" - echo " - A YAML file that defines the persistent volumes" - echo " - Optionally, a YAML file for Business Automation Insights ConfigMaps" - echo " - A YAML file that defines the pod security policy" - echo " - A YAML file for the Kafka installation" - echo " - A YAML file for the Business Automation Insights installation" - echo - echo "--------------------------------------------------------------------------" - echo "Arguments:" - echo " -e " - echo " Mandatory. The argument must have one of the following values:" - echo " - bpmn " - echo " - bawadv " - echo " - icm " - echo " - odm " - echo " - baiw " - echo " -p Mandatory. " - echo " -s Mandatory. " - echo " -k Mandatory. " - echo " -b Mandatory. " - echo " -i Mandatory. " - echo " -j Mandatory. " - echo " -c Optional. " - echo - echo " -h" - echo " Displays this help." 
- echo - echo "Example:" - echo - echo " ./${LVAR_SCRIPT_NAME} -e odm -i 9.x.x.x -j 9.x.x.x -p ./pv.yaml -c ./bai-configmap.yaml -s ./bai-psp.yaml -k ./easy-install-kafka.yaml -b ./easy-install.yaml" - echo - echo "---------------------------------------------------------------" - exit 1 -} -echo - -while getopts :e:p:k:b:c:s:i:j:h option; -do - case ${option} in - e) - EVENT_PROCESSING_TYPE=$OPTARG - echo "Event processing is for ${EVENT_PROCESSING_TYPE}" - ;; - p) - LVAR_PV_YAML=$OPTARG - checkFileExist "$LVAR_PV_YAML" - ;; - k) - LVAR_KAFKA_YAML=$OPTARG - checkFileExist "$LVAR_KAFKA_YAML" - ;; - b) - LVAR_BAI_YAML=$OPTARG - checkFileExist "$LVAR_BAI_YAML" - ;; - c) - LVAR_CONFIG_MAP_YAML=$OPTARG - checkFileExist "$LVAR_CONFIG_MAP_YAML" - ;; - s) - LVAR_PSP_YAML=$OPTARG - checkFileExist "$LVAR_PSP_YAML" - ;; - i) - LVAR_EMITTER_IP=$OPTARG - checkValidIP $LVAR_EMITTER_IP - echo "Event emitter IP address: $LVAR_EMITTER_IP" - ;; - j) - LVAR_KAFKA_IP=$OPTARG - checkValidIP $LVAR_KAFKA_IP - echo "Kafka bootstrap server IP address: $LVAR_KAFKA_IP" - ;; - h) - showHelp - ;; - \?) - echo "Invalid option: -${OPTARG}" - exit 1 - ;; - esac -done -echo - -if [ -z "${EVENT_PROCESSING_TYPE}" ]; then - echo "ERROR: You must provide an event type to process...." - showHelp -fi -if [ -z "${LVAR_PV_YAML}" ]; then - echo "ERROR: You must provide a configuration file for persistent volumes ...." - showHelp -fi -if [ -z "${LVAR_KAFKA_YAML}" ]; then - echo "ERROR: You must provide a Kafka configuration file...." - showHelp -fi -if [ -z "${LVAR_BAI_YAML}" ]; then - echo "ERROR: You must provide a configuration file for Business Automation Insights...." - showHelp -fi -if [ -z "${LVAR_PSP_YAML}" ]; then - echo "ERROR: You must provide a configuration file for the pod security policy...." - showHelp -fi -if [ -z "${LVAR_EMITTER_IP}" ]; then - echo "ERROR: You must provide the IP address of the event emitter host...." - showHelp -fi -if [ -z "${LVAR_KAFKA_IP}" ]; then - echo "ERROR: You must provide the IP address of the Kafka host...." - showHelp -fi -if [ ! -z "${LVAR_CONFIG_MAP_YAML}" ] && [ ! -f "${LVAR_CONFIG_MAP_YAML}" ]; then - echo "ERROR: The ConfigMap file ${LVAR_CONFIG_MAP_YAML} cannot be found...." - showHelp -fi - -echo "Creating the Business Automation Insights namespace" -kubectl create ns bai - -echo "Creating persistent volumes" -kubectl apply -f "$LVAR_PV_YAML" -n bai - -echo "Initializing helm " -helm init --wait -helm repo add confluent https://confluentinc.github.io/cp-helm-charts -helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable -helm repo update - -echo "Creating the Kafka namespace" -kubectl create ns kafka - -echo "Tiller is $(which tiller)" -echo "Installing Kafka" -# due to /~https://github.com/helm/helm/issues/3173 and others, adding a timeout argument... -helm install --wait --timeout 999999 --name kafka-release --namespace kafka -f "$LVAR_KAFKA_YAML" --set cp-kafka.customEnv.ADVERTISED_LISTENER_HOST=$(echo $LVAR_EMITTER_IP) confluent/cp-helm-charts - -if [ ! 
-z "$LVAR_CONFIG_MAP_YAML" ]; then - cp "$LVAR_CONFIG_MAP_YAML" charts/ibm-business-automation-insights-dev/templates -fi - - -echo "Creating a security policy and a service account for Elasticsearch" -kubectl create -f "$LVAR_PSP_YAML" -n bai -kubectl create rolebinding bai-rolebinding --role=bai-role --serviceaccount=bai:bai-release-bai-psp-sa -n bai - -echo "Installing Business Automation Insights" -helm install ibm-charts/ibm-business-automation-insights-dev --version 3.2.0 --wait --timeout 999999 --name bai-release --namespace bai -f "$LVAR_BAI_YAML" --set kafka.bootstrapServers=$(echo $LVAR_KAFKA_IP):31090 --set ${EVENT_PROCESSING_TYPE}.install=true diff --git a/BAI/platform/minikube/ip-update.sh b/BAI/platform/minikube/ip-update.sh deleted file mode 100755 index 56557c62..00000000 --- a/BAI/platform/minikube/ip-update.sh +++ /dev/null @@ -1,70 +0,0 @@ -#!/bin/bash - -LVAR_SCRIPT_NAME="$(basename $0)" -LVAR_BAI_VERSION="3.2.0" -LVAR_LOCALHOST_IP="" - -set -e - -# Common script utilities -source ./utilities.sh - -showHelp() { - echo - echo "--------------------------------------------------------------------------" - echo "Update a Kafka release on minikube to advertise a new IP address." - echo "--------------------------------------------------------------------------" - echo "Prerequisites" - echo "These files must be present in the same directory:" - echo " - configuration/easy-install-kafka.yaml" - echo - echo "--------------------------------------------------------------------------" - echo "Arguments:" - echo " -i " - echo " Mandatory. The remote IP address of your computer" - echo - echo " -h" - echo " Displays this help." - echo - echo "Examples:" - echo - echo " ./${LVAR_SCRIPT_NAME} -i 1.2.3.4" - echo - echo "---------------------------------------------------------------" - exit 1 -} -echo - -while getopts hi: option; -do - case ${option} in - i) - LVAR_LOCALHOST_IP=$OPTARG - checkValidIP $LVAR_LOCALHOST_IP - echo "Local machine IP address: $LVAR_LOCALHOST_IP" - ;; - h) - showHelp - ;; - \?) - echo "Invalid option: -${OPTARG}" - exit 1 - ;; - esac -done -echo - -if [ -z "${LVAR_LOCALHOST_IP}" ]; then - echo "ERROR: You must provide the external IP address of your computer...." - showHelp -fi - -checkFileExist "./configuration/easy-install-kafka.yaml" - -echo "Initializing helm " -helm init --wait - -echo "Tiller is $(which tiller)" - -echo "Upgrading the Kafka installation with new IP address ${LVAR_LOCALHOST_IP}" -helm upgrade --wait --timeout 999999 --namespace kafka -f configuration/easy-install-kafka.yaml --set cp-kafka.customEnv.ADVERTISED_LISTENER_HOST=${LVAR_LOCALHOST_IP} kafka-release confluent/cp-helm-charts diff --git a/BAI/platform/minikube/recover-minikube-bai.sh b/BAI/platform/minikube/recover-minikube-bai.sh deleted file mode 100755 index eebee431..00000000 --- a/BAI/platform/minikube/recover-minikube-bai.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/bin/bash - -set -e - -echo This script is designed to recover an IBM Business Automation Insights cluster on minikube if persisted checkpoints are corrupted, typically in case of virtual machine sudden stop. -echo WARNING: Elasticsearch data will be recovered, but the Flink state will be reset, therefore the result of the processing is likely to be lost for the last events. - -if [ `which jq | wc -l` == "0" ] - then - echo ERROR: jq is required to run this script, please install it on your system: https://stedolan.github.io/jq/ - exit 1 -fi - -echo "Backing up previous flink data in /data/bai.saved..." 
-minikube ssh "sudo cp -r /data/bai/ /data/bai.saved" - -echo "Removing flink related content..." -minikube ssh "sudo rm -rf /data/bai/checkpoints/*" -minikube ssh "sudo rm -rf /data/bai/recovery/*" -minikube ssh "sudo rm -rf /data/bai/savepoints/*" -minikube ssh "sudo rm -rf /data/bai/flink-zookeeper/*" - -echo "Restarting jobmanager and zookeeper pods..." -JOB_MANAGER_POD=`kubectl get pods -n bai | egrep jobmanager | awk '{print $1}'` -ZK_POD=`kubectl get pods -n bai | egrep flink-zk | awk '{print $1}'` -kubectl delete pod $JOB_MANAGER_POD -n bai -kubectl delete pod $ZK_POD -n bai - -PILLAR_LIST=`kubectl get pods -n bai | grep -v "dba" | grep -v flink | grep -v admin | grep -v setup | grep bai | cut -d " " -f 1 | cut -d "-" -f -4 | sort -u` - -for p in $PILLAR_LIST -do - echo "Restarting pillar job $p..." - kubectl get job $p -o json -n bai | jq 'del(.spec.selector)' | jq 'del(.spec.template.metadata.labels)' | kubectl replace --force -f - -done diff --git a/BAI/platform/minikube/utilities.sh b/BAI/platform/minikube/utilities.sh deleted file mode 100644 index 93c7a9a5..00000000 --- a/BAI/platform/minikube/utilities.sh +++ /dev/null @@ -1,59 +0,0 @@ -#!/bin/bash - -checkFileExist() { - if [ ! -f "$1" ]; then - echo "ERROR: The $1 file must be present." - exit 1 - fi -} - -checkValidIP() { - -# first testing IPV4 format - test='([1-9]?[0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])' - - if [[ $1 =~ ^$test\.$test\.$test\.$test$ ]] - then - echo "IP v4 is $1" - ret=0 - else - echo "$1 is not a valid IP v4, checking for IP v6." - checkValidIPV6 $1 - fi - return $ret -} - -checkValidIPV6() { - ipv6reg='^([0-9a-fA-F]{0,4}:){1,7}[0-9a-fA-F]{0,4}$' - var="$1" - - if [[ $var =~ $ipv6reg ]]; then - echo "IP v6 is $1" - ret=0 - else - echo "$1 is not a valid IP v6, exiting." - ret=1 - fi - return $ret -} -LVAR_BAI_VERSION="3.2.0" -LVAR_SPRINT_VERSION="dev" -LVAR_BAI_IMAGES_SPRINT="ibm-bai-dev-$LVAR_BAI_VERSION-$LVAR_SPRINT_VERSION.tar.gz" -LVAR_BAI_IMAGES="ibm-bai-dev-$LVAR_BAI_VERSION-dev.tar.gz" -LVAR_BAI_CHARTS_SPRINT="charts/ibm-business-automation-insights-dev-$LVAR_BAI_VERSION-$LVAR_SPRINT_VERSION.tgz" -LVAR_BAI_CHARTS="charts/ibm-business-automation-insights-dev-$LVAR_BAI_VERSION.tgz" - -expand-BAI-Charts() { - # moving sprint charts into regular charts - if [ -f "$LVAR_BAI_CHARTS_SPRINT" ]; then - mv "$LVAR_BAI_CHARTS_SPRINT" "$LVAR_BAI_CHARTS" - fi - tar xvf "$LVAR_BAI_CHARTS" -C charts/ -} - - -# moving sprint images into regular images -if [ -f "$LVAR_BAI_IMAGES_SPRINT" ]; then - mv "$LVAR_BAI_IMAGES_SPRINT" "$LVAR_BAI_IMAGES" -fi - diff --git a/BAN/README_config.md b/BAN/README_config.md new file mode 100644 index 00000000..341d0232 --- /dev/null +++ b/BAN/README_config.md @@ -0,0 +1,146 @@ +# Configuring IBM Business Automation Navigator 3.0.7 + +IBM Business Automation Navigator configuration settings are recorded and stored in the shared YAML file for operator deployment. After you prepare your environment, you add the values for your configuration settings to the YAML so that the operator can deploy your containers to match your environment. + +## Requirements and prerequisites + +Confirm that you have completed the following tasks to prepare to deploy your Business Automation Navigator images: + +- Prepare your Business Automation Navigator environment. These procedures include setting up databases, LDAP, storage, and configuration files that are required for use and operation. 
You must complete all of the [preparation steps for Business Automation Navigator](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bank8s.html) before you are ready to deploy the container images. Collect the values for these environment components; you use them to configure your Business Automation Navigator container deployment.
+
+- Prepare your container environment. See [Preparing to install automation containers on Kubernetes](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/welcome/com.ibm.dba.install/op_topics/tsk_prepare_env_k8s.html)
+
+> **Note**: If you plan to use UMS integration with Business Automation Navigator, note that you might encounter registration failure errors during deployment. This can happen if the UMS deployment is not ready by the time the other containers come up. The situation resolves in the next operator loop, so the errors can be ignored.
+
+## Prepare your security environment
+
+You must also create a secret for the security details of the LDAP directory and datasources that you configured in preparation for use with IBM Business Automation Navigator. Collect the user names and passwords to add to the secret. Using your values, run the following command:
+
+ ```
+kubectl create secret generic ibm-ban-secret \
+   --from-literal=navigatorDBUsername="user_name" \
+   --from-literal=navigatorDBPassword="xxxxxxx" \
+   --from-literal=ldapUsername="CN=CEAdmin,OU=Shared,OU=Engineering,OU=FileNet,DC=dockerdom,DC=ecm,DC=ibm,DC=com" \
+   --from-literal=ldapPassword="xxxxxxx" \
+   --from-literal=externalLdapUsername="cn=exUser1,ou=test1OU,dc=fncmad,dc=com" --from-literal=externalLdapPassword="xxxxxxx=" \
+   --from-literal=keystorePassword="xxxxxxx" \
+   --from-literal=ltpaPassword="xxxxxxx" \
+   --from-literal=appLoginUsername="user_name" \
+   --from-literal=appLoginPassword="xxxxxxx"
+ ```
+The secret you create is the value for the parameter `ban_secret_name`.
+
+### Root CA and trusted certificate list
+
+ The custom YAML file also requires values for the `root_ca_secret` and `trusted_certificate_list` parameters. The TLS secret contains the root CA's key value pair. You have the following choices for the root CA:
+ - You can generate a self-signed root CA.
+ - You can allow the operator (or the ROOTCA ansible role) to generate the secret with a self-signed root CA (by not specifying one).
+ - You can use a signed root CA. In this case, you create a secret that contains the root CA's key value pair in advance.
+
+ The list of the trusted certificate secrets can be a TLS secret or an opaque secret. An opaque secret must contain a tls.crt file for the trusted certificate. The TLS secret has a tls.key file as the private key.
+
+### Apply the Security Context Constraints
+
+Apply the required Security Context Constraints (SCC) by applying the [SCC YAML](../descriptors/scc-fncm.yaml) file.
+
+ ```bash
+ $ oc apply -f descriptors/scc-fncm.yaml
+ ```
+
+ > **Note**: `fsGroup` and `supplementalGroups` are `RunAsAny` and `runAsUser` is `MustRunAsRange`.
+
+
+## Customize the YAML file for your deployment
+
+All of the configuration values for the components that you want to deploy are included in the [ibm_cp4a_cr_template.yaml](../descriptors/ibm_cp4a_cr_template.yaml) file. Create a copy of this file on the system that you prepared for your container environment, for example `my_ibm_cp4a_cr_template.yaml`.
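For example, assuming you run the command from the directory that contains the `descriptors` folder, creating the copy can be as simple as:

```bash
# Keep the original template untouched and edit only the copy
cp descriptors/ibm_cp4a_cr_template.yaml descriptors/my_ibm_cp4a_cr_template.yaml
```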
+ +The custom YAML file includes the following sections that apply for all of the components: +- shared_configuration - Specify your deployment and your overall security information. +- ldap_configuration - Specify the directory service provider information for all components in this common section. +- datasource configuration - Specify the database information for all components in this common section. +- monitoring_configuration - Optional for deployments where you want to enable monitoring. +- logging_configuration - Optional for deployments where you want to enable logging. + +After the shared section, the YAML includes a section of parameters for each of the available components. If you plan to include a component in your deployment, you un-comment the parameters for that component and update the values. For some parameters, the default values are sufficient. For other parameters, you must supply values that correspond to your specific environment or deployment needs. + +The optional initialize_configuration and verify_configuration section includes values for a set of automatic set up steps for your IBM Business Automation Navigator deployment. + +If you want to exclude any components from your deployment, leave the section for that component and all related parameters commented out in the YAML file. + +A description of the configuration parameters is available in [Configuration reference for operators](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_ban_opparams.html) + +Use the information in the following sections to record the configuration settings for the components that you want to deploy. + +- [Shared configuration settings](README_config.md#shared-configuration-settings) +- [Business Automation Navigator settings](README_config.md#business-automation-navigator-settings) +- [Initialization settings](README_config.md#initialization-settings) +- [Verification settings](README_config.md#verification-settings) + +### Shared configuration settings + +Un-comment and update the values for the shared configuration, LDAP, datasource, monitoring, and logging parameters, as applicable. + +Use the secrets that you created in Preparing your security environment for the `root_ca_secret` and `trusted_certificate_list` values. + +> **Reminder**: If you plan to use External Share with the 2 LDAP model for configuring external users, update the LDAP values in the `ext_ldap_configuration` section of the YAML file with the information about the directory server that you configured for external users. If you are not using external share, leave this section commented out. + +For more information about the shared parameters, see the following topics: + +- [Shared parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opsharedparams.html) +- [LDAP parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_k8s_ldap.html) +- [Datasource parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_dbparams.html) +- [Monitoring parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opmonparams.html) + + +### Business Automation Navigator settings + +Use the `navigator_configuration` section of the custom YAML to provide values for the configuration of Business Automation Navigator. 
You provide details for configuration settings that you have already created, like the names of your persistent volume claims. You also provide names for pieces of your Business Automation Navigator environment, and tuning decisions for your runtime environment. + +In the Business Automation Navigator section, leave the `enable_appcues` setting with the default value, false. + +For more information about the settings, see [Business Automation Navigator parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_ban_opparams.html) + +### Initialization settings + +Use the `initialize_configuration` section of the custom YAML to provide values for the automatic initialization and setup of Content Platform Engine and Business Automation Navigator. The initialization container creates required configuration of IBM Business Automation Navigator. You also make decisions for your runtime environment. + +> **Important**: Do not enable initialization for your operator deployment if you plan to integrate UMS with Content Platform Engine or Business Automation Navigator. In this use case, you must manually create your Content Platform Engine domain, object stores, repositories, and desktops after deployment. If you are integrating UMS with Content Platform Engine and Business Automation Navigator, leave the `initialize_configuration` section commented out. + +You can edit the YAML to configure more than one of the available pieces in your automatically initialized environment. For example, if you want to create an additional Business Automation Navigator repository, you copy the stanza for the repository settings, paste it below the original, and add the new values for your additional repository: + + ``` +# icn_repos: + # - add_repo_id: "demo_repo1" + # add_repo_ce_wsi_url: "http://{{ meta.name }}-cpe-svc:9080/wsi/FNCEWS40MTOM/" + # add_repo_os_sym_name: "OS01" + # add_repo_os_dis_name: "OS01" + # add_repo_workflow_enable: false + # add_repo_work_conn_pnt: "pe_conn_os1:1" + # add_repo_protocol: "FileNetP8WSI" + + ``` + +You can create additional object stores, Content Search Services indexes, IBM Content Navigator repositories, and IBM Content Navigator desktops. + +For more information about the settings, see [Initialization parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opinitiparams.html) + +### Verification settings + +Use the `verify_configuration` section of the custom YAML to provide values for the automatic verification of your Content Platform Engine and IBM Content Navigator. The verify container works in conjunction with the automatic setup of the initialize container. You can accept most of the default settings for the verification. However, compare the settings with the values that you supply for the initialization settings. Specific settings like object store names and the Content Platform Engine connection point must match between these two configuration sections. + +For more information about the settings, see [Verify parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opverifyparams.html) + +## Complete the installation + +After you have set all of the parameters for the relevant components, return to to the install or update page for your platform to configure other components and complete the deployment with the operator. 
+
+Install pages:
+ - [Installing on Managed Red Hat OpenShift on IBM Cloud Public](../platform/roks/install.md)
+ - [Installing on Red Hat OpenShift](../platform/ocp/install.md)
+ - [Installing on Certified Kubernetes](../platform/k8s/install.md)
+
+Update pages:
+ - [Updating on Managed Red Hat OpenShift on IBM Cloud Public](../platform/roks/update.md)
+ - [Updating on Red Hat OpenShift](../platform/ocp/update.md)
+ - [Updating on Certified Kubernetes](../platform/k8s/update.md)
diff --git a/BAN/README_migrate.md b/BAN/README_migrate.md
new file mode 100644
index 00000000..f563a6ed
--- /dev/null
+++ b/BAN/README_migrate.md
@@ -0,0 +1,22 @@
+# Migrating Business Automation Navigator 3.0.x to V3.0.7
+
+Because of the change in the container deployment method, there is no upgrade path from previous versions of Business Automation Navigator to V3.0.7.
+
+To move a V3.0.x installation to V3.0.7, you prepare your environment and deploy the operator the same way you would for a new installation. The difference is that you use the configuration values for your previously configured environment, including datasource, LDAP, storage volumes, and so on, when you customize your deployment YAML file.
+
+Optionally, to protect your production deployment, you can create a replica of your data and use that datasource information for the operator deployment to test your migration. With this option, you follow the instructions for a new deployment.
+
+
+## Step 1: Collect parameter values from your existing deployment
+
+You can use the reference topics in the [Cloud Pak for Automation Knowledge Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_ban_opparams.html) to see the parameters that apply to your components and shared configuration.
+
+You will use the values from your existing deployment to update the custom YAML file for the new operator deployment. For more information, see [Configure Business Automation Navigator](README_config.md).
+
+> **Note**: When you are ready to deploy the V3.0.7 version of your Business Automation Navigator container, stop your previous container.
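+
+As a minimal sketch, assuming your existing 3.0.x environment was deployed with the Helm chart and using hypothetical release, deployment, and namespace names, you might record the old configuration values and then stop the previous container like this:
+
+```
+# Keep a copy of the values that were used by the existing 3.0.x Helm release
+helm get values navigator-release --tls > navigator-30x-values.yaml
+
+# Stop the previous container by scaling its deployment down to zero replicas
+kubectl scale deployment navigator-release-ibm-dba-navigator --replicas=0 -n navigator-project
+```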
+ +## Step 2: Return to the platform readme to migrate other components + +- [Managed OpenShift migrate page](../platform/roks/migrate.md) +- [OpenShift migrate page](../platform/ocp/migrate.md) +- [Kubernetes migrate page](../platform/k8s/migrate.md) diff --git a/BAI/configuration/.gitkeep b/BAN/configuration/.gitkeep similarity index 100% rename from BAI/configuration/.gitkeep rename to BAN/configuration/.gitkeep diff --git a/CONTENT/configuration/extShare/configDropins/overrides/ICNDS.xml b/BAN/configuration/ICN/configDropins/overrides/ICNDS.xml similarity index 100% rename from CONTENT/configuration/extShare/configDropins/overrides/ICNDS.xml rename to BAN/configuration/ICN/configDropins/overrides/ICNDS.xml diff --git a/NAVIGATOR/configuration/ICN/configDropins/overrides/ICNDS_HADR.xml b/BAN/configuration/ICN/configDropins/overrides/ICNDS_HADR.xml similarity index 100% rename from NAVIGATOR/configuration/ICN/configDropins/overrides/ICNDS_HADR.xml rename to BAN/configuration/ICN/configDropins/overrides/ICNDS_HADR.xml diff --git a/CONTENT/configuration/extShare/configDropins/overrides/ICNDS_Oracle.xml b/BAN/configuration/ICN/configDropins/overrides/ICNDS_Oracle.xml similarity index 100% rename from CONTENT/configuration/extShare/configDropins/overrides/ICNDS_Oracle.xml rename to BAN/configuration/ICN/configDropins/overrides/ICNDS_Oracle.xml diff --git a/BAS/README.md b/BAS/README.md deleted file mode 100644 index 654d208d..00000000 --- a/BAS/README.md +++ /dev/null @@ -1,568 +0,0 @@ -# IBM-DBA-BAS-PROD - -IBM Business Automation Studio - -## Introduction - -This Business Automation Studio Helm chart deploys an IBM Business Automation Studio environment for authoring and managing applications (apps) for the IBM Cloud Pak for Automation platform. - -## Chart Details - -This chart deploys several services and components. - -In the standard configuration, it includes these components: - -* IBM Resource Registry component -* IBM Business Automation Application Engine (App Engine) component -* IBM Business Automation Studio component - -To support those components for a standard installation, it generates: - -* 4 ConfigMaps that manage the configuration of Business Automation Studio server -* 2 deployments running the Business Automation Studio server -* 1 StatefulSet running Resource Registry -* 4 or more jobs for Business Automation Studio and Resource Registry -* 3 service accounts with related roles and role bindings -* 3 secrets to get access during chart installation -* 5 services to route the traffic to Business Automation Studio server - -## Prerequisites - - * [Red Hat OpenShift 3.11](https://docs.openshift.com/container-platform/3.11/welcome/index.html) or later - * [Helm and Tiller 2.9.1](/~https://github.com/helm/helm/releases) or later if you are [using Helm Charts](#using-helm-charts) to deploy your container images - * [Cert Manager 0.8.0](https://cert-manager.readthedocs.io/en/latest/getting-started/install/openshift.html) or later if you want to use Cert Manager to create the Transport Layer Security (TLS) key and certificate secrets. Otherwise, you can use Secure Sockets Layer (SSL) tools to create the TLS key and certificate secrets. - * [IBM DB2 11.1.2.2](https://www.ibm.com/products/db2-database) or later - * [IBM Cloud Pack For Automation - User Management Service (UMS)](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/con_ums.html) - * Persistent volume support - -### Preparing the environment - -1. 
Log in to OC (the OpenShift command line interface (CLI)) by running the following command. You are prompted for the password. - - ``` - oc login -u - ``` - -2. Create a project (namespace) for Business Automation Studio by running the following command: - - ``` - oc new-project - ``` - -3. Save and exit. - -4. To deploy the service account, role, and role binding successfully, assign the administrator role to the user for this namespace by running the following command: - - ``` - oc project - oc adm policy add-role-to-user admin - ``` - -5. If you want to operate persistent volumes (PVs), you must have the storage-admin cluster role, because PVs are a cluster resource in OpenShift. Add the role by running the following command: - - ``` - oc adm policy add-cluster-role-to-user storage-admin - ``` - -### Uploading the images - -Upload the Business Automation Studio images to the Docker registry of the Kubernetes cluster. See [Download a product package from PPA and load the images](https://github.ibm.com/dba/cert-kubernetes/blob/master/README.md#download-ppa-and-load-images). - -### Generating the database script and YAML files - -Use the [Business Application Studio platform Helm installation helper script](configuration) to generate the database script and YAML files for your environment. Follow the instructions in the [readme](configuration/README.md) for the following requirements: - -* Setting up the database for App Engine and Business Automation Studio -* Protecting sensitive configuration data -* Setting up the TLS key and certificate secrets -* Setting the service type - -If you don't want to use the helper script, you can create your own secrets and service type by following the instructions in the [Knowledge Center](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/welcome/kc_welcome_dba_distrib.html). - - -#### Notes -* Image pull secret: The script does not generate the image pull secret. You can follow the instructions in [Configuring the secret for pulling Docker images](#configuring-the-secret-for-pulling-docker-image) to create your own. -* Storage: The script does not generate a YAML file for persistent volumes. You can follow the instructions in [Implementing storage](#implementing-storage) to create your own perstent volumes. -* UMS-related configuration and TLS certificates: You must do this configuration if you have an existing UMS that is in a different namespace from the Business Automation Studio Helm chart. - -### Preparing UMS-related configuration and TLS certificates (optional) - -If you have an existing UMS that is in a different namespace from the Business Automation Studio Helm chart, follow these steps. - -If the UMS certificate is not signed by the same root CA, you must add the root CA as trusted instead of the UMS certificate. You should first get the root CA which is used to sign the UMS, and then save it to a certificate named like `ums-cert.crt`, then create the secret by running the following command: - - - kubectl create secret generic ca-tls-secret --from-file=tls.crt=./ums-cert.crt - - -You will get a secret named ca-tls-secret. Enter this secret value in every TLS section for Business Automation Studio, Resource Registry, and App Engine that is listed in [Configuration](#configuration). If you use [Business Application Studio platform Helm installation helper script](configuration) to set up Business Automation Studio, you can enter this secret value in [`ums.tlsSecretName`](configuration). 
The components will trust this certificate and communicate with UMS successfully. - - ``` - tls: - tlsSecretName: - tlsTrustList: - - ca-tls-secret - ``` - -### Configuring the secret for pulling Docker images - -If you're pulling Docker images from a private registry, you must provide a secret containing credentials for it. For instructions, see the [Kubernetes information about private registries](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line). - -This command can be used for one repository only. If your Docker images come from different repositories, you can create multiple image pull secrets and add the names in global.imagePullSecrets. Or, you can create secrets by using the custom Docker configuration file. - -The following sample shows the Docker auth file `config.json`: - -``` -{ - "auths": { - "url1.xx.xx.xx.xx": { - "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" - }, - "url2.xx.xx.xx.xx": { - "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" - }, - "url3.xx.xx.xx.xx": { - "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" - }, - "url4.xx.xx.xx.xx": { - "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" - } - } -} -``` - -The key under auths is the link to the Docker repository, and the value inside that repository name is the authentication string that is used for that repository. You can create the auth string with base64 by running the following command: - -``` - # echo -n : | base64 -``` - -You can replace the auth string by running the previous command with your config.json file. Then, create the image pull secret by running the following command: - -``` - kubectl create secret generic image-pull-secret --from-file=.dockerconfigjson= --type=kubernetes.io/dockerconfigjson -``` - -## Implementing storage - -This chart requires an existing persistent volume of any type. The minimum supported size is 1GB. Additionally, a persistent volume claim must be created and referenced in the configuration. - -### Persistent volume for JDBC Drivers (optional) - -If you don't create this persistent volume and related claim, leave `global.existingClaimName` empty and set `appengine.useCustomJDBCDrivers` to `false`. - -The persistent volume should be shareable by pods across the whole cluster. For a single-node Kubernetes cluster, you can use HostPath to create it. For multiple nodes in a cluster, use shareable storage, such as NFS or GlusterFS, for the persistent volume. It must be passed in the values.yaml files (see the global.existingClaimName property in the configuration). - -The following example shows the HostPath type of persistent volume. - -```yaml -kind: PersistentVolume -apiVersion: v1 -metadata: - name: jdbc-pv-volume - labels: - type: local -spec: - storageClassName: manual - capacity: - storage: 2Gi - accessModes: - - ReadWriteMany - hostPath: - path: "/mnt/data" -``` - -The following example shows the NFS type of persistent volume. - -```yaml -kind: PersistentVolume -apiVersion: v1 -metadata: - name: jdbc-pv-volume - labels: - type: nfs -spec: - storageClassName: manual - capacity: - storage: 2Gi - accessModes: - - ReadWriteMany - nfs: - path: /tmp - server: 172.17.0.2 -``` - -After you create a persistent volume, you can create a persistent volume claim to bind the correct persistent volume with the selector. Or, if you are using GlusterFS with dynamic allocation, create the persistent volume claim with the correct storageClassName to allow the persistent volume to be created automatically. 
- -The following example shows a persistent volume claim. - -```yaml -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jdbc-pvc -spec: - storageClassName: manual - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi -``` - -The mounted directory must contain a jdbc sub-directory, which in turn holds subdirectories with the required JDBC driver files. Add the following structure to the mounted directory (which in this case is called binaries): - -``` -/binaries - /jdbc - /db2 - /db2jcc4.jar - /db2jcc_license_cu.jar -``` - -The /jdbc folder and its contents depend on the configuration. Copy the JDBC driver files to the mounted directory as shown in the previous example. Make sure those files have the correct access. IBM Cloud Pak for Automation products on OpenShift use an arbitrary UID to run the applications, so make sure those files have read access for root(0) group. Enter the persistent volume claim name in the `global.existingClaimName`field. - -### Persistent volume for etcd data for Resource Registry (optional) - -Without a persistent volume, the Resource Registry cluster might be broken during pod relocation. -If you don't need data persistence for Resource Registry, you can skip this section by setting resourceRegistry.persistence.enabled to false in the configuration. Otherwise, you must create a persistent volume. - -The following example shows a persistent volume definition using NFS. - -```yaml -kind: PersistentVolume -apiVersion: v1 -metadata: - name: etcd-data-volume - labels: - type: nfs -spec: - storageClassName: manual - capacity: - storage: 3Gi - accessModes: - - ReadWriteOnce - nfs: - path: /nfs/general/rrdata - server: 172.17.0.2 -``` - -You don't need to create a persistent volume claim for Resource Registry. Resource Registry is a StatefulSet, so it creates the persistent volume claim based on the template in the chart. See the [Kubernetes StatefulSets document](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) for more details. - -Notes: - -* You must give root(0) group read/write access to the mounted directories by running the following command: - - ```text - chown -R 50001:0 - chmod g+rw - ``` - -* Each Resource Registry server uses its own persistent volume. Create persistent volumes based on the replicas (resourceRegistry.replicaCount in the configuration). - -### Persistent volume for sharing toolkit storage (optional) - -If you don't want the Business Automation Studio to import shared toolkits automatically, you can leave `global.contributorToolkitsPVC` empty. - -To integrate contributors, toolkit (twx) files can be imported into Business Application Studio. Place the toolkit package in shared storage and create the persistent volume for that storage by referring to the following example files. Then enter the persistent volume claim name in `global.contributorToolkitsPVC`. 
- -```yaml -kind: PersistentVolume -apiVersion: v1 -metadata: - name: toolkit-pv-volume - labels: - type: nfs -spec: - storageClassName: toolkit-pv - capacity: - storage: 2Gi - accessModes: - - ReadWriteMany - nfs: - path: /mptest/toolkit - server: 9.111.101.131 ------------------------- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: shared-storage-pvc -spec: - storageClassName: toolkit-pv - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi -``` - -Notes: - -* You must give root(0) group read/write access to the mounted directories by running the following command: - - ```text - chown -R 50001:0 - chmod g+rw - ``` - -### Configuring Redis for App Engine (optional) - -You can configure the App Engine with Remote Dictionary Server (Redis) to provide more reliable service, it is mandatory if you want to use mutiple active replicas for App Engine. - -1. Update the Redis host, port, and Time To Live (TTL) settings in `values.yaml` - - ```yaml - redis: - host: - port: - ttl: 1800 - ``` - -2. If Redis is protected by a password, enter it in the `REDIS_PASSWORD` field in the `ae-secret-credential` secret that you created in [Protecting sensitive configuration data](#Protecting-sensitive-configuration-data). - -3. If you want to protect Redis communication with TLS, you have the following options: - - * Sign the Redis certificate with a well-known CA. - * Sign the Redis certificate with the same root CA that is used by this installation. - * Use a zero depth self-signed certificate or sign the certificate with another root CA. Then save the certificate or root CA in the secret and enter the secret name in `.Values.appengine.tls.tlsTrustList`. - -## Red Hat OpenShift SecurityContextConstraints Requirements - -The predefined SecurityContextConstraints name [`restricted`](https://ibm.biz/cpkspec-scc) has been verified for this chart. If your target namespace is bound to this SecurityContextConstraints resource, you can proceed to install the chart. - -This chart also defines a custom SecurityContextConstraints definition that can be used to finely control the permissions and capabilities needed to deploy this chart. - -- From the user interface, you can copy and paste the following snippets to enable the custom SecurityContextConstraints. - - Custom SecurityContextConstraints definition: - - ```yaml - apiVersion: security.openshift.io/v1 - kind: SecurityContextConstraints - metadata: - annotations: - kubernetes.io/description: "This policy is the most restrictive, - requiring pods to run with a non-root UID, and preventing pods from accessing the host." 
- cloudpak.ibm.com/version: "1.0.0" - name: ibm-dba-bas-scc - allowHostDirVolumePlugin: false - allowHostIPC: false - allowHostNetwork: false - allowHostPID: false - allowHostPorts: false - allowPrivilegedContainer: false - allowPrivilegeEscalation: false - allowedCapabilities: [] - allowedFlexVolumes: [] - allowedUnsafeSysctls: [] - defaultAddCapabilities: [] - defaultPrivilegeEscalation: false - forbiddenSysctls: - - "*" - fsGroup: - type: MustRunAs - ranges: - - max: 65535 - min: 1 - readOnlyRootFilesystem: false - requiredDropCapabilities: - - ALL - runAsUser: - type: MustRunAsNonRoot - seccompProfiles: - - docker/default - seLinuxContext: - type: RunAsAny - supplementalGroups: - type: MustRunAs - ranges: - - max: 65535 - min: 1 - volumes: - - configMap - - downwardAPI - - emptyDir - - persistentVolumeClaim - - projected - - secret - priority: 0 - ``` - -## Resources Required - -Follow the OpenShift instructions in [Planning Your Installation](https://docs.openshift.com/container-platform/3.11/install/index.html#single-master-single-box). Then check the required resources in [System and Environment Requirements](https://docs.openshift.com/container-platform/3.11/install/prerequisites.html) and set up your environment. - -| Component name | Container | CPU | Memory | -| --- | --- | --- | --- | -| Business Automation Studio | BAStudio container | 2 | 3Gi | -| Business Automation Studio | Init containers | 200m | 128Mi | -| Business Automation Studio | JMS containers | 500m | 512Mi | -| App Engine | App Engine container | 1 | 512Mi | -| App Engine | Init Containers | 200m | 128Mi | -| Resource Registry | Resource Registry container | 100m | 128Mi | -| Resource Registry | Init containers | 100m | 128Mi | - - -## Installing the Chart - -You can deploy your container images with the following methods: - -- [Using Helm charts](helm-charts/README.md) -- [Using Kubernetes YAML](k8s-yaml/README.md) - - -## Configuration - | Parameter | Description | Default | -| -------------------------------------- | ----------------------------------------------------- | ---------------------------------------------------- | -| `global.existingClaimName` | Existing persistent volume claim name for JDBC and ODBC library | | -| `global.nonProductionMode` | Production mode. This value must be false. 
| `false` | -| `global.imagePullSecrets` | Existing Docker image secret | `image-pull-secret` | -| `global.caSecretName` | Existing CA secret | `ca-tls-secret` | -| `global.dnsBaseName` | Kubernetes Domain Name Server (DNS) base name | `svc.cluster.local` | -| `global.contributorToolkitsPVC` | Persistent volume for contributor toolkits storage | `` | -| `global.image.keytoolInitcontainer` | Image name for TLS init container | `dba-keytool-initcontainer:19.0.2` | -| `global.ums.serviceType` | UMS service type: `NodePort`, `ClusterIP`, or `Ingress` | | -| `global.ums.hostname` | UMS external host name | | -| `global.ums.port` | UMS port (only effective when using NodePort service) | | -| `global.ums.adminSecretName` | Existing UMS administrative secret for sensitive configuration data | | -| `global.baStudio.serviceType` | Business Automation Studio service type: `NodePort`, `ClusterIP`, or `Ingress` | | -| `global.baStudio.hostname` | Business Automation Studio external host name | | -| `global.baStudio.port` | Business Automation Studio port (only effective when using NodePort service) | | -| `global.baStudio.adminSecretName` | Business Automation Studio Secret for administration | | -| `global.baStudio.jmsPersistencePVC` | Business Automation Studio JMS persistent volume claim | | -| `global.resourceRegistry.hostname` | Resource Registry external host name | | -| `global.resourceRegistry.port` | Resource Registry port for using NodePort Service | | -| `global.resourceRegistry.adminSecretName` | Existing Resource Registry administrative secret for sensitive configuration | | -| `global.appEngine.serviceType` | App Engine service type: `NodePort`, `ClusterIP`, or `Ingress` | | -| `global.appEngine.hostname` | App Engine external host name | | -| `global.appEngine.port` | App Engine port (only effective when using NodePort service) | | -| `baStudio.install` | Switch for installing Business Automation Studio | `true` | -| `baStudio.replicaCount` | Number of deployment replicas | `1` | -| `baStudio.images.baStudio` | Image name for Business Automation Studio container | `20190624-064834.0.linux:19.0.0.1` | -| `baStudio.images.tlsInitContainer` | Image name for TLS init container | `dba-keytool-initcontainer:19.0.2` | -| `baStudio.images.ltpaInitContainer` | Image name for job container | `dba-keytool-jobcontainer:19.0.2` | -| `baStudio.images.umsInitRegistration` | Image name for UMS container | `dba-umsregistration-initjob:19.0.2` | -| `baStudio.images.jmsContainer` | Image name for JMS container | `baw-jms-server:19.0.2` | -| `baStudio.images.pullPolicy` | Pull policy for all containers | `IfNotPresent` | -| `baStudio.tls.tlsSecretName` | Existing TLS secret containing `tls.key` and `tls.crt`| | -| `baStudio.tls.tlsTrustList` | Existing TLS trust secret | `[]` | -| `baStudio.database.name` | Business Automation Studio database name | | -| `baStudio.database.host` | Business Automation Studio database host | | -| `baStudio.database.port` | Business Automation Studio database port | | -| `baStudio.database.type` | Business Automation Studio database type: `db2` | | -| `baStudio.autoscaling.enabled` | Enable the Horizontal Pod Autoscaler for Business Automation Studio | `false` | -| `baStudio.autoscaling.minReplicas` | Minimum limit for the number of pods for Business Automation Studio | `2` | -| `baStudio.autoscaling.maxReplicas` | Maximum limit for the number of pods for Business Automation Studio | `5` | -| `baStudio.autoscaling.targetAverageUtilization` | Target average CPU utilization over 
all the pods for Business Automation Studio | `80` | -| `baStudio.contentSecurityPolicy` | ContentSecurityPolicy for Business Automation Studio | `upgrade-insecure-requests` | -| `baStudio.resources.bastudio.limits.cpu` | Maximum amount of CPU that is required for Business Automation Studio | `4` | -| `baStudio.resources.bastudio.limits.memory` | Maximum amount of memory that is required for Business Automation Studio | `3Gi` | -| `baStudio.resources.bastudio.requests.cpu` | Minimum amount of CPU that is required for Business Automation Studio | `2` | -| `baStudio.resources.bastudio.requests.memory` | Minimum amount of memory that is required for Business Automation Studio | `2Gi` | -| `baStudio.resources.initProcess.limits.cpu` | Maximum amount of CPU that is required for Business Automation Studio init processes | `500m` | -| `baStudio.resources.initProcess.limits.memory` | Maximum amount of memory that is required for Business Automation Studio init processes | `512Mi` | -| `baStudio.resources.initProcess.requests.cpu` | Minimum amount of CPU that is required for Business Automation Studio init processes | `200m` | -| `baStudio.resources.initProcess.requests.memory` | Minimum amount of memory that is required for Business Automation Studio init processes | `256Mi` | -| `baStudio.resources.jms.limits.cpu` | Maximum amount of CPU that is required for Business Automation Studio Jms Server | `1` | -| `baStudio.resources.jms.limits.memory` | Maximum amount of memory that is required for Business Automation Studio Jms Server | `1Gi` | -| `baStudio.resources.jms.requests.cpu` | Minimum amount of CPU that is required for Business Automation Studio Jms Server | `500m` | -| `baStudio.resources.jms.requests.memory` | Minimum amount of memory that is required for Business Automation Studio Jms Server | `512Mi` | -| `appEngine.install` | Switch for installing App Engine | `true` | -| `appEngine.replicaCount` | Number of App Engine deployment replicas | `1` | -| `appEngine.probes.initialDelaySeconds` | Number of seconds after the App Engine container has started before liveness or readiness probes are initiated | `5` | -| `appEngine.probes.periodSeconds` | How often (in seconds) to perform the probe. The default is 10 seconds. Minimum value is 1. | `10` | -| `appEngine.probes.timeoutSeconds` | Number of seconds after which the probe times out. The default is 1 second. Minimum value is 1. | `5` | -| `appEngine.probes.successThreshold` | Minimum consecutive successes for the probe to be considered successful after failing. Minimum value is 1. | `5` | -| `appEngine.probes.failureThreshold` | When a pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Minimum value is 1. 
| `3` | -| `appEngine.images.appEngine` | Image name for App Engine container | `solution-server:19.0.2` | -| `appEngine.images.tlsInitContainer` | Image name for TLS init container | `dba-keytool-initcontainer:19.0.2` | -| `appEngine.images.dbJob` | Image name for App Engine database job container | `solution-server-helmjob-db:19.0.2` | -| `appEngine.images.oidcJob` | Image name for OpenID Connect (OIDC) registration job container | `dba-umsregistration-initjob:19.0.2` | -| `appEngine.images.dbcompatibilityInitContainer` | Image name for database compatibility init container | `dba-dbcompatibility-initcontainer:19.0.2` | -| `appEngine.images.pullPolicy` | Pull policy for all App Engine containers | `IfNotPresent` | -| `appEngine.tls.tlsSecretName` | Existing TLS secret containing `tls.key` and `tls.crt`| | -| `appEngine.tls.tlsTrustList` | Existing TLS trust secret | `[]` | -| `appEngine.database.name` | App Engine database name | | -| `appEngine.database.host` | App Engine database host | | -| `appEngine.database.port` | App Engine database port | | -| `appEngine.database.type` | App Engine database type: `db2` | | -| `appEngine.database.currentSchema` | App Engine database Schema | | -| `appEngine.database.initialPoolSize` | Initial pool size of the App Engine database | `1` | -| `appEngine.database.maxPoolSize` | Maximum pool size of the App Engine database | `10` | -| `appEngine.database.uvThreadPoolSize` | UV thread pool size of the App Engine database | `4` | -| `appEngine.database.maxLRUCacheSize` | Maximum Least Recently Used (LRU) cache size of the App Engine database | `1000` | -| `appEngine.database.maxLRUCacheAge` | Maximum LRU cache age of the App Engine database | `600000` | -| `appEngine.useCustomJDBCDrivers` | Toggle for custom JDBC drivers | `false` | -| `appEngine.adminSecretName` | Existing App Engine administrative secret for sensitive configuration data | | -| `appEngine.logLevel.node` | Log level for output from the App Engine server | `trace` | -| `appEngine.logLevel.browser` | Log level for output from the web browser | `2` | -| `appEngine.contentSecurityPolicy.enable`| Enables the content security policy for the App Engine | `false` | -| `appEngine.contentSecurityPolicy.whitelist`| Configuration of the App Engine content security policy whitelist | `""` | -| `appEngine.session.duration` | Duration of the session | `1800000` | -| `appEngine.session.resave` | Enables session resaves | `false` | -| `appEngine.session.rolling` | Send cookie every time | `true` | -| `appEngine.session.saveUninitialized` | Uninitialized sessions will be saved if checked | `false` | -| `appEngine.session.useExternalStore` | Use an external store for storing sessions | `false` | -| `appEngine.redis.host` | Host name of the Redis database that is used by the App Engine | | -| `appEngine.redis.port` | Port number of the Redis database that is used by the App Engine | | -| `appEngine.redis.ttl` | Time to live for the Redis database connection that is used by the App Engine | | -| `appEngine.maxAge.staticAsset` | Maximum age of a static asset | `2592000` | -| `appEngine.maxAge.csrfCookie` | Maximum age of a Cross-Site Request Forgery (CSRF) cookie | `3600000` | -| `appEngine.maxAge.authCookie` | Maximum age of an authentication cookie | `900000` | -| `appEngine.env.serverEnvType` | App Engine server environment type | `development` | -| `appEngine.env.maxSizeLRUCacheRR` | Maximum size of the LRU cache for the Resource Registry | `1000` | -| `appEngine.resources.ae.limits.cpu` | Maximum amount 
of CPU that is required for the App Engine container | `1` | -| `appEngine.resources.ae.limits.memory` | Maximum amount of memory that is required for the App Engine container | `1024Mi` | -| `appEngine.resources.ae.requests.cpu` | Minimum amount of CPU that is required for the App Engine container | `500m` | -| `appEngine.resources.ae.requests.memory` | Minimum amount of memory that is required for the App Engine container | `512Mi` | -| `appEngine.resources.initContainer.limits.cpu` | Maximum amount of CPU that is required for the App Engine init container | `500m` | -| `appEngine.resources.initContainer.limits.memory` | Maximum amount of memory that is required for the App Engine init container | `256Mi` | -| `appEngine.resources.initContainer.requests.cpu` | Minimum amount of CPU that is required for the App Engine init container | `200m` | -| `appEngine.resources.initContainer.requests.memory` | Minimum amount of memory that is required for the App Engine init container | `128Mi` | -| `appEngine.autoscaling.enabled` | Enable the Horizontal Pod Autoscaler for App Engine init container | `false` | -| `appEngine.autoscaling.minReplicas` | Minimum limit for the number of pods for the App Engine | `2` | -| `appEngine.autoscaling.maxReplicas` | Maximum limit for the number of pods for the App Engine | `5` | -| `appEngine.autoscaling.targetAverageUtilization` | Target average CPU utilization over all the pods for the App Engine init container | `80` | -| `resourceRegistry.install` | Switch for installing Resource Registry | `true` | -| `resourceRegistry.images.resourceRegistry` | Image name for Resource Registry container | `dba-etcd:19.0.2` | -| `resourceRegistry.images.pullPolicy` | Pull policy for all containers | `IfNotPresent` | -| `resourceRegistry.tls.tlsSecretName` | Existing TLS secret containing `tls.key` and `tls.crt`| | -| `resourceRegistry.replicaCount` | Number of etcd nodes in cluster | `3` | -| `resourceRegistry.resources.limits.cpu` | CPU limit for Resource Registry configuration | `500m` | -| `resourceRegistry.resources.limits.memory` | Memory limit for Resource Registry configuration | `512Mi` | -| `resourceRegistry.resources.requests.cpu` | Requested CPU for Resource Registry configuration | `200m` | -| `resourceRegistry.resources.requests.memory` | Requested memory for Resource Registry configuration | `256Mi` | -| `resourceRegistry.persistence.enabled` | Enables this deployment to use persistent volumes | `false` | -| `resourceRegistry.persistence.useDynamicProvisioning` | Enables dynamic binding of persistent volumes to created persistent volume claims | `true` | -| `resourceRegistry.persistence.storageClassName` | Storage class name | | -| `resourceRegistry.persistence.accessMode` | Access mode as ReadWriteMany ReadWriteOnce | | -| `resourceRegistry.persistence.size` | Storage size | | -| `resourceRegistry.livenessProbe.enabled` | Liveness probe configuration enabled | `true` | -| `resourceRegistry.livenessProbe.initialDelaySeconds` | Number of seconds after the container has started before liveness is initiated | `120` | -| `resourceRegistry.livenessProbe.periodSeconds` | How often (in seconds) to perform the probe | `10` | -| `resourceRegistry.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` | -| `resourceRegistry.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after failing. Minimum value is 1. 
| `1` | -| `resourceRegistry.livenessProbe.failureThreshold` | When a pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Minimum value is 1. | `3` | -| `resourceRegistry.readinessProbe.enabled` | Readiness probe configuration enabled | `true` | -| `resourceRegistry.readinessProbe.initialDelaySeconds` | Number of seconds after the container has started before readiness is initiated | `15` | -| `resourceRegistry.readinessProbe.periodSeconds` | How often (in seconds) to perform the probe | `10` | -| `resourceRegistry.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` | -| `resourceRegistry.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after failing. Minimum value is 1. | `1` | -| `resourceRegistry.readinessProbe.failureThreshold` | When a pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Minimum value is 1. | `6` | -| `resourceRegistry.logLevel` | Log level of the resource registry server. Available options: `debug` `info` `warn` `error` `panic` `fatal` | `info` | - -## Limitations - -* The solution server image only trusts CA due to the limitation of the Node.js server. For example, if external UMS is used and signed with another root CA, you must add the root CA as trusted instead of the UMS certificate. - - * The certificate can be self-signed, or signed by a well-known CA. - * If you're using a depth zero self-signed certificate, it must be listed as a trusted certificate. - * If you're using a certificate signed by a self-signed CA, the self-signed CA must be in the trusted list. Using a leaf certificate in the trusted list is not supported. - -* The Business Automation Studio components support only the IBM DB2 database. -* The JMS statefulset doesn't support scale. You must leave the replicate size of the JMS statefulset at 1. -* The Helm upgrade and rollback operations must use the Helm command line, not the user interface. - -## Documentation - -* [Using the IBM Cloud Pak for Automation](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/welcome/kc_welcome_dba_distrib.html) -* [Content Security Policy(CSP)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP) diff --git a/BAS/README_config.md b/BAS/README_config.md new file mode 100644 index 00000000..68e96626 --- /dev/null +++ b/BAS/README_config.md @@ -0,0 +1,139 @@ +# Configuring IBM Business Automation Studio 19.0.3 + +These instructions cover the basic installation and configuration of IBM Business Automation Studio. 
+ +## Table of contents + - [Business Automation Studio Component Details](#Business-Automation-Studio-Component-Details) + - [Prerequisites](#Prerequisites) + - [Resources Required](#Resources-Required) + - [Step 1: Preparing to install Business Automation Studio for Production](#Step-1-Preparing-to-install-Business-Automation-Studio-for-Production) + - [Step 2: Configuring Redis for App Engine Playback Server (Optional)](#Step-2-Configuring-Redis-for-App-Engine-Playback-Server-Optional) + - [Step 3: Implementing storage (Optional)](#Step-3-implementing-storage-optional) + - [Step 4: Configuring the custom resource YAML file for your Business Automation Studio deployment](#Step-4-Configuring-the-custom-resource-YAML-file-for-your-Business-Automation-Studio-deployment) + - [Step 5: Completing the installation](#Step-5-Completing-the-installation) + - [Limitations](#Limitations) + +## Introduction + +This installation deploys a Business Automation Studio environment, the single authoring and development environment for the IBM Cloud Pak for Automation platform, where you can go to author business services, applications, and digital workers. + +## Business Automation Studio Component Details + +This component deploys several services and components. + +In the standard configuration, it includes these components: + +* IBM Business Automation Studio (BAStudio) component +* IBM Resource Registry component +* IBM Business Automation Application Engine (App Engine) playback server component + +Notes: + - The IBM Business Automation Application Engine (App Engine) playback server component is designed to provide a playback environment for application development use. The App Engine installed as a playback server doesn't contain all the features needed by the App Engine in a production environment and can't be used as a production App Engine server. + - For a production environment, deploy the App Engine following the instructions in [Application Engine Configuration](../AAE/README_config.md). + +To support those components, a standard installation generates: + + * 5 ConfigMaps that manage the configuration of Business Automation Studio server + * 2 deployments running the Business Automation Studio server and App Engine playback server + * 1 StatefulSet running JMS + * 4 or more jobs for Business Automation Studio and Resource Registry + * 5 secrets to get access + * 5 services to route the traffic to Business Automation Studio server + +## Prerequisites + + * [User Management Service](../UMS/README_config.md) + * Resource Registry, which is included in the BAStudio configuration. If you already configured Resource Registry through another component, you need not install it again. + +## Resources Required + +Follow the OpenShift instructions in [Planning Your Installation 3.11](https://docs.openshift.com/container-platform/3.11/install/index.html#single-master-single-box) or [Planning your Installation 4.2](https://docs.openshift.com/container-platform/4.2/welcome/index.html). Then check the required resources in [System and Environment Requirements on OCP 3.11](https://docs.openshift.com/container-platform/3.11/install/prerequisites.html) or [System and Environment Requirements on OCP 4.2](https://docs.openshift.com/container-platform/4.2/architecture/architecture.html) and set up your environment. 
+ +| Component name | Container | CPU | Memory | +| --- | --- | --- | --- | +| BAStudio | BAStudio container | 2 | 2Gi | +| BAStudio | Init containers | 200m | 256Mi | +| BAStudio | JMS containers | 500m | 512Mi | +| Resource Registry | Resource Registry container | 200m | 256Mi | +| Resource Registry | Init containers | 100m | 128Mi | +| App Engine Playback Server | App Engine container | 1 | 1Gi | +| App Engine Playback Server | Init containers | 200m | 128Mi | + +## Step 1: Preparing to install Business Automation Studio for Production + +Besides the common steps to set up the operator environment, you must do the following steps before you install Business Automation Studio. + +* Create the Business Automation Studio and App Engine playback server databases. See [Creating databases](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_basprep_db.html). +* Create admin secrets to protect sensitive configuration data. See [Protecting sensitive configuration data](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_basprep_data.html). + +## Step 2: Configuring Redis for App Engine Playback Server (Optional) + +The default replica size of the App Engine playback server is 1. You can have only one App Engine pod because it's a playback server for application development use. If you need the replica size to be more than 1 or you enabled the Horizontal Pod Autoscaler for the playback server, you must configure the App Engine playback server with Remote Dictionary Server (Redis). For instructions, see [Optional: Configuring App Engine playback server with Redis](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_basprep_redis.html). + +## Step 3: Implementing storage (Optional) + +You can optionally add your own persistent volume (PV) and persistent volume claim (PVC) if you want to use your own JDBC driver or you want Resource Registry to be backed up automatically. The minimum supported size is 1 GB. For instructions see [Optional: Implementing storage](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_basprep_storage.html). + + +## Step 4: Configuring the custom resource YAML file for your Business Automation Studio deployment + + 1. Make sure that you've set the configuration parameters for [User Management Service](../UMS/README_config.md) in your copy of the template custom resource YAML file. + 2. Edit your copy of the template custom resource YAML file and make the following updates. After completing those updates, if you need to install other components, please go to [Step 5](README_config.md#step-5-Completing-the-installation) and do the configuration for those components, using the same YAML file. + + a. Uncomment and update the shared_configuration section if you haven't done it already. + + b. Update the `bastudio_configuration` and `resource_registry_configuration` sections. + * If you just want to install BAStudio with the minimal required values, replace the contents of `bastudio_configuration` and `resource_registry_configuration` in your copy of the template custom resource YAML file with the values from the [sample_min_value.yaml](configuration/sample_min_value.yaml) file. 
+
+   * If you want to use the full configuration list and customize the values, update the required values in `bastudio_configuration` and `resource_registry_configuration` in your copy of the template custom resource YAML file based on your configuration.
+
+Note: The hostname must be less than 64 characters. Use a wildcard DNS (https://nip.io/) if the hostname is too long. For example, instead of:
+`resource_registry_configuration:
+   admin_secret_name: op-bas-rr-admin-secret
+   hostname: rr-{{ meta.namespace }}.I-have-a-very-long-hostname-which-exceeds-64-characters.cloud.com`
+the hostname can use a wildcard:
+`resource_registry_configuration:
+   admin_secret_name: op-bas-rr-admin-secret
+   hostname: rr-{{ meta.namespace }}.<IP-address>.nip.io`
+
+### Configuration
+
+If you want to customize your custom resource YAML file, refer to the [configuration list](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_bas_params.html) for each parameter.
+
+## Step 5: Completing the installation
+
+Go back to the relevant installation or update page to configure other components and complete the deployment with the operator.
+
+Installation pages:
+ - [Managed OpenShift installation page](../platform/roks/install.md)
+ - [OpenShift installation page](../platform/ocp/install.md)
+ - [Certified Kubernetes installation page](../platform/k8s/install.md)
+
+Update pages:
+ - [Managed OpenShift update page](../platform/roks/update.md)
+ - [OpenShift update page](../platform/ocp/update.md)
+ - [Certified Kubernetes update page](../platform/k8s/update.md)
+
+
+## Limitations
+
+* After you deploy Business Automation Studio, you can't change the Business Automation Studio or App Engine playback server admin user in the admin secret.
+
+* Because of a Node.js server limitation, the App Engine playback server image trusts only root CAs. If an external service is used and signed with another root CA, you must add the root CA as trusted instead of the service certificate.
+
+  * The certificate can be self-signed, or signed by a well-known root CA.
+  * If you're using a depth zero self-signed certificate, it must be listed as a trusted certificate.
+  * If you're using a certificate signed by a self-signed root CA, the self-signed root CA must be in the trusted list. Using a leaf certificate in the trusted list is not supported.
+  * If you're adding the root CA of two or more external services to the App Engine trust list, you can't use the same common name for those root CAs.
+
+* The Business Automation Studio components support only the IBM DB2 database.
+
+* The App Engine playback server supports only the IBM DB2 database.
+
+* The JMS StatefulSet doesn't support scaling. You must keep the replica size of the JMS StatefulSet at 1.
+
+* Resource Registry limitations
+
+  Because of the design of etcd, to prevent data loss it's recommended that you don't change the replica size after you create the Resource Registry cluster. If you must set the replica size, set it to an odd number. If you reduce the replica size, the pods are destroyed one by one slowly to prevent data loss or the cluster getting out of sync.
+
+  * If you update the Resource Registry admin secret to change the username or password, first delete the -dba-rr- pods so that Resource Registry picks up the updates. Alternatively, you can apply the update manually with etcd commands.
+  * If you update the Resource Registry configuration in the icp4acluster custom resource instance,
the update might not affect the Resource Registry pod directly. It will affect the newly created pods when you increase the number of replicas. diff --git a/BAS/README_migrate.md b/BAS/README_migrate.md new file mode 100644 index 00000000..8c775902 --- /dev/null +++ b/BAS/README_migrate.md @@ -0,0 +1,15 @@ +# Migrating from IBM Business Automation Studio 19.0.2 to 19.0.3 + +These instructions cover the migration of IBM Business Automation Studio from 19.0.2 to 19.0.3. + +## Introduction + +If you install IBM Business Automation Studio 19.0.2 and want to continue to use your 19.0.2 applications in Business Automation Studio 19.0.3, you can migrate your applications from Business Automation Studio 19.0.2 to 19.0.3. + +## Step 1: Export apps that were authored in 19.0.2 + +Log in to the admin console in your Business Automation Studio 19.0.2 environment, then export your apps as .twx files. + +## Step 2: Import the apps to 19.0.3 + +Install [IBM Business Automation Studio 19.0.3](../BAS/README_config.md), then import the apps that you exported. \ No newline at end of file diff --git a/BAS/configuration/README.md b/BAS/configuration/README.md deleted file mode 100644 index e1ad199d..00000000 --- a/BAS/configuration/README.md +++ /dev/null @@ -1,97 +0,0 @@ -# Business Automation Studio platform Helm installation helper script - -1. Extract the IBM Business Applicaition Studio platform Helm installation helper script from the bastudio-helper.tar file and copy it to a specified directory, for example, ibm-dba-bas-helper. - -2. Unpack the package by running the following command: - - ``` - tar xvf bastudio-helper.tar - ``` - -3. Update the `./pre-install/bastudio.yaml` file with the following settings: - -#### Business Automation Studio settings - | Parameter | Description | Default | -| -------------------------------------- | ----------------------------------------------------- | ---------------------------------------------------- | -| `releaseName` | Release Name. If you want to install with a release name other than bastudio, update this field. | | -| `server.type` | Kubernetes cluster type. OpenShift is supported. 
| `openshift` | -| `server.infrastructureNodeIP` | Infrastructure node IP | | -| `server.certificateManagerIntalled` | Whether to use Cert Manager installation | `false` | -| `admin.username` | Administrative user name, which is used by User Management Service (UMS), App Engine, and Business Automation Studio | | -| `admin.password` | Administrative password | | -| `ums.hostname` | UMS external host name | | -| `ums.tlsSecretName` | Enter the UMS root CA secret name in this field | | -| `appEngine.hostname` | App Engine external host name | | -| `appEngine.db.name` | App Engine database name | | -| `appEngine.db.hostname` | App Engine database host | | -| `appEngine.db.port` | App Engine database port | | -| `appEngine.db.username` | App Engine database user name | | -| `appEngine.db.password` | App Engine database password | | -| `appEngine.redis.password` | Set this password only if you are using Redis | `password` | -| `resourceRegistry.hostname` | Resource Registry external host name | | -| `resourceRegistry.root.password` | Resource Registry root password | | -| `resourceRegistry.read.username` | Resource Registry reader user name | | -| `resourceRegistry.read.password` | Resource Registry reader password | | -| `resourceRegistry.write.username` | Resource Registry writer user name | | -| `resourceRegistry.write.password` | Resource Registry writer password | | -| `bastudio.hostname` | Business Automation Studio external host name | | -| `bastudio.db.name` | Business Automation Studio database name | | -| `bastudio.db.hostname` | Business Automation Studio database host | | -| `bastudio.db.port` | Business Automation Studio database port | | -| `bastudio.db.username` | Business Automation Studio database user name | | -| `bastudio.db.password` | Business Automation Studio database password | | -| `images.bastudio` | Image name for Business Automation Studio container | | -| `images.jmsContainer` | Image name for JMS container | `baw-jms-server:19.0.2` | -| `images.appEngine` | Image name for Application Engine container | `solution-server:19.0.2` | -| `images.dbJob` | Image name for Application Engine database job container | `solution-server-helmjob-db:19.0.2` | -| `images.resourceRegistry` | Image name for Resource Registry container | `dba-etcd:19.0.2` | -| `images.umsInitRegistration` | Image name for OpenID Connect (OIDC) registration job container | `dba-umsregistration-initjob:19.0.2` | -| `images.tlsInitContainer` | Image name for TLS init container | `dba-keytool-initcontainer:19.0.2` | -| `images.ltpaInitContainer` | Image name for job container | `dba-keytool-jobcontainer:19.0.2` | -| `images.dbcompatibilityInitContainer` | Image name for database compatibility init container | `dba-dbcompatibility-initcontainer:19.0.2` | -| `ImagePullPolicy` | Pull policy for all containers | `Always` | -| `imagePullSecrets` | Existing Docker image secret | `image-pull-secret` | - - -4. Run the command`./pre-install/prepare-bastudio.sh -i ./pre-install/bastudio.yaml`. You'll see the following information on your screen: - -``` -Target folder does not exist. 
Creating folder -wrote ./output/bastudio-helper/templates/admin-secrets.yaml -wrote ./output/bastudio-helper/templates/certificate.yaml -wrote ./output/bastudio-helper/templates/route-ingress.yaml -wrote ./output/bastudio-helper/templates/NOTES.txt -wrote ./output/bastudio-helper/templates/db-script.sql -wrote ./output/bastudio-helper/templates/updateValues.yaml ---- -# Source: bastudio-helper/templates/NOTES.txt -Generating admin secret-related resources in file -./bastudio-helper/templates/admin-secrets.yaml - -Generating TLS key and certificate resources with secret in file -./bastudio-helper/templates/certificate.yaml - -Generating route definition in file -./bastudio-helper/templates/route-ingress.yaml - -Generating values to update in file -./bastudio-helper/templates/updateValues.yaml - -You can apply the resources with command: -kubectl apply -f ./admin-secrets.yaml -kubectl apply -f ./certificate.yaml -oc apply -f ./route-ingress.yaml - -Create the database with command: -db2 -tvf ./db-script.sql - -``` - -5. Run the following commands to create sensitive configuration data, create TLS key and certification secrets, and set the service type. -``` - kubectl apply -f ./admin-secrets.yaml - kubectl apply -f ./certificate.yaml - oc apply -f ./route-ingress.yaml - ``` - -6. Copy the database script to your dabase and run the command `db2 -tvf ./db-script.sql` on the database. diff --git a/BAS/configuration/bastudio-helper.tar b/BAS/configuration/bastudio-helper.tar deleted file mode 100644 index 2be463fe..00000000 Binary files a/BAS/configuration/bastudio-helper.tar and /dev/null differ diff --git a/BAS/configuration/sample_min_value.yaml b/BAS/configuration/sample_min_value.yaml new file mode 100644 index 00000000..170efbfc --- /dev/null +++ b/BAS/configuration/sample_min_value.yaml @@ -0,0 +1,68 @@ +apiVersion: icp4a.ibm.com/v1 +kind: ICP4ACluster +metadata: + name: demo-template + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 +spec: + ##################################################################### + ## IBM Business Automation Studio 19.0.3 configuration ## + ##################################################################### + bastudio_configuration: + admin_secret_name: bastudio-admin-secret + images: + bastudio: + repository: cp.icr.io/cp/cp4a/bas/bastudio + tag: 19.0.3 + hostname: + port: 443 + database: + host: + # The database provided should be created by the BAStudio SQL script template. + name: + port: + # If you want to enable the database ACR, HADR, configure the alternative_host and alternative_port both, otherwise leave them as blank. + alternative_host: + alternative_port: + type: db2 + jms_server: + image: + repository: cp.icr.io/cp/cp4a/bas/jms + tag: 19.0.3 + #----------------------------------------------------------------------- + # App Engine Playback Server (playback_server) can only be one instance, which differs from the App Engine (The application_engine_configuration is a list, you can deploy multiple instances of AppEngine). 
+ #----------------------------------------------------------------------- + playback_server: + admin_secret_name: playback-server-admin-secret + images: + db_job: + repository: cp.icr.io/cp/cp4a/bas/solution-server-helmjob-db + tag: 19.0.3 + solution_server: + repository: cp.icr.io/cp/cp4a/bas/solution-server + tag: 19.0.3 + hostname: + port: 443 + database: + host: + # The database provided should be created by the App Engine Playback Server SQL script template. + name: + port: + # If you want to enable the database ACR, HADR, configure the alternative_host and alternative_port both, otherwise leave them as blank. + alternative_host: + alternative_port: + type: db2 + + ## Resource Registry Configuration + ## Important: if you've already configured Resource Registry before, you don't need to change resource_registry_configuration section in your copy of the template custom resource YAML file. + resource_registry_configuration: + admin_secret_name: resource-registry-admin-secret + images: + resource_registry: + repository: cp.icr.io/cp/cp4a/bas/dba-etcd + tag: 19.0.3 + hostname: + port: 443 \ No newline at end of file diff --git a/BAS/helm-charts/README.md b/BAS/helm-charts/README.md deleted file mode 100644 index f9969448..00000000 --- a/BAS/helm-charts/README.md +++ /dev/null @@ -1,40 +0,0 @@ -# Deploying with Helm charts - -Extract the helm chart from ibm-dba-bas-prod-1.0.0.tgz and copy to your installation directory. - -## Installing the Chart - - To install the chart with release name `my-release`, run the following command: - - ``` - helm install --tls --name my-release ibm-dba-bas-prod -f my-values.yaml --namespace ` - ``` - - The command deploys `ibm-dba-bas-prod` onto the Kubernetes cluster, based on the values specified in the `my-values.yaml` file. If you use [BAStudio platform helm install helper script](configuration) before, you can use ./bastudio-helper/templates/updateValues.yaml file generated by the script. The configuration section lists the parameters that can be configured during installation. - -### Verifying the Chart - -1. After the installation is finished, see the instructions for verifying the chart by running the command: - - `helm status my-release --tls` - -2. Get the name of the pods that were deployed with ibm-dba-bas-prod by running the following command: - - `kubectl get pod -n ` - -3. For each pod, check under Events to see that the images were successfully pulled and the containers were created and started, by running the following command with the specific pod name: - - `kubectl describe pod -n ` - -4. Go to `https:///BAStudio` in your browser (if you set up Business Automation Studio with Route) or `https://:/BAStudio` (if you set up Business Automation Studio with NodePort). - -### Uninstalling the Chart -To uninstall and delete the my-release deployment, run the following command: - - helm delete my-release --purge --tls - -This command removes all the Kubernetes components associated with the chart and deletes the release. If deletion can result in orphaned components, you must delete them manually. - -For example, when you delete a release with stateful sets, the associated persistent volume must be deleted. 
Run the following command after deleting the chart release to clean up orphaned persistent volumes: - - kubectl delete pvc -l release=my-release diff --git a/BAS/helm-charts/ibm-dba-bas-prod-1.0.0.tgz b/BAS/helm-charts/ibm-dba-bas-prod-1.0.0.tgz deleted file mode 100644 index 77570794..00000000 Binary files a/BAS/helm-charts/ibm-dba-bas-prod-1.0.0.tgz and /dev/null differ diff --git a/BAS/k8s-yaml/README.md b/BAS/k8s-yaml/README.md deleted file mode 100644 index 39faa23f..00000000 --- a/BAS/k8s-yaml/README.md +++ /dev/null @@ -1,60 +0,0 @@ -# Deploying with Kubernetes YAML - -Extract the helm chart from ibm-dba-bas-prod-1.0.0.tgz and copy to your installation directory. - -## Installing the Chart - -To use the Kubernetes command line to install the chart with release name `my-release`: - -* Run the following command: - - ``` - helm template --name my-release ibm-dba-bas-prod --namespace --output-dir ./yamls -f my-values.yaml - ``` - - Note: if the directory `./yamls` does not exist, you can create it by running `mkdir yamls`. - - The command deploys `ibm-dba-bas-prod` onto the Kubernetes cluster, based on the values specified in the `my-values.yaml` file. If you used the [BAStudio platform Helm installation helper script](configuration) before, you can use the ./bastudio-helper/templates/updateValues.yaml file generated by the script. The configuration section lists the parameters that can be configured during installation. - - -* Customize the yamls directory by running the following commands: - - ``` - rm -rf ./yamls/ibm-dba-bas-prod/charts/appengine/templates/tests - rm -rf ./yamls/ibm-dba-bas-prod/charts/baStudio/templates/tests - rm -rf ./yamls/ibm-dba-bas-prod/charts/resourceRegistry/templates/tests - ``` - -* Search for `runAsUser: 50001` in the generated content and delete all occurrences. (This step can be avoided after the corresponding Helm feature is added.) - -* Apply the customization to the server by running the following command: - - kubectl apply -R -f ./yamls - -### Verifying the Chart - -1. After the installation is finished, see the instructions for verifying the chart by running the command: - - `helm status my-release --tls` - -2. Get the name of the pods that were deployed with ibm-dba-bas-prod by running the following command: - - `kubectl get pod -n ` - -3. For each pod, check under Events to see that the images were successfully pulled and the containers were created and started, by running the following command with the specific pod name: - - `kubectl describe pod -n ` - -4. Go to `https:///BAStudio` in your browser (if you set up Business Automation Studio with Route) or `https://:/BAStudio` (if you set up Business Automation Studio with NodePort). - -### Uninstalling the Chart -To uninstall and delete the my-release deployment, run the following command: - - kubectl delete -R -f ./yamls - -This command removes all the Kubernetes components associated with the chart and deletes the release. If the deletion leaves orphaned components, you must delete them manually. - -For example, when you delete a release with stateful sets, the associated persistent volume must be deleted.
Run the following command after deleting the chart release to clean up orphaned persistent volumes: - - kubectl delete pvc -l release=my-release - diff --git a/BAS/k8s-yaml/ibm-dba-bas-prod-1.0.0.tgz b/BAS/k8s-yaml/ibm-dba-bas-prod-1.0.0.tgz deleted file mode 100644 index 77570794..00000000 Binary files a/BAS/k8s-yaml/ibm-dba-bas-prod-1.0.0.tgz and /dev/null differ diff --git a/BAS/platform/README-ROKS.md b/BAS/platform/README-ROKS.md deleted file mode 100644 index 5b34646e..00000000 --- a/BAS/platform/README-ROKS.md +++ /dev/null @@ -1,1033 +0,0 @@ -# Deploying IBM Business Automation Studio on Red Hat OpenShift on IBM Cloud - -These instructions are for installing IBM Business Automation Studio on a managed Red Hat OpenShift cluster on IBM Public Cloud. - -## Table of contents - -- [Prerequisites](#prerequisites) -- [Step 1: Preparing your client and environment on IBM Cloud](#step-1-preparing-your-client-and-environment-on-ibm-cloud) -- [Step 2: Preparing the OCP client environment](#step-2-preparing-the-ocp-client-environment) -- [Step 3: Downloading the package and uploading it to the local repository](#step-3-downloading-the-package-and-uploading-it-to-the-local-repository) -- [Step 4: Connecting OpenShift with CLI](#step-4-connecting-openshift-with-cli) -- [Step 5: Creating the databases](#step-5-creating-the-databases) -- [Step 6: Creating the routes](#step-6-creating-the-routes) -- [Step 7: Protecting sensitive configuration data](#step-7-protecting-sensitive-configuration-data) -- [Step 8: Configuring TLS key and certificate secrets](#step-8-configuring-tls-key-and-certificate-secrets) -- [Step 9: Preparing persistent storage](#step-9-preparing-persistent-storage) -- [Step 10: Installing Business Automation Studio 19.0.2 on platform Helm](#step-10-installing-business-automation-studio-1902-on-platform-helm) -- [Creating the Navigator service and configuring its UMS](#creating-the-navigator-service-and-configuring-its-ums) -- [References](#references) - -## Prerequisites - - * [OpenShift 3.11](https://docs.openshift.com/container-platform/3.11/welcome/index.html) or later - * [Helm and Tiller 2.9.1](/~https://github.com/helm/helm/releases) or later - * [Cert Manager 0.8.0](https://cert-manager.readthedocs.io/en/latest/getting-started/install/openshift.html) or later - * [IBM DB2 11.1.2.2](https://www.ibm.com/products/db2-database) or later - * [IBM Cloud Pak For Automation - User Management Service](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/con_ums.html) - * Persistent volume support - -Before you deploy, you must configure your IBM Public Cloud environment, create an OpenShift cluster and load the product images into the registry. Use the following information to configure your environment and deploy the images. - -## Step 1: Preparing your client and environment on IBM Cloud - -1. Create an account on [IBM Cloud](https://cloud.ibm.com/kubernetes/registry/main/start). -2. Create a cluster. - From the [IBM Cloud Overview page](https://cloud.ibm.com/kubernetes/overview), on the OpenShift Cluster tile, click **Create Cluster**. - -3. Install the [IBM Cloud CLI](https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install). -4. Install the [OpenShift Container Platform CLI](https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html#cli-reference-get-started-cli) to manage your applications and to interact with the system. -5. 
Install [Helm 2.9.1](https://www.ibm.com/links?url=https%3A%2F%2Fgithub.com%2Fhelm%2Fhelm%2Freleases%2Ftag%2Fv2.9.1) to install the Helm charts with Helm and Tiller. -6. Install the [Kubernetes CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/). -7. Install the [Docker CLI](https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install). -8. Get the storage class name for your OpenShift cluster: - ```console - $ oc get sc - ``` - -## Step 2: Preparing the OCP client environment - -**1. Log in to IBM Cloud using CLI** - - Open a terminal window on your client machine, then run the following commands: - -```console - ibmcloud login -u -p -c -r - ``` - -r value Name of region, such as 'us-south' or 'eu-gb' - -c value Account ID or owner user ID (such as user@example.com) - -```console -ibmcloud login -u -p -c -r -ibmcloud ks cluster ls -ibmcloud ks cluster config --cluster $cluster | grep export > env.sh -chmod 755 env.sh -. ./env.sh -echo $KUBECONFIG -kubectl version --short - ``` - -**2. Configure IBM Cloud Container Registry** - - **a. Log in with your IBM Cloud account. Use “ibmcloud login --sso” to log in to IBM Cloud CLI** - - **Note:** After you press "Y" to open the URL in the default browser, IBM Cloud generates a one-time code in the browser. Copy and paste it, then press “Enter" to pass authentication. - -```console -$ ibmcloud login --sso -API endpoint: https://cloud.ibm.com -Region: eu-gb - -Get One Time Code from https://identity-2.ap-north.iam.cloud.ibm.com/identity/passcode to proceed. -Open the URL in the default browser? [Y/n] > yes -One Time Code > -Authenticating... -OK - -Select an account: -1. XXXXXX's Account (0xxxxxxxxxxxxxxaa9xxx) -2. XXXXXXXX's Account (c56xxxxxxxxxxxxx74xxxxc) <-> 1...7 -Enter a number> 2 -Targeted account XXXXXXXX's Account (c56xxxxxxxxxxxxx74xxxxc) <-> 1...7 - - -API endpoint: https://cloud.ibm.com -Region: eu-gb -User: xxxxxxx -Account: XXXXXXXX's Account (c56xxxxxxxxxxxxx74xxxxc) <-> 1...7 -Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP' -CF API endpoint: -Org: -Space: - -Tip: If you are managing Cloud Foundry applications and services -- Use 'ibmcloud target --cf' to target Cloud Foundry org/space interactively, or use 'ibmcloud target --cf-api ENDPOINT -o ORG -s SPACE' to target the org/space. -- Use 'ibmcloud cf' if you want to run the Cloud Foundry CLI with current IBM Cloud CLI context. - - -New version 0.19.0 is available. -Release notes: /~https://github.com/IBM-Cloud/ibm-cloud-cli-release/releases/tag/v0.19.0 -TIP: use 'ibmcloud config --check-version=false' to disable update check. - -Do you want to update? [y/N] > y - -Installing version '0.19.0'... -Downloading... - 17.45 MiB / 17.45 MiB [========================================================================================] 100.00% 9s -18301051 bytes downloaded -Saved in /Users/ibm/.bluemix/tmp/bx_746509876/IBM_Cloud_CLI_0.19.0.pkg -``` - -If you encounter errors using "ibmcloud login --sso", you can run "ibmcloud login" and enter your user name and password instead. - - **b. Create a namespace** - -```console - $ ibmcloud cr namespace-add -``` - - **c. Check the cluster** -```console -$ oc get pod - ``` - **d. Log in to IBM Cloud Container Registry (cr)** -```console -$ ibmcloud cr login -``` - Example output: - -```console -$ ibmcloud cr login -Logging in to 'registry.eu-gb.bluemix.net'... -Logged in to 'registry.eu-gb.bluemix.net'. 
- -IBM Cloud Container Registry is adopting new icr.io domain names to align with the rebranding of IBM Cloud for a better user experience. The existing bluemix.net domain names are deprecated, but you can continue to use them for the time being, as an unsupported date will be announced later. For more information about registry domain names, see https://cloud.ibm.com/docs/services/Registry?topic=registry-registry_overview#registry_regions_local - -Logging in to 'us.icr.io'... -Logged in to 'us.icr.io'. - -IBM Cloud Container Registry is adopting new icr.io domain names to align with the rebranding of IBM Cloud for a better user experience. The existing bluemix.net domain names are deprecated, but you can continue to use them for the time being, as an unsupported date will be announced later. For more information about registry domain names, see https://cloud.ibm.com/docs/services/Registry?topic=registry-registry_overview#registry_regions_local - -OK -``` -Get the container repository host from the "ibmcloud cr" login output. In this example, the Docker repository host is “us.icr.io” - - **e. Verify the images are in your private registry:** -```console -$ ibmcloud cr image-list -``` - **f. Create an API key** - - I. Log in to https://cloud.ibm.com. - - II. Select your own cluster account (upper right corner) and click IBM Cloud -> Security -> Manage -> Identity and Access -> Access (IAM) / IBM Cloud API Keys (left menu) --> Create an IBM Cloud API Key. Then download the API key or copy the API key. - - III. Return to your client terminal window and log in to the local Docker registry: - -```console -docker login -u iamapikey -p -``` - Example: -```console -$ docker login -u iamapikey -p us.icr.io -WARNING! Using --password via the CLI is insecure. Use --password-stdin. -Login Succeeded -``` - **g. Create a Docker pull secret in your OpenShift cluster** -```console -oc create secret docker-registry ums-secret --docker-server=us.icr.io --docker-username=iamapikey --docker-password= - ``` -This secret will be passed to the chart in the imagePullSecrets property. Check the "docker-server" name in the output of the previous command “ibmcloud cr login”. - -## Step 3: Downloading the package and uploading it to the local repository - -1. Download and save the [loadimages.sh](/~https://github.com/icp4a/cert-kubernetes/blob/master/scripts/loadimages.sh) script to the client machine. -2. Download the Business Automation Studio Passport Advantage packages by following the instructions in [IBM Cloud Pak for Automation 19.0.2 on Certified Kubernetes](/~https://github.com/icp4a/cert-kubernetes/blob/master/README.md#step-2-get-access-to-the-container-images). -3. Run the following commands to load the images into the Docker repository: -```console -$ ibmcloud cr namespace-add - ``` -Example: -```console -./loadimages.sh -p ./CC3I3ML.tgz -r us.icr.io/ -./loadimages.sh -p ./CC3I4ML.tgz -r us.icr.io/ -./loadimages.sh -p ./CC3I5ML.tgz -r us.icr.io/ -./loadimages.sh -p ./CC3HVML.tgz -r us.icr.io/ - ``` -The name "us.icr.io" is one of the IBM Cloud Container Registry names and your registry name might be different. Get the name from the "ibmcloud cr login" step. - -4. 
Get the following Docker images in the IBM Cloud repository, which can be used for future Studio deployments: -```console - - us.icr.io//solution-server:19.0.2 - - us.icr.io//dba-etcd:19.0.2 - - us.icr.io//solution-server-helmjob-db:19.0.2 - - us.icr.io//dba-keytool-initcontainer:19.0.2 - - us.icr.io//dba-umsregistration-initjob:19.0.2 - - us.icr.io//dba-dbcompatibility-initcontainer:19.0.2 - - us.icr.io//navigator:ga-306-icn-if002 - - us.icr.io//navigator-sso:ga-306-icn-if002 - - us.icr.io//ums:19.0.2 - - us.icr.io//dba-keytool-initcontainer:19.0.2 - - us.icr.io//dba-keytool-jobcontainer:19.0.2 - - us.icr.io//bastudio:19.0.2 - - us.icr.io//jms:19.0.2 - - us.icr.io//solution-server:19.0.2 - - us.icr.io//dba-etcd:19.0.2 - - us.icr.io//solution-server-helmjob-db:19.0.2 - - us.icr.io//dba-keytool-initcontainer:19.0.2 - - us.icr.io//dba-keytool-jobcontainer:19.0.2 - - us.icr.io//dba-umsregistration-initjob:19.0.2 - - us.icr.io//dba-dbcompatibility-initcontainer:19.0.2 -``` -## Step 4: Connecting OpenShift with CLI -1. Open a browser and log in to the IBM Cloud website (https://cloud.ibm.com) with your IBM Cloud ID, then navigate to the OpenShift category. -2. Find your OpenShift cluster instance in the Clusters list, select ..., and click OpenShift Web Console. -3. In the OpenShift Web Console, click your user ID (top right) and click Copy Login Command. -4. Paste the login command into the shell in your client machine terminal window: -```console - oc login https://: --token= - ``` -5. Create or switch to the namespace you created by running the following command: -```console - oc new-project && oc project - ``` -6. To deploy the service account, role, and role binding successfully, assign the administrator role to the user for this namespace by running the following command: -```console - oc project - oc adm policy add-role-to-user admin -``` -7. If you want to operate persistent volumes (PVs), you must have the storage-admin cluster role, because PVs are a cluster resource in OpenShift. Add the role by running the following command: -```console - oc adm policy add-cluster-role-to-user storage-admin -``` - 8. Grant scc ibm-anyuid-scc to your newly created namespace: - ```console -oc adm policy add-scc-to-group ibm-anyuid-scc system:serviceaccounts: -``` - -## Step 5: Creating the databases - -1. Prepare the databases for Studio and App Engine, following the instructions in [Creating databases](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_basprep_db.html). - -## Step 6: Creating the routes - -1. Choose the release name, for example, “ocp-bas”. You can replace `````` with your own release name in the examples that follow. - -2. Choose the route names, for example, "bas-route" for Studio and "ae-route" for App Engine. - -3. Prepare the YAML files for the routes. 
For example: - -ums-route.yaml -```yaml -apiVersion: route.openshift.io/v1 -kind: Route -metadata: - name: ums-route - namespace: -spec: - port: - targetPort: https - tls: - insecureEdgeTerminationPolicy: Redirect - termination: passthrough - to: - kind: Service - name: -ibm-dba-ums - weight: 100 - wildcardPolicy: None -``` -bas-route.yaml: -```yaml -apiVersion: route.openshift.io/v1 -kind: Route -metadata: - name: bas-route - namespace: -spec: - port: - targetPort: https - tls: - insecureEdgeTerminationPolicy: Redirect - termination: passthrough - to: - kind: Service - name: -bastudio-service - weight: 100 - wildcardPolicy: None -``` -ae-route.yaml: -```yaml -apiVersion: route.openshift.io/v1 -kind: Route -metadata: - name: ae-route - namespace: -spec: - port: - targetPort: https - tls: - insecureEdgeTerminationPolicy: Redirect - termination: passthrough - to: - kind: Service - name: -ibm-dba-ae-service - weight: 100 - wildcardPolicy: None -``` - -4. Create the routes by running the following commands: -```console -oc create -f bas-route.yaml -oc create -f ae-route.yaml -``` -5. Get the host names for Studio and App Engine. You will need them later. - -a. Run the command "oc get route" to get the host name for each component. -```console -$ oc get route -NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD -ae-route ae-route-bastudio. .us-east.containers.appdomain.cloud aa-ibm-dba-ae-service https passthrough/Redirect None -bas-route bas-route-bastudio. .us-east.containers.appdomain.cloud aa-bastudio-service https passthrough/Redirect None -rr-route rr-route-bastudio. .us-east.containers.appdomain.cloud aa-resource-registry-service https passthrough/Redirect None -ums-route ums-route-bastudio. .us-east.containers.appdomain.cloud aa-ibm-dba-ums https passthrough/Redirect None -``` - -b. Find the host name ```“ums-route-bastudio..us-east.containers.appdomain.cloud”``` and write it down. You will use it later when creating secrets. - -c. Ping the host name to get the ip address. - -```console -$ping ums-route-bastudio..us-east.containers.appdomain.cloud -PING dbaclusterxxxxxxxxxxxxxx001.us-east.containers.appdomain.cloud (169.x.x.x) 56(84) bytes of data. -64 bytes from xxx.ip4.static.sl-reverse.com (169.x.x.x): icmp_seq=1 ttl=44 time=72.9 ms -64 bytes from xxx.ip4.static.sl-reverse.com (169.x.x.x): icmp_seq=2 ttl=44 time=72.7 ms -``` -Write down the IP address 169.x.x.x. It will be used later in the . For each route (ums-route, bas-route, ae-route, rr-route) write down the host name and IP address. - -## Step 7: Protecting sensitive configuration data - -You must create the following secrets manually before you install the chart. - -* Create the UMS Service following the instructions in [Install User Management Service 19.0.2 on Red Hat OpenShift on IBM Cloud](/~https://github.com/icp4a/cert-kubernetes/blob/master/UMS/platform/README-ROKS.md). - -* Follow the instructions in [Preparing UMS-related configuration and TLS certificates](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_basprep_ums.html) to prepare UMS secrets. - -* Follow the instructions in [Protecting sensitive configuration data](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_basprep_data.html) to prepare secrets for Resource Registry, App Engine, and Studio. - -The following sample YAML files are for Resource Registry, App Engine, and Studio secrets. Update the values with your own user name, database information, and so on. 
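After you fill in the sample files that follow, create the secrets with `kubectl`. A minimal sketch, assuming you saved the samples below as `resource-registry-secret.yaml`, `ae-secret.yaml`, and `bastudio-secret.yaml` (placeholder file names, not part of the product):

```console
kubectl apply -f ./resource-registry-secret.yaml -n <namespace>
kubectl apply -f ./ae-secret.yaml -n <namespace>
kubectl apply -f ./bastudio-secret.yaml -n <namespace>
kubectl get secrets -n <namespace>
```

The last command lets you confirm that the three secrets exist before you continue.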
- -Resource Registry yaml: -```yaml - apiVersion: v1 - kind: Secret - metadata: - name: resource-registry-admin-secret - type: Opaque - stringData: - rootPassword: "" - readUser: "reader" - readPassword: "" - writeUser: "writer" - writePassword: "" -``` - -App Engine yaml: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: ae-secret-credential -type: Opaque -stringData: - AE_DATABASE_PWD: "" - AE_DATABASE_USER: "" - OPENID_CLIENT_ID: "app_engine" - OPENID_CLIENT_SECRET: ““ - SESSION_SECRET: "bigblue123solutionserver" - SESSION_COOKIE_NAME: "nsessionid" - REDIS_PASSWORD: "password" -``` -Business Automation Studio yaml: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: bastudio-admin-secret -type: Opaque -stringData: - adminUser: "umsadmin" - adminPassword: "password" - sslKeystorePassword: "" - dbUsername: "" - dbPassword: "" - oidcClientId: "bastudio-liberty" - oidcClientSecret: "tsSecret-jdaklfjsef" -``` - -## Step 8: Configuring TLS key and certificate secrets -Modify all values enclosed in angle brackets like `````` in each of the following xxx.conf files with your own values. - -Follow [Configuring the TLS key and certificate secrets](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_basprep_secrets.html) to create TLS certificate secrets for UMS, Studio, Resource Registry, and App Engine services. - -1. Create the root CA. - -Run the following three commands: -```console - -openssl genrsa -out rootCA.key.pem 2048 - -openssl req -x509 -new -nodes -key rootCA.key.pem -sha256 -days 3650 \ - -subj "/CN=rootCA" \ - -out rootCA.crt.pem - -kubectl create secret tls ca-tls-secret --key=rootCA.key.pem --cert=rootCA.crt.pem -``` - -2. Generate the UMS TLS key and certificate. - -Example: ums-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -ibm-dba-ums -DNS.2 = -DNS.3 = .svc.cluster.local -DNS.4 = svc.cluster.local -DNS.5 = localhost -IP.1 = -``` -Run the following four commands: -```console -openssl genrsa -out ums.key.pem 2048 -openssl req -new -key ums.key.pem -out ums.csr \ - -subj "/CN= " - -openssl x509 -req -in ums.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out ums.crt.pem \ - -days 1825 -sha256 \ - -extfile ums-extfile.conf -kubectl create secret tls ums-tls-secret --key=ums.key.pem --cert=ums.crt.pem -``` -3. Generate the UMS JKS TLS key and certificate. - -Example ums-jks-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -ibm-dba-ums -DNS.2 = -ibm-dba-ums..svc.cluster.local -DNS.3 = svc.cluster.local -DNS.4 = localhost -DNS.5 = c100-e.us-east.containers.cloud.ibm.com -IP.1 = -``` -Run the following four commands: -```console -openssl genrsa -out ums-jks.key.pem 2048 -openssl req -new -key ums-jks.key.pem -out ums-jks.csr \ - -subj "/CN= " - -openssl x509 -req -in ums-jks.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out ums-jks.crt.pem \ - -days 1825 -sha256 \ - -extfile ums-jks-extfile.conf -kubectl create secret tls ums-jks-tls-secret --key=ums-jks.key.pem --cert=ums-jks.crt.pem -``` -4. Generate the Resource Registry TLS key and certificate. 
- -Example rr-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -resource-registry-service -DNS.2 = -DNS.3 = -resource-registry-service..svc.cluster.local -DNS.4 = svc.cluster.local -DNS.5 = localhost -DNS.6 = c100-e.us-east.containers.cloud.ibm.com -IP.1 = -``` -Run the following four commands: -```console -openssl genrsa -out rr.key.pem 2048 -openssl req -new -key rr.key.pem -out rr.csr \ - -subj "/CN= " - -openssl x509 -req -in rr.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out rr.crt.pem \ - -days 1825 -sha256 \ - -extfile rr-extfile.conf -kubectl create secret tls rr-tls-secret --key=rr.key.pem --cert=rr.crt.pem -``` -5. Generate the App Engine TLS key and certificate. - -Example ae-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -ibm-dba-ae-service -DNS.2 = -DNS.3 = -ibm-dba-ae-service..svc.cluster.local -DNS.4 = svc.cluster.local -DNS.5=localhost -DNS.6=c100-e.us-east.containers.cloud.ibm.com -IP.1 = -``` -Run the following four commands: - -```console -openssl genrsa -out ae.key.pem 2048 -openssl req -new -key ae.key.pem -out ae.csr \ - -subj "/CN=< ip address from above ae-route > " - -openssl x509 -req -in ae.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out ae.crt.pem \ - -days 1825 -sha256 \ - -extfile ae-extfile.conf -kubectl create secret tls ae-tls-secret --key=ae.key.pem --cert=ae.crt.pem -``` -6. Generate the Business Automation Studio TLS key and certificate. - -Example bas-extfile.conf - -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -bastudio-service -DNS.2 = -DNS.3 = -bastudio-service..svc.cluster.local -DNS.4 = svc.cluster.local -DNS.5 = localhost -DNS.6 = c100-e.us-east.containers.cloud.ibm.com -IP.1 = -``` -Run the following four commands: -```console -openssl genrsa -out bas.key.pem 2048 -openssl req -new -key bas.key.pem -out bas.csr \ - -subj "/CN=< ip address from above bas-route > " - -openssl x509 -req -in bas.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out bas.crt.pem \ - -days 1825 -sha256 \ - -extfile bas-extfile.conf -kubectl create secret tls bas-tls-secret --key=bas.key.pem --cert=bas.crt.pem -``` -7. Generate the IBM Content Navigator (ICN) TLS key and certificate. - -Example icn-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = icn..nip.io -DNS.2 = svc.cluster.local -DNS.3 = localhost -IP.1 = -``` -Run the following four commands: -```console -openssl genrsa -out icn.key.pem 2048 -openssl req -new -key icn.key.pem -out icn.csr \ - -subj "/CN=< ip address from above ums-route > " - -openssl x509 -req -in icn.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out icn.crt.pem \ - -days 1825 -sha256 \ - -extfile icn-extfile.conf -kubectl create secret tls icn-tls-secret --key=icn.key.pem --cert=icn.crt.pem -``` -8. Generate the JKS TLS key and certificate. 
- -Example jks-extfile.conf -```console -authorityKeyIdentifier=keyid,issuer -basicConstraints=CA:FALSE -keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment -subjectAltName = @alt_names - -[alt_names] -DNS.1 = -ibm-dba-ums -DNS.2 = ums..nip.io -DNS.3 = -ibm-dba-ums..svc.cluster.local -DNS.4 = svc.cluster.local -IP.1 = -``` -Run the following four commands: - -```console -openssl genrsa -out jks.key.pem 2048 -openssl req -new -key jks.key.pem -out jks.csr \ - -subj "/CN=< ip address from above ums-route > " - -openssl x509 -req -in jks.csr -CA rootCA.crt.pem \ - -CAkey rootCA.key.pem \ - -CAcreateserial \ - -out jks.crt.pem \ - -days 1825 -sha256 \ - -extfile jks-extfile.conf -kubectl create secret tls jks-tls-secret --key=jks.key.pem --cert=jks.crt.pem -``` - -## Step 9: Preparing persistent storage - -Follow the "Implementing storage" section of [IBM Business Automation Studio installation](/~https://github.com/icp4a/cert-kubernetes/blob/master/BAS/README.md) to prepare the persistent storage for Studio. - -## Step 10: Installing Business Automation Studio 19.0.2 on platform Helm - -To install the Business Automation Studio service on a managed Red Hat OpenShift cluster on IBM Public Cloud, choose one of the following options: -* To use Helm charts, follow the instructions in [Deploying with Helm charts](/~https://github.com/icp4a/cert-kubernetes/blob/master/BAS/helm-charts/README.md). - -* To use YAML, follow the instructions in [Deploying with Kubernetes YAML](/~https://github.com/icp4a/cert-kubernetes/blob/master/BAS/k8s-yaml/README.md). - -* To deploy the service on your own, complete the following steps: - -**1. Download the Helm charts provided for certificates in the GitHub release pages:** -* Download ibm-dba-aae-prod-1.0.0.tgz from [AAE HELM](/~https://github.com/icp4a/cert-kubernetes/tree/master/AAE/helm-charts) -* Download ibm-dba-bas-prod-1.0.0.tgz from [BAS HELM](/~https://github.com/icp4a/cert-kubernetes/tree/master/BAS/helm-charts) - - -**Modify the sample values in the YAML files to match your own environment:** - -```yaml -#Shared values across components -global: - # The persistent volume claim name used to store JDBC and ODBC library - existingClaimName: - # Keep this value as false - nonProductionMode: false - # Secret with Docker credentials - imagePullSecrets: ums-secret - # global CA secret name - caSecretName: "ca-tls-secret" - # Kubernetes dns base name - dnsBaseName: "svc.cluster.local" - # Contributor toolkits storage PVC - contributorToolkitsPVC: "" - # Global configuration created by user management service - ums: - serviceType: Ingress - # Get UMS hostname from “oc get route” command - hostname: "ums-route-bastudio. xxxxx.us-east.containers.appdomain.cloud" - port: 443 - # Secret with admin credentials - adminSecretName: ibm-dba-ums-secret - - # Global configuration created by BAStudio - baStudio: - serviceType: "Ingress" - # Get BAStudio hostname from “oc get route” command - hostname: "bas-route-bastudio. xxxxx.us-east.containers.appdomain.cloud” - port: 443 - adminSecretName: bastudio-admin-secret - jmsPersistencePVC: - - # Global configuration created by Resource Registry - resourceRegistry: - # Get RR hostname from “oc get route” command - hostname: "rr-route-bastudio. 
xxxxx.us-east.containers.appdomain.cloud" - port: 31099 - adminSecretName: resource-registry-admin-secret - - # Global configuration created by App Engine - appEngine: - serviceType: "Ingress" - # Get AE hostname from “oc get route” command - hostname: "ae-route-bastudio.xxxxx.us-east.containers.appdomain.cloud" - port: 443 - -# BAStudio private configurations here -baStudio: - install: true - # BAStudio private configurations here - images: - bastudio: us.icr.io//bastudio:19.0.2 - umsInitRegistration: us.icr.io//dba-umsregistration-initjob:19.0.2 - tlsInitContainer: us.icr.io//dba-keytool-initcontainer:19.0.2 - ltpaInitContainer: us.icr.io//dba-keytool-jobcontainer:19.0.2 - dbcompatibilityInitContainer: us.icr.io//dba-dbcompatibility-initcontainer:19.0.2 - jmsContainer: us.icr.io//jms:19.0.2 - pullPolicy: Always - - tls: - tlsSecretName: bas-tls-secret - tlsTrustList: [] - - # Database config - bastudioDB: - database: - type: db2 - name: BPMDB - host: - port: - expectedSchemaVersion: "1.0.0" - driverfiles: "db2jcc4.jar db2jcc_license_cu.jar" - - # BAStudio scaling config - replicaCount: 1 - autoscaling: - enabled: false - minReplicas: 2 - maxReplicas: 5 - targetAverageUtilization: 80 - - contentSecurityPolicy: upgrade-insecure-requests - - # BAStudio resource config - resources: - bastudio: - limits: - cpu: 4 - memory: 4Gi - requests: - cpu: 2 - memory: 3Gi - initProcess: - limits: - cpu: 500m - memory: 256Mi - requests: - cpu: 200m - memory: 128Mi - jms: - limits: - cpu: 1 - memory: 1G - requests: - cpu: 500m - memory: 512Mi - logs: - consoleFormat: basic - consoleLogLevel: INFO - consoleSource: message,trace,accessLog,ffdc,audit - traceFormat: ENHANCED - traceSpecification: "*=info" - - # Health checks - livenessProbe: - initialDelaySeconds: 420 - periodSeconds: 10 - timeoutSeconds: 5 - failureThreshold: 3 - successThreshold: 1 - readinessProbe: - initialDelaySeconds: 240 - periodSeconds: 5 - timeoutSeconds: 5 - failureThreshold: 6 - successThreshold: 1 - -appengine: - install: true - - replicaCount: 1 - - probes: - initialDelaySeconds: 5 - periodSeconds: 10 - timeoutSeconds: 5 - successThreshold: 5 - failureThreshold: 3 - - images: - appEngine: us.icr.io//solution-server:19.0.2 - tlsInitContainer: us.icr.io//dba-keytool-initcontainer:19.0.2 - dbJob: us.icr.io//solution-server-helmjob-db:19.0.2 - oidcJob: us.icr.io//dba-umsregistration-initjob:19.0.2 - dbcompatibilityInitContainer: us.icr.io//dba-dbcompatibility-initcontainer:19.0.2 - pullPolicy: Always - - tls: - tlsSecretName: ae-tls-secret - tlsTrustList: [] - - database: - name: APPDB - host: - port: - type: db2 - currentSchema: DBASB - initialPoolSize: 1 - maxPoolSize: 10 - uvThreadPoolSize: 4 - maxLRUCacheSize: 1000 - maxLRUCacheAge: 600000 - - # Toggle for custom JDBC drivers - useCustomJDBCDrivers: false - - adminSecretName: ae-secret-credential - - logLevel: - node: trace - browser: 2 - - contentSecurityPolicy: - enable: false - whitelist: "" - - session: - duration: "1800000" - resave: "false" - rolling: "true" - saveUninitialized: "false" - useExternalStore: "false" - - redis: - host: localhost - port: 6379 - ttl: 1800 - - maxAge: - staticAsset: "2592000" - csrfCookie: "3600000" - authCookie: "900000" - - env: - serverEnvType: development - maxSizeLRUCacheRR: 1000 - - resources: - ae: - limits: - cpu: 1500m - memory: 1024Mi - requests: - cpu: 1 - memory: 512Mi - initContainer: - limits: - cpu: 500m - memory: 256Mi - requests: - cpu: 200m - memory: 128Mi - - autoscaling: - enabled: false - minReplicas: 2 - maxReplicas: 5 - 
targetAverageUtilization: 80 - -resourceRegistry: - install: true - - # Private images for resource registry - images: - resourceRegistry: us.icr.io//dba-etcd:19.0.2 - keytoolInitcontainer: us.icr.io//dba-keytool-initcontainer:19.0.2 - pullPolicy: Always - - # TLS configurations - tls: - tlsSecretName: rr-tls-secret - - # Resource registry cluster size - replicaCount: 1 - - # RR Resource config - resources: - limits: - cpu: 500m - memory: 512Mi - requests: - cpu: 200m - memory: 256Mi - - # data persistence config - persistence: - enabled: false - useDynamicProvisioning: true - storageClassName: "manual" - accessMode: "ReadWriteOnce" - size: 3Gi - - livenessProbe: - enabled: true - initialDelaySeconds: 120 - periodSeconds: 10 - timeoutSeconds: 5 - failureThreshold: 3 - successThreshold: 1 - - readinessProbe: - enabled: true - initialDelaySeconds: 15 - periodSeconds: 10 - timeoutSeconds: 5 - failureThreshold: 6 - successThreshold: 1 - - logLevel: info -``` -**2. Generate and customize the deployment YAML files:** - -a.Generate the output folder: -```console -mkdir yamls -``` -b.Generate the deployment YAML Files into the created folder: - -```console -helm template --name --namespace --output-dir ./yamls -f bas-values.yaml ibm-dba-bas-prod-1.0.0.tgz -``` -**3. Move to the bas-yamls folder. Remove the test folders:** -```console - rm -rf ./yamls/ibm-dba-bas-prod/charts/appengine/templates/tests - rm -rf ./yamls/ibm-dba-bas-prod/charts/baStudio/templates/tests - rm -rf ./yamls/ibm-dba-bas-prod/charts/resourceRegistry/templates/tests - rm -rf ./yamls/ibm-dba-bas-prod/templates/tests -``` - -**4. Apply the YAML definitions by running the following command:** -```console -kubectl apply -R -f ./yamls -``` - Your output should look similar to the following output: - -```console -job.batch/aa-ibm-dba-ae-db-init-707 created -configmap/aa-ibm-dba-ae-env created -configmap/aa-ibm-dba-ae-file created -job.batch/aa-ibm-dba-ae-oidc-641 created -poddisruptionbudget.policy/aa-ibm-dba-ae-pdb-deployment-605 created -deployment.apps/aa-ibm-dba-ae-deployment created -serviceaccount/aa-ibm-dba-ae-deployment-access created -networkpolicy.networking.k8s.io/aa-ibm-dba-ae-db-init created -networkpolicy.networking.k8s.io/aa-ibm-dba-ae-npolicy-all created -networkpolicy.networking.k8s.io/aa-ibm-dba-ae-npolicy-deployment created -networkpolicy.networking.k8s.io/aa-ibm-dba-ae-npolicy-oidc created -networkpolicy.networking.k8s.io/aa-ibm-dba-ae-npolicy-test created -service/aa-ibm-dba-ae-service created -job.batch/aa-bastudio-bootstrap created -configmap/aa-bastudio-config created -deployment.apps/aa-bastudio-deployment created -service/aa-bastudio-jms-service created -statefulset.apps/aa-bastudio-jms created -job.batch/aa-bastudio-ltpa-395 created -secret/aa-bastudio-ltpa created -job.batch/aa-bastudio-oidc-127 created -poddisruptionbudget.policy/aa-bastudio-pdb-deployment-719 created -service/aa-bastudio-service created -poddisruptionbudget.policy/aa-bastudio-pdb-jms-107 created -role.rbac.authorization.k8s.io/aa-bastudio-init created -rolebinding.rbac.authorization.k8s.io/aa-bastudio-init created -serviceaccount/aa-bastudio-init created -networkpolicy.networking.k8s.io/aa-bastudio-npolicy-bas created -networkpolicy.networking.k8s.io/aa-bastudio-npolicy-bootstrap created -networkpolicy.networking.k8s.io/aa-bastudio-npolicy-default created -networkpolicy.networking.k8s.io/aa-bastudio-npolicy-jms created -networkpolicy.networking.k8s.io/aa-bastudio-npolicy-ltpa created 
-networkpolicy.networking.k8s.io/aa-bastudio-npolicy-oidc created -networkpolicy.networking.k8s.io/aa-bastudio-npolicy-test created -networkpolicy.networking.k8s.io/aa-bastudio-npolicy-upgrade created -serviceaccount/aa-bastudio-bastudio-sa created -poddisruptionbudget.policy/aa-resource-registry-pdb-516 created -service/aa-resource-registry-headless created -configmap/aa-resource-registry-script created -service/aa-resource-registry-service created -statefulset.apps/aa-resource-registry-server created -networkpolicy.networking.k8s.io/aa-resource-registry-npolicy-default created -networkpolicy.networking.k8s.io/aa-resource-registry-npolicy-test created -networkpolicy.networking.k8s.io/aa-resource-registry-networkpolicy created -serviceaccount/aa-resource-registry-sa created -networkpolicy.networking.k8s.io/aa-ibm-dba-base-npolicy-default created -networkpolicy.networking.k8s.io/aa-ibm-dba-base-npolicy-test created -serviceaccount/aa-ibm-dba-base-base-sa created -``` - -## Creating the Navigator service and configuring its UMS -1. Create the Navigator service on Redhat Openshift on IBM Cloud: -* /~https://github.com/icp4a/cert-kubernetes/blob/19.0.1/NAVIGATOR/platform/README_Eval_ROKS.md - -2. Configure it to connect to UMS: -* https://www.ibm.com/support/pages/node/1073240 - -3. Configure it to work with App Engine and IBM Business Automation Workflow using the following instructions: -* [Configuring App Engine with IBM Business Automation Navigator](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_basconfig_ban.html) -* [Publishing apps](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.bas/topics/tsk_bas_publishapps.html) -* [Configuring IBM Business Automation Studio with IBM Business Automation Workflow](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_basconfig_baw.html) - -## References -* /~https://github.com/icp4a/cert-kubernetes/blob/master/AAE/README.md -* /~https://github.com/icp4a/cert-kubernetes/blob/master/UMS/platform/README-ROKS.md -* /~https://github.com/icp4a/cert-kubernetes/blob/master/BAS/README.md -* https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_bas.html -* https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_install_bas.html - diff --git a/CONTENT/README.md b/CONTENT/README.md deleted file mode 100644 index 8745611e..00000000 --- a/CONTENT/README.md +++ /dev/null @@ -1,32 +0,0 @@ -# Deploy FileNet Content Manager - -IBM® FileNet® Content Manager V5.5 digitizes content and manages the content lifecycle by enabling users to focus on their work and collaborate within the enterprise and with external business partners. - -IBM FileNet Content Manager offers enterprise-level scalability and flexibility to handle the most demanding content challenges, the most complex business processes, and integration to all your existing systems. FileNet P8 is a reliable, scalable, and highly available enterprise platform that enables you to capture, store, manage, secure, and process information to increase operational efficiency and lower total cost of ownership. FileNet P8 enables you to streamline and automate business processes, access and manage all forms of content, and automate records management to help meet compliance needs. 
- -For more information see [FileNet Content Manager in the Knowledge Center](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/con_cm.html) - -## Requirements and Prerequisites - -Perform the following tasks to prepare to deploy your FileNet Content Manager images on Kubernetes: - -- Prepare your Kubernetes environment. See [Preparing to install automation containers on Kubernetes](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_k8s.html) - -- Download the PPA. Refer to the top repository [readme](../README.md) to find instructions on how to push and tag the product container images to your Docker registry. - -- Prepare your FileNet Content Manager environment. These procedures include setting up databases, LDAP, storage, and configuration files that are required for use and operation. If you plan to use the YAML file method, you also create YAML files that include the applicable parameter values for your deployment. You must complete all of the [preparation steps for FileNet Content Manager](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_ecmk8s.html) before you are ready to deploy the container images. - -- If you want to deploy additional optional containers, prepare the requirements that are specific to those containers. For details see the following information: - - [Configuring external share for containers](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_ecmexternalsharek8s.html) - - [Technology Preview: Getting started with the Content Services GraphQL API](http://www.ibm.com/support/docview.wss?uid=ibm10883630) - -## Deploying - -You can deploy your container images with the following methods: - -- [Using Helm charts](helm-charts/README.md) -- [Using Kubernetes YAML](k8s-yaml/README.md) - -## Completing post deployment configuration - -After you deploy your container images, you perform some required and some optional steps to get your FileNet Content Manager environment up and running. 
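Before you start the post-deployment tasks, it is worth confirming that the deployed containers came up cleanly. A minimal sketch using standard kubectl commands; `<namespace>` and `<pod-name>` are placeholders for your own values:

```console
$ kubectl get pods -n <namespace>
$ kubectl describe pod <pod-name> -n <namespace>
$ kubectl logs <pod-name> -n <namespace>
```

The Events section of the `describe` output shows whether the images were pulled and the containers were created and started.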
For detailed instructions, see [Completing post deployment tasks for IBM FileNet Content Manager](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_deploy_postecmdeployk8s.html) diff --git a/CONTENT/configuration/CMIS/configDropins/overrides/ldap_AD.xml b/CONTENT/configuration/CMIS/configDropins/overrides/ldap_AD.xml deleted file mode 100644 index c8fa5155..00000000 --- a/CONTENT/configuration/CMIS/configDropins/overrides/ldap_AD.xml +++ /dev/null @@ -1,17 +0,0 @@ - - - - - - diff --git a/CONTENT/configuration/CMIS/configDropins/overrides/ldap_TDS.xml b/CONTENT/configuration/CMIS/configDropins/overrides/ldap_TDS.xml deleted file mode 100644 index 6c9610d4..00000000 --- a/CONTENT/configuration/CMIS/configDropins/overrides/ldap_TDS.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - - - - diff --git a/CONTENT/configuration/CPE/configDropins/overrides/DB2JCCDriver.xml b/CONTENT/configuration/CPE/configDropins/overrides/DB2JCCDriver.xml deleted file mode 100644 index 937c2ce0..00000000 --- a/CONTENT/configuration/CPE/configDropins/overrides/DB2JCCDriver.xml +++ /dev/null @@ -1,6 +0,0 @@ - - - - - - diff --git a/CONTENT/configuration/CPE/configDropins/overrides/GCD.xml b/CONTENT/configuration/CPE/configDropins/overrides/GCD.xml deleted file mode 100644 index b51f5026..00000000 --- a/CONTENT/configuration/CPE/configDropins/overrides/GCD.xml +++ /dev/null @@ -1,29 +0,0 @@ - - - - - - - - - - - - - - - - diff --git a/CONTENT/configuration/CPE/configDropins/overrides/GCD_HADR.xml b/CONTENT/configuration/CPE/configDropins/overrides/GCD_HADR.xml deleted file mode 100644 index 30365e42..00000000 --- a/CONTENT/configuration/CPE/configDropins/overrides/GCD_HADR.xml +++ /dev/null @@ -1,35 +0,0 @@ - - - - - - - - - - - - - - diff --git a/CONTENT/configuration/CPE/configDropins/overrides/GCD_Oracle.xml b/CONTENT/configuration/CPE/configDropins/overrides/GCD_Oracle.xml deleted file mode 100644 index d8488a5f..00000000 --- a/CONTENT/configuration/CPE/configDropins/overrides/GCD_Oracle.xml +++ /dev/null @@ -1,21 +0,0 @@ - - - - - - - - - - - - - - - diff --git a/CONTENT/configuration/CPE/configDropins/overrides/OraJDBCDriver.xml b/CONTENT/configuration/CPE/configDropins/overrides/OraJDBCDriver.xml deleted file mode 100644 index aa2cffb9..00000000 --- a/CONTENT/configuration/CPE/configDropins/overrides/OraJDBCDriver.xml +++ /dev/null @@ -1,7 +0,0 @@ - - - - - - - diff --git a/CONTENT/configuration/CPE/configDropins/overrides/ldap_AD.xml b/CONTENT/configuration/CPE/configDropins/overrides/ldap_AD.xml deleted file mode 100644 index c8fa5155..00000000 --- a/CONTENT/configuration/CPE/configDropins/overrides/ldap_AD.xml +++ /dev/null @@ -1,17 +0,0 @@ - - - - - - diff --git a/CONTENT/configuration/CPE/configDropins/overrides/ldap_TDS.xml b/CONTENT/configuration/CPE/configDropins/overrides/ldap_TDS.xml deleted file mode 100644 index e5725463..00000000 --- a/CONTENT/configuration/CPE/configDropins/overrides/ldap_TDS.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - - - - diff --git a/CONTENT/configuration/CSS/CSS_Server_data/sslkeystore/cssSelfsignedServerStore b/CONTENT/configuration/CSS/CSS_Server_data/sslkeystore/cssSelfsignedServerStore deleted file mode 100644 index caa84df6..00000000 Binary files a/CONTENT/configuration/CSS/CSS_Server_data/sslkeystore/cssSelfsignedServerStore and /dev/null differ diff --git a/CONTENT/configuration/ContentGraphQL/configDropins/overrides/UMS_clientRegistration.json b/CONTENT/configuration/ContentGraphQL/configDropins/overrides/UMS_clientRegistration.json 
deleted file mode 100644 index 792f8a5c..00000000 --- a/CONTENT/configuration/ContentGraphQL/configDropins/overrides/UMS_clientRegistration.json +++ /dev/null @@ -1,33 +0,0 @@ -{ -��� "token_endpoint_auth_method": "client_secret_basic", -��� "scope": "openid profile email", -��� "grant_types": [ -������� "authorization_code", -������� "client_credentials", -������� "implicit", -������� "refresh_token", -������� "urn:ietf:params:oauth:grant-type:jwt-bearer" -��� ], -��� "response_types": [ -������� "code", -������� "token", -������� "id_token token" -��� ], -��� "application_type": "web", -��� "subject_type": "public", -��� "post_logout_redirect_uris": [], -��� "preauthorized_scope": "openid profile email", -��� "introspect_tokens": true, -��� "trusted_uri_prefixes": [ -������� "https://:/" -��� ], -��� "resource_ids": [], -��� "functional_user_groupIds": [], -��� "client_id": "contentServicesUms", -��� "client_secret": "password", -��� "client_name": "Content Services UMS", -��� "redirect_uris": [ -������� "https://:/oidcclient/redirect/ContentServicesUms" -��� ], -��� "allow_regexp_redirects": true -} \ No newline at end of file diff --git a/CONTENT/configuration/ContentGraphQL/configDropins/overrides/ldap_AD.xml b/CONTENT/configuration/ContentGraphQL/configDropins/overrides/ldap_AD.xml deleted file mode 100644 index 7b589dab..00000000 --- a/CONTENT/configuration/ContentGraphQL/configDropins/overrides/ldap_AD.xml +++ /dev/null @@ -1,17 +0,0 @@ - - - - - - diff --git a/CONTENT/configuration/ContentGraphQL/configDropins/overrides/ldap_TDS.xml b/CONTENT/configuration/ContentGraphQL/configDropins/overrides/ldap_TDS.xml deleted file mode 100644 index cb1a51e4..00000000 --- a/CONTENT/configuration/ContentGraphQL/configDropins/overrides/ldap_TDS.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - - - - diff --git a/CONTENT/configuration/README.md b/CONTENT/configuration/README.md deleted file mode 100644 index 1e35831a..00000000 --- a/CONTENT/configuration/README.md +++ /dev/null @@ -1,16 +0,0 @@ -# Configuration - -The configuration directory provides sample files for deployment settings and application configuration settings. - -Follow the instructions in [Preparing for FileNet Content Manager](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_ecmk8s.html) to set up the following environment elements: - -- LDAP -- Databases -- Configuration files for LDAP and Databases -- YAML files (for YAML deployments) - -The configuration directories also include samples for additional containers. 
For details, see the following information: - - [Configuring external share for containers](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_ecmexternalsharek8s.html) - - [Technology Preview: Getting started with the Content Services GraphQL API](http://www.ibm.com/support/docview.wss?uid=ibm10883630) - - diff --git a/CONTENT/configuration/extShare/configDropins/overrides/DB2JCCDriver.xml b/CONTENT/configuration/extShare/configDropins/overrides/DB2JCCDriver.xml deleted file mode 100644 index 937c2ce0..00000000 --- a/CONTENT/configuration/extShare/configDropins/overrides/DB2JCCDriver.xml +++ /dev/null @@ -1,6 +0,0 @@ - - - - - - diff --git a/CONTENT/configuration/extShare/configDropins/overrides/OraJDBCDriver.xml b/CONTENT/configuration/extShare/configDropins/overrides/OraJDBCDriver.xml deleted file mode 100644 index aa2cffb9..00000000 --- a/CONTENT/configuration/extShare/configDropins/overrides/OraJDBCDriver.xml +++ /dev/null @@ -1,7 +0,0 @@ - - - - - - - diff --git a/CONTENT/configuration/extShare/configDropins/overrides/ldapExt.xml b/CONTENT/configuration/extShare/configDropins/overrides/ldapExt.xml deleted file mode 100644 index 65ed740c..00000000 --- a/CONTENT/configuration/extShare/configDropins/overrides/ldapExt.xml +++ /dev/null @@ -1,7 +0,0 @@ - - - - - - - diff --git a/CONTENT/configuration/extShare/configDropins/overrides/ldap_AD.xml b/CONTENT/configuration/extShare/configDropins/overrides/ldap_AD.xml deleted file mode 100644 index 0326dc4d..00000000 --- a/CONTENT/configuration/extShare/configDropins/overrides/ldap_AD.xml +++ /dev/null @@ -1,17 +0,0 @@ - - - - - - diff --git a/CONTENT/configuration/extShare/configDropins/overrides/ldap_TDS.xml b/CONTENT/configuration/extShare/configDropins/overrides/ldap_TDS.xml deleted file mode 100644 index 6c9610d4..00000000 --- a/CONTENT/configuration/extShare/configDropins/overrides/ldap_TDS.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - - - - diff --git a/CONTENT/helm-charts/README.md b/CONTENT/helm-charts/README.md deleted file mode 100644 index 6dae3cd0..00000000 --- a/CONTENT/helm-charts/README.md +++ /dev/null @@ -1,324 +0,0 @@ -# Deploying with Helm charts - -> **NOTE**: This procedure covers a Helm chart deployment on certified Kubernetes. To deploy the Enterprise Content Management products on IBM Cloud Private 3.1.2, you must use the Business Automation Configuration Container. - -## Requirements and Prerequisites - -Ensure that you have completed the following tasks: - -- [Preparing FileNet environment](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_ecmk8s.html) - -- [Preparing your Kubernetes server with Kubernetes, Helm Tiller, and the Kubernetes command line](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_k8s.html) - -- [Downloading the PPA archive](../../README.md) - -The Helm commands for deploying the FileNet Content Manager images include a number of required command parameters for specific environment and configuration settings. 
Review the reference topics for these parameters and determine the values for your environment as part of your preparation: - -- [Content Platform Engine Helm command parameters](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_cpeparamsk8s_helm.html) - -- [Content Search Services Helm command parameters](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_cssparamsk8s_helm.html) - -- [Content Management Interoperability Services Helm command parameters](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_cmisparamsk8s_helm.html) - -## Tips: - -- On Openshift, an expired Docker secret can cause errors during deployment. If an admin.registry key already exists and has expired, delete the key with the following command: - ```console - kubectl delete secret admin.registrykey -n - ``` - Then generate a new Docker secret with the following command: - ```console - kubectl create secret docker-registry admin.registrykey --docker-server= --docker-username= --docker-password=$(oc whoami -t) --docker-email=ecmtest@ibm.com -n - ``` - - -## Initializing the command line interface -Use the following commands to initialize the command line interface: -1. Run the init command: - ```$ helm init --client-only ``` -2. Check whether the command line can connect to the remote Tiller server: - ```console - $ helm version - Client: &version.Version{SemVer:"v2.9.1", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"} - Server: &version.Version{SemVer:"v2.9.1", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"} - ``` - -## Deploying images -Provide the parameter values for your environment and run the command to deploy the image. - > **Tip**: Copy the sample command to a file, edit the parameter values, and use the updated command for deployment. - > **Tip**: The values that are provided for 'resources' inside helm commands are examples only. Each deployment must take into account the demands that their particular workload will place on the system and adjust values accordingly. - -For deployments on Red Hat OpenShift, note the following considerations for whether you want to use the Arbitrary UID capability in your environment: - -- If you don't want to use Arbitrary UID capability in your Red Hat OpenShift environment, deploy the images as described in the following sections. - -- If you do want to use Arbitrary UID, prepare for deployment by checking and if needed editing your Security Context Constraint: - - Set the desired user id range of minimum and maximum values for the project namespace: - - ```$ oc edit namespace ``` - - For the uid-range annotation, verify that a value similar to the following is specified: - - ```$ openshift.io/sa.scc.uid-range=1000490000/10000 ``` - - This range is similar to the default range for Red Hat OpenShift. - - - Remove authenticated users from anyuid (if set): - - ```$ oc adm policy remove-scc-from-group anyuid system:authenticated ``` - - - Update the runAsUser value. 
- Find the entry: - - ``` - $ oc get scc -o yaml - runAsUser: - type: RunAsAny - ``` - - Update the value: - - ``` - $ oc get scc -o yaml - runAsUser: - type: MustRunAsRange - ``` -To deploy Content Platform Engine: - - ```console - $ helm install ibm-dba-contentservices-3.1.0.tgz --name dbamc-cpe --namespace dbamc --set cpeProductionSetting.license=accept,cpeProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=18,cpeProductionSetting.JVM_MAX_HEAP_PERCENTAGE=33,service.externalmetricsPort=9103,cpeProductionSetting.licenseModel=FNCM.CU,dataVolume.existingPVCforCPECfgstore=cpe-cfgstore,dataVolume.existingPVCforCPELogstore=cpe-logstore,dataVolume.existingPVCforFilestore=cpe-filestore,dataVolume.existingPVCforICMrulestore=cpe-icmrulesstore,dataVolume.existingPVCforTextextstore=cpe-textextstore,dataVolume.existingPVCforBootstrapstore=cpe-bootstrapstore,dataVolume.existingPVCforFNLogstore=cpe-fnlogstore,autoscaling.enabled=False,resources.requests.cpu=1,replicaCount=1,image.repository=:/dbamc/cpe,image.tag=ga-553-p8cpe,cpeProductionSetting.gcdJNDIName=FNGDDS,cpeProductionSetting.gcdJNDIXAName=FNGDDSXA - ``` -Replace with the correct registry URL, for example, docker-registry.default.svc. - -To deploy Content Search Services: - - ```console - $ helm install ibm-dba-contentsearch-3.1.0.tgz --name dbamc-css --namespace dbamc --set cssProductionSetting.license=accept,service.name=csssvc,service.externalSSLPort=8199,cssProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=38,cssProductionSetting.JVM_MAX_HEAP_PERCENTAGE=50,service.externalmetricsPort=9103,dataVolume.existingPVCforCSSCfgstore=css-cfgstore,dataVolume.existingPVCforCSSLogstore=css-logstore,dataVolume.existingPVCforCSSTmpstore=css-tempstore,dataVolume.existingPVCforIndex=css-indexstore,dataVolume.existingPVCforCSSCustomstore=css-customstore,resources.limits.memory=7Gi,image.repository=:/dbamc/css,image.tag=ga-553-p8css,imagePullSecrets.name=admin.registrykey - ``` - Replace with the correct registry URL, for example, docker-registry.default.svc. - -Some environments require multiple Content Search Services deployments. To deploy multiple Content Search Services instances, specify a unique release name and service name, and a new set of persistent volumes and persistent volume claims (PVs and PVCs). The example below shows a deployment using a new release name `dbamc-css2`, a new service name `csssvc2`, and a new set of persistent volumes `css2-cfgstore`, `css2-logstore`, `css2-tempstore`, and `css2-customstore`. You must use the same persistent volume for the indexstore because multiple Content Search Services deployments must access the same set of index collections. However, it is recommended that the other persistent volumes be unique. - - ```console - $ helm install ibm-dba-contentsearch-3.1.0.tgz --name dbamc-css2 --namespace dbamc --set cssProductionSetting.license=accept,service.externalSSLPort=8199,service.externalmetricsPort=9103,service.name=csssvc2,cssProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=38,cssProductionSetting.JVM_MAX_HEAP_PERCENTAGE=50,dataVolume.existingPVCforCSSCfgstore=css2-cfgstore,dataVolume.existingPVCforCSSLogstore=css2-logstore,dataVolume.existingPVCforCSSTmpstore=css2-tempstore,dataVolume.existingPVCforIndex=css-indexstore,dataVolume.existingPVCforCSSCustomstore=css2-customstore,resources.limits.memory=7Gi,image.repository=:/dbamc/css,image.tag=ga-553-p8css,imagePullSecrets.name=admin.registrykey - ``` - - Replace with correct registry URL, for example, docker-registry.default.svc. 
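The `--set` lists in these commands are long. Because `--set a.b=c` and a nested values file are equivalent in Helm, the same settings can instead be supplied with `-f`. A minimal sketch for the second Content Search Services release above; it assumes the chart reads exactly the keys shown in the `--set` example, and `<registry URL>` is a placeholder for your registry:

```yaml
# css2-values.yaml - mirrors the --set flags used for the dbamc-css2 release
cssProductionSetting:
  license: accept
  JVM_INITIAL_HEAP_PERCENTAGE: 38
  JVM_MAX_HEAP_PERCENTAGE: 50
service:
  name: csssvc2
  externalSSLPort: 8199
  externalmetricsPort: 9103
dataVolume:
  existingPVCforCSSCfgstore: css2-cfgstore
  existingPVCforCSSLogstore: css2-logstore
  existingPVCforCSSTmpstore: css2-tempstore
  # All Content Search Services deployments must share the same index store
  existingPVCforIndex: css-indexstore
  existingPVCforCSSCustomstore: css2-customstore
resources:
  limits:
    memory: 7Gi
image:
  repository: <registry URL>/dbamc/css
  tag: ga-553-p8css
imagePullSecrets:
  name: admin.registrykey
```

```console
$ helm install ibm-dba-contentsearch-3.1.0.tgz --name dbamc-css2 --namespace dbamc -f css2-values.yaml
```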
- - - To deploy Content Management Interoperability Services: - - ```console - $ helm install ibm-dba-cscmis-1.8.0.tgz --name dbamc-cmis --namespace dbamc --set cmisProductionSetting.license=accept,cmisProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,cmisProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,service.externalmetricsPort=9103,dataVolume.existingPVCforCMISCfgstore=cmis-cfgstore,dataVolume.existingPVCforCMISLogstore=cmis-logstore,autoscaling.enabled=False,replicaCount=1,imagePullSecrets.name=admin.registrykey,image.repository=:/dbamc/cmis,image.tag=ga-304-cmis-if007,cmisProductionSetting.cpeUrl=http://10.0.0.110:9080/wsi/FNCEWS40MTOM - ``` -Replace with correct registry URL, for example, docker-registry.default.svc. - -> **Reminder**: After you deploy, return to the instructions in the Knowledge Center, [Completing post deployment tasks for IBM FileNet Content Manager](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_deploy_postecmdeployk8s.html), to get your FileNet Content Manager environment up and running - -## Deploying the External Share container - -If you want to optionally include the external share capability in your environment, you also configure and deploy the External Share container. - -Ensure that you have completed the all of the preparation steps for deploying the External Share container: [Configuring external share for containers](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_ecmexternalsharek8s.html) - -For deployments on Red Hat OpenShift, note the following considerations for whether you want to use the Arbitrary UID capability in your environment: - -- If you don't want to use Arbitrary UID capability in your Red Hat OpenShift environment, deploy the images as described in the following sections. - -- If you do want to use Arbitrary UID, prepare for deployment by checking and if needed editing your Security Context Constraint to set the desired user id range of minimum and maximum values for the project namespace: - ```$ oc edit namespace ``` - - For the uid-range annotation, verify that a value similar to the following is specified: - ```$ openshift.io/sa.scc.uid-range=1000490000/10000 ``` - This range is similar to the default range for Red Hat OpenShift. - - You can also remove authenticated users: - ```$ oc adm policy remove-scc-from-group anyuid system:authenticated ``` - - -To deploy the External Share container: - - ``` - $ helm install ibm-dba-extshare-prod-3.0.1.tgz --name dbamc-es --namespace dbamc --set esProductionSetting.license=accept,esProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,esProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,service.externalmetricsPort=9103,dataVolume.existingPVCforESCfgstore=es-cfgstore,dataVolume.existingPVCforESLogstore=es-logstore,autoscaling.enabled=False,replicaCount=1,imagePullSecrets.name=admin.registrykey,image.repository=:/dbamc/extshare,image.tag=ga-306-es,esProductionSetting.esDBType=db2,esProductionSetting.esJNDIDSName=ECMClientDS,esProductionSetting.esSChema=ICNDB,esProductionSetting.esTableSpace=ICNDBTS,esProductionSetting.esAdmin=ceadmin - ``` - - Replace with correct registry URL, for example, docker-registry.default.svc. 
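Once the External Share pod is running, you can optionally verify that the application responds before continuing. The sketch below is an illustration only: it assumes the pod can be found by filtering on `extshare` in its name, and it uses the `/contentapi/rest/share/v1/info` path that the sample External Share deployment descriptors elsewhere in this repository use for their readiness probe.

```console
$ kubectl get pods -n dbamc | grep extshare
# Forward the container's HTTP port locally (pod name is an example placeholder) and query the info endpoint
$ kubectl port-forward <extshare-pod-name> 9080:9080 -n dbamc &
$ curl -s http://localhost:9080/contentapi/rest/share/v1/info
```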
- -## Deploying the Technology Preview: Content Services GraphQL API container -If you want to use the Content Services GraphQL API container, follow the instructions in the Getting Started technical notice: [Technology Preview: Getting started with Content Services GraphQL API](http://www.ibm.com/support/docview.wss?uid=ibm10883630) - -To deploy the ContentGraphQL Container: - - ``` - $ helm install ibm-dba-contentrestservice-dev-3.1.0.tgz --name dbamc-crs --namespace dbamc --set crsProductionSetting.license=accept,crsProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,crsProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,service.externalmetricsPort=9103,dataVolume.existingPVCforCfgstore=crs-icp-cfgstore,dataVolume.existingPVCforCfglogs=crs-icp-logs,autoscaling.enabled=False,replicaCount=1,imagePullSecrets.name=admin.registrykey,image.repository=:/dbamc/crs,image.tag=5.5.3,crsProductionSetting.cpeUri=https://:/wsi/FNCEWS40MTOM - ``` - Replace with correct registry URL, for example, docker-registry.default.svc. - Replace : with the FileNet Content Engine application host and Port. - - - -## Upgrading deployments - > **Tip**: You can discover the necessary resource values for the deployment from corresponding product deployments in IBM Cloud Private Console and Openshift Container Platform. - -### Before you begin -Before you run the upgrade commands, you must prepare the environment for upgrades by updating permissions on your persistent volumes. Depending on your starting version you might also need to create or update volumes and folders for Content Search Services and Content Management Interoperability Services. Complete the preparation steps in the following topic before you start the upgrade: [Upgrading Content Manager releases](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.upgrading/topics/tsk_cm_upgrade.htm) - -For an upgrade to the External share container, complete the 19.0.2 preparation steps for External Share PV and PVC updates in the following topic before you start the upgrade: [Upgrading Content Manager releases](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.upgrading/topics/tsk_cm_upgrade.htm) - -You must also [download the PPA archive](../../README.md) before you begin the upgrade process. - -### Upgrading on Red Hat OpenShift - -For upgrades on Red Hat OpenShift, note the following considerations when you want to use the Arbitrary UID capability in your updated environment: - -- If you don't want to use Arbitrary UID capability in your Red Hat OpenShift environment, use the instructions in Upgrading on certified Kubernetes. - -- If you do want to use Arbitrary UID, use the following steps to prepare for the upgrade: - -1. Check and if necessary edit your Security Context Constraint to set desired user id range of minimum and maximum values for the project namespace: - - Set the desired user id range of minimum and maximum values for the project namespace: - - ```$ oc edit namespace ``` - - For the uid-range annotation, verify that a value similar to the following is specified: - - ```$ openshift.io/sa.scc.uid-range=1000490000/10000 ``` - - This range is similar to the default range for Red Hat OpenShift. - - - Remove authenticated users from anyuid (if set): - - ```$ oc adm policy remove-scc-from-group anyuid system:authenticated ``` - - - Update the runAsUser value. 
- Find the entry: - - ``` - $ oc get scc -o yaml - runAsUser: - type: RunAsAny - ``` - - Update the value: - - ``` - $ oc get scc -o yaml - runAsUser: - type: MustRunAsRange - ``` - -2. Stop all existing containers. - -3. Run the new install (instead of upgrade) commands for the containers. Update the commands provided to include the values for your existing environment. - -> **NOTE**: In this context, the install commands update the application. Updates for your existing data happen automatically when the updated applications start. - -To deploy Content Platform Engine: - - ```console - $ helm install ibm-dba-contentservices-3.1.0.tgz --name dbamc-cpe --namespace dbamc --set cpeProductionSetting.license=accept,cpeProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=18,cpeProductionSetting.JVM_MAX_HEAP_PERCENTAGE=33,service.externalmetricsPort=9103,cpeProductionSetting.licenseModel=FNCM.CU,dataVolume.existingPVCforCPECfgstore=cpe-cfgstore,dataVolume.existingPVCforCPELogstore=cpe-logstore,dataVolume.existingPVCforFilestore=cpe-filestore,dataVolume.existingPVCforICMrulestore=cpe-icmrulesstore,dataVolume.existingPVCforTextextstore=cpe-textextstore,dataVolume.existingPVCforBootstrapstore=cpe-bootstrapstore,dataVolume.existingPVCforFNLogstore=cpe-fnlogstore,autoscaling.enabled=False,resources.requests.cpu=1,replicaCount=1,image.repository=:/dbamc/cpe,image.tag=ga-553-p8cpe,cpeProductionSetting.gcdJNDIName=FNGDDS,cpeProductionSetting.gcdJNDIXAName=FNGDDSXA - ``` -Replace with correct registry URL, for example, docker-registry.default.svc. - -To deploy Content Search Services: - - ```console - $ helm install ibm-dba-contentsearch-3.1.0.tgz --name dbamc-css --namespace dbamc --set cssProductionSetting.license=accept,service.name=csssvc,service.externalSSLPort=8199,cssProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=38,cssProductionSetting.JVM_MAX_HEAP_PERCENTAGE=50,service.externalmetricsPort=9103,dataVolume.existingPVCforCSSCfgstore=css-cfgstore,dataVolume.existingPVCforCSSLogstore=css-logstore,dataVolume.existingPVCforCSSTmpstore=css-tempstore,dataVolume.existingPVCforIndex=css-indexstore,dataVolume.existingPVCforCSSCustomstore=css-customstore,resources.limits.memory=7Gi,image.repository=:/dbamc/css,image.tag=ga-553-p8css,imagePullSecrets.name=admin.registrykey - ``` - Replace with the correct registry URL, for example, docker-registry.default.svc. - - To deploy Content Management Interoperability Services: - - ```console - $ helm install ibm-dba-cscmis-1.8.0.tgz --name dbamc-cmis --namespace dbamc --set cmisProductionSetting.license=accept,cmisProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,cmisProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,service.externalmetricsPort=9103,dataVolume.existingPVCforCMISCfgstore=cmis-cfgstore,dataVolume.existingPVCforCMISLogstore=cmis-logstore,autoscaling.enabled=False,replicaCount=1,imagePullSecrets.name=admin.registrykey,image.repository=:/dbamc/cmis,image.tag=ga-304-cmis-if007,cmisProductionSetting.cpeUrl=http://10.0.0.110:9080/wsi/FNCEWS40MTOM - ``` -Replace with correct registry URL, for example, docker-registry.default.svc. 
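The `cmisProductionSetting.cpeUrl` value in the command above must point to your own Content Platform Engine WSI endpoint; the IP address and port shown are examples only. One way to look up the address, assuming Content Platform Engine was deployed into the `dbamc` namespace (the service name depends on your release name):

```console
$ kubectl get svc -n dbamc | grep cpe
# Build the URL as http://<host>:<port>/wsi/FNCEWS40MTOM from the service (or node) address and port
```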
- -To deploy the External Share container: - - ``` - $ helm install ibm-dba-extshare-prod-3.0.1.tgz --name dbamc-es --namespace dbamc --set esProductionSetting.license=accept,esProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,esProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,service.externalmetricsPort=9103,dataVolume.existingPVCforESCfgstore=es-cfgstore,dataVolume.existingPVCforESLogstore=es-logstore,autoscaling.enabled=False,replicaCount=1,imagePullSecrets.name=admin.registrykey,image.repository=:/dbamc/extshare,image.tag=ga-306-es,esProductionSetting.esDBType=db2,esProductionSetting.esJNDIDSName=ECMClientDS,esProductionSetting.esSChema=ICNDB,esProductionSetting.esTableSpace=ICNDBTS,esProductionSetting.esAdmin=ceadmin - ``` - - Replace with correct registry URL, for example, docker-registry.default.svc. - -### Upgrading on certified Kubernetes platforms (for non-Arbitrary UID deployments) - -To upgrade Content Platform Engine: - -On Red Hat OpenShift: - -``` - helm upgrade ecm-helm-cpe ibm-dba-contentservices-3.1.0.tgz --reuse-values --set image.repository=docker-registry.default.svc:5000/{project}/cpe,image.tag=ga-553-p8cpe-if001,imagePullSecrets.name=admin.registrykey,log.format=json,cpeProductionSetting.jvmInitialHeapPercentage=18,cpeProductionSetting.jvmMaxHeapPercentage=33,service.externalmetricsPort=9103 -``` -On non-Red Hat OpenShift platforms: - -``` - helm upgrade ecm-helm-cpe ibm-dba-contentservices-3.1.0.tgz --reuse-values --tls --set image.repository=:/{namespace}/cpe,image.tag=ga-553-p8cpe-if001,imagePullSecrets.name=admin.registrykey,log.format=json,cpeProductionSetting.jvmInitialHeapPercentage=18,cpeProductionSetting.jvmMaxHeapPercentage=33,runAsUser=50001,service.externalmetricsPort=9103 -``` - - -Replace with correct registry URL, for example, docker-registry.default.svc. - -To upgrade Content Search Services: - -On Red Hat OpenShift: - -``` - $ helm upgrade dbamc-css /helm-charts/ibm-dba-contentsearch-3.1.0.tgz --reuse-values --set image.repository=:/dbamc/css,image.tag=ga-553-p8css-if001,imagePullSecrets.name=admin.registrykey,resources.requests.cpu=500m,resources.requests.memory=512Mi,resources.limits.cpu=8,resources.limits.memory=8192Mi,log.format=json,dataVolume.nameforCSSCustomstore=custom-stor,dataVolume.existingPVCforCSSCustomstore=css-icp-customstore,cssProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=38,cssProductionSetting.JVM_MAX_HEAP_PERCENTAGE=50,service.externalmetricsPort=9103 -``` - -On non-Red Hat OpenShift platforms: - -``` - $ helm upgrade dbamc-css /helm-charts/ibm-dba-contentsearch-3.1.0.tgz --reuse-values --set image.repository=:/dbamc/css,image.tag=ga-553-p8css,imagePullSecrets.name=admin.registrykey,resources.requests.cpu=500m,resources.requests.memory=512Mi,resources.limits.cpu=8,resources.limits.memory=8192Mi,log.format=json,dataVolume.nameforCSSCustomstore=custom-stor,dataVolume.existingPVCforCSSCustomstore=css-icp-customstore,runAsUser=50001,cssProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=38,cssProductionSetting.JVM_MAX_HEAP_PERCENTAGE=50,service.externalmetricsPort=9103 -``` - -Replace with correct registry URL, for example, docker-registry.default.svc.
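If any of these upgrades does not behave as expected, Helm keeps a revision history for each release that you can inspect and roll back to. A short sketch using the `dbamc-css` release from the example above; the revision number is illustrative, and you may need to add `--tls` on platforms where Tiller requires it:

```console
$ helm history dbamc-css
# Roll back to a previous revision reported by the history command
$ helm rollback dbamc-css 1
```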
- -To upgrade Content Management Interoperability Services: - -On Red Hat OpenShift: - -``` - $ helm upgrade dbamc-cmis /helm-charts/ibm-dba-cscmis-1.8.0.tgz --reuse-values --set image.repository=:/dbamc/cmis,image.tag=ga-304-cmis-if007,imagePullSecrets.name=admin.registrykey,resources.requests.cpu=500m,resources.requests.memory=512Mi,resources.limits.cpu=1,resources.limits.memory=1024Mi,cmisProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,cmisProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,log.format=json,service.externalmetricsPort=9103 -``` -On non-Red Hat OpenShift platforms: - -``` - $ helm upgrade dbamc-cmis /helm-charts/ibm-dba-cscmis-1.8.0.tgz --reuse-values --set image.repository=:/dbamc/cmis,image.tag=ga-304-cmis-if007,imagePullSecrets.name=admin.registrykey,resources.requests.cpu=500m,resources.requests.memory=512Mi,resources.limits.cpu=1,resources.limits.memory=1024Mi,log.format=json,runAsUser=50001,cmisProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,cmisProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,service.externalmetricsPort=9103 -``` - -Replace with correct registry URL, for example, docker-registry.default.svc. - -To upgrade the External Share container: - -On Red Hat OpenShift: - - ``` - $ helm upgrade dbamc-es ibm-dba-extshare-prod-3.0.1.tgz --namespace dbamc --set esProductionSetting.license=accept,esProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,esProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,dataVolume.existingPVCforESCfgstore=es-cfgstore,dataVolume.existingPVCforESLogstore=es-logstore,autoscaling.enabled=False,replicaCount=1,imagePullSecrets.name=admin.registrykey,image.repository=:5000/dbamc/extshare,image.tag=ga-306-es,esProductionSetting.esDBType=db2,esProductionSetting.esJNDIDSName=ECMClientDS,esProductionSetting.esSChema=ICNDB,esProductionSetting.esTableSpace=ICNDBTS,esProductionSetting.esAdmin=ceadmin,service.externalmetricsPort=9103 - ``` - -On non-Red Hat OpenShift platforms: - - ``` - $ helm upgrade dbamc-es ibm-dba-extshare-prod-3.0.1.tgz --namespace dbamc --set esProductionSetting.license=accept,esProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,esProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,dataVolume.existingPVCforESCfgstore=es-cfgstore,dataVolume.existingPVCforESLogstore=es-logstore,autoscaling.enabled=False,replicaCount=1,imagePullSecrets.name=admin.registrykey,image.repository=:5000/dbamc/extshare,image.tag=ga-306-es,esProductionSetting.esDBType=db2,esProductionSetting.esJNDIDSName=ECMClientDS,esProductionSetting.esSChema=ICNDB,esProductionSetting.esTableSpace=ICNDBTS,esProductionSetting.esAdmin=ceadmin,runAsUser=50001,service.externalmetricsPort=9103 - ``` - - Replace with correct registry URL, for example, docker-registry.default.svc. - - - -## Uninstalling a Kubernetes release of FileNet Content Manager - -To uninstall and delete a release named `my-cpe-prod-release`, use the following command: - -```console -$ helm delete my-cpe-prod-release --purge --tls -``` - -The command removes all the Kubernetes components associated with the release, except any Persistent Volume Claims (PVCs). This is the default behavior of Kubernetes, and ensures that valuable data is not deleted.
To delete the persisted data of the release, you can delete the PVC using the following command: - -```console -$ kubectl delete pvc my-cpe-prod-release-cpe-pvclaim -``` diff --git a/CONTENT/helm-charts/ibm-dba-contentrestservice-dev-3.0.0.tgz b/CONTENT/helm-charts/ibm-dba-contentrestservice-dev-3.0.0.tgz deleted file mode 100644 index 4f5f1199..00000000 Binary files a/CONTENT/helm-charts/ibm-dba-contentrestservice-dev-3.0.0.tgz and /dev/null differ diff --git a/CONTENT/helm-charts/ibm-dba-contentrestservice-dev-3.1.0.tgz b/CONTENT/helm-charts/ibm-dba-contentrestservice-dev-3.1.0.tgz deleted file mode 100644 index 9cee2617..00000000 Binary files a/CONTENT/helm-charts/ibm-dba-contentrestservice-dev-3.1.0.tgz and /dev/null differ diff --git a/CONTENT/helm-charts/ibm-dba-contentsearch-3.0.0.tgz b/CONTENT/helm-charts/ibm-dba-contentsearch-3.0.0.tgz deleted file mode 100644 index 077b7074..00000000 Binary files a/CONTENT/helm-charts/ibm-dba-contentsearch-3.0.0.tgz and /dev/null differ diff --git a/CONTENT/helm-charts/ibm-dba-contentsearch-3.1.0.tgz b/CONTENT/helm-charts/ibm-dba-contentsearch-3.1.0.tgz deleted file mode 100644 index 361be0dc..00000000 Binary files a/CONTENT/helm-charts/ibm-dba-contentsearch-3.1.0.tgz and /dev/null differ diff --git a/CONTENT/helm-charts/ibm-dba-contentservices-3.0.0.tgz b/CONTENT/helm-charts/ibm-dba-contentservices-3.0.0.tgz deleted file mode 100644 index 53a679ab..00000000 Binary files a/CONTENT/helm-charts/ibm-dba-contentservices-3.0.0.tgz and /dev/null differ diff --git a/CONTENT/helm-charts/ibm-dba-contentservices-3.1.0.tgz b/CONTENT/helm-charts/ibm-dba-contentservices-3.1.0.tgz deleted file mode 100644 index f381f490..00000000 Binary files a/CONTENT/helm-charts/ibm-dba-contentservices-3.1.0.tgz and /dev/null differ diff --git a/CONTENT/helm-charts/ibm-dba-cscmis-1.7.0.tgz b/CONTENT/helm-charts/ibm-dba-cscmis-1.7.0.tgz deleted file mode 100644 index 982e6731..00000000 Binary files a/CONTENT/helm-charts/ibm-dba-cscmis-1.7.0.tgz and /dev/null differ diff --git a/CONTENT/helm-charts/ibm-dba-cscmis-1.8.0.tgz b/CONTENT/helm-charts/ibm-dba-cscmis-1.8.0.tgz deleted file mode 100644 index 9dff8b49..00000000 Binary files a/CONTENT/helm-charts/ibm-dba-cscmis-1.8.0.tgz and /dev/null differ diff --git a/CONTENT/helm-charts/ibm-dba-extshare-prod-3.0.0.tgz b/CONTENT/helm-charts/ibm-dba-extshare-prod-3.0.0.tgz deleted file mode 100644 index cf7bf062..00000000 Binary files a/CONTENT/helm-charts/ibm-dba-extshare-prod-3.0.0.tgz and /dev/null differ diff --git a/CONTENT/helm-charts/ibm-dba-extshare-prod-3.0.1.tgz b/CONTENT/helm-charts/ibm-dba-extshare-prod-3.0.1.tgz deleted file mode 100644 index d157342d..00000000 Binary files a/CONTENT/helm-charts/ibm-dba-extshare-prod-3.0.1.tgz and /dev/null differ diff --git a/CONTENT/k8s-yaml/CMIS/cmis-deploy.yml b/CONTENT/k8s-yaml/CMIS/cmis-deploy.yml deleted file mode 100644 index 99d3eae6..00000000 --- a/CONTENT/k8s-yaml/CMIS/cmis-deploy.yml +++ /dev/null @@ -1,173 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: ecm-cmis-svc -spec: - ports: - - name: http - protocol: TCP - port: 9080 - targetPort: 9080 - - name: https - protocol: TCP - port: 9443 - targetPort: 9443 - selector: - app: cmis-cluster1 - type: NodePort - sessionAffinity: ClientIP ---- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: ecm-cmis-np - namespace: $KUBE_NAME_SPACE -spec: - podSelector: {} - policyTypes: - - Ingress - - Egress - ingress: - - {} - egress: - - ports: - - port: 53 - protocol: UDP - - port: 53 - protocol: TCP - - to: 
- - namespaceSelector: {} ---- -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: ecm-cmis -spec: - replicas: 1 - strategy: - type: RollingUpdate - template: - metadata: - labels: - app: cmis-cluster1 - spec: - imagePullSecrets: - - name: admin.registrykey - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - cmis-cluster1 - topologyKey: "kubernetes.io/hostname" - containers: - - image: /default/cmis:latest - imagePullPolicy: Always - name: ecm-cmis - securityContext: - # If deployment on OpenShift and image supports arbitrary uid, - # remove runAsUser and pods will run with arbitrarily assigned user ID. - runAsUser: 50001 - allowPrivilegeEscalation: false - resources: - requests: - memory: 256Mi - cpu: 500m - limits: - memory: 1536Mi - cpu: 1 - ports: - - containerPort: 9080 - name: http - - containerPort: 9443 - name: https - env: - - name: LICENSE - value: "accept" - - name: PRODUCT - value: "DBAMC" - - name: CMIS_VERSION - value: "1.1" - - name: CE_URL - value: "http://cpeurl:30540/wsi/FNCEWS40MTOM" - - name: TZ - value: "Etc/UTC" - - name: JVM_INITIAL_HEAP_PERCENTAGE - value: "40" - - name: JVM_MAX_HEAP_PERCENTAGE - value: "66" - - name: CMC_TIME_TO_LIVE - value: "3600000" - - name: CRC_TIME_TO_LIVE - value: "3600000" - - name: USER_CACHE_TIME_TO_LIVE - value: "28800000" - - name: CHECKOUT_COPYCONTENT - value: "True" - - name: DEFAULTMAXITEMS - value: "25" - - name: CVL_CACHE - value: "True" - - name: SECUREMETADATACACHE - value: "False" - - name: FILTERHIDDENPROPERTIES - value: "True" - - name: QUERYTIMELIMIT - value: "180" - - name: RESUMABLEQUERIESFORREST - value: "True" - - name: ESCAPEUNSAFESTRINGCHARACTERS - value: "False" - - name: MAXSOAPSIZE - value: "180" - - name: PRINTFULLSTACKTRACE - value: "False" - - name: FOLDERFIRSTSEARCH - value: "False" - - name: IGNOREROOTDOCUMENTS - value: "False" - - name: SUPPORTINGTYPEMUTABILITY - value: "False" - - name: MY_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: MY_POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: MY_POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - readinessProbe: - tcpSocket: - port: 9080 - initialDelaySeconds: 90 - periodSeconds: 5 - livenessProbe: - tcpSocket: - port: 9080 - initialDelaySeconds: 180 - periodSeconds: 5 - volumeMounts: - - name: cmiscfgstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides" - subPath: configDropins/overrides - - name: cmislogstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/logs" - subPath: logs - - volumes: - - name: cmiscfgstore-pvc - persistentVolumeClaim: - claimName: "cmis-cfgstore" - - name: cmislogstore-pvc - persistentVolumeClaim: - claimName: "cmis-logstore" diff --git a/CONTENT/k8s-yaml/CPE/cpe-deploy.yml b/CONTENT/k8s-yaml/CPE/cpe-deploy.yml deleted file mode 100644 index a3995437..00000000 --- a/CONTENT/k8s-yaml/CPE/cpe-deploy.yml +++ /dev/null @@ -1,187 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: ecm-cpe-svc -spec: - ports: - - name: http - protocol: TCP - port: 9080 - targetPort: 9080 - - name: https - protocol: TCP - port: 9443 - targetPort: 9443 - selector: - app: cpeserver-cluster1 - type: NodePort - sessionAffinity: ClientIP ---- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: ecm-cpe-np - namespace: $KUBE_NAME_SPACE -spec: - podSelector: {} - policyTypes: - - 
Ingress - - Egress - ingress: - - {} - egress: - - ports: - - port: 53 - protocol: UDP - - port: 53 - protocol: TCP - - to: - - namespaceSelector: {} ---- -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: ecm-cpe -spec: - replicas: 1 - strategy: - type: RollingUpdate - template: - metadata: - labels: - app: cpeserver-cluster1 - spec: - imagePullSecrets: - - name: admin.registrykey - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - cpeserver-cluster1 - topologyKey: "kubernetes.io/hostname" - containers: - - image: /default/cpe:latest - imagePullPolicy: Always - name: ecm-cpe - securityContext: - # If deployment on OpenShift and image supports arbitrary uid, - # remove runAsUser and pods will run with arbitrarily assigned user ID. - runAsUser: 50001 - allowPrivilegeEscalation: false - resources: - requests: - memory: 512Mi - cpu: 500m - limits: - memory: 3072Mi - cpu: 1 - ports: - - containerPort: 9080 - name: http - - containerPort: 9443 - name: https - env: - - name: LICENSE - value: "accept" - - name: PRODUCT - value: "DBAMC" - - name: CPESTATICPORT - value: "false" - - name: CONTAINERTYPE - value: "1" - - name: TZ - value: "Etc/UTC" - - name: JVM_INITIAL_HEAP_PERCENTAGE - value: "18" - - name: JVM_MAX_HEAP_PERCENTAGE - value: "33" - - name: JVM_CUSTOMIZE_OPTIONS - value: "" - - name: GCDJNDINAME - value: "FNGDDS" - - name: GCDJNDIXANAME - value: "FNGDDSXA" - - name: LICENSEMODEL - value: "FNCM.CU" - - name: MY_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: MY_POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: MY_POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - readinessProbe: - httpGet: - path: /P8CE/Health - port: 9080 - httpHeaders: - - name: Content-Encoding - value: gzip - initialDelaySeconds: 180 - periodSeconds: 5 - livenessProbe: - httpGet: - path: /P8CE/Health - port: 9080 - httpHeaders: - - name: Content-Encoding - value: gzip - initialDelaySeconds: 600 - periodSeconds: 5 - volumeMounts: - - name: cpecfgstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides" - subPath: configDropins/overrides - - name: cpelogstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/logs" - subPath: logs - - name: cpefilestore-pvc - mountPath: "/opt/ibm/asa" - subPath: asa - - name: cpeicmrulesstore-pvc - mountPath: "/opt/ibm/icmrules" - subPath: icmrules - - name: cpetextextstore-pvc - mountPath: /opt/ibm/textext - subPath: textext - - name: cpebootstrapstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/lib/bootstrap" - subPath: bootstrap - - name: cpefnlogstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/FileNet" - subPath: FileNet - - volumes: - - name: cpecfgstore-pvc - persistentVolumeClaim: - claimName: "cpe-cfgstore" - - name: cpelogstore-pvc - persistentVolumeClaim: - claimName: "cpe-logstore" - - name: cpefilestore-pvc - persistentVolumeClaim: - claimName: "cpe-filestore" - - name: cpeicmrulesstore-pvc - persistentVolumeClaim: - claimName: "cpe-icmrulesstore" - - name: cpetextextstore-pvc - persistentVolumeClaim: - claimName: "cpe-textextstore" - - name: cpebootstrapstore-pvc - persistentVolumeClaim: - claimName: "cpe-bootstrapstore" - - name: cpefnlogstore-pvc - persistentVolumeClaim: - claimName: "cpe-fnlogstore" diff --git a/CONTENT/k8s-yaml/CSS/css-deploy.yml b/CONTENT/k8s-yaml/CSS/css-deploy.yml 
deleted file mode 100644 index cb77cfa0..00000000 --- a/CONTENT/k8s-yaml/CSS/css-deploy.yml +++ /dev/null @@ -1,167 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: csssearch-cluster -spec: - ports: - - name: cssdefault - protocol: TCP - port: 8191 - targetPort: 8191 - - name: ccsssl - protocol: TCP - port: 8199 - targetPort: 8199 - selector: - app: csssearch-cluster - type: ClusterIP - sessionAffinity: ClientIP ---- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: ecm-css-np - namespace: $KUBE_NAME_SPACE -spec: - podSelector: {} - policyTypes: - - Ingress - - Egress - ingress: - - {} - egress: - - ports: - - port: 53 - protocol: UDP - - port: 53 - protocol: TCP - - to: - - namespaceSelector: {} ---- -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: csssearch-cluster -spec: - replicas: 1 - strategy: - type: RollingUpdate - template: - metadata: - labels: - app: csssearch-cluster - spec: - imagePullSecrets: - - name: admin.registrykey - annotations: - scheduler.alpha.kubernetes.io/affinity: | - { - "podAntiAffinity": { - "preferredDuringSchedulingIgnoredDuringExecution": [ - { - "weight":100, - "podAffinityTerm":{ - "labelSelector": { - "matchExpressions": [ - { - "key": "app", - "operator": "In", - "values": ["csssearch-cluster"] - } - ] - }, - "topologyKey": "kubernetes.io/hostname" - } - } - ] - } - } - - spec: - containers: - - image: /default/css:latest - imagePullPolicy: Always - name: csssearch-cluster - securityContext: - # If deployment on OpenShift and image supports arbitrary uid, - # remove runAsUser and pods will run with arbitrarily assigned user ID. - runAsUser: 50001 - allowPrivilegeEscalation: false - resources: - requests: - memory: 512Mi - cpu: 500m - limits: - memory: 4096Mi - cpu: 1 - ports: - - containerPort: 8191 - name: cssdefault - - containerPort: 8199 - name: cssssl - readinessProbe: - tcpSocket: - port: 8199 - initialDelaySeconds: 60 - periodSeconds: 5 - livenessProbe: - tcpSocket: - port: 8199 - initialDelaySeconds: 120 - periodSeconds: 5 - env: - - name: LICENSE - value: "accept" - - name: PRODUCT - value: "DBAMC" - - name: JVM_INITIAL_HEAP_PERCENTAGE - value: "38" - - name: JVM_MAX_HEAP_PERCENTAGE - value: "50" - - name: TZ - value: "Etc/UTC" - - name: MY_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: MY_POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: MY_POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - volumeMounts: - - name: csscfgstore-pvc - mountPath: "/opt/IBM/ContentSearchServices/CSS_Server/data" - subPath: CSS_Server_data/sslkeystore - - name: csslogstore-pvc - mountPath: "/opt/IBM/ContentSearchServices/CSS_Server/log" - subPath: CSS_Server_log - - name: csstempstore-pvc - mountPath: "/opt/IBM/ContentSearchServices/CSS_Server/temp" - subPath: CSS_Server_temp - - name: cssindexstore-pvc - mountPath: "/opt/ibm/indexareas" - subPath: CSS_Indexes - - name: csscustomstore-pvc - mountPath: "/opt/IBM/ContentSearchServices/CSS_Server/config" - subPath: css/CSS_Server_Config - - volumes: - - name: csscfgstore-pvc - persistentVolumeClaim: - claimName: "css-cfgstore" - - name: csslogstore-pvc - persistentVolumeClaim: - claimName: "css-logstore" - - name: csstempstore-pvc - persistentVolumeClaim: - claimName: "css-tempstore" - - name: cssindexstore-pvc - persistentVolumeClaim: - claimName: "css-indexstore" - - name: csscustomstore-pvc - persistentVolumeClaim: - claimName: "css-customstore" diff --git 
a/CONTENT/k8s-yaml/ContentGraphQL/crs-deploy.yml b/CONTENT/k8s-yaml/ContentGraphQL/crs-deploy.yml deleted file mode 100755 index 3b45ccd3..00000000 --- a/CONTENT/k8s-yaml/ContentGraphQL/crs-deploy.yml +++ /dev/null @@ -1,135 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: ecm-crs-svc -spec: - ports: - - name: http - protocol: TCP - port: 9080 - targetPort: 9080 - - name: https - protocol: TCP - port: 9443 - targetPort: 9443 - selector: - app: crsserver-cluster1 - type: NodePort - sessionAffinity: ClientIP ---- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: ecm-crs-np - namespace: $KUBE_NAME_SPACE -spec: - podSelector: {} - policyTypes: - - Ingress - - Egress - ingress: - - {} - egress: - - ports: - - port: 53 - protocol: UDP - - port: 53 - protocol: TCP - - to: - - namespaceSelector: {} ---- -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: ecm-crs -spec: - replicas: 1 - strategy: - type: RollingUpdate - template: - metadata: - labels: - app: crsserver-cluster1 - spec: - imagePullSecrets: - - name: admin.registrykey - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - crsserver-cluster1 - topologyKey: "kubernetes.io/hostname" - containers: - - image: /default/crs:553 - imagePullPolicy: Always - securityContext: - # If deployment on OpenShift and image supports arbitrary uid, - # remove runAsUser and pods will run with arbitrarily assigned user ID. - runAsUser: 50001 - allowPrivilegeEscalation: false - name: ecm-crs - resources: - requests: - memory: 512Mi - cpu: 500m - limits: - memory: 1536Mi - cpu: 1 - ports: - - containerPort: 9080 - name: http - - containerPort: 9443 - name: https - env: - - name: LICENSE - value: "accept" - - name: PRODUCT - value: "DBAMC" - - name: TZ - value: "Etc/UTC" - - name: JVM_INITIAL_HEAP_PERCENTAGE - value: "40" - - name: JVM_MAX_HEAP_PERCENTAGE - value: "66" - - name: CPE_URI - value: "http://cpeurl:30540/wsi/FNCEWS40MTOM" - - name: CPESTATICPORT - value: "false" - - name: CONTAINERTYPE - value: "1" - - name: ENABLE_GRAPHIQL - value: "false" - - name: MY_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: MY_POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: MY_POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - volumeMounts: - - name: crscfgstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides" - subPath: configDropins/overrides - - name: crslogstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/logs" - subPath: logs - - volumes: - - name: crscfgstore-pvc - persistentVolumeClaim: - claimName: "crs-cfgstore" - - name: crslogstore-pvc - persistentVolumeClaim: - claimName: "crs-logstore" diff --git a/CONTENT/k8s-yaml/README.md b/CONTENT/k8s-yaml/README.md deleted file mode 100644 index 783304f4..00000000 --- a/CONTENT/k8s-yaml/README.md +++ /dev/null @@ -1,217 +0,0 @@ -# Deploying with YAML files - -## Requirements and Prerequisites - -Ensure that you have completed the following tasks: - -- [Preparing your Kubernetes server](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_k8s.html) -- [Downloading the PPA archive](../../README.md) -- [Preparing FileNet environment](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_ecmk8s.html) - -## Deploying 
component images - -Use the command line to deploy the image using the parameters in the appropriate YAML file. You also use the command line to determine access information for your deployed images. - -For deployments on Red Hat OpenShift, note the following considerations for whether you want to use the Arbitrary UID capability in your environment: - -- If you don't want to use Arbitrary UID capability in your Red Hat OpenShift environment, deploy the images as described in the following sections. - -- If you do want to use Arbitrary UID, prepare for deployment by updating your deployment file and editing your Security Context Constraint: - - - Remove the following line from your deployment YAML file: `runAsUser: 50001`. - - - In your SCC, set the desired user id range of minimum and maximum values for the project namespace: - - ```$ oc edit namespace ``` - - For the uid-range annotation, verify that a value similar to the following is specified: - - ```$ openshift.io/sa.scc.uid-range=1000490000/10000 ``` - - This range is similar to the default range for Red Hat OpenShift. - - - Remove authenticated users from anyuid (if set): - - ```$ oc adm policy remove-scc-from-group anyuid system:authenticated ``` - - - Update the runAsUser value. - Find the entry: - - ``` - $ oc get scc -o yaml - runAsUser: - type: RunAsAny - ``` - - Update the value: - - ``` - $ oc get scc -o yaml - runAsUser: - type: MustRunAsRange - ``` - - -To deploy Content Platform Engine: - 1. Use the deployment file to deploy Content Platform Engine: - - ```kubectl apply -f cpe-deploy.yml``` - 2. Run following command to get the Public IP and port to access Content Platform Engine: - - ```kubectl get svc | grep ecm-cpe``` - -To deploy Content Search Services: - 1. Use the deployment file to deploy Content Search Services: - - ```kubectl apply -f css-deploy.yml``` - 2. Run the following command to get the Public IP and port to access Content Search Services: - - ```kubectl get svc | grep ecm-css``` - -To deploy Content Management Interoperability Services: - 1. Use the deployment file to deploy Content Management Interoperability Services: - - ```kubectl apply -f cmis-deploy.yml``` - 2. Run the following command to get the Public IP and port to access Content Management Interoperability Services: - - ```kubectl get svc | ecm-cmis``` - -> **Reminder**: After you deploy, return to the instructions in the Knowledge Center, [Completing post deployment tasks for IBM FileNet Content Manager](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_deploy_postecmdeployk8s.html), to get your FileNet Content Manager environment up and running - -## Deploying the External Share container - -If you want to optionally include the external share capability in your environment, you also configure and deploy the External Share container. - -Ensure that you have completed the all of the preparation steps for deploying the External Share container: [Configuring external share for containers](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_ecmexternalsharek8s.html) - -For deployments on Red Hat OpenShift, if you want to use Arbitrary UID, use the steps in the previous section to prepare for the deployment, including updating your YAML file and editing your Security Context Constraint. - - -To deploy the External Share container: - - 1. Use the deployment file to deploy the External Share container: - - ```kubectl apply -f es-deploy.yml``` - 2. 
Run the following command to get the Public IP and port to access External Share: - - ```kubectl get svc | ecm-es``` - -## Deploying the Technology Preview: Content Services GraphQL API container -If you want to use the Content Services GraphQL API container, follow the instructions in the Getting Started technical notice: [Technology Preview: Getting started with Content Services GraphQL API](http://www.ibm.com/support/docview.wss?uid=ibm10883630) - - 1. Use the deployment file to deploy the Content Services GraphQL API container: - - ```kubectl apply -f crs-deploy.yml``` - 2. Run the following command to get the Public IP and port to access the Content Services GraphQL API: - - ```kubectl get svc | ecm-crs``` - -## Upgrading deployments - > **Tip**: You can discover the necessary resource values for the deployment from corresponding product deployments in IBM Cloud Private Console and Openshift Container Platform. - -### Before you begin -Before you run the upgrade commands, you must prepare the environment for upgrades by updating permissions on your persistent volumes. Depending on your starting version you might also need to create or update volumes and folders for Content Search Services and Content Management Interoperability Services. Complete the preparation steps in the following topic before you start the upgrade: [Upgrading Content Manager releases](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/com.ibm.dba.upgrading/topics/tsk_cm_upgrade.htm) - -If you already have a customized YAML file for your existing deployment, update the file with the new parameters for this release before you apply the YAML as part of the upgrade. See the sample YAML files for more information. - -For an upgrade to the External share container, complete the preparation steps in the following topic before you start the upgrade: [Upgrading External Share releases](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/com.ibm.dba.upgrading/topics/tsk_cm_upgrade.htm) - -You must also [download the PPA archive](../../README.md) before you begin the upgrade process. - -### Preparing for upgrade on Red Hat OpenShift - -For upgrades on Red Hat OpenShift, note the following considerations when you want to use the Arbitrary UID capability in your updated environment: - -- If you don't want to use Arbitrary UID capability in your Red Hat OpenShift environment, use the instructions in Running the upgrade deployments. - -- If you do want to use Arbitrary UID, use the following steps to prepare for the upgrade: - -1. Check and if necessary edit your Security Context Constraint to set desired user id range of minimum and maximum values for the project namespace: - - Set the desired user id range of minimum and maximum values for the project namespace: - - ```$ oc edit namespace ``` - - For the uid-range annotation, verify that a value similar to the following is specified: - - ```$ openshift.io/sa.scc.uid-range=1000490000/10000 ``` - - This range is similar to the default range for Red Hat OpenShift. - - - Remove authenticated users from anyuid (if set): - - ```$ oc adm policy remove-scc-from-group anyuid system:authenticated ``` - - - Update the runAsUser value. - Find the entry: - - ``` - $ oc get scc -o yaml - runAsUser: - type: RunAsAny - ``` - - Update the value: - - ``` - $ oc get scc -o yaml - runAsUser: - type: MustRunAsRange - ``` - -2. Remove the following line from your deployment YAML file: `runAsUser: 50001`. - -3. 
Update other values in your deployment YAML file to reflect the values for your existing environment and any updates in the new samples. - -4. Stop all existing containers. - -5. Run the deployment commands for the containers, in the following section. - -### Running the upgrade deployments - -To deploy Content Platform Engine: - 1. Use the deployment file to deploy Content Platform Engine: - - ```kubectl apply -f cpe-deploy.yml``` - 2. Run following command to get the Public IP and port to access Content Platform Engine: - - ```kubectl get svc | grep ecm-cpe``` - -To deploy Content Search Services: - 1. Use the deployment file to deploy Content Search Services: - - ```kubectl apply -f css-deploy.yml``` - 2. Run the following command to get the Public IP and port to access Content Search Services: - - ```kubectl get svc | grep ecm-css``` - -To deploy Content Management Interoperability Services: - 1. Use the deployment file to deploy Content Management Interoperability Services: - - ```kubectl apply -f cmis-deploy.yml``` - 2. Run the following command to get the Public IP and port to access Content Management Interoperability Services: - - ```kubectl get svc | ecm-cmis``` - -To deploy the External Share container: - 1. Use the deployment file to deploy the External Share container: - - ```kubectl apply -f es-deploy.yml``` - 2. Run the following command to get the Public IP and port to access External Share: - - ```kubectl get svc | ecm-es``` - - -## Uninstalling a Kubernetes release of FileNet Content Manager - -To uninstall and delete the Content Platform Engine release, use the following command: - -```console -$ kubectl delete -f -``` - -The command removes all the Kubernetes components associated with the release, except any Persistent Volume Claims (PVCs). This is the default behavior of Kubernetes, and ensures that valuable data is not deleted. To delete the persisted data of the release, you can delete the PVC using the following command: - -```console -$ kubectl delete pvc my-cpe-prod-release-cpe-pvclaim -``` -Repeat the process for any other deployments that you want to delete. 
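For example, to remove a Content Platform Engine deployment that was created from the sample file in this directory (assuming you applied `cpe-deploy.yml` unchanged):

```console
$ kubectl delete -f cpe-deploy.yml
```

The same pattern applies to `css-deploy.yml`, `cmis-deploy.yml`, `es-deploy.yml`, and `crs-deploy.yml`.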
diff --git a/CONTENT/k8s-yaml/extShare/es-deploy.yml b/CONTENT/k8s-yaml/extShare/es-deploy.yml deleted file mode 100755 index a7b7e980..00000000 --- a/CONTENT/k8s-yaml/extShare/es-deploy.yml +++ /dev/null @@ -1,151 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: ecm-es-svc -spec: - ports: - - name: http - protocol: TCP - port: 9080 - targetPort: 9080 - - name: https - protocol: TCP - port: 9443 - targetPort: 9443 - selector: - app: esserver-cluster1 - type: NodePort - sessionAffinity: ClientIP ---- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: ecm-es-np - namespace: $KUBE_NAME_SPACE -spec: - podSelector: {} - policyTypes: - - Ingress - - Egress - ingress: - - {} - egress: - - ports: - - port: 53 - protocol: UDP - - port: 53 - protocol: TCP - - to: - - namespaceSelector: {} ---- -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: ecm-es -spec: - replicas: 1 - strategy: - type: RollingUpdate - template: - metadata: - labels: - app: esserver-cluster1 - spec: - imagePullSecrets: - - name: admin.registrykey - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - esserver-cluster1 - topologyKey: "kubernetes.io/hostname" - containers: - - image: /default/extshare:latest - imagePullPolicy: Always - name: ecm-es - securityContext: - # If deployment on OpenShift and image supports arbitrary uid, - # remove runAsUser and pods will run with arbitrarily assigned user ID. - runAsUser: 50001 - allowPrivilegeEscalation: false - resources: - requests: - memory: 512Mi - cpu: 500m - limits: - memory: 1536Mi - cpu: 1 - ports: - - containerPort: 9080 - name: http - - containerPort: 9443 - name: https - env: - - name: LICENSE - value: "accept" - - name: JVM_INITIAL_HEAP_PERCENTAGE - value: "40" - - name: JVM_MAX_HEAP_PERCENTAGE - value: "66" - - name: TZ - value: "Etc/UTC" - - name: ICNDBTYPE - value: "db2" - - name: ICNJNDIDS - value: "ECMClientDS" - - name: ICNSCHEMA - value: "ICNDB" - - name: ICNTS - value: "ICNDB" - - name: MY_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: MY_POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: MY_POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - readinessProbe: - httpGet: - path: /contentapi/rest/share/v1/info - port: 9080 - httpHeaders: - - name: Content-Encoding - value: gzip - initialDelaySeconds: 180 - periodSeconds: 5 - livenessProbe: - httpGet: - path: /contentapi/rest/share/v1/info - port: 9080 - httpHeaders: - - name: Content-Encoding - value: gzip - initialDelaySeconds: 600 - periodSeconds: 5 - volumeMounts: - - name: escfgstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides" - subPath: es/configDropins/overrides - - name: eslogstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/logs" - subPath: es/logs - - volumes: - - name: escfgstore-pvc - persistentVolumeClaim: - claimName: "es-icp-cfgstore" - - name: eslogstore-pvc - persistentVolumeClaim: - claimName: "es-icp-logstore" diff --git a/CONTENT/platform/README_Eval_ROKS.md b/CONTENT/platform/README_Eval_ROKS.md deleted file mode 100644 index 21a2a977..00000000 --- a/CONTENT/platform/README_Eval_ROKS.md +++ /dev/null @@ -1,110 +0,0 @@ -# Deploying on Red Hat OpenShift on IBM Cloud - -Before you deploy, you must configure your IBM Public Cloud environment, create an OpenShift cluster, prepare your FileNet 
environment, and load the product images to the registry. Use the following information to configure your environment and deploy the images. - -## Before you begin: Create a cluster - -Before you run any install command, make sure that you have created the IBM Cloud cluster, prepared your own environment, and obtained and loaded the product images to the registry. - -For more information, see [Installing containers on Red Hat OpenShift by using CLIs](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_ROKS.html). - - -## Step 1: Prepare your FileNet Content Manager environment - -To prepare your FileNet Content Manager environment, you set up databases, LDAP services, storage, and configuration files that are required for use and operation after deployment. - -Use the following instructions to prepare your FileNet environment: [Preparing to install IBM FileNet Content Manager](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_ecmk8s.html) - -**Important:** The instructions provided for preparing storage are specific to non-managed OpenShift deployments. For OpenShift deployments, the cluster you create for OpenShift includes attached storage. As a result, you don't create persistent volumes for the storage- only the listed persistent volume claims. Obtain the storage class name for this OpenShift cluster storage, and assign that value as the `storageClassName` value when you create the required persistent volumes claims for your FileNet environment as described in [Creating volumes and folders for deployment on Kubernetes](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_ecmk8s_volumes.html). - -The following example uses the storage class name `ibmc-file-retain-bronze`: - ```yaml - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: example-pvc - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 8Gi - storageClassName: ibmc-file-retain-bronze - ``` - -## Step 2: Deploy the FileNet Content Manager images -When the container images are in the registry, you can complete environment configuration for each component and then run the chart installation. - -1. Create a NGINX pod to mount the persistent volumes. The following sample creates a pod named `example-pod-ecm-eval`: [NGINX Pod Sample](nginx_sample.yaml) - -2. Copy the necessary database and LDAP configuration XML files that you prepared for your FileNet environment to the mounted volumes, for example, by accessing the NGINX pod that you created: - ```console - $ kubectl cp datasource.xml nginx-pod:/path/to/corresponding/directory - ``` -**Remember:** Make sure the permissions for all the folders are set as follows: - -For each of the folders, set the ownership to 50001:0, for example: -chown –Rf 50001:0 /cpecfgstore - -For each of the folders, set the permission to 775, for example: -chmod –Rf 775 /cpecfgstore - -3. Use the instructions in the [Helm chart readme](../helm-charts) to confirm your environment configuration and install the Helm charts. - -## Step 3: Enable Ingress to access your applications -1. Create an SSL certificate: - ```console - $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $(pwd)/tls.key -out $(pwd)/tls.crt -subj "/CN=dbamc.content - ``` -2. Create a secret using the certificate: - ```console - $ kubectl create secret tls icp4a --key $(pwd)/tls.key --cert $(pwd)/tls.crt - ``` -3. 
Create an Ingress service for all of the Content components by using the example `ingress_service.yaml` file in the OpenShift console or CLI: [ingress_service.yaml](ingress_service.yaml) -4. Apply the Ingress service: - ``` console - $ kubectl apply -f ingress_service.yaml - ``` -5. Create single Ingress endpoint using the [ingress_one.yaml](ingress_one.yaml) -6. Apply the Ingress: - ``` console - $ kubectl apply -f ingess_one.yaml - ``` -7. To use the Ingress for the repository connection URL in Navigator, CMIS, External Share, and GraphQL run the following commands: - ```console - $ openssl pkcs12 -export -in $(pwd)/tls.crt -inkey $(pwd)/tls.key -out $(pwd)/newkey.p12 - ``` - ```console - $ keytool -importkeystore -srckeystore $(pwd)/newkey.p12 \ - -srcstoretype PKCS12 \ - -destkeystore $(pwd)/newkey.jks \ - -deststoretype JKS - ``` -8. Copy the `newkey.jks` file to the `overrides` directory: - ``` console - $ cp $(pwd)/newkey.jks /some/directory/icn/configDropins/overrides - ``` -9. Create a new XML file, for example, `key.xml`, and save it to the `configDropins/Overrides` folder: - ``` xml - - - - ``` -10. Edit the deployments for all of the components to resolve the hostname in the pods: - ``` console - $ kubectl edit deployments dbamc-cpe-ibm-dba-contentservices - ``` - Add the following lines in the section `spec.template.spec`: - ``` yaml - hostAliases: - - ip: "" - hostnames: - - "dbamc.content" - ``` -11. Get the Ingress IP by running the following command: - ``` console - $ kubectl get ingress - ``` -12. After you save your changes, new pods are created that include the changes. When the pods are up and running, update any existing repository connection. The new repository connection URL is something like: `https://icp4a-content/wsi/FNCEWS40MTOM/` -13. On any system where you want to access the applications, update the localhost file `/etc/hosts` with the Ingress IP and the hostname. diff --git a/CONTENT/platform/ingress_one.yaml b/CONTENT/platform/ingress_one.yaml deleted file mode 100644 index ef23ef99..00000000 --- a/CONTENT/platform/ingress_one.yaml +++ /dev/null @@ -1,47 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: dbamc-ingress - annotations: - # The NGINX ingress annotations contains a new prefix nginx.ingress.kubernetes.io. - # To avoid breaking a running NGINX ingress controller, specify both new and old prefixes. 
- kubernetes.io/ingress.class: nginx - ingress.kubernetes.io/force-ssl-redirect: "true" - ingress.bluemix.net/sticky-cookie-services: "serviceName=ibacc-cpe-ingress-svc name=cpecookie expires=7300s path=/acce hash=sha1;serviceName=ibacc-ext-ingress-svc name=extcookie expires=7300s path=/contentapi hash=sha1;serviceName=ibacc-crs-ingress-svc name=crscookie expires=7300s path=/content-services-graphql hash=sha1" -spec: - rules: - - host: icp4a.content - http: - paths: - - backend: - serviceName: ibacc-cpe-ingress-svc - servicePort: 9080 - path: /acce - - backend: - serviceName: ibacc-cpe-ingress-svc - servicePort: 9080 - path: /P8CE - - backend: - serviceName: ibacc-cpe-ingress-svc - servicePort: 9080 - path: /FileNet - - backend: - serviceName: ibacc-cpe-ingress-svc - servicePort: 9080 - path: /wsi - - backend: - serviceName: ibacc-ext-ingress-svc - servicePort: 9080 - path: /contentapi - - backend: - serviceName: ibacc-crs-ingress-svc - servicePort: 9080 - path: /content-services-graphql - - backend: - serviceName: ibacc-crs-ingress-svc - servicePort: 9080 - path: /content-services - tls: - - hosts: - - icp4a.content - secretName: icp4a diff --git a/CONTENT/platform/ingress_service.yaml b/CONTENT/platform/ingress_service.yaml deleted file mode 100644 index 06820565..00000000 --- a/CONTENT/platform/ingress_service.yaml +++ /dev/null @@ -1,56 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: ibacc-cpe-ingress-svc -spec: - ports: - - name: http - protocol: TCP - port: 9080 - targetPort: 9080 - - name: https - protocol: TCP - port: 9443 - targetPort: 9443 - selector: - app: ibm-dba-contentservices - type: ClusterIP ---- - -apiVersion: v1 -kind: Service -metadata: - name: ibacc-ext-ingress-svc -spec: - ports: - - name: http - protocol: TCP - port: 9080 - targetPort: 9080 - - name: https - protocol: TCP - port: 9443 - targetPort: 9443 - selector: - app: ibm-dba-extshare-prod - type: ClusterIP - ---- - -apiVersion: v1 -kind: Service -metadata: - name: ibacc-crs-ingress-svc -spec: - ports: - - name: http - protocol: TCP - port: 9080 - targetPort: 9080 - - name: https - protocol: TCP - port: 9443 - targetPort: 9443 - selector: - app: ibm-dba-contentrestservice-dev - type: ClusterIP diff --git a/CONTENT/platform/nginx_sample.yaml b/CONTENT/platform/nginx_sample.yaml deleted file mode 100644 index e2148ef9..00000000 --- a/CONTENT/platform/nginx_sample.yaml +++ /dev/null @@ -1,110 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: example-pod-ecm-eval - labels: - app: hello-openshift - namespace: ecm-eval -spec: - volumes: - - name: ecm-eval-cfg-pvc-0 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-0 - - name: ecm-eval-cfg-pvc-1 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-1 - - name: ecm-eval-cfg-pvc-2 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-2 - - name: ecm-eval-cfg-pvc-3 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-3 - - name: ecm-eval-cfg-pvc-4 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-4 - - name: ecm-eval-cfg-pvc-5 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-5 - - name: ecm-eval-cfg-pvc-6 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-6 - - name: ecm-eval-cfg-pvc-7 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-7 - - name: ecm-eval-cfg-pvc-8 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-8 - - name: ecm-eval-cfg-pvc-9 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-9 - - name: ecm-eval-cfg-pvc-10 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-10 - - name: ecm-eval-cfg-pvc-11 - 
persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-11 - - name: ecm-eval-cfg-pvc-12 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-12 - - name: ecm-eval-cfg-pvc-13 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-13 - - name: ecm-eval-cfg-pvc-14 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-14 - - name: ecm-eval-cfg-pvc-15 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-15 - - name: ecm-eval-cfg-pvc-16 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-16 - - name: ecm-eval-cfg-pvc-17 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-17 - - name: ecm-eval-cfg-pvc-18 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-18 - containers: - - name: hello-openshift - image: nginx:latest - ports: - - containerPort: 8080 - volumeMounts: - - name: ecm-eval-cfg-pvc-0 - mountPath: /cpe/configDropins/overrides - - name: ecm-eval-cfg-pvc-1 - mountPath: /cpe/asa - - name: ecm-eval-cfg-pvc-2 - mountPath: /cpe/textext - - name: ecm-eval-cfg-pvc-3 - mountPath: /cpe/logs - - name: ecm-eval-cfg-pvc-4 - mountPath: /cpe/FileNet - - name: ecm-eval-cfg-pvc-5 - mountPath: /cpe/icmrules - - name: ecm-eval-cfg-pvc-6 - mountPath: /cpe/bootstrap - - name: ecm-eval-cfg-pvc-7 - mountPath: /icn/configDropin/overrides - - name: ecm-eval-cfg-pvc-8 - mountPath: /icn/logs - - name: ecm-eval-cfg-pvc-9 - mountPath: /icn/plugins - - name: ecm-eval-cfg-pvc-10 - mountPath: /icn/viewerlog - - name: ecm-eval-cfg-pvc-11 - mountPath: /icn/viewercache - - name: ecm-eval-cfg-pvc-12 - mountPath: /icn/aspera - - name: ecm-eval-cfg-pvc-13 - mountPath: /css/CSS_Server_data - - name: ecm-eval-cfg-pvc-14 - mountPath: /css/CSS_Server_log - - name: ecm-eval-cfg-pvc-15 - mountPath: /css/CSS_Server_temp - - name: ecm-eval-cfg-pvc-16 - mountPath: /css/CSSIndex_OS1 - - name: ecm-eval-cfg-pvc-17 - mountPath: /cmis/configDropins/overrides - - name: ecm-eval-cfg-pvc-18 - mountPath: /cmis/logs/ diff --git a/FNCM/README_config.md b/FNCM/README_config.md new file mode 100644 index 00000000..7291935f --- /dev/null +++ b/FNCM/README_config.md @@ -0,0 +1,202 @@ +# Configuring IBM FileNet Content Manager 5.5.4 + +IBM FileNet Content Manager provides numerous containerized components for use in your container environment. The configuration settings for the components are recorded and stored in the shared YAML file for operator deployment. After you prepare your environment, you add the values for your configuration settings to the YAML so that the operator can deploy your containers to match your environment. + +## Requirements and prerequisites + +Confirm that you have completed the following tasks to prepare to deploy your FileNet Content Manager images: + +- Prepare your FileNet Content Manager environment. These procedures include setting up databases, LDAP, storage, and configuration files that are required for use and operation. You must complete all of the [preparation steps for FileNet Content Manager](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_ecmk8s.html) before you are ready to deploy the container images. Collect the values for these environment components; you use them to configure your FileNet Content Manager container deployment. + +- Prepare your container environment. 
See [Preparing to install automation containers on Kubernetes](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/welcome/com.ibm.dba.install/op_topics/tsk_prepare_env_k8s.html) + +- If you want to deploy additional optional containers, prepare the requirements that are specific to those containers. For details see the following information: + - [Preparing for External Share](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_cm_externalshareop.html) + - [Preparing volumes and folders for the Content Services GraphQL API](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_gqlvolumesop.html) + +If you plan to use external key management in your environment, review the following preparation information before you deploy: [Preparing for external key management](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_ecm_externalkeyk8s.html) + +> **Note**: If you plan to use UMS integration with any of the FileNet Content Manager components, note that you might encounter registration failure errors during deployment. This can happen if the UMS deployment is not ready by the time the other containers come up. The situation resolves in the next operator loop, so the errors can be ignored. + + +## Prepare your security environment + +Before you deploy, you must create a secret for the security details of the LDAP directory and datasources that you configured in preparation for use with FileNet Content Manager. Collect the users, password, and namespace to add to the secret. Using your values, run the following command: + + ``` +kubectl create secret generic ibm-fncm-secret \ +--from-literal=gcdDBUsername="db2inst1" --from-literal=gcdDBPassword="xxxxxxxx" \ +--from-literal=osDBUsername="db2inst1" --from-literal=osDBPassword="xxxxxxxx" \ +--from-literal=ldapUsername="cn=root" --from-literal=ldapPassword="xxxxxxxxxx" \ +--from-literal=externalLdapUsername="cn=User1,ou=test,dc=external,dc=com" --from-literal=externalLdapPassword="xxxxxxx" \ +--from-literal=appLoginUsername="filenet_admin" --from-literal=appLoginPassword="xxxxxxxx" \ +--from-literal=keystorePassword="xxxxx" \ +--from-literal=ltpaPassword="xxxxxx" + ``` +The secret you create is the value for the parameter `fncm_secret_name`. + + +### Root CA and trusted certificate list + + The custom YAML file also requires values for the `root_ca_secret` and `trusted_certificate_list` parameters. The TLS secret contains the root CA's key value pair. You have the following choices for the root CA: + - You can generate a self-signed root CA + - You can allow the operator (or ROOTCA ansible role) to generate the secret with a self-signed root CA (by not specifying one) + - You can use a signed root CA. In this case, you create a secret that contains the root CA's key value pair in advance. + + The list of the trusted certificate secrets can be a TLS secret or an opaque secret. An opaque secret must contain a tls.crt file for the trusted certificate. The TLS secret has a tls.key file as the private key. + +### Apply the Security Context Contstraints + +Apply the required Security Context Constraints (SCC) by applying the [SCC YAML](../descriptors/scc-fncm.yaml) file. + + ```bash + $ oc apply -f descriptors/scc-fncm.yaml + ``` + + > **Note**: `fsGroup` and `supplementalGroups` are `RunAsAny` and `runAsUser` is `MustRunAsRange`. 
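+If you have not already created the certificate secrets that are described in the Root CA and trusted certificate list section above, the following commands sketch one way to do it. This is an illustration only: the secret names, file names, and certificate subject are examples, and you can substitute any certificates that meet your organization's requirements.
+
+```bash
+# Generate a self-signed root CA key pair (example subject and validity period)
+openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
+  -subj "/CN=icp4a-root-ca" -keyout rootca.key -out rootca.crt
+
+# Store the root CA key pair as a TLS secret (a candidate value for root_ca_secret)
+kubectl create secret tls icp4a-root-ca --cert=rootca.crt --key=rootca.key
+
+# Store an additional certificate to trust as an opaque secret with a tls.crt entry
+# (a candidate entry for trusted_certificate_list)
+kubectl create secret generic trusted-cert-secret --from-file=tls.crt=./external-service.crt
+```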
+ + +## Customize the YAML file for your deployment + +All of the configuration values for the components that you want to deploy are included in the [ibm_cp4a_cr_template.yaml](../descriptors/ibm_cp4a_cr_template.yaml) file. Create a copy of this file on the system that you prepared for your container environment, for example `my_ibm_cp4a_cr_template.yaml`. + +The custom YAML file includes the following sections that apply for all of the components: +- shared_configuration - Specify your deployment and your overall security information. +- ldap_configuration - Specify the directory service provider information for all components in this common section. +- datasource configuration - Specify the database information for all components in this common section. +- monitoring_configuration - Optional for deployments where you want to enable monitoring. +- logging_configuration - Optional for deployments where you want to enable logging. + +After the shared section, the YAML includes a section of parameters for each of the available components. If you plan to include a component in your deployment, you un-comment the parameters for that component and update the values. For some parameters, the default values are sufficient. For other parameters, you must supply values that correspond to your specific environment or deployment needs. + +The optional initialize_configuration and verify_configuration section includes values for a set of automatic set up steps for your FileNet P8 domain and IBM Business Automation Navigator deployment. + +If you want to exclude any components from your deployment, leave the section for that component and all related parameters commented out in the YAML file. + +All components require that you deploy the Content Platform Engine container. For that reason, you must complete the values for that section in all deployment use cases. + +A description of the configuration parameters is available in [Configuration reference for operators](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_paramsop.html) + +Use the information in the following sections to record the configuration settings for the components that you want to deploy. + +- [Shared configuration settings](README_config.md#shared-configuration-settings) +- [Content Platform Engine settings](README_config.md#content-platform-engine-settings) +- [Content Search Services settings](README_config.md#content-search-services-settings) +- [Content Management Interoperability Services settings](README_config.md#content-management-interoperability-services-settings) +- [Content Services GraphQL settings](README_config.md#content-services-graphql-settings) +- [External Share settings](README_config.md#external-share-settings) +- [Task Manager settings](README_config.md#task-manager-settings) +- [Initialization settings](README_config.md#initialization-settings) +- [Verification settings](README_config.md#verification-settings) + +### Shared configuration settings + +Un-comment and update the values for the shared configuration, LDAP, datasource, monitoring, and logging parameters, as applicable. + +Use the secrets that you created in Preparing your security environment for the `root_ca_secret` and `trusted_certificate_list` values. 
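+As a minimal sketch only, the two values might look as follows in the shared section of your custom YAML file. The secret names are examples, and the exact nesting follows the template file that you copied:
+
+```yaml
+shared_configuration:
+  # TLS secret that holds the root CA key pair
+  root_ca_secret: icp4a-root-ca
+  # Secrets whose certificates the containers should trust
+  trusted_certificate_list:
+    - trusted-cert-secret
+```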
+
+> **Reminder**: If you plan to use External Share with the 2 LDAP model for configuring external users, update the LDAP values in the `ext_ldap_configuration` section of the YAML file with the information about the directory server that you configured for external users. If you are not using external share, leave this section commented out.
+
+For more information about the shared parameters, see the following topics:
+
+- [Shared parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opsharedparams.html)
+- [LDAP parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_k8s_ldap.html)
+- [Datasource parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_dbparams.html)
+- [Monitoring parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opmonparams.html)
+
+### Content Platform Engine settings
+
+Use the `cpe` section of the custom YAML to provide values for the configuration of Content Platform Engine. You provide details for configuration settings that you have already created, like the names of your persistent volume claims. You also provide names for pieces of your Content Platform Engine environment, and tuning decisions for your runtime environment.
+
+> **Note**: If you plan to use UMS with Content Platform Engine, do not use the Initialization container. You must manually configure your Content Platform Engine domain and object stores after deployment.
+
+For more information about the settings, see [Content Platform Engine parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opcpeparams.html)
+
+### Content Search Services settings
+
+Use the `css` section of the custom YAML to provide values for the configuration of Content Search Services. You provide details for configuration settings that you have already created, like the names of your persistent volume claims. You also provide names for pieces of your Content Search Services environment, and tuning decisions for your runtime environment.
+
+For more information about the settings, see [Content Search Services parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opcssparams.html)
+
+### Content Management Interoperability Services settings
+
+Use the `cmis` section of the custom YAML to provide values for the configuration of Content Management Interoperability Services. You provide details for configuration settings that you have already created, like the names of your persistent volume claims. You also provide names for pieces of your Content Management Interoperability Services environment, and tuning decisions for your runtime environment.
+
+For more information about the settings, see [Content Management Interoperability Services parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opcmisparams.html)
+
+### Content Services GraphQL settings
+
+Use the `graphql` section of the custom YAML to provide values for the configuration of the Content Services GraphQL API. You provide details for configuration settings that you have already created, like the names of your persistent volume claims. You also provide names for pieces of your Content Services GraphQL environment, and tuning decisions for your runtime environment.
+
+The section includes a parameter for enabling the GraphiQL development interface. 
Note the following consideration for including GraphiQL in your environment: + +- If you are deploying the GraphQL container as part of a test or development environment and you want to use GraphiQL with the API, set the enable_graph_iql parameter to true. +- If you are deploying the GraphQL container as part of a production environment, it is recommended to set the enable_graph_iql parameter to false. + +For more information about the settings, see [Content Services GraphQL parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opgqlparams.html) + +### External share settings + +Use the `es` section of the custom YAML to provide values for the configuration of External Share. You provide details for configuration settings that you have already created, like the names of your persistent volume claims. You also provide names for pieces of your External Share environment, and tuning decisions for your runtime environment. + +> **Reminder**: If you are using the 2 LDAP approach for managing your external users for external share, you must configure the ext_ldap_configuration section in the shared parameters with information about your external user LDAP directory service. + +> **Note**: If you are deploying the External Share container as an update instead of as part of the initial container deployment, note that both the Content Platform Engine and the Business Automation Navigator containers will undergo a rolling update to accommodate the External Share configuration. + +For more information about the settings, see [External Share parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opesparams.html) + +### Task Manager settings + +Use the `tm` section of the custom YAML to provide values for the configuration of Task Manager. You provide details for configuration settings that you have already created, like the names of your persistent volume claims. You also provide names for pieces of your Task Manager environment, and tuning decisions for your runtime environment. + +If you want to deploy Task Manager, you must also deploy IBM Business Automation Navigator. The Task Manager uses the same database as IBM Business Automation Navigator. Database settings must match between these two components. + +For Task Manager, pay particular attention to any relevant values in the `jvm_customize_options` parameter. + +For more information about the settings, see [Task Manager parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_optmparams.html) + +### Initialization settings + +Use the `initialize_configuration` section of the custom YAML to provide values for the automatic initialization and setup of Content Platform Engine and IBM Business Automation Navigator. The initialization container creates initial instances of your FileNet Content Manager components, such as the p8 domain, one or more object stores, and configuration of IBM Business Automation Navigator. You also provide names for pieces of your FileNet Content Manager environment, and make decisions for your runtime environment. + +> **Important**: Do not enable initialization for your operator deployment if you plan to integrate UMS with Content Platform Engine. In this use case, you must manually create your Content Platform Engine domain and object stores after deployment. If you are integrating UMS and Content Platform Engine, leave the `initialize_configuration` section commented out. 
+ +You can edit the YAML to configure more than one of the available pieces in your automatically initialized environment. For example, if you want to create an additional Content Search Services server, you copy the stanza for the server settings, paste it below the original, and add the new values for your additional object store: + + ``` +ic_css_creation: + # - css_site_name: "Initial Site" + # css_text_search_server_name: "{{ meta.name }}-css-1" + # affinity_group_name: "aff_group" + # css_text_search_server_status: 0 + # css_text_search_server_mode: 0 + # css_text_search_server_ssl_enable: "true" + # css_text_search_server_credential: "RNUNEWc=" + # css_text_search_server_host: "{{ meta.name }}-css-svc-1" + # css_text_search_server_port: 8199 + + ``` + +You can create additional object stores, Content Search Services indexes, IBM Business Automation Navigator repositories, and IBM Business Automation Navigator desktops. + +For more information about the settings, see [Initialization parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opinitiparams.html) + +### Verification settings + +Use the `verify_configuration` section of the custom YAML to provide values for the automatic verification of your Content Platform Engine and IBM Business Automation Navigator. The verify container works in conjunction with the automatic setup of the initialize container. You can accept most of the default settings for the verification. However, compare the settings with the values that you supply for the initialization settings. Specific settings like object store names and the Content Platform Engine connection point must match between these two configuration sections. + +For more information about the settings, see [Verify parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_opverifyparams.html) + +## Complete the installation + +After you have set all of the parameters for the relevant components, return to to the install or update page for your platform to configure other components and complete the deployment with the operator. + +Install pages: + - [Installing on Managed Red Hat OpenShift on IBM Cloud Public](../platform/roks/install.md) + - [Installing on Red Hat OpenShift](../platform/ocp/install.md) + - [Installing on Certified Kubernetes](../platform/k8s/install.md) + +Update pages: + - [Updating on Managed Red Hat OpenShift on IBM Cloud Public](../platform/roks/update.md) + - [Updating on Red Hat OpenShift](../platform/ocp/update.md) + - [Updating on Certified Kubernetes](../platform/k8s/update.md) diff --git a/FNCM/README_migrate.md b/FNCM/README_migrate.md new file mode 100644 index 00000000..ccadd7e7 --- /dev/null +++ b/FNCM/README_migrate.md @@ -0,0 +1,22 @@ +# Migrating IBM FileNet Content Manager 5.5.x persisted data to V5.5.4 + +Because of the change in the container deployment method, there is no upgrade path for previous versions of FileNet Content Manager to V5.5.4. + +To move a V5.5.x installation to V5.5.4, you prepare your environment and deploy the operator the same way you would for a new installation. The difference is that you use the configuration values for your previously configured environment, including datasource, LDAP, storage volumes, etc. when you customize your deployment YAML file. + +Optionally, to protect your production deployment, you can create a replica of your data and use that datasource information for the operator deployment to test your migration. 
In this option, you follow the instructions for a new deployment. + + +## Step 1: Collect parameter values from your existing deployment + +You can use the reference topics in the [Cloud Pak for Automation Knowldege Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_paramsop.html) to see the parameters that apply for your components and shared configuration. + +You will use the values for your existing deployment to update the custom YAML file for the new operator deployment. For more information, see [Configure IBM FileNet Content Manager](README_config.md). + +> **Note**: When you are ready to deploy the V5.5.4 version of your FileNet Content Manager containers, stop your previous containers. + +## Step 2: Return to the platform readme to migrate other components + +- [Managed OpenShift migrate page](../platform/roks/migrate.md) +- [OpenShift migrate page](../platform/ocp/migrate.md) +- [Kubernetes migrate page](../platform/k8s/migrate.md) diff --git a/CONTENT/configuration/CPE/configDropins/overrides/OBJSTORE.xml b/FNCM/configuration/CPE/configDropins/overrides/OBJSTORE.xml similarity index 100% rename from CONTENT/configuration/CPE/configDropins/overrides/OBJSTORE.xml rename to FNCM/configuration/CPE/configDropins/overrides/OBJSTORE.xml diff --git a/CONTENT/configuration/CPE/configDropins/overrides/OBJSTORE_HADR.xml b/FNCM/configuration/CPE/configDropins/overrides/OBJSTORE_HADR.xml similarity index 100% rename from CONTENT/configuration/CPE/configDropins/overrides/OBJSTORE_HADR.xml rename to FNCM/configuration/CPE/configDropins/overrides/OBJSTORE_HADR.xml diff --git a/CONTENT/configuration/CPE/configDropins/overrides/OBJSTORE_Oracle.xml b/FNCM/configuration/CPE/configDropins/overrides/OBJSTORE_Oracle.xml similarity index 100% rename from CONTENT/configuration/CPE/configDropins/overrides/OBJSTORE_Oracle.xml rename to FNCM/configuration/CPE/configDropins/overrides/OBJSTORE_Oracle.xml diff --git a/CONTENT/configuration/ContentGraphQL/configDropins/overrides/UMS.xml b/FNCM/configuration/ContentGraphQL/configDropins/overrides/UMS.xml similarity index 100% rename from CONTENT/configuration/ContentGraphQL/configDropins/overrides/UMS.xml rename to FNCM/configuration/ContentGraphQL/configDropins/overrides/UMS.xml diff --git a/CONTENT/configuration/ContentGraphQL/configDropins/overrides/CORS.xml b/FNCM/configuration/ContentGraphQL/configDropins/overrides/cors.xml similarity index 62% rename from CONTENT/configuration/ContentGraphQL/configDropins/overrides/CORS.xml rename to FNCM/configuration/ContentGraphQL/configDropins/overrides/cors.xml index 7240bd98..596bd753 100644 --- a/CONTENT/configuration/ContentGraphQL/configDropins/overrides/CORS.xml +++ b/FNCM/configuration/ContentGraphQL/configDropins/overrides/cors.xml @@ -2,9 +2,9 @@ diff --git a/CONTENT/configuration/ContentGraphQL/configDropins/overrides/crs-ssl.xml b/FNCM/configuration/ContentGraphQL/configDropins/overrides/crs-ssl.xml similarity index 100% rename from CONTENT/configuration/ContentGraphQL/configDropins/overrides/crs-ssl.xml rename to FNCM/configuration/ContentGraphQL/configDropins/overrides/crs-ssl.xml diff --git a/NAVIGATOR/configuration/ICN/configDropins/overrides/ICNDS.xml b/FNCM/configuration/TaskMgr/configDropins/overrides/ICNDS.xml similarity index 100% rename from NAVIGATOR/configuration/ICN/configDropins/overrides/ICNDS.xml rename to FNCM/configuration/TaskMgr/configDropins/overrides/ICNDS.xml diff --git a/FNCM/configuration/TaskMgr/configDropins/overrides/ICNDS_HADR.xml 
b/FNCM/configuration/TaskMgr/configDropins/overrides/ICNDS_HADR.xml new file mode 100644 index 00000000..a8dd0e82 --- /dev/null +++ b/FNCM/configuration/TaskMgr/configDropins/overrides/ICNDS_HADR.xml @@ -0,0 +1,17 @@ + + + + + + + + diff --git a/NAVIGATOR/configuration/ICN/configDropins/overrides/ICNDS_Oracle.xml b/FNCM/configuration/TaskMgr/configDropins/overrides/ICNDS_Oracle.xml similarity index 100% rename from NAVIGATOR/configuration/ICN/configDropins/overrides/ICNDS_Oracle.xml rename to FNCM/configuration/TaskMgr/configDropins/overrides/ICNDS_Oracle.xml diff --git a/CONTENT/configuration/extShare/configDropins/overrides/CORS.xml b/FNCM/configuration/extShare/configDropins/overrides/CORS.xml similarity index 100% rename from CONTENT/configuration/extShare/configDropins/overrides/CORS.xml rename to FNCM/configuration/extShare/configDropins/overrides/CORS.xml diff --git a/FNCM/configuration/extShare/configDropins/overrides/ICNDS.xml b/FNCM/configuration/extShare/configDropins/overrides/ICNDS.xml new file mode 100644 index 00000000..643fa38d --- /dev/null +++ b/FNCM/configuration/extShare/configDropins/overrides/ICNDS.xml @@ -0,0 +1,15 @@ + + + + + + + + diff --git a/CONTENT/configuration/extShare/configDropins/overrides/ICNDS_HADR.xml b/FNCM/configuration/extShare/configDropins/overrides/ICNDS_HADR.xml similarity index 100% rename from CONTENT/configuration/extShare/configDropins/overrides/ICNDS_HADR.xml rename to FNCM/configuration/extShare/configDropins/overrides/ICNDS_HADR.xml diff --git a/FNCM/configuration/extShare/configDropins/overrides/ICNDS_Oracle.xml b/FNCM/configuration/extShare/configDropins/overrides/ICNDS_Oracle.xml new file mode 100644 index 00000000..bb125c06 --- /dev/null +++ b/FNCM/configuration/extShare/configDropins/overrides/ICNDS_Oracle.xml @@ -0,0 +1,12 @@ + + + + + + + + diff --git a/FNCM/configuration/extShare/configDropins/overrides/oidc.xml b/FNCM/configuration/extShare/configDropins/overrides/oidc.xml new file mode 100644 index 00000000..3cd04fbd --- /dev/null +++ b/FNCM/configuration/extShare/configDropins/overrides/oidc.xml @@ -0,0 +1,22 @@ + + + + + diff --git a/IAWS/README_config.md b/IAWS/README_config.md new file mode 100644 index 00000000..492c0243 --- /dev/null +++ b/IAWS/README_config.md @@ -0,0 +1,1084 @@ +# Configuring IBM Automation Workstream Services 19.0.3 +Learn how to configure IBM Automation Workstream Services. 
+ + +## Table of contents +- [Introduction](#Introduction) +- [Automation Workstream Services component details](#Automation-Workstream-Services-component-details) +- [Resources required](#Resources-required) +- [Prerequisites](#Prerequisites) +- [Step 1: Preparing to install Automation Workstream Services for production](#Step-1-Preparing-to-install-Automation-Workstream-Services-for-production) + - [Setting up an OpenShift environment](#Setting-up-an-OpenShift-environment) + - [Preparing SecurityContextConstraints](#Preparing-SecurityContextConstraints) +- [Step 2: Preparing databases for Automation Workstream Services](#Step-2-Preparing-databases-for-Automation-Workstream-Services) + - [Creating the database for Automation Workstream Services](#Creating-the-database-for-Automation-Workstream-Services) + - [(Optional) Db2 SSL Configuration](#Optional-Db2-SSL-Configuration) + - [(Optional) Db2 HADR Configuration](#Optional-Db2-HADR-Configuration) +- [Step 3: Preparing to configure LDAP](#Step-3-Preparing-to-configure-LDAP) +- [Step 4: Preparing storage](#Step-4-Preparing-storage) + - [Disabling swapping and increasing the limit number of files descriptors](#Disabling-swapping-and-increasing-the-limit-number-of-files-descriptors) + - [Preparing storage for Process Federation Server](#Preparing-storage-for-Process-Federation-Server) + - [Preparing storage for Java Messaging Service](#Preparing-storage-for-Java-Messaging-Service) +- [Step 5: Protecting sensitive configuration data](#Step-5-Protecting-sensitive-configuration-data) + - [Creating required secrets for Automation Workstream Services](#Creating-required-secrets-for-Automation-Workstream-Services) + - [Creating the Lombardi custom secret](#Creating-the-lombardi-custom-secret) +- [Step 6: Configuring the Custom Resource YAML file to deploy Automation Workstream Services](#Step-6-Configuring-the-Custom-Resource-YAML-file-to-deploy-Automation-Workstream-Services) + - [Adding prerequisite configuration sections](#Adding-prerequisite-configuration-sections) + - [Disabling the Content Platform Engine initialization and verification sections](#Disabling-the-content-platform-engine-initialization-and-verification-sections) + - [Adding the required Automation Workstream Services configuration section](#Adding-the-required-Automation-Workstream-Services-configuration-section) + - [Custom configuration](#Custom-configuration) +- [Step 7: Completing the installation](#Step-7-Completing-the-installation) +- [Step 8: Completing post-deployment tasks](#Step-8-Completing-post-deployment-tasks) + - [Configuring the Content Platform Engine](#Configuring-the-Content-Platform-Engine) +- [Step 9: Verifying Automation Workstream Services](#Step-9-Verifying-Automation-Workstream-Services) +- [Limitations](#Limitations) +- [Troubleshooting](#Troubleshooting) + + + +## Introduction +The IBM Automation Workstream Services operator deploys the Workstream server, a server engine that runs workstreams that are configured and launched in IBM Workplace. 
+ + +## Automation Workstream Services component details +The standard configuration includes these components: + +- IBM Business Automation Workflow Server component +- IBM Java Messaging Service component +- IBM Process Federation Server component + +To support those components, a standard installation generates the following content: + +- 4 ConfigMaps that manage the configuration +- 1 StatefulSet running Java Messaging Service +- 1 StatefulSet running Workstream server +- 1 StatefulSet running Process Federation Server +- 4 or more jobs for Workstream server +- 3 service accounts with related role and role binding +- 20 secrets to gain access during installation +- 7 services and Route to route the traffic to the App Engine + + +## Resources required +Follow the instructions in [Planning your installation](https://docs.openshift.com/container-platform/3.11/install/index.html#single-master-single-box). Then, based on your environment, check the required resources in [System and environment requirements](https://docs.openshift.com/container-platform/3.11/install/prerequisites.html) and set up your environment. + +| Component name | Container | CPU | Memory | +| --- | --- | --- | --- | +| IBM Automation Workstream Services | Workstream container | 2 | 3Gi | +| IBM Automation Workstream Services | Init containers | 200m | 128Mi | +| IBM Automation Workstream Services | IBM Java Messaging Service containers | 500m | 512Mi | +| IBM Automation Workstream Services | IBM Process Federation Service containers | 1500m | 2560Mi | + + +## Prerequisites +- [OpenShift 3.11 or later](https://docs.openshift.com/container-platform/3.11/welcome/index.html) +- [IBM DB2 11.5](https://www.ibm.com/products/db2-database) +- [User Management Service](../UMS/README_config.md) +- [Automation Application Engine](../AAE/README_config.md) +- [Business Automation Navigator](../BAN/README_config.md) +- [FileNet Content Manager](../FNCM/README_config.md) + + + +## Step 1: Preparing to install Automation Workstream Services for production +In addition to performing the steps required to set up the operator environment, complete the following steps before you install Automation Workstream Services. + +### Setting up an OpenShift environment +Before you prepare to install Automation Workstream Services, complete [Step 1 to Step 5](../platform/ocp/install.md). + +### Preparing SecurityContextConstraints +#### Creating a SecurityContextConstraint for Automation Workstream Services +Create a SecurityContextConstraint for Automation Workstream Services that looks like the following content and save it to the ibm-dba-iaws-scc.yaml file. 
Then add this ibm-dba-iaws-scc SCC to all service accounts in a namespace: +```yaml +apiVersion: security.openshift.io/v1 +kind: SecurityContextConstraints +metadata: + name: ibm-dba-iaws-scc +allowHostDirVolumePlugin: false +allowHostIPC: false +allowHostNetwork: false +allowHostPID: false +allowHostPorts: false +allowPrivilegeEscalation: true +allowPrivilegedContainer: false +allowedCapabilities: [] +defaultAddCapabilities: [] +fsGroup: + type: RunAsAny +groups: +- system:authenticated +readOnlyRootFilesystem: false +requiredDropCapabilities: +- KILL +- MKNOD +- SETUID +- SETGID +runAsUser: + type: MustRunAsRange +seLinuxContext: + type: MustRunAs +supplementalGroups: + type: RunAsAny +users: [] +volumes: +- configMap +- downwardAPI +- emptyDir +- persistentVolumeClaim +- projected +- secret +priority: 1 +``` + +Run the following commands: + +```sh +$ oc apply -f ibm-dba-iaws-scc.yaml +$ oc adm policy add-scc-to-group ibm-dba-iaws-scc system:serviceaccounts: +``` + +#### Creating a SecurityContextConstraint for Process Federation Server +If pfs_configuration.elasticsearch.privileged is set to true, you must create a SecurityContextConstraint for Process Federation Server that looks like the following content and save it to the ibm-pfs-privileged-scc.yaml file. Then add this ibm-pfs-privileged-scc SCC to the ibm-pfs-es-service-account Process Federation Server Elasticsearch default service account in the current namespace: + +```yaml +apiVersion: security.openshift.io/v1 +kind: SecurityContextConstraints +metadata: + name: ibm-pfs-privileged-scc +allowHostDirVolumePlugin: true +allowHostIPC: true +allowHostNetwork: true +allowHostPID: true +allowHostPorts: true +allowPrivilegedContainer: true +allowPrivilegeEscalation: true +allowedCapabilities: +- '*' +allowedFlexVolumes: [] +allowedUnsafeSysctls: +- '*' +defaultAddCapabilities: [] +defaultAllowPrivilegeEscalation: true +forbiddenSysctls: [] +fsGroup: + type: RunAsAny +readOnlyRootFilesystem: false +requiredDropCapabilities: [] +runAsUser: + type: RunAsAny +seccompProfiles: +- '*' +seLinuxContext: + type: RunAsAny +supplementalGroups: + type: RunAsAny +volumes: +- '*' +priority: 2 +``` + +Run the following commands: + +```sh +$ oc create serviceaccount ibm-pfs-es-service-account +$ oc apply -f ibm-pfs-privileged-scc.yaml +$ oc adm policy add-scc-to-user ibm-pfs-privileged-scc -z ibm-pfs-es-service-account +``` + +**Tip:** You can use the [`getSCCs.sh`](/~https://github.com/IBM/cloud-pak/tree/master/samples/utilities) bash script, which displays all the SecurityContextConstraints resources that are mapped to each of the ServiceAccount users in the specified namespace (or project). + +**Note:** Specify the value of property `pfs_configuration.elasticsearch.service_account` to the newly created service account `ibm-pfs-es-service-account` in your Custom Resource configuration. + + + +## Step 2: Preparing databases for Automation Workstream Services +### Creating the database for Automation Workstream Services +Create the database for Automation Workstream Services by running the following script on the Db2 server: +```sql +create database automatic storage yes using codeset UTF-8 territory US pagesize 32768; +-- connect to the created database: +connect to ; +-- A user temporary tablespace is required to support stored procedures in BPM. 
+CREATE USER TEMPORARY TABLESPACE USRTMPSPC1; +UPDATE DB CFG FOR USING LOGFILSIZ 16384 DEFERRED; +UPDATE DB CFG FOR USING LOGSECOND 64 IMMEDIATE; +-- The following grant is used for databases without enhanced security. +-- For more information, review the IBM Knowledge Center for Enhancing Security for DB2. +grant dbadm on database to user ; +connect reset; +``` + +**Notes:** +- Replace `` with the Automation Workstream Services database name you want, for example, BPMDB. +- Replace `` with the user you will use for the database. + + +### (Optional) Db2 SSL Configuration +To ensure that all communications between the Business Automation Workflow server and Db2 are encoded, you must import the database CA certificate to the Business Automation Workflow server. To do so, you must create a secret to store the certificate: +``` +kubectl create secret generic ibm-dba-baw-db2-cacert --from-file=cacert.crt= +``` + +**Note:** You must modify the part that points to the certificate file. Do not change the part --from-file=cacert.crt=. + +You can then use the resulting secret to set the `iaws_configuration[x]. wfs.database.sslsecretname: ibm-dba-baw-db2-cacert`, while setting `iaws_configuration[x].wfs.database.ssl` to `true`. + +### (Optional) Db2 HADR Configuration +If you use Db2 as your database, you can configure high availability by setting up HADR for the process server database. This configuration ensures that the process server automatically retrieves the necessary failover server information when it first connects to the database. As part of the setup, you must provide a comma-separated list of failover servers and failover ports. + +For example, if there are two failover servers: + + server1.db2.customer.com on port 50443 + server2.db2.customer.com on port 51443 + +you can specify these hosts and ports in the Custom Resource configuration YAML file as follows: +```yaml +database: + ... ... + hadr: + standbydb_host: server1.db2.customer.com, server2.db2.customer.com + standbydb_port: 50443,51443 + retryintervalforclientreroute: + maxretriesforclientreroute: + ... ... +``` + + + +## Step 3: Preparing to configure LDAP +An LDAP server is required before you install Automation Workstream Services. Save the following content in a file named `ldap-bind-secret.yaml`, Then apply it by running the `oc apply -f ldap-bind-secret.yaml` command: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: ldap-bind-secret +type: Opaque +data: + ldapUsername: + ldapPassword: > /etc/sysctl.conf && sysctl -w vm.swappiness=1 && sed -i '/^vm.swappiness /d' /etc/sysctl.conf && echo 'vm.swappiness=1' >> /etc/sysctl.conf +``` + +### Preparing storage for Process Federation Server +The Process Federation Server component requires persistent volumes (PVs), persistent volume claims (PVCs), and related folders to be created before you can deploy. The deployment process uses these volumes and folders during the deployment. + +The following example illustrates the procedure using Network File System (NFS). An existing NFS server is required before creating persistent volumes and persistent volume claims. +- Creating folders for Process Federation Server on an NFS server, For the NFS server, you must grant minimal privileges, In the `/etc/exports` configuration file, add the following line in the end: +``` + *(rw,sync,no_subtree_check) +``` + +**Notes:** +- `` should be an individual directory and NOT shared with other components. +- **Restart NFS service** after editing and saving `/etc/exports` configuration file. 
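+As an illustration, on many Linux distributions you can re-read `/etc/exports` and restart the NFS service with commands similar to the following (the service name can differ on your system):
+
+```bash
+# Re-export all directories listed in /etc/exports
+sudo exportfs -ra
+# Restart the NFS server so that the new export takes effect
+sudo systemctl restart nfs-server
+```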
+ + +Give the least privilege to the mounted directories using the following commands: +```bash +sudo mkdir /pfs-es-0 +sudo mkdir /pfs-es-1 +sudo mkdir /pfs-logs-0 +sudo mkdir /pfs-logs-1 +sudo mkdir /pfs-output-0 +sudo mkdir /pfs-output-1 + +chown -R :65534 /pfs-* +chmod g+rw /pfs-* +``` + +- Creating persistent volumes required for Process Federation Server + +Save the following YAML files on the OpenShift master node and run the `oc apply -f ` command on the files in the following order. + +1. pfs-pv-pfs-es-0.yaml +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pfs-es-0 +spec: + storageClassName: "pfs-es" + accessModes: + - ReadWriteOnce + capacity: + storage: 10Gi + nfs: + path: /pfs-es-0 + server: + persistentVolumeReclaimPolicy: Recycle +``` + +2. pfs-pv-pfs-es-1.yaml +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pfs-es-1 +spec: + storageClassName: "pfs-es" + accessModes: + - ReadWriteOnce + capacity: + storage: 10Gi + nfs: + path: /pfs-es-1 + server: + persistentVolumeReclaimPolicy: Recycle +``` + +3. pfs-pv-pfs-logs-0.yaml +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pfs-logs-0 +spec: + storageClassName: "pfs-logs" + accessModes: + - ReadWriteOnce + capacity: + storage: 5Gi + nfs: + path: /pfs-logs-0 + server: + persistentVolumeReclaimPolicy: Recycle +``` + +4. pfs-pv-pfs-logs-1.yaml +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pfs-logs-1 +spec: + storageClassName: "pfs-logs" + accessModes: + - ReadWriteOnce + capacity: + storage: 5Gi + nfs: + path: /pfs-logs-1 + server: + persistentVolumeReclaimPolicy: Recycle +``` + +5. pfs-pv-pfs-output-0.yaml +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pfs-output-0 +spec: + storageClassName: "pfs-output" + accessModes: + - ReadWriteOnce + capacity: + storage: 5Gi + nfs: + path: /pfs-output-0 + server: + persistentVolumeReclaimPolicy: Recycle +``` + +6. pfs-pv-pfs-output-1.yaml +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pfs-output-1 +spec: + storageClassName: "pfs-output" + accessModes: + - ReadWriteOnce + capacity: + storage: 5Gi + nfs: + path: /pfs-output-1 + server: + persistentVolumeReclaimPolicy: Recycle +``` + +**Notes:** +- Replace `` with the Process Federation Server storage folder on your NFS server. +- Replace `` with your NFS server IP address. + +### Preparing storage for Java Messaging Service +The Java Messaging Service(JMS) component requires you to create a persistent volume and a related folder to be created before you can deploy. + +The following example illustrats the procedure using NFS. An existing NFS server is required before creating PVs. + +- Creating folders for JMS on an NFS server +For the NFS server, you must grant minimal privileges, In the `/etc/exports` configuration file, add the following line in the end: +``` + *(rw,sync,no_subtree_check) +``` + +**Notes:** +- `` should be an individual directory and do NOT shared with other components. +- **Restart the NFS service** after editing and saving the `/etc/exports` configuration file. + +Give the least privilege to the mounted directories using the following commands: +```bash +sudo mkdir /jms +chown -R :65534 /jms +chmod g+rw /jms +``` + +- Creating persistent volumes for JMS + +Save the following YAML files on the OpenShift master node and run the `oc apply -f ` command. 
+jms-pv.yaml +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: jms-pv +spec: + storageClassName: "jms-storage-class" + accessModes: + - ReadWriteOnce + capacity: + storage: 2Gi + nfs: + path: /jms + server: + persistentVolumeReclaimPolicy: Recycle +``` + +**Notes:** +- Replace `` with the JMS storage folder on your NFS server. +- `accessModes` should be set to the same value as the `iaws_configuration[x].wfs.jms.storage.access_modes` property in the Custom Resource configuration file. +- Replace `` with your NFS server IP address. + + + +## Step 5: Protecting sensitive configuration data +### Creating required secrets for Automation Workstream Services +A secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Before you install Automation Workstream Services, you must create the following secrets manually by saving the content in a YAML file and running the `oc apply -f ` command on the OpenShift master node. + +Shared encryption key secret: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: icp4a-shared-key-secret +type: Opaque +data: + encryptionKey: +``` +**Notes:** +- So that the confidential information is shared only between the components that hold the key, use the encryptionKey to encrypt the confidential information at the Resource Registry. +- Ensure the encryptionKey is **base64** encoded. + +Business Automation Workflow server secret: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: ibm-baw-baw-secret +type: Opaque +data: + adminUsername: + adminPassword: + sslKeyPassword: + oidcClientPassword: +``` +**Note:** +- `adminUsername` and `adminPassword` is the valid LDAP user who will be configured as the admin user of Automation Workstream Services. The password is necessary because it will be created on the Liberty server. +- `sslKeyPassword` will be used as the keystore or trust store password. +- `oidcClientPassword` will be registered with the User Manaement Service(UMS) as the OIDC client password. +- Ensure all values under data are **base64** encoded. + +Business Automation Workflow server database secret: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: ibm-baw-wfs-server-db-secret +type: Opaque +data: + dbUser: + password: +``` +**Notes:** +- `dbUser` and `password` are the database user name and password respectively. +- Ensure all values under data are **base64** encoded. + +Workstream server integration with IBM Content Platform Engine secret: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: cpe-admin-secret +type: Opaque +data: + adminUsername: + adminPassword: +``` +**Notes:** +- `adminUsername` and `adminPassword` are the Content Platform Engine admin user credentials. +- Ensure all values under data are **base64** encoded. + +Process Federation Server secret: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: ibm-pfs-admin-secret +type: Opaque +data: + ltpaPassword: + oidcClientPassword: + sslKeyPassword: +``` + +**Notes:** +- `sslKeyPassword` is used as the keystore and trust store password. +- `oidcClientPassword` is registered at with UMS as the OIDC client password. +- Ensure all values under data are **base64** encoded. + +### Creating the Lombardi custom secret +#### 1. Save the following content in a file named '100Custom.xml'. +```xml + + + + + + true + + + +``` + +#### 2. 
Create the Lombardi custom secret +Run the following command on the OpenShift master node: +``` +kubectl create secret generic wfs-lombardi-custom-xml-secret --from-file=sensitiveCustomConfig=./100Custom.xml +``` + +**Note:** To overwrite the Lombardi configuration settings, specify the value of the `iaws_configuration[x].wfs.lombardi_custom_xml_secret_name` property as the to newly created secret name `wfs-lombardi-custom-xml-secret` in the Custom Resource configuration file. + + + +## Step 6: Configuring the Custom Resource YAML file to deploy Automation Workstream Services +### Adding prerequisite configuration sections +Make sure that you've set the configuration parameters for the following components in your copy of the template Custom Resource YAML file: + +- [User Management Service](../UMS/README_config.md) +- [Automation Application Engine](../AAE/README_config.md) +- [Business Automation Navigator](../BAN/README_config.md) +- [FileNet Content Manager](../FNCM/README_config.md) + +### Disabling the Content Platform Engine initialization and verification sections +To ensure that the Content Platform Engine initialization can be completed successfully, remove the `initialize_configuration` and `verify_configuration` sections from the template Custom Resource YAML file. + +### Adding the required Automation Workstream Services configuration section +Edit your copy of the template custom resource YAML file and make the following updates. +- Uncomment and update the shared_configuration section if you haven't done it already. + +- Update the `iaws_configuration` and `pfs_configuration` sections. + To install Automation Workstream Services, replace the contents of `iaws_configuration` and `pfs_configuration` in your copy of the template Custom Resource YAML file with the values from the [sample_min_value.yaml](configuration/sample_min_value.yaml) file. + +### Custom configuration +If you want to customize your Custom Resource YAML file, you can refer to the [configuration list](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_iaws_params.html) to update the required values of each parameter according to your environment. + + +## Step 7: Completing the installation +Go back to the relevant installation or update page to configure other components and complete the deployment with the operator. + +Install pages: + - [OpenShift installation page](../platform/ocp/install.md) + - [Certified Kubernetes installation page](../platform/k8s/install.md) + +Update pages: + - [OpenShift installation page](../platform/ocp/update.md) + - [Certified Kubernetes installation page](../platform/k8s/update.md) + + +## Step 8: Completing post-deployment tasks +### Configuring the Content Platform Engine + +- [Creating the P8Domain manually](https://www.ibm.com/support/knowledgecenter/SSGLW6_5.5.0/com.ibm.p8.install.doc/p8pin328.htm) +- [Creating a database connection manually](https://www.ibm.com/support/knowledgecenter/SSGLW6_5.5.0/com.ibm.p8.install.doc/p8pin327.htm) +- [Creating object stores manually](https://www.ibm.com/support/knowledgecenter/SSGLW6_5.5.0/com.ibm.p8.install.doc/p8pin034.htm) + +**Notes:** +- The domain name must be same as value of the `iaws_configuration[x].wfs.content_integration.domain_name` property in the Custom Resource configuration file. 
+- The database connection parameters must come from one of the object store databases in the `datasource_configuration.dc_os_datasources` section of the Custom Resource configuration file, which is already persisted as a datasource configuration inside the Content Platform Engine container.
+- The Object Store name must be the same as the value of the `iaws_configuration[x].wfs.content_integration.object_store_name` property in the Custom Resource configuration file.
+
+## Step 9: Verifying Automation Workstream Services
+1. Get the name of the pods that were deployed by running the following command:
+```
+oc get pod -n <namespace>
+```
+
+The following output shows the pod status of a successful Automation Workstream Services deployment:
+ +``` +NAME READY STATUS RESTARTS AGE +demo-cmis-deploy-7f79f86db-crhwb 1/1 Running 0 18m +demo-cpe-deploy-774c856dfb-ss9p8 1/1 Running 0 21m +demo-dba-rr-63f407861c 1/1 Running 0 24m +demo-dba-rr-7557164eb9 1/1 Running 0 24m +demo-dba-rr-875b9f4a8f 1/1 Running 0 24m +demo-ibm-pfs-0 1/1 Running 0 8m +demo-ibm-pfs-dbareg-5d4b47577f-sp6qk 1/1 Running 0 8m +demo-ibm-pfs-elasticsearch-0 2/2 Running 0 8m +demo-ibm-pfs-umsregistry-job-bqvv6 0/1 Completed 0 8m +demo-instance1-aae-ae-db-job-9bb4p 0/1 Completed 0 9m +demo-instance1-aae-ae-deployment-bdf69b4d7-qpj5t 1/1 Running 0 9m +demo-instance1-aae-ae-oidc-job-fgzzv 0/1 Completed 0 9m +demo-instance1-baw-jms-0 1/1 Running 0 10m +demo-instance1-ibm-iaws-ibm-workplace-init-job-wnvcm 0/1 Completed 0 10m +demo-instance1-ibm-iaws-server-0 1/1 Running 0 10m +demo-instance1-ibm-iaws-server-content-init-job-7k64r 1/1 Running 1 10m +demo-instance1-ibm-iaws-server-database-init-job-czmdn 0/1 Completed 0 10m +demo-instance1-ibm-iaws-server-database-init-job-pfs-zzlwr 0/1 Completed 0 10m +demo-instance1-ibm-iaws-server-ltpa-kh76r 0/1 Completed 0 10m +demo-instance1-ibm-iaws-server-umsregistry-job-zt7rj 0/1 Completed 0 10m +demo-navigator-deploy-64cc4f44f-hnqbf 1/1 Running 0 15m +demo-rr-setup-pod 0/1 Completed 0 24m +demo-ums-deployment-86b4d9bc6b-bwkvn 1/1 Running 0 23m +demo-ums-ltpa-creation-job-zkdxb 0/1 Completed 0 24m +ibm-cp4a-operator-69569b68c8-d49v2 2/2 Running 0 31m +``` + +

+ +2. For each pod, check under Events to see that the images were successfully pulled and the containers were created and started by running the following command with the specific pod name: +``` +oc describe pod -n +``` + + + +## Limitations + +* Automation Workstream Services supports only the IBM Db2 database. + +* Elasticsearch limitation + + **Note:** The following limitation only applies if you are updating an Automation Workstream Services deployment which uses the embedded Elasticsearch statefulset + + * Scaling Elasticsearch statefulet + + In the Elasticsearch configuration, the [discovery.zen.minimum_master_nodes property](https://www.elastic.co/guide/en/elasticsearch/reference/6.7/discovery-settings.html#minimum_master_nodes) is automatically set by the operator to the quorum of replicas of the Elasticsearch statefulset. If, during an update, the pfs_configuration.elasticsearch.replicas value is changed and the change leads to a new computed value for the discovery.zen.minimum_master_nodes configuration property, then all currently running Elasticsearch pods will have to be restarted to. During this restart of the pods, there will be a temporary interruption of Elasticsearch and Process Federation Server services. + * Elasticsearch High Availability + + In the Elasticsearch configuration, the [discovery.zen.minimum_master_nodes property](https://www.elastic.co/guide/en/elasticsearch/reference/6.7/discovery-settings.html#minimum_master_nodes) is automatically set by the operator to the quorum of replicas of the Elasticsearch statefulset. If at some point, some Elasticsearch pods fail and the number of running Elastisearch pods is less than the quorum of replicas of the Elasticsearch statefulset, there will be an interruption of Elasticsearch and Process Federation Server services, until at least the quorum of running Elasticsearch pods is satisfied again. + +* Resource Registry limitation: + + Because of the design of etcd, it's recommended that you don't change the replica size after you create the Resource Registry cluster to prevent data loss. If you must set the replica size, set it to an odd number. If you reduce the pod size, the pods are destroyed one by one slowly to prevent data loss or the cluster from becoming out of sync. + * If you update the Resource Registry admin secret to change the username or password, first delete the -dba-rr- pods to cause Resource Registry to enable the updates. Alternatively, you can enable the update manually with etcd commands. + * If you update the Resource Registry configurations in the icp4acluster custom resource instance, the update might not affect the Resource Registry pod directly. It will affect the newly created pods when you increase the number of replicas. + +* The App Engine trusts only Certification Authority (CA) because of a Node.js server limitation. If an external service is used and signed with another root CA, you must add the root CA as trusted instead of the service certificate. + + * The certificate can be self-signed, or signed by a well-known CA. + * If you're using a depth zero self-signed certificate, it must be listed as a trusted certificate. + * If you're using a certificate signed by a self-signed CA, the self-signed CA must be in the trusted list. Using a leaf certificate in the trusted list is not supported. + * If you're adding the root CA of two or more external services to the App Engine trust list, you can't use the same common name for those root CAs. 
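+As an illustration, you can use `openssl` to check which root CA signs an external service's certificate before you add it to the App Engine trust list. The host name and file name below are placeholders:
+
+```bash
+# Show the certificate chain presented by an external service
+openssl s_client -connect external-service.example.com:443 -showcerts </dev/null
+
+# Inspect a saved certificate; for a root CA the issuer and subject are identical
+openssl x509 -in root-ca.crt -noout -issuer -subject
+```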
+
+
+## Troubleshooting
+- How to check pod status and related logs for Automation Workstream Services
+
+There are 12 Automation Workstream Services-related pods in total. Run the `oc get pod` command to see the status of each pod:
+```
+NAME READY STATUS RESTARTS AGE
+demo-ibm-pfs-0 1/1 Running 0 2h
+demo-ibm-pfs-dbareg-5fc759c745-mgsdv 1/1 Running 1 1h
+demo-ibm-pfs-elasticsearch-0 2/2 Running 0 2h
+demo-ibm-pfs-umsregistry-job-g2qt5 0/1 Completed 0 2h
+demo-instance1-baw-jms-0 1/1 Running 0 2h
+demo-instance1-ibm-iaws-ibm-workplace-init-job-nz9vw 0/1 Completed 0 2h
+demo-instance1-ibm-iaws-server-0 1/1 Running 0 2h
+demo-instance1-ibm-iaws-server-content-init-job-qv9ms 1/1 Completed 12 2h
+demo-instance1-ibm-iaws-server-database-init-job-pfs-cfvs5 0/1 Completed 0 2h
+demo-instance1-ibm-iaws-server-database-init-job-t8gjt 0/1 Completed 0 2h
+demo-instance1-ibm-iaws-server-ltpa-gzhwp 0/1 Completed 0 2h
+demo-instance1-ibm-iaws-server-umsregistry-job-hglww 0/1 Completed 0 2h
+...
+```
+
+For pods controlled by a Job, the desired `STATUS` is `Completed` and the desired `READY` value is `0/1`. For pods controlled by a Deployment or StatefulSet, the desired `STATUS` is `Running` and the desired `READY` value is `1/1` or `2/2`. You can see detailed information for each pod by running the `oc describe pod <pod_name>` command, and detailed logs by running the `oc logs <pod_name>` command. If a pod does not reach its desired status, use these commands to determine what is blocking it.
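+For example, using the pod names from the listing above, you could inspect one of the job pods as follows (the generated name suffixes will differ in your environment):
+
+```bash
+# Show events and status details for a job pod
+oc describe pod demo-instance1-ibm-iaws-server-umsregistry-job-hglww
+# Show the logs written by that job pod
+oc logs demo-instance1-ibm-iaws-server-umsregistry-job-hglww
+```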
+The following example shows how to analyze the pod "demo-instance1-ibm-iaws-server-0":
+ +```yaml +[root@rhel76 ~]# oc describe pod demo-instance1-ibm-iaws-server-0 +Name: demo-instance1-ibm-iaws-server-0 +Namespace: demo-project +Priority: 0 +PriorityClassName: +Node: rhel76/ +Start Time: Mon, 02 Dec 2019 14:06:10 +0800 +Labels: app.kubernetes.io/component=server + app.kubernetes.io/instance=demo-instance1 + app.kubernetes.io/managed-by=Operator + app.kubernetes.io/name=workflow-server + app.kubernetes.io/version=19.0.3 + controller-revision-hash=demo-instance1-ibm-iaws-server-78d49d6667 + statefulset.kubernetes.io/pod-name=demo-instance1-ibm-iaws-server-0 +Annotations: openshift.io/scc=ibm-dba-iaws-scc + productID=5737-I23 + productName=IBM Cloud Pak for Automation + productVersion=19.0.3 +Status: Running +IP: 10.128.1.85 +Controlled By: StatefulSet/demo-instance1-ibm-iaws-server +Init Containers: + ssl-init-container: + Container ID: docker://e518904579fedc5b276a866f16af134924dba2b62fdaeb3c89e07f52f24b3872 + Image: dba-keytool-initcontainer:latest + Image ID: docker://sha256:e1d8a09881697228664b9a69d72377f7a2f3f0670d4649511b94b1890aa04b1f + Port: + Host Port: + State: Terminated + Reason: Completed + Exit Code: 0 + Started: Mon, 02 Dec 2019 16:17:06 +0800 + Finished: Mon, 02 Dec 2019 16:17:23 +0800 + Ready: True + Restart Count: 1 + Limits: + cpu: 500m + memory: 256Mi + Requests: + cpu: 200m + memory: 128Mi + Environment: + KEYTOOL_ACTION: GENERATE-BOTH + KEYSTORE_PASSWORD: Optional: false + Mounts: + /shared/resources/cert-trusted from trust-tls-volume (rw) + /shared/resources/keypair from keypair-secret (rw) + /shared/tls from key-trust-store (rw) + /var/run/secrets/kubernetes.io/serviceaccount from demo-instance1-ibm-iaws-sa-token-9r477 (ro) + dbcompatibility-init-container: + Container ID: docker://246c6c72e669101162ade46aeb1b40706d2141450becf11e359655309e591818 + Image: dba-dbcompatibility-initcontainer:latest + Image ID: docker://sha256:fac07eb3d6848ca7c3e63c4ce86b40a25a1bd9e69f595aa68056836532dc05d7 + Port: + Host Port: + State: Terminated + Reason: Completed + Exit Code: 0 + Started: Mon, 02 Dec 2019 16:17:28 +0800 + Finished: Mon, 02 Dec 2019 16:17:55 +0800 + Ready: True + Restart Count: 0 + Limits: + cpu: 500m + memory: 256Mi + Requests: + cpu: 200m + memory: 128Mi + Environment: + EXPECTED_SCHEMA_VERSION: 1.0.0 + DATABASE_TYPE: DB2 + DATABASE_HOST_NAME: + DATABASE_PORT: 50000 + DATABASE_NAME: BPMDB + DATABASE_USER: Optional: false + DATABASE_PWD: Optional: false + DATABASE_SCHEMA: Optional: false + SCHEMA_VERSION_TABLE_NAME: PFS_SCHEMA_PROPERTIES + SCHEMA_VERSION_KEY_NAME: Version + SCHEMA_VERSION_KEY_COLUMN_NAME: KEY + SCHEMA_VERSION_VALUE_COLUMN_NAME: VALUE + DATABASE_ALTERNATE_PORT: 0 + RETRY_INTERVAL_FOR_CLIENT_REROUTE: 600 + MAX_RETRIES_FOR_CLIENT_REROUTE: 5 + Mounts: + /var/run/secrets/kubernetes.io/serviceaccount from demo-instance1-ibm-iaws-sa-token-9r477 (ro) + bawdbcompatibility-init-container: + Container ID: docker://ead83f436f485f20658205dd00a7fa7e63d50cfaec8b1f6e63f459e5c2798c6a + Image: dba-dbcompatibility-initcontainer:latest + Image ID: docker://sha256:fac07eb3d6848ca7c3e63c4ce86b40a25a1bd9e69f595aa68056836532dc05d7 + Port: + Host Port: + State: Terminated + Reason: Completed + Exit Code: 0 + Started: Mon, 02 Dec 2019 16:18:02 +0800 + Finished: Mon, 02 Dec 2019 16:18:28 +0800 + Ready: True + Restart Count: 0 + Limits: + cpu: 500m + memory: 256Mi + Requests: + cpu: 200m + memory: 128Mi + Environment: + EXPECTED_SCHEMA_VERSION: 1.1.0 + DATABASE_TYPE: DB2 + DATABASE_HOST_NAME: + DATABASE_PORT: 50000 + DATABASE_NAME: BPMDB + DATABASE_USER: 
Optional: false + DATABASE_PWD: Optional: false + SCHEMA_VERSION_TABLE_NAME: LSW_SYSTEM_SCHEMA + SCHEMA_VERSION_KEY_NAME: DatabaseSchemaVersion + SCHEMA_VERSION_KEY_COLUMN_NAME: PROPNAME + SCHEMA_VERSION_VALUE_COLUMN_NAME: PROPVALUE + DATABASE_ALTERNATE_PORT: 0 + RETRY_INTERVAL_FOR_CLIENT_REROUTE: 600 + MAX_RETRIES_FOR_CLIENT_REROUTE: 5 + Mounts: + /var/run/secrets/kubernetes.io/serviceaccount from demo-instance1-ibm-iaws-sa-token-9r477 (ro) +Containers: + wf-ps: + Container ID: docker://686af04f1b5bb136f546a8ad34a2574f7500387099db812034d9facac33f9020 + Image: iaws-ps:19.0.3 + Image ID: docker://sha256:324ae272532971bc2779719239ebfa88adb298bf6ddd8970b568e97caedf4a13 + Port: + Host Port: + State: Running + Started: Mon, 02 Dec 2019 16:18:34 +0800 + Last State: Terminated + Reason: Error + Exit Code: 255 + Started: Mon, 02 Dec 2019 14:07:31 +0800 + Finished: Mon, 02 Dec 2019 16:15:32 +0800 + Ready: True + Restart Count: 1 + Limits: + cpu: 3 + memory: 2096Mi + Requests: + cpu: 2 + memory: 1048Mi + Readiness: exec [/bin/bash -c if [ "$(curl -sfk https://localhost:9443/ps/rest/v1/config/getProcessServerDatabaseSchemaVersion | grep -Po '(?<="status":")(.*?)(?=")')" != "200" ]; then exit 1; fi] delay=180s timeout=1s period=5s #success=1 #failure=3 + Environment: + JMS_SERVER_HOST: demo-instance1-baw-jms-service + UMS_CLIENT_ID: demo-instance1-ibm-iaws-server-oidc-client + UMS_CLIENT_SECRET: Optional: false + UMS_HOST: ums..nip.io + UMS_PORT: 443 + EXTERNAL_HOSTNAME: .nip.io + EXTERNAL_PORT: 443 + WLP_LOGGING_CONSOLE_FORMAT: json + WLP_LOGGING_MESSAGE_FORMAT: basic + ADMIN_USER: Optional: false + ADMIN_PASSWORD: Optional: false + UMS_ADMIN_USER: Optional: false + UMS_ADMIN_PASSWORD: Optional: false + DB_TYPE: DB2 + DB_USER: Optional: false + DB_PASSWORD: Optional: false + DB_NAME: BPMDB + DB_HOST: + DB_PORT: 50000 + SSL_KEY_PASSWORD: Optional: false + CSRF_SESSION_TOKENSALT: Optional: false + CSRF_REFERER_WHITELIST: .nip.io,ums..nip.io,ae..nip.io,icn..nip.io + CSRF_ORIGIN_WHITELIST: https://.nip.io,https://.nip.io:443,https://ums..nip.io,https://ums..nip.io:443,https://ae..nip.io,https://icn..nip.io + CPE_URL: https://demo-cpe-svc:9443/wsi/FNCEWS40MTOM + CMIS_URL: https://demo-cmis-svc:9443/openfncmis_wlp/services + CPE_DOMAIN_NAME: P8Domain + CPE_REPOSITORY: DOCS + CPE_OBJECTSTORE_ID: {E340B318-CF17-4C14-8902-AF713D3B0A91} + CPE_USERNAME: Optional: false + CPE_PASSWORD: Optional: false + WAIT_INTERVAL: 60000 + DB_SSLCONNECTION: false + DB_SSLCERTLOCATION: fake + DBCHECK_WAITTIME: 900 + DBCHECK_INTERVALTIME: 15 + STANDBYDB_PORT: 0 + STANDBYDB_RETRYINTERVAL: 600 + STANDBYDB_MAXRETRIES: 5 + RESOURCE_REGISTRY_URL: https://rr..nip.io:443 + RESOURCE_REGISTRY_UNAME: Optional: false + RESOURCE_REGISTRY_PASSWORD: Optional: false + CLUSTERIP_SERVICE_NAME: demo-instance1-ibm-baw-server + APPENGINE_EXTERNAL_HOSTNAME: ae..nip.io + FRAME-ANCESTORS-SETTING: https://.nip.io https://ums..nip.io https://ae..nip.io https://icn..nip.io + ENCRYPTION_KEY: Optional: false + Mounts: + /opt/ibm/wlp/output/defaultServer/resources/security/keystore/jks/server.jks from key-trust-store (rw) + /opt/ibm/wlp/output/defaultServer/resources/security/truststore/jks/trusts.jks from key-trust-store (rw) + /opt/ibm/wlp/usr/servers/defaultServer/config/100SCIM.xml from configurations (rw) + /opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides/oidc-rp.xml from configurations (rw) + /opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides/processServer_variables_system.xml from configurations (rw) + 
/opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides/security100.xml from configurations (rw) + /opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides/ssl.xml from configurations (rw) + /opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides/trace-specification.xml from configurations (rw) + /opt/ibm/wlp/usr/servers/defaultServer/resources/security from ltpa-store (rw) + /var/run/secrets/kubernetes.io/serviceaccount from demo-instance1-ibm-iaws-sa-token-9r477 (ro) +Conditions: + Type Status + Initialized True + Ready True + ContainersReady True + PodScheduled True +Volumes: + key-trust-store: + Type: EmptyDir (a temporary directory that shares a pod's lifetime) + Medium: + trust-tls-volume: + + keypair-secret: + Type: Secret (a volume populated by a Secret) + SecretName: ibm-baw-tls + Optional: false + ltpa-store: + Type: Secret (a volume populated by a Secret) + SecretName: demo-instance1-ibm-iaws-server-ltpa + Optional: false + configurations: + Type: ConfigMap (a volume populated by a ConfigMap) + Name: demo-instance1-ibm-iaws-server-config + Optional: false + demo-instance1-ibm-iaws-sa-token-9r477: + Type: Secret (a volume populated by a Secret) + SecretName: demo-instance1-ibm-iaws-sa-token-9r477 + Optional: false +QoS Class: Burstable +Node-Selectors: node-role.kubernetes.io/compute=true +Tolerations: node.kubernetes.io/memory-pressure:NoSchedule +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Warning NetworkNotReady 16m (x2 over 16m) kubelet, rhel76 network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized] + Normal SandboxChanged 15m kubelet, rhel76 Pod sandbox changed, it will be killed and re-created. + Normal Pulled 15m kubelet, rhel76 Container image "dba-keytool-initcontainer:latest" already present on machine + Normal Created 15m kubelet, rhel76 Created container + Normal Started 15m kubelet, rhel76 Started container + Normal Pulled 15m kubelet, rhel76 Container image "dba-dbcompatibility-initcontainer:latest" already present on machine + Normal Created 15m kubelet, rhel76 Created container + Normal Started 15m kubelet, rhel76 Started container + Normal Pulled 14m kubelet, rhel76 Container image "dba-dbcompatibility-initcontainer:latest" already present on machine + Normal Created 14m kubelet, rhel76 Created container + Normal Started 14m kubelet, rhel76 Started container + Normal Pulled 14m kubelet, rhel76 Container image "iaws-ps:19.0.3" already present on machine + Normal Created 14m kubelet, rhel76 Created container + Normal Started 14m kubelet, rhel76 Started container +``` + +The "demo-instance1-ibm-iaws-server-0" pod has three init containers, named `ssl-init-container`, `dbcompatibility-init-container` and `bawdbcompatibility-init-container`. For all init containers, the desired STATUS is `Terminated` with Reason `Completed`. For the `wf-ps` container, the desired Ready STATUS is `True`. + +
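+If you only want to verify these statuses without reading the whole `describe` output, you can query them directly with `oc get -o jsonpath`. The following commands are a minimal sketch that assumes the pod and namespace names from the example above:
+
+```
+# List each init container with its state (each should be "terminated" with reason "Completed")
+oc get pod demo-instance1-ibm-iaws-server-0 -n demo-project \
+  -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'
+
+# Check whether the wf-ps container reports Ready (expected output: true)
+oc get pod demo-instance1-ibm-iaws-server-0 -n demo-project \
+  -o jsonpath='{.status.containerStatuses[?(@.name=="wf-ps")].ready}'
+```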

+
+
+ + +- Error: failed to start container "demo-cpe-deploy" or "demo-navigator-deploy" + +
+
+Click to show detailed information and a solution. +

+
+The detailed error message is something like "Error response from daemon: oci runtime error: container_linux.go:235: starting container process caused "container init exited prematurely"". This kind of error occurs when the Persistent Volumes and Persistent Volume Claims for IBM Content Navigator and Content Platform Engine are bound incorrectly. To resolve it, first delete the Persistent Volume Claims related to IBM Content Navigator or Content Platform Engine, then delete the related PVs and NFS folders, and then re-create them in the reverse order, as shown in the sketch below.
+
+
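+The following commands are a minimal sketch of that sequence. The PVC, PV, NFS folder, and YAML file names are placeholders only; replace them with the names used in your own deployment:
+
+```
+# 1. Delete the Persistent Volume Claims used by Content Platform Engine / IBM Content Navigator (placeholder names)
+oc delete pvc cpe-cfgstore-pvc icn-cfgstore-pvc -n demo-project
+
+# 2. Delete the related Persistent Volumes (placeholder names)
+oc delete pv cpe-cfgstore-pv icn-cfgstore-pv
+
+# 3. On the NFS server, remove the backing folders (placeholder paths)
+rm -rf /exports/cpe-cfgstore /exports/icn-cfgstore
+
+# 4. Re-create everything in the reverse order: NFS folders first, then the PVs, then the PVCs
+mkdir -p /exports/cpe-cfgstore /exports/icn-cfgstore
+oc apply -f cpe-icn-pv.yaml
+oc apply -f cpe-icn-pvc.yaml
+```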

+
+
+
+- Failed to start Pod "demo-ibm-pfs-elasticsearch-0"
+
+Check the value of the `pfs_configuration.elasticsearch.privileged` property in your Custom Resource configuration. If it's set to `true`, run the `oc describe pod demo-ibm-pfs-elasticsearch-0` command to check the SecurityContextConstraint of the `demo-ibm-pfs-elasticsearch-0` pod, and ensure that it is set to `openshift.io/scc=pfs-privileged-scc`.
+```
+# oc describe pod demo-ibm-pfs-elasticsearch-0
+Name: demo-ibm-pfs-elasticsearch-0
+Namespace: demo-project
+Priority: 0
+PriorityClassName:
+Node: rhel76/
+Start Time: Thu, 21 Nov 2019 18:10:11 +0800
+Labels: app.kubernetes.io/component=pfs-elasticsearch
+        app.kubernetes.io/instance=demo
+        app.kubernetes.io/managed-by=Operator
+        app.kubernetes.io/name=demo-ibm-pfs-elasticsearch
+        app.kubernetes.io/version=19.0.3
+        controller-revision-hash=demo-ibm-pfs-elasticsearch-8675f484d
+        role=elasticsearch
+        statefulset.kubernetes.io/pod-name=demo-ibm-pfs-elasticsearch-0
+Annotations: checksum/config=6a3747ddc8ce13afdfc85b6793b847d035e8edd5
+        openshift.io/scc=pfs-privileged-scc
+        productID=5737-I23
+        productName=IBM Cloud Pak for Automation
+        productVersion=19.0.3
+Status: Running
+```
+
+- To enable Automation Workstream Services container logs:
+
+Use the following specification to enable Automation Workstream Services container logs in the Custom Resource configuration:
+```yaml
+iaws_configuration:
+  - name: instance1
+    wfs:
+      logs:
+        console_format: "json"
+        console_log_level: "INFO"
+        console_source: "message,trace,accessLog,ffdc,audit"
+        message_format: "basic"
+        trace_format: "ENHANCED"
+        trace_specification: "WLE.=all:com.ibm.bpm.=all:com.ibm.workflow.*=all"
+```
+
+Then, run the `oc logs IAWS_pod_name` command to see the logs, or log in to the Automation Workstream Services container to see the logs.
+
+This example shows how to check the Automation Workstream Services container logs:
+```
+$ oc exec -it demo-instance1-ibm-iaws-server-0 bash
+$ cat /logs/application/liberty-message.log
+```
+
+- To customize the Process Federation Server Liberty server trace setting:
+
+Use the following specification to enable Process Federation Server container logs in the Custom Resource configuration:
+```yaml
+pfs_configuration:
+  pfs:
+    logs:
+      console_format: "json"
+      console_log_level: "INFO"
+      console_source: "message,trace,accessLog,ffdc,audit"
+      trace_format: "ENHANCED"
+      trace_specification: "*=info"
+```
+
+Then, run the `oc logs PFS_pod_name` command to see the logs, or log in to the Process Federation Server container to see the logs.
+ +This example shows how to check the Process Federation Server container logs: +``` +$ oc exec -it demo-ibm-pfs-0 bash +$ cat /logs/application/liberty-message.log +``` diff --git a/IAWS/configuration/sample_min_value.yaml b/IAWS/configuration/sample_min_value.yaml new file mode 100644 index 00000000..aa28e00b --- /dev/null +++ b/IAWS/configuration/sample_min_value.yaml @@ -0,0 +1,234 @@ +apiVersion: icp4a.ibm.com/v1 +kind: ICP4ACluster +metadata: + name: demo +spec: + iaws_configuration: + - name: instance1 + wfs: + service_type: "Route" + hostname: + port: 443 + replicas: 1 + workflow_server_secret: ibm-baw-baw-secret + tls: + tls_secret_name: ibm-baw-tls + tls_trust_list: + image: + repository: cp.icr.io/cp/cp4a/iaws/iaws-ps + tag: 19.0.3 + pullPolicy: IfNotPresent + pfs_bpd_database_init_job: + repository: cp.icr.io/cp/cp4a/iaws/pfs-bpd-database-init-prod + tag: 19.0.3 + pullPolicy: IfNotPresent + upgrade_job: + repository: cp.icr.io/cp/cp4a/iaws/iaws-psdb-handling + tag: 19.0.3 + pullPolicy: IfNotPresent + ibm_workplace_job: + repository: cp.icr.io/cp/cp4a/iaws/iaws-ibm-workplace + tag: 19.0.3 + pull_policy: IfNotPresent + database: + ssl: false + sslsecretname: ibm-dba-baw-db2-cacert + type: "DB2" + server_name: + database_name: "BPMDB" + port: "50000" + secret_name: ibm-baw-wfs-server-db-secret + dbcheck: + wait_time: 900 + interval_time: 15 + hadr: + standbydb_host: + standbydb_port: + retryinterval: + maxretries: + content_integration: + init_job_image: + repository: cp.icr.io/cp/cp4a/iaws/iaws-ps-content-integration + tag: 19.0.3 + pull_policy: IfNotPresent + domain_name: "P8Domain" + object_store_name: "DOCS" + cpe_admin_secret: cpe-admin-secret + event_handler_path: "/home/config/docs-config" + appengine: + hostname: + admin_secret_name: ae-admin-secret-instance1 + resource_registry: + hostname: + port: 443 + admin_secret_name: rr-admin-secret + jms: + image: + repository: cp.icr.io/cp/cp4a/iaws/baw-jms-server + tag: 19.0.3 + pull_policy: IfNotPresent + tls: + tls_secret_name: dummy-jms-tls-secret + resources: + limits: + memory: "2Gi" + cpu: "1000m" + requests: + memory: "512Mi" + cpu: "200m" + storage: + persistent: true + size: "2Gi" + use_dynamic_provisioning: false + access_modes: + - ReadWriteOnce + storage_class: "jms-storage-class" + resources: + limits: + cpu: 3 + memory: 2096Mi + requests: + cpu: 2 + memory: 1048Mi + probe: + ws: + liveness_probe: + initial_delay_seconds: 240 + readinessProbe: + initial_delay_seconds: 180 + logs: + console_format: "json" + console_log_level: "INFO" + console_source: "message,trace,accessLog,ffdc,audit" + message_format: "basic" + trace_format: "ENHANCED" + trace_specification: "*=info" + custom_xml_secret_name: + lombardi_custom_xml_secret_name: wfs-lombardi-custom-xml-secret + + pfs_configuration: + pfs: + hostname: + port: 443 + service_type: Route + image: + repository: cp.icr.io/cp/cp4a/iaws/pfs + tag: 19.0.3 + pull_policy: IfNotPresent + liveness_probe: + initial_delay_seconds: 60 + readiness_probe: + initial_delay_seconds: 60 + replicas: 1 + service_account: + anti_affinity: hard + admin_secret_name: ibm-pfs-admin-secret + config_dropins_overrides_secret: ibm-pfs-config + resources_security_secret: "" + external_tls_secret: + external_tls_ca_secret: + tls: + tls_secret_name: + tls_trust_list: + resources: + requests: + cpu: 500m + memory: 512Mi + limits: + cpu: 2 + memory: 4Gi + saved_searches: + index_name: ibmpfssavedsearches + index_number_of_shards: 3 + index_number_of_replicas: 1 + index_batch_size: 100 + 
update_lock_expiration: 5m + unique_constraint_expiration: 5m + security: + sso: + domain_name: + cookie_name: "ltpatoken2" + ltpa: + filename: "ltpa.keys" + expiration: "120m" + monitor_interval: "60s" + ssl_protocol: SSL + executor: + max_threads: "80" + core_threads: "40" + rest: + user_group_check_interval: "300s" + system_status_check_interval: "60s" + bd_fields_check_interval: "300s" + custom_env_variables: + names: + secret: + output: + storage: + use_dynamic_provisioning: false + size: 5Gi + storage_class: "pfs-output" + logs: + storage: + use_dynamic_provisioning: false + size: 5Gi + storage_class: "pfs-logs" + dba_resource_registry: + image: + repository: cp.icr.io/cp/cp4a/aae/dba-etcd + tag: latest + pull_policy: IfNotPresent + lease_ttl: 120 + pfs_check_interval: 10 + pfs_connect_timeout: 10 + pfs_response_timeout: 30 + pfs_registration_key: /dba/appresources/IBM_PFS/PFS_SYSTEM + tls_secret: rr-tls-client-secret + resources: + limits: + memory: ‘512Mi’ + cpu: ‘500m’ + requests: + memory: ‘512Mi’ + cpu: ‘200m’ + elasticsearch: + es_image: + repository: cp.icr.io/cp/cp4a/iaws/pfs-elasticsearch-prod + tag: 19.0.3 + pull_policy: IfNotPresent + pfs_init_image: + repository: cp.icr.io/cp/cp4a/iaws/pfs-init-prod + tag: 19.0.3 + pull_policy: IfNotPresent + nginx_image: + repository: cp.icr.io/cp/cp4a/iaws/pfs-nginx-prod + tag: 19.0.3 + pull_policy: IfNotPresent + replicas: 1 + service_type: NodePort + external_port: + anti_affinity: hard + service_account: ibm-pfs-es-service-account + privileged: true + probe_initial_delay: 90 + heap_size: "1024m" + resources: + limits: + memory: "2Gi" + cpu: "1000m" + requests: + memory: "1Gi" + cpu: "100m" + storage: + persistent: true + use_dynamic_provisioning: false + size: 10Gi + storage_class: "pfs-es" + snapshot_storage: + enabled: false + use_dynamic_provisioning: false + size: 30Gi + storage_class_name: "" + existing_claim_name: "" + security: + users_secret: "" \ No newline at end of file diff --git a/LICENSE b/LICENSE old mode 100755 new mode 100644 index f878f629..6951cbdc --- a/LICENSE +++ b/LICENSE @@ -1,4 +1,4 @@ -The translated license terms can be viewed here: [License and Copyright]( http://www14.software.ibm.com/cgi-bin/weblap/lap.pl?li_formnum=L-ASAY-BEEFUW#ibm-top ) +The translated license terms can be viewed here: http://www14.software.ibm.com/cgi-bin/weblap/lap.pl?li_formnum=L-ASAY-BJCED8 LICENSE INFORMATION @@ -6,7 +6,7 @@ The Programs listed below are licensed under the following License Information t Program Name (Program Number): -IBM Cloud Pak for Automation 19.0.2 (5737-I23) +IBM Cloud Pak for Automation SR1 19.0.3 (5737-I23) The following standard terms apply to Licensee's use of the Program. @@ -22,6 +22,10 @@ Prohibited Uses Licensee may not use or authorize others to use the Program if failure of the Program could lead to death, bodily injury, or property or environmental damage. +License Terms delivered with Program Not Applicable + +The terms of this Agreement supersede and void any electronic "click through," "shrinkwrap," or other licensing terms and conditions included with or accompanying the Program(s). + Multi-Product Install Image The Program is provided as part of a multi-product install image. Licensee is authorized to install and use only the Program (and its Bundled or Supporting Programs, if any) for which a valid entitlement is obtained and may not install or use any of the other software included in the image unless Licensee has acquired separate entitlements for that other software. 
@@ -36,7 +40,13 @@ IBM FileNet Content Manager IBM FileNet Content Manager for Non-Production Environment -IBM Datacap +IBM Datacap Processor Value Unit v9 + +IBM Datacap for Non-Production Environment Processor Value Unit v9 + +IBM Datacap Insight Edition Add-On Processor Value Unit v9 + +IBM Datacap Insight Edition Add-on for Non-Production Environment Processor Value Unit v9 IBM Content Collector for Email @@ -46,8 +56,6 @@ IBM Content collector for Microsoft SharePoint IBM Content Collector for SAP Applications -IBM Enterprise Records - IBM Business Automation Workflow Enterprise IBM Business Automation Workflow Enterprise for Non-Production Environment @@ -56,34 +64,24 @@ IBM Operational Decision Manager Server IBM Operational Decision Manager Server for Non-Production Environment +IBM Enterprise Records + Supporting Programs Licensee is authorized to install and use the Supporting Programs identified below. Licensee is authorized to install and use such Supporting Programs only to support Licensee's use of the Principal Program under this Agreement. The phrase "to support Licensee's use" would only include those uses that are necessary or otherwise directly related to a licensed use of the Principal Program or another Supporting Program. The Supporting Programs may not be used for any other purpose. A Supporting Program may be accompanied by license terms, and those terms, if any, apply to Licensee's use of that Supporting Program. In the event of conflict, the terms in this License Information document supersede the Supporting Program's terms. Licensee must obtain sufficient entitlements to the Program, as a whole, to cover Licensee's installation and use of all of the Supporting Programs, unless separate entitlements are provided within this License Information document. For example, if this Program were licensed on a PVU (Processor Value Unit) basis and Licensee were to install the Principal Program or a Supporting Program on a 100 PVU machine (physical or virtual) and another Supporting Program on a second 100 PVU machine, Licensee would be required to obtain 200 PVU entitlements to the Program. Supporting Programs: -IBM DB2 Advanced Workgroup Server Edition 11.1 +IBM DB2 Advanced Workgroup Server Edition 11.5 IBM WebSphere Liberty 19.0 +IBM WebSphere Application Server Network Deployment + Development Tool This Program is designed to aid in the development of software applications and systems. Licensee is solely responsible for the applications and systems that it develops by using this Program and assumes all risk and responsibility therefor. -Components Not Used for Establishing Required Entitlements - -When determining the number of entitlements required for Licensee's installation or use of the Program, the installation or use of the following Program components are not taken into consideration. In other words, Licensee may install and use the following Program components, under the license terms, but these components are not used to determine the number of entitlements required for the Program. - -IBM Business Automation Studio (Component of the Program) - -IBM Business Automation Navigator (Component of the Program) - -IBM Business Automation Application Designer (Component of the Program) - -IBM Business Automation Application Engine (Component of the Program) - -- Use Limitation: Non-Production - Separately Licensed Code The provisions of this paragraph do not apply to the extent they are held to be invalid or unenforceable under the law that governs this license. 
Each of the components listed below is considered "Separately Licensed Code". IBM Separately Licensed Code is licensed to Licensee under the terms of the applicable third party license agreement(s) set forth in the NON_IBM_LICENSE file(s) that accompanies the Program. Notwithstanding any of the terms in the Agreement, or any other agreement Licensee may have with IBM, the terms of such third party license agreement(s) governs Licensee's use of all Separately Licensed Code unless otherwise noted below. @@ -136,7 +134,7 @@ Red Hat Universal Base Image 7 Red Hat Universal Base Image 8 -Red Hat Openshift Container Platform 3.11 +Red Hat Openshift Container Platform 3.11 or later versions font-awesome icons 4.7 @@ -146,6 +144,26 @@ dbus 1.10 inotify-tools 3.14 +Red Hat Enterprise Linux 7 + +Red Hat Enterprise Linux 8 + +Erlang/OTP 21.3 + +poppler-utils 0.48 + +LibreOffice 6.3 + +OCRmyPDF 9.0 + +Debian GNU/Linux 8 + +Ubuntu 16 + +Alpine Linux 3 + +libonig2 5.9 + Privacy Licensee acknowledges and agrees that IBM may use cookie and tracking technologies to collect personal information in gathering product usage statistics and information designed to help improve user experience and/or to tailor interactions with users in accordance with the IBM Online Privacy Policy, available at http://www.ibm.com/privacy/. @@ -178,10 +196,6 @@ The Program may contain links to or be used to access third party data services, The following units of measure may apply to Licensee's use of the Program. -Establishment - -Establishment is a unit of measure by which the Program can be licensed. An Establishment is a single physical site, including the surrounding campus and satellite offices located within 50 kilometers, of Licensee's site address. Licensee must obtain an entitlement for each Establishment at or for which the Program will be used. Licensee is permitted to deploy an unlimited number of copies of the Program within the Establishment. An entitlement for an Establishment is unique to that Establishment and may not be shared, nor may it be reassigned other than for the permanent closing of the Establishment. - Virtual Processor Core Virtual Processor Core is a unit of measure by which the Program can be licensed. A Server is a physical computer that is comprised of processing units, memory, and input/output capabilities and that executes requested procedures, commands, or applications for one or more users or client devices. Where racks, blade enclosures, or other similar equipment is being employed, each separable physical device (for example, a blade or a rack-mounted device) that has the required components is considered itself a separate Server. A Virtual Server is either a virtual computer created by partitioning the resources available to a physical Server or an unpartitioned physical Server. A Processor Core is a functional unit within a computing device that interprets and executes instructions. A Processor Core consists of at least an instruction control unit and one or more arithmetic or logic unit. A Virtual Processor Core is a Processor Core on a Virtual Server created by partitioning the resources available to a physical Server or an unpartitioned physical Server. Licensee must obtain entitlement for each Virtual Processor Core made available to the Program. @@ -190,6 +204,28 @@ For each physical Server, Licensee must have sufficient entitlements for the les In addition to the above, the following terms apply to Licensee's use of the Program. 
+Permitted Components + +Notwithstanding any provision in the Agreement, Licensee is permitted to use only the following components or functions of the identified Supporting Program: + +- IBM WebSphere Application Server Network Deployment only for use in support of the following Bundled Programs: IBM FileNet Content Manager, IBM FileNet Content Manager for Non-Production Environment, IBM Datacap, IBM Enterprise Records, IBM Business Automation Workflow Enterprise, IBM Business Automation Workflow Enterprise for Non-Production Environment, IBM Operational Decision Manager Server, IBM Operational Decision Manager Server for Non-Production Environment. + +Components Not Used for Establishing Required Entitlements + +When determining the number of entitlements required for Licensee's installation or use of the Program, the installation or use of the following Program components are not taken into consideration. In other words, Licensee may install and use the following Program components, under the license terms, but these components are not used to determine the number of entitlements required for the Program. + +- IBM Business Automation Studio + +- IBM Business Automation Navigator + +- IBM Business Automation Application Designer + +- IBM Business Automation Application Engine when used in Non-Production + +- IBM Automation Digital Worker when used in Non-Production + +- IBM Business Automation Insights when used in Non-Production + Entitlement Conversion Details These Entitlement Conversion Details outline the entitlement conversion options. Licensee is entitled to the below entitlement conversion options in any deployment combination of Licensee's choosing and may choose to convert entitlements between the listed programs below at any time provided that the sum of Licensee's deployments do not exceed the total amount of Licensee's entitlements obtained for the Program. Licensee is not entitled to use entitlements obtained of the Program for any other purpose. 
@@ -202,821 +238,483 @@ Entitlement Values Business Automation Application Engine (Component of the Program) -- Entitlement Value: Ratio 1 VPC/ 1VPC +- Entitlement Value: Conversion 1 VPC/ 1VPC Business Automation Insights (Component of the Program) -- Entitlement Value: Ratio 1 VPC/ 1VPC +- Entitlement Value: Conversion 1 VPC/ 1VPC -Business Automation Insights (Component of the Program) +IBM Automation Digital Worker (Component of the Program) -- Entitlement Value: Ratio 2 VPC/ 1VPC - -- Use Limitation: Non-Production +- Entitlement Value: Conversion 1 VPC/ 1VPC IBM FileNet Content Manager -- Entitlement Value: Ratio 1 VPC/ 10VPCs +- Entitlement Value: Conversion 1 VPC/ 5VPCs IBM FileNet Content Manager for Non-Production Environment -- Entitlement Value: Ratio 2 VPCs/ 10VPCs +- Entitlement Value: Conversion 2 VPCs/ 5VPCs - Use Limitation: Non-Production IBM Business Automation Workflow Enterprise -- Entitlement Value: Ratio 1 VPC/ 5VPCs +- Entitlement Value: Conversion 1 VPC/ 5VPCs IBM Business Automation Workflow Enterprise for Non-Production Environment -- Entitlement Value: Ratio 2 VPCs/ 5VPCs +- Entitlement Value: Conversion 2 VPCs/ 5VPCs - Use Limitation: Non-Production +IBM Automation Workstream Services + +- Entitlement Value: Conversion 1 VPC/ 5VPCs + IBM Operational Decision Manager Server -- Entitlement Value: Ratio 1 VPC/ 5VPCs +- Entitlement Value: Conversion 1 VPC/ 5VPCs IBM Operational Decision Manager Server for Non-Production Environment -- Entitlement Value: Ratio 2 VPCs/ 5VPCs +- Entitlement Value: Conversion 2 VPCs/ 5VPCs - Use Limitation: Non-Production -Business Automation Navigator (Component of the Program) - -- Entitlement Value: Ratio 1 VPC/ 5VPC - Business Automation Content Analyzer (Component of the Program) -- Entitlement Value: Ratio 1 VPC/ 1VPC +- Entitlement Value: Conversion 1 VPC/ 1VPC Business Automation Content Analyzer (Component of the Program) -- Entitlement Value: Ratio 2 VPC/ 1VPC +- Entitlement Value: Conversion 2 VPC/ 1VPC - Use Limitation: Non-Production IBM Datacap Processor Value Unit -- Entitlement Value: Ratio 1 VPC/ 2VPC +- Entitlement Value: Conversion 1 VPC/ 2VPC -IBM Datacap Processor Value Unit for Non-Production +IBM Datacap for Non-Production Environment Processor Value Unit -- Entitlement Value: Ratio 1 VPC/ 1VPC +- Entitlement Value: Conversion 1 VPC/ 1VPC - Use Limitation: Non-Production -IBM Content Collector for Email, Files & Sharepoint +IBM Datacap Insight Edition Add-On Processor Value Unit -- Entitlement Value: Ratio 1 VPC/ 3VPC +- Entitlement Value: Conversion 1 VPC/ 2VPC -IBM Content Collector for Email, Files & Sharepoint for Non-Production +IBM Datacap Insight Edition Add-on for Non-Production Environment Processor Value Unit -- Entitlement Value: Ratio 2 VPC/ 3VPC +- Entitlement Value: Conversion 1 VPC/ 1VPC - Use Limitation: Non-Production -IBM Content Collector for SAP +IBM Content Collector for Email, Files & Sharepoint -- Entitlement Value: Ratio 1 VPC/ 3VPC +- Entitlement Value: Conversion 1 VPC/ 3VPC -IBM Content Collector for SAP for Non-Production +IBM Content Collector for Email, Files & Sharepoint for Non-Production -- Entitlement Value: Ratio 2 VPC/ 3VPC +- Entitlement Value: Conversion 2 VPC/ 3VPC - Use Limitation: Non-Production -IBM Enterprise Records +IBM Content Collector for SAP -- Entitlement Value: Ratio 1VPC/ 3VPC +- Entitlement Value: Conversion 1 VPC/ 3VPC -IBM Enterprise Records +IBM Content Collector for SAP for Non-Production -- Entitlement Value: Ratio 2VPC/ 3VPC +- Entitlement Value: 
Conversion 2 VPC/ 3VPC - Use Limitation: Non-Production -"Ratio n/m" means that for the Bundled Program Licensee elects to allocate Licensee's entitlement to the Program, the entitlement for such Bundled Program is the number ('n') entitlements of the VPCs for the Bundled Program for every specified number ('m') entitlements of the VPCs for the Program as a whole. +Conversion n/m" means that Licensee can convert some number ('n') entitlements of the indicated metric for the Bundled Program for every specified number ('m') entitlements of the specified metric for the Program. The specified conversion does not apply to any entitlements for the Program that are not of the required metric type. For example, if the conversion ratio is 100 entitlements of a Bundled Program for every 500 entitlements obtained of the Program and Licensee acquires 1,500 entitlements of the Program, Licensee may convert those 1,500 entitlements into 300 entitlements of the Bundled Program, allowing the Licensee to use the Bundled Program up to the 300 entitlements. "Non-Production" means that the Bundled Program can only be deployed as part of Licensee's internal development and test environment for internal non-production activities, including but not limited to testing, performance tuning, fault diagnosis, internal benchmarking, staging, quality assurance activity and/or developing internally used additions or extensions to the Program using published application programming interfaces. Licensee is not authorized to use any part of the Bundled Program for any other purposes without acquiring the appropriate production entitlements. -L/N: L-ASAY-BEEFUW - -D/N: L-ASAY-BEEFUW - -P/N: L-ASAY-BEEFUW - - -Back to top - -International Program License Agreement - -Part 1 - General Terms - -BY DOWNLOADING, INSTALLING, COPYING, ACCESSING, CLICKING ON AN "ACCEPT" BUTTON, OR OTHERWISE USING THE PROGRAM, LICENSEE AGREES TO THE TERMS OF THIS AGREEMENT. IF YOU ARE ACCEPTING THESE TERMS ON BEHALF OF LICENSEE, YOU REPRESENT AND WARRANT THAT YOU HAVE FULL AUTHORITY TO BIND LICENSEE TO THESE TERMS. IF YOU DO NOT AGREE TO THESE TERMS, - -* DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, CLICK ON AN "ACCEPT" BUTTON, OR USE THE PROGRAM; AND - -* PROMPTLY RETURN THE UNUSED MEDIA, DOCUMENTATION, AND PROOF OF ENTITLEMENT TO THE PARTY FROM WHOM IT WAS OBTAINED FOR A REFUND OF THE AMOUNT PAID. IF THE PROGRAM WAS DOWNLOADED, DESTROY ALL COPIES OF THE PROGRAM. - -1. Definitions - -"Authorized Use" - the specified level at which Licensee is authorized to execute or run the Program. That level may be measured by number of users, millions of service units ("MSUs"), Processor Value Units ("PVUs"), or other level of use specified by IBM. - -"IBM" - International Business Machines Corporation or one of its subsidiaries. - -"License Information" ("LI") - a document that provides information and any additional terms specific to a Program. The Program's LI is available at www.ibm.com/software/sla. The LI can also be found in the Program's directory, by the use of a system command, or as a booklet included with the Program. - -"Program" - the following, including the original and all whole or partial copies: 1) machine-readable instructions and data, 2) components, files, and modules, 3) audio-visual content (such as images, text, recordings, or pictures), and 4) related licensed materials (such as keys and documentation). - -"Proof of Entitlement" ("PoE") - evidence of Licensee's Authorized Use. 
The PoE is also evidence of Licensee's eligibility for warranty, future update prices, if any, and potential special or promotional opportunities. If IBM does not provide Licensee with a PoE, then IBM may accept as the PoE the original paid sales receipt or other sales record from the party (either IBM or its reseller) from whom Licensee obtained the Program, provided that it specifies the Program name and Authorized Use obtained. - -"Warranty Period" - one year, starting on the date the original Licensee is granted the license. - -2. Agreement Structure - -This Agreement includes Part 1 - General Terms, Part 2 - Country-unique Terms (if any), the LI, and the PoE and is the complete agreement between Licensee and IBM regarding the use of the Program. It replaces any prior oral or written communications between Licensee and IBM concerning Licensee's use of the Program. The terms of Part 2 may replace or modify those of Part 1. To the extent of any conflict, the LI prevails over both Parts. - -3. License Grant - -The Program is owned by IBM or an IBM supplier, and is copyrighted and licensed, not sold. - -IBM grants Licensee a nonexclusive license to 1) use the Program up to the Authorized Use specified in the PoE, 2) make and install copies to support such Authorized Use, and 3) make a backup copy, all provided that - -a. Licensee has lawfully obtained the Program and complies with the terms of this Agreement; - -b. the backup copy does not execute unless the backed-up Program cannot execute; - -c. Licensee reproduces all copyright notices and other legends of ownership on each copy, or partial copy, of the Program; - -d. Licensee ensures that anyone who uses the Program (accessed either locally or remotely) 1) does so only on Licensee's behalf and 2) complies with the terms of this Agreement; - -e. Licensee does not 1) use, copy, modify, or distribute the Program except as expressly permitted in this Agreement; 2) reverse assemble, reverse compile, otherwise translate, or reverse engineer the Program, except as expressly permitted by law without the possibility of contractual waiver; 3) use any of the Program's components, files, modules, audio-visual content, or related licensed materials separately from that Program; or 4) sublicense, rent, or lease the Program; and - -f. if Licensee obtains this Program as a Supporting Program, Licensee uses this Program only to support the Principal Program and subject to any limitations in the license to the Principal Program, or, if Licensee obtains this Program as a Principal Program, Licensee uses all Supporting Programs only to support this Program, and subject to any limitations in this Agreement. For purposes of this Item "f," a "Supporting Program" is a Program that is part of another IBM Program ("Principal Program") and identified as a Supporting Program in the Principal Program's LI. (To obtain a separate license to a Supporting Program without these restrictions, Licensee should contact the party from whom Licensee obtained the Supporting Program.) - -This license applies to each copy of the Program that Licensee makes. - -3.1 Trade-ups, Updates, Fixes, and Patches - -3.1.1 Trade-ups - -If the Program is replaced by a trade-up Program, the replaced Program's license is promptly terminated. - -3.1.2 Updates, Fixes, and Patches - -When Licensee receives an update, fix, or patch to a Program, Licensee accepts any additional or different terms that are applicable to such update, fix, or patch that are specified in its LI. 
If no additional or different terms are provided, then the update, fix, or patch is subject solely to this Agreement. If the Program is replaced by an update, Licensee agrees to promptly discontinue use of the replaced Program. - -3.2 Fixed Term Licenses - -If IBM licenses the Program for a fixed term, Licensee's license is terminated at the end of the fixed term, unless Licensee and IBM agree to renew it. - -3.3 Term and Termination - -This Agreement is effective until terminated. - -IBM may terminate Licensee's license if Licensee fails to comply with the terms of this Agreement. - -If the license is terminated for any reason by either party, Licensee agrees to promptly discontinue use of and destroy all of Licensee's copies of the Program. Any terms of this Agreement that by their nature extend beyond termination of this Agreement remain in effect until fulfilled, and apply to both parties' respective successors and assignees. - -4. Charges - -Charges are based on Authorized Use obtained, which is specified in the PoE. IBM does not give credits or refunds for charges already due or paid, except as specified elsewhere in this Agreement. - -If Licensee wishes to increase its Authorized Use, Licensee must notify IBM or an authorized IBM reseller in advance and pay any applicable charges. - -5. Taxes - -If any authority imposes on the Program a duty, tax, levy, or fee, excluding those based on IBM's net income, then Licensee agrees to pay that amount, as specified in an invoice, or supply exemption documentation. Licensee is responsible for any personal property taxes for the Program from the date that Licensee obtains it. If any authority imposes a customs duty, tax, levy, or fee for the import into or the export, transfer, access, or use of the Program outside the country in which the original Licensee was granted the license, then Licensee agrees that it is responsible for, and will pay, any amount imposed. - -6. Money-back Guarantee - -If Licensee is dissatisfied with the Program for any reason and is the original Licensee, Licensee may terminate the license and obtain a refund of the amount Licensee paid for the Program, provided that Licensee returns the Program and PoE to the party from whom Licensee obtained it within 30 days of the date the PoE was issued to Licensee. If the license is for a fixed term that is subject to renewal, then Licensee may obtain a refund only if the Program and its PoE are returned within the first 30 days of the initial term. If Licensee downloaded the Program, Licensee should contact the party from whom Licensee obtained it for instructions on how to obtain the refund. - -7. Program Transfer - -Licensee may transfer the Program and all of Licensee's license rights and obligations to another party only if that party agrees to the terms of this Agreement. If the license is terminated for any reason by either party, Licensee is prohibited from transferring the Program to another party. Licensee may not transfer a portion of 1) the Program or 2) the Program's Authorized Use. When Licensee transfers the Program, Licensee must also transfer a hard copy of this Agreement, including the LI and PoE. Immediately after the transfer, Licensee's license terminates. - -8. Warranty and Exclusions - -8.1 Limited Warranty - -IBM warrants that the Program, when used in its specified operating environment, will conform to its specifications. 
The Program's specifications, and specified operating environment information, can be found in documentation accompanying the Program (such as a read-me file) or other information published by IBM (such as an announcement letter). Licensee agrees that such documentation and other Program content may be supplied only in the English language, unless otherwise required by local law without the possibility of contractual waiver or limitation. - -The warranty applies only to the unmodified portion of the Program. IBM does not warrant uninterrupted or error-free operation of the Program, or that IBM will correct all Program defects. Licensee is responsible for the results obtained from the use of the Program. - -During the Warranty Period, IBM provides Licensee with access to IBM databases containing information on known Program defects, defect corrections, restrictions, and bypasses at no additional charge. Consult the IBM Software Support Handbook for further information at www.ibm.com/software/support. - -If the Program does not function as warranted during the Warranty Period and the problem cannot be resolved with information available in the IBM databases, Licensee may return the Program and its PoE to the party (either IBM or its reseller) from whom Licensee obtained it and receive a refund of the amount Licensee paid. After returning the Program, Licensee's license terminates. If Licensee downloaded the Program, Licensee should contact the party from whom Licensee obtained it for instructions on how to obtain the refund. - -8.2 Exclusions - -THESE WARRANTIES ARE LICENSEE'S EXCLUSIVE WARRANTIES AND REPLACE ALL OTHER WARRANTIES OR CONDITIONS, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, ANY IMPLIED WARRANTIES OR CONDITIONS OF MERCHANTABILITY, SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT. SOME STATES OR JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF EXPRESS OR IMPLIED WARRANTIES, SO THE ABOVE EXCLUSION MAY NOT APPLY TO LICENSEE. IN THAT EVENT, SUCH WARRANTIES ARE LIMITED IN DURATION TO THE WARRANTY PERIOD. NO WARRANTIES APPLY AFTER THAT PERIOD. SOME STATES OR JURISDICTIONS DO NOT ALLOW LIMITATIONS ON HOW LONG AN IMPLIED WARRANTY LASTS, SO THE ABOVE LIMITATION MAY NOT APPLY TO LICENSEE. - -THESE WARRANTIES GIVE LICENSEE SPECIFIC LEGAL RIGHTS. LICENSEE MAY ALSO HAVE OTHER RIGHTS THAT VARY FROM STATE TO STATE OR JURISDICTION TO JURISDICTION. - -THE WARRANTIES IN THIS SECTION 8 (WARRANTY AND EXCLUSIONS) ARE PROVIDED SOLELY BY IBM. THE DISCLAIMERS IN THIS SUBSECTION 8.2 (EXCLUSIONS), HOWEVER, ALSO APPLY TO IBM'S SUPPLIERS OF THIRD PARTY CODE. THOSE SUPPLIERS PROVIDE SUCH CODE WITHOUT WARRANTIES OR CONDITION OF ANY KIND. THIS PARAGRAPH DOES NOT NULLIFY IBM'S WARRANTY OBLIGATIONS UNDER THIS AGREEMENT. - -9. Licensee Data and Databases - -To assist Licensee in isolating the cause of a problem with the Program, IBM may request that Licensee 1) allow IBM to remotely access Licensee's system or 2) send Licensee information or system data to IBM. However, IBM is not obligated to provide such assistance unless IBM and Licensee enter a separate written agreement under which IBM agrees to provide to Licensee that type of support, which is beyond IBM's warranty obligations in this Agreement. In any event, IBM uses information about errors and problems to improve its products and services, and assist with its provision of related support offerings. 
For these purposes, IBM may use IBM entities and subcontractors (including in one or more countries other than the one in which Licensee is located), and Licensee authorizes IBM to do so. - -Licensee remains responsible for 1) any data and the content of any database Licensee makes available to IBM, 2) the selection and implementation of procedures and controls regarding access, security, encryption, use, and transmission of data (including any personally-identifiable data), and 3) backup and recovery of any database and any stored data. Licensee will not send or provide IBM access to any personally-identifiable information, whether in data or any other form, and will be responsible for reasonable costs and other amounts that IBM may incur relating to any such information mistakenly provided to IBM or the loss or disclosure of such information by IBM, including those arising out of any third party claims. - -10. Limitation of Liability - -The limitations and exclusions in this Section 10 (Limitation of Liability) apply to the full extent they are not prohibited by applicable law without the possibility of contractual waiver. - -10.1 Items for Which IBM May Be Liable - -Circumstances may arise where, because of a default on IBM's part or other liability, Licensee is entitled to recover damages from IBM. Regardless of the basis on which Licensee is entitled to claim damages from IBM (including fundamental breach, negligence, misrepresentation, or other contract or tort claim), IBM's entire liability for all claims in the aggregate arising from or related to each Program or otherwise arising under this Agreement will not exceed the amount of any 1) damages for bodily injury (including death) and damage to real property and tangible personal property and 2) other actual direct damages up to the charges (if the Program is subject to fixed term charges, up to twelve months' charges) Licensee paid for the Program that is the subject of the claim. - -This limit also applies to any of IBM's Program developers and suppliers. It is the maximum for which IBM and its Program developers and suppliers are collectively responsible. - -10.2 Items for Which IBM Is Not Liable - -UNDER NO CIRCUMSTANCES IS IBM, ITS PROGRAM DEVELOPERS OR SUPPLIERS LIABLE FOR ANY OF THE FOLLOWING, EVEN IF INFORMED OF THEIR POSSIBILITY: - -a. LOSS OF, OR DAMAGE TO, DATA; - -b. SPECIAL, INCIDENTAL, EXEMPLARY, OR INDIRECT DAMAGES, OR FOR ANY ECONOMIC CONSEQUENTIAL DAMAGES; OR - -c. LOST PROFITS, BUSINESS, REVENUE, GOODWILL, OR ANTICIPATED SAVINGS. - -11. Compliance Verification - -For purposes of this Section 11 (Compliance Verification), "IPLA Program Terms" means 1) this Agreement and applicable amendments and transaction documents provided by IBM, and 2) IBM software policies that may be found at the IBM Software Policy website (www.ibm.com/softwarepolicies), including but not limited to those policies concerning backup, sub-capacity pricing, and migration. - -The rights and obligations set forth in this Section 11 remain in effect during the period the Program is licensed to Licensee, and for two years thereafter. - -11.1 Verification Process - -Licensee agrees to create, retain, and provide to IBM and its auditors accurate written records, system tool outputs, and other system information sufficient to provide auditable verification that Licensee's use of all Programs is in compliance with the IPLA Program Terms, including, without limitation, all of IBM's applicable licensing and pricing qualification terms. 
Licensee is responsible for 1) ensuring that it does not exceed its Authorized Use, and 2) remaining in compliance with IPLA Program Terms. - -Upon reasonable notice, IBM may verify Licensee's compliance with IPLA Program Terms at all sites and for all environments in which Licensee uses (for any purpose) Programs subject to IPLA Program Terms. Such verification will be conducted in a manner that minimizes disruption to Licensee's business, and may be conducted on Licensee's premises, during normal business hours. IBM may use an independent auditor to assist with such verification, provided IBM has a written confidentiality agreement in place with such auditor. - -11.2 Resolution - -IBM will notify Licensee in writing if any such verification indicates that Licensee has used any Program in excess of its Authorized Use or is otherwise not in compliance with the IPLA Program Terms. Licensee agrees to promptly pay directly to IBM the charges that IBM specifies in an invoice for 1) any such excess use, 2) support for such excess use for the lesser of the duration of such excess use or two years, and 3) any additional charges and other liabilities determined as a result of such verification. - -12. Third Party Notices - -The Program may include third party code that IBM, not the third party, licenses to Licensee under this Agreement. Notices, if any, for the third party code ("Third Party Notices") are included for Licensee's information only. These notices can be found in the Program's NOTICES file(s). Information on how to obtain source code for certain third party code can be found in the Third Party Notices. If in the Third Party Notices IBM identifies third party code as "Modifiable Third Party Code," IBM authorizes Licensee to 1) modify the Modifiable Third Party Code and 2) reverse engineer the Program modules that directly interface with the Modifiable Third Party Code provided that it is only for the purpose of debugging Licensee's modifications to such third party code. IBM's service and support obligations, if any, apply only to the unmodified Program. - -13. General - -a. Nothing in this Agreement affects any statutory rights of consumers that cannot be waived or limited by contract. - -b. For Programs IBM provides to Licensee in tangible form, IBM fulfills its shipping and delivery obligations upon the delivery of such Programs to the IBM-designated carrier, unless otherwise agreed to in writing by Licensee and IBM. - -c. If any provision of this Agreement is held to be invalid or unenforceable, the remaining provisions of this Agreement remain in full force and effect. - -d. Licensee agrees to comply with all applicable export and import laws and regulations, including U.S. embargo and sanctions regulations and prohibitions on export for certain end uses or to certain users. - -e. Licensee authorizes International Business Machines Corporation and its subsidiaries (and their successors and assigns, contractors and IBM Business Partners) to store and use Licensee's business contact information wherever they do business, in connection with IBM products and services, or in furtherance of IBM's business relationship with Licensee. - -f. Each party will allow the other reasonable opportunity to comply before it claims that the other has not met its obligations under this Agreement. The parties will attempt in good faith to resolve all disputes, disagreements, or claims between the parties relating to this Agreement. - -g. 
Unless otherwise required by applicable law without the possibility of contractual waiver or limitation: 1) neither party will bring a legal action, regardless of form, for any claim arising out of or related to this Agreement more than two years after the cause of action arose; and 2) upon the expiration of such time limit, any such claim and all respective rights related to the claim lapse. - -h. Neither Licensee nor IBM is responsible for failure to fulfill any obligations due to causes beyond its control. - -i. No right or cause of action for any third party is created by this Agreement, nor is IBM responsible for any third party claims against Licensee, except as permitted in Subsection 10.1 (Items for Which IBM May Be Liable) above for bodily injury (including death) or damage to real or tangible personal property for which IBM is legally liable to that third party. - -j. In entering into this Agreement, neither party is relying on any representation not specified in this Agreement, including but not limited to any representation concerning: 1) the performance or function of the Program, other than as expressly warranted in Section 8 (Warranty and Exclusions) above; 2) the experiences or recommendations of other parties; or 3) any results or savings that Licensee may achieve. - -k. IBM has signed agreements with certain organizations (called "IBM Business Partners") to promote, market, and support certain Programs. IBM Business Partners remain independent and separate from IBM. IBM is not responsible for the actions or statements of IBM Business Partners or obligations they have to Licensee. - -l. The license and intellectual property indemnification terms of Licensee's other agreements with IBM (such as the IBM Customer Agreement) do not apply to Program licenses granted under this Agreement. - -14. Geographic Scope and Governing Law - -14.1 Governing Law - -Both parties agree to the application of the laws of the country in which Licensee obtained the Program license to govern, interpret, and enforce all of Licensee's and IBM's respective rights, duties, and obligations arising from, or relating in any manner to, the subject matter of this Agreement, without regard to conflict of law principles. - -The United Nations Convention on Contracts for the International Sale of Goods does not apply. - -14.2 Jurisdiction - -All rights, duties, and obligations are subject to the courts of the country in which Licensee obtained the Program license. - -Part 2 - Country-unique Terms - -For licenses granted in the countries specified below, the following terms replace or modify the referenced terms in Part 1. All terms in Part 1 that are not changed by these amendments remain unchanged and in effect. This Part 2 is organized as follows: - -* Multiple country amendments to Part 1, Section 14 (Governing Law and Jurisdiction); - -* Americas country amendments to other Agreement terms; - -* Asia Pacific country amendments to other Agreement terms; and - -* Europe, Middle East, and Africa country amendments to other Agreement terms. 
- -Multiple country amendments to Part 1, Section 14 (Governing Law and Jurisdiction) - -14.1 Governing Law - -The phrase "the laws of the country in which Licensee obtained the Program license" in the first paragraph of 14.1 Governing Law is replaced by the following phrases in the countries below: - -AMERICAS - -(1) In Canada: the laws in the Province of Ontario; - -(2) in Mexico: the federal laws of the Republic of Mexico; - -(3) in the United States, Anguilla, Antigua/Barbuda, Aruba, British Virgin Islands, Cayman Islands, Dominica, Grenada, Guyana, Saint Kitts and Nevis, Saint Lucia, Saint Maarten, and Saint Vincent and the Grenadines: the laws of the State of New York, United States; - -(4) in Venezuela: the laws of the Bolivarian Republic of Venezuela; - -ASIA PACIFIC - -(5) in Cambodia and Laos: the laws of the State of New York, United States; - -(6) in Australia: the laws of the State or Territory in which the transaction is performed; - -(7) in Hong Kong SAR and Macau SAR: the laws of Hong Kong Special Administrative Region ("SAR"); - -(8) in Taiwan: the laws of Taiwan; - -EUROPE, MIDDLE EAST, AND AFRICA - -(9) in Albania, Armenia, Azerbaijan, Belarus, Bosnia-Herzegovina, Bulgaria, Croatia, Former Yugoslav Republic of Macedonia, Georgia, Hungary, Kazakhstan, Kyrgyzstan, Moldova, Montenegro, Poland, Romania, Russia, Serbia, Slovakia, Tajikistan, Turkmenistan, Ukraine, and Uzbekistan: the laws of Austria; - -(10) in Algeria, Andorra, Benin, Burkina Faso, Cameroon, Cape Verde, Central African Republic, Chad, Comoros, Congo Republic, Djibouti, Democratic Republic of Congo, Equatorial Guinea, French Guiana, French Polynesia, Gabon, Gambia, Guinea, Guinea-Bissau, Ivory Coast, Lebanon, Madagascar, Mali, Mauritania, Mauritius, Mayotte, Morocco, New Caledonia, Niger, Reunion, Senegal, Seychelles, Togo, Tunisia, Vanuatu, and Wallis and Futuna: the laws of France; - -(11) in Estonia, Latvia, and Lithuania: the laws of Finland; - -(12) in Angola, Bahrain, Botswana, Burundi, Egypt, Eritrea, Ethiopia, Ghana, Jordan, Kenya, Kuwait, Liberia, Malawi, Malta, Mozambique, Nigeria, Oman, Pakistan, Qatar, Rwanda, Sao Tome and Principe, Saudi Arabia, Sierra Leone, Somalia, Tanzania, Uganda, United Arab Emirates, the United Kingdom, West Bank/Gaza, Yemen, Zambia, and Zimbabwe: the laws of England; and - -(13) in South Africa, Namibia, Lesotho, and Swaziland: the laws of the Republic of South Africa. 
- -14.2 Jurisdiction - -The following paragraph pertains to jurisdiction and replaces Subsection 14.2 (Jurisdiction) as it applies for those countries identified below: - -All rights, duties, and obligations are subject to the courts of the country in which Licensee obtained the Program license except that in the countries identified below all disputes arising out of or related to this Agreement, including summary proceedings, will be brought before and subject to the exclusive jurisdiction of the following courts of competent jurisdiction: - -AMERICAS - -(1) In Argentina: the Ordinary Commercial Court of the city of Buenos Aires; - -(2) in Brazil: the court of Rio de Janeiro, RJ; - -(3) in Chile: the Civil Courts of Justice of Santiago; - -(4) in Ecuador: the civil judges of Quito for executory or summary proceedings (as applicable); - -(5) in Mexico: the courts located in Mexico City, Federal District; - -(6) in Peru: the judges and tribunals of the judicial district of Lima, Cercado; - -(7) in Uruguay: the courts of the city of Montevideo; - -(8) in Venezuela: the courts of the metropolitan area of the city of Caracas; - -EUROPE, MIDDLE EAST, AND AFRICA - -(9) in Austria: the court of law in Vienna, Austria (Inner-City); - -(10) in Algeria, Andorra, Benin, Burkina Faso, Cameroon, Cape Verde, Central African Republic, Chad, Comoros, Congo Republic, Djibouti, Democratic Republic of Congo, Equatorial Guinea, France, French Guiana, French Polynesia, Gabon, Gambia, Guinea, Guinea-Bissau, Ivory Coast, Lebanon, Madagascar, Mali, Mauritania, Mauritius, Mayotte, Monaco, Morocco, New Caledonia, Niger, Reunion, Senegal, Seychelles, Togo, Tunisia, Vanuatu, and Wallis and Futuna: the Commercial Court of Paris; - -(11) in Angola, Bahrain, Botswana, Burundi, Egypt, Eritrea, Ethiopia, Ghana, Jordan, Kenya, Kuwait, Liberia, Malawi, Malta, Mozambique, Nigeria, Oman, Pakistan, Qatar, Rwanda, Sao Tome and Principe, Saudi Arabia, Sierra Leone, Somalia, Tanzania, Uganda, United Arab Emirates, the United Kingdom, West Bank/Gaza, Yemen, Zambia, and Zimbabwe: the English courts; - -(12) in South Africa, Namibia, Lesotho, and Swaziland: the High Court in Johannesburg; - -(13) in Greece: the competent court of Athens; - -(14) in Israel: the courts of Tel Aviv-Jaffa; - -(15) in Italy: the courts of Milan; - -(16) in Portugal: the courts of Lisbon; - -(17) in Spain: the courts of Madrid; and - -(18) in Turkey: the Istanbul Central Courts and Execution Directorates of Istanbul, the Republic of Turkey. - -14.3 Arbitration - -The following paragraph is added as a new Subsection 14.3 (Arbitration) as it applies for those countries identified below. The provisions of this Subsection 14.3 prevail over those of Subsection 14.2 (Jurisdiction) to the extent permitted by the applicable governing law and rules of procedure: - -ASIA PACIFIC - -(1) In Cambodia, India, Laos, Philippines, and Vietnam: - -Disputes arising out of or in connection with this Agreement will be finally settled by arbitration which will be held in Singapore in accordance with the Arbitration Rules of Singapore International Arbitration Center ("SIAC Rules") then in effect. The arbitration award will be final and binding for the parties without appeal and will be in writing and set forth the findings of fact and the conclusions of law. - -The number of arbitrators will be three, with each side to the dispute being entitled to appoint one arbitrator. 
The two arbitrators appointed by the parties will appoint a third arbitrator who will act as chairman of the proceedings. Vacancies in the post of chairman will be filled by the president of the SIAC. Other vacancies will be filled by the respective nominating party. Proceedings will continue from the stage they were at when the vacancy occurred. - -If one of the parties refuses or otherwise fails to appoint an arbitrator within 30 days of the date the other party appoints its, the first appointed arbitrator will be the sole arbitrator, provided that the arbitrator was validly and properly appointed. - -All proceedings will be conducted, including all documents presented in such proceedings, in the English language. The English language version of this Agreement prevails over any other language version. - -(2) In the People's Republic of China: - -In case no settlement can be reached, the disputes will be submitted to China International Economic and Trade Arbitration Commission for arbitration according to the then effective rules of the said Arbitration Commission. The arbitration will take place in Beijing and be conducted in Chinese. The arbitration award will be final and binding on both parties. During the course of arbitration, this agreement will continue to be performed except for the part which the parties are disputing and which is undergoing arbitration. - -(3) In Indonesia: - -Each party will allow the other reasonable opportunity to comply before it claims that the other has not met its obligations under this Agreement. The parties will attempt in good faith to resolve all disputes, disagreements, or claims between the parties relating to this Agreement. Unless otherwise required by applicable law without the possibility of contractual waiver or limitation, i) neither party will bring a legal action, regardless of form, arising out of or related to this Agreement or any transaction under it more than two years after the cause of action arose; and ii) after such time limit, any legal action arising out of this Agreement or any transaction under it and all respective rights related to any such action lapse. - -Disputes arising out of or in connection with this Agreement shall be finally settled by arbitration that shall be held in Jakarta, Indonesia in accordance with the rules of Board of the Indonesian National Board of Arbitration (Badan Arbitrase Nasional Indonesia or "BANI") then in effect. The arbitration award shall be final and binding for the parties without appeal and shall be in writing and set forth the findings of fact and the conclusions of law. - -The number of arbitrators shall be three, with each side to the dispute being entitled to appoint one arbitrator. The two arbitrators appointed by the parties shall appoint a third arbitrator who shall act as chairman of the proceedings. Vacancies in the post of chairman shall be filled by the chairman of the BANI. Other vacancies shall be filled by the respective nominating party. Proceedings shall continue from the stage they were at when the vacancy occurred. - -If one of the parties refuses or otherwise fails to appoint an arbitrator within 30 days of the date the other party appoints its, the first appointed arbitrator shall be the sole arbitrator, provided that the arbitrator was validly and properly appointed. - -All proceedings shall be conducted, including all documents presented in such proceedings, in the English and/or Indonesian language. 
- -EUROPE, MIDDLE EAST, AND AFRICA - -(4) In Albania, Armenia, Azerbaijan, Belarus, Bosnia-Herzegovina, Bulgaria, Croatia, Former Yugoslav Republic of Macedonia, Georgia, Hungary, Kazakhstan, Kyrgyzstan, Moldova, Montenegro, Poland, Romania, Russia, Serbia, Slovakia, Tajikistan, Turkmenistan, Ukraine, and Uzbekistan: - -All disputes arising out of this Agreement or related to its violation, termination or nullity will be finally settled under the Rules of Arbitration and Conciliation of the International Arbitral Center of the Federal Economic Chamber in Vienna (Vienna Rules) by three arbitrators appointed in accordance with these rules. The arbitration will be held in Vienna, Austria, and the official language of the proceedings will be English. The decision of the arbitrators will be final and binding upon both parties. Therefore, pursuant to paragraph 598 (2) of the Austrian Code of Civil Procedure, the parties expressly waive the application of paragraph 595 (1) figure 7 of the Code. IBM may, however, institute proceedings in a competent court in the country of installation. - -(5) In Estonia, Latvia, and Lithuania: - -All disputes arising in connection with this Agreement will be finally settled in arbitration that will be held in Helsinki, Finland in accordance with the arbitration laws of Finland then in effect. Each party will appoint one arbitrator. The arbitrators will then jointly appoint the chairman. If arbitrators cannot agree on the chairman, then the Central Chamber of Commerce in Helsinki will appoint the chairman. - -AMERICAS COUNTRY AMENDMENTS - -CANADA - -10.1 Items for Which IBM May be Liable - -The following replaces Item 1 in the first paragraph of this Subsection 10.1 (Items for Which IBM May be Liable): - -1) damages for bodily injury (including death) and physical harm to real property and tangible personal property caused by IBM's negligence; and - -13. General - -The following replaces Item 13.d: - -d. Licensee agrees to comply with all applicable export and import laws and regulations, including those of that apply to goods of United States origin and that prohibit or limit export for certain uses or to certain users. - -The following replaces Item 13.i: - -i. No right or cause of action for any third party is created by this Agreement or any transaction under it, nor is IBM responsible for any third party claims against Licensee except as permitted by the Limitation of Liability section above for bodily injury (including death) or physical harm to real or tangible personal property caused by IBM's negligence for which IBM is legally liable to that third party. - -The following is added as Item 13.m: - -m. For purposes of this Item 13.m, "Personal Data" refers to information relating to an identified or identifiable individual made available by one of the parties, its personnel or any other individual to the other in connection with this Agreement. The following provisions apply in the event that one party makes Personal Data available to the other: - -(1) General - -(a) Each party is responsible for complying with any obligations applying to it under applicable Canadian data privacy laws and regulations ("Laws"). - -(b) Neither party will request Personal Data beyond what is necessary to fulfill the purpose(s) for which it is requested. The purpose(s) for requesting Personal Data must be reasonable. Each party will agree in advance as to the type of Personal Data that is required to be made available. 
- -(2) Security Safeguards - -(a) Each party acknowledges that it is solely responsible for determining and communicating to the other the appropriate technological, physical and organizational security measures required to protect Personal Data. - -(b) Each party will ensure that Personal Data is protected in accordance with the security safeguards communicated and agreed to by the other. - -(c) Each party will ensure that any third party to whom Personal Data is transferred is bound by the applicable terms of this section. - -(d) Additional or different services required to comply with the Laws will be deemed a request for new services. - -(3) Use - -Each party agrees that Personal Data will only be used, accessed, managed, transferred, disclosed to third parties or otherwise processed to fulfill the purpose(s) for which it was made available. - -(4) Access Requests - -(a) Each party agrees to reasonably cooperate with the other in connection with requests to access or amend Personal Data. - -(b) Each party agrees to reimburse the other for any reasonable charges incurred in providing each other assistance. - -(c) Each party agrees to amend Personal Data only upon receiving instructions to do so from the other party or its personnel. - -(5) Retention - -Each party will promptly return to the other or destroy all Personal Data that is no longer necessary to fulfill the purpose(s) for which it was made available, unless otherwise instructed by the other or its personnel or required by law. - -(6) Public Bodies Who Are Subject to Public Sector Privacy Legislation - -For Licensees who are public bodies subject to public sector privacy legislation, this Item 13.m applies only to Personal Data made available to Licensee in connection with this Agreement, and the obligations in this section apply only to Licensee, except that: 1) section (2)(a) applies only to IBM; 2) sections (1)(a) and (4)(a) apply to both parties; and 3) section (4)(b) and the last sentence in (1)(b) do not apply. - -PERU - -10. Limitation of Liability - -The following is added to the end of this Section 10 (Limitation of Liability): - -Except as expressly required by law without the possibility of contractual waiver, Licensee and IBM intend that the limitation of liability in this Limitation of Liability section applies to damages caused by all types of claims and causes of action. If any limitation on or exclusion from liability in this section is held by a court of competent jurisdiction to be unenforceable with respect to a particular claim or cause of action, the parties intend that it nonetheless apply to the maximum extent permitted by applicable law to all other claims and causes of action. - -10.1 Items for Which IBM May be Liable - -The following is added at the end of this Subsection 10.1: - -In accordance with Article 1328 of the Peruvian Civil Code, the limitations and exclusions specified in this section will not apply to damages caused by IBM's willful misconduct ("dolo") or gross negligence ("culpa inexcusable"). - -UNITED STATES OF AMERICA - -5. Taxes - -The following is added at the end of this Section 5 (Taxes) - -For Programs delivered electronically in the United States for which Licensee claims a state sales and use tax exemption, Licensee agrees not to receive any tangible personal property (e.g., media and publications) associated with the electronic program. 
- -Licensee agrees to be responsible for any sales and use tax liabilities that may arise as a result of Licensee's subsequent redistribution of Programs after delivery by IBM. - -13. General - -The following is added to Section 13 as Item 13.m: - -U.S. Government Users Restricted Rights - Use, duplication or disclosure is restricted by the GSA IT Schedule 70 Contract with the IBM Corporation. - -The following is added to Item 13.f: - -Each party waives any right to a jury trial in any proceeding arising out of or related to this Agreement. - -ASIA PACIFIC COUNTRY AMENDMENTS - -AUSTRALIA - -5. Taxes - -The following sentences replace the first two sentences of Section 5 (Taxes): - -If any government or authority imposes a duty, tax (other than income tax), levy, or fee, on this Agreement or on the Program itself, that is not otherwise provided for in the amount payable, Licensee agrees to pay it when IBM invoices Licensee. If the rate of GST changes, IBM may adjust the charge or other amount payable to take into account that change from the date the change becomes effective. - -8.1 Limited Warranty - -The following is added to Subsection 8.1 (Limited Warranty): - -The warranties specified this Section are in addition to any rights Licensee may have under the Competition and Consumer Act 2010 or other legislation and are only limited to the extent permitted by the applicable legislation. - -10.1 Items for Which IBM May be Liable - -The following is added to Subsection 10.1 (Items for Which IBM May be Liable): - -Where IBM is in breach of a condition or warranty implied by the Competition and Consumer Act 2010, IBM's liability is limited to the repair or replacement of the goods, or the supply of equivalent goods. Where that condition or warranty relates to right to sell, quiet possession or clear title, or the goods are of a kind ordinarily obtained for personal, domestic or household use or consumption, then none of the limitations in this paragraph apply. - -HONG KONG SAR, MACAU SAR, AND TAIWAN - -As applies to licenses obtained in Taiwan and the special administrative regions, phrases throughout this Agreement containing the word "country" (for example, "the country in which the original Licensee was granted the license" and "the country in which Licensee obtained the Program license") are replaced with the following: - -(1) In Hong Kong SAR: "Hong Kong SAR" - -(2) In Macau SAR: "Macau SAR" except in the Governing Law clause (Section 14.1) - -(3) In Taiwan: "Taiwan." - -INDIA - -10.1 Items for Which IBM May be Liable - -The following replaces the terms of Items 1 and 2 of the first paragraph: - -1) liability for bodily injury (including death) or damage to real property and tangible personal property will be limited to that caused by IBM's negligence; and 2) as to any other actual damage arising in any situation involving nonperformance by IBM pursuant to, or in any way related to the subject of this Agreement, IBM's liability will be limited to the charge paid by Licensee for the individual Program that is the subject of the claim. - -13. General - -The following replaces the terms of Item 13.g: - -If no suit or other legal action is brought, within three years after the cause of action arose, in respect of any claim that either party may have against the other, the rights of the concerned party in respect of such claim will be forfeited and the other party will stand released from its obligations in respect of such claim. 
- -INDONESIA - -3.3 Term and Termination - -The following is added to the last paragraph: - -Both parties waive the provision of article 1266 of the Indonesian Civil Code, to the extent the article provision requires such court decree for the termination of an agreement creating mutual obligations. - -JAPAN - -13. General - -The following is inserted after Item 13.f: - -Any doubts concerning this Agreement will be initially resolved between us in good faith and in accordance with the principle of mutual trust. - -MALAYSIA - -10.2 Items for Which IBM Is not Liable - -The word "SPECIAL" in Item 10.2b is deleted. - -NEW ZEALAND - -8.1 Limited Warranty - -The following is added: - -The warranties specified in this Section are in addition to any rights Licensee may have under the Consumer Guarantees Act 1993 or other legislation which cannot be excluded or limited. The Consumer Guarantees Act 1993 will not apply in respect of any goods which IBM provides, if Licensee requires the goods for the purposes of a business as defined in that Act. - -10. Limitation of Liability - -The following is added: - -Where Programs are not obtained for the purposes of a business as defined in the Consumer Guarantees Act 1993, the limitations in this Section are subject to the limitations in that Act. - -PEOPLE'S REPUBLIC OF CHINA - -4. Charges - -The following is added: - -All banking charges incurred in the People's Republic of China will be borne by Licensee and those incurred outside the People's Republic of China will be borne by IBM. - -PHILIPPINES - -10.2 Items for Which IBM Is not Liable - -The following replaces the terms of Item 10.2b: - -b. special (including nominal and exemplary damages), moral, incidental, or indirect damages or for any economic consequential damages; or - -SINGAPORE - -10.2 Items for Which IBM Is not Liable - -The words "SPECIAL" and "ECONOMIC" are deleted from Item 10.2b. - -13. General - -The following replaces the terms of Item 13.i: - -Subject to the rights provided to IBM's suppliers and Program developers as provided in Section 10 above (Limitation of Liability), a person who is not a party to this Agreement will have no right under the Contracts (Right of Third Parties) Act to enforce any of its terms. - -TAIWAN - -8.1 Limited Warranty - -The last paragraph is deleted. - -10.1 Items for Which IBM May Be Liable - -The following sentences are deleted: - -This limit also applies to any of IBM's subcontractors and Program developers. It is the maximum for which IBM and its subcontractors and Program developers are collectively responsible. - -EUROPE, MIDDLE EAST, AFRICA (EMEA) COUNTRY AMENDMENTS - -EUROPEAN UNION MEMBER STATES - -8. Warranty and Exclusions - -The following is added to Section 8 (Warranty and Exclusion): - -In the European Union ("EU"), consumers have legal rights under applicable national legislation governing the sale of consumer goods. Such rights are not affected by the provisions set out in this Section 8 (Warranty and Exclusions). The territorial scope of the Limited Warranty is worldwide. - -EU MEMBER STATES AND THE COUNTRIES IDENTIFIED BELOW - -Iceland, Liechtenstein, Norway, Switzerland, Turkey, and any other European country that has enacted local data privacy or protection legislation similar to the EU model. - -13. 
General - -The following replaces Item 13.e: - -(1) Definitions - For the purposes of this Item 13.e, the following additional definitions apply: - -(a) Business Contact Information - business-related contact information disclosed by Licensee to IBM, including names, job titles, business addresses, telephone numbers and email addresses of Licensee's employees and contractors. For Austria, Italy and Switzerland, Business Contact Information also includes information about Licensee and its contractors as legal entities (for example, Licensee's revenue data and other transactional information) - -(b) Business Contact Personnel - Licensee employees and contractors to whom the Business Contact Information relates. - -(c) Data Protection Authority - the authority established by the Data Protection and Electronic Communications Legislation in the applicable country or, for non-EU countries, the authority responsible for supervising the protection of personal data in that country, or (for any of the foregoing) any duly appointed successor entity thereto. - -(d) Data Protection & Electronic Communications Legislation - (i) the applicable local legislation and regulations in force implementing the requirements of EU Directive 95/46/EC (on the protection of individuals with regard to the processing of personal data and on the free movement of such data) and of EU Directive 2002/58/EC (concerning the processing of personal data and the protection of privacy in the electronic communications sector); or (ii) for non-EU countries, the legislation and/or regulations passed in the applicable country relating to the protection of personal data and the regulation of electronic communications involving personal data, including (for any of the foregoing) any statutory replacement or modification thereof. - -(e) IBM Group - International Business Machines Corporation of Armonk, New York, USA, its subsidiaries, and their respective Business Partners and subcontractors. - -(2) Licensee authorizes IBM: - -(a) to process and use Business Contact Information within IBM Group in support of Licensee including the provision of support services, and for the purpose of furthering the business relationship between Licensee and IBM Group, including, without limitation, contacting Business Contact Personnel (by email or otherwise) and marketing IBM Group products and services (the "Specified Purpose"); and - -(b) to disclose Business Contact Information to other members of IBM Group in pursuit of the Specified Purpose only. - -(3) IBM agrees that all Business Contact Information will be processed in accordance with the Data Protection & Electronic Communications Legislation and will be used only for the Specified Purpose. - -(4) To the extent required by the Data Protection & Electronic Communications Legislation, Licensee represents that (a) it has obtained (or will obtain) any consents from (and has issued (or will issue) any notices to) the Business Contact Personnel as are necessary in order to enable IBM Group to process and use the Business Contact Information for the Specified Purpose. - -(5) Licensee authorizes IBM to transfer Business Contact Information outside the European Economic Area, provided that the transfer is made on contractual terms approved by the Data Protection Authority or the transfer is otherwise permitted under the Data Protection & Electronic Communications Legislation. 
- -AUSTRIA - -8.2 Exclusions - -The following is deleted from the first paragraph: - -MERCHANTABILITY, SATISFACTORY QUALITY - -10. Limitation of Liability - -The following is added: - -The following limitations and exclusions of IBM's liability do not apply for damages caused by gross negligence or willful misconduct. - -10.1 Items for Which IBM May Be Liable - -The following replaces the first sentence in the first paragraph: - -Circumstances may arise where, because of a default by IBM in the performance of its obligations under this Agreement or other liability, Licensee is entitled to recover damages from IBM. - -In the second sentence of the first paragraph, delete entirely the parenthetical phrase: - -"(including fundamental breach, negligence, misrepresentation, or other contract or tort claim)". - -10.2 Items for Which IBM Is Not Liable - -The following replaces Item 10.2b: - -b. indirect damages or consequential damages; or - -BELGIUM, FRANCE, ITALY, AND LUXEMBOURG - -10. Limitation of Liability - -The following replaces the terms of Section 10 (Limitation of Liability) in its entirety: - -Except as otherwise provided by mandatory law: - -10.1 Items for Which IBM May Be Liable - -IBM's entire liability for all claims in the aggregate for any damages and losses that may arise as a consequence of the fulfillment of its obligations under or in connection with this Agreement or due to any other cause related to this Agreement is limited to the compensation of only those damages and losses proved and actually arising as an immediate and direct consequence of the non-fulfillment of such obligations (if IBM is at fault) or of such cause, for a maximum amount equal to the charges (if the Program is subject to fixed term charges, up to twelve months' charges) Licensee paid for the Program that has caused the damages. - -The above limitation will not apply to damages for bodily injuries (including death) and damages to real property and tangible personal property for which IBM is legally liable. - -10.2 Items for Which IBM Is Not Liable - -UNDER NO CIRCUMSTANCES IS IBM OR ANY OF ITS PROGRAM DEVELOPERS LIABLE FOR ANY OF THE FOLLOWING, EVEN IF INFORMED OF THEIR POSSIBILITY: 1) LOSS OF, OR DAMAGE TO, DATA; 2) INCIDENTAL, EXEMPLARY OR INDIRECT DAMAGES, OR FOR ANY ECONOMIC CONSEQUENTIAL DAMAGES; AND / OR 3) LOST PROFITS, BUSINESS, REVENUE, GOODWILL, OR ANTICIPATED SAVINGS, EVEN IF THEY ARISE AS AN IMMEDIATE CONSEQUENCE OF THE EVENT THAT GENERATED THE DAMAGES. - -10.3 Suppliers and Program Developers - -The limitation and exclusion of liability herein agreed applies not only to the activities performed by IBM but also to the activities performed by its suppliers and Program developers, and represents the maximum amount for which IBM as well as its suppliers and Program developers are collectively responsible. - -GERMANY - -8.1 Limited Warranty - -The following is inserted at the beginning of Section 8.1: - -The Warranty Period is twelve months from the date of delivery of the Program to the original Licensee. - -8.2 Exclusions - -Section 8.2 is deleted in its entirety and replaced with the following: - -Section 8.1 defines IBM's entire warranty obligations to Licensee except as otherwise required by applicable statutory law. - -10. Limitation of Liability - -The following replaces the Limitation of Liability section in its entirety: - -a. 
IBM will be liable without limit for 1) loss or damage caused by a breach of an express guarantee; 2) damages or losses resulting in bodily injury (including death); and 3) damages caused intentionally or by gross negligence. - -b. In the event of loss, damage and frustrated expenditures caused by slight negligence or in breach of essential contractual obligations, IBM will be liable, regardless of the basis on which Licensee is entitled to claim damages from IBM (including fundamental breach, negligence, misrepresentation, or other contract or tort claim), per claim only up to the greater of 500,000 euro or the charges (if the Program is subject to fixed term charges, up to 12 months' charges) Licensee paid for the Program that caused the loss or damage. A number of defaults which together result in, or contribute to, substantially the same loss or damage will be treated as one default. - -c. In the event of loss, damage and frustrated expenditures caused by slight negligence, IBM will not be liable for indirect or consequential damages, even if IBM was informed about the possibility of such loss or damage. - -d. In case of delay on IBM's part: 1) IBM will pay to Licensee an amount not exceeding the loss or damage caused by IBM's delay and 2) IBM will be liable only in respect of the resulting damages that Licensee suffers, subject to the provisions of Items a and b above. - -13. General - -The following replaces the provisions of 13.g: - -Any claims resulting from this Agreement are subject to a limitation period of three years, except as stated in Section 8.1 (Limited Warranty) of this Agreement. - -The following replaces the provisions of 13.i: - -No right or cause of action for any third party is created by this Agreement, nor is IBM responsible for any third party claims against Licensee, except (to the extent permitted in Section 10 (Limitation of Liability)) for: i) bodily injury (including death); or ii) damage to real or tangible personal property for which (in either case) IBM is legally liable to that third party. - -IRELAND - -8.2 Exclusions - -The following paragraph is added: - -Except as expressly provided in these terms and conditions, or Section 12 of the Sale of Goods Act 1893 as amended by the Sale of Goods and Supply of Services Act, 1980 (the "1980 Act"), all conditions or warranties (express or implied, statutory or otherwise) are hereby excluded including, without limitation, any warranties implied by the Sale of Goods Act 1893 as amended by the 1980 Act (including, for the avoidance of doubt, Section 39 of the 1980 Act). - -IRELAND AND UNITED KINGDOM - -2. Agreement Structure - -The following sentence is added: - -Nothing in this paragraph shall have the effect of excluding or limiting liability for fraud. - -10.1 Items for Which IBM May Be Liable - -The following replaces the first paragraph of the Subsection: - -For the purposes of this section, a "Default" means any act, statement, omission or negligence on the part of IBM in connection with, or in relation to, the subject matter of an Agreement in respect of which IBM is legally liable to Licensee, whether in contract or in tort. A number of Defaults which together result in, or contribute to, substantially the same loss or damage will be treated as one Default. - -Circumstances may arise where, because of a Default by IBM in the performance of its obligations under this Agreement or other liability, Licensee is entitled to recover damages from IBM. 
Regardless of the basis on which Licensee is entitled to claim damages from IBM and except as expressly required by law without the possibility of contractual waiver, IBM's entire liability for any one Default will not exceed the amount of any direct damages, to the extent actually suffered by Licensee as an immediate and direct consequence of the default, up to the greater of (1) 500,000 euro (or the equivalent in local currency) or (2) 125% of the charges (if the Program is subject to fixed term charges, up to 12 months' charges) for the Program that is the subject of the claim. Notwithstanding the foregoing, the amount of any damages for bodily injury (including death) and damage to real property and tangible personal property for which IBM is legally liable is not subject to such limitation. - -10.2 Items for Which IBM is Not Liable - -The following replaces Items 10.2b and 10.2c: - -b. special, incidental, exemplary, or indirect damages or consequential damages; or - -c. wasted management time or lost profits, business, revenue, goodwill, or anticipated savings. +Red Hat Products + +Red Hat Products (as listed below) are licensed separately and are supported by IBM only when used in support of the Program and only while Licensee has Software Subscription and Support in effect for the Program. In addition, Licensee agrees that its use of and support for the Red Hat Products are subject to the following terms (https://www.redhat.com/en/about/agreements). + +Red Hat Universal Base Image + +- Entitlement Ratio: 1 VPC / 1 VPC + +Red Hat Enterprise Linux + +- Entitlement Ratio: 1 VPC / 1 VPC + +Red Hat OpenShift Container Platform + +- Entitlement Ratio: 1 VPC / 1 VPC + +"Ratio n/m" means that Licensee receives some number ('n') entitlements of the indicated metric for the identified program for every specified number ('m') entitlements of the specified metric for the Program as a whole. The specified ratio does not apply to any entitlements for the Program that are not of the required metric type. The number of entitlements for the identified program is rounded up to a multiple of 'n'. For example, if a Program includes 100 PVUs for an identified program for every 500 PVUs obtained of the Principal Program and Licensee acquires 1,200 PVUs of the Program, Licensee may install the identified program and have processor cores available to or managed by it of up to 300 PVUs. Those PVUs would not need to be counted as part of the total PVU requirement for Licensee's installation of the Program on account of the installation of the identified program (although those PVUs might need to be counted for other reasons, such as the processor cores being made available to other components of the Program, as well). + +L/N: L-ASAY-BJCED8 + +D/N: L-ASAY-BJCED8 + +P/N: L-ASAY-BJCED8 + + + +International Program License Agreement +Part 1 - General Terms +BY DOWNLOADING, INSTALLING, COPYING, ACCESSING, CLICKING ON AN "ACCEPT" BUTTON, OR OTHERWISE USING THE PROGRAM, LICENSEE AGREES TO THE TERMS OF THIS AGREEMENT. IF YOU ARE ACCEPTING THESE TERMS ON BEHALF OF LICENSEE, YOU REPRESENT AND WARRANT THAT YOU HAVE FULL AUTHORITY TO BIND LICENSEE TO THESE TERMS. IF YOU DO NOT AGREE TO THESE TERMS, +* DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, CLICK ON AN "ACCEPT" BUTTON, OR USE THE PROGRAM; AND +* PROMPTLY RETURN THE UNUSED MEDIA, DOCUMENTATION, AND PROOF OF ENTITLEMENT TO THE PARTY FROM WHOM IT WAS OBTAINED FOR A REFUND OF THE AMOUNT PAID. IF THE PROGRAM WAS DOWNLOADED, DESTROY ALL COPIES OF THE PROGRAM. +1.
Definitions +"Authorized Use" - the specified level at which Licensee is authorized to execute or run the Program. That level may be measured by number of users, millions of service units ("MSUs"), Processor Value Units ("PVUs"), or other level of use specified by IBM. +"IBM" - International Business Machines Corporation or one of its subsidiaries. +"License Information" ("LI") - a document that provides information and any additional terms specific to a Program. The Program's LI is available at www.ibm.com/software/sla. The LI can also be found in the Program's directory, by the use of a system command, or as a booklet included with the Program. +"Program" - the following, including the original and all whole or partial copies: 1) machine-readable instructions and data, 2) components, files, and modules, 3) audio-visual content (such as images, text, recordings, or pictures), and 4) related licensed materials (such as keys and documentation). +"Proof of Entitlement" ("PoE") - evidence of Licensee's Authorized Use. The PoE is also evidence of Licensee's eligibility for warranty, future update prices, if any, and potential special or promotional opportunities. If IBM does not provide Licensee with a PoE, then IBM may accept as the PoE the original paid sales receipt or other sales record from the party (either IBM or its reseller) from whom Licensee obtained the Program, provided that it specifies the Program name and Authorized Use obtained. +"Warranty Period" - one year, starting on the date the original Licensee is granted the license. +2. Agreement Structure +This Agreement includes Part 1 - General Terms, Part 2 - Country-unique Terms (if any), the LI, and the PoE and is the complete agreement between Licensee and IBM regarding the use of the Program. It replaces any prior oral or written communications between Licensee and IBM concerning Licensee's use of the Program. The terms of Part 2 may replace or modify those of Part 1. To the extent of any conflict, the LI prevails over both Parts. +3. License Grant +The Program is owned by IBM or an IBM supplier, and is copyrighted and licensed, not sold. +IBM grants Licensee a nonexclusive license to 1) use the Program up to the Authorized Use specified in the PoE, 2) make and install copies to support such Authorized Use, and 3) make a backup copy, all provided that +a. Licensee has lawfully obtained the Program and complies with the terms of this Agreement; +b. the backup copy does not execute unless the backed-up Program cannot execute; +c. Licensee reproduces all copyright notices and other legends of ownership on each copy, or partial copy, of the Program; +d. Licensee ensures that anyone who uses the Program (accessed either locally or remotely) 1) does so only on Licensee's behalf and 2) complies with the terms of this Agreement; +e. Licensee does not 1) use, copy, modify, or distribute the Program except as expressly permitted in this Agreement; 2) reverse assemble, reverse compile, otherwise translate, or reverse engineer the Program, except as expressly permitted by law without the possibility of contractual waiver; 3) use any of the Program's components, files, modules, audio-visual content, or related licensed materials separately from that Program; or 4) sublicense, rent, or lease the Program; and +f. 
if Licensee obtains this Program as a Supporting Program, Licensee uses this Program only to support the Principal Program and subject to any limitations in the license to the Principal Program, or, if Licensee obtains this Program as a Principal Program, Licensee uses all Supporting Programs only to support this Program, and subject to any limitations in this Agreement. For purposes of this Item "f," a "Supporting Program" is a Program that is part of another IBM Program ("Principal Program") and identified as a Supporting Program in the Principal Program's LI. (To obtain a separate license to a Supporting Program without these restrictions, Licensee should contact the party from whom Licensee obtained the Supporting Program.) +This license applies to each copy of the Program that Licensee makes. +3.1 Trade-ups, Updates, Fixes, and Patches +3.1.1 Trade-ups +If the Program is replaced by a trade-up Program, the replaced Program's license is promptly terminated. +3.1.2 Updates, Fixes, and Patches +When Licensee receives an update, fix, or patch to a Program, Licensee accepts any additional or different terms that are applicable to such update, fix, or patch that are specified in its LI. If no additional or different terms are provided, then the update, fix, or patch is subject solely to this Agreement. If the Program is replaced by an update, Licensee agrees to promptly discontinue use of the replaced Program. +3.2 Fixed Term Licenses +If IBM licenses the Program for a fixed term, Licensee's license is terminated at the end of the fixed term, unless Licensee and IBM agree to renew it. +3.3 Term and Termination +This Agreement is effective until terminated. +IBM may terminate Licensee's license if Licensee fails to comply with the terms of this Agreement. +If the license is terminated for any reason by either party, Licensee agrees to promptly discontinue use of and destroy all of Licensee's copies of the Program. Any terms of this Agreement that by their nature extend beyond termination of this Agreement remain in effect until fulfilled, and apply to both parties' respective successors and assignees. +4. Charges +Charges are based on Authorized Use obtained, which is specified in the PoE. IBM does not give credits or refunds for charges already due or paid, except as specified elsewhere in this Agreement. +If Licensee wishes to increase its Authorized Use, Licensee must notify IBM or an authorized IBM reseller in advance and pay any applicable charges. +5. Taxes +If any authority imposes on the Program a duty, tax, levy, or fee, excluding those based on IBM's net income, then Licensee agrees to pay that amount, as specified in an invoice, or supply exemption documentation. Licensee is responsible for any personal property taxes for the Program from the date that Licensee obtains it. If any authority imposes a customs duty, tax, levy, or fee for the import into or the export, transfer, access, or use of the Program outside the country in which the original Licensee was granted the license, then Licensee agrees that it is responsible for, and will pay, any amount imposed. +6. Money-back Guarantee +If Licensee is dissatisfied with the Program for any reason and is the original Licensee, Licensee may terminate the license and obtain a refund of the amount Licensee paid for the Program, provided that Licensee returns the Program and PoE to the party from whom Licensee obtained it within 30 days of the date the PoE was issued to Licensee. 
If the license is for a fixed term that is subject to renewal, then Licensee may obtain a refund only if the Program and its PoE are returned within the first 30 days of the initial term. If Licensee downloaded the Program, Licensee should contact the party from whom Licensee obtained it for instructions on how to obtain the refund. +7. Program Transfer +Licensee may transfer the Program and all of Licensee's license rights and obligations to another party only if that party agrees to the terms of this Agreement. If the license is terminated for any reason by either party, Licensee is prohibited from transferring the Program to another party. Licensee may not transfer a portion of 1) the Program or 2) the Program's Authorized Use. When Licensee transfers the Program, Licensee must also transfer a hard copy of this Agreement, including the LI and PoE. Immediately after the transfer, Licensee's license terminates. +8. Warranty and Exclusions +8.1 Limited Warranty +IBM warrants that the Program, when used in its specified operating environment, will conform to its specifications. The Program's specifications, and specified operating environment information, can be found in documentation accompanying the Program (such as a read-me file) or other information published by IBM (such as an announcement letter). Licensee agrees that such documentation and other Program content may be supplied only in the English language, unless otherwise required by local law without the possibility of contractual waiver or limitation. +The warranty applies only to the unmodified portion of the Program. IBM does not warrant uninterrupted or error-free operation of the Program, or that IBM will correct all Program defects. Licensee is responsible for the results obtained from the use of the Program. +During the Warranty Period, IBM provides Licensee with access to IBM databases containing information on known Program defects, defect corrections, restrictions, and bypasses at no additional charge. Consult the IBM Software Support Handbook for further information at www.ibm.com/software/support. +If the Program does not function as warranted during the Warranty Period and the problem cannot be resolved with information available in the IBM databases, Licensee may return the Program and its PoE to the party (either IBM or its reseller) from whom Licensee obtained it and receive a refund of the amount Licensee paid. After returning the Program, Licensee's license terminates. If Licensee downloaded the Program, Licensee should contact the party from whom Licensee obtained it for instructions on how to obtain the refund. +8.2 Exclusions +THESE WARRANTIES ARE LICENSEE'S EXCLUSIVE WARRANTIES AND REPLACE ALL OTHER WARRANTIES OR CONDITIONS, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, ANY IMPLIED WARRANTIES OR CONDITIONS OF MERCHANTABILITY, SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT. SOME STATES OR JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF EXPRESS OR IMPLIED WARRANTIES, SO THE ABOVE EXCLUSION MAY NOT APPLY TO LICENSEE. IN THAT EVENT, SUCH WARRANTIES ARE LIMITED IN DURATION TO THE WARRANTY PERIOD. NO WARRANTIES APPLY AFTER THAT PERIOD. SOME STATES OR JURISDICTIONS DO NOT ALLOW LIMITATIONS ON HOW LONG AN IMPLIED WARRANTY LASTS, SO THE ABOVE LIMITATION MAY NOT APPLY TO LICENSEE. +THESE WARRANTIES GIVE LICENSEE SPECIFIC LEGAL RIGHTS. LICENSEE MAY ALSO HAVE OTHER RIGHTS THAT VARY FROM STATE TO STATE OR JURISDICTION TO JURISDICTION. 
+THE WARRANTIES IN THIS SECTION 8 (WARRANTY AND EXCLUSIONS) ARE PROVIDED SOLELY BY IBM. THE DISCLAIMERS IN THIS SUBSECTION 8.2 (EXCLUSIONS), HOWEVER, ALSO APPLY TO IBM'S SUPPLIERS OF THIRD PARTY CODE. THOSE SUPPLIERS PROVIDE SUCH CODE WITHOUT WARRANTIES OR CONDITION OF ANY KIND. THIS PARAGRAPH DOES NOT NULLIFY IBM'S WARRANTY OBLIGATIONS UNDER THIS AGREEMENT. +9. Licensee Data and Databases +To assist Licensee in isolating the cause of a problem with the Program, IBM may request that Licensee 1) allow IBM to remotely access Licensee's system or 2) send Licensee information or system data to IBM. However, IBM is not obligated to provide such assistance unless IBM and Licensee enter a separate written agreement under which IBM agrees to provide to Licensee that type of support, which is beyond IBM's warranty obligations in this Agreement. In any event, IBM uses information about errors and problems to improve its products and services, and assist with its provision of related support offerings. For these purposes, IBM may use IBM entities and subcontractors (including in one or more countries other than the one in which Licensee is located), and Licensee authorizes IBM to do so. +Licensee remains responsible for 1) any data and the content of any database Licensee makes available to IBM, 2) the selection and implementation of procedures and controls regarding access, security, encryption, use, and transmission of data (including any personally-identifiable data), and 3) backup and recovery of any database and any stored data. Licensee will not send or provide IBM access to any personally-identifiable information, whether in data or any other form, and will be responsible for reasonable costs and other amounts that IBM may incur relating to any such information mistakenly provided to IBM or the loss or disclosure of such information by IBM, including those arising out of any third party claims. +10. Limitation of Liability +The limitations and exclusions in this Section 10 (Limitation of Liability) apply to the full extent they are not prohibited by applicable law without the possibility of contractual waiver. +10.1 Items for Which IBM May Be Liable +Circumstances may arise where, because of a default on IBM's part or other liability, Licensee is entitled to recover damages from IBM. Regardless of the basis on which Licensee is entitled to claim damages from IBM (including fundamental breach, negligence, misrepresentation, or other contract or tort claim), IBM's entire liability for all claims in the aggregate arising from or related to each Program or otherwise arising under this Agreement will not exceed the amount of any 1) damages for bodily injury (including death) and damage to real property and tangible personal property and 2) other actual direct damages up to the charges (if the Program is subject to fixed term charges, up to twelve months' charges) Licensee paid for the Program that is the subject of the claim. +This limit also applies to any of IBM's Program developers and suppliers. It is the maximum for which IBM and its Program developers and suppliers are collectively responsible. +10.2 Items for Which IBM Is Not Liable +UNDER NO CIRCUMSTANCES IS IBM, ITS PROGRAM DEVELOPERS OR SUPPLIERS LIABLE FOR ANY OF THE FOLLOWING, EVEN IF INFORMED OF THEIR POSSIBILITY: +a. LOSS OF, OR DAMAGE TO, DATA; +b. SPECIAL, INCIDENTAL, EXEMPLARY, OR INDIRECT DAMAGES, OR FOR ANY ECONOMIC CONSEQUENTIAL DAMAGES; OR +c. LOST PROFITS, BUSINESS, REVENUE, GOODWILL, OR ANTICIPATED SAVINGS. +11. 
Compliance Verification +For purposes of this Section 11 (Compliance Verification), "IPLA Program Terms" means 1) this Agreement and applicable amendments and transaction documents provided by IBM, and 2) IBM software policies that may be found at the IBM Software Policy website (www.ibm.com/softwarepolicies), including but not limited to those policies concerning backup, sub-capacity pricing, and migration. +The rights and obligations set forth in this Section 11 remain in effect during the period the Program is licensed to Licensee, and for two years thereafter. +11.1 Verification Process +Licensee agrees to create, retain, and provide to IBM and its auditors accurate written records, system tool outputs, and other system information sufficient to provide auditable verification that Licensee's use of all Programs is in compliance with the IPLA Program Terms, including, without limitation, all of IBM's applicable licensing and pricing qualification terms. Licensee is responsible for 1) ensuring that it does not exceed its Authorized Use, and 2) remaining in compliance with IPLA Program Terms. +Upon reasonable notice, IBM may verify Licensee's compliance with IPLA Program Terms at all sites and for all environments in which Licensee uses (for any purpose) Programs subject to IPLA Program Terms. Such verification will be conducted in a manner that minimizes disruption to Licensee's business, and may be conducted on Licensee's premises, during normal business hours. IBM may use an independent auditor to assist with such verification, provided IBM has a written confidentiality agreement in place with such auditor. +11.2 Resolution +IBM will notify Licensee in writing if any such verification indicates that Licensee has used any Program in excess of its Authorized Use or is otherwise not in compliance with the IPLA Program Terms. Licensee agrees to promptly pay directly to IBM the charges that IBM specifies in an invoice for 1) any such excess use, 2) support for such excess use for the lesser of the duration of such excess use or two years, and 3) any additional charges and other liabilities determined as a result of such verification. +12. Third Party Notices +The Program may include third party code that IBM, not the third party, licenses to Licensee under this Agreement. Notices, if any, for the third party code ("Third Party Notices") are included for Licensee's information only. These notices can be found in the Program's NOTICES file(s). Information on how to obtain source code for certain third party code can be found in the Third Party Notices. If in the Third Party Notices IBM identifies third party code as "Modifiable Third Party Code," IBM authorizes Licensee to 1) modify the Modifiable Third Party Code and 2) reverse engineer the Program modules that directly interface with the Modifiable Third Party Code provided that it is only for the purpose of debugging Licensee's modifications to such third party code. IBM's service and support obligations, if any, apply only to the unmodified Program. +13. General +a. Nothing in this Agreement affects any statutory rights of consumers that cannot be waived or limited by contract. +b. For Programs IBM provides to Licensee in tangible form, IBM fulfills its shipping and delivery obligations upon the delivery of such Programs to the IBM-designated carrier, unless otherwise agreed to in writing by Licensee and IBM. +c. 
If any provision of this Agreement is held to be invalid or unenforceable, the remaining provisions of this Agreement remain in full force and effect. +d. Licensee agrees to comply with all applicable export and import laws and regulations, including U.S. embargo and sanctions regulations and prohibitions on export for certain end uses or to certain users. +e. Licensee authorizes International Business Machines Corporation and its subsidiaries (and their successors and assigns, contractors and IBM Business Partners) to store and use Licensee's business contact information wherever they do business, in connection with IBM products and services, or in furtherance of IBM's business relationship with Licensee. +f. Each party will allow the other reasonable opportunity to comply before it claims that the other has not met its obligations under this Agreement. The parties will attempt in good faith to resolve all disputes, disagreements, or claims between the parties relating to this Agreement. +g. Unless otherwise required by applicable law without the possibility of contractual waiver or limitation: 1) neither party will bring a legal action, regardless of form, for any claim arising out of or related to this Agreement more than two years after the cause of action arose; and 2) upon the expiration of such time limit, any such claim and all respective rights related to the claim lapse. +h. Neither Licensee nor IBM is responsible for failure to fulfill any obligations due to causes beyond its control. +i. No right or cause of action for any third party is created by this Agreement, nor is IBM responsible for any third party claims against Licensee, except as permitted in Subsection 10.1 (Items for Which IBM May Be Liable) above for bodily injury (including death) or damage to real or tangible personal property for which IBM is legally liable to that third party. +j. In entering into this Agreement, neither party is relying on any representation not specified in this Agreement, including but not limited to any representation concerning: 1) the performance or function of the Program, other than as expressly warranted in Section 8 (Warranty and Exclusions) above; 2) the experiences or recommendations of other parties; or 3) any results or savings that Licensee may achieve. +k. IBM has signed agreements with certain organizations (called "IBM Business Partners") to promote, market, and support certain Programs. IBM Business Partners remain independent and separate from IBM. IBM is not responsible for the actions or statements of IBM Business Partners or obligations they have to Licensee. +l. The license and intellectual property indemnification terms of Licensee's other agreements with IBM (such as the IBM Customer Agreement) do not apply to Program licenses granted under this Agreement. +14. Geographic Scope and Governing Law +14.1 Governing Law +Both parties agree to the application of the laws of the country in which Licensee obtained the Program license to govern, interpret, and enforce all of Licensee's and IBM's respective rights, duties, and obligations arising from, or relating in any manner to, the subject matter of this Agreement, without regard to conflict of law principles. +The United Nations Convention on Contracts for the International Sale of Goods does not apply. +14.2 Jurisdiction +All rights, duties, and obligations are subject to the courts of the country in which Licensee obtained the Program license. 
+Part 2 - Country-unique Terms +For licenses granted in the countries specified below, the following terms replace or modify the referenced terms in Part 1. All terms in Part 1 that are not changed by these amendments remain unchanged and in effect. This Part 2 is organized as follows: +* Multiple country amendments to Part 1, Section 14 (Governing Law and Jurisdiction); +* Americas country amendments to other Agreement terms; +* Asia Pacific country amendments to other Agreement terms; and +* Europe, Middle East, and Africa country amendments to other Agreement terms. +Multiple country amendments to Part 1, Section 14 (Governing Law and Jurisdiction) +14.1 Governing Law +The phrase "the laws of the country in which Licensee obtained the Program license" in the first paragraph of 14.1 Governing Law is replaced by the following phrases in the countries below: +AMERICAS +(1) In Canada: the laws in the Province of Ontario; +(2) in Mexico: the federal laws of the Republic of Mexico; +(3) in the United States, Anguilla, Antigua/Barbuda, Aruba, British Virgin Islands, Cayman Islands, Dominica, Grenada, Guyana, Saint Kitts and Nevis, Saint Lucia, Saint Maarten, and Saint Vincent and the Grenadines: the laws of the State of New York, United States; +(4) in Venezuela: the laws of the Bolivarian Republic of Venezuela; +ASIA PACIFIC +(5) in Cambodia and Laos: the laws of the State of New York, United States; +(6) in Australia: the laws of the State or Territory in which the transaction is performed; +(7) in Hong Kong SAR and Macau SAR: the laws of Hong Kong Special Administrative Region ("SAR"); +(8) in Taiwan: the laws of Taiwan; +EUROPE, MIDDLE EAST, AND AFRICA +(9) in Albania, Armenia, Azerbaijan, Belarus, Bosnia-Herzegovina, Bulgaria, Croatia, Former Yugoslav Republic of Macedonia, Georgia, Hungary, Kazakhstan, Kyrgyzstan, Moldova, Montenegro, Poland, Romania, Russia, Serbia, Slovakia, Tajikistan, Turkmenistan, Ukraine, and Uzbekistan: the laws of Austria; +(10) in Algeria, Andorra, Benin, Burkina Faso, Cameroon, Cape Verde, Central African Republic, Chad, Comoros, Congo Republic, Djibouti, Democratic Republic of Congo, Equatorial Guinea, French Guiana, French Polynesia, Gabon, Gambia, Guinea, Guinea-Bissau, Ivory Coast, Lebanon, Madagascar, Mali, Mauritania, Mauritius, Mayotte, Morocco, New Caledonia, Niger, Reunion, Senegal, Seychelles, Togo, Tunisia, Vanuatu, and Wallis and Futuna: the laws of France; +(11) in Estonia, Latvia, and Lithuania: the laws of Finland; +(12) in Angola, Bahrain, Botswana, Burundi, Egypt, Eritrea, Ethiopia, Ghana, Jordan, Kenya, Kuwait, Liberia, Malawi, Malta, Mozambique, Nigeria, Oman, Pakistan, Qatar, Rwanda, Sao Tome and Principe, Saudi Arabia, Sierra Leone, Somalia, Tanzania, Uganda, United Arab Emirates, the United Kingdom, West Bank/Gaza, Yemen, Zambia, and Zimbabwe: the laws of England; and +(13) in South Africa, Namibia, Lesotho, and Swaziland: the laws of the Republic of South Africa. 
+14.2 Jurisdiction +The following paragraph pertains to jurisdiction and replaces Subsection 14.2 (Jurisdiction) as it applies for those countries identified below: +All rights, duties, and obligations are subject to the courts of the country in which Licensee obtained the Program license except that in the countries identified below all disputes arising out of or related to this Agreement, including summary proceedings, will be brought before and subject to the exclusive jurisdiction of the following courts of competent jurisdiction: +AMERICAS +(1) In Argentina: the Ordinary Commercial Court of the city of Buenos Aires; +(2) in Brazil: the court of Rio de Janeiro, RJ; +(3) in Chile: the Civil Courts of Justice of Santiago; +(4) in Ecuador: the civil judges of Quito for executory or summary proceedings (as applicable); +(5) in Mexico: the courts located in Mexico City, Federal District; +(6) in Peru: the judges and tribunals of the judicial district of Lima, Cercado; +(7) in Uruguay: the courts of the city of Montevideo; +(8) in Venezuela: the courts of the metropolitan area of the city of Caracas; +EUROPE, MIDDLE EAST, AND AFRICA +(9) in Austria: the court of law in Vienna, Austria (Inner-City); +(10) in Algeria, Andorra, Benin, Burkina Faso, Cameroon, Cape Verde, Central African Republic, Chad, Comoros, Congo Republic, Djibouti, Democratic Republic of Congo, Equatorial Guinea, France, French Guiana, French Polynesia, Gabon, Gambia, Guinea, Guinea-Bissau, Ivory Coast, Lebanon, Madagascar, Mali, Mauritania, Mauritius, Mayotte, Monaco, Morocco, New Caledonia, Niger, Reunion, Senegal, Seychelles, Togo, Tunisia, Vanuatu, and Wallis and Futuna: the Commercial Court of Paris; +(11) in Angola, Bahrain, Botswana, Burundi, Egypt, Eritrea, Ethiopia, Ghana, Jordan, Kenya, Kuwait, Liberia, Malawi, Malta, Mozambique, Nigeria, Oman, Pakistan, Qatar, Rwanda, Sao Tome and Principe, Saudi Arabia, Sierra Leone, Somalia, Tanzania, Uganda, United Arab Emirates, the United Kingdom, West Bank/Gaza, Yemen, Zambia, and Zimbabwe: the English courts; +(12) in South Africa, Namibia, Lesotho, and Swaziland: the High Court in Johannesburg; +(13) in Greece: the competent court of Athens; +(14) in Israel: the courts of Tel Aviv-Jaffa; +(15) in Italy: the courts of Milan; +(16) in Portugal: the courts of Lisbon; +(17) in Spain: the courts of Madrid; and +(18) in Turkey: the Istanbul Central Courts and Execution Directorates of Istanbul, the Republic of Turkey. +14.3 Arbitration +The following paragraph is added as a new Subsection 14.3 (Arbitration) as it applies for those countries identified below. The provisions of this Subsection 14.3 prevail over those of Subsection 14.2 (Jurisdiction) to the extent permitted by the applicable governing law and rules of procedure: +ASIA PACIFIC +(1) In Cambodia, India, Laos, Philippines, and Vietnam: +Disputes arising out of or in connection with this Agreement will be finally settled by arbitration which will be held in Singapore in accordance with the Arbitration Rules of Singapore International Arbitration Center ("SIAC Rules") then in effect. The arbitration award will be final and binding for the parties without appeal and will be in writing and set forth the findings of fact and the conclusions of law. +The number of arbitrators will be three, with each side to the dispute being entitled to appoint one arbitrator. The two arbitrators appointed by the parties will appoint a third arbitrator who will act as chairman of the proceedings. 
Vacancies in the post of chairman will be filled by the president of the SIAC. Other vacancies will be filled by the respective nominating party. Proceedings will continue from the stage they were at when the vacancy occurred. +If one of the parties refuses or otherwise fails to appoint an arbitrator within 30 days of the date the other party appoints its, the first appointed arbitrator will be the sole arbitrator, provided that the arbitrator was validly and properly appointed. +All proceedings will be conducted, including all documents presented in such proceedings, in the English language. The English language version of this Agreement prevails over any other language version. +(2) In the People's Republic of China: +In case no settlement can be reached, the disputes will be submitted to China International Economic and Trade Arbitration Commission for arbitration according to the then effective rules of the said Arbitration Commission. The arbitration will take place in Beijing and be conducted in Chinese. The arbitration award will be final and binding on both parties. During the course of arbitration, this agreement will continue to be performed except for the part which the parties are disputing and which is undergoing arbitration. +(3) In Indonesia: +Each party will allow the other reasonable opportunity to comply before it claims that the other has not met its obligations under this Agreement. The parties will attempt in good faith to resolve all disputes, disagreements, or claims between the parties relating to this Agreement. Unless otherwise required by applicable law without the possibility of contractual waiver or limitation, i) neither party will bring a legal action, regardless of form, arising out of or related to this Agreement or any transaction under it more than two years after the cause of action arose; and ii) after such time limit, any legal action arising out of this Agreement or any transaction under it and all respective rights related to any such action lapse. +Disputes arising out of or in connection with this Agreement shall be finally settled by arbitration that shall be held in Jakarta, Indonesia in accordance with the rules of Board of the Indonesian National Board of Arbitration (Badan Arbitrase Nasional Indonesia or "BANI") then in effect. The arbitration award shall be final and binding for the parties without appeal and shall be in writing and set forth the findings of fact and the conclusions of law. +The number of arbitrators shall be three, with each side to the dispute being entitled to appoint one arbitrator. The two arbitrators appointed by the parties shall appoint a third arbitrator who shall act as chairman of the proceedings. Vacancies in the post of chairman shall be filled by the chairman of the BANI. Other vacancies shall be filled by the respective nominating party. Proceedings shall continue from the stage they were at when the vacancy occurred. +If one of the parties refuses or otherwise fails to appoint an arbitrator within 30 days of the date the other party appoints its, the first appointed arbitrator shall be the sole arbitrator, provided that the arbitrator was validly and properly appointed. +All proceedings shall be conducted, including all documents presented in such proceedings, in the English and/or Indonesian language. 
+EUROPE, MIDDLE EAST, AND AFRICA +(4) In Albania, Armenia, Azerbaijan, Belarus, Bosnia-Herzegovina, Bulgaria, Croatia, Former Yugoslav Republic of Macedonia, Georgia, Hungary, Kazakhstan, Kyrgyzstan, Moldova, Montenegro, Poland, Romania, Russia, Serbia, Slovakia, Tajikistan, Turkmenistan, Ukraine, and Uzbekistan: +All disputes arising out of this Agreement or related to its violation, termination or nullity will be finally settled under the Rules of Arbitration and Conciliation of the International Arbitral Center of the Federal Economic Chamber in Vienna (Vienna Rules) by three arbitrators appointed in accordance with these rules. The arbitration will be held in Vienna, Austria, and the official language of the proceedings will be English. The decision of the arbitrators will be final and binding upon both parties. Therefore, pursuant to paragraph 598 (2) of the Austrian Code of Civil Procedure, the parties expressly waive the application of paragraph 595 (1) figure 7 of the Code. IBM may, however, institute proceedings in a competent court in the country of installation. +(5) In Estonia, Latvia, and Lithuania: +All disputes arising in connection with this Agreement will be finally settled in arbitration that will be held in Helsinki, Finland in accordance with the arbitration laws of Finland then in effect. Each party will appoint one arbitrator. The arbitrators will then jointly appoint the chairman. If arbitrators cannot agree on the chairman, then the Central Chamber of Commerce in Helsinki will appoint the chairman. +AMERICAS COUNTRY AMENDMENTS +CANADA +10.1 Items for Which IBM May be Liable +The following replaces Item 1 in the first paragraph of this Subsection 10.1 (Items for Which IBM May be Liable): +1) damages for bodily injury (including death) and physical harm to real property and tangible personal property caused by IBM's negligence; and +13. General +The following replaces Item 13.d: +d. Licensee agrees to comply with all applicable export and import laws and regulations, including those of that apply to goods of United States origin and that prohibit or limit export for certain uses or to certain users. +The following replaces Item 13.i: +i. No right or cause of action for any third party is created by this Agreement or any transaction under it, nor is IBM responsible for any third party claims against Licensee except as permitted by the Limitation of Liability section above for bodily injury (including death) or physical harm to real or tangible personal property caused by IBM's negligence for which IBM is legally liable to that third party. +The following is added as Item 13.m: +m. For purposes of this Item 13.m, "Personal Data" refers to information relating to an identified or identifiable individual made available by one of the parties, its personnel or any other individual to the other in connection with this Agreement. The following provisions apply in the event that one party makes Personal Data available to the other: +(1) General +(a) Each party is responsible for complying with any obligations applying to it under applicable Canadian data privacy laws and regulations ("Laws"). +(b) Neither party will request Personal Data beyond what is necessary to fulfill the purpose(s) for which it is requested. The purpose(s) for requesting Personal Data must be reasonable. Each party will agree in advance as to the type of Personal Data that is required to be made available. 
+(2) Security Safeguards +(a) Each party acknowledges that it is solely responsible for determining and communicating to the other the appropriate technological, physical and organizational security measures required to protect Personal Data. +(b) Each party will ensure that Personal Data is protected in accordance with the security safeguards communicated and agreed to by the other. +(c) Each party will ensure that any third party to whom Personal Data is transferred is bound by the applicable terms of this section. +(d) Additional or different services required to comply with the Laws will be deemed a request for new services. +(3) Use +Each party agrees that Personal Data will only be used, accessed, managed, transferred, disclosed to third parties or otherwise processed to fulfill the purpose(s) for which it was made available. +(4) Access Requests +(a) Each party agrees to reasonably cooperate with the other in connection with requests to access or amend Personal Data. +(b) Each party agrees to reimburse the other for any reasonable charges incurred in providing each other assistance. +(c) Each party agrees to amend Personal Data only upon receiving instructions to do so from the other party or its personnel. +(5) Retention +Each party will promptly return to the other or destroy all Personal Data that is no longer necessary to fulfill the purpose(s) for which it was made available, unless otherwise instructed by the other or its personnel or required by law. +(6) Public Bodies Who Are Subject to Public Sector Privacy Legislation +For Licensees who are public bodies subject to public sector privacy legislation, this Item 13.m applies only to Personal Data made available to Licensee in connection with this Agreement, and the obligations in this section apply only to Licensee, except that: 1) section (2)(a) applies only to IBM; 2) sections (1)(a) and (4)(a) apply to both parties; and 3) section (4)(b) and the last sentence in (1)(b) do not apply. +PERU +10. Limitation of Liability +The following is added to the end of this Section 10 (Limitation of Liability): +Except as expressly required by law without the possibility of contractual waiver, Licensee and IBM intend that the limitation of liability in this Limitation of Liability section applies to damages caused by all types of claims and causes of action. If any limitation on or exclusion from liability in this section is held by a court of competent jurisdiction to be unenforceable with respect to a particular claim or cause of action, the parties intend that it nonetheless apply to the maximum extent permitted by applicable law to all other claims and causes of action. +10.1 Items for Which IBM May be Liable +The following is added at the end of this Subsection 10.1: +In accordance with Article 1328 of the Peruvian Civil Code, the limitations and exclusions specified in this section will not apply to damages caused by IBM's willful misconduct ("dolo") or gross negligence ("culpa inexcusable"). +UNITED STATES OF AMERICA +5. Taxes +The following is added at the end of this Section 5 (Taxes) +For Programs delivered electronically in the United States for which Licensee claims a state sales and use tax exemption, Licensee agrees not to receive any tangible personal property (e.g., media and publications) associated with the electronic program. +Licensee agrees to be responsible for any sales and use tax liabilities that may arise as a result of Licensee's subsequent redistribution of Programs after delivery by IBM. +13. 
General +The following is added to Section 13 as Item 13.m: +U.S. Government Users Restricted Rights - Use, duplication or disclosure is restricted by the GSA IT Schedule 70 Contract with the IBM Corporation. +The following is added to Item 13.f: +Each party waives any right to a jury trial in any proceeding arising out of or related to this Agreement. +ASIA PACIFIC COUNTRY AMENDMENTS +AUSTRALIA +5. Taxes +The following sentences replace the first two sentences of Section 5 (Taxes): +If any government or authority imposes a duty, tax (other than income tax), levy, or fee, on this Agreement or on the Program itself, that is not otherwise provided for in the amount payable, Licensee agrees to pay it when IBM invoices Licensee. If the rate of GST changes, IBM may adjust the charge or other amount payable to take into account that change from the date the change becomes effective. +8.1 Limited Warranty +The following is added to Subsection 8.1 (Limited Warranty): +The warranties specified this Section are in addition to any rights Licensee may have under the Competition and Consumer Act 2010 or other legislation and are only limited to the extent permitted by the applicable legislation. +10.1 Items for Which IBM May be Liable +The following is added to Subsection 10.1 (Items for Which IBM May be Liable): +Where IBM is in breach of a condition or warranty implied by the Competition and Consumer Act 2010, IBM's liability is limited to the repair or replacement of the goods, or the supply of equivalent goods. Where that condition or warranty relates to right to sell, quiet possession or clear title, or the goods are of a kind ordinarily obtained for personal, domestic or household use or consumption, then none of the limitations in this paragraph apply. +HONG KONG SAR, MACAU SAR, AND TAIWAN +As applies to licenses obtained in Taiwan and the special administrative regions, phrases throughout this Agreement containing the word "country" (for example, "the country in which the original Licensee was granted the license" and "the country in which Licensee obtained the Program license") are replaced with the following: +(1) In Hong Kong SAR: "Hong Kong SAR" +(2) In Macau SAR: "Macau SAR" except in the Governing Law clause (Section 14.1) +(3) In Taiwan: "Taiwan." +INDIA +10.1 Items for Which IBM May be Liable +The following replaces the terms of Items 1 and 2 of the first paragraph: +1) liability for bodily injury (including death) or damage to real property and tangible personal property will be limited to that caused by IBM's negligence; and 2) as to any other actual damage arising in any situation involving nonperformance by IBM pursuant to, or in any way related to the subject of this Agreement, IBM's liability will be limited to the charge paid by Licensee for the individual Program that is the subject of the claim. +13. General +The following replaces the terms of Item 13.g: +If no suit or other legal action is brought, within three years after the cause of action arose, in respect of any claim that either party may have against the other, the rights of the concerned party in respect of such claim will be forfeited and the other party will stand released from its obligations in respect of such claim. +INDONESIA +3.3 Term and Termination +The following is added to the last paragraph: +Both parties waive the provision of article 1266 of the Indonesian Civil Code, to the extent the article provision requires such court decree for the termination of an agreement creating mutual obligations. 
+JAPAN +13. General +The following is inserted after Item 13.f: +Any doubts concerning this Agreement will be initially resolved between us in good faith and in accordance with the principle of mutual trust. +MALAYSIA +10.2 Items for Which IBM Is not Liable +The word "SPECIAL" in Item 10.2b is deleted. +NEW ZEALAND +8.1 Limited Warranty +The following is added: +The warranties specified in this Section are in addition to any rights Licensee may have under the Consumer Guarantees Act 1993 or other legislation which cannot be excluded or limited. The Consumer Guarantees Act 1993 will not apply in respect of any goods which IBM provides, if Licensee requires the goods for the purposes of a business as defined in that Act. +10. Limitation of Liability +The following is added: +Where Programs are not obtained for the purposes of a business as defined in the Consumer Guarantees Act 1993, the limitations in this Section are subject to the limitations in that Act. +PEOPLE'S REPUBLIC OF CHINA +4. Charges +The following is added: +All banking charges incurred in the People's Republic of China will be borne by Licensee and those incurred outside the People's Republic of China will be borne by IBM. +PHILIPPINES +10.2 Items for Which IBM Is not Liable +The following replaces the terms of Item 10.2b: +b. special (including nominal and exemplary damages), moral, incidental, or indirect damages or for any economic consequential damages; or +SINGAPORE +10.2 Items for Which IBM Is not Liable +The words "SPECIAL" and "ECONOMIC" are deleted from Item 10.2b. +13. General +The following replaces the terms of Item 13.i: +Subject to the rights provided to IBM's suppliers and Program developers as provided in Section 10 above (Limitation of Liability), a person who is not a party to this Agreement will have no right under the Contracts (Right of Third Parties) Act to enforce any of its terms. +TAIWAN +8.1 Limited Warranty +The last paragraph is deleted. +10.1 Items for Which IBM May Be Liable +The following sentences are deleted: +This limit also applies to any of IBM's subcontractors and Program developers. It is the maximum for which IBM and its subcontractors and Program developers are collectively responsible. +EUROPE, MIDDLE EAST, AFRICA (EMEA) COUNTRY AMENDMENTS +EUROPEAN UNION MEMBER STATES +8. Warranty and Exclusions +The following is added to Section 8 (Warranty and Exclusion): +In the European Union ("EU"), consumers have legal rights under applicable national legislation governing the sale of consumer goods. Such rights are not affected by the provisions set out in this Section 8 (Warranty and Exclusions). The territorial scope of the Limited Warranty is worldwide. +EU MEMBER STATES AND THE COUNTRIES IDENTIFIED BELOW +Iceland, Liechtenstein, Norway, Switzerland, Turkey, and any other European country that has enacted local data privacy or protection legislation similar to the EU model. +13. General +The following replaces Item 13.e: +(1) Definitions - For the purposes of this Item 13.e, the following additional definitions apply: +(a) Business Contact Information - business-related contact information disclosed by Licensee to IBM, including names, job titles, business addresses, telephone numbers and email addresses of Licensee's employees and contractors. 
For Austria, Italy and Switzerland, Business Contact Information also includes information about Licensee and its contractors as legal entities (for example, Licensee's revenue data and other transactional information) +(b) Business Contact Personnel - Licensee employees and contractors to whom the Business Contact Information relates. +(c) Data Protection Authority - the authority established by the Data Protection and Electronic Communications Legislation in the applicable country or, for non-EU countries, the authority responsible for supervising the protection of personal data in that country, or (for any of the foregoing) any duly appointed successor entity thereto. +(d) Data Protection & Electronic Communications Legislation - (i) the applicable local legislation and regulations in force implementing the requirements of EU Directive 95/46/EC (on the protection of individuals with regard to the processing of personal data and on the free movement of such data) and of EU Directive 2002/58/EC (concerning the processing of personal data and the protection of privacy in the electronic communications sector); or (ii) for non-EU countries, the legislation and/or regulations passed in the applicable country relating to the protection of personal data and the regulation of electronic communications involving personal data, including (for any of the foregoing) any statutory replacement or modification thereof. +(e) IBM Group - International Business Machines Corporation of Armonk, New York, USA, its subsidiaries, and their respective Business Partners and subcontractors. +(2) Licensee authorizes IBM: +(a) to process and use Business Contact Information within IBM Group in support of Licensee including the provision of support services, and for the purpose of furthering the business relationship between Licensee and IBM Group, including, without limitation, contacting Business Contact Personnel (by email or otherwise) and marketing IBM Group products and services (the "Specified Purpose"); and +(b) to disclose Business Contact Information to other members of IBM Group in pursuit of the Specified Purpose only. +(3) IBM agrees that all Business Contact Information will be processed in accordance with the Data Protection & Electronic Communications Legislation and will be used only for the Specified Purpose. +(4) To the extent required by the Data Protection & Electronic Communications Legislation, Licensee represents that (a) it has obtained (or will obtain) any consents from (and has issued (or will issue) any notices to) the Business Contact Personnel as are necessary in order to enable IBM Group to process and use the Business Contact Information for the Specified Purpose. +(5) Licensee authorizes IBM to transfer Business Contact Information outside the European Economic Area, provided that the transfer is made on contractual terms approved by the Data Protection Authority or the transfer is otherwise permitted under the Data Protection & Electronic Communications Legislation. +AUSTRIA +8.2 Exclusions +The following is deleted from the first paragraph: +MERCHANTABILITY, SATISFACTORY QUALITY +10. Limitation of Liability +The following is added: +The following limitations and exclusions of IBM's liability do not apply for damages caused by gross negligence or willful misconduct. 
+10.1 Items for Which IBM May Be Liable +The following replaces the first sentence in the first paragraph: +Circumstances may arise where, because of a default by IBM in the performance of its obligations under this Agreement or other liability, Licensee is entitled to recover damages from IBM. +In the second sentence of the first paragraph, delete entirely the parenthetical phrase: +"(including fundamental breach, negligence, misrepresentation, or other contract or tort claim)". +10.2 Items for Which IBM Is Not Liable +The following replaces Item 10.2b: +b. indirect damages or consequential damages; or +BELGIUM, FRANCE, ITALY, AND LUXEMBOURG +10. Limitation of Liability +The following replaces the terms of Section 10 (Limitation of Liability) in its entirety: +Except as otherwise provided by mandatory law: +10.1 Items for Which IBM May Be Liable +IBM's entire liability for all claims in the aggregate for any damages and losses that may arise as a consequence of the fulfillment of its obligations under or in connection with this Agreement or due to any other cause related to this Agreement is limited to the compensation of only those damages and losses proved and actually arising as an immediate and direct consequence of the non-fulfillment of such obligations (if IBM is at fault) or of such cause, for a maximum amount equal to the charges (if the Program is subject to fixed term charges, up to twelve months' charges) Licensee paid for the Program that has caused the damages. +The above limitation will not apply to damages for bodily injuries (including death) and damages to real property and tangible personal property for which IBM is legally liable. +10.2 Items for Which IBM Is Not Liable +UNDER NO CIRCUMSTANCES IS IBM OR ANY OF ITS PROGRAM DEVELOPERS LIABLE FOR ANY OF THE FOLLOWING, EVEN IF INFORMED OF THEIR POSSIBILITY: 1) LOSS OF, OR DAMAGE TO, DATA; 2) INCIDENTAL, EXEMPLARY OR INDIRECT DAMAGES, OR FOR ANY ECONOMIC CONSEQUENTIAL DAMAGES; AND / OR 3) LOST PROFITS, BUSINESS, REVENUE, GOODWILL, OR ANTICIPATED SAVINGS, EVEN IF THEY ARISE AS AN IMMEDIATE CONSEQUENCE OF THE EVENT THAT GENERATED THE DAMAGES. +10.3 Suppliers and Program Developers +The limitation and exclusion of liability herein agreed applies not only to the activities performed by IBM but also to the activities performed by its suppliers and Program developers, and represents the maximum amount for which IBM as well as its suppliers and Program developers are collectively responsible. +GERMANY +8.1 Limited Warranty +The following is inserted at the beginning of Section 8.1: +The Warranty Period is twelve months from the date of delivery of the Program to the original Licensee. +8.2 Exclusions +Section 8.2 is deleted in its entirety and replaced with the following: +Section 8.1 defines IBM's entire warranty obligations to Licensee except as otherwise required by applicable statutory law. +10. Limitation of Liability +The following replaces the Limitation of Liability section in its entirety: +a. IBM will be liable without limit for 1) loss or damage caused by a breach of an express guarantee; 2) damages or losses resulting in bodily injury (including death); and 3) damages caused intentionally or by gross negligence. +b. 
In the event of loss, damage and frustrated expenditures caused by slight negligence or in breach of essential contractual obligations, IBM will be liable, regardless of the basis on which Licensee is entitled to claim damages from IBM (including fundamental breach, negligence, misrepresentation, or other contract or tort claim), per claim only up to the greater of 500,000 euro or the charges (if the Program is subject to fixed term charges, up to 12 months' charges) Licensee paid for the Program that caused the loss or damage. A number of defaults which together result in, or contribute to, substantially the same loss or damage will be treated as one default. +c. In the event of loss, damage and frustrated expenditures caused by slight negligence, IBM will not be liable for indirect or consequential damages, even if IBM was informed about the possibility of such loss or damage. +d. In case of delay on IBM's part: 1) IBM will pay to Licensee an amount not exceeding the loss or damage caused by IBM's delay and 2) IBM will be liable only in respect of the resulting damages that Licensee suffers, subject to the provisions of Items a and b above. +13. General +The following replaces the provisions of 13.g: +Any claims resulting from this Agreement are subject to a limitation period of three years, except as stated in Section 8.1 (Limited Warranty) of this Agreement. +The following replaces the provisions of 13.i: +No right or cause of action for any third party is created by this Agreement, nor is IBM responsible for any third party claims against Licensee, except (to the extent permitted in Section 10 (Limitation of Liability)) for: i) bodily injury (including death); or ii) damage to real or tangible personal property for which (in either case) IBM is legally liable to that third party. +IRELAND +8.2 Exclusions +The following paragraph is added: +Except as expressly provided in these terms and conditions, or Section 12 of the Sale of Goods Act 1893 as amended by the Sale of Goods and Supply of Services Act, 1980 (the "1980 Act"), all conditions or warranties (express or implied, statutory or otherwise) are hereby excluded including, without limitation, any warranties implied by the Sale of Goods Act 1893 as amended by the 1980 Act (including, for the avoidance of doubt, Section 39 of the 1980 Act). +IRELAND AND UNITED KINGDOM +2. Agreement Structure +The following sentence is added: +Nothing in this paragraph shall have the effect of excluding or limiting liability for fraud. +10.1 Items for Which IBM May Be Liable +The following replaces the first paragraph of the Subsection: +For the purposes of this section, a "Default" means any act, statement, omission or negligence on the part of IBM in connection with, or in relation to, the subject matter of an Agreement in respect of which IBM is legally liable to Licensee, whether in contract or in tort. A number of Defaults which together result in, or contribute to, substantially the same loss or damage will be treated as one Default. +Circumstances may arise where, because of a Default by IBM in the performance of its obligations under this Agreement or other liability, Licensee is entitled to recover damages from IBM. 
Regardless of the basis on which Licensee is entitled to claim damages from IBM and except as expressly required by law without the possibility of contractual waiver, IBM's entire liability for any one Default will not exceed the amount of any direct damages, to the extent actually suffered by Licensee as an immediate and direct consequence of the default, up to the greater of (1) 500,000 euro (or the equivalent in local currency) or (2) 125% of the charges (if the Program is subject to fixed term charges, up to 12 months' charges) for the Program that is the subject of the claim. Notwithstanding the foregoing, the amount of any damages for bodily injury (including death) and damage to real property and tangible personal property for which IBM is legally liable is not subject to such limitation. +10.2 Items for Which IBM is Not Liable +The following replaces Items 10.2b and 10.2c: +b. special, incidental, exemplary, or indirect damages or consequential damages; or +c. wasted management time or lost profits, business, revenue, goodwill, or anticipated savings. +Z125-3301-14 (07/2011) -Z125-3301-14 (07/2011) diff --git a/NAVIGATOR/README.md b/NAVIGATOR/README.md deleted file mode 100644 index 890aee7a..00000000 --- a/NAVIGATOR/README.md +++ /dev/null @@ -1,39 +0,0 @@ -# Deploy Business Automation Navigator - -IBM® Business Automation Navigator provides a console to work with content from multiple content servers. The console enables teams to view their documents, folders, and searches in ways that help them to complete their tasks. - -You can use IBM Business Automation Navigator with IBM FileNet Content Manager to accomplish a wide range of business needs: -- Browse for content that is stored in a repository. -- Search for content by running a text search. -- Save document, folders, and other content as favorites. -- Edit documents. -- Add documents to content servers. -- Organize documents by creating folders and adding content to the folders. -- Use the version control rules that are set on the repository. -- Create teamspaces to provide a focused view of the content and objects in the repository. - -For more information see [Business Automation Navigator in the Knowledge Center](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.offerings/topics/con_ban.html) - -## Requirements and Prerequisites - -To prepare to deploy on Red Hat OpenShift, see the requirements and prerequisites in the [Deploying on Red Hat OpenShift on IBM Cloud](platform/README_Eval_ROKS.md) readme. - -Perform the following tasks to prepare to deploy your Business Automation Navigator images on Kubernetes: - -- Prepare your Kubernetes environment. See [Preparing to install automation containers on Kubernetes](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_k8s.html) - -- Download the PPA. Refer to the top repository [readme](../README.md) to find instructions on how to push and tag the product container images to your Docker registry. - -- Prepare your Business Automation Navigator environment. These procedures include setting up databases, LDAP, storage, and configuration files that are required for use and operation. If you plan to use the YAML file method, you also create YAML files that include the applicable parameter values for your deployment. 
You must complete all of the [preparation steps for Business Automation Navigator](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_bank8s.html) before you are ready to deploy the container images. - - -## Deploying - -You can deploy your container images with the following methods: - -- [Using Helm charts](helm-charts/README.md) -- [Using Kubernetes YAML](k8s-yaml/README.md) - -## Completing post deployment configuration - -After you deploy your container images, you perform some required and some optional steps to get your Business Automation Navigator environment up and running. For detailed instructions, see [Configuring IBM Business Automation Navigator in a container environment](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_ecmconfigbank8s.html). diff --git a/NAVIGATOR/configuration/.gitkeep b/NAVIGATOR/configuration/.gitkeep deleted file mode 100644 index e69de29b..00000000 diff --git a/NAVIGATOR/configuration/ICN/configDropins/overrides/DB2JCCDriver.xml b/NAVIGATOR/configuration/ICN/configDropins/overrides/DB2JCCDriver.xml deleted file mode 100644 index 937c2ce0..00000000 --- a/NAVIGATOR/configuration/ICN/configDropins/overrides/DB2JCCDriver.xml +++ /dev/null @@ -1,6 +0,0 @@ - - - - - - diff --git a/NAVIGATOR/configuration/ICN/configDropins/overrides/OraJDBCDriver.xml b/NAVIGATOR/configuration/ICN/configDropins/overrides/OraJDBCDriver.xml deleted file mode 100644 index aa2cffb9..00000000 --- a/NAVIGATOR/configuration/ICN/configDropins/overrides/OraJDBCDriver.xml +++ /dev/null @@ -1,7 +0,0 @@ - - - - - - - diff --git a/NAVIGATOR/configuration/ICN/configDropins/overrides/ldap_AD.xml b/NAVIGATOR/configuration/ICN/configDropins/overrides/ldap_AD.xml deleted file mode 100644 index c8fa5155..00000000 --- a/NAVIGATOR/configuration/ICN/configDropins/overrides/ldap_AD.xml +++ /dev/null @@ -1,17 +0,0 @@ - - - - - - diff --git a/NAVIGATOR/configuration/ICN/configDropins/overrides/ldap_TDS.xml b/NAVIGATOR/configuration/ICN/configDropins/overrides/ldap_TDS.xml deleted file mode 100644 index e5725463..00000000 --- a/NAVIGATOR/configuration/ICN/configDropins/overrides/ldap_TDS.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - - - - diff --git a/NAVIGATOR/configuration/README.md b/NAVIGATOR/configuration/README.md deleted file mode 100644 index 519acfd2..00000000 --- a/NAVIGATOR/configuration/README.md +++ /dev/null @@ -1,8 +0,0 @@ -# Configuration - -Follow the instructions in [Preparing to install Business Automation Navigator](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_bank8s.html) to set up the following environment elements: - -- LDAP -- Databases -- Configuration files for LDAP and Databases -- YAML files (for YAML deployments) diff --git a/NAVIGATOR/helm-charts/.gitkeep b/NAVIGATOR/helm-charts/.gitkeep deleted file mode 100644 index e69de29b..00000000 diff --git a/NAVIGATOR/helm-charts/README.md b/NAVIGATOR/helm-charts/README.md deleted file mode 100644 index d30b8013..00000000 --- a/NAVIGATOR/helm-charts/README.md +++ /dev/null @@ -1,186 +0,0 @@ - -# Deploying with Helm charts - -> **NOTE**: To deploy on IBM Cloud Private 3.1.2 you must use Business Automation Configuration Container (BACC). 
- -## Requirements and Prerequisites - -Ensure that you have completed the following tasks: - -- [Preparing to install Business Automation Navigator](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_bank8s.html) - -- [Preparing your Kubernetes server, including Kubernetes, Helm Tiller, and Kubernetes command line](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_k8s.html) - -- [Downloading the PPA archive](../../README.md) - -The Helm command for deploying the Business Automation Navigator image includes a number of required command parameters for specific environment and configuration settings. Review the reference topic for these parameters and determine the values for your environment as part of your preparation: - -- [Business Automation Navigator Helm command parameters](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_banparamsk8s_helm.html) - -## Tips: - -- On OpenShift, an expired Docker secret can cause errors during deployment. If an admin.registrykey secret already exists and has expired, delete the secret with the following command: - ```console - kubectl delete secret admin.registrykey -n - ``` - - Then generate a new Docker secret with the following command: - - ```console - kubectl create secret docker-registry admin.registrykey --docker-server= --docker-username= --docker-password=$(oc whoami -t) --docker-email=ecmtest@ibm.com -n - ``` - - -## Initializing the command line interface -Use the following commands to initialize the command line interface: -1. Run the init command: - ```console - $ helm init --client-only - ``` -2. Check whether the command line can connect to the remote Tiller server: - ```console - $ helm version - Client: &version.Version{SemVer:"v2.9.1", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"} - Server: &version.Version{SemVer:"v2.9.1", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"} - ``` - -## Deploying images -Provide the parameter values for your environment and run the command to deploy the image. - > **Tip**: Copy the sample command to a file, edit the parameter values, and use the updated command for deployment. - > **Tip**: The values that are included for 'resources' in the helm install / upgrade commands are suggestions only. Each deployment must take into account the demands that its particular workload will place on the system. - -For deployments on Red Hat OpenShift, note the following considerations for whether you want to use the Arbitrary UID capability in your environment: - -- If you don't want to use Arbitrary UID capability in your Red Hat OpenShift environment, deploy the images as described in the following sections. - -- If you do want to use Arbitrary UID, prepare for deployment by checking and, if needed, editing your Security Context Constraint: - - Set the desired user id range of minimum and maximum values for the project namespace: - - ```$ oc edit namespace ``` - - For the uid-range annotation, verify that a value similar to the following is specified: - - ```$ openshift.io/sa.scc.uid-range=1000490000/10000 ``` - - This range is similar to the default range for Red Hat OpenShift. - - - Remove authenticated users from anyuid (if set): - - ```$ oc adm policy remove-scc-from-group anyuid system:authenticated ``` - - - Update the runAsUser value.
- Find the entry: - - ``` - $ oc get scc -o yaml - runAsUser: - type: RunAsAny - ``` - - Update the value: - - ``` - $ oc get scc -o yaml - runAsUser: - type: MustRunAsRange - ``` - -To deploy Business Automation Navigator: - - ```console - $ helm install ibm-dba-navigator-3.2.0.tgz --name dbamc-navigator --namespace dbamc --set icnProductionSetting.license=accept,icnProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,icnProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,service.externalmetricsPort=9103,icnProductionSetting.icnDBType=db2,icnProductionSetting.icnJNDIDSName=ECMClientDS,icnProductionSetting.icnSChema=ICNDB,icnProductionSetting.icnTableSpace=ICNDBTS,icnProductionSetting.icnAdmin=ceadmin,icnProductionSetting.navigatorMode=0,dataVolume.existingPVCforICNCfgstore=icn-cfgstore,dataVolume.existingPVCforICNLogstore=icn-logstore,dataVolume.existingPVCforICNPluginstore=icn-pluginstore,dataVolume.existingPVCforICNVWCachestore=icn-vw-cachestore,dataVolume.existingPVCforICNVWLogstore=icn-vw-logstore,dataVolume.existingPVCforICNAsperastore=icn-asperastore,autoscaling.enabled=False,replicaCount=1,imagePullSecrets.name=admin.registrykey,image.repository=:/dbamc/navigator,image.tag=ga-306-icn - ``` -Replace with correct registry url. For example --> docker-registry.default.svc - -> **Reminder**: After you deploy, return to the instructions in the Knowledge Center, [Configuring IBM Business Automation Navigator in a container environment](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_ecmconfigbank8s.html), to get your Business Automation Navigator environment up and running. - -## Upgrading deployments - > **Tip**: You can discover the necessary resource values for the deployment from corresponding product deployments in IBM Cloud Private Console and Openshift Container Platform. - -### Before you begin -Before you run the upgrade commands, you must prepare the environment for upgrades by updating permissions on your persistent volumes. Complete the preparation steps in the following topic before you start the upgrade: [Upgrading Business Automation Navigator releases](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.upgrading/topics/tsk_cn_upgrade.html) - -You must also [download the PPA archive](../../README.md) before you begin the upgrade process. - -### Upgrading on Red Hat OpenShift - -For upgrades on Red Hat OpenShift, note the following considerations for whether you want to use the Arbitrary UID capability in your updated environment: - -- If you don't want to use Arbitrary UID capability in your Red Hat OpenShift environment, use the instructions in Upgrading on certified Kubernetes platforms. - -- If you do want to use Arbitrary UID, use the following steps to prepare for the upgrade: - -1. Check and if necessary edit your Security Context Constraint to set desired user id range of minimum and maximum values for the project namespace: - - Set the desired user id range of minimum and maximum values for the project namespace: - - ```$ oc edit namespace ``` - - For the uid-range annotation, verify that a value similar to the following is specified: - - ```$ openshift.io/sa.scc.uid-range=1000490000/10000 ``` - - This range is similar to the default range for Red Hat OpenShift. - - - Remove authenticated users from anyuid (if set): - - ```$ oc adm policy remove-scc-from-group anyuid system:authenticated ``` - - - Update the runAsUser value. 
- Find the entry: - - ``` - $ oc get scc -o yaml - runAsUser: - type: RunAsAny - ``` - - Update the value: - - ``` - $ oc get scc -o yaml - runAsUser: - type: MustRunAsRange - ``` -2. Stop all existing containers. - -3. Run the new install (instead of upgrade) command for the container. Update the command provided to include the values for your existing environment. - -> **NOTE**: In this context, the install commands update the application. Updates for your existing data happen automatically when the updated applications start. - -To deploy Business Automation Navigator: - - ```console - $ helm install ibm-dba-navigator-3.2.0.tgz --name dbamc-navigator --namespace dbamc --set icnProductionSetting.license=accept,icnProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,icnProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,service.externalmetricsPort=9103,icnProductionSetting.icnDBType=db2,icnProductionSetting.icnJNDIDSName=ECMClientDS,icnProductionSetting.icnSChema=ICNDB,icnProductionSetting.icnTableSpace=ICNDBTS,icnProductionSetting.icnAdmin=ceadmin,icnProductionSetting.navigatorMode=0,dataVolume.existingPVCforICNCfgstore=icn-cfgstore,dataVolume.existingPVCforICNLogstore=icn-logstore,dataVolume.existingPVCforICNPluginstore=icn-pluginstore,dataVolume.existingPVCforICNVWCachestore=icn-vw-cachestore,dataVolume.existingPVCforICNVWLogstore=icn-vw-logstore,dataVolume.existingPVCforICNAsperastore=icn-asperastore,autoscaling.enabled=False,replicaCount=1,imagePullSecrets.name=admin.registrykey,image.repository=:/dbamc/navigator,image.tag=ga-306-icn - ``` -Replace with correct registry url. For example --> docker-registry.default.svc - - -## Upgrading on certified Kubernetes platforms - -To deploy Business Automation Navigator: - -On Red Hat OpenShift: - -``` - $ helm upgrade dbamc-helm-navigator ibm-dba-navigator-3.2.0.tgz --reuse-values --set image.repository=:/dbamc/navigator/navigator,image.tag=ga-306-icn-if002,resources.requests.cpu=500m,resources.requests.memory=512Mi,icnProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,icnProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,imagePullSecrets.name=admin.registrykey,resources.limits.cpu=1,resources.limits.memory=1024Mi,log.format=json,service.externalmetricsPort=9103 -``` -On non-Red Hat OpenShift: - -``` - $ helm upgrade dbamc-helm-navigator ibm-dba-navigator-3.2.0.tgz --tls --reuse-values --set image.repository=:/dbamc/navigator,image.tag=ga-306-icn-if002,icnProductionSetting.JVM_INITIAL_HEAP_PERCENTAGE=40,icnProductionSetting.JVM_MAX_HEAP_PERCENTAGE=66,service.externalmetricsPort=9103,runAsUser=50001 -``` -Replace with correct registry url. For example --> docker-registry.default.svc - -## Uninstalling a Kubernetes release of Business Automation Navigator - -To uninstall and delete a release named `my-icn-prod-release`, use the following command: - -```console -$ helm delete my-icn-prod-release --purge -``` - -The command removes all the Kubernetes components associated with the release, except any Persistent Volume Claims (PVCs). This is the default behavior of Kubernetes, and ensures that valuable data is not deleted. 
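For example, before cleaning anything up you can confirm what the delete left behind. This is a minimal sketch, assuming the `my-icn-prod-release` release name and the `dbamc` namespace used elsewhere in this readme; substitute the values from your own deployment:

```console
$ helm ls --all | grep my-icn-prod-release   # a purged release is no longer listed
$ kubectl get pvc -n dbamc                   # the PVCs (for example, icn-cfgstore and icn-logstore) are still present
```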
To delete the persisted data of the release, you can delete the PVC using the following command: - -```console -$ kubectl delete pvc my-icn-prod-release-icn-pvclaim -``` diff --git a/NAVIGATOR/helm-charts/ibm-dba-navigator-3.0.0.tgz b/NAVIGATOR/helm-charts/ibm-dba-navigator-3.0.0.tgz deleted file mode 100644 index f1e5c93e..00000000 Binary files a/NAVIGATOR/helm-charts/ibm-dba-navigator-3.0.0.tgz and /dev/null differ diff --git a/NAVIGATOR/helm-charts/ibm-dba-navigator-3.2.0.tgz b/NAVIGATOR/helm-charts/ibm-dba-navigator-3.2.0.tgz deleted file mode 100644 index fdb5d97d..00000000 Binary files a/NAVIGATOR/helm-charts/ibm-dba-navigator-3.2.0.tgz and /dev/null differ diff --git a/NAVIGATOR/k8s-yaml/.gitkeep b/NAVIGATOR/k8s-yaml/.gitkeep deleted file mode 100644 index e69de29b..00000000 diff --git a/NAVIGATOR/k8s-yaml/README.md b/NAVIGATOR/k8s-yaml/README.md deleted file mode 100644 index 767fd98f..00000000 --- a/NAVIGATOR/k8s-yaml/README.md +++ /dev/null @@ -1,150 +0,0 @@ -# Deploying with YAML files - -## Requirements and Prerequisites - -Ensure that you have completed the following tasks: - -- [Preparing your Kubernetes server](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_k8s.html) -- [Downloading the PPA archive](../../README.md) -- [Preparing to install Business Automation Navigator](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_bank8s.html) - -## Deploying component images - -Use the command line to deploy the image using the parameters in the appropriate YAML file. You also use the command line to determine access information for your deployed images. - -For deployments on Red Hat OpenShift, note the following considerations for whether you want to use the Arbitrary UID capability in your environment: - -- If you don't want to use Arbitrary UID capability in your Red Hat OpenShift environment, deploy the image as described in the following section. - -- If you do want to use Arbitrary UID, prepare for deployment by updating your deployment file and editing your Security Context Constraint: - - - Remove the following line from your deployment YAML file: `runAsUser: 50001`. - - - In your SCC, set the desired user id range of minimum and maximum values for the project namespace: - - ```$ oc edit namespace ``` - - For the uid-range annotation, verify that a value similar to the following is specified: - - ```$ openshift.io/sa.scc.uid-range=1000490000/10000 ``` - - This range is similar to the default range for Red Hat OpenShift. - - - Remove authenticated users from anyuid (if set): - - ```$ oc adm policy remove-scc-from-group anyuid system:authenticated ``` - - - Update the runAsUser value. - Find the entry: - - ``` - $ oc get scc -o yaml - runAsUser: - type: RunAsAny - ``` - - Update the value: - - ``` - $ oc get scc -o yaml - runAsUser: - type: MustRunAsRange - ``` - - - -To deploy Business Automation Navigator: - 1. Use the deployment file to deploy Business Automation Navigator: - - ```kubectl apply -f icn-deploy.yml``` - 2. 
Run following command to get the Public IP and port to access Business Automation Navigator: - - ```kubectl get svc | grep ecm-icn``` - - -> **Reminder**: After you deploy, return to the instructions in the Knowledge Center, [Configuring IBM Business Automation Navigator in a container environment](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.install/k8s_topics/tsk_ecmconfigbank8s.html), to get your Business Automation Navigator environment up and running. - -## Upgrading deployments - > **Tip**: You can discover the necessary resource values for the deployment from corresponding product deployments in IBM Cloud Private Console and Openshift Container Platform. - -### Before you begin -Before you run the upgrade commands, you must prepare the environment for upgrades by updating permissions on your persistent volumes. Complete the preparation steps in the following topic before you start the upgrade: [Upgrading Business Automation Navigator releases](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.upgrading/topics/tsk_cn_upgrade.html) - -If you already have a customized YAML file for your existing deployment, update the file with the new parameters for this release before you apply the YAML as part of the upgrade. See the sample YAML files for more information. - -You must also [download the PPA archive](../../README.md) before you begin the upgrade process. - - -### Preparing for upgrade on Red Hat OpenShift - -For upgrades on Red Hat OpenShift, note the following considerations when you want to use the Arbitrary UID capability in your updated environment: - -- If you don't want to use Arbitrary UID capability in your Red Hat OpenShift environment, use the instructions in Running the upgrade deployments. - -- If you do want to use Arbitrary UID, use the following steps to prepare for the upgrade: - -1. Check and if necessary edit your Security Context Constraint to set desired user id range of minimum and maximum values for the project namespace: - - Set the desired user id range of minimum and maximum values for the project namespace: - - ```$ oc edit namespace ``` - - For the uid-range annotation, verify that a value similar to the following is specified: - - ```$ openshift.io/sa.scc.uid-range=1000490000/10000 ``` - - This range is similar to the default range for Red Hat OpenShift. - - - Remove authenticated users from anyuid (if set): - - ```$ oc adm policy remove-scc-from-group anyuid system:authenticated ``` - - - Update the runAsUser value. - Find the entry: - - ``` - $ oc get scc -o yaml - runAsUser: - type: RunAsAny - ``` - - Update the value: - - ``` - $ oc get scc -o yaml - runAsUser: - type: MustRunAsRange - ``` - -2. Remove the following line from your deployment YAML file: `runAsUser: 50001`. - -3. Update other values in your deployment YAML file to reflect the values for your existing environment and any updates in the new samples. - -4. Stop all existing containers. - -5. Run the deployment commands for the containers, in the following section. -### Running the upgrade deployment - -Reminder: Update the values in your deployment YAML file to reflect the values for your existing environment. - -To deploy Business Automation Navigator: - 1. Use the deployment file to deploy Business Automation Navigator: - - ```kubectl apply -f icn-deploy.yml``` - 2. 
Run following command to get the Public IP and port to access Business Automation Navigator: - - ```kubectl get svc | grep ecm-icn``` - - -## Uninstalling a Kubernetes release of Business Automation Navigator - -To uninstall and delete the Business Automation Navigator release, use the following command: - -```console -$ kubectl delete -f -``` - -The command removes all the Kubernetes components associated with the release, except any Persistent Volume Claims (PVCs). This is the default behavior of Kubernetes, and ensures that valuable data is not deleted. To delete the persisted data of the release, you can delete the PVC using the following command: - -```console -$ kubectl delete pvc my-icn-prod-release-icn-pvclaim -``` diff --git a/NAVIGATOR/k8s-yaml/icn-deploy.yml b/NAVIGATOR/k8s-yaml/icn-deploy.yml deleted file mode 100644 index 6230c0aa..00000000 --- a/NAVIGATOR/k8s-yaml/icn-deploy.yml +++ /dev/null @@ -1,191 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: ecm-icn-svc -spec: - ports: - - name: http - protocol: TCP - port: 9080 - targetPort: 9080 - - name: https - protocol: TCP - port: 9443 - targetPort: 9443 - selector: - app: icnserver-cluster1 - type: NodePort - sessionAffinity: ClientIP ---- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: ecm-icn-np - namespace: $KUBE_NAME_SPACE -spec: - podSelector: {} - policyTypes: - - Ingress - - Egress - ingress: - - {} - egress: - - ports: - - port: 53 - protocol: UDP - - port: 53 - protocol: TCP - - to: - - namespaceSelector: {} ---- -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: ecm-icn -spec: - replicas: 1 - strategy: - type: RollingUpdate - template: - metadata: - labels: - app: icnserver-cluster1 - spec: - imagePullSecrets: - - name: admin.registrykey - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - icnserver-cluster1 - topologyKey: "kubernetes.io/hostname" - containers: - - image: /default/navigator:latest - imagePullPolicy: Always - name: ecm-icn - securityContext: - # If deployment on OpenShift and image supports arbitrary uid, - # remove runAsUser and pods will run with arbitrarily assigned user ID. 
- runAsUser: 50001 - allowPrivilegeEscalation: false - resources: - requests: - memory: 512Mi - cpu: 500m - limits: - memory: 1536Mi - cpu: 1 - ports: - - containerPort: 9080 - name: http - - containerPort: 9443 - name: https - env: - - name: LICENSE - value: "accept" - - name: PRODUCT - value: "DBAMC" - - name: JVM_INITIAL_HEAP_PERCENTAGE - value: "40" - - name: JVM_MAX_HEAP_PERCENTAGE - value: "66" - - name: TZ - value: "Etc/UTC" - - name: JVM_INITIAL_HEAP_PERCENTAGE - value: "40" - - name: JVM_MAX_HEAP_PERCENTAGE - value: "66" - - name: JVM_CUSTOMIZE_OPTIONS - value: "" - - name: ICNDBTYPE - value: "db2" - - name: ICNJNDIDS - value: "ECMClientDS" - - name: ICNSCHEMA - value: "ICNDB" - - name: ICNTS - value: "ICNDB" - - name: ICNADMIN - value: "ceadmin" - - name: navigatorMode - value: "3" - - name: enableAppcues - value: "false" - - name: allowRemotePluginsViaHttp - value: "false" - - name: MY_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: MY_POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: MY_POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - readinessProbe: - httpGet: - path: /navigator - port: 9080 - httpHeaders: - - name: Content-Encoding - value: gzip - initialDelaySeconds: 180 - periodSeconds: 5 - livenessProbe: - httpGet: - path: /navigator - port: 9080 - httpHeaders: - - name: Content-Encoding - value: gzip - initialDelaySeconds: 600 - periodSeconds: 5 - volumeMounts: - - name: icncfgstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides" - subPath: configDropins/overrides - - name: icnlogstore-pvc - mountPath: "/opt/ibm/wlp/usr/servers/defaultServer/logs" - subPath: logs - - name: icnpluginstore-pvc - mountPath: "/opt/ibm/plugins" - subPath: plugins - - name: icnvwcachestore-pvc - mountPath: "/opt/ibm/viewerconfig/cache" - subPath: viewercache - - name: icnvwlogstore-pvc - mountPath: "/opt/ibm/viewerconfig/logs" - subPath: viewerlogs - - name: icnasperastore-pvc - mountPath: "/opt/ibm/Aspera" - subPath: Aspera - - volumes: - - name: icncfgstore-pvc - persistentVolumeClaim: - claimName: "icn-cfgstore" - - name: icnlogstore-pvc - persistentVolumeClaim: - claimName: "icn-logstore" - - name: icnpluginstore-pvc - persistentVolumeClaim: - claimName: "icn-pluginstore" - - name: icnvwcachestore-pvc - persistentVolumeClaim: - claimName: "icn-vwcachestore" - - name: icnvwlogstore-pvc - persistentVolumeClaim: - claimName: "icn-vwlogstore" - - name: icnasperastore-pvc - persistentVolumeClaim: - claimName: "icn-asperastore" diff --git a/NAVIGATOR/platform/README_Eval_ROKS.md b/NAVIGATOR/platform/README_Eval_ROKS.md deleted file mode 100644 index e5452d39..00000000 --- a/NAVIGATOR/platform/README_Eval_ROKS.md +++ /dev/null @@ -1,108 +0,0 @@ -# Deploying on Red Hat OpenShift on IBM Cloud - -Before you deploy, you must configure your IBM Public Cloud environment, create an OpenShift cluster, prepare your Navigator environment, and load the product images to the registry. Use the following information to configure your environment and deploy the images. - -## Before you begin: Create a cluster - -Before you run any install command, make sure that you have created the IBM Cloud cluster, prepared your own environment, and loaded the product image to the registry. - -For detailed information, see [Installing containers on Red Hat OpenShift by using CLIs](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_ROKS.html). 
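Before you start Step 1, it can help to confirm that your command line points at the correct cluster and that the cluster storage classes are visible, because Step 1 asks you to supply a storage class name for the persistent volume claims. The following is a minimal sketch, assuming `oc` and `kubectl` are already installed and that the angle-bracket placeholders are replaced with values from your own cluster:

```console
$ oc login --token=<api_token> --server=<cluster_api_url>   # log in to the ROKS cluster
$ oc project <namespace>                                    # switch to the project you created for Navigator
$ kubectl get nodes                                         # worker nodes should report a Ready status
$ kubectl get storageclass                                  # note the class name to use as storageClassName in Step 1
```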
- - -## Step 1: Prepare your Navigator environment - -To prepare your Navigator environment, you set up databases, LDAP services, storage, and configuration files that are required for use and operation after deployment. - -Use the following instructions to prepare your Navigator environment: [Preparing to install IBM FileNet Content Manager](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_bank8s.html) - -**Important:** The instructions provided for preparing storage are specific to non-managed OpenShift deployments. For OpenShift deployments, the cluster you create for OpenShift includes attached storage. As a result, you don't create persistent volumes for the storage- only the listed persistent volume claims. Obtain the storage class name for this OpenShift cluster storage, and assign that value as the `storageClassName` value when you create the required persistent volumes claims for your Navigator environment as described in [Creating volumes and folders for deployment on Kubernetes](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_ban_volumesk8s.html). - -The following example uses the storage class name `ibmc-file-retain-bronze`: - ```yaml - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: example-pvc - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 8Gi - storageClassName: ibmc-file-retain-bronze - ``` - -## Step 2: Deploy the Business Automation Navigator images - -When the container images are in the registry, you can complete environment configuration for each component and then run the chart installation. - -1. Create a NGINX pod to mount the persistent volumes. The following sample creates a pod named `example-pod-ecm-eval`: [NGINX Pod Sample](nginx_sample.yaml) - -2. Copy the necessary database and LDAP configuration XML files that you prepared for your Navigator environment to the mounted volumes, for example, by accessing the NGINX pod that you created: - ```console - $ kubectl cp datasource.xml nginx-pod:/path/to/corresponding/directory - ``` -**Remember:** Make sure the permissions for all the folders set the user and group ownership to 50001:50000. - -3. Use the instructions in the [Helm chart readme](../helm-charts) to confirm your environment configuration and install the Helm charts. - - -## Step 3: Enable Ingress to access your applications -1. Create an SSL certificate: - ```console - $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $(pwd)/tls.key -out $(pwd)/tls.crt -subj "/CN=dbamc.content - ``` -2. Create a secret using the certificate: - ```console - $ kubectl create secret tls icp4a --key $(pwd)/tls.key --cert $(pwd)/tls.crt - ``` -3. Create an Ingress service for the Navigator component by using the example `ingress_service.yaml` file in the OpenShift console or CLI: [ingress_service.yaml](ingress_service.yaml) - -4. Apply the Ingress service: - ``` console - $ kubectl apply -f ingress_service.yaml - ``` -5. Create an Ingress endpoint using the [ingress_icn.yaml](ingress_icn.yaml). -6. Apply the Ingress: - ``` console - $ kubectl apply -f ingress_icn.yaml - ``` -7. 
To use the Ingress for the repository connection URL in Navigator, CMIS, External Share, and GraphQL run the following commands: - ```console - $ openssl pkcs12 -export -in $(pwd)/tls.crt -inkey $(pwd)/tls.key -out $(pwd)/newkey.p12 - ``` - ```console - $ keytool -importkeystore -srckeystore $(pwd)/newkey.p12 \ - -srcstoretype PKCS12 \ - -destkeystore $(pwd)/newkey.jks \ - -deststoretype JKS - ``` -8. Copy the `newkey.jks` file to the `overrides` directory. - ``` console - $ cp $(pwd)/newkey.jks /some/directory/icn/configDropins/overrides - ``` -9. Create a new XML file, such as `key.xml`, and save it to the `configDropins/Overrides` folder: - ``` xml - - - - ``` -10. Edit the deployments for all of the components to resolve the hostname in the pods: - ``` console - $ kubectl edit deployments dbamc-icn-ibm-dba-navigator - ``` - Add the following lines in the section `spec.template.spec`. - ``` yaml - hostAliases: - - ip: "" - hostnames: - - "dbamc.content" - ``` -11. Get the Ingress IP by running the following command: - ``` console - $ kubectl get ingress - ``` -12. After you save your changes, new pods are created that include the changes. When the pods are up and running, update any existing repository connection. The new repository connection URL is something like: `https://icp4a-content/navigator` - -13. On any system where you want to access the applications, update the localhost file `/etc/hosts` with the Ingress IP and the hostname. diff --git a/NAVIGATOR/platform/ingress_icn.yaml b/NAVIGATOR/platform/ingress_icn.yaml deleted file mode 100644 index 230cf588..00000000 --- a/NAVIGATOR/platform/ingress_icn.yaml +++ /dev/null @@ -1,27 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: dbamc-ingress - annotations: - # The NGINX ingress annotations contains a new prefix nginx.ingress.kubernetes.io. - # To avoid breaking a running NGINX ingress controller, specify both new and old prefixes. 
- kubernetes.io/ingress.class: nginx - ingress.kubernetes.io/force-ssl-redirect: "true" - ingress.bluemix.net/sticky-cookie-services: "serviceName=ibacc-icn-ingress-svc name=icncookie expires=7300s path=/navigator hash=sha1" -spec: - rules: - - host: icp4a.content - http: - paths: - - backend: - serviceName: ibacc-icn-ingress-svc - servicePort: 9080 - path: /navigator - - backend: - serviceName: ibacc-icn-ingress-svc - servicePort: 9080 - path: /sync - tls: - - hosts: - - icp4a.content - secretName: icp4a diff --git a/NAVIGATOR/platform/ingress_service.yaml b/NAVIGATOR/platform/ingress_service.yaml deleted file mode 100644 index 400e84ba..00000000 --- a/NAVIGATOR/platform/ingress_service.yaml +++ /dev/null @@ -1,18 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: ibacc-icn-ingress-svc -spec: - ports: - - name: http - protocol: TCP - port: 9080 - targetPort: 9080 - - name: https - protocol: TCP - port: 9443 - targetPort: 9443 - selector: - app: ibm-dba-navigator - type: ClusterIP - diff --git a/NAVIGATOR/platform/nginx_sample.yaml b/NAVIGATOR/platform/nginx_sample.yaml deleted file mode 100644 index bb2954aa..00000000 --- a/NAVIGATOR/platform/nginx_sample.yaml +++ /dev/null @@ -1,45 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: example-pod-ecm-eval - labels: - app: hello-openshift - namespace: ecm-eval -spec: - volumes: - - name: ecm-eval-cfg-pvc-0 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-0 - - name: ecm-eval-cfg-pvc-1 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-1 - - name: ecm-eval-cfg-pvc-2 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-2 - - name: ecm-eval-cfg-pvc-3 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-3 - - name: ecm-eval-cfg-pvc-4 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-4 - - name: ecm-eval-cfg-pvc-5 - persistentVolumeClaim: - claimName: ecm-eval-cfg-pvc-5 - containers: - - name: hello-openshift - image: nginx:latest - ports: - - containerPort: 8080 - volumeMounts: - - name: ecm-eval-cfg-pvc-0 - mountPath: /icn/configDropin/overrides - - name: ecm-eval-cfg-pvc-1 - mountPath: /icn/logs - - name: ecm-eval-cfg-pvc-2 - mountPath: /icn/plugins - - name: ecm-eval-cfg-pvc-3 - mountPath: /icn/viewerlog - - name: ecm-eval-cfg-pvc-4 - mountPath: /icn/viewercache - - name: ecm-eval-cfg-pvc-5 - mountPath: /icn/aspera diff --git a/ODM/README.md b/ODM/README.md deleted file mode 100644 index 9a2d4e5e..00000000 --- a/ODM/README.md +++ /dev/null @@ -1,116 +0,0 @@ -# Install IBM Operational Decision Manager 8.10.2 on Certified Kubernetes - -The following architectures are supported for Operational Decision Manager 8.10.2 on Certified Kubernetes: -- AMD64 (or x86_64), which is the 64-bit edition for Linux x86. - -> **Note**: Rule Designer is installed as an update site from the [Eclipse Marketplace](https://marketplace.eclipse.org/content/ibm-operational-decision-manager-developers-v-8102-rule-designer) into an existing version of Eclipse. 
- -## Option 1: Install a release for evaluation purposes - -The following instructions are to install the Operational Decision Manager for developers Helm chart: - - * [Installing Operational Decision Manager for developers on MiniKube](platform/README_Eval_Minikube.md) - * [Installing Operational Decision Manager for developers on Openshift](platform/README_Eval_Openshift.md) - * [Installing Operational Decision Manager for developers on Red Hat OpenShift on IBM Cloud](platform/README_Eval_ROKS.md) - -## Option 2: Install a production ready release - -The installation of Operational Decision Manager 8.10.2 uses a `ibm-odm-prod` Helm chart, also known as the ODM for production Helm chart. The chart is a package of preconfigured Kubernetes resources that bootstraps an ODM for production deployment on a Kubernetes cluster. You customize the deployment by changing and adding configuration parameters. The default values are appropriate to a production environment, but it is likely that you want to configure at least the security of your kubernetes deployment. - -The `ibm-odm-prod` Helm chart includes five containers corresponding to the following services. -- Decision Center Business Console and Enterprise Console -- Decision Server Console -- Decision Server Runtime -- Decision Server Runner -- (Optional) Internal PostgreSQL DB - -The services require CPU and memory resources. The following table lists the minimum requirements that are used as default values. - -| Service | CPU Minimum (m) | Memory Minimum (Mi) | -| ---------- | ----------- | ------------------- | -| Decision Center | 500 | 512 | -| Decision Runner | 500 | 512 | -| Decision Server Console | 500 | 512 | -| Decision Server Runtime | 500 | 512 | -| **Total** | **2000** (2CPU) | **2048** (2Gb) | -| (Optional) Internal DB | 500 | 512 | - -### *Optional:* Before you install a production ready release with customizations - -If you want to customize your Operational Decision Manager installation, go to the [IBM Cloud Pak for Automation 19.0.x](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_install_odm.html) Knowledge Center and choose which customizations you want to apply. - * [Configuring PVUs](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_config_pvu.html) - * [Defining the security certificate](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_replace_security_certificate.html) - * [Configuring the LDAP and user registry](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/con_config_user_registry.html) - * [Configuring a custom external database](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_custom_external_db.html) - * [Configuring the ODM event emitter](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_custom_emitters.html) - * [Configuring Decision Center customization](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_custom_dc.html) - -> **Note**: The [configuration](configuration) folder provides sample configuration files that you might find useful. Download the files and edit them for your own customizations. - -After you noted the values of the configuration parameters that are needed to customize Operational Decision Manager, choose one of the following deployment options to complete the installation. 
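If you intend to install with a values file rather than individual `--set` arguments (both approaches are described in the Helm chart readme), the parameters that you noted can be collected in a small file. The following `myvalues.yaml` is only a sketch: the database name, credentials, and replica count are placeholder values.

```yaml
# myvalues.yaml - a minimal sketch; all values below are placeholders
internalDatabase:
  databaseName: my-db
  user: my-user
  password: my-password
decisionServerRuntime:
  replicaCount: 2
service:
  enableTLS: true
  type: NodePort
```

You would pass this file to the chart with `-f myvalues.yaml`, as shown in the Helm CLI instructions.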
- -The following instructions are to install the ODM for production Helm chart: - - * [Install Operational Decision Manager on MiniKube](platform/README_Minikube.md) - * [Install Operational Decision Manager on Openshift](platform/README_Openshift.md) - * [Install Operational Decision Manager on IBM Cloud OpenShift cluster](platform/README_ROKS.md) - * [Install Operational Decision Manager on other Kubernetes by using Helm and Tiller](helm-charts/README.md) - * [Install Operational Decision Manager on other Kubernetes by using Kubernetes YAML](k8s-yaml/README.md) - - - -## Post-installation steps - -### Step 1: Verify a deployment - -You can check the status of the pods by using the following command: -```console -$ kubectl get pods -``` - -When all of the pods are *Running* and *Ready*, retrieve the cluster-info-ip name and port numbers with the following commands: - -
-$ kubectl cluster-info
-Kubernetes master is running at https://cluster-info-ip:8443
-CoreDNS is running at https://cluster-info-ip:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
-
-$ kubectl get services
-NAME                                                  TYPE        CLUSTER-IP  EXTERNAL-IP   PORT(S)                    AGE
-kubernetes                                            ClusterIP   ****        none          443/TCP                    9m
-my-odm-prod-release-dbserver                          ClusterIP   ****        none          5432/TCP                   3m
-my-odm-prod-release-odm-decisioncenter                NodePort    ****        none          9453:dcs-port/TCP   3m
-my-odm-prod-release-odm-decisionrunner                NodePort    ****        none          9443:dr-port/TCP    3m
-my-odm-prod-release-odm-decisionserverconsole         NodePort    ****        none          9443:dsc-port/TCP   3m
-my-odm-prod-release-odm-decisionserverruntime         NodePort    ****        none          9443:dsr-port/TCP   3m
-
- -With the cluster-info-ip name and port numbers, you have access to the applications with the following URLs: - -|Component|URL|Username|Password| -|:-----:|:-----:|:-----:|:-----:| -| Decision Server Console | https://*cluster-info-ip*:*dsc-port*/res |resAdmin/odmAdmin|resAdmin/odmAdmin| -| Decision Server Runtime |https://*cluster-info-ip*:*dsr-port*/DecisionService |N/A|N/A| -| Decision Center Business Console | https://*cluster-info-ip*:*dcs-port*/decisioncenter |rtsAdmin/odmAdmin|rtsAdmin/odmAdmin| -| Decision Center Enterprise Console | https://*cluster-info-ip*:*dcs-port*/teamserver |rtsAdmin/odmAdmin|rtsAdmin/odmAdmin| -| Decision Runner | https://*cluster-info-ip*:*dr-port*/DecisionRunner |resDeployer/odmAdmin|resDeployer/odmAdmin| - -To further debug and diagnose deployment problems in the Kubernetes cluster, use the `kubectl cluster-info dump` command. - -For more information about how to check the state and recent events of your pods, see -[Troubleshooting](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_troubleshooting.html). - -### Step 2: Synchronize users and groups - -If you customized the default user registry, you must synchronize the registry with the Decision Center database. For more information, see -[Synchronizing users and groups in Decision Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_synchronize_users.html). - -### Step 3: Manage your Operational Decision Manager deployment - -It is possible to update a deployment after it is installed. Use the following tasks in IBM Knowledge Center to update a deployment whenever you need, and as many times as you need. - * [Scaling deployments](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.managing/k8s_topics/tsk_odm_scaling.html?view=kc) - * [Customizing log levels](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.managing/k8s_topics/tsk_odm_custom_logging.html?view=kc) - -## Upgrade a release - -Refer to the [Upgrade section](helm-charts/README.md#upgrade-a-release) in the helm-charts folder for instructions using Tiller, or the [Upgrade section](k8s-yaml/README.md#upgrade-a-release) in the k8s-yaml folder for instructions on how to use Kubernetes YAML. diff --git a/ODM/README_config.md b/ODM/README_config.md new file mode 100644 index 00000000..4ad1f1dd --- /dev/null +++ b/ODM/README_config.md @@ -0,0 +1,71 @@ +# Configuring IBM Operational Decision Manager 8.10.3 + +These instructions cover the basic configuration of ODM. + +The following architectures are supported for Operational Decision Manager 8.10.3: +- AMD64 (or x86_64), which is the 64-bit edition for Linux x86. + +> **Note**: Rule Designer is installed as an update site from the [Eclipse Marketplace](https://marketplace.eclipse.org/content/ibm-operational-decision-manager-developers-v-8103-rule-designer) into an existing version of Eclipse. + +ODM for production includes five containers corresponding to the following services. + - Decision Center Business Console and Enterprise Console + - Decision Server Console + - Decision Server Runtime + - Decision Server Runner + - (Optional) Internal PostgreSQL DB + +The services require CPU and memory resources. The following table lists the minimum requirements that are used as default values. 
+
+| Service | CPU Minimum (m) | Memory Minimum (Mi) |
+| ---------- | ----------- | ------------------- |
+| Decision Center | 500 | 1500 |
+| Decision Runner | 500 | 512 |
+| Decision Server Console | 500 | 512 |
+| Decision Server Runtime | 500 | 512 |
+| **Total** | **2000** (2 CPU) | **3036** (~3 GB) |
+| (Optional) Internal DB | 500 | 512 |
+
+### Step 1: Customize a production-ready ODM (*Optional*)
+
+The installation of Operational Decision Manager 8.10.3 can be customized by changing and adding configuration parameters. The default values are appropriate for a production environment, but you probably want to configure at least the security of your Kubernetes deployment.
+
+Make a note of the name and value of each parameter that you want to configure so that they are at hand when you enter them in the custom resource YAML file.
+
+Go to the [IBM Cloud Pak for Automation 19.0.x](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_install_odm.html) Knowledge Center and choose which customizations you want to apply.
+ * [Defining the security certificate](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/tsk_replace_security_certificate.html)
+ * [Configuring the LDAP and user registry](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/con_config_user_registry.html)
+ * [Configuring a custom external database](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/tsk_custom_external_db.html)
+ * [Configuring the ODM event emitter](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/tsk_custom_emitters.html)
+ * [Configuring Decision Center customization](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/tsk_custom_dc.html)
+ * [Configuring Decision Center time zone](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.managing/op_topics/tsk_set_jvmargs.html)
+
+> **Note**: The [configuration](configuration) folder provides sample configuration files that you might find useful. Download the files and edit them for your own customizations.
+
+### Step 2: Configure the custom resource YAML file for your ODM instance
+
+Before you configure, make sure that you have prepared your environment. For more information, see [Preparing to install ODM for production](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_preparing_odmk8s.html).
+
+In your `descriptors/my_icp4a_cr.yaml` file, update the `odm_configuration` section with the configuration parameters from *Step 1*. Refer to the [`default-values.yaml`](configuration/default-values.yaml) file to find the default value of each ODM parameter, and customize these values in your file; a minimal sketch of this section is shown after Step 3.
+
+### Step 3: Complete the installation
+
+When you have finished editing the configuration file, go back to the relevant install or update page to configure other components and complete the deployment with the operator.
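+
+For illustration only, the following is a minimal sketch of the `odm_configuration` section that Step 2 adds to `descriptors/my_icp4a_cr.yaml`. The instance name, image repository, and secret name are placeholders, and only the parameters that you change from [`default-values.yaml`](configuration/default-values.yaml) need to appear.
+
+```yaml
+apiVersion: icp4a.ibm.com/v1
+kind: ICP4ACluster
+metadata:
+  name: odm-prod                           # placeholder instance name
+spec:
+  odm_configuration:
+    image:
+      repository: my-registry/odm          # placeholder image repository
+      tag: 8.10.3
+    service:
+      enableTLS: true
+      type: NodePort
+    decisionCenter:
+      replicaCount: 1
+    internalDatabase:
+      databaseName: odmdb
+      secretCredentials: my-odm-db-secret  # placeholder secret that holds the database user and password
+```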
+
+Install pages:
+ - [Managed OpenShift installation page](../platform/roks/install.md#step-6-configure-the-software-that-you-want-to-install)
+ - [OpenShift installation page](../platform/ocp/install.md#step-6-configure-the-software-that-you-want-to-install)
+ - [Certified Kubernetes installation page](../platform/k8s/install.md#step-6-configure-the-software-that-you-want-to-install)
+
+Update pages:
+ - [Managed OpenShift update page](../platform/roks/update.md)
+ - [OpenShift update page](../platform/ocp/update.md#step-1-modify-the-software-that-is-installed)
+ - [Certified Kubernetes update page](../platform/k8s/update.md)
+
+### Step 4: Manage your Operational Decision Manager deployment
+
+If you customized the default user registry, you must synchronize the registry with the Decision Center database. For more information, see
+[Synchronizing users and groups in Decision Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/tsk_synchronize_users.html).
+
+You might need to update an ODM deployment after it is installed. Use the following tasks in IBM Knowledge Center to update a deployment whenever you need, and as many times as you need.
+ * [Customizing JVM arguments](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.managing/op_topics/tsk_set_jvmargs.html)
+ * [Customizing log levels](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.managing/op_topics/tsk_odm_custom_logging.html)
diff --git a/ODM/README_migrate.md b/ODM/README_migrate.md
new file mode 100644
index 00000000..18a9908e
--- /dev/null
+++ b/ODM/README_migrate.md
@@ -0,0 +1,46 @@
+# Migrating IBM Operational Decision Manager 8.10.x data to 8.10.3
+
+## Step 1: Review the database configuration parameters
+
+Operational Decision Manager persists data in a database. An external Db2 or PostgreSQL database uses the following configuration parameters:
+
+ - Server type: **externalDatabase.type**
+ - Server name: **externalDatabase.serverName**
+ - Port: **externalDatabase.port**
+ - Database name: **externalDatabase.databaseName**
+ - Secret credentials: **externalDatabase.secretCredentials**
+
+Note the name of the secret that holds the database user and password that are used to secure access to the database.
+
+A customized database uses the following configuration parameters:
+
+ - Data source secret: **externalCustomDatabase.datasourceRef**
+ - Persistent Volume Claim to access the JDBC database driver: **externalCustomDatabase.driverPvc**
+
+If you customized the Decision Center Business console with your own implementation of dynamic domains, custom value editors, or custom ruleset extractors, you must note the name of the YAML file that you previously created, for example *custom-dc-libs-pvc.yaml*.
+
+An internal database uses a predefined persistent volume claim (PVC) or Kubernetes dynamic provisioning. You must have a persistent volume (PV) already created with the accessMode attribute set to ReadWriteOnce for the Operational Decision Manager containers. Dynamic provisioning uses the default storageClass defined by the Kubernetes administrator, or a custom storageClass that overrides the default.
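+
+For reference only, a persistent volume that satisfies this requirement might look like the following sketch. The name, capacity, and `hostPath` are placeholders; in most clusters the volume is backed by NFS or a storage class instead of a host path.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: odm-internal-db-pv        # placeholder name
+spec:
+  accessModes:
+    - ReadWriteOnce               # required access mode for the internal database
+  capacity:
+    storage: 5Gi                  # matches the default internalDatabase storage request
+  persistentVolumeReclaimPolicy: Retain
+  hostPath:
+    path: /data/odm-db            # placeholder; replace with your storage backend
+```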
+ +Predefined PVC + + - **internalDatabase.persistence.enabled**: true (default) + - **internalDatabase.persistence.useDynamicProvisioning**: false (default) + +Kubernetes dynamic provisioning + + - **internalDatabase.persistence.enabled**: true (default) + - **internalDatabase.persistence.useDynamicProvisioning**: true + +## Step 2: Review LDAP settings + +Make a note of the Lightweight Directory Access Protocol (LDAP) parameters that are used to connect to the LDAP server to validate users. The Directory service server has a number of mandatory configuration parameters, so save these values somewhere and refer to them when you configure the custom resource YAML file. For more information, see [LDAP configuration parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_k8s_ldap.html). + +## Step 3: Review other customizations you applied + +If you customized your Operational Decision Manager installation, go to the [IBM Cloud Pak for Automation 19.0.x](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/con_odm_prod.html) Knowledge Center and remind yourself of the customizations you applied and need to apply again in the new ODM instance. + +## Step 4: Go back to the platform readme to migrate other components + +- [Managed OpenShift migrate page](../platform/roks/migrate.md) +- [OpenShift migrate page](../platform/ocp/migrate.md) +- [Kubernetes migrate page](../platform/k8s/migrate.md) diff --git a/ODM/configuration/.gitkeep b/ODM/configuration/.gitkeep deleted file mode 100644 index e69de29b..00000000 diff --git a/ODM/configuration/default-values.yaml b/ODM/configuration/default-values.yaml new file mode 100644 index 00000000..a12d9f9b --- /dev/null +++ b/ODM/configuration/default-values.yaml @@ -0,0 +1,126 @@ +# Default values for odm installation. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. +apiVersion: icp4a.ibm.com/v1 +kind: ICP4ACluster +metadata: + name: odm-demo + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba +spec: + odm_configuration: + image: + repository: "" + pullPolicy: IfNotPresent + tag: 8.10.3 + ## Secrets must be manually created in the namespace. + ## ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod + ## Ex : pullSecrets: admin.registrykey + pullSecrets: + + ## Architecture - e.g. amd64, ppc64le. If left empty, the architecture will be determined automatically. + ## You can use kubectl version command to determine the architecture on the desired worker node. 
+ arch: "" + + service: + enableTLS: true + type: NodePort + + decisionServerRuntime: + enabled: true + replicaCount: 1 + resources: + requests: + cpu: 500m + memory: 512Mi + limits: + cpu: 2 + memory: 4096Mi + + decisionServerConsole: + resources: + requests: + cpu: 500m + memory: 512Mi + limits: + cpu: 2 + memory: 1024Mi + + decisionCenter: + enabled: true + persistenceLocale: en_US + replicaCount: 1 + resources: + requests: + cpu: 500m + memory: 512Mi + limits: + cpu: 2 + memory: 4096Mi + + decisionRunner: + enabled: true + replicaCount: 1 + resources: + requests: + cpu: 500m + memory: 512Mi + limits: + cpu: 2 + memory: 4096Mi + + internalDatabase: + databaseName: odmdb + secretCredentials: "TOBEFILL" + persistence: + enabled: true + useDynamicProvisioning: false + storageClassName: "" + resources: + requests: + storage: 5Gi + securityContext: + runAsUser: 0 + resources: + requests: + cpu: 500m + memory: 512Mi + limits: + cpu: 2 + memory: 4096Mi + + externalDatabase: + type: "" + serverName: "" + databaseName: "" + user: "" + password: "" + port: "" + + externalCustomDatabase: + datasourceRef: + driverPvc: + + readinessProbe: + initialDelaySeconds: 5 + periodSeconds: 5 + failureThreshold: 45 + timeoutSeconds: 5 + + livenessProbe: + initialDelaySeconds: 300 + periodSeconds: 10 + failureThreshold: 10 + timeoutSeconds: 5 + + customization: + securitySecretRef: + baiEmitterSecretRef: + authSecretRef: + dedicatedNodeLabel: + + productName: IBM Cloud Pak for Automation + productID: 5737-I23 + kubeVersion: DBAMC diff --git a/ODM/configuration/evaluation/odm-eval-without-pv.yaml b/ODM/configuration/evaluation/odm-eval-without-pv.yaml new file mode 100644 index 00000000..78d7526c --- /dev/null +++ b/ODM/configuration/evaluation/odm-eval-without-pv.yaml @@ -0,0 +1,138 @@ +--- +# Source: ibm-odm-dev/templates/service.yaml +apiVersion: v1 +kind: Service +metadata: + name: odm-eval-ibm-odm-dev + labels: + app: ibm-odm-dev + chart: ibm-odm-dev-2.3.0 + release: odm-eval + heritage: Tiller + app.kubernetes.io/instance: odm-eval + app.kubernetes.io/managed-by: Tiller + app.kubernetes.io/name: ibm-odm-dev + helm.sh/chart: ibm-odm-dev-2.3.0 +spec: + type: NodePort + ports: + - port: 9060 + targetPort: 9060 + protocol: TCP + selector: + run: ibm-odm-dev + app: ibm-odm-dev + release: odm-eval + +--- +# Source: ibm-odm-dev/templates/deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: odm-eval-ibm-odm-dev + labels: + app: ibm-odm-dev + chart: ibm-odm-dev-2.3.0 + release: odm-eval + heritage: Tiller + app.kubernetes.io/instance: odm-eval + app.kubernetes.io/managed-by: Tiller + app.kubernetes.io/name: ibm-odm-dev + helm.sh/chart: ibm-odm-dev-2.3.0 +spec: + replicas: 1 + selector: + matchLabels: + release: odm-eval + run: ibm-odm-dev + template: + metadata: + labels: + app.kubernetes.io/instance: ibm-odm-dev + app.kubernetes.io/managed-by: Tiller + app.kubernetes.io/name: ibm-odm-dev + helm.sh/chart: ibm-odm-dev + run: ibm-odm-dev + app: ibm-odm-dev + chart: ibm-odm-dev-2.3.0 + release: odm-eval + heritage: Tiller + annotations: + productName: "IBM Operational Decision Manager for Developers" + productID: "OperationalDecisionManagerForDevelopers" + productVersion: 8.10.3.0 + spec: + hostNetwork: false + hostPID: false + hostIPC: false + securityContext: + runAsNonRoot: true + runAsUser: 1001 + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + #If you specify multiple nodeSelectorTerms associated with nodeAffinity types, + #then the pod can be scheduled onto a 
node if one of the nodeSelectorTerms is satisfied. + # + #If you specify multiple matchExpressions associated with nodeSelectorTerms, + #then the pod can be scheduled onto a node only if all matchExpressions can be satisfied. + # + #valid operators: In, NotIn, Exists, DoesNotExist, Gt, Lt + nodeSelectorTerms: + - matchExpressions: + - key: beta.kubernetes.io/arch + operator: In + values: + - amd64 + volumes: + containers: + - name: ibm-odm-dev + image: ibmcom/odm:8.10.3.0_2.3.0-amd64 + securityContext: + runAsUser: 1001 + runAsNonRoot: true + privileged: false + readOnlyRootFilesystem: false + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + imagePullPolicy: IfNotPresent + env: + - name: LICENSE + value: "view" + - name: DB_TYPE + value: "h2" + - name: SAMPLE + value: "true" + - name: DC_PERSISTENCE_LOCALE + value: "en_US" + - name: "RELEASE_NAME" + value: odm-eval + ports: + - containerPort: 9060 + # + readinessProbe: + httpGet: + scheme: HTTP + path: /decisioncenter/healthCheck + port: 9060 + initialDelaySeconds: 10 + periodSeconds: 5 + failureThreshold: 45 + livenessProbe: + httpGet: + scheme: HTTP + path: /decisioncenter/healthCheck + port: 9060 + initialDelaySeconds: 300 + periodSeconds: 10 + failureThreshold: 10 + resources: + limits: + cpu: 2 + memory: 2048Mi + requests: + cpu: 1 + memory: 1024Mi + diff --git a/ODM/configuration/odm-eval.yaml b/ODM/configuration/evaluation/odm-eval.yaml similarity index 95% rename from ODM/configuration/odm-eval.yaml rename to ODM/configuration/evaluation/odm-eval.yaml index 15dbfa35..205711fe 100644 --- a/ODM/configuration/odm-eval.yaml +++ b/ODM/configuration/evaluation/odm-eval.yaml @@ -6,7 +6,7 @@ metadata: name: odm-eval-odm-pvclaim labels: app: odm-eval-ibm-odm-dev - chart: "ibm-odm-dev-2.2.1" + chart: "ibm-odm-dev-2.3.0" release: "odm-eval" heritage: "Tiller" spec: @@ -30,7 +30,7 @@ metadata: name: odm-eval-ibm-odm-dev labels: app: ibm-odm-dev - chart: ibm-odm-dev-2.2.1 + chart: ibm-odm-dev-2.3.0 release: odm-eval heritage: Tiller spec: @@ -53,7 +53,7 @@ metadata: name: odm-eval-ibm-odm-dev labels: app: ibm-odm-dev - chart: ibm-odm-dev-2.2.1 + chart: ibm-odm-dev-2.3.0 release: odm-eval heritage: Tiller spec: @@ -67,13 +67,13 @@ spec: labels: run: ibm-odm-dev app: ibm-odm-dev - chart: ibm-odm-dev-2.2.1 + chart: ibm-odm-dev-2.3.0 release: odm-eval heritage: Tiller annotations: productName: "IBM Operational Decision Manager for Developers" productID: "OperationalDecisionManagerForDevelopers" - productVersion: 8.10.2.0 + productVersion: 8.10.3.0 spec: hostNetwork: false hostPID: false @@ -103,7 +103,7 @@ spec: claimName: odm-eval-odm-pvclaim containers: - name: ibm-odm-dev - image: ibmcom/odm:8.10.2.0_2.2.1-amd64 + image: ibmcom/odm:8.10.3.0_2.3.0-amd64 securityContext: runAsUser: 1001 runAsNonRoot: true diff --git a/ODM/configuration/logging/logging.xml b/ODM/configuration/logging/logging.xml new file mode 100644 index 00000000..a441b16e --- /dev/null +++ b/ODM/configuration/logging/logging.xml @@ -0,0 +1,4 @@ + + + + diff --git a/ODM/configuration/sample-values-custom-configuration.yaml b/ODM/configuration/sample-values-custom-configuration.yaml new file mode 100644 index 00000000..1267dd1d --- /dev/null +++ b/ODM/configuration/sample-values-custom-configuration.yaml @@ -0,0 +1,44 @@ +# Sample values for odm installation using custom configuration. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. 
+apiVersion: icp4a.ibm.com/v1 +kind: ICP4ACluster +metadata: + name: odm-demo-external-custom-db + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba +spec: + odm_configuration: + image: + repository: "" + pullPolicy: IfNotPresent + tag: 8.10.3 + decisionCenter: + # Configuring Decision Center customization + # Following instructions at https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/tsk_custom_emitters.html + customlibPvc: + + # Customizing a Decision Center time zone + # Following instructions at https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.managing/op_topics/tsk_set_jvmargs.html + jvmOptionsRef: my-odm-dc-jvm-options-configmap + + # Configuring a custom external database + # Following instructions at https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/tsk_custom_external_db.html + externalCustomDatabase: + datasourceRef: customdatasource-secret + driverPvc: customdatasource-pvc + + customization: + # Defining the security certificate + # Following instructions at https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/tsk_replace_security_certificate.html + securitySecretRef: mysecuritysecret + + # Configuring the ODM event emitter + # Following instructions at https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/tsk_custom_emitters.html + baiEmitterSecretRef: mybaieventsecret + + # Configuring the LDAP and user registry + # Following instructions at https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.offerings/topics/con_config_user_registry.html + authSecretRef: my-auth-secret diff --git a/ODM/configuration/sample-values.yaml b/ODM/configuration/sample-values.yaml deleted file mode 100755 index 2f9b15e9..00000000 --- a/ODM/configuration/sample-values.yaml +++ /dev/null @@ -1,117 +0,0 @@ -# Default values for odmcharts. -# This is a YAML-formatted file. -# Declare variables to be passed into your templates. -image: - repository: "" - pullPolicy: IfNotPresent -## Optionally specify an array of imagePullSecrets. -## Secrets must be manually created in the namespace. -## ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod -## - name: admin.registrykey - pullSecrets: - -## Architecture - e.g. amd64, ppc64le. If left empty, the architecture will be determined automatically. -## You can use kubectl version command to determine the architecture on the desired worker node. 
- arch: "" - -service: - enableTLS: true - type: NodePort - -decisionServerRuntime: - enabled: true - replicaCount: 1 - resources: - requests: - cpu: 500m - memory: 512Mi - limits: - cpu: 2 - memory: 4096Mi - -decisionServerConsole: - resources: - requests: - cpu: 500m - memory: 512Mi - limits: - cpu: 2 - memory: 1024Mi - -decisionCenter: - enabled: true - persistenceLocale: en_US - replicaCount: 1 - resources: - requests: - cpu: 500m - memory: 512Mi - limits: - cpu: 2 - memory: 4096Mi - -decisionRunner: - enabled: true - replicaCount: 1 - resources: - requests: - cpu: 500m - memory: 512Mi - limits: - cpu: 2 - memory: 4096Mi - -internalDatabase: - databaseName: odmdb - user: odmusr - password: "odmpwd" - persistence: - enabled: true - useDynamicProvisioning: false - storageClassName: "" - resources: - requests: - storage: 5Gi - securityContext: - runAsUser: 0 - resources: - requests: - cpu: 500m - memory: 512Mi - limits: - cpu: 2 - memory: 4096Mi - -externalDatabase: - type: "" - serverName: "" - databaseName: "" - user: "" - password: "" - port: "" - -externalCustomDatabase: - datasourceRef: - driverPvc: - -readinessProbe: - initialDelaySeconds: 5 - periodSeconds: 5 - failureThreshold: 45 - timeoutSeconds: 5 - -livenessProbe: - initialDelaySeconds: 300 - periodSeconds: 10 - failureThreshold: 10 - timeoutSeconds: 5 - -customization: - securitySecretRef: - baiEmitterSecretRef: - authSecretRef: - dedicatedNodeLabel: - - productName: IBM Cloud Pak for Automation - productID: 5737-I23 - kubeVersion: DBAMC diff --git a/ODM/configuration/sample-webSecurity-LDAP.xml b/ODM/configuration/security/sample-webSecurity-LDAP.xml similarity index 100% rename from ODM/configuration/sample-webSecurity-LDAP.xml rename to ODM/configuration/security/sample-webSecurity-LDAP.xml diff --git a/ODM/configuration/sample-webSecurity-basic-registry.xml b/ODM/configuration/security/sample-webSecurity-basic-registry.xml similarity index 100% rename from ODM/configuration/sample-webSecurity-basic-registry.xml rename to ODM/configuration/security/sample-webSecurity-basic-registry.xml diff --git a/ODM/helm-charts/.gitkeep b/ODM/helm-charts/.gitkeep deleted file mode 100644 index e69de29b..00000000 diff --git a/ODM/helm-charts/README.md b/ODM/helm-charts/README.md deleted file mode 100644 index 08020279..00000000 --- a/ODM/helm-charts/README.md +++ /dev/null @@ -1,128 +0,0 @@ -# Install IBM Operational Decision Manager with the Helm CLI - -A [Helm chart](https://helm.sh/) is a Package Manager for Kubernetes to help you manage (install/upgrade/update) your Kubernetes deployment. If you are using Helm on a cluster that you completely control, like Minikube or a cluster on a private network in which sharing is not a concern, the default installation that applies no security configuration is the easiest option. - -However, if your cluster is exposed to a larger network or if you share your cluster with others – production clusters fall into this category – you must secure your installation to prevent careless or malicious actors from damaging the cluster or its data. To secure Helm for use in a production environment and other multi-tenant scenarios, see [Securing a Helm installation](https://helm.sh/docs/using_helm/#securing-your-helm-installation). - -Before you install make sure that you have prepared your environment. 
For more information, see [Preparing to install ODM for production](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_preparing_odmk8s.html) as well as [Customizing ODM for production](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_install_odm.html). - -1. If Helm is not installed in your Kubernetes cluster, install [Helm 2.11.0](/~https://github.com/helm/helm/releases/tag/v2.11.0). -2. When Helm is ready, initialize the local CLI and install Tiller. - - ```console - $ helm init - ``` - Tiller is now installed in the Kubernetes cluster with the current-context configuration. - - > **Important**: Helm looks for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set. If your administrator installed Tiller in a namespace other than kube-system, make sure to set TILLER_NAMESPACE before you use the following helm commands, or add --tiller-namespace to each helm command. - - By default, Tiller does not have authentication enabled. For more information about configuring strong TLS authentication, see the [Tiller TLS guide](https://helm.sh/docs/using_helm/#using-ssl-between-helm-and-tiller). - -3. Download the `ibm-odm-prod-2.2.1.tgz` Helm chart from the GitHub repository. - - [ibm-odm-prod-2.2.1.tgz](ibm-odm-prod-2.2.1.tgz) for Operational Decision Manager 8.10.2 - - If you have not done so yet, follow the instructions to download the IBM Operational Decision Manager images and the loadimages.sh file in [Download PPA and load images](../../README.md#step-2-download-a-product-package-from-ppa-and-load-the-images). - -4. Install a Kubernetes release with the default configuration and a name of `my-odm-prod-release` by using the following command: - - ```console - $ helm install --name my-odm-prod-release \ - /path/to/ibm-odm-prod-2.2.1.tgz - ``` - The package is deployed asynchronously in a matter of minutes, and is composed of several services. - - > **Note**: You can check the status of the pods that have been created: - ```console - $ kubectl get pods - NAME READY STATUS RESTARTS AGE - my-odm-prod-release-dbserver-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisioncenter-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionrunner-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionserverconsole-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionserverruntime-*** 1/1 Running 0 44m - ``` - -5. List the helm releases in your cluster. - - ```console - $ helm ls - ``` - The release is an instance of the `ibm-odm-prod` chart. All the Operational Decision Manager components are now running in a Kubernetes cluster. - - To verify a deployment, go back to the [Post installation steps](../README.md#post-installation-steps). - -## Customize a Kubernetes release of Operational Decision Manager - -Refer to the [ODM for production Certified Kubernetes parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_parameters_prod.html) for a complete list of values that you can configure. - -### To customize the helm install with --set key=value arguments - -Using the `helm install` command, you can specify each parameter with a `--set key=value` argument. For example, the following command sets 3 parameters for the internal database. 
- -```console -$ helm install --name my-odm-prod-release \ - --set internalDatabase.databaseName=my-db \ - --set internalDatabase.user=my-user \ - --set internalDatabase.password=my-password \ - /path/to/ibm-odm-prod-2.2.1.tgz -``` - -> **New in 19.0.1**: Use the new `customlibPvc` parameter to customize Decision Center in your release. Use the name of the persistent volume claim (PVC) you set up when you prepared the release as the parameter value. For more information, see [Preparing to install Operational Decision Manager](https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_preparing_odmk8s.html). -```console ---set decisionCenter.customlibPvc=custom-dc-libs-pvc -``` - -### To customize the helm install with a YAML file - -You can use a custom-made .yaml file to specify the values of the parameters when you install the chart. For example, the following command uses the `myvalues.yaml` file. - -```console -$ helm install --name my-odm-prod-release -f myvalues.yaml /path/to/ibm-odm-prod-2.2.1.tgz -``` - -> **Tip**: Refer to the [`sample-values.yaml`](../configuration/sample-values.yaml) file to find the default values used by the `ibm-odm-prod` chart. - -## Upgrade a release - -1. [Download the latest PPA file from IBM Passport Advantage and load the new images.](../../README.md#step-2-download-a-product-package-from-ppa-and-load-the-images) - -2. Run the helm upgrade command on the release that you want to upgrade. The following example command upgrades a release `my-odm-prod-release` with the new Helm chart. - ```console - $ helm upgrade my-odm-prod-release /path/to/ibm-odm-prod-2.2.1.tgz --set image.tag=8.10.2.1 --reuse-values - ``` - -3. Verify that the version of Decision Center and the Decision Server console is the new version and they are running on the same URL and port as before. - -4. If your release uses an internal database, go to the `my-odm-prod-release-dbserver` pod and change the `volumeMounts` definition in the deployment YAML file. The following definition is from a previous version. - - ```console - "volumeMounts": [ { - "name": "my-odm-prod-release-ibm-odm-prod-volume", - "mountPath": "/var/lib/postgresql/", - "subPath": "pgdata" } ], - ``` - The definition for chart version 2.2.1 must concatenate the `mountPath` and `SubPath` parameters. - - ```console - "volumeMounts": [ { - "name": "my-odm-prod-release-ibm-odm-prod-volume", - "mountPath": "/var/lib/postgresql/pgdata" } ], - ``` - - > **Caution**: If you do not make this change, historical data from Decision Center and Decision Server is not available in the upgrade. - - After you make the change, restart the pod. - -## Uninstall a Kubernetes release of Operational Decision Manager - -To uninstall and delete a release named `my-odm-prod-release`, use the following command: - -```console -$ helm delete my-odm-prod-release --purge -``` - -The command removes all the Kubernetes components associated with the release, except any Persistent Volume Claims (PVCs). This is the default behavior of Kubernetes, and ensures that valuable data is not deleted. 
To delete the persisted data of the release, you can delete the PVC using the following command: - -```console -$ kubectl delete pvc my-odm-prod-release-odm-pvclaim -``` diff --git a/ODM/helm-charts/ibm-odm-prod-2.2.1.tgz b/ODM/helm-charts/ibm-odm-prod-2.2.1.tgz deleted file mode 100644 index 46164500..00000000 Binary files a/ODM/helm-charts/ibm-odm-prod-2.2.1.tgz and /dev/null differ diff --git a/ODM/k8s-yaml/.gitkeep b/ODM/k8s-yaml/.gitkeep deleted file mode 100644 index e69de29b..00000000 diff --git a/ODM/k8s-yaml/README.md b/ODM/k8s-yaml/README.md deleted file mode 100644 index 2b00a5dc..00000000 --- a/ODM/k8s-yaml/README.md +++ /dev/null @@ -1,131 +0,0 @@ -# Install IBM Operational Decision Manager with the Kubernetes CLI - -If you prefer to use a simpler deployment process that uses a native Kubernetes authorization mechanism (RBAC) instead of Helm and Tiller, use the Helm command line interface (CLI) to generate a Kubernetes manifest. If you choose to use Kubernetes YAML you cannot use certain capabilities of Helm to manage your deployment. - -Before you install make sure that you have prepared your environment. For more information, see [Preparing to install ODM for production](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_preparing_odmk8s.html) as well as [Customizing ODM for production](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_install_odm.html). - -1. If Helm is not installed in your Kubernetes cluster, install [Helm 2.11.0](/~https://github.com/helm/helm/releases/tag/v2.11.0). - -2. Download the `ibm-odm-prod-2.2.1.tgz` Helm chart. - - [ibm-odm-prod-2.2.1.tgz](../helm-charts/ibm-odm-prod-2.2.1.tgz) for Operational Decision Manager 8.10.2 - If you have not done so yet, follow the instructions to download the IBM Operational Decision Manager images and the loadimages.sh file in [Download PPA and load images](../../README.md#step-2-download-a-product-package-from-ppa-and-load-the-images). - -3. Create a chart YAML template file with the default configuration parameters by using the following command. The `--name` argument sets the name of the release to install. - - ```console - $ helm template \ - --name my-odm-prod-release \ - /path/to/ibm-odm-prod-2.2.1.tgz > generated-k8s-templates.yaml - ``` - -4. Install `my-odm-prod-release` with the default configuration by using the following command. - - ```console - $ kubectl apply -f generated-k8s-templates.yaml - ``` - The package is deployed asynchronously in a matter of minutes, and is composed of several services. - - > **Note**: You can check the status of the pods that you created: - ```console - $ kubectl get pods - NAME READY STATUS RESTARTS AGE - my-odm-prod-release-dbserver-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisioncenter-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionrunner-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionserverconsole-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionserverruntime-*** 1/1 Running 0 44m - ``` - - The release is an instance of the `ibm-odm-prod` chart. All of the Operational Decision Manager components are now running in a Kubernetes cluster. - - To verify a deployment, go back to the [Post installation steps](../README.md#post-installation-steps). 
- -## Customize a Kubernetes release of Operational Decision Manager - -Refer to the [ODM for production Certified Kubernetes parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_parameters_prod.html) for a complete list of values that you can configure. - -### To customize the install with --set key=value arguments - -Using Helm, you can specify each parameter with a `--set key=value` argument in the `helm template` command. - -For example: -```console -$ helm template --name my-odm-prod-release \ - --set internalDatabase.databaseName=my-db \ - --set internalDatabase.user=my-user \ - --set internalDatabase.password=my-password \ - /path/to/ibm-odm-prod-2.2.1.tgz -``` - -### To customize the helm install with a YAML file - -It is also possible to use a custom-made .yaml file to specify the values of the parameters when you install the chart. -For example: - -```console -$ helm template --name my-odm-prod-release -f myvalues.yaml /path/to/ibm-odm-prod-2.2.1.tgz -``` - -> **Tip**: Refer to the [`sample-values.yaml`](../configuration/sample-values.yaml) file to find the default values used by the `ibm-odm-prod` chart. - -## Upgrade a release - -1. [Download the latest PPA file from IBM Passport Advantage and load the new images.](../README.md#step-2-download-a-product-package-from-ppa-and-load-the-images) - -2. Delete the odm-test pod - - ```console - $ kubectl delete pod my-odm-prod-release-odm-test - ``` - -3. Create a new chart YAML template file. - - > **WARNING**: You must reuse the same `--set key=value` arguments and/or values.yaml file that were specified during the previous installation or the configuration will be reset to its default values. - - ```console - $ helm template \ - --name my-odm-prod-release \ - --set key=value \ - -f myvalues.yaml \ - /path/to/ibm-odm-prod-2.2.1.tgz > generated-k8s-templates-upgrade.yaml - ``` - -4. Apply this new template in Kubernetes. - - ```console - $ kubectl apply -f generated-k8s-templates-upgrade.yaml - ``` - - > **Note**: The Persistent Volume Claim is not recreated. You can ignore the message: `The PersistentVolumeClaim "my-odm-prod-release-pvclaim" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims` - -5. Verify that the version of Decision Center and the Decision Server console is the new version and they are running on the same URL and port as before. - -6. If your release uses an internal database, go to the `my-odm-prod-release-dbserver` pod and change the `volumeMounts` definition in the deployment YAML file. The following definition is from a previous version. - - ```console - "volumeMounts": [ { - "name": "my-odm-prod-release-ibm-odm-prod-volume", - "mountPath": "/var/lib/postgresql/", - "subPath": "pgdata" } ], - ``` - The definition for chart version 2.2.1 must concatenate the `mountPath` and `SubPath` parameters. - - ```console - "volumeMounts": [ { - "name": "my-odm-prod-release-ibm-odm-prod-volume", - "mountPath": "/var/lib/postgresql/pgdata" } ], - ``` - - > **Caution**: If you do not make this change, historical data from Decision Center and Decision Server is not available in the upgrade. - - After you make the change, restart the pod. 
- -## Uninstall a Kubernetes release of Operational Decision Manager - -To uninstall and delete a template along with all of the associated releases, use the following command: - -```console -$ kubectl delete -f generated-k8s-templates.yaml -``` - -> **Note**: The command removes all the Kubernetes components associated with the chart, even Persistent Volume Claims (PVCs), which might contain valuable data. diff --git a/ODM/platform/README_Eval_Minikube.md b/ODM/platform/README_Eval_Minikube.md deleted file mode 100644 index 300b71ca..00000000 --- a/ODM/platform/README_Eval_Minikube.md +++ /dev/null @@ -1,72 +0,0 @@ -# Install IBM Operational Decision Manager for developers on Minikube - -IBM Operational Decision Manager for developers can be used on a personal computer to run and evaluate Operational Decision Manager in a single container. - -## Step 1: Install Minikube - -1. Refer to the Kubernetes [documentation](https://kubernetes.io/docs/setup/minikube/#installation) to install Minikube. - -2. Start Minikube with the minimum required CPU and memory. - - ```console - $ minikube start --cpus 4 --memory 4096 - ``` - - > **Note**: If you started a Minikube cluster without these parameters, stop and delete it before restarting it again. - ```console - $ minikube stop - $ minikube delete - $ minikube start --cpus 4 --memory 4096 - ``` - -3. Verify your installation. - - ```console - $ kubectl get nodes - ``` - -## Step 2: Install an Operational Decision Manager for developers release - -Install a release with the default configuration. The name defined in the configuration is `odm-eval-ibm-odm-dev`. - -1. Download the [odm-eval.yaml](../configuration/odm-eval.yaml) descriptor to your computer. - -2. Accept the license and deploy the release by using the following command: - - ```console - $ sed 's/view/accept/' odm-eval.yaml | kubectl create --validate=false -f - - ``` - - The package is deployed in a matter of minutes. - -## Step 3: Verify that the deployment is running - -1. Monitor the pod until it shows a STATUS of *Running* or *Completed*: - - ```console - $ while kubectl get pods | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done - ``` - -2. When the pod is *Running*, you can access the application with the URL returned by the `minikube service` command. - - ```console - $ minikube service list - - |-------------|----------------------|-----------------------------| - | NAMESPACE | NAME | URL | - |-------------|----------------------|-----------------------------| - | default | kubernetes | No node port | - | default | odm-eval-ibm-odm-dev | http://xxx.xxx.xx.xxx:31074 | - | kube-system | kube-dns | No node port | - |-------------|----------------------|-----------------------------| - ``` - -3. Open the URL named `odm-eval-ibm-odm-dev`. Use odmAdmin/odmAdmin for the user/password. - -## To uninstall the release - -To uninstall and delete the release from the Kubernetes CLI, use the following command: - -```console -$ kubectl delete -f odm-eval.yaml -``` diff --git a/ODM/platform/README_Eval_Openshift.md b/ODM/platform/README_Eval_Openshift.md deleted file mode 100644 index 20783610..00000000 --- a/ODM/platform/README_Eval_Openshift.md +++ /dev/null @@ -1,62 +0,0 @@ -# Install IBM Operational Decision Manager for developers on Red Hat OpenShift - -IBM Operational Decision Manager for developers can be used on a personal computer to run and evaluate Operational Decision Manager in Red Hat OpenShift. 
- -## Step 1: Install the OpenShift command line interface (CLI) and Helm - -The OpenShift Container Platform CLI exposes commands for managing your applications, as well as lower level tools to interact with each component of your system. Refer to the OpenShift [documentation](https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html). - -## Step 2: Install an Operational Decision Manager for developers release - -> **Tip**: Storage Persistent Volume (PV) is required to install this evaluation. PV represents an underlying storage capacity in the infrastructure. PV must be created with accessMode ReadWriteOnce and storage capacity of 5Gi or more, before you install ODM. You create a PV in the Admin console or with a .yaml file. - -1. As a developer with a user name of *ODMUSER*, create a project to contain your release by running the following commands: - - ```console - $ oc login --username= - $ oc new-project odmeval - $ oc project odmeval - ``` - - > **Note**: As a privileged user, you must grant access to the privileged SCC to *ODMUSER* and the default Service Account for project odmeval. - > ```console - > $ oc adm policy add-scc-to-user privileged -z default -n odmeval - > $ oc adm policy add-scc-to-user privileged --serviceaccount=default -n odmeval - > ``` - -2. As *ODMUSER*, run the following command to accept the license and install the release: - - ```console - $ sed 's/view/accept/' ./configuration/odm-eval.yaml | oc create -f - - ``` - -## Step 3: Verify that the deployment is running - -1. Monitor the pod until it shows a STATUS of *Running* or *Completed*: - - ```console - $ while oc get pods | grep -E "(Running|Completed|STATUS)"; do sleep 5; done - ``` - -2. When the pod is in *Running* state, you can access the status of your application with the following command: - - ```console - $ oc status - In project odmeval on server https://x.xx.xxx.xx:8443 - - svc/odmeval-ibm-odm-dev (all nodes):30341 -> 9060 - deployment/odmeval-ibm-odm-dev deploys ibmcom/odm:8.10.x.x_2.x.x-amd64 - deployment #1 running for 34 minutes - 1 pod - - 1 info identified, use 'oc status --suggest' to see details. - ``` - -3. You can now expose the service to your users. You can use odmAdmin/odmAdmin for the user/password. - -## To uninstall the release - -To uninstall and delete the release from the Kubernetes CLI, use the following command: - -```console -$ oc delete -f odm-eval.yaml -``` diff --git a/ODM/platform/README_Eval_ROKS.md b/ODM/platform/README_Eval_ROKS.md deleted file mode 100644 index 98d98dcd..00000000 --- a/ODM/platform/README_Eval_ROKS.md +++ /dev/null @@ -1,76 +0,0 @@ -# Install IBM Operational Decision Manager for developers on Red Hat OpenShift on IBM Cloud - -## Before you begin: Create a cluster - -Before you run any install command, make sure that you have created the IBM Cloud cluster and prepared your own environment. - -For more information, see [Installing containers on Red Hat OpenShift by using CLIs](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_ROKS.html). - -## Step 1: Install an Operational Decision Manager for developers release - -> **Tip**: Storage Persistent Volume (PV) is required to install this evaluation. PV represents an underlying storage capacity in the infrastructure. PV must be created with accessMode ReadWriteOnce and storage capacity of 5Gi or more, before you install ODM. You create a PV in the Admin console or with a .yaml file. - -1. 
Login to your IBM Cloud Kubernetes cluster: - - - Login to [IBM Cloud account](https://www.ibm.com/cloud) and select *Kubernetes* from the menu [hamburger menu icon]. - - Select the cluster and from the cluster details page, click **OpenShift web console**. - - In the OpenShift web console menu bar, click your profile *IAM#user.name@email.com* > *Copy Login Command* and paste the copied `oc login` command into your terminal to authenticate: - ```console - $ oc login https://: --token= - ``` - - > **Note**: As a privileged user, you must grant access to the privileged SCC to *IAM#user.name@email.com* and the default Service Account for project odmeval. - > ```console - > $ oc adm policy add-scc-to-user privileged -z default -n odmeval - > $ oc adm policy add-scc-to-user privileged --serviceaccount=default -n odmeval - > ``` - -2. Create a project to contain your release by running the following commands - ```console - $ oc new-project odmeval - $ oc project odmeval - ``` - -3. Run the following command to accept the license and install the release: - - ```console - $ sed 's/view/accept/' ./configuration/odm-eval.yaml | oc create -f - - ``` - -## Step 2: Verify that the deployment is running - -1. Monitor the pod until it shows a STATUS of *Running* or *Completed*: - - ```console - $ while oc get pods | grep -E "(Running|Completed|STATUS)"; do sleep 5; done - ``` - -2. When the pod is in *Running* state, you can access the status of your application with the following command: - - ```console - $ oc status - In project odmeval on server https://x.xx.xxx.xx:8443 - - svc/odmeval-ibm-odm-dev (all nodes):30341 -> 9060 - deployment/odmeval-ibm-odm-dev deploys ibmcom/odm:8.10.x.x_2.x.x-amd64 - deployment #1 running for 34 minutes - 1 pod - - 1 info identified, use 'oc status --suggest' to see details. - ``` - -3. You can now expose the service to your users using routes: - - ```console - $ oc create route passthrough --service=odmeval-ibm-odm-dev -n odmeval - ``` - > **Note**: For more information, refer to the [Openshift documentation](https://docs.openshift.com/container-platform/3.11/dev_guide/routes.html). - -> **Note**: You can use odmAdmin/odmAdmin for the user/password to access the applications. - -## To uninstall the release - -To uninstall and delete the release from the Kubernetes CLI, use the following command: - -```console -$ oc delete -f odm-eval.yaml -``` diff --git a/ODM/platform/README_Minikube.md b/ODM/platform/README_Minikube.md deleted file mode 100644 index 5fd5fa84..00000000 --- a/ODM/platform/README_Minikube.md +++ /dev/null @@ -1,103 +0,0 @@ -# Install IBM Operational Decision Manager on Minikube - -Before you install make sure that you have prepared your environment. For more information, see [Preparing to install ODM for production](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_preparing_odmk8s.html) as well as [Customizing ODM for production](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_install_odm.html). - -## Step 1: Install Minikube and Tiller - -1. Refer to the Kubernetes [documentation](https://kubernetes.io/docs/setup/minikube/#installation) to install Minikube. - -2. Start Minikube with the minimum required CPU and memory. - - ```console - $ minikube start --cpus 6 --memory 4096 - ``` - - > **Note**: If you started a Minikube cluster without these parameters, stop and delete it before restarting it again. 
- ```console - $ minikube stop - $ minikube delete - $ minikube start --cpus 6 --memory 4096 - ``` - -3. Verify your installation. - - ```console - $ kubectl get nodes - ``` - -4. Install [Helm 2.9.1](/~https://github.com/helm/helm/releases/tag/v2.9.1). - - > **Note**: Version 2.9.1 is required to use Minikube. - -5. Install Tiller in the Minikube cluster. - - ```console - $ helm init - ``` - -## Step 2: Push and tag the downloaded images in Minikube - -1. Follow the instructions to download the IBM Operational Decision Manager images and the loadimages.sh file in [Download PPA and load images](../../README.md#step-2-download-a-product-package-from-ppa-and-load-the-images). - - > **Note**: **DO NOT** run the loadimages.sh script at this point. - -2. Configure your shell to use the Minikube built-in [Docker daemon](https://kubernetes.io/docs/setup/minikube/#use-local-images-by-re-using-the-docker-daemon). - - ```console - $ eval $(minikube docker-env) - ``` - -3. Use the following command to load and tag the images in the Minikube local repository. - - ```console - $ scripts/loadimages.sh -l -p .tgz -r ibmcom - ``` - -## Step 3: Install a Kubernetes release of Operational Decision Manager - -1. Download the `ibm-odm-prod-.tgz` file. The archive contains the `ODM for production (ibm-odm-prod)` Helm chart. - - [ibm-odm-prod-2.2.1.tgz](../helm-charts/ibm-odm-prod-2.2.1.tgz) for Operational Decision Manager 8.10.2 - -2. Install a release with the default configuration and a name of `my-odm-prod-release` by using the following command: - - ```console - $ helm install --name my-odm-prod-release \ - --set internalDatabase.persistence.useDynamicProvisioning=true \ - /path/to/ibm-odm-prod-.tgz - ``` - - > **Note**: You can also install on Minikube by using Kubernetes YAML. Refer to the [k8s-yaml/README.md](../k8s-yaml/README.md). - -3. The package is deployed asynchronously in a matter of minutes, and is composed of several services. - - > **Note**: You can check the status of the pods that you created: - ```console - $ kubectl get pods - NAME READY STATUS RESTARTS AGE - my-odm-prod-release-dbserver-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisioncenter-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionrunner-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionserverconsole-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionserverruntime-*** 1/1 Running 0 44m - ``` - -The release is an instance of the `ibm-odm-prod` chart. All of the components are now running in a Kubernetes cluster. - -> **Tip**: List all existing releases with the `helm list` command. - - -## Step 4: Verify that the deployment is running - -When all of the pods are *Running*, you can access the application with the URLs returned by the `minikube service` command. - -```console -$ minikube service list -``` - -## To customize a release - -Refer to the customizing instructions in [helm-charts/README.md](../helm-charts/README.md#customize-a-kubernetes-release-of-operational-decision-manager). - -## To uninstall a release - -Refer to the uninstalling instructions in [helm-charts/README.md](../helm-charts/README.md#uninstall-a-kubernetes-release-of-operational-decision-manager). diff --git a/ODM/platform/README_Openshift.md b/ODM/platform/README_Openshift.md deleted file mode 100644 index 8e6bf2df..00000000 --- a/ODM/platform/README_Openshift.md +++ /dev/null @@ -1,165 +0,0 @@ -# Install IBM Operational Decision Manager on Red Hat OpenShift - -Before you install make sure that you have prepared your environment. 
For more information, see [Preparing to install ODM for production](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_preparing_odmk8s.html) as well as [Customizing ODM for production](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_install_odm.html). - -## Step 1: Prepare your environment - -As an administrator of the cluster you must be able to interact with your environment. Run the following commands to connect and check your access. - -1. Login to the cluster: - ```console - $ oc login https://:8443 -u - ``` -2. Create a project where you want to install Operational Decision Manager. - ```console - $ oc new-project odmproject - $ oc project odmproject - ``` -3. If you use the internal database you must add privileges to the project. - ```console - $ oc adm policy add-scc-to-user privileged -z default - ``` -4. Check you can run docker. - ```console - $ docker ps - ``` -5. Login to the docker registry with a token. - ```console - $ docker login $(oc registry info) -u -p $(oc whoami -t) - ``` - > **Note**: You can connect to a node in the cluster to resolve the docker-registry.default.svc parameter. - -6. Run a `kubectl` command to make sure you have access to Kubernetes. - ```console - $ kubectl cluster-info - ``` - - -## Step 2: Push and tag the downloaded images in the OpenShift registry - -1. If you have not already done so, follow the instructions to download the IBM Operational Decision Manager images and the loadimages.sh file in [Download PPA and load images](../../README.md#step-2-download-a-product-package-from-ppa-and-load-the-images). - - > **Note**: Change the permissions so that you can execute the script. - > ```console - > $ chmod +x loadimages.sh - > ``` - -2. Use the loadimages.sh script to push the docker images into your registry. - ```console - $ ./loadimages.sh -p .tgz -r docker-registry.default.svc:5000/odmproject - ``` - - > **Note**: The project must have pull request privileges to the registry where the Operational Decision Manager images are loaded. The project must also have pull request privileges to push the images into another namespace/project. - -3. Check whether the images have been pushed correctly to the registry. - ```console - oc get is --all-namespaces - ``` - or - ```console - oc get is -n odmproject - ``` - -## Step 3: Install a Kubernetes release of Operational Decision Manager - -You can do this step without administrator rights. - -1. Download the [ibm-odm-prod-2.2.1.tgz](../helm-charts/ibm-odm-prod-2.2.1.tgz) file. The archive contains the `ODM for production (ibm-odm-prod)` Helm chart. - -2. Install a release with the default configuration and a name of `my-odm-prod-release`. You have 2 options to install Operation Decision Manager on Openshift depending on your security policy. - - * Option 1: Use the helm CLI to generate a template, and then the OpenShift CLI to create a release from the YAML file. - - ```console - $ helm template \ - --name my-odm-prod-release \ - /path/to/ibm-odm-prod-.tgz \ - --set image.repository=docker-registry.default.svc:5000/odmproject/ > odm-k8s.yaml - $ oc create --save-config=true -f odm-k8s.yaml - ``` - - > **Note**: For more information, see [k8s-yaml/README.md](../k8s-yaml/README.md). - - * Option 2: If you installed Tiller on your cluster, you can use a single command from the helm CLI. 
- - ```console - $ helm install \ - --name my-odm-prod-release \ - /path/to/ibm-odm-prod-.tgz \ - --set image.repository=docker-registry.default.svc:5000/odmproject/ - --tiller-namespace - ``` - - > **Note**: For more information, see [helm-charts/README.md](../helm-charts/README.md). - -3. The package is deployed asynchronously in a matter of minutes, and is composed of several services. - - > **Note**: You can check the status of the pods that you created: - > ```console - > $ kubectl get pods - > NAME READY STATUS RESTARTS AGE - > my-odm-prod-release-dbserver-*** 1/1 Running 0 44m - > my-odm-prod-release-odm-decisioncenter-*** 1/1 Running 0 44m - > my-odm-prod-release-odm-decisionrunner-*** 1/1 Running 0 44m - > my-odm-prod-release-odm-decisionserverconsole-*** 1/1 Running 0 44m - > my-odm-prod-release-odm-decisionserverruntime-*** 1/1 Running 0 44m - > ``` - - The release is an instance of the `ibm-odm-prod` chart. All of the components are now running in a Kubernetes cluster. - -## Step 4: Verify that the deployment is running - -When all of the pods are *Running*, you can access the status of your application with the following command. -```console -$ oc status -In project odm on server https://localhost:8443 - -svc/odm-release-dbserver - xxx.xx.xx.xx:5432 - deployment/odm-release-dbserver deploys docker-registry.default.svc:5000/odmproject/dbserver:8.10.x-amd64 - deployment #1 running for 27 minutes - 1 pod - -svc/odm-release-odm-decisioncenter (all nodes):31070 -> 9453 - deployment/odm-release-odm-decisioncenter deploys docker-registry.default.svc:5000/odmproject/odm-decisioncenter:8.10.x-amd64 - deployment #1 running for 27 minutes - 1 pod - -svc/odm-release-odm-decisionrunner (all nodes):31705 -> 9443 - deployment/odm-release-odm-decisionrunner deploys docker-registry.default.svc:5000/odmproject/odm-decisionrunner:8.10.x-amd64 - deployment #1 running for 27 minutes - 1 pod - -svc/odm-release-odm-decisionserverconsole-notif - xxx.xx.xx:1883 -http://odm-release-odm-decisionserverconsole-odm.xxx.xx.xx.nip.io to pod port decisionserverconsole-https (svc/odm-release-odm-decisionserverconsole) - deployment/odm-release-odm-decisionserverconsole deploys docker-registry.default.svc:5000/odmproject/odm-decisionserverconsole:8.10.x-amd64 - deployment #1 running for 27 minutes - 1 pod - -http://myserver to pod port decisionserverruntime-https (svc/odm-release-odm-decisionserverruntime) - deployment/odm-release-odm-decisionserverruntime deploys docker-registry.default.svc:5000/odmproject/odm-decisionserverruntime:8.10.x-amd64 - deployment #1 running for 27 minutes - 1 pod - -1 info identified, use 'oc status --suggest' to see details. -``` - -You can now expose the service to your users. - -> **Tip**: Refer to [Verify a deployment](../README.md#step-1-verify-a-deployment) post installation step to get the URLs of the services. - -## To customize a release - -Refer to the customizing instructions in [k8s-yaml/README.md](../k8s-yaml/README.md#customize-a-kubernetes-release-of-operational-decision-manager). - -## To uninstall the Helm chart - - * Option 1: To uninstall and delete a release named `my-odm-prod-release` with the OpenShift CLI, use the following command: - - ```console - $ oc delete -f odm-k8s.yaml - ``` - - The `odm-k8s.yaml` is the file you created in step 3: [Install an Operational Decision Manager release](README_Openshift.md#step-3-install-a-kubernetes-release-of-operational-decision-manager). 
- - * Option 2: To uninstall and delete a release named `my-odm-prod-release` with Helm Tiller, use the following command: - - ```console - $ helm delete my-odm-prod-release --purge --tiller-namespace - ``` - - The command removes all the Kubernetes components associated with the chart, including Persistent Volume Claims (PVCs). diff --git a/ODM/platform/README_ROKS.md b/ODM/platform/README_ROKS.md deleted file mode 100644 index 2bf40b2a..00000000 --- a/ODM/platform/README_ROKS.md +++ /dev/null @@ -1,145 +0,0 @@ -# Install IBM Operational Decision Manager for production on Red Hat OpenShift on IBM Cloud - -## Before you begin: Create a cluster and get access to the container images - -Before you run any install command, make sure that you have created the IBM Cloud cluster and prepared your own environment. You must also create a pull secret to be able to pull your images from a registry. - -For more information, see [Installing containers on Red Hat OpenShift by using CLIs](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_ROKS.html) and [Customizing ODM for production](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_install_odm.html) if you want to customize your ODM release. - -## Step 1: Install a release of Operational Decision Manager - -> **Note**: You can do this step without administrator rights. - -1. Download the [ibm-odm-prod-2.2.1.tgz](../helm-charts/ibm-odm-prod-2.2.1.tgz) file. The archive contains the `ODM for production (ibm-odm-prod)` Helm chart. - -2. Log in to your IBM Cloud Kubernetes cluster. In the OpenShift web console menu bar, click your profile *IAM#user.name@email.com* > *Copy Login Command* and paste the copied command into your command line. - - ```console - $ oc login https://: --token= - ``` - -3. Go to the project that you created for your release in OpenShift. - - ```console - $ oc project - ``` - -4. Install a release with a name of `my-odm-prod-release`. You have 2 options to install Operation Decision Manager on Openshift depending on your security policy. - - In both cases, you might need to increase the default liveness and readiness probes initial delay to prevent premature termination of the pods and reduce unnecessary errors. - - Refer to the documentation to [decide on the file storage configuration](https://cloud.ibm.com/docs/containers?topic=containers-file_storage) or [on block storage configuration](https://cloud.ibm.com/docs/containers?topic=containers-block_storage). Obtain the storage class name for the OpenShift cluster storage, and assign that value as the storageClassName value. You can list all the available storage classes by running the command `kubectl get sc`. - - * **Option 1**: Use the helm CLI to generate a template, and then the OpenShift CLI to create a release from the YAML file. - - ```console - $ helm template \ - --name my-odm-prod-release \ - /path/to/ibm-odm-prod-2.2.1.tgz \ - --set image.repository=/\ - --set image.pullSecrets= \ - --set image.arch=amd64 \ - --set internalDatabase.persistence.storageClassName=ibmc-file-gold \ - --set internalDatabase.persistence.useDynamicProvisioning=true > odm-k8s.yaml - $ oc create --save-config=true -f odm-k8s.yaml - ``` - - > **Note**: For more information, see [k8s-yaml/README.md](../k8s-yaml/README.md). - - * **Option 2**: If you installed Tiller on your cluster, you can use a single command from the helm CLI. 
- - ```console - $ helm install \ - --name my-odm-prod-release \ - /path/to/ibm-odm-prod-2.2.1.tgz \ - --set image.repository=/,image.pullSecrets= \ - --set image.arch=amd64 \ - --set internalDatabase.persistence.storageClassName=ibmc-file-gold \ - --set internalDatabase.persistence.useDynamicProvisioning=true \ - --tiller-namespace - ``` - - > **Note**: For more information, see [helm-charts/README.md](../helm-charts/README.md). - - The release is composed of several services. You can check the status of the pods that you created. Pod names are always prefixed with the name of the deployment. - - ```console - $ kubectl get pods - NAME READY STATUS RESTARTS AGE - my-odm-prod-release-dbserver-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisioncenter-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionrunner-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionserverconsole-*** 1/1 Running 0 44m - my-odm-prod-release-odm-decisionserverruntime-*** 1/1 Running 0 44m - ``` - - All of the components are now running in a Kubernetes cluster. - - The release is an instance of the `ibm-odm-prod` chart. - -## Step 2: Verify the deployment is running - -When all of the pods are *Running*, you can access the status of your application with the following command. -```console -$ oc status -In project odm on server https://localhost:8443 - -svc/odm-release-dbserver - xxx.xx.xx.xx:5432 - deployment/odm-release-dbserver deploys docker-registry.default.svc:5000/odmproject/dbserver:8.10.x-amd64 - deployment #1 running for 27 minutes - 1 pod - -svc/odm-release-odm-decisioncenter (all nodes):31070 -> 9453 - deployment/odm-release-odm-decisioncenter deploys docker-registry.default.svc:5000/odmproject/odm-decisioncenter:8.10.x-amd64 - deployment #1 running for 27 minutes - 1 pod - -svc/odm-release-odm-decisionrunner (all nodes):31705 -> 9443 - deployment/odm-release-odm-decisionrunner deploys docker-registry.default.svc:5000/odmproject/odm-decisionrunner:8.10.x-amd64 - deployment #1 running for 27 minutes - 1 pod - -svc/odm-release-odm-decisionserverconsole-notif - xxx.xx.xx:1883 -http://odm-release-odm-decisionserverconsole-odm.xxx.xx.xx.nip.io to pod port decisionserverconsole-https (svc/odm-release-odm-decisionserverconsole) - deployment/odm-release-odm-decisionserverconsole deploys docker-registry.default.svc:5000/odmproject/odm-decisionserverconsole:8.10.x-amd64 - deployment #1 running for 27 minutes - 1 pod - -http://myserver to pod port decisionserverruntime-https (svc/odm-release-odm-decisionserverruntime) - deployment/odm-release-odm-decisionserverruntime deploys docker-registry.default.svc:5000/odmproject/odm-decisionserverruntime:8.10.x-amd64 - deployment #1 running for 27 minutes - 1 pod - -1 info identified, use 'oc status --suggest' to see details. -``` - -> **Tip**: Refer to [Verify a deployment](../README.md#step-1-verify-a-deployment) post installation step to get the URLs of the services. - -## Step 3: Expose the service to your users by creating routes - -1. From the OpenShift web console menu bar, select *Application console* and select `odmproject` project. - -2. Navigate to the *Routes* page under the *Applications* section and click **Create Route**. - -3. Create a route for each service with *Secure Route* enabled and *TLS Termination* type set to **Passthrough**. - - > **Note**: You can also create the routes using the `oc` CLI. 
- > ```console - > $ oc create route passthrough --service=my-odm-prod-release-odm-decisioncenter -n odmproject - > ``` - > For more information, refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/3.11/dev_guide/routes.html). - -## To uninstall the Helm chart - - * **Option 1**: To uninstall and delete a release named `my-odm-prod-release` by using the OpenShift CLI, run the following command: - ```console - $ oc delete -f odm-k8s.yaml - ``` - The `odm-k8s.yaml` is the file you created in step 1. - - * **Option 2**: To uninstall and delete a release named `my-odm-prod-release` by using Helm Tiller, run the following command: - - ```console - $ helm delete my-odm-prod-release --purge --tiller-namespace - ``` - The command removes all of the Kubernetes components associated with the chart. - -## To upgrade a release - -Make sure that you have the new images in the container registry that you plan to use for your upgrade, and then refer to the [Upgrade section](helm-charts/README.md#upgrade-a-release) in the helm-charts folder for instructions using Tiller, or the [Upgrade section](k8s-yaml/README.md#upgrade-a-release) in the k8s-yaml folder for instructions on how to use Kubernetes YAML. - diff --git a/README.md b/README.md index 4e8dbcf7..135b616a 100644 --- a/README.md +++ b/README.md @@ -1,120 +1,55 @@ - -# IBM Cloud Pak for Automation 19.0.2 on Certified Kubernetes - -## Introduction - -For information about IBM Cloud Pak for Automation 19.0.x, see [IBM Knowledge Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/welcome/kc_welcome_dba_distrib.html). - -The installation of IBM Cloud Pak for Automation software uses Helm charts and Tiller or Kubernetes YAML files. The charts are packages of preconfigured Kubernetes resources that bootstrap a deployment on a Kubernetes cluster. You customize the deployment by changing and adding configuration parameters. - -The repository includes one folder for each application or service. - -| Folder | Product name | Version in 19.0.2 | -|------------ |---------------------------------- |------------- | -| AAE | IBM Business Automation Application Engine | 19.0.2 | -| BACA | IBM Business Automation Content Analyzer | 19.0.2 | -| BAI | IBM Business Automation Insights | 19.0.2 | -| BAS | IBM Business Automation Studio | 19.0.2 | -| AAE | IBM Business Automation Application Engine | 19.0.2 | -| CONTENT | IBM FileNet Content Manager | 5.5.3 | -| NAVIGATOR | IBM Digital Business Navigator | 3.0.6 | -| ODM | IBM Operational Decision Manager | 8.10.2 | -| UMS | User Management Service | 19.0.2 | - -Each folder contains subfolders, which contain instructions and resources to install the Helm charts. - -Installation is supported only on a Certified Kubernetes platform. There are dozens of Certified Kubernetes offerings and more coming to market each year. Cloud Native Computing Foundation (CNCF) has created a Certified Kubernetes Conformance Program, in which most of the leading vendors and cloud computing providers have Certified Kubernetes offerings. Use the following link to determine whether the vendor and/or platform is certified by CNCF https://landscape.cncf.io/category=platform. For more information about nonqualified platforms, see the [support statement for Certified Kubernetes](http://www.ibm.com/support/docview.wss?uid=ibm10876926). - -> **Note**: Use the instructions in the IBM Knowledge Center to help you install the containers on IBM Cloud Private. 
The support for IBM Cloud Private is deprecated in 19.0.2. For more information, see [Installing products on IBM Cloud Private](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/topics/tsk_install_icp.html). - -## Legal Notice - -Legal notice for users of this repository [legal-notice.md](legal-notice.md). - -## Step 1: Prepare your environment - -Before you install any of the containerized software: - -1. Go to the prerequisites page in the [IBM Cloud Pak for Automation 19.0.x](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_k8s.html) Knowledge Center. -2. Follow the instructions on preparing your environment in the Knowledge Center. - - How much preparation you need to do depends on your environment and how familiar you are with your environment. - -## Step 2: Get access to the container images - - * **Option 1**: Create a pull secret for the IBM Cloud Entitled Registry - - 1. Log in to [MyIBM Container Software Library](https://myibm.ibm.com/products-services/containerlibrary) with the IBMid and password that are associated with the entitled software. - - 2. In the **Container software library** tile, click **View library** and then click **Copy key** to copy the `entitlement_key` to the clipboard. - - 3. Create a pull secret by running a `kubectl create secret` command. - ``` console - $ kubectl create secret docker-registry -n --docker-server=cp.icr.io \ - --docker-username=cp --docker-password="" --docker-email=user@foo.com - ``` - - > **Note**: The `cp.icr.io` and `cp` values for the **docker-server** and **docker-username** parameters must be used. Take a note of the pull secret and the server values so that you can set them to the **pullSecrets** and **repository** parameters when you run the installation for your containers. - - 4. Install the Container Registry plug-in. - ``` console - $ ibmcloud plugin install container-registry -r 'IBM Cloud' - ``` - - 5. Log in to your IBM Cloud account. - ``` console - $ ibmcloud login -a https://cloud.ibm.com - ``` - - 6. Set the region as global. - ``` console - $ ibmcloud cr region-set global - ``` - - 7. List the available images by using the following command. - ``` console - $ ibmcloud cr image-list --include-ibm | grep -i cp4a - ``` - - * **Option 2**: Download the packages from PPA and load the images - - [IBM Passport Advantage (PPA)](https://www-01.ibm.com/software/passportadvantage/pao_customer.html) provides archives (.tgz) for the software. To view the list of Passport Advantage eAssembly installation images, refer to the [19.0.2 download document](http://www.ibm.com/support/docview.wss?uid=ibm10958567). - - 1. Download one or more PPA packages to a server that is connected to your Docker registry. - - 2. Download the [`loadimages.sh`](scripts/loadimages.sh) script from GitHub. - - 3. Log in to the specified Docker registry with the docker login command. - This command depends on the environment that you have. - - > **Note**: If your platform is OpenShift, do NOT run the .sh script to load the images without preparing your environment beforehand. Go to [Step 3](README.md#step-3-go-to-the-relevant-folders-and-follow-the-instructions) and use the instructions in the respective folders. You can then load the images to the Docker registry with the right privileges. - - 4. Run the `loadimages.sh` script to load the images into your Docker registry. Specify the two mandatory parameters in the command line. 
- - > **Note**: The *docker-registry* value depends on the platform that you are using. - - ``` - -p PPA archive files location or archive filename - -r Target Docker registry and namespace - -l Optional: Target a local registry - ``` - - > The following example shows the input values in the command line. - - ``` - # scripts/loadimages.sh -p /Downloads/PPA/ImageArchive.tgz -r /demo-project - ``` -## Step 3: Go to the relevant folders and follow the instructions - -You can install software on a certified Kubernetes platform with the Helm command line interface (CLI) or the kubectl command line interface (CLI). Use the following links to go to the instructions for the software that you want to install. -> **Note**: UMS must be installed before Business Automation Studio if you want to use the service. - -- [Install the User Management Service](UMS/README.md) -- [Install IBM Business Automation Application Engine](AAE/README.md) -- [Install IBM Business Automation Content Analyzer](BACA/README.md) -- [Install IBM Business Automation Insights](BAI/README.md) -- [Install IBM Business Automation Studio](BAS/README.md) -- [Install IBM FileNet Content Manager](CONTENT/README.md) -- [Install IBM Business Automation Navigator](NAVIGATOR/README.md) -- [Install IBM Operational Decision Manager](ODM/README.md) - +# IBM Cloud Pak for Automation 19.0.3 on Certified Kubernetes + +## Introduction + +The repository includes folders and resources to help you install the Cloud Pak software. The following software can be managed by the Cloud Pak operator. + + +| Folder | Component name | Version in 19.0.3 | +| :--- | :--- | ---: | +| AAE | IBM Business Automation Application Engine | 19.0.3 | +| ACA | IBM Business Automation Content Analyzer | 19.0.3 | +| ADW | IBM Automation Digital Worker | 19.0.3 | +| BAI | IBM Business Automation Insights | 19.0.3 | +| BAN | IBM Business Automation Navigator | 3.0.7 | +| BAS | IBM Business Automation Studio | 19.0.3 | +| FNCM | IBM FileNet Content Manager | 5.5.4 | +| IAWS | IBM Automation Workstream Services | 19.0.3 | +| ODM | IBM Operational Decision Manager | 8.10.3 | +| UMS | User Management Service | 19.0.3 | + +The following table shows dependencies between the components. A mandatory component is indicated in each column with an "M". Optional installation is indicated with an "O". + +| | ACA needs | ADW needs | BAN needs | BAS needs | FNCM needs | IAWS needs | ODM needs | +| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | +| AAE | | | | M(8,9) | | M(8) | | +| ACA | - | O(6) | | | | | | +| BAI | | O(3) | | | O(3) | | O(3) | +| BAN | | | - | | M(7) | M(7) | | +| BAS | M(4) | M(2,4) | | - | | M(4) | O(2,5) | +| FNCM | | | | | - | M(CMIS/CPE only) | | +| ODM | | O(6) | | | | | - | +| UMS | M(1) | M(1) | O(1) | M(1) | O(1) | M(1) | | + +The type of integration is indicated with the following numbers: + +| 1. SSO/Authentication | 4. Designer integration in Studio | 7. Runtime view | +| :--- | :--- | :--- | +| **2. Registration to Resource Registry** | **5. Toolkit for App designer**  | **8. App execution** | +| **3. Event emitter/dashboard** | **6. Skill execution** | **9. Test and deploy** | + +## Choose your platform and follow the instructions + +Use the following links to go to the platform on which you want to install. On each platform you must configure some manifest files that set up your cluster and the operator. You can then select and add configuration parameters for the software that you want to install in a custom resources (.yaml) file. 
+ +- [Managed Red Hat OpenShift on IBM Cloud Public](platform/roks/README.md) +- [Red Hat OpenShift](platform/ocp/README.md) +- [Other Certified Kubernetes platforms](platform/k8s/README.md) + +Installation is supported only on Certified Kubernetes platforms. Cloud Native Computing Foundation (CNCF) has created a Certified Kubernetes Conformance Program, in which most of the leading vendors and cloud computing providers have Certified Kubernetes offerings. Use the following link to determine whether the vendor and/or platform is certified by CNCF https://landscape.cncf.io/category=platform. For more information about nonqualified platforms, see the [support statement for Certified Kubernetes](http://www.ibm.com/support/docview.wss?uid=ibm10876926). + +> **Note**: Support to install on IBM Cloud Private with the Business Automation Configuration Container is removed in 19.0.3. You can use the Certified Kubernetes instructions to install the automation containers on this platform. + +## Legal Notice + +Legal notice for users of this repository [legal-notice.md](legal-notice.md). diff --git a/UMS/README.md b/UMS/README.md deleted file mode 100644 index dbd79c67..00000000 --- a/UMS/README.md +++ /dev/null @@ -1,76 +0,0 @@ -# Install User Management Service 19.0.2 on Certified Kubernetes -You can use the User Management Service (UMS) option to provide users of multiple applications with a single sign-on experience. - -You can also use UMS to provide a common login page for all IBM Cloud Pak for Automation web applications. If you have multiple deployments, users can have a single sign-on experience when they interact with more than one of them. - -Because Cloud Pak for Automation combines several technologies and runtime servers in your virtual cloud-based environments, UMS helps you manage this complexity by consolidating aspects of user management in a single place. - -## Planning your installation - -| Environment size | CPU Minimum (m) | Memory Minimum (Mi) | recommended number of pods | -| ---------- | ----------- | ------------------- | -------------------------- | -| Small | 500 | 512 | 2 | -| Medium | 1000 | 1024 | 2 | -| Large | 2000 | 2048 | 3 | - -### Prerequisites -1. A database -1. Certificates for HTTPS and signing of identity tokens -1. Kubernetes secrets that contain the credentials to access the database, UMS system account, keystores, etc. -1. Persistent volume [optional] to host JDBC drivers, truststores, custom binaries - -### Installation options -* with Tiller - which is the typical option for ICP -* without Tiller - which is the typical option for OpenShift - -### Secure Deployment Guidelines -* JDBC over TLS, see "Db2 SSL Configuration" in the helm chart readme -* LDAP over TLS, see [Secure LDAP](configuration/secure-ldap.md) -* Account lockout policies and password complexity rules must be configured in LDAP for end user accounts. The built-in basic user registry for system accounts does not support such policies. User Management Service connects to your LDAP server which manages end user credentials (userids and passwords). It is expected that the LDAP bind user for connecting to LDAP has read-only permissions. Locking accounts in LDAP is therefore only possible by implementing an account lockout policy in LDAP. -Because User Management Service is just one out of many applications connecting to LDAP, locking accounts upon a number of failed login attempts has little value: attackers can just switch to another application to continue probing. 
-* Encrypted file system: It is recommended to host persistent volumes and database storage on encrypted file system (see "Database Requirements" in the helm chart readme) -* RBAC for operations: Installing UMS in IBM Cloud Private requires the `Administrator` role for the given namespace in order to create and assign RBAC roles. For daily operations, the `Editor` role is sufficient to scale up and down as well as viewing logs and modifying configuration. On other kubernetes platforms, it is also recommended to create a RBAC role for daily operations - avoiding `kubectl exec ...` permissions in daily operations. - -## Prepare your environment -1. Download and initialize command line interfaces: - * kubectl - * cloudctl for ICP - * helm for ICP - * oc for OpenShift -2. Create a database -1. Create a namespace `kubectl create namespace` -1. Create an image pull secret `kubectl create secret docker-registry ums-pull-secret1 --docker-server=myregistry:port --docker-username=dockeruser --docker-password=dockerpassword` -1. Create a TLS certificate for UMS pod HTTPS communication `openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt` and store them in a secret `kubectl create secret tls ibm-dba-ums-tls --key=tls.key --cert=tls.crt` -1. Create a TLS certificate for signing identity tokens `openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout jwt.key -out jwt.crt` and store them in a secret `kubectl create secret tls ibm-dba-ums-jwt --key=jwt.key --cert=jwt.crt` -1. Create secrets for system account credentials, see sample [ums-secret.yaml](configuration/ums-secret.yaml) -1. Create a secret for sensitive configuration (such as LDAP bind password), see [secure LDAP](configuration/secure-ldap.md) -1. Create a persistent volume to host JDBC drivers, truststores and custom binaries, see [Db2 HADR](configuration/db2-hadr.md) -1. Load docker images into your docker registry as described in [Download PPA and load images](../README.md#step-2-download-a-product-package-from-ppa-and-load-the-images) - -## Customize the installation -1. In a shell, extract the downloaded package -```bash -tar -xvf ibm-dba-ums-prod-1.0.0.tgz -``` -1. Review `values.yaml` and create an environment specific `myvalues.yaml` file to override defaults where necessary and to specify values for settings without defaults. Review `README.md` inside the helm chart for more details on the individual settings. - -## Option 1: With Tiller (for ICP) -`helm install --tls -n -f ibm-dba-ums-prod-1.0.0.tgz` - -## Option 2: Without Tiller (for OpenShift) -```bash -rm -rf yamls ; mkdir yamls ; helm template -n cp4aums1 -f helmvalues.yaml ../../ibm-dba-ums-prod/ --output-dir yamls -kubectl apply -f ./yamls/ -R -``` - -## Specific k8s env -* Sample for [Openshift](platform/README-openshift.md) -* Sample for [Openshift on IBM Cloud](platform/README-ROKS.md) -* Sample for [IBM Cloud Private](platform/README-icp.md) -* Sample for [Minikube](platform/README-minikube.md) - -# Verify -Use the host of this ingress to access https:///ums to view the login page. - -# Configuration -Configuration can be applied during installation by editing the values.yaml file. See the helm chart readme for details on the various settings. There are also samples in the [configuration folder](configuration). 
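For illustration, a minimal `myvalues.yaml` might look like the following sketch. Treat it as a sketch only: the parameter names follow the chart's `values.yaml`, and the host name, secret names, and database values are placeholders that you must replace with the values you created in the preparation steps. Check the helm chart readme for the authoritative list of settings.

```yaml
# Illustrative overrides only -- verify every setting against the helm chart readme
global:
  imagePullSecrets:
    - ums-pull-secret1               # image pull secret created earlier
  existingClaimName: ibm-dba-ums-pvc # only if you mounted JDBC drivers or truststores
  ums:
    hostname: ums.example.com        # placeholder external host name
    port: 443
    adminSecretName: ums-secret      # secret with the UMS system account credentials
    dbSecretName: ums-db-secret      # secret with the database credentials
oauth:
  database:
    type: db2
    name: umsdb
    host: primary.db2.customer.com   # placeholder database host
    port: 50000
    ssl: false
```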
diff --git a/UMS/README_config.md b/UMS/README_config.md new file mode 100644 index 00000000..d3cd49dd --- /dev/null +++ b/UMS/README_config.md @@ -0,0 +1,237 @@ +# Configuring User Management Service 19.0.3 + +These instructions cover the configuration of the User Management Service. +You need a copy of the custom resources YAML file that you created previously. + + +## Planning UMS installation + +| Environment size | CPU Minimum (m) | Memory Minimum (Mi) | recommended number of pods | +| ---------- | ----------- | ------------------- | -------------------------- | +| Small | 500 | 512 | 2 | +| Medium | 1000 | 1024 | 2 | +| Large | 2000 | 2048 | 3 | + + +## Prerequisites + +Make sure in `shared_configuration` you specified the configuration parameter `sc_deployment_platform`. +If you deploy on Red Hat OpenShift, specify + +```yaml +spec: + shared_configuration: + sc_deployment_platform: OCP +``` + +otherwise specify + +```yaml +spec: + shared_configuration: + sc__deployment_platform: !OCP +``` + + +## Step 1: Generate UMS secret and DB secret +If you are using Db2 or Oracle create the OAuth database, e.g. `UMSDB`. + +To avoid passing sensitive information via configuration files, you must create two secrets manually before you deploy UMS. +Copy the following as ums-secret.yaml, then edit it to specify the required user identifiers and passwords. + +**Note:** The sample below includes sample values for passwords. For `ibm-dba-ums-secret` choose passwords that reflect your security requirements. +For `ibm-dba-ums-db-secret` specify user identifiers and passwords you configured for your OAuth database. + +**Note:** Team Server is an experimental internal component that has been in the User Management Service since 19.0.2. +`ibm-dba-ums-secret` and `ibm-dba-ums-db-secret` must include Team Server parameters, as described below. + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: ibm-dba-ums-secret +type: Opaque +stringData: + adminUser: "umsadmin" + adminPassword: "password" + sslKeystorePassword: "sslPassword" + jwtKeystorePassword: "jwtPassword" + teamserverClientID: "ts" + teamserverClientSecret: "tsSecret" + ltpaPassword: "ltpaPassword" +--- +apiVersion: v1 +kind: Secret +metadata: + name: ibm-dba-ums-db-secret +type: Opaque +stringData: + oauthDBUser: "db2inst1" + oauthDBPassword: "!Passw0rd" + tsDBUser: "db2inst1" + tsDBPassword: "!Passw0rd" +``` + +| Parameter | Description | +| ------------------------------- | --------------------------------------------- | +| `adminUser` | User ID of the UMS admin user to create | +| `adminPassword` | Password for the UMS admin user | +| `sslKeystorePassword` | Password for the internal UMS SSL keystore | +| `jwtKeystorePassword` | Password for the internal UMS JWT keystore | +| `teamserverClientID` | Experimental: ID for the Team Server's OIDC client | +| `teamserverClientSecret` | Experimental: Secret for the Team Server's OIDC client | +| `ltpaPassword` | Password for the internal UMS LTPA key | +| `oauthDBUser` | User ID for the OAuth database | +| `oauthDBPassword` | Password for the OAuth database | +| `tsDBUser` | Experimental: User ID for the Team Server database | +| `tsDBPassword` | Experimental: Password for the Team Server database | + +Only specify the database settings if you are not using the internal derby database. +The derby database can only be used for a deployment with one UMS pod in test scenarios. + +Apart from the database values that relate to your specific database setup, you can choose all secret values freely. 
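If you do not have existing values to reuse, one option (illustrative, not required) is to generate random values with `openssl rand` and paste them into ums-secret.yaml:

```bash
# Generate one random, base64-encoded value per password field in ums-secret.yaml
openssl rand -base64 16
```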
+ +After modifying the values, save ums-secret.yaml and create the secrets by running the following command + +```bash +oc create -f ums-secret.yaml +``` + +**Note:** `ibm-dba-ums-secret` and `ibm-dba-ums-db-secret` are passed to the Operator +by specifying corresponding properties in the `ums_configuration` section, as described in the following steps. + + +## Step 2: Configure the UMS datasource +In the section `dc_ums_datasource` adjust database configuration parameters. + +```yaml +datasource_configuration: + dc_ums_datasource: # credentials are read from ums_configuration.db_secret_name + # oauth database config + dc_ums_oauth_type: db2 # derby (for test), db2 or oracle + dc_ums_oauth_host: + dc_ums_oauth_port: 50000 + dc_ums_oauth_name: UMSDB + dc_ums_oauth_ssl: false + dc_ums_oauth_ssl_secret_name: + dc_ums_oauth_driverfiles: + dc_ums_oauth_alternate_hosts: + dc_ums_oauth_alternate_ports: +``` + +For information about UMS configuration parameters and their default values, see +(UMS Database Configuration Parameters)(http://engtest01w.fr.eurolabs.ibm.com:9190/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/ref_ums_params_database.html) + + +## Step 2a (optional): Configure database failover servers + +To cover the possibility that the primary server is unavailable during the initial connection attempt, you can configure a list of failover servers, as described in [Configuring client reroute for applications that use DB2 databases](https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_config_reroute_db2.html). + +In the custom resources YAML file, provide a comma-separated list of failover servers and failover ports. +For example, if there are two failover servers +* server1.db2.company.com on port 50443 +* server2.db2.company.com on port 51443 + +in `dc_ums_datasource section` specify: +```yaml +datasource_configuration: + dc_ums_datasource: + ... + dc_ums_oauth_alternate_hosts: "server1.db2.company.com, server2.db2.company.com" + dc_ums_oauth_alternate_ports: "50443, 51443" +``` + + +## Step 2b (optional): Configure SSL between UMS and Db2 +To ensure that all communications between UMS and Db2 are encrypted, import the database CA Certificate to UMS and create a secret to store the certificate: + +``` +oc create secret generic ibm-dba-ums-db2-cacert --from-file=cacert.crt= +``` + +**Note:** The certificate must be in PEM format. Specify the `` to point to the certificate file. Do not change the part `--from-file=cacert.crt=`. + +Use the generated secret to configure the Db2 SSL parameters in the custom resources YAML file: +```yaml +datasource_configuration: + dc_ums_datasource: + ... + dc_ums_oauth_ssl_secret_name: ibm-dba-ums-db2-cacert + dc_ums_oauth_ssl: true +``` + + +## Step 3: Configure LDAP + +In section `ldap_configuration`, adapt the LDAP configuration parameter values to match your LDAP server. + +For information about LDAP configuration parameters and sample values refer to +[Configuring the LDAP and user registry](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_k8s_ldap.html). 
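As a rough illustration only, an `ldap_configuration` section might look like the following sketch. The parameter names and values shown here are assumptions based on the linked topic; verify them for your LDAP type (for example, IBM Security Directory Server or Microsoft Active Directory) before you apply the custom resource.

```yaml
ldap_configuration:
  lc_selected_ldap_type: "IBM Security Directory Server"  # or "Microsoft Active Directory"
  lc_ldap_server: ldap.example.com      # placeholder host
  lc_ldap_port: "389"
  lc_bind_secret: ldap-bind-secret      # secret assumed to hold the LDAP bind user and password
  lc_ldap_base_dn: "dc=example,dc=com"
  lc_ldap_ssl_enabled: false
  lc_ldap_user_name_attribute: "*:uid"
  lc_ldap_user_display_name_attr: "cn"
  lc_ldap_group_base_dn: "dc=example,dc=com"
  lc_ldap_group_name_attribute: "*:cn"
```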
+ + +## Step 4: Configure UMS +In section `ums_configuration` adapt the UMS-specific configuration + +```yaml + ums_configuration: + existing_claim_name: + replica_count: 2 + service_type: Route + hostname: + port: 443 + images: + ums: + repository: cp.icr.io/cp/cp4a/ums/ums + tag: 19.0.3 + admin_secret_name: ibm-dba-ums-secret + db_secret_name: ibm-dba-ums-db-secret + external_tls_secret_name: ibm-dba-ums-external-tls-secret + external_tls_ca_secret_name: ibm-dba-ums-external-tls-ca-secret + oauth: + client_manager_group: + resources: + limits: + cpu: 500m + memory: 512Mi + requests: + cpu: 200m + memory: 256Mi + ## Horizontal Pod Autoscaler + autoscaling: + enabled: true + min_replicas: 2 + max_replicas: 5 + target_average_utilization: 98 + use_custom_jdbc_drivers: false + use_custom_binaries: false + custom_secret_name: + custom_xml: + logs: + console_format: json + console_log_level: INFO + console_source: message,trace,accessLog,ffdc,audit + trace_format: ENHANCED + trace_specification: "*=info" +``` + +For information about UMS configuration parameters and their default values, see +(UMS Configuration Parameters)[http://engtest01w.fr.eurolabs.ibm.com:9190/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_ums_params_ums.html] + + +## Step 4a (optional): Configure secure communication with UMS + +See [Configuring secure communication with UMS](README_config_SSL.md) + +## Step 5: Complete the installation + +Return to the appropriate install or update page to configure other components and complete the deployment with the operator. + +Install pages: + - [Managed OpenShift installation page](../platform/roks/install.md) + - [OpenShift installation page](../platform/ocp/install.md) + - [Certified Kubernetes installation page](../platform/k8s/install.md) + +Update pages: + - [Managed OpenShift installation page](../platform/roks/update.md) + - [OpenShift installation page](../platform/ocp/update.md) + - [Certified Kubernetes installation page](../platform/k8s/update.md) \ No newline at end of file diff --git a/UMS/README_config_SSL.md b/UMS/README_config_SSL.md new file mode 100644 index 00000000..45bbc55a --- /dev/null +++ b/UMS/README_config_SSL.md @@ -0,0 +1,85 @@ +# Configuring secure communications with UMS +To reach UMS from outside of the kubernetes cluster, +the client (e.g. a browser or a programmatic client) connects to ums-route that is created during UMS deployment. +ums-route, in turn, communicates with the ums-service that load balances between UMS pods. + +![UMS in k8s](images/ums-in-k8s.jpg) + +To ensure that sensitive information is protected in transit when communicating with UMS pods, you must setup secure communications. +This documentation describes the different options and provides instructions on how to configure a secure communication with UMS pods. + +## Option 1 - Without an external certificate + +In a test environment, you might only want to test features and functions and might not want to deal with certificates. +In this case, do not specify values for `external_tls_secret_name` and `external_tls_ca_secret_name` in the Custom Resource YAML file (or just omit these parameters): + +```yaml +ums_configuration: + ... + external_tls_secret_name: + external_tls_ca_secret_name: +``` + +By using this configuration option, `root_ca_secret` is used to generate an internal TLS secret + for the pod and an external TLS secret for the ums-route. 
+ + ![No customer-provided certificate](images/option1.jpg) + +**Note:** If you do not provide a self-signed root CA in the `shared_configuraiton` section of the Custom Resource YAML file, `root_ca_secret` is automatically generated by the Operator with a self-signed root CA. + + +## Option 2 - Customer-provided external certificate + +In a production environment, communications are secured by using a TLS certificate. +In this case, you must provide an external certificate that is signed by an external certificate authority (CA) that is trusted by your clients. + +**Note:** You can also generate a certificate using openssl, see section [Creating TLS certificates using openssl](#Creating-TLS-certificates-using-openssl) + +Generate a secret (`ibm-dba-ums-external-tls-secret`) to include the key and the external certificate. +``` +oc create secret tls ibm-dba-ums-external-tls-secret --key=tls.key --cert=tls.crt +``` + +Generate a secret (`ibm-dba-ums-external-tls-ca-secret`) to include any number of signer certificates that are necessary to trust the external certificate. +This can be required if your external certificate was cross-signed by a second certificate authority or if the tls.crt file does not include ALL certificates of +its certification chain. +``` +oc create secret generic ibm-dba-ums-external-tls-ca-secret --from-file=cacert.crt=
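# Illustration only: the value after cacert.crt= is the path to your signer certificate file, for example:
# oc create secret generic ibm-dba-ums-external-tls-ca-secret --from-file=cacert.crt=./ca-chain.crt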
+``` + +Provide both secrets to the Operator in the `ums_configuration` section of the Custom Resource YAML file: +```yaml +ums_configuration: + ... + external_tls_secret_name: ibm-dba-ums-external-tls-secret + external_tls_ca_secret_name: ibm-dba-ums-external-tls-ca-secret +``` + +**Note:** If the signer certificate is chained in the external certificate, `ibm-dba-ums-external-tls-ca-secret` is not required, and you should leave this parameter empty: +```yaml +ums_configuration: + ... + external_tls_secret_name: ibm-dba-ums-external-tls-secret + external_tls_ca_secret_name: +``` + +By using this configuration option, the customer-provided external certificate is used as the ums-route certificate. +The Operator generates a certificate for the UMS pod, signed with the `root_ca_secret`. +Signer certificates are configured for the `ums-route`, so that clients can trust the `ums-route`. + + ![Customer-provided certificate](images/option2.jpg) + +### Creating TLS certificates using openssl + +You can create a TLS certificate signing request by executing OpenSSL. Note that the final certificate should have a `Subject Alternative Names` (SAN) value that matches the hostname. Many certificate authorities allow you to specify SANs during the ordering process, otherwise you must provide the SAN directly in the certificate signing request (CSR). +``` +openssl req -new -newkey rsa:2048 -subj "/CN=UMS" -extensions SAN -days 365 -nodes -out ums.csr -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:ums.mycluster.com")) +``` + +Two files are generated: a private key (privkey.pem) and a certificate signing request that can be sent to your certificate authority for sigining. +Use the private key and your certificate authority's response to generate the secret `ibm-dba-ums-external-tls-secret`. +If the response from your certificate authority does not include all certificates from its signing chain, you can provide them in `ibm-dba-ums-external-tls-ca-secret` + +## Continue with the UMS configuration + +Continue with the UMS configuration: [README_config.md](README_config.md) diff --git a/UMS/README_migrate.md b/UMS/README_migrate.md new file mode 100644 index 00000000..7d770418 --- /dev/null +++ b/UMS/README_migrate.md @@ -0,0 +1,82 @@ +# Migrate User Management Service configuration from 19.0.2 to 19.0.3 + + +The following table maps User Management Service configuration parameters that were used in the +19.0.2 helm chart to config parameters in the Custom Resource YAML file you use in Cloud Pak for Automation 19.0.3. 
+ +## Datasource configuration parameters + +| Helm Chart parameters in 19.0.2 | Custom Resource parameter in 19.0.3 | Comment | +| ------------------------------- | ----------------------------------------------------------------------------------- | -------------------- | +| oauth.database.type | datasource_configuration.dc_ums_datasource.dc_ums_oauth_type | | +| oauth.database.host | datasource_configuration.dc_ums_datasource.dc_ums_oauth_host | | +| oauth.database.port | datasource_configuration.dc_ums_datasource.dc_ums_oauth_port | | +| oauth.database.name | datasource_configuration.dc_ums_datasource.dc_ums_oauth_name | | +| oauth.database.ssl | datasource_configuration.dc_ums_datasource.dc_ums_oauth_ssl | | +| oauth.database.sslSecretName | datasource_configuration.dc_ums_datasource.dc_ums_oauth_ssl_secret_name | | +| oauth.database.driverfiles | datasource_configuration.dc_ums_datasource.dc_ums_oauth_driverfiles | | +| oauth.database.alternateHosts | datasource_configuration.dc_ums_datasource.dc_ums_oauth_alternate_hosts | | +| oauth.database.alternatePorts | datasource_configuration.dc_ums_datasource.dc_ums_oauth_alternate_ports | | + + +## UMS docker images + +| Helm Chart parameters in 19.0.2 | Custom Resource parameter in 19.0.3 | Comment | +| ------------------------------- | ------------------------------------------------------------------------------ | -------------------- | +| images.ums | ums_configuration.images.ums.repository, ums_configuration.images.ums.tag | In 19.0.2 the tag was appended to the repository link | +| images.initTLS | shared_configuration.images.keytool_init_container.repository, shared_configuration.images.keytool_init_container.tag | In 19.0.2 the tag was appended to the repository link | +| images.ltpa | shared_configuration.images.keytool_job_container.repository, shared_configuration.images.keytool_job_container.tag | In 19.0.2 the tag was appended to the repository link | +| images.pullPolicy | shared_configuration.images.pull_policy | + + +## LDAP configuration + +In 19.0.2 LDAP was configured by providing Liberty server LDAP configuration using the customXML parameter. +In 19.0.3 specify the LDAP configuration parameters in `ldap_configuration`. +For information about LDAP configuration parameters and sample values refer to [Configuring the LDAP and user registry](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_k8s_ldap.html). 
+ + +## UMS configuration parameters + +| Helm Chart parameters in 19.0.2 | Custom Resource parameter in 19.0.3 | Comment | +| ------------------------------- | ----------------------------------------------| -------------------- | +| global.existingClaimName | ums_configuration.existing_claim_name | | +| global.isOpenShift | shared_configuration.sc_deployment_platform | | +| global.imagePullSecrets | shared_configuration.image_pull_secrets | | +| global.ums.serviceType | ums_configuration.service_type | | +| global.ums.hostname | ums_configuration.hostname | | +| global.ums.port | ums_configuration.port | | +| global.ums.adminSecretName | ums_configuration.admin_secret_name | | +| global.ums.dbSecretName | ums_configuration.db_secret_name | | +| global.ums.ltpaSecretName | | removed, secret is generated in 19.0.3 | +| tls.tlsSecretName | | removed, secret is generated in 19.0.3 | +| | ums_configuration.external_tls_secret_name | new parameter in 19.0.3 | +| | ums_configuration.external_tls_ca_secret_name | new parameter in 19.0.3 | +| oauth.clientManagerGroup | ums_configuration.oauth.client_manager_group | | +| resources.limits.cpu | ums_configuration.resources.limits.cpu | | +| resources.limits.memory | ums_configuration.resources.limits.memory | | +| resources.requests.cpu | ums_configuration.resources.requests.cpu | | +| resources.requests.memory | ums_configuration.resources.requests.memory | | +| useCustomJDBCDrivers | ums_configuration.use_custom_jdbc_drivers | | +| useCustomBinaries | ums_configuration.use_custom_binaries | | +| customSecretName | ums_configuration.custom_secret_name | | +| logs.tracespefication | ums_configuration.logs.trace_specification | | +| logs.consoleFormat | ums_configuration.logs.console_format | | +| logs.consoleLogLevel | ums_configuration.logs.console_log_level | | +| logs.consoleSource | ums_configuration.logs.console_source | | +| logs.traceFormat | ums_configuration.logs.trace_format | | +| replicaCount | ums_configuration.replica_count | | +| autoscaling.enabled | ums_configuration.autoscaling.enabled | | +| autoscaling.minReplicas | ums_configuration.autoscaling.min_replicas | | +| autoscaling.maxReplicas | ums_configuration.autoscaling.max_replicas | | +| autoscaling.targetAverageUtilization | ums_configuration.autoscaling.target_average_utilization | | +| resources.limits.cpu | ums_configuration.resources.limits.cpu | | +| resources.limits.memory | ums_configuration.resources.limits.memory | | +| resources.requests.cpu | ums_configuration.resources.requests.cpu | | +| resources.requests.memory | ums_configuration.resources.requests.memory | | +| customXml | ums_configuration.custom_xml | for LDAP parameters use ldap_configuration to configure LDAP | +| customSecretName | ums_configuration.custom_secret_name | | +| useCustomBinaries | ums_configuration.use_custom_binaries | | + + +Once you understand how the helm configuration parameters map to the parameters in the Custom Resource YAML file, continue with the [UMS configuration](README_config.md) diff --git a/UMS/configuration/db2-hadr.md b/UMS/configuration/db2-hadr.md deleted file mode 100644 index 232f0355..00000000 --- a/UMS/configuration/db2-hadr.md +++ /dev/null @@ -1,43 +0,0 @@ -# Database high availability -The User Management Service (UMS) requires a database. If you use Db2 as your database, you can configure high availability by setting up [HADR](https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html) for your database. 
-This configuration ensures that UMS automatically retrieves the necessary failover server information upon initial connection to the database. If the primary server becomes unavailable, UMS fails over to a secondary Db2 server. - -To cover the possibility that the primary server is unavailable during the initial connection attempt, you can configure a list of failover servers, as described in [Configuring client reroute for applications that use DB2 databases](https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_config_reroute_db2.html). - -In `myvalues.yaml`, provide a comma-separated list of failover servers and failover ports. For example, if there are two failover servers -* server1.db2.customer.com on port 50443 -* server2.db2.customer.com on port 51443 -you can specify these hosts and ports in `myvalues.yaml` as follows: - -```yaml -... -# UMS OAuth config -oauth: - database: - type: db2 - name: umsdb - host: primary.db2.customer.com - port: 50443 - ssl: true - sslSecretName: db2-cert - #driverfiles: - alternateHosts: "server1.db2.customer.com, server2.db2.customer.com" - alternatePorts: "50443, 51443" - clientManagerGroup: - jwtSecretName: - -# UMS Team Server database config -teamserver: - database: - type: db2 - name: umsdb - host: primary.db2.customer.com - port: 50443 - ssl: true - sslSecretName: db2-cert - #driverfiles: - alternateHosts: "server1.db2.customer.com, server2.db2.customer.com" - alternatePorts: "50443, 51443" -``` - -Note that the _network security policy_ automatically whitelists outbound traffic from UMS pods to the the primary database ports. You can be more restrictive and specify the IP address [range]. If your failover servers use different ports, you MUST whitelist these explicitly by editing _network security policy_ `ums-database`. diff --git a/UMS/configuration/imagepolicy.yaml b/UMS/configuration/imagepolicy.yaml deleted file mode 100644 index ab0ef08a..00000000 --- a/UMS/configuration/imagepolicy.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1 -kind: ImagePolicy -metadata: - name: ums-docker-registry-whitelist -spec: - repositories: - - name: some.remote.registry/* - policy: - va: - enabled: true - - name: some.other.remote.registry/ums/* - policy: - va: - enabled: true \ No newline at end of file diff --git a/UMS/configuration/namespace.yaml b/UMS/configuration/namespace.yaml deleted file mode 100644 index f8c744ab..00000000 --- a/UMS/configuration/namespace.yaml +++ /dev/null @@ -1,6 +0,0 @@ -apiVersion: v1 -kind: Namespace -metadata: - name: cp4a-ums - labels: - name: cp4a-ums diff --git a/UMS/configuration/secure-ldap.md b/UMS/configuration/secure-ldap.md deleted file mode 100644 index b8f345b0..00000000 --- a/UMS/configuration/secure-ldap.md +++ /dev/null @@ -1,193 +0,0 @@ -# Connecting to an LDAP Server securely -Because the user management service (UMS) is built on WebSphere Liberty, the documentation about configuring LDAP in WebSphere Liberty applies: [Configuring LDAP user registries in Liberty -](https://www.ibm.com/support/knowledgecenter/SS7K4U_liberty/com.ibm.websphere.wlp.zseries.doc/ae/twlp_sec_ldap.html). As UMS is expected to connect to an LDAP server, the ldapRegistry-3.0 feature is pre-installed. 
- -A secure LDAP connection implies: -* Encrypted LDAPS traffic, typically on port 636 -* LDAP bind user configuration with least privileges - -## Bind user -Engage your LDAP administrator to provision a bind user ID that has read-only access to the parts of your LDAP server that contain your users and groups. Because this bind user ID and password is _sensitive configuration_ information, you should store it in a kubernetes secret and pass only the secret name to the UMS installation in the `myvalues.yaml` file, see [Sensitive configuration](#Sensitive-configuration). - -## Encrypted connection -To ensure that an encrypted connection to LDAP is used, make sure that you specify the secure port, typically 636. For this communication to work, UMS must trust the LDAP server's signer certificate. You can provide a dedicated truststore for that purpose by placing it on a persistent volume that is mounted into UMS. Because the truststore password is _sensitive configuration_ information, you should store it in a secret, see [Sensitive configuration](#Sensitive-configuration). -Note that the default *network security policy* `ums-ldap` whitelists outbound traffic from the UMS pod to port 636 and 389. You can edit the policy to be more restrictive and control the target IP address (range). If your LDAP server is available on a network port other than 389 or 636, you MUST adapt the policy to whitelist your target port. - -## High Availability -To ensure a high available LDAP connection, configure `failoverServers` as described in [Configuring LDAP user registries in Liberty -](https://www.ibm.com/support/knowledgecenter/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_sec_ldap.html). - -```XML - - - - -``` - -### Create a truststore -An easy way to create a truststore is to connect to your LDAP server and download the certificate chain by using the Java keytool (in the following sample replace the host name and password with your own values): -```bash -keytool -printcert -sslserver your.ldap.host.com:636 -rfc > ldap.pem -keytool -import -noprompt -alias ldap -keystore ldap.jks -storepass changeit -file ldap.pem -keytool -list -v -keystore ldap.jks -storepass changeit -``` -This creates a truststore that contains the full certificate chain. - -### Make the truststore accessible for UMS -Create a persistent volume (PV) and persistent volume claim (PVC) for UMS as described in the helm chart README.md: - -1. Create a `ums-persistence.yaml` file. The following sample points to a Network File System (NFS). Replace the host `1.2.3.4` and path `/binaries` with your own values. - -```yaml -kind: PersistentVolume -apiVersion: v1 -metadata: - name: ibm-dba-ums-pv - labels: - type: ums-binaries -spec: - capacity: - storage: 1Gi - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Recycle - nfs: - server: "1.2.3.4" - path: "/binaries" - storageClassName: standard ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: ibm-dba-ums-pvc -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi - selector: - matchLabels: - type: ums-binaries - storageClassName: standard -``` - -2. Create the persistent volume and persistent volume claim: -```bash -kubectl apply -f ums-persistence.yaml -``` -3. Create a directory `custom-binaries` in the NFS path (`/binaries/custom-binaries` in the sample). Copy the truststore created in the previous step into that directory and make sure that the root group (0) has read access to the file. -1. 
In your `myvalues.yaml` file, set `useCustomBinaries` to `true` and specify the PVC name in `global.existingClaimName` to ensure that the volume is mounted into the containers:
-```yaml
-global:
-  existingClaimName: ibm-dba-ums-pvc
-useCustomBinaries: true
-```
-
-## Configuration
-The LDAP configuration is passed to UMS by using the `customXml` setting in the `myvalues.yaml` file.
-
-### Sensitive configuration
-Some of the LDAP configuration information is sensitive and should therefore be stored in a secret, never in a config map. You should also never pass sensitive configuration information through helm. Create a secret containing the Liberty configuration variables for all sensitive settings that you will later use in your configuration.
-
-For additional security, you can use Liberty's securityUtility to encode or encrypt sensitive information. For example, to encrypt the sample password `changeit`, you can invoke the following command in any free, non-containerized [WebSphere Liberty](https://developer.ibm.com/wasdev/downloads/) or [Open Liberty](https://openliberty.io/downloads/) installation.
-
-```bash
- wlp/bin/securityUtility encode --encoding=aes changeit
-{aes}AKy63+PNE+g5rNQm4t7Y1nFps9B44emN09iA7TSPaGUx
-```
-
-Create a `ums-ldap-secret.yaml` file as shown below.
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: ums-ldap-secret
-type: Opaque
-stringData:
-  sensitiveCustomConfig: |
-
-
-
-
-
-
-
-```
-
-Create a secret from this file:
-
-```bash
-kubectl apply -f ums-ldap-secret.yaml
-```
-
-The name of this secret is passed to UMS in `myvalues.yaml` using the `customSecretName` parameter:
-```yaml
-customSecretName: ums-ldap-secret
-```
-
-To reference the value of a variable that is defined in the secret from your LDAP configuration, use the `${VARIABLE_NAME}` syntax, see [Using variables in configuration files](https://www.ibm.com/support/knowledgecenter/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_setup_vars.html).
-
-### LDAP configuration in UMS config map
-In `myvalues.yaml`, Liberty configuration can be specified in XML format by using the `customXml` parameter. The required configuration comprises the following elements:
-
-* A `keyStore` element to load the truststore
-* An `ssl` element to refer to this truststore (and optionally restrict the TLS version)
-* An `ldapRegistry` element to specify connection information
-* An optional `federatedRepository` element to control the realm name or extend the attribute schema for users and groups when using the [SCIM](https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/rwlp_sec_scim_operations.html) API. When using a `federatedRepository` element, make sure to federate the existing BasicRegistry as a `participatingBaseEntry` unless your admin account is specified in LDAP, too.
-
-The full server.xml fragment is passed in myvalues.yaml as illustrated in the following sample. Take care to use consistent indentation to avoid accidentally specifying the next YAML parameter.
- -```yaml -customXml: |+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/UMS/configuration/simple-ldap.md b/UMS/configuration/simple-ldap.md deleted file mode 100644 index 48851081..00000000 --- a/UMS/configuration/simple-ldap.md +++ /dev/null @@ -1,66 +0,0 @@ -# Connecting to an LDAP Server -Because the user management service (UMS) is built on WebSphere Liberty, the documentation about configuring LDAP in WebSphere Liberty applies: [Configuring LDAP user registries in Liberty -](https://www.ibm.com/support/knowledgecenter/SS7K4U_liberty/com.ibm.websphere.wlp.zseries.doc/ae/twlp_sec_ldap.html). As UMS is expected to connect to an LDAP server, the ldapRegistry-3.0 feature is pre-installed. - -## Bind user -The simple LDAP configuration assumes that LDAP allows anonymous binds and therefore skips bind user configuration. - -## Network connection -The simple LDAP configuration assumes that LDAP is available over an unecrypted connection on port 389 and therefore skips using a truststore and related configuration. Note that the default *network security policy* `ums-ldap` whitelists outbound traffic from the UMS pod to port 636 and 389. You can edit the policy to be more restrictive and control the target IP address (range). If your LDAP server is available on a network port other than 389 or 636, you MUST adapt the policy to whitelist your target port. - -## High Availability -To ensure a high available LDAP connection, configure `failoverServers` as described in [Configuring LDAP user registries in Liberty -](https://www.ibm.com/support/knowledgecenter/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_sec_ldap.html). - -```XML - - - - -``` -## Configuration -The LDAP configuration is passed to UMS by using the `customXml` setting in `myvalues.yaml`. - -### LDAP configuration in UMS config map -In `myvalues.yaml`, Liberty configuration can be specified in XML format using the `customXml` parameter. The required configuration comprises of the following elements: - -* An `` element to specify connection information -* An optional `` element to control the realm name or extend the attribute schema for users and groups when using the [SCIM](https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/rwlp_sec_scim_operations.html) API. When using a federatedRegistry element, make sure to federate the existing BasicRegistry as a `participatingBaseEntry` unless your admin account is specified in LDAP, too. - -The full server.xml fragment is passed in myvalues.yaml as illustrated in the following sample. Take care to use consistent indentation to avoid accidentally specifying the next YAML parameter. 
- -```yaml -customXml: |+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/UMS/configuration/ums-secret.yaml b/UMS/configuration/ums-secret.yaml deleted file mode 100644 index 082be816..00000000 --- a/UMS/configuration/ums-secret.yaml +++ /dev/null @@ -1,31 +0,0 @@ -apiVersion: v1 -kind: Secret -metadata: - name: ibm-dba-ums-secret -type: Opaque -stringData: - adminUser: "" - adminPassword: "" - sslKeystorePassword: "" - jwtKeystorePassword: "" - teamserverClientID: "" - teamserverClientSecret: "" - ltpaPassword: "" ---- -apiVersion: v1 -kind: Secret -metadata: - name: ibm-dba-ums-db-secret -type: Opaque -stringData: - oauthDBUser: "" - oauthDBPassword: "" - tsDBUser: "" - tsDBPassword: "" ---- -apiVersion: v1 -kind: Secret -metadata: - name: ibm-dba-ums-ltpa-creation-secret -type: Opaque -data: \ No newline at end of file diff --git a/UMS/helm-charts/ibm-dba-ums-prod-1.0.0.tgz b/UMS/helm-charts/ibm-dba-ums-prod-1.0.0.tgz deleted file mode 100644 index a46230dd..00000000 Binary files a/UMS/helm-charts/ibm-dba-ums-prod-1.0.0.tgz and /dev/null differ diff --git a/UMS/images/option1.jpg b/UMS/images/option1.jpg new file mode 100644 index 00000000..5a590ae7 Binary files /dev/null and b/UMS/images/option1.jpg differ diff --git a/UMS/images/option2.jpg b/UMS/images/option2.jpg new file mode 100644 index 00000000..fb82f36a Binary files /dev/null and b/UMS/images/option2.jpg differ diff --git a/UMS/images/ums-in-k8s.jpg b/UMS/images/ums-in-k8s.jpg new file mode 100644 index 00000000..1aebe276 Binary files /dev/null and b/UMS/images/ums-in-k8s.jpg differ diff --git a/UMS/platform/README-ROKS.md b/UMS/platform/README-ROKS.md deleted file mode 100644 index f574496c..00000000 --- a/UMS/platform/README-ROKS.md +++ /dev/null @@ -1,244 +0,0 @@ -# Install User Management Service 19.0.2 on Red Hat OpenShift on IBM Cloud - -User Management Service can be installed on Red Hat OpenShift cluster on IBM Cloud. -This documentation provides a step-by-step instruction on how to install UMS on Red Hat OpenShift cluster on IBM Cloud for test purposes. The documentation therefore does not include steps to setup a production-ready database, create image policy or configure persistent volume. - -## Prepare your environment - -Refer to [Red Hat OpenShift on IBM Cloud](https://cloud.ibm.com/docs/openshift?topic=openshift-openshift-create-cluster#openshift_create_cluster_console) documentation to install IBM Cloud and OpenShift CLIs and to create an OpenShift cluster in IBM Cloud. - -Log in to IBM Cloud by running the command -``` -ibmcloud login --sso -``` - -Login to IBM Cloud Container Registry by running the command -``` -ibmcloud cr login -``` - -## Prerequisites - -### Create a database -This is optional. As this is the instruction for a test deployment of UMS, UMS will use the built-in derby database. - -### Create namespace and switch to use it - -1. In a browser, navigate to https://cloud.ibm.com/kubernetes/clusters. Login with your IBM Cloud ID. -2. For your Red Hat OpenShift cluster select `...` and click `OpenShift Web Console`. -3. In the OpenShift Web Console click on your user ID (top right) and click Copy Login Command. -4. Paste the login command into a shell -``` -oc login --token= -``` -5. Create and switch to the namespace you created by using the command -``` -oc new-project cp4a-ums -``` -You see the message "Now using project cp4a-ums on server ". - -### Create image policy -This is optional. 
As this is the instruction for a test deployment of UMS, creating image policies is not covered. - -### Create a docker pull secret -1. In the IBM Cloud Console, select Manage / Access (IAM) (upper right corner) -2. In the menu on the left site click on `Service IDs`, then click `Create` - enter a name e.g. `ums-serviceid` and description. -3. Select API keys (right tab) and click Create - enter name ums-apikey and description ums-eval-api-key -4. Download the API key as a json file. -5. Create a docker pull secret in your OpenShift cluster: -``` -oc create secret docker-registry ums-secret --docker-server=us.icr.io --docker-username=iamapikey --docker-password= -``` - -**Note** this secret will be passed to the chart via the `imagePullSecrets` property. - -### Generate TLS secret -To ensure the internal communication is secure, a TLS secret must be provided. -The secret can be generated by running the following command: -```bash -openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -``` - -This command generates two files: tls.crt and tls.key. They are used to generate the TLS secret: -```bash -kubectl create secret tls ibm-dba-ums-tls --key=tls.key --cert=tls.crt -``` - -**Note**: The secret will be passed to the chart via the `tls.tlsSecretName` property. - - -### Generate UMS secret, DB secrets and LTPA generation secret -To avoid passing sensitive information via `myvalues.yaml`, three secrets need to be generated before installing the chart. -1. Edit [ums-secret.yaml](../configuration/ums-secret.yaml) -2. For ibm-dba-ums-secret specify adminUser, adminPassword, sslKeystorePassword, jwtKeystorePassword, teamserverClientID, teamserverClientSecret and ltpaPassword -3. For ibm-dba-ums-db-secret specify oauthDBUser/outhDBPassword and tsDBUser/tsDBPassword. -4. For ibm-dba-ums-ltpa-creation-secret do nothing. Configuration will be performed during LTPA creation. -5. Save ums-secret.yaml -6. In a shell run this command to create the required secrets. -``` -kubectl create -f ums-secret.yaml --namespace cp4a-ums -``` - -**Note**: Secret names need to be passed to the chart via the global.ums.adminSecretName, global.ums.dbSecretName and global.ums.ltpaSecretName properties. - -### Install IBM Cloud Pak SecurityContextConstraints resources to your cluster -Install IBM Cloud Pak SecurityContextConstraints resources to your cluster. Refer to '[`ibm-restricted-scc`](https://ibm.biz/cpkspec-scc)'. - -### Persistent Volume -This is optional. As this is the instruction for a test deployment of UMS, Persistent Volume configuration is not covered. - -## Install the chart - -### Download PPA and load images to the content registry -1. Follow instructions to download User Management Service images and loadimages.sh file in [Download PPA and load images](/~https://github.com/icp4a/cert-kubernetes/blob/master/README.md#step-2-download-a-product-package-from-ppa-and-load-the-images) -2. Load images to the IBM Cloud container repository -``` -loadimages.sh -p K8S_UMS*.tgz -r us.icr.io/cp4a-ums -``` - -When finished, you see a message similar to: -``` -Docker images push to us.icr.io/cp4a-ums completed, and check the following images in the Docker registry: - - us.icr.io/cp4a-ums/ums:19.0.2 - - us.icr.io/cp4a-ums/dba-keytool-initcontainer:19.0.2 - - us.icr.io/cp4a-ums/dba-keytool-jobcontainer:19.0.2 -``` -Those image names must match the images section in `myvalues.yaml`. - -### Download helm chart and customize values.yaml -1. 
Download the helm chart [ibm-dba-ums-prod-1.0.0.tgz](../helm-charts/ibm-dba-ums-prod-1.0.0.tgz) -2. In a shell extract the downloaded package -```bash -tar -xvf ibm-dba-ums-prod-1.0.0.tgz -``` -3. Review `values.yaml` and override defaults where necessary to meet your environment and configuration. -Review README.md inside the helm chart for more details on the individual settings. -Make sure to set the global.isOpenShift parameter to true. This ensures required configuration for the pod's container security context. -Save the new configuration as `myvalues.yaml`. - -*Note:* Minimal changes to `myvalues.yaml` include specifying serviceType, imagePullSecrets, adminSecretName, dbSecretName, ltpaSecretName, images location, tlsSecretName, -database type (if using derby, name, host and port are ignored). Hostname is not needed, it will be configured when the route is defined in the OpenShift environment. -See sample below: - -```yaml -# shared values across components -global: - # PersistenceVolumeClaim name with JDBC drivers - existingClaimName: - # Secret with Docker credentials - imagePullSecrets: ums-secret - # Set to false if you are not using Openshift - isOpenShift: true - # UMS-specific global values - ums: - serviceType: Ingress - # hostname: c1-e.us-east.containers.cloud.ibm.com - port: 443 - # Secret with admin credentials - adminSecretName: ibm-dba-ums-secret - # Secret with DB connection credentials - dbSecretName: ibm-dba-ums-db-secret - #Secret to be filled from the LTPA creation job - ltpaSecretName: ibm-dba-ums-ltpa-creation-secret - -# UMS Docker images -images: - ums: us.icr.io/cp4a-ums/ums:19.0.2 - initTLS: us.icr.io/cp4a-ums/dba-keytool-initcontainer:19.0.2 - ltpa: us.icr.io/cp4a-ums/dba-keytool-jobcontainer:19.0.2 - -# Secret with an Ingress certificate -ingressSecretName: - -# UMS certificate secret -tls: - tlsSecretName: ibm-dba-ums-tls - -# Toggle for custom JDBC drivers -useCustomJDBCDrivers: false - -# UMS OAuth config -oauth: - database: - type: derby - # name: - # host: - # port: - driverfiles: - clientManagerGroup: - jwtSecretName: ibm-dba-ums-tls - -# UMS Team Server database config -teamserver: - database: - type: derby - # name: - # host: - # port: - driverfiles: -``` - -### Generate and customize deployment yamls -1. Generate the output folder -``` -mkdir yamls -``` -2. Generate deployment yamls to the created folder -``` -helm template --name cp4a-ums --namespace cp4a-ums --output-dir ./yamls -f myvalues.yaml ibm-dba-ums-prod-1.0.0.tgz -``` -3. Move to the yamls folder. Remove `ibm-dba-ums-prod/templates/test` folder. -4. 
Apply the YAML definitions by running the command
-```
-kubectl apply -R -f ./yamls
-```
-Your output should look similar to:
-```
-role.rbac.authorization.k8s.io/cp4a-ums-ibm-dba-ums-deployment created
-rolebinding.rbac.authorization.k8s.io/cp4a-ums-ibm-dba-ums-deployment created
-serviceaccount/cp4a-ums-ibm-dba-ums created
-role.rbac.authorization.k8s.io/cp4a-ums-ibm-dba-ums-ltpa-creation-role created
-rolebinding.rbac.authorization.k8s.io/cp4a-ums-ibm-dba-ums-ltpa-creation-role-binding created
-serviceaccount/cp4a-ums-ibm-dba-ums-ltpa-creation-service-account created
-networkpolicy.networking.k8s.io/ums-apiserver created
-networkpolicy.networking.k8s.io/ums-database created
-networkpolicy.networking.k8s.io/default-deny created
-networkpolicy.networking.k8s.io/ums-dns created
-networkpolicy.networking.k8s.io/ums-https created
-networkpolicy.networking.k8s.io/ums-ldap created
-networkpolicy.networking.k8s.io/ums-test-container-https created
-configmap/cp4a-ums-ibm-dba-ums created
-configmap/cp4a-ums-ibm-dba-ums-custom created
-deployment.apps/cp4a-ums-ibm-dba-ums created
-horizontalpodautoscaler.autoscaling/cp4a-ums-ibm-dba-ums created
-job.batch/cp4a-ums-ibm-dba-ums-ltpa-creation-job-39987 created
-poddisruptionbudget.policy/cp4a-ums-ibm-dba-ums created
-service/cp4a-ums-ibm-dba-ums created
-```
-### Create a route to expose User Management Service
-1. In a browser, log in to IBM Cloud, select your cluster, and open the OpenShift web console. Select your application (cp4a-ums in this example).
-2. From the menu select Applications -> Routes. Click `Create Route`.
-3. Provide a unique name for the route, e.g. `cp4a-ums-route`.
-4. Leave the Hostname blank; it will be generated.
-5. As Path, specify `/ums`.
-6. Select the service and the Target Port (9444 -> 9443 (TCP)).
-7. Check the box `Secure route`.
-8. For TLS Termination, select `Re-encrypt`.
-9. For Insecure Traffic, specify `None`.
-10. As CA Certificate, provide the certificate you used to generate the TLS secret.
-11. Click `Create` to create the route.
-
-### Configure hostname in the Config Map
-1. Copy the hostname that was generated for the route in the previous step.
-2. In the OpenShift console of your application, select Resources -> Config Maps.
-3. Select the Config Map.
-4. Click on Actions -> Edit YAML.
-5. In the `ums.xml` section, set the variable `ums.externalHostName` to the value of the generated hostname.
-6. Save the Config Map.
-
-## Verify UMS installation
-From the Routes view, click the hostname that was generated for the route.
-The UMS login page opens in the browser. Log in as the administrative user you specified in ums-secret.yaml
-or any user of a connected LDAP if you included an LDAP configuration in myvalues.yaml customXML.
-
-Congratulations, your UMS is now on ROKS.
diff --git a/UMS/platform/README-icp.md b/UMS/platform/README-icp.md
deleted file mode 100644
index d062eb14..00000000
--- a/UMS/platform/README-icp.md
+++ /dev/null
@@ -1,233 +0,0 @@
-# Install User Management Service 19.0.2 on IBM Cloud Private 3.1.2
-
-User Management Service can be installed on IBM Cloud Private 3.1.2. This documentation provides step-by-step instructions on how to install UMS on IBM Cloud Private for test purposes. The documentation therefore does not include steps to set up a production-ready database, create an image policy, or configure a persistent volume.
-
-## Prepare your environment
-In order to interact with your IBM Cloud Private 3.1.2 cluster, you need to install and initialize the command line interfaces.
-1. 
Access your cluster at https://{MasterIP}:{consolePort}/console/tools/cli, e.g. https://1.2.3.4:8443/console/tools/cli -1. Download and install - * IBM Cloud Private CLI - * Kubernetes CLI - * Helm CLI -1. Initialize all CLIs by logging into your cluster: `cloudctl login -a https://{MasterIP}:{consolePort}`. Note that you can pass credentials and a namespace using parameters `-n` (for namespace), `-u` for username, and `-p` for password. However, it is recommended to avoid credentials in command line parameters as they might be exposed in command history. - -This guide assumes your ICP 3.1.2 cluster's master node can be addressed using `mycluster.icp`, that is, a /etc/hosts entry exists. - -## Prerequisites - -In order to install the User Management Service via helm, you need to create a file `myvalues.yaml` to override some defaults of `values.yaml`, such as your database specific settings. The following section explain the prerequisites and the corresponding settings in `myvalues.yaml`. - -### Create a database -User Management Service needs a database to work. - -The simplest test environment with a single replica can use a built-in derby database in the container. Data is not shared across multiple replicas and is lost upon restarting the pod. If these restrictions are acceptable for a simple demonstration environment, you can set `derby` as your database type in your `myvalues.yaml` -```yaml -oauth: - database: - type: derby -... -teamserver: - database: - type: derby -``` -For sharing data between replicas and keeping data when restarting, you must use a remote database, which can be installed in the same kubernetes cluster or "standalone". Follow the instructions of your database vendor, e.g. -* [IBM Db2 Developer-C](/~https://github.com/IBM/charts/tree/master/stable/ibm-db2oltp-dev) -* IBM Db2 Advanced Enterprise Edition Helm Chart - -If you install Db2 in the same kubernetes environment, you can access Db2 using a kubernetes service without exposing a port publicly. The database is available at service-name.namespace, see [Service discovery (kube-dns) -](https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.1.2/manage_network/service_discovery.html). -For example, if you installed Db2 in namespace `db2` and created a service `umsdb-ibm-db2oltp-dev-db2`, you can use `umsdb-ibm-db2oltp-dev-db2.db2` as hostname: - -```yaml -oauth: - database: - type: db2 - name: umsdb - host: umsdb-ibm-db2oltp-dev-db2.db2 - port: 50000 -``` - -### Create namespace and switch to use it -User Management Service should be installed into a dedicated namespace. Use the following command to create a namespace. - -```bash -kubectl create namespace cp4a-ums -cloudctl logout -cloudctl login -a https://mycluster.icp:8443 -n cp4a-ums -``` - -### Create image policy -This is optional. If you intend to load docker images for User Management Service into a remote docker registry and let your IBM Cloud Private cluster pull images, from this remote location, you need to create an image pull policy, see [imagepolicy.yaml](../configuration/imagepolicy.yaml) as a sample. - -### Create a docker pull secret -This is optional. 
If you intend to load docker images for User Management Service into a remote docker registry and let your IBM Cloud Private cluster pull images, from this remote location, you need to create image pull secrets for each of these registries: - -```bash -kubectl create secret docker-registry ums-pull-secret1 --docker-server=mycluster.icp:8500 --docker-username=dockeruser --docker-password=dockerpassword -``` - -The name of this secret can be passed to helm as a parameter in `myvalues.yaml` - -```yaml -global: - imagePullSecrets: - - ums-pull-secret1 - - base-image-artifactory -``` - -### Generate TLS secret -To ensure the internal communication is secure, a TLS secret must be provided. -The secret can be generated by running the following command: -```bash -openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -``` - -This command generates two files: tls.crt and tls.key. They are used to generate the TLS secret: -```bash -kubectl create secret tls ibm-dba-ums-tls --key=tls.key --cert=tls.crt -``` - -The name of this secret can be passed to helm as a parameter in `myvalues.yaml` - -```yaml -tls: - tlsSecretName: ibm-dba-ums-tls -``` - -### Generate UMS secret, DB secrets and LTPA generation secret -To avoid passing sensitive information via values.yaml, three secrets need to be created before installing the chart. -1. Edit [ums-secret.yaml](../configuration/ums-secret-yaml) -2. For ibm-dba-ums-secret specify adminUser, adminPassword, sslKeystorePassword, jwtKeystorePassword, teamserverClientID, teamserverClientSecret and ltpaPassword -3. For ibm-dba-ums-db-secret specify oauthDBUser/outhDBPassword and tsDBUser/tsDBPassword. -4. For ibm-dba-ums-ltpa-creation-secret do nothing. Configuration will be performed during LTPA creation. -5. Save ums-secret.yaml -6. In a shell run this command to create the required secrets. - -```bash -kubectl create -f ums-secret.yaml -``` - -**Note**: Secret names need to be passed to the chart via the global.ums.adminSecretName, global.ums.dbSecretName and global.ums.ltpaSecretName properties. - -### Persistent Volume -This is optional. As this is the instruction for a test deployment of UMS, Persistent Volume configuration is not covered. A persistent volume is only required in order to mount -* JDBC drivers for a database other than Db2. -* custom truststore for connecting to LDAP securely -* custom binaries required by your Liberty configuration (such as a .jar file for a Trust Association Interceptor). - -## Install the chart -### Download PPA and load images to the content registry -Follow instructions to download User Management Service images and loadimages.sh file in [Download PPA and load images](/~https://github.com/icp4a/cert-kubernetes/blob/master/README.md#step-2-download-a-product-package-from-ppa-and-load-the-images) - -Using sample values from this guide: - -```bash -git clone /~https://github.com/icp4a/cert-kubernetes.git -cd cert-kubernetes -docker login mycluster.icp:8500 -scripts/loadimages.sh -p ~/Downloads/.tgz -r mycluster.icp:8500/ums1902 -``` -When finished, you see a message similar to: - -``` -Docker images push to mycluster.icp:8500/ums1902 completed, and check the following images in the Docker registry: - - mycluster.icp:8500/ums1902/ums:19.0.2 - - mycluster.icp:8500/ums1902/dba-keytool-initcontainer:19.0.2 - - mycluster.icp:8500/ums1902/dba-keytool-jobcontainer:19.0.2 -``` -Those image names must match the images section in `myvalues.yaml`. - -### Download helm chart and customize values.yaml -1. 
Download the helm chart [ibm-dba-ums-prod-1.0.0.tgz](../helm-charts/ibm-dba-ums-prod-1.0.0.tgz) -2. In a shell extract the downloaded package -```bash -tar -xvf ibm-dba-ums-prod-1.0.0.tgz -``` -3. Review `values.yaml` and the `myvalues.yaml` file for your release to override defaults where necessary and to specify values for settings without defaults. Review `README.md` inside the helm chart for more details on the individual settings. Make sure to set the `global.ums.isOpenShift` parameter to `false`. This ensures required configuration for the pod's container security context. - -This is a sample `myvalues.yaml` file using sample values from this guide. - -```yaml -global: - isOpenShift: false - ums: - hostname: ums-hostname #replace with your own hostname - adminSecretName: ibm-dba-ums-secret - dbSecretName: ibm-dba-ums-db-secret - ltpaSecretName: ibm-dba-ums-ltpa-creation-secret - serviceType: Ingress - -# UMS Docker images -images: - ums: mycluster.icp:8500/ums1902/ums:19.0.2 - initTLS: mycluster.icp:8500/ums1902/dba-keytool-initcontainer:19.0.2 - ltpa: mycluster.icp:8500/ums1902/dba-keytool-jobcontainer:19.0.2 - -# UMS certificate secret -tls: - tlsSecretName: ibm-dba-ums-tls - -# UMS OAuth config -oauth: - database: # replace with your own db settings - type: db2 - name: umsdb - host: umsdb-ibm-db2oltp-dev-db2.db2 - port: 50000 - # for demonstration purposes, we reuse the container TLS certificate to sign JWT tokens, you can create and refer to a dedicated secret here - jwtSecretName: ibm-dba-ums-tls - -# UMS Team Server database config -teamserver: - database: # replace with your own db settings - type: db2 - name: umsdb - host: umsdb-ibm-db2oltp-dev-db2.db2 - port: 50000 -``` - -### Use helm to install -After having created all prerequisites and customized `myvalues.yaml`, you can run - -```bash -helm install --tls -n cp4a-ums -f myvalues.yaml ibm-dba-ums-prod-1.0.0.tgz -``` - -The command returns within seconds, summarizing the resources that were created in the cluster. - -## Verify UMS installation -After the IBM Cloud Private 3.1.2 cluster completes the creation of resources and starting of pods, you can access User Management Service for basic function testing. - -Use the following command to observe the current installation and pod starting status: `kubectl get pods` - -During installation / startup, the status shows 0 ready pods. -```bash -kubectl get pods -NAME READY STATUS RESTARTS AGE -cp4a-ums-ibm-dba-ums-76d48486f5-4g9l6 0/1 Running 0 45s -cp4a-ums-ibm-dba-ums-76d48486f5-wlfjv 0/1 Running 0 45s -cp4a-ums-ibm-dba-ums-ltpa-creation-job-32881-czhqr 0/1 Completed 0 45s -``` - -Once the pods respond to readiness probes, the status will be updated: -```bash -kubectl get pods -NAME READY STATUS RESTARTS AGE -cp4a-ums-ibm-dba-ums-8f9cc7c54-46mjw 1/1 Running 0 33m -cp4a-ums-ibm-dba-ums-8f9cc7c54-ml8bz 1/1 Running 0 33m -cp4a-ums-ibm-dba-ums-ltpa-creation-job-32881-czhqr 0/1 Completed 0 33m -``` - -Note that the -ibm-dba-ums-ltpa-creation-job-- pod is expected in completed state. - -You can view the configured ingress for accepting inbound HTTP traffic: -```bash -kubectl get ingress - -NAME HOSTS ADDRESS PORTS AGE -ums1902-ibm-dba-ums adenoma1.fyre.ibm.com 9.30.205.41 80, 443 2m33s -``` - -Use the host of this ingress to access https:///ums to view the login page. Log in as the administrative user you specified in `ums-secret.yaml` or any user of a connected LDAP if you included an LDAP configuration in `myvalues.yaml` customXML. 
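If you prefer a command line smoke test before opening a browser, you can request the login page directly. This is a minimal sketch, assuming the ingress host reported by `kubectl get ingress` (shown here as a placeholder) and using `-k` because this walkthrough uses a self-signed TLS certificate.

```bash
# Expect HTTP 200 once the pods report ready.
# Replace <ingress-host> with the HOSTS value from `kubectl get ingress`.
curl -k -s -L -o /dev/null -w "%{http_code}\n" https://<ingress-host>/ums
```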
- -Congratulations, your UMS is now on IBM Cloud Private 3.1.2. diff --git a/UMS/platform/README-minikube.md b/UMS/platform/README-minikube.md deleted file mode 100644 index 5b9e7fad..00000000 --- a/UMS/platform/README-minikube.md +++ /dev/null @@ -1,325 +0,0 @@ -# Install User Management Service 19.0.2 on Minikube - -User Management Service can be installed on Minikube. This documentation provides a step-by-step instruction on how to install UMS on Minikube for test purposes. The documentation therefore does not include steps to setup a production-ready database, create image policy or configure persistent volume. - - -## Step 1: Install Minikube and Tiller - -1. Refer to the Kubernetes [documentation](https://kubernetes.io/docs/setup/minikube/#installation) to install Minikube and kubectl. - -2. Start Minikube. - - ```bash - minikube start - ``` - - This starts Minikube with the default memory of 2048 MB and 2 cpus. - This is sufficient for a test install of User Management Service. - - > **Note**: If more cpus or memory are required, stop and delete it before restarting it with different parameters. - ```bash - minikube stop - minikube delete - minikube start --cpus 6 --memory 4096 - ``` - -3. Verify your installation. - - ```bash - kubectl get nodes - ``` - -4. Install [Helm 2.14.3](/~https://github.com/helm/helm/releases/tag/v2.14.3). - -5. Install Tiller in the Minikube cluster. - - ```bash - helm init - ``` - -## Step 2: Download PPA and load images to the local content registry - -1. Follow instructions to download User Management Service images and loadimages.sh file in [Download PPA and load images](/~https://github.com/icp4a/cert-kubernetes/blob/master/README.md#step-2-download-a-product-package-from-ppa-and-load-the-images) - - > **Note**: **DO NOT** run the loadimages.sh script at this point. - -2. Configure your bash shell to use the Minikube built-in [Docker daemon](https://kubernetes.io/docs/setup/minikube/#use-local-images-by-re-using-the-docker-daemon). - - ```bash - eval $(minikube docker-env) - ``` - - > **Note**: If you are not using the bash shell, execute ```minikube docker-env``` and see what environment variables this would set. Translate it to the corresponding command in your shell. - -3. Use the following command to load the images in the Minikube local repository. - - ```bash - git clone /~https://github.com/icp4a/cert-kubernetes.git - cd cert-kubernetes - scripts/loadimages.sh -l -p .tgz -r ibmcom - ``` - - On success, the command prints a message such as: - ```console - Docker images load to ibmcom completed, and check the following images in the Docker registry: - - ibmcom/ums:19.0.2 - - ibmcom/dba-keytool-initcontainer:19.0.2 - - ibmcom/dba-keytool-jobcontainer:19.0.2 - ``` - - Remember these values since we need them later. - - -## Step 3: Download helm chart -1. Download the helm chart [ibm-dba-ums-prod-1.0.0.tgz](../helm-charts/ibm-dba-ums-prod-1.0.0.tgz) -2. In a shell extract the downloaded package - - ```bash - tar -xvf ibm-dba-ums-prod-1.0.0.tgz - ``` - - You find the main settings in the file `ibm-dba-ums-prod/values.yaml`. - -## Step 4: Prerequisites and prepare myvalues.yaml - -In order to install the User Management Service via helm, you need to create a file `myvalues.yaml` to override some defaults of `values.yaml`, such as your database specific settings. The following section explain the prerequisites and the corresponding settings in `myvalues.yaml`. 
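If you would rather start from a complete file than assemble `myvalues.yaml` from scratch, one optional approach is to dump the chart defaults and then edit only the settings called out in the following sections. This assumes the downloaded chart archive is in your current directory.

```bash
# Write the chart's default values to a file that you can trim down into myvalues.yaml.
helm inspect values ibm-dba-ums-prod-1.0.0.tgz > myvalues.yaml
```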
- -### Set the global settings and fill the image location - -The `myvalues.yaml` requires some global settings: -The flag isOpenShift must be false, and the serviceType must be NodePort. -By default, Minikube accepts ports in the range 30000-32767. -The hostname should be choosen as the name that will be used to access the User Management Service. - -```yaml -global: - isOpenShift: false - ums: - serviceType: NodePort - hostname: ums-hostname # replace with your host name - port: 30000 -``` - -The `loadimages.sh` script has emitted the location of the images. -These need to be entered in `myvalues.yaml` as follows: -```yaml -images: - ums: ibmcom/ums:19.0.2 - initTLS: ibmcom/dba-keytool-initcontainer:19.0.2 - ltpa: ibmcom/dba-keytool-jobcontainer:19.0.2 -``` - - -### Create a database -User Management Service needs a database to work. - -The simplest test environment with a single replica can use a built-in derby database in the container. Data is not shared across multiple replicas and is lost upon restarting the pod. If these restrictions are acceptable for a simple demonstration environment, you can set `derby` as your database type in your `myvalues.yaml` -```yaml -oauth: - database: - type: derby -... -teamserver: - database: - type: derby -``` -For sharing data between replicas and keeping data when restarting, you must use a remote database, which can be installed in the same kubernetes cluster or "standalone". Follow the instructions of your database vendor, e.g. -* [IBM Db2 Developer-C](/~https://github.com/IBM/charts/tree/master/stable/ibm-db2oltp-dev) -* IBM Db2 Advanced Enterprise Edition Helm Chart - -If you install Db2 in the same kubernetes environment, you can access Db2 using a kubernetes service without exposing a port publicly. The database is available at service-name.namespace, see [Service discovery (kube-dns) -](https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.1.2/manage_network/service_discovery.html). -For example, if you installed Db2 in namespace `db2` and created a service `umsdb-ibm-db2oltp-dev-db2`, you can use `umsdb-ibm-db2oltp-dev-db2.db2` as hostname: - -```yaml -oauth: - database: - type: db2 - name: umsdb - host: umsdb-ibm-db2oltp-dev-db2.db2 - port: 50000 -``` - -### Create namespace -User Management Service should be installed into a dedicated namespace. Use the following command to create a namespace. - -```bash -kubectl create namespace minikube-ums -``` -Verify the name space: -```bash -kubectl get namespaces -``` -This should show all namespaces, including the namespace minikube-ums. -All following kubectl commands need the option `--namespace=minikube-ums`. - -### Generate TLS secret -To ensure the internal communication is secure, a TLS secret must be provided. -The secret can be generated by running the following command: -```bash -openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -``` -This command queries for some additional information. Ensure that the Common Name is exactly the hostname (e.g. `ums-hostname`) choosen above. -The command generates two files: tls.crt and tls.key. They are used to generate the TLS secret: -```bash -kubectl create secret tls ibm-dba-ums-tls --key=tls.key --cert=tls.crt --namespace=minikube-ums -``` - -The name of this secret can be passed to helm as a parameter in `myvalues.yaml` - -```yaml -tls: - tlsSecretName: ibm-dba-ums-tls -``` - -We can also reuse the same secret as OAuth JWT secret in `myvalues.yaml` - -```yaml -oauth: - ... 
- jwtSecretName: ibm-dba-ums-tls -``` - -### Generate UMS secret, DB secrets and LTPA generation secret - -To avoid passing sensitive information via `myvalues.yaml`, three secrets need to be created before installing the chart. For these secrets, we use the separate file `ums-secret.yaml`. -1. Edit [ums-secret.yaml](../configuration/ums-secret.yaml) -2. For ibm-dba-ums-secret specify adminUser, adminPassword, sslKeystorePassword, jwtKeystorePassword, teamserverClientID, teamserverClientSecret and ltpaPassword -3. For ibm-dba-ums-db-secret specify oauthDBUser/outhDBPassword and tsDBUser/tsDBPassword. -4. For ibm-dba-ums-ltpa-creation-secret do nothing. Configuration will be performed during LTPA creation. -5. Save `ums-secret.yaml` -6. In a shell run this command to create the required secrets. - -```bash -kubectl create -f ums-secret.yaml --namespace=minikube-ums -``` - -Secret names need to be passed to the chart via the global.ums.adminSecretName, global.ums.dbSecretName and global.ums.ltpaSecretName properties. The file `myvalues.yaml` should now contain: - -```yaml -global: - isOpenShift: false - ums: - ... - adminSecretName: ibm-dba-ums-secret - dbSecretName: ibm-dba-ums-db-secret - ltpaSecretName: ibm-dba-ums-ltpa-creation-secret -``` - -### Persistent Volume -This is optional. As this is the instruction for a test deployment of UMS, Persistent Volume configuration is not covered. A persistent volume is only required in order to mount -* JDBC drivers for a database other than Db2. -* custom truststore for connecting to LDAP securely -* custom binaries required by your Liberty configuration (such as a .jar file for a Trust Association Interceptor). - -### Example myvalues.yaml - -Review `values.yaml` and the `myvalues.yaml` file for your release to override defaults where necessary and to specify values for settings without defaults. Review `README.md` inside the helm chart for more details on the individual settings. - -Here is an example `myvalues.yaml` for a DB2 database: - -```yaml -global: - isOpenShift: false - ums: - serviceType: NodePort - hostname: ums-hostname # replace with your hostname - port: 30000 - adminSecretName: ibm-dba-ums-secret # defined in ums-secret.yaml - dbSecretName: ibm-dba-ums-db-secret # defined in ums-secret.yaml - ltpaSecretName: ibm-dba-ums-ltpa-creation-secret # defined in ums-secret.yaml - -# UMS Docker images -images: - ums: ibmcom/ums:19.0.2 - initTLS: ibmcom/dba-keytool-initcontainer:19.0.2 - ltpa: ibmcom/dba-keytool-jobcontainer:19.0.2 - -# UMS certificate secret -tls: - tlsSecretName: ibm-dba-ums-tls - -# UMS OAuth config -oauth: - database: # replace with your own db settings - type: db2 - name: umsdb - host: umsdb-ibm-db2oltp-dev-db2.db2 - port: 50000 - jwtSecretName: ibm-dba-ums-tls - -# UMS Team Server database config -teamserver: - database: # replace with your own db settings - type: db2 - name: umsdb - host: umsdb-ibm-db2oltp-dev-db2.db2 - port: 50000 -``` - - -## Step 5: Install the chart - -After having created all prerequisites and customized `myvalues.yaml`, you can run - -```bash -helm install --namespace minikube-ums --name ums-default -f myvalues.yaml ibm-dba-ums-prod-1.0.0.tgz --debug -``` - -This installs the User Management Service under the release name ums-default, which is the prefix of the pods that will be created. -The command returns within seconds, summarizing the resources that were created in the cluster. 
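To display that summary again later, or to check the state of the release before deciding whether to retry, you can query Helm. A small example, assuming the release name `ums-default` used above:

```bash
# Show the deployment status and the resources that belong to the release.
helm status ums-default
# List the release together with its revision and chart version.
helm ls ums-default
```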
- -If the install fails, delete the release ums-default first before trying to install it again: -```bash -helm del --purge ums-default -helm install --namespace minikube-ums --name ums-default -f myvalues.yaml ibm-dba-ums-prod-1.0.0.tgz --debug -``` - -## Step 6: Verify UMS installation - -After the Minikube cluster completes the creation of resources and starting of pods, you can access User Management Service for basic function testing. - -Use the following command to observe the current installation and pod starting status: -```bash -kubectl get pods --namespace minikube-ums -``` - -During installation / startup, the status shows 0 ready pods. -```bash -kubectl get pods --namespace minikube-ums - -NAME READY STATUS RESTARTS AGE -ums-default-ibm-dba-ums-76d48486f5-4g9l6 0/1 Running 0 45s -ums-default-ibm-dba-ums-76d48486f5-wlfjv 0/1 Running 0 45s -ums-default-ibm-dba-ums-ltpa-creation-job-32881-czhqr 0/1 Completed 0 45s -``` - -Once the pods respond to readiness probes, the status will be updated: -```bash -kubectl get pods --namespace minikube-ums - -NAME READY STATUS RESTARTS AGE -ums-default-ibm-dba-ums-8f9cc7c54-46mjw 1/1 Running 0 33m -ums-default-ibm-dba-ums-8f9cc7c54-ml8bz 1/1 Running 0 33m -ums-default-ibm-dba-ums-ltpa-creation-job-32881-czhqr 0/1 Completed 0 33m -``` - -> **Note:** The -ibm-dba-ums-ltpa-creation-job-- pod is expected in completed state. - -To see details of a pod, use the command: -```bash -kubectl describe pod ums-default-ibm-dba-ums-8f9cc7c54-46mjw --namespace minikube-ums -``` - -To see the services provided by the Minikube cluster: -```bash -kubectl get services --namespace minikube-ums - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -ums-default-ibm-dba-ums NodePort 10.107.19.17 9443:30000/TCP 13m -``` - -To access the User Management Service from outside, see the DOCKER_HOST environment variable that was emitted by `minikube dockerenv`. For instance, if the DOCKER_HOST is an IP address 192.168.99.100, combine it with the Minikube port that was specified, to access https://192.168.99.100:30000/ums to view the login page. Log in as the administrative user you specified in ums-secret.yaml or any user of a connected LDAP if you included an LDAP configuration in myvalues.yaml customXML. - -Congratulations, your UMS is now on Minikube. - diff --git a/UMS/platform/README-openshift.md b/UMS/platform/README-openshift.md deleted file mode 100644 index f91432e9..00000000 --- a/UMS/platform/README-openshift.md +++ /dev/null @@ -1,274 +0,0 @@ -# Install User Management Service 19.0.2 on Red Hat OpenShift 3.11 - -This documentation provides step-by-step instructions on how to install User Management Service 19.0.2 on Red Hat OpenShift 3.11 for test purposes. The documentation therefore does not include steps to setup a production-ready database, create image policy or configure persistent volume. - -## Prepare your environment - -As an administrator of the cluster you must be able to interact with your environment. Run the following commands to connect and check your access. - -In order to interact with your Red Hat OpenShift 3.11 cluster, you need install and initialize command line interfaces. -1. Access your cluster at https://{MasterIP}:{consolePort}/console/command-line, e.g. https://1.2.3.4:8443/console/command-line -2. Download and install - * Red Hat OpenShift CLI - * Kubernetes CLI - * Helm CLI - -3. Login to the cluster: - ```bash - oc login https://:8443 -u - ``` -4. Check you can run docker. 
- ```bash - docker ps - ``` -## Prerequisites - -In order to install the User Management Service via helm, you need to create a file `myvalues.yaml` to override some defaults of `values.yaml`, such as your database specific settings. The following section explain the prerequisites and the corresponding settings in `myvalues.yaml`. - -### Create a database - -User Management Service needs a database to work. - -The simplest test environment with a single replica can use a built-in derby database in the container. Data is not shared across multiple replicas and is lost upon restarting the pod. If these restrictions are acceptable for a simple demonstration environment, you can set `derby` as your database type in your `myvalues.yaml` -```yaml -oauth: - database: - type: derby -... -teamserver: - database: - type: derby -``` -For sharing data between replicas and keeping data when restarting, you must use a remote database, which can be installed in the same kubernetes cluster or "standalone". Follow the instructions of your database vendor, e.g. -* [IBM Db2 Developer-C](/~https://github.com/IBM/charts/tree/master/stable/ibm-db2oltp-dev) -* IBM Db2 Advanced Enterprise Edition Helm Chart - -If you install Db2 in the same kubernetes environment, you can access Db2 using a kubernetes service without exposing a port publicly. The database is available at service-name.namespace, see [Service discovery (kube-dns) -](https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.1.2/manage_network/service_discovery.html). -For example, if you installed Db2 in namespace `db2` and created a service `umsdb-ibm-db2oltp-dev-db2`, you can use `umsdb-ibm-db2oltp-dev-db2.db2` as hostname: - -```yaml -oauth: - database: - type: db2 - name: umsdb - host: umsdb-ibm-db2oltp-dev-db2.db2 - port: 50000 -``` - -### Create a project where you want to install User Management Service -User Management Service should be installed into a dedicated project/namespace. Use the following command to create a project and switch to it. - -```bash -oc new-project umsproject -``` - -**Note:** The `oc` command implicitly passes the current project name for all subsequent commands. For `kubectl` you will need to pass the `-n umsproject` parameter explicitly. - -### Create image policy -This is optional. If you intend to load docker images for User Management Service into a remote docker registry and let your Red Hat OpenShift cluster pull images, from this remote location, you need to create an image pull policy, see [imagepolicy.yaml](../configuration/imagepolicy.yaml) as a sample. - -### Install IBM Cloud Pak SecurityContextConstraints resources to your cluster -Install IBM Cloud Pak SecurityContextConstraints resources to your cluster. Refer to '[`ibm-restricted-scc`](https://ibm.biz/cpkspec-scc)'. - -### Create a docker pull secret -This is optional. 
If you intend to load docker images for User Management Service into a remote docker registry and let your IBM Cloud Private cluster pull images, from this remote location, you need to create image pull secrets for each of these registries: - -```bash -oc create secret docker-registry ums-pull-secret1 --docker-server=docker-registry.default.svc:5000 --docker-username=dockeruser --docker-password=dockerpassword -``` - -The name of this secret can be passed to helm as a parameter in `myvalues.yaml` - -```yaml -global: - imagePullSecrets: - - ums-pull-secret1 - - base-image-artifactory -``` - -### Generate TLS secret -To ensure the internal communication is secure, a TLS secret must be provided. -The secret can be generated by running the following command: -```bash -openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -``` - -This command generates two files: tls.crt and tls.key. They are used to generate the TLS secret: -```bash -oc create secret tls ibm-dba-ums-tls --key=tls.key --cert=tls.crt -``` - -The name of this secret can be passed to helm as a parameter in `myvalues.yaml` - -```yaml -tls: - tlsSecretName: ibm-dba-ums-tls -``` - -### Generate UMS secret, DB secrets and LTPA generation secret -To avoid passing sensitive information via values.yaml, three secrets need to be created before installing the chart. -1. Edit [ums-secret.yaml](../configuration/ums-secret.yaml) -2. For ibm-dba-ums-secret specify adminUser, adminPassword, sslKeystorePassword, jwtKeystorePassword, teamserverClientID, teamserverClientSecret and ltpaPassword -3. For ibm-dba-ums-db-secret specify oauthDBUser/outhDBPassword and tsDBUser/tsDBPassword. -4. For ibm-dba-ums-ltpa-creation-secret do nothing. Configuration will be performed during LTPA creation. -5. Save ums-secret.yaml -6. In a shell run this command to create the required secrets. - -```bash -oc create -f ums-secret.yaml -``` - -**Note:** Secret names need to be passed to the chart via the global.ums.adminSecretName, global.ums.dbSecretName and global.ums.ltpaSecretName properties. - -### Persistent Volume -This is optional. As this is the instruction for a test deployment of UMS, Persistent Volume configuration is not covered. A persistent volume is only required in order to mount -* JDBC drivers for a database other than Db2. -* custom truststore for connecting to LDAP securely -* custom binaries required by your Liberty configuration (such as a .jar file for a Trust Association Interceptor). - -## Install the chart - -### Download PPA and load images to the content registry -Follow instructions to download User Management Service images and loadimages.sh file in [Download PPA and load images](/~https://github.com/icp4a/cert-kubernetes/blob/master/README.md#step-2-download-a-product-package-from-ppa-and-load-the-images) - -The following commands need to be executed from inside the cluster (e.g. 
on master machine) and assume that you are already logged-in to Red Hat OpenShift using `oc login`: - -```bash -git clone /~https://github.com/icp4a/cert-kubernetes.git -cd cert-kubernetes -docker login $(oc registry info) -u -p $(oc whoami -t) -scripts/loadimages.sh -p ~/Downloads/.tgz -r $(oc registry info)/umsproject -``` -When finished, you see a message similar to: - -``` -Docker images push to docker-registry.default.svc:5000/umsproject completed, and check the following images in the Docker registry: - - docker-registry.default.svc:5000/umsproject/ums:19.0.2 - - docker-registry.default.svc:5000/umsproject/dba-keytool-initcontainer:19.0.2 - - docker-registry.default.svc:5000/umsproject/dba-keytool-jobcontainer:19.0.2 -``` -Those image names must match the images section in `myvalues.yaml`. - -Check whether the images have been pushed correctly to the registry. - -```bash -oc get is -``` - -The results should look like this: -```bash -NAME DOCKER REPO TAGS UPDATED -dba-keytool-initcontainer docker-registry.default.svc:5000/umsproject/dba-keytool-initcontainer 19.0.2 19 hours ago -dba-keytool-jobcontainer docker-registry.default.svc:5000/umsproject/dba-keytool-jobcontainer 19.0.2 19 hours ago -ums docker-registry.default.svc:5000/umsproject/ums 19.0.2 19 hours ago -``` - -### Download helm chart and customize values.yaml -1. Download the helm chart [ibm-dba-ums-prod-1.0.0.tgz](../helm-charts/ibm-dba-ums-prod-1.0.0.tgz) -2. In a shell extract the downloaded package -```bash -tar -xvf ibm-dba-ums-prod-1.0.0.tgz -``` -3. Review `values.yaml` and the `myvalues.yaml` file for your release to override defaults where necessary and to specify values for settings without defaults. Review `README.md` inside the helm chart for more details on the individual settings. Make sure to set the `global.ums.isOpenShift` parameter to `true`. This ensures required configuration for the pod's container security context. - -This is a sample `myvalues.yaml` file using sample values from this guide. - -```yaml -global: - isOpenShift: true - ums: - hostname: ums-hostname #replace with your own hostname - adminSecretName: ibm-dba-ums-secret - dbSecretName: ibm-dba-ums-db-secret - ltpaSecretName: ibm-dba-ums-ltpa-creation-secret - serviceType: Ingress - -# UMS Docker images -images: - ums: docker-registry.default.svc:5000/umsproject/ums:19.0.2 - initTLS: docker-registry.default.svc:5000/umsproject/dba-keytool-initcontainer:19.0.2 - ltpa: docker-registry.default.svc:5000/umsproject/dba-keytool-jobcontainer:19.0.2 - -# UMS certificate secret -tls: - tlsSecretName: ibm-dba-ums-tls - -# UMS OAuth config -oauth: - database: # replace with your own db settings - type: db2 - name: umsdb - host: umsdb-ibm-db2oltp-dev-db2.db2 - port: 50000 - # for demonstration purposes, we reuse the container TLS certificate to sign JWT tokens, you can create and refer to a dedicated secret here - jwtSecretName: ibm-dba-ums-tls - -# UMS Team Server database config -teamserver: - database: # replace with your own db settings - type: db2 - name: umsdb - host: umsdb-ibm-db2oltp-dev-db2.db2 - port: 50000 -``` - -### Use helm to create the release templates -After having created all prerequisites and customized `myvalues.yaml`, you can run - -```bash -helm template -f myvalues.yaml -n cp4a-ums ibm-dba-ums-prod-1.0.0.tgz --output-dir cp4a-ums -``` - -to create the kubernetes release yaml files into a directory called `cp4a-ums`. 
Then apply the files in the Red Hat OpenShift cluster using - -```bash -oc apply -R -f cp4a-ums -``` - -The command returns within seconds, summarizing the resources that were created in the cluster. - -### Create a route to expose User Management Service - -To expose the User Management Service release to the public you need to create a route in the Red Hat OpenShift cluster. The command create a route using SSL/TLS re-encrypt option. With this option the Red Hat OpenShift router will terminate the SSL connection and re-encrypt the traffic using the User Management Service TLS Certificate internally. For that we need to provide the User Management Service TLS Certificate as generated above. - -```bash -oc create route reencrypt ums-route --hostname=ums-hostname --path=/ --service=cp4a-ums-ibm-dba-ums --dest-ca-cert=tls.crt -``` - -## Verify UMS installation -After the Red Hat OpenShift 3.11 cluster completes the creation of resources and starting of pods, you can access User Management Service for basic function testing. - -Use the following command to observe the current installation and pod starting status: `oc get pods` - -During installation / startup, the status shows 0 ready pods. -```bash -oc get pods -NAME READY STATUS RESTARTS AGE -cp4a-ums-ibm-dba-ums-76d48486f5-4g9l6 0/1 Running 0 45s -cp4a-ums-ibm-dba-ums-76d48486f5-wlfjv 0/1 Running 0 45s -cp4a-ums-ibm-dba-ums-ltpa-creation-job-32881-czhqr 0/1 Completed 0 45s -``` - -Once the pods respond to readiness probes, the status will be updated: -```bash -oc get pods -NAME READY STATUS RESTARTS AGE -cp4a-ums-ibm-dba-ums-8f9cc7c54-46mjw 1/1 Running 0 33m -cp4a-ums-ibm-dba-ums-8f9cc7c54-ml8bz 1/1 Running 0 33m -cp4a-ums-ibm-dba-ums-ltpa-creation-job-32881-czhqr 0/1 Completed 0 33m -``` - -**Note:** The -ibm-dba-ums-ltpa-creation-job-- pod is expected in completed state. - -You can view the configured route for accepting inbound HTTP traffic: -```bash -oc get route - -NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD -ums-route ums-host / cp4a-ums-ibm-dba-ums https reencrypt None -``` - -Use the host of this route to access `https:///ums` to view the login page. Log in as the administrative user you specified in `ums-secret.yaml` or any user of a connected LDAP if you included an LDAP configuration in `myvalues.yaml` customXML. - -Congratulations, your User Management Service is now deployed on Red Hat OpenShift 3.11. diff --git a/descriptors/ibm_cp4a_cr_template.yaml b/descriptors/ibm_cp4a_cr_template.yaml new file mode 100644 index 00000000..ae5d34c3 --- /dev/null +++ b/descriptors/ibm_cp4a_cr_template.yaml @@ -0,0 +1,2029 @@ +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. 
+# +############################################################################### +apiVersion: icp4a.ibm.com/v1 +kind: ICP4ACluster +metadata: + name: demo-template + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 +spec: + ## shared configuration among all tribe + shared_configuration: + # image_pull_secrets: + # - image-pull-secret + # images: + # keytool_job_container: + # repository: cp.icr.io/cp/cp4a/ums/dba-keytool-jobcontainer + # tag: 19.0.3 + # dbcompatibility_init_container: + # repository: cp.icr.io/cp/cp4a/aae/dba-dbcompatibility-initcontainer + # tag: 19.0.3 + # keytool_init_container: + # repository: cp.icr.io/cp/cp4a/ums/dba-keytool-initcontainer + # tag: 19.0.3 + # umsregistration_initjob: + # repository: cp.icr.io/cp/cp4a/aae/dba-umsregistration-initjob + # tag: 19.0.3 + # pull_policy: Always + # root_ca_secret: icp4a-root-ca + # sc_deployment_platform: OCP + # trusted_certificate_list: [] + # encryption_key_secret: icp4a-shared-encryption-key + ldap_configuration: + # the candidate value is "IBM Security Directory Server" or "Microsoft Active Directory" + # lc_selected_ldap_type: "IBM Security Directory Server" + # lc_ldap_server: "" + # lc_ldap_port: "389" + # lc_ldap_base_dn: "dc=hqpsidcdom,dc=com" + # lc_ldap_ssl_enabled: false + # lc_ldap_ssl_secret_name: "" + # lc_ldap_user_name_attribute: "*:cn" + # lc_ldap_user_display_name_attr: "cn" + # lc_ldap_group_base_dn: "dc=hqpsidcdom,dc=com" + # lc_ldap_group_name_attribute: "*:cn" + # lc_ldap_group_display_name_attr: "cn" + # lc_ldap_group_membership_search_filter: "(|(&(objectclass=groupofnames)(member={0}))(&(objectclass=groupofuniquenames)(uniquemember={0})))" + # lc_ldap_group_member_id_map: "groupofnames:member" + # lc_ldap_max_search_results: 4500 + # ca_ldap_configuration: + # lc_user_filter: "(&(cn={{ '{{' }}username{{ '}}'}})(objectclass=person))" + # lc_ldap_self_signed_crt: "" #true or false when lc_ldap_ssl_enabled: true + # ad: + # lc_ad_gc_host: "" + # lc_ad_gc_port: "" + # lc_user_filter: "(&(cn=%v)(objectclass=person))" + # lc_group_filter: "(&(cn=%v)(|(objectclass=groupofnames)(objectclass=groupofuniquenames)(objectclass=groupofurls)))" + # tds: + # lc_user_filter: "(&(cn=%v)(objectclass=person))" + # lc_group_filter: "(&(cn=%v)(|(objectclass=groupofnames)(objectclass=groupofuniquenames)(objectclass=groupofurls)))" + ext_ldap_configuration: + # # the candidate value is "IBM Security Directory Server" or "Microsoft Active Directory" + # lc_selected_ldap_type: "IBM Security Directory Server" + # lc_ldap_server: "" + # lc_ldap_port: "389" + # lc_bind_secret: ldap-bind-secret # secret is expected to have ldapUsername and ldapPassword keys + # lc_ldap_base_dn: "O=LOCAL" + # lc_ldap_ssl_enabled: false + # lc_ldap_ssl_secret_name: "" + # lc_ldap_user_name_attribute: "*:cn" + # lc_ldap_user_display_name_attr: "cn" + # lc_ldap_group_base_dn: "O=LOCAL" + # lc_ldap_group_name_attribute: "*:cn" + # lc_ldap_group_display_name_attr: "cn" + # lc_ldap_group_membership_search_filter: "(|(&(objectclass=groupofnames)(member={0}))(&(objectclass=groupofuniquenames)(uniquemember={0})))" + # lc_ldap_group_member_id_map: "groupofnames:member" + # ad: + # lc_ad_gc_host: "" + # lc_ad_gc_port: "" + # lc_user_filter: "(&(cn=%v)(objectclass=person))" + # lc_group_filter: "(&(cn=%v)(|(objectclass=groupofnames)(objectclass=groupofuniquenames)(objectclass=groupofurls)))" + # tds: + # lc_user_filter: "(&(cn=%v)(objectclass=person))" + # 
lc_group_filter: "(&(cn=%v)(|(objectclass=groupofnames)(objectclass=groupofuniquenames)(objectclass=groupofurls)))" + datasource_configuration: + # the candidate value is "db2" or "db2HADR" or "oracle" or "sqlserver" + # dc_gcd_datasource: + # dc_database_type: "db2" + # dc_common_gcd_datasource_name: "FNGCDDS" + # dc_common_gcd_xa_datasource_name: "FNGCDDSXA" + # database_servername: "" + # database_name: "GCDDB" + # database_port: "50000" + # dc_oracle_gcd_jdbc_url: "jdbc:oracle:thin:@//:1521/orcl" + # dc_hadr_standby_servername: "" + # dc_hadr_standby_port: "50000" + # dc_hadr_validation_timeout: 15 + # dc_hadr_retry_interval_for_client_reroute: 15 + # dc_hadr_max_retries_for_client_reroute: 3 + # dc_os_datasources: + # - dc_database_type: "db2" + # dc_common_os_datasource_name: "FNOS1DS" + # dc_common_os_xa_datasource_name: "FNOS1DSXA" + # database_servername: "" + # database_name: "OS1DB" + # database_port: "50000" + # dc_oracle_os_jdbc_url: "jdbc:oracle:thin:@//:1521/orcl" + # dc_hadr_standby_servername: "" + # dc_hadr_standby_port: "50000" + # dc_hadr_validation_timeout: 3 + # dc_hadr_retry_interval_for_client_reroute: 3 + # dc_hadr_max_retries_for_client_reroute: 3 + # - dc_database_type: "db2" + # dc_common_os_datasource_name: "FNOS2DS" + # dc_common_os_xa_datasource_name: "FNOS2DSXA" + # database_servername: "" + # database_name: "OS2DB" + # database_port: "50000" + # dc_oracle_os_jdbc_url: "jdbc:oracle:thin:@//:1521/orcl" + # dc_hadr_standby_servername: "" + # dc_hadr_standby_port: "50000" + # dc_hadr_validation_timeout: 3 + # dc_hadr_retry_interval_for_client_reroute: 3 + # dc_hadr_max_retries_for_client_reroute: 3 + # dc_icn_datasource: + # dc_database_type: "db2" + # dc_oracle_icn_jdbc_url: "jdbc:oracle:thin:@//:1521/orcl" + # dc_common_icn_datasource_name: "ECMClientDS" + # database_servername: "" + # database_port: "50000" + # database_name: "ICNDB" + # dc_hadr_standby_servername: "" + # dc_hadr_standby_port: "50000" + # dc_hadr_validation_timeout: 3 + # dc_hadr_retry_interval_for_client_reroute: 3 + # dc_hadr_max_retries_for_client_reroute: 3 + # dc_odm_datasource: + # dc_database_type: "db2" + # database_servername: "db2forodm" + # dc_common_database_port: "50000" + # dc_common_database_name: "db2db" + # dc_common_database_instance_user: "db2user" # Will remove it, and use K8S Secret to replace it + # dc_common_database_instance_password: "{base64}UGFzc3cwcmQ0SypT" # Will remove it, and use K8S Secret to replace it + #dc_ums_datasource: # credentials are read from ums_configuration.db_secret_name + # # oauth database config + # dc_ums_oauth_type: db2 # derby (for test), db2, oracle + # dc_ums_oauth_host: + # dc_ums_oauth_port: 50000 + # dc_ums_oauth_name: UMSDB + # dc_ums_oauth_ssl: false + # dc_ums_oauth_ssl_secret_name: + # dc_ums_oauth_driverfiles: + # dc_ums_oauth_alternate_hosts: + # dc_ums_oauth_alternate_ports: + # dc_ca_datasource: + # dc_database_type: "db2" # This value can be db2 or db2HADR + # database_servername: "" + # database_name: "" + # tenant_databases: + # - tenant1 + # database_port: "" + ## Monitor setting + monitoring_configuration: + # mon_metrics_writer_option: "4" + # mon_metrics_service_endpoint: "9.9.9.9:2003" + # mon_bmx_group: "ibm" + # mon_bmx_metrics_scope_id: "1" + # mon_bmx_api_key: "testkey" + # mon_ecm_metrics_collect_interval: 60 + # mon_ecm_metrics_flush_interval: 60 + # mon_enable_plugin_pch: true + # mon_enable_plugin_mbean: true + ## Logging setting + logging_configuration: + # mon_log_parse: false + # mon_log_shipper_option: "1" + 
# mon_log_service_endpoint: "9.9.9.9:5044" + # mon_bmx_logs_logging_token: "testtoken" + # mon_bmx_space_id: "1" + + ######################################################################## + ######## IBM FileNet Content Manager configuration ######## + ######################################################################## + ecm_configuration: + # fncm_secret_name: ibm-fncm-secret + # fncm_ext_tls_secret_name: "{{ meta.name }}-fncm-ext-tls-secret" + # fncm_auth_ca_secret_name: "{{ meta.name }}-fncm-auth-ca-secret" + # cpe: + # arch: + # amd64: "3 - Most preferred" + # replica_count: 1 + # image: + # repository: cp.icr.io/cp/cp4a/fncm/cpe + # tag: ga-554-p8cpe + # pull_policy: Always + # ## Logging for workloads + # log: + # format: json + # ## resource + # resources: + # requests: + # cpu: 500m + # memory: 512Mi + # limits: + # cpu: 1 + # memory: 3072Mi + # ## Horizontal Pod Autoscaler + # auto_scaling: + # enabled: true + # max_replicas: 3 + # min_replicas: 1 + # target_cpu_utilization_percentage: 80 + # ## Route public hostname + # hostname: "" + # ## cpe Production setting + # cpe_production_setting: + # time_zone: Etc/UTC + # jvm_initial_heap_percentage: 18 + # jvm_max_heap_percentage: 33 + # # By default, the containers are configured to support OpenID/OAuth for SSO with User Management Services (UMS). + # # If SSO is not enabled for the deployment (i.e., if UMS is not being deployed), then set the following JVM value: + # # JVM_CUSTOMIZE_OPTIONS="-DFileNet.WSI.AutoDetectLTPAToken=true" + # # This enables the container to recognize WebSphere Liberty LTPA token where LDAP is used for authentication/authorization. + # jvm_customize_options: "-DFileNet.WSI.AutoDetectLTPAToken=true" + # gcd_jndi_name: FNGCDDS + # gcd_jndixa_name: FNGCDDSXA + # license_model: FNCM.PVUNonProd + # license: accept + # monitor_enabled: false + # logging_enabled: true + # collectd_enable_plugin_write_graphite: false + # ## Specify the names of existing persistent volume claims to be used by your application. + # datavolume: + # existing_pvc_for_cpe_cfgstore: "cpe-cfgstore" + # existing_pvc_for_cpe_logstore: "cpe-logstore" + # existing_pvc_for_cpe_filestore: "cpe-filestore" + # existing_pvc_for_cpe_icmrulestore: "cpe-icmrulesstore" + # existing_pvc_for_cpe_textextstore: "cpe-textextstore" + # existing_pvc_for_cpe_bootstrapstore: "cpe-bootstrapstore" + # existing_pvc_for_cpe_fnlogstore: "cpe-fnlogstore" + # probe: + # readiness: + # initial_delay_seconds: 120 + # period_seconds: 5 + # timeout_seconds: 10 + # failure_threshold: 6 + # liveness: + # initial_delay_seconds: 600 + # period_seconds: 5 + # timeout_seconds: 5 + # failure_threshold: 6 + # image_pull_secrets: + # name: "admin.registrykey" + # css: + # arch: + # amd64: "3 - Most preferred" + # replica_count: 1 + # image: + # repository: cp.icr.io/cp/cp4a/fncm/css + # tag: ga-554-p8css + # pull_policy: Always + # ## Logging for workloads + # log: + # format: json + + # ## resource and autoscaling setting + # resources: + # requests: + # cpu: 500m + # memory: 512Mi + # limits: + # cpu: 1 + # memory: 4096Mi + # ## CSS Production setting + # css_production_setting: + # jvm_max_heap_percentage: 50 + # license: accept + # monitor_enabled: false + # logging_enabled: true + # collectd_enable_plugin_write_graphite: false + # ## Specify the names of existing persistent volume claims to be used by your application. 
+ # datavolume: + # existing_pvc_for_css_cfgstore: "css-cfgstore" + # existing_pvc_for_css_logstore: "css-logstore" + # existing_pvc_for_css_tmpstore: "css-tempstore" + # existing_pvc_for_index: "css-indexstore" + # existing_pvc_for_css_customstore: "css-customstore" + # probe: + # readiness: + # initial_delay_seconds: 60 + # period_seconds: 5 + # timeout_seconds: 10 + # failure_threshold: 6 + # liveness: + # initial_delay_seconds: 180 + # period_seconds: 5 + # timeout_seconds: 5 + # failure_threshold: 6 + # image_pull_secrets: + # name: "admin.registrykey" + # cmis: + # arch: + # amd64: "3 - Most preferred" + # replica_count: 1 + # image: + # repository: cp.icr.io/cp/cp4a/fncm/cmis + # tag: ga-304-cmis-if009 + # pull_policy: Always + # ## Logging for workloads + # log: + # format: json + + # ## resource + # resources: + # # We usually recommend not to specify default resources and to leave this as a conscious + # # choice for the user. This also increases chances charts run on environments with little + # # resources, such as Minikube. If you do want to specify resources, uncomment the following + # # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # requests: + # cpu: 500m + # memory: 256Mi + # limits: + # cpu: 1 + # memory: 1536Mi + + # ## Horizontal Pod Autoscaler + # auto_scaling: + # enabled: true + # max_replicas: 3 + # min_replicas: 1 + # target_cpu_utilization_percentage: 80 + # ## Route public hostname + # hostname: "" + # ## CMIS Production setting + # cmis_production_setting: + # cpe_url: + # time_zone: Etc/UTC + # jvm_initial_heap_percentage: 40 + # jvm_max_heap_percentage: 66 + # jvm_customize_options: "" + # checkout_copycontent: true + # default_maxitems: 25 + # cvl_cache: true + # secure_metadata_cache: false + # filter_hidden_properties: true + # querytime_limit: 180 + # resumable_queries_forrest: true + # escape_unsafe_string_characters: false + # max_soap_size: 180 + # print_pull_stacktrace: false + # folder_first_search: false + # ignore_root_documents: false + # supporting_type_mutability: false + # license: accept + # monitor_enabled: false + # logging_enabled: false + # collectd_enable_plugin_write_graphite: false + # ## global persistence settings + # datavolume: + # ## Specify the names of existing persistent volume claims to be used by your application. 
+ # existing_pvc_for_cmis_cfgstore: "cmis-cfgstore" + # existing_pvc_for_cmis_logstore: "cmis-logstore" + # probe: + # readiness: + # initial_delay_seconds: 90 + # period_seconds: 5 + # timeout_seconds: 10 + # failure_threshold: 6 + # liveness: + # initial_delay_seconds: 180 + # period_seconds: 5 + # timeout_seconds: 5 + # failure_threshold: 6 + # image_pull_secrets: + # name: "admin.registrykey" + # graphql: + # arch: + # amd64: "3 - Most preferred" + # replica_count: 1 + # image: + # repository: cp.icr.io/cp/cp4a/fncm/graphql + # tag: ga-554-p8cgql + # pull_policy: Always + # ## resource + # resources: + # requests: + # cpu: 500m + # memory: 512Mi + # limits: + # cpu: 1 + # memory: 1536Mi + # ## Horizontal Pod Autoscaler + # auto_scaling: + # enabled: true + # max_replicas: 1 + # min_replicas: 1 + # target_cpu_utilization_percentage: 80 + # ## Route public hostname + # hostname: "" + # ## GraphQL Production setting + # graphql_production_setting: + # time_zone: Etc/UTC + # jvm_initial_heap_percentage: 40 + # jvm_max_heap_percentage: 66 + # jvm_customize_options: "" + # license_model: FNCM.PVUNonProd + # license: accept + # enable_graph_iql: false + # cpe_uri: http://:9080/wsi/FNCEWS40MTOM + # ## Monitor setting and Logging setting + # monitor_enabled: false + # logging_enabled: true + # collectd_enable_plugin_write_graphite: false + # ## Specify the names of existing persistent volume claims to be used by your application. + # datavolume: + # existing_pvc_for_graphql_cfgstore: "graphql-cfgstore" + # existing_pvc_for_graphql_logstore: "graphql-logstore" + # probe: + # readiness: + # initial_delay_seconds: 120 + # period_seconds: 5 + # timeout_seconds: 10 + # failure_threshold: 6 + # liveness: + # initial_delay_seconds: 600 + # period_seconds: 5 + # timeout_seconds: 5 + # failure_threshold: 6 + # imagePullSecrets: + # name: "admin.registrykey" + # es: + # arch: + # amd64: "3 - Most preferred" + # replica_count: 1 + # image: + # repository: cp.icr.io/cp/cp4a/fncm/extshare + # tag: ga-307-es + # pull_policy: Always + # ## resource + # resources: + # requests: + # cpu: 500m + # memory: 512Mi + # limits: + # cpu: 1 + # memory: 1536Mi + # ## Horizontal Pod Autoscaler + # auto_scaling: + # enabled: true + # max_replicas: 3 + # min_replicas: 1 + # target_cpu_utilization_percentage: 80 + # ## Route public hostname + # hostname: "" + # ## External Share Production setting + # es_production_setting: + # time_zone: Etc/UTC + # jvm_initial_heap_percentage: 40 + # jvm_max_heap_percentage: 66 + # jvm_customize_options: "" + # license_model: FNCM.PVUNonProd + # license: accept + # es_dbtype: db2 + # es_jndi_ds: ECMClientDS + # es_schema: ICNDB + # es_ts: ICNDB + # es_admin: ceadmin + # ## Monitor setting and Logging setting + # monitor_enabled: false + # logging_enabled: true + # collectd_enable_plugin_write_graphite: false + # ## Specify the names of existing persistent volume claims to be used by your application. 
+ # datavolume: + # existing_pvc_for_es_cfgstore: "es-cfgstore" + # existing_pvc_for_es_logstore: "es-logstore" + # probe: + # readiness: + # initial_delay_seconds: 180 + # period_seconds: 5 + # timeout_seconds: 10 + # failure_threshold: 6 + # liveness: + # initial_delay_seconds: 600 + # period_seconds: 5 + # timeout_seconds: 5 + # failure_threshold: 6 + # imagePullSecrets: + # name: "admin.registrykey" + # tm: + # arch: + # amd64: "3 - Most preferred" + # replica_count: 1 + # image: + # repository: cp.icr.io/cp/cp4a/fncm/taskmgr + # tag: 3.0.7 + # pull_policy: Always + # ## LOGGING FOR WORKLOADS + # log: + # format: JSON + # ## resource + # resources: + # requests: + # cpu: 500m + # memory: 512Mi + # limits: + # cpu: 1 + # memory: 1536Mi + # ## Horizontal Pod Autoscaler + # auto_scaling: + # enabled: true + # max_replicas: 3 + # min_replicas: 1 + # target_cpu_utilization_percentage: 80 + # ## External Share Production setting + # tm_production_setting: + # time_zone: Etc/UTC + # jvm_initial_heap_percentage: 40 + # jvm_max_heap_percentage: 66 + # jvm_customize_options: "-Dcom.ibm.ecm.task.StartUpListener.defaultLogLevel=FINE" + # license: accept + # tm_dbtype: db2 + # tm_jndi_ds: ECMClientDS + # tm_schema: ICNDB + # tm_ts: ICNDB + # tm_admin: CEADMIN + + # ## Monitor setting and Logging setting + # monitor_enabled: false + # logging_enabled: true + # collectd_enable_plugin_write_graphite: false + # ## Specify the names of existing persistent volume claims to be used by your application. + # datavolume: + # existing_pvc_for_tm_cfgstore: "tm-cfgstore" + # existing_pvc_for_tm_logstore: "tm-logstore" + # existing_pvc_for_tm_pluginstore: "tm-pluginstore" + # probe: + # readiness: + # initial_delay_seconds: 120 + # period_seconds: 5 + # timeout_seconds: 10 + # failure_threshold: 6 + # liveness: + # initial_delay_seconds: 600 + # period_seconds: 5 + # timeout_seconds: 5 + # failure_threshold: 6 + # image_pull_secrets: + # name: "admin.registrykey" + + ######################################################################## + ######## IBM Business Automation Navigator configuration ######## + ######################################################################## + navigator_configuration: + # ban_secret_name: ibm-ban-secret + # ban_ext_tls_secret_name: "{{ meta.name }}-ban-ext-tls-secret" + # ban_auth_ca_secret_name: "{{ meta.name }}-ban-auth-ca-secret" + # arch: + # amd64: "3 - Most preferred" + # replica_count: 1 + # image: + # repository: cp.icr.io/cp/cp4a/ban/navigator-sso + # tag: ga-307-icn + # pull_policy: Always + # arbitrary_uid_enabled: true + # ## Logging for workloads + # log: + # format: json + # ## resource and autoscaling setting + # resources: + # # We usually recommend not to specify default resources and to leave this as a conscious + # # choice for the user. This also increases chances charts run on environments with little + # # resources, such as Minikube. If you do want to specify resources, uncomment the following + # # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
+ # requests: + # cpu: 500m + # memory: 512Mi + # limits: + # cpu: 1 + # memory: 1536Mi + + # ## Horizontal Pod Autoscaler + # auto_scaling: + # enabled: true + # max_replicas: 3 + # min_replicas: 1 + # target_cpu_utilization_percentage: 80 + # ## Route public hostname + # hostname: "" + # ## ICN Production setting + # icn_production_setting: + # timezone: Etc/UTC + # jvm_initial_heap_percentage: 40 + # jvm_max_heap_percentage: 66 + # # By default, the containers are configured to support OpenID/OAuth for SSO with User Management Services (UMS). + # # If SSO is not enabled for the deployment (i.e., if UMS is not being deployed), then set the following JVM value: + # # JVM_CUSTOMIZE_OPTIONS="-DFileNet.WSI.AutoDetectLTPAToken=true" + # # This enables the container to recognize WebSphere Liberty LTPA token where LDAP is used for authentication/authorization. + # jvm_customize_options: "-DFileNet.WSI.AutoDetectLTPAToken=true" + # icn_db_type: db2 + # icn_jndids_name: ECMClientDS + # icn_schema: ICNDB + # icn_table_space: ICNDB + # icn_admin: CEADMIN + # license: accept + # enable_appcues: false + # allow_remote_plugins_via_http: false + # monitor_enabled: false + # logging_enabled: false + # collectd_enable_plugin_write_graphite: false + # ## Specify the names of existing persistent volume claims to be used by your application. + # ## Specify an empty string if you don't have existing persistent volume claim. + # datavolume: + # existing_pvc_for_icn_cfgstore: "icn-cfgstore" + # existing_pvc_for_icn_logstore: "icn-logstore" + # existing_pvc_for_icn_pluginstore: "icn-pluginstore" + # existing_pvc_for_icnvw_cachestore: "icn-vw-cachestore" + # existing_pvc_for_icnvw_logstore: "icn-vw-logstore" + # existing_pvc_for_icn_aspera: "icn-asperastore" + # probe: + # readiness: + # initial_delay_seconds: 120 + # period_seconds: 5 + # timeout_seconds: 10 + # failure_threshold: 6 + # liveness: + # initial_delay_seconds: 600 + # period_seconds: 5 + # timeout_seconds: 5 + # failure_threshold: 6 + # image_pull_secrets: + # name: "admin.registrykey" + + # ######################################################################## + # ######## IBM FNCM and BAN initialization configuration ######## + # ######################################################################## + initialize_configuration: + # ic_domain_creation: + # domain_name: "P8DOMAIN" + # encryption_key: "128" + # ic_ldap_creation: + # ic_ldap_admin_user_name: + # - "CEAdmin" + # ic_ldap_admins_groups_name: + # - "P8Administrators" + # ic_ldap_name: "ldap_name" + # ic_obj_store_creation: + # object_stores: + # - oc_cpe_obj_store_display_name: "OS01" + # oc_cpe_obj_store_symb_name: "OS01" + # oc_cpe_obj_store_conn: + # name: "objectstore1_connection" + # site_name: "InitialSite" + # dc_os_datasource_name: "FNOS1DS" + # dc_os_xa_datasource_name: "FNOS1DSXA" + # oc_cpe_obj_store_admin_user_groups: + # - "CEAdmin" + # # Array of users + # oc_cpe_obj_store_basic_user_groups: + # oc_cpe_obj_store_addons: true + # oc_cpe_obj_store_addons_list: + # - "{CE460ADD-0000-0000-0000-000000000004}" + # - "{CE460ADD-0000-0000-0000-000000000001}" + # - "{CE460ADD-0000-0000-0000-000000000003}" + # - "{CE460ADD-0000-0000-0000-000000000005}" + # - "{CE511ADD-0000-0000-0000-000000000006}" + # - "{CE460ADD-0000-0000-0000-000000000008}" + # - "{CE460ADD-0000-0000-0000-000000000007}" + # - "{CE460ADD-0000-0000-0000-000000000009}" + # - "{CE460ADD-0000-0000-0000-00000000000A}" + # - "{CE460ADD-0000-0000-0000-00000000000B}" + # - "{CE460ADD-0000-0000-0000-00000000000D}" + # - 
"{CE511ADD-0000-0000-0000-00000000000F}" + # oc_cpe_obj_store_asa_name: "demo_storage" + # oc_cpe_obj_store_asa_file_systems_storage_device_name: "demo_file_system_storage" + # oc_cpe_obj_store_asa_root_dir_path: "/opt/ibm/asa/os01_storagearea1" + # oc_cpe_obj_store_enable_workflow: true + # oc_cpe_obj_store_workflow_region_name: "design_region_name" + # oc_cpe_obj_store_workflow_region_number: 1 + # oc_cpe_obj_store_workflow_data_tbl_space: "VWDATA_TS" + # oc_cpe_obj_store_workflow_index_tbl_space: "" + # oc_cpe_obj_store_workflow_blob_tbl_space: "" + # oc_cpe_obj_store_workflow_admin_group: "P8Administrators" + # oc_cpe_obj_store_workflow_config_group: "P8Administrators" + # oc_cpe_obj_store_workflow_date_time_mask: "mm/dd/yy hh:tt am" + # oc_cpe_obj_store_workflow_locale: "en" + # oc_cpe_obj_store_workflow_pe_conn_point_name: "pe_conn_os1" + # ic_css_creation: + # - css_site_name: "Initial Site" + # css_text_search_server_name: "{{ meta.name }}-css-1" + # affinity_group_name: "aff_group" + # css_text_search_server_status: 0 + # css_text_search_server_mode: 0 + # css_text_search_server_ssl_enable: "true" + # css_text_search_server_credential: "RNUNEWc=" + # css_text_search_server_host: "{{ meta.name }}-css-svc-1" + # css_text_search_server_port: 8199 + # ic_css_index_area: + # - object_store_name: "OS01" + # index_area_name: "os1_index_area" + # affinity_group_name: "aff_group" + # root_dir: "/opt/ibm/indexareas" + # max_indexes: 20 + # max_objects_per_index: 10000 + # ic_enable_cbr: + # - object_store_name: "OS01" + # class_name: "Document" + # indexing_languages: "en" + # ic_icn_init_info: + # icn_repos: + # - add_repo_id: "demo_repo1" + # add_repo_ce_wsi_url: "http://{{ meta.name }}-cpe-svc:9080/wsi/FNCEWS40MTOM/" + # add_repo_os_sym_name: "OS01" + # add_repo_os_dis_name: "OS01" + # add_repo_workflow_enable: false + # add_repo_work_conn_pnt: "pe_conn_os1:1" + # add_repo_protocol: "FileNetP8WSI" + # # - add_repo_id: "test_repo2" + # # add_repo_ce_wsi_url: "http://{{ meta.name }}-cpe-svc:9080/wsi/FNCEWS40MTOM/" + # # add_repo_os_sym_name: "OS02" + # # add_repo_os_dis_name: "OS02" + # # add_repo_workflow_enable: true + # # add_repo_work_conn_pnt: "pe_conn_os02:1" + # # add_repo_protocol: "FileNetP8WSI" + # icn_desktop: + # - add_desktop_id: "demo" + # add_desktop_name: "icn_desktop" + # add_desktop_description: "This is ICN desktop" + # add_desktop_is_default: false + # add_desktop_repo_id: "demo_repo1" + # add_desktop_repo_workflow_enable: false + # # - add_desktop_id: "demotest" + # # add_desktop_name: "icn_desktop_demo" + # # add_desktop_description: "Just Another desktop" + # # add_desktop_is_default: false + # # add_desktop_repo_id: "test_repo2" + # # add_desktop_repo_workflow_enable: false + + ######################################################################## + ######## IBM FNCM and BAN verification configuration ######## + ######################################################################## + verify_configuration: + # vc_cpe_verification: + # vc_cpe_folder: + # - folder_cpe_obj_store_name: "OS01" + # folder_cpe_folder_path: "/TESTFOLDER" + # vc_cpe_document: + # - doc_cpe_obj_store_name: "OS01" + # doc_cpe_folder_name: "/TESTFOLDER" + # doc_cpe_doc_title: "test_title" + # DOC_CPE_class_name: "Document" + # doc_cpe_doc_content: "This is a simple document test" + # doc_cpe_doc_content_name: "doc_content_name" + # vc_cpe_cbr: + # - cbr_cpe_obj_store_name: "OS01" + # cbr_cpe_class_name: "Document" + # cbr_cpe_search_string: "is a simple" + # vc_cpe_workflow: + # - 
workflow_cpe_enabled: false + # workflow_cpe_connection_point: "pe_conn_os1" + # vc_icn_verification: + # - vc_icn_repository: "demo_repo1" + # vc_icn_desktop_id: "demo" + + ######################################################################## + ######## IBM Operational Decision Manager Configuration ######## + ######################################################################## + + # odm_configuration: + # # Allow to activate more trace for ODM in the Operator pod. + # debug: false + # # Allow to specify which version of ODM you want to deploy. + # # Supported version > 19.0.2 + # # If omitted the latest version will be used. + # version: 19.0.3 + # image: + # # Specify the repository used to retrieve the Docker images if you do not want to use the default one. + # repository: "" + # # Specify the tag for the Docker images. + # # It's a Mandatory tag when you enable odm_configuraton. + # tag: 8.10.3 + # # Specify the pull policy for the Docker images. See Kuberntes documentation for more inforations. + # # Possible values : IfNotPresent, Always, Never + # pullPolicy: IfNotPresent + # # Optionally specify an array of imagePullSecrets. + # # Secrets must be manually created in the namespace. + # # ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod + # # - name: admin.registrykey + # pullSecrets: + # + # service: + # # Specify whether to enable Transport Layer Security. If true, ODM web apps are accessed through HTTPS. If false, they are accessed through HTTP. + # enableTLS: true + # # Specify the service type. + # type: NodePort + # + # ## Decision Server Runtime parameters + # decisionServerRuntime: + # # Specify whether to enable Decision Server Runtime. + # enabled: true + # # Specify the number of Decision Server Runtime pods. + # replicaCount: 1 + # # Specify the name of the configMap the wanted logging options. If left empty, default logging options are used. + # loggingRef: + # # Specify the name of the configMap the wanted JVM options. If left empty, default JVM options are used. + # jvmOptionsRef: + # resources: + # requests: + # # Specify the requested CPU. + # cpu: 500m + # # Specify the requested memory. + # memory: 512Mi + # limits: + # # Specify the CPU limit. + # cpu: 2 + # # Specify the memory limit. + # memory: 4096Mi + # ## Decision Server Console parameters + # decisionServerConsole: + # # Specify the name of the configMap the wanted logging options. If left empty, default logging options are used. + # loggingRef: + # # Specify the name of the configMap the wanted JVM options. If left empty, default JVM options are used. + # jvmOptionsRef: + # resources: + # requests: + # # Specify the requested CPU. + # cpu: 500m + # # Specify the requested memory. + # memory: 512Mi + # limits: + # # Specify the CPU limit. + # cpu: 2 + # # Specify the memory limit. + # memory: 1024Mi + # ## Decision Center parameters + # decisionCenter: + # # Specify whether to enable Decision Center. + # enabled: true + # # Specify the persistence locale for Decision Center. 
+ # # Possible values "ar_EG" (Arabic - Egypt), "zh_CN" (Chinese - China), "zh_TW" (Chinese - Taiwan) + # # "nl_NL" (Netherlands), "en_GB" (English - United Kingdom), "en_US" (English - United States), + # # "fr_FR" (French - France), "de_DE" (German - Germany), "iw_IL" (Hebrew - Israel), "it_IT" (Italian - Italy), + # # "ja_JP" (Japanese - Japan) , "ko_KR" (Korean - Korea), "pl_PL" (Polish - Poland), + # # "pt_BR" (Portuguese - Brazil), "ru_RU" (Russian - Russia), "es_ES" (Spanish - Spain) + # persistenceLocale: en_US + # # Specify the number of Decision Center pods. + # replicaCount: 1 + # # Persistent Volume Claim to access the custom libraries + # customlibPvc: + # # Specify the name of the configMap the wanted logging options. If left empty, default logging options are used. + # loggingRef: + # # Specify the name of the configMap the wanted JVM options. If left empty, default JVM options are used. + # jvmOptionsRef: + # resources: + # requests: + # # Specify the requested CPU. + # cpu: 500m + # # Specify the requested memory. + # memory: 1500Mi + # limits: + # # Specify the CPU limit. + # cpu: 2 + # # Specify the memory limit. + # memory: 4096Mi + # ## Decision Runner parameters + # decisionRunner: + # # Specify whether to enable Decision Runner. + # enabled: true + # # Specify the number of Decision Runner pods. + # replicaCount: 1 + # # Specify the name of the configMap the wanted logging options. If left empty, default logging options are used. + # loggingRef: + # # Specify the name of the configMap the wanted JVM options. If left empty, default JVM options are used. + # jvmOptionsRef: + # resources: + # requests: + # # Specify the requested CPU. + # cpu: 500m + # # Specify the requested memory. + # memory: 512Mi + # limits: + # # Specify the CPU limit. + # cpu: 2 + # # Specify the memory limit. + # memory: 4096Mi + # + # ## Database - Option 1: Internal (PostgreSQL) + # ## Fill in the parameters to use an internal PostgresSQL database. + # internalDatabase: + # # Specify the name of the internal database. + # databaseName: odmdb + # # Specify the name of the secret that contains the credentials to connect to the internal database. + # secretCredentials: "" + # persistence: + # # Specify whether to enable persistence for the internal database in a persistent volume. + # enabled: true + # # When this parameter is false, the binding process selects an existing volume. Ensure that an unbound volume exists before you install the chart. + # useDynamicProvisioning: false + # # Specify the storage class name for persistent volume. If this parameter is left empty, the default storage class is used. + # storageClassName: "" + # resources: + # requests: + # # Specify the storage size for persistent volume. + # storage: 5Gi + # securityContext: + # # User to init internal database container + # runAsUser: 0 + # resources: + # requests: + # # Specify the requested CPU. + # cpu: 500m + # # Specify the requested memory. + # memory: 512Mi + # limits: + # # Specify the CPU limit. + # cpu: 2 + # # Specify the memory limit. + # memory: 4096Mi + # + # ## Database - Option 2: External (DB2 or PostgreSQL) + # ## Fill in the parameters to use an external DB2 or PostgreSQL database. + # externalDatabase: + # # Specify the type of the external database. If this parameter is left empty, PostgreSQL is used by default. + # # Possible values : "db2", "postgresql" + # type: "" + # # Specify the name of the server running the external database. If it is not specified, the PostgreSQL internal database is used. 
+ # serverName: "" + # # Specify the name of the external database. + # databaseName: "" + # # Specify the name of the secret that contains the credentials to connect to the external database. + # secretCredentials: "" + # # Specify the port used to connect to the external database. + # port: "" + # + # ## Database - Option 3: External (Custom) + # ## Fill in the parameters to use an external database configured by a secret. + # externalCustomDatabase: + # # Specify the name of the secret that contains the datasource configuration to use. + # datasourceRef: + # # Persistent Volume Claim to access the JDBC Database Driver + # driverPvc: + # + # readinessProbe: + # # Specify the number of seconds after the container has started before readiness probe is initiated. + # initialDelaySeconds: 5 + # # Specify how often (in seconds) to perform the probe. + # periodSeconds: 5 + # # Specify how many times Kubernetes will try before giving up when a pod starts and the probe fails. Giving up means marking the pod as Unready. + # failureThreshold: 45 + # # Specify the number of seconds after which the readiness probe times out. + # timeoutSeconds: 5 + # + # livenessProbe: + # # Specify the number of seconds after the container has started before liveness probe is initiated. + # initialDelaySeconds: 300 + # # Specify how often (in seconds) to perform the probe. + # periodSeconds: 10 + # # Specify how many times Kubernetes will try before giving up when a pod starts and the probe fails. Giving up means restarting the pod. + # failureThreshold: 10 + # # Specify the number of seconds after which the liveness probe times out. + # timeoutSeconds: 5 + # + # customization: + # # Specify the name of the secret that contains the TLS certificate you want to use. If the parameter is left empty, a default certificate is used. + # securitySecretRef: + # # Specify the name of the secret that contains the configuration files required to use the IBM Business Automation Insights emitter. + # baiEmitterSecretRef: + # # Specify the label attached to some nodes. Pods are scheduled to the nodes with this label. If the parameter is empty, pods are scheduled randomly. + # authSecretRef: + # + # networkPolicy: + # # Enable creation of NetworkPolicy resources. 
+ # enabled: true + # # For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1' + # # For Kubernetes v1.7, use 'networking.k8s.io/v1' + # apiVersion: networking.k8s.io/v1 + + ums_configuration: + # existing_claim_name: + # replica_count: 2 + # service_type: Route + # hostname: + # port: 443 + # images: + # ums: + # repository: cp.icr.io/cp/cp4a/ums/ums + # tag: 19.0.3 + # admin_secret_name: ibm-dba-ums-secret + # db_secret_name: ibm-dba-ums-db-secret + # external_tls_secret_name: ibm-dba-ums-external-tls-secret + # external_tls_ca_secret_name: ibm-dba-ums-external-tls-ca-secret + # oauth: + # client_manager_group: + # resources: + # limits: + # cpu: 500m + # memory: 512Mi + # requests: + # cpu: 200m + # memory: 256Mi + # ## Horizontal Pod Autoscaler + # autoscaling: + # enabled: true + # min_replicas: 2 + # max_replicas: 5 + # target_average_utilization: 98 + # use_custom_jdbc_drivers: false + # use_custom_binaries: false + # custom_secret_name: + # custom_xml: + # logs: + # console_format: json + # console_log_level: INFO + # console_source: message,trace,accessLog,ffdc,audit + # trace_format: ENHANCED + # trace_specification: "*=info" + ##################################################################### + ## IBM App Engine production configuration 19.0.3 configuration ## + ##################################################################### + application_engine_configuration: + # ## The application_engine_configuration is a list, you can deploy multiple instances of AppEngine, you can assign different configurations for each instance. + # ## For each instance, application_engine_configuration.name and application_engine_configuration.name.hostname must be assigned to different values. + # - name: instance1 + # hostname: + # port: 443 + # admin_secret_name: + # external_tls_secret_name: + # replica_size: 1 + # user_custom_jdbc_drivers: false + # service_type: Route + # autoscaling: + # enabled: false + # max_replicas: 5 + # min_replicas: 2 + # target_cpu_utilization_percentage: 80 + # database: + # host: + # name: + # port: + # ## If you setup DB2 HADR and want to use it, you need to configure alternative_host and alternative_port, or else, leave is as blank. 
+ # alternative_host: + # alternative_port: + # ## Only DB2 is supported + # type: db2 + # enable_ssl: false + # db_cert_secret_name: + # current_schema: DBASB + # initial_pool_size: 1 + # max_pool_size: 10 + # uv_thread_pool_size: 4 + # max_lru_cache_size: 1000 + # max_lru_cache_age: 600000 + # dbcompatibility_max_retries: 30 + # dbcompatibility_retry_interval: 10 + # custom_jdbc_pvc: + # log_level: + # node: info + # browser: 2 + # content_security_policy: + # enable: false + # whitelist: + # env: + # max_size_lru_cache_rr: 1000 + # server_env_type: development + # purge_stale_apps_interval: 86400000 + # apps_threshold: 100 + # stale_threshold: 172800000 + # images: + # pull_policy: IfNotPresent + # db_job: + # repository: cp.icr.io/cp/cp4a/aae/solution-server-helmjob-db + # tag: 19.0.3 + # solution_server: + # repository: cp.icr.io/cp/cp4a/aae/solution-server + # tag: 19.0.3 + # max_age: + # auth_cookie: "900000" + # csrf_cookie: "3600000" + # static_asset: "2592000" + # hsts_header: "2592000" + # probe: + # liveness: + # failure_threshold: 5 + # initial_delay_seconds: 60 + # period_seconds: 10 + # success_threshold: 1 + # timeout_seconds: 180 + # readiness: + # failure_threshold: 5 + # initial_delay_seconds: 10 + # period_seconds: 10 + # success_threshold: 1 + # timeout_seconds: 180 + # redis: + # host: + # port: + # ttl: 1800 + # resource_ae: + # limits: + # cpu: 2000m + # memory: 2Gi + # requests: + # cpu: 1000m + # memory: 1Gi + # resource_init: + # limits: + # cpu: 500m + # memory: 256Mi + # requests: + # cpu: 200m + # memory: 128Mi + # session: + # check_period: "3600000" + # duration: "1800000" + # max: "10000" + # resave: "false" + # rolling: "true" + # save_uninitialized: "false" + # use_external_store: "true" + # tls: + # tls_trust_list: [] + resource_registry_configuration: + # admin_secret_name: resource-registry-admin-secret + # hostname: + # port: + # replica_size: 3 + # images: + # pull_policy: IfNotPresent + # resource_registry: + # repository: cp.icr.io/cp/cp4a/aae/dba-etcd + # tag: 19.0.3 + # tls: + # tls_secret: rr-tls-client-secret + # probe: + # liveness: + # initial_delay_seconds: 60 + # period_seconds: 10 + # timeout_seconds: 5 + # success_threshold: 1 + # failure_threshold: 3 + # readiness: + # initial_delay_seconds: 10 + # period_seconds: 10 + # timeout_seconds: 5 + # success_threshold: 1 + # failure_threshold: 3 + # resource: + # limits: + # cpu: "500m" + # memory: "512Mi" + # requests: + # cpu: "200m" + # memory: "256Mi" + # auto_backup: + # enable: false + # minimal_time_interval: 1800 + # pvc_name: rr-autobackup-pvc + ##################################################################### + ## IBM Business Automation Studio 19.0.3 configuration ## + ##################################################################### + bastudio_configuration: + # admin_secret_name: bastudio-admin-secret + # hostname: + # port: + # # If we disable the User Management Service Certificate Common Name Check or not + # ums_disable_cn_check: false + # # If you don't want to use the customized external TLS certificate, you can leave it empty. + # external_tls_secret: + # # If you don't want to use the customized Certificate Authority (CA) to sign the external TLS certificate, you can leave it empty. + # external_tls_ca_secret: + # tls: + # tls_trust_list: [] + # database: + # host: + # # The database provided should be created by the BAStudio SQL script template. 
+ # name: + # port: + # # If you want to enable the database ACR, HADR, configure the alternative_host and alternative_port both + # alternative_host: + # alternative_port: + # type: db2 + # ssl_enabled: false + # certificate_secret_name: db2-ssl-certificate + # # If you don't want to use the customized JDBC dirvers, you can keep it as default. + # user_custom_jdbc_drivers: false + # # The persistent volume claim for custom JDBC Drivers if using the custom jdbc drivers is enabled + # custom_jdbc_pvc: + # # The custom JDBC Drivers' names if using the custom jdbc drivers is enabled + # jdbc_driver_files: "db2jcc4.jar db2jcc_license_cu.jar" + # images: + # pull_policy: IfNotPresent + # bastudio: + # repository: cp.icr.io/cp/cp4a/bas/bastudio + # tag: 19.0.3 + # # Optional + # custom_xml: + # # Optional + # custom_secret_name: + # # Optional + # bastudio_custom_xml: + # content_security_policy: "default-src 'self' 'unsafe-inline' 'unsafe-eval'; frame-ancestors 'self'; font-src 'self' fonts.gstatic.com; frame-src *; img-src 'self' data:;" + # csrf_referrer: + # # The custom whitelist for Cross-Site Request Forgery (CSRF) protection. For example it is needed when you want to integrate BAS with the other editors such as ADW, ACA + # whitelist: "" + # logs: + # console_format: json + # console_log_level: INFO + # console_source: message,trace,accessLog,ffdc,audit + # trace_format: ENHANCED + # trace_specification: "*=info" + # replica_size: 1 + # autoscaling: + # enabled: false + # minReplicas: 1 + # maxReplicas: 3 + # targetAverageUtilization: 95 + # resources: + # bastudio: + # limits: + # cpu: 4000m + # memory: 3Gi + # requests: + # cpu: 2000m + # memory: 2Gi + # init_process: + # limits: + # cpu: 500m + # memory: 512Mi + # requests: + # cpu: 200m + # memory: 256Mi + # liveness_probe: + # initial_delay_seconds: 300 + # period_seconds: 10 + # timeout_seconds: 5 + # failure_threshold: 3 + # success_threshold: 1 + # readiness_probe: + # initial_delay_seconds: 240 + # period_seconds: 5 + # timeout_seconds: 5 + # failure_threshold: 6 + # success_threshold: 1 + # jms_server: + # image: + # repository: cp.icr.io/cp/cp4a/bas/jms + # tag: 19.0.3 + # pull_policy: IfNotPresent + # resources: + # limits: + # cpu: "1" + # memory: "1Gi" + # requests: + # cpu: "500m" + # memory: "512Mi" + # storage: + # # If JMS storage persistent should be enabled + # persistent: false + # # If use dynamic provisioning for JMS storage persistent + # use_dynamic_provisioning: false + # storage_class: "gluster-fs" + # access_modes: "ReadWriteOnce" + # selector: {} + # size: "3Gi" + # #----------------------------------------------------------------------- + # # App Engine Playback Server can only be one instance + # #----------------------------------------------------------------------- + # playback_server: + # admin_secret_name: playback-server-admin-secret + # images: + # pull_policy: IfNotPresent + # db_job: + # repository: cp.icr.io/cp/cp4a/bas/solution-server-helmjob-db + # tag: 19.0.3 + # solution_server: + # repository: cp.icr.io/cp/cp4a/bas/solution-server + # tag: 19.0.3 + # hostname: + # port: + # # If you don't want to use the customized external TLS certificate, you can leave it empty. + # external_tls_secret: + # # If you don't want to use the customized JDBC dirvers, you can keep it as default. 
+ # user_custom_jdbc_drivers: false + # replica_size: 1 + # autoscaling: + # enabled: false + # max_replicas: 5 + # min_replicas: 2 + # target_cpu_utilization_percentage: 80 + # database: + # host: + # # The database provided should be created by the App Engine Playback Server SQL script template. + # name: + # port: + # # If you want to enable the database ACR, HADR, configure the alternative_host and alternative_port both + # alternative_host: + # alternative_port: + # type: db2 + # enable_ssl: false + # db_cert_secret_name: db2-ssl-certificate-secret + # current_schema: DBASB + # initial_pool_size: 1 + # max_pool_size: 10 + # uv_thread_pool_size: 4 + # max_lru_cache_size: 1000 + # max_lru_cache_age: 600000 + # dbcompatibility_max_retries: 30 + # dbcompatibility_retry_interval: 10 + # # The persistent volume claim for custom JDBC Drivers if using the custom jdbc drivers is enabled + # custom_jdbc_pvc: + # log_level: + # node: info + # browser: 2 + # content_security_policy: + # enable: false + # whitelist: + # env: + # max_size_lru_cache_rr: 1000 + # server_env_type: development + # purge_stale_apps_interval: 86400000 + # apps_threshold: 100 + # stale_threshold: 172800000 + # max_age: + # auth_cookie: "900000" + # csrf_cookie: "3600000" + # static_asset: "2592000" + # hsts_header: "2592000" + # probe: + # liveness: + # failure_threshold: 5 + # initial_delay_seconds: 60 + # period_seconds: 10 + # success_threshold: 1 + # timeout_seconds: 180 + # readiness: + # failure_threshold: 5 + # initial_delay_seconds: 10 + # period_seconds: 10 + # success_threshold: 1 + # timeout_seconds: 180 + # redis: + # host: localhost + # port: 6379 + # ttl: 1800 + # resource_ae: + # limits: + # cpu: 2000m + # memory: 2Gi + # requests: + # cpu: 1000m + # memory: 1Gi + # resource_init: + # limits: + # cpu: 500m + # memory: 256Mi + # requests: + # cpu: 200m + # memory: 128Mi + # session: + # check_period: "3600000" + # duration: "1800000" + # max: "10000" + # resave: "false" + # rolling: "true" + # save_uninitialized: "false" + # use_external_store: "false" + # tls: + # tls_trust_list: [] + iaws_configuration: + # - name: instance1 + # wfs: + # service_type: "Route" + # hostname: "" + # port: 443 + # external_tls_secret: ibm-baw-ext-tls-secret + # external_tls_ca_secret: ibm-baw-ext-tls-ca-secret + # replicas: 1 + # workflow_server_secret: ibm-baw-baw-secret + # tls: + # tls_secret_name: ibm-baw-tls + # tls_trust_list: + # - ums-ingress-tls-secret + # + # # ---------------------------------------------------------------------------------------- + # # images + # # ---------------------------------------------------------------------------------------- + # image: + # repository: cp.icr.io/cp/cp4a/iaws/iaws-ps + # tag: 19.0.3 + # pull_policy: IfNotPresent + # pfs_bpd_database_init_job: + # repository: cp.icr.io/cp/cp4a/iaws/pfs-bpd-database-init-prod + # tag: "19.0.3" + # pull_policy: IfNotPresent + # upgrade_job: + # repository: cp.icr.io/cp/cp4a/iaws/iaws-psdb-handling + # tag: "19.0.3" + # pull_policy: IfNotPresent + # ibm_workplace_job: + # repository: cp.icr.io/cp/cp4a/iaws/iaws-ibm-workplace + # tag: "19.0.3" + # pull_policy: IfNotPresent + # + # # ---------------------------------------------------------------------------------------- + # # PS DB settings. 
+ # # ---------------------------------------------------------------------------------------- + # database: + # ssl: false + # sslsecretname: ibm-dba-baw-db2-cacert + # type: "DB2" + # server_name: "" + # database_name: "" + # port: "50000" + # secret_name: ibm-baw-wfs-server-db-secret + # dbcheck: + # # The maximum waiting time (seconds) to check the database intialization status. + # wait_time: 900 + # # The interval time (seconds) to check. + # interval_time: 15 + # hadr: + # standbydb_host: + # standbydb_port: + # retryinterval: + # maxretries: + # + # # ---------------------------------------------------------------------------------------- + # # Content integration configurations + # # ---------------------------------------------------------------------------------------- + # content_integration: + # init_job_image: + # repository: cp.icr.io/cp/cp4a/iaws/iaws-ps-content-integration + # tag: "19.0.3" + # pull_policy: IfNotPresent + # domain_name: "" + # object_store_name: "" + # cpe_admin_secret: + # event_handler_path: "/home/config/docs-config" + # wait_interval: 60000 + # + # # ---------------------------------------------------------------------------------------- + # # AppEngine configuration + # # ---------------------------------------------------------------------------------------- + # appengine: + # hostname: + # admin_secret_name: ae-admin-secret-instance1 + # + # # ---------------------------------------------------------------------------------------- + # # Resource Registry configuration + # # ---------------------------------------------------------------------------------------- + # resource_registry: + # hostname: + # port: 443 + # admin_secret_name: + # + # # ---------------------------------------------------------------------------------------- + # # JMS configuration + # # ---------------------------------------------------------------------------------------- + # jms: + # image: + # repository: cp.icr.io/cp/cp4a/iaws/baw-jms-server + # tag: "19.0.3" + # pull_policy: IfNotPresent + # tls: + # tls_secret_name: dummy-jms-tls-secret + # resources: + # limits: + # memory: "2Gi" + # cpu: "1000m" + # requests: + # memory: "512Mi" + # cpu: "200m" + # storage: + # persistent: true + # size: "2Gi" + # use_dynamic_provisioning: false + # access_modes: + # - ReadWriteOnce + # storage_class: "jms-storage-class" + # # if you do not need selector, please comment or remove below selector section + # selector: + # label: "" + # value: "" + # + # # ---------------------------------------------------------------------------------------- + # # Resource limitation + # # ---------------------------------------------------------------------------------------- + # resources: + # limits: + # cpu: 3 + # memory: 2096Mi + # requests: + # cpu: 2 + # memory: 1048Mi + # + # # ---------------------------------------------------------------------------------------- + # # Resource limitation for init containers + # # ---------------------------------------------------------------------------------------- + # resource_init: + # limits: + # cpu: 500m + # memory: 256Mi + # requests: + # cpu: 200m + # memory: 128Mi + # + # # ---------------------------------------------------------------------------------------- + # # liveness and readiness probes + # # ---------------------------------------------------------------------------------------- + # probe: + # ws: + # liveness_probe: + # initial_delay_seconds: 240 + # readinessProbe: + # initial_delay_seconds: 180 + # + # # 
---------------------------------------------------------------------------------------- + # # trace settings. + # # ---------------------------------------------------------------------------------------- + # logs: + # console_format: "json" + # console_log_level: "INFO" + # console_source: "message,trace,accessLog,ffdc,audit" + # message_format: "basic" + # trace_format: "ENHANCED" + # trace_specification: "*=info" + # + # # ---------------------------------------------------------------------------------------- + # # custom configuration in Liberty server.xml, put the custom.xml in secret with key "sensitiveCustomConfig" + # # kubectl create secret generic wfs-custom-xml-secret --from-file=sensitiveCustomConfig=./custom.xml + # # ---------------------------------------------------------------------------------------- + # custom_xml_secret_name: + # + # # ---------------------------------------------------------------------------------------- + # # custom configuraiton in 100Custom.xml, put the 100Custom.xml in secret with key "sensitiveCustomConfig" + # # kubectl create secret generic wfs-lombardi-custom-xml-secret --from-file=sensitiveCustomConfig=./100Custom.xml + # # ---------------------------------------------------------------------------------------- + # lombardi_custom_xml_secret_name: + ######################################################################## + ######## IBM Process Federation Server configuration ######## + ######################################################################## + pfs_configuration: + # pfs: + # hostname: "" + # port: 443 + # service_type: Route + # + # image: + # repository: cp.icr.io/cp/cp4a/iaws/pfs + # tag: "19.0.3" + # pull_policy: IfNotPresent + # + # replicas: 1 + # service_account: + # anti_affinity: hard + # + # admin_secret_name: ibm-pfs-admin-secret + # config_dropins_overrides_secret: ibm-pfs-config + # resources_security_secret: "" + # + # external_tls_secret: + # external_tls_ca_secret: + # tls: + # tls_secret_name: + # tls_trust_list: + # - ums-tls-crt-secret + # + # resources: + # requests: + # cpu: 500m + # memory: 512Mi + # limits: + # cpu: 2 + # memory: 4Gi + # liveness_probe: + # initial_delay_seconds: 300 + # readiness_probe: + # initial_delay_seconds: 240 + # saved_searches: + # index_name: ibmpfssavedsearches + # index_number_of_shards: 3 + # index_number_of_replicas: 1 + # index_batch_size: 100 + # update_lock_expiration: 5m + # unique_constraint_expiration: 5m + # + # security: + # sso: + # domain_name: + # cookie_name: "ltpatoken2" + # ltpa: + # filename: "ltpa.keys" + # expiration: "120m" + # monitor_interval: "60s" + # ssl_protocol: SSL + # + # executor: + # max_threads: "80" + # core_threads: "40" + # + # rest: + # user_group_check_interval: "300s" + # system_status_check_interval: "60s" + # bd_fields_check_interval: "300s" + # + # custom_env_variables: + # names: + # # - name: MY_CUSTOM_ENVIRONMENT_VARIABLE + # secret: + # + # output: + # storage: + # use_dynamic_provisioning: false + # size: 5Gi + # storage_class: "pfs-output" + # + # logs: + # console_format: "json" + # console_log_level: "INFO" + # console_source: "message,trace,accessLog,ffdc,audit" + # trace_format: "ENHANCED" + # trace_specification: "*=info" + # storage: + # use_dynamic_provisioning: false + # size: 5Gi + # storage_class: "pfs-logs" + # + # dba_resource_registry: + # image: + # repository: cp.icr.io/cp/cp4a/aae/dba-etcd + # tag: 19.0.3 + # pull_policy: IfNotPresent + # lease_ttl: 120 + # pfs_check_interval: 10 + # pfs_connect_timeout: 10 + # 
pfs_response_timeout: 30 + # pfs_registration_key: /dba/appresources/IBM_PFS/PFS_SYSTEM + # tls_secret: rr-tls-client-secret + # resources: + # limits: + # memory: '512Mi' + # cpu: '500m' + # requests: + # memory: '512Mi' + # cpu: '200m' + # + # # ---------------------------------------------------- + # # PFS Embedded Elasticsearch configuration + # # ---------------------------------------------------- + # elasticsearch: + # es_image: + # repository: cp.icr.io/cp/cp4a/iaws/pfs-elasticsearch-prod + # tag: "19.0.3" + # pull_policy: IfNotPresent + # + # pfs_init_image: + # repository: cp.icr.io/cp/cp4a/iaws/pfs-init-prod + # tag: "19.0.3" + # pull_policy: IfNotPresent + # + # nginx_image: + # repository: cp.icr.io/cp/cp4a/iaws/pfs-nginx-prod + # tag: "19.0.3" + # pull_policy: IfNotPresent + # + # replicas: 1 + # service_type: NodePort + # external_port: + # anti_affinity: hard + # service_account: + # privileged: true + # probe_initial_delay: 90 + # heap_size: "1024m" + # + # resources: + # limits: + # memory: "2Gi" + # cpu: "1000m" + # requests: + # memory: "1Gi" + # cpu: "100m" + # + # storage: + # persistent: true + # use_dynamic_provisioning: false + # size: 10Gi + # storage_class: "pfs-es" + # + # snapshot_storage: + # enabled: false + # use_dynamic_provisioning: false + # size: 30Gi + # storage_class_name: "" + # existing_claim_name: "" + # + # security: + # users_secret: "" + ca_configuration: +# global: +# arch: "amd64" +# service_type: "Route" # required, supported service type for application engine is: Route or NodePort. +# frontend_external_hostname: "www.ca.frontendsp" # required, if service_type is Route. Otherwise leave blank +# backend_external_hostname: "www.ca.backendsp" # required, if service_type is Route. Otherwise leave blank +# image: +# repository: "" +# tag: "latest" +# pull_policy: "IfNotPresent" +# pull_secrets: "baca-docker-secret" # Specify secret name for image pull +# authentication_type: 1 # 0-Non-ldap, 1-LDAP, 2- User Management Service integration +# retries: "90" # The max of retrying for CA deployment verification task until all the pods are in Ready status. A delay of 20 seconds between each attempt. 
+# bas: +# bas_enabled: "false" +# celery: +# process_timeout: 300 +# configs: +# claimname: "sp-config-pvc" +# logs: +# claimname: "sp-log-pvc" +# log_level: "debug" +# data: +# claimname: "sp-data-pvc" +# redis: +# resources: +# limits: +# memory: "640Mi" +# cpu: "0.25" +## replica_count: 3 +## quorum: 2 +# rabbitmq: +# resources: +# limits: +# memory: "640Mi" +# cpu: "0.5" +## replica_count: 3 +# mongo: +# configdb_claimname: "sp-data-pvc" +# shard_claimname: "sp-data-pvc" +# mongo_limited_memory: "1600Mi" +# wired_tiger_cache: ".3" +# mongoadmin: +# admin_configdb_claimname: "sp-data-pvc" +# admin_shard_claimname: "sp-data-pvc" +# mongo_limited_memory: "1600Mi" +# wired_tiger_cache: ".3" +# caller_api: +# replica_count: 2 +# resources: +# limits: +# memory: "480Mi" +# cpu: "1" +# spbackend: +# replica_count: 2 +# resources: +# limits: +# memory: "640Mi" +# cpu: "2" +# spfrontend: +# replica_count: 2 +# resources: +# limits: +# memory: "480Mi" +# cpu: "2" +# backend_host: "" +## frontend_host: "" +# sso: "false" +# postprocessing: +# name: "postprocessing" +# process_timeout: 1500 +# replica_count: 2 +# max_unavailable_count: 1 +# resources: +# limits: +# memory: "480Mi" +# cpu: "4" +# pdfprocess: +# name: "pdfprocess" +# process_timeout: 1500 +# replica_count: 2 +# max_unavailable_count: 1 +# resources: +# limits: +# memory: "960Mi" +# cpu: "2" +# utfprocess: +# name: "utf8process" +# process_timeout: 1500 +# replica_count: 2 +# max_unavailable_count: 1 +# resources: +# limits: +# memory: "960Mi" +# cpu: "2" +# setup: +# name: "setup" +# process_timeout: 120 +# replica_count: 2 +# max_unavailable_count: 1 +# resources: +# limits: +# memory: "480Mi" +# cpu: "2" +# ocrextraction: +# name: "ocr-extraction" +# replica_count: 2 +# max_unavailable_count: 1 +# resources: +# limits: +# memory: "1440Mi" +# cpu: "4" +# classifyprocess: +# name: "classifyprocess-classify" +# replica_count: 2 +# max_unavailable_count: 1 +# resources: +# limits: +# memory: "960Mi" +# cpu: "4" +# processingextraction: +# name: "processing-extraction" +# replica_count: 2 +# max_unavailable_count: 1 +# resources: +# limits: +# memory: "1440Mi" +# cpu: "4" +# updatefiledetail: +# name: "updatefiledetail" +# replica_count: 2 +# max_unavailable_count: 1 +# resources: +# limits: +# memory: "480Mi" +# cpu: "2" + ######################################################################## + ######## IBM Business Automation Insights configuration ######## + ######################################################################## + bai_configuration: +# imageCredentials: +# imagePullSecret: "admin.registrykey" +# persistence: +# useDynamicProvisioning: true +# flinkPv: +# storageClassName: "" +# kafka: +# bootstrapServers: "kafka.bootstrapserver1.hostname:9092,kafka.bootstrapserver2.hostname:9092,kafka.bootstrapserver3.hostname:9092" +# securityProtocol: "PLAINTEXT" +# settings: +# egress: false +# ingressTopic: icp4adeploy-ibm-bai-ingress +# egressTopic: icp4adeploy-ibm-bai-egress +# serviceTopic: icp4adeploy-ibm-bai-serviceTopic +# setup: +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-setup +# tag: "19.0.3" +# admin: +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-admin +# tag: "19.0.3" +# flink: +# initStorageDirectory: true +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-flink +# tag: "19.0.3" +# zookeeper: +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-flink-zookeeper +# tag: "19.0.3" +# ingestion: +# install: false +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-ingestion +# tag: "19.0.3" +# adw: +# 
install: false +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-adw +# tag: "19.0.3" +# bpmn: +# install: false +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-bpmn +# tag: "19.0.3" +# bawadv: +# install: false +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-bawadv +# tag: "19.0.3" +# icm: +# install: false +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-icm +# tag: "19.0.3" +# odm: +# install: false +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-odm +# tag: "19.0.3" +# content: +# install: false +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-content +# tag: "19.0.3" +# initImage: +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-init +# tag: "19.0.3" +# elasticsearch: +# install: true +# ibm-dba-ek: +# image: +# imagePullPolicy: Always +# imagePullSecret: "admin.registrykey" +# elasticsearch: +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-elasticsearch +# tag: "19.0.3" +# init: +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-init +# tag: "19.0.3" +# data: +# storage: +# persistent: true +# useDynamicProvisioning: true +# storageClass: "" +# snapshotStorage: +# enabled: true +# useDynamicProvisioning: true +# storageClassName: "" +# kibana: +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-kibana +# tag: "19.0.3" +# init: +# image: +# repository: cp.icr.io/cp/cp4a/bai/bai-init +# tag: "19.0.3" + ######################################################################## + ######## IBM Business Automation Digital Worker Configuration ######## + ######################################################################## + + adw_configuration: +# global: +# imagePullSecret: baiw-reg-cred +# kubernetes: +# serviceAccountName: "" + +# adwSecret: "" + +# grantWritePermissionOnMountedVolumes: true + +# logLevel: "error" + +# networkPolicy: +# enabled: true + +# registry: +# endpoint: "" + +# npmRegistry: +# persistence: +# enabled: true +# useDynamicProvisioning: true +# storageClassName: "managed-nfs-storage" + +# mongodb: +# replicas: 2 +# persistence: +# enabled: true +# useDynamicProvisioning: true +# storageClassName: "managed-nfs-storage" + + +# designer: +# image: +# repository: "cp.icr.io/cp/cp4a/adw/adw-designer" +# pullPolicy: "Always" +# externalUrl: "" + +# runtime: +# image: +# repository: "cp.icr.io/cp/cp4a/adw/adw-runtime" +# pullPolicy: "Always" +# persistence: +# useDynamicProvisioning: true +# storageClassName: "managed-nfs-storage" +# service: +# type: "NodePort" +# externalPort: 30711 +# runLogLevel: "warn" +# externalUrl: "" + + +# management: +# image: +# repository: "cp.icr.io/cp/cp4a/adw/adw-management" +# pullPolicy: "Always" +# persistence: +# useDynamicProvisioning: true +# storageClassName: "managed-nfs-storage" + +# setup: +# image: +# repository: "cp.icr.io/cp/cp4a/adw/adw-setup" +# pullPolicy: "Always" + +# init: +# image: +# repository: "cp.icr.io/cp/cp4a/adw/adw-init" +# pullPolicy: "Always" + +# baiKafka: +# topic: "BAITOPICFORODM" +# bootstrapServers: "" +# securityProtocol: "SASL_SSL" + +# baiElasticsearch: +# url: "" + +# oidc: +# endpoint: "" diff --git a/descriptors/ibm_cp4a_crd.yaml b/descriptors/ibm_cp4a_crd.yaml new file mode 100644 index 00000000..b847a676 --- /dev/null +++ b/descriptors/ibm_cp4a_crd.yaml @@ -0,0 +1,57 @@ +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. 
+# +############################################################################### +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + name: icp4aclusters.icp4a.ibm.com + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 +spec: + group: icp4a.ibm.com + names: + kind: ICP4ACluster + listKind: ICP4AClusterList + plural: icp4aclusters + singular: icp4acluster + scope: Namespaced + subresources: + status: {} + version: v1 + versions: + - name: v1 + served: true + storage: true + validation: + # openAPIV3Schema is the schema for validating custom objects. + # in kube 1.14 schemas can be version specific + openAPIV3Schema: + properties: + spec: + properties: + license: + type: string + pattern: '^accept$' + readinessProbe: + properties: + initialDelaySeconds: + type: integer + minimum: 5 + maximum: 20 + queueManager: + properties: + dev: + properties: + adminPassword: + type: string + pattern: '^[a-zA-Z0-9]{8,}$' diff --git a/descriptors/operator-shared-pvc.yaml b/descriptors/operator-shared-pvc.yaml new file mode 100644 index 00000000..81f9c092 --- /dev/null +++ b/descriptors/operator-shared-pvc.yaml @@ -0,0 +1,27 @@ +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. +# +############################################################################### +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: operator-shared-pvc + annotations: + volume.beta.kubernetes.io/storage-class: "" + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 +spec: + accessModes: + - ReadWriteMany + resources: + requests: + storage: 1Gi diff --git a/descriptors/operator.yaml b/descriptors/operator.yaml new file mode 100644 index 00000000..0c3af855 --- /dev/null +++ b/descriptors/operator.yaml @@ -0,0 +1,134 @@ +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. 
+# +############################################################################### +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ibm-cp4a-operator + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 +spec: + replicas: 1 + selector: + matchLabels: + name: ibm-cp4a-operator + template: + metadata: + labels: + name: ibm-cp4a-operator + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 + annotations: + productID: "5737-I23" + productName: "IBM Cloud Pak for Automation" + productVersion: "19.0.3" + spec: + hostNetwork: false + hostPID: false + hostIPC: false + securityContext: + runAsNonRoot: true + serviceAccountName: ibm-cp4a-operator + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: beta.kubernetes.io/arch + operator: In + values: + - amd64 + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 3 + preference: + matchExpressions: + - key: beta.kubernetes.io/arch + operator: In + values: + - "amd64" + containers: + - name: ansible + command: + - /usr/local/bin/ao-logs + - /tmp/ansible-operator/runner + - stdout + # Replace this with the built image name + image: "cp.icr.io/cp/cp4a/icp4a-operator:19.0.3" + imagePullPolicy: "IfNotPresent" + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: false + runAsNonRoot: true + capabilities: + drop: + - ALL + resources: + limits: + cpu: '1' + memory: 1Gi + requests: + cpu: 500m + memory: 256Mi + volumeMounts: + - mountPath: /tmp/ansible-operator/runner + name: runner + - mountPath: /opt/ansible/share + name: operator-shared-folder + - name: operator + # Replace this with the built image name + image: "cp.icr.io/cp/cp4a/icp4a-operator:19.0.3" + imagePullPolicy: "IfNotPresent" + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: false + runAsNonRoot: true + capabilities: + drop: + - ALL + resources: + limits: + cpu: '1' + memory: 1Gi + requests: + cpu: 500m + memory: 256Mi + volumeMounts: + - mountPath: /tmp/ansible-operator/runner + name: runner + - mountPath: /opt/ansible/share + name: operator-shared-folder + env: + - name: WATCH_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: OPERATOR_NAME + value: "ibm-cp4a-operator" + - name: WORKER_FOOSERVICE_CACHE_EXAMPLE_COM + value: "10" + imagePullSecrets: + - name: "admin.registrykey" + volumes: + - name: runner + emptyDir: {} + - name: "operator-shared-folder" + persistentVolumeClaim: + claimName: "operator-shared-pvc" diff --git a/descriptors/role.yaml b/descriptors/role.yaml new file mode 100644 index 00000000..5d76d4bf --- /dev/null +++ b/descriptors/role.yaml @@ -0,0 +1,122 @@ +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. 
+# +############################################################################### +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + creationTimestamp: null + name: ibm-cp4a-operator + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 +rules: +- apiGroups: + - "" + resources: + - pods + - services + - endpoints + - persistentvolumeclaims + - events + - configmaps + - secrets + - serviceaccounts + verbs: + - '*' +- apiGroups: + - apps + resources: + - deployments + - daemonsets + - replicasets + - statefulsets + verbs: + - '*' +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - get + - create +- apiGroups: + - apps + resourceNames: + - ibm-cp4a-operator + resources: + - deployments/finalizers + verbs: + - update +- apiGroups: + - icp4a.ibm.com + resources: + - '*' + verbs: + - '*' +- apiGroups: + - "" + resources: + - pods/exec + verbs: + - '*' +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - '*' +- apiGroups: + - policy + resources: + - poddisruptionbudgets + - podsecuritypolicies + verbs: + - '*' +- apiGroups: + - networking.k8s.io + resources: + - networkpolicies + verbs: + - '*' +- apiGroups: + - rbac.authorization.k8s.io + resources: + - roles + - rolebindings + verbs: + - '*' +- apiGroups: + - batch + resources: + - jobs + verbs: + - '*' +- apiGroups: + - "" + - route.openshift.io + resources: + - routes + verbs: + - '*' +- apiGroups: + - "" + - route.openshift.io + resources: + - routes/custom-host + verbs: + - '*' +- apiGroups: + - "extensions" + resources: + - "ingresses" + verbs: + - "*" diff --git a/descriptors/role_binding.yaml b/descriptors/role_binding.yaml new file mode 100644 index 00000000..17dac646 --- /dev/null +++ b/descriptors/role_binding.yaml @@ -0,0 +1,26 @@ +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. 
+# +############################################################################### +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: ibm-cp4a-operator + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 +subjects: +- kind: ServiceAccount + name: ibm-cp4a-operator +roleRef: + kind: Role + name: ibm-cp4a-operator + apiGroup: rbac.authorization.k8s.io diff --git a/descriptors/scc-fncm.yaml b/descriptors/scc-fncm.yaml new file mode 100755 index 00000000..96feb774 --- /dev/null +++ b/descriptors/scc-fncm.yaml @@ -0,0 +1,38 @@ +allowHostDirVolumePlugin: false +allowHostIPC: false +allowHostNetwork: false +allowHostPID: false +allowHostPorts: false +allowPrivilegeEscalation: true +allowPrivilegedContainer: false +allowedCapabilities: [] +apiVersion: security.openshift.io/v1 +defaultAddCapabilities: [] +fsGroup: + type: RunAsAny +groups: +- system:authenticated +kind: SecurityContextConstraints +metadata: + name: ibm-fncm-operator +priority: 0 +readOnlyRootFilesystem: false +requiredDropCapabilities: +- KILL +- MKNOD +- SETUID +- SETGID +runAsUser: + type: MustRunAsRange +seLinuxContext: + type: MustRunAs +supplementalGroups: + type: RunAsAny +users: [] +volumes: +- configMap +- downwardAPI +- emptyDir +- persistentVolumeClaim +- projected +- secret diff --git a/descriptors/service_account.yaml b/descriptors/service_account.yaml new file mode 100644 index 00000000..160efb74 --- /dev/null +++ b/descriptors/service_account.yaml @@ -0,0 +1,19 @@ +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. +# +############################################################################### +apiVersion: v1 +kind: ServiceAccount +metadata: + name: ibm-cp4a-operator + labels: + app.kubernetes.io/instance: ibm-dba + app.kubernetes.io/managed-by: ibm-dba + app.kubernetes.io/name: ibm-dba + release: 19.0.3 diff --git a/images/bai-architecture.jpg b/images/bai-architecture.jpg deleted file mode 100644 index d0090810..00000000 Binary files a/images/bai-architecture.jpg and /dev/null differ diff --git a/images/diag_icp4a_k8s.jpg b/images/diag_icp4a_k8s.jpg deleted file mode 100644 index b64b8113..00000000 Binary files a/images/diag_icp4a_k8s.jpg and /dev/null differ diff --git a/images/samples-structure.png b/images/samples-structure.png deleted file mode 100644 index 8568e506..00000000 Binary files a/images/samples-structure.png and /dev/null differ diff --git a/legal-notice.md b/legal-notice.md index d8e67a52..dd2daf30 100644 --- a/legal-notice.md +++ b/legal-notice.md @@ -1,112 +1,112 @@ -IBM - -This information was developed for products and services that are offered in the USA. - -IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. 
However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. -IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: - -IBM Director of Licensing -IBM Corporation -North Castle Drive, MD-NC119 -Armonk, NY 10504-1785 -United States of America -For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: - -Intellectual Property Licensing -Legal and Intellectual Property Law -IBM Japan Ltd. -19-21, Nihonbashi-Hakozakicho, Chuo-ku -Tokyo 103-8510, Japan - -The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. - -This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. - -Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk. - -IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. -Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: - -IBM Director of Licensing -IBM Corporation -North Castle Drive, MD-NC119 -Armonk, NY 10504-1785 -United States of America - -Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. - -The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us. - -The performance data discussed herein is presented as derived under specific operating conditions. Actual results may vary. - -The client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions. - -The performance data and client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions. 
- -Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. - -All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. - -This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. - -COPYRIGHT LICENSE: - -This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs. - -Each copy or any portion of these sample programs or any derivative work, must include a copyright notice as follows: - -© Copyright IBM Corp. 2016 -Portions of this code are derived from IBM Corp. Sample Programs. -Additional license terms - -The Oracle Outside In Technology included herein is subject to a restricted use license and can only be used in conjunction with this application. -Trademarks - -IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at www.ibm.com/legal/copytrade.shtml. - -Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. - -Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. - -Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. - -Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. - -UNIX is a registered trademark of The Open Group in the United States and other countries. - -Other company, product, and service names may be trademarks or service marks of others. -Terms and conditions for product documentation - -Permissions for the use of these publications are granted subject to the following terms and conditions. -Applicability - -These terms and conditions are in addition to any terms of use for the IBM website. 
-Personal use - -You may reproduce these publications for your personal, noncommercial use provided that all proprietary notices are preserved. You may not distribute, display or make derivative work of these publications, or any portion thereof, without the express consent of IBM. -Commercial use - -You may reproduce, distribute and display these publications solely within your enterprise provided that all proprietary notices are preserved. You may not make derivative works of these publications, or reproduce, distribute or display these publications or any portion thereof outside your enterprise, without the express consent of IBM. -Rights - -Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either express or implied, to the publications or any information, data, software or other intellectual property contained therein. - -IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use of the publications is detrimental to its interest or, as determined by IBM, the above instructions are not being properly followed. - -You may not download, export or re-export this information except in full compliance with all applicable laws and regulations, including all United States export laws and regulations. - -IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS ARE PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. -IBM Online Privacy Statement - -IBM Software products, including software as a service solutions, (“Software Offerings”) may use cookies or other technologies to collect product usage information, to help improve the end user experience, to tailor interactions with the end user or for other purposes. In many cases no personally identifiable information is collected by the Software Offerings. Some of our Software Offerings can help enable you to collect personally identifiable information. If this Software Offering uses cookies to collect personally identifiable information, specific information about this offering’s use of cookies is set forth below. - -This Software Offering does not use cookies or other technologies to collect personally identifiable information. - -If the configurations deployed for this Software Offering provide you as customer the ability to collect personally identifiable information from end users via cookies and other technologies, you should seek your own legal advice about any laws applicable to such data collection, including any requirements for notice and consent. - -For more information about the use of various technologies, including cookies, for these purposes, see IBM’s Privacy Policy at www.ibm.com/privacy and IBM’s Online Privacy Statement at www.ibm.com/privacy/details the section entitled “Cookies, Web Beacons and Other Technologies” and the “IBM Software Products and Software-as-a-Service Privacy Statement” at www.ibm.com/software/info/product-privacy. - -Last updated: June 2017 -legal_notices.htm - +IBM + +This information was developed for products and services that are offered in the USA. + +IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. 
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. +IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: + +IBM Director of Licensing +IBM Corporation +North Castle Drive, MD-NC119 +Armonk, NY 10504-1785 +United States of America +For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: + +Intellectual Property Licensing +Legal and Intellectual Property Law +IBM Japan Ltd. +19-21, Nihonbashi-Hakozakicho, Chuo-ku +Tokyo 103-8510, Japan + +The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. + +This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. + +Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk. + +IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. +Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: + +IBM Director of Licensing +IBM Corporation +North Castle Drive, MD-NC119 +Armonk, NY 10504-1785 +United States of America + +Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. + +The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us. + +The performance data discussed herein is presented as derived under specific operating conditions. Actual results may vary. + +The client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions. 
+ +The performance data and client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions. + +Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. + +All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. + +This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. + +COPYRIGHT LICENSE: + +This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs. + +Each copy or any portion of these sample programs or any derivative work, must include a copyright notice as follows: + +© Copyright IBM Corp. 2016 +Portions of this code are derived from IBM Corp. Sample Programs. +Additional license terms + +The Oracle Outside In Technology included herein is subject to a restricted use license and can only be used in conjunction with this application. +Trademarks + +IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at www.ibm.com/legal/copytrade.shtml. + +Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. + +Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. + +Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. + +Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. + +UNIX is a registered trademark of The Open Group in the United States and other countries. + +Other company, product, and service names may be trademarks or service marks of others. 
+Terms and conditions for product documentation + +Permissions for the use of these publications are granted subject to the following terms and conditions. +Applicability + +These terms and conditions are in addition to any terms of use for the IBM website. +Personal use + +You may reproduce these publications for your personal, noncommercial use provided that all proprietary notices are preserved. You may not distribute, display or make derivative work of these publications, or any portion thereof, without the express consent of IBM. +Commercial use + +You may reproduce, distribute and display these publications solely within your enterprise provided that all proprietary notices are preserved. You may not make derivative works of these publications, or reproduce, distribute or display these publications or any portion thereof outside your enterprise, without the express consent of IBM. +Rights + +Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either express or implied, to the publications or any information, data, software or other intellectual property contained therein. + +IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use of the publications is detrimental to its interest or, as determined by IBM, the above instructions are not being properly followed. + +You may not download, export or re-export this information except in full compliance with all applicable laws and regulations, including all United States export laws and regulations. + +IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS ARE PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. +IBM Online Privacy Statement + +IBM Software products, including software as a service solutions, (“Software Offerings”) may use cookies or other technologies to collect product usage information, to help improve the end user experience, to tailor interactions with the end user or for other purposes. In many cases no personally identifiable information is collected by the Software Offerings. Some of our Software Offerings can help enable you to collect personally identifiable information. If this Software Offering uses cookies to collect personally identifiable information, specific information about this offering’s use of cookies is set forth below. + +This Software Offering does not use cookies or other technologies to collect personally identifiable information. + +If the configurations deployed for this Software Offering provide you as customer the ability to collect personally identifiable information from end users via cookies and other technologies, you should seek your own legal advice about any laws applicable to such data collection, including any requirements for notice and consent. + +For more information about the use of various technologies, including cookies, for these purposes, see IBM’s Privacy Policy at www.ibm.com/privacy and IBM’s Online Privacy Statement at www.ibm.com/privacy/details the section entitled “Cookies, Web Beacons and Other Technologies” and the “IBM Software Products and Software-as-a-Service Privacy Statement” at www.ibm.com/software/info/product-privacy. + +Last updated: June 2017 +legal_notices.htm + © Copyright IBM Corporation 2017. 
\ No newline at end of file diff --git a/platform/k8s/README.md b/platform/k8s/README.md new file mode 100644 index 00000000..40670b4f --- /dev/null +++ b/platform/k8s/README.md @@ -0,0 +1,14 @@ +# IBM Cloud Pak for Automation 19.0.3 on Certified Kubernetes + +Any platform that includes Kubernetes 1.11+ is supported by Cloud Pak for Automation 19.0.3. + +Choose which use case you need, and then follow the links below to find the right instructions: + +- [Install Cloud Pak for Automation 19.0.3 on Certified Kubernetes](install.md) +- [Uninstall Cloud Pak for Automation 19.0.3 on Certified Kubernetes](uninstall.md) +- [Migrate 19.0.x persisted data to 19.0.3 on Certified Kubernetes](migrate.md) +- [Update Cloud Pak for Automation 19.0.3 on Certified Kubernetes](update.md) + +Choose to evaluate components: + +- [Install ODM for developers on Minikube](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/topics/tsk_dev_odm_minikube.html) diff --git a/platform/k8s/install.md b/platform/k8s/install.md new file mode 100644 index 00000000..1fa9a517 --- /dev/null +++ b/platform/k8s/install.md @@ -0,0 +1,284 @@ +# Installing Cloud Pak for Automation 19.0.3 on Certified Kubernetes + +- [Step 1: Get access to the container images](install.md#step-1-get-access-to-the-container-images) +- [Step 2: Prepare your environment for automation software](install.md#step-2-prepare-your-environment-for-automation-software) +- [Step 3: Create a shared PV and add the JDBC drivers](install.md#step-3-create-a-shared-pv-and-add-the-jdbc-drivers) +- [Step 4: Deploy the operator manifest files to your cluster](install.md#step-4-deploy-the-operator-manifest-files-to-your-cluster) +- [Step 5: Configure the software that you want to install](install.md#step-5-configure-the-software-that-you-want-to-install) +- [Step 6: Apply the custom resources](install.md#step-6-apply-the-custom-resources) +- [Step 7: Verify that the automation containers are running](install.md#step-7-verify-that-the-automation-containers-are-running) +- [Step 8: Complete some post-installation steps](install.md#step-8-complete-some-post-installation-steps) + +## Step 1: Get access to the container images + +You can access the container images in the IBM Docker registry with your IBMid (Option 1), or you can use the downloaded archives from IBM Passport Advantage (PPA) (Option 2). + +1. Log in to your Kubernetes cluster. +2. Download or clone the repository on your local machine and change to `cert-kubernetes` directory + ```bash + $ git clone git@github.com:icp4a/cert-kubernetes.git + $ cd cert-kubernetes + ``` + You will find there the scripts and kubernetes descriptors that are necessary to install Cloud Pak for Automation. + +### Option 1: Create a pull secret for the IBM Cloud Entitled Registry + +1. Log in to [MyIBM Container Software Library](https://myibm.ibm.com/products-services/containerlibrary) with the IBMid and password that are associated with the entitled software. + +2. In the **Container software library** tile, click **View library** and then click **Copy key** to copy the entitlement key to the clipboard. + +3. Create a pull secret by running a `kubectl create secret` command. + ```bash + $ kubectl create secret docker-registry --docker-server=cp.icr.io --docker-username=iamapikey --docker-password="" --docker-email=user@foo.com + ``` + + > **Note**: The `cp.icr.io` value for the **docker-server** parameter is the only registry domain name that contains the images. + +4. 
Take a note of the secret and the server values so that you can set them to the **pullSecrets** and **repository** parameters when you run the operator for your containers. + +### Option 2: Download the packages from PPA and load the images + +[IBM Passport Advantage (PPA)](https://www-01.ibm.com/software/passportadvantage/pao_customer.html) provides archives (.tgz) for the software. To view the list of Passport Advantage eAssembly installation images, refer to the [19.0.3 download document](https://www.ibm.com/support/pages/ibm-cloud-pak-automation-v1903-download-document). + +1. Download one or more PPA packages to a server that is connected to your Docker registry.. +2. Check that you can run a docker command. + ```bash + $ docker ps + ``` +3. Login to a Docker registry with your credentials.. + ```bash + $ docker login -u + ``` +4. Run a `kubectl` command to make sure that you have access to Kubernetes. + ```bash + $ kubectl cluster-info + ``` +5. Run the [`scripts/loadimages.sh`](../../scripts/loadimages.sh) script to load the images into your Docker registry. Specify the two mandatory parameters in the command line. + + ``` + -p PPA archive files location or archive filename + -r Target Docker registry and namespace + -l Optional: Target a local registry + ``` + + The following example shows the input values in the command line on OCP 3.11. On OCP 4.2 the default docker registry is based on the host name, for example "default-route-openshift-image-registry.ibm.com". + + ``` + # scripts/loadimages.sh -p .tgz -r /my-project + ``` + + > **Note**: The project must have pull request privileges to the registry where the images are loaded. The project must also have pull request privileges to push the images into another namespace/project. + +6. Check that the images are pushed correctly to the registry. +7. (Optional) If you want to use an external Docker registry, create a Docker registry secret. + + ```bash + $ oc create secret docker-registry --docker-server= --docker-username= --docker-password= --docker-email= + ``` + + Take a note of the secret and the server values so that you can set them to the **pullSecrets** and **repository** parameters when you run the operator for your containers. + +## Step 2: Prepare your environment for automation software + +Before you install any of the containerized software: + +1. Go to the prerequisites page in the [IBM Cloud Pak for Automation 19.0.x](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_env_k8s.html) Knowledge Center. +2. Follow the instructions on preparing your environment for the software components that you want to install. + + How much preparation you need to do depends on what you want to install and how familiar you are with your environment. + +## Step 3: Create a shared PV and add the JDBC drivers + + 1. Create a persistent volume (PV) for the operator. This PV is needed for the JDBC drivers. The following example YAML defines a PV, but PVs depend on your cluster configuration. + ```yaml + apiVersion: v1 + kind: PersistentVolume + metadata: + labels: + type: local + name: operator-shared-pv + spec: + capacity: + storage: 1Gi + accessModes: + - ReadWriteMany + hostPath: + path: "/root/operator" + persistentVolumeReclaimPolicy: Delete + ``` + + 2. Deploy the PV. + ```bash + $ kubectl create -f operator-shared-pv.yaml + ``` + + 3. 
Create a claim for the PV, or check that the PV is bound dynamically, [descriptors/operator-shared-pvc.yaml](../../descriptors/operator-shared-pvc.yaml?raw=true). + + > Replace the storage class if you do not want to create the relevant persistent volume. + + ```yaml + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: operator-shared-pvc + namespace: my-project + spec: + accessModes: + - ReadWriteMany + storageClassName: "" + resources: + requests: + storage: 1Gi + volumeName: operator-shared-pv + ``` + + 4. Deploy the PVC. + ```bash + $ kubectl create -f descriptors/operator-shared-pvc.yaml + ``` + + 5. Copy all of the JDBC drivers that are needed by the components you intend to install to the persistent volume. Depending on your storage configuration you might not need these drivers. + + > **Note**: File names for JDBC drivers cannot include additional version information. + - DB2: + - db2jcc4.jar + - db2jcc_license_cu.jar + - Oracle: + - ojdbc8.jar + + The following structure shows an example remote file system. + + ``` + pv-root-dir + + └── jdbc + + ├── db2 + + │ ├── db2jcc4.jar + + │ └── db2jcc_license_cu.jar + + ├── oracle + + │ └── ojdbc8.jar + + ``` + +## Step 4: Deploy the operator manifest files to your cluster + +The Cloud Pak operator has a number of descriptors that must be applied. + - [descriptors/ibm_cp4a_crd.yaml](../../descriptors/ibm_cp4a_crd.yaml?raw=true) contains the description of the Custom Resource Definition. + - [descriptors/operator.yaml](../../descriptors/operator.yaml?raw=true) defines the deployment of the operator code. + - [descriptors/role.yaml](../../descriptors/role.yaml?raw=true) defines the access of the operator. + - [descriptors/role_binding.yaml](../../descriptors/role_binding.yaml?raw=true) defines the access of the operator. + - [descriptors/service_account.yaml](../../descriptors/service_account.yaml?raw=true) defines the identity for processes that run inside the pods of the operator. + +1. Deploy the icp4a-operator on your cluster. + + Use the script [scripts/deployOperator.sh](../../scripts/deployOperator.sh) to deploy these descriptors. + ```bash + $ ./scripts/deployOperator.sh -i /icp4a-operator:19.0.3 -p '' + ``` + + Where *registry_url* is the value for your internal docker registry or `cp.icr.io/cp/cp4a` for the IBM Cloud Entitled Registry and *my_secret_name* is the secret created to access the registry. + + > **Note**: If you plan to use a non-admin user to install the operator, you must add the user to the `ibm-cp4a-operator` role. For example: + ```bash + $ kubectl adm policy add-role-to-user ibm-cp4a-operator + ``` + +2. Monitor the pod until it shows a STATUS of *Running*: + ```bash + $ kubectl get pods -w + ``` + > **Note**: When started, you can monitor the operator logs with the following command: + ```bash + $ kubectl logs -f deployment/ibm-cp4a-operator -c operator + ``` + +## Step 5: Configure the software that you want to install + +A custom resource (CR) YAML file is a configuration file that describes an ICP4ACluster instance and includes the parameters to install some or all of the components. + +1. Make a copy of the template custom resource YAML file [descriptors/ibm_cp4a_cr_template.yaml](../../descriptors/ibm_cp4a_cr_template.yaml?raw=true) and name it appropriately for your deployment (for example descriptors/my_icp4a_cr.yaml). + + > **Important:** Use a single custom resource file to include all of the components that you want to deploy with an operator instance. 
Each time that you need to make an update or modification you must use this same file to apply the changes to your deployments. When you apply a new custom resource to an operator you must make sure that all previously deployed resources are included if you do not want the operator to delete them. + +2. Change the default name of your instance in descriptors/my_icp4a_cr.yaml. + + ```yaml + metadata: + name: + ``` + +3. If you use an internal registry, enter values for the `image_pull_secrets` and `images` parameters in the `shared_configuration` section. + + ```yaml + shared_configuration: + image_pull_secrets: + - + images: + keytool_job_container: + repository: docker-registry.default.svc:5000//dba-keytool-initcontainer + tag: 19.0.3 + keytool_init_container: + repository: docker-registry.default.svc:5000//dba-keytool-jobcontainer + tag: 19.0.3 + pull_policy: IfPresent + ``` + + | Parameter | Description | + | ------------------------------- | --------------------------------------------- | + | `keytool_job_container` | Repository from where to pull the keytool_job_container and the corresponding tag | + | `keytool_init_container` | Repository from where to pull the keytool_init_container and the corresponding tag | + | `image_pull_secrets` | Secrets in your target namespace to pull images from the specified repository | + +4. Use the following links to configure the software that you want to install. + + - [Configure IBM Automation Digital Worker](../../ADW/README_config.md) + - [Configure IBM Automation Workstream Services](../../IAWS/README_config.md) + - [Configure IBM Business Automation Application Engine](../../AAE/README_config.md) + - [Configure IBM Business Automation Content Analyzer](../../ACA/README_config.md) + - [Configure IBM Business Automation Insights](../../BAI/README_config.md) + - [Configure IBM Business Automation Navigator](../../BAN/README_config.md) + - [Configure IBM Business Automation Studio](../../BAS/README_config.md) + - [Configure IBM FileNet Content Manager](../../FNCM//README_config.md) + - [Configure IBM Operational Decision Manager](../../ODM/README_config.md) + - [Configure the User Management Service](../../UMS/README_config.md) + +## Step 6: Apply the custom resources + +1. Check that all the components you want to install are configured. + + ```bash + $ cat descriptors/my_icp4a_cr.yaml + ``` + +2. Deploy the configured components by applying the custom resource. + + ```bash + $ kubectl apply -f descriptors/my_icp4a_cr.yaml + ``` + +## Step 7: Verify that the automation containers are running + +The operator reconciliation loop might take several minutes. + +Monitor the status of your pods with: +```bash +$ kubectl get pods -w +``` + +When all of the pods are *Running*, you can access the status of your services with the following commands. +```bash +$ kubectl cluster-info +$ kubectl get services +``` +You can now expose the services to your users. + +Refer to the [Troubleshooting section](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_trbleshoot_operators.html) to access the operator logs. + +## Step 8: Complete some post-installation steps + +Go to [IBM Knowledge Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_deploy_postdeployk8s.html) to follow the post-installation steps. 
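
If you want to double-check the deployment from the command line while you work through the post-installation steps, you can also query the custom resource itself. This is a minimal sketch; `icp4adeploy` is a placeholder, so substitute the `metadata.name` value that you set in your own `descriptors/my_icp4a_cr.yaml`:

```bash
# List the ICP4ACluster instances that the operator is reconciling in the current namespace
$ kubectl get icp4aclusters

# Show the spec and status of one instance (replace icp4adeploy with your instance name)
$ kubectl describe icp4acluster icp4adeploy
```

The `icp4aclusters` resource name comes from the Custom Resource Definition that is deployed in Step 4, so these commands need only plain `kubectl` and no additional tooling.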
diff --git a/platform/k8s/migrate.md b/platform/k8s/migrate.md new file mode 100644 index 00000000..a38699e1 --- /dev/null +++ b/platform/k8s/migrate.md @@ -0,0 +1,20 @@ +# Migrating Cloud Pak for Automation data on Certified Kubernetes + +To migrate your 19.0.x data to 19.0.3, uninstall your current deployment and follow the migration instructions for each component to point to the existing persistent stores. + +## Step 1: Prepare your environment and take note of your existing storage settings + +Use the following links to help you find the relevant software storage settings that you want to migrate. + +- [Configure IBM Business Automation Application Engine](../../AAE/README_migrate.md) +- [Configure IBM Business Automation Content Analyzer](../../ACA/README_migrate.md) +- [Configure IBM Business Automation Insights](../../BAI/README_migrate.md) +- [Configure IBM Business Automation Navigator](../../BAN/README_migrate.md) +- [Configure IBM Business Automation Studio](../../BAS/README_migrate.md) +- [Configure IBM FileNet Content Manager](../../FNCM//README_migrate.md) +- [Configure IBM Operational Decision Manager](../../ODM/README_migrate.md) +- [Configure the User Management Service](../../UMS/README_migrate.md) + +## Step 2: Install your chosen components with the operator + + When you have completed all of the preparation steps for each of the components that you want to migrate, follow the instructions in the [installation](install.md) readme. diff --git a/platform/k8s/uninstall.md b/platform/k8s/uninstall.md new file mode 100644 index 00000000..a3d605df --- /dev/null +++ b/platform/k8s/uninstall.md @@ -0,0 +1,24 @@ +# Uninstalling Cloud Pak for Automation 19.0.3 on Certified Kubernetes + +## Delete your automation instances + +You can delete your custom resource (CR) deployments by deleting the CR YAML file or the CR instance. The name of the instance is taken from the value of the `name` parameter in the CR YAML file. The following command is used to delete an instance. + +```bash +  $ kubectl delete ICP4ACluster +``` + +> **Note**: You can get the names of the ICP4ACluster instances with the following command: + ```bash + $ kubectl get ICP4ACluster + ``` + +## Delete the operator instance and all associated automation instances + +Use the [`scripts/deleteOperator.sh`](../../scripts/deleteOperator.sh) to delete all the resources that are linked to the operator. + +```bash + $ ./scripts/deleteOperator.sh +``` + +Verify that all the pods created with the operator are terminated and deleted. diff --git a/platform/k8s/update.md b/platform/k8s/update.md new file mode 100644 index 00000000..1b59080e --- /dev/null +++ b/platform/k8s/update.md @@ -0,0 +1,53 @@ +# Updating Cloud Pak for Automation 19.0.3 on Certified Kubernetes + +- [Step 1: Modify the software that is installed](update.md#step-1-modify-the-software-that-is-installed) +- [Step 2: Apply the updated custom resources](update.md#step-2-apply-the-updated-custom-resources) +- [Step 3: Verify the updated automation containers](update.md#step-3-verify-the-updated-automation-containers) + +## Step 1: Modify the software that is installed + +An update to the custom resource (CR), overwrites the deployed resources during the operator control loop (observe, analyze, act) that occurs as a result of constantly watching the state of the Kubernetes resources. + +Use the following links to configure the software that is already installed. You can modify the installed software, remove it, or add new components. 
Use the same CR YAML file that you deployed with the operator to make the updates (for example descriptors/my_icp4a_cr.yaml). + +- [Configure IBM Automation Digital Worker](../../ADW/README_config.md) +- [Configure IBM Automation Workstream Services](../../IAWS/README_config.md) +- [Configure IBM Business Automation Application Engine](../../AAE/README_config.md) +- [Configure IBM Business Automation Content Analyzer](../../ACA/README_config.md) +- [Configure IBM Business Automation Insights](../../BAI/README_config.md) +- [Configure IBM Business Automation Navigator](../../BAN/README_config.md) +- [Configure IBM Business Automation Studio](../../BAS/README_config.md) +- [Configure IBM FileNet Content Manager](../../FNCM//README_config.md) +- [Configure IBM Operational Decision Manager](../../ODM/README_config.md) +- [Configure the User Management Service](../../UMS/README_config.md) + +## Step 2: Apply the updated custom resources + +1. Review your CR YAML file to make sure it contains all of your intended modifications. + + ```bash + $ cat descriptors/my_icp4a_cr.yaml + ``` + +2. Run the following commands to apply the updates to the operator: + + ```bash + $ kubectl apply -f descriptors/my_icp4a_cr.yaml --overwrite=true + ``` + +## Step 3: Verify the updated automation containers + +The operator reconciliation loop might take several minutes. + +Monitor the status of your pods with: +```bash +$ kubectl get pods -w +``` + +When all of the pods are *Running*, you can access the status of your services with the following commands. +```bash +$ kubectl cluster-info +$ kubectl get services +``` + +Refer to the [Troubleshooting section](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_trbleshoot_operators.html) to access the operator logs. diff --git a/platform/ocp/README.md b/platform/ocp/README.md new file mode 100644 index 00000000..9284438b --- /dev/null +++ b/platform/ocp/README.md @@ -0,0 +1,14 @@ +# IBM Cloud Pak for Automation 19.0.3 on Red Hat OpenShift + +Red Hat OpenShift Cloud Platform 3.11 or 4.2 is the target platform for Cloud Pak for Automation 19.0.3. 
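
If you are not sure which of the two supported levels your cluster is running, a quick check from the CLI (a sketch; the output format differs slightly between OCP 3.11 and 4.2) is:

```bash
# Print the client and cluster version information for the OpenShift cluster you are logged in to
$ oc version
```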
+ +Choose which use case you need, and then follow the links below to find the right instructions: + +- [Install Cloud Pak for Automation 19.0.3 on Red Hat OpenShift](install.md) +- [Uninstall Cloud Pak for Automation 19.0.3 on Red Hat OpenShift](uninstall.md) +- [Migrate 19.0.x persisted data to 19.0.3 on Red Hat OpenShift](migrate.md) +- [Update Cloud Pak for Automation 19.0.3 on Red Hat OpenShift](update.md) + +Choose to evaluate components: + +- [Install ODM for developers on Red Hat OpenShift](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/topics/tsk_dev_odm_ocp.html) diff --git a/platform/ocp/install.md b/platform/ocp/install.md new file mode 100644 index 00000000..69895684 --- /dev/null +++ b/platform/ocp/install.md @@ -0,0 +1,304 @@ +# Installing Cloud Pak for Automation 19.0.3 on Red Hat OpenShift + +- [Step 1: Create a namespace and get access to the container images](install.md#step-1-create-a-namespace-and-get-access-to-the-container-images) +- [Step 2: Prepare your environment for automation software](install.md#step-2-prepare-your-environment-for-automation-software) +- [Step 3: Create a shared PV and add the JDBC drivers](install.md#step-3-create-a-shared-pv-and-add-the-jdbc-drivers) +- [Step 4: Deploy the operator manifest files to your cluster](install.md#step-4-deploy-the-operator-manifest-files-to-your-cluster) +- [Step 5: Configure the software that you want to install](install.md#step-5-configure-the-software-that-you-want-to-install) +- [Step 6: Apply the custom resources](install.md#step-6-apply-the-custom-resources) +- [Step 7: Verify that the automation containers are running](install.md#step-7-verify-that-the-automation-containers-are-running) +- [Step 8: Complete some post-installation steps](install.md#step-8-complete-some-post-installation-steps) + +## Step 1: Create a namespace and get access to the container images + +From your local machine, you can access the container images in the IBM Docker registry with your IBMid (Option 1), or you can use the downloaded archives from IBM Passport Advantage (PPA) (Option 2). + +1. Log in to your cluster. + ```bash + $ oc login https://:8443 -u + ``` +2. Create an OpenShift project (namespace) in which you want to install the operator. + ```bash + $ oc new-project my-project + ``` +3. Add privileges to the project. + ```bash + $ oc adm policy add-scc-to-user privileged -z default + ``` +4. Download or clone the repository on your local machine and change to `cert-kubernetes` directory + ```bash + $ git clone git@github.com:icp4a/cert-kubernetes.git + $ cd cert-kubernetes + ``` + You will find there the scripts and kubernetes descriptors that are necessary to install Cloud Pak for Automation. + +### Option 1: Create a pull secret for the IBM Cloud Entitled Registry + +1. Log in to [MyIBM Container Software Library](https://myibm.ibm.com/products-services/containerlibrary) with the IBMid and password that are associated with the entitled software. + +2. In the **Container software library** tile, click **View library** and then click **Copy key** to copy the entitlement key to the clipboard. + +3. Create a pull secret by running a `kubectl create secret` command. + ```bash + $ kubectl create secret docker-registry --docker-server=cp.icr.io --docker-username=iamapikey --docker-password="" --docker-email= + ``` + + > **Note**: The `cp.icr.io` value for the **docker-server** parameter is the only registry domain name that contains the images. + +4. 
Take a note of the secret and the server values so that you can set them to the **pullSecrets** and **repository** parameters when you run the operator for your containers. + +### Option 2: Download the packages from PPA and load the images + +[IBM Passport Advantage (PPA)](https://www-01.ibm.com/software/passportadvantage/pao_customer.html) provides archives (.tgz) for the software. To view the list of Passport Advantage eAssembly installation images, refer to the [19.0.3 download document](https://www.ibm.com/support/pages/ibm-cloud-pak-automation-v1903-download-document). + +1. Download one or more PPA packages to a server that is connected to your Docker registry. +2. Check that you can run a docker command. + ```bash + $ docker ps + ``` +3. Log in to the Docker registry with a token. + ```bash + $ docker login $(oc registry info) -u -p $(oc whoami -t) + ``` + > **Note**: You can connect to a node in the cluster to resolve the `docker-registry.default.svc` parameter. + + You can also log in to an external Docker registry using the following command: + ```bash + $ docker login -u + ``` +4. Run a `kubectl` command to make sure that you have access to Kubernetes. + ```bash + $ kubectl cluster-info + ``` +5. Run the [`scripts/loadimages.sh`](../../scripts/loadimages.sh) script to load the images into your Docker registry. Specify the two mandatory parameters in the command line. + + ``` + -p PPA archive files location or archive filename + -r Target Docker registry and namespace + -l Optional: Target a local registry + ``` + + The following example shows the input values in the command line on OCP 3.11. On OCP 4.2 the default docker registry is based on the host name, for example "default-route-openshift-image-registry.ibm.com". + + ``` + # scripts/loadimages.sh -p .tgz -r docker-registry.default.svc:5000/my-project + ``` + + > **Note**: The project must have pull request privileges to the registry where the images are loaded. The project must also have pull request privileges to push the images into another namespace/project. + +6. Check that the images are pushed correctly to the registry. + ```bash + $ oc get is + ``` +7. (Optional) If you want to use an external Docker registry, create a Docker registry secret. + + ```bash + $ oc create secret docker-registry --docker-server= --docker-username= --docker-password= --docker-email= + ``` + + Take a note of the secret and the server values so that you can set them to the **pullSecrets** and **repository** parameters when you run the operator for your containers. + + +## Step 2: Prepare your environment for automation software + +Before you install any of the containerized software: + +1. Go to the prerequisites page in the [IBM Cloud Pak for Automation 19.0.x](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_env_k8s.html) Knowledge Center. +2. Follow the instructions on preparing your environment for the software components that you want to install. + + How much preparation you need to do depends on what you want to install and how familiar you are with your environment. + +## Step 3: Create a shared PV and add the JDBC drivers + +1. Create a persistent volume (PV) for the operator. This PV is needed for the JDBC drivers. The following example YAML defines a PV, but PVs depend on your cluster configuration.  
+ ```yaml + apiVersion: v1 + kind: PersistentVolume + metadata: + labels: + type: local + name: operator-shared-pv + spec: + capacity: + storage: 1Gi + accessModes: + - ReadWriteMany + hostPath: + path: "/root/operator" + persistentVolumeReclaimPolicy: Delete + ``` + +2. Deploy the PV. + ```bash + $ oc create -f operator-shared-pv.yaml + ``` + +3. Create a claim for the PV, or check that the PV is bound dynamically, [descriptors/operator-shared-pvc.yaml](../../descriptors/operator-shared-pvc.yaml?raw=true). + + > Replace the storage class if you do not want to create the relevant persistent volume. + + ```yaml + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: operator-shared-pvc + namespace: my-project + spec: + accessModes: + - ReadWriteMany + storageClassName: "" + resources: + requests: + storage: 1Gi + volumeName: operator-shared-pv + ``` + +4. Deploy the PVC. + ```bash + $ oc create -f descriptors/operator-shared-pvc.yaml + ``` + +5. Copy all of the JDBC drivers that are needed by the components you intend to install to the persistent volume. Depending on your storage configuration you might not need these drivers. + + > **Note**: File names for JDBC drivers cannot include additional version information. + - DB2: + - db2jcc4.jar + - db2jcc_license_cu.jar + - Oracle: + - ojdbc8.jar + + The following structure shows an example remote file system. + + ``` + pv-root-dir + + └── jdbc + + ├── db2 + + │ ├── db2jcc4.jar + + │ └── db2jcc_license_cu.jar + + ├── oracle + + │ └── ojdbc8.jar + + ``` + +## Step 4: Deploy the operator manifest files to your cluster + +The Cloud Pak operator has a number of descriptors that must be applied. + - [descriptors/ibm_cp4a_crd.yaml](../../descriptors/ibm_cp4a_crd.yaml?raw=true) contains the description of the Custom Resource Definition. + - [descriptors/operator.yaml](../../descriptors/operator.yaml?raw=true) defines the deployment of the operator code. + - [descriptors/role.yaml](../../descriptors/role.yaml?raw=true) defines the access of the operator. + - [descriptors/role_binding.yaml](../../descriptors/role_binding.yaml?raw=true) defines the access of the operator. + - [descriptors/service_account.yaml](../../descriptors/service_account.yaml?raw=true) defines the identity for processes that run inside the pods of the operator. + +1. Deploy the icp4a-operator on your cluster. + + Use the script [scripts/deployOperator.sh](../../scripts/deployOperator.sh) to deploy these descriptors. + ```bash + $ ./scripts/deployOperator.sh -i /icp4a-operator:19.03 -p '' + ``` + + Where *registry_url* is the value for your internal docker registry or `cp.icr.io/cp/cp4a` for the IBM Cloud Entitled Registry and *my_secret_name* is the secret created to access the registry. + + > **Note**: If you plan to use a non-admin user to install the operator, you must add the user to the `ibm-cp4a-operator` role. For example: + ```bash + $ oc adm policy add-role-to-user ibm-cp4a-operator + ``` + +2. Monitor the pod until it shows a STATUS of *Running*: + ```bash + $ oc get pods -w + ``` + > **Note**: After the operator starts, you can monitor its logs with the following command: + ```bash + $ oc logs -f deployment/ibm-cp4a-operator -c operator + ``` + +## Step 5: Configure the software that you want to install + +A custom resource (CR) YAML file is a configuration file that describes an ICP4ACluster instance and includes the parameters to install some or all of the components. + +1. 
Make a copy of the template custom resource YAML file [descriptors/ibm_cp4a_cr_template.yaml](../../descriptors/ibm_cp4a_cr_template.yaml?raw=true) and name it appropriately for your deployment (for example descriptors/my_icp4a_cr.yaml). + + > **Important:** Use a single custom resource file to include all of the components that you want to deploy with an operator instance. Each time that you need to make an update or modification you must use this same file to apply the changes to your deployments. When you apply a new custom resource to an operator you must make sure that all previously deployed resources are included if you do not want the operator to delete them. + +2. Change the default name of your instance in descriptors/my_icp4a_cr.yaml. + + ```yaml + metadata: + name: + ``` + +3. If you use an internal registry, enter values for the `image_pull_secrets` and `images` parameters in the `shared_configuration` section. + + ```yaml + shared_configuration: + image_pull_secrets: + - + images: + keytool_job_container: + repository: docker-registry.default.svc:5000//dba-keytool-jobcontainer + tag: 19.0.3 + keytool_init_container: + repository: docker-registry.default.svc:5000//dba-keytool-initcontainer + tag: 19.0.3 + pull_policy: IfNotPresent + ``` + + | Parameter | Description | + | ------------------------------- | --------------------------------------------- | + | `keytool_job_container` | Repository from where to pull the keytool_job_container and the corresponding tag | + | `keytool_init_container` | Repository from where to pull the keytool_init_container and the corresponding tag | + | `image_pull_secrets` | Secrets in your target namespace to pull images from the specified repository | + +4. Use the following links to configure the software that you want to install. + + - [Configure IBM Automation Digital Worker](../../ADW/README_config.md) + - [Configure IBM Automation Workstream Services](../../IAWS/README_config.md) + - [Configure IBM Business Automation Application Engine](../../AAE/README_config.md) + - [Configure IBM Business Automation Content Analyzer](../../ACA/README_config.md) + - [Configure IBM Business Automation Insights](../../BAI/README_config.md) + - [Configure IBM Business Automation Navigator](../../BAN/README_config.md) + - [Configure IBM Business Automation Studio](../../BAS/README_config.md) + - [Configure IBM FileNet Content Manager](../../FNCM/README_config.md) + - [Configure IBM Operational Decision Manager](../../ODM/README_config.md) + - [Configure the User Management Service](../../UMS/README_config.md) + +## Step 6: Apply the custom resources + +1. Check that all the components you want to install are configured. + + ```bash + $ cat descriptors/my_icp4a_cr.yaml + ``` + +2. Deploy the configured components by applying the custom resource. + + ```bash + $ oc apply -f descriptors/my_icp4a_cr.yaml + ``` + +## Step 7: Verify that the automation containers are running + +The operator reconciliation loop might take several minutes. + +Monitor the status of your pods with: +```bash +$ oc get pods -w +``` + +When all of the pods are *Running*, you can access the status of your services with the following command. +```bash +$ oc status +``` +You can now expose the services to your users. + +Refer to the [Troubleshooting section](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_trbleshoot_operators.html) to access the operator logs. 
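As an optional convenience, the wait in Step 7 can be scripted. The following is a minimal sketch, not one of the provided scripts; it assumes the `oc` CLI is already logged in to the project where the operator runs and that a 10-minute timeout is acceptable.

```bash
#!/bin/bash
# Poll the current project until every pod reports Running or Completed,
# then print the service overview. Exits non-zero on timeout.
for attempt in $(seq 1 60); do
  not_ready=$(oc get pods --no-headers | awk '$3 != "Running" && $3 != "Completed"' | wc -l)
  if [ "$not_ready" -eq 0 ]; then
    echo "All pods are Running."
    oc status
    exit 0
  fi
  echo "Waiting for $not_ready pod(s) to reach Running (attempt $attempt/60)..."
  sleep 10
done
echo "Timed out; check the operator logs: oc logs -f deployment/ibm-cp4a-operator -c operator" >&2
exit 1
```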
+ +## Step 8: Complete some post-installation steps + +Go to [IBM Knowledge Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_deploy_postdeployk8s.html) to follow the post-installation steps. diff --git a/platform/ocp/migrate.md b/platform/ocp/migrate.md new file mode 100644 index 00000000..b4780d89 --- /dev/null +++ b/platform/ocp/migrate.md @@ -0,0 +1,20 @@ +# Migrating Cloud Pak for Automation data on Red Hat OpenShift + +To migrate your 19.0.x data to 19.0.3, uninstall your current deployment and follow the migration instructions for each component to point to the existing persistent stores. + +## Step 1: Prepare your environment and take note of your existing storage settings + +Use the following links to help you find the relevant software storage settings that you want to migrate. + +- [Configure IBM Business Automation Application Engine](../../AAE/README_migrate.md) +- [Configure IBM Business Automation Content Analyzer](../../ACA/README_migrate.md) +- [Configure IBM Business Automation Insights](../../BAI/README_migrate.md) +- [Configure IBM Business Automation Navigator](../../BAN/README_migrate.md) +- [Configure IBM Business Automation Studio](../../BAS/README_migrate.md) +- [Configure IBM FileNet Content Manager](../../FNCM//README_migrate.md) +- [Configure IBM Operational Decision Manager](../../ODM/README_migrate.md) +- [Configure the User Management Service](../../UMS/README_migrate.md) + +## Step 2: Install your chosen components with the operator + + When you have completed all of the preparation steps for each of the components that you want to migrate, follow the instructions in the [installation](install.md) readme. diff --git a/platform/ocp/uninstall.md b/platform/ocp/uninstall.md new file mode 100644 index 00000000..ee9aa2b4 --- /dev/null +++ b/platform/ocp/uninstall.md @@ -0,0 +1,24 @@ +# Uninstalling Cloud Pak for Automation 19.0.3 on Red Hat OpenShift + +## Delete your automation instances + +You can delete your custom resource (CR) deployments by deleting the CR YAML file or the CR instance. The name of the instance is taken from the value of the `name` parameter in the CR YAML file. The following command is used to delete an instance. + +```bash +  $ oc delete ICP4ACluster +``` + +> **Note**: You can get the names of the ICP4ACluster instances with the following command: + ```bash + $ oc get ICP4ACluster + ``` + +## Delete the operator instance and all associated automation instances + +Use the [`scripts/deleteOperator.sh`](../../scripts/deleteOperator.sh) to delete all the resources that are linked to the operator. + +```bash + $ ./scripts/deleteOperator.sh +``` + +Verify that all the pods created with the operator are terminated and deleted. 
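Before removing the project itself, it can be useful to confirm that nothing operator-related remains. A minimal check (a sketch only, assuming you are still switched to the project that hosted the deployment) might look like this:

```bash
# Anything still listed here means the cleanup has not finished yet.
oc get pods,deployments,statefulsets,jobs

# After deleteOperator.sh has run, this should return a "NotFound" error.
oc get crd icp4aclusters.icp4a.ibm.com
```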
diff --git a/platform/ocp/update.md b/platform/ocp/update.md new file mode 100644 index 00000000..4c6a97ac --- /dev/null +++ b/platform/ocp/update.md @@ -0,0 +1,54 @@ +# Updating Cloud Pak for Automation 19.0.3 on Red Hat OpenShift + +- [Step 1: Modify the software that is installed](update.md#step-1-modify-the-software-that-is-installed) +- [Step 2: Apply the updated custom resources](update.md#step-2-apply-the-updated-custom-resources) +- [Step 3: Verify the updated automation containers](update.md#step-3-verify-the-updated-automation-containers) + +## Step 1: Modify the software that is installed + +An update to the custom resource (CR) overwrites the deployed resources during the operator control loop (observe, analyze, act), which runs continuously as the operator watches the state of the Kubernetes resources. + +Use the following links to configure the software that is already installed. You can modify the installed software, remove it, or add new components. Use the same CR YAML file that you deployed with the operator to make the updates (for example descriptors/my_icp4a_cr.yaml). + +- [Configure IBM Automation Digital Worker](../../ADW/README_config.md) +- [Configure IBM Automation Workstream Services](../../IAWS/README_config.md) +- [Configure IBM Business Automation Application Engine](../../AAE/README_config.md) +- [Configure IBM Business Automation Content Analyzer](../../ACA/README_config.md) +- [Configure IBM Business Automation Insights](../../BAI/README_config.md) +- [Configure IBM Business Automation Navigator](../../BAN/README_config.md) +- [Configure IBM Business Automation Studio](../../BAS/README_config.md) +- [Configure IBM FileNet Content Manager](../../FNCM/README_config.md) +- [Configure IBM Operational Decision Manager](../../ODM/README_config.md) +- [Configure the User Management Service](../../UMS/README_config.md) + +## Step 2: Apply the updated custom resources + +1. Review your CR YAML file to make sure it contains all of your intended modifications. + + ```bash + $ cat descriptors/my_icp4a_cr.yaml + ``` + +2. Run the following command to apply the updates to the operator: + + ```bash + $ oc apply -f descriptors/my_icp4a_cr.yaml --overwrite=true + ``` + +> **Note:** You can also use `oc edit ICP4ACluster ` to open the default UNIX visual editor (vi) in situ. + +## Step 3: Verify the updated automation containers + +The operator reconciliation loop might take several minutes. + +Monitor the status of your pods with: +```bash +$ oc get pods -w +``` + +When all of the pods are *Running*, you can access the status of your services with the following command. +```bash +$ oc status +``` + +Refer to the [Troubleshooting section](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_trbleshoot_operators.html) to access the operator logs. diff --git a/platform/roks/README.md b/platform/roks/README.md new file mode 100644 index 00000000..93d4062a --- /dev/null +++ b/platform/roks/README.md @@ -0,0 +1,14 @@ +# IBM Cloud Pak for Automation 19.0.3 on Managed Red Hat OpenShift on IBM Cloud Public + +Red Hat OpenShift 3.11 is the managed version on IBM Cloud for Cloud Pak for Automation 19.0.3. 
+ +Choose which use case you need with an operator, and then follow the links below to find the right instructions: + +- [Install Cloud Pak for Automation 19.0.3 on IBM Cloud](install.md) +- [Uninstall Cloud Pak for Automation 19.0.3 on IBM Cloud](uninstall.md) +- [Migrate 19.0.x persisted data to 19.0.3 on IBM Cloud](migrate.md) +- [Update Cloud Pak for Automation 19.0.3 on IBM Cloud](update.md) + +Choose to evaluate components: + +- [Install ODM for developers on Managed Red Hat OpenShift on IBM Cloud Public](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/topics/tsk_dev_odm_roks.html) diff --git a/platform/roks/install.md b/platform/roks/install.md new file mode 100644 index 00000000..f0c04d9d --- /dev/null +++ b/platform/roks/install.md @@ -0,0 +1,298 @@ +# Installing Cloud Pak for Automation 19.0.3 on Managed OpenShift on IBM Cloud Public + +- [Step 1: Get access to the container images](install.md#step-1-get-access-to-the-container-images) +- [Step 2: Prepare the cluster for automation software](install.md#step-2-prepare-the-cluster-for-automation-software) +- [Step 3: Create a shared PV and add the JDBC drivers](install.md#step-3-create-a-shared-pv-and-add-the-jdbc-drivers) +- [Step 4: Deploy the operator manifest files to your cluster](install.md#step-4-deploy-the-operator-manifest-files-to-your-cluster) +- [Step 5: Configure the software that you want to install](install.md#step-5-configure-the-software-that-you-want-to-install) +- [Step 6: Apply the custom resources](install.md#step-6-apply-the-custom-resources) +- [Step 7: Verify that the operator and pods are running](install.md#step-7-verify-that-the-operator-and-pods-are-running) +- [Step 8: Complete some post-installation steps](install.md#step-8-complete-some-post-installation-steps) + +## Step 1: Get access to the container images + +From your local machine, you can access the container images in the IBM Docker registry with your IBMid (Option 1), or you can use the downloaded archives from IBM Passport Advantage (PPA) (Option 2). + +1. Go to [Installing containers on Red Hat OpenShift by using CLIs](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/k8s_topics/tsk_prepare_env_ROKS.html) to get access to the container images. You can access the container images in the IBM Docker registry with your IBMid, or you can use the downloaded archives from IBM Passport Advantage (PPA). +2. Log in to your IBM Cloud Kubernetes cluster. In the OpenShift web console menu bar, click your profile *IAM#user.name@email.com* > *Copy Login Command* and paste the copied command into your command line. + ```bash + $ oc login https://: --token= + ``` +3. Run a `kubectl` command to make sure that you have access to Kubernetes. + ```bash + $ kubectl cluster-info + ``` +4. Download or clone the repository on your local machine and change to the `cert-kubernetes` directory. + ```bash + $ git clone git@github.com:icp4a/cert-kubernetes.git + $ cd cert-kubernetes + ``` + There you will find the scripts and Kubernetes descriptors that are necessary to install Cloud Pak for Automation. + +### Option 1: Create a pull secret for the IBM Cloud Entitled Registry + +1. Log in to [MyIBM Container Software Library](https://myibm.ibm.com/products-services/containerlibrary) with the IBMid and password that are associated with the entitled software. + +2. In the **Container software library** tile, click **View library** and then click **Copy key** to copy the entitlement key to the clipboard. 
+ +3. Create a pull secret by running a `kubectl create secret` command. + ```bash + $ kubectl create secret docker-registry --docker-server=cp.icr.io --docker-username=iamapikey --docker-password="" --docker-email= + ``` + + > **Note**: The `cp.icr.io` value for the **docker-server** parameter is the only registry domain name that contains the images. + +4. Take a note of the secret and the server values so that you can set them to the **pullSecrets** and **repository** parameters when you run the operator for your containers. + +### Option 2: Download the packages from PPA and load the images + +[IBM Passport Advantage (PPA)](https://www-01.ibm.com/software/passportadvantage/pao_customer.html) provides archives (.tgz) for the software. To view the list of Passport Advantage eAssembly installation images, refer to the [19.0.3 download document](https://www.ibm.com/support/pages/ibm-cloud-pak-automation-v1903-download-document). + +1. Download one or more PPA packages to a server that is connected to your Docker registry. +2. Check that you can run a docker command. + ```bash + $ docker ps + ``` +3. Log in to the Docker registry with a token. + ```bash + $ docker login $(oc registry info) -u -p $(oc whoami -t) + ``` + + You can also log in to an external Docker registry using the following command: + ```bash + $ docker login -u + ``` +4. Run a `kubectl` command to make sure that you have access to Kubernetes. + ```bash + $ kubectl cluster-info + ``` +5. Run the [`scripts/loadimages.sh`](../../scripts/loadimages.sh) script to load the images into your Docker registry. Specify the two mandatory parameters in the command line. + + ``` + -p PPA archive files location or archive filename + -r Target Docker registry and namespace + -l Optional: Target a local registry + ``` + + The following example shows the input values in the command line on OCP 3.11. On OCP 4.2 the default docker registry is based on the host name, for example "default-route-openshift-image-registry.ibm.com". + + ``` + # scripts/loadimages.sh -p .tgz -r docker-registry.default.svc:5000/my-project + ``` + + > **Note**: The project must have pull request privileges to the registry where the images are loaded. The project must also have pull request privileges to push the images into another namespace/project. + +6. Check that the images are pushed correctly to the registry. + ```bash + $ oc get is + ``` +7. (Optional) If you want to use an external Docker registry, create a Docker registry secret. + + ```bash + $ oc create secret docker-registry --docker-server= --docker-username= --docker-password= --docker-email= + ``` + + Take a note of the secret and the server values so that you can set them to the **pullSecrets** and **repository** parameters when you run the operator for your containers. + + +## Step 2: Prepare the cluster for automation software + +Before you install any of the containerized software: + +1. Follow the instructions on preparing the cluster for the software components that you want to install in the [IBM Cloud Pak for Automation 19.0.x](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_prepare_env_k8s.html) Knowledge Center. + + How much preparation you need to do depends on what you want to install and how familiar you are with the cluster. + +## Step 3: Create a shared PV and add the JDBC drivers + + 1. Create a persistent volume (PV) for the operator. This PV is needed for the JDBC drivers. 
The following example YAML defines a PV, but PVs depend on your cluster configuration. + ```yaml + apiVersion: v1 + kind: PersistentVolume + metadata: + labels: + type: local + name: operator-shared-pv + spec: + capacity: + storage: 1Gi + accessModes: + - ReadWriteMany + hostPath: + path: "/root/operator" + persistentVolumeReclaimPolicy: Delete + ``` + + 2. Deploy the PV. + ```bash + $ oc create -f operator-shared-pv.yaml + ``` + + 3. Create a claim for the PV, or check that the PV is bound dynamically, [descriptors/operator-shared-pvc.yaml](../../descriptors/operator-shared-pvc.yaml?raw=true). + + > Replace the storage class if you do not want to create the relevant persistent volume. + + ```yaml + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: operator-shared-pvc + namespace: my-project + spec: + accessModes: + - ReadWriteMany + storageClassName: "" + resources: + requests: + storage: 1Gi + volumeName: operator-shared-pv + ``` + + 4. Deploy the PVC. + ```bash + $ oc create -f descriptors/operator-shared-pvc.yaml + ``` + + 5. Copy all of the JDBC drivers that are needed by the components you intend to install to the persistent volume. Depending on your storage configuration you might not need these drivers. + + > **Note**: File names for JDBC drivers cannot include additional version information. + - DB2: + - db2jcc4.jar + - db2jcc_license_cu.jar + - Oracle: + - ojdbc8.jar + + The following structure shows an example remote file system. + + ``` + pv-root-dir + + └── jdbc + + ├── db2 + + │ ├── db2jcc4.jar + + │ └── db2jcc_license_cu.jar + + ├── oracle + + │ └── ojdbc8.jar + + ``` + +## Step 4: Deploy the operator manifest files to your cluster + +The Cloud Pak operator has a number of descriptors that must be applied. + - [descriptors/ibm_cp4a_crd.yaml](../../descriptors/ibm_cp4a_crd.yaml?raw=true) contains the description of the Custom Resource Definition. + - [descriptors/operator.yaml](../../descriptors/operator.yaml?raw=true) defines the deployment of the operator code. + - [descriptors/role.yaml](../../descriptors/role.yaml?raw=true) defines the access of the operator. + - [descriptors/role_binding.yaml](../../descriptors/role_binding.yaml?raw=true) defines the access of the operator. + - [descriptors/service_account.yaml](../../descriptors/service_account.yaml?raw=true) defines the identity for processes that run inside the pods of the operator. + +1. Deploy the icp4a-operator on your cluster. + + Use the script [scripts/deployOperator.sh](../../scripts/deployOperator.sh) to deploy these descriptors. + ```bash + $ ./scripts/deployOperator.sh -i /icp4a-operator:19.03 -p '' + ``` + + Where *registry_url* is the value for your internal docker registry or `cp.icr.io/cp/cp4a` for the IBM Cloud Entitled Registry and *my_secret_name* is the secret created to access the registry. + + > **Note**: If you plan to use a non-admin user to install the operator, you must add the user to the `ibm-cp4a-operator` role. For example: + ```bash + $ oc adm policy add-role-to-user ibm-cp4a-operator + ``` + +2. Monitor the pod until it shows a STATUS of *Running*: + ```bash + $ oc get pods -w + ``` + > **Note**: After the operator starts, you can monitor its logs with the following command: + ```bash + $ oc logs -f deployment/ibm-cp4a-operator -c operator + ``` + +## Step 5: Configure the software that you want to install + +A custom resource (CR) YAML file is a configuration file that describes an ICP4ACluster instance and includes the parameters to install some or all of the components. + +1. 
Make a copy of the template custom resource YAML file [descriptors/ibm_cp4a_cr_template.yaml](../../descriptors/ibm_cp4a_cr_template.yaml?raw=true) and name it appropriately for your deployment (for example descriptors/my_icp4a_cr.yaml). + + > **Important:** Use a single custom resource file to include all of the components that you want to deploy with an operator instance. Each time that you need to make an update or modification you must use this same file to apply the changes to your deployments. When you apply a new custom resource to an operator you must make sure that all previously deployed resources are included if you do not want the operator to delete them. + +2. Change the default name of your instance in descriptors/my_icp4a_cr.yaml. + + ```yaml + metadata: + name: + ``` + +3. If you use an internal registry, enter values for the `image_pull_secrets` and `images` parameters in the `shared_configuration` section. + + ```yaml + shared_configuration: + image_pull_secrets: + - + images: + keytool_job_container: + repository: docker-registry.default.svc:5000//dba-keytool-jobcontainer + tag: 19.0.3 + keytool_init_container: + repository: docker-registry.default.svc:5000//dba-keytool-initcontainer + tag: 19.0.3 + pull_policy: IfNotPresent + ``` + + | Parameter | Description | + | ------------------------------- | --------------------------------------------- | + | `keytool_job_container` | Repository from where to pull the keytool_job_container and the corresponding tag | + | `keytool_init_container` | Repository from where to pull the keytool_init_container and the corresponding tag | + | `image_pull_secrets` | Secrets in your target namespace to pull images from the specified repository | + +4. Use the following links to configure the software that you want to install. + + - [Configure IBM Automation Digital Worker](../../ADW/README_config.md) + - [Configure IBM Business Automation Application Engine](../../AAE/README_config.md) + - [Configure IBM Business Automation Content Analyzer](../../ACA/README_config.md) + - [Configure IBM Business Automation Insights](../../BAI/README_config.md) + - [Configure IBM Business Automation Navigator](../../BAN/README_config.md) + - [Configure IBM Business Automation Studio](../../BAS/README_config.md) + - [Configure IBM FileNet Content Manager](../../FNCM/README_config.md) + - [Configure IBM Operational Decision Manager](../../ODM/README_config.md) + - [Configure the User Management Service](../../UMS/README_config.md) + +## Step 6: Apply the custom resources + +1. Check that all the components you want to install are configured. + + ```bash + $ cat descriptors/my_icp4a_cr.yaml + ``` + +2. Deploy the configured components by applying the custom resource. + + ```bash + $ oc apply -f descriptors/my_icp4a_cr.yaml + ``` + +## Step 7: Verify that the operator and pods are running + +The operator reconciliation loop might take several minutes. + +Monitor the status of your pods with: +```bash +$ oc get pods -w +``` + +When all of the pods are *Running*, you can access the status of your services with the following command. +```bash +$ oc status +``` +You can now expose the services to your users. + +Refer to the [Troubleshooting section](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_trbleshoot_operators.html) to access the operator logs. 
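If you prefer a single blocking command over watching the pod list, the check in Step 7 can also be expressed with `oc rollout status`. This is a sketch, not part of the documented procedure; it assumes the operator deployment keeps the default name `ibm-cp4a-operator` and that a 10-minute timeout is acceptable.

```bash
# Block until the operator deployment is fully rolled out, then list the
# services and routes that were created for the configured components.
oc rollout status deployment/ibm-cp4a-operator --timeout=600s
oc get services
oc get routes
```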
+ +## Step 8: Complete some post-installation steps + +Go to [IBM Knowledge Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_deploy_postdeployk8s.html) to follow the post-installation steps. diff --git a/platform/roks/migrate.md b/platform/roks/migrate.md new file mode 100644 index 00000000..658ab562 --- /dev/null +++ b/platform/roks/migrate.md @@ -0,0 +1,20 @@ +# Migrating Cloud Pak for Automation data on Managed Red Hat OpenShift + +To migrate your 19.0.x data to 19.0.3, uninstall your current deployment and follow the migration instructions for each component to point to the existing persistent stores. + +## Step 1: Prepare your environment and take note of your existing storage settings + +Use the following links to help you find the relevant software storage settings that you want to migrate. + +- [Configure IBM Business Automation Application Engine](../../AAE/README_migrate.md) +- [Configure IBM Business Automation Content Analyzer](../../ACA/README_migrate.md) +- [Configure IBM Business Automation Insights](../../BAI/README_migrate.md) +- [Configure IBM Business Automation Navigator](../../BAN/README_migrate.md) +- [Configure IBM Business Automation Studio](../../BAS/README_migrate.md) +- [Configure IBM FileNet Content Manager](../../FNCM//README_migrate.md) +- [Configure IBM Operational Decision Manager](../../ODM/README_migrate.md) +- [Configure the User Management Service](../../UMS/README_migrate.md) + +## Step 2: Install your chosen components with the operator + + When you have completed all of the preparation steps for each of the components that you want to migrate, follow the instructions in the [installation](install.md) readme. diff --git a/platform/roks/uninstall.md b/platform/roks/uninstall.md new file mode 100644 index 00000000..9bc76e8d --- /dev/null +++ b/platform/roks/uninstall.md @@ -0,0 +1,24 @@ +# Uninstalling Cloud Pak for Automation 19.0.3 on Managed Red Hat OpenShift + +## Delete your automation instances + +You can delete your custom resource (CR) deployments by deleting the CR YAML file or the CR instance. The name of the instance is taken from the value of the `name` parameter in the CR YAML file. The following command is used to delete an instance. + +```bash +  $ oc delete ICP4ACluster +``` + +> **Note**: You can get the names of the ICP4ACluster instances with the following command: + ```bash + $ oc get ICP4ACluster + ``` + +## Delete the operator instance and all associated automation instances + +Use the [`scripts/deleteOperator.sh`](../../scripts/deleteOperator.sh) to delete all the resources that are linked to the operator. + +```bash + $ ./scripts/deleteOperator.sh +``` + +Verify that all the pods created with the operator are terminated and deleted. 
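If the project was created only for this deployment, one possible final step (an assumption, not a required part of the uninstall) is to remove the now-empty project:

```bash
# An empty result confirms that the operator cleanup completed.
oc get pods --no-headers

# Remove the project itself; my-project is the namespace used in the install steps.
oc delete project my-project
```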
diff --git a/platform/roks/update.md b/platform/roks/update.md new file mode 100644 index 00000000..329e4995 --- /dev/null +++ b/platform/roks/update.md @@ -0,0 +1,54 @@ +# Updating Cloud Pak for Automation 19.0.3 on Managed Red Hat OpenShift + +- [Step 1: Modify the software that is installed](update.md#step-1-modify-the-software-that-is-installed) +- [Step 2: Apply the updated custom resources](update.md#step-2-apply-the-updated-custom-resources) +- [Step 3: Verify the updated automation containers](update.md#step-3-verify-the-updated-automation-containers) + +## Step 1: Modify the software that is installed + +An update to the custom resource (CR) overwrites the deployed resources during the operator control loop (observe, analyze, act), which runs continuously as the operator watches the state of the Kubernetes resources. + +Use the following links to configure the software that is already installed. You can modify the installed software, remove it, or add new components. Use the same CR YAML file that you deployed with the operator to make the updates (for example descriptors/my_icp4a_cr.yaml). + +- [Configure IBM Automation Digital Worker](../../ADW/README_config.md) +- [Configure IBM Automation Workstream Services](../../IAWS/README_config.md) +- [Configure IBM Business Automation Application Engine](../../AAE/README_config.md) +- [Configure IBM Business Automation Content Analyzer](../../ACA/README_config.md) +- [Configure IBM Business Automation Insights](../../BAI/README_config.md) +- [Configure IBM Business Automation Navigator](../../BAN/README_config.md) +- [Configure IBM Business Automation Studio](../../BAS/README_config.md) +- [Configure IBM FileNet Content Manager](../../FNCM/README_config.md) +- [Configure IBM Operational Decision Manager](../../ODM/README_config.md) +- [Configure the User Management Service](../../UMS/README_config.md) + +## Step 2: Apply the updated custom resources + +1. Review your CR YAML file to make sure it contains all of your intended modifications. + + ```bash + $ cat descriptors/my_icp4a_cr.yaml + ``` + +2. Run the following command to apply the updates to the operator: + + ```bash + $ oc apply -f descriptors/my_icp4a_cr.yaml --overwrite=true + ``` + +> **Note:** You can also use `oc edit ICP4ACluster ` to open the default UNIX visual editor (vi) in situ. + +## Step 3: Verify the updated automation containers + +The operator reconciliation loop might take several minutes. + +Monitor the status of your pods with: +```bash +$ oc get pods -w +``` + +When all of the pods are *Running*, you can access the status of your services with the following command. +```bash +$ oc status +``` + +Refer to the [Troubleshooting section](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.install/op_topics/tsk_trbleshoot_operators.html) to access the operator logs. diff --git a/scripts/checkDeadLinks.sh b/scripts/checkDeadLinks.sh deleted file mode 100755 index ad374d20..00000000 --- a/scripts/checkDeadLinks.sh +++ /dev/null @@ -1,18 +0,0 @@ -#!/bin/bash - -# Collect all dead links in a file -find . 
-name \*.md -exec markdown-link-check -c ./scripts/config-check-broken-links.json {} \; 2>/dev/null | egrep "[✖]" > broken.txt - -# Count the number of lines, extract only that number -n_broken=`wc broken.txt --lines | cut -f 1 -d " "` - -if [[ $n_broken > 0 ]] -then - echo "Number of broken files: "$n_broken - cat broken.txt - rm broken.txt - exit $n_broken -fi - -rm broken.txt - diff --git a/scripts/config-check-broker-links.json b/scripts/config-check-broker-links.json deleted file mode 100644 index 37531cd0..00000000 --- a/scripts/config-check-broker-links.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "ignorePatterns": [ - { - "pattern": "^http://.*example.com" - }, - { - "pattern": "^http://.*endsp" - } - - ], - "replacementPatterns": [ - - ], - "httpHeaders": [ - - ] -} \ No newline at end of file diff --git a/scripts/deleteOperator.sh b/scripts/deleteOperator.sh new file mode 100755 index 00000000..877721b1 --- /dev/null +++ b/scripts/deleteOperator.sh @@ -0,0 +1,19 @@ +#!/bin/bash +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. +# +############################################################################### +kubectl delete -f descriptors/operator.yaml +kubectl delete -f descriptors/role_binding.yaml +kubectl delete -f descriptors/role.yaml +kubectl delete -f descriptors/service_account.yaml + +kubectl patch crd/icp4aclusters.icp4a.ibm.com -p '{"metadata":{"finalizers":[]}}' --type=merge +kubectl delete crd icp4aclusters.icp4a.ibm.com +echo "All descriptors have been successfully deleted." diff --git a/scripts/deployOperator.sh b/scripts/deployOperator.sh new file mode 100755 index 00000000..f41b82d7 --- /dev/null +++ b/scripts/deployOperator.sh @@ -0,0 +1,67 @@ +#!/bin/bash +############################################################################### +# +# Licensed Materials - Property of IBM +# +# (C) Copyright IBM Corp. 2019. All Rights Reserved. +# +# US Government Users Restricted Rights - Use, duplication or +# disclosure restricted by GSA ADP Schedule Contract with IBM Corp. +# +############################################################################### + +function show_help { + echo -e "\nUsage: deployOperator.sh -i operator_image [-p 'secret_name']\n" + echo "Options:" + echo " -h Display help" + echo " -i Operator image name" + echo " For example: cp.icr.io/cp/icp4a-operator:19.03 or registry_url/icp4a-operator:version" + echo " -p Optional: Pull secret to use to connect to the registry" +} + +if [[ $1 == "" ]] +then + show_help + exit -1 +else + while getopts "h?i:p:" opt; do + case "$opt" in + h|\?) + show_help + exit 0 + ;; + i) IMAGEREGISTRY=$OPTARG + ;; + p) PULLSECRET=$OPTARG + ;; + :) echo "Invalid option: -$OPTARG requires an argument" + show_help + exit -1 + ;; + esac + done +fi + +echo "Using the operator image $IMAGEREGISTRY." +[ -f ./deployoperator.yaml ] && rm ./deployoperator.yaml +cp ./descriptors/operator.yaml ./deployoperator.yaml +if [ ! -z ${IMAGEREGISTRY} ]; then + # Change the location of the image + echo "Using the operator image name: $IMAGEREGISTRY" + sed -e "s|image: .*|image: \"$IMAGEREGISTRY\" |g" ./deployoperator.yaml > ./deployoperatorsav.yaml ; mv ./deployoperatorsav.yaml ./deployoperator.yaml +fi + +# Change the pullSecrets if needed +if [ ! 
-z ${PULLSECRET} ]; then + echo "Setting pullSecrets to $PULLSECRET" + sed -e "s|admin.registrykey|$PULLSECRET|g" ./deployoperator.yaml > ./deployoperatorsav.yaml ; mv ./deployoperatorsav.yaml ./deployoperator.yaml +else + sed -e '/imagePullSecrets:/{N;d;}' ./deployoperator.yaml > ./deployoperatorsav.yaml ; mv ./deployoperatorsav.yaml ./deployoperator.yaml +fi + +kubectl apply -f ./descriptors/ibm_cp4a_crd.yaml --validate=false +kubectl apply -f ./descriptors/service_account.yaml --validate=false +kubectl apply -f ./descriptors/role.yaml --validate=false +kubectl apply -f ./descriptors/role_binding.yaml --validate=false +kubectl apply -f ./deployoperator.yaml --validate=false +echo "All descriptors have been successfully applied. Monitor the pod status with 'oc get pods -w' in the namespace $NAMESPACE." diff --git a/scripts/loadimages.sh b/scripts/loadimages.sh old mode 100755 new mode 100644