From cbaf30651d764c92842718501f3c2565228a0694 Mon Sep 17 00:00:00 2001 From: Francois Trible Date: Mon, 27 Apr 2020 14:00:58 +0200 Subject: [PATCH] Release 20.0.1.1 --- AAE/README_migrate.md | 2 +- AAE/configuration/sample_min_value.yaml | 2 +- ACA/README_config.md | 346 +++-------------------- ACA/README_upgrade.md | 11 +- FNCM/README_config.md | 2 +- IAWS/configuration/sample_min_value.yaml | 25 +- LICENSE | 154 ++++++---- ODM/README_config.md | 4 +- demo/install_pattern_ocp.md | 2 + descriptors/ibm_cp4a_cr_template.yaml | 6 +- platform/k8s/install.md | 4 +- platform/ocp/install.md | 6 +- platform/roks/install.md | 7 +- scripts/cp4a-deployment.sh | 30 ++ 14 files changed, 199 insertions(+), 402 deletions(-) diff --git a/AAE/README_migrate.md b/AAE/README_migrate.md index d03de780..d3ebc214 100644 --- a/AAE/README_migrate.md +++ b/AAE/README_migrate.md @@ -29,6 +29,6 @@ Reuse the existing App Engine database. Update the database configuration inform ## Step 6: Migrate IBM Business Automation Navigator from 19.0.2 to 20.0.1 to verify your apps -Following the IBM Business Automation Navigator migration instructions(We should add a link to the Navigator migration instructions,once navigator migration link is ready), migrate Business Automation Navigator from 19.0.2 to 20.0.1. Then, test your apps. +Following the [IBM Business Automation Navigator migration instructions](../BAN/README_migrate.md), migrate Business Automation Navigator from 19.0.2 to 20.0.1. Then, test your apps. diff --git a/AAE/configuration/sample_min_value.yaml b/AAE/configuration/sample_min_value.yaml index 4dcbc14d..fcb8d902 100644 --- a/AAE/configuration/sample_min_value.yaml +++ b/AAE/configuration/sample_min_value.yaml @@ -8,7 +8,7 @@ spec: application_engine_configuration: ## The application_engine_configuration is a list. You can deploy multiple instances of App Engine and assign different configurations for each instance. 
## For each instance, application_engine_configuration.name and application_engine_configuration.name.hostname must have different values. - - name: ae_instance1 + - name: ae-instance1 hostname: port: 443 admin_secret_name: ae-secret-credential diff --git a/ACA/README_config.md b/ACA/README_config.md index f45beddc..4a1e8f39 100644 --- a/ACA/README_config.md +++ b/ACA/README_config.md @@ -3,279 +3,60 @@ ## Introduction -This readme provide instruction to deploy IBM Business Automation Content Analyzer with IBM® Cloud Pak for Automation platform. IBM Business Automation Content Analyzer offers the power of intelligent capture with the flexibility of an API that enables you to extend the value of your core enterprise content management (ECM) technology stack and helps you rapidly accelerate extraction and classification of data in your documents. +This readme provides instructions for deploying IBM Business Automation Content Analyzer with the IBM® Cloud Pak for Automation platform. IBM Business Automation Content Analyzer offers the power of intelligent capture with the flexibility of an API that enables you to extend the value of your core enterprise content management (ECM) technology stack and helps you rapidly accelerate extraction and classification of data in your documents. -Requirements +Requirements to Prepare Your Environment ------------ -### Step 1 - Create DB2 databases for Content Analyzer - -Note: For development or testing purposes, you may skip this step and move onto "Step 2 - Initialize the Content Analyzer Base database" if you prefer for the Content Analyzer scripts to create the database for you. - -1. Follow the instructions in the IBM DB2 Knowledge Center documentation to create DB2 databases for the following: - - Content Analyzer Base database. Make a note of the Base database name for later steps. - - Content Analyzer Tenant database. Only one tenant is required by Content Analyzer, but multiple tenants are also supported.
If multiple tenants are desired, create one DB2 database per Content Analyzer Tenant. Make a note of the Tenant database name(s) for later steps. - -2. Here are the minimum requirements for the databases: - - For performance reasons, IBM recommends that you create table spaces using automatic storage, rather than database managed or system managed table spaces. - - Set the DB2 codeset to UTF-8. - - Set the page size to 32 KB. - -### Step 2 - Initialize the Content Analyzer Base database -1. Copy the DB2 [folder](/~https://github.com/icp4a/cert-kubernetes/tree/master/operator/ACA/configuration-ha/DB2) to your IBM DB2 server -2. From the DB2 folder, run the `InitBaseDB.sh` script on the DB2 server to initialize the Base database. (If your DB2 is on Windows, use `InitBaseDB.bat`.) (Please run as db2inst1 user or a user with privileges to run the DB2 command line and admin privileges for the Base database.) -(Note: For development or testing purposes, if you prefer for the Content Analyzer scripts to create the database for you, then run the `CreateBaseDB.sh` script instead of `InitBaseDB.sh`) -3. As prompted, enter the following data: - - Enter the name of the Content Analyzer Base database created in Step 1. - - Enter the name of database user with read and write privileges for the Content Analyzer Base database -4. When you configure the role variables in your CR, specify this database in the role variables: `datasource_configuration->dc_ca_datasource->database_servername`, `datasource_configuration->dc_ca_datasource->database_name`, and `datasource_configuration->dc_ca_datasource->database_port` - -### Step 3 - Initialize the Content Analyzer Tenant database(s) -1. From the DB2 folder, run the `InitTenantDB.sh` script on the DB2 server to initialize the tenant database. (If your DB2 is on Windows, use `InitTenantDB.bat`.) (Please run as db2inst1 user or a user with privileges to run the DB2 command line and admin privileges for the Tenant database.) 
-(Note: For development or testing purposes, if you prefer for the Content Analyzer scripts to create the DB2 database for you, then run the `AddTenant.sh` script instead of `InitTenantDB.sh`) -2. When prompted, enter the following parameters: - - Enter the tenant ID (an alphanumeric URL-safe string that is used by the user to reference the tenant). - - For tenant type, please enter `0` for Enterprise. - - Enter the name of the Content Analyzer Tenant database created in Step 1. - - For the data source name (DSN), please accept the default, which is the name of the Content Analyzer Tenant database created in Step 1. - - For DB2 SSL communication, please hit enter to accept default of `No`. DB2 SSL communication is not supported in current release of Content Analyzer. - - Enter the name of the database user to access the Tenant database. - - Enter the password for the database user. - - Enter the tenant ontology name. Press Enter to accept 'default' or enter a name to reference the ontology by, if desired. The ontology name must be alphanumeric and URL-safe. - - Enter the name of the Content Analyzer Base database created in Step 1. - - Enter the name of the Content Analyzer Base database user. - - The following prompts are for the initial login user that will be created for Content Analyzer: - - Enter the company name (e.g. your company name.) - - Enter the first name of the user (e.g. enter your first name) - - Enter the last name (e.g. enter your last name) - - Enter a valid email address (e.g. enter your email address) - - Enter the login name (if you use LDAP authentication, enter your user name as it appears in the LDAP server) - - Would you like to continue – y (for yes) - - Save the tenantID and Ontology name for the later steps. -3. When you configure the role variables in your CR, specify the tenant database name(s) in the role variable `tenant_databases`.
For example:
`tenant_databases:`
` - t01db`
` - t02db` - - -### Step 4 - Optional - DB2 High-Availability -1. Optionally, if DB2 HADR (High Availability Disaster Recovery) is desired, follow the instructions in the IBM DB2 Knowledge Center documentation for DB2 HADR setup. -2. The DB2 HADR setup for the Content Analyzer databases must occur AFTER after initializing the schemas for Base database and Tenant database (i.e. Step 2 and Step 3 above). -3. DB2 ACR (automatic client reroute) is required for Content Analyzer to work with DB2 HADR. (KC link to Db2 ACR: https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011558.html) -4. If your are using DB2 databases that are HADR enabled for Content Analyzer, your must configure at least these 2 variables (see the "Role Variables" section below) -`datasource_configuration->dc_ca_datasource->dc_hadr_standby_servername` and `datasource_configuration->dc_ca_datasource->dc_hadr_standby_port`. - - -### Step 5 - Create prerequisite resources for IBM Business Automation Content Analyzer - -1. Create at least 3 PVCs for Content Analyzer:

- a) Log PVC: The recommended minimum size is 50GB. Record the name of the PVC under `ca_configuration->global->logs->claimname` section of the CR - b) Config PVC: The recommended minimum size is 20GB. Record the name of the PVC under `ca_configuration->global->configs->claimname` section of the CR - c) Data PVC: The recommended minimum size is 60GB. Record the name of the PVC under `ca_configuration->global->data->claimname` section of the CR. Record the name of the PVC under`ca_configuration->global->mongo->claimname`, and `ca_configuration->global->mongoadmin->claimname` if you plan to share the PVC with Mongo and Mongos Admin DB.

- - OPTIONAL: - - You can create four (4) additional PVCs for Mongo and MongoAdmin DB, then record the name of the PVC under`ca_configuration->global->mongo->configdb_claimname`, `ca_configuration->global->mongo->shard_claimname`, `ca_configuration->global->mongoadmin->admin_shard_claimname` and `ca_configuration->global->mongoadmin->admin_configdb_claimname` section of the CR.

The recommended sizes for the PVC are 60 GB.

- Otherwise, you can share the data pvc (`ca_configuration->global->data->claimname`) for Mongo. However, you must increase the size of the data pvc to 300GB in this case. - - Below is the sample of the PV/PVC using NFS -``` -apiVersion: v1 -kind: PersistentVolume -metadata: - name: sp-data-pv-caop -spec: - accessModes: - - ReadWriteMany - capacity: - storage: 60Gi - nfs: - path: /exports/smartpages/caop/data - server: 192.168.1.100 - persistentVolumeReclaimPolicy: Retain ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: sp-data-pvc - namespace: caop -spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 60Gi - volumeName: sp-data-pv-caop - -``` - - d) Grant permission to the PVC directories. Assuming you have the following directory structures for your PVCs - -``` -├── caop -│   ├── config -│   ├── data -│   └── log - - -chown -Rf 51000:0 caop/ -chgrp -R 0 caop/ -chmod -R g=u caop/ -``` - - -2. Label the worker nodes. - - Content Analyzer will only deploy on nodes that have specific labeling. The nodes should be labeled as `celery=aca`, `mongo=aca`, `mongo-admin=aca` - (where `` is the name of the namespace that Content Analyzer will be deployed on). - For example: You would run the following command to label the nodes if the namespace is `sp`. - - ``` - kubectl label nodes {node1.ibm.com,node2.ibm.com,node3.ibm.com} {celerysp=aca,mongosp=aca,mongo-adminsp=aca} - ``` - - `node1.ibm.com`, `node2.ibm.com`, and `node3.ibm.com` are the node names you want to label. - - - We recommend to dedicate 3 worker nodes for Mongo and MongoAdmin for high volume environment. In this case, the worker nodes should be labeled as followed: - - ``` - kubectl label nodes {node1.ibm.com,node2.ibm.com,node3.ibm.com} {celerysp=aca} - - kubectl label nodes {node4.ibm.com,node5.ibm.com,node6.ibm.com} {mongosp=aca,mongo-adminsp=aca} - ``` - - -3. 
Create the docker secret for registry and update this information in the Content Analyzer section of CRD yaml. - -``` -kubectl -n <{KUBE_NAME_SPACE}> create secret docker-registry <{DOCKER_REG_SECRET_NAME}> --docker-server=<{DOCKER_REG_FOR_SERVICES}> --docker-username=<{DOCKER_USER}> --docker-password=<{DOCKER_PWD_DECODED}> --docker-email=' - -``` +### NOTE: +Verify the latest release of IBM Business Automation Content Analyzer with IBM® Cloud Pak for Automation platform in IBM Fix Central or Entitlement Registry and use that release for deployment. +For example: There is a new version of IBM Business Automation Content Analyzer with IBM® Cloud Pak for Automation platform, 20.0.1-ifix1, for the 20.0.1 release. For deployment, edit the CR yaml file. In the `ca_configuration` section, use `20.0.1-ifix1` as the value for the `tag` parameter. -where: +### Step 1 - Preparing users for Content Analyzer -- `<{KUBE_NAME_SPACE}>`: The namespace. For example: caop +Content Analyzer users need to be configured on the LDAP server. See [Preparing users for Content Analyzer](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bacak8s_usergroups.html) for detailed instructions. -- `<{DOCKER_REG_SECRET_NAME}>`: Name of secret. For example: ca-docker-secret +### Step 2 - Create DB2 databases for Content Analyzer -- `<{DOCKER_REG_FOR_SERVICES}>`: Docker registry server name. For example: default-route-openshift-image-registry.apps.myserver.os.fyre.ibm.com +For development or testing purposes, you can skip this step and move to "Step 3 - Initialize the Content Analyzer Base database" if you prefer for the Content Analyzer scripts to create the database for you. -- `<{DOCKER_USER}>`: Docker registry user +See [Create the Db2 database](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bacak8s_createdb2.html) for detailed instructions. -- `<{DOCKER_PWD_DECODED}>`: Docker registry password. 
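The ifix tag override described in the NOTE above amounts to a one-line change in the CR yaml file. A minimal sketch, assuming the parameter layout shown in this readme's `ca_configuration` table (the repository value is a placeholder):

```
ca_configuration:
  # ...other Content Analyzer parameters...
  repository: cp.icr.io/cp/cp4a/baca   # placeholder: use your actual image repository
  tag: 20.0.1-ifix1                    # the ifix release named in the NOTE above
  pull_policy: IfNotPresent
```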
+### Step 3 - Initialize the Content Analyzer Base database -4. Create the SCC, role, rolebinding and network policy for Content Analyzer by: +If you do not have a Db2® database set up, do so now. -- Copy the security [folder](/~https://github.com/icp4a/cert-kubernetes/tree/master/operator/ACA/configuration-ha/security) locally. +See [Initializing the Content Analyzer Base database](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bacak8s_db.html) for detailed instructions. -- Run the following commands: +### Step 4 - Initialize the Content Analyzer Tenant database(s) -``` -export KUBE_NAME_SPACE=<{KUBE_NAME_SPACE}> -sed -i.bak s#\$KUBE_NAME_SPACE#"$KUBE_NAME_SPACE"# ./aca-netpol.yaml -sed -i.bak s#\$KUBE_NAME_SPACE#"$KUBE_NAME_SPACE"# ./aca-rolebinding.yaml -kubectl apply -f aca-netpol.yaml -kubectl apply -f aca-scc.yaml --validate=false -kubectl apply -f aca-rolebinding.yaml -oc adm policy add-scc-to-group aca-scc system:serviceaccounts:{KUBE_NAME_SPACE} +If you do not have a tenant database, set up a Db2 tenant database. -``` +See [Initializing the Tenant database](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bacak8s_dbtenant.html) for detailed instructions. -where: +### Step 5 - Optional - DB2 High-Availability -- `<{KUBE_NAME_SPACE}>`: The namespace's name. For example: caop - -5. Optionally, create a K8 secret for the LDAP credentials for Content Analyzer if User Management Services (UMS) integration is not enabled. +You can set up a Db2 High Availability Disaster Recovery (HADR) database. -``` -kubectl create secret generic aca-ldap \ ---from-literal=LDAP_PASSWORD="$LDAP_PASSWORD" \ ---from-literal=LDAP_DN="$LDAP_DN" -``` - -where: - -- `$LDAP_DN` is the fully qualified DN for the LDAP bind user -- `$LDAP_PASSWORD` is the LDAP bind user password. 
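The `aca-ldap` secret created above with `kubectl create secret generic` can also be expressed as a manifest. This is a sketch with placeholder credential values; `stringData` lets you supply the literals unencoded:

```
apiVersion: v1
kind: Secret
metadata:
  name: aca-ldap
type: Opaque
stringData:
  LDAP_DN: "cn=admin,dc=example,dc=com"   # placeholder: fully qualified DN of the LDAP bind user
  LDAP_PASSWORD: "changeme"               # placeholder: LDAP bind user password
```

Apply it with `kubectl apply -f` in the Content Analyzer namespace.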
+See [Setting up Db2 High-Availability](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.install/op_topics/tsk_prepare_cadb2ha.html) for detailed instructions. +### Step 6 - Create prerequisite resources for IBM Business Automation Content Analyzer -6. Create a K8 secret for the DB2 credentials for Content Analyzer - -``` -kubectl create secret generic aca-basedb \ ---from-literal=BASE_DB_USER="$BASE_DB_USER" \ ---from-literal=BASE_DB_PWD="$BASE_DB_PWD" -``` +Set up and configure storage to prepare for the container configuration and deployment. You set up permissions to PVC directories, label worker nodes, create the docker secret, create security, and enable SSL communication for LDAP if necessary. -where: +See [Configuring storage and the environment](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.install/op_topics/tsk_prepare_bacak8s_storage.html) for detailed instructions. -- `$BASE_DB_USER` is the user for the Content Analyzer's base database (created in Step 1) -- `$BASE_DB_PWD` is the user password for the Content Analyzer's Base database. +### Step 7 - Configuring the CR YAML file - +Update the custom YAML file to provide the details that are relevant to your IBM Business Automation Content Analyzer and your decisions for the deployment of the container. -7. Optionally, if you want to enable SSL communication for LDAP, set the following variables in the CR yaml file. +NOTE: Review this [technote](https://www.ibm.com/support/pages/node/6178437) if you deploy Content Analyzer on ROKS. -- Set `ldap_configuration -> lc_ldap_ssl_enabled ` to `true` -- Set `lc_ldap_cert_name` to the name of the LDAP's private certificate. - - Please create a "CA" subfolder in the Operator's PVC folder if it does not exist and copy the LDAP public certificate to the "CA" subfolder under Operator's PVC. -- Set `lc_ldap_self_signed_crt` to `"true"` or `"false"`. `"true"` indicates it is a self-signed cert. 
+See [Content Analyzer parameters](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.ref/k8s_topics/ref_k8sca_operparams.html) for detailed instructions. -Role Variables --------------- -### Replace the following variables in the CR yaml file. - -| Parameter | Description | Values | -|--- |--- |--- | -|ldap_configuration->lc_ldap_server| IP address or hostname of LDAP server. | For example: `192.168.1.100`| -|ldap_configuration->lc_ldap_port| LDAP port| For example: `389`| -|ldap_configuration->lc_ldap_base_dn| LDAP search base DN |For example: `dc=example,dc=com`| -|ldap_configuration->lc_ldap_ssl_enabled| Whether or not you want to enable SSL communication between Content Analyzer and LDAP. Additional steps are needed to enable LDAP SSL. Please instructions above.| `true` or `false` | -|ldap_configuration->lc_ldap_cert_name| If using SSL for LDAP, the name of the LDAP SSL certificate if LDAP SSL is enabled | For example: `ldap.crt` | -|ldap_configuration->ca_ldap_configuration->lc_ldap_self_signed_crt| If using SSL for LDAP, specify whether the certificate is self-sign or not| `"true"` or `"false"`| -|ldap_configuration->ca_ldap_configuration->lc_user_filter| LDAP User search filter. | For example on SDS: `"(&(cn={{username}})(objectclass=person))"`. Actual user name will be substituted for {{username}}

{{username}} substitution variable must be formatted as {{ '{{' }}username{{ '}}'}}

Default: (&(cn={{ '{{' }}username{{ '}}'}})(objectclass=person)) | -|datasource_configuration->dc_ca_datasource->database_servername| Name of the DB2 server that hosts Content Analyzer's databases | -|datasource_configuration->dc_ca_datasource->database_name| Content Analyzer's Base DB name| For example: BASECA | -|datasource_configuration->dc_ca_datasource->database_port| DB2 port| For example: 50000 | -|datasource_configuration->dc_ca_datasource->tenant_databases| List of 1 or more tenant databases as configured above in `Step 3 - Initialize the Content Analyzer Tenant database(s)` | For example:
`tenant_databases:`
` - t01db`
` - t02db`| -|datasource_configuration->dc_ca_datasource->dc_hadr_standby_servername| If using DB2 HADR, provide the DB2 standby server name or IP address| -|datasource_configuration->dc_ca_datasource->dc_hadr_standby_port| If using DB2 HADR, provide the DB2 standby server's port | -|datasource_configuration->dc_ca_datasource->dc_hadr_retry_interval_for_client_reroute| Optional. If using DB2 HADR, optionally provide the retry internal for client reroute in seconds. If not given, default is 2.| -|datasource_configuration->dc_ca_datasource->dc_hadr_max_retries_for_client_reroute| Optional. If using DB2 HADR, provide the maximum number of retries for client reroute. If not given, default is 30. | -|shared_configuration->trusted_certificate_list| Add `aca-backend-secret`, and `aca-frontend-secret` to the list if BAS is enabled| For example: trusted_certificate_list: [aca-backend-secret,aca-frontend-secret] | - - - -### Replace the following variables in the CR yaml file under the "ca_configuration" section. - -| Parameter | Description | Values | -|--- |--- |--- | -|service_type| The service type you want to use for communication (eg:NodePort or Route). `Route` will be used if Content Analyzer is deployed on OCP. See `Post Deployment` section below for more information on `Route` |`Route` or `NodePort` | -|frontend_external_hostname| The unique, external facing hostname for Content Analyzer's frontend (eg: `www.ca.frontendsp`) when `service_type: "Route"`. See `Post Deployment` section below for more information on `Route`| Leave blank if `service_type` is set to `NodePort`| -|backend_external_hostname|The unique, external facing hostname for Content Analyzer's backend (eg: `www.ca.backensp`) when `service_type: "Route"`. See `Post Deployment` section below for more information on `Route` | Leave blank if `service_type` is set to `NodePort`| -|ldap_secret| The ldap secret name created in Step 5 above. 
| Default `aca-ldap` if blank| -|db_secret| The database secret name created in Step 6 above. | Default: `aca-basedb` if blank| -|repository| The repository for docker images| A valid, reachable repository name -|tag| Content Analyzer's build | `20.0.1` | -|pull_policy| Docker image pull policy | Recommend to leave default as `IfNotPresent` | -|pull_secrets| Docker registry secret name created in step 3 of the `Create prerequisite resources for IBM Business Automation Content Analyzer` section | | -|authentication_type| Select the authentication type. 0: Non-ldap, not support in Production, 1: LDAP, 2: IBM User Management Service integration| Default is 1| -|retries| The number retries to determine if the deployment of Content Analyzer is successful or not. There is a 20 seconds delay between every retry | Default is 90| -|bas->bas_enabled| Enable BA Studio. (true or false). Note, that you must choose Authentication_type = 2 in order to enable BA Studio | default is "false" | -|celery->process_timeout| Timeout for Content Analyzer's ocr_extraction, classifyprocess, processing, updatefiledetail components| Default value is 300 seconds -|configs->claimname| The PVC name for storing configuration files created in the Step 1 of `Create prerequisite resources for IBM Business Automation Content Analyzer`|For example:`"sp-config-pvc"`| -|logs->claimname| The PVC name for storing log files created in the Step 1 of `Create prerequisite resources for IBM Business Automation Content Analyzer`|For example:`"sp-log-pvc"`| -|data->claimname| The PVC name for storing data files created in the Step 1 of `Create prerequisite resources for IBM Business Automation Content Analyzer` |For example:`"sp-data-pvc"`| -|mongo->configdb_claimname| The PVC name for storing Mongo's configuration database created in the Step 1 of `Create prerequisite resources for IBM Business Automation Content Analyzer`|For example: `sp-config`| -|mongo->shard_claimname|The PVC name for storing Mongo's shard 
database created in the Step 1 of `Create prerequisite resources for IBM Business Automation Content Analyzer` -|mongoadmin->admin_configdb_claimname|The PVC name for storing MongoAdmin's configuration database created in the Step 1 of `Create prerequisite resources for IBM Business Automation Content Analyzer`|| -|mongoadmin->admin_shard_claimname|The PVC name for storing MongoAdmin's shard database created in the Step 1 of `Create prerequisite resources for IBM Business Automation Content Analyzer`|| -|replica_count| The replica count for each of Content Analyzer's sub components. | NOTE: The minimum `replica_count` for redis, rabbitmq, mongo, and mongoadmin is 2.| -|spfrontend->backend_host| Leave this value to blank if service type is `Route`. Domain name or IP used in URL to access backend if service type is `NodePort`| | - -NOTE: Content Analyzer is designed to be flexible such that you can increase the performance by increasing: -1) `ca_configuration -> -> replicas`: You can increase CA's components replicas to increase throughput if your environment has enough resources. The recommendation is 1 component per node. Note that increasing the number of replicas may not increase the response time (eg: The time it takes to process a page from end-to-end) -2) `ca_configuration ->limits->cpu`: You can increase the CA's components CPU limit to improve the response time. - - -Deployment +### Step 8 - Deployment ----------- 1) Once all the required parameters have been filled out for Content Analyzer, the CR can be applied by @@ -285,84 +66,27 @@ oc -n apply -f ``` where: -`ns`: The namespace name where you want to install Content Analyzer. -`CR yaml`: The CR yaml name. +`ns` is the namespace name where you want to install Content Analyzer. +`CR yaml` is the CR yaml name. -2) Operator container will deploy Content Analyzer. For more information about Operator, please refer to +2) The Operator container will deploy Content Analyzer. 
For more information about Operator, refer to /~https://github.com/icp4a/cert-kubernetes/tree/20.0.1/ - Post Deployment -------------- ## Post Deployment steps for route (OpenShift) setup -You can also deploy IBM Business Automation Content Analyzer using an OpenShift route as the ingress point to expose the frontend and backend services via an externally-reachable, unique hostname such www.backend.example.com and www.frontend.example.com. -A defined route and the endpoints identified by its service can be consumed by a router to provide named connectivity that allows external clients to reach your applications. - -1) Access backend endpoint to accept certificate using the URL: `https://` -`backend_external_hostname` is defined in the CR yaml file under `ca_configuration` section - - **Note**: If the content **WORKS** appears in the page, it means the backend route is working. - -2) Access frontend endpoint to accept certificate using the URL: `https:///?tid=&ont= ` - -where: - -``: As defined in `ca_configuration->global->frontend_external_hostname` -``: The tenantID when creating the tenant DB -``: The ontology name when adding the ontology. - +You can deploy IBM Business Automation Content Analyzer by using an OpenShift route as the ingress point to provide frontend and backend services through an externally reachable, unique hostname such as www.backend.example.com and www.frontend.example.com. A defined route and the endpoints, which are identified by its service, can be consumed by a router to provide named connectivity that allows external clients to reach your applications. +See [Configuring an OpenShift route](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.install/op_topics/tsk_postcadeploy_routeOS.html) for detailed instructions. 
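For reference, an OpenShift route of the kind this section describes has roughly the following shape. This is a sketch only, with placeholder names and hostnames (the service name follows the `spfrontend-svc` naming used elsewhere in this readme, and the hostname corresponds to `frontend_external_hostname` in the CR):

```
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: spfrontend
spec:
  host: www.frontend.example.com   # placeholder: frontend_external_hostname from the CR
  to:
    kind: Service
    name: spfrontend-svc           # placeholder: the frontend service name
  tls:
    termination: passthrough       # TLS is terminated by the Content Analyzer pod
```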
## Post Deployment steps for NodePort (Non OpenShift) setup -1) Modify your LoadBalancer (eg: HAProxy) in the K8's cluster to route the request to the specific node port if you set `service_type` to `NodePort` -2) Modify the /etc/haproxy.cfg for Content Analyzer's frontend and backend to forward to the master nodes like this: - -``` -frontend spfrontend-svc - bind *:32195 - default_backend spfrontend-svc - mode tcp - option tcplog -backend spfrontend-svc - balance source - mode tcp - server master0 10.16.7.130:32195 check -frontend spbackend-svc - bind *:30044 - default_backend spbackend-svc - mode tcp - option tcplog -backend spbackend-svc - balance source - mode tcp - server master0 10.16.7.130:30044 check -``` - - - - `32195`: is the NodePort of Content Analyzer's frontend service. You can obtain the port number by issuing the following command `kubectl get svc |grep spfrontend` - - - `30044`: is the NodePort of Content Analyzer's backend service. You can obtain the port number by issuing the following command `kubectl get svc |grep spbackend` - - - `master0 10.16.7.130`: is the master node name and IP address. - -3) Verify all the pods are up and running by `kubectl get pods` - -4) Access the Content Analyzer URL by: - -https://:/?tid=&ont= - -where: -``: As defined in `ca_configuration->spfrontend->backend_host` -``: See step 2 above. -``: The tenantID when creating the tenant DB -``: The ontology name when adding the ontology. - - +You can modify your LoadBalancer, like the HAProxy, in the Kubernetes cluster to route the request to a specific node port. +See [Configuring routing to a node port](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.install/op_topics/tsk_postcadeploy_nodeport_NOS.html) for detailed instructions. ## Troubleshooting @@ -380,7 +104,7 @@ kubectl logs deployment/ibm-cp4a-operator -c ansible > Ansible.log ### Post install: -- Content Analyzer logs are located in the log pvc. 
Logs are separated into sub-folders based on the component names. +- Content Analyzer logs are located in the log pvc. Logs are separated into sub-folders based on the component names. ``` ├── backend diff --git a/ACA/README_upgrade.md b/ACA/README_upgrade.md index cfd6d7f8..5955e359 100644 --- a/ACA/README_upgrade.md +++ b/ACA/README_upgrade.md @@ -19,13 +19,12 @@ Upgrade from Content Analyzer 19.0.2 to 20.0.1 is not supported. - Back up your Content Analyzer's base database and tenant database. - Copy the `DB2` [folder](/~https://github.com/icp4a/cert-kubernetes/tree/master/ACA/configuration-ha) to the Db2 server. - Run the `UpgradeTenantDB.sh` from your database server as `db2inst1` user. -- Set the ObjectType feature flag for the tenant by running this SQL in your Content Analyzer's base database. Replace the values of `` and ``. -``` -set schema -update tenantinfo set FEATUREFLAGS=(4 | (select FEATUREFLAGS from tenantinfo where TENANTID='' and ONTOLOGY='')) where TENANTID='' and ONTOLOGY='' -``` -- Change the schema version of tenant to 1.4 by running this SQL in your Content Analyzer's base database. Replace the values of `` and ``. +- Set the ObjectType feature flag and change the schema version flag to 1.4 for the tenant by doing the following for your Content Analyzer's base database. + 1. Start the DB2 commandline by running the `db2` command. + 2. On the DB2 commandline, connect to your Content Analyzer base database as the base database user. + 3. On the DB2 commandline, run the following SQL statements (replace the values of `` and `` with the actual values for your instance). ``` +update tenantinfo set FEATUREFLAGS=(4 | (select FEATUREFLAGS from tenantinfo where TENANTID='' and ONTOLOGY='')) where TENANTID='' and ONTOLOGY='' update tenantinfo set TENANTDBVERSION=1.4 where TENANTID='' and ONTOLOGY='' ``` - Fill out the CR yaml file supplied with 20.0.1 using the same values as the previous deployment. 
Note that you should use the same number of replicas for mongo/mongo-admin as in the 19.0.3 deployment (e.g. 3). diff --git a/FNCM/README_config.md b/FNCM/README_config.md index 4852eacc..9b033c32 100644 --- a/FNCM/README_config.md +++ b/FNCM/README_config.md @@ -66,7 +66,7 @@ If you want to exclude any components from your deployment, leave the section fo All FileNet Content Manager components require that you deploy the Content Platform Engine container. For that reason, you must complete the values for that section in all deployment use cases. -For a more focused YAML file that contains the default value for each FileNet Content Manager parameter, see the [fncm_ban_sample_cr.yaml](/fncm_ban_sample_cr.yaml). You can use this shorter sample resource file to compile all the values you need for your FileNet Content Manager environment, then copy the sections into the [ibm_cp4a_cr_template.yaml](../descriptors/ibm_cp4a_cr_template.yaml) file before you deploy. +For a more focused YAML file that contains the default value for each FileNet Content Manager parameter, see the [fncm_ban_sample_cr.yaml](configuration/fncm_ban_sample_cr.yaml). You can use this shorter sample resource file to compile all the values you need for your FileNet Content Manager environment, then copy the sections into the [ibm_cp4a_cr_template.yaml](../descriptors/ibm_cp4a_cr_template.yaml) file before you deploy. 
A description of the configuration parameters is available in [Configuration reference for operators](https://www.ibm.com/support/knowledgecenter/SSYHZ8_19.0.x/com.ibm.dba.ref/k8s_topics/ref_cm_paramsop.html) diff --git a/IAWS/configuration/sample_min_value.yaml b/IAWS/configuration/sample_min_value.yaml index 245eb1b5..43d26014 100644 --- a/IAWS/configuration/sample_min_value.yaml +++ b/IAWS/configuration/sample_min_value.yaml @@ -6,27 +6,28 @@ spec: appVersion: 20.0.1 iaws_configuration: - name: instance1 - wfs: + iaws_server: service_type: "Route" + workstream_server_secret: ibm-iaws-server-secret hostname: port: 443 replicas: 1 admin_user: image: repository: cp.icr.io/cp/cp4a/iaws/iaws-server - tag: 19.0.3 + tag: 20.0.1 pullPolicy: IfNotPresent pfs_bpd_database_init_job: repository: cp.icr.io/cp/cp4a/iaws/pfs-bpd-database-init-prod - tag: 19.0.3 + tag: 20.0.1 pullPolicy: IfNotPresent upgrade_job: repository: cp.icr.io/cp/cp4a/iaws/iaws-server-dbhandling - tag: 19.0.3 + tag: 20.0.1 pullPolicy: IfNotPresent ibm_workplace_job: repository: cp.icr.io/cp/cp4a/iaws/iaws-ibm-workplace - tag: 19.0.3 + tag: 20.0.1 pull_policy: IfNotPresent database: ssl: false @@ -47,7 +48,7 @@ spec: content_integration: init_job_image: repository: cp.icr.io/cp/cp4a/iaws/iaws-ps-content-integration - tag: 19.0.3 + tag: 20.0.1 pull_policy: IfNotPresent appengine: hostname: @@ -59,7 +60,7 @@ spec: jms: image: repository: cp.icr.io/cp/cp4a/iaws/jms - tag: 19.0.3 + tag: 20.0.1 pull_policy: IfNotPresent tls: tls_secret_name: dummy-jms-tls-secret @@ -107,7 +108,7 @@ spec: service_type: Route image: repository: cp.icr.io/cp/cp4a/iaws/pfs-prod - tag: 19.0.3 + tag: 20.0.1 pull_policy: IfNotPresent liveness_probe: initial_delay_seconds: 60 @@ -170,7 +171,7 @@ spec: dba_resource_registry: image: repository: cp.icr.io/cp/cp4a/aae/dba-etcd - tag: latest + tag: 20.0.1 pull_policy: IfNotPresent lease_ttl: 120 pfs_check_interval: 10 @@ -188,15 +189,15 @@ spec: elasticsearch: es_image: repository: 
cp.icr.io/cp/cp4a/iaws/pfs-elasticsearch-prod - tag: 19.0.3 + tag: 20.0.1 pull_policy: IfNotPresent pfs_init_image: repository: cp.icr.io/cp/cp4a/iaws/pfs-init-prod - tag: 19.0.3 + tag: 20.0.1 pull_policy: IfNotPresent nginx_image: repository: cp.icr.io/cp/cp4a/iaws/pfs-nginx-prod - tag: 19.0.3 + tag: 20.0.1 pull_policy: IfNotPresent replicas: 1 service_type: NodePort diff --git a/LICENSE b/LICENSE index 11f8cc48..b4f3323d 100644 --- a/LICENSE +++ b/LICENSE @@ -1,11 +1,11 @@ -The translated license terms can be viewed here: http://www14.software.ibm.com/cgi-bin/weblap/lap.pl?li_formnum=L-ASAY-BLWDHU +The translated license terms can be viewed here: http://www14.software.ibm.com/cgi-bin/weblap/lap.pl?li_formnum=L-ASAY-BNFHX2 LICENSE INFORMATION The Programs listed below are licensed under the following License Information terms and conditions in addition to the Program license terms previously agreed to by Client and IBM. If Client does not have previously agreed to license terms in effect for the Program, the International Program License Agreement (Z125-3301-14) applies. Program Name (Program Number): -IBM Cloud Pak for Automation 20.0.1 (5737-I23) +IBM Cloud Pak for Automation SR1 20.0.1 (5737-I23) The following standard terms apply to Licensee's use of the Program. @@ -21,6 +21,10 @@ Prohibited Uses Licensee may not use or authorize others to use the Program if failure of the Program could lead to death, bodily injury, or property or environmental damage. +License Terms delivered with Program Not Applicable + +The terms of this Agreement supersede and void any electronic "click through," "shrinkwrap," or other licensing terms and conditions included with or accompanying the Program(s). + Multi-Product Install Image The Program is provided as part of a multi-product install image. 
Licensee is authorized to install and use only the Program (and its Bundled or Supporting Programs, if any) for which a valid entitlement is obtained and may not install or use any of the other software included in the image unless Licensee has acquired separate entitlements for that other software. @@ -105,10 +109,9 @@ sshpass 1.0 yum-plugin-gastertmirrow 1.1 Red Hat Universal Base Image 7 Red Hat Universal Base Image 8 -Red Hat Openshift Container Platform 3.11†or later versions +Red Hat Openshift Container Platform 3.11 or later versions font-awesome icons 4.7 collectd-java 4.7 -caniuse-lite 1.0 dbus 1.10 inotify-tools 3.14 Red Hat Enterprise Linux 7 @@ -121,6 +124,7 @@ Debian GNU/Linux 8 Ubuntu 16 Alpine Linux 3 libonig 2 5.9 +caniuse-lite 1.0.3 Privacy @@ -128,7 +132,7 @@ Licensee acknowledges and agrees that IBM may use cookie and tracking technologi Source Components and Sample Materials -The Program may include some components in source code form ("Source Components") and other materials identified as Sample Materials. Licensee may copy and modify Source Components and Sample Materials for internal use only provided such use is within the limits of the license rights under this Agreement, provided however that Licensee may not alter or delete any copyright information or notices contained in the Source Components or Sample Materials. IBM provides the Source Components and Sample Materials without obligation of support and "AS IS", WITH NO WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING THE WARRANTY OF TITLE, NON-INFRINGEMENT OR NON-INTERFERENCE AND THE IMPLIED WARRANTIES AND CONDITIONS OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. +The Program may include some components in source code form ("Source Components") and other materials identified as Sample Materials. 
Licensee may copy and modify Source Components and Sample Materials for internal use only provided such use is within the limits of the license rights under this Agreement; provided, however, that Licensee may not alter or delete any copyright information or notices contained in the Source Components or Sample Materials. IBM provides the Source Components and Sample Materials without obligation of support and "AS IS", WITH NO WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING THE WARRANTY OF TITLE, NON-INFRINGEMENT OR NON-INTERFERENCE AND THE IMPLIED WARRANTIES AND CONDITIONS OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Technology Preview Code @@ -162,6 +166,20 @@ For each physical Server, Licensee must have sufficient entitlements for the les In addition to the above, the following terms apply to Licensee's use of the Program. +Infrequent User + +Infrequent User is a unit of measure by which the Program can be licensed. Any Infrequent Users given access to the Program requires an entitlement. An Infrequent User is an Authorized User who accesses the Program not more than one hundred twenty (120) times in any consecutive 12 month period. A single access is comprised of one or more interactions between the Infrequent User and the Program or actions performed on behalf of the Infrequent User by the Program, all within a consecutive 15 minute period. + +Licensee must track accesses by Infrequent Users to verify that they meet the access limitations of Infrequent Users. Licensee agrees to provide to IBM and its auditors details of the tracking mechanism described above, upon request. + +Employee User + +Employee User is a unit of measure by which the Program can be licensed. 
An Employee User is a unique person employed in Licensee's Enterprise, whether or not given access to the Program, or a unique person otherwise paid by or acting on behalf of Licensee's Enterprise who is given access to the Program in any manner directly or indirectly (for example: via a multiplexing program, device, or application server) through any means. An entitlement for an Employee User is unique to that Employee User and may not be shared, nor may it be reassigned other than for the permanent transfer of the entitlement to another person. Licensee must acquire sufficient employee user entitlements to cover all employees and any other unique persons paid by or acting on behalf of Licensee's Enterprise. + +External User + +External User is a unit of measure by which the Program can be licensed. An External User is a unique person, not employed in, paid by, or acting on behalf of Licensee's Enterprise, who is given access to the Program in any manner directly or indirectly (for example: via a multiplexing program, device, or application server) through any means. A person who is employed in or paid by Licensee's Enterprise, but is not accessing the Program within the scope of that relationship may be an External User. An entitlement for an External User is unique to that External User and may not be shared, nor may it be reassigned other than for the permanent transfer of the entitlement to another person. 
+ Supporting Program Details IBM DB2 Standard Edition @@ -175,18 +193,18 @@ IBM DB2 Standard Edition Additional IBM DB2 Standard Edition Details The Supporting Program may use a maximum of 16 processor cores and 128 GB of memory on each physical or virtual server; however, if the Supporting Program is used on a cluster of servers configured to work together using database partitioning or other permitted clustering technology, the Supporting Program may use a maximum of 16 processor cores and 128 GB of memory across all virtual or physical servers in that cluster. + Permitted Components Notwithstanding any provision in the Agreement, Licensee is permitted to use only the following components or functions of the identified Supporting Program: - IBM WebSphere Application Server Network Deployment only for use in support of the following Bundled Programs: IBM FileNet Content Manager, IBM FileNet Content Manager for Non-Production Environment, IBM Datacap, IBM Enterprise Records, IBM Business Automation Workflow Enterprise, IBM Business Automation Workflow Enterprise for Non-Production Environment, IBM Operational Decision Manager Server, IBM Operational Decision Manager Server for Non-Production Environment. -Components Not Used for Establishing Required Entitlements +Components Not Used for Establishing Required Entitlements When determining the number of entitlements required for Licensee's installation or use of the Program, the installation or use of the following Program components are not taken into consideration. In other words, Licensee may install and use the following Program components, under the license terms, but these components are not used to determine the number of entitlements required for the Program. 
- - IBM Business Automation Studio -- IBM Business Automation Navigator +- IBM Business Automation Navigator - IBM Business Automation Application Designer - IBM Business Automation Application Engine when used in Non-Production - IBM Automation Digital Worker when used in Non-Production @@ -195,83 +213,101 @@ When determining the number of entitlements required for Licensee's installation Entitlement Conversion Details -These Entitlement Conversion Details outline the entitlement conversion options. Licensee is entitled to the below entitlement conversion options in any deployment combination of Licensee's choosing and may choose to convert entitlements between the listed programs below at any time provided that the sum of Licensee's deployments do not exceed the total amount of Licensee's entitlements obtained for the Program. Licensee is not entitled to use entitlements obtained of the Program for any other purpose. +These Entitlement Conversion Details outline the entitlement conversion options. Licensee is entitled to the below entitlement conversion options in any deployment combination of Licensee's choosing and may choose to convert entitlements between the listed programs below at any time provided that the sum of Licensee's deployments do not exceed the total amount of Licensee's entitlements obtained for the Program. Licensee is not entitled to use entitlements obtained of the Program for any other purpose. -Unless otherwise indicated, Licensee may deploy and use any then currently supported version or release of the listed programs. If support for a deployed version or release of a listed program is subsequently no longer being made available, support will not be available through this Program either. While Licensee may choose to continue to use that deployed version or release, Subscription and Support (S&S) for this Program will not actually provide support for the unsupported version or release of the listed program. 
+Unless otherwise indicated, Licensee may deploy and use any then currently supported version or release of the listed programs. If support for a deployed version or release of a listed program is subsequently no longer being made available, support will not be available through this Program either. While Licensee may choose to continue to use that deployed version or release, Subscription and Support (S&S) for this Program will not actually provide support for the unsupported version or release of the listed program. -Depending on the agreements between IBM and the Licensee, Licensee may have committed that when obtaining S&S they would do so for all uses and installations of an IBM Program. For the purposes of any such commitment, the individual Bundled Programs are the IBM Programs subject to that S&S commitment and to the extent that Licensee is obligated to acquire S&S for a Bundled Program, Licensee can satisfy that obligation as to the entitlements obtained under this Program by maintaining S&S for this Program as a whole. +Depending on the agreements between IBM and the Licensee, Licensee may have committed that when obtaining S&S they would do so for all uses and installations of an IBM Program. For the purposes of any such commitment, the individual Bundled Programs are the IBM Programs subject to that S&S commitment and to the extent that Licensee is obligated to acquire S&S for a Bundled Program, Licensee can satisfy that obligation as to the entitlements obtained under this Program by maintaining S&S for this Program as a whole. 
-Entitlement Values +Entitlement Values -Business Automation Application Engine (Component of the Program) -- Entitlement Value: Conversion 1 VPC/ 1VPC +Business Automation Application Engine (Component of the Program) +- Conversion Entitlement Ratio: 1 VPC/ 1 VPC -Business Automation Insights (Component of the Program) -- Entitlement Value: Conversion 1 VPC/ 1VPC +Business Automation Insights (Component of the Program) +- Conversion Entitlement Ratio: 1 VPC/ 1 VPC - IBM Automation Digital Worker (Component of the Program) -- Entitlement Value: Conversion 1 VPC/ 1VPC + IBM Automation Digital Worker (Component of the Program) +- Conversion Entitlement Ratio: 1 VPC/ 1 VPC IBM FileNet Content Manager -- Entitlement Value: Conversion 1 VPC/ 5VPCs +- Conversion Entitlement Ratio: 1 VPC/ 5 VPCs or any one or any combination of any of the user measurements below: +- Conversion Entitlement Ratio: 10 Concurrent User/ 1 VPC +- Conversion Entitlement Ratio: 18 Authorized User/ 1 VPC +- Conversion Entitlement Ratio: 36 Employee User/ 1 VPC +- Conversion Entitlement Ratio: 180 Infrequent User/ 1 VPC +- Conversion Entitlement Ratio: 3579 External User/ 1 VPC -IBM FileNet Content Manager for Non-Production Environment -- Entitlement Value: Conversion 2 VPCs/ 5VPCs +IBM FileNet Content Manager for Non-Production Environment +- Conversion Entitlement Ratio: 2 VPCs/ 5 VPCs - Use Limitation: Non-Production IBM Business Automation Workflow Enterprise -- Entitlement Value: Conversion 1 VPC/ 5VPCs +- Conversion Entitlement Ratio: 1 VPC/ 5 VPCs or any one or any combination of any of the user measurements below: +- Conversion Entitlement Ratio: 5 Concurrent User/ 1 VPC +- Conversion Entitlement Ratio: 9 Authorized User/ 1 VPC +- Conversion Entitlement Ratio: 18 Employee User/ 1 VPC +- Conversion Entitlement Ratio: 90 Infrequent User/ 1 VPC +- Conversion Entitlement Ratio: 1782 External User/ 1 VPC -IBM Business Automation Workflow Enterprise for Non-Production Environment -- Entitlement
Value: Conversion 2 VPCs/ 5VPCs +IBM Business Automation Workflow Enterprise for Non-Production Environment +- Conversion Entitlement Ratio: 2 VPCs/ 5 VPCs - Use Limitation: Non-Production - -IBM Automation Workstream Services -- Entitlement Value: Conversion 1 VPC/ 5VPCs - + +IBM Automation Workstream Services +- Conversion Entitlement Ratio: 1 VPC/ 5 VPCs + IBM Operational Decision Manager Server -- Entitlement Value: Conversion 1 VPC/ 5VPCs +- Conversion Entitlement Ratio: 1 VPC/ 5 VPCs IBM Operational Decision Manager Server for Non-Production Environment -- Entitlement Value: Conversion 2 VPCs/ 5VPCs +- Conversion Entitlement Ratio: 2 VPCs/ 5 VPCs - Use Limitation: Non-Production -Business Automation Content Analyzer (Component of the Program) -- Entitlement Value: Conversion 1 VPC/ 1VPC +Business Automation Content Analyzer (Component of the Program) +- Conversion Entitlement Ratio: 1 VPC/ 1 VPC Business Automation Content Analyzer (Component of the Program) -- Entitlement Value: Conversion 2 VPC/ 1VPC -- Use Limitation: Non-Production +- Conversion Entitlement Ratio: 2 VPC/ 1 VPC +- Use Limitation: Non-Production IBM Datacap Processor Value Unit -- Entitlement Value: Conversion 1 VPC/ 2VPC +- Conversion Entitlement Ratio: 1 VPC/ 2 VPC IBM Datacap for Non-Production Environment Processor Value Unit -- Entitlement Value: Conversion 1 VPC/ 1VPC -- Use Limitation: Non-Production +- Conversion Entitlement Ratio: 1 VPC/ 1 VPC +- Use Limitation: Non-Production IBM Datacap Insight Edition Add-On Processor Value Unit -- Entitlement Value: Conversion 1 VPC/ 2VPC +- Conversion Entitlement Ratio: 1 VPC/ 2 VPC -IBM Datacap Insight Edition Add-on for Non-Production Environment Processor Value Unit -- Entitlement Value: Conversion 1 VPC/ 1VPC -- Use Limitation: Non-Production +IBM Datacap Insight Edition Add-on for Non-Production Environment Processor Value Unit +- Conversion Entitlement Ratio: 1 VPC/ 1 VPC +- Use Limitation: Non-Production IBM Content Collector for Email, 
Files & Sharepoint -- Entitlement Value: Conversion 1 VPC/ 3VPC +- Conversion Entitlement Ratio: 1 VPC/ 3 VPC +- Conversion Entitlement Ratio: 200 Authorized User/ 1 VPC +- Conversion Entitlement Ratio: 499 Employee User/ 1 VPC +- Conversion Entitlement Ratio: 2000 Infrequent User/ 1 VPC +- Conversion Entitlement Ratio: 40900 External User/ 1 VPC IBM Content Collector for Email, Files & Sharepoint for Non-Production -- Entitlement Value: Conversion 2 VPC/ 3VPC -- Use Limitation: Non-Production +- Conversion Entitlement Ratio: 2 VPC/ 3 VPC +- Use Limitation: Non-Production IBM Content Collector for SAP -- Entitlement Value: Conversion 1 VPC/ 3VPC +- Conversion Entitlement Ratio: 1 VPC/ 3 VPC +- Conversion Entitlement Ratio: 36 Authorized User/ 1 VPC +- Conversion Entitlement Ratio: 72 Employee User/ 1 VPC +- Conversion Entitlement Ratio: 360 Infrequent User/ 1 VPC +- Conversion Entitlement Ratio: 6860 External User/ 1 VPC IBM Content Collector for SAP for Non-Production -- Entitlement Value: Conversion 2 VPC/ 3VPC +- Conversion Entitlement Ratio: 2 VPC/ 3 VPC - Use Limitation: Non-Production - -"Conversion n/m" means that Licensee can convert some number ('n') entitlements of the indicated metric for the Bundled Program for every specified number ('m') entitlements of the specified metric for the Program. The specified conversion does not apply to any entitlements for the Program that are not of the required metric type. For example, if the conversion ratio is 100 entitlements of a Bundled Program for every 500 entitlements obtained of the Program and Licensee acquires 1,500 entitlements of the Program, Licensee may convert those 1,500 entitlements into 300 entitlements of the Bundled Program, allowing the Licensee to use the Bundled Program up to the 300 entitlements.
+ +"Conversion Entitlement Ratio n/m" means that Licensee can convert some number ('n') entitlements of the indicated metric for the listed program for every specified number ('m') entitlements of the specified metric for the Program. Once converted, Licensee may only use such converted entitlements for the listed program. The specified conversion does not apply to any entitlements for the Program that are not of the required metric type. For example, if the conversion is 100 entitlements of a listed program for every 500 entitlements obtained of the Program and Licensee acquires 1,500 entitlements of the Program, Licensee may convert those 1,500 entitlements into 300 entitlements of the listed program, allowing the Licensee to use the listed program up to the 300 entitlements. "Non-Production" means that the Bundled Program can only be deployed as part of Licensee's internal development and test environment for internal non-production activities, including but not limited to testing, performance tuning, fault diagnosis, internal benchmarking, staging, quality assurance activity and/or developing internally used additions or extensions to the Program using published application programming interfaces. Licensee is not authorized to use any part of the Bundled Program for any other purposes without acquiring the appropriate production entitlements. @@ -279,20 +315,20 @@ Red Hat Products Red Hat Products (as listed below) are licensed separately and are supported by IBM only when used in support of the Program and only while Licensee has Software Subscription and Support in effect for the Program. In addition, Licensee agrees that its use of and support for the Red Hat Products are subject to the following terms (https://www.redhat.com/en/about/agreements). 
-Red Hat Universal Base Image -- Entitlement: Ratio 1 VPC/ 1 VPC - +Red Hat Universal Base Image +- Additional Entitlement Ratio: 1 VPC/ 1 VPC + Red Hat Enterprise Linux -- Entitlement Ratio: 1 VPC / 1 VPC - +- Additional Entitlement Ratio: 1 VPC / 1 VPC + Red Hat OpenShift Container Platform -- Entitlement Ratio: 1 VPC / 1 VPC - -"Ratio n/m" means that Licensee receives some number ('n') entitlements of the indicated metric for the identified program for every specified number ('m') entitlements of the specified metric for the Program as a whole. The specified ratio does not apply to any entitlements for the Program that are not of the required metric type. The number of entitlements for the identified program is rounded up to a multiple of 'n'. For example, if a Program includes 100 PVUs for an identified program for every 500 PVUs obtained of the Principal Program and Licensee acquires 1,200 PVUs of the Program, Licensee may install the identified program and have processor cores available to or managed by it of up to 300 PVUs. Those PVUs would not need to be counted as part of the total PVU requirement for Licensee's installation of the Program on account of the installation of the identified program (although those PVUs might need to be counted for other reasons, such as the processor cores being made available to other components of the Program, as well). +- Additional Entitlement Ratio: 1 VPC / 1 VPC + + "Additional Entitlement Ratio n/m" means that Licensee receives some number ('n') entitlements of the indicated metric for the identified program for every specified number ('m') entitlements of the specified metric for the Program as a whole. The specified ratio does not apply to any entitlements for the Program that are not of the required metric type. The number of entitlements for the identified program is rounded up to a multiple of 'n'. 
For example, if a Program includes 100 PVUs for an identified program for every 500 PVUs obtained of the Principal Program and Licensee acquires 1,200 PVUs of the Program, Licensee may install the identified program and have processor cores available to or managed by it of up to 300 PVUs. Those PVUs would not need to be counted as part of the total PVU requirement for Licensee's installation of the Program on account of the installation of the identified program (although those PVUs might need to be counted for other reasons, such as the processor cores being made available to other components of the Program, as well). -L/N: L-ASAY-BLWDHU -D/N: L-ASAY-BLWDHU -P/N: L-ASAY-BLWDHU +L/N: L-ASAY-BNFHX2 +D/N: L-ASAY-BNFHX2 +P/N: L-ASAY-BNFHX2 International Program License Agreement Part 1 - General Terms diff --git a/ODM/README_config.md b/ODM/README_config.md index 0309714c..5af9e641 100644 --- a/ODM/README_config.md +++ b/ODM/README_config.md @@ -68,5 +68,5 @@ If you customized the default user registry, you must synchronize the registry w [Synchronizing users and groups in Decision Center](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.offerings/topics/tsk_synchronize_users.html). You might need to update an ODM deployment after it is installed. Use the following tasks in IBM Knowledge Center to update a deployment whenever you need, and as many times as you need. 
- * [Customizing JVM arguments](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.managing/op_topics/tsk_set_jvmargs.html) - * [Customizing log levels](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.managing/op_topics/tsk_odm_custom_logging.html) + * [Customizing JVM arguments](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.offerings/op_topics/tsk_set_jvmargs.html) + * [Customizing log levels](https://www.ibm.com/support/knowledgecenter/SSYHZ8_20.0.x/com.ibm.dba.offerings/op_topics/tsk_odm_custom_logging.html) diff --git a/demo/install_pattern_ocp.md b/demo/install_pattern_ocp.md index 5217c9ab..3256cf2c 100644 --- a/demo/install_pattern_ocp.md +++ b/demo/install_pattern_ocp.md @@ -2,6 +2,8 @@ This repository includes folders and resources to help you install the Cloud Pak for Automation software for demonstration purposes on Red Hat OpenShift Cloud Platform (OCP) 3.11. +> **Restriction**: You cannot install the patterns on an IBM Managed Red Hat OpenShift 3.11 cluster, also called Red Hat OpenShift Kubernetes Service (ROKS). + To install a pattern with the Cloud Pak operator, an OCP administrator user must run a script to set up a cluster and work with a non-administrator user to help them run a deployment script. Each pattern has a single Cloud Pak capability, a list of optional components that can be installed, as well as Db2 and OpenLDAP if they are needed. > **Note**: The scripts can only be used on a Linux-based operating system: Red Hat (RHEL), CentOS, and macOS.
diff --git a/descriptors/ibm_cp4a_cr_template.yaml b/descriptors/ibm_cp4a_cr_template.yaml index 9ae94311..c6f09b45 100644 --- a/descriptors/ibm_cp4a_cr_template.yaml +++ b/descriptors/ibm_cp4a_cr_template.yaml @@ -1185,7 +1185,7 @@ spec: # tls: # tls_trust_list: [] - resource_registry_configuration: + #resource_registry_configuration: # admin_secret_name: resource-registry-admin-secret # hostname: # port: @@ -1441,7 +1441,7 @@ spec: # tls: # tls_trust_list: [] - iaws_configuration: + #iaws_configuration: # - name: instance1 # iaws_server: # service_type: "Route" @@ -1788,7 +1788,7 @@ # claimname: "sp-config-pvc" # logs: # claimname: "sp-log-pvc" -# log_level: "debug" +# log_level: "info" # data: # claimname: "sp-data-pvc" # redis: diff --git a/platform/k8s/install.md b/platform/k8s/install.md index f514dc35..ccc5cc40 100644 --- a/platform/k8s/install.md +++ b/platform/k8s/install.md @@ -36,7 +36,7 @@ Before you go to Step 2, make sure that your entitled container images are avail 1. Log in to [MyIBM Container Software Library](https://myibm.ibm.com/products-services/containerlibrary) with the IBMid and password that are associated with the entitled software. -2. In the **Container software library** tile, click **View library** and then click **Copy key** to copy the entitlement key to the clipboard. +2. In the **Container software library** tile, verify your entitlement on the **View library** page, and then go to **Get entitlement key** to retrieve the key. 3. Create a pull secret by running a `kubectl create secret` command. ```bash @@ -44,6 +44,8 @@ Before you go to Step 2, make sure that your entitled container images are avail ``` > **Note**: The `cp.icr.io` value for the **docker-server** parameter is the only registry domain name that contains the images. + + > **Note**: Use `cp` for the docker-username. The docker-email must be a valid email address (associated with your IBMid).
Make sure that you copy the entitlement key into the docker-password field within double quotation marks. 4. Take a note of the secret and the server values so that you can set them to the **pullSecrets** and **repository** parameters when you run the operator for your containers. diff --git a/platform/ocp/install.md b/platform/ocp/install.md index 5f09ccb3..b98742f7 100644 --- a/platform/ocp/install.md +++ b/platform/ocp/install.md @@ -5,7 +5,7 @@ - [Step 3: Create a shared PV and add the JDBC drivers](install.md#step-3-create-a-shared-pv-and-add-the-jdbc-drivers) - [Step 4: Deploy the operator manifest files to your cluster](install.md#step-4-deploy-the-operator-manifest-files-to-your-cluster) - [Step 5: Configure the software that you want to install](install.md#step-5-configure-the-software-that-you-want-to-install) -- [Step 6: Apply the custom resources](install.md#step-6-apply-the-custom-resources) +- [Step 6: Apply the custom resource](install.md#step-6-apply-the-custom-resource) - [Step 7: Verify that the automation containers are running](install.md#step-7-verify-that-the-automation-containers-are-running) - [Step 8: Complete some post-installation steps](install.md#step-8-complete-some-post-installation-steps) @@ -36,7 +36,7 @@ From your local machine, you can access the container images in the IBM Docker r 1. Log in to [MyIBM Container Software Library](https://myibm.ibm.com/products-services/containerlibrary) with the IBMid and password that are associated with the entitled software. -2. In the **Container software library** tile, click **View library** and then click **Copy key** to copy the entitlement key to the clipboard. +2. In the **Container software library** tile, verify your entitlement on the **View library** page, and then go to **Get entitlement key** to retrieve the key. 3. Create a pull secret by running a `kubectl create secret` command.
```bash @@ -44,6 +44,8 @@ From your local machine, you can access the container images in the IBM Docker r ``` > **Note**: The `cp.icr.io` value for the **docker-server** parameter is the only registry domain name that contains the images. + + > **Note**: Use `cp` for the docker-username. The docker-email must be a valid email address (associated with your IBMid). Make sure that you copy the entitlement key into the docker-password field within double quotation marks. 4. Take a note of the secret and the server values so that you can set them to the **pullSecrets** and **repository** parameters when you run the operator for your containers. diff --git a/platform/roks/install.md b/platform/roks/install.md index c29ed313..e9c671e0 100644 --- a/platform/roks/install.md +++ b/platform/roks/install.md @@ -67,8 +67,9 @@ If you do not already have a cluster, then create one. From the [IBM Cloud Overv ```bash $ kubectl create secret docker-registry admin.registrykey --docker-server=cp.icr.io --docker-username=iamapikey --docker-password="" --docker-email= ``` - - > **Note**: The `cp.icr.io` value for the **docker-server** parameter is the only registry domain name that contains the images. The `cp.icr.io` and `cp` values for the **docker-server** and **docker-username** parameters must be used. + > **Note**: The `cp.icr.io` value for the **docker-server** parameter is the only registry domain name that contains the images. + + > **Note**: Use `cp` for the docker-username. The docker-email must be a valid email address (associated with your IBMid). Make sure that you copy the entitlement key into the docker-password field within double quotation marks. 4. Take a note of the secret and the server values so that you can set them to the **pullSecrets** and **repository** parameters when you run the operator for your containers. @@ -295,7 +296,7 @@ If you do not already have a cluster, then create one. From the [IBM Cloud Overv 4.
Use the following links to configure the software that you want to install. - [Configure IBM Automation Digital Worker](../../ADW/README_config.md) - - [Configure IBM Automation Workstream Services](../../IAWS/README_config_ROKS.md) + - [Configure IBM Automation Workstream Services](../../IAWS/README_config.md) - [Configure IBM Business Automation Application Engine](../../AAE/README_config.md) - [Configure IBM Business Automation Content Analyzer](../../ACA/README_config.md) - [Configure IBM Business Automation Insights](../../BAI/README_config.md) diff --git a/scripts/cp4a-deployment.sh b/scripts/cp4a-deployment.sh index 2ba0f5aa..22dd4729 100755 --- a/scripts/cp4a-deployment.sh +++ b/scripts/cp4a-deployment.sh @@ -89,8 +89,38 @@ function validate_cli(){ [[ $? -ne 0 ]] && \ echo "Unable to locate Openshift CLI, please install it first." && \ exit 1 + + which timeout &>/dev/null + [[ $? -ne 0 ]] && \ + while true; do + printf "\x1B[1m\"timeout\" Command Not Found\n\x1B[0m" + printf "\x1B[1mThe \"timeout\" command will be installed automatically\n\x1B[0m" + printf "\x1B[1mDo you accept (Yes/No, default: No): \x1B[0m" + read -rp "" ans + case "$ans" in + "y"|"Y"|"yes"|"Yes"|"YES") + install_timeout_cli + break + ;; + "n"|"N"|"no"|"No"|"NO") + echo -e "You do not accept, exiting...\n" + exit 0 + ;; + *) + echo -e "\x1B[1;31mYou do not accept, exiting....\n\x1B[0m" + exit 0 + ;; + esac + done } +function install_timeout_cli(){ + if [[ ${machine} = "Mac" ]]; then + echo -n "Installing timeout ......"; brew install coreutils >/dev/null 2>&1; sudo ln -s /usr/local/bin/gtimeout /usr/local/bin/timeout >/dev/null 2>&1; echo "done."; + fi + printf "\n" + } + function prompt_license(){ clear echo -e "\x1B[1;31mIMPORTANT: Review the IBM Cloud Pak for Automation license information here: \n\x1B[0m"
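The `validate_cli` addition above checks for the `timeout` command and, on macOS, installs GNU coreutils to provide it (symlinking `gtimeout` to `timeout` so later calls are uniform). The detection logic can be sketched as follows; this is a simplified standalone sketch, not the script's exact code, and it assumes Homebrew's coreutils names the tool `gtimeout`:

```shell
# Prefer the native "timeout"; fall back to coreutils' "gtimeout" on macOS.
if command -v timeout >/dev/null 2>&1; then
  timeout_cmd=timeout
elif command -v gtimeout >/dev/null 2>&1; then
  timeout_cmd=gtimeout
else
  echo "Unable to locate a timeout command, please install coreutils first." >&2
  exit 1
fi

# Example: a command that runs longer than the allowed 1 second is killed,
# and timeout reports failure.
if ! "$timeout_cmd" 1 sleep 3; then
  echo "command timed out as expected"
fi
```

Resolving the command name once and reusing it avoids sprinkling OS checks through the rest of the script, which is why the deployment script symlinks `gtimeout` into place instead of branching at every call site.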