Applying Custom BDE Styles (Integrator UI CSSWrapper Tutorial)
Integrator UI is an application that integrates the available web interfaces within a BDI Stack. In this tutorial we show how to enhance a BDI Stack with the Integrator UI and the CSSWrapper, a required companion component of the Integrator UI.
We will use the Docker snippets available on the Big Data Europe GitHub. In particular, we will use the most ubiquitous component of big data applications: Apache Hadoop. The full docker-compose.yml is shown below.
version: "2"
services:
namenode:
image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
container_name: namenode
volumes:
- hadoop_namenode:/hadoop/dfs/name
environment:
- CLUSTER_NAME=test
env_file:
- ./hadoop.env
resourcemanager:
image: bde2020/hadoop-resourcemanager:1.1.0-hadoop2.7.1-java8
container_name: resourcemanager
depends_on:
- namenode
- datanode1
- datanode2
env_file:
- ./hadoop.env
historyserver:
image: bde2020/hadoop-historyserver:1.1.0-hadoop2.7.1-java8
container_name: historyserver
depends_on:
- namenode
- datanode1
- datanode2
volumes:
- hadoop_historyserver:/hadoop/yarn/timeline
env_file:
- ./hadoop.env
nodemanager1:
image: bde2020/hadoop-nodemanager:1.1.0-hadoop2.7.1-java8
container_name: nodemanager1
depends_on:
- namenode
- datanode1
- datanode2
env_file:
- ./hadoop.env
datanode1:
image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
container_name: datanode1
depends_on:
- namenode
volumes:
- hadoop_datanode1:/hadoop/dfs/data
env_file:
- ./hadoop.env
datanode2:
image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
container_name: datanode2
depends_on:
- namenode
volumes:
- hadoop_datanode2:/hadoop/dfs/data
env_file:
- ./hadoop.env
datanode3:
image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
container_name: datanode3
depends_on:
- namenode
volumes:
- hadoop_datanode3:/hadoop/dfs/data
env_file:
- ./hadoop.env
volumes:
hadoop_namenode:
hadoop_datanode1:
hadoop_datanode2:
hadoop_datanode3:
hadoop_historyserver:
For demonstration purposes, we will reduce the docker-compose definition to a minimal setup: one NameNode and one DataNode.
version: "2"
services:
namenode:
image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
container_name: namenode
volumes:
- hadoop_namenode:/hadoop/dfs/name
environment:
- CLUSTER_NAME=test
env_file:
- ./hadoop.env
datanode:
image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
container_name: datanode1
depends_on:
- namenode
volumes:
- hadoop_datanode:/hadoop/dfs/data
env_file:
- ./hadoop.env
volumes:
hadoop_namenode:
hadoop_datanode:
Our next step is to unpack the environment variables from ./hadoop.env into the docker-compose.yml definition, so that everything is in one place for quick tweaking. As you can see below, the environment variables define fs.defaultFS for Hadoop, so the DataNodes reach the NameNode by its service name. We can therefore remove container_name from the DataNode definition to enable horizontal scaling with the docker-compose scale command (see the example after the compose file below). Moreover, we define an external overlay network named integrator-ui to deploy on Docker Swarm (external networks are handy when you want to control the IP ranges of the containers).
version: "2"
services:
namenode:
image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
container_name: namenode
expose:
- "50070"
volumes:
- hadoop_namenode:/hadoop/dfs/name
networks:
- integrator-ui
environment:
- CLUSTER_NAME=test
- CORE_CONF_fs_defaultFS=hdfs://namenode:8020
datanode:
image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
depends_on:
- namenode
expose:
- "50075"
networks:
- integrator-ui
volumes:
- hadoop_datanode:/hadoop/dfs/data
environment:
- CORE_CONF_fs_defaultFS=hdfs://namenode:8020
volumes:
hadoop_namenode:
hadoop_datanode:
networks:
integrator-ui:
external:
name: integrator-ui
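Since the DataNode no longer has a fixed container name, the service can now be scaled horizontally. As a minimal sketch, assuming the stack is already running and the command is executed from the directory containing docker-compose.yml:
# run three DataNode containers instead of one
docker-compose scale datanode=3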
The next step is to add the CSSWrapper proxy:
version: "2"
services:
csswrapper:
image: bde2020/nginx-proxy-with-css:latest
ports:
- "80:80"
networks:
- integrator-ui
volumes:
- nginx-volume:/usr/share/nginx/html
- /var/run/docker.sock:/tmp/docker.sock:ro
environment:
- "constraint:node==akswnc4.aksw.internal"
- DOCKER_HOST=tcp://172.18.160.16:4000
namenode:
image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
container_name: namenode
expose:
- "50070"
volumes:
- hadoop_namenode:/hadoop/dfs/name
networks:
- integrator-ui
environment:
- CLUSTER_NAME=test
- CORE_CONF_fs_defaultFS=hdfs://namenode:8020
datanode:
image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
depends_on:
- namenode
expose:
- "50075"
networks:
- integrator-ui
volumes:
- hadoop_datanode:/hadoop/dfs/data
environment:
- CORE_CONF_fs_defaultFS=hdfs://namenode:8020
volumes:
hadoop_namenode:
hadoop_datanode:
nginx-volume:
networks:
integrator-ui:
external:
name: integrator-ui
The CSSWrapper exposes port 80 to the outside world through the host machine and thus should be allocated on a host with a public IP address. In our case that host is akswnc4.aksw.internal, which is enforced by the "constraint:node==akswnc4.aksw.internal" environment entry (a Docker Swarm scheduling constraint).
The CSSWrapper needs to know (a) which URL corresponds to which service and (b) which CSS to inject (if any). We configure it using the VIRTUAL_HOST (URL), VIRTUAL_PORT (if it differs from 80), and CSS_SOURCE environment variables. The CSSWrapper requires that the Docker image contains an EXPOSE clause for the VIRTUAL_PORT, otherwise it will fail to work. To ensure this, we define the expose clauses explicitly in the docker-compose definition. For more details, see the CSSWrapper repository. For a better understanding of how the CSSWrapper works, you can refer to a blog post on the nginx reverse proxy. A quick way to verify the routing is shown after the compose file below.
version: "2"
services:
csswrapper:
image: bde2020/nginx-proxy-with-css:latest
ports:
- "80:80"
networks:
- integrator-ui
volumes:
- nginx-volume:/usr/share/nginx/html
- /var/run/docker.sock:/tmp/docker.sock:ro
environment:
- "constraint:node==akswnc4.aksw.internal"
- DOCKER_HOST=tcp://172.18.160.16:4000
namenode:
image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
container_name: namenode
expose:
- "50070"
volumes:
- hadoop_namenode:/hadoop/dfs/name
networks:
- integrator-ui
environment:
- CLUSTER_NAME=test
- CORE_CONF_fs_defaultFS=hdfs://namenode:8020
- VIRTUAL_HOST=namenode.big-data-europe.aksw.org
- VIRTUAL_PORT=50070
- CSS_SOURCE=hadoop
datanode:
image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
depends_on:
- namenode
expose:
- "50075"
networks:
- integrator-ui
volumes:
- hadoop_datanode:/hadoop/dfs/data
environment:
- CORE_CONF_fs_defaultFS=hdfs://namenode:8020
- VIRTUAL_HOST=datanode.big-data-europe.aksw.org
- VIRTUAL_PORT=50075
- CSS_SOURCE=hadoop
volumes:
hadoop_namenode:
hadoop_datanode:
nginx-volume:
networks:
integrator-ui:
external:
name: integrator-ui
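To check that the CSSWrapper routes requests as intended, you can send it a request with an explicit Host header. A minimal sketch, assuming the CSSWrapper is running on akswnc4.aksw.internal and that host is reachable from your machine:
# request the NameNode web UI through the proxy by its virtual host name
curl -H "Host: namenode.big-data-europe.aksw.org" http://akswnc4.aksw.internal/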
The last step is to add the Integrator UI. To begin with, we should not use build definitions in docker-compose.yml when deploying to a swarm: an image built locally is not available to the other nodes. For example, this is wrong:
version: "2"
services:
integrator-ui:
build: ./integrator-ui
environment:
VIRTUAL_HOST: "integrator-ui.big-data-europe.aksw.org"
...
The proper way to add the Integrator UI is via Docker Hub. We therefore create a git repository on GitHub from which to build our custom Integrator UI image and push it to Docker Hub. We create one named integrator-tutorial, extending the Integrator UI image as follows:
FROM bde2020/integrator-ui:0.3.0
COPY user-interfaces /app/config/user-interfaces
In the root of the repository, create a user-interfaces file containing the URLs of your services and the labels to be displayed in the header. The labels can be arbitrary; we use "Namenode" and "Datanode" here. Each base-url parameter should correspond to a VIRTUAL_HOST environment variable, so that it can be resolved to the actual service.
{ "data": [
{
"id": 1,
"type": "user-interfaces",
"attributes": {
"label": "Namenode",
"base-url": "http://namenode.big-data-europe.aksw.org/",
"append-path": ""
}
},
{
"id": 2,
"type": "user-interfaces",
"attributes": {
"label": "Datanode",
"base-url": "http://datanode.big-data-europe.aksw.org/",
"append-path": ""
}
}
]
}
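With the Dockerfile and the user-interfaces file in place, the image can be published. As an alternative to the Docker Hub automated build described below, a manual build and push would look like this (yourname is a hypothetical Docker Hub username; replace it with your own):
# build the custom Integrator UI image from the repository root
docker build -t yourname/integrator-tutorial .
# push it to Docker Hub so that every swarm node can pull it
docker push yourname/integrator-tutorial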
Now we go to Docker Hub and create an automated build for this repository; we named ours integrator-tutorial. Finally, we can include it in the docker-compose definition:
version: "2"
services:
integrator-ui:
image: earthquakesan/integrator-tutorial
networks:
- integrator-ui
environment:
- VIRTUAL_HOST=integrator-ui.big-data-europe.aksw.org
csswrapper:
image: bde2020/nginx-proxy-with-css:latest
ports:
- "80:80"
networks:
- integrator-ui
volumes:
- nginx-volume:/usr/share/nginx/html
- /var/run/docker.sock:/tmp/docker.sock:ro
environment:
- "constraint:node==akswnc4.aksw.internal"
- DOCKER_HOST=tcp://172.18.160.16:4000
namenode:
image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
container_name: namenode
expose:
- "50070"
volumes:
- hadoop_namenode:/hadoop/dfs/name
networks:
- integrator-ui
environment:
- CLUSTER_NAME=test
- CORE_CONF_fs_defaultFS=hdfs://namenode:8020
- VIRTUAL_HOST=namenode.big-data-europe.aksw.org
- VIRTUAL_PORT=50070
- CSS_SOURCE=hadoop
datanode:
image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
depends_on:
- namenode
expose:
- "50075"
volumes:
- hadoop_datanode:/hadoop/dfs/data
networks:
- integrator-ui
environment:
- CORE_CONF_fs_defaultFS=hdfs://namenode:8020
- VIRTUAL_HOST=datanode.big-data-europe.aksw.org
- VIRTUAL_PORT=50075
- CSS_SOURCE=hadoop
volumes:
hadoop_namenode:
hadoop_datanode:
nginx-volume:
networks:
integrator-ui:
external:
name: integrator-ui
Simply copy the last docker-compose snippet and save it as docker-compose.yml. Then run:
docker -H :4000 network create -d overlay integrator-ui
docker-compose -H :4000 up
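If everything came up correctly, the Integrator UI should now be reachable through the CSSWrapper under its virtual host name. A quick check, assuming the host names resolve to the swarm node running the CSSWrapper (add them to /etc/hosts otherwise):
# fetch the Integrator UI landing page through the proxy
curl -H "Host: integrator-ui.big-data-europe.aksw.org" http://akswnc4.aksw.internal/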