
mongodb doesn't start properly #92

Closed
chino opened this issue Dec 11, 2017 · 19 comments

chino commented Dec 11, 2017

The mongodb container seems to be failing with:

Error executing 'postInstallation': Group '2000' not found

Full output:

$ kubectl log  -n kubeapps -p po/mongodb-77bdcd694b-sl2rt
W1211 15:18:10.287621   70223 cmd.go:392] log is DEPRECATED and will be removed in a future version. Use logs instead.

Welcome to the Bitnami mongodb container
Subscribe to project updates by watching /~https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at /~https://github.com/bitnami/bitnami-docker-mongodb/issues
Send us your feedback at containers@bitnami.com

nami    INFO  Initializing mongodb
Error executing 'postInstallation': Group '2000' not found
ngtuna (Contributor) commented Dec 12, 2017

@chino Which version of kubeapps are you trying? And on which platform? minikube/GKE/etc.

chino (Author) commented Dec 12, 2017

Kubeapps Installer version: v0.1.2

This is a bare-metal install using typhoon.psdn.io, which is based on matchbox/bootkube/k8s 1.8.5/CoreOS (the same backend as Tectonic).

chino (Author) commented Dec 12, 2017

Just realized this cluster is actually still on 1.8.3.

chino (Author) commented Dec 12, 2017

Do you have any pointers on how I could debug this myself? The deployment keeps killing the pod, which makes it challenging, and I can't seem to find anything in the various GitHub repos based on that error.

prydonius (Contributor) commented:
Hey @chino, this is most likely a permissions issue with the MongoDB container. I believe GlusterFS volumes are not well supported. Things you can try:

1. The MongoDB container runs as a non-root user by default; try running it as root by adding this to the MongoDB deployment:

spec:
  template:
    spec:
      securityContext:
        runAsUser: 0

2. Disable persistence by removing the volume and the volume mount in the MongoDB deployment (until the image supports GlusterFS volumes, you may just have to disable it); see the sketch below.
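A minimal sketch of option 2, if you would rather keep the manifest structure than delete the volume entries outright: swap the persistentVolumeClaim-backed volume for an emptyDir so nothing is persisted. Names, image tag, and mount path here are illustrative; match them to the actual kubeapps manifest:

spec:
  template:
    spec:
      containers:
      - name: mongodb
        image: bitnami/mongodb:3.4
        volumeMounts:
        - name: data
          mountPath: /bitnami/mongodb   # data is discarded when the pod is rescheduled
      volumes:
      - name: data
        emptyDir: {}                    # was: persistentVolumeClaim with a claimName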

chino (Author) commented Dec 13, 2017

I'll give it a shot. Does the underlying volume type really matter?

prydonius (Contributor) commented:
@chino it shouldn't, but some volume types have limitations, e.g. NFS, where you cannot chown/chmod files. I haven't looked deeply into GlusterFS issues, but they have been reported before (helm/charts#1711 (comment)). The Bitnami MongoDB image is designed to run as non-root by default, which can often cause permissions issues in some environments.

At some point kubeapps should support specifying an external MongoDB database rather than installing its own, which will be useful if you want to use a managed service or if there is a different MongoDB setup that works for your volume type.

chino (Author) commented Dec 13, 2017

I always thought chown/chmod on NFS still worked but could get out of sync if multiple systems had different uid/gid ranges. In this case these are private volumes built by heketi, so I'd think it should be fine.

chino (Author) commented Dec 13, 2017

I just fired up a test container to confirm:

/ $ mount | grep gluster
x.x.x.x:vol_1e8c58b263c3be9ff8f47f894dbc60f5 on /gluster type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

/ $ cd /gluster/

/gluster $ ls -lh
total 0
-rw-r--r--    1 2000     2000           0 Dec 13 17:21 foo


/gluster $ whoami
whoami: unknown uid 2000

/gluster $ chown 2000.2000 foo

/gluster $ chmod 755 foo

/gluster $ ls -lh
total 0
-rwxr-xr-x    1 2000     2000           0 Dec 13 17:21 foo

This was using:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-test2
  namespace: default
spec:
  storageClassName: gluster-heketi
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

With:

spec:
  securityContext:
    runAsUser: 2000
....
    volumeMounts:
    - mountPath: /gluster
      name: gluster
  volumes:
  - name: gluster
    persistentVolumeClaim:
      claimName: gluster-test2

prydonius (Contributor) commented:
@chino thanks for checking, it looks like the NFS and GlusterFS issues are not the same then. Also, I checked and it doesn't look like MongoDB is running as non-root, so I don't think that is the issue either. I'll take a deeper look.

prydonius (Contributor) commented:
@chino can you set the NAMI_DEBUG=1 env var in the MongoDB deployment and give the full output of the error? This will help me figure out where that "Group '2000' not found" error might be coming from.
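For reference, setting that variable in the deployment looks roughly like this (a sketch; the container name is whatever the kubeapps manifest uses):

spec:
  template:
    spec:
      containers:
      - name: mongodb
        env:
        - name: NAMI_DEBUG
          value: "1"

Or, equivalently, kubectl set env deployment/mongodb NAMI_DEBUG=1 (adding -n kubeapps if that is where it runs).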

chino (Author) commented Dec 13, 2017

I was thinking the same thing since I didn't see any securityContext settings.

NAMI_DEBUG=1 will be handy going forward :]

Here we go!

$ kubectl logs -f mongodb-6ff6857c57-d8ckn
app-entrypoint.sh 17:44:07.50
app-entrypoint.sh 17:44:07.50 Welcome to the Bitnami mongodb container
app-entrypoint.sh 17:44:07.50 Subscribe to project updates by watching /~https://github.com/bitnami/bitnami-docker-mongodb
app-entrypoint.sh 17:44:07.51 Submit issues and feature requests at /~https://github.com/bitnami/bitnami-docker-mongodb/issues
app-entrypoint.sh 17:44:07.51 Send us your feedback at containers@bitnami.com
app-entrypoint.sh 17:44:07.51
nami    INFO  Initializing mongodb
mongodb TRACE [restorePersistedData] Restoring /bitnami/mongodb/conf to /opt/bitnami/mongodb/conf
mongodb TRACE [restorePersistedData] Restoring /bitnami/mongodb/data to /opt/bitnami/mongodb/data
mongodb TRACE [configurePermissions] File to chown: /opt/bitnami/mongodb/tmp
mongodb TRACE [configurePermissions] File to chown: /opt/bitnami/mongodb/logs
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 19 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 18 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 17 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 16 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 15 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 14 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 13 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 12 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 11 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 10 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 9 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 8 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 7 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 6 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 5 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 4 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 3 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 2 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 1 remaining attempts
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [retry] Trying to configure permissions... 0 remaining attempts
Error executing 'postInstallation': Group '2000' not found
nami    TRACE Error: Error executing 'postInstallation': Group '2000' not found
    at _findGroup (/opt/bitnami/nami/node_modules/nami-utils/lib/os/user-management/find-group.js:10:11)
    at findGroup (/opt/bitnami/nami/node_modules/nami-utils/lib/os/user-management/find-group.js:37:10)
    at getGid (/opt/bitnami/nami/node_modules/nami-utils/lib/os/user-management/get-gid.js:22:25)
    at _chown (/opt/bitnami/nami/node_modules/nami-utils/lib/file/chown.js:22:33)
    at /opt/bitnami/nami/node_modules/nami-utils/lib/file/chown.js:77:7
    at ot (/opt/bitnami/nami/node_modules/lodash/index.js:12:507)
    at Function.<anonymous> (/opt/bitnami/nami/node_modules/lodash/index.js:33:421)
    at Object.chown (/opt/bitnami/nami/node_modules/nami-utils/lib/file/chown.js:76:7)
    at Object.wrappedFn (/opt/bitnami/nami/node_modules/nami-utils/lib/function-wrapping.js:191:17)
    at _configurePermissions (/root/.nami/components/com.bitnami.mongodb/lib/component.js:137:13)

I bet if I just inject the group it will work.
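Judging from the trace, nami-utils resolves the group through a lookup in the container's group database (find-group.js via getGid) before calling chown, so a numeric GID with no /etc/group entry fails with exactly this message. A quick check from inside the container, assuming getent is available in the image:

# prints an entry if GID 2000 is known to the container, nothing otherwise
getent group 2000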

prydonius (Contributor) commented:
@chino is there an easy way to set up GlusterFS locally or something so I can debug this more?

chino (Author) commented Dec 13, 2017

I believe they have a Vagrantfile: /~https://github.com/gluster/gluster-kubernetes#quickstart

chino (Author) commented Dec 14, 2017

I tried to quickly throw this in:

      - command:
        - /bin/bash
        - -c
        - |
          set -ex
          addgroup --gid 2000 mongo
          exec /app-entrypoint.sh "$@"

But for some reason something is different:

+ addgroup --gid 2000 mongo
Adding group `mongo' (GID 2000) ...
Done.
+ exec /app-entrypoint.sh
app-entrypoint.sh 19:44:54.81
app-entrypoint.sh 19:44:54.81 Welcome to the Bitnami mongodb container
app-entrypoint.sh 19:44:54.81 Subscribe to project updates by watching /~https://github.com/bitnami/bitnami-docker-mongodb
app-entrypoint.sh 19:44:54.81 Submit issues and feature requests at /~https://github.com/bitnami/bitnami-docker-mongodb/issues
app-entrypoint.sh 19:44:54.81 Send us your feedback at containers@bitnami.com
app-entrypoint.sh 19:44:54.81
tini (tini version 0.13.2 - git.79016ec)
Usage: tini [OPTIONS] PROGRAM -- [ARGS] | --version

Execute a program under the supervision of a valid init process (tini)

Command line options:

  --version: Show version and exit.
  -h: Show this help message and exit.
  -s: Register as a process subreaper (requires Linux >= 3.4).
  -v: Generate more verbose output. Repeat up to 3 times.
  -g: Send signals to the child's process group.
  -l: Show license and exit.

Environment variables:

  TINI_SUBREAPER: Register as a process subreaper (requires Linux >= 3.4)
  TINI_VERBOSITY: Set the verbosity level (default: 1)

chino (Author) commented Dec 14, 2017

OK, I changed it to pass /run.sh, which is the behavior I saw from the normal container.
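For reference, the adjusted wrapper probably looks like this (a sketch; with bash -c, "$@" is empty unless extra arguments follow the script, which is why the earlier attempt fell through to tini's usage text):

      - command:
        - /bin/bash
        - -c
        - |
          set -ex
          addgroup --gid 2000 mongo
          exec /app-entrypoint.sh /run.sh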

However now I see:

nami    INFO  Initializing mongodb
mongodb TRACE [restorePersistedData] Restoring /bitnami/mongodb/conf to /opt/bitnami/mongodb/conf
mongodb TRACE [restorePersistedData] Restoring /bitnami/mongodb/data to /opt/bitnami/mongodb/data
mongodb TRACE [configurePermissions] File to chown: /opt/bitnami/mongodb/tmp
mongodb TRACE [configurePermissions] File to chown: /opt/bitnami/mongodb/logs
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/conf
mongodb TRACE [configurePermissions] File to chmod: /bitnami/mongodb/conf
mongodb TRACE [configurePermissions] File to chown: /bitnami/mongodb/data
mongodb TRACE [configurePermissions] File to chmod: /bitnami/mongodb/data
mongodb INFO
mongodb INFO  ########################################################################
mongodb INFO   Installation parameters for mongodb:
mongodb INFO     Persisted data and properties have been restored.
mongodb INFO     Any input specified will not take effect.
mongodb INFO   (Passwords are not shown for security reasons)
mongodb INFO  ########################################################################
mongodb INFO
nami    INFO  mongodb successfully initialized
app-entrypoint.sh 23:49:56.73 INFO  ==> Starting mongodb...
sed: can't read /bitnami/mongodb/conf/mongodb.conf: No such file or directory
run.sh 23:49:56.74 INFO  ==> Starting mongod...
Error reading config file: No such file or directory
try '/opt/bitnami/mongodb/bin/mongod --help' for more information

I don't actually see anything in /opt/bitnami/mongodb/conf.

The only things in that path are:

/opt/bitnami/mongodb
/opt/bitnami/mongodb/.buildcomplete
/opt/bitnami/mongodb/licenses
/opt/bitnami/mongodb/licenses/mongodb-3.4.9.txt
/opt/bitnami/mongodb/THIRD-PARTY-NOTICES
/opt/bitnami/mongodb/GNU-AGPL-3.0
/opt/bitnami/mongodb/README
/opt/bitnami/mongodb/MPL-2
/opt/bitnami/mongodb/bin/*
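A plausible next check, since the restore step above copies /bitnami/mongodb/conf back over the install directory, is whether the persisted conf directory on the Gluster volume is simply empty (pod name below is from the earlier log; adjust to the current pod):

kubectl exec mongodb-6ff6857c57-d8ckn -- ls -la /bitnami/mongodb/conf /bitnami/mongodb/data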

prydonius (Contributor) commented:
cc @tompizmor, could someone look into this next week?

chino (Author) commented Dec 15, 2017

Removing the volume mount did work, but obviously I'd like to fix it with Gluster.

I did notice this in the logs:

2017-12-15T20:29:20.044+0000 I ACCESS [conn2950] Unauthorized: not authorized on admin to execute command { serverStatus: 1, tcmalloc: false }

prydonius added a commit to prydonius/kubeapps that referenced this issue Mar 5, 2018
prydonius (Contributor) commented:
Closing as this is being tracked as an issue upstream in the MongoDB container.
