mongodb doesn't start properly #92
@chino Which version of kubeapps are you trying? And on which platform? minikube/GKE/etc.
This is a bare-metal install using typhoon.psdn.io, which is based on matchbox/bootkube/k8s 1.8.5/coreos (the same backend as Tectonic).
Just realized this cluster is actually back on 1.8.3 still.
Do you have any pointers on how I could possibly debug this myself? The deployment keeps killing the pod, which makes it challenging, and I can't seem to find anything in the various GitHub repos based on that error.
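For context, the checks I know to run so far look roughly like this (namespace, labels and pod names are placeholders for a typical kubeapps install, not the exact ones here):

```sh
# Illustrative only; namespace and pod names are placeholders:
kubectl -n kubeapps get pods                        # find the crash-looping mongodb pod
kubectl -n kubeapps describe pod <mongodb-pod>      # events show why the pod is being killed
kubectl -n kubeapps logs <mongodb-pod> --previous   # logs from the previous (failed) container
```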
Hey @chino, this is most likely a permissions issue with the MongoDB container. I believe GlusterFS volumes are not well supported. Things you can try:
I'll give it a shot. Does the underlying volume type really matter?
@chino it shouldn't, but some volume types have limitations, e.g. NFS, where you cannot chown/chmod files. I haven't looked deeply into GlusterFS issues, but they have been reported before (helm/charts#1711 (comment)). The Bitnami MongoDB image is designed to run as non-root by default, which can cause permissions issues in some environments. At some point kubeapps should support specifying an external MongoDB database rather than installing its own, which will be useful if you want to use a managed service or if there is a different MongoDB setup that works for your volume type.
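A quick way to confirm what uid/gid the container actually ends up with (namespace and pod name below are placeholders):

```sh
# Check which uid/gid the mongodb container is running as:
kubectl -n kubeapps exec <mongodb-pod> -- id
# and whether the pod spec sets anything explicitly:
kubectl -n kubeapps get pod <mongodb-pod> -o jsonpath='{.spec.securityContext}'
```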
I always thought chown/chmod on NFS still worked, but could get out of sync if multiple systems had different uid/gid ranges. In this case it's private volumes built by Heketi, so I'd think it should be fine.
I just fired up a test container to confirm:
This was using:
With:
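Roughly, the kind of check I mean looks like this (image, claim name and paths are placeholders rather than the exact ones I used):

```sh
# Sketch of a throwaway pod that mounts the claim and tests chown/chmod:
kubectl run voltest --rm -i --tty --restart=Never --image=busybox \
  --overrides='{
    "apiVersion": "v1",
    "spec": {
      "containers": [{
        "name": "voltest",
        "image": "busybox",
        "stdin": true,
        "tty": true,
        "command": ["sh"],
        "volumeMounts": [{"name": "data", "mountPath": "/data"}]
      }],
      "volumes": [{"name": "data", "persistentVolumeClaim": {"claimName": "test-claim"}}]
    }
  }'

# Inside the pod:
touch /data/probe
chown 1001:1001 /data/probe   # should succeed if the volume honours chown
chmod 600 /data/probe
ls -ln /data
```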
@chino thanks for checking, it looks like the NFS and GlusterFS issues are not the same then. Also, I checked and it doesn't look like MongoDB is running as non-root, so I don't think that is the issue either. I'll take a deeper look.
@chino can you set the
I was thinking the same thing since I didn't see any
Here we go!
I bet if I just inject the group it will work.
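Something along these lines might be worth a try, assuming the gid is the 2000 from the error and the deployment is called mongodb (both guesses). Note this only sets the pod-level gid; it doesn't create the group inside the image itself:

```sh
# Hypothetical: make the pod run with gid 2000 so it matches the volume's group
# (deployment name and namespace are guesses, adjust to your install):
kubectl -n kubeapps patch deployment mongodb --type=strategic -p '
{"spec":{"template":{"spec":{"securityContext":{"fsGroup":2000,"supplementalGroups":[2000]}}}}}'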
@chino is there an easy way to set up GlusterFS locally or something so I can debug this more?
I believe they have a Vagrantfile: /~https://github.com/gluster/gluster-kubernetes#quickstart
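From memory the quickstart is roughly the following; the repo README has the authoritative steps:

```sh
# Approximate quickstart steps (see the repo README for the exact version):
git clone /~https://github.com/gluster/gluster-kubernetes.git
cd gluster-kubernetes/vagrant
vagrant up        # brings up the test VMs defined by the Vagrantfile
# then run the deploy script from the repo against the resulting cluster, per the README
```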
I tried to quickly throw this in:
But for some reason something is different:
Ok, I changed it to pass it in. However, now I see:
I don't actually see anything in that path. The only thing there is:
cc @tompizmor, could someone look into this next week?
Removing the volume mount did work out, but obviously I'd like to fix it with Gluster. I did notice this in the logs:
Closing as this is being tracked as an issue upstream in the MongoDB container. |
The mongodb container seems to be failing with:
Error executing 'postInstallation': Group '2000' not found
Full output: