volume mount error with podman 3.x #10620

Closed
prestwichj opened this issue Jun 9, 2021 · 1 comment · Fixed by #10638

@prestwichj

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
Running Elasticsearch under podman 3.x hits several volume mount errors that did not occur under podman 2.x.

Steps to reproduce the issue:

  1. Create a docker-compose.yml with Elasticsearch:

     volumes:
       esdata:
         driver: local
         driver_opts:
           o: "uid=1000"

     services:
       elasticsearch:
         container_name: elasticsearch
         image: docker.elastic.co/elasticsearch/elasticsearch:7.13.1
         cpu_quota: 100000
         mem_limit: 8g
         environment:
           - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
           - "bootstrap.memory_lock=true"
         ulimits:
           memlock:
             soft: -1
             hard: -1
         volumes:
           - esdata:/usr/share/elasticsearch/data:Z
           - "/etc/opt/logging/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
         expose:
           - "9200"
           - "9300"
         ports:
           - "9200:9200"
           - "9300:9300"
         restart: "always"

  2. Install docker-compose, now that podman 3.x is being used.

  3. docker-compose up (a sketch of the full command sequence follows this list)
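
A minimal sketch of that sequence, assuming docker-compose is pointed at Podman's Docker-compatible API socket (the socket path is the one reported by podman info below; the systemd unit name is an assumption for this setup):

    # run as root, matching the rootful setup in this report
    sudo systemctl enable --now podman.socket          # exposes /run/podman/podman.sock
    export DOCKER_HOST=unix:///run/podman/podman.sock  # point docker-compose at the podman socket
    docker-compose up -d                               # fails with the mount error shown below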

Describe the results you received:
ERROR: for elasticsearch error preparing container 1b9a4386df255f7c4fafba479653878efcaf07d4c55eb5b9a2c422163a543c4b for attach: error mounting volume logging_esdata for container 1b9a4386df255f7c4fafba479653878efcaf07d4c55eb5b9a2c422163a543c4b: error mounting volume logging_esdata: mount: /var/lib/containers/storage/volumes/logging_esdata/_data: wrong fs type, bad option, bad superblock on , missing codepage or helper program, or other error.

Describe the results you expected:
The volume should mount properly so that Elasticsearch can start.

Additional information you deem important (e.g. issue happens only occasionally):
Output of podman volume inspect:

with podman 2.x:

    [
        {
            "Name": "cos-logging_esdata",
            "Driver": "local",
            "Mountpoint": "/var/lib/containers/storage/volumes/cos-logging_esdata/_data",
            "CreatedAt": "2021-03-26T12:14:02.942197434-04:00",
            "Labels": {
                "io.podman.compose.project": "cos-logging"
            },
            "Scope": "local",
            "Options": {},
            "UID": 0,
            "GID": 0,
            "Anonymous": false
        }
    ]

with podman 3.x:

    [
        {
            "Name": "logging_esdata",
            "Driver": "local",
            "Mountpoint": "/var/lib/containers/storage/volumes/logging_esdata/_data",
            "CreatedAt": "2021-06-09T09:15:10.556205958-07:00",
            "Labels": {
                "com.docker.compose.project": "logging",
                "com.docker.compose.version": "1.29.2",
                "com.docker.compose.volume": "esdata"
            },
            "Scope": "local",
            "Options": {
                "UID": "1000",
                "o": "uid=1000"
            },
            "UID": 1000
        }
    ]

The uid info had to be added for podman 3.x, otherwise Elasticsearch hits a permissions error; it makes the data directory look the way it did under podman 2.x:

    drwxrwxr-x 3 1000 root 19 Mar 26 12:14 _data

The workaround is to create the volume with podman volume create (a command sketch follows below) and then, in docker-compose, use

    volumes:
      - /var/lib/containers/storage/volumes/logging_esdata/_data:/usr/share/elasticsearch/data:Z

instead of

    volumes:
      - esdata:/usr/share/elasticsearch/data:Z
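
A sketch of that workaround, assuming the volume is pre-created and its data directory chowned by hand (the chown to UID 1000 is an assumption that mirrors the directory listing above):

    sudo podman volume create logging_esdata
    # assumption: chown the data dir so Elasticsearch (UID 1000) can write to it,
    # matching the ownership shown above
    sudo chown 1000:root /var/lib/containers/storage/volumes/logging_esdata/_data
    # then bind-mount that _data path in docker-compose.yml as shown above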

Output of podman version:

Version:      3.0.2-dev
API Version:  3.0.0
Go Version:   go1.16.1
Built:        Thu May 20 09:22:46 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.19.8
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.20-2.module_el8.3.0+475+c50ce30b.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.20, commit: 1019ecdeda3936be22162bb1cca308192145de53'
  cpus: 16
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: journald
  hostname: utah95
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-240.el8.x86_64
  linkmode: dynamic
  memFree: 20901154816
  memTotal: 134793416704
  ociRuntime:
    name: runc
    package: runc-1.0.0-70.rc92.module_el8.3.0+699+d61d9c41.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: true
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 144h 32m 16.63s (Approximately 6.00 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 5
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 1621527766
  BuiltTime: Thu May 20 09:22:46 2021
  GitCommit: ""
  GoVersion: go1.16.1
  OsArch: linux/amd64
  Version: 3.0.2-dev

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.0.1-6.module_el8.4.0+781+acf4c33b.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (/~https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):
physical

@Luap99 (Member) commented Jun 10, 2021

As you can see, podman 2.x simply ignored your volume uid option, so that did not actually work either.

A simple reproducer for this problem:

sudo podman volume create -o o=uid=1000 myvol
sudo podman run -v myvol:/test alpine ls
Error: error mounting volume myvol for container a134b76249d4e42e91d74dc8d6ed9d21d4c0ebe9e9cd6e03f5ffc3124c5de114: error mounting volume myvol: mount: /var/lib/containers/storage/volumes/myvol/_data: wrong fs type, bad option, bad superblock on , missing codepage or helper program, or other error.

@Luap99 Luap99 self-assigned this Jun 10, 2021
Luap99 added a commit to Luap99/libpod that referenced this issue Jun 11, 2021
Podman uses the volume option map to check whether it has to mount the volume
when the container is started. Commit 28138da added the uid and gid options
to this map, but when only uid/gid is set we cannot mount the volume because
there is no filesystem or device specified. Make sure we do not try to mount
the volume when only the uid/gid option is set, since this is a simple chown
operation.

Also, when a uid/gid is explicitly set, do not chown the volume based on the
container user when the volume is used for the first time.

Fixes containers#10620

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
mheon pushed a commit to mheon/libpod that referenced this issue Jun 25, 2021 (same commit message as above).
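
To illustrate the rule the fix describes, here is a minimal Go sketch (the function name is made up and the option keys follow the volume inspect output above; this is not Podman's actual code, just the decision the commit message outlines): a volume whose only options are ownership settings has no filesystem or device to mount, so it should be chowned instead of passed to mount.

    package main

    import (
        "fmt"
        "strings"
    )

    // onlyOwnershipOpts reports whether every option on a local volume is an
    // ownership option (uid/gid). Such a volume cannot be mounted, since no
    // filesystem type or device is specified; it only needs a chown.
    func onlyOwnershipOpts(opts map[string]string) bool {
        for key, val := range opts {
            switch key {
            case "UID", "GID":
                // plain ownership metadata, nothing to mount
            case "o":
                // "o" can bundle comma-separated mount options; uid=/gid=
                // entries are still ownership-only
                for _, o := range strings.Split(val, ",") {
                    if !strings.HasPrefix(o, "uid=") && !strings.HasPrefix(o, "gid=") {
                        return false
                    }
                }
            default:
                // e.g. "type" or "device" means a real mount is needed
                return false
            }
        }
        return true
    }

    func main() {
        // the volume from this report: only uid information -> chown, do not mount
        fmt.Println(onlyOwnershipOpts(map[string]string{"UID": "1000", "o": "uid=1000"})) // true
        // a tmpfs volume would still be mounted
        fmt.Println(onlyOwnershipOpts(map[string]string{"type": "tmpfs", "o": "size=100m"})) // false
    }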