Hello !

Getting a bit annoyed with permission issues with Samba and sshfs. If someone could give me some input on how to find another, more elegant and secure way to share a folder path owned by root, I would really appreciate it !

Context

  • The following folder path is owned by root (docker volume):

/var/lib/docker/volumes/syncthing_data/_data/folder

  • The child folders are owned by the user server

/var/lib/docker/volumes/syncthing_data/_data/folder

  • The user server is in the sudoers file
  • The user server is in the docker group
  • /etc/fuse.conf has the user_allow_other option uncommented

Mount point with sshfs

sudo sshfs server@10.0.0.100:/var/lib/docker/volumes/syncthing_data/_data/folder /home/user/folder -o allow_other

Permission denied

Things I tried

  • Adding other options like gid and uid (tried 0, 27 and 1000) and default_permissions
  • Finding my way through stackoverflow, unix.stackexchange…

Solution I found

  1. Making a bind mount from the root owned path to a new path owned by server

sudo mount --bind /var/lib/docker/volumes/syncthing_data/_data/folder /home/server/folder
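
If the bind mount is kept, it can be made persistent on the server with an /etc/fstab entry like this (a sketch using the same paths as above):

```
/var/lib/docker/volumes/syncthing_data/_data/folder  /home/server/folder  none  bind  0  0
```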

  2. Mount point with sshfs

sshfs server@10.0.0.100:/home/server/folder /home/user/folder

Question

While the above solution works, it overcomplicates my setup and adds an unnecessary mount point to my laptop and fstab.

Isn’t there a more elegant solution to work directly with the user server (which has root access) to mount the folder with sshfs directly even if the folder path is owned by root?

I mean the user has root access so something like:

sshfs server@10.0.0.100:/home/server/folder /home/user/folder -o allow_other

should work even if the first part of the path is owned by root.

Changing the owner/permissions of the path recursively is out of the question !

Thank you for your insights !

  • Successful_Try543@feddit.de · 9 months ago

    You may try

    sshfs server@10.0.0.100:/var/lib/docker/volumes/syncthing_data/_data/folder /home/user/folder/ -o sftp_server="/usr/bin/sudo /usr/lib/openssh/sftp-server"
    

    Please check the correct path to sudo and sftp-server. However, you need to login via ssh and start a sudo session once before running the command. If someone has a solution to work around this, please feel welcome.

    • N0x0n@lemmy.ml (OP) · 9 months ago

      Heyha !

      Thanks for your input, you pointed me in the right direction ! After some more reading, this is what I found.

      Adding the following line in sudoers file after @includedir /etc/sudoers.d:

      server ALL = NOPASSWD: /usr/lib/openssh/sftp-server

      Works without the need for a sudo session for the sftp-server. I have no idea if this is good security practice, but if I had to guess I would say no. Having the NOPASSWD option for something as critical as an SFTP server seems… not a good idea ! But I’m not an expert, so I’m just guessing :/.

      If I may, how would you tackle such a use case ? My first solution seems way more secure with the right permissions on the bind mount, what do you think ?

      Thanks for your nice tip pointing me in the right direction :D !

      • Successful_Try543@feddit.de · 9 months ago

        As far as I understand, the user server is not the user running your web server, e.g. www-data, right? Otherwise I would advise against giving it elevated privileges such as sudo rights.

        If the authentication of the user server is sufficiently strong, e.g. a strong password or SSH key authentication, I don’t see a high risk in using the NOPASSWD method. But, as I am no expert, please take this with a grain of salt.

        • N0x0n@lemmy.ml (OP) · 9 months ago

          Sorry for the late response !

          As far as I understand, the user server is not the user running your web server e.g. www-data, right?

          Are you sure about that? I mean, in the sudoers file I added the user server with NOPASSWD and not www-data for the specific service. And it works that way.

          Maybe I misunderstand something here, if so please correct me. Is there anyway I could check this out? Do I need to check the owner on my host or my client trying to mount the path?

          Thank you !!

          • Successful_Try543@feddit.de · 9 months ago

            By the ‘user running the web server’ I mean the user running Apache, Nginx or whatever web server is on your system. Usually, AFAIK, you should not be able to log in as e.g. www-data on the system. You can identify the username by running ps -ef and searching for the web server process. You’ll find the corresponding user name in the first column.
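
For example, a quick way to list that user (a sketch assuming an nginx process; substitute apache2 or whatever your web server’s process name is):

```shell
# Print the unique user(s) owning any nginx process.
ps -eo user=,comm= | awk '$2 == "nginx" { print $1 }' | sort -u
```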

  • redcalcium@lemmy.institute · 9 months ago

    The easiest setup I tried so far is to simply put your docker container’s volume on an external path, e.g. /mnt/hdd1/some-directory, instead of putting it in the standard docker location (/var/lib/docker/volumes). You’ll have full control over ACLs on those custom paths.
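
A sketch of what that could look like in a docker-compose file (the host path and service layout are illustrative):

```yaml
services:
  syncthing:
    image: syncthing/syncthing
    volumes:
      # Host path instead of a named volume under /var/lib/docker/volumes:
      # ownership and ACLs on /mnt/hdd1/syncthing-data are yours to manage.
      - /mnt/hdd1/syncthing-data:/var/syncthing
```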

    • N0x0n@lemmy.ml (OP) · 9 months ago

      Heyyy !

      Thank you, that’s actually a good workaround ! Hadn’t thought about it !

      In case you’re interested, @Successful_Try543@feddit.de pointed me in the right direction with sshfs.

      sshfs server@10.0.0.100:/var/lib/docker/volumes/syncthing_data/_data/folder /home/user/folder/ -o sftp_server="/usr/bin/sudo /usr/lib/openssh/sftp-server"
      

      Adding the following line in sudoers file after @includedir /etc/sudoers.d:

      server ALL = NOPASSWD: /usr/lib/openssh/sftp-server

      This works, even though I’m not sure if this is actually good security practice :/.

      I will keep in mind your solution if I find out that this workaround is bad practice. What’s your opinion on this?

      Thank you !

      • redcalcium@lemmy.institute · 9 months ago

        So the workaround is running the SFTP process as root?

        Why not run the SFTP server as a docker container as well (e.g. with https://hub.docker.com/r/atmoz/sftp/ )? You can mount the same volume in the SFTP container, and have it listen on some random port. Just make sure to configure the SFTP container to use the same uid:gid as the one used in the syncthing container to avoid file permission issues.
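
A sketch of that idea (port, password and uid:gid are placeholders; the uid:gid should match what the syncthing container uses):

```
docker run -d --name sftp -p 2222:22 \
    -v syncthing_data:/home/server/folder \
    atmoz/sftp server:changeme:1000:1000
```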

        • Successful_Try543@feddit.de · 9 months ago

          The solutions you’ve proposed definitely are more elegant and I’d prefer either of these over my quick and dirty solution.

          The question is: how frequently is this needed? If it’s needed on a regular basis, then the workaround using a bind mount or selecting a different storage path is preferable. If it’s needed even more frequently, setting up the Docker SFTP container is acceptable extra work.

          • redcalcium@lemmy.institute · 9 months ago

            In that case, perhaps replacing -o sftp_server="/usr/bin/sudo /usr/lib/openssh/sftp-server" with -o sftp_server="/usr/bin/sudo -u <syncthing_user> /usr/lib/openssh/sftp-server" is a good compromise?
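
For that variant the sudoers rule would also need to name the target user, e.g. something like (keeping the <syncthing_user> placeholder):

```
server ALL = (<syncthing_user>) NOPASSWD: /usr/lib/openssh/sftp-server
```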

            • Successful_Try543@feddit.de · 9 months ago

              Yes, the permissions of <syncthing_user> should be sufficient. I was not aware that OP might not really need root access.