Using AWS S3 storage as FTP

Sounds weird, right? Yes, but you never know when you might be in a situation where a client needs to access your files stored in S3 via FTP, or a third party still living in the old era insists on FTP access.


Recently I had exactly this situation: a third party wanted to consume our S3 data via FTP, through multiple FTP accounts, some read-only and some with both read and write access.


I used S3FS to mount S3 as a drive on my Ubuntu server and VSFTP to publish those directories via FTP. The real challenge was granting read-only permission to some users and read-write to others. With VSFTP you can normally do this easily by managing directory permissions at the OS level, per user. But since S3 is not a real file system it behaves differently: even when no write permission was given, it was still possible to write files to S3.


I needed two users: one with read-only permission and another with both read and write permission, each on a different S3 folder.


S3 paths:
    s3://bucket1/user1folder – needs read-only access
    s3://bucket1/user2folder – needs both read and write access

Ubuntu users:
    ubuntu  id: 1000  (root user)
    user1   id: 1001
    user2   id: 1002
    The last two are members of group id 1001.
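The user layout above can be sketched with standard Ubuntu commands (requires root; the group name `s3ftp` is an assumption for this example, only the numeric ids matter here):

```shell
# Create the shared group and the two FTP users with the ids listed above.
# The group name "s3ftp" is made up for illustration; only gid 1001 matters.
groupadd --gid 1001 s3ftp
useradd --uid 1001 --gid 1001 --create-home user1
useradd --uid 1002 --gid 1001 --create-home user2
```

Both users share primary group 1001, which is the gid we will hand to S3FS below.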
    
Here are the S3FS commands to mount both folders:


s3fs -o passwd_file=/s3mnt/s3access,use_cache=/s3mnt/cache,uid=1000,gid=1001,allow_other,umask=227 bucket1:/user1folder/ /s3mnt/directory1
s3fs -o passwd_file=/s3mnt/s3access,use_cache=/s3mnt/cache,uid=1000,gid=1001,allow_other,umask=227 bucket1:/user2folder/ /s3mnt/directory2

The two commands above mount the S3 folders at /s3mnt/directory1 and /s3mnt/directory2. With the umask specified, both mounts are owned by user ubuntu and group 1001, with read + execute permission. But for some reason the OS doesn't enforce the specified permissions, and even without write permission I was still able to add and remove files. The best option I found for restricting the folders is to create an IAM policy and apply it to those folders only. This ensures that only those folders are accessible to the IAM user, under the specified conditions.
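To see why umask=227 produces read + execute for owner and group, recall that the effective mode is the full mode with the umask bits cleared:

```shell
# umask clears bits: effective mode = 0777 & ~umask
# 0227 clears owner write, group write, and all "other" bits.
printf '%04o\n' $(( 0777 & ~0227 ))   # prints 0550, i.e. r-xr-x---
```

So the directories show up as r-xr-x--- (owner ubuntu, group 1001), which is why the fact that s3fs still accepted writes is surprising, and why the IAM policy below is the reliable enforcement point.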


In my case, the policies below give user1folder read-only access and user2folder read and write access.

Policy 1:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::bucket1"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucket1/user1folder/*"
            ]
        }
    ]
}


Policy 2:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::bucket1"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucket1/user2folder/*"
            ]
        }
    ]
}

Policy 1 grants only read access (GetObject) on the given resource; policy 2 grants both GetObject and PutObject on its S3 folder.
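Attaching a policy to the IAM user whose keys s3fs uses can be sketched with the AWS CLI (the user name `s3ftp-user1` and policy name are assumptions; the JSON here is a trimmed copy of Policy 1):

```shell
# Write a trimmed copy of Policy 1 to a file (read-only on user1folder).
cat > /tmp/user1-read-only.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::bucket1"]
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::bucket1/user1folder/*"]
        }
    ]
}
EOF

# Needs valid AWS credentials, so shown commented out here:
# aws iam put-user-policy --user-name s3ftp-user1 \
#     --policy-name user1-read-only \
#     --policy-document file:///tmp/user1-read-only.json
```

Policy 2 would be attached the same way to the IAM user behind the directory2 mount, with PutObject added.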

Now we can ignore permissions at the OS level and create VSFTP user accounts to access the S3-stored files via FTP.
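The VSFTP side can be sketched with per-user config files, a standard vsftpd feature: in /etc/vsftpd.conf you enable `chroot_local_user=YES` and point `user_config_dir` at a directory holding one file per user. The paths below (and the /tmp staging directory used so the snippet is self-contained) are assumptions for illustration:

```shell
# Hypothetical per-user vsftpd setup: user1 read-only, user2 read-write.
# A real install would use /etc/vsftpd/user_conf and set in /etc/vsftpd.conf:
#   chroot_local_user=YES
#   user_config_dir=/etc/vsftpd/user_conf
CONF_DIR=/tmp/vsftpd_user_conf   # staging path for this sketch only
mkdir -p "$CONF_DIR"

# user1: jailed into the read-only S3 mount, uploads disabled
cat > "$CONF_DIR/user1" <<'EOF'
local_root=/s3mnt/directory1
write_enable=NO
EOF

# user2: jailed into the read-write S3 mount, uploads enabled
cat > "$CONF_DIR/user2" <<'EOF'
local_root=/s3mnt/directory2
write_enable=YES
EOF
```

Even if a user's `write_enable` were misconfigured, the IAM policies above remain the backstop: user1's keys simply cannot PutObject.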