How to Mount Amazon S3 as a Drive for Cloud File Sharing

In Amazon S3, data is stored in “buckets”, the basic unit of data storage. You can configure permissions for users to access buckets via the AWS web interface. If you want to make Amazon S3 available without a web browser, you can let users work with buckets directly from the operating system, whether that is Linux, Windows, or macOS.

Access to Amazon S3 cloud storage from the command line can be handy in several scenarios. It is especially useful on operating systems without a graphical user interface (GUI), such as VMs running in a public cloud, and for automating tasks such as copying files or creating cloud data backups.

Read on to learn how to mount an Amazon S3 bucket as a filesystem on a Linux machine, and as a drive mapped to a local directory on Windows and macOS machines, so that you can use AWS S3 without a web browser.


How to Mount an S3 Bucket as a Filesystem in Linux

AWS provides an API for working with Amazon S3 buckets from third-party applications, and you can even write your own application that interacts with buckets through it. By mounting an S3 bucket to the same directory on each computer with S3FS, you can give applications a consistent local path for uploading files to Amazon S3 cloud storage. In this tutorial, we use S3FS to mount an Amazon S3 bucket to a Linux directory as if it were a disk drive.
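
For a quick illustration of this API access from the command line, you can use the AWS CLI, which talks to the same API (an aside, assuming the aws tool is installed and configured with credentials; it is not required for the S3FS steps below):

    # List the objects in a bucket (blog-bucket01 is the example bucket used in this tutorial)
    aws s3 ls s3://blog-bucket01

    # Copy a local file to the bucket
    aws s3 cp ./test1.txt s3://blog-bucket01/test1.txt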

S3FS is a free and open-source solution based on FUSE (Filesystem in Userspace), developed to mount S3 buckets to directories of Linux operating systems, similar to the way you mount a CIFS or NFS share as a network drive.

After mounting Amazon S3 cloud storage to your Linux machine with S3FS, you can use cp, mv, rm, and other commands in the Linux console to work with files just as you do on mounted local or network drives.

Let’s mount an Amazon S3 bucket to a Linux directory, using Ubuntu 18.04 LTS as an example. A fresh installation of Ubuntu is used in this walkthrough, and the same principle applies to newer versions.

  1. Update the repository tree:

    sudo apt-get update

  2. If another FUSE package is already installed on your Linux system, remove it before configuring the environment and installing s3fs to avoid conflicts. As we’re using a fresh installation of Ubuntu, we don’t run the sudo apt-get remove fuse command to remove FUSE.
  3. Install s3fs from online software repositories:

    sudo apt-get install s3fs

  4. You need to generate the access key ID and secret access key in the AWS web interface for your account (IAM user). The IAM user must have S3 full access. You can use this link:
    https://console.aws.amazon.com/iam/home?#/security_credentials

    NOTE: It is recommended that you mount Amazon S3 buckets as a regular user with restricted permissions and use users with administrative permissions only for generating keys.

  5. These keys are needed for AWS API access. You must have administrative permissions to generate the AWS access key ID and AWS secret access key. If you don’t have enough permissions, ask your system administrator to generate the AWS keys for you. An administrator can generate the keys for a user account in the Users section of the AWS console, on the Security credentials tab, by clicking the Create access key button.
  6. In the Create access key popup window, click Download .csv file or click Show under the Secret access key row name. This is the only time you can see the secret access key in the AWS web interface. Store the AWS access key ID and secret access key in a safe place.
  7. You can open the downloaded CSV file that contains access keys in Microsoft Office 365 Excel, for example.
  8. Go back to the Ubuntu console to create a configuration file for storing the AWS access key and secret access key needed to mount an S3 bucket with S3FS. The command to do this is:

    echo ACCESS_KEY:SECRET_ACCESS_KEY > PATH_TO_FILE

    Change ACCESS_KEY to your AWS access key and SECRET_ACCESS_KEY to your secret access key.

    In this example, we will store the configuration file with the AWS keys in the home directory of our user. Make sure that you store the file with the keys in a safe place that is not accessible to unauthorized persons.

    echo AKIA4SK3HPQ9FLWO8AMB:esrhLH4m1Da+3fJoU5xet1/ivsZ+Pay73BcSnzP > ~/.passwd-s3fs

  9. Check whether the keys were written to the file:

    cat ~/.passwd-s3fs

  10. Set correct permissions for the passwd-s3fs file where the access keys are stored:

    chmod 600 ~/.passwd-s3fs

  11. Create the directory that will be used as a mount point for your S3 bucket. In this example, we create the s3-bucket directory in the user’s home directory.

    mkdir ~/s3-bucket

    You can also use an existing empty directory.

  12. The bucket used in this walkthrough is named blog-bucket01. The test1.txt file was uploaded to blog-bucket01 in Amazon S3 before mounting the bucket to a Linux directory. Note that it is not recommended to use a dot (.) in bucket names.
  13. Let’s mount the bucket. Use the following command, specifying the bucket name, the path to the directory used as the mount point, and the file that contains the AWS access key and secret access key:

    s3fs bucket-name /path/to/mountpoint -o passwd_file=/path/passwd-s3fs

    In our case, the command we use to mount our bucket is:

    s3fs blog-bucket01 ~/s3-bucket -o passwd_file=~/.passwd-s3fs

  14. The bucket is mounted. We can run the following commands to check whether blog-bucket01 has been mounted to the s3-bucket directory:

    mount | grep bucket

    df -h | grep bucket

  15. Let’s check the contents of the directory to which the bucket has been mounted:

    ls -al ~/s3-bucket

    As you can see in the screenshot below, the test1.txt file that was uploaded via the web interface earlier is displayed in the console output.

  16. Now you can create a new file on your hard disk drive and copy it to the S3 bucket from the Linux console:

    echo test2 > test2.txt

    cp test2.txt ~/s3-bucket/

  17. Refresh the AWS web page where the files in your bucket are displayed. You should see the new test2.txt file that you copied to the S3 bucket from the Linux console by using the directory to which the bucket is mounted.
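
If the bucket does not mount or files are not listed, a useful troubleshooting step is to unmount the directory and rerun S3FS in the foreground with debug output (a sketch using standard S3FS debug options; adjust the bucket name and paths to your setup):

    # Unmount the bucket first
    fusermount -u ~/s3-bucket

    # Run s3fs in the foreground (-f) with verbose debug messages from s3fs and curl
    s3fs blog-bucket01 ~/s3-bucket -o passwd_file=~/.passwd-s3fs -f -o dbglevel=info -o curldbg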

How to mount an S3 bucket on Linux boot automatically

If you want an S3 bucket to be mounted automatically with S3FS when your Linux machine boots, create the passwd-s3fs file in /etc/passwd-s3fs, which is the standard location. After creating this file, you don’t need the -o passwd_file option to set the location of the file with your AWS keys manually.
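
For example, once the keys are stored in the standard location, the mount command from the previous section can be shortened, because S3FS checks ~/.passwd-s3fs and then /etc/passwd-s3fs automatically:

    s3fs blog-bucket01 /home/user1/s3-bucket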

  1. Create the /etc/passwd-s3fs file:

    vim /etc/passwd-s3fs

    NOTE: If the vim text editor is not installed on your Linux machine yet, run the apt-get install vim command.

  2. Enter your AWS access key and secret access key as explained above.

    AKIA4SK3HPQ9FLWO8AMB:esrhLH4m1Da+3fJoU5xet1/ivsZ+Pay73BcSnzcP

    As an alternative, you can store the keys in the /etc/passwd-s3fs file with the command:

    echo AKIA4SK3HPQ9FLWO8AMB:esrhLH4m1Da+3fJoU5xet1/ivsZ+Pay73BcSnzcP > /etc/passwd-s3fs

  3. Set the required permissions for the /etc/passwd-s3fs file:

    chmod 640 /etc/passwd-s3fs

  4. Edit the FUSE configuration file:

    vim /etc/fuse.conf

  5. Uncomment the user_allow_other line if you want to allow other (non-root) users on your Linux machine to use the mounted Amazon S3 storage for file sharing.
  6. Open /etc/fstab with a text editor:

    vim /etc/fstab

  7. Add this line at the end of the file:

    s3fs#blog-bucket01 /home/user1/s3-bucket/ fuse _netdev,allow_other,url=https://s3.amazonaws.com 0 0


  8. Save the edited /etc/fstab file and quit the text editor.

    Note: If you want to set the owner and group, use the -o uid=1001 -o gid=1001 -o mp_umask=002 parameters (change the numeric values of the user ID, group ID, and umask according to your configuration). If you want to enable caching, use the -o use_cache=/tmp parameter (set a custom directory instead of /tmp if needed). You can set the number of times to retry mounting a bucket if the initial attempt fails by using the retries parameter; for example, retries=5 sets five tries. An example fstab entry combining these options is shown at the end of this section.

  9. Reboot the Ubuntu machine to check whether the S3 bucket is mounted automatically on system boot:

    init 6

  10. Wait until your Linux machine is booted.
  11. You can run the following commands to check whether the AWS S3 bucket was mounted automatically to the s3-bucket directory on Ubuntu boot:

    mount | grep bucket
    df -h | grep bucket
    ls -al /home/user1/s3-bucket/

In our case, the Amazon S3 bucket was mounted automatically to the specified Linux directory on Ubuntu boot (see the screenshot below). The configuration was applied successfully.

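As an example of the note in step 8, here is a hypothetical fstab entry that combines the ownership, cache, and retry options with the mount line used above (change the uid and gid values and the paths to match your system):

    s3fs#blog-bucket01 /home/user1/s3-bucket/ fuse _netdev,allow_other,uid=1001,gid=1001,mp_umask=002,use_cache=/tmp,retries=5,url=https://s3.amazonaws.com 0 0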

S3FS also supports working with rsync and file caching to reduce traffic.
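
For instance, a one-way copy of a local directory into the mounted bucket with rsync could look like this (a sketch assuming the mount point from the steps above; the backups directory is a hypothetical example):

    # Recursively copy new and changed files into the mounted bucket
    rsync -av --progress /home/user1/backups/ /home/user1/s3-bucket/backups/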

Mounting Amazon S3 Cloud Storage in Windows

You can try wins3fs, a solution equivalent to S3FS for mounting Amazon S3 cloud storage as a network disk in Windows. However, in this section we are going to use rclone, a command-line tool that can mount and synchronize cloud storage such as Amazon S3 buckets, Google Cloud Storage, Google Drive, Microsoft OneDrive, Dropbox, and so on.

Rclone is a free, open-source tool that can be downloaded from the official website (rclone.org) or from GitHub. In this walkthrough, we use the direct download link from the official website shown in step 4 below.

You can use the same workflow for newer versions of rclone after they are released. The following actions are performed in the command-line interface and may be useful for users who run Windows without a GUI on servers or VMs.

  1. Open Windows PowerShell as Administrator.
  2. Create the directory to download and store rclone files:

    mkdir c:\rclone

  3. Go to the created directory:

    cd c:\rclone

  4. Download rclone by using the direct link from the official website. Edit the version number in the link if you download another version.

    Invoke-WebRequest -Uri "https://downloads.rclone.org/v1.51.0/rclone-v1.51.0-windows-amd64.zip" -OutFile "c:\rclone\rclone.zip"

  5. Extract files from the downloaded archive:

    Expand-Archive -path 'c:\rclone\rclone.zip' -destinationpath '.\'

  6. Check the contents of the directory:

    dir


  7. The files are extracted to C:\rclone\rclone-v1.51.0-windows-amd64 in this case.

    NOTE: In this example, the name of the rclone directory after extracting files is rclone-v1.51.0-windows-amd64. However, it is not recommended to use dots (.) in directory names. You can rename the directory to rclone-v1-51-win64, for example.

  8. Let’s copy the extracted files to C:\rclone\ to avoid dots in the directory name:

    cp C:\rclone\rclone-v1.51.0-windows-amd64\*.* C:\rclone\


  9. Run rclone in configuration mode:

    .\rclone.exe config


  10. The configuration tool works as a wizard in the command line. You select the needed parameters at each step of the wizard.
  11. Type n and press Enter to select the New remote option.

    n/s/q> n

  12. Enter a name for the new remote. We use the bucket name blog-bucket01 for convenience:

    name> blog-bucket01

  13. After entering the name, select the type of cloud storage to configure. Type 4 to select Amazon S3 cloud storage.

    Storage> 4

  14. Choose your S3 provider. Type 1 to select Amazon Web Services S3.

    provider> 1


  15. Get AWS credentials from runtime (true or false). 1 (false) is used by default. Press Enter without typing anything to use the default value.

    env_auth> 1

  16. Enter your AWS access key:

    access_key_id> AKIA4SK3HPQ9FLWO8AMB

  17. Enter your secret access key:

    secret_access_key> esrhLH4m1Da+3fJoU5xet1/ivsZ+Pay73BcSnzcP

  18. Select the region to connect to. EU (Ireland) eu-west-1 is used for our bucket in this example, so we type 6.

    region> 6

  19. Endpoint for S3 API. Leave it blank when using AWS so that the default endpoint for the region is used. Press Enter.

    Endpoint>

  20. The location constraint must be set to match the region. Type 6 to select the EU (Ireland) region \ “eu-west-1”.

    location_constraint> 6

  21. Canned ACL used when creating buckets and storing or copying objects. Press Enter to use the default parameters.

    acl>

  22. Specify the server-side encryption algorithm used when storing objects in S3. In our case encryption is disabled, so we type 1 (None).

    server_side_encryption> 1

  23. If you use a KMS key ID, you must provide the ARN of the key. As encryption is not used, type 1 (None).

    sse_kms_key_id> 1

  24. Select the storage class to use when storing new objects in S3. The standard storage class (option 2) is suitable in our case.

    storage_class> 2

  25. Edit advanced config? (y/n)

    y/n> n

  26. Check your configuration and type y (yes) if everything is correct.

    t/e/d> y

  27. Type q to quit the configuration wizard.

    e/n/d/r/c/s/q> q

  28. Rclone is now configured to work with Amazon S3 cloud storage. Make sure the date and time settings on your Windows machine are correct. Otherwise, an error can occur when mounting an S3 bucket as a network drive: Time may be set wrong. The difference between the request time and the current time is too large. (A command to resynchronize the clock is shown after this list.)
  29. Run rclone from the directory where rclone.exe is located and list the buckets available for your AWS account:

    .\rclone.exe lsd blog-bucket01:


  30. You can add c:\rclone to the Path environment variable. This allows you to run rclone from any directory without switching to the directory where rclone.exe is stored (an example command is shown after this list).
  31. As you can see in the screenshot above, access to Amazon S3 cloud storage is configured correctly and a list of buckets is displayed, including blog-bucket01, which is used in this tutorial.
  32. Install Chocolatey, a Windows package manager that can be used to install applications from online repositories:

    Set-ExecutionPolicy Bypass -Scope Process -Force; `
      iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

  33. WinFSP (Windows File System Proxy) is the Windows analog of Linux FUSE: it is fast, stable, and allows you to create user-mode file systems.

    Install WinFSP from Chocolatey repositories:

    choco install winfsp -y


  34. Now you can mount your Amazon S3 bucket to your Windows system as a network drive. Let’s mount blog-bucket01 as drive S:

    .\rclone mount blog-bucket01:blog-bucket01/ S: --vfs-cache-mode full

    Here, the first “blog-bucket01” is the remote name entered in the first step of the rclone configuration wizard, and the second “blog-bucket01” (after the colon) is the Amazon S3 bucket name set in the AWS web interface.

  35. List all connected disks and partitions (gdr is an alias for Get-PSDrive):

    gdr -PSProvider 'FileSystem'

  36. Check the content of the mapped network drive:

    ls S:

  37. The S3 bucket is now mounted as a network drive (S:). Because rclone mount keeps running in the current console, open another instance of Windows PowerShell or the Windows command line to see the three txt files stored in blog-bucket01 in Amazon S3 cloud storage.
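
Two of the steps above can be handled with short PowerShell commands. The following sketch shows how you might resynchronize the system clock if you hit the time-difference error from step 28, and how to add c:\rclone to the Path variable as suggested in step 30 (run from an elevated PowerShell; the exact Path handling may need adjusting for your environment):

    # Resynchronize the Windows clock against the configured time source (step 28)
    w32tm /resync

    # Append C:\rclone to the user-level Path variable (step 30); takes effect in new sessions
    [Environment]::SetEnvironmentVariable("Path", [Environment]::GetEnvironmentVariable("Path", "User") + ";C:\rclone", "User")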

If your Windows machine has a graphical user interface, you can also use it to download and upload files to your Amazon S3 cloud storage. If you copy a file by using a Windows interface (graphical or command line), the data is synchronized in a moment and the new file appears in both the Windows interface and the AWS web interface.


If you press Ctrl+C or close the CMD or PowerShell window where rclone is running (the instance displaying “The service rclone has been started”), your Amazon S3 bucket is disconnected from the mount point (S: in this case).

How to automate mounting an S3 bucket on Windows boot

Having the bucket mounted as a network drive automatically on Windows boot is convenient. Let’s find out how to configure automatic mounting of the S3 bucket in Windows.

  1. Create the rclone-S3.cmd file in the C:\rclone\ directory.
  2. Add this line to the rclone-S3.cmd file:

    C:\rclone\rclone.exe mount blog-bucket01:blog-bucket01/ S: --vfs-cache-mode full

  3. Save the CMD file. You can run this CMD file instead of typing the command to mount the S3 bucket manually.
  4. Copy the rclone-S3.cmd file to the startup folder for all users:

    C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp

  5. As an alternative, you can create a shortcut to C:\Windows\System32\cmd.exe and set the arguments needed to mount an S3 bucket in the target properties:

    C:\Windows\System32\cmd.exe /k cd c:\rclone & rclone mount blog-bucket01:blog-bucket01/ S: --vfs-cache-mode full


  6. Then add the edited shortcut to the Windows startup folder:

    C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp

There is a small disadvantage: a command-line window with the “The service rclone has been started” message remains open after the S3 bucket is attached to your Windows machine as a network drive. As an alternative, you can configure automatic mounting of the S3 bucket by using Windows Task Scheduler or NSSM, a free tool for creating Windows services and configuring their automatic startup, as sketched below.
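
For example, a hypothetical NSSM configuration that runs rclone as a Windows service without a visible console window could look like this (run from an elevated prompt; the service name rclone-S3 is arbitrary, and you may need to pass --config with the full path to your rclone.conf, because services do not run under your user profile):

    # Create a service that runs the rclone mount command in the background
    nssm install rclone-S3 "C:\rclone\rclone.exe" "mount blog-bucket01:blog-bucket01/ S: --vfs-cache-mode full"

    # Start the service
    nssm start rclone-S3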

Mounting an S3 Bucket as a File System in macOS

You can mount an Amazon S3 bucket in macOS in much the same way as in Linux: install S3FS, then set the permissions and the Amazon keys.

In this example, macOS 10.15 Catalina is used. You can use this configuration principle in newer versions as well. The name of the S3 bucket is blog-bucket01, the macOS user name is user1, and the directory used as a mount point for the bucket is /Volumes/s3-bucket/.

Let’s look at the configuration step by step.

  1. Install Homebrew, a package manager for macOS used to install applications from online software repositories:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

  2. Install osxfuse:

    brew cask install osxfuse

  3. Reboot the system:

    sudo shutdown -r now

  4. Install S3FS:

    brew install s3fs

  5. Once S3FS is installed, set the access key and secret access key for your Amazon S3 bucket. You can define the keys for the current session only if you need to mount the bucket once or plan to mount it infrequently:

    export AWSACCESSKEYID=AKIA4SK3HPQ9FLWO8AMB

    export AWSSECRETACCESSKEY=esrhLH4m1Da+3fJoU5xet1/ivsZ+Pay73BcSnzP

  6. If you are going to use a mounted bucket regularly, set your AWS keys in the configuration file used by S3FS for your macOS user account:

    echo AKIA4SK3HPQ9FLWO8AMB:esrhLH4m1Da+3fJoU5xet1/ivsZ+Pay73BcSnzP > ~/.passwd-s3fs

  7. If you have multiple buckets with different access keys, define them in the format:

    echo bucket-name:access-key:secret-key > ~/.passwd-s3fs

  8. Set the correct permissions to allow read and write access only for the owner:

    chmod 600 ~/.passwd-s3fs

  9. Create a directory to be used as a mount point for the Amazon S3 bucket:

    sudo mkdir -p /Volumes/s3-bucket/

  10. Set your user account as the owner of the created directory:
    sudo chown user1 /Volumes/s3-bucket/
  11. Mount the bucket with S3FS:

    s3fs blog-bucket01 /Volumes/s3-bucket/

  12. A macOS security warning is displayed in a dialog window. Click Open System Preferences to allow the S3FS application and related connections.
  13. In the Security & Privacy window, click the lock to make changes and then hit the Allow button.
  14. Run the mounting command once again:

    s3fs blog-bucket01 /Volumes/s3-bucket/

  15. A popup warning message is displayed: Terminal would like to access files on a network volume.

    Click OK to allow access.

  16. Check whether the bucket has been mounted:

    mount | grep bucket

  17. Check the contents of the bucket:

    ls -al /Volumes/s3-bucket/

  18. The bucket is mounted successfully. You can view, copy, and delete files in the bucket.
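
When you no longer need access to the bucket, you can detach it like any other volume (using the mount point created in step 9):

    umount /Volumes/s3-bucket/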

You can also configure mounting of an S3 bucket on user login with launchd, as sketched below.
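
A minimal sketch of such a launchd agent follows. The label com.example.s3fs, the plist path, and the s3fs binary location /usr/local/bin/s3fs are assumptions; adjust them for your system. Save the file as ~/Library/LaunchAgents/com.example.s3fs.plist:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <!-- Unique identifier for this agent (hypothetical example label) -->
        <key>Label</key>
        <string>com.example.s3fs</string>
        <!-- Run s3fs in the foreground (-f) so launchd can manage the process -->
        <key>ProgramArguments</key>
        <array>
            <string>/usr/local/bin/s3fs</string>
            <string>blog-bucket01</string>
            <string>/Volumes/s3-bucket/</string>
            <string>-f</string>
        </array>
        <key>RunAtLoad</key>
        <true/>
    </dict>
    </plist>

Then load the agent for the current user:

    launchctl load ~/Library/LaunchAgents/com.example.s3fs.plist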

Conclusion

Knowing how to mount Amazon S3 cloud storage as a file system on the most popular operating systems makes sharing files with Amazon S3 more convenient. An Amazon S3 bucket can be mounted by using S3FS in Linux and macOS, and by using rclone or wins3fs in Windows. Automating the copying of data to Amazon S3 buckets after mounting them to local directories of your operating system is more convenient than using the web interface.

You can copy your data to Amazon S3 to create a backup by using the interface of your operating system, or use dedicated backup applications that access S3 buckets through the AWS APIs. NAKIVO Backup & Replication is a complete data protection solution with integrated support for S3 buckets as backup targets. You can use the solution to back up data in VMware VMs, Hyper-V VMs, and EC2 instances to Amazon S3.
