This article shows how to mount an AWS S3 bucket to a local directory on CentOS 7 with s3fs and serve the files as static content directly through nginx/openresty. Hopefully it is a useful reference for developers facing the same task.
Install AWS s3fs
yum install epel-release
yum install s3fs-fuse
AWS S3 access credentials
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
Mount the S3 bucket at /mnt/s3bucket and give the files nginx ownership
id nginx
# look up the nginx user's uid and gid
uid=1003(nginx) gid=1003(nginx) groups=1003(nginx)
# mount the S3 bucket
s3fs -o uid=1003,gid=1003 static-xxx-pro /mnt/s3bucket -o passwd_file=${HOME}/.passwd-s3fs
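A quick sanity check that the mount is up and reports nginx ownership (a minimal sketch; the output depends on your bucket contents):
# the mount should appear with filesystem type fuse.s3fs
df -hT /mnt/s3bucket
# entries should be reported with numeric uid/gid 1003 (nginx)
ls -ln /mnt/s3bucket/
Note that by default FUSE only lets the mounting user access the files; add -o allow_other (as the fstab entry below does) if the nginx worker processes need to read them.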
Configure fstab to mount the S3 bucket automatically at boot
s3fs#static-xxx-pro /mnt/s3bucket fuse _netdev,allow_other,uid=1003,gid=1003 0 0
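Because a boot-time mount runs as root, s3fs will typically pick up credentials from the system-wide /etc/passwd-s3fs file (one of the default locations listed in the README below). A sketch for preparing that file and testing the fstab entry without rebooting:
# system-wide credential file used when mounting from fstab as root
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
# mount everything listed in /etc/fstab that is not yet mounted
mount -a
df -hT /mnt/s3bucket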
List the /mnt/s3bucket directory
ll /mnt/s3bucket/
nginx/openresty configuration
nginx.conf contents
server {
    listen 80;
    server_name localhost;

    location / {
        alias /mnt/s3bucket/;
    }
}
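To apply and verify the configuration (a sketch; index.html stands in for any object that actually exists in the bucket, and with openresty the binary may be called openresty instead of nginx):
# check the syntax and reload
nginx -t && nginx -s reload
# fetch a file straight from the mounted bucket
curl -I http://localhost/index.html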
S3 mounting reference: https://github.com/s3fs-fuse/s3fs-fuse
The original README is reproduced below:
s3fs
s3fs allows Linux and macOS to mount an S3 bucket via FUSE.
s3fs preserves the native object format for files, allowing use of other
tools like AWS CLI.
Features
- large subset of POSIX including reading/writing files, directories, symlinks, mode, uid/gid, and extended attributes
- compatible with Amazon S3, Google Cloud Storage, and other S3-based object stores
- allows random writes and appends
- large files via multi-part upload
- renames via server-side copy
- optional server-side encryption
- data integrity via MD5 hashes
- in-memory metadata caching
- local disk data caching
- user-specified regions, including Amazon GovCloud
- authenticate via v2 or v4 signatures
Installation
Many systems provide pre-built packages:
- Amazon Linux via EPEL:
  sudo amazon-linux-extras install epel
  sudo yum install s3fs-fuse
- Arch Linux:
  sudo pacman -S s3fs-fuse
- Debian 9 and Ubuntu 16.04 or newer:
  sudo apt install s3fs
- Fedora 27 or newer:
  sudo dnf install s3fs-fuse
- Gentoo:
  sudo emerge net-fs/s3fs
- RHEL and CentOS 7 or newer via EPEL:
  sudo yum install epel-release
  sudo yum install s3fs-fuse
- SUSE 12 and openSUSE 42.1 or newer:
  sudo zypper install s3fs
- macOS via Homebrew:
  brew cask install osxfuse
  brew install s3fs
Otherwise consult the compilation instructions.
Examples
s3fs supports the standard AWS credentials file stored in ${HOME}/.aws/credentials. Alternatively, s3fs supports a custom passwd file.
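For reference, a minimal ~/.aws/credentials can be written from the shell like this (placeholder values only; replace them with your own keys):
mkdir -p ${HOME}/.aws
cat > ${HOME}/.aws/credentials <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF
chmod 600 ${HOME}/.aws/credentials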
The default location for the s3fs password file can be created:
- using a .passwd-s3fs file in the user's home directory (i.e. ${HOME}/.passwd-s3fs)
- using the system-wide /etc/passwd-s3fs file
Enter your credentials in a file ${HOME}/.passwd-s3fs and set owner-only permissions:
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
Run s3fs with an existing bucket mybucket and directory /path/to/mountpoint:
s3fs mybucket /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs
If you encounter any errors, enable debug output:
s3fs mybucket /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs -o dbglevel=info -f -o curldbg
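To detach the bucket when you are done (standard FUSE behaviour rather than anything s3fs-specific):
# unmount as the mounting user; root can use umount instead
fusermount -u /path/to/mountpoint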
You can also mount on boot by entering the following line to /etc/fstab:
mybucket /path/to/mountpoint fuse.s3fs _netdev,allow_other 0 0
or, using the legacy syntax:
s3fs#mybucket /path/to/mountpoint fuse _netdev,allow_other 0 0
If you use s3fs with a non-Amazon S3 implementation, specify the URL and path-style requests:
s3fs mybucket /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs -o url=https://url.to.s3/ -o use_path_request_style
or, in /etc/fstab:
mybucket /path/to/mountpoint fuse.s3fs _netdev,allow_other,use_path_request_style,url=https://url.to.s3/ 0 0
To use IBM IAM Authentication, use the -o ibm_iam_auth option, and specify the Service Instance ID and API Key in your credentials file:
echo SERVICEINSTANCEID:APIKEY > /path/to/passwd
The Service Instance ID is only required when using the -o create_bucket option.
Note: You may also want to create the global credential file first
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
Note 2: You may also need to make sure the netfs service is started on boot.
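On CentOS 7 with systemd, the role of the legacy netfs service is played by remote-fs.target, which mounts _netdev entries once the network is up; a sketch of both variants:
# systemd (CentOS 7 and newer)
systemctl enable remote-fs.target
# legacy SysV init
# chkconfig netfs on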
Limitations
Generally S3 cannot offer the same performance or semantics as a local file system. More specifically:
- random writes or appends to files require rewriting the entire object, optimized with multi-part upload copy
- metadata operations such as listing directories have poor performance due to network latency
- eventual consistency can temporarily yield stale data (Amazon S3 Data Consistency Model)
- no atomic renames of files or directories
- no coordination between multiple clients mounting the same bucket
- no hard links
- inotify detects only local modifications, not external ones by other clients or tools
References
- goofys - similar to s3fs but has better performance and less POSIX compatibility
- s3backer - mount an S3 bucket as a single file
- S3Proxy - combine with s3fs to mount Backblaze B2, EMC Atmos, Microsoft Azure, and OpenStack Swift buckets
- s3ql - similar to s3fs but uses its own object format
- YAS3FS - similar to s3fs but uses SNS to allow multiple clients to mount a bucket
Frequently Asked Questions
- FAQ wiki page
- s3fs on Stack Overflow
- s3fs on Server Fault
License
Copyright © 2010 Randy Rizun rrizun@gmail.com
Licensed under the GNU GPL version 2
That concludes this walkthrough of mounting an AWS S3 bucket to a local directory on CentOS 7 and serving it as static files with nginx/openresty; hopefully it proves useful.