Last Updated on January 28, 2021
Amazon Simple Storage Service (S3) is one of the most widely used object storage services because of its scalability, security, performance, and data availability. That means customers of any size and in any industry can use it to store any volume of data for use cases such as websites, mobile apps, enterprise applications, and IoT devices.
Amazon S3 provides easy-to-use management features so you can organize your data appropriately to meet your business requirements.
Many of us use S3 on a daily basis, and one of the most common challenges when working with cloud storage is syncing or uploading multiple objects at once. Yes, we can drag and drop or upload files directly on the bucket page, as in the image below.
But the problem with this approach is that if you’re uploading large objects over an unstable network and a network error occurs, you have to restart the upload from the beginning.
Suppose you are uploading 2,000+ files, you have been uploading them for the last hour, and then you find out that the upload has failed; re-uploading becomes a time-consuming process. So, to overcome this problem, we have two solutions.
1. Uploading Objects Using Multipart Upload API
Multipart upload opens the gate to uploading a single object as a set of parts, and those parts can be uploaded independently and in any order.
If the transmission of any part fails, you can retransmit just that part without affecting the other parts. So, for large objects it is good practice to use multipart uploads instead of uploading the object in a single operation.
Advantages of using multipart upload:
- Improved throughput: parts can be uploaded in parallel, which improves upload speed
- Fast recovery from network issues: a failed part can be retried without re-uploading from the beginning
- Ability to pause and resume object uploads
- Ability to upload an object while you are still creating it
We can use the multipart upload API with different technologies, such as the AWS SDKs or the REST API; for more details, refer to the AWS multipart upload documentation.
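To make the flow concrete, here is a rough sketch of the same multipart steps using the AWS CLI’s low-level s3api commands (the SDKs and REST API expose the same operations). The file name, part names, and UploadId placeholder below are assumptions for illustration; only the bucket name comes from this post.

```
# Split a large local file into parts (100 MB each); names and sizes are illustrative.
split -b 100M backup.tar.gz part-

# 1. Start the multipart upload; note the UploadId in the response.
aws s3api create-multipart-upload --bucket bacancy-s3-blog --key backup.tar.gz

# 2. Upload each part with its part number, reusing the UploadId.
#    A failed part can simply be re-run without touching the others.
aws s3api upload-part --bucket bacancy-s3-blog --key backup.tar.gz \
  --part-number 1 --body part-aa --upload-id <UploadId>
aws s3api upload-part --bucket bacancy-s3-blog --key backup.tar.gz \
  --part-number 2 --body part-ab --upload-id <UploadId>

# 3. Complete the upload; parts.json lists each part's PartNumber and ETag, e.g.
#    {"Parts": [{"PartNumber": 1, "ETag": "\"etag1\""}, {"PartNumber": 2, "ETag": "\"etag2\""}]}
aws s3api complete-multipart-upload --bucket bacancy-s3-blog --key backup.tar.gz \
  --upload-id <UploadId> --multipart-upload file://parts.json
```

Note that the high-level aws s3 cp command used in the next section performs this split-and-upload automatically for large files.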
2. AWS S3 CLI
Step 1: Install the AWS CLI. With the AWS CLI we can perform S3 copy operations; you can follow the official AWS installation guide to install the CLI.
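Once the installation finishes, you can quickly confirm that the CLI is available; the version string on your machine will differ from any example output.

```
# Verify the AWS CLI installation and print its version.
aws --version
```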
Step 2: Configure the AWS profile. Using the “aws configure” command, you can set up your AWS credentials (you can find these credentials under IAM -> Users -> Security credentials tab in the AWS console).
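The command prompts for four values, as shown below; the access keys here are placeholders and the region is only an example.

```
$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json
```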
Now all the configuration settings are done, and we can access our S3 bucket named “bacancy-s3-blog” using the bucket list command below.
Step 3: List all existing buckets using the “aws s3 ls” command.
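With the bucket from this post, the output looks something like the following; the creation date shown is illustrative.

```
$ aws s3 ls
2021-01-28 10:15:30 bacancy-s3-blog
```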
Step 4: Run the copy command below based on your requirements.
- i. Copy a single file to the S3 bucket:
- “aws s3 cp file.txt s3://<your bucket name>”
- ii. Copy multiple files or an entire directory to the S3 bucket:
- “aws s3 cp <your directory path> s3://<your bucket name> --recursive”
Note: the --recursive flag tells aws s3 cp that all files in the directory must be copied recursively. A concrete example follows below.
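Here are the same commands with concrete values; the local folder ./upload-files is an assumption for illustration, and the bucket is the one configured above. The CLI also offers aws s3 sync, which uploads only new or changed files and fits the sync scenario mentioned at the start of this post.

```
# Copy a single file into the bucket.
aws s3 cp file.txt s3://bacancy-s3-blog/

# Copy every file in a local directory (and its subdirectories) into the bucket.
aws s3 cp ./upload-files s3://bacancy-s3-blog/ --recursive

# Alternative: sync the directory, uploading only files that are new or have changed.
aws s3 sync ./upload-files s3://bacancy-s3-blog/
```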
As you can see in the .gif above, even if our network connection is lost, the file upload keeps running after we reconnect, without any files being lost.
Amazon S3 Using AWS CLI: FAQs
What is AWS CLI?
The AWS Command Line Interface (AWS CLI) is a unified tool from Amazon Web Services that lets you manage and automate AWS services, including S3, directly from the command line.
How can I transfer files to AWS S3 from the CLI?
- Install the AWS CLI
- Configure the AWS profile with the “aws configure” command
- List all existing buckets using the “aws s3 ls” command
- Run the copy command based on your requirements
What are the advantages of using multipart upload?
- It improves uploading speed
- You do not need to re-upload from the beginning
- You can recover quickly from any network issue
- Uploads can be paused and resumed quickly
- It is possible to upload an object as you are creating it