Amazon S3

Unlimited cloud storage. Upload anything, get a URL.

Tags: storage · Free Tier · beginner

  • Durability: 11 9s (99.999999999%)
  • Storage: unlimited, no capacity limits
  • Storage classes: 7, from Standard to Glacier Deep Archive
  • Max object size: 5TB (single object limit)

What is S3?

Unlimited file storage in the cloud. Upload any file, get a URL. 11 nines durability. Files replicate across 3+ data centers automatically. Never worry about disk space again.

Infinite cloud storage

Store any file, any size, access anywhere. Never runs out of space. Your data is replicated 3+ times automatically.
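The "11 nines" figure supports a quick back-of-envelope check (a sketch of the arithmetic behind AWS's own example; S3 is designed for this durability rather than contractually guaranteeing it):

```python
# Annual durability of 99.999999999% ("11 nines") means the chance of
# losing any given object in a year is about 1e-11.
annual_loss_prob = 1 - 0.99999999999

# Expected losses per year if you store 10 million objects.
objects = 10_000_000
expected_losses_per_year = objects * annual_loss_prob

# On average: one lost object every ~10,000 years.
years_per_loss = 1 / expected_losses_per_year
print(f"{expected_losses_per_year:.4g} losses/year, one loss every {years_per_loss:,.0f} years")
```

That is where the "lose 1 object per 10 million every 10,000 years" figure quoted below comes from.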

Key Features

📦 Standard

Default choice. Fast access, pay for what you store.

🧠 Intelligent-Tiering

Auto-moves files between tiers. Best for unpredictable access.

โ„๏ธ

Glacier Instant

Archive with millisecond access. 68% cheaper than Standard.

🧊 Glacier Flexible

Cheap archive. Retrieval takes minutes to hours.

🏔️ Glacier Deep Archive

Cheapest storage. Retrieval takes 12+ hours.

🔒 Versioning

Keep file history. Recover from accidental deletes.

When to Use

  • Static website hosting
  • Backup and disaster recovery
  • Data lake for analytics
  • Application assets and media
  • Log storage and archival
  • Software distribution

When Not to Use

  • Need file system (POSIX) → EFS
  • Block storage for EC2 → EBS
  • Database workloads → RDS/DynamoDB
  • Low latency <10ms → ElastiCache
  • Frequently changing small files → EFS
  • Real-time data → Kinesis

Prerequisites

  • An AWS account
  • AWS CLI installed (optional)
  • Basic understanding of file storage

AWS Console Steps

1. Open S3 Console: navigate to S3 in the AWS Console
2. Create Bucket: click 'Create bucket' and choose a globally unique name
3. Configure Settings: set region, versioning, encryption (defaults are good)
4. Upload Objects: click 'Upload' and add your files
5. Set Permissions: configure bucket policy or ACLs if needed

AWS CLI Quickstart

Common S3 operations with AWS CLI:
# Create a bucket
aws s3 mb s3://my-unique-bucket-name-12345

# Upload a file
aws s3 cp myfile.txt s3://my-bucket/

# Upload a folder
aws s3 sync ./myfolder s3://my-bucket/myfolder/

# List bucket contents
aws s3 ls s3://my-bucket/

# ...

Basic S3 operations. The 'sync' command is especially useful for backups - it only uploads changed files.

First Project Ideas

  • Host a static website with S3 + CloudFront
  • Create an automated backup solution
  • Build a simple file sharing system
  • Store and serve images for a web app
  • Create a data lake for analytics

Pro Tips

Transfer Acceleration (performance)

Routes uploads through CloudFront edge locations. Reduces latency 50-500% for global users.

  • Enable for global users uploading files >1GB
  • Skip if uploads come from the same region as the bucket

Multipart upload (performance)

Split large files into parallel parts. Retry failed parts. Use for files >100MB.

  • Add a lifecycle rule to abort incomplete uploads after 7 days
  • Incomplete uploads still cost money
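Part sizing has to respect two limits at once: parts must be 5MB-5GB, and an upload can have at most 10,000 parts. A sketch of the trade-off (`choose_part_size` is a hypothetical helper, not an AWS API; real SDKs like boto3 handle this for you):

```python
MIB = 1024 * 1024
GIB = 1024 * MIB

MIN_PART = 5 * MIB    # S3 minimum part size (except the last part)
MAX_PART = 5 * GIB    # S3 maximum part size
MAX_PARTS = 10_000    # S3 maximum number of parts per upload

def choose_part_size(file_size: int, preferred: int = 100 * MIB) -> int:
    """Pick a part size that keeps the part count within S3's limits.

    Hypothetical helper: starts from a preferred part size and doubles
    it until the file fits in at most 10,000 parts.
    """
    part = max(preferred, MIN_PART)
    while -(-file_size // part) > MAX_PARTS:  # ceiling division
        part *= 2
    if part > MAX_PART:
        raise ValueError("file exceeds S3's object size limits")
    return part

# Under this doubling scheme, a 5 TB upload ends up with 800 MiB parts.
print(choose_part_size(5 * 1024 * GIB) // MIB, "MiB parts")
```

This also shows why the default 100MB threshold works: anything under about 1TB fits in 10,000 parts at 100 MiB each, so only very large objects need bigger parts.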

S3 Select (cost)

Query CSV/JSON/Parquet with SQL. Only download matching rows. Reduces transfer 4x.

  • Filter large files before downloading
  • Use Athena for complex multi-file queries

Lifecycle rules (cost)

Auto-move files to cheaper storage. Delete old versions. Clean up incomplete uploads.

  • Create lifecycle rules on day one
  • Skip IA for files <128KB (minimum charge applies)
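A lifecycle configuration is just a JSON document. A sketch of the shape accepted by `aws s3api put-bucket-lifecycle-configuration --lifecycle-configuration file://rules.json` (rule IDs, prefixes, and day counts are illustrative):

```python
import json

lifecycle = {
    "Rules": [
        {
            # Tier down logs as they age (example rule).
            "ID": "tier-down-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        },
        {
            # Housekeeping: applies to the whole bucket (empty filter).
            "ID": "housekeeping",
            "Status": "Enabled",
            "Filter": {},
            # Abort incomplete multipart uploads that would otherwise bill forever.
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            # Expire old versions so they don't pile up.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
        },
    ]
}

rules_json = json.dumps(lifecycle, indent=2)
print(rules_json)
```

Write `rules_json` to a file and pass it to the CLI command above, or set the same rules in the console under the bucket's Management tab.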

Versioning (security)

Keep all file versions. Recover from accidental deletes. Pair with lifecycle rules.

  • Enable on production buckets
  • Versions pile up without expiration rules

Bucket policies (security)

Control access at bucket level. Enforce HTTPS. Require encryption. Block public access.

  • Block public access by default
  • Overly complex policies are error-prone

Encryption (security)

SSE-S3 is free and automatic. SSE-KMS adds an audit trail. Enable Bucket Keys to cut KMS request costs by up to 99%.

  • Use SSE-KMS when you need a CloudTrail audit trail
  • SSE-KMS without Bucket Keys gets expensive fast

Intelligent-Tiering (cost)

Auto-moves files between 5 tiers. No retrieval fees. Best for unpredictable access patterns.

  • Use when access patterns are unknown
  • Skip for small files (monitoring fee exceeds savings)
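The small-file caveat can be made concrete. A back-of-envelope estimate with approximate us-east-1 list prices (illustrative; verify against current AWS pricing), consistent with S3 not auto-tiering objects under 128KB at all:

```python
# Approximate prices (us-east-1, illustrative).
MONITORING_PER_OBJECT = 0.0025 / 1000   # $/object-month monitoring fee
STANDARD = 0.023                        # $/GB-month, frequent-access tier
IA_TIER = 0.0125                        # $/GB-month, infrequent-access tier

# An object only saves money once the storage savings from sitting in
# the infrequent-access tier outweigh the per-object monitoring fee.
savings_per_gb = STANDARD - IA_TIER
break_even_gb = MONITORING_PER_OBJECT / savings_per_gb
break_even_kb = break_even_gb * 1024 * 1024
print(f"break-even object size ≈ {break_even_kb:.0f} KB")
```

Objects well below this break-even size pay more in monitoring than they could ever save, which is why the 128KB floor exists.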

Key Facts

11 nines durability (99.999999999%) [behavior]

Data replicated across 3+ AZs. Lose 1 object per 10 million every 10,000 years.

Max object: 5TB. Max single PUT: 5GB [limit]

Use multipart for files >100MB. Parts: 5MB-5GB, max 10,000 parts.

100 buckets per account (can increase to 1,000) [limit]

Bucket names are globally unique. Unlimited objects per bucket.

7 storage classes from hot to cold [behavior]

Standard, Intelligent-Tiering, Standard-IA, One Zone-IA, Glacier Instant, Glacier Flexible, Deep Archive.

Glacier retrieval: Instant = milliseconds, Flexible = minutes to hours, Deep Archive = 12-48 hrs [behavior]

Flexible retrieval tiers: Expedited (1-5 min), Standard (3-5 hrs), Bulk (5-12 hrs). Deep Archive: 12-48 hrs only.

Strong consistency for all operations [behavior]

Reads immediately reflect writes. No eventual consistency.

3,500 PUT and 5,500 GET per second per prefix [limit]

Distribute keys across prefixes for higher throughput.
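Since the limit applies per prefix, a common pattern is to hash keys into a small set of prefixes so load spreads evenly. A minimal sketch (`prefixed_key` is a hypothetical helper; with a fanout of 16, aggregate capacity scales toward 16 x 3,500 PUT/s and 16 x 5,500 GET/s):

```python
import hashlib

def prefixed_key(key: str, fanout: int = 16) -> str:
    """Spread object keys across `fanout` prefixes by hashing the key.

    Hypothetical helper: the hex shard becomes the first path segment,
    so each shard is a separate prefix for S3's request-rate limits.
    """
    shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % fanout
    return f"{shard:02x}/{key}"

print(prefixed_key("2024/06/01/server-42.log"))
```

Note that S3 also scales per-prefix capacity automatically over time; explicit sharding like this mainly helps sudden, sustained bursts.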

Versioning: once enabled, cannot be fully disabled [behavior]

It can only be suspended. A delete creates a marker (recoverable); deleting a specific version is permanent.

AWS Certification Practice

  • Which storage strategy? (medium: SAA-C03, SOA-C02)
  • Which S3 feature? (medium: SCS-C02, SAA-C03)
  • How to improve upload performance? (medium: SAA-C03, SAP-C02)
  • How to recover? (easy: SAA-C03, SOA-C02)