Amazon S3
Unlimited cloud storage. Upload anything, get a URL.
What is S3?
Unlimited file storage in the cloud. Upload any file, get a URL. Designed for 11 nines (99.999999999%) of durability; files replicate across 3+ Availability Zones automatically. Never worry about disk space again.
Infinite cloud storage
Store any file (up to 5TB per object), access it anywhere. Never runs out of space. Your data is replicated 3+ times automatically.
Key Features
Standard
Default choice. Fast access, pay for what you store.
Intelligent-Tiering
Auto-moves files between tiers. Best for unpredictable access.
Glacier Instant
Archive with millisecond access. Up to 68% cheaper than Standard-IA.
Glacier Flexible
Cheap archive. Retrieval takes minutes to hours.
Glacier Deep Archive
Cheapest storage. Retrieval takes 12+ hours.
Versioning
Keep file history. Recover from accidental deletes.
When to Use
- Static website hosting
- Backup and disaster recovery
- Data lake for analytics
- Application assets and media
- Log storage and archival
- Software distribution
When Not to Use
- Need a file system (POSIX) → EFS
- Block storage for EC2 → EBS
- Database workloads → RDS/DynamoDB
- Low latency (<10ms) → ElastiCache
- Frequently changing small files → EFS
- Real-time data → Kinesis
Prerequisites
- An AWS account
- AWS CLI installed (optional)
- Basic understanding of file storage
AWS Console Steps
Open S3 Console
Navigate to S3 in the AWS Console
Create Bucket
Click 'Create bucket' and choose a globally unique name
Configure Settings
Set region, versioning, encryption (defaults are good)
Upload Objects
Click 'Upload' and add your files
Set Permissions
Configure bucket policy or ACLs if needed
AWS CLI Quickstart
Common S3 operations with AWS CLI
# Create a bucket
aws s3 mb s3://my-unique-bucket-name-12345
# Upload a file
aws s3 cp myfile.txt s3://my-bucket/
# Upload a folder
aws s3 sync ./myfolder s3://my-bucket/myfolder/
# List bucket contents
aws s3 ls s3://my-bucket/
Basic S3 operations. The 'sync' command is especially useful for backups - it only uploads changed files.
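The quickstart uploads everything to the Standard class. When you already know a file's access pattern, you can set a class at upload time instead of waiting for lifecycle rules. A minimal sketch - the bucket and file names are placeholders:

# Upload directly to a cheaper storage class
aws s3 cp backup.tar.gz s3://my-bucket/ --storage-class STANDARD_IA
# Let S3 pick the tier when access patterns are unknown
aws s3 cp data.csv s3://my-bucket/ --storage-class INTELLIGENT_TIERING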
First Project Ideas
- Host a static website with S3 + CloudFront (CLI sketch below)
- Create an automated backup solution
- Build a simple file sharing system (presigned-URL sketch below)
- Store and serve images for a web app
- Create a data lake for analytics
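Two of these ideas start with a single CLI call each. A sketch, assuming a bucket named my-bucket already exists (the website command also requires the bucket to allow public reads):

# File sharing: generate a temporary download link (expires in 1 hour)
aws s3 presign s3://my-bucket/report.pdf --expires-in 3600
# Static website: serve index.html from the bucket
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html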
Pro Tips
Transfer Acceleration
Routes uploads through CloudFront edge locations. Reduces latency 50-500% for global users. Enable it for global users uploading files >1GB; skip it if uploads come from the same region as the bucket.
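To try it, enable acceleration on the bucket and route transfers through the accelerate endpoint. A sketch; the bucket name is a placeholder:

# Enable Transfer Acceleration (one-time bucket setting)
aws s3api put-bucket-accelerate-configuration \
    --bucket my-bucket --accelerate-configuration Status=Enabled
# Make the CLI use the accelerate endpoint for transfers
aws configure set default.s3.use_accelerate_endpoint true
aws s3 cp bigfile.bin s3://my-bucket/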
Multipart upload
Splits large files into parts uploaded in parallel; failed parts can be retried individually. Use for files >100MB. Add a lifecycle rule to abort incomplete uploads after 7 days - incomplete uploads still cost money.
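The CLI switches to multipart automatically past a size threshold; both the threshold and part size are tunable, and you can list uploads that never finished. A sketch using the CLI's s3 config keys:

# Upload as parallel parts once files exceed 100MB
aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 25MB
# Find incomplete uploads that are still billing you
aws s3api list-multipart-uploads --bucket my-bucket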
S3 Select
Query CSV/JSON/Parquet with SQL and download only the matching rows - can reduce transfer 4x. Filter large files before downloading; use Athena for complex multi-file queries.
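A sketch of a Select call, assuming a CSV with a header row; the bucket, key, and status column are placeholders:

# Download only the rows that match the SQL filter
aws s3api select-object-content \
    --bucket my-bucket --key logs/events.csv \
    --expression "SELECT * FROM s3object s WHERE s.status = '500'" \
    --expression-type SQL \
    --input-serialization '{"CSV": {"FileHeaderInfo": "USE"}}' \
    --output-serialization '{"CSV": {}}' \
    matches.csv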
Lifecycle rules
Auto-move files to cheaper storage, delete old versions, and clean up incomplete uploads. Create lifecycle rules on day one; skip IA for files <128KB (a minimum object charge applies).
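A sketch of one rule that does all three; the 90-day and 7-day windows are assumptions to tune:

# lifecycle.json - archive after 90 days, expire old versions,
# abort incomplete multipart uploads after 7 days:
# {"Rules": [{"ID": "archive-and-clean", "Status": "Enabled",
#   "Filter": {"Prefix": ""},
#   "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
#   "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
#   "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}}]}
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket --lifecycle-configuration file://lifecycle.json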
Versioning
Keeps all file versions so you can recover from accidental deletes; pair it with lifecycle rules. Enable on production buckets - versions pile up without expiration rules.
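Enabling it is a single call; the bucket name is a placeholder:

# Turn on versioning (can be suspended later, never fully disabled)
aws s3api put-bucket-versioning --bucket my-bucket \
    --versioning-configuration Status=Enabled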
Bucket policies
Control access at the bucket level: enforce HTTPS, require encryption, block public access. Block public access by default, and keep policies simple - overly complex policies are error-prone.
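A sketch of both moves; the bucket name is a placeholder:

# Block all public access on the bucket
aws s3api put-public-access-block --bucket my-bucket \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
# policy.json - deny any request not made over HTTPS:
# {"Version": "2012-10-17", "Statement": [{"Effect": "Deny",
#   "Principal": "*", "Action": "s3:*",
#   "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
#   "Condition": {"Bool": {"aws:SecureTransport": "false"}}}]}
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json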
Encryption
SSE-S3 is free and automatic. SSE-KMS adds an audit trail; enable Bucket Keys to cut KMS costs by up to 99%. Use SSE-KMS when you need CloudTrail auditing - without Bucket Keys it gets expensive fast.
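A sketch that defaults a bucket to SSE-KMS with Bucket Keys on; the key alias is a placeholder:

# Encrypt new objects with KMS; Bucket Keys cut KMS request costs
aws s3api put-bucket-encryption --bucket my-bucket \
    --server-side-encryption-configuration '{"Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "alias/my-key"},
      "BucketKeyEnabled": true}]}'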
Intelligent-Tiering
Auto-moves files between 5 tiers with no retrieval fees; best when access patterns are unpredictable or unknown. Skip it for small files - the monitoring fee exceeds the savings.
Key Facts
11 nines durability (99.999999999%)
Data is replicated across 3+ AZs. Store 10 million objects and you can expect to lose one, on average, every 10,000 years.
Max object size: 5TB; max single PUT: 5GB
Use multipart for files >100MB. Parts: 5MB-5GB, max 10,000 parts.
100 buckets per account (can be increased to 1,000)
Bucket names globally unique. Unlimited objects per bucket.
7 storage classes from hot to cold
Standard, Intelligent-Tiering, Standard-IA, One Zone-IA, Glacier Instant, Glacier Flexible, Deep Archive.
Glacier retrieval: Instant = milliseconds, Flexible = minutes to hours, Deep = 12-48 hours
Flexible offers Expedited (1-5 min), Standard (3-5 hrs), and Bulk (5-12 hrs). Deep Archive: 12-48 hrs only.
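Flexible and Deep Archive objects must be restored before they can be read. A sketch of a Standard-tier restore; the bucket, key, and duration are placeholders:

# Stage an archived object for 7 days of normal access
aws s3api restore-object --bucket my-bucket --key archive/data.tar.gz \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'
# Check whether the restore has finished
aws s3api head-object --bucket my-bucket --key archive/data.tar.gz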
Strong consistency for all operations
Reads immediately reflect writes. No eventual consistency.
3,500 PUT and 5,500 GET requests per second per prefix
Distribute keys across prefixes for higher throughput.
Versioning: once enabled, cannot be fully disabled
It can only be suspended. A delete creates a delete marker (recoverable); deleting a specific version is permanent.
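On a versioned bucket, "undeleting" a file means removing its delete marker. A sketch; the key and version ID are placeholders:

# Find the delete marker's version ID
aws s3api list-object-versions --bucket my-bucket --prefix report.pdf
# Deleting the marker itself restores the previous version
aws s3api delete-object --bucket my-bucket --key report.pdf \
    --version-id EXAMPLEVERSIONID123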