AWS just made a seemingly small but disruptive change to Amazon S3’s default data integrity protections for new objects. On the surface, this is great: stronger checksums like CRC64-NVME offer speed, robust error detection, and excellent SIMD support.
But there’s a catch.
AWS also changed the default settings in all its SDKs, breaking compatibility with nearly every S3-compatible service.
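For context, on S3 itself the new algorithm can be requested explicitly. A minimal sketch, assuming boto3/botocore 1.36 or newer; the bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Ask the SDK to compute a CRC64-NVME full-object checksum client-side;
# S3 verifies it on receipt and stores it alongside the object.
s3.put_object(
    Bucket="my-bucket",          # placeholder bucket
    Key="example.txt",           # placeholder key
    Body=b"hello, integrity",
    ChecksumAlgorithm="CRC64NVME",
)
```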
Who’s Affected?
If you’re using any of these services, you’re likely to run into errors (a minimal repro follows the list):
- Cloudflare R2: uploads fail with “An error occurred (InternalError) when calling the PutObject operation.”
- Tigris: upgrading boto3 or the JavaScript S3 client breaks file uploads.
- MinIO, Vast, Dell ECS (pre-2025 versions): same issue; compatibility broken.
- Trino: users report “Missing required header for this request: Content-MD5.”
- Apache Iceberg: recommends disabling the checksum requirement to avoid failures.
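To see what this looks like in practice, here is a repro sketch. It assumes boto3/botocore 1.36 or newer (where the new checksum defaults kick in); the endpoint URL, bucket, and credentials are placeholders:

```python
import boto3

# A recent SDK (boto3/botocore >= 1.36) pointed at an S3-compatible endpoint.
# Endpoint, bucket, and credentials below are placeholders.
r2 = boto3.client(
    "s3",
    endpoint_url="https://<account-id>.r2.cloudflarestorage.com",
    aws_access_key_id="<key-id>",
    aws_secret_access_key="<secret>",
)

# The SDK now attaches checksum headers/trailers by default; backends that
# don't understand them reject the request, surfacing errors like:
#   botocore.exceptions.ClientError: An error occurred (InternalError)
#   when calling the PutObject operation
r2.put_object(Bucket="my-bucket", Key="example.txt", Body=b"hello")
```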
AWS Owns S3, Not Third-Party Vendors
This is a wake-up call for vendors building “S3-compatible” storage solutions. Many rely on AWS’s S3 SDKs, assuming API stability. But AWS can (and does) change the rules anytime.
This time, it’s checksums. Next time? It could be metadata handling, ACLs, or API versioning.
For anyone building on S3-compatible storage solutions, this proves a key point:
You don’t control the S3 API. AWS does.
What Can You Do?
- Disable the checksum requirement in your AWS SDK settings where possible (see the sketch after this list).
- Stick to specific SDK versions until your storage provider updates compatibility.
- Push your vendor to support the new defaults or provide workarounds.
- Consider alternative protocols: multi-cloud users may want to look beyond S3.
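For boto3, the opt-out looks roughly like the sketch below. The `request_checksum_calculation` and `response_checksum_validation` config options shipped alongside the new defaults in botocore 1.36; the endpoint and bucket are placeholders:

```python
import boto3
from botocore.config import Config

# Restore the old behavior: only compute/validate checksums when an
# operation actually requires them, instead of on every request.
compat = Config(
    request_checksum_calculation="when_required",
    response_checksum_validation="when_required",
)

s3 = boto3.client(
    "s3",
    endpoint_url="https://<account-id>.r2.cloudflarestorage.com",  # placeholder
    config=compat,
)

# Uploads to S3-compatible backends work again under the relaxed defaults.
s3.put_object(Bucket="my-bucket", Key="example.txt", Body=b"hello")
```

The same switch is exposed outside code via the `AWS_REQUEST_CHECKSUM_CALCULATION` and `AWS_RESPONSE_CHECKSUM_VALIDATION` environment variables (set to `WHEN_REQUIRED`), or the equivalent settings in `~/.aws/config`. If you’d rather pin, the defaults changed in boto3/botocore 1.36, so constraining to an earlier release (e.g., `boto3<1.36`) also sidesteps the breakage.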
The Bigger Picture
AWS has made it clear that “S3-compatible” is a moving target. If you’re relying on it for long-term data management, now’s the time to rethink your strategy.