Cloud CMS offers a few different ways to store files - including in S3 (recommended) or on a local partition (such as an NFS mount). In our SaaS offering, everything is stored in S3 automatically; when you install on-premise, you can configure this to your preference. See the Binary Storage section of this documentation page:
https://www.cloudcms.com/documentation/docker/configuration/api-server.html
We can definitely handle large binary files in the tens or hundreds of megabytes. Storing and managing these files is no different from any other file type, since the blob portion is effectively "written away" and only dealt with when retrieval is needed. Metadata is stored separately from the binary payload for efficiency.
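For illustration, here is a minimal sketch of what storing and retrieving a large attachment might look like over HTTP, assuming a Node 18+ runtime with global fetch. The host, repository/branch/node IDs, and token are placeholders, and the exact attachment endpoint shape should be checked against the Cloud CMS REST documentation:

```typescript
// Sketch: upload and download a binary attachment over the Cloud CMS REST API.
// The host, IDs, and bearer token below are placeholders (assumptions).
const API = "https://api.example.com";
const BASE = `${API}/repositories/repo1/branches/master/nodes/node1`;
const headers = { Authorization: "Bearer <token>" };

// Upload: the blob is handed off to binary storage; metadata is kept separately.
async function uploadAttachment(data: Uint8Array): Promise<void> {
  await fetch(`${BASE}/attachments/default`, {
    method: "POST",
    headers: { ...headers, "Content-Type": "video/mp4" },
    body: data,
  });
}

// Download: this occupies an API thread for the duration of the transfer,
// which is why fronting large assets with a CDN (below) is recommended.
async function downloadAttachment(): Promise<ArrayBuffer> {
  const res = await fetch(`${BASE}/attachments/default`, { headers });
  return res.arrayBuffer();
}
```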
On the retrieval side, if you pull the binary back through our API, you will consume a Cloud CMS thread for the duration of the download. This can be a limitation, depending on how your API servers are configured. The trick is to use CloudFront (or any other CDN, though we generally recommend CloudFront) and either point it at the Cloud CMS API as an origin or connect it to the S3 storage bucket.
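As a rough sketch of the API-as-origin setup using the AWS SDK for JavaScript v3 (in practice you would more likely do this in the console or your infrastructure-as-code tooling), where the origin domain is an assumed placeholder for your Cloud CMS API host:

```typescript
import { CloudFrontClient, CreateDistributionCommand } from "@aws-sdk/client-cloudfront";

const client = new CloudFrontClient({ region: "us-east-1" });

// Create a distribution whose origin is the Cloud CMS API host.
// "api.example.com" is a placeholder (assumption); to front the S3 bucket
// directly instead, use the bucket's domain name as the origin.
async function createAssetDistribution(): Promise<void> {
  await client.send(new CreateDistributionCommand({
    DistributionConfig: {
      CallerReference: `cloudcms-assets-${Date.now()}`,
      Comment: "Edge cache for Cloud CMS binary assets",
      Enabled: true,
      Origins: {
        Quantity: 1,
        Items: [{
          Id: "cloudcms-api",
          DomainName: "api.example.com",
          CustomOriginConfig: {
            HTTPPort: 80,
            HTTPSPort: 443,
            OriginProtocolPolicy: "https-only",
          },
        }],
      },
      DefaultCacheBehavior: {
        TargetOriginId: "cloudcms-api",
        ViewerProtocolPolicy: "redirect-to-https",
        // AWS-managed "CachingOptimized" cache policy; supplies the edge TTLs.
        CachePolicyId: "658327ea-f89d-4fab-a63d-7e88639e58f6",
      },
    },
  }));
}
```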
In either case, CloudFront will edge cache your assets with a defined TTL. We also support timestamp and MD5 suffixes on retrieval via our app server, which allow for dynamic updates without requiring CDN invalidation.
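The exact suffix format is described in the app server documentation; purely to illustrate the pattern, here is a hypothetical helper that fingerprints asset URLs so every update produces a new URL (and therefore a fresh CDN cache entry), making explicit invalidation unnecessary:

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: embed an MD5 fingerprint of the content in the URL.
// A changed asset yields a changed URL, so the CDN fetches it anew from the
// origin rather than serving a stale cached copy.
function versionedUrl(cdnHost: string, path: string, content: Uint8Array): string {
  const md5 = createHash("md5").update(content).digest("hex");
  return `https://${cdnHost}${path}?m=${md5}`; // query-string form is an assumption
}

// Example result: https://dxxxx.cloudfront.net/static/video.mp4?m=9e107d9d...
```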
With this pattern, the dynamic resource "drag" on the API servers trends toward zero. From an application architecture perspective, our recommendation is to optimize the API servers for content service calls (query, create, find, graph traversal) and to serve static or large assets through the CDN for optimal performance and better scale.
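A small sketch of that split, assuming placeholder hosts and a `/static/node/...` CDN path convention that you would adapt to your own app server routing:

```typescript
// Dynamic content service calls hit the API tier; binary assets are
// addressed through the CDN. Hosts, token, and paths are placeholders.
const API = "https://api.example.com";
const CDN = "https://dxxxx.cloudfront.net";

async function listArticleImages(): Promise<string[]> {
  // Dynamic call: a node query served by the Cloud CMS API.
  const res = await fetch(`${API}/repositories/repo1/branches/master/nodes/query`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: "Bearer <token>" },
    body: JSON.stringify({ _type: "my:image" }),
  });
  const { rows } = await res.json();

  // Static delivery: point the browser at the CDN, not the API.
  return rows.map((node: { _doc: string }) => `${CDN}/static/node/${node._doc}`);
}
```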
Furthermore, Amazon offers enhanced streaming (and transcoding) capabilities on top of assets served out of its CDN or S3. You may choose to leverage those to deliver a better quality of service for end-user video delivery, etc.