Since I've completely over-provisioned my various online projects, I have a lot of spare capacity. CoasterBuzz, PointBuzz and POP Forums all run on cloud resources that are meant to never be down, with redundancy and overhead to spare. TogetherLoop uses all of that spare capacity. Here's a brief rundown of the various bits.
The one new thing, and I'm not married to it, is using Azure Front Door for the app itself. The client is a Blazor WASM app that weighs in at around 13 MB on first load. Cached, it's not even 100 kB, because nothing but the payload for the home page comes down. If that sounds like a lot, you should know that The New York Times home page weighs about 40 MB, and even cached it pulls 31 MB. Completely ridiculous. My load also beats Instagram's web interface handily. In any case, Front Door georeplicates the static files around the world, since it is a CDN, but more importantly, it offers a lot of control over headers and such, so I can make sure that the app is never stale. This costs an extra $20-something a month, but it's stupid fast.
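The header control mostly boils down to a per-asset caching policy: fingerprinted payloads can be cached forever, while the entry point has to be revalidated every time so the app never goes stale. Here's a rough sketch of that logic in TypeScript; the file patterns and max-age values are illustrative, not my actual Front Door rules:

```typescript
// Sketch: choosing a Cache-Control header per asset for a Blazor WASM app.
// Assumption: binary payloads and hash-fingerprinted bundles never change
// in place, so they're safe to cache as immutable.
function cacheControlFor(path: string): string {
  // Immutable payloads: wasm/dll/dat files and hash-fingerprinted bundles
  if (/\.(wasm|dll|dat)$/.test(path) || /\.[0-9a-f]{8,}\./.test(path)) {
    return "public, max-age=31536000, immutable";
  }
  // The entry point must be revalidated on every load, or the app goes stale
  if (path.endsWith("/index.html") || path === "/") {
    return "no-cache";
  }
  // Everything else: short cache with revalidation
  return "public, max-age=3600, must-revalidate";
}
```

The important bit is the `no-cache` on the entry point: the browser still has everything locally, but it asks "is this current?" on every visit, which is why the cached load is tiny without ever being out of date.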
The backend API is actually split between a regular Azure App Service (two nodes) and a series of Azure Functions. The functions do a lot of async work: processing photos and video, recurring billing (eventually), notification processing, etc. But they're also the key to scale for uploads, because the app service would certainly run out of memory quickly, even using streams. The app service handles all of the JSON payloads you'd expect from an API, and it has no wake-up lag in responding, as the functions sometimes do.
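The memory argument for streams is worth making concrete: instead of buffering an entire upload before writing it out, you pipe it through chunk by chunk, so memory use is constant regardless of file size. A minimal sketch with Node streams standing in for the request body and blob storage sink (the real thing would pipe into the Azure storage SDK):

```typescript
import { pipeline } from "stream/promises";
import { Readable, Writable } from "stream";

// Sketch: pipe an upload chunk-by-chunk instead of buffering the whole
// file. pipeline() handles backpressure, so memory stays flat no matter
// how big the file is.
async function streamUpload(source: Readable, sink: Writable): Promise<void> {
  await pipeline(source, sink);
}

// Demo with an in-memory source and sink standing in for the real I/O:
let received = "";
const demo = streamUpload(
  Readable.from(["chunk1", "chunk2"]),
  new Writable({
    write(chunk, _enc, done) {
      received += chunk.toString(); // a real sink would write to blob storage
      done();
    },
  })
);
```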
The media is all served directly out of a storage account. Technically these files have no permission controls, but the URLs are not guessable. So if something gets seen by someone who shouldn't see it, that's on your crappy friends sharing links. I can easily change this at some point if I need a permissions layer to proxy requests, but that's a future improvement, maybe. I also have a lifecycle policy that downgrades media from the hot tier to cool after a few months, to save on costs.
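"Not guessable" just means the blob name contains enough randomness that enumeration is impractical. A sketch of that kind of naming scheme, assuming a UUID in the path (the container layout and names here are illustrative, not the actual scheme):

```typescript
import { randomUUID } from "crypto";

// Sketch: build a non-guessable blob path for uploaded media. A v4 UUID
// carries 122 random bits, so the URL is effectively unenumerable even
// though the container itself is readable without credentials.
function mediaBlobPath(userId: number, originalName: string): string {
  const ext = originalName.includes(".")
    ? originalName.slice(originalName.lastIndexOf("."))
    : "";
  return `media/${userId}/${randomUUID()}${ext}`;
}
```

Swapping this for real permissions later means putting a proxy endpoint in front that checks the viewer's relationship to the owner before streaming the blob, which is why it's an easy change to defer.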
I've got Redis serving as a message bus, to feed processed notifications back to the API nodes, which in turn use web sockets (SignalR) to let the client know that something is new. The direct messaging uses web sockets as well, and is kind of a port from POP Forums, though that app mostly uses custom TypeScript web components instead of Blazor.
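The shape of that fan-out is simple: a worker publishes a processed notification to a channel, every API node is subscribed, and each node forwards to whichever clients are connected to it over web sockets. A sketch of the pattern, with an in-memory bus standing in for Redis pub/sub and an array standing in for the SignalR send:

```typescript
// Sketch: pub/sub fan-out from background workers to API nodes.
// MessageBus is an in-memory stand-in for Redis pub/sub; the handler is
// where a real API node would push to its connected SignalR clients.
type Handler = (payload: string) => void;

class MessageBus {
  private subs = new Map<string, Handler[]>();
  subscribe(channel: string, handler: Handler): void {
    const list = this.subs.get(channel) ?? [];
    list.push(handler);
    this.subs.set(channel, list);
  }
  publish(channel: string, payload: string): void {
    for (const h of this.subs.get(channel) ?? []) h(payload);
  }
}

// Each API node subscribes once; the worker publishes when work finishes.
const bus = new MessageBus();
const delivered: string[] = [];
bus.subscribe("notifications", (p) => delivered.push(p));
bus.publish("notifications", JSON.stringify({ userId: 7, kind: "photo-ready" }));
```

The reason for the bus at all: with two app service nodes, the function that finished processing a photo has no idea which node holds the user's socket, so it publishes once and whichever node has the connection does the push.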
Friend searching is through Elasticsearch. Searching for folks through SQL isn't great, and we don't need table scans slowing things down. In this case, it does fuzzy matching on name and exact matches on email.
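That translates into a pretty small query body: a `bool` with two `should` clauses, one fuzzy `match` on name and one exact `term` on email. A sketch of the shape (the field names are assumptions, not the real index mapping):

```typescript
// Sketch: Elasticsearch friend-search query body. "AUTO" fuzziness lets
// the match tolerate a typo or two in the name, while the term clause
// only fires on an exact email hit.
function friendSearchQuery(input: string): object {
  return {
    query: {
      bool: {
        should: [
          { match: { name: { query: input, fuzziness: "AUTO" } } },
          { term: { "email.keyword": input } },
        ],
        minimum_should_match: 1,
      },
    },
  };
}
```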
The primary data store is Azure SQL, because everything else using it is so ridiculously tuned that average usage rarely gets over 3%. Sure, it's possible to outgrow that, but I need tens of thousands, maybe hundreds of thousands, of users to get there. Even then, there are a lot of pre-compute tricks I can use to help with performance.
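One example of the pre-compute variety: maintain a denormalized counter on write instead of running an aggregate query on every read. A sketch with an in-memory map standing in for a counter column (the names are illustrative, not the real schema):

```typescript
// Sketch: denormalized counters updated on write. Reads become O(1)
// lookups instead of COUNT(*) aggregates that grow with the table.
class FriendCountCache {
  private counts = new Map<number, number>();
  onFriendAdded(userId: number): void {
    this.counts.set(userId, (this.counts.get(userId) ?? 0) + 1);
  }
  count(userId: number): number {
    return this.counts.get(userId) ?? 0; // no scan, no aggregate
  }
}
```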
Everything except Front Door is stuff that was already running for the other sites. Functions are technically their own cost, but I'm not stressing about those extra cents per month. Elastic runs in Azure but is managed by Elastic itself. The various forum indexes all live there, and it too is underutilized.
In the event that I can grow this thing into something, it's a solid foundation. It's certainly a lot of premature optimization, but I don't hate sub-50ms API hits from where I live.