Hey everyone, here’s the latest engineering post on the Freetrade Blog. It’s about how we apply integration testing to our serverless tech stack.
It’s a good question, and the answer varies case by case. The most frequently hit functions naturally have multiple hot instances at all times. For infrequently hit functions, in many cases we’re not concerned if it takes 10 seconds to spin up a new instance (e.g. for some background jobs). If cold starts are a problem, you can have another process regularly ping a function to keep it hot, but even then new instances will still be spun up from cold when load increases. If cold starts are simply not acceptable, the better option is to use something other than serverless. This is part of the package with serverless, and we find the trade-offs (so to speak) worth it for our use cases.
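For anyone curious what the keep-warm approach looks like in practice, here’s a minimal sketch using Firebase Cloud Functions and Cloud Scheduler. This isn’t our actual code; the function names, project, and URL are placeholders, and the pinged function is assumed to be a plain HTTPS function.

```ts
// Sketch only: a scheduled "keep warm" pinger for a Cloud Function.
// Names and the target URL below are hypothetical, not Freetrade's setup.
import * as functions from 'firebase-functions';
import fetch from 'node-fetch';

// Hypothetical HTTPS function we want to keep warm.
export const slowStartingJob = functions.https.onRequest(async (req, res) => {
  // ...real work would go here...
  res.status(200).send('ok');
});

// Scheduled function (backed by Cloud Scheduler) that hits the target
// every few minutes so at least one instance stays warm.
export const keepWarm = functions.pubsub
  .schedule('every 5 minutes')
  .onRun(async () => {
    // Placeholder endpoint: replace with the deployed function's URL.
    await fetch('https://europe-west2-my-project.cloudfunctions.net/slowStartingJob');
    return null;
  });
```

As noted above, this only keeps one instance warm; any scale-out beyond that still pays the cold-start cost.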
Great stuff! Firebase is a cool product. What were the important things you looked at when deciding between Firebase and an ordinary document store on your own server?
I appreciate that you mentioned abstraction from server management as one, but were there other points?
I’m sure Firebase won’t shut down like Parse did, because GCP lives for this kind of stuff. But in your architecture/code, do you abstract functionality away from platform dependency, or are you 100% all in?
We went for Firebase to benefit from the integration and infrastructure work that GCP does for you, compared to having to DIY that with a database on a server we manage. That’s probably the main motivation for it, and that is likely to change as we progress.
In terms of abstracting it away, I’d say we lean on GCP and its features quite intentionally: we’re paying for them and we want to get the benefit of using them. The applications are well-architected enough that we’re able to move and migrate things, but we balance that against using the platform to our advantage.
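To give a flavour of what that balance can look like (a sketch under my own assumptions, not our actual code), you can keep Firestore behind a small application-level interface so callers don’t touch the SDK directly, while the implementation still uses Firestore’s features freely. The `OrderRepository` name and shape here are purely illustrative.

```ts
// Sketch: a thin repository interface over Firestore. The interface keeps
// callers independent of the SDK; the implementation leans on GCP directly.
import * as admin from 'firebase-admin';

admin.initializeApp();

interface Order {
  id: string;
  symbol: string;
  quantity: number;
}

// Hypothetical interface the rest of the application codes against.
interface OrderRepository {
  save(order: Order): Promise<void>;
  findById(id: string): Promise<Order | null>;
}

// Firestore-backed implementation; moving stores later would mean writing
// another OrderRepository implementation rather than touching callers.
class FirestoreOrderRepository implements OrderRepository {
  private readonly collection = admin.firestore().collection('orders');

  async save(order: Order): Promise<void> {
    await this.collection.doc(order.id).set(order);
  }

  async findById(id: string): Promise<Order | null> {
    const snapshot = await this.collection.doc(id).get();
    return snapshot.exists ? (snapshot.data() as Order) : null;
  }
}
```

The trade-off is exactly the one described above: the interface gives you room to migrate, while the implementation is free to use Firestore-specific behaviour you’re already paying for.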