On Stratum, your application is contained within an Environment, which contains a number of Services, which each contain a number of Jobs. Each of those jobs is mapped to a Container - specifically, a Docker container. Each container is started from a Docker image, which comes from either Catalyze’s own set of images (for databases, caches, and automatically-added services) or from images built from your code pushes.
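The nesting described above can be sketched as a small data model. This is purely illustrative - the class names and fields are assumptions for clarity, not Stratum's actual API:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Stratum hierarchy: an Environment contains
# Services, each Service contains Jobs, and each Job maps to one container
# started from a Docker image. Names here are assumptions, not real API.

@dataclass
class Job:
    name: str
    docker_image: str  # the image this job's container is started from

@dataclass
class Service:
    name: str
    jobs: list = field(default_factory=list)

@dataclass
class Environment:
    name: str
    services: list = field(default_factory=list)

# One environment with a service built from a code push and a
# platform-provided database service.
env = Environment("production", services=[
    Service("web", jobs=[Job("web-1", docker_image="registry.example/app:latest")]),
    Service("db", jobs=[Job("db-1", docker_image="catalyze/postgresql")]),
])
print(env.services[0].jobs[0].docker_image)
```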
Quoting Docker’s documentation:
Each container is an isolated and secure application platform.
What this means is that every time a new job for a service is started, it gets a fresh setup - no leftover temporary files, environment variables, or stuck processes to worry about. More importantly for Stratum, it means there is a uniform way to start, stop, and connect to every piece of an environment, regardless of what that piece runs - and that uniformity is what gives the platform much of its power and flexibility.
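That uniform-lifecycle idea can be sketched as a single interface that every job uses, whether its image is platform-provided or built from a code push. The class and method names below are illustrative assumptions, not Stratum's real internals:

```python
# Sketch of the "uniform lifecycle" described above: every container,
# regardless of its image, supports the same start/stop/connect operations.
# Class and method names are assumptions for illustration only.

class Container:
    def __init__(self, image: str):
        self.image = image
        self.running = False

    def start(self) -> None:
        # A fresh container from the image: no leftover temp files,
        # environment variables, or stuck processes from a previous run.
        self.running = True

    def stop(self) -> None:
        self.running = False

    def connect(self) -> str:
        if not self.running:
            raise RuntimeError("container is not running")
        return f"connected to {self.image}"

db = Container("catalyze/postgresql")    # platform-provided image
web = Container("registry.example/app")  # image built from a code push

for c in (db, web):  # identical operations for every piece of the environment
    c.start()
    print(c.connect())
```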
Each Stratum Pod has its own set of Docker Hosts, each capable of running a number of containers. When a job is deployed, its container is started on the host best suited for it - typically the one with the lightest load. Catalyze monitors all hosts constantly, adding more to the pod as needed. One consequence of this is that a given environment's containers are very likely spread across several hosts.
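The placement rule above can be sketched in a few lines. Using running-container count as the load metric is an assumption for illustration - the actual scheduling criteria are not specified here:

```python
# Minimal sketch of lightest-load placement: a new job's container goes to
# the Docker host running the fewest containers. The load metric (container
# count) is an assumed simplification of whatever Catalyze actually measures.

def pick_host(hosts: dict) -> str:
    """Return the name of the host with the lightest load."""
    return min(hosts, key=hosts.get)

hosts = {"host-a": 12, "host-b": 4, "host-c": 9}  # host -> container count
target = pick_host(hosts)
print(target)  # host-b carries the lightest load
```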
Once a container is started, it will not be stopped unless it is killed - which typically only happens when the service that owns its job is redeployed.