With the migration of enterprise applications to microservices and containers, cloud service providers, starting with Amazon in 2014, introduced a new computational model called Function-as-a-Service (FaaS). In these platforms, developers create sets of fine-grained functions with short execution times instead of developing coarse-grained software, and the management of system resources and servers is entrusted to the cloud service provider. This model has many benefits, such as reduced costs, but it still faces many challenges: balancing cost against performance, programming models, compatibility with existing development tools, the container cold-start problem, data caching, security and privacy concerns, and scheduling issues such as execution-time prediction. In this paper, we focus on the scheduling and cold-start problems. A trade-off arises because keeping execution environments warm reduces cold-start delays but increases cost. We aim to balance this trade-off with a heuristic method that makes four types of runtime decisions by analyzing the function dependency graph, function invocation frequencies, and other environmental parameters. The proposed method shows a 32% improvement over the fixed-time method (i.e., the method used by Amazon). This comparison is based on a cumulative metric that combines response time, turnaround time, cost, and utilization.
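The abstract does not define the cumulative metric precisely; a minimal sketch, assuming a normalized weighted sum in which lower response time, turnaround time, and cost are better and higher utilization is better (the weights $w_i$ and the normalization baselines are hypothetical, not taken from the paper), could take the form

\[
M \;=\; w_1\,\frac{T_{\mathrm{resp}}}{T_{\mathrm{resp}}^{\mathrm{ref}}}
  \;+\; w_2\,\frac{T_{\mathrm{turn}}}{T_{\mathrm{turn}}^{\mathrm{ref}}}
  \;+\; w_3\,\frac{C}{C^{\mathrm{ref}}}
  \;-\; w_4\,\frac{U}{U^{\mathrm{ref}}},
\qquad \sum_{i} w_i = 1,
\]

where a smaller $M$ is better and the reference values (superscript $\mathrm{ref}$) would be those of a baseline such as the fixed-time method; the actual combination used in the paper may differ.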