Serverless Architecture

Shrinidhi Kulkarni
Jan 20, 2022

If you were developing software before the term serverless came into the picture, you know how hard it is to maintain and manage servers. With recent advancements in the cloud computing ecosystem, serverless has become one of the major enablers of innovative solutions. In this article we are going to look at what serverless computing is, along with a use case showing how we can build an application architecture with it. This is the agenda for this article:

  • What is serverless
  • Serverless offerings from major cloud providers
  • Use cases for web applications
  • Future of serverless

Throughout this article I’ll be using AWS as the cloud platform. However, I will also compare the equivalent offerings from the other cloud providers.

What is Serverless?

The name serverless is itself a misnomer. People hear it and assume there are no servers involved, but serverless is still made of servers; you always need a server to run your compute. The difference is that we never see that compute and never manage it, because we don’t create it ourselves. So why is it called serverless? Because serverless is essentially a managed offering provided by the different cloud providers.

Imagine you are using a cloud provider that offers serverless. Say I have a mobile application that I have pushed to an app store, and my cloud provider has plenty of servers, but I am not renting any of them yet. The moment a request arrives from the mobile application to my back end, even though I have no server up and running, the platform provisions that compute instantaneously, and if I need to maintain state I can use a relational or NoSQL database.

Now suppose more users start using the application and traffic grows. In the traditional cloud computing paradigm I would have to configure auto scaling: how many machines to provision and how much load my application can withstand. In a serverless offering, scalability comes by default. If there is more load, more instances of the machines and the application are created, and I don’t have to configure that manually when I build my application, because ownership of provisioning those machines is taken by the cloud platform. More compute usually means more data as well, so the database needs to scale too; it’s not just about scaling the application or the compute.

Now imagine a pricing model where the provider says: you can have your application deployed, but don’t pay me if nobody is using it. That is exactly where serverless computing is positioned. So the major feature of the serverless paradigm is event triggers: based on my incoming requests, instances are provisioned instantaneously, and based on a trigger my compute is provisioned. Once the compute is provisioned, it scales by default; scalability is part of the serverless offering and is managed by the cloud provider, so I don’t have to manually maintain these servers or bring them up and down.

If there are no triggers, there is obviously no compute running, so I don’t pay. The pricing model is tied to the same factors: you pay only for the requests and the compute time you actually consume.

Every technology comes with its pros and cons, and there are a few things to consider before moving into the serverless computing world. The major one is cold start time. If the vendor provisions our underlying compute on demand and starts the application only when a trigger arrives, the application must be able to start instantaneously and serve that request immediately. So the cold start time should be reduced and the application needs a fast start-up time.
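To make that concrete, here is a minimal sketch (my own illustration, not something from a specific project) of one common way to soften cold starts in a Python Lambda: do expensive setup, such as creating SDK clients, at module import time outside the handler, so it happens once per execution environment and is reused across warm invocations. The bucket name is a placeholder.

```python
# Hypothetical example: initialize clients outside the handler so warm
# invocations skip that setup cost; only the first (cold) invocation pays it.
import boto3

# Created once when the execution environment starts, not on every request.
s3 = boto3.client("s3")


def lambda_handler(event, context):
    # Warm invocations reuse the client created above.
    listing = s3.list_objects_v2(Bucket="example-bucket", MaxKeys=5)  # placeholder bucket
    keys = [obj["Key"] for obj in listing.get("Contents", [])]
    return {"statusCode": 200, "body": ",".join(keys)}
```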

The next one is event-based triggers. Not every application can be driven by events; some workloads are synchronous, so you need to make sure your application is event-based and stateless. Serverless is also cloud-vendor specific. If you create a serverless architecture, you cannot reuse the same architecture on every cloud platform; you have to use the corresponding platform’s offering. So if you integrate with AWS, you might not be able to easily lift and shift to another platform, because the other platforms have similar offerings but as different products. Most of the time you end up working with these vendor-specific products.

Serverless Offerings

Now let’s look at the different serverless offerings provided by the different cloud platforms. I am picking the three major ones: AWS, Google Cloud Platform, and Microsoft Azure.

https://aws.amazon.com/serverless/

Function as a Service:

Serverless rose to popularity with the introduction of Function as a Service. The first such offering came from Amazon and was called AWS Lambda. The same kind of offering is available on the other cloud platforms as Google Cloud Functions and Azure Functions. Function as a Service is just one part of the serverless landscape: you run a particular function as a service, and it fits into the serverless ecosystem because you don’t have to keep your compute running.
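As a rough sketch, this is what a Python handler for AWS Lambda typically looks like. The event field I read here is hypothetical, and the exact event shape depends on whatever triggers the function.

```python
import json


def lambda_handler(event, context):
    # 'event' carries the trigger payload (API Gateway request, S3
    # notification, SQS message, ...); 'context' describes the invocation.
    name = event.get("name", "world")  # hypothetical field, for illustration only

    # For an API Gateway proxy integration the response needs a statusCode
    # and a string body.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```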

Serverless containers:

The next predominant one is the container-based serverless offering. Amazon provides AWS Fargate, with which you can run serverless containers on Elastic Container Service or Elastic Kubernetes Service.

So Amazon provides two container services, the proprietary Elastic Container Service (ECS) and the Kubernetes-based Elastic Kubernetes Service (EKS), and you can use Fargate to schedule your containers on either of them based on your events. Google Cloud provides Cloud Run, with which you can run containers in a serverless mode: it can spin up a new container from a Docker image based on an event, which could be an HTTP request or a message from a queue. The same applies to the Azure cloud platform, where Azure Container Instances can be triggered based on events.
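To give a feel for the AWS side of this, here is a hedged sketch of launching a one-off serverless container on Fargate through boto3’s run_task call. The cluster, task definition, subnet, and security group values are placeholders I made up for illustration.

```python
import boto3

ecs = boto3.client("ecs")

# Ask ECS to start one task on Fargate; no EC2 instances are provisioned
# or managed by us.
response = ecs.run_task(
    cluster="demo-cluster",              # placeholder cluster name
    launchType="FARGATE",
    taskDefinition="demo-task:1",        # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```

The Cloud Run and Azure Container Instances flows are conceptually similar: point the platform at a container image and let an event start it.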

With that, I hope you understand what serverless is and what the different cloud offerings are. To explain the use case, I’m going to pick AWS as our primary cloud provider.

Example:

To understand these concepts better, let’s walk through a use case: creating and hosting a web application in AWS.

Now let’s look at how to create a web application by leveraging the serverless offerings. Imagine we have a website hosted in AWS, and let’s see how I would design it so that I can serve requests to the clients. When somebody hits our website from a browser, the request reaches my CloudFront servers. My static files, including all my images, are stored in an S3 bucket, and CloudFront, which is Amazon’s CDN, distributes those static files so that users across different geographies can load the website quickly.
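As a small, hypothetical illustration of the static side of this setup, the sketch below uploads two assets to the S3 bucket that backs the CloudFront distribution, with content types and cache headers so CloudFront and browsers cache them sensibly. The bucket name and file paths are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Publish an HTML page and an image to the bucket behind CloudFront.
s3.upload_file(
    "site/index.html",
    "my-static-site-bucket",            # placeholder bucket name
    "index.html",
    ExtraArgs={"ContentType": "text/html", "CacheControl": "max-age=300"},
)
s3.upload_file(
    "site/images/logo.png",
    "my-static-site-bucket",
    "images/logo.png",
    ExtraArgs={"ContentType": "image/png", "CacheControl": "max-age=86400"},
)
```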

When somebody loads the website and only static files are needed, CloudFront simply serves them from the S3 bucket, so no compute is involved at all. Now let’s say one part of the website needs to retrieve something dynamically and I have to run some logic written in Python. For that I have created a Python Lambda and deployed it on the server side. This Lambda needs to be invoked only when the user clicks that particular category or feature on the website. The request from CloudFront needs to reach the Lambda, so how can I achieve that? I can achieve it using API Gateway, which acts as the interface.

API Gateway creates the trigger for the Lambda; it is an HTTP trigger that I invoke from CloudFront into the Lambda.

I also need it secured, so I’m going to use Amazon Cognito to secure this particular endpoint, along with the data retrieval from DynamoDB. The Lambda retrieves data dynamically from DynamoDB and serves it back through the CloudFront distribution; the endpoint is secured by Cognito, and the Lambda is triggered through API Gateway. I don’t have to worry about how many people use the website, because Lambda is a managed service and Amazon scales it up and down based on the need. If nobody uses the dynamic feature of the website, I won’t be charged, because my Lambdas won’t be triggered until somebody uses that feature. This is a classic example of how we can use Lambdas and a serverless architecture to build a web application.
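Putting the dynamic path together, here is a hedged sketch of what that Python Lambda could look like: invoked through API Gateway, with the Cognito authorizer rejecting unauthenticated callers before the function runs, and reading an item from DynamoDB. The table name, key, and path parameter are placeholders, not details from this article.

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("products")  # placeholder table name


def lambda_handler(event, context):
    # API Gateway passes path parameters in the event; by this point the
    # Cognito authorizer has already blocked unauthenticated requests.
    product_id = (event.get("pathParameters") or {}).get("id", "unknown")

    result = table.get_item(Key={"productId": product_id})  # placeholder key name
    item = result.get("Item")

    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item, default=str),
    }
```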

Serverless is the Future

As the serverless ecosystem evolves day by day, you will see more and more tools being added to this part of the cloud-native landscape. I would say serverless is going to be the future, because most of us want to save the cost of running compute, and serverless plays a major role in achieving that by leveraging cloud-native platform capabilities while reducing cost. I hope you learned something new, as always.
