Building a CI/CD Pipeline in Jenkins

E-Learning Provider Achieves Better UX by Building a CI/CD Pipeline in Jenkins

EdTech is a flourishing industry today. It is dynamic, versatile, and among the earliest adopters of disruptive technologies. E-learning solution providers have to constantly engage multiple stakeholders, including students, faculty, parents, and administrators, and so they go to great lengths to develop and scale up technically advanced adaptive learning platforms. This brings a wide range of challenges: continuous integration (CI), improving collaboration across teams, and developing modular, scalable code, to name a few.

Recently, a client, a well-known name in the EdTech business, approached us with a similar set of challenges. They had a specific need for software development and deployment on the cloud: to create an immersive and seamless learning environment, they needed customized builds that could be developed, tested, and deployed automatically.

A brief client profile

The company is an online higher education platform providing rigorous, industry-relevant programs designed and delivered in collaboration with world-class faculty and industry. Their mission is to create an anywhere-accessible learning experience.

In the age of digital transformation, it is essential that your product becomes launch-ready as soon as code changes are made to the application. Enterprise software companies have spearheaded this shift, and other companies are following suit as they learn how digital transformation works. Achieving it means bringing together the latest technology, pedagogy, and services. Our client's basic objective was very clear: they wanted to provide an online learning platform that could be accessed from anywhere, at any time, and that came with a high-quality user experience.

Deep diving into problem analysis & goal-setting

The main challenge in e-learning software development lies in keeping deployment in sync with the pace of development. After discussing the situation with the client's project manager and scrutinizing the whole scenario, we identified the following problem areas:

  • The client was struggling to reduce deployment time and was unable to make multiple stable deployments in a day.
  • There was no integrated testing system to ensure the stability of each build.
  • Deployments were often unpredictable; they went through even when tests were failing.
  • Productivity was low since there were no timely failure notifications.
  • Making even small changes to the infrastructure was a tedious task.

Goal-setting was straightforward after this. We zeroed in on the following objectives:

  • Matching the pace of deployment with the pace of development
  • Removing the manual ‘build and deploy’ process
  • Establishing an efficient, continuous and automated deployment process
  • Including verification before production deployment

It was time for us to put our thinking caps on. We had to take several factors into account to provide a solution that would make life easier for the developers at our client’s office. We also had to make sure that only verified code changes were deployed; that was the basic approach. At the same time, the solution had to significantly reduce the long wait between developing code and deploying it. Finally, we had to come up with a process that would make it easier to create, manage, and maintain environments within the infrastructure, keeping the online learning platform sustainable.

How did we reach the solution?

We devised an approach that took into account all of the client’s requirements. Alongside this, we aimed to provide a sustainable, robust infrastructure that would give users of the online learning platform a great experience. To begin with, we created a continuous integration/continuous delivery (CI/CD) pipeline in Jenkins for real-time deployment of code to the cloud, following DevOps best practices. We also converted the client’s infrastructure into code and set up end-to-end, real-time monitoring and notifications for their entire product platform.

Steps followed to build the CI/CD pipeline with Jenkins

  1. Introduced DevOps practices into their product development lifecycle.
  2. Implemented a CI/CD pipeline and integrated it into Jenkins (a minimal pipeline sketch follows this list).
  3. Integrated automated tests into the CI/CD pipeline so that code changes are deployed only if they pass the tests.
  4. Integrated Slack with Jenkins to notify developers whenever a test or build fails or the code is not deployable.
  5. Standardized on Slack as the notification tool so that developers are aware of every instance of a code or build failure.
  6. Created multiple lower/test environments to verify code before it is deployed to production.
  7. Introduced AWS CloudWatch to send alarm-based notifications to Slack whenever a code or server error occurs.
  8. Converted the entire application infrastructure into code with Terraform, making it possible to create new environments and to change or update existing ones from the command line.
  9. Converted the Jenkins jobs into Groovy pipeline code, giving the setup the added flexibility of Jenkins as Code/Pipeline as Code (JaC/PaC).
  10. Ensured that all Jenkins jobs were created automatically by setting up a Bitbucket organization folder in Jenkins, enabling automatic continuous integration of all target branches in Git.
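
As an illustration of steps 2–5, here is a minimal declarative Jenkinsfile of the kind such a pipeline is built from. It is a sketch only: the build and test commands, the ECS cluster and service names, and the Slack channel are placeholder assumptions rather than the client’s actual configuration, and the slackSend step assumes the Jenkins Slack Notification plugin is installed.

    // Illustrative declarative Jenkinsfile. Commands, cluster/service names, and the
    // Slack channel are placeholders, not the client's actual setup.
    pipeline {
        agent any

        stages {
            stage('Build') {
                steps {
                    // Build an application image tagged with the Jenkins build number
                    sh 'docker build -t learning-platform:${BUILD_NUMBER} .'
                }
            }
            stage('Test') {
                steps {
                    // Deployment proceeds only if this stage passes (placeholder test command)
                    sh 'docker run --rm learning-platform:${BUILD_NUMBER} npm test'
                }
            }
            stage('Deploy to ECS') {
                // Deploy only from the main branch of a multibranch pipeline
                when { branch 'main' }
                steps {
                    sh '''
                        aws ecs update-service \
                            --cluster learning-platform-cluster \
                            --service learning-platform-service \
                            --force-new-deployment
                    '''
                }
            }
        }

        post {
            failure {
                // Notify developers on Slack whenever a build, test, or deployment fails
                slackSend(channel: '#deployments', color: 'danger',
                          message: "${env.JOB_NAME} #${env.BUILD_NUMBER} failed: ${env.BUILD_URL}")
            }
            success {
                slackSend(channel: '#deployments', color: 'good',
                          message: "${env.JOB_NAME} #${env.BUILD_NUMBER} deployed successfully")
            }
        }
    }

With a multibranch or Bitbucket organization job (step 10), Jenkins discovers a Jenkinsfile like this automatically in every target branch, so new jobs never have to be created by hand.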

We used the AWS public cloud platform for the entire solution, particularly the following services:

  • Elastic Compute Cloud (EC2) for hosting Jenkins
  • CloudWatch for sending notifications to Slack
  • Elastic Container Service (ECS) for application deployment

Jenkins, an open-source continuous integration tool, was used to create the CI/CD pipeline.

Terraform, an open-source infrastructure-as-code tool, was used to create the environments for application deployment.
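
To give a flavour of the infrastructure-as-code setup, below is a minimal, hypothetical Terraform sketch: an ECS cluster plus a CloudWatch alarm that publishes to an SNS topic, which can in turn be forwarded to Slack. The resource names, region, metric, and threshold are illustrative assumptions, not the client’s actual configuration.

    # Illustrative Terraform sketch - names, region, metric, and threshold are
    # placeholders, not the client's actual configuration.
    provider "aws" {
      region = "us-east-1"
    }

    # ECS cluster that hosts the learning-platform application
    resource "aws_ecs_cluster" "app" {
      name = "learning-platform"
    }

    # SNS topic for alarm notifications; messages can be forwarded to Slack
    resource "aws_sns_topic" "alerts" {
      name = "deployment-alerts"
    }

    # CloudWatch alarm that fires when cluster CPU utilization stays high
    resource "aws_cloudwatch_metric_alarm" "cpu_high" {
      alarm_name          = "learning-platform-cpu-high"
      namespace           = "AWS/ECS"
      metric_name         = "CPUUtilization"
      dimensions          = { ClusterName = aws_ecs_cluster.app.name }
      statistic           = "Average"
      period              = 300
      evaluation_periods  = 2
      threshold           = 80
      comparison_operator = "GreaterThanThreshold"
      alarm_actions       = [aws_sns_topic.alerts.arn]
    }

Running terraform apply against a configuration like this creates or updates the resources from the command line, illustrating how new environments can be spun up and existing ones changed without manual work in the console.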

Results achieved and further value-addition

The intensive brainstorming finally bore results. We were able to reach the goals we had envisioned, and within the agreed timeline. The pace of deployment was matched with the pace of development, all manual ‘build and deploy’ processes were removed, an efficient, continuous, and automated deployment process was established, and verification before production deployment was successfully implemented.

But merely achieving the stated goals has never been the ultimate motto at Loves Cloud! We always aim for value-addition, and in this case we could deliver it in more than one way:

  1. Deployment time was drastically reduced, which was the client’s primary concern and can be regarded as the first step towards digital transformation.
  2. Multiple stable deployments a day became possible under virtually any circumstances, creating a seamless experience for users of the online platform.
  3. Deployments became predictable and thoroughly monitored as a result of end-to-end testing of all code changes.
  4. Every time an error occurred, a notification was sent to the developers to make them aware of the faulty build, which ultimately increased their productivity.
  5. Making changes to the infrastructure or the environments became easier since the infrastructure had been converted into code.
  6. Insights into application performance after deployment on AWS were made possible through the logging, monitoring, and notification we implemented.

At Loves Cloud, we are constantly leveraging the power of various open source software solutions to automate, optimize, and scale the workloads of our customers. To learn more about our services aimed at the digital transformation of your business, please visit https://www.loves.cloud/ or write to us at biz@loves.cloud.