DevOps Engineer Interview Questions and Answers PDF
In brief, DevOps is the junction point of software development, operations, and quality assurance (QA). Companies throughout the world are fast adapting to the DevOps culture to streamline their delivery approach. If you are a proficient DevOps professional, there is a fabulous opportunity that some of the best organizations are looking for applicants like you. If you are new to the profession, you can do a DevOps certification program to increase your chances of acquiring an excellent job.
These DevOps Engineer interview questions have been composed specifically to get you familiarized with the kind of questions you may face in an interview on the subject of DevOps. In my experience, skilled interviewers hardly plan to ask any particular question during your interview; commonly, questions begin with a basic concept of the subject, and they then proceed based on further discussion and what you answer.
Top 50 DevOps engineer interview questions and answers pdf
DevOps is one of the most prevalent technology trends, and there is an increasing need for DevOps Engineers in technology businesses.
So you have finally attained your dream job in DevOps, yet you are wondering how to crack the interview and what the likely DevOps interview questions could be. Every interview is different, and the scope of a job is different too. Keeping this in mind, we have put together the most popular DevOps interview questions and answers to help you succeed in your interview.
This blog includes the specialized interview questions that an interviewer asks for a DevOps Engineer job. Every question is supplemented by an answer so that you can prepare for a job interview in a short time.
We have gathered this list after attending technical interviews in top-notch organizations like Amazon, Netflix, Airbnb, etc.
Frequently, these questions and concepts come up in our regular work. But they are most valuable when an interviewer is trying to examine your broad experience of DevOps.
Here is the list of popular DevOps interview questions that are asked frequently in an interview:
1. What is CICD in DevOps?
Answer: CICD stands for Continuous Integration and Continuous Delivery.
These are two different concepts that are complementary to each other.
Continuous Integration (CI): In CI, all the developer work is merged into the main branch several times a day. This helps in reducing integration problems.
In CI, we try to minimize the duration for which a branch remains checked out. A developer gets early feedback on the new code added to the main repository by using CI.
Continuous Delivery (CD): In CD, a software team plans to deliver software in short cycles. They perform development, testing and release in such a short time that incremental changes can be easily delivered to production.
In CD, as DevOps engineers, we create a repeatable deployment process that helps achieve the objective of Continuous Delivery.
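The fail-fast idea behind CI can be sketched as a small shell script; the stage names and echo commands here are placeholders for real build tools (mvn, npm, make, etc.), not taken from any particular CI product:

```shell
#!/bin/sh
# Minimal CI sketch: run the build stages in order and stop at the first
# failure, so the developer gets feedback as early as possible.
# The stage commands below are placeholders -- a real pipeline would call
# your build tool (mvn, npm, make, ...) instead of echo.

run_stage() {
    name="$1"; shift
    echo "== stage: $name =="
    "$@" || { echo "stage '$name' FAILED"; exit 1; }
}

run_stage "build"   echo "compiling sources"
run_stage "test"    echo "running unit tests"
run_stage "package" echo "producing a deployable artifact"
echo "pipeline OK"
```

In a real setup, a CI server such as Jenkins would run a script like this on every commit to the main branch.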
2. Can we consider DevOps as an agile practice?
Answer: Yes, DevOps is a movement to reconcile and synchronize development and production through a set of good practices. Its growth is driven by a deep change in the demands of professionals, who want to speed up changes to stay closer to the requirements of the business and the client.
3. What is a DevOps engineer’s responsibility with respect to Agile development?
Answer: DevOps engineers work very closely with Agile development teams to ensure they have the environment necessary to support functions such as automated testing, Continuous Integration, and Continuous Delivery. DevOps engineers must be in continuous contact with the developers and make all the required parts of the environment work flawlessly.
4. Explain with a use case where DevOps can be used in industry / real life?
Answer: There are numerous businesses that are using DevOps, so you can reference any of those use cases; you can also cite the instance below:
Suppose ABC is a peer-to-peer e-commerce website focused on handmade or vintage items and supplies, as well as unique factory-manufactured items. ABC struggled with slow, painful site updates that regularly caused the site to go down. This affected sales for many of ABC’s users who sold goods through the online marketplace and risked driving them to the competition.
With the help of a new practice management team, ABC transitioned from its waterfall model, which produced four-hour full-site deployments twice daily, to a more agile approach. Nowadays, it has a fully automated deployment pipeline, and its continuous delivery practices have reportedly resulted in more than 50 deployments a day with fewer disturbances.
5. How do you set up a script to run every time a repository receives new commits through push?
Answer: There are three ways to configure a script to run every time a repository receives new commits through push; one needs to define either a pre-receive, update, or post-receive hook, depending on when exactly the script needs to be triggered.
The pre-receive hook in the destination repository is invoked when commits are pushed to it. Any script bound to this hook will be executed before any references are updated. This is a useful hook to run scripts that help enforce development policies.
The update hook works in a similar manner to the pre-receive hook, and is also triggered before any updates are made. However, the update hook is called once for every commit that has been pushed to the destination repository.
Lastly, the post-receive hook in the repository is invoked after the updates have been accepted into the destination repository. This is an ideal place to configure simple deployment scripts, invoke some continuous integration systems, send notification emails to repository maintainers, etc.
Hooks are local to every Git repository and are not versioned. Scripts can either be created within the hooks directory inside the “.git” directory, or they can be created elsewhere, and links to those scripts can be placed within the directory.
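As an illustration of the server-side hooks described above, here is a minimal post-receive hook sketch. The log-file path is our own example; a real hook would more likely trigger a CI build or send a notification email:

```shell
#!/bin/sh
# Sketch of a server-side post-receive hook (hooks/post-receive).
# Git feeds one "<old-sha> <new-sha> <ref-name>" line per updated ref
# on stdin. Here we simply append a line to a log file for each update.
# HOOK_LOG is an example path, not a Git convention.
LOG="${HOOK_LOG:-/tmp/post-receive.log}"

process_updates() {
    while read oldrev newrev refname; do
        echo "ref $refname moved from $oldrev to $newrev" >> "$LOG"
    done
}

process_updates   # consumes the ref updates Git pipes in on stdin
```

To activate it, save the script as hooks/post-receive inside the repository’s Git directory and make it executable with chmod +x.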
6. Why is Continuous Testing significant for DevOps?
Answer: You can answer this question by saying, “Continuous Testing allows any change made in the code to be tested immediately. This avoids the problems created by having “big-bang” testing left to the end of the cycle, such as release delays and quality issues. In this way, Continuous Testing facilitates more frequent, good-quality releases.”
7. What are the popular DevOps tools that you use?
Answer: We use the following tools for work in DevOps:
Jenkins: This is an open source automation server used as a continuous integration tool. We can build, deploy and run automated tests with Jenkins.
GIT: It is a version control tool used for tracking changes in files and software.
Docker: This is a popular tool for containerization of services. It is very useful in Cloud-based deployments.
Nagios: We use Nagios for monitoring of IT infrastructure.
Splunk: This is a powerful tool for log search as well as monitoring production systems.
Puppet: We use Puppet to automate our DevOps work so that it is reusable.
8. What are the main benefits of DevOps?
Answer: DevOps is a very popular trend in Software Development. Some of the main benefits of DevOps are as follows:
Release Velocity: DevOps practices help in increasing the release velocity. We can release code to production more often and with more confidence.
Development Cycle: With DevOps, the complete development cycle from initial design to production deployment becomes shorter.
Deployment Rollback: In DevOps, we plan for rolling back any deployment that fails due to a bug in code or an issue in production. This gives confidence in releasing features without worrying about the downtime needed for a rollback.
Defect Detection: With a DevOps approach, we can catch defects much earlier than releasing to production. It improves the quality of the software.
Recovery from Failure: In case of a failure, we can recover very fast with the DevOps process.
Collaboration: With DevOps, collaboration between development and operations professionals increases.
Performance-oriented: With DevOps, the organization follows a performance-oriented culture in which teams become more productive and more innovative.
9. What is the typical DevOps workflow you use in your organization?
Answer: The typical DevOps workflow in our organization is as follows:
We use Atlassian Jira for writing requirements and tracking tasks.
Based on the Jira tasks, developers check code into the GIT version control system.
The code checked into GIT is built by using Apache Maven.
The build process is automated with Jenkins.
During the build process, automated tests run to validate the code checked in by a developer.
Code built on Jenkins is sent to the organization’s Artifactory.
Jenkins automatically picks the libraries from Artifactory and deploys them to Production.
During Production deployment, Docker images are used to deploy the same code on multiple hosts.
Once a code is deployed to Production, we use Nagios to monitor the health of production servers.
Splunk based alerts inform us of any issues or exceptions in production.
10. How do you take a DevOps approach with Amazon Web Services?
Answer: Amazon Web Services (AWS) provides many tools and features to deploy and manage applications in AWS. As per DevOps, we treat infrastructure as code. We mainly use the following two services from AWS for DevOps:
CloudFormation: We use AWS CloudFormation to create and deploy AWS resources by using templates. We can describe our dependencies and pass special parameters in these templates. CloudFormation can read these templates and deploy the application and resources in the AWS cloud.
OpsWorks: AWS provides another service called OpsWorks that is used for configuration management by utilizing the Chef framework. We can automate server configuration, deployment, and management by using OpsWorks. It helps in managing EC2 instances in AWS as well as any on-premises servers.
11. How will you run a script automatically when a developer commits a change into GIT?
Answer: GIT provides the feature to execute custom scripts when a certain event occurs in GIT. This feature is called hooks.
We can write two types of hooks: client-side hooks and server-side hooks.
For this case, we can write a Client-side post-commit hook. This hook will execute a custom script in which we can add the message and code that we want to run automatically with each commit.
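A minimal client-side post-commit hook might look like the sketch below; the log file is a stand-in for whatever action (notification, build trigger) you actually want to run after each commit:

```shell
#!/bin/sh
# Sketch of a client-side post-commit hook (.git/hooks/post-commit).
# Git runs this script after every successful local commit; it receives
# no arguments, so we ask git for the new commit id ourselves.
# COMMIT_LOG is an example path, not a Git convention.
last_commit=$(git rev-parse HEAD)
echo "new commit: $last_commit" >> "${COMMIT_LOG:-/tmp/commit.log}"
```

Save it as .git/hooks/post-commit and mark it executable; Git will then run it automatically after each commit.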
12. What are the main features of AWS OpsWorks Stacks?
Answer: Some of the main features of AWS OpsWorks Stacks are as follows:
Server Support: With AWS OpsWorks Stacks, we can automate operational tasks on any server in AWS as well as in our own data center.
Scalable Automation: We get automated scaling support with AWS OpsWorks Stacks. Each new instance in AWS can read configuration from OpsWorks. It can even respond to system events in the same way as other instances do.
Dashboard: We can create dashboards in OpsWorks to display the status of all the stacks in AWS.
Configuration as Code: AWS OpsWorks Stacks are built on the principle of “Configuration as Code”. We can define and maintain configurations like application source code. The same configuration can be replicated on multiple servers and environments.
Application Support: OpsWorks supports almost all kinds of applications. So it is universal in nature.
13. How does CloudFormation work in AWS?
Answer: AWS CloudFormation is used for deploying AWS resources.
In CloudFormation, we have to first create a template for a resource. A template is a simple text file that contains information about a stack on AWS. A stack is a collection of AWS resources that we want to deploy together as a group.
Once the template is ready and submitted to AWS, CloudFormation will create all the resources in the template. This helps in automation of building new environments in AWS.
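A minimal template sketch might look like the following. This is illustrative only: the parameter and resource names are our own, and a real stack would usually declare many more resources.

```yaml
# Sketch of a one-resource CloudFormation stack: a single S3 bucket
# whose name is derived from an input parameter and the account id.
AWSTemplateFormatVersion: "2010-09-09"
Description: Sketch of a one-resource stack

Parameters:
  BucketNamePrefix:
    Type: String
    Default: demo

Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "${BucketNamePrefix}-artifacts-${AWS::AccountId}"

Outputs:
  BucketArn:
    Value: !GetAtt DemoBucket.Arn
```

Submitting this file to CloudFormation, for example with aws cloudformation deploy, creates the stack and every resource declared in it.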
14. What is the major difference between the Linux and Unix operating systems?
Answer:
Unix:
It belongs to the family of multitasking, multiuser operating systems.
These are typically used on internet servers and workstations.
It is originally derived from AT&T Unix, developed starting in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others.
Linux:
Linux has probably been home to every programming language known to humankind.
These are also used for personal computers.
Linux is based on the kernel of the UNIX operating system.
Both operating systems have open source variants, but Unix is the older of the two.
15. What are the best practices of Continuous Integration (CI)?
Answer: Some of the best practices of Continuous Integration (CI) are as follows:
Build Automation: In CI, we create a build environment in which the complete build can be triggered with a single command. This automation extends all the way up to deployment to the Production environment.
Main Code Repository: In CI, we maintain the main branch in a code repository that stores all the Production-ready code. This is the branch that we can deploy to Production at any time.
Self-testing build: Every build in CI should be self-tested. It means with every build there is a set of tests that runs to ensure that changes are of high quality.
Every day commits to baseline: Developers will commit all of their changes to baseline every day. This ensures that there is no big pileup of code waiting for integration with the main repository for a long time.
Build every commit to baseline: With Automated Continuous Integration, every time a commit is made into baseline, a build is triggered. This helps in confirming that every change integrates correctly.
Fast Build Process: One of the requirements of CI is to keep the build process fast so that we can quickly identify any problem.
Production like environment testing: In CI, we maintain a production-like environment also known as pre-production or staging environment, which is very close to the Production environment. We perform testing in this environment to check for any integration issues.
Publish Build Results: We publish build results on a common site so that everyone can see these and take corrective actions.
Deployment Automation: The deployment process is automated to the extent that in a build process we can add the step of deploying the code to a test environment. On this test environment, all the stakeholders can access and test the latest delivery.
16. What are the benefits of Continuous Integration (CI)?
Answer: The benefits of Continuous Integration (CI) are as follows:
CI makes the current build constantly available for testing, demo, and release purposes.
With CI, developers write modular code that works well with frequent code check-ins.
In case of a unit test failure or bug, a developer can easily revert to the bug-free state of the code.
There is a drastic reduction in chaos on release day with CI practices.
With CI, we can detect Integration issues much earlier in the process.
Automated testing is one very useful side effect of implementing CI.
All the stakeholders including business partners can see the small changes deployed into pre-production environment. This provides early feedback on the changes to the software.
Automated CI and testing generate metrics like code-coverage, code complexity that help in improving the development process.
17. What are the options for security in Jenkins?
Answer: In Jenkins, it is very important to make the system secure by setting user authentication and authorization. To do this we have to do the following:
First, we have to set up the Security Realm. We can integrate Jenkins with the LDAP server to create user authentication.
The second part is to set authorization for users. This determines which user has access to what resources.
In Jenkins some of the options to set up security are as follows:
We can use Jenkins’ own User Database.
We can use the LDAP plugin to integrate Jenkins with the LDAP server.
We can also set up Matrix-based security on Jenkins.
18. What are the main benefits of Ansible?
Answer: Ansible is a powerful tool for IT Automation for large scale and complex deployments. It increases the productivity of the team.
Some of the main benefits of Ansible are as follows:
Productivity: It helps in delivering and deploying with speed. It increases productivity in an organization.
Automation: Ansible provides very good options for automation. With automation, people can focus on delivering smart solutions.
Large-scale: Ansible can be used in small as well as very large-scale organizations.
Simple DevOps: With Ansible, we can write automation in a human-readable language. This simplifies the task of DevOps.
19. What are the main use cases of Ansible?
Answer: Some of the popular use cases of Ansible are as follows:
App Deployment: With Ansible, we can deploy apps in a reliable and repeatable way.
Configuration Management: Ansible supports the automation of configuration management across multiple environments.
Continuous Delivery: We can release updates with zero downtime with Ansible.
Security: We can implement complex security policies with Ansible.
Compliance: Ansible helps in verifying an organization’s systems against the applicable rules and regulations.
Provisioning: We can provision new systems and resources for other users with Ansible.
Orchestration: Ansible can be used in orchestration of complex deployment in a simple way.
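Several of these use cases come together in a playbook. As a sketch (the host group "web" and the nginx package are example choices, not from the text above), a configuration-management playbook looks like this:

```yaml
# Sketch of an Ansible playbook: install nginx and make sure the service
# is running on every host in the "web" inventory group.
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running ansible-playbook -i inventory site.yml applies the same configuration to every host in the group, and re-running it is safe because the tasks are idempotent.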
20. What is Docker Hub?
Answer: Docker Hub is a cloud-based registry. We can use Docker Hub to link code repositories. We can even build images and store them in Docker Hub. It also provides links to Docker Cloud to deploy the images to our hosts.
Docker Hub is a central repository for container image discovery, distribution, change management, workflow automation, and team collaboration.
31. What are the main services of AWS that you have used?
Answer: We use the following main services of AWS in our environment:
EC2: This is the Elastic Compute Cloud by Amazon. It is used for providing the computing capability to a system. We can use it in places of our standalone servers. We can deploy different kinds of applications on EC2.
S3: We use S3 on Amazon for our storage needs.
DynamoDB: We use DynamoDB in AWS for storing data in the NoSQL database form.
Amazon CloudWatch: We use CloudWatch to monitor our application in Cloud.
Amazon SNS: We use the Simple Notification Service to inform users about any issues in a Production environment.
32. Why GIT is considered better than CVS for version control system?
Answer: GIT is a distributed system. In GIT, any person can create their own branch and start checking in code. Once the code is tested, it is merged into the main GIT repo. In between, Dev, QA, and Product can validate the implementation of that code.
In CVS, there is a centralized system that maintains all the commits and changes.
GIT is open source software and there are plenty of extensions in GIT for use by our teams.
33. What is the difference between a Container and a Virtual Machine?
Answer: We need to select an Operating System (OS) to get a specific Virtual Machine (VM). VM provides full OS to an application for running in a virtualized environment.
A Container uses APIs of an Operating System (OS) to provide a runtime environment to an application.
A Container is very lightweight in comparison with a VM.
VM provides a higher level of security compared to a Container.
A Container just provides the APIs that are required by the application.
34. What is Serverless architecture?
Answer: Serverless Architecture is a term that refers to the following:
An application that depends on a third-party service.
An Application in which Code is run on ephemeral containers.
In AWS, Lambda is a popular service to implement Serverless architecture.
Another concept in Serverless Architecture is to treat code as a service, or Function as a Service (FaaS). We just write code that can be run on any environment or server, without the need to specify which server should run it.
35. What are the main principles of DevOps?
Answer: DevOps is different from Technical Operations. It has the following main principles:
Incremental: In DevOps, we aim to incrementally release software to production. We do releases to production more often than the Waterfall approach of one large release.
Automated: To enable users to make releases more often, we automate the operations from Code Check in to deployment in Production.
Collaborative: DevOps is not the only responsibility of the Operations team. It is a collaborative effort of Dev, QA, Release and DevOps teams.
Iterative: DevOps is based on an Iterative principle of using a process that is repeatable. But with each iteration, we aim to make the process more efficient and better.
Self-Service: In DevOps, we automate things and give self-service options to other teams so that they are empowered to deliver the work in their domain.
36. Are you more Dev or more Ops?
Answer: This is a tricky question. DevOps is a new concept and in any organization, the maturity of DevOps varies from highly Operations oriented to highly DevOps oriented. In some projects, teams are very mature and practice DevOps in their true form. In some projects, teams rely more on the Operations team.
As a DevOps person, I give first priority to the needs of an organization and project. At some times I may have to perform a lot of operations work. But with each iteration, I aim to bring DevOps changes incrementally to an organization.
Over time, the organization/project starts seeing results of DevOps practices and embraces it fully.
37. What is the REST service?
Answer: REST is also known as Representational State Transfer. A REST service is a simple software functionality that is available over HTTP protocol. It is a lightweight service that is widely available due to the popularity of the HTTP protocol.
Since REST is lightweight, it has very good performance in a software system. It is also one of the foundations for creating highly scalable systems that provide a service to a large number of clients.
Another key feature of a REST service is that as long as the interface is kept the same, we can change the underlying implementation. E.g. clients of a REST service can keep calling the same service while we change the implementation from PHP to Java.
38. What are the Three Ways of DevOps?
Answer: Three Ways of DevOps refers to three basic principles of DevOps culture. These are as follows:
The First Way: Systems Thinking: In this principle, we see DevOps as a flow of work from left to right: the time taken from code check-in to the feature being released to an end customer. In DevOps culture, we try to identify the bottlenecks in this flow.
The Second Way: Feedback Loops: Whenever there is an issue in production, it is fed back into the whole development and deployment process. We try to make the feedback loop more efficient so that teams can get feedback much faster. It is a way of catching a defect much earlier in the process than when it is reported by a customer.
The Third Way: Continuous Learning: We make use of the first- and second-way principles to keep making improvements in the overall process. This is the third principle, in which, over time, we make the process and our operations highly efficient, automated, and error-free by continuously improving them.
39. How do you apply DevOps principles to make system Secure?
Answer: Security of a system is one of the most important goals for an organization. We use the following ways to apply DevOps to security:
Automated Security Testing: We automate and integrate security testing techniques for software penetration testing and fuzz testing in the software development process.
Early Security Checks: We ensure that teams know about the security concerns at the beginning of a project, rather than at the end of delivery. This is achieved by conducting security training and knowledge-sharing sessions.
Standard Process: In DevOps, we try to follow a standard deployment and development process that has already gone through security audits. This helps in minimizing the introduction of any new security loopholes due to a change in the standard process.
40. What is Self-testing Code?
Answer: Self-testing Code is an important feature of DevOps culture. In DevOps culture, development team members are expected to write self-testing code. It means we have to write code along with the tests that can test this code. Once the test passes, we feel confident to release the code.
If we get an issue in production, we first write an automation test to validate that the issue happens in the current release. Once the issue in release code is fixed, we run the same test to validate that the defect is not there. With each release, we keep running these tests so that the issue does not appear anymore.
One of the techniques of writing Self-testing code is Test Driven Development (TDD).
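A toy illustration of the idea, in shell: the function and the test that guards it live in the same script, so the build fails if the behaviour regresses (slugify is a made-up example function, not something from the text above):

```shell
#!/bin/sh
# Self-testing code sketch: the function and its test ship together,
# so every build can run the test before release.

# Turn "Hello DevOps World" into "hello-devops-world".
slugify() {
    echo "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'
}

# The self-test: fail the build if the function misbehaves.
test_slugify() {
    [ "$(slugify 'Hello DevOps World')" = "hello-devops-world" ] || {
        echo "slugify self-test FAILED"; exit 1;
    }
    echo "slugify self-test passed"
}

test_slugify
```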
41. What is a Deployment Pipeline?
Answer: A Deployment Pipeline is an important concept in Continuous Delivery. In the Deployment Pipeline, we break the build process into distinct stages. In each stage, we get the feedback to move onto the next stage.
It is a collaborative effort between the various groups involved in delivering the software.
Often the first stage in Deployment Pipeline is compiling the code and converting into binaries.
After that, we run automated tests. Depending on the scenario, there are stages of performance testing, security check, usability testing etc in a Deployment Pipeline.
In DevOps, our aim is to automate all the stages of the Deployment Pipeline. With a smooth-running Deployment Pipeline, we can achieve the goal of Continuous Delivery.
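The stage-gate flow above can be sketched in shell; the stage names are examples, and the "true" command is a placeholder for each stage's real work:

```shell
#!/bin/sh
# Deployment pipeline sketch: each stage must succeed before the next
# one starts, mirroring the stage-gate idea of a Deployment Pipeline.
run_pipeline() {
    for stage in commit-build unit-tests acceptance-tests deploy; do
        echo "entering stage: $stage"
        # placeholder work; replace "true" with the real stage command
        true || { echo "stage $stage failed -- stopping pipeline"; return 1; }
        echo "stage $stage passed"
    done
    echo "build released"
}

run_pipeline
```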
42. What are the main benefits of Chef?
Answer: Chef is an automation tool for keeping the infrastructure as code. It has many benefits. Some of these are as follows:
Cloud Deployment: We can use Chef to perform automated deployment in Cloud environment.
Multi-cloud support: With Chef we can even use multiple cloud providers for our infrastructure.
Hybrid Deployment: Chef supports both Cloud-based as well as data center-based infrastructure.
High Availability: With Chef automation, we can create a high availability environment. In the case of hardware failure, Chef can maintain or start new servers in an automated way to maintain a highly available environment.
43. What are the main features of Docker Hub?
Answer: Docker Hub provides the following main features:
Image Repositories: In Docker Hub, we can push, pull, find and manage Docker Images. It is a big library that has images from community, official as well as private sources.
Automated Builds: We can use Docker Hub to create new images by making changes to the source code repository of the image.
Webhooks: With Webhooks in Docker Hub we can trigger actions that can create and build new images by pushing a change to the repository.
Github/Bitbucket integration: Docker Hub also provides integration with Github and Bitbucket systems.
44. What are the security benefits of using Container-based system?
Answer: Some of the main security benefits of using a Container-based system are as follows:
Segregation: In a Container-based system we segregate the applications on different containers. Each application may be running on the same host but in a separate container. Each application has access to ports, files and other resources that are provided to it by the container.
Transient: In a Container-based system, each application is considered as a transient system. It is better than a static system that has fixed environment which can be exposed over time.
Control: We use repeatable scripts to create the containers. This provides us tight control over the software application that we want to deploy and run. It also reduces the risk of unwanted changes in the setup that can cause security loopholes.
Security Patch: In a Container-based system, we can deploy security patches on multiple containers in a uniform way. Also, it is easier to patch a Container with an application update.
45. How many heads can you create in a GIT repository?
Answer: There can be any number of heads in a repository. By default, there is one head known as HEAD in each repository in GIT.
46. What is a Passive check in Nagios?
Answer: In Nagios, we can monitor hosts and services by active checks. In addition, Nagios also supports Passive checks that are initiated by external applications.
The results of Passive checks are submitted to Nagios. There are two main use cases of Passive checks:
We use Passive checks to monitor asynchronous services that do not give a positive result with Active checks at regular intervals of time.
We can use Passive checks to monitor services or applications that are located behind a firewall.
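For illustration, submitting a passive service check result means writing one line in Nagios's external-command format to its command file. The path below is a common default but should be checked against the command_file setting in nagios.cfg, and the host/service names are examples:

```shell
#!/bin/sh
# Sketch of submitting a passive check result to Nagios by appending a
# PROCESS_SERVICE_CHECK_RESULT line to its external command file.
# Return codes follow the plugin convention: 0=OK, 1=WARNING, 2=CRITICAL.
CMD_FILE="${NAGIOS_CMD_FILE:-/usr/local/nagios/var/rw/nagios.cmd}"

submit_passive_result() {
    host="$1"; service="$2"; code="$3"; output="$4"
    now=$(date +%s)
    printf '[%s] PROCESS_SERVICE_CHECK_RESULT;%s;%s;%s;%s\n' \
        "$now" "$host" "$service" "$code" "$output" >> "$CMD_FILE"
}

# Example: report that the nightly backup job on web01 succeeded (0 = OK).
# submit_passive_result web01 backup-job 0 "backup completed"
```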
47. What is a Docker container?
Answer: A Docker Container is a lightweight system that can be run on a Linux operating system or a virtual machine. It is a package of an application and related dependencies that can be run independently.
Since Docker Container is very lightweight, multiple containers can be run simultaneously on a single server or virtual machine.
With a Docker Container, we can create an isolated system with restricted services and processes. A Container has a private view of the operating system. It has its own process ID space, file system, and network interface.
Multiple Docker Containers can share the same Kernel.
48. How will you remove an image from Docker?
Answer: We can use the docker rmi command to delete an image from our local system.
The exact command is:
% docker rmi <image-id>
If we want to find the IDs of all the Docker images in our local system, we can use the docker images command:
% docker images
If we want to remove a Docker container, then we use the docker rm command:
% docker rm <container-id>
49. What are the common use cases of Docker?
Answer: Some of the common use cases of Docker are as follows:
Setting up Development Environment: We can use Docker to set the development environment with the applications on which our code is dependent.
Testing Automation Setup: Docker can also help in creating the Testing Automation setup. We can set up different services and apps with Docker to create the automation testing environment.
Production Deployment: Docker also helps in implementing the Production deployment for an application. We can use it to create the exact environment and process that will be used for doing the production deployment.
50. Can we lose our data when a Docker Container exits?
Answer: A Docker Container has its own file-system. In an application running on a Docker Container, we can write to this file-system. When the container exits, the data written to the file-system still remains. When we restart the container, the same data can be accessed again.
Only when we delete the container is the related data deleted.