DevOps Coding Interview Questions and Answers

1. What is Version control? 
Answer: This is probably the easiest question you will face in the interview. My suggestion is to first give a definition of version control. It is a system that records changes to a file or set of files over time so that you can recall specific versions later. Version control systems consist of a central shared repository where teammates can commit changes to a file or set of files. Then you can mention the uses of version control.

Version control allows you to:

Revert files back to a previous state.
Revert the entire project back to a previous state.
Compare changes over time.
See who last modified something that might be causing a problem.
Identify who introduced an issue, and when.

2. What is State Stalking in Nagios?
Answer: I will advise you to first give a small introduction to State Stalking. It is used for logging purposes. When stalking is enabled for a particular host or service, Nagios will watch that host or service very carefully and log any changes it sees in the output of check results.
Depending on the discussion between you and the interviewer, you can also add: “It can be very helpful in later analysis of the log files. Under normal circumstances, the result of a host or service check is only logged if the host or service has changed state since it was last checked.”

3. How do you find a list of files that have changed in a particular commit?
Answer: For this answer, instead of just stating the command, explain what exactly it will do. To get a list of files that have changed in a particular commit, use the command:
git diff-tree -r {hash}
Given the commit hash, this will list all the files that were changed or added in that commit. The -r flag makes the command list individual files, rather than collapsing them into root directory names only.
You can also include the point mentioned below; it is optional, but it will help in impressing the interviewer.
The output will also include some extra information, which can be easily suppressed by including two flags:
git diff-tree --no-commit-id --name-only -r {hash}
Here --no-commit-id will suppress the commit hash from appearing in the output, and --name-only will print only the file names, instead of the full diff details.
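A quick throwaway session shows the command in action (repository and file names are illustrative; git is assumed to be installed):

```shell
# Build a small repo with one commit that touches two files.
git init -q dt-demo && cd dt-demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
echo "hello" > app.txt
mkdir -p src && echo "lib" > src/lib.txt
git add .
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add files"
# List exactly the files changed by that commit, nothing else.
files=$(git diff-tree --no-commit-id --name-only -r HEAD)
echo "$files"    # app.txt and src/lib.txt, one per line
```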

4. How will you know in Git if a branch has already been merged into master?
Answer: I suggest you include both of the commands mentioned below:
git branch --merged lists the branches that have been merged into the current branch.
git branch --no-merged lists the branches that have not been merged.
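A small sketch of both commands in a throwaway repository (branch names are illustrative; git is assumed to be installed):

```shell
git init -q mb-demo && cd mb-demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
git branch done-feature    # points at the current commit, so it counts as merged
git checkout -q -b wip-feature
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "wip"
git checkout -q -          # back to the original branch
merged=$(git branch --merged)       # includes done-feature
unmerged=$(git branch --no-merged)  # includes wip-feature
echo "$merged"
echo "$unmerged"
```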

5. Explain what is Memcached?
Answer: Memcached is a free and open-source, high-performance, distributed memory object caching system. The primary objective of Memcached is to enhance the response time for data that can otherwise be recovered or constructed from some other source or database. It is used to avoid the need to repeatedly query a SQL database or another source to fetch data for concurrent requests.

Memcached can be used for

  • Social Networking -> Profile Caching
  • Content Aggregation -> HTML/ Page Caching
  • Ad targeting -> Cookie/profile tracking
  • Relationship -> Session caching
  • E-commerce -> Session and HTML caching
  • Location-based services -> Database query scaling
  • Gaming and entertainment -> Session caching

Memcached helps to:

  • Speed up application processes
  • Determine what to store and what not to
  • Reduce the number of retrieval requests to the database
  • Cut down the I/O (Input/Output) access to the hard disk

The drawbacks of Memcached are:

  • It is not a persistent data store
  • It is not a database
  • It is not application-specific
  • It cannot cache large objects
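The core idea, look in memory first and only fall back to the slower source on a miss, can be sketched in plain bash. This is a toy cache-aside illustration, not a real Memcached client; the "database" is a stand-in counter:

```shell
#!/usr/bin/env bash
# Cache-aside sketch: the first read of a key queries the "database";
# repeat reads are served from memory, so db_hits stays at 1.
declare -A cache
db_hits=0

get_profile() {
  local id=$1
  if [[ -z "${cache[$id]+set}" ]]; then
    db_hits=$((db_hits + 1))       # stand-in for a slow SQL query
    cache[$id]="profile-$id"
  fi
  profile=${cache[$id]}
}

get_profile 42                     # miss: hits the "database"
get_profile 42                     # hit: served from memory
echo "db queries: $db_hits, value: $profile"
```

Real Memcached does this over the network with a simple get/set protocol, sharing the in-memory store across many application servers.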

6. What are the success factors for Continuous Integration?
Answer: Here you have to mention the requirements for Continuous Integration.

You could include the following points in your answer:

  • Maintain a code repository
  • Automate the build
  • Make the build self-testing
  • Everyone commits to the baseline every day
  • Every commit (to baseline) should be built
  • Keep the build fast
  • Test in a clone of the production environment
  • Make it easy to get the latest deliverables
  • Everyone can see the results of the latest build
  • Automate deployment
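Several of these practices can be captured in a pipeline definition. A hypothetical minimal example in GitHub Actions syntax (job and command names are illustrative):

```yaml
name: ci
on: [push]                          # every commit to the baseline is built
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # code lives in a shared repository
      - run: make build             # automated build
      - run: make test              # self-testing build; results visible to everyone
```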

7. What is Selenium IDE?
Answer: My suggestion is to start this answer by defining the Selenium IDE. It is an integrated development environment for Selenium scripts. It is implemented as a Firefox extension, and allows you to record, edit, and debug tests. Selenium IDE includes the entire Selenium Core, allowing you to easily and quickly record and play back tests in the actual environment that they will run in.
Now include some advantages in your answer. With autocomplete support and the ability to move commands around quickly, Selenium IDE is the ideal environment for creating Selenium tests no matter what style of tests you prefer. 

8. What is Puppet?
Answer: I will advise you to first give a small definition of Puppet. It is a configuration management tool which is used to automate administration tasks.

Now you should describe its architecture and how Puppet manages its Agents. Puppet has a Master-Slave architecture in which the Slave first has to send a certificate signing request to the Master, and the Master has to sign that certificate in order to establish a secure connection between the Puppet Master and the Puppet Slave. The Puppet Slave then sends a request to the Puppet Master, and the Puppet Master pushes the configuration onto the Slave.

9. What are Puppet Manifests?
Answer: It is a very important question, so make sure you answer it in the correct flow. In my view, you should first define manifests.
Every node (or Puppet Agent) has got its configuration details in Puppet Master, written in the native Puppet language. These details are written in the language which Puppet can understand and are termed as Manifests. Manifests are composed of Puppet code and their filenames use the .pp extension.
Now give an example: you can write a manifest on the Puppet Master that creates a file and installs Apache on all Puppet Agents (Slaves) connected to the Puppet Master.
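A minimal sketch of such a manifest (resource names are illustrative; the Apache package is 'httpd' on RedHat-family agents):

```puppet
# site.pp on the Puppet Master — applied to every connected agent
node default {
  package { 'httpd':
    ensure => installed,
  }
  file { '/tmp/hello.txt':
    ensure  => file,
    content => "managed by puppet\n",
  }
}
```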

10. Tell me about a time when you used collaboration and Puppet to help resolve a conflict within a team?
Answer: Explain to them about your past experience of Puppet and how it was useful to resolve conflicts, you can refer to the below-mentioned example:
The development team wanted root access on test machines managed by Puppet in order to make specific configuration changes. We responded by meeting with them weekly to agree on a process for developers to communicate configuration changes and to empower them to make many of the changes they needed. Through our joint efforts, we came up with a way for the developers to change specific configuration values themselves via data abstracted through Hiera. In fact, we even taught one of the developers how to write Puppet code in collaboration with us.

11. What is the use of etckeeper-commit-post and etckeeper-commit-pre on Puppet Agent?
Answer: etckeeper-commit-post: In this configuration file you can define commands and scripts that execute after pushing the configuration on the Agent.
etckeeper-commit-pre: In this configuration file you can define commands and scripts that execute before pushing the configuration on the Agent.

12. What is Facter?
Answer: Sometimes you need to write manifests with conditional expressions based on agent-specific data, which is available through Facter. Facter provides information like the kernel version, distribution release, IP address, CPU info, and so on. You can also define your own custom facts.
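Facts surface in manifests through the $facts hash, so a manifest can branch on agent-specific data. A small sketch (fact names as exposed by modern Facter):

```puppet
# Choose the right package name based on what Facter reports about the agent.
if $facts['os']['family'] == 'RedHat' {
  package { 'httpd': ensure => installed }
} else {
  package { 'apache2': ensure => installed }
}
```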

13. What is MCollective?
Answer:  MCollective is a powerful orchestration framework. Run actions on thousands of servers simultaneously, using existing plugins or writing your own.

15. Explain the difference between class definition and class declaration?
Answer: Defining a class makes it available for later use. It doesn’t yet add any resources to the catalog; to do that, you must declare it or assign it from an ENC.
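A short sketch of the distinction (class and resource names are illustrative):

```puppet
# Definition: makes the class available, but adds nothing to the catalog yet.
class ntp {
  package { 'ntp':  ensure => installed }
  service { 'ntpd': ensure => running }
}

# Declaration: this is what actually puts the class's resources in the catalog.
include ntp
```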

16. What are microservices, and why do they have an impact on operations?
Answer: Microservices is a product of software architecture and programming practices. Microservices architectures typically produce smaller, but more numerous artifacts that Operations is responsible for regularly deploying and managing. For this reason, microservices have an important impact on Operations. The term that describes the responsibilities of deploying microservices is micro deployments. So, what DevOps is really about is bridging the gap between microservices and micro deployments.

17. What are the reasons against using an RDBMS?
Answer: In a nutshell, if your application is all about storing application entities in a persistent and consistent way, then an RDBMS could be overkill. A simple key-value storage solution might be perfect for you. Note that the value is not meant to be a simple element but can be a complex entity in itself!
Another reason could be if you have hierarchical application objects and need some query capability into them then most NoSQL solutions might be a fit. With an RDBMS you can use ORM to achieve the same result but at the cost of adding extra complexity.
RDBMS is also not the best solution if you are trying to store large trees or networks of objects. Depending on your other needs a Graph Database might suit you.
If you are running in the Cloud and need to run a distributed database for durability and availability then you could check Dynamo and Big Table based datastores which are built for this core purpose.
Last but not least, if your data grows too large to be processed on a single machine, you might look into Hadoop or any other solution that supports distributed Map/Reduce.

18. How is Docker different from other container technologies?
Answer: According to me, the below points should be there in your answer:
Docker containers are easy to deploy in a cloud. Docker can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run containerized applications; and it makes managing and deploying applications much easier. You can even share containers with your applications.
If you have some more points to add you can do that but make sure the above explanation is there in your answer.

19. What are the adoptions of DevOps in the industry?

  • Use of agile and other development processes and methods
  • Demand for an increased rate of production releases from application and business stakeholders
  • The wide availability of virtual and cloud infrastructure from both internal and external providers
  • Increased usage of data center automation and configuration management tools
  • Increased focus on test automation and continuous integration methods
  • Best practices on critical issues

20. Tell us how you have used Docker in your past position?
Answer: Explain how you have used Docker to help rapid deployment. Explain how you have scripted Docker and used Docker with other tools like Puppet, Chef or Jenkins. If you have no past practical experience in Docker and have past experience with other tools in similar space, be honest and explain the same. In this case, it makes sense if you can compare other tools to Docker in terms of functionality. 

21. What types of testing are needed?
Answer: Software teams will often look for the “fair weather” path to system completion; that is, they start from an assumption that software will usually work and only occasionally fail. I believe in practicing defensive programming in a pragmatic way, which often means assuming that the code will fail and planning for those failures. I try to incorporate a unit test strategy, use of test harnesses, early load testing, network simulation, A/B and multivariate testing, etc.

22. Describe two-factor authentication?
Answer: Two-factor authentication is a security process in which the user provides two means of identification from separate categories of credentials; one is typically a physical token, such as a card, and the other is typically something memorized, such as a security code.

23. What are the advantages of NoSQL database over RDBMS?
Answer: The advantages are:

1. Less need for ETL
2. Support for unstructured text
3. Ability to handle change over time
4. Breadth of functionality
5. Ability to scale horizontally
6. Support for multiple data structures
7. Choice of vendors

24. How would you ensure traceability?
Answer: This question probes your attitude to metrics, logging, transaction journeys, and reporting. You should be able to identify that metrics, monitoring, and logging need to be a core part of the software system, and that without them the software essentially cannot be maintained and diagnosed. Include words like syslog, Splunk, error tracking, Nagios, SCOM, and AVIcode in your answer.

25. What is the significance of a Signed Header?
Answer: Signed header authentication is used to validate the interaction between the Chef node and the Chef server: each request from the node carries signed headers that the server verifies before accepting the request.

26. Which DevOps tools are you confident using?
Answer: Thoroughly describe any tools that you are confident about, what their capabilities are, and why you prefer using them. For example, if you have expertise in Git, you would tell the interviewer that Git is a distributed Version Control System (VCS) tool that allows the user to track file changes and revert to specific changes when required. Discuss how Git’s distributed architecture gives it an added edge: developers make changes locally and can have the entire project history on their local Git repositories, which can be later shared with other team members.

27. Is there a difference between Agile and DevOps? If yes, please explain?
Answer: As a DevOps engineer, interview questions like this are quite expected. Start by describing the obvious overlap between DevOps and Agile. Although the implementation of DevOps is always in sync with Agile methodologies, there is a clear difference between the two. The principles of Agile are associated with seamless production or development of a piece of software. On the other hand, DevOps deals with the development, followed by deployment of the software, ensuring faster turnaround time, minimum errors, and reliability.

28. How is Chef used as a CM tool?
Answer: Chef is considered to be one of the preferred industry-wide CM tools. Facebook migrated its infrastructure and backend IT to the Chef platform, for example. Explain how Chef helps you to avoid delays by automating processes. The scripts are written in Ruby. It can integrate with cloud-based platforms and configure new systems. It provides many libraries for infrastructure development that can later be deployed within software. Thanks to its centralized management system, one Chef server is enough to be used as the center for deploying various policies.

29. How is IaC implemented using AWS?
Answer: Start by talking about the age-old mechanisms of writing commands onto script files and testing them in a separate environment before deployment and how this approach is being replaced by IaC. Similar to the codes written for other services, with the help of AWS, IaC allows developers to write, test, and maintain infrastructure entities in a descriptive manner, using formats such as JSON or YAML. This enables easier development and faster deployment of infrastructure changes.
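For instance, a minimal CloudFormation template (resource names are illustrative) describes an S3 bucket declaratively in YAML, and AWS then converges real infrastructure to match it:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal IaC sketch - a versioned S3 bucket declared as code
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
```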
As a DevOps engineer, in-depth knowledge of processes, tools, and relevant technology is essential. You must also have a holistic understanding of the products, services, and systems in place.

30. What are the core operations of DevOps in terms of development and Infrastructure?
The core operations of DevOps:

With application development:

  • Code developing
  • Code coverage
  • Unit testing
  • Packaging
  • Deployment

With infrastructure:

  • Provisioning
  • Configuration
  • Orchestration
  • Deployment

31. What are the key elements of Continuous Testing tools?
Answer: Key elements of Continuous Testing are:
Risk Assessment: It Covers risk mitigation tasks, technical debt, quality assessment, and test coverage optimization to ensure the build is ready to progress toward the next stage.
Policy Analysis: It ensures all processes align with the organization’s evolving business needs and that compliance demands are met.
Requirements Traceability: It ensures true requirements are met and rework is not required. An object assessment is used to identify which requirements are at risk, working as expected or require further validation.
Advanced Analysis: It uses automation in areas such as static code analysis, change impact analysis, and scope assessment/prioritization to prevent defects in the first place and accomplish more within each iteration.
Test Optimization: It ensures tests yield accurate outcomes and provide actionable findings. Aspects include Test Data Management, Test Optimization Management, and Test Maintenance
Service Virtualization: It ensures access to real-world testing environments. Service virtualization enables access to a virtual form of the required testing stages, cutting the time wasted on test environment setup and availability.

32. Mention what is the difference between Memcache and Memcached?
Answer: Memcache: It is an extension that allows you to work through handy object-oriented (OOP) and procedural interfaces. It is designed to reduce database load in dynamic web applications.
Memcached: It is an extension that uses the libmemcached library to provide an API for communicating with Memcached servers. It is used to speed up dynamic web applications by alleviating database load. It is the newer API.

33. What is AWS CodePipeline in AWS DevOps?
Answer: AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. This enables you to rapidly and reliably deliver features and updates.

34. What is the AWS Developer Tools?
Answer: The AWS Developer Tools is a set of services designed to enable developers and IT operations professionals practicing DevOps to rapidly and safely deliver software.
Together, these services help you securely store and version control your application’s source code and automatically build, test, and deploy your application to AWS or your on-premises environment. You can use AWS CodePipeline to orchestrate an end-to-end software release workflow using these services and third-party tools or integrate each service independently with your existing tools.

35. How do you configure a build project in AWS DevOps?
Answer: A build project can be configured through the console or the AWS CLI. You specify the source repository location, the runtime environment, the build commands, the IAM role assumed by the container, and the compute class required to run the build. Optionally, you can specify the build commands in a buildspec.yml file.
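A minimal buildspec.yml might look like this (commands and runtime are illustrative, for a hypothetical Node.js project):

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18
  build:
    commands:
      - npm ci          # restore dependencies
      - npm test        # fail the build if tests fail
artifacts:
  files:
    - '**/*'            # everything in the workspace becomes the build artifact
```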

36. What happens when a build is run in CodeBuild in AWS DevOps?
Answer: CodeBuild will create a temporary compute container of the class defined in the build project, load it with the specified runtime environment, download the source code, execute the commands configured in the project, upload the generated artifact to an S3 bucket, and then destroy the compute container. During the build, CodeBuild will stream the build output to the service console and Amazon CloudWatch Logs.

37. Why AWS DevOps Matters?
Answer: Software and the Internet have transformed the world and its industries, from shopping to entertainment to banking. Software no longer merely supports a business; rather it becomes an integral component of every part of a business.
Companies interact with their customers through software delivered as online services or applications and on all sorts of devices. They also use software to increase operational efficiencies by transforming every part of the value chain, such as logistics, communications, and operations.
In a similar way that physical goods companies transformed how they design, build, and deliver products using industrial automation throughout the 20th century, companies in today’s world must transform how they build and deliver software.

38. What are the components involved in Amazon Web Services?
Answer: There are four components involved, as below.
Amazon S3: With this, one can store and retrieve the key-based data used in building the cloud architecture, and the produced data can also be stored in this component against the specified key.
Amazon EC2: Helpful to run a large distributed system, such as a Hadoop cluster. Automatic parallelization and job scheduling can be achieved with this component.
Amazon SQS: This component acts as a mediator between different controllers, and is also used for buffering the requests received by the Amazon manager.
Amazon SimpleDB: Helps in storing the transitional state log and the tasks executed by the consumers.

39. Define auto-scaling?
Answer: Auto-scaling is one of the remarkable features of AWS: it permits you to automatically provision and spin up new instances without the need for your intervention. This is achieved by setting thresholds and metrics to watch. If those thresholds are crossed, a new instance of your choice will be configured, spun up, and added to the load balancer pool.

40. How is AWS OpsWorks different than AWS CloudFormation?
Answer: OpsWorks and CloudFormation both support application modeling, deployment, configuration, management, and related activities. Both support a wide variety of architectural patterns, from simple web applications to highly complex applications. AWS OpsWorks and AWS CloudFormation differ in abstraction level and areas of focus.
AWS CloudFormation is a building-block service which enables the customer to manage almost any AWS resource via a JSON-based domain-specific language. It provides foundational capabilities for the full breadth of AWS, without prescribing a particular model for development and operations. Customers define templates and use them to provision and manage AWS resources, operating systems, and application code.
In contrast, AWS OpsWorks is a higher-level service that focuses on providing highly productive and reliable DevOps experiences for IT administrators and ops-minded developers. To do this, AWS OpsWorks employs a configuration management model based on concepts such as stacks and layers, and provides integrated experiences for key activities like deployment, monitoring, auto-scaling, and automation. Compared to AWS CloudFormation, AWS OpsWorks supports a narrower range of application-oriented AWS resource types, including Amazon EC2 instances, Amazon EBS volumes, Elastic IPs, and Amazon CloudWatch metrics.
