DevOps Interview Questions and Answers PDF

1. Which among Puppet, Chef, SaltStack, and Ansible is the best Configuration Management (CM) tool? Why?
Answer:
This depends on the organization’s needs. To mention a few points about each of these tools: Puppet is the oldest and most mature CM tool. Puppet is a Ruby-based configuration management tool, but while it has some free features, much of what makes Puppet great is only available in the paid version. Organizations that don’t need a lot of extras will find Puppet useful, but those needing more customization will probably need to upgrade to the paid version.
Chef is written in Ruby, so it can be customized by those who know the language. It also includes free features, plus it can be upgraded from open source to enterprise level if necessary. On top of that, it’s a very flexible product.

Ansible is a very secure option since it uses Secure Shell. It’s a simple tool to use, but it does offer several other services in addition to configuration management. It’s very easy to learn, so it’s perfect for those who don’t have a dedicated IT staff but still need a configuration management tool.

SaltStack is a Python-based open-source CM tool made for larger businesses, but its learning curve is fairly low.

2. Explain Security management in terms of Cloud Computing?
Answer:

Identity management provides authorization for application services.
Access control grants users permission to control the access of other users entering the cloud environment.
Authentication and authorization ensure that only authorized and authenticated users can access the data and applications.

3. What is an MX record?

Answer: An MX record tells sending mail servers where to deliver email for your domain. When your domain is registered, it’s assigned several DNS records, which enable your domain to be located on the Internet. These include MX records, which direct the domain’s mail flow. Each MX record points to an email server that’s configured to process mail for that domain. There’s typically one record that points to a primary server, then additional records that point to one or more backup servers. For users to send and receive email, their domain’s MX records must point to a server that can process their mail.
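For illustration, here is a minimal sketch (assuming the third-party dnspython package is installed, and using example.com as a placeholder domain) that looks up a domain’s MX records and prints them in priority order:

```python
# Query the MX records for a domain and print them by priority.
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

answers = dns.resolver.resolve("example.com", "MX")
for record in sorted(answers, key=lambda r: r.preference):
    print(f"priority {record.preference}: mail server {record.exchange}")
```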

4. How do all these tools work together?
Answer:

Given below is a generic logical flow where everything gets automated for seamless delivery. However, this flow may vary from organization to organization as per the requirement.
Developers develop the code, and this source code is managed by version control system tools such as Git.
Developers send this code to the Git repository and any changes made in the code are committed to this Repository.
Jenkins pulls this code from the repository using the Git plugin and builds it using tools like Ant or Maven.
Configuration management tools like Puppet deploy and provision the testing environment, and then Jenkins releases this code to the test environment, where testing is done using tools like Selenium.
Once the code is tested, Jenkins sends it for deployment to the production server (even the production server is provisioned and maintained by tools like Puppet).
After deployment, it is continuously monitored by tools like Nagios.
Docker containers provide a testing environment to test the build features.

5. What is an AMI? How do we implement it?
Answer:

AMI stands for Amazon Machine Image. It is a copy of the root file system.
It provides the data required to launch an instance, which means a copy of an AMI running as a server in the cloud. It’s easy to launch instances from many different AMIs.
Commodity hardware servers have a BIOS that points to the master boot record of the first block on a disk.
A disk image is created that can fit anywhere physically on a disk, so Linux can boot from an arbitrary location on the EBS storage network.
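As a rough sketch (the AMI ID and region below are placeholders, and boto3 is assumed to be installed and configured with credentials), launching an instance from an AMI looks like this:

```python
# Launch a single EC2 instance from an existing AMI using boto3.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", instances[0].id)
```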

6. What are Plugins in Nagios?
Answer: Begin this answer by defining Plugins. They are scripts (Perl scripts, Shell scripts, etc.) that can run from a command line to check the status of a host or service. Nagios uses the results from plugins to determine the current status of hosts and services on your network.

Once you have defined Plugins, explain why we need them. Nagios will execute a plugin whenever there is a need to check the status of a host or service. The plugin performs the check and then simply returns the result to Nagios. Nagios processes the results it receives from the plugin and takes the necessary actions.
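A minimal sketch of such a plugin (the disk-usage thresholds are arbitrary examples): it prints a one-line status and exits with the standard Nagios codes 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN):

```python
#!/usr/bin/env python3
# Check disk usage on "/" and report it in Nagios plugin style.
import shutil
import sys

WARN, CRIT = 80, 90  # illustrative thresholds, in percent

try:
    usage = shutil.disk_usage("/")
    percent = usage.used / usage.total * 100
except OSError:
    print("DISK UNKNOWN - could not read disk usage")
    sys.exit(3)

if percent >= CRIT:
    print(f"DISK CRITICAL - {percent:.1f}% used")
    sys.exit(2)
if percent >= WARN:
    print(f"DISK WARNING - {percent:.1f}% used")
    sys.exit(1)
print(f"DISK OK - {percent:.1f}% used")
sys.exit(0)
```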

7. Why is Continuous monitoring necessary?
Answer:

I would suggest going with the flow mentioned below:

Continuous Monitoring allows timely identification of problems or weaknesses and quick corrective action that helps reduce expenses of an organization. Continuous monitoring provides a solution that addresses three operational disciplines known as:

continuous audit
continuous controls monitoring
continuous transaction inspection

8. What Happens During The Bootstrap Process?
Answer: During the bootstrap process, the node downloads and installs the chef-client, registers itself with the Chef server, and does an initial check-in. During this check-in, the node applies any cookbooks that are part of its run-list.

9. Explain whether it is possible to share a single instance of a Memcache between multiple projects?
Answer: Yes, it is possible to share a single instance of Memcache between multiple projects. Memcache is a memory store space, and you can run Memcache on one or more servers. You can also configure your client to speak to a particular set of instances. So, you can run two different Memcache processes on the same host, and yet they are completely independent. However, if you have partitioned your data, it becomes necessary to know which instance to get the data from or put it into.
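A small sketch of that idea (assuming the third-party pymemcache package and two memcached processes running on the same host, on ports 11211 and 11212): each project simply points its client at the instance it is configured to use:

```python
# Two independent memcached instances on the same host, used by two projects.
from pymemcache.client.base import Client

project_a = Client(("127.0.0.1", 11211))  # instance for project A
project_b = Client(("127.0.0.1", 11212))  # separate instance for project B

project_a.set("greeting", "hello from project A")
print(project_a.get("greeting"))  # b'hello from project A'
print(project_b.get("greeting"))  # None - the second instance never saw this key
```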

10. You have multiple Memcache servers, and one of them, which holds your data, fails. Will the client ever try to get key data from that failed server?
Answer: The data on the failed server won’t get removed, but there is a provision for auto-failover, which you can configure for multiple nodes. Failover can be triggered during any kind of socket or Memcached server-level error, and not during normal client errors like adding an existing key.

11. Explain the Dogpile effect. How can you prevent this effect?
Answer: The dogpile effect refers to the event when the cache expires and a website is hit by multiple requests made by clients at the same time. This effect can be prevented by using a semaphore lock: when the value expires, the first process acquires the lock and starts generating the new value.
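A minimal in-process sketch of the lock idea (the slow backend call and TTL are made up for illustration): only the first caller that finds an expired entry regenerates it, instead of every request hitting the backend at once:

```python
# Prevent the dogpile effect with a lock around cache regeneration.
import threading
import time

_cache = {}              # key -> (value, expiry timestamp)
_lock = threading.Lock()

def expensive_backend_call(key):
    time.sleep(1)        # stand-in for a slow database or API call
    return f"value for {key}"

def get(key, ttl=60):
    entry = _cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                       # still fresh, no lock needed
    with _lock:                               # only one caller regenerates
        entry = _cache.get(key)
        if entry and entry[1] > time.time():  # another caller already refreshed it
            return entry[0]
        value = expensive_backend_call(key)
        _cache[key] = (value, time.time() + ttl)
        return value
```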

12. What is DevOps with cloud computing?
Answer: Inseparable development and operations practices are universally relevant. Cloud computing, Agile development, and DevOps are interlocking parts of a strategy for transforming IT into a business adaptability enabler. If the cloud is an instrument, then DevOps is the musician that plays it.

13. What is DevOps Tooling by AWS?
Answer: AWS provides services that help you practice DevOps at your company and that are built first for use with AWS. These tools automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that is enabled by DevOps.

14. What is a build project in AWS DevOps?
Answer: A build project is used to define how CodeBuild will run a build. It includes information such as where to get the source code, which build environment to use, the build commands to run, and where to store the build output. A build environment is the combination of the operating system, programming language runtime, and tools used by CodeBuild to run a build.
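As a hedged sketch (the project name, repository URL, artifact bucket, and IAM role ARN below are placeholders), the same information can be supplied programmatically through boto3:

```python
# Define a CodeBuild build project: source, environment, artifacts, and role.
import boto3

codebuild = boto3.client("codebuild", region_name="us-east-1")
codebuild.create_project(
    name="demo-build-project",
    source={"type": "GITHUB", "location": "https://github.com/example/app.git"},
    artifacts={"type": "S3", "location": "demo-build-artifacts-bucket"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::123456789012:role/demo-codebuild-role",
)
```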

15. Can I work on my AWS CodeStar projects directly from an IDE?
Answer: Yes. By installing the AWS Toolkit for Eclipse or Visual Studio, you gain the ability to easily configure your local development environment to work with CodeStar projects. Once installed, developers can then select from a list of available CodeStar projects and have their development tooling automatically configured to clone and check out their project’s source code, all from within their IDE.

16. What Is Vpc?
Answer: A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. You can create and configure your VPC as per your requirements: select a region, create subnets (IP CIDR ranges), and configure route tables, security groups, an Internet gateway, etc. You can then launch AWS resources, such as Amazon EC2 and RDS instances, into your VPC.
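A minimal boto3 sketch of that setup (the region and CIDR ranges are illustrative): create a VPC and a subnet, then attach an Internet gateway:

```python
# Create a VPC, one subnet, and an Internet gateway, then attach the gateway.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"],
                            VpcId=vpc["VpcId"])
print("VPC:", vpc["VpcId"], "subnet:", subnet["SubnetId"])
```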

17. How Is Buffer Used In Amazon Web Services?
Answer: A buffer is used to make the system more resilient to bursts of traffic or load by synchronizing different components. The components otherwise receive and process requests in an unbalanced way. The buffer keeps the balance between the different components and makes them work at the same speed to provide faster services.
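One common way to implement such a buffer on AWS is a queue. A hedged sketch with Amazon SQS via boto3 (the queue name is a placeholder): a fast producer enqueues work and a slower consumer drains it at its own pace:

```python
# Use an SQS queue as a buffer between a fast producer and a slow consumer.
import boto3

sqs = boto3.resource("sqs", region_name="us-east-1")
queue = sqs.create_queue(QueueName="demo-buffer-queue")

# Producer side: bursts of requests simply land in the queue.
for i in range(100):
    queue.send_message(MessageBody=f"request-{i}")

# Consumer side: processes at its own rate and deletes messages once handled.
for message in queue.receive_messages(MaxNumberOfMessages=10, WaitTimeSeconds=5):
    print("processing", message.body)
    message.delete()
```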

18. What Is VPC Peering?
Answer:

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in the peered VPCs can communicate with each other as if they were within the same network.
You can create a VPC peering connection between your VPCs, or with a VPC in another AWS account within a single region.
If you have more than one AWS account within the same region and want to share or transfer the data, you can peer the VPCs across those accounts to create a file-sharing network. You can also use a VPC peering connection to allow other VPCs to access the resources you have in one of your VPCs.
A VPC peering connection can help you to facilitate the transfer of data.
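A hedged boto3 sketch (the VPC IDs and account ID are placeholders): one side requests the peering connection and the accepter side accepts it:

```python
# Request and accept a VPC peering connection.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",        # requester VPC
    PeerVpcId="vpc-22222222",    # accepter VPC
    PeerOwnerId="123456789012",  # only needed when peering across accounts
)["VpcPeeringConnection"]

# In a cross-account setup, this call is made from the accepter's account.
ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)
```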

19. What Is The Function Of Amazon Elastic Compute Cloud?
Answer: Amazon Elastic Compute Cloud, also known as Amazon EC2, is an Amazon web service that provides scalable resources and makes computing easier for developers.

The main functions of Amazon EC2 are:

It provides easily configurable options and allows the user to configure the capacity.
It provides complete control of computing resources and lets the user run the computing environment according to their requirements.
It provides a fast way to run instances and quickly boot new systems, hence reducing the overall time.
It provides scalability to the resources and changes its environment according to the requirement of the user.
It provides a variety of tools to the developers to build failure resilient applications.

20. What is the importance of buffer in Amazon Web Services?
Answer: A buffer synchronizes different components and makes the arrangement more elastic to a burst of load or traffic. The components are otherwise prone to receiving and processing requests in an unstable way. The buffer creates equilibrium between the various components and makes them work at the same rate to supply more rapid services.

21. Which automation tools can help with spin-up services?
Answer: API tools can be used for spin-up services, as can written scripts. Those scripts could be coded in Perl, Bash, or other languages of your preference. Another option is configuration management and provisioning tools such as Puppet or an improved descendant of it. A tool called Scalr can also be used, and finally, we can go with a managed solution like RightScale.

22. How do the start, stop, and terminate processes work?
Answer: Starting and stopping an instance: When an instance is stopped, it performs a normal shutdown and then transitions to a stopped state. You can start the instance again later, since all of its Amazon EBS volumes remain attached. While an instance is in the stopped state, you are not charged for additional instance usage.

Terminating an instance: When an instance is terminated, it performs a normal shutdown, and the attached EBS volumes are removed unless the volume’s deleteOnTermination attribute is set to false. The instance itself is deleted and cannot be started again later.
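A minimal boto3 sketch of the three actions above (the instance ID is a placeholder):

```python
# Stop, start again later, or permanently terminate an EC2 instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"

ec2.stop_instances(InstanceIds=[instance_id])       # EBS volumes stay attached
ec2.start_instances(InstanceIds=[instance_id])      # same root volume, new run
ec2.terminate_instances(InstanceIds=[instance_id])  # cannot be started again
```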

23. What happens if my application stops responding to requests in beanstalk?
Answer: AWS Elastic Beanstalk applications have a system in place for avoiding failures in the underlying infrastructure. If an Amazon EC2 instance fails for any reason, Beanstalk will use Auto Scaling to automatically launch a new instance. Beanstalk can also detect if your application is not responding on the custom health check URL; even though the infrastructure appears healthy, this will be logged as an environment event (e.g., a bad version was deployed) so you can take appropriate action.

24. How do I transfer my existing domain name registration to Amazon Route 53 without disrupting my existing web traffic?
Answer: You will need to get a list of the DNS record data for your domain name first; it is generally available in the form of a “zone file” that you can get from your existing DNS provider. Once you receive the DNS record data, you can use Route 53’s Management Console or simple web-services interface to create a hosted zone that will store the DNS records for your domain name, and then follow its transfer process.

It also includes steps such as updating the nameservers for your domain name to the ones associated with your hosted zone.

To complete the process, you have to contact the registrar with whom you registered your domain name and follow the transfer process. As soon as your registrar propagates the new name server delegations, your DNS queries will start to get answered.
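A hedged boto3 sketch of the hosted-zone part (the domain, record values, and caller reference are placeholders): create the zone, add a record from the exported zone file, and note the name servers to give your registrar:

```python
# Create a hosted zone and one record set; the zone's NS values go to the registrar.
import boto3

route53 = boto3.client("route53")
zone = route53.create_hosted_zone(Name="example.com",
                                  CallerReference="transfer-2024-01")
route53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={"Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "example.com",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
        },
    }]},
)
print("Give these name servers to your registrar:",
      zone["DelegationSet"]["NameServers"])
```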

25. When should I use a Classic Load Balancer and when should I use an Application load balancer?
Answer: A Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while an Application Load Balancer is ideal for microservices or container-based architectures where there is a need to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.

26. Explain AWS?
Answer: AWS stands for Amazon Web Services, which is a collection of remote computing services also known as cloud computing. This technology of cloud computing is also known as IaaS, or Infrastructure as a Service.

27. What do you understand by “Infrastructure as code”? How does it fit into the DevOps methodology? What purpose does it achieve?
Answer:

Infrastructure as Code (IAC) is a type of IT infrastructure that operations teams can use to automatically manage and provision through code, rather than using a manual process.
For faster deployments, companies treat infrastructure like software: as code that can be managed with DevOps tools and processes. These tools let you make infrastructure changes more easily, rapidly, safely, and reliably.
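A small sketch of the idea (the stack name and bucket name are made up): the infrastructure is described as a template kept in version control, and a tool such as CloudFormation, driven here through boto3, applies it:

```python
# Describe an S3 bucket as code and let CloudFormation create it.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "demo-iac-bucket-12345"},
        }
    },
}

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(StackName="demo-iac-stack",
                            TemplateBody=json.dumps(template))
```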

28. What measures have you taken to handle revision (version) control?
Answer: To handle revision control, post your code on SourceForge or GitHub so everyone can view it, and ask viewers to give suggestions for improving it.

29. What are the types of HTTP requests?
Answer:

The types of HTTP requests are (a quick illustration follows the list):

GET
HEAD
PUT
POST
PATCH
DELETE
TRACE
CONNECT
OPTIONS
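A quick illustration of a few of these methods (using the third-party requests package, assumed installed, against the public httpbin.org test service):

```python
# Exercise several HTTP methods against a public echo service.
import requests

print(requests.get("https://httpbin.org/get").status_code)
print(requests.head("https://httpbin.org/get").headers["Content-Type"])
print(requests.post("https://httpbin.org/post", json={"a": 1}).status_code)
print(requests.put("https://httpbin.org/put", data="payload").status_code)
print(requests.delete("https://httpbin.org/delete").status_code)
```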

30. Explain how can I vertically scale an Amazon instance?
Answer: This is one of the essential features of AWS and cloud virtualization. Spin up a new, larger instance, then pause that instance and detach its root EBS volume from the server and discard it. Next, stop your live instance and detach the root volume connected to it. Note down the unique device ID, attach that same root volume to the new server, and restart it. This results in a vertically scaled Amazon instance.
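A hedged boto3 sketch of the volume swap described above (the instance and volume IDs are placeholders, and /dev/xvda is assumed to be the root device name; the new instance is assumed to be stopped with its original root volume already detached):

```python
# Move the old instance's root EBS volume onto a larger instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
old_instance = "i-0aaaaaaaaaaaaaaaa"   # current small instance
new_instance = "i-0bbbbbbbbbbbbbbbb"   # newly spun-up larger instance
root_volume = "vol-0ccccccccccccccc"   # root volume of the old instance

ec2.stop_instances(InstanceIds=[old_instance])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[old_instance])

ec2.detach_volume(VolumeId=root_volume, InstanceId=old_instance)
ec2.get_waiter("volume_available").wait(VolumeIds=[root_volume])

ec2.attach_volume(VolumeId=root_volume, InstanceId=new_instance,
                  Device="/dev/xvda")  # assumed root device name
ec2.start_instances(InstanceIds=[new_instance])
```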

A typical security-group layout: the web server group allows ports 80 and 443 from anywhere in the world, but port 22 is allowed only from the jump box group. The database group allows port 3306 from the web server group and port 22 from the jump box group. Any machine added to the web server group can store data in the database, and no one can SSH directly to any of your boxes.
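A hedged boto3 sketch of those rules (the security group IDs are placeholders; the jump-box SSH rule for the database group would follow the same pattern):

```python
# Open 80/443 to the world, 22 only from the jump box group, and 3306 only
# from the web server group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
web_sg, db_sg, jump_sg = "sg-0web11111111", "sg-0db222222222", "sg-0jump3333333"

for port in (80, 443):
    ec2.authorize_security_group_ingress(
        GroupId=web_sg, IpProtocol="tcp", FromPort=port, ToPort=port,
        CidrIp="0.0.0.0/0")

ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                    "UserIdGroupPairs": [{"GroupId": jump_sg}]}])

ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "UserIdGroupPairs": [{"GroupId": web_sg}]}])
```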

31. How can we make sure a new service is ready for the products launched?
Answer:

Backup System
Recovery plans
Load Balancing
Monitoring
Centralized logging

32. What are the benefits of cloud computing?
Answer: The main benefits of cloud computing are:

Data backup and storage of data.
Powerful server capabilities.
Increased productivity.
Cost-effective and time-saving.

33. List the essential DevOps tools?
Answer:

Git
Jenkins
Selenium
Puppet
Chef
Ansible
Nagios
Docker

34. What is the most important thing DevOps helps us achieve?
Answer: In my opinion, the most important thing that DevOps helps us achieve is to get changes into production as quickly as possible while minimizing risks in software quality assurance and compliance. This is the primary objective of DevOps.

However, you can add many other positive effects of DevOps. For example, clearer communication and better working relationships between teams, i.e., both the Ops team and Dev team collaborate to deliver good-quality software, which in turn leads to higher customer satisfaction.

35. What is the one most important thing DevOps helps do?
Answer: The most important thing DevOps helps do is to get the changes into production as quickly as possible while minimizing risks in software quality assurance and compliance. That is the primary objective of DevOps. However, there are many other positive side-effects to DevOps. For example, clearer communication and better working relationships between teams which creates a less stressful working environment.

36. What’s a PTR in DNS?
Answer: Pointer (PTR) records are used to map a network interface (IP) to a hostname. These are primarily used for reverse DNS. Reverse DNS is set up very similarly to how normal (forward) DNS is set up. When you delegate DNS forward, the owner of the domain tells the registrar to let the domain use specific name servers.
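A minimal standard-library sketch of a reverse (PTR) lookup, using a well-known public resolver address as the example:

```python
# Map an IP address back to the hostname its PTR record points at.
import socket

hostname, aliases, addresses = socket.gethostbyaddr("8.8.8.8")
print(hostname)  # e.g. dns.google
```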

37. Why are configuration management processes and tools important?
Answer: Talk about multiple software builds, releases, revisions, and versions for each software or testware that is being developed. Move on to explain the need for storing and maintaining data, keeping track of development builds and simplified troubleshooting. Don’t forget to mention the key CM tools that can be used to achieve these objectives. Talk about how tools like Puppet, Ansible, and Chef help in automating software deployment and configuration on several servers.
