1. Which among Puppet, Chef, SaltStack and Ansible is the best Configuration Management (CM) tool? Why?
This depends on the organization’s needs, so mention a few points about each of those tools:
Puppet is the oldest and most mature CM tool. Puppet is a Ruby-based Configuration Management tool, but while it has some free features, much of what makes Puppet great is only available in the paid version. Organizations that don’t need a lot of extras will find Puppet useful, but those needing more customization will probably need to upgrade to the paid version.
Chef is written in Ruby, so it can be customized by those who know the language. It also includes free features, plus it can be upgraded from open source to enterprise-level if necessary. On top of that, it’s a very flexible product.
Ansible is a very secure option since it uses Secure Shell. It’s a simple tool to use, but it does offer a number of other services in addition to configuration management. It’s very easy to learn, so it’s perfect for those who don’t have a dedicated IT staff but still need a configuration management tool.
SaltStack is a Python-based open-source CM tool made for larger businesses, but its learning curve is fairly low.
2. Explain security management in terms of cloud computing.
- Identity management provides authorization for application services.
- Access control grants permissions that govern what a user entering the cloud environment can do.
- Authentication and authorization ensure that only authenticated, authorized users can access the data and applications.
3. What is an MX record?
An MX record tells senders how to send email for your domain. When your domain is registered, it’s assigned several DNS records, which enable your domain to be located on the Internet. These include MX records, which direct the domain’s mail flow. Each MX record points to an email server that’s configured to process mail for that domain. There’s typically one record that points to a primary server, then additional records that point to one or more backup servers. For users to send and receive email, their domain’s MX records must point to a server that can process their mail.
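As an illustration of how the primary/backup arrangement works: each MX record carries a preference value, and senders try the lowest-preference server first. A minimal sketch in Python (the hostnames are hypothetical examples):

```python
# MX records as (preference, mail server) pairs -- lower preference is tried first.
mx_records = [
    (20, "backup-mx.example.com"),
    (10, "primary-mx.example.com"),
]

def delivery_order(records):
    """Return mail servers in the order a sender should try them."""
    return [host for _, host in sorted(records)]

print(delivery_order(mx_records))
# ['primary-mx.example.com', 'backup-mx.example.com']
```

The sender falls back to the backup server only when the primary is unreachable.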
4. How do all these tools work together?
- Given below is a generic logical flow in which everything is automated for seamless delivery. However, this flow may vary from organization to organization as per requirements.
- Developers write the code, and this source code is managed by version control tools like Git.
- Developers push the code to a Git repository, and any changes made to the code are committed to this repository.
- Jenkins pulls the code from the repository using the Git plugin and builds it using tools like Ant or Maven.
- Configuration management tools like Puppet deploy and provision the test environment, and then Jenkins releases the code to that environment, where it is tested using tools like Selenium.
- Once the code is tested, Jenkins sends it for deployment to the production server (even the production server is provisioned and maintained by tools like Puppet).
- After deployment, it is continuously monitored by tools like Nagios.
- Docker containers provide a test environment in which to test the build features.
5. What is an AMI? How do we implement it?
- AMI stands for Amazon Machine Image. It is basically a copy of the root file system.
- It provides the data required to launch an instance; launching an instance means running a copy of an AMI as a server in the cloud. It’s easy to launch instances from many different AMIs.
- On physical hardware servers, the BIOS points to the master boot record in the first block of a disk. With an AMI, a disk image is created that can sit anywhere physically on the EBS storage network, and Linux can boot from that arbitrary location.
6. What are Plugins in Nagios?
Begin this answer by defining Plugins. They are scripts (Perl scripts, Shell scripts, etc.) that can run from a command line to check the status of a host or service. Nagios uses the results from Plugins to determine the current status of hosts and services on your network.
Once you have defined Plugins, explain why we need them. Nagios executes a plugin whenever it needs to check the status of a host or service. The plugin performs the check and simply returns the result to Nagios, which processes the result and takes any necessary actions.
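A minimal sketch of a Nagios-style plugin in Python (the path and thresholds are illustrative): a plugin prints a one-line status and signals OK/WARNING/CRITICAL/UNKNOWN through its exit code (0–3), which is what Nagios actually reads.

```python
import shutil

# Conventional Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_disk(path="/", warn=80.0, crit=90.0):
    """Return (exit_code, status_line) for disk usage of `path`."""
    usage = shutil.disk_usage(path)
    percent = usage.used / usage.total * 100
    if percent >= crit:
        return CRITICAL, f"CRITICAL - disk usage {percent:.1f}%"
    if percent >= warn:
        return WARNING, f"WARNING - disk usage {percent:.1f}%"
    return OK, f"OK - disk usage {percent:.1f}%"

code, line = check_disk()
print(line)   # Nagios parses this status line; a real plugin would then sys.exit(code)
```

In production the script would end with `sys.exit(code)` so Nagios can map the exit status to a host/service state.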
7. Why is Continuous monitoring necessary?
I suggest you go with the flow mentioned below:
Continuous monitoring allows timely identification of problems or weaknesses and quick corrective action, which helps reduce an organization’s expenses. Continuous monitoring provides a solution that addresses three operational disciplines:
- continuous audit
- continuous controls monitoring
- continuous transaction inspection
8. What Happens During The Bootstrap Process?
During the bootstrap process, the node downloads and installs chef-client, registers itself with the Chef server, and does an initial check in. During this check in, the node applies any cookbooks that are part of its run-list.
9. Explain whether it is possible to share a single instance of a Memcache between multiple projects?
Yes, it is possible to share a single instance of Memcache between multiple projects. Memcache is a memory store space, and you can run Memcache on one or more servers. You can also configure your client to speak to a particular set of instances. So you can run two different Memcache processes on the same host, and yet they are completely independent. However, if you have partitioned your data, you need to know which instance to read the data from or write it to.
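A minimal sketch of how a client partitions keys across independent Memcache instances (the addresses are hypothetical; note two instances share a host): each key is hashed to pick one instance, so reads must go to the same instance that received the write.

```python
import hashlib

# Hypothetical Memcache instances -- two of them live on the same host.
instances = ["10.0.0.1:11211", "10.0.0.1:11212", "10.0.0.2:11211"]

def pick_instance(key, servers):
    """Deterministically map a key to one server (simple modulo sharding)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same key always routes to the same instance.
print(pick_instance("user:42", instances))
```

Real clients typically use consistent hashing instead of plain modulo, so that adding or removing a server remaps as few keys as possible.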
10. You have multiple Memcache servers, and one of them, which holds your data, fails. Will clients ever try to get key data from that failed server?
The data on the failed server won’t be removed, but there is a provision for auto-failover, which you can configure for multiple nodes. Failover can be triggered by any kind of socket or Memcached server-level error, not by normal client errors like adding an existing key.
11. Explain what the dogpile effect is. How can you prevent this effect?
The dogpile effect refers to the event when a cache expires and a website is hit by multiple client requests at the same time. This effect can be prevented by using a semaphore lock: when the value expires, the first process acquires the lock and starts generating the new value.
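The semaphore-lock idea can be sketched in a few lines of Python: only the first caller that sees an expired value regenerates it, while concurrent callers keep serving the stale copy instead of piling onto the backend.

```python
import threading
import time

lock = threading.Lock()
cache = {"value": "stale", "expires_at": 0.0}    # starts out already expired

def expensive_rebuild():
    return "fresh"                               # stands in for a slow backend call

def get_value(ttl=60):
    if time.time() < cache["expires_at"]:
        return cache["value"]                    # cache still valid
    if lock.acquire(blocking=False):             # first process wins the lock
        try:
            cache["value"] = expensive_rebuild() # regenerate exactly once
            cache["expires_at"] = time.time() + ttl
        finally:
            lock.release()
        return cache["value"]
    return cache["value"]                        # others serve the stale value

print(get_value())   # 'fresh' -- rebuilt by the lock holder
```

Serving slightly stale data to the non-lock-holders is the usual trade-off: it keeps the backend from being hit by a thundering herd.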
12. What is Dev Ops with cloud computing?
Inseparable development and operations practices are universally relevant. Cloud computing, Agile development, and DevOps are interlocking parts of a strategy for transforming IT into a business adaptability enabler. If cloud is an instrument, then DevOps is the musician that plays it.
13. What is DevOps Tooling by AWS?
AWS provides services that help you practice DevOps at your company and that are built first for use with AWS. These tools automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that is enabled by DevOps.
14. What is a build project in AWS Devops?
A build project is used to define how CodeBuild will run a build. It includes information such as where to get the source code, which build environment to use, the build commands to run, and where to store the build output. A build environment is the combination of operating system, programming language runtime, and tools used by CodeBuild to run a build.
15. Can I work on my AWS CodeStar projects directly from an IDE?
Yes. By installing the AWS Toolkit for Eclipse or Visual Studio, you gain the ability to easily configure your local development environment to work with CodeStar projects. Once installed, developers can select from a list of available CodeStar projects and have their development tooling automatically configured to clone and check out their project’s source code, all from within their IDE.
16. What Is VPC?
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. You can configure or create your VPC as required: select a region, create subnets (IP CIDR ranges), and configure route tables, security groups, an Internet gateway, etc. for your AWS account. You can then launch AWS resources, such as Amazon EC2 and RDS instances, into your VPC.
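Subnet planning for a VPC can be sketched with Python’s `ipaddress` module; the CIDR block below is an arbitrary example, not a required value.

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")        # example VPC CIDR block
subnets = list(vpc.subnets(new_prefix=24))       # carve /24 subnets out of it

print(len(subnets))   # 256 possible /24 subnets in a /16
print(subnets[0])     # 10.0.0.0/24
print(subnets[1])     # 10.0.1.0/24
```

This mirrors the typical layout of a /16 VPC split into per-availability-zone /24 subnets.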
17. How Is Buffer Used In Amazon Web Services?
A buffer is used to make the system more resilient to bursts of traffic or load by synchronizing different components. The components otherwise receive and process requests in an unbalanced way. The buffer keeps the balance between different components and makes them work at the same speed to provide faster services.
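The idea can be sketched with a simple in-process queue: a sudden burst of requests is absorbed by the buffer, and the consumer drains it at its own steady pace (in AWS this role is typically played by a managed queue such as SQS).

```python
import queue

buffer = queue.Queue()

# Producer: a burst of requests arrives all at once.
for i in range(5):
    buffer.put(f"request-{i}")

# Consumer: drains the buffer at its own pace, one request at a time.
processed = []
while not buffer.empty():
    processed.append(buffer.get())

print(processed)   # requests are handled in arrival order
```

The producer is never blocked by a slow consumer, which is exactly the decoupling the buffer provides.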
18. What Is VPC Peering?
- A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they were within the same network.
- You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region.
- If you have more than one AWS account within the same region and want to share or transfer data, you can peer the VPCs across those accounts to create a file-sharing network. You can also use a VPC peering connection to allow other VPCs to access resources you have in one of your VPCs.
A VPC peering connection can help you to facilitate the transfer of data.
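One practical constraint worth mentioning: a peering connection cannot be established between VPCs with overlapping CIDR blocks. This check can be sketched with Python’s `ipaddress` module (the CIDR blocks are arbitrary examples):

```python
import ipaddress

vpc_a = ipaddress.ip_network("10.0.0.0/16")
vpc_b = ipaddress.ip_network("10.1.0.0/16")
vpc_c = ipaddress.ip_network("10.0.128.0/17")   # falls inside vpc_a's range

print(vpc_a.overlaps(vpc_b))   # False -- peering is possible
print(vpc_a.overlaps(vpc_c))   # True  -- peering would be rejected
```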
19. What Is The Function Of Amazon Elastic Compute Cloud?
Amazon Elastic Compute Cloud, also known as Amazon EC2, is an Amazon web service that provides scalable resources and makes computing easier for developers.
The main functions of Amazon EC2 are:
- It provides easily configurable options and allows the user to configure the capacity.
- It provides complete control of computing resources and lets the user run the computing environment according to their requirements.
- It provides a fast way to launch instances and quickly boot the system, reducing the overall time.
- It provides scalability to the resources and changes the environment according to the requirements of the user.
- It provides a variety of tools to developers to build failure-resilient applications.
20. What is the importance of buffer in Amazon Web Services?
A buffer synchronizes different components and makes the arrangement more elastic to a burst of load or traffic. The components are otherwise prone to receiving and processing requests at uneven rates. The buffer creates equilibrium between the various components and makes them work at the same rate to supply more rapid services.
21. Which automation tools can help with spin-up services?
API tools can be used for spin-up services, as can written scripts. Those scripts could be coded in Perl, Bash, or another language of your preference. Another option is configuration management and provisioning tools such as Puppet or its successor, Opscode Chef. A tool called Scalr can also be used, and finally we can go with a managed solution like RightScale.
22. How do the start, stop, and terminate processes work?
Starting and stopping an instance: when an instance is stopped, it performs a normal shutdown and then transitions to a stopped state. All of its Amazon EBS volumes remain attached, so you can start the instance again later. You are not charged for additional instance hours while the instance is in the stopped state.
Terminating an instance: when an instance is terminated, it performs a normal shutdown, and the attached Amazon EBS volumes are deleted unless the volume’s deleteOnTermination attribute is set to false. The instance itself is also deleted, and you cannot start it again later.
23. What happens if my application stops responding to requests in beanstalk?
AWS Elastic Beanstalk applications have a system in place for avoiding failures in the underlying infrastructure. If an Amazon EC2 instance fails for any reason, Beanstalk will use Auto Scaling to automatically launch a new instance. Beanstalk can also detect if your application is not responding on the custom link, even though the infrastructure appears healthy. It will be logged as an environment event (e.g., a bad version was deployed) so you can take appropriate action.
24. How do I transfer my existing domain name registration to Amazon Route 53 without disrupting my existing web traffic?
First, you will need to get a list of the DNS record data for your domain name. It is generally available in the form of a “zone file” that you can get from your existing DNS provider. Once you have the DNS record data, you can use Route 53’s Management Console or simple web-services interface to create a hosted zone that will store the DNS records for your domain name, and follow its transfer process.
This also includes steps such as updating the name servers for your domain name to the ones associated with your hosted zone.
To complete the process, contact the registrar with whom you registered your domain name and follow their transfer process. As soon as your registrar propagates the new name server delegations, your DNS queries will start to get answered.
25. When should I use a Classic Load Balancer and when should I use an Application load balancer?
A Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while an Application Load Balancer is ideal for microservices or container-based architectures where there is a need to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.
26. Explain AWS?
AWS stands for Amazon Web Services, which is a collection of remote computing services, also known as cloud computing. This technology of cloud computing is also known as IaaS, or Infrastructure as a Service.
27. What do you understand by “Infrastructure as code”? How does it fit into the DevOps methodology? What purpose does it achieve?
- Infrastructure as Code (IAC) is a type of IT infrastructure that operations teams can use to automatically manage and provision through code, rather than using a manual process.
- For faster deployments, companies treat infrastructure like software: as code that can be managed with DevOps tools and processes. These tools let you make infrastructure changes more easily, rapidly, safely, and reliably.
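The core idea — declare the desired state in code and let a tool converge the actual state toward it — can be sketched in a few lines. The resource model below is hypothetical, not a real provider API:

```python
# Desired state, as you would declare it in an IaC template.
desired = {"web-1": {"size": "t3.small"}, "web-2": {"size": "t3.small"}}

# Actual state, as reported by the (hypothetical) provider.
actual = {"web-1": {"size": "t3.micro"}}

def plan(desired, actual):
    """Compute the actions needed to converge actual state to desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

print(plan(desired, actual))  # [('update', 'web-1'), ('create', 'web-2')]
```

This plan/apply split is the same pattern tools like Terraform expose: the diff is computed first, then applied, and running it again against a converged state produces an empty plan (idempotence).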
28. What measures have you taken to handle revision (version) control?
To handle revision control, post your code on SourceForge or GitHub so everyone can view it, and ask the viewers to give suggestions for improving it.
29. What are the types of HTTP requests?
The types of HTTP requests (methods) are:
- GET
- HEAD
- POST
- PUT
- PATCH
- DELETE
- OPTIONS
- TRACE
- CONNECT
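A minimal demonstration of two common request methods against an in-process server, using only the Python standard library (the paths and payloads are arbitrary examples):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):                       # GET: read a resource
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"read")

    def do_POST(self):                      # POST: create a resource
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        self.send_response(201)
        self.end_headers()
        self.wfile.write(b"created:" + payload)

    def log_message(self, *args):           # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), DemoHandler)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/items")
get_body = conn.getresponse().read()

conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("POST", "/items", body=b"x=1")
post_body = conn.getresponse().read()

server.shutdown()
print(get_body, post_body)   # b'read' b'created:x=1'
```

The other methods follow the same pattern; a handler simply defines `do_PUT`, `do_DELETE`, and so on.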
30. Explain how you can vertically scale an Amazon instance.
This is one of the essential features of AWS and cloud virtualization. Spin up a new, larger instance than the one you are currently running. Pause that instance and detach the root EBS volume from the server and discard it. Then stop your live instance and detach its root volume. Note the unique device ID, attach that root volume to your new server, and start it again. The result is a vertically scaled Amazon instance.
The web server group allows ports 80 and 443 from around the world, but only port 22 from the jump box group. The database group allows port 3306 from the web server group and port 22 from the jump box group. Any machine added to the web server group can store data in the database, and no one can SSH directly to any of your boxes.
31. How can we make sure a new service is ready for launch?
- Backup System
- Recovery plans
- Load Balancing
- Centralized logging
32. What are the benefits of cloud computing?
The main benefits of cloud computing are:
- Data backup and storage of data.
- Powerful server capabilities.
- Increased productivity.
- Cost effective and time saving.
33. List the essential DevOps tools.
The essential DevOps tools include:
- Git
- Jenkins
- Selenium
- Puppet
- Chef
- Ansible
- Docker
- Nagios
34. What is the most important thing DevOps helps us achieve?
In my opinion, the most important thing DevOps helps us achieve is getting changes into production as quickly as possible while minimizing risks in software quality assurance and compliance. This is the primary objective of DevOps.
However, you can add many other positive effects of DevOps. For example, clearer communication and better working relationships between teams i.e. both the Ops team and Dev team collaborate together to deliver good quality software which in turn leads to higher customer satisfaction.
35. What is the one most important thing DevOps helps do?
The most important thing DevOps helps do is to get the changes into production as quickly as possible while minimizing risks in software quality assurance and compliance. That is the primary objective of DevOps. However, there are many other positive side-effects to DevOps. For example, clearer communication and better working relationships between teams which creates a less stressful working environment.
36. What’s a PTR in DNS?
Pointer (PTR) records are used to map a network interface (IP) to a host name. These are primarily used for reverse DNS. Reverse DNS is set up very similarly to normal (forward) DNS. When you delegate forward DNS, the owner of the domain tells the registrar to let the domain use specific name servers.
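For illustration: a PTR lookup for an IPv4 address queries a special name built by reversing the octets under the `in-addr.arpa` zone. Python’s `ipaddress` module computes that name directly (the address below is from the documentation range):

```python
import ipaddress

addr = ipaddress.ip_address("192.0.2.10")   # example address from TEST-NET-1
print(addr.reverse_pointer)                  # '10.2.0.192.in-addr.arpa'
```

A reverse DNS resolver then asks for the PTR record at that name to recover the host name.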
37. Why are configuration management processes and tools important?
Talk about multiple software builds, releases, revisions, and versions for each software or testware that is being developed. Move on to explain the need for storing and maintaining data, keeping track of development builds and simplified troubleshooting. Don’t forget to mention the key CM tools that can be used to achieve these objectives. Talk about how tools like Puppet, Ansible, and Chef help in automating software deployment and configuration on several servers.