DevOps Linux Interview Questions and Answers

1. How Do I Manage Passwords On Red Hat Enterprise Linux, Centos, And Fedora Core?
Answer: As described in the type reference, you need the Shadow Password library, which is provided by the ruby-shadow package. The ruby-shadow library is available natively for Fedora Core 6 (and higher) and should build on the corresponding RHEL and CentOS variants.

2. Explain what you mean by Facter and some use cases for the same?
Answer: Sometimes you need to write manifests with conditional expressions based on agent-specific data, which is available through Facter. Facter provides information like the kernel version, distribution release, IP address, CPU info, etc. You can also define your own custom facts.

Facter can be used independently of Puppet to gather information about a system. Whether it’s parsing the /proc/xen directory on Linux or running the prtdiag command on Solaris, the tool does a great job of abstracting the specific operating system commands used to determine the collection of facts. When used in conjunction with Puppet, the facts gathered from the system allow the puppet master to make intelligent decisions during manifest compilation. Within your Puppet manifest, you can reference any key-value pair provided by Facter by prefixing the fact name with “$”.

If the default set of facts is not sufficient, there are two ways to extend Facter to provide additional facts. One way is to write custom facts in Ruby; the other is to use environment variables prefixed with FACTER_.
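As an illustrative sketch (assuming the facter command is installed, for example via the puppet-agent package; fact names vary slightly between Facter versions):

```shell
# Query built-in facts from the command line, independently of Puppet
facter kernel          # e.g. "Linux"
facter ipaddress       # the primary IP address of the node

# Environment variables prefixed with FACTER_ become custom facts,
# so this prints "staging" as the value of the deploy_tier fact
FACTER_deploy_tier=staging facter deploy_tier
```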

3. Describe the most significant gain you made from automating a process through Puppet?
Answer: “I automated the configuration and deployment of Linux and Windows machines using Puppet. In addition to shortening the processing time from one week to 10 minutes, I used the roles and profiles paradigm and documented the purpose of each module in README to ensure that others could update the module using Git. The modules I wrote are still being used, but they’ve been improved by my teammates and members of the community.”

4. What is an AMI? How do we implement it?
Answer: AMI stands for Amazon Machine Image. It is basically a copy of the root file system.

An AMI provides the information required to launch an instance, that is, a running copy of the AMI as a server in the cloud. It’s easy to launch instances from many different AMIs.

Physical servers have a commodity BIOS that points to the master boot record in the first block of a disk. In the cloud, a disk image can be created that fits anywhere physically on a disk, and Linux can boot from an arbitrary location on the EBS storage network.

5. What is NRPE (Nagios Remote Plugin Executor) in Nagios?
Answer: For this answer, give a brief definition of plugins first. The NRPE addon is designed to allow you to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to allow Nagios to monitor “local” resources (like CPU load, memory usage, etc.) on remote machines. Since these local resources are not usually exposed to external machines, an agent like NRPE must be installed on the remote Linux/Unix machines.

I would advise you to explain the NRPE architecture. The NRPE addon consists of two pieces:

The check_nrpe plugin, which resides on the local monitoring machine.
The NRPE daemon, which runs on the remote Linux/Unix machine.
There is an SSL (Secure Sockets Layer) connection between the monitoring host and the remote host.
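A minimal sketch of the two pieces (the hostname is hypothetical, and plugin paths and the check_load thresholds vary by distribution):

```shell
# On the remote host, /etc/nagios/nrpe.cfg defines which plugins the
# daemon is allowed to run, e.g.:
#   command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20

# On the monitoring host, Nagios invokes check_nrpe, which asks the remote
# NRPE daemon to run the named command and return its status
/usr/lib/nagios/plugins/check_nrpe -H remote.example.com -c check_load
```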
6. What are the chef and puppet used for?
Answer: Puppet and Chef are the major configuration management systems on Linux, along with CFEngine and Ansible. More than configuration management tools, Chef, Puppet, and Ansible are among the industry’s most notable Infrastructure as Code (IaC) tools.

7. Why shouldn’t I use auto-sign for all my clients?
Answer:

It is very tempting to enable auto-sign for all nodes, as it cuts down on the manual steps required to bootstrap a new node (or indeed to move it to a new puppet master).
Typically this would be done with a *.example.com entry, or even *, in the autosign.conf file.
This, however, can be very dangerous, as it can enable a node to masquerade as another node and get the configuration intended for that node. The reason for this is that the node chooses the certificate common name (‘CN’, usually its FQDN, but this is fully configurable), and the puppet master then uses this CN to look up the node definition to serve. The certificate itself is stored, so two nodes could not connect with the same CN (e.g. alice.example.com), but this is not the problem.
The problem lies in the fact that the puppet master does not make a 1-1 mapping between a node and the first certificate it saw for it, and hence multiple certificates can map to the same node, for example:
alice.example.com connects and gets the node alice { } definition.
bob.example.com connects with CN alice.bob.example.com and also matches the node alice { } definition.
Without auto-signing, it would be apparent that bob was trying to get alice’s configuration, as the puppet cert process lists the full FQDN/CN presented. With auto-sign turned on, bob silently retrieves alice’s configuration.
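For illustration, the risky wildcard entry and the safer manual workflow look like this (the hostnames are hypothetical, and puppet cert is the classic command the answer above refers to):

```shell
# /etc/puppet/autosign.conf -- a wildcard like this is the dangerous part:
#   *.example.com

# Safer: leave auto-signing off and inspect pending requests, so a bogus
# CN such as alice.bob.example.com is visible before it is ever signed
puppet cert list                 # show pending requests with their full CNs
puppet cert sign bob.example.com # sign only the node you expect
```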
8. Mention an instance when you have used SSH?
Answer: I have used SSH to log into a remote machine and work on the command line. Besides this, I have also used it to tunnel into the system in order to facilitate secure encrypted communications between two untrusted hosts over an insecure network.
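For example, a local port forward that carries a database connection over an SSH-encrypted channel (hostnames, user, and ports are hypothetical):

```shell
# Log in to a remote machine for interactive command-line work
ssh admin@web01.example.com

# Forward local port 5433 to db01's PostgreSQL port 5432 via web01, so the
# traffic crosses the untrusted network inside the encrypted SSH tunnel
ssh -N -L 5433:db01.example.com:5432 admin@web01.example.com
```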

9. Describe your experience implementing continuous deployment?
Answer: Answer with a comprehensive list of all the tools that you used. Include inferences of the challenges you faced and how you tackled them.

10. What is the difference between RAID 0 and RAID 1?
Answer: RAID 1 offers redundancy through mirroring, i.e., data is written identically to two drives. RAID 0 offers no redundancy and instead uses striping, i.e., data is split across all the drives. This means RAID 0 offers no fault tolerance; if any of the constituent drives fails, the RAID unit fails.
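On Linux, software RAID makes the difference concrete. A sketch with mdadm (device names are hypothetical; the commands require root and spare block devices):

```shell
# RAID 0: striping across two drives; more capacity and throughput,
# but losing either drive loses the whole array
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# RAID 1: mirroring; either drive alone holds a full copy of the data
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde
```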

11. What is the difference between a Cookbook and a Recipe in Chef?
Answer: When you group resources together, what you get is a Recipe, which is useful for executing configurations and policy. When you combine Recipes, what you get is a Cookbook, which is more easily manageable than individual Recipes.

12. Which are the reasons against using an RDBMS?
Answer: In a nutshell, if your application is all about storing application entities in a persistent and consistent way, then an RDBMS could be overkill. A simple key-value storage solution might be perfect for you. Note that the value is not meant to be a simple element but can be a complex entity in itself!

Another reason could be if you have hierarchical application objects and need some query capability into them then most NoSQL solutions might be a fit. With an RDBMS you can use ORM to achieve the same result but at the cost of adding extra complexity.

RDBMS is also not the best solution if you are trying to store large trees or networks of objects. Depending on your other needs a Graph Database might suit you.

If you are running in the Cloud and need to run a distributed database for durability and availability then you could check Dynamo and Big Table based datastores which are built for this core purpose.
Last but not least, if your data grows too large to be processed on a single machine, you might look into Hadoop or any other solution that supports distributed Map/Reduce.

13. Explain with a use case where DevOps can be used in industry/ real-life?
Answer: There are many industries that are using DevOps, so you can mention any of those use cases, or you can refer to the example below:

Etsy is a peer-to-peer e-commerce website focused on handmade or vintage items and supplies, as well as unique factory-manufactured items. Etsy struggled with slow, painful site updates that frequently caused the site to go down. This affected sales for the millions of Etsy users who sold goods through the online marketplace and risked driving them to competitors.

With the help of a new technical management team, Etsy transitioned from its waterfall model, which produced four-hour full-site deployments twice weekly, to a more agile approach. Today, it has a fully automated deployment pipeline, and its continuous delivery practices have reportedly resulted in more than 50 deployments a day with fewer disruptions.

14. What is the difference between Active and Passive check in Nagios?
Answer: First point out the basic difference between Active and Passive checks: Active checks are initiated and performed by Nagios itself, while Passive checks are performed by external applications.

If your interviewer is looking unconvinced with the above explanation then you can also mention some key features of both Active and Passive checks.

15. What platforms does Docker run on?
Answer: I will start this answer by saying that Docker runs on Linux and cloud platforms, and then mention the following Linux distributions:

Ubuntu 12.04, 13.04 et al
Fedora 19/20+
RHEL 6.5+
CentOS 6+
Gentoo
ArchLinux
openSUSE 12.3+
CRUX 3.0+
Cloud:
Amazon EC2
Google Compute Engine
Microsoft Azure
Rackspace
Note that the Docker Engine requires a Linux kernel; on Windows and Mac, Docker runs inside a lightweight Linux virtual machine.
16. Explain Security management in terms of Cloud Computing?
Answer:

Identity management provides the authorization of application services.
Access control: permissions are granted so that users can control the access of other users entering the cloud environment.
Authentication and authorization: only authenticated and authorized users are allowed to access the data and applications.
17. Explain how can I vertically scale an Amazon instance?
Answer: This is one of the essential features of AWS and cloud virtualization. Spin up a new, larger instance, pause it, then detach its root EBS volume and discard it. Next, stop your live instance and detach its root volume. Note the unique device ID, attach that root volume to the new server, and restart it. The result is a vertically scaled Amazon instance.

A typical security group layout: the web server group allows ports 80 and 443 from anywhere, while port 22 is open only to the jump-box group. The database group allows port 3306 from the web server group and port 22 from the jump-box group. Any machine added to the web server group can store data in the database, and no one can SSH directly into any of your boxes.
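The resize steps above can be sketched with the AWS CLI (instance and volume IDs are hypothetical; the flow assumes an EBS-backed instance whose new, larger replacement has already been launched and stopped, with its original root volume discarded):

```shell
# Stop the live instance and detach its root EBS volume
aws ec2 stop-instances  --instance-ids i-0live111111111111
aws ec2 detach-volume   --volume-id vol-0root2222222222

# Attach the old root volume to the new, larger instance as its boot
# device, then start it
aws ec2 attach-volume   --volume-id vol-0root2222222222 \
    --instance-id i-0new3333333333333 --device /dev/xvda
aws ec2 start-instances --instance-ids i-0new3333333333333
```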

18. Explain how you can get the current color of the current screen on the Ubuntu desktop?
Answer: You can open the background image in GIMP (an image editor) and then use the dropper tool to select the color at a specific point. It gives you the RGB value of the color at that point.

19. Mention some important features of Memcached?
Answer: Important features of Memcached include:

CAS tokens: A CAS token is attached to any object retrieved from the cache. You can use that token to save your updated object.

Callbacks: Callbacks simplify your code.

getDelayed: It reduces the delay time of your script while it waits for results to come back from the server.

Binary protocol: You can use the binary protocol instead of ASCII with newer clients.

Igbinary: Previously, the client always serialized values with complex data; with the igbinary option, you can use a binary serializer instead.

20. What are the core roles of DevOps Engineers in terms of development and Infrastructure?
Answer:

The core job roles of a DevOps Engineer:

With application development:
Code development
Code coverage
Unit testing
Packaging
Deployment

With infrastructure:
Continuous Integration
Continuous Testing
Continuous Deployment
Provisioning
Configuration
Orchestration
Deployment

21. What is Facter?
Answer: Sometimes you need to write manifests with conditional expressions based on agent-specific data, which is available through Facter. Facter provides information like the kernel version, distribution release, IP address, CPU info, etc. You can also define your own custom facts.

22. Why does Puppet have its own language? Why not use XML or YAML as the configuration format? Why not use Ruby as the input language?
Answer: The language used for manifests is ultimately Puppet’s human interface, and XML and YAML, being data formats developed around the processing capabilities of computers, are horrible human interfaces. While some people are comfortable reading and writing them, there’s a reason why we use web browsers instead of just reading the HTML directly. Also, using XML or YAML would limit any assurance that the interface was declarative — one process might treat an XML configuration differently from another.

23. My servers are all unique; can Puppet still help?
Answer: All servers are at least somewhat unique, but very few servers are unique; hostnames and IP addresses (e.g.) will always differ, but nearly every server runs a relatively standard operating system. Servers are also often very similar to other servers within a single organization — all Solaris servers might have similar security settings, or all web servers might have roughly equivalent configurations — even if they’re very different from servers in other organizations. Finally, servers are often needlessly unique, in that they have been built and managed manually with no attempt at retaining appropriate consistency.

Puppet can help both on the side of consistency and uniqueness. Puppet can be used to express the consistency that should exist, even if that consistency spans arbitrary sets of servers based on any data like operating system, data center, or physical location. Puppet can also be used to handle uniqueness, either by allowing the special provision of what makes a given host unique or through specifying exceptions to otherwise standard classes.

24. What is Module and How it is different from Manifest?
Answer: A module is a collection of manifests and related data. Manifests defined inside modules can be included or called from other manifests, which makes manifests easier to manage. Modules also help you push specific manifests to a specific node or agent.

25. How should I upgrade Puppet and Factor?

Answer:

The best way to install and upgrade Puppet and Factor is via your operating system’s package management system, using either your vendor’s repository or one of Puppet Labs’ public repositories.
If you have installed Puppet from source, make sure you remove old versions entirely (including all application and library files) before upgrading. Configuration data (usually located in /etc/puppet or /var/lib/puppet, although the location can vary) can be left in place between installs.
26. What if I haven’t signed a CLA?
Answer: If you haven’t signed a CLA, then we can’t yet accept your code contribution into Puppet or Factor. Signing a CLA is very easy: simply log into your GitHub account and go to our CLA page to sign the agreement.

We’ve worked hard to reach everyone who has contributed code to Puppet, but if you have questions or concerns about a previous contribution you’ve made to Puppet and you don’t believe you’ve signed a CLA, please sign a CLA or contact us for further information.

27. What’s Special About Puppet’s Model-driven Design?
Answer: Traditionally, managing the configurations of a large group of computers has meant a series of imperative steps; in its rawest state, SSH and a for loop. This general approach grew more sophisticated over time, but it retained the more profound limitations at its root.

Puppet takes a different approach, which is to model everything — the current state of the node, the desired configuration state, the actions taken during configuration enforcement — as data: each node receives a catalog of resources and relationships, compares it to the current system state, and makes changes as needed to bring the system into compliance.

The benefits go far beyond just healing the headaches of configuration drift and unknown system state: modeling systems as data lets Puppet simulate configuration changes, track the history of a system over its lifecycle, and prove that refactored manifest code still produces the same system state. It also drastically lowers the barrier to entry for hacking and extending Puppet: instead of analyzing code and reverse-engineering the effects of each step, a user can just parse data, and sysadmins have been able to add significant value to their Puppet deployments with an afternoon’s worth of Perl scripting.

28. Why Does Puppet Have Its Own Language? Why Not Use Xml Or Yaml As The Configuration Format? Why Not Use Ruby As The Input Language?
Answer: The language used for manifests is ultimately Puppet’s human interface, and XML and YAML, being data formats developed around the processing capabilities of computers, are horrible human interfaces. While some people are comfortable reading and writing them, there’s a reason why we use web browsers instead of just reading the HTML directly. Also, using XML or YAML would limit any assurance that the interface was declarative — one process might treat an XML configuration differently from another.

29. How Do I Document My Manifests?
Answer: The Puppet language includes a simple documentation syntax, which is currently documented on the Puppet Manifest Documentation wiki page. The puppet doc command uses this inline documentation to automatically generate RDoc or HTML documents for your manifests and modules.

30. Does signing a CLA change who owns Puppet?
Answer: The change in license and the requirement for a CLA doesn’t change who owns the code. This is a pure license agreement and NOT a Copyright assignment. If you sign a CLA, you maintain full copyright to your code and are merely providing a license to Puppet Labs to use your code.

31. What are Resources in Puppet?
Answer: Resources are the fundamental unit for modeling system configurations. Each resource describes some aspect of a system, like a specific service or package.

A resource declaration is an expression that describes the desired state for a resource and tells Puppet to add it to the catalog. When Puppet applies that catalog to a target system, it manages every resource it contains, ensuring that the actual state matches the desired state.
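A resource declaration can be tried directly from the shell (a sketch; it assumes the puppet executable is installed, and the file path is just an illustration):

```shell
# Declare a file resource inline and apply it: Puppet compares the desired
# state to the actual state and changes the system only if they differ
puppet apply -e 'file { "/tmp/motd": ensure => file, content => "managed by puppet\n" }'

# Inspect how Puppet models the current state of an existing resource
puppet resource file /tmp/motd
```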

32. What is a puppet module command?
Answer: The puppet module command provides an interface for managing modules from the Puppet Forge. Its interface is similar to several common package managers (such as gem, apt-get, or yum). You can use the puppet module command to search for, install, and manage modules.

33. What is MCollective?
Answer: MCollective is a powerful orchestration framework. Run actions on thousands of servers simultaneously, using existing plugins or writing your own.

34. How can you explain installing Puppet agent in Linux?
Answer: Install the Puppet agent so that your master can communicate with your Linux nodes.

1. Install a release package to enable Puppet Platform repositories.
2. Confirm that you can run Puppet executables.
The location for Puppet’s executables is /opt/puppetlabs/bin/, which is not in your PATH environment variable by default.

The executable path doesn’t matter for Puppet services — for instance, service puppet start works regardless of the PATH — but if you’re running interactive puppet commands, you must either add their location to your PATH or execute them using their full path.

To quickly add the executable location to your PATH for your current terminal session, use the command export PATH=/opt/puppetlabs/bin:$PATH. You can also add this location wherever you configure your paths, such as your .profile or .bashrc configuration files.

35. What is Puppet module?
Answer: puppet module is a multi-purpose tool for working with Puppet modules. It can install and upgrade modules from the Puppet Forge, help generate new modules, and package modules for public release.

36. Does Puppet Run On Windows?
Answer: As of Puppet 2.7.6 basic types and providers do run on Windows, and the test suite is being run on Windows to ensure future compatibility. More information can be found on the Puppet on Windows page, and bug reports and patches are welcome.

37. Which Versions Of Ruby Does Puppet Support?
Answer: Puppet requires an MRI Ruby interpreter. Certain versions of Ruby are tested more thoroughly with Puppet than others, and some versions are not tested at all. Run ruby --version to check the version of Ruby on your system.

Starting with Puppet 4, puppet-agent packages do not rely on the OS’s Ruby version, as it bundles its own Ruby environment. You can install puppet-agent alongside any version of Ruby or on systems without Ruby installed. Likewise, Puppet Enterprise does not rely on the OS’s Ruby version, as it bundles its own Ruby environment. You can install PE alongside any version of Ruby or on systems without Ruby installed. The Windows installers provided by Puppet Labs don’t rely on the OS’s Ruby version and can be installed alongside any version of Ruby or on systems without Ruby installed.

38. How does merging work?
Answer: Every node always gets a node object (which may be empty or may contain classes, parameters, and an environment) from the configured node_terminus. (This setting takes effect where the catalog is compiled: on the puppet master server when using an agent/master arrangement, and on the node itself when using puppet apply. The default node terminus is plain, which returns an empty node object; the exec terminus calls an ENC script to determine what should go in the node object.) Every node may also get a node definition from the site manifest (usually called site.pp).

When compiling a node’s catalog, Puppet will include all of the following:

Any classes specified in the node object received from the node terminus.
Any classes or resources which are in the site manifest but outside any node definitions.
Any classes or resources in the most specific node definition in site.pp that matches the current node (if site.pp contains any node definitions).

Note 1: If site.pp contains at least one node definition, it must have a node definition that matches the current node; compilation will fail if a match can’t be found.
Note 2: If the node name resembles a dot-separated fully qualified domain name, Puppet will make multiple attempts to match a node definition, removing the right-most part of the name each time. Thus, Puppet would first try agent1.example.com, then agent1.example, then agent1. This behavior isn’t mimicked when calling an ENC, which is invoked only once with the agent’s full node name.
Note 3: If no matching node definition can be found with the node’s name, Puppet will try one last time with a node name of default; most users include a node default {} statement in their site.pp file. This behavior isn’t mimicked when calling an ENC.
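A sketch of a site.pp illustrating the matching rules above (the node names and classes are hypothetical):

```
# site.pp

class { 'ntp': }      # outside any node definition: applied to every node

node 'agent1' {       # matched via agent1.example.com -> agent1.example -> agent1
  include webserver
}

node default { }      # fallback used when no other definition matches
```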

39. Which technologies can act as a driver to enable DevOps?
Answer: PaaS: a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure.

IaaS: a category of cloud computing services that abstracts the user from the details of infrastructure like physical computing resources, location, data partitioning, scaling, security, backup, etc.
Configuration automation: Automation is a big win in part because it eliminates the labor associated with repetitive tasks. Codifying such tasks also means documenting them and ensuring that they’re performed correctly, in a safe manner, and repeatedly across different infrastructure types.
Microservices: which consists of a particular way of designing software applications as suites of independently deployable services.
Containers: Containers modernize IT environments and processes, and provide a flexible foundation for implementing DevOps. At the organizational level, containers allow for appropriate ownership of the technology stack and processes, reducing hand-offs and the costly change coordination that comes with them.

40. Which scripting language is most important for a DevOps engineer?
Answer: Software development and Operational automation require programming. In terms of scripting

Bash is the most frequently used Unix shell which should be your first automation choice. It has a simple syntax and is designed specifically to execute programs in a non-interactive manner. The same stands for Perl which owes a great deal of its popularity to being very good at manipulating text and storing data in databases.
Next, if you are using Puppet or Chef, it’s worth learning Ruby, which is relatively easy to learn, and many of the automation tools have been written specifically in it.
Java has a huge impact on IT backend, although it has a limited spread across Operations.

41. How Database fits in a DevOps?
Answer: In a perfect DevOps world, the DBA is an integral part of both Development and Operations teams and database changes should be as simple as code changes. So, you should be able to version and automate your Database scripts as your application code. In terms of choices between RDBMS, NoSQL or another kind of storage solutions a good database design means fewer changes to your schema of Data and more efficient testing and service virtualization. Treating database management as an afterthought and not choosing the right database during the early stages of the software development lifecycle can prevent successful adoption of the true DevOps movement.

42. What is an MX record?
Answer: An MX record tells senders how to send an email for your domain. When your domain is registered, it’s assigned several DNS records, which enable your domain to be located on the Internet. These include MX records, which direct the domain’s mail flow. Each MX record points to an email server that’s configured to process mail for that domain. There’s typically one record that points to a primary server, then additional records that point to one or more backup servers. For users to send and receive an email, their domain’s MX records must point to a server that can process their mail.
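You can inspect a domain’s MX records with dig (example.com is just an illustration; each answer line shows the preference value followed by the mail server, and lower preference values are tried first):

```shell
# Query the MX records for a domain
dig +short MX example.com
```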

43. What are the anti-patterns of DevOps?
Answer: A pattern is a common usage that is usually followed. If a pattern commonly adopted by others does not work for your organization and you continue to blindly follow it, you are essentially adopting an anti-pattern. There are several myths about DevOps that lead to such anti-patterns.

44. What is Version control?
Answer: This is probably the easiest question you will face in the interview. My suggestion is to first give a definition of version control. It is a system that records changes to a file or set of files over time so that you can recall specific versions later. Version control systems consist of a central shared repository where teammates can commit changes to a file or set of files. Then you can mention the uses of version control.

Version control allows you to:

Revert files back to a previous state.
Revert the entire project back to a previous state.
Compare changes over time.
See who last modified something that might be causing a problem.
See who introduced an issue and when.
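A minimal sketch of those operations with Git, using a hypothetical throwaway repository (the file name and commit messages are made up for illustration):

```shell
set -e
repo=$(mktemp -d)               # throwaway repository for the demonstration
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name  Dev

echo "max_connections=10" > app.conf
git add app.conf
git commit -qm "initial config"

echo "max_connections=50" > app.conf
git commit -qam "raise connection limit"

git log --oneline app.conf      # compare changes over time, see who changed what
git checkout -q HEAD~1 -- app.conf   # revert the file to its previous state
cat app.conf                    # prints: max_connections=10
```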
45. Mention what are the key aspects or principle behind DevOps?
Answer: The key aspects or principle behind DevOps is:

  • Infrastructure as code
  • Continuous deployment
  • Automation
  • Monitoring
  • Security

46. What special training or education did it require for you to become a DevOps engineer?
Answer: DevOps is more of a mindset or philosophy rather than a skill-set.

The typical technical skills associated with DevOps Engineers today are Linux systems administration, scripting, and experience with one of the many continuous integration or configuration management tools like Jenkins and Chef. What it all boils down to is that whatever skill-sets you have, while important, are not as important as having the ability to learn new skills quickly to meet the needs. It’s all about pattern recognition and having the ability to merge your experiences with current requirements. Proficiency in Windows and Linux systems administration, script development, an understanding of structured programming and object-oriented design, and experience creating and consuming RESTful APIs would take one a long way.

47. What are the benefits of using version control?
Answer: I will suggest you include the following advantages of version control:

With the Version Control System (VCS), all the team members are allowed to work freely on any file at any time. VCS will later allow you to merge all the changes into a common version.

All the past versions and variants are neatly packed up inside the VCS. When you need it, you can request any version at any time and you’ll have a snapshot of the complete project right at hand.

Every time you save a new version of your project, your VCS requires you to provide a short description of what was changed.

Additionally, you can see what exactly was changed in the file’s content. This allows you to know who has made what change in the project.

A distributed VCS like Git allows all the team members to have a complete history of the project so if there is a breakdown in the central server you can use any of your teammate’s local Git repository.

I know basic branching operations like delete, merge, checking out a branch, etc.

48. What is DevOps engineer’s duty with regards to Agile development?
Answer: DevOps engineer works very closely with Agile development teams to ensure they have an environment necessary to support functions such as automated testing, Continuous Integration, and Continuous Delivery. DevOps engineer must be in constant contact with the developers and make all required parts of the environment work seamlessly.

49. What is the most important thing DevOps helps us achieve?
Answer: According to me, the most important thing that DevOps helps us achieve is to get the changes into production as quickly as possible while minimizing risks in software quality assurance and compliance. This is the primary objective of DevOps. Learn more in this DevOps tutorial blog.

However, you can add many other positive effects of DevOps. For example, clearer communication and better working relationships between teams i.e. both the Ops team and Dev team collaborate together to deliver good quality software which in turn leads to higher customer satisfaction.

50. What is the major difference between the Linux and Unix operating systems?
Answer:

Unix:

It belongs to the family of multitasking, multiuser operating systems.
It is mostly used on internet servers and workstations.
It was originally derived from AT&T Unix, developed starting in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others.
Most commercial UNIX variants are proprietary, whereas Linux is open source.
Linux:

  • Linux has probably been home to every programming language known to humankind.
  • It is widely used on personal computers as well as servers.
  • Linux is a Unix-like operating system built around the Linux kernel.

