AWS Interview Questions for DevOps

1. What is Amazon Web Services in DevOps? 
Answer:
AWS provides services that help you practice DevOps at your company and that are built first for use with AWS. These tools automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that is enabled by DevOps.

2. What is AWS? 
Answer:
AWS (Amazon Web Services) is a secure cloud services platform that provides compute power, database storage, content delivery, and other functionality to help businesses scale and grow.

3. Why is Continuous Testing important for DevOps? 
Answer:
You can answer this question by saying, “Continuous Testing allows any change made in the code to be tested immediately. This avoids the problems created by leaving ‘big-bang’ testing to the end of the cycle, such as release delays and quality issues. In this way, Continuous Testing facilitates more frequent, good-quality releases.”

4. Explain how you can get the color of a point on the current screen on the Ubuntu desktop.
Answer:
You can open the background image in The Gimp (image editor) and then use the dropper tool to select the color on a specific point. It gives you the RGB value of the color at that point.

5. What is Selenium IDE? 
Answer:
My suggestion is to start this answer by defining the Selenium IDE. It is an integrated development environment for Selenium scripts. It is implemented as a Firefox extension, and allows you to record, edit, and debug tests. Selenium IDE includes the entire Selenium Core, allowing you to easily and quickly record and play back tests in the actual environment that they will run in.

Now include some advantages in your answer. With autocomplete support and the ability to move commands around quickly, Selenium IDE is the ideal environment for creating Selenium tests no matter what style of tests you prefer.

6. Why does AWS DevOps matter?
Answer:
Software and the Internet have transformed the world and its industries, from shopping to entertainment to banking. Software no longer merely supports a business; rather it becomes an integral component of every part of a business.

Companies interact with their customers through software delivered as online services or applications and on all sorts of devices. They also use software to increase operational efficiencies by transforming every part of the value chain, such as logistics, communications, and operations.

In a similar way that physical goods companies transformed how they design, build, and deliver products using industrial automation throughout the 20th century, companies in today’s world must transform how they build and deliver software.

7. What is the role of a DevOps engineer? 
Answer:
There’s no formal career track for becoming a DevOps engineer. DevOps engineers are either developers who get interested in deployment and network operations, or sysadmins who have a passion for scripting and coding and move to the development side, where they can improve the planning of testing and deployment.

8. Why do we use AWS for DevOps? 
Answer:
There are many benefits of using AWS for DevOps:

Get Started Fast:  Each AWS service is ready to use if you have an AWS account. There is no setup required or software to install.

Fully Managed Services: These services can help you take advantage of AWS resources quicker. You can worry less about setting up, installing, and operating infrastructure on your own. This lets you focus on your core product.

Built for Scale: You can manage a single instance or scale to thousands using AWS services. These services help you make the most of flexible compute resources by simplifying provisioning, configuration, and scaling.

Programmable: You have the option to use each service via the AWS Command Line Interface or through APIs and SDKs. You can also model and provision AWS resources and your entire AWS infrastructure using declarative AWS CloudFormation templates (see the sketch at the end of this answer).

Automation: AWS helps you use automation so you can build faster and more efficiently. Using AWS services, you can automate manual tasks or processes such as deployments, development & test workflows, container management, and configuration management.

Secure: Use AWS Identity and Access Management (IAM) to set user permissions and policies. This gives you granular control over who can access your resources and how they access those resources.

Large Partner Ecosystem: AWS supports a large ecosystem of partners that integrate with and extend AWS services. Use your preferred third-party and open-source tools with AWS to build an end-to-end solution.

Pay-As-You-Go: With AWS, you purchase services as you need them and only for the period when you plan to use them. AWS pricing has no upfront fees, termination penalties, or long-term contracts.
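
As an illustration of the "Programmable" point above, here is a minimal, hedged sketch (assuming Python with the boto3 SDK installed and AWS credentials configured; the stack name and bucket resource are hypothetical) that provisions an S3 bucket through a declarative CloudFormation template:

    import json
    import boto3

    # Hypothetical example: declare an S3 bucket in a CloudFormation template
    # and provision it programmatically through the boto3 SDK.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ExampleBucket": {"Type": "AWS::S3::Bucket"}
        },
    }

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(
        StackName="example-devops-stack",
        TemplateBody=json.dumps(template),
    )

    # Block until the stack (and the bucket it declares) has been created.
    cloudformation.get_waiter("stack_create_complete").wait(
        StackName="example-devops-stack"
    )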

9. What happens when a build is run in CodeBuild in AWS DevOps?
Answer:
CodeBuild will create a temporary compute container of the class defined in the build project, load it with the specified runtime environment, download the source code, execute the commands configured in the project, upload the generated artifact to an S3 bucket, and then destroy the compute container. During the build, CodeBuild streams the build output to the service console and Amazon CloudWatch Logs.
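
A minimal, hedged boto3 sketch of kicking off and monitoring such a build (the project name is a hypothetical, pre-existing CodeBuild build project):

    import time
    import boto3

    codebuild = boto3.client("codebuild")

    # Start a build for a hypothetical, already-configured build project.
    build_id = codebuild.start_build(projectName="example-project")["build"]["id"]

    # Poll until the temporary build container has finished its work; detailed
    # output is streamed by CodeBuild to the console and CloudWatch Logs.
    while True:
        build = codebuild.batch_get_builds(ids=[build_id])["builds"][0]
        if build["buildStatus"] != "IN_PROGRESS":
            print("Build finished with status:", build["buildStatus"])
            break
        time.sleep(10)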

10. What is the relation between an instance and an AMI?
Answer:
An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications). From an AMI, you launch an instance, which is a copy of the AMI running as a virtual server in the cloud.

You can launch different types of instances from a single AMI. An instance type determines the hardware of the host computer used for your instance. Each instance type offers different compute and memory capabilities.
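
A hedged boto3 sketch of this relationship (the AMI ID is a hypothetical placeholder): launching two instances of different types from the same image.

    import boto3

    ec2 = boto3.client("ec2")

    # Each run_instances call creates a running copy of the same AMI;
    # the instance type chooses the compute/memory of the underlying host.
    for instance_type in ["t3.micro", "t3.large"]:
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
            InstanceType=instance_type,
            MinCount=1,
            MaxCount=1,
        )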

11. What happens when one of the resources in a stack cannot be created successfully in AWS OpsWorks? 
Answer:
When an event like this occurs, the “automatic rollback on error” feature is enabled, which causes all the AWS resources that were created successfully up to the point where the error occurred to be deleted. This is helpful since it does not leave behind any erroneous data; it ensures that stacks are either created fully or not created at all. It is useful in events where you may accidentally exceed your limit of Elastic IP addresses, or you may not have access to an EC2 AMI that you are trying to run, etc.

12. How is AWS OpsWorks different from AWS CloudFormation?
Answer:
OpsWorks and CloudFormation both support application modeling, deployment, configuration, management, and related activities. Both support a wide variety of architectural patterns, from simple web applications to highly complex applications. AWS OpsWorks and AWS CloudFormation differ in abstraction level and areas of focus.

AWS CloudFormation is a building-block service that enables the customer to manage almost any AWS resource via a JSON-based domain-specific language.

It provides foundational capabilities for the full breadth of AWS, without prescribing a particular model for development and operations. Customers define templates and use them to provision and manage AWS resources, operating systems, and application code. In contrast, AWS OpsWorks is a higher-level service that focuses on providing highly productive and reliable DevOps experiences for IT administrators and ops-minded developers.

To do this, AWS OpsWorks employs a configuration management model based on concepts such as stacks and layers and provides integrated experiences for key activities like deployment, monitoring, auto-scaling, and automation.

Compared to AWS CloudFormation, AWS OpsWorks supports a narrower range of application-oriented AWS resource types including Amazon EC2 instances, Amazon EBS volumes, Elastic IPs, and Amazon CloudWatch metrics.
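
To make the stacks-and-layers model concrete, here is a minimal, hedged boto3 sketch (the stack name, region, and IAM role/instance-profile ARNs are hypothetical placeholders):

    import boto3

    opsworks = boto3.client("opsworks")

    # Create an OpsWorks stack, the top-level container for layers and instances.
    stack_id = opsworks.create_stack(
        Name="example-stack",
        Region="us-east-1",
        ServiceRoleArn="arn:aws:iam::111122223333:role/aws-opsworks-service-role",
        DefaultInstanceProfileArn="arn:aws:iam::111122223333:instance-profile/aws-opsworks-ec2-role",
    )["StackId"]

    # Add a custom layer; layers group instances that share a common purpose.
    opsworks.create_layer(
        StackId=stack_id,
        Type="custom",
        Name="App Servers",
        Shortname="app",
    )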

13. Can I retrieve only a specific element of the data if I have nested JSON data in DynamoDB?
Answer:
Yes. When using the GetItem, BatchGetItem, Query, or Scan APIs, you can define a Projection Expression to determine which attributes should be retrieved from the table. Those attributes can include scalars, sets, or elements of a JSON document.
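
A minimal, hedged boto3 sketch (the table name, key, and attribute paths are hypothetical) that pulls back only one nested element of a JSON document:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Fetch just the nested city attribute and the first order's total,
    # rather than the whole item.
    response = dynamodb.get_item(
        TableName="example-table",
        Key={"pk": {"S": "user#123"}},
        ProjectionExpression="profile.address.city, orders[0].total",
    )
    print(response.get("Item"))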

14. You have multiple Memcached servers and one of them, which holds your data, fails. Will the client ever try to get key data from that failed server?
Answer:
The data on the failed server won’t be removed, but there is a provision for auto-failover, which you can configure for multiple nodes. Failover can be triggered by any kind of socket or Memcached server-level errors, not by normal client errors like adding an existing key, etc.

15. What is DevOps with cloud computing?
Answer:
 Inseparable development and operations practices are universally relevant. Cloud computing, Agile development, and DevOps are interlocking parts of a strategy for transforming IT into a business adaptability enabler. If the cloud is an instrument, then DevOps is the musician that plays it.

16. What is AWS CodeDeploy in AWS DevOps?
Answer:
 AWS CodeDeploy automates code deployments to any instance, including Amazon EC2 instances and on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
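
A minimal, hedged boto3 sketch of triggering such a deployment (the application, deployment group, and S3 revision location are hypothetical and assumed to exist already):

    import boto3

    codedeploy = boto3.client("codedeploy")

    # Deploy a revision stored in S3 to an existing deployment group.
    deployment_id = codedeploy.create_deployment(
        applicationName="example-app",
        deploymentGroupName="example-group",
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "example-artifacts",
                "key": "example-app.zip",
                "bundleType": "zip",
            },
        },
    )["deploymentId"]
    print("Started deployment:", deployment_id)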

17. What are VPC endpoints?
Answer:
A VPC endpoint enables you to create a private connection between your VPC and another AWS service without requiring access over the Internet, through a NAT device, a VPN connection, or AWS Direct Connect. Endpoints are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and AWS services without imposing availability risks or bandwidth constraints on your network traffic.

An endpoint enables instances in your VPC to use their private IP addresses to communicate with resources in other services. Your instances don’t require public IP addresses, and you don’t need an Internet gateway, a NAT device, or a virtual private gateway in your VPC.
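
A minimal, hedged boto3 sketch of creating a gateway endpoint for Amazon S3 (the VPC ID, route table ID, and region in the service name are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # Create a gateway VPC endpoint so instances can reach S3 privately,
    # without an Internet gateway, NAT device, or VPN connection.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )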

18. How is AWS CloudFormation different from AWS Elastic Beanstalk?
Answer:
These services are designed to complement each other. AWS Elastic Beanstalk provides an environment to easily deploy and run applications in the cloud. It is integrated with developer tools and provides a one-stop experience for you to manage the lifecycle of your applications. AWS CloudFormation is a convenient provisioning mechanism for a broad range of AWS resources. It supports the infrastructure needs of many different types of applications, such as existing enterprise applications, legacy applications, applications built using a variety of AWS resources, and container-based solutions (including those built using AWS Elastic Beanstalk).

19. What is AWS CodePipeline in AWS DevOps?
Answer:
AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. This enables you to rapidly and reliably deliver features and updates.
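
A minimal, hedged boto3 sketch (the pipeline name is hypothetical and assumed to exist) that triggers a release and inspects the state of each stage:

    import boto3

    codepipeline = boto3.client("codepipeline")

    # Manually trigger a run of an existing pipeline.
    codepipeline.start_pipeline_execution(name="example-pipeline")

    # Inspect the latest status of each stage (Source, Build, Deploy, ...).
    state = codepipeline.get_pipeline_state(name="example-pipeline")
    for stage in state["stageStates"]:
        print(stage["stageName"], stage.get("latestExecution", {}).get("status"))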

20. What is Amazon EC2 in AWS DevOps?
Answer:
 Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

21. What is an AMI?
Answer:
 AMI stands for Amazon Machine Image. It is effectively a snapshot of the root filesystem. AWS AMI provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.

An AMI includes the following:

  • A template for the root volume for the instance ( such as an operating system, an application server, and applications)
  • Launch permissions that control which AWS accounts can use the AMI to launch instances
  • A block device mapping that specifies the volumes to attach to the instance when it’s launched
Build a new AMI by first spinning up an instance from a trusted AMI, then adding packages and components as required. Be wary of putting sensitive data onto an AMI; for instance, your access credentials should be added to an instance after spin-up. With a database, mount an outside volume that holds your MySQL data after spin-up as well.
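
A hedged boto3 sketch of that last step, creating a new AMI from an already-customized instance (the instance ID and image name are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # Snapshot a customized instance into a new, reusable AMI.
    image_id = ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name="example-baked-ami",
        Description="Base image with packages pre-installed",
    )["ImageId"]
    print("New AMI:", image_id)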

22. Distinguish between scalability and flexibility.
Answer:
Scalability is the ability of a system to increase the workload it handles on its current hardware resources in order to cope with variability in demand. Flexibility is the ability of a system to increase the workload it handles on its current and additional hardware resources, which enables the business to meet demand without investing in infrastructure up front.

23. Is it possible to scale an Amazon instance vertically? How?
Answer:
Yes. This is an incredible characteristic of cloud virtualization and AWS. Spin up a new, larger instance than the one you are currently running. Pause that instance, then detach the root EBS volume from the new server and discard it. Next, stop your live instance and detach its root volume. Note down the unique device ID, attach that root volume to your new server, and start it again. This is the way to scale vertically in place.
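
A related, hedged boto3 sketch showing a different but common way to scale vertically in place (stop the instance, change its instance type, start it again); the instance ID and target type are hypothetical:

    import boto3

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

    # Stop the instance, switch it to a larger instance type, then start it.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "m5.xlarge"},
    )

    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])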

24. What is the difference between Assert and Verify commands in Selenium?
Answer:
I have mentioned differences between Assert and Verify commands below:

Assert command checks whether the given condition is true or false. Let’s say we assert whether the given element is present on the web page or not. If the condition is true, then the program control will execute the next test step. But, if the condition is false, the execution would stop and no further test would be executed.

Verify command also checks whether the given condition is true or false. Irrespective of the condition being true or false, the program execution doesn’t halt i.e. any failure during verification would not stop the execution and all the test steps would be executed.
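
A hedged Python sketch of the same idea with Selenium WebDriver (assuming the selenium package and a Firefox/geckodriver setup; the URL and checks are illustrative): the assert halts on failure, while the verify-style check records the failure and continues.

    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get("https://example.com")
    soft_failures = []

    # Assert-style check: a failure raises AssertionError and halts the test.
    assert "Example Domain" in driver.title

    # Verify-style check: a failure is only recorded, and execution continues.
    if "Some expected text" not in driver.page_source:
        soft_failures.append("expected text not found on page")

    driver.quit()
    print("Soft failures:", soft_failures)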

25. What is Facter in Puppet?
Answer:
You are expected to explain what exactly Facter does in Puppet, so you should start by explaining:

Facter is basically a library that discovers and reports per-agent facts to the Puppet Master, such as hardware details, network settings, OS type and version, IP addresses, MAC addresses, SSH keys, and more. These facts are then made available in the Puppet Master’s manifests as variables.

26. What are Manifests?
Answer:
Manifests, in Puppet, are the files in which the client configuration is specified.

27. Why does Puppet have its own language? Why not use XML or YAML as the configuration format? Why not use Ruby as the input language?
Answer:
The language used for manifests is ultimately Puppet’s human interface, and XML and YAML, being data formats developed around the processing capabilities of computers, are horrible human interfaces. While some people are comfortable reading and writing them, there’s a reason why we use web browsers instead of just reading the HTML directly. Also, using XML or YAML would limit any assurance that the interface was declarative: one process might treat an XML configuration differently from another.

28. My servers are all unique; can Puppet still help?
Answer:
All servers are at least somewhat unique, but very few are entirely unique; hostnames and IP addresses (for example) will always differ, but nearly every server runs a relatively standard operating system. Servers are also often very similar to other servers within a single organization: all Solaris servers might have similar security settings, or all web servers might have roughly equivalent configurations, even if they’re very different from servers in other organizations. Finally, servers are often needlessly unique, in that they have been built and managed manually with no attempt at retaining appropriate consistency.

Puppet can help both on the side of consistency and uniqueness. Puppet can be used to express the consistency that should exist, even if that consistency spans arbitrary sets of servers based on any data like operating system, data center, or physical location. Puppet can also be used to handle uniqueness, either by allowing the special provision of what makes a given host unique or through specifying exceptions to otherwise standard classes.

29. How can you debug a past build failure in AWS CodeBuild?
Answer:
You can debug a build by inspecting the detailed logs generated during the build run.

30. What is Continuous Delivery in AWS DevOps?
Answer:
 Continuous delivery is a software development practice where code changes are automatically built, tested, and prepared for a release to production.

It expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has passed through a standardized test process.

31. Does Puppet run on Windows?
Answer:
Yes. As of Puppet 2.7.6 basic types and providers do run on Windows, and the test suite is being run on Windows to ensure future compatibility. More information can be found on the Puppet on Windows page, and bug reports and patches are welcome.

32. What characters are permitted in a class name? In a module name? In other identifiers?
Answer:

  • Class names can contain lowercase letters, numbers, and underscores, and should begin with a lowercase letter. “::” can be used as a namespace separator.
  • The same rules should be used when naming defined resource types, modules, and parameters, although modules and parameters cannot use the namespace separator.
  • Variable names can include alphanumeric characters and underscore, and are case-sensitive. 

33. How do I manage passwords on Red Hat Enterprise Linux, CentOS, and Fedora Core?
Answer:
As described in the Type reference, you need the Shadow Password Library, which is provided by the ruby-shadow package. The ruby-shadow library is available natively for fc6 (and higher) and should build on the corresponding RHEL and CentOS variants.

34. What type of organizations can use Puppet?
Answer:
There is no strict rule about the type of organizations that can benefit from Puppet. But an organization with only a few servers is less likely to benefit from Puppet. An organization with a huge number of servers can benefit from Puppet as this eliminates the need to manually manage the servers.

35. Can Puppet run on servers that are unique?
Answer:
Puppet can run on servers that are unique, although servers are rarely entirely unique: within an organization there are many similarities, such as the operating system they are running, and so on.

36. What are the characters permitted in a class and module name?
Answer:
The characters permitted in a class or module name are lowercase letters, numbers, and underscores, and the name should begin with a lowercase letter; “::” can be used as a namespace separator. Variable names can include alphanumeric characters and underscores, and they are case-sensitive.

37. How does merging work?
Answer:
Every node always gets a node object (which may be empty or may contain classes, parameters, and an environment) from the configured node_terminus. (This setting takes effect where the catalog is compiled: on the Puppet master server when using an agent/master arrangement, and on the node itself when using puppet apply. The default node terminus is plain, which returns an empty node object; the exec terminus calls an external node classifier (ENC) script to determine what should go in the node object.) Every node may also get a node definition from the site manifest (usually called site.pp).

When compiling a node’s catalog, Puppet will include all of the following:

  • Any classes specified in the node object received from the node terminus
  • Any classes or resources which are in the site manifest but outside any node definitions
  • Any classes or resources in the most specific node definition in site.pp that matches the current node (if site.pp contains any node definitions)

Note 1: If site.pp contains at least one node definition, it must have a node definition that matches the current node; compilation will fail if a match can’t be found.

Note 2: If the node name resembles a dot-separated fully qualified domain name, Puppet will make multiple attempts to match a node definition, removing the right-most part of the name each time. Thus, Puppet would first try agent1.example.com, then agent1.example, then agent1. This behavior isn’t mimicked when calling an ENC, which is invoked only once with the agent’s full node name.

Note 3: If no matching node definition can be found with the node’s name, Puppet will try one last time with a node name of default; most users include a node default {} statement in their site.pp file. This behavior isn’t mimicked when calling an ENC.

38. Explain what you mean by ordering and relationships.
Answer:
By default, Puppet applies resources in the order they’re declared in their manifest. However, if a group of resources must always be managed in a specific order, you should explicitly declare such relationships with relationship metaparameters, chaining arrows, and the require function. Puppet uses four metaparameters to establish relationships, and you can set each of them as an attribute in any resource. The value of any relationship metaparameter should be a resource reference (or array of references) pointing to one or more target resources.

  • before – Applies a resource before the target resource.
  • require – Applies a resource after the target resource.
  • notify – Applies a resource before the target resource. The target resource refreshes if the notifying resource changes.
  • subscribe – Applies a resource after the target resource. The subscribing resource refreshes if the target resource changes.

If two resources need to happen in order, you can either put a before attribute in the prior one or a require attribute in the subsequent one; either approach creates the same relationship. The same is true of notify and subscribe.

39. What is meant by saying Nagios is object-oriented?
Answer:
The answer to this question is pretty direct. I will answer this by saying, “One of the features of Nagios is its object configuration format, in that you can create object definitions that inherit properties from other object definitions, hence the name. This simplifies and clarifies relationships between various components.”

40. What is a Docker image?
Answer:
I suggest that you go with the below-mentioned flow: a Docker image is the source of a Docker container. In other words, Docker images are used to create containers. Images are created with the build command, and they’ll produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.

41. What is a Dockerfile used for?
Answer:
This answer, in my view, should begin by explaining the use of a Dockerfile: Docker can build images automatically by reading the instructions from a Dockerfile.

Now I suggest you give a small definition of a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.
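
A hedged sketch using the Docker SDK for Python (assuming the docker package is installed and a Docker daemon is running; the Dockerfile contents and tag are illustrative):

    import io
    import docker

    # A tiny, illustrative Dockerfile supplied in-memory.
    dockerfile = io.BytesIO(
        b"FROM python:3.12-slim\n"
        b"CMD [\"python\", \"-c\", \"print('hello from the image')\"]\n"
    )

    client = docker.from_env()

    # docker build: read the Dockerfile instructions and produce an image.
    image, _build_logs = client.images.build(fileobj=dockerfile, tag="example-app:latest")

    # docker run: start a container from the freshly built image.
    print(client.containers.run("example-app:latest", remove=True).decode())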

42. Give me an example of how you would handle projects.
Answer:
As a professional with managerial responsibilities, I would demonstrate a clear understanding of DevOps project management tactics and also work with teams to set objectives, streamline workflow, maintain scope, research and introduce new tools or frameworks, translate requirements into the workflow and follow up. I would resort to CI, release management and other tools to keep interdisciplinary projects on track.

43. What are the key aspects or principles behind DevOps?
Answer:
The key aspects or principles behind DevOps are:

  • Infrastructure as code
  • Continuous deployment
  • Automation
  • Monitoring
  • Security

44. What are the benefits of using version control?
Answer:
I will suggest you include the following advantages of version control:

With the Version Control System (VCS), all the team members are allowed to work freely on any file at any time. VCS will later allow you to merge all the changes into a common version.

All the past versions and variants are neatly packed up inside the VCS. When you need it, you can request any version at any time and you’ll have a snapshot of the complete project right at hand.

Every time you save a new version of your project, your VCS requires you to provide a short description of what was changed.
