1. What is the process of Data Analysis?
Answer: Data analysis is the process of collecting, cleansing, transforming, and modeling data to gather insights and generate reports that support business decisions. The main steps involved in the process are:
Collect Data: The data is collected from various sources and stored so that it can be cleaned and prepared. In this step, missing values and outliers are handled.
Analyze Data: Once the data is ready, the next step is to analyze it. A model is run repeatedly for improvements. The model is then validated to check whether it meets the business requirements.
Create Reports: Finally, the model is implemented, and the reports thus generated are passed on to the stakeholders.
2. How do we conduct data analysis?
Answer: Data analysis deals with collecting, inspecting, cleaning, transforming, and modeling data to glean valuable insights and support better decision-making in an organization, with the ultimate aim of increasing profit. The steps involved in the data analysis process can be sequenced in the following manner:
Data Exploration: After identifying the business problem, a data analyst goes through the data provided by the client to analyze the cause of the problem.
Data Preparation: This is the most crucial step of the process, wherein data anomalies (like missing values or outliers) are detected and handled appropriately.
Data Modeling: This step begins once the data has been prepared. Modeling is an iterative process wherein a model is repeatedly run for improvements. This ensures that the best possible result is found for the given problem.
Validation: In this step, the model developed by the data analyst is validated against the data provided by the client. The aim is to find out whether the developed model will meet the business requirements.
Implementation of the Model and Tracking: This is the final step, where the model is implemented in production and tested for accuracy and efficiency.
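The collect-prepare-analyze-report flow above can be sketched in miniature with plain Python; the record fields, the outlier threshold, and the "model" (a simple average) are all hypothetical, chosen only to make the steps concrete:

```python
# Minimal sketch of the collect -> prepare -> analyze -> report steps.
records = [
    {"customer": "a", "spend": 120.0},
    {"customer": "b", "spend": None},     # missing value
    {"customer": "c", "spend": 130.0},
    {"customer": "d", "spend": 9_999.0},  # obvious outlier
    {"customer": "e", "spend": 110.0},
]

# Prepare: drop missing values and outliers above a simple threshold.
prepared = [r for r in records if r["spend"] is not None and r["spend"] < 1_000]

# Analyze: a trivial "model" -- the average spend over valid records.
avg_spend = sum(r["spend"] for r in prepared) / len(prepared)

# Report: pass the result on to stakeholders.
print(f"Average spend across {len(prepared)} valid records: {avg_spend:.2f}")
```

A real pipeline would use a library such as pandas, but the shape of the process is the same.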
3. What are your technical competencies?
Answer: Before the interview, do your homework on the analytics environment that the interviewing company uses. During the IT interview, you will be asked to review your technical competencies and skillsets. How well the company feels your technical skills fit with the data analytics approaches and tools they use in their environment can have a make-or-break effect on whether you get the job.
4. Mention what are the responsibilities of a data analyst?
Responsibilities of a data analyst include:
- Providing support for all data analysis and coordinating with customers and staff
- Resolving business-related issues for clients and performing data audits
- Analyzing results, interpreting data using statistical techniques, and providing ongoing reports
- Prioritizing business needs and working closely with management on information needs
- Identifying new processes or areas for improvement
- Analyzing, identifying, and interpreting trends or patterns in complex data sets
- Acquiring data from primary or secondary data sources and maintaining databases/data systems
- Filtering and “cleaning” data, and reviewing computer reports
- Determining performance indicators to locate and correct code problems
- Securing the database by developing an access system that defines each user's level of access
5. List some of the best tools that can be useful for data analysis.
- Google Search Operators
- Wolfram Alpha
- Google Fusion Tables
6. How does data analysis differ from data mining?
Answer: As a professional data analyst, you should be able to easily identify what sets data mining apart from data analysis. Use a few key examples in your answer: for instance, you can explain that data analysts must create their own equations based on a hypothesis, but when it comes to data mining, algorithms automatically develop these equations. You may also want to mention that the data analysis process begins with a hypothesis, but data mining does not.
7. Why is data mining a useful technique in big data analysis?
Answer: Big data platforms such as Hadoop use a clustered architecture, where a large set of data must be analyzed to identify unique patterns. These patterns help pinpoint the problem areas of the business and point toward solutions. Data mining is a useful process for doing this job, which is why it is widely used in big data analysis.
8. What are the missing patterns which are generally observed in data analysis?
Answer: The common missing-data patterns observed during data analysis are:
- Missing completely at random
- Missing at random
- Missing that depends on the missing value itself
- Missing that depends on an unobserved input variable
9. What is a great day at the office?
Answer: You want to show an interest in the candidate as an individual and let him or her express the ideal working day in terms of tasks, environment, and collaboration with colleagues. The answer to this question can provide you with key information about how he or she operates and how the candidate will interact with your team. Above all else, the answer should be honest and well-considered.
Describe a situation where you had to use your personal judgment and professional knowledge as a data analyst to resolve an issue.
Asking a candidate to describe a situation, specifically requesting details of both personal and professional inputs, will help you to understand how he or she applies character strengths and technical expertise to find a solution.
10. With what data analysis software are you experienced?
Answer: Not only does this question act as a simple tick-box exercise, it also shows how advanced candidates are in their role and how familiar they are with emerging technologies. All data analysts should be familiar with standard tools such as Microsoft Excel, SQL, and Python, as well as any particular software prescribed as a mandatory requirement for the role.
11. What’s the most interesting thing you’ve discovered from data?
Answer: The answer to this question shows whether or not the candidate is emotionally connected to the role and therefore passionate about it, or whether they are purely there to process the data and leave. Storytelling is a quality that is often overlooked in a good data analyst, and showing enthusiasm often means that the candidate is able to effectively communicate data stories to colleagues. The ideal response to this question is less about the answer itself and more about body language. Look for a smiling face, hand gestures, and enthusiastic body language; they will tell you if the candidate is passionate about the work.
12. How will you define the data analysis process?
Answer: The data analysis process majorly involves gathering data, cleaning it, analyzing it, and transforming it into a valuable model for better decision-making within an organization. The major steps of the data analysis process are data exploration, data preparation, data modeling, data validation, and implementation and tracking.
13. What are the major differences between data profiling and data mining?
Answer: Data profiling is the process of analyzing data for consistency, logic, and uniqueness. The process cannot validate inaccurate data values, but it can check data values for business anomalies. The main objective of this process is to check whether the existing data can be used for various other purposes. Data mining, on the other hand, is used to find relationships between data values that were not discovered earlier. It is based on bulk analysis over attributes or data values.
14. What do you mean by the KPI, design of experiments, and 80/20 rule?
Answer: KPI stands for key performance indicator, a metric that may consist of any combination of spreadsheets, charts, reports, or business processes. Design of experiments is the initial process used to split data, sample data, or set data up for statistical analysis. The 80/20 rule states that roughly 80 percent of outcomes come from 20 percent of causes; for example, 80 percent of total income may come from 20 percent of customers.
15. Could you please provide a detailed explanation of what is required for one to become a data analyst?
Answer:
- They should have technical knowledge concerning database design, data models, segmentation, and mining approaches.
- The analyst should have in-depth knowledge of statistical packages that are integral to the analysis of big datasets, such as SPSS, Excel, and SAS.
- They should also possess strong skills in the collection, organization, analysis, and dissemination of big data.
16. Explain what is logistic regression?
Answer: Logistic regression is one of the regression models used for data analysis purposes. It is a statistical method in which one or more independent variables are used to predict a binary outcome.
17. Explain what is Map Reduce?
Answer: MapReduce is a programming model for processing and analyzing large data sets in parallel. Using this model, a large data set is split into smaller chunks that are processed in parallel, and the partial results are then combined to yield the final outcome.
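The map and reduce phases above can be illustrated with a toy word count in plain Python. This is a single-process sketch of the idea, not a distributed implementation, and the documents are made up:

```python
# A toy MapReduce word count: map emits (word, 1) pairs, the shuffle
# groups pairs by key, and reduce sums the counts per word.
from collections import defaultdict
from itertools import chain

documents = ["big data big insight", "data analysis"]

# Map phase: each document independently emits (word, 1) pairs.
mapped = chain.from_iterable(
    ((word, 1) for word in doc.split()) for doc in documents
)

# Shuffle phase: group the pairs by key (the word).
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: sum the counts for each word.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)  # {'big': 2, 'data': 2, 'insight': 1, 'analysis': 1}
```

In a real MapReduce system (e.g. Hadoop) the map and reduce calls run on different machines over chunks of the data.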
18. What do you understand by the term Normal Distribution?
Answer: This is one of the most important and widely used distributions in statistics. Commonly known as the bell curve or Gaussian curve, the normal distribution is symmetric about its mean and is fully characterized by its mean and standard deviation.
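The familiar 68-95 rule of the bell curve can be checked directly with the standard library's `statistics.NormalDist`:

```python
# Probability mass within one and two standard deviations of the mean
# of a standard normal distribution.
from statistics import NormalDist

bell = NormalDist(mu=0, sigma=1)

within_one_sigma = bell.cdf(1) - bell.cdf(-1)
within_two_sigma = bell.cdf(2) - bell.cdf(-2)
print(round(within_one_sigma, 4))  # 0.6827
print(round(within_two_sigma, 4))  # 0.9545
```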
19. What is the difference between Data Mining and Data Profiling?
Answer: Data Profiling also referred to as Data Archeology is the process of assessing the data values in a given dataset for uniqueness, consistency, and logic. Data profiling cannot identify any incorrect or inaccurate data but can detect only business rules violations or anomalies. The main purpose of data profiling is to find out if the existing data can be used for various other purposes.
Data Mining refers to the analysis of datasets to find relationships that have not been discovered earlier. It focuses on sequential discoveries, identifying dependencies, bulk analysis, finding various types of attributes, etc.
20. What is A/B Testing?
Answer: A/B testing is the statistical hypothesis testing for a randomized experiment with two variables A and B. Also known as the split testing, it is an analytical method that estimates population parameters based on sample statistics. This test compares two web pages by showing two variants A and B, to a similar number of visitors, and the variant which gives better conversion rate wins.
The goal of A/B testing is to identify whether a change to the web page improves a chosen metric. For example, if you have spent an ample amount of money on a banner ad, A/B testing can help you measure its return on investment, such as the click-through rate on the ad.
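A minimal sketch of the statistics behind an A/B test follows, a two-proportion z-test implemented with only the standard library; the visit and conversion counts are invented for illustration:

```python
# Two-proportion z-test comparing conversion rates of variants A and B.
from math import sqrt
from statistics import NormalDist

conv_a, n_a = 200, 4000   # variant A: 5.0% conversion
conv_b, n_b = 260, 4000   # variant B: 6.5% conversion

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
# Two-sided p-value under the null hypothesis of equal conversion rates.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.3f}, p = {p_value:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference between the variants is unlikely to be due to chance.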
21. What is the difference between univariate, bivariate and multivariate analysis?
Answer: The differences between univariate, bivariate and multivariate analysis are as follows:
Univariate: A descriptive statistical technique that deals with a single variable at a given instance of time.
Bivariate: This analysis is used to study the relationship between two variables at a time.
Multivariate: The study of more than two variables is multivariate analysis. This analysis is used to understand the effect of the variables on the responses.
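As a small bivariate example, the Pearson correlation between two variables can be computed by hand with the standard library (the data below is illustrative and perfectly linear, so r comes out as 1.0):

```python
# Pearson correlation coefficient between two variables.
from math import sqrt

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]  # exactly ys = 2 * xs

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
r = cov / sqrt(sum((x - mean_x) ** 2 for x in xs) *
               sum((y - mean_y) ** 2 for y in ys))
print(r)  # 1.0 -- the two variables move together exactly
```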
22. What is interleaving in SAS?
Answer: Interleaving in SAS means combining individual sorted SAS data sets into one sorted data set. You can interleave data sets using a SET statement along with a BY statement.
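For readers more familiar with Python, `heapq.merge` performs an analogous interleave of already-sorted data sets; the quarterly sales records below are hypothetical:

```python
# Interleaving two data sets that are each already sorted by the
# "BY variable" (here the id in each tuple), analogous to SAS's
# SET statement with a BY statement.
from heapq import merge

sales_q1 = [(1, "jan"), (3, "feb"), (5, "mar")]
sales_q2 = [(2, "apr"), (4, "may")]

interleaved = list(merge(sales_q1, sales_q2))  # one data set, sorted by id
print(interleaved)
```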
23. What do you mean by DBMS? What are its different types?
Answer: A Database Management System (DBMS) is a software application that interacts with the user, applications and the database itself to capture and analyze data. The data stored in the database can be modified, retrieved and deleted, and can be of any type like strings, numbers, images, etc.
There are mainly 4 types of DBMS, which are Hierarchical, Relational, Network, and Object-Oriented DBMS.
Hierarchical DBMS: As the name suggests, this type of DBMS has a style of predecessor-successor type of relationship. So, it has a structure similar to that of a tree, wherein the nodes represent records and the branches of the tree represent fields.
Relational DBMS (RDBMS): This type of DBMS, uses a structure that allows the users to identify and access data in relation to another piece of data in the database.
Network DBMS: This type of DBMS supports many to many relations wherein multiple member records can be linked.
Object-oriented DBMS: This type of DBMS uses small individual software called objects. Each object contains a piece of data and the instructions for the actions to be done with the data.
24. How often should you retrain a data model?
Answer: Often, when we do things on autopilot, we forget to pay attention to what we’re doing and why we’re doing it. This is another relatively basic yet important question to reveal whether the candidate has forgotten the basics of data analysis. A good data analyst will note how changing business dynamics will affect the efficiency of a predictive model and, therefore, state that the answer to this question is dependent on many variables. However, a strong candidate will go on to provide example scenarios and answers.
25. What are some of the main tools used in Big Data?
Answer: Commonly cited Big Data tools include Hadoop, Hive, Pig, Spark, Flume, and Mahout.
26. Discuss the meaning of Correlogram Analysis?
Answer: This refers to spatial analysis. A correlogram consists of a series of autocorrelation coefficients, which are estimated and computed for different spatial relationships. It can be used for data where the observations are expressed in terms of the distance between points rather than the values at individual points.
27. Determine the most common statistical approaches for data analysis?
- Simplex algorithm
- Bayesian approach
- Markov chains
- Mathematical optimization
- Cluster and spatial processes
- Rank statistics
28. What is the KNN imputation method?
Answer: This method imputes missing attribute values using the values of the attribute that is most similar to the attribute whose values are missing. The similarity of two attributes is determined using distance functions.
Can you tell what a waterfall chart is and when we use it?
Answer: A waterfall chart shows both the positive and negative values that lead to the final result value. For example, if you are analyzing a company's net income, the chart can include all the cost values. With such a chart, you can visually see how the final net income is obtained from revenue once all the costs are deducted.
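The running totals that a waterfall chart plots can be computed with `itertools.accumulate`; the revenue and cost figures below are made up for illustration:

```python
# Computing the running totals behind a waterfall chart.
from itertools import accumulate

steps = [
    ("Revenue",        1000),
    ("Cost of goods",  -400),
    ("Operating cost", -250),
    ("Taxes",           -70),
]

labels = [name for name, _ in steps]
running = list(accumulate(value for _, value in steps))

for label, total in zip(labels, running):
    print(f"{label:>14}: {total}")
# The last running total is the net income the chart walks down to.
print("Net income:", running[-1])  # 280
```

A plotting library would then draw one floating bar per step, from the previous running total to the next.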
29. What is the difference between factor analysis and principal component analysis?
Answer: The aim of principal component analysis is to explain the total variance in the observed variables, while the aim of factor analysis is to explain the covariances, i.e. the shared variance, among the variables.
30. Can you enumerate the various differences between Supervised and Unsupervised Learning?
Answer: Supervised learning is a type of machine learning where a function is inferred from labeled training data. The training data contains a set of training examples.
Unsupervised learning, on the other hand, is a type of machine learning where inferences are drawn from datasets containing input data without labeled responses. Following are the various other differences between the two types of machine learning:
Algorithms Used – Supervised learning makes use of Decision Trees, K-nearest Neighbor algorithm, Neural Networks, Regression, and Support Vector Machines. Unsupervised learning uses Anomaly Detection, Clustering, Latent Variable Models, and Neural Networks.
Enables: Supervised learning enables classification and regression, whereas unsupervised learning enables clustering, dimension reduction, and density estimation
Use: While supervised learning is used for prediction, unsupervised learning finds use in analysis
31. Explain what you do with suspicious or missing data?
Answer: When data is suspicious or missing, then:
- Prepare a validation report that provides information on the suspected data.
- Have experienced personnel examine it so that its acceptability can be determined.
- Update invalid data with a validation code.
- Use the best analysis strategy to work on the missing data, such as simple imputation, the deletion method, or casewise imputation.
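Two of the strategies above, listwise deletion and simple mean imputation, can be sketched in plain Python; the column of values is invented:

```python
# Handling missing values: deletion vs. simple mean imputation.
values = [4.0, None, 6.0, None, 5.0]

# Deletion method: drop the missing entries entirely.
deleted = [v for v in values if v is not None]

# Simple imputation: replace missing entries with the observed mean.
mean = sum(deleted) / len(deleted)
imputed = [v if v is not None else mean for v in values]

print(deleted)  # [4.0, 6.0, 5.0]
print(imputed)  # [4.0, 5.0, 6.0, 5.0, 5.0]
```

Deletion shrinks the sample, while mean imputation keeps the sample size but understates the variance; the right choice depends on the missing-data pattern.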
32. What are the challenges that you face as a data analyst?
Answer: There are various ways to answer this question: the data may be very badly formatted, there may not be enough data to work with, clients may provide data they have supposedly cleaned but have actually made worse, the data may not be kept up to date, or there may be factual or data-entry errors.
33. What is the difference between standardized and unstandardized coefficients?
Answer: Standardized coefficients are interpreted in terms of standard deviations (both variables are standardized before fitting), while unstandardized coefficients are measured in the variables' actual units.
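The distinction can be demonstrated on a toy regression: z-scoring both variables before computing the slope yields the standardized coefficient, which in simple regression equals the Pearson correlation r, while the unstandardized slope keeps the original units (the data below is illustrative):

```python
# Unstandardized vs. standardized slope in simple linear regression.
from statistics import mean, pstdev

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 3.0, 5.0, 6.0]

def slope(x, y):
    """Least-squares slope of y on x."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

def zscore(v):
    """Standardize a variable to mean 0, standard deviation 1."""
    m, s = mean(v), pstdev(v)
    return [(a - m) / s for a in v]

unstd = slope(xs, ys)                 # in the original units of x and y
std = slope(zscore(xs), zscore(ys))   # unitless, comparable across predictors
print(round(unstd, 3), round(std, 3))
```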
34. What do you understand by the Selection Bias? What are its various types?
Answer: Selection bias is typically associated with research that doesn’t have a random selection of participants. It is a type of error that occurs when a researcher decides who is going to be studied. On some occasions, selection bias is also referred to as the selection effect.
In other words, selection bias is a distortion of statistical analysis that results from the sample collecting method. When selection bias is not taken into account, some conclusions made by a research study might not be accurate. Following are the various types of selection bias:
Sampling Bias: A systematic error resulting from a non-random sample of a populace, causing certain members of the population to be less likely to be included than others, which results in a biased sample.
Time Interval: A trial might be ended at an extreme value, usually due to ethical reasons, but the extreme value is most likely to be reached by the variable with the most variance, even though all variables have a similar mean.
Data: Results when specific data subsets are selected for supporting a conclusion or rejection of bad data arbitrarily.
Attrition: Caused due to attrition, i.e. loss of participants, discounting trial subjects or tests that didn’t run to completion.
35. What do you understand by linear regression and logistic regression?
Answer: Linear regression is a form of statistical technique in which the score of some variable Y is predicted on the basis of the score of a second variable X, referred to as the predictor variable.
The Y variable is known as the criterion variable.
Also known as the logit model, logistic regression is a statistical technique for predicting the binary outcome from a linear combination of predictor variables.
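A simple linear regression can be fit in closed form with only the standard library; the points below are invented and lie exactly on y = 2x + 1, so the recovered coefficients are exact:

```python
# Ordinary least squares for simple linear regression:
# slope = covariance(x, y) / variance(x), intercept from the means.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # exactly y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(f"y = {slope:.1f}x + {intercept:.1f}")  # y = 2.0x + 1.0
```

Logistic regression has no closed-form solution; in practice it is fit iteratively, e.g. with scikit-learn's `LogisticRegression`.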
36. What do you understand from the term data cleaning?
Answer: Data cleaning refers to the task of inspecting and cleaning data. From the given data, it is important to sort out the information that is valuable for analysis. Also, one needs to eliminate information that is incorrect, unnecessary, or repetitive. However, the entire database should remain retrievable: data cleaning does not mean deleting information completely from the database.
37. What do you understand by Deep Learning?
Answer: Deep Learning is a paradigm of machine learning that displays a great degree of analogy with the functioning of the human brain. It is based on deep neural networks, such as convolutional neural networks (CNNs).
Deep learning has a wide array of uses, ranging from social network filtering to medical image analysis and speech recognition. Although Deep Learning has been present for a long time, it’s only recently that it has gained worldwide acclaim. This is mainly due to:
An increase in the amount of data generation via various sources
The growth in hardware resources required for running Deep Learning models
Caffe, Chainer, Keras, Microsoft Cognitive Toolkit, Pytorch, and TensorFlow are some of the most popular Deep Learning frameworks as of today.
38. What are outlier values and how do you treat them?
Answer: Outlier values, or simply outliers, are data points in statistics that don’t belong to a certain population. An outlier value is an abnormal observation that is very much different from other values belonging to the set.
Identification of outlier values can be done using univariate or other graphical analysis methods. A few outlier values can be assessed individually, but assessing a large set of outliers requires substituting them with either the 99th or the 1st percentile values.
There are two popular ways of treating outlier values:
- Changing the value so that it can be brought within a range
- Simply removing the value
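The interquartile-range rule is one common way to identify which values to clip or remove; a small sketch with invented data:

```python
# Outlier treatment via the interquartile range: values outside
# [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are either removed or clipped.
from statistics import quantiles

data = [10, 12, 11, 13, 12, 11, 95]  # 95 is an obvious outlier

q1, _, q3 = quantiles(data, n=4)
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

removed = [x for x in data if low <= x <= high]    # option 2: drop outliers
clipped = [min(max(x, low), high) for x in data]   # option 1: bring into range
print(removed)
print(clipped)
```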
39. What is data screening in the data validation process?
Answer: Data screening is a process in which the entire set of data is processed using various algorithms to flag any questionable values. These values are then set aside and examined thoroughly.
40. What is the name of the framework developed by Apache for processing large data sets for an application in a distributed computing environment?
Answer: Hadoop, with its MapReduce programming model, is the framework developed by Apache in which large data sets for an application are processed in a distributed computing environment.
41. What are the different steps available in an analytical project, list them out?
The various steps involved in analytics project are:
1. Definition of the problem
2. Exploring the data
3. Preparing the data
4. Data modeling
5. Validation of the data
6. Tracking and implementation
42. Differentiate between data mining and data profiling.
Answer: Data mining is the process of sorting through massive volumes of data, with the aim of identifying patterns and establishing relationships to perform data analysis and subsequently, problem-solving. Data mining tools facilitate predicting future trends by business organizations.
Data profiling can be defined as a data examining process focused on achieving various purposes like determining the accuracy and completeness of data. This process acutely examines a database, or other such data sources, in order to expose the erroneous areas in data organization; data profiling technique considerably improves the data quality.
43. What are the primary responsibilities of a data analyst?
Answer: Though the duties of a data analyst are wide and varied in scope, his primary responsibilities include documenting the types and structure of the business data (logical modeling); analyzing and mining the business data with the aim of identifying the patterns and correlations therein; mapping and tracing data from one system to another with the purpose of solving a given business or system problem; designing and creating data reports and reporting tools to facilitate effective decision making in the organization; and, performing a rigorous statistical analysis of the organizational data.
44. Explain what is data mining?
Answer: Data mining is a process of analyzing large data sets to identify unique patterns, often through techniques such as cluster analysis. It helps the user understand the data and establish relationships that can be used to solve business problems.
Data mining is also used to predict future trends within organizations
45. What are the applications that are based on the clustering algorithm?
The applications that are based on the clustering algorithm include:
1. Mathematical analysis
2. Statistical analysis
46. How will you create a classification to identify key customer trends in unstructured data?
Answer: A model does not hold any value if it cannot produce actionable results, and an experienced data analyst will have a varying strategy based on the type of data being analyzed. For example, if a customer complaint was retweeted, should that data be included or not? Also, any sensitive customer data needs to be protected, so it is advisable to consult with the stakeholders to ensure that you are following all the compliance regulations of the organization and disclosure laws, if any.
You can answer this question by stating that you would first consult with the stakeholder of the business to understand the objective of classifying this data. Then, you would use an iterative process by pulling new data samples and modifying the model accordingly and evaluating it for accuracy. You can mention that you would follow a basic process of mapping the data, creating an algorithm, mining the data, visualizing it and so on. However, you would accomplish this in multiple segments by considering the feedback from stakeholders to ensure that you develop an enriching model that can produce actionable results.
47. What is data cleansing? Mention a few best practices that you need to follow while doing data cleansing?
Answer: From a given dataset, it is extremely important to sort the information required for data analysis. Data cleaning is a crucial step wherein data is inspected to find any anomalies, remove the repetitive and incorrect information, etc. Data cleansing does not involve removing any existing information from the database, it just enhances the data quality so that it can be used for analysis.
Some of the best practices for data cleansing include:
- Developing a data quality plan to identify where the most data quality errors occur, so that you can assess the root cause and plan accordingly.
- Following a standard method of validating the necessary information before it is entered into the database.
- Identifying any duplicate data and validating the accuracy of the data, as this will save a lot of time during analysis.
- Tracking all the cleaning operations performed on the data, so that you can repeat or remove any operations as necessary.
48. What are some issues that data analysts typically come across?
Answer: All jobs have their challenges, and your interviewer not only wants to test your knowledge on these common issues but also know that you can easily find the right solutions when available. In your answer, you can address some common issues, such as having a data file that’s poorly formatted or having incomplete data.
49. How do you define the primary responsibilities of a data analyst?
Answer: A data analyst is responsible for:
- Analyzing all data-related information
- Taking active participation in data audits
- Suggesting and forecasting based on statistical analysis of data
- Helping to improve business processes and process optimization
- Generating business reports using the raw data
- Sourcing data from different data sources and harvesting it into the database
- Coordinating with clients and stakeholders
- Identifying new areas of improvement
50. How can you perform the data validation process successfully?
Answer: Data validation can be defined in two steps: data screening and data verification. In the first step, data screening, algorithms are used to screen the data for any inaccurate values, which then need to be checked and validated again. In the second step, data verification, values are corrected on a case-by-case basis and invalid values are rejected.
51. When you are given a new data analytics project then how should you start? Explain based on your previous experiences?
Answer: The purpose of this question is to understand how you actually approach your work. Make sure that the process you follow is well organized and designed to help achieve the business goals. Naturally, the answer to this question depends on your experience and varies from person to person.
52. How will you define the term clustering?
Answer: Clustering is the process of grouping data points so that similar points fall into the same group. With the help of clustering algorithms, you can divide the data into natural clusters.
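A tiny one-dimensional k-means illustrates the idea; the data and fixed starting centroids below are chosen for determinism rather than realism:

```python
# A minimal 1-D k-means: alternate between assigning each point to its
# nearest centroid and moving each centroid to the mean of its cluster.
data = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
centroids = [0.0, 12.0]  # deliberate, deterministic starting points

for _ in range(10):  # a few iterations suffice on this data
    # Assignment step: each point joins its nearest centroid.
    clusters = [[], []]
    for x in data:
        nearest = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # Update step: move each centroid to the mean of its cluster.
    centroids = [sum(c) / len(c) for c in clusters]

print(clusters)   # [[1.0, 1.5, 2.0], [10.0, 10.5, 11.0]]
print(centroids)  # [1.5, 10.5]
```

Production code would use a library such as scikit-learn (`KMeans`), which handles multiple dimensions, random restarts, and empty clusters.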
53. What is the frequency of retraining data models?
Answer: An effective data analyst knows all about the changing dynamics of their business and how this evolving nature might affect the efficiency and accuracy of their predictive models. The analyst should act as a consultant who can use their analytical skills as well as their acumen for getting to the cause of problems. The appropriate way to answer this question is to say that you would work with the customer to define a retraining period as precisely as possible. It would also make sense to retrain the model when the firm enters a new market, begins to face new competition, or is part of a merger.
54. You develop a big data model, but your end-user has difficulty understanding how the model works and the insights it can reveal. How do you communicate with the user to get your points across?
Answer: Many big data analysts come from statistics, engineering, and computer science disciplines; they’re brilliant analysts, but their people and communications skills lag. Businesses understand that to obtain results, you need both strong execution and strong communication. You can expect your HR, end business, and IT interviewers to focus on your communications skills, and to try to test them with a hypothetical situation.
55. Explain what is the KNN imputation method?
Answer: In KNN imputation, the missing attribute values are imputed by using the value of the attribute that is most similar to the attribute whose values are missing. By using a distance function, the similarity of two attributes is determined.
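A hand-rolled sketch of the idea follows; the table of rows and the helper `knn_impute` are hypothetical, and real projects would typically use a library implementation such as scikit-learn's `KNNImputer`:

```python
# KNN imputation sketch: fill a missing value with the mean of that
# column over the k rows nearest on the observed columns.
from math import dist

rows = [
    [1.0, 1.0, 10.0],
    [1.1, 0.9, 12.0],
    [5.0, 5.0, 50.0],
    [1.0, 1.1, None],  # third attribute is missing
]

def knn_impute(rows, target_idx, col, k=2):
    """Impute rows[target_idx][col] from the k nearest complete rows."""
    target = rows[target_idx]
    observed = [i for i in range(len(target)) if i != col]
    candidates = [r for r in rows if r[col] is not None]
    # Euclidean distance over the observed attributes only.
    candidates.sort(key=lambda r: dist([r[i] for i in observed],
                                       [target[i] for i in observed]))
    neighbours = candidates[:k]
    return sum(r[col] for r in neighbours) / k

filled = knn_impute(rows, target_idx=3, col=2)
print(filled)  # mean of the two nearest rows' third attribute: 11.0
```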