Best 46 Informatica Interview Questions And Answers

1. What can we do to improve the performance of Informatica Aggregator Transformation?
Answer:
Aggregator performance improves dramatically if records are sorted before being passed to the Aggregator and the "Sorted Input" option under the Aggregator properties is checked. The record set should be sorted on the columns that are used in the Group By operation.
It is often a good idea to sort the record set at the database level, e.g. inside a Source Qualifier transformation, unless there is a chance that the already sorted records from the Source Qualifier can become unsorted again before reaching the Aggregator.
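
For example, a minimal sketch of a Source Qualifier SQL override that pre-sorts the data on the Group By columns, so that the "Sorted Input" option can be safely enabled on the downstream Aggregator (the table and column names here are hypothetical):

    -- Sort on the columns the Aggregator groups by (CUSTOMER_ID, ORDER_YEAR),
    -- so the "Sorted Input" property can be enabled on the Aggregator.
    SELECT CUSTOMER_ID,
           ORDER_YEAR,
           ORDER_AMOUNT
    FROM   ORDERS
    ORDER BY CUSTOMER_ID, ORDER_YEAR;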

2. What is Informatica ETL Tool?
Answer:
Informatica ETL tool is a market leader in data integration and data quality services. Informatica is a successful ETL and EAI tool with significant industry coverage. ETL refers to Extract, Transform, Load. Data integration tools are different from other software platforms and languages.

They have no inbuilt feature to build a user interface where the end user can see the transformed data. The Informatica ETL tool "PowerCenter" has the capability to manage, integrate and migrate enterprise data.

3. Can we use mapping parameters or variables developed in one mapping in any other reusable transformation?
Answer:
Yes, we can use mapping parameters or variables in any other reusable transformation, because a reusable transformation is not tied to any particular mapplet or mapping.

4. What do you understand by the term role playing dimension?
Answer:
The dimensions that are used for playing diversified roles while remaining in the same database domain are known as role playing dimensions. For example, a single Date dimension can act as Order Date, Ship Date and Delivery Date.

5. What are the features of complex mapping?
Answer:
The features of complex mapping are:
Many numbers of transformations
Tricky requirements
Compound business logic

6. What are the different lookup caches?
Answer:
Informatica Lookups can be cached or un-cached (No cache). A cached lookup can be either static or dynamic. A static cache is one which does not modify the cache once it is built, and it remains the same during the session run. On the other hand, a dynamic cache is refreshed during the session run by inserting or updating records in the cache based on the incoming source data.

By default, the Informatica lookup cache is a static cache. A lookup cache can also be classified as persistent or non-persistent, based on whether Informatica retains the cache even after the completion of the session run or deletes it.

7. What are the advantages of partitioning a session?
Answer:
The main advantage of partitioning a session is better utilization of the server's processing power and improved efficiency. Another advantage is that it implements the solo sequences within the session.

8. Differentiate between mapping parameter and mapping variable?
Answer:
Values that change during the session's execution are known as mapping variables, whereas values that do not change within the session's execution are called mapping parameters.
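
As an illustration, mapping parameters and the initial values of mapping variables are typically supplied through a parameter file; a minimal sketch is shown below (the folder, workflow, session and parameter names are hypothetical):

    [MyFolder.WF:wf_load_customers.ST:s_m_load_customers]
    $$LOAD_DATE=2024-01-01
    $$SRC_SCHEMA=STAGE

The session reads this file at start-up: a mapping parameter keeps its value for the whole run, while a mapping variable may change as rows are processed and its final value is saved back to the repository.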

9. How can we update a record in the target table without using Update Strategy?
Answer:
A target table can be updated without using an 'Update Strategy' transformation. For this, we need to define the key of the target table at the Informatica level and then connect that key and the field we want to update in the mapping target. At the session level, we should set the target property to "Update as Update" and check the "Update" check-box.

Let's assume we have a target table "Customer" with fields "Customer ID", "Customer Name" and "Customer Address".

Suppose we want to update "Customer Address" without an Update Strategy. Then we have to define "Customer ID" as the primary key at the Informatica level and connect the Customer ID and Customer Address fields in the mapping. If the session properties are set correctly as described above, the mapping will only update the customer address field for all matching customer IDs.
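
Conceptually, with "Update as Update" set and "Customer ID" defined as the key, the Integration Service ends up issuing an update along these lines for each incoming row (a hedged sketch with example values, not the exact generated statement):

    UPDATE Customer
    SET    Customer_Address = '42 New Street'
    WHERE  Customer_ID      = 1001;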

10. What are the new features of Informatica 9.x Developer?
Answer:
From an Informatica developer's perspective, some of the new features in Informatica 9.x are as follows:

Now Lookup can be configured as an active transformation – it can return multiple rows on successful match
Now you can write SQL override on un-cached lookup also. Previously you could do it only on cached lookup
You can control the size of your session log. In a real-time environment you can control the session log file size or time
Database deadlock resilience feature – this ensures that your session does not immediately fail if it encounters a database deadlock; it will retry the operation. You can configure the number of retry attempts.

11. How would you elaborate on Tracing Level?
Answer:
Tracing Level can be defined as the amount of information that the server writes in the log file. Tracing Level can be configured either at the transformation level, at the session level, or at both levels.

Given below are the 4 types of tracing level:

Normal
Terse
Verbose Initialization
Verbose Data

12. Differentiate between Joiner and Lookup Transformation?
Answer:
Below are the differences between lookup and joiner transformation:

In lookup we can override the query but in joiner we cannot.
In lookup we can use different types of operators, such as >, <, >=, <= and !=, but in joiner only the "=" (equal to) operator is available.
In lookup we can restrict the number of rows while reading the relational table using lookup override but, in joiner we cannot restrict the number of rows while reading.
In joiner we can join the tables based on Normal Join, Master Outer, Detail Outer and Full Outer Join but in lookup this facility is not available. Lookup behaves like the Left Outer Join of a database.

13. What is meant by Lookup Transformation? Explain the types of Lookup transformation.
Answer:
Lookup transformation in a mapping is used to look up data in a flat file, relational table, view, or synonym. We can also create a lookup definition from a source qualifier.

We have the following types of Lookup.

Relational or flat file lookup. To perform a lookup on a flat file or a relational table.
Pipeline lookup. To perform a lookup on application sources such as JMS or MSMQ.
Connected or unconnected lookup.
A connected Lookup transformation receives source data, performs a lookup, and returns data to the pipeline.
An unconnected Lookup transformation is not connected to a source or target. A transformation in the pipeline calls the Lookup transformation with a :LKP expression. The unconnected Lookup transformation returns one column to the calling transformation.
Cached or un-cached lookup. We can configure the lookup transformation to cache the lookup data or directly query the lookup source every time the lookup is invoked. If the lookup source is a flat file, the lookup is always cached.

14. How can you increase the performance in Joiner transformation?
Answer:
Below are the ways in which you can improve the performance of Joiner Transformation.

Perform joins in a database when possible.
In some cases, this is not possible, such as joining tables from two different databases or flat file systems. To perform a join in a database, we can use the following options:
Create and Use a pre-session stored procedure to join the tables in a database.
Use the Source Qualifier transformation to perform the join.
Join sorted data when possible
For an unsorted Joiner transformation, designate the source with fewer rows as the master source.
For a sorted Joiner transformation, designate the source with fewer duplicate key values as the master source.
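
As a hedged illustration of pushing the join into the database through a Source Qualifier override (all table and column names below are hypothetical):

    -- The join is done in the database instead of a Joiner transformation,
    -- reducing the number of rows Informatica has to read and process.
    SELECT o.ORDER_ID,
           o.ORDER_AMOUNT,
           c.CUSTOMER_NAME
    FROM   ORDERS o
    JOIN   CUSTOMERS c
           ON c.CUSTOMER_ID = o.CUSTOMER_ID;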

15. What is Lookup transformation?
Answer:
Lookup transformation is used to look up a source, source qualifier, or target in order to get relevant data. You can look up a 'flat file', 'relational table', 'view' or 'synonym'. Lookup can be configured as Active or Passive as well as Connected or Unconnected transformation. When the mapping contains the lookup transformation, the Integration Service queries the lookup data and compares it with lookup input port values. One can use multiple lookup transformations in a mapping.

The lookup transformation is created with the following type of ports:

Input port (I)
Output port (O)
Look up Ports (L)
Return Port (R)

16. What are the differences between connected lookup and unconnected lookup?
Answer:
Connected lookup takes its inputs directly from other transformations in the pipeline. Unconnected lookup does not take inputs directly from other transformations; it can be used in any transformation and can be called as a function using the :LKP expression. An unconnected lookup can be called multiple times in your mapping data flow.
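
For instance, an unconnected lookup named lkp_customer (a hypothetical name) could be invoked from an output port of an Expression transformation like this:

    -- Call the unconnected lookup with CUST_ID; fall back to 'UNKNOWN'
    -- when no match is found.
    IIF(ISNULL(:LKP.lkp_customer(CUST_ID)), 'UNKNOWN', :LKP.lkp_customer(CUST_ID))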

17. How many input parameters can exist in an unconnected lookup?
Answer:
Any number of input parameters can exist. For instance, you can provide input parameters like column 1, column 2, column 3, and so on. But the return value would only be one.

18. What is the difference between active and passive transformation?
Answer:
Active Transformation:- An active transformation can perform any of the following actions:

Change the number of rows that pass through the transformation: For instance, the Filter transformation is active because it removes rows that do not meet the filter condition.
Change the transaction boundary: For e.g., the Transaction Control transformation is active because it defines a commit or roll back transaction based on an expression evaluated for each row.
Change the row type: For e.g., the Update Strategy transformation is active because it flags rows for insert, delete, update, or reject.
Passive Transformation: A passive transformation is one which will satisfy all these conditions:

Does not change the number of rows that pass through the transformation
Maintains the transaction boundary
Maintains the row type

19. What is a surrogate key?
Answer:
It is used as a substitute for the natural primary key.

20. What is the difference between static cache and dynamic cache?
Answer:
In a static cache the data remains the same for the entire session, whereas in a dynamic cache, whenever a row is inserted the cache is also updated.

21. What is the command used to run a batch?
Answer:
The pmcmd command.
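
As an illustration, assuming an Integration Service named int_svc, a domain named Domain_Dev and a workflow wf_load_customers in folder MyFolder (all hypothetical names), a workflow can be started from the command line roughly like this:

    pmcmd startworkflow -sv int_svc -d Domain_Dev -u Administrator -p admin_pwd -f MyFolder wf_load_customers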

These are some of the popular questions asked during an Informatica interview. Always be prepared to answer all types of questions: technical skills, interpersonal, leadership or methodology. If you have recently started your career in a data profile, you can always get certified to understand the industry-related terminology, skills and methodologies.

22. What are the advantages of Informatica?
Answer:
Informatica has some advantages over other data integration systems. A couple of the advantages are:

It is faster than the available platforms.
You can easily monitor your jobs with Informatica Workflow Monitor.
It has made data validation, iteration and project development easier than before.
If you experience failed jobs, it is easy to identify the failure and recover from it. The same applies to jobs that are running slowly.

23. State the differences between SQL Override and Lookup Override?
Answer:
The role of SQL Override is to limit the number of incoming rows entering the mapping pipeline, whereas Lookup Override is used to limit the number of lookup rows, avoiding a scan of the whole table and saving lookup time and the cache space it uses.
Lookup Override uses the "ORDER BY" clause by default. SQL Override doesn't use it by default; it has to be entered manually in the query if we require it.
SQL Override can provide any kind of ‘join’ by writing the query
Lookup Override provides only Non-Equi joins.
Lookup Override gives only one record even if it finds multiple records for a single condition
SQL Override doesn’t do that.
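
A hedged sketch of the two overrides (table and column names are hypothetical): the Source Qualifier SQL override limits the rows entering the pipeline, while the Lookup SQL override limits the rows that are cached for the lookup.

    -- Source Qualifier SQL override: restrict the incoming rows.
    SELECT ORDER_ID, CUSTOMER_ID, ORDER_AMOUNT
    FROM   ORDERS
    WHERE  ORDER_STATUS = 'SHIPPED';

    -- Lookup SQL override: restrict the rows that get cached,
    -- avoiding a full-table scan into the lookup cache.
    SELECT CUSTOMER_ID, CUSTOMER_NAME
    FROM   CUSTOMERS
    WHERE  ACTIVE_FLAG = 'Y';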

24. What is parallel processing in Informatica?
Answer:
After optimizing the session to its fullest, we can further improve performance by exploiting under utilized hardware power. This refers to parallel processing and we can achieve this in Informatica Powercenter using Partitioning Sessions.

The Informatica PowerCenter Partitioning Option increases the performance of PowerCenter through parallel data processing. The Partitioning option lets you split a large data set into smaller subsets which can be processed in parallel to get better session performance.

25. Explain shared cache and re-cache?
Answer:
To answer this question, it is essential to understand the persistent cache. If we are performing a lookup on a table, Informatica looks up all the data and brings it into the lookup cache. However, at the end of each session, the Informatica server deletes all the cache files. If you configure the lookup as a persistent cache, the server saves the lookup cache under an anonymous name. Shared cache allows you to use this cache in other mappings by directing it to an existing cache.

After a while, data in a table becomes old or redundant. In a scenario where new data enters the table, re cache ensures that the data is refreshed and updated in the existing and new cache.

26. What is an Expression Transformation in Informatica?
Answer:
An Expression transformation in Informatica is a common PowerCenter mapping transformation. It is used to transform data passed through it one record at a time. The Expression transformation is passive and connected. Within an expression, data can be manipulated, variables created, and output ports generated. We can write conditional statements within output ports or variables to help transform data according to our business requirements.
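
For example, a hypothetical output port in an Expression transformation might derive a status band from an incoming SALARY port using a conditional statement:

    -- Output port O_SALARY_BAND in an Expression transformation.
    IIF(SALARY > 50000, 'HIGH', 'LOW')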

27. How to delete duplicate rows using Informatica?
Answer:
Scenario 1: Duplicate rows are present in a relational database

Suppose we have duplicate records in the source system and we want to load only the unique records into the target system, eliminating the duplicate rows.

28. What is XML Source Qualifier Transformation in Informatica?
Answer:
1. Reads the data from XML files.
2. An XML source definition is associated with an XML Source Qualifier.
3. XML is a case-sensitive markup language.
4. Files are saved with the extension .XML.
5. XML files are hierarchical (parent-child relationship) file formats.
6. Files can be normalized or denormalized.

29. What is Informatica PowerCenter?
Answer:
– A data integration tool which combines data from multiple OLTP source systems, transforms the data into a homogeneous format and delivers the data throughout the enterprise at any speed.
– It is a GUI based ETL product from Informatica Corporation, which was founded in 1993 in Redwood City, California.
– There are many products from Informatica Corporation:

1. Informatica Analyzer.
2. Lifecycle Management.
3. Master Data Management.

Informatica has many products, and Informatica PowerCenter is one of them.
Using Informatica PowerCenter we perform the extraction, transformation and loading.

30. What is meant by active and passive transformation?
Answer:
An active transformation is the one that performs any of the following actions:
Change the number of rows between transformation input and output. Example: Filter transformation
Change the transaction boundary by defining commit or rollback points. Example: Transaction Control transformation
Change the row type. Example: Update Strategy transformation is active because it flags rows for insert, delete, update or reject
On the other hand, a passive transformation is one which does not change the number of rows that pass through it. Example: Expression transformation.

31. What are the new features of Informatica 9.x at the developer level?
Answer:
We can configure a Lookup transformation to cache the underlying lookup table. In case of static or read-only lookup cache the Integration Service caches the lookup table at the beginning of the session and does not update the lookup cache while it processes the Lookup transformation.
In case of dynamic lookup cache the Integration Service dynamically inserts or updates data in the lookup cache and passes the data to the target. The dynamic cache is synchronized with the target.
32. What is the expected value if a column in an Aggregator transformation is neither a group by port nor an aggregate expression?
Answer:
The Integration Service produces one row for each group based on the group by ports. A column which is neither part of the group by key nor an aggregate expression returns the value from the last record of the group received. However, if we specifically use the FIRST function, the Integration Service returns the value from the first row of the group. So the default behaviour corresponds to the LAST function.

33. How does the Sorter Cache work?
Answer:
The Integration Service passes all incoming data into the Sorter Cache before Sorter transformation performs the sort operation.
The Integration Service uses the Sorter Cache Size property to determine the maximum amount of memory it can allocate to perform the sort operation. If it cannot allocate enough memory, the Integration Service fails the session. For best performance, configure Sorter cache size with a value less than or equal to the amount of available physical RAM on the Integration Service machine.
If the amount of incoming data is greater than the Sorter cache size, the Integration Service temporarily stores data in the Sorter transformation work directory. The Integration Service requires disk space of at least twice the amount of incoming data when storing data in the work directory.

34. What is a Union Transformation?
Answer:
The Union transformation is an active, connected, non-blocking multiple input group transformation used to merge data from multiple pipelines or sources into one pipeline branch. Similar to the UNION ALL SQL statement, the Union transformation does not remove duplicate rows.
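
As a point of comparison, the SQL below (with hypothetical tables) shows the behaviour the Union transformation mirrors: UNION ALL keeps duplicates, whereas plain UNION would remove them.

    -- UNION ALL, like the Union transformation, does not remove duplicate rows.
    SELECT CUSTOMER_ID, CUSTOMER_NAME FROM CUSTOMERS_US
    UNION ALL
    SELECT CUSTOMER_ID, CUSTOMER_NAME FROM CUSTOMERS_EU;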

35. What is the difference between Router and Filter?
Answer:
Following differences can be noted:

Router vs. Filter:

1. A Router transformation divides the incoming records into multiple groups based on conditions, and such groups can be mutually inclusive (different groups may contain the same record). A Filter transformation restricts or blocks the incoming record set based on one given condition.
2. The Router transformation itself does not block any record; if a record does not match any of the routing conditions, it is routed to the default group. The Filter transformation does not have a default group; if a record does not match the filter condition, the record is blocked.
3. Router acts like a CASE..WHEN statement in SQL (or a switch..case statement in C). Filter acts like a WHERE condition in SQL.
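
The SQL analogy in the last point can be sketched as follows (hypothetical table and columns):

    -- Router-like: every row is classified into a group; rows that match no
    -- condition fall into a default bucket rather than being dropped.
    SELECT ORDER_ID,
           CASE
               WHEN ORDER_AMOUNT >= 1000 THEN 'LARGE'
               WHEN ORDER_AMOUNT >= 100  THEN 'MEDIUM'
               ELSE 'DEFAULT'
           END AS ORDER_GROUP
    FROM   ORDERS;

    -- Filter-like: rows that fail the condition are simply blocked.
    SELECT ORDER_ID, ORDER_AMOUNT
    FROM   ORDERS
    WHERE  ORDER_AMOUNT >= 1000;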

36. What will be the approach?
Answer:
Assuming that the source system is a relational database, to eliminate duplicate records we can check the Distinct option of the Source Qualifier of the source table and load the target accordingly.

37. But what if the source is a flat file? Then how can we remove the duplicates from a flat file source?
Answer:
Scenario 2: Deleting duplicate rows / selecting distinct rows for FLAT FILE sources

Here, since the source system is a flat file, you will not be able to select the Distinct option in the Source Qualifier, as it will be disabled for a flat file source. Hence the next approach may be to use a Sorter transformation and check its Distinct option. When we select the Distinct option, all the columns will be selected as keys, in ascending order by default.

Deleting Duplicate Record Using Informatica Aggregator

Another way to handle duplicate records in a source batch run is to use an Aggregator transformation and check the Group By checkbox on the ports that have duplicate data. Here you have the flexibility to select the last or the first of the duplicate column value records.
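
As a rough SQL analogue of the Aggregator approach (hypothetical table and columns), grouping on the duplicated key and keeping one value per group:

    -- One row per CUSTOMER_ID, keeping the maximum LOAD_TS per group,
    -- similar to choosing the "last" occurrence in the Aggregator.
    SELECT CUSTOMER_ID,
           MAX(LOAD_TS) AS LOAD_TS
    FROM   STG_CUSTOMERS
    GROUP BY CUSTOMER_ID;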

38. Differences between connected and unconnected lookup?
Answer:
Connected Lookup vs. Unconnected Lookup:

1. Connected: part of the mapping data flow. Unconnected: separate from the mapping data flow.
2. Connected: returns multiple values (by linking output ports to another transformation). Unconnected: returns one value, by checking the Return (R) port option for the output port that provides the return value.
3. Connected: executed for every record passing through the transformation. Unconnected: only executed when the lookup function is called.
4. Connected: more visible; shows where the lookup values are used. Unconnected: less visible, as the lookup is called from an expression within another transformation.
5. Connected: default values are used. Unconnected: default values are ignored.

39. How to use the PMCMD Utility Command?
Answer:
1. It is a command based client program that communicates with integration service to perform some of the tasks which can also be performed using workflow manager client.
2. Using PMCMD we can perform the following tasks:

i. Starting workflow.
ii. Scheduling workflow.

3. The PMCMD can be operated in two different modes:

i. Interactive Mode.
ii. Command Line Mode.

40. What is Workflow Manager?
Answer:
It is a GUI based client which allows you to create the following ETL objects.

1. Session
2. Workflow
3. Scheduler.

Session:-
– A session is a task that executes a mapping.
– A session is created for each mapping.
– A session is created to provide runtime properties.
– A session is a set of instructions that tells the ETL server to move the data from source to destination.

Workflow:-
Workflow is a set of instructions that tells the server how to run the session tasks and when to run them.

41. Mention a few PowerCenter client applications with their basic purpose.
Answer:
Tasks like session and workflow creation, monitoring workflow progress, designing mapplets, etc. are performed by PowerCenter client applications.

Enlisted below is the list of Power center client applications with their purpose:

Repository Manager: It is an administrative tool and its basic purpose is to manage repository folders, objects, groups etc.
Administration Console: Here the service tasks like start/stop, backup/restore, upgrade/delete, etc are performed.
Power center designer: The designer consists of various designing tools which serve various purposes. These designing tools are:
Source Analyzer
Target designer
Transformation Developer
Mapplet Designer
Mapping Designer
Workflow Manager: Its basic purpose is to define a set of instructions/workflow that is required to execute mappings designed in the designer. To help develop a workflow, there are 3 tools available, namely Task developer, Workflow designer, Worklet Designer.
Workflow Monitor: As the name suggests, Workflow monitor, monitors the workflow or tasks. The list of windows available are:
Navigator Window
Output window
Time window
Properties window
Task view
Gantt chart view

42. Why do we need Informatica?
Answer:
Informatica comes into the picture wherever we have a data system available and, at the backend, we want to perform certain operations on the data. These can be things like cleaning up data or modifying it based on a certain set of rules, or simply loading bulk data from one system to another.

Informatica offers a rich set of features like row-level operations on data, integration of data from multiple structured, semi-structured or unstructured systems, and scheduling of data operations. It also has the feature of metadata, so the information about the process and data operations is also preserved.

43. How can we validate all mappings in the repository simultaneously?
Answer:
At a time, we can validate only one mapping. Hence mappings cannot be validated simultaneously.

44. What is Expression transformation?
Answer:
It is used for performing non-aggregate calculations. We can test conditional statements before the output results move to the target tables.

45. What are the different components of PowerCenter?
Answer:
Given below are the 8 important components of PowerCenter:

PowerCenter Service
PowerCenter Clients
PowerCenter Repository
PowerCenter Domain
Repository Service
Integration Service
PowerCenter Administration Console
Web Service Hub

46. What happens to a mapping if we alter the datatypes between the Source and its corresponding Source Qualifier?
Answer:
The Source Qualifier transformation displays the transformation datatypes. The transformation datatypes determine how the source database binds data when the Integration Service reads it.

Now, if we alter the datatypes in the Source Qualifier transformation, or if the datatypes in the source definition and the Source Qualifier transformation do not match, the Designer marks the mapping as invalid when we save it.