Top Informatica interview questions and answers:
- What is your understanding of Enterprise Data Warehousing?
Enterprise data warehousing organizes all of an organization's data at a single point of access. This single store gives users a unified, enterprise-wide view of the data, and periodic analysis can be run against the source it holds. Although querying such a large store takes longer, the outcomes are good.
- Give the difference between a data warehouse, a database and a data mart?
A database is a smaller group of data compared to a data warehouse. A data warehouse not only retrieves data according to the requirements of its users but also holds many different kinds of data. A data mart takes care of the requirements of a particular domain; for example, an organization may keep separate chunks of data for individual departments such as marketing and finance.
- What is a domain?
A domain is a single organizing point that groups all related nodes together, which improves the management of data.
- Give the difference between a powerhouse server and a repository server?
The repository server controls the complete repository, including its procedures, charts and tables, and its major role is to ensure repository consistency and integrity. The powerhouse server directs the execution of the various processes run against the server's repository database.
- How many repositories can be created in Informatica?
Many repositories can be created in Informatica, but the final number is determined by the number of ports available.
- Give the merits of partitioning a session?
Session partitioning runs several parts of a sequence concurrently within a single session. Its major role is to improve the server's functionality and performance: the extraction, transformation and output of each partition take place in parallel.
- How can you create indexes after completion of the load process?
Command tasks at the session level are used to create indexes once the load process is over. The index-creation scripts can be aligned with the session or workflow being executed. Note that index creation cannot occur at the transformation level after the load process.
- Give a definition of a session and describe the way batches can be combined for execution?
A session is a set of instructions that must be executed to transform data from the source to the target. A session can be run through the Workflow Manager or with the pmcmd command. Session executions can be combined in either serial or parallel fashion through batch execution: batches are designed to run multiple sessions either one after another or at the same time.
- How many sessions can be grouped in a single batch?
Any number of sessions can be grouped in a batch, but managing and migrating them is simpler when the batch contains fewer sessions.
- How do mapping parameters and mapping variables differ?
A mapping variable is a value that changes while the session is executed. When the session finishes, the Informatica server stores the final value so it can be reused when the process restarts. A mapping parameter is a value that remains constant throughout execution; parameters are declared in the mapping, and values are assigned to them before the session begins.
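The distinction can be modeled in a short sketch. This is plain Python, not Informatica syntax: a dict stands in for the repository that persists a mapping variable between runs, and the names `$$LastRunDate` and `RUN_DATE` are illustrative.

```python
# Sketch: mapping parameter vs. mapping variable (names are illustrative).
repository = {"$$LastRunDate": "2024-01-01"}   # persisted mapping variable

RUN_DATE = "2024-02-01"                        # mapping parameter: fixed for the run

def run_session():
    last = repository["$$LastRunDate"]         # variable value saved by the previous run
    print(f"extract rows between {last} and {RUN_DATE}")
    repository["$$LastRunDate"] = RUN_DATE     # variable is updated at session end

run_session()
```

The parameter never changes during the run; the variable's final value survives into the next run, which is what makes patterns like incremental extraction possible.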
- Define complex mapping?
Features you can find in a complex mapping include:
- Difficult requirements
- Complex business logic
- A large variety of transformations
- How can one check whether a mapping is correct or not without connecting a session?
The Debugger can be used to assess the correctness of a mapping without linking it to a session.
- Is it possible to use mapping variables or parameters created in one mapping inside a reusable transformation?
Yes, because a reusable transformation is not tied to any particular mapping or mapplet.
- What is the use of the aggregator cache file?
During every run, the Aggregator transformation processes data in chunks, and the intermediate values are held in local buffer memory. Whenever additional memory is needed, the Aggregator spills to its cache file.
- In short, explain the Lookup transformation?
A Lookup transformation gives access to RDBMS-based data. Lookup tables give the server faster access when checking the database: the server matches the lookup condition against the ports passed into the transformation and returns the result.
- What is the meaning of a role-playing dimension?
These are dimensions that are used for several different purposes while keeping the same position in the database domain; a common example is a date dimension that serves as both the order date and the ship date.
- How is it possible to access the repository when SQL and other transformations are not available?
The Metadata Reporter is used to provide repository reports. Because it is a web application, no SQL is required.
- What is your understanding of the metadata stored in the repository?
Source definitions, mappings, mapplets, transformations and target definitions are examples of metadata stored in the repository.
- What is code page compatibility?
No data is lost when data is transferred from one code page to another as long as both code pages hold the same character set: every character in the source page must also exist in the target page. Conversely, when characters of the source page do not exist in the target page, the code pages are incompatible and data loss will occur during the transfer.
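The same rule can be demonstrated with ordinary character encodings. A minimal sketch in Python: Latin-1 contains every character of the sample string, so the round-trip is lossless, while ASCII lacks one of them, so the transfer fails.

```python
# Sketch of code page compatibility using Python's built-in codecs.
text = "café"

# Latin-1 contains every character used here, so a round-trip is lossless.
encoded = text.encode("latin-1")
assert encoded.decode("latin-1") == text

# ASCII lacks "é": the target code page is incompatible with the source.
try:
    text.encode("ascii")
except UnicodeEncodeError:
    print("data loss: source character missing from target code page")
```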
- How is it possible to validate all the mappings in the repository at the same time?
It is not possible: each session is tied to a single mapping, so mappings cannot all be validated at the same time.
- Give a short description of the Aggregator transformation?
It enables aggregate calculations such as averages and sums. Unlike the Expression transformation, it performs calculations over groups of rows rather than a single row.
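What an Aggregator computes can be sketched in plain Python (the column names `dept` and `amount` are made up for illustration; in Informatica this would be a group-by port plus a SUM expression):

```python
# Sketch of an Aggregator transformation: group by dept, SUM(amount).
from collections import defaultdict

rows = [
    {"dept": "sales", "amount": 100},
    {"dept": "sales", "amount": 50},
    {"dept": "hr", "amount": 70},
]

totals = defaultdict(int)
for row in rows:                      # aggregate over groups of rows
    totals[row["dept"]] += row["amount"]

print(dict(totals))  # → {'sales': 150, 'hr': 70}
```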
- Give a description of the Expression transformation?
In this transformation, values are calculated within a single row before being written to the target. It supports non-aggregate calculations, and conditional statements can be tested before the output is sent to the target tables.
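A row-at-a-time sketch of the same idea, in plain Python with illustrative field names (a derived port plus a conditional test, no aggregation across rows):

```python
# Sketch of an Expression transformation: non-aggregate, one row at a time.
rows = [{"price": 10.0, "qty": 3}, {"price": 4.0, "qty": 2}]

for row in rows:
    row["total"] = row["price"] * row["qty"]   # derived output port
    row["bulk"] = row["qty"] >= 3              # conditional test per row

print(rows)
```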
- Define the Filter transformation?
It filters rows in the middle of a mapping. The filter condition is applied as data passes through the Filter transformation: all of its input/output ports carry the rows, but a row passes the filter only if it satisfies the condition.
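In plain Python terms the behavior is a predicate over rows (the condition `salary > 1000` is illustrative):

```python
# Sketch of a Filter transformation: only rows satisfying the condition pass.
rows = [{"id": 1, "salary": 900}, {"id": 2, "salary": 1500}]

passed = [r for r in rows if r["salary"] > 1000]   # filter condition
print(passed)  # → [{'id': 2, 'salary': 1500}]
```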
- Define the term Joiner transformation?
It combines data from heterogeneous sources that reside in different locations. By contrast, a Source Qualifier transformation joins data that comes from the same origin.
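A joiner is, at heart, a join on a common key between a master and a detail pipeline. A minimal sketch, with made-up sources and key names:

```python
# Sketch of a Joiner transformation: two heterogeneous sources joined on emp_id.
master = {1: "Alice", 2: "Bob"}                  # e.g. rows from a flat file
detail = [{"emp_id": 1, "dept": "sales"},        # e.g. rows from a database
          {"emp_id": 2, "dept": "hr"}]

joined = [{"name": master[d["emp_id"]], **d}
          for d in detail if d["emp_id"] in master]
print(joined)
```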
- Give a short explanation of the Lookup transformation?
Within a mapping, it can be used to check data contained in a relational table. Looking up a definition from the relational database normally uses a connection from the client or the server.
- Give an explanation of the Union transformation?
The Union transformation merges data that comes from different origins. It operates on the same principle as the UNION ALL statement in SQL, which combines the result sets of multiple SELECT statements.
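The UNION ALL semantics can be shown in a few lines of Python: rows from the pipelines are concatenated and, unlike a plain UNION, duplicates are kept.

```python
# Sketch of Union transformation semantics (SQL UNION ALL: keep duplicates).
source_a = [{"id": 1}, {"id": 2}]
source_b = [{"id": 2}, {"id": 3}]

unioned = source_a + source_b        # concatenate, no de-duplication
print(len(unioned))  # → 4
```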
- What is the definition of the term Incremental Aggregation?
The incremental aggregation option can be enabled whenever a session for an aggregation mapping is created. With it, PowerCenter applies only the new source data to the historical cache, handling the aggregate calculations incrementally instead of reprocessing everything.
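A minimal sketch of the idea, assuming a dict stands in for the historical cache and the column names are illustrative:

```python
# Sketch of incremental aggregation: historical totals live in a cache,
# and only the new rows from the current run are applied to it.
cache = {"sales": 150}               # historical aggregate from earlier runs

new_rows = [{"dept": "sales", "amount": 25}, {"dept": "hr", "amount": 40}]
for row in new_rows:
    cache[row["dept"]] = cache.get(row["dept"], 0) + row["amount"]

print(cache)  # → {'sales': 175, 'hr': 40}
```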
- Give the difference between a connected lookup and an unconnected one?
A connected lookup receives its inputs directly from the other transformations in the pipeline. An unconnected lookup does not receive its inputs directly from the pipeline; instead it is called from an :LKP expression.
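The two styles can be contrasted in a short sketch, with a dict standing in for the lookup table and illustrative field names:

```python
# Sketch of connected vs. unconnected lookup (lookup table = dict).
lookup = {10: "Electronics", 20: "Books"}

# Connected lookup: sits in the pipeline and receives every row's input.
rows = [{"id": 1, "cat_id": 10}, {"id": 2, "cat_id": 20}]
for row in rows:
    row["category"] = lookup.get(row["cat_id"])

# Unconnected lookup: invoked on demand, like a function call in an
# expression (the :LKP syntax in Informatica).
def lkp_category(cat_id):
    return lookup.get(cat_id)

print(lkp_category(20))  # → Books
```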
- Give the definition of a mapplet?
A mapplet is a reusable object built with the Mapplet Designer. It allows the same transformation logic to be reused in a large number of mappings, and it can contain a set of transformations.
- Give a short definition of the term reusable transformation?
This is a transformation that can be used many times across mappings. It is stored in the metadata, which sets it apart from other transformations, and any alteration to a reusable transformation is reflected in all the mappings that use it.
- Give a description of the Update Strategy?
Informatica processes data row by row. By default, each row is flagged for insertion into the target table. When rows must instead be updated or inserted depending on some condition, the Update Strategy transformation is used: it must be specified in order to flag each row for update or insert.
- What makes the Informatica server reject a row?
A row is rejected when it meets a DD_REJECT flag in an Update Strategy transformation, or when it violates a constraint of the database it is being written to.
- Define the term surrogate key?
A surrogate key is a replacement for the natural key: an identifier of its own kind that is unique for every row in the table. It is useful because updates become hard when a natural key changes. Surrogate keys are usually stored as integers.
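The assignment can be sketched with a sequence generator stand-in (the field names `natural_key` and `customer_sk` are illustrative; in Informatica a Sequence Generator transformation usually plays this role):

```python
# Sketch of surrogate key assignment: a generated integer replaces the
# natural key as the row identifier.
import itertools

seq = itertools.count(1)             # stand-in for a sequence generator
customers = [{"natural_key": "a@x.com"}, {"natural_key": "b@y.com"}]
for row in customers:
    row["customer_sk"] = next(seq)   # stays stable even if the email changes

print([r["customer_sk"] for r in customers])  # → [1, 2]
```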
- What must be done to allow session partitioning?
You need to configure the session to partition the source data, and install the Informatica server on a machine with multiple CPUs, so that session partitioning can be carried out.
- Give the names of the files that are created by the Informatica server while a session runs.
The session log, workflow log, bad (reject) file and error log are files created while the session is running.
- Give a short definition of the term session task?
This is a set of instructions that tells the PowerCenter server how and when data should be transferred from the sources to the targets.
- Define the command task?
It allows one or more shell commands (Windows DOS or UNIX) to be executed while the workflow is running.
- Define the standalone command task?
This task can run shell commands anywhere in the workflow.
- Give a short explanation of the pre- and post-session shell commands?
For a session task, a command task can be attached as a pre- or post-session shell command. It can be run as a pre-session command, a post-session success command or a post-session failure command.
- Briefly explain what a predefined event is?
This is a file-watch event. Its function is to wait for a specified file to arrive at a given location.
- What is your understanding of a user-defined event?
A user-defined event describes a flow of tasks in the workflow. Once created, events can be raised whenever it is necessary to do so.
- Define workflow?
A workflow is a group of instructions that tells the server how its tasks should be executed.
- Name all the Workflow Manager tools?
Tools in the Workflow Manager include:
- Task Developer
- Worklet Designer
- Workflow Designer
- Apart from pmcmd, which other tool is used for scheduling workflows?
CONTROL-M is another third-party tool used for scheduling.
- Define OLAP (On-Line Analytical Processing)?
OLAP is a process that enables multi-dimensional analysis of data.
- Name the types of OLAP and give examples?
ROLAP (e.g. BO), MOLAP (e.g. Cognos), HOLAP and DOLAP.
- Give your understanding of a worklet?
A worklet is a group of workflow tasks combined into a single reusable unit. Examples of such tasks are decision, timer, email, command, session, control, assignment and link tasks.
- Give the function of the Target Designer?
The Target Designer tool is used to create target definitions.
- Where is the throughput option found in Informatica?
The throughput option is found in the Workflow Monitor: open the Workflow Monitor, click on the session, then open its run properties, where the throughput figure appears under Source/Target Statistics.
- Explain the meaning of load order?
In a mapping, the target load order is set in terms of the source qualifiers. When multiple source qualifiers are linked to a variety of targets, you can set the order in which the Informatica server loads data into the targets.