September 4, 2014

SQL SERVER: SSIS - Rename and move files from source folder to destination folder

With SSIS we often need to process all the files in a folder. Once a file has been processed, we move it to an archive folder, so we know it has been handled and keep a copy for reference.

Here we are going to process the files and then move each one from the source folder to the archive folder, appending the date and time to the file name so we can use it for future reference. SSIS does both of these things, renaming and moving a file, with a single File System Task using the “Rename file” operation. Let’s review how it works:

1. Add Variables: Configure the source and archive folders through the following variables:

   Name          Scope            Data Type  Value
   FileName      [Package Scope]  String
   SourceFolder  [Package Scope]  String     [Source Folder Path] (example: c:\Source_Folder)
   SourcePath    [Package Scope]  String
   TargetFolder  [Package Scope]  String     [Target Folder Path] (example: c:\Archive_Folder)
   ArchivePath   [Package Scope]  String
2. Set the variables’ expression properties: Open each computed variable’s Properties window and set EvaluateAsExpression to True along with an Expression, so SourcePath and ArchivePath are built automatically.

Here we append the current date and time to the file name through the expressions below.

Variable Name  EvaluateAsExpression  Expression
SourcePath     True                  @[User::SourceFolder] + "\\" + @[User::FileName]
ArchivePath    True                  (full expression below)

The ArchivePath expression strips the extension from the file name, appends an _yyyymmddhhmiss timestamp, and re-appends the extension:

@[User::TargetFolder] + "\\"
+ REVERSE(SUBSTRING(REVERSE(@[User::FileName]), FINDSTRING(REVERSE(@[User::FileName]), ".", 1) + 1, LEN(@[User::FileName]) - FINDSTRING(REVERSE(@[User::FileName]), ".", 1)))
+ "_" + (DT_STR,4,1252)DATEPART("yyyy", GETDATE())
+ RIGHT("0" + (DT_STR,2,1252)DATEPART("mm", GETDATE()), 2)
+ RIGHT("0" + (DT_STR,2,1252)DATEPART("dd", GETDATE()), 2)
+ RIGHT("0" + (DT_STR,2,1252)DATEPART("hh", GETDATE()), 2)
+ RIGHT("0" + (DT_STR,2,1252)DATEPART("mi", GETDATE()), 2)
+ RIGHT("0" + (DT_STR,2,1252)DATEPART("ss", GETDATE()), 2)
+ REVERSE(SUBSTRING(REVERSE(@[User::FileName]), 1, FINDSTRING(REVERSE(@[User::FileName]), ".", 1)))

3. Add Foreach Loop Container and set its properties: Now let’s loop through the folder and process each file with a Foreach Loop. Configure the folder under Expressions by setting Directory to @[User::SourceFolder], and specify which types of files to process (“txt”, “csv”, etc.) in “Files”. We need the file name with its extension, so select the “Name and extension” retrieval option.
We then assign each file name to the variable under Variable Mappings: map User::FileName with Index 0. The settings are sketched below.
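
A minimal sketch of the Foreach Loop configuration used here (the *.txt filter is only an example; use your own extension):

   Enumerator:          Foreach File Enumerator
   Expressions:         Directory = @[User::SourceFolder]
   Files:               *.txt
   Retrieve file name:  Name and extension
   Variable Mappings:   User::FileName, Index 0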

4. Add File System Task: Add a “File System Task” inside the Foreach Loop Container.
5. Set properties for File System Task: This is where we set up the operation that does the job:
A. Set SourceVariable to User::SourcePath.
B. Set DestinationVariable to User::ArchivePath.
C. Select the operation “Rename file”, which renames the file and moves it to the archive folder we specified in the variable.
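
The resulting task configuration looks like this (property names as they appear in the File System Task Editor):

   Operation:                  Rename file
   IsSourcePathVariable:       True
   SourceVariable:             User::SourcePath
   IsDestinationPathVariable:  True
   DestinationVariable:        User::ArchivePath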
6. Run the package and check the result.
With SSIS, processing multiple files this way is quite simple.
Reference: Tejas Shah (www.SQLYoga.com)

September 1, 2014

SQL SERVER: SSIS - Conditional Split Data Flow Transformation Task

The Conditional Split transformation routes source rows into multiple groups within a data flow, each of which can populate a different destination. Let’s review it by reading a sample text file and separating the data into two groups.

1. Create sample text file:
This text file is pipe delimited, and its last row is a “Total Row count” trailer.

EmployeeNumber|Employeename
242|Lorem ipsum
239|dolor sit
225|amet consectetur
242|adipisicing elit
142|seddo eiusmod
222|tempor incididunt
142|ut labore
143|dolore magna
144|Ut enim
Total Row count|9


2. Create sample table:
Create sample destination table in Test database
CREATE TABLE [dbo].[Employee](
	[EmployeeId] [int] IDENTITY(1,1) NOT NULL,
	[EmployeeNumber] [int] NULL,
	[EmployeeName] [varchar](50) NULL,
	CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED
	(
		[EmployeeId] ASC
	)
)
GO

3. Add Data Flow Task:
Add a Data Flow Task to the package to transfer data from the source text file “Employee.txt” to the SQL Server table “dbo.Employee”.
4. Add new Flat File Connection:
Right-click Connection Managers and add a new Flat File connection.
5. Set Flat File Connection properties:
In the Flat File Connection Manager Editor, set the connection manager name and the file name (the source file’s full path), and check “Column names in the first data row”.
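
For this example the connection might be configured as follows (the connection name and path are placeholders):

   Connection manager name:              Employee_FlatFile
   File name:                            C:\Source_Folder\Employee.txt
   Format:                               Delimited
   Column delimiter:                     Vertical bar {|}
   Column names in the first data row:   True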
6. Add Flat file source:
Add a “Flat File Source” to the Data Flow Task, set its Flat File connection manager, set “Retain null values from the source as null values in the data flow” to true, and click the Preview button: you can see the extra trailer row included in the text file.
7. Add Conditional Split Transformation in Data Flow Task:
Add a Conditional Split transformation to the Data Flow Task to split the rows.
8. Set Conditional Split properties:
Configure the split so that the row whose EmployeeNumber equals “Total Row count” is filtered out, while all other rows continue on for processing, as sketched below.
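
Because flat-file columns arrive as strings, a condition along these lines does the job (the output names are my own):

   Output name      Condition
   TotalRowCount    [EmployeeNumber] == "Total Row count"
   Default output:  ValidRows (rows that continue to the destination)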
9. Add new OLE DB Connection:
Right-click Connection Managers and add a new OLE DB connection.
10. Set OLE DB Connection properties:
In the connection manager, set the server name, select the database to connect to, and click OK.
11. Add OLE DB Destination:
Add an “OLE DB Destination” to the Data Flow Task and set its OLE DB connection manager and the name of the table or view.
12. Set input-output selection:
Connect the Conditional Split to the OLE DB Destination and choose the appropriate split output in the Input Output Selection dialog.
13. Run the package and check:
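After execution, only the nine data rows should land in the destination; the trailer row is diverted by the split. A quick check (assuming the Test database):

   SELECT COUNT(*) AS LoadedRows FROM dbo.Employee;  -- expected: 9
   SELECT * FROM dbo.Employee;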

In this way we can split the data and use each group as required.

Reference: Tejas Shah (www.SQLYoga.com)

August 6, 2014

SQL SERVER: Clone SSIS Package

Recently I was assigned a job to create many DTSX packages. When I reviewed them, I found the packages (information flow) were nearly identical; only the source file connection and the destination SQL table differed in each one. Creating each package from scratch might take a couple of hours, but I wanted it done in a few minutes. To achieve this, I looked into the DTSX code (XML) and updated it as follows, which gets the job done efficiently and saves time.

Please find following steps to achieve the same:

1. Existing package:
 

2. Copy Package:
Right-click the existing package, choose Copy, then right-click the SSIS Packages folder and choose Paste.


3. Rename the newly pasted package:
Rename the package as you like; when the confirmation message box appears, click Yes.


4. Open the package in a text editor:

Go to the folder where the package file exists and open the package in Notepad.


5. Replace the package name in the file:

Use the Replace option to change the old package name to the new package name.


6. Replace more text:

Replace any other text you know needs to change (for example, replace the text “Activity” with “Job”).


7. Check the replaced names:

Verify that all task names, SQL tasks, and Data Flow tasks have picked up the new text. Annotation text will not be changed, so it needs to be changed manually.


8. Change configurations manually:

If the SSIS package uses a SQL Server package configuration, that needs to be changed manually too.


June 14, 2014

SQL Yoga : Certificate Expired, Mirroring Stopped

Database administrators may face this issue when a certificate expires and database mirroring (non-domain, certificate-based mirroring) gets disconnected, since the principal and mirror servers can no longer communicate with each other. The error message appears in the log as follows:

Message
Database Mirroring login attempt failed with error: 'Connection handshake failed. The certificate used by this endpoint was not found: Certificate expired. Use DBCC CHECKDB in master database to verify the metadata integrity of the endpoints. State 85.'. [CLIENT: xxx.xxx.xxx.xxx]
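
Before renewing, you can confirm which certificate the mirroring endpoint uses and when it expires with a quick catalog query (a sketch; run it in master on each partner):

   SELECT e.name AS endpoint_name, c.name AS certificate_name,
          c.start_date, c.expiry_date
   FROM sys.database_mirroring_endpoints AS e
   JOIN sys.certificates AS c ON c.certificate_id = e.certificate_id;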

In this post, we review the step-by-step process to resolve this issue by renewing the certificates used by the mirroring endpoints.
1. Create a new certificate with longer endpoint (on Principal):
-- Create a new certificate for the endpoint
USE master;
CREATE CERTIFICATE [Principal_Certificate_New]
   WITH SUBJECT = 'Principal Certificate',
    START_DATE = '01/01/2014',  -- Provide a date prior to the current date
    EXPIRY_DATE = '12/31/2020'; -- Provide a future date
GO

2. Take backup of the newly created certificate (on Principal):
USE master;
BACKUP CERTIFICATE [Principal_Certificate_New] TO FILE = N'F:\Backup\Principal_Certificate_New.cer';
GO

3. Set mirroring to use the newly created certificate (On Principal):
ALTER ENDPOINT DBMirrorEndPoint
FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE [Principal_Certificate_New])
GO

4. Delete the old certificate for endpoint (on Principal):
USE master;
DROP CERTIFICATE [Principal_Certificate_Old]
GO

5. Drop the Old Certificate for Principal Login (on Mirror):
USE master;
DROP CERTIFICATE [Principal_Certificate_Old]
GO

6. Create a new certificate from the principal backup file (on Mirror):
USE master;
CREATE CERTIFICATE [Principal_Certificate_New] AUTHORIZATION PrincipalServerUser
FROM FILE = N'F:\Backup\Principal_Certificate_New.cer';
GO

7. Create a new certificate with longer endpoint (on Mirror):
USE master;
CREATE CERTIFICATE [Mirror_Certificate_New]
WITH SUBJECT = 'Mirror Certificate New'
	,EXPIRY_DATE = '12/31/2020' -- Provide a future date
GO

8. Take backup of newly created certificate (on Mirror):
USE master;
BACKUP CERTIFICATE [Mirror_Certificate_New] TO FILE = N'F:\Backup\Mirror_Certificate_New.cer';
GO

9. Set mirroring for newly created certificate (on Mirror):
USE master;
ALTER ENDPOINT DBMirrorEndPoint
FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE [Mirror_Certificate_New])
GO

10. Drop the old certificate for endpoint (On Mirror):
USE master;
DROP CERTIFICATE [Mirror_Certificate_Old]
GO

11. Drop the old certificate for Mirror login (On Principal):
USE master;
DROP CERTIFICATE [Mirror_Certificate_Old]
GO

12. Create a new certificate from the mirror backup file (On Principal):
USE master;
CREATE CERTIFICATE [Mirror_Certificate_New] AUTHORIZATION MirrorServerUser
FROM FILE = N'F:\Backup\Mirror_Certificate_New.cer';
GO

13. Resume the mirroring session for each database(On Principal and Mirror):
USE master;
ALTER DATABASE [Mirrored_Database_Name] SET PARTNER RESUME
GO
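
Once resumed, a couple of catalog queries can confirm the fix; this is a sketch, with the certificate-name filter based on the names used above:

   -- New certificates and their expiry dates
   SELECT name, start_date, expiry_date
   FROM sys.certificates
   WHERE name IN ('Principal_Certificate_New', 'Mirror_Certificate_New');

   -- Mirroring session state (expect SYNCHRONIZING / SYNCHRONIZED)
   SELECT DB_NAME(database_id) AS database_name, mirroring_state_desc
   FROM sys.database_mirroring
   WHERE mirroring_guid IS NOT NULL;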

By following these steps, we can resolve the certificate issue for the mirrored databases, and mirroring will resume.
Note: 1. Always prefer to set the certificate expiry to a far-future date, so the issue doesn’t recur soon. 2. During this process, the principal database remains available without any interruption.
Reference: Tejas Shah (www.SQLYoga.com)

April 17, 2014

SQL SERVER: SSIS – Merge Join Transformation

Now, let’s have a look at the functionality of the Merge Join Transformation task in SSIS.

The benefit of using Merge Join is that the inputs can be any combination of two datasets (Excel file, XML file, OLE DB table, flat file), and the output can be the result of an INNER, LEFT OUTER, or FULL OUTER join of both datasets.

Merge Join Transformation has two inputs and one output. It does not support an error output.

Use of Merge Join Transformation:

Merge Join is a two-step process. The first step is to sort both input datasets (tables) in the same order, and the second step is to apply the merge join on the common key, where rows from both sorted inputs are matched together.

To understand the Merge Join Transformation better, let’s take an example with its various configuration parameters in SSIS.

1. Create sample tables:

Now we will create input tables named “Department” and “Employee” in Test database.

CREATE TABLE Department
(
	Dept_No INT
	,Dept_Name VARCHAR(50)
	,Location VARCHAR(50)
	CONSTRAINT PK_DEPT PRIMARY KEY (Dept_No)
)

INSERT INTO Department VALUES (10, 'ACCOUNTING', 'Mumbai')
INSERT INTO Department VALUES (20, 'RESEARCH',   'Delhi')
INSERT INTO Department VALUES (30, 'SALES',      'Mexico')
INSERT INTO Department VALUES (40, 'OPERATIONS', 'Sydney')
GO

CREATE TABLE Employee
(
	Emp_No INT NOT NULL
	,Emp_Name VARCHAR(100)
	,Designation VARCHAR(50)
	,Manager INT
	,JoinDate DATE DEFAULT GETDATE()
	,Salary INT
	,Dept_No INT
	CONSTRAINT PK_Employee PRIMARY KEY (Emp_No)
	,CONSTRAINT FK_Dept_No FOREIGN KEY (Dept_No) REFERENCES Department(Dept_No)
)

INSERT INTO Employee
	(Emp_No, Emp_Name, Designation, Manager, Salary, Dept_No)
VALUES
	(101, 'Tejas', 'MANAGER', 104, 4000, 20)
	,(102, 'Michel', 'ANALYST', 101, 1600, 30)
	,(103, 'Mark', 'DEVELOPER', 102, 1250, 30)
	,(104, 'James', 'DIRECTOR', 106, 2975, 10)
	,(105, 'Raj', 'ANALYST', 7566, 3000, 20)
	,(106, 'TechnoBrains', 'PRESIDENT', NULL, 5000, 40)
GO

2. Create Data Source Connection:

Select and drag a “Data Flow Task” from “Control Flow Items” onto the designer surface, then double-click it and create a new OLE DB connection.

3. Select Input Data Sources:

Select the two data sources you need to perform the merge join on, “OLE_SRC_Employee” and “OLE_SRC_Department”, and create a new OLE DB connection to map them to the source datasets.


SQL Yoga - Merge Join Transformation #1


4. OLEDB Source Editor:

Double-click each “OLEDB Source” to open the “OLEDB Source Editor”, provide the table configuration parameters, and map the columns on the “Columns” tab.


5. Data Sorting:

As the Merge Join Transformation accepts only sorted input, we add Sort transformations to the flow. If you know the data is already sorted, you can instead set the IsSorted property to True in the Advanced Editor on the output of the respective OLE DB Source (and assign SortKeyPosition to the key columns); otherwise, use the Sort transformation from the Data Flow toolbox.

Now we add two Sort components: join the green arrow pipeline from “Employee” to one Sort transformation and the pipeline from “Department” to the other.

SQL Yoga - Merge Join Transformation #2

6. Sort Transformation Editor, Source 1:

To get sorted data, double-click the Sort transformation connected to the “Employee” dataset and provide the key(s) on which to sort, so the data is re-ordered based on the provided keys. Provide the sort type, and the sort order if there are multiple keys.

SQL Yoga - Merge Join Transformation #3

7. Sort Transformation Editor, Source 2:

Now that the “Employee” data is sorted, configure the Sort transformation for Source 2, “Department”, in the same way: double-click the Sort transformation connected to the “Department” dataset and provide the sort key and the “Sort Type”. Keep in mind that the sort type must be the same for both sources, i.e. both ascending or both descending.


SQL Yoga - Merge Join Transformation #4


8. Merge Join Task Component:

Now we add the Merge Join Transformation so that we can join both sources together. Drag the pipeline from the Employee Sort to the Merge Join. In the “Input Output Selection” popup, select “Sort Output” as the Output and “Merge Join Left Input” as the Input. For the Input, there are two options:

  1. Merge Join Left Input
  2. Merge Join Right Input

Using these two options, you specify whether the input is treated as the left or the right side of the join.


SQL Yoga - Merge Join Transformation #5


Now drag the pipeline from the other Sort transformation and connect it to the Merge Join Transformation as the second input. While connecting the second input, it will not ask for the input type: you already chose it for the first pipeline, so the other side (left or right) is selected by default.


SQL Yoga - Merge Join Transformation #6


9. Merge Join Transformation Editor:

To configure the merge join, double-click the “Merge Join Transformation” to open the editor. Provide the Join Type to specify which join operation to perform on the selected datasets.


The available join types are:

  1. Inner Join
  2. Left Outer Join
  3. Full Outer Join

Here we select “Inner Join” as the Join Type, since we want rows present in both datasets. Select “Dept_No” as the join key; it is the common field on which the two datasets merge.


SQL Yoga - Merge Join Transformation #7


10. Result table creation:

We need to create a table in the Test database to store the output result, as per the script provided.

CREATE TABLE [Merge_Join_Output]
(
    [Emp_No] INT,
    [Emp_Name] VARCHAR(100),
    [Designation] VARCHAR(50),
    [Manager] INT,
    [JoinDate] DATE,
    [Salary] INT,
    [Dept_No] INT,
    [Dept_Name] VARCHAR(50),
    [Work_Location] VARCHAR(50)
)
GO

11. OLEDB Destination Editor:
Use the “OLEDB Destination Editor” to redirect the output to the “Merge_Join_Output” table as shown. On the “Mappings” tab, map the output columns accordingly.


SQL Yoga - Merge Join Transformation #8


12. Package Execution:

Execute the package and check for the results in the “Merge_Join_Output” table.

SQL Yoga - Merge Join Transformation #9


13. Result in database:

After successful execution of the package, we can check the result in the “Merge_Join_Output” table.


Query:

-- OLEDB Table 1
SELECT * FROM Employee

-- OLEDB Table 2
SELECT * FROM Department

-- Output data after Merge Join Operation
SELECT * FROM Merge_Join_Output
GO

SQL Result:


SQL Yoga - Merge Join Transformation #10
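
For a cross-check, the same output can be produced with an ordinary T-SQL inner join (the column alias matches the destination table’s Work_Location column):

   SELECT e.Emp_No, e.Emp_Name, e.Designation, e.Manager, e.JoinDate, e.Salary,
          e.Dept_No, d.Dept_Name, d.Location AS Work_Location
   FROM Employee AS e
   INNER JOIN Department AS d ON d.Dept_No = e.Dept_No
   ORDER BY e.Dept_No;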


In this way the Merge Join combines both tables’ data on the common key, making it easier to read related information from a single merged table instead of referring to two different tables and linking the related data.


Reference: Tejas Shah (www.SQLYoga.com)

March 31, 2014

SQL SERVER: SSIS - Look Up Transformation Task

Today, I am going to give a basic example of the Lookup Transformation task in SSIS.

The Lookup transformation performs a lookup by joining data in the input columns with columns in a reference dataset. It can be used to access additional information from the reference dataset based on matching criteria. The reference dataset can be an OLE DB table, an Excel file, a cache file, or a SQL query result.

Use of Look up Transformation:

In my source system (table), I have all the products with their details. Some products belong to a country that doesn’t exist in my reference (master) table, and I was assigned a job to rectify those products. I need to design an ETL that surfaces those records whenever we import products into our target database (table). So here I am going to use the “Lookup no match output” to capture those records, following these steps:

Let’s take an example to easily understand how to use Lookup Transformation in SSIS.

1. Create Source Connection:

Select and drag a “Data Flow Task” from “Control Flow Items” onto the designer surface, then double-click it and create a new OLE DB connection.

2. Create sample tables

Now we will create tables named “LKP_Countries_Source” and “LKP_Countries” in the Test database using the script below.

CREATE TABLE [LKP_Countries_Source]
(
	[CountryCode] [int] NULL,
	[CountryName] [varchar](100) NULL
)
GO

INSERT INTO [LKP_Countries_Source]
(	[CountryCode]
	,[CountryName])
VALUES
	(91, N'India')
	,(92, N'Pakistan')
	,(93, N'Afghanistan')
	,(94, N'Sri Lanka')
	,(95, N'Myanmar')
	,(960, N'Maldives')
	,(961, N'Lebanon')
	,(962, N'Jordan')
	,(963, N'Syrian Arab Republic')
	,(964, N'Iraq')
	,(965, N'Kuwait')
	,(966, N'Saudi Arabia')
	,(967, N'Yemen')
	,(968, N'Oman')
	,(971, N'United Arab Emirates')
	,(972, N'Israel')
	,(1, N'USA')
	,(65, N'Singapore')
GO

CREATE TABLE [LKP_Countries]
(
	[Country] [varchar](100) NULL
	,[Code] [int] NULL
)
GO

INSERT [LKP_Countries]
(
	[Country]
	,[Code]
)
VALUES
	(N'INDIA', 91)
	,(N'SINGAPORE', 65)
	,(N'USA', 1)
	,(N'PAKISTAN', 92)
GO
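
Given this sample data, only India, Pakistan, USA, and Singapore have entries in the reference table; the remaining fourteen rows should land in the no-match output. A T-SQL preview of those rows, as a quick cross-check of what the package should produce:

   SELECT s.CountryCode, s.CountryName
   FROM LKP_Countries_Source AS s
   LEFT JOIN LKP_Countries AS r ON r.Code = s.CountryCode
   WHERE r.Code IS NULL;
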
3. Create Lookup connection:

Now select the proper OLE DB connection on the “Connection Manager” tab and the source table for the lookup task.

SQLYoga - Lookup Transformation Task #2  
4. Columns Selection from Source Table:

Select the columns to use as output columns.
 
SQLYoga - Lookup Transformation Task #3
5. Lookup Transformation Editor:

Here, I have added the “Lookup” transformation task to the designer tab; click Edit to configure the lookup.
SQLYoga - Lookup Transformation Task #4
6. Handle No Match output:

Now we need to configure the various sections in the “Lookup Transformation Editor”.
In the General section, select “Redirect rows to no match output” to handle the unmatched data from the lookup task.
 
SQLYoga - Lookup Transformation Task #5

Here we select the cache mode “Full cache”, an option that improves performance when handling large volumes of data.
Keep the connection type as “OLE DB connection manager”, since we are using an OLE DB source. If you use a cache file as the data source, select “Cache connection manager” instead.
The last option provides various ways to handle rows with no matching entries:

  • Ignore failure – ignores the failure and continues with the next task.
  • Redirect rows to error output – moves the unmatched rows to the red (error) output to handle them separately.
  • Fail component – throws an exception and stops further processing.
  • Redirect rows to no match output – routes the rows to a secondary output, so they can be handled differently from matched rows.

7. Set Connection Manager for Lookup table:

In the Connection section, select the reference table with the proper connection. This list is compared against the source dataset to match the data.

SQLYoga - Lookup Transformation Task #6

8. Column mapping for Lookup table:

In the Columns section, select the available input columns and map them to the available lookup columns; this creates the join between the two datasets. Since we used “Full cache” mode, the Advanced section is disabled; leave the “Error Output” fields as they are.

SQLYoga - Lookup Transformation Task #7

9. Input Output Selection Setup:

Select a new “OLEDB Destination” transformation and drag it onto the designer surface. Drag the green arrow from the Lookup transformation to the OLEDB Destination, choose “Lookup Match Output” as the output, and click OK.

SQLYoga - Lookup Transformation Task #8

10. Create Output table for Match and Not Matched Data:

Now we need to create output tables to store both the matched and the unmatched results.

CREATE TABLE [LKP_Output_Match]
(
    [CountryCode] INT,
    [CountryName] VARCHAR(100),
    [Country_Calling_Code] INT
)
GO

CREATE TABLE [LKP_Output_NO_Match]
(
    [CountryCode] INT NULL,
    [CountryName] VARCHAR(100) NULL
)
GO

11. Data Mapping for Result table:
Now provide the mapping for the output table that stores the matched data.

SQLYoga - Lookup Transformation Task #9
12. Complete Data Flow for Lookup Transformation:

To handle the not-matched data, connect the “Lookup No Match Output” to an OLEDB Destination pointing to the “LKP_Output_NO_Match” table; it will store the unmatched results.

SQLYoga - Lookup Transformation Task #10 
13. Package Execution and Result:

Now let’s execute the package and check the output in the tables we created to store the results: the “LKP_Output_Match” table for matched data and the “LKP_Output_NO_Match” table for unmatched data.


SELECT * FROM LKP_Output_Match

SELECT * FROM LKP_Output_NO_Match
GO
Result:

SQLYoga - Lookup Transformation Task #11

The Lookup transformation can be used in various ways according to the requirements and implemented accordingly. This was a basic walkthrough for understanding the Lookup transformation.

Reference: Tejas Shah (www.SQLYoga.com)