Scenario 1: Data Consolidation

In this scenario, we're going to use DMT to consolidate (i.e. transfer) data located in several systems into one destination database.

Let's say you have 3 different systems containing sales data:

  • An Oracle database (data in Sales1 table)
  • A Sybase database (data in Sales2 table)
  • A flat file already exported from an external system

The sales data needs to be consolidated into one SQL Server database: it must be collected every day and loaded into the GLOB_SALES table.

Note: For our example, we chose an Oracle, a Sybase and a SQL Server database, but this would work exactly the same way with any other database.
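To make the example concrete, here is a minimal sketch of what the GLOB_SALES destination table could look like. The column names are taken from the load mapping in Step 2 below; the data types are assumptions chosen for illustration only.

    -- Minimal sketch of the destination table (data types are assumptions):
    CREATE TABLE dbo.GLOB_SALES (
        SALE_ID   int      NOT NULL,
        SALE_DATE datetime NOT NULL,
        CLIENT    int      NOT NULL,
        PRODUCT   int      NOT NULL,
        QTY       int      NOT NULL
    );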

You can easily implement this scenario with DMT:

[Diagram: Scenario 1 - Data Consolidation]

Step 1: Create the Export Group/Jobs (in the Export Tab)

  • Create an Export Group scheduled to run every day
    • Name: Export Sales
    • Group Type: Schedule Group + Data Group
    • Data File: Sales.csv
    • Scheduling Frequency: Daily
  • In the group, create a Job to export sales data from the Oracle system
    • Name: Export Sales (Oracle)
    • Database Connection: Connection to the Oracle database
    • SQL Statement: Enter the SQL statement to export sales data from the Oracle database

    • e.g.
      SELECT * FROM SALES1 WHERE PROCESSED = 'N'
  • In the group, create a Job to export sales data from the Sybase system
    • Name: Export Sales (Sybase)
    • Database Connection: Connection to the Sybase database
    • SQL Statement: Enter the SQL statement to export sales data from the Sybase database

    • e.g.
      SELECT * FROM SALES2 WHERE PROCESSED = 'N'
  • The 3rd source system doesn’t require an export job, as its data file has already been exported and can be loaded directly with DMT.

Note 1: The Data File field for both jobs is automatically populated because they belong to a Data Group.

Note 2: Scheduling parameters are not required for either job because they belong to a Schedule Group (i.e. they run automatically when the group is run).

Note 3: For our example, we assume that the Sales1 and Sales2 tables have the same columns, but different table structures could easily be accommodated by writing a specific SQL statement for each, as sketched below.
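For instance, if the Sybase table used different column names, its export job's SQL statement could alias them to the common layout expected in Sales.csv. The source column names below are purely hypothetical:

    -- Hypothetical Sybase column names aliased to the common data file layout:
    SELECT TXN_ID   AS SALE_ID,
           TXN_DATE AS SALE_DT,
           CUST_NO  AS CLIENT_ID,
           ITEM_NO  AS PROD_ID,
           UNITS    AS QTY
    FROM   SALES2
    WHERE  PROCESSED = 'N'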

Once the export group and its jobs are created, you can validate them to make sure they are error-free and ready to run (right-click the group, then select "Validate Group").

Step 2: Create the Load Job (in the Load Tab)

  • Create a Load Job to load the exported sales data into the destination system
    • Name: Load Sales
    • Linked to Ext.: Select "Export Sales" group to automatically link to the Sales.csv data file
    • Scheduling Frequency: After Export
    • Dest. Schema/Table: dbo / GLOB_SALES
    • Load Mapping: Enter the mapping for each column from the data file to the destination table.
      For our example, let's map only a few columns, e.g.
      Active | Dest. Column (in dest. database) | Value (from src. data file)       | Comments
      Yes    | SALE_ID                          | $DFC_[SALE_ID]                    |
      Yes    | SALE_DATE                        | Convert(datetime, $DFC_[SALE_DT]) | Use SQL Server Convert function for the date field (Eval must be unchecked)
      Yes    | CLIENT                           | $DFC_[CLIENT_ID]                  |
      Yes    | PRODUCT                          | $DFC_[PROD_ID]                    |
      Yes    | QTY                              | $DFC_[QTY]                        |
    • Note 1: Before you can enter load mappings, you need to look up the column definitions for the data file and/or the destination table.
    • Note 2: The prefix $DFC_ indicates a Data File Column (a conceptual SQL equivalent of this mapping is sketched after this list).
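For readers who think in SQL: DMT performs the load itself, but the mapping above behaves, conceptually, like the INSERT below run once per row of Sales.csv. This is only an illustrative sketch; the values are placeholders, not actual data.

    -- Conceptual equivalent of loading one data file row (placeholder values):
    INSERT INTO dbo.GLOB_SALES (SALE_ID, SALE_DATE, CLIENT, PRODUCT, QTY)
    VALUES (1001, Convert(datetime, '2015-06-01'), 42, 7, 3);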

Step 3: Start the Scheduler (in the Scheduling Console Tab)

Once the Export and Load Groups/Jobs are created, you can let the scheduler run them automatically. Just go to the Scheduling Console tab and click the Start button.

When the scheduler is started:

  • The Export Group (i.e. its 2 jobs) will run every day at the specified time
  • The Load Job will run every day after the Export Group has run

Note: It's always a good idea to run groups or jobs manually at least once before scheduling them.