Dynamics CRM Destination

SQL Server 2005, 2008, 2008R2, 2012, 2014, 2016

Overview

The Dynamics CRM Destination Component is an SSIS Data Flow Component for loading data into a Microsoft Dynamics CRM entity. The configuration parameters are provided in the standard data flow destination component dialog.


Demonstration

Demonstration Video

Setup

Use the parameters below to set up the component.

Parameters

DYNAMICS-CRM

Select an existing Dynamics CRM connection manager.

Related Topics: Dynamics CRM Connection Manager

Action

Select a destination entity action. This parameter has the options listed in the following table.

Action        Description
Create        Create new records in the destination entity.
Update        Update existing records in the destination entity. You must specify the appropriate entity record identifier (the GUID assigned by Dynamics CRM) for the update to work.
Delete        Delete an existing record from the destination entity. You must specify the appropriate entity record identifier (the GUID assigned by Dynamics CRM) for the delete to work.
Upsert (1.6)  Update existing records and insert new records in the destination entity. Selecting this action displays the dynamic parameter UpsertMatch.

Note: After changing the action, you must use the Refresh button in the 'Component Properties' or 'Column Mappings' tab to reload the destination entity metadata.

DestinationEntity

Indicate the destination Dynamics CRM entity where the data is to be loaded.

IgnoreMissingIdentifier

Specify how unrecognized identifiers are to be handled. When this parameter is set to True, the component will ignore an identifier field if it does not match any entity record in the system. The identifier field must be properly formatted in order to benefit from this parameter. The component will process the remaining fields in the input record as if the identifier field was not included. If this parameter is set to False, an identifier field that does not match any entity record in the system will cause the whole record to fail with an error. The default value is False.

Example: If the Primary Contact is being updated in an Account record and the Identifier for the Primary Contact is wrong, the rest of the Account record will still be updated if this parameter is set to True. This is shown in the demo video.

DetectDuplicate (1.5 SR-3)

This parameter is only here for backward compatibility with implementations prior to version 1.6. For implementations before version 1.6, set this to True if you want to be able to determine which of your input records are new and which already exist. For implementations using version 1.6 or later, use the Upsert Action instead. The default value is False.

BatchSize (1.6 SR-1)

Specify the number of rows to be sent as a batch. When set to 1, the component will use the regular single request API. When set to greater than 1, the component will use the new CRM 2011 bulk multi-requests API. CRM 3 and 4 systems will ignore this setting and always use single request API.

MultipleMatch (1.6 SR-2)

Specify how to handle multiple matches when using Upsert action. This parameter has the options listed in the following table.

Option       Description
UpdateFirst  Update only first match (default).
UpdateNone   Update none.
UpdateAll    Update all matches.
Fail         Fail processing.

IgnoreUnchanged (1.6 SR-2)

Specify whether to avoid sending fields whose input data matches the existing data. This parameter has the options listed in the following table.

Option  Description
True    The existing data is loaded and compared to the input data. If the input data is the same as the existing data, it is not included in the update.
False   All input data is included in the update (default).

IgnoreNullValue (1.6 SR-4)

Specifies how NULL values are handled. This parameter has the options listed in the following table.

Option  Description
True    The NULL values are ignored and not sent for processing.
False   The NULL values are sent for processing.

UpsertMatch (1.6 SR-4)

Specify the method used to determine which records are new and which already exist. This parameter has the options listed in the following table.

Option                   Description
PrimaryKey               Existing records are determined by the primary key.
AlternateKeys            Existing records are determined using alternate keys. Selecting this option displays the dynamic parameter UpsertAlternateKeys.
DuplicateDetectionRules  Existing records are determined using the system-defined duplicate detection rules. Duplicate Detection must be enabled in Dynamics CRM and rules must be published in order for this to work. Use the Duplicate Detection Rules in Dynamics CRM to indicate the field(s) to be used to match records. The entity record identifier (the GUID assigned by Dynamics CRM) may not be used as the identifier when using Upsert.

UpsertAlternateKeys (1.6 SR-4)

Select the attributes you want to use as alternate keys for the Upsert action.

ErrorRowDisposition (Dynamics CRM Destination Input)

Specify how to handle rows with errors. To see how to set this up, see the demo video.

Id (Dynamics CRM Destination Error Output) (1.5 SR-1)

Contains the unique identifier of the added, updated or deleted record. To see how to set this up, see the demo video.

How to modify a destination entity dynamically at runtime

The destination entity is available in the data flow properties list. Follow these steps to set up an expression:

  • Right-click on the data flow canvas and select the Properties menu.
  • Scroll down and find the property named [Dynamics CRM Destination].[DestinationEntity]. This is the property containing the destination entity.
  • Scroll down and find the Expressions property. Set up an expression to modify the destination table dynamically.
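For example, a minimal sketch: assuming a hypothetical package variable User::CrmEntityName that holds the logical name of the target entity, the expression assigned to [Dynamics CRM Destination].[DestinationEntity] could be as simple as:

    @[User::CrmEntityName]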

How to get the identifier of a newly created entity record

The identifier of the created entity record is provided in the component's error output, in the Id column. Connect the error output to a destination where you want to store the identifiers. See the demo video for details about setting up a destination in which to write the new IDs.

How to set up an activity party list attribute

The activity party list can be set up either with the party's email address or with the party's unique identifier. Multiple party elements are separated with a semicolon (;):

  • To use email addresses to identify the parties: email1;email2;...
  • To use the GUIDs assigned to the records by Dynamics CRM: <entity name>:{<guid1>};<entity name>:{<guid2>};...
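For example (hypothetical addresses and GUIDs):

    john.doe@example.com;jane.smith@example.com
    contact:{11111111-2222-3333-4444-555555555555};systemuser:{66666666-7777-8888-9999-AAAAAAAAAAAA}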

See the demo video for details about setting up an activity party list.

How to find which records contain errors

The component's Error Output is used for providing both error information and the record identifier information for new records. Check the ErrorCode column:

ErrorCode Description
-1 Not an error record.
>0 Error record. Check the ErrorDescription column for more details.

You can set up the standard Conditional Split component to filter non-error records.
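As a rough sketch, a Conditional Split condition such as the following (standard SSIS expression syntax) sends the non-error rows to their own output:

    ErrorCode == -1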

See the demo video for details about setting up a destination in which to write errors.

How to upload an attachment

Attachments are stored in the annotation entity. Use the standard Import Column transformation to load the file content into memory and then direct it into the documentbody attribute. See the demo video for details.

Samples

  • CRM 2011 Data Import / Export using Cozy Roc SSIS library by Mani Ramachandran.
  • Importing Email into MS CRM 2011 via SSIS by Joe Peyton.
  • The Great Migration: 40 Million Records Migrated in 10.5 Hours by John Klemetsrud.

Dynamics CRM Source

SQL Server 2005, 2008, 2008R2, 2012, 2014, 2016

Overview

The Dynamics CRM Source Component is an SSIS Data Flow Component for consuming data from the Dynamics CRM Service. The configuration parameters are provided in the standard data flow source component dialog.

Demonstration

Demonstration Video

Setup

Use the parameters below to set up the component.

Parameters

DYNAMICS-CRM

Select an existing Dynamics CRM connection manager.

Related Topics: Dynamics CRM Connection Manager

BatchSize

Specify the number of rows to be retrieved as a batch. The maximum is 500.

InputType

Specify the input type. This parameter has the options listed in the following table.

Value Description
Entity The input type is a Dynamics CRM entity. Selecting this value displays the dynamic parameter Entity.
FetchXML The input type is a FetchXML query. Selecting this value displays the dynamic parameter FetchXML.

Note: After changing the input type, you must use the Refresh command in the 'Column Mappings' tab to reload the source entity metadata.

Entity

Select the Dynamics CRM data entity.

FetchXML

Specify the FetchXML statement. The query can be designed in the standard Advanced Find dialog.

Note: Advanced Find is not available in CRM 3.0.
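A minimal FetchXML sketch, retrieving the name and account number of active accounts (the attribute names assume the standard account entity):

    <fetch mapping="logical">
      <entity name="account">
        <attribute name="name" />
        <attribute name="accountnumber" />
        <filter>
          <!-- statecode 0 = active -->
          <condition attribute="statecode" operator="eq" value="0" />
        </filter>
      </entity>
    </fetch>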

How to modify the source dynamically at runtime

The source entity or query is available in the data flow properties list. Follow these steps to set up an expression:

  • Right-click on the data flow canvas and select the Properties menu.
  • Scroll down until you find the property named [Dynamics CRM Source].[FetchXML] or [Dynamics CRM Source].[Entity]. This is the property containing the query.
  • Scroll down and find the Expressions property. Set up an expression to dynamically populate the property containing the query.

Samples

  • CRM 2011 Data Import / Export using Cozy Roc SSIS library by Mani Ramachandran.
  • Dynamics CRM 2011 using SSIS to monitor pending emails by Nuno Costa.

Flow Synchronization

SQL Server 2005, 2008, 2008R2, 2012, 2014, 2016

Overview

The Flow Synchronization Component is an SSIS Data Flow Component which makes two or more data flows run at the same speed. This is accomplished by stopping one data flow if the others run too slowly relative to it. It is a convenient companion to the Table Difference component, resolving problems with memory consumption. It can, however, be used on its own in any situation in which you want data flows to run at the same speed. The component uses semaphores to handle synchronization, resulting in very efficient CPU utilization.


Setup

Use the parameters below to set up the Flow Synchronization component for processing.

Parameters

Name

Specify the synchronization name.

Name from variable

The synchronization name is taken from the specified package variable.

Max delta

The maximum number of rows of difference between data flows.

Table Difference

SQL Server 2005, 2008, 2008R2, 2012, 2014, 2016

Overview

The Table Difference Component is an SSIS Data Flow Component designed to find differences between two tables and, in particular, to simplify the management of slowly changing dimensions. The main advantage over the Microsoft standard SCD component is the speed and ease of use.

Benchmark

This component was tested with 5.5 million records and 25 comparison columns on a dual core AMD machine with 4GB of RAM and a standard SATA disk. The comparison process completed in 7 minutes. The complexity of the algorithm is linear, so you can expect 10 million records to be processed in approximately 15 minutes on the same hardware.

Introduction

One question arising very often in Data Warehouse programming is "What are the differences between two tables?" You have several million customers and need to decide what changes should be made. The standard SSIS toolkit provides the SCD component for handling this process, but it is of no practical use when you have to handle more than a couple thousand records. The Table Difference Component is not limited by the number of records to be processed.

This component has two inputs: one for "old data" and one for "new data". The old and new rows are compared and the result is sent to different outputs:

  • Unchanged rows (same in both old and new)
  • Deleted rows (appear in old but not in new)
  • New rows (appear in new but not in old)
  • Updated rows (appear in both old and new, but there are changes)

The diagram below illustrates the component functionality:

Occasionally, you may have a requirement to perform different updates for different columns. The component includes an option to tag each input column with an identifier. Then, for each unique identifier, a separate update output can be created and maintained. Using this capability, you can tag the historical attributes with "10" and tag the other columns with "20" and decide what kind of operation to perform for the different updates, thereby providing maximum flexibility.

The component inputs MUST be sorted and have a collection of fields (keys) that let the component decide when two rows from the different inputs represent the same row. This is easily accomplished in SQL with a simple “order by” and a convenient index. Usually, slowly changing dimensions maintain an index by the business key, so the sorting requirement is easily accomplished and does not represent a problem.

The Input

The component has two inputs:

  • Old Data Flow - attached to source returning existing rows.
  • New Data Flow - attached to source returning new rows.

Each input has a buffer where all incoming rows are directed and maintained. A separate thread starts as soon as data from both buffers is available and checks for any differences between them. The rows are sent to the appropriate output and then removed from the buffers. The component does not need to maintain all the data in memory. The data is removed as soon as it has been compared with the corresponding row. For some obscure reason, SSIS allocates more processing time to one of the input data flows, compared to the other. This results in additional memory consumption by the component. For this reason, if you need to compare more than half a million records, we recommend you use the COZYROC Flow Synchronization component in your data flow.

The Output

The component has three standard outputs for New, Deleted and Unchanged rows. For each unique user-defined UpdateID identifier, a separate Updated row output is created. The outputs related to the UpdateIDs are named "UpdateID", followed by the identifier (e.g. UpdateID10).

You can easily change the name of the Update output to something more descriptive like "Updated historical attribute" and we encourage you to do so. The collection of output fields is computed by the component based upon the intersection of the two inputs. If one column appears in only one input, it is not considered. If it appears in both, then it will be processed.

Input Setup

The component analyzes the old and new columns. If the column name and type match, it is added to the list of available columns to manage. The inputs must be sorted. Their sort columns must be marked as key fields and the sort order must be specified. All other non-key columns are assigned a standard Update ID 10 and are managed as comparable columns unless you specify otherwise.


Check

Select the input column processing type. This parameter has the options listed in the following table.

Value      Description
Key Field  Use this column to determine which rows from the old and new input tables represent the same data. The inputs must be sorted by this column.
Compare    The column from the new and old rows will be compared to detect any difference.
Use New    The column will be copied from the New Data Flow input to the output without comparing the contents.
Use Old    The column will be copied from the Old Data Flow input to the output without comparing the contents.

KeyOrder

Specify the "Key Field" column order. This is the order in the query's "Order By" clause. The component does not know the correct order, so it is up to you to provide this information.

UpdateID

Specify the column update identifier. For each unique identifier, a separate update output is created. The update identifier with the lowest value has the highest priority. For example, assume we assign the AccountNumber column the update identifier of 10 and we assign the AddressLine1 column the update identifier of 20. If AccountNumber is different, the row will be directed to the update output 10, even if AddressLine1 also contains a difference.

Outputs Setup

Specify a name and description for each desired output. If you are not interested in Unchanged output, then you can deselect it and avoid warnings for unused columns.

Output columns

Specify output column options.


Select output

Select the output for which you want to edit column selection and/or names.

Sel

Select or deselect the output column. Deselecting unused columns removes warnings and increases the performance of the component.

Output Column Name

Specify a custom output column name.

Miscellaneous

Specify string comparison options.


Culture Identifier

Specify the culture used to perform the comparison. If not specified, the component will use the culture of the running task.

Ignore case

Indicate that the case should be ignored for the comparison.

Ignore kana type (1.4 SR-4)

Indicate that the Kana type should be ignored for the comparison. The Kana type refers to Japanese hiragana and katakana characters, which represent phonetic sounds in the Japanese language.

Ignore character width (1.4 SR-4)

Indicate that the character width should be ignored for the comparison.

Ignore nonspacing characters (1.4 SR-4)

Indicate that non-spacing combining characters, such as diacritics, should be ignored for the comparison.

Ignore symbols (1.4 SR-4)

Indicate that symbols, such as white-space characters, punctuation, currency symbols, the percent sign, mathematical symbols, the ampersand, and so on, should be ignored for the comparison.

Sort punctuation as symbols (1.4 SR-4)

Indicate that the string sort algorithm should be used, where the hyphen and the apostrophe, as well as other non-alphanumeric symbols, come before alphanumeric characters.

Warnings

This tab will contain all unused input columns. By checking the warnings you can see if the component is working with all the desired columns.

Samples

  • Hints for upgrading from SSIS 2005 to 2008 by Shannon.

Lookup

SQL Server 2005, 2008, 2008R2, 2012, 2014, 2016

Overview

The Lookup Plus Component is an SSIS Data Flow Component providing enhanced functionality as compared to the standard SSIS Lookup component. To simplify your data flows, the component allows multiple lookup definitions in the same dialog and lets you define values for when the input cannot be matched or is undefined. In addition, the Lookup Plus component supports a variety of connection types as a source, making it quite versatile.

Demonstration

Demonstration Video

Setup

Use the parameters below to set up the Lookup Plus component for processing.

Parameters

Name

Specify the component name.

Description

Specify the component description.

Lookup definition

The lookup definitions are set up in the lookup definition grid.


Input Column

Select input column for lookup key.

Reference

Set up lookup reference.

Related Topics: Reference Setup

Result Column

Specify the lookup result output column name.

On NULL

Specify the output value when the input is undefined.

No Match

Specify the output value when the input doesn't produce a match.

Refresh

Use the refresh command to update the lookup definitions with the latest metadata.

Reference Setup

The reference setup dialog is used to create the lookup source. Three source types are available: table or view, SQL query and package variable.


ConnectionType

Select the connection type. This parameter has the options listed in the following table.

Value    Description
OLEDB    Use the OLEDB Connection Manager for the Connection option.
ADO      Use the ADO Connection Manager for the Connection option.
ADO.NET  Use the ADO.NET Connection Manager for the Connection option.
CACHE+   Use the Distributed Cache Connection Manager for the Connection option. (1.5 SR-1)

Connection

Select an existing connection manager.

New

Click New to create a connection manager.

Related Topics: OLE DB Connection Manager, ADO Connection Manager, ADO.NET Connection Manager

Table / View radio button

Select an existing table or view as the lookup source.

SQL Query radio button

Specify an SQL statement as the lookup source. An SSIS expression can be used to build the statement (1.5 SR-1).

From Variable radio button

Select an existing package variable as the lookup source. Supported object types are as follows:

Value        Description
String       The variable contains an SQL statement. The first selected column will be used as the key and the second selected column will be used as the value.
IDictionary  Any object implementing this interface. Related Topics: Dictionary Destination script, Range Dictionary Destination script.
Recordset    An ADO Recordset object. The first selected column will be used as the key and the second selected column will be used as the value.
DataSet      An ADO.NET DataSet object. The first table is used. The first selected column will be used as the key and the second selected column will be used as the value.

Key Column

Select the lookup source key column.

Value Column

Select the lookup source value column.

Match multiple columns with Lookup Plus

You can match multiple columns using the Lookup Plus component. See the steps below for one possible solution:

  1. Create a new data flow. This data flow will build a lookup reference dictionary and store it in a package variable.
  2. Insert a data flow source component.
  3. Insert the Checksum Transform script after the component from step 2 to create a unique CRC32 value for the match input columns.
  4. Insert the Dictionary Destination script after the component from step 3. For the key, select the input column containing the CRC32 value. For the value, specify the appropriate reference column. Store the resulting dictionary object in a package variable.
  5. Create another data flow, which will do the lookup and matching.
  6. Insert the data flow source with the data that is to be matched.
  7. Insert the Checksum Transform script after the component from step 6 to create a unique CRC32 value for the required input columns.
  8. Insert the Lookup Plus component. As the key, specify the input column containing the CRC32 value. In the reference dialog, select the "From Variable" option. Set it to the variable you created in step 4.

SQL 2012 compatibility and Upgrade

Existing packages containing the Lookup Plus component that were designed with SSIS+ 1.5 or an older version are incompatible with SQL 2012. For a successful upgrade, follow these steps:

  • Install SSIS+ 1.6 or newer on a machine where you have existing packages designed with SQL 2005 or 2008.
  • Open the existing package containing the Lookup Plus component and re-save it.

The saved package will now contain SQL 2012 compatible data and the package is ready for upgrade.

Samples

  • For a sample showing how to use a range dictionary object from a variable, see this package.

Salesforce Destination

SQL Server 2005, 2008, 2008R2, 2012, 2014, 2016

Overview

The Salesforce Destination Component is an SSIS Data Flow Component for loading data into a Salesforce object. The configuration parameters are provided in the standard data flow destination component dialog.

Demonstration

Demonstration Video

Setup

Use the parameters below to set up the component.

Parameters

SFORCE

Select an existing Salesforce connection manager.

Related Topics: Salesforce Connection Manager

Action (1.4 SR-2)

Select a destination object action. This parameter has the options listed in the following table.

Action             Description
Create             Create a new record in the destination object.
Update             Update an existing record in the destination object. You must specify the appropriate object record identifier for the update to work.
Delete             Delete an existing record from the destination object. You must specify the appropriate object record identifier for the delete to work.
Upsert (1.4 SR-3)  Update and insert a record in the destination object. Selecting this value displays the dynamic parameter ExternalId.

Note: After changing the action, you must use the Refresh command in the 'Column Mappings' tab to reload the destination object metadata.

BatchSize

Specify the number of rows to be sent as a batch. The maximum for regular mode is 200. The maximum for bulk-load mode is 10,000.

Concurrency (1.6 SR-3)

Select bulk-load concurrency mode. This parameter has the options listed in the following table.

Value     Description
Parallel  Process batches in parallel mode (default).
Serial    Process batches in serial mode. Processing in parallel can cause database contention. When this is severe, the job may fail. If you're experiencing this issue, submit the job with serial concurrency mode. This guarantees that batches are processed one at a time. Note that using this option may significantly increase the processing time for a job.

DestinationObject

Specify the destination Salesforce object where the data is to be loaded.

ExternalId (1.4 SR-3)

Specify the field for the external identifiers used in the upsert action.

Mode (1.6)

Select the destination object processing mode. This parameter has the options listed in the following table.

Value       Description
Regular     Process the data in regular mode.
BulkData    Process the data in bulk-load data mode.
BulkBinary  Process the data in bulk-load binary mode.

JobIdVariable (1.6 SR-1)

Select a variable to store the bulk-load job identifier. Optional.

IgnoreNullValue (1.6 SR-4)

Specifies how NULL values are handled. This parameter has the options listed in the following table.

Option  Description
True    The NULL values are ignored and not sent for processing.
False   The NULL values are sent for processing.

ErrorRowDisposition (Salesforce Destination Input)

Specify how to handle rows with errors.

Id (Salesforce Destination Error Output) (1.4 SR-4)

Contains the unique identifier of the added, updated or deleted record.

RelationshipName (Salesforce Destination Input -> External Column) (1.5 SR-2)

Specify the relationship name for updating the foreign key lookup with an external identifier.

ReferenceType (Salesforce Destination Input -> External Column) (1.5 SR-2)

Specify the referenced object for updating the foreign key lookup with an external identifier.

ExternalId (Salesforce Destination Input -> External Column) (1.5 SR-2)

Specify the external identifier field for updating the foreign key lookup with an external identifier.

How to modify a destination object dynamically at runtime

The destination object is available in the data flow properties list. Follow these steps to set up an expression:

  • Right-click on the data flow canvas and select the Properties menu.
  • Scroll down and find the property named like [Salesforce Destination].[DestinationObject]. This is the property containing the destination object.
  • Scroll down and find the Expressions property. Set up an expression to modify the destination object dynamically.

How to find which records contain an error

The component's Error Output is used for providing both error information and the processed record identifier. Check the ErrorCode column:

ErrorCode Description
-1 Not an error record.
>0 Error record. Check the ErrorDescription column for more details.

You can set up a standard Conditional Split component to filter non-error records.

How to upload files to Salesforce

Uploaded files are located in the Attachment object. Use the standard "Import Column" transformation to import file content into the Body field. Also see the demo video above.

Samples

  • SQL Server Integration Services Connector for SalesForce by Brady Upton.

Salesforce Source

SQL Server 2005, 2008, 2008R2, 2012, 2014, 2016

Overview

The Salesforce Source Component is an SSIS Data Flow Component for consuming data from the Salesforce Service. The configuration parameters are provided in the standard data flow source component dialog.

Demonstration

Demonstration Video

Setup

Use the parameters below to set up the component.

Parameters

SFORCE

Select an existing Salesforce connection manager.

Related Topics: Salesforce Connection Manager

BatchSize

Specify the number of rows to be retrieved as a batch. The maximum is 2000.

InputType

Specify the input type. This parameter has the options listed in the following table.

Value      Description
Object     The input type is a Salesforce object. Selecting this value displays the dynamic parameters Object and ObjectFilter (1.5 SR-2).
Statement  The input type is a SOQL statement. Selecting this value displays the dynamic parameter Statement.

Note: After changing the input type, you must use the Refresh command in the 'Column Mappings' tab to reload the source object metadata.

Mode (1.6 SR-3)

Select the read mode. This parameter has the options listed in the following table.

Value    Description
Regular  Read data using regular mode (default).
Bulk     Read data using bulk mode.

Object

Select the Salesforce data object.

ObjectFilter (1.5 SR-2)

Specify the object filter statement using SOQL statement expressions.

Statement

Specify a SOQL statement.
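A minimal SOQL sketch against the standard Account object (the field list and date filter are illustrative):

    SELECT Id, Name, BillingCity
    FROM Account
    WHERE CreatedDate >= 2023-01-01T00:00:00Z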

IncludeDeleted (1.4 SR-1)

Use this parameter to include all records in the result, including the deleted records. This parameter has the options listed in the following table.

Value Description
True The deleted records are included in the result.
False The deleted records are not included in the result.

How to modify the source dynamically at runtime

The Object, ObjectFilter or Statement parameters are available for setup in the data flow properties list. Follow these steps to set up an expression:

  • Right-click on the data flow canvas and select Properties from the menu.
  • Scroll down and find a property named like [Salesforce Source].[Statement]. This is the property containing the statement.
  • Scroll down and find the Expressions property. Set up an expression to modify the statement dynamically.
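As a rough sketch, assuming a hypothetical string variable User::LastRunDate that already holds a SOQL-formatted datetime literal, the expression on [Salesforce Source].[Statement] might look like:

    "SELECT Id, Name FROM Account WHERE LastModifiedDate >= " + @[User::LastRunDate]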

How to include the missing columns when using an SOQL statement

The Salesforce service returns only fields that contain data. If a field contains no data, it is not returned, so the component cannot set up the corresponding column and column type. There are three options for including the missing columns:

Option 1

This option will work fine for SOQL statements without relationships. For statements with relationships, see the other options below.

  1. Set the InputType to Object. Select the object from which you want to retrieve data.
  2. Go to the 'Column Mappings' tab and press 'Refresh' to reload the object's columns.
  3. Set the InputType to ‘Statement’ and specify your statement. Do not refresh the columns in the 'Column Mappings' tab.
  4. If there are columns you don't need, go to the 'Column Mappings' tab and remove those mappings.

Option 2

  1. Set up a temporary Salesforce record and the required relationship data.
  2. Open the 'Salesforce Source' component setup dialog and specify the SOQL statement retrieving the temporary record data.
  3. Refresh the 'Column Mappings' tab. All needed column metadata should now be included.
  4. Go back to the Salesforce setup window and remove the temporary record and all other unneeded data.

Option 3

This option requires manual definition of the missing columns.

  1. Go to the 'Input and Output Properties' tab.
  2. Go to 'Salesforce Source Output' -> 'External Columns'.
  3. Press 'Add Column'.
  4. Specify the column name and type. Make sure the name is specified properly with the appropriate structured definition.

How to download files from Salesforce

Uploaded files are located in the Attachment object. Use the standard "Export Column" transformation to export the Body field content into a file.

Samples

  • COZYROC to connect Salesforce from SSIS by Vijaya Bhaskar Reddy K.

ODBC Destination

SQL Server 2005, 2008, 2008R2, 2012, 2014, 2016

Overview

The ODBC Destination Component is an SSIS Data Flow Component for loading data into an ODBC-compatible database table. The component can use either the standard ODBC insert interface or the optimized bulk-load engine. The component has been successfully tested with the following databases:

Database                   Details
SQL Server                 Support for regular and bulk-load insert.
Oracle                     Support for regular and bulk-load insert. For a faster alternative, see Oracle Destination.
DB2                        Support for regular insert only. There is an issue with DB2 bulk support. For a faster alternative, see DB2 Destination.
MySQL                      Support for regular and bulk-load insert. The bulk-load is 30x faster compared to regular ODBC. To use it, download MySQL Connector/ODBC.
Vertica                    Support for regular and bulk-load insert. The bulk-load is 400x faster compared to regular insert. Use BatchMode=ArrayParameter.
Informix                   Support for regular and bulk-load insert. The bulk-load is 3x faster compared to the OLE DB destination. For a faster alternative, see Informix Destination.
PostgreSQL (EnterpriseDB)  Support for regular and bulk-load insert. To use it, download the PostgreSQL ODBC driver.
Microsoft Access           Support for regular and bulk-load insert. The bulk-load is 5x or more faster compared to the OLE DB destination.
Sybase                     Support for regular insert only. You have to use ODBC version 15.05.00.1000. Decimal and numeric field types do not work. We have sent the following incident cases to Sybase: 11553913, 11553762, 11553757.


The configuration options are provided in the standard data flow destination component dialog.


Setup

Use the parameters below to set up the component.

Parameters

IDbConnection

Select an existing ODBC connection manager.

Related Topics: ODBC Connection Manager

DestinationTable

Specify the destination database table where the data is loaded.

BatchSize

Specify the number of rows to be sent as a batch. If the size is set to 1, the component uses regular insert.

Timeout

Specify the number of seconds allowed for the bulk copy operation to complete before it is aborted.

BatchMode (1.6 SR-2)

Specify batch mode. This parameter has the options listed in the following table.

Option          Description
BulkOperations  Perform bulk-load based on the SQLBulkOperations function.
ArrayParameter  Perform bulk-load based on array parameters.

ErrorRowDisposition (ODBC Destination Input)

Specify how to handle rows with errors.

How to modify the destination table dynamically at runtime

The destination table is available in the data flow properties list. Follow these steps to set up an expression:

  • Right-click on the data flow canvas and select the Properties menu.
  • Scroll down and find the property named like [ODBC Destination].[DestinationTable]. This is the property containing the destination table.
  • Scroll down and find the Expressions property. Set up an expression to modify the table dynamically.

Samples

  • To test the performance of ODBC Destination against the standard OLEDB Destination, see this package. You must create a TEST_TABLE table with the following columns:
    • COL1 INTEGER
    • COL2 TIMESTAMP
    • COL3 FLOAT
    • COL4 NVARCHAR2 (250)
    • COL5 NVARCHAR2 (250)
    • COL6 DECIMAL (16)

    The NumberOfRows variable controls the number of rows inserted.
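    A sketch of the corresponding DDL, using Oracle-style types since NVARCHAR2 is listed (adjust the types for your database):

        -- Test table matching the column list above.
        CREATE TABLE TEST_TABLE (
            COL1 INTEGER,
            COL2 TIMESTAMP,
            COL3 FLOAT,
            COL4 NVARCHAR2(250),
            COL5 NVARCHAR2(250),
            COL6 DECIMAL(16)
        );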

Address Parse

SQL Server 2005, 2008, 2008R2, 2012, 2014, 2016

Overview

The Address Parse Component is an SSIS Data Flow Component for parsing, correcting and standardizing addresses in the United States. With an optional purchase of the latest USPS reference database containing more than 120 million addresses, you can verify your addresses and avoid costly errors. The following output columns are generated from a successfully parsed address:

  • Standardized - The postal address in standard format.
  • Type - Type of address
  • Leading - Information before address.
  • Street - Street name
  • Number - Street / P.O. Box number
  • Suite - Suite
  • City - City
  • State - US State
  • Zip - 5 digit Zip code
  • Barcode (1.4 SR-3) - 12 digit delivery point barcode. A purchase of the USPS Address Reference database is required.
  • Predirection (1.5 SR-4) - Pre-direction.
  • Suffix (1.5 SR-4) - Suffix.
  • Postdirection (1.5 SR-4) - Post-direction.


Setup

Use the options below to set up the Address Parse component for processing. The component dialog contains the following elements:

  • General tab - specify parsed address quality, verification in the reference database and how to handle rows that cannot be parsed.
  • Mapping tab – select the column containing the address to process.
  • Error Output tab - specify error handling options.

Parameters

Include Low Quality

Low quality addresses are permitted.

High Quality

Only high quality addresses are permitted.

Check address in reference database

Indicate that the address should be checked in the reference database. An address which cannot be uniquely identified is returned as non-parseable. This option is enabled when you purchase the optional USPS reference database.

Address and city separated by newline (1.4 SR-4)

Indicate that the address and city elements are explicitly separated by a newline character.

Specify how to handle non-parseable addresses

Select an option for handling rows with non-parseable addresses.

When you select Redirect rows to no parse output, the rows are redirected to a “no parse output” and are not handled as errors. The Error option on the Error Output page of the Address Parse Editor dialog box is not available.

When you select any other option in the Specify how to handle non-parseable addresses list box, the rows that are non-parseable are handled as errors. The Error option on the Error Output page is available.

Diagnostic Information (1.4 SR-3)

When this component is used with the USPS reference database, it can provide additional diagnostic information when the address is not found. The information is provided in the standard error output's error code column. See the codes and descriptions listed in the following table.

Value Description
100 No information available.
101 Address is invalid.
102 Address is incomplete.
103 Street not found.
104 Number is invalid.
105 Unable to load streets list.
106 Multiple address matches.

Related scripts

  • Geocode Transformation script - Load latitude and longitude coordinates of input address, using Google's Geocoding service.

Script Component

SQL Server 2005, 2008, 2008R2, 2012, 2014, 2016

Overview

Note: Please install at least SQL Server 2005 SP2 before using this component.

The Script Component Plus is an SSIS Data Flow component and it is an extension of the standard Microsoft Script Component. It provides benefits similar to Script Task Plus, but for the Microsoft Data Flow Task. You can reuse your data flow script code and build a user interface to simplify your script configuration. These features reduce the need to implement custom SSIS script components. See the public script repository for useful components and samples.

The component dialog contains the following elements:

  • Setup tab - Select script initialization parameters. The Setup tab configuration is made with attributes in the script.
  • Advanced tab - Setup of the standard Microsoft Script Component configuration parameters. You can modify the script code in this tab.
  • Export Script - export the SSIS script to an external file (.scx).
  • Link Script - link to the SSIS script from an external file (.scx). The link can be removed by pressing the button again.


Setup

The initialization and user interface for a script are defined by implementing properties on the ScriptMain class. The user interface is contained in the property grid view and is available by selecting the Setup tab in the COZYROC Script Component dialog. We urge you to familiarize yourself with the PropertyGrid class. This is the class used for presentation; the attributes specified below relate directly to it.

See the samples below for how to implement script component parameters. These samples are specific to the script component. The attributes demonstrated in the Script Task Plus samples can also be used.

How to upgrade a script from 2005 to 2008

Most of the scripts published in the public script repository are for SQL 2005. However, they can easily be upgraded to SQL 2008 by following these 10 steps:

Note: In order for this process to work, you must have both SQL 2005 and 2008 installed side-by-side on the same machine.

  • 1. Create an SSIS 2005 package and insert the Data Flow Task. Go to data flow design and insert the COZYROC Script Component Plus.
  • 2. Load a script in the component and break the link to the script. Save the package.
  • 3. Go to the SSIS 2008 solution. Right-click on the solution and select 'Add Existing Package'.
  • 4. Select the package you have created in step 1. SSIS 2008 will do an automatic upgrade of the package and show some warnings.
  • 5. Open the imported package.
  • 6. Double-click on the COZYROC Script Component Plus and enter the script IDE.
  • 7. Right-click on the script project and select Properties.
  • 8. Go to the References tab and add a reference to CozyRoc.SSISPlus.2008 from the "Microsoft SQL Server\100\SDK\Assemblies" folder.
  • 9. Save and close the project settings. Make sure the script compiles.
  • 10. Close the script IDE and export the script.

Component connection parameter

The Connection attribute described in Script Task Plus has an additional constructor with the isConnectionID parameter. When the parameter is set to true, the component connection property will contain the connection ID, instead of the connection name. Set this parameter to true in your script component to be able to use the package connection manager.
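A minimal sketch of such a property, following the pattern of the samples below (the exact constructor signature is assumed from the description above, with the boolean passed as the isConnectionID parameter):

    ' Backing field m_connection is assumed to be declared elsewhere in ScriptMain.
    <Connection(True)> _
    Property Connection() As String
        Get
            Connection = m_connection
        End Get
        Set(ByVal value As String)
            m_connection = value
        End Set
    End Property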

Extended script component base class

The extended script component base class contains the following additional features:

  • HostComponent - property to access the hosting component object.
  • DesignConnections - property containing package connections list. To be used only in design mode.

Extended buffer base class

The extended buffer base class contains the following additional features:

  • Buffer - property to access the internal buffer object. This object is usually needed when working with dynamically processed columns.
  • StaticInputColumns - access to the list of specified static input columns.
  • StaticOutputColumns - access to the list of specified static output columns.

Component input parameter

  • Define the property associated with the Input attribute.
    <Input()> _
    Property Input() As String
        Get
            Input = m_input
        End Get
        Set(ByVal value As String)
            m_input = value
        End Set
    End Property

Component input column parameter

  • Define a property attributed with the InputColumn attribute, passing as the parameter the name of the property that contains the input name. Defining InputColumn without a parameter will retrieve the columns of the first component input. You can ask the user for single or multiple column selection.
    <InputColumn("Input")> _
    Property InputColumn() As String
        Get
            InputColumn = m_inputColumn
        End Get
        Set(ByVal value As String)
            m_inputColumn = value
        End Set
    End Property

Component input external column parameter

  • Define a property attributed with the InputExternalColumn attribute, passing as the parameter the name of the property that contains the input name. Defining InputExternalColumn without a parameter will retrieve the external columns of the first component input. You can ask the user for single or multiple column selection.
    <InputExternalColumn("Input")> _
    Property InputExternalColumn() As String
        Get
            InputExternalColumn = m_inputExternalColumn
        End Get
        Set(ByVal value As String)
            m_inputExternalColumn = value
        End Set
    End Property

Component input virtual column parameter

  • Define a property attributed with the InputVirtualColumn attribute, passing as the parameter the name of the property that contains the input name. Defining InputVirtualColumn without a parameter will retrieve the virtual columns of the first component input. You can ask the user for single or multiple column selection.
    <InputVirtualColumn("Input")> _
    Property InputVirtualColumn() As String
        Get
            InputVirtualColumn = m_inputVirtualColumn
        End Get
        Set(ByVal value As String)
            m_inputVirtualColumn = value
        End Set
    End Property

Component output parameter

  • Define the property associated with the Output attribute.
    <Output()> _
    Property Output() As String
        Get
            Output = m_output
        End Get
        Set(ByVal value As String)
            m_output = value
        End Set
    End Property

Component output column parameter

  • Define a property attributed with the OutputColumn attribute, passing as the parameter the name of the property that contains the output name. Defining OutputColumn without a parameter will retrieve the columns of the first component output. You can ask the user for single or multiple column selection.
    <OutputColumn("Output")> _
    Property OutputColumn() As String
        Get
            OutputColumn = m_outputColumn
        End Get
        Set(ByVal value As String)
            m_outputColumn = value
        End Set
    End Property

Component output external column parameter

  • Define a property attributed with the OutputExternalColumn attribute, passing as the parameter the name of the property that contains the output name. Defining OutputExternalColumn without a parameter will retrieve the external columns of the first component output. You can ask the user for single or multiple column selection.
    <OutputExternalColumn("Output")> _
    Property OutputExternalColumn() As String
        Get
            OutputExternalColumn = m_outputExternalColumn
        End Get
        Set(ByVal value As String)
            m_outputExternalColumn = value
        End Set
    End Property

Component Column Mappings tab (1.4 SR-3)

  • To enable the script Mappings tab, associate your script main class with the ColumnMappings attribute.
    <ColumnMappings()> _
    Public Class ScriptMain
        Inherits UserComponent
    ...