Introduction to Dynamics 365 CE Data Migration using ADF
Dynamics 365 CE Data Migration using ADF can be necessary for various reasons, such as archiving historical data, integrating with other systems, or performing data analysis in a different environment. However, extracting and transforming data from Dynamics 365 Customer Engagement (CE) can be challenging due to its complex data model and security considerations.
Azure Data Factory (ADF) offers a robust solution with its Metadata-Driven Pipeline feature. This approach enables you to define your data extraction process using metadata, including source entities, data fields, and transformations.
By leveraging this Dynamics 365 CE Data Migration using ADF approach, you can:
- Streamline data extraction: Automate the process of identifying and retrieving data from Dynamics 365 CE.
- Improve data quality: Ensure data consistency and accuracy during the extraction process.
- Enhance security: Implement robust security measures to protect sensitive data during migration.
- Increase efficiency: Accelerate the data extraction process and reduce manual effort.
In this blog, we will move Case records from Dataverse to an Azure Storage Account container as a JSON blob file.
READ – Filtering Dynamics 365 CE in Canvas Apps with Option-Sets Tutorial
Steps for Dynamics 365 CE Data Migration using ADF
Step 1:
Visit your Azure Resource group and create a Data Factory resource. Once deployed, launch the studio.


Step 2:
In the ADF studio, create Linked Services for your Dynamics 365 CE (Dataverse) environment and the target Azure Storage Account blob container.
Dataverse Linked Service:

It is highly recommended to choose Service Principal as the Authentication Type. Once you have entered the Application ID and the secret key, test the connection.
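For reference, the resulting linked service definition (visible via the JSON view in ADF studio) looks roughly like the sketch below; the organization URL, application ID, and secret are placeholders to replace with your own values:

```json
{
    "name": "DataverseLinkedService",
    "properties": {
        "type": "CommonDataServiceForApps",
        "typeProperties": {
            "deploymentType": "Online",
            "serviceUri": "https://<your-org>.crm.dynamics.com",
            "authenticationType": "AADServicePrincipal",
            "servicePrincipalCredentialType": "ServicePrincipalKey",
            "servicePrincipalId": "<application-id>",
            "servicePrincipalCredential": {
                "type": "SecureString",
                "value": "<client-secret>"
            }
        }
    }
}
```

In production, the secret is better stored as an Azure Key Vault reference than inline.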

Azure Storage Account Linked Service:


Step 3:
Once both Linked Services are created, navigate back to your ADF studio, click the Ingest option, and choose the Metadata-driven copy task. From there, we will set up the Control Table.


Step 4:
Have an Azure SQL server and database ready as a prerequisite. Now, select Type as Azure SQL Database, choose your Azure Subscription, Server Name and Database Name, and set Authentication Type to System Assigned Managed Identity.
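For reference, a linked service using the system-assigned managed identity carries no credentials in its definition; a rough sketch (legacy connection-string style, with placeholder server and database names) looks like this:

```json
{
    "name": "ControlTableSqlDb",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Server=tcp:<your-server>.database.windows.net,1433;Database=<your-db>;"
        }
    }
}
```

Because no user name or password is supplied, ADF authenticates with its own managed identity – which is why the grant in Step 5 is required.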

Step 5:
Here comes the most important bit. We need to configure the system-assigned managed identity by running a T-SQL script in the SSMS (SQL Server Management Studio) tool. The script creates the managed identity as a database user and elevates its privileges to db_owner.

CREATE USER [<Managed Identity>] FROM EXTERNAL PROVIDER;
ALTER ROLE db_owner ADD MEMBER [<Managed Identity>];
Step 6:
Test the connection to the Linked Service.

Step 7:
Next, specify your data source as your Dataverse Linked Service.

Step 8:
Select the first table from which we wish to fetch data. In our instance, we select the Case (Incident) table.


Step 9:
On the next step, select the Azure Blob Linked Service as the Destination Data Store, specifying the target file path and name.

Step 10:
Then, we select the File format as JSON and its pattern as “Array of objects”.

Step 11:
On the next step, we ensure that the system has auto-mapped the fields correctly.

Step 12:
We give a custom name to our ADF Copy Task.

Step 13:
Finally, we check the summary of this entire configuration before we click Finish.

Step 14:
On completion of the deployment, an SQL script is generated. We execute it in the SSMS tool as a new query.


Step 15:
After the successful execution of the script, we finish setting up the entire configuration on the Azure portal. Under our pipeline, three task components are created which will drive our integration – Top Level (Control Table), Middle Level & Bottom Level.
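To sanity-check the setup, you can query the control table from SSMS; each row stores the JSON-encoded source and sink settings for one table to copy, and the Top Level pipeline iterates over these rows. The table name below is only the wizard's suggested default – substitute whatever name you chose during the copy task setup:

```sql
-- One row per table to be copied; the Top Level pipeline reads this
-- table and fans each row out to the lower-level pipelines.
SELECT *
FROM dbo.MainControlTable;
```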

READ – The Ultimate Dynamics 365 CRM Guide
Unit Testing
- We execute the pipeline by clicking the Debug button and verify the execution of the three tasks.

- We confirm that the Incident JSON blob is created as expected.

- The resultant JSON array is as follows:
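As an illustration only, an “Array of objects” blob for the Case table takes roughly this shape; the field names are standard Incident columns, and the values shown are placeholders, not actual data:

```json
[
    {
        "incidentid": "00000000-0000-0000-0000-000000000000",
        "title": "Sample case title",
        "ticketnumber": "CAS-00001-A1B2C3",
        "statecode": 0
    },
    {
        "incidentid": "00000000-0000-0000-0000-000000000001",
        "title": "Another sample case",
        "ticketnumber": "CAS-00002-D4E5F6",
        "statecode": 1
    }
]
```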

- We also add the “Subject” table to the list of tables as follows.


- We again get an updated T-SQL script, which we execute in the SSMS tool.


- Now our Control Table has two rows: one for Case and another for Subject.

- Finally, when we execute our pipeline again, we confirm that JSON blobs are now created for both the Case and Subject tables.

Conclusion – Dynamics 365 CE Data Migration using ADF
In this blog, we learnt how to implement Dynamics 365 CE Data Migration using ADF metadata-driven pipelines, and saw how easily the same configuration extends to multiple Dataverse tables.