Posted on Mar 11, 2021 Tools & Tips
By taking the time to understand the data – its structure, patterns, and purpose – you’ll more easily identify risk. You’ll ask better questions that warrant better answers. Most importantly, you’ll lay the foundation for all future elements of the data migration and the implementation at large.
If you’ve read the first part of the series, you learned about scoping a data migration. The next step is to venture down the rabbit hole and analyze your data. Here are some techniques and methodologies you should apply to conduct great analysis.
Let’s start off with a shared definition. By data analysis, I am referring to the process of examining, modeling, categorizing, and understanding data with the goal of informing our future migration decisions. The outputs of this activity are the knowledge and documentation that are shared across the implementation.
One of the first things I do during the analysis phase is to take a tour of the source application – the system that is being migrated away from. The goal of this exercise is to bring life to the data and to see it in use. It also helps bring a sense of purpose to the data migration effort. It’s tempting to view tables, rows, and columns as, well, just tables and rows and columns, rather than what they are – the lifeblood of an organization.
Shadow a business user through a day in the life. What tables do they interact with? Can you see how information is related? What vocabulary gets tossed around, and what does it mean? At this point in the data migration process you should understand the key business processes. If not, refer back to the first part of this series: Scoping. If possible, try to shadow at least one user for each business process and witness how they interact with the system. Record these sessions and review them as needed.
After shadowing, you may also request access to the source system or a test environment and look around on your own. Always keep a pen and paper handy.
Just look at the columns, tables, rows, and data itself. Reflect and ask questions. What does this table hold? Where does this foreign key point? Are the records here unique? Is the data normalized? Hmm, this looks like fake data, are they even using this table? Is this a system generated value?
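Many of these questions can be asked of the database directly. Below is a minimal sketch of that kind of profiling with Python's built-in sqlite3 module; the tables, columns, and sample rows are invented for illustration and are not from any real source system.

```python
import sqlite3

# A tiny in-memory sample so the sketch is self-contained.
# Table and column names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE contacts (id INTEGER PRIMARY KEY, account_id INTEGER, email TEXT)"
)
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])
conn.executemany(
    "INSERT INTO contacts VALUES (?, ?, ?)",
    [(1, 1, "a@acme.com"), (2, 1, "a@acme.com"), (3, 9, "x@none.com")],
)

# Are the records here unique? Count values that appear more than once.
dupes = conn.execute(
    "SELECT email, COUNT(*) FROM contacts GROUP BY email HAVING COUNT(*) > 1"
).fetchall()

# Where does this foreign key point? Find contacts whose account_id
# has no matching account (an orphaned reference).
orphans = conn.execute(
    "SELECT c.id FROM contacts c LEFT JOIN accounts a ON c.account_id = a.id "
    "WHERE a.id IS NULL"
).fetchall()

print(dupes)    # duplicated email values and their counts
print(orphans)  # contact ids pointing at missing accounts
```

Queries like these turn vague hunches ("are they even using this table?") into concrete findings you can bring to your working sessions.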
If it can be done securely, view segments of the data in an Excel file. Sort and filter it in different ways. Check to see if there are any duplicates. Build pivot tables. What stands out to you? Can you gain a rough idea of record volume?
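The same spreadsheet-style checks can be scripted when the export is too large for Excel. Here is a minimal sketch using only the Python standard library; the CSV content and column names are made up for the example.

```python
import csv
import io
from collections import Counter

# Hypothetical export; in practice you'd open the real CSV file.
raw = """region,status,amount
West,Open,100
West,Closed,250
East,Open,100
West,Open,100
"""
rows = list(csv.DictReader(io.StringIO(raw)))

# Gain a rough idea of record volume.
print(len(rows))  # 4

# Check for duplicates: fully identical rows.
counts = Counter(tuple(r.values()) for r in rows)
duplicates = [row for row, n in counts.items() if n > 1]
print(duplicates)

# A crude pivot table: record counts by region and status.
pivot = Counter((r["region"], r["status"]) for r in rows)
print(pivot[("West", "Open")])  # 2
```

Sorting, filtering, and pivoting this way scales past Excel's row limits and leaves a repeatable script you can rerun as the source data changes.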
Think, but don’t overthink. The goal of this technique is to trigger questions and inspire curiosity. Pick out things that are interesting, concerning, or things you know you’d like clarification on at some point. You’ll be able to relate the data back to the shadow sessions with your users and your understanding of the business processes.
Since the output of this exercise isn’t necessarily an important artifact, it can be tempting not to take it seriously, so I recommend scheduling blocks of working sessions to really dive deep into the data, take notes, and reflect on what was learned.
Use diagramming software like LucidChart to draw out the data model of the source system. Try building it from your own understanding of the data and refine it as you grow that understanding. This is as much an exercise for you as it is an important project artifact. It helps create understanding of the table relationships, constraints, and dependencies. Remember to diagram the data model for the target system as well.
Below is an example ERD using LucidChart:
The boxes represent the entities, or tables, and the lines indicate relationships such as one-to-one, one-to-many, many-to-many, etc.
The final output of this technique is an accurate and vetted ERD that will help inform the migration decisions in the following stages.
Whether Salesforce is your target, source, or both, you can use the Salesforce schema builder to view object relationships. This is a great way to quickly get an understanding of an implementation’s data structure.
To use Schema Builder, search for ‘schema’ in the Setup menu.
The data dictionary can be one of the most valuable artifacts of a data migration. The data dictionary is a living, collaborative document that serves as a comprehensive repository of information about your data.
To build a data dictionary, start by cataloging tables and fields. Indicate their data type, what purpose they serve, and who they’re important to. I like to include identifiers for each attribute so they are easily referenced and traced back to requirements and later decisions.
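The initial catalog doesn't have to be typed by hand; you can generate a skeleton from the database's own metadata and then fill in purpose and stakeholders collaboratively. Below is a sketch using sqlite3; the schema is invented, and the identifier scheme (e.g. ACC-01) is just one possible convention.

```python
import sqlite3

# A throwaway schema so the sketch runs standalone; the table and
# column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, name TEXT, owner TEXT)")

# Walk the catalog and emit one dictionary row per column, with an
# identifier you can later trace back to requirements and decisions.
dictionary = []
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    for cid, name, dtype, *_ in conn.execute(f"PRAGMA table_info({table})"):
        dictionary.append({
            "id": f"{table[:3].upper()}-{cid + 1:02d}",
            "table": table,
            "field": name,
            "type": dtype,
            "purpose": "",      # filled in collaboratively
            "stakeholder": "",  # who this field is important to
        })

print(dictionary[0]["id"])  # ACC-01
```

The generated rows can be pasted into whatever collaborative document the team uses, so the tedious cataloging is automated and the human effort goes into the purpose and ownership columns.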
The data dictionary will also come in handy during the data mapping and transformations that occur in later stages. At EightCloud, we like to use Quip to create the data dictionary. Quip also enables us to share and collaborate, keeping everyone current on the latest changes.
This is not an exhaustive list of data analysis techniques for data migration, but it’s a great start. The key in this phase is to gain understanding, collect information, and establish a shared definition across the organization around its data. The analysis is the groundwork for the work ahead: Extraction and Transformation.
Stay tuned and happy migrations!
Continue to Part 3: Extract, Transform, Load, Repeat. >>
Author: Grant Ongstad, Senior Salesforce Consultant