Forward Deployed Engineer - Data Migration & Data Consolidation Platforms
Toronto, ON
About the Role
As a Forward Deployed Engineer (FDE) for Data Migration & Data Consolidation Platforms, you will serve as the hands-on technical lead driving enterprise-scale data transformation engagements. You will own the end-to-end execution of complex data migrations, multi-system consolidations, and platform modernization initiatives for Global Enterprises and Industry Leaders navigating critical technology inflection points. Operating at the convergence of cloud architecture, data engineering, and direct client partnership, you will translate the capabilities embedded in our migration acceleration platform into production-grade, client-specific solutions. This role demands deep proficiency across legacy enterprise systems (such as SAP ECC and on-premise Oracle), modern cloud data platforms, and AI-driven transformation tooling. You will function as both solution architect and hands-on implementer - owning migration strategy design, building automated data pipelines, and deploying ontology-driven integration frameworks that unify fragmented application landscapes into cohesive, consolidated business data models.
Key Responsibilities
Migration Execution & Cloud Architecture: Lead end-to-end delivery of enterprise data migrations from systems of record (SAP, Oracle, Epic) to target cloud data platforms, including the design of cloud landing zones, data governance frameworks, and system rationalization strategies. Establish migration compliance controls, automated rollback procedures, and operational readiness gates while owning full technical accountability for 12–18+ month migration roadmaps.
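To make the readiness-gate idea concrete, here is a minimal sketch of the kind of automated go/no-go check a cutover runbook might call; the check names, thresholds, and inputs are hypothetical placeholders, not our platform's API:

    # Go/no-go readiness gate: every check must pass before cutover proceeds.
    def readiness_gate(source_rows: int, target_rows: int,
                       error_rate: float, open_defects: int) -> tuple[bool, list[str]]:
        failures = []
        if source_rows != target_rows:
            failures.append(f"row-count parity: source={source_rows}, target={target_rows}")
        if error_rate > 0.001:  # tolerate at most 0.1% rejected records (illustrative)
            failures.append(f"error rate {error_rate:.4%} exceeds 0.1% threshold")
        if open_defects > 0:
            failures.append(f"{open_defects} unresolved migration defects")
        return (not failures, failures)

    go, reasons = readiness_gate(source_rows=1_000_000, target_rows=999_874,
                                 error_rate=0.0004, open_defects=2)
    print("GO" if go else "NO-GO", reasons)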
Data Pipeline Engineering & Transformation: Build production-grade data connectors to SAP (RFC, IDoc, BAPI, OData), Oracle (AQ, GoldenGate, APIs), and SQL/non-relational sources. Develop ETL/ELT pipelines with LLM-enabled transformation logic, multi-layer validation and reconciliation frameworks, and optimized throughput for datasets scaling from tens of millions to billions of records with built-in CDC and incremental loading.
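As an illustration of the incremental-loading pattern above, a minimal watermark-driven extraction sketch; the orders table, its columns, and the in-memory SQLite source are stand-ins for a real SAP or Oracle connector:

    # Pull only rows changed since the last successful run, reconcile the count,
    # and advance the watermark only after reconciliation passes.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, updated_at TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(1, "OPEN", "2024-01-02"), (2, "CLOSED", "2024-03-15")])

    def incremental_load(last_watermark: str) -> str:
        rows = conn.execute(
            "SELECT id, status, updated_at FROM orders "
            "WHERE updated_at > ? ORDER BY updated_at", (last_watermark,)).fetchall()
        loaded = len(rows)       # in practice: upsert each row into the target platform
        expected = conn.execute("SELECT COUNT(*) FROM orders WHERE updated_at > ?",
                                (last_watermark,)).fetchone()[0]
        assert loaded == expected, "reconciliation failed; keep the old watermark"
        return rows[-1][2] if rows else last_watermark   # new high-water mark

    print(incremental_load("2024-01-31"))   # -> 2024-03-15

A log-based CDC connector (e.g., GoldenGate) replaces the timestamp watermark with a log position, but the reconcile-before-advance shape stays the same.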
Ontology Layer Development & Schema Automation: Construct semantic ontology layers translating raw ERP structures into business-consumable objects (Customer, Order, Invoice, Product, Vendor, Asset). Deploy automated schema mapping agents for source-to-target analysis and transformation logic generation. Build unified master data models with row/column-level security, cross-system lineage tracking, and AI-ready semantic structures.
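A minimal sketch of the source-to-target mapping such an ontology layer encodes, here from SAP's customer master table KNA1; the KNA1 field names are standard SAP, while the Customer shape and transform logic are illustrative assumptions an automated mapping agent would generate at scale:

    from dataclasses import dataclass

    @dataclass
    class Customer:              # target semantic object in the ontology layer
        customer_id: str
        name: str
        country: str
        city: str

    KNA1_TO_CUSTOMER = {         # raw SAP column -> business attribute
        "KUNNR": "customer_id",
        "NAME1": "name",
        "LAND1": "country",
        "ORT01": "city",
    }

    def map_kna1(row: dict) -> Customer:
        attrs = {tgt: row[src].strip() for src, tgt in KNA1_TO_CUSTOMER.items()}
        attrs["customer_id"] = attrs["customer_id"].lstrip("0")  # drop SAP zero-padding
        return Customer(**attrs)

    print(map_kna1({"KUNNR": "0000104732", "NAME1": "Acme Corp ",
                    "LAND1": "CA", "ORT01": "Toronto"}))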
Application & Workflow Delivery: Build operational dashboards, migration control centers, and agent-driven workflows for automated validation, exception handling, and anomaly detection using low-code platform tools. Generate TypeScript/Python SDKs for custom integrations and deliver real-time monitoring and self-service interfaces for migration progress, data quality KPIs, and compliance tracking.
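For the anomaly-detection piece, a minimal statistical sketch; the 3-sigma rule and the per-batch null-rate KPI are illustrative defaults, not platform behavior:

    # Flag a batch whose data-quality KPI deviates sharply from recent history.
    from statistics import mean, stdev

    def is_anomalous(history: list[float], today: float, sigmas: float = 3.0) -> bool:
        if len(history) < 2:
            return False         # not enough history to judge
        mu, sd = mean(history), stdev(history)
        return sd > 0 and abs(today - mu) > sigmas * sd

    null_rates = [0.010, 0.012, 0.009, 0.011, 0.010, 0.013]   # per-batch null rate
    print(is_anomalous(null_rates, today=0.048))   # True -> open an exception ticket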
Multi-System Consolidation & Master Data Management: Lead consolidation of 5–15+ fragmented ERP instances into standardized master data models. Resolve complex entity resolution challenges including customer matching, product harmonization, and chart of accounts unification. Establish golden record frameworks, data quality scorecards, survivorship rules, and data stewardship workflows for post-migration governance.
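A toy sketch of the matching and survivorship logic involved, assuming simple string similarity for candidate matching and a most-recent-non-empty survivorship rule; the 0.85 threshold, fields, and records are hypothetical:

    from difflib import SequenceMatcher

    def similar(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

    def merge_golden(records: list[dict]) -> dict:
        # Survivorship: the most recently updated non-empty value wins per field.
        golden = {}
        for rec in sorted(records, key=lambda r: r["updated_at"], reverse=True):
            for field, value in rec.items():
                if field not in golden and value not in (None, ""):
                    golden[field] = value
        return golden

    erp_a = {"name": "Acme Corp", "phone": "", "updated_at": "2024-06-01"}
    erp_b = {"name": "ACME Corp.", "phone": "+1-416-555-0100", "updated_at": "2023-11-12"}
    if similar(erp_a["name"], erp_b["name"]) > 0.85:   # candidate match
        print(merge_golden([erp_a, erp_b]))

Production entity resolution adds blocking, probabilistic scoring, and steward review queues; the sketch shows only the core pairing-and-survivorship shape.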
Client Engagement, Discovery & Modernization Advisory: Serve as primary technical advisor to C-suite and enterprise architecture stakeholders across all engagement phases. Deploy discovery agents to analyze legacy data estates, conduct assessment workshops, facilitate solution design sessions, and deliver executive briefings, go/no-go readiness assessments, and prioritized modernization roadmaps.
Knowledge Transfer, Enablement & IP Development: Build reusable migration accelerators, playbooks, and reference architectures that scale across engagements. Lead knowledge transfer to upskill client teams for post-migration ownership and collaborate with internal product and sales engineering teams to feed field insights back into platform development and delivery methodology.
Leadership & Executive Engagement: Operate autonomously in ambiguous, high-stakes client environments, driving outcomes with minimal oversight. Translate deeply technical concepts into clear, business-level narratives for C-suite audiences through executive briefings and stakeholder communications. Navigate organizational complexity, competing stakeholder priorities, and enterprise change management dynamics to maintain momentum across multi-workstream engagements. Mentor junior engineers, cultivate technical capability within delivery teams, and foster a culture of knowledge sharing and continuous improvement.
Required Qualifications
7–10+ years of progressive experience in enterprise data engineering, data migration, or large-scale system integration roles within complex, multi-platform environments
3–5+ years directly leading end-to-end data migration or multi-system consolidation programs for Global Enterprises and Industry Leaders, with full ownership of technical delivery and client outcomes
Demonstrated client-facing experience serving as a trusted technical advisor to C-level executives, enterprise architecture teams, and cross-functional business stakeholders
Proven industry depth in at least two of the following verticals: Healthcare, Financial Services, Manufacturing, Retail, Energy & Utilities, or Public Sector
Hands-on migration complexity: successfully delivered programs involving 3+ heterogeneous source systems, 100M+ records, complex master data harmonization, and multi-phase cutover execution
Advanced proficiency in Python and SQL with working experience in PySpark and TypeScript/JavaScript
Hands-on expertise with modern ETL/ELT and data integration platforms (Informatica, Talend, Matillion, Fivetran, AWS Glue, Azure Data Factory)
Proven ability to build scalable, version-controlled data pipelines with error handling, incremental loading, and Change Data Capture (CDC)
Strong working knowledge of at least one major cloud provider (AWS, Azure, or GCP), including core infrastructure, managed data services, and security configurations
Experience with enterprise data warehouse and lakehouse platforms (Snowflake, Databricks, BigQuery, Redshift, Synapse Analytics, Delta Lake)
Familiarity with knowledge graph construction, semantic modeling, ontology frameworks (RDF, OWL), or platforms such as Neo4j, AI Foundry, or Stardog
Practical experience integrating LLMs or AI-driven tooling into data transformation, schema inference, or mapping workflows (OpenAI, Anthropic, AWS Bedrock)
Experience with low-code/no-code application platforms for rapid solution delivery (AI Foundry, Mendix, OutSystems, PowerApps)
Preferred Qualifications
Certifications: AI Foundry (Data Engineer, Ontologist, or Application Developer), SAP Certified Technology Associate/Professional, cloud architecture or data engineering credentials (AWS Solutions Architect, Azure Data Engineer, GCP Professional Data Engineer), or data governance/MDM certifications (DAMA CDMP)
Advanced Technical Skills: Deep, production-level knowledge of real-time event streaming platforms (Kafka, Kinesis, Event Hubs, Pub/Sub); demonstrated expertise with enterprise MDM platforms (Informatica MDM, SAP MDG, Profisee, Reltio); hands-on proficiency in API development, microservices architecture, and service mesh patterns; strong command of CI/CD pipelines and infrastructure-as-code tooling (Jenkins, GitLab CI, Azure DevOps, Terraform, ArgoCD); comprehensive understanding of data security, privacy, and regulatory compliance frameworks (GDPR, HIPAA, SOC 2, CCPA, FedRAMP)
Domain Knowledge: Working understanding of financial close processes, supply chain operations, revenue cycle management, or procurement workflows; experience with industry-specific data standards (EDI, HL7, FHIR, SWIFT, XBRL); familiarity with process mining tools (Celonis, UiPath Process Mining, Signavio) and data observability, cataloging, and lineage platforms (Monte Carlo, Collibra, Alation, Apache Atlas)