
Custom CRM Sync Guide

This guide teaches you to build a production-grade bidirectional sync between any CRM (Salesforce, HubSpot, Pipedrive, custom) and ECOSIRE. It covers delta sync, conflict resolution, deduplication, and error recovery.


Introduction

Syncing contact and deal data across two systems seems simple but has many edge cases: simultaneous edits, missing fields, deleted records, and API failures. This guide gives you a battle-tested architecture.

Core principles:

  1. Idempotency — re-running a sync never creates duplicates
  2. Delta sync — only sync records changed since the last run
  3. Conflict resolution — define a clear "source of truth" rule per field
  4. Error recovery — failed records go to a retry queue, not the floor

Prerequisites

  • ECOSIRE API key with read/write permissions
  • Your CRM's API credentials
  • A database or key-value store for sync state (PostgreSQL, Redis, or SQLite)
  • Node.js 18+ or Python 3.9+

Step 1 — Design the Data Model Mapping

Create a field mapping document before writing code:

| ECOSIRE Field | Your CRM Field | Sync Direction | Conflict Rule |
|---|---|---|---|
| name | fullName | Bidirectional | Last updated wins |
| email | emailAddress | Bidirectional | ECOSIRE wins (immutable) |
| phone | phoneNumber | CRM → ECOSIRE | CRM wins |
| type | accountType | ECOSIRE → CRM | ECOSIRE wins |
| tags | labels | Bidirectional | Union (merge both) |

Define an externalId mapping table to link records across systems.
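The mapping table above can also live in code, so both sync directions read from one source of truth. This is a sketch; the `SyncDirection` and `ConflictRule` names mirror the table but are illustrative, not part of any API:

```typescript
type SyncDirection = 'bidirectional' | 'to-crm' | 'to-ecosire';
type ConflictRule = 'last-updated-wins' | 'ecosire-wins' | 'crm-wins' | 'union';

interface FieldMapping {
  ecosireField: string;
  crmField: string;
  direction: SyncDirection;
  conflictRule: ConflictRule;
}

// One entry per row of the mapping table above
const FIELD_MAPPINGS: FieldMapping[] = [
  { ecosireField: 'name',  crmField: 'fullName',     direction: 'bidirectional', conflictRule: 'last-updated-wins' },
  { ecosireField: 'email', crmField: 'emailAddress', direction: 'bidirectional', conflictRule: 'ecosire-wins' },
  { ecosireField: 'phone', crmField: 'phoneNumber',  direction: 'to-ecosire',    conflictRule: 'crm-wins' },
  { ecosireField: 'type',  crmField: 'accountType',  direction: 'to-crm',        conflictRule: 'ecosire-wins' },
  { ecosireField: 'tags',  crmField: 'labels',       direction: 'bidirectional', conflictRule: 'union' },
];

// Project an ECOSIRE record onto CRM field names, honoring sync direction
function mapToCrm(contact: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const m of FIELD_MAPPINGS) {
    if (m.direction !== 'to-ecosire' && contact[m.ecosireField] !== undefined) {
      out[m.crmField] = contact[m.ecosireField];
    }
  }
  return out;
}
```

Driving the upsert helpers from a config like this means adding a field is a one-line change instead of edits in four places.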


Step 2 — Create a Sync State Store

Track the last sync time and external ID mappings:

-- PostgreSQL sync state tables
CREATE TABLE crm_sync_map (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  ecosire_id TEXT NOT NULL,
  external_id TEXT NOT NULL,
  entity_type TEXT NOT NULL, -- 'contact', 'deal', etc.
  last_synced TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  sync_checksum TEXT, -- hash of last synced payload
  UNIQUE (external_id, entity_type)
);

CREATE TABLE crm_sync_errors (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  entity_type TEXT,
  external_id TEXT,
  error TEXT,
  payload JSONB,
  retry_count INT DEFAULT 0,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
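The tables above track per-record state, but the `lastSyncAt` timestamp that Steps 3 and 4 take as input also needs a home. One option, sketched here with an illustrative table name, is a single-row-per-direction cursor table:

```sql
-- Hypothetical cursor table: one row per sync direction
CREATE TABLE crm_sync_cursor (
  direction TEXT PRIMARY KEY, -- 'outbound' or 'inbound'
  last_sync_at TIMESTAMPTZ NOT NULL DEFAULT '1970-01-01T00:00:00Z'
);

-- Advance the cursor only after a run completes successfully
UPDATE crm_sync_cursor SET last_sync_at = NOW() WHERE direction = 'outbound';
```

Read the cursor before each run and advance it only after the run finishes cleanly; a crash mid-run then causes safe re-processing (the sync is idempotent) rather than silently skipped records.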

Step 3 — Implement Delta Sync (Outbound: ECOSIRE → CRM)

import { Pool } from 'pg';
import fetch from 'node-fetch';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function syncEcosireToCrm(lastSyncAt: Date) {
  let page = 1;
  const limit = 100;

  while (true) {
    // Fetch contacts updated since last sync
    const res = await fetch(
      `https://api.ecosire.com/api/contacts?page=${page}&limit=${limit}&updatedAfter=${lastSyncAt.toISOString()}&sortBy=updatedAt&sortOrder=asc`,
      { headers: { Authorization: `Bearer ${process.env.ECOSIRE_API_KEY}` } }
    );
    const { data: contacts, meta } = await res.json();

    for (const contact of contacts) {
      await upsertToCrm(contact);
    }

    if (page >= meta.totalPages) break;
    page++;
  }
}

async function upsertToCrm(contact: any) {
  const { rows } = await pool.query(
    'SELECT external_id FROM crm_sync_map WHERE ecosire_id = $1 AND entity_type = $2',
    [contact.id, 'contact']
  );

  // phone is CRM → ECOSIRE only per the mapping table, so it is not pushed outbound
  const mapped = { fullName: contact.name, emailAddress: contact.email };

  if (rows.length > 0) {
    // Update existing CRM record
    await crmApiUpdate(rows[0].external_id, mapped);
  } else {
    // Create new CRM record and store mapping
    const externalId = await crmApiCreate(mapped);
    await pool.query(
      'INSERT INTO crm_sync_map (ecosire_id, external_id, entity_type) VALUES ($1, $2, $3)',
      [contact.id, externalId, 'contact']
    );
  }
}
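The `sync_checksum` column from Step 2 is what makes re-runs cheap: hash the mapped payload, and if it matches the stored checksum, skip the write entirely. A minimal sketch using Node's built-in `crypto`; keys are sorted so the hash does not depend on property order:

```typescript
import { createHash } from 'node:crypto';

// Stable checksum: a sorted-key replacer array makes the JSON canonical,
// so the same payload always hashes to the same value
function payloadChecksum(payload: Record<string, unknown>): string {
  const sortedKeys = Object.keys(payload).sort();
  const canonical = JSON.stringify(payload, sortedKeys);
  return createHash('sha256').update(canonical).digest('hex');
}

// Illustrative use inside upsertToCrm, before calling the CRM API:
//   const checksum = payloadChecksum(mapped);
//   if (checksum === rows[0]?.sync_checksum) return; // unchanged — skip the write
//   ...after a successful write, store the new checksum in crm_sync_map
```

This is the idempotency principle from the introduction in concrete form: a re-run over already-synced records becomes a series of checksum comparisons, not API calls.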

Step 4 — Implement Delta Sync (Inbound: CRM → ECOSIRE)

async function syncCrmToEcosire(lastSyncAt: Date) {
  const crmContacts = await crmApiFetchUpdated(lastSyncAt);

  for (const crmContact of crmContacts) {
    try {
      await upsertToEcosire(crmContact);
    } catch (err) {
      // Store in retry queue instead of dropping the record
      await pool.query(
        'INSERT INTO crm_sync_errors (entity_type, external_id, error, payload) VALUES ($1, $2, $3, $4)',
        ['contact', crmContact.id, (err as Error).message, JSON.stringify(crmContact)]
      );
    }
  }
}

async function upsertToEcosire(crmContact: any) {
  const { rows } = await pool.query(
    'SELECT ecosire_id FROM crm_sync_map WHERE external_id = $1 AND entity_type = $2',
    [crmContact.id, 'contact']
  );

  const payload = {
    name: crmContact.fullName,
    email: crmContact.emailAddress,
    phone: crmContact.phoneNumber, // phone syncs CRM → ECOSIRE per the mapping table
    type: 'person',
  };

  if (rows.length > 0) {
    await fetch(`https://api.ecosire.com/api/contacts/${rows[0].ecosire_id}`, {
      method: 'PATCH',
      headers: { Authorization: `Bearer ${process.env.ECOSIRE_API_KEY}`, 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    });
  } else {
    const res = await fetch('https://api.ecosire.com/api/contacts', {
      method: 'POST',
      headers: { Authorization: `Bearer ${process.env.ECOSIRE_API_KEY}`, 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    });
    const created = await res.json();
    await pool.query(
      'INSERT INTO crm_sync_map (ecosire_id, external_id, entity_type) VALUES ($1, $2, $3)',
      [created.id, crmContact.id, 'contact']
    );
  }
}

Step 5 — Conflict Resolution

When both systems update the same record between sync runs, apply your resolution strategy:

function resolveConflict(ecosireRecord: any, crmRecord: any): any {
  // Last-write-wins by comparing updatedAt timestamps; ties go to the CRM
  const ecosireUpdated = new Date(ecosireRecord.updatedAt);
  const crmUpdated = new Date(crmRecord.updatedAt);

  if (ecosireUpdated > crmUpdated) {
    return { winner: 'ecosire', data: ecosireRecord };
  }
  return { winner: 'crm', data: crmRecord };
}
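Record-level last-write-wins ignores the per-field rules from Step 1: `email` is ECOSIRE-owned regardless of timestamps, and `tags` should be a union rather than a winner-takes-all. A sketch of field-level resolution; the field names mirror the mapping table, and the function shape is illustrative:

```typescript
interface ResolvedContact {
  name: string;
  email: string;
  phone?: string;
  tags: string[];
}

function resolveFields(
  ecosire: { name: string; email: string; phone?: string; tags?: string[]; updatedAt: string },
  crm: { fullName: string; emailAddress: string; phoneNumber?: string; labels?: string[]; updatedAt: string },
): ResolvedContact {
  const ecosireNewer = new Date(ecosire.updatedAt) >= new Date(crm.updatedAt);
  return {
    name: ecosireNewer ? ecosire.name : crm.fullName,       // last updated wins
    email: ecosire.email,                                    // ECOSIRE wins (immutable)
    phone: crm.phoneNumber ?? ecosire.phone,                 // CRM wins
    tags: [...new Set([...(ecosire.tags ?? []), ...(crm.labels ?? [])])], // union, deduplicated
  };
}
```

The resolved record is then written to both sides, so the systems converge instead of ping-ponging a stale value back and forth.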

Step 6 — Retry Failed Records

Run the retry processor on a schedule (every 30 minutes):

async function retryErrors() {
  const { rows } = await pool.query(
    `SELECT * FROM crm_sync_errors WHERE retry_count < 5 ORDER BY created_at ASC LIMIT 50`
  );

  for (const error of rows) {
    try {
      // node-postgres returns JSONB columns as parsed objects, so no JSON.parse needed
      await upsertToEcosire(error.payload);
      await pool.query('DELETE FROM crm_sync_errors WHERE id = $1', [error.id]);
    } catch (err) {
      await pool.query(
        'UPDATE crm_sync_errors SET retry_count = retry_count + 1 WHERE id = $1',
        [error.id]
      );
    }
  }
}
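A fixed 30-minute cadence retries every failed record at the same rate, which hammers a persistently broken record as hard as a transient one. A common refinement, sketched here as an assumption (the schema above would need a last-attempt timestamp column to support it), is exponential backoff keyed on `retry_count`:

```typescript
// Exponential backoff: 30 min, 1 h, 2 h, 4 h, 8 h for retry_count 0..4,
// capped so a runaway count can't defer a retry indefinitely
const BASE_DELAY_MS = 30 * 60 * 1000;
const MAX_DELAY_MS = 24 * 60 * 60 * 1000;

function retryDelayMs(retryCount: number): number {
  return Math.min(BASE_DELAY_MS * 2 ** retryCount, MAX_DELAY_MS);
}

// A record is due when enough time has passed since its last attempt
function isDueForRetry(retryCount: number, lastAttemptAt: Date, now: Date = new Date()): boolean {
  return now.getTime() - lastAttemptAt.getTime() >= retryDelayMs(retryCount);
}
```

The retry processor would then filter its batch through `isDueForRetry` before calling `upsertToEcosire`, leaving not-yet-due rows in the queue untouched.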

Troubleshooting

| Issue | Solution |
|---|---|
| Growing error queue | Check for schema mismatches; run retryErrors() manually with logging |
| Duplicates appearing | Verify the crm_sync_map lookup runs before every create |
| Sync too slow | Raise the page limit and run the inbound and outbound passes in parallel |
| Timestamp drift | Store all dates as UTC; normalize before comparing |

Next Steps