Monday, March 26, 2012

Merge Replication through FTP/Emails

Hi,
I want to use SQL Server replication in a scenario where the Publisher and
multiple subscribers are connected over a very low-bandwidth link.
Is it possible to write some scripts/programs that do the following?
1. Create a file containing the changes made locally during that day on each
of the subscribers. (The size of this file should be as small as possible;
I sketch one way of doing this below.)
2. Compress these files and upload them to the Publisher through FTP or by
email.
3. The Publisher, after receiving the files from all the subscribers, merges
the changes from each of them.
4. The Publisher then produces a fresh file of changes for each subscriber,
containing the collective changes from all the subscribers that are relevant
to that subscriber.
5. These files are then downloaded by each of the subscribers and the changes
are merged into their local copy.
I'd prefer to use this in a scenario where data is partially replicated at
each of the subscribers; however, even full replication is better than not
having anything.
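For step 1, something along these lines is what I have in mind: a rowversion
bookmark that picks up whatever changed since the last export. All object
names below are just illustrative.

-- Assumes each published table has a rowversion column (rv) and a
-- single-row dbo.SyncState table remembering the last rowversion exported.
-- (Seed SyncState with 0x0000000000000000 to export everything first time.)
DECLARE @last binary(8), @current binary(8);
SELECT @last = LastExported FROM dbo.SyncState;
SET @current = @@DBTS;   -- highest rowversion issued in this database so far

-- Rows inserted or updated since the previous export go to a staging table,
-- which can then be bcp'd out, compressed and sent by FTP or e-mail.
SELECT o.OrderID, o.CustomerID, o.Amount
INTO   dbo.Orders_Export
FROM   dbo.Orders AS o
WHERE  o.rv > @last AND o.rv <= @current;

-- Move the bookmark forward only once the file has been produced.
UPDATE dbo.SyncState SET LastExported = @current;

Deletes would need separate handling (say, a small tombstone table filled by
a delete trigger), since a rowversion bookmark only sees inserts and updates.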
Thanks in Advance,
Puneet
Puneet,
you require updating subscribers who have autonomy. In replication
topologies, this limits the choice to merge replication or transactional
replication with queued subscribers. In either case you can't take the
changes and email them - they must be read by the appropriate agent. You
could implement what you want using DTS to export/import CSV files.
Obviously it would be a highly customised solution, but I have done similar
things this way.
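For instance, the import half of that could be little more than a BULK
INSERT into a staging table followed by an update/insert pass. All table,
file and column names below are made up.

-- Staging table matching the CSV layout produced at the subscriber.
CREATE TABLE dbo.Orders_Staging
(
    OrderID    int   NOT NULL,
    CustomerID int   NOT NULL,
    Amount     money NOT NULL
);

-- Load the CSV received from the subscriber.
BULK INSERT dbo.Orders_Staging
FROM 'C:\inbox\subscriber1_changes.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Apply changes: update rows that already exist at the publisher ...
UPDATE o
SET    o.CustomerID = s.CustomerID,
       o.Amount     = s.Amount
FROM   dbo.Orders AS o
JOIN   dbo.Orders_Staging AS s ON s.OrderID = o.OrderID;

-- ... and insert rows the publisher has not seen before.
INSERT dbo.Orders (OrderID, CustomerID, Amount)
SELECT s.OrderID, s.CustomerID, s.Amount
FROM   dbo.Orders_Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Orders AS o WHERE o.OrderID = s.OrderID);

DTS (or a small script around bcp) would simply be orchestrating this kind
of thing for each table.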
HTH,
Paul Ibison
Hi Paul,
I understand this requires an agent that can read the format in which the
changes have been recorded in the file. The point I'm trying to make is that
the agent should be able to do its work against one server at a time. The
problem is that access to both servers at the same time is not possible, or
if it is, it is only over a very slow connection.
Using DTS would be a huge task. I strongly feel that it is possible to
develop a standard solution for requirements like mine, and I hope one
already exists.
Thanks,
Puneet
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:C156E629-4BDE-4EF4-B338-95F1888CA619@.microsoft.com...
> Puneet,
> you require updating subscribers who have autonomy. In replication
topologies, this limits the choice to merge replication or transactional
replication with queued subscribers. In either case you can't take the
changes and email them - they must be read by the appropriate agent. You
could implement what you want using DTS to export/import CSV files.
Obviously it would be a highly customised solution, but I have done similar
things this way.
> HTH,
> Paul Ibison
Puneet,
I don't know of a ready-made solution like that in straightforward
replication. I suppose you could cheat a little and use snapshot replication
(although this ends up doing much the same as the DTS suggestion). You'd need to flag new
records and reset the flag after replication. If data was partitioned
according to server, then each server could be designated as a publisher,
and records appended to each subscriber. This will only work if the
partitions are strict and data cannot be changed in more than one place.
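By flagging and resetting I mean something along these lines; table, column
and trigger names are purely illustrative.

-- Each row carries a Replicated bit; new and changed rows have it set to 0.
ALTER TABLE dbo.Orders ADD Replicated bit NOT NULL DEFAULT 0;
GO

-- Clear the flag whenever a row changes, except when the change is the
-- reset itself (IF UPDATE(Replicated) skips that case).
CREATE TRIGGER dbo.trg_Orders_MarkChanged ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    IF UPDATE(Replicated) RETURN;
    UPDATE o SET Replicated = 0
    FROM dbo.Orders AS o
    JOIN inserted AS i ON i.OrderID = o.OrderID;
END
GO

-- The export (or snapshot) job picks up only unflagged rows ...
SELECT OrderID, CustomerID, Amount FROM dbo.Orders WHERE Replicated = 0;

-- ... and resets the flag once the file has gone out successfully.
UPDATE dbo.Orders SET Replicated = 1 WHERE Replicated = 0;

In practice you would capture the keys you exported and only reset the flag
for those rows, once you know the file reached the other side.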
HTH,
Paul Ibison
