Showing posts with label publisher. Show all posts

Friday, March 30, 2012

Merge SQL 7 to 2000 problem

I am having a problem running a merge replication between
a SQL 7 publisher/distributor and a SQL 2000 subscriber
(push). After the subscription is initialized, the agent
fails with the error "The process could not query row
metadata at the Subscriber." The error details
say "Could not find stored procedure ''." A clip from
the log is below. This same merge replication works fine
from SQL 7 to SQL 7, but fails to SQL 2000. I have tried
2 different SQL 2k boxes with the same error.
The replication job logs in using SQL Server
authentication. Account is system administrator and dbo
on the subscriber. The distributor runs under sa. Both servers
are at the latest SP levels. A KB search has turned up no help.
Thanks for any ideas!
~~~~~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~~~~~~
Percent Complete: 55
Processing article 'RequestStatusHistory'
Repl Agent Status: 3
chrs4.ITWorkRequest: {call sp_MSenumcolumns (?,?)}
chrs4.ITWorkRequest: {call sp_MSenumchanges(?,?,?,?,?)}
CHHIST.ITWorkRequest: {call sp_MSgetrowmetadata
(?,?,?,?,?,?,?)}{call sp_MSgetrowmetadata(?,?,?,?,?,?,?)}
{call sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}
Percent Complete: 0
The process could not query row metadata at the
Subscriber.
Repl Agent Status: 6
Percent Complete: 0
Category:COMMAND
Source: Failed Command
Number:
Message: {call sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call sp_M
Repl Agent Status: 3
Percent Complete: 0
Category:SQLSERVER
Source: CHHIST
Number: 2812
Message: Could not find stored procedure ''.
Repl Agent Status: 3
Could not find stored procedure ''.
Disconnecting from Publisher 'chrs4'
Disconnecting from Subscriber 'CHHIST'
Disconnecting from Publisher 'chrs4'
Disconnecting from Distributor 'chrs4'
George,
this is not a supported configuration. For merge replication, a SQL 7.0
publisher can only publish to a SQL 7.0 Subscriber.
Rgds,
Paul Ibison
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Paul,
Thanks for the reply. I read the "...Different Versions"
article in books online, but obviously scanned too
quickly. I see the limitation now for merge, but not for
snap and trans. You might help out "Nick Horrocks" with
an answer to his thread "Unable to create Merge
subscription".
George

>--Original Message--
>George,
>this is not a supported configuration. For merge
replication, a SQL 7.0
>publisher can only publish to a SQL 7.0 Subscriber.
>Rgds,
>Paul Ibison
>(recommended sql server 2000 replication book:
>http://www.nwsu.com/0974973602p.html)
|||Thanks for the prompt - have posted to Nick as well.
Rgds,
Paul Ibison
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)

>--Original Message--
>Paul,
>Thanks for the reply. I read the "...Different Versions"
>article in books online, but obviously scanned too
>quickly. I see the limitation now for merge, but not for
>snap and trans. You might help out "Nick Horrocks" with
>an answer to his thread "Unable to create Merge
>subscription".
>George

Merge Rpl. Pull from subscriber access denied problem

Hi,
I have set up one laptop as the Distributor/Publisher. Went through the
wizard and set up a publication as well, using Pubs. Then registered another
remote laptop that I can see via the network, and it can see me. I went
through the wizard again and set up a push to that laptop. It said it ran fine,
and I can see the tables on the remote laptop now.
I deleted the push and keep trying to create a pull at the other laptop
(subscriber). The wizard sets it up, but when it starts synchronizing, it
immediately gets the big red X.
The error said: The schema script
'\\ACER\ReplShare\ReplData\unc\ACER_pubs_pubs_articles\20050928212317\stores_1.sch' could not be propagated to the subscriber.
I can see this share from both ends. I have read a ton about the accounts
the agent has to run under in the last 12 hours, but can't see what I am
doing wrong.
Is there a trick here?
Thanks.
Steve,
try logging on to the subscriber laptop using the same account that the SQL
Server Agent uses as a service account. Then see if you can browse to the
snapshot folder
\\ACER\ReplShare\ReplData\unc\ACER_pubs_pubs_articles\20050928212317. If you
can, see if you can copy the contents of this directory locally. I'm
guessing that the first part won't be possible due to permission
restrictions, but please post back with your results.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Paul,
I had tried some other things prior to being able to read your response.
Here is what I have done that is now working.
I set up new SQL users on both Publisher/Dist. and Subscriber, giving them
the proper roles. I then set up a new Windows login on each box, "Bob", of
type admin. I then changed both the MSSQLSERVER and SQLSERVERAGENT services on each
to run under "Bob". When it asks for logon credentials for the subscriber, I
use sa; when it asks for the publisher, I use the new SQL user I had set
up. I probably could have used that for the subscriber instead of
sa; I don't know.
I have tried so many things since yesterday afternoon that I am a little unsure
as to what actually solved it. From all I read overnight, having the two
services run under "Bob" was needed.
Thank you for the response,
Steve
"Paul Ibison" wrote:

> Steve,
> try logging on to the subscriber laptop using the same account that the sql
> server agent uses as a service account. Then see if you can browse to the
> snapshot folder
> \\ACER\ReplShare\ReplData\unc\ACER_pubs_pubs_articles\20050928212317. If you
> can, see if you can copy the contents of this directory locally. I'm
> guessing that the first part won't be possible due to permission
> restrictions, but please post back with your results.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
>
|||This is OK - what you've set up is known as pass-through authentication.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Is there a better or more preferred method? We have one laptop that acts as
the publisher/distributor, and two other laptops that will be subscribers.
They run over a wireless network. The two subscribers will be able to
initiate pull merge subscriptions.
Thanks,
Steve
"Paul Ibison" wrote:

> This is OK - what you've set up is known as pass-through authentication.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
>
|||Steve,
are the laptops all on the same domain? If so, you could use a domain
account, which is given rights to the distributor's working folder. If not,
it's either pass-through, FTP, backup and restore or alternative snapshot
locations.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
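One of the alternatives Paul mentions, an alternative snapshot location, can be set per publication. A minimal sketch, assuming a SQL Server 2000 merge publication; the publication name and UNC path are placeholders, and the property name is as documented for sp_changemergepublication:

```sql
-- Point the publication at an alternative snapshot folder that all
-- subscribers can reach (publication name and path are illustrative only):
EXEC sp_changemergepublication
    @publication = 'pubs_articles',
    @property = 'alt_snapshot_folder',
    @value = '\\SOMESHARE\AltSnapshots'
```

A new snapshot would need to be generated after changing the folder so the subscribers pick up files from the new location.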

Merge Replication? Aaaarghhh!

Can anyone tell me what should be contained in MSrepl_identity_range tables
on both the subscriber and the publisher? Both my tables contain completely
different data - pub has 83 rows and my single sub has only 3.
When are these tables populated and how? Can I populate them manually from a
system SP?
I refer to my previous post where Hilary Cotter thought there might be an
issue with these tables.
When executing : exec sp_MSfetchidentityrange N'CommentType', 0
I get the following error:
Server: Msg 21195, Level 16, State 1, Procedure
sp_MSfetchAdjustidentityrange, Line 92
A valid identity range is not available. Check the data type of the identity
column.
Thanks in advance...
Chris,
this is a bit more complicated than it seems...
I have had cause to manually change the identity range on a subscriber - I'm
not recommending it but it did lead to a better understanding of the
mechanism involved!
If you are using automatic range management this'll be taken care of when
you synchronize (run the merge agent). However, if it is not possible for
you to connect to the publisher, you could manually update
MSrepl_identity_range on the subscriber. This table is used to check if the
subscriber has used up its range or reached the threshold. The new range you
set would be obtained from MSrepl_identity_range on the distributor, which
is the master table and is used to generate new values. The values in this
table (MSrepl_identity_range on the distributor) would need to be changed to
avoid a future potential conflict. Finally, the check constraints on the
subscriber would need updating accordingly.
As an aside, note that there are some anomalies with automatic range
management: the first range is twice the requested size and the actual range
of values enforced by the check constraint is always one less than the size
selected - SQL Server 2005 managed identities for merge replication has been
redesigned to be more consistent.
HTH,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
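For reference, the kind of manual inspection described above might start like this. A sketch only, using the system table and the article name already quoted in the thread:

```sql
-- On the subscriber: see what identity ranges it currently holds.
SELECT * FROM MSrepl_identity_range

-- Review the check constraint that enforces the allocated range on the
-- published table ('CommentType' is the article named earlier in the thread):
EXEC sp_helpconstraint 'CommentType'
```

Comparing these against MSrepl_identity_range on the distributor (the master table, as Paul notes) shows whether the subscriber's range and constraint have drifted out of step.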
|||Paul. Thanks
I ended up removing replication from the DB and reinstating. I'm now having
a problem with creating the publication from a generated script! See later
post.
Thanks anyway for your help.
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:%239GGiaZrFHA.2996@.tk2msftngp13.phx.gbl...
> Chris,
> this is a bit more complicated than it seems...
> I have had cause to manually change the identity range on a subscriber -
> I'm not recommending it but it did lead to a better understanding of the
> mechanism involved!
> If you are using automatic range management this'll be taken care of when
> you synchronize (run the merge agent). However, if it is not possible for
> you to connect to the publisher, you could manually update
> MSrepl_identity_range on the subscriber. This table is used to check if
> the subscriber has used up its range or reached the threshold. The new
> range you set would be obtained from MSrepl_identity_range on the
> distributor, which is the master table and is used to generate new values.
> The values in this table (MSrepl_identity_range on the distributor) would
> need to be changed to avoid a future potential conflict. Finally, the
> check constraints on the subscriber would need updating accordingly.
> As an aside, note that there are some anomalies with automatic range
> management: the first range is twice the requested size and the actual
> range of values enforced by the check constraint is always one less than
> the size selected - SQL Server 2005 managed identities for merge
> replication has been redesigned to be more consistent.
> HTH,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>

Merge Replication: sp_MSgetmetadatabatch Duration

I'm running SQL 2000 SP4 publisher with approximately 15 MSDE 2000
pull subscribers.
A few of my users are having problems with replication. This is all
done via VPN connections to our site (so they're not on our LAN.)
The merge agent fails with the error "The process could not query row
data at the 'Publisher'."
After running Profiler, it appears that sp_MSgetmetadatabatch has the longest
duration of anything run.
I can't find any real details on this stored procedure, so I'm not
really sure what it's doing. I'm also NOT a replication expert. I know
enough to have gotten it running and it's been working okay, but these
problems have slowly been creeping up. We can normally get the
synchronization to go through, but it's frustrating to my users, and I
may be facing a mutiny! Help! I'll be glad to provide any more details
that anyone requests, and if anyone has any details on what this
stored procedure is doing or how I can find out, that would be most
appreciated!
Try setting querytimeout to a large value or using the slowlink
profile.
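Concretely, QueryTimeout is a merge agent command-line parameter. A sketch of a pull merge agent invocation; all server, database, and publication names below are placeholders:

```
replmerg.exe -Publisher PUBSRV -PublisherDB PubDb -Publication MyPub
    -Subscriber SUBSRV -SubscriberDB SubDb -SubscriptionType 1
    -Distributor PUBSRV -QueryTimeout 6000
```

The slow link profile can instead be assigned from the agent's properties in Enterprise Manager, which adjusts several of these parameters at once for low-bandwidth connections.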
On Nov 15, 12:26 pm, dday...@.gmail.com wrote:
> I'm running SQL 2000 SP4 publisher with approximately 15 MSDE 2000
> pull subscribers.
> A few of my users are having problems with replication. This is all
> done via VPN connections to our site (so they're not on our LAN.)
> The merge agent fails with the error "The process could not query row
> data at the 'Publisher'."
> After running Profiler, it appears that the this is the longest
> duration of anything run.
> I can't find any real details on this stored procedure, so I'm not
> real sure what its doing. I'm also NOT a replication expert. I know
> enough to have gotten it running and its been working okay, but these
> problems have slowly been creeping up. We can normally get the
> synchronization to go through, but its frustrating to my users, and I
> may be facing a mutiny! Help! I'll be glad to provide any more details
> that anyone requests, and if anyone has any details on what this
> stored procedure is doing or how I can find out, that would be most
> appreciated!
|||On Nov 15, 1:06 pm, Hilary Cotter <hilary.cot...@.gmail.com> wrote:
> Try setting querytimeout to a large value or using the slowlink
> profile.
> On Nov 15, 12:26 pm, dday...@.gmail.com wrote:
I've done both of those (Set QueryTimeout = 6000) and it still occurs.
|||Index fragmentation could be the cause; you should be rebuilding your merge
system table indexes on a regular basis:
DBCC DBREINDEX (MSmerge_contents, '', 80)
DBCC DBREINDEX (MSmerge_genhistory, '', 80)
DBCC DBREINDEX (MSmerge_tombstone, '', 80)
DBCC DBREINDEX (MSmerge_current_partition_mappings, '', 80)
DBCC DBREINDEX (MSmerge_past_partition_mappings, '', 80)
ChrisB MCDBA
MSSQLConsulting.com
"dday515@.gmail.com" wrote:

> I'm running SQL 2000 SP4 publisher with approximately 15 MSDE 2000
> pull subscribers.
> A few of my users are having problems with replication. This is all
> done via VPN connections to our site (so they're not on our LAN.)
> The merge agent fails with the error "The process could not query row
> data at the 'Publisher'."
> After running Profiler, it appears that the this is the longest
> duration of anything run.
> I can't find any real details on this stored procedure, so I'm not
> real sure what its doing. I'm also NOT a replication expert. I know
> enough to have gotten it running and its been working okay, but these
> problems have slowly been creeping up. We can normally get the
> synchronization to go through, but its frustrating to my users, and I
> may be facing a mutiny! Help! I'll be glad to provide any more details
> that anyone requests, and if anyone has any details on what this
> stored procedure is doing or how I can find out, that would be most
> appreciated!
>
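Before (or after) running the rebuilds Chris lists, the fragmentation itself can be checked. A sketch using SQL 2000's DBCC SHOWCONTIG against the same merge system tables:

```sql
-- Report fragmentation on the busiest merge metadata tables;
-- low scan density suggests the rebuild is worthwhile.
DBCC SHOWCONTIG ('MSmerge_contents')
DBCC SHOWCONTIG ('MSmerge_genhistory')
```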
|||On Nov 15, 1:48 pm, Chris <Ch...@.discussions.microsoft.com> wrote:
> Index fragmentation could be cuase, you should be rebuilding your merge
> system table indexes on a regular basis:
> DBCC DBREINDEX (MSmerge_contents, '', 80)
> DBCC DBREINDEX (MSmerge_genhistory, '', 80)
> DBCC DBREINDEX (MSmerge_tombstone, '', 80)
> DBCC DBREINDEX (MSmerge_current_partition_mappings, '', 80)
> DBCC DBREINDEX (MSmerge_past_partition_mappings, '', 80)
> ChrisB MCDBA
> MSSQLConsulting.com
>
> "dday...@.gmail.com" wrote:
I've done a full reindex as well prior, still the same problem!
|||Did you try the slow link profile?
Is it possible also that your network link is going down during your sync?
Can you run a ping -t between the publisher and subscriber to verify that
the link stays up during the sync?
RelevantNoise.com - dedicated to mining blogs for business intelligence.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Daniel Day" <dday515@.gmail.com> wrote in message
news:3d27eaf9-9105-47e0-96bc-c6afe7763da0@.v4g2000hsf.googlegroups.com...
> On Nov 15, 1:06 pm, Hilary Cotter <hilary.cot...@.gmail.com> wrote:
> I've done both of those (Set QueryTimeout = 6000) and it still occurs.

Merge Replication: Import Data

Hi,
I have MERGE setup between SERVER A (Publisher)-- SERVER B (Subscriber).
Into a table on SERVER A I am importing data from an Excel sheet - say 100
records. But these records are not reflected on SERVER B even after the merge
agent runs several times.
What would be the possible cause?
Thanks in advance.
Regards
Javed
When you use DTS, by default triggers are not fired; in the Options tab of
the Transform Data Task properties, uncheck the Use Fast Load option to have your
triggers fired. You can also issue sp_addtabletocontents 'tableName' to
replicate the table's contents.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Javed Iqbal" <javediqbal98@.hotmail.com> wrote in message
news:eyvd3r5CGHA.984@.tk2msftngp13.phx.gbl...
> Hi,
> I have MERGE setup between SERVER A (Publisher)-- SERVER B (Subscriber).
> in a table on SERVER A i am importing data from an excel sheet say 100
> records. But these records are not reflecting on SERVER B even after merge
> agent runs several time.
> What would be be possible cause?
> Thanks in advance.
> Regards
> Javed
>
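Hilary's second suggestion can be sketched as follows; 'MyImportedTable' is a placeholder for the published table that was bulk loaded:

```sql
-- Run in the published database after a bulk load that bypassed the merge
-- triggers, so the loaded rows are marked for pickup by the merge agent:
EXEC sp_addtabletocontents @table_name = 'MyImportedTable'
```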

merge replication, subscriber can only download but not upload?

Hi,

I need urgent help! The problem is that every synchronization only transfers
data from the publisher to the subscriber, but not the other direction. The
publisher is SQL Server 2005 Standard Edition, and the subscriber is 2005
Express. Is there any stored procedure to deal with such a problem?

Thanks for any comment.
can you describe your problem in more detail - is this a filtered publication, or are there any other publication/article properties that are set that we should know of? Can you describe the changes made at the publisher that should be arriving at the subscriber?|||Thx for the reply.

The publication is not filtered - just a normal, standard merge replication. The situation is that I prepared each subscriber locally with the publisher, and they were running well when testing. After that, I took them to different remote locations. The subscribers now communicate with the publisher over an ADSL VPN tunnel. What happens is that some of the subscribers can only download changes from the publisher, but cannot upload their changes to the publisher. So what I do is delete the subscriptions and re-create them. After that, they work well.

I really want to know what on earth the problem is.

Thx for any consideration.

|||

Hello WII,

There is an option which is like "Subscribers download-only, prohibit changes" while creating the publication.

Its default is "Bidirectional".

The Merge Agent which is at the subscriber may not be working. Check out its History by clicking on its job and selecting View History.

Ekrem Önsoy

|||Thanks, Ekrem.

But most of the other subscribers can upload changes to the publisher. So I'm really confused about what's going on with the ones that are not working properly.

BTW, does replication on SQL 2005 Express change a lot? 'coz our system works fine with the combination of SQL Server 2000 Standard & MSDE.

Anyone can recommend some articles or books about the sql server 2005 merge replication? the more detailed the better.

Thanks a lot.
|||

No. You will see virtually the same thing regardless of whether it is Express Edition, Workgroup, Standard, etc. You're going to have to provide a lot more detail on this.

1. What is your configuration

2. Are the subscribers actually connecting to the publisher and staying connected long enough to complete a synch cycle (upload first, resolve conflicts, and then download changes)

3. Are there any error messages

The more information that you give us, the better we can help.

|||Thanks Michael,

Because I'm new to SQL Server, I'm actually not quite sure where to find the useful information, so sorry about that.

1. I'm using pull replication, non-filtered publication.

2.yes, i think so. the publisher and subscribers are connected by dedicated VPN tunnels.

3.yes, heaps....after each synchronization, each subscriber got the same error message "The process was successfully stopped. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147200963). Get help: http://help/MSSQL_REPL-2147200963".
And in the SQL Server log, I can find this kind of error message: "Subscriber 'xxx' subscription to article 'docket_items' in publication 'yyy' failed data validation." Even when I re-created the subscription from scratch, it still came up. So I guess something is wrong with the publisher?
|||We're going to need a lot more information than you could possibly add to a forum post. Please open a support case with Microsoft and be prepared to send them backups of the publisher, subscriber, msdb, and distribution databases along with error logs and event logs. They'll have more specific information as well when you get to a support engineer.|||Thanks Michael, thank you so much.

I think you are right. I'll do that.

Thanks for all the comments.
|||Hi guys,

I finally found out the error messages though the verbose log.

Here is part of it:

2007-05-18 11:01:57.062 Percent Complete: 0
2007-05-18 11:01:57.062 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Data validation failed for one or more articles. When troubleshooting, check the output log files for any errors that may be preventing data from being synchronized properly. Note that when error compensation or delete tracking functionalities are disabled for an article, non-convergence can occur.
2007-05-18 11:01:57.140 Percent Complete: 0
2007-05-18 11:01:57.140 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'cash_breakup' failed data validation (rowcount and checksum). Rowcount actual: 268, expected: 0.
2007-05-18 11:01:57.218 Percent Complete: 0
2007-05-18 11:01:57.218 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'docket_canceled' failed data validation (rowcount and checksum). Rowcount actual: 17, expected: 0.
2007-05-18 11:01:57.281 Percent Complete: 0
2007-05-18 11:01:57.296 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'docket_reprinted' failed data validation (rowcount and checksum). Rowcount actual: 484, expected: 0.
2007-05-18 11:01:57.375 Percent Complete: 0
2007-05-18 11:01:57.375 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'banked_amounts' failed data validation (rowcount and checksum). Rowcount actual: 2224, expected: 0.
2007-05-18 11:01:57.453 Percent Complete: 0
2007-05-18 11:01:57.453 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'docket_payments' failed data validation (rowcount and checksum). Rowcount actual: 8732, expected: 0.
2007-05-18 11:01:57.546 Percent Complete: 0
2007-05-18 11:01:57.546 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'credit_notes' failed data validation (rowcount and checksum). Rowcount actual: 856, expected: 0.
2007-05-18 11:01:57.625 Percent Complete: 0
2007-05-18 11:01:57.625 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'gift_vouchers' failed data validation (rowcount and checksum). Rowcount actual: 605, expected: 0.
2007-05-18 11:01:57.703 Percent Complete: 0
2007-05-18 11:01:57.703 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'laybys' failed data validation (rowcount and checksum). Rowcount actual: 576, expected: 0.
2007-05-18 11:01:57.781 Percent Complete: 0
2007-05-18 11:01:57.781 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'store_received_in' failed data validation (rowcount and checksum). Rowcount actual: 1107, expected: 0.
2007-05-18 11:01:57.859 Percent Complete: 0
2007-05-18 11:01:57.859 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'customers' failed data validation (rowcount and checksum). Rowcount actual: 4748, expected: 0.
2007-05-18 11:01:57.953 Percent Complete: 0
2007-05-18 11:01:57.953 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'stations' failed data validation (rowcount and checksum). Rowcount actual: 28, expected: 0.
2007-05-18 11:01:58.015 Percent Complete: 0
2007-05-18 11:01:58.015 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'dockets' failed data validation (rowcount and checksum). Rowcount actual: 14389, expected: 0.
2007-05-18 11:01:58.093 Percent Complete: 0
2007-05-18 11:01:58.093 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'docket_items' failed data validation (rowcount and checksum). Rowcount actual: 12414, expected: 0.
2007-05-18 11:01:58.171 Percent Complete: 0
2007-05-18 11:01:58.171 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'stocktake_items' failed data validation (rowcount and checksum). Rowcount actual: 80076, expected: 0.
2007-05-18 11:01:58.250 Percent Complete: 0
2007-05-18 11:01:58.250 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'store_received_items' failed data validation (rowcount and checksum). Rowcount actual: 25773, expected: 0.
2007-05-18 11:01:58.296 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.296 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.296 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.328 Disconnecting from OLE DB Distributor 'SURF-SERVER'
2007-05-18 11:01:58.328 Disconnecting from OLE DB Distributor 'SURF-SERVER'
2007-05-18 11:01:58.328 The merge process could not set the status of the subscription correctly.
2007-05-18 11:01:58.343 OLE DB Subscriber 'SURF-PSS1': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2007-05-18 11:01:58.343 [100%] Percent Complete: 100
2007-05-18 11:01:58.343 The process was successfully stopped.
2007-05-18 11:01:58.343 OLE DB Distributor 'SURF-SERVER': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2007-05-18 11:01:58.484 The Merge Agent was unable to update information about the last synchronization at the Subscriber. Ensure that the subscription exists at the Subscriber, and restart the Merge Agent.
2007-05-18 11:01:58.578 Percent Complete: 0
2007-05-18 11:01:58.578 Category:NULL
Source: Merge Replication Provider
Number: -2147200963
Message: The process was successfully stopped.
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.593 Disconnecting from OLE DB Distributor 'SURF-SERVER'
2007-05-18 11:01:58.593 Disconnecting from OLE DB Distributor 'SURF-SERVER'

|||These are the articles that failed data validation; many more articles passed it.

I'm just wondering why it doesn't keep replicating when the row count differs between the subscriber and the publisher. Isn't replication's purpose to make them the same?

I'll appreciate any comment. Thank you. I'm really desperate now.
|||

If you do not care about the validations, then you should look at your merge agent job and remove this part:

"-Validate 3". What this tells the merge agent is to do a validation and stop if there are errors.

Remove it and the merge will continue past the validation step.

However please do look at the real reason why there are differences between the publisher and the subscriber in the first place.
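For context, -Validate appears on the merge agent's command line along these lines; server, database, and publication names below are placeholders, and removing the parameter (or setting it to 0) skips validation:

```
replmerg.exe -Publisher SURF-SERVER -PublisherDB PubDb -Publication MyPub
    -Subscriber SURF-PSS1 -SubscriberDB SubDb -Validate 3
```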

|||Thanks Mahesh,

I did put that parameter in the script.

Thank you very much!

merge replication, subscriber can only download but not upload?

Hi,

I need a urgent help! The problem is that every synchronization only transfer data from subscriber to publisher, but not the other direction. The publisher is sql server 2005 standard edition, and the subscriber is 2005 express. Is that any stored-procedure to deal with such a problem?

Thanks for any commnet.
can you describe your problem in more detail - is this a filtered publication, or are there any other publication/article properties that are set that we should know of? Can you describer the changes made at the publisher that should be arriving at the subscriber?|||Thx for reply.

The publication is not filtered, and just a normal, standard merge replication. The situation is that I prepared each subscriber locally with the publisher, and they were running well when testing. After that, I took them to different remote locations. The subscribers now are communicating with the publisher by adsl VPN tunnel. What happened is that some of the subscribers only can download changes from the publisher, but cannot upload the changes to the publisher. So what i can do is to delete the subscriptions and re-create them. After that, they are working well.

I really want to know what on earth the problem is.

Thx for any consideration.

|||

Hello WII,

There is an option, "Subscribers download-only, prohibit changes", available while creating the publication.

Its default is "Bidirectional".

The Merge Agent at the subscriber may not be working. Check its history by clicking on its job and selecting View History.

Ekrem Önsoy
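If you prefer to check this setting with T-SQL rather than the wizard, a rough sketch (the publication name is hypothetical, and the column name is from memory, so verify it against your server):

```sql
-- Run in the published database at the publisher.
-- In the result set, look for the upload options column
-- (0 = bidirectional, 1 = subscriber uploads disabled,
--  2 = uploads disabled and subscriber changes prohibited).
EXEC sp_helpmergearticle @publication = N'MyMergePub';
```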

|||Thanks, Ekrem.

But most of the other subscribers can upload changes to the publisher, so I'm really confused about what's going on with the ones that are not working properly.

BTW, does replication on SQL 2005 Express change a lot? Our system is working fine with the combination of SQL Server 2000 Standard & MSDE.

Can anyone recommend some articles or books about SQL Server 2005 merge replication? The more detailed the better.

Thanks a lot.
|||

No. You will see virtually the same thing regardless of whether it is Express Edition, Workgroup, Standard, etc. You're going to have to provide a lot more detail on this.

1. What is your configuration

2. Are the subscribers actually connecting to the publisher and staying connected long enough to complete a synch cycle (upload first, resolve conflicts, and then download changes)

3. Are there any error messages

The more information that you give us, the better we can help.

|||Thanks Michael,

Because I'm new to SQL Server, I'm actually not quite clear on where to find the useful information, so sorry about that.

1. I'm using pull replication with a non-filtered publication.

2. Yes, I think so. The publisher and subscribers are connected by dedicated VPN tunnels.

3. Yes, heaps... after each synchronization, each subscriber gets the same error message "The process was successfully stopped. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147200963). Get help: http://help/MSSQL_REPL-2147200963".
And in the SQL Server log I can find this kind of error message: "Subscriber 'xxx' subscription to article 'docket_items' in publication 'yyy' failed data validation." Even when I re-created the subscription from scratch, it still came up. So I guess something is wrong with the publisher?
|||We're going to need a lot more information than you could possibly add to a forum post. Please open a support case with Microsoft and be prepared to send them backups of the publisher, subscriber, msdb, and distribution databases along with error logs and event logs. They'll have more specific information as well when you get to a support engineer.|||Thanks Michael, thank you so much.

I think you are right. I'll do that.

Thanks for all the comments.
|||Hi guys,

I finally found the error messages through the verbose log.

Here is part of it:

2007-05-18 11:01:57.062 Percent Complete: 0
2007-05-18 11:01:57.062 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Data validation failed for one or more articles. When troubleshooting, check the output log files for any errors that may be preventing data from being synchronized properly. Note that when error compensation or delete tracking functionalities are disabled for an article, non-convergence can occur.
2007-05-18 11:01:57.140 Percent Complete: 0
2007-05-18 11:01:57.140 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'cash_breakup' failed data validation (rowcount and checksum). Rowcount actual: 268, expected: 0.
2007-05-18 11:01:57.218 Percent Complete: 0
2007-05-18 11:01:57.218 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'docket_canceled' failed data validation (rowcount and checksum). Rowcount actual: 17, expected: 0.
2007-05-18 11:01:57.281 Percent Complete: 0
2007-05-18 11:01:57.296 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'docket_reprinted' failed data validation (rowcount and checksum). Rowcount actual: 484, expected: 0.
2007-05-18 11:01:57.375 Percent Complete: 0
2007-05-18 11:01:57.375 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'banked_amounts' failed data validation (rowcount and checksum). Rowcount actual: 2224, expected: 0.
2007-05-18 11:01:57.453 Percent Complete: 0
2007-05-18 11:01:57.453 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'docket_payments' failed data validation (rowcount and checksum). Rowcount actual: 8732, expected: 0.
2007-05-18 11:01:57.546 Percent Complete: 0
2007-05-18 11:01:57.546 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'credit_notes' failed data validation (rowcount and checksum). Rowcount actual: 856, expected: 0.
2007-05-18 11:01:57.625 Percent Complete: 0
2007-05-18 11:01:57.625 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'gift_vouchers' failed data validation (rowcount and checksum). Rowcount actual: 605, expected: 0.
2007-05-18 11:01:57.703 Percent Complete: 0
2007-05-18 11:01:57.703 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'laybys' failed data validation (rowcount and checksum). Rowcount actual: 576, expected: 0.
2007-05-18 11:01:57.781 Percent Complete: 0
2007-05-18 11:01:57.781 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'store_received_in' failed data validation (rowcount and checksum). Rowcount actual: 1107, expected: 0.
2007-05-18 11:01:57.859 Percent Complete: 0
2007-05-18 11:01:57.859 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'customers' failed data validation (rowcount and checksum). Rowcount actual: 4748, expected: 0.
2007-05-18 11:01:57.953 Percent Complete: 0
2007-05-18 11:01:57.953 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'stations' failed data validation (rowcount and checksum). Rowcount actual: 28, expected: 0.
2007-05-18 11:01:58.015 Percent Complete: 0
2007-05-18 11:01:58.015 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'dockets' failed data validation (rowcount and checksum). Rowcount actual: 14389, expected: 0.
2007-05-18 11:01:58.093 Percent Complete: 0
2007-05-18 11:01:58.093 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'docket_items' failed data validation (rowcount and checksum). Rowcount actual: 12414, expected: 0.
2007-05-18 11:01:58.171 Percent Complete: 0
2007-05-18 11:01:58.171 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'stocktake_items' failed data validation (rowcount and checksum). Rowcount actual: 80076, expected: 0.
2007-05-18 11:01:58.250 Percent Complete: 0
2007-05-18 11:01:58.250 Category:NULL
Source: Merge Process
Number: -2147200953
Message: Article 'store_received_items' failed data validation (rowcount and checksum). Rowcount actual: 25773, expected: 0.
2007-05-18 11:01:58.296 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.296 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.296 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.312 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.328 Disconnecting from OLE DB Distributor 'SURF-SERVER'
2007-05-18 11:01:58.328 Disconnecting from OLE DB Distributor 'SURF-SERVER'
2007-05-18 11:01:58.328 The merge process could not set the status of the subscription correctly.
2007-05-18 11:01:58.343 OLE DB Subscriber 'SURF-PSS1': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2007-05-18 11:01:58.343 [100%] Percent Complete: 100
2007-05-18 11:01:58.343 The process was successfully stopped.
2007-05-18 11:01:58.343 OLE DB Distributor 'SURF-SERVER': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}
2007-05-18 11:01:58.484 The Merge Agent was unable to update information about the last synchronization at the Subscriber. Ensure that the subscription exists at the Subscriber, and restart the Merge Agent.
2007-05-18 11:01:58.578 Percent Complete: 0
2007-05-18 11:01:58.578 Category:NULL
Source: Merge Replication Provider
Number: -2147200963
Message: The process was successfully stopped.
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Subscriber 'SURF-PSS1'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.578 Disconnecting from OLE DB Publisher 'SURF-SERVER'
2007-05-18 11:01:58.593 Disconnecting from OLE DB Distributor 'SURF-SERVER'
2007-05-18 11:01:58.593 Disconnecting from OLE DB Distributor 'SURF-SERVER'

|||These are the articles that failed data validation; many other articles passed it.

I'm just wondering why it doesn't keep replicating when the row count is different between the subscriber and the publisher. Isn't replication's purpose to make them the same?

I'd appreciate any comment. Thank you. I'm really desperate now.
|||

If you do not care about the validations, then you should look at your merge agent job and remove this part:

"-Validate 3". This tells the merge agent to do a validation and stop if there are errors.

Remove this and it will continue to pass.

However please do look at the real reason why there are differences between the publisher and the subscriber in the first place.
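If you want to keep validation but re-run it on demand after fixing the data, a sketch using the names from the log above (treat them as placeholders for your own publication and subscriber):

```sql
-- Run at the publisher, in the published database.
-- @level = 3 requests rowcount and checksum validation,
-- the same check the -Validate 3 agent switch performs.
EXEC sp_validatemergesubscription
    @publication   = N'yyy',
    @subscriber    = N'SURF-PSS1',
    @subscriber_db = N'repltest',
    @level         = 3;
```

The validation itself then happens during the next merge, and its result appears in the agent history.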

|||Thanks Mahesh,

I did put that parameter in the script.

Thank you very much!

Wednesday, March 28, 2012

merge replication, later wins conflict resolver issue

I am testing a scenario with merge replication where the publisher has an entry
first, and the same data is entered into the subscriber later (but before the
merge agent runs).
I am using merge replication with 'Microsoft SQL Server DATETIME (Later
Wins) Conflict Resolver'.
I inserted the same row (row with same PK) into publisher, then into
subscriber.
I am expecting to see the row from the subscriber replicate to the publisher the
next time the merge agent runs.
Instead, the row from the publisher shows up on my subscriber side.
The conflict resolver reports that the publisher won:
The row was inserted at 'EZROUTESUNDB01.repltest' but could not be inserted
at 'EZROUTESTGDB01.repltest'. Violation of PRIMARY KEY constraint 'P_test1'.
Cannot insert duplicate key in object 'test1'.
The conflict table has conflict type 5, "Upload Insert Failed".
I want the latest row from the subscriber to win; that is why I chose the "Later
Wins" resolver.
BTW, when I update the same row on both sides, replication works like a charm:
the later change wins regardless of which side had it originally (so
subscriber data migrates to the publisher OK if newer).
Also, when new inserts are made on either side, those are merged correctly.
Am I misunderstanding something here?
Thanks,
Zoltan
Did you specify the datetime column to use as the basis for the Later
Wins resolver? You enter this in the "Enter information needed by the
resolver" text box.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"rr news" <znyiri@.hotmail.com> wrote in message
news:HFXad.11969$yP2.4853@.tornado.tampabay.rr.com. ..
> I am testing a scenario with merge replication when publisher has an entry
> 1st, and the same data entered into the subscriber later (but before merge
> agent runs).
> I am using merge replication with 'Microsoft SQL Server DATETIME (Later
> Wins) Conflict Resolver'.
> I inserted the same row (row with same PK) into publisher, then into
> subscriber.
> I am expecting to see row from subscriber replicate to publisher at the
next
> time merge agent runs.
> Instead the row from publisher shows up in my subscriber side.
> The conflict resolver reports that the publisher won:
> The row was inserted at 'EZROUTESUNDB01.repltest' but could not be
inserted
> at 'EZROUTESTGDB01.repltest'. Violation of PRIMARY KEY constraint
'P_test1'.
> Cannot insert duplicate key in object 'test1'.
> The conflict table has conflict type 5, "Upload Insert Failed".
> I want the latest row from subscriber to win that is why I choose "Later
> Wins" resolver.
> BTW, when I update the same row on both sides, repl. works like a charm,
> later change will won independent of which side had it originally (so
> subscriber data migrates to publisher OK if newer).
> Also, when new inserts are made in either side, those are merged
correctly.
> Am I misunderstanding something here?
> Thanks,
> Zoltan
>
|||Hilary,
Thanks for you fast response.
Here are the table definition, the article info, and my test scenarios.
I am only having problem with scenario 3.
Thanks,
Zoltan
CREATE TABLE [test1] (
[num] [int] NOT NULL ,
[date] [datetime] NULL ,
[rowguid] uniqueidentifier ROWGUIDCOL NOT NULL CONSTRAINT
[DF__test1__rowguid__353DDB1D] DEFAULT (newid()),
[strcol] [varchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
CONSTRAINT [P_test1] PRIMARY KEY CLUSTERED
(
[num]
) ON [PRIMARY]
) ON [PRIMARY]
GO
sp_helpmergearticle @article='test1'

id                         : 1
name                       : test1
source_owner               : dbo
source_object              : test1
sync_object_owner          : dbo
sync_object                : test1
description                : NULL
status                     : 2
creation_script            : NULL
conflict_table             : conflict_repltest_test1
article_resolver           : Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver
subset_filterclause        : NULL
pre_creation_command       : 1
schema_option              : 0x000000000000CFF1
type                       : 10
column_tracking            : 1
resolver_info              : date
vertical_partition         : 1
destination_owner          : dbo
identity_support           : 0
pub_identity_range         : NULL
identity_range             : NULL
threshold                  : NULL
verify_resolver_signature  : 0
destination_object         : test1
allow_interactive_resolver : 0
fast_multicol_updateproc   : 1
check_permissions          : 0
I have a datetime type column named "date" which is entered in the
resolver_info field for article 'test1'.
article_resolver is configured for 'Microsoft SQL Server DATETIME (Later
Wins) Conflict Resolver'
Test scenarios:
1) new (row w/diff PK) inserted into either side, each row replicated
(merged to other side), OK
2) same row updated with different data, then the latest entry wins no
matter if it was created on the publisher side or the subscriber side
3) the only problem I have is when same row (row w/same PK) entered into
both sides.
EZROUTESTGDB01 publisher
EZROUTESUNDB01 subscriber
--> 2nd scenario, later wins resolver works fine.
-- testing update same row, 1st at subscriber, then publisher
-- insert test row
insert into EZROUTESTGDB01.repltest.dbo.test1 (num, date, strcol) values(
102, getdate(), 'XXX' )
-- wait until it synchs, so both sides have same row
update EZROUTESTGDB01.repltest.dbo.test1 set date=getdate(), strcol='publ'
where num=102
-- wait few seconds
update EZROUTESUNDB01.repltest.dbo.test1 set date=getdate(), strcol='subscr'
where num=102
-- wait until synch, both sides end up with 'subscr' in strcol column, OK
--> 3rd scenario, NOT OK when subscriber has the later entry, publisher
still wins...?!
--clean up, wait until synch
delete from repltest.dbo.test1 where num > 20
-- testing insert same PK into publisher 1st, then subscriber, still
publisher wins, NOT OK!
insert into EZROUTESTGDB01.repltest.dbo.test1(num, date, strcol) values(
101, getdate(),'publisher' )
-- wait few seconds
insert into EZROUTESUNDB01.repltest.dbo.test1(num, date, strcol) values(
101, getdate(),'subscriber' )
-- wait until synch, both sides end up with 'publisher' in strcol column,
NOT OK
Resolver reports:
The row was inserted at 'EZROUTESUNDB01.repltest' but could not be inserted
at 'EZROUTESTGDB01.repltest'. Violation of PRIMARY KEY constraint 'P_test1'.
Cannot insert duplicate key in object 'test1'.
BTW, clocks are synched, I checked it.
Merge agent is configured to run once/minute.
Microsoft SQL Server 2000 - 8.00.760 on both sides.
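The article registration behind output like the above can be sketched as follows (the publication name here is hypothetical; the resolver name and the 'date' column are the ones from this thread):

```sql
-- Register the article with the Later Wins resolver.
-- @resolver_info names the datetime column the resolver compares.
EXEC sp_addmergearticle
    @publication      = N'repltest_pub',
    @article          = N'test1',
    @source_owner     = N'dbo',
    @source_object    = N'test1',
    @article_resolver = N'Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver',
    @resolver_info    = N'date';
```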
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:em7nkYMsEHA.2144@.TK2MSFTNGP10.phx.gbl...
> did you specify the time data type column to use as a basis for the later
> wins resolver? You enter this is the Enter Information Needed by the
> Resolver text box.
>
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
>
> "rr news" <znyiri@.hotmail.com> wrote in message
> news:HFXad.11969$yP2.4853@.tornado.tampabay.rr.com. ..

merge replication, help!

Hi,

I'm setting up merge replication between one publisher (SQL Server 2005 Standard SP2) and a couple of subscribers (SQL Server 2005 Express SP2). They are connected to each other through VPN tunnels (1.5M ADSL connections). I'm using pull replication, every 20 minutes. The initial snapshot was applied properly for each subscriber, but after several hours the subscribers keep getting this kind of error message: "Another merge agent for the subscription or subscriptions is running, or the server is working on a previous request by the same agent. (Source: MSSQLServer, Error number: 21036)". It looks like the VPN tunnel is flaky sometimes, and the merge agent just sits there waiting; then another request comes in and cannot get the agent, which is still held by the previous request. So how can I configure the server to release the agent when a new request comes in, or set a timeout for each request?

Any idea would be appreciated.

Thank you very much!
Maybe you can consider the -QueryTimeout parameter for the merge agent?|||Thanks for the reply.
But would you be able to tell me how to do that?
|||Oh, sorry, I found it. The current timeout is 300, just 5 minutes. But I set the merge replication to run every 20 minutes, so how come this happens?
|||QueryTimeout is how long the merge agent will wait on a given query before timing out. The 20 minutes you specified sounds like the scheduled interval at which the subscription will synchronize.
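For a pull subscription the Merge Agent runs at the subscriber, so the switch goes on the Merge Agent job step command there. A sketch of the parameter text (server, database, and publication names are placeholders):

```
-Publisher SURF-SERVER -PublisherDB pubdb -Publication mypub
-Subscriber SUB1 -SubscriberDB subdb -SubscriptionType 1
-Distributor SURF-SERVER -QueryTimeout 1800
```

Here -QueryTimeout 1800 gives each query 30 minutes before the agent gives up, which may suit a slow VPN better than the default 300 seconds.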

Merge Replication with only a subset of data at BOTH subscriber and publisher

I have a merge replication process (on test data) that is moving a
subset of data from one region to a central office. The central
office has its own existing data, prior to initializing the merge
replication from this publisher.
Basically, when a row that existed prior to initialization (one that
does not meet both the direct row filter and the join filter) is
updated at the subscriber, it is still replicated back to the
publisher; the publisher then appears to delete all related records
based on the join filters, because that row did not meet the criteria.
Am I trying to make merge replication do something that it does not
do? I hope to keep one subset of data in the merge process, and
have independent data on both the publisher AND the subscriber.
Any help/direction is greatly appreciated.
Tony,
to have independent sets of data without truly editing the merge triggers
you really need to partition it and have separate publications. Views can be
used to amalgamate the data if needed. You can use 'Instead Of' triggers or
Partitioned Views to make them updatable.
HTH,
Paul Ibison
|||Paul,
Thanks for the information, I (stupidly) did not even consider that
possibility. I am going to set up a test here, and I might get back to
you if I run into any issues doing so.
Thanks for the insight!
Tony
*** Sent via Devdex http://www.devdex.com ***
Don't just participate in USENET...get rewarded for it!
|||Paul, I did setup a test using views to partition off the data that I
want to publish, however it looks like when I publish those alone with
Merge replication that the data is not being transferred. The schema for
the views was initialized properly, but I think I am missing something.
You reference 'partitioned views'. Do I need to do something to the
views on the publisher in order to make changes to the data replicate
over?
Thanks in advance,
Tony
*** Sent via Devdex http://www.devdex.com ***
Don't just participate in USENET...get rewarded for it!
|||Anthony,
I didn't intend you to create the view on the publisher :-). This is an
avenue you could go down if you use an indexed view, but it is an overhead
you don't require. All you need to do is to create separate publications.
Each one has a filter to take the rows you are interested in - effectively
to partition the table. These publications will be sent to the subscriber
and created there as 2 separate tables. If you need to report/query these
tables on the subscriber as though they were one table, you can use views on
the subscriber for this. These subscriber views will be unions and if they
need to be updatable then you could use 'instead of' triggers or partitioned
views.
HTH,
Paul Ibison
|||Paul,
The one problem is that I can not change the schema at the subscriber
nor the publisher, as they are established as well as the data that we
are working with. Obviously, I can add to the schema, which is why I
took the indexed view comment from your response. Currently applications
access the tables directly, and they expect this replicated data to end
up there one way or another.
Basically, if I could replicate just a view from each Publisher to the
central Sub, and have the views separate the data logically from one
another, then the Subscriber could still work with the data in the table
underneath without having to worry about filters which are not being
evaluated.
This make any sense to you, or am I off the beaten path here?
Thanks again,
Tony D
*** Sent via Devdex http://www.devdex.com ***
Don't just participate in USENET...get rewarded for it!
|||Anthony,
on the publisher you won't need to change the schema, as you can separate
the table logically into two publications using row filters. On the
subscriber you'll have schema changes (additions) which can be transparent
to the user. Each publication replicates to a separate table. These could be
tables X and Y. The original table name is recreated on the subscriber as a
view which amalgamates (unions) the X and Y data. So from the subscriber's
point of view nothing has changed. However this view will only be editable
if you use an 'instead of' trigger or use a partitioned view. Either of
these mechanisms will filter the change into the respective replicated
table.
You mention having the 2 indexed views on the publisher, but they cannot
(easily) be replicated to the same table on the subscriber. You'll also lose
control of which changes are sent back to the publisher.
HTH,
Paul
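Paul's subscriber-side arrangement can be sketched like this (table, column, and routing values are hypothetical; the real routing condition would be whatever column your row filters partition on):

```sql
-- Subscriber side: the two replicated tables Orders_X and Orders_Y
-- are amalgamated under the original table name via a view.
CREATE VIEW dbo.Orders
AS
SELECT OrderID, Region, Amount FROM dbo.Orders_X  -- rows from publication 1
UNION ALL
SELECT OrderID, Region, Amount FROM dbo.Orders_Y; -- rows from publication 2
GO

-- To make the view updatable, an INSTEAD OF trigger routes each
-- change to the appropriate underlying table based on the filter column.
CREATE TRIGGER dbo.Orders_Insert ON dbo.Orders
INSTEAD OF INSERT
AS
BEGIN
    INSERT dbo.Orders_X (OrderID, Region, Amount)
        SELECT OrderID, Region, Amount FROM inserted WHERE Region = 'X';
    INSERT dbo.Orders_Y (OrderID, Region, Amount)
        SELECT OrderID, Region, Amount FROM inserted WHERE Region = 'Y';
END;
```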
|||Ok, I understand that so far. One question about the view on the
subscriber which amalgamates the data. You say to make this editable I
could make it a partitioned view. Is that just using 'With
Schemabinding', or do I need to index it also?
Thanks for your time Paul, this has been a help!
*** Sent via Devdex http://www.devdex.com ***
Don't just participate in USENET...get rewarded for it!
|||One other hitch using different table names, currently all involved
tables at both the sub and pub have the same names. Is it at all
possible to publish a table so that it is replicated to a table with a
different name at the subscriber?
*** Sent via Devdex http://www.devdex.com ***
Don't just participate in USENET...get rewarded for it!
|||Anthony,
have a look at the @destination_table parameter in sp_addarticle.
HTH,
Paul Ibison
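A sketch of that parameter on sp_addarticle (publication and object names are hypothetical):

```sql
-- Publish dbo.Orders but create it as Orders_RegionA at the subscriber.
EXEC sp_addarticle
    @publication       = N'RegionA_pub',
    @article           = N'Orders',
    @source_owner      = N'dbo',
    @source_object     = N'Orders',
    @destination_table = N'Orders_RegionA';
```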

Merge replication with MSSQL/SQLCE using row filters

Hello everyone,
I'm new in replication, I have some - maybe trivial - question.
First of all, I'm using a MSSQL server as a publisher, the clients are
Pocket PC's, running SQLCE. All clients need only a subset of data, which
can be controlled via row filters, using SUSER_SNAME(). For the first
replication, it seems everything is correct, but.
I have 2 tables in the publication with a reference, eg. table X with
fields code,data and table Y with fields xcode,username. The row filters
looks like:
for table X: 'WHERE X.code IN (SELECT Y.xcode FROM Y WHERE Y.xcode=X.code
AND Y.username=SUSER_SNAME())'
for table Y: 'WHERE Y.username=SUSER_SNAME()'
If I add a row to table Y, the referenced row from table X will not be
transferred through the next merge replication process, causing a referential
integrity error. If I reinitialize the publication, it works again. It
seems the expressions in the row filters have no effect after the first run
of the Snapshot Agent. I've tried JOIN FILTERS; the effect was that the join
filters worked _only_, not the row filters.
Any idea? The dynamic filters option is on. Do I need to create a dynamic
snapshot job for every single client? There are lots of them.
Thanks for any help,
Nomad
Correct me if I am wrong, but I don't think that you can use SUSER_SNAME()
on your Pocket PC.
When I issue this command in isqlw on my Pocket PC I get: At least one
input table is required.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"Nomad" <gfoyle@.freemail.hu> wrote in message
news:OGlIubKnEHA.3708@.TK2MSFTNGP10.phx.gbl...
> Hello everyone,
> I'm new in replication, I have some - maybe trivial - question.
> First of all, I'm using a MSSQL server as a publisher, the clients are
> Pocket PC's, running SQLCE. All clients need only a subset of data, which
> can be controlled via row filters, using SUSER_SNAME(). For the first
> replication, it seems everything is correct, but.
> I have 2 tables in the publication with a reference, eg. table X with
> fields code,data and table Y with fields xcode,username. The row filters
> looks like:
> for table X: 'WHERE X.code IN (SELECT Y.xcode FROM Y WHERE Y.xcode=X.code
> AND Y.username=SUSER_SNAME())'
> for table Y: 'WHERE Y.username=SUSER_SNAME()'
> If I add a row to table Y, the referrenced row from table X will not be
> transferred through the next merge replication process, causing
referential
> integrity error. If I reinitialize the publication, it works again. It
> seems, the expressions in the row filters has no effect after the first
run
> of the Snapshot Agent. I've tried JOIN FILTERS, the effect was that the
join
> filters worked _only_, not the row filters.
> Any idea? The dynamic filters option is on. Am I need to create a dynamic
> snapshot job for every single client? There are lot's of them .
> Thanks for any help,
> Nomad
>
|||I'm using the SUSER_SNAME function in the publication row/join filters, in
the publisher database. The publisher is a PC running MSSQL; the clients are
Pocket PCs using SQLCE. It seems the function itself works correctly
the first time. I've found not the solution, but a description of this
problem, here:
http://support.microsoft.com/default...n-us%3Bq324362
So this is not a bug, it's a feature.
"Hilary Cotter" <hilary.cotter@.gmail.com> az albbiakat rta a kvetkezo
zenetben news:O4l1BwLnEHA.648@.tk2msftngp13.phx.gbl...[vbcol=seagreen]
> Correct me if I am wrong, but I don't think that you can use SUSER_Sname()
> on your pocket PC.
> When I issue this command on isqlw on my pocket pc I get an: At least one
> input table is required.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
>
> "Nomad" <gfoyle@.freemail.hu> wrote in message
> news:OGlIubKnEHA.3708@.TK2MSFTNGP10.phx.gbl...
|||As the article below says, this kind of replication process (e.g. using
subqueries referencing other tables in a row filter) cannot be executed
using row filters, because row filters won't be evaluated after the first
run of the Snapshot Agent. This was disappointing.
Now I have a question about JOIN FILTERs. Is the behaviour of join filters
"cumulative" (or "transitive")? For example, I have a row filter for one
table, and a join filter for another table joined to the first. So I will
have only the joined rows from the second table after replication. Am I
right? Can I use row and join filters in cooperation?
"Nomad" <gfoyle@.freemail.hu> az albbiakat rta a kvetkezo zenetben
news:%23dAuB6LnEHA.2612@.TK2MSFTNGP15.phx.gbl...
> I'm using the SUSER_SNAME function at the publication row/join filters, in
a
> publisher database. The publisher is a PC running MSSQL, the clients are
> Pocket PC's using SQLCE. It seems, the function itself working correctly,
> for the first time, I've found not the solution, but the description of
this
> problem, here:
> http://support.microsoft.com/default...n-us%3Bq324362
> So this is not a bug, it's a feature .
|||Yes, subset filters and join filters are intended to be used together. And
yes, you will only get rows in the join filter table that meet the subset
filter in the parent table.
Philip Vaughn
Program Manager -SQL Server Replication
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only, many thanks!
"Nomad" <gfoyle@.freemail.hu> wrote in message
news:OA7wcVnnEHA.4084@.TK2MSFTNGP10.phx.gbl...
> As the article down below says, this kind of replication process (e.g.
> using
> subqueries referencing other tables in a row filter) could not be executed
> using row filters, because row filters won't be evaluated after the first
> run of the snapshot agent. This was disappointing.
> Now I have a question about JOIN FILTERs. Is the behaviour of the join
> filters
> "cumulative" (or "transitive")? For example, I have a row filter for one
> table, and a join filter for another table joined to the first. So, I
> will
> have only the joined rows from the second table after replication. Am I
> right? Can I use row and join filters in cooperation?
> "Nomad" <gfoyle@.freemail.hu> wrote in message
> news:%23dAuB6LnEHA.2612@.TK2MSFTNGP15.phx.gbl...

Merge Replication with MSDE

I have two MSDE servers on different machines and am
trying to set up merge replication between the two. When
the merge agent runs on the publisher, it indicates that
the "process could not connect to the Subscriber" (SQL
Server does not exist or access denied).
I'm not sure if it matters, but I'm running in a peer-to-
peer network. I have my SQL Agent service on the
publisher running under a local machine account, and that
account has access to the publisher, however I wasn't
sure how to grant access to that account on the
Subscriber, or if that was even necessary.
Any help would be appreciated.
- Jim T.
Jim
I am not positive, but I think that we ran into this problem as well. I
believe that the problem is that the local machine account on the publisher
(or subscriber) can't authenticate with the other machine, which means that
the SQL Server Agent can't connect to the other machine to make replication
happen. If I remember correctly, we had to have a non-local (domain)
account that each domain trusts and each machine allows access to the
database on. I'm not sure, but I think it may also be possible to have
local accounts as long as both the username and password are the same on
both machines. I'm pretty sure that there is a KB article titled something
like "How to run replication across non-trusted domains" that may help. If
you need to change the SQL Agent's service account, take a look at KB article
283811. Sorry to be so vague, but I hope it points you in the right
direction.
HTH
Ron L
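If you end up with matching local accounts (or a domain account), the agent's login still has to be granted access on the subscriber. A rough SQL 2000 sketch, with made-up account and database names:

```sql
-- Run on the subscriber; account and database names are hypothetical.
EXEC sp_grantlogin N'SUBSCRIBER\ReplAgent';    -- Windows login for the agent account
USE SubscriptionDB;
EXEC sp_grantdbaccess N'SUBSCRIBER\ReplAgent'; -- user in the subscription database
EXEC sp_addrolemember N'db_owner', N'SUBSCRIBER\ReplAgent'; -- merge agent needs broad rights
```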
"Jim Toth" <anonymous@.discussions.microsoft.com> wrote in message
news:14b301c4b3e6$5d41a650$a401280a@.phx.gbl...
>I have two MSDE servers on different machines and am
> trying to set up merge replication between the two. When
> the merge agent runs on the publisher, it indicates that
> the "process could not connect to the Subscriber" (SQL
> Server does not exist or access denied).
> I'm not sure if it matters, but I'm running in a peer-to-
> peer network. I have my SQL Agent service on the
> publisher running under a local machine account, and that
> account has access to the publisher, however I wasn't
> sure how to grant access to that account on the
> Subscriber, or if that was even necessary.
> Any help would be appreciated.
> - Jim T.

merge replication with large amounts of updates at publisher

I have merge repl setup and it has been running with no problem... until a
scheduled task that runs once every night and updates a very large amount of
records was introduced. Merge repl is scheduled to run every hour. Now the
merge attempt directly following the nightly task that updates a large
amount of updates fails, and continues to fail every hour. It eventually does
succeed sometime in the evening. The scheduled task and initial repl failure
occur around 4am eastern time. I would think that would be a somewhat low
traffic time on the net... maybe not. It usually succeeds sometime between 6
and 8 pm eastern time.
1) when a large amount of updates occur, is every record in its entirety
sent to the subscriber? or does the merge repl system automatically package
things into more efficient jobs of some sort?
2) when repl does finally succeed it reports taking over 1 hour. Since my
schedule is every hour could that be part of, or THE problem? meaning a
merge job is scheduled to start but the last one is still running? could
that be why its failing?
3) could any concurrency/locking type issues occur at the subscriber while
the merge agent is operating? especially since it is continually failing and
retrying and failing and retrying?
4) is 50,000 records just way too many to expect to be able to merge? (over
the internet)
any info is greatly appreciated. Thanks.
1) replication metadata is compared to determine which row(s) need
updating/deleting/inserting. Basically the way it works is that you have a
rowguid which identifies each row. If a row is modified, triggers write the
rowguid and some tracking information to the msmerge_contents table.
The merge agent will read all rows in the msmerge_contents table that have
been added since the last time it ran. It will then compare the version
(generation) of this row with the version of the row on the
publisher/subscriber. The publisher/subscriber with the latest generation is
determined to be the winner, and then the source table on the publisher or
subscriber is consulted and the data values are extracted, and an
insert/update/delete statement is fired on the subscriber/publisher.
So initially it's just the rowguid and version information which travels
across the wire, and then it could be the row itself.
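One rough way to see how much tracked change the agent has queued up (for instance, right after the nightly update) is to peek at the tracking table in the published database. The database name below is a placeholder:

```sql
-- Placeholder database name; msmerge_contents is the merge tracking table.
USE PublishedDB;
SELECT COUNT(*) AS tracked_rows FROM msmerge_contents;
-- A sample of what is tracked: one row per changed rowguid, plus its generation.
SELECT TOP 10 tablenick, rowguid, generation FROM msmerge_contents;
```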
2) I think you would benefit by setting QueryTimeOut to something large,
like 600. You can set this by right-clicking on your merge agent, selecting
agent properties, clicking steps, click run agent, and then in the commands
section, click edit, and look for -QueryTimeOut and set it to 600. If it's
not there, add it. The merge agent will run for a predetermined time and
then may fail. Sometimes it will take 3 or 4 times before it completely
processes all transactions. Many DBAs wrap their merge agents in infinite
loops. The job scheduler for SQL Server will not run two simultaneous
instances of the same merge agent.
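For orientation, the command text in that job step has roughly this shape; every name below is a placeholder, and switches other than -QueryTimeOut will be whatever your setup wizard generated:

```
-Publisher [PUBSERVER] -PublisherDB [PubDB] -Publication [MyPub]
-Subscriber [SUBSERVER] -SubscriberDB [SubDB] -Distributor [PUBSERVER]
-QueryTimeOut 600
```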
3) It's possible. Normally you will get a deadlock message in your merge
agent or msrepl_errors. Normally you run into locking/concurrency problems
when you have a large number of merge agents running simultaneously. There
is a setting to limit the number of concurrent merge agents. Right-click on
your publication, select publication properties, and then click the
subscriber tab. Many DBAs will stagger their merge agents so they don't run
simultaneously. For instance they will schedule them to run every 17 minutes
or so. There is nothing magic about 17, it's just a large prime number.
4) Not at all, but this depends on hardware of the publisher and subscriber
and the bandwidth available. You might benefit from the slow link profile if
you have low bandwidth.
"djc" <noone@.nowhere.com> wrote in message
news:OV931hVrEHA.332@.TK2MSFTNGP14.phx.gbl...
>I have merge repl setup and it has been running with no problem... until a
> scheduled task that runs once every night and updates a very large amount
> of
> records was introduced. Merge repl is scheduled to run every hour. Now the
> merge attempt directly following the nightly task that updates a large
> amount of updates fails, and continues to fail every hour. It eventually
> does
> succeed sometime in the evening. The scheduled task and initial repl
> failure
> occur around 4am eastern time. I would think that would be a somewhat low
> traffic time on the net... maybe not. It usually succeeds sometime between
> 6
> and 8 pm eastern time.
> 1) when a large amount of updates occur, is every record in its entirety
> sent to the subscriber? or does the merge repl system automatically
> package
> things into more efficient jobs of some sort?
> 2) when repl does finally succeed it reports taking over 1 hour. Since my
> schedule is every hour could that be part of, or THE problem? meaning a
> merge job is scheduled to start but the last one is still running? could
> that be why its failing?
> 3) could any concurrency/locking type issues occur at the subscriber while
> the merge agent is operating? especially since it is continually failing
> and
> retrying and failing and retrying?
> 4) is 50,000 records just way too many to expect to be able to merge? (over
> the internet)
> any info is greatly appreciated. Thanks.
>
|||Thanks Hilary.
1) You suggested adding the QueryTimeOut setting directly to the command in
the agent. I just want to verify that the dialog box returned from right
clicking on the merge agent (from the replication monitor) and choosing
Agent Profile allows you to do the same thing. The dialog has several
template profiles like the Slow Link one you mentioned and others. I have
played with the QueryTimeOut value from there. Just want to make sure that's
ok as well, since you suggested adding this option to the command itself...
any difference?
2) "The merge agent will run for a predetermined time and then may fail.
Sometimes it will take 3 or 4 times before it completely processes all
transactions. Many DBA wrap their merge agents in infinite loops. The job
scheduler for SQL Server will not run two simultaneous instances of the same
merge agent."
Well, wrapping the agents in an infinite loop is beyond my current skill
level, although if it comes to that then I will have to learn. You said
that the agent will run for a predetermined time and then fail, sometimes 3
or 4 times before it completely processes all transactions... after failing,
does it pick up where it left off? or does it start all over again? If it
picks up where it left off does that mean that its possible that out of a
50,000 record merge job that takes 6 times to complete that the records
updated during each of the failed attempts are committed before the final
completion of the whole 50,000 job?
3) In Books Online I read about the MaxUploadChanges and MaxDownloadChanges.
It said you can set a limit on the number of records updated per merge agent
session. But it didn't say what happens if you hit that limit. Let's say you
set a limit of 2500 records but you have 30,000 records that need to be
merged. After it stops at 2500, will the remaining changes still be processed
2500 at a time during subsequent merge agent runs? If so, this may allow me
to split up my one-time large update into smaller chunks, since it only
happens once overnight?
thanks again for all your help. It is very much appreciated.
-djc
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:Ov2WLwXrEHA.1204@.TK2MSFTNGP12.phx.gbl...
> 1) replication metadata is compared to determine which row(s) need
> updating/deleting/inserting. Basically the way it works is that you have a
> rowguid which identifies each row. If a row is modified, triggers write the
> rowguid and some tracking information to the msmerge_contents table.
> The merge agent will read all rows in the msmerge_contents table that have
> been added since the last time it ran. It will then compare the version
> (generation) of this row with the version of the row on the
> publisher/subscriber. The publisher/subscriber with the latest generation is
> determined to be the winner, and then the source table on the publisher or
> subscriber is consulted and the data values are extracted, and an
> insert/update/delete statement is fired on the subscriber/publisher.
> So initially it's just the rowguid and version information which travels
> across the wire, and then it could be the row itself.
> 2) I think you would benefit by setting QueryTimeOut to something large,
> like 600. You can set this by right-clicking on your merge agent, selecting
> agent properties, clicking steps, click run agent, and then in the commands
> section, click edit, and look for -QueryTimeOut and set it to 600. If it's
> not there, add it. The merge agent will run for a predetermined time and
> then may fail. Sometimes it will take 3 or 4 times before it completely
> processes all transactions. Many DBAs wrap their merge agents in infinite
> loops. The job scheduler for SQL Server will not run two simultaneous
> instances of the same merge agent.
> 3) It's possible. Normally you will get a deadlock message in your merge
> agent or msrepl_errors. Normally you run into locking/concurrency problems
> when you have a large number of merge agents running simultaneously. There
> is a setting to limit the number of concurrent merge agents. Right-click on
> your publication, select publication properties, and then click the
> subscriber tab. Many DBAs will stagger their merge agents so they don't run
> simultaneously. For instance they will schedule them to run every 17
> minutes or so. There is nothing magic about 17, it's just a large prime
> number.
> 4) Not at all, but this depends on hardware of the publisher and subscriber
> and the bandwidth available. You might benefit from the slow link profile
> if you have low bandwidth.
>
> "djc" <noone@.nowhere.com> wrote in message
> news:OV931hVrEHA.332@.TK2MSFTNGP14.phx.gbl...

Merge Replication with different Service Packs

I have a Win 2003 server with SQL 2000 SP3 that is publisher and distributor for several merge replication publications. Subscribers are a mixture of SQL 2000 (running SP3 and SP4) and MSDE. I've run into a problem in creating publications that is fixed in SP4 but for various reasons I can't update all of the subscribers that are running SP3. I don't expect any issues updating just my server but will I have any issues running SP4 on my server and SP3 on the subscribers?

Thanks for the help.

Gary
|||I have not experienced any issues related to mismatched service packs in my
development environment. On the other hand, I have not yet deployed to
full-scale production.

Regards,

hmscott

Merge replication with anonymous subscribers and identity columns

Hi,

I read the BOL on how the publisher will hand out identity ranges to subscribers, but it was not clear whether this is also the case for anonymous subscribers. Will merge replication with identity columns work with anonymous subscribers that sync via HTTPS?

Thanks,
Darrell Young
Yes, it should work. Please try and let us know if you encounter any issues.
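For reference, automatic identity range management is switched on per article when it is added to the publication; the sketch below uses SQL 2000 syntax with made-up names and illustrative range sizes:

```sql
-- Hypothetical publication/table; range sizes are illustrative only.
EXEC sp_addmergearticle
    @publication = N'MyPub',
    @article = N'Orders',
    @source_object = N'Orders',
    @auto_identity_range = 'true',
    @pub_identity_range = 100000, -- block of identity values kept at the publisher
    @identity_range = 1000,       -- block handed to each subscriber
    @threshold = 80;              -- % of range used before a new one is assigned at sync
```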

Monday, March 26, 2012

Merge replication with alternate synchronization partner

Hello:
I have a merge replication and I defined a Publisher A as a primary
publisher and distributor, a Publisher B as an alternate synchronization
partner and distributor and a subscriber to the Publisher A.
We need to turn off the server which is Publisher A and set Publisher B
as the alternate partner, so we can replicate data between Publisher B and
the subscriber even if Publisher A is off.
I followed the following procedure:
-From Publisher A I go to SQL Server Agent, Jobs, and then modify the
job that is PublisherA-SubscriberA. Modify the command box to point to
PublisherB and add the parameter -SyncToAlternate 1.
-Restart the merge agent
I get the error "The process could not drop one or more tables because the
tables are being used by other publications. The step failed"
Do I need to modify something else? What can I do? We need to turn
off Publisher A and keep the replication going between the other two
servers.
Thanks in advance
Jennyfer Barco
did you follow the steps outlined in
http://support.microsoft.com/default...roduct=sql2k#3
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
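For comparison, when the switch-over works, the merge agent's command ends up looking roughly like this (all server, database, and publication names below are placeholders):

```
-Publisher [PublisherB] -PublisherDB [PubDB] -Publication [MyPub]
-Subscriber [SUBSERVER] -SubscriberDB [SubDB] -Distributor [PublisherB]
-SyncToAlternate 1
```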
"Jennyfer J Barco" <pdwhitt@.nospam.wdsinc.com> wrote in message
news:uwsCF6VkEHA.3756@.TK2MSFTNGP11.phx.gbl...
> Hello:
> I have a merge replication and I defined a Publisher A as a primary
> publisher and distributor, a Publisher B as an alternate synchronization
> partner and distributor and a subscriber to the Publisher A.
> We need to turn off the server which is Publisher A and set Publisher B
> as the alternate partner, so we can replicate data between Publisher B and
> the subscriber even if Publisher A is off.
> I followed the following procedure:
> -From Publisher A I go to SQL Server Agent, Jobs, and then modify the
> job that is PublisherA-SubscriberA. Modify the command box to point to
> PublisherB
> and add the parameter -SyncToAlternate 1.
> -Restart the merge agent
> I get the error "The process could not drop one or more tables because the
> tables are being used by other publications. The step failed"
> Do I need to modify something else? What can I do? Because we need to turn
> off the Publisher A and the replication still going between the other two
> servers.
> Thanks in advance
> Jennyfer Barco
>
|||Yes, I set up the subscriber following the directions and now the three
servers are replicating OK. Because we need to turn off Publisher A, I
followed the options under "Use SQL Server Enterprise Manager to Synchronize
with Alternate Synchronization Partner", but when I started the sync I got
the error. I'm doing these steps from Publisher A. Is that OK? Is there
anything else I need to do?
Thanks so much
Jennyfer
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:e4N1Y4akEHA.3428@.TK2MSFTNGP11.phx.gbl...
> did you follow the steps outlined in
>
http://support.microsoft.com/default...roduct=sql2k#3
>
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
>
> "Jennyfer J Barco" <pdwhitt@.nospam.wdsinc.com> wrote in message
> news:uwsCF6VkEHA.3756@.TK2MSFTNGP11.phx.gbl...

Merge replication very very slow

Trying to merge replicate a database, but it is very slow.
At the subscriber it reports: "Uploading data changes to the Publisher"
and then gets stuck for a very long time.
After that it prints several lines of: "Processing article
'TableName'" and fails with "General network error".
Publisher, distributor and subscriber: SQL 2K SP3.
I am using replmerg.exe to replicate, so I am getting all logs on
screen anyway, and it looks like it fails in a different phase each time.
I have already set QueryTimeout to 600 and the inactivity threshold to 120.
Thanks

Merge replication using files.

Is it possible to use merge replication without a direct connection between subscriber and publisher? I'm trying to find a solution that uses files to send/receive merge-replication info from/to the server.

^_^,

Thank You.

There is currently no support for Merge replication with files.

Are you concerned about the snapshot files, or do you want all of the syncs to use files? If the former, you should look at alt_snapshot_folder. If the latter, there is no support as of now.

|||

I need it for all syncs :(.

Do you feel there's a way to bypass merge web synchronization and use files to do the sync?

I don't know the communication process, but simplifying, maybe we could merge the request(s) into an output file, and the response(s) into an input file ...

Thank you.

|||

Hi,

Thanks for your suggestion. This is not a new idea and has been floating in our minds for some time. We may take another look at it for the next release, but it may not make it.

Merge Replication updating SP3 to SP4

Gurus,
I am planning to update SP3 to SP4 on a replicated server.
A- Please correct me if I am wrong about the order.
1- Apply SP4 on Publisher / Distributor (on a single box)
2- All subscribers.
B- As far as I know there are no issues while applying SP4. Is there any
known issue that has to be taken care of while applying SP4?
TIA
Regards
Javed Iqbal
Javed,
I agree with the order. I've had a search of this newsgroup (using Google
Groups) to check for SP4 and there are hardly any reported errors so far.
The main one is
http://groups.google.com/group/micro...85 a5a37097d4
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)