After adding new tables to a Merge replication publication, data validation
fails if I use row count and checksum, but succeeds if I use row count and
binary checksum.
Checksum is notoriously unreliable. It looks like your tables are out of
sync. I suggest you reinitialize your subscriptions.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
|||I have dropped and recreated the subscription, but it still gives the same
error.
All published tables have been recreated, so there should not be any
difference in the tables, right?
I also did a manual checksum using sp_table_validation for the table on both
the subscriber and the publisher, and both return the same checksum.
I fail to understand how checksum validation fails while binary checksum
succeeds.
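For reference, a rough sketch of the manual check described above, run in the published database on the publisher and then in the subscription database on the subscriber so the outputs can be compared. The table name is a placeholder, and the @rowcount_only values follow the documented convention (1 = rowcount only, 0 = rowcount and checksum, 2 = rowcount and binary checksum); verify them against your server's Books Online:

```sql
-- 'MyTable' is a placeholder for one of the published articles.

-- Rowcount plus CHECKSUM-based validation (the mode that fails here)
EXEC sp_table_validation
    @table         = 'MyTable',
    @rowcount_only = 0;

-- Rowcount plus BINARY_CHECKSUM-based validation (the mode that passes)
EXEC sp_table_validation
    @table         = 'MyTable',
    @rowcount_only = 2;
```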
|||Hi,
Thank you for the response. I even checked the table manually, comparing the
values row by row, and found no difference.
Puzzled by this behaviour...
regds,
amit
|||I take it you have the same collation and there is no filtering involved.
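The collation question matters because CHECKSUM honours the comparison rules of the collation in effect, while BINARY_CHECKSUM hashes the underlying bytes, so values that compare equal under a case-insensitive collation can still produce different binary checksums. A minimal illustration (the result for the first query assumes a case-insensitive default collation):

```sql
-- Under a case-insensitive collation, 'ABC' and 'abc' compare equal,
-- so CHECKSUM returns the same value for both strings...
SELECT CHECKSUM('ABC') AS cs_upper, CHECKSUM('abc') AS cs_lower;

-- ...while BINARY_CHECKSUM works on the raw bytes and returns
-- different values for the two strings.
SELECT BINARY_CHECKSUM('ABC') AS bcs_upper,
       BINARY_CHECKSUM('abc') AS bcs_lower;
```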
|||Hence the comment that checksum validation wasn't always completely
accurate. I've run into this more than once. I've never been able to track
it down and I've never been able to reliably reproduce it.
That's why I look at the validation as a guideline. I still do random
samples of data and directly compare them externally. (This is a lot easier
than you might think.) I basically execute the same query against the
publisher and subscriber and yank all of the data into two tables. If the
combined rowcount is odd, I know I have a problem. If a UNION query yields
exactly half of the total rows between the two tables, they are in sync.
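Mike's spot check can be sketched roughly as follows. Table names are placeholders; the sketch assumes the same query's result set has already been pulled from each side into two local staging tables, and that neither table contains internal duplicate rows (e.g. there is a primary key), since UNION would collapse those as well:

```sql
-- #pub_copy and #sub_copy hold the same query's result set pulled
-- from the publisher and the subscriber respectively.
DECLARE @pub_rows int, @sub_rows int, @distinct_rows int;

SELECT @pub_rows = COUNT(*) FROM #pub_copy;
SELECT @sub_rows = COUNT(*) FROM #sub_copy;

-- UNION (without ALL) removes duplicates, so if the two copies are
-- identical the distinct row count is exactly half of the total.
SELECT @distinct_rows = COUNT(*)
FROM (SELECT * FROM #pub_copy
      UNION
      SELECT * FROM #sub_copy) AS u;

IF @distinct_rows * 2 = @pub_rows + @sub_rows
    PRINT 'Tables are in sync';
ELSE
    PRINT 'Tables differ';
```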
Mike
Mentor
Solid Quality Learning
http://www.solidqualitylearning.com
|||Thanks for responding back.
Yes, we have the same collation and no filtering.
Can you tell me which is better: binary checksum or checksum validation?
And how can this checksum validation failure be resolved? Any tracing?