Saturday, February 25, 2012

Merge replication data loss

I have a serious problem regarding merge replication. After a couple of pull
subscription synchronizations, data loss occurred in several tables (in both
the publisher and subscriber databases).
The publication has 62 articles, but only a few are affected. The agents report
several errors (including a really nasty "general network failure").
I don't think the problem is in the dynamic or join filters, because some of the
affected tables are not filtered at all.
Example: with tables T1 and T2 (linked by an FK constraint), users confirmed data
loss in the child table T2 "right after synchronization" (sometimes a couple of
hundred records).
So, the question is how this could possibly happen (the FK constraint maybe, or
the general network failure)?
Any help is appreciated,
Tom
General network error means your network hiccupped.
Merge replication refreshes the data on the subscriber when you send the
snapshot. This could account for the data loss. Also check the conflict
viewer to see if there is a record of conflicts, which might also account for
it.
Make sure that if you have any cascading deletes, they are disabled on the
subscriber.
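As a rough illustration of that last point (object and column names below are placeholders, not from the original post), you could list cascading foreign keys on the subscriber and recreate any offender as NOT FOR REPLICATION so deletes applied by the merge agent don't cascade:
-- Run in the subscription database: list FK constraints that cascade deletes
SELECT name AS constraint_name, OBJECT_NAME(parent_obj) AS table_name
FROM sysobjects
WHERE xtype = 'F'
AND OBJECTPROPERTY(id, 'CnstIsDeleteCascade') = 1
-- Recreate a cascading FK (hypothetical names) so replicated deletes do not fire the cascade
ALTER TABLE dbo.T2 DROP CONSTRAINT FK_T2_T1
ALTER TABLE dbo.T2 ADD CONSTRAINT FK_T2_T1
FOREIGN KEY (T1ID) REFERENCES dbo.T1 (T1ID)
ON DELETE CASCADE NOT FOR REPLICATION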
"Tom" <tomislav.hrvoic.rimuv.dis@.inet.hr> wrote in message
news:uM5NnqIGEHA.4084@.TK2MSFTNGP11.phx.gbl...
> I have serious problem regarding merge replication. After couple of pull
> subscription synchronizations, data loss occurred in several tables (both
> publisher and subscriber databases).
> Publication has 62 articles, but only few are affected. Agents reports
several
> errors (including really nasty "general network failure").
> I think problem is not in dynamic or join filters, because some of those
> tables are not filtered at all.
> Example: with tables T1 and T2 (FK constraint) users confirmed data loss
in
> child table T2 "right after synchronization" (sometimes couple of hundreds
of
> records).
> So, question is how this could possibly happen (FK constraint maybe or
general
> network failure)?
> Any help is appreciated,
> Tom
>
>
|||What type of connectivity are you using?
Those records might still be on the publisher; you just need to
update the child records and replicate them back to the subscriber.
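If the rows really were still at the publisher, a crude way to push them back out (a sketch only; the column name and key value are made up) is a dummy update that touches the rows so merge change tracking picks them up again:
-- Run at the publisher: no-op update so the merge agent re-sends these rows
UPDATE dbo.T2
SET SomeColumn = SomeColumn   -- value unchanged, but the change is tracked
WHERE ParentID = 42           -- placeholder filter for the affected child rows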

|||"Kiran" <anonymous@.discussions.microsoft.com> wrote in message
news:175be01c418c4$e0f0f180$a301280a@.phx.gbl...
> What type of connectivity are you using?
Subscribers use a 56K modem or ISDN adapter to dial in to the company domain.
The network protocol is TCP/IP, DNS works fine, and I did notice some lost packets
when pinging the publisher from the subscribers. However, that was before the system
administrators replaced the dial-in server and reconfigured the network (the replication
data loss problem is still unsolved, though).

> those records might be on the publisher, you just need to
> update the child records and replicate them to subscriber.
Nope, an older database backup is the only place where I can find those
records. The publisher is also missing those records, and that makes it very
tricky.
Regards,
Tom

Merge Replication Custom Conflict Resolver

Hi.
Does anybody know where I can find more info about "Merge replication custom
conflict resolver"?
I have read Chapter 30 of the "SQL Server 2000 Resource Kit", but it covers
very little. I need more details.
Thanks and greetings.
Thanks, but I need a COM-based conflict resolver, because a stored procedure
resolver only handles update conflicts.
|||I did not find much more than you did. The first document is about SQL Server
2005, and the second one I have already read.
Greetings.

Merge Replication Could not connect

Hi!
I have a problem with merge replication between two SQL Servers connected via
the internet.
After I configured the publication and subscriber, when I try to synchronize
I get "The process could not access the Subscriber".
Thanks in advance

Regards,
Jakub Gluszkiewicz
k.gluszkiewicz@.citysoftware.com.pl
www.citysoftware.com.pl
tel. (22) 869-51-76
869-51-77
kom. 503-680-864
Are you using FTP? You should be using FTP. Also, can the subscriber ping the
publisher and vice versa?
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"Kuba" <k.gluszkiewicz@.citysoftware.com.pl> wrote in message
news:OrnZerPjEHA.1656@.TK2MSFTNGP09.phx.gbl...
> Hi!
> I have problem with Merge Replication between two SqlServers connected via
> internet.
> After i configured publication and subscriber, when i trying to synhronize
> i'm obtaining The process could not access to the Subscriber"
> thanks in advance
>
> --
> pozdrawiam
> Jakub Gluszkiewicz
> k.gluszkiewicz@.citysoftware.com.pl
> www.citysoftware.com.pl
> tel. (22) 869-51-76
> 869-51-77
> kom. 503-680-864
>
>
|||Hi Hilary!
Thanks for your response. I already solved this problem; I needed to add
aliases in SQL Server.
Thanks again
Regards,
Jakub Gluszkiewicz
k.gluszkiewicz@.citysoftware.com.pl
www.citysoftware.com.pl
tel. (22) 869-51-76
869-51-77
kom. 503-680-864
Uytkownik "Hilary Cotter" <hilary.cotter@.gmail.com> napisa w wiadomoci
news:uStcj8RjEHA.396@.TK2MSFTNGP12.phx.gbl...
> Are you using FTP? You should be using FTP. Also can the subscriber ping
the[vbcol=seagreen]
> publisher and vice versa.
>
> --
> Hilary Cotter
> Looking for a book on SQL Server replication?
> http://www.nwsu.com/0974973602.html
>
> "Kuba" <k.gluszkiewicz@.citysoftware.com.pl> wrote in message
> news:OrnZerPjEHA.1656@.TK2MSFTNGP09.phx.gbl...
via[vbcol=seagreen]
synhronize
>

merge replication corruption (system triggers and views)

All of a sudden none of our merge replications are working. In fact, you can't even insert, update or delete any data from the tables in the merge publication. When trying that, we get an error stating:

Msg 550, Level 16, State 1, Procedure MSmerge_ins_E3F43EF8B259476099BBB194A2E1708C, Line 42
The attempted insert or update failed because the target view either specifies WITH CHECK OPTION or spans a view that specifies WITH CHECK OPTION and one or more rows resulting from the operation did not qualify under the CHECK OPTION constraint.
The statement has been terminated.

Currently, the only solution I've found is to delete the publication and recreate it. I'm trying to figure out why this happened. It happened on a development server that, to my knowledge, hasn't been changed in a week or so, apart from changing the server's IP address. Would that cause such an error to occur?

-mike
|||I found another change: we added a linked server using sp_addlinkedserver. Any thoughts?
|||Do your merge subset filter clauses or join filter clauses contain views that specify WITH CHECK OPTION, pointing to a remote table?
|||No filters are set for the publication.
|||

OK, then you have to retrace your steps to see exactly what changes were made that would cause this, and see if you can back them out one by one.

Any idea what the linked server would have to do with the views, triggers, or any of the published tables? Are you making changes from a remote machine?

Merge replication conflicts on Identity Columns

Hi all,
I've set up a merge replication using sql server 2005.
All settings for "Copy Use Triggers" were set to false
All Identity ranges were set to be handled automatically.
There is a table - "Stock" - which has an ident field. An insert
trigger on this table inserts a new record into another table -
"barcodes" - which itself has its own ident field.
When a new "stock" record is inserted on the subscriber I get a merge
conflict which says:
"A row insert at 'xxx.YYYYYYYYY' could not be propagated to
'zzz.YYYYYYYYY'. This failure can be caused by a constraint violation.
Explicit value must be specified for identity column in table
'barcodes' either when IDENTITY_INSERT is set to ON or when a
replication user is inserting into a NOT FOR REPLICATION identity
column."
It suggested I use sp_adjustpublisheridentityrange, which I did on
both tables, but still the conflict won't resolve.
Any suggestions will be most gratefully received
Thanks
Ken
This should resolve itself the next time the merge agent runs. It is
possible that you have entered so much data on the subscriber that your
identity range is completely out of whack.
Issue a dbcc checkident('tablename') on your subscriber and note the value
for the current identity value and current column value. If the current
identity value is greater than your current column value, set the current
identity value to be the current column value, i.e.
dbcc checkident('tablename', RESEED, 14)
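Put together, the check-and-reseed flow looks roughly like this (the table name is taken from the thread as an example; the reseed value must come from your own data):
-- On the subscriber: report the current identity value and current column value
DBCC CHECKIDENT ('Stock', NORESEED)
-- If the identity value has run ahead of the data, reseed it back down,
-- e.g. to 14 as in the example above
DBCC CHECKIDENT ('Stock', RESEED, 14)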
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com

Merge Replication Conflicts

Hi,
I have a strange scenario occuring and have no idea why.
I have SQL Server 2000 & Windows 2003 Server running on two machines.
Two databases are set for merge replication.
The subscriber is used purely for redundancy reasons and no client uses it unless the publisher is unavailable. While the publisher is available, I am receiving conflict messages saying that the same column in the same table has been updated on both servers. I know this is not true as no client is running off the subscriber.
The publisher and subscriber have different identity seeds and the identity seed of the updated record that causes the conflict is that of the publisher (not the subscriber).
Has anyone run into this issue before? Any ideas?
Thanks,
Andrew
Having an identity value assigned by the publisher does not indicate
that it wasn't changed at the subscriber. It only indicates where the
row was originally inserted. The subscriber can still modify the rows
provided they have been replicated to the subscriber.
Having said that, the only way you should be able to have conflicts is
if there are conflicting DML statements on the same row on opposite
sides. You could try checking the MSmerge_contents table on the
subscriber database to see what rows were modified at the subscriber.
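As a sketch of that check (this assumes the standard merge metadata layout, where MSmerge_contents tracks changes by rowguid and article nickname):
-- Run in the subscription database: list tracked changes per article
SELECT a.name AS article, c.generation, c.rowguid
FROM dbo.MSmerge_contents c
JOIN dbo.sysmergearticles a ON a.nickname = c.tablenick
ORDER BY a.name, c.generation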
Hope this helps,
Reinout Hillmann
SQL Server Product Unit
This posting is provided "AS IS" with no warranties, and confers no rights.
|||Hi,
I can see quite a few records in the MSmerge_contents table at the subscriber. How can I use these to track down who/what is causing this?
There is not a single client in our organisation that is using the subscriber server, so how can there possibly be modifications at the subscriber?
When I run a trace on the subscriber for the database that is generating the conflict, I get nothing (i.e. no one is using that database on that server).
Andrew
|||Not sure if this is relevant, but do you have any triggers on the tables being inserted or updated? If so, you could have changes being made on the
subscriber that could cause conflicts back on the publisher.
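If such triggers exist, one common fix (a sketch with made-up names, not the poster's actual schema) is to mark them NOT FOR REPLICATION so they don't fire for rows applied by the merge agent:
-- Recreate the user trigger so it ignores rows applied by replication agents
CREATE TRIGGER dbo.trg_SomeTable_Ins
ON dbo.SomeTable
FOR INSERT
NOT FOR REPLICATION
AS
BEGIN
    -- original trigger logic goes here
    RETURN
END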

merge replication conflict tables, stored procs and views

We have merge replication running with anonymous subscribers.
It has generated lots of views, tables and stored procedures like
sp_ins_C435D35DDEC04FE2517CCD52A9024EC4

ctsv_07BA7383A12B4654B4D3A69B3053B227
conflict_DH_tblReplicationRegion

How do we get rid of these? I am concerned they will fill up the publisher database.
Can anyone advise?|||http://www.databasejournal.com/features/mssql/article.php/1468971
http://www.databasejournal.com/features/mssql/article.php/1438231
Fyi.
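Rather than guessing object names, the merge metadata can tell you which conflict tables belong to the publication; a sketch (assuming the sysmergearticles.conflict_table column is present on your build):
-- Run in the publication database
SELECT name AS article, conflict_table
FROM dbo.sysmergearticles
WHERE conflict_table IS NOT NULL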

Merge Replication Conflict Resolution Latency?

I've been having a hard time finding a definitive answer about merge
replication and any sort of latency on a transaction between a
subscriber and publisher. I apologize in advance for the lengthy post,
but it's difficult to really "cut to the meat" of this one
Here's my scenario: I'm developing a .NET CF application on a PocketPC
that stores data in a SQL CE database that merges with a SQL 2000
publisher when the PDA is docked. Generally speaking, what goes up is
properly reflected when it comes back down via a "is complete" row
filter that simply looks for a flag that is set to 1. For example:
-Publisher has 128 rows
-PDA has 128 rows, 28 rows have been "completed"
-Instantiate merge with publisher
-Result is that there are 128 rows on publisher, 28 are complete, row
filter disregards 28 complete records and sends down 100 records that
are still incomplete to PDA
This seems to work great. Now enter conflicts. I have the resolver set
such that the subscriber always wins. We've had several cases where the
server has been under moderate to heavy load, and there have been a lot
of conflicts. This was an actual occurrence:
-Publisher has 79 rows
-PDA has 79 rows, 69 rows have been "completed"
-Publisher had 79 rows *removed*
-Instantiate merge with publisher
-Result is that there are 69 conflicts at publisher, the subscriber wins
all, but there are still *79* rows on the PDA after merge finishes
-I wait a minute and try again, this time there are about 50 rows on the
subscriber
-I repeat this process several times over the course of 5 minutes until
finally, I am left with the correct number of records on the PDA (0)
My question is, is this normal behavior for the resolver? Are conflicts
generally allowed to be "handed off" and the merge allowed to complete
without reflecting their outcome? If so, is there any way to tune or
change this behavior?
This also has me concerned about a normal merge without any conflicts.
Would there ever be a case in which a merge would get out of sync in
this way? My understanding is that instantiating a merge will always
result in both the supplying subscriber and publisher ending up with the
same data at that moment. My merge agents are all using the default
profile.
Thanks a BUNCH for any help you can offer. I've had an impossible time
answering these questions and I'm a programmer before a DBA
-Mike
Mike wrote:

> -Publisher has 79 rows
> -PDA has 79 rows, 69 rows have been "completed"
> -Publisher had 79 rows *removed*
> -Instantiate merge with publisher
> -Result is that there are 69 conflicts at publisher, the subscriber wins
> all, but there are still *79* rows on the PDA after merge finishes
> -I wait a minute and try again, this time there are about 50 rows on the
> subscriber
> -I repeat this process several times over the course of 5 minutes until
> finally, I am left with the correct number of records on the PDA (0)
I apologize, I had my scenarios crossed. This should have read:
-Publisher has 79 rows
-PDA has 79 rows, 69 rows have been "completed"
-Publisher had all 79 rows *modified*
-Instantiate merge with publisher
-Result is that there are 69 conflicts at publisher, the subscriber wins
all and they are noted as resolved in the agent history, but there are
still *79* rows on the PDA after merge finishes
-I wait a minute and try again, this time there are about 50 rows on the
subscriber
-I repeat this process several times over the course of 5 minutes, each
time the number of rows at the subscriber decreases, until finally, I am
left with the correct number of records on the PDA (10)
Thanks
-Mike

Merge Replication Conflict Problem

exec sp_helpmergeconflictrows @conflict_table = 'CMCustomer'

Msg 207, Level 16, State 1, Line 1

Invalid column name 'origin_datasource_id'.


When I run the "Microsoft Replication Conflict View" I am prompted to pick a table. Here are the details of the error message I receive.

===================================

CMCustomer is neither a DataColumn nor a DataRelation for table summary. (System.Data)


Program Location:

at System.Data.DataRowView.get_Item(String property)
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.GetWinnerSQL(DataRow loserRow, Boolean blockFetch)
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.GetSourceRow()
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.FillDetailData()
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.FillDetail()
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.summaryInfoGrid_SelectionChanged(Object sender, SelectionChangedEventArgs args)
at Microsoft.SqlServer.Management.UI.Grid.GridControl.OnSelectionChanged(BlockOfCellsCollection selectedCells)
at Microsoft.SqlServer.Management.UI.Grid.GridControl.SelectedCellsInternal(BlockOfCellsCollection col, Boolean bSet)
at Microsoft.SqlServer.Management.UI.Grid.GridControl.set_SelectedCells(BlockOfCellsCollection value)
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.ResetSummaryGrid()
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.LoadConflict()

When I try to "Remove" then conflict (Conflict type - "4(Update/Delete, Update wins)") I receive the following...

===================================

Column 'CMCustomer' does not belong to table summary. (System.Data)


Program Location:

at System.Data.DataRow.GetDataColumn(String columnName)
at System.Data.DataRow.get_Item(String columnName)
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.SetLogInfo(DataRow logRow, String sql)
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.btnClearUpper_Click(Object sender, EventArgs eg)

Grant,

Please let me know if my assumptions are incorrect:

I believe CMCustomer is a rowguid column.

This column was added by the user and snapshot did not add this.

This happens on delete conflicts.

If these are correct, then it is a known issue and should be fixed in SP2.

|||

Yes, CMCustomer is the rowguidcol of the CMCustomer table.

"This column was added by the user and snapshot did not add this." Are you saying the column metadata changed at some point?

|||No, what I am saying is that when you created the table with this column, it was of type uniqueidentifier and it was designated as the rowguidcol. In case such a column does not exist in the table, the snapshot agent will add one and name it rowguid. The bug happens because the name is not rowguid and the UI was trying to find a column with this name.
|||Ok. Thanks.

Merge Replication Conflict Notification

SQL2K SP4
Windows 2000 server
Merge Replication

How can I get SQL Server to log to the Windows event log when a conflict occurs, so that I can set up a notification alert? I have gone to the replication alerts in Enterprise Manager and set one up, but it is never triggered and there is never anything in the Windows event log.
|||If it's not raised, that means it was never triggered.

Merge Replication Conflict - Primary Key Constraint

I have an application that uses Merge Replication. In my database design,
before I created the merge replication publication, I modified the tables and
set my identity columns to Yes (Not for replication) option.
I am hitting a problem however, when I try to insert a new row in one of the
tables and then replicate the data back to the server. I am getting a
conflict with the reason being:
Reason Type 5, Reason code 2627
Reason Text:
The row was inserted at 'Subscriber.X' but could not be inserted at 'Server.X'.
Violation of PRIMARY KEY constraint 'PK_X'. Cannot insert duplicate key in
object X.
I thought that having Not for replication option set for identity columns
would cause replication to use the server and/or subscriber environment to
generate identity column values on inserts.
Any help would be greatly appreciated.
Hi Guy - you'll need to partition the identity ranges to avoid identity
conflicts.
The easiest, most maintainable way is to change the article properties to
enable Automatic identity range management and then reinitialize.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .
|||Thanks Paul. I will try this out and see if the problem is resolved. My
apologies for this message posting so many times. My browser was acting up
and reporting an error when I posted. So I thought my post had failed.
One question: do you know why this is happening? I must not be
understanding the purpose of the NOT FOR REPLICATION option, because I thought
this is what would resolve this type of problem.
"Paul Ibison" wrote:

> Hi Guy - you'll need to partition the identity ranges to avoid identity
> conflicts.
> The easiest, most maintainable way is to change the article properties to
> enable Automatic identity range management and then reinitialize.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com .
>
>
|||The NOT FOR REPLICATION setting allows replication agents to do an identity insert when
distributing changes. However, if the range isn't partitioned, there will
still be a conflict.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .
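For reference, Paul's suggestion maps to the identity range settings on the merge article; a sketch with placeholder names and example range sizes (on SQL 2005 the parameter is @identityrangemanagementoption, while SQL 2000 uses @auto_identity_range instead):
-- Example only: add an article with automatic identity range management
EXEC sp_addmergearticle
    @publication = 'MyPublication',
    @article = 'MyTable',
    @source_object = 'MyTable',
    @identityrangemanagementoption = 'auto',
    @pub_identity_range = 10000,  -- range reserved for the publisher
    @identity_range = 1000,       -- range handed to each subscriber
    @threshold = 80               -- percent used before a new range is assigned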

Merge Replication Conditional Filter with UDF Problem

I'm performing Merge Replication between SQL Server 2000/SQL CE using Dynamic Filters.
I would like a conditional filter based on the Host_Name(). I've written and tested a UDF
in Query Analyzer called SELECT_Stores_RepID. In the Dynamic Filters properties for the
Stores Table I've tried many different things such as:
SELECT <published_columns> FROM [dbo].[Stores] WHERE dbo.SELECT_Stores_RepID (HOST_NAME())
All of them give me errors. My question is: what is the correct syntax to use my UDF? The documentation
on this is very poor.
Cheers
it should be something like this
select <published_columns> from <<Table>> where au_id in (select au_id from dbo.Authors_filter('CA'))
There may be more efficient ways of doing this.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
|||Hi Hilary,
Thanks for the help. I tried to apply it to my Dynamic Filter...
SELECT <published_columns> FROM [dbo].[Stores] WHERE
Stores.StoreID IN (SELECT StoreID FROM dbo.SELECT_Stores_RepID(HOST_NAME())
but I'm receiving the following error...
Incorrect Syntax near '('
Once again, I can't see where the problem is with this statement?
|||Sorry typo in my post ...
(SELECT StoreID FROM dbo.SELECT_Stores_RepID(HOST_NAME())
Should be (extra bracket)...
(SELECT StoreID FROM dbo.SELECT_Stores_RepID(HOST_NAME()))
Still getting the same problem as before.
|||Can you post your UDF?
BTW - I should warn you that UDFs don't always offer the best performance. Also, HOST_NAME() is evaluated where the merge agent is run. If you kick it off in EM running on your desktop, it will evaluate to your machine name.
If you kick it off on the subscriber as a pull subscription, it will resolve correctly.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
|||I'm still getting this error in replication. Does anyone know if it's a known issue?
The error is that when I call my UDF below in my Dynamic Filter and pass in
HOST_NAME(), I get an incorrect syntax near '(' error.
Ciaran
-- Cube wrote: --
Hi Hilary,
Please find below my UDF. The RepID in my table is an int, which I CAST so I can
compare it to my HOST_NAME(). I'm interested to know your opinion. Also, if UDFs
don't offer the best performance, do I have other options apart from creating a separate
publication?
ALTER FUNCTION dbo.SELECT_Stores_RepID
(
    @inRepID char(4)
)
RETURNS @Stores TABLE (StoreID int)
AS
BEGIN
    IF (@inRepID = '8')
        INSERT INTO @Stores
        SELECT StoreID FROM Stores
    ELSE
        INSERT INTO @Stores
        SELECT StoreID
        FROM Stores
        WHERE CAST(Stores.RepID AS char(4)) = @inRepID
    RETURN
END
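A quick way to sanity-check the function outside of replication (the RepID values here are only examples):
-- Should return every StoreID for the 'all data' rep
SELECT StoreID FROM dbo.SELECT_Stores_RepID('8')
-- Should return only the stores mapped to that rep
SELECT StoreID FROM dbo.SELECT_Stores_RepID('3')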
BTW - Some history behind all this. I have some handheld devices deployed with my custom app.
The replication depends on the RepID, which is set through the HOST_NAME() on the device, pulling
back the user's specified data. Now, there is a requirement to pull back all data in some cases,
e.g. management, without modifying the software on the client. Hence the reason for the
IF RepID = X conditional filter.

merge replication compatibility issue

All,
I have a SQL 2005 server that is running merge replication against SQL 2000
boxes. One of my SQL 2000 boxes needed to be replaced due to disk issues. I
scripted out my subscription from my 2005 box and tried to reapply it after I
rebuilt my new 2000 box. When I run the script I get an error saying:
"Publication 'PubName' cannot be added to database 'DBName', because a
publication with a higher compatibility level already exists. All merge
publications in a database must have the same compatibility level."
I am in the process of upgrading all my 2000 boxes, but since I was having
issues with this one, I wanted to get it back to its original state before
upgrading it.
Right now there is a mix of replications running, pointing to a few SQL 2000
boxes and some SQL 2005.
Is there a way to get around this before upgrading my new box to 2005?
TIA,
John
Similar, but what I had to do was to change the compatibility level of all my
subscriptions to 2005, and then I was able to add this new one back in.
Thanks for the response.
"Paul Ibison" wrote:

> Is this related to your issue:
> http://www.microsoft.com/communities/newsgroups/en-us/default.aspx?dg=microsoft.public.sqlserver.replication&tid=4e5d642b-a4c0-4574-8bae-98fcc88e19b8&p=1
> i.e. I'm wondering if you have a republishing setup.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
>
>
|||John,
I had a very similar situation just the other day, where somebody deleted a
publication at SQL 2000's compatibility level while
there were some pubs already created at 2005's level, and I needed to
recreate the publication.
I somehow managed to fix it, but I'm afraid I might have lost some
unsynchronized data in the process - I'm not sure it was the best way to do
it. Could you post a step by step description of how you handled it?
thanks,
r
"jaylou" <jaylou@.discussions.microsoft.com> wrote in message
news:C284873D-DD99-45A5-A191-047EE33E004E@.microsoft.com...[vbcol=seagreen]
> Similar, but what I had to do was to change the compatibilty level of all
> my
> subscriptions to 2005 and I was able to add this new one back in.
> Thanks for the response.
> "Paul Ibison" wrote:
|||I scripted out my replication as a delete and create.
I ran the delete with no issues, but when I tried to run the create, that's
when I fell into the compatibility issue.
I changed all the SQL 2000 compatibility levels to 2005 and changed my script
from 80RTM to 90RTM on the create statement of the script SQL generated.
I reran the create and it created with no issues. I started my agent and
synced my data. Since I only care about what happens here in my main office,
anything changed at my branch got overwritten if there were any changes at
both places.
Is that the same way you did yours? Maybe you had a better solution; mine
seemed to work for me, but I am always looking for better and different
solutions.
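For reference, the compatibility level change itself can be done per publication with sp_changemergepublication; a sketch (the publication name is a placeholder, and forcing snapshot invalidation and reinitialization may or may not be acceptable in your environment):
-- Run at the publisher for each publication still at 80RTM
EXEC sp_changemergepublication
    @publication = 'MyPublication',
    @property = 'publication_compatibility_level',
    @value = '90RTM',
    @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1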
"Rafael Lenartowicz" wrote:

> John,
> I had very similar situation just the other day, where somebody deleted
> publication on SQL2000's level while
> there were some pubs already created in 2005's level and I needed to
> recreate the publication.
> I somehow managed to fix it, but I'm afraid I might have lost some
> unsynchronized data in the process - I'm not sure it was the best way to do
> it. Could you post step by step description how did you handle it?
> thanks,
> r
> "jaylou" <jaylou@.discussions.microsoft.com> wrote in message
> news:C284873D-DD99-45A5-A191-047EE33E004E@.microsoft.com...
>
>
|||How did you manage to change the compatibility level? Using
sp_changemergepublication?
I wasn't able to do it; it was complaining about an incompatible
snapshot...
Besides, in my setup the data from the remote location IS the vital data -
they submit their
daily revenue reports to the central office reporting server, so I can't have
anything overwritten or lost.
thanks
r
"jaylou" <jaylou@.discussions.microsoft.com> wrote in message
news:EE97BCC9-6445-44F4-BF33-D115DDD98B43@.microsoft.com...[vbcol=seagreen]
>I scrpited out my replication as a delete and create.
> I ran the delete with no issues, but when I tried to run the create that's
> when I fell into the compatibilty issue.
> i changed all the SQL 200 compatibity levels to 2005 and changed my script
> from 80rtm to 90rtm on the create statement of the script SQl generated.
> I reran the create and it created with no issues. I started my agent and
> synced my data since I only care about what happens here in my main office
> anything changed at my branch got overwritten if there were any changes at
> both places.
> Is that the same way you did yours? maybe you had a better solution, mine
> seemed to work for me but I am always looking for better and different
> solutions.
> "Rafael Lenartowicz" wrote:

Merge replication columns limit?

Hi,

I am experiencing a weird problem with merge replication and SQL 2000 SP4.

We have had replication working for 4 years without a problem. One of the tables that we replicate has been growing in columns; now it has 55 columns.

I have noticed a problem: when I update the last column of the table and wait for the change to show up on the subscriber, the change is not there. But if I only replicate the first 40 columns, it works.

Changes made:

1) Upgraded to SP4
2) Added some extra fields using sp_repladdcolumn

Is this a known bug? Any ideas?

Regards

It looks like the adding of the column is not regenerating the triggers correctly. Could you profile the sp_repladdcolumn call and see if the trigger generation has any issue there?

On a side note, there are known issues with SQL 2000 + column tracking + vertical partitioning + DDL.

Is this a production box? I would then recommend contacting CSS.

Also would this have happened on SQL 2000 SP3 too? If you can test that too, it would be great.
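For what it's worth, the column add presumably looked something like this (object and column names and the type are placeholders), and this is the call worth profiling:
-- Example only: add a column to a published table and to all of its publications
EXEC sp_repladdcolumn
    @source_object = 'dbo.MyWideTable',
    @column = 'NewColumn56',
    @typetext = 'int NULL',
    @publication_to_add = 'all'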

|||Forgot to add that 55 is definitely not the limit for number of columns. The limit I think is 246 for SQL 2005 and I think it should be the same for SQL 2000 too.

Merge Replication Code Example - Where Can I Find One?

Hi,
I'm trying to program (c#) merge replication with one publisher DB
(SQL2000) and many subscribers (MSDE). I'm trying to find info and
examples online, but it seems pretty scarce. If you can recommend a
site, please let me know.
Thanks,
JJ
try this
http://support.microsoft.com/default...b;en-us;319646
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"JJ" <joe.jabour@.gmail.com> wrote in message
news:1118069995.044129.200820@.g47g2000cwa.googlegr oups.com...
> Hi,
> I'm trying to program (c#) merge replication with one publisher DB
> (SQL2000) and many subscribers (MSDE). I'm trying to find info and
> examples online, but it seems pretty scarce. If you can recommend a
> site, please let me know.
> Thanks,
> JJ
>
|||Looks good, Thanks!

Merge Replication changes sequence

I have a question for merge replication.
Scenario: Publication with 3 articles (tables A, B and C) and the sequence
of changes at one subscriber are C (delete row), B (insert row), A (delete
row) and B (insert row).
Questions:
1) Does the merge replication agent keep this sequence when updating the
tables at the Publisher and at the other Subscribers?
2) If not, what's the order the merge agent follows for updating the data if
any?
3) Is there a difference if the SQL server version is 2005, 2000 or 7.0?
Thanks,
Ivar
Answers inline.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Ivar" <Ivar@.discussions.microsoft.com> wrote in message
news:A16A74B3-43AA-4F45-9620-E84EC6B350BA@.microsoft.com...
>I have a question for merge replication.
> Scenario: Publication with 3 articles (tables A, B and C) and the sequence
> of changes at one subscriber are C (delete row), B (insert row), A (delete
> row) and B (insert row).
> Questions:
> 1) Does the merge replication agent keep this sequence when updating the
> tables at the Publisher and at the other Subscribers?
It is impossible to predict what sequence they will be applied in.
> 2) If not, what's the order the merge agent follows for updating the data
> if
> any?
Basically deletes are processed first, then it is done according to the
article id.
> 3) Is there a difference if the SQL server version is 2005, 2000 or 7.0?
No, they all apply the DML in a random manner, except that deletes are
processed first. In SQL 2005 you can use logical records, which means that
parents will be modified before the children.

> Thanks,
> Ivar
|||As well as Hilary's answer, this might help you to understand the merge
article processing order: http://support.microsoft.com/kb/307356
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
|||Hilary and Paul,
Your comments have been a great help for me. In my application I'm using
triggers, and due to the order in which the Merge Agent was sending changes, the
trigger was rolling them back and the changes did not propagate.
Now I have modified the article order in my publication and everything is working
fine.
Thank you,
Ivar
"Ivar" wrote:

> I have a question for merge replication.
> Scenario: Publication with 3 articles (tables A, B and C) and the sequence
> of changes at one subscriber are C (delete row), B (insert row), A (delete
> row) and B (insert row).
> Questions:
> 1) Does the merge replication agent keep this sequence when updating the
> tables at the Publisher and at the other Subscribers?
> 2) If not, what's the order the merge agent follows for updating the data if
> any?
> 3) Is there a difference if the SQL server version is 2005, 2000 or 7.0?
> Thanks,
> Ivar

Merge replication causing vast amount of data to move.

Hi,
We have set up merge replication with 1 publisher and 5 subscribers, 4 of
which are filtered.
All databases were synched up and everything was great. I then proceeded to
update the data at one
of the subscribers by inserting 15000 rows or so. Then, when the
subscribers next began their synching, all manner of row counts were
being sent all over the show. I was expecting 15000 rows to be updated at
each subscriber, but no, this was not to happen; it seems that there are
updates and inserts happening at the subscribers. Is this caused by
replication, by the databases trying to make themselves exact
replicas of each other and hence changing data via inserts and updates?
Is this going to continue until all the data is exactly the same at each
site?
Is there some way to predict all this? The reason I ask is because we are
using satellite to do the synching and it costs $3.50 per minute.
Can anyone shed some light on this issue? I am trying to get an
understanding of why this is happening?
Warren,
For these 15000 inserts I would expect 15000 inserts on the publisher,
followed by inserts at each subscriber. The number of inserts at the
subscribers would depend on the filters involved, but if there weren't any
filters then I'd expect 15000 records downloaded when the individual
subscriber merge agents run. If you have only performed inserts at one
subscriber and there's no other activity then I'm not sure what is causing
the updates. Can you look at MSmerge_history for the period when you did the
insert so we can see some figures?
select publisher_insertcount, publisher_updatecount, publisher_deletecount,
subscriber_insertcount, subscriber_updatecount, subscriber_deletecount from
dbo.MSmerge_history
Regards,
Paul Ibison
|||Hi Paul,
Thanks for your response.
I forgot to mention that updates were run on the publisher a few days ago.
So perhaps this is the reason for all the updates and deletes that are going
on. In fact I am sure of it, and I am sure you are going to agree with me on
this as well. The problem I find is that a subscriber will synch up with the
publisher today (updates, inserts etc.) and tomorrow it will have new updates
and inserts when it synchs again. This is weird; surely one resynch is enough,
especially seeing as no data is being changed by us?
All we want is for all the subscribers to reach a point where they all return "No
Data Needed to be Merged".
Regards
Warren Patterson
"Warren Patterson" <des@.newsgroups.nospam> wrote in message
news:usvs4yUTEHA.384@.TK2MSFTNGP10.phx.gbl...
> Hi,
> We have setup merge replication with 1 publisher and 5 subscribers. 4 of
> which are filtered.
> All databases were synched up and everything was great. I then proceeded
to
> update the data at one
> of the subscribers, by inserting 15000 rows or so. Then when next the
> subscribers began their synching all manners of amounts of rows of data
were
> being sent all over the show. I was expecting 15000 rows to be updated at
> each subscriber. but no, this was not to happen, seems that there are
> updates and inserts happening at the subscribers. Is this a cause of
> replication, being caused by the databases trying to make themseleves
exact
> replicas of each other and hence changing of data via inserts and updates?
> Is this going to continue until all the data is exactly the same at each
> site.
> Is there some way to predict all this? The reason i ask is because we are
> using satellite to do the synching and it costs $3.50 per minute.
> Can anyone shed some light on this issue? I am trying to get an
> understanding of why this is happening?
>
>
|||Warren,
what I'd do is track the generation numbers on publisher and subscriber to see why this is happening. To do this you need to identify which rows are coming over and why/at what point they are changed to cause the merge agent to think they should be replicated. The generation numbers are held in MSmerge_contents, MSmerge_genhistory and MSmerge_replinfo. Using these tables you can see what changes are ready to be downloaded per subscriber.
HTH,
Paul Ibison
|||Hi Paul,
Thanks for the reply.
Is it possible for you to give me any more detailed info on how to track
the generation numbers? I haven't done this before.
Just a bit of extra info, the subscribers are in another country, so
accessing them is not possible.
Subscriber initiates the satellite dial up. We cant initiate it.
Regards
Warren
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:17D601FE-18D2-4C2B-8A85-6DA521A42003@.microsoft.com...
> Warren,
> what I'd do is track the generation numbers on publisher and subscriber to
see why this is happening. To do this you need to identify which rows are
coming over and why/at what point they are changed to cause the merge agent
to think they should be replicated. The generation numbers are held in
MSmerge_contents,MSmerge_genhistory and MSmerge_replinfo. Using these tables
you can see what changes are ready to be downloaded per subscriber.
> HTH,
> Paul Ibison
|||Warren,
on a subscriber you can get the generation numbers for your article using an inner join:
SELECT MSmerge_contents.generation, testMerge.id, testMerge.name
FROM testMerge INNER JOIN
MSmerge_contents ON testMerge.rowguid = MSmerge_contents.rowguid
There should be different generation values returned in your case. To see which ones haven't yet been synchronized, compare these values to the relevant value of sentgen in MSmerge_replinfo for your subscription (sysmergesubscriptions will tell you the ID).
The query above in your case might be enough. Just look at the highest value generation number and examine this row. Once you've found the row that has changed, hopefully you can find out who/what has changed it.
Regards,
Paul Ibison
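A sketch of that comparison (this assumes the SQL 2000 merge metadata layout, where MSmerge_replinfo.repid joins to sysmergesubscriptions.subid):
-- Highest generation already sent to each subscription
SELECT s.subscriber_server, s.db_name, r.sentgen
FROM dbo.MSmerge_replinfo r
JOIN dbo.sysmergesubscriptions s ON s.subid = r.repid
-- Changes with a generation above that sentgen value are still pending
SELECT generation, tablenick, rowguid
FROM dbo.MSmerge_contents
WHERE generation > 100   -- replace 100 with the sentgen value found above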
|||Hi Paul,
Thanks for your response, unfortunately, the subscriber is in another
country and I cant access the database from here. But, the data movement
seems to be settling down now. I will have to monitor it for a while and see
what happens and then try and to follow your advice somehow.
Thanks for your help.
Kind Regards
Warren
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:3A516C99-34A5-4C21-AA49-716693096412@.microsoft.com...
> Warren,
> on a subscriber you can get the generation numbers for your article using
an inner join:
> SELECT MSmerge_contents.generation, testMerge.id, testMerge.name
> FROM testMerge INNER JOIN
> MSmerge_contents ON testMerge.rowguid =
MSmerge_contents.rowguid
> There should be different generation values returned in your case. To see
which ones haven't yet been synchronized, compare these values to the
relevant value of sentgen in MSmerge_replinfo for your subscription
(sysmergesubscriptions will tell you the ID).
> The query above in your case might be enough. Just look at the highest
value generation number and examine this row. Once you've found the row that
has changed, hopefully you can find out who/what has changed it.
> Regards,
> Paul Ibison

merge replication causing cpu spikes?

Hi ... having a problem with an application that relies on merge replication
among three SQL Servers. Two of the servers are public, the other's behind a
firewall. Periodically the CPUs on the public servers spike, and
sqlserver.exe is consuming the whole thing. Running "select * from
master.sysprocesses order by cpu desc " revealed a lot of "lastwaittype"
records pointing to NETWORKIO as the source. It appears port 1433 is open
between these servers, but none of the Microsoft networking ports seem to be.
Replication has been suggested as the prime suspect, but sqlagent.exe doesn't
seem to be consuming many CPU cycles by comparison.
Aside from the NIC and basic SQL Server counters, what performance counters
should I monitor to get to the source of the problem? Any other suggestions
to determine exactly why this is happening to SQL?
Thanks in advance,
Dan
This is a tough one. First of all don't get too excited by CPU spikes. It
just means that your processor is busy. A while loop is able to throw your
CPU into a tizzy; for example, open up a large log file and do a global
search and replace.
CPU spikes become problematic when they last a long time. How long does the
cpu spike last for?
What happens when you shut down SQL Server Agent? Do you still get these
spikes?
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"Dan Maniotis" <Dan Maniotis@.discussions.microsoft.com> wrote in message
news:F3117348-73DF-4179-A810-0783BB24EC23@.microsoft.com...
> Hi ... having a problem with an application that relies on merge
replication
> among three SQL Servers. Two of the servers are public, the other's behind
a
> firewall. Periodically the CPUs on the public servers spikes, and
> sqlserver.exe is consuming the whole thing. Running "select * from
> master.sysprocesses order by cpu desc " revealed a lot of "lastwaittype"
> records pointing to NETWORKIO as the source. It appears port 1433 is open
> between these servers, but none of the Microsoft networking ports seem to
be.
> Replication has been suggested as the prime suspect, but sqlagent.exe
doesn't
> seem to be consuming many CPU cycles by comparison.
> Aside from the NIC and basic SQL Server counters, what performance
counters
> should I monitor to get to the source of the problem? Any other
suggestions
> to determine exactly why this is happening to SQL?
> Thanks in advance,
> Dan
|||I should have been more specific ... these spikes are pretty devastating,
with the application that relies on the database becoming completely
unresponsive ... forcing us to restart the application.
We didn't shut down SQLAgent, but we did shut down the application that
makes database changes, so the agent had a lot less work to do ... the
assumption is that it catches up on its backlogged replication duties by the
time the app is brought back online (usually within a half hour or so). We're
putting some performance counters in place to prove/disprove that theory.
Is shutting down the agent relatively safe with merge replication? I've done
that using transactional replication with no negative results, but I wanted
to err on the side of caution.
Thanks *very* much for your help.
Best Regards,
Dan
"Hilary Cotter" wrote:

> This is a tough one. First of all don't get too excited by CPU spikes. It
> just means that your processor is busy. A while loop is able to throw your
> CPU into a tizzy. for example open up a large log file and do a global
> search and replace.
> CPU spikes become problematic when they last a long time. How long does the
> cpu spike last for?
> What happens when you shut down SQL Server Agent? Do you still get these
> spikes?
> --
> Hilary Cotter
> Looking for a book on SQL Server replication?
> http://www.nwsu.com/0974973602.html
>
> "Dan Maniotis" <Dan Maniotis@.discussions.microsoft.com> wrote in message
> news:F3117348-73DF-4179-A810-0783BB24EC23@.microsoft.com...
> replication
> a
> be.
> doesn't
> counters
> suggestions
>
>
|||Yes, shutting down the merge agent is safe to do and has little impact on a
server. The longer you have the merge agent shut down, the more impact it
will have on the system.
The reason I say this is because every time there is some update activity on
the published tables, the transaction is logged in MSmerge_contents and
MSmerge_tombstone. As these tables get large, the logging takes longer and
performance is degraded, although normally not significantly.
Ideally you want your merge agent to run frequently so it can process small
chunks of data, as opposed to run hourly and process large chunks of data.
Perhaps change your polling interval to every 10 seconds, and set your merge
agent to run every 5 minutes and see if this has an impact on performance.
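As a sketch, those two knobs live on the merge agent job: the job schedule controls how often a scheduled agent runs, and for a continuously running agent the -PollingInterval parameter on the job step command line controls how often it checks for changes (server, database and publication names below are placeholders):
replmerg.exe -Publisher PUBSRV -PublisherDB PubDB -Publication MyPub
    -Subscriber SUBSRV -SubscriberDB SubDB -Distributor PUBSRV
    -Continuous -PollingInterval 10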
One more thing - are you using filtering?
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"Dan Maniotis" <DanManiotis@.discussions.microsoft.com> wrote in message
news:8A030C31-E60B-452D-B943-E9B3AE1B883F@.microsoft.com...
> I should have been more specific ... these spikes are pretty devastating,
> with the application that relies on the database becoming completely
> unresponsive ... forcing us to restart the application.
> We didn't shut down SQLAgent, but we did shut down the application that
> makes database changes, so the agent had a lot less work to do ... the
> assumption is that it catches up on its backlogged replication duties by
the
> time the app is brought back online (usually within a half hour or so).
We're
> putting some performance counters in place to prove/disprove that theory.
> Is shutting down the agent relatively safe with merge replication? I've
done
> that using transactional replication with no negative results, but I
wanted[vbcol=seagreen]
> to err on the side of caution.
> Thanks *very* much for your help.
> Best Regards,
> Dan
> "Hilary Cotter" wrote:
t[vbcol=seagreen]
your[vbcol=seagreen]
the[vbcol=seagreen]
behind[vbcol=seagreen]
"lastwaittype"[vbcol=seagreen]
open[vbcol=seagreen]
to[vbcol=seagreen]
|||We're not using any filtering at all, and the merge agent is set to run every
minute.
Learned something more about the network setup: *all* ports are blocked
upstream from the subscribers to the publisher: the publisher's behind a
firewall and the subscribers are in front of it. The public subscribers' data
is updated much more frequently than the publisher's, but it's incumbent on
the publisher to initiate replication.
My uninformed impression was that merge replication would require two-way
communication, but at least one of our admins believes the replication
connections are initiated by the publisher, so this is a non-issue.
And just to make things really interesting, sqlagent isn't logging on
the publisher (no sqlagent.out to check for errors). Gah.
Thanks again,
Dan
"Hilary Cotter" wrote:

> Yes, shutting down the merge agent is safe to do and has little impact on a
> server. The longer you have the merge agent shut down for the more impact it
> will have on the system.
> The reason I say this is because every time there is some update activity on
> the published tables, this transaction is logged in msmerge_contents and
> msmerge_tombstone. As these tables get large, the logging takes longer and
> performance is degraded, although normally not signficantly.
> Ideally you want your merge agent to run frequently so it can process small
> chunks of data, as opposed to run hourly and process large chunks of data.
> Perhaps change your polling interval to every 10 seconds, and set your merge
> agent to run every 5 minutes and see if this has an impact on performance.
> One more thing - are you using filtering?
> --
> Hilary Cotter
> Looking for a book on SQL Server replication?
> http://www.nwsu.com/0974973602.html
>
> "Dan Maniotis" <DanManiotis@.discussions.microsoft.com> wrote in message
> news:8A030C31-E60B-452D-B943-E9B3AE1B883F@.microsoft.com...
> the
> We're
> done
> wanted
> t
> your
> the
> behind
> "lastwaittype"
> open
> to
>
>

Merge replication broke after SQL 2005 Evaluation upgrade to Enterprise

I'm trying to upgrade a SQL Server 2005 Evaluation instance that has a tuned-up and working merge replication setup.

- I insert the disc with the licensed SQL Server 2005 Enterprise, start the installation, indicate the instance to install to, and in the list of options I see the "upgrade" option.
- Everything passes fine and the setup completes.

After the upgrade I get completely broken merge replication!

Moreover, it feels as if there is some strange trouble with SQL Agent:
all items in the context menu in Replication Monitor are disabled, and if I try to start replication manually through the jobs, that fails too, with an "agent shutdown" error!

I tried to build a "clean" scenario: installed Windows 2003 Ent SP1 and SQL 2005 Eval in Virtual PC, set up my test replication and upgraded SQL Server to the Enterprise edition - and got the same trouble too!!!

What is the best way to upgrade my evaluation SQL servers with replication to the licensed SQL Enterprise version?

I've moved this thread to the replication forum, where you're more likely to get an answer.

Paul

|||

Hi, Nikolai,

What error message did you observe when you say merge replication is broken? Remember that after you upgrade to SQL Server 2005 RTM (either from SQL 2000 or SQL 2005 Eval), you need to re-run the snapshot job for the merge replication upgrade to complete; from then on you can run the normal merge agent job. Otherwise, you will hit an error like "You must rerun snapshot because current snapshot files are obsolete.". Let me know if you're seeing a different error.

Thanks,

Zhiqiang Feng

This posting is provided "AS IS" with no warranties, and confers no rights.
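For reference, re-running the snapshot can be done from the Snapshot Agent job, or with something along these lines at the publisher (the publication name is a placeholder):
-- Run in the publication database after the upgrade
EXEC sp_startpublication_snapshot @publication = 'MyMergePublication'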

Merge Replication between two servers not connected via internet and on two separate domains

I have two sql server 2000 databases and I wish to do merge replication between them, the problem is that they are not connected via the internet (no ping) and are on two separate domains in different countries.

I can use email and I can use encrypted ftp.

I am wondering if I could use SQL CE merge replication to a Pocket PC, which creates an .sdf file that I then email backwards and forwards, and build identical publications and subscriptions at each server to merge sync with this .sdf file through a Pocket PC.

Has anyone done this before?

Does anyone have any other suggestions?

It's not going to be possible. Merge replication is built with partners that can publish/subscribe to each other. If all you have is email/ftp, you will have to roll your own "merge" - take a look at bcp/Bulk Insert/DTS in Books Online.
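A very rough sketch of the roll-your-own route using bcp over the encrypted ftp link (server, database and table names are placeholders; this only moves data and does nothing about conflict detection):
rem On site A: export the table in native format
bcp MyDB.dbo.Orders out Orders.dat -n -S SERVER_A -T
rem ...ship Orders.dat via encrypted ftp or email...
rem On site B: load into a staging table, then reconcile by hand
bcp MyDB.dbo.Orders_staging in Orders.dat -n -S SERVER_B -T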

Merge replication between two publishers with dynamic filters

Hello,
I am working on a distributed database system in which each site is a
publisher of a filtered set of data. It is necessary that a publisher can
subscribe to another publisher.
I am using merge replication, dynamic filters and push subscriptions. Each
site publishes the same tables with another filter.
When I test this scenario, I notice that the merge agent replicates only the
changes of the publisher to the subscriber. A change on the subscriber (that
also has a publication on the same tables) is not replicated to the publisher.
Does someone know a solution for this?
Is it possible to have (bidirectional) merge replication between two
publishers with dynamic filters?
I hope someone can help me with this.
thanks in advance!
Marco Broenink
Could you please describe in more detail your publisher subscriber
configurations and the filters.
If the table has a column that you are using to filter from Publisher to
Subscriber, and you then make this subscriber a republisher and try to use
the same column as the filter column, there is only one set of data at the
republisher/subscriber.
I am not clear on your setup. Could you please repost with more elaborate
setup steps?
Hope that helps
--Mahesh
[ This posting is provided "as is" with no warranties and confers no
rights. ]
"Marco Broenink" <MarcoBroenink@.discussions.microsoft.com> wrote in message
news:CD4D84B8-A940-46BF-AC9A-5FF5098D2200@.microsoft.com...
> Hello,
> I am working on a distributed database system in which each site is a
> publisher of a filtered set of data. It is necessary that a publisher can
> subscribe to another publisher.
> I am using merge replication, dynamic filters and push subscriptions. Each
> site publishes the same tables with another filter.
> When I test this scenario, I notice that the merge agent replicates only
the
> changes of the publisher to the subscriber. A change on the subscriber
(thas
> has also a publication on the same tables) is not replicated to the
publisher.
> Does someone know a solution for this?
> Is it possible to have (bidirectional) merge replication between two
> publishers with dynamic filters?
> I hope someone can help me with this.
> thanks in advance!
> Marco Broenink
|||Thanks for your response!
I am using a dynamic filter. This filter uses a function. This function
needs the hostname and a filter column to determine whether the row needs to be
filtered. The filter looks like:
SELECT <published_columns> FROM [dbo].[PublishedTable]
WHERE 1 = [dbo].[fn_DynamicFilter]([FilterColumn], HOST_NAME())
The filter function uses a mapping table that maps the contents of the
[FilterColumn] to hostnames.
With this mapping table, each publisher publishes its own part of all data.
So the publications of two publishers do not overlap, but the publications
are on the same tables.
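For context, a minimal sketch of such a filter function (the mapping table dbo.HostFilterMap and its columns, and the int type of [FilterColumn], are assumptions, not the poster's actual schema) could look like:
-- Assumed mapping table: dbo.HostFilterMap(FilterValue, HostName)
CREATE FUNCTION dbo.fn_DynamicFilter
(
    @FilterValue int,          -- assumed type of [FilterColumn]
    @HostName nvarchar(128)
)
RETURNS int
AS
BEGIN
    -- Return 1 when the row belongs to the publisher identified by @HostName
    IF EXISTS (SELECT 1 FROM dbo.HostFilterMap
               WHERE FilterValue = @FilterValue AND HostName = @HostName)
        RETURN 1
    RETURN 0
END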
The problem with this configuration is that changes made at a subscriber are not
replicated to the publisher. It looks like the subscriber's own
publication is blocking this.
I hope you can help me with this.
greetings, Marco.
"Mahesh [MSFT]" wrote:

> Could you please describe in more detail your publisher subscriber
> configurations and the filters.
> If the table has a column that you are using to filter from Publisher to
> Subscriber and then make this subscriber a republisher and then try to use
> the same column as the filter column, there is only set of data at the
> republisher/subscriber.
> I am not clear on your setup. Could you please repost with more elaborate
> setup steps?
> Hope that helps
> --Mahesh
> [ This posting is provided "as is" with no warranties and confers no
> rights. ]
> "Marco Broenink" <MarcoBroenink@.discussions.microsoft.com> wrote in message
> news:CD4D84B8-A940-46BF-AC9A-5FF5098D2200@.microsoft.com...
|||I have used 'global' subscriptions instead of 'local' and this problem is
solved.
Now the changes are also replicated from subscriber to publisher.
Unfortunately, I have a new problem.
I use replication with dynamic filters. In my system it is possible that
data is added to the subscriber that does not pass the filter. When
replicating, this data is deleted at the subscriber and added to the
publisher.
How can I prevent this delete and insert?
thanks in advance, Marco
"Marco Broenink" wrote:
> Thanks for your response!
> I am using a dynamic filter. This filter uses a function. This function
> needs the hostname and a filter-column to dermine if the row needs to be
> filtered. The filter looks like:
> SELECT <published_columns> FROM [dbo].[PublishedTable]
> WHERE 1 = [dbo].[fn_DynamicFilter]([FilterColumn], HOST_NAME())
> The filterfunction uses a mapping table that maps the contents of the
> [FilterColumn] to hostnames.
> With this mapping table, each publisher publishes its own part of all data.
> So the publications of two publishers do not overlap. But the publications
> are on the same tables.
> Problem with this configuration is that changes of a subscriber are not
> replicated to the publisher. It looks like that the subscriber's own
> publication is blocking this.
> I hope you can help me with this.
> greetings, Marco.
> "Mahesh [MSFT]" wrote:
|||Glad that you could work around your first problem, though to be frank, I am
still unclear of the setup.
Regarding your new problem,
If each subscriber inserts data that corresponds only to its own subset of
data, then you could try adding a default of some kind to the tables, like
the host name or something else that maps appropriately to the filter
condition and makes it pass. That way, every time an insert happens at the
subscriber, the filter condition is met, the row is successfully propagated
to the publisher, and it does not get deleted at the subscriber in turn.
Please note that this can work only if the subscriber always makes
"good" inserts, that is to say that the subscriber never expects to insert
data (that does not satisfy the filter) and then in turn expects the data to
be deleted by the publisher.
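As a rough illustration of the idea (names are placeholders; this assumes the filter column literally stores the owning host name - with a mapping table you would instead default to a value that maps to the local host):

-- Hypothetical: PartitionHost is the column the dynamic filter evaluates.
-- Defaulting it to HOST_NAME() makes local inserts pass the local filter.
ALTER TABLE dbo.PublishedTable
    ADD CONSTRAINT DF_PublishedTable_PartitionHost
    DEFAULT (HOST_NAME()) FOR PartitionHost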
Hope that helps
--Mahesh
[ This posting is provided "as is" with no warranties and confers no
rights. ]
"Marco Broenink" <MarcoBroenink@.discussions.microsoft.com> wrote in message
news:664513B1-B610-453F-B033-6FD1B8720BE1@.microsoft.com...
> I have used 'global' subscriptions in stead of 'local' and this problem is
> solved.
> Now the changes are also replicated from subscriber to publisher.
> Unfortunately, I have a new problem.
> I use replication with dynamic filters. In my system it is possible that
> data is added to the subscriber that doesnot pass the filter. When
> replicating, this data is deleted at the subscriber and added to the
> publisher.
> How can I prevent this delete & insert ?
> thanks in advance, Marco
>
> "Marco Broenink" wrote:
|||thanks again for the response.
In my topology, I have different publishers of the same table. These
publishers use different filters. A subscriber can be subscribed to different
publishers.
For example:
Site A publishes table1
Site B publishes table1
Site C is subscribed to Site A table1. This subscription is filtered with a
dynamic filter F1.
Site C is also subscribed to Site B table1. This subscription is filtered
with another dynamic filter F2.
The different dynamic filters make sure that the subscription to Site A does
not overlap the subscription to Site B.
Thus: The table1 of C contains a subset of table1 of A and a subset of
table1 of B.
So: table1 of C contains two types of data:
- data that meets filter condition F1 and does not meet filter condition F2.
- data that meets filter condition F2 and does not meet filter condition F1.
So the problem is: the subscriber will contain data that does not meet one of
the filter conditions. When replicating to Site A (filter F1), data of filter
F2 is deleted. When replicating to Site B (filter F2), data of filter F1 is
deleted.
So in this scenario, I think it is not possible to make only 'good' inserts,
because it always violates one of the two filter conditions.
I hope you know a solution. Or am I trying to do something impossible?
greetings, Marco Broenink
"Mahesh [MSFT]" wrote:

> Glad that you could work around your first problem, though to be frank, I am
> still unclear of the setup.
> Regarding your new problem,
> If each subscriber inserts data that corresponds to only its subset of data
> then you could try using a default of some kind to the tables. Like hostname
> or something that will map appropriately to the filter condition and make it
> pass. So everytime an insert happens at the subscriber, the filter condition
> is met and then is successfully propagated to the publisher and does not get
> deleted at the subscriber in turn.
> Please note that this can work only if the subscriber always makes
> "good" inserts, that is to say that the subscriber never expects to insert
> data (that does not satisfy the filter) and then in turn expects the data to
> be deleted by the publisher.
> Hope that helps
> --Mahesh
> [ This posting is provided "as is" with no warranties and confers no
> rights. ]
> "Marco Broenink" <MarcoBroenink@.discussions.microsoft.com> wrote in message
> news:664513B1-B610-453F-B033-6FD1B8720BE1@.microsoft.com...
|||Hi Marco,
Please correct me if I understood wrong:
So what you are saying is SiteA and SiteB are publishing the same tables,
but are not replicating to each other. Is that right?
But in turn are replicating that table to SiteC.
This is not supported.
In the first place, when SiteC subscribed to SiteA, it got the table from
SiteA. Now when you configured SiteC to subscribe to SiteB, how did you
configure it? Did you configure a no-sync subscription? If not, and you used
all the default settings, then you will not even be able to complete the
subscription, because an attempt will be made to drop the table at SiteC
(received from SiteA) and recreate it with the scripts from SiteB, which will fail.
If you want to do what you are trying to do, one solution is to have table1
at SiteA and replicate it to SiteC with the proper filter. Have table2 at
SiteB and replicate that to SiteC with the proper filter.
On the subscriber you can have a view on those two tables that will give you
a combined view for the results. But you may not be able to make DMLs on the
view directly. You will still need to do the DMLs on the actual tables.
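A rough sketch of that subscriber-side view (view and table names are placeholders):

-- Combined read view at SiteC over the two replicated tables;
-- DML should still go against the base tables.
CREATE VIEW dbo.table1_combined
AS
SELECT * FROM dbo.table1_from_SiteA
UNION ALL
SELECT * FROM dbo.table1_from_SiteB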
Hope that helps
--Mahesh
[ This posting is provided "as is" with no warranties and confers no
rights. ]
"Marco Broenink" <MarcoBroenink@.discussions.microsoft.com> wrote in message
news:BADB7E44-D9C9-4333-B69E-9F5B33314BEF@.microsoft.com...
> thanks again for the response.
> In my topology, I have different publishers of the same table. These
> publishers use different filters. A subscriber can be subscribed to
different
> publishers.
> For example:
> Site A publishes table1
> Site B publishes table1
> Site C is subscribed to Site A table1. This subscribtion is filtered with
a
> dynamic filter F1.
> Site C is also subscribed to Site B table1. This subscribtion is filtered
> with another dynamic filter F2.
> The different dynamic filters make sure that the subscription to Site A do
> not overlap the subscription to Site B.
> Thus: The table1 of C contains a subset of table1 of A and a subset of
> table1 of B.
> So: table1 of C contains two types of data:
> - data that meets filtercondition F1 and doesnot meet filtercondition F2.
> - data that meets filtercondition F2 and doesnot meet filtercondition F1.
> So the problem is: The subscriber will contain data that doesnot meet one
of
> the filterconditions. When replicating to Site A (filter F1), data of
filter
> F2 is deleted. When replicating to Site B (filter F2), data of filter F1
is
> deleted.
> So in this scenario, I think it is not possible to make only 'good'
inserts
> because it violates always one of the two filtersconditions.
> I hope you know a solution. Or am I trying to do something impossible?
> greetings, Marco Broenink
>
> "Mahesh [MSFT]" wrote:
|||thanks for the help!
Marco
"Mahesh [MSFT]" wrote:

> Hi Marco,
> Please correct me if I understood wrong:
> So what you are saying is SiteA and SiteB are publishing the same tables,
> but are not replicating to each other. Is that right?
> But in turn are replicating that table to SiteC.
> This is not supported.
> In the first place, When SiteC subscribed to SiteA, it gets the table from
> SiteA. Now when you configure SiteC to subscribe from SiteB, how did you
> configure? Did you configure a no-sync subscription? If not, and you used
> all the default settings then actually you will not even be able to complete
> the subscription because the table at SiteC (got from SiteA) will be
> attempted to drop and recreate with the scripts from SiteB which will fail.
> If you want to do what you are trying to do, one solution is to have table1
> at SiteA and replicate it to SiteC with the proper filter. Have table2 at
> SiteB and replicate that to SiteC with the proper filter.
> On the subscriber you can have a view on those two tables that will give you
> a combined view for the results. But you may not be able to make DMLs on the
> view directly. You will still need to do the DMLs on the actual tables.
> Hope that helps
> --Mahesh
> [ This posting is provided "as is" with no warranties and confers no
> rights. ]
> "Marco Broenink" <MarcoBroenink@.discussions.microsoft.com> wrote in message
> news:BADB7E44-D9C9-4333-B69E-9F5B33314BEF@.microsoft.com...

Merge replication between two instances.

Does anybody know if it is possible to set up merge replication between two
instances of SQL Server on the same computer?
yes, I do it all the time.
"Jose M. Marn" <jm_marintl@.eresmas.com> wrote in message
news:eWIOshhYEHA.2816@.TK2MSFTNGP11.phx.gbl...
> Does anybody know if it is possible to make a merge replication between
two
> instances of SQL Server in the same computer?
>

Merge Replication Between two Existing Databases?

Hi gang,

We have a couple of databases on separate servers that have exactly the same schema, and we would like to set up merge replication between them.

Is that possible? The few times we've experimented on test databases, the subscriber database has been trashed and rebuilt using data from the publisher, rather than preserving data from both databases.

I'm really new to replication in general, so any kind of advice would be helpful.

Cheers,
Matt

Yes it's possible, see topic in Books Online "Initializing a Merge Subscription Without a Snapshot".
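Roughly, the T-SQL equivalent is to add the subscription with a sync type of 'none' so the existing schema and data are left in place; a minimal sketch (publication, server and database names are placeholders):

-- Run at the publisher: push merge subscription that assumes the
-- subscriber already has the schema and data (no snapshot is applied).
EXEC sp_addmergesubscription
    @publication       = N'MyMergePub',
    @subscriber        = N'SUBSCRIBERSERVER',
    @subscriber_db     = N'MyDb',
    @subscription_type = N'push',
    @sync_type         = N'none'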

|||Thanks Greg. I guess sometimes the help really is helpful!

Merge Replication between SQL Mobile and SQL 2005 without IIS

Hi,
I am trying to do Merge Replication between SQL 2005 and
SQL Mobile Edition. I did set up Publication and
Distribution successfully using SQL Server Management
Studio, but when I tried to create a subscription from the SQL
Mobile database, it asked for Web Server Authentication,
where I needed to put the URL of the SQL Server Mobile Edition
virtual directory.
Is it possible to subscribe from SQL Mobile Edition with
going through Web Server Authentication?
Niben
I mean: is it possible to subscribe from SQL Mobile Edition without
going through Web Server Authentication?
"Niben" wrote:

> Hi,
> I am trying to do Merge Replication between SQL 2005 and
> SQL Mobile Edition. I did set up Publication and
> Distribution successfully using SQL Server Management
> Studio but when I tried to create a subscription from SQL
> Mobile Database then it asks for Web Server Authentication
> where I needed to put URL for SQL Server Mobile Edition
> Virtual Directory..
> Is it possbile to subscribe from SQL Mobile Edition with
> going thru Web Server Authentication?
> Niben

Merge Replication between sites - Keeping hot backup servers ready

Dear all
I have set a merge replication between servers say SERVER A (Publisher) and
SERVER B (Subscriber to A)
The agent runs every 2 minutes.
I have set all identity columns with increment value by 2. SERVER A has all
odd identity values and SERVER B has even.
I have 2 more servers (STAND BY) say SERVER C and SERVER D.
SERVER C is located near to SERVER A (Same office - Office A) and SERVER D
is near SERVER B (Same Office - Office B).
I want to keep SERVER C as a backup replica of SERVER A (which is the
publisher to SERVER B): if SERVER A goes down, users will connect to SERVER C
with minimal downtime for the users in Office A, and when SERVER A comes back
up it will sync with SERVER C, which in turn will sync with SERVER B. So
SERVER A can also be a publisher to SERVER C.
How do I configure SERVER D, which will be the replica of SERVER B? If SERVER
B goes down, users will connect to it with minimal downtime for the users in
Office B, and when SERVER B comes back up it will sync with SERVER D and in
turn with SERVER A.
Will SERVER D be a subscriber or a publisher to SERVER B (which is a
subscriber to SERVER A)?
1- Do I need to configure SERVER B as both Publisher and Subscriber?
2- Do I need to configure SERVER D as Publisher and SERVER B as subscriber?
In this case, will it still replicate between SERVER A and SERVER B? Do I
need to configure SERVER A's subscribers to sync with multiple publishers?
Your inputs are highly appreciated.
Hope I conveyed my need clearly.
Regards
Javed Iqbal
Javed,
you could set ServerB up as a republisher - subscriber to A and publisher to
D. However I think an easier solution would be to have B, C and D as
subscribers to A if your network speeds can cope with it. In this case, if B
goes down, users use D which is regularly synchronized with A. When it comes
back on line, A and B are synchronized.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Thanks Paul,
Initially I thought about the same, keeping A as the publisher and the rest as
subscribers. But the connectivity between SERVER A and SERVER B (2 Mbps) may
not be enough. There will be a lot of other network traffic between the
offices, which can hamper the replication process.
Thank you once again for your prompt help.
Regards
Javed Iqbal
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:O1$4JwMyFHA.2848@.TK2MSFTNGP15.phx.gbl...
> Javed,
> you could set ServerB up as a republisher - subscriber to A and publisher
to
> D. However I think an easier solution would be to have B, C and D as
> subscribers to A if your network speeds can cope with it. In this case, if
B
> goes down, users use D which is regularly synchronized with A. When it
comes
> back on line, A and B are synchronized.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
|||Using merge replication for disaster recovery is a bad choice for the
following reasons
1) you add a rowguid column to each table you are replicating
2) increased latency of transactions occurring on the publisher/subscriber
due to the triggers
3) long sync times
4) no automatic recovery
5) somewhat limited abilities to modify schema
6) possibility of data loss
For automatic failover you need to use clustering. Otherwise a better
solution is probably log shipping.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Javed" <javediqbal98@.hotmail.com> wrote in message
news:uRHPgnMyFHA.788@.tk2msftngp13.phx.gbl...
> Dear all
> I have set a merge replication between servers say SERVER A (Publisher)
and
> SERVER B (Subscriber to A)
> The agent runs every 2 minutes.
> I have set all identity columns with increment value by 2. SERVER A has
all
> odd identity values and SERVER B has even.
> I have 2 more servers (STAND BY) say SERVER C and SERVER D.
> SERVER C is located near to SERVER A (Same office - Office A) and SERVER D
> is near SERVER B (Same Office - Office B).
> I want to keep SERVER C as backup replica (in case SERVER A goes down
users
> will connect to it and there will be minimum downtime for the users in
> Office A and when SERVER A comes up it will synch with SERVER C and in
turn
> it will synch with SERVER B) of SERVER A (which is publisher to SERVER B)
so
> this can be publisher to SERVER C.
> How do I configure SERVER D which will be replica (in case SERVER B goes
> down users will connect to it and there will be minimum downtime for the
> users in Office B and when SERVER B comes up it will synch with SERVER D
and
> in turn it will synch with SERVER A) of SERVER B.
> Will it be subscriber or publisher to SERVER B (Which is subscriber to
> SERVER A).
> 1- Do I need to configure SERVER B as Publisher and Subscriber?
> 2- Do I need to configure SERVER D as Publisher and SERVER B as
subscriber.
> In this case will it replicate between SERVER A and SERVER B. Do I need to
> configure SERVER A's subscribers can synch with multiple PUBLISHER?
> Your inputs are highly appreciated.
> Hope I conveyed my need clearly.
> Regards
> Javed Iqbal
>

Merge replication between same schema databases but different data

Hi all.

I'm trying to set up merge replication between two servers that have databases with the same schema. The two databases have the majority of their data in common, plus data that was inserted at a later time independently on the two servers. (The two servers were connected in a merge replication scheme that failed at some point; replication was paused, but users continued to insert data independently at the two servers.)

I need to get them up and running.

I cleaned up replication at both servers and recreated the publication at the publisher/distributor, and all is fine.

When I create a push subscription to the subscriber I get the error "Invalid column name 'rowguidcol'".

So far I have managed to get merge replication running on two identical databases (schema and data).

Just some thoughts: after some reading I found that it might be related to identities and identity ranges, or to indexes. I set the identity seed and increment at 2, 2 at the publisher and at 1, 2 at the subscriber (on the same tables at publisher and subscriber). Is that OK? Is that the way to do it?

Digging a bit more with SQL Profiler, I can trace the error to the point where sp_MSaddmergetriggers executes.

Thanks a lot for any help.

Version.

Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.0 (Build 2195: Service Pack 4)

Hello,

I suspect there is an identity range management issue in your app. If you have just the publisher and one subscriber, you could partition the identity range as

Publisher from 1 - 1000 and Subscriber 1001 - 2000. Both step = 1.

So when you create the merge article with sp_addmergearticle, you can specify

@auto_identity_range = 'true', @pub_identity_range = 1000, @identity_range = 1000, @threshold = 85.

For more details, please refer to http://msdn2.microsoft.com/en-us/library/aa237098(SQL.80).aspx.
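For context, a fuller call might look like this sketch (publication, article and table names are placeholders; the range values are the ones suggested above):

-- Add a merge article with automatic identity range management.
EXEC sp_addmergearticle
    @publication         = N'MyMergePub',
    @article             = N'T1',
    @source_owner        = N'dbo',
    @source_object       = N'T1',
    @auto_identity_range = 'true',
    @pub_identity_range  = 1000,
    @identity_range      = 1000,
    @threshold           = 85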

Thanks.

|||

Thanks for the advise L Zhou

Using SQL Profiler, I can trace the error to the point where sp_MSaddmergetriggers executes.

|||

Hello George,

Please try this: run the following DDL on the subscriber database.

ALTER TABLE [table_need_to_be_merged] ADD rowguid uniqueidentifier ROWGUIDCOL NOT NULL CONSTRAINT DF_table_rowguid DEFAULT NEWID()

Drop the subscription and re-create it with the "NOSYNC" option (choose "No, the Subscriber already has the schema and data" on the "Initialize Subscription" page of the Push Subscription Wizard).

The subscription is created and the error message "Invalid column name 'rowguidcol'" should disappear.

If this method is still not working for you, I may need to look at your table schema.

Thanks.

This posting is provided AS IS with no warranties, and confers no rights.

|||

Thanks for the answer Zhou

I did as you said and now I do not get the rowguidcol error, but now the conflict tables are not created

Any suggestions?

Thanks

|||

How about dropping the publication and subscription by executing sp_replicationdboption, then re-creating the merge publication and subscription, still using the "NoSync" option? Please make sure that the rowguid column is not dropped.
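A rough sketch of that clean-up step, following the suggestion above (database name is a placeholder):

-- Disable merge publishing on the database, then re-create the
-- publication and the NoSync subscription afterwards.
EXEC sp_replicationdboption
    @dbname  = N'MyDb',
    @optname = N'merge publish',
    @value   = N'false'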

Thanks.

This posting is provided AS IS with no warranties, and confers no rights.

Merge Replication between MSDE2000 and SQL 2005

Hi, I want to ask about SQL Merge replication between 40 MSDE2000 as
Publishers and SQL 2005 as Subscriber.
I received an error message while configuring the replication saying "For
merge publications, the version of the Subscriber must not exceed the version
of the Publisher".
Is there a way to do the configuration without upgrading the MSDE 2000
instances to SQL Server 2005 Express or downgrading from SQL 2005 to SQL 2000?
Thanks in advance,
Rodney El Hajj
Systems Engineer
Lebanese Canadian Bank
Using your current hardware, you could possibly change the topology to have
the SQL 2005 machine as the publisher and the MSDE boxes as subscribers.
Sometimes, when the data is partitioned, there is little distinction between
publisher and subscriber in merge replication.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
|||Thanks for your quick answer, but I need one-way replication, from remote sites
to Head Office. At the end of the day, I need to merge data from all sites into
one database.
We used to have a SQL Server 2000 at the Head Office and MSDE 2000 at all
sites, and the merge replication was working without problems. Then we moved to
new hardware and installed SQL 2005 instead of 2000; since then we have been
facing problems with the replication configuration.
BR,
"Paul Ibison" wrote:

> Using your current hardware, you could possibly change the topology to have
> the SQL 2005 machine as the publisher and the MSDE boxes as subscribers.
> Sometimes, when the data is partitioned, there is little distinction between
> publisher and subscriber in merge replication.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
>
>
|||One-way replication presumably means that you want the data to flow in one
direction but this can be achieved from subscriber to publisher as well as
the other way around. The central publisher can
be a central repository of the data, as unintuitive as this sounds.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
|||Hi Again,
I tried to do the replication from subscriber to publisher (The Publisher is
centralized) but didn't succeed.
Can you provide me with a step-by-step merge replication procedure? Maybe
I'm missing some settings.
Thanks in advance.
"Paul Ibison" wrote:

> One-way replication presumably means that you want the data to flow in one
> direction but this can be achieved from subscriber to publisher as well as
> the other way around. The central publisher can
> be a central repository of the data, as unintuitive as this sounds
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
>
>
>

Merge Replication between MSDE doesn't work!

Hello!

I have a problem with creating merge replication between two instances of MSDE 2000. The Microsoft article http://support.microsoft.com/kb/324992/ describes that this is possible.

I am creating a merge publication at MSDE 2000, with the Distributor and Publisher configured on the same machine, with the options "Allow Pull Subscriptions" and "Allow Anonymous Subscriptions". There is no problem with creating the publication; even snapshot generation finishes successfully. I also created a login with SQL Server Authentication in the "System Administrators" role at the publisher, in order to connect to it from the Subscriber.

The problem occurs when creating an anonymous pull subscription to this publication at another instance of MSDE. During the initial synchronization an error occurs:

The process could not connect to Distributor '<publisher_server_name>'. Login failed for user '<login_created_at_publisher>'. Reason: Not associated with a trusted SQL Server connection. The step failed.

However, I am not using Windows Authentication at all, so the Subscriber should not need to connect to the Distributor using a trusted SQL Server connection.

What is the problem and how can I work around it?

Note: the same works correctly if MS SQL Server is used as the Publisher instead of MSDE.

Please, help!

I can provide publication and subscription creation scripts, if required.

Guys, never mind: I found the reason!

The problem was probably an old version of MSDE; installing MSDE with SP4 solved it!

However, there are still some problems connecting to MSDE from a remote workstation via named pipes, but I think I can research that independently.

Thanks for consideration!

Merge Replication Architecture Planning Guidance

Hi,
My organization is considering use of merge replication (SQL Server 2K with MSDE clients) to implement a classic data-centric "occasionally connected" application pattern. The target user base extends to 4,500 users distributed across time zones and languages. Depending on their location, users will connect over telecom links varying from 28.8 Kbps to broadband.
Our preference is to retain a single, centralized SQL Server cluster as a publisher and to deploy multiple remote distributor nodes to optimize performance. However, I've been unable to find a comprehensive capacity planning guide that can help us determine a) whether our proposed architecture is viable and b) how and where we should deploy distributor nodes to optimize use of available processing capacity and bandwidth.
If anyone is aware of such a guide, or has first-hand operational experience of a deployment commensurate with that described above then I'd love to hear about it.
Cheers,
Lee.
Lee,
this article details scaling up to 2000 subscribers. I don't know of any
other articles that suit your needs, although Hilary once mentioned a
presentation at a SQLPASS or TechNet conference that sounds as though it'd be
useful to you. He posted the name of the presenter but unfortunately I don't
recall it. No doubt he'll post a reply here.
Rgds,
Paul Ibison, SQL Server MVP, WWW.Replicationanswers.Com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||I don't recall such a presentation. However the presentations done by Bren
Newman, Philip Vaughn, Matt Hollingsworth, and Kevin Collins (for SQL CE
replication) are excellent.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:%23DSrOufuEHA.1292@.TK2MSFTNGP10.phx.gbl...
> Lee,
> this article details scaling upto 2000 subscribers. I don't know of any
> other articles that suit your needs, although Hilary once mentioned a
> presentation at SQLPASS or TECHNET conference that sounds as though it'd
be
> useful to you. He posted the name of the presenter but unfortunately
Idon't
> recall it. No doubt he'll post a reply here.
> Rgds,
> Paul Ibison, SQL Server MVP, WWW.Replicationanswers.Com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
|||Yes this is the conference proceedings I was thinking of
but apologies as I was thinking the presentations were
about MSDE rather than CE.
Rgds,
Paul Ibison, SQL Server MVP, WWW.Replicationanswers.Com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Actually, I believe I have heard of an installation with 4500 subscribers
where they used republishing; I think it's Barnes and Noble or Ticketmaster,
and IIRC it was with SQL 7.
I urge you to contact PSS for more information on this, and how to deploy
such a topology.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:016301c4ba68$ebf53fa0$a401280a@.phx.gbl...
> Yes this is the conference proceedings I was thinking of
> but apologies as I was thinking the presentations were
> about MSDE rather than CE.
> Rgds,
> Paul Ibison, SQL Server MVP, WWW.Replicationanswers.Com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
|||Thanks folks,
I do foresee that we'll be working with PSS (and most likely with MS
Consulting Svs also). This posting was intended to help me start to
understand whether we'd be pushing the envelope into areas in which others
have feared to tread (or have been badly burnt). Hopefully MS will be able
to help us understand how B&N and/or Ticketmaster had planned their
topologies etc.
Many thanks for your help - and looking forward to your Merge Replication
book becoming available.
Lee.
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:%23%23AhvfouEHA.3972@.TK2MSFTNGP15.phx.gbl...
> Actually, I believe I have heard of an installation with 4500 subscribers
> where they used republishing; I think its Barnes and Noble or
> Ticketmaster,
> and IIRC it was with SQL 7.
> I urge you to contact PSS for more information on this, and how to deploy
> such a topology.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
>
> "Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
> news:016301c4ba68$ebf53fa0$a401280a@.phx.gbl...
>
|||Lee,
once you've got it all set up, if you can find the time
please post up a bit of general info on the topology
choices etc.
Rgds,
Paul Ibison, SQL Server MVP, WWW.Replicationanswers.Com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)

Merge Replication and Unique Constraints

Hi,

I have a slight problem which I'm sure must be a common happening. Here's my problem.

I'm using merge replication and I have a table that has a unique constraint on a non-primary-key column (the column is called [name]). The thing that goes wrong (for me) is this: when a new record is added at a subscriber and a new record is added at the publisher before a synchronization, and both records have the same [name] value, then when the merge agent runs I get an unresolved conflict because of a unique index violation.

I've read BOL and I'm left thinking that in order to solve this problem I must use a custom resolver. Is this the best way of handling such a conflict? Actually, if it is, I'm still a little stuck, as I'm not sure what I could do to help the situation inside the custom resolver anyway!

Any help would be much appreciated.

Thanks

Graham

For handling this issue, the easy way is to also replicate that unique constraint. If both the publisher and the subscriber side have the constraint, a non-unique update won't happen before the merge sync.

To do so, the published article's schema_option has to include "Copy unique key constraints"; you can implement it either through the UI or via the stored procedures.
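A rough sketch of the stored procedure route (publication, article and table names are placeholders; 0x4000 is the "replicate UNIQUE constraints" bit per the schema_option table in BOL, and the other bits shown are just an example mix to adjust for your setup):

-- Include the unique-constraint bit in schema_option when adding the article:
-- 0x01 creation script, 0x10 clustered index, 0x40 nonclustered indexes,
-- 0x4000 unique constraints.
EXEC sp_addmergearticle
    @publication   = N'MyMergePub',
    @article       = N'MyTable',
    @source_object = N'MyTable',
    @schema_option = 0x0000000000004051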

Thanks

Yunjing

Merge replication and triggers?

Hello,
I hope someone can help me answer some questions about replication, to solve
my problems in a project. I'm involved in a web project which uses a
replicated SQL Server 2000 database. The source database (which is created
by a 3rd party, so maintenance is not in our hands) contains triggers to
perform inserts into
child tables when something is inserted into a main table. Now we used merge
replication to get a copy on the server and keep its data in sync with our
local source db.
The problem is the following:
The installation of the Merge replication is done by another company and
they want to remove the triggers from the web database. They claim that
triggers are a problem for the replication. That creates a problem for our
webapplication since it's based on the logic of the triggers. So if they are
removed, the application
will no longer work properly.
I would like to know if someone has some experience doing something like we
do and if the triggers are a problem in the replication.
I see two solutions:
1. We make the triggers work with replication: problem solved
2. We remove the triggers on the web database and implement their logic in
code: this involves a lot of additional work for us and creates a
maintenance problem when the trigger logic changes. Since that might happen
without notice, the website might stop working at a new release and data
might become inconsistent. Not a pleasant thought.
I would like to hear how others dealt with this kind of problems
TIA,
Jeroen
In my experience, having triggers should not in itself stop replication. The
only proviso is that on occasion the SQL Server parser can get a bit odd when
creating triggers through the replication process, which causes an error, but
one that can be rectified by hand.
As long as the triggers are set to 'Not For Replication' they work correctly.
Can I suggest that you ask for more details on how the triggers are causing
a problem and under what circumstances, etc.; perhaps they know something about
the database that you do not.
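For illustration, such a trigger might look like this sketch (table and column names are made up); the NOT FOR REPLICATION clause stops it firing for rows applied by the merge agent while leaving it active for ordinary user inserts:

CREATE TRIGGER trg_MainTable_Insert
ON dbo.MainTable
FOR INSERT
NOT FOR REPLICATION
AS
    -- Insert a child row for every row inserted by a user.
    INSERT INTO dbo.ChildTable (MainId, CreatedOn)
    SELECT i.MainId, GETDATE()
    FROM inserted AS i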
Peter
"Do not awake the sleeping dragon for you are crunchy and taste good with
ketchup".
Peter The Spate
"Jeroen" wrote:

> Hello,
> I hope someone can help me answer some questions about replication, to solve
> my problems in a project. I'm involved in a webproject which uses a
> replicated SQL Server 2000 database. The source database (which is created
> by a 3rd party, so maintanance is not in our hands) contains triggers to
> perform inserts into
> child tables when something is inserted into a main table. Now we used merge
> replication to get a copy on the server and keep its data in sync with our
> local source db.
> The problem is the following:
> The installation of the Merge replication is done by another company and
> they want to remove the triggers from the web database. They claim that
> triggers are a problem for the replication. That creates a problem for our
> webapplication since it's based on the logic of the triggers. So if they are
> removed, the application
> will no longer work properly.
> I would like to know if someone has some experience doing something like we
> do and if the triggers are a problem in the replication.
> I see two solutions:
> 1. We make the triggers work with replication: problem solved
> 2. We remove the triggers on the webdatabase and implement their logic in
> code: This involves a lot of additional work for us and creates a
> maintanance problem when the trigger logic changes. Since that might happen
> without notice, the website might stop working at a new release and data
> might
> become inconsisten. Not a pleasant thought.
> I would like to hear how others dealt with this kind of problems
> TIA,
> Jeroen
>
>
|||Hi Peter,
Thanks for answering. Good to hear that triggers should not be a problem
in themselves. I will have to check that the triggers have the 'Not For
Replication' option set. At this moment I do not have more details about the
problems that occurred while setting up the replication, since another
company did that. I will contact them to see exactly what problems they
encountered while installing it.
But, as I read in your answer, my first solution should be possible if the
triggers have the correct options set, am I correct?
Thanks for your help,
Jeroen
"Peter 'Not Peter The Spate' Nolan"
<PeterNotPeterTheSpateNolan@.discussions.microsoft. com> wrote in message
news:12FF5C8D-99BC-4929-B791-0EDE4E7A8604@.microsoft.com...
> In my experience having triggers should not in itself stop replication.
> The
> only proviso to this is occasion the SQL Server parser can get a bit odd
> when
> creating triggers using the replication process, which causes an error,
> but
> one that can be rectified by hand.
> As long as the triggers are set to 'Not For Replication' they work
> correctly.
> Can I sugest that you ask for more details and how the triggers are
> causing
> a problem, under what circumstances ect, perhaps they know something about
> the database that you do not.
> Peter
> "Do not awake the sleeping dragon for you are crunchy and taste good with
> ketchup".
> Peter The Spate
>
> "Jeroen" wrote:
|||I would ask the other company for more details on what the problem with
replication and triggers is.
Merge replication has problems with triggers only when you update text and
image data type columns (refer to the section on Planning for Merge
Replication in BOL). As Peter mentioned, you might want to use the Not For
Replication clause on user triggers depending on what these triggers do.
For instance, for audit triggers you might want to have them fire if you
want to track changes made by the replication process.
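Conversely, an audit trigger that should also capture changes applied by the merge agent would simply omit the clause; a small sketch with made-up names:

CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
FOR UPDATE
AS
    -- No NOT FOR REPLICATION clause, so this also fires for
    -- updates applied by the merge agent.
    INSERT INTO dbo.OrdersAudit (OrderId, AuditedOn)
    SELECT i.OrderId, GETDATE()
    FROM inserted AS i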
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Jeroen" <Noname@.noname.com> wrote in message
news:OcdnLxHJFHA.2576@.TK2MSFTNGP15.phx.gbl...
> Hello,
> I hope someone can help me answer some questions about replication, to
solve
> my problems in a project. I'm involved in a webproject which uses a
> replicated SQL Server 2000 database. The source database (which is created
> by a 3rd party, so maintanance is not in our hands) contains triggers to
> perform inserts into
> child tables when something is inserted into a main table. Now we used
merge
> replication to get a copy on the server and keep its data in sync with our
> local source db.
> The problem is the following:
> The installation of the Merge replication is done by another company and
> they want to remove the triggers from the web database. They claim that
> triggers are a problem for the replication. That creates a problem for our
> webapplication since it's based on the logic of the triggers. So if they
are
> removed, the application
> will no longer work properly.
> I would like to know if someone has some experience doing something like
we
> do and if the triggers are a problem in the replication.
> I see two solutions:
> 1. We make the triggers work with replication: problem solved
> 2. We remove the triggers on the webdatabase and implement their logic in
> code: This involves a lot of additional work for us and creates a
> maintanance problem when the trigger logic changes. Since that might
happen
> without notice, the website might stop working at a new release and data
> might
> become inconsisten. Not a pleasant thought.
> I would like to hear how others dealt with this kind of problems
> TIA,
> Jeroen
>