Friday, March 30, 2012

Merge Star Schema

Hi all,
We have two star schemas built into our warehouse: one called
vehicle data and one called parts data. For some reason the warehouse
was built with these as quite separate 'data marts' (not sure if that's
the right term), even though the parts will always relate to the
vehicle.
How should these be merged? Do we connect via the dimension
tables? Or do we need some other new type of table to carry the
keys?
Would another strategy be to place all the parts facts
into the vehicle fact table and then merge the dimension tables also?
Many thanks in advance for any pointers.
Sam
Hello SGPG,
I hope you haven't given up on this thread. It's hard to get an
idea of your problem without a bit more information. If you could
post a brief description of each fact and dimension table, that would be
a great help.
Here are a couple of points to consider in the meantime.
Star schemas can be very hard to consolidate. You see, it's all a
question of grain...
Grain is the lowest unique level of the fact table. In most star
schemas the grain defines the grouping of the metrics. Grain is defined
by the dimensions and the primary key of the fact table.
You can consolidate fact tables if the metrics make sense and the fact
table grains are the same.
For example:
You have two star schemas, each maintaining a different set of metrics.
Below are Fact Invoices (what has been sold to the customer) and Fact
Orders (what the customer has ordered).
Fact Invoices
InvoiceDate
CustomerNumber
ProductCode
InvoiceNumber
InvoiceLineNumber
InvoiceUnitPrice
InvoiceQty
InvoiceLineTotal
Fact Orders
OrderDate
DueDate
CustomerNumber
ProductCode
InvoiceNumber
InvoiceLineNumber
OrderUnitPrice
OrderQty
OrderLineTotal
Both fact tables have a common set of dimensions and grain. We know
that InvoiceNumber and InvoiceLineNumber are the same. Also, we have
the common dimensions of Customer and Product. So the lowest level at
which the metrics make sense is (CustomerNumber, ProductCode,
InvoiceNumber, InvoiceLineNumber).
In this example the order dates and invoice dates do not affect the
grain of the common fact table. So the consolidated fact table would
look something like this.
Fact Sales
InvoiceDate
OrderDate
DueDate
CustomerNumber
ProductCode
InvoiceNumber
InvoiceLineNumber
OrderUnitPrice
OrderQty
OrderLineTotal
InvoiceUnitPrice
InvoiceQty
InvoiceLineTotal
This example assumes that every order has an invoice, but it is
possible to have orders that do not have invoices in this star schema:
the invoice metrics would be represented as 0, with no invoice date.
Hope this Helps
Myles Matheson
Data Warehouse Architect
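For illustration, here is a minimal DDL sketch of the consolidated fact table described above. The column types and dimension-key comments are editorial assumptions, not taken from the thread:

CREATE TABLE FactSales (
    InvoiceNumber     int      NOT NULL,
    InvoiceLineNumber int      NOT NULL,
    CustomerNumber    int      NOT NULL,  -- key to the Customer dimension
    ProductCode       int      NOT NULL,  -- key to the Product dimension
    OrderDate         datetime NOT NULL,
    DueDate           datetime NOT NULL,
    InvoiceDate       datetime NULL,      -- NULL for orders not yet invoiced
    OrderUnitPrice    money    NOT NULL,
    OrderQty          int      NOT NULL,
    OrderLineTotal    money    NOT NULL,
    InvoiceUnitPrice  money    NOT NULL DEFAULT 0,  -- invoice metrics default to 0
    InvoiceQty        int      NOT NULL DEFAULT 0,
    InvoiceLineTotal  money    NOT NULL DEFAULT 0,
    CONSTRAINT PK_FactSales
        PRIMARY KEY (CustomerNumber, ProductCode, InvoiceNumber, InvoiceLineNumber)
)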
|||Thanks Myles. I do not have the details of the example I posed, as it's
a while since I saw it. My question stemmed from what I saw as
something of a design flaw: a vehicle fact table was built with
dimension tables around it, and later a parts fact was built, which seemed
to be the beginning of a data mart approach to data warehousing. Your
solution might not have worked in this case, as parts were usually
aggregated up (i.e. a vehicle might have 10 or more parts installed).
I wondered how this should have been designed to incorporate both parts
and vehicles.

Merge SQL 7 to 2000 problem

I am having a problem running a merge replication between
a SQL 7 publisher/distributor and a SQL 2000 subscriber
(push). After the subscription is initialized, the agent
fails with the error "The process could not query row
metadata at the Subscriber." The error details
say "Could not find stored procedure ''." A clip from
the log is below. This same merge replication works fine
from SQL 7 to SQL 7, but fails to SQL 2000. I have tried
2 different SQL 2k boxes with the same error.
The replication job logs in using SQL Server
authentication. Account is system administrator and dbo
on subscriber. Distributor runs under sa. Both Servers
are at latest SP levels. KB search has turned up no help.
Thanks for any ideas!
~~~~~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~~~~~~
Percent Complete: 55
Processing article 'RequestStatusHistory'
Repl Agent Status: 3
chrs4.ITWorkRequest: {call sp_MSenumcolumns (?,?)}
chrs4.ITWorkRequest: {call sp_MSenumchanges(?,?,?,?,?)}
CHHIST.ITWorkRequest: {call sp_MSgetrowmetadata
(?,?,?,?,?,?,?)}{call sp_MSgetrowmetadata(?,?,?,?,?,?,?)}
{call sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}
Percent Complete: 0
The process could not query row metadata at the
Subscriber.
Repl Agent Status: 6
Percent Complete: 0
Category:COMMAND
Source: Failed Command
Number:
Message: {call sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call
sp_MSgetrowmetadata(?,?,?,?,?,?,?)}{call sp_M
Repl Agent Status: 3
Percent Complete: 0
Category:SQLSERVER
Source: CHHIST
Number: 2812
Message: Could not find stored procedure ''.
Repl Agent Status: 3
Could not find stored procedure ''.
Disconnecting from Publisher 'chrs4'
Disconnecting from Subscriber 'CHHIST'
Disconnecting from Publisher 'chrs4'
Disconnecting from Distributor 'chrs4'
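Editorial aside: the empty procedure name in the error suggests a replication system procedure is missing or unreachable on the subscriber. A quick, hedged way to check in the subscription database on CHHIST:

SELECT name FROM sysobjects
WHERE type = 'P' AND name LIKE 'sp_MSgetrowmetadata%'

If nothing comes back, the merge system procedures were never created there - which, as the reply below explains, is expected for this unsupported version combination.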
George,
this is not a supported configuration. For merge replication, a SQL 7.0
publisher can only publish to a SQL 7.0 Subscriber.
Rgds,
Paul Ibison
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Paul,
Thanks for the reply. I read the "...Different Versions"
article in books online, but obviously scanned too
quickly. I see the limitation now for merge, but not for
snap and trans. You might help out "Nick Horrocks" with
an answer to his thread "Unable to create Merge
subscription".
George

>--Original Message--
>George,
>this is not a supported configuration. For merge
replication, a SQL 7.0
>publisher can only publish to a SQL 7.0 Subscriber.
>Rgds,
>Paul Ibison
>(recommended sql server 2000 replication book:
>http://www.nwsu.com/0974973602p.html)
>
>.
>
|||Thanks for the prompt - have posted to Nick as well.
Rgds,
Paul Ibison
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)

>--Original Message--
>Paul,
>Thanks for the reply. I read the "...Different Versions"
>article in books online, but obviously scanned too
>quickly. I see the limitation now for merge, but not for
>snap and trans. You might help out "Nick Horrocks" with
>an answer to his thread "Unable to create Merge
>subscription".
>George
>
>replication, a SQL 7.0
>.
>

Merge snapshot problem with views and ArticleType property with SQLDMO

I am trying to create a merge publication with a snapshot that includes all
tables, views, procs, and UDFs using SQLDMO. I have no problems creating the
publication with tables only. When I try to add procs using the following
code I get the error
The @.schema_option parameter for a procedure or function schema article can
include only the options 0x0000000000000001 or 0x0000000000002000.
I assume I am not setting up the objMrgArt object properly. Any help is
appreciated.
For i = 1 To objRepSps.Count
Set objRepSp = objRepSps(i)
Set objMrgArt = New SQLDMO.MergeArticle2
objMrgArt.Name = objRepSp.Name
objMrgArt.SourceObjectName = objRepSp.Name
objMrgArt.SourceObjectOwner = objRepSp.Owner
objMrgArt.ArticleType = SQLDMORep_ProcSchemaOnly
objMrgPub.MergeArticles.Add objMrgArt
Next i
Since different article types accept different sets of schema options (or in
SQLDMO terminology, MergeArticle.CreationScriptOptions), and given the (arguably
semi-buggy) SQLDMO behavior of having a set of default CreationScriptOptions
that is not compatible with all article types, you need to explicitly
specify a compatible set of CreationScriptOptions when creating a non-table
article. Extending the example that you have given below, you can write:
For i = 1 To objRepSps.Count
    Set objRepSp = objRepSps(i)
    Set objMrgArt = New SQLDMO.MergeArticle2
    objMrgArt.Name = objRepSp.Name
    objMrgArt.SourceObjectName = objRepSp.Name
    objMrgArt.SourceObjectOwner = objRepSp.Owner
    objMrgArt.ArticleType = SQLDMORep_ProcSchemaOnly
    objMrgArt.CreationScriptOptions = _
        SQLDMO_CREATIONSCRIPT_TYPE.PrimaryObject Or _
        SQLDMO_CREATIONSCRIPT_TYPE.ExtendedProperties
    objMrgPub.MergeArticles.Add objMrgArt
Next i
This ensures that the base stored procedure and the associated extended
properties are replicated with the initial snapshot.
HTH.
-Raymond
This posting is provided "as is" with no warranties and confers no rights.
"mspradley" <mspradley@.discussions.microsoft.com> wrote in message
news:57624519-68F3-4073-BF90-D2DD80EBCEF3@.microsoft.com...
> I am trying to create a merge publication with a snapshot that includes
all
> tables, views, procs, and UDFs using SQLDMO. I have no problems creating
the
> publication with tables only. When I try to add procs using the following
> code I get the error
> The @.schema_option parameter for a procedure or function schema article
can
> include only the options 0x0000000000000001 or 0x0000000000002000.
> I assume I am not setting up the objMrgArt object properly. Any help is
> appreciated.
>
> For i = 1 To objRepSps.Count
> Set objRepSp = objRepSps(i)
> Set objMrgArt = New SQLDMO.MergeArticle2
> objMrgArt.Name = objRepSp.Name
> objMrgArt.SourceObjectName = objRepSp.Name
> objMrgArt.SourceObjectOwner = objRepSp.Owner
> objMrgArt.ArticleType = SQLDMORep_ProcSchemaOnly
> objMrgPub.MergeArticles.Add objMrgArt
> Next i
>

Merge snapshot frequency

I have the following scenario,
1.merge replication with 3000 subscribers
2.I have dynamic publication (NOT dynamic snapshot)
3.we observed that synchronization is slow whenever we do a data import at
the publisher
4.because of this we ran the snapshot agent immediately after the data import
5.after the snapshot run, we find the subscriber syncs faster
I would like to know what are the implications of running snapshot agent.
Will it remove the delta changes which are pending for sync? Or is it okay if
I run snapshot as much as I like?
Expect slowness whenever you do a data import as there is always the impact
of the dataload and then there is the added data you have to merge.
Are you saying that if you run the snapshot you get faster syncs? This only
makes sense if you re-initialize your subscribers after regenerating the
snapshot.
If you have anonymous subscribers the snapshot is always generated each time
you run it. If you have named it is only regenerated if subscribers expire
or require reinitialization.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Ravi Lobo" <RaviLobo@.discussions.microsoft.com> wrote in message
news:45E03380-6DE7-4AB1-B20D-A97C18B04717@.microsoft.com...
>I have the following scenario,
> 1.merge replication with 3000 subscribers
> 2.I have dynamic publication (NOT dynamic snapshot)
> 3.we observed that synchronization is slow whenever we do a data import at
> the publisher
> 4.because of this we ran the snapshot agent immediately after the data
> import
> 5.after the snapshot run, we find the subscriber syncs faster
>
> I would like to know what are the implications of running snapshot agent.
> Will it remove the delta changes which are pending for sync? Or is it okay
> if
> I run snapshot as much as I like?
>
>
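Editorial aside: one hedged way to automate step 4 above (running the Snapshot Agent right after a data import) is to start the agent's job on the distributor from T-SQL; the job name below is hypothetical.

EXEC msdb.dbo.sp_start_job
    @job_name = N'PUBSERVER-PublishedDB-MyPublication-1'  -- hypothetical Snapshot Agent job name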
|||Thank you Hilary for your time. I have some more clarifications:
1.I have SQL Server CE subscribers
2.Hence I need to use anonymous subscriptions
I have the following questions here,
a)Can I use a pre-generated snapshot in my case? (Anonymous + SQL CE
subscribers)
b)I also have dynamic filters on the publisher. What impact will
re-running the snapshot a second time have on the subscriber?
"Hilary Cotter" wrote:

> Expect slowness whenever you do a data import as there is always the impact
> of the dataload and then there is the added data you have to merge.
> Are you saying if you run the snapshot you get faster sync's? This only
> makes sense if you re-initialize your subscribers after regenerating the
> snapshot.
> If you have anonymous subscribers the snapshot is always generated each time
> you run it. If you have named it is only regenerated if subscribers expire
> or require reinitialization.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
>
> "Ravi Lobo" <RaviLobo@.discussions.microsoft.com> wrote in message
> news:45E03380-6DE7-4AB1-B20D-A97C18B04717@.microsoft.com...
>
>

Merge Snapshot execution does not create any BCP files for data tables.

Good Day,
After running the merge publication's Snapshot Agent job, no BCP files are created
for the data tables.
Here is my code for the merge articles:
Dim lbResult As Boolean
Dim lobjMArt As MergeArticle2
Dim lbHasIdentity As Boolean
Set lobjMArt = New MergeArticle2
lobjMArt.Name = asTableName
lobjMArt.SourceObjectName = asTableName
lobjMArt.SourceObjectOwner = "dbo"
UpdateLog "CreationScriptOptions Default Value is: " &
lobjMArt.CreationScriptOptions
If abHasIdentity Then
lobjMArt.AutoIdentityRange = True
lobjMArt.PublisherIdentityRangeSize = 1000000
lobjMArt.SubscriberIdentityRangeSize = 1000000
lobjMArt.IdentityRangeThreshold = 85
End If
pobjSQLDMOMerge.MergeArticles.Add lobjMArt
Set lobjMArt = Nothing
lbResult = True
The Job History Shows:
bulk copying snapshot data for article [TableName] (0 rows).
This is shown for all tables. 90% of my tables contain records. For
example, the Patient table has 675 records.
What am I missing?
Thanks In Advance.
Brian
The agent just ran and created the BCP files.
The agent is scheduled to run at 12:00 am. Why did it work through
scheduled execution and not through code?
Am I missing something in this code:
Dim lbResult As Boolean
Dim lsErrSource As String
Dim lsJobID As String
Dim liX As Long
Dim lobjDis As SQLDMO.Distributor2
Dim lobjDisPubs As SQLDMO.DistributionPublishers
Dim lobjDisPub As SQLDMO.DistributionPublication2
Dim lobjJobs As SQLDMO.Jobs
Dim lobjJob As SQLDMO.Job
Dim loJobHis As SQLDMO.JobHistoryFilter
Dim loQR As SQLDMO.QueryResults2
Dim liY As Integer 'Rows
UpdateLog "Refresh Snapshot Function"
'===Load Default Vars
lsErrSource = "clsPublisher.RefreshSnapShot"
'===Parse the Connection String:
If ParseConnectionString(asConnect) Then
    If ConnectToServer() Then
        'pobjSQLServer now equals the Server object.
        'We need to get a copy of the Publication DB.
        Set lobjDis = pobjSQLServer.Replication.Distributor
        Set lobjDisPubs = lobjDis.DistributionPublishers
        'Set lobjDisPub = lobjDis.DistributionPublishers(psPubServer).DistributionPublications.Item("Promise:Promise")
        Set lobjDisPub = _
            lobjDisPubs(psPubServer).DistributionPublications.Item(1)
        UpdateLog ("DisPub Name: " & _
            lobjDisPubs(psPubServer).DistributionPublications.Item(1).Name)
        lsJobID = lobjDisPub.SnapshotJobID
        psJobName = lobjDisPub.SnapshotAgent
        Set lobjDisPub = Nothing
        Set lobjDisPubs = Nothing
        Set lobjDis = Nothing
        Set lobjJobs = pobjSQLServer.JobServer.Jobs
        'Get the Job to Start it.
        UpdateLog ("Job Info: ID:" & lsJobID & " Name: " & psJobName)
        Call SaveSetting("Horizon Healthware Inc", "Promise", _
            "ReplicationJobName", psJobName)
        Set lobjJob = lobjJobs.Item(psJobName)
        UpdateLog ("Job: Starting Job.")
        lobjJob.Invoke
        liX = 0
        lobjJob.Refresh
        Do While lobjJob.CurrentRunStatus <> SQLDMOJobExecution_Idle
            liX = liX + 1
            If liX = 200000 Then
                lobjJob.Refresh
                DoEvents
                liX = 0
            End If
        Loop
        'loJobHis.JobName = asJobName
        'loJobHis.OldestFirst = False
        'Set loQR = pobjSQLServer.JobServer.EnumJobHistory(loJobHis)
        'liX = 1
        'For liX = 1 To loQR.Columns
        '    'UpdateLog ("Job History ColumnName :" & loQR.ColumnName(liX) & " Value:" & loQR.GetColumnString(1, liX))
        'Next
        Set loQR = Nothing
        Set loJobHis = Nothing
        Set lobjJob = Nothing
        Set lobjJobs = Nothing
        Call DisconnectFromServer
        lbResult = True
    Else
        Err.Raise hhwErrorNum.lErrUnableToConnectToServer, _
            lsErrSource, ptErrorDesc.sErrParsingConnectionString
    End If
Else
    lbResult = False
    Err.Raise hhwErrorNum.lErrParsingConnectionString, lsErrSource, _
        ptErrorDesc.sErrUnableToConnectToServer
End If
Thanks
Brian

Merge synchronization problem

Hi there.

I have a big problem with SQL 2005 merge synchronization from a single subscriber. The problem reveals itself on one database's publications (it worked fine until today) but not others. The synchronization job writes to its history:

The merge process was unable to create a new generation at the 'Subscriber'. Troubleshoot by restarting the synchronization with verbose history logging and specify an output file to which to write. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147200994).

This message can be found in the synchronization status after 30 minutes idle with the status

Upload changes to publisher

During this time the replication agent job locks another one. I think the subscriber cannot upload its changes, which have accumulated since the synchronization failures started.

How can I log verbose history to file?

Any help will be appreciated.

Here is the info on how to enable logging:

How to enable replication agents for logging to output files in SQL Server

http://support.microsoft.com/kb/312292
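In short, per that KB article, you append output parameters to the Merge Agent's command line (for example in the agent's job step); the file path here is an assumption:

-Output "C:\Temp\mergeagent.log" -OutputVerboseLevel 2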

|||Did you ever find a solution to the problem. I have had the same error on a couple of different subscribers and the verbose logging does not show where the problem is!

?

Thanks

Stuart

|||

I am also experiencing similar problems.

In my case, we are using pull subscriptions from 24 different subscribers all to one published database. Distributor is same as publisher.

No answers yet, but we're getting MS help on it -- sort of. Very difficult to track down without being able to get useful data, and getting useful data is proving to be difficult.


Merge seems to hang at conflict and conflict is not logged

We have a SQL2005 merge replication database with 8 subscribers. Three of the
subscribers have not successfully replicated. We would appreciate any
additional ideas for correcting this. So far we have some inconsistencies.
For example, the "View Synchronization Status" displays:
"Downloaded 100 change(s) in 'OurTable' (100 updates, 1 conflict)"
and the publisher's "Microsoft Replication Conflict Viewer" displays:
"There are no conflicts to view."
If you have any ideas why conflicts are causing the replication to fail
while the conflict viewer shows no conflicts, we would appreciate your input.
Thanx.
Were there any conflicts on the subscriber side?
You can run the conflict viewer there as well, or if that's not
convenient/possible there are sp_xxx procedures to help you view the
conflict tables on the client...
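Editorial note: the procedures alluded to above include sp_helpmergearticleconflicts and sp_helpmergeconflictrows; a hedged sketch, with the publication and conflict-table names being assumptions:

-- List the articles that currently have conflicts for a publication
EXEC sp_helpmergearticleconflicts @publication = N'YourPublication'

-- Show the rows in a specific conflict table
EXEC sp_helpmergeconflictrows @conflict_table = N'MSmerge_conflict_YourPublication_OurTable'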

Merge replication - records not being replicated.

We are replicating 5 subscriptions to 500 users with MSDE databases. All
databases are SQL Server 2000 SP4.
The issue we are seeing is that some users are not seeing all the records
they should. (Merge replication) Examples:
User A creates some new records. The records get replicated to the server
and then are not in user A's database (they should still be in the user's
database). There are no conflicts. Re-building the snapshot and
re-initializing with the upload changes option selected does not fix the issue.
Re-building the snapshot and re-initializing with the upload changes option not
selected does fix the issue.
Any suggestions for tracking this problem down would be appreciated.
Are you using any dynamic filters? If the added rows on A are not allowed
according to the filter, the row is outside of the partition and this could
account for such behaviour.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Paul,
We are using dynamic filters. The users are only allowed to add records
that are in the partition.
|||Then this is normal behaviour.
It would help if you explained a little more about what you would like to
happen.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
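Editorial illustration of the partition behaviour Paul describes: with a dynamic filter such as the hypothetical one below, a row inserted at a subscriber that does not satisfy the filter belongs to no partition there, so after it is uploaded to the publisher it is removed from the subscriber on the next merge.

-- Hypothetical dynamic filter: each subscriber only keeps its own login's rows
EXEC sp_addmergearticle
    @publication = N'OrdersPub',
    @article = N'Orders',
    @source_object = N'Orders',
    @subset_filterclause = N'owner_login = SUSER_SNAME()'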

Merge Rpl. Pull from subscriber access denied problem

Hi,
I have set up one laptop as the Distributer/Publisher. Went through the
wizard and set up a Publication also, used Pubs. Then registered another
remote laptop that I can see via the network, and it can see me. I went
through the wizard again and set up a Push to that laptop. Said it ran good,
and I can see the tables on the remote laptop now.
I deleted the Push and keep trying to create a Pull at the other laptop,
(subscriber). The wizard sets it up, but when it Starts Syncronizing, it
immediately gets the big Red X.
The error said The schema script
'\\ACER\ReplShare\ReplData\unc\ACER_pubs_pubs_articles\20050928212317\stores_1.sch' could not be propagated to the subscriber.
I can see this share from both ends. I have read a ton about the accounts
the agent has to run under in the last 12 hours, but can't see what I am
doing wrong.
Is there a trick here?
Thanks.
Steve,
try logging on to the subscriber laptop using the same account that the sql
server agent uses as a service account. Then see if you can browse to the
snapshot folder
\\ACER\ReplShare\ReplData\unc\ACER_pubs_pubs_articles\20050928212317. If you
can, see if you can copy the contents of this directory locally. I'm
guessing that the first part won't be possible due to permission
restrictions, but please post back with your results.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Paul,
I had tried some other things prior to being able to read your response.
Here is what I have done that is now working.
I set up new SQL users on both Publisher/Dist. and Subscriber, giving them
the proper roles. I then set up a new Windows login on each box, "Bob", of
type admin. I then changed both the MSSQLSERVICE and SQLAGENTSERVICE on each
to run under "Bob". When it asks for logon credintials for the subscriber, I
use the sa, when it asks for the publisher, I use the new SQL user I had set
up. Probably may have been able to use that for the subscriber instead of
the sa, don't know.
I have tried so many things since yesterday afternoon, I am a little unsure
as to what actually solved it. From all I read over night, having the two
service run under "Bob" was needed.
Thank you for the response,
Steve
"Paul Ibison" wrote:

> Steve,
> try logging on to the subscriber laptop using the same account that the sql
> server agent uses as a service account. The see if you can browse to the
> snapshot folder
> \\ACER\ReplShare\ReplData\unc\ACER_pubs_pubs_articles\20050928212317. If you
> can, see if you can copy the contents of this directory locally. I'm
> guessing that the first part won't be possible due to permission
> restrictions, but please post back with your results.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
>
|||This is OK - what you've set up is known as pass-through authentication.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Is there a better or more preferred method? We have one laptop that acts as
the publisher/distributor, and two other laptops that will be subscribers.
They run over a wireless network. The two subscribers will be able to
initiate pull merge replications.
Thanks,
Steve
"Paul Ibison" wrote:

> This is OK - what you've set up is known as pass-through authentication.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
>
|||Steve,
are the laptops all on the same domain? If so, you could use a domain
account, which is given rights to the distributor's working folder. If not,
it's either pass-through, FTP, backup and restore or alternative snapshot
locations.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)

Merge rows

I've been tearing my hair out for the past few days trying to get a merge across rows in SQL Server 2000. Here is an example result set that I have so far:

Code Snippet

zzzz_X@.X_orange.net 0 0 1
zzzz_X@.X_orange.net 0 1 0
zzzz_X@.X_orange.net 1 0 0
zztoproadking_X@.X_yahoo.ca 0 0 1
zztoproadking_X@.X_yahoo.ca 0 1 0
zztoproadking_X@.X_yahoo.ca 1 0 0
zztoponly_X@.X_yahoo.com 0 0 1
zztoponly_X@.X_yahoo.com 0 1 0
zztoponly_X@.X_yahoo.com 1 0 0
zzjozz_X@.X_aol.com 0 0 1


What I want to do ultimately is merge those repeated email addresses into a single row, such as:

Code Snippet

zzzz_X@.X_orange.net 1 1 1
zztoproadking_X@.X_yahoo.ca 1 1 1
zztoponly_X@.X_yahoo.com 1 1 1
zzjozz_X@.X_aol.com 0 0 1


I realize this is possible using cursors, but is there a simple SQL way to achieve the same results?

Here's my SQL for the first result set:

Code Snippet

SELECT DISTINCT TOP 10 C.EmailAddress,
CASE C.CampaignID WHEN '1' THEN '1' ELSE '0' END AS travel_deals,
CASE C.CampaignID WHEN '2' THEN '1' ELSE '0' END AS contests,
CASE C.CampaignID WHEN '3' THEN '1' ELSE '0' END AS picks_of_the_week
FROM dbo.MarketingEmailCampaignAddresses C
LEFT OUTER JOIN Members M
ON C.EmailAddress = M.email
GROUP BY C.EmailAddress, C.campaignID, M.memberID
ORDER BY C.EmailAddress DESC

You can group them using "group by" clause.

SELECT TOP 10

C.EmailAddress,
max(CASE WHEN C.CampaignID = '1' THEN '1' ELSE '0' END) AS travel_deals,
max(CASE WHEN C.CampaignID = '2' THEN '1' ELSE '0' END) AS contests,
max(CASE WHEN C.CampaignID = '3' THEN '1' ELSE '0' END) AS picks_of_the_week

FROM

dbo.MarketingEmailCampaignAddresses C
LEFT OUTER JOIN Members M
ON C.EmailAddress = M.email

GROUP BY

C.EmailAddress

ORDER BY C.EmailAddress DESC

go

AMB

|||What is the MAX() doing in this case? The columns after the email address are booleans (or bit in SQL Server dialect).

While your way does seem to do an email address merge, I'm getting 1,1,1 for every email address.
|||

You are on the right track, but missed the GROUP BY function, which groups all your row-wise data into a single row. You can use max/sum as per your requirement.

This is the legacy approach to getting pivot data (swapping row-based values into columns):

Code Snippet

SELECT DISTINCT TOP 10

C.EmailAddress,

C.campaignID,

M.memberID,

Isnull(Max(CASE C.CampaignID WHEN '1' THEN '1' END),0) AS travel_deals,

Isnull(Max(CASE C.CampaignID WHEN '2' THEN '1' END),0) AS contests,

Isnull(Max(CASE C.CampaignID WHEN '3' THEN '1' END),0) AS picks_of_the_week

FROM

dbo.MarketingEmailCampaignAddresses C

LEFT OUTER JOIN Members M

ON C.EmailAddress = M.email

GROUP BY

C.EmailAddress,

C.campaignID,

M.memberID

ORDER BY

C.EmailAddress DESC

|||

I really don't know how you got all 1's.

Here is the sample:

Create Table #marketingemailcampaignaddresses (

[EmailAddress] Varchar(30) ,

[CampaignID] int

);

Insert Into #marketingemailcampaignaddresses Values('zzzz_X@.X_orange.net','1');

Insert Into #marketingemailcampaignaddresses Values('zzzz_X@.X_orange.net','2');

Insert Into #marketingemailcampaignaddresses Values('zztoproadking_X@.X_yahoo.ca','3');

Insert Into #marketingemailcampaignaddresses Values('zztoponly_X@.X_yahoo.com','1');

Insert Into #marketingemailcampaignaddresses Values('zztoponly_X@.X_yahoo.com','2');

Insert Into #marketingemailcampaignaddresses Values('zztoponly_X@.X_yahoo.com','3');

Insert Into #marketingemailcampaignaddresses Values('zzjozz_X@.X_aol.com','2');

Select

EmailAddress,

Case When CampaignID = '1' Then 1 Else 0 End AS travel_deals,

CASE WHEN CampaignID = '2' THEN 1 ELSE 0 END AS contests,

CASE WHEN CampaignID = '3' THEN 1 ELSE 0 END AS picks_of_the_week

From

#marketingemailcampaignaddresses

/*

EmailAddress                 travel_deals  contests  picks_of_the_week
---------------------------  ------------  --------  -----------------
zzzz_X@.X_orange.net          1             0         0
zzzz_X@.X_orange.net          0             1         0
zztoproadking_X@.X_yahoo.ca   0             0         1
zztoponly_X@.X_yahoo.com      1             0         0
zztoponly_X@.X_yahoo.com      0             1         0
zztoponly_X@.X_yahoo.com      0             0         1
zzjozz_X@.X_aol.com           0             1         0

*/

Select

EmailAddress,

Isnull(Max(Case When CampaignID = '1' Then 1 End),0) AS travel_deals,

Isnull(Max(CASE WHEN CampaignID = '2' THEN 1 END),0) AS contests,

Isnull(Max(CASE WHEN CampaignID = '3' THEN 1 END),0) AS picks_of_the_week

From

#marketingemailcampaignaddresses

Group By

EmailAddress

/*

EmailAddress                 travel_deals  contests  picks_of_the_week
---------------------------  ------------  --------  -----------------
zzjozz_X@.X_aol.com           0             1         0
zztoponly_X@.X_yahoo.com      1             1         1
zztoproadking_X@.X_yahoo.ca   0             0         1
zzzz_X@.X_orange.net          1             1         0

*/

|||You are absolutely correct. I was getting all 1's when trying to run the pivot without creating a temp table with the original results.

I had to do a SELECT INTO to create a temp table, then run a SELECT from that table to achieve the correct results.

Thanks for the help (both of you)!
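Editorial footnote: on SQL Server 2005 and later the same merge can be written with the PIVOT operator. A minimal sketch against the sample table above (treat it as a sketch and verify against Books Online):

SELECT EmailAddress,
    Isnull([1], 0) AS travel_deals,
    Isnull([2], 0) AS contests,
    Isnull([3], 0) AS picks_of_the_week
FROM (SELECT EmailAddress, CampaignID, 1 AS flag
      FROM #marketingemailcampaignaddresses) AS src
PIVOT (MAX(flag) FOR CampaignID IN ([1], [2], [3])) AS pvt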

Merge reversed

Running Win 2000 & SQL 2000
I have a merge publication/subscription (rows are
filtered by location), replicating data between 16
servers (15 remote location & a main office). When data
is changed at the a remote location, the changes gets
propagated to the main office, but at some point, the
change gets reversed.
Any idea what is causing this?
Larry,
when you say it gets reversed, do you mean the row is deleted from the
subscriber? This could be caused if you are partitioning data and a row has
been added on a subscriber which doesn't accord with its filter.
HTH,
Paul Ibison
|||Look in the conflict table, for the article, on the publisher. This might
give you a clue as to what is going on.
Rand
This posting is provided "as is" with no warranties and confers no rights.
|||Paul,
No the row is not deleted, but changed back to its
original state, prior to being updated at the
subscriber. This happens sporadically, but is causing
MAJOR problems.
Larry...
|||Rand,
There is nothing in the conflict tables for any article
in the publication.
Larry...
|||I saw your reply to Rand and if both filtering and conflicts are ruled out
(and no user on the publisher is altering the data!) then this is very
strange. I'd use profiler or an audit trigger to capture what is going on.
HTH,
Paul Ibison
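Editorial sketch of the audit-trigger idea Paul suggests, assuming a hypothetical published table dbo.Orders with an int key and a status column (all names here are illustrative):

CREATE TABLE dbo.Orders_Audit (
    audit_id   int IDENTITY(1,1) PRIMARY KEY,
    order_id   int,
    old_status varchar(20),
    new_status varchar(20),
    changed_by sysname  DEFAULT SUSER_SNAME(),  -- the merge agent's login will show up here too
    changed_at datetime DEFAULT GETDATE()
)
GO
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER UPDATE
AS
INSERT INTO dbo.Orders_Audit (order_id, old_status, new_status)
SELECT d.order_id, d.status, i.status
FROM deleted d JOIN inserted i ON d.order_id = i.order_id

Comparing changed_by and changed_at against the merge agent's run times should show whether the "reversal" comes from the agent or from another process.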
|||Thanks Paul...will do.
|||Does the MSmerge_history table on the distributor offer any reason why
the updates were not applied?
Reinout Hillmann
SQL Server Product Unit
This posting is provided "AS IS" with no warranties, and confers no rights.
anonymous@.discussions.microsoft.com wrote:
> Rand,
> There is nothing in the conflict tables for any article
> in the publication.
> Larry...
>

Merge results from two fields

What is the best way to go about merging the results of two fields in
the same table?

I have two different fields that hold lists of names, some of them
identical, some different. From those I'd like to generate a merged
list with all the distinct names.

For example:

Field1 contains: Jack, Joe, Jim, Paul
Field2 contains: Peter, Paul, Joe, Jim

The result should be: Jack, Joe, Jim, Peter, Paul
Why are you storing delimited lists in columns? This is a bad idea in a
relational database. Best to create a new table and put the data in
there. SELECT DISTINCT will give you a distinct list from a table but
it's tricky to manipulate delimited lists in SQL.

--
David Portas
SQL Server MVP
--
|||I'm sorry, I meant to convey that these names are different records in
the particular fields.

So - I may have the following records:

<pre>
Rec# Field1 Field2
1: Jack Peter
2: Joe Paul
3: Jim Joe
4: Paul Jim
</pre>
and would like a result of:
Jack
Joe
Jim
Paul
Peter
|||Create table statements and insert statements for sample data will
prevent these kinds of misunderstandings. In the future, please try to
include those in your posts.

Off the top of my head (meaning it might not be the best solution
performance-wise), the following should work:

CREATE TABLE My_Table (field1 VARCHAR(20), field2 VARCHAR(20))
GO

INSERT INTO My_Table VALUES ('Jack', 'Peter')
INSERT INTO My_Table VALUES ('Joe', 'Paul')
INSERT INTO My_Table VALUES ('Jim', 'Joe')
INSERT INTO My_Table VALUES ('Paul', 'Jim')
GO

SELECT Field1 AS Name FROM My_Table
UNION
SELECT Field2 FROM My_Table
GO

HTH,
-Tom.
|||This should do it

select distinct Field1 from TableName
union
select distinct Field2 from TableName

Stacey wrote:
> I'm sorry, I meant to convey that these names are different records
in
> the particular fields.
> So - I may have the following records:
> <pre>
> Rec# Field1 Field2
> 1: Jack Peter
> 2: Joe Paul
> 3: Jim Joe
> 4: Paul Jim
> </pre>
> and would like a result of:
> Jack
> Joe
> Jim
> Paul
> Peter|||Do not use SELECT DISTINCT and UNION together. It is redundant and
most SQL products will not see that, so you get three sorts instead of
one.

Merge republish Schema changes

Hello,

I'm working on a replication topology that is completely merge. We have a single consolidated instance (SQL 2005 SP1 Standard) that holds all data and is a continuous push merge publication filtered by region to regional instances (SQL 2005 SP1 Standard). Then we have individual user instances (SQL Express SP1) that pulls from the republished regional instances which is filtered by user. Both publications have Replicate Schema Changes set to true.

I'm testing out changes to tables and sps on a test system I've been using this process:

1-Run Snapshot on the Consolidated instance

2-Verify all published articles have a status of 2 in sysmergearticles

3-Run Regional Snapshot

4-Verify all published articles have a status of 2 in sysmergearticles

5-Run alter table scripts

6-Once all three levels have the table changes, run the alter sp scripts

I've gotten to step 5, and the changes get replicated to the regional instance just fine; however, only the existing column changes get replicated to the SQL Express instance, not the new columns. Looking at the articles in the regional publication, it shows the new columns, but they are not selected. I know I can manually select them (or probably write a script that adds them to the publication, although sp_repladdcolumn has been deprecated), but isn't there a way to make this a completely automated process, since it's just a republished database? Also, is the process I'm using the correct one?

Thank you,

Aaron Lowe

Is your publication property replicate_ddl set to true?
|||I apologize for not being clearer in my original post. I had said that replicate schema changes was set to true; this is the replicate_ddl property that I was referring to. Thanks, Aaron
|||When you add a new column, the column should get replicated to all nodes in your topology. Is the new column not getting replicated at all? Where in your topology are you adding the new columns - publisher, republisher or subscriber?
|||I'm adding the columns at my original publisher (the consolidated one). As I said, it is pushed down to my subscribers that republish the data (the regional ones that are pushed from the consolidated one); it just doesn't get all the way down to my final subscribers (the individual SQL Express ones that pull the data). Looking at the properties of the publication on the republisher, it shows the columns in the publication but they are not selected.
|||If the replicate_ddl option is truly enabled at both the publisher and the republisher, then I'm not sure what the problem is. You verified the replicate_ddl column is set to 1 in the sysmergepublications table in the published database at both the publisher and republisher?

Well, I believe it's correct, here's what is in the sysmergepublications:

Consolidated database (original publisher)

publication name, replicate_ddl

Consolidated, 1

Region, 0

Regional database (republisher)

publication name, replicate_ddl

Consolidated, 1

Region, 1

SQL Express database (subscriber)

publication name, replicate_ddl

Consolidated, 0

Region, 1

Also the status in sysmergearticles in the consolidated db is 2 (active). There are two sets of articles in the sysmergearticles table in the regional db, one for each the consolidated and regional publication. The records in sysmergearticles for the consolidated publication has a status of 1 (Unsynced) while the records for the regional publication have a status of 2 (active). The status in the SQLExpress pull subscriptions is all 1 (Unsynced).

Thanks,

Aaron

|||Can you try your scenario with SP2? We fixed a somewhat similar issue in SP2.
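Editorial sketch for readers wanting to double-check: verifying and, if needed, enabling DDL replication on the republished publication ('Region' is the publication name from this thread; run in the republished database):

SELECT name, replicate_ddl FROM dbo.sysmergepublications

EXEC sp_changemergepublication
    @publication = N'Region',
    @property = N'replicate_ddl',
    @value = '1'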

Merge Replication - Run Stored Procedure when merge agent starts

I have database on SQL Server 2000 set up with a merge publication.
This publication is configured with a number of dynamic filters to
reduce the amount of data sent to each client. Each client has an
anonymous pull subscription. The merge process can be triggered by the
windows sync manager and my application.

To improve performance I have created some helper tables to hold the
mapping between user login and primary keys of selected entities.

For the replicated data to be correct the contents of the helper tables
needs to be up to date.

I need to fire off a stored procedure on the publisher before
replication starts, to verify that this data is up to date. I cannot
see any documented way of doing this; however, I have been experimenting
with some unorthodox approaches.

Firstly has anyone any ideas?

I have been considering adding a trigger to some of the tables used by
the Microsoft replication code - yes I know this is very nasty.

My problems arise because executing this stored procedure will cause
some data to be updated. In updating data we could create a new
generation in the database. I must therefore run my stored procedure
before any the Microsoft code makes any generation checks / updates.

Anyone done anything similar, Anyone have any better ideas?

Any comments would be gratefully received.(tedd_n_alex@.yahoo.com) writes:
> My problems arise because executing this stored procedure will cause
> some data to be updated. In updating data we could create a new
> generation in the database. I must therefore run my stored procedure
> before any the Microsoft code makes any generation checks / updates.
> Anyone done anything similar, Anyone have any better ideas?

Not being very good at replication, I can only recommend that you visit
microsoft.public.sqlserver.replication. There are some guys who know
replication really well there.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Merge replication: How to customize conflict detection, resolution?

We have a simple problem:
We have a table Assets[asset_id, title] and a table
Asset_Versions[asset_id, version_no]. In addition, there are columns
added by replication for storing the ROWGUID.
After synchronization, assume both Node1 and Node2 have
Assets(1,'XYZ') and Assets_Versions(1,1)
Before syncing again a new version is added at both nodes so that both
Node1 and Node2 have
Assets(1,'XYZ') and Assets_Versions(1,1)(1,2)
Now when it syncs again, the publisher will make another entry of
Assets_Versions(1,2) in the subscriber, thus disturbing the business
logic, because the subscriber will now have 2 entries for version_no=2.
How can we make Replication identify this as a conflict. Can we write
a conflict resolver for such a situation?
Thanks and regards,
Yash
What is the PK on Assets_Versions? I think if it includes the id and version,
then this will get identified as a conflict.
HTH,
Paul Ibison
|||Actually I can't test it here but I believe I was incorrect earlier and that
you'll just get an error instead - perhaps you can test this for me. What I'm
thinking of now is that you could just have id and version in the
asset_versions table and update rather than add new versions:
ID, version, guid
1,1,xxx
on next version we have:
ID, version, guid
1,2,xxx
If this fits in with your application it'll work as this will cause a
conflict if done on 2 nodes before syncing.
HTH,
Paul Ibison
|||A primary key or unique index on Assets_Versions should prevent such a
double insert.
--
relevantNoise - dedicated to mining blogs for business intelligence.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
<yashgt@.gmail.com> wrote in message
news:1187325787.241826.116170@.m37g2000prh.googlegroups.com...
> We have a simple problem:
> We have a table Assets[asset_id, title] and a table
> Asset_Versions[asset_id, version_no]. In addition, there are columns
> added by replication for storing the ROWGUID.
> After synchronzation, assume both Node1 and Node2 have
> Assets(1,'XYZ') and Assets_Versions(1,1)
> Before syncing again a new version is added at both nodes so that both
> Node1 and Node2 have
> Assets(1,'XYZ') and Assets_Versions(1,1)(1,2)
> Now when it syncs again, the publisher will make another entry of
> Assets_Versions(1,2) in the subscriber thus disturbing the business
> logic because the Subscriber will now have 2 entries for version_no=2.
> How can we make Replication identify this as a conflict. Can we write
> a conflict resolver for such a situation?
> Thanks and regards,
> Yash
>
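Editorial sketch to make Hilary's point concrete (this assumes both columns are NOT NULL and that no primary key exists yet):

ALTER TABLE Asset_Versions
    ADD CONSTRAINT PK_Asset_Versions PRIMARY KEY (asset_id, version_no)

With this in place, a second (1,2) row arriving during merge fails the uniqueness check rather than becoming a silent duplicate.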

Merge replication causes insert timeout

Hi All,

I need help. I have a merge replication running every 10 minutes. The replication is for 1 database consisting of around 50 tables, and all 50 tables are marked as articles to be published. Let's say the publisher is server A and the subscriber is server B. For explanation purposes, let's pick 1 table from the database, say table X.

In server A, there are applications running that insert and update table X. In server B, there are applications running that select from and delete from table X. Table X, on average, has tens of thousands of rows.

Now comes the problem: every 10 minutes, whenever the SQL merge agent is running, sometimes when an application inserts a row into table X in server A, it returns an error: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.". When I stop the replication (delete the subscriber), the row insertion into table X in server A seems fine. So I have come to a tentative conclusion that the timeout error happens because, when the insert statement executes, the replication locks the table for merging.

Could somebody verify this? If my conclusion is true, then how am I supposed to handle this (see the bold text above)? I would really appreciate the help.

Yes, the merge agent can fail due to deadlocking. What you need to ensure is that your application is taking the minimum locks necessary. This is no different from two applications accessing/modifying the same data simultaneously; consider merge replication a second application.

Review your queries and insert/update/delete statements, and see how transactions are being used, what locking behavior they take, whether queries are written optimally, etc.
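Editorial aside: one hedged way to confirm whether the merge agent's locks are behind the timeouts is to look for blocking on server A while the agent runs, for example:

-- Sessions currently blocked by another session
SELECT spid, blocked, waittime, lastwaittype, program_name
FROM master.dbo.sysprocesses
WHERE blocked <> 0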

Merge Replication-Apply BCP files

How can I apply the BCP files generated by the Distributor on the Subscribers?

I don't know which parameters BCP.EXE needs.

Thanks

Can you explain why you need to manually apply them? Is the merge agent not applying them for you?|||

I need to apply them manually because there is little information available, and I've configured the subscription with the option to not send the table schemas.

I've configured the replication via the internet (it is slow for very large tables).

But now I need to send data to the subscribers.

Thanks

|||

Manually applying the BCP files is not recommended. If you're having problems, then let us know what they are so we can walk you through a solution.

What part is slow, applying the snapshot to the subscribers? Are these large tables? Are you using filtering, and using dynamic snapshots? Is the agent failing? How would bcp'ing the data across the internet be any faster than the merge agent applying the snapshot?

|||

I only need to know how to manually apply the BCP files on the subscribers.

Does anybody know what parameters I have to use to do this?

Thanks

|||I think I have the same requirements as Jorge.

What I am looking for is quite similar to Jorge's case:

I have 2 database servers, and I have set up merge replication (pull) between them.

But sometimes the connectivity between them is not available for many days, and data still needs to be transferred via CD.

I think this problem is relevant; if not, please ignore it, and please suggest a solution.

Merge Replication? Aaaarghhh!

Can anyone tell me what should be contained in MSrepl_identity_range tables
on both the subscriber and the publisher? Both my tables contain completely
different data - the pub has 83 rows and my single sub has only 3.
When are these tables populated and how? Can I populate them manually from a
system SP?
I refer to my previous post where Hilary Cotter thought there might be an
issue with these tables.
When executing : exec sp_MSfetchidentityrange N'CommentType', 0
I get the following error:
Server: Msg 21195, Level 16, State 1, Procedure
sp_MSfetchAdjustidentityrange, Line 92
A valid identity range is not available. Check the data type of the identity
column.
Thanks in advance...
Chris,
this is a bit more complicated than it seems...
I have had cause to manually change the identity range on a subscriber - I'm
not recommending it but it did lead to a better understanding of the
mechanism involved!
If you are using automatic range management this'll be taken care of when
you synchronize (run the merge agent). However, if it is not possible for
you to connect to the publisher, you could manually update
MSrepl_identity_range on the subscriber. This table is used to check if the
subscriber has used up its range or reached the threshold. The new range you
set would be obtained from MSrepl_identity_range on the distributor, which
is the master table and is used to generate new values. The values in this
table (MSrepl_identity_range on the distributor) would need to be changed to
avoid a future potential conflict. Finally, the check constraints on the
subscriber would need updating accordingly.
As an aside, note that there are some anomalies with automatic range
management: the first range is twice the requested size and the actual range
of values enforced by the check constraint is always one less than the size
selected - SQL Server 2005 managed identities for merge replication has been
redesigned to be more consistent.
HTH,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
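Editorial sketch of the kind of manual inspection Paul describes (CommentType is the article from the error earlier in the thread; the exact layout of the system table varies between versions, so treat this as a starting point):

-- What ranges does replication currently track?
SELECT * FROM MSrepl_identity_range

-- Current identity seed versus the range check constraint on the subscriber
DBCC CHECKIDENT ('CommentType', NORESEED)
EXEC sp_helpconstraint 'CommentType'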
|||Paul. Thanks
I ended up removing replication from the DB and reinstating. I'm now having
a problem with creating the publication from a generated script! See later
post.
Thanks anyway for you help.
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:%239GGiaZrFHA.2996@.tk2msftngp13.phx.gbl...
> Chris,
> this is a bit more complicated than it seems...
> I have had cause to manually change the identity range on a subscriber -
> I'm not recommending it but it did lead to a better understanding of the
> mechanism involved!
> If you are using automatic range management this'll be taken care of when
> you synchronize (run the merge agent). However, if it is not possible for
> you to connect to the publisher, you could manually update
> MSrepl_identity_range on the subscriber. This table is used to check if
> the subscriber has used up its range or reached the threshold. The new
> range you set would be obtained from MSrepl_identity_range on the
> distributor, which is the master table and is used to generate new values.
> The values in this table (MSrepl_identity_range on the distributor) would
> need to be changed to avoid a future potential conflict. Finally, the
> check constraints on the subscriber would need updating accordingly.
> As an aside, note that there are some anomalies with automatic range
> management: the first range is twice the requested size and the actual
> range of values enforced by the check constraint is always one less than
> the size selected - SQL Server 2005 managed identities for merge
> replication has been redesigned to be more consistent.
> HTH,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>

Merge replication?

We have 2 sites which are normally connected by a 10mb fixed line, I'm
looking for a backup should this link fail. Both sites use the same software
connected to a sql server database. If the link fails both sites should be
able to continue in a limited capacity and once the link is restored the
data should synchronise.
I've been looking at this for a little while and merge replication seem to
be the right solution, however I would like to know if anyone could give me
any other options?
Thanks
Dale
you have three options
1) merge
2) transactional replication with queued updating - good where most of the
updates happen on the subscriber
3) bi-directional replication - good when your tables are naturally
partitioned.
Merge is the best solution as you can resolve conflicts. When a conflict
occurs with queued updating you can only view the conflict in the Conflict
Viewer; you cannot resolve it there. Queued also works best when most
transactions occur on the publisher, not the subscriber.
Bi-directional replication does not have any mechanism to handle conflicts:
the jobs fail and you have to manually fix everything - similar to queued,
really. Bi-directional replication is also harder to do schema changes on.
Merge and queued add a GUID column to your underlying tables; bi-di
transactional doesn't.
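For reference, the GUID column merge replication needs is a
uniqueidentifier marked ROWGUIDCOL. If a published table doesn't have one,
the snapshot adds one roughly like this (table and constraint names are
placeholders; rowguid is replication's default column name):

ALTER TABLE dbo.MyTable ADD rowguid uniqueidentifier ROWGUIDCOL NOT NULL
CONSTRAINT DF_MyTable_rowguid DEFAULT NEWID()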
"Dale Pitman" <d@.dd.com> wrote in message
news:_1Xec.185$zC5.123@.newsfe1-win...
> We have 2 sites which are normally connected by a 10mb fixed line, I'm
> looking for a backup should this link fail. Both sites use the same
software
> connected to a sql server database. If the link fails both sites should be
> able to continue in a limited capacity and once the link is restored the
> data should synchronise.
> I've been looking at this for a little while and merge replication seem to
> be the right solution, however I would like to know if anyone could give
me
> any other options?
> Thanks
> Dale
>
|||AND any conflicts which occur with bi-di mean you have a MAJOR data cleanup
situation to deal with. You have to manually reconcile everything. That is
why bi-di is NOT recommended unless you do one of two things:
1. Enforce that transactions ONLY occur on one side or the other
2. There is a partition which ensures that no changes on one side will ever
conflict with changes on the other side
Mike
Principal Mentor
Solid Quality Learning
"More than just Training"
SQL Server MVP
http://www.solidqualitylearning.com
http://www.mssqlserver.com
|||and how so?
When you get a conflict the distribution agent stops on the side which
detects the conflict. You can then go in and manually fix it. My experience
is that a single conflict does not mask hundreds of conflicts coming after
it, but rather is a single occurrence.
The key is, as both you and I have pointed out, to minimize the chances of
conflicts occurring; but when they do, replication does fail so that you can
fix the conflict and continue.
On the bi-directional systems I have worked on, I have delivered sound
solutions which have been relatively problem free. Bi-directional
transactional is not a good fit for all requirements, but it can work.
"Michael Hotek" <mhotek@.nomail.com> wrote in message
news:u7C5gTeIEHA.3356@.TK2MSFTNGP11.phx.gbl...
> AND any conflicts which occur with bi-di means you have a MAJOR data
cleanup
> situation to deal with. You have to manually reconcile everything. That
is
> why bi-di is NOT recommended unless you do one of two things:
> 1. Enforce that transaction ONLY occur on one side or the other
> 2. There is a partition which ensures that no changes on one side will
ever
> conflict with changes on the other side
> --
> Mike
> Principal Mentor
> Solid Quality Learning
> "More than just Training"
> SQL Server MVP
> http://www.solidqualitylearning.com
> http://www.mssqlserver.com
|||Therein lies the problem. While replication may halt, it also may not. If
you have duplicate primary keys, it will halt. If you update a nonexistent
row, or update one side and delete on the other, it will halt. It will NOT
halt if you update the same piece of data on both sides to different
values. The last one in will win and you will have zero indication that a
previously saved value has now been overridden, which can create severe
issues for the business.
Also, replication halting in those two cases is a very undesirable thing.
You can't exactly halt the transaction flow from applications, so
additional transactions begin piling up on both sides because replication
is halted. This not only dramatically increases latency, but it also has an
increasing potential to generate even more conflicts, which can put you in
the same situation with the agents halting, causing a further backup in
transactions, etc., until you hit the point where someone deems replication
to be completely nonfunctional and starts pointing fingers at the
replication engine. It has nothing to do with the replication engine, but
entirely with the way the engine is being used and the design being
implemented.
The 2 conditions I outlined are the only way I would EVER implement a bi-di
system. I've been doing this since Sybase Rep Server 1.0 against Sybase
System 10. I've watched more than one bi-di system, 2 of them my own
designs, become complete toast because those 2 conditions were violated.
You get burned enough times, you learn not to keep sticking your hand in
the fire.
If you want to deal with conflicts being thrown in a bi-di system, that is
your choice. I certainly wouldn't want to be the one sitting in that seat,
because I know all too well just what kind of havoc a single conflict can
cause in that type of implementation. I also prefer not to have to babysit
a replication system 24 hours a day because I've put in place a design that
has a flaw which can only be overcome with manual intervention and a
decision by an IT person who isn't necessarily qualified to make a call on
the data.
Mike
Principal Mentor
Solid Quality Learning
"More than just Training"
SQL Server MVP
http://www.solidqualitylearning.com
http://www.mssqlserver.com


Merge replication?

We have a SQL Server db in Atlanta. Rows are added to tables in that
database by a website in Atlanta. We have a number of people who use
that data in Dallas and they need to be able to modify the data. This
scheme works okay except when the Internet connection between the two
sites goes down.
So we are looking to replicate the database in Dallas. From the little
I've read, Merge replication would probably be our best bet. However,
I've never worked with replication so I'm unsure. I'd like some
resources either online or books that would help me understand the
process better and help me in implementing such a scheme. Thanks!
-John
Merge might work, bi-directional transactional is another option.
In general if you go offline frequently, your chances for generating
conflicts increase. Merge has conflict tracking mechanisms, where if a
conflict occurs it is logged to conflict tables, and the publisher or
subscriber (your choice) wins. You can roll back this conflict. With
bi-directional transactional replication your distribution agent will fail
and you will have to manually fix the problem. Depending on your data flow
this can involve a single row or thousands.
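If you do go with merge, you can inspect the logged conflicts from T-SQL as
well as from the viewer. A quick sketch - the publication and conflict
table names below are placeholders:

-- list the conflict tables behind the publication's articles
EXEC sp_helpmergearticleconflicts @publication = N'MyMergePub'
-- show the losing rows logged in one of those conflict tables
EXEC sp_helpmergeconflictrows @publication = N'MyMergePub',
@conflict_table = N'conflict_MyMergePub_MyTable'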
Bi-directional transactional replication is not resilient to schema changes;
merge replication is, to a degree. Transactional replication is also easier
to troubleshoot. Merge will add a tracking column to each table you are
replicating; bi-directional transactional replication does not.
Most DBAs would pick merge replication.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"John Baima" <john@.nospam.com> wrote in message
news:n5hus0tfqmr4sgudepget0o5ebhaer8nel@.4ax.com...
> We have a SQL Server db in Atlanta. Rows are added to tables in that
> database by a website in Atlanta. We have a number of people who use
> that data in Dallas and they need to be able to modify the data. This
> scheme works okay except when the Internet connection between the two
> sites goes down.
> So we are looking to replicate the database in Dallas. From the little
> I've read, Merge replication would probably be our best bet. However,
> I've never worked with replication so I'm unsure. I'd like some
> resources either online or books that would help me understand the
> process better and help me in implementing such a scheme. Thanks!
>
> -John
>
|||"Hilary Cotter" <hilary.cotter@.gmail.com> wrote:

We do not go offline frequently, except the past 2 weeks which has
made people scream. I don't think that we will have too many conflicts
because the one location mostly adds records and the other modifies
existing records. Is there a book that you would recommend?
-John
|||I'd probably use bi-directional transactional replication, as long as you
are not going to be modifying the schema much.
It is more difficult to set up, and if you have a high-volume database you
will have to watch your transaction log growth.
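A quick way to keep an eye on that is the built-in log space report, which
shows the size and percent used of every database's transaction log:

DBCC SQLPERF(LOGSPACE)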
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"John Baima" <john@.nospam.com> wrote in message
news:um90t0de0n4rp3dp2fsu86t988sdhnm3i2@.4ax.com... [vbcol=seagreen]
> "Hilary Cotter" <hilary.cotter@.gmail.com> wrote:
fail[vbcol=seagreen]
flow[vbcol=seagreen]
changes.[vbcol=seagreen]
easier
> We do not go offline frequently, except the past 2 weeks which has
> made people scream. I don't think that we will have too many conflicts
> because the one location mostly adds records and the other modifies
> existing records. Is there a book that you would recommend?
> -John
>


Merge replication: View Conflicts interface errors

Hi,

Not sure if this is the correct place to post this, but I'm doing merge replication between SQL Server 2005 SP1 and SQL Server Mobile 2005 (on mobile devices) and I'm having some problems with data conflict resolution in the "View Conflicts" interface in SQL Server Management Studio. The "Select Conflict Table" list comes up with 5 tables listed. When I click on any of these tables I get an error dialog box titled "Microsoft Replication Conflict Viewer" with the message "[ColumnName] is neither a DataColumn nor a DataRelation for table summary (System.Data)" and these details:

===================================

CSRTechnicianID is neither a DataColumn nor a DataRelation for table summary. (System.Data)


Program Location:

at System.Data.DataRowView.get_Item(String property)
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.GetWinnerSQL(DataRow loserRow, Boolean blockFetch)
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.GetSourceRow()
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.FillDetailData()
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.FillDetail()
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.summaryInfoGrid_SelectionChanged(Object sender, SelectionChangedEventArgs args)
at Microsoft.SqlServer.Management.UI.Grid.GridControl.OnSelectionChanged(BlockOfCellsCollection selectedCells)
at Microsoft.SqlServer.Management.UI.Grid.GridControl.SelectedCellsInternal(BlockOfCellsCollection col, Boolean bSet)
at Microsoft.SqlServer.Management.UI.Grid.GridControl.set_SelectedCells(BlockOfCellsCollection value)
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.ResetSummaryGrid()
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.LoadConflict()

It then opens the conflict viewer window and lists a conflict in the top grid, but nothing in the lower grid (which usually shows details of the winner and loser records). In the top grid it shows the conflict type as "4 (Update/Delete, Update wins)" and the conflict loser is the mobile device subscriber. If I click on anything (Submit Winner, Remove) I get an error message "Column [Column Name] does not belong to table summary (System.Data)" with these details:

===================================

Column 'CSRTechnicianID' does not belong to table summary. (System.Data)


Program Location:

at System.Data.DataRow.GetDataColumn(String columnName)
at System.Data.DataRow.get_Item(String columnName)
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.SetLogInfo(DataRow logRow, String sql)
at Microsoft.SqlServer.Management.UI.ConflictViewer.ViewerForm.btnSubmitWinner_Click(Object sender, EventArgs eg)

It seems that the conflict resolution function has gotten corrupted. Does anyone have any tips on how to fix this?

Regards,

Greg

Hi Greg,

This is a known issue which is now fixed in SQL Server 2005 SP2. Please try it and let us know if the problem persists.

The problem is that your table already had a rowguid column which was named differently from what the conflict viewer was expecting.
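
If you want to see what the viewer is tripping over, you can check which
column the table actually carries as its ROWGUIDCOL. The table name below
is a placeholder for your published article:

-- list the uniqueidentifier column flagged as ROWGUIDCOL (SQL Server 2005)
SELECT name FROM sys.columns
WHERE object_id = OBJECT_ID(N'dbo.MyTable') AND is_rowguidcol = 1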
