Comments on b.ling on software development: CQRS: Building a “Transactional” Event Store with MongoDB

---

Anonymous (2012-05-01 08:41):
It wouldn't be any different from processing a new command. You will need to load all snapshots/events to bring the aggregate root up to the latest version. Events should typically be "fire and forget"; if you require guaranteed delivery, it's better to restructure them as commands.

---

bodrin (2012-05-01 05:26):
Nice, thanks for answering :)
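The replay described in the reply above (load the latest snapshot, then apply the newer events to bring the aggregate root to its latest version) can be sketched roughly like this. This is an illustrative in-memory Python model, not code from the thread; every name in it is hypothetical, and a toy `update` stands in for a real event-apply method.

```python
# Illustrative sketch: rebuild an aggregate root from the latest snapshot
# plus all events recorded after it. Plain dicts stand in for the
# MongoDB snapshot and event collections; all names are hypothetical.

def rebuild(snapshot, events):
    """Apply events newer than the snapshot to reach the latest version."""
    state = dict(snapshot["state"])
    version = snapshot["version"]
    for event in sorted(events, key=lambda e: e["version"]):
        if event["version"] <= version:
            continue  # already folded into the snapshot
        state.update(event["data"])  # toy "apply" step
        version = event["version"]
    return state, version

snapshot = {"version": 2, "state": {"balance": 100}}
events = [
    {"version": 1, "data": {"balance": 50}},   # predates the snapshot
    {"version": 3, "data": {"balance": 130}},
    {"version": 4, "data": {"balance": 90}},
]
state, version = rebuild(snapshot, events)
```

The same loop serves a new command: rebuild to the latest version first, then decide whether the command is valid against that state.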
I should definitely take a closer look at MongoDB! Sounds very interesting :)

On the other hand, I'm still wondering whether there is a general approach to this situation: you have made a state transition in the KV/document DB and you have some events stored there. Then your system goes down just before it publishes those events onto the wire (some external messaging system). When it starts again, is there an efficient way to query the KV/document DB for all the events that haven't been dispatched yet?

---

Anonymous (2012-05-01 00:24):
Well, the easiest way is to cheat and use Mongo as your messaging bus ;-) There are a bunch of examples on the web. Basically, you create a direct connection to the database and read the oplog. This is the mechanism Mongo uses for replication, so you get near real-time performance. Since it's only one system to manage, rather than two in concert, it's a bit easier to maintain, and you get the same guarantees as any other write to a Mongo node.

If you must go to another messaging system, it will depend on the guarantees of that implementation. It's not the end of the world to tell the user to "try again later", assuming that's acceptable for the 0.01% of the time it happens.

Also, if the messaging system is down, you can still read from the event store to get the latest information (and that ability is required anyway for stale nodes joining the system).

---

bodrin (2012-04-30 16:53):
Hi, I want to ask how you dispatch the events.
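The oplog-as-messaging-bus idea from the reply above amounts to a consumer that remembers the last oplog timestamp it has processed. The sketch below is an illustrative in-memory model: a plain list stands in for the replica-set oplog (`local.oplog.rs`, which a real consumer would read with a tailable cursor), and all names are hypothetical.

```python
# Illustrative sketch: consume only oplog entries newer than the
# last-seen timestamp, as an oplog-tailing subscriber would.
# A list of dicts stands in for local.oplog.rs; names are hypothetical.

def consume_new(oplog, last_ts, handler):
    """Process entries newer than last_ts; return the new high-water mark."""
    for entry in oplog:
        if entry["ts"] > last_ts:
            handler(entry)
            last_ts = entry["ts"]
    return last_ts

seen = []
oplog = [{"ts": 1, "op": "i"}, {"ts": 2, "op": "i"}, {"ts": 3, "op": "i"}]
last = consume_new(oplog, last_ts=1, handler=seen.append)
```

Persisting `last` gives the consumer a resume point after a restart, which is what makes this usable as a poor man's message bus.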
I guess the flow is something like this:

1. Receive a command.
2. Load the related aggregate root (AR) and process the command, which produces some events (a batch).
3. Store the event batch in MongoDB, e.g. at { "_id": { "aggregate": 1234, "version": 65 } }.
4. Dispatch the events to some messaging system.
5. Mark { "_id": { "aggregate": 1234, "version": 65 } } as dispatched.

If this is similar to what you are doing, I'm wondering what happens if the system goes down just after step 3. The events are not dispatched, but how do you find that out after a restart? Is there an efficient way to do this with MongoDB and/or other document/KV stores?

---

Anonymous (2011-01-08 15:13):
Thanks for the comments!
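One common answer to the store-then-dispatch question above is to persist each batch with a `dispatched` flag, so crash recovery between steps 3 and 4 becomes a single query for undispatched batches. The sketch below is an illustrative in-memory model (a list of dicts stands in for the MongoDB collection, and all names are hypothetical), not code from the thread.

```python
# Illustrative sketch: each stored batch carries a `dispatched` flag.
# After a crash between "store" and "dispatch", the pending batches are
# found with the equivalent of find({"dispatched": false}).
# A list stands in for the collection; all names are hypothetical.

store = []

def save_batch(aggregate, version, events):
    store.append({
        "_id": {"aggregate": aggregate, "version": version},
        "events": events,
        "dispatched": False,  # step 3: persisted, not yet on the wire
    })

def undispatched():
    """What a restart would query for before resuming dispatch."""
    return [b for b in store if not b["dispatched"]]

def mark_dispatched(batch):
    batch["dispatched"] = True  # step 5

save_batch(1234, 65, ["AccountDebited"])
save_batch(1234, 66, ["AccountCredited"])
mark_dispatched(store[0])      # batch 65 made it onto the wire
pending = undispatched()       # batch 66 survives the crash undispatched
```

In a real deployment the flag field would be indexed so the recovery query stays cheap; dispatch must also tolerate redelivery, since a crash between steps 4 and 5 leaves a dispatched-but-unmarked batch.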
You are absolutely right on all of your points.

Actually, each individual event in my system is meaningless unless it is part of a batch. The batch is the only thing that is versioned, and the batch version is required to save events to or get events from the store.

As for duplicate handling, since I wrote the post I have implemented _id as a compound object like this:

{ "_id": { "aggregate": 1234, "version": 65 } }

_id is always indexed and unique. If version 66 already exists, the second command handler will simply fail, and the event store will continue to hold good, consistent data.

---

Jonathan Oliver (2011-01-08 07:45):
Awesome post.
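The compound-_id scheme described above can be sketched as follows. This is an illustrative in-memory model: a dict keyed on the (aggregate, version) pair stands in for MongoDB's always-unique index on _id, a local exception stands in for the server's duplicate-key error, and all names are hypothetical.

```python
# Illustrative sketch: (aggregate, version) as the unique key, so the
# second writer of the same version fails instead of corrupting the
# store. A dict stands in for the unique _id index; names are hypothetical.

class DuplicateKeyError(Exception):
    """Stands in for MongoDB's duplicate-key error on insert."""

index = {}

def insert_batch(aggregate, version, events):
    key = (aggregate, version)
    if key in index:
        raise DuplicateKeyError(key)  # the server would reject the insert
    index[key] = events

insert_batch(1234, 66, ["AccountDebited"])
try:
    insert_batch(1234, 66, ["ConflictingEvent"])  # losing command handler
    conflict = False
except DuplicateKeyError:
    conflict = True  # first write wins; the loser retries or fails
```

This is optimistic concurrency in one write: no separate read-check-write cycle, so there is no window for two handlers to both succeed.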
When I started writing version 2.0 of my EventStore library, I wanted to ensure that NoSQL databases such as MongoDB could be supported. Pushing everything up as a single batch is critical.

Here is the implementation for Mongo:
https://github.com/joliver/EventStore/tree/master/src/proj/EventStore.Persistence.MongoPersistence

Here is the design guide:
http://jonathan-oliver.blogspot.com/2010/12/cqrs-eventstore-v2-architectural.html

There are three quick things I wanted to mention regarding your implementation. First, because you're now pushing things up as a batch, you can no longer use the event version as an optimistic concurrency control. Instead, you'll want to number each batch that you push with a sequential, incrementing value.

Second, if you only push the event information, you may lose some context, because there is oftentimes metadata associated with the set of events that you'll want to store as well.

Finally, you'll want to consider what happens when a message is processed more than once, which causes a batch to be written twice. NoSQL doesn't provide any guarantees related to de-duplication, so you'll need to handle that in your application code/event store code.
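The first and third points above (a sequential batch number for optimistic control, and application-level de-duplication of redelivered messages) can be sketched together. This is an illustrative in-memory model, not the EventStore library's actual code; all names are hypothetical.

```python
# Illustrative sketch: every committed batch gets a sequential number,
# and the id of the message that produced it is remembered so a
# redelivered message does not write a second batch.
# In-memory structures stand in for the store; names are hypothetical.

batches = []
seen_messages = set()

def commit(message_id, events):
    """Commit a batch once per message; return False on a duplicate."""
    if message_id in seen_messages:
        return False  # redelivery: the batch already exists, drop it
    seen_messages.add(message_id)
    batches.append({
        "sequence": len(batches) + 1,  # sequential, incrementing number
        "message_id": message_id,
        "events": events,
    })
    return True

commit("msg-1", ["AccountOpened"])
commit("msg-2", ["AccountDebited"])
repeated = commit("msg-1", ["AccountOpened"])  # redelivered message
```

In a real store the message id would be persisted alongside the batch (and indexed), so de-duplication survives restarts rather than living in process memory.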