hi. I have a question regarding rabbitmq. I do have a couple of messages in the `sync.storage.url` q

sebastian.larisch Spryker Customer Posts: 143 🧑🏻‍🚀 - Cadet

Hi, I have a question regarding RabbitMQ. I have a couple of messages in the sync.storage.url queue, where message 2 deletes a certain key and message 10 writes it again; all other messages relate to other keys. When I run queue:worker:start I end up with an empty key in Redis, which really confuses me. We don't have priority queues or anything, and the default ordering is FIFO, so I'd actually expect to have a valid key in Redis afterwards. Is there any log where I could double-check the order in which the messages are processed? Is there any good place to debug? I published the two messages one by one with RabbitMQ and could at least verify that the payload is valid and the key is written successfully that way.

Comments

  • UT4U1HEHG Posts: 49 🧑🏻‍🚀 - Cadet

    Hi Sebastian, this is strange.
    First of all, for debugging I would disable Jenkins and use console queue:task:start sync.storage.url
    to process messages only from this queue, instead of from all available queues with queue:worker:start.

    At the same time, you can run the MONITOR command in Redis
    (https://redis.io/commands/MONITOR) to see everything that is happening there.
    That way you can check whether anything was actually written.

    It may also be that Spryker doesn't write the data to Redis because some condition for writing to storage isn't met.
    In that case you can start docker/sdk in debug mode and try to find the reason.

  • sebastian.larisch Spryker Customer Posts: 143 🧑🏻‍🚀 - Cadet
    edited September 2021

    The MONITOR command was a good hint, thanks! I can see that all messages are merged together: all writes are done first with an "MSET" followed by all key-value pairs, and afterwards a "DEL" command deletes all keys. So it doesn't take care of the order here, it just merges everything together 🙄 That explains it. Any idea how to solve it?

  • sebastian.larisch Spryker Customer Posts: 143 🧑🏻‍🚀 - Cadet

    OK, the merging itself may not be the problem; it's actually the order that is wrong. I have 4 deletes in my RabbitMQ queue (messages 2 .. 5 => delete) and 6 writes (messages 6 … 11 => write), but in Redis it does the writes first with MSET and the "DEL" command afterwards.

  • UT4U1HEHG Posts: 49 🧑🏻‍🚀 - Cadet
    edited September 2021

    I would check the code.

    But maybe you can also limit the number of sync storage messages processed by queue:task:start to 1.

    Hopefully I found the correct place:
    \Spryker\Zed\Synchronization\SynchronizationConfig::getSyncStorageQueueMessageChunkSize

    Then you can try to process the messages one by one and check in the Redis monitor what is happening,
    just to be sure whether the problem is connected to some particular message or really to processing several messages in a row.
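
    On project level the override could look roughly like this (just a sketch, assuming the usual Pyz config override pattern; adjust the namespace to your project):

        <?php

        namespace Pyz\Zed\Synchronization;

        use Spryker\Zed\Synchronization\SynchronizationConfig as SprykerSynchronizationConfig;

        class SynchronizationConfig extends SprykerSynchronizationConfig
        {
            /**
             * Process sync storage messages one by one (for debugging only,
             * this will slow the worker down considerably).
             */
            public function getSyncStorageQueueMessageChunkSize(): int
            {
                return 1;
            }
        }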

  • sebastian.larisch Spryker Customer Posts: 143 🧑🏻‍🚀 - Cadet

    I made a simple test and added 4 messages manually to the queue:

    {"delete":{"key":"url:test"}}
    {"write":{"key":"url:test", "value":"1"}}
    {"delete":{"key":"url:test"}}
    {"write":{"key":"url:test", "value":"2"}}
    

    The order of the messages was correct. I executed console qu:task:st sync.storage.url with a chunk size of 1000, and afterwards again with 1, with the following results:

    chunk size 1000:

    1631010201.804170 [0 172.28.0.8:48376] "SELECT" "0"
    1631010201.804975 [0 172.28.0.8:48376] "MSET" "kv:url:test" "2"
    1631010201.805643 [0 172.28.0.8:48376] "DEL" "kv:url:test" "kv:url:test" 
    

    chunk size 1:

    1631010813.118807 [0 172.28.0.8:48466] "SELECT" "0"
    1631010813.119481 [0 172.28.0.8:48466] "DEL" "kv:url:test"
    1631010815.085736 [0 172.28.0.8:48472] "SELECT" "0"
    1631010815.086394 [0 172.28.0.8:48472] "MSET" "kv:url:test" "1"
    1631010817.217274 [0 172.28.0.8:48478] "SELECT" "0"
    1631010817.218087 [0 172.28.0.8:48478] "DEL" "kv:url:test"
    1631010819.376974 [0 172.28.0.8:48484] "SELECT" "0"
    1631010819.377729 [0 172.28.0.8:48484] "MSET" "kv:url:test" "2"
    

    So to me it seems that the order gets mixed up for some reason. Is it a bug? The only solution I can think of is trying to avoid that delete, or having some kind of check for whether the queue contains both a write and a delete for the same key and then dropping the delete.

  • UT4U1HEHG Posts: 49 🧑🏻‍🚀 - Cadet
    edited September 2021

    Not sure what the problem can be. As I said, you can check the library code, and the project-level code in case you changed any of the library functionality.

    You can also check the version of the spryker/url-storage library in composer.lock, and check on GitHub whether there are any fixes or improvements in newer releases:
    https://github.com/spryker/url-storage/releases

  • sebastian.larisch Spryker Customer Posts: 143 🧑🏻‍🚀 - Cadet

    We did not customize the Synchronization module, but I can check the version as you said, and maybe the core code if I find the time. Thanks for your help anyway!

  • UT4U1HEHG Posts: 49 🧑🏻‍🚀 - Cadet

    You are welcome 🙂

  • sebastian.larisch Spryker Customer Posts: 143 🧑🏻‍🚀 - Cadet

    Btw, I found the root cause of that issue in Spryker. It's caused by \Spryker\Zed\Synchronization\Business\Message\BulkQueueMessageProcessor::processMessages, where all messages are merged and the "writes" are executed before the "deletes". I tested the newest version of the Synchronization module and also the Spryker test shop, all the same. It's a problem for URLs where the same key needs to be rewritten (when a redirect turns into a URL again). We fixed it for us by overriding that method and making sure the order is kept. I still created a Spryker bug ticket.
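
    To illustrate the effect (a self-contained toy example, not the Spryker code): when all writes are grouped before all deletes, an earlier delete wins over the last write.

        <?php

        // Messages in FIFO order, like in the test further up this thread.
        $messages = [
            ['delete' => ['key' => 'url:test']],
            ['write'  => ['key' => 'url:test', 'value' => '1']],
            ['delete' => ['key' => 'url:test']],
            ['write'  => ['key' => 'url:test', 'value' => '2']],
        ];

        // Order-preserving processing: the key survives with value '2'.
        $store = [];
        foreach ($messages as $message) {
            if (isset($message['write'])) {
                $store[$message['write']['key']] = $message['write']['value'];
            } else {
                unset($store[$message['delete']['key']]);
            }
        }
        var_dump($store); // ['url:test' => '2']

        // Grouped processing (all writes first, all deletes afterwards),
        // which is what the MONITOR output shows: the key ends up deleted.
        $store = [];
        foreach ($messages as $message) {
            if (isset($message['write'])) {
                $store[$message['write']['key']] = $message['write']['value'];
            }
        }
        foreach ($messages as $message) {
            if (isset($message['delete'])) {
                unset($store[$message['delete']['key']]);
            }
        }
        var_dump($store); // []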

  • U01TZ93MPSQ Posts: 40 🧑🏻‍🚀 - Cadet

    @UNGMX0012 I was nearly pulling my hair out over this. Thank you so much. If I ever see you I'll buy you a beer or two.

  • sebastian.larisch Spryker Customer Posts: 143 🧑🏻‍🚀 - Cadet

    You're welcome 😉 I don't have the problem with the hair anymore 😁🧑‍🦲

  • U01TZ93MPSQ Posts: 40 🧑🏻‍🚀 - Cadet
    edited October 2021

    Btw, if anyone reads this thread in the future: I fixed it by creating a UrlQueueMessageProcessorPlugin and wiring it up in the QueueDependencyProvider. The plugin then feeds the events into the SynchronizationFacade one by one instead of as a bulk (rough sketch below).

    This way one does not have to touch the "default" message processor that quite a lot of modules may rely on. The bundling of the writes and deletes is a performance measure (notably using MSET for the Redis writes), and changing it to always respect the order for ALL queues can have performance implications when you deal with several hundred thousand events daily.
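
    Roughly like this (a simplified sketch of the idea, not my exact diff; it assumes the stock SynchronizationStorageQueueMessageProcessorPlugin can be extended and that the facade exposes the bulk entry point processSyncStorageMessages(), names may differ between Spryker versions):

        <?php

        namespace Pyz\Zed\Synchronization\Communication\Plugin\Queue;

        use Spryker\Zed\Synchronization\Communication\Plugin\Queue\SynchronizationStorageQueueMessageProcessorPlugin;

        class UrlQueueMessageProcessorPlugin extends SynchronizationStorageQueueMessageProcessorPlugin
        {
            /**
             * Feeds the messages into the SynchronizationFacade one by one
             * instead of as one bulk, so the FIFO order of writes and
             * deletes for the same key is preserved for this queue only.
             *
             * @param \Generated\Shared\Transfer\QueueReceiveMessageTransfer[] $queueMessageTransfers
             *
             * @return \Generated\Shared\Transfer\QueueReceiveMessageTransfer[]
             */
            public function processMessages(array $queueMessageTransfers): array
            {
                $processedMessageTransfers = [];
                foreach ($queueMessageTransfers as $queueMessageTransfer) {
                    // Calling the bulk facade method with a single message
                    // keeps the original message order.
                    $processedMessageTransfers = array_merge(
                        $processedMessageTransfers,
                        $this->getFacade()->processSyncStorageMessages([$queueMessageTransfer])
                    );
                }

                return $processedMessageTransfers;
            }
        }

    And wired up for the URL queue in Pyz\Zed\Queue\QueueDependencyProvider, e.g.:

        protected function getProcessorMessagePlugins(Container $container): array
        {
            return [
                // ... plugins for the other queues stay as they are ...
                'sync.storage.url' => new UrlQueueMessageProcessorPlugin(),
            ];
        }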

  • sebastian.larisch Spryker Customer Posts: 143 🧑🏻‍🚀 - Cadet
    edited October 2021

    We solved it for now by overriding the BulkQueueMessageProcessor and flushing the queued messages as soon as we hit a DELETE message; there was a bit of time pressure. We don't have many DELETEs, so the performance impact is small for us. But your solution sounds way better. If you'd like to share your solution, I'd be interested 😉 The core of our approach is sketched below.
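
    A simplified, self-contained sketch of that batching logic (hypothetical helper names, not our actual override):

        <?php

        /**
         * Flush-on-delete: collect consecutive writes into one batch (so
         * they can still go to Redis as a single MSET), but as soon as a
         * delete shows up, flush the batch first. This keeps the FIFO
         * order between writes and deletes without giving up bulk writes.
         */
        function processInOrder(array $messages, callable $bulkWrite, callable $delete): void
        {
            $writeBatch = [];

            foreach ($messages as $message) {
                if (isset($message['write'])) {
                    // Later writes to the same key overwrite earlier ones,
                    // matching MSET semantics within one batch.
                    $writeBatch[$message['write']['key']] = $message['write']['value'];
                    continue;
                }

                // A delete: first flush all writes collected so far.
                if ($writeBatch !== []) {
                    $bulkWrite($writeBatch);
                    $writeBatch = [];
                }
                $delete($message['delete']['key']);
            }

            if ($writeBatch !== []) {
                $bulkWrite($writeBatch);
            }
        }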

  • U01TZ93MPSQ Posts: 40 🧑🏻‍🚀 - Cadet

    Interesting! Doing a bulkWrite once you hit a delete sounds very smart actually. I might add this to my solution too, since it should further improve the performance. Anyways, I'll be sending you a diff in PMs after my meetings 😄