What are the Slack Archives?
It's a history of our time together in the Slack Community! There's a ton of knowledge in here, so feel free to search through the archives for a possible answer to your question.
Because this space is not active, you won't be able to create a new post or comment here. If you have a question or want to start a discussion about something, head over to our categories and pick one to post in! You can always refer back to a post from Slack Archives if needed; just copy the link to use it as a reference.
hi. I have a question regarding rabbitmq. I do have a couple of messages in the `sync.storage.url` queue, where message 2 is about deleting a certain key and message 10 is about writing it again. All other messages are related to other keys. When I run `queue:worker:start` I end up having an empty key in redis, which really confuses me. We don't have priority queues or anything, and the default ordering is FIFO, so I actually expect to have a valid key in redis afterwards. Is there any log where I could double-check the order in which the messages are being processed? Is there any good place to debug? I published the 2 messages one by one with rabbit and could at least make sure that the payload is valid and the key is written successfully that way.
Comments
-
Hi Sebastian, this is strange.
First of all, for debugging I would disable jenkins and use `console queue:task:start sync.storage.url` to process messages only from this queue, and not from all available queues with `queue:worker:start`.
Simultaneously, you can start the MONITOR command in redis (https://redis.io/commands/MONITOR) to see everything that is happening there. Basically, you can check whether something was written.
It may also be that spryker doesn't write data to redis because some condition that must be satisfied to write data to storage is not met. In this case you can start docker/sdk in debug mode and try to find the reason.
-
the MONITOR command was a good hint, thanks! I can see that all messages are merged together: all writes come first with an MSET followed by all key-value pairs, and afterwards the DEL command deletes all keys. So it does not take care of the order here, it just merges everything together 🙂 That does explain it. Any idea how to solve it?
-
ok, the merging may not be the problem; it's actually the order that is wrong. I have 4 deletes in my rabbit queue (messages 2 … 5 => delete) and 6 writes (messages 6 … 11 => write), but in redis it does the writes first with MSET and the DEL command afterwards.
-
I would check the code.
But maybe you can also limit the number of sync storage messages processed by `queue:task:start` to 1. Hopefully I found the correct place:
`\Spryker\Zed\Synchronization\SynchronizationConfig::getSyncStorageQueueMessageChunkSize`
Then you can try to process the messages one by one and check what is happening in redis MONITOR, just to be sure whether the problem is connected to some particular message, or really to the processing of several messages in a row.
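For reference, a minimal sketch of such a project-level chunk-size override, assuming the usual `Pyz` project namespace; the method name comes from the comment above, while the `int` return type is an assumption:

```php
<?php

namespace Pyz\Zed\Synchronization;

use Spryker\Zed\Synchronization\SynchronizationConfig as SprykerSynchronizationConfig;

class SynchronizationConfig extends SprykerSynchronizationConfig
{
    /**
     * Process sync storage messages one by one while debugging, so each
     * message shows up as a separate command in redis MONITOR.
     */
    public function getSyncStorageQueueMessageChunkSize(): int
    {
        return 1;
    }
}
```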
-
I made a simple test and added 4 messages manually to the queue:
{"delete":{"key":"url:test"}} {"write":{"key":"url:test", "value":"1"}} {"delete":{"key":"url:test"}} {"write":{"key":"url:test", "value":"2"}}
order of messages was correct. I executed `console qu:task:st sync.storage.url` with a chunksize of 1000 and afterwards again with 1, with the following result:
chunksize 1000:
1631010201.804170 [0 172.28.0.8:48376] "SELECT" "0"
1631010201.804975 [0 172.28.0.8:48376] "MSET" "kv:url:test" "2"
1631010201.805643 [0 172.28.0.8:48376] "DEL" "kv:url:test" "kv:url:test"
chunksize 1:
1631010813.118807 [0 172.28.0.8:48466] "SELECT" "0"
1631010813.119481 [0 172.28.0.8:48466] "DEL" "kv:url:test"
1631010815.085736 [0 172.28.0.8:48472] "SELECT" "0"
1631010815.086394 [0 172.28.0.8:48472] "MSET" "kv:url:test" "1"
1631010817.217274 [0 172.28.0.8:48478] "SELECT" "0"
1631010817.218087 [0 172.28.0.8:48478] "DEL" "kv:url:test"
1631010819.376974 [0 172.28.0.8:48484] "SELECT" "0"
1631010819.377729 [0 172.28.0.8:48484] "MSET" "kv:url:test" "2"
so to me it seems that the order is mixed up for some reason. Is it a bug? The only solution I can think of is trying to avoid that delete, or having some kind of check: if there is a write and a delete in the queue for the same key, get rid of the delete.
-
Not sure what the problem can be. As I said, you can try to check the library code, and the project-level code in case you changed any library functionality.
You can also check the version of the `spryker/url-storage` library in `composer.lock`, and check on github whether there are any fixes or improvements in newer releases:
https://github.com/spryker/url-storage/releases
-
We did not customize the synchronization module, but I can check the version as you said, and maybe the core code if I find the time. Thanks for your help anyway!
-
You are welcome 🙂
-
btw, I found the root cause in spryker for that issue. It's caused by
`\Spryker\Zed\Synchronization\Business\Message\BulkQueueMessageProcessor::processMessages`
where all messages are merged and the writes are executed before the deletes. I tested the newest version of the synchronization module and also the spryker test shop, all the same. It's a problem for urls where the same key needs to be rewritten (when a redirect turns into a url again). We fixed it for us by overriding that method and making sure the order is kept. I have also created a spryker bug ticket.
-
@UNGMX0012 I was nearly pulling my hair out over this. Thank you so much. If I ever see you I'll buy you a beer or two.
-
you're welcome 🙂 I don't have the problem with the hair anymore 🙂 🧑‍🦲
-
Btw, if anyone reads this thread in the future: I fixed it by creating a `UrlQueueMessageProcessorPlugin` and wiring it up in the `QueueDependencyProvider`. The plugin then throws the events into the `SynchronizationFacade` one by one instead of as a bulk.
This way one does not have to touch the "default" message processor that quite a lot of modules may rely on. The bundling of the writes and deletes is a performance measure (notably using MSET for redis writes), and changing it to always respect the order for ALL queues can have performance implications when you deal with multiple hundreds of thousands of events daily.
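A minimal sketch of what such a plugin could look like; the `Pyz` namespace, the `QueueMessageProcessorPluginInterface` contract, and the `processSyncStorageMessages()` facade method are assumptions based on how Spryker queue plugins typically look, not details confirmed in this thread:

```php
<?php

namespace Pyz\Zed\Synchronization\Communication\Plugin\Queue;

use Spryker\Zed\Kernel\Communication\AbstractPlugin;
use Spryker\Zed\Queue\Dependency\Plugin\QueueMessageProcessorPluginInterface;

/**
 * @method \Spryker\Zed\Synchronization\Business\SynchronizationFacadeInterface getFacade()
 */
class UrlQueueMessageProcessorPlugin extends AbstractPlugin implements QueueMessageProcessorPluginInterface
{
    /**
     * Hands each message to the facade individually, so the original queue
     * order of writes and deletes is preserved for this queue.
     *
     * @param array $queueMessageTransfers
     *
     * @return array
     */
    public function processMessages(array $queueMessageTransfers): array
    {
        $processedMessages = [];
        foreach ($queueMessageTransfers as $queueMessageTransfer) {
            // A bulk size of one keeps the delete/write order intact.
            $processedMessages[] = $this->getFacade()
                ->processSyncStorageMessages([$queueMessageTransfer]);
        }

        return array_merge([], ...$processedMessages);
    }

    /**
     * How many messages the worker fetches per run; they are still
     * processed one by one above.
     */
    public function getChunkSize(): int
    {
        return 100;
    }
}
```

The plugin would then be registered for the `sync.storage.url` queue in `QueueDependencyProvider::getProcessorMessagePlugins()` in place of the default synchronization plugin (that method name is again an assumption).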
-
we solved it for now by overriding the `BulkQueueMessageProcessor` and running the queued messages once we hit a DELETE message; there was a bit of time pressure. We don't have many DELETEs, so the performance impact is small. But your solution sounds way better; if you'd like to share it, I would be interested 🙂
Interesting! Doing a bulk write once you hit a delete sounds very smart actually. I might add this to my solution too, since it should further improve the performance. Anyway, I'll be sending you a diff in PMs after my meetings 🙂