What are the Slack Archives?

It’s a history of our time together in the Slack Community! There’s a ton of knowledge in here, so feel free to search through the archives for a possible answer to your question.

Because this space is not active, you won’t be able to create a new post or comment here. If you have a question or want to start a discussion about something, head over to our categories and pick one to post in! You can always refer back to a post from Slack Archives if needed; just copy the link to use it as a reference.

Hello, We encountered some issues with Redis memory getting filled very easy. One of the issues we

UNTTWV4JK
UNTTWV4JK Posts: 63 🧑🏻‍🚀 - Cadet
edited December 2020 in Slack General

Hello,

We encountered some issues with Redis memory getting filled very easily. One of the issues we noticed is related to triggering events with the “event:trigger” command.

Let’s assume we have entity type X, of which 1/3 of the entities are active; the other 2/3 were unpublished over time. The active X entities use 50% of the Redis storage. For some unknown reason, some entities are missing from the storage, and triggering events for them fixes the issue. When we call the console command “event:trigger -r X”, it triggers all the events for entity X. The command finishes and everything looks fine so far. But when the events get processed, the memory usage increases until Redis gets full and denies connections, which leads to Glue failures.
The triggerEvents method from EventResourceQueryContainerManager (https://github.com/spryker/event-behavior/blob/master/src/Spryker/Zed/EventBehavior/Business/Model/EventResourceQueryContainerManager.php#L60) triggers the events in bulk for all the entities. The way the plugins are built is problematic, as they only trigger publish events for entities. That means the unpublished entities will be published again. Since the unpublished entities outnumber the active ones (2/3 vs. 1/3), they fill the storages with unnecessary data. Most of the writers in the Spryker core do not consider the status of the entities when they publish data to storages.
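To make the failure mode concrete, here is a minimal, language-neutral sketch (in Python, with invented names; this is not Spryker code) contrasting a bulk trigger that republishes every entity with a writer that checks the active flag before writing to storage:

```python
# Hypothetical sketch of the problem described above. "publish_all" mimics
# the event:trigger behavior (every entity gets republished); "publish_active_only"
# mimics a writer that filters on entity status first. All names are illustrative.

def publish_all(entities, storage):
    """Republish every entity, regardless of its status."""
    for entity in entities:
        storage[entity["id"]] = entity["data"]

def publish_active_only(entities, storage):
    """Publish only entities that are still active."""
    for entity in entities:
        if entity["is_active"]:
            storage[entity["id"]] = entity["data"]

# 1/3 active, 2/3 unpublished over time, as in the scenario above.
entities = [
    {"id": 1, "is_active": True,  "data": "a"},
    {"id": 2, "is_active": False, "data": "b"},
    {"id": 3, "is_active": False, "data": "c"},
]

naive, filtered = {}, {}
publish_all(entities, naive)             # 3 keys: inactive data re-enters storage
publish_active_only(entities, filtered)  # 1 key: only the active entity is stored
```

Under this (assumed) model, the naive bulk trigger triples the storage footprint compared to a status-aware writer, which matches the Redis growth described above.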

It would be nice to have this issue fixed in the Core, so we don’t need to patch each writer to make sure it does not publish data that should not be published.

Nice to have: the event:trigger command should also be able to unpublish data from the storage when entities are inactive but not yet removed from storage. Sometimes we get results from Elasticsearch even though the products are not active; unpublishing the entities solves the issue.

I hope this message will help to improve Spryker.

Comments

  • Ahmed Sabaa
    Ahmed Sabaa Senior Application Architect @Spryker Posts: 54 🧑🏻‍🚀 - Cadet
    edited December 2020

    Hi! Thanks for the message, what do you mean by “not considering the status of entities when they publish data to storages”?

  • UNTTWV4JK
    UNTTWV4JK Posts: 63 🧑🏻‍🚀 - Cadet
    edited December 2020

    Let’s say we have an entity with ID=123 which was disabled and unpublished. If we call:

    vendor/bin/console event:trigger -r product_abstract_image -i 123
    

    the entity will get published, so event:trigger will always publish entities. At this level, it is very hard to determine which event should be triggered as the command is not aware of the business logic of each entity. If we look deeper, the storage writer is aware of the entity and can apply at least some filtering when fetching the data.
    IMO, the plugins called by event:trigger should not use publish at all, but some sort of “refresh” (https://github.com/spryker/product-image-storage/blob/5a0bfcb56c516bd42d4dd96489dd6c0a925c72b9/src/Spryker/Zed/ProductImageStorage/Communication/Plugin/Event/ProductAbstractImageEventResourceQueryContainerPlugin.php#L66-L69) which can decide what needs to be done with the entity.

  • Ahmed Sabaa
    Ahmed Sabaa Senior Application Architect @Spryker Posts: 54 🧑🏻‍🚀 - Cadet

    Sounds logical to me. I would assume, though, that triggering events does not automatically publish: the event goes from the event queue to a specific queue, which then triggers the business-logic handling plugin responsible for that event.
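The “refresh” behavior proposed in this thread could, in principle, look like the following sketch (Python, with invented names; not actual Spryker plugin code): in a single pass, publish active entities and remove inactive ones from storage.

```python
# Hypothetical "refresh" operation: instead of unconditionally publishing,
# decide per entity whether to publish or to unpublish stale data.

def refresh(entities, storage):
    """Publish active entities; delete inactive ones from storage."""
    for entity in entities:
        if entity["is_active"]:
            storage[entity["id"]] = entity["data"]   # publish / overwrite
        else:
            storage.pop(entity["id"], None)          # unpublish stale data

# Storage still holds a stale entry for an entity that was deactivated.
storage = {123: "stale data"}
entities = [
    {"id": 123, "is_active": False, "data": "ignored"},
    {"id": 456, "is_active": True,  "data": "fresh data"},
]
refresh(entities, storage)
# storage now holds only the active entity; the stale entry for 123 is gone.
```

This would also cover the “nice to have” from the original post: inactive entities that still linger in Redis or Elasticsearch would be cleaned up by the same command that refreshes active ones.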