What are the Slack Archives?
It’s a history of our time together in the Slack Community! There’s a ton of knowledge in here, so feel free to search through the archives for a possible answer to your question.
Because this space is not active, you won’t be able to create a new post or comment here. If you have a question or want to start a discussion about something, head over to our categories and pick one to post in! You can always refer back to a post from the Slack Archives if needed; just copy the link to use it as a reference.
Messages in `sync.search.product` queue are not being processed
Hi all,
we are experiencing a problem where messages in the `sync.search.product` queue are not being processed (they keep being retried over and over). I was looking at the error logs and couldn’t find anything specific to this queue. The only thing I found is the following:
```
bash-5.0# tail -f /var/log/spryker/php_errors.log
[13-Aug-2020 09:09:19 UTC] PHP Fatal error:  Allowed memory size of 1073741824 bytes exhausted (tried to allocate 20480 bytes) in /data/vendor/spryker/util-encoding/src/Spryker/Service/UtilEncoding/Model/Json.php on line 55
[13-Aug-2020 09:09:19 UTC] PHP Fatal error:  Allowed memory size of 1073741824 bytes exhausted (tried to allocate 20480 bytes) in /data/vendor/spryker/error-handler/src/Spryker/Shared/ErrorHandler/ErrorHandlerEnvironment.php on line 93
```
These errors are thrown in the Jenkins container every 5-7 seconds.
Has anyone experienced this issue, and does anyone have a suggestion for how to fix it?
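For context, the 1073741824 bytes in the log is exactly a 1 GiB `memory_limit`. A quick way to double-check which limit the worker process actually runs with is plain PHP, nothing Spryker-specific:

```php
<?php
// Run inside the container that executes the queue workers, e.g.:
//   php -r 'echo ini_get("memory_limit"), PHP_EOL;'
echo ini_get('memory_limit'), PHP_EOL;

// The limit from the log above, converted for readability: 1 GiB.
echo 1073741824 / (1024 ** 3), " GiB", PHP_EOL;
```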
Comments
- Could it be that the messages in the sync queue are too big? Reducing the bulk size would probably help.
- The last time I had a similar error, there were messages that could not be processed. What happens is that when there is an error, the worker tries to process the messages individually; when it runs out of memory, it fails and the messages are not acknowledged. Reducing the batch size helps, as Mike suggested.
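To make the failure mode just described concrete, here is a minimal, self-contained sketch; all names are hypothetical, and this is not Spryker’s actual worker code:

```php
<?php

declare(strict_types=1);

// Hypothetical sketch of the failure mode described above, not Spryker code.

/** Pretend bulk handler that rejects oversized chunks. */
function handleAll(array $messages): void
{
    if (count($messages) > 1000) {
        throw new RuntimeException('bulk handling failed');
    }
}

/** Pretend per-message handler; imagine each call allocating memory. */
function handleOne(string $message): void
{
    json_decode($message, true);
}

function processBatch(array $messages): void
{
    try {
        // Fast path: handle the whole chunk at once.
        handleAll($messages);
    } catch (RuntimeException $e) {
        // Fallback: retry each message individually. In the scenario above,
        // a fatal out-of-memory error strikes somewhere in this loop.
        foreach ($messages as $message) {
            handleOne($message);
        }
    }

    // In the real failure, the worker dies before reaching this line, so the
    // messages stay unacknowledged and the broker redelivers the whole chunk
    // on the next run; this sketch completes and "acknowledges" normally.
    echo 'acknowledged ', count($messages), " messages\n";
}

processBatch(array_fill(0, 2000, '{"sku":"123"}'));
```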
- As far as I can see, the default chunk size is set to 10000. What should the reduced value be in this case?
- The chunk value out of the box is 500; I have no idea where the 10000 is coming from.
- I would set it to the default and see if that helps, then reduce it even further if you still get an out-of-memory exception.
- 500 might be for publishing and 10000 for syncing; that sounds fine. The Elasticsearch document can be quite big, depending on your data, and putting the whole batch into memory can be troublesome. So yes, as a quick solution, try reducing the chunk size, but I would also check: aren’t you sending too much data to Elasticsearch?
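If you want to pin the value in configuration, a minimal sketch might look like the following. It assumes the chunk size is wired through `EventConstants::EVENT_CHUNK` in `config_default.php`; that constant is an assumption to verify against your Spryker version, since the sync queues may be controlled by a different setting in your project:

```php
<?php

// config/Shared/config_default.php (excerpt)
// Assumption: chunking is controlled via EventConstants::EVENT_CHUNK here;
// verify which constant(s) your Spryker version uses for the sync queues.

use Spryker\Shared\Event\EventConstants;

// Messages processed per chunk: lower values reduce the peak memory
// needed per batch, at the cost of more round trips.
$config[EventConstants::EVENT_CHUNK] = 500;
```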
- The more attributes you have, the more data you need to pump (for products).
- I will try reducing the chunk size first and then investigate further.
- We do have around 350 attributes per product.
- That’s probably the root cause.
- Better to have a chunk that can be processed within 128 MB of memory, IMO.
- I reduced the chunk size to 500 and it is working now.
- I also checked the message payload size, and it seems to be around 20 KB at the moment, so I could probably go up to a chunk size of 5000 and stay within 128 MB. I will play with it a bit.
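For reference, the back-of-the-envelope arithmetic behind that estimate, counting raw payload only (decoded PHP structures typically need several times the raw JSON size):

```php
<?php

// Back-of-the-envelope check of the chunk-size estimate above.
$payloadKb = 20;   // observed message size
$budgetMb  = 128;  // suggested memory budget per batch

foreach ([500, 5000, 10000] as $chunkSize) {
    $rawMb = $chunkSize * $payloadKb / 1024;
    printf("chunk %5d -> ~%3.0f MB raw payload (%s)\n",
        $chunkSize, $rawMb, $rawMb <= $budgetMb ? 'within budget' : 'over budget');
}
// chunk   500 -> ~ 10 MB raw payload (within budget)
// chunk  5000 -> ~ 98 MB raw payload (within budget)
// chunk 10000 -> ~195 MB raw payload (over budget)
```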
- Thank you for your help!
- I meant 128 MB for the memory used by PHP, not for the data itself.