What are the Slack Archives?

It’s a history of our time together in the Slack Community! There’s a ton of knowledge in here, so feel free to search through the archives for a possible answer to your question.

Because this space is not active, you won’t be able to create a new post or comment here. If you have a question or want to start a discussion about something, head over to our categories and pick one to post in! You can always refer back to a post from Slack Archives if needed; just copy the link to use it as a reference.

Hello again 😄 just for interest: has somebody ever tried to make a usually async P&S flow

UPWG9AYH2
UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

Hello again 😄
just for interest: has somebody ever tried to make a usually async P&S flow synchronous? That is, saving data to the database and afterwards publishing it to Redis/Elasticsearch within the same PHP call, so that the data is definitely in Redis/Elasticsearch when the process finishes? What steps need to be done? How complicated is this? Are there any Spryker built-in functionalities?
Best regards

Comments

  • Alberto Reyer
    Alberto Reyer Lead Spryker Solution Architect / Technical Director Posts: 690 🪐 - Explorer

    Not exactly what you asked for, but we have a module to synchronize a full table into search/redis.

    Syncing the main entity:

        $entries = [];

        foreach ($result as $entry) {
            $storageKey = $entry[SyncStateQueryContainerInterface::FIELD_KEY];
            $data = json_decode($entry[SyncStateQueryContainerInterface::FIELD_DATA], true);
            $data['_timestamp'] = microtime(true);
            $entries[$storageKey] = json_encode($data);
        }

        // using a multiset will improve the write performance drastically
        $this->storageClient->setMulti($entries);
    
  • Alberto Reyer
    Alberto Reyer Lead Spryker Solution Architect / Technical Director Posts: 690 🪐 - Explorer

    Biggest issue was for us that a few entities have an additional mapping entity.
    E.g.: product keys in the storage include the product id, but there is an additional mapping for sku:

    KV Entry: kv:product_abstract:de:de_de:10
    KV Mapping Entry: kv:product_abstract:de:de_de:sku:f8

    Those mappings can only be pushed via queue, as the functionality to generate and push those is part of the entity:

    <Entity>::syncPublishedMessageForMappings() and <Entity>::syncPublishedMessageForMappingResource() (generated into the storage entities by the synchronization behavior) are the methods you should have a look at.

  • Alberto Reyer
    Alberto Reyer Lead Spryker Solution Architect / Technical Director Posts: 690 🪐 - Explorer

    The way we call them during the sync of a full table is the following:

        /**
         * @var array
         */
        protected $tableToEntityMappingFunction = [
            SpyProductAbstractStorageTableMap::TABLE_NAME => SpyProductAbstractStorageQuery::class,
            SpyProductConcreteStorageTableMap::TABLE_NAME => SpyProductConcreteStorageQuery::class,
        ];

        /**
         * @param string $syncableTable
         * @param array $databaseEntries
         *
         * @return void
         */
        protected function syncStorageMappings(string $syncableTable, array $databaseEntries): void
        {
            // resolve the storage query class for the synced table and load the
            // entities whose keys have just been written
            $storageEntityQuery = call_user_func([$this->tableToEntityMappingFunction[$syncableTable], 'create']);
            $storageEntityQuery->filterByKey_In(array_keys($databaseEntries));
            $storageEntities = $storageEntityQuery->find();

            foreach ($storageEntities as $storageEntity) {
                // push the additional mapping entries (e.g. sku => id) via the
                // behavior-generated entity methods
                if (method_exists($storageEntity, 'syncPublishedMessageForMappings')) {
                    $storageEntity->syncPublishedMessageForMappings();
                }

                if (method_exists($storageEntity, 'syncPublishedMessageForMappingResource')) {
                    $storageEntity->syncPublishedMessageForMappingResource();
                }
            }
        }
    
  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    Hi @UL6DGRULR, the idea here is to combine the existing but async Spryker functionality into a synchronous call. I think everything needed already exists, it’s just located in different places … so my first idea would be a facade call that receives the model, saves it to the database, publishes and synchronizes it, and returns the result in the same PHP call.

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    The main goal is to spend as little effort as possible and use as much Spryker OOTB code as possible.

  • Alberto Reyer
    Alberto Reyer Lead Spryker Solution Architect / Technical Director Posts: 690 🪐 - Explorer

    As the code to synchronize is pretty tightly coupled to queuing, I at least haven't found another way to do the synchronization without involving the queue.

    But I would be happy to see your findings if you come up with a different solution; that might help us improve our synchronization as well 😉

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    I’ll let you know 🙂 don’t see any solution yet but there must be one 🤔

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    Maybe another idea which supports the async approach: since we just want to make sure that P&S has finished before the frontend needs the data, maybe the backend could send a “ready” notification to the frontend, which is also loaded asynchronously … since I am no frontend pro I don’t know what the keywords are here … websockets? Ajax?

  • You could use EventCollectionInterface::addListener() instead of EventCollectionInterface::addListenerQueued() when registering the listeners in the subscribers. That should make the process synchronous.
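
    A minimal sketch of what such a subscriber could look like (the event constant and listener class are placeholders, not actual Spryker names; only the Event module interfaces are real):

        use Spryker\Zed\Event\Dependency\EventCollectionInterface;
        use Spryker\Zed\Event\Dependency\Plugin\EventSubscriberInterface;
        use Spryker\Zed\Kernel\Communication\AbstractPlugin;

        class SynchronousPriceStorageEventSubscriber extends AbstractPlugin implements EventSubscriberInterface
        {
            public function getSubscribedEvents(EventCollectionInterface $eventCollection): EventCollectionInterface
            {
                // addListener() executes the listener within the same PHP request,
                // addListenerQueued() would publish an event message to the queue instead
                $eventCollection->addListener(
                    PriceProductEvents::PRICE_PRODUCT_CONCRETE_PUBLISH, // placeholder event name
                    new PriceProductConcreteStoragePublishListener()    // placeholder listener
                );

                return $eventCollection;
            }
        }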

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet
    edited February 2021

    But this won’t make the whole process synchronous, since the initial call comes from a frontend controller that should “wait” for the P&S flow to be done. Adding a direct listener instead of a queued one doesn’t solve the problem, or am I getting something wrong? 🤔

  • I have to be honest ... I never tried it myself.

  • Unknown
    edited February 2021

    Perhaps @UK9N7MP96 can help with that?

  • Ehsan Zanjani
    Ehsan Zanjani Head of Solution Architecture @ Spryker Posts: 113 🧑🏻‍🚀 - Cadet
    edited February 2021

    Hi everyone, thanks @UK5EG6PBM for involving me in this topic 🙂
    As @UL6DGRULR already said, the Spryker synchronization works based on the queue system and, honestly, it is coupled to it (this needs to be fixed!). See
    https://github.com/spryker/synchronization-behavior/blob/7dfb50a10418bb21c34ff0a2a[…]ehavior/Persistence/Propel/Behavior/SynchronizationBehavior.php

    We use a Propel behavior, and this copies the generated code into the Propel entities. As a quick-win solution for @UPWG9AYH2, I would suggest replacing this behavior with your own one, so you might need to replace https://github.com/spryker/synchronization-behavior with your own repo in composer.json [please check Propel behaviors: http://propelorm.org/documentation/06-behaviors.html#providing-behaviors-through-composer].

    After replacing the behavior, you need to run the propel:build console command so your own custom code ends up inside the entities; then the whole sync part can write to Redis directly instead of sending messages to queues.
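
    Conceptually, the core of such a customized behavior’s generated code would boil down to something like the following (a rough sketch only, not what the behavior actually generates; how a Redis client gets wired into Zed is up to you, \Predis\Client is only a placeholder):

        // Instead of publishing a sync message to the queue, write the row straight to storage.
        // Assumes $storageKey and $data hold the `key` and `data` column values of the *_storage row.
        $redis = new \Predis\Client(['host' => '127.0.0.1', 'port' => 6379]); // placeholder connection
        $redis->set('kv:' . $storageKey, json_encode($data)); // 'kv:' prefix as seen in the KV entries above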

  • Ehsan Zanjani
    Ehsan Zanjani Head of Solution Architecture @ Spryker Posts: 113 🧑🏻‍🚀 - Cadet
    edited February 2021

    @UL6DGRULR

    Those mappings can only be pushed via queue, as the functionality to generate and push those is part of the entity:

    This issue is known and we are fixing it; I will check the release updates.

  • Ehsan Zanjani
    Ehsan Zanjani Head of Solution Architecture @ Spryker Posts: 113 🧑🏻‍🚀 - Cadet
    edited February 2021

    @UPWG9AYH2 there were some concerns regarding Redis/ES availability in the long run and overall architectural reliability, therefore we didn’t write directly to Redis/ES. This way, when the services are ready, the messages in the queue get consumed; otherwise the messages pile up in the queues instead of being lost!

  • Thanks @UK9N7MP96 for explaining it in detail ❤

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet
    edited February 2021

    Thank you guys for explaining the relationships here 🙌 even if it seems not to be a feasible solution for us at the moment … might be better to find another way to go

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet
    edited March 2021

    As an update, here is the solution I went for:

    I tried to trigger a “price synchronization” whenever no merchant-related price is found in the Redis storage on the PDP. So basically there is some logic that checks the storage for a price of type “merchant relation” for the current customer. If there is none, it performs a Zed backend action that calls the external price service, tries to fetch the prices for this merchant relation, saves them, and the usual P&S flow should start. This Zed call also returns the price from the external service immediately to the frontend (via the client, in the same call).
    So the customer sees a price as soon as the expensive Zed call has finished.
    My hope was that on the next request of the PDP for that product, the price would load faster for the merchant since it is now in the Redis storage.
    But it turned out the sync process is too slow and the price is not there yet. If the customer quickly reloads the page, the sync starts again even though a sync is already in progress in the background …

    However, I have no idea anymore how to fetch prices on demand without completely disassembling Spryker 🤔 Especially without making the P&S workflow synchronous I see no chance.

  • Alberto Reyer
    Alberto Reyer Lead Spryker Solution Architect / Technical Director Posts: 690 🪐 - Explorer

    You already have the price in the client layer after the Zed call is done, so why not write it into Redis there?
    Zed can trigger a publish and sync as well, which will overwrite the price with the exact same data once it's done.
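
    A minimal sketch of that direct write, assuming the storage key and the price payload are already known in the client layer (the class and method names are illustrative; StorageClientInterface::set() is the actual Spryker storage client API):

        use Spryker\Client\Storage\StorageClientInterface;

        class DirectPriceStorageWriter
        {
            /**
             * @var \Spryker\Client\Storage\StorageClientInterface
             */
            protected $storageClient;

            public function __construct(StorageClientInterface $storageClient)
            {
                $this->storageClient = $storageClient;
            }

            public function writePrice(string $storageKey, array $priceData): void
            {
                // write the same key the queue-based sync would produce later, so the
                // regular P&S run simply overwrites it with identical data
                $this->storageClient->set($storageKey, json_encode($priceData));
            }
        }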

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    As far as I understood, the process of writing the data into Redis (which should be the “sync” part of the P&S flow) is tightly coupled to the queue … so the logic only does it via the queue … I don’t know if it’s that easy to change 🤔

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    Or am I missing something?

  • Alberto Reyer
    Alberto Reyer Lead Spryker Solution Architect / Technical Director Posts: 690 🪐 - Explorer

    Only when there is a mapping in addition to the original entry.
    For example, a product can be retrieved from the storage by id and by sku; this is done via an additional mapping in Redis.
    When your price has only one identifier, you can write it into Redis yourself and it's not coupled to the queue.

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    Can you give me an entry file/place I can look at as a good starting point? Or maybe I’ll search for the place where the worker usually hooks into the queue to write to Redis at the end … that would be the first place that comes to my mind.

  • Alberto Reyer
    Alberto Reyer Lead Spryker Solution Architect / Technical Director Posts: 690 🪐 - Explorer
    edited March 2021
    \Spryker\Zed\SynchronizationBehavior\Persistence\Propel\Behavior\SynchronizationBehavior
    and \Spryker\Zed\Synchronization\Business\Storage\SynchronizationStorage::writeBulk are good starting points.

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    Okay, the place you mentioned is where the message gets read from the queue and written to the storage … the message in the queue is basically a JSON representation of what is saved in the corresponding storage table.
    In my case, I would omit the storage table part, directly create a JSON representation of what would be in the “data” column, and save it to the storage using the Redis client … so in my case it’s something like:

    {"prices":{"1":{"CNY":{"priceData":null,"GROSS_MODE":{"DEFAULT":"5555"},"NET_MODE":{"DEFAULT":"4444"}}}}}

    I don’t know how this structure is created; I guess it’s built when the usual “publish” logic runs.
    Did you recreate all of this JSON build-up logic in your case too?

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    Also, there are different key columns saved along with the DB entity:
    “key”, “fk_product”, “fk_company_business_unit”, “price_key” … I don’t know if I’ll need them all for my “direct Redis storing” plan.

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    Or is just the “key” column relevant? It looks like this, for example:

    price_product_concrete_merchant_relationship:DK:2:2

    Where DK is the country code for the price. I guess the other two are the company_business_unit and fk_product.
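
    If that guess about the segments is right, building such a key by hand would boil down to something like this (illustrative only, not how Spryker’s key builders actually construct it):

        $storageKey = sprintf(
            'price_product_concrete_merchant_relationship:%s:%d:%d',
            $storeName,               // e.g. 'DK'
            $idCompanyBusinessUnit,   // e.g. 2
            $idProductConcrete        // e.g. 2
        );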

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    Ok, I am now at the place where I would “append” my price to the Redis storage. The entries that are already in the storage look like this:

    {
        "prices": {
            "1": {
                "CNY": {
                    "priceData": null,
                    "GROSS_MODE": {
                        "DEFAULT": "5555"
                    },
                    "NET_MODE": {
                        "DEFAULT": "4444"
                    }
                },
                "USD": {
                    "priceData": null,
                    "GROSS_MODE": {
                        "DEFAULT": "12200"
                    },
                    "NET_MODE": {
                        "DEFAULT": "233200"
                    }
                }
            }
        }
    }

    So I guess I can’t simply json_encode my transfer objects to end up with a structure like this. And even if I could, the logic doing this is far away from the client layer; it should live in Zed instead … how did you solve that? Did you also manually write a mapper for your payload to write it back to the Redis storage?

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    I think this wasn’t a good idea. The logic that creates the KV entry in the format above is the PriceGrouper, which lives in the Zed backend … so I am now in the situation of writing into the Redis storage from the price-product-merchant-relationship-storage client while needing complex business logic from the Zed side of that module … 🤔

  • UPWG9AYH2
    UPWG9AYH2 Posts: 509 🧑🏻‍🚀 - Cadet

    Okay, I think I found an applicable solution using @UK5EG6PBM’s suggestion. Instead of a queued event, I dispatch a direct event that gets processed during the call. The trick is to write to the queue exactly as the usual writer does, but then immediately call the logic that processes this published message. This makes the whole P&S flow synchronous with little effort. To avoid interfering with the usual P&S flow, I created a new event “SYNCHRONOUS_…” which is handled by a special “SynchronousPriceProductStorageWriter…” that writes to the queue and processes it immediately in one step.
    Now I can publish synchronously by calling the EventFacade with the new synchronous event name …
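
    Roughly, the wiring described above could look like this (the event constant, listener, factory method and transfer id are placeholders based on the description, not the actual project code; EventFacadeInterface::trigger() and EventEntityTransfer are real Spryker API):

        // 1) In the event subscriber: register a direct (non-queued) listener for the new event
        $eventCollection->addListener(
            PriceProductEvents::SYNCHRONOUS_PRICE_PRODUCT_PUBLISH,    // placeholder constant
            new SynchronousPriceProductStorageWriterListener()        // placeholder listener
        );

        // 2) In Zed, right after saving the price: trigger the event via the EventFacade.
        //    Because the listener is not queued, publish + storage write finish in the same PHP call.
        $this->getFactory()->getEventFacade()->trigger(               // placeholder factory wiring
            PriceProductEvents::SYNCHRONOUS_PRICE_PRODUCT_PUBLISH,
            (new EventEntityTransfer())->setId($idPriceProductMerchantRelationship)
        );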