What are the Slack Archives?

It’s a history of our time together in the Slack Community! There’s a ton of knowledge in here, so feel free to search through the archives for a possible answer to your question.

Because this space is not active, you won’t be able to create a new post or comment here. If you have a question or want to start a discussion about something, head over to our categories and pick one to post in! You can always refer back to a post from the Slack Archives if needed; just copy the link to use it as a reference.

U01FMCD8EN4
U01FMCD8EN4 Posts: 27 🧑🏻‍🚀 - Cadet

Hello folks, we have a stage server where the customer is importing data. When an error occurs, I want to replicate it quickly. For that to work, I need the same product data as stage. Is there a quick way to dump a) PGSQL b) Redis c) Elasticsearch d) all of the above so I can insert that data into my local docker environment?

Comments

  • Alberto Reyer
    Alberto Reyer Posts: 690 🪐 - Explorer

    If you have access to the servers, try to tunnel to the original data.

    If you really want to replicate the data, you might need to take care of data anonymization. If you dump all the *_storage and *_search tables, a Redis or Elasticsearch dump isn’t necessary; instead you can use the vendor/bin/console sync:data command to resync all data into Redis and Elasticsearch.
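    The suggestion above could be sketched roughly as follows. The host, database, user, and file names are assumptions, and the script runs in dry-run mode by default (it prints each command instead of executing it), so treat it as an outline rather than a ready-made migration:

    ```shell
    #!/bin/sh
    # Hypothetical sketch: dump the stage Postgres database (including the
    # *_storage and *_search tables), restore it locally, then rebuild Redis
    # and Elasticsearch from those tables via sync:data.
    set -eu
    DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually execute the commands

    run() {
      if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"         # dry run: print the command instead of executing it
      else
        "$@"
      fi
    }

    # 1. Dump on the stage server (custom format; host/db/user are assumptions)
    run pg_dump -Fc -h stage-db.example.com -U spryker -d spryker_db -f stage.dump
    # 2. Restore into the local docker Postgres
    run pg_restore -h localhost -U spryker -d spryker_db --clean --no-owner stage.dump
    # 3. Resync Redis and Elasticsearch from the *_storage / *_search tables
    run vendor/bin/console sync:data
    ```

    With this layout only the relational dump crosses environments; the key-value and search stores are rebuilt locally, which sidesteps dumping Redis and Elasticsearch entirely.
    
    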

  • U01FMCD8EN4
    U01FMCD8EN4 Posts: 27 🧑🏻‍🚀 - Cadet

    Anonymization is on our roadmap, tunnelling is not possible. Can I dump the tables with a Spryker command?

  • Alberto Reyer
    Alberto Reyer Posts: 690 🪐 - Explorer

    Better to use mysqldump or pg_dump; both offer formats that can be imported faster than plain SQL dumps.
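    As a rough illustration of why the native tools import faster, the commands below show the flags involved. Database names and job counts are assumptions, and the commands are only printed here, not executed:

    ```shell
    # Hypothetical sketch: pg_dump's custom format (-Fc) is compressed, and
    # pg_restore can load it with parallel jobs (-j), which is much faster
    # than replaying a plain SQL dump line by line through psql.
    echo "pg_dump -Fc -d spryker_db -f spryker.dump"
    echo "pg_restore -j 4 --clean --no-owner -d spryker_db spryker.dump"

    # MySQL equivalent: --single-transaction takes a consistent snapshot of
    # InnoDB tables without locking them during the dump.
    echo "mysqldump --single-transaction spryker_db > spryker.sql"
    ```
    
    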

  • U01FMCD8EN4
    U01FMCD8EN4 Posts: 27 🧑🏻‍🚀 - Cadet

    Okay, thanks