LogRocket - Migrating from a Single Server to a Flexible Server

This guide is based on LogRocket's migration guideline, and it covers some of the issues you may run into during the migration. To avoid copyright issues, and because the LogRocket documentation is missing some steps, I have added and edited several of the steps along the way.

# Software Clients

  • psql client - Windows/macOS (one way to install it on macOS is sketched below)

  • pgAdmin (optional)
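If you need the psql client on macOS, one possible route is Homebrew (this is my own shortcut, not part of the LogRocket guide; the official installers from postgresql.org work just as well):

# install libpq, which ships the psql client, and link it onto the PATH
brew install libpq
brew link --force libpq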

## Notes

  1. To perform the migration, you must be running PostgreSQL version 12. If you use a version above this one, the migration will not work as expected (see the quick check below).
  2. Make sure no one else is working on the single server during the migration, so that data you still need is not deleted or changed.
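A quick way to confirm the version requirement from note 1 (a sanity check I added; it reuses the same host and instance placeholders as the dump commands later on):

# local client version
psql --version

# server version reported by the single server
psql -h {insert your database host here} -U postgres@{insert your instance name here} -d postgres -c "SELECT version();"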

# Pull Database Schemas from the Existing Single Server

Be sure to have the psql client and pgAdmin downloaded to your local machine.

  • Change the replication setting to 'Logical'. Visit your database-->Replication and set it to Logical. Once you have changed it to Logical, you will be required to restart your database instance.

  • To access your database instance from your local machine using psql, you must change some of the network settings for the database instance. You can do this in the Azure portal by selecting your database's 'Connection Security' settings. For the duration of the process we recommend making the changes below...

  • select Deny public network access - No

  • select Allow access to Azure services - Yes
  • select Add current client IP address (to access the database instance from your current machine)

  • Make additional server parameter changes. Visit your database-->Server Parameters. Search for the parameters below and set them to the recommended values (a quick way to verify them afterwards is shown after the dump commands).

  • max_replication_slots = [number of slots] - we recommend setting this to 10 slots

  • max_wal_senders = [number of concurrent tasks] - sets the number of concurrent WAL sender processes that can run; we recommend setting this to 10 tasks

  • Once the database is configured, run the pg_dump commands from the command line on your local computer to dump the database schemas. There are three databases specific to LogRocket, and you must create schema files for all three: logrocket, troytown, and sentry. Please see the example commands below...

Shell

pg_dump -h {insert your database host here} -U postgres@{insert your instance name here} -d logrocket -s -O -x > logrocket_Schema.sql

pg_dump -h {insert your database host here} -U postgres@{insert your instance name here} -d troytown -s -O -x > troytown_Schema.sql

pg_dump -h {insert your database host here} -U postgres@{insert your instance name here} -d sentry -s -O -x > sentry_Schema.sql

Note: The host name used in the command and the database password can be found in your values.yaml file. Your database instance name can be found in the Azure portal.
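Optionally, before running the dumps, you can confirm that the replication settings above took effect. This is a sanity check of my own, not a LogRocket step; it reuses the same placeholders as the pg_dump commands:

# run against the single server; wal_level should report 'logical' after the restart
psql -h {insert your database host here} -U postgres@{insert your instance name here} -d postgres -c "SHOW wal_level;"
psql -h {insert your database host here} -U postgres@{insert your instance name here} -d postgres -c "SHOW max_replication_slots;"
psql -h {insert your database host here} -U postgres@{insert your instance name here} -d postgres -c "SHOW max_wal_senders;"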

# Create the new Flexible Server

Note: Since your Kubernetes cluster is already configured inside a resource group, you will have to make sure you use that same resource group when you create the flexible server...

  1. Navigate to the LogRocket resource group in Azure
  2. Select the 'Create' button
  3. Select 'Databases'
  4. Select 'Create' under 'Azure Database for PostgreSQL'
  5. Select 'Create' under 'Flexible server (preview)'
  6. Fill in the required information
- **Resource Group** - Your Resource Group
- **Server Name** - Choose a new server name
- **Region** - Whichever region you are located
- **Workload type** - Generally choose Production (Small / Medium-size) (if unsure, please consult LogRocket)
- **Availability zone** - your choice
- **PostgreSQL version** - 12
- **Admin username** - postgres
- **Password** - choose your own password (Remember this password and keep it somewhere safe)
  7. Select 'Configure server' for 'Compute + storage'. Fill in the specs as mentioned below and then select 'Save'
- **Compute:**
- **Compute tier** - General Purpose
- **Compute size**- If you currently have deployed a General Purpose, 2 vCore(s), 500 GB, you can run a 'Standard_D2s_v3'. If you are unsure what to run, please consult LogRocket
- **Storage** - Choose the storage amount closest to, and not below, the maximum storage you observed on your existing LogRocket database; select the next highest storage amount above what you are currently running
- **High Availability** - your choice
- **Backups** - 7 (days of backup retention)
  8. Select the 'Next: Networking >' button
  9. Configure the networking options, and remember to either select '+ Add current client IP address' or create a firewall rule to allow access from the local machine you will be using
  10. Select the 'Review + create' button, then select the 'Create' button (a CLI equivalent of these portal steps is sketched below)
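For reference, here is a rough Azure CLI equivalent of the portal steps above. This is my own sketch rather than part of the LogRocket guide; every value in braces is a placeholder, and flag names can vary slightly between CLI versions:

az postgres flexible-server create \
  --resource-group {your LogRocket resource group} \
  --name {your new flexible server name} \
  --location {your region} \
  --admin-user postgres \
  --admin-password {your new password} \
  --version 12 \
  --tier GeneralPurpose \
  --sku-name Standard_D2s_v3 \
  --storage-size 512 \
  --backup-retention 7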

Now that you have completed these steps, make sure the single server allows access from the VNet that the Microsoft migration wizard will use, so the wizard can reach the single server.

On the flexible server, you will have to add your own IP address, or add an open firewall rule so that anyone can reach the flexible server. The open rule spans 0.0.0.0 to 255.255.255.255 (that is, 0.0.0.0/0). It allows any client to access the server, which will be the target of the migration, so remember to remove it once the migration is complete.
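As an illustration (my own addition, using placeholder names), the same firewall rules can be created with the Azure CLI; prefer the narrow rule for just your machine and use the wide one only temporarily:

# allow only your current public IP
az postgres flexible-server firewall-rule create \
  --resource-group {your LogRocket resource group} \
  --name {your new flexible server name} \
  --rule-name allow-my-ip \
  --start-ip-address {your public IP} \
  --end-ip-address {your public IP}

# or open the server to all addresses for the duration of the migration (remove afterwards)
az postgres flexible-server firewall-rule create \
  --resource-group {your LogRocket resource group} \
  --name {your new flexible server name} \
  --rule-name allow-all-temporary \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 255.255.255.255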

# Postgres | Create new Databases & Apply Database Schema to the New Flexible Server

  1. Open pgAdmin
  2. Right-click 'Servers' and then select 'Create-->Server...'
  3. On the 'General' tab, insert a name of your choosing in 'Name'
  4. On the 'Connection' tab, fill in the 'Host name/address' and 'Password' fields with the correct information
  5. select 'Save' button
  6. Once you have connected to the new database server, select its name in the browser tree, then right-click 'Databases' and select 'Create-->Database...'
  7. In the 'Database' field, insert 'troytown'
  8. select the 'Save' button
  9. Repeat steps 6-8 two more times, creating new databases for 'logrocket' and 'sentry'
  10. Once the databases are created, you will need to apply the schemas you dumped from the old database instance (Part 1, Step 4) to the new databases. Select one of the three databases you created (logrocket, sentry, or troytown).
  11. Right-click/control click on the database and select 'Query Tool'

Note: At this point, load the schema dump files from the single server into the Query Tool so the schema on the new server matches the original.
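If you prefer the command line to pgAdmin's Query Tool, here is a sketch of applying each schema file with psql (my own alternative; the host placeholder is your new flexible server, and the file names come from the dump step earlier):

psql -h {insert your flexible server host here} -U postgres -d logrocket -f logrocket_Schema.sql
psql -h {insert your flexible server host here} -U postgres -d troytown -f troytown_Schema.sql
psql -h {insert your flexible server host here} -U postgres -d sentry -f sentry_Schema.sql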

If applying those files in PostgreSQL gives you an error in the output, delete the following lines from the schema files:

-- Name: plpgsql; Type: EXTENSION; Schema: -; Owner: -
CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;

-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner: -
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';

-- Name: pg_buffercache; Type: EXTENSION; Schema: -; Owner: -
CREATE EXTENSION IF NOT EXISTS pg_buffercache WITH SCHEMA public;

-- Name: EXTENSION pg_buffercache; Type: COMMENT; Schema: -; Owner: -
COMMENT ON EXTENSION pg_buffercache IS 'examine the shared buffer cache';

-- Name: pg_stat_statements; Type: EXTENSION; Schema: -; Owner: -
CREATE EXTENSION IF NOT EXISTS pg_stat_statements WITH SCHEMA public;

-- Name: EXTENSION pg_stat_statements; Type: COMMENT; Schema: -; Owner: -
COMMENT ON EXTENSION pg_stat_statements IS 'track execution statistics of all SQL statements executed';

Now click the green play (Execute) button, and you should see that the schema was successfully applied to the new database.

  12. Repeat the same steps for the remaining two databases.

# Register a Data Migration Service, Create the Data Migration Service & Create the Migration Project

The best way to do this is to follow the instructions in the Microsoft guides.

Quick note: everything must be in the same resource group for this step to succeed.
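As a rough outline of what those Microsoft instructions boil down to (a sketch of my own with placeholder names, not the authoritative steps), the resource provider registration and the Database Migration Service instance can also be handled from the CLI, while the migration project itself is created in the portal wizard:

# register the Database Migration Service resource provider in your subscription
az provider register --namespace Microsoft.DataMigration

# create the Database Migration Service instance in the same resource group
az dms create \
  --resource-group {your LogRocket resource group} \
  --name {your dms instance name} \
  --location {your region} \
  --sku-name Premium_4vCores \
  --subnet {resource ID of the VNet subnet that can reach the single server}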

# Upgrading LogRocket

  1. Update your values file. You will need to update the postgres block of your values file with your new database hostname and password. Once that is complete, you will also need to update the pgbouncer block, adding azureFlexServer: true. Please see the examples below...
postgres:
  # TODO: FQDN for the Postgres instance created by Terraform (e.g., postgres.example.com)
  host: 'example.postgres.database.azure.com'
  # TODO: password for the Postgres admin user created by Terraform
  password: 'enter_new_password_here'
pgbouncer:
  replicas: 1
  serverSSLmode: 'verify-ca'
  azureFlexServer: true
  2. Before upgrading LogRocket, scale down the mutator and process deployments. See the command below.
kubectl scale deployment/process deployment/mutator --replicas=0
  3. Run a standard helm repo update and upgrade as documented here

  4. Run the standard helm upgrade command below to apply the changes; a quick way to verify the rollout follows the command

helm upgrade --install --values values.yaml --wait --timeout=1800 logrocket logrocket-charts/logrocket
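To confirm the upgrade went through against the new database (a simple check of my own, not a documented LogRocket step), watch the release status and the pods coming back up:

# check that the Helm release deployed successfully
helm status logrocket

# watch the LogRocket pods restart and become ready
kubectl get pods -w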

If you are wondering where these updates are made and in which files: LogRocket provides you with Terraform and Helm charts to accomplish these tasks. These files were given to you when you installed the single server, so you only have to update some of the values in order to update your Kubernetes deployment.

Keep in mind that when you perform the cutover in the data migration step, there will be downtime on those servers and no one will have access until the migration is complete. The best thing to do here is to email your team and everyone who currently uses these databases so they are aware of the downtime.

As I said before, I am resharing the LogRocket guides while adding some of the steps missing from the documentation, so that others do not run into the same issues I did.

Please look at the rest of my content and my blog for inspiration, ideas, and resources that could help you in the future. If you like what you are reading, feel free to leave a comment and follow me on social media.