This page provides you with instructions on how to extract data from MySQL and load it into Panoply. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is MySQL?
MySQL is the world's most popular open source relational database management system (RDBMS). It's the data store for countless websites and applications; chances are you interact with MySQL-powered technology every day. MySQL is largely used as a transactional or operational database, and not as much for analytics.
What is Panoply?
Panoply is an end-to-end data platform that can spin up an Amazon Redshift instance in just a few clicks. It uses machine learning and natural language processing (NLP) to learn, model, and automate standard data management activities performed by data scientists, data engineers, and analysts. It can import data with no schema, no modeling, and no configuration. With Panoply, you can use your favorite analysis, SQL, and visualization tools just as you would if you were creating a Redshift data warehouse on your own.
Getting data out of MySQL
MySQL provides several methods for extracting data; the one you use may depend upon your needs and skill set.
The most common way to get data out of any database is simply to write queries. SELECT queries allow you to pull exactly the data you want: you can specify filters, set the ordering, and limit the number of results.
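As a minimal sketch, a query like the following pulls filtered, ordered, size-limited results from a hypothetical orders table (the table and column names here are illustrative, not part of any standard schema):

```sql
-- Pull recent orders from a hypothetical orders table,
-- filtering, ordering, and capping the result set
SELECT id, customer_id, total, created_at
FROM orders
WHERE created_at >= '2023-01-01'
ORDER BY created_at DESC
LIMIT 1000;
```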
If you're looking to export data in bulk, there's an easier alternative. Most MySQL installations include a handy command-line tool called mysqldump that allows you to export entire tables and databases in a format you specify, including delimited text, CSV, or SQL statements that would restore the database if run.
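For example, invocations along these lines (with placeholder user, database, table, and path names) export a table either as SQL statements or as tab-delimited text:

```sh
# Dump a single table as SQL statements that re-create and repopulate it
mysqldump -u myuser -p mydatabase orders > orders.sql

# Dump the same table as delimited text in /tmp/export
# (writes orders.sql for the schema and orders.txt for the data;
# the directory must be writable by the MySQL server process)
mysqldump -u myuser -p --tab=/tmp/export mydatabase orders
```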
Loading data into Panoply
When you've identified all the columns you want to insert, use the Redshift CREATE TABLE statement to make a table in your data warehouse to receive the data.
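Here's a sketch of what that might look like for the hypothetical orders table used above; the column names and types are assumptions, and in practice you'd map each MySQL column to the closest Redshift type:

```sql
-- Hypothetical destination table mirroring a MySQL orders table;
-- Redshift's types differ slightly from MySQL's (no unsigned ints, for example)
CREATE TABLE orders (
    id          BIGINT,
    customer_id BIGINT,
    total       DECIMAL(12,2),
    created_at  TIMESTAMP,
    updated_at  TIMESTAMP
);
```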
Now you can replicate your data. It may seem as if the easiest way to do that (especially if there isn't much of it) is to build INSERT statements and add data to your table row by row. If you have any experience with SQL, this probably will be your first inclination. But beware! Redshift isn't optimized for inserting data one row at a time. If you have a high volume of data to be inserted, you should instead load the data into Amazon S3 and then use the Redshift COPY command to import it into Redshift.
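A minimal sketch of that bulk-load pattern, assuming your exported CSV files live in a hypothetical S3 bucket and you have an IAM role Redshift can use to read from it (both the bucket path and the role ARN below are placeholders):

```sql
-- Bulk-load CSV files from S3 into the orders table;
-- the bucket path and IAM role ARN are placeholders
COPY orders
FROM 's3://my-data-bucket/mysql-export/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
CSV
IGNOREHEADER 1;
```

A single COPY run like this loads the files in parallel, which is why it outpaces row-by-row INSERT statements by a wide margin on large datasets.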
Keeping MySQL data up to date
The script you have now should satisfy all your data needs for MySQL – right? Not yet. How do you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow; if latency is important to you, it's not a viable option.
Instead, you can identify some key fields that your script can use to bookmark its progression through the data, and pick up where it left off as it looks for updated data. Timestamp fields such as updated_at or created_at, or an auto-incrementing primary key, work best for this. When you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in MySQL.
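A sketch of the incremental query such a script might run, assuming the orders table has an updated_at column and the script persists the highest value it has seen between runs (the bind-parameter name is illustrative):

```sql
-- Fetch only rows changed since the bookmark saved on the last run;
-- :last_updated_at is a placeholder the calling script binds at runtime
SELECT id, customer_id, total, created_at, updated_at
FROM orders
WHERE updated_at > :last_updated_at
ORDER BY updated_at ASC;
```

After loading the results, the script would store the maximum updated_at it saw as the bookmark for the next run.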
Other data warehouse options
Panoply is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, Snowflake, or Microsoft Azure SQL Data Warehouse, all of which are relational databases or data warehouses that support similar SQL syntax. Others choose a data lake, like Amazon S3. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Snowflake, To Azure SQL Data Warehouse, and To S3.
Easier and faster alternatives
If all this sounds a bit overwhelming, don't be alarmed. Even if you have all the skills necessary to go through this process, building and maintaining a script like this probably isn't a high-leverage use of your time.
Thankfully, products like Stitch were built to move data from MySQL to Panoply automatically. With just a few clicks, Stitch starts extracting your MySQL data, structuring it in a way that's optimized for analysis, and inserting that data into your Panoply data warehouse.