Building a Data Pipeline in Python – Part 4 of N – Basic Reporting

Building a report that passes tests

At this point, we have seen what our data looks like, how it is stored, and what some basic tests might look like. In this post, we start to look at how to turn all of that into a report that aids the ETL process.

Many companies find themselves in a position where a CSV (or something similar) is delivered from outside the organization. In this example, we assume the file is placed into a folder called “new_data”. Our code picks up the file and compares it against what we expect in order to decide whether or not to move forward with the ETL process.

This Jupyter notebook could be run each time the file is updated, and the results could be sent to stakeholders before the data is processed. It contains a very basic level of testing and visualization, but the idea should get you started. When it runs, the tests confirm whether or not the data fits within certain constraints and passes some integrity checks. The data is then plotted, and a final output at the bottom shows which tests have passed / failed.
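
To make that concrete, here is a minimal sketch of the kind of check the notebook performs. Only the “new_data” folder comes from this series; the column names, the value range, and the check_new_file helper are hypothetical placeholders to adapt to your own data.

    from pathlib import Path
    import pandas as pd

    # Hypothetical expectations -- replace with the columns and ranges for your own data.
    EXPECTED_COLUMNS = {"id", "date", "value"}
    EXPECTED_RANGE = (0, 100)  # assumed acceptable range for the "value" column

    def check_new_file(folder="new_data"):
        """Pick up the most recent CSV in the folder and run basic checks on it."""
        newest = max(Path(folder).glob("*.csv"), key=lambda p: p.stat().st_mtime)
        df = pd.read_csv(newest)
        return {
            "all expected columns present": EXPECTED_COLUMNS.issubset(df.columns),
            "no missing values": not df.isna().any().any(),
            "values within expected range": df["value"].between(*EXPECTED_RANGE).all(),
        }

    if __name__ == "__main__":
        for test, passed in check_new_file().items():
            print(f"{test}: {'PASS' if passed else 'FAIL'}")

Looping over the returned dictionary is all it takes to produce the pass / fail summary at the bottom of the report.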

Continue reading

Building a Data Pipeline in Python – Part 3 of N – Testing Data

Simple testing of data: columns, data types, values

In a previous post, we walked through data exploration / visualization and tests to see if our data fit basic requirements. The Jupyter Notebook, embedded below, loads the data and tests it against some rules, which starts to push us toward a more customizable and flexible process.

We are establishing a baseline and a framework. We are still in the very early stages of the ETL process, but we can start to see what the future holds. This notebook covers:

  • Verifying that all of the columns we need are being read in from the new file
  • Determining which columns are worth testing
  • Using basic statistics to find an expected range of values
  • Testing all of the above (see the sketch below)
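
A rough sketch of what these tests can look like is below; the column names, dtypes, and the mean ± 3 standard deviations rule are illustrative assumptions rather than the notebook’s exact rules.

    import pandas as pd

    # Assumed schema -- substitute the columns and dtypes of your own data.
    EXPECTED_DTYPES = {"id": "int64", "date": "object", "value": "float64"}

    def expected_range(baseline, column="value"):
        """Use basic statistics from known-good data to define an expected range
        (mean +/- 3 standard deviations is just one reasonable choice)."""
        mean, std = baseline[column].mean(), baseline[column].std()
        return mean - 3 * std, mean + 3 * std

    def test_new_data(new, low, high):
        return {
            "required columns present": set(EXPECTED_DTYPES) <= set(new.columns),
            "dtypes match": all(
                str(new[col].dtype) == dtype
                for col, dtype in EXPECTED_DTYPES.items()
                if col in new
            ),
            "values within expected range": new["value"].between(low, high).all(),
        }

Driving the checks from a single EXPECTED_DTYPES dictionary keeps the column list and the dtype tests from drifting apart as the schema grows.
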
Continue reading

100 Days of Code – Completed!

I finished the #100DaysOfCode challenge and it feels great! I will tell you a little bit about my experience.

Top 5 Takeaways:

  1. Sitting down and writing code every day is not easy
  2. Planning is critical to your success
  3. Staying motivated requires effort
  4. Being excited about your project makes a world of difference
  5. Learning takes time and effort

What did I build?

Continue reading

Building a Data Pipeline in Python – Part 2 of N – Data Exploration

Initial data acquisition and data analysis

In order to get an idea of what our data looks like, we need to look at it! The Jupyter Notebook, embedded below, shows the steps to load your data into Python and compute some basic statistics, which we can then use to identify potential issues with new data as it arrives.
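
In pandas, that first look boils down to a handful of lines like the following; the file path is just a placeholder.

    import pandas as pd

    # Load the new file -- the path is a placeholder for wherever your data lands.
    df = pd.read_csv("new_data/sample.csv")

    # First look: size, types, and a few rows.
    print(df.shape)
    print(df.dtypes)
    print(df.head())

    # Basic statistics we can later use to flag suspicious values in new deliveries.
    print(df.describe())
    print(df.isna().sum())  # missing values per column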

This is simply the exploratory step; we will build part of the pipeline in the next post. It’s important to have notebooks involved once in a while to make sure we know what we’re looking at.

Keep in mind, this is the first look at the data and we’re checking out some very basic testing. These tests will become more robust and meaningful as we continue to build out this pipeline.

Continue reading

ETL – Building a Data Pipeline With Python – Introduction – Part 1 of N

ETL (Extract, Transform, Load) is not always the favorite part of a data scientist’s job, but it’s an absolute necessity in the real world. If you don’t understand this process yet, you will have a basic grasp of it by the time you’re done with these lessons. I will be covering:

  • Data exploration
    • Understanding your data
    • Looking for red flags
    • Utilizing both statistics and data visualization
  • Checking your data for issues
    • Identifying things outside of the “normal” range
    • Deciding what to do with NaN or missing values
    • Discovering data with the wrong data type
  • How to clean and transform your data
    • Utilize the pandas library
    • Utilize pyjanitor
    • Getting data into tidy format
  • Dealing with your database
    • Determining whether or not you actually need a database
    • Choosing the right database
      • Deciding between relational and NoSQL
    • Basic schema design and normalization
    • Using an ORM – SQLAlchemy to insert data
  • Building a data pipeline
    • Separate your ETL into parts
    • Utilize luigi to keep you on track (a minimal sketch follows this outline)
    • Error monitoring
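
As a taste of where the series is headed, here is a minimal, hypothetical luigi sketch that splits an ETL run into two dependent tasks; the file paths and the dropna() “transform” are placeholders.

    import luigi
    import pandas as pd

    class ExtractData(luigi.Task):
        """Stand-in extraction step: copy the delivered CSV into our working area."""
        def output(self):
            return luigi.LocalTarget("data/extracted.csv")

        def run(self):
            df = pd.read_csv("new_data/sample.csv")  # placeholder source file
            with self.output().open("w") as f:
                df.to_csv(f, index=False)

    class TransformData(luigi.Task):
        """Transformation step that only runs once ExtractData has produced its output."""
        def requires(self):
            return ExtractData()

        def output(self):
            return luigi.LocalTarget("data/transformed.csv")

        def run(self):
            with self.input().open() as f:
                df = pd.read_csv(f)
            df = df.dropna()  # stand-in for real cleaning / transformation logic
            with self.output().open("w") as f:
                df.to_csv(f, index=False)

    if __name__ == "__main__":
        luigi.build([TransformData()], local_scheduler=True)

Because each task declares its output, luigi only re-runs the steps whose outputs are missing, which is a big part of what keeps a multi-step ETL on track.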

Continue reading