Tag Archives: Python

Building a Data Pipeline in Python – Part 4 of N – Basic Reporting

Building a report that passes tests

At this point, we have seen what our data looks like, how it is stored, and what some basic tests might look like. In this post, we start to look at how this might be created as a report to aid in the ETL process.

Many companies find themselves in a position where a CSV (or something similar) is delivered from outside the organization. In this example, we assume the file is being placed into a folder called “new_data”. Our code picks up the file and compares it against the expectations established by our tests in order to decide whether or not to move forward with the ETL process.
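
As a minimal sketch of that pickup step (the folder name comes from above; picking the newest file, and everything else here, is an assumption for illustration):

```python
from pathlib import Path

import pandas as pd

NEW_DATA_DIR = Path("new_data")

def load_latest_csv(directory: Path = NEW_DATA_DIR) -> pd.DataFrame:
    """Load the most recently modified CSV from the drop folder."""
    csv_files = sorted(directory.glob("*.csv"), key=lambda p: p.stat().st_mtime)
    if not csv_files:
        raise FileNotFoundError(f"No CSV files found in {directory}")
    return pd.read_csv(csv_files[-1])
```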

This Jupyter notebook could be run each time the file is updated, and the results could be sent to stakeholders before the data is processed. It contains a very basic level of testing and visualization, but the idea should get you started. When it runs, tests confirm whether or not the data fits within certain constraints and passes some integrity checks. The data is then plotted, and a final output at the bottom shows which tests passed and which failed.
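
Continuing the sketch above, that pass/fail summary could be assembled with something like this (the specific checks and the `value` column are invented for the example):

```python
def run_checks(df: pd.DataFrame) -> dict:
    """Run basic integrity checks and return a check-name -> passed mapping."""
    return {
        "no_missing_values": not df.isna().any().any(),
        "no_duplicate_rows": not df.duplicated().any(),
        "value_in_range": df["value"].between(0, 100).all(),  # assumed column and bounds
    }

df = load_latest_csv()
for name, passed in run_checks(df).items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```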

Continue reading

Building a Data Pipeline in Python – Part 3 of N – Testing Data

Simple testing of data: columns, data types, values

In a previous post, we walked through data exploration / visualization and tests to see if our data fit basic requirements. The Jupyter Notebook, embedded below, loads the data and tests it against some rules, pushing us toward a more customizable and flexible process.

We are establishing a baseline and framework. We are still in the very early stages of the ETL process, but we can start to see what the future holds. This notebook covers:

  • Confirming that all of the columns we need are being read in from the new file
  • Determining which columns are worth testing
  • Using basic statistics to find an expected range of values
  • Testing all of the above (a minimal sketch follows this list)
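
Here is a rough illustration of those rules; the schema and bounds below are made up for the example, and in the notebook the expected range comes from the statistics of previously accepted data:

```python
import pandas as pd

# Assumed schema and bounds, purely for illustration.
EXPECTED_DTYPES = {"store_id": "int64", "sales": "float64"}
SALES_RANGE = (0.0, 10_000.0)

def test_new_file(df: pd.DataFrame) -> None:
    # 1. All of the columns we need are present in the new file.
    missing = set(EXPECTED_DTYPES) - set(df.columns)
    assert not missing, f"Missing columns: {missing}"

    # 2. The columns worth testing came in with the expected data types.
    for col, dtype in EXPECTED_DTYPES.items():
        assert str(df[col].dtype) == dtype, f"{col} is {df[col].dtype}, expected {dtype}"

    # 3. Values fall inside the range derived from historical statistics.
    low, high = SALES_RANGE
    assert df["sales"].between(low, high).all(), "sales outside expected range"
```
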
Continue reading

Building a Data Pipeline in Python – Part 2 of N – Data Exploration

Initial data acquisition and data analysis

In order to get an idea of what our data looks like, we need to look at it! The Jupyter Notebook, embedded below, shows the steps to load your data into Python and compute some basic statistics, which we then use to identify potential issues with newly arriving data.
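
As a minimal sketch, that first look rarely needs more than a handful of pandas calls (the file path here is an assumption):

```python
import pandas as pd

df = pd.read_csv("new_data/latest.csv")  # assumed path to the new file

print(df.head())        # eyeball the first few rows
print(df.dtypes)        # did each column come in as the expected type?
print(df.describe())    # summary statistics to spot suspicious ranges
print(df.isna().sum())  # missing values per column
```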

This is simply the exploratory step; we will build part of the pipeline in the next part. It’s important to have notebooks involved once in a while in order to make sure we know what we’re looking at.

Keep in mind, this is the first look at the data, and the tests here are very basic. They will become more robust and meaningful as we continue to build out this pipeline.

Continue reading

ETL – Building a Data Pipeline With Python – Introduction – Part 1 of N

ETL (Extract, Transform, Load) is not always the favorite part of a data scientist’s job, but it’s an absolute necessity in the real world. If you don’t understand this process yet, you will have a basic grasp of it by the time you’re done with these lessons. I will be covering:

  • Data exploration
    • Understanding your data
    • Looking for red flags
    • Utilizing both statistics and data visualization
  • Checking your data for issues
    • Identifying things outside of the “normal” range
    • Deciding what to do with NaN or missing values
    • Discovering data with the wrong data type
  • How to clean and transform your data
    • Utilize the pandas library
    • Utilize pyjanitor
    • Getting data into tidy format
  • Dealing with your database
    • Determining whether or not you actually need a database
    • Choosing the right database
      • Deciding between relational and NoSQL
    • Basic schema design and normalization
    • Using an ORM – SQLAlchemy to insert data
  • Building a data pipeline
    • Separate your ETL into parts
    • Utilize luigi to keep you on track (a rough sketch follows this list)
    • Error monitoring
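
To make that last point concrete, here is a rough luigi sketch of an ETL split into dependent tasks; the file paths and the dropna() cleaning step are stand-ins, not the pipeline we will actually build:

```python
import luigi
import pandas as pd

class Extract(luigi.Task):
    """Copy the raw CSV out of the drop folder."""
    def output(self):
        return luigi.LocalTarget("data/raw.csv")

    def run(self):
        df = pd.read_csv("new_data/latest.csv")  # assumed source file
        with self.output().open("w") as f:
            df.to_csv(f, index=False)

class Transform(luigi.Task):
    """Clean the extracted data; luigi runs Extract first automatically."""
    def requires(self):
        return Extract()

    def output(self):
        return luigi.LocalTarget("data/clean.csv")

    def run(self):
        with self.input().open("r") as f:
            df = pd.read_csv(f)
        df = df.dropna()  # stand-in for the real cleaning/tidying steps
        with self.output().open("w") as f:
            df.to_csv(f, index=False)

if __name__ == "__main__":
    luigi.build([Transform()], local_scheduler=True)
```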

Continue reading

100 Days of Code – What Does it Look Like at Day 11

Stoltzmaniac Fans – It’s time for a #100DaysOfCode update.

I have completed 11 days of the challenge. Let me tell you, it has been a blast and I have already learned a lot. In this post I’ll walk you through what I’ve done thus far. Here is a link to the code on my GitHub repository.  

As you may recall from my previous post I set out to create a flask application to host data science projects for the Meetup group that I organize (Fort Collins Data Science Meetup). My goal is to provide people with an outlet to run code online where they will get the benefits of having a server and a dynamic UI. This will improve the group’s collaboration and Git skills along with allowing people to showcase their work without having to build infrastructure. In case you’re wondering, I built this using Docker Compose, Flask, NGINX, PostgreSQL, and MongoDB.

In order to keep from boring myself to sleep while writing this, I’m going to keep it short and to the point. You might be asking, “what does this application look like?” That’s a great question. It’s a normal website where people contribute Python scripts to do some sort of data processing or analysis. For example, here’s a word cloud generator where the user inserts a Twitter handle with a link to a logo of some sort and then a word cloud is created from all of the most recent tweets! Here is @realdonaldtrump as the Republican elephant and @barackobama as the Democrat donkey.
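
The heart of that generator can be sketched with the wordcloud library; fetching the tweets and the mask image are left out here, and the names below are assumptions rather than the app’s actual code:

```python
import numpy as np
from PIL import Image
from wordcloud import WordCloud

def tweets_to_word_cloud(text: str, mask_path: str, out_path: str) -> None:
    """Render a word cloud shaped by the non-white areas of a logo image."""
    mask = np.array(Image.open(mask_path))  # e.g., the elephant or donkey logo
    cloud = WordCloud(background_color="white", mask=mask).generate(text)
    cloud.to_file(out_path)

# Usage: tweets_to_word_cloud(" ".join(recent_tweets), "logo.png", "cloud.png")
```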

Continue reading