To get an idea of what our data looks like, we need to look at it! The Jupyter Notebook embedded below shows the steps to load your data into Python and compute some basic statistics, which we'll use to identify potential issues with new data as it arrives.
This is just the exploratory step; we'll build part of the pipeline in the next one. It's important to bring notebooks into the process once in a while to make sure we know what we're looking at.
Keep in mind, this is a first look at the data, and these are only very basic tests. They will become more robust and meaningful as we continue to build out the pipeline.
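The notebook itself isn't reproduced here, but the "first look" it walks through boils down to a few pandas calls. This is a minimal sketch; the inline CSV and its column names are invented stand-ins for whatever file you load:

```python
# A minimal sketch of the "first look" step: load the data, then
# inspect shape, dtypes, summary stats, and missing values.
# The inline CSV and column names are invented for illustration.
import io

import pandas as pd

raw = io.StringIO(
    "age,income\n"
    "34,52000\n"
    "29,\n"          # missing income value
    "61,87000\n"
)
df = pd.read_csv(raw)

print(df.head())         # eyeball the first few rows
print(df.dtypes)         # did each column parse as the expected type?
print(df.describe())     # basic summary statistics for numeric columns
print(df.isna().sum())   # count of missing values per column
```

Even this much is enough to spot a column that parsed as the wrong type or a surprising number of missing values before any pipeline code is written.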
ETL (Extract, Transform, Load) is not always the favorite part of a data scientist's job, but it's an absolute necessity in the real world. Even if you don't understand this process yet, you'll have a basic grasp on it by the time you're done with these lessons. I will be covering:
- Understanding your data
  - Looking for red flags
  - Utilizing both statistics and data visualization
- Checking your data for issues
  - Identifying things outside of the "normal" range
  - Deciding what to do with NaN or missing values
  - Discovering data with the wrong data type
- How to clean and transform your data
  - Utilizing the pandas library
  - Getting data into tidy format
- Dealing with your database
  - Determining whether or not you actually need a database
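The "checking your data for issues" items above can be sketched in a few lines of pandas. The data, column names, and range thresholds here are all invented for illustration:

```python
# A hedged sketch of the checks listed above: values outside a "normal"
# range, a column that loaded with the wrong data type, and handling the
# resulting NaNs. All data and thresholds are made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "temperature_f": [68, 71, 999, 70],   # 999 looks like a sentinel value
    "count": ["12", "7", "9", "x"],       # should be numeric, parsed as str
})

# identify rows outside a plausible range
out_of_range = df[(df["temperature_f"] < -50) | (df["temperature_f"] > 130)]
print(out_of_range)

# coerce the wrongly-typed column; entries that can't convert become NaN
df["count"] = pd.to_numeric(df["count"], errors="coerce")
print(df["count"].isna().sum())

# decide what to do with missing values -- here, simply drop them
clean = df.dropna(subset=["count"])
print(clean)
```

These are exactly the sort of quick checks that start out ad hoc in a notebook and later harden into automated tests inside the pipeline.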
Starting the 100 Days of Code (#100DaysOfCode) challenge
I am always looking to boost my coding skills and as I watch everyone make resolutions for the year, I couldn’t help but think I should try this challenge. In case you don’t know what I’m referring to, one resource is https://www.100daysofcode.com/ – which really gives you a good overview of what the challenge involves.
What will I be building?
I am a project-oriented person, so I will be building a web application that runs sentiment analysis on text data from APIs.
The basic topics I hope to cover:
Store data from external APIs
Utilize PostgreSQL and MongoDB
Back end API development
Luigi ETL pipeline
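This is not the app's actual implementation, which will evolve over the challenge; but to give a flavor of the sentiment-analysis piece, here is a toy lexicon-based scorer in plain Python (the word lists and function name are my own placeholders):

```python
# Toy lexicon-based sentiment scoring -- a stand-in illustration, not the
# project's real approach. The word lists are invented placeholders.
POSITIVE = {"great", "good", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "awful"}

def sentiment_score(text: str) -> int:
    """Return (positive word hits) - (negative word hits) for a text."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I love this, it is great"))  # positive text
print(sentiment_score("terrible, I hate it!"))      # negative text
```

A real version would likely lean on a trained model or an established library, but the same signature (text in, score out) is what the web app's API would expose.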
I will try to send out a blog update every week or two with highlights! I will also be updating GitHub as I go. Part of the challenge is posting on Twitter, so each day I'll use the hashtag #100DaysOfCode, and you can follow me @stoltzmaniac
Recently, I started looking into data sets to compete in Go Code Colorado (check it out if you live in CO). The problem with such diversity in data sets is finding a way to quickly visualize the data and do exploratory analysis. While tools like Tableau make data visualization extremely easy, the data isn't always properly formatted to be easily consumed. Here are a few tips to help speed up your exploratory data analysis!
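The most common formatting fix is reshaping "wide" data into tidy (long) format so tools like Tableau can consume it. A quick sketch with `pandas.melt`, using an invented sales table:

```python
# Reshaping wide data into tidy format with pandas.melt.
# The store/sales data here is invented for illustration.
import pandas as pd

wide = pd.DataFrame({
    "store": ["A", "B"],
    "sales_2016": [100, 150],
    "sales_2017": [120, 160],
})

# one row per (store, year) observation instead of one column per year
tidy = wide.melt(id_vars="store", var_name="year", value_name="sales")
tidy["year"] = tidy["year"].str.replace("sales_", "", regex=False).astype(int)
print(tidy)
```

In tidy form, each variable is a column and each observation is a row, which is exactly the shape most visualization tools expect.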
We’ll use data from two sources to aid with this example: