
Big Data Engineer

CHICAGO, IL

ENGINEERING – DATA ENGINEERING

FULL TIME

REMOTE

WHO WE ARE

Basis Technologies delivers software and services to automate digital media operations for more than 1,000 leading agencies and brands.

Our comprehensive ad tech platform, Basis, supports the planning, reporting, and financial reconciliation of direct, programmatic, search, and social media, all in one place.

We are deeply committed to building software that will change the ad tech industry for the better and are equally dedicated to building an inclusive culture of highly motivated individuals who create a positive and supportive environment together. We invest in our culture and support our employees so they can do their best work.

Basis Technologies is headquartered in Chicago, and our employees have the flexibility to work in an office location, completely remote, or a hybrid of the two. Please note, we are hiring on a remote working basis only in the U.S. and Canada.

ABOUT THE TEAM

Technology is at the core of what we do. Basis’s innovative Engineering team designs and develops new features and integrations for Basis, our industry-leading, comprehensive software solution. Our platform processes over 300 billion events per day and uses AI and machine learning to automate and simplify the entire digital campaign process.

WAYS YOU’LL CONTRIBUTE

This team is all about data. The Data Engineering team is responsible for ingesting data from various sources and transforming it into structures and layers tailored to various reporting needs.

We are starting to migrate from our on-premises Hadoop stack to a cloud implementation on Snowflake. This is a large initiative, and the transition will happen gradually. As a result, you will work with a mix of technologies as a data engineer, including but not limited to Hadoop with Spark and Snowflake.
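
For a flavor of what this dual-stack period can involve, here is a minimal PySpark sketch that reads a Hive table on the Hadoop side and copies it into Snowflake through the spark-snowflake connector. All table names, credentials, and connection options below are hypothetical placeholders, not our actual configuration.

    # Minimal sketch: copy a Hive table into Snowflake via the
    # spark-snowflake connector. Names and credentials are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("hive-to-snowflake-backfill")
        .enableHiveSupport()  # read tables registered in the Hive metastore
        .getOrCreate()
    )

    events = spark.table("warehouse.ad_events")  # existing Hive table

    sf_options = {  # placeholder connection options
        "sfURL": "example_account.snowflakecomputing.com",
        "sfUser": "etl_user",
        "sfPassword": "********",
        "sfDatabase": "ANALYTICS",
        "sfSchema": "PUBLIC",
        "sfWarehouse": "ETL_WH",
    }

    (
        events.write
        .format("net.snowflake.spark.snowflake")
        .options(**sf_options)
        .option("dbtable", "AD_EVENTS")
        .mode("append")  # append so the Hadoop side stays authoritative
        .save()
    )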

OTHER WAYS YOU’LL CONTRIBUTE TO THE TEAM ARE BY:

  • Implementing scalable, fault-tolerant, and accurate ETL pipelines that work in a distributed data processing environment (see the orchestration sketch after this list).
  • Gathering and processing raw data at scale from diversified sources into Hadoop and Snowflake.
  • Contributing to building enterprise business analytics and reporting applications on Hadoop and Snowflake.
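
As a hypothetical sketch of the kind of orchestration this work involves, the Airflow DAG below chains a daily extract, a Spark transform, and a Snowflake load. The DAG id, task ids, and script paths are made up for illustration.

    # Hypothetical Airflow DAG (Airflow 2.x) for a daily ETL pipeline.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_ad_events_etl",
        start_date=datetime(2023, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        # Pull raw events from upstream sources into HDFS (placeholder script).
        extract = BashOperator(
            task_id="extract_raw_events",
            bash_command="python /opt/etl/extract_raw_events.py --date {{ ds }}",
        )

        # Run the distributed transform as a Spark job.
        transform = BashOperator(
            task_id="transform_events",
            bash_command="spark-submit /opt/etl/transform_events.py --date {{ ds }}",
        )

        # Load the transformed partition into Snowflake (placeholder script).
        load = BashOperator(
            task_id="load_to_snowflake",
            bash_command="python /opt/etl/load_to_snowflake.py --date {{ ds }}",
        )

        extract >> transform >> load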

WHAT YOU BRING TO THE TABLE

  • Proven experience with Snowflake and/or with various components of the Hadoop ecosystem.
  • On the Hadoop stack, we are looking for experience with Spark, HDFS, Hive, Impala, and Oozie.
  • On Snowflake, we are looking for experience developing streams, tasks, and procedures (see the sketch after this list).
  • Experience with Kafka and Kafka connectors.
  • Knowledge of Airflow.
  • Strong understanding of computer science fundamentals.
  • Proficiency with relational databases and SQL queries (MySQL, Oracle, or similar).
  • Understanding of how to handle high-velocity, high-volume data events.
  • Understanding of the factors affecting the performance of ETL processes and SQL queries, and the ability to do performance tuning.
  • Experience implementing data pipelines that move large volumes of data per day.
  • Experience coding in Python and/or Scala.
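
For candidates less familiar with the Snowflake side, the sketch below shows what streams and tasks mean in practice, driven from Python with the snowflake-connector-python package. Every object name and credential is a placeholder, not our actual schema.

    # Hypothetical sketch of Snowflake streams and tasks via
    # snowflake-connector-python. All names and credentials are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="example_account",
        user="etl_user",
        password="********",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    cur = conn.cursor()

    # A stream records row-level changes (inserts, updates, deletes) on a table.
    cur.execute("""
        CREATE STREAM IF NOT EXISTS ad_events_stream
        ON TABLE ad_events
    """)

    # A task runs SQL on a schedule; this one folds new rows from the stream
    # into a reporting table every five minutes.
    cur.execute("""
        CREATE TASK IF NOT EXISTS refresh_daily_report
          WAREHOUSE = ETL_WH
          SCHEDULE = '5 MINUTE'
        AS
          INSERT INTO daily_report
          SELECT event_date, COUNT(*) AS events
          FROM ad_events_stream
          GROUP BY event_date
    """)

    # Tasks are created suspended; resume to start the schedule.
    cur.execute("ALTER TASK refresh_daily_report RESUME")

    cur.close()
    conn.close()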

BONUS POINTS

  • Bachelor’s degree or an advanced degree in Computer Science or Engineering.
  • Experience with BI tools such as Power BI, Looker, etc.
  • Excitement about a fast-paced product development environment.
  • A passion for and knowledge of the AdTech industry.

OUR TECH STACK

  • Kubernetes, Docker, Harness
  • Jenkins, GitHub
  • AWS

$95,000 – $166,000 a year