84.51° Overview:
84.51° is a retail data science, insights, and media company. We help The Kroger Co., consumer packaged goods companies, agencies, publishers, and affiliated partners create more personalized and valuable experiences for shoppers across the path to purchase.
Powered by cutting-edge science, we leverage first-party retail data from nearly one of every two US households and more than two billion transactions to fuel a more customer-centric journey, utilizing 84.51° Insights, 84.51° Loyalty Marketing, and our retail media advertising solution, Kroger Precision Marketing.
Join us at 84.51°!
__________________________________________________________
As a Senior Data Engineer, you will have the opportunity to build solutions that ingest, transform, store, and distribute our big data for consumption by data scientists and our products.
Our data engineers use PySpark/Python, Databricks, Hadoop, Hive, and other data engineering technologies and visualization tools to deliver data capabilities and services to our scientists, products, and tools.
Responsibilities
Take ownership of features and drive them to completion through all phases of the 84.51° SDLC. This includes internal- and external-facing applications as well as process improvement activities:
- Participate in the design and development of Databricks- and cloud-based solutions.
- Implement automated unit and integration testing.
- Collaborate with architecture and lead engineers to ensure consistent development practices.
- Provide mentoring to junior engineers.
- Participate in retrospective reviews.
- Participate in the estimation process for new work and releases.
- Collaborate with other engineers to solve complex problems and bring new perspectives to them.
- Drive improvements in data engineering practices, procedures, and ways of working.
- Embrace new technologies and an ever-changing environment.
Requirements
- 4+ years of professional data development experience
- 3+ years of experience developing with Databricks or Hadoop/HDFS
- 3+ years of experience with PySpark/Spark
- 3+ years of experience with SQL
- 3+ years of experience developing in Python, Java, or Scala
- Full understanding of ETL and data warehousing concepts
- Experience with CI/CD
- Experience with version control software
- Strong understanding of Agile principles (Scrum)
- Bachelor's Degree (Computer Science, Management Information Systems, Mathematics, Business Analytics, or STEM)
Bonus Points for experience in the following:
- Experience with Azure
- Experience with Databricks Delta Tables, Delta Lake, Delta Live Tables
- Proficiency with relational data modeling
- Experience with Python Library Development
- Experience with Structured Streaming (Spark or otherwise)
- Experience with Kafka and/or Azure Event Hub
- Experience with GitHub SaaS / GitHub Actions
- Experience with Snowflake
- Exposure to BI Tooling (Tableau, Power BI, Cognos, etc.)
#LI-Remote #LI-DOLF