Humu's Nudge Engine® deploys thousands of customized nudges—small, personal steps—throughout organizations to empower every employee, manager, team, and leader as a change agent. Over time, our nudges become increasingly attuned to the timing, messaging, and motivational techniques that inspire each employee to act.
As a member of Humu's Data Engineering team, you will build and maintain the systems that expand and optimize our data pipelines, computing insights and analyses at scale. Some of the team's major ownership areas include:
- A data pipeline tool that transforms ingested HR data into a clean, consistent format.
- The automated processes that decide which nudges to send to each user, schedule them for delivery, and deliver them across multiple channels (email, text message, etc.).
- The logic and systems that make data available to a variety of cross-functional teams at Humu.
We are committed to changing the working world for the better by bringing greater meaning and happiness into everyone's working lives, everywhere. We are passionate about our mission and excited to grow our school of fish with people who want to do the same, people who will bring their different perspectives to help us continue to shape our team and product. If this is you, we encourage you to swim into our candidate pool!
Role and responsibilities:
- As a member of our Product Team, you will ensure that our data delivery architecture remains optimal and consistent across ongoing projects.
- Create and optimize our data pipeline architecture
- Build a data access platform for our data science and tech teams
- Manage and optimize customer data ingestion for the Humu product
- Add and improve logging and monitoring
- Occasionally build UIs for internal data tools
Qualifications:
- 4–99 years of experience managing data pipelines
- Driven engineer who is motivated to build a great product and a great codebase in a fast-paced environment
- Strong communication skills with a growth and learning mindset
- Understanding of object-oriented languages (Java, Python, etc.)
- Familiarity with cloud-based platforms: GCP (preferred), AWS, Azure
- Experience with large-scale databases (preference for non-relational databases and unstructured data)
- Experience with, or a solid understanding of, complex data pipelines
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' stores
Salary range: $160,000–$220,000
Only open to candidates in Seattle, WA, or the SF Bay Area, CA