Job Overview
As a Data Engineer, you will play a pivotal role in designing, developing, and maintaining scalable data pipelines, leveraging cutting-edge technologies such as dbt, Kafka, RabbitMQ, Spark, and ClickHouse within cloud environments. You will ensure the efficient and reliable flow of data across our systems, and your expertise will drive the optimization of our data architecture, enabling insightful analytics and supporting our data-driven decision-making processes.
Job Responsibilities
• Design and implement highly scalable, reliable, and performant data pipelines to support ETL processes, integrating technologies such as Kafka, Spark, ClickHouse, and RabbitMQ.
• Develop and maintain data transformations using dbt, ensuring proper testing, documentation, and version control of all transformation logic.
• Create and maintain dbt models following best practices for modularity, testing, and documentation, while optimizing for performance and maintainability.
• Work within cloud environments to deploy and manage data services, ensuring best practices in security, scalability, and efficiency.
• Develop and maintain robust data storage solutions, optimizing data storage and retrieval processes in ClickHouse or similar technologies.
• Collaborate with data analysts and other stakeholders to understand data needs and implement systems that support data analysis and reporting.
• Monitor, troubleshoot, and optimize data pipelines, identifying and resolving performance bottlenecks and ensuring data quality and integrity.
• Participate in the design and implementation of data models and schemas that support business processes and objectives.
• Stay abreast of industry trends and advancements in data engineering technologies and methodologies, continuously seeking ways to improve our data systems.
• Ensure compliance with data governance and security policies.
Job Qualifications
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• Proven experience in data engineering, with a strong background in designing and implementing ETL processes within cloud environments.
• Strong programming skills in Python, with experience in developing robust, maintainable, and scalable data processing pipelines.
• Extensive SQL knowledge and experience.
• Experience with dbt, including expertise in SQL transformation logic, testing, and documentation.
• Excellent problem-solving skills and the ability to work collaboratively in a team environment.
• Strong communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
Benefits
• Competitive salary commensurate with experience
• Opportunities for professional development and career advancement
• Collaborative and supportive work environment
• Flexibility in smart casual dress code
• Complimentary snacks and beverages available