
Sr. Data Engineer (Poland)

About Craft:

Craft is the leader in supplier risk intelligence, enabling enterprises to discover, evaluate, and continuously monitor their suppliers at scale. Our unique, proprietary data platform tracks real-time signals on millions of companies globally, delivering best-in-class monitoring and insight into global supply chains. Our customers include Fortune 500 companies, government agencies, SMEs, and global service platforms. Through our configurable Software-as-a-Service portal, our customers can monitor any company they work with and execute critical actions in real time. We’ve developed distribution partnerships with some of the largest integrators and software platforms globally.

We are a post-Series B high-growth technology company backed by top-tier investors in Silicon Valley and Europe, headquartered in San Francisco with hubs in Seattle and London. We support remote and hybrid work, with team members across North America and Europe.

We're looking for innovative and driven people passionate about building the future of Enterprise Intelligence to join our growing team!

About the Role:

Craft is looking for an experienced and motivated senior-level Data Engineer to join one of the teams responsible for a key product within the organization. As a core member of this team, you will have a strong voice in how solutions are engineered and delivered. Craft gives engineers significant responsibility and authority, matched by our investment in their growth and development.

We’re growing quickly and looking to hire data engineers for multiple teams. We’re always looking for folks with strong data engineering experience, Python coding experience, Pandas expertise, and solid software engineering practices. In addition to those skills, we’re looking for two different categories of data engineers. If you feel that your experience aligns with one or both of the categories below, we have a place for you!

  1. Someone with experience in data lakes and data warehousing, graph databases, and knowledge graphs.

  2. Someone with experience in streaming and batching data from data partners and building data pipelines for transactional use, along with experience in API development, FastAPI, and asyncio.

What You'll Do:

  • Building and optimizing data pipelines (batch and streaming).

  • Extracting, analyzing, and modeling rich and diverse datasets of structured and unstructured data.

  • Designing software that is easily testable and maintainable.

  • Supporting the definition of our data strategies and vision.

  • Tracking emerging technologies and trends in data engineering, and incorporating modern tooling and best practices at Craft.

  • Working on extensible data processing systems that make it easy to add and scale pipelines.

  • Applying machine learning techniques such as anomaly detection, clustering, regression, classification, and summarization to extract value from our datasets.

What We're Looking For:

  • 4+ years of experience in Data Engineering.

  • 4+ years of experience with Python.

  • Experience in developing, maintaining, and ensuring the reliability, scalability, fault tolerance, and observability of data pipelines in a production environment.

  • Strong knowledge of SDLC and solid software engineering practices.

  • Knowledge and experience with Amazon Web Services (AWS) and Databricks (nice to have).

  • Demonstrated curiosity: asking questions, digging into new technologies, and always looking to grow.

  • Strong problem-solving skills and the ability to communicate ideas effectively.

  • Familiarity with the infrastructure-as-code approach.

  • A self-starter who works independently and takes initiative.

  • Fundamental knowledge of data engineering techniques: ETL/ELT, batch and streaming, data warehouses, data lakes, and distributed processing.

  • Familiarity with at least some technologies in our current tech stack:

    • Python, PySpark, Pandas, SQL (PostgreSQL), ElasticSearch, Airflow, Docker

    • Databricks, AWS (S3, Batch, Athena, RDS, DynamoDB, Glue, ECS, Amazon Neptune)

    • CircleCI, GitHub, Terraform

What We Offer:

  • Option to work as a B2B contractor or full-time employee

  • Competitive salary at a well-funded, fast-growing startup

  • PTO days so you can take the time you need to refresh!

    • Full-time employees: 28 PTO days allotted + paid public holidays

    • B2B contractors: 15 PTO days allotted + paid public holidays

  • 100% remote work (or hybrid if you prefer; we have a coworking space in the center of Warsaw.)

A Note to Candidates:

We are an equal opportunity employer who values and encourages diversity, equity and belonging at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, caste, or disability status.

Don’t meet every requirement? Studies have shown that women, communities of color and historically underrepresented talent are less likely to apply to jobs unless they meet every single qualification. At Craft, we are dedicated to building a diverse, inclusive and authentic workplace, so if you’re excited about this role but your past experience doesn’t align perfectly with every qualification in the job description, we strongly encourage you to apply. You may be just the right candidate for this or other roles!

What You Should Know About Sr. Data Engineer (Poland), Craft Machine Inc

At Craft, we’re on the lookout for a Sr. Data Engineer to join our talented team in Poland! As the leading provider of supplier risk intelligence, we empower enterprises by providing insightful data on millions of global companies. If you're passionate about harnessing data to drive innovation, you’ll thrive in our dynamic environment. In this role, you’ll not only build and optimize data pipelines—both batch and streaming—but you’ll also dive deep into analyzing structured and unstructured datasets. Your expertise in Python, along with experience in data lakes and warehousing or streaming data, will enable you to contribute significantly to our key products. At Craft, we believe in the importance of growth, and as a core member of our team, you’ll have a prominent voice in how we engineer our solutions. You will engage with the latest technologies, apply machine learning techniques, and participate in shaping our data strategies. Whether your focus is on architecture or hands-on development, you'll find opportunities to make a real impact within our company. Plus, we offer flexibility with fully remote or hybrid work options, competitive salaries, and generous PTO to recharge! We value diversity and encourage individuals from all backgrounds to apply. Join us to revolutionize Enterprise Intelligence!

Frequently Asked Questions (FAQs) for Sr. Data Engineer (Poland) Role at Craft Machine Inc
What responsibilities does a Sr. Data Engineer have at Craft?

As a Sr. Data Engineer at Craft, your primary responsibilities will include building and optimizing data pipelines for both batch and streaming data. You'll extract, analyze, and model diverse datasets while also ensuring the reliability and scalability of these data processes in production. Moreover, you will support the strategic data vision of Craft, leverage machine learning techniques, and actively incorporate modern tooling and best practices.

What qualifications are required for the Sr. Data Engineer position at Craft?

Candidates for the Sr. Data Engineer role at Craft should have a minimum of 4 years of experience in data engineering, along with a strong proficiency in Python. Familiarity with data processing concepts like ETL/ELT, batch and streaming, and a good understanding of relevant technologies like AWS and Databricks is essential. Curiosity and strong problem-solving skills are highly valued, as our team thrives on innovation.

What tools and technologies does the Sr. Data Engineer role at Craft involve?

At Craft, a Sr. Data Engineer will work with a variety of tools and technologies including Python, PySpark, Pandas, SQL (PostgreSQL), ElasticSearch, and Airflow. Exposure to cloud services such as AWS (S3, Batch, Athena, and others) and infrastructure-as-code tools like Terraform will also be beneficial for this role. The technology stack is constantly evolving, so maintaining an up-to-date knowledge of emerging trends is crucial.

What is the work culture like for a Sr. Data Engineer at Craft?

Craft fosters a culture of innovation, collaboration, and inclusivity. As a Sr. Data Engineer, you’ll work in an environment that values significant input from its engineers, offering you a chance to shape solutions and suggest enhancements. We support remote and hybrid work options to promote work-life balance, and we are committed to diversity within our teams, ensuring every voice is heard.

What benefits does Craft offer to its Sr. Data Engineers?

Craft offers competitive salaries, flexible working arrangements, and generous PTO policies for both full-time employees and B2B contractors. Full-time team members enjoy 28 PTO days plus paid public holidays, while contractors receive 15 PTO days and paid public holidays. Additionally, we provide opportunities for professional growth and development, ensuring our engineers are well-supported in their career paths.

Common Interview Questions for Sr. Data Engineer (Poland)
Can you describe your experience with building data pipelines?

When answering this question, provide specific examples of projects where you successfully built or optimized data pipelines. Explain the technologies you used, any challenges you faced, and how you ensured reliability and performance in production environments.

What techniques do you use for data validation in ETL processes?

Outline the methods you implement for data validation, such as checksum comparisons, range checks, and error handling strategies. Emphasize how these techniques help maintain data integrity throughout the ETL process.
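A couple of these checks can be sketched in a few lines of Python. The field names and bounds below are illustrative, not Craft's actual schema:

```python
def validate_row(row: dict) -> list[str]:
    """Return the validation errors for one record (empty list means valid)."""
    errors = []
    # Presence check: required fields must be non-empty.
    for field in ("company_id", "country"):
        if not row.get(field):
            errors.append(f"missing {field}")
    # Range check: employee counts must be non-negative when present.
    employees = row.get("employees")
    if employees is not None and employees < 0:
        errors.append("employees out of range")
    return errors

good = {"company_id": "c-1", "country": "PL", "employees": 120}
bad = {"company_id": "", "country": "PL", "employees": -5}
assert validate_row(good) == []
assert validate_row(bad) == ["missing company_id", "employees out of range"]
```

Collecting errors rather than raising on the first one lets an ETL job quarantine bad rows and report them in bulk instead of failing the whole batch.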

How do you prioritize tasks when managing multiple data engineering projects?

Discuss your approach to prioritization, which might include using project management tools, setting milestones, and communicating with stakeholders to align on goals. Highlight your organizational skills in balancing urgent tasks with long-term project plans.

What are some common challenges you face as a Data Engineer, and how do you overcome them?

Mention specific challenges such as data quality issues, scaling infrastructure, or integrating new technologies. Discuss how you approach problem-solving and provide examples of successful resolutions, showcasing your analytical skills.

Describe a project where you implemented machine learning techniques.

Share a detailed account of a project where you applied machine learning to extract insights from data. Explain your chosen algorithms, the reasoning behind your selection, and the impact of your work on business decisions.

How do you stay current with emerging technologies in data engineering?

Discuss your methods for staying updated on industry trends and technologies, such as attending conferences, participating in online forums, and following influential figures in data engineering. This shows your commitment to continuous learning.

What strategies do you employ for optimizing query performance in SQL?

Share specific strategies such as indexing, query refactoring, and analyzing execution plans. Highlight how you monitor performance using SQL tools and the impact of these strategies on application performance.
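The indexing-plus-execution-plan workflow can be demonstrated end to end with Python's built-in sqlite3 module (a stand-in for the real thing; on Craft's PostgreSQL stack the analogue is EXPLAIN ANALYZE). Table and index names here are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE suppliers (id INTEGER PRIMARY KEY, country TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO suppliers (country, name) VALUES (?, ?)",
    [("PL", f"supplier-{i}") for i in range(1000)],
)

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN reports whether SQLite will scan the table or use an index.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT name FROM suppliers WHERE country = 'PL'"
before = plan(query)   # full table scan
conn.execute("CREATE INDEX idx_suppliers_country ON suppliers (country)")
after = plan(query)    # index search on idx_suppliers_country
```

Comparing the plan before and after adding the index is exactly the kind of evidence worth citing in an answer: it shows you verify optimizations rather than assume them.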

How do you ensure collaboration with other teams while working on data projects?

Explain your experience working with cross-functional teams, emphasizing your communication skills and your approach to gathering requirements, sharing insights, and integrating feedback into your projects.

What design principles do you follow when creating data architectures?

Discuss the key design principles such as scalability, fault tolerance, and modularity that you follow when creating data architectures. Provide examples to illustrate how these principles lead to robust, maintainable systems.

Can you explain the difference between batch and streaming data processing?

Clearly differentiate between batch processing—which processes large volumes of data at once—and streaming processing, which handles real-time data streams. Use examples from your experience to illustrate when each method is appropriate.
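The contrast can be illustrated in a few lines of Python, with a generator standing in for a real stream consumer (e.g. a Kafka or Kinesis reader); the numbers are arbitrary sample events:

```python
from typing import Iterable, Iterator

def batch_total(records: list[int]) -> int:
    """Batch: the full dataset is materialized first, then processed in one pass."""
    return sum(records)

def streaming_totals(records: Iterable[int]) -> Iterator[int]:
    """Streaming: each record updates a running result as it arrives."""
    running = 0
    for record in records:
        running += record
        yield running  # an up-to-date answer after every event

events = [3, 1, 4, 1, 5]
assert batch_total(events) == 14                              # one answer, at the end
assert list(streaming_totals(iter(events))) == [3, 4, 8, 9, 14]  # an answer per event
```

The trade-off in miniature: batch gives a complete, simple-to-reason-about result at some latency; streaming gives continuously fresh partial results at the cost of managing state between events.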

Employment type: Full-time, remote
Date posted: March 16, 2025
