
Azure Databricks Engineer (Remote in US)

Company Description

Resultant is a modern consulting firm with a radically different approach to solving problems.

We don’t solve problems for our clients. We solve problems with them.

Through outcomes driven by data analytics, technology solutions, digital transformation, and beyond, our team works with clients in both the public and private sectors to solve their most complex challenges. We start by learning as much as we can about who they are, how they work, and what they’re striving for so we can feel their problems as our own. Partnering with our clients means their desired outcomes are always top of mind, their challenges and strengths guiding our efforts. We build client-focused relationships before we build unique solutions that blaze past expectations.

Originally founded in Indianapolis as KSM Consulting in 2008, Resultant now employs more than 450 team members who operate remotely and from offices around the United States, including Indianapolis, Fort Wayne, and Odon, Indiana; Columbus, Ohio; Lansing, Michigan; Denver, Colorado; Dallas, Texas; and Atlanta, Georgia.

We’re Resultant. Clients partner with us to see a difference. People join us to make one.

Job Description

We are seeking a skilled Azure Databricks Engineer to design, develop, and optimize large-scale data processing systems using Azure Databricks. The ideal candidate will have expertise in Apache Spark, data engineering pipelines, and Azure cloud technologies. You will collaborate with cross-functional teams to build robust, secure, and scalable data solutions.

Typical Duties and Responsibilities:

  • Design and implement data ingestion and transformation pipelines using Azure Databricks and other Azure data services (see the illustrative sketch after this list).
  • Develop ETL/ELT processes for structured, semi-structured, and unstructured data.
  • Optimize and tune Apache Spark jobs for performance and cost efficiency.
  • Build and manage scalable data lakehouse solutions using Delta Lake and Azure Data Lake Storage.
  • Integrate Databricks with Azure Synapse Analytics, Data Factory, and other Azure resources.
  • Implement security best practices: role-based access control, encryption, and data masking.
  • Collaborate with data scientists and analysts to operationalize machine learning models using MLflow.
  • Automate workflows with Databricks Jobs and CI/CD pipelines.
  • Monitor and troubleshoot performance issues in Databricks clusters and Spark applications.
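
For a flavor of this work, here is a minimal, purely illustrative PySpark sketch of an ingestion-to-Delta pipeline of the kind described above. The storage path, table name, and column names are hypothetical placeholders, and the snippet assumes it runs on a Databricks cluster (getOrCreate() simply returns the session Databricks already provides).

    # Minimal ingestion + transformation sketch (hypothetical paths and columns).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Ingest raw CSV files landed in Azure Data Lake Storage Gen2.
    raw = (
        spark.read
        .option("header", "true")
        .csv("abfss://landing@examplestorage.dfs.core.windows.net/orders/")
    )

    # Light transformation: cast the amount column and stamp the load time.
    cleaned = (
        raw.withColumn("order_amount", F.col("order_amount").cast("double"))
           .withColumn("ingested_at", F.current_timestamp())
    )

    # Persist to a Delta table that downstream Synapse or BI workloads can query.
    cleaned.write.format("delta").mode("append").saveAsTable("bronze.orders")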

Qualifications

We require 2+ years of experience in the following areas:

  • Hands-on experience with Azure Databricks and Apache Spark for large-scale data processing.
  • Strong programming skills in Python, Scala, and SQL.
  • Expertise in Azure Data Lake, Blob Storage, Azure Synapse, and Azure Data Factory.
  • Hands-on experience with CDC tools and frameworks such as Debezium, SQL Server CDC, or similar technologies.
  • Expertise in configuring and managing CDC pipelines within the Azure cloud.
  • Experience with Delta Lake architecture for data reliability and performance.
  • Knowledge of Spark job performance tuning and optimization strategies.
  • Strong understanding of data security and governance in cloud environments.
  • Experience with CI/CD for Databricks and Infrastructure as Code (Terraform, ARM templates).
  • Excellent problem-solving skills and ability to work in a collaborative team environment.

Education and Certification:

  • Bachelor’s degree in IT or a related field is highly preferred.
  • Databricks Certified Associate Developer for Apache Spark or related certification is preferred.
  • Microsoft Azure Data Engineer Associate Certification is preferred.
  • Must be legally authorized to work in the United States for any employer without sponsorship.

Additional Information

What you should know about us: 

  • We are humble, hungry, and smart. We solve big problems, serve lots of clients, and are entirely committed to delivering transformative outcomes. 
  • We are team players, deeply dedicated to the mission of the organization, and to helping everyone around us be successful. 
  • We compensate well, rewarding performance that delivers positive outcomes for our clients and ensuring incentives are aligned to achieve our goals. 
  • Our leaders work hard, serving as shining examples of what it means to live out our values. They are servant leaders, helping their teams to be successful in all possible ways. 
  • We have a great benefits package including unlimited vacation, significant 401k contributions, and several opportunities to develop yourself. 
  • We pride ourselves on having the best talent in the industry and hope that you're up for the challenge! 

What our team members say about us:

  • "I love our true empathy and concern for our clients, it's very rare and appreciated. It is a pleasure to be a part of an organization like this."
  • "I learn something new every single day, and I feel like I'm a part of building an organization that has legs. I appreciate that I'm consistently humbled by the talent and caliber of our team."
  • "The culture of the company is amazing, and the climate of my team is great. The benefits that employees are offered are better than competitors, and the one-on-one presence that my team lead gives is extremely beneficial to me."

All qualified applicants will receive consideration for employment without regard to age, color, sex, disability, national origin, race, religion, or veteran status.

Equal Opportunity Employer

Resultant Glassdoor company rating: 3.6 out of 5
Resultant DE&I rating: 3.64 out of 5
CEO of Resultant: Gregory Layok
Average salary estimate

$95,000 per year (est.), with an estimated range of $80,000 to $110,000

If an employer mentions a salary or salary range on their job, we display it as an "Employer Estimate". If a job has no salary data, Rise displays an estimate if available.

What You Should Know About Azure Databricks Engineer (Remote in US), Resultant

Are you passionate about data and ready to take your career to the next level? At Resultant, we're on the lookout for an Azure Databricks Engineer to join our innovative team. In this remote role, you'll have the unique opportunity to work collaboratively with talented professionals across different teams while shaping the data landscape for a diverse range of clients. Your primary responsibility will be designing, developing, and optimizing large-scale data processing systems using Azure Databricks. You'll engage in the exciting task of implementing data ingestion and transformation pipelines, developing robust ETL processes, and tuning Apache Spark jobs to deliver peak performance. Your experience with Azure Data Lake and Delta Lake architecture will be invaluable as you build and manage secure, scalable data solutions. You'll also have the opportunity to collaborate with data scientists and analysts to operationalize machine learning models and automate workflows using Databricks Jobs. At Resultant, we value our team members, offering a supportive culture and an expansive benefits package that includes unlimited vacation and professional development opportunities. So, if you're looking to make an impact in the world of data and work alongside a dedicated team, we invite you to join us at Resultant!

Frequently Asked Questions (FAQs) for Azure Databricks Engineer (Remote in US) Role at Resultant
What are the responsibilities of the Azure Databricks Engineer at Resultant?

As an Azure Databricks Engineer at Resultant, you'll be responsible for designing and implementing data pipelines using Azure Databricks, optimizing Apache Spark jobs for performance, and collaborating with cross-functional teams. Your role will also involve building scalable data lakehouse solutions, integrating Azure resources, and operationalizing machine learning models.

What qualifications do I need to apply for the Azure Databricks Engineer position at Resultant?

To be considered for the Azure Databricks Engineer position at Resultant, you should have a Bachelor’s degree in IT or a related field, alongside 2+ years of hands-on experience with Azure Databricks and Apache Spark. Familiarity with Azure Data Lake, Delta Lake architecture, and proficiency in programming languages like Python, Scala, and SQL are also key qualifications.

What programming skills are required for the Azure Databricks Engineer role at Resultant?

The Azure Databricks Engineer position at Resultant requires strong programming skills in Python, Scala, and SQL. These languages are essential for developing and optimizing data processing systems and efficiently managing data flows within Azure's ecosystem.

Can I work remotely as an Azure Databricks Engineer at Resultant?

Yes! The Azure Databricks Engineer role at Resultant is a remote position, offering flexibility to work from anywhere in the US. You'll be able to join our dedicated team while enjoying a work-life balance that aligns with your personal and professional commitments.

What benefits does Resultant offer for the Azure Databricks Engineer position?

At Resultant, we pride ourselves on offering a competitive benefits package for our Azure Databricks Engineers, which includes unlimited vacation, significant 401k contributions, opportunities for professional development, and a supportive company culture that encourages collaboration and growth.

Common Interview Questions for Azure Databricks Engineer (Remote in US)
What experience do you have with Azure Databricks and Apache Spark?

When discussing your experience with Azure Databricks and Apache Spark, be specific about the projects you've worked on, the challenges you faced, and how you overcame them. Mention technologies you've used and the impact of your work on data processing and analysis.

How do you optimize Spark jobs for better performance?

To effectively answer this question, discuss techniques such as optimizing query plans, managing data partitioning, and utilizing caching mechanisms. Highlight any hands-on experience you have with performance tuning and the measurable improvements achieved.
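
As a hedged illustration of those techniques, the PySpark fragment below shows repartitioning on a join key, caching a reused dataset, and a broadcast-join hint. The table and column names are made up, and whether any of these choices actually helps depends on the data volumes and cluster configuration at hand.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    orders = spark.table("bronze.orders")        # hypothetical large fact table
    customers = spark.table("bronze.customers")  # hypothetical small dimension

    # Repartition the large side on the join key to reduce shuffle skew.
    orders = orders.repartition(200, "customer_id")

    # Cache a DataFrame that several downstream actions will reuse.
    orders.cache()

    # Hint that the small dimension can be broadcast instead of shuffled.
    enriched = orders.join(F.broadcast(customers), "customer_id")

    enriched.groupBy("customer_id").agg(F.sum("order_amount")).show()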

Can you explain the difference between ETL and ELT?

Clarify that ETL (Extract, Transform, Load) typically involves transforming data before loading it into a data warehouse, while ELT (Extract, Load, Transform) loads raw data first and transforms it afterwards. Provide examples of when each approach is beneficial.
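
A compressed, purely illustrative PySpark contrast of the two patterns (the paths, tables, and columns are placeholders):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # ETL: transform before loading into the curated table.
    raw = spark.read.json("abfss://landing@examplestorage.dfs.core.windows.net/events/")
    curated = raw.filter("event_type IS NOT NULL").dropDuplicates(["event_id"])
    curated.write.format("delta").mode("append").saveAsTable("silver.events")

    # ELT: load the raw data first, then transform inside the platform with SQL.
    raw.write.format("delta").mode("append").saveAsTable("bronze.events_raw")
    spark.sql("""
        CREATE OR REPLACE TABLE silver.events AS
        SELECT DISTINCT event_id, event_type, event_ts
        FROM bronze.events_raw
        WHERE event_type IS NOT NULL
    """)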

Describe your experience with data security best practices in Azure.

Discuss strategies you’ve implemented related to role-based access control, encryption, and data masking to ensure data security within Azure environments. Providing specific scenarios where you have applied these practices will strengthen your response.
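
As one concrete, hedged example of those practices, the Databricks SQL statements below grant read access to an analyst group and expose a masked view built on the is_member() function. The table, view, column, and group names are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Role-based access: grant read-only access to an analyst group.
    spark.sql("GRANT SELECT ON TABLE silver.customers TO `data-analysts`")

    # Dynamic data masking via a view: only a privileged group sees raw emails.
    spark.sql("""
        CREATE OR REPLACE VIEW silver.customers_masked AS
        SELECT
            customer_id,
            CASE WHEN is_member('pii-readers') THEN email ELSE '***MASKED***' END AS email
        FROM silver.customers
    """)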

How do you collaborate with data scientists and analysts?

Share specific methods of collaboration you've used in past roles, such as involving data scientists in the design process of data lakes or working closely to operationalize machine learning models. Highlight the importance of communication and setting shared goals.

What tools do you use for CI/CD in Databricks?

Talk about tools like Azure DevOps or GitHub Actions that facilitate CI/CD in Databricks. Offer examples of how you implemented these processes, including automating deployments and ensuring quality control.
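
As an illustrative sketch only, a CI step (in Azure DevOps, GitHub Actions, or similar) could trigger an existing Databricks job through the Jobs REST API. The workspace URL, token environment variables, and job ID below are assumptions, not values from this posting.

    import os
    import requests

    # Assumed: the CI system injects these as secrets / environment variables.
    host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-1234567890123456.7.azuredatabricks.net
    token = os.environ["DATABRICKS_TOKEN"]

    # Trigger an already-defined Databricks job (Jobs API 2.1, run-now endpoint).
    response = requests.post(
        f"{host}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {token}"},
        json={"job_id": 12345},  # hypothetical job ID
        timeout=30,
    )
    response.raise_for_status()
    print("Triggered run:", response.json().get("run_id"))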

How familiar are you with Azure Data Lake and Blob Storage?

Clarify your familiarity with Azure Data Lake and Blob Storage in terms of use cases, best practices, and how you've integrated them into your data processing workflows. Providing examples will demonstrate your depth of understanding.
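
A minimal, hedged example of pointing Spark at both storage types from Databricks; the storage accounts, containers, and folders are placeholders, and authentication (for example a service principal or credential passthrough configured on the cluster) is assumed to be in place.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # ADLS Gen2 path (hierarchical namespace) using the abfss:// scheme.
    adls_df = spark.read.parquet(
        "abfss://raw@examplestorage.dfs.core.windows.net/sales/2024/"
    )

    # Classic Blob Storage path using the wasbs:// scheme.
    blob_df = spark.read.csv(
        "wasbs://exports@examplestorage.blob.core.windows.net/customers/",
        header=True,
    )

    adls_df.printSchema()
    blob_df.printSchema()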

What is Delta Lake, and how does it improve data reliability?

Explain that Delta Lake enhances reliability through transaction logs, ACID compliance, and scalable metadata handling, making data processing more efficient and reliable. Citing experience with Delta Lake can provide a practical viewpoint.
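
To make that concrete, here is a short, hedged sketch of two Delta features that underpin this reliability, the transaction history and time travel; the table name is a placeholder.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Inspect the Delta transaction log (one row per committed operation).
    spark.sql("DESCRIBE HISTORY silver.events").show(truncate=False)

    # Time travel: query an earlier version of the table for audits or rollback checks.
    previous = spark.sql("SELECT * FROM silver.events VERSION AS OF 0")
    print(previous.count())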

How would you handle performance issues in Databricks clusters?

Mention your approach to diagnosing performance issues, including the use of metrics, logs, and specific optimization techniques. Be prepared to discuss any direct experiences with resolving performance bottlenecks.

What are your strongest technical skills relevant to the Azure Databricks Engineer role?

Highlight your strongest technical skills, such as proficiency in data architecture, Azure Databricks, Apache Spark, and programming languages like Python and SQL. Provide context about how you’ve applied these skills in relevant projects.


Our mission is to help clients, coworkers, and communities thrive.

Employment type: Full-time, remote
Date posted: March 21, 2025
