Location: Hopkins, MN, USA
Salary: $40.00 - $50.00 USD Hourly

Description

Job Title: Cloud Data Engineer
Location: Remote
Duration: 4+ Months, Contract

As a Healthcare Cloud Data Engineer, you will be responsible for developing and maintaining scalable data pipelines, data lakes, and warehouses within cloud environments. Your primary focus will be on securely and efficiently handling healthcare data. You will work extensively with tools like Azure Data Factory (ADF) for orchestrating data workflows and Databricks for big data processing and analytics. Your role will ensure that the healthcare organization’s data architecture supports real-time data processing, machine learning models, and business intelligence (BI) tools while maintaining compliance with healthcare regulations.

Key Responsibilities

Cloud Data Architecture Design and Development
• Design and implement cloud-based data infrastructure tailored to healthcare organizations, focusing on scalability, security, and performance.
• Build and maintain data lakes and data warehouses for healthcare data, ensuring support for both structured and unstructured data.
• Develop data pipelines using Azure Data Factory (ADF) for ingesting, transforming, and loading (ETL) data from various sources such as Payer modules, Electronic Health Records (EHR), clinical systems, and external healthcare data sources.
• Implement Databricks for large-scale data processing, data engineering, and machine learning workloads, enabling real-time data analytics and advanced insights.

Data Pipelines and Workflow Automation
• Build and maintain automated data pipelines using ADF for moving and transforming healthcare data between cloud storage, databases, and analytics platforms.
• Use Databricks for processing large volumes of healthcare data, including running distributed processing jobs and cleaning, transforming, and enriching data (see the sketch at the end of this posting).
• Monitor, optimize, and troubleshoot data pipelines for performance, reliability, and scalability.

Performance Monitoring and Optimization
• Monitor data pipelines and cloud infrastructure for performance bottlenecks, resource usage, and data errors.
• Optimize workflows, database queries, and big data jobs in Databricks and ADF to ensure efficient system performance with minimal downtime.
• Use monitoring and alerting tools to confirm data pipelines are running as expected and address issues proactively.

Qualifications
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
• 7+ years of proven experience with Azure Data Factory and Databricks for data processing and orchestration in a healthcare context.
• 5+ years of experience with cloud-based big data platforms like Databricks, and expertise in distributed computing frameworks such as Apache Spark.
• 7+ years of proficiency in SQL, Python, and Scala for data manipulation and pipeline development.
• Knowledge of cloud security practices, including encryption, access controls, and auditing in a healthcare environment.
• Strong problem-solving, analytical thinking, and communication skills.

Contact: kgregor@judge.com

This job and many more are available through The Judge Group. Find us on the web at www.judge.com
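
For candidates gauging fit, here is a minimal PySpark sketch of the kind of Databricks cleaning-and-enrichment job referenced under Data Pipelines and Workflow Automation. All paths, table names, and column names (claim_id, member_id, service_date) are hypothetical illustrations, not taken from this posting.

# Minimal PySpark sketch of a Databricks cleaning/enrichment job.
# Paths and columns are hypothetical; on Databricks, `spark` is
# usually predefined, but it is created here for self-containment.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-cleaning").getOrCreate()

# Ingest raw claims data landed by an ADF copy activity (illustrative path).
raw = spark.read.json("/mnt/datalake/raw/claims/")

# Clean and enrich: drop records missing a claim ID, normalize dates,
# and hash the member identifier before it reaches the curated zone.
curated = (
    raw
    .filter(F.col("claim_id").isNotNull())
    .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
    .withColumn("member_id_hash", F.sha2(F.col("member_id"), 256))
    .drop("member_id")
)

# Persist to a Delta table for downstream BI and ML workloads.
curated.write.format("delta").mode("overwrite").saveAsTable("curated.claims")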