Databricks Developer
Chicago, IL
$60 - $70/hr
Full-time
Posted 3h ago
Job Description
Role & Responsibilities
We are seeking a capable data engineering professional to contribute to data-driven initiatives within the healthcare sector. This position centers on designing, building, and optimizing scalable data solutions that support analytics, reporting, and advanced data use cases in regulated environments.
- Design, develop, and maintain scalable data pipelines using Databricks (PySpark) and Python.
- Build and optimize ETL and ELT processes within Azure cloud environments.
- Implement data models following modern Data Lakehouse principles, including the Medallion architecture (see the layered pipeline sketch after this list).
- Ensure data quality, consistency, and performance across ingestion, staging, and curated layers.
- Collaborate with data architects, analysts, and business stakeholders to translate healthcare data requirements into technical solutions.
- Develop reusable data transformation logic and modular processing components.
- Support deployment processes in line with CI/CD and DevOps best practices.
- Monitor and optimize data workflows for performance, scalability, and reliability.
- Contribute to data governance, security, and compliance practices relevant to healthcare environments.
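To make the layered approach concrete, here is a minimal sketch of a Medallion-style pipeline on Databricks. It assumes Delta Lake and pre-created bronze/silver/gold schemas; the table names, paths, and claims columns are hypothetical illustrations, not part of this role's actual codebase.

```python
# Minimal Medallion-style pipeline sketch (PySpark on Databricks).
# Assumes Delta Lake and existing bronze/silver/gold schemas;
# table names, paths, and the claims columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw files as-is, adding only ingestion metadata.
bronze = (spark.read.format("json")
          .load("/mnt/raw/claims/")                  # hypothetical landing path
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").saveAsTable("bronze.claims")

# Silver: cleanse and conform -- deduplicate, cast types, drop bad rows.
silver = (spark.table("bronze.claims")
          .dropDuplicates(["claim_id"])
          .withColumn("claim_amount", F.col("claim_amount").cast("decimal(12,2)"))
          .filter(F.col("claim_id").isNotNull()))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.claims")

# Gold: curated aggregate for analytics and reporting.
gold = (spark.table("silver.claims")
        .groupBy("provider_id")
        .agg(F.sum("claim_amount").alias("total_claims")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.claims_by_provider")
```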
Hard Skills – Must Have
- Up-to-date knowledge of modern data tools such as Databricks, Fivetran, Data Fabric, and other related platforms.
- Core experience with data architecture, data integrations, data warehousing, and ETL/ELT processes.
- Hands-on experience developing and deploying custom wheel packages or in-session notebook scripts for execution across parallel executors and worker nodes.
- Applied experience in SQL, stored procedures, and PySpark based on area of data platform specialization.
- Strong knowledge of cloud and hybrid relational database systems, such as MS SQL Server, PostgreSQL, Oracle, Azure SQL, AWS RDS, Aurora, or a comparable engine.
- Solid experience with batch and streaming data processing techniques and file compaction strategies (a compaction sketch follows this list).
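As a concrete example of file compaction on Delta Lake, the sketch below shows two common approaches: the Databricks OPTIMIZE command and a dataChange=false rewrite. The table name, path, and target file count are hypothetical; `spark` is the ambient SparkSession in a Databricks notebook.

```python
# File-compaction sketch for a Delta table. Table name, path, and the
# target file count are hypothetical; OPTIMIZE/ZORDER require Databricks
# (or a Delta Lake build that supports them).
spark.sql("OPTIMIZE silver.claims ZORDER BY (provider_id)")

# Alternative: rewrite small files in place without changing the data.
path = "/mnt/delta/silver/claims"        # hypothetical table location
(spark.read.format("delta").load(path)
 .repartition(64)                        # assumed target file count
 .write.format("delta")
 .option("dataChange", "false")          # marks the write as compaction-only
 .mode("overwrite")
 .save(path))
```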
Hard Skills – Nice to Have
- Strong hands-on experience with Databricks in Azure environments.
- Advanced proficiency in Python and PySpark for distributed data processing.
- Experience building and optimizing data pipelines in Azure (Azure Data Factory, Azure SQL, Data Lake Storage, etc.).
- Solid understanding of data warehousing, data lakehouse concepts, and ETL/ELT frameworks.
- Experience working with relational databases such as SQL Server, PostgreSQL, Oracle, or similar.
- Knowledge of batch and streaming data processing patterns (a streaming sketch follows this list).
- Experience working with large, complex datasets in cloud-based distributed environments.
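Where streaming patterns come up on Databricks, a typical approach is Structured Streaming with Auto Loader. The sketch below is illustrative only: the paths and checkpoint location are hypothetical, and the cloudFiles source is Databricks-specific, not part of open-source Spark.

```python
# Streaming-ingestion sketch (Structured Streaming with Auto Loader).
# Paths and checkpoint location are hypothetical; `cloudFiles` is the
# Databricks Auto Loader source and is not part of open-source Spark.
stream = (spark.readStream.format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", "/mnt/schemas/claims_stream/")
          .load("/mnt/raw/claims_stream/"))

(stream.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/claims_stream/")
 .trigger(availableNow=True)             # process available data, then stop
 .toTable("bronze.claims_stream"))
```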
Soft Skills / Business Specific Skills
- Strong analytical and problem-solving abilities.
- Ability to work effectively in cross-functional and distributed teams.
- Clear communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Proactive mindset with a strong sense of ownership.
- Commitment to delivering high-quality, reliable data solutions.
Location
Either Chicago, IL, or Cape Girardeau, MO
Compensation and Work Setup
Pay: $60.00 - $70.00 per hour
Work Location: Hybrid remote in Chicago, IL 60617