Job Description
Sr Data Engineer
Company: Becton, Dickinson and Company (BD)
---
Job Summary
As a Sr Data Engineer at BD, you will be instrumental in designing, building, and maintaining robust and scalable data pipelines and infrastructure. You will apply your expertise to extract, transform, and load critical data from various sources, ensuring high data quality and accessibility for analytics, reporting, and machine learning initiatives across the organization.
---
Job Responsibilities
* Design, develop, and optimize scalable data pipelines (batch and real-time) using various data processing technologies to support analytical and operational needs.
* Implement and maintain data warehousing and data lake solutions, ensuring data integrity, security, and performance.
* Collaborate with data scientists, analysts, and other engineering teams to understand data requirements and translate them into technical specifications.
* Develop and implement data governance standards, including data quality checks, metadata management, and data lineage tracking.
* Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement solutions for continuous improvement.
* Evaluate and recommend new data technologies and tools to enhance BD's data platform capabilities.
* Automate data-related processes, including data ingestion, transformation, and validation.
* Participate in code reviews, provide technical guidance, and mentor junior engineers.
* Ensure compliance with data privacy regulations and internal security policies.
---
Job Qualifications
* Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related technical field.
* 7+ years of proven experience in data engineering, with a strong focus on building and maintaining large-scale data platforms.
* Proficiency in at least one major programming language (e.g., Python, Java, Scala) and strong SQL skills.
* Extensive experience with cloud data platforms (e.g., AWS, Azure, GCP) and their relevant data services (e.g., S3, Redshift, Snowflake, Databricks, Azure Data Lake, BigQuery).
* Demonstrated experience with ETL/ELT tools and techniques, and data orchestration platforms (e.g., Airflow, Azure Data Factory).
* Solid understanding of data warehousing concepts, dimensional modeling, and data lake architectures.
* Experience with large-scale data processing technologies such as Spark or Kafka.
* Familiarity with data governance principles, data quality frameworks, and metadata management.
* Excellent problem-solving skills, with the ability to analyze complex data issues and propose effective solutions.
* Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
* Experience with version control systems (e.g., Git) and CI/CD pipelines.
Required Skills
Databricks Delta Live Tables, Databricks Platform, Databricks SQL, Databricks Unity Catalog, Data Engineering, Data Pipelines, Large Scale Data Processing, Microsoft Azure Databricks

Optional Skills
Primary Work Location
IND Bengaluru - Technology Campus

Additional Locations

Work Shift
"Purpose driven company where associates work every day to make healthcare better. A lot of great initiatives going on to make BD the best MedTech company in the world."
– Anonymous, Franklin Lakes, NJ