Job Summary
- Design, develop, and maintain ETL (Extract, Transform, Load) pipelines to process structured and unstructured data.
- Build scalable and efficient data architectures to support analytics and reporting needs.
- Optimize data workflows for performance, reliability, and maintainability.
- Integrate data from multiple sources, including databases, APIs, and cloud storage.
- Maintain data quality, integrity, and security across all data platforms.
- Ensure compliance with data privacy regulations and internal governance policies.
- Document data pipelines, processes, and system architecture.
- Implement monitoring and alerting for data pipelines and ETL processes.
- Evaluate and recommend new tools, technologies, and best practices for data engineering.
- Continuously improve data processing and storage strategies for efficiency and scalability.
Job description
About the Role:
We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and architectures. The ideal candidate will transform raw data into structured, high-quality datasets to support analytics, reporting, and data-driven decision-making across the organization.
Key Responsibilities:
1. Data Pipeline & Architecture
2. Data Integration & Management
3. Data Governance & Compliance
4. Innovation & Process Improvement
Preferred Qualifications:
- Experience with real-time data streaming technologies (Kafka, Kinesis, etc.).
- Knowledge of BI and analytics tools (Tableau, Power BI, Looker, etc.).
- Familiarity with containerization and orchestration (Docker, Kubernetes).
- Experience in building automated data quality and validation processes.
- Exposure to machine learning pipelines and data science workflows.
Full Time, Permanent
Software Development
Basic Qualifications
- Any Graduate
Application Date: 2025-11-14 to 2026-02-12