CY iT HR
@cyprusithr · supergroup
Viktoryia
2025-08-01 14:26 UTC
#vacancy #remote #fulltime #openposition #job #jobopening #openvacancy #itcareer #itjob #jobit #development #timspark #Data #DataEngineer #Data_Engineer #Azure #Databricks
Looking for a Data Engineer
Level: Senior
Location: Poland, Cyprus, Serbia
Employment: Full-time, remote
English: B2
Company: Timspark
Contact: @v_vpris
Core infrastructure is built on Azure, with key components in Databricks, Delta Lake, Kubernetes, and Postgres.
What You'll Do:
• Own and maintain all Databricks-based data pipelines, from ingestion to transformation to delivery
• Design and optimize workflows for performance, clarity, and cost — including compute strategy, parallelization, and dependencies
• Help evolve the orchestration layer (e.g., transition workflows to Prefect, Dagster, or similar frameworks running on Kubernetes)
• Contribute to CI/CD processes: build and test pipelines, manage Docker-based execution environments, and support multi-stage deployment flows
• Develop and maintain Docker images used for Databricks jobs, with attention to reproducibility and efficiency
• Coordinate with the MLOps and DevOps engineers on shared infrastructure, compute setup, and deployment mechanics
Must-Have Skills:
• 5+ years of experience in data engineering or data infrastructure roles
• Solid hands-on experience with Azure Databricks, Delta Lake, and Python-based data tooling
• Strong knowledge of Docker, especially in the context of CI/CD and runtime environments
• Experience with data-focused CI/CD pipelines (e.g., GitHub Actions or similar), including testing, promotion, and reproducibility
• Familiarity with modern workflow orchestrators (e.g., Prefect, Dagster, Airflow) and DAG-based execution models
• Solid understanding of staging and production environments, and how to ship safe and testable changes across them
• Proven ability to diagnose and resolve complex issues in distributed data systems
• Clear grasp of the full lifecycle of a pipeline: testing, validation, staging, deployment, and monitoring