DevOps & Data Engineering Platform Engineer
Job Description
RESPONSIBILITIES
Platform & DevOps Engineering
Deploy and manage Kubernetes clusters using Rancher
Configure Kubernetes networking components (e.g., Calico)
Automate deployments using Helm and GitLab CI/CD
Provision infrastructure using Terraform
Develop automation scripts, agents, or background daemons
Build and support internal GUIs for control plane and monitoring
Ensure secure, scalable, and highly available platform operations
Maintain and optimize Ceph distributed storage
Implement observability tooling (Prometheus, Grafana, ELK)
Data Engineering & Data Lake Operations
Operate and support Big Data components: Spark, PySpark/Scala, Spark Streaming, Airflow, NiFi, Trino, Kafka
Support Zeppelin and Jupyter notebook environments
Maintain complex ingestion, transformation, and streaming pipelines
Troubleshoot distributed processing and workflow orchestration
Optimize compute and storage utilization across the platform
Security & Governance
Integrate with Keycloak, RBAC, certificates, and secrets management
Apply DevSecOps practices, cluster hardening, and secure deployments
Validate the implementation of platform security mechanisms
Process & Collaboration
Maintain well-documented workflows and processes using Confluence and Jira
Work within Agile methodologies to ensure efficient planning, tracking, and delivery
REQUIREMENTS
Core Technical Skills
Strong experience with:
Kubernetes, Rancher, Helm
GitLab CI/CD, Terraform, GitOps workflows
Linux administration & Shell scripting
Experience with Big Data technologies: Spark, Airflow, NiFi, Kafka, Trino, Zeppelin
Understanding of Data Lake architecture and data ingestion patterns
Python or Scala for automation and data processing
Experience with Ceph or similar distributed storage systems
Additional Skills (Nice to Have)
Development of internal tools, background agents, or daemons
Building lightweight GUIs for automation or monitoring
Experience with Jupyter
Knowledge of HashiCorp Vault or Consul
Experience with streaming workloads (Spark Streaming, Kafka Streams)
Location: Belgrade, Serbia
Remote status: Hybrid
About Modirum Platforms
We secure our future society and optimize critical services with deep technology.