Data and AI Engineer
Updated on 10.03.2026
Profile
Freelancer / Self-employed
Remote work
Available from: 01.04.2026
Availability: 100%
of which on-site: 100%
Python
dbt
Kubernetes
Airflow
Kafka
AWS
GCP
Azure
Databricks
Snowflake
Terraform
BigQuery
RAG
SRE
MLOps
Grafana
Prometheus
Apache Spark
Flink
ClickHouse
Data Lake
Apache Iceberg
Data Lakehouse
English
Fluent
German
Intermediate

Work Locations

Germany, Switzerland, Austria
possible

Projects

3 months
2026-01 - now

Built and operationalized Trino observability

Data Platform Engineer
  • Built and operationalized Trino observability (Grafana dashboards, alert rules, SLOs/error budgets), improving detection and triage for platform incidents. 
  • Integrated MCP-based operational tooling (incident context enrichment, runbook linking, query failure summaries) reducing manual debugging overhead. 
  • Developed standardized Airflow operators for Spark → S3 → Iceberg ingestion workflows across platform teams.
  • Designing infrastructure and pipelines for ML deployment paths (ALB ingress orchestration → data prep → serving integration); platform owns runtime and delivery.
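The SLO/error-budget mechanics behind the first bullet reduce to a small calculation; as a minimal sketch, the 99.9% target and query counts below are illustrative assumptions, not figures from the project:

```python
# Minimal error-budget calculation for a query-success SLO.
# Target and counts are illustrative, not taken from the project.

def error_budget_remaining(slo_target: float, total: int, failures: int) -> float:
    """Fraction of the error budget still unspent in a rolling window.

    slo_target: e.g. 0.999 means at most 0.1% of queries may fail.
    total:      queries observed in the window.
    failures:   failed queries in the window.
    """
    allowed = (1.0 - slo_target) * total  # failures the budget permits
    if allowed == 0:
        return 1.0 if failures == 0 else 0.0
    return max(0.0, 1.0 - failures / allowed)

# Example: a 99.9% SLO over 100,000 queries allows 100 failures;
# 25 observed failures leaves 75% of the budget.
remaining = error_budget_remaining(0.999, 100_000, 25)
```

Alerting on the burn rate of this budget (rather than raw error counts) is what makes the dashboards actionable.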
Hubert Burda Media (BurdaForward)
5 months
2025-09 - 2026-01

Delivered a production-grade analytics data product on GCP

Data Engineer
  • Delivered a production-grade analytics data product on GCP (BigQuery, dbt, Cloud Composer, GCS, Terraform), migrating legacy on-prem Hive/Oracle pipelines to cloud-native ELT. 
  • Owned end-to-end execution: architecture, IaC, releases, and production operations for high-volume behavioral datasets (millions of daily events; tens of TB raw).
  • Designed late-arrival-safe incremental aggregation logic (watermarks, idempotent upserts, backfill windows) preserving metric correctness. 
  • Stabilized an additional data product during team transition, reducing production noise and improving reliability. 
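The late-arrival-safe pattern above rests on one invariant: recompute a window that reaches back past the watermark by an allowed-lateness margin, and write by key so reruns and backfills change nothing. A minimal sketch; the 2-day margin and column names are assumptions, not project values:

```python
# Sketch of late-arrival-safe incremental aggregation: reprocess a window
# extending back past the watermark, and upsert results by key so that
# retries and backfills are idempotent. Names and the 2-day lateness
# margin are illustrative assumptions.
from datetime import date, timedelta

ALLOWED_LATENESS = timedelta(days=2)

def window_start(watermark: date) -> date:
    """Earliest partition to recompute, covering late-arriving events."""
    return watermark - ALLOWED_LATENESS

def upsert(target: dict, rows: list, key: str = "event_date") -> dict:
    """Idempotent MERGE-style write: same batch applied twice -> same state."""
    for row in rows:
        target[row[key]] = row  # replace-by-key, never append duplicates
    return target

state: dict = {}
batch = [{"event_date": "2026-01-10", "clicks": 42}]
upsert(state, batch)
upsert(state, batch)  # rerunning the same batch leaves state unchanged
```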
PAYBACK
4 months
2025-04 - 2025-07

Implemented a Databricks lakehouse on AWS

Data Engineer
  • Implemented a Databricks lakehouse on AWS (S3 + Delta Lake + Databricks Workflows) supporting analytics and operational reporting for an AI-driven industrial trade platform.
  • Built production Spark pipelines (batch + incremental) using Delta patterns (MERGE/upserts, partitioning, schema evolution).
  • Operationalized jobs with Databricks Workflows (retry policies, parameterization, backfills) introducing reliability guardrails.
  • Implemented CI/CD for Databricks notebooks and jobs via GitHub Actions and IaC patterns, improving deployment safety.
  • Optimized cluster configurations and autoscaling policies for throughput and cost efficiency.
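The Delta MERGE/upsert pattern mentioned above typically boils down to one statement per target table. A hypothetical sketch: the table, source, and key names are placeholders, and on Databricks the generated statement would be submitted via `spark.sql(...)`:

```python
# Rough shape of the Delta Lake MERGE (upsert) pattern referenced above.
# Table and column names are hypothetical placeholders.

def build_merge_sql(target: str, source: str, key: str) -> str:
    """Render an upsert: update matched rows, insert new ones."""
    return f"""MERGE INTO {target} AS t
USING {source} AS s
  ON t.{key} = s.{key}
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *"""

sql = build_merge_sql("orders_silver", "orders_updates", "order_id")
```

With `spark.databricks.delta.schema.autoMerge.enabled` set, `UPDATE SET *` / `INSERT *` also tolerate additive schema evolution.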
Confidential Client (NDA)
5 months
2024-10 - 2025-02

Delivered end-to-end pipelines

Data Engineer
  • Delivered end-to-end pipelines for mobile gaming telemetry into Snowflake at ~50M events/day scale. 
  • Designed ingestion and transformation layers producing analytics-ready marts for product and BI stakeholders. 
  • Implemented data quality checks (freshness, anomaly detection, schema validation) and optimized compute consumption via warehouse tuning. 
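Of the quality checks listed above, freshness is the simplest to illustrate: a table counts as stale when its newest load timestamp lags "now" by more than a threshold. A minimal sketch; the 2-hour threshold is an assumed value, not one from the project:

```python
# Minimal freshness check of the kind listed above.
# The 2-hour staleness threshold is an illustrative assumption.
from datetime import datetime, timedelta, timezone
from typing import Optional

FRESHNESS_THRESHOLD = timedelta(hours=2)

def is_stale(last_loaded_at: datetime, now: Optional[datetime] = None) -> bool:
    """True when the last successful load is older than the threshold."""
    now = now or datetime.now(timezone.utc)
    return (now - last_loaded_at) > FRESHNESS_THRESHOLD

now = datetime(2026, 1, 1, 12, 0, tzinfo=timezone.utc)
fresh = is_stale(datetime(2026, 1, 1, 11, 0, tzinfo=timezone.utc), now)  # 1h behind
stale = is_stale(datetime(2026, 1, 1, 8, 0, tzinfo=timezone.utc), now)   # 4h behind
```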
Finz Games
3 years 4 months
2020-10 - 2024-01

Integrated Kafka Streams with backend services enabling real-time event processing

Senior Software Engineer / Data Platform Engineer
  • Integrated Kafka Streams with backend services enabling real-time event processing. 
  • Built GDPR-compliant PII pseudonymisation microservice protecting data for 5 consumer apps. 
  • Built and managed Databricks workspaces; standardized CI/CD; reduced cluster spin-up time by 30%. 
  • Designed Medallion-style lakehouse (S3 + Delta + Trino) delivering 10x faster analytics queries. 
  • Reduced AWS costs by 20% via Kubernetes autoscaling and spot optimization. 
  • Productionized 3 ML models using Databricks + MLflow; improved inference throughput by 40%. 
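The GDPR pseudonymisation approach above can be illustrated with a keyed hash: deterministic, so joins across datasets still work, but not reversible without the key. A sketch with a placeholder key (a real deployment would load a rotatable key from a secret manager):

```python
# Sketch of deterministic, GDPR-style pseudonymisation: a keyed HMAC maps
# a PII value to a stable token, so downstream joins still work while the
# raw value is never stored. The key below is a placeholder; in production
# it would come from a secret manager.
import hashlib
import hmac

SECRET_KEY = b"placeholder-key"  # assumption: real key lives in a vault

def pseudonymise(value: str) -> str:
    """Return a stable 64-hex-char token for a PII value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

token = pseudonymise("user@example.com")
same = pseudonymise("user@example.com")  # deterministic: same value, same token
```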
Mobimeo GmbH
2 years 7 months
2018-03 - 2020-09

Developed Spark ETL pipelines loading 5TB/day into Snowflake

Freenet Digital GmbH
  • Developed Spark ETL pipelines loading 5TB/day into Snowflake; reduced latency by 50% and compute cost by 30%.
  • Built low-latency FastAPI data-access services handling 5k+ requests/min.
Freenet Digital GmbH
1 year 1 month
2017-03 - 2018-03

Created SQL-based data models in MongoDB and PostgreSQL

Data Engineer - Working Student
  • Created SQL-based data models in MongoDB and PostgreSQL, improving BI query efficiency by 40%.
Atomleap GmbH
2 years 5 months
2014-05 - 2016-09

Built Java/Spring Boot backend supporting large-scale mobile gaming workloads

Software Engineer
Mindstorm Studios
1 year 9 months
2012-09 - 2014-05

Developed modular components

Software Engineer
  • Developed modular components for mobile games under tight release timelines. 
Geniteam Solutions

Education and Training

7 months
2024-03 - 2024-09

Integration Course (Work Study)

Completed intensive German integration and language program (B1/B2 level) prior to re-entering the German job market
VHS Berlin (Volkshochschule)

Competencies

Top-Skills

Python, dbt, Kubernetes, Airflow, Kafka, AWS, GCP, Azure, Databricks, Snowflake, Terraform, BigQuery, RAG, SRE, MLOps, Grafana, Prometheus, Apache Spark, Flink, ClickHouse, Data Lake, Apache Iceberg, Data Lakehouse

Products / Standards / Experience / Methods

Summary

Senior Data Platform / Analytics Engineer delivering production-grade lakehouse and analytics platforms (BigQuery, Snowflake, Databricks, Iceberg, Trino) with strong SRE discipline (SLOs, alerting, runbooks) and automation-first operations (MCP-integrated workflows, agent-assisted debugging). Experienced across GCP and AWS building high-volume event pipelines, scalable data products, and ML-ready infrastructure consumed by multiple teams. 


Core Skills

  • Platforms: BigQuery, Snowflake, Databricks, Athena, Iceberg, Trino, ClickHouse
  • Processing: Spark, Kafka, Airflow, dbt, Flink (exposure)
  • Infrastructure: GCP, AWS, Azure, Kubernetes, Terraform, Docker, CI/CD, GitHub Actions
  • SRE/Observability: Grafana, Prometheus, Loki, Datadog; SLOs, error budgets, runbooks
  • Automation/AI: MCP tooling, LLM agent workflows (Claude/Codex-style), operational summarization

