Cloud-native Data Engineer specializing in scalable data pipelines, data platforms, and AI-driven solutions on GCP and AWS.
Updated on 06.02.2026
Profile
Freelancer / Self-employed
Remote work
Available from: 06.02.2026
Availability: 35%
of which on-site: 30%
Data Engineer
Data Analyst
Cloud Architect
Snowflake
Google Cloud
AWS
Airflow
Data Warehouse
Python
SQL
dbt
Terraform
Docker
Kubernetes
GenAI
Databricks
Kafka
Big Data
Apache Spark

Work Locations

Berlin (+50 km)
Germany, Switzerland, Austria possible

Projects

1 year 1 month
2025-02 - now

Designed and maintained cloud-native data pipelines on Google Cloud Platform

Data Engineer
  • Develop and maintain data pipelines to support ELT of data from diverse sources into the data warehouse.
  • Design and maintain cloud-native data pipelines on Google Cloud Platform for large-scale data processing.
  • Work extensively with BigQuery, Cloud Composer (Airflow), Pub/Sub, Cloud Functions, Workflows, and Cloud Storage.
  • Build event-driven and batch processing pipelines to support analytics and business use cases.
  • Manage and provision data infrastructure using Terraform, following infrastructure-as-code best practices.
  • Solve complex data problems related to data quality, scalability, and performance optimization.
  • Handle very large datasets, ensuring reliable, production-grade data systems.
  • Collaborate with cross-functional teams to translate business requirements into scalable data solutions.
PAYBACK
Berlin, Germany
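The ELT loads described above rely on idempotent merge steps so that replayed batches cannot corrupt warehouse state. A minimal sketch of that upsert logic (illustrative only; the record fields `id` and `updated_at` are hypothetical, and a plain dict stands in for the warehouse table):

```python
from datetime import datetime

def upsert(warehouse: dict, staged: list) -> dict:
    """Idempotently merge staged records into a warehouse table.

    Records are keyed by primary key; on conflict, the record with the
    newer `updated_at` wins, so replaying the same batch is harmless.
    """
    for record in staged:
        current = warehouse.get(record["id"])
        if current is None or record["updated_at"] >= current["updated_at"]:
            warehouse[record["id"]] = record
    return warehouse

# Replaying the same batch twice yields the same result (idempotency).
batch = [
    {"id": 1, "value": "a", "updated_at": datetime(2025, 3, 1)},
    {"id": 1, "value": "b", "updated_at": datetime(2025, 3, 2)},
]
table = {}
upsert(table, batch)
upsert(table, batch)
assert table[1]["value"] == "b"
```

In BigQuery the same effect is typically achieved with a `MERGE` statement against a staging table; the sketch shows only the conflict-resolution rule.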
2 years 11 months
2022-04 - 2025-02

Design and manage models in DBT

Data Engineer
  • Develop and maintain data pipelines to support ETL of data from diverse sources into our data warehouse.
  • Design and manage models in dbt.
  • Work with Docker and Kubernetes for containerization.
  • Manage warehouse utilization with optimized queries.
  • Use AWS for designing and optimizing cloud solutions.
  • Integrate CI/CD practices into the development lifecycle.
  • Write complex queries for business models in SQL/NoSQL.
  • Work with infrastructure-as-code tools such as Terraform.
  • Create an AI chatbot in Streamlit that connects an LLM to Snowflake.
Seven Senders
Berlin, Germany
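The chatbot mentioned above follows a text-to-SQL flow: a user question is turned into SQL by an LLM, and the query runs against the warehouse. A minimal sketch of that flow, with the LLM call stubbed by a fixed mapping and SQLite standing in for the Snowflake connection (all names and queries here are hypothetical):

```python
import sqlite3

def llm_to_sql(question: str) -> str:
    """Stand-in for the LLM call that maps a question to SQL.

    The real chatbot would prompt an LLM with the warehouse schema;
    a fixed template mapping keeps this sketch self-contained.
    """
    templates = {
        "total orders": "SELECT COUNT(*) FROM orders",
    }
    return templates[question.lower()]

def answer(conn: sqlite3.Connection, question: str):
    """Generate SQL for the question and run it on the warehouse."""
    sql = llm_to_sql(question)
    return conn.execute(sql).fetchone()[0]

# SQLite stands in for the Snowflake connection in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?)", [(1,), (2,), (3,)])
assert answer(conn, "Total orders") == 3
```

In production the generated SQL should be validated (e.g. allow-listed tables, read-only role) before execution, since LLM output is untrusted input.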
2 years 1 month
2020-03 - 2022-03

Maintained Data analysis code written in Python

Data Engineer
  • Created and managed data models, ensuring data integrity, consistency, and accuracy.
  • Maintained data analysis code written in Python.
  • Worked with data scientists to assist with feature engineering and data modeling.
  • Utilized PySpark for processing and analyzing big data.
  • Implemented real-time event ingestion using Kafka.
Omnius
Berlin, Germany
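A common pattern in the real-time ingestion described above is micro-batching: a Kafka consumer buffers messages and writes them to the sink in fixed-size batches, trading a little latency for far fewer load operations. A minimal, Kafka-free sketch of that buffering logic (the `sink` callable is a hypothetical stand-in for the warehouse writer):

```python
class MicroBatcher:
    """Buffer streaming events and flush them in fixed-size batches."""

    def __init__(self, batch_size: int, sink):
        self.batch_size = batch_size
        self.sink = sink          # callable that persists one batch
        self.buffer = []

    def on_event(self, event) -> None:
        """Handle one incoming event; flush when the batch is full."""
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Persist and clear any buffered events."""
        if self.buffer:
            self.sink(list(self.buffer))
            self.buffer.clear()

batches = []
b = MicroBatcher(batch_size=3, sink=batches.append)
for e in range(7):
    b.on_event(e)
b.flush()                         # drain the remainder on shutdown
assert batches == [[0, 1, 2], [3, 4, 5], [6]]
```

A production consumer would also flush on a time threshold and commit Kafka offsets only after a successful flush, so no events are lost on failure.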
1 year 7 months
2018-02 - 2019-08

Data handling, migration, and design through effective data modeling

Analyst Programmer
  • Collaborated with the analytics team to build and monitor web applications.
  • Applied Agile methodologies to enhance and implement data solutions in both the e-commerce and financial domains.
  • Managed data quality, ensuring accuracy, consistency, and reliability across diverse datasets to support the creation of data products.
  • Performed data handling, migration, and design through effective data modeling.
Centegy Technologies
Karachi, Pakistan

Education and Training

2019 – 2023

University of Potsdam – Potsdam – Master of Science in Data Science


Major Subjects

  • Data Infrastructures
  • Machine Learning
  • OLTP and OLAP
  • Big Data and Data Governance
  • Statistical Data Analysis and Stochastics


2014 – 2017

Mehran University of Engineering and Technology – Jamshoro – Bachelor of Engineering in Software Engineering


Major Subjects

  • Data Management & Data Warehousing
  • Algorithms, Linear Algebra & Statistics
  • Java


CERTIFICATES

  • Python for Data Science, issued by IBM
  • Data Analysis with Python, issued by IBM
  • Data Visualization with Python, issued by IBM

Skills

Top-Skills

Data Engineer, Data Analyst, Cloud Architect, Snowflake, Google Cloud, AWS, Airflow, Data Warehouse, Python, SQL, dbt, Terraform, Docker, Kubernetes, GenAI, Databricks, Kafka, Big Data, Apache Spark

Products / Standards / Experience / Methods

PROFILE

Data Engineer with 5+ years of experience designing, building, and optimizing scalable data platforms and pipelines for analytics, BI, and AI-driven use cases. Strong expertise in cloud-native data engineering across Google Cloud Platform and AWS, handling large-scale, high-volume datasets. Skilled in ETL/ELT pipeline development, data warehousing, orchestration, infrastructure as code, and performance optimization. Hands-on experience with BigQuery, Airflow, dbt, Pub/Sub, Docker, Kubernetes, Terraform, CI/CD, and distributed data processing. Holds a master's degree in data science with a strong foundation in data pipelines, big data, and applied machine learning. A collaborative problem solver with a strong focus on reliability, scalability, and production-ready data systems.


PROFESSIONAL SKILLS

Data Engineering:

Airflow, Talend Studio, dbt, Looker, Power BI, Streamlit, Jenkins, orchestration tools, Apache Spark, SaaS, Hadoop, Kafka, LLMs


AWS:

S3, EC2, RDS, DynamoDB, IAM, Lambda, Glue, Athena, Kinesis, EMR


GCP:

BigQuery, Pub/Sub, Cloud Functions, Workflows, Cloud Composer, Cloud Build


DevOps:

Git, Bitbucket, CI/CD, Docker, Kubernetes, Lens, Grafana, Terraform, Argo CD, Infrastructure as Code


Warehouses:

Snowflake, BigQuery, Redshift, Data Lake


Services:

Web services, AJAX, REST services, APIs


Data Science:

Pandas, NumPy, Machine Learning Algorithms, Data Analysis, Data Visualization, Data Crawling, NLP


Other:

GCP, Azure, Tableau, Data Ingestion tools, Databricks


PERSONAL SKILLS

Innovative, Teamwork, Interpretive, Communication, Technical Leadership, Creative, Flexible, Problem Solving, Social, Researcher

Programming Languages

Java
Python
Scala

Databases

Oracle
MySQL
PostgreSQL
SQL
NoSQL
MongoDB

