Spark, Hadoop, Python, SAP Basis, Perl, Spark Streaming, Big Data Architecture, Machine Learning. Spark Developer Certified. Big Data Consulting.
Updated on 02.06.2022
Profile
Freelancer / Self-employed
Available from: 02.06.2022
Availability: 100%
of which on-site: 100%
Java
Apache Kafka
Apache Spark
Apache Hadoop
Python
Kubernetes
Apache Nifi
German
Native speaker
English
Good
Portuguese
Native speaker

Work Locations

Essen (+75km) Ibbenbüren (+75km) Cochem (+75km) Bonn (+75km) Soest (+50km) Darmstadt (+75km) Homburg (Saar) (+50km) Tübingen (+100km) Titisee-Neustadt (+75km) München (+100km) Mindelheim (+100km) Erlangen (+100km) Deggendorf (+75km)
not possible

Projects

8 years 6 months
2015-10 - present

Big Data Consulting

Engineer
Big Data
Germany
3 months
2020-03 - 2020-05

AD-Vantage Program, self-driving car data

Big Data Architect, Data Engineer

Working on the AD-Vantage Program with self-driving car data

  • Developing Data pipelines using Spark and Airflow for self-driving cars

  • Generating Metrics for Geospatial applications

  • Ingesting data into Elasticsearch using Apache Spark

  • Functional Programming with Scala

MapR + OpenShift (500+ nodes)
BMW AG
München
5 months
2020-01 - 2020-05

Create an Azure service for Inferencing at Scale

Big Data Developer
  • Automate Azure Kubernetes clusters deployment

  • Create and deploy Spark jobs with PyTorch + GPUs on Kubernetes

  • Perform GPU inferencing on TBs of data

Azure Cloud
DXC
Stuttgart
1 year 10 months
2018-06 - 2020-03

Data from cars to perform TensorFlow GPU training

Big Data Architect

Working with R&D on data from cars to perform TensorFlow GPU training

  • Developing Data pipelines using Airflow and Apache Spark

  • Architecture for Migration from Mesos to Kubernetes

  • Jenkins pipelines for building Docker images to be used by Mesos on GPU clusters

  • Several infrastructure tasks implemented with Ansible for high availability

  • Architecting the whole platform

Multiple MapR clusters (30+ nodes), NVIDIA GPUs (Tesla), Apache Mesos
Daimler AG
Stuttgart
2 months
2019-10 - 2019-11

Developing crawling pipelines that run on the Microsoft Azure Cloud

Big Data Architect, Data Engineer
  • Developing Data pipelines using Airflow and Azure Cloud

  • Developing the architecture for the data pipelines between on-premise and cloud

  • Writing Spark jobs to clean and aggregate data

Hadoop Cluster + Azure Cloud
s. Oliver Bernd Freier GmbH & Co. KG
Würzburg
10 months
2017-09 - 2018-06

Ingesting huge amounts of data

Big Data Developer, Spark / Kafka Developer, Data Engineer

In this project we ingested huge amounts of data via Kafka into Accumulo. The entire Hadoop environment was Kerberized.

  • Writing Kafka Connectors to ingest Data

  • Kerberizing Applications to Hadoop / Kafka / Kafka Connect

  • Creating statistics plans for RDF4J Query over Accumulo

  • Creating Apache Nifi Workflows

  • Introducing git flow Automation, Continuous Integration and Docker Automation

  • Kafka Connect Setup with Kerberos on Google Kubernetes

  • Writing Java Applications based on RDF (web semantics)

GfK
Nürnberg
6 months
2017-04 - 2017-09

Sizing Hadoop Cluster

Big Data Architect, Data Engineer

In this project I had the role of Hadoop Architect; tasks included sizing the Hadoop cluster, bringing internal clients onto the shared platform, and supporting the different data pipeline flows. All tools were used with a Kerberized Hadoop cluster.

  • Data Migration using Sqoop and Oozie

  • Configuring Hadoop Cluster with Kerberos and Active Directory

  • Implementing Data pipelines using Kylo, Apache Nifi and Talend

  • Deploying Hortonworks Cloud Break into Amazon AWS

  • Apache Storm Streaming implementations

  • Supporting internal clients with streaming and data cleaning operations

  • Hadoop Sizing for On Premise and on Amazon Cloud

Deutsche Bahn
Frankfurt am Main
6 months
2016-10 - 2017-03

Integrate Spark deeper into HBase

Big Data Developer and Architect

In this project the main goal was to integrate Spark more deeply with HBase and to architect a new alerting and computing framework based on Spark Streaming. Every deployment was based on Docker.

  • Creating reports with Spark jobs over historical data

  • Custom Spark data sources for HBase and aggregations for data exploration

Apache HBase with Phoenix JDBC, Apache Ambari / Hortonworks, Apache Spark, Scala and Java, Vert.x server, Docker
Kiwigrid
Dresden
7 months
2016-03 - 2016-09

Designing and implementing Big Data Architecture

Big Data Developer and Architect

This project consisted of designing and implementing a Big Data architecture on Amazon Web Services using telecommunications data. It included geospatial operations on Spark written in Scala and a REST API to Spark. Within this project I was also responsible for the following:

  • Migrating data from AWS Redshift to Spark, which improved speed and decreased cost

  • Using Hadoop within Amazon Web Services to deploy Spark applications

  • Writing geospatial applications in Scala on Spark

  • Working in three-week sprints within an Agile environment

  • Leading Spark training workshops for colleagues

Amazon Redshift, Amazon Elastic MapReduce, Python / REST API / Tornado for the web requests, data processing with Scala using Apache Spark, Parquet and S3 for storage
2 years 11 months
2013-04 - 2016-02

Service availability of the SAP systems

SAP Administrator

Responsible for the service availability of the company's SAP systems. We had more than 200 systems to maintain. Some of the activities I performed were:

  • SAP and Oracle Upgrades

  • SAP OS / HW Migration

  • Automation scripts for system copies.

  • TREX Enterprise Search, ASCS Splits, SAP Security, SSO, SNC, SSFS

  • SAP Fiori with SAP Gateway and SAP Mobile platform.

ZF Friedrichshafen AG
Schweinfurt, Germany
1 year 5 months
2011-11 - 2013-03

Service availability of the SAP systems

SAP Administrator

Responsible for the service availability of the company's SAP systems. We had more than 200 systems to maintain. Some of the activities I performed were:

  • SAP and Oracle Upgrades

  • SAP OS / HW Migration

  • Automation scripts for system copies.

  • TREX Enterprise Search, ASCS Splits, SAP Security, SSO, SNC, SSFS

  • SAP Fiori with SAP Gateway and SAP Mobile platform.

S.Oliver
Würzburg, Germany
5 months
2010-07 - 2010-11

Software Development on Django and MySQL

Software Developer
Django MySQL
Andrade e Almeida
Portugal

Education and Training

2012

Master in Networking and Communication

Instituto Politécnico do Porto

Porto, Portugal

2010

Bachelor in Informatics Engineering

Instituto Politécnico do Porto

Porto, Portugal

Training

2020-05

Microsoft Certified: Azure Fundamentals

2019-08

Data Engineering Nanodegree

2016-10

Functional Programming Principles in Scala on Coursera

2016-04

Big Data Analytics, Fraunhofer IAIS

2016-02

  • Databricks Developer Training for Apache Spark

  • Machine Learning with Big Data by University of California, San Diego on Coursera

  • Hadoop Platform and Application Framework by University of California on Coursera

  • Big Data Analytics by University of California, San Diego on Coursera

2012-04

ITIL Foundation v4

2012-05

  • SAP NetWeaver AS Implementation and Operation I (SAP TADM10)

  • SAP NetWeaver Portal - Implementation and Operation (TEP10)

2013-07

  • SAP Database Administration I (Oracle) (ADM 505)

  • SAP Database Administration II (Oracle) (ADM 506)

2014-08

SAP Active Defense Security (AD680)

2013-03

ABAP Performance Tuning (BC 490)

2014-04

SAP Security Days 2014 (WSECUD)

Position

Competencies

Top-Skills

Java, Apache Kafka, Apache Spark, Apache Hadoop, Python, Kubernetes, Apache Nifi

Products / Standards / Experience / Methods

ASCS
DevOps
Django
Hadoop
Routing
SAP Fiori
SAP Gateway
SAP HW
Flexframe
git
IBM
HADR, TSM
Tornado
Rest APIs
JIRA
ETL
maven
Gradle
Cloud build

Profile

  • I consult on cloud solution architectures. I have over 5 years of experience with the AWS and Azure clouds.

  • I'm a fan of designing self-service systems that let people access data faster; this can only happen with automation.

  • The first page of my CV is an overview; not all projects are listed.

  • For more detailed information, see the additional pages.

Software Skills

  • Scala

  • Java

  • Python

  • Ansible

  • Kubernetes

  • Cloud

  • Linux

  • Docker

Framework Skills

  • Apache Spark

  • Apache Kafka

  • Apache Nifi

  • Apache Airflow

  • Elasticsearch

  • SAP

SAP Skills:

  • RFC

  • SNC

  • Charm

  • Kernel Upgrades

  • EHP Upgrade

  • SSFS

  • SSO

  • HANA

Others:

  • Puppet

  • OpenStack

  • Mesos

  • SAP Basis 

Cloud Technologies:

  • AWS EMR

  • AWS S3

  • AWS Redshift

  • Google App Engine

  • Azure Kubernetes

  • Azure containers

Work Experience

2020-03 - 2020-06

Role: Scala Developer

Customer: BMW AG, München

Tasks:

Creating geospatial reporting for self-driving car data. Spark was used to crunch TBs of data and Elasticsearch to index and visualize the results. All components run on OpenShift and Apache Airflow.

2018-06 - 2020-02

Role: Enterprise Architect

Customer: Daimler AG, Stuttgart

Tasks:

Lead Architect on DevOps Automation.

Skills:

Jenkins, Kubernetes and CI/CD

2019-10 - 2019-11

Role: Big Data Architect & Cloud

Customer: s. Oliver GmbH, Würzburg

Tasks:

Designed and implemented a mixed workload on-premise and in the Azure cloud, based on containers and Spark jobs to perform web crawling.

2017-10 - 2018-06

Role: Spark and Kafka Developer

Customer: GfK, Nürnberg

Tasks:

Designing data pipelines using Confluent Kafka Connect, Apache Spark and Accumulo. CI/CD was used for automation and Kubernetes to run the stack. Introduced git flow as the standard development flow for teams.

2017-04 - 2017-09

Role: Big Data DevOps

Customer: Deutsche Bahn, Frankfurt

Tasks:

In this project I had the role of Big Data Architect; tasks included sizing the Hadoop cluster, bringing internal clients onto the shared platform, and supporting the different data pipeline flows. All tools were used with a Kerberized Hadoop.

2016-10 - 2017-03

Role: Java developer

Customer: Kiwigrid, Dresden

Tasks:

Developing custom Spark data sources for HBase. Integrating Spark jobs on a Vert.x cluster. Designing a warehouse for historical data. Migrating data from MySQL to HBase as time series.

2016-03 - 2016-09

Role: Big Data Developer

Customer: HERE Maps (ex-Nokia), Berlin

Tasks:

Designing and implementing a Big Data architecture on Amazon Web Services (AWS) using telecommunications data. The project included geospatial operations on Spark written in Scala and a REST API to Spark.

Operating Systems

AIX
Ubuntu
CentOS
Mac OS X
Windows Server 2008 R2
VMware ESXi Server

Programming Languages

Java
MapReduce, Spark
Python
Shell Script
Perl
PHP
HTML
JavaScript (jQuery)

Databases

DB2
MySQL
Oracle
Oracle 11
SAP MaxDB

Industries

  • Automotive

  • Media
