As a Data Engineer, you will be responsible for building data pipelines and supporting data architecture. You will join a fast-growing team of consultants and data specialists to bring cutting-edge technology solutions to our clients. You will design architectures for data integration and processing to deliver high-quality datasets, and use Big Data processing tools to build data pipelines on modern technology stacks such as Azure, AWS, or GCP. You will also set up and implement data management frameworks across client organizations.
In this role you will take responsibility for project scoping, data collection, extraction, processing, cleaning, and enrichment to support the team in the conceptual development and implementation of data platforms. Your area of expertise covers the creation of complete, end-to-end data pipelines: from the import of structured and unstructured raw data, through its processing, transformation, and metadata management, to the implementation of scalable analytics platforms.
Further, you will apply your understanding of analytical methodologies and the related infrastructure requirements, collaborating continuously with our team of Data Analytics Consultants and Data Scientists to develop state-of-the-art data-based solutions for our clients. On a day-to-day basis you will work with state-of-the-art technology and always apply a solution- and goal-oriented approach.
You have an M.Sc. or Bachelor's degree in a quantitative field (Information Systems, Computer Science, Physics, Mathematics, etc.) and demonstrated industry experience in data engineering (4 to 5 years).
Core Technical Requirements:
- Database Technologies – Knowledge of working with data models and databases (including relational, document-based, and key-value stores; e.g., Postgres, SQL Server, Cassandra, Redis)
- ETL Technologies – Experience in data extraction/ingestion, processing, and loading (e.g., Informatica, Talend, Matillion)
- Cloud Technologies – Experience working on cloud infrastructures and cloud-based analytics systems (e.g., AWS, Azure, Google Cloud Platform)
- Big Data Technologies – Experience in handling Big Data (e.g., the Apache Hadoop ecosystem (Hive, HBase, Impala, Oozie, etc.), Apache Spark)
- SAP Technologies – Experience working with various SAP technologies (SAP S/4HANA, SAP BW, SAP Fiori, etc.)
- Streaming Technologies - Understanding of event-driven and log-structured architectures (Kafka, Flink, Kinesis), streaming and batch processing methods and data transfer protocols
- Programming Languages - Strong coding knowledge in common programming and query languages (Java, Scala, SQL, Python)
- Data Management – Experience in defining and implementing data management frameworks (including data governance, metadata management, data lifecycle etc.)
- Version Control Technologies – Experience working with version control systems (e.g., Git, SVN)
- SDLC Models – Knowledge of software development lifecycle models and practices (e.g., Waterfall, Agile, CI/CD)
- Containerization - Experience with container-based infrastructures (Docker/Kubernetes)
- API Development – Experience in the development of API services (REST, e.g., with Flask)
- Data Science Libraries – Basic knowledge of common libraries and frameworks (NumPy, pandas, scikit-learn, etc.)
- Best Practices – Demonstrable knowledge of Data Engineering best practices (coding standards, testing (unit, system, acceptance, black-box, etc.), code review)
What We Offer:
- An exciting international environment of projects, customers, and consultants
- Extensive training opportunities in data analytics and data engineering
- Competitive salary
In case of any technical issues or problems submitting your application, please contact: Dennis Reck (+49 89 9230-9127) or Isabell Schönemann (+49 89 9230-9583).