This job posting has been archived and is no longer available.

Data Engineer with ETL Hadoop/Scala/Spark Banking, Brussels

Posted by Computer Recruitment Services

Required skills: Engineer, SQL, Python, Java

Project description

NK 233 Data Engineer with ETL Hadoop/Scala/Spark Banking, Brussels

You must have the following to be considered for this role.

Experience in ETL
Must know Hadoop/Scala/Spark
Work ratio: > 80 %

The division that ensures the bank's competitiveness by delivering reliable and sustainable IT solutions for the financial securities markets is looking for a data engineer.
Our technical teams deliver new IT solutions and improve existing applications for both our internal and external clients. We deploy changes into the production environment in a controlled and structured way that does not compromise production stability, and we provide application support in production.
Our non-technical people maintain the maturity of IT project delivery through appropriate controls, in line with the group's risk appetite, while reducing development and running costs.

BACKGROUND

Within ADM, the Big Data Analytics team supports the needs for advanced analytics from all the entities of the Euroclear Group. As a competency centre for analytics, the team helps to transform data into insight using techniques such as text mining, process mining, network analytics or predictive modelling.

The team is currently looking for a Data Engineer whose core objectives will be:

Collect, clean, prepare and load the necessary data - structured or unstructured - onto Hadoop, our Big Data analytics platform, so that they can be used by the data scientists to create insights and answer business challenges

Act as a liaison between the team and other stakeholders, whether in ADM or in CT, and contribute to supporting the Hadoop cluster and maintaining the compatibility of the various software packages that run on the platform (Spark, R, Python, etc.)

Experiment with new tools and technologies related to data extraction, exploration or processing (e.g. OCR engines)

Depending on their skills, the new data engineer may also be involved in the analytical aspects of data science projects
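The collect-clean-prepare-load flow described above can be sketched in miniature. This is a purely illustrative Python example (Python being one of the languages the ad names); the record layout, field names and values are invented for illustration and bear no relation to Euroclear's actual data or pipelines:

```python
# Toy "extract, prepare, load" sketch: raw CSV-like rows are cleansed
# (blank or malformed rows dropped), restructured into records, and
# aggregated per account. All names and values here are hypothetical.
from collections import defaultdict

raw_rows = [
    "ACC-1, 100.0",
    "ACC-2, 250.5",
    "",                      # blank line: dropped during cleansing
    "ACC-1, 50.0",
    "ACC-3, not-a-number",   # malformed amount: dropped during cleansing
]

def parse(row):
    """Cleanse + restructure: return an (account, amount) record, or None."""
    parts = [p.strip() for p in row.split(",")]
    if len(parts) != 2 or not parts[0]:
        return None
    try:
        return parts[0], float(parts[1])
    except ValueError:
        return None

trades = [t for t in map(parse, raw_rows) if t is not None]

# Aggregate: total amount per account -- the kind of prepared summary
# a data scientist might consume downstream.
totals = defaultdict(float)
for account, amount in trades:
    totals[account] += amount
```

In a real pipeline the same three phases (cleanse, restructure, aggregate) would run over source-system extracts and the result would be written to Hadoop rather than held in memory.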

Job description

Identify the most appropriate data sources to use for a given purpose and understand their structures and contents, if necessary with the help of SMEs
Extract structured and unstructured data from the source systems (relational databases, data warehouses, document repositories, file systems, etc.), prepare the data (cleanse, restructure, aggregate, etc.) and load it onto Hadoop.
Actively support data scientists in the data exploration and data preparation phases. Where data quality issues are detected, liaise with the data supplier to do root cause analysis
Where a use case is meant to become a production application, contribute to the design, build and launch activities
Ensure the maintenance and support of production applications (watch duty)
Liaise with CT teams to address infrastructure issues and to ensure that the components and software used on the platform are all consistent
Where skills allow, perform advanced data analysis on a selection of business use cases, supported by data scientists

Your profile

Experience with understanding and creating data flows, with data architecture, with ETL/ELT development (MS SQL Server SSIS, DataStage, etc.) and with processing structured and unstructured data
Proven experience with using data stored in RDBMSs and experience or good understanding of NoSQL databases
Ability to write performant SQL statements
Understanding of the Hadoop ecosystem including Hadoop file formats like Parquet and ORC
Very good knowledge of Spark & Scala
Ability to write MapReduce & Spark jobs
Experience with open source technologies used in Big Data analytics, such as Pig, Hive, HBase, Kafka, etc.
Ability to analyze data, to identify issues like gaps and inconsistencies and to do root cause analysis
Experience in working with customers to identify and clarify requirements
Ability to design solutions that are fit for purpose whilst keeping options open for future needs
Strong verbal and written communication skills, good customer relationship skills
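As a purely illustrative sketch of the map-shuffle-reduce pattern behind the "MapReduce & Spark jobs" requirement, here is the canonical word-count example in plain Python. A real MapReduce or Spark job distributes each phase across the cluster; this local version shows only the shape of the data flow:

```python
# Minimal map -> shuffle -> reduce sketch of the classic word count.
# Each phase runs locally here; in Hadoop/Spark the same phases run
# in parallel over partitions of the input.
from collections import defaultdict

documents = ["to be or not to be", "to do is to be"]

# Map phase: emit a (word, 1) pair for every word in every record.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group all emitted values by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: sum the counts per key.
counts = {word: sum(values) for word, values in grouped.items()}
```

In Spark the same computation collapses to a `flatMap`/`map`/`reduceByKey` chain, with the shuffle handled by the framework.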

The following will be considered assets

Knowledge of Cloudera
Experience with Linux and Shell Scripting
Knowledge of Java
Knowledge of IBM Mainframe and DB2
Knowledge of or experience in classic and new/emerging Business Intelligence methodologies
Knowledge of statistics, data mining, machine learning and predictive modelling, data visualization and information discovery techniques

Location: Brussels

Rate: 400-500 euros per day

Duration: 6 months

Language: English

Start date: ASAP

Project details

  • Location:

    Brussels, Belgium

  • Start date:

    ASAP

  • Duration:

    6 months

  • Contract type:

    Contract

  • Professional experience:

    Not specified

Required qualifications

Computer Recruitment Services