This job posting has been archived and is no longer available.
Current job openings can be found under Projects.

Big Data Engineer - Hadoop/Spark/Kafka

Posted by Orcan Intelligence

Required skills: Engineer, Python, Client, Linux

Project description

Big Data Engineer - Hadoop/Spark/Kafka

For our prestigious client, we are currently looking for a Big Data Engineer with experience in Hadoop/Spark/Kafka.

TO QUALIFY FOR THE ROLE YOU MUST HAVE:

- Minimum 4 years of relevant DevOps and data-wrangling experience in a (big) data environment
- Experience with the Hadoop big data ecosystem (hands-on with some of its components):
- Storage: HDFS, MongoDB, PostgreSQL, HBase, Cassandra
- Tools: Kafka, Mesos, Docker, Spark, Hive, YARN
- Programming knowledge in Scala and/or Python is a plus
- Excellent knowledge of Linux environments
- Knowledge of continuous development/integration pipelines, including rules to test/validate code (Git, Jenkins, test frameworks)

TASKS & RESPONSIBILITIES:

- Build data pipelines starting from an RDBMS: capture change events, transfer them into a Kafka broker, consume the events from the cluster with Spark and Spark Streaming, generate metadata tables in the Hive metastore, and generate data marts exposed via Solr, HBase and Impala
- CDC and stream processing inside the Hadoop stack
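In essence, the pipeline described above applies an ordered stream of change-data-capture (CDC) events to downstream tables. A minimal, framework-free Python sketch of that apply step (the event shape and function name are illustrative only; in the actual role the events would arrive via a Kafka topic and be processed with Spark Streaming):

```python
# Apply an ordered stream of CDC (change data capture) events to an
# in-memory "data mart" keyed by primary key. This only illustrates the
# core apply logic, not the Kafka/Spark plumbing around it.

def apply_cdc_events(events):
    """Fold insert/update/delete events into a current-state table."""
    table = {}
    for event in events:
        op = event["op"]   # one of "insert", "update", "delete"
        key = event["key"]
        if op in ("insert", "update"):
            table[key] = event["row"]
        elif op == "delete":
            table.pop(key, None)
    return table

events = [
    {"op": "insert", "key": 1, "row": {"name": "alice", "balance": 10}},
    {"op": "insert", "key": 2, "row": {"name": "bob", "balance": 5}},
    {"op": "update", "key": 1, "row": {"name": "alice", "balance": 25}},
    {"op": "delete", "key": 2},
]

state = apply_cdc_events(events)
print(state)  # {1: {'name': 'alice', 'balance': 25}}
```

Because events are applied in order, the resulting table always reflects the latest committed state of the source RDBMS, which is what the downstream data marts expose.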

English-speaking; no other language required.

If you have the required skills and are interested in applying, please send your CV now for immediate consideration.

Project details

  • Location:

    Brussels, Belgium

  • Project start:

    ASAP

  • Project duration:

    6 months + extensions

  • Contract type:

    Contract

  • Professional experience:

    Not specified

Required qualifications

Orcan Intelligence