This job posting is archived and no longer available.
Open job postings can be found under Projects.
Big Data Engineer
Posted by Michael Bailey Associates - UK Contracts
Skills sought: Engineer, Python, Client, SQL
Project description
Our client, a global financial services company, is looking for a Big Data Engineer to join them on site in Zurich for an initial 3-month contract.
This position offers extensions, or the option for the role to be internalised.
The candidate will be responsible for:
Collaborating closely with data scientists in the business teams to understand data and functional requirements.
Designing and building data pipelines to ingest, integrate, standardize, clean and publish data.
Integrating analytical models developed by the data scientists into end-to-end data pipelines.
The ideal candidate will have a solid background working with Big Data. This position requires hands-on experience with the following:
Understanding of data modelling and creating data structures on Hadoop.
Experience building data pipelines on Hadoop, ideally the Cloudera distribution, using Spark, Scala, Python and similar languages.
Solid knowledge of SQL, Impala and Hive databases.
Software life cycle experience, i.e. build, deployment, testing, release and maintenance.
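As a rough illustration of the ingest/standardize/clean/publish duties listed above, here is a minimal sketch in plain Python (standing in for a Spark job; the record fields and helper names are hypothetical, not taken from the client's actual stack):

```python
# Toy sketch of the ingest -> standardize -> clean -> publish stages
# described above. Plain Python stands in for Spark; the field names
# ("amount", "currency") are hypothetical examples.

def ingest(raw_rows):
    """Parse raw CSV-like strings into records."""
    for line in raw_rows:
        amount, currency = line.split(",")
        yield {"amount": amount.strip(), "currency": currency.strip().upper()}

def standardize(records):
    """Coerce fields to consistent types; mark unparseable values."""
    for rec in records:
        try:
            rec["amount"] = float(rec["amount"])
        except ValueError:
            rec["amount"] = None
        yield rec

def clean(records):
    """Drop records that failed standardization."""
    return (r for r in records if r["amount"] is not None)

def publish(records):
    """Materialize the final dataset (here: just a list)."""
    return list(records)

result = publish(clean(standardize(ingest(["10.5, chf", "oops, usd", "3, eur"]))))
# result == [{"amount": 10.5, "currency": "CHF"}, {"amount": 3.0, "currency": "EUR"}]
```

In a real Spark pipeline each stage would typically be a DataFrame transformation rather than a Python generator, but the staged structure is the same.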
To apply, please send your CV.
Michael Bailey International is acting as an Employment Business in relation to this vacancy.
Project details
Required qualifications
-
Category:
IT Development, Engineering/Technology