This job posting is archived and no longer available.
Open positions can be found under Projects.
Big Data Engineer
Posted by Michael Bailey Associates - Zurich
Required skills: Engineer, Engineering, Python, Client
Project description
For our banking client in Zurich, we are looking for a
Big Data Engineer/Developer
Start: ASAP
Location: Zurich
Duration: initial contract of 12 months
In this project you will help to deliver, maintain and support a growing platform that serves multiple areas of the bank, including some very exciting and cutting-edge projects with board-level sponsorship.
Your key responsibilities are:
- Engineering of data pipelines (primarily batch, increasingly intra-day/near-real-time)
- Integration and evaluation of new big data and data science technologies
- Development of platform components
- Consulting application groups on how best to utilise the platform and technologies
- Engagement with groups for PoCs and full platform on-boarding
Key skills/experience/knowledge needed:
- Very strong Python or Scala
- Spark experience
- Data science with Spark/R/scikit-learn
- Machine Learning concepts
- Hadoop experience
- UNIX experience (Redhat preferred)
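As a purely illustrative sketch of the batch pipeline work described above: the role centres on extract-transform-load style data engineering (in practice implemented with Spark, per the skills list). The minimal plain-Python example below shows the general pattern only; all names and data are hypothetical and not part of the client's actual platform.

```python
# Hypothetical sketch of a batch data-pipeline stage: extract records,
# transform them, and aggregate the result. In the role itself this logic
# would live in a Spark job; stdlib Python is used here only to keep the
# example self-contained.

def extract(raw_rows):
    """Parse raw CSV-like rows into dicts (extract step)."""
    for row in raw_rows:
        account, amount = row.split(",")
        yield {"account": account, "amount": float(amount)}

def transform(records):
    """Keep only positive amounts (transform/filter step)."""
    for rec in records:
        if rec["amount"] > 0:
            yield rec

def load(records):
    """Aggregate amounts per account (load/aggregate step)."""
    totals = {}
    for rec in records:
        totals[rec["account"]] = totals.get(rec["account"], 0.0) + rec["amount"]
    return totals

def run_pipeline(raw_rows):
    """Chain the three stages into one batch run."""
    return load(transform(extract(raw_rows)))

if __name__ == "__main__":
    rows = ["A1,10.50", "A2,-3.0", "A1,2.0"]
    print(run_pipeline(rows))  # -> {'A1': 12.5} (A2 filtered out as negative)
```

In a Spark implementation the same extract/transform/load stages would map onto DataFrame reads, filters and grouped aggregations, distributed across the cluster.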
Are you the person we are looking for? Or do you have these skills but maybe with a different focus? (Hadoop, DevOps, Big Data)
We have 3 positions open for this team, so don't hesitate to contact us for more info.
Looking forward to hearing from you!
Beheshta Saya
Michael Bailey Associates
Michael Bailey International is acting as an Employment Business in relation to this vacancy.
Project details
Required qualifications
-
Category:
IT Development, Engineering/Technology