Big Data/Hadoop Developer with Spark
Posted by Square One Resources
Required skills: Design, XML, Python, Java
Project description
- Program and maintain a scalable ETL framework to ingest data from various data sources using Flume, Kafka, or AWS SQS.
- Administer and configure the AWS platform with a variety of clusters: Spark, Hadoop, HBase, and Cassandra.
- Design and develop scalable, robust big data analytics solutions by processing massive data sources across a variety of data sets (e.g. XML, JSON, AVRO, CSV).
- Develop complex ETL and business rules to integrate and process data from various structured and unstructured data sources.
- Work with other team members to coordinate API integration and data analytics activities.
- Work in an agile project development environment.
- Integrate with other programming languages such as Java, Scala, and Python.
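The multi-format ingestion described in the bullets above can be sketched in plain Python. This is an illustrative sketch only, assuming a normalisation step that maps each raw payload to a flat dict; a production framework would build on Spark, Flume, or Kafka as listed, and the function name and record fields here are assumptions:

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

def ingest_record(payload: str, fmt: str) -> dict:
    """Normalise a raw payload from one of several source formats
    (JSON, CSV, XML) into a plain dict. Illustrative sketch only."""
    if fmt == "json":
        return json.loads(payload)
    if fmt == "csv":
        # Assume the first row is the header and the second row the values.
        rows = list(csv.reader(io.StringIO(payload)))
        return dict(zip(rows[0], rows[1]))
    if fmt == "xml":
        # Assume a flat record: one element per field under the root.
        root = ET.fromstring(payload)
        return {child.tag: child.text for child in root}
    raise ValueError(f"unsupported format: {fmt}")

# The same logical record arriving in three different formats:
print(ingest_record('{"id": "1", "city": "London"}', "json"))
print(ingest_record("id,city\n1,London", "csv"))
print(ingest_record("<rec><id>1</id><city>London</city></rec>", "xml"))
```

All three calls yield the same normalised dict, which is the point of such a framework: downstream business rules see one shape regardless of the source format.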
Project details
- Location: London, United Kingdom
- Start of project: asap
- Project duration: 3 months
- Contract type:
- Professional experience: Not specified
Required qualifications
- Category: IT Development, Media/Design