This job listing is archived and no longer available.
Current job listings can be found under Projects.
Big Data Architect - Kafka, Flume, HBase, Spark
Posted by Next Ventures Ltd
Required skills: Java
Project description
A job opportunity as a Senior Big Data Architect with one of our top-end clients in the banking industry. The candidate must have performed architect roles on multiple projects and understand enterprise architecture challenges.
Mandatory:
- Good understanding of architectural concepts in Spark Streaming, Kafka, Flume, HBase and Solr
- Proficient in defining technical architecture for real-time data processing
- Proficient in designing, developing and deploying scalable big data analytics solutions based on Spark Streaming and HBase
- Strong technical knowledge and programming experience in real-time streaming with Spark Streaming, Flume and Kafka
- Prior experience working with Spark stateful transformations
- Hands-on experience in development (Spark/HBase/Kafka/Flume), capacity planning, deployment and troubleshooting
- Experience in Spark Streaming and HBase performance tuning
- Must have led full-lifecycle development of a Hadoop/Spark implementation for high-volume data
- Experience working with the Cloudera Hadoop distribution
- Strong experience in Core Java

Additional:
- Experience in the finance domain
- Exposure to WebSphere MQ
- Experience installing and configuring Hadoop on AWS
- Experience in data quality, exception management, reconciliation and data migration

FOR IMMEDIATE CONSIDERATION call, or email (see below)
Project details
Required qualifications
-
Category:
IT Development