- 3+ years of experience building complex, highly scalable, high-volume, low-latency enterprise applications in languages such as Java, NodeJS, Go, and/or Scala
- Strong experience building microservices with technologies such as Spring Boot, Spring Cloud, Netflix OSS, and Zuul (a minimal sketch follows this list)
- Deep understanding of microservices design patterns, service registry and discovery, and externalization of configuration
- Experience with message streaming and processing technologies such as Kafka, Spark, Storm, gRPC, or equivalents
- Experience with one or more reactive microservice tools and techniques such as Akka, Vert.x, or ReactiveX
- Strong experience creating, managing, and consuming REST APIs using Swagger, Postman, and API gateways (such as MuleSoft or Apigee)
- Strong knowledge of data modelling, querying, and performance tuning for big-data stores (MongoDB, Elasticsearch, Redis, etc.) and/or RDBMSs (Oracle, PostgreSQL, MySQL, etc.)
- Experience working on Agile/Scrum teams that practice Continuous Integration/Continuous Delivery using Git, Maven, Jenkins, etc.
- Experience with container-based (Docker/Kubernetes) deployment and management
- Experience with AWS-, GCP-, or Azure-based cloud infrastructure
- Knowledge of Test-Driven Development and test automation with JUnit/TestNG
- Knowledge of security frameworks, concepts, and technologies such as Spring Security, OAuth2, SAML, SSO, and Identity and Access Management
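For orientation, here is a minimal sketch of the kind of Spring Boot REST microservice this role centers on. The `OrderServiceApplication` class, the `/orders/{id}` endpoint, and the returned payload are hypothetical names chosen purely for illustration, not part of the posting:

```java
// Minimal Spring Boot REST microservice sketch (hypothetical names).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class OrderServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }

    // A single GET endpoint; a production service would back this with a
    // repository, externalize its configuration, and register itself with
    // a service registry (e.g. Eureka from Netflix OSS) for discovery.
    @GetMapping("/orders/{id}")
    public String getOrder(@PathVariable String id) {
        return "{\"orderId\": \"" + id + "\", \"status\": \"CONFIRMED\"}";
    }
}
```

In the stack the posting describes, an edge gateway such as Zuul would sit in front of many services like this one and route requests by path.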
Key skill set: Apache NiFi, Kafka Connect (Confluent), Sqoop, Kylo, Spark, Druid, Presto, RESTful services, Lambda/Kappa architectures

Responsibilities:
- Build a scalable, reliable, operable, and performant big data platform for both streaming and batch analytics
- Design and implement data aggregation, cleansing, and transformation layers (a minimal sketch follows this list)

Skills:
- 4+ years of hands-on experience designing and operating large data platforms
- Experience with big data ingestion, transformation, and stream/batch processing technologies such as Apache NiFi, Apache Kafka, Kafka Connect (Confluent), Sqoop, Spark, Storm, Hive, etc.
- Experience designing and building streaming data platforms on Lambda and Kappa architectures
- Working experience with at least one NoSQL/OLAP data store such as Druid, Cassandra, Elasticsearch, or Pinot
- Experience with at least one data warehousing tool such as Redshift, BigQuery, or Azure SQL Data Warehouse
- Exposure to other data ingestion, data lake, and querying frameworks such as Marmaray, Kylo, Drill, and Presto
- Experience designing and consuming microservices
- Exposure to security and governance tools such as Apache Ranger and Apache Atlas
- Contributions to open source projects are a plus
- Experience with performance benchmarking is a plus
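As a rough illustration of the aggregation/cleansing/transformation layer described above, here is a minimal Spark Structured Streaming sketch in Java that reads from Kafka. The broker address, the `events` topic, and the count-per-key aggregation are placeholder assumptions, not a prescribed design:

```java
// Minimal Spark Structured Streaming sketch: Kafka in, aggregate, console out.
// Requires the spark-sql-kafka connector on the classpath.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class StreamAggregation {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("stream-aggregation")
                .getOrCreate();

        // Ingest a stream of events from Kafka; broker and topic are placeholders.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "events")
                .load();

        // Cleanse (drop null payloads) and aggregate a running count per key,
        // standing in for the transformation layer the responsibilities describe.
        Dataset<Row> counts = events
                .selectExpr("CAST(key AS STRING) AS k", "CAST(value AS STRING) AS v")
                .filter(col("v").isNotNull())
                .groupBy(col("k"))
                .count();

        counts.writeStream()
                .outputMode("complete")
                .format("console")
                .start()
                .awaitTermination();
    }
}
```

In a Kappa architecture the same streaming job serves both real-time and replayed historical data; a Lambda design would pair it with a separate batch layer over the same topics.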
• Looking for a Big Data Engineer with 3+ years of experience.
• Hands-on experience with MapReduce-based platforms such as Pig, Spark, and Shark.
• Hands-on experience with data pipeline tools such as Kafka, Storm, and Spark Streaming.
• Store and query data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto.
• Hands-on experience managing Big Data on a cluster with HDFS and MapReduce.
• Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm (a minimal consumer sketch follows this list).
• Experience with Azure cloud, Cognitive Services, and Databricks is preferred.
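To make the real-time ingestion bullet concrete, here is a minimal Kafka consumer sketch in Java. The broker address, consumer group, and `clicks` topic are hypothetical, and printing records stands in for handing them to a stream processor:

```java
// Minimal Kafka consumer sketch (hypothetical broker, group, and topic).
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ClickStreamConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "clickstream-demo");          // hypothetical group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("clicks")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // A real pipeline would forward each record to Spark
                    // Streaming, Flink, or Storm; printing stands in for that.
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```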