We’re looking for top engineering talent to join our development teams and work on cutting-edge, exciting products that enable technology for Smart Industry.

About VIMANA

Smart Industry is the fully integrated, collaborative system that responds in real time to changing demands across the factory, the supply network, and the consumer. It is a synonym for Industry 4.0 and includes connected product innovation. Smart Industry is powered by IoT technologies, including the extensive use of IoT sensors and devices, data analytics, cloud computing, edge intelligence, digital twins, and additive technologies. VIMANA brings the technology, solutions, and services that enable Smart Industry and is one of the most dynamic software companies in this domain. VIMANA is a fast-growing technology company positioned at the forefront of a fundamental shift in the way manufacturing plants operate.

As a Senior Software Developer:

- You will develop key components of the VIMANA platform for managing and analyzing streaming real-time data.
- You will work with cutting-edge technologies and tools for streaming analytics, including Kafka, Elasticsearch, NodeJS, and Spring.
- You will be part of a self-organizing agile team that passionately follows best practices such as pair programming and TDD, and delivers using our fully integrated CI/CD pipeline and the SAFe framework.
- You will be part of a DevOps culture and work with production systems, including operations, deployment, and maintenance.
- You will have the opportunity to continuously grow and build your capabilities, learning new technologies, languages, and platforms.

Required Skills and Qualifications

- Undergraduate degree in Computer Science or a related field, or equivalent practical experience
- 4+ years of work experience in the relevant field
- Experience building applications using Java and NodeJS
- Experience using Kafka or equivalent messaging platforms
- Expertise in data structures and algorithms
- Experience using NoSQL databases such as MongoDB or Elasticsearch
- Working experience with AWS is a plus

Why Join VIMANA

- You will be part of a team that is highly motivated to learn and work with cutting-edge technologies, tools, and development practices.
- A workplace that values work-life balance and provides flexible working hours and work-from-home options.
- A highly competitive salary package.
- A positive, energetic, and collaborative work environment where teams are empowered to make decisions and are self-organized.
- A fully stocked kitchen, with breakfast and lunch every day.
- 3+ years of experience building complex, highly scalable, high-volume, low-latency enterprise applications using languages such as Java, NodeJS, Go, and/or Scala
- Strong experience building microservices using technologies like Spring Boot, Spring Cloud, Netflix OSS, and Zuul
- Deep understanding of microservices design patterns, service registry and discovery, and externalization of configuration
- Experience with message streaming and processing technologies such as Kafka, Spark, Storm, gRPC, or equivalents
- Experience with one or more reactive microservice tools and techniques such as Akka, Vert.x, or ReactiveX
- Strong experience creating, managing, and consuming REST APIs, leveraging Swagger, Postman, and API gateways (such as MuleSoft or Apigee)
- Strong knowledge of data modelling, querying, and performance tuning for big-data stores (MongoDB, Elasticsearch, Redis, etc.) and/or an RDBMS (Oracle, PostgreSQL, MySQL, etc.)
- Experience working with Agile/Scrum teams that use Continuous Integration/Continuous Delivery processes with Git, Maven, Jenkins, etc.
- Experience with container-based (Docker/Kubernetes) deployment and management
- Experience using AWS-, GCP-, or Azure-based cloud infrastructure
- Knowledge of Test-Driven Development and test automation skills with JUnit/TestNG
- Knowledge of security frameworks, concepts, and technologies such as Spring Security, OAuth2, SAML, SSO, and Identity and Access Management
Key skill set: Apache NiFi, Kafka Connect (Confluent), Sqoop, Kylo, Spark, Druid, Presto, RESTful services, Lambda/Kappa architectures

Responsibilities:
- Build a scalable, reliable, operable, and performant big data platform for both streaming and batch analytics
- Design and implement data aggregation, cleansing, and transformation layers

Skills:
- 4+ years of hands-on experience designing and operating large data platforms
- Experience with big data ingestion, transformation, and stream/batch processing technologies such as Apache NiFi, Apache Kafka, Kafka Connect (Confluent), Sqoop, Spark, Storm, Hive, etc.
- Experience designing and building streaming data platforms using Lambda and Kappa architectures
- Working experience with at least one NoSQL or OLAP data store such as Druid, Cassandra, Elasticsearch, or Pinot
- Experience with a data warehousing tool such as Redshift, BigQuery, or Azure SQL Data Warehouse
- Exposure to other data ingestion, data lake, and querying frameworks such as Marmaray, Kylo, Drill, and Presto
- Experience designing and consuming microservices
- Exposure to security and governance tools such as Apache Ranger and Apache Atlas
- Contributions to open-source projects are a plus
- Experience with performance benchmarking is a plus
• Looking for a Big Data Engineer with 3+ years of experience.
• Hands-on experience with MapReduce-based platforms such as Pig, Spark, and Shark.
• Hands-on experience with data pipeline tools such as Kafka, Storm, and Spark Streaming.
• Experience storing and querying data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto.
• Hands-on experience managing Big Data on a cluster with HDFS and MapReduce.
• Experience handling streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm.
• Experience with Azure cloud, Cognitive Services, and Databricks is preferred.