Design and development of supply chain applications for retail customers, making use of open source technologies. This can involve taking our own product and customizing it to customer requirements, or developing applications from scratch.
- 3+ years of experience building complex, highly scalable, high-volume, low-latency enterprise applications using languages such as Java, NodeJS, Go and/or Scala
- Strong experience building microservices using technologies like Spring Boot, Spring Cloud, Netflix OSS, Zuul
- Deep understanding of microservices design patterns, service registry and discovery, and externalization of configuration
- Experience with message streaming and processing technologies such as Kafka, Spark, Storm, gRPC or other equivalent technologies
- Experience with one or more reactive microservice tools and techniques such as Akka, Vert.x, ReactiveX
- Strong experience in creation, management and consumption of REST APIs leveraging Swagger, Postman, API gateways (such as MuleSoft, Apigee), etc.
- Strong knowledge of data modelling, querying and performance tuning for big-data stores (MongoDB, Elasticsearch, Redis, etc.) and/or RDBMS (Oracle, PostgreSQL, MySQL, etc.)
- Experience working with Agile/Scrum teams that use Continuous Integration/Continuous Delivery processes with Git, Maven, Jenkins, etc.
- Experience with container-based (Docker/Kubernetes) deployment and management
- Experience using AWS/GCP/Azure cloud infrastructure
- Knowledge of Test-Driven Development and test automation with JUnit/TestNG
- Knowledge of security frameworks, concepts and technologies such as Spring Security, OAuth2, SAML, SSO, and Identity and Access Management
This position is part of the Analytics Product Engineering Productivity team. Engineering Productivity works closely with other engineers, data scientists, product teams, and many others to not only increase our systems' scalability and reliability, but also enable the rapid development of new feature code for our customers. As a software engineer, you will play an integral role in building and maintaining anomaly detection integrations, automating engineering process workflows, assisting with monitoring and instrumentation governance, design, and implementation, and much more.
Roles and Responsibilities:
- Object-oriented programming experience in Python
- Experience with stream-based processing models, distributed streaming platforms like Kafka, or control theory
- Experience with SQL (MySQL, Oracle, or PostgreSQL)
- Solid Computer Science fundamentals with regard to data structures, algorithms, time complexity, etc.
Key Competencies and Skills:
- Solid understanding of statistics and probability
- Experience optimizing and debugging highly performant Python applications is mandatory
- Experience developing and scaling RESTful web services
- Prior work with column stores (Vertica) and NoSQL (Redis, Aerospike)
- Streaming technology: Kafka/Kubernetes experience is mandatory
Education and Qualifications:
BA/BS degree and 4+ years of experience, OR MS degree and 2+ years of experience in software engineering (degree in Computer Science or a related field preferred), OR equivalent experience in software development
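By way of illustration only (none of this code comes from the posting above), a minimal Python sketch of the kind of streaming anomaly detection work that role describes: a rolling z-score check over a stream of metric values. The window size, threshold, and sample data are hypothetical.

```python
from collections import deque
from statistics import mean, stdev

class RollingZScoreDetector:
    """Flag values whose rolling z-score exceeds a threshold.

    Window size and threshold are illustrative defaults, not from the posting.
    """

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous relative to the recent window."""
        is_anomaly = False
        if len(self.window) >= 2:
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.window.append(value)
        return is_anomaly

if __name__ == "__main__":
    detector = RollingZScoreDetector()
    # Synthetic stream with one obvious outlier (55.0).
    stream = [10.0] * 60 + [10.2, 9.8, 55.0, 10.1]
    for i, v in enumerate(stream):
        if detector.observe(v):
            print(f"anomaly at index {i}: {v}")
```

In a real deployment this logic would typically be fed by a Kafka consumer or a monitoring integration rather than a hard-coded list.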
Key skill set: Apache NiFi, Kafka Connect (Confluent), Sqoop, Kylo, Spark, Druid, Presto, RESTful services, Lambda/Kappa architectures
Responsibilities:
- Build a scalable, reliable, operable and performant big data platform for both streaming and batch analytics
- Design and implement data aggregation, cleansing and transformation layers
Skills:
- 4+ years of hands-on experience designing and operating large data platforms
- Experience in big data ingestion, transformation and stream/batch processing technologies using Apache NiFi, Apache Kafka, Kafka Connect (Confluent), Sqoop, Spark, Storm, Hive, etc.
- Experience designing and building streaming data platforms using Lambda or Kappa architectures
- Working experience with at least one NoSQL or OLAP data store such as Druid, Cassandra, Elasticsearch, Pinot, etc.
- Experience with at least one data warehousing tool such as Redshift, BigQuery or Azure SQL Data Warehouse
- Exposure to other data ingestion, data lake and querying frameworks like Marmaray, Kylo, Drill, Presto
- Experience designing and consuming microservices
- Exposure to security and governance tools like Apache Ranger and Apache Atlas
- Contributions to open source projects are a plus
- Experience with performance benchmarks is a plus
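As a purely illustrative sketch of the streaming half of such a Lambda/Kappa-style platform (not part of the posting), the following PySpark Structured Streaming job reads events from Kafka and produces windowed counts. The broker address and topic name ("orders") are placeholders, and the spark-sql-kafka connector package must be on the classpath when submitting the job.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Read a stream of raw events from a Kafka topic (placeholder broker/topic).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "orders")
    .load()
    .selectExpr("CAST(value AS STRING) AS value", "timestamp")
)

# Transformation layer kept trivial here: count events per 1-minute window.
counts = events.groupBy(window(col("timestamp"), "1 minute")).count()

# Write windowed counts to the console sink for demonstration purposes.
query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```

The batch layer of a Lambda architecture would run an equivalent aggregation over the historical data store rather than the live Kafka stream.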
- Hands-on programming and technical design skills with a passion for learning new technologies
- Experience building highly scalable, robust, and fault-tolerant services
- 3+ years of experience designing and developing software systems or services
- Good understanding of REST APIs and the web in general
- Ability to build a feature from scratch and drive it to completion
- Working experience with AWS
- Knowledge of designing microservices
- Knowledge of search platforms such as Elasticsearch, Solr, etc.
- Knowledge of messaging technology such as Kafka or RabbitMQ
- Startup experience is a strong plus
- Experience with Python is a strong plus
- Critical thinking is a plus
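For illustration only, a minimal Python REST endpoint of the kind this role involves, using Flask as an assumed framework (the posting does not name one); the resource name and in-memory store are hypothetical stand-ins for a real persistence or search backend.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database or search index.
ITEMS = {}

@app.route("/items/<item_id>", methods=["GET"])
def get_item(item_id):
    """Return the stored item, or 404 if it does not exist."""
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

@app.route("/items/<item_id>", methods=["PUT"])
def put_item(item_id):
    """Create or replace an item from the JSON request body."""
    ITEMS[item_id] = request.get_json(force=True)
    return jsonify(ITEMS[item_id]), 200

if __name__ == "__main__":
    app.run(port=8080)
```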
Please apply if and only if you enjoy engineering, wish to write a lot of code, wish to do a lot of hands-on Python experimentation, and already have in-depth knowledge of deep learning. This position is strictly for people who understand various neural network architectures and can customize them; it is not for people whose experience is limited to downloading off-the-shelf AI/ML code. This position is not for freshers. We are looking for candidates with at least 2 years of industry experience in AI/ML/CV.
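As an illustration of the hands-on network customization this posting refers to (not taken from the posting itself), here is a minimal PyTorch module defining a small convolutional classifier from scratch; the layer sizes and the choice of PyTorch are assumptions.

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """A small, hand-written convolutional classifier (illustrative sizes only)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the head input-size agnostic
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

if __name__ == "__main__":
    model = SmallConvNet()
    out = model(torch.randn(2, 3, 32, 32))
    print(out.shape)  # torch.Size([2, 10])
```

Being able to write and modify a definition like this (swap layers, change heads, adjust pooling) is the kind of customization the posting distinguishes from simply running downloaded model code.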