The ideal candidate should have:
▪ Hands-on experience with statistical tools and techniques
▪ Expert-level coding experience in Python
▪ Good exposure to deep learning libraries such as TensorFlow and PyTorch
▪ Experience implementing deep learning solutions (computer vision, NLP, etc.) from scratch, with code available on GitHub
▪ Ability to read a paper, pick out its ideas, and quickly implement or augment the current solution in the deep learning library of choice
▪ Excellent communication skills
Responsibilities:
▪ Design, build, manage and operate the infrastructure and configuration of all platform environments, with a focus on automation and infrastructure as code.
▪ Design architectures for distributed applications.
▪ Design, build, manage and operate the infrastructure-as-a-service layer (hosted and cloud-based platforms) that supports the different platform services.
▪ Design, build, manage and operate the continuous delivery framework and tools, and act as a subject-matter expert on CI/CD for developer teams.
▪ Write and build continuous delivery pipelines to manage and automate the life cycle of the different platform components.
▪ Develop a log analytics solution, based on open-source components, that provides logging-as-a-service to hosted applications.
▪ Evaluate performance trends and expected changes in demand and capacity, and establish appropriate scalability plans.
▪ Identify and troubleshoot availability and performance issues at all layers of deployment: hardware, operating environment, network and application.
▪ Recommend and maintain technology-related policies and procedures.
▪ Identify and suggest opportunities to improve efficiency and functionality.
▪ Implement data security and protection.
Skills and Qualifications:
▪ 2+ years of relevant DevOps experience
▪ Track record of building complex CI/CD platforms to build, test, deploy and release software products
▪ Significant hands-on experience designing Docker and Kubernetes clusters
▪ Fluency in shell scripting and CI/CD automation using Jenkins, Travis CI, etc.
▪ Understanding of integrating and working with NoSQL databases such as Elasticsearch, MongoDB, DynamoDB and Bigtable
▪ Aptitude for fixing recurring issues by automating repeatable operational tasks
▪ Detail-oriented personality that does not lose sight of the big picture
▪ Ability to thrive in a fast-paced, evolving, growing and dynamic environment
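The posting above asks for an aptitude for automating repeatable operational tasks and troubleshooting availability issues. As a minimal illustrative sketch of that kind of automation (not part of any posting's actual stack; the function name and stubbed check are hypothetical), a retrying health-check helper in Python:

```python
import time
from typing import Callable

def wait_until_healthy(check: Callable[[], bool],
                       retries: int = 5,
                       delay: float = 1.0) -> bool:
    """Poll a health check, retrying with a fixed delay.

    Returns True as soon as the check passes, or False once
    all retries are exhausted.
    """
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False

# Hypothetical check: in a real pipeline this might hit a
# service's /health endpoint instead of reading stub values.
service_up = iter([False, False, True])
print(wait_until_healthy(lambda: next(service_up), delay=0.01))  # True
```

In a real deployment pipeline the same pattern usually adds exponential backoff and a timeout rather than a fixed retry count.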
Job Description: Responsible for developing, enhancing, modifying and/or maintaining applications in the Global Markets environment. Software developers design, code, test, debug and document programs as well as support activities for the corporate systems architecture. You are required to work closely with business partners to define requirements for system applications, and you must typically have in-depth knowledge of development tools and languages. You are also responsible for day-to-day supervision of a small team of associates, providing coaching and input into the performance management process.
Essential Skills:
• Front end: Angular/ReactJS
• Backend: Python with Django or Flask
• Experience working with scalable, high-performance systems.
• Strong understanding of database design; experience with NoSQL databases is a plus.
• Experience in API integration and application deployment.
• Familiarity with Unix/Linux development environments and tools, including scripting.
• Strong problem-solving ability.
• Experience designing system architectures for complex problems, with a sound understanding of object-oriented programming and design patterns.
• Experience in test-driven development and Agile methodologies.
• Good communication skills.
• Experience using tools like Git is a plus.
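The backend skills above center on Python web frameworks (Django or Flask). Both ultimately expose the standard WSGI interface to the server; as an illustrative stdlib-only sketch of that interface (the app and its response are invented for illustration, not from any posting):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # Minimal WSGI application: the callable contract that
    # Django and Flask apps present to a WSGI server.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

# Exercise the app directly with a synthetic environ,
# capturing the status line instead of sending it to a server.
environ = {}
setup_testing_defaults(environ)
statuses = []
body = app(environ, lambda status, headers: statuses.append(status))
print(statuses[0], body)  # 200 OK [b'hello']
```

Frameworks layer routing, request parsing and templating on top of exactly this callable.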
Job Description:
Develop and deliver the automation software required for building and improving the functionality, reliability, availability, and manageability of applications and cloud platforms
Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset
Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platforms and tools
Automate the development and test automation processes through a CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers)
Build a container hosting platform using Kubernetes
Introduce new cloud technologies, tools and processes to keep innovating in the commerce area and drive greater business value.
Must Haves:
Proficiency in deploying and maintaining cloud-based infrastructure services (AWS, GCP, Azure; good hands-on experience in at least one of them)
Well versed in service-oriented architecture, cloud-based web services architecture, design patterns and frameworks
Good knowledge of cloud services for compute, storage, networking, messaging (e.g. SNS, SQS) and automation (e.g. CFT/Terraform)
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
Experience with systems management/automation tools (Puppet/Chef/Ansible, Terraform)
Strong Linux system administration experience with excellent troubleshooting and problem-solving skills
Hands-on experience with languages such as Bash, Python, Core Java or Scala
Experience with CI/CD pipelines (Jenkins, Git, Maven, etc.)
Experience integrating solutions in a multi-region environment
Self-motivated; learns quickly and delivers results with minimal supervision
Experience with Agile/Scrum/DevOps software development methodologies
Nice to Have:
Experience setting up the Elasticsearch, Logstash, Kibana (ELK) stack
Experience working with large-scale data
Experience with monitoring tools such as Splunk, Nagios, Grafana, DataDog, etc.
Previous experience working with distributed architectures such as Hadoop and MapReduce
Responsibilities:
▪ End-to-end planning and execution of the early stages of the recruitment cycle (including job posting, sourcing, screening, assessing, and interacting with candidates and managers).
▪ Design and post job vacancies on the best-performing recruitment channels (by responsiveness, quality and cost).
▪ Proactively build a talent pool by sourcing quality candidates for open and expected positions.
▪ Research and recommend new sources for active and passive candidate sourcing.
▪ Make extensive use of social and professional networking sites to identify potential candidates.
▪ Communicate and coordinate with external recruitment partners to get the best yield from such platforms.
▪ Execute open requisitions within the given time frames/SLAs.
▪ Handle MIS reporting and data management.
▪ Prioritize and balance recruitment needs according to business criticality.
▪ Communicate progress on assigned job vacancies on a regular basis and advise on the action plan to close each position within the given SLAs.
▪ Provide regular feedback and advice on hiring performance to enhance the recruitment process.
Requirements:
▪ 1-3 years of experience in IT recruitment and delivering quality high-tech profiles
▪ Extensive knowledge of the various recruitment platforms and channels
▪ MIS reporting and vendor management
▪ Experience working in conjunction with founders and business leads
▪ Excellent communication skills
As a Big Data Engineer, you will build utilities that help orchestrate the migration of massive Hadoop/Big Data systems onto public cloud platforms. You will build data processing scripts and pipelines that serve numerous jobs and queries per day. The services you build will integrate directly with cloud services, opening the door to new and cutting-edge reusable solutions. You will work with engineering teams, co-workers, and customers to gain new insights and dream up new possibilities.
The Big Data Engineering team is hiring in the following areas:
• Distributed storage and compute solutions
• Data ingestion, consolidation, and warehousing
• Cloud migrations and replication pipelines
• Hybrid on-premise and in-cloud Big Data solutions
• Big Data, Hadoop and Spark processing
Basic Requirements:
• 2+ years of hands-on experience with data structures, distributed systems, Hadoop and Spark, and SQL and NoSQL databases
• Strong software development skills in at least one of Java, C/C++, Python or Scala
• Experience building and deploying cloud-based solutions at scale
• Experience developing Big Data solutions (migration, storage, processing)
• BS, MS or PhD degree in Computer Science or Engineering, and 5+ years of relevant work experience in Big Data and cloud systems
• Experience building and supporting large-scale systems in a production environment
Technology Stack:
Cloud platforms: AWS, GCP or Azure
Big Data distributions: any of Apache Hadoop, CDH, HDP, EMR, Google Dataproc, HDInsight
Distributed processing frameworks: one or more of MapReduce, Apache Spark, Apache Storm, Apache Flink
Database/warehouse: Hive, HBase, and at least one cloud-native service
Orchestration frameworks: any of Airflow, Oozie, Apache NiFi, Google Dataflow
Message/event solutions: any of Kafka, Kinesis, Cloud Pub/Sub
Container orchestration (good to have): Kubernetes or Swarm
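MapReduce appears in the stack above as a distributed processing framework. Its core idea can be sketched in plain Python (a single-process illustration only; real frameworks shard the map and reduce phases across a cluster, and the helper names here are invented):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line: str):
    # Mapper: emit one (word, 1) pair per token,
    # as a MapReduce word-count mapper would.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reducer: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["the quick brown fox", "the lazy dog"]
pairs = chain.from_iterable(map_phase(l) for l in lines)
print(reduce_phase(pairs))  # {'the': 2, 'quick': 1, ...}
```

The framework's value is everything this sketch omits: partitioning the pairs by key, shuffling them between machines, and rerunning failed tasks.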
Job Description:
* Be part of a Cloud product team responsible for defining and developing Cloud operations automation, orchestration and optimization use cases.
* Be at the forefront of Cloud technology, assisting a global list of customers that consume multiple cloud environments.
* Explore and implement a broad spectrum of open-source technologies.
* Help the team and customers resolve technical issues.
* Work closely with development teams on CI/CD pipelines and with the QA team on test automation.
* Be extremely customer focused, and flexible enough to be available on call for solving critical problems.
* Contribute to process improvement involving product deployments, Cloud governance and customer success.
Desired Candidate Profile:
* Minimum 2+ years of experience with a B.E/B.Tech
* Well versed in DevOps technologies, automation, infrastructure orchestration, configuration management and continuous integration
* Prior work experience in the Cloud domain and with cloud-based products (AWS, Azure) is a must
* Experience in Linux administration, server hardening and security compliance
* Web and application server technologies (e.g. Apache, Nginx, IIS)
* DevOps, orchestration/configuration management and continuous integration technologies (e.g. Chef, Puppet, Docker, Jenkins, Ansible)
* Good command of at least one scripting language (e.g. Bash, PowerShell, Ruby, Python)
* Networking protocols such as HTTP, DNS and TCP/IP
* Experience managing version control platforms (e.g. Git, SVN, TFVC)
Responsibilities: The Data Scientist will lead client-facing engagements aimed at solving clients' most challenging business problems through inspired analytics: the assembly and integration of disparate data sources, the application of machine learning methods, and the impactful interpretation and communication of key insights through intuitive visualization and decision support tools.
Desired Profile:
- Strong fundamentals of machine learning, general statistics and data science principles
- Experience with some of these methods: regression, decision trees, CART, random forests, boosting, evolutionary programming, neural networks, fuzzy systems, Bayesian belief networks, support vector machines, ensemble methods, association rules, singular value decomposition, principal component analysis, clustering, artificial intelligence, deep learning, etc.
- Experience in some of these applications: collaborative filtering, personalization, consumer segmentation, text analytics, information retrieval, search relevance
- Solid data management and statistical modelling skills: SAS, R, Mahout, MATLAB, Python (SciPy, NumPy), SQL and Excel
- Strong problem solving and conceptual thinking, with the ability to communicate even complex ideas succinctly
- Critical eye for data quality and a strong desire to get it right
- Ability to work in a fast-paced, deadline-driven environment
- Strong work ethic: a sense of collaboration and ownership, results orientation, and being a team player
- Comfortable working from the client's office
Qualifications:
- B.E./B.Tech from Tier 1 colleges (IIT/ISI/NIT/BITS/BIT/IIIT/REC) in the Computer Science/Statistics/IT/ECE stream
- M.S. or Ph.D. in Applied Mathematics, Statistics, Computer Science, Operations Research, Economics, or equivalent
- Minimum of 4 years of relevant work experience in Data Science
Requirements:
- 4+ years of IT experience in data-driven or AI technology products
- Hands-on expertise in analytical tools such as R, SAS, Tableau and Excel
- Excellent communication, interpersonal and managerial skills
- Ability to work with minimal supervision in a dynamic and timeline-sensitive work environment
- Team management experience is a must
- Work collaboratively with the founders and account leads on project execution and timelines
- Experience with decision science tools and techniques is an added advantage
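Regression is the first method the profile above lists. For one feature, ordinary least squares has a closed form that fits in a few lines of plain Python; this sketch (function name invented, stdlib only, no NumPy) shows the computation the statistical tools automate:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one feature).

    Closed form: a = cov(x, y) / var(x), b = mean(y) - a*mean(x).
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Noise-free data on y = 2x + 1 recovers the parameters exactly.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```

With many features the same idea becomes the matrix normal equations, which is where libraries such as NumPy and R's `lm` take over.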
Role and Responsibilities: As a Solution Architect, you will:
▪ Guide our fast-growing tech team in building world-class products that are highly intelligent, robust, scalable and secure.
▪ Constantly sync with the product and business teams to align with business priorities and plan for long-term and short-term architecture goals.
▪ Own the complete SDLC of our product(s) by managing solutions, engineering, testing, release and maintenance.
▪ Work closely with product owners to align on their feature backlogs and plan the engineering work.
▪ Streamline DevOps by working with the engineering, QA, Infra, TechOps and PM teams on build management, testing and release structures.
▪ Build high-performing teams through adequate training and skill development.
Requirements:
▪ 4+ years of cloud infrastructure and software development experience
▪ Expert in microservices/API-driven architecture design and implementation
▪ Multi-cloud/cloud-agnostic application design experience (AWS/GCP/Azure): security design, auto-scaling, clustering, containerization, etc.
▪ Strong expertise in programming and scripting (Java, Python, C++, etc.)
▪ Experience with SQL and NoSQL database implementations
▪ Big data, ML and AI experience would be a bonus
Responsibilities:
▪ Design distributed applications, weigh architectural trade-offs applying synchronous and asynchronous design patterns, and write and deliver code with speed and quality
▪ Develop multi-tier, scalable, high-volume, performant and reliable user-centric web-service-based applications that operate 24x7
▪ Produce high-quality software that is unit tested, code reviewed and checked in regularly for continuous integration
▪ Develop software related to machine learning, artificial intelligence and data analytics
▪ Write and implement software solutions that integrate different systems
Required Skills:
▪ Sound knowledge of software design, development and algorithmic solutions
▪ Working knowledge of Python
▪ Strong object-oriented skills and development expertise in web services
▪ Knowledge of different frameworks (preferably Django)
▪ Knowledge of developing with ORM (Object-Relational Mapper) libraries; able to integrate multiple data sources and databases into one system
▪ Expert knowledge of computer science, with strong competencies in data structures, algorithms and software design
▪ Knowledge of object-oriented design, coding and testing patterns, and programming languages (Java, Python, etc.)
▪ Understanding of building web applications and services with IDEs, ANT, JUnit, etc.
▪ Knowledge of relational databases (transactional and non-transactional), database architecture and distributed transaction management
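The skills above include ORM libraries and integrating databases into one system. The core job an ORM automates is mapping rows to objects; this stdlib-only sketch (the `User` table and class are invented for illustration, and real ORMs such as the Django ORM add query building, migrations and relationships on top) shows that mapping with `sqlite3`:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

def load_users(conn: sqlite3.Connection) -> list:
    # Map raw rows to typed objects -- the mapping step
    # an ORM performs automatically on every query.
    rows = conn.execute("SELECT id, name FROM users ORDER BY id")
    return [User(*row) for row in rows]

# An in-memory database stands in for a real backing store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "Ada"), (2, "Linus")])
print(load_users(conn))  # [User(id=1, name='Ada'), User(id=2, name='Linus')]
```

Swapping the connection object is also how such code integrates multiple data sources behind one interface.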