5+ Infrastructure Jobs in Mumbai
Apply to 5+ Infrastructure Job openings in Mumbai on CutShort.io.
at Scoutflo
Scoutflo is a platform that automates complex Kubernetes infrastructure requirements.
Job Description:
- In-depth knowledge of full-stack development principles and best practices.
- Expertise in building web applications, with strong proficiency in Node.js, React, and Go.
- Experience developing and consuming RESTful and gRPC APIs.
- Familiarity with CI/CD workflows and DevOps processes.
- Solid understanding of cloud platforms and container orchestration technologies.
- Experience with Kubernetes pipelines and workflows using tools like Argo CD.
- Experience designing and building user-friendly interfaces.
- Excellent understanding of distributed systems, databases, and APIs.
- A passion for writing clean, maintainable, and well-documented code.
- Strong problem-solving skills and the ability to work independently as well as collaboratively.
- Excellent communication and interpersonal skills.
- Experience building self-serve platforms or user onboarding experiences.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform.
- A strong understanding of security best practices for Kubernetes deployments.
- A good grasp of setting up network architecture for distributed systems.
Must have:
1) Experience managing infrastructure on AWS, GCP, or Azure
2) Experience managing infrastructure on Kubernetes (a minimal sketch follows)
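As a rough illustration of the programmatic Kubernetes management this role calls for, here is a minimal Python sketch using the official kubernetes client to report Deployment replica status; the namespace and kubeconfig location are placeholders, not details from the posting.

```python
# Minimal sketch: inspect Deployments in a namespace with the official
# Kubernetes Python client. Assumes a local kubeconfig (e.g. ~/.kube/config);
# the "demo" namespace is a placeholder.
from kubernetes import client, config

def list_deployment_status(namespace: str = "demo") -> None:
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        ready = dep.status.ready_replicas or 0
        desired = dep.spec.replicas or 0
        print(f"{dep.metadata.name}: {ready}/{desired} replicas ready")

if __name__ == "__main__":
    list_deployment_status()
```

In practice a tool like Argo CD would reconcile these Deployments from Git; a script like this is only a quick way to observe their state.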
Job Role - DevOps Infra Lead Engineer
About LenDenClub
LenDenClub is a leading peer-to-peer lending platform that connects investors and lenders seeking high returns with creditworthy borrowers seeking short-term personal loans. With 8 million users and 2 million+ investors on board, LenDenClub has become a go-to platform for earning returns in the range of 10%-12%. It offers investors a convenient way to browse thousands of borrower profiles and achieve better returns than traditional asset classes. Moreover, LenDenClub is safeguarded against market volatility and inflation, and provides a great way to diversify one's investment portfolio.
LenDenClub has raised US $10 million in a Series A round from an association of investors. With this round of funding, LenDenClub was valued at more than US $51 million and has grown multifold since then.
Why work at LenDenClub
LenDenClub is a certified Great Place to Work. The certification comes from the Great Place to Work Institute, Inc., a globally renowned firm that evaluates companies for employee satisfaction on the grounds of a high-trust, high-performance workplace culture.
As a LenDenite, you will be part of an enthusiastic and passionate group of individuals who own and love what they do. At LenDenClub we believe in creating leaders, and once you come on board you get complete freedom to chase your ultimate career goal without any inhibitions.
Website - https://www.lendenclub.com
Location - Mumbai (Goregaon)
Responsibilities of a DevOps Infra Lead Engineer:
● Create software deployment strategies that are essential for the successful deployment of software in the work environment. Identify and implement data storage methods, such as clustering, to improve performance.
● Design solutions for managing large volumes of documents in real time that enable quick search and analysis. Identify issues in production systems and implement monitoring solutions to overcome them.
● Stay abreast of industry trends and best practices. Research, test, and execute new techniques that can be reused and applied to software development projects.
● Design, build, and optimize automation systems that support the business's web and data infrastructure platforms.
● Create technology infrastructure and automation tools, and maintain configuration management.
● Implement lifecycle infrastructure solutions and documentation operations that meet the engineering department's quality standards.
● Implement and maintain CI/CD pipelines.
● Containerise applications.
● Build and improve infrastructure security.
● Apply Infrastructure as Code (IaC).
● Maintain environments.
● Configure NAT and ACLs.
● Set up ECS and ELB for high availability (HA).
● Manage WAF, firewall, and DMZ configurations.
● Define deployment strategies for high uptime.
● Set up monitoring and alerting policies for infrastructure and applications (see the sketch after this list).
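A minimal, hypothetical sketch of the monitoring item above, assuming AWS and boto3: it creates a CloudWatch CPU alarm for a single EC2 instance. The instance ID, SNS topic, region, and thresholds are placeholders, not details from this posting.

```python
# Minimal sketch (not LenDenClub's actual setup): create a CloudWatch CPU
# alarm for one EC2 instance using boto3. All identifiers are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # placeholder region

cloudwatch.put_metric_alarm(
    AlarmName="web-01-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                # 5-minute datapoints
    EvaluationPeriods=2,       # alarm after 10 minutes above the threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
```

In a real setup this kind of alarm would typically be declared in Terraform or another IaC tool rather than created imperatively.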
Required Skills
● Communication Skills
● Interpersonal Skills
● Infrastructure management
● Awareness of technologies like Python, MySQL, MongoDB, and so on.
● Sound knowledge of cloud infrastructure.
● Knowledge of fundamental Unix/Linux commands, monitoring, editing, and command-line tools is essential.
● Well versed in scripting languages such as Ruby and shell.
● Experience with Google Cloud Platform, Hadoop, NoSQL databases, and big data clusters.
● Knowledge of open-source technologies.
IT Head / AVP Responsibilities:
• Overseeing all technology operations, including software, networking, hardware, and security, and evaluating them against established goals.
• Devising and establishing IT policies and systems that support the strategies set by upper management, with plans for redundancy, zero-to-no downtime, and continuous improvement of business operations and processes.
• Analyzing the business requirements of all departments to determine their technology needs.
• Managing and supervising the organization's technical operations, including overseeing IT department employees and ensuring all technology systems and applications support the organization's goals and objectives.
• Bringing in-depth knowledge of new software development; able to prepare wireframe designs and architecture, and to design and describe the SOW for all new IT software development.
• Overseeing all IT operations infrastructure.
• Developing, implementing, and evaluating IT projects in line with organizational objectives.
• Liaising with other departments to determine and address their IT needs and requirements.
• Managing and supervising employees in the IT department.
• Ensuring the maintenance of current projects and technology systems.
• Identifying vulnerabilities, the need for upgrades, and opportunities for improvement.
• Proposing strategic solutions and recommending new systems and software.
• Preparing financial budgets and performance reports.
• Building and maintaining relationships with external advisors and vendors.
• Ensuring reported issues are resolved in a timely manner.
IT AVP Requirements:
• A bachelor's degree in computer science or a related field.
• A master's degree in computer science is preferred.
• Proof of continued education, such as software certifications, is desirable.
• Experience with ERP, CRM, Business Intelligence, Warehouse Management (WMS)/Supply Chain Management (SCM), Cloud Storage (Azure/AWS), DAM, MS 365, Marketing Automation, and other business applications.
• Knowledgeable in successful API integration of CTI and other communication technologies, such as WhatsApp, with business applications (a rough sketch follows this list).
• Identifying vendors, developing SOWs and wireframes, and successfully managing website (JS/Node/React/Shopify/WordPress, etc.) and application development projects to completion.
• Use of low-code/no-code application development on MS platforms would be desirable.
• At least 3 to 5 years of experience.
• Excellent written and spoken communication and interpersonal skills.
• Strong leadership and project management skills.
• Solid analytical and problem-solving skills.
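As a hedged illustration of the messaging-API integration mentioned above (not a prescribed approach for this role), the sketch below sends a text message through Meta's WhatsApp Cloud API using Python's requests library; the API version, phone-number ID, access token, and recipient are placeholders.

```python
# Rough sketch of integrating WhatsApp messaging with a business application
# via Meta's WhatsApp Cloud API. All identifiers below are placeholders.
import requests

PHONE_NUMBER_ID = "123456789012345"   # placeholder WhatsApp Business phone-number ID
ACCESS_TOKEN = "YOUR_TOKEN_HERE"      # placeholder access token

resp = requests.post(
    f"https://graph.facebook.com/v17.0/{PHONE_NUMBER_ID}/messages",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "messaging_product": "whatsapp",
        "to": "919800000000",                      # placeholder recipient
        "type": "text",
        "text": {"body": "Your order has shipped."},
    },
    timeout=10,
)
resp.raise_for_status()
```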
ABOUT EPISOURCE:
Episource has devoted more than a decade to building solutions for risk adjustment to measure healthcare outcomes. As one of the leading companies in healthcare, we have helped numerous clients optimize their medical records, data, and analytics to enable better documentation of care for patients with chronic diseases.
The backbone of our consistent success has been our obsession with data and technology. At Episource, all of our strategic initiatives start with the question: how can data be “deployed”? Our analytics platforms and data lakes ingest huge quantities of data daily to help our clients deliver services. We have also built our own machine learning and NLP platform to add productivity and efficiency to our workflow. Combined, these form a foundation of tools and practices used by quantitative staff across the company.
What’s our poison you ask? We work with most of the popular frameworks and technologies like Spark, Airflow, Ansible, Terraform, Docker, ELK. For machine learning and NLP, we are big fans of keras, spacy, scikit-learn, pandas and numpy. AWS and serverless platforms help us stitch these together to stay ahead of the curve.
ABOUT THE ROLE:
We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP-driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, improving patient health, clinical suspecting, and information extraction from clinical notes.
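For a flavour of the clinical named entity recognition problem mentioned above, here is a minimal spaCy sketch using the general-purpose en_core_web_sm model; Episource's own clinical models and label sets would differ.

```python
# Minimal NER sketch with spaCy's general-purpose English model; a clinical
# pipeline would use domain-specific models and labels.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
note = "Patient reports chest pain and was prescribed 81 mg aspirin daily."
doc = nlp(note)

for ent in doc.ents:
    print(ent.text, ent.label_)   # each detected entity and its predicted label
```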
This is a role for highly technical data engineers who combine outstanding oral and written communication skills with the ability to code up prototypes and productionize them using a wide range of tools, algorithms, and languages. Most importantly, they need the ability to autonomously plan and organize their work assignments based on high-level team goals.
You will be responsible for setting an agenda to develop and ship data-driven architectures that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company and help build a foundation of tools and practices used by quantitative staff across the company.
During a typical day with our team, expect to work on one or more projects around the following:
1. Create and maintain optimal data pipeline architectures for ML
2. Develop a strong API ecosystem for ML pipelines
3. Building CI/CD pipelines for ML deployments using Github Actions, Travis, Terraform and Ansible
4. Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems
5. Knowledge of software engineering best practices across the development lifecycle, coding standards, code reviews, source management, build processes, testing, and operations
6. Deploying data pipelines in production using Infrastructure-as-Code platforms
7. Designing scalable implementations of the models developed by our Data Science teams
8. Big data and distributed ML with PySpark on AWS EMR, and more!
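As a minimal sketch of item 8 (batch processing with PySpark, e.g. on AWS EMR), the snippet below aggregates a hypothetical claims dataset; the S3 paths and column names are illustrative placeholders, not Episource's schema.

```python
# Minimal PySpark sketch of a batch aggregation step in a data pipeline;
# paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-aggregation").getOrCreate()

claims = spark.read.parquet("s3://example-bucket/claims/")      # placeholder input path
summary = (
    claims
    .groupBy("member_id")                                       # placeholder column
    .agg(F.countDistinct("icd_code").alias("distinct_codes"))   # placeholder column
)
summary.write.mode("overwrite").parquet("s3://example-bucket/claims-summary/")
```

On EMR, a job like this would typically be submitted as a Spark step, with the cluster itself provisioned through an IaC tool such as Terraform.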
BASIC REQUIREMENTS
- Bachelor’s degree or greater in Computer Science, IT, or related fields
- Minimum of 5 years of experience in cloud, DevOps, MLOps & data projects
- Strong experience with bash scripting, Unix environments, and building scalable/distributed systems
- Experience with automation/configuration management using Ansible, Terraform, or equivalent
- Very strong experience with AWS and Python
- Experience building CI/CD systems
- Experience with containerization technologies like Docker, Kubernetes, ECS, EKS, or equivalent
- Ability to build and manage application and performance monitoring processes