
API Lead Developer
Job Overview:
As an API developer for a very large client, you will fill the role of a hands-on Azure API Developer. We are looking for someone with the technical expertise to build and maintain sustainable API solutions that support the client's identified needs and expectations.
Delivery Responsibilities
- Implement an API architecture using Azure API Management, including security, the API gateway, analytics, and API services (see the illustrative sketch after this list)
- Design reusable assets, components, standards, frameworks, and processes to support and facilitate API and integration projects
- Conduct functional, regression, and load testing on APIs
- Gather requirements and define the strategy for application integration
- Develop using the following integration protocols and principles: SOAP and the web services stack, RESTful APIs, and RPC/RFC
- Analyze, design, and coordinate the development of major API components, including hands-on implementation, testing, review, build automation, and documentation
- Work with the DevOps team to package release components for deployment into higher environments
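For illustration only, here is a minimal Python sketch of calling an API published through Azure API Management, using the standard Ocp-Apim-Subscription-Key header for gateway security. The gateway host, API path, and key shown are hypothetical placeholders, not details from this role.

import requests

APIM_GATEWAY = "https://contoso-apim.azure-api.net"  # hypothetical APIM gateway host
SUBSCRIPTION_KEY = "<your-apim-subscription-key>"    # issued per APIM subscription

def get_orders(customer_id: str) -> dict:
    # Call a hypothetical 'orders' API exposed behind the APIM gateway.
    response = requests.get(
        f"{APIM_GATEWAY}/orders/v1/customers/{customer_id}/orders",
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},  # APIM subscription header
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(get_orders("12345"))

In practice the gateway would also enforce the policies configured in APIM (rate limiting, JWT validation, transformations), which is where much of the security and analytics work described above happens.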
Required Qualifications
- Expert hands-on experience in the following:
- Technologies such as Spring Boot, microservices, API management and gateways, event streaming, cloud-native patterns, and observability and performance optimization
- Data modeling, master and operational data stores, data ingestion and distribution patterns, ETL/ELT technologies, relational and non-relational databases, and database optimization patterns
- 5+ years of experience with Azure APIM
- 8+ years of experience in Azure SaaS and PaaS
- 8+ years of experience in API management, including technologies such as MuleSoft and Apigee
- At least the last 5 years in consulting, with the most recent implementation on Azure SaaS services
- 5+ years in MS SQL/MySQL development, including data modeling, concurrency, and stored procedure development and tuning
- Excellent communication skills with a demonstrated ability to engage, influence, and encourage partners and stakeholders to drive collaboration and alignment
- High degree of organization and individual initiative; results- and solution-oriented, with personal accountability and resiliency
- Should be a self-starter and team player, capable of working with a team of architects, co-developers, and business analysts
Preferred Qualifications:
- Ability to work in a collaborative team, mentoring and training junior team members
- Working knowledge of building data integration, data engineering, and orchestration solutions
- Position requires expert knowledge across multiple platforms, integration patterns, processes, data/domain models, and architectures.
- Candidates must demonstrate an understanding of the following disciplines: enterprise architecture, business architecture, information architecture, application architecture, and integration architecture.
- Ability to focus on business solutions and understand how to achieve them according to the given timeframes and resources.
- Recognized as an expert/thought leader. Anticipates and solves highly complex problems with a broad impact on a business area.
- Experience with Agile Methodology / Scaled Agile Framework (SAFe).
- Outstanding oral and written communication skills, including formal presentations for all levels of management, combined with strong collaboration and influencing skills.
Preferred Education/Skills:
- Master's degree preferred
- Bachelor's degree in Computer Science with a minimum of 12 years of relevant experience, or equivalent.

Similar jobs
Technical Skills:
- Hands-on experience with AWS, Google Cloud Platform (GCP), and Microsoft Azure cloud computing
- Proficiency in Windows Server and Linux server environments
- Proficiency with Internet Information Services (IIS), Nginx, Apache, etc.
- Experience deploying .NET applications (ASP.NET, MVC, Web API, WCF, etc.), as well as .NET Core, Python, and Node.js applications
- Familiarity with GitLab or GitHub for version control and Jenkins for CI/CD processes
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark (see the illustrative sketch after this list).
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
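For illustration only, a minimal sketch of the kind of Glue/PySpark ETL job described above: it reads a table from the Glue Data Catalog, applies a simple transformation, and writes Parquet output to S3. The database, table, and bucket names are hypothetical placeholders.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a hypothetical catalog table into a Spark DataFrame
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
).toDF()

# Simple transformation: keep completed orders and aggregate daily totals
daily_totals = (
    orders.filter(orders["status"] == "COMPLETED")
          .groupBy("order_date")
          .agg({"amount": "sum"})
)

# Write the results to a hypothetical S3 location in Parquet format
daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_totals/")

job.commit()

A production pipeline would add partitioning, job bookmarks for incremental loads, and error handling, but a Glue PySpark job generally follows this shape.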
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
Role: Sr. DevOps Engineer
Location: Bangalore
Experience: 5+ years
Responsibilities
- Implementing development, testing, and automation tools, and the supporting IT infrastructure
- Planning the team structure and activities, and taking part in project management
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Troubleshooting issues and fixing code bugs
- Monitoring processes across the entire lifecycle for adherence, and updating or creating processes to drive improvement and minimize waste
- Encouraging and building automated processes wherever possible
- Incident management and root cause analysis.
- Selecting and deploying appropriate CI/CD tools
- Striving for continuous improvement and building continuous integration, continuous delivery, and continuous deployment (CI/CD) pipelines
- Mentoring and guiding the team members
- Monitoring and measuring customer experience and KPIs
Requirements
- 5-6 years of relevant experience in a DevOps role.
- Good knowledge of cloud technologies such as AWS and Google Cloud.
- Familiarity with container orchestration services, especially Kubernetes (mandatory), and good knowledge of Docker.
- Experience administering and deploying development and CI/CD tools such as Git, Jira, GitLab, or Jenkins.
- Good knowledge of complex debugging mechanisms (especially the JVM) and Java programming experience (mandatory)
- Significant experience with Windows and Linux operating system environments
- A team player with excellent communication skills.
- At least 5 years of development experience with cloud technologies (AWS and Azure).
- Experience implementing DevOps practices and tools in areas such as CI/CD with Jenkins, environment automation, release automation, virtualization, infrastructure as code, and metrics tracking.
- Hands-on experience configuring DevOps tools in different environments.
- Strong knowledge of working with DevOps design patterns, processes and best practices
- Hands-on experience setting up build pipelines.
- Prior working experience in system administration or architecture on Windows or Linux.
- Must have experience with Git (Bitbucket, GitHub, GitLab)
- Hands-on experience with Jenkins pipeline scripting.
- Hands-on knowledge of at least one scripting language (NAnt, Perl, Python, shell, or PowerShell)
- Configuration-level skills in tools like SonarQube (or similar) and Artifactory.
- Expertise in virtual infrastructure (VMware, VirtualBox, QEMU, KVM, or Vagrant) and environment automation/provisioning using SaltStack, Ansible, Puppet, or Chef
- Deploying, automating, maintaining, and managing Azure cloud-based production systems, including capacity monitoring.
- Good to have: experience migrating code repositories from one source control system to another.
- Hands-on experience with Docker containers and orchestration-based deployments such as Kubernetes, Service Fabric, and Docker Swarm.
- Must have good communication and problem-solving skills
Key Responsibilities:
- Develop and Maintain CI/CD Pipelines: Design, implement, and manage CI/CD pipelines using GitOps practices.
- Kubernetes Management: Deploy, manage, and troubleshoot Kubernetes clusters to ensure high availability and scalability of applications (see the illustrative sketch after this list).
- Cloud Infrastructure: Design, deploy, and manage cloud infrastructure on AWS, utilizing services such as EC2, S3, RDS, Lambda, and others.
- Infrastructure as Code: Implement and manage infrastructure using IaC tools like Terraform, CloudFormation, or similar.
- Monitoring and Logging: Set up and manage monitoring, logging, and alerting systems to ensure the health and performance of the infrastructure.
- Automation: Identify and automate repetitive tasks to improve efficiency and reliability.
- Security: Implement security best practices and ensure compliance with industry standards.
- Collaboration: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of products.
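For illustration only, a minimal sketch of the day-to-day Kubernetes troubleshooting described above, using the official Python client to report pods that are not running. The namespace is a hypothetical placeholder; code running inside a cluster would use load_incluster_config() instead of the local kubeconfig.

from kubernetes import client, config

def report_unready_pods(namespace: str = "default") -> None:
    # Print pods in the namespace whose phase is not Running.
    config.load_kube_config()  # reads the local kubeconfig
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        if pod.status.phase != "Running":
            print(f"{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    report_unready_pods("default")

A check like this would normally be wired into the monitoring and alerting stack (Prometheus, Grafana, or similar) rather than run by hand.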
Required Skills and Qualifications:
- Experience: 2-5 years of experience in a DevOps role.
- AWS: In-depth knowledge of AWS services and solutions.
- CI/CD Tools: Experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar.
- GitOps Expertise: Proficient in GitOps methodologies and tools.
- Kubernetes: Strong hands-on experience with Kubernetes and container orchestration.
- Scripting and Automation: Proficient in scripting languages such as Bash, Python, or similar.
- Infrastructure as Code (IaC): Hands-on experience with IaC tools like Terraform, CloudFormation, or similar.
- Monitoring Tools: Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack, or similar.
- Version Control: Strong understanding of version control systems, primarily Git.
- Problem-Solving: Excellent problem-solving and debugging skills.
- Collaboration: Ability to work in a fast-paced, collaborative environment.
- Education: Bachelor’s or master’s degree in computer science or a related field.
Why LiftOff?
We at LiftOff specialize in product creation; our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.
Many on the team are serial entrepreneurs with a history of successful exits.
As a DevOps Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.
Must Have
*Work experience of at least 2 years with Kubernetes.
*Hands-on experience working with Kubernetes, preferably on Azure.
*Well-versed with kubectl
*Experience using Azure Monitor and setting up analytics and reports for Azure containers and services.
*Monitoring and observability
*Setting up alerts and auto-scaling
Nice to have
*Scripting and automation
*Experience with Jenkins or other CI/CD pipelines
*Past experience setting up cloud infrastructure, configurations, and database backups
*Experience with Azure App Service
*Experience setting up WebSocket-based applications.
*Working knowledge of Azure APIM
We are a group of passionate people driven by core values. We strive to make every process transparent, and we offer flexible working hours along with an excellent startup culture and vibe.
Position: Cloud and Infrastructure Automation Consultant
Location: India (Pan India), Work from Home
The position:
This exciting role in Ashnik’s consulting team brings a great opportunity to design and deploy automation solutions for Ashnik’s enterprise customers spread across SEA and India. The role takes the lead in consulting customers on the automation of cloud and datacentre-based resources. You will work hands-on with your team, focusing on infrastructure solutions and automating infrastructure deployments that are secure and compliant. You will provide implementation oversight of solutions to overcome technology and business challenges.
Responsibilities:
· Lead consultative discussions to identify customer challenges and suggest right-fit open-source tools
· Independently determine the needs of the customer and create solution frameworks
· Design and develop moderately complex software solutions to meet those needs
· Use a process-driven approach in designing and developing solutions.
· Create consulting work packages and detailed SOWs, and assist the sales team in positioning them to enterprise customers
· Be responsible for implementing automation recipes (Ansible/Chef) and scripts (Ruby, PowerShell, Python) as part of an automated installation/deployment process (see the illustrative sketch after this list)
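For illustration only, a minimal Python sketch of driving an automated deployment step by invoking an Ansible playbook from a script; the playbook name, inventory path, and variable are hypothetical placeholders.

import subprocess
import sys

def run_playbook(playbook, inventory, extra_vars=None):
    # Run an Ansible playbook and return its exit code.
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    if extra_vars:
        for key, value in extra_vars.items():
            cmd += ["-e", f"{key}={value}"]
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    # Hypothetical playbook, inventory, and variable used for illustration only.
    sys.exit(run_playbook("deploy_webserver.yml", "inventories/prod", {"app_version": "1.4.2"}))

In a real engagement a wrapper like this would typically sit inside a CI/CD pipeline (Jenkins, GitHub Actions) rather than be run manually.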
Experience and skills required:
· 8 to 10 years of experience in IT infrastructure
· Proven technical skill in designing and delivering enterprise-level solutions involving the integration of complex technologies
· 6+ years of experience with RHEL/Windows system automation
· 4+ years of experience using Python and/or Bash scripting to solve and automate common system tasks
· Strong understanding and knowledge of networking architecture
· Experience with Sentinel Policy as Code
· Strong understanding of AWS and Azure infrastructure
· Experience deploying and utilizing automation tools such as Terraform, CloudFormation, CI/CD pipelines, Jenkins, and GitHub Actions
· Experience with HashiCorp Configuration Language (HCL) for module and policy development
· Knowledge of cloud tools including CloudFormation, CloudWatch, Control Tower, CloudTrail and IAM is desirable
This role requires a high degree of self-initiative, working with diverse teams, and working with customers spread across the Southeast Asia and India region. It requires you to be proactive in communicating with customers and internal teams about industry trends and technology developments, and in creating thought leadership.
About Us
Ashnik is a leading enterprise open-source solutions company in Southeast Asia and India, enabling organizations to adopt open source for their digital transformation goals. Founded in 2009, it offers a full-fledged Open-Source Marketplace, Solutions, and Services – Consulting, Managed, Technical, Training. Over 200 leading enterprises so far have leveraged Ashnik’s offerings in the space of Database platforms, DevOps & Microservices, Kubernetes, Cloud, and Analytics.
As a team culture, Ashnik is a family for its team members. Each member brings in a different perspective, new ideas and diverse background. Yet we all together strive for one goal – to deliver the best solutions to our customers using open-source software. We passionately believe in the power of collaboration. Through an open platform of idea exchange, we create a vibrant environment for growth and excellence.
Package: up to 20L
Experience: 8 years
As a SaaS DevOps Engineer, you will be responsible for providing automated tooling and process enhancements for SaaS deployment, application and infrastructure upgrades, and production monitoring.
- Development of automation scripts and pipelines for deployment and monitoring of new production environments.
- Development of automation scripts for upgrades, hotfix deployments, and maintenance.
- Work closely with Scrum teams and product groups to support the quality and growth of the SaaS services.
- Collaborate closely with the SaaS Operations team on day-to-day production activities, such as handling alerts and incidents.
- Assist the SaaS Operations team with customer-focused projects: migrations and feature enablement.
- Write knowledge articles to document known issues and best practices.
- Conduct regression tests to validate solutions or workarounds.
- Work in a globally distributed team.
What achievements should you have so far?
- Bachelor’s or master’s degree in Computer Science, Information Systems, or equivalent.
- Experience with containerization, deployment, and operations.
- Strong knowledge of CI/CD processes (Git, Jenkins, pipelines).
- Good experience with Linux systems and shell scripting.
- Basic cloud experience, preferably oriented toward MS Azure.
- Basic knowledge of containerized solutions (Helm, Kubernetes, Docker).
- Good networking skills and experience.
- Terraform or CloudFormation knowledge will be considered a plus.
- Ability to analyze a task from a system perspective.
- Excellent problem-solving and troubleshooting skills.
- Excellent written and verbal communication skills; mastery of English and the local language.
- Must be organized, thorough, autonomous, committed, flexible, customer-focused, and productive.
Must Haves: OpenShift, Kubernetes
Location: Currently in India (also willing to relocate to UAE)
An immediate joiner is preferred, with a notice period of 2 weeks to 1 month.
Add-on skills: Terraform, GitOps, Jenkins, ELK
This role requires a balance between hands-on infrastructure-as-code deployments and involvement in operational architecture and technology advocacy initiatives across the Numerator portfolio.
Responsibilities
Technical Skills
