50+ AWS (Amazon Web Services) Jobs in Pune

Job Summary:
We are seeking a seasoned SQL + ETL Developer with 4+ years of experience managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
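As a rough illustration of the Glue + PySpark pipeline work this role centres on, here is a minimal sketch of a Glue job that reads a catalogued table, aggregates it with Spark, and writes partitioned Parquet to S3. The database, table, and bucket names are hypothetical placeholders, not details from this posting.

```python
# Minimal AWS Glue job sketch (PySpark). Database, table, and bucket
# names are hypothetical stand-ins.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a large source table registered in the Glue Data Catalog.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders")

# Let Spark do the heavy lifting on 100M+ rows.
df = (dyf.toDF()
      .filter(F.col("order_status") == "COMPLETED")
      .groupBy("customer_id", "order_date")
      .agg(F.sum("amount").alias("daily_total")))

# Write partitioned Parquet back to the data lake.
(df.write.mode("overwrite")
   .partitionBy("order_date")
   .parquet("s3://example-data-lake/curated/daily_totals/"))

job.commit()
```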

Profile: AWS Data Engineer
Mode- Hybrid
Experience- 5-7 years
Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning
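To make the "Lambda functions for serverless data processing" bullet concrete, here is a hedged sketch of a Lambda handler that reacts to new S3 objects and refreshes an Athena table's partitions via boto3. The database, table name, and output bucket are hypothetical.

```python
# Sketch of a serverless processing step: a Lambda handler that runs
# an Athena statement when a new object lands in S3. Names below are
# hypothetical placeholders.
import boto3

athena = boto3.client("athena")

def handler(event, context):
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # Refresh partition metadata so Athena sees the new data.
        athena.start_query_execution(
            QueryString="MSCK REPAIR TABLE events_raw",
            QueryExecutionContext={"Database": "analytics_db"},
            ResultConfiguration={
                "OutputLocation": "s3://example-athena-results/"
            },
        )
        print(f"Registered new object: {key}")
```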

About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.
Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
- Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency.
- Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
- Implement data governance, security, and compliance best practices.
- Build and maintain data models, transformations, and data marts for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
- Automate infrastructure and deployments using Terraform, Airflow, or dbt.
- Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
- Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
Basic Qualifications:
- Bachelor’s or Master’s Degree in Computer Science or Data Science.
- 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
- Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
- Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
- Proficiency in SQL, Python, or Scala for data transformation and analytics.
- Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
- Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
- Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data governance, access control, and encryption strategies.
- Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.
Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Experience with machine learning feature engineering pipelines in Databricks.
- Contributions to open-source data engineering projects.
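As an illustration of the Delta Lake / Lakehouse work described above, the following minimal sketch performs a Delta MERGE upsert with PySpark. It assumes a Spark session with the delta-spark package enabled (as on a Databricks runtime); the paths and join key are hypothetical.

```python
# Illustrative Delta Lake upsert. Paths and column names are
# hypothetical; assumes Delta Lake is available on the cluster.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("s3://example-lake/staging/customers/")
target = DeltaTable.forPath(spark, "s3://example-lake/curated/customers/")

# MERGE keeps the curated table current without full rewrites, which
# is central to Lakehouse reliability and cost control.
(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```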

Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing.
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.
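For the orchestration tools mentioned under "Preferred," a minimal Airflow DAG along the following lines would schedule the Glue work daily. The DAG id and Glue job name are hypothetical.

```python
# Minimal Airflow DAG sketch: one task triggering a (hypothetical)
# Glue job once a day.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
import boto3

def run_glue_job():
    glue = boto3.client("glue")
    glue.start_job_run(JobName="example-etl-job")  # hypothetical job name

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="start_glue_job", python_callable=run_glue_job)
```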
Role - ETL Developer
Work Mode - Hybrid
Experience- 4+ years
Location - Pune, Gurgaon, Bengaluru, Mumbai
Required Skills - AWS, AWS Glue, PySpark, ETL, SQL
Required Skills:
- 4+ years of hands-on experience in MySQL, including SQL queries and procedure development
- Experience in PySpark, AWS, and AWS Glue
- Experience in AWS migration
- Experience with automated scripting and tracking KPIs/metrics for database performance
- Proficiency in shell scripting and ETL.
- Strong communication skills and a collaborative team player
- Knowledge of Python and AWS RDS is a plus
Job Overview:
We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.
Responsibilities:
- Design, develop, and maintain backend services and microservices.
- Build and integrate RESTful APIs across distributed systems.
- Ensure performance, scalability, and reliability of backend systems.
- Collaborate with cross-functional teams and participate in agile development.
- Deploy and maintain applications on AWS cloud infrastructure.
- Contribute to automation initiatives and AI/ML feature integration.
- Write clean, testable, and maintainable code following best practices.
- Participate in code reviews and technical discussions.
Required Skills:
- 4+ years of backend development experience.
- Strong proficiency in Java and Spring/Spring Boot frameworks.
- Solid understanding of microservices architecture.
- Experience with REST APIs, CI/CD, and debugging complex systems.
- Proficient in AWS services such as EC2, Lambda, S3.
- Strong analytical and problem-solving skills.
- Excellent communication in English (written and verbal).
Good to Have:
- Experience with automation tools like Workato or similar.
- Hands-on experience with Python development.
- Familiarity with AI/ML features or API integrations.
- Comfortable working with US-based teams (flexible hours).
About the Role
We are looking for a highly skilled Senior DevOps Engineer with expertise in Java-based applications. You will lead automation, deployment, and cloud infrastructure efforts, ensuring efficient CI/CD pipelines and scalable, secure environments.
Key Responsibilities
- CI/CD Management: Design, implement, and optimize CI/CD pipelines for Java applications using Jenkins/GitLab.
- Cloud Infrastructure: Deploy and manage cloud resources (AWS, Azure, or GCP) for scalable applications.
- Containerization & Orchestration: Manage Docker containers and Kubernetes clusters for streamlined deployment.
- Automation & Scripting: Write efficient scripts (Python, Bash) for automation tasks related to infrastructure and deployments.
- Security & Compliance: Implement security best practices for cloud environments and containerized applications.
- Monitoring & Performance Tuning: Utilize tools like Prometheus, Grafana, ELK stack for monitoring system health and optimizing performance.
- Collaboration: Work with developers to enhance deployment workflows and troubleshoot production issues.
Required Skills
- Programming: Strong knowledge of Java, Spring Boot, and Microservices architecture.
- DevOps Tools: Experience with Jenkins, GitLab CI/CD, Terraform, and Ansible.
- Cloud Platforms: Expertise in AWS, Azure, or GCP with hands-on infrastructure management.
- Containers: Proficiency in Docker, Kubernetes, Helm.
- Networking & Security: Understanding of VPNs, firewalls, and IAM policies.
- Version Control: Git and GitHub/Bitbucket experience.
Preferred Qualifications
- Certifications in AWS, Kubernetes, or DevOps.
- Knowledge of Istio, Service Mesh, or Kafka.
- Experience in high-traffic production environments.
Benefits
- Competitive salary & bonuses.
- Flexible work arrangements (hybrid/remote options).
- Training programs & certifications.


Job Description:
Deqode is seeking a skilled .NET Full Stack Developer with expertise in .NET Core, Angular, and C#. The ideal candidate will have hands-on experience with either AWS or Azure cloud platforms. This role involves developing robust, scalable applications and collaborating with cross-functional teams to deliver high-quality software solutions.
Key Responsibilities:
- Develop and maintain web applications using .NET Core, C#, and Angular.
- Design and implement RESTful APIs and integrate with front-end components.
- Collaborate with UI/UX designers, product managers, and other developers to deliver high-quality products.
- Deploy and manage applications on cloud platforms (AWS or Azure).
- Write clean, scalable, and efficient code following best practices.
- Participate in code reviews and provide constructive feedback.
- Troubleshoot and debug applications to ensure optimal performance.
- Stay updated with emerging technologies and propose improvements to existing systems.
Required Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum of 4 years of professional experience in software development.
- Proficiency in .NET Core, C#, and Angular.
- Experience with cloud services (either AWS or Azure).
- Strong understanding of RESTful API design and implementation.
- Familiarity with version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and collaboratively in a team environment.
Preferred Qualifications:
- Experience with containerization tools like Docker and orchestration platforms like Kubernetes.
- Knowledge of CI/CD pipelines and DevOps practices.
- Familiarity with Agile/Scrum methodologies.
- Strong communication and interpersonal skills.
What We Offer:
- Competitive salary and performance-based incentives.
- Flexible working hours and remote work options.
- Opportunities for professional growth and career advancement.
- Collaborative and inclusive work environment.
- Access to the latest tools and technologies.



Job Title: .NET Developer
Location: Pan India (Hybrid)
Employment Type: Full-Time
Join Date: Immediate / Within 15 Days
Experience: 4+ Years
Deqode is looking for a skilled and passionate Senior .NET Developer to join our growing tech team. The ideal candidate is an expert in building scalable web applications and has hands-on experience with cloud platforms and modern front-end technologies.
Key Responsibilities:
- Design, develop, and maintain scalable web applications using .NET Core.
- Work on RESTful APIs and integrate third-party services.
- Collaborate with UI/UX designers and front-end developers using Angular or React.
- Deploy, monitor, and maintain applications on AWS or Azure.
- Participate in code reviews, technical discussions, and architecture planning.
- Write clean, well-structured, and testable code following best practices.
Must-Have Skills:
- 4+ years of experience in software development using .NET Core.
- Proficiency with Angular or React for front-end development.
- Strong working knowledge of AWS or Microsoft Azure.
- Experience with SQL/NoSQL databases.
- Excellent communication and team collaboration skills.
Education:
- Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
Required Skills:
- Experience in a systems administration, SRE, or DevOps-focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in leveraging Automation / DevOps principles, experience with operational tools, and able to apply best practices for infrastructure and software deployment (Ansible).
- HAProxy, Nginx, SSH, MySQL configuration and operation experience
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation

🚀 We’re Hiring: PHP Developer at Deqode
📍 Location: Pune (Hybrid)
🕒Experience: 4–6 Years
⏱️ Notice Period: Immediate Joiner
We're looking for a skilled PHP Developer to join our team. If you have a strong grasp of secure coding practices, are experienced in PHP upgrades, and thrive in a fast-paced deployment environment, we’d love to connect with you!
🔧 Key Skills:
- PHP | MySQL | JavaScript | Jenkins | Nginx | AWS
🔐 Security-Focused Responsibilities Include:
- Remediation of PenTest findings
- XSS mitigation (input/output sanitization)
- API rate limiting
- 2FA integration
- PHP version upgrade
- Use of AWS Secrets Manager
- Secure session and password policies
We’re Hiring: AI-Native DevSecOps Engineers at FlytBase
📍 Location: Pune (Onsite)
This will be one of the most challenging and rewarding roles of your career.
At FlytBase, prompting is thinking. We don’t care if you’ve memorized APIs or collected DevOps certifications.
We care how you think, how you debug under pressure, and how you solve infra problems no one else can.
We don’t hire engineers to maintain systems.
We hire infra architects to design infrastructure that runs—and evolves—on its own.
If you need step-by-step tickets or constant direction—don’t apply.
We’re looking for engineers who can design, secure, and scale real-world systems with speed, clarity, and zero excuses.
If that’s what you’ve been waiting for—read on.
The Mission
FlytBase is building the autonomous backbone for aerial intelligence—drone fleets that operate 24/7 at industrial scale.
Our platform flies missions, detects anomalies, and delivers insights—with no human in the loop.
The infra behind this? That’s what you build.
Your Loop
• Design AI-secured CI/CD pipelines (GitHub, Docker, Terraform, SAST/DAST)
• Architect AWS infra (EC2, S3, IAM, VPCs, EKS) with Infrastructure-as-Code
• Build intelligent observability systems (Grafana, Dynatrace, LLM-based detection)
• Define fallback, rollback & recovery loops that don’t need escalation
• Enforce compliance (SOC2, ISO27001, GDPR) without killing velocity
• Own SLAs from definition → delivery → defense
• Automate what used to need a team. Then automate again.
Who We’re Looking For
✅ You treat infrastructure like a product
✅ You already use AI Tools to move 5x faster
✅ You can go from “zero” to “live infra” in 48 hours
Bonus points if you’ve:
• Built custom DevOps bots or AI agents
• Got an OSCP or hacked together your own SOC2 framework
• Shipped production infra solo
• Open-sourced your infra tools or scripts
Join Us
If you’re still reading—good. That’s a signal.
Apply here 👉 https://lnkd.in/gsqjaJSP

Role - MLOps Engineer
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Role Overview
We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.
Key Responsibilities
- Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring.
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems.
- Develop and manage scalable infrastructure using AWS, particularly AWS SageMaker.
- Automate ML workflows using CI/CD best practices and tools.
- Ensure model reproducibility, governance, and performance tracking.
- Monitor deployed models for data drift, model decay, and performance metrics.
- Implement robust versioning and model registry systems.
- Apply security, performance, and compliance best practices across ML systems.
- Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.
Required Skills & Qualifications
- 4+ years of experience in Software Engineering or MLOps, preferably in a production environment.
- Proven experience with AWS services, especially AWS SageMaker for model development and deployment.
- Working knowledge of AWS DataZone (preferred).
- Strong programming skills in Python, with exposure to R, Scala, or Apache Spark.
- Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes).
- Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools.
- Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Solid understanding of DevOps and cloud-native infrastructure practices.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
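As a concrete taste of the SageMaker work described above, here is a hedged sketch that invokes a deployed endpoint via boto3 and records latency, the kind of signal that would feed the drift and performance monitoring this role owns. The endpoint name and payload shape are hypothetical.

```python
# Sketch: call a SageMaker endpoint and log basic health data.
# Endpoint name and payload format are hypothetical.
import json
import time
import boto3

runtime = boto3.client("sagemaker-runtime")

def score(features):
    start = time.time()
    response = runtime.invoke_endpoint(
        EndpointName="churn-model-prod",      # hypothetical endpoint
        ContentType="application/json",
        Body=json.dumps({"instances": [features]}),
    )
    latency_ms = (time.time() - start) * 1000
    prediction = json.loads(response["Body"].read())
    # In production this latency would feed a CloudWatch metric to
    # support the drift/decay monitoring described above.
    print(f"latency={latency_ms:.1f}ms prediction={prediction}")
    return prediction
```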
Job Title : Senior Backend Engineer – Java, AI & Automation
Experience : 4+ Years
Location : Any Cognizant location (India)
Work Mode : Hybrid
Interview Rounds :
- Virtual
- Face-to-Face (In-person)
Job Description :
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.

Job Title: Python Django Microservices Lead / Django Backend Lead Developer
Location: Indore/ Pune (Hybrid - Wednesday and Thursday WFO)
Timings - 12:30 PM to 9:30 PM
Experience Level: 8+ Years
Job Overview: We are seeking an experienced Django Backend Lead Developer to join our team. The ideal candidate will have a strong background in backend development, cloud technologies, and big data processing. This role involves leading technical projects, mentoring junior developers, and ensuring the delivery of high-quality solutions.
Responsibilities:
Lead the development of backend systems using Django.
Design and implement scalable and secure APIs.
Integrate Azure Cloud services for application deployment and management.
Utilize Azure Databricks for big data processing and analytics.
Implement data processing pipelines using PySpark.
Collaborate with front-end developers, product managers, and other stakeholders to deliver comprehensive solutions.
Conduct code reviews and ensure adherence to best practices.
Mentor and guide junior developers.
Optimize database performance and manage data storage solutions.
Ensure high performance and security standards for applications.
Participate in architecture design and technical decision-making.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
8+ years of experience in backend development.
8+ years of experience with Django.
Proven experience with Azure Cloud services.
Experience with Azure Databricks and PySpark.
Strong understanding of RESTful APIs and web services.
Excellent communication and problem-solving skills.
Familiarity with Agile methodologies.
Experience with database management (SQL and NoSQL).
Skills: Django, Python, Azure Cloud, Azure Databricks, Delta Lake and Delta tables, PySpark, SQL/NoSQL databases, RESTful APIs, Git, and Agile methodologies
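As a small illustration of the "scalable and secure APIs" responsibility above, here is a minimal Django view with pagination and a lean queryset. The Order model and app name are hypothetical, not from the posting.

```python
# Minimal sketch of a paginated JSON endpoint of the kind this lead
# would design and review. Model and app names are hypothetical.
from django.core.paginator import Paginator
from django.http import JsonResponse
from django.views.decorators.http import require_GET

from myapp.models import Order  # hypothetical app and model

@require_GET
def orders_api(request):
    # values() keeps the query lean; indexes and only() would be the
    # next levers when optimizing database performance.
    qs = Order.objects.values("id", "status", "total").order_by("-id")
    page = Paginator(qs, per_page=50).get_page(request.GET.get("page", 1))
    return JsonResponse({
        "results": list(page.object_list),
        "page": page.number,
        "num_pages": page.paginator.num_pages,
    })
```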

Role - MLOps Engineer
Required Experience - 4 Years
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Key Requirements:
- 4+ years of experience in Software Engineering with MLOps focus
- Strong expertise in AWS, particularly AWS SageMaker (required)
- AWS DataZone experience (preferred)
- Proficiency in Python, R, Scala, or Spark
- Experience developing scalable, reliable, and secure applications
- Track record of production-grade development, integration and support

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience in Databricks and in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note that the salary bracket will vary according to the candidate's experience -
- Experience from 4 yrs to 6 yrs - salary up to 22 LPA
- Experience from 5 yrs to 8 yrs - salary up to 30 LPA
- Experience more than 8 yrs - salary up to 40 LPA
At Verto, we’re passionate about helping businesses in emerging markets reach the world. What first started life as an FX solution for trading the Nigerian Naira has now become a market-leading platform, changing the way thousands of businesses transfer money in and out of emerging markets.
We believe that where you do business shouldn’t determine how successful you are, or your ability to scale. Millions of companies a day have to juggle long settlement periods, high transaction fees and issues accessing liquidity in order to trade with African businesses. We’re on a mission to change this by creating equal access to easy payment and liquidity solutions that are already a given in developed markets.
We’re not alone in realising the opportunity and need to solve for emerging markets. We’re backed by world-class investors including Y-Combinator, Quona and MEVP, power payments for some of the most disruptive start-ups in the world and have a list of accolades from leading publications including being voted ‘Fintech Start Up of the Year’ at Fintech Awards London 2022.
Each year we process billions of dollars of payments and provide companies with solutions which help them to save money, automate processes and grow, but we’re only just getting started.
We’re seeking a driven and results-oriented Senior Data Engineer who is excited to help build out a best-in-class Data Platform. In this role, you will be expected to achieve key milestones such as improving on our existing Data Warehouse, implementing a CI/CD framework, and enabling core technologies such as dbt and git. You will play a pivotal role in enabling long-term scalability and efficiency when it comes to all things data, and leveraging your expertise in Data Engineering to drive measurable impact.
In this role you will:
- Conceptualize, maintain, and improve the data architecture
- Evaluate design and operational cost-benefit tradeoffs within systems
- Design, build, and launch collections of data models that support multiple use cases across different products or domains
- Solve our most challenging data integration problems, optimising ELT pipelines and frameworks, and sourcing from structured and unstructured data sources
- Implement CI/CD frameworks
- Create and contribute to frameworks that improve the accuracy, efficiency, and general integrity of data
- Design and execute ‘best-in-class’ schema design
- Implement other potential data tools
- Define and manage refresh schedules, load-balancing, and SLAs for all data sets in allocated areas of ownership
- Collaborate with engineers, product managers, and data analysts to understand data needs, identify and resolve issues, and help set best practices for efficient data capture
- Determine and implement the data governance model and processes within your ownership realm (GDPR, PII, etc.)
You’ll be responsible for:
- Taking ownership of the data engineering process - from project scoping and design through communication, execution, and conclusion
- Supporting and strengthening data infrastructure together with the data team and engineering
- Supporting the organisation in understanding the importance of data and advocating for best-in-class infrastructure
- Mentoring and educating team members on best-in-class DE practices
- Prioritising workload effectively
- Supporting quarterly and half-year planning from a Data Engineering perspective
Note: This is a fast-growing company, and the ideal candidate will be comfortable leading other data engineers in the future. However, as this is currently a small data team, you may be asked to contribute to projects outside of the typical Data Engineering role. This will most probably involve analytics engineering responsibilities such as maintenance and improvement of ‘core’ tables (transactions, companies, product/platform management).
Skills and Qualifications
- University degree, ideally in data engineering, software engineering, computer science, or another numerate discipline
- 7+ years of data engineering experience or equivalent
- Expert experience building data warehouses and ETL pipelines
- Expert experience of SQL, python, git, dbt (incl. query efficiency and optimization)
- Expert experience of Cloud Data Platforms (AWS, Snowflake and/or Databricks) → Qualification preferred, not mandatory
- Significant experience of Automation and Integrations tools (FiveTran, Airflow, Astronomer or similar)
- Significant experience with IaC tools (Terraform, Docker, Kubernetes, or similar)
- Significant experience with CI/CD tools (Jenkins, GitHub Actions, CircleCI or similar)
Preferred Experience:
- Experience with real time data pipelines (AWS Kinesis, Kafka, Spark)
- Experience with observability tools (Metaplane, Monte Carlo, Datadog, or similar)
- Experience within FinTech/Finance/FX preferred
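As a toy illustration of the "frameworks that improve the accuracy, efficiency, and general integrity of data" bullet above, the following self-contained sketch runs declarative SQL checks. sqlite3 stands in for the real warehouse, and the table and checks are hypothetical.

```python
# Lightweight data-integrity checks: declarative row-count and null
# checks over a warehouse table. sqlite3 is a stand-in for the real
# warehouse; table and column names are hypothetical.
import sqlite3

CHECKS = [
    ("row_count", "SELECT COUNT(*) FROM transactions", lambda n: n > 0),
    ("no_null_ids", "SELECT COUNT(*) FROM transactions WHERE id IS NULL",
     lambda n: n == 0),
]

def run_checks(conn: sqlite3.Connection) -> bool:
    ok = True
    for name, sql, predicate in CHECKS:
        value = conn.execute(sql).fetchone()[0]
        passed = predicate(value)
        ok = ok and passed
        print(f"{name}: value={value} passed={passed}")
    return ok

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE transactions (id INTEGER, amount REAL)")
    conn.execute("INSERT INTO transactions VALUES (1, 9.99)")
    assert run_checks(conn)
```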
At Verto, we’re passionate about helping businesses in emerging markets reach the world. What first started life as an FX solution for trading the Nigerian Naira has now become a market-leading platform, changing the way thousands of businesses transfer money in and out of emerging markets.
We believe that where you do business shouldn’t determine how successful you are, or your ability to scale. Millions of companies a day have to juggle long settlement periods, high transaction fees and issues accessing liquidity in order to trade with African businesses. We’re on a mission to change this by creating equal access to easy payment and liquidity solutions that are already a given in developed markets.
We’re not alone in realising the opportunity and need to solve for emerging markets. We’re backed by world-class investors including Y-Combinator, Quona and MEVP, power payments for some of the most disruptive start-ups in the world and have a list of accolades from leading publications including being voted ‘Fintech Start Up of the Year’ at Fintech Awards London 2022.
Each year we process billions of dollars of payments and provide companies with solutions which help them to save money, automate processes and grow, but we’re only just getting started.
We’re seeking a driven and results-oriented Data Engineer who is excited to help build out a best-in-class Data Platform. In this role, you will be expected to achieve key milestones such as improving on our existing Data Warehouse, implementing a CI/CD framework, and enabling core technologies such as dbt and git. You will play a pivotal role in enabling long-term scalability and efficiency when it comes to all things data, and leveraging your expertise in Data Engineering to drive measurable impact.
In this role you will:
- Conceptualize, maintain, and improve the data architecture
- Evaluate design and operational cost-benefit tradeoffs within systems
- Design, build, and launch collections of data models that support multiple use cases across different products or domains
- Solve our most challenging data integration problems, optimising ELT pipelines and frameworks, and sourcing from structured and unstructured data sources
- Implement CI/CD frameworks
- Create and contribute to frameworks that improve the accuracy, efficiency, and general integrity of data
- Design and execute ‘best-in-class’ schema design
- Implement other potential data tools
- Define and manage refresh schedules, load-balancing, and SLAs for all data sets in allocated areas of ownership
- Collaborate with engineers, product managers, and data analysts to understand data needs, identify and resolve issues, and help set best practices for efficient data capture
- Determine and implement the data governance model and processes within your ownership realm (GDPR, PII, etc.)
You’ll be responsible for:
- Taking ownership of the data engineering process - from project scoping and design through communication, execution, and conclusion
- Supporting and strengthening data infrastructure together with the data team and engineering
- Supporting the organisation in understanding the importance of data and advocating for best-in-class infrastructure
- Mentoring and educating team members on best-in-class DE practices
- Prioritising workload effectively
- Supporting quarterly and half-year planning from a Data Engineering perspective
Note: As this is currently a small data team, you may be asked to contribute to projects outside of the typical Data Engineering role. This will most probably involve analytics engineering responsibilities such as maintenance and improvement of ‘core’ tables (transactions, companies, product/platform management).
Skills and Qualifications
- University degree, ideally in data engineering, software engineering, computer science, or another numerate discipline
- 4+ years of data engineering experience or equivalent
- Expert experience building data warehouses and ETL pipelines
- Expert experience of SQL, python, git, dbt (incl. query efficiency and optimization)
- Expert experience of Cloud Data Platforms (AWS, Snowflake and/or Databricks) → Qualification preferred, not mandatory
- Significant experience of Automation and Integrations tools (FiveTran, Airflow, Astronomer or similar)
- Significant experience with IaC tools (Terraform, Docker, Kubernetes, or similar)
- Significant experience with CI/CD tools (Jenkins, GitHub Actions, CircleCI or similar)
Preferred Experience:
- Experience with real time data pipelines (AWS Kinesis, Kafka, Spark)
- Experience with observability tools (Metaplane, Monte Carlo, Datadog, or similar)
- Experience within FinTech/Finance/FX preferred

At least 5 years of experience in testing and developing automation tests.
A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.
Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.
Familiarity with Playwright or other browser application testing frameworks is a significant advantage.
Proficiency in object-oriented programming and principles is required.
Extensive knowledge of AWS services is essential.
Strong expertise in REST API testing and SQL is required.
A solid understanding of testing and development life cycle methodologies is necessary.
Knowledge of the financial industry and trading systems is a plus.
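A typical REST API test for this role might look like the following pytest sketch; the base URL, endpoint, and response fields are hypothetical placeholders.

```python
# Illustrative REST API test using pytest and requests. The service
# URL and contract below are hypothetical.
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical service

@pytest.fixture
def session():
    s = requests.Session()
    s.headers.update({"Accept": "application/json"})
    return s

def test_get_trade_returns_expected_fields(session):
    resp = session.get(f"{BASE_URL}/trades/12345", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Contract check: the response must expose these fields.
    for field in ("id", "symbol", "quantity", "price"):
        assert field in body
```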

We are seeking a highly skilled Java full-stack developer with 5–8 years of experience to join our dynamic development team. The ideal candidate will have deep technical expertise across Java, Microservices, React/Redux, Kubernetes, DevOps tools, and GCP. You will work on designing and deploying full-stack applications that are robust, scalable, and aligned with business goals.
Key Responsibilities
- Design, develop, and deploy scalable full-stack applications using Java, React, and Redux
- Build microservices following SOLID principles
- Collaborate with cross-functional teams, including product owners, QA, BAs, and other engineers
- Write clean, maintainable, and efficient code
- Perform debugging, troubleshooting, and optimization
- Participate in code reviews and contribute to engineering best practices
- Stay updated on security, privacy, and compliance requirements
- Work in an Agile/Scrum environment using tools like JIRA and Confluence
Technical Skills Required
Frontend
- Strong proficiency in JavaScript and modern ES6 features
- Expertise in React.js with advanced knowledge of hooks (useCallback, useMemo, etc.)
- Solid understanding of Redux for state management
Backend
- Strong hands-on experience in Java
- Building and maintaining Microservices architectures
DevOps & Infrastructure
- Experience with CI/CD tools: Jenkins, Nexus, Maven, Ansible
- Terraform for infrastructure as code
- Containerization and orchestration using Docker and Kubernetes/GKE
- Experience with IAM, security roles, service accounts
Cloud
- Proficient with the services of any major cloud provider
Database
- Hands-on experience with PostgreSQL, MySQL, BigQuery
Scripting
- Proficiency in Bash/Shell scripting and Python
Non-Technical Skills
- Strong communication and interpersonal skills
- Ability to work effectively in distributed teams across time zones
- Quick learner and adaptable to new technologies
- Team player with a collaborative mindset
- Ability to explain complex technical concepts to non-technical stakeholders
Nice to Have
- Experience with NetReveal / Detica
Why Join Us?
- 🚀 Challenging Projects: Be part of innovative solutions making a global impact
- 🌍 Global Exposure: Work with international teams and clients
- 📈 Career Growth: Clear pathways for professional advancement
- 🧘♂️ Flexible Work Options: Hybrid and remote flexibility to support work-life balance
- 💼 Competitive Compensation: Industry-leading salary and benefits
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
📍 Position : Java Architect
📅 Experience : 10 to 15 Years
🧑💼 Open Positions : 3+
📍 Work Location : Bangalore, Pune, Chennai
💼 Work Mode : Hybrid
📅 Notice Period : Immediate joiners preferred; up to 1 month maximum
🔧 Core Responsibilities :
- Lead architecture design and development for scalable enterprise-level applications.
- Own and manage all aspects of technical development and delivery.
- Define and enforce best coding practices, architectural guidelines, and development standards.
- Plan and estimate the end-to-end technical scope of projects.
- Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
- Mentor and lead individual contributors and small development teams.
- Collaborate with cross-functional teams, including DevOps, Product, and QA.
- Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.
🛠️ Required Technical Skills :
- Strong hands-on expertise in Java, Spring Boot, Microservices architecture
- Experience with Kafka or similar messaging/event streaming platforms
- Proficiency in cloud platforms – AWS and Azure (must-have)
- Exposure to frontend technologies (nice-to-have)
- Solid understanding of HLD, system architecture, and design patterns
- Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
- Agile/Lean development, Pair Programming, and Continuous Integration practices
- Polyglot mindset is a plus (Scala, Golang, Python, etc.)
🚀 Ideal Candidate Profile :
- Currently working in a product-based environment
- Already functioning as an Architect or Principal Engineer
- Proven track record as an Individual Contributor (IC)
- Strong engineering fundamentals with a passion for scalable software systems
- No compromise on code quality, craftsmanship, and best practices
🧪 Interview Process :
- Round 1: Technical pairing round
- Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
- Final Round: HR and offer discussion
In your role as Software Engineer/Lead, you will directly work with other developers, Product Owners, and Scrum Masters to evaluate and develop innovative solutions. The purpose of the role is to design, develop, test, and operate a complex set of applications or platforms in the IoT Cloud area.
The role involves the utilization of advanced tools and analytical methods for gathering facts to develop solution scenarios. The job holder needs to be able to write quality code, review code, and collaborate with other developers.
We have an excellent mix of people, which we believe makes for a more vibrant, more innovative, and more productive team.
- A bachelor’s degree, or master’s degree in information technology, computer science, or other relevant education
- At least 5 years of experience as a Software Engineer in an enterprise context
- Experience in design, development and deployment of large-scale cloud-based applications and services
- Good knowledge of cloud (AWS) serverless application development, event-driven architecture, and SQL/NoSQL databases
- Experience with IoT products, backend services and design principles
- Good knowledge of at least one backend technology like Node.js (JavaScript, TypeScript) or the JVM (Java, Scala, Kotlin)
- Passionate about code quality, security and testing
- Microservice development experience with Java (Spring) is a plus
- Good command of English, both oral and written


Job Title- Technical Lead
Job location- Pune/Hybrid
Availability- Immediate Joiners
Experience Range- 8-10 yrs
Desired skills - Python, Flask/FastAPI/Django, SQL/NoSQL, AWS/Azure
We are looking for a Technical Lead (Python/Flask/FastAPI/Django, AWS/Azure Cloud) who has worked on the modern full stack to deliver software products and solutions. He/She should have experience in leading from the front, handling customer situations and internal teams, anchoring project communications, and delivering an outstanding work experience to our customers.
- 8+ years of relevant software design and development experience building cloud-native applications using Python and JavaScript stack.
- A thorough understanding of deploying to at least one of the Cloud platforms (AWS or Azure) is required. Knowledge of Kubernetes is an added advantage.
- Experience with Microservices architecture and serverless deployments.
- Well-versed with RESTful services and building scalable API architectures using any Python framework.
- Hands-on with Frontend technologies using either Angular or React.
- Experience managing distributed delivery teams, tech leadership, ideating with the customer leadership, design discussions and code reviews to deliver quality software products.
- Good attitude and passion for learning new technologies on the job.
- Good communication and leadership skills. Ability to lead the internal team as well as customer communication (email/calls).
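As a small sketch of the "scalable API architectures using any Python framework" requirement above, here is a minimal FastAPI service; the Project resource and in-memory store are hypothetical.

```python
# Minimal FastAPI sketch. The resource model is hypothetical; run
# with: uvicorn module:app
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Project(BaseModel):
    id: int
    name: str

_DB: dict[int, Project] = {}  # in-memory stand-in for a real store

@app.post("/projects", status_code=201)
def create_project(project: Project) -> Project:
    _DB[project.id] = project
    return project

@app.get("/projects/{project_id}")
def get_project(project_id: int) -> Project:
    if project_id not in _DB:
        raise HTTPException(status_code=404, detail="Project not found")
    return _DB[project_id]
```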
Here is the Job Description -
Location -- Viman Nagar, Pune
Mode - 5 Days Working
Required Tech Skills:
● Strong at PySpark, Python
● Good understanding of Data Structures
● Good at SQL query/optimization
● Strong fundamentals of OOP programming
● Good understanding of AWS Cloud, Big Data.
● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB
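As a quick illustration of the Kinesis item in the list above, here is a hedged sketch that publishes JSON events to a stream with boto3; the stream name and event shape are hypothetical.

```python
# Sketch: push JSON events onto a (hypothetical) Kinesis stream.
import json
import boto3

kinesis = boto3.client("kinesis")

def publish_event(event: dict) -> None:
    kinesis.put_record(
        StreamName="example-events",        # hypothetical stream
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("user_id", "anonymous")),
    )

publish_event({"user_id": 42, "action": "page_view"})
```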
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform, Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.
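As one small, hedged example of the observability work this role covers (and the Python scripting it calls for), the sketch below exposes a custom metric with the official Prometheus Python client; the metric name, port, and probe are illustrative stand-ins.

```python
# Expose a custom Prometheus metric from a Python process. The metric
# and the random "probe" are illustrative placeholders.
import random
import time
from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("deploy_queue_depth", "Pending deployments")

if __name__ == "__main__":
    start_http_server(8000)   # scrape target for Prometheus
    while True:
        queue_depth.set(random.randint(0, 10))  # stand-in for a real probe
        time.sleep(15)
```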
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-10 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri, in-office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka.
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js, Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java: Spring framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.
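To ground the "real-time data pipelines using Kafka" responsibility above, here is a minimal produce-and-consume sketch using the kafka-python package; the broker address, topic, and message shape are hypothetical.

```python
# Minimal Kafka produce/consume sketch (kafka-python). Broker, topic,
# and payload are hypothetical placeholders.
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("telemetry", {"device_id": "hmd-01", "fps": 72})
producer.flush()

consumer = KafkaConsumer(
    "telemetry",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.value)  # hand off to downstream processing here
    break                 # sketch only: stop after the first message
```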

About the Role:
We are seeking a skilled Python Backend Developer to join our dynamic team. This role focuses on designing, building, and maintaining efficient, reusable, and reliable code that supports both monolithic and microservices architectures. The ideal candidate will have a strong understanding of backend frameworks and architectures, proficiency in asynchronous programming, and familiarity with deployment processes. Experience with AI model deployment is a plus.
Overall 5+ years of IT experience, with a minimum of 5 years of experience in Python and an open-source web framework (Django), along with AWS experience.
Key Responsibilities:
- Develop, optimize, and maintain backend systems using Python, PySpark, and FastAPI.
- Design and implement scalable architectures, including both monolithic and microservices.
- 3+ years of working experience in AWS (Lambda, serverless, Step Functions, and EC2)
- Deep knowledge of the Python Flask/Django frameworks
- Good understanding of REST APIs
- Sound knowledge of databases
- Excellent problem-solving and analytical skills
- Leadership skills, good communication skills, and an eagerness to learn modern technologies
- Apply design patterns (MVC, Singleton, Observer, Factory) to solve complex problems effectively.
- Work with web servers (Nginx, Apache) and deploy web applications and services.
- Create and manage RESTful APIs; familiarity with GraphQL is a plus.
- Use asynchronous programming techniques (ASGI, WSGI, async/await) to enhance performance.
- Integrate background job processing with Celery and RabbitMQ, and manage caching mechanisms using Redis and Memcached.
- (Optional) Develop containerized applications using Docker and orchestrate deployments with Kubernetes.
Required Skills:
- Languages & Frameworks: Python, Django, AWS
- Backend Architecture & Design: Strong knowledge of monolithic and microservices architectures, design patterns, and asynchronous programming.
- Web Servers & Deployment: Proficient in Nginx and Apache, with experience in RESTful API design and development. GraphQL experience is a plus.
- Background Jobs & Task Queues: Proficiency in Celery and RabbitMQ, with experience in caching (Redis, Memcached).
- Additional Qualifications: Knowledge of Docker and Kubernetes (optional), with any exposure to AI model deployment considered a bonus.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in backend development using Python, Django, and AWS.
- Demonstrated ability to design and implement scalable and robust architectures.
- Strong problem-solving skills, attention to detail, and a collaborative mindset.
Preferred:
- Experience with Docker/Kubernetes for containerization and orchestration.
- Exposure to AI model deployment processes.

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG
We are seeking a skilled Full-stack developer. As a Full-stack developer, you will collaborate with international cross-functional teams to design, develop, and deploy high-quality software solutions.
Responsibilities:
Design, develop, and maintain the application.
Write clean, efficient, and reusable code.
Implement new features and functionality based on business requirements.
Participate in system and application architecture discussions.
Create technical designs and specifications for new features or enhancements.
Write and execute unit tests to ensure code quality.
Debug and resolve technical issues and software defects.
Conduct code reviews to ensure adherence to best practices.
Identify and fix vulnerabilities to ensure application integrity.
Work with the product owner to ensure seamless integration of user-facing elements.
Collaborate with DevOps teams for deployment and scaling.
Requirements:
Bachelor’s degree in computer science or information technology, or a related field.
Proven experience as a skilled Full-stack developer. Experience in the Insurance domain is appreciated.
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities to work effectively in a team environment.
Strong experience with Spring Boot 3, Java 17 or newer, and Maven.
Skills & Requirements
Angular 18+, GitHub, IntelliJ IDEA, Java 11+, Jest, Kubernetes, Maven, Mockito, NDBX/ng-aquila, NGRX, Spring Boot, State Management, Typescript, Playwright, PostgreSQL, Sonar, Swagger, AWS, Camunda, Dynatrace, Jenkins, Kafka, NGXS, Signals, Taly.
What You’ll Do:
* Establish a formal data practice for the organisation.
* Build and operate scalable and robust data architectures.
* Create pipelines for the self-service introduction and usage of new data.
* Implement DataOps practices.
* Design, develop, and operate data pipelines that support data scientists and machine learning engineers.
* Build simple, highly reliable data storage, ingestion, and transformation solutions that are easy to deploy and manage.
* Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
Who You Are:
* Experience in designing, developing, and operating configurable data pipelines serving high-volume, high-velocity data.
* Experience working with public clouds like GCP/AWS.
* Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies.
* Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
* Proficient with SQL, Java, Spring Boot, Python or another JVM-based language, and Bash.
* Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc. (see the sketch after this list) and big data databases like BigQuery, ClickHouse, etc.
* Good communication skills with the ability to collaborate with both technical and non-technical people.
* Ability to Think Big, take bets and innovate, Dive Deep, have a Bias for Action, Hire and Develop the Best, and Learn and Be Curious.
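As a hedged illustration of the pipeline work described above, here is a minimal Airflow DAG sketch; the DAG id, schedule, and task bodies are assumptions for illustration, not details of the actual stack.

```python
# Minimal Airflow 2.x DAG: a daily ingest step followed by a transform step.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest() -> None:
    print("pulling raw data from the source system")

def transform() -> None:
    print("cleaning and loading data for downstream consumers")

with DAG(
    dag_id="example_ingest_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task  # transform runs only after ingest succeeds
```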



General Summary:
The Senior Software Engineer will be responsible for designing, developing, testing, and maintaining full-stack solutions. This role involves hands-on coding (80% of time), performing peer code reviews, handling pull requests and engaging in architectural discussions with stakeholders. You'll contribute to the development of large-scale, data-driven SaaS solutions using best practices like TDD, DRY, KISS, YAGNI, and SOLID principles. The ideal candidate is an experienced full-stack developer who thrives in a fast-paced, Agile environment.
Essential Job Functions:
- Design, develop, and maintain scalable applications using Python and Django.
- Build responsive and dynamic user interfaces using React and TypeScript.
- Implement and integrate GraphQL APIs for efficient data querying and real-time updates.
- Apply design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure maintainable and scalable code.
- Develop and manage RESTful APIs for seamless integration with third-party services.
- Design, optimize, and maintain SQL databases like PostgreSQL, MySQL, and MSSQL.
- Use version control systems (primarily Git) and follow collaborative workflows.
- Work within Agile methodologies such as Scrum or Kanban, participating in daily stand-ups, sprint planning, and retrospectives.
- Write and maintain unit tests, integration tests, and end-to-end tests, following Test-Driven Development (TDD); a small example follows this section.
- Collaborate with cross-functional teams, including Product Managers, DevOps, and UI/UX Designers, to deliver high-quality products.
Essential functions are the basic job duties that an employee must be able to perform, with or without reasonable accommodation. The function is considered essential if the reason the position exists is to perform that function.
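As a small illustration of the design-pattern and TDD practices named above, here is a Strategy-pattern sketch with a pytest test; the pricing rules are invented for the example.

```python
# Strategy pattern: behaviour is swapped by passing a different callable.
from typing import Callable

PricingStrategy = Callable[[float], float]

def regular_price(base: float) -> float:
    return base

def sale_price(base: float) -> float:
    return round(base * 0.8, 2)  # hypothetical 20% discount

def checkout_total(base: float, strategy: PricingStrategy) -> float:
    # The caller chooses the pricing behaviour at the call site.
    return strategy(base)

# TDD-style test, runnable with pytest: written against the contract above.
def test_checkout_total_applies_selected_strategy():
    assert checkout_total(100.0, regular_price) == 100.0
    assert checkout_total(100.0, sale_price) == 80.0
```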
Supportive Job Functions:
- Remain knowledgeable of new emerging technologies and their impact on internal systems.
- Available to work on call when needed.
- Perform other miscellaneous duties as assigned by management.
These tasks do not meet the Americans with Disabilities Act definition of essential job functions and usually equal 5% or less of time spent. However, these tasks still constitute important performance aspects of the job.
Skills
- The ideal candidate must have strong proficiency in Python and Django, with a solid understanding of Object-Oriented Programming (OOP) principles. Expertise in JavaScript, TypeScript, and React is essential, along with hands-on experience in GraphQL for efficient data querying.
- The candidate should be well-versed in applying design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure scalable and maintainable code architecture.
- Proficiency in building and integrating REST APIs is required, as well as experience working with SQL databases like PostgreSQL, MySQL, and MSSQL.
- Familiarity with version control systems (especially Git) and working within Agile methodologies like Scrum or Kanban is a must.
- The candidate should also have a strong grasp of Test-Driven Development (TDD) principles.
- In addition to the above, it is good to have experience with Next.js for server-side rendering and static site generation, as well as knowledge of cloud infrastructure such as AWS or GCP.
- Familiarity with NoSQL databases, CI/CD pipelines using tools like GitHub Actions or Jenkins, and containerization technologies like Docker and Kubernetes is highly desirable.
- Experience with microservices architecture and event-driven systems (using tools like Kafka or RabbitMQ) is a plus, along with knowledge of caching technologies such as Redis or Memcached. Understanding OAuth2.0, JWT, and SSO authentication mechanisms, and adhering to API security best practices following OWASP guidelines, is beneficial.
- Additionally, experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation, and familiarity with performance monitoring tools such as New Relic or Datadog will be considered an advantage.
Abilities:
- Ability to organize, prioritize, and handle multiple assignments on a daily basis.
- Strong and effective interpersonal and communication skills.
- Ability to interact professionally with a diverse group of clients and staff.
- Must be able to work flexible hours on-site and remote.
- Must be able to coordinate with other staff and provide technological leadership.
- Ability to work in a complex, dynamic team environment with minimal supervision.
- Must possess good organizational skills.
Education, Experience, and Certification:
- Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.), but equivalent work experience in a technology-related area may substitute.
- 2+ years of relevant experience required.
- Experience using version control daily in a developer environment.
- Experience with Python, JavaScript, and React is required.
- Experience using rapid development frameworks like Django or Flask.
- Experience using front end build tools.
Scope of Job:
- No direct reports.
- No supervisory responsibility.
- Consistent work week with minimal travel
- Errors may be serious, costly, and difficult to discover.
- Contact with others inside and outside the company is regular and frequent.
- Some access to confidential data.
We are seeking a skilled Cloud Data Engineer with experience on cloud data platforms like AWS or Azure, and especially with Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes around them. You will collaborate with cross-functional teams to design, develop, and deploy high-quality solutions.
Responsibilities:
Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool); a sketch follows this list.
Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.
Develop data routes: You design scalable and powerful data management processes.
Analyze data: You derive sound findings from data sets and present them in an understandable way.
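As a hedged illustration of an ELT flow of this kind, here is a minimal sketch that lands raw rows in Snowflake with the Python connector and leaves transformation to dbt models; the account, credentials, warehouse, and table names are placeholders, not client specifics.

```python
# Load step of an ELT flow: land raw data untouched in Snowflake; the
# transformations live in dbt models run separately.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # assumed account identifier
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="RAW",
    schema="SALES",
)

with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS raw_orders (id INT, amount FLOAT)")
    cur.executemany(
        "INSERT INTO raw_orders (id, amount) VALUES (%s, %s)",
        [(1, 19.99), (2, 5.00)],
    )
conn.close()
# Transform step (run separately): `dbt run` builds models on top of RAW.
```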
Requirements:
Requirements management and project experience: You successfully implement cloud-based data & analytics projects.
Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.
Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).
SQL know-how: You have a sound and solid knowledge of SQL.
Data management: You are familiar with topics such as master data management and data quality.
Bachelor's degree in computer science, or a related field.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.
At least 5 years of experience in testing and developing automation tests.
A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.
Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.
Familiarity with Playwright or other browser application testing frameworks is a significant advantage.
Proficiency in object-oriented programming and principles is required.
Extensive knowledge of AWS services is essential.
Strong expertise in REST API testing and SQL is required (a test sketch follows this list).
A solid understanding of testing and development life cycle methodologies is necessary.
Knowledge of the financial industry and trading systems is a plus.
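As a small illustration of the REST API test automation described above, here is a pytest-and-requests sketch; the endpoint URL and response contract are hypothetical.

```python
# Contract test for a hypothetical trades endpoint, runnable with pytest.
import requests

BASE_URL = "https://api.example.com"  # assumed service under test

def test_get_trade_returns_expected_fields():
    response = requests.get(f"{BASE_URL}/trades/42", timeout=10)
    assert response.status_code == 200
    body = response.json()
    # Check the fields and types downstream consumers rely on.
    assert body["trade_id"] == 42
    assert isinstance(body["quantity"], int)
```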
What you'll do:
· Perform complex application programming activities with an emphasis on mobile development: Node.js, TypeScript, JavaScript, RESTful APIs and related backend frameworks
· Assist in the definition of system architecture and detailed solution design that are scalable and extensible
· Collaborate with Product Owners, Designers, and other engineers on different permutations to find the best solution possible
· Own the quality of code and do your own testing. Write unit tests and improve test coverage.
· Deliver amazing solutions to production that knock everyone’s socks off
· Mentor junior developers on the team
What we’re looking for:
· Amazing technical instincts. You know how to evaluate and choose the right technology and approach for the job. You have stories you could share about what problem you thought you were solving at first, but through testing and iteration, came to solve a much bigger and better problem that resulted in positive outcomes all-around.
· A love for learning. Technology is continually evolving around us, and you want to keep up to date to ensure we are using the right tech at the right time.
· A love for working in ambiguity—and making sense of it. You can take in a lot of disparate information and find common themes, recommend clear paths forward and iterate along the way. You don’t form an opinion and sell it as if it’s gospel; this is all about being flexible, agile, dependable, and responsive in the face of many moving parts.
· Confidence, not ego. You have an ability to collaborate with others and see all sides of the coin to come to the best solution for everyone.
· Flexible and willing to accept change in priorities, as necessary
· Demonstrable passion for technology (e.g., personal projects, open-source involvement)
· Enthusiastic embrace of DevOps culture and collaborative software engineering
· Ability and desire to work in a dynamic, fast paced, and agile team environment
· Enthusiasm for cloud computing platforms such as AWS or Azure
Basic Qualifications:
· Minimum B.S. / M.S. Computer Science or related discipline from accredited college or University
· At least 4 years of experience designing, developing, and delivering backend applications with Node.js, TypeScript
· At least 2 years of experience building internet facing services
· At least 2 years of experience with AWS and/or OpenShift
· Exposure to some of the following concepts: object-oriented programming, software engineering techniques, quality engineering, parallel programming, databases, etc.
· Experience integrating APIs with front-end and/or mobile-specific frameworks
· Proficiency in building and consuming RESTful APIs
· Ability to manage multiple tasks and consistently meet established timelines
· Strong collaboration skills
· Excellent written and verbal communications skills
Preferred Qualifications:
· Experience with the Apache Cordova framework
· Demonstrable native coding background in iOS and Android
· Experience developing and deploying applications within Kubernetes-based containers
· Experience in Agile and SCRUM development techniques


At Verto, we’re passionate about helping businesses in Africa reach the world. What first started life as an FX solution for trading the Nigerian Naira has now become a market-leading platform, changing the way thousands of businesses transfer money in and out of Africa.
We believe that where you do business shouldn’t determine how successful you are, or your ability to scale. Millions of companies a day have to juggle long settlement periods, high transaction fees and issues accessing liquidity in order to trade with African businesses. We’re on a mission to change this by creating equal access to easy payment and liquidity solutions that are already a given in developed markets.
We’re not alone in realising the opportunity and need to solve for emerging markets. We’re backed by world-class investors including Y-Combinator, Quona and MEVP, power payments for some of the most disruptive start-ups in the world and have a list of accolades from leading publications including being voted ‘Fintech Start Up of the Year’ at Fintech Awards London 2022.
Each year we process billions of dollars of payments and provide companies with solutions which help them to save money, automate processes and grow, but we’re only just getting started.
We are looking for a strong Full Stack Developer to join our team. This person will be involved in active development assignments. You are expected to have between 2 and 6 years of professional experience in any object-oriented language, to have done considerable work in Node.js along with any of the modern web application building libraries such as Angular, and to have at least a working knowledge of developing scalable distributed cloud applications on AWS or any other cloud.
We’re looking for someone who is not only a good full-stack developer but also aware of modern trends in distributed software application development. You’re smart enough to work at top companies, but you’re picky about finding the right role (this is more than just a job, right?). You’re experienced, but you also like to learn new things. And you want to work with smart people and have fun building something great.
In this role you will:
- Design RESTful APIs
- Work with other team members to develop and test highly scalable web applications and services as part of a suite of products in the Data governance domain working with petabyte-scale data
- Design and create services and system architecture for your projects, and contribute and provide feedback to other team members
- Use AWS to set up geographically agnostic systems in the cloud.
- Exercise your strong skills & working knowledge of MySQL and relational databases
- Prototype and develop new ideas and participate in all parts of the lifecycle from research to release
- Work within a small team owning deliverables for our web APIs and front end.
- Use development tools such as AWS Codebuild, git, npm, Visual Studio Code, Serverless framework, Swagger Specs, Angular, Flutter, AWS Lambda, MongoDB, MySQL, Redis, SQS, Kafka etc.
- Design and develop dockerized applications that will be deployed flexibly either on the cloud or on-premises depending on business requirements
You’ll have:
- 5+ years of professional development experience using any object-oriented language
- Have developed and delivered at least one application using Node.js
- Experience with modern web application building libraries such as Angular, Polymer, React
- Solid OOP and software design knowledge – you should know how to create software that’s extensible, reusable and meets desired architectural objectives
- Excellent understanding of HTTP and REST standards
- Experience with relational databases such as MySQL
- Good experience writing unit and acceptance tests
- Proven experience in developing highly scalable distributed cloud applications on a cloud system, preferably AWS
- You’re a great communicator and are capable of not just doing the work, but teaching others and explaining the “why” behind complicated technical decisions.
- You aren’t afraid to roll up your sleeves: This role will evolve, and we’ll want you to evolve with it!
About the company
KPMG International Limited, commonly known as KPMG, is one of the largest professional services networks in the world, recognized as one of the "Big Four" accounting firms alongside Deloitte, PricewaterhouseCoopers (PwC), and Ernst & Young (EY). KPMG provides a comprehensive range of professional services primarily focused on three core areas: Audit and Assurance, Tax Services, and Advisory Services. Their Audit and Assurance services include financial statement audits, regulatory audits, and other assurance services. The Tax Services cover various aspects such as corporate tax, indirect tax, international tax, and transfer pricing. Meanwhile, their Advisory Services encompass management consulting, risk consulting, deal advisory, and other related services.
Apply through this link- https://forms.gle/qmX9T7VrjySeWYa37
Job Description
Position: Data Engineer
Experience: 5+ years of relevant experience
Location: WFO (3 days working) Pune – Kharadi, NCR – Gurgaon, Bangalore
Employment Type: Contract for 3 months; can be extended based on performance and future requirements
Skills Required:
• Proficiency in SQL, AWS, and data integration tools like Airflow or equivalent. Knowledge of tools like JIRA, GitHub, etc.
• A Data Engineer who will be able to work on data management activities and orchestration processes.


We are looking for a full stack developer/leader to lead, own, and deliver across the entire application development life cycle. He/she will be responsible for creating and owning the product roadmap of an enterprise software product.
- A responsible and passionate professional who has the willpower to drive the product goals and ensure the outcomes expected from the team.
- He/she should have a strong desire and eagerness to learn new and emerging technologies.
Skills Required:
- Python/Django Rest Framework
- Database structure
- Cloud-Ops: AWS
Roles & Responsibilities:
- Developer responsibilities include writing and testing code and debugging programs
- Design and implementation of REST APIs (see the sketch below)
- Build, release, and manage the configuration of all production systems
- Manage a continuous integration and deployment methodology for server-based technologies
- Identify customer problems and create functional prototypes offering a solution
If you are willing to take up challenges and contribute to developing world-class products, this is the place for you.
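As a rough sketch of the Python/Django Rest Framework work listed above, here is a minimal model, serializer, and viewset; the model fields are invented for illustration and the code assumes it lives inside a configured Django app.

```python
# Inside a hypothetical Django app, e.g. crops/models.py + crops/api.py.
from django.db import models
from rest_framework import serializers, viewsets

class Crop(models.Model):
    name = models.CharField(max_length=100)
    season = models.CharField(max_length=50)

class CropSerializer(serializers.ModelSerializer):
    class Meta:
        model = Crop
        fields = ["id", "name", "season"]

class CropViewSet(viewsets.ModelViewSet):
    # Provides list/retrieve/create/update/delete endpoints once routed.
    queryset = Crop.objects.all()
    serializer_class = CropSerializer
```

In a real project the viewset would be registered with a DefaultRouter in urls.py to expose the REST endpoints.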
About FarmMobi :
A trusted enterprise software product company in the AgTech space, started with a mission to revolutionize the global agriculture sector.
We operate on a software-as-a-service (SaaS) model and cater to the needs of global customers in the field of agriculture.
The idea is to use emerging technologies like mobility, IoT, drones, satellite imagery, blockchain, etc. to digitally transform the agriculture landscape.


Responsibilities
- Develop and maintain robust APIs to support various applications and services.
- Design and implement scalable solutions using AWS cloud services.
- Utilize Python frameworks such as Flask and Django to build efficient and high-performance applications (a minimal sketch follows this list).
- Collaborate with cross-functional teams to gather and analyze requirements for new features and enhancements.
- Ensure the security and integrity of applications by implementing best practices and security measures.
- Optimize application performance and troubleshoot issues to ensure smooth operation.
- Provide technical guidance and mentorship to junior team members.
- Conduct code reviews to ensure adherence to coding standards and best practices.
- Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
- Develop and maintain documentation for code, processes, and procedures.
- Stay updated with the latest industry trends and technologies to continuously improve skills and knowledge.
- Contribute to the overall success of the company by delivering high-quality software solutions that meet business needs.
- Foster a collaborative and inclusive work environment that promotes innovation and continuous improvement.
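As a hedged illustration of the Flask API work described above, here is a minimal sketch; the routes, payload shape, and in-memory store are assumptions for illustration (the Consumer Lending domain mentioned below suggests the loan example).

```python
# Minimal Flask API sketch with an in-memory stand-in for a real datastore.
from flask import Flask, jsonify, request

app = Flask(__name__)
_loans = {}  # illustrative only; a real service would use a database

@app.post("/loans")
def create_loan():
    payload = request.get_json()
    loan_id = len(_loans) + 1
    _loans[loan_id] = {"id": loan_id, "amount": payload["amount"]}
    return jsonify(_loans[loan_id]), 201

@app.get("/loans/<int:loan_id>")
def get_loan(loan_id: int):
    loan = _loans.get(loan_id)
    if loan is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(loan), 200
```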
Qualifications
- Possess strong expertise in developing and maintaining APIs.
- Demonstrate proficiency in AWS cloud services and their application in scalable solutions.
- Have extensive experience with Python frameworks such as Flask and Django.
- Exhibit strong analytical and problem-solving skills to address complex technical challenges.
- Show ability to collaborate effectively with cross-functional teams and stakeholders.
- Display excellent communication skills to convey technical concepts clearly.
- A background in the Consumer Lending domain is a plus.
- Demonstrate commitment to continuous learning and staying updated with industry trends.
- Possess a strong understanding of agile development methodologies.
- Show experience in mentoring and guiding junior team members.
- Exhibit attention to detail and a commitment to delivering high-quality software solutions.
- Demonstrate ability to work effectively in a hybrid work model.
- Show a proactive approach to identifying and addressing potential issues before they become problems.
Job Summary:
We are seeking a skilled Senior Data Engineer with expertise in application programming, big data technologies, and cloud services. This role involves solving complex problems, designing scalable systems, and working with advanced technologies to deliver innovative solutions.
Key Responsibilities:
- Develop and maintain scalable applications using OOP principles, data structures, and problem-solving skills.
- Build robust solutions using Java, Python, or Scala.
- Work with big data technologies like Apache Spark for large-scale data processing (see the sketch after this list).
- Utilize AWS services, especially Amazon Redshift, for cloud-based solutions.
- Manage databases including SQL, NoSQL (e.g., MongoDB, Cassandra), with Snowflake as a plus.
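As a short, hedged illustration of the Spark work named above, here is a PySpark aggregation sketch; the paths and column names are placeholders, not details from this posting.

```python
# PySpark sketch: read event data, aggregate per day, write warehouse-friendly
# Parquet output (Redshift loads typically go via S3 + COPY).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-aggregates").getOrCreate()

events = spark.read.parquet("s3://my-bucket/events/")  # assumed input path

daily_totals = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "event_type")
    .agg(F.count("*").alias("event_count"))
)

daily_totals.write.mode("overwrite").parquet("s3://my-bucket/daily_totals/")
```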
Qualifications:
- 5+ years of experience in software development.
- Strong skills in OOP, data structures, and problem-solving.
- Proficiency in Java, Python, or Scala.
- Experience with Spark, AWS (Redshift mandatory), and databases (SQL/NoSQL).
- Snowflake experience is good to have.

Role & Responsibilities:
As a Full Stack Developer Intern, you will take on significant responsibilities in the design, development, and maintenance of web applications using Next.js, React.js, Node.js, PostgreSQL, and AWS Cloud services. We seek individuals who are self-motivated, energetic, and capable of delivering high-quality work with minimal supervision.
- Develop user-friendly web applications using Next.js and React.js.
- Create and implement RESTful APIs using Node.js.
- Write high-quality, maintainable code while adhering to best practices in software development.
- Deliver projects on time while maintaining a strong focus on performance and user experience.
- Manage data effectively using PostgreSQL databases.
- Code Quality & Reviews: Maintain code quality standards and conduct regular code reviews to ensure the delivery of high-quality, error-free code.
- Performance Optimization: Identify and troubleshoot performance bottlenecks to ensure a seamless and lightning-fast platform experience.
- Bug Fixing & Maintenance: Monitor platform performance and proactively address any issues or bugs to keep the platform running flawlessly.
- Contribute innovative ideas and solutions during team discussions and brainstorming sessions.
- Communicate openly and honestly with team members, sharing insights and feedback constructively.
- Stay updated on emerging technologies and demonstrate a willingness to learn more.
Qualification:
- Graduate/Post-Graduate with a degree in Computer Science, Software Engineering, or a related field.
- Proficiency in HTML, CSS, JavaScript, and modern front-end frameworks (specifically Next.js and React.js).
- Strong knowledge of back-end technologies such as Node.js and Express.js.
- Experience with relational databases, particularly PostgreSQL.
- Familiarity with AWS Cloud services is a plus.
- Excellent problem-solving skills with a proactive approach to challenges.
- Proven ability to troubleshoot and resolve complex technical issues.
- Strong communication skills with the confidence to share ideas openly.
- High energy level and passion for contributing to the company’s success with integrity and honesty.
- Startup Enthusiast: Embrace the fast-paced and dynamic environment of a startup, driven by a passion for making a positive impact.
Dear Candidate,
We are urgently hiring an AWS Cloud Engineer for the Bangalore location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: Aws Cloud
Salary: Best in industry (20-25% hike on the current CTC)
Note:
Only immediate joiners or those with up to 15 days' notice will be preferred.
Only candidates from Tier 1 companies will be shortlisted and selected.
Candidates with a notice period of more than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job description:
Title: AWS Cloud Engineer
Prefer BLR / HYD; else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, of which 5 years of relevant experience is required.
Must have: AWS platform, Terraform, Redshift/Snowflake, Python/Shell scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge; Redshift and Snowflake preferred
Working with IaC: Terraform and CloudFormation
Working understanding of scripting languages, including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, incl. Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies, incl. Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for the technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a Shared Managed Service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technically hands-on
Provide Incident and Problem management on the AWS IaaS and PaaS platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor an AWS platform for technical issues (see the monitoring sketch below)
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third-party suppliers and AWS to jointly resolve incidents
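As a small illustration of the platform-monitoring duties above, here is a hedged boto3 sketch that pulls a CloudWatch CPU metric and flags a breach; the region, instance id, and threshold are assumptions for illustration.

```python
# Pull the last hour of EC2 CPU utilisation and flag points above a threshold.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")  # assumed region

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in response["Datapoints"]:
    if point["Average"] > 80.0:  # hypothetical alert threshold
        print(f"high CPU at {point['Timestamp']}: {point['Average']:.1f}%")
```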
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN
- TiDB (good to have)
- Kubernetes (must have)
- MySQL (must have)
- MariaDB (must have)
- Looking for a candidate who has more exposure to reliability than maintenance


1. 4+ years of experience in Java development.
2. Good communication skills are mandatory.
3. Spring Boot, microservices, AWS, multithreading, and Git are mandatory.
4. Angular or React is mandatory.
5. Joining within 2 weeks.
6. Location: Pune, working from the client location at D.P. Road (very near the Metro station). Another location is Indore (if available).
7. Permanent position with Prismatic.
8. Product development exposure and latest technology exposure. Prospects of international travel for the bright candidate.
Position: SDE-1 DevSecOps
Location: Pune, India
Experience Required: 0+ Years
We are looking for a DevSecOps engineer to contribute to product development, mentor team members, and devise creative solutions for customer needs. We value effective communication in person, in documentation, and in code. Ideal candidates thrive in small, collaborative teams, love making an impact, and take pride in their work with a product-focused, self-driven approach. If you're passionate about integrating security and deployment seamlessly into the development process, we want you on our team.
About FlytBase
FlytBase is a global leader in enterprise drone software automation. The FlytBase platform enables drone-in-a-box deployments across the globe and has the largest network of partners in 50+ countries.
The team comprises young engineers and designers from top-tier universities such as IIT-B, IIT-KGP, University of Maryland, Georgia Tech, COEP, SRM, and KIIT, with deep expertise in drone technology, computer science, electronics, aerospace, and robotics.
The company is headquartered in Silicon Valley, California, USA, and has R&D offices in Pune, India. Widely recognized as a pioneer in the commercial drone ecosystem, FlytBase continues to win awards globally: FlytBase was the Global Grand Champion at the ‘NTT Data Open Innovation Contest’ held in Tokyo, Japan, and was the recipient of the ‘TiE50 Award’ at TiE Silicon Valley.
Role and Responsibilities:
- Participate in the creation and maintenance of CI/CD solutions and pipelines.
- Leverage Linux and shell scripting for automating security and system updates, and design secure architectures using AWS services (VPC, EC2, S3, IAM, EKS/Kubernetes) to enhance application deployment and management.
- Build and maintain secure Docker containers, manage orchestration using Kubernetes, and automate configuration management with tools like Ansible and Chef, ensuring compliance with security standards.
- Implement and manage infrastructure using Terraform, aligning with security and compliance requirements, and set up Dynatrace for advanced monitoring, alerting, and visualization of security metrics. Develop Terraform scripts to automate and optimize infrastructure provisioning and management tasks.
- Utilize Git for secure source code management and integrate continuous security practices into CI/CD pipelines, applying vulnerability scanning and automated security testing tools.
- Contribute to security assessments, including vulnerability and penetration testing against frameworks such as NIST, CIS AWS, and NIS2 (a small audit sketch follows this list).
- Implement and oversee compliance processes for SOC II, ISO 27001, and GDPR.
- Stay updated on cybersecurity trends and best practices, including knowledge of SAST and DAST tools and the OWASP Top 10.
- Automate routine tasks and create tools to improve team efficiency and system robustness.
- Contribute to disaster recovery plans and ensure robust backup systems are in place.
- Develop and enforce security policies and respond effectively to security incidents.
- Manage incident response protocols, including on-call rotations and strategic planning.
- Conduct post-incident reviews to prevent recurrence and refine the system reliability framework.
- Implement Service Level Indicators (SLIs) and maintain Service Level Objectives (SLOs) and Service Level Agreements (SLAs) to ensure high standards of service delivery and reliability.
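As a hedged illustration of the automation and security-assessment work above, here is a small boto3 sketch that flags S3 buckets without a full public-access block; it is an illustrative audit, not FlytBase's actual tooling.

```python
# List S3 buckets and warn about any that do not fully block public access.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(config.values())
    except ClientError:
        fully_blocked = False  # no public-access-block configuration set at all
    if not fully_blocked:
        print(f"[WARN] bucket '{name}' is not fully blocking public access")
```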
Best suited for candidates who: (Skills/Experience)
- Up to 4 years of experience in a related field, with a strong emphasis on learning and execution.
- Background in IT or computer science.
- Familiarity with CI/CD tools, cloud platforms (AWS, Azure, or GCP), and programming languages like Python, JavaScript, or Ruby.
- Solid understanding of network layers and TCP/IP protocols.
- In-depth understanding of operating systems, networking, and cloud services.
- Strong problem-solving skills with a 'hacker' mindset.
- Knowledge of security principles, threat modeling, risk assessment, and vulnerability management is a plus.
- Relevant certifications (e.g., CISSP, GWAPT, OSCP) are a plus.
Compensation:
This role comes with an annual CTC that is market competitive and depends on the quality of your work experience, degree of professionalism, culture fit, and alignment with FlytBase’s long-term business strategy.
Perks:
- Fast-paced Startup culture
- Hacker mode environment
- Enthusiastic and approachable team
- Professional autonomy
- Company-wide sense of purpose
- Flexible work hours
- Informal dress code
About Vijay Sales
Vijay Sales is a leading eCommerce and retail brand delivering exceptional experiences to customers across India. Leveraging technology to enhance shopping experiences, we are on a mission to create seamless omnichannel platforms and innovative solutions for our customers. Join our dynamic team to shape the future of retail technology.
Role Overview
As a Lead Backend Engineer, you will drive the development and optimization of robust backend systems for our products built on the MERN (MongoDB, Express.js, React, Node.js) stack. You'll lead a talented team of developers, design scalable architectures, and ensure the seamless integration of backend functionalities with our frontend systems.
Key Responsibilities
- Lead the design, development, and deployment of backend systems for Vijay Sales' products.
- Architect scalable, secure, and high-performance APIs and microservices.
- Mentor and guide a team of backend developers, ensuring best practices in coding, architecture, and DevOps.
- Collaborate with cross-functional teams, including frontend engineers, DevOps, and product managers, to deliver end-to-end solutions.
- Optimize database structures and queries for performance and scalability.
- Implement and oversee CI/CD pipelines to streamline deployment processes.
- Troubleshoot and resolve performance bottlenecks, security issues, and other technical challenges.
- Stay updated on industry trends, tools, and technologies to continuously improve our stack.
Skills and Qualifications
Must-Have:
- Proven experience as a backend engineer with expertise in the MERN stack (Node.js, Express.js, MongoDB).
- Strong understanding of RESTful APIs and GraphQL.
- Proficiency in database design, indexing, and optimization (MongoDB is preferred).
- Hands-on experience with containerization tools like Docker and orchestration platforms like Kubernetes.
- Strong knowledge of authentication/authorization frameworks (JWT, OAuth2).
- Experience with cloud platforms like AWS, GCP, or Azure.
- Proficiency in DevOps practices, including CI/CD pipelines and version control systems (Git).
- Excellent problem-solving skills and a proactive attitude toward innovation.
Nice-to-Have:
- Experience in building scalable eCommerce platforms or omni-channel systems.
- Knowledge of front-end technologies (React.js) for seamless backend integration.
- Familiarity with message brokers.
- Serverless and microservices architectures.
- Exposure to AI/ML-driven applications for personalization and recommendation engines.
Job description
We are seeking a highly skilled and experienced IT Department Head with strong communication skills, a technical background, and leadership capabilities to manage our IT team. The ideal candidate will be responsible for overseeing the organization's IT infrastructure, ensuring the security and efficiency of our systems, and maintaining compliance with relevant industry standards. The role requires an in-depth understanding of cloud technologies, server management, network security, and managed IT services, as well as strong problem-solving capabilities.
Key Responsibilities:
The Information Technology Manager is a proactive and hands-on IT Manager who will oversee and evolve our technology infrastructure.
· In this role, the Manager will manage all aspects of our IT operations, from maintaining our current tech stack to strategizing and implementing future developments
· This position will ensure that our technology systems are modern, secure, and efficient, aligning IT initiatives with our business goals
· IT Strategy & Leadership: Develop and execute an IT strategy that supports the company's objectives, ensuring scalability and security
· Infrastructure Management: Oversee the maintenance and optimization of our Azure Cloud infrastructure, AWS Cloud, and Cisco Meraki networking systems
· Software & Systems Administration: Manage Microsoft 365 administration.
· Cybersecurity: Enhance our cybersecurity posture using tools like SentinelOne, Sophos Firewall, and other tools
· Project Management: Lead IT projects, including system upgrades and optimizations, ensuring timely delivery and adherence to budgets
· Team Leadership: Mentor and guide a small IT team, fostering a culture of continuous improvement and professional development
· Vendor Management: Collaborate with external vendors and service providers to ensure optimal performance and cost-effectiveness
· Technical Support: Provide high-level technical support and troubleshooting for IT-related issues across the organization and for clients in the USA; other duties as needed
· IT Audit & Compliance: Conduct regular audits to ensure IT processes are compliant with security regulations and best practices (GDPR, SOC2, ISO 27001), ensuring readiness for internal and external audit.
· Documentation: Maintain thorough and accurate documentation for all systems, processes, and procedures to ensure clarity and consistency in IT operations.
Preferred Skills:
- Experience with SOC 2, ISO 27001, or similar security frameworks.
- Experience with advanced firewall configurations and network architecture.
Job Type: Full-time
Benefits:
- Paid sick time
Shift:
- Day shift
Work Days:
- Monday to Friday
Experience:
- IT management: 2 years (Required)
Work Location: In person