50+ AWS (Amazon Web Services) Jobs in Pune | AWS (Amazon Web Services) Job openings in Pune
Apply to 50+ AWS (Amazon Web Services) Jobs in Pune on CutShort.io. Explore the latest AWS (Amazon Web Services) Job opportunities across top companies like Google, Amazon & Adobe.



Job Description:
We are looking for a Python Lead who has the following experience and expertise -
- Proficiency in developing RESTful APIs using the Flask, Django, or FastAPI framework
- Hands-on experience using ORMs for database query mapping
- Writing unit test cases for code coverage and API testing
- Using Postman to validate APIs
- Experience with Git workflows and the rest of code management, including ticket management systems like JIRA
- At least 2 years of experience on any cloud platform
- Hands-on leadership experience
- Experience communicating directly with stakeholders
Skills and Experience:
- Strong academic record
- Strong teamwork and communication skills
- Advanced troubleshooting skills
- Immediately available candidates will be preferred.
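The unit-testing bullet above can be sketched with plain `unittest`; the handler and field names here are hypothetical, purely for illustration:

```python
import unittest

def build_user_response(user: dict) -> dict:
    """Shape a raw record into an API payload (hypothetical handler logic)."""
    if not user.get("id"):
        raise ValueError("user must have an id")
    return {
        "id": user["id"],
        "name": user.get("name", ""),
        "active": bool(user.get("active", True)),
    }

class BuildUserResponseTest(unittest.TestCase):
    def test_defaults_applied(self):
        self.assertEqual(
            build_user_response({"id": 7}),
            {"id": 7, "name": "", "active": True},
        )

    def test_missing_id_rejected(self):
        with self.assertRaises(ValueError):
            build_user_response({"name": "x"})
```

Run with `python -m unittest <module>`; the same pattern extends to Flask/FastAPI view functions by testing the pure logic separately from the framework.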
At Verto, we’re passionate about helping businesses in emerging markets reach the world. What first started life as an FX solution for trading Nigerian Naira has become a market-leading platform, changing the way thousands of businesses transfer money in and out of emerging markets.
We believe that where you do business shouldn’t determine how successful you are, or your ability to scale. Every day, millions of companies juggle long settlement periods, high transaction fees, and difficulty accessing liquidity in order to trade with African businesses. We’re on a mission to change this by creating equal access to the easy payment and liquidity solutions that are already a given in developed markets.
We’re not alone in recognising the opportunity and the need to solve for emerging markets. We’re backed by world-class investors including Y Combinator, Quona, and MEVP; we power payments for some of the most disruptive start-ups in the world; and we have a list of accolades from leading publications, including being voted ‘Fintech Start Up of the Year’ at the Fintech Awards London 2022.
Each year we process billions of dollars of payments and provide companies with solutions which help them to save money, automate processes and grow, but we’re only just getting started.
We’re seeking a driven and results-oriented Senior Data Engineer who is excited to help build out a best-in-class Data Platform. In this role, you will be expected to achieve key milestones such as improving on our existing Data Warehouse, implementing a CI/CD framework, and enabling core technologies such as dbt and git. You will play a pivotal role in enabling long-term scalability and efficiency when it comes to all things data, and leveraging your expertise in Data Engineering to drive measurable impact.
In this role you will:
- Conceptualize, maintain, and improve the data architecture
- Evaluate design and operational cost-benefit tradeoffs within systems
- Design, build, and launch collections of data models that support multiple use cases across different products or domains
- Solve our most challenging data integration problems, optimising ELT pipelines and frameworks and sourcing from structured and unstructured data sources
- Implement CI/CD frameworks
- Create and contribute to frameworks that improve accuracy, efficiency, and general data integrity
- Design and execute best-in-class schemas
- Evaluate and implement additional data tools where needed
- Define and manage refresh schedules, load balancing, and SLAs for all data sets in allocated areas of ownership
- Collaborate with engineers, product managers, and data analysts to understand data needs, identify and resolve issues, and help set best practices for efficient data capture
- Determine and implement the data governance model and processes within your area of ownership (GDPR, PII, etc.)
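As one concrete illustration of the pipeline work above, here is a minimal sketch of an idempotent load step using Python's built-in sqlite3; the table and column names are invented, and a real warehouse load would target Snowflake/Databricks via dbt or similar:

```python
import sqlite3

def upsert_transactions(conn, rows):
    """Idempotently load extracted rows so replaying a batch
    (e.g. after a failed refresh) does not duplicate data."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS transactions ("
        " id TEXT PRIMARY KEY, amount REAL, currency TEXT)"
    )
    conn.executemany(
        "INSERT INTO transactions (id, amount, currency) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET "
        "amount=excluded.amount, currency=excluded.currency",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
batch = [("t1", 100.0, "NGN"), ("t2", 250.5, "USD")]
upsert_transactions(conn, batch)
upsert_transactions(conn, batch)  # replay is safe: still two rows
count = conn.execute("SELECT COUNT(*) FROM transactions").fetchone()[0]
```

The upsert keyed on the natural ID is what makes scheduled refreshes safe to retry, which matters once refresh schedules and SLAs are in play.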
You’ll be responsible for:
- Taking ownership of the data engineering process, from project scoping and design through communication, execution, and conclusion
- Supporting and strengthening the data infrastructure together with the data and engineering teams
- Helping the organisation understand the importance of data and advocating for best-in-class infrastructure
- Mentoring and educating team members on best-in-class DE practices
- Prioritising workload effectively
- Supporting quarterly and half-year planning from a Data Engineering perspective
Note: This is a fast-growing company, and the ideal candidate will be comfortable leading other data engineers in the future. However, the data team is currently small, so you may be asked to contribute to projects outside the typical Data Engineering role. This will most likely involve analytics engineering responsibilities such as maintaining and improving ‘core’ tables (transactions, companies, product/platform management).
Skills and Qualifications
- University degree, ideally in data engineering, software engineering, computer science, or another numerate discipline
- 7+ years of data engineering experience or equivalent
- Expert experience building data warehouses and ETL pipelines
- Expert experience with SQL, Python, Git, and dbt (incl. query efficiency and optimization)
- Expert experience with cloud data platforms (AWS, Snowflake, and/or Databricks); formal qualification preferred, not mandatory
- Significant experience with automation and integration tools (Fivetran, Airflow, Astronomer, or similar)
- Significant experience with IaC tools (Terraform, Docker, Kubernetes, or similar)
- Significant experience with CI/CD tools (Jenkins, GitHub Actions, CircleCI or similar)
Preferred Experience:
- Experience with real-time data pipelines (AWS Kinesis, Kafka, Spark)
- Experience with observability tools (Metaplane, Monte Carlo, Datadog, or similar)
- Experience within FinTech/Finance/FX preferred
We’re seeking a driven and results-oriented Data Engineer who is excited to help build out a best-in-class Data Platform. In this role, you will be expected to achieve key milestones such as improving on our existing Data Warehouse, implementing a CI/CD framework, and enabling core technologies such as dbt and git. You will play a pivotal role in enabling long-term scalability and efficiency when it comes to all things data, and leveraging your expertise in Data Engineering to drive measurable impact.
In this role you will:
- Conceptualize, maintain, and improve the data architecture
- Evaluate design and operational cost-benefit tradeoffs within systems
- Design, build, and launch collections of data models that support multiple use cases across different products or domains
- Solve our most challenging data integration problems, optimising ELT pipelines and frameworks and sourcing from structured and unstructured data sources
- Implement CI/CD frameworks
- Create and contribute to frameworks that improve accuracy, efficiency, and general data integrity
- Design and execute best-in-class schemas
- Evaluate and implement additional data tools where needed
- Define and manage refresh schedules, load balancing, and SLAs for all data sets in allocated areas of ownership
- Collaborate with engineers, product managers, and data analysts to understand data needs, identify and resolve issues, and help set best practices for efficient data capture
- Determine and implement the data governance model and processes within your area of ownership (GDPR, PII, etc.)
You’ll be responsible for:
- Taking ownership of the data engineering process, from project scoping and design through communication, execution, and conclusion
- Supporting and strengthening the data infrastructure together with the data and engineering teams
- Helping the organisation understand the importance of data and advocating for best-in-class infrastructure
- Mentoring and educating team members on best-in-class DE practices
- Prioritising workload effectively
- Supporting quarterly and half-year planning from a Data Engineering perspective
Note: The data team is currently small, so you may be asked to contribute to projects outside the typical Data Engineering role. This will most likely involve analytics engineering responsibilities such as maintaining and improving ‘core’ tables (transactions, companies, product/platform management).
Skills and Qualifications
- University degree, ideally in data engineering, software engineering, computer science, or another numerate discipline
- 4+ years of data engineering experience or equivalent
- Expert experience building data warehouses and ETL pipelines
- Expert experience with SQL, Python, Git, and dbt (incl. query efficiency and optimization)
- Expert experience with cloud data platforms (AWS, Snowflake, and/or Databricks); formal qualification preferred, not mandatory
- Significant experience with automation and integration tools (Fivetran, Airflow, Astronomer, or similar)
- Significant experience with IaC tools (Terraform, Docker, Kubernetes, or similar)
- Significant experience with CI/CD tools (Jenkins, GitHub Actions, CircleCI or similar)
Preferred Experience:
- Experience with real-time data pipelines (AWS Kinesis, Kafka, Spark)
- Experience with observability tools (Metaplane, Monte Carlo, Datadog, or similar)
- Experience within FinTech/Finance/FX preferred

At least 5 years of experience in testing and developing automation tests.
A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.
Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.
Familiarity with Playwright or other browser application testing frameworks is a significant advantage.
Proficiency in object-oriented programming and principles is required.
Extensive knowledge of AWS services is essential.
Strong expertise in REST API testing and SQL is required.
A solid understanding of testing and development life cycle methodologies is necessary.
Knowledge of the financial industry and trading systems is a plus.
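The REST API testing requirement above can be illustrated with a small, framework-free contract check; the schema and field names here are hypothetical:

```python
def validate_api_response(payload, required_fields):
    """Return a list of validation errors for a decoded JSON payload;
    an empty list means the response passes the contract check."""
    errors = []
    for field, expected_type in required_fields.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return errors

schema = {"order_id": str, "quantity": int, "price": float}
ok = validate_api_response({"order_id": "A1", "quantity": 3, "price": 9.99}, schema)
bad = validate_api_response({"order_id": "A1", "price": "9.99"}, schema)
```

In an automation framework the same helper would run against live responses fetched with an HTTP client, with one schema per endpoint under test.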


About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning.
Roles and Responsibilities:
● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers
● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies
● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals
● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration
● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans
● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement
● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members
Requirements:
● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role
● Proven experience in architecting and building web and mobile applications at scale
● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks
● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices
● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams
● Excellent problem-solving, communication, and organizational skills
● Nice to haves:
○ Prior experience in working with startups or product-based companies
○ Experience mentoring tech leads and helping shape engineering culture
○ Exposure to AI/ML, data engineering, or platform thinking
Why Join Us?:
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethics and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
We are looking for a highly skilled Solution Architect with a passion for software engineering and deep experience in backend technologies, cloud, and DevOps. This role will be central in managing, designing, and delivering large-scale, scalable solutions.
Core Skills
- Strong coding and software engineering fundamentals.
- Experience in large-scale custom-built applications and platforms.
- Champion of SOLID principles, OO design, and pair programming.
- Agile, Lean, and Continuous Delivery – CI, TDD, BDD.
- Frontend experience is a plus.
- Hands-on with Java, Scala, Golang, Rust, Spark, Python, and JS frameworks.
- Experience with Docker, Kubernetes, and Infrastructure as Code.
- Excellent understanding of cloud technologies – AWS, GCP, Azure.
Responsibilities
- Own all aspects of technical development and delivery.
- Understand project requirements and create architecture documentation.
- Ensure adherence to development best practices through code reviews.
Job Description:
We are looking for a Lead Java Developer – Backend with a strong foundation in software engineering and hands-on experience in designing and building scalable, high-performance backend systems. You’ll be working within our Digital Engineering Studios on impactful and transformative projects in a fast-paced environment.
Key Responsibilities:
- Lead and mentor backend development teams.
- Design and develop scalable backend applications using Java and Spring Boot.
- Ensure high standards of code quality through best practices such as SOLID principles and clean code.
- Participate in pair programming, code reviews, and continuous integration processes.
- Drive Agile, Lean, and Continuous Delivery practices like TDD, BDD, and CI/CD.
- Collaborate with cross-functional teams and clients for successful delivery.
Required Skills & Experience:
- 9–12+ years of experience in backend development (Up to 17 years may be considered).
- Strong programming skills in Java and backend frameworks such as Spring Boot.
- Experience in designing and building large-scale, custom-built, scalable applications.
- Sound understanding of Object-Oriented Design (OOD) and SOLID principles.
- Hands-on experience with Agile methodologies, TDD/BDD, CI/CD pipelines.
- Familiarity with DevOps practices, Docker, Kubernetes, and Infrastructure as Code.
- Good understanding of cloud technologies – especially AWS, and exposure to GCP or Azure.
- Experience working in a product engineering environment is a plus.
- Startup experience or working in fast-paced, high-impact teams is highly desirable.
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
📍 Position : Java Architect
📅 Experience : 10 to 15 Years
🧑💼 Open Positions : 3+
📍 Work Location : Bangalore, Pune, Chennai
💼 Work Mode : Hybrid
📅 Notice Period : Immediate joiners preferred; up to 1 month maximum
🔧 Core Responsibilities :
- Lead architecture design and development for scalable enterprise-level applications.
- Own and manage all aspects of technical development and delivery.
- Define and enforce best coding practices, architectural guidelines, and development standards.
- Plan and estimate the end-to-end technical scope of projects.
- Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
- Mentor and lead individual contributors and small development teams.
- Collaborate with cross-functional teams, including DevOps, Product, and QA.
- Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.
🛠️ Required Technical Skills :
- Strong hands-on expertise in Java, Spring Boot, Microservices architecture
- Experience with Kafka or similar messaging/event streaming platforms
- Proficiency in cloud platforms – AWS and Azure (must-have)
- Exposure to frontend technologies (nice-to-have)
- Solid understanding of HLD, system architecture, and design patterns
- Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
- Agile/Lean development, Pair Programming, and Continuous Integration practices
- Polyglot mindset is a plus (Scala, Golang, Python, etc.)
🚀 Ideal Candidate Profile :
- Currently working in a product-based environment
- Already functioning as an Architect or Principal Engineer
- Proven track record as an Individual Contributor (IC)
- Strong engineering fundamentals with a passion for scalable software systems
- No compromise on code quality, craftsmanship, and best practices
🧪 Interview Process :
- Round 1: Technical pairing round
- Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
- Final Round: HR and offer discussion
In your role as Software Engineer/Lead, you will directly work with other developers, Product Owners, and Scrum Masters to evaluate and develop innovative solutions. The purpose of the role is to design, develop, test, and operate a complex set of applications or platforms in the IoT Cloud area.
The role involves using advanced tools and analytical methods to gather facts and develop solution scenarios. The job holder needs to be able to write quality code, review code, and collaborate with other developers.
We have an excellent mix of people, which we believe makes for a more vibrant, more innovative, and more productive team.
- A bachelor’s degree, or master’s degree in information technology, computer science, or other relevant education
- At least 5 years of experience as Software Engineer, in an enterprise context
- Experience in design, development and deployment of large-scale cloud-based applications and services
- Good knowledge in cloud (AWS) serverless application development, event driven architecture and SQL / No-SQL databases
- Experience with IoT products, backend services and design principles
- Good knowledge of at least one backend technology, such as Node.js (JavaScript, TypeScript) or the JVM (Java, Scala, Kotlin)
- Passionate about code quality, security and testing
- Microservice development experience with Java (Spring) is a plus
- Good command of English, both oral and written

Role description:
You will be building curated, enterprise-grade solutions for deploying GenAI applications at production scale for clients. The role requires solid hands-on development and engineering skills across the GenAI stack: data ingestion, choosing the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on-premise. As this space evolves very rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. A strong ML background combined with engineering skills is highly preferred for the LLMOps role.
Required skills:
- 4-8 years of experience in working on ML projects that includes business requirement gathering, model development, training, deployment at scale and monitoring model performance for production use cases
- Strong knowledge of Python, NLP, data engineering, LangChain, Langtrace, Langfuse, RAGAS, and AgentOps (optional)
- Should have worked on proprietary and open source large language models
- Experience with LLM fine-tuning and creating distilled models from hosted LLMs
- Building data pipelines for model training
- Experience on model performance tuning, RAG, guardrails, prompt engineering, evaluation and observability
- Experience in GenAI application deployment on cloud and on-premise at scale for production
- Experience in creating CI/CD pipelines
- Working knowledge on Kubernetes
- Experience in minimum one cloud: AWS / GCP / Azure to deploy AI services
- Experience in creating workable prototypes using Agentic AI frameworks like CrewAI, Taskweaver, AutoGen
- Experience in lightweight UI development using Streamlit or Chainlit (optional)
- Desired experience on open-source tools for ML development, deployment, observability and integration
- Background on DevOps and MLOps will be a plus
- Experience working on collaborative code versioning tools like GitHub/GitLab
- Team player with good communication and presentation skills
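For orientation, the retrieval step of the "simple RAG" mentioned above can be sketched in plain Python. This is a deliberately toy bag-of-words similarity, not any particular framework's API; a real pipeline would use an embedding model and a vector store:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; real RAG uses a learned embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by similarity to the query and return the top-k,
    which would then be injected into the LLM prompt as context."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Refunds are processed within five business days.",
    "Our API uses OAuth2 bearer tokens for authentication.",
    "Invoices can be downloaded from the billing dashboard.",
]
context = retrieve("api authentication tokens", docs, k=1)
```

Advanced RAG, guardrails, and evaluation (e.g. with RAGAS) all build on top of this retrieve-then-generate loop.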


Job Title- Technical Lead
Job location- Pune/Hybrid
Availability- Immediate Joiners
Experience Range- 8-10 yrs
Desired skills - Python, Flask/FastAPI/Django, SQL/NoSQL, AWS/Azure
We are looking for a Technical Lead (Python/Flask/FastAPI/Django, AWS/Azure Cloud) who has worked across the modern full stack to deliver software products and solutions. They should have experience leading from the front, handling customer situations and internal teams, anchoring project communications, and delivering an outstanding work experience to our customers.
- 8+ years of relevant software design and development experience building cloud-native applications using Python and JavaScript stack.
- A thorough understanding of deploying to at least one of the Cloud platforms (AWS or Azure) is required. Knowledge of Kubernetes is an added advantage.
- Experience with Microservices architecture and serverless deployments.
- Well-versed with RESTful services and building scalable API architectures using any Python framework.
- Hands-on with Frontend technologies using either Angular or React.
- Experience managing distributed delivery teams, providing tech leadership, ideating with customer leadership, and driving design discussions and code reviews to deliver quality software products.
- Good attitude and passion for learning new technologies on the job.
- Good communication and leadership skills. Ability to lead the internal team as well as customer communication (email/calls).
Here is the Job Description -
Location -- Viman Nagar, Pune
Mode - 5 Days Working
Required Tech Skills:
● Strong at PySpark, Python
● Good understanding of Data Structure
● Good at SQL query/optimization
● Strong fundamentals of OOPs programming
● Good understanding of AWS Cloud, Big Data.
● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
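The canary-release bullet above can be sketched as deterministic, hash-based traffic splitting; this is a toy version of what feature-flag and progressive-delivery systems do, with an illustrative bucketing scheme:

```python
import hashlib

def in_canary(user_id: str, percent: int) -> bool:
    """Route `percent`% of users to the canary build.
    Hashing keeps each user's assignment stable across requests."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100
    return bucket < percent

# Roughly 10% of a large user population lands in a 10% canary.
rollout = sum(in_canary(f"user-{i}", 10) for i in range(10_000)) / 10_000
```

Ramping the rollout is then just raising `percent`; a blue-green cutover is the degenerate case of flipping it from 0 to 100.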
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform, Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.
Position summary:
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering, or a related field.
- 8-10 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri, in-office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka.
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js, Python, or Java.
2. Strong experience in:
i. Microservices architecture
ii. API gateways
iii. NoSQL databases (e.g., MongoDB, DynamoDB)
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees)
3. Frameworks:
i. If Java: Spring framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
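The data-structures item in the mandatory skills above is typically probed with interview-style exercises. As a small sketch (in Python, one of the accepted stacks for this role), here is a binary search tree with insert and in-order traversal:

```python
# Binary search tree sketch: insert values and traverse in order,
# which yields them sorted.

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    """Insert value into the BST rooted at root; return the (new) root."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def in_order(root):
    """Yield values in ascending order."""
    if root is not None:
        yield from in_order(root.left)
        yield root.value
        yield from in_order(root.right)

root = None
for v in [7, 3, 9, 1, 5]:
    root = insert(root, v)
print(list(in_order(root)))  # [1, 3, 5, 7, 9]
```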
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.

About the Role:
We are seeking a skilled Python Backend Developer to join our dynamic team. This role focuses on designing, building, and maintaining efficient, reusable, and reliable code that supports both monolithic and microservices architectures. The ideal candidate will have a strong understanding of backend frameworks and architectures, proficiency in asynchronous programming, and familiarity with deployment processes. Experience with AI model deployment is a plus.
Overall 5+ years of IT experience, with a minimum of 5 years of experience in Python and an open-source web framework (Django), along with AWS experience.
Key Responsibilities:
- Develop, optimize, and maintain backend systems using Python, PySpark, and FastAPI.
- Design and implement scalable architectures, including both monolithic and microservices.
- 3+ years of working experience in AWS (Lambda, Serverless, Step Functions, and EC2).
- Deep knowledge of the Python Flask/Django frameworks.
- Good understanding of REST APIs.
- Sound knowledge of databases.
- Excellent problem-solving and analytical skills.
- Leadership skills, good communication skills, and an interest in learning modern technologies.
- Apply design patterns (MVC, Singleton, Observer, Factory) to solve complex problems effectively.
- Work with web servers (Nginx, Apache) and deploy web applications and services.
- Create and manage RESTful APIs; familiarity with GraphQL is a plus.
- Use asynchronous programming techniques (ASGI, WSGI, async/await) to enhance performance.
- Integrate background job processing with Celery and RabbitMQ, and manage caching mechanisms using Redis and Memcached.
- (Optional) Develop containerized applications using Docker and orchestrate deployments with Kubernetes.
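The async/await point in the responsibilities above can be sketched with Python's standard asyncio. This is a toy illustration of concurrent I/O-bound work (the names and delays are made up), not an ASGI server or a Celery worker:

```python
import asyncio

# Toy async/await sketch: run several simulated I/O-bound calls
# concurrently instead of sequentially.

async def fetch(name, delay):
    await asyncio.sleep(delay)  # stands in for a network or DB call
    return f"{name} done"

async def main():
    # gather() schedules all coroutines concurrently, so total wall time
    # approximates the slowest call, not the sum of all delays.
    return await asyncio.gather(
        fetch("orders", 0.05),
        fetch("users", 0.02),
        fetch("billing", 0.03),
    )

print(asyncio.run(main()))  # ['orders done', 'users done', 'billing done']
```

The same pattern underlies ASGI frameworks such as FastAPI, where each request handler is a coroutine scheduled on one event loop.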
Required Skills:
- Languages & Frameworks: Python, Django, AWS
- Backend Architecture & Design: Strong knowledge of monolithic and microservices architectures, design patterns, and asynchronous programming.
- Web Servers & Deployment: Proficient in Nginx and Apache, with experience in RESTful API design and development. GraphQL experience is a plus.
- Background Jobs & Task Queues: Proficiency in Celery and RabbitMQ, with experience in caching (Redis, Memcached).
- Additional Qualifications: Knowledge of Docker and Kubernetes (optional), with any exposure to AI model deployment considered a bonus.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in backend development using Python and Django and AWS.
- Demonstrated ability to design and implement scalable and robust architectures.
- Strong problem-solving skills, attention to detail, and a collaborative mindset.
Preferred:
- Experience with Docker/Kubernetes for containerization and orchestration.
- Exposure to AI model deployment processes.

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG
We are seeking a skilled Full-stack developer. As a Full-stack developer, you will collaborate with international cross-functional teams to design, develop, and deploy high-quality software solutions.
Responsibilities:
Design, develop, and maintain the application.
Write clean, efficient, and reusable code.
Implement new features and functionality based on business requirements.
Participate in system and application architecture discussions.
Create technical designs and specifications for new features or enhancements.
Write and execute unit tests to ensure code quality.
Debug and resolve technical issues and software defects.
Conduct code reviews to ensure adherence to best practices.
Identify and fix vulnerabilities to ensure application integrity.
Work with the product owner to ensure seamless integration of user-facing elements.
Collaborate with DevOps teams for deployment and scaling.
Requirements:
Bachelor’s degree in computer science or information technology, or a related field.
Proven experience as a skilled Full-stack developer. Experience in the insurance domain is appreciated.
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities to work effectively in a team environment.
Strong experience with Spring Boot 3, Java 17 or newer and Maven.
Skills & Requirements
Angular 18+, GitHub, IntelliJ IDEA, Java 11+, Jest, Kubernetes, Maven, Mockito, NDBX/ng-aquila, NGRX, Spring Boot, State Management, Typescript, Playwright, PostgreSQL, Sonar, Swagger, AWS, Camunda, Dynatrace, Jenkins, Kafka, NGXS, Signals, Taly.
What You’ll Do:
* Establish formal data practice for the organisation.
* Build & operate scalable and robust data architectures.
* Create pipelines for the self-service introduction and usage of new data.
* Implement DataOps practices
* Design, Develop, and operate Data Pipelines which support Data scientists and machine learning Engineers.
* Build simple, highly reliable Data storage, ingestion, and transformation solutions which are easy to deploy and manage.
* Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
Who You Are:
* Experience in designing, developing and operating configurable Data pipelines serving high volume and velocity data.
* Experience working with public clouds like GCP/AWS.
* Good understanding of software engineering, DataOps, data architecture, Agile and DevOps methodologies.
* Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
* Proficient with SQL, Java, Spring boot, Python or JVM-based language, Bash.
* Experience with any of Apache open source projects such as Spark, Druid, Beam, Airflow etc. and big data databases like BigQuery, Clickhouse, etc
* Good communication skills with the ability to collaborate with both technical and non-technical people.
* Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious



General Summary:
The Senior Software Engineer will be responsible for designing, developing, testing, and maintaining full-stack solutions. This role involves hands-on coding (80% of time), performing peer code reviews, handling pull requests and engaging in architectural discussions with stakeholders. You'll contribute to the development of large-scale, data-driven SaaS solutions using best practices like TDD, DRY, KISS, YAGNI, and SOLID principles. The ideal candidate is an experienced full-stack developer who thrives in a fast-paced, Agile environment.
Essential Job Functions:
- Design, develop, and maintain scalable applications using Python and Django.
- Build responsive and dynamic user interfaces using React and TypeScript.
- Implement and integrate GraphQL APIs for efficient data querying and real-time updates.
- Apply design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure maintainable and scalable code.
- Develop and manage RESTful APIs for seamless integration with third-party services.
- Design, optimize, and maintain SQL databases like PostgreSQL, MySQL, and MSSQL.
- Use version control systems (primarily Git) and follow collaborative workflows.
- Work within Agile methodologies such as Scrum or Kanban, participating in daily stand-ups, sprint planning, and retrospectives.
- Write and maintain unit tests, integration tests, and end-to-end tests, following Test-Driven Development (TDD).
- Collaborate with cross-functional teams, including Product Managers, DevOps, and UI/UX Designers, to deliver high-quality products
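One of the design patterns named above, Observer, can be sketched in a few lines of Python. This is a minimal illustration of the pattern itself, not the team's actual event system:

```python
# Minimal Observer pattern: subscribers register callbacks and are
# notified whenever the subject publishes an event.

class Subject:
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for callback in self._observers:
            callback(event)

received = []
subject = Subject()
subject.subscribe(lambda e: received.append(f"log: {e}"))
subject.subscribe(lambda e: received.append(f"email: {e}"))
subject.notify("order_created")
print(received)  # ['log: order_created', 'email: order_created']
```

The same decoupling idea scales up to GraphQL subscriptions and message-broker-based event systems: producers emit events without knowing who consumes them.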
Essential functions are the basic job duties that an employee must be able to perform, with or without reasonable accommodation. The function is considered essential if the reason the position exists is to perform that function.
Supportive Job Functions:
- Remain knowledgeable of new emerging technologies and their impact on internal systems.
- Available to work on call when needed.
- Perform other miscellaneous duties as assigned by management.
These tasks do not meet the Americans with Disabilities Act definition of essential job functions and usually equal 5% or less of time spent. However, these tasks still constitute important performance aspects of the job.
Skills
- The ideal candidate must have strong proficiency in Python and Django, with a solid understanding of Object-Oriented Programming (OOP) principles. Expertise in JavaScript, TypeScript, and React is essential, along with hands-on experience in GraphQL for efficient data querying.
- The candidate should be well-versed in applying design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure scalable and maintainable code architecture.
- Proficiency in building and integrating REST APIs is required, as well as experience working with SQL databases like PostgreSQL, MySQL, and MSSQL.
- Familiarity with version control systems (especially Git) and working within Agile methodologies like Scrum or Kanban is a must.
- The candidate should also have a strong grasp of Test-Driven Development (TDD) principles.
- In addition to the above, it is good to have experience with Next.js for server-side rendering and static site generation, as well as knowledge of cloud infrastructure such as AWS or GCP.
- Familiarity with NoSQL databases, CI/CD pipelines using tools like GitHub Actions or Jenkins, and containerization technologies like Docker and Kubernetes is highly desirable.
- Experience with microservices architecture and event-driven systems (using tools like Kafka or RabbitMQ) is a plus, along with knowledge of caching technologies such as Redis or Memcached. Understanding OAuth 2.0, JWT, and SSO authentication mechanisms, and adhering to API security best practices following OWASP guidelines, is beneficial.
- Additionally, experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation, and familiarity with performance monitoring tools such as New Relic or Datadog will be considered an advantage.
Abilities:
- Ability to organize, prioritize, and handle multiple assignments on a daily basis.
- Strong and effective interpersonal and communication skills.
- Ability to interact professionally with a diverse group of clients and staff.
- Must be able to work flexible hours on-site and remote.
- Must be able to coordinate with other staff and provide technological leadership.
- Ability to work in a complex, dynamic team environment with minimal supervision.
- Must possess good organizational skills.
Education, Experience, and Certification:
- Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.), but equivalent work experience in a technology-related area may substitute.
- 2+ years relevant experience, required.
- Experience using version control daily in a developer environment.
- Experience with Python, JavaScript, and React is required.
- Experience using rapid development frameworks like Django or Flask.
- Experience using front end build tools.
Scope of Job:
- No direct reports.
- No supervisory responsibility.
- Consistent work week with minimal travel
- Errors may be serious, costly, and difficult to discover.
- Contact with others inside and outside the company is regular and frequent.
- Some access to confidential data.
We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms like AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality solutions.
Responsibilities:
Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool).
Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.
Develop data routes: You design scalable and powerful data management processes.
Analyze data: You derive sound findings from data sets and present them in an understandable way.
Requirements:
Requirements management and project experience: You successfully implement cloud-based data & analytics projects.
Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.
Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).
SQL know-how: You have a sound and solid knowledge of SQL.
Data management: You are familiar with topics such as master data management and data quality.
Bachelor's degree in computer science, or a related field.
Strong communication and collaboration abilities to work effectively in a team environment.
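The ELT idea behind dbt (raw data loaded first, then transformed in SQL) can be sketched with Python's built-in sqlite3. The table names and rows here are illustrative, and a real dbt model would target Snowflake rather than SQLite:

```python
import sqlite3

# ELT sketch: load raw rows first, then build a derived "model" table
# purely in SQL, which is the shape of a dbt transformation.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [("acme", 100.0), ("acme", 50.0), ("globex", 75.0)],
)

# The "T" in ELT: an aggregate model built on top of the raw layer.
conn.execute("""
    CREATE TABLE customer_totals AS
    SELECT customer, SUM(amount) AS total
    FROM raw_orders
    GROUP BY customer
""")

rows = conn.execute(
    "SELECT customer, total FROM customer_totals ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 150.0), ('globex', 75.0)]
```

In dbt the `customer_totals` query would live in its own model file, with the raw table referenced via `ref()`/`source()` and materialization handled by the tool.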
Skills & Requirements
Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.
Job Title: Senior Automation Engineer (API & Cloud Testing)
Job Type: Full-Time
Job Location: Bangalore, Pune
Work Mode: Hybrid
Experience: 8+ years (Minimum 5 years in Automation)
Notice Period: 0-30 days
About the Role:
We are looking for an experienced Senior Automation Engineer to join our team. The ideal candidate should have extensive expertise in API testing, Node.js, Cypress, Postman/Newman, and cloud-based platforms (AWS/Azure/GCP). The role involves automating workflows in ArrowSphere, optimizing test automation pipelines, and ensuring software quality in an Agile environment. The selected candidate will work closely with teams in France, requiring strong communication skills.
Key Responsibilities:
Automate ArrowSphere Workflows: Develop and implement automation strategies for ArrowSphere Public API workflows to enhance efficiency.
Support QA Team: Guide and assist QA engineers in improving automation strategies.
Optimize Test Automation Pipeline: Design and maintain a high-performance test automation framework.
Minimize Test Flakiness: Identify root causes of flaky tests and implement solutions to improve software reliability.
Ensure Software Quality: Actively contribute to maintaining the software’s high standards and cloud service innovation.
Mandatory Skills:
API Testing: Strong knowledge of API testing methodologies.
Node.js: Experience in automation with Cypress, Postman, and Newman.
Cloud Platforms: Working knowledge of AWS, Azure, or GCP (certification is a plus).
Agile Methodologies: Hands-on experience working in an Agile environment.
Technical Communication: Ability to interact with international teams effectively.
Technical Skills:
Cypress: Expertise in front-end automation with Cypress, ensuring scalable and reliable test scripts.
Postman & Newman: Experience in API testing and test automation integration within CI/CD pipelines.
Jenkins: Ability to set up and maintain CI/CD pipelines for automation.
Programming: Proficiency in Node.js (PHP knowledge is a plus).
AWS Architecture: Understanding of AWS services for development and testing.
Git Version Control: Experience with Git workflows (branching, merging, pull requests).
Scripting & Automation: Knowledge of Bash/Python for scripting and automating tasks.
Problem-Solving: Strong debugging skills across front-end, back-end, and database.
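Although this role's stack centres on Node.js, Cypress, and Postman/Newman, the core idea behind API test automation is language-agnostic: assert that a response honours its contract before making deeper checks. A sketch in Python with a hypothetical payload shape:

```python
# Sketch of an API contract check: verify that a (hypothetical) JSON
# response carries the expected fields and types. The schema below is
# an illustrative assumption, not a real ArrowSphere API contract.

def validate_order_response(payload):
    """Return a list of contract violations; an empty list means valid."""
    errors = []
    expected = {"id": str, "status": str, "total": (int, float)}
    for field, ftype in expected.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            errors.append(f"wrong type for {field}")
    return errors

good = {"id": "ord-1", "status": "paid", "total": 49.99}
bad = {"id": "ord-2", "total": "49.99"}
print(validate_order_response(good))  # []
print(validate_order_response(bad))   # ['missing field: status', 'wrong type for total']
```

In Postman/Newman the equivalent checks live in test scripts run per request; centralising them in one validator, as above, is one way to reduce flaky, copy-pasted assertions.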
Preferred Qualifications:
Cloud Certification (AWS, Azure, or GCP) is an added advantage.
Experience working with international teams, particularly in Europe.

Position: .NET C# Developer
Job Category: Embedded HW_SW
Job Type: Full Time
Job Location: Pune
Experience: 5-7 years
Notice period: 0-30 days
Shift timing: General Shift
Work Mode: 5 Days work from EIC office
Education Required: Bachelor’s / Master’s / PhD: BE/B.Tech
Must have skills: .NET Core, C#, microservices
Good to have skills: RDBMS, cloud platforms like AWS and Azure
Mandatory Skills
5-7 years of experience in software development using C# and .NET Core
Hands-on experience in building Microservices with a focus on scalability and reliability.
Expertise in Docker for containerization and Kubernetes for orchestration and management of containerized applications.
Strong working knowledge of Cosmos DB (or similar NoSQL databases) and experience in designing distributed databases.
Familiarity with CI/CD pipelines, version control systems (like Git), and Agile development methodologies.
Proficiency in RESTful API design and development.
Experience with cloud platforms like Azure, AWS, or Google Cloud is a plus.
Excellent problem-solving skills and the ability to work independently and in a collaborative environment.
Strong communication skills, both verbal and written.
Key Responsibilities
Design, develop, and maintain applications using C#, .NET Core, and Microservices architecture.
Build, deploy, and manage containerized applications using Docker and Kubernetes.
Work with Cosmos DB for efficient database design, management, and querying in a cloud-native environment.
Collaborate with cross-functional teams to define application requirements and ensure timely delivery of features.
Write clean, scalable, and efficient code following best practices and coding standards.
Implement and integrate APIs and services with microservice architectures.
Troubleshoot, debug, and optimize applications for performance and scalability.
Participate in code reviews and contribute to improving coding standards and practices.
Stay up-to-date with the latest industry trends, technologies, and development practices.
Optional Skills
Experience with Azure Kubernetes Service (AKS) or Amazon Elastic Kubernetes Service (EKS)
Familiarity with Event-driven architecture, RabbitMQ, Kafka, or similar messaging systems.
Knowledge of DevOps practices and tools for continuous integration and deployment.
- Experience with front-end technologies like Angular or React is a plus.

Position: Data Scientist
Job Category: Embedded HW_SW
Job Type: Full Time
Job Location: Pune
Experience: 3 - 5 years
Notice period: 0-30 days
Must have skills: Python, Linux-Ubuntu based OS, cloud-based platforms
Education Required: Bachelor’s / Master’s / PhD in Computer Science, Statistics, Mathematics, Data Science, or Engineering (Bachelor’s with 5 years of experience, or Master’s with 3 years)
Mandatory Skills
- Bachelor’s or master’s in computer science, Statistics, Mathematics, Data Science, Engineering, or related field
- 3-5 years of experience as a data scientist, with a strong foundation in machine learning fundamentals (e.g., supervised and unsupervised learning, neural networks)
- Experience with Python programming language (including libraries such as NumPy, pandas, scikit-learn) is essential
- Deep hands-on experience building computer vision and anomaly detection systems involving algorithm development in fields such as image-segmentation
- Some experience with open-source OCR models
- Proficiency in working with large datasets and experience with feature engineering techniques is a plus
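As a toy illustration of the anomaly-detection emphasis above, here is a simple z-score outlier flag in pure standard-library Python. Real pipelines would use NumPy/pandas and far richer models (especially for image-based anomaly detection); the readings and the 2-sigma threshold are made up:

```python
from statistics import mean, stdev

# Z-score anomaly detection sketch: flag values more than `threshold`
# sample standard deviations from the mean. Toy data, toy threshold.

def find_anomalies(values, threshold=2.0):
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 25.0, 10.2]
print(find_anomalies(readings))  # [25.0]
```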
Key Responsibilities
- Work closely with the AI team to help build complex algorithms that provide unique insights into our data using images.
- Use agile software development processes to make iterative improvements to our back-end systems.
- Stay up to date with the latest developments in machine learning and data science, exploring new techniques and tools to apply within Customer’s business context.
Optional Skills
- Experience working with cloud-based platforms (e.g., Azure, AWS, GCP)
- Knowledge of computer vision techniques and experience with libraries like OpenCV
- Excellent Communication skills, especially for explaining technical concepts to nontechnical business leaders.
- Ability to work on a dynamic, research-oriented team that has concurrent projects.
- Working knowledge of Git/version control.
- Expertise in PyTorch, TensorFlow, and Keras.
- Excellent coding skills, especially in Python.
- Experience with Linux-Ubuntu based OS
At least 5 years of experience in testing and developing automation tests.
A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.
Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.
Familiarity with Playwright or other browser application testing frameworks is a significant advantage.
Proficiency in object-oriented programming and principles is required.
Extensive knowledge of AWS services is essential.
Strong expertise in REST API testing and SQL is required.
A solid understanding of testing and development life cycle methodologies is necessary.
Knowledge of the financial industry and trading systems is a plus
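A common shape for the data-migration testing mentioned above is source-versus-target reconciliation: confirm that every record made it across and that nothing unexpected appeared. A minimal Python sketch with hypothetical record sets:

```python
# Migration reconciliation sketch: compare key sets between a source and
# a target dataset. The record sets here are illustrative.

def reconcile(source_rows, target_rows, key="id"):
    """Return (missing_in_target, unexpected_in_target) as key sets."""
    src_keys = {row[key] for row in source_rows}
    tgt_keys = {row[key] for row in target_rows}
    return src_keys - tgt_keys, tgt_keys - src_keys

source = [{"id": 1}, {"id": 2}, {"id": 3}]
target = [{"id": 1}, {"id": 3}, {"id": 4}]
missing, unexpected = reconcile(source, target)
print(sorted(missing), sorted(unexpected))  # [2] [4]
```

At big-data scale the same comparison would typically run as aggregate queries (counts, checksums per partition) rather than materialising full key sets in memory.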
What you'll do:
· Perform complex application programming activities with an emphasis on mobile development: Node.js, TypeScript, JavaScript, RESTful APIs and related backend frameworks
· Assist in the definition of system architecture and detailed solution design that are scalable and extensible
· Collaborate with Product Owners, Designers, and other engineers on different permutations to find the best solution possible
· Own the quality of code and do your own testing. Write unit test and improve test coverage.
· Deliver amazing solutions to production that knock everyone’s socks off
· Mentor junior developers on the team
What we’re looking for:
· Amazing technical instincts. You know how to evaluate and choose the right technology and approach for the job. You have stories you could share about what problem you thought you were solving at first, but through testing and iteration, came to solve a much bigger and better problem that resulted in positive outcomes all-around.
· A love for learning. Technology is continually evolving around us, and you want to keep up to date to ensure we are using the right tech at the right time.
· A love for working in ambiguity—and making sense of it. You can take in a lot of disparate information and find common themes, recommend clear paths forward and iterate along the way. You don’t form an opinion and sell it as if it’s gospel; this is all about being flexible, agile, dependable, and responsive in the face of many moving parts.
· Confidence, not ego. You have an ability to collaborate with others and see all sides of the coin to come to the best solution for everyone.
· Flexible and willing to accept change in priorities, as necessary
· Demonstrable passion for technology (e.g., personal projects, open-source involvement)
· Enthusiastic embrace of DevOps culture and collaborative software engineering
· Ability and desire to work in a dynamic, fast paced, and agile team environment
· Enthusiasm for cloud computing platforms such as AWS or Azure
Basic Qualifications:
· Minimum B.S. / M.S. Computer Science or related discipline from accredited college or University
· At least 4 years of experience designing, developing, and delivering backend applications with Node.js, TypeScript
· At least 2 years of experience building internet facing services
· At least 2 years of experience with AWS and/or OpenShift
· Exposure to some of the following concepts: object-oriented programming, software engineering techniques, quality engineering, parallel programming, databases, etc.
· Experience integrating APIs with front-end and/or mobile-specific frameworks
· Proficiency in building and consuming RESTful APIs
· Ability to manage multiple tasks and consistently meet established timelines
· Strong collaboration skills
· Excellent written and verbal communications skills
Preferred Qualifications:
· Experience with the Apache Cordova framework
· Demonstrable knowledge of native coding in iOS and Android
· Experience developing and deploying applications within Kubernetes-based containers
· Experience in Agile and SCRUM development techniques


At Verto, we’re passionate about helping businesses in Africa reach the world. What first started life as a FX solution for trading Nigerian Naira has now become a market-leading platform, changing the way thousands of businesses transfer money in and out of Africa.
We believe that where you do business shouldn’t determine how successful you are, or your ability to scale. Millions of companies a day have to juggle long settlement periods, high transaction fees and issues accessing liquidity in order to trade with African businesses. We’re on a mission to change this by creating equal access to easy payment and liquidity solutions that are already a given in developed markets.
We’re not alone in realising the opportunity and need to solve for emerging markets. We’re backed by world-class investors including Y-Combinator, Quona and MEVP, power payments for some of the most disruptive start-ups in the world and have a list of accolades from leading publications including being voted ‘Fintech Start Up of the Year’ at Fintech Awards London 2022.
Each year we process billions of dollars of payments and provide companies with solutions which help them to save money, automate processes and grow, but we’re only just getting started.
We are looking for a strong Full Stack Developer to join our team. This person will be involved in active development assignments. You are expected to have between 2 and 6 years of professional experience in any object-oriented language, to have done considerable work in Node.js along with a modern web application library such as Angular, and to have at least a working knowledge of developing scalable distributed cloud applications on AWS or another cloud.
We’re looking for someone who is not only a good full-stack developer but also aware of modern trends in distributed software application development. You’re smart enough to work at top companies, but you’re picky about finding the right role (this is more than just a job, right?). You’re experienced, but you also like to learn new things. And you want to work with smart people and have fun building something great.
In this role you will:
- Design RESTful APIs
- Work with other team members to develop and test highly scalable web applications and services as part of a suite of products in the Data governance domain working with petabyte-scale data
- Design and create services and system architecture for your projects, and contribute and provide feedback to other team members
- Use AWS to set up geographically agnostic systems in the cloud.
- Exercise your strong skills & working knowledge of MySQL and relational databases
- Prototype and develop new ideas and participate in all parts of the lifecycle from research to release
- Work within a small team owning deliverables for our web APIs and front end.
- Use development tools such as AWS Codebuild, git, npm, Visual Studio Code, Serverless framework, Swagger Specs, Angular, Flutter, AWS Lambda, MongoDB, MySQL, Redis, SQS, Kafka etc.
- Design and develop dockerized applications that will be deployed flexibly either on the cloud or on-premises depending on business requirements
You’ll have:
- 5+ years of professional development experience using any object-oriented language
- Have developed and delivered at least one application using Node.js
- Experience with modern web application building libraries such as Angular, Polymer, React
- Solid OOP and software design knowledge – you should know how to create software that’s extensible, reusable and meets desired architectural objectives
- Excellent understanding of HTTP and REST standards
- Experience with relational databases, particularly MySQL
- Good experience writing unit and acceptance tests
- Proven experience in developing highly scalable distributed cloud applications on a cloud system, preferably AWS
- You’re a great communicator and are capable of not just doing the work, but teaching others and explaining the “why” behind complicated technical decisions.
- You aren’t afraid to roll up your sleeves: This role will evolve, and we’ll want you to evolve with it!
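The "unit and acceptance tests" requirement above can be illustrated with a minimal Python `unittest` sketch; the `slugify` helper is a hypothetical example, not code from any product mentioned here:

```python
import unittest

def slugify(title: str) -> str:
    """Hypothetical helper: lowercase, trim, join words with hyphens."""
    return "-".join(title.strip().lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  A  B  "), "a-b")

if __name__ == "__main__":
    # exit=False so the process keeps running after the test report
    unittest.main(exit=False, argv=["slugify_test"])
```

Each test pins one behavior of the unit under test, which is exactly what "code coverage" metrics in these listings are measuring.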
About the company
KPMG International Limited, commonly known as KPMG, is one of the largest professional services networks in the world, recognized as one of the "Big Four" accounting firms alongside Deloitte, PricewaterhouseCoopers (PwC), and Ernst & Young (EY). KPMG provides a comprehensive range of professional services primarily focused on three core areas: Audit and Assurance, Tax Services, and Advisory Services. Their Audit and Assurance services include financial statement audits, regulatory audits, and other assurance services. The Tax Services cover various aspects such as corporate tax, indirect tax, international tax, and transfer pricing. Meanwhile, their Advisory Services encompass management consulting, risk consulting, deal advisory, and other related services.
Apply through this link- https://forms.gle/qmX9T7VrjySeWYa37
Job Description
Position: Data Engineer
Experience: 5+ years of relevant experience
Location: WFO (3 days in office) - Pune (Kharadi), NCR (Gurgaon), Bangalore
Employment Type: Contract for 3 months; can be extended based on performance and future requirements
Skills Required:
• Proficiency in SQL, AWS, and data integration tools like Airflow or equivalent; knowledge of tools like JIRA, GitHub, etc.
• A Data Engineer who can work on data management activities and orchestration processes.


We are looking for a Full Stack Developer/Lead to lead, own, and deliver across the entire application development life cycle. He/she will be responsible for creating and owning the product roadmap of an enterprise software product.
- A responsible and passionate professional with the willpower to drive product goals and ensure the outcomes expected from the team.
- He / She should have a strong desire and eagerness to learn new and emerging technologies.
Skills Required:
- Python/Django Rest Framework
- Database structure
- Cloud Ops - AWS
Roles & Responsibilities:
- Developer responsibilities include writing and testing code and debugging programs
- Design and implementation of REST API
- Build, release, and manage the configuration of all production systems
- Manage a continuous integration and deployment methodology for server-based technologies
- Identify customer problems and create functional prototypes offering a solution
If you are willing to take up challenges and contribute to developing world-class products, this is the place for you.
About FarmMobi :
A trusted enterprise software product company in AgTech space started with a mission to revolutionize the Global agriculture sector.
We operate on a software-as-a-service (SaaS) model and cater to the needs of global customers in the field of agriculture.
The idea is to use emerging technologies like mobility, IoT, drones, satellite imagery, blockchain, etc. to digitally transform the agriculture landscape.
6+ years of experience with deployment and management of Kubernetes clusters in production environments as a DevOps engineer.
• Expertise in Kubernetes fundamentals like nodes, pods, services, deployments etc., and their interactions with the underlying infrastructure.
• Hands-on experience with containerization technologies such as Docker or RKT to package applications for use in a distributed system managed by Kubernetes.
• Knowledge of software development cycle including coding best practices such as CI/CD pipelines and version control systems for managing code changes within a team environment.
• Must have a deep understanding of cloud computing and the operational processes needed when setting up workloads on these platforms
• Experience with Agile software development and knowledge of best practices for an Agile Scrum team.
• Proficient with Git version control
• Experience working with Linux and cloud compute platforms.
• Excellent problem-solving skills and ability to troubleshoot complex issues in distributed systems.
• Excellent communication & interpersonal skills, effective problem-solving skills and logical thinking ability and strong commitment to professional and client service excellence.


Responsibilities
- Develop and maintain robust APIs to support various applications and services.
- Design and implement scalable solutions using AWS cloud services.
- Utilize Python frameworks such as Flask and Django to build efficient and high-performance applications.
- Collaborate with cross-functional teams to gather and analyze requirements for new features and enhancements.
- Ensure the security and integrity of applications by implementing best practices and security measures.
- Optimize application performance and troubleshoot issues to ensure smooth operation.
- Provide technical guidance and mentorship to junior team members.
- Conduct code reviews to ensure adherence to coding standards and best practices.
- Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
- Develop and maintain documentation for code, processes, and procedures.
- Stay updated with the latest industry trends and technologies to continuously improve skills and knowledge.
- Contribute to the overall success of the company by delivering high-quality software solutions that meet business needs.
- Foster a collaborative and inclusive work environment that promotes innovation and continuous improvement.
Qualifications
- Possess strong expertise in developing and maintaining APIs.
- Demonstrate proficiency in AWS cloud services and their application in scalable solutions.
- Have extensive experience with Python frameworks such as Flask and Django.
- Exhibit strong analytical and problem-solving skills to address complex technical challenges.
- Show ability to collaborate effectively with cross-functional teams and stakeholders.
- Display excellent communication skills to convey technical concepts clearly.
- A background in the Consumer Lending domain is a plus.
- Demonstrate commitment to continuous learning and staying updated with industry trends.
- Possess a strong understanding of agile development methodologies.
- Show experience in mentoring and guiding junior team members.
- Exhibit attention to detail and a commitment to delivering high-quality software solutions.
- Demonstrate ability to work effectively in a hybrid work model.
- Show a proactive approach to identifying and addressing potential issues before they become problems.
Job Summary:
We are seeking a skilled Senior Data Engineer with expertise in application programming, big data technologies, and cloud services. This role involves solving complex problems, designing scalable systems, and working with advanced technologies to deliver innovative solutions.
Key Responsibilities:
- Develop and maintain scalable applications using OOP principles, data structures, and problem-solving skills.
- Build robust solutions using Java, Python, or Scala.
- Work with big data technologies like Apache Spark for large-scale data processing.
- Utilize AWS services, especially Amazon Redshift, for cloud-based solutions.
- Manage databases including SQL, NoSQL (e.g., MongoDB, Cassandra), with Snowflake as a plus.
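As a minimal, hedged illustration of the SQL side of this role, here is a Python sketch using the standard library's `sqlite3` module; the `events` table and its columns are a hypothetical stand-in for a real warehouse table such as one in Redshift:

```python
import sqlite3

def load_and_count(rows):
    """Load (user_id, amount) rows and return the total amount per user."""
    conn = sqlite3.connect(":memory:")  # in-memory DB for the sketch
    conn.execute("CREATE TABLE events (user_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
    cur = conn.execute(
        "SELECT user_id, SUM(amount) FROM events "
        "GROUP BY user_id ORDER BY user_id"
    )
    result = cur.fetchall()
    conn.close()
    return result

if __name__ == "__main__":
    print(load_and_count([("a", 10.0), ("b", 5.0), ("a", 2.5)]))
    # [('a', 12.5), ('b', 5.0)]
```

The same aggregate query pattern carries over to Redshift or Snowflake; only the connection layer changes.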
Qualifications:
- 5+ years of experience in software development.
- Strong skills in OOP, data structures, and problem-solving.
- Proficiency in Java, Python, or Scala.
- Experience with Spark, AWS (Redshift mandatory), and databases (SQL/NoSQL).
- Snowflake experience is good to have.

Role & Responsibilities:
As a Full Stack Developer Intern, you will take on significant responsibilities in the design, development, and maintenance of web applications using Next.js, React.js, Node.js, PostgreSQL, and AWS Cloud services. We seek individuals who are self-motivated, energetic, and capable of delivering high-quality work with minimal supervision.
- Develop user-friendly web applications using Next.js and React.js.
- Create and implement RESTful APIs using Node.js.
- Write high-quality, maintainable code while adhering to best practices in software development.
- Deliver projects on time while maintaining a strong focus on performance and user experience.
- Manage data effectively using PostgreSQL databases.
- Code Quality & Reviews: Maintain code quality standards and conduct regular code reviews to ensure the delivery of high-quality, error-free code.
- Performance Optimization: Identify and troubleshoot performance bottlenecks to ensure a seamless and lightning-fast platform experience.
- Bug Fixing & Maintenance: Monitor platform performance and proactively address any issues or bugs to keep the platform running flawlessly.
- Contribute innovative ideas and solutions during team discussions and brainstorming sessions.
- Communicate openly and honestly with team members, sharing insights and feedback constructively.
- Stay updated on emerging technologies and demonstrate a willingness to learn more.
Qualification:
- Graduate/Post-Graduate with a degree in Computer Science, Software Engineering, or a related field.
- Proficiency in HTML, CSS, JavaScript, and modern front-end frameworks (specifically Next.js and React.js).
- Strong knowledge of back-end technologies such as Node.js and Express.js.
- Experience with relational databases, particularly PostgreSQL.
- Familiarity with AWS Cloud services is a plus.
- Excellent problem-solving skills with a proactive approach to challenges.
- Proven ability to troubleshoot and resolve complex technical issues.
- Strong communication skills with the confidence to share ideas openly.
- High energy level and passion for contributing to the company’s success with integrity and honesty.
- Startup Enthusiast: Embrace the fast-paced and dynamic environment of a startup, driven by a passion for making a positive impact.
Dear Candidate,
We are urgently Hiring AWS Cloud Engineer for Bangalore Location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: Aws Cloud
Salary: Best in industry (20-25% hike on current CTC)
Note:
Only immediate to 15-day joiners will be preferred.
Only candidates from Tier 1 companies will be shortlisted and selected.
Candidates with a notice period of more than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job description:
Description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, with at least 5 years of relevant experience.
Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge (Redshift and Snowflake preferred)
Working with IaC - Terraform and CloudFormation
Working understanding of scripting languages, including Python and shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, including Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL based solutions
Experience working with streaming technologies, including Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for the technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a shared managed service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technical hands on
Provide Incident and Problem management on the AWS IaaS and PaaS Platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor an AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN
- TiDB (good to have)
- Kubernetes (must have)
- MySQL (must have)
- MariaDB (must have)
- Looking for a candidate with more exposure to reliability than maintenance


1. 4+ years of experience in Java development.
2. Good communication skills are mandatory.
3. Spring Boot, microservices, AWS, multithreading, and Git are mandatory.
4. Angular or React is mandatory.
5. Joining within 2 weeks.
6. Location: Pune, working from the client location at D.P. Road (very near the metro station). Another location is Indore (if available).
7. Permanent position with Prismatic.
8. Exposure to product development and the latest technologies. Prospects of international travel for the bright candidate.
Position: SDE-1 DevSecOps
Location: Pune, India
Experience Required: 0+ Years
We are looking for a DevSecOps engineer to contribute to product development, mentor team members, and devise creative solutions for customer needs. We value effective communication in person, in documentation, and in code. Ideal candidates thrive in small, collaborative teams, love making an impact, and take pride in their work with a product-focused, self-driven approach. If you're passionate about integrating security and deployment seamlessly into the development process, we want you on our team.
About FlytBase
FlytBase is a global leader in enterprise drone software automation. The FlytBase platform enables drone-in-a-box deployments across the globe, with the largest network of partners in 50+ countries.
The team comprises young engineers and designers from top-tier universities such as IIT-B, IIT-KGP, University of Maryland, Georgia Tech, COEP, SRM, KIIT and with deep expertise in drone technology, computer science, electronics, aerospace, and robotics.
The company is headquartered in Silicon Valley, California, USA, and has R&D offices in Pune, India. Widely recognized as a pioneer in the commercial drone ecosystem, FlytBase continues to win awards globally - FlytBase was the Global Grand Champion at the ‘NTT Data Open Innovation Contest’ held in Tokyo, Japan, and was the recipient of the ‘TiE50 Award’ at TiE Silicon Valley.
Role and Responsibilities:
- Participate in the creation and maintenance of CI/CD solutions and pipelines.
- Leverage Linux and shell scripting for automating security and system updates, and design secure architectures using AWS services (VPC, EC2, S3, IAM, EKS/Kubernetes) to enhance application deployment and management.
- Build and maintain secure Docker containers, manage orchestration using Kubernetes, and automate configuration management with tools like Ansible and Chef, ensuring compliance with security standards.
- Implement and manage infrastructure using Terraform, aligning with security and compliance requirements, and set up Dynatrace for advanced monitoring, alerting, and visualization of security metrics. Develop Terraform scripts to automate and optimize infrastructure provisioning and management tasks.
- Utilize Git for secure source code management and integrate continuous security practices into CI/CD pipelines, applying vulnerability scanning and automated security testing tools.
- Contribute to security assessments, including vulnerability and penetration testing against frameworks such as NIST, CIS AWS Benchmarks, and NIS2.
- Implement and oversee compliance processes for SOC 2, ISO 27001, and GDPR.
- Stay updated on cybersecurity trends and best practices, including knowledge of SAST and DAST tools and the OWASP Top 10.
- Automate routine tasks and create tools to improve team efficiency and system robustness.
- Contribute to disaster recovery plans and ensure robust backup systems are in place.
- Develop and enforce security policies and respond effectively to security incidents.
- Manage incident response protocols, including on-call rotations and strategic planning.
- Conduct post-incident reviews to prevent recurrence and refine the system reliability framework.
- Implement Service Level Indicators (SLIs) and maintain Service Level Objectives (SLOs) and Service Level Agreements (SLAs) to ensure high standards of service delivery and reliability.
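The SLI/SLO/SLA responsibility can be made concrete with a small Python sketch; the 99.9% availability objective used here is a hypothetical example, not FlytBase's actual target:

```python
def availability_sli(successful: int, total: int) -> float:
    """Service Level Indicator: fraction of requests that succeeded."""
    if total == 0:
        return 1.0  # no traffic means nothing was violated
    return successful / total

def meets_slo(sli: float, slo: float = 0.999) -> bool:
    """Check the measured SLI against the Service Level Objective."""
    return sli >= slo

if __name__ == "__main__":
    sli = availability_sli(successful=99_950, total=100_000)
    print(sli, meets_slo(sli))  # 0.9995 True
```

In practice the counts would come from a monitoring system (e.g. Dynatrace, as mentioned above) rather than hard-coded values, and the SLO would be negotiated per service.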
Best suited for candidates who: (Skills/Experience)
- Up to 4 years of experience in a related field, with a strong emphasis on learning and execution.
- Background in IT or computer science.
- Familiarity with CI/CD tools, cloud platforms (AWS, Azure, or GCP), and programming languages like Python, JavaScript, or Ruby.
- Solid understanding of network layers and TCP/IP protocols.
- In-depth understanding of operating systems, networking, and cloud services.
- Strong problem-solving skills with a 'hacker' mindset.
- Knowledge of security principles, threat modeling, risk assessment, and vulnerability management is a plus.
- Relevant certifications (e.g., CISSP, GWAPT, OSCP) are a plus.
Compensation:
This role comes with an annual CTC that is market competitive and depends on the quality of your work experience, degree of professionalism, culture fit, and alignment with FlytBase’s long-term business strategy.
Perks:
- Fast-paced Startup culture
- Hacker mode environment
- Enthusiastic and approachable team
- Professional autonomy
- Company-wide sense of purpose
- Flexible work hours
- Informal dress code
About Vijay Sales
Vijay Sales is a leading eCommerce and retail brand delivering exceptional experiences to customers across India. Leveraging technology to enhance shopping experiences, we are on a mission to create seamless omnichannel platforms and innovative solutions for our customers. Join our dynamic team to shape the future of retail technology.
Role Overview
As a Lead Backend Engineer, you will drive the development and optimization of robust backend systems for our products built on the MERN (MongoDB, Express.js, React, Node.js) stack. You'll lead a talented team of developers, design scalable architectures, and ensure the seamless integration of backend functionalities with our frontend systems.
Key Responsibilities
- Lead the design, development, and deployment of backend systems for Vijay Sales' products.
- Architect scalable, secure, and high-performance APIs and microservices.
- Mentor and guide a team of backend developers, ensuring best practices in coding, architecture, and DevOps.
- Collaborate with cross-functional teams, including frontend engineers, DevOps, and product managers, to deliver end-to-end solutions.
- Optimize database structures and queries for performance and scalability.
- Implement and oversee CI/CD pipelines to streamline deployment processes.
- Troubleshoot and resolve performance bottlenecks, security issues, and other technical challenges.
- Stay updated on industry trends, tools, and technologies to continuously improve our stack.
Skills and Qualifications
Must-Have:
- Proven experience as a backend engineer with expertise in the MERN stack (Node.js, Express.js, MongoDB).
- Strong understanding of RESTful APIs and GraphQL.
- Proficiency in database design, indexing, and optimization (MongoDB is preferred).
- Hands-on experience with containerization tools like Docker and orchestration platforms like Kubernetes.
- Strong knowledge of authentication/authorization frameworks (JWT, OAuth2).
- Experience with cloud platforms like AWS, GCP, or Azure.
- Proficiency in DevOps practices, including CI/CD pipelines and version control systems (Git).
- Excellent problem-solving skills and a proactive attitude toward innovation.
Nice-to-Have:
- Experience in building scalable eCommerce platforms or omni-channel systems.
- Knowledge of front-end technologies (React.js) for seamless backend integration.
- Familiarity with message brokers.
- Experience with serverless and microservice architectures.
- Exposure to AI/ML-driven applications for personalization and recommendation engines.
Job description
We are seeking a highly skilled and experienced IT Department Head with strong communication skills, a technical background, and leadership capabilities to manage our IT team. The ideal candidate will be responsible for overseeing the organization's IT infrastructure, ensuring the security and efficiency of our systems, and maintaining compliance with relevant industry standards. The role requires an in-depth understanding of cloud technologies, server management, network security, and managed IT services, along with strong problem-solving capabilities.
Key Responsibilities:-
We are looking for a proactive and hands-on Information Technology Manager to oversee and evolve our technology infrastructure
· In this role, the Manager will manage all aspects of our IT operations, from maintaining our current tech stack to strategizing and implementing future developments
· This position will ensure that our technology systems are modern, secure, and efficient, aligning IT initiatives with our business goals
· IT Strategy & Leadership: Develop and execute an IT strategy that supports the company's objectives, ensuring scalability and security
· Infrastructure Management: Oversee the maintenance and optimization of our Azure Cloud infrastructure, AWS Cloud, and Cisco Meraki networking systems
· Software & Systems Administration: Manage Microsoft 365 administration.
· Cybersecurity: Enhance our cybersecurity posture using tools like SentinelOne, Sophos Firewall, and others
· Project Management: Lead IT projects, including system upgrades and optimizations, ensuring timely delivery and adherence to budgets
· Team Leadership: Mentor and guide a small IT team, fostering a culture of continuous improvement and professional development
· Vendor Management: Collaborate with external vendors and service providers to ensure optimal performance and cost-effectiveness
· Technical Support: Provide high-level technical support and troubleshooting for IT-related issues across the organization and for clients in the USA, plus other duties as needed
· IT Audit & Compliance: Conduct regular audits to ensure IT processes are compliant with security regulations and best practices (GDPR, SOC2, ISO 27001), ensuring readiness for internal and external audit.
· Documentation: Maintain thorough and accurate documentation for all systems, processes, and procedures to ensure clarity and consistency in IT operations.
Preferred Skills:-
- Experience with SOC 2, ISO 27001, or similar security frameworks.
- Experience with advanced firewall configurations and network architecture.
Job Type: Full-time
Benefits:
- Paid sick time
Shift:
- Day shift
Work Days:
- Monday to Friday
Experience:
- IT management: 2 years (Required)
Work Location: In person
About AMAZECH SOLUTIONS
Amazech Solutions is a consulting and services company in the information technology industry. Established in 2007, we are headquartered in Frisco, Texas, U.S.A. The leadership team at Amazech brings to the table expertise that stems from over 40 man-years of experience in developing software solutions for global organizations across various verticals, including Healthcare, Banking Services, and Media & Entertainment.
We currently provide services to a wide spectrum of clients ranging from start-ups to Fortune 500 companies. We are actively engaged in Government projects, being an SBA approved company as well as being HUB certified by the State of Texas.
Our customer-centric approach comes from understanding that our clients need more than technology professionals. This is an exciting time to join Amazech as we look to grow our team in India, which comprises IT professionals with strong competence in both common and niche skill areas.
Job Description
Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive and iterative delivery environment? You will be part of a large group of makers, breakers, doers, and disruptors, who love to solve real problems and meet real customer needs.
We are seeking Software Engineers who are passionate about marrying data with emerging technologies to join our team. You will have the opportunity to be on the forefront of driving a major transformation and create various products that will disrupt and reimagine technology solutions by working with the best minds in the industry.
Location: Bangalore, Pune (Hybrid / Remote)
Experience: 5-13 years
Employment type: Full time.
Permanent website: www.amazech.com
Role Description
This is a full-time Java Backend Developer role, based in Bengaluru with a flexible remote-work option. The Java Backend Developer will be responsible for designing, developing, and delivering complex Java-based applications and providing end-to-end support throughout the software development lifecycle. The successful candidate will collaborate with cross-functional team members, gather and analyze requirements, identify and prioritize technical and functional requirements, and provide innovative solutions to address business challenges.
Qualifications
- Bachelor's or Master's degree in Computer Science (or equivalent technical degree)
- 5+ years of hands-on software development experience in Java/J2EE
- Experience in defining software architecture, design patterns, and solution design
- Strong experience in microservices architecture, Angular, Spring Boot Framework, Hibernate, and Web Services (SOAP and REST)
- Experience in cloud infrastructure, ideally with Amazon Web Services
- Strong knowledge in database design, SQL query optimization, and performance tuning
- Demonstrated ability to lead technical teams and mentor team members
- Excellent communication, analytical, and problem-solving skills
- Experience and knowledge of Agile methodologies
- Experience with AWS, GCP, Microsoft Azure, or other cloud services
- Proven ability to work well under pressure and deliver high-quality work within tight deadlines
Other Requirements
- Bachelor's or Master's degree in Computer Science (or equivalent technical degree)
- Strong interpersonal and relationship-building skills.
- Ability to work independently and as part of a team.
- Excellent verbal and written communication skills and ability to communicate effectively with international clients.

About Gyaan:
Gyaan empowers Go-To-Market teams to ascend to new heights in their sales performance, unlocking boundless opportunities for growth. We're passionate about helping sales teams excel beyond expectations. Our pride lies in assembling an unparalleled team and crafting a crucial solution that becomes an indispensable tool for our users. With Gyaan, sales excellence becomes an attainable reality.
About the Job:
Gyaan is seeking an experienced backend developer with expertise in Python, Django, AWS, and Redis to join our dynamic team! As a backend developer, you will be responsible for building responsive and scalable applications using Python, Django, and associated technologies.
Required Qualifications:
- 2+ years of hands-on experience programming in Python, Django
- Good understanding of CI/CD tools (Github Action, Gitlab CI) in a SaaS environment.
- Experience in building and running modern full-stack cloud applications using public cloud technologies such as AWS.
- Proficiency with at least one relational database system like MySQL, Oracle, or PostgreSQL.
- Experience with unit and integration testing.
- Effective communication skills, both written and verbal, to convey complex problems across different levels of the organization and to customers.
- Familiarity with Agile methodologies, software design lifecycle, and design patterns.
- Detail-oriented mindset to identify and rectify errors in code or product development workflow.
- Willingness to learn new technologies and concepts quickly, as the "cloud-native" field evolves rapidly.
Must Have Skills:
- Python
- Django Framework
- AWS
- Redis
- Database Management
Qualifications:
- Bachelor’s degree in Computer Science or equivalent experience.
If you are passionate about solving problems and have the required qualifications, we want to hear from you! You must be an excellent verbal and written communicator, enjoy collaborating with others, and welcome discussing a plan upfront. We offer a competitive salary, flexible work hours, and a dynamic work environment.


About Jeeva.ai
At Jeeva.ai, we're on a mission to revolutionize the future of work by building AI employees that automate all manual tasks—starting with AI Sales Reps. Our vision is simple: "Anything that doesn’t require deep human connection can be automated & done better, faster & cheaper with AI." We’ve created a fully automated SDR using AI that generates 3x more pipeline than traditional sales teams at a fraction of the cost.
As a dynamic startup, we are backed by Alt Capital (founded by Jack Altman & Sam Altman), Marc Benioff (CEO, Salesforce), Gokul (Board, Coinbase), Bonfire (investors in ChowNow), Techtsars (investors in Uber), Sapphire (investors in LinkedIn), and Microsoft. With $1M ARR in just 3 months after launch, we're not just growing - we're thriving and making a significant impact in the world of artificial intelligence.
As we continue to scale, we're looking for mid-senior Full Stack Engineers who are passionate, ambitious, and eager to make an impact in the AI-driven future of work.
About You
- Experience: 3+ years of experience as a Full Stack Engineer with a strong background in React, Python, MongoDB, and AWS.
- Automated CI/CD: Experienced in implementing and managing automated CI/CD pipelines using GitHub Actions and AWS Cloudformation.
- System Architecture: Skilled in architecting scalable solutions for systems at scale, leveraging caching strategies, messaging queues, and async/await paradigms for highly performant systems.
- Cloud-Native Expertise: Proficient in deploying cloud-native apps using AWS (Lambda, API Gateway, S3, ECS), with a focus on serverless architectures to reduce overhead and boost agility.
- Development Tooling: Proficient in a wide range of development tools such as FastAPI, React State Management, REST APIs, Websockets and robust version control using Git.
- AI and GPTs: Competent in applying AI technologies, particularly in using GPT models for natural language processing, automation and creating intelligent systems.
- Impact-Driven: You've built and shipped products that users love and have seen the impact of your work at scale.
- Ownership: You take pride in owning projects from start to finish and are comfortable wearing multiple hats to get the job done.
- Curious Learner: You stay ahead of the curve, eager to explore and implement the latest technologies, particularly in AI.
- Collaborative Spirit: You thrive in a team environment and can work effectively with both technical and non-technical stakeholders.
- Ambitious: You have a hunger for success and are eager to contribute to a fast-growing company with big goals.
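The async/await paradigm called out above can be sketched in a few lines of Python with the standard `asyncio` library; the function names and simulated I/O here are illustrative only, not part of any actual product:

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    # Simulate a non-blocking I/O call (e.g., a database or API request).
    await asyncio.sleep(0.01)
    return {"id": record_id, "status": "ok"}

async def fetch_all(record_ids: list[int]) -> list[dict]:
    # asyncio.gather runs the coroutines concurrently and preserves input order,
    # so N requests take roughly the time of one instead of N.
    return await asyncio.gather(*(fetch_record(i) for i in record_ids))

results = asyncio.run(fetch_all([1, 2, 3]))
```

Concurrency like this is what keeps an API responsive when each request fans out to several slow backends.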
What You’ll Be Doing
- Build and Innovate: Develop and scale AI-driven products like Gigi (AI Outbound SDR) and Jim (AI Inbound SDR), and automate workflows across voice & video with AI.
- Collaborate Across Teams: Work closely with our Product, GTM, and Engineering teams to deliver world-class AI solutions that drive massive value for our customers.
- Integrate and Optimize: Create seamless integrations with popular platforms like Salesforce, LinkedIn, and HubSpot, enhancing our AI’s capabilities.
- Problem Solving: Tackle challenging problems head-on, from data pipelines to user experience, ensuring that every solution is both functional and delightful.
- Drive AI Adoption: Be a key player in transforming how businesses operate by automating workflows, lead generation, and more with AI.
Role & Responsibilities
- The DevOps Engineer will work on the implementation and management of DevOps tools and technologies.
- Create and support advanced pipelines using GitLab.
- Create and support advanced container and serverless environments.
- Deploy cloud infrastructure using Terraform and CloudFormation templates.
- Implement deployments to OpenShift Container Platform, Amazon ECS and EKS
- Troubleshoot containerized builds and deployments
- Implement processes and automations for migrating between OpenShift, AKS and EKS
- Implement CI/CD automations.
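A minimal GitLab pipeline implementing the build/test/deploy flow above could look like the following; the job names, `make test` command, and deployment target are illustrative assumptions, not a prescribed setup:

```yaml
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    # $CI_REGISTRY_IMAGE and $CI_COMMIT_SHORT_SHA are GitLab predefined CI variables.
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

unit-tests:
  stage: test
  script:
    - make test   # placeholder for the project's actual test command

deploy-eks:
  stage: deploy
  script:
    # Roll the new image out to the cluster; kubectl context is assumed
    # to already be configured for the target EKS cluster.
    - kubectl set image deployment/app app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  environment: production
  when: manual
```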
Required Skillsets
- 3-5 years of cloud-based architecture software engineering experience.
- Deep understanding of Kubernetes and its architecture.
- Mastery of cloud security engineering tools, techniques, and procedures.
- Experience with AWS services such as Amazon S3, EKS, ECS, DynamoDB, AWS Lambda, API Gateway, etc.
- Experience with designing and supporting infrastructure via Infrastructure-as-Code in AWS, via CDK, CloudFormation Templates, Terraform or other toolset.
- Experienced with tools like Jenkins, GitHub, Puppet, or other similar tools.
- Experienced with monitoring tools such as CloudWatch, New Relic, Grafana, Splunk, etc.
- Excellence in verbal and written communication, and in working collaboratively with a variety of colleagues and clients in a remote development environment.
- Proven track record in cloud computing systems and enterprise architecture and security

TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and a renowned AI award (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building & enhancing a data lakehouse & subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, Hadoop, and NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical skills for extracting actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
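The data cleaning and structuring work described above can be sketched as a small Python transform step; the field names (`machine_id`, `temperature`) are hypothetical examples, not an actual TVARIT schema:

```python
def clean_sensor_records(records: list[dict]) -> list[dict]:
    """Drop incomplete rows and normalize field types - a typical
    pre-ingestion cleaning step in an ETL pipeline."""
    cleaned = []
    for row in records:
        if row.get("temperature") is None:
            continue  # skip incomplete measurements
        cleaned.append({
            "machine_id": str(row["machine_id"]).strip(),   # normalize identifiers
            "temperature": float(row["temperature"]),       # enforce numeric type
        })
    return cleaned

raw = [
    {"machine_id": " M1 ", "temperature": "72.5"},
    {"machine_id": "M2", "temperature": None},   # incomplete: dropped
]
cleaned = clean_sensor_records(raw)
```

In a real pipeline this kind of function sits between extraction and loading, so downstream analytics only ever see well-typed rows.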
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
- Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits and Perks:
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!

Greetings, Wissen Technology is hiring for the position of Data Engineer.
Please find the Job Description for your Reference:
JD
- Design, develop, and maintain data pipelines on AWS EMR (Elastic MapReduce) to support data processing and analytics.
- Implement data ingestion processes from various sources including APIs, databases, and flat files.
- Optimize and tune big data workflows for performance and scalability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Manage and monitor EMR clusters, ensuring high availability and reliability.
- Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and store data in data lakes and data warehouses.
- Implement data security best practices to ensure data is protected and compliant with relevant regulations.
- Create and maintain technical documentation related to data pipelines, workflows, and infrastructure.
- Troubleshoot and resolve issues related to data processing and EMR cluster performance.
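The kind of distributed processing an EMR job performs follows the classic MapReduce pattern; the sketch below illustrates it with a word count in plain Python (Spark or Hadoop would parallelize the same two phases across a cluster):

```python
from collections import Counter
from functools import reduce

def map_phase(line: str) -> Counter:
    # Map: each input record independently emits partial word counts.
    return Counter(line.split())

def reduce_phase(a: Counter, b: Counter) -> Counter:
    # Reduce: merge partial counts from the mappers into a single total.
    return a + b

lines = ["error timeout", "error retry", "ok"]
totals = reduce(reduce_phase, map(map_phase, lines), Counter())
```

Because the map phase has no shared state and the reduce operation is associative, the same logic scales from one machine to an EMR cluster.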
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering, with a focus on big data technologies.
- Strong experience with AWS services, particularly EMR, S3, Redshift, Lambda, and Glue.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with big data frameworks and tools such as Hadoop, Spark, Hive, and Pig.
- Solid understanding of data modeling, ETL processes, and data warehousing concepts.
- Experience with SQL and NoSQL databases.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Strong problem-solving skills and the ability to work independently and collaboratively in a team environment
DevOps Engineer (Permanent)
Experience: 8 to 12 yrs
Location: Remote for 2-3 months (Any Mastek Location- Chennai/Mumbai/Pune/Noida/Gurgaon/Ahmedabad/Bangalore)
Max Salary = 28 LPA (including 10% variable)
Notice Period: Immediate/ max 10days
Mandatory Skills: Either Splunk/Datadog, GitLab, Retail Domain
· Bachelor’s degree in Computer Science/Information Technology, or in a related technical field or equivalent technology experience.
· 10+ years’ experience in software development
· 8+ years of experience in DevOps
· Mandatory Skills: Either Splunk/Datadog, GitLab, EKS, Retail domain experience
· Experience with the following Cloud Native tools: Git, Jenkins, Grafana, Prometheus, Ansible, Artifactory, Vault, Splunk, Consul, Terraform, Kubernetes
· Working knowledge of Containers, i.e., Docker and Kubernetes, ideally with experience transitioning an organization through its adoption
· Demonstrable experience with configuration, orchestration, and automation tools such as Jenkins, Puppet, Ansible, Maven, and Ant to provide full stack integration
· Strong working knowledge of enterprise platforms, tools and principles including Web Services, Load Balancers, Shell Scripting, Authentication, IT Security, and Performance Tuning
· Demonstrated understanding of system resiliency, redundancy, failovers, and disaster recovery
· Experience working with a variety of vendor APIs including cloud, physical and logical infrastructure devices
· Strong working knowledge of Cloud offerings & Cloud DevOps Services (EC2, ECS, IAM, Lambda, Cloud services, AWS CodeBuild, CodeDeploy, Code Pipeline etc or Azure DevOps, API management, PaaS)
· Experience managing and deploying Infrastructure as Code, using tools like Terraform Helm charts etc.
· Manage and maintain standards for Devops tools used by the team
PortOne is re-imagining payments in Korea and other international markets. We are a Series B funded startup backed by prominent VC firms Softbank and Hanwa Capital.
PortOne provides a unified API for merchants to integrate with and manage all of the payment options available in Korea and SEA markets - Thailand, Singapore, Indonesia, etc. It is currently used by 2000+ companies and processes multi-billion dollars in annualized volume. We are building a team to take this product to international markets, and are looking for engineers with a passion for fintech and digital payments.
Culture and Values at PortOne
- You will be joining a team that stands for Making a difference.
- You will be joining a culture that identifies more with Sports Teams rather than a 9 to 5 workplace.
- This will be a remote role that allows you the flexibility to save time on commuting
- You will have peers who are/have:
- Highly self-driven with a sense of purpose
- High Energy Levels - Building stuff is your sport
- Ownership - Solve customer problems end to end - Customer is your Boss
- Hunger to learn - Highly motivated to keep developing new tech skill sets
Who you are
* You are an athlete and Devops/DevSecOps is your sport.
* Your passion drives you to learn and build stuff and not because your manager tells you to.
* Your work ethic is that of an athlete preparing for your next marathon. Your sport drives you and you like being in the zone.
* You are NOT a clockwatcher renting out your time, and you do NOT have an attitude of "I will do only what is asked for"
* You enjoy solving problems and delighting users, both internally and externally
* Take pride in working on projects to successful completion involving a wide variety of technologies and systems
* Possess strong & effective communication skills and the ability to present complex ideas in a clear & concise way
* Responsible, self-directed, forward thinker, and operates with focus, discipline and minimal supervision
* A team player with a strong work ethic
Experience
* 2+ years of experience working as a DevOps/DevSecOps Engineer
* BE in Computer Science or equivalent combination of technical education and work experience
* Must have actively managed infrastructure components & devops for high quality and high scale products
* Proficient knowledge and experience on infra concepts - Networking/Load Balancing/High Availability
* Experience on designing and configuring infra in cloud service providers - AWS / GCP / AZURE
* Knowledge on Secure Infrastructure practices and designs
* Experience with DevOps, DevSecOps, Release Engineering, and Automation
* Experience with Agile development incorporating TDD / CI / CD practices
Hands on Skills
* Proficient in at least one high-level programming language: Go / Java / C
* Proficient in scripting - bash scripting etc. - to build/glue together DevOps/data pipeline workflows
* Proficient in Cloud Services - AWS / GCP / AZURE
* Hands on experience on CI/CD & relevant tools - Jenkins / Travis / Gitops / SonarQube / JUnit / Mock frameworks
* Hands on experience on Kubernetes ecosystem & container based deployments - Kubernetes / Docker / Helm Charts / Vault / Packer / Istio / Flyway
* Hands on experience on Infra as code frameworks - Terraform / Crossplane / Ansible
* Version Control & Code Quality: Git / Github / Bitbucket / SonarQube
* Experience on Monitoring Tools: Elasticsearch / Logstash / Kibana / Prometheus / Grafana / Datadog / Nagios
* Experience with RDBMS Databases & Caching services: Postgres / MySql / Redis / CDN
* Experience with Data Pipelines/Workflow tools: Airflow / Kafka / Flink / Pub-Sub
* DevSecOps - Cloud Security Assessment, Best Practices & Automation
* DevSecOps - Vulnerability Assessments/Penetration Testing for Web, Network and Mobile applications
* Preferable to have DevOps/Infra experience for products in the Payments/Fintech domain - Payment Gateways/Bank integrations etc.
What will you do?
Devops
* Provision the infrastructure using Crossplane/Terraform/CloudFormation scripts.
* Create and manage AWS services such as EC2, RDS, EKS, S3, VPC, KMS, and IAM, including EKS clusters & RDS databases.
* Monitor the infra to prevent outages/downtimes and honor our infra SLAs
* Deploy and manage new infra components.
* Update and migrate clusters and services.
* Reduce cloud costs by scheduling or right-sizing under-utilized instances.
* Collaborate with stakeholders across the organization such as experts in - product, design, engineering
* Uphold best practices in Devops/DevSecOps and Infra management with attention to security best practices
DevSecOps
* Cloud Security Assessment & Automation
* Modify existing infra to adhere to security best practices
* Perform Threat Modelling of Web/Mobile applications
* Integrate security testing tools (SAST, DAST) in to CI/CD pipelines
* Incident management and remediation - Monitoring security incidents, recovery from and remediation of the issues
* Perform frequent Vulnerability Assessments/Penetration Testing for Web, Network and Mobile applications
* Ensure the environment is compliant to CIS, NIST, PCI etc.
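A compliance check of the kind listed above can be sketched as a simple baseline audit in Python; the setting names below are illustrative stand-ins, not actual CIS or NIST control identifiers:

```python
# Hypothetical hardening baseline: required settings and their expected values.
BASELINE = {
    "s3_block_public_access": True,
    "rds_encryption_enabled": True,
    "iam_mfa_required": True,
}

def audit(config: dict) -> list[str]:
    """Return the names of settings that deviate from (or are missing
    from) the baseline - each entry is a compliance finding."""
    return [key for key, expected in BASELINE.items()
            if config.get(key) != expected]

# One setting wrong, one missing entirely: both surface as findings.
findings = audit({"s3_block_public_access": True,
                  "rds_encryption_enabled": False})
```

Real tooling (e.g., AWS Config rules or CIS benchmark scanners) works on the same principle: compare observed configuration against a declared baseline and report the deviations.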
Here are examples of apps/features you will be supporting as a Devops/DevSecOps Engineer
* Intuitive, easy-to-use APIs for payment processing.
* Integrations with local payment gateways in international markets.
* Dashboard to manage gateways and transactions.
* Analytics platform to provide insights

Primary Skills
DynamoDB, Java, Kafka, Spark, Amazon Redshift, AWS Lake Formation, AWS Glue, Python
Skills:
Good work experience showing growth as a Data Engineer.
Hands On programming experience
Implementation Experience on Kafka, Kinesis, Spark, AWS Glue, AWS Lake Formation.
Excellent knowledge of: Python, Scala/Java, Spark, AWS (Lambda, Step Functions, DynamoDB, EMR), Terraform, UI (Angular), Git, Maven
Experience of performance optimization in Batch and Real time processing applications
Expertise in Data Governance and Data Security Implementation
Good hands-on design and programming skills for building reusable tools and products. Experience developing in AWS or similar cloud platforms; preferred: ECS, EKS, S3, EMR, DynamoDB, Aurora, Redshift, QuickSight or similar.
Familiarity with systems with very high volume of transactions, micro service design, or data processing pipelines (Spark).
Knowledge and hands-on experience with serverless technologies such as Lambda, MSK, MWAA, Kinesis Analytics a plus.
Expertise in practices like Agile, Peer reviews, Continuous Integration
Roles and responsibilities:
Determining project requirements and developing work schedules for the team.
Delegating tasks and achieving daily, weekly, and monthly goals.
Responsible for designing, building, testing, and deploying the software releases.
Salary: 25LPA-40LPA