AWS (Amazon Web Services) Jobs in Bangalore (Bengaluru)



Tech Prescient

Posted by Ishika Agrawal
Bengaluru (Bangalore)
8 - 10 yrs
₹20L - ₹35L / yr
NodeJS (Node.js)
Amazon Web Services (AWS)
React.js
Angular (2+)
Vue.js

Job Title: Senior Full Stack Developer

Job Location: Bangalore / Hybrid

Availability: Immediate joiners

Experience Range: 8-10 yrs

Desired Skills: Node.js, Vue.js / AngularJS / React, AWS, JavaScript, TypeScript


Requirements

● Total 8+ years of IT experience, including 5+ years of full-stack development experience working with JavaScript and TypeScript

● Experience with modern web frameworks such as Vue.JS / AngularJS / React

● Extensive experience with back-end technologies: Node.js, AWS, Kubernetes (K8s), PostgreSQL, Redis

● Demonstrated proficiency in designing, developing, and deploying microservices-based applications. Ability to architect and implement scalable, loosely coupled, and maintainable microservices.

● Experience implementing CI/CD pipelines for automated testing, building, and deployment of applications

● Ability to lead end-to-end projects, working with other team members across the world

● Deep understanding of system architecture and distributed systems

● Enjoy working in a fast-paced environment

● Able to work collaboratively within different teams and with differing levels of seniority


What you will bring:

● Work closely with cross-functional teams such as Development, Operations, and Product Management to ensure seamless integration of new features and services with a focus on reliability, scalability, and performance

● Experience with back-end technologies

● Good knowledge and understanding of client-side architecture

● Capable of managing time well and working efficiently and independently

● Ability to collaborate with multi-functional teams

● Excellent communication skills


Nice to Have

● Bachelor's or Master's degree in CS or related field/experience

HeyCoach
Posted by DeepanRaj R
Bengaluru (Bangalore)
4 - 12 yrs
₹0.1L - ₹0.1L / yr
Python
NodeJS (Node.js)
React.js
Data Structures
Natural Language Processing (NLP)


Tech Lead (Full Stack) – Nexa (Conversational Voice AI Platform)

Location: Bangalore | Type: Full-time

Experience: 4+ years (preferably in early-stage startups)

Tech Stack: Python (core), Node.js, React.js

 

 

About Nexa

Nexa is a new venture by the founders of HeyCoach, Pratik Kapasi and Aditya Kamat, on a mission to build the most intuitive voice-first AI platform. We’re rethinking how humans interact with machines using natural, intelligent, and fast conversational interfaces.

We're looking for a Tech Lead to join us at the ground level. This is a high-ownership, high-speed role for builders who want to move fast and go deep.

 

What You’ll Do

●     Design, build, and scale backend and full-stack systems for our voice AI engine

●     Work primarily with Python (core logic, pipelines, model integration), and support full-stack features using Node.js and React.js

●     Lead projects end-to-end—from whiteboard to production deployment

●     Optimize systems for performance, scale, and real-time processing

●     Collaborate with founders, ML engineers, and designers to rapidly prototype and ship features

 ●     Set engineering best practices, own code quality, and mentor junior team members as we grow

 

✅ Must-Have Skills

●     4+ years of experience in Python, building scalable production systems

●     Has led projects independently, from design through deployment

●     Excellent at executing fast without compromising quality

●     Strong foundation in system design, data structures and algorithms

●     Hands-on experience with Node.js and React.js in a production setup

●     Deep understanding of backend architecture—APIs, microservices, data flows

●     Proven success working in early-stage startups, especially during 0→1 scaling phases

●     Ability to debug and optimize across the full stack

●     High autonomy—can break down big problems, prioritize, and deliver without hand-holding
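The backend-architecture expectation above (APIs, microservices, data flows) can be illustrated with a minimal, dependency-free sketch of a single service endpoint in Python, the role's core language. Everything here is invented for illustration (the `/health` route, the payload, the ephemeral-port handling); it is not a description of Nexa's actual stack.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A single, self-contained "microservice": one small HTTP service exposing a
# health endpoint, the kind of loosely coupled unit backend architectures are
# composed of.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok", "service": "demo"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def start_service(port=0):
    """Start the service on an ephemeral port; return (server, bound_port)."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

if __name__ == "__main__":
    server, port = start_service()
    with urlopen("http://127.0.0.1:%d/health" % port) as resp:
        print(json.loads(resp.read()))
    server.shutdown()
```

A production service would use a real framework and sit behind a load balancer; the point of the sketch is only the shape: one process, one narrow contract, independently deployable.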

  

🚀 What We Value

●     Speed > Perfection: We move fast, ship early, and iterate

●     Ownership mindset: You act like a founder, even if you're not one

●     Technical depth: You’ve built things from scratch and understand what’s under the hood

●     Product intuition: You don’t just write code—you ask if it solves the user’s problem

●     Startup muscle: You’re scrappy, resourceful, and don’t need layers of process

●     Bias for action: You unblock yourself and others. You push code and push thinking

●     Humility and curiosity: You challenge ideas, accept better ones, and never stop learning

 

💡 Nice-to-Have

●     Experience with NLP, speech interfaces, or audio processing

●     Familiarity with cloud platforms (GCP/AWS), CI/CD, Docker, Kubernetes

●     Contributions to open-source or technical blogs

●     Prior experience integrating ML models into production systems

 

Why Join Nexa?

●     Work directly with founders on a product that pushes boundaries in voice AI

●     Be part of the core team shaping product and tech from day one

●     High-trust environment focused on output and impact, not hours

●     Flexible work style and a flat, fast culture

Gruve
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune
3 - 6 yrs
Up to ₹40L / yr (varies)
Java
Spring Boot
Amazon Web Services (AWS)
Windows Azure
DevOps

We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls, and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.


Key Roles & Responsibilities:

  • Lead the design and development of scalable and secure software solutions using Java.
  • Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
  • Provide technical leadership, mentoring, and guidance to the development team.
  • Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
  • Collaborate with cross-functional teams, including Product Management, Security, and DevOps to deliver high-quality security solutions.
  • Design and implement security analytics, automation workflows and ITSM integrations.
  •  Drive continuous improvements in engineering processes, tools, and technologies.
  • Troubleshoot complex technical issues and lead incident response for critical production systems.


Basic Qualifications:

  • A bachelor’s or master’s degree in computer science, electronics engineering or a related field
  • 3-6 years of software development experience, with expertise in Java.
  • Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
  • In-depth understanding of microservices architecture, APIs, and distributed systems.
  • Experience with containerization and orchestration tools like Docker and Kubernetes.
  • Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
  • Strong problem-solving skills and ability to work in an agile, fast-paced environment.
  • Excellent communication and leadership skills, with a track record of mentoring engineers.

 

Preferred Qualifications:

  • Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
  • Knowledge of zero-trust security models and secure API development.
  • Hands-on experience with machine learning or AI-driven security analytics.
hirezyai
Posted by Aardra Suresh
Bengaluru (Bangalore)
9 - 15 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
Kubernetes
MySQL
Oracle
Amazon S3

Job description

● Design effective, scalable architectures on top of cloud technologies such as AWS and Kubernetes

● Mentor other software engineers, including actively participating in peer code and architecture review

● Participate in all parts of the development lifecycle from design to coding to deployment to maintenance and operations

● Kickstart new ideas, build proof of concepts and jumpstart newly funded projects

● Demonstrate ability to work independently with minimal supervision

● Embed with other engineering teams on challenging initiatives and time sensitive projects

● Collaborate with other engineering teams on challenging initiatives and time sensitive projects



Education and Experience

● BS degree in Computer Science or related technical field or equivalent practical experience.

● 9+ years of professional software development experience focused on payments and/or billing and customer accounts. Worked with worldwide payments, billing systems, PCI Compliance & payment gateways.

Technical and Functional

● Extensive knowledge of microservice development using Spring, Spring Boot, and Java, built on top of Kubernetes and public cloud services such as AWS Lambda and S3.

● Experience with relational databases (MySQL, DB2 or Oracle) and NoSQL databases

● Experience with unit testing and test driven development

Technologies at Constant Contact

Working on the Constant Contact platform provides our engineers with an opportunity to produce high-impact work inside our multifaceted platform (Email, Social, SMS, E-Commerce, CRM, Customer Data Platform, ML-based Recommendations & Insights, and more).

As a member of our team, you'll be utilizing the latest technologies and frameworks (React/SPA, JavaScript/TypeScript, Swift, Kotlin, GraphQL, etc.) and deploying code to our cloud-first microservice infrastructure (declarative CI/CD, GitOps-managed Kubernetes) with regular opportunities to level up your skills.

● Past experience of working with and integrating payment gateways and processors, online payment methods, and billing systems.

● Familiar with integrating Stripe/Plaid/PayPal/Adyen/Cybersource or similar systems along with PCI compliance.

● International software development and payments experience is a plus.

● Knowledge of DevOps and CI/CD, automated test and build tools (Jenkins, Gradle/Maven)

● Experience integrating with sales tax engines is a plus.

● Familiar with observability tools such as Splunk, New Relic, Datadog, Elastic (ELK), or Amazon CloudWatch.


● Good to have - Experience with React, Backbone, Marionette, or other front-end frameworks.


Cultural

● Strong verbal and written communication skills.

● Flexible attitude and willingness to frequently move between different teams, software architectures and priorities.

● Desire to collaborate with our other product teams to think strategically about how to solve problems.

Our team

● We focus on cross-functional team collaboration where engineers, product managers, and designers all work together to solve customer problems and build exciting features.

● We love new ideas and are eager to see what your experiences can bring to help influence our technical and product vision.

● Collaborate and overlap with teams working in US Eastern Time (EST).


Talent Pro
Bengaluru (Bangalore)
6 - 8 yrs
₹20L - ₹45L / yr
PHP
NodeJS (Node.js)
Java
Amazon Web Services (AWS)
RabbitMQ

Strong Senior Backend Engineer profile

Mandatory (Experience 1) - Must have 6+ years of experience in software development

Mandatory (Experience 2) - Should have strong backend development experience in any backend language - Java, JavaScript (Node.js), Go, or PHP (PHP experience is preferred)

Mandatory (Core Skill 1) - Must have experience with databases - MySQL / PostgreSQL / Oracle / SQL Server / DB2

Mandatory (Core Skill 2) - Experience with async workflows and messaging queues such as RabbitMQ, Kafka, Google Pub/Sub, or Kinesis

Mandatory (Core Skill 3) - Experience in cloud - AWS / Google Cloud / Azure

Mandatory (Company) - Product companies only

Mandatory (Education) - BE / BTech / MCA

appscrip

Posted by Kanika Gaur
Bengaluru (Bangalore), Surat
3 - 5 yrs
₹4.8L - ₹11L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)

Job Title: Lead DevOps Engineer

Experience Required: 4 to 5 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog.

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
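The monitoring and alerting responsibility above has the same core shape regardless of vendor: evaluate metrics against thresholds and emit alerts. A minimal, stdlib-only sketch of that loop; the metric names and thresholds are invented for illustration, and a real deployment would express these as Prometheus alert rules or CloudWatch alarms rather than application code.

```python
import logging

# Minimal threshold-based alerting: the evaluate-and-alert loop that alerting
# systems like Prometheus Alertmanager or CloudWatch alarms run at scale.
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("alerts")

THRESHOLDS = {              # illustrative alert rules
    "cpu_percent": 85.0,    # alert if CPU above 85%
    "error_rate": 0.01,     # alert if more than 1% of requests fail
}

def evaluate(metrics: dict) -> list:
    """Return the names of metrics that breach their thresholds."""
    breached = [name for name, limit in THRESHOLDS.items()
                if metrics.get(name, 0.0) > limit]
    for name in breached:
        log.warning("ALERT: %s=%.3f exceeds %.3f",
                    name, metrics[name], THRESHOLDS[name])
    return breached

if __name__ == "__main__":
    print(evaluate({"cpu_percent": 92.5, "error_rate": 0.002}))
```

In practice the thresholds live in versioned configuration and the alerts route to an on-call system, but the decision logic is exactly this comparison.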


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, Github Actions or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can directly contact us: Nine three one six one two zero one three two

Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹12L / yr
Java
Amazon Web Services (AWS)
NoSQL Databases

Backend (Primary Focus)

● Strong knowledge and experience in Object-Oriented Programming (OOP) concepts.
● Strong understanding of Java, Spring Boot, and REST API development.
● Experience in Test-Driven Development (TDD) with Spring Boot.
● Proficiency in developing APIs using Redis and relational databases (MySQL preferred).
● Strong understanding of the AWS cloud platform, with experience using services like S3 and Lambda.
● Good understanding of code versioning tools (Git) and bug-tracking systems (JIRA, etc.).
● Knowledge of DocumentDB or any other NoSQL document database is a plus.

Frontend (Good to Have / Preferred for Full-Stack Evaluation)

● Hands-on experience with React.js, including state management (e.g., Redux, Context API).
● Experience with modern UI development, including CSS frameworks (Tailwind, Material-UI, Bootstrap, etc.).
● Understanding of REST API integration and handling API calls efficiently in React.
● Familiarity with component-driven development and frontend testing (e.g., Jest, React Testing Library).

Invensis Technologies Pvt

Posted by Partha Sarathy
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹40L / yr
SaaS
React.js
NextJs (Next.js)
Amazon Web Services (AWS)
PostgreSQL

Job Description: Full Stack Developer – SaaS Product Engineering (Ruby on Rails, React.js, Next.js, AWS)

Location: Office

Experience Level: 5+ Years

Employment Type: Full-Time

✨ About the Role

We are looking for a skilled and passionate Full Stack Developer to join our SaaS Product Engineering team. You will work across backend and frontend technologies to build, optimize, and scale multiple SaaS products in a dynamic environment.

If you are excited by clean code, modern cloud-native practices, and the chance to contribute to impactful products from the ground up, we would love to meet you!

🔥 Key Responsibilities

  • Develop and maintain scalable SaaS-based platforms using Ruby on Rails (backend) and React.js / Next.js (frontend).
  • Build RESTful APIs and integrate third-party services as needed.
  • Collaborate with Product Managers, Designers, and QA teams to deliver high-quality product features for multiple projects.
  • Write clean, secure, maintainable, and efficient code following best practices.
  • Optimize applications for performance, scalability, maintainability, and security.
  • Participate actively in code reviews, sprint planning, and team discussions.
  • Support DevOps practices including CI/CD pipelines and cloud deployments on AWS.
  • Make architecture-level technical decisions for the products.
  • Continuously research and learn new technologies to enhance product performance.

🛠️ Required Skills and Experience

  • 3–6 years of hands-on software engineering experience, preferably with SaaS platforms.
  • Strong Full Stack Development Skills:
  • Backend: Ruby on Rails (6+ preferred)
  • Frontend: React.js, Next.js (static generation and server-side rendering)
  • Database: PostgreSQL, MongoDB, Redis
  • Experience deploying applications to AWS cloud environment.
  • Good understanding of APIs (RESTful and/or GraphQL) and third-party integrations.
  • Familiarity with Docker and CI/CD pipelines (GitHub Actions, GitLab CI, etc.).
  • Knowledge of security principles (OAuth2, API security best practices).
  • Familiarity with Agile development methodologies (Scrum, Kanban).
  • Experience in handling a team.
  • Basic understanding of test-driven development (RSpec, Jest or similar frameworks).

🎯 Preferred (Nice-to-Have)

  • Exposure to AWS Lightsail, EC2, or Lambda.
  • Experience with SaaS multi-tenant system design.
  • Experience with third-party integrations like payments application.
  • Previous work experience in startups or high-growth product companies.
  • Basic knowledge of performance tuning and system optimization.

👤 Who You Are

  • A problem solver with strong technical fundamentals.
  • A self-motivated learner who enjoys working in collaborative environments.
  • Someone who takes ownership and accountability for deliverables.
  • A team player willing to mentor junior developers and contribute to team goals.


TCS

Agency job via Risk Resources LLP (Hyderabad), posted by Susmitha O
Bengaluru (Bangalore), Chennai, Kochi (Cochin)
6 - 9 yrs
₹7L - ₹15L / yr
Amazon Web Services (AWS)
Amazon SageMaker
Machine Learning (ML)
Docker
Python
  • Design, develop, and maintain data pipelines and ETL workflows on AWS platform
  • Work with AWS services like S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics
  • Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements
  • Optimize data workflows for performance, scalability, and reliability
  • Troubleshoot data issues, monitor jobs, and ensure data quality and integrity
  • Write efficient SQL queries and automate data processing tasks
  • Implement data security and compliance best practices
  • Maintain technical documentation and data pipeline monitoring dashboards
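The pipeline responsibilities above (ingest, transform, quality-check) follow the same shape whether the engine is Glue, EMR, or Athena. Below is a toy, stdlib-only sketch using an in-memory SQLite database in place of S3 and the warehouse; the table, columns, and sample rows are all invented for illustration.

```python
import sqlite3

# Toy ETL step: ingest raw rows, transform (normalize + filter), and run a
# data-quality check, mirroring the extract/transform/validate stages of a
# Glue or EMR job. SQLite stands in for the real warehouse.
RAW_EVENTS = [
    ("2024-01-01", "login", " ALICE "),
    ("2024-01-01", "login", "bob"),
    ("2024-01-02", "",      "carol"),   # bad row: empty event type
]

def run_pipeline(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (day TEXT, event TEXT, user TEXT)")
    # Transform during ingestion: trim/lowercase users, drop empty events.
    clean = [(d, e, u.strip().lower()) for d, e, u in rows if e]
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", clean)
    # Data-quality check: no empty fields may survive the transform.
    bad = conn.execute(
        "SELECT COUNT(*) FROM events WHERE event = '' OR user = ''"
    ).fetchone()[0]
    assert bad == 0, "data-quality check failed"
    return conn.execute("SELECT day, event, user FROM events").fetchall()

if __name__ == "__main__":
    for row in run_pipeline(RAW_EVENTS):
        print(row)
```

A real pipeline would quarantine bad rows rather than silently drop them and would surface the quality metrics to a monitoring dashboard, but the stages are the same.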
Deqode

Posted by Roshni Maji
Pune, Bengaluru (Bangalore), Gurugram, Chennai, Mumbai
5 - 7 yrs
₹6L - ₹20L / yr
Amazon Web Services (AWS)
Amazon Redshift
AWS Glue
Python
PySpark

Position: AWS Data Engineer

Experience: 5 to 7 Years

Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram

Work Mode: Hybrid (3 days work from office per week)

Employment Type: Full-time

About the Role:

We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.

Key Responsibilities:

  • Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
  • Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
  • Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
  • Optimize data models and storage for cost-efficiency and performance.
  • Write advanced SQL queries to support complex data analysis and reporting requirements.
  • Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
  • Ensure high data quality and integrity across platforms and processes.
  • Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
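"Advanced SQL" in the responsibilities above typically means constructs like window functions. The query below picks the latest record per key, a standard deduplication pattern when compacting change streams into a warehouse table; it is shown against SQLite only so the sketch is self-contained (the same `ROW_NUMBER()` SQL runs on Athena and Redshift), and the table and column names are invented.

```python
import sqlite3

# Latest-record-per-key dedup with ROW_NUMBER(): keep only the most recent
# status for each order_id, as when loading a change stream into a warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INT, status TEXT, updated_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, "created",   "2024-05-01"),
    (1, "shipped",   "2024-05-03"),
    (2, "created",   "2024-05-02"),
    (2, "cancelled", "2024-05-04"),
])

LATEST_SQL = """
SELECT order_id, status FROM (
    SELECT order_id, status,
           ROW_NUMBER() OVER (PARTITION BY order_id
                              ORDER BY updated_at DESC) AS rn
    FROM orders
) WHERE rn = 1 ORDER BY order_id
"""

def latest_per_order():
    """Return (order_id, status) for the most recent row of each order."""
    return conn.execute(LATEST_SQL).fetchall()

if __name__ == "__main__":
    print(latest_per_order())  # [(1, 'shipped'), (2, 'cancelled')]
```

On a columnar engine the same query benefits from partitioning on the dedup key, which is one reason storage layout and query design are listed together above.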

Required Skills & Experience:

  • Strong hands-on experience with Python or PySpark for data processing.
  • Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
  • Proficiency in writing complex SQL queries and optimizing them for performance.
  • Familiarity with serverless architectures and AWS best practices.
  • Experience in designing and maintaining robust data architectures and data lakes.
  • Ability to troubleshoot and resolve data pipeline issues efficiently.
  • Strong communication and stakeholder management skills.


Deqode

Posted by Sneha Jain
Bengaluru (Bangalore), Mumbai, Pune, Hyderabad
4 - 7 yrs
₹10L - ₹18L / yr
Spring Boot
Java
Amazon Web Services (AWS)

Job Summary:


We are looking for an experienced Java Developer with 4+ years of hands-on experience to join our dynamic team. The ideal candidate will have a strong background in Java development, solid problem-solving skills, and the ability to work independently as well as part of a team. You will be responsible for designing, developing, and maintaining high-performance, scalable applications.


Key Responsibilities:

  • Design, develop, test, and maintain Java-based applications.
  • Write well-designed, efficient, and testable code following best software development practices.
  • Troubleshoot and resolve technical issues during development and production support.
  • Collaborate with cross-functional teams including QA, DevOps, and Product teams.
  • Participate in code reviews and provide constructive feedback.
  • Maintain proper documentation for code, processes, and configurations.
  • Support deployment and post-deployment monitoring during night shift hours.


Required Skills:

  • Strong programming skills in Java 8 or above.
  • Experience with Spring Framework (Spring Boot, Spring MVC, etc.).
  • Proficiency in RESTful APIs, Microservices Architecture, and Web Services.
  • Familiarity with SQL and relational databases like MySQL, PostgreSQL, or Oracle.
  • Hands-on experience with version control systems like Git.
  • Understanding of Agile methodologies.
  • Experience with build tools like Maven/Gradle.
  • Knowledge of unit testing frameworks (JUnit/TestNG).


Preferred Skills (Good to Have):

  • Experience with cloud platforms (AWS, Azure, or GCP).
  • Familiarity with CI/CD pipelines.
  • Basic understanding of frontend technologies like JavaScript, HTML, CSS.


Alpha

Posted by Yash Makhecha
Remote, Bengaluru (Bangalore)
1 - 6 yrs
₹4L - ₹12L / yr
Python
NodeJS (Node.js)
React.js
TypeScript
Docker

Full Stack Engineer

Location: Remote (India preferred) · Type: Full-time · Comp: Competitive salary + early-stage stock



About Alpha

Alpha is building the simplest way for anyone to create AI agents that actually get work done. Our platform turns messy prompt chaining, data schemas, and multi-tool logic into a clean, no-code experience. We’re backed, funded, and racing toward our v1 launch. Join us on the ground floor and shape the architecture, the product, and the culture.



The Role

We’re hiring two versatile full-stack engineers. One will lean infra/back-end, the other front-end/LLM integration, but both will ship vertical slices end-to-end.


You will:

  • Design and build the agent-execution runtime (LLMs, tools, schemas).
  • Stand up secure VPC deployments with Docker, Terraform, and AWS or GCP.
  • Build REST/GraphQL APIs, queues, Postgres/Redis layers, and observability.
  • Create a React/Next.js visual workflow editor with drag-and-drop blocks.
  • Build the Prompt Composer UI, live testing mode, and cost dashboard.
  • Integrate native tools: search, browser, CRM, payments, messaging, and more.
  • Ship fast—design, code, test, launch—and own quality (no separate QA team).
  • Talk to early users and fold feedback into weekly releases.



What We’re Looking For


  • 3–6 years building production web apps at startup pace.
  • Strong TypeScript + Node.js or Python.
  • Solid React/Next.js and modern state management.
  • Comfort with AWS or GCP, Docker, and CI/CD.
  • Bias for ownership from design to deploy.


Nice but not required: Terraform or CDK, IAM/VPC networking, vector DBs or RAG pipelines, LLM API experience, React-Flow or other canvas libs, GraphQL or event streaming, prior dev-platform work.


We don’t expect every box ticked—show us you learn fast and ship.



What You’ll Get


• Meaningful equity at the earliest stage.

• A green-field codebase you can architect the right way.

• Direct access to the founder—instant decisions, no red tape.

• Real customers from day one; your code goes live, not to backlog.

• Stipend for hardware, LLM credits, and professional growth.



Come build the future of work—where AI agents handle the busywork and people do the thinking.

Deqode

Posted by Alisha Das
Bengaluru (Bangalore)
3 - 4 yrs
₹5L - ₹18L / yr
Amazon Web Services (AWS)
Terraform
Kubernetes
Migration

About the Role:

We are looking for a skilled AWS DevOps Engineer to join our Cloud Operations team in Bangalore. This hybrid role is ideal for someone with hands-on experience in AWS and a strong background in application migration from on-premises to cloud environments. You'll play a key role in driving cloud adoption, optimizing infrastructure, and ensuring seamless cloud operations.

Key Responsibilities:

  • Manage and maintain AWS cloud infrastructure and services.
  • Lead and support application migration projects from on-prem to cloud.
  • Automate infrastructure provisioning using Infrastructure as Code (IaC) tools.
  • Monitor cloud environments and optimize cost, performance, and reliability.
  • Collaborate with development, operations, and security teams to implement DevOps best practices.
  • Troubleshoot and resolve infrastructure and deployment issues.

Required Skills:

  • 3–5 years of experience working in AWS cloud environments.
  • Proven experience with on-premises to cloud application migration.
  • Strong understanding of AWS core services (EC2, VPC, S3, IAM, RDS, etc.).
  • Solid scripting skills (Python, Bash, or similar).

Good to Have:

  • Experience with Terraform for Infrastructure as Code.
  • Familiarity with Kubernetes for container orchestration.
  • Exposure to CI/CD tools like Jenkins, GitLab, or AWS CodePipeline.


Wissen Technology

Posted by Seema Srivastava
Bengaluru (Bangalore), Mumbai
5 - 10 yrs
Best in industry
Java
Microservices
Amazon Web Services (AWS)
Apache Kafka

Job Description: We are looking for a talented and motivated Software Engineer with expertise in both Windows and Linux operating systems and solid experience in Java technologies. The ideal candidate should be proficient in data structures and algorithms, as well as frameworks like Spring MVC, Spring Boot, and Hibernate. Hands-on experience working with MySQL databases is also essential for this role.


Responsibilities:

● Design, develop, test, and maintain software applications using Java technologies.

● Implement robust solutions using Spring MVC, Spring Boot, and Hibernate frameworks.

● Develop and optimize database operations with MySQL.

● Analyze and solve complex problems by applying knowledge of data structures and algorithms.

● Work with both Windows and Linux environments to develop and deploy solutions.

● Collaborate with cross-functional teams to deliver high-quality products on time.

● Ensure application security, performance, and scalability.

● Maintain thorough documentation of technical solutions and processes.

● Debug, troubleshoot, and upgrade legacy systems when required.

Requirements:

● Operating Systems: Expertise in Windows and Linux environments.

● Programming Languages & Technologies: Strong knowledge of Java (Core Java, Java 8+).

● Frameworks: Proficiency in Spring MVC, Spring Boot, and Hibernate.

● Algorithms and Data Structures: Good understanding and practical application of DSA concepts.

● Databases: Experience with MySQL – writing queries, stored procedures, and performance tuning.

● Version Control Systems: Experience with tools like Git.

● Deployment: Knowledge of CI/CD pipelines and tools such as Jenkins, Docker (optional)

Read more
Gruve
Bengaluru (Bangalore), Pune
5 - 9 yrs
Upto ₹60L / yr (Varies)
Generative AI
Retrieval Augmented Generation (RAG)
Chatbot
skill iconAmazon Web Services (AWS)
Windows Azure
+2 more

We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.


Key Roles & Responsibilities: 

  • Design and Develop AI-Powered Solutions: Architect and implement scalable AI/ML systems, focusing on Large Language Models (LLMs) and other deep learning applications.
  • End-to-End Model Development: Lead the entire lifecycle of AI models—from data collection and preprocessing to training, fine-tuning, evaluation, and deployment.
  • Fine-Tuning & Customization: Leverage techniques like LoRA (Low-Rank Adaptation) and Q-LoRA to efficiently fine-tune large models for specific business applications.
  • Reasoning Model Implementation: Work with advanced reasoning models such as DeepSeek-R1, exploring their applications in enterprise AI workflows.
  • Data Engineering & Dataset Creation: Design and curate high-quality datasets optimized for fine-tuning AI models, ensuring robust training and validation processes.
  • Performance Optimization & Efficiency: Optimize model inference, computational efficiency, and resource utilization for large-scale AI applications.
  • MLOps & CI/CD Pipelines: Implement best practices for MLOps, ensuring automated training, deployment, monitoring, and continuous improvement of AI models.
  • Cloud & Edge AI Deployment: Deploy and manage AI solutions in cloud environments (AWS, Azure, GCP) and explore edge AI deployment where applicable.
  • API Development & Microservices: Develop RESTful APIs and microservices to integrate AI models seamlessly into enterprise applications.
  • Security, Compliance & Ethical AI: Ensure AI solutions comply with industry standards, data privacy laws (e.g., GDPR, HIPAA), and ethical AI guidelines.
  • Collaboration & Stakeholder Engagement: Work closely with product managers, data engineers, and business teams to translate business needs into AI-driven solutions.
  • Mentorship & Technical Leadership: Guide and mentor junior engineers, fostering best practices in AI/ML development, model fine-tuning, and software engineering.
  • Research & Innovation: Stay updated with emerging AI trends, conduct experiments with cutting-edge architectures and fine-tuning techniques, and drive innovation within the team.
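
The fine-tuning bullet above mentions LoRA (Low-Rank Adaptation). Its core idea can be sketched in plain Python: instead of training the full d × k weight update, LoRA factors it as the product of two low-rank matrices, so only r·(d + k) parameters are trained. The numbers below are illustrative, not tied to any particular model.

```python
# Illustrative LoRA arithmetic in plain Python: the weight update for a
# d x k layer is factored as A (d x r) times B (r x k), so only
# r * (d + k) parameters are trained instead of d * k.

def lora_param_counts(d: int, k: int, r: int):
    """Return (full_update_params, lora_params) for a d x k weight matrix."""
    full = d * k
    lora = r * (d + k)
    return full, lora

def apply_lora_update(W, A, B, scale=1.0):
    """Return W + scale * (A @ B), using plain nested lists."""
    d, k, r = len(W), len(W[0]), len(A[0])
    out = [row[:] for row in W]
    for i in range(d):
        for j in range(k):
            delta = sum(A[i][t] * B[t][j] for t in range(r))
            out[i][j] += scale * delta
    return out

full, lora = lora_param_counts(4096, 4096, 8)
# a rank-8 adapter trains roughly 0.4% of the parameters of the full update
```
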

Basic Qualifications: 

  • A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field 
  • Experience: 5-8 Years 
  • Strong programming skills in Python and Java 
  • Good understanding of machine learning fundamentals 
  • Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn) 
  • Familiar with frontend development and frameworks like React 
  • Basic knowledge of LLMs and transformer-based architectures is a plus.

Preferred Qualifications  

  • Excellent problem-solving skills and an eagerness to learn in a fast-paced environment 
  • Strong attention to detail and ability to communicate technical concepts clearly


Read more
Gruve
Pune, Bengaluru (Bangalore)
3 - 5 yrs
Upto ₹30L / yr (Varies)
Retrieval Augmented Generation (RAG)
Generative AI
Chatbot
skill iconAmazon Web Services (AWS)
Windows Azure
+3 more

We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.


Key Roles & Responsibilities: 

  • Design and implement software solutions that power machine learning models, particularly in LLMs 
  • Create robust data pipelines, handling data preprocessing, transformation, and integration for machine learning projects 
  • Collaborate with the engineering team to build and optimize machine learning models, particularly LLMs, that address client-specific challenges 
  • Partner with cross-functional teams, including business stakeholders, data engineers, and solutions architects to gather requirements and evaluate technical feasibility 
  • Design and implement scalable infrastructure for developing and deploying GenAI solutions 
  • Support model deployment and API integration to ensure interaction with existing enterprise systems.
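
This role's tags include Retrieval Augmented Generation (RAG). The retrieval step at the heart of RAG can be sketched with toy vectors: score documents against a query by cosine similarity and keep the top-k. In a real system the embeddings would come from an embedding model; everything here is a minimal stand-in.

```python
# Minimal sketch of the retrieval step in a RAG pipeline: rank documents
# by cosine similarity to the query embedding and return the top-k ids.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, doc_vecs, k=2):
    """doc_vecs: {doc_id: vector}. Returns the k best-matching doc ids."""
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```
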

Basic Qualifications: 

  • A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field 
  • Experience: 3-5 Years 
  • Strong programming skills in Python and Java 
  • Good understanding of machine learning fundamentals 
  • Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn) 
  • Familiar with frontend development and frameworks like React 
  • Basic knowledge of LLMs and transformer-based architectures is a plus.

Preferred Qualifications 

  • Excellent problem-solving skills and an eagerness to learn in a fast-paced environment 
  • Strong attention to detail and ability to communicate technical concepts clearly 
Read more
Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
5yrs+
Upto ₹50L / yr (Varies)
skill iconKubernetes
Infrastructure
IaC
Terraform
Ansible
+10 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are looking for a Senior Software Development Engineer with 5-8 years of experience specializing in infrastructure deployment automation and VMware workload migration. The ideal candidate will have expertise in Infrastructure-as-Code (IaC), VMware vSphere, vMotion, HCX, Terraform, Kubernetes, and AI POD managed services. You will be responsible for automating infrastructure provisioning, migrating workloads from VMware environments to cloud and hybrid infrastructures, and optimizing AI/ML deployments.

Key Roles & Responsibilities

  • Automate infrastructure deployment using Terraform, Ansible, and Helm for VMware and cloud environments.
  • Develop and implement VMware workload migration strategies, including vMotion, HCX, SRM (Site Recovery Manager), and lift-and-shift migrations.
  • Migrate VMware-based workloads to public cloud (AWS, Azure, GCP) or hybrid cloud environments.
  • Optimize and manage AI POD workloads on VMware and Kubernetes-based environments.
  • Leverage VMware HCX for live and bulk workload migrations, ensuring minimal downtime and optimal performance.
  • Automate virtual machine provisioning and lifecycle management using VMware vSphere APIs, PowerCLI, or vRealize Automation.
  • Integrate VMware workloads with Kubernetes for containerized AI/ML workflows.
  • Ensure workload high availability and disaster recovery post-migration using VMware SRM, vSAN, and backup strategies.
  • Monitor and troubleshoot migration performance using vRealize Operations, Prometheus, Grafana, and ELK.
  • Develop and optimize CI/CD pipelines to automate workload migration, deployment, and validation.
  • Ensure security and compliance for workloads before, during, and after migration.
  • Collaborate with cloud architects to design hybrid cloud solutions supporting AI/ML workloads.
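
One recurring concern in the migration bullets above is minimizing downtime when moving many VMs. A hypothetical planning helper is sketched below: it batches VMs into migration waves whose combined memory stays under a concurrency budget. The function name and the memory-based budget are illustrative assumptions, not VMware HCX APIs.

```python
# Hypothetical helper for planning HCX-style migration waves: greedily
# group VMs into batches whose combined memory stays under a budget, so
# each wave can be migrated without saturating the interconnect.
# Names and limits are illustrative, not part of any VMware API.

def plan_waves(vms, max_gb_per_wave):
    """vms: list of (name, memory_gb). Greedy first-fit, largest first."""
    waves, current, used = [], [], 0
    for name, mem in sorted(vms, key=lambda v: -v[1]):
        if used + mem > max_gb_per_wave and current:
            waves.append(current)
            current, used = [], 0
        current.append(name)
        used += mem
    if current:
        waves.append(current)
    return waves
```
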


Basic Qualifications

  • 5–8 years of experience in infrastructure automation, VMware workload migration, and cloud integration.
  • Expertise in VMware vSphere, ESXi, vMotion, HCX, SRM, vSAN, and NSX-T.
  • Hands-on experience with workload migration tools such as VMware HCX, CloudEndure, AWS Application Migration Service, and Azure Migrate.
  • Proficiency in Infrastructure-as-Code using Terraform, Ansible, PowerCLI, and vRealize Automation.
  • Strong experience with Kubernetes (EKS, AKS, GKE) and containerized AI/ML workloads.
  • Experience in public cloud migration (AWS, Azure, GCP) for VMware-based workloads.
  • Hands-on knowledge of CI/CD tools such as Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
  • Strong scripting and automation skills in Python, Bash, or PowerShell.
  • Familiarity with disaster recovery, backup, and business continuity planning in VMware environments.
  • Experience in performance tuning and troubleshooting for VMware-based workloads.


Preferred Qualifications

  • Experience with NVIDIA GPU orchestration (e.g., KubeFlow, Triton, RAPIDS).
  • Familiarity with Packer for automated VM image creation.
  • Exposure to Edge AI deployments, federated learning, and AI inferencing at scale.
  • Contributions to open-source infrastructure automation projects.
Read more
Deqode

at Deqode

1 recruiter
Roshni Maji
Posted by Roshni Maji
Bengaluru (Bangalore), Pune, Mumbai, Chennai, Gurugram
5 - 7 yrs
₹5L - ₹19L / yr
skill iconPython
PySpark
skill iconAmazon Web Services (AWS)
Amazon Redshift
+1 more

Position: AWS Data Engineer

Experience: 5 to 7 Years

Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram

Work Mode: Hybrid (3 days work from office per week)

Employment Type: Full-time

About the Role:

We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.

Key Responsibilities:

  • Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
  • Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
  • Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
  • Optimize data models and storage for cost-efficiency and performance.
  • Write advanced SQL queries to support complex data analysis and reporting requirements.
  • Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
  • Ensure high data quality and integrity across platforms and processes.
  • Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.

Required Skills & Experience:

  • Strong hands-on experience with Python or PySpark for data processing.
  • Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
  • Proficiency in writing complex SQL queries and optimizing them for performance.
  • Familiarity with serverless architectures and AWS best practices.
  • Experience in designing and maintaining robust data architectures and data lakes.
  • Ability to troubleshoot and resolve data pipeline issues efficiently.
  • Strong communication and stakeholder management skills.


Read more
Bengaluru and chennai based tech startup

Bengaluru and chennai based tech startup

Agency job
via Recruit Square by Priyanka choudhary
Bengaluru (Bangalore), Chennai
6 - 12 yrs
₹19L - ₹35L / yr
Linux/Unix
TCP/IP
Windows Azure
skill iconAmazon Web Services (AWS)
SaaS
+2 more

Has substantial hands-on expertise in Linux OS, HTTPS, proxies, and Perl/Python scripting.

Is responsible for the identification and selection of appropriate network solutions to design and deploy in environments based on business objectives and requirements.

Is skilled in developing, deploying, and troubleshooting network deployments, with deep technical knowledge, especially around bootstrapping, Squid Proxy, HTTPS, and scripting. Will further align the network to meet the Company’s objectives through continuous development, improvement, and automation.

Preferably 10+ years of experience in network design and delivery of technology-centric, customer-focused services.

Preferably 3+ years in modern software-defined network and preferably, in cloud-based environments.

Diploma or bachelor’s degree in engineering, Computer Science/Information Technology, or its equivalent.

Preferably possess a valid RHCE (Red Hat Certified Engineer) certification

Preferably possess any vendor Proxy certification (Forcepoint/ Websense/ bluecoat / equivalent)

Must possess advanced knowledge of TCP/IP concepts and fundamentals. Good understanding and working knowledge of Squid proxy, the HTTPS protocol, and certificate management.

Fundamental understanding of proxies and PAC files.

Integration experience and knowledge between modern networks and cloud service providers such as AWS, Azure and GCP will be advantageous.

Knowledge in SaaS, IaaS, PaaS, and virtualization will be advantageous.

Coding skills such as Perl, Python, Shell scripting will be advantageous.

Excellent technical knowledge, troubleshooting, problem analysis, and outside-the-box thinking.

Excellent communication skills – oral, written and presentation, across various types of target audiences.

Strong sense of personal ownership and responsibility in accomplishing the organization’s goals and objectives. Exudes confidence, able to cope under pressure and will roll-up his/her sleeves to drive a project to success in a challenging environment.

Read more
Auxo AI
Kritika Dhingra
Posted by Kritika Dhingra
Hyderabad, Bengaluru (Bangalore), Mumbai, Gurugram
6 - 11 yrs
₹15L - ₹30L / yr
skill iconAmazon Web Services (AWS)
Troubleshooting

AuxoAI is seeking a Senior Platform Engineer (Lead) with strong AWS administration expertise to architect and manage cloud infrastructure while contributing to data engineering initiatives. The ideal candidate will have a deep understanding of cloud platforms, infrastructure as code, and data platform integrations. This role requires both technical leadership and hands-on implementation to ensure scalable, secure, and efficient infrastructure across the organization.


Key Responsibilities:

  • Define, implement, and manage scalable platform architecture aligned with business and technical requirements.
  • Lead AWS infrastructure design and operations including IAM, VPC, networking, security, and cost optimization.
  • Design and optimize cloud-based storage, compute resources, and orchestration workflows to support data platforms.
  • Collaborate with Data Engineers and DevOps teams to streamline deployment of data pipelines and infrastructure components.
  • Automate infrastructure provisioning and management using Terraform, CloudFormation, or similar Infrastructure as Code (IaC) tools.
  • Integrate platform capabilities with internal tools, analytics platforms, and business applications.
  • Establish cloud engineering best practices including infrastructure security, reliability, and observability.
  • Provide technical mentorship to engineering team members and lead knowledge-sharing initiatives.
  • Monitor system performance, troubleshoot production issues, and implement solutions for reliability and scalability.
  • Drive best practices in cloud engineering, security, and infrastructure as code (IaC).


Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field; or equivalent work experience.
  • 6+ years of hands-on experience in platform engineering, DevOps, or cloud infrastructure roles.
  • Expertise in AWS core services (IAM, EC2, S3, VPC, CloudWatch, etc.) and managing secure, scalable environments.
  • Proficiency in Infrastructure as Code (IaC) using Terraform, CloudFormation, or similar tools.
  • Strong understanding of data platforms, pipelines, and workflow orchestration in cloud-native environments.
  • Experience integrating infrastructure with CI/CD tools and workflows (e.g., GitHub Actions, Jenkins, GitLab CI).
  • Familiarity with cloud security best practices, access management, and cost optimization strategies.
  • Strong problem-solving and troubleshooting skills across cloud and data systems.
  • Prior experience in a leadership or mentoring role is a plus.
  • Excellent communication and collaboration skills to work effectively with cross-functional teams.
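
The IaC bullets above center on CloudFormation, whose templates are plain JSON/YAML documents. A minimal building block can therefore be assembled with the standard library alone, as sketched below. This is a toy generator for illustration, not the AWS CDK or any AWS-maintained API; the resource keys shown (`AWS::S3::Bucket`, `PublicAccessBlockConfiguration`) are standard CloudFormation.

```python
# Hedged sketch: generate a minimal CloudFormation template for a
# private S3 bucket as a JSON string, using only the stdlib.
import json

def s3_bucket_template(bucket_name: str) -> str:
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    # block all public access by default
                    "PublicAccessBlockConfiguration": {
                        "BlockPublicAcls": True,
                        "BlockPublicPolicy": True,
                        "IgnorePublicAcls": True,
                        "RestrictPublicBuckets": True,
                    },
                },
            }
        },
    }
    return json.dumps(template, indent=2)
```
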


Read more
Tech Prescient

at Tech Prescient

3 candid answers
3 recruiters
Ashwini Damle
Posted by Ashwini Damle
Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹30L / yr
skill iconJava
skill iconAmazon Web Services (AWS)
skill iconSpring Boot
Microservices

Job Title- Java Developer

Exp Range- 5-8 yrs

Location- Bangalore/ Hybrid

Desired skill- Java 8, Microservices (Must), AWS, Kafka, Kubernetes


What you will bring


● Strong core Java, concurrency and server-side experience

● 5+ years of experience with hands-on coding.

● Strong Java 8 and Microservices experience. (Must)

● Should have a good understanding of AWS/GCP

● Kafka, AWS stack/Kubernetes

● An understanding of Object Oriented Design and standard design patterns.

● Experience with multi-threaded, 3-tier/distributed architectures, web services, and caching.

● Familiarity with SQL databases

● Ability and willingness to work in a global, fast-paced environment.

● Flexible with the ability to adapt working style to meet objectives.

● Excellent communication and analytical skills

● Ability to effectively communicate with team members

● Experience in the following technologies would be beneficial but not essential, SpringBoot, AWS, Kubernetes, Terraform, Redis

Read more
Tech Prescient

at Tech Prescient

3 candid answers
3 recruiters
Ashwini Damle
Posted by Ashwini Damle
Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹35L / yr
skill iconJava
Microservices
skill iconSpring Boot
skill iconAmazon Web Services (AWS)

Job Title- Senior Java Developer

Exp Range- 8-10 yrs

Location- Bangalore/ Hybrid

Desired skill- Java 8, Microservices (Must), AWS, Kafka, Kubernetes


What you will bring:


● Strong core Java, concurrency and server-side experience

● 8+ years of experience with hands-on coding.

● Strong Java 8 and Microservices experience. (Must)

● Should have a good understanding of AWS/GCP

● Kafka, AWS stack/Kubernetes

● An understanding of Object Oriented Design and standard design patterns.

● Experience with multi-threaded, 3-tier/distributed architectures, web services, and caching.

● Familiarity with SQL databases

● Ability and willingness to work in a global, fast-paced environment.


● Flexible with the ability to adapt working style to meet objectives.

● Excellent communication and analytical skills

● Ability to effectively communicate with team members

● Experience in the following technologies would be beneficial but not essential, SpringBoot, AWS, Kubernetes, Terraform, Redis

Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
6 - 15 yrs
₹15L - ₹32L / yr
DBA
MySQL DBA
skill iconMongoDB
skill iconPostgreSQL
Oracle DBA
+11 more

Position Title : Senior Database Administrator (DBA)

📍 Location : Bangalore (Near Silk Board)

🏢 Work Mode : Onsite, 5 Days a Week

💼 Experience : 6+ Years

⏱️ Notice Period : Immediate to 1 Month


Job Summary :

We’re looking for an experienced Senior DBA to manage and optimize databases like MySQL, MongoDB, PostgreSQL, Oracle, and Redis. You’ll ensure performance, security, and availability of databases across our systems and work closely with engineering teams for support and improvement.


Key Responsibilities :

  • Manage and maintain MySQL, MongoDB, PostgreSQL, Oracle, and Redis databases.
  • Handle backups, restores, upgrades, and replication.
  • Optimize query performance and troubleshoot issues.
  • Ensure database security and access control.
  • Work on disaster recovery and high availability.
  • Support development teams with schema design and tuning.
  • Automate tasks using scripting (Python, Bash, etc.).
  • Collaborate with DevOps and Cloud (AWS) teams.


Must-Have Skills :

  • 6+ Years as a DBA in production environments.
  • Strong hands-on with MySQL, MongoDB, PostgreSQL, Oracle, Redis.
  • Performance tuning and query optimization.
  • Backup/recovery and disaster recovery planning.
  • Experience with AWS (RDS/EC2).
  • Scripting knowledge (Python/Bash).
  • Good understanding of database security.


Good to Have :

  • Experience with MSSQL.
  • Knowledge of tools like pgAdmin, Compass, Workbench.
  • Database certifications.
Read more
Deqode

at Deqode

1 recruiter
Mokshada Solanki
Posted by Mokshada Solanki
Bengaluru (Bangalore), Mumbai, Pune, Gurugram
4 - 5 yrs
₹4L - ₹20L / yr
SQL
skill iconAmazon Web Services (AWS)
Migration
PySpark
ETL

Job Summary:

Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.


Key Responsibilities:

  • Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
  • Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
  • Work on data migration tasks in AWS environments.
  • Monitor and improve database performance; automate key performance indicators and reports.
  • Collaborate with cross-functional teams to support data integration and delivery requirements.
  • Write shell scripts for automation and manage ETL jobs efficiently.


Required Skills:

  • Strong experience with MySQL, complex SQL queries, and stored procedures.
  • Hands-on experience with AWS Glue, PySpark, and ETL processes.
  • Good understanding of AWS ecosystem and migration strategies.
  • Proficiency in shell scripting.
  • Strong communication and collaboration skills.


Nice to Have:

  • Working knowledge of Python.
  • Experience with AWS RDS.



Read more
Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Bengaluru (Bangalore), Pune, Chennai, Mumbai, Gurugram
5 - 7 yrs
₹5L - ₹19L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
PySpark
SQL
redshift

Profile: AWS Data Engineer

Mode- Hybrid

Experience- 5-7 years

Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram


Roles and Responsibilities

  • Design and maintain ETL pipelines using AWS Glue and Python/PySpark
  • Optimize SQL queries for Redshift and Athena
  • Develop Lambda functions for serverless data processing
  • Configure AWS DMS for database migration and replication
  • Implement infrastructure as code with CloudFormation
  • Build optimized data models for performance
  • Manage RDS databases and AWS service integrations
  • Troubleshoot and improve data processing efficiency
  • Gather requirements from business stakeholders
  • Implement data quality checks and validation
  • Document data pipelines and architecture
  • Monitor workflows and implement alerting
  • Keep current with AWS services and best practices


Required Technical Expertise:

  • Python/PySpark for data processing
  • AWS Glue for ETL operations
  • Redshift and Athena for data querying
  • AWS Lambda and serverless architecture
  • AWS DMS and RDS management
  • CloudFormation for infrastructure
  • SQL optimization and performance tuning
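
The "SQL optimization and performance tuning" bullet is easy to demonstrate concretely: inspect the query plan before and after adding an index. SQLite's `EXPLAIN QUERY PLAN` stands in here for Redshift's `EXPLAIN` so the example stays runnable; the principle of checking the plan is the same on either engine.

```python
# Runnable illustration: an index turns a full table scan into an
# index search, visible in the query plan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT)")
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()

conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()
# plan_before mentions a SCAN; plan_after mentions USING INDEX
```
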
Read more
Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
5yrs+
Upto ₹50L / yr (Varies)
skill iconPython
SQL
Data engineering
Apache Spark
PySpark
+6 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions. 

Key Roles & Responsibilities:

  • Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
  • Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
  • Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
  • Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency.
  • Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
  • Implement data governance, security, and compliance best practices.
  • Build and maintain data models, transformations, and data marts for analytics and reporting.
  • Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
  • Automate infrastructure and deployments using Terraform, Airflow, or dbt.
  • Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
  • Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
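
The data-quality bullet above names tools like Great Expectations. The core idea can be sketched without them: declare expectations as plain predicates and collect every failure instead of raising on the first one. This mimics the concept behind such tools, not their actual API.

```python
# Hedged sketch of a declarative data-quality check: run each named
# predicate over each row and collect (row_index, check_name) failures.

def check_rows(rows, expectations):
    """rows: list of dicts; expectations: list of (name, predicate)."""
    failures = []
    for i, row in enumerate(rows):
        for name, pred in expectations:
            if not pred(row):
                failures.append((i, name))
    return failures

expectations = [
    ("amount_non_negative", lambda r: r["amount"] >= 0),
    ("currency_present", lambda r: bool(r.get("currency"))),
]
rows = [
    {"amount": 10, "currency": "USD"},
    {"amount": -5, "currency": "USD"},
    {"amount": 3, "currency": ""},
]
failures = check_rows(rows, expectations)
```
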


Basic Qualifications:

  • Bachelor’s or Master’s Degree in Computer Science or Data Science.
  • 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
  • Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
  • Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
  • Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
  • Proficiency in SQL, Python, or Scala for data transformation and analytics.
  • Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
  • Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
  • Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
  • Strong understanding of data governance, access control, and encryption strategies.
  • Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.


Preferred Qualifications:

  • Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
  • Experience in BI and analytics tools (Tableau, Power BI, Looker).
  • Familiarity with data observability tools (Monte Carlo, Great Expectations).
  • Experience with machine learning feature engineering pipelines in Databricks.
  • Contributions to open-source data engineering projects.
Read more
Deqode

at Deqode

1 recruiter
Alisha Das
Posted by Alisha Das
Pune, Mumbai, Bengaluru (Bangalore), Chennai
4 - 7 yrs
₹5L - ₹15L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
PySpark
Glue semantics
Amazon Redshift
+1 more

Job Overview:

We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.

Key Responsibilities:

  • Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
  • Integrate data from diverse sources and ensure its quality, consistency, and reliability.
  • Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
  • Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
  • Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
  • Automate data validation, transformation, and loading processes to support real-time and batch data processing.
  • Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
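
The loading bullet above benefits from idempotency: if a pipeline run is retried, it must not duplicate records. The usual answer is an upsert (merge) keyed on a business identifier, sketched below with dicts standing in for the warehouse table.

```python
# Sketch of an upsert (merge) step: new records replace existing ones
# by key, so re-running the same load is idempotent.

def upsert(table, records, key):
    """table: {key_value: record}; records: iterable of dicts."""
    for rec in records:
        table[rec[key]] = rec
    return table

table = {}
upsert(table, [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}], key="id")
upsert(table, [{"id": 2, "v": "b2"}], key="id")  # re-run updates, no duplicates
```
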

Required Skills:

  • 5 to 7 years of hands-on experience in data engineering roles.
  • Strong proficiency in Python and PySpark for data transformation and scripting.
  • Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
  • Solid understanding of SQL and database optimization techniques.
  • Experience working with large-scale data pipelines and high-volume data environments.
  • Good knowledge of data modeling, warehousing, and performance tuning.

Preferred/Good to Have:

  • Experience with workflow orchestration tools like Airflow or Step Functions.
  • Familiarity with CI/CD for data pipelines.
  • Knowledge of data governance and security best practices on AWS.
Read more
Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Pune, Mumbai, Bengaluru (Bangalore), Gurugram
4 - 6 yrs
₹5L - ₹10L / yr
ETL
SQL
skill iconAmazon Web Services (AWS)
PySpark
KPI

Role - ETL Developer

Work Mode - Hybrid

Experience- 4+ years

Location - Pune, Gurgaon, Bengaluru, Mumbai

Required Skills - AWS, AWS Glue, Pyspark, ETL, SQL

Required Skills:

  • 4+ years of hands-on experience in MySQL, including SQL queries and procedure development
  • Experience in Pyspark, AWS, AWS Glue
  • Experience in AWS migration
  • Experience with automated scripting and tracking KPIs/metrics for database performance
  • Proficiency in shell scripting and ETL.
  • Strong communication skills and a collaborative team player
  • Knowledge of Python and AWS RDS is a plus


Read more
hirezyai
Aardra Suresh
Posted by Aardra Suresh
Bengaluru (Bangalore)
3 - 6 yrs
₹9L - ₹11L / yr
AWS
DevOps
Linux administration
skill iconAmazon Web Services (AWS)
skill iconPostgreSQL

Key Responsibilities:

Cloud Management:

  • Manage and troubleshoot Linux environments.
  • Create and manage Linux users on EC2 instances.
  • Handle AWS services, including ECR, EKS, EC2, SNS, SES, S3, RDS, Lambda, DocumentDB, IAM, ECS, EventBridge, ALB, and SageMaker.
  • Perform start/stop operations for SageMaker and EC2 instances.
  • Solve IAM permission issues.
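The start/stop operations above can be automated with boto3; here is a hedged sketch with the client injected so the scheduling logic is testable offline (the working-hours window and instance IDs are made-up examples):

```python
# In production the client would be created with:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="ap-south-1")
# SageMaker notebook instances have an analogous pair:
#   sm.start_notebook_instance / sm.stop_notebook_instance

def schedule_instances(ec2, instance_ids, hour_utc, work_start=3, work_end=13):
    """Start instances inside the working-hours window (UTC), stop them outside."""
    if not instance_ids:
        return "noop"
    if work_start <= hour_utc < work_end:
        ec2.start_instances(InstanceIds=instance_ids)  # boto3 EC2 API call
        return "started"
    ec2.stop_instances(InstanceIds=instance_ids)
    return "stopped"
```

Injecting the client keeps the decision logic unit-testable without AWS credentials.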

Containerization and Deployment:

  • Create and manage ECS services.
  • Implement IP whitelisting for enhanced security.
  • Configure target mapping for load balancers and manage Glue jobs.
  • Create load balancers (as needed).

CI/CD Setup:

  • Set up and maintain CI/CD pipelines using AWS CodeCommit, CodeBuild, CodeDeploy, and CodePipeline.

Database Management:

  • Manage PostgreSQL RDS instances, ensuring optimal performance and security.


 

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Minimum 3.5 years of experience in AWS and DevOps roles.
  • Strong experience with Linux administration.
  • Proficient in AWS services, particularly ECR, EKS, EC2, SNS, SES, S3, RDS, DocumentDB, IAM, ECS, EventBridge, ALB, and SageMaker.
  • Experience with CI/CD tools (AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline).
  • Familiarity with PostgreSQL and database management.
Deqode

at Deqode

1 recruiter
Alisha Das
Posted by Alisha Das
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Pune, Jaipur, Kolkata, Indore
4 - 6 yrs
₹5L - ₹18L / yr
.NET
C#
Angular (2+)
Windows Azure
Amazon Web Services (AWS)

Job Description:

Deqode is seeking a skilled .NET Full Stack Developer with expertise in .NET Core, Angular, and C#. The ideal candidate will have hands-on experience with either AWS or Azure cloud platforms. This role involves developing robust, scalable applications and collaborating with cross-functional teams to deliver high-quality software solutions.

Key Responsibilities:

  • Develop and maintain web applications using .NET Core, C#, and Angular.
  • Design and implement RESTful APIs and integrate with front-end components.
  • Collaborate with UI/UX designers, product managers, and other developers to deliver high-quality products.
  • Deploy and manage applications on cloud platforms (AWS or Azure).
  • Write clean, scalable, and efficient code following best practices.
  • Participate in code reviews and provide constructive feedback.
  • Troubleshoot and debug applications to ensure optimal performance.
  • Stay updated with emerging technologies and propose improvements to existing systems.

Required Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Minimum of 4 years of professional experience in software development.
  • Proficiency in .NET Core, C#, and Angular.
  • Experience with cloud services (either AWS or Azure).
  • Strong understanding of RESTful API design and implementation.
  • Familiarity with version control systems like Git.
  • Excellent problem-solving skills and attention to detail.
  • Ability to work independently and collaboratively in a team environment.

Preferred Qualifications:

  • Experience with containerization tools like Docker and orchestration platforms like Kubernetes.
  • Knowledge of CI/CD pipelines and DevOps practices.
  • Familiarity with Agile/Scrum methodologies.
  • Strong communication and interpersonal skills.

What We Offer:

  • Competitive salary and performance-based incentives.
  • Flexible working hours and remote work options.
  • Opportunities for professional growth and career advancement.
  • Collaborative and inclusive work environment.
  • Access to the latest tools and technologies.


Deqode

at Deqode

1 recruiter
Roshni Maji
Posted by Roshni Maji
Pune, Indore, Bengaluru (Bangalore), Nagpur, Hyderabad, Noida, Mumbai, Jaipur, Ahmedabad, Kolkata
4 - 6 yrs
₹5L - ₹13.5L / yr
.NET
.NET Core
Angular (2+)
AngularJS (1.x)
React.js
+3 more

Job Title: .NET Developer

Location: Pan India (Hybrid)

Employment Type: Full-Time

Join Date: Immediate / Within 15 Days

Experience: 4+ Years

Deqode is looking for a skilled and passionate Senior .NET Developer to join our growing tech team. The ideal candidate is an expert in building scalable web applications and has hands-on experience with cloud platforms and modern front-end technologies.


Key Responsibilities:

  • Design, develop, and maintain scalable web applications using .NET Core.
  • Work on RESTful APIs and integrate third-party services.
  • Collaborate with UI/UX designers and front-end developers using Angular or React.
  • Deploy, monitor, and maintain applications on AWS or Azure.
  • Participate in code reviews, technical discussions, and architecture planning.
  • Write clean, well-structured, and testable code following best practices.

Must-Have Skills:

  • 4+ years of experience in software development using .NET Core.
  • Proficiency with Angular or React for front-end development.
  • Strong working knowledge of AWS or Microsoft Azure.
  • Experience with SQL/NoSQL databases.
  • Excellent communication and team collaboration skills.

Education:

  • Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
Deqode

at Deqode

1 recruiter
Naincy Jain
Posted by Naincy Jain
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Pune, Indore, Jaipur, Kolkata, Hyderabad
4 - 6 yrs
₹3L - ₹30L / yr
DevOps
Terraform
Kubernetes
Amazon Web Services (AWS)
AWS Lambda
+1 more

Required Skills:


  • Experience in systems administration, SRE or DevOps focused role
  • Experience in handling production support (on-call)
  • Good understanding of the Linux operating system and networking concepts.
  • Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
  • Experience with Docker containers and containerization concepts
  • Experience with managing and scaling Kubernetes clusters in a production environment
  • Experience building scalable infrastructure in AWS with Terraform.
  • Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
  • Experience monitoring production systems
  • Expertise in leveraging Automation / DevOps principles, experience with operational tools, and able to apply best practices for infrastructure and software deployment (Ansible).
  • HAProxy, Nginx, SSH, MySQL configuration and operation experience
  • Ability to work seamlessly with software developers, QA, project managers, and business development
  • Ability to produce and maintain written documentation
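Production monitoring of the kind listed above often reduces to classifying probe results; a minimal sketch (the status-code tiers and latency threshold are illustrative choices, not recommendations):

```python
def classify_health(status_code: int, latency_ms: float,
                    slow_ms: float = 500.0) -> str:
    """Map one HTTP probe result onto a coarse health state."""
    if status_code >= 500:
        return "critical"   # server-side failure
    if status_code >= 400:
        return "warning"    # client errors may indicate a bad deploy or config
    if latency_ms > slow_ms:
        return "degraded"   # responding, but slower than the SLO allows
    return "healthy"
```

A real setup would feed these states into alerting (e.g., paging only on sustained "critical").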


Wissen Technology

at Wissen Technology

4 recruiters
Hanisha Pralayakaveri
Posted by Hanisha Pralayakaveri
Bengaluru (Bangalore), Mumbai
5 - 9 yrs
Best in industry
Python
Amazon Web Services (AWS)
PySpark
Data engineering

Job Description: Data Engineer 

Position Overview:

We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.

 

Key Responsibilities

  • Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
  • Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
  • Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
  • Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
  • Ensure data quality and consistency by implementing validation and governance practices.
  • Apply data security best practices in compliance with organizational policies and regulations.
  • Automate repetitive data engineering tasks using Python scripts and frameworks.
  • Leverage CI/CD pipelines for deployment of data workflows on AWS.
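Lake loads like those above are commonly laid out as Hive-style date partitions so Glue and Athena can prune them; a small sketch (the bucket and table names are invented for illustration):

```python
from datetime import date

def partition_prefix(bucket: str, table: str, d: date) -> str:
    """Build an S3 prefix like s3://bucket/table/year=YYYY/month=MM/day=DD/."""
    return (
        f"s3://{bucket}/{table}/"
        f"year={d.year:04d}/month={d.month:02d}/day={d.day:02d}/"
    )
```

Writing each day's batch under its own prefix keeps incremental loads and partition-level reprocessing cheap.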

The Alter Office

at The Alter Office

2 candid answers
Harsha Ravindran
Posted by Harsha Ravindran
Bengaluru (Bangalore)
1 - 4 yrs
₹6L - ₹10L / yr
NodeJS (Node.js)
MySQL
SQL
MongoDB
Express
+9 more

Job Title: Backend Developer

Location: In-Office, Bangalore, Karnataka, India


Job Summary:

We are seeking a highly skilled and experienced Backend Developer with a minimum of 1 year of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that drive our applications. You will collaborate with cross-functional teams to ensure seamless integration between frontend and backend components, and your expertise will be critical in architecting scalable, secure, and high-performance backend solutions.


Annual Compensation: 6-10 LPA


Responsibilities:

  • Design, develop, and maintain scalable and efficient backend systems and APIs using NodeJS.
  • Architect and implement complex backend solutions, ensuring high availability and performance.
  • Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
  • Design and optimize data storage solutions using relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB, Redis).
  • Promote a culture of collaboration, knowledge sharing, and continuous improvement.
  • Implement and enforce best practices for code quality, security, and performance optimization.
  • Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
  • Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
  • Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
  • Conduct system design reviews and contribute to architectural discussions.
  • Stay updated with industry trends and emerging technologies to drive innovation within the team.
  • Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
  • Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.


Requirements:

  • Minimum of 1 year of proven experience as a Backend Developer, with a strong portfolio of product-building projects.
  • Extensive experience with JavaScript backend frameworks (e.g., Express, Socket) and a deep understanding of their ecosystems.
  • Strong expertise in SQL and NoSQL databases (MySQL and MongoDB) with a focus on data modeling and scalability.
  • Practical experience with Redis and caching mechanisms to enhance application performance.
  • Proficient in RESTful API design and development, with a strong understanding of API security best practices.
  • In-depth knowledge of asynchronous programming and event-driven architecture.
  • Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
  • Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
  • Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
  • Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
  • Excellent problem-solving, analytical, and communication skills.
  • Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
TechMynd Consulting

at TechMynd Consulting

2 candid answers
Suraj N
Posted by Suraj N
Bengaluru (Bangalore), Gurugram, Mumbai
4 - 8 yrs
₹10L - ₹24L / yr
Data Science
PostgreSQL
Python
Apache
Amazon Web Services (AWS)
+5 more

Senior Data Engineer


Location: Bangalore, Gurugram (Hybrid)


Experience: 4-8 Years


Type: Full Time | Permanent


Job Summary:


We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.


This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.


Key Responsibilities:

PostgreSQL & Data Modeling

  • Design and optimize complex SQL queries, stored procedures, and indexes
  • Perform performance tuning and query plan analysis
  • Contribute to schema design and data normalization

Data Migration & Transformation

  • Migrate data from multiple sources to cloud or ODS platforms
  • Design schema mapping and implement transformation logic
  • Ensure consistency, integrity, and accuracy in migrated data

Python Scripting for Data Engineering

  • Build automation scripts for data ingestion, cleansing, and transformation
  • Handle file formats (JSON, CSV, XML), REST APIs, and cloud SDKs (e.g., Boto3)
  • Maintain reusable script modules for operational pipelines

Data Orchestration with Apache Airflow

  • Develop and manage DAGs for batch/stream workflows
  • Implement retries, task dependencies, notifications, and failure handling
  • Integrate Airflow with cloud services, data lakes, and data warehouses

Cloud Platforms (AWS / Azure / GCP)

  • Manage data storage (S3, GCS, Blob), compute services, and data pipelines
  • Set up permissions, IAM roles, encryption, and logging for security
  • Monitor and optimize cost and performance of cloud-based data operations

Data Marts & Analytics Layer

  • Design and manage data marts using dimensional models
  • Build star/snowflake schemas to support BI and self-serve analytics
  • Enable incremental load strategies and partitioning

Modern Data Stack Integration

  • Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
  • Support modular pipeline design and metadata-driven frameworks
  • Ensure high availability and scalability of the stack

BI & Reporting Tools (Power BI / Apache Superset)

  • Collaborate with BI teams to design datasets and optimize queries
  • Support development of dashboards and reporting layers
  • Manage access, data refreshes, and performance for BI tools

Required Skills & Qualifications:

  • 4–6 years of hands-on experience in data engineering roles
  • Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
  • Advanced Python scripting skills for automation and ETL
  • Proven experience with Apache Airflow (custom DAGs, error handling)
  • Solid understanding of cloud architecture (especially AWS)
  • Experience with data marts and dimensional data modeling
  • Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
  • Familiarity with BI tools like Power BI or Apache Superset
  • Version control (Git) and CI/CD pipeline knowledge is a plus
  • Excellent problem-solving and communication skills
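The incremental-load strategy mentioned above usually hinges on a watermark column; a minimal sketch of that filter (the `(timestamp, payload)` row shape is an illustrative assumption):

```python
from datetime import datetime

def incremental_batch(rows, watermark: datetime):
    """Return rows newer than the watermark, plus the advanced watermark.

    rows: iterable of (updated_at: datetime, payload) tuples.
    """
    fresh = [r for r in rows if r[0] > watermark]
    new_watermark = max((r[0] for r in fresh), default=watermark)
    return fresh, new_watermark
```

In practice the watermark would be persisted (e.g., in a control table) between pipeline runs so each run picks up only the delta.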

Invensis Technologies Pvt
Bengaluru (Bangalore)
7 - 10 yrs
₹10L - ₹35L / yr
Ruby on Rails (ROR)
React.js
NextJs (Next.js)
RESTful APIs
DevOps
+3 more

Full Stack Developer – SaaS Platforms (Ruby on Rails, React.js, Next.js, AWS)

Location: Office

Experience Level: 5+ Years

Employment Type: Full-Time

✨ About the Role

We are looking for a skilled and passionate Full Stack Developer to join our SaaS Product Engineering team. You will work across backend and frontend technologies to build, optimize, and scale multiple SaaS products in a dynamic environment.

If you are excited by clean code, modern cloud-native practices, and the chance to contribute to impactful products from the ground up, we would love to meet you!


🔥 Key Responsibilities

  • Develop and maintain scalable SaaS-based platforms using Ruby on Rails (backend) and React.js / Next.js (frontend).
  • Build RESTful APIs and integrate third-party services as needed.
  • Collaborate with Product Managers, Designers, and QA teams to deliver high-quality product features for multiple projects.
  • Write clean, secure, maintainable, and efficient code following best practices.
  • Optimize applications for performance, scalability, maintainability, and security.
  • Participate actively in code reviews, sprint planning, and team discussions.
  • Support DevOps practices including CI/CD pipelines and cloud deployments on AWS.
  • Make architecture-level technical decisions for the products.
  • Continuously research and learn new technologies to enhance product performance.


🛠️ Required Skills and Experience

  • 3–6 years of hands-on software engineering experience, preferably with SaaS platforms.
  • Strong Full Stack Development Skills:
  • Backend: Ruby on Rails (6+ preferred)
  • Frontend: React.js, Next.js (static generation and server-side rendering)
  • Database: PostgreSQL, MongoDB, Redis
  • Experience deploying applications to AWS cloud environment.
  • Good understanding of APIs (RESTful and/or GraphQL) and third-party integrations.
  • Familiarity with Docker and CI/CD pipelines (GitHub Actions, GitLab CI, etc.).
  • Knowledge of security principles (OAuth2, API security best practices).
  • Familiarity with Agile development methodologies (Scrum, Kanban).
  • Experience in handling a team.
  • Basic understanding of test-driven development (RSpec, Jest or similar frameworks).


🎯 Preferred (Nice-to-Have)

  • Exposure to AWS Lightsail, EC2, or Lambda.
  • Experience with SaaS multi-tenant system design.
  • Experience with third-party integrations like payments application.
  • Previous work experience in startups or high-growth product companies.
  • Basic knowledge of performance tuning and system optimization.

👤 Who You Are

  • A problem solver with strong technical fundamentals.
  • A self-motivated learner who enjoys working in collaborative environments.
  • Someone who takes ownership and accountability for deliverables.
  • A team player willing to mentor junior developers and contribute to team goals.


📈 What We Offer

  • Opportunity to work on innovative, impactful SaaS products.
  • A collaborative and transparent work culture.
  • Growth and learning opportunities across technologies and domains.
  • Competitive compensation and benefits.
Deqode

at Deqode

1 recruiter
Roshni Maji
Posted by Roshni Maji
Pune, Gurugram, Noida, Bhopal, Bengaluru (Bangalore)
4 - 8 yrs
₹8L - ₹22L / yr
MLOps
Amazon Web Services (AWS)
AWS SageMaker
Python

Role - MLOps Engineer

Location - Pune, Gurgaon, Noida, Bhopal, Bangalore 

Mode - Hybrid


Role Overview

We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.


Key Responsibilities

  • Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring.
  • Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems.
  • Develop and manage scalable infrastructure using AWS, particularly AWS Sagemaker.
  • Automate ML workflows using CI/CD best practices and tools.
  • Ensure model reproducibility, governance, and performance tracking.
  • Monitor deployed models for data drift, model decay, and performance metrics.
  • Implement robust versioning and model registry systems.
  • Apply security, performance, and compliance best practices across ML systems.
  • Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.


Required Skills & Qualifications

  • 4+ years of experience in Software Engineering or MLOps, preferably in a production environment.
  • Proven experience with AWS services, especially AWS Sagemaker for model development and deployment.
  • Working knowledge of AWS DataZone (preferred).
  • Strong programming skills in Python, with exposure to R, Scala, or Apache Spark.
  • Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes).
  • Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools.
  • Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
  • Solid understanding of DevOps and cloud-native infrastructure practices.
  • Excellent problem-solving skills and the ability to work collaboratively across teams.
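Monitoring deployed models for data drift, as described above, can be as simple as comparing a live feature's mean against training statistics; a crude sketch (the 3-sigma standard-error threshold is an illustrative choice — production systems often use PSI or KS tests instead):

```python
from statistics import mean

def drifted(train_mean: float, train_std: float,
            live_sample, sigmas: float = 3.0) -> bool:
    """Flag drift when the live mean sits far outside the training mean."""
    if not live_sample or train_std == 0:
        return False  # nothing meaningful to compare against
    standard_error = train_std / len(live_sample) ** 0.5
    return abs(mean(live_sample) - train_mean) > sigmas * standard_error
```

A drift flag like this would typically trigger retraining or an alert rather than blocking inference outright.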


Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Pune
4 - 10 yrs
₹10L - ₹24L / yr
Java
Artificial Intelligence (AI)
Automation
IDX
Spring Boot
+4 more

Job Title : Senior Backend Engineer – Java, AI & Automation

Experience : 4+ Years

Location : Any Cognizant location (India)

Work Mode : Hybrid

Interview Rounds :

  1. Virtual
  2. Face-to-Face (In-person)

Job Description :

Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.

You'll work on scalable backend systems powering millions of daily transactions across Intuit products.


Key Qualifications :

  • 4+ years of backend development experience.
  • Strong in Java, Spring framework.
  • Experience with microservices, databases, and web applications.
  • Proficient in AWS and cloud-based systems.
  • Exposure to AI and automation tools (Workato preferred).
  • Python development experience.
  • Strong communication skills.
  • Comfortable with occasional US shift overlap.
WrkSpot

WrkSpot

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Bengaluru (Bangalore)
6 - 9 yrs
₹12L - ₹15L / yr
React.js
Angular (2+)
MobX
SCSS
AngularJS (1.x)
+4 more

Role: Sr. Frontend Developer

Exp: 6- 9 Years

CTC: up to 30 LPA

Location: Bangalore


What we Require

  • To convert our existing web application into a micro front-end architecture. The ideal candidate should have experience in AngularJS, ReactJS, MobX, and SCSS. Exposure to socket integration and MQTT integration would be a plus.
  • Identify our existing web application’s different components and modules. Separate them into independent micro front-ends.
  • Develop and maintain micro front-ends using AngularJS, ReactJS, MobX and SCSS
  • Implement and configure Module Federation in Webpack 5 to share components between micro front-ends.
  • Optimize micro front-ends for maximum speed and scalability.


Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Pune, Bhopal, Jaipur
4 - 6 yrs
₹4L - ₹20L / yr
Amazon Web Services (AWS)
Python
SageMaker
MLOps

Role - MLOps Engineer

Required Experience - 4 Years

Location - Pune, Gurgaon, Noida, Bhopal, Bangalore 

Mode - Hybrid


Key Requirements:

  • 4+ years of experience in Software Engineering with MLOps focus
  • Strong expertise in AWS, particularly AWS SageMaker (required)
  • AWS Data Zone experience (preferred)
  • Proficiency in Python, R, Scala, or Spark
  • Experience developing scalable, reliable, and secure applications
  • Track record of production-grade development, integration and support


 

PGAGI
Javeriya Shaik
Posted by Javeriya Shaik
Remote, Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹12L / yr
Docker
Jenkins
Windows Azure
Amazon Web Services (AWS)
GitHub
+4 more

Position: Project Manager

Location: Bengaluru, India (Hybrid/Remote flexibility available)

Company: PGAGI Consultancy Pvt. Ltd


About PGAGI

At PGAGI, we are building the future where human and artificial intelligence coexist to solve complex problems, accelerate innovation, and power sustainable growth. We develop and deploy advanced AI solutions across industries, making AI not just a tool but a transformational force for businesses and society.


Position Summary

PGAGI is seeking a dynamic and experienced Project Manager to lead cross-functional engineering teams and drive the successful execution of multiple AI/ML-centric projects. The ideal candidate is a strategic thinker with a solid background in engineering-led product/project management, especially in AI/ML product lifecycles. This role is crucial to scaling our technical operations, ensuring seamless collaboration, timely delivery, and high-impact results across initiatives.


Key Responsibilities

• Lead Engineering Teams Across AI/ML Projects: Manage and mentor cross-functional teams of ML engineers, DevOps professionals, and software developers through agile delivery cycles, ensuring timely and high-quality execution of AI-focused initiatives.

• Drive Agile Project Execution: Define project scope, objectives, timelines, and deliverables using Agile/Scrum methodologies. Ensure continuous sprint planning, backlog grooming, and milestone tracking via tools like Jira or GitHub Projects.

• Manage Multiple Concurrent Projects: Oversee the full lifecycle of multiple high-priority projects—ranging from AI model development and infrastructure integration to client delivery and platform enhancements.

• Collaborate with Technical and Business Stakeholders: Act as the bridge between engineering, research, and client-facing teams, translating complex requirements into actionable tasks and product features.

• Maintain Engineering and Infrastructure Quality: Uphold rigorous engineering standards across deployments. Coordinate testing, model performance validation, version control, and CI/CD operations.

• Budget and Resource Allocation: Optimize resource distribution across teams, track project costs, and ensure effective use of cloud infrastructure and personnel to maximize project ROI.

• Risk Management & Mitigation: Identify risks proactively across technical and operational layers. Develop mitigation plans and troubleshoot issues that may impact timelines or performance.

• Monitor KPIs and Delivery Metrics: Establish and monitor performance indicators such as sprint velocity, deployment frequency, incident response times, and customer satisfaction for each release.

• Support Continuous Improvement: Foster a culture of feedback and iteration. Champion retrospectives and process reviews to continually refine development practices and workflows.

Qualifications:

• Education: Bachelor’s or Master’s in Computer Science, Engineering, or a related technical field.

• Experience: Minimum 5 years of experience as a Project Manager, with at least 2 years managing AI/ML or software engineering teams.

• Tech Expertise: Familiarity with AI/ML lifecycles, cloud platforms (AWS, GCP, or Azure), and DevOps pipelines (Docker, Kubernetes, GitHub Actions, Jenkins).

• Tools: Strong experience with Jira, Confluence, and project tracking/reporting tools.

• Leadership: Proven success leading high-performing engineering teams in a fast-paced, innovative environment.

• Communication: Excellent written and verbal skills to interface with both technical and non-technical stakeholders.

• Certifications (Preferred): PMP, CSM, or certifications in AI/ML project management or cloud technologies.


Why Join PGAGI?

• Lead cutting-edge AI/ML product teams building scalable, impactful solutions.

• Be part of a fast-growing, innovation-driven startup environment.

• Enjoy a collaborative, intellectually stimulating workplace with growth opportunities.

• Competitive compensation and performance-based rewards.

• Access to learning resources, mentoring, and AI/DevOps communities.

Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹20L / yr
Java
Amazon Web Services (AWS)
  • 2+ years of hands-on experience in Java development.
  • Strong knowledge of Boto3/Boto AWS libraries (Python).
  • Solid experience with AWS services: EC2, ELB/ALB, CloudWatch.
  • Familiarity with SRE practices and maintenance processes.
  • Strong experience in debugging, troubleshooting, and unit testing.
  • Proficiency with Git and CI/CD tools.
  • Understanding of distributed systems and cloud-native architecture.


Kenscio
Parikshith D B
Posted by Parikshith D B
Bengaluru (Bangalore)
1 - 4 yrs
₹4L - ₹10L / yr
NodeJS (Node.js)
MySQL
TypeScript
Amazon Web Services (AWS)
Windows Azure
+1 more

A backend developer is an engineer who can handle all the work of databases, servers, systems engineering, and clients. Depending on the project, what customers need may be a mobile stack, a web stack, or a native application stack.


You will be responsible for:

  • Build reusable code and libraries for future use.
  • Own and build new modules/features end-to-end independently.
  • Collaborate with other team members and stakeholders.

Required Skills:

  • Thorough understanding of Node.js and TypeScript.
  • Excellence in at least one framework such as StrongLoop LoopBack, Express.js, or Sails.js.
  • Basic architectural understanding of modern web applications.
  • Diligence for coding standards.
  • Must be good with Git and Git workflows.
  • Experience with external integrations is a plus.
  • Working knowledge of AWS, GCP, or Azure; expertise with Linux-based systems.
  • Experience with CI/CD tools like Jenkins is a plus.
  • Experience with testing and automation frameworks.
  • Extensive understanding of RDBMS systems.

Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore)
1 - 3 yrs
₹5L - ₹17L / yr
Python
SQL
ETL
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

Job Summary:

We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.

Key Responsibilities:

  • Assist in the design, development, and maintenance of scalable and efficient data pipelines.
  • Write clean, maintainable, and performance-optimized SQL queries.
  • Develop data transformation scripts and automation using Python.
  • Support data ingestion processes from various internal and external sources.
  • Monitor data pipeline performance and help troubleshoot issues.
  • Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
  • Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
  • Document technical processes and pipeline architecture.

Core Skills Required:

  • Proficiency in SQL (data querying, joins, aggregations, performance tuning).
  • Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy).
  • Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
  • Understanding of relational databases and data warehouse concepts.
  • Familiarity with version control systems like Git.
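The SQL skills above (joins, aggregations) can be exercised end-to-end with an in-memory SQLite database; the table names and data here are invented for illustration:

```python
import sqlite3

def top_spenders(users, orders, limit: int = 2):
    """Join users to their orders and return the biggest spenders."""
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (user_id INTEGER, amount REAL);
    """)
    con.executemany("INSERT INTO users VALUES (?, ?)", users)
    con.executemany("INSERT INTO orders VALUES (?, ?)", orders)
    rows = con.execute("""
        SELECT u.name, SUM(o.amount) AS total
        FROM orders o JOIN users u ON u.id = o.user_id
        GROUP BY u.name ORDER BY total DESC LIMIT ?;
    """, (limit,)).fetchall()
    con.close()
    return rows
```

The same join/aggregate pattern carries over directly to warehouse engines like Redshift or BigQuery.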

Preferred Qualifications:

  • Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
  • Familiarity with data modeling and data integration concepts.
  • Basic knowledge of CI/CD practices for data pipelines.
  • Bachelor’s degree in Computer Science, Engineering, or related field.


ZeMoSo Technologies

at ZeMoSo Technologies

11 recruiters
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Chennai, Pune
4 - 8 yrs
₹10L - ₹15L / yr
Data engineering
Python
SQL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+3 more

Work Mode: Hybrid


B.Tech, BE, M.Tech, or ME candidates only - Mandatory



Must-Have Skills:

● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.

● Minimum of 3 years of proven experience as a Data Engineer.

● Strong proficiency in Python programming language and SQL.

● Experience in Databricks and in setting up and managing data pipelines and data warehouses/lakes.

● Good comprehension and critical thinking skills.


● Kindly note: the salary bracket will vary according to the candidate's experience -

- Experience from 4 yrs to 6 yrs - Salary up to 22 LPA

- Experience from 5 yrs to 8 yrs - Salary up to 30 LPA

- Experience more than 8 yrs - Salary up to 40 LPA

Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3yrs+
Up to ₹35L / yr (varies)
Go Programming (Golang)
Python
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)

About the Company – Gruve

Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their Data Life Cycle. We specialize in Cybersecurity, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence.

As a well-funded early-stage startup, we offer a dynamic environment, backed by strong customer and partner networks. Our mission is to help customers make smarter decisions through data-driven business strategies.


Why Gruve

At Gruve, we foster a culture of:

  • Innovation, collaboration, and continuous learning
  • Diversity and inclusivity, where everyone is encouraged to thrive
  • Impact-focused work — your ideas will shape the products we build

We’re an equal opportunity employer and encourage applicants from all backgrounds. We appreciate all applications, but only shortlisted candidates will be contacted.


Position Summary

We are seeking a highly skilled Software Engineer to lead the development of an Infrastructure Asset Management Platform. This platform will assist infrastructure teams in efficiently managing and tracking assets for regulatory audit purposes.

You will play a key role in building a comprehensive automation solution to maintain a real-time inventory of critical infrastructure assets.


Key Responsibilities

  • Design and develop an Infrastructure Asset Management Platform for tracking a wide range of assets across multiple environments.
  • Build and maintain automation to track:
      • Physical Assets: Servers, power strips, racks, DC rooms & buildings, security cameras, network infrastructure.
      • Virtual Assets: Load balancers (LTM), communication equipment, IPs, virtual networks, VMs, containers.
      • Cloud Assets: Public cloud services, process registry, database resources.
  • Collaborate with infrastructure teams to understand asset-tracking requirements and convert them into technical implementations.
  • Optimize performance and scalability to handle large-scale asset data in real-time.
  • Document system architecture, implementation, and usage.
  • Generate reports for compliance and auditing.
  • Ensure integration with existing systems for streamlined asset management.


Basic Qualifications

  • Bachelor’s or Master’s degree in Computer Science or a related field
  • 3–6 years of experience in software development
  • Strong proficiency in Golang and Python
  • Hands-on experience with public cloud infrastructure (AWS, GCP, Azure)
  • Deep understanding of automation solutions and parallel computing principles
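The parallel-computing principle mentioned above can be sketched with Python's standard concurrent.futures; the asset-scanning function here is a hypothetical stand-in for a real inventory collector, not part of the platform described:

```python
from concurrent.futures import ThreadPoolExecutor

def scan_asset(asset_id: str) -> dict:
    """Hypothetical collector: a real platform would query an API or
    device for the asset's current state instead of returning a stub."""
    return {"id": asset_id, "status": "tracked"}

asset_ids = ["server-01", "rack-07", "vm-42", "lb-03"]

# Fan the scans out in parallel; pool.map returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    inventory = list(pool.map(scan_asset, asset_ids))

print(len(inventory))  # 4
```

For I/O-bound asset polling a thread pool like this is usually sufficient; CPU-bound processing would call for ProcessPoolExecutor instead.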


Preferred Qualifications

  • Excellent problem-solving skills and attention to detail
  • Strong communication and teamwork skills


Tech Prescient

at Tech Prescient

3 candid answers
3 recruiters
Ashwini Damle
Posted by Ashwini Damle
Remote, Bengaluru (Bangalore)
8 - 10 yrs
₹20L - ₹35L / yr
Java
NodeJS (Node.js)
NoSQL Databases
SQL
Amazon Web Services (AWS)
+1 more

Job Title- Senior Full Stack Web Developer

Job location- Bangalore/Hybrid

Availability- Immediate Joiners

Experience Range- 5-8 yrs

Desired skills - Java, AWS, SQL/NoSQL, JavaScript, Node.js (good to have)


We are looking for a Senior Full Stack Web Developer (Java) with 8-10 years of experience



  1. Working on different aspects of the core product and associated tools, (server-side or user-interfaces depending on the team you'll join)
  2. Expertise as a full stack software engineer on large-scale, complex software systems, with 8+ years of experience with technologies such as Java, relational and non-relational databases, Node.js, and AWS Cloud
  3. Assisting with in-life maintenance, testing, debugging and documentation of deployed services
  4. Coding & designing new features
  5. Creating the supporting functional and technical specifications
  6. Deep understanding of system architecture and distributed systems
  7. Stay updated with the latest services, tools, and trends, and implement innovative solutions that contribute to the company's growth


DeepVidya AI Private Limited (OpenCV University)
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹10L / yr
Python
MySQL
Amazon Web Services (AWS)
Amazon EC2
Amazon S3
+6 more

About the job


Location: Bangalore, India

Job Type: Full-Time | On-Site


Job Description

We are looking for a highly skilled and motivated Python Backend Developer to join our growing team in Bangalore. The ideal candidate will have a strong background in backend development with Python, deep expertise in relational databases like MySQL, and hands-on experience with AWS cloud infrastructure.


Key Responsibilities

  • Design, develop, and maintain scalable backend systems using Python.
  • Architect and optimize relational databases (MySQL), including complex queries and indexing.
  • Manage and deploy applications on AWS cloud services (EC2, S3, RDS, DynamoDB, API Gateway, Lambda).
  • Automate cloud infrastructure using CloudFormation or Terraform.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Mentor junior developers and contribute to a culture of technical excellence.
  • Proactively identify issues and provide solutions to challenging backend problems.
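To give a flavour of the AWS Lambda and API Gateway work listed above, here is a minimal sketch of a Python Lambda handler for an API Gateway proxy integration; the event shape and greeting logic are illustrative assumptions, not a company-specific implementation:

```python
import json

def lambda_handler(event, context):
    """Hypothetical handler for an API Gateway proxy integration.

    Reads a 'name' query-string parameter and returns a JSON greeting
    in the proxy-integration response format (statusCode/headers/body).
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local smoke test with a fake API Gateway event (no AWS account required).
response = lambda_handler({"queryStringParameters": {"name": "Bangalore"}}, None)
print(response["body"])  # {"message": "Hello, Bangalore!"}
```

Because a handler is just a function, it can be unit-tested locally like this before being deployed behind API Gateway.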


Mandatory Requirements

  • Minimum 3 years of professional experience in Python backend development.
  • Expert-level knowledge in MySQL database creation, optimization, and query writing.
  • Strong experience with AWS services, particularly EC2, S3, RDS, DynamoDB, API Gateway, and Lambda.
  • Hands-on experience with infrastructure as code using CloudFormation or Terraform.
  • Proven problem-solving skills and the ability to work independently.
  • Demonstrated leadership abilities and team collaboration skills.
  • Excellent verbal and written communication.
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Kolkata
8 - 15 yrs
₹25L - ₹45L / yr
Java
Spring Boot
Microservices
Leadership
Team leadership
+11 more

Job Title : Lead Java Developer (Backend)

Experience Required : 8 to 15 Years

Open Positions : 5

Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)

Work Mode : Open to Remote / Hybrid / Onsite

Notice Period : Immediate Joiner/30 Days or Less


About the Role:

  • We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
  • This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code: you’ll help shape future-ready platforms and drive meaningful change.
  • This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.


Key Responsibilities:

  • Design, develop, and implement scalable backend systems using Java and Spring Boot.
  • Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
  • Advocate and implement engineering best practices: SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
  • Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
  • Guide and mentor team members, fostering technical excellence and ownership.
  • Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.

What We’re Looking For:

  • Proven experience in Java backend development (Spring Boot, Microservices).
  • 8+ years of hands-on engineering experience, with at least 2+ years in a lead role.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP.
  • Good understanding of containerization and orchestration tools like Docker and Kubernetes.
  • Exposure to DevOps and Infrastructure as Code practices.
  • Strong problem-solving skills and the ability to design solutions from first principles.
  • Prior experience in product-based or startup environments is a big plus.

Ideal Candidate Profile:

  • A tech enthusiast with a passion for clean code and scalable architecture.
  • Someone who thrives in collaborative, transparent, and feedback-driven environments.
  • A leader who takes ownership beyond individual deliverables to drive overall team and project success.

Interview Process

  1. Initial Technical Screening (via platform partner)
  2. Technical Interview with Engineering Team
  3. Client-facing Final Round

Additional Info:

  • Targeting profiles from product/startup backgrounds.
  • Strong preference for candidates with under 1 month of notice period.
  • Interviews will be fast-tracked for qualified profiles.
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
5 - 8 yrs
₹12L - ₹22L / yr
Python
Django
Amazon Web Services (AWS)
Flask
Windows Azure

About the Role:


We are looking for a highly skilled and experienced Senior Python Developer to join our dynamic team based in Manyata Tech Park, Bangalore. The ideal candidate will have a strong background in Python development, object-oriented programming, and cloud-based application development. You will be responsible for designing, developing, and maintaining scalable backend systems using modern frameworks and tools.

This role is hybrid, with a strong emphasis on working from the office to collaborate effectively with cross-functional teams.


Key Responsibilities:

  • Design, develop, test, and maintain backend services using Python.
  • Develop RESTful APIs and ensure their performance, responsiveness, and scalability.
  • Work with popular Python frameworks such as Django or Flask for rapid development.
  • Integrate and work with cloud platforms (AWS, Azure, GCP or similar).
  • Collaborate with front-end developers and other team members to establish objectives and design cohesive code.
  • Apply object-oriented programming principles to solve real-world problems efficiently.
  • Implement and support event-driven architectures where applicable.
  • Identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.
  • Write clean, maintainable, and reusable code with proper documentation.
  • Contribute to system architecture and code review processes.


Required Skills and Qualifications:


  • Minimum of 5 years of hands-on experience in Python development.
  • Strong understanding of Object-Oriented Programming (OOP) and Data Structures.
  • Proficiency in building and consuming REST APIs.
  • Experience working with at least one cloud platform such as AWS, Azure, or Google Cloud Platform.
  • Hands-on experience with Python frameworks like Django, Flask, or similar.
  • Familiarity with event-driven programming and asynchronous processing.
  • Excellent problem-solving, debugging, and troubleshooting skills.
  • Strong communication and collaboration abilities to work effectively in a team environment.
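The event-driven and asynchronous processing mentioned above can be sketched with a minimal asyncio producer/consumer using only the standard library; the event names and sentinel convention here are illustrative choices, not a prescribed design:

```python
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    # Emit a few hypothetical events, then signal completion with None.
    for i in range(3):
        await queue.put({"event": f"order-{i}"})
    await queue.put(None)

async def consumer(queue: asyncio.Queue, handled: list) -> None:
    # Drain events until the sentinel arrives.
    while True:
        event = await queue.get()
        if event is None:
            break
        handled.append(event["event"])

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    handled: list = []
    # Run producer and consumer concurrently on the same queue.
    await asyncio.gather(producer(queue), consumer(queue, handled))
    return handled

handled = asyncio.run(main())
print(handled)  # ['order-0', 'order-1', 'order-2']
```

The same pattern generalises to real brokers (e.g. SQS or Kafka consumers) where the queue is external rather than in-process.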

