50+ Amazon EC2 Jobs in India
Job Description
Position Title: IT Intern (Full Time)
Department: Information Technology
Work Mode: Work From Home (WFH)
Educational Qualification: B.Tech / M.Tech
Shift: Rotational Shifts (6am to 3pm, 2pm to 11pm, and 10pm to 7am)
---
Role Summary
The IT Intern will support day-to-day IT operations and assist in maintaining the organization’s IT infrastructure. This role provides structured exposure to desktop support, cloud platforms, user management, server infrastructure, and IT security practices under the guidance of senior IT team members.
---
Key Responsibilities
· Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.
· Support user account management activities using Microsoft Entra ID (formerly Azure Active Directory), Active Directory, and Microsoft 365.
· Assist the IT team in configuring, monitoring, and supporting AWS cloud services, including EC2, S3, IAM, and WorkSpaces.
· Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.
· Assist with data backups, basic disaster recovery tasks, and implementation of security procedures in line with company policies.
· Create, update, and maintain technical documentation, SOPs, and knowledge base articles.
· Collaborate with internal teams to support system upgrades, IT infrastructure improvements, and ongoing IT projects.
· Adhere to company IT policies, data security standards, and confidentiality requirements.
---
Required Skills & Competencies
· Basic understanding of IT infrastructure, networking concepts, and operating systems
· Familiarity with cloud platforms such as AWS and/or Microsoft Azure
· Fundamental knowledge of Active Directory and user access management
· Strong willingness to learn and adapt to new technologies
· Good analytical, problem-solving, and communication skills
· Ability to work independently in a remote environment
---
Technical Requirements
· Personal laptop/desktop with required specifications
· Reliable internet connectivity to support remote work
---
Learning & Development Opportunities
· Hands-on exposure to enterprise IT environments
· Practical experience with cloud technologies and infrastructure support
· Mentorship from experienced IT professionals
· Opportunity to develop technical, documentation, and operational skills
Job Details
- Job Title: Director of Engineering
- Industry: SaaS
- Function: Information Technology
- Experience Required: 9-14 years
- Working Days: 6 days
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: TypeScript, AWS, Node.js, MongoDB, React.js, WebGL, Three.js, AI/ML, Docker, Kubernetes
Criteria
- Candidate must have 9+ years of engineering experience, with 3–4 years in technical leadership.
- Hands-on expertise with React/Next.js, Node.js/Python, and AWS.
- Ability to design scalable architectures for high-performance systems.
- Should have AI/ML deployment experience.
- Strong 3D graphics/WebGL/Three.js knowledge.
- Candidates should be from SaaS/Software/IT Services startups or scale-up companies only.
Job Description
The Role:
Company is hiring a hands-on Director of Engineering who codes, architects systems, and builds teams. You’ll set the technical foundation, drive engineering excellence, and own the architecture of our AI, 3D, and XR platform.
This is not a pure management role: expect to spend 50–60% of your time writing code, solving deep technical problems, and owning mission-critical systems. As we scale, this role transitions into CTO, taking full ownership of technical vision and long-term strategy.
What You’ll Own:
1. Technical Leadership & Architecture
● Architect company’s full-stack platform across frontend, backend, infrastructure, and AI.
● Scale core systems: VersaAI engine, rendering pipeline, AR deployment, analytics.
● Make decisions on stack, scalability patterns, architecture, and technical debt.
● Own design for high-performance 3D asset processing, real-time rendering, and ML deployment.
● Lead architectural discussions, design reviews, and set engineering standards.
2. Hands-On Development
● Write production-grade code across frontend, backend, APIs, and cloud infra.
● Build critical features and core system components independently.
● Debug complex systems and optimize performance end-to-end.
● Implement and optimize AI/ML pipelines for 3D generation, CV, and recognition.
● Build scalable backend services for large-scale asset processing and real-time pipelines.
● Develop WebGL/Three.js rendering and AR workflows.
3. Team Building & Engineering Management
● Hire and grow a team of 5–8 engineers initially (scaling to 15–20).
● Establish engineering culture, values, and best practices.
● Build career frameworks, performance systems, and growth plans.
● Conduct 1:1s, mentor engineers, and drive continuous improvement.
● Set up processes for agile execution, deployments, and incident response.
4. Product & Cross-Functional Collaboration
● Work with the founder and product team on roadmap, feasibility, and prioritization.
● Translate product requirements into technical execution plans.
● Collaborate with design for UX quality and technical alignment.
● Support sales and customer success with integrations and technical discussions.
● Contribute technical inputs to product strategy and customer-facing initiatives.
5. Engineering Operations & Infrastructure
● Own CI/CD, testing frameworks, deployments, and automation.
● Create monitoring, logging, and alerting setups for reliability.
● Manage AWS infrastructure with a focus on cost and performance.
● Build internal tools, documentation, and developer workflows.
● Ensure enterprise-grade security, compliance, and reliability.
Tech Stack:
1. Frontend
React.js, Next.js, TypeScript, WebGL, Three.js
2. Backend
Node.js, Python, Express/FastAPI, REST, GraphQL
3. AI/ML
PyTorch, TensorFlow, CV models, Stable Diffusion, LLMs, ML pipelines
4. 3D & Graphics
Three.js, WebGL, Babylon.js, glTF, USDZ, rendering optimization
5. Databases
PostgreSQL, MongoDB, Redis, vector databases
6. Cloud & Infra
AWS (EC2, S3, Lambda, SageMaker), Docker, Kubernetes
CI/CD: GitHub Actions
Monitoring: Datadog, Sentry
What We’re Looking For:
1. Must-Haves
● 9+ years of engineering experience, with 3–4 years in technical leadership.
● Deep full-stack experience with strong system design fundamentals.
● Proven success building products from 0→1 in fast-paced environments.
● Hands-on expertise with React/Next.js, Node.js/Python, and AWS.
● Ability to design scalable architectures for high-performance systems.
● Strong people leadership with experience hiring and mentoring teams.
● Ready to code, review, design, and lead from the front.
● Startup mindset: fast execution, problem-solving, ownership.
2. Highly Desirable
● AI/ML deployment experience (CV, generative AI, 3D reconstruction).
● Strong 3D graphics/WebGL/Three.js knowledge.
● Experience with real-time systems, rendering optimizations, or large-scale pipelines.
● Background in B2B SaaS, XR, gaming, or immersive tech.
● Experience scaling engineering teams from 5 → 20+.
● Open-source contributions or technical content creation.
● Experience working closely with founders or executive leadership.
Why Company:
● Hard, meaningful engineering problems at the intersection of AI, 3D, XR, and web tech.
● Build from day zero – architecture, team, and culture.
● Path to CTO as the company scales.
● High autonomy to drive technical decisions.
● Direct founder collaboration on product vision.
● High ownership, high-growth environment.
● Backed by global leaders: Microsoft, Google, NVIDIA, AWS.
Location & Work Culture:
● Location: HSR Layout, Bengaluru
● Schedule: 6 days a week (5 days in office, Saturdays WFH)
● Culture: High-intensity, high-integrity, engineering-first
● Team: Young, ambitious, technically strong
Job Details
- Job Title: Full Stack Engineer
- Industry: SaaS
- Function: Information Technology
- Experience Required: 5-7 years
- Working Days: 6 days
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: TypeScript, Node.js, MongoDB, RESTful APIs, React.js
Criteria
- Candidate should have at least 4 years of professional experience as a Full Stack Engineer.
- Hands-on experience with both React.js and Node.js.
- Solid understanding of MongoDB.
- Experience with RESTful APIs.
- Should be from a startup or scale-up company.
- Good experience in TypeScript.
- Strong understanding of asynchronous programming patterns.
- Preferred candidates from SaaS/Software/IT Services startups or scale-up companies.
Job Description
The Role:
We’re looking for a Full Stack Engineer to build, scale, and maintain high-performance web applications for the company’s technology platforms. This role involves working across the stack (frontend, backend, and infrastructure) using modern JavaScript-based technologies.
You’ll collaborate closely with product managers, designers, and cross-functional engineering teams to deliver scalable, secure, and user-centric solutions. This role is ideal for someone who enjoys end-to-end ownership, technical problem-solving, and working in a fast-paced startup environment.
What You’ll Own
1. Full Stack Development
● Design, develop, test, and deploy robust and scalable web applications.
● Build and maintain server-side logic and microservices using Node.js, Express.js, and TypeScript.
● Contribute to frontend feature development and integration.
● Participate in feature planning, estimation, and execution.
2. Backend & API Engineering
● Design and develop RESTful APIs and backend services.
● Implement asynchronous workflows and scalable microservice architectures.
● Ensure performance, reliability, and security of backend systems.
● Implement authentication, authorization, and data protection best practices.
3. Database Design & Optimization
● Design and manage MongoDB schemas using Mongoose.
● Optimize queries and database performance for scale.
● Ensure data integrity and efficient data access patterns.
4. Frontend Collaboration & Integration
● Collaborate with frontend developers to integrate React components and APIs seamlessly.
● Ensure responsive, high-performing application behavior.
5. System Design & Scalability
● Contribute to system architecture and technical design discussions.
● Design scalable, maintainable, and future-ready solutions.
● Optimize applications for speed and scalability.
6. Product & Cross-Functional Collaboration
● Work closely with product and design teams to deliver high-quality features in rapid iterations.
● Participate in the full development lifecycle—from concept to deployment and maintenance.
7. Code Quality & Best Practices
● Write clean, testable, and maintainable code.
● Follow Git-based version control and code review best practices.
● Contribute to improving engineering standards and workflows.
What We’re Looking For
Must-Haves
● 4+ years of professional experience as a Full Stack Engineer or similar role.
● Strong proficiency in JavaScript and TypeScript.
● Hands-on experience with Node.js and Express.js.
● Solid understanding of MongoDB and Mongoose.
● Experience building and consuming RESTful APIs and microservices.
● Strong understanding of asynchronous programming patterns.
● Good grasp of system design principles and application architecture.
● Experience with Git and version control best practices.
● Bachelor’s degree in Computer Science, Engineering, or a related field.
Good-to-Have / Preferred
● Frontend development experience with React.js.
● Exposure to Three.js or similar 3D/visualization libraries.
● Experience with cloud platforms (AWS, GCP, Azure – EC2, S3, Lambda).
● Knowledge of Docker and containerization workflows.
● Experience with testing frameworks (Jest, Mocha, etc.).
● Familiarity with CI/CD pipelines and automated deployments.
Tools You’ll Use
● Backend: Node.js, Express.js, TypeScript
● Frontend: React.js (preferred)
● Database: MongoDB, Mongoose
● Version Control: Git, GitHub / GitLab
● Cloud & DevOps: AWS / GCP / Azure, Docker
● Collaboration: Google Workspace, Notion, Slack
Key Metrics You’ll Own
● Code quality, performance, and scalability
● Timely delivery of features and releases
● System reliability and reduction in production issues
● Contribution to architectural improvements
Why Company
● Work on impactful, product-driven tech platforms.
● High-ownership role with end-to-end engineering exposure.
● Opportunity to work with modern technologies and evolving architectures.
● Collaborative startup culture with strong learning and growth opportunities.
Job Details
- Job Title: Senior Backend Engineer
- Industry: SaaS
- Function: Information Technology
- Experience Required: 5-8 years
- Working Days: 6 days a week (5 days in office, Saturdays WFH)
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: AWS, Node.js, RESTful APIs, NoSQL
Criteria
· Minimum 5+ years in backend engineering with strong system design expertise
· Experience building scalable systems from scratch
· Expert-level proficiency in Node.js
· Deep understanding of distributed systems
· Strong NoSQL design skills
· Hands-on AWS cloud experience
· Proven leadership and mentoring capability
· Preferred candidates from SaaS/Software/IT Services startups or scale-up companies
Job Description
What You’ll Build:
1. System Architecture & Design
● Architect highly scalable backend systems from the ground up
● Define technology choices: frameworks, databases, queues, caching layers
● Evaluate microservices vs monoliths based on product stage
● Design REST, GraphQL, and real-time WebSocket APIs
● Build event-driven systems for asynchronous processing
● Architect multi-tenant systems with strict data isolation
● Maintain architectural documentation and technical specs
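The event-driven asynchronous processing mentioned above can be sketched minimally with an in-process queue and a pool of workers. This is an illustrative plain-Python sketch only (the event name `asset.uploaded` is invented); a production system of the kind described here would use a durable broker such as Kafka, RabbitMQ, or SQS rather than `asyncio.Queue`:

```python
import asyncio

# Producers push events onto a queue; workers consume them concurrently.
# An in-process asyncio.Queue stands in for a durable message broker.

async def worker(name: str, queue: asyncio.Queue, results: list) -> None:
    while True:
        event = await queue.get()
        if event is None:          # sentinel: shut this worker down
            queue.task_done()
            break
        # "Process" the event; a real handler would call a service or DB.
        results.append((name, event["type"], event["payload"]))
        queue.task_done()

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    workers = [asyncio.create_task(worker(f"w{i}", queue, results))
               for i in range(2)]
    for i in range(4):             # producer side: emit four events
        await queue.put({"type": "asset.uploaded", "payload": i})
    for _ in workers:              # one shutdown sentinel per worker
        await queue.put(None)
    await queue.join()             # wait until every item is processed
    await asyncio.gather(*workers)
    return results

if __name__ == "__main__":
    processed = asyncio.run(main())
    print(len(processed))  # 4 events processed
```

The same shape (consume, handle, acknowledge) carries over directly when the queue is replaced by a broker client.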
2. Core Backend Services
● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions
● Create 3D asset processing pipelines for uploads, conversions, and optimization
● Develop distributed job workers for CPU/GPU-intensive tasks
● Build authentication/authorization systems (RBAC)
● Implement billing, subscription, and usage metering
● Build secure webhook systems and third-party integration APIs
● Create real-time collaboration features via WebSockets/SSE
3. Data Architecture & Databases
● Design scalable schemas for 3D metadata, XR sessions, and analytics
● Model complex product catalogs with variants and hierarchies
● Implement Redis-based caching strategies
● Build search and indexing systems (Elasticsearch/Algolia)
● Architect ETL pipelines and data warehouses
● Implement sharding, partitioning, and replication strategies
● Design backup, restore, and disaster recovery workflows
4. Scalability & Performance
● Build systems designed for 10x–100x traffic growth
● Implement load balancing, autoscaling, and distributed processing
● Optimize API response times and database performance
● Implement global CDN delivery for heavy 3D assets
● Build rate limiting, throttling, and backpressure mechanisms
● Optimize storage and retrieval of large 3D files
● Profile and improve CPU, memory, and network performance
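Of the rate-limiting and throttling mechanisms listed above, a token bucket is one common flavor. A minimal sketch (illustrative only; a real deployment would typically back this with Redis so limits are shared across instances):

```python
import time

# Token-bucket rate limiter: tokens refill at `rate` per second up to
# `capacity`; each request consumes one token or is rejected.

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

if __name__ == "__main__":
    bucket = TokenBucket(rate=10, capacity=5)  # 5-burst, 10 req/s sustained
    decisions = [bucket.allow() for _ in range(8)]
    print(decisions.count(True))  # 5 allowed from the initial burst
```

Backpressure is the complementary mechanism: instead of rejecting, the producer is slowed (e.g., a bounded queue whose `put` blocks when full).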
5. Infrastructure & DevOps
● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)
● Build CI/CD pipelines for automated deployments and rollbacks
● Use IaC tools (Terraform/CloudFormation) for infra provisioning
● Set up monitoring, logging, and alerting systems
● Use Docker + Kubernetes for container orchestration
● Implement security best practices for data, networks, and secrets
● Define disaster recovery and business continuity plans
6. Integration & APIs
● Build integrations with Shopify, WooCommerce, Magento
● Design webhook systems for real-time events
● Build SDKs, client libraries, and developer tools
● Integrate payment gateways (Stripe, Razorpay)
● Implement SSO and OAuth for enterprise customers
● Define API versioning and lifecycle/deprecation strategies
7. Data Processing & Analytics
● Build analytics pipelines for engagement, conversions, and XR performance
● Process high-volume event streams at scale
● Build data warehouses for BI and reporting
● Develop real-time dashboards and insights systems
● Implement analytics export pipelines and platform integrations
● Enable A/B testing and experimentation frameworks
● Build personalization and recommendation systems
Technical Stack:
1. Backend Languages & Frameworks
● Primary: Node.js (Express, NestJS), Python (FastAPI, Django)
● Secondary: Go, Java/Kotlin (Spring)
● APIs: REST, GraphQL, gRPC
2. Databases & Storage
● SQL: PostgreSQL, MySQL
● NoSQL: MongoDB, DynamoDB
● Caching: Redis, Memcached
● Search: Elasticsearch, Algolia
● Storage/CDN: AWS S3, CloudFront
● Queues: Kafka, RabbitMQ, AWS SQS
3. Cloud & Infrastructure:
● Cloud: AWS (primary), GCP/Azure (nice to have)
● Compute: EC2, Lambda, ECS, EKS
● Infrastructure: Terraform, CloudFormation
● CI/CD: GitHub Actions, Jenkins, CircleCI
● Containers: Docker, Kubernetes
4. Monitoring & Operations
● Monitoring: Datadog, New Relic, CloudWatch
● Logging: ELK Stack, CloudWatch Logs
● Error Tracking: Sentry, Rollbar
● APM tools
5. Security & Auth
● Auth: JWT, OAuth 2.0, SAML
● Secrets: AWS Secrets Manager, Vault
● Security: Encryption (at rest/in transit), TLS/SSL, IAM
What We’re Looking For:
1. Must-Haves
● 5+ years in backend engineering with strong system design expertise
● Experience building scalable systems from scratch
● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)
● Deep understanding of distributed systems and microservices
● Strong SQL/NoSQL design skills with performance optimization
● Hands-on AWS cloud experience
● Ability to write high-quality production code daily
● Experience building and scaling RESTful APIs
● Strong understanding of caching, sharding, horizontal scaling
● Solid security and best-practice implementation experience
● Proven leadership and mentoring capability
2. Highly Desirable
● Experience with large file processing (3D, video, images)
● Background in SaaS, multi-tenancy, or e-commerce
● Experience with real-time systems (WebSockets, streams)
● Knowledge of ML/AI infrastructure
● Experience with HA systems, DR planning
● Familiarity with GraphQL, gRPC, event-driven systems
● DevOps/infrastructure engineering background
● Experience with XR/AR/VR backend systems
● Open-source contributions or technical writing
● Prior senior technical leadership experience
Technical Challenges You’ll Solve:
● Designing large-scale 3D asset processing pipelines
● Serving XR content globally with ultra-low latency
● Scaling from thousands to millions of daily requests
● Efficiently handling CPU/GPU-heavy workloads
● Architecting multi-tenancy with complete data isolation
● Managing billions of analytics events at scale
● Building future-proof APIs with backward compatibility
Why Company:
● Architectural Ownership: Build foundational systems from scratch
● Deep Technical Work: Solve distributed systems and scaling challenges
● Hands-On Impact: Design and code mission-critical infrastructure
● Diverse Problems: APIs, infra, data, ML, XR, asset processing
● Massive Scale Opportunity: Build systems for exponential growth
● Modern Stack and best practices
● Product Impact: Your architecture directly powers millions of users
● Leadership Opportunity: Shape engineering culture and direction
● Learning Environment: Stay at the forefront of backend engineering
● Backed by AWS, Microsoft, Google
Location & Work Culture:
● Location: Bengaluru
● Schedule: 6 days a week (5 days in office, Saturdays WFH)
● Culture: Builder mindset, strong ownership, technical excellence
● Team: Small, highly skilled backend and infra team
● Resources: AWS credits, latest tooling, learning budget
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.
📍 Position: IT Intern
👩💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)
🎓 Qualification: B.Tech (IT) / M.Tech (IT) only
📌 Mode: Remote (WFH)
⏳ Shift: Willingness to work in night/rotational shifts
🗣 Communication: Excellent English
Key Responsibilities:
- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.
- Support user account management activities using Microsoft Entra ID (formerly Azure AD), Active Directory, and Microsoft 365.
- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).
- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.
- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.
- Help create and update technical documentation and knowledge base articles.
- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.
💻 Technical Requirements:
- Laptop with an i5 or higher processor
- Reliable internet connectivity with 100 Mbps speed

Global digital transformation solutions provider.
JOB DETAILS:
* Job Title: Lead II - Software Engineering - AWS, Apache Spark (PySpark/Scala), Apache Kafka
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 5-8 years
* Location: Hyderabad
Job Summary
We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data platforms. The role involves working with large-scale batch and real-time data processing systems, collaborating with cross-functional teams, and ensuring data reliability, security, and performance across the data lifecycle.
Key Responsibilities
ETL Pipeline Development & Optimization
- Design, develop, and maintain complex end-to-end ETL pipelines for large-scale data ingestion and processing.
- Optimize data pipelines for performance, scalability, fault tolerance, and reliability.
Big Data Processing
- Develop and optimize batch and real-time data processing solutions using Apache Spark (PySpark/Scala) and Apache Kafka.
- Ensure fault-tolerant, scalable, and high-performance data processing systems.
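As a toy illustration of the batch-aggregation work described above, here is a per-key count/sum in plain Python standing in for a PySpark job (the field names `user` and `bytes` are invented for the example):

```python
from collections import defaultdict

# Toy batch aggregation: count events and sum a value per key -- the kind
# of groupBy/agg a PySpark job expresses over a DataFrame.

def aggregate(events: list) -> dict:
    out = defaultdict(lambda: {"count": 0, "bytes": 0})
    for e in events:
        agg = out[e["user"]]
        agg["count"] += 1
        agg["bytes"] += e["bytes"]
    return dict(out)

if __name__ == "__main__":
    batch = [
        {"user": "a", "bytes": 100},
        {"user": "b", "bytes": 50},
        {"user": "a", "bytes": 25},
    ]
    print(aggregate(batch))
    # {'a': {'count': 2, 'bytes': 125}, 'b': {'count': 1, 'bytes': 50}}
```

In PySpark the equivalent would be roughly `df.groupBy("user").agg(F.count("*"), F.sum("bytes"))`; Spark's value is running this same logic partitioned across a cluster with fault tolerance.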
Cloud Infrastructure Development
- Build and manage scalable, cloud-native data infrastructure on AWS.
- Design resilient and cost-efficient data pipelines adaptable to varying data volume and formats.
Real-Time & Batch Data Integration
- Enable seamless ingestion and processing of real-time streaming and batch data sources (e.g., AWS MSK).
- Ensure consistency, data quality, and a unified view across multiple data sources and formats.
Data Analysis & Insights
- Partner with business teams and data scientists to understand data requirements.
- Perform in-depth data analysis to identify trends, patterns, and anomalies.
- Deliver high-quality datasets and present actionable insights to stakeholders.
CI/CD & Automation
- Implement and maintain CI/CD pipelines using Jenkins or similar tools.
- Automate testing, deployment, and monitoring to ensure smooth production releases.
Data Security & Compliance
- Collaborate with security teams to ensure compliance with organizational and regulatory standards (e.g., GDPR, HIPAA).
- Implement data governance practices ensuring data integrity, security, and traceability.
Troubleshooting & Performance Tuning
- Identify and resolve performance bottlenecks in data pipelines.
- Apply best practices for monitoring, tuning, and optimizing data ingestion and storage.
Collaboration & Cross-Functional Work
- Work closely with engineers, data scientists, product managers, and business stakeholders.
- Participate in agile ceremonies, sprint planning, and architectural discussions.
Skills & Qualifications
Mandatory (Must-Have) Skills
- AWS Expertise
- Hands-on experience with AWS Big Data services such as EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, and EC2.
- Strong understanding of cloud-native data architectures.
- Big Data Technologies
- Proficiency in PySpark or Scala Spark and SQL for large-scale data transformation and analysis.
- Experience with Apache Spark and Apache Kafka in production environments.
- Data Frameworks
- Strong knowledge of Spark DataFrames and Datasets.
- ETL Pipeline Development
- Proven experience in building scalable and reliable ETL pipelines for both batch and real-time data processing.
- Database Modeling & Data Warehousing
- Expertise in designing scalable data models for OLAP and OLTP systems.
- Data Analysis & Insights
- Ability to perform complex data analysis and extract actionable business insights.
- Strong analytical and problem-solving skills with a data-driven mindset.
- CI/CD & Automation
- Basic to intermediate experience with CI/CD pipelines using Jenkins or similar tools.
- Familiarity with automated testing and deployment workflows.
Good-to-Have (Preferred) Skills
- Knowledge of Java for data processing applications.
- Experience with NoSQL databases (e.g., DynamoDB, Cassandra, MongoDB).
- Familiarity with data governance frameworks and compliance tooling.
- Experience with monitoring and observability tools such as AWS CloudWatch, Splunk, or Dynatrace.
- Exposure to cost optimization strategies for large-scale cloud data platforms.
Skills: Big Data, Scala Spark, Apache Spark, ETL pipeline development
---
Notice period: 0 to 15 days only
Job stability is mandatory
Location: Hyderabad
Note: If a candidate is a short joiner, based in Hyderabad, and fits within the approved budget, we will proceed with an offer.
F2F Interview: 14th Feb 2026
Work mode: Hybrid (3 days in office)
• Minimum 4+ years of years
• Experience in designing, developing, and maintain backend services using C# 12 and .NET 8 or .NET 9
• Experience in building and operating cloud native and serverless applications on AWS
• Experience in developing and integrating services using AWS lambda, API Gateway , dynamo DB, Eventbridge, CloudWatch, SQS, SNS, Kinesis, Secret Manager, S3 storage, server architectural models etc.
Experience in integrating services using AWS SDK
• Should be cognizant of the OMS paradigms including Inventory Management, Inventory publish, supply feed processing, control mechanisms, ATP publish, Order Orchestration, workflow set up and customizations, integrations with tax, AVS, payment engines, sourcing algorithms and managing reservations with back orders, schedule mechanisms, flash sales management etc.
• Should have a decent End to End knowledge of various Commerce subsystems which include Storefront, Core Commerce back end, Post Purchase processing, OMS, Store / Warehouse Management processes, Supply Chain and Logistic processes. This is to ascertain candidates knowhow on the overall Retail landscape of any customer.
• Strong knowledge of querying Oracle DB and SQL Server
• Able to read, write, and manage PL/SQL procedures in Oracle
• Strong debugging, performance tuning, and problem-solving skills
• Experience with event-driven and microservices architectures
About Us:
MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.
Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
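The pytest requirement above can be illustrated with a minimal sketch. The function and test names are hypothetical; the point is that plain `assert`-based tests like this are discovered and run by pytest without extra boilerplate.

```python
# normalize() is a hypothetical utility under test; test_normalize()
# follows pytest conventions (test_ prefix, bare assert statements).
def normalize(scores):
    """Scale a list of positive numbers so they sum to 1."""
    total = sum(scores)
    return [s / total for s in scores]

def test_normalize():
    result = normalize([1, 1, 2])
    assert result == [0.25, 0.25, 0.5]
    assert abs(sum(result) - 1.0) < 1e-9
```

Running `pytest` in the containing directory would collect and execute `test_normalize` automatically.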
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
JOB DETAILS:
* Job Title: Tester III - Software Testing (Automation testing + Python + AWS)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4–10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Develop, maintain, and execute automation test scripts using Python.
- Build reliable and reusable test automation frameworks for web and cloud-based applications.
- Work with AWS cloud services for test execution, environment management, and integration needs.
- Perform functional, regression, and integration testing as part of the QA lifecycle.
- Analyze test failures, identify root causes, raise defects, and collaborate with development teams.
- Participate in requirement review, test planning, and strategy discussions.
- Contribute to CI/CD setup and integration of automation suites.
Required Experience:
- Strong hands-on experience in Automation Testing.
- Proficiency in Python for automation scripting and framework development.
- Understanding and practical exposure to AWS services (Lambda, EC2, S3, CloudWatch, or similar).
- Good knowledge of QA methodologies, SDLC/STLC, and defect management.
- Familiarity with automation tools/frameworks (e.g., Selenium, PyTest).
- Experience with Git or other version control systems.
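A small sketch of the data-driven style this role calls for: one function exercised against a table of cases, in the spirit of `pytest.mark.parametrize`. The function and case data are illustrative, not from any real suite.

```python
# Hypothetical function under test: maps HTTP status codes to categories.
def status_category(code):
    if 200 <= code < 300:
        return "success"
    if 400 <= code < 500:
        return "client_error"
    return "other"

# Data-driven regression cases, analogous to pytest.mark.parametrize rows.
CASES = [
    (200, "success"),
    (201, "success"),
    (404, "client_error"),
    (503, "other"),
]

def test_status_category():
    for code, expected in CASES:
        assert status_category(code) == expected, f"{code} -> {expected}"

if __name__ == "__main__":
    test_status_category()
    print("all cases passed")
```

Keeping cases in a plain data table makes it cheap to extend regression coverage as defects are found.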
Good to Have:
- API testing experience (REST, Postman, REST Assured).
- Knowledge of Docker/Kubernetes.
- Exposure to Agile/Scrum environment.
Skills: Automation testing, Python, Java, ETL, AWS
Job Details
- Job Title: Software Developer (Python, React/Vue)
- Industry: Technology
- Experience Required: 2-4 years
- Working Days: 5 days/week
- Job Location: Remote working
- CTC Range: Best in Industry
Review Criteria
- Strong Full stack/Backend engineer profile
- 2+ years of hands-on experience as a full stack developer (backend-heavy)
- (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
- (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
- (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
- (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
- (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
- (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
- Product companies (B2B SaaS preferred)
Preferred
- Preferred (Location) - Mumbai
- Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
- Preferred (Education): B.Tech from Tier 1, Tier 2 institutes
Role & Responsibilities
This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.
You will:
- Build and own features end-to-end — from design → deployment → scale.
- Architect scalable, loosely coupled systems powering AI-native workflows.
- Create robust integrations with 3rd-party systems.
- Push boundaries on reliability, performance, and automation.
- Write clean, tested, secure code → and continuously improve it.
- Collaborate directly with Founders & senior engineers in a high-trust environment.
Our Tech Arsenal:
- We believe in always using the sharpest tools for the job. To that end, we try to remain tech-agnostic and leave the choice of tools open to discussion, so each problem gets solved in the most robust and quickest way.
- That said, our bright team of engineers has already assembled a formidable arsenal of tools that helps us fortify our defense and stay on the offensive. Take a look at the tech stack we already use.
REVIEW CRITERIA:
MANDATORY:
- Strong Hands-On AWS Cloud Engineering / DevOps Profile
- Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
- Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Mandatory (Infrastructure as a code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
- Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
- Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
- Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
- Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
ROLE & RESPONSIBILITIES:
We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.
KEY RESPONSIBILITIES:
- Operate and support AWS production environments across multiple accounts
- Manage infrastructure using Terraform and support CI/CD pipelines
- Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
- Build and manage Docker images and push to Amazon ECR
- Monitor systems using CloudWatch and third-party tools; respond to incidents
- Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
- Assist with cost optimization, tagging, and governance standards
- Automate operational tasks using Python, Lambda, and Systems Manager
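The "automate operational tasks using Python" and "tagging and governance" responsibilities above could combine in a script like the following sketch. The `CostCenter` tag key is a hypothetical governance standard; the boto3 call assumes configured AWS credentials.

```python
REQUIRED_TAG = "CostCenter"  # hypothetical cost-allocation tag key

def missing_required_tag(instances, key=REQUIRED_TAG):
    """Return IDs of instances (describe_instances dicts) lacking `key`."""
    flagged = []
    for inst in instances:
        tags = {t["Key"] for t in inst.get("Tags", [])}
        if key not in tags:
            flagged.append(inst["InstanceId"])
    return flagged

if __name__ == "__main__":
    import boto3  # AWS SDK; requires configured credentials

    ec2 = boto3.client("ec2")
    pages = ec2.get_paginator("describe_instances").paginate()
    instances = [
        inst
        for page in pages
        for reservation in page["Reservations"]
        for inst in reservation["Instances"]
    ]
    print(missing_required_tag(instances))
```

Keeping the filter logic separate from the AWS call makes it unit-testable without credentials, and the same pattern extends to any tagging-compliance check run from Lambda or Systems Manager.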
IDEAL CANDIDATE:
- Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Experience with Terraform and Git-based workflows
- Hands-on experience with Kubernetes / EKS
- Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
- Scripting experience in Python or Bash
- Understanding of monitoring, incident management, and cloud security basics
NICE TO HAVE:
- AWS Associate-level certifications
- Experience with Karpenter, Prometheus, New Relic
- Exposure to FinOps and cost optimization practices
About Role
We are looking for a hands-on Python Engineer with strong experience in backend development, AI-driven systems, and cloud infrastructure. The ideal candidate should be comfortable working across Python services, AI/ML pipelines, and cloud-native environments, and capable of building production-grade, scalable systems.
This role offers high ownership, exposure to real-world AI systems, and long-term growth, making it ideal for engineers who want to build meaningful products rather than just features.
Key Responsibilities
- Design, develop, and maintain scalable backend services using Python
- Build APIs and services using FastAPI, Flask, or Django
- Ensure performance, reliability, and scalability of backend systems
- Integrate AI/ML models into production systems (model inference, automation)
- Build and maintain AI pipelines for data processing and inference
- Deploy and manage applications on AWS, with exposure to GCP and Azure
- Implement CI/CD pipelines, containerization, and cloud deployments
- Collaborate with product, frontend, and AI teams on end-to-end delivery
- Optimize cloud infrastructure for cost, performance, and reliability
- Collaborate with product, frontend, and AI teams on end-to-end delivery
- Follow best practices for security, monitoring, and logging
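The "build and maintain AI pipelines for data processing and inference" responsibility above can be sketched as simple stage composition. All stage names here are hypothetical stand-ins; a real pipeline would swap `infer` for an actual model call.

```python
from functools import reduce

def pipeline(*stages):
    """Compose processing stages left-to-right into a single callable."""
    return lambda x: reduce(lambda acc, stage: stage(acc), stages, x)

# Hypothetical stages for a text-inference pipeline.
def clean(text):
    return text.strip().lower()

def tokenize(text):
    return text.split()

def infer(tokens):
    # Stand-in for model inference; returns a token count only.
    return {"tokens": len(tokens)}

process = pipeline(clean, tokenize, infer)
result = process("  Hello AI World  ")  # {'tokens': 3}
```

Each stage stays independently testable, and new steps (validation, batching, post-processing) slot in without touching the others.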
Required Qualifications
- 2–4 years of professional experience in Python development
- Strong understanding of backend frameworks: FastAPI, Flask, Django
- Hands-on experience integrating AI/ML systems into applications
- Solid experience with AWS (EC2, S3, Lambda, RDS, IAM)
- Exposure to Google Cloud Platform (GCP) and Microsoft Azure
- Experience with Docker and CI/CD workflows
- Understanding of scalable system design principles
- Strong problem-solving and debugging skills
- Ability to work collaboratively in a product-driven environment
Perks and Benefits
- Work in a Nikhil Kamath-funded startup
- ₹3 – ₹4.6 LPA with ESOPs linked to performance and tenure
- Opportunity to build long-term wealth through ESOP participation
- Work on production-scale AI systems used in real-world applications
- Hands-on experience with AWS, GCP, and Azure architectures
- Work with a team that values clean engineering, experimentation, and execution
- Exposure to modern backend frameworks, AI pipelines, and DevOps practices
- High autonomy, fast decision-making, and real ownership of features and systems
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
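The "Python scripts for automation, monitoring, and operational tasks" responsibility above might look like this sketch: a pure function that flags databases whose latest backup snapshot is stale. The snapshot schema and threshold are assumptions for illustration.

```python
def overdue_backups(snapshots, now, max_age_hours=24):
    """Given [{'db': name, 'taken_at': epoch_seconds}, ...], return the
    sorted names of databases whose newest snapshot is older than
    max_age_hours relative to `now` (epoch seconds)."""
    latest = {}
    for snap in snapshots:
        db = snap["db"]
        latest[db] = max(latest.get(db, 0), snap["taken_at"])
    cutoff = now - max_age_hours * 3600
    return sorted(db for db, taken_at in latest.items() if taken_at < cutoff)
```

In practice a wrapper would fetch snapshot metadata from the backup system and page an alerting channel with the result; keeping the check pure makes it trivial to unit-test.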
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
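A canary rollout like the one described above needs a deterministic way to pin a stable slice of users to the new version. A minimal hash-bucketing sketch (the percentage and routing labels are illustrative):

```python
import hashlib

def canary_route(user_id, canary_percent=5):
    """Deterministically route a fixed slice of users to the canary.
    The same user_id always lands in the same bucket, so a user does
    not flap between versions across requests."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return "canary" if bucket < canary_percent else "stable"
```

Ramping the rollout is then just raising `canary_percent`; users already in the canary slice stay there because the bucketing is stable.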
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
4. Security, Compliance & Risk Highlighting
- Conduct frequent risk assessments and identify vulnerabilities in:
  - Cloud architecture
  - Access policies (IAM)
  - Secrets & key management
  - Data flows & network exposure
- Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
  - Microservices
  - Caching layers
  - CDN distribution (CloudFront)
  - Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing.
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why Join company?
- Build and operate infrastructure powering millions of monthly users.
- Opportunity to shape DevOps culture and cloud architecture from the ground up.
- High-impact role in a fast-scaling Indian OTT product.
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods to deep learning along with employing LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
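The Data Preparation step above ("prepare, cleanse, and transform data for model training") commonly includes feature scaling. A dependency-free sketch of z-score standardization, one of the usual transforms:

```python
def standardize(column):
    """Z-score standardization for one numeric feature column:
    subtract the mean, divide by the population standard deviation."""
    n = len(column)
    mean = sum(column) / n
    variance = sum((x - mean) ** 2 for x in column) / n
    std = variance ** 0.5 or 1.0  # guard against a constant column
    return [(x - mean) / std for x in column]
```

In a real pipeline this would be a fitted transformer (e.g., scikit-learn's `StandardScaler`) so the same mean/std learned on training data are reused at inference time.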
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to BigTable), OpenSearch, and the Neo4j graph database
- Model Deployment and Monitoring: MLOps experience deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Amazon SageMaker, AWS Cloud Infrastructure (S3, EC2, Lambda), Docker and Kubernetes (EKS, ECS), SQL, AWS data (Redshift, Glue)
Skills: Machine Learning, MLOps, AWS Cloud, Redshift OR Glue, Kubernetes, SageMaker
******
Notice period - 0 to 15 days only
Location : Pune & Hyderabad only
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL and SQL, including query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
MUST-HAVES:
- Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
- Notice period - 0 to 15 days only
- Hybrid work mode- 3 days office, 2 days at home
SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS
ADDITIONAL GUIDELINES:
- Interview process: - 2 Technical round + 1 Client round
- 3 days in office, Hybrid model.
REVIEW CRITERIA:
MANDATORY:
- Strong Senior/Lead DevOps Engineer Profile
- Must have 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
- Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
- Must have solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
- Must have hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
- Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
- Must have experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
- Must have good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
- Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
- It's an individual contributor (IC) role
PREFERRED:
- Proficiency in scripting languages (Bash, Python) for automation and operational tasks.
- Strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
- Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
- Candidates from NCR region only (No outstation candidates).
ROLES AND RESPONSIBILITIES:
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
KEY RESPONSIBILITIES:
Cloud & Infrastructure as Code (IaC)-
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, secure access policies, and AWS Organizations.
Databases & Analytics:
- Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Experience with Apache Airflow and Spark.
IDEAL CANDIDATE:
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/Bash, Python, or similar) for automation.
- Bachelor's or Master's degree
- Effective communication skills
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.
Key Responsibilities:
- Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
- Design and deploy serverless applications using AWS Lambda and API Gateway.
- Build and manage RESTful APIs and microservices.
- Implement CI/CD pipelines for efficient and secure deployments.
- Work with Docker to containerize applications and manage container lifecycles.
- Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
- Design efficient database schemas and write optimized SQL queries for PostgreSQL.
- Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
- Write unit, integration, and performance tests to ensure code reliability and robustness.
- Monitor, troubleshoot, and optimize application performance in production environments.
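As a rough illustration of the serverless responsibilities above, here is a minimal sketch of an AWS Lambda handler in the shape expected behind API Gateway's proxy integration. The `/health` route and response payload are illustrative only, not part of any actual codebase.

```python
# Hedged sketch: a Lambda handler for API Gateway's proxy integration.
# The event carries the request path; the handler returns a statusCode/body
# pair that API Gateway translates into an HTTP response.
import json

def handler(event, context=None):
    path = event.get("path", "/")
    if path == "/health":
        # A simple liveness route, useful behind a load balancer or monitor.
        return {"statusCode": 200, "body": json.dumps({"status": "ok"})}
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
```

In a real deployment this function would be packaged and wired to an API Gateway route, with CloudWatch capturing its logs.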
Required Skills:
- Strong proficiency in Python and Python-based web frameworks.
- Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
- Sound knowledge of microservices architecture and asynchronous programming.
- Proficiency with PostgreSQL, including schema design and query optimization.
- Hands-on experience with Docker and containerized deployments.
- Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
- Familiarity with API documentation tools (Swagger/OpenAPI).
- Version control with Git.
Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
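As a hedged, in-memory illustration of the retrieval step in such a RAG pipeline: chunk text, score chunks against a query vector by cosine similarity, and return the top matches. A production system would use a token-aware splitter, a real embedding model, and a vector store such as MongoDB Atlas Vector Search; every name below is illustrative.

```python
# Minimal sketch of chunking + cosine-similarity retrieval for a RAG pipeline.
import math

def chunk_text(text, size=40):
    # Fixed-size character chunks; real pipelines split on tokens/sentences.
    return [text[i:i + size] for i in range(0, len(text), size)]

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, index, k=2):
    # index: list of (chunk, vector) pairs; returns the k most similar chunks.
    ranked = sorted(index, key=lambda cv: cosine(query_vec, cv[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]
```

The retrieved chunks would then be packed into the LLM prompt as context for chat generation.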
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.
Location: Malad, Mumbai (Work from Office)
6-day work week (1st and 3rd Saturdays off)
AWS Expertise: Minimum 2 years of experience working with AWS services like RDS, S3, EC2, and Lambda.
Roles and Responsibilities
1. Backend Development: Develop scalable and high-performance APIs and backend systems using Node.js. Write clean, modular, and reusable code following best practices. Debug, test, and optimize backend services for performance and scalability.
2. Database Management: Design and maintain relational databases using MySQL, PostgreSQL, or AWS RDS. Optimize database queries and ensure data integrity. Implement data backup and recovery plans.
3. AWS Cloud Services: Deploy, manage, and monitor applications using AWS infrastructure. Work with AWS services including RDS, S3, EC2, Lambda, API Gateway, and CloudWatch. Implement security best practices for AWS environments (IAM policies, encryption, etc.).
4. Integration and Microservices: Integrate third-party APIs and services. Develop and manage microservices architecture for modular application development.
5. Version Control and Collaboration: Use Git for code versioning and maintain repositories. Collaborate with front-end developers and project managers for end-to-end project delivery.
6. Troubleshooting and Debugging: Analyze and resolve technical issues and bugs. Provide maintenance and support for existing backend systems.
7. DevOps and CI/CD: Set up and maintain CI/CD pipelines. Automate deployment processes and ensure zero-downtime releases.
8. Agile Development:
Participate in Agile/Scrum ceremonies such as daily stand-ups, sprint planning, and retrospectives.
Deliver tasks within defined timelines while maintaining high quality.
Required Skills
Strong proficiency in Node.js and JavaScript/TypeScript.
Expertise in working with relational databases like MySQL/PostgreSQL and AWS RDS.
Proficient with AWS services including Lambda, S3, EC2, and API Gateway.
Experience with RESTful API design and GraphQL (optional).
Knowledge of containerization using Docker is a plus.
Strong problem-solving and debugging skills.
Familiarity with tools like Git, Jenkins, and Jira.
DevOps Engineer:
Tech stack: AWS, GitLab, Python, SNF
Mrproptek is on the lookout for a DevOps Engineer with a passion for cloud infrastructure, automation, and scalable systems. If you're ready to hit the ground running — we want you on our team ASAP!
Location: Mohali, Punjab (On-site)
Experience: 2+ Years
Skills We’re Looking For:
- Strong hands-on experience with AWS
- Expertise in GitLab CI/CD pipelines
- Python scripting proficiency
- Knowledge of SNF (Serverless and Functions) architecture
- Excellent communication and collaboration skills
- Immediate joiners preferred
At MrPropTek, we're redefining the future of property technology with innovative digital solutions. Join a team where your skills truly matter and your ideas shape what’s next.
Job Title : Python Developer – API Integration & AWS Deployment
Experience : 5+ Years
Location : Bangalore
Work Mode : Onsite
Job Overview :
We are seeking an experienced Python Developer with strong expertise in API development and AWS cloud deployment.
The ideal candidate will be responsible for building scalable RESTful APIs, automating power system simulations using PSS®E (psspy), and deploying automation workflows securely and efficiently on AWS.
Mandatory Skills : Python, FastAPI/Flask, PSS®E (psspy), RESTful API Development, AWS (EC2, Lambda, S3, EFS, API Gateway), AWS IAM, CloudWatch.
Key Responsibilities :
Python Development & API Integration :
- Design, build, and maintain RESTful APIs using FastAPI or Flask to interface with PSS®E.
- Automate simulations and workflows using the PSS®E Python API (psspy).
- Implement robust bulk case processing, result extraction, and automated reporting systems.
AWS Cloud Deployment :
- Deploy APIs and automation pipelines using AWS services such as EC2, Lambda, S3, EFS, and API Gateway.
- Apply cloud-native best practices to ensure reliability, scalability, and cost efficiency.
- Manage secure access control using AWS IAM, API keys, and implement monitoring using CloudWatch.
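The bulk case processing duty mentioned above can be sketched as a simple loop: iterate over case files, run a solver, and collect per-case results into a report. The solver here is a stand-in; a real implementation would call the PSS®E psspy API and ship results to S3, and all names are illustrative.

```python
# Hedged sketch of bulk power-flow case processing with result aggregation.
def run_case(case_name):
    # Stand-in for loading and solving one case; a real job would invoke
    # the PSS(R)E psspy API here and extract convergence data.
    return {"case": case_name, "converged": True, "iterations": 5}

def process_cases(case_names, solver=run_case):
    # Partition results by convergence so the report surfaces failures first.
    report = {"passed": [], "failed": []}
    for name in case_names:
        result = solver(name)
        bucket = "passed" if result["converged"] else "failed"
        report[bucket].append(result)
    return report
```

Exposing `process_cases` behind a FastAPI endpoint would give the RESTful interface the role describes.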
Required Skills :
- 5+ Years of professional experience in Python development.
- Hands-on experience with RESTful API development (FastAPI/Flask).
- Solid experience working with PSS®E and its psspy Python API.
- Strong understanding of AWS services, deployment, and best practices.
- Proficiency in automation, scripting, and report generation.
- Knowledge of cloud security and monitoring tools like IAM and CloudWatch.
Good to Have :
- Experience in power system simulation and electrical engineering concepts.
- Familiarity with CI/CD tools for AWS deployments.
Job Title : Full Stack Drupal Developer
Experience : Minimum 5 Years
Location : Hyderabad / Bangalore / Mumbai / Pune / Chennai / Gurgaon (Hybrid or On-site)
Notice Period : Immediate to 15 Days Preferred
Job Summary :
We are seeking a skilled and experienced Full Stack Drupal Developer with a strong background in Drupal (version 8 and above) for both front-end and back-end development. The ideal candidate will have hands-on experience in AWS deployments, Drupal theming and module development, and a solid understanding of JavaScript, PHP, and core Drupal architecture. Acquia certifications and contributions to the Drupal community are highly desirable.
Mandatory Skills :
Drupal 8+, PHP, JavaScript, Custom Module & Theming Development, AWS (EC2, Lightsail, S3, CloudFront), Acquia Certified, Drupal Community Contributions.
Key Responsibilities :
- Develop and maintain full-stack Drupal applications, including both front-end (theming) and back-end (custom module) development.
- Deploy and manage Drupal applications on AWS using services like EC2, Lightsail, S3, and CloudFront.
- Work with the Drupal theming layer and module layer to build custom and reusable components.
- Write efficient and scalable PHP code integrated with JavaScript and core JS concepts.
- Collaborate with UI/UX teams to ensure high-quality user experiences.
- Optimize performance and ensure high availability of applications in cloud environments.
- Contribute to the Drupal community and utilize contributed modules effectively.
- Follow best practices for code versioning, documentation, and CI/CD deployment processes.
Required Skills & Qualifications :
- Minimum 5 Years of hands-on experience in Drupal development (Drupal 8 onwards).
- Strong experience in front-end (theming, JavaScript, HTML, CSS) and back-end (custom module development, PHP).
- Experience with Drupal deployment on AWS, including services such as EC2, Lightsail, S3, and CloudFront.
- Proficiency in JavaScript, core JS concepts, and PHP coding.
- Acquia certifications such as:
- Drupal Developer Certification
- Site Management Certification
- Acquia Certified Developer (preferred)
- Experience with contributed modules and active participation in the Drupal community is a plus.
- Familiarity with version control (Git), Agile methodologies, and modern DevOps tools.
Preferred Certifications :
- Acquia Certified Developer.
- Acquia Site Management Certification.
- Any relevant AWS certifications are a bonus.
Position Title : Senior Database Administrator (DBA)
📍 Location : Bangalore (Near Silk Board)
🏢 Work Mode : Onsite, 5 Days a Week
💼 Experience : 6+ Years
⏱️ Notice Period : Immediate to 1 Month
Job Summary :
We’re looking for an experienced Senior DBA to manage and optimize databases like MySQL, MongoDB, PostgreSQL, Oracle, and Redis. You’ll ensure performance, security, and availability of databases across our systems and work closely with engineering teams for support and improvement.
Key Responsibilities :
- Manage and maintain MySQL, MongoDB, PostgreSQL, Oracle, and Redis databases.
- Handle backups, restores, upgrades, and replication.
- Optimize query performance and troubleshoot issues.
- Ensure database security and access control.
- Work on disaster recovery and high availability.
- Support development teams with schema design and tuning.
- Automate tasks using scripting (Python, Bash, etc.).
- Collaborate with DevOps and Cloud (AWS) teams.
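One small, concrete example of the backup automation described above is a retention-pruning rule: keep the N most recent backups and flag the rest for deletion. This is an illustrative sketch under assumed inputs, not a prescribed tool.

```python
# Hedged sketch: backup retention pruning by recency.
def prune(backups, keep=7):
    """backups: list of (name, timestamp) pairs.
    Returns (kept, to_delete): the `keep` newest backups and the remainder."""
    ordered = sorted(backups, key=lambda b: b[1], reverse=True)
    return ordered[:keep], ordered[keep:]
```

In practice the `to_delete` list would feed a scripted cleanup against the backup store (e.g., S3 object deletion), with the policy itself kept in version control.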
Must-Have Skills :
- 6+ Years as a DBA in production environments.
- Strong hands-on with MySQL, MongoDB, PostgreSQL, Oracle, Redis.
- Performance tuning and query optimization.
- Backup/recovery and disaster recovery planning.
- Experience with AWS (RDS/EC2).
- Scripting knowledge (Python/Bash).
- Good understanding of database security.
Good to Have :
- Experience with MSSQL.
- Knowledge of tools like pgAdmin, Compass, Workbench.
- Database certifications.
Mandatory (Experience 1) - Must have a minimum of 4+ years of experience in backend software development.
Mandatory (Experience 2) - Must have 4+ years of experience in backend development using Python (highly preferred), Java, or Node.js.
Mandatory (Experience 3) - Must have experience with cloud platforms such as AWS (highly preferred), GCP, or Azure.
Mandatory (Experience 4) - Must have experience with any database - MySQL / PostgreSQL / Postgres / Oracle / SQL Server / DB2 / SQL / MongoDB / Ne
Good understanding and experience of HTML / CSS / JavaScript.
Hands-on experience with ES6 / ES7 / ES8 features.
Thorough understanding of the Request Lifecycle (including Event Queue, Event Loop, Worker Threads, etc.).
Familiarity with security principles including SSL protocols, data encryption, XSS, CSRF.
Expertise in Web Services / REST APIs will be beneficial.
Proficiency in Linux and deployment on Linux are valuable.
Knowledge of ORMs like Sequelize and ODMs like Mongoose, and the ability to handle DB transactions, is a necessity.
Experience with Angular JS / React JS will be an added advantage.
Expertise with RDBMS like MySQL / PostgreSQL will be a plus.
Knowledge of AWS services like S3, EC2 will be helpful.
Understanding of Agile and CI/CD will be of value.
BACKEND DEVELOPER JOB DESCRIPTION
Job Title: Backend Developer - Node.js & MongoDB
Location: Hyderabad
Employment Type: Full-Time
Experience Required: 3–5 Years
About Us
Inncircles – THE INNGINEERING COMPANY
We are a forward-thinking construction-tech innovator building CRM solutions that manage crores of records with precision and speed. Our mission is to revolutionize the construction domain through scalable engineering and robust backend systems. Join us to solve complex challenges and shape the future of data-driven construction tech!
Job Description
We are hiring a Backend Developer with 3–5 years of hands-on experience in Node.js and MongoDB to design, optimize, and maintain high-performance backend systems. You will work on large-scale data processing, external integrations, and scalable architectures while ensuring best coding practices and efficient database design.
Key Responsibilities
Backend Development & Optimization
- Develop and maintain RESTful/GraphQL APIs using Node.js, adhering to best coding practices and reusable code structures.
- Write optimized MongoDB queries for collections with crores of records, ensuring efficient data retrieval and storage.
- Design MongoDB collections, implement indexing strategies, and optimize replica sets for performance and reliability.
Scalability & Performance
- Implement horizontal and vertical scaling strategies to handle growing data and traffic.
- Optimize database performance through indexing, aggregation pipelines, and query tuning.
External Integrations & Debugging
- Integrate third-party APIs (payment gateways, analytics tools, etc.) and SDKs seamlessly into backend systems.
- Debug and resolve complex issues in production environments with a systematic, data-driven approach.
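The large-collection query work described above usually comes down to writing aggregation pipelines that a compound index can serve. A hedged sketch follows; the field names are illustrative, and a compound index on (status, created_at) is assumed so the $match and $sort stages can use it.

```python
# Hedged sketch: build a MongoDB aggregation pipeline for paginated reads
# from a very large collection (crores of records).
def paged_pipeline(status, page=0, page_size=100):
    return [
        {"$match": {"status": status}},           # uses the leading index field
        {"$sort": {"created_at": -1}},            # served by the same compound index
        {"$skip": page * page_size},              # note: deep $skip degrades; range
        {"$limit": page_size},                    # queries on created_at scale better
        {"$project": {"_id": 1, "title": 1, "created_at": 1}},
    ]
```

The pipeline would be passed to a driver call such as `collection.aggregate(paged_pipeline("open"))`.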
AWS & Cloud Services
Work with AWS services like Lambda (serverless), SQS (message queuing), S3 (storage), and EC2 (compute) to build resilient and scalable solutions.
Collaboration & Best Practices
Collaborate with frontend teams to ensure smooth API integrations and data flow.
Document code, write unit/integration tests, and enforce coding standards.
Mandatory Requirements
3–5 years of professional experience in Node.js and MongoDB.
Expertise in:
- MongoDB: Collection design, indexing, aggregation pipelines, replica sets, and sharding.
- Node.js: Asynchronous programming, middleware, and API development (Express.js/Fastify).
- Query Optimization: Writing efficient queries for large datasets (crores of records).
- Strong debugging skills and experience in resolving production issues.
- Hands-on experience with external integrations (APIs, SDKs, webhooks).
- Knowledge of horizontal/vertical scaling techniques and performance tuning.
- Familiarity with AWS services (Lambda, SQS, S3, EC2).
Preferred Skills
- Experience with microservices architecture.
- Knowledge of CI/CD pipelines (GitLab CI, Jenkins).
- Understanding of Docker, Kubernetes, or serverless frameworks.
- Exposure to monitoring tools like Prometheus, Grafana, or New Relic.
Why Join Inncircles?
Solve large-scale data challenges in the construction domain.
Work on cutting-edge cloud-native backend systems.
Competitive salary, flexible work culture, and growth opportunities.
Apply Now:
If you’re passionate about building scalable backend systems and thrive in a data-heavy environment, share your resume and a GitHub/portfolio link showcasing projects with Node.js, MongoDB, and AWS integrations.
Inncircles – THE INNGINEERING COMPANY
📍 Hyderabad | 🚀 Building Tomorrow’s Tech Today
Job Title: DevOps Engineer
Location: Remote
Type: Full-time
About Us:
At Tese, we are committed to advancing sustainability through innovative technology solutions. Our platform empowers SMEs, financial institutions, and enterprises to achieve their Environmental, Social, and Governance (ESG) goals. We are looking for a skilled and passionate DevOps Engineer to join our team and help us build and maintain scalable, reliable, and efficient infrastructure.
Role Overview:
As a DevOps Engineer, you will be responsible for designing, implementing, and managing the infrastructure that supports our applications and services. You will work closely with our development, QA, and data science teams to ensure smooth deployment, continuous integration, and continuous delivery of our products. Your role will be critical in automating processes, enhancing system performance, and maintaining high availability.
Key Responsibilities:
- Infrastructure Management:
- Design, implement, and maintain scalable cloud infrastructure on platforms such as AWS, Google Cloud, or Azure.
- Manage server environments, including provisioning, monitoring, and maintenance.
- CI/CD Pipeline Development:
- Develop and maintain continuous integration and continuous deployment pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Automate deployment processes to ensure quick and reliable releases.
- Configuration Management and Automation:
- Implement infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Automate system configurations and deployments to improve efficiency and reduce manual errors.
- Monitoring and Logging:
- Set up and manage monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to track system performance and troubleshoot issues.
- Implement logging solutions to ensure effective incident response and system analysis.
- Security and Compliance:
- Ensure systems are secure and compliant with industry standards and regulations.
- Implement security best practices, including identity and access management, network security, and vulnerability assessments.
- Collaboration and Support:
- Work closely with development and QA teams to support application deployments and troubleshoot issues.
- Provide support for infrastructure-related inquiries and incidents.
Qualifications:
- Education:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Experience:
- 3-5 years of experience in DevOps, system administration, or related roles.
- Hands-on experience with cloud platforms such as AWS, Google Cloud Platform, or Azure.
- Technical Skills:
- Proficiency in scripting languages like Bash, Python, or Ruby.
- Strong experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of configuration management tools (Ansible, Puppet, Chef).
- Experience with CI/CD tools (Jenkins, GitLab CI/CD, CircleCI).
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK Stack).
- Understanding of networking concepts and security best practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced environment and manage multiple tasks.
Preferred Qualifications:
- Experience with infrastructure as code (IaC) tools like Terraform or CloudFormation.
- Knowledge of microservices architecture and serverless computing.
- Familiarity with database administration (SQL and NoSQL databases).
- Experience with Agile methodologies and working in a Scrum or Kanban environment.
- Passion for sustainability and interest in ESG initiatives.
Benefits:
- Competitive salary and benefits package, and performance bonuses.
- Flexible working hours and remote work options.
- Opportunity to work on impactful projects that promote sustainability.
- Professional development opportunities, including access to training and conferences.
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements.
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
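A minimal sketch of the transform step in such an ETL job: clean raw records before loading them into the warehouse. In practice this logic would run inside a PySpark job or an AWS Lambda handler orchestrated by Step Functions; the column names here are illustrative.

```python
# Hedged sketch of an ETL transform step: drop keyless rows, coerce types,
# and normalize string fields before the load phase.
def transform(records):
    cleaned = []
    for rec in records:
        if not rec.get("id"):
            continue  # drop rows missing the key field
        cleaned.append({
            "id": rec["id"],
            "amount": float(rec.get("amount", 0) or 0),      # coerce to numeric
            "country": (rec.get("country") or "unknown").strip().lower(),
        })
    return cleaned
```

The same shape translates directly to a PySpark `DataFrame` flow (filter, cast, lower) when the volume outgrows a single Lambda invocation.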
We are looking for an experienced Sr. DevOps Consultant Engineer to join our team. The ideal candidate should have at least 5 years of experience.
We are retained by a promising startup located in Silicon Valley, backed by a Fortune 50 firm, with veterans from firms such as Zscaler, Salesforce, and Oracle. The founding team has been part of three unicorns and two successful IPOs, and the company is well funded by Dell Technologies and Westwave Capital. The company is widely recognized as an industry innovator in the data privacy and security space, and is being built by proven cybersecurity executives who have successfully built and scaled high-growth security companies and led privacy programs.
Responsibilities:
- Develop and maintain infrastructure as code using tools like Terraform, CloudFormation, and Ansible
- Manage and maintain Kubernetes clusters on EKS and EC2 instances
- Implement and maintain automated CI/CD pipelines for microservices
- Optimize AWS costs by identifying cost-saving opportunities and implementing cost-effective solutions
- Implement best security practices for microservices, including vulnerability assessments, SOC2 compliance, and network security
- Monitor the performance and availability of our cloud infrastructure using observability tools such as Prometheus, Grafana, and Elasticsearch
- Implement backup and disaster recovery solutions for our microservices and databases
- Stay up to date with the latest AWS services and technologies and provide recommendations for improving our cloud infrastructure
- Collaborate with cross-functional teams, including developers, and product managers, to ensure the smooth operation of our cloud infrastructure
- Experience with large scale system design and scaling services is highly desirable
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- At least 5 years of experience in AWS DevOps and infrastructure engineering
- Expertise in Kubernetes management, Docker, EKS, EC2, Queues, Python Threads, Celery Optimization, Load balancers, AWS cost optimizations, Elasticsearch, Container management, and observability best practices
- Experience with SOC2 compliance and vulnerability assessment best practices for microservices
- Familiarity with AWS services such as S3, RDS, Lambda, and CloudFront
- Strong scripting skills in languages like Python, Bash, and Go
- Excellent communication skills and the ability to work in a collaborative team environment
- Experience with agile development methodologies and DevOps practices
- AWS certification (e.g. AWS Certified DevOps Engineer, AWS Certified Solutions Architect) is a plus.
Notice period : Can join within a month
We require a full-stack Senior SDE with a focus on backend microservices / modular monoliths and 3-4+ years of experience in the following:
- Bachelor’s or Master’s degree in Computer Science or equivalent industry technical skills
- Mandatory In-depth knowledge and strong experience in Python programming language.
- Expertise and significant work experience in Python with Fastapi and Async frameworks.
- Prior experience building Microservice and/or modular monolith.
- Should be an expert in Object-Oriented Programming and Design Patterns.
- Has knowledge and experience with SQLAlchemy/ORM, Celery, Flower, etc.
- Has knowledge and experience with Kafka / RabbitMQ, Redis.
- Experience in Postgres/ Cockroachdb.
- Experience in MongoDB/DynamoDB and/or Cassandra are added advantages.
- Strong experience in AWS services (e.g., EC2, ECS, Lambda, Step Functions, S3, SQS, Cognito) and/or equivalent Azure services preferred.
- Experience working with Docker required.
- Experience in socket.io added advantage
- Experience with CI/CD, e.g., GitHub Actions, preferred.
- Experience in version control tools Git etc.
This is one of the early positions for scaling up the Technology team. So culture-fit is really important.
- The role will require serious commitment, and someone with a similar mindset with the team would be a good fit. It's going to be a tremendous growth opportunity. There will be challenging tasks. A lot of these tasks would involve working closely with our AI & Data Science Team.
- We are looking for someone who has considerable expertise and experience on a low latency highly scaled backend / fullstack engineering stack. The role is ideal for someone who's willing to take such challenges.
- Coding Expectation – 70-80% of time.
- Has worked with enterprise solution company / client or, worked with growth/scaled startup earlier.
- Skills to work effectively in a distributed and remote team environment.
Role : Principal Devops Engineer
About the Client
The client is a product-based company building a platform using AI and ML technology for transportation and logistics. They also have a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment and decision-making skill
Job Description:
We are looking for a talented Full Stack Developer with a strong background in Node.js, React.js, and AWS to contribute to the development and maintenance of our web applications. As a Full Stack Developer, you will work closely with cross-functional teams to design, develop, and deploy scalable and high-performance software solutions.
Responsibilities:
Collaborate with product managers and designers to translate requirements into technical specifications and deliver high-quality software solutions.
Develop and maintain web applications using Node.js and React.js frameworks.
Write clean, efficient, and well-documented code to ensure the reliability and maintainability of the software.
Implement responsive user interfaces, ensuring a seamless user experience across different devices and platforms.
Integrate third-party APIs and services to enhance application functionality.
Design and optimize databases to ensure efficient data storage and retrieval.
Deploy and manage applications on AWS cloud infrastructure, utilizing services such as EC2, S3, Lambda, and API Gateway.
Monitor and troubleshoot application performance, identify and resolve issues proactively.
Conduct code reviews to maintain code quality standards and provide constructive feedback to team members.
Stay up to date with the latest trends and best practices in web development and cloud technologies.
Requirements:
Proven experience as a Full Stack Developer, working with Node.js and React.js in a professional setting.
Strong proficiency in JavaScript and familiarity with modern front-end frameworks and libraries.
Experience with AWS services, such as EC2, S3, Lambda, API Gateway, and CloudFormation.
Knowledge of database systems, both SQL and NoSQL, and the ability to design efficient data models.
Familiarity with version control systems (e.g., Git) and agile development methodologies.
Ability to write clean, efficient, and well-documented code, following best practices and coding standards.
Strong problem-solving skills and the ability to work effectively in a fast-paced environment.
Excellent communication and collaboration skills, with the ability to work well in a team.
Role - Lead- Backend Engineering
Work Mode- Hybrid
Location- Thane, Mumbai.
About the Company:
Ventura is an omnichannel trading and investment platform with a network of branches, sub-brokers and Digital Channels. Founded in 1994, the company is now entering the next phase of growth by pivoting to a digital-first approach and strengthening its direct-to-consumer franchise. The company has now carved out a separate fintech vertical tasked with digital transformation using cutting-edge technology and bringing in fresh talent.
Job Description:
We are looking for a Backend Engineering Lead to join a team of highly talented individuals from diverse backgrounds who solve real client problems at scale. We are looking for passionate techies with skills primarily around AWS and the latest tech stack who are aspiring for a fast-track career.
Join us if you like to:
· Build out a next-gen fintech product from ground 0
· Opportunity to influence the design of the product
· Flexible and Hybrid work environment running out of Slack
· Flat org structure
· Stay up-to-date on industry trends and emerging technologies
We’ll need you to bring:
· Bachelor's degree in Engineering or Master's degree in CS/ IT.
· 7+ years of experience
· Clean coding skills around C++/Python/NodeJS/Go.
· Knowledge of Redis.
· Experienced in SQL with Postgres; InfluxDB is good to have.
· Knowledge of NGINX or any other API gateway.
· Strong AWS skills, techies with certifications from AWS are particularly encouraged to apply - AWS API Gateway, Route53, Lambda, EC2, RDS, SQS, CloudWatch, Cognito, QuickSight
· Demonstrable experience writing testable code, working with Git, doing peer-level code reviews, daily standups, and generally championing software excellence.
Roles & Responsibilities:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Knowledge of Linux/Unix administration and Python/Shell scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Knowledge of deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, and caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
Description
Do you dream about code every night? If so, we’d love to talk to you about a new product that we’re making to enable delightful testing experiences at scale for development teams who build modern software solutions.
What You'll Do
Troubleshooting and analyzing technical issues raised by internal and external users.
Working with Monitoring tools like Prometheus / Nagios / Zabbix.
Experience developing automation in one or more technologies such as Terraform, Ansible, CloudFormation, Puppet, or Chef is preferred.
Monitor infrastructure alerts and take proactive action to avoid downtime and customer impacts.
Working closely with the cross-functional teams to resolve issues.
Test, build, design, and deploy applications, and maintain the continuous integration and continuous delivery process using tools like Jenkins, Maven, Git, etc.
Work in close coordination with the development and operations teams to ensure the application performs to the customer's expectations.
What you should have
Bachelor’s or Master’s degree in computer science or any related field.
3-6 years of experience with Linux/Unix and cloud computing techniques.
Familiarity with cloud and datacenter environments for enterprise customers.
Hands-on experience with Linux/Windows/macOS and Batch/AppleScript/Bash scripting.
Experience with various databases such as MongoDB, PostgreSQL, MySQL, MSSQL.
Familiar with AWS technologies like EC2, S3, Lambda, IAM, etc.
Must know how to choose the tools and technologies that best fit the business needs.
Experience in developing and maintaining CI/CD processes using tools like Git, GitHub, Jenkins etc.
Excellent organizational skills to adapt to a constantly changing technical environment
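Much of the automation scripting these roles call for reduces to small, robust helpers. A minimal Python sketch of retry with exponential backoff and jitter, a staple when calling flaky cloud APIs (the function names are illustrative and not tied to any particular SDK):

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Retry a flaky call with exponential backoff and full jitter.

    `operation` is any zero-argument callable, e.g. a wrapped cloud API call.
    `sleep` is injectable so tests can skip the real delays.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Full jitter: sleep a random fraction of the exponential delay.
            delay = base_delay * (2 ** attempt)
            sleep(random.uniform(0, delay))
```

A caller would wrap the flaky call in a lambda or function, e.g. `retry_with_backoff(lambda: client.describe_instances())`.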
Responsibilities
- Implement various development, testing, automation tools, and IT infrastructure
- Design, build and automate the AWS infrastructure (VPC, EC2, Networking, EMR, RDS, S3, ALB, CloudFront, etc.) using Terraform
- Manage end-to-end production workloads hosted on Docker and AWS
- Automate CI pipeline using Groovy DSL
- Deploy and configure Kubernetes clusters (EKS)
- Design and build a CI/CD Pipeline to deploy applications using Jenkins and Docker
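As a rough illustration of what a CI pipeline of the kind described above automates, here is a toy Python runner that executes named stages in order and fails fast, the way Jenkins stages do (a sketch only, not a substitute for Jenkins or its Groovy DSL; the stage names and commands are illustrative):

```python
import subprocess

def run_pipeline(stages):
    """Run named shell stages in order, stopping at the first failure.

    Each stage is a (name, command) pair; returns (name, returncode) pairs
    for the stages that actually ran.
    """
    results = []
    for name, command in stages:
        proc = subprocess.run(command, shell=True, capture_output=True, text=True)
        results.append((name, proc.returncode))
        if proc.returncode != 0:
            break  # fail fast, as a CI pipeline would
    return results
```

In a real setup each command would be a build, test, or deploy step, and the runner's job is done by Jenkins itself.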
Eligibility
- At least 8 years of proven experience in AWS-based DevOps/cloud engineering and implementations
- Expertise in all common AWS Cloud services like EC2, EKS, S3, VPC, Lambda, API Gateway, ALB, Redis, etc.
- Experience in deploying and managing production environments in Amazon AWS
- Strong experience in continuous integration and continuous deployment
- Knowledge of application build, deployment, and configuration using tools such as Jenkins
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Manage and patch servers running Unix-based operating systems like Ubuntu Linux.
- Write automation scripts and build infrastructure tools using Python/Ruby/Bash/Golang.
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash, etc.
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.
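Uptime and SLO work of the kind described above usually starts with a health endpoint that monitoring can poll. A self-contained Python sketch using only the standard library (the /healthz path is a common convention, not a HappyFox-specific API):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal health-check endpoint of the kind uptime monitoring polls."""
    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass

def start_server():
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: OS picks a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def check_health(port):
    """What a monitoring probe would do: GET the endpoint, parse the body."""
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/healthz") as resp:
        return resp.status, json.loads(resp.read())
```

A production health check would report dependency status (database, queue, cache) rather than a constant, but the HTTP mechanics are the same.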
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Implement consistent observability, deployment and IaC setups
- Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
- Hire/Mentor other Infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Lead infrastructure security audits
Requirements
- At least 7 years of experience in handling/building Production environments in AWS.
- At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Experience in security hardening of infrastructure, systems and services.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – Hands-on experience with Nginx, Postgres, Postfix, Redis or Mongo systems.
JD / Skills Sets
1. Good knowledge of Python
2. Good knowledge of MySQL, MongoDB
3. Design patterns
4. OOP
5. Automation
6. Web scraping
7. Redis queue
8. Basic knowledge of the finance domain is beneficial.
9. Git
10. AWS (EC2, RDS, S3)
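As a small illustration of the web-scraping skill listed above, here is a standard-library Python sketch that extracts links from HTML (a minimal example; real scrapers typically add network fetching, politeness delays, and error handling on top of this parsing step):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attributes from anchor tags, a core web-scraping step."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return every href found in the given HTML string, in document order."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

In practice the HTML would come from an HTTP fetch, and the extracted links would feed a Redis-backed work queue for the next crawl step.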
Job Title: PHP (Laravel) Developer
Experience: 2 to 7 years
Skills:
PHP: Laravel (MVC)
Database: MySQL
Added advantage:
Database: MongoDB
Server Hosting: AWS (knowledge of EC2, S3, RDS & Route53)
Good to have experience in ReactJS, NodeJS, VueJS
Communication skills: Must have good communication skills.
Requirements:
Understanding of MVC design patterns
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Knowledge of object-oriented PHP programming
In-depth knowledge of object-oriented PHP 7.x and Laravel 5/6+ PHP Framework
Experience with MVC, Entity Framework, Web Forms, Web API, business layer, and front-end technologies
Creating database schemas that represent and support business processes
Familiarity with SQL/NoSQL databases and their declarative query languages
In-depth knowledge of Git, Bitbucket, and related pipelines for continuous integration and continuous deployment
Creative and efficient problem-solving capability
Understanding of Agile development process
Developing rich and complex web applications efficiently, so that users can interact with the site or application smoothly.
Ability to understand technical documents like SRS, Design Document & Wireframes.
Personal Specifications –
- Proficiency in written and spoken English (must)
- Understanding of project development methodologies like Agile is preferred.
- Understand team development/Source code control
InnovationM is looking for a Java Developer with experience in Spring Boot, Microservices, and MongoDB to join our team. The ideal candidate should have hands-on experience in designing and developing REST APIs using Spring Boot, building microservices-based applications, working with MongoDB and have experience in deploying applications in AWS.
What you must be good at:
● Develop and maintain REST APIs using Spring Boot framework
● Build microservices-based applications using Spring Boot and related frameworks
● Design and develop data models and queries for MongoDB database
● Deploy applications in AWS using EC2, ECR, and other relevant AWS services
● Work in an Agile development environment
● Collaborate with cross-functional teams to deliver high-quality software
What you must have:
● 4+ years of experience in Java development
● Strong experience in Spring Boot and related frameworks
● Hands-on experience in building microservices-based applications
● Proficient in designing and developing REST APIs
● Strong experience in MongoDB, including data modeling and query optimization
● Good understanding of AWS services and deployment methodologies
● Experience working in an Agile development environment
● Proficient in Git and version control systems
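Document modeling for MongoDB, as this role requires, largely comes down to choosing between embedding and referencing. A small sketch using plain Python dicts as stand-in documents (field names and data are illustrative; the role itself would express this in Java with Spring Data):

```python
# Embedded model: order line items live inside the order document,
# so a single read fetches everything the order page needs.
order_embedded = {
    "_id": "order-1001",
    "customer": {"name": "Asha", "email": "asha@example.com"},
    "items": [
        {"sku": "A-1", "qty": 2, "price": 150},
        {"sku": "B-7", "qty": 1, "price": 900},
    ],
}

# Referenced model: customers are stored once and orders point at them,
# avoiding duplication when customer data is shared across many orders.
customers = {"cust-9": {"name": "Asha", "email": "asha@example.com"}}
order_referenced = {
    "_id": "order-1002",
    "customer_id": "cust-9",
    "items": [{"sku": "A-1", "qty": 2, "price": 150}],
}

def order_total(order):
    """Sum qty * price across line items; in MongoDB this computation could
    instead run server-side in an aggregation pipeline."""
    return sum(item["qty"] * item["price"] for item in order["items"])
```

The usual rule of thumb: embed data that is read together and owned by the parent; reference data that is shared or unbounded in size.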
- Seeking an individual with around 5+ years of experience.
- Must-have skills: Jenkins, Groovy, Ansible, Shell Scripting, Python, Linux Admin
- Deep Terraform and AWS knowledge to automate and provision EC2, EBS, and SQL Server; cost optimization; CI/CD pipelines using Jenkins. Serverless automation is a plus.
- Excellent writing and communication skills in English. Enjoy writing crisp and understandable documentation
- Comfortable programming in one or more scripting languages
- Enjoys tinkering with tooling and researching easier ways to handle systems. Strong awareness of build vs. buy.
Summary:
The Learner Company is an education start-up that designs personalized learning experiences by integrating them with the best of what technology offers. We are currently building an online learning engine to host adaptive online courses, simulations, and multiplayer games for institutional partners. We are now in the software development stage of the project.
We are looking for a full-stack developer to join our development team. The developer will be responsible for the overall development and implementation of front and back-end software applications. Their responsibilities will extend from designing system architecture to high-level programming, performance testing, and systems integration.
We are looking for an individual who is optimistic about technology and people, is open to and excited by new ideas, and considers themselves a life-long learner.
Responsibilities:
- Meeting with the software development team to define the scope and scale of software projects.
- Designing software system architecture.
- Choosing appropriate data structures and design patterns.
- Designing and implementing scalable web services, applications, and APIs.
- Developing and maintaining internal software tools.
- Writing low-level and high-level code.
- Troubleshooting and bug fixing.
- Identifying bottlenecks and improving software efficiency.
- Collaborating with the design team on developing micro-services.
- Writing technical documents.
Required Competencies:
- Bachelor’s degree in computer engineering or computer science.
- Previous experience as a full stack engineer.
- Advanced knowledge of front-end languages including HTML5, CSS, TypeScript, JavaScript, C++, jQuery, React.js and Next.js.
- Knowledge of relational database systems and SQL.
- Familiarity with AWS architecture and working knowledge of services like S3, SES, EC2, RDS and more.
- Proficient in back-end languages including Java, Python, Rails, Ruby, .NET, and PHP.
- Advanced troubleshooting skills.
- Familiarity with MS Word, Excel, PowerPoint, Notion, Veed.io, Linear, Intercom, Plateau, and Miro.
- A strong belief that a team as a whole is greater than the sum of its parts.
- Excellent leadership, communication, and organization skills
Experience Needed: 2+ Years
Location: Bengaluru
- Meeting with the software development team to define the scope and scale of software projects.
- Designing software system architecture.
- Choosing appropriate data structures and design patterns.
- Designing and implementing scalable web services, applications, and APIs.
- Developing and maintaining internal software tools.
- Writing low-level and high-level code.
- Troubleshooting and bug fixing.
- Identifying bottlenecks and improving software efficiency.
- Collaborating with the design team on developing micro-services.
- Writing technical documents.
Java Developers [I+S/E2-MM2]
Java Full Stack Developer
We are looking for a skilled Full Stack Developer who is passionate about building high-quality software applications. The ideal candidate will have expertise in Java frameworks and extensions, persistence frameworks, servers, platforms, clouds, databases, data storage, and QA tools. The candidate should also have experience working with Angular, React, or Vue.
As a Full Stack Developer, you will be responsible for developing and maintaining software applications for our clients. You will work closely with a team of developers and project managers to deliver high-quality software products. You should be comfortable working in a fast-paced environment and be able to adapt to changing priorities.
Mandatory Skill Sets:
Java frameworks and extensions: You should be proficient in building enterprise-grade applications using Java 8+ and Spring Boot.
Persistence Frameworks: You should have experience working with Hibernate and/or JPA. You should be able to design and develop efficient data models, and perform CRUD operations using Hibernate and/or JPA.
Servers: You should be familiar with Apache Tomcat and be able to deploy applications on Tomcat servers.
Platforms: You should have experience working with the Java EE and Jakarta EE platforms.
Clouds: You should have experience working with AWS, and be familiar with AWS services such as EC2, S3, and RDS.
Databases / Data Storage: You should have experience working with MYSQL and Oracle databases
QA Tools: You should be proficient in JUnit5 and Postman. You should be able to write and execute unit tests, integration tests, and end-to-end tests using these tools.
Web Services: You should have experience working with RESTful web services.
API Security: You should be familiar with OAuth2, JWT, Auth0, or any other API security frameworks.
Angular/React/Vue: You should have experience working with at least one of these frontend frameworks, HTML, CSS, and JavaScript.
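The API security skills listed above (OAuth2, JWT, Auth0) rest on a simple signing scheme. A from-scratch Python sketch of HS256 JWT signing and verification, to show the mechanics only; production code should use a vetted library (PyJWT in Python, or jjwt on the Java stack this role uses):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url without padding, as RFC 7515 specifies for JWT segments."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build header.payload.signature with an HMAC-SHA256 signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{_b64url(json.dumps(header).encode())}.{_b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes):
    """Return the payload if the signature checks out, else None."""
    signing_input, _, sig_b64 = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    padded = sig_b64 + "=" * (-len(sig_b64) % 4)  # restore stripped padding
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(padded)):
        return None
    payload_b64 = signing_input.split(".")[1]
    return json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
```

Note the constant-time comparison via `hmac.compare_digest`; a naive `==` check would leak timing information. Real verification also checks claims such as `exp` and `aud`, which this sketch omits.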
If you are passionate about building high-quality software applications and have the required skill sets, we encourage you to apply. We offer competitive salaries and benefits, and a challenging work environment where you can learn and grow.
Responsibilities
- Develop new user-facing features using React.js and RESTful APIs using Node.js and MongoDB
- Build reusable code and libraries for future use
- Optimize applications for maximum speed and scalability
- Collaborate with team members, e.g. designers, product managers, and other stakeholders, to ensure product quality.
- Ensure the technical feasibility of UI/UX designs
- Manage and maintain cloud infrastructure on AWS
Qualifications
- 3-6 years of experience with the MERN stack
- Proficiency with React.js, Node.js, MongoDB, and Express.js
- Familiarity with AWS services such as EC2, S3, and RDS, SQS, Lambda
- Understanding of RESTful API design principles
- Understanding of Agile software development methodologies
- Strong problem-solving and analytical skills