50+ AWS (Amazon Web Services) Job Openings in Pune on CutShort.io


We are looking for a Senior Software Engineer with 5+ years of experience in modern C++ development, paired with strong hands-on skills in AWS, Node.js, data processing, and containerized service development. The ideal candidate will be responsible for building scalable systems, maintaining complex data pipelines, and modernizing applications through cloud-native approaches and automation.
This is a high-impact role where engineering depth meets platform evolution, well suited to someone who enjoys system-level thinking, data-driven applications, and full-stack delivery.
Key Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Develop and deploy scalable backend services using Node.js and manage dependencies via NPM
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Utilize AWS services (Lambda, EC2, S3, CloudWatch, Step Functions) to deliver reliable cloud-native solutions (see the sketch after this list)
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
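For illustration only: the role's core stack is C++ and Node.js, but the AWS-facing responsibilities above often reduce to small event-driven handlers. A minimal sketch in Python for brevity, with the event shape taken from standard S3 notifications and the record format assumed to be newline-delimited text:

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Count records in each S3 object referenced by the triggering event."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys arrive URL-encoded
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        results.append({"key": key, "records": len(body.decode("utf-8").splitlines())})
    return {"statusCode": 200, "body": json.dumps(results)}
```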
Required Skills:
- 5+ years of software development experience, with strong command over modern C++
- Solid experience with Node.js, JavaScript, and NPM for backend development
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
Preferred / Nice to Have:
- Exposure to data streaming frameworks (Kafka, Spark, etc.)
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
- Background in performance profiling, secure coding, or legacy modernization
- Ability to work in agile environments and lead small technical initiatives
Minimum requirements
5+ years of industry software engineering experience (excluding internships and co-ops)
Strong coding skills in any programming language (we understand new languages can be learned on the job so our interview process is language agnostic)
Strong collaboration skills, can work across workstreams within your team and contribute to your peers’ success
Ability to thrive with a high level of autonomy and responsibility, and an entrepreneurial mindset
Interest in working as a generalist across varying technologies and stacks to solve problems and delight both internal and external users
Preferred Qualifications
Experience with large-scale financial tracking systems
Good understanding and practical knowledge of cloud-based services and related technologies (e.g., gRPC, GraphQL, Docker/Kubernetes, and cloud platforms such as AWS)

Job Summary:
As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.
Key Responsibilities:
- Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
- Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue
- Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
- Work with AWS DMS and RDS for database integration and migration
- Optimize data flows and system performance for speed and cost-effectiveness
- Deploy and manage infrastructure using AWS CloudFormation templates
- Collaborate with cross-functional teams to gather requirements and build robust data solutions
- Ensure data integrity, quality, and security across all systems and processes
Required Skills & Experience:
- 6+ years of experience in Data Engineering with strong AWS expertise
- Proficient in Python and PySpark for data processing and ETL development
- Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
- Strong SQL skills for building complex queries and performing data analysis
- Familiarity with AWS CloudFormation and infrastructure as code principles
- Good understanding of serverless architecture and cost-optimized design
- Ability to write clean, modular, and maintainable code
- Strong analytical thinking and problem-solving skills
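As a rough illustration of the Glue/PySpark pattern this posting revolves around, here is a minimal Glue job skeleton; the catalog database, table, key column, and S3 path are placeholders, not details from the posting:

```python
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table, de-duplicate on an assumed key, write Parquet to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders")  # hypothetical catalog entries
deduped = DynamicFrame.fromDF(
    dyf.toDF().dropDuplicates(["order_id"]), glue_context, "deduped")
glue_context.write_dynamic_frame.from_options(
    frame=deduped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/clean/orders/"},
    format="parquet",
)
job.commit()
```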

Key Technical Skillsets:
- Design, develop, and maintain scalable applications using AWS services, Python, and Boto3.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Implement best practices for cloud architecture and application development.
- Optimize applications for maximum speed and scalability.
- Troubleshoot and resolve issues in development, test, and production environments.
- Write clean, maintainable, and efficient code.
- Participate in code reviews and contribute to team knowledge sharing.
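Since the posting names Python and Boto3 explicitly, a minimal sketch of the kind of AWS call involved (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Upload an artifact, then list what is under the prefix to confirm it landed.
s3.upload_file("report.csv", "example-bucket", "reports/report.csv")
listing = s3.list_objects_v2(Bucket="example-bucket", Prefix="reports/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
```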
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls, and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams, including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 3-6 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
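For flavor only: the "automation workflows" responsibility above often means normalizing alerts and handing them to a SOAR platform. The stack here is Java (Python is used for brevity), and real SIEM/SOAR products each have their own APIs; the endpoint and payload schema below are hypothetical:

```python
import requests

SOAR_WEBHOOK = "https://soar.example.com/api/incidents"  # hypothetical endpoint

def forward_alert(alert: dict, token: str) -> None:
    """Normalize a raw SIEM alert and post it to the SOAR intake endpoint."""
    payload = {
        "title": alert.get("rule_name", "unknown"),
        "severity": alert.get("severity", "medium"),
        "source": alert.get("src_ip"),
        "raw": alert,
    }
    resp = requests.post(
        SOAR_WEBHOOK,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
```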

Job Description: .NET Full Stack Specialist / Lead
Skills: .NET, C#, SQL, Angular, AWS/Azure
- Strong organizational and project management skills
- Proficiency with developing applications on the cloud, such as Azure, AWS, and GCP.
- Proficiency with fundamental front end languages such as HTML, CSS, and JavaScript.
- Familiarity with JavaScript frameworks such as Vue.js, Angular, and React
- Familiarity with database technology such as SQL Server, PostgreSQL, MySQL, and MongoDB.
- Excellent verbal communication skills.
- Good problem-solving skills.
Total Experience: 5-10 Years
Notice Period: Immediate to 15 Days
Location: Pune
Role Overview:
Seeking a Mainframe Developer with hands-on real-time CICS experience for development and support in complex mainframe environments.
Key Responsibilities:
- Analyze and enhance mainframe applications.
- Develop, implement, and optimize applications.
- Troubleshoot and resolve issues promptly.
- Follow coding standards and collaborate for successful delivery.
- Stay updated on emerging mainframe technologies.
Mandatory Skills:
- Real-time CICS experience (recent project experience required)
- Proficiency in COBOL, JCL, and Assembler
- Understanding of mainframe OS (e.g., z/OS)
- DB2 experience and familiarity with mainframe utilities (ISPF, TSO, SDSF)
- Strong analytical, troubleshooting, and communication skills.
Preferred Skills:
- Experience in mainframe modernization.
- Familiarity with DevOps tools in a mainframe context.
- Exposure to cloud-based mainframe solutions.
Benefits:
- Competitive compensation and growth opportunities.
- Supportive and collaborative work environment.

Required Qualifications:
- 5+ years of professional software development experience.
- Post-secondary degree in computer science, software engineering or related discipline, or equivalent working experience.
- Development of distributed applications with Microsoft technologies: C# .NET/Core, SQL Server, Entity Framework.
- Deep expertise with microservices architectures and design patterns.
- Cloud Native AWS experience with services such as Lambda, SQS, RDS/Aurora, S3, Lex, and Polly.
- Mastery of both Windows and Linux environments and their use in the development and management of complex distributed systems architectures.
- Git source code repository and continuous integration tools.
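The services listed above (SQS in particular) follow the same shape in every AWS SDK; since the posting's own stack is C#/.NET, here is a quick boto3 sketch of the send/receive/delete cycle with a placeholder queue URL:

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

# Producer side: enqueue a message.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"order_id": 42}))

# Consumer side: long-poll, process, and delete.
messages = sqs.receive_message(
    QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5
).get("Messages", [])
for msg in messages:
    print(json.loads(msg["Body"]))
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```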

AccioJob is conducting a Walk-In Hiring Drive with a reputed global IT consulting company at AccioJob Skill Centres for the position of Infrastructure Engineer, specifically for female candidates.
To Apply, Register and select your Slot here: https://go.acciojob.com/kcYTAp
We will not consider your application if you do not register and select a slot via the above link.
Required Skills: Linux, Networking, One scripting language among Python, Bash, and PowerShell, OOPs, Cloud Platforms (AWS, Azure)
Eligibility:
- Degree: B.Tech/BE
- Branch: CSE Core With Cloud Certification
- Graduation Year: 2024 & 2025
Note: Only Female Candidates can apply for this job opportunity
Work Details:
- Work Mode: Work From Office
- Work Location: Bangalore & Coimbatore
- CTC: 11.1 LPA
Evaluation Process:
- Round 1: Offline Assessment at AccioJob Skill Centres in Noida, Pune, and Hyderabad.
- Further Rounds (for Shortlisted Candidates only)
- HackerRank Online Assessment
- Coding Pairing Interview
- Technical Interview
- Cultural Alignment Interview
Important Note: Please bring your laptop and earphones for the test.
Register here: https://go.acciojob.com/kcYTAp
Hiring for MEAN Stack Developer - Lead
Exp: 7 - 10 yrs
Work Location : Pune & Noida
Work from Office
Education: BE/B.Tech/MCA only, with 60% & above
Job Description :
1) 6+ yrs in the MEAN stack (MongoDB, Express.js, Angular, Node.js)
2) Experience with large-scale systems
3) Experience handling a team of 6+
4) Experience with RESTful APIs / SOAP APIs
5) Experience with SQL / NoSQL
6) Experience with AWS
7) Experience with an MVC framework

Position: AWS Data Engineer
Experience: 5 to 7 Years
Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Work Mode: Hybrid (3 days work from office per week)
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.
Key Responsibilities:
- Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
- Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
- Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
- Optimize data models and storage for cost-efficiency and performance.
- Write advanced SQL queries to support complex data analysis and reporting requirements.
- Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
- Ensure high data quality and integrity across platforms and processes.
- Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
Required Skills & Experience:
- Strong hands-on experience with Python or PySpark for data processing.
- Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Familiarity with serverless architectures and AWS best practices.
- Experience in designing and maintaining robust data architectures and data lakes.
- Ability to troubleshoot and resolve data pipeline issues efficiently.
- Strong communication and stakeholder management skills.
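One concrete slice of the Athena work mentioned above, sketched with boto3 (the database name, query, and output location are assumptions):

```python
import time

import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)
print("query finished with state:", state)
```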
Job Summary:
We are looking for an experienced Java Developer with 4+ years of hands-on experience to join our dynamic team. The ideal candidate will have a strong background in Java development, problem-solving skills, and the ability to work independently as well as part of a team. You will be responsible for designing, developing, and maintaining high-performance and scalable applications.
Key Responsibilities:
- Design, develop, test, and maintain Java-based applications.
- Write well-designed, efficient, and testable code following best software development practices.
- Troubleshoot and resolve technical issues during development and production support.
- Collaborate with cross-functional teams including QA, DevOps, and Product teams.
- Participate in code reviews and provide constructive feedback.
- Maintain proper documentation for code, processes, and configurations.
- Support deployment and post-deployment monitoring during night shift hours.
Required Skills:
- Strong programming skills in Java 8 or above.
- Experience with Spring Framework (Spring Boot, Spring MVC, etc.).
- Proficiency in RESTful APIs, Microservices Architecture, and Web Services.
- Familiarity with SQL and relational databases like MySQL, PostgreSQL, or Oracle.
- Hands-on experience with version control systems like Git.
- Understanding of Agile methodologies.
- Experience with build tools like Maven/Gradle.
- Knowledge of unit testing frameworks (JUnit/TestNG).
Preferred Skills (Good to Have):
- Experience with cloud platforms (AWS, Azure, or GCP).
- Familiarity with CI/CD pipelines.
- Basic understanding of frontend technologies like JavaScript, HTML, CSS.

About the Role:
We are looking for a Software Architect with extensive experience in designing scalable software systems and backend architectures using .NET or Java. This role involves high-level technical decision-making, mentoring engineering teams, and ensuring the technical integrity and evolution of backend systems. Domain experience in healthcare is highly desirable.
Key Responsibilities:
- Architect and design enterprise-level backend systems and scalable APIs.
- Define technology stack, coding standards, and design principles.
- Collaborate with product and engineering teams to translate business needs into technical architecture.
- Review and guide development efforts to align with architectural vision.
- Conduct system performance analysis and ensure robustness and scalability.
- Stay updated with emerging technologies and recommend relevant adoption.
Required Skills:
- 10+ years of hands-on experience in backend development with .NET or Java.
- Proven experience in a Software Architect or equivalent senior-level technical role.
- Expertise in microservices, cloud-native architecture, and design patterns.
- Strong knowledge of SQL/NoSQL databases, API security, and systems integration.
- Experience in system design, documentation, and architectural diagrams.
Preferred Skills:
- Deep understanding of the healthcare domain and its compliance requirements.
- Experience with cloud platforms (Azure, AWS, or GCP).
- Exposure to DevOps, CI/CD pipelines, and monitoring tools.
- Familiarity with frontend stacks and full-stack application design.
What We Offer:
- Strategic and impactful role in shaping product architecture.
- Collaborative and forward-thinking environment.
- Opportunities for professional growth and innovation.
- Flexibility in working model with a focus on delivery.

What You’ll Do:
As a Data Scientist, you will work closely with DeepIntent Analytics teams located in New York City, India, and Bosnia. The role supports internal and external business partners in defining patient and provider audiences and in generating analyses and insights related to measurement of campaign outcomes, Rx, the patient journey, and the evolution of the DeepIntent product suite. Day to day, this position involves creating and scoring audiences, reading campaign results, analyzing medical claims, clinical, demographic, and clickstream data, turning that analysis into actionable insights, and summarizing and presenting results and recommended actions to internal stakeholders and external clients as needed.
- Explore ways to create better audiences
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights
- Explore ways of using inference, statistical, machine learning techniques to improve the performance of existing algorithms and decision heuristics
- Design and deploy new iterations of production-level code
- Contribute posts to our upcoming technical blog
Who You Are:
- Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, Operations Research, or Data Science. A graduate degree is strongly preferred
- 3+ years of working experience as Data Analyst, Data Engineer, Data Scientist in digital marketing, consumer advertisement, telecom, or other areas requiring customer level predictive analytics
- Background in either data engineering or analytics
- Hands-on technical experience is required, including proficiency in performing statistical analysis in Python with the relevant libraries
- You have an advanced understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications)
- Experience in programmatic or DSP-related marketing, predictive analytics, audience segmentation, or audience behaviour analysis, or medical/healthcare experience
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference)
- Familiarity with data science tools such as XGBoost, PyTorch, and Jupyter, plus strong experience as an LLM user (developer/API experience is a plus)
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing
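To make the modeling side concrete, here is a toy sketch of the audience-scoring style of model the role describes, using XGBoost on synthetic data (real work would use claims and clickstream features, not this stand-in):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for patient/engagement features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))
```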
We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.
Key Roles & Responsibilities:
- Design and Develop AI-Powered Solutions: Architect and implement scalable AI/ML systems, focusing on Large Language Models (LLMs) and other deep learning applications.
- End-to-End Model Development: Lead the entire lifecycle of AI models—from data collection and preprocessing to training, fine-tuning, evaluation, and deployment.
- Fine-Tuning & Customization: Leverage techniques like LoRA (Low-Rank Adaptation) and Q-LoRA to efficiently fine-tune large models for specific business applications (see the sketch after this list).
- Reasoning Model Implementation: Work with advanced reasoning models such as DeepSeek-R1, exploring their applications in enterprise AI workflows.
- Data Engineering & Dataset Creation: Design and curate high-quality datasets optimized for fine-tuning AI models, ensuring robust training and validation processes.
- Performance Optimization & Efficiency: Optimize model inference, computational efficiency, and resource utilization for large-scale AI applications.
- MLOps & CI/CD Pipelines: Implement best practices for MLOps, ensuring automated training, deployment, monitoring, and continuous improvement of AI models.
- Cloud & Edge AI Deployment: Deploy and manage AI solutions in cloud environments (AWS, Azure, GCP) and explore edge AI deployment where applicable.
- API Development & Microservices: Develop RESTful APIs and microservices to integrate AI models seamlessly into enterprise applications.
- Security, Compliance & Ethical AI: Ensure AI solutions comply with industry standards, data privacy laws (e.g., GDPR, HIPAA), and ethical AI guidelines.
- Collaboration & Stakeholder Engagement: Work closely with product managers, data engineers, and business teams to translate business needs into AI-driven solutions.
- Mentorship & Technical Leadership: Guide and mentor junior engineers, fostering best practices in AI/ML development, model fine-tuning, and software engineering.
- Research & Innovation: Stay updated with emerging AI trends, conduct experiments with cutting-edge architectures and fine-tuning techniques, and drive innovation within the team.
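A minimal sketch of the LoRA idea referenced above, using Hugging Face peft and transformers (the posting names the technique, not these libraries; the base model and target modules are illustrative):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

config = LoraConfig(
    r=8,                        # low-rank adapter dimension
    lora_alpha=16,              # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter matrices train
```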
Basic Qualifications:
- A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- Experience: 5-8 Years
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiar with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus.
Preferred Qualifications
- Excellent problem-solving skills and an eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly
We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.
Key Roles & Responsibilities:
- Design and implement software solutions that power machine learning models, particularly in LLMs
- Create robust data pipelines, handling data preprocessing, transformation, and integration for machine learning projects
- Collaborate with the engineering team to build and optimize machine learning models, particularly LLMs, that address client-specific challenges
- Partner with cross-functional teams, including business stakeholders, data engineers, and solutions architects to gather requirements and evaluate technical feasibility
- Design and implement scalable infrastructure for developing and deploying GenAI solutions
- Support model deployment and API integration to ensure interaction with existing enterprise systems.
Basic Qualifications:
- A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- Experience: 3-5 Years
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiar with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus.
Preferred Qualifications
- Excellent problem-solving skills and an eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are looking for a Senior Software Development Engineer with 5-8 years of experience specializing in infrastructure deployment automation and VMware workload migration. The ideal candidate will have expertise in Infrastructure-as-Code (IaC), VMware vSphere, vMotion, HCX, Terraform, Kubernetes, and AI POD managed services. You will be responsible for automating infrastructure provisioning, migrating workloads from VMware environments to cloud and hybrid infrastructures, and optimizing AI/ML deployments.
Key Roles & Responsibilities
- Automate infrastructure deployment using Terraform, Ansible, and Helm for VMware and cloud environments.
- Develop and implement VMware workload migration strategies, including vMotion, HCX, SRM (Site Recovery Manager), and lift-and-shift migrations.
- Migrate VMware-based workloads to public cloud (AWS, Azure, GCP) or hybrid cloud environments.
- Optimize and manage AI POD workloads on VMware and Kubernetes-based environments.
- Leverage VMware HCX for live and bulk workload migrations, ensuring minimal downtime and optimal performance.
- Automate virtual machine provisioning and lifecycle management using VMware vSphere APIs, PowerCLI, or vRealize Automation.
- Integrate VMware workloads with Kubernetes for containerized AI/ML workflows.
- Ensure workload high availability and disaster recovery post-migration using VMware SRM, vSAN, and backup strategies.
- Monitor and troubleshoot migration performance using vRealize Operations, Prometheus, Grafana, and ELK.
- Develop and optimize CI/CD pipelines to automate workload migration, deployment, and validation.
- Ensure security and compliance for workloads before, during, and after migration.
- Collaborate with cloud architects to design hybrid cloud solutions supporting AI/ML workloads.
Basic Qualifications
- 5–8 years of experience in infrastructure automation, VMware workload migration, and cloud integration.
- Expertise in VMware vSphere, ESXi, vMotion, HCX, SRM, vSAN, and NSX-T.
- Hands-on experience with workload migration tools such as VMware HCX, CloudEndure, AWS Application Migration Service, and Azure Migrate.
- Proficiency in Infrastructure-as-Code using Terraform, Ansible, PowerCLI, and vRealize Automation.
- Strong experience with Kubernetes (EKS, AKS, GKE) and containerized AI/ML workloads.
- Experience in public cloud migration (AWS, Azure, GCP) for VMware-based workloads.
- Hands-on knowledge of CI/CD tools such as Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Strong scripting and automation skills in Python, Bash, or PowerShell.
- Familiarity with disaster recovery, backup, and business continuity planning in VMware environments.
- Experience in performance tuning and troubleshooting for VMware-based workloads.
Preferred Qualifications
- Experience with NVIDIA GPU orchestration (e.g., KubeFlow, Triton, RAPIDS).
- Familiarity with Packer for automated VM image creation.
- Exposure to Edge AI deployments, federated learning, and AI inferencing at scale.
- Contributions to open-source infrastructure automation projects.
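One possible shape of the vSphere automation bullet above, sketched with pyVmomi (the posting equally allows PowerCLI or vRealize; the host and credentials are placeholders):

```python
import ssl

from pyVim.connect import Disconnect, SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Enumerate every VM in the inventory and report its power state.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```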

Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
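A small sketch of the MySQL-to-pipeline hand-off this role implies: reading a table into PySpark over JDBC and staging it to S3. Connection details are placeholders, and the MySQL JDBC driver is assumed to be on the Spark classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql-extract").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://db.example.com:3306/sales")  # placeholder DSN
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "secret")
    .load()
)

# Stage completed orders as Parquet for downstream Glue jobs.
orders.filter("status = 'COMPLETE'").write.mode("overwrite").parquet(
    "s3://example-bucket/staging/orders/")
```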

Profile: AWS Data Engineer
Mode: Hybrid
Experience: 5 to 7 years
Locations: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning
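For the infrastructure-as-code line above, a minimal sketch of driving CloudFormation from Python; the template is a deliberately trivial placeholder (a single S3 bucket):

```python
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DataBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="example-data-stack", TemplateBody=TEMPLATE)
# Block until the stack finishes creating.
cfn.get_waiter("stack_create_complete").wait(StackName="example-data-stack")
print("stack created")
```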

Position summary:
We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.
Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
- Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency.
- Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
- Implement data governance, security, and compliance best practices.
- Build and maintain data models, transformations, and data marts for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
- Automate infrastructure and deployments using Terraform, Airflow, or dbt.
- Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
- Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
Basic Qualifications:
- Bachelor’s or Master’s Degree in Computer Science or Data Science.
- 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
- Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
- Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
- Proficiency in SQL, Python, or Scala for data transformation and analytics.
- Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
- Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
- Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data governance, access control, and encryption strategies.
- Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.
Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Experience with machine learning feature engineering pipelines in Databricks.
- Contributions to open-source data engineering projects.
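A compact sketch of the Delta Lake pattern central to this role, as it would run in a Databricks notebook where `spark` is preconfigured and Delta support is built in; the paths and key column are assumptions:

```python
# Land raw CSV, de-duplicate, and persist as a Delta table.
df = (
    spark.read.option("header", True)
    .csv("/mnt/raw/events/")           # assumed landing zone
    .dropDuplicates(["event_id"])      # assumed key column
)
df.write.format("delta").mode("overwrite").save("/mnt/curated/events/")

# Downstream readers get ACID snapshots (and can time-travel by version).
curated = spark.read.format("delta").load("/mnt/curated/events/")
print(curated.count())
```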

Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing.
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.
Role - ETL Developer
Work Mode - Hybrid
Experience- 4+ years
Location - Pune, Gurgaon, Bengaluru, Mumbai
Required Skills - AWS, AWS Glue, PySpark, ETL, SQL
Required Skills:
- 4+ years of hands-on experience in MySQL, including SQL queries and procedure development
- Experience in PySpark, AWS, and AWS Glue
- Experience with AWS migration
- Experience with automated scripting and tracking KPIs/metrics for database performance
- Proficiency in shell scripting and ETL.
- Strong communication skills and a collaborative team player
- Knowledge of Python and AWS RDS is a plus
About the Role
We are looking for a highly skilled Senior DevOps Engineer with expertise in Java-based applications. You will lead automation, deployment, and cloud infrastructure efforts, ensuring efficient CI/CD pipelines and scalable, secure environments.
Key Responsibilities
- CI/CD Management: Design, implement, and optimize CI/CD pipelines for Java applications using Jenkins/GitLab.
- Cloud Infrastructure: Deploy and manage cloud resources (AWS, Azure, or GCP) for scalable applications.
- Containerization & Orchestration: Manage Docker containers and Kubernetes clusters for streamlined deployment.
- Automation & Scripting: Write efficient scripts (Python, Bash) for automation tasks related to infrastructure and deployments (see the sketch after this list).
- Security & Compliance: Implement security best practices for cloud environments and containerized applications.
- Monitoring & Performance Tuning: Utilize tools like Prometheus, Grafana, ELK stack for monitoring system health and optimizing performance.
- Collaboration: Work with developers to enhance deployment workflows and troubleshoot production issues.
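The sketch promised in the scripting bullet: a post-deploy smoke test a CI/CD stage could run, failing the pipeline if any endpoint is unhealthy. The endpoints are placeholders:

```python
import sys
import time

import requests

ENDPOINTS = ["https://app.example.com/health", "https://api.example.com/health"]

def healthy(url: str, retries: int = 5) -> bool:
    """Return True once the endpoint answers 200, retrying a few times."""
    for _ in range(retries):
        try:
            if requests.get(url, timeout=3).status_code == 200:
                return True
        except requests.RequestException:
            pass
        time.sleep(2)
    return False

failures = [u for u in ENDPOINTS if not healthy(u)]
if failures:
    print("unhealthy:", failures)
    sys.exit(1)  # non-zero exit fails the CI/CD stage
print("all endpoints healthy")
```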
Required Skills
- Programming: Strong knowledge of Java, Spring Boot, and Microservices architecture.
- DevOps Tools: Experience with Jenkins, GitLab CI/CD, Terraform, and Ansible.
- Cloud Platforms: Expertise in AWS, Azure, or GCP with hands-on infrastructure management.
- Containers: Proficiency in Docker, Kubernetes, Helm.
- Networking & Security: Understanding of VPNs, firewalls, and IAM policies.
- Version Control: Git and GitHub/Bitbucket experience.
Preferred Qualifications
- Certifications in AWS, Kubernetes, or DevOps.
- Knowledge of Istio, Service Mesh, or Kafka.
- Experience in high-traffic production environments.
Benefits
- Competitive salary & bonuses.
- Flexible work arrangements (hybrid/remote options).
- Training programs & certifications.


Job Description:
Deqode is seeking a skilled .NET Full Stack Developer with expertise in .NET Core, Angular, and C#. The ideal candidate will have hands-on experience with either AWS or Azure cloud platforms. This role involves developing robust, scalable applications and collaborating with cross-functional teams to deliver high-quality software solutions.
Key Responsibilities:
- Develop and maintain web applications using .NET Core, C#, and Angular.
- Design and implement RESTful APIs and integrate with front-end components.
- Collaborate with UI/UX designers, product managers, and other developers to deliver high-quality products.
- Deploy and manage applications on cloud platforms (AWS or Azure).
- Write clean, scalable, and efficient code following best practices.
- Participate in code reviews and provide constructive feedback.
- Troubleshoot and debug applications to ensure optimal performance.
- Stay updated with emerging technologies and propose improvements to existing systems.
Required Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum of 4 years of professional experience in software development.
- Proficiency in .NET Core, C#, and Angular.
- Experience with cloud services (either AWS or Azure).
- Strong understanding of RESTful API design and implementation.
- Familiarity with version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and collaboratively in a team environment.
Preferred Qualifications:
- Experience with containerization tools like Docker and orchestration platforms like Kubernetes.
- Knowledge of CI/CD pipelines and DevOps practices.
- Familiarity with Agile/Scrum methodologies.
- Strong communication and interpersonal skills.
What We Offer:
- Competitive salary and performance-based incentives.
- Flexible working hours and remote work options.
- Opportunities for professional growth and career advancement.
- Collaborative and inclusive work environment.
- Access to the latest tools and technologies.



Job Title: .NET Developer
Location: Pan India (Hybrid)
Employment Type: Full-Time
Join Date: Immediate / Within 15 Days
Experience: 4+ Years
Deqode is looking for a skilled and passionate Senior .NET Developer to join our growing tech team. The ideal candidate is an expert in building scalable web applications and has hands-on experience with cloud platforms and modern front-end technologies.
Key Responsibilities:
- Design, develop, and maintain scalable web applications using .NET Core.
- Work on RESTful APIs and integrate third-party services.
- Collaborate with UI/UX designers and front-end developers using Angular or React.
- Deploy, monitor, and maintain applications on AWS or Azure.
- Participate in code reviews, technical discussions, and architecture planning.
- Write clean, well-structured, and testable code following best practices.
Must-Have Skills:
- 4+ years of experience in software development using .NET Core.
- Proficiency with Angular or React for front-end development.
- Strong working knowledge of AWS or Microsoft Azure.
- Experience with SQL/NoSQL databases.
- Excellent communication and team collaboration skills.
Education:
- Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
Required Skills:
- Experience in a systems administration, SRE, or DevOps focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in leveraging Automation / DevOps principles, experience with operational tools, and able to apply best practices for infrastructure and software deployment (Ansible).
- HAProxy, Nginx, SSH, MySQL configuration and operation experience
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation

🚀 We’re Hiring: PHP Developer at Deqode
📍 Location: Pune (Hybrid)
🕒 Experience: 4–6 Years
⏱️ Notice Period: Immediate Joiner
We're looking for a skilled PHP Developer to join our team. If you have a strong grasp of secure coding practices, are experienced in PHP upgrades, and thrive in a fast-paced deployment environment, we’d love to connect with you!
🔧 Key Skills:
- PHP | MySQL | JavaScript | Jenkins | Nginx | AWS
🔐 Security-Focused Responsibilities Include:
- Remediation of PenTest findings
- XSS mitigation (input/output sanitization)
- API rate limiting
- 2FA integration
- PHP version upgrade
- Use of AWS Secrets Manager
- Secure session and password policies
We’re Hiring: AI-Native DevSecOps Engineers at FlytBase
📍 Location: Pune (Onsite)
This will be one of the most challenging and rewarding roles of your career.
At FlytBase, prompting is thinking. We don’t care if you’ve memorized APIs or collected DevOps certifications.
We care how you think, how you debug under pressure, and how you solve infra problems no one else can.
We don’t hire engineers to maintain systems.
We hire infra architects to design infrastructure that runs—and evolves—on its own.
If you need step-by-step tickets or constant direction—don’t apply.
We’re looking for engineers who can design, secure, and scale real-world systems with speed, clarity, and zero excuses.
If that’s what you’ve been waiting for—read on.
The Mission
FlytBase is building the autonomous backbone for aerial intelligence—drone fleets that operate 24/7 at industrial scale.
Our platform flies missions, detects anomalies, and delivers insights—with no human in the loop.
The infra behind this? That’s what you build.
Your Loop
• Design AI-secured CI/CD pipelines (GitHub, Docker, Terraform, SAST/DAST)
• Architect AWS infra (EC2, S3, IAM, VPCs, EKS) with Infrastructure-as-Code
• Build intelligent observability systems (Grafana, Dynatrace, LLM-based detection)
• Define fallback, rollback & recovery loops that don’t need escalation
• Enforce compliance (SOC2, ISO27001, GDPR) without killing velocity
• Own SLAs from definition → delivery → defense
• Automate what used to need a team. Then automate again.
Who We’re Looking For
✅ You treat infrastructure like a product
✅ You already use AI Tools to move 5x faster
✅ You can go from “zero” to “live infra” in 48 hours
Bonus points if you’ve:
• Built custom DevOps bots or AI agents
• Got an OSCP or hacked together your own SOC2 framework
• Shipped production infra solo
• Open-sourced your infra tools or scripts
Join Us
If you’re still reading—good. That’s a signal.
Apply here 👉 https://lnkd.in/gsqjaJSP

Role - MLOps Engineer
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Role Overview
We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.
Key Responsibilities
- Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring.
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems.
- Develop and manage scalable infrastructure using AWS, particularly AWS Sagemaker.
- Automate ML workflows using CI/CD best practices and tools.
- Ensure model reproducibility, governance, and performance tracking.
- Monitor deployed models for data drift, model decay, and performance metrics.
- Implement robust versioning and model registry systems.
- Apply security, performance, and compliance best practices across ML systems.
- Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.
Required Skills & Qualifications
- 4+ years of experience in Software Engineering or MLOps, preferably in a production environment.
- Proven experience with AWS services, especially AWS Sagemaker for model development and deployment.
- Working knowledge of AWS DataZone (preferred).
- Strong programming skills in Python, with exposure to R, Scala, or Apache Spark.
- Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes).
- Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools.
- Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Solid understanding of DevOps and cloud-native infrastructure practices.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
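To ground the SageMaker requirement, a hedged sketch with the SageMaker Python SDK (v2); the entry-point script, IAM role, and data location are placeholders:

```python
import sagemaker
from sagemaker.sklearn import SKLearn

session = sagemaker.Session()
estimator = SKLearn(
    entry_point="train.py",        # your training script (not shown here)
    framework_version="1.2-1",
    instance_type="ml.m5.large",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    sagemaker_session=session,
)

# Launch a managed training job, then stand up a real-time endpoint.
estimator.fit({"train": "s3://example-bucket/train/"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```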
Job Title : Senior Backend Engineer – Java, AI & Automation
Experience : 4+ Years
Location : Any Cognizant location (India)
Work Mode : Hybrid
Interview Rounds :
- Virtual
- Face-to-Face (In-person)
Job Description :
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.

Job Title: Python Django Microservices Lead (Django Backend Lead Developer)
Location: Indore / Pune (Hybrid - Wednesday and Thursday WFO)
Timings: 12:30 PM to 9:30 PM
Experience Level: 8+ Years
Job Overview: We are seeking an experienced Django Backend Lead Developer to join our team. The ideal candidate will have a strong background in backend development, cloud technologies, and big data processing. This role involves leading technical projects, mentoring junior developers, and ensuring the delivery of high-quality solutions.
Responsibilities:
- Lead the development of backend systems using Django.
- Design and implement scalable and secure APIs.
- Integrate Azure Cloud services for application deployment and management.
- Utilize Azure Databricks for big data processing and analytics.
- Implement data processing pipelines using PySpark.
- Collaborate with front-end developers, product managers, and other stakeholders to deliver comprehensive solutions.
- Conduct code reviews and ensure adherence to best practices.
- Mentor and guide junior developers.
- Optimize database performance and manage data storage solutions.
- Ensure high performance and security standards for applications.
- Participate in architecture design and technical decision-making.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 8+ years of experience in backend development.
- 8+ years of experience with Django.
- Proven experience with Azure Cloud services.
- Experience with Azure Databricks and PySpark.
- Strong understanding of RESTful APIs and web services.
- Excellent communication and problem-solving skills.
- Familiarity with Agile methodologies.
- Experience with database management (SQL and NoSQL).
Skills: Django, Python, Azure Cloud, Azure Databricks, Delta Lake and Delta tables, PySpark, SQL/NoSQL databases, RESTful APIs, Git, and Agile methodologies
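A minimal Django sketch of the API layer this lead role owns, as it might appear in an existing project's views/urls module (a production service would add DRF serializers, auth, and real model queries):

```python
from django.http import JsonResponse
from django.urls import path

def order_status(request, order_id: int):
    # Placeholder lookup; production code would query a Django model.
    return JsonResponse({"order_id": order_id, "status": "PROCESSING"})

urlpatterns = [
    path("api/orders/<int:order_id>/status", order_status),
]
```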

Role - MLOps Engineer
Required Experience - 4 Years
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Key Requirements:
- 4+ years of experience in Software Engineering with MLOps focus
- Strong expertise in AWS, particularly AWS SageMaker (required)
- AWS Data Zone experience (preferred)
- Proficiency in Python, R, Scala, or Spark
- Experience developing scalable, reliable, and secure applications
- Track record of production-grade development, integration and support

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, or ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in the Python programming language and SQL.
● Experience in Databricks and in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note that the salary bracket varies with the candidate's experience:
- Experience from 4 to 6 years: salary up to 22 LPA
- Experience from 5 to 8 years: salary up to 30 LPA
- Experience of more than 8 years: salary up to 40 LPA

At least 5 years of experience in testing and developing automation tests.
A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.
Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.
Familiarity with Playwright or other browser application testing frameworks is a significant advantage.
Proficiency in object-oriented programming and principles is required.
Extensive knowledge of AWS services is essential.
Strong expertise in REST API testing and SQL is required.
A solid understanding of testing and development life cycle methodologies is necessary.
Knowledge of the financial industry and trading systems is a plus
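In the spirit of the REST API testing requirement above, a representative pytest + requests check; the base URL and response schema are assumptions about a hypothetical trading API:

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical system under test

@pytest.fixture
def session():
    with requests.Session() as s:
        s.headers["Accept"] = "application/json"
        yield s

def test_quote_endpoint_returns_positive_price(session):
    resp = session.get(f"{BASE_URL}/v1/quotes", params={"symbol": "AAPL"}, timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    assert "price" in body and body["price"] > 0
```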

We are seeking a highly skilled Java full-stack developer with 5–8 years of experience to join our dynamic development team. The ideal candidate will have deep technical expertise across Java, Microservices, React/Redux, Kubernetes, DevOps tools, and GCP. You will work on designing and deploying full-stack applications that are robust, scalable, and aligned with business goals.
Key Responsibilities
- Design, develop, and deploy scalable full-stack applications using Java, React, and Redux
- Build microservices following SOLID principles
- Collaborate with cross-functional teams, including product owners, QA, BAs, and other engineers
- Write clean, maintainable, and efficient code
- Perform debugging, troubleshooting, and optimization
- Participate in code reviews and contribute to engineering best practices
- Stay updated on security, privacy, and compliance requirements
- Work in an Agile/Scrum environment using tools like JIRA and Confluence
Technical Skills Required
Frontend
- Strong proficiency in JavaScript and modern ES6 features
- Expertise in React.js with advanced knowledge of hooks (useCallback, useMemo, etc.)
- Solid understanding of Redux for state management
Backend
- Strong hands-on experience in Java
- Building and maintaining Microservices architectures
DevOps & Infrastructure
- Experience with CI/CD tools: Jenkins, Nexus, Maven, Ansible
- Terraform for infrastructure as code
- Containerization and orchestration using Docker and Kubernetes/GKE
- Experience with IAM, security roles, service accounts
Cloud
- Proficiency with the services of at least one major cloud provider (the stack above points to GCP)
Database
- Hands-on experience with PostgreSQL, MySQL, BigQuery
Scripting
- Proficiency in Bash/Shell scripting and Python
Non-Technical Skills
- Strong communication and interpersonal skills
- Ability to work effectively in distributed teams across time zones
- Quick learner and adaptable to new technologies
- Team player with a collaborative mindset
- Ability to explain complex technical concepts to non-technical stakeholders
Nice to Have
- Experience with NetReveal / Detica
Why Join Us?
- 🚀 Challenging Projects: Be part of innovative solutions making a global impact
- 🌍 Global Exposure: Work with international teams and clients
- 📈 Career Growth: Clear pathways for professional advancement
- 🧘‍♂️ Flexible Work Options: Hybrid and remote flexibility to support work-life balance
- 💼 Competitive Compensation: Industry-leading salary and benefits
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ years of hands-on engineering experience, with at least 2 years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
📍 Position : Java Architect
📅 Experience : 10 to 15 Years
🧑‍💼 Open Positions : 3+
📍 Work Location : Bangalore, Pune, Chennai
💼 Work Mode : Hybrid
📅 Notice Period : Immediate joiners preferred; up to 1 month maximum
🔧 Core Responsibilities :
- Lead architecture design and development for scalable enterprise-level applications.
- Own and manage all aspects of technical development and delivery.
- Define and enforce best coding practices, architectural guidelines, and development standards.
- Plan and estimate the end-to-end technical scope of projects.
- Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
- Mentor and lead individual contributors and small development teams.
- Collaborate with cross-functional teams, including DevOps, Product, and QA.
- Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.
🛠️ Required Technical Skills :
- Strong hands-on expertise in Java, Spring Boot, Microservices architecture
- Experience with Kafka or similar messaging/event streaming platforms
- Proficiency in cloud platforms – AWS and Azure (must-have)
- Exposure to frontend technologies (nice-to-have)
- Solid understanding of HLD, system architecture, and design patterns
- Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
- Agile/Lean development, Pair Programming, and Continuous Integration practices
- Polyglot mindset is a plus (Scala, Golang, Python, etc.)
🚀 Ideal Candidate Profile :
- Currently working in a product-based environment
- Already functioning as an Architect or Principal Engineer
- Proven track record as an Individual Contributor (IC)
- Strong engineering fundamentals with a passion for scalable software systems
- No compromise on code quality, craftsmanship, and best practices
🧪 Interview Process :
- Round 1: Technical pairing round
- Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
- Final Round: HR and offer discussion
In your role as Software Engineer/Lead, you will directly work with other developers, Product Owners, and Scrum Masters to evaluate and develop innovative solutions. The purpose of the role is to design, develop, test, and operate a complex set of applications or platforms in the IoT Cloud area.
The role involves the utilization of advanced tools and analytical methods for gathering facts to develop solution scenarios. The job holder needs to be able to execute quality code, review code, and collaborate with other developers.
We have an excellent mix of people, which we believe makes for a more vibrant, more innovative, and more productive team.
- A bachelor’s or master’s degree in information technology, computer science, or other relevant education
- At least 5 years of experience as a Software Engineer in an enterprise context
- Experience in design, development and deployment of large-scale cloud-based applications and services
- Good knowledge of serverless application development on AWS, event-driven architecture, and SQL/NoSQL databases (a minimal sketch follows this list)
- Experience with IoT products, backend services and design principles
- Good knowledge of at least one backend technology, such as Node.js (JavaScript, TypeScript) or the JVM (Java, Scala, Kotlin)
- Passionate about code quality, security and testing
- Microservice development experience with Java (Spring) is a plus
- Good command of English, both oral and written
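Although this role lists Node.js and JVM stacks, the serverless, event-driven idea can be sketched in a few lines of Python; the queue wiring and payload shape below are hypothetical:

```python
# Minimal event-driven serverless sketch: an AWS Lambda handler consuming
# records from an SQS trigger. The payload shape is a hypothetical placeholder.
import json

def handler(event, context):
    processed = 0
    for record in event.get("Records", []):
        payload = json.loads(record["body"])  # SQS delivers the message body as a string
        # Hypothetical IoT domain logic: flag overheating devices.
        if payload.get("temperature_c", 0) > 80:
            print(f"ALERT device={payload.get('device_id')} overheating")
        processed += 1
    return {"processed": processed}
```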
Here is the Job Description -
Location -- Viman Nagar, Pune
Mode - 5 Days Working
Required Tech Skills:
● Strong at PySpark, Python
● Good understanding of Data Structure
● Good at SQL query/optimization
● Strong fundamentals of OOPs programming
● Good understanding of AWS Cloud, Big Data.
● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB
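For illustration, a minimal boto3 sketch of querying S3 data through Athena, matching the stack listed above; the database, table, and results location are hypothetical:

```python
# Minimal Athena query sketch with boto3. Database, table, and the S3
# results location are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes (simplified; real code should back off and handle errors).
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

# The first row of an Athena result set is the header row.
for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"][1:]:
    print([col.get("VarCharValue") for col in row["Data"]])
```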
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation (a minimal Pulumi sketch follows this list).
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
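For illustration, a minimal infrastructure-as-code sketch using Pulumi's Python SDK, one of the IaC tools listed above; the bucket name and tags are hypothetical:

```python
# Minimal IaC sketch with Pulumi (Python): declare an S3 bucket and export its name.
# Resource names and tags are hypothetical placeholders; `pulumi up` reconciles state.
import pulumi
import pulumi_aws as aws

artifacts = aws.s3.Bucket(
    "build-artifacts",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),  # keep object history
    tags={"team": "platform", "managed-by": "pulumi"},
)

pulumi.export("bucket_name", artifacts.id)
```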
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform, Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-10 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri, in-office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js or Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka (see the sketch after this list).
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
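For illustration, a minimal Kafka pipeline sketch using the kafka-python client (one possible library choice); the broker address, topic, and payload are hypothetical:

```python
# Minimal real-time pipeline sketch with kafka-python: produce one event,
# then consume it. Broker, topic, and payload are hypothetical placeholders.
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("device-events", {"device_id": "d-42", "reading": 21.7})
producer.flush()

consumer = KafkaConsumer(
    "device-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="enrichment-workers",
)
for msg in consumer:
    print(msg.value)  # downstream enrichment/storage would hang off here
    break             # sketch only: handle one message and stop
```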
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js or Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java: Spring Framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG
We are seeking a skilled Full-stack developer. As a Full-stack developer, you will collaborate with international cross-functional teams to design, develop, and deploy high-quality software solutions.
Responsibilities:
Design, develop, and maintain the application.
Write clean, efficient, and reusable code.
Implement new features and functionality based on business requirements.
Participate in system and application architecture discussions.
Create technical designs and specifications for new features or enhancements.
Write and execute unit tests to ensure code quality.
Debug and resolve technical issues and software defects.
Conduct code reviews to ensure adherence to best practices.
Identify and fix vulnerabilities to ensure application integrity.
Work with the product owner to ensure seamless integration of user-facing elements.
Collaborate with DevOps teams for deployment and scaling.
Requirements:
Bachelor’s degree in computer science or information technology, or a related field.
Proven experience as a Full-stack developer; experience in the insurance domain is appreciated.
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities to work effectively in a team environment.
Strong experience with Spring Boot 3, Java 17 or newer, and Maven.
Skills & Requirements
Angular 18+, GitHub, IntelliJ IDEA, Java 11+, Jest, Kubernetes, Maven, Mockito, NDBX/ng-aquila, NGRX, Spring Boot, State Management, Typescript, Playwright, PostgreSQL, Sonar, Swagger, AWS, Camunda, Dynatrace, Jenkins, Kafka, NGXS, Signals, Taly.
What You’ll Do:
* Establish formal data practice for the organisation.
* Build & operate scalable and robust data architectures.
* Create pipelines for the self-service introduction and usage of new data.
* Implement DataOps practices
* Design, develop, and operate data pipelines that support data scientists and machine learning engineers.
* Build simple, highly reliable data storage, ingestion, and transformation solutions that are easy to deploy and manage.
* Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
Who You Are:
* Experience in designing, developing and operating configurable Data pipelines serving high volume and velocity data.
* Experience working with public clouds like GCP/AWS.
* Good understanding of software engineering, DataOps, data architecture, Agile and DevOps methodologies.
* Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
* Proficient with SQL, Java, Spring boot, Python or JVM-based language, Bash.
* Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc. (a minimal Airflow sketch follows this list)
* Good communication skills with the ability to collaborate with both technical and non-technical people.
* Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious
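For illustration, a minimal Airflow DAG sketch (Airflow 2.x), since Airflow is among the Apache projects named above; the DAG id, schedule, and task bodies are hypothetical:

```python
# Minimal Airflow 2.x DAG sketch: two dependent tasks on a daily schedule.
# DAG id, schedule, and task logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")  # placeholder

def transform():
    print("clean and enrich the extracted data")   # placeholder

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform
```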



General Summary:
The Senior Software Engineer will be responsible for designing, developing, testing, and maintaining full-stack solutions. This role involves hands-on coding (80% of time), performing peer code reviews, handling pull requests and engaging in architectural discussions with stakeholders. You'll contribute to the development of large-scale, data-driven SaaS solutions using best practices like TDD, DRY, KISS, YAGNI, and SOLID principles. The ideal candidate is an experienced full-stack developer who thrives in a fast-paced, Agile environment.
Essential Job Functions:
- Design, develop, and maintain scalable applications using Python and Django.
- Build responsive and dynamic user interfaces using React and TypeScript.
- Implement and integrate GraphQL APIs for efficient data querying and real-time updates (see the sketch after this list).
- Apply design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure maintainable and scalable code.
- Develop and manage RESTful APIs for seamless integration with third-party services.
- Design, optimize, and maintain SQL databases like PostgreSQL, MySQL, and MSSQL.
- Use version control systems (primarily Git) and follow collaborative workflows.
- Work within Agile methodologies such as Scrum or Kanban, participating in daily stand-ups, sprint planning, and retrospectives.
- Write and maintain unit tests, integration tests, and end-to-end tests, following Test-Driven Development (TDD).
- Collaborate with cross-functional teams, including Product Managers, DevOps, and UI/UX Designers, to deliver high-quality products.
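For illustration, a minimal GraphQL schema sketch using graphene (the library commonly paired with Django via graphene-django); the Book type and its data are hypothetical:

```python
# Minimal GraphQL schema sketch with graphene. The Book type and the
# in-memory data are hypothetical placeholders.
import graphene

class Book(graphene.ObjectType):
    id = graphene.Int()
    title = graphene.String()

class Query(graphene.ObjectType):
    books = graphene.List(Book)

    def resolve_books(root, info):
        return [Book(id=1, title="Dune"), Book(id=2, title="Neuromancer")]

schema = graphene.Schema(query=Query)

# Ad-hoc execution; in Django this schema would be served by a GraphQLView.
result = schema.execute("{ books { id title } }")
print(result.data)
```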
Essential functions are the basic job duties that an employee must be able to perform, with or without reasonable accommodation. The function is considered essential if the reason the position exists is to perform that function.
Supportive Job Functions:
- Remain knowledgeable of new emerging technologies and their impact on internal systems.
- Available to work on call when needed.
- Perform other miscellaneous duties as assigned by management.
These tasks do not meet the Americans with Disabilities Act definition of essential job functions and usually equal 5% or less of time spent. However, these tasks still constitute important performance aspects of the job.
Skills
- The ideal candidate must have strong proficiency in Python and Django, with a solid understanding of Object-Oriented Programming (OOP) principles. Expertise in JavaScript, TypeScript, and React is essential, along with hands-on experience in GraphQL for efficient data querying.
- The candidate should be well-versed in applying design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure scalable and maintainable code architecture.
- Proficiency in building and integrating REST APIs is required, as well as experience working with SQL databases like PostgreSQL, MySQL, and MSSQL.
- Familiarity with version control systems (especially Git) and working within Agile methodologies like Scrum or Kanban is a must.
- The candidate should also have a strong grasp of Test-Driven Development (TDD) principles.
- In addition to the above, it is good to have experience with Next.js for server-side rendering and static site generation, as well as knowledge of cloud infrastructure such as AWS or GCP.
- Familiarity with NoSQL databases, CI/CD pipelines using tools like GitHub Actions or Jenkins, and containerization technologies like Docker and Kubernetes is highly desirable.
- Experience with microservices architecture and event-driven systems (using tools like Kafka or RabbitMQ) is a plus, along with knowledge of caching technologies such as Redis or Memcached. Understanding OAuth2.0, JWT, SSO authentication mechanisms, and adhering to API security best practices following OWASP guidelines is beneficial.
- Additionally, experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation, and familiarity with performance monitoring tools such as New Relic or Datadog, will be considered an advantage.
Abilities:
- Ability to organize, prioritize, and handle multiple assignments on a daily basis.
- Strong and effective interpersonal and communication skills.
- Ability to interact professionally with a diverse group of clients and staff.
- Must be able to work flexible hours on-site and remote.
- Must be able to coordinate with other staff and provide technological leadership.
- Ability to work in a complex, dynamic team environment with minimal supervision.
- Must possess good organizational skills.
Education, Experience, and Certification:
- Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.), but equivalent work experience in a technology-related area may substitute.
- 2+ years relevant experience, required.
- Experience using version control daily in a developer environment.
- Experience with Python, JavaScript, and React is required.
- Experience using rapid development frameworks like Django or Flask.
- Experience using front end build tools.
Scope of Job:
- No direct reports.
- No supervisory responsibility.
- Consistent work week with minimal travel.
- Errors may be serious, costly, and difficult to discover.
- Contact with others inside and outside the company is regular and frequent.
- Some access to confidential data.
We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms such as AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.
Responsibilities:
Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool); a minimal sketch follows this section.
Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.
Develop data routes: You design scalable and powerful data management processes.
Analyze data: You derive sound findings from data sets and present them in an understandable way.
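For illustration, a minimal dbt sketch of an ELT step. dbt models are most often SQL, but dbt also supports Python models on Snowflake (via Snowpark), which keeps this example in one language; the model and column names are hypothetical:

```python
# Minimal dbt Python model sketch (dbt-core >= 1.3 on Snowflake/Snowpark).
# The upstream model name and column are hypothetical placeholders.
def model(dbt, session):
    dbt.config(materialized="table")
    orders = dbt.ref("stg_orders")               # upstream staging model (hypothetical)
    return orders.filter(orders["amount"] > 0)   # keep only positive-amount orders
```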
Requirements:
Requirements management and project experience: You successfully implement cloud-based data & analytics projects.
Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.
Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).
SQL know-how: You have a sound knowledge of SQL.
Data management: You are familiar with topics such as master data management and data quality.
Bachelor's degree in computer science, or a related field.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.
What you'll do:
· Perform complex application programming activities with an emphasis on mobile development: Node.js, TypeScript, JavaScript, RESTful APIs and related backend frameworks
· Assist in the definition of system architecture and detailed solution design that are scalable and extensible
· Collaborate with Product Owners, Designers, and other engineers on different permutations to find the best solution possible
· Own the quality of code and do your own testing. Write unit tests and improve test coverage.
· Deliver amazing solutions to production that knock everyone’s socks off
· Mentor junior developers on the team
What we’re looking for:
· Amazing technical instincts. You know how to evaluate and choose the right technology and approach for the job. You have stories you could share about what problem you thought you were solving at first, but through testing and iteration, came to solve a much bigger and better problem that resulted in positive outcomes all-around.
· A love for learning. Technology is continually evolving around us, and you want to keep up to date to ensure we are using the right tech at the right time.
· A love for working in ambiguity—and making sense of it. You can take in a lot of disparate information and find common themes, recommend clear paths forward and iterate along the way. You don’t form an opinion and sell it as if it’s gospel; this is all about being flexible, agile, dependable, and responsive in the face of many moving parts.
· Confidence, not ego. You have an ability to collaborate with others and see all sides of the coin to come to the best solution for everyone.
· Flexible and willing to accept change in priorities, as necessary
· Demonstrable passion for technology (e.g., personal projects, open-source involvement)
· Enthusiastic embrace of DevOps culture and collaborative software engineering
· Ability and desire to work in a dynamic, fast paced, and agile team environment
· Enthusiasm for cloud computing platforms such as AWS or Azure
Basic Qualifications:
· Minimum B.S./M.S. in Computer Science or a related discipline from an accredited college or university
· At least 4 years of experience designing, developing, and delivering backend applications with Node.js, TypeScript
· At least 2 years of experience building internet facing services
· At least 2 years of experience with AWS and/or OpenShift
· Exposure to some of the following concepts: object-oriented programming, software engineering techniques, quality engineering, parallel programming, databases, etc.
· Experience integrating APIs with front-end and/or mobile-specific frameworks
· Proficiency in building and consuming RESTful APIs
· Ability to manage multiple tasks and consistently meet established timelines
· Strong collaboration skills
· Excellent written and verbal communications skills
Preferred Qualifications:
· Experience with Apache Cordova framework
· Demonstrable background in native iOS and Android development
· Experience developing and deploying applications within Kubernetes based containers
· Experience in Agile and Scrum development techniques