50+ AWS CloudFormation Jobs in India
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
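For illustration, a minimal sketch of one common canary mechanism on AWS, assuming an ALB listener that forwards to weighted blue/green target groups (the ARNs, bake time, and health gate are placeholders; a real Jenkins stage would also watch error-rate alarms before promoting):

    # canary_shift.py - hedged sketch: weighted ALB canary via boto3. ARNs are placeholders.
    import time
    import boto3

    elb = boto3.client("elbv2")
    LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/demo/..."   # placeholder
    BLUE_TG = "arn:aws:elasticloadbalancing:...:targetgroup/blue/..."         # current version
    GREEN_TG = "arn:aws:elasticloadbalancing:...:targetgroup/green/..."       # canary version

    def set_weights(blue: int, green: int) -> None:
        # Split listener traffic between the two target groups.
        elb.modify_listener(
            ListenerArn=LISTENER_ARN,
            DefaultActions=[{
                "Type": "forward",
                "ForwardConfig": {"TargetGroups": [
                    {"TargetGroupArn": BLUE_TG, "Weight": blue},
                    {"TargetGroupArn": GREEN_TG, "Weight": green},
                ]},
            }],
        )

    def green_is_healthy() -> bool:
        # Promote only if every registered canary target reports 'healthy'.
        states = elb.describe_target_health(TargetGroupArn=GREEN_TG)["TargetHealthDescriptions"]
        return bool(states) and all(t["TargetHealth"]["State"] == "healthy" for t in states)

    for pct in (10, 25, 50, 100):      # gradual traffic shift
        set_weights(100 - pct, pct)
        time.sleep(300)                # bake time between steps
        if not green_is_healthy():
            set_weights(100, 0)        # roll back to blue
            raise SystemExit("canary failed health checks; rolled back")
    print("canary promoted to 100%")
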
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
4. Security, Compliance & Risk Highlighting
- Conduct frequent risk assessments and identify vulnerabilities in:
  - Cloud architecture
  - Access policies (IAM)
  - Secrets & key management
  - Data flows & network exposure
- Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
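As a hedged illustration of the risk assessments above, a short boto3 script can flag security-group rules that are open to the internet; the allow-list of acceptable public ports is an assumption, not a policy:

    # sg_audit.py - sketch: flag security-group rules open to 0.0.0.0/0.
    import boto3

    ALLOWED_PUBLIC_PORTS = {80, 443}   # assumption: only web ports may be public

    ec2 = boto3.client("ec2")
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg["IpPermissions"]:
                open_world = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
                port = perm.get("FromPort")   # absent (None) means all ports for protocol -1
                if open_world and port not in ALLOWED_PUBLIC_PORTS:
                    print(f"RISK: {sg['GroupId']} ({sg['GroupName']}) allows "
                          f"{perm.get('IpProtocol')}/{port} from 0.0.0.0/0")
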
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
  - Microservices
  - Caching layers
  - CDN distribution (CloudFront)
  - Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing.
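To make the indexing and query-tuning work concrete, here is a minimal pymongo sketch against a hypothetical orders collection: add a compound index matching a hot query, then confirm with explain() that the winning plan uses it:

    # mongo_tuning.py - sketch: index a hot query path and verify with explain().
    # The 'orders' collection and its fields are hypothetical.
    from pymongo import ASCENDING, DESCENDING, MongoClient

    db = MongoClient("mongodb://localhost:27017")["appdb"]

    # Equality field first, then the sort key, so the index serves filter and sort.
    db.orders.create_index([("user_id", ASCENDING), ("created_at", DESCENDING)],
                           name="user_recent_orders")

    plan = db.orders.find({"user_id": 42}).sort("created_at", -1).limit(20).explain()
    print(plan["queryPlanner"]["winningPlan"])   # expect IXSCAN, not COLLSCAN
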
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
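One lightweight way to automate that, sketched here under the assumption of a local network.yaml template and a placeholder stack name, is to drive CloudFormation create-or-update from Python:

    # deploy_stack.py - sketch: idempotent CloudFormation create-or-update via boto3.
    import boto3
    from botocore.exceptions import ClientError

    cfn = boto3.client("cloudformation")
    STACK = "demo-network"                         # placeholder stack name
    with open("network.yaml") as f:                # placeholder template path
        body = f.read()

    waiter = None
    try:
        cfn.create_stack(StackName=STACK, TemplateBody=body,
                         Capabilities=["CAPABILITY_NAMED_IAM"])
        waiter = cfn.get_waiter("stack_create_complete")
    except ClientError as err:
        if err.response["Error"]["Code"] != "AlreadyExistsException":
            raise
        try:
            cfn.update_stack(StackName=STACK, TemplateBody=body,
                             Capabilities=["CAPABILITY_NAMED_IAM"])
            waiter = cfn.get_waiter("stack_update_complete")
        except ClientError as err2:
            if "No updates" not in err2.response["Error"]["Message"]:
                raise                              # anything but a no-op is a real failure
    if waiter:
        waiter.wait(StackName=STACK)
    print(f"{STACK} is up to date")
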
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why Join company?
• Build and operate infrastructure powering millions of monthly users.
• Opportunity to shape DevOps culture and cloud architecture from the ground up.
• High-impact role in a fast-scaling Indian OTT product.

Global digital transformation solutions provider.
Role Proficiency:
Leverage expertise in a technology area (e.g., Java, Microsoft technologies, or Mainframe/legacy) to design system architecture.
Knowledge Examples:
- Domain/Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and customer business domain
- Technology Knowledge: Demonstrates working knowledge of more than one technology area related to own area of work (e.g., Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs, and libraries; best practices for one technology stack; configuration parameters for successful deployment; and configuration parameters for high performance within one technology stack
- Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of technology trends related to at least two technologies
- Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, and patterns (e.g., SOA, N-Tier, EDA), and perspectives (e.g., TOGAF, Zachman); integration architecture, including input and output components and existing integration methodologies and topologies; source and external systems; non-functional requirements; data architecture; deployment architecture; and architecture governance
- Design Patterns, Tools and Principles: Applies specialized knowledge of design patterns, design principles, practices, and design tools. Knowledge of documenting designs using tools like EA
- Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.)
- Project Management Tools and Techniques: Demonstrates working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, and disaster management) and tools (MS Excel, MPP, client-specific time sheets, capacity planning tools, etc.)
- Project Management: Demonstrates working knowledge of the project governance framework and RACI matrix, and basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics
- Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g., the TCP estimation model) and company-specific estimation templates
- Knowledge Management: Working knowledge of industry knowledge management tools (such as portals and wikis) and company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT)
- Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications)
- Requirement Gathering and Analysis: Demonstrates working knowledge of gathering non-functional requirements; analyzing functional and non-functional requirements; analysis tools (such as functional flow diagrams, activity diagrams, blueprints, and storyboards) and techniques (business analysis, process mapping, etc.); requirements management tools (e.g., MS Excel); and basic knowledge of functional requirements gathering. Specifically, identifies architectural concerns and documents them as part of IT requirements, including NFRs
- Solution Structuring: Demonstrates working knowledge of service offerings and products
Additional Comments:
Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:
• Excellent technical background with end-to-end architecture experience to design and implement scalable, maintainable, and high-performing systems that integrate front-end technologies with back-end services.
• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.
• Expertise in cloud-based applications on Azure, leveraging key Azure services.
• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring, and logging to ensure reliable and efficient deployment cycles.
• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards, and customer requirements.
• Conduct code reviews to maintain high-quality code and collaborate with the team to ensure code is optimized for performance, scalability, and security.
• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.
• Excellent communication skills.
• Mentor team members, providing guidance on technical challenges and helping them grow their skill sets.
• Good to have: experience in GCP and the retail domain.
Skills: DevOps, Azure, Java
Must-Haves
Java (12+ years), React, Azure, DevOps, Cloud Architecture
Strong Java architecture and design experience.
Expertise in Azure cloud services.
Hands-on experience with React and front-end integration.
Proven track record in DevOps practices (CI/CD, automation).
Notice period - 0 to 15 days only
Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum
Excellent communication and leadership skills.
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
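A pattern behind several of these items (fast APIs, performance tuning, keeping read load off the database) is a cache layer. As a minimal sketch, assuming a local Redis and a hypothetical fetch_user_from_db helper, cache-aside looks like this:

    # cache_aside.py - sketch: cache-aside with a TTL, assuming a local Redis.
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)
    TTL_SECONDS = 300                      # assumption: 5-minute staleness is acceptable

    def fetch_user_from_db(user_id: int) -> dict:
        # Hypothetical stand-in for the real database read.
        return {"id": user_id, "name": f"user-{user_id}"}

    def get_user(user_id: int) -> dict:
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached is not None:             # hit: skip the database entirely
            return json.loads(cached)
        user = fetch_user_from_db(user_id) # miss: read through, then populate
        r.setex(key, TTL_SECONDS, json.dumps(user))
        return user
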
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a let’s figure this out attitude.
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.
Why company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings - an 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for professionals with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights (see the sketch after this list).
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
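As a small illustration of the EDA item above, a typical first pass in pandas, assuming a hypothetical orders.csv with an order_date column:

    # eda_first_pass.py - sketch: quick exploratory data analysis with pandas.
    import pandas as pd

    df = pd.read_csv("orders.csv", parse_dates=["order_date"])     # hypothetical dataset

    print(df.shape)                                        # rows x columns
    print(df.isna().mean().sort_values(ascending=False))   # null share per column
    print(df.describe(include="all").T)                    # distributions and cardinality

    # Trend check: monthly order volume, often the first chart stakeholders ask for.
    print(df.set_index("order_date").resample("MS").size().tail(12))
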
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, Sql
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
Notice period - 0 to 15 days (Max 30 Days)
Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT / Data Science
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings - an 8-hour window between 7:30 PM IST and 4:30 AM IST
Review Criteria
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- (Company) Product companies preferred; exception for service-company candidates with strong MLOps + AWS depth
Preferred
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
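To ground the Airflow/MWAA item above, a minimal DAG sketch with hypothetical task callables; a production pipeline would add sensors, per-task retry tuning, and model-registry steps:

    # retrain_dag.py - sketch: daily extract -> train -> evaluate flow on Airflow 2.x.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_features(**_):    # hypothetical: materialize features to S3
        ...

    def train_model(**_):         # hypothetical: launch the training job
        ...

    def evaluate_and_gate(**_):   # hypothetical: fail the run if metrics regress
        ...

    with DAG(
        dag_id="daily_model_retrain",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",        # Airflow 2.4+ keyword; older versions use schedule_interval
        catchup=False,
        default_args={"retries": 2},
    ) as dag:
        extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
        train = PythonOperator(task_id="train_model", python_callable=train_model)
        gate = PythonOperator(task_id="evaluate_and_gate", python_callable=evaluate_and_gate)
        extract >> train >> gate
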
Ideal Candidate
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
  - Compute/Orchestration: EKS, ECS, EC2, Lambda
  - Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  - Workflow: MWAA/Airflow, Step Functions
  - Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Required Skills: TypeScript, MVC, Cloud experience (Azure, AWS, etc.), mongodb, Express.js, Nest.js
Criteria:
Need candidates from growing startups or product-based companies only
1. 4–8 years’ experience in backend engineering
2. Minimum 2+ years hands-on experience with:
- TypeScript
- Express.js / Nest.js
3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)
4. Strong understanding of system design & scalable architecture
5. Hands-on experience in:
- Event-driven architecture / Domain-driven design
- MVC / Microservices
6. Strong in automated testing (especially integration tests)
7. Experience with CI/CD pipelines (GitHub Actions or similar)
8. Experience managing production systems
9. Solid understanding of performance, reliability, observability
10. Cloud experience (AWS preferred; GCP/Azure acceptable)
11. Strong coding standards — Clean Code, code reviews, refactoring
Description
About the opportunity
We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.
As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.
What you will be doing
- Build and ship features in our Node.js codebase (now migrating to TypeScript) that directly impact user experience and help move the top and bottom line of the business.
- Collaborate closely with our product, design and data team to build innovative features to deliver a world class product to our customers. At company, product managers don’t “tell” what to build. In fact, we all collaborate on how to solve a problem for our customers and the business. Engineering plays a big part in it.
- Design scalable platforms that empower our product and marketing teams to rapidly experiment.
- Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
- Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
- Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
- Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.
The role could be ideal for you if you
- 4-8 years of backend engineering experience, with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
- Well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
- Experienced in writing automated tests (especially integration tests) and Continuous Integration. At company, engineers own quality and hence, writing automated tests is crucial to the role.
- Experience with managing production infrastructure using technologies like public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience in using Kubernetes.
- Experience in observability techniques like code instrumentation for metrics, tracing and logging.
- Care deeply about code quality, code reviews, software architecture (think about Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
- Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
- Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members on the team sooner than later.
- Can take ownership of goals and deliver them with high accountability.
Don’t hesitate to try out new technologies. At company, nobody is limited to a role. Every engineer on our team is an expert in at least one technology but often ventures into adjacent technologies like React.js, Flutter, Data Platforms, AWS, and Kubernetes. If you are not excited by this, you will not like working at company. Bonus: if you have experience in adjacent technologies like AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, Infrastructure as Code (Terraform, Pulumi, etc.).
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods to deep learning along with employing LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to BigTable), OpenSearch, and Neo4j graph.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the Sagemaker pipeline with ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
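A compressed sketch of that SageMaker flow using the SageMaker Python SDK with a built-in XGBoost container; the role ARN, bucket paths, and hyperparameters are placeholders:

    # sagemaker_train_deploy.py - sketch: train and deploy via the SageMaker Python SDK.
    import sagemaker
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"   # placeholder

    image = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")
    est = Estimator(
        image_uri=image,
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://my-bucket/models/",          # placeholder
        hyperparameters={"objective": "binary:logistic", "num_round": 100},
    )
    est.fit({"train": "s3://my-bucket/data/train/"})   # placeholder channel

    predictor = est.deploy(initial_instance_count=1, instance_type="ml.m5.large")
    # ...invoke predictor.predict(...) from the application, then clean up:
    predictor.delete_endpoint()
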
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
Notice period - 0 to 15 days only
Hybrid work mode- 3 days office, 2 days at home
Analyse and understand the JIRA business stories and design the technical solutions.
- Design, Develop, and evaluate the AWS Cloud based technical solutions.
- Lead, Track tasks, Oversee, and mentor less experienced team members.
- Integrate with other SPE proprietary applications using RESTful web services, complying with the latest industry design and coding standards (see the sketch after this list).
- Strong knowledge of technical principles, practices, and procedures to implement and maintain SPE's standard complex system solutions.
- Strong understanding of and first-hand experience with agile software development approaches in a fast-paced and continuously changing B2B marketing business environment.
- Working on large-scale systems development or integration projects, acting as the project lead performing analysis and documenting technical requirements, data requirements, data architecture, and relationships.
- Provide subject matter expertise on overall IT solution, system and data flow architecture for Spring based micro services.
- Provide primary technology support for in-house proprietary B2B projects with supported application platforms.
- Review & approve Github Pull requests using the existing Application Development Frameworks and Coding Standards.
- Provide production support in managing the application incidents and priority-based problem resolution.
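As a hedged sketch of the RESTful integration item above (the base URL, endpoint, and token are hypothetical placeholders), a retrying HTTP client in Python might look like:

    # rest_client.py - sketch: calling a proprietary REST service with retries.
    import requests
    from requests.adapters import HTTPAdapter, Retry

    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=Retry(
        total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])))
    session.headers["Authorization"] = "Bearer <token>"          # placeholder credential

    resp = session.get("https://api.example.internal/v1/titles", timeout=10)
    resp.raise_for_status()
    for item in resp.json().get("items", []):                    # hypothetical payload shape
        print(item.get("id"), item.get("name"))
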
We are seeking an experienced Cloud Penetration Tester to assess, exploit, and strengthen the security of our cloud environments (AWS, Azure, GCP). The role involves simulating real-world cyber-attacks, identifying vulnerabilities, and delivering actionable remediation recommendations.
Key Responsibilities
• Perform in-depth penetration testing on cloud infrastructures (AWS/Azure/GCP).
• Conduct cloud-specific vulnerability assessments and configuration reviews.
• Simulate cyber-attacks to identify weaknesses in cloud applications, networks, APIs, and IAM configurations.
• Evaluate cloud-native security controls (Security Groups, IAM roles, Key Management, WAF, CloudTrail, etc.).
• Test containerized and serverless environments (Docker, Kubernetes, Lambda, Cloud Functions).
• Identify misconfigurations, privilege escalation paths, insecure storage, authentication issues, and API exploits.
• Prepare detailed technical reports and executive summaries with remediation steps.
• Work with DevOps and Cloud teams to improve security posture and ensure secure architecture.
• Assist in threat modeling and secure design of new cloud features/services.
• Stay updated on modern cloud attack tools and techniques (e.g., Pacu, ScoutSuite, Prowler, KubeHound).
🧠 Skills & Qualifications
• Strong understanding of cloud platforms (AWS, Azure, GCP).
• Hands-on experience with cloud penetration testing tools: Pacu, ScoutSuite, Prowler, CloudBrute, Burp Suite, Metasploit, Nmap.
• Familiarity with cloud-native security concepts: IAM, VPC, S3, Key Management, Containers, Serverless, API Gateway, WAF, CloudTrail.
• Knowledge of network, API, and web application security.
• Solid understanding of cloud attack vectors: SSRF, misconfigurations, privilege escalation, credential theft, etc.
• Ability to produce high-quality penetration testing reports.
• Scripting skills (Python, PowerShell, Bash) for automation (see the sketch after this list).
• Certifications preferred: OSCP, OSWE, CEH, CCSP, AWS Security Specialty, Azure Security Engineer.
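As a hedged example of that automation, a boto3 script that flags S3 buckets whose public access block is missing or incomplete, one of the misconfigurations tools like Prowler also check:

    # s3_public_check.py - sketch: flag buckets without a full public access block.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            exposed = not all(cfg.values())   # any of the four flags off is worth a look
        except ClientError as err:
            # No configuration at all means nothing blocks public access settings.
            exposed = err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration"
        if exposed:
            print(f"REVIEW: s3://{name} has an incomplete or missing public access block")
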
⭐ Preferred Personality Traits
• Strong analytical and exploit development mindset.
• Detail-oriented with the ability to think like an attacker.
• Strong communication skills for explaining findings.
• Continuous learner with curiosity about emerging cloud threats.
Profile: Sr. Devops Engineer
Location: Gurugram
Experience: 5+ Years
Notice Period: Immediate to 1 week
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement.
MUST-HAVES:
- Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
- Notice period - 0 to 15 days only
- Hybrid work mode- 3 days office, 2 days at home
SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS
ADDITIONAL GUIDELINES:
- Interview process: - 2 Technical round + 1 Client round
- 3 days in office, Hybrid model.
CORE RESPONSIBILITIES:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods to deep learning along with employing LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
SKILLS:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to BigTable), OpenSearch, and Neo4j graph.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
REQUIRED EXPERIENCE:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements.
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
ROLES AND RESPONSIBILITIES:
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
KEY RESPONSIBILITIES:
Cloud & Infrastructure as Code (IaC)-
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, secure access policies, and AWS Organizations.
Databases & Analytics:
- Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Experience with Apache Airflow and Spark.
IDEAL CANDIDATE:
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES, etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/bash, Python, or similar) for automation.
- Bachelor's or Master's degree
- Effective communication skills
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
CTC: up to 40 LPA
Mandatory Criteria (can't be neglected during screening):
Need candidates from growing startups or product-based companies only
1. 4–6 years' experience in backend engineering
2. Minimum 2+ years hands-on experience with:
- TypeScript
- Express.js / Nest.js
3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)
4. Strong understanding of system design & scalable architecture
5. Hands-on experience in:
- Event-driven architecture / Domain-driven design
- MVC / Microservices
6. Strong in automated testing (especially integration tests)
7. Experience with CI/CD pipelines (GitHub Actions or similar)
8. Experience managing production systems
9. Solid understanding of performance, reliability, observability
10. Cloud experience (AWS preferred; GCP/Azure acceptable)
11. Strong coding standards — Clean Code, code reviews, refactoring
If interested, kindly share your updated resume at 82008 31681.
Are you eager to kick-start your career in DevOps and learn the latest technologies to solve complex problems? Do you enjoy hands-on problem-solving, exploring cloud technologies, and supporting innovative solutions? At Aivar, we are looking for a DevOps Engineer to join our team.
In this role, you will assist in the implementation and support of DevOps practices, including containerization, orchestration, and CI/CD pipelines, while learning from industry experts.
This is an exciting opportunity to grow your skills and work on transformative projects in a collaborative environment.
Requirements
Preferred Technical Qualifications
- 2 – 5 years of experience in DevOps, system administration, or software development (internship experience is acceptable).
- Familiarity with container technologies such as Docker and Kubernetes.
- Understanding of Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Knowledge of CI/CD tools like Jenkins, GitLab CI, or GitHub Actions.
- Programming experience in Python, Java, or another language used in DevOps workflows.
- Understanding of cloud platforms such as AWS, Azure, or GCP
- Willingness to learn advanced Kubernetes concepts and troubleshooting techniques.
Preferred Soft Skills
Collaboration Skills:
- Willingness to work in cross-functional teams and support the alignment of technical solutions with business goals.
- Eager to learn how to work effectively with customers, engineers, and architects to deliver DevOps solutions.
Effective Communication:
- Ability to communicate technical concepts clearly to team members and stakeholders.
- Desire to improve documentation and presentation skills to share ideas effectively.
Problem-Solving Mindset:
- Curiosity to explore and learn solutions for infrastructure challenges in DevOps environments.
- Interest in learning how to diagnose and resolve issues in containerized and distributed systems.
Adaptability and Continuous Learning:
- Strong desire to learn emerging DevOps tools and practices in a dynamic environment.
- Commitment to staying updated with trends in cloud computing and DevOps.
Team-Oriented Approach:
- Enthusiastic about contributing to a collaborative team environment and supporting overall project goals.
- Open to feedback and actively sharing knowledge to help the team grow.
Certifications (Optional but Preferred)
- Certified Kubernetes Application Developer (CKAD) or equivalent Linux Foundation certification
- Any beginner-level certifications in DevOps or cloud services are a plus.
- Any AWS Certification
Why Join Aivar?
At Aivar, we are re-imagining analytics consulting by integrating AI and machine learning to create repeatable solutions that deliver measurable business outcomes. With a culture centered on innovation, collaboration, and growth, we provide opportunities to work on transformative projects across industries.
About Diversity and Inclusion
We believe diversity drives innovation and growth. Our inclusive environment encourages individuals of all backgrounds to contribute their unique perspectives to shape the future of analytics.
We seek a skilled and motivated Azure DevOps engineer to join our dynamic team. The ideal candidate will design, implement, and manage CI/CD pipelines, automate deployments, and optimize cloud infrastructure using Azure DevOps tools and services. You will collaborate closely with development and IT teams to ensure seamless integration and delivery of software solutions in a fast-paced environment.
Responsibilities:
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Azure CLI.
- Monitor and optimize Azure environments to ensure high availability, performance, and security.
- Collaborate with development, QA, and IT teams to streamline the software development lifecycle (SDLC).
- Troubleshoot and resolve issues related to build, deployment, and infrastructure.
- Implement and manage version control systems, primarily using Git.
- Manage containerization and orchestration using tools like Docker and Kubernetes.
- Ensure compliance with industry standards and best practices for security, scalability, and reliability.
Description
Role Overview
We are seeking a highly skilled AWS Cloud Architect with proven experience in building AWS environments from the ground up—not just consuming existing services. This role requires an AWS builder mindset, capable of designing, provisioning, and managing multi-account AWS architectures, networking, security, and database platforms end-to-end.
Key Responsibilities
AWS Environment Provisioning:
- Design and provision multi-account AWS environments using best practices (Control Tower, Organizations).
- Set up and configure networking (VPC, Transit Gateway, Private Endpoints, Subnets, Routing, Firewalls).
- Provision and manage AWS database platforms (RDS, Aurora, DynamoDB) with high availability and security.
- Manage full AWS account lifecycle, including IAM roles, policies, and access controls.
Infrastructure as Code (IaC):
- Develop and maintain AWS infrastructure using Terraform and AWS CloudFormation.
- Automate account provisioning, networking, and security configuration.
Security & Compliance:
- Implement AWS security best practices, including IAM governance, encryption, and compliance automation.
- Use tools like AWS Config, GuardDuty, Security Hub, and Vault to enforce standards.
Automation & CI/CD:
- Create automation scripts in Python, Bash, or PowerShell for provisioning and management tasks (see the sketch after this list).
- Integrate AWS infrastructure with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD).
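A hedged sketch of the account-lifecycle automation above: creating a member account through AWS Organizations and polling the request. The email, account name, and role are placeholders; Control Tower environments would typically go through Account Factory instead.

    # create_account.py - sketch: provision a member account with AWS Organizations.
    import time
    import boto3

    org = boto3.client("organizations")
    resp = org.create_account(
        Email="aws-sandbox@example.com",       # placeholder root email
        AccountName="sandbox-01",              # placeholder account name
        RoleName="OrganizationAccountAccessRole",
    )
    request_id = resp["CreateAccountStatus"]["Id"]

    while True:
        status = org.describe_create_account_status(
            CreateAccountRequestId=request_id)["CreateAccountStatus"]
        if status["State"] != "IN_PROGRESS":
            break
        time.sleep(10)

    if status["State"] == "SUCCEEDED":
        print("account created:", status["AccountId"])
    else:
        print("failed:", status.get("FailureReason"))
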
Monitoring & Optimization:
- Implement monitoring solutions (CloudWatch, Prometheus, Grafana) for infrastructure health and performance.
- Optimize cost, performance, and scalability of AWS environments.
Required Skills & Experience:
- 10+ years of experience in Cloud Engineering, with 7+ years focused on AWS provisioning.
- Strong expertise in(Must Have):
• AWS multi-account setup (Control Tower/Organizations)
• VPC design and networking (Transit Gateway, Private Endpoints, routing, firewalls)
• IAM policies, role-based access control, and security hardening
• Database provisioning (RDS, Aurora, DynamoDB)
- Proficiency in Terraform and AWS CloudFormation.
- Hands-on experience with scripting (Python, Bash, PowerShell).
- Experience with CI/CD pipelines and automation tools.
- Familiarity with monitoring and logging tools.
Preferred Certifications
- AWS Certified Solutions Architect – Professional
- AWS Certified DevOps Engineer – Professional
- HashiCorp Certified: Terraform Associate
Looking for Immediate Joiners or 15 days of Notice period candidates Only.
• Should have created more than 200-300 accounts from scratch using Control Tower or AWS services.
• Should have at least 7+ years of working experience in AWS
• AWS multi-account setup (Control Tower/Organizations)
• VPC design and networking (Transit Gateway, Private Endpoints, routing, firewalls)
• IAM policies, role-based access control, and security hardening
• Database provisioning (RDS, Aurora, DynamoDB)
- Proficiency in Terraform and AWS CloudFormation.
- Hands-on experience with scripting (Python, Bash, PowerShell).
- Experience with CI/CD pipelines and automation tools.
First 3 months will be remote (office timings: 4:30 PM to 1:30 AM)
After 3 months, work from office (standard office timings)
Review Criteria
- Strong DevOps /Cloud Engineer Profiles
- Must have 3+ years of experience as a DevOps / Cloud Engineer
- Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
- Must have strong hands-on experience in Linux administration and system management
- Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
- Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
- Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
- Must be proficient in scripting languages such as Python or Bash for automation
- Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
- Top tier Product-based company (B2B Enterprise SaaS preferred)
Preferred
- Experience in multi-tenant SaaS infrastructure scaling.
- Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.
Role & Responsibilities
We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.
Key Responsibilities:
- Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
- Build and optimize CI/CD pipelines to support rapid release cycles.
- Manage containerization & orchestration (Docker, Kubernetes).
- Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
- Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.); see the sketch after this list.
- Drive cloud security automation (IAM, SSL, secrets management).
- Partner with engineering teams to embed DevOps into SDLC.
- Troubleshoot production issues and drive incident response.
- Support multi-tenant SaaS scaling strategies.
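A minimal sketch of that monitoring item from the application side, assuming the prometheus_client library: the service exposes metrics for Prometheus to scrape and Grafana to chart; the route name and port are placeholders:

    # app_metrics.py - sketch: expose request metrics for Prometheus to scrape.
    import random
    import time
    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("app_requests_total", "Total requests", ["route", "status"])
    LATENCY = Histogram("app_request_seconds", "Request latency", ["route"])

    def handle_request(route: str) -> None:
        # Hypothetical handler; real services instrument their actual endpoints.
        with LATENCY.labels(route=route).time():
            time.sleep(random.uniform(0.01, 0.1))   # stand-in for real work
        REQUESTS.labels(route=route, status="200").inc()

    if __name__ == "__main__":
        start_http_server(9100)      # Prometheus scrapes http://host:9100/metrics
        while True:
            handle_request("/checkout")
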
Ideal Candidate
- 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
- Strong expertise in AWS, Azure, or GCP.
- Strong expertise in Linux administration.
- Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
- Proficient in Terraform/Ansible/CloudFormation.
- Strong scripting skills (Python, Bash).
- Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
- Strong grasp of cloud security best practices.
Job Details
- Job Title: Lead II - Software Engineering - AI, NLP, Python, Data Science
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 7-9 years
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Job Description:
Role Proficiency:
Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions. Account for others' developmental activities; assist the Project Manager in day-to-day project execution.
Additional Comments:
Mandatory Skills: Data Science
Skills to Evaluate: AI, Gen AI, RAG, Data Science
Experience: 8 to 10 Years
Location: Bengaluru
Job Description
Job Title: AI Engineer
Mandatory Skills: Artificial Intelligence, Natural Language Processing, Python, Data Science
Position: AI Engineer – LLM & RAG Specialization
Company Name: Sony India Software Centre
About the role:
We are seeking a highly skilled AI Engineer with 8-10 years of experience to join our innovation-driven team. This role focuses on the design, development, and deployment of advanced enterprise-scale Large Language Models (eLLM) and Retrieval Augmented Generation (RAG) solutions. You will work on end-to-end AI pipelines, from data processing to cloud deployment, delivering impactful solutions that enhance Sony’s products and services.
Key Responsibilities:
- Design, implement, and optimize LLM-powered applications, ensuring high performance and scalability for enterprise use cases.
- Develop and maintain RAG pipelines, including vector database integration (e.g., Pinecone, Weaviate, FAISS) and embedding model optimization.
- Deploy, monitor, and maintain AI/ML models in production, ensuring reliability, security, and compliance.
- Collaborate with product, research, and engineering teams to integrate AI solutions into existing applications and workflows.
- Research and evaluate the latest LLM and AI advancements, recommending tools and architectures for continuous improvement.
- Preprocess, clean, and engineer features from large datasets to improve model accuracy and efficiency.
- Conduct code reviews and enforce AI/ML engineering best practices.
- Document architecture, pipelines, and results; present findings to both technical and business stakeholders.
Requirements:
- 8-10 years of professional experience in AI/ML engineering, with at least 4+ years in LLM development and deployment.
- Proven expertise in RAG architectures, vector databases, and embedding models.
- Strong proficiency in Python; familiarity with Java, R, or other relevant languages is a plus.
- Experience with AI/ML frameworks (PyTorch, TensorFlow, etc.) and relevant deployment tools.
- Hands-on experience with cloud-based AI platforms such as AWS SageMaker, AWS Q Business, AWS Bedrock, or Azure Machine Learning.
- Experience in designing, developing, and deploying agentic AI systems, with a focus on creating autonomous agents that can reason, plan, and execute tasks to achieve specific goals.
- Understanding of security concepts in AI systems, including vulnerabilities and mitigation strategies.
- Solid knowledge of data processing, feature engineering, and working with large-scale datasets.
- Experience in designing and implementing AI-native applications and agentic workflows using the Model Context Protocol (MCP) is nice to have.
- Strong problem-solving skills, analytical thinking, and attention to detail.
- Excellent communication skills with the ability to explain complex AI concepts to diverse audiences.
Day-to-day responsibilities:
- Design and deploy AI-driven solutions to address specific security challenges, such as threat detection, vulnerability prioritization, and security automation.
- Optimize LLM-based models for various security use cases, including chatbot development for security awareness or automated incident response.
- Implement and manage RAG pipelines for enhanced LLM performance.
- Integrate AI models with existing security tools, including Endpoint Detection and Response (EDR), Threat and Vulnerability Management (TVM) platforms, and data science/analytics platforms; this involves working with APIs and understanding data flows.
- Develop and implement metrics to evaluate the performance of AI models.
- Monitor deployed models for accuracy and performance, and retrain as needed.
- Adhere to security best practices and ensure that all AI solutions are developed and deployed securely, considering data privacy and compliance requirements.
- Work closely with other team members to understand security requirements and translate them into AI-driven solutions.
- Communicate effectively with stakeholders, including senior management, to present project updates and findings.
- Stay up to date with the latest advancements in AI/ML and security, and identify opportunities to leverage new technologies to improve our security posture.
- Maintain thorough documentation of AI models, code, and processes.
What We Offer:
- Opportunity to work on cutting-edge LLM and RAG projects with global impact.
- A collaborative environment fostering innovation, research, and skill growth.
- Competitive salary, comprehensive benefits, and flexible work arrangements.
- The chance to shape AI-powered features in Sony’s next-generation products.
- The ability to function in an environment where the team is virtual and geographically dispersed.
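For illustration only (not Sony's actual stack), a tiny RAG loop: embed documents, index them in FAISS, retrieve the nearest chunks for a question, and assemble a grounded prompt for whichever LLM endpoint the platform uses. The embedding model and documents are placeholder assumptions:

    # rag_sketch.py - sketch: minimal retrieval-augmented generation with FAISS.
    import numpy as np
    import faiss
    from sentence_transformers import SentenceTransformer

    docs = [                                         # placeholder knowledge base
        "Password resets are handled from the account security page.",
        "Invoices are emailed on the first business day of each month.",
        "Support hours are 9am-6pm IST on weekdays.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    emb = model.encode(docs, normalize_embeddings=True)
    index = faiss.IndexFlatIP(emb.shape[1])          # inner product = cosine (normalized)
    index.add(np.asarray(emb, dtype="float32"))

    def retrieve(query: str, k: int = 2) -> list:
        q = model.encode([query], normalize_embeddings=True)
        _, ids = index.search(np.asarray(q, dtype="float32"), k)
        return [docs[i] for i in ids[0]]

    question = "When do invoices go out?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    print(prompt)   # hand this to the LLM endpoint (Bedrock, SageMaker, etc.)
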
Education Qualification: Graduate
Skills: AI, NLP, Python, Data science
Must-Haves
Skills
AI, NLP, Python, Data science
NP: Immediate – 30 Days
Job Details
- Job Title: ML Engineer II - AWS, AWS Cloud
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 6-12 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description:
Core Responsibilities:
? The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
? Model Development: Algorithms and architectures span traditional statistical methods to deep learning along with employing LLMs in modern frameworks.
? Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
? Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
? System Integration: Integrate models into existing systems and workflows.
? Model Deployment: Deploy models to production environments and monitor performance.
? Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
? Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other technology touchpoints include ScyllaDB (comparable to Bigtable), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring (MLOps): Experience deploying ML models to production environments; knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements (a minimal sketch follows this list).
- AWS Cloud Infrastructure: Familiarity with S3, EC2, and Lambda, and with using these services in ML workflows.
- AWS Data: Redshift, Glue.
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS).
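As a rough illustration of the SageMaker workflow named in the list above, this sketch launches a managed training job with the SageMaker Python SDK; the image URI, IAM role, and S3 paths are placeholders, not details from this posting.

    # Launch a SageMaker training job from a custom container (all ARNs/paths assumed).
    import sagemaker
    from sagemaker.estimator import Estimator

    estimator = Estimator(
        image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/train:latest",  # placeholder
        role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",           # placeholder
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://example-bucket/models/",                              # placeholder
        sagemaker_session=sagemaker.Session(),
    )
    estimator.fit({"train": "s3://example-bucket/data/train/"})  # blocks until the job ends

In a pipeline setting, the same step would typically be wrapped in a sagemaker.workflow TrainingStep so gaps can be analyzed and stages improved independently.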
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
AWS, AWS Cloud, Amazon Redshift, EKS
Notice Period: Immediate – 30 Days
Full-Stack Developer
Exp: 5+ years required
Night shift: 8 PM–5 AM / 9 PM–6 AM
Only immediate joiners can apply.
We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation); a minimal sketch follows this list.
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.
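For the IaC item above, here is a minimal AWS CDK (v2) sketch in Python; the stack name, Lambda asset folder, and runtime are illustrative assumptions only.

    # Provision an S3 bucket plus a Lambda function that can read from it.
    from aws_cdk import App, Stack, aws_s3 as s3, aws_lambda as _lambda
    from constructs import Construct

    class DemoStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            bucket = s3.Bucket(self, "UploadsBucket", versioned=True)
            handler = _lambda.Function(
                self, "ApiHandler",
                runtime=_lambda.Runtime.PYTHON_3_11,
                handler="index.handler",
                code=_lambda.Code.from_asset("lambda"),  # local folder with index.py (assumed)
            )
            bucket.grant_read(handler)  # CDK generates the least-privilege IAM policy

    app = App()
    DemoStack(app, "DemoStack")
    app.synth()  # `cdk deploy` turns the synthesized template into real resources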
Job Description:
We are looking for a Lead Java Developer – Backend with a strong foundation in software engineering and hands-on experience in designing and building scalable, high-performance backend systems. You’ll be working within our Digital Engineering Studios on impactful and transformative projects in a fast-paced environment.
Key Responsibilities:
- Lead and mentor backend development teams.
- Design and develop scalable backend applications using Java and Spring Boot.
- Ensure high standards of code quality through best practices such as SOLID principles and clean code.
- Participate in pair programming, code reviews, and continuous integration processes.
- Drive Agile, Lean, and Continuous Delivery practices like TDD, BDD, and CI/CD.
- Collaborate with cross-functional teams and clients for successful delivery.
Required Skills & Experience:
- 9–12+ years of experience in backend development (Up to 17 years may be considered).
- Strong programming skills in Java and backend frameworks such as Spring Boot.
- Experience in designing and building large-scale, custom-built, scalable applications.
- Sound understanding of Object-Oriented Design (OOD) and SOLID principles.
- Hands-on experience with Agile methodologies, TDD/BDD, CI/CD pipelines.
- Familiarity with DevOps practices, Docker, Kubernetes, and Infrastructure as Code.
- Good understanding of cloud technologies – especially AWS, and exposure to GCP or Azure.
- Experience working in a product engineering environment is a plus.
- Startup experience or working in fast-paced, high-impact teams is highly desirable.
- Develop and maintain Java applications using Core Java, the Spring framework, JDBC, and threading concepts.
- Strong understanding of the Spring framework and its various modules.
- Experience with JDBC for database connectivity and manipulation.
- Utilize database management systems to store and retrieve data efficiently.
- Proficiency in Core Java 8 and a thorough understanding of threading concepts and concurrent programming.
- Experience working with relational and NoSQL databases.
- Basic understanding of cloud platforms such as Azure and GCP; experience with DevOps practices is an added advantage.
- Knowledge of containerization technologies (e.g., Docker, Kubernetes)
- Perform debugging and troubleshooting of applications using log analysis techniques.
- Understand multi-service flow and integration between components.
- Handle large-scale data processing tasks efficiently and effectively.
- Hands-on experience using Spark is an added advantage.
- Good problem-solving and analytical abilities.
- Collaborate with cross-functional teams to identify and solve complex technical problems.
- Knowledge of Agile methodologies such as Scrum or Kanban
- Stay updated with the latest technologies and industry trends to continuously improve development processes and methodologies.
If interested, please share your resume with the following details:
Total Experience -
Relevant Experience in Java, Spring, Data Structures, Algorithms, SQL -
Relevant Experience in Cloud - AWS/Azure/GCP -
Current CTC -
Expected CTC -
Notice Period -
Reason for change -
Job Summary:
We are seeking an experienced and highly motivated Senior Python Developer to join our dynamic and growing engineering team. This role is ideal for a seasoned Python expert who thrives in a fast-paced, collaborative environment and has deep experience building scalable applications, working with cloud platforms, and automating infrastructure.
Key Responsibilities:
Develop and maintain scalable backend services and APIs using Python, with a strong emphasis on clean architecture and maintainable code.
Design and implement RESTful APIs using frameworks such as Flask or FastAPI, and integrate with relational databases using ORM tools like SQLAlchemy (a minimal sketch follows this list).
Work with major cloud platforms (AWS, GCP, or Oracle Cloud Infrastructure) using Python SDKs to build and deploy cloud-native applications.
Automate system and infrastructure tasks using tools like Ansible, Chef, or other configuration management solutions.
Implement and support Infrastructure as Code (IaC) using Terraform or cloud-native templating tools to manage resources effectively.
Work across both Linux and Windows environments, ensuring compatibility and stability across platforms.
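As one illustrative shape for the APIs described above, this minimal Flask + Flask-SQLAlchemy sketch exposes a create/read endpoint; the Task model, routes, and SQLite URI are assumptions, not this team's schema.

    # Tiny REST API: POST /tasks to create, GET /tasks/<id> to read.
    from flask import Flask, jsonify, request
    from flask_sqlalchemy import SQLAlchemy

    app = Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///demo.db"  # placeholder DB
    db = SQLAlchemy(app)

    class Task(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        title = db.Column(db.String(120), nullable=False)

    @app.route("/tasks", methods=["POST"])
    def create_task():
        task = Task(title=request.json["title"])
        db.session.add(task)
        db.session.commit()
        return jsonify({"id": task.id, "title": task.title}), 201

    @app.route("/tasks/<int:task_id>")
    def get_task(task_id):
        task = db.session.get(Task, task_id)  # SQLAlchemy 1.4+/2.x style lookup
        if task is None:
            return jsonify({"error": "not found"}), 404
        return jsonify({"id": task.id, "title": task.title})

    if __name__ == "__main__":
        with app.app_context():
            db.create_all()  # create tables for the demo
        app.run()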
Required Qualifications:
5+ years of professional experience in Python development, with a strong portfolio of backend/API projects.
Strong expertise in Flask, SQLAlchemy, and other Python-based frameworks and libraries.
Proficient in asynchronous programming and event-driven architecture using tools such as asyncio, Celery, or similar.
Solid understanding and hands-on experience with cloud platforms – AWS, Google Cloud Platform, or Oracle Cloud Infrastructure.
Experience using Python SDKs for cloud services to automate provisioning, deployment, or data workflows.
Practical knowledge of Linux and Windows environments, including system-level scripting and debugging.
Automation experience using tools such as Ansible, Chef, or equivalent configuration management systems.
Experience implementing and maintaining CI/CD pipelines with industry-standard tools.
Familiarity with Docker and container orchestration concepts (e.g., Kubernetes is a plus).
Hands-on experience with Terraform or equivalent infrastructure-as-code tools for managing cloud environments.
Excellent problem-solving skills, attention to detail, and a proactive mindset.
Strong communication skills and the ability to collaborate with diverse technical teams.
Preferred Qualifications (Nice to Have):
Experience with other Python frameworks (FastAPI, Django)
Knowledge of container orchestration tools like Kubernetes
Familiarity with monitoring tools like Prometheus, Grafana, or Datadog
Prior experience working in an Agile/Scrum environment
Contributions to open-source projects or technical blogs

A leading provider of electronic trading solutions in India. With over 1,000 clients and a presence in more than 400 cities, we have established ourselves as a trusted partner for brokerages across the nation. Our commitment to excellence is reflected in millions of active end users and our reputation for delivering the best customer service in the industry.
We are looking for an experienced Cloud Engineering & DevOps Leader with proven expertise in building and managing large-scale SaaS platforms in the financial services or high-transaction domain. The ideal candidate will have a strong background in AWS cloud infrastructure, DevOps automation, compliance frameworks (ISO, VAPT), and cost optimization strategies.
● Cloud Platforms: AWS (Lambda, EC2, VPC, CloudFront, Auto Scaling, etc.)
● DevOps & Automation: Python, CI/CD, Infrastructure as Code, Monitoring/Alerting systems
● Monitoring & Logging: ELK stack, Kafka, Redis, Grafana
● Networking & Virtualization: Virtual Machines, Firewalls, Load Balancers, DR setup
● Compliance & Security: ISO Audits, VAPT, ISMS, DR drills, high-availability planning
● Leadership & Management: Team leadership, project management, stakeholder collaboration
Preferred Profile
● Experience: 15–20 years in infrastructure, cloud engineering, or DevOps roles, with at least 5 years in a leadership position.
● Domain Knowledge: Experience in broking, financial services, or high-volume trading platforms is strongly preferred.
● Education: Bachelor’s Degree in Engineering / Computer Science / Electronics or a related field.
● Soft Skills: Strong problem-solving, cost-conscious approach, ability to work under pressure, cross-functional collaboration.
Job Title: Sr DevOps Engineer
Location: Bengaluru, India (Hybrid work type)
Reports to: Sr Engineering Manager
About Our Client :
We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our Client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision making, and enable intelligent supply chains without the costly infrastructure.
About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you.
What You'll Do 🛠️
- Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
- Billing & Cost Optimization: Monitor and optimize cloud spending (a minimal sketch follows this list).
- Containerization & Orchestration: Deploy and manage applications and orchestrate them.
- Database Management: Deploy, manage, and optimize database instances and their lifecycles.
- Authentication Solutions: Implement and manage authentication systems.
- Backup & Recovery: Implement robust backup and disaster recovery strategies for Kubernetes clusters and databases.
- Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
- Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
- Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
- Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks.
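For the billing and cost-optimization items above, a minimal boto3 sketch against the AWS Cost Explorer API could report last month's spend per service; the date window is a placeholder.

    # Print last month's unblended cost per AWS service.
    import boto3

    ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer endpoint
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},  # placeholder window
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f'{group["Keys"][0]}: ${amount:,.2f}')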
What You'll Bring 💼
- Minimum of 4 years of experience in a DevOps or SRE role.
- Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
- Solid understanding of Linux fundamentals and command-line tools.
- Extensive experience with CI/CD tools, particularly GitLab CI.
- Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
- Proven experience deploying and managing microservices.
- Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
- Experience with Identity and Access management solutions like Keycloak.
- Experience implementing backup and recovery solutions.
- Familiarity with optimizing scaling, ideally with Karpenter.
- Proficiency in scripting (Python, Bash).
- Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
- Excellent problem-solving and communication skills.
Bonus Points ➕
- Basic understanding of MQTT or general IoT concepts and protocols.
- Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
- Knowledge of specific AWS services relevant to application stacks.
- Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
- AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).
Why this role:
• You will help build the company from the ground up—shaping our culture and having an impact from Day 1 as part of the foundational team.
* Bachelor's degree in computer science or related fields preferred.
* 8+ years of experience developing core Java applications across enterprise, SME, or start-up environments.
* Proven experience with distributed systems and event-driven architectures.
* Expertise in Spring Boot, Spring Framework, and RESTful API development.
* Experience in designing, building, and monitoring microservices.
* Solid background in persistence technologies including JPA, Hibernate, MS-SQL, and PostgreSQL.
* Proficient in Java 11+, including features like Streams, Lambdas, and Functional Programming.
* Experience with CI/CD pipelines using tools such as Jenkins, GitLab CI, GitHub Actions, or AWS DevOps.
* Familiarity with major cloud platforms: AWS, Azure, or GCP (AWS preferred).
* Front-end development experience using React or Angular, with a good understanding of best practices around HTML, CSS3/Tailwind, and responsive design.
* Comfortable in Agile environments with iterative development and regular demos.
* Experience with container orchestration using managed Kubernetes (EKS, AKS, or GKE).
* Working knowledge of Domain-Driven Design (DDD) and Backend-for-Frontend (BFF) concepts.
* Hands-on experience integrating applications with cloud services.
* Familiarity with event-driven technologies (e.g., Kafka, MQ, Event Buses).
* Hospitality services domain experience is a plus.
* Strong problem-solving skills, with the ability to work independently and in a team.
* Proficiency in Agile methodologies and software development best practices.
* Skilled in code and query optimization.
* Experience with version control systems, particularly Git.
Requirements
- Bachelors/Masters in Computer Science or a related field
- 5-8 years of relevant experience
- Proven track record of Team Leading/Mentoring a team successfully.
- Experience with web technologies and microservices architecture both frontend and backend.
- Java, Spring framework, hibernate
- MySQL, Mongo, Solr, Redis
- Kubernetes, Docker
- Strong understanding of Object-Oriented Programming, Data Structures, and Algorithms.
- Excellent teamwork skills, flexibility, and ability to handle multiple tasks.
- Experience with API Design, ability to architect and implement an intuitive customer and third-party integration story
- Ability to think and analyze both breadth-wise (client, server, DB, control flow) and depth-wise (threads, sessions, space-time complexity) while designing and implementing services
- Exceptional design and architectural skills
- Experience with cloud providers/platforms like GCP and AWS
Roles & Responsibilities
- Develop new user-facing features.
- Work alongside the product team to understand requirements; design, develop, and iterate while thinking through the complex architecture.
- Writing clean, reusable, high-quality, high-performance, maintainable code.
- Encourage innovation and efficiency improvements to ensure processes are productive.
- Ensure the training and mentoring of the team members.
- Ensure the technical feasibility of UI/UX designs and optimize applications for maximum speed.
- Research and apply new technologies, techniques, and best practices.
- Team mentorship and leadership.
Job title - Python developer
Exp – 4 to 6 years
Location – Pune / Mumbai / Bengaluru
Please find the JD below:
Requirements:
- Proven experience as a Python Developer
- Strong knowledge of core Python and PySpark concepts
- Experience with web frameworks such as Django or Flask
- Good exposure to any cloud platform (GCP Preferred)
- CI/CD exposure required
- Solid understanding of RESTful APIs and how to build them
- Experience working with databases like Oracle DB and MySQL
- Ability to write efficient SQL queries and optimize database performance
- Strong problem-solving skills and attention to detail
- Strong SQL programming (stored procedures, functions)
- Excellent communication and interpersonal skills
Roles and Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark (a minimal sketch follows this list).
- Work closely with data scientists and analysts to provide them with clean, structured data.
- Optimize data storage and retrieval for performance and scalability.
- Collaborate with cross-functional teams to gather data requirements.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
- Stay up to date with industry best practices and emerging technologies in data engineering.
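A minimal PySpark sketch of such a pipeline follows; the S3 paths and column names are illustrative assumptions, not the actual data model.

    # Read raw CSV, validate/clean, aggregate, and write curated Parquet.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    orders = spark.read.csv("s3://example/raw/orders.csv", header=True, inferSchema=True)

    clean = (
        orders
        .dropna(subset=["order_id", "amount"])              # basic data validation
        .withColumn("order_date", F.to_date("order_date"))
        .filter(F.col("amount") > 0)                        # drop bad rows
    )

    daily = clean.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))
    daily.write.mode("overwrite").parquet("s3://example/curated/daily_revenue/")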
Link to apply - https://tally.so/r/wv0lEA
Key Responsibilities:
- Software Development:
- Design, implement, and optimise clean, scalable, and reliable code across [backend/frontend/full-stack] systems.
- Contribute to the development of microservices, APIs, or UI components as per project requirements.
- System Architecture:
- Collaborate on the design and enhancement of system architecture.
- Analyse and identify opportunities for performance improvements and scalability.
- Code Reviews and Mentorship:
- Conduct thorough code reviews to ensure code quality, maintainability, and adherence to best practices.
- Mentor and support junior developers, fostering a culture of learning and growth.
- Agile Collaboration:
- Work within an Agile/Scrum framework, participating in sprint planning, daily stand-ups, and retrospectives.
- Collaborate with Carbon Science, Designer, and other stakeholders to translate requirements into technical solutions.
- Problem-Solving:
- Investigate, troubleshoot, and resolve complex issues in production and development environments.
- Contribute to incident management and root cause analysis to improve system reliability.
- Continuous Improvement:
- Stay up-to-date with emerging technologies and industry trends.
- Propose and implement improvements to existing codebases, tools, and development processes.
Qualifications:
Must-Have:
- Experience: 2–5 years of professional software development experience in [specify languages/tools, e.g., Java, Python, JavaScript, etc.].
- Education: Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- Technical Skills:
- Strong proficiency in [programming languages/frameworks/tools].
- Experience with cloud platforms like AWS, Azure, or GCP.
- Knowledge of version control tools (e.g., Git) and CI/CD pipelines.
- Understanding of data structures, algorithms, and system design principles.
Nice-to-Have:
- Experience with containerisation (e.g., Docker) and orchestration tools (e.g., Kubernetes).
- Knowledge of database technologies (SQL and NoSQL).
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent written and verbal communication skills.
- Ability to work in a fast-paced environment and manage multiple priorities effectively.
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
The client’s department DPS, Digital People Solutions, offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which includes the 40 largest and most liquid companies on the Frankfurt Stock Exchange.
We are seeking talented DevOps Engineers with a focus on the Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.
The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for the customer's employees and customers alike.
Responsibilities:
Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, the APM server, APM agents, and interface configuration.
Create and develop regular "Default Dashboards" for visualizing metrics from various sources such as the Apache web server, application servers, and databases.
Improve and fix bugs in installation and automation routines.
Monitor CPU usage, security findings, and AWS alerts.
Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.
Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).
Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.
Integrate data from AWS CloudWatch (a minimal sketch follows this list).
Document all relevant information and train involved personnel in the technologies used.
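As a rough sketch of the CloudWatch integration item above, this snippet reads one EC2 metric with boto3 and indexes it into Elasticsearch (elasticsearch-py 8.x); the endpoint, instance ID, and index name are placeholders.

    # Pull 15 minutes of CPU data from CloudWatch and index it into Elasticsearch.
    from datetime import datetime, timedelta, timezone
    import boto3
    from elasticsearch import Elasticsearch

    cw = boto3.client("cloudwatch", region_name="eu-central-1")
    es = Elasticsearch("https://elastic.example.internal:9200")  # placeholder endpoint

    now = datetime.now(timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        StartTime=now - timedelta(minutes=15),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    for point in resp["Datapoints"]:
        es.index(index="aws-metrics", document={
            "@timestamp": point["Timestamp"].isoformat(),
            "cpu_avg": point["Average"],
        })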
Requirements:
Experience with Elastic Stack (ELK) components and related technologies.
Proficiency in automation tools like Ansible and CloudFormation.
Strong knowledge of AWS Cloud services.
Experience in creating and managing dashboards and alerts.
Familiarity with IAM roles and rights management.
Ability to document processes and train team members.
Excellent problem-solving skills and attention to detail.
Skills & Requirements
Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.
Dear Candidate,
We are urgently hiring an AWS Cloud Engineer for the Bangalore location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: AWS Cloud
Salary: Best in Industry (20-25% Hike on the current ctc)
Note:
- Only immediate to 15-day joiners will be preferred.
- Only candidates from Tier 1 companies will be shortlisted and selected.
- Candidates with a notice period of more than 30 days will be rejected during screening.
- Offer shoppers will be rejected.
Job description:
Description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, with at least 5 years of relevant experience.
Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge; Redshift and Snowflake preferred
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages, including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, including Splunk and Datadog
Experience using Github Actions
Experience using AWS RDS/SQL based solutions
Experience working with streaming technologies, including Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a shared managed service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technical and hands-on
Provide incident and problem management on the AWS IaaS and PaaS platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor the AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third-party suppliers and AWS to jointly resolve incidents
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN
Job Title: MERN Stack Developer
Experience: 5+ Years
Shift Timings: 8:00 AM to 5:00 PM
Role Overview:
We are hiring a skilled MERN Stack Developer to build scalable web applications. You’ll work on both front-end and back-end, leveraging modern frameworks and cloud technologies to deliver high-quality solutions.
Key Responsibilities :
- Develop responsive UIs using React, GraphQL, and TypeScript.
- Build back-end APIs with Node.js, Express, and MySQL.
- Integrate AWS services like Lambda, S3, and API Gateway.
- Optimize deployments using AWS CDK and CloudFormation.
- Ensure code quality with Mocha/Chai/Sinon, ESLint, and Prettier.
Required Skills :
- Strong experience with React, Node.js, and GraphQL.
- Proficiency in AWS services and Infrastructure as Code (CDK/Terraform).
- Familiarity with MySQL, Elasticsearch, and modern testing frameworks.
Position - Full-Stack Developer
Location - Navi Mumbai
Freshers 0-3 yrs
Who are we
Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics based diagnostic solution for Tuberculosis was recognized as one of top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state-of-the-art, enterprise-standard front-end and backend services
- Develop cloud platform services based on container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust tech modules that are unit-testable, automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML5
- CSS frameworks (LESS / SASS / Tailwind)
- ES6 / TypeScript
- Desktop apps (Electron / Tauri)
- Component libraries (Bootstrap, Material UI, Lit)
- Responsive web layout (Flex layout, Grid layout)
- Package managers: yarn / npm / turbo
- Build tools: Vite / Webpack / Parcel
- Frameworks: React with Redux or MobX / Next.js
- Design patterns
- Testing: Jest / Mocha / Jasmine / Cypress
- Functional programming concepts (good to have)
- Scripting (PowerShell, Bash, Python)
Backend Skills
- Node.js - Express / NestJS
- Python / Rust
- REST APIs
- SOLID design principles
- Databases (PostgreSQL / MySQL / Redis / Cassandra / MongoDB)
- Caching (Redis)
- Container technology (Docker / Kubernetes)
- Cloud (Azure, AWS, OpenShift, Google Cloud)
- Version control - Git
- GitOps
- Automation (Terraform, Ansible)
Cloud Skills
- Object storage
- VPC concepts
- Containerized deployments
- Serverless architecture
Other Skills
- Innovation and thought leadership
- UI - UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
To know more about us- https://haystackanalytics.in/
Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment (a minimal sketch follows this list).
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
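A compressed sketch of that lifecycle (train, evaluate, persist an artifact) using scikit-learn follows; the dataset and file name are illustrative stand-ins for a real project.

    # Train, evaluate, and persist a model artifact for the deployment step.
    import joblib
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)
    print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

    joblib.dump(model, "model-v1.joblib")  # artifact handed to deployment/monitoring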
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
Hi All,
Job Description:
As a Java Developer, you will be responsible for developing and maintaining high-performance, scalable, and secure applications. We are seeking a skilled and motivated Java Developer with experience in the Spring Framework to join our dynamic team. This is a remote/work-from-home position, offering you the flexibility to work from anywhere.
Location : Remote / WFH
Salary : Good Hike on Current
Key Responsibilities:
- Design, develop, and maintain Java-based applications using the Spring Framework.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Write clean, maintainable, and efficient code.
- Ensure the performance, quality, and responsiveness of applications.
- Troubleshoot and debug issues to optimize application performance.
- Participate in code reviews to maintain high coding standards and best practices.
- Work with RESTful APIs and integrate third-party services.
- Contribute to all phases of the software development lifecycle, including requirements gathering, design, implementation, testing, and deployment.
Key Requirements:
- 2 to 5+ years of experience in Java development.
- Strong experience with the Spring Framework (Spring Boot, Spring MVC, Spring Data, etc.).
- Proficiency in building RESTful APIs and web services.
- Solid understanding of object-oriented programming and design patterns.
- Experience with relational databases like MySQL, PostgreSQL, or Oracle.
- Familiarity with version control systems, particularly Git.
- Knowledge of front-end technologies such as HTML, CSS, and JavaScript is a plus.
- Ability to work independently and as part of a remote team.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Preferred Qualifications:
- Experience with cloud platforms like AWS, Azure, or Google Cloud.
- Familiarity with microservices architecture.
- Knowledge of containerization tools such as Docker.
- Understanding of Agile/Scrum methodologies.
Benefits:
- Work-from-home/remote opportunities.
- Opportunities for professional growth and development.
- Collaborative and inclusive work environment.
What You Can Expect from Us:
Here at Nomiso, we work hard to provide our team with the best opportunities to grow their careers. You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership is at the center of everything we do, at all levels of the company. Let’s make your career great!
Position Overview:
The Principal Cloud Network Engineer is a key interface to client teams and is responsible for developing convincing technical solutions. This requires working closely with clients and multiple partner-vendor teams to architect the solution.
This position requires sound technical knowledge, proven business acumen, and a differentiating client-facing ability. You are required to anticipate, create, and define an innovative solution that matches the customer's needs and the client's tactical and strategic requirements.
Roles and Responsibilities:
- Design and implement next-generation networking technologies
- Deploy/support large-scale production network
- Track, analyze, and trend capacity on the broadcast network and datacenter infrastructure
- Provide Tier 3 escalated network support
- Perform fault management and problem resolution
- Work closely with other departments, vendors, and service providers
- Perform network change management, support modifications, and maintenance
- Perform network upgrade, maintenance, and repair work
- Lead implementation of new systems
- Perform capacity planning and management
- Suggest opportunities for improvement
- Create and support network management objectives, policies, and procedures
- Ensure network documentation is kept up-to-date
- Train and assist junior engineers.
Must Have Skills:
Candidates with overall 10+ years of experience in the following:
- Hands-on: Routers/Switches, Firewalls (Palo Alto or similar), Load Balancers (LTM, GTM), AWS networking (VPC, API Gateway, CloudFront, Route53, Cloud WAN, Direct Connect, PrivateLink, Transit Gateway), Wireless.
- Strong hands-on coding/scripting experience in one or more programming languages such as Python, Golang, Java, Bash, etc. (a minimal audit sketch follows this list).
- Networking technologies: routing protocols (BGP, EIGRP & OSPF), VRFs, VLANs, VRRP, LACP, MLAG, TACACS / RANCID / Git, IPSec VPN, DNS / DHCP, NAT / SNAT, IP Multicast, VPC, Transit Gateway, NAT Gateway, ALB/ELB, Security Groups, ACLs, HSRP, SNMP, DHCP.
- Managing hardware, IOS, coordinating with vendors/partners for support.
- Managing CDN, links, and VPN technologies, SDN / Cisco ACI (design and implementation), and Network Function Virtualization (NFV).
- Reviewing technology designs and architecture, taking local and regional regulatory requirements into account, for Voice, Video Solutions, Routing, Switching, VPN, LAN, WAN, Network Security, Firewalls, NGFW, NAT, IPS, Botnet, Application Control, DDoS, Web Filtering.
- Palo Alto Firewall / Panorama, BIG-IQ, and NetBrain tools/technology standards for daily support, to enhance performance and improve reliability.
- Creating a real-time contextual living map of the Client's network with detailed network specifications, including diagrams and equipment configurations with defined standards.
- Improving the reliability of the service; being proactive in identifying and preventing impact to customers by eliminating Single Points of Failure (SPOF).
- Capturing critical forensic data and providing complete visibility across the enterprise for security incidents as soon as a threat is detected, by implementing tools like NetBrain.
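In the spirit of the scripting requirement above, here is a minimal boto3 audit sketch that flags security groups with SSH open to the world; the region is a placeholder.

    # Flag security groups whose ingress rules allow SSH from 0.0.0.0/0.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            open_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if open_world and rule.get("FromPort") == 22:
                print(f"open SSH: {sg['GroupId']} ({sg.get('GroupName', '')})")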
Good to Have Skills:
- Industry certifications on Switching, Routing and Security.
- Elastic Load Balancing (ELB), DNS / DHCP, IPSec VPN, Multicast, TACACS / RANCID / Git, ALB/ELB
- AWS Control Tower
- Experience leading a team of 5 or more.
- Strong Analytical and Problem Solving Skills.
- Experience implementing / maintaining Infrastructure as Code (IaC)
- Certifications : CCIE, AWS Certified Advanced Networking
Position Overview
We are seeking a highly skilled React Native Developer with a strong background in the MERN (MongoDB, Express.js, React.js, Node.js) stack to join our dynamic team. The ideal candidate will have a minimum of 2 years of professional experience and a proven track record of developing robust, scalable mobile applications using React Native. This role offers an exciting opportunity to work on innovative projects and significantly contribute to the growth and success of our startup.
Key Responsibilities
● Develop High-Quality Mobile Applications: Create user-friendly mobile applications using React Native, ensuring high performance and responsiveness.
● Collaborate with Cross-Functional Teams: Work closely with product managers, designers, and other developers to define, design, and deliver new features.
● Maintain Clean and Efficient Code: Write clean, maintainable, and efficient code following industry best practices and coding standards.
● Optimize Application Performance: Enhance application performance for speed and scalability.
● Participate in Code Reviews: Engage in code reviews, discussions, and knowledge-sharing sessions to improve team output and code quality.
● Troubleshoot and Debug: Identify and resolve application issues to ensure smooth functionality.
● Stay Updated with Trends: Keep abreast of emerging technologies and trends in mobile development to incorporate best practices.
Requirements
● MERN Stack Proficiency: Strong expertise in MongoDB, Express.js, React.js, and Node.js.
● RESTful APIs and Web Services: Solid understanding and experience in integrating RESTful APIs and web services.
● Version Control Systems: Proficient with Git for version control.
● Mobile UI/UX Design Principles: Knowledge of best practices in mobile UI/UX design.
● Problem-Solving Skills: Excellent analytical and problem-solving abilities.
● Team and Independent Work: Ability to work both independently and as part of a team in a fast-paced startup environment.
● Communication and Collaboration: Strong verbal and written communication skills, with the ability to collaborate effectively with team members.
Nice to Have
● GraphQL: Familiarity with GraphQL for efficient data fetching.
● Cloud Services: Knowledge of cloud services such as AWS or Firebase.
Job Description:
We are seeking a motivated DevOps intern to join our team. The intern will be responsible for deploying and maintaining applications in AWS and Azure cloud environments, as well as on client local machines when required. The intern will troubleshoot any deployment issues and ensure the high availability of the applications.
Responsibilities:
- Deploy and maintain applications in AWS and Azure cloud environments
- Deploy applications on client local machines when needed
- Troubleshoot deployment issues and ensure high availability of applications
- Collaborate with development teams to improve deployment processes
- Monitor system performance and implement optimizations
- Implement and maintain CI/CD pipelines
- Assist in implementing security best practices
Requirements:
- Currently pursuing a degree in Computer Science, Engineering, or related field
- Knowledge of cloud computing platforms (AWS, Azure)
- Familiarity with containerization technologies (Docker, Kubernetes)
- Basic understanding of networking principles
- Strong problem-solving skills
- Excellent communication skills
Nice to Have:
- Familiarity with configuration management tools (e.g., Ansible, Chef, Puppet)
- Familiarity with monitoring and logging tools (e.g., Prometheus, ELK stack)
- Understanding of security best practices in cloud environments
Benefits:
- Hands-on experience with cutting-edge technologies.
- Opportunity to work on exciting AI and LLM projects
- Dynatrace Expertise: Lead the implementation, configuration, and optimization of Dynatrace monitoring solutions across diverse environments, ensuring maximum efficiency and effectiveness.
- Cloud Integration: Utilize expertise in AWS and Azure to seamlessly integrate Dynatrace monitoring into cloud-based architectures, leveraging PaaS services and IAM roles for efficient monitoring and management.
- Application and Infrastructure Architecture: Design and architect both application and infrastructure landscapes, considering factors like Oracle, SQL Server, Shareplex, Commvault, Windows, Linux, Solaris, SNMP polling, and SNMP traps.
- Cross-Platform Integration: Integrate Dynatrace with various products such as Splunk, APIM, and VMWare to provide comprehensive monitoring and analysis capabilities.
- Inter-Account Integration: Develop and implement integration strategies for seamless communication and monitoring across multiple AWS accounts, leveraging Terraform and IAM roles.
- Experience working with on-premise applications and infrastructure
- Experience with AWS & Azure; cloud certified
- Dynatrace experience & certification
- Java
- Spring Boot
- Database (preferably MySQL)
- Multithreading
- Low-Level Design (any module)
- GitHub
- LeetCode
- Data Structures
DevOps Lead Engineer
We are seeking a skilled DevOps Lead Engineer with 8 to 10 years of experience who can handle the entire DevOps lifecycle and be accountable for implementing the process. A DevOps Lead Engineer is responsible for automating all the manual tasks for developing and deploying code and data to implement continuous deployment and continuous integration frameworks. They are also responsible for maintaining high availability of production and non-production work environments.
Essential Requirements (must have):
• Bachelor's degree, preferably in Engineering.
• Solid 5+ years of experience with AWS, DevOps, and related technologies.
Skills Required:
Cloud Performance Engineering
• Performance scaling in a Micro-Services environment
• Horizontal scaling architecture
• Containerization (such as Docker) & deployment
• Container Orchestration (such as Kubernetes) & Scaling
DevOps Automation
• End to end release automation.
• Solid experience with DevOps tools like Git, Jenkins, Docker, Kubernetes, Terraform, Ansible, CFN, etc.
• Solid experience in Infra Automation (Infrastructure as Code), Deployment, and Implementation.
• Candidates must possess experience in using Linux, Jenkins, and ample experience in Configuring and automating the monitoring tools.
• Strong scripting knowledge
• Strong analytical and problem-solving skills.
• Cloud and On-prem deployments
Infrastructure Design & Provisioning
• Infra provisioning.
• Infrastructure Sizing
• Infra Cost Optimization
• Infra security
• Infra monitoring & site reliability.
Job Responsibilities:
• Responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment and that provide a stable environment for the delivery of quality software.
• The DevOps Lead Engineer is accountable for designing, building, configuring, and optimizing automation systems that help to execute business web and data infrastructure platforms.
• The DevOps Lead Engineer is involved in creating technology infrastructure and automation tools, and maintaining configuration management.
• The Lead DevOps Engineer oversees and leads the activities of the DevOps team. They are accountable for conducting training sessions for the juniors in the team, mentoring, and career support. They are also answerable for the architecture and technical leadership of the complete DevOps infrastructure.
Position: Java Developer
Experience: 3-8 Years
Location: Bengaluru
We are a multi-award-winning creative engineering company offering design and technology solutions on mobile, web, and cloud platforms. We are looking for an enthusiastic and self-driven Java Developer to join our team.
Roles and Responsibilities:
- Expert-level micro web services development skills using Java/J2EE/Spring
- Strong in SQL and NoSQL databases (MySQL / MongoDB preferred); ability to develop software programs with the best design patterns, data structures, and algorithms
- Work in a very challenging, high-performance environment to clearly understand and provide state-of-the-art solutions (via design and code)
- Ability to debug complex applications and help in providing durable fixes
- While the Java platform is primary, ability to understand, debug, and work on other application platforms using Ruby on Rails and Python
- Responsible for delivering feature changes and functional additions that handle millions of requests per day while adhering to quality and schedule targets
- Extensive knowledge of at least one cloud platform (AWS, Microsoft Azure, GCP), preferably AWS
- Strong unit testing skills for frontend and backend using any standard framework
- Exposure to application gateways and dockerized microservices
- Good knowledge of and experience with Agile, TDD, or BDD methodologies
Desired Profile:
- Programming language – Java
- Framework – Spring Boot
- Good Knowledge of SQL & NoSQL DB
- AWS Cloud Knowledge
- Micro Service Architecture
Good to Have:
- Familiarity with Web Front End (Java Script/React)
- Familiarity with working in Internet of Things / Hardware integration
- Docker & Kubernetes, Serverless Architecture
- Working experience in Energy Company (Solar Panels + Battery)
About this role
We are seeking an experienced MongoDB Developer/DBA who will be responsible for maintaining MongoDB databases while optimizing the performance, security, and availability of MongoDB clusters. As a key member of our team, you'll play a crucial role in ensuring our data infrastructure runs smoothly.
You'll have the following responsibilities
- Maintain and configure MongoDB instances: build, design, deploy, maintain, and lead the MongoDB Atlas infrastructure. Keep clear documentation of the database setup and architecture.
- Own governance: define and enforce policies in MongoDB Atlas. Provide consultancy in drawing up the design and infrastructure (MongoDB Atlas) for each use case.
- Put a service and governance wrap in place to restrict over-provisioning of server size, number of clusters per project, and scaling through MongoDB Atlas.
- Gather and document detailed business requirements applicable to the data layer; responsible for designing, configuring, and managing MongoDB on Atlas.
- Design, develop, test, document, and deploy high-quality technical solutions on the MongoDB Atlas platform, based on industry best practices, to solve business needs.
- Resolve technical issues raised by the team and/or customer and manage escalations as required.
- Migrate data from on-premise MongoDB and RDBMS to MongoDB Atlas.
- Communicate and collaborate with other technical resources and customers, providing timely updates on the status of deliverables, shedding light on technical issues, and obtaining buy-in on creative solutions.
- Write procedures for backup and disaster recovery.
You'll have the following skills & experience
Excellent analytical, diagnostic, and problem-solving skills
Solid understanding of database concepts and expertise in designing and developing NoSQL databases such as MongoDB
MongoDB query operations and database import/export operations
Experience in ETL methodology for performing data migration, extraction, transformation, data profiling, and loading
Experience migrating databases via ETL and via manual processes, including design, development, and implementation
General networking skills, especially in the context of a public cloud (e.g. AWS – VPC, subnets, routing tables, NAT/internet gateways, DNS, security groups)
Experience using Terraform as an IaC tool for setting up infrastructure on AWS Cloud
Performing database backups and recovery
Competence in at least one of the following languages (in no particular order): Java, C++, C#, Python, Node.js (JavaScript), Ruby, Perl, Scala, Go
Excellent communication skills: able to compromise while drawing out the risks and constraints associated with solutions; able to work independently and collaborate with other teams
Proficiency in configuring schemas and MongoDB data modeling
Strong understanding of SQL and NoSQL databases
Comfortable with MongoDB syntax
Experience with database security management
Performance optimization – ensure databases achieve maximum performance and availability, and design effective indexing strategies (a brief sketch follows this list)
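As an illustration of the indexing work described above, here is a minimal, hedged sketch using the official MongoDB Node.js driver in TypeScript. The connection string, database, collection, and field names are illustrative assumptions, not details from this posting.

```typescript
// Hedged sketch: create a compound index and verify the planner uses it.
// The "shop"/"orders" names and localhost URI are assumptions for illustration.
import { MongoClient } from "mongodb";

async function main() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const orders = client.db("shop").collection("orders");

  // Compound index supporting the query below: equality field first,
  // then the sort key, so the index can serve both filter and sort.
  await orders.createIndex({ customerId: 1, createdAt: -1 });

  // explain() shows whether the planner chose an IXSCAN instead of a COLLSCAN.
  const plan = await orders
    .find({ customerId: "c-123" })
    .sort({ createdAt: -1 })
    .explain("queryPlanner");
  console.log(JSON.stringify(plan.queryPlanner.winningPlan, null, 2));

  await client.close();
}

main().catch(console.error);
```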
Job Role - DevOps Infra Lead Engineer
About LenDenClub
LenDenClub is a leading peer-to-peer lending platform that provides an alternate investment opportunity to investors or lenders looking for high returns, matching them with creditworthy borrowers looking for short-term personal loans. With a total of 8 million users and 2 million+ investors on board, LenDenClub has become a go-to platform to earn returns in the range of 10%–12%. LenDenClub offers investors a convenient medium to browse thousands of borrower profiles to achieve better returns than traditional asset classes. Moreover, LenDenClub is safeguarded from market volatility and inflation, and provides a great way to diversify one's investment portfolio.
LenDenClub has raised US $10 million in a Series A round from an association of investors. The round valued LenDenClub at more than US $51 million, and the company has grown multifold since then.
Why work at LenDenClub
LenDenClub is a certified great place to work. The certification comes from the Great Place to Work Institute, Inc., a globally renowned firm dedicated to evaluating companies for their employee satisfaction on the grounds of high trust and high-performance culture at workplaces.
As a LenDenite, you will be a part of an enthusiastic and passionate group of individuals who own and love what they do. At LenDenClub we believe in creating leaders and with you coming on board you get to work with complete freedom to chase your ultimate career goal without any inhibitions.
Website - https://www.lendenclub.com
Location - Mumbai (Goregaon)
Responsibilities of a DevOps Infra Lead Engineer:
● Responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment. Identify and implement data storage methods like clustering to improve performance.
● Responsible for devising solutions for managing a vast number of documents in real time that enable quick search and analysis. Identify issues in production and implement monitoring solutions to overcome them.
● Stay abreast of industry trends and best practices. Research, test, and execute new techniques that can be reused and applied to software development projects.
● Accountable for designing, building, and optimizing the automation systems that run the business's web and data infrastructure platforms.
● Create technology infrastructure and automation tools, and maintain configuration management.
● To meet the engineering department's quality and standards, implement lifecycle infrastructure solutions and documentation operations.
● Implementation and maintenance of CI/CD pipelines
● Containerisation of applications
● Constructing and improving the security of the infrastructure
● Infrastructure as Code
● Maintaining environments
● NAT and ACLs
● Setup of ECS and ELB for high availability (see the sketch after this list)
● WAF, firewall, and DMZ configuration
● Deployment strategies for high uptime
● Setting up monitoring and policies for infra and applications
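As a hedged sketch of the "ECS and ELB for high availability" item above, the AWS CDK v2 (TypeScript) snippet below provisions a Fargate service spread across two availability zones behind an Application Load Balancer. The stack name, container image, and task count are illustrative assumptions, not details from this posting.

```typescript
// Hedged sketch: ECS Fargate service behind an ALB across two AZs.
import { App, Stack } from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as ecsPatterns from "aws-cdk-lib/aws-ecs-patterns";

const app = new App();
const stack = new Stack(app, "HaServiceStack");

// Two AZs give both the load balancer and the tasks redundancy.
const vpc = new ec2.Vpc(stack, "Vpc", { maxAzs: 2 });
const cluster = new ecs.Cluster(stack, "Cluster", { vpc });

new ecsPatterns.ApplicationLoadBalancedFargateService(stack, "Service", {
  cluster,
  desiredCount: 2, // at least one task per AZ for high availability
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
  },
  publicLoadBalancer: true,
});
```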
Required Skills
● Communication Skills
● Interpersonal Skills
● Infrastructure
● Aware of technologies like Python, MySQL, MongoDB, and so on
● Sound knowledge of cloud infrastructure
● Knowledge of fundamental Unix/Linux monitoring, editing, and command-line tools is essential
● Versed in scripting languages such as Ruby and Shell
● Google Cloud Platform, Hadoop, NoSQL databases, and big data clusters
● Knowledge of open source technologies
We are Seeking:
1. AWS Serverless, AWS CDK:
Proficiency in developing serverless applications using AWS Lambda, API Gateway, S3, and other relevant AWS services.
Experience with AWS CDK for defining and deploying cloud infrastructure.
Knowledge of serverless design patterns and best practices.
Understanding of Infrastructure as Code (IaC) concepts.
Experience in CI/CD workflows with AWS CodePipeline and CodeBuild.
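As a hedged illustration of the Lambda/API Gateway/S3/CDK combination listed above, here is a minimal AWS CDK v2 stack in TypeScript. The construct IDs, runtime, and asset path are illustrative assumptions, not requirements from the posting.

```typescript
// Hedged sketch: a Lambda function behind API Gateway plus an S3 bucket.
import { App, Stack, StackProps, RemovalPolicy } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as apigw from "aws-cdk-lib/aws-apigateway";
import * as s3 from "aws-cdk-lib/aws-s3";
import { Construct } from "constructs";

class ServerlessStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Bucket for application assets; destroyed with the stack in this sketch.
    const bucket = new s3.Bucket(this, "AssetsBucket", {
      removalPolicy: RemovalPolicy.DESTROY,
    });

    // Handler code is assumed to live in ./lambda relative to the CDK app.
    const handler = new lambda.Function(this, "ApiHandler", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("lambda"),
      environment: { BUCKET_NAME: bucket.bucketName },
    });
    bucket.grantReadWrite(handler);

    // REST API that proxies every route to the Lambda function.
    new apigw.LambdaRestApi(this, "Api", { handler });
  }
}

const app = new App();
new ServerlessStack(app, "ServerlessStack");
```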
2. TypeScript, React/Angular:
Proficiency in TypeScript.
Experience in developing single-page applications (SPAs) using React.js or Angular.
Knowledge of state management libraries like Redux (for React) or RxJS (for Angular).
Understanding of component-based architecture and modern frontend development practices.
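To make "component-based architecture" concrete, here is a minimal typed React function component in TSX; the component and prop names are purely illustrative.

```tsx
// Hedged sketch: a typed React function component with local state.
import React, { useState } from "react";

interface CounterProps {
  label: string;
  initial?: number;
}

// Props are typed at the component boundary, so misuse fails at compile time.
export function Counter({ label, initial = 0 }: CounterProps) {
  const [count, setCount] = useState(initial);
  return (
    <button onClick={() => setCount((c) => c + 1)}>
      {label}: {count}
    </button>
  );
}
```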
3. Node.js:
Strong proficiency in backend development using Node.js.
Understanding of asynchronous programming and event-driven architecture.
Familiarity with RESTful API development and integration.
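As a small sketch of asynchronous, event-driven handling in Node.js, the TypeScript snippet below serves a REST-style health endpoint using only the built-in http module; the route and simulated dependency check are illustrative assumptions.

```typescript
// Hedged sketch: non-blocking async handling in Node's event loop.
import * as http from "http";

// Simulated async I/O (e.g., a DB ping) resolving on a later event-loop tick.
function checkDependencies(): Promise<{ db: string }> {
  return new Promise((resolve) => setTimeout(() => resolve({ db: "ok" }), 10));
}

const server = http.createServer(async (req, res) => {
  if (req.url === "/health") {
    // await suspends this handler without blocking other requests.
    const status = await checkDependencies();
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(status));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, () => console.log("listening on :3000"));
```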
4. MongoDB/NoSQL:
Experience with NoSQL databases and their use cases.
Familiarity with data modeling and indexing strategies in NoSQL databases.
Ability to integrate NoSQL databases into serverless architectures.
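One common pattern for integrating MongoDB into a serverless architecture is caching the client connection in module scope so warm Lambda invocations skip the TCP/TLS handshake. The sketch below assumes a MONGODB_URI environment variable and a hypothetical users collection.

```typescript
// Hedged sketch: MongoDB connection reuse across warm Lambda invocations.
import { MongoClient } from "mongodb";

let cachedClient: MongoClient | null = null;

async function getClient(): Promise<MongoClient> {
  if (!cachedClient) {
    // MONGODB_URI is an assumed environment variable, set outside this sketch.
    cachedClient = new MongoClient(process.env.MONGODB_URI as string);
    await cachedClient.connect();
  }
  return cachedClient;
}

// Minimal Lambda-style handler shape; a real handler would use the
// aws-lambda type definitions for its event and result types.
export async function handler(event: { userId: string }) {
  const client = await getClient();
  const user = await client
    .db("app")
    .collection("users")
    .findOne({ userId: event.userId });
  return { statusCode: user ? 200 : 404, body: JSON.stringify(user) };
}
```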
5. CI/CD:
Ability to troubleshoot and debug CI/CD pipelines.
Knowledge of automated testing practices and tools.
Understanding of deployment automation and release management processes.
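As a minimal example of the automated-testing practices a CI/CD pipeline would run, the TypeScript snippet below uses Node's built-in test runner (node:test, Node 18+); the add() function is purely illustrative.

```typescript
// Hedged sketch: a unit test runnable in CI via `node --test`.
import test from "node:test";
import assert from "node:assert/strict";

// Illustrative function under test.
function add(a: number, b: number): number {
  return a + b;
}

test("add sums two numbers", () => {
  assert.equal(add(2, 3), 5);
});
```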
Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field.
Certification (preferred, an added advantage): AWS certifications, e.g. AWS Certified Developer – Associate.
Company Description
KogniVera is an India-based technology consulting and services company that specializes in conceptualization, design, engineering, and management of digital products. The company brings rich experience and expertise to address the growth needs of enterprises in dynamic industries such as Retail, Financial Services, Insurance, and Healthcare. KogniVera has an unwavering obsession with customer success and a partnership mindset dedicated to achieving unparalleled success in the digital landscape.
Role Description
This is a full-time on-site Java Spring Boot Lead role located in Bangalore. The Java Spring Boot Lead will collaborate with cross-functional teams and stakeholders to identify, design, and implement new features and functionality.
Full-time role
Location: Bengaluru, India (on-site)
Skill Set Required: 5+ years of experience.
Very strong in core Java.
Sound knowledge of Spring Boot.
Hands-on experience in creating frameworks.
Cloud knowledge.
Good understanding of design patterns.
Must have worked on at least 2-3 projects.
Please share your updated resume and details at:
mgarg@kognivera.com
Website : https://kognivera.com