
50+ AWS CloudFormation Jobs in India

Apply to 50+ AWS CloudFormation Jobs on CutShort.io. Find your next job, effortlessly. Browse AWS CloudFormation Jobs and apply today!

Watsoo Express
Gurgaon, Udyog Vihar Phase 5
6 - 10 yrs
₹9L - ₹11L / yr
Docker
Kubernetes
Helm
CI/CD
GitHub
+9 more

Profile: Sr. DevOps Engineer

Location: Gurugram

Experience: 5+ years

Notice Period: Immediate to 1 week

Company: Watsoo

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 5+ years of proven hands-on DevOps experience.
  • Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
  • Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
  • Hands-on experience with cloud platforms (AWS, Azure, or GCP).
  • Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
  • Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
  • Proficiency in scripting languages (Python, Bash, or Shell).
  • Knowledge of networking, security, and system administration.
  • Strong problem-solving skills and ability to work in fast-paced environments.
  • Troubleshoot production issues, perform root cause analysis, and implement preventive measures.

Advocate DevOps best practices, automation, and continuous improvement
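The IaC requirement above (Terraform, Ansible, Pulumi, or CloudFormation) can be made concrete with a minimal sketch: a CloudFormation template assembled as a Python dictionary and serialized to JSON. The bucket resource and output names are illustrative placeholders, not taken from the posting.

```python
import json

# Minimal CloudFormation template built in Python. "ArtifactBucket" and
# its properties are hypothetical examples.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: a versioned, encrypted S3 bucket.",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"},
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [
                        {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
                    ]
                },
            },
        }
    },
    "Outputs": {"BucketName": {"Value": {"Ref": "ArtifactBucket"}}},
}

# Serialize to the JSON body that would be passed to CloudFormation.
body = json.dumps(template, indent=2)
```

In practice this JSON would be handed to the CloudFormation API or checked into version control; the same shape is usually written directly in YAML.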

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹25L - ₹30L / yr
Machine Learning (ML)
AWS CloudFormation
Online machine learning
Amazon Web Services (AWS)
ECS
+20 more

MUST-HAVES:

  • Machine Learning + AWS + (EKS or ECS or Kubernetes) + (Redshift and Glue) + SageMaker
  • Notice period: 0 to 15 days only
  • Hybrid work mode: 3 days in office, 2 days at home


SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS


ADDITIONAL GUIDELINES:

  • Interview process: 2 technical rounds + 1 client round
  • Hybrid model: 3 days in office.


CORE RESPONSIBILITIES:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
  • Model Development: Work with algorithms and architectures ranging from traditional statistical methods to deep learning, including LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.
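The "Data Preparation" responsibility above can be illustrated with a small, dependency-free sketch: impute a missing numeric field with the median and one-hot encode a categorical one before training. The field names and values are hypothetical.

```python
from statistics import median

# Toy raw records; "age" and "plan" are made-up feature names.
raw = [
    {"age": 34, "plan": "pro"},
    {"age": None, "plan": "free"},  # missing value to impute
    {"age": 52, "plan": "pro"},
]

def prepare(records):
    """Cleanse and transform records into numeric feature rows."""
    ages = [r["age"] for r in records if r["age"] is not None]
    fill = median(ages)  # impute missing ages with the median
    plans = sorted({r["plan"] for r in records})
    rows = []
    for r in records:
        age = r["age"] if r["age"] is not None else fill
        # One-hot encode the categorical "plan" field.
        rows.append([float(age)] + [1.0 if r["plan"] == p else 0.0 for p in plans])
    return rows

features = prepare(raw)
```

A production pipeline would do the same steps with pandas or SQL, but the shape of the work (cleanse, impute, encode) is identical.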


SKILLS:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting. Other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
  • Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.


REQUIRED EXPERIENCE:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹15L - ₹40L / yr
DevOps
Docker
CI/CD
Amazon Web Services (AWS)
AWS CloudFormation
+22 more

Review Criteria

  • Strong Senior/Lead DevOps Engineer Profile
  • 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
  • Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
  • Solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
  • Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
  • Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
  • Experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
  • Good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
  • Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
  • It is an individual contributor (IC) role

 

Preferred

  • Must be proficient in scripting languages (Bash, Python) for automation and operational tasks.
  • Must have strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
  • Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.

 

Role & Responsibilities

We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.

 

Key Responsibilities:

Cloud & Infrastructure as Code (IaC)-

  • Architect and manage AWS environments ensuring scalability, security, and high availability.
  • Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
  • Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.

CI/CD & Automation:

  • Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
  • Automate deployments, provisioning, and monitoring across environments.

Containers & Orchestration:

  • Deploy and operate workloads on Docker and Kubernetes (EKS).
  • Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
  • Optimize performance of containerized and microservices applications.
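The IRSA bullet above boils down to an IAM trust policy that lets one Kubernetes service account assume an AWS role through the cluster's OIDC provider. A sketch of that policy follows; the account ID, OIDC provider ID, namespace, and service-account name are all placeholders.

```python
import json

# Hypothetical identifiers -- substitute your own account, OIDC provider,
# namespace, and service-account name.
ACCOUNT_ID = "111122223333"
OIDC = "oidc.eks.ap-south-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B7"
NAMESPACE, SERVICE_ACCOUNT = "payments", "payments-api"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": f"arn:aws:iam::{ACCOUNT_ID}:oidc-provider/{OIDC}"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                # Restrict the role to this single pod identity.
                f"{OIDC}:sub": f"system:serviceaccount:{NAMESPACE}:{SERVICE_ACCOUNT}",
                f"{OIDC}:aud": "sts.amazonaws.com",
            }
        },
    }],
}

policy_json = json.dumps(trust_policy, indent=2)
```

Attaching this trust policy to a role, then annotating the service account with the role ARN, gives pods credentials without node-level IAM access.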

Monitoring & Reliability:

  • Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Establish logging, alerting, and proactive monitoring for high availability.

Security & Compliance:

  • Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
  • Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
  • Configure VPNs, firewalls, and secure access policies and AWS organizations.

Databases & Analytics:

  • Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Manage data reliability, performance tuning, and cloud-native integrations.
  • Experience with Apache Airflow and Spark.


Ideal Candidate

  • 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
  • Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening etc).
  • Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
  • Proven ability with CI/CD pipeline automation and DevSecOps practices.
  • Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
  • Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Strong scripting skills (Shell/bash, Python, or similar) for automation.
  • Bachelor's or Master's degree
  • Effective communication skills
STAGE.in

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Noida
4 - 6 yrs
₹10L - ₹15L / yr
TypeScript
MVC Framework
AWS CloudFormation
Microsoft Windows Azure
MongoDB
+3 more

CTC: up to 40 LPA


Mandatory Criteria (cannot be overlooked during screening):

Need candidates from Growing startups or Product based companies only

1. 4–6 years experience in backend engineering

2. Minimum 2+ years hands-on experience with:

  • TypeScript
  • Express.js / Nest.js

3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)

4. Strong understanding of system design & scalable architecture

5. Hands-on experience in:

  • Event-driven architecture / Domain-driven design
  • MVC / Microservices

6. Strong in automated testing (especially integration tests)

7. Experience with CI/CD pipelines (GitHub Actions or similar)

8. Experience managing production systems

9. Solid understanding of performance, reliability, observability

10. Cloud experience (AWS preferred; GCP/Azure acceptable)

11. Strong coding standards — Clean Code, code reviews, refactoring
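The "event-driven architecture" criterion above is easy to sketch. The posting's stack is TypeScript, but for brevity here is a minimal in-process event bus in Python showing the publish/subscribe shape; the event name and handler are hypothetical.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process event bus: emitters publish named events,
    subscribers react without the emitter knowing who listens."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[dict], None]) -> None:
        self._handlers[event].append(handler)

    def publish(self, event: str, payload: dict) -> None:
        for handler in self._handlers[event]:
            handler(payload)

bus = EventBus()
emails: list[str] = []
# Hypothetical subscriber: queue a welcome email when a user registers.
bus.subscribe("user.registered", lambda p: emails.append(p["email"]))
bus.publish("user.registered", {"email": "a@example.com"})
```

In a real service the bus would be backed by a broker (Kafka, SQS, etc.) so that publishers and subscribers live in separate processes.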


If interested, kindly share your updated resume at 82008 31681.

Aivar Innovations Pvt Ltd
Posted by Valliammai Thirunavukkarsu
Coimbatore
2 - 6 yrs
₹12L - ₹25L / yr
Automation
AWS CloudFormation
Cloud Computing
CI/CD
Communication Skills
+1 more

Are you eager to kick-start your career in DevOps and learn the latest technologies to solve complex problems? Do you enjoy hands-on problem-solving, exploring cloud technologies, and supporting innovative solutions? At Aivar, we are looking for a DevOps Engineer to join our team.

In this role, you will assist in the implementation and support of DevOps practices, including containerization, orchestration, and CI/CD pipelines, while learning from industry experts.

This is an exciting opportunity to grow your skills and work on transformative projects in a collaborative environment.


Requirements

Preferred Technical Qualifications

  • 2 – 5 years of experience in DevOps, system administration, or software development (internship experience is acceptable).
  • Familiarity with container technologies such as Docker and Kubernetes.
  • Understanding of Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
  • Knowledge of CI/CD tools like Jenkins, GitLab CI, or GitHub Actions.
  • Programming experience in Python, Java, or another language used in DevOps workflows.
  • Understanding of cloud platforms such as AWS, Azure, or GCP
  • Willingness to learn advanced Kubernetes concepts and troubleshooting techniques.

Preferred Soft Skills

Collaboration Skills:

  • Willingness to work in cross-functional teams and support the alignment of technical solutions with business goals.
  • Eager to learn how to work effectively with customers, engineers, and architects to deliver DevOps solutions.

Effective Communication:

  • Ability to communicate technical concepts clearly to team members and stakeholders.
  • Desire to improve documentation and presentation skills to share ideas effectively.

Problem-Solving Mindset:

  • Curiosity to explore and learn solutions for infrastructure challenges in DevOps environments.
  • Interest in learning how to diagnose and resolve issues in containerized and distributed systems.

Adaptability and Continuous Learning:

  • Strong desire to learn emerging DevOps tools and practices in a dynamic environment.
  • Commitment to staying updated with trends in cloud computing and DevOps.

Team-Oriented Approach:

  • Enthusiastic about contributing to a collaborative team environment and supporting overall project goals.
  • Open to feedback and actively sharing knowledge to help the team grow.

Certifications (Optional but Preferred)

  • Certified Kubernetes Application Developer (CKAD) or equivalent Linux Foundation certification
  • Any beginner-level certifications in DevOps or cloud services are a plus. 
  • Any AWS Certification



Why Join Aivar?

At Aivar, we are re-imagining analytics consulting by integrating AI and machine learning to create repeatable solutions that deliver measurable business outcomes. With a culture centered on innovation, collaboration, and growth, we provide opportunities to work on transformative projects across industries.


About Diversity and Inclusion

We believe diversity drives innovation and growth. Our inclusive environment encourages individuals of all backgrounds to contribute their unique perspectives to shape the future of analytics.








QAgile Services

Posted by Radhika Chotai
Noida
3 - 6 yrs
₹5L - ₹12L / yr
DevOps
Windows Azure
AWS CloudFormation
Amazon Web Services (AWS)
Kubernetes
+3 more

We seek a skilled and motivated Azure DevOps engineer to join our dynamic team. The ideal candidate will design, implement, and manage CI/CD pipelines, automate deployments, and optimize cloud infrastructure using Azure DevOps tools and services. You will collaborate closely with development and IT teams to ensure seamless integration and delivery of software solutions in a fast-paced environment.

Responsibilities:

  • Design, implement, and manage CI/CD pipelines using Azure DevOps.
  • Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Azure CLI.
  • Monitor and optimize Azure environments to ensure high availability, performance, and security.
  • Collaborate with development, QA, and IT teams to streamline the software development lifecycle (SDLC).
  • Troubleshoot and resolve issues related to build, deployment, and infrastructure.
  • Implement and manage version control systems, primarily using Git.
  • Manage containerization and orchestration using tools like Docker and Kubernetes.
  • Ensure compliance with industry standards and best practices for security, scalability, and reliability.
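The pipeline and IaC responsibilities above typically live in a single `azure-pipelines.yml`. The fragment below is a hedged sketch, not the company's actual pipeline; the branch, commands, and directory names are placeholders.

```yaml
# Hypothetical azure-pipelines.yml: run tests on every push to main,
# then plan infrastructure changes with Terraform.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - script: make test
    displayName: Run unit tests
  - script: terraform init && terraform plan -out=tfplan
    displayName: Plan infrastructure changes
    workingDirectory: infra/
```

The same structure extends with deployment stages, approvals, and ARM/Bicep steps as the environment grows.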


IT Company

Agency job
via Jobdost by Saida Pathan
Pune
10 - 15 yrs
₹28L - ₹30L / yr
AWS CloudFormation
Terraform
AWS RDS
DynamoDB
Apache Aurora
+1 more

Description


Role Overview

We are seeking a highly skilled AWS Cloud Architect with proven experience in building AWS environments from the ground up, not just consuming existing services. This role requires an AWS builder mindset, capable of designing, provisioning, and managing multi-account AWS architectures, networking, security, and database platforms end to end.

Key Responsibilities

AWS Environment Provisioning:

- Design and provision multi-account AWS environments using best practices (Control Tower, Organizations).

- Set up and configure networking (VPC, Transit Gateway, Private Endpoints, Subnets, Routing, Firewalls).

- Provision and manage AWS database platforms (RDS, Aurora, DynamoDB) with high availability and security.

- Manage full AWS account lifecycle, including IAM roles, policies, and access controls.

Infrastructure as Code (IaC):

- Develop and maintain AWS infrastructure using Terraform and AWS CloudFormation.

- Automate account provisioning, networking, and security configuration.

Security & Compliance:

- Implement AWS security best practices, including IAM governance, encryption, and compliance automation.

- Use tools like AWS Config, GuardDuty, Security Hub, and Vault to enforce standards.

Automation & CI/CD:

- Create automation scripts in Python, Bash, or PowerShell for provisioning and management tasks.

- Integrate AWS infrastructure with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD).

Monitoring & Optimization:

- Implement monitoring solutions (CloudWatch, Prometheus, Grafana) for infrastructure health and performance.

- Optimize cost, performance, and scalability of AWS environments.

Required Skills & Experience:

- 10+ years of experience in Cloud Engineering, with 7+ years focused on AWS provisioning.

Strong expertise in (must have):

 • AWS multi-account setup (Control Tower/Organizations)

 • VPC design and networking (Transit Gateway, Private Endpoints, routing, firewalls)

 • IAM policies, role-based access control, and security hardening

 • Database provisioning (RDS, Aurora, DynamoDB)

- Proficiency in Terraform and AWS CloudFormation.

- Hands-on experience with scripting (Python, Bash, PowerShell).

- Experience with CI/CD pipelines and automation tools.

- Familiarity with monitoring and logging tools.
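One piece of the VPC design skill listed above is carving a VPC CIDR into per-AZ subnets. The arithmetic can be sketched with the standard library's `ipaddress` module; the CIDR and subnet count below are made up for illustration.

```python
import ipaddress

def plan_subnets(vpc_cidr: str, new_prefix: int, count: int) -> list[str]:
    """Carve `count` subnets of size /new_prefix out of a VPC CIDR."""
    vpc = ipaddress.ip_network(vpc_cidr)
    subnets = list(vpc.subnets(new_prefix=new_prefix))
    if count > len(subnets):
        raise ValueError("VPC CIDR too small for requested subnet count")
    return [str(s) for s in subnets[:count]]

# e.g. three private subnets, one per availability zone
private = plan_subnets("10.20.0.0/16", new_prefix=20, count=3)
```

The same plan is then expressed in Terraform or CloudFormation; doing the math up front avoids overlapping ranges when peering or attaching a Transit Gateway later.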

Preferred Certifications

- AWS Certified Solutions Architect – Professional

- AWS Certified DevOps Engineer – Professional

- HashiCorp Certified: Terraform Associate


Looking for immediate joiners or candidates with a 15-day notice period only.

  • Should have created more than 200-300 accounts from scratch using Control Tower or AWS services.

  • Should have at least 7+ years of working experience in AWS


 

First 3 months will be remote (office timings: 4:30 PM to 1:30 AM).

After 3 months, work from office (standard office timings).


Agentic AI Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
Python
Google Cloud Platform (GCP)
Linux/Unix
CI/CD
+21 more

Review Criteria

  • Strong DevOps /Cloud Engineer Profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Top tier Product-based company (B2B Enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
  • Drive cloud security automation (IAM, SSL, secrets management).
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.
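The CI/CD responsibility above usually takes the form of a workflow file. Below is a hedged GitHub Actions sketch, not the company's actual pipeline; the job name, commands, and tag scheme are placeholders.

```yaml
# Hypothetical .github/workflows/ci.yml: test, then build a container image.
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test-and-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
      - run: docker build -t app:${{ github.sha }} .
```

A deploy job gated on the main branch (pushing the image and rolling it out to Kubernetes) would typically follow.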


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in Linux administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹10L - ₹28L / yr
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Python
Data Science
Generative AI
+10 more

Job Details

Job Title: Lead II - Software Engineering - AI, NLP, Python, Data Science

Industry: Technology

Domain - Information technology (IT)

Experience Required: 7-9 years

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry


Job Description:

Role Proficiency:

Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reuse of proven solutions. Account for others' developmental activities and assist the Project Manager in day-to-day project execution.


Additional Comments:

Mandatory Skills: Data Science

Skills to Evaluate: AI, Gen AI, RAG, Data Science

Experience: 8 to 10 years

Location: Bengaluru

Job Description

Job Title: AI Engineer

Mandatory Skills: Artificial Intelligence, Natural Language Processing, Python, Data Science

Position: AI Engineer – LLM & RAG Specialization

Company Name: Sony India Software Centre

About the role: We are seeking a highly skilled AI Engineer with 8-10 years of experience to join our innovation-driven team. This role focuses on the design, development, and deployment of advanced enterprise-scale Large Language Models (eLLM) and Retrieval Augmented Generation (RAG) solutions. You will work on end-to-end AI pipelines, from data processing to cloud deployment, delivering impactful solutions that enhance Sony's products and services.

Key Responsibilities:

  • Design, implement, and optimize LLM-powered applications, ensuring high performance and scalability for enterprise use cases.
  • Develop and maintain RAG pipelines, including vector database integration (e.g., Pinecone, Weaviate, FAISS) and embedding model optimization.
  • Deploy, monitor, and maintain AI/ML models in production, ensuring reliability, security, and compliance.
  • Collaborate with product, research, and engineering teams to integrate AI solutions into existing applications and workflows.
  • Research and evaluate the latest LLM and AI advancements, recommending tools and architectures for continuous improvement.
  • Preprocess, clean, and engineer features from large datasets to improve model accuracy and efficiency.
  • Conduct code reviews and enforce AI/ML engineering best practices.
  • Document architecture, pipelines, and results; present findings to both technical and business stakeholders.

Requirements:

  • 8-10 years of professional experience in AI/ML engineering, with at least 4+ years in LLM development and deployment.
  • Proven expertise in RAG architectures, vector databases, and embedding models.
  • Strong proficiency in Python; familiarity with Java, R, or other relevant languages is a plus.
  • Experience with AI/ML frameworks (PyTorch, TensorFlow, etc.) and relevant deployment tools.
  • Hands-on experience with cloud-based AI platforms such as AWS SageMaker, AWS Q Business, AWS Bedrock, or Azure Machine Learning.
  • Experience in designing, developing, and deploying agentic AI systems, with a focus on creating autonomous agents that can reason, plan, and execute tasks to achieve specific goals.
  • Understanding of security concepts in AI systems, including vulnerabilities and mitigation strategies.
  • Solid knowledge of data processing, feature engineering, and working with large-scale datasets.
  • Experience in designing and implementing AI-native applications and agentic workflows using the Model Context Protocol (MCP) is nice to have.
  • Strong problem-solving skills, analytical thinking, and attention to detail.
  • Excellent communication skills with the ability to explain complex AI concepts to diverse audiences.

Day-to-day responsibilities:

  • Design and deploy AI-driven solutions to address specific security challenges, such as threat detection, vulnerability prioritization, and security automation.
  • Optimize LLM-based models for various security use cases, including chatbot development for security awareness or automated incident response.
  • Implement and manage RAG pipelines for enhanced LLM performance.
  • Integrate AI models with existing security tools, including Endpoint Detection and Response (EDR), Threat and Vulnerability Management (TVM) platforms, and Data Science/Analytics platforms; this will involve working with APIs and understanding data flows.
  • Develop and implement metrics to evaluate the performance of AI models; monitor deployed models for accuracy and performance, and retrain as needed.
  • Adhere to security best practices and ensure that all AI solutions are developed and deployed securely, considering data privacy and compliance requirements.
  • Work closely with other team members to understand security requirements and translate them into AI-driven solutions.
  • Communicate effectively with stakeholders, including senior management, to present project updates and findings.
  • Stay up to date with the latest advancements in AI/ML and security, and identify opportunities to leverage new technologies to improve our security posture.
  • Maintain thorough documentation of AI models, code, and processes.

What We Offer:

  • Opportunity to work on cutting-edge LLM and RAG projects with global impact.
  • A collaborative environment fostering innovation, research, and skill growth.
  • Competitive salary, comprehensive benefits, and flexible work arrangements.
  • The chance to shape AI-powered features in Sony's next-generation products.
  • Ability to function in an environment where the team is virtual and geographically dispersed.
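The RAG pipelines this role centers on follow a simple shape: retrieve the most relevant context, then augment the LLM prompt with it. The stdlib sketch below substitutes token-overlap scoring for the embedding model and vector database (Pinecone, Weaviate, FAISS) a real pipeline would use; the documents and query are invented.

```python
# RAG in miniature: retrieve the most relevant document, then build an
# augmented prompt. A real pipeline would use an embedding model and a
# vector database instead of token overlap.
docs = {
    "reset": "Passwords can be reset from the account security page.",
    "billing": "Invoices are emailed on the first day of each month.",
}

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str) -> str:
    # Score each document by token overlap with the query
    # (a crude stand-in for cosine similarity over embeddings).
    return max(docs.values(), key=lambda d: len(tokens(d) & tokens(query)))

def build_prompt(query: str) -> str:
    context = retrieve(query)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How do I reset my password?")
```

Swapping `retrieve` for an embedding lookup against a vector index, without changing the surrounding shape, is essentially what productionizing this sketch means.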

Education Qualification: Graduate


Skills: AI, NLP, Python, Data science


Must-Haves

Skills

AI, NLP, Python, Data science

NP: Immediate – 30 Days

 

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹10L - ₹30L / yr
Amazon Web Services (AWS)
AWS CloudFormation
Amazon Redshift
Elasticsearch
ECS
+11 more

Job Details

Job Title: ML Engineer II - AWS, AWS Cloud

Industry: Technology

Domain - Information technology (IT)

Experience Required: 6-12 years

Employment Type: Full Time

Job Location: Pune

CTC Range: Best in Industry


Job Description:

Core Responsibilities:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency.
  • Model Development: Work with algorithms and architectures ranging from traditional statistical methods to deep learning, including LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.


Skills:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting. Other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
  • Model Deployment and Monitoring: MLOps experience in deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.


Required experience:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements.
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, and Lambda, and using these services in ML workflows.
  • AWS data: Redshift, Glue.
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS).


Skills: AWS, AWS Cloud, Amazon Redshift, EKS


Must-Haves

AWS, AWS Cloud, Amazon Redshift, EKS

NP: Immediate – 30 Days

 

Apprication Pvt Ltd

Posted by Adam Patel
Mumbai
2.5 - 4 yrs
₹6L - ₹12L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Hugging Face
Python
PyTorch
+13 more

Job Title: AI / Machine Learning Engineer

Company: Apprication Pvt Ltd

Location: Goregaon East

Employment Type: Full-time

Experience: 2.5-4 Years


  • Bachelor’s or Master’s in Computer Science, Machine Learning, Data Science, or related field.
  • Proven experience of 2.5-4 years as an AI/ML Engineer, Data Scientist, or AI Application Developer.
  • Strong programming skills in Python (TensorFlow, PyTorch, Scikit-learn); familiarity with LangChain, Hugging Face, OpenAI API is a plus.
  • Experience in model deployment, serving, and optimization (FastAPI, Flask, Django, or Node.js).
  • Proficiency with databases (SQL and NoSQL: MySQL, PostgreSQL, MongoDB).
  • Hands-on experience with cloud ML services (SageMaker, Vertex AI, Azure ML) and DevOps tools (Docker, Kubernetes, CI/CD).
  • Knowledge of MLOps practices: model versioning, monitoring, retraining, experiment tracking.
  • Familiarity with frontend frameworks (React.js, Angular, Vue.js) for building AI-driven interfaces (nice to have).
  • Strong understanding of data structures, algorithms, APIs, and distributed systems.
  • Excellent problem-solving, analytical, and communication skills.
  • Develop and maintain ETL pipelines, data preprocessing workflows, and feature engineering processes.
  • Ensure solutions meet security, compliance, and performance standards.
  • Stay updated with the latest research and trends in deep learning, generative AI, and LLMs.
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Chennai
7 - 10 yrs
₹10L - ₹18L / yr
full stack
React.js
Python
Go (Golang)
CI/CD
+9 more

Full-Stack Developer

Exp: 5+ years required

Night shift: 8 PM-5 AM / 9 PM-6 AM

Only immediate joiners can apply


We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front end and back end of our application, focusing on creating a seamless user experience, supported by robust and scalable cloud infrastructure.

Key Responsibilities

● Develop and maintain user-facing features using React.js and TypeScript.

● Write clean, efficient, and well-documented JavaScript/TypeScript code.

● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.

● Contribute to the design, implementation, and maintenance of our databases.

● Collaborate with senior developers and product managers to deliver high-quality software.

● Troubleshoot and debug issues across the full stack.

● Participate in code reviews to maintain code quality and share knowledge.

Qualifications

● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.

● 5+ years of professional experience in web development.

● Proficiency in JavaScript and/or TypeScript.

● Proficiency in Golang and Python.

● Hands-on experience with the React.js library for building user interfaces.

● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).

● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).

● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.

● Strong problem-solving skills and a willingness to learn.

● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.

● Knowledge of CI/CD pipelines and automated testing.
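As a rough illustration of the Infrastructure as Code bullet above: a CloudFormation template is just a declarative document, which can be built as a plain Python dict and serialized to JSON. The logical resource name is an invented example; real projects typically use AWS CDK or Terraform rather than hand-built dicts.

```python
import json

# Sketch of Infrastructure as Code: describe an S3 bucket declaratively,
# then emit the JSON document CloudFormation consumes. The logical ID
# "AppAssetsBucket" is illustrative, not from any real stack.
def make_template(bucket_logical_id: str) -> dict:
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
            }
        },
    }

if __name__ == "__main__":
    template = make_template("AppAssetsBucket")
    print(json.dumps(template, indent=2))
```

The emitted JSON could then be deployed with `aws cloudformation deploy`; the point is that the infrastructure definition is versionable code, not console clicks.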


NeoGenCode Technologies Pvt Ltd
Posted by Shivank Bhardwaj
Remote, Bengaluru (Bangalore), Chennai, Kolkata, Pune
9 - 12 yrs
₹10L - ₹42L / yr
skill iconJava
skill iconAmazon Web Services (AWS)
CI/CD
AWS CloudFormation
+3 more

Job Description:

We are looking for a Lead Java Developer – Backend with a strong foundation in software engineering and hands-on experience in designing and building scalable, high-performance backend systems. You’ll be working within our Digital Engineering Studios on impactful and transformative projects in a fast-paced environment.


Key Responsibilities:

  • Lead and mentor backend development teams.
  • Design and develop scalable backend applications using Java and Spring Boot.
  • Ensure high standards of code quality through best practices such as SOLID principles and clean code.
  • Participate in pair programming, code reviews, and continuous integration processes.
  • Drive Agile, Lean, and Continuous Delivery practices like TDD, BDD, and CI/CD.
  • Collaborate with cross-functional teams and clients for successful delivery.


Required Skills & Experience:

  • 9–12+ years of experience in backend development (Up to 17 years may be considered).
  • Strong programming skills in Java and backend frameworks such as Spring Boot.
  • Experience in designing and building large-scale, custom-built, scalable applications.
  • Sound understanding of Object-Oriented Design (OOD) and SOLID principles.
  • Hands-on experience with Agile methodologies, TDD/BDD, CI/CD pipelines.
  • Familiarity with DevOps practices, Docker, Kubernetes, and Infrastructure as Code.
  • Good understanding of cloud technologies – especially AWS, and exposure to GCP or Azure.
  • Experience working in a product engineering environment is a plus.
  • Startup experience or working in fast-paced, high-impact teams is highly desirable.


Wissen Technology

at Wissen Technology

4 recruiters
Posted by Bipasha Rath
Bengaluru (Bangalore), Mumbai, Pune
4 - 9 yrs
Best in industry
skill iconJava
Spring
Object Oriented Programming (OOPs)
Data Structures
Algorithms
+3 more
  1. Develop and maintain Java applications using Core Java, the Spring framework, JDBC, and threading concepts.
  2. Strong understanding of the Spring framework and its various modules.
  3. Experience with JDBC for database connectivity and manipulation
  4. Utilize database management systems to store and retrieve data efficiently.
  5. Proficiency in Core Java 8 and a thorough understanding of threading concepts and concurrent programming.
  6. Experience in working with relational and NoSQL databases.
  7. Basic understanding of cloud platforms such as Azure and GCP; experience with DevOps practices is an added advantage.
  8. Knowledge of containerization technologies (e.g., Docker, Kubernetes)
  9. Perform debugging and troubleshooting of applications using log analysis techniques.
  10. Understand multi-service flow and integration between components.
  11. Handle large-scale data processing tasks efficiently and effectively.
  12. Hands-on experience using Spark is an added advantage.
  13. Good problem-solving and analytical abilities.
  14. Collaborate with cross-functional teams to identify and solve complex technical problems.
  15. Knowledge of Agile methodologies such as Scrum or Kanban
  16. Stay updated with the latest technologies and industry trends to continuously improve development processes and methodologies.

If interested, please share your resume along with the following details:


Total Experience -


Relevant Experience in Java, Spring, Data Structures, Algorithms, SQL -


Relevant Experience in Cloud - AWS/Azure/GCP -


Current CTC -


Expected CTC -


Notice Period -


Reason for change -



Matilda cloud


Agency job
via Employee Hub by PREETI DUA
Hyderabad, Bengaluru (Bangalore)
6 - 7 yrs
₹22L - ₹26L / yr
skill iconFlask
API
Google Cloud Platform (GCP)
AWS CloudFormation
AWS Lambda
+5 more

Job Summary:


We are seeking an experienced and highly motivated Senior Python Developer to join our dynamic and growing engineering team. This role is ideal for a seasoned Python expert who thrives in a fast-paced, collaborative environment and has deep experience building scalable applications, working with cloud platforms, and automating infrastructure.



Key Responsibilities:


Develop and maintain scalable backend services and APIs using Python, with a strong emphasis on clean architecture and maintainable code.


Design and implement RESTful APIs using frameworks such as Flask or FastAPI, and integrate with relational databases using ORM tools like SQLAlchemy.


Work with major cloud platforms (AWS, GCP, or Oracle Cloud Infrastructure) using Python SDKs to build and deploy cloud-native applications.


Automate system and infrastructure tasks using tools like Ansible, Chef, or other configuration management solutions.


Implement and support Infrastructure as Code (IaC) using Terraform or cloud-native templating tools to manage resources effectively.





Work across both Linux and Windows environments, ensuring compatibility and stability across platforms.


Required Qualifications:


5+ years of professional experience in Python development, with a strong portfolio of backend/API projects.


Strong expertise in Flask, SQLAlchemy, and other Python-based frameworks and libraries.


Proficient in asynchronous programming and event-driven architecture using tools such as asyncio, Celery, or similar.
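The asynchronous-programming requirement above can be sketched with a stdlib-only `asyncio` example; the endpoint names and delays are invented, and `asyncio.sleep` stands in for real network I/O:

```python
import asyncio

# Fan out several I/O-bound "fetches" concurrently and gather the results.
async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # placeholder for an HTTP/DB call
    return f"{name}:ok"

async def main() -> list:
    # gather preserves argument order regardless of completion order
    return await asyncio.gather(
        fetch("users", 0.02),
        fetch("orders", 0.01),
    )

if __name__ == "__main__":
    print(asyncio.run(main()))  # ['users:ok', 'orders:ok']
```

Celery addresses the same concern (concurrent, event-driven work) but distributes tasks across worker processes via a broker rather than multiplexing them on one event loop.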


Solid understanding and hands-on experience with cloud platforms – AWS, Google Cloud Platform, or Oracle Cloud Infrastructure.


Experience using Python SDKs for cloud services to automate provisioning, deployment, or data workflows.


Practical knowledge of Linux and Windows environments, including system-level scripting and debugging.


Automation experience using tools such as Ansible, Chef, or equivalent configuration management systems.


Experience implementing and maintaining CI/CD pipelines with industry-standard tools.


Familiarity with Docker and container orchestration concepts (e.g., Kubernetes is a plus).


Hands-on experience with Terraform or equivalent infrastructure-as-code tools for managing cloud environments.


Excellent problem-solving skills, attention to detail, and a proactive mindset.


Strong communication skills and the ability to collaborate with diverse technical teams.


Preferred Qualifications (Nice to Have):


Experience with other Python frameworks (FastAPI, Django)


Knowledge of container orchestration tools like Kubernetes


Familiarity with monitoring tools like Prometheus, Grafana, or Datadog


Prior experience working in an Agile/Scrum environment


Contributions to open-source projects or technical blogs


Leading provider of electronic trading solutions in India. With over 1,000 clients and a presence in more than 400 cities, we have established ourselves as a trusted partner for brokerages across the nation. Our commitment to excellence is reflected in millions of active end users and our reputation for delivering the best customer service in the industry.


Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore), Mumbai
15 - 20 yrs
₹40L - ₹50L / yr
DevOps
AWS CloudFormation
Compliance
CI/CD

Looking for an experienced Cloud Engineering & DevOps Leader with proven expertise in building and managing large-scale SaaS platforms in the financial services or high-transaction domain. The ideal candidate will have a strong background in AWS cloud infrastructure, DevOps automation, compliance frameworks (ISO, VAPT), and cost optimization strategies.


Cloud Platforms: AWS (Lambda, EC2, VPC, CloudFront, Auto Scaling, etc.)


● DevOps & Automation: Python, CI/CD, Infrastructure as Code, Monitoring/Alerting systems

● Monitoring & Logging: ELK stack, Kafka, Redis, Grafana

● Networking & Virtualization: Virtual Machines, Firewalls, Load Balancers, DR setup

● Compliance & Security: ISO Audits, VAPT, ISMS, DR drills, high-availability planning

● Leadership & Management: Team leadership, project management, stakeholder collaboration


Preferred Profile

● Experience: 15–20 years in infrastructure, cloud engineering, or DevOps roles, with at least 5 years in a leadership position.

● Domain Knowledge: Experience in broking, financial services, or high-volume trading platforms is strongly preferred.

● Education: Bachelor’s Degree in Engineering / Computer Science / Electronics or a related field.

● Soft Skills: Strong problem-solving, cost-conscious approach, ability to work under pressure, cross-functional collaboration.

AI Powered Logistics Company


Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹36L / yr
DevOps
skill iconKubernetes
skill iconMongoDB
skill iconPython
skill iconDocker
+35 more

Job Title: Sr DevOps Engineer

Location: Bengaluru- India (Hybrid work type)

Reports to: Sr Engineering Manager


About Our Client : 

We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our Client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision making, and enable intelligent supply chains without the costly infrastructure.


About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you. 


What You'll Do 🛠️

  • Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
  • Billing & Cost Optimization: Monitor and optimize cloud spending.
  • Containerization & Orchestration: Deploy and manage applications and orchestrate them.
  • Database Management: Deploy, manage, and optimize database instances and their lifecycles.
  • Authentication Solutions: Implement and manage authentication systems.
  • Backup & Recovery: Implement robust backup and disaster recovery strategies, for Kubernetes cluster and database backups.
  • Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
  • Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
  • Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
  • Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks. 
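The billing and cost-optimization bullets above largely come down to aggregating spend per service and flagging outliers. A stdlib sketch with invented sample records and an invented budget threshold (a real pipeline would pull this data from AWS Cost Explorer or CUR reports):

```python
from collections import defaultdict

# Aggregate daily cost records per service and flag anything over budget.
# The records and the 100-unit budget are illustrative assumptions.
def cost_by_service(records):
    totals = defaultdict(float)
    for rec in records:
        totals[rec["service"]] += rec["cost"]
    return dict(totals)

def over_budget(totals, budget):
    # Sorted for deterministic output in reports/alerts
    return sorted(svc for svc, cost in totals.items() if cost > budget)

if __name__ == "__main__":
    records = [
        {"service": "EC2", "cost": 80.0},
        {"service": "S3", "cost": 15.0},
        {"service": "EC2", "cost": 45.0},
    ]
    totals = cost_by_service(records)
    print(totals)                      # {'EC2': 125.0, 'S3': 15.0}
    print(over_budget(totals, 100.0))  # ['EC2']
```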


What You'll Bring 💼

  • Minimum of 4 years of experience in a DevOps or SRE role.
  • Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
  • Solid understanding of Linux fundamentals and command-line tools.
  • Extensive experience with CI/CD tools, GitLab CI.
  • Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
  • Proven experience deploying and managing microservices.
  • Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
  • Experience with Identity and Access management solutions like Keycloak.
  • Experience implementing backup and recovery solutions.
  • Familiarity with optimizing scaling, ideally with Karpenter.
  • Proficiency in scripting (Python, Bash).
  • Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
  • Excellent problem-solving and communication skills. 


Bonus Points ➕

  • Basic understanding of MQTT or general IoT concepts and protocols.
  • Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
  • Knowledge of specific AWS services relevant to application stacks.
  • Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
  • AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).


Why this role: 

•You will help build the company from the ground up—shaping our culture and having an impact from Day 1 as part of the foundational team.

AI Powered Logistics Company


Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹30L / yr
Reliability engineering
DevOps
Message Queuing Telemetry Transport (MQTT)
skill iconKubernetes
skill iconMongoDB
+24 more

Job Title: Sr DevOps Engineer

Location: Bengaluru- India (Hybrid work type)

Reports to: Sr Engineering Manager


About Our Client : 

We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our Client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision making, and enable intelligent supply chains without the costly infrastructure.


About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you. 


What You'll Do 🛠️

  • Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
  • Billing & Cost Optimization: Monitor and optimize cloud spending.
  • Containerization & Orchestration: Deploy and manage applications and orchestrate them.
  • Database Management: Deploy, manage, and optimize database instances and their lifecycles.
  • Authentication Solutions: Implement and manage authentication systems.
  • Backup & Recovery: Implement robust backup and disaster recovery strategies, for Kubernetes cluster and database backups.
  • Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
  • Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
  • Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
  • Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks. 


What You'll Bring 💼

  • Minimum of 4 years of experience in a DevOps or SRE role.
  • Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
  • Solid understanding of Linux fundamentals and command-line tools.
  • Extensive experience with CI/CD tools, GitLab CI.
  • Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
  • Proven experience deploying and managing microservices.
  • Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
  • Experience with Identity and Access management solutions like Keycloak.
  • Experience implementing backup and recovery solutions.
  • Familiarity with optimizing scaling, ideally with Karpenter.
  • Proficiency in scripting (Python, Bash).
  • Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
  • Excellent problem-solving and communication skills. 


Bonus Points ➕

  • Basic understanding of MQTT or general IoT concepts and protocols.
  • Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
  • Knowledge of specific AWS services relevant to application stacks.
  • Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
  • AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).


Why this role: 

•You will help build the company from the ground up—shaping our culture and having an impact from Day 1 as part of the foundational team.

A leading software company


Agency job
via BOS consultants by Manka Joshi
Gurugram
7 - 10 yrs
₹25L - ₹32L / yr
skill iconSpring Boot
Spring
RESTful APIs
JPA
Hibernate (Java)
+6 more


* Bachelor's degree in computer science or related fields preferred.

* 8+ years of experience developing core Java applications across enterprise, SME, or start-up environments.

* Proven experience with distributed systems and event-driven architectures.

* Expertise in Spring Boot, Spring Framework, and RESTful API development.

* Experience in designing, building, and monitoring microservices.

* Solid background in persistence technologies including JPA, Hibernate, MS-SQL, and PostgreSQL.

* Proficient in Java 11+, including features like Streams, Lambdas, and Functional Programming.

* Experience with CI/CD pipelines using tools such as Jenkins, GitLab CI, GitHub Actions, or AWS DevOps.

* Familiarity with major cloud platforms: AWS, Azure, or GCP (AWS preferred).

* Front-end development experience using React or Angular, with a good understanding of best practices around HTML, CSS3/Tailwind, and responsive design.

* Comfortable in Agile environments with iterative development and regular demos.

* Experience with container orchestration using Managed Kubernetes (EKS, AKS, or GKE).

* Working knowledge of Domain-Driven Design (DDD) and Backend-for-Frontend (BFF) concepts.

* Hands-on experience integrating applications with cloud services.

* Familiarity with event-driven technologies (e.g., Kafka, MQ, Event Buses).

* Hospitality services domain experience is a plus.

* Strong problem-solving skills, with the ability to work independently and in a team.

* Proficiency in Agile methodologies and software development best practices.

* Skilled in code and query optimization.

* Experience with version control systems, particularly git

Ongrid
Posted by Kapil bhardwaj
Gurugram
5 - 8 yrs
₹20L - ₹30L / yr
skill iconJava
Spring
Microservices
skill iconDocker
+13 more

Requirements

  • Bachelors/Masters in Computer Science or a related field
  • 5-8 years of relevant experience
  • Proven track record of Team Leading/Mentoring a team successfully.
  • Experience with web technologies and microservices architecture both frontend and backend.
  • Java, Spring framework, hibernate
  • MySQL, MongoDB, Solr, Redis
  • Kubernetes, Docker
  • Strong understanding of Object-Oriented Programming, Data Structures, and Algorithms.
  • Excellent teamwork skills, flexibility, and ability to handle multiple tasks.
  • Experience with API Design, ability to architect and implement an intuitive customer and third-party integration story
  • Ability to think and analyze both breadth-wise (client, server, DB, control flow) and depth-wise (threads, sessions, space-time complexity) while designing and implementing services
  • Exceptional design and architectural skills
  • Experience with cloud providers/platforms like GCP and AWS


Roles & Responsibilities

  • Develop new user-facing features.
  • Work alongside the product team to understand requirements; design, develop, and iterate while thinking through the complex architecture.
  • Writing clean, reusable, high-quality, high-performance, maintainable code.
  • Encourage innovation and efficiency improvements to ensure processes are productive.
  • Ensure the training and mentoring of the team members.
  • Ensure the technical feasibility of UI/UX designs and optimize applications for maximum speed.
  • Research and apply new technologies, techniques, and best practices.
  • Team mentorship and leadership.



Wissen Technology

at Wissen Technology

4 recruiters
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹26L / yr
skill iconPython
PySpark
skill iconDjango
skill iconFlask
RESTful APIs
+3 more

Job title - Python developer

Exp – 4 to 6 years

Location – Pune/Mumbai/Bengaluru

 

Please find the JD below:

Requirements:

  • Proven experience as a Python Developer
  • Strong knowledge of core Python and PySpark concepts
  • Experience with web frameworks such as Django or Flask
  • Good exposure to any cloud platform (GCP Preferred)
  • CI/CD exposure required
  • Solid understanding of RESTful APIs and how to build them
  • Experience working with databases like Oracle DB and MySQL
  • Ability to write efficient SQL queries and optimize database performance
  • Strong problem-solving skills and attention to detail
  • Strong SQL programming (stored procedures, functions)
  • Excellent communication and interpersonal skills

Roles and Responsibilities

  • Design, develop, and maintain data pipelines and ETL processes using pyspark
  • Work closely with data scientists and analysts to provide them with clean, structured data.
  • Optimize data storage and retrieval for performance and scalability.
  • Collaborate with cross-functional teams to gather data requirements.
  • Ensure data quality and integrity through data validation and cleansing processes.
  • Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
  • Stay up to date with industry best practices and emerging technologies in data engineering.
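The data validation and cleansing responsibility above can be sketched without Spark; this is a pure-Python stand-in with an invented two-field schema (id must be int-like, amount a non-negative float) — a real pipeline would express the same filters as PySpark DataFrame operations.

```python
# Drop rows with missing keys and coerce/validate types before loading.
# The schema and rejection rules are illustrative assumptions.
def clean_rows(rows):
    cleaned, rejected = [], []
    for row in rows:
        try:
            rec = {"id": int(row["id"]), "amount": float(row["amount"])}
            if rec["amount"] < 0:
                raise ValueError("negative amount")
            cleaned.append(rec)
        except (KeyError, TypeError, ValueError):
            rejected.append(row)  # quarantine bad rows instead of failing the batch
    return cleaned, rejected

if __name__ == "__main__":
    raw = [
        {"id": "1", "amount": "9.5"},  # valid
        {"id": "x", "amount": "2"},    # non-numeric id -> rejected
        {"id": "3"},                   # missing amount -> rejected
    ]
    good, bad = clean_rows(raw)
    print(good)      # [{'id': 1, 'amount': 9.5}]
    print(len(bad))  # 2
```

Quarantining rejects rather than raising keeps the pipeline reliable (the monitoring bullet above) while preserving the bad rows for later inspection.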
GreenStitch Technologies PVT LTD
Posted by Paridhi Mudgal
Bengaluru (Bangalore)
3 - 7 yrs
₹10L - ₹15L / yr
AWS CloudFormation
skill iconJavascript
skill iconNodeJS (Node.js)
skill iconGit
Microservices

Link to apply - https://tally.so/r/wv0lEA


Key Responsibilities:

  1. Software Development:
  • Design, implement, and optimise clean, scalable, and reliable code across [backend/frontend/full-stack] systems.
  • Contribute to the development of microservices, APIs, or UI components as per the project requirements.
  2. System Architecture:
  • Collaborate on the design and enhancement of system architecture.
  • Analyse and identify opportunities for performance improvements and scalability.
  3. Code Reviews and Mentorship:
  • Conduct thorough code reviews to ensure code quality, maintainability, and adherence to best practices.
  • Mentor and support junior developers, fostering a culture of learning and growth.
  4. Agile Collaboration:
  • Work within an Agile/Scrum framework, participating in sprint planning, daily stand-ups, and retrospectives.
  • Collaborate with Carbon Science, Design, and other stakeholders to translate requirements into technical solutions.
  5. Problem-Solving:
  • Investigate, troubleshoot, and resolve complex issues in production and development environments.
  • Contribute to incident management and root cause analysis to improve system reliability.
  6. Continuous Improvement:
  • Stay up-to-date with emerging technologies and industry trends.
  • Propose and implement improvements to existing codebases, tools, and development processes.

Qualifications:

Must-Have:

  • Experience: 2–5 years of professional software development experience in [specify languages/tools, e.g., Java, Python, JavaScript, etc.].
  • Education: Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • Technical Skills:
  • Strong proficiency in [programming languages/frameworks/tools].
  • Experience with cloud platforms like AWS, Azure, or GCP.
  • Knowledge of version control tools (e.g., Git) and CI/CD pipelines.
  • Understanding of data structures, algorithms, and system design principles.

Nice-to-Have:

  • Experience with containerisation (e.g., Docker) and orchestration tools (e.g., Kubernetes).
  • Knowledge of database technologies (SQL and NoSQL).

Soft Skills:

  • Strong analytical and problem-solving skills.
  • Excellent written and verbal communication skills.
  • Ability to work in a fast-paced environment and manage multiple priorities effectively.
Adesso India


Agency job
via HashRoot by Deepak S
Remote only
5 - 12 yrs
₹10L - ₹25L / yr
skill iconElastic Search
Ansible
skill iconAmazon Web Services (AWS)
DevOps
AWS CloudFormation
+1 more

Overview

adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.

Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.


Job Description

The client’s department DPS, Digital People Solutions, offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which includes the 40 largest and most liquid companies on the Frankfurt Stock Exchange.

We are seeking talented DevOps-Engineers with focus on Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.

The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for the customer's employees and customers alike.


Responsibilities:

Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, APM Server, APM agents, and interface configuration.

Create and develop regular "Default Dashboards" for visualizing metrics from sources such as Apache web server, application servers, and databases.

Improve and fix bugs in installation and automation routines.

Monitor CPU usage, security findings, and AWS alerts.

Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.

Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).

Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.

Integrate data from AWS CloudWatch.

Document all relevant information and train involved personnel in the used technologies.
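The "Default Alerting" responsibility above (OOM errors, datasource issues) reduces to pattern rules over log events. A minimal stdlib sketch — the pattern and threshold are illustrative assumptions; the real implementation would be Kibana alerting rules or Elastic Watcher over indexed logs:

```python
import re

# Minimal alerting rule: fire when a window of log lines contains at
# least `threshold` matches for a pattern (here: Java OOM errors).
OOM_PATTERN = re.compile(r"java\.lang\.OutOfMemoryError")

def check_oom(log_lines, threshold=1):
    hits = [ln for ln in log_lines if OOM_PATTERN.search(ln)]
    if len(hits) >= threshold:
        return {"alert": "OOM", "count": len(hits), "sample": hits[0]}
    return None  # no alert

if __name__ == "__main__":
    logs = [
        "2024-01-01 INFO startup complete",
        "2024-01-01 ERROR java.lang.OutOfMemoryError: Java heap space",
    ]
    print(check_oom(logs))
```

Datasource and LDAP-error rules follow the same shape with different patterns; in the Elastic Stack the query, threshold, and notification channel are configured declaratively instead of coded by hand.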


Requirements:

Experience with Elastic Stack (ELK) components and related technologies.

Proficiency in automation tools like Ansible and CloudFormation.

Strong knowledge of AWS Cloud services.

Experience in creating and managing dashboards and alerts.

Familiarity with IAM roles and rights management.

Ability to document processes and train team members.

Excellent problem-solving skills and attention to detail.

 

Skills & Requirements

Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.

Adesso India


Agency job
via Hashroot by Sruthy R
Remote only
5 - 20 yrs
₹12L - ₹25L / yr
DevOps
Kibana
Ansible
AWS CloudFormation
Logstash
+4 more

Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".

Only applications received via email will be reviewed. Applications through other channels will not be considered.


Job Description

The client’s department DPS, Digital People Solutions, offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which includes the 40 largest and most liquid companies on the Frankfurt Stock Exchange.

We are seeking talented DevOps-Engineers with focus on Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.

The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for the customer's employees and customers alike.


Responsibilities:

Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, APM Server, APM agents, and interface configuration.

Create and develop regular "Default Dashboards" for visualizing metrics from sources such as Apache web server, application servers, and databases.

Improve and fix bugs in installation and automation routines.

Monitor CPU usage, security findings, and AWS alerts.

Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.

Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).

Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.

Integrate data from AWS CloudWatch.

Document all relevant information and train involved personnel in the used technologies.
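As an illustration of the "Default Alerting" work above, an OOM-error alert typically starts from a query body sent to Elasticsearch's `_search` API. A minimal sketch in Python — the field names (`message`, `@timestamp`) are illustrative assumptions, not the client's actual schema:

```python
import json

def build_oom_alert_query(minutes=5):
    """Build an Elasticsearch _search body that counts recent
    java.lang.OutOfMemoryError log lines. The field names used
    here (message, @timestamp) are illustrative assumptions."""
    return {
        "query": {
            "bool": {
                "must": [
                    {"match_phrase": {"message": "java.lang.OutOfMemoryError"}}
                ],
                "filter": [
                    {"range": {"@timestamp": {"gte": f"now-{minutes}m"}}}
                ],
            }
        },
        # size 0: the alert only needs hits.total, not the documents
        "size": 0,
    }

print(json.dumps(build_oom_alert_query(minutes=10), indent=2))
```

An alerting job would POST this body to `<es-host>/<index-pattern>/_search` and fire when the returned hit count exceeds a threshold.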


Requirements:

Experience with Elastic Stack (ELK) components and related technologies.

Proficiency in automation tools like Ansible and CloudFormation.

Strong knowledge of AWS Cloud services.

Experience in creating and managing dashboards and alerts.

Familiarity with IAM roles and rights management.

Ability to document processes and train team members.

Excellent problem-solving skills and attention to detail.

 

Skills & Requirements

Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.



Read more
OnActive
Mansi Gupta
Posted by Mansi Gupta
Gurugram, Pune, Bengaluru (Bangalore), Chennai, Bhopal, Hyderabad, Jaipur
5 - 8 yrs
₹6L - ₹12L / yr
skill iconPython
Spark
SQL
AWS CloudFormation
skill iconMachine Learning (ML)
+3 more

Level of skills and experience:


5+ years of hands-on experience using Python, Spark, and SQL.

Experienced in AWS Cloud usage and management.

Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).

Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.

Experience with orchestrators such as Airflow and Kubeflow.

Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).

Fundamental understanding of Parquet, Delta Lake and other data file formats.

Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.

Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.

Read more
Rigel Networks Pvt Ltd
Minakshi Soni
Posted by Minakshi Soni
Bengaluru (Bangalore), Pune, Mumbai, Chennai
8 - 12 yrs
₹8L - ₹10L / yr
skill iconAmazon Web Services (AWS)
Terraform
Amazon Redshift
Redshift
Snowflake
+16 more

Dear Candidate,


We are urgently Hiring AWS Cloud Engineer for Bangalore Location.

Position: AWS Cloud Engineer

Location: Bangalore

Experience: 8-11 yrs

Skills: Aws Cloud

Salary: Best in Industry (20-25% Hike on the current ctc)

Note:

only Immediate to 15 days Joiners will be preferred.

Only candidates from Tier 1 companies will be shortlisted and selected.

Candidates with a notice period of more than 30 days will be rejected during screening.

Offer shoppers will be rejected.


Job description:

 

Description:

 

Title: AWS Cloud Engineer

Prefer BLR / HYD – else any location is fine

Work Mode: Hybrid – based on HR rule (currently 1 day per month)


Shift Timings 24 x 7 (Work in shifts on rotational basis)

Total experience: 8+ years, of which at least 5 years of relevant experience is required.

Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting



Experience and Skills Requirements:


Experience:

8 years of experience in a technical role working with AWS


Mandatory

Technical troubleshooting and problem solving

AWS management of large-scale IaaS/PaaS solutions

Cloud networking and security fundamentals

Experience using containerization in AWS

Working data warehouse knowledge (Redshift and Snowflake preferred)

Working with IaC – Terraform and CloudFormation

Working understanding of scripting languages including Python and Shell

Collaboration and communication skills

Highly adaptable to changes in a technical environment

 

Optional

Experience using monitoring and observability toolsets, including Splunk and Datadog

Experience using GitHub Actions

Experience using AWS RDS/SQL based solutions

Experience working with streaming technologies, including Kafka and Apache Flink

Experience working with ETL environments

Experience working with the Confluent Cloud platform


Certifications:


Minimum

AWS Certified SysOps Administrator – Associate

AWS Certified DevOps Engineer - Professional



Preferred


AWS Certified Solutions Architect – Associate


Responsibilities:


Responsible for technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a shared managed service.


The following is a list of expected responsibilities:


To manage and support a customer’s AWS platform

To be technical hands on

Provide Incident and Problem management on the AWS IaaS and PaaS Platform

Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner

Actively monitor an AWS platform for technical issues

To be involved in the resolution of technical incident tickets

Assist in the root cause analysis of incidents

Assist with improving efficiency and processes within the team

Examining traces and logs

Working with third party suppliers and AWS to jointly resolve incidents
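Much of the incident work above ("Examining traces and logs") begins with scripted log triage. A stdlib-only sketch — the log line format here is an assumption, not the customer's actual format:

```python
import re
from collections import Counter

# assumed format: "<timestamp> <LEVEL> <message>"
LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

def summarize_errors(lines):
    """Count occurrences of each ERROR message in an iterable of
    log lines -- a useful first pass during incident triage."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("msg")] += 1
    return counts

sample = [
    "2024-05-01T10:00:00Z INFO service started",
    "2024-05-01T10:01:12Z ERROR db connection refused",
    "2024-05-01T10:01:15Z ERROR db connection refused",
    "2024-05-01T10:02:00Z ERROR timeout calling partner API",
]
print(summarize_errors(sample).most_common(1))
# -> [('db connection refused', 2)]
```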


Good to have:


Confluent Cloud

Snowflake




Best Regards,

Minakshi Soni

Executive - Talent Acquisition (L2)

Rigel Networks

Worldwide Locations: USA | HK | IN 

Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Remote only
5 - 10 yrs
₹5L - ₹15L / yr
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconMongoDB
+13 more

Job Title : MERN Stack Developer

Experience : 5+ Years

Shift Timings : 8:00 AM to 5:00 PM


Role Overview:

We are hiring a skilled MERN Stack Developer to build scalable web applications. You’ll work on both front-end and back-end, leveraging modern frameworks and cloud technologies to deliver high-quality solutions.


Key Responsibilities :

  • Develop responsive UIs using React, GraphQL, and TypeScript.
  • Build back-end APIs with Node.js, Express, and MySQL.
  • Integrate AWS services like Lambda, S3, and API Gateway.
  • Optimize deployments using AWS CDK and CloudFormation.
  • Ensure code quality with Mocha/Chai/Sinon, ESLint, and Prettier.

Required Skills :

  • Strong experience with React, Node.js, and GraphQL.
  • Proficiency in AWS services and Infrastructure as Code (CDK/Terraform).
  • Familiarity with MySQL, Elasticsearch, and modern testing frameworks.
Read more
Swissclear

at Swissclear

2 recruiters
Gunjan Tiwari
Posted by Gunjan Tiwari
Pune
3 - 7 yrs
₹5L - ₹20L / yr
DevOps
skill iconAmazon Web Services (AWS)
AWS CloudFormation
Continuous Integration
skill iconJenkins
+2 more
TAPPP is leading the charge in bringing premium digital entertainment content & live sports to global consumers via its prepaid platform. TAPPP is available across platforms via the Web, Mobile and Tablets.

Building out this brand presents significant product and engineering challenges. At the centre of solving those challenges is the TAPPP Product Engineering team, which is responsible for the TAPPP product end to end.

We are looking for an experienced DevOps engineer who will work collaboratively with our engineering team to deploy and operate systems/services, help automate and streamline operations and processes, and troubleshoot issues within multiple environments. As a DevOps engineer, you will be responsible for developing and implementing orchestration techniques for automating deployments of microservices using Docker containers in the cloud (AWS) and on-premises environments. You will also support technical teams by using technical analyses to improve the scalability and reliability of the entire system.

The organization is flat, process is minimal, individual responsibility is big, and there is an emphasis on keeping non-productive influences out of the everyday technical decision-making process. Upholding these philosophies will be imperative as we execute our aggressive plan of global expansion over the next 2 years.

The position is based in Pune, India.

Here is what we are looking for:

• Minimum 3 years of working experience in a DevOps capacity, preferably in a fast-paced and constantly evolving environment.

• Strong experience and sound understanding of CI/CD principles and technologies such as Git, Jenkins, Chef, Puppet, Ansible, etc.

• Experience with container technology such as Docker and container orchestration tools like ECS, Kubernetes, Mesos.

• Proven experience in building and maintaining production systems on AWS using EC2, RDS, S3, ELB, CloudFormation, ECS clusters and AWS APIs.

• Experience with monitoring, metrics, and visualization tools for network, server, and application/service status.

• Experience and working understanding of multiple coding and scripting languages including Shell, Python, Perl and Java.

• Strong written and oral communication skills a must.

• An unquenchable desire to learn, attention to detail, and a can-do attitude.

• Comfortable working in a start-up environment.
Read more
HaystackAnalytics
Navi Mumbai
0 - 1 yrs
₹3L - ₹5L / yr
skill iconReact.js
skill iconNextJs (Next.js)
skill iconNodeJS (Node.js)
AWS CloudFormation
Windows Azure
+1 more

Position -  Full stack Developer

Location - Navi Mumbai

Freshers  0-3 yrs


Who are we

Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India at the BIRAC Showcase event in Delhi, 2022.


Objectives of this Role:

  • Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
  • Ideate and develop new product features in collaboration with domain experts in healthcare and genomics 
  • Develop state of the art enterprise standard front-end and backend services
  • Develop cloud platform services based on container orchestration platform 
  • Continuously embrace automation for repetitive tasks
  • Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns 
  • Build robust tech modules that are Unit Testable, Automating recurring tasks and processes  
  • Engage effectively with team members and collaborate to upskill and unblock each other



Frontend Skills 

  • HTML 5  
  • CSS framework ( LESS/ SASS / Tailwind ) 
  • Es6 / Typescript 
  • Desktop app (Electron / Tauri)
  • Component library ( Bootstrap , material UI, Lit ) 
  • Responsive web layout ( Flex layout , Grid layout ) 
  • Package manager --> yarn / npm / turbo
  • Build tools - > (Vite/Webpack/Parcel)
  • Frameworks --> React with Redux or MobX / Next JS
  • Design patterns 
  • Testing - JEST / MOCHA / JASMINE / Cypress
  • Functional Programming concepts (Good to have)
  • Scripting ( powershell , bash , python )



Backend Skills 

  • Nodejs - Express / NEST JS 
  • Python / Rust
  • REST API 
  • SOLID Design Principles
  • Database (postgresql / mysql / redis / cassandra / mongodb ) 
  • Caching ( Redis ) 
  • Container Technology ( Docker / Kubernetes )  
  • Cloud ( azure , aws , openshift, google cloud ) 
  • Version Control - GIT 
  • GITOPS 
  • Automation ( terraform , ansible ) 


Cloud Skills 

  • Object storage
  • VPC concepts 
  • Containerize Deployment
  • Serverless architecture 



 Other Skills 

  • Innovation and thought leadership
  • UI - UX design skills  
  • Interest in in learning new tools, languages, workflows, and philosophies to grow
  • Communication 


To know more about us- https://haystackanalytics.in/




Read more
Smartan.ai

at Smartan.ai

2 candid answers
Aadharsh M
Posted by Aadharsh M
Chennai
4 - 8 yrs
₹5L - ₹15L / yr
skill iconPython
NumPy
TensorFlow
PyTorch
Google Cloud Platform (GCP)
+4 more

Role Overview:

We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.


Key Responsibilities:

  • Develop, implement, and optimize machine learning models and algorithms to support product development.
  • Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment.
  • Collaborate with cross-functional teams to define data requirements and product taxonomy.
  • Design and build scalable data pipelines and systems to support real-time data processing and analysis.
  • Ensure the accuracy and quality of data used for modeling and analytics.
  • Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
  • Implement best practices for data governance, privacy, and security.
  • Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
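In miniature, the lifecycle above (collect, split, train, evaluate) can be sketched as follows — deliberately stdlib-only, with a trivial mean baseline standing in for a real model such as one built with TensorFlow or PyTorch:

```python
import random
import statistics

def train_test_split(data, test_ratio=0.25, seed=42):
    """Shuffle (x, y) pairs reproducibly and split into train/test sets."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

def fit_mean_baseline(train):
    """'Train' the simplest possible model: predict the mean target."""
    return statistics.mean(y for _, y in train)

def mean_absolute_error(prediction, test):
    """Evaluate a constant prediction against held-out pairs."""
    return statistics.mean(abs(y - prediction) for _, y in test)

# toy dataset of (feature, target) pairs
data = [(x, 2 * x + 1) for x in range(100)]
train, test = train_test_split(data)
pred = fit_mean_baseline(train)
print(f"baseline MAE: {mean_absolute_error(pred, test):.2f}")
```

In a production pipeline each of these steps would be a monitored, versioned stage (e.g. orchestrated jobs with model tracking), but the shape of the workflow is the same.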


Qualifications:

  • Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
  • 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
  • Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
  • Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
  • Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
  • Hands-on experience with data visualization tools and techniques.
  • Strong understanding of statistics, data analysis, and machine learning concepts.
  • Excellent problem-solving skills and attention to detail.
  • Ability to work collaboratively in a fast-paced, dynamic environment.


Preferred Qualifications:

  • Knowledge of microservices architecture and RESTful APIs.
  • Familiarity with Agile development methodologies.
  • Experience in building taxonomy for data products.
  • Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
Read more
ChicMic Studios
manpreet kaur
Posted by manpreet kaur
Remote only
2 - 12 yrs
₹4L - ₹12L / yr
Spring
skill iconJava
skill iconSpring Boot
skill iconJavascript
AWS CloudFormation
+2 more

Hi All,


Job Description:

As a Java Developer, you will be responsible for developing and maintaining high performance, scalable, and secure applications. We are seeking a skilled and motivated Java Developer with experience in the Spring Framework to join our dynamic team. This is a remote/work-from-home position, offering you the flexibility to work from anywhere.


Location : Remote / WFH

Salary : Good Hike on Current


Key Responsibilities:

  • Design, develop, and maintain Java-based applications using the Spring Framework.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Write clean, maintainable, and efficient code.
  • Ensure the performance, quality, and responsiveness of applications.
  • Troubleshoot and debug issues to optimize application performance.
  • Participate in code reviews to maintain high coding standards and best practices.
  • Work with RESTful APIs and integrate third-party services.
  • Contribute to all phases of the software development lifecycle, including requirements gathering, design, implementation, testing, and deployment.


Key Requirements:

  • 2 to 5+ years of experience in Java development.
  • Strong experience with the Spring Framework (Spring Boot, Spring MVC, Spring Data, etc.).
  • Proficiency in building RESTful APIs and web services.
  • Solid understanding of object-oriented programming and design patterns.
  • Experience with relational databases like MySQL, PostgreSQL, or Oracle.
  • Familiarity with version control systems, particularly Git.
  • Knowledge of front-end technologies such as HTML, CSS, and JavaScript is a plus.
  • Ability to work independently and as part of a remote team.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.


Preferred Qualifications:

  • Experience with cloud platforms like AWS, Azure, or Google Cloud.
  • Familiarity with microservices architecture.
  • Knowledge of containerization tools such as Docker.
  • Understanding of Agile/Scrum methodologies.


Benefits:

  • Work-from-home/remote opportunities.
  • Opportunities for professional growth and development.
  • Collaborative and inclusive work environment.


Read more
Nomiso
Raja Raguram
Posted by Raja Raguram
Bengaluru (Bangalore)
10 - 15 yrs
₹30L - ₹45L / yr
Computer Networking
AWS CloudFormation
Routing & Switching
Shell Scripting
Firewall

What You Can Expect from Us:


Here at Nomiso, we work hard to provide our team with the best opportunities to grow their careers.  You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership is at the center of everything we do, at all levels of the company. Let’s make your career great!


Position Overview: 

The Principal Cloud Network Engineer is a key interface to client teams and is responsible for developing convincing technical solutions. This requires working closely with clients and multiple partner/vendor teams to architect the solution.


This position requires sound technical knowledge, proven business acumen, and a differentiating client-facing ability. You are required to anticipate, create, and define an innovative solution which matches the customer's needs and the client's tactical and strategic requirements.


Roles and Responsibilities:

  • Design and implement next-generation networking technologies
  • Deploy/support large-scale production network
  • Track, analyze, and trend capacity on the broadcast network and datacenter infrastructure
  • Provide Tier 3 escalated network support
  • Perform fault management and problem resolution
  • Work closely with other departments, vendors, and service providers
  • Perform network change management, support modifications, and maintenance
  • Perform network upgrade, maintenance, and repair work
  • Lead implementation of new systems
  • Perform capacity planning and management
  • Suggest opportunities for improvement
  • Create and support network management objectives, policies, and procedures
  • Ensure network documentation is kept up-to-date
  • Train and assist junior engineers.


Must Have Skills:

    Candidates with overall 10+ years of experience in the following:


  • Hands-on: Routers/Switches, Firewalls (Palo Alto or similar), Load Balancer (RTM, GTM), AWS (VPC, API Gateway, CloudFront, Route 53, Cloud WAN, Direct Connect, PrivateLink, Transit Gateway) Networking, Wireless.
  • Strong hands-on coding/scripting experience in one or more programming languages such as Python, Golang, Java, Bash, etc. 
  • Networking technologies: Routing Protocols (BGP, EIGRP & OSPF, VRFs, VLANs, VRRP, LACP, MLAG, TACACS / Rancid / GIT, IPSec VPN, DNS / DHCP, NAT / SNAT, IP Multicast, VPC, Transit Gateway, NAT Gateway, ALB/ELB), Security Groups, ACL, HSRP, VRRP, SNMP, DHCP.
  • Managing hardware, IOS, coordinating with vendors/partners for support.
  • Managing CDN, links, VPN technologies, SDN/Cisco ACI (design and implementation) and Network Function Virtualization (NFV).
  • Reviewing technology designs, and architecture, taking local and regional regulatory requirements into account for Voice, Video Solutions, Routing, Switching, VPN, LAN, WAN, Network Security, Firewalls, NGFW, NAT, IPS, Botnet, Application Control, DDoS, Web Filtering.
  • Palo Alto Firewall / Panorama, BIG-IQ, and NetBrain tools/technology standards to support daily operations, enhance performance, and improve reliability.
  • Creating a real-time contextual living map of Client’s network with detailed network specifications, including diagrams, equipment configurations with defined standards
  • Improve the reliability of the service, proactively identifying and preventing customer impact by eliminating single points of failure (SPOF).
  • Capturing critical forensic data and providing complete visibility across the enterprise for security incidents as soon as a threat is detected, by implementing tools like NetBrain.
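For the VPC subnetting work listed above, Python's stdlib `ipaddress` module is a handy planning aid. A sketch — the CIDR blocks below are examples, not a recommended layout:

```python
import ipaddress

def plan_subnets(vpc_cidr, new_prefix, count):
    """Carve `count` equal subnets of size /new_prefix out of a VPC CIDR."""
    vpc = ipaddress.ip_network(vpc_cidr)
    return [str(net) for net in list(vpc.subnets(new_prefix=new_prefix))[:count]]

# e.g. four /24s for public/private subnets across two AZs
subnets = plan_subnets("10.0.0.0/16", 24, 4)
print(subnets)  # ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24', '10.0.3.0/24']
```

The same arithmetic applies whether the subnets end up in CloudFormation, Terraform, or a manual change request.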


Good to Have Skills:

  • Industry certifications on Switching, Routing and Security.
  • Elastic Load Balancing (ELB), DNS / DHCP, IPSec VPN, Multicast, TACACS / Rancid / GIT, ALB/ELB
  • AWS Control Tower
  • Experience leading a team of 5 or more.
  • Strong Analytical and Problem Solving Skills.
  • Experience implementing / maintaining Infrastructure as Code (IaC)
  • Certifications : CCIE, AWS Certified Advanced Networking



Read more
Ajjas

at Ajjas

3 candid answers
Sakshi Sharma
Posted by Sakshi Sharma
Bhopal
1 - 3 yrs
₹7L - ₹10L / yr
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconMongoDB
+7 more

Position Overview

We are seeking a highly skilled React Native Developer with a strong background in the MERN (MongoDB, Express.js, React.js, Node.js) stack to join our dynamic team. The ideal candidate will have a minimum of 2 years of professional experience and a proven track record of developing robust, scalable mobile applications using React Native. This role offers an exciting opportunity to work on innovative projects and significantly contribute to the growth and success of our startup.

Key Responsibilities

●     Develop High-Quality Mobile Applications: Create user-friendly mobile applications using React Native, ensuring high performance and responsiveness.

●     Collaborate with Cross-Functional Teams: Work closely with product managers, designers, and other developers to define, design, and deliver new features.

●     Maintain Clean and Efficient Code: Write clean, maintainable, and efficient code following industry best practices and coding standards.

●     Optimize Application Performance: Enhance application performance for speed and scalability.

●     Participate in Code Reviews: Engage in code reviews, discussions, and knowledge-sharing sessions to improve team output and code quality.

●     Troubleshoot and Debug: Identify and resolve application issues to ensure smooth functionality.

●     Stay Updated with Trends: Keep abreast of emerging technologies and trends in mobile development to incorporate best practices.

Requirements

●     MERN Stack Proficiency: Strong expertise in MongoDB, Express.js, React.js, and Node.js.

●     RESTful APIs and Web Services: Solid understanding and experience in integrating RESTful APIs and web services.

●     Version Control Systems: Proficient with Git for version control.

●     Mobile UI/UX Design Principles: Knowledge of best practices in mobile UI/UX design.

●     Problem-Solving Skills: Excellent analytical and problem-solving abilities.

●     Team and Independent Work: Ability to work both independently and as part of a team in a fast-paced startup environment.

●     Communication and Collaboration: Strong verbal and written communication skills, with the ability to collaborate effectively with team members.

 

Nice to Have

●     GraphQL: Familiarity with GraphQL for efficient data fetching.

●     Cloud Services: Knowledge of cloud services such as AWS or Firebase.

 

Read more
Soulpage IT Solutions
Hyderabad
0 - 1 yrs
₹2L - ₹3L / yr
AWS CloudFormation
DevOps
CI/CD
skill iconDocker
skill iconKubernetes
+4 more

Job Description:

We are seeking a motivated DevOps intern to join our team. The intern will be responsible for deploying and maintaining applications in AWS and Azure cloud environments, as well as on client local machines when required. The intern will troubleshoot any deployment issues and ensure the high availability of the applications.


Responsibilities:

  • Deploy and maintain applications in AWS and Azure cloud environments
  • Deploy applications on client local machines when needed
  • Troubleshoot deployment issues and ensure high availability of applications
  • Collaborate with development teams to improve deployment processes
  • Monitor system performance and implement optimizations
  • Implement and maintain CI/CD pipelines
  • Assist in implementing security best practices
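One of the responsibilities above — ensuring high availability of deployed applications — often starts with a simple availability calculation over health-check results. A stdlib-only sketch; the SLO target and sample data are illustrative:

```python
def availability(checks):
    """Fraction of successful health checks (True = up)."""
    return sum(checks) / len(checks) if checks else 0.0

def breaches_slo(checks, slo=0.999):
    """True if measured availability falls below the SLO target."""
    return availability(checks) < slo

# 1440 one-minute checks over a day, with 3 failed probes
day = [True] * 1437 + [False] * 3
print(f"availability: {availability(day):.4%}")  # 99.7917%
print("SLO breached:", breaches_slo(day))        # True for a 99.9% target
```

In practice a monitoring stack (e.g. Prometheus with an alerting rule) computes this continuously, but the underlying arithmetic is exactly this.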


Requirements:

  • Currently pursuing a degree in Computer Science, Engineering, or related field
  • Knowledge of cloud computing platforms (AWS, Azure)
  • Familiarity with containerization technologies (Docker, Kubernetes)
  • Basic understanding of networking principles
  • Strong problem-solving skills
  • Excellent communication skills


Nice to Have:

  • Familiarity with configuration management tools (e.g., Ansible, Chef, Puppet)
  • Familiarity with monitoring and logging tools (e.g., Prometheus, ELK stack)
  • Understanding of security best practices in cloud environments


Benefits:

  • Hands-on experience with cutting-edge technologies.
  • Opportunity to work on exciting AI and LLM projects


Read more
DevOpspatial Pvt Ltd
Geetanjali Singh
Posted by Geetanjali Singh
Remote only
15 - 20 yrs
₹20L - ₹33L / yr
Dynatrace
DevOps
Splunk
Terraform
AWS Lambda
+1 more
  • Dynatrace Expertise: Lead the implementation, configuration, and optimization of Dynatrace monitoring solutions across diverse environments, ensuring maximum efficiency and effectiveness.
  • Cloud Integration: Utilize expertise in AWS and Azure to seamlessly integrate Dynatrace monitoring into cloud-based architectures, leveraging PaaS services and IAM roles for efficient monitoring and management.
  • Application and Infrastructure Architecture: Design and architect both application and infrastructure landscapes, considering factors like Oracle, SQL Server, Shareplex, Commvault, Windows, Linux, Solaris, SNMP polling, and SNMP traps.
  • Cross-Platform Integration: Integrate Dynatrace with various products such as Splunk, APIM, and VMWare to provide comprehensive monitoring and analysis capabilities.
  • Inter-Account Integration: Develop and implement integration strategies for seamless communication and monitoring across multiple AWS accounts, leveraging Terraform and IAM roles.
  • Experience working with on-premise applications and infrastructure
  • Experience with AWS & Azure; cloud certified.
  • Dynatrace Experience & Certification



Read more
Nimesa Technologies
Bengaluru (Bangalore)
3 - 5 yrs
Best in industry
skill iconJava
skill iconSpring Boot
AWS CloudFormation
skill iconGitHub
Multithreading
+3 more
  • Java
  • Spring Boot
  • Database (Preferably Mysql)
  • Multithreading
  • Low Level design (Any Module)
  • Github
  • Leetcode
  • data structure
Read more
NovacisDigital
Vidhyasagar G
Posted by Vidhyasagar G
Chennai
6 - 10 yrs
₹12L - ₹20L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
skill iconJenkins
+6 more

DevOps Lead Engineer


We are seeking a skilled DevOps Lead Engineer with 8 to 10 years of experience who can handle the entire DevOps lifecycle and is accountable for the implementation of the process. A DevOps Lead Engineer is responsible for automating all the manual tasks for developing and deploying code and data to implement continuous deployment and continuous integration frameworks. They are also held responsible for maintaining high availability of production and non-production work environments.


Essential Requirements (must have):


• Bachelor's degree preferable in Engineering.

• Solid 5+ years of experience with AWS, DevOps, and related technologies


Skills Required:


Cloud Performance Engineering

• Performance scaling in a Micro-Services environment

• Horizontal scaling architecture

• Containerization (such as Dockers) & Deployment

• Container Orchestration (such as Kubernetes) & Scaling


DevOps Automation

• End to end release automation.

• Solid Experience in DevOps tools like GIT, Jenkins, Docker, Kubernetes, Terraform, Ansible, CFN etc.

• Solid experience in Infra Automation (Infrastructure as Code), Deployment, and Implementation.

• Candidates must possess experience using Linux and Jenkins, and ample experience configuring and automating monitoring tools.

• Strong scripting knowledge

• Strong analytical and problem-solving skills.

• Cloud and On-prem deployments


Infrastructure Design & Provisioning

• Infra provisioning.

• Infrastructure Sizing

• Infra Cost Optimization

• Infra security

• Infra monitoring & site reliability.


Job Responsibilities:


• Responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment and provide a stable environment for the delivery of quality software.

• The DevOps Lead Engineer is accountable for designing, building, configuring, and optimizing automation systems that help to execute business web and data infrastructure platforms.

• The DevOps Lead Engineer is involved in creating technology infrastructure and automation tools, and maintaining configuration management.

• The Lead DevOps Engineer oversees and leads the activities of the DevOps team. They are accountable for conducting training sessions for the juniors in the team, mentoring, and career support. They are also answerable for the architecture and technical leadership of the complete DevOps infrastructure.

Read more
CodeCraft Technologies Private Limited
Chandana B
Posted by Chandana B
Bengaluru (Bangalore)
3 - 8 yrs
Best in industry
skill iconJava
Spring
J2EE
SQL
+5 more

Position: Java Developer

Experience: 3-8 Years

Location: Bengaluru


We are a multi-award-winning creative engineering company offering design and technology solutions on mobile, web and cloud platforms. We are looking for an enthusiastic and self-driven Java Developer to join our team.


Roles and Responsibilities:

  • Expert-level micro web services development skills using Java/J2EE/Spring
  • Strong in SQL and NoSQL databases (MySQL / MongoDB preferred)
  • Ability to develop software programs with the best design patterns, data structures & algorithms
  • Work in a very challenging, high-performance environment to clearly understand and provide state-of-the-art solutions (via design and code)
  • Ability to debug complex applications and help in providing durable fixes
  • While the Java platform is primary, ability to understand, debug, and work on other application platforms using Ruby on Rails and Python
  • Responsible for delivering feature changes and functional additions that handle millions of requests per day while adhering to quality and schedule targets
  • Extensive knowledge of at least 1 cloud platform (AWS, Microsoft Azure, GCP) preferably AWS.
  • Strong unit testing skills for frontend and backend using any standard framework
  • Exposure to application gateways and dockerized microservices
  • Good knowledge and experience with Agile, TDD or BDD methodologies


Desired Profile:

  • Programing language – Java
  • Framework – Spring Boot
  • Good Knowledge of SQL & NoSQL DB
  • AWS Cloud Knowledge
  • Micro Service Architecture


Good to Have:

  • Familiarity with Web Front End (Java Script/React)
  • Familiarity with working in Internet of Things / Hardware integration
  • Docker & Kubernetes
  • Serverless Architecture
  • Working experience in Energy Company (Solar Panels + Battery)
Read more
Confidential


Agency job
via Arnold Consultants by Sampreetha Pai
Bengaluru (Bangalore)
8 - 13 yrs
₹30L - ₹35L / yr
skill iconJava
skill iconMongoDB
skill iconC#
skill iconPython
skill iconNodeJS (Node.js)
+3 more

About this role: We are seeking an experienced MongoDB Developer/DBA who will be

responsible for maintaining MongoDB databases while optimizing performance, security, and

the availability of MongoDB clusters. As a key member of our team, you’ll play a crucial role in

ensuring our data infrastructure runs smoothly.

You'll have the following responsibilities

• Maintain and configure MongoDB instances: build, design, deploy, maintain, and lead the MongoDB Atlas infrastructure. Keep clear documentation of the database setup and architecture.

• Own governance, defining and enforcing policies in MongoDB Atlas.

• Provide consultancy on the design and infrastructure (MongoDB Atlas) for each use case.

• Put a service and governance wrap in place to restrict over-provisioning of server size, number of clusters per project, and scaling through MongoDB Atlas.

• Gather and document detailed business requirements applicable to the data layer. Design, configure, and manage MongoDB on Atlas.

• Design, develop, test, document, and deploy high-quality technical solutions on the MongoDB Atlas platform, based on industry best practices, to solve business needs. Resolve technical issues raised by the team and/or customer, and manage escalations as required.

• Migrate data from on-premise MongoDB and RDBMS to MongoDB Atlas.

• Communicate and collaborate with other technical resources and customers, providing timely updates on the status of deliverables, shedding light on technical issues, and obtaining buy-in on creative solutions.

• Write procedures for backup and disaster recovery.


You'll have the following skills & experience

• Excellent analytical, diagnostic, and problem-solving skills.

• Strong understanding of database concepts, with expertise in designing and developing NoSQL databases such as MongoDB.

• MongoDB query operations, and import and export operations on databases.

• Experience with ETL methodology for performing data migration, extraction, transformation, data profiling, and loading.

• Migrating databases by ETL as well as by manual processes, covering design, development, and implementation.

• General networking skills, especially in the context of a public cloud (e.g. AWS: VPCs, subnets, routing tables, NAT/internet gateways, DNS, security groups).

• Experience using Terraform as an IaC tool for setting up infrastructure on AWS Cloud.

• Performing database backups and recovery.

• Competence in at least one of the following languages (in no particular order): Java, C++, C#, Python, Node.js (JavaScript), Ruby, Perl, Scala, Go.

• Excellent communication skills: able to compromise while drawing out the risks and constraints associated with solutions, and able to work independently as well as collaborate with other teams.

• Proficiency in configuring schemas and MongoDB data modeling.

• Strong understanding of SQL and NoSQL databases.

• Comfortable with MongoDB syntax.

• Experience with database security management.

• Performance optimization: ensure databases achieve maximum performance and availability, and design effective indexing strategies.
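As a small, illustrative sketch of the migration and ETL skills listed above, the snippet below folds flat RDBMS-style rows into nested MongoDB-style documents — a common remodeling step when moving a one-to-many relational schema into embedded documents. The field names and schema are assumptions for illustration, not part of the role.

```python
def rows_to_documents(rows):
    """Fold flat relational rows into nested MongoDB-style documents.

    Each customer's orders (a one-to-many relation in an RDBMS) become an
    embedded array inside a single document keyed by customer_id.
    """
    docs = {}
    for row in rows:
        doc = docs.setdefault(
            row["customer_id"], {"_id": row["customer_id"], "orders": []}
        )
        doc["orders"].append({"order_id": row["order_id"], "total": row["total"]})
    return list(docs.values())


# Hand-built sample rows standing in for an RDBMS extract
sample = [
    {"customer_id": 1, "order_id": 10, "total": 99.0},
    {"customer_id": 1, "order_id": 11, "total": 25.0},
    {"customer_id": 2, "order_id": 12, "total": 10.0},
]
print(rows_to_documents(sample))
```

In a real migration the resulting documents would be bulk-inserted into Atlas (e.g. via a driver), but the reshaping logic itself stays this simple.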

Read more
LenDenClub

at LenDenClub

4 recruiters
Mansi Ghadigaonkar
Posted by Mansi Ghadigaonkar
Dlh Park, Unit No.1006, 10th Floor, SV Rd, Goregaon West, Mumbai, Maharashtra 400064
7 - 10 yrs
₹15L - ₹25L / yr
AWS CloudFormation
AWS DevOps
AWS Lambda
Infrastructure
skill iconKubernetes

Job Role - DevOps Infra Lead Engineer


About LenDenClub

LenDenClub is a leading peer-to-peer lending platform that provides an alternate investment opportunity to investors or lenders looking for high returns with creditworthy borrowers looking for short-term personal loans. With a total of 8 million users and 2 million+ investors on board, LenDenClub has become a go-to platform to earn returns in the range of 10%-12%. LenDenClub offers investors a convenient medium to browse thousands of borrower profiles to achieve better returns than traditional asset classes. Moreover, LenDenClub is safeguarded against market volatility and inflation. LenDenClub provides a great way to diversify one’s investment portfolio.

LenDenClub has raised US $10 million in a Series A round from an association of investors. With the new round of funding, LenDenClub was valued at more than US $51 million in the last round and has grown multifold since then.


Why work at LenDenClub

LenDenClub is a certified great place to work. The certification comes from the Great Place to Work Institute, Inc., a globally renowned firm dedicated to evaluating companies for their employee satisfaction on the grounds of high trust and high-performance culture at workplaces.

As a LenDenite, you will be a part of an enthusiastic and passionate group of individuals who own and love what they do. At LenDenClub we believe in creating leaders and with you coming on board you get to work with complete freedom to chase your ultimate career goal without any inhibitions.


Website - https://www.lendenclub.com


Location - Mumbai (Goregaon)


Responsibilities of a DevOps Infra Lead Engineer:


● Responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment. Identify and implement data storage methods like clustering to improve the performance of the team.

● Responsible for coming up with solutions for managing a vast number of documents in real time that enable quick search and analysis. Identify issues in the production phase and system, and implement monitoring solutions to overcome those issues.

● Stay abreast of industry trends and best practices. Conduct research, tests, and execute new techniques which could be reused and applied to the software development project.

● Accountable for designing, building, and optimizing automation systems that help to execute business web and data infrastructure platforms.

● Creating technology infrastructure, automation tools, and maintaining configuration management.

● To cater to the engineering department’s quality and standards, implement lifecycle infrastructure solutions and documentation operations.

● Implementation and maintaining of CI/CD pipelines.

● Containerisation of applications

● Construct and improve the security on the infrastructure

● Infrastructure as Code

● Maintaining Environments

● NAT and ACLs

● Setup of ECS and ELB for HA

● WAF and Firewall and DMZ

● Deployment strategies for high uptime

● Set up monitoring and policies for infra and applications
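As a sketch of the Infrastructure as Code and security responsibilities above, here is a minimal resource written in Terraform (one common IaC tool, not named in this posting): an HTTPS-only security group. The variable name and CIDR choices are assumptions for illustration, not the platform's actual setup.

```hcl
# Illustrative only: allow inbound HTTPS, all outbound traffic
resource "aws_security_group" "web" {
  name   = "web-https-only"
  vpc_id = var.vpc_id # assumed to be declared elsewhere

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

The same shape extends to NAT gateways, ACLs, and the ECS/ELB setup mentioned above, keeping the environment reproducible rather than hand-configured.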

Required Skills

● Communication Skills

● Interpersonal Skills

● Infrastructure

● Aware of technologies like Python, MySQL, MongoDB, and so on.

● Sound knowledge of cloud infrastructure.

● Should possess knowledge of fundamental Unix/Linux, monitoring, editing, and command-based tools is essential.

● Versed in scripting languages such as Ruby and Shell

● Google Cloud Platforms, Hadoop, NoSQL databases, and big data clusters.

● Knowledge of open source technologies


Read more
Fibonalabs

at Fibonalabs

5 recruiters
Latha Prasad
Posted by Latha Prasad
Bengaluru (Bangalore)
4 - 6 yrs
₹6L - ₹10L / yr
skill iconAmazon Web Services (AWS)
skill iconReact.js
skill iconNodeJS (Node.js)
AWS Lambda
Serverless
+7 more

We are Seeking:


1. AWS Serverless, AWS CDK:

Proficiency in developing serverless applications using AWS Lambda, API Gateway, S3, and other relevant AWS services.

Experience with AWS CDK for defining and deploying cloud infrastructure.

Knowledge of serverless design patterns and best practices.

Understanding of Infrastructure as Code (IaC) concepts.

Experience in CI/CD workflows with AWS CodePipeline and CodeBuild.
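For context on the IaC bullets above: AWS CDK applications ultimately synthesize to CloudFormation. A minimal hand-written CloudFormation sketch of the Lambda + S3 pattern might look like the following; resource names and the runtime version are illustrative, not a definitive template.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal serverless sketch (illustrative resource names)
Resources:
  UploadsBucket:
    Type: AWS::S3::Bucket
  FnRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: { Service: lambda.amazonaws.com }
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  HelloFn:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt FnRole.Arn
      Code:
        ZipFile: |
          def handler(event, context):
              return {"statusCode": 200, "body": "ok"}
```

With CDK the same stack would be expressed in TypeScript or Python constructs and deployed through `cdk deploy`, which is where the CodePipeline/CodeBuild experience above comes in.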


2. TypeScript, React/Angular:

Proficiency in TypeScript.

Experience in developing single-page applications (SPAs) using React.js or Angular.

Knowledge of state management libraries like Redux (for React) or RxJS (for Angular).

Understanding of component-based architecture and modern frontend development practices.


3. Node.js:

Strong proficiency in backend development using Node.js.

Understanding of asynchronous programming and event-driven architecture.

Familiarity with RESTful API development and integration.


4. MongoDB/NoSQL:

Experience with NoSQL databases and their use cases.

Familiarity with data modeling and indexing strategies in NoSQL databases.

Ability to integrate NoSQL databases into serverless architectures.


5. CI/CD:

Ability to troubleshoot and debug CI/CD pipelines.

Knowledge of automated testing practices and tools.

Understanding of deployment automation and release management processes.


Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field.

Certification (Preferred, Added Advantage): AWS certifications (e.g., AWS Certified Developer – Associate)


Read more
Kognivera IT Solutions
Bengaluru (Bangalore)
6 - 12 yrs
₹10L - ₹30L / yr
skill iconGo Programming (Golang)
skill iconRuby on Rails (ROR)
skill iconRuby
skill iconPython
skill iconJava
+5 more

Company Description

KogniVera is an India-based technology consulting and services company that specializes in conceptualization, design, engineering, and management of digital products. The company brings rich experience and expertise to address the growth needs of enterprises in dynamic industries such as Retail, Financial Services, Insurance, and Healthcare. KogniVera has an unwavering obsession with customer success and a partnership mindset dedicated to achieving unparalleled success in the digital landscape.


Role Description

This is a full-time on-site Java Spring Boot Lead role located in Bangalore. The Java Spring Boot Lead will collaborate with cross-functional teams and stakeholders to identify, design, and implement new features and functionality.


Fulltime role

Location: Bengaluru, India (full-time, onsite).

 

Skill Set Required: 5+ years of experience.

Very strong in core Java.

Sound knowledge of Spring Boot.

Hands-on experience in creating frameworks.

Cloud knowledge.

Good understanding of design patterns.

Must have worked on at least 2-3 projects.

 

Please share your updated resume and the details.

mgarg@#kognivera.com


Website : https://kognivera.com

Read more
one-to-one, one-to-many, and many-to-many


Agency job
via The Hub by Sridevi Viswanathan
Chennai
5 - 10 yrs
₹1L - ₹15L / yr
AWS CloudFormation
skill iconPython
PySpark
AWS Lambda

5-7 years of experience in Data Engineering, with solid experience in the design, development, and implementation of end-to-end data ingestion and data processing systems on the AWS platform.

2-3 years of experience in AWS Glue, Lambda, Appflow, EventBridge, Python, PySpark, Lake House, S3, Redshift, Postgres, API Gateway, CloudFormation, Kinesis, Athena, KMS, IAM.

Experience in modern data architecture, Lake House, Enterprise Data Lake, Data Warehouse, API interfaces, solution patterns, standards and optimizing data ingestion.

Experience in build of data pipelines from source systems like SAP Concur, Veeva Vault, Azure Cost, various social media platforms or similar source systems.

Expertise in analyzing source data and designing a robust and scalable data ingestion framework and pipelines adhering to client Enterprise Data Architecture guidelines.

Proficient in design and development of solutions for real-time (or near real time) stream data processing as well as batch processing on the AWS platform.

Work closely with business analysts, data architects, data engineers, and data analysts to ensure that the data ingestion solutions meet the needs of the business.

Troubleshoot and provide support for issues related to data quality and data ingestion solutions. This may involve debugging data pipeline processes, optimizing queries, or troubleshooting application performance issues.

Experience in working in Agile/Scrum methodologies, CI/CD tools and practices, coding standards, code reviews, source management (GITHUB), JIRA, JIRA Xray and Confluence.

Experience or exposure to design and development using Full Stack tools.

Strong analytical and problem-solving skills, excellent communication (written and oral), and interpersonal skills.

Bachelor's or master's degree in computer science or related field.
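As a rough illustration of the stream-processing side mentioned above (Lambda triggered by Kinesis), a minimal handler might decode the incoming records and aggregate them. The event shape follows how Kinesis delivers records to Lambda (base64-encoded under `Records[i].kinesis.data`); the JSON payload shape and the counting logic are assumptions for illustration only.

```python
import base64
import json


def handler(event, context):
    """Minimal Lambda handler for a Kinesis trigger.

    Decodes each base64-encoded record, parses it as JSON, and counts
    events per assumed "type" field.
    """
    counts = {}
    for record in event.get("Records", []):
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        counts[payload["type"]] = counts.get(payload["type"], 0) + 1
    return counts


if __name__ == "__main__":
    # Local smoke test with a hand-built event
    data = base64.b64encode(json.dumps({"type": "click"}).encode()).decode()
    event = {"Records": [{"kinesis": {"data": data}}, {"kinesis": {"data": data}}]}
    print(handler(event, None))  # {'click': 2}
```

A real pipeline would forward the aggregates to Redshift, S3, or another sink listed above instead of returning them.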

 

 

Read more
A messaging AI platform

A messaging AI platform

Agency job
via Merito by Jinita Sumaria
Pune
12 - 18 yrs
₹35L - ₹50L / yr
Software Development
Product Management
Team Management
Product development
skill iconPython
+2 more

About Company:


Our client is the industry-leading provider of CRM messaging solutions. As a forward-thinking global company, it continues to innovate and develop cutting-edge solutions that redefine how businesses digitally communicate with their customers. It works with 2500 customers across 190 countries with customers ranging from SMBs to large global enterprises.


About the role:


The Director of Product Management is responsible for overseeing and implementing product development policies, objectives, and initiatives as well as leading research for new products, product enhancements, and product design.


Roles & responsibilities:


- Become a product expert on all company's solutions


- Build and own the product roadmap and timeline.


- Develop and execute a go-to-market strategy that addresses product, pricing, messaging, competitive positioning, product launch and promotion.


- Work with Development leaders to oversee development resources, including managing ROI, timelines, and deliverables.


- Work with the leadership team on driving product strategy, in both new and existing products, to increase overall market share, revenue and customer loyalty.


- Implement and communicate the strategic and technical direction for the department.


- Engage directly with customers to understand market needs and product requirements.


- Develop/implement a suite of Key Performance Indicators (KPI's) to measure product performance including profitability, customer satisfaction metrics, compliance, and delivery efficiency.


- Define and measure value of software solutions to establish and quantify customer ROI.


- Represent the company by visiting customers to solicit feedback on company products and services.


- Monitors and reports progress of projects within agreed upon timeframes.


- Write very high quality BRD, PRDs, Epics and User Stories


- Creates functional strategies and specific objectives as well as develops budgets, policies, and procedures.


- Creates and analyzes financial proposals related to product development and provides supporting content showing allocation of funds to execute these plans.


- Write status updates, iteration delivery and release notes as necessary


- Display a high level of critical thinking in cross-functional process analysis and problem resolution for new and existing products.


- Develop & conduct specialized training on new products launched and raise awareness & application of relevant subject matter.


- Monitor internal processes for efficiency and validity pre & post product launch/changes.


Requirements:


- Excellent communication skills, both verbal and in writing.


- Strong customer focus paired with exceptional presentation skills.


- Skilled at data analytics focused on identifying opportunities, driving insights, and measuring value.


- Strong problem-solving skills.


- Ability to work effectively in a diverse team environment.


- Proven strategic and tactical leadership, motivation, and decision-making skills


Required Education & Experience:


- Bachelor's Degree in Technology related field.


- Experience in working with a geographically diverse development team.


- Strong technical background with the ability to understand and discuss technical concepts.


- Proven experience in Software Development and Product Management.


- 12+ years of experience leading product teams in a fast-paced business environment as Product Leader on Software Platform or SaaS solution.


- Proven ability to lead and influence cross-functional teams.


- Demonstrated success in delivering high-impact products.


Preferred Qualifications


- Transition from software development role to product management.


- Experience building messaging solutions or marketing or support solutions.


- Experience with agile development methodologies.


- Familiarity with design thinking principles.


- Knowledge of relevant technologies and industry trends.


- Strong project management skills.

Read more
VoerEir India

at VoerEir India

2 recruiters
Pooja Jaiswal
Posted by Pooja Jaiswal
Noida
3 - 5 yrs
₹13L - ₹15L / yr
skill iconPython
skill iconDjango
skill iconFlask
Linux/Unix
Computer Networking
+3 more

Roles and Responsibilities

• Ability to create solution prototype and conduct proof of concept of new tools.

• Work in research and understanding of new tools and areas.

• Clearly articulate pros and cons of various technologies/platforms and perform

detailed analysis of business problems and technical environments to derive a

solution.

• Optimisation of the application for maximum speed and scalability.

• Work on feature development and bug fixing.

Technical skills

• Must have knowledge of the networking in Linux, and basics of computer networks in

general.

• Must have intermediate/advanced knowledge of one programming language,

preferably Python.

• Must have experience of writing shell scripts and configuration files.

• Should be proficient in bash.

• Should have excellent Linux administration capabilities.

• Working experience of SCM. Git is preferred.

• Knowledge of build and CI-CD tools, like Jenkins, Bamboo etc is a plus.

• Understanding of Architecture of OpenStack/Kubernetes is a plus.

• Code contributed to OpenStack/Kubernetes community will be a plus.

• Data Center network troubleshooting will be a plus.

• Understanding of NFV and SDN domain will be a plus.
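As a small illustration of the networking-plus-Python combination required above, Python's standard `ipaddress` module handles subnet-membership checks of the kind that come up in data center and cloud network troubleshooting. The addresses here are arbitrary examples.

```python
import ipaddress


def in_subnet(ip, cidr):
    """Return True if the IP address falls inside the CIDR block."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)


# e.g. checking whether a host belongs to a given network range
print(in_subnet("10.0.1.17", "10.0.0.0/16"))    # True
print(in_subnet("192.168.1.5", "10.0.0.0/16"))  # False
```

The same module computes network/broadcast addresses and iterates hosts, which is handy when scripting checks in bash-driven pipelines.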

Soft skills

• Excellent verbal and written communications skills.

• Highly driven, positive attitude, team player, self-learning, self-motivating, and flexible

• Strong customer focus; decent networking and relationship management

• Flair for creativity and innovation

• Strategic thinking

This is an individual contributor role and will require client interaction on the technical side.


Must have Skill - Linux, Networking, Python, Cloud

Additional Skills-OpenStack, Kubernetes, Shell, Java, Development


Read more
Beauto Systems Private Limited
Beauto Systems
Posted by Beauto Systems
Pune
5 - 10 yrs
₹1L - ₹8L / yr
skill iconJava
AWS CloudFormation
skill iconDocker
Solution architecture
Java Architecture for XML Binding (JAXB)

Position: Solution Architect (Java)

Company Profile: Beauto Systems Pvt. Ltd.

Address :

Office 203, Pride Gateway Tower, Veerbhadra Nagar, Baner, Pune, Maharashtra 411045

Website: https://www.beautosys.com/ 

Brief: Beauto Systems is guided by eminent leadership with an excellent technical track record, yet keen to learn and deliver. Our product R&D and Innovation lab is driven by 15+ innovators who stay abreast of, and are constantly learning, the most contemporary technologies.

These values bind our employees, customers, and future businesses to keep trust and faith in the services we provide, covering software and hardware development as well as mechanical and electronics engineering, which translate into end-to-end design, development, and maintenance of products and services that enable business growth and value creation. We think business value-add (as a first step) to create, design, and deliver our solutions!

Key Responsibilities:

1. Solution Design: Collaborate with stakeholders to understand business requirements and design scalable and efficient solutions that align with the company's technology strategy.

2. Technology Evaluation: Stay up-to-date with the latest industry trends and emerging technologies, and evaluate their applicability to our projects.

3. Architecture Documentation: Create and maintain detailed architecture documentation, including system diagrams, design patterns, and technical specifications.

4. Technical Leadership: Provide technical leadership and guidance to development teams, ensuring adherence to best practices and architectural standards.

5. Prototyping: Develop prototypes and proof-of-concepts to validate technical feasibility and demonstrate proposed solutions.

6. Performance Optimization: Identify performance bottlenecks and recommend optimizations to enhance system performance.

7. Risk Assessment: Assess potential risks and challenges in proposed solutions and develop mitigation strategies.

8. Collaboration: Collaborate with cross-functional teams, including developers, product managers, and quality assurance, to ensure successful project delivery.

9. Mentoring: Mentor junior architects and developers, sharing your knowledge and expertise.

10. Compliance: Ensure solutions comply with security, regulatory, and compliance requirements.

Qualifications:


Bachelor's degree in Computer Science or a related field (Master's preferred).

5+ years of experience in the information technology industry.

Strong proficiency in Java and experience with other programming languages is a plus.

Excellent problem-solving and analytical skills.

In-depth knowledge of software architecture principles and design patterns.

Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).

Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).

Knowledge of microservices architecture and RESTful API design.

Excellent communication and interpersonal skills.

Ability to work effectively in a collaborative team environment.


Read more
Snappymob

at Snappymob

3 recruiters
Nirbhay Shah
Posted by Nirbhay Shah
Remote, Malaysia
2 - 4 yrs
₹12L - ₹15L / yr
skill iconAmazon Web Services (AWS)
AWS CloudFormation
DevOps
Continuous Integration
skill iconDocker
+3 more

About the Role

A DevOps Engineer at Snappymob configures, monitors, and manages the cloud management service. You should be able to identify optimal cloud-based solutions for our clients and maintain cloud infrastructures in accordance with best practices and company security policies.


Responsibilities

  • Configures the AWS cloud management service and uses tools to monitor and manage their services carefully.
  • Manages the full AWS Lifecycle, Provisioning, Automation, and Security.
  • Works with customers, solution architects, and product teams to drive migrations.
  • Assists in the execution of migration discovery workshops with large enterprise customers.
  • Maintains data integrity, data recovery, and access control while using the AWS application platform.


Requirements

  • More than 2 years of working experience in DevOps and cloud management.
  • Strong proficiency in AWS services and migrations.
  • Ability to think critically, analyze and break down problems into manageable components.
  • Ability to communicate and work well with others.


Advantages

  • Experience with AWS cloud migrations.
  • Possesses AWS certifications in solution architecting or DevOps.
  • Experience with managing CI/CD and deployment of services to the cloud.
  • Experience with containerization and orchestration solutions (eg: Docker and Kubernetes)
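As a minimal sketch of the containerization advantage listed above, a Dockerfile for a small Node.js service might look like this; the file names, base image version, and port are assumptions for illustration.

```dockerfile
# Illustrative container build for a Node.js service
FROM node:20-alpine
WORKDIR /app

# Install production dependencies first so this layer is cached
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and declare the serving port
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Copying the manifests before the source keeps dependency installation cached across rebuilds, which matters once the image is rebuilt on every CI/CD run.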


As one of Malaysia's top app development companies, Snappymob helps top brands in Malaysia and around the world turn their ideas into reality by creating impactful digital products. Our clients span from startups to multinationals across many industries including finance, media, healthcare, energy, and education.


By pairing awesome user experience design and solid software engineering, we strive to help our clients achieve success – while providing them with honest, no-nonsense advice.


Visit us at snappymob.com to find out more about what we do.


Read more
Porter.in

at Porter.in

1 recruiter
Agency job
via UPhill HR by Puneet Bansal
Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹12L / yr
skill iconDocker
skill iconAmazon Web Services (AWS)
Linux administration
Monitoring
AWS CloudFormation

Job Summary


A Cloud Production Support Engineer (PSE) is responsible for fulfilling day-to-day infrastructure and service requests from the application teams across AWS, CI/CD solutions, and observability tools. You will be expected to handle production issues in collaboration with the cloud infrastructure and application teams.


Responsibilities and Duties


  • Troubleshoot production Issues: When technical issues with the cloud infrastructure components arise, PSE must act quickly to analyse the available data and find the root cause of the problem. They may then develop a solution or escalate the problem to other engineering team members while providing stakeholders with progress updates.
  • Infrastructure provisioning and modification: Application teams may request to create new infrastructure or modify the existing ones in AWS based on their requirements via the ticketing tool. PSE should ensure that the required data/info is available on the ticket and provide a resolution based on the given SLA.
  • Alert Management: Alerts from the observability tools will be received on multiple channels according to the notification settings. PSEs are expected to acknowledge the alerts, troubleshoot the issue, close the alert based on the given SLA, or escalate to the cloud infra/DevOps team for further diagnosis.
  • Onboarding, Off-boarding and access management: Whenever an employee joins or leaves the organization, you will receive an onboarding or offboarding request.
  • Prepare Technical Documentation: PSEs must prepare documentation when logging product issues, as they must note all details, including their observations, diagnoses, and action steps. Other everyday tasks include weekly reports summarising production performance, upgrade release notes, and troubleshooting guides.
  • Product Improvements: Since PSEs have good exposure to the product issues, they should work closely with the PMs+EMs, pass the feedback on the product, and get the improvements/fixes included in the product roadmap.
  • Adherence to SLA and timelines: PSEs should always adhere to the timelines shared with other teams for closure of fixes and deliver outcomes as per the SLA guidance agreed with business teams
  • Reporting: Report & track weekly regarding SLA metrics, tickets being worked and closed by PSEs/transferred tickets. Identify and devise how productivity can be captured at the individual level and report the same monthly.


Qualifications and Skills


  • Degree in Computer Science/Information Technology.
  • Two years or more experience in Cloud and system administration.
  • Experience troubleshooting in complex environments using monitoring tools.
  • Demonstrated experience with containerisation technologies (Docker, Kubernetes, etc.)
  • Hands-on experience with the most common AWS services.
Read more
[x]cube LABS

at [x]cube LABS

2 candid answers
1 video
Krishna kandregula
Posted by Krishna kandregula
Hyderabad
4 - 6 yrs
₹11L - ₹14L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
skill iconJenkins
+8 more

Job Responsibilities:

• Work on and deploy updates and fixes
• Provide Level 2 technical support
• Support implementation of fully automated CI/CD pipelines as per dev requirements
• Follow the escalation process through issue completion, including providing documentation after resolution
• Follow regular operations procedures and complete all assigned tasks during the shift
• Assist in root cause analysis of production issues and help write a report that includes details about the failure, the relevant log entries, and the likely root cause
• Set up CI/CD frameworks (Jenkins / Azure DevOps Server), containerization using Docker, etc.
• Implement continuous testing, code quality, and security using DevOps tooling
• Build a knowledge base by creating and updating documentation for support


Skills Required:

DevOps, Linux, AWS, Ansible, Jenkins, Git, Terraform, CI/CD, CloudFormation, TypeScript


Read more
ZeMoSo Technologies

at ZeMoSo Technologies

11 recruiters
HR Team
Posted by HR Team
Remote only
5 - 10 yrs
Best in industry
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+5 more

5 to 10 years of software development & coding experience

Experience with Infrastructure as Code development (automation, CI/CD); AWS CloudFormation, AWS CodeBuild, and CodeDeploy are must-haves.

Experience troubleshooting AWS policy or permission-related errors during resource deployments.

Programming experience; Python, PowerShell, and bash development experience preferred.

Experience with application build automation tools like Apache Maven, Jenkins, Concourse, and Git supporting continuous integration / continuous deployment (CI/CD); GitHub and GitHub Actions for deployments are must-have skills (Maven, Jenkins, etc. are nice to have).

Configuration management experience (Chef, Puppet, or Ansible).

Worked in a development shop or have hands-on SDLC experience.

Familiar with how to write software, test plans, automate and release using modern development methods

AWS certified at an appropriate level
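As a sketch of the GitHub Actions deployment requirement above, a minimal workflow might validate and deploy a CloudFormation template on every push to main. Stack and file names are illustrative, and AWS credential configuration (e.g. an OIDC or secrets-based step) is deliberately omitted.

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Credential setup step omitted for brevity
      - name: Validate CloudFormation template
        run: aws cloudformation validate-template --template-body file://template.yaml
      - name: Deploy stack
        run: |
          aws cloudformation deploy \
            --template-file template.yaml \
            --stack-name demo-stack
```

`aws cloudformation deploy` creates the stack if it does not exist and updates it via a change set if it does, which keeps the pipeline idempotent across runs.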

Read more