
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company headquartered in San Jose, California. It provides full-stack managed services to support business applications and data infrastructure, delivering data solutions and services on bare metal, on-premises, and all major cloud platforms. Our engagement model is built on standard DevOps practices and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve the customer experience. We offer you the opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-generation data evolution challenges, and who are willing to apply their experience in infrastructure services, software as a service, and cloud services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience implementing continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc.
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl
· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm, Mesos, Kubernetes, OpenStack)
· Exposure to relational database technologies (MySQL, Postgres, Oracle) or any NoSQL database
· Experience with open-source tools for logging, monitoring, search, caching, etc.
· Professional certification in AWS or another cloud platform is preferred
· Excellent problem-solving and troubleshooting skills
· Must have good written and verbal communication skills
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office hours to accommodate multinational clients' time zones.
Will be involved in solution design from the conceptual stages through the development cycle and deployment.
Be involved in development, operations, and support for internal teams.
Improve infrastructure uptime, performance, resilience, and reliability through automation.
Willing to learn new technologies and work on research-oriented projects.
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking, ability to work in a fast-paced environment with creativity and brainstorming
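Uptime and reliability targets like the ones above are commonly tracked as SLO error budgets in the SRE model; here is a minimal Python sketch of the arithmetic, with all figures purely illustrative:

```python
# Minimal SLO error-budget calculator: given a monthly availability
# target and observed downtime, report how much budget remains.
# All numbers here are illustrative, not from any specific system.

def error_budget_remaining(slo: float, downtime_minutes: float,
                           period_minutes: float = 30 * 24 * 60) -> float:
    """Return the fraction of the error budget still unspent."""
    budget = (1.0 - slo) * period_minutes      # allowed downtime
    return 1.0 - (downtime_minutes / budget)

# A 99.9% SLO over a 30-day month allows ~43.2 minutes of downtime.
remaining = error_budget_remaining(slo=0.999, downtime_minutes=21.6)
print(f"{remaining:.0%} of the error budget remains")  # → 50%
```

Automation that pages on-call engineers before the budget is exhausted, rather than after an outage, is the usual motivation for tracking this number.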
www.banyandata.com

About Banyan Data Services
We're hell-bent on making this the most enjoyable job you've ever had. Send your resume to [email protected]
Positive Vibe
We foster a positive leadership culture and ensure that employees at all levels feel comfortable collaborating with one another.
Grow & Learn
We groom our employees by instilling a startup culture and providing tech-savvy mentors and a passionate team to drive the highest quality of work.
Work Environment
Employee success and happiness are top concerns. No matter their level, employees feel valued in all aspects of their lives, including both their professional and personal aspirations.
Diversity
We strive to create a diverse and inclusive workplace in which everyone, regardless of who they are or what they do for the company, feels equally involved and supported in all aspects of the workplace.
Title: Azure Cloud Developer/Engineer
Exp: 5+ yrs
Location: T-Hub, Hyderabad
Work from office (5 days/week)
Interview rounds: 2-3
Excellent communication skills
Immediate Joiner
Job Description
Position Overview:
We are seeking a highly skilled Azure Cloud Developer/Engineer with experience in designing, developing, and managing cloud infrastructure solutions. The ideal candidate should have a strong background in Azure infrastructure deployment using Terraform, Kubernetes (AKS) with advanced networking, and Helm Charts for application management. Experience with AWS is a plus. This role requires hands-on expertise in deploying scalable, secure, and highly available cloud solutions with strong networking capabilities.
Key Responsibilities:
- Deploy and manage Azure infrastructure using Terraform through CI/CD pipelines.
- Design, deploy, and manage Azure Kubernetes Service (AKS) with advanced networking features, including on-premise connectivity.
- Create and manage Helm Charts, ensuring best practices for configuration, templating, and application lifecycle management.
- Collaborate with development, operations, and security teams to ensure optimal cloud infrastructure architecture.
- Implement high-level networking solutions including Azure Private Link, VNET Peering, ExpressRoute, Application Gateway, and Web Application Firewall (WAF).
- Monitor and optimize cloud environments for performance, cost, scalability, and security using tools like Azure Cost Management, Prometheus, Grafana, and Azure Monitor.
- Develop CI/CD pipelines for automated deployments using Azure DevOps, GitHub Actions, or Jenkins, integrating Terraform for infrastructure automation.
- Implement security best practices, including Azure Security Center, Azure Policy, and Zero Trust Architecture.
- Troubleshoot and resolve issues in the cloud environment using Azure Service Health, Log Analytics, and Azure Sentinel.
- Ensure compliance with industry standards (e.g., CIS, NIST, ISO 27001) and organizational security policies.
- Work with Azure Key Vault for secrets and certificate management.
- Explore multi-cloud strategies, integrating AWS services where necessary.
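Several of the responsibilities above center on Helm Charts. The core templating idea can be shown with a toy Python sketch; string.Template stands in for Helm's Go templating, and every name and value below is made up:

```python
# Toy illustration of Helm-style value substitution: a chart template
# plus a values file produce a rendered manifest. Python's string.Template
# is only a stand-in for Go templating; the release name, image, and
# replica count are invented for the example.
from string import Template

manifest_template = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $release-web
spec:
  replicas: $replicas
  template:
    spec:
      containers:
        - name: web
          image: $image
""")

values = {"release": "demo", "replicas": 3, "image": "nginx:1.25"}
rendered = manifest_template.substitute(values)
print(rendered)
```

In a real chart the same separation holds: templates live under `templates/`, values in `values.yaml`, and `helm install` performs the substitution and lifecycle management.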
Key Skills Required:
- Azure Cloud Infrastructure Deployment: Expertise in provisioning and managing Azure resources using Terraform within CI/CD pipelines.
- Kubernetes (AKS) with Advanced Networking: Experience in designing AKS clusters with private networking, hybrid connectivity (ExpressRoute, VPN), and security best practices.
- Infrastructure as Code (Terraform, Azure Bicep): Deep understanding of defining and maintaining cloud infrastructure through code.
- Helm Charts: Strong expertise in creating, deploying, and managing Helm-based Kubernetes application deployments.
- Networking & Security: In-depth knowledge of VNET Peering, Private Link, ExpressRoute, Application Gateway, WAF, and hybrid networking.
- CI/CD Pipelines: Experience with building and managing Azure DevOps, GitHub Actions, or Jenkins pipelines for infrastructure and application deployment.
- Monitoring & Logging: Experience with Prometheus, Grafana, Azure Monitor, Log Analytics, and Azure Sentinel.
- Scripting & Automation: Proficiency in Bash, PowerShell, or Python.
- Cost Optimization (FinOps): Strong knowledge of Azure Cost Management and cloud financial governance.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in cloud engineering, preferably with Azure-focused infrastructure deployment and Kubernetes networking.
- Strong understanding of containerization, orchestration, and microservices architecture.
Certifications:
Required:
- Microsoft Certified: Azure Solutions Architect Expert
- Microsoft Certified: Azure DevOps Engineer Expert
Nice to Have (AWS Experience):
- AWS Certified Solutions Architect – Associate or Professional
- AWS Certified DevOps Engineer – Professional
Nice to Have Skills:
- Experience with multi-cloud environments (Azure & AWS).
- Familiarity with container security tools (Aqua Security, Prisma Cloud).
- Experience with GitOps methodologies using tools like ArgoCD or Flux.
- Understanding of serverless computing and event-driven architectures (Azure Functions, Event Grid, Logic Apps).
Benefits
Why Join Us?
- Competitive salary with performance-based incentives.
- Opportunities for professional certifications (e.g., AWS, Kubernetes, Terraform).
- Access to training programs, workshops, and learning resources.
- Comprehensive health insurance coverage for employees and their families.
- Wellness programs and mental health support.
- Hands-on experience with large-scale, innovative cloud solutions.
- Opportunities to work with modern tools and technologies.
- Inclusive, supportive, and team-oriented environment.
- Opportunities to collaborate with global clients and cross-functional teams.
- Regular performance reviews with rewards for outstanding contributions.
- Employee appreciation events and programs.
We are looking for an experienced Sr. DevOps Consultant Engineer to join our team. The ideal candidate should have at least 5 years of experience.
We are retained by a promising Silicon Valley startup backed by a Fortune 50 firm, with veterans from firms such as Zscaler, Salesforce, and Oracle. The founding team has been part of three unicorns and two successful IPOs, and the company is well funded by Dell Technologies and Westwave Capital. It has been widely recognized as an industry innovator in the data privacy and security space, and is being built by proven cybersecurity executives who have successfully built and scaled high-growth security companies and led privacy programs.
Responsibilities:
- Develop and maintain infrastructure as code using tools like Terraform, CloudFormation, and Ansible
- Manage and maintain Kubernetes clusters on EKS and EC2 instances
- Implement and maintain automated CI/CD pipelines for microservices
- Optimize AWS costs by identifying cost-saving opportunities and implementing cost-effective solutions
- Implement best security practices for microservices, including vulnerability assessments, SOC2 compliance, and network security
- Monitor the performance and availability of our cloud infrastructure using observability tools such as Prometheus, Grafana, and Elasticsearch
- Implement backup and disaster recovery solutions for our microservices and databases
- Stay up to date with the latest AWS services and technologies and provide recommendations for improving our cloud infrastructure
- Collaborate with cross-functional teams, including developers, and product managers, to ensure the smooth operation of our cloud infrastructure
- Experience with large scale system design and scaling services is highly desirable
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- At least 5 years of experience in AWS DevOps and infrastructure engineering
- Expertise in Kubernetes management, Docker, EKS, EC2, Queues, Python Threads, Celery Optimization, Load balancers, AWS cost optimizations, Elasticsearch, Container management, and observability best practices
- Experience with SOC2 compliance and vulnerability assessment best practices for microservices
- Familiarity with AWS services such as S3, RDS, Lambda, and CloudFront
- Strong scripting skills in languages like Python, Bash, and Go
- Excellent communication skills and the ability to work in a collaborative team environment
- Experience with agile development methodologies and DevOps practices
- AWS certification (e.g. AWS Certified DevOps Engineer, AWS Certified Solutions Architect) is a plus.
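The queues, Python threads, and Celery items above all reduce to the producer/worker pattern. A minimal in-process Python sketch follows; a real deployment would use Celery with a broker such as Redis or RabbitMQ and separate worker processes, so this only shows the control flow:

```python
# Minimal producer/worker pattern using stdlib threads and a queue,
# as a stand-in for a Celery-style task queue. In production the queue
# would be an external broker and the workers separate processes.
import queue
import threading

tasks: queue.Queue = queue.Queue()
results = []
lock = threading.Lock()

def worker() -> None:
    while True:
        item = tasks.get()
        if item is None:                  # sentinel: shut this worker down
            tasks.task_done()
            break
        with lock:
            results.append(item * item)   # the "task": square a number
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for n in range(10):                       # enqueue ten tasks
    tasks.put(n)
for _ in threads:                         # one sentinel per worker
    tasks.put(None)

tasks.join()
for t in threads:
    t.join()

print(sorted(results))  # → [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The sentinel-per-worker shutdown and the lock around shared state are the same concerns Celery handles for you at scale.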
Notice period: Can join within a month
Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn't matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It's a target for both cybercriminals and regulators, but securing it is incredibly difficult. It's the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter, a fast learner, with a desire to work in a startup environment
- Experience working with Public Clouds like AWS
- Operating and Monitoring cloud infrastructure on AWS.
- Primary focus on building, implementing, and managing operational support
- Design, develop, and troubleshoot automation scripts (configuration/infrastructure as code or others) for managing infrastructure.
- Expert in at least one scripting language – Python, shell, etc.
- Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc.
- Handling load monitoring, capacity planning, and services monitoring.
- Proven experience with CI/CD pipelines and handling database upgrade-related issues.
- Good Understanding and experience in working with Containerized environments like Kubernetes and Datastores like Cassandra, Elasticsearch, MongoDB, etc
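Monitoring duties like those above usually come down to probe loops with retry and backoff. A small Python sketch of the pattern follows; the probe here is a stub, where a real check would hit an HTTP endpoint or a port:

```python
# Sketch of a service health check with exponential backoff, the kind
# of retry loop monitoring scripts typically use. `probe` is a stub
# standing in for a real HTTP or TCP check.
import time

def check_with_backoff(probe, retries: int = 4, base_delay: float = 0.01):
    """Call `probe` until it returns True, doubling the delay each try.

    Returns the number of attempts used, or raises after `retries`."""
    delay = base_delay
    for attempt in range(1, retries + 1):
        if probe():
            return attempt
        time.sleep(delay)
        delay *= 2                 # exponential backoff
    raise RuntimeError(f"service unhealthy after {retries} attempts")

# Stub probe that fails twice, then succeeds.
calls = {"n": 0}
def flaky_probe() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

print(check_with_backoff(flaky_probe))  # → 3
```

Adding jitter to the delay is a common refinement so that many recovering clients do not retry in lockstep.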
DevOps Engineer
KNOLSKAPE is looking for a DevOps Engineer to help us build Educational platforms and products that make learning experiential for leaders of the world.
DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background in working with cloud technologies, setting up efficient deployment processes, and are motivated to work with diverse and talented teams, we’d like to meet you.
Ultimately, you will execute and automate operational processes fast, accurately, and securely.
Skills and Experience
- 2+ years of experience building infrastructure with cloud providers (AWS/Azure/GCP)
- Build and deployment management (GitLab).
- Experience writing automation scripts using Shell, Python, and Terraform.
- Good experience building YAML-based pipelines in the GitLab environment.
- System administration skill set.
- Docker/Kubernetes container infrastructure and orchestration
- Deploying/operating Node.js/PHP/LAMP framework-based clusters and their infrastructure.
- Monitoring, metrics collection, and distributed tracing
- "Infrastructure as code" – experience with Terraform preferred.
- Strong AWS Deployment Experience
- Provide system-level technical support
- Desire to learn new technologies while supporting existing ones
Roles and Responsibilities
- End-to-end building of CI/CD pipelines using tools like Jenkins and Jenkins Pipelines
- Build CI/CD pipelines to orchestrate provisioning and deployment of large-scale systems
- Develop and implement instrumentation for monitoring the health and availability of services, including fault detection, alerting, triage, and recovery (automated and manual)
- Develop, improve, and thoroughly document operational practices and procedures.
- Perform tasks related to securing the products, tools, and processes you are responsible for, keeping our infrastructure secure.
- Agile software development practices
- Understand IT processes, including architecture, design, implementation, and operations
- Open Source development experience
- Self-motivated, able and willing to help where help is needed
- Able to build relationships, be culturally sensitive, have goal alignment, have learning agility
Location: Bangalore
About KNOLSKAPE
KNOLSKAPE is an end-to-end learning and assessment platform for accelerated employee development. Our core belief is that desired business outcomes are achieved best when learning needs are aligned with business requirements, but traditional methodologies for capability development require a new, more updated approach. Keeping with this philosophy, we offer engaging, immersive, and experiential learning and assessment solutions - strategy cascading, business acumen, change management, leadership pipeline, digital capabilities, and talent assessments. Leveraging a blended omnichannel delivery model, KNOLSKAPE offers instructor-led classroom sessions, live virtual sessions, and self-paced courses to suit every learning need.
More than 300 clients in 25 countries have benefited from KNOLSKAPE's award-winning experiential solutions. A 120+ strong team based out of offices in Singapore, India, Malaysia, and the USA serves a rapidly growing global client base across industries such as banking and finance, consulting, IT, FMCG, retail, manufacturing, infrastructure, pharmaceuticals, engineering, auto, government and academia.
KNOLSKAPE is a global Top 20 gamification company, recipient of numerous Brandon Hall awards, and has been recognized as a company to watch for in the Talent Management Space, by Frost & Sullivan, and as a disruptor in the learning space, by Bersin by Deloitte.
We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The candidate will be responsible for automating the deployment of cloud infrastructure and services to support application development and hosting (architecting, engineering, deploying, and operationally managing the underlying logical and physical cloud computing infrastructure).
Location: Bangalore
Reporting Manager: VP, Engineering
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless, microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Responsible for deploying highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning; someone with a knack for benchmarking and optimization.
● Hiring, developing, and cultivating a high-performing, reliable cloud support team.
● Building and operating complex CI/CD pipelines at scale.
● Work with GCP services: Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, and networking in general.
● Collaborate with Product Management and Product Engineering teams to drive excellence in Google Cloud products and features.
● Ensure efficient data storage and processing functions in accordance with company security policies and best practices in cloud security.
● Ensure scaled database setup/monitoring with near-zero downtime.
Key Skills:
● Hands-on software development experience in Python, NodeJS, or Java
● 5+ years of Linux/Unix administration: monitoring, reliability, and security of Linux-based, online, high-traffic services and web/eCommerce properties
● 5+ years of production experience in large-scale cloud-based infrastructure (GCP preferred)
● Strong experience with log analysis and monitoring tools such as CloudWatch, Splunk, Dynatrace, Nagios, etc.
● Hands-on experience with AWS Cloud – EC2, S3 buckets, RDS
● Hands-on experience with Infrastructure as Code (e.g., CloudFormation, ARM, Terraform, Ansible, Chef, Puppet) and version control tools
● Hands-on experience with configuration management (Chef/Ansible)
● Experience in designing High Availability infrastructure and planning for Disaster Recovery solutions
Regards
Team Merito
Job Brief:
We are looking for candidates who have development experience and have delivered CI/CD-based projects. Candidates should have good hands-on experience with the Jenkins master/agent architecture and with AWS native services such as CodeCommit, CodeBuild, CodeDeploy, and CodePipeline, as well as experience setting up cross-platform CI/CD pipelines spanning different cloud platforms, or on-premises and cloud.
Job Location:
Pune.
Job Description:
- Hands-on with AWS (Amazon Web Services) cloud DevOps services and CloudFormation.
- Experience interacting with customers.
- Excellent communication.
- Hands-on experience creating and managing Jenkins jobs; Groovy scripting.
- Experience in setting up Cloud Agnostic and Cloud Native CI/CD Pipelines.
- Experience in Maven.
- Experience in scripting languages like Bash, Powershell, Python.
- Experience in automation tools like Terraform, Ansible, Chef, Puppet.
- Excellent troubleshooting skills.
- Experience in Docker and Kubernetes, including creating Dockerfiles.
- Hands on with version control systems like GitHub, Gitlab, TFS, BitBucket, etc.
- Job Title:- Backend/DevOps Engineer
- Job Location:- Opp. Sola over bridge, Ahmedabad
- Education:- B.E./ B. Tech./ M.E./ M. Tech/ MCA
- Number of Vacancy:- 03
- 5 Days working
- Notice Period:- Can join less than a month
- Job Timing:- 10am to 7:30pm.
About the Role
Are you a server-side developer with a keen interest in reliable solutions?
Is Python your language?
Do you want a challenging role that goes beyond backend development and includes infrastructure and operations problems?
If you answered yes to all of the above, you should join our fast growing team!
We are looking for 3 experienced Backend/DevOps Engineers who will focus on backend development in Python and will be working on reliability, efficiency and scalability of our systems. As a member of our small team you will have a lot of independence and responsibilities.
As Backend/DevOps Engineer you will...:-
- Design and maintain systems that are robust, flexible, and performant
- Be responsible for building complex, high-scale systems
- Prototype new gameplay ideas and concepts
- Develop server tools for game features and live operations
- Be one of three backend engineers on our small and fast moving team
- Work alongside our C++, Android, and iOS developers
- Contribute to ideas and design for new features
To be successful in this role, we'd expect you to…:-
- Have 3+ years of experience in Python development
- Be familiar with common database access patterns
- Have experience with designing systems and monitoring metrics, looking at graphs.
- Have knowledge of AWS, Kubernetes and Docker.
- Be able to work well in a remote development environment.
- Be able to communicate in English at a native speaking and writing level.
- Be responsible to your fellow remote team members.
- Be highly communicative and go out of your way to contribute to the team and help others
- Installation, configuration management, performance tuning, and monitoring of web, app, and database servers.
- Install, set up, and manage Java, PHP, and NodeJS stacks with software load balancers.
- Install, set up, and administer MySQL, Mongo, Elasticsearch & PostgreSQL DBs.
- Install, set up, and maintain monitoring solutions like Nagios and Zabbix.
- Design and implement DevOps processes for new projects following the department's objectives of automation.
- Collaborate on projects with development teams to provide recommendations, support and guidance.
- Work towards full automation, monitoring, virtualization and containerization.
- Create and maintain tools for deployment, monitoring and operations.
- Automate processes in a scalable, easy-to-understand way that can be detailed and understood through documentation.
- Develop and deploy software that will help drive improvements towards the availability, performance, efficiency, and security of services.
- Maintain 24/7 availability for responsible systems and be open to on-call rotation.
We are hiring candidates who are looking to work in a cloud environment and ready to learn and adapt to the evolving technologies.
Linux Administrator Roles & Responsibilities:
- 5+ or more years of professional experience with strong working expertise in Agile environments
- Deep knowledge in managing Linux servers.
- Managing Windows servers(Not Mandatory).
- Manage Web servers (Apache, Nginx).
- Manage Application servers.
- Strong background & experience in any one scripting language (Bash, Python)
- Manage firewall rules.
- Perform root cause analysis for production errors.
- Basic administration of MySQL, MSSQL.
- Ready to learn and adapt to business requirements.
- Manage information security controls with best practices and processes.
- Support business requirements beyond working hours.
- Ensure the highest uptime of the services.
- Monitoring resource usages.
Skills/Requirements
- Bachelor’s Degree or Diploma in Computer Science, Engineering, Software Engineering or a relevant field.
- Experience with Linux-based infrastructures, Linux/Unix administration.
- Knowledge of managing databases such as MySQL, MS SQL.
- Knowledge of scripting languages such as Python, Bash.
- Knowledge in open-source technologies and cloud services like AWS, Azure is a plus. Candidates willing to learn will be preferred.
- Experience in managing web applications.
- Problem-solving attitude.
- 5+ years experience in the IT industry.
DevOps Engineer
Skills:
- Building a scalable and highly available infrastructure for data science
- Knowledge of data science project workflows
- Hands-on experience with deployment patterns for online/offline predictions (server/serverless)
- Experience with either Terraform or Kubernetes
- Experience with ML deployment frameworks like Kubeflow, MLflow, SageMaker
- Working knowledge of Jenkins or a similar tool
Responsibilities:
- Own all the ML cloud infrastructure (AWS)
- Help build out an entire CI/CD ecosystem with auto-scaling
- Work with a testing engineer to design testing methodologies for ML APIs
- Research and implement new technologies
- Help with cost optimization of infrastructure
- Knowledge sharing
Nice to Have:
- Develop APIs for machine learning
- Can write Python servers for ML systems with API frameworks
- Understanding of task queue frameworks like Celery
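The "Python servers for ML systems" item can be illustrated with a stdlib-only sketch of a prediction endpoint. The "model" is a stand-in function and every name here is invented; a real service would use an API framework such as FastAPI or Flask and a trained model:

```python
# Stdlib-only sketch of an ML prediction endpoint. The model is a
# hard-coded stand-in (score = sum of features); in practice a real
# framework and a loaded model artifact would replace both.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stand-in model: the "score" is just the sum of the features."""
    return {"score": sum(features)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):       # keep the demo quiet
        pass

# Serve on an ephemeral port in a background thread and probe it once.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

req = urllib.request.Request(
    f"http://127.0.0.1:{port}/predict",
    data=json.dumps({"features": [1.0, 2.0, 3.5]}).encode(),
    headers={"Content-Type": "application/json"},
)
result = json.loads(urllib.request.urlopen(req).read())
print(result)  # → {'score': 6.5}
server.shutdown()
```

The same request/response contract carries over unchanged when the stub is swapped for a framework and the stand-in model for a Kubeflow- or SageMaker-deployed artifact.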










